DIGITAL PLATFORMS AND ONLINE CENSORSHIP

Status: ongoing
Period: October 2022 - present
Funding: in-kind
Funding organization: N/A
Person(s) in charge: Maurizio Borghi (Coordinator), Beatrice Balzola (Researcher)

Executive summary: 

The project explores the field of online content moderation by collecting a significant number of case studies representative of the legal conflicts between freedom of thought and policies against disinformation and hate speech online.

Background: 

The regulation of online content moderation is continuously evolving. In recent years, the European Union has undertaken numerous actions aimed at creating a uniform and comprehensive regulatory framework capable of balancing the right to freedom of expression with the fight against illegal activity.

The absence of a general monitoring obligation, first enshrined in Directive 2000/31/EC (the e-Commerce Directive) and later reaffirmed by the Digital Services Act (DSA), creates an interesting liability regime for online service providers. Such providers are exempt from liability for content hosted on their platforms unless, once aware of the presence of illegal content, they fail to act expeditiously to remove it. As stated in the DSA, the concept of ‘illegal content’ should broadly reflect the existing rules in the offline environment, therefore including illegal hate speech but leaving out disinformation, which is considered harmful although not illegal. Soft law instruments such as the Code of Conduct on Countering Illegal Hate Speech Online (2016) and the Code of Practice on Disinformation (2018), although non-binding agreements, have had an impact on the creation of monitoring standards for disinformation and hate speech, leading towards a culture of self-regulation.

On the one hand, this legal landscape reflects the European Union’s determination to regulate digital platforms. On the other hand, it raises concerns about the instruments used to do so. While online service providers have a duty of transparency in content moderation, users lack the tools to scrutinize providers’ activity, especially in the broader context of constitutionally protected rights such as freedom of expression and freedom of information. From this assumption stems the aim of the project’s researchers: to collect a number of case studies in order to identify patterns in the way platforms interact with users, moderate content, or potentially target specific topics.

Objectives: 

The research will focus on Italy and will take into consideration the main hosting platforms. An online questionnaire on “digital censorship” will be distributed to the public.

Join the project by filling out the questionnaire HERE; the responses will help identify representative case studies of legal conflicts.

Results: 

The results of the research will be consolidated in a scientific paper and will be presented in public workshops and seminars.

Related Publications: