Free speech, content moderation and the Fake News Bill

July 7, 2020

On June 30, the Federal Senate passed Bill nº 2630/2020, a proposal authored by Senator Alessandro Vieira (Cidadania Party) that earned the label “Fake News Bill”. The proposal, which now goes to the House of Deputies, was the subject of great public controversy and met with enormous resistance from human rights activists and internet scholars, who warned of the threats it posed to privacy and freedom of expression. Amid this debate, a broader discussion emerged about the role of platforms in combating disinformation and about the most appropriate regulatory instruments to do so without harming users’ rights.

In today’s post, we examine the Fake News Bill in the broader context of the regulatory debate on content moderation.

Provider liability in question – from the Communications Decency Act to the Internet Bill of Rights

To understand what is at stake in current conflicts over the regulation of content moderation, it is necessary to examine the facts and values that have shaped the current regime. If the 2010s were characterized by debates on how to regulate the internet, in the 1990s the question was whether it should be regulated at all. The liberal ideals of decentralization and the affirmation of individual freedoms were as much a part of the construction of the net as the raw material of the submarine cables that constituted its physical support. Consequently, a widespread perspective at the time framed the online environment as the ultimate realization of this imaginary of self-government, and therefore opposed any regulatory intervention. One of its greatest symbols was the Declaration of Independence of Cyberspace, a manifesto written by John Perry Barlow during the 1996 World Economic Forum, which denied both the legitimacy and the viability of any state regulation over digital territory.

Although this detail is often overlooked, the Declaration was written in response to the Telecommunications Act of 1996, a reform of US telecommunications legislation with decisive implications for the internet. One of the titles of this reform, the Communications Decency Act (CDA), garnered special attention for criminalizing the distribution of “obscene or indecent” material to minors, as well as harassment and threats in the digital environment. Among other things, it was an institutional reaction to concerns about the circulation of allegedly undesirable content on the web, especially pornography and pirated material – threats that were then treated much as fake news is today. Although its approval mobilized a great deal of public attention, the CDA quickly lost effectiveness: soon after its passage, the US Supreme Court found it incompatible with the freedom of expression protections of the First Amendment to the US Constitution and declared it unconstitutional.

There was, however, a section of the CDA that survived and that would be decisive in shaping the debate on content moderation in the 21st century. The so-called Section 230 contains two main provisions. First, a safe harbor clause (§230, c, 1), which establishes that service providers and users are not liable for content provided by third parties. As media scholar Tarleton Gillespie notes, the implication was that the role played by internet service providers with respect to content was similar to that of telephone companies: they provided the medium, but responsibility for what was said rested with users. In practice, this meant a “safe harbor” for these stakeholders, who would not be legally compelled to police users’ speech, given that such an obligation could encourage self-preservation-driven proactive censorship on the providers’ side.

This provision was followed by the Good Samaritan clause (§230, c, 2), which protected providers from liability for voluntarily moderating content they deemed questionable, provided that such measures were taken in good faith. According to legal scholar Kate Klonick, the US Supreme Court’s understanding was that such immunity would encourage providers to take a proactive stance against the circulation of harmful or obscene material in their media. In summary, the Section 230 paradigm gave providers the right to moderate content, but not the obligation to do so. This model significantly shaped the internet in the following decades, informing business decisions about content moderation practices on a global scale and influencing the debate in other countries, including Brazil.

In Brazil, the regulatory regime adopted on the matter addressed similar concerns about the protection of civil liberties in the online environment. The law dealing with the subject, the Internet Bill of Rights, resulted from a long process of collaborative and democratic construction marked by broad civic participation. From a political point of view, it responded to legislative proposals that threatened to restrict the exercise of rights on the internet, such as Bill nº 89/2003, which received the label “Digital AI-5” due to its approach of criminalizing various online conducts. It was also the product of a context in which the political importance of the internet became increasingly evident, as Edward Snowden revealed the massive surveillance systems employed by the NSA and other security agencies, and demonstrations such as Occupy Wall Street, the June Mass Protests and the Arab Spring showed the growing power of social media to mobilize people.

Approved in 2014, the Internet Bill of Rights was conceived to consolidate rights and affirm guiding principles of internet governance in the country, including privacy and freedom of expression. The main expression of the latter principle can be found in its Article 19, which establishes the regime known as Judicial notice and takedown. Inspired by the US experience, this model establishes that the obligation to remove content arises only when a court orders the platform to do so. It differs from the so-called notice and takedown model, often found in copyright rules, in which mere extrajudicial notification is sufficient to create the removal obligation, giving the notifying user a right to compensation when the content is not removed.

As we will see in the next section, the Judicial notice and takedown model would end up becoming central to contemporary debates about misinformation and freedom of expression on the internet.

Disinformation, freedom of speech and the search for regulatory instruments to address content moderation

In recent years, the moderation of online content has gradually moved into the spotlight of public debate as digital platforms have taken over spaces and functions previously reserved for traditional media in social life. This centrality has raised significant concerns among national and international actors. In a report, the UN special rapporteur on freedom of expression and opinion warns of the dangers of hate speech and misinformation circulating on platforms, as well as of growing private control over users’ freedom of expression. Similarly, a joint statement by several international organizations highlights private control as a threat to freedom of expression for the next decade and invites stakeholders to develop independent and transparent multistakeholder oversight mechanisms to address this challenge.

The search for solutions to these problems mobilizes national and international, state and non-state actors and resources. Such efforts rest on the phenomenon characterized by political scientist David Levi-Faur as “the proliferation of new regulatory technologies”, that is, the multiplication and diversification of instruments for modulating institutional behavior since the 1970s. Whereas before then, he says, the main regulatory mechanism was binding rules (hard law), the governance of different sectors is now increasingly conducted through technologies as diverse as codes of conduct, audit systems, lists of guiding principles, transparency rankings, corporate social responsibility standards, and naming and shaming procedures. This can be seen in regulatory sectors as diverse as the environment, consumer protection and antitrust.

Online content moderation is no different. One of the main consequences of the Good Samaritan clause adopted in Section 230 was the development of private content policies: community standards or guidelines. Gillespie, the aforementioned media scholar, notes that these documents differ from terms of use in that their effectiveness is not legal but pedagogical and cultural: they inform users about what content is encouraged, discouraged or prohibited on the platform; they assure advertisers that these are safe spaces for the circulation of advertising; and they signal to state regulators that platforms are acting to curb the circulation of harmful material in their media.

In addition to community guidelines, other regulatory instruments have emerged in recent years. In 2015, civil society organizations from different countries produced the Manila Principles on Intermediary Liability, recommendations aimed at guiding policy makers and private sector representatives in developing solutions that respect freedom of expression and favor an innovative environment. In 2018, the European Union announced its Code of Practice on Disinformation, the first normative standards of self-regulation aimed at tackling the problem. In the same year, academic and third sector entities from different countries produced the Santa Clara Principles on Transparency and Accountability in Content Moderation, an instrument that establishes good practices to guarantee minimum levels of transparency and accountability in the private sector.

It is in this complex ecosystem that recent clashes over misinformation, freedom of expression and intermediary liability have been unfolding in several countries: a scenario that combines growing concern among political actors about the circulation of undesirable speech, especially disinformation; a binding regulatory model based on the a priori non-liability of intermediaries for content generated by third parties; and the proliferation of regulatory technologies produced by the private and third sectors to face the challenges arising from these socio-technical transformations.

The Brazilian debate and the Fake News Bill

The demand for tougher measures against undesirable content has motivated the claim that non-binding instruments are ineffective in getting platforms to take meaningful action to curb its circulation. Until 2018, the main institutional expression of this rhetoric was the challenge to the constitutionality of the Judicial notice and takedown model established by the Internet Bill of Rights. With the elections of that year, however, the disinformation debate went through an accelerated process of ultrapoliticization. As a consequence, a series of new institutional responses emerged, driven by the search for solutions against fake news that were both quick and forceful. These responses included the Fake News Parliamentary Committee of Inquiry, several legislative proposals on disinformation, and an investigation into fake news conducted by the Supreme Court.

In 2020, the wave of disinformation about the new coronavirus pandemic opened a new chapter in the debate. Its main character is PL 2630/2020, known as the Fake News Bill and recently passed by the Federal Senate. The subject of enormous controversy among stakeholders, the bill faced immense resistance from experts and digital rights activists. To understand the causes of this reaction, it is necessary to examine both its text and its context.

The first aspect of the Fake News Bill that draws attention is how difficult it is to characterize. This is because the discussion of the proposal has been marked by a combination of limited transparency and frequent, substantial changes to its content. For example: on May 18, IRIS published a technical note on the bill. The document recognized the legitimacy of the intentions underlying it and highlighted the points considered positive, which consisted mainly of demands for greater transparency in content moderation practices. It also identified points that we found negative: insufficient social participation in its construction process, imprecise definitions that could give rise to abuses of power, the attribution of police functions to platforms, and the imposition of discourse-policing obligations on them.

About two weeks after the publication of the technical note, the substitute text prepared by the rapporteur was made public. This version fundamentally rewrote the proposal, adding dozens of articles to it. Since then, the bill has undergone repeated and notable changes, with different versions being published every few days. During this period, numerous privacy-violating provisions were added, such as obligations for the mass identification of users by platforms and requirements to present identity documents in order to access social networks and register prepaid telephone numbers. As a result, national and international criticism has multiplied: from Human Rights Watch to the UN rapporteur on freedom of expression and opinion, from the Global Network Initiative to the National Human Rights Council. The Direitos na Rede Coalition, a network of 39 civil society organizations, scholars and digital rights activists in Brazil, including IRIS, has repeatedly pointed out the risks and failures of the bill, as well as the fragility of its construction process.

With regard to freedom of expression, some versions of the bill imposed obligations on platforms to monitor content published by users and to display corrections alongside content labeled as disinformation by independent fact-checkers. These provisions were criticized by organized civil society, which shared the understanding that creating discourse-policing obligations would bring back the problems that the Judicial notice and takedown model was designed to prevent: platforms would be encouraged to use their automated content detection and evaluation systems to support far more removals. We argued that this would aggravate the problem of private control over discourse and could increase the number of unjust removals, given the well-known biases of these systems.

In the version approved by the Federal Senate, some of these criticisms were absorbed and many of the provisions in question were removed from the text. The debate, however, was conducted in a very precarious way, with supporters of the proposal mobilizing different strategies to stifle discussion of its risks. In the parliamentary debate, for example, a contemporary version of the expression “I have nothing to hide” was often invoked, according to which only producers of fake news would have anything to fear. Alternatively, parliamentary leaders acknowledged the problems with the proposal but deferred corrections to the House of Deputies, which will now review it.

From the beginning, civil society questioned the lack of social participation in the bill’s legislative process. The two other main laws underpinning internet governance in the country, the Internet Bill of Rights and the General Data Protection Law, were each debated for at least five years. Their construction processes included several public hearings and consultations with wide civic participation. The Fake News Bill, by contrast, did not even exist until a few months ago. While its defenders try to present it as a kind of “magic bullet” against disinformation, experts and activists remain concerned about the harmful effects that a law approved in such a hurry may have on the internet.

Conclusion – Between questions and answers

In the context of the increasing demand for responses to the social and political challenges brought about by content moderation on a massive scale, scholars and activists have produced a growing body of contributions based on scientific research and multistakeholder dialogue. This body of work points toward answers. How to fight disinformation without violating fundamental rights? Through a non-penalizing approach focused on transparency and information literacy. How to ensure accountability in content moderation? By compelling platforms to implement transparency and due process measures. Is the Judicial notice and takedown model sufficient to respond to society’s current demands? The general understanding shared by the parties involved is that it remains the best protection for freedom of expression that we have. Obviously, these responses are far from exhausting the debate, given that we are facing some of the biggest problems that regulators, researchers and human rights defenders will grapple with in the 21st century.

However, when the time and energy of stakeholders is devoted to dealing with proposals that stand in stark contrast to the knowledge accumulated in this discussion, the development of effective and informed responses is hampered. On these occasions, it is another set of questions, this time raised by the experts themselves, that ends up taking the spotlight: is it reasonable to approve rules with such devastating impacts on the internet without time for debate and in a context of ultrapoliticization? Is it acceptable to strip away users’ privacy in the name of combating disinformation? Is it appropriate to make registration on social networks conditional on presenting documents in a country where millions of people lack documentation, thereby widening the digital divide?

From the point of view of defending civil liberties and respecting democratic values, the answer to these other questions is much simpler: a resounding no.

Are you interested in the discussion on freedom of expression and misinformation in Brazil today? Check out our post about the Supreme Court’s investigation against fake news.

The views and opinions expressed in this article are those of the authors.

Written by

Director at the Institute for Research on Internet and Society. Gustavo holds a bachelor’s degree in Anthropology from the Federal University of Minas Gerais (UFMG), and is currently undertaking a Master’s degree in Communication of Science and Culture at the University of Campinas (Unicamp). Member of the Brazilian Internet Governance Research Network steering group. Alumnus of the Brazilian School of Internet Governance. His research and policy interests are anthropology of the State, privacy and data protection, science and technology studies, platform governance and encryption policy.
