So that misinformation does not become law: proposals against fake news
Written by
Lahis Kurtz
January 21, 2020
Spreading unfounded, untrue, or inaccurate information is not a new problem. Using information technology to produce, target, and propagate it strategically, for political ends, is. What has been proposed, in regulatory terms, to counter misinformation campaigns?
The price of information
The production of information, in the sense of which approaches and agendas are selected, how content is directed and who produces it, has been transformed by the advances in data processing brought by digital technology. In particular, data collection by applications widely used on personal devices has contributed to this new communicational reality. Along with information, however, disinformation and misinformation also circulate; these are now categorized into seven distinct types according to content, motivation, and propagation medium.
We have gained the power to communicate and publish information on the internet. But participating in social networks or signing in to applications means providing data, which is then used to build a profile. Targeting advertising by interest profile has become the business model of online applications. Data is the payment for the freedom to access and publish information.
Freedom of expression, always associated with the democratic capacity of a population, carries a counterpart of responsibility that is beginning to draw the attention of the legal world. What are the limits on the use of this data? With whom can it be shared? How can its use and sharing among commercial partners be audited? Given the transnational nature of these companies, which legal regime applies to these relations? What are the risks of profiling users in order to target the content shown to them?
What does misinformation have to do with profiling?
The scandal involving Cambridge Analytica, in particular, has recently pushed these issues into the media. The company bought from a researcher a data set covering millions of Facebook users, obtained through a personality-test questionnaire (which also gave access to the data of the friends of those who filled it out). Based on this data, it identified opinion groups and planted false news aimed at manipulating the outcome of the US elections.
In this case, there was the well-known practice of targeting tailor-made content at specific groups of people, a practice long used in public relations and marketing to increase sales and product popularity. The purpose it served is also no different from what many media outlets have pursued for decades: influencing public opinion about a given fact.
What is new is the combination of these two elements with the use of technology for profiling and for the systematic deployment of fake news.
What is conventionally called fake news is more than an everyday social phenomenon; it is no longer an accident, a journalistic mistake, or a bad joke. People have a common body of information at their fingertips, but the way it reaches each of them is increasingly personalized. Tailor-made misinformation has effects on the social imagination. Before Cambridge Analytica, there was the 2016 case of the hacker Andrés Sepúlveda, who says he used an army of bots to promote a presidential campaign in Mexico.
Because of this difference between careless use of social media and the strategic use of profiling to target fake content, authors and activists propose the term “misinformation campaigns” instead of the fake news jargon, which conveys the idea of something accidental.
The growth of this practice raises legal concerns about holding those who provide such an environment accountable for making it safe. There is no confidence that the “invisible hand” of fact-checking by independent third parties is sufficient. Several proposals for regulating fake news have been aired and are currently on the agenda of some legislative houses, including Brazil’s.
When the legislature encounters misinformation
A search of the Brazilian Chamber of Deputies website for bills on “fake news” returns twelve proposals. All date from after February 2018 (although older bills are attached to some of them). They address the influence of this type of content on political matters, especially election campaigns.
It is noteworthy that all of them seek to criminalize the practice. They also adopt a variety of terms to define it, including “offering”, “publishing”, “distributing”, “spreading”, “creating”, “disseminating”, “sharing”, “participating in production and dissemination tasks”, “propagating”, and “transmitting” untrue news. The way these bills propose to address such behavior runs into problems on three fronts: the kind of legal solution they seek, what they want to fight, and whom they look to in order to prevent disinformation campaigns.
What if misinformation is a crime?
The first problem is precisely this: proposing to treat the matter in the criminal sphere. This contradicts the bills’ own justification that misinformation “leaves no trace”, which means it is easy for those who intentionally create and propagate misinformation to remain anonymous. It is true that there are similar initiatives around the world: in 2018, Malaysia passed the first law criminalizing fake news, with penalties of up to six years in prison. But this is not a recommended measure.
The UN, in the Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda, recognizes that criminalizing conduct such as defamation is unduly restrictive and should be abandoned. It also points out that general prohibitions on the dissemination of information based on vague and imprecise definitions should be abolished.
In May 2018, a different bill was introduced in the Federal Senate. It seeks to amend the Marco Civil da Internet to impose a daily fine on application providers that fail to comply with a court order to make false content unavailable. This bill may signal a change in the legislative treatment of the subject, but it is only one of the 13 currently before Congress.
What can be considered misinformation?
The vagueness of the terms used in the proposals, and their focus on misinformation in election campaigns, also opens the door to censorship. This is because they propose the removal of content considered untrue or liable to harm the honor of others. Especially in the latter case, the assessment is subjective, and the short deadlines of the measures intended for inclusion in criminal and electoral laws may end up removing relevant issues from the public debate.
Such a proposal departs from the international tendency to guarantee freedom of expression and communication, under which the main obligations of governments are to refrain from interfering and to ensure an environment conducive to pluralistic debate, as stated in the EU document on combating online disinformation.
Who is responsible for a disinformation campaign?
Another weak point of the proposals is that they equate those who create the news with those who disseminate it. Offender status is conferred both on the malicious agent and on the victim, who shares the news within their own social circle.
The proposals frame only political candidates or defamed institutions as victims. This disregards the fact that the danger of a misinformation campaign lies in undermining transparency and informed public decision-making. It is also worth noting that two of the proposals are themselves motivated by misinformation about false news, as analyzed by an investigative journalism agency.
In other words, we are all victims in a scenario of widespread misinformation, which makes it questionable to address this kind of conduct at the criminal, or even at the reparatory civil, level. The damage of a misinformation campaign is felt even if there is a later retraction; a correction rarely has the same reach as the untrue news.
Good practices against misinformation
Seeking alternatives to ineffective criminal or repressive treatment, and aiming at prevention, a report published by Yale University hypothesizes legal tools whose focus is not on content but on the methods of information distribution. Content-boosting bots, for example, could be regulated so as to make it easier to identify their source and those responsible for them.
The Communication “Tackling Online Disinformation: A European Approach” outlines measures such as proactive identification of false news by platforms, broader accountability for those who carry misinformation, and the promotion of reliable content on the internet.
Specifically for social networks, it points to transparency in ad targeting, as well as blocking fake accounts, identifying bots, offering diverse content, and allowing data to be audited by academia and experts. There is also a focus on fact-checking, support for quality journalism, and educating users in “media literacy”.
All of these measures were proposed taking into account a public consultation on fake news and online disinformation, in which most participants considered the role of social networks in the matter to be of great importance.
In view of these recommendations, a National Congress committee issued an opinion on the bills pending in Brazil in the first half of 2018. It recommended further discussion in order to grasp the complexity of the issue. But, given the criminal nature of the bills in progress, it ultimately called for better definition of their elements, such as sentencing parameters and who can be held responsible.
Control over who accesses what
The possibility of censorship through government control of information flows always raises fears. Self-regulation, or multistakeholder regulation of the issue, may be an interesting way to avoid this.
However, the UN Joint Declaration does allow restrictions on freedom of expression. Content control applies to interests recognized under international law and to the prevention of hatred, incitement to violence, hostility, and discrimination. It even admits restrictions that go beyond a single jurisdiction and affect populations outside the territory in which they are imposed.
Blocking websites, IP addresses, protocols, and access ports is considered an extreme measure, to be adopted only when provided for by law and when it is the only option capable of protecting the public interest or a human right.
Users who share or promote content they did not author are protected, and under this model the platforms that disseminate it are likewise not held responsible. Misinformation campaigns, however, rely on a massive volume of automated messages from fake users.
Perhaps some formal control over the creation of automated accounts would be more effective than punishing those who receive and share news.
This brings us back to the data issue: none of Brazil’s anti-fake-news bills seeks to regulate data sharing among the different actors of the digital world.
In fact, there was an attempt to unify the discussions by proposing, without success, to attach the disinformation bills to what is now the General Data Protection Law while it was still pending. By treating the subjects as unrelated, the legislature shows that it is approaching the matter in a fragmented way, even though both problems are part of a larger context.
We need a comprehensive view of data
International initiatives that seek to combine legal instruments against misinformation with data protection restrict unauthorized use and impose transparency on the processing of user data. Meanwhile, the Brazilian legislature continues to treat the issues separately. Profiling is seen as a risk, and so is misinformation, but looking at only one of them addresses the other only incompletely.
The concerns of defining what a disinformation campaign is, who can be held responsible for it, how to prevent its spread, and how to ensure freedom of communication and expression on the internet are intrinsically linked to the regulation of the use of users’ data.
Those who provide their personal data and share content play a secondary role in a world of collecting, processing, and targeting content. We need to think about legal instruments that go beyond assigning blame and repairing the damage caused by information manipulation. Regulation needs to be mindful of the commercial, organized processing of data that drives disinformation campaigns. Only then can people exercise self-determination in the flow of information, instead of being carried away by it.
Learn more about the European Union’s influence on Brazilian data protection in our paper on the GDPR and its repercussions on Brazilian law.
Some discussions on this topic can be found in the panels of the VIII Internet Forum in Brazil, which took place in November 2018. If you are interested, watch here.
The views and opinions expressed in this article are those of the author.
Written by
Lahis Kurtz
Head of research and researcher at the Institute for Research on Internet and Society (IRIS), PhD candidate in the Law Programme of the Federal University of Minas Gerais (UFMG), Master of Laws in Information Society and Intellectual Property from the Federal University of Santa Catarina (UFSC), and Bachelor of Laws from the Federal University of Santa Maria (UFSM).
Member of the research groups Electronic Government, Digital Inclusion and Knowledge Society (Egov) and the Informational Law Research Center (NUDI), with ongoing research since 2010.
Interested in: information society, law and internet, electronic government, internet governance, access to information. Lawyer.