Fake news, bots, filters and other threats to Brazilian democracy
July 2, 2018
Polarization: apparently one of the most used words in political headlines in recent years. Whether it concerns Brexit, North America or the impeachment of former President Dilma Rousseff, societies have never talked so much about building physical and imaginary walls as they do today. Manichaean and extremist views are sustained by blocking out and losing interest in different opinions, with the help of the invisible filter of social media and of fake news on various digital platforms.
The first threat: The invisible filter
Since there is nothing more satisfying than being surrounded by things we like, algorithms filter what is relevant and what is not according to the user's reactions. Social networks have thus become extremely effective tools for pleasing users and showing them exactly what they want to see, which is what many researchers in this field call the "filter bubble".
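To make the mechanism concrete, the sketch below shows, in very simplified Python, how an engagement-based filter might rank a feed. The data structures and scoring are purely illustrative assumptions, not taken from any real platform:

```python
# Minimal sketch of engagement-based feed filtering, the mechanism the
# "filter bubble" term refers to. All names and scores are hypothetical;
# real platforms combine far more signals than a single reaction history.

from collections import defaultdict

def rank_feed(posts, reaction_history):
    """Order candidate posts by how often the user reacted to each topic before."""
    topic_affinity = defaultdict(int)
    for topic in reaction_history:  # topics of posts the user liked or shared
        topic_affinity[topic] += 1

    # Posts on topics the user never engaged with score 0 and sink to the bottom,
    # so dissenting or unfamiliar viewpoints gradually disappear from the feed.
    return sorted(posts, key=lambda post: topic_affinity[post["topic"]], reverse=True)

feed = rank_feed(
    posts=[{"id": 1, "topic": "candidate_A"}, {"id": 2, "topic": "candidate_B"}],
    reaction_history=["candidate_A", "candidate_A", "candidate_A"],
)
print([post["id"] for post in feed])  # [1, 2]: the opposing view is pushed down
```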
Popularized by a TED talk and the book "The Filter Bubble: What the Internet Is Hiding from You", by Eli Pariser, the term seeks to explain how algorithms shape what we see on the Internet based on our personal preferences. Many believed the mechanism would bring only benefits, since a platform that offers only what consumers want to see and care about makes their experience more comfortable and enjoyable.
However, the filtering also leads to the disappearance of opposing opinions and dissent. In an environment of political debate, especially this year with the Brazilian elections, this mechanism opens the door to the growth of polarized views and the deterioration of political debate. Tolerance levels are also believed to change: without contact with divergent or different opinions, human beings become susceptible to constructing patterns, stereotypes, prejudices and radical views.
According to Eduardo Magrani, professor at the Institute of Technology and Society of Rio de Janeiro (ITS Rio), in his work Democracia Conectada, the filtering mechanism in social media feeds can have two consequences for its users. The first is the false impression that all of the user's connections are seeing a given post, when in fact it will probably reach the feeds only of those who are interested in that particular point of view and who probably share the same opinion. The second false impression produced by this mechanism is the idea of consensus: because the filter bubble shares the content only with individuals who love and hate the same things as the user, there is a false impression that this view is the most popular and reasonable one, adopted by the vast majority of connections.
For Eli Pariser, the false impressions do not end there. For a democratic process to be built and for people to live in harmony, debate must take place, exposing the existing points of view and ideals, and the needs of others must be taken into account. Today, however, filter bubbles seem to prevent the exchange of divergent information and to encourage individualistic politics in which self-interest always comes first.
The second threat: Fake news
In addition to the fear of political debate in schools and the filter bubble, the actors that play the main role in the disinformation of citizens are fake news and bots. Social networking robots, also known as bots, seek to mimic human behavior by posting, tweeting, retweeting and transmitting information on social networks as determined by their operators and developers. Despite the magnitude of their possible impact on democratic processes, many of these bots do not rely on complex technologies; they are often built with accessible tools, and their dissemination is simple and inexpensive.
The similarity between bots and humans may be a key part of this manipulation tool. Content shared and published by bots usually includes fake news, attacks on political opponents and posts in favor of their candidates. According to some researchers, bot activity is often decisive and influential in the online community because most social networks rely on quantitative methods for measurement and statistics. Thus, when bots exist in large numbers and produce a large amount of information, they can direct the flow of information on social networks, since the algorithms on these platforms generally prioritize the quantitative element and do not distinguish between bots and humans. According to a study published in August 2017, about 20% of the Twitter interactions related to the impeachment of President Dilma Rousseff were carried out by bots.
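The sketch below illustrates, with made-up numbers in roughly the proportion reported by that study, how a purely quantitative trending metric can be tipped by bot accounts it cannot distinguish from humans. It is an illustration only, not a description of how any specific platform ranks content:

```python
# Minimal sketch of why sheer volume matters: a hypothetical trending metric
# that only counts interactions cannot tell coordinated bots from real users.

from collections import Counter

interactions = (
    [("impeachment_tag", "human") for _ in range(800)]
    + [("impeachment_tag", "bot") for _ in range(200)]  # ~20% bot-driven, illustrative
    + [("other_tag", "human") for _ in range(900)]
)

# A purely quantitative ranking: count interactions per topic, ignore who produced them.
volume = Counter(topic for topic, _source in interactions)
print(volume.most_common())  # [('impeachment_tag', 1000), ('other_tag', 900)]

# The bot share is enough to flip which topic "trends", even though humans alone
# produced more interactions on the other topic (900 vs. 800).
```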
But bots are not always bad: these tools are often used for online assistance and for monitoring public spending, for example. All of these bots, however, make it clear that they are bots, a disclosure that is absent in cases of political manipulation by bots.
Fighting against misinformation
It is important to analyze how legal systems deal with the use of fake news and of anonymous or fake social media profiles to influence political debate and spread hate speech, for example, as well as the mechanisms for removing irregular content from the Internet and how these can affect or limit the freedom of expression of citizens discussing political issues on digital platforms.
Fake news is a threat to democratic processes in Brazil and around the world; however, as IRIS stated in another post, the use of criminal law is also considered a major risk in this field, since such a solution can easily be distorted to suit the will of the government.
The best way to combat fake news in Brazil is still education. Through campaigns that raise awareness of the problem, raise critical standards and encourage fact-checking, source verification and assessment of the reliability of the news people consume, it is possible to reduce the impact of fake news.
There are a few ways to begin the discussion of how to fight this misleading news other than through regulation. The first, long-term approach is to increase awareness of the issue by improving national digital and media education, training people to identify fake content on networks. Vaza, Falsiane! is a good example of an initiative that aims to inform people and encourage critical thinking: a free online course against fake news aimed at the general public, especially teenagers, young adults and educators.
A second, more immediate form of combat attacks the effects of fake news. Examples of this type of initiative are the Brazilian organizations that have come together to create fact-checking institutions, several of them marked by strong female participation.
One of them is Aos Fatos, a member of the International Fact-Checking Network and an initiative of the journalist Tai Nalon, whose journalists follow statements by politicians and national authorities from various parties on a daily basis to see whether they are telling the truth. They then contextualize each statement and classify it into one of six categories: true, inaccurate, exaggerated, false, contradictory or unsustainable.
To reach a conclusion, each article is reviewed by at least one reporter and one editor, both of whom must agree on the seal that will be granted to the statement or verified information. If necessary, a third journalist from the permanent team is consulted to break the deadlock. All the journalists on the Aos Fatos team are under public scrutiny and are strongly advised not to formally or informally endorse any political-partisan discourse. The team does not engage in militancy or manifest preferences and is encouraged to listen to all sides.
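As an illustration only, the sketch below models that review flow in Python; the function and seal names are hypothetical assumptions and do not represent Aos Fatos's actual tooling:

```python
# Illustrative sketch of the review flow described above: two journalists must
# agree on one of the six seals; a third breaks a deadlock if needed.

SEALS = {"true", "inaccurate", "exaggerated", "false", "contradictory", "unsustainable"}

def final_seal(reporter_verdict, editor_verdict, tiebreaker_verdict=None):
    """Return the seal granted to a checked statement, or raise if no agreement is reached."""
    for verdict in (reporter_verdict, editor_verdict, tiebreaker_verdict):
        if verdict is not None and verdict not in SEALS:
            raise ValueError(f"unknown seal: {verdict}")

    if reporter_verdict == editor_verdict:
        return reporter_verdict
    if tiebreaker_verdict in (reporter_verdict, editor_verdict):
        return tiebreaker_verdict  # the third journalist decides the deadlock
    raise ValueError("no agreement reached; the check stays unpublished")

print(final_seal("false", "false"))                            # false
print(final_seal("exaggerated", "inaccurate", "exaggerated"))  # exaggerated
```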
Another important Brazilian fact-checking organization with predominantly female leadership and participation is Lupa. Under the direction of Cristina Tardáguila, the agency was the first company in Brazil to specialize in fact-checking.
In February of this year, the Lupa Agency and the Futura Channel, in partnership with Google, launched the project "Fake or News", which aims at digital education and at helping young people identify fake news. According to its creators, about 30 learning trails will be posted on fakeounews.org/ by the 2018 elections.
Thinking outside the box (bubble)
But the question remains: can Brazil stop the filter bubble or the influence of fake news in the 2018 general elections? The Internet is believed to be a major differential in what will be a fierce, point-by-point dispute in these elections. According to Antonio Lavareda, a professor at the Federal University of Pernambuco and president of the scientific council of the Institute of Social, Political and Economic Research (Ipespe), the Brazilian educational deficit is one of the great barriers in the fight against fake news. The population's difficulty in telling fake from true, no matter how large the inconsistencies and contradictions, is one of the biggest problems.
There are currently more than 200 million smartphones in Brazil, and Internet access through these devices and computers has always been seen as a bridge to strengthen democracy and improve individuals' communication and information. Yet even with the good initiatives of fact-checking agencies, in a highly polarized environment belief itself becomes biased.
It does not appear that Brazil will be able to eradicate the effects of fake news by October 2018. However, their impact can be mitigated with the help of the initiatives already created and by encouraging public debates that raise public awareness.
Long-term strategies such as education, media literacy and the development of critical thinking seem to be the best alternatives, since the risks of manipulating content and speech are not present only in the online ecosystem. The Brazilian deficit in education and the lack of stimulus to critical thinking and questioning make the population vulnerable to external influences, whether on the Internet or in any other medium of communication.