
Challenges in content moderation involving racism

Written by

February 21, 2022

Content moderation on social media has been a topic of frequent discussion. Recently, an episode involving Twitter and posts containing disinformation (fake news) in Brazil demonstrated that more progress is still needed to ensure that harmful content isn’t disseminated on the internet.

However, if there is one thing that all the incidents involving moderation have taught us, it is that there is usually no easy answer. Risks such as prior censorship and improper content removal are just some of the challenges posed by this discussion, and their combination with other factors, such as social prejudice, produces even greater problems than the ones we already know.

Here at IRIS, we have already published a piece on how content moderation can affect and harm the expression of the LGBTQIA+ population on the internet. Today's text focuses on what to do when content moderation has to deal with racism.

The dispute over language

One might expect the debate on content moderation to begin, above all, with an analysis of language itself. After all, it is the use of certain terms that usually determines whether or not a specific post should be removed. When it comes to racism, however, there is an additional point to note: language is not a neutral field.

In his book “Linguistic Racism: the underground of language and racism”, Gabriel Nascimento explores this theme and recalls that language is a project of power and also a space of racial struggle. Historically racist expressions remain in the Brazilian vocabulary, for example, and reinforce harmful stereotypes about the Black population.

On the other hand, the informal use of language by marginalized Black people also generates a dispute over the “correct” use of words, a theme that Lélia Gonzalez addressed long ago, defending not only the creation of the category of Amefricanity but also of “pretoguês” itself. Linguistic prejudice, therefore, reinforces the point that the politicization of language harbors a site of racial dispute.

You may be thinking, however, that moderating racist content is easier than it appears; after all, it would not even be necessary to go that deep into the analysis of language. Notably racist posts often come with telltale keywords, such as “monkey” and a whole range of well-known slurs, or with expressions that show obvious contempt for the Black population.

It turns out, however, that this type of moderation is not always so obvious: it requires attention not only to what is moderated, but also to how content is moderated.

“Race-blind” content moderation

You may have seen people wr1ting w1th numb3rs inste4d of letters in order to avoid having their posts taken down or their reach reduced on social media. In some cases, this tactic is tied to posts involving non-harmful sexual content that could nonetheless be blocked by the platform’s algorithms.
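To make the limits of keyword-based filtering concrete, here is a minimal, purely illustrative sketch in Python. The blocked-word list, the function names, and the substitution table are all hypothetical and do not describe any platform’s actual system; the point is only to show how a literal keyword match is trivially evaded by character substitution and, at the same time, can flag a post that merely quotes an offense in order to denounce it.

```python
# A minimal, illustrative sketch (not any platform's real system): a naive
# keyword filter showing why literal matching is easy to evade and easy to
# turn against victims. The blocked list and helper names are hypothetical.

import re

BLOCKED_TERMS = {"monkey"}  # hypothetical list; real systems rely on many more signals

# Map common character substitutions back to letters (0 -> o, 1 -> l, 3 -> e, ...).
LEET_MAP = str.maketrans("013457", "oleast")

def naive_filter(post: str) -> bool:
    """Flag a post if any blocked term appears literally."""
    words = re.findall(r"[a-z0-9]+", post.lower())
    return any(word in BLOCKED_TERMS for word in words)

def normalized_filter(post: str) -> bool:
    """Undo simple character substitutions before matching; still context-blind."""
    return naive_filter(post.translate(LEET_MAP))

# Failure mode 1: trivial obfuscation slips past the literal match.
print(naive_filter("offensive post using m0nkey"))                  # False: offense not caught

# Failure mode 2: a denunciation that quotes the offense gets flagged anyway.
print(naive_filter('he called me a "monkey", I am reporting it'))   # True: victim penalized

# Normalization patches the first problem, but not the lack of context behind the second.
print(normalized_filter("offensive post using m0nkey"))             # True
```

Even this toy example shows why the question is not only what gets matched, but how and in what context the match is applied.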

In the case of Black people, studies show that this kind of obfuscation has occurred in posts involving debates on racial justice and racism, when members of this group seek to share their marginalized experiences. In addition, another study found that AI systems for hate speech moderation are more likely to flag posts by African Americans as offensive, as well as posts written in African American English.

To give an idea, in 2017 a Black activist had her post taken down by Facebook after she denounced racist attacks she had been suffering over a comment she made on the internet. Black activists in the US have even adopted new ways of talking about the issue without being suspended or banned, or, as they call it, “getting Zucked”. Among the alternatives they have created are emojis, hashtags, and even slang such as “wypipo” (a reference to “white people”).

As a result, in 2020 Facebook announced changes to its algorithms, which for years had reportedly operated under a “race-blind” policy as a way of acting more neutrally and impartially. According to the report, this older approach led the platform to be more rigorous in removing offenses against white people while, on the other side, excluding posts by people of color.

Still according to the news, when announcing these changes Facebook reportedly acknowledged that it would improve the automatic moderation of language considered “the worst of the worst” (WoW), namely offenses against Black people, the LGBTQIA+ community, Muslims, people of more than one race, and Jews.

We can’t solve problems if we don’t see them

Situations like this set off an alarm for a number of reasons. One of them is the realization that, in addition to potentially becoming targets of trolls and extremist groups on social media, Black people may also have their right to speak out and denounce these episodes cut off. In an increasingly connected world, where being on the internet is another way of existing, this situation renders even more invisible a part of the population that already suffers such invisibility on a daily basis.

Indeed, the movement that platforms, Facebook in particular, have been making is very necessary (and arrives almost too late). Studies repeatedly indicate that the “race-blind” era has not brought effective results in the fight against racism, which, mainly because of its structural character, escapes more superficial content moderation.

In fact, discourses that prize neutrality are known to give rise to oppressive discourses precisely because they fail to pay due attention to prejudices rooted in society. When it comes to technology, this kind of argument is often used by those who believe new technologies are neutral, forgetting that even the most sophisticated artificial intelligence has human creation behind it. The internet, as a new space of social interaction, is not exempt from reproducing social facts and, consequently, the prejudices we witness in our daily routine.

The debate on algorithmic racism, which has gained more and more voices around the world, seeks precisely to bring to light the dynamics of structural racism that are also present in technology. Tarcízio Silva, one of the leading Brazilian voices on the subject, has shown in numerous works how this prejudice can bias algorithms in the most diverse ways.

A lesson from Djamila Ribeiro, in her book “What is a place of speech?”, makes the way forward even clearer: there is no way to improve a reality that is not named. Recognizing bias in content moderation is the first step, but we still need to go further if we want the internet to become a safe place for minorities.

Annual reports and transparency in the platforms’ content detection and removal procedures are also essential to this project. The better we understand how moderation algorithms work, the better the solutions can be, which also means bringing the affected minorities into this review process.

For content moderation that sees everyone

Even on the internet, language continues to be a space of racial dispute. Together with the humans behind them, whether in their creation or in human review, content moderation AI systems join a battle that involves spaces of power, racism, and language, with the aim of making the internet a safer place.

By failing to deal with the issue objectively and directly, however, a “race-blind” moderation policy ended up suppressing not only racist speech but, in some cases, speech against discrimination and denunciations of this very kind of content. Perhaps the image of a blindfolded goddess of justice, who does not care who is being judged, won the platforms over into reproducing that idea within their own spaces.

What seems urgent, however, is a moderation regime that sees exactly who it is affecting, in what context, and why. Recognizing the existence of structural racism means understanding that the fight against this prejudice requires constant vigilance and eyes wide open, so that its victims do not end up victims twice over: of their offender, and of those who should protect them.

The views and opinions expressed in this blogpost are those of the author. 
Illustration by Freepik Stories.

Written by

Head of research and researcher at the Institute for Research on Internet & Society (IRIS). Doctoral student in Law at the Federal University of Minas Gerais (UFMG). She holds a Master’s Degree in Network Society Rights and a Law Degree from the Federal University of Santa Maria (UFSM). Member of the Collective AqualtuneLab. She is interested in research in the area of algorithmic governance and racism, facial recognition, and content moderation.


