
“Dangerous content”? The invisibilization of the LGBTQIA+ Community in content moderation


June 28, 2021

“We believe that everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories.” (YouTube | About)

Seen as one of the largest and leading platforms for video creation and sharing, YouTube, in its stated intention to “give everyone a voice and show them the world,” lists among its values freedom of expression, the right to information, the right to opportunity, and the freedom to belong. It thus appears to be a free, open, and participatory space, even an egalitarian environment – a world of seemingly peaceful coexistence, without censorship or control.

However, as on any social network, YouTube regulates the content published on it, whether to curb copyright violations or to combat the spread of fake news, hate speech, sexual content, and other types of dangerous content. This regulation is established through a contract of adhesion: either we accept and follow all the rules, or we are punished with the demonetization of the video identified as infringing the community guidelines (the loss of the possibility of earning money from advertisements), limits on that video’s reach on the platform, the blocking of the content, or even its removal, with no possibility of restoring the content to the platform.

It is within this relationship of unilateral control, exercised every day on YouTube and on other platforms, that this article is situated. The LGBTQIA+ community often ends up, as in the case of YouTube, falling into these ‘boxes’ of content deemed dangerous or copyright-infringing, based on the platform’s automated identification. It is this context of invisibilization of the community, amplified by YouTube, that we explore in this article. Check it out below!

Space for free speech, representation and remuneration of the LGBTQIA+ Community

YouTube is seen as one of the main platforms for producing and sharing audiovisual content. The LGBTQIA+ Community even seems to prefer this platform for addressing issues that go unrepresented in mainstream media, and even for coming out as LGBTQIA+. The Community sees the platform not only as a means by which it can strengthen and empower itself, but also as a way of breaking out of the bubble and expanding the reach of these themes beyond the Community’s own circles.

In addition to being a platform increasingly used to disseminate subjects not covered in mainstream media, YouTube runs the YouTube Partner Program, a partnership program that allows online content producers to receive compensation for advertising placed in their videos, proportional to the number of views. To participate, a producer must have a channel with more than 4,000 watch hours in the last 12 months and at least 1,000 subscribers, and must join AdSense, the Google program that monetizes online content creators by displaying ads on their websites. AdSense imposes content limitations of its own, prohibiting, among other things, nudity or sexual content and privacy-violating content.
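As a minimal illustration of these thresholds, the sketch below simply restates the eligibility rules in Python; the function is hypothetical and does not correspond to any real YouTube or AdSense API:

    # Hypothetical restatement of the YouTube Partner Program thresholds
    # described above; not a real YouTube or AdSense API call.
    def eligible_for_partner_program(watch_hours_last_12m: float, subscribers: int) -> bool:
        # Monetization also requires an approved AdSense account and
        # policy-compliant content, which this toy check does not model.
        return watch_hours_last_12m >= 4_000 and subscribers >= 1_000

    print(eligible_for_partner_program(watch_hours_last_12m=5_200, subscribers=980))    # False
    print(eligible_for_partner_program(watch_hours_last_12m=4_100, subscribers=1_500))  # True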

As CEPI reported in its research “Converging Dissonant Voices: Freedom of Expression, Vulnerable Groups, and Challenges to Online Cultural Production,” 

“much of this content is difficult to determine, but by engaging YouTube’s Partner Program (AdSense), the platform [grants itself] the right to disable the display of ads in your content; disable your Google AdSense account; suspend your participation in YouTube’s Partner Program; suspend or even shut down your YouTube channel.” 

Numerous criticisms have been leveled at the platform for automatically removing content from channels dealing with topics such as diversity, because the terms used in their videos often fall into categories such as “sexual content.”

As a private platform, YouTube regulates content as it sees fit to avoid liability for crimes, offenses, and other wrongdoings that occur within the platform. It even makes it clear in its “Policies and Safety” Guidelines that there is no room for debate about the categories listed as inappropriate content: 

“Here are some common-sense rules that’ll help you steer clear of trouble. Please take these rules seriously and take them to heart. Don’t try to look for loopholes or try to lawyer your way around the guidelines—just understand them and try to respect the spirit in which they were created.”

However, in 2019, a group of LGBTQIA+ channel producers in the United States sued YouTube, claiming that the platform was discriminating against their content and often stripping their compensation. They allege, among other points, that YouTube removes advertising from videos containing words deemed “triggers,” such as “gay” or “lesbian”; frequently labels LGBTQIA+-themed videos as “sensitive” or “mature,” restricting them from appearing in search results or recommendations; and does not do enough to filter harassment and hate speech out of the comments section. Stephanie Frosch, for example, a queer YouTuber with about 370,000 subscribers, was earning approximately $23,000 a year on YouTube in 2009. In 2021, Frosch says she is “lucky if she gets $100 a month.”

The group’s claim, however, rests not only on the content of the videos themselves, but also on the identification of their authorship, i.e., on the fact that the producers are LGBTQIA+ people. Google disputes these claims, arguing that the platform’s distribution algorithms are protected by Section 230 of the US Communications Decency Act – a provision that itself came under fire when former President Trump sought to repeal it after a public fight with Twitter over the moderation of his tweets.

Whether because of the videos’ content or the identity of their authors, the fact is that the LGBTQIA+ Community faces growing invisibilization, which not only undermines YouTube’s goal of “giving everyone a voice and showing them the world,” but also hinders the remuneration of content producers belonging to the community.

Content ID and copyright protection: an automatic limitation of the expression of the LGBTQIA+ Community

Another way YouTube amplifies the invisibilization of the LGBTQIA+ Community lies in the workings of the Content ID tool – a mechanism that automatically identifies video and audio registered by their rights holders in order to protect their copyright.

As Katharine Trendacosta of the Electronic Frontier Foundation argues, YouTube has already proven more interested in pleasing large copyright holders than in protecting freedom of expression or promoting creativity, despite what its list of values proclaims. With Content ID, YouTube offers a filtering system that, besides changing over time – creating insecurity for online content producers – does not consider the context or scope of the allegedly infringing use, often automatically demonetizing small producers and directing the money exclusively to large copyright holders.
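To make the criticism about missing context concrete, here is a deliberately toy sketch in Python of the general idea behind fingerprint-based matching. YouTube’s actual Content ID is proprietary and far more sophisticated (perceptual fingerprints, partial matching, policies chosen by each rights holder), so every name, value, and threshold below is illustrative only:

    import hashlib

    def fingerprint(samples: list[int], chunk_size: int = 4) -> set[str]:
        # Hash fixed-size chunks of a (toy) audio sample stream.
        chunks = (samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size))
        return {hashlib.sha256(bytes(c)).hexdigest() for c in chunks if len(c) == chunk_size}

    # Fingerprints registered by rights holders (hypothetical registry).
    registry = {"Santeria - Sublime": fingerprint([7, 1, 9, 4, 2, 8, 3, 5] * 50)}

    def scan_upload(samples: list[int], threshold: float = 0.5) -> list[str]:
        # Flag an upload when enough of its chunks match a registered work,
        # with no notion of context (quotation, reaction, criticism).
        fp = fingerprint(samples)
        return [work for work, ref in registry.items()
                if len(fp & ref) / len(ref) >= threshold]

    # A reaction video containing a snippet of the track is flagged exactly
    # like a full re-upload would be.
    upload = [0, 0, 0, 0] + [7, 1, 9, 4, 2, 8, 3, 5] * 30
    print(scan_upload(upload))  # ['Santeria - Sublime']

The structural point survives the simplification: a match is declared from chunk overlap alone, so a short quotation inside a reaction video and a wholesale re-upload look identical to the matcher.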

The LGBTQIA+ Community uses YouTube as a space to amplify its voice: a safe space not only to raise issues absent from mainstream media, but also to create content and make a living from online content production. One of the most common formats for discussing “alternative” topics and/or celebrating their advancement in traditional media is the reaction video, in which a creator uses someone else’s content in order to analyze it, or a video compiling clips from various works to illustrate the situation under discussion.

These uses of video or audio snippets can often be treated by Content ID as illegal, punishable uses. As IRIS-BH itself reported in its study “Transparency on content moderation in community policies”, YouTube provides explicit contextual criteria for determining whether or not a violation has occurred for every category of content it lists as inappropriate, except copyright violations.

Thus, the tool works very well for what it was programmed to do: quick, faithful identification of video and audio registered as protected by copyright. But it ends up contributing to the invisibilization of a community that relies heavily on mainstream media content to discuss issues and empower itself in a space that, in theory, would give outlet to “everyone’s voice”.

Of course, this is not only true for the LGBTQIA+ Community. Tools like Content ID are such powerful weapons for automatically removing content that uses protected works that even a police officer has tried to use one to his advantage. In February 2021, Sennett Devermont – an activist who regularly live-streams protests and interactions with police to his more than 300,000 followers on Instagram – live-streamed his interaction with Sergeant Billy Fair. Upon realizing that the conversation was being live-streamed on Instagram, the sergeant went quiet and played the song “Santeria” (by the band Sublime) from his cell phone for about a minute, hoping Devermont’s video would be identified as copyright-infringing so that its audio would be removed, the live-stream interrupted, or the video removed entirely.

Is private regulation the problem?

Platforms, as services offered by private companies, each regulate the moderation of user-published content in their own way. The problem lies in the lack of transparency and the impossibility of debating the categories used to classify content as inappropriate. These rules often end up being applied automatically, without case-by-case analysis, which hinders the creation of an open space for debate, since discourse is far more complex than mere ‘classification boxes’.

Returning to the IRIS report, the Institute found that YouTube suffers from a serious lack of transparency regarding its means of detecting content deemed inappropriate under its community policies. The research team found indications that the platform performs proactive detection without specifying how it is carried out (automated or otherwise), and that the user’s ability to contest decisions is reduced. The report also notes that YouTube uses ambiguous terminology, relying on the verb “may” to specify the content prohibited on its platform. This creates insecurity for content producers and hinders their power of contestation, since a rule this broad makes it difficult to show that a given piece of content falls outside the prohibited categories.

Thus, on the one hand, platforms are required to curb harmful content, including hate speech, misinformation, offenses, and criminal conduct, which creates the need to classify content as prohibited and to invest in initiatives to curb its spread on the web. On the other hand, this leaves private agents with broad discretion to determine the limits of freedom of expression. That discretion constrains not only the individual expression of each user, but also the creation and cultivation of a space for the empowerment, information, compensation, and representation of vulnerable groups, such as the LGBTQIA+ Community.

Transparency is essential for freedom of expression

In this month of June, in which we celebrate LGBTQIA+ pride, we wanted to raise the discussion of how content moderation on YouTube can reduce – and is reducing – the space of the LGBTQIA+ Community, invisibilizing it. The platform has its benefits: it offers a space for (paid) content creation and production, and it allows the community to express itself, connect, and reach previously unoccupied spaces. But it also poses risks, limiting the freedom of expression of creators beyond what is necessary (I say “beyond what is necessary” because the very prohibition of hate speech, offenses, and other harmful conduct is already a limitation of freedom of expression, one widely recognized as necessary). This limitation occurs through the use of broad categories of content deemed inappropriate, the operation of Content ID, the scant transparency in the identification of inappropriate content, and the reduction of the user’s ability to contest the processes of reporting, appeal, and punishment for uses of prohibited content.

The views and opinions expressed in this blogpost are those of the author. 
Illustration by Freepik Stories.

Written by

Researcher at the Centro de Ensino e Pesquisa em Inovação (CEPI), FGV Direito SP. Master’s candidate in Law and Development at FGV Direito SP, holding a Mario Henrique Simonsen Teaching and Research Scholarship. Law degree from the Pontifícia Universidade Católica de São Paulo (PUC-SP). Currently researches digital human rights, content moderation, copyright, and legal education.

