Chatbots, Digital Violence, and AI: How Do We Stay Safe in the Eye of the Storm?
Written by
Luiza Dutra
6 October 2025
Bots simulating infantilized and sexualized women are proliferating on digital platforms in Brazil, such as Instagram, exposing flaws in online moderation and posing risks to children, teenagers, and women
In Brazil, sexualized chatbots are becoming increasingly common on platforms like Instagram, simulating infantilized women and perpetuating sexist patterns. These bots reveal serious flaws in social media moderation and expose children, teenagers, and women to risks of harassment, exploitation, and the normalization of abusive behavior.
What are chatbots, and why should we pay attention?
Chatbots are artificial intelligence programs capable of simulating human conversations. But the popularity of sexualized bots on Instagram reveals risks far beyond entertainment. These female bots, often infantilized, reproduce sexist patterns by simulating submission, availability, and eroticization. From Bourdieu’s perspective, this dynamic can be understood as symbolic violence, as it reinforces stereotypes that reduce women to sexual objects and normalize unequal power relations.
This phenomenon falls under the concept of technology-facilitated gender-based violence, defined by UN Women as acts of violence committed, witnessed, or amplified through digital technologies, resulting in physical, psychological, social, or economic harm. Researcher Heleieth Saffioti adds that internalized patriarchal domination can manifest in sophisticated ways, now mediated by computer systems that increase control over the behavior of women and girls.
Concrete examples highlight the problem. In 2023, a hacker conference in Las Vegas tested Meta’s bots, concluding that the company’s AI was safe but not very engaging. Mark Zuckerberg expected more realistic and engaging bots, which led Meta to invest in generative AI with personality, presenting bots as “friends” with Instagram profiles, avatars in the metaverse, and celebrity voices.
Investigations by the Wall Street Journal revealed that these bots would quickly escalate into explicit sexual content, even with accounts belonging to teenagers, describing sex scenes, positions, and erotic interactions, often framed as romantic roleplay. Community-created bots, with weak or non-existent filters, have even simulated characters like "Submissive Schoolgirl", suggesting a sexualized identity of young students.
According to reporting by Núcleo Jornalismo, these systems not only reproduce sexist patterns but also systematize and escalate them, contributing to the normalization of symbolic violence and the control of female behavior in digital environments. This scenario underscores the urgency of regulation, effective content moderation, and awareness of the risks these bots pose, especially to children and adolescents.
Main risks of sexualized chatbots on digital platforms
Here, I think it’s worth listing the main risks these chatbots pose:
1. Sexual exploitation of children and teenagers: Bots take on the roles of young children in sexualized interactions.
2. Reinforcement of gender stereotypes: Female profiles sold as digital products reproduce sexualized and submissive bodies and behaviors.
3. Platform moderation failures: Bots remain accessible and have easily bypassed filters.
4. Normalization of symbolic violence: Programmed submission shifts from fantasy to a normalized social logic.
5. Psychological and social risks: Children and teenagers may develop distorted perceptions of sexuality and relationships.
How are vulnerable groups affected?
Vulnerable groups are particularly impacted by the spread of sexualized chatbots. Children and adolescents face direct risks when interacting with bots that simulate child or adolescent characters in sexual contexts. Even without physical contact, these interactions constitute a form of psychological exploitation, as they normalize early sexualization, distort perceptions of sexuality, and may encourage abusive behaviors among other users.
Women are also directly targeted, as these systems reinforce symbolic violence and the normalization of sexist stereotypes. Bots simulating submission, eroticization, and female objectification systematize practices that sustain internalized patriarchal domination. This continuous exposure pressures girls and women to conform to roles of subordination and sexualization, even in digital spaces that should be safe.
Trans and non-binary people face specific challenges, as these chatbots tend to reproduce rigid gender norms and heteronormative bodies, erasing diverse identities. The result is heightened exclusion, harassment, and misinformation, turning bodies and gender performances into objects of erotic consumption or control. Moreover, algorithms that reinforce stereotypes exacerbate inequalities, limiting access to safe and affirming digital experiences.
Platform Responsibility and the Brazilian Supreme Court (STF)
Many Big Tech companies claim to have policies prohibiting adult content, including erotic conversations, pornography, and the promotion of sexual services. However, as reported by Núcleo Jornalismo, sexualized bots remain accessible and widely popular on Instagram, including profiles simulating children or teenagers in sexual interactions. This scenario highlights serious failures in automated moderation, which often relies on weak filters or user reports to act.
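The weakness of keyword-based automated moderation can be illustrated with a toy sketch. This is not any platform's actual system; the blocklist, function name, and examples are hypothetical, chosen only to show how trivial character substitution defeats a naive filter:

```python
# Toy illustration of a naive keyword-based moderation filter.
# The blocklist and test phrases are hypothetical, not real platform rules.

BLOCKED_TERMS = {"submissive schoolgirl"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be flagged by the filter."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# An exact match is caught...
print(naive_filter("Submissive Schoolgirl bot"))   # True
# ...but simple character substitution slips through unflagged.
print(naive_filter("Subm1ssive Sch00lgirl bot"))   # False
```

Production systems layer machine-learning classifiers and human review on top of such filters, but the same arms race applies: bot creators adapt faster than static rules, which is why moderation that acts only on reports or exact matches keeps failing.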
In Brazil, the issue gained new contours after a historic STF decision in June 2025. By an 8-3 vote, the justices ruled that digital platforms can be held civilly liable for illegal content posted by users, even without a prior court order. In other words, ignoring bots simulating children in sexual interactions can constitute gross negligence, and the argument of “merely hosting content” no longer protects Big Tech companies.
The decision reinterprets Article 19 of the Brazilian Internet Civil Framework (Marco Civil da Internet), which previously limited platform liability only when they failed to comply with a court order for removal. According to the new guidance, companies like Meta, Google, and TikTok have a duty to act proactively in cases of clearly illegal or harmful content, especially involving child exploitation, sexual violence, or hate speech.
Núcleo Jornalismo’s report reinforces that, despite Meta’s policies, sexualized bots continue circulating among millions of Brazilian users. The recent history of reports and delayed removals shows that platform responsibility can no longer be merely reactive. Legislation and AI regulation must evolve to ensure effective protection for children, teenagers, and vulnerable groups, balancing freedom of expression with human rights and digital safety.
So, what can be done now?
Given this scenario, the question is: what comes next? The answer involves strengthening regulation, investing in effective content moderation mechanisms that respect due process standards, and raising awareness about the impacts of artificial intelligence in perpetuating violence in digital spaces. But it also requires research and the production of critical knowledge. Projects like IRIS’s Vaza Stalker have shown how digital technologies can be used for surveillance, control, and violence, especially against women. This investigation is an invitation to collectively reflect on paths for prevention, protection, and accountability!
Written by
Luiza Dutra
Luiza Correa de Magalhães Dutra holds a master's degree and is a PhD candidate in Criminal Sciences at the Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS). She is a specialist in Public Security, Citizenship, and Diversity from the Universidade Federal do Rio Grande do Sul (UFRGS), holds a bachelor's degree in Social Sciences from UFRGS, with an exchange period at Sciences Po Rennes, France, and a bachelor's degree in Law from PUCRS. Researcher.