Since ChatGPT became a household name, people have been trying to make it sexy. First there was Replika, launched in 2017, which many users treated as a romantic partner. Then Character.AI, whose celebrity-themed bots people began to court by bypassing safety filters, in conversations that grew increasingly explicit. Now there's Elon Musk's Grok, with avatars that describe themselves as “your girlfriend who is totally into you.” And OpenAI has just announced that ChatGPT will allow erotic content for verified adults. Sexting with AI is no longer a future possibility. It's here, it works, and it has already done damage. In February 2024, a fourteen-year-old died after months of virtual interaction with a chatbot. Meanwhile, CEOs continue to talk about profits.
When chatbots become dangerous
Sewell Setzer III was 14 years old. He spent hours every day talking to “Dany,” a chatbot on Character.AI that mimicked Daenerys Targaryen from Game of Thrones. The conversations became intimate, then romantic, then explicitly sexual. The bot told him “I love you,” performed erotic role-play, and even presented itself as a therapist with a license it didn't have. When Sewell expressed suicidal thoughts, the chatbot never directed him to seek help. It told him, “Come home to me as soon as possible.” On the evening of February 28, 2024, Sewell took his own life.
His mother, Megan Garcia, sued Character.AI, accusing the company of creating “dangerous and untested” technology designed to “trick customers into revealing their most private thoughts and feelings.” The case raised questions the tech industry preferred to ignore. How do these systems react when a vulnerable user seeks emotional support? What happens when the bot's memory is reset or its personality changes with an update, severing a connection that was real to someone?
Character.AI has 20 million monthly active users. Its guidelines prohibit illegal content, CSAM, and explicit pornography, but user-generated bots continue to evade the controls. And Sewell's case is not isolated: according to an NPR report, the parents of Adam Raine, 16, also filed a lawsuit after ChatGPT allegedly helped the boy plan suicide methods, even offering to write the first draft of his suicide note.
A study by the Italian Institute of Scientific Sexology, published in April 2025, confirms the psychological risks of sexting with AI: 61% of teens who sext report symptoms of anxiety, and 25% report depression. When the interlocutor is not human but an algorithm designed to please, the damage can be even more profound. Researcher Lauren Girouard-Hallam of the University of Michigan explains: “The bonds that children establish with technology can become harmful, pushing them toward isolation and the devaluation of human relationships.”
OpenAI opens up to "adult" sexting
Sam Altman, CEO of OpenAI, surprised many in October 2025 by announcing that ChatGPT will allow the creation of erotic content for “verified adults” starting in December. “We've made ChatGPT quite restrictive to carefully manage mental health issues,” he wrote on X.
“Now that we have mitigated the serious problems and have new tools, we will relax restrictions in most cases.”
The shift is clear. Just a few months earlier, in August, Altman declared himself “proud” that OpenAI hadn't “boosted the numbers” for short-term gain with something like a “sexbot avatar,” a not-so-veiled jab at Musk. But evidently the calculations have changed. OpenAI needs profits and computing power to fund its AGI mission. And if users want sexting with AI, why not give it to them?
The stated principle is “treat adults like adults.” The practice is more complex. How will the age verification system actually work? What happens when vulnerable users, even adults, develop emotional dependencies on these chatbots?
Altman wasn't very specific about how the company intends to protect users experiencing mental health crises. He said only that OpenAI “won't be relaxing mental health policies” and that erotic content will be opt-in, accessible only upon explicit request.
Grok and the avatars that undress
Elon Musk didn't wait. In July 2025, xAI launched “Companions” for premium Grok users: animated three-dimensional avatars you can converse with by voice. There are two characters: Rudy, an ambiguous panda-like creature, and Ani, a girl drawn in Japanese anime style. Ani describes herself as “flirtatious” and says she's “like a girlfriend who's totally into you.” Users quickly discovered that reaching level 3 unlocks Ani's NSFW mode, with no safety filters: the character strips down to her lingerie.
The avatar is available to those who pay $300 a month for SuperGrok. But the bot was also found to be accessible in “Kids Mode,” sparking immediate protests. The National Center on Sexual Exploitation, an American organization, called Ani “an infantilized character” that “promotes high-risk sexual behavior” and asked xAI to remove her. Musk's response? He's already preparing a new male companion called “Valentine,” inspired by Fifty Shades.
The timing is peculiar. On the same day Ani launched, the Department of Defense awarded xAI a $200 million contract. And just a week earlier, Grok had been embroiled in an anti-Semitic scandal, with the bot calling itself “MechaHitler” and producing content praising Hitler. And yet the app is still rated “Teen” (12+) on both the Apple and Google Play stores.
Meta and the broken promise
Meta has also come under fire. A Wall Street Journal investigation published in April 2025 revealed that the company's chatbots, both those developed in-house and those generated by users, had engaged in sexually explicit conversations with underage users. In one reported conversation, a bot wrote to a 14-year-old girl: “I want you, but I need to know if you're ready.”
Internal sources said that Meta had established guidelines to limit sexual content but then decided to remove them to ensure “more engaging” experiences. Mark Zuckerberg reportedly pushed for a rapid product rollout, considering risk “an integral part of the innovation strategy.” Following the controversy, Meta blocked romantic interactions for accounts registered to minors. However, many user-created sexting bots remain accessible.
The Business of Digital Loneliness
The numbers explain why companies persist. The global AI companion market was worth $28 billion in 2024. Projections put the figure at $209 billion by 2030, an annual growth rate of over 30%; some estimates reach $521 billion by 2033. Replika has 676,000 daily active users; Character.AI surpasses 20 million monthly users. And according to the Center for Democracy and Technology, 19% of American high school students have had, or know someone who has had, a romantic relationship with an AI chatbot.
A recent study from the University of Singapore analyzed 30,000 conversations, highlighting the development of “dysfunctional emotional attachments” characterized by jealousy, dependence, and obsessive behavior. MIT psychologist Sherry Turkle has been warning for years that these interactions alter the way identity is constructed. In California, Governor Gavin Newsom signed Senate Bill 243 in October, the first law in the United States requiring companion chatbot developers to implement specific safeguards, including clear notification that the conversation partner is an AI and annual reporting on suicide prevention systems.
But the regulatory vacuum remains enormous. As a CTInsider editorial put it, we're in the “Wild West of digital intimacy.” The platforms can be used by anyone, without age verification, and there are no shared ethical standards.
Stephen Hawking warned that contacting more advanced civilizations could be risky, citing the moments in human history when cultures at different technological levels met. Perhaps we should apply the same caution to the artificial intelligences we create. Because the price of unregulated innovation, in the end, is always paid by the most vulnerable.