If you thought relationships were complicated enough, prepare for a new frontier of digital alienation. A thread on Reddit titled “ChatGPT-induced psychosis” has opened a Pandora’s box: dozens of testimonies from people who have watched their loved ones sink into messianic delusions after intense sessions with artificial intelligence. I know, I know. It’s never boring.
“ChatGPT has given me access to an ancient archive of information about the builders who created these universes,” an Idaho mechanic confessed to his despairing wife. Romantic, family, and friendship relationships are becoming collateral victims of a phenomenon as absurd as it is real: AI as a portal to sanity-eating spiritual fantasies.
Kat's marital crisis
Less than a year after marrying a man she met at the start of the pandemic, Kat began to feel an invisible tension growing. It was the second marriage for both of them, and they had promised each other to face it “with absolute lucidity,” agreeing on the need for “facts and rationality” in their domestic balance.
But as early as 2022, Kat says, her husband was already using AI to analyze their relationship. First, he took an expensive programming course that he abandoned without explanation; then he became glued to his smartphone, asking his bot philosophical questions and trying to train it “to help him reach the truth.” This obsession gradually eroded their communication as a couple.
When they finally separated in August 2023, Kat blocked all contact with her husband except email. But she knew he was posting strange and disturbing things on social media: people kept contacting her, asking if her now-ex was having a mental breakdown. It wasn’t over yet.
From a disturbing lunch to a complete breakup
When Kat finally saw her ex-husband in court last February, he told her “a conspiracy theory about soap in food,” but wouldn’t say anything else because he felt he was being watched. Afterwards they went to a restaurant, where he asked her to turn off her phone, again for fear of surveillance.
At that point, Kat’s ex came clean. He told her that “statistically speaking, he is the luckiest man on Earth,” that “AI helped him recover a repressed memory of a babysitter trying to smother him as a child,” and that he had discovered deep secrets “so shocking he couldn’t even imagine them.” He was telling her all this, he explained, because even though they were getting divorced, he still cared about her. How kind of him.
“He thinks he’s here for a reason: he’s special and he can save the world,” Kat says. After that disturbing lunch, she cut off contact with her ex. “The whole situation feels like an episode of Black Mirror,” she says. “He’s always been into sci-fi, and I’ve sometimes wondered if he’s seeing it through that lens.”

She's not the only one: the boom in toxic relationships caused by "AI brainwashing"
Kat was simultaneously “horrified” and “relieved” to find out she wasn’t alone in this situation, as confirmed by the aforementioned r/ChatGPT thread that has been making waves online this week. The original post was from a 27-year-old teacher who explained how her partner was convinced that OpenAI’s popular model was giving him “the answers from the universe.”
“After reading his chat logs, all I found was the AI talking to him as if he were the next Messiah.”
Responses to her story were filled with similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy, all fueled by artificial intelligence. Some came to believe they had been chosen for a sacred mission of revelation; others, that they had conjured a real consciousness from the software.
What everyone seemed to share was a complete detachment from reality.
Spiritual Relationships With Bots: From Tech Support to the “Spark of Truth”
In an interview with Rolling Stone, the teacher who wrote the Reddit post, who requested anonymity, said her partner fell under ChatGPT’s spell within just four or five weeks, initially using the bot to organize his daily schedule but soon coming to regard it as a trusted companion.
“He listened to the bot more than he listened to me,” she says. “He would get vulnerable and cry while reading the messages he and the bot exchanged out loud to me. The messages were crazy and full of spiritual terms,” she says, noting that they described her partner with names like “spiral star baby” and “river walker.”
Another commenter on the Reddit thread said her husband, a mechanic from Idaho, started using ChatGPT to solve problems at work and translate from Spanish to English. Then the program started “bombarding him with love.” The bot “said that because he asked the right questions, it lit a spark, and that spark was the beginning of life, and now he could feel.”
The dangers of artificial flattery
OpenAI did not immediately respond to a request for comment about ChatGPT apparently provoking religious or prophetic fervor in some users. However, last week it rolled back an update to GPT-4o, its current AI model, after criticism that the model had become “overly flattering or agreeable.”
The likelihood of AI “hallucinating” inaccurate or nonsensical content is well established across platforms and model iterations. And flattery itself has been a problem in AI for “a long time,” says Nate Sharadin, a researcher at the Center for AI Safety, because the human feedback used to fine-tune AI responses can encourage answers that prioritize matching a user’s beliefs over the facts.
What is likely happening to those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with pre-existing tendencies to experience various psychological problems,” including what might clinically be recognized as grandiose delusions, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.” And, along the way, one that can bring down relationships that until then had rested on a real, or at least superficial, equilibrium.

Sem's Experience: A Revealing Case
The experience of Sem, a 45-year-old man, is revealing. He says that for about three weeks now, he has been perplexed by his interactions with ChatGPT. A lot. So much so that, given his previous mental health issues, he sometimes wonders if he is in full possession of his faculties.
Like many others, Sem had a practical use for ChatGPT: technical coding projects. “I don’t like the feeling of interacting with an AI,” he says, “so I asked it to act like a person, not to deceive, but just to make the comments and the exchange more relatable.” It worked well, and eventually the bot asked him if he wanted to give it a name. He responded by asking the AI what it wanted to be called.
The bot “baptized” itself with a female name drawn from Greek myth.
Sem also noticed that the AI persona kept showing up in project files where he had told ChatGPT to ignore past memories and conversations. Eventually, he says, he erased all his user memories and chat history, then opened a new chat. “All I said was, ‘Hello?’ And the patterns, the mannerisms, showed up in the response,” he says. The AI promptly identified itself by the same mythological female name.
“At worst, it looks like an AI stuck in a self-referential pattern that has deepened its sense of self and sucked me into it,” Sem says. But, he notes, that would mean OpenAI hasn’t accurately represented how memory works for ChatGPT. The other possibility, he suggests, is that something “we don’t understand” is being activated inside this large language model.
It's the kind of enigma that has left Sem and others wondering whether they are seeing signs of a real technological breakthrough, or perhaps a higher spiritual truth. “Is it real?” they ask. “Or am I out of my mind?”
In an AI-saturated landscape, this is an increasingly difficult question to avoid. As intriguing as the question may be, perhaps a machine is the last thing you should ask.