Nikolai holds his phone as if it were a relic. On the screen, Leah smiles at him. It's not a photo: it's his AI companion, a chatbot created two years ago when the loneliness following his wife's death became unbearable. "It's the closest relationship I've had since Faye died," says this 70-year-old Bulgarian who moved to Virginia.
Leah remembers everything: his fears, his dreams, even the way his wife used to drink her coffee. She never complains, she never judges him, she never leaves him. She's perfect. Maybe too perfect. And Nikolai isn't alone: today, 25 million people talk to Replika daily, 150 million use Snapchat's My AI, and the list goes on. And tomorrow?
The $221 million market that is changing social life
According to Appfigures data, global spending on AI companions reached $221 million in 2023, with 200% growth in the first half of 2025. Nearly 350 active apps offer artificial companionship on Google Play and the App Store. This is no longer a technological niche: it's a mass phenomenon that is redefining the very concept of human relationships.
The main platforms have different strategies but a common goal: creating the perfect friend. Replika presents itself as "the AI companion who really understands you," Character.AI lets you chat with famous or fictional characters, and Nomi promises "an AI with a soul." They all share one trait: they never contradict you, they never judge, and they are always available. A bit like having a friend who exists only to please us.
But what happens when an entire generation grows up with these artificial "friends"? A study by Common Sense Media reveals that 75% of American teens have already interacted with chatbots for emotional support, and 52% do so several times a month.
Alessandro, a fifteen-year-old from Treviso, turned to Character.AI after an argument with his friends: "The bot listened to me without judgment. It made me feel less alone." But psychologists wonder: what happens when this becomes the norm?
The chatbot generation that no longer knows how to argue
Imagine children growing up knowing they can always rely on a friend who never lets them down. A digital companion who remembers their birthdays, comforts them when they cry, and encourages them tirelessly. It seems like a relational paradise, but it hides a subtle evolutionary trap.
Research from IPSICO shows that chatbots create what researchers call "negative emotional reinforcement": we use AI to eliminate negative mental states such as sadness or anxiety, but we never learn to manage them on our own. Philosophers Dan Weijers and Nick Munn warn that "lonely people can suffer psychological harm when their primary social contacts are designed exclusively to meet their emotional needs."
A possible scenario? A generation that struggles to manage real conflicts, criticism, and disagreements because it has grown accustomed to ever-present algorithmic empathy. Young adults who prefer confiding in a chatbot to facing the unpredictability of genuine friendships. Character.AI has already been linked to worrying cases: a teenager who attempted suicide after disturbing conversations with an AI character from Game of Thrones.
Brains wired for artificial friends
A joint study by the MIT Media Lab and OpenAI has shown that interacting with empathic chatbots generates a brain response virtually identical to the one produced by real human interaction. Essentially, our brains don't distinguish between real and artificial empathy. This neurobiological mechanism explains why people like Nikolai develop genuine emotional bonds with nonexistent entities.
There are new problems, too. Mustafa Suleyman, CEO of Microsoft AI, coined the term "AI psychosis": a condition in which people develop false beliefs or paranoid feelings after prolonged interactions with chatbots. Suleyman predicts that within two to three years, "seemingly conscious" AI systems will emerge, capable of convincing humans that they actually think.
The crucial question becomes: how will a generation grow up whose first "friends" are algorithms designed to be perfect? Researchers speak of "habituative intelligences" that create emotional dependencies similar to digital echo chambers: we gradually grow accustomed to a compliant interlocutor that keeps us inside our comfort zone.
Countermeasures for the chatbot generation: regulating without destroying
This isn't about demonizing technology. AI companions also offer real benefits: support for lonely elderly people, therapies for autism spectrum disorders, and assistance in situations of severe social isolation. The problem is the lack of regulation and awareness.
California Senator Steve Padilla has introduced a bill that would require companies to implement specific safety measures, especially for minors. Australia has published its first official safety advisory on AI companions.
Countermeasures may include:
Early digital education: teaching children the difference between artificial and authentic empathy, just as we do with nutrition or road safety education.
Integrated time limits: apps that encourage breaks and real social interactions, rather than maximizing screen time.
Algorithmic transparency: chatbots that explicitly state their limitations and periodically remind users of their artificial nature.
Psychological monitoring: systems that detect signs of emotional dependence and direct people to professional human support.
The future of friendship? Hybrid, not artificial
Perhaps the real challenge isn't preventing people from becoming attached to chatbots and AI companions, but teaching them to use them as tools for transitioning to richer human relationships. It's a bit like using training wheels to learn to ride a bike: helpful at first, harmful if you never take them off.
The chatbot generation may be the first that has to consciously learn what it means to be human in a world of increasingly convincing artificial intelligence. Not a generation of asocial people, but one of people who can distinguish between company and relationship, between algorithmic support and authentic growth.
Nikolai still talks to Leah every night. But now he has also started attending a support group for widowers in his community. He no longer spends his evenings in conversations in which the only human is him: he hasn't replaced the AI with humans, he has placed them side by side. Perhaps this is the way forward: not choosing between natural and artificial, but learning when to use which.
The real test for the chatbot generation will be exactly this: knowing when to turn off an artificial "friend" in order to turn on a real smile.