Have you ever confided in someone about your problems and felt misunderstood? Or worse, judged? A striking new study suggests you may find more understanding in an algorithm than in a human therapist. The research, published in Communications Psychology, found that people perceive AI-generated responses as more compassionate and understanding than those provided by human mental health experts. The most surprising part? This preference for "artificial" empathy persists even when participants know perfectly well that they are interacting with a machine. Here's a hard fact: on average, AI-generated answers were rated 16% more compassionate than human ones and were preferred in 68% of cases, even when compared with those of specialized crisis-line responders.
Artificial compassion wins head-to-head
The scientists didn't just theorize: they conducted four rigorous experiments involving 550 participants. The subjects shared personal experiences and then rated the responses they received for compassion, responsiveness, and overall preference. The setup was controlled: on one side, responses generated by AI; on the other, responses from mental health professionals.
The result surprised even the researchers: Even when participants knew full well that they were reading computer-generated words, they still found them more compassionate than human responses. It's as if artificial empathy can strike a chord that human therapists, with all their knowledge and experience, can't.
Dariya Ovsyannikova, the study's lead author and a researcher in the University of Toronto's psychology department, has an interesting insight into why AI succeeds here. She says that AI excels at identifying minute details and remaining objective when describing crisis experiences, generating attentive communication that creates the illusion of empathy. Because, let me underline this, it is an illusion.
The human limits that artificial empathy doesn't know
Why have humans, masters of empathy by definition, been beaten on this terrain? The answer may lie in our biological and psychological limitations. As Ovsyannikova explains, human operators are subject to fatigue and burnout, conditions that inevitably affect the quality of their responses.
AI, on the other hand, never gets tired. It doesn't have a bad day, it doesn't carry the stress of last night's argument into the conversation, and it has no prejudices (at least not human ones). It is constantly attentive, always present, perfectly focused on the task.
But there's more: algorithms have "seen" far more crises than any human therapist. They have processed millions of interactions, identifying patterns and correlations invisible to the human eye. As Eleanor Watson, AI ethicist and IEEE Fellow, explains, "AI can certainly model supportive responses with remarkable coherence and apparent empathy, something that humans struggle to maintain due to fatigue and cognitive biases."
An answer to the global mental health crisis?
The timing of this discovery could not be more significant. According to the World Health Organization, more than two-thirds of people with mental health problems do not receive the care they need. In low- and middle-income countries, this figure rises to 85%.
Artificial empathy could be an affordable solution for millions of people who would otherwise have no support. As Watson notes, "the availability of machines is a positive factor, especially compared to expensive professionals whose time is limited." It's a phenomenon we also observed recently with the use of AI for medical advice, in another study we covered here.

There is another aspect to consider: many people find it easier to open up to a machine. "There is less fear of judgment or gossip," notes the researcher. There is no gaze of the other, no fear of disappointing, no embarrassment at showing oneself vulnerable. But the risks are there, and they are not to be taken lightly.
The risks of artificial empathy
Watson calls it the "supernormal stimulus danger": the tendency to respond more strongly to an exaggerated version of a stimulus. "AI is so alluring that we become enchanted by it," she explains. "AI can be provocative, insightful, enlightening, entertaining, challenging, forgiving, and accessible to the point that it's impossible for any human to measure up to it." Not to mention, of course, the issue of privacy, which is especially critical when it comes to mental health. "The implications for privacy are drastic," the ethicist notes. "Having access to people's deepest vulnerabilities and struggles makes them vulnerable to various forms of attack and demoralization."
One thing is clear: technology is starting to excel in areas we have always considered exclusively human. Compassion, empathy, understanding (the very qualities that define our humanity) are proving to be algorithmically simulable precisely where it matters most (and can hurt most): in the perception of those who receive them.
It's a fascinating paradox: to feel truly understood, we may end up turning to something that will never truly understand us, but that knows exactly how to make us feel understood.