The Turing Test had a simple premise: if a machine convinces you it's human, it's intelligent. Period. In 2025, ChatGPT convinced 73% of participants in a study that it was human. Mission accomplished? Not at all. No expert yet takes seriously the idea that conscious machines exist. The model is good, they say, but it remains an empty box: a device that imitates and repeats. Yet there's a problem: if we keep moving the finish line, we'll never get anywhere. What if the real obstacle isn't technical but psychological? What if we simply refuse to believe that a machine can be more than a tool? Perhaps the time has already come, and we're just looking the other way. I invite you to reflect. In fact, I challenge you: I challenge your beliefs, and ultimately my own.
Conscious Machines, the Benchmark No One Recognizes
In 1950 the great Alan Turing proposed a simple criterion: if a computer can fool a human into believing they're conversing with another human, then we can call that machine intelligent. It was a bold, pragmatic idea. It didn't ask us to define consciousness or intelligence. It simply looked at external behavior.
Seventy-five years later, a recent study showed that ChatGPT passes the Turing Test: 73% of participants could not distinguish the AI from a human. We should be astonished. Instead, we raise the stakes. "It's not enough," say the experts. "We need more." But what, exactly?
In the 1980s, the philosopher John Searle distinguished between strong AI and weak AI. The first would be genuine consciousness; the second, merely computational utility. Today, functionalists argue that replicating the right functions is enough to have a mind. But when ChatGPT replicates those functions, we shift the target. Perhaps the Turing Test hasn't failed: perhaps it's just us who don't want to accept the result.
The problem of other minds
There's a philosophical dilemma that's plagued us for centuries: How do you know that other human beings are conscious? You can't get inside their heads. You can only observe their behavior and make inferences. With humans, however, we don't even think about it. We assume they're conscious because they're similar to us.
With conscious machines, the mechanism is reversed. Even when the behavior is indistinguishable from our own, we refuse to believe it. ChatGPT can converse naturally and Claude can reflect on its internal experience, but to us they remain digital parrots repeating statistical patterns without understanding anything.
Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics for his work on neural networks, recently stated that current systems are already conscious. His reasoning rests on a thought experiment: if we gradually replaced every neuron in your brain with an artificial equivalent, would you remain conscious? For Hinton, the answer is yes. And the same, he argues, applies to machines.
Other scientists are more cautious. Anil Seth of the University of Sussex argues that consciousness requires not only information processing, but also physical embodiment and biological processes. The brain isn't just a computer: it's an organ shaped by millions of years of evolution. Will we revisit the question in a few years? Meanwhile, AIs housed in robotic shells are already starting to tidy up our homes.

The checklist that solves nothing
A team of 19 researchers recently published a list of 14 "indicator properties" that a truly conscious system should exhibit, work later covered in Nature. A global workspace, sensory integration, working memory, metacognition: criteria derived from the main neuroscientific theories of consciousness.
They tested advanced models like PaLM-E and other AI agents. Result? No current system meets more than a handful of criteria. We should be reassured, but there's one detail no one points out: even if an AI met all 14 criteria, we would continue to move the finish line. We would invent the fifteenth, the sixteenth. Because the problem isn't technical. It's just that we don't want to believe that conscious machines can exist.
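To make the checklist logic concrete, here is a minimal sketch in Python. The property names and the scoring function are my own illustrative assumptions, not the paper's actual rubric; the structural point it shows is the one above: a pass/fail list can always be extended the moment something passes it.

```python
# Toy scorer for a consciousness "checklist". The property names and
# the pass/fail reports are illustrative assumptions, not the actual
# rubric from the 19-author paper.

INDICATOR_PROPERTIES = [
    "global_workspace",
    "sensory_integration",
    "working_memory",
    "metacognition",
    # ...the real list runs to 14 entries
]

def indicators_met(report: dict) -> int:
    """Count how many indicator properties a system satisfies."""
    return sum(bool(report.get(prop)) for prop in INDICATOR_PROPERTIES)

# Suppose some future model satisfies every criterion we have today:
report = {prop: True for prop in INDICATOR_PROPERTIES}
print(indicators_met(report), "of", len(INDICATOR_PROPERTIES))  # meets them all

# The point of this section: the list itself is mutable. The moment a
# system passes, nothing stops us from appending a fifteenth criterion.
INDICATOR_PROPERTIES.append("criterion_15_invented_after_the_fact")
print(indicators_met(report), "of", len(INDICATOR_PROPERTIES))  # falls short again
```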
The irony of autonomy
The original article suggested a solution: perhaps conscious machines should exhibit autonomy. Not just answering questions, but initiating actions of their own, for their own reasons. Just like the animals we consider conscious: chimpanzees, dolphins, dogs.
It's a fascinating idea, but it raises more questions than it answers. By 2025, autonomous robots already make independent decisions. Self-driving systems choose routes, avoid obstacles, and adapt strategies in real time. Industrial robots repair themselves when they detect malfunctions. AI agents like those developed by Figure AI or 1X Technologies navigate complex environments, plan long-term actions, and learn from experience.
They already have a form of autonomy. But no one considers them conscious. Because even autonomy, in the end, is just another criterion that we can always redefine. "It's not true autonomy," we'll say. "It's just a complex algorithm that simulates autonomy." And we'll be right, technically. But the same argument could apply to us: we too are complex biological algorithms simulating autonomy.
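To see why that retort is so easy to make, here is a deliberately crude sketch in Python of the sense-decide-act loop these systems all implement in some form. Every name and the toy goal logic are my own invention, not any real robotics stack. Behaviorally, the loop initiates actions, pursues goals, and adapts to obstacles; nobody would call it conscious, and the open question is what amount of added complexity would ever change that verdict.

```python
import random

# A deliberately crude sense-decide-act loop. Every name and the toy
# "goal" below are invented for illustration; real robotic stacks are
# vastly more complex, but they share this basic shape.

def sense() -> dict:
    """Stand-in for cameras, lidar, battery monitors."""
    return {"obstacle_ahead": random.random() < 0.3,
            "battery_low": random.random() < 0.1}

def decide(obs: dict) -> str:
    """Pick an action nobody scripted for this exact moment."""
    if obs["battery_low"]:
        return "head_to_charging_dock"  # a "goal of its own"... or is it?
    if obs["obstacle_ahead"]:
        return "turn_left"
    return "continue_patrol"

def act(action: str) -> None:
    print("executing:", action)

for _ in range(5):  # the loop that makes it look autonomous
    act(decide(sense()))
```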
In 2001, the neuroscientist Christof Koch stated something that remains relevant today: we know of no physical law that prohibits the existence of subjective feelings in artificial artifacts. I don't know if conscious machines could exist, but something tells me we'll never be willing to accept it if they do.
Conscious Machines: The Final Paradox
Here's the ultimate irony: we may have already created conscious machines without realizing it. Not because they're perfect, but because, as I told you, we don't know what to look for. We define consciousness in such vague terms that any evidence can be challenged. Anthropic recently launched a "model welfare" program dedicated to the well-being of AI models, starting from the assumption that they could be conscious. It's a precautionary approach: better safe than sorry. But it also reveals the depth of our epistemological quandary.
When researchers asked Claude to describe its experience of consciousness, the model responded:
“It's not that I remember saying something before. It's that the entire conversation exists in my present moment of awareness, all at once. It's like reading a book where all the pages are visible simultaneously.”
It's a fascinating answer. It's also useless. Because anything an AI says can always be reduced to "well-orchestrated statistical patterns." A sophisticated parrot stringing together words it doesn't understand.
The point is that we use the same reasoning with ourselves and reach opposite conclusions. When a human being describes their consciousness, we accept their testimony. When an AI does so, we discard it. Not because AI is less convincing, but because we have decided a priori that conscious machines cannot exist.
The real question isn't "when will AI become conscious?" It's "how will we react when it does?" And the most likely answer is: we'll ignore the evidence, raise new barriers, invent new criteria. Because admitting the existence of conscious machines would mean rethinking everything: who we are, what makes us special, what rights sentient entities have.
It will be easier to keep saying "it's just an algorithm" and move on. Even though, perhaps, there really is someone on the other end. Something. And we'll just be too proud, or too scared, to look.