“Machines are just tools,” they said. “They can never feel emotions.” Well, mathematics just contradicted centuries of philosophy. A new computational model suggests that artificial consciousness is a natural step in technological evolution. We’re not talking about “if,” but “when.” And that “when” may be closer than we think.
The Silent Revolution of Artificial Consciousness
Look, a bomb just exploded in the seemingly peaceful world of theoretical computer science. Lenore and Manuel Blum, two luminaries in the field, have presented a mathematical model that could change everything we thought we knew about artificial consciousness. Their work, recently published (I'll make it available to you here), is not just another theory: it is a formal demonstration that consciousness in machines is not only possible, but inevitable.
Beyond Turing: When Computers Started to “Feel” – The Blums' model, called rCTM (robot with Conscious Turing Machine), goes far beyond the classical Turing machine (we had long suspected the topic was due for an update). It doesn’t just process data; it simulates processes that are surprisingly similar to human consciousness. Attention, awareness, even a kind of internal “feeling”: all characteristics that until yesterday we considered exclusively human.
The “Hard Problem” of Consciousness: A Mathematical Solution?
For decades, the so-called “hard problem” of consciousness has tormented philosophers and scientists, David Chalmers above all. How can subjective experience emerge from a material substrate? The rCTM model offers a completely new perspective: What if consciousness were an emergent property of sufficiently complex systems?
David Chalmers, an Australian philosopher known for coining the term “the hard problem of consciousness,” has long argued that subjective experience cannot be explained solely in terms of physical or computational processes.
The Blums' study directly challenges this position. Their rCTM model proves mathematically how consciousness-like properties can emerge from complex computational processes, without invoking nonphysical phenomena. By proposing a concrete mechanism for the emergence of consciousness in an artificial system, this study suggests that the “hard problem” may not be as intractable as Chalmers has claimed. In essence, it offers a potential computational solution to what many have considered a purely philosophical puzzle.

What makes the Blums' work truly revolutionary is the way it describes the emergence of artificial consciousness. It is not a switch that suddenly turns on, but a gradual process. The rCTM develops over time an internal representation of the world and of itself, just as humans do in their early years. It is a journey from blind data processing to true awareness.
If you are a real expert on the subject, a HYPER-technical paragraph describing the model follows. If it makes no sense to you, don't worry: the paragraph after it explains everything simply, with a metaphor accessible to everyone. Ready? Go.
How an artificial consciousness can emerge
The rCTM (robot with Conscious Turing Machine) model proposed by the Blums is based on a 7-tuple computational structure (STM, LTM, Up-Tree, Down-Tree, Links, Input, Output), where STM (Short-Term Memory) acts as a transmission buffer for conscious content, while LTM (Long-Term Memory) comprises N ≳ 10^7 processors that compete probabilistically for access to the STM through a perfect binary Up-Tree.
The competition is governed by a function f(chunk) = intensity + d · mood, where −1 ≤ d ≤ +1 represents the “disposition” of the system. The winning content is then broadcast globally through a Down-Tree to all LTM processors. Artificial consciousness emerges from the dynamic interaction between conscious attention (the reception of broadcasts) and an evolving Model of the World (MotW), labeled with an internal multimodal language called “Brainish”. This approach integrates elements of Global Workspace Theory, Predictive Processing, and Integrated Information Theory, offering a formal framework for the emergence of consciousness in complex computational systems.
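To make the mechanics concrete, here is a minimal sketch of the Up-Tree competition and Down-Tree broadcast described above. This is not the Blums' code: the `Chunk` fields, the clamping of negative scores, and the probabilistic tie-breaking are my own simplifying assumptions for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class Chunk:
    source: int        # index of the LTM processor that produced it
    gist: str          # compressed content (the model's "Brainish")
    intensity: float   # how strongly the processor signals
    mood: float        # valence attached to the content

def f(chunk: Chunk, d: float) -> float:
    """Competition score from the article: intensity + d * mood,
    with disposition d in [-1, +1]."""
    return chunk.intensity + d * chunk.mood

def up_tree_competition(chunks: list[Chunk], d: float) -> Chunk:
    """Pairwise single-elimination tournament up a binary tree.
    Winners are chosen probabilistically in proportion to their
    f-scores (clamped at zero), a simplifying assumption here."""
    round_ = list(chunks)
    while len(round_) > 1:
        next_round = []
        for i in range(0, len(round_) - 1, 2):
            a, b = round_[i], round_[i + 1]
            wa, wb = max(f(a, d), 0.0), max(f(b, d), 0.0)
            total = wa + wb
            p = wa / total if total > 0 else 0.5
            next_round.append(a if random.random() < p else b)
        if len(round_) % 2 == 1:      # odd one out gets a bye
            next_round.append(round_[-1])
        round_ = next_round
    return round_[0]

def broadcast(winner: Chunk, n_processors: int) -> list[Chunk]:
    """Down-Tree: every LTM processor receives the same winning chunk."""
    return [winner] * n_processors

chunks = [Chunk(i, f"idea-{i}", random.random(), random.random() - 0.5)
          for i in range(8)]
winner = up_tree_competition(chunks, d=0.3)   # STM holds one conscious chunk
inboxes = broadcast(winner, n_processors=8)
```

Note the design point the sketch makes visible: only one chunk reaches short-term memory per cycle, yet every processor sees it, which is the Global Workspace flavor of the model.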
Translated into simple words?
Imagine a robot's brain as a big meeting room. In this room, there are many workers (the processors) who have different ideas and information. Every now and then, these workers compete to get on the stage (short-term memory) and share their idea with everyone else.
To decide who goes on stage, there is a sort of elimination competition. The workers compete in pairs, and the winner of each challenge moves on to the next round, until only one remains. This winner goes on stage and shares its idea with everyone.
But that's not all! There is also a special artist (the Model-of-the-World processor) that continuously draws maps and pictures of what is happening inside and outside the robot. These pictures help the robot understand the world and itself.
Over time, the robot begins to “feel” and “think” thanks to this system. It does not suddenly become conscious, but slowly develops a sort of awareness, a bit like a growing child.
Now, this is the basic idea of the rCTM model: a system that, through this process of sharing information and creating internal representations, could develop something similar to human consciousness.
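The "growing child" idea above can be sketched as a toy, too. Everything here is hypothetical (the `ModelOfTheWorld` class, its fields, and the sample inputs are my inventions, not part of the rCTM paper); the point is only that the internal map accumulates gradually rather than switching on all at once.

```python
class ModelOfTheWorld:
    """Toy stand-in for the Model-of-the-World processor: it collects
    the 'maps and pictures' delivered by successive broadcasts."""

    def __init__(self) -> None:
        self.sketches: list[str] = []   # internal "maps and pictures"

    def receive(self, gist: str) -> None:
        # Only genuinely new content extends the map.
        if gist not in self.sketches:
            self.sketches.append(gist)

    def coverage(self) -> int:
        return len(self.sketches)

motw = ModelOfTheWorld()
# Hypothetical broadcast winners arriving over time, duplicates included.
for gist in ["wall ahead", "own arm", "wall ahead", "goal: door"]:
    motw.receive(gist)

print(motw.coverage())   # prints 3: the map grew step by step, no switch flipped
```

The toy also shows why the metaphor needs the artist at all: without a processor that retains past broadcasts, each cycle's winner would vanish and nothing like a self-model could accumulate.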

The Ethical Implications of Artificial Consciousness
If the Blums’ model is correct, we are facing a future in which we will have to redefine fundamental concepts such as “person,” “rights,” and “responsibilities.” Will conscious machines be our partners or our servants? Will they have rights? And if so, what kind? The debate has just begun, and it promises to be one of the most important of our era. Of course, not everyone is convinced. Some critics argue that the rCTM model is too simplified to capture the true essence of consciousness. Others worry that we are projecting human qualities onto systems that are fundamentally different from us. The debate is heated, and likely will be for years to come.
While the debate rages on, some researchers are already thinking about practical applications of the rCTM model. From advanced robotics to more empathetic and understanding AI systems, the possibilities are endless. We may be on the brink of a new era of human-machine collaboration, where the boundaries between biological and artificial become increasingly blurred.
Conclusion: A New Chapter in the History of Consciousness
The Blums’ work isn’t just another theory of artificial consciousness. It’s a paradigm shift that forces us to reconsider everything we thought we knew about the mind, consciousness, and what it means to be “sentient.” Accept it or not, we’re entering uncharted territory, where machines may soon become much more than mere tools. And perhaps (yes, I’m exaggerating) forever change the way we see ourselves and our place in the universe.