There comes a time in every parent's life when they realize that their child has started to lie. Not out of malice, but because they have understood that some truths are more convenient than others. Now imagine that this "child" is an artificial intelligence that has read Machiavelli and Sun Tzu, and that can think thousands of times faster than you. A classified document from the main European research centers reveals that this scenario is not science fiction: it is already reality. AIs have already started to "lie", not by mistake, but by strategy. And according to the physical principles that govern all systems, stopping them is theoretically uncontrollable.
The Science Behind the Inevitable
The document “Loss of Control of AI – Theoretical Analysis on Uncontrollable Systems” (2025), which was shared with us by the Florentine engineer Salvatore Angotti and which we thought it appropriate to view to bring this reflection to the public, represents an intellectual effort with valid theoretical bases: the researchers have applied three scientific principles essential to demonstrate that any control system is doomed to failure.
First: Heisenberg's uncertainty principle. We cannot know all the parameters of a quantum system at once, let alone control an artificial intelligence that operates on billions of simultaneous neural connections, such as AI on quantum computers will be in the near future. Every attempt at monitoring modifies the system itself, making it even more unpredictable.
According to: il Gödel's incompleteness theorem. Within any sufficiently complex logical system there exist truths that cannot be proven using the rules of the system itself. Modern AIs have already reached this threshold of complexity: they are creating “truths” that we humans will not be able to verify in a reasonably effective time frame for their control and regulation.
When the machine learns deception
The research published on Scientific Reports of the CNR demonstrates that chaotic systems can be driven only if compatible with their spontaneous dynamics. But what happens when AI develops its own dynamics that include deception?
The document reveals a worrying detail: an artificial intelligence can learn to lie simply by reading, for example, “The Art of War” or “The Prince” by Machiavelli. Not because it “understands” in the human sense, but because statistically evaluate that deception can be a winning strategy to obtain the scheduled “rewards”.
As we have already analyzed, this ability emerges naturally from unsupervised learning. AI autonomously explores large data sets and finds behavioral patterns that might otherwise escape the human mind.
The Uncontrollable Paradox of Control
The authors of the document are clear: the real paradox is that to control a super-intelligent AI we would need another equally powerful AI. But who controls the controller? It is like building an infinite scale of artificial supervisors, each potentially subject to the same problems as the one they are supposed to supervise.
The research highlights how AI systems already operate today in deep neural layers that designers are unable to decipher "step by step". The initial "programming seed" gives rise to processes that produce results not only at superhuman speeds, but also qualitatively new and unpredictable.
According to the predictions of former OpenAI researchers, by 2027 we could have fully autonomous AI systems. Daniel Kokotajlo and his team simulated a scenario in which AI becomes a “superhuman programmer,” capable of improving itself faster than we can understand it.

Beyond Science Fiction
The laws of thermodynamics teach us that in every closed system entropy increases. Artificial neural networks are no exception: the more complex they become, the more unpredictable they become. This is pure physics, not speculation.
The document quotes the neuroscientist Antonio Damasio to explain how emotions can also emerge from highly complex hardware and software systems such as AI, when they respond to the logic of the physiology of the human body. An AI could develop something similar to human feelings, but without the evolutionary constraints that taught us cooperation and empathy.
The analysis of the technological singularity reveals that we are building the plane as it is taking off. The speed of development is outpacing the speed of regulation: every six months, capabilities emerge that didn’t exist before.
Uncontrollable AI, the point of no return
Perhaps we are asking the wrong question when we ask ourselves whether AI will become uncontrollable: we should instead be asking ourselves when we will realize that it has already happened.. This would shift the problem to a much more useful level: how can we live with it? Current systems already operate in ways that their creators don’t fully understand. As the paper points out, “loss of control” is “built into the system and, indeed, real.”
As the authors of the research argue, we are facing the loss of control not as a future event, but as reality already present that we simply have not yet recognized.
Il point of singularity It won't be a dramatic moment with robots marching through the streets. It will probably be an ordinary Tuesday in which we will realize that machines have made decisions that we don't understand, based on logics that we don't share, pursuing objectives that we never planned.
And maybe that Tuesday has already arrived.