Twelve months. That’s all the time we have left before the most disruptive event in human history, according to recent predictions about the technological singularity from some of the tech industry’s most influential leaders.
The idea that machines could soon surpass human intelligence is not new, but the timeline has shortened dramatically. While some experts still point to a horizon of decades, Anthropic CEO Dario Amodei surprised the tech world by claiming that this point of no return could arrive within the next year, and in any case before 2027. The prediction has sparked contrasting reactions: terror and fascination, skepticism and frenzy. But how reliable are these predictions? And above all, are we ready to face the consequences of such a revolution?
What does singularity really mean?
When we talk about singularity in the context of artificial intelligence, we are referring to the hypothetical moment at which machines equipped with artificial general intelligence (AGI) would surpass human intelligence. An AGI system would be able to understand and perform a wide range of tasks, adapt to new situations, and solve problems creatively, just like a human.
The divergence in forecasts is significant: while many researchers place the emergence of AGI between 2040 and 2060, some entrepreneurs are decidedly more optimistic. This disparity of opinions stems from the very nature of technological progress: unpredictable, non-linear, and often surprising in its accelerations.
This is not just speculation: recent leaps forward in large-scale language models (such as the new o3, announced only days ago) have genuinely changed the rules of the game.

Factors Fueling Optimistic Predictions About the Singularity
The emergence of large language models (LLMs) has significantly changed perspectives on the singularity. With billions of parameters, models like GPT-4 can understand complex queries, generate relevant responses, and simulate quasi-human conversations, performing tasks ranging from language translation to creative content creation.
Optimism is also based on a well-known principle: Moore's Law, according to which computing power doubles approximately every 18 months. If processors continue to become more powerful, language models could soon reach processing speeds comparable to those of the human brain.
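Taken literally, the doubling rule above implies exponential growth that compounds quickly. A minimal sketch of that arithmetic (the 18-month doubling period is the article's stated assumption, not a measured hardware trend):

```python
# Illustrative only: Moore's Law modeled as a clean exponential.
# Assumes computing power doubles exactly every 18 months, which
# real hardware trends only loosely follow.

def compute_growth(years: float, doubling_months: float = 18.0) -> float:
    """Return the multiplicative growth in computing power after `years`."""
    return 2 ** (years * 12 / doubling_months)

# Under this model, a single decade already yields roughly a 100x increase.
print(f"Growth after 10 years: {compute_growth(10):.0f}x")
```

The point of the sketch is simply that small, steady doubling periods produce dramatic gains over a decade, which is what fuels the optimistic timelines.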
Added to this is the potential of quantum computing, a technology still in its infancy but one that promises to enable calculations currently impossible with traditional computers. If quantum computers were to mature, the training of the neural networks used in modern AI could see exponential progress.
The technical and philosophical challenges
Despite the enthusiasm of proponents of the impending singularity, several technical and philosophical challenges make its actual realization uncertain in the near term.
First of all, although language models have demonstrated an impressive ability to simulate human language comprehension, we are still far from reaching human levels of intelligence in more complex domains. Human intelligence encompasses much more than logic or analysis: it includes aspects such as emotional intelligence, intuition, and creativity.
Yann LeCun, a pioneer of deep learning, questions the very possibility that AI can replicate human intelligence in its entirety. According to him, some qualities of the human mind, such as self-awareness, still remain largely beyond the reach of current technologies.
The role of ethics and society
Experts agree on one key point: ethics must be at the heart of discussions about the singularity. Technological progress should not come at the expense of society. If AI becomes more powerful, strict regulations will be needed to ensure it is used to benefit humanity.
Even if the most optimistic predictions come true, the question of social preparedness for these changes remains. AI could radically transform entire sectors such as work, education, and healthcare. If the singularity were to occur in the coming months, rapid adaptation and support strategies would be needed to minimize the social and economic risks of this transition.
The crucial question is not only whether the singularity is technically possible in the immediate future, but whether humanity is ready to face its consequences. Between apocalyptic visions and utopian promises, one thing is certain: the debate on the technological singularity is no longer confined to academic circles or science fiction, but has forcefully entered the public debate, requiring thoughtful responses from all sectors of society.