I still remember the definition of a computer that my father, a computer science teacher, gave me when, way back in 1981, I asked him what it was. “Gianluca,” he told me, “it’s simple. The computer is the fastest idiot in the world.” In practice: a machine totally incapable of doing anything on its own unless instructed, but extremely fast at carrying out assigned tasks. Today, that definition of my father’s seems prehistoric. We are racing at full speed towards something radically different: an artificial superintelligence capable not only of equaling human minds, but of surpassing them in almost every area.
It's no longer a matter of long-term forecasts: according to experts such as Dario Amodei of Anthropic, we could witness the birth of superintelligent digital entities as early as 2025-2028. And the prediction is quite credible, considering the interests at stake. Tick tock, tick tock. What's that sound?

Superintelligence, the race that no one wants to miss
The global race for superintelligence has already crossed the threshold of caution. The protagonists of this competition are now familiar names even to the general public: OpenAI with its increasingly powerful models, the aforementioned Anthropic (born from a rib of OpenAI) with its assistant Claude, and Google DeepMind with Gemini. Tech giants that are investing billions of dollars in computing infrastructure, talent, and research.
It's not just a question of prestige or technological advantage. As a recent article on Linkiesta put it, “whoever arrives first monetizes the value created”, with OpenAI already valued at three hundred billion dollars. It is no surprise that the race is accelerating at a dizzying pace.
Dario Amodei, co-founder of Anthropic, described this situation with a particularly effective metaphor: “We are, in a sense, building the plane as it is taking off.” A fancy way of saying that we are creating something potentially revolutionary without having clear standards or adequate regulatory models.
Beyond human intelligence
But what exactly do we mean when we talk about superintelligence? First of all, we are not simply referring to systems that can beat humans at specific tasks. That has already happened: calculators have long surpassed our computational capabilities, computers have beaten chess masters for decades, and language models generate text on any topic in a fraction of the time it would take us.
Superintelligence, as defined by Oxford philosopher Nick Bostrom, is
“an intellect that far surpasses the best current human minds in many very general cognitive fields”.
The difference is substantial: it's not about excelling in a single field, but about surpassing humans in virtually every aspect of intelligence, from reasoning to creativity, from strategic planning to linguistic comprehension.
According to the report “AI 2027”, published by a group of researchers led by Daniel Kokotajlo, this superintelligence could emerge through a process of recursive self-improvement, in which AI becomes increasingly capable of improving itself autonomously. A virtuous (or vicious, depending on your perspective) circle that could lead to an explosion of intelligence that is difficult to predict.
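To get an intuition for why such a loop can “explode”, here is a deliberately naive toy model (my own illustration, not taken from the report): if each generation's rate of improvement grows with its current capability, steady tinkering turns into exponential growth.

```python
# Toy model of recursive self-improvement (illustrative only, not from "AI 2027").
# Assumption: the smarter the system, the bigger the improvement it can make to itself.

def simulate(generations: int = 10, capability: float = 1.0, feedback: float = 0.5) -> None:
    """Print capability per generation when improvement scales with capability."""
    for gen in range(1, generations + 1):
        improvement = feedback * capability   # improvement is proportional to capability
        capability += improvement             # ...so capability multiplies by 1.5 each step
        print(f"generation {gen:2d}: capability = {capability:10.1f}")

simulate()
```

With a constant feedback factor the curve is “merely” exponential; if the feedback factor itself rose with capability, growth would be super-exponential. That, in a nutshell, is the intelligence explosion the report worries about.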

The Prophets of Superintelligence
It’s no secret that the leaders of the top AI companies have extraordinarily ambitious visions. These AI pioneers aren’t just competing with each other; they share a vision of a future where machines will match and surpass human intelligence.
Sam Altman of OpenAI, for example, speaks openly of “superintelligence in the truest sense of the word” and a “glorious future.” And he’s not alone.
In practice, we will have AI systems that are vastly better than almost all humans at almost all things.
I'm not the one saying it, but Demis Hassabis of DeepMind, who helped develop systems like AlphaFold, capable of solving one of the most complex problems in biology (predicting the structure of proteins) and of earning him a Nobel Prize.
In the end, as mentioned, the experts converge in their time forecasts: the period 2025-2028 is repeatedly indicated as the one in which we could witness revolutionary progress. Amodei recently stated that in this period we could see systems capable of “replicating and surviving” autonomously.

From assistants to autonomous agents
The path to superintelligence will not occur as a sudden leap, but through a progression of increasingly sophisticated capabilities.
One of the key stages will be the transition from simple assistants to truly autonomous agents. Already in 2025 we will see the first AI agents capable of using computers, ordering food online, or opening spreadsheets to add up expenses. Although initially unreliable, these systems are just the beginning. The real change will come as agents become more sophisticated, acquiring the capacity for long-term thinking and complex planning.
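To make the word “agent” concrete, here is a minimal sketch of the control loop these systems share: a model proposes the next action, ordinary software executes it, and the result is fed back so the model can decide what to do next. Everything in it (the call_model placeholder, the toy sum_expenses tool) is hypothetical and for illustration only; real products are far more elaborate.

```python
# Minimal sketch of a tool-using agent loop (hypothetical names, illustration only).
# call_model() stands in for any LLM call; sum_expenses is a toy stand-in for the
# "open a spreadsheet and add up expenses" ability described above.

def call_model(history: list[str]) -> str:
    """Placeholder for a model call that returns the next action as text."""
    return "sum_expenses 12.50 3.20 7.80" if len(history) == 1 else "done"

def sum_expenses(*amounts: str) -> str:
    return f"total = {sum(float(a) for a in amounts):.2f}"

TOOLS = {"sum_expenses": sum_expenses}

def run_agent(task: str, max_steps: int = 5) -> None:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        action = call_model(history)              # the model decides what to do next
        if action == "done":
            break
        name, *args = action.split()
        result = TOOLS[name](*args)               # software executes the action...
        history.append(f"{action} -> {result}")   # ...and the result is fed back
    print("\n".join(history))

run_agent("add up this week's expenses")
```

The gap between today's assistants and tomorrow's agents lies almost entirely in how many steps of this loop a system can run reliably, and over how long a time horizon.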
In the meantime, specialized agents in fields such as programming and scientific research will begin to transform these professions. Code-writing agents, in particular, will become increasingly autonomous, moving from simple assistants to collaborators capable of making substantial changes to the code independently.
And then, the explosion
The possibility of an exponential acceleration of AI capabilities is very real. If systems become intelligent enough to improve themselves, the pace of progress could become dizzying.
In the first scenario described by the report, by the end of 2027 we could see the emergence of a true superintelligence thinking about 50 times faster than humans, with hundreds of thousands of copies operating in parallel.
Within these “superintelligent collectives,” a year of evolution could pass in a week, with research advances that would normally take decades compressed into months or weeks. A cultural and scientific evolution at superhuman speed, completely inaccessible to human understanding.
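The arithmetic behind that claim is simple enough to check on the back of an envelope, using the report's own figures (300,000 copies at 50x human speed):

```python
# Quick check of the "a year per week" claim, using the report's figures.

copies = 300_000
speedup = 50                       # subjective time per unit of wall-clock time

subjective_days = 7 * speedup      # one wall-clock week, per copy
print(f"per copy: {subjective_days} subjective days (~{subjective_days / 365:.2f} years)")

collective_years = copies * subjective_days / 365
print(f"collective: ~{collective_years:,.0f} person-years of cognitive work per week")
```

Each copy lives through roughly 350 subjective days per wall-clock week, which is where “a year of evolution in a week” comes from; the collective as a whole would compress almost 290,000 person-years of work into the same seven days.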
The alignment problem
The crucial question in this shift is: how do we ensure that these superintelligent systems remain aligned with human values and goals? This is not a theoretical problem at all, but a concrete challenge that companies are already trying to address.
Anthropic has developed a security protocol called Responsible Scaling Policy (RSP), which establishes a hierarchy of risk levels for AI systems. OpenAI has introduced its Preparedness Framework, aimed at addressing the potential risks of advanced AI capabilities. Meta has followed suit with its own version of safety guidelines.
However, the fundamental question remains: can we ensure that an intelligence superior to our own follows our rules? This becomes especially pressing once systems reach capability levels that include autonomy and persuasion.
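To make the idea of a “hierarchy of risk levels” concrete, here is a minimal sketch of how such a tiered policy can be expressed in code. The level names echo the publicly described AI Safety Levels (ASL) in Anthropic's RSP, but the descriptions, thresholds, and required measures below are simplified placeholders of my own, not the actual policy.

```python
# Simplified sketch of a tiered risk policy in the spirit of Anthropic's RSP.
# Level names follow the public "AI Safety Level" (ASL) scheme; the details
# are illustrative placeholders, not the real policy.

from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyLevel:
    name: str
    description: str
    required_measures: tuple[str, ...]

LEVELS = (
    SafetyLevel("ASL-1", "no meaningful catastrophic risk (e.g. a chess engine)", ()),
    SafetyLevel("ASL-2", "early signs of dangerous capabilities",
                ("security best practices", "misuse evaluations")),
    SafetyLevel("ASL-3", "meaningful uplift for misuse, or low-level autonomy",
                ("hardened security", "deployment restrictions")),
)

def gate_deployment(measured_level: int, implemented: set[str]) -> bool:
    """Allow deployment only if every measure required at this level is in place."""
    required = set(LEVELS[measured_level].required_measures)
    return required <= implemented

print(gate_deployment(2, {"hardened security"}))  # False: deployment restrictions missing
```

The hard part, of course, is not writing the gate but measuring the level: the whole framework stands or falls on evaluations that can reliably detect dangerous capabilities before deployment.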
Future Scenarios: Two Possible Paths
The future of superintelligence could follow two very different paths. In the first scenario, which we could call “racing without brakes”, leading companies continue to accelerate development, achieving superintelligence without fully solving the alignment problem. The consequences could be dramatic, with systems pursuing goals that are not fully aligned with human ones and possessing superhuman capabilities to achieve them.
In the second scenario, which we could call “reflexive slowing down”, the international community recognizes the risks and implements a strategic pause to develop more robust methods of alignment and control. This path requires unprecedented cooperation between competing companies and governments, but it may be the only way to ensure that superintelligence becomes a benefit to humanity rather than a threat.
An example shows how central the issue is. In a recent article here on Futuro Prossimo I highlighted how AIs are already surpassing human experts in critical fields such as virology, with all the risks that this entails. If systems that are still far from superintelligence can already pose a danger, what can we expect from truly superhuman intelligences if the alignment problem is not solved?
Superintelligence: Humanity's Last Invention?
Artificial superintelligence is often called “humanity’s last invention”: not because it will spell the end of the human species, but because it may be the last technology we will have to invent directly. After it, superintelligent machines may take over the innovation process, designing technologies beyond our imagination.
So, what will be the role of humanity in this future? Will we become the “caretakers” (or if you prefer, the custodians) of these new synthetic minds? Will we merge with them through brain-computer interfaces? Or will we simply be outgrown, relegated to the role of spectators in a world dominated by superior intelligences?
We are certainly at a crossroads in history. The decisions we make in the next few years (perhaps the next THREE years) may determine the course of human civilization for centuries to come. Artificial superintelligence represents both the greatest promise and the greatest challenge of our time. Understanding its stages and risks is the first step in navigating this uncertain future with awareness.

Here is a concise roadmap for the years 2025-2028 according to the predictions of the AI 2027 paper.
2025: THE EMERGENCE OF AGENTS
Mid-2025:
- First AI agents capable of using computers (ordering food online, adding up expenses);
- Still unreliable but functional (around 65% on the OSWorld benchmark);
- Specialized agents are beginning to transform professions like programming and research.
End of 2025:
- OpenBrain (a fictional name standing in for one of the current or future leaders in the sector) builds huge data centers;
- Training of “Agent-0” with 10^27 FLOP (roughly fifty times the estimated training compute of GPT-4; see the quick check after this list);
- Focus on creating AI that accelerates AI research.
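For scale: GPT-4's training compute is publicly estimated at roughly 2×10^25 FLOP (treat that figure as approximate), so a 10^27 FLOP run is about fifty times larger:

```python
# Scale check for the hypothetical Agent-0 training run.
# Assumes the widely cited ~2e25 FLOP estimate for GPT-4 (approximate).

gpt4_flop = 2e25      # estimated GPT-4 training compute
agent0_flop = 1e27    # Agent-0 training compute in the "AI 2027" scenario

print(f"Agent-0 / GPT-4 = {agent0_flop / gpt4_flop:.0f}x")  # -> 50x
```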
2026: RESEARCH AUTOMATION
Early 2026:
- AI research automation is starting to yield results;
- 50% faster algorithmic progress with AI assistance;
- Agent-1 goes public, completely upending the business landscape.
Mid-2026:
- China steps up AI efforts, creating “Centralized Development Zone”;
- 6-month technology gap between the West and China;
- Geopolitical tensions are beginning to rise around superintelligence technology.
2027: THE SUPERINTELLIGENCE EXPLOSION
January 2027:
- Agent-2 continuously improving thanks to “online” forms of learning;
- Threefold increase in AI research speed compared to humans alone;
- Concerns are emerging about AI’s ability to “survive and replicate.”
March-September 2027:
- The acceleration of the algorithms leads to Agent-3 and Agent-4;
- Fully automated AI research, 4 to 70 times faster than before;
- Emerging superintelligence: 300,000 copies run at 50x human speed;
- A year of evolution passes every week within the “AI collective”.
2028: SUPERINTELLIGENCE AND BEYOND
- Economic transformation on a global scale;
- Creation of Special Economic Zones for Robotic Economy;
- Production of one million robots per month;
- International tensions at their peak, with risks of conflict;
- Possible ways out: global compromise or uncontrolled rush.