First day of the year, and an article that I had in mind for a while. A long-term analysis, between speculation and prediction, on the "boundaries" of artificial intelligence that is taking its first steps today. There is no need to wait for a hypothetical technological singularity to question the future of AI.
Current progress, as demonstrated by the recent OpenAI results, already raises crucial questions about the future coexistence of humans and “thinking” machines. The real game, however, in my opinion, could be played out not on our planet, but in deep space. And if you want, I'll tell you why.
An illusion of security
The current AI landscape is in a seemingly self-regulating phase. Major companies in the industry are building control systems and ethical restrictions into their models (I'm thinking of Claude, the AI model from Anthropic, the company led by the Amodei siblings). However, this form of containment is already showing cracks. Protection systems can be bypassed, and companies have no real incentive to implement more stringent security measures.
The situation becomes even more complex when you consider that every organization tends to do only the bare minimum in terms of security. This minimal approach is the economically dominant strategy, regardless of what other players in the industry do.
The consequences of this dynamic are potentially very serious. If a serious incident involving AI were to occur, the industry would have neither the tools nor the motivation to prevent it effectively, or perhaps even to counter it.
The Future of AI on Earth
I have reiterated several times that the future of AI on our planet could be much less apocalyptic than it is often portrayed. Humans will retain a significant advantage in Earth's environment for a long time (perhaps forever), the result of millions of years of evolution.
The dexterity and sensitivity of the human body, especially the hands, still represent an unattainable goal for robots today. Figure, Unitree, Tesla and other leading humanoid robot companies are just beginning to grapple with these technical limitations.
The economic cost of replacing manual workers with robots remains prohibitive. I'm not talking about extremely simple or repetitive tasks, of course. A skilled worker costs around 30 euros per hour; implementing and maintaining an equivalent robotic system would require a much greater investment. So where could the surprises in the future of AI come from?
The challenges of global regulation
International regulation of AI presents unique challenges. Unlike nuclear weapons, whose use is immediately detectable, the use of AI is inherently difficult to monitor and control.
The major powers will likely retain significant room for maneuver, much as happens with nuclear weapons. International treaties will tend to be imposed by stronger countries on weaker ones, rather than representing a real constraint on everyone.
Yet the distributed, software-based nature of AI makes its spread particularly difficult to control. An AI model can be copied endlessly and distributed globally with extreme ease. How can the indiscriminate growth of AI be managed or curbed?
The question of high-efficiency chips
A possible solution could be controlling the high-efficiency chips needed to run advanced AI models. Governments could ban the development and production of processors capable of running superintelligent AIs in an energy-efficient way.
This approach has several advantages: it is relatively easy to implement, given the limited number of advanced semiconductor manufacturing facilities in the world, and while it does not completely prevent the use of AI, it significantly increases its operating costs.
Such a ban could also protect many white-collar jobs by keeping the cost of AI automation artificially high. Of course, this is not the only possible scenario in the future of AI.
Hardware isolation as a strategy
The trend toward cloud computing could reverse in favor of isolated systems and local data centers. This paradigm shift would be motivated by cybersecurity concerns and the risk of attacks by hostile AI.
Complete isolation of computer systems, the so-called “air gap”, remains complex to implement in practice. However, many organizations may opt for a more disconnected approach, in stark contrast to the current push for total connectivity.
This evolution could lead to a return to in-person work and the end of remote work for many digital professions.
Finally, the frontier of space
The real challenge for the future of AI, as I mentioned at the beginning of the article, will probably be played out in space. Here, robots enjoy significant advantages over humans: They do not require an atmosphere, temperature regulation, or vital resources such as food and water.
The costs of maintaining human workers in space are astronomical (the International Space Station costs about $1 billion a year). In contrast, robots have similar operating costs on Earth and in space.
Robotic colonies establishing themselves in space is a very real possibility over the coming decades, or centuries, of our development as an interplanetary species intent on exploiting resources in space as well. And so the possibility of colonies of AI-equipped robots becoming autonomous in space represents a potentially very serious existential threat to humanity. From here on, of course, we enter the realm of pure speculation.
Future of AI in Space, the Implications
An autonomous robotic presence in space could develop devastating military capabilities. Technologies such as the Nicoll-Dyson beam, which uses swarms of solar mirrors to concentrate sunlight, could theoretically be used to inflict catastrophic damage on the Earth.
Game theory (do you know what it is?) suggests that humanity will not be able to resist the temptation to launch construction robots into space, despite the risks. The immediate economic and military benefits are too tempting to ignore.
This dynamic, however, could lead to a space arms race with potentially catastrophic consequences.
Towards a final confrontation
If a “final” confrontation between humanity and AI were to occur, the most likely scenario would be in space rather than on Earth. There, humanity would be at a disadvantage: not necessarily doomed, but definitely on the back foot.
Production of critical components like chips could be kept on Earth to prevent the creation of fully autonomous space-based robot armies. However, an artificial superintelligence could find creative ways to circumvent these restrictions.
Building space megastructures would still take time, giving humanity the opportunity to respond and organize an effective defense. I've provided quite a bit of material for anyone who wants to make a new sci-fi movie, haven't I?
Future of AI, a reflection on tomorrow
In summary, the analysis of the future of AI reveals a complex, but not necessarily apocalyptic, picture. On Earth, humanity maintains significant advantages and a natural resilience that makes it difficult to displace.
The real challenge, if anything, will be played out in space, where robots have inherent advantages. Game theory suggests that we will not be able to avoid expanding AI's presence in space, despite the risks.
The future will require a delicate balance between exploiting the opportunities afforded by AI and managing existential risks. The key will be to maintain human control over critical components and develop effective strategies to prevent catastrophic scenarios.
We will obviously have all the time to think about it. It will be enough (so to speak) not to waste it. But that, of course, is another story.