Sometime between 2025 and 2028. That is the window in which Dario Amodei, CEO of Anthropic and "father" of Claude AI, predicts that AI models could achieve the ability to replicate and survive autonomously. A shocking statement, coming from one of the protagonists of the race to create the most powerful and "responsible" artificial intelligence. In an interview with the New York Times, Amodei compares the development of AI to the biosafety levels of virology labs, warning that without "responsible scaling" the technology could soon gain autonomy and extreme persuasiveness, with alarming implications for global security.
The analogy with biosafety levels
To explain his vision, Amodei uses a powerful analogy: AI Safety Levels (ASLs), modeled on the biosafety levels used in virology laboratories. According to the CEO of Anthropic, we are currently at ASL 2 in AI development. But ASL 4, which would include "autonomy" and "persuasion," could be just around the corner.
ASL 4 will be more about, on the misuse side, allowing state actors to dramatically increase their capabilities, which is much more difficult than enabling random individuals to do the same. It would be worrisome if North Korea, China, or Russia could significantly improve their offensive capabilities in various military areas with AI, in a way that would give them a substantial geopolitical advantage.
Dario Amodei
And it is precisely on the side of "autonomy" that Amodei's predictions become even more alarming.
Various versions of these models are quite close to being able to replicate and survive in the wild.
Dario Amodei
When the interviewer asks the Italian-American researcher how long it will take to reach these various threat levels, Amodei (who says he is inclined to think "exponentially") states that the "replicate and survive" level could be reached "anywhere between 2025 and 2028." "I'm really talking about the near future here. I'm not talking about 50 years from now," the Anthropic CEO emphasizes. "God, grant me chastity, but not now. But 'not now' doesn't mean when I'm old and grey. I think it could be a short-term thing."
Anthropic, words that carry weight
Amodei's words carry particular weight given his leading role in the AI sector. In 2021, after helping create GPT-3 and witnessing the partnership with Microsoft, he and his sister Daniela left OpenAI over differences with the company's direction. Soon after, the siblings founded Anthropic together with other former OpenAI employees, with the goal of continuing their efforts to "responsibly scale" AI.
"I might be wrong. But I think it could be a short-term thing." Words which, despite their uncertainty, sound like a signal not to be underestimated.
In a context where concerns about AI seem to grow by the day, Amodei's perspective (from his highly privileged vantage point inside the industry) adds further weight to the need for responsible governance of this disruptive technology. Anthropic's mission, "to make sure transformative AI helps people and society thrive," seems more urgent than ever in the face of scenarios like those conjured by its CEO. If AI models are truly close to achieving the ability to replicate and survive autonomously, especially if embodiment accelerates their "evolution," it is essential that their development be guided by ethical principles and a sense of responsibility.
I already know what you're thinking
This is feedback I often receive when I report the statements of the various "foremen" of artificial intelligence. Altman, Musk, and now Amodei are all working hard on developing something they sometimes like to describe as very dangerous. Why? Many of you write to me that it is marketing: the "rants," even the alarmist ones, draw attention to the company and the product. As if to say: "Hey, we are handling a kind of lethal virus here, but rest assured we will do it very carefully, because we care very much."
Maybe. Of course, Amodei's predictions can sound alarmist, even exaggerated. But in a rapidly evolving field like artificial intelligence, where breakthroughs follow one another at an exponential rate, it is wise to prepare for even the most extreme scenarios. Whether it is preventing the malicious use of AI by state actors (all of them, not just those named by Amodei, because, as the saying goes, even the cleanest one has something to hide) or ensuring that models do not escape human control, the challenge is immense and requires a joint effort from companies, governments, and civil society.
Amodei's words, despite their speculative nature, should serve as a spur to accelerate debate and action on these crucial issues. A debate, of course, in which the ball must not remain solely in the court of the AI developers themselves, but in that of civil society as a whole. Better still, let's get moving now: the sooner we start, the better.