We may not have achieved artificial general intelligence (AGI) yet, but according to a leading expert in the field, it could arrive sooner than we think. Computer scientist Ben Goertzel believes that while human- or superhuman-level AI is unlikely to be built before 2029 or 2030, it could possibly happen as early as 2027.
It is a prospect as fascinating as it is unsettling, one that raises profound questions about the future of humanity in the face of the rise of intelligent machines.
The uncertainty of artificial general intelligence
Goertzel, founder of SingularityNET, is well aware of the unknowns surrounding the development of AGI. “No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we will get there,” he told the audience at a recent conference in Panama.
Yet despite these uncertainties, Goertzel considers it plausible that human-level AGI could be achieved by 2027, and in any case no later than 2032. It is a prediction which, if it came true, would have profound implications for our future.
From AGI to superintelligence
The real challenge, according to Goertzel, begins once AGI is achieved. “My view is that once you get to human-level AGI, you could get radically superhuman AGI within a few years, unless the AGI chooses to hold back its own development out of its own conservatism,” he added.
The idea is that an AI capable of introspection into its own “mind” could do engineering and science at a human or superhuman level, creating increasingly advanced intelligences in a process of recursive self-improvement.
This is the concept of the “intelligence explosion,” often associated with the technological singularity.
Experts' predictions
Goertzel is neither the only expert nor the first to foresee the advent of artificial general intelligence, and the forecasts are gradually converging. Geoffrey Hinton, known as the “godfather of AI” and a former Google employee, said in May last year that he expected, “without much confidence,” AGI to arrive somewhere between 5 and 20 years from now. His previous prediction was 30 to 50 years.
Shane Legg, co-founder of Google DeepMind, reiterated his prediction from over a decade ago that there is a 50% chance that humans will invent AGI by 2028. Paul Pallaghy, finally, goes so far as to treat artificial general intelligence by 2028 as a certainty, with the possibility that it could arrive as early as 2026.
Taken with due caution, these forecasts point to a growing consensus among experts that this goal is near.
Advances in language models
Until a few years ago, AGI as described by Goertzel and his colleagues seemed like a pipe dream. But with the advances in large language models (LLMs) made by OpenAI since it launched ChatGPT at the end of 2022, that possibility seems ever closer. Goertzel is quick to point out that LLMs alone will not lead to AGI, but it is undeniable that they represent a significant step in that direction. The ability of these systems to understand and generate natural language in an increasingly sophisticated way is a fundamental prerequisite for the emergence of human-level intelligence.
The limits and risks of AI
Of course, there are many reservations about Goertzel's claims. Even an AI that is superhuman by human standards would not have a “mind” like ours. Then there are the existential fears linked to the emergence of an intelligence radically superior to our own, capable of surpassing us in every field.
How can we ensure that such an entity has a moral sense aligned with human values and acts for our good?
AGI, towards an uncertain future
Despite these reservations, Goertzel's theory is fascinating and cannot be entirely dismissed, especially in light of the rapid advances in AI in recent years. Whether AGI arrives in 2027, 2030 or beyond, it seems increasingly likely that humanity will sooner or later find itself confronted with artificial intelligence at or above the human level.
Faced with this prospect, it is essential that we think deeply about the implications of AGI and superintelligence. What are the potential benefits and risks? How can we direct the development of these technologies towards positive outcomes for humanity? And how will our very conception of what it means to be human change in a world where we are no longer the pinnacle of intelligence?
There are no easy answers to these questions, but it is imperative that we start asking them and looking for solutions. The advent of AGI and superintelligence may be the greatest test humanity has ever faced. Let's hope we can all overcome it together, as a society.