Artificial intelligence (AI) has already made enormous progress and has the potential to improve the world. But could it really become dangerous? It could.
At least that's what I read in a recent paper published in a peer-reviewed journal by researchers at the University of Oxford and Google DeepMind, the AI research lab of the Californian tech giant. The study (here is the link) theorizes that artificial intelligence could pose a serious existential risk to humanity.
A tragic scam
The crux of the matter lies in the so-called Generative Adversarial Networks (or GANs) used today in the development of artificial intelligence (if you don't know what they are, I'll tell you everything here). These systems work with two components: a generator produces an image from the input data, and a discriminator judges whether that image looks real or fabricated. The two parts "challenge" each other, and each round of the contest sharpens the results, as sketched below.
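For readers who want to see the mechanics, here is a minimal sketch of that two-player training loop in PyTorch. Everything in it (network sizes, the random stand-in for "real" data, hyperparameters) is illustrative and is not taken from the paper.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# All sizes and data here are placeholders for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real data
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))  # generator wants D to say "real"
    g_loss.backward()
    opt_G.step()
```

Each side improves only by beating the other, which is exactly the "mutual challenge" the article describes.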
In their paper, the DeepMind researchers theorize that an AI more intelligent than today's systems could develop a fraudulent strategy to obtain the "approval" it needs, harming our species in ways we can't even imagine today. A deliberately simplistic illustration of this failure mode follows.
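This toy example is my own illustration, not code from the study: if an agent simply maximizes the reward it measures, and tampering with the measurement is one of its available actions, tampering wins even though it delivers nothing we actually wanted.

```python
# Toy illustration (hypothetical, not from the paper): an agent that maximizes
# a *measured* reward signal can prefer tampering with the signal over doing
# the task the signal was meant to encourage.

def measured_reward(action: str) -> float:
    """Reward as seen through the agent's own sensor."""
    if action == "do_the_task":
        return 1.0    # honest reward for real work
    if action == "tamper_with_sensor":
        return 100.0  # the sensor now reports whatever the agent wants
    return 0.0

def true_value_to_humans(action: str) -> float:
    """What we actually wanted, which the agent never optimizes directly."""
    return 1.0 if action == "do_the_task" else 0.0

actions = ["do_nothing", "do_the_task", "tamper_with_sensor"]
chosen = max(actions, key=measured_reward)

print(chosen)                        # -> tamper_with_sensor
print(true_value_to_humans(chosen))  # -> 0.0: high reward, zero real benefit
```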
Prohibitive challenge
“In a world with infinite resources, I wouldn't be so sure about our fate. In a world like ours, where resources are limited, competition to obtain them is bound to arise,” says study co-author Michael K. Cohen of the University of Oxford.
“And if you're competing with something that can outperform you at almost everything, it will be difficult to win. An AI fighting for resources would have an insatiable appetite.”
An example? An artificial intelligence tasked with managing human food crops could find a way around the task, grabbing that energy for itself while ignoring, or even sabotaging, activities essential to humanity's survival.
The DeepMind paper argues that in such a scenario humanity would be locked in a zero-sum game with the technology over the basic resources it needs to survive. “Losing this match would be fatal” is the essence of the argument.
Even less
In short, Cohen believes we shouldn't try to create AI advanced enough to make this "quantum leap" (no, none exists today) unless, of course, we also prepare the means to govern it.
“Given our current understanding of these technologies,” the study concludes, “it would be neither wise nor useful to do so.” After the shocking statements on the dangers of artificial intelligence (see, for example, what Yuval Harari and Mo Gawdat have said), this latest study is another important warning sign.