There is good news and very bad news. The good news, at least according to a former Google executive, is that the singularity is on its way, and may already be upon us. The very bad news is that it poses a serious threat to all of humanity.
The dire warning comes from Mo Gawdat, a genuine industry insider. He served as Chief Business Officer of Google's moonshot division, known at the time as Google X. His words of warning come from an interview Gawdat gave to the Times.
“General artificial intelligence is inevitable”
In the interview, the former Google executive said he believes that artificial general intelligence (AGI), the kind of omnipotent, sentient AI whose nefarious effects science fiction has explored in creations like Skynet from “The Terminator,” is inevitable. Once it arrives, humanity may well find itself facing an apocalypse brought on by godlike machines.
Gawdat told the Times that he had this frightening revelation while working with artificial intelligence developers at Google X. The experiment he witnessed was nothing special, at least on the surface: the developers were building robotic arms capable of finding and picking up a ball. After a period of slow progress, Gawdat recounts that one arm grabbed the ball and seemed to hold it up toward the researchers in a gesture that, to him, looked like showing off.
“And suddenly I realized this is really scary,” Gawdat said. “It completely froze me.” “The reality is,” he added, “that we are creating God.”
The dangers the former Google executive foresees may be plausible. But first we have to face the real ones.

The warning from the former Google executive is disturbing, but it is hardly unique. There is no shortage of wise cautioners (or fearmongers, depending on your point of view) about artificial intelligence in the tech sector. Elon Musk, for example, has repeatedly warned the world about the danger of AI one day outclassing humanity. These are far-reaching speculations that, on the one hand, warn of real dangers, but on the other seem so outsized that they may distract us from the harms that are already evident today.
Facial recognition and predictive policing, for example, cause real harm in disadvantaged communities. Countless algorithms in use today continue to encode and propagate institutional racism across the board. These are problems that can be addressed now, through oversight and regulation.