There is good news and very bad news. The good news, at least according to one former Google executive, is that the singularity is coming. It's upon us now. The very bad news is that it represents a grave threat to all humanity.
The dire warning comes from Mo Gawdat, a manager with serious credentials. He was Chief Business Officer of Google X, Google's moonshot organization, as it was known at the time. His words of warning come from an interview Gawdat granted to the Times.
“Artificial general intelligence is inevitable”
In the interview, the former Google executive said he believes that artificial general intelligence (AGI), the kind of omnipotent, sentient AI whose harmful effects we have explored in science-fiction speculations such as Skynet from “The Terminator,” is inevitable. Once it arrives, humanity could well find itself suffering an apocalypse brought on by god-like machines.
Gawdat told the Times that he had this frightening revelation while working with artificial intelligence developers at Google X. The experiment the executive witnessed was nothing special, at least on the surface. Developers were building robotic arms that could find and pick up a ball. After a period of slow progress, Gawdat says, one arm grabbed the ball and appeared to hold it up toward the researchers in a gesture that, to him, looked like showing off.
“And suddenly I realized this is really scary,” Gawdat said. “It completely froze me.” “The reality is,” he added, “that we are creating God.”
The dangers the former Google executive sees may be plausible. But first we need to deal with the ones that are already real.
The former Google executive's warning is disturbing, but it is not the only one. There is no shortage of wise cautioners (or fearmongers, depending on your point of view) about AI in the tech sector. Elon Musk, for example, has repeatedly warned the world about the danger of AI one day overtaking humanity. These are far-out speculations that, even as they warn of dangers, can seem so enormous that they distract us from the problems already evident today.
For example, facial recognition and predictive policing cause real harm in disadvantaged communities. Countless algorithms in use today continue to encode and propagate institutional racism across the board. These are problems that can be addressed through oversight and regulation.