This post is part of "Periscopio", the LinkedIn newsletter that delves into Futuro Prossimo themes every week and is published in advance on LinkedIn. If you want to subscribe and preview it, you can find it here.
Last Saturday, The Washington Post reported the statements of a Google engineer suspended on June 6 for violating the Palo Alto company's confidentiality agreements. In essence, the engineer, Blake Lemoine, released "private" chats between himself and an artificial intelligence chatbot which, he claims, demonstrate something many experts fear: that this AI has become sentient.
These are strong statements from an industry expert, not just anyone. And they were made after hundreds of interactions with a cutting-edge, unprecedented artificial intelligence system called LaMDA. But is it true? Has this AI really become sentient?
What are we talking about?
LaMDA stands for “Language Model for Dialog Applications”. It is one of those AI systems that can respond to written requests once trained on large volumes of data.
Systems like it have gotten better and better at answering questions, writing in ways that are more and more human-like. Just last May, Google itself presented LaMDA on its official blog, describing it as "capable of writing on an infinite number of topics".
Yes, but is it sentient?
After the engineer's statements, Google tried to pour cold water on the story by disputing the account that appeared in The Washington Post. "Our team," Big G wrote yesterday, "reviewed Blake's concerns and informed him that the evidence does not support his claims." Several artificial intelligence experts echoed this: some flatly rejected the thesis, others used it as an example of our propensity to assign human attributes to machines.
A bit like when we argue with the mouse, so to speak.
It is not a joke, however. Such a thing cannot be dismissed so easily. And not because of the fears of people like Ilya Sutskever ("AIs are becoming sentient"), Yuval Harari ("AI will be able to hack people"), or Mo Gawdat ("AI researchers are playing at creating God").
The belief that Google's AI could be sentient is very important, because it highlights both our fears and our expectations about the potential of this technology.
For now, however, it is a mistaken belief
The development and use of advanced computer programs trained on massive amounts of data raises many ethical concerns. In some cases, however, progress is judged by what could happen rather than what is currently achievable.
At the moment, according to almost all of the world's leading IT experts, there is only one conclusion: no, Google's AI is not at all close to being sentient. It has simply gotten better at appearing sentient, matching patterns of language against similar ones it finds in an almost infinite storehouse of sentences.
Think of it as a vastly more powerful version of the autocomplete software on a smartphone. Okay: vastly, vastly more powerful. Still, no one should confuse that with being sentient.
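To make the autocomplete comparison concrete, here is a deliberately tiny sketch (nothing like LaMDA's actual architecture, which is a large neural network): a bigram model that "predicts" the next word purely by counting which words followed which in its training text. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training text. A real system is trained on billions of sentences;
# the principle of matching observed patterns is the same.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the cat ran ."
).split()

# Count which word follows which: pure statistics, no understanding.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(autocomplete("the"))  # → "cat" ("cat" follows "the" 3 times, "dog" only 2)
print(autocomplete("sat"))  # → "on"
```

Scale this idea up enormously, replace raw counts with a learned statistical model, and you get fluent text that mirrors its training data, without any inner experience behind it.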
The AI developers themselves, however, are sounding the charge these days and alarming people. Their statements, driven partly by the shock of being the first to see this technology's potential and partly by a desire to promote it, resonate widely in the media.
Google's AI is not sentient
Last week Blaise Aguera y Arcas, vice president of Google Research, wrote in an article for The Economist that when he started using LaMDA last year, he increasingly felt he was talking to something intelligent. The amazement is understandable, even the subtle fear.
To date, however, LaMDA has undergone 11 different review processes against Google's artificial intelligence principles. It has also been subjected to many tests of its ability to make fact-based claims. Nothing. It is not sentient.
This takes nothing away from the need to develop artificial intelligence guided by ethics and morals. Some, to be fair, say so after having developed an AI directly EQUIPPED with a morality of its own, but that is another matter.
The main duty of researchers, if they really care about the progress of this technology, is not to anthropomorphize its manifestations. Above all, they should not unduly alarm public opinion, but rather remain vigilant so they can "apply the brakes" as soon as there is actual evidence of a spark of self-awareness.
IF there ever is. What do you say?