This post is part of "Periscopio", the LinkedIn newsletter that explores future issues every week and is published in advance on the LinkedIn platform. If you want to subscribe and preview it, you can find it here.
Last Saturday The Washington Post reported the statements of a Google engineer suspended on June 6 for violating the company's confidentiality policies. In essence, the engineer, Blake Lemoine, shared "private" chats between himself and an artificial intelligence chatbot that, he claims, demonstrate something many experts have long feared: that this AI has become sentient.
These are strong statements from an industry insider, not just any guy. And they were made after hundreds of interactions with a cutting-edge, unprecedented artificial intelligence system called LaMDA. But is it true? Has this AI really become sentient?

What are we talking about?
LaMDA stands for "Language Model for Dialog Applications". It is one of those AI systems that, once trained on large volumes of data, can respond to written requests.
These systems have gotten better and better at answering questions in writing that sounds more and more human. Just last May, Google itself presented LaMDA on its official blog, calling it "capable of writing on an infinite number of topics".
Yes, but is it sentient?
After the engineer's claims, Google tried to pour water on the fire, denying the account that appeared in The Washington Post. "Our team," Big G wrote yesterday, "examined Blake's concerns and informed him that the evidence does not support his claims." Several artificial intelligence experts echoed the company: some flatly rejected the thesis, others used it as an example of our propensity to assign human attributes to machines.
A bit like when we yell at the mouse, so to speak.
It is not a joke, however, and it cannot be dismissed so easily. And not because of the fears of people like Ilya Sutskever ("AIs are becoming sentient"), Yuval Harari ("AI will be able to hack people"), or Mo Gawdat ("AI researchers are playing at creating God").
The belief that Google's AI can be sentient is very important, because it highlights both our fears and our expectations about the potential of this technology.

For now, however, it is a mistaken belief
The development and use of advanced computer programs trained on huge amounts of data raise many ethical concerns. In some cases, however, progress is judged on what could happen rather than on what is currently feasible.
The conclusion at the moment, according to almost all of the world's leading computer scientists, appears to be just one: no, Google's AI is nowhere near sentient. It has simply gotten better at looking sentient, matching language patterns against similar ones it finds in an almost infinite store of sentences.
Think of it as a super-powerful version of the autocomplete software we have on our smartphones. Okay: super, super powerful. But no one should confuse that with being sentient.
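To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction using the Hugging Face transformers library and the small, publicly available gpt2 model (an assumption for illustration: LaMDA itself is not public, so gpt2 stands in). The model does nothing more than repeatedly pick a statistically plausible next token:

```python
# A minimal sketch of "super-powerful autocomplete": the model only
# continues a prompt by predicting likely next tokens, one at a time.
# Assumes: pip install transformers torch. "gpt2" stands in for LaMDA,
# which is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I feel happy today because"
# The model extends the prompt with statistically plausible words;
# there is no understanding or feeling behind the continuation.
result = generator(prompt, max_new_tokens=15)
print(result[0]["generated_text"])
```

However fluent the continuation looks, it is produced by pattern matching over training text, which is exactly the point the experts are making.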
AI developers themselves, however, are sounding the charge these days, alarming people. Their statements, driven partly by the shock of being the first to see this technology's potential and partly by the wish to promote it, resonate widely in the media.

Google's AI is not sentient
Last week Blaise Aguera y Arcas, vice president of Google Research, wrote in an article for The Economist that when he started using LaMDA last year, he increasingly felt he was talking to something intelligent. The astonishment is understandable, and so is a subtle fear.
So far, however, LaMDA has undergone 11 different review processes on artificial intelligence principles. It has also been subjected to many tests of its ability to produce fact-based claims. Nothing. It is not sentient.
This does not detract from the need to develop artificial intelligence guided by ethics and morals. Some, truth be told, claim to have developed an AI directly EQUIPPED with a morality, but that's another matter.
The main duty of researchers, if they really care about the progress of this technology, is not to anthropomorphize its manifestations. Above all, they must not overly alarm the public, while remaining vigilant enough to "pull the brake" as soon as there is actual evidence of a spark of self-awareness.
IF there ever is one. What do you say?