No, Google's AI is not sentient

AI is often praised for its continued growth. This week, however, things have taken a decidedly 'creepy' turn

June 14 2022
Gianluca Riccio
⚪ 5 minutes

This post is part of "Periscopio", the LinkedIn newsletter that explores the issues of the future every week and is published in advance on the LinkedIn platform. If you want to subscribe and read it in preview, you can find it all here.

Last Saturday, The Washington Post reported the statements of a Google engineer suspended on June 6 for violating the Mountain View company's confidentiality agreements. In essence, the engineer, Blake Lemoine, leaked "private" chats between himself and an artificial intelligence chatbot, chats that would demonstrate something many experts fear: that this AI has become sentient.

These are strong statements from an industry expert, not just anyone, and they come after hundreds of interactions with a cutting-edge, unprecedented artificial intelligence system called LaMDA. But is it true? Has this AI really become sentient?

Is LaMDA sentient?

What are we talking about?

LaMDA stands for "Language Model for Dialog Applications". It is one of those AI systems that, once trained on large volumes of data, can respond to written requests.

These systems have gotten better and better at answering questions in ways that sound more and more human. Just last May, Google itself presented LaMDA on its official blog, calling it "capable of writing on an infinite number of topics".


Yes, but is it sentient?

After the engineer's claims, Google tried to pour cold water on the story, disputing the interview that appeared in The Washington Post. "Our team," Big G wrote yesterday, "examined Blake's concerns and informed him that the evidence does not support his claims." Several artificial intelligence experts echoed the company: some flatly rejected the thesis, others used it as an example of our propensity to assign human attributes to machines.

A bit like when we get angry at the mouse, so to speak.

It is not a joke, though, and it cannot be dismissed so easily. And not because of the fears of people like Ilya Sutskever ("AIs are becoming sentient"), Yuval Harari ("AI will be able to hack people") or Mo Gawdat ("AI researchers are playing at creating God").

The belief that Google's AI could be sentient matters, because it highlights both our fears and our expectations about the potential of this technology.

Yuval Harari

For now, however, it is a mistaken belief

The development and use of advanced computer programs trained on huge amounts of data raise many ethical concerns. In some cases, however, progress is judged on what could happen rather than what is currently feasible.

The conclusion at the moment, according to almost all the leading computer scientists in the world, appears to be just one: no, Google's AI is nowhere near sentient. It is simply getting better at looking sentient, matching language patterns against similar things it finds in an almost infinite store of sentences.

You have to imagine it as a super-powerful version of the autocomplete software we have on a smartphone. OK: super, super, super powerful. No one, however, should confuse this with being sentient.
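
To make the "autocomplete" analogy concrete, here is a minimal sketch of the mechanism these systems share: a language model repeatedly predicts the most likely next token and appends it to the text. LaMDA itself is not publicly available, so the sketch assumes the Hugging Face transformers library and the small, freely downloadable GPT-2 model purely for illustration.

```python
# A minimal "giant autocomplete" sketch. NOT LaMDA (which is not public):
# just the small open GPT-2 model, used to illustrate the next-token
# prediction loop that underlies this family of systems.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Are you sentient? "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                       # extend the text by 20 tokens
        logits = model(input_ids).logits      # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()      # pick the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

# The continuation is statistically plausible text, not evidence of thought.
print(tokenizer.decode(input_ids[0]))
```

Greedy argmax decoding keeps the sketch short; production systems sample from the distribution and add many layers of training on top, but the core loop is the same pattern matching.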

The AI developers themselves, however, are leading the charge these days and alarming people. Their statements, driven partly by the shock of seeing this technology's potential firsthand and partly by the desire to promote it, get enormous resonance in the media.

Google's AI is not sentient

Last week Blaise Aguera y Arcas, vice president of Google Research, wrote in an article for The Economist that when he started using LaMDA last year, he increasingly felt he was talking to something intelligent. The astonishment is understandable, and so is the subtle fear.

For the moment, however, LaMDA has undergone 11 different review processes on artificial intelligence principles, and it has been subjected to many tests of its ability to produce fact-based claims. Nothing. It is not sentient.

This does not detract from the need to develop artificial intelligence guided by ethics and morals. Some, truth be told, claim to have developed an AI directly equipped with a morality, but that is another matter.

The main duty of researchers, if they really care about the progress of this technology, is not to anthropomorphize its manifestations. Above all, they should not needlessly alarm the public, while remaining vigilant enough to "pull the brake" as soon as there is actual evidence of a spark of self-awareness.

IF there ever is one. What do you think?

Tags: Google, artificial intelligence, Periscope
