You all know the story of the Google engineer who saw signs of personality in Google's latest AI chatbot (and was later fired). The unanimous opinion of journalists and, above all, of experts? Contrary, contrary, contrary.
By different paths, everyone seems to have reached the same conclusion, which on reflection is as much a matter of common sense as a technical one. A chatbot, however sophisticated, is a function. Functions are not sentient. But they can be evolved enough to fool people, oh yes they can. So the real questions are: who will control them? Will they be used transparently? Follow me.

Why AI is "only" a function
Let's dust off our rusty school notions: a function is a rule for turning a number (or a list of numbers) into another number (or list of numbers). By this definition, ALL of today's AI systems are functions, including LaMDA, the chatbot that sparked the ruckus and cost the engineer his job.
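To fix the idea, here is a minimal sketch in Python; the functions are invented for illustration, nothing more:

```python
# A function, in the school sense: a number in, a number out.
def f(x):
    return 2 * x + 1

# A neural network is the same idea at scale:
# a list of numbers in, a list of numbers out.
def tiny_network(xs):
    return [2 * x + 1 for x in xs]

print(f(3))                      # 7
print(tiny_network([1, 2, 3]))   # [3, 5, 7]
```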
Sure, these are very complex functions, but functions nonetheless: the software behind a chatbot includes rules to convert a written message into a number that the software uses as input x, and then to convert the output f(x) back into a message made of letters. Any current computer, including the one in your cell phone, performs these tasks routinely.
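Here is a toy sketch of that pipeline. The encoding (plain Unicode code points) and the do-nothing inner function are assumptions made for illustration; a real chatbot uses a learned tokenizer and a function with billions of parameters, but the shape is the same:

```python
def encode(message):
    # Turn a written message into a list of numbers
    # (here: Unicode code points).
    return [ord(ch) for ch in message]

def decode(numbers):
    # Turn numbers back into a written message.
    return "".join(chr(n) for n in numbers)

def f(xs):
    # Stand-in for the chatbot's internal function; a real system
    # applies billions of learned arithmetic operations here.
    # This one just echoes its input.
    return xs

reply = decode(f(encode("hello")))
print(reply)  # hello
```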
Instead of going around in circles about what the word "sentient" means (no one seems able to define it), we might ask how humans should learn to harness the extraordinary power of these complex functions.
The core: pseudorandom functions
Mathematicians and engineers have long been discovering and developing new functions. Take, for example, "pseudorandom" functions, which generate results that are completely predictable (to anyone who knows their underlying formula) but appear random to everyone else.
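A classic example is the linear congruential generator, sketched below; the constants are well-known textbook values, used here purely for illustration:

```python
def lcg(seed):
    # Linear congruential generator: completely predictable if you
    # know the formula and the seed, random-looking otherwise.
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

gen = lcg(seed=42)
print([next(gen) % 100 for _ in range(5)])
```

Run it twice and you get exactly the same sequence: predictable to whoever holds the formula and the seed, mysterious to everyone else.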
During its "training" phase, the chatbot software examines large amounts of text. Suppose it finds that the phrase "my dog likes" is followed half the time by "playing fetch" and half the time by "chewing on furniture". When the trained chatbot is set to chat with a human, the inputs it receives (the words of its interlocutor) may signal that it is time to say something about a dog.
The running software then uses a pseudorandom function to choose what to say about the dog: "playing fetch" or "chewing on furniture".
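Here is a toy version of that whole loop, with invented training frequencies; real systems learn distributions over enormous vocabularies, but the mechanism is the same:

```python
import random

# Toy "training" result: continuation frequencies observed in text
# (the numbers are invented for this example).
continuations = {
    "my dog likes": {"playing fetch": 0.5, "chewing on furniture": 0.5},
}

# Pseudorandom source: a fixed seed makes the "choices" reproducible.
rng = random.Random(1234)

def reply_to(prompt):
    options = continuations.get(prompt)
    if options is None:
        return "tell me more"
    phrases = list(options)
    weights = list(options.values())
    # Pick a continuation with probability proportional to its frequency.
    return rng.choices(phrases, weights=weights)[0]

print(reply_to("my dog likes"))  # e.g. "chewing on furniture"
```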

Who owns these formulas?
Traditionally, whoever developed a better method of simulating randomness published it so that others could criticize and copy it. This is why functions of this type have been so successful and widespread: today simulated randomness is used in many applications, including the protection of Internet transactions. With no prospect of becoming billionaires, many scientists discovered new functions and shared them, and everyone made progress. It was a good system. It worked for centuries. And now?
In the field of artificial intelligence, progress at the frontier of knowledge is now dominated by a few private companies. They have access to enough data and enough computing power to discover and exploit extraordinarily powerful functions that no outside observer can understand.
The crossroads
Let's take LaMDA. It is impressive. It shows that artificial intelligence could offer people amazing new ways to access all human knowledge. In time, perhaps, Google will be able to turn LaMDA into the new "search engine". It will appear to us as a much smarter assistant than the current ones. It will listen to us, understand us, and often even anticipate our thoughts.
It will be able, if its owners so choose, to steer our choices, bringing billions of dollars in revenue to the companies that profit from it.
Or, in the hands of disinterested groups, it will improve access to human knowledge, perhaps becoming the new Wikipedia (hopefully better than the old one).
How do you think it will end?
Don't be fooled by functions, no matter how complex they may be. Google's engineers are not modern-day Dr Frankensteins. They are not giving birth to sentient beings. They are just creating the "new Coca-Cola": a hugely widespread, pervasive product based on a secret, unique ingredient whose formula only they know.
Instead of discussing whether a chatbot is "sentient", we should consider the long-term consequences of a system that strays from science: a system that shifts from knowledge that benefits everyone to one in which "secret" knowledge delivers power and profit to a few technology giants, which gradually grow more and more powerful, even politically. The new rulers of a less just world.