You all know the story of the Google engineer who saw signs of personality in Google's latest AI chatbot (and was later fired for it). The unanimous opinion of journalists and, above all, of experts? Contrary, contrary, contrary.
By different paths, everyone seems to have reached the same conclusion, which, if you think about it carefully, is a matter of common sense as much as a technical one. A chatbot, however developed, is a function. Functions are not sentient. But they can be so evolved that they fool people, oh yes they can. So the real question is: who will control them? Will they be used transparently? Follow me.
Why AIs are “just” functions
Let's dust off our rusty school notions and say it: a function is a rule for transforming a number (or a list of numbers) into another number (or list of numbers). By this definition, ALL of today's AI systems are functions, including the LaMDA chatbot that caused a ruckus and cost the engineer his job.
Sure, these are very complex functions, but functions they remain: the software behind a chatbot includes rules to convert a message written in letters of the alphabet into a number that the software uses as input x, and then to convert the output f(x) back into a message in letters. Any current computer, including the one in your cell phone, performs these operations routinely.
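The pipeline just described can be sketched in a few lines. This is a deliberately toy illustration (nothing like LaMDA's actual encoding or model): it only shows that "letters → numbers → f(x) → letters" is all that is structurally going on.

```python
# Toy sketch: a chatbot really is letters -> numbers -> f(x) -> letters.
# The function f here is a trivial stand-in, NOT a real language model.

def encode(message: str) -> list[int]:
    """Convert a message into numbers (here, Unicode code points)."""
    return [ord(ch) for ch in message]

def decode(numbers: list[int]) -> str:
    """Convert numbers back into a message."""
    return "".join(chr(n) for n in numbers)

def f(x: list[int]) -> list[int]:
    """A stand-in 'model': any rule that maps numbers to numbers.
    Here it just shifts each code point by one."""
    return [n + 1 for n in x]

reply = decode(f(encode("hi")))
print(reply)  # prints "ij"
```

Swap the trivial `f` for a function with billions of learned parameters and you have, structurally, a modern chatbot.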
Instead of talking in circles about what is meant by the word “sentient” (which no one seems to be able to define), we could ask ourselves how human beings should learn to harness the extraordinary power of these complex functions.
The core: pseudorandom functions
Mathematicians and engineers have been discovering or developing new functions for some time. For example, “pseudorandom” functions, which generate results that are completely predictable (to anyone who knows their underlying formula), but which appear random to everyone else.
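A classic example of such a function is the linear congruential generator. The constants below are the well-known Park-Miller "minimal standard" values; the point is that anyone holding the formula and the seed can reproduce the whole sequence exactly, while to everyone else it looks random.

```python
# A classic pseudorandom function: the linear congruential generator (LCG).
# Completely predictable to anyone who knows formula and seed,
# apparently random to everyone else.

def lcg(seed: int, n: int) -> list[int]:
    """Generate n pseudorandom numbers from a seed, deterministically."""
    a, m = 16807, 2**31 - 1  # Park-Miller multiplier and modulus
    values = []
    x = seed
    for _ in range(n):
        x = (a * x) % m
        values.append(x)
    return values

# The same seed always yields the same "random" sequence:
print(lcg(42, 5))
assert lcg(42, 5) == lcg(42, 5)
```

Determinism is the whole trick: the output is a fixed function of the seed, not genuine chance.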
During its “training” phase, the chatbot software examines large amounts of text. Imagine it discovering that the phrase “my dog likes” is followed half the time by “playing fetch” and half the time by “chewing furniture.” When the trained chatbot is set to chat with a human, the inputs it receives (the words of its interlocutor) may signal that it is time to say something about a dog.
The running software then uses one of the pseudorandom functions to choose what to say about the dog: “playing fetch” or “chewing furniture.”
Who owns these formulas?
Traditionally, whoever developed a better method of simulating randomness published it so that others could criticize and copy it. This is why functions of this kind have been so successful and widespread: today simulated randomness is used in many applications, including the protection of Internet transactions. Even with no prospect of becoming billionaires, many scientists discovered new functions and shared them, and everyone made progress. It was a good system. It worked for centuries. And now?
In the field of artificial intelligence, progress on the knowledge frontier is now dominated by a few private companies. They have access to enough data and enough computing power to discover and exploit extraordinarily powerful functions that no one looking in from the outside can understand.
The crossroads
Take LaMDA. It's impressive. It shows that artificial intelligence could offer people surprising new ways to access all of human knowledge. Over time, perhaps, Google will modify LaMDA to make it the new "search engine". It will appear to us as a much cleverer assistant than the current ones. It will listen to us, understand us, and often even anticipate our thoughts.
It will be able, if so desired, to guide our choices, bringing billions of dollars in revenue to the companies that benefit from it.
Or, in the hands of disinterested groups, it will improve access to human knowledge, perhaps becoming the new Wikipedia (hopefully better than the current one).
How do you think it will end?
Don't be fooled by the functions, no matter how complex they may be. Google's engineers are not modern-day Dr. Frankensteins. They are not giving birth to sentient beings. They are just creating the "new Coca-Cola": a super widespread, pervasive product, based on a secret and unique ingredient whose formula only they know.
Instead of arguing about whether a chatbot is “sentient,” we should consider the long-term consequences of a system that strays from science. A system that moves from knowledge that benefits everyone to one in which "secret" knowledge hands power and profit to a few technology giants, who gradually become more and more powerful, even politically. New rulers of a less just world.