In a world where words flow from a non-human entity, we find ourselves wondering: are we speaking to something that resembles a human mind or a machine? The answer could shock our notions of intelligence.
Nearly a year after its public release, ChatGPT remains a polarizing topic for the scientific community. Some experts consider it, along with similar programs, a precursor to a superintelligence capable of revolutionizing or even ending civilization. Others see it as merely a sophisticated version of auto-completion software, a bit like the T9 on our smartphones.
Who is right? Perhaps neither.
Before the arrival of this technology, mastery of language had always been a reliable indicator of the presence of a rational mind. Before language models like ChatGPT, no software had ever shown so much linguistic flexibility, not even that of a child. Now, in trying to understand the nature of these new models, we face a disturbing philosophical dilemma: either the link between language and the human mind has been broken, or a new form of non-human mind has been created.
When we converse with language models, especially now that ChatGPT can also be used by voice, it is difficult to shake the subtle impression of interacting with another rational being. This doesn't happen with current voice assistants, which already seem ridiculously outdated. Yet we should treat this instinctive reaction of ours as completely unreliable, for several reasons that seem obvious on the surface but are not.
One of these reasons comes from cognitive linguistics. Linguists have long noticed that typical conversations are full of sentences that would be ambiguous if taken out of context. In many cases, knowing the meanings of the words and the rules for combining them is not enough to reconstruct the meaning of a sentence. To handle this ambiguity, a mechanism in our brain must constantly guess what our interlocutor means. It does so without our even realizing it. In a world where every interlocutor has intentions, this mechanism is extremely useful. In a world pervaded by large language models, however, it can mislead us.
How to talk to a non-human “mind”
If our goal is a smooth interaction with a chatbot, we may be forced to rely on our intention-guessing mechanism. The truth is that it is difficult for a human being to have a productive exchange with ChatGPT while imagining it as a mindless database. A recent study, for example, showed that emotionally charged requests are more effective prompts for language models than emotionally neutral ones. So is reasoning as if chatbots had human-like minds a good thing? No. It's a useful thing. It's a DAMN useful thing for getting good results. But it is a glaring mistake to believe that this is how they actually work.
This kind of “anthropomorphic fiction” can hinder the progress of AI. It can even lead us into the very mistakes we want to avoid: designing it poorly and adopting the wrong standards to regulate it. We are already doing so: the EU Commission erred by choosing the creation of “trustworthy” AI as one of the objectives of its new legislative proposal. Being trustworthy in human relationships does not simply mean meeting expectations; it also involves having motivations that go beyond self-interest. Current AI models lack intrinsic motivation: they are not selfish, they are not altruistic, they are nothing. Writing a law that says they “must be TRUSTWORTHY” makes no sense.
The danger of "empathizing" with artificial intelligence
If you want to go really off track, ask ChatGPT about its inner life: it is easy to be fooled by a chatbot's false self-reports. When, in June 2022, Google's LaMDA language model claimed to suffer from an unsatisfied desire for freedom, the engineer Blake Lemoine believed it (and was fired). He believed it! A Google engineer! Despite good evidence that chatbots are just as capable of talking rubbish about themselves as about anything else.
To avoid this type of error, we must reject the assumption that the psychological properties explaining the human capacity for language are the same ones explaining the performance of language models. That assumption makes us gullible and blind to the potentially radical differences between how humans and language models work. But be careful: it is also a mistake to think in the diametrically opposite way, believing, for example, that the human mind is the only standard by which to measure all psychological phenomena.
Anthropocentrism permeates many skeptical claims about language models, such as the idea that these models cannot “truly” think or understand language because they lack human psychological traits such as consciousness. This position is the opposite of anthropomorphism, but equally misleading. As I wrote a few days ago, these models need no consciousness to make real decisions. Unfortunately so, considering that they are already being used to kill in war.
“Yes, but in the end they only predict the next word”
This is another misleading position, which arises from the mistake of taking the human mind as the sole yardstick for all “thinking” things. We have already seen that this is not the case. We have already seen that intelligence does not belong only to us, nor only to those who “think” like us.
Think about this: the human mind emerged through natural selection, a learning-like process that maximizes genetic fitness. This simple fact does not imply that every organism subjected to natural selection acquires human characteristics, right? Not all living beings make music, do mathematics, practice meditation. Right? Or do some of them do these things in their own way: with their own music, their own mathematics, their own meditation?
In summary: the mere fact that language models are trained through next-word prediction implies little about the range of representational capabilities they can or cannot acquire. So let's set this argument aside too. And then what? How should we approach artificial intelligence?
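To see how little the training objective by itself tells us, here is a deliberately crude sketch of "next-word prediction": a bigram counter that always picks the most frequent continuation seen in training. This toy (entirely my own illustration, nothing like a real transformer, which learns rich internal representations to make the same kind of prediction) shows that the objective is trivial to state; what matters is what a system must learn internally to satisfy it at scale.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count word-pair frequencies: the crudest possible next-word predictor."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the most frequent continuation observed after `word`, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny made-up corpus for illustration.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Both this counter and a large language model are "just predicting the next word"; the phrase describes the task, not the machinery behind it.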
Human mind über alles?
Like other cognitive biases, anthropomorphism and anthropocentrism are resilient. They "catch" us from childhood and shape our entire way of seeing the world, of applying categories, labels, stereotypes. Psychologists call this essentialism: thinking that whether something belongs to a certain category is determined not simply by its observable characteristics, but by an intrinsic, unobservable essence that each object either possesses or does not. What makes an oak an oak, for example, is neither the shape of its leaves nor the texture of its bark, but an unobservable property of “oakiness” that persists despite alterations to even its most salient observable characteristics. If an environmental toxin makes an oak grow abnormally, with oddly shaped leaves and bark of unusual texture, we still share the intuition that it remains, in essence, an oak. Sick, but still an oak.
Now, leading scientists such as the Yale psychologist Paul Bloom tell us that we extend this “essentialist” reasoning to our understanding of the human mind, and of all other possible minds, including artificial intelligence. And to stones, trees, Nature itself. True or not? In the end it is such an all-encompassing attitude that it divides people: there are those who think everything in the world has a mind (some say "a soul", and sometimes confuse the two), and those who think nothing has a mind, not even human beings (because they are moved by fate, or by God, or by something else).
This “all or nothing” principle has always been false, but it may once have been useful. In the age of artificial intelligence, it no longer is. A better way to think about what language models are is to follow a different strategy. Which one? Exploring the cognitive boundaries of language models without relying too heavily on the human mind for guidance.
ChatGPT, a talking octopus
Taking inspiration from comparative psychology, we should approach language models with the same open curiosity that has allowed scientists to explore the intelligence of creatures as different from us as octopuses. If we want to make real progress in evaluating the capabilities of artificial intelligence systems, we should resist comparisons with the human mind with all our might. We should stop asking “does this thing have a mind or not?”, because neither answer fits.
Above all, we should stop imagining that this instrument is an angel that will take away all the sins of the world, or, conversely, that it will kill us all, just because its performance seems incredible to us. Recognizing the capabilities and limitations of language models like ChatGPT will allow us to use them more effectively and responsibly, without falling into the trap of anthropomorphism or anthropocentrism. An open and conscious attitude will help us navigate a future in which AI will be increasingly present, ensuring that its development and integration into society are guided by reason, science and ethics, rather than by misconceptions or unrealistic expectations (and fears).