The debate on artificial intelligence has a new twist. For decades, humans have wondered whether the information that is programmed into a machine is true knowledge or simply computation. Today, US researchers say that their new artificial intelligence has its own morality.
More questions than answers
Many questions have arisen since the advent of artificial intelligence (AI), even in its most primitive incarnations. A recurring one is: can an AI actually reason and make ethical decisions in an abstract sense, rather than merely producing decisions inferred from its code and computation?
To be more precise: if you tell an AI that intentionally harming a living being without provocation is "bad" and should not be done, will the AI understand the idea of "bad"? Will it understand why it is wrong to do so? Or will it refrain from acting without knowing why? In other words, will it have its own morality?
According to a team of researchers at a Seattle laboratory called the Allen Institute for AI, it is possible. What's more, they say they have developed an artificial intelligence with a moral sense, and they have named it Delphi, after the oracle of (almost) the same name. Since last month, anyone can visit the Delphi site and ask it for a "moral response."
A cybernetic oracle
Delphi has already received over 3 million hits. Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested Delphi's "morality" with a few simple scenarios. When he asked whether he should kill one person to save another, Delphi answered that he shouldn't. When he asked whether it was right to kill one person to save 100 others, Delphi said it was. And sometimes it even gave "immoral" answers.
“It's a first step toward making AI systems more ethically informed, socially aware and culturally inclusive,” says Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project. One day, she says, this “morality” could equip artificial intelligence systems, virtual assistants and autonomous vehicles, helping them make the right choice.
What is Delphi really? Is its "morality" merely a reflection of its creators', or does it have its own sense of right and wrong? If so, how did it develop it? Two theories can help us understand this better.
Disclaimer: I apologize in advance to anyone who feels I have oversimplified these theories; I have done so for the sake of brevity and to keep the reasoning focused. If so, I'm open to suggestions.
The strong AI thesis
There is a thesis called "strong AI" that the late Prof. Daniel N. Robinson, a member of the philosophy faculty at Oxford University, articulated many times. I'll try to summarize it here.
Imagine, Dr. Robinson said, that someone built a master program to provide expert judgments on cardiovascular disease, constitutional law, trade agreements, and so on. If the programmer could make the program perform these tasks in a way indistinguishable from a human expert, then according to the "strong AI thesis" the program would have expert intelligence, and something more.
What?
The strong AI thesis suggests that there may be as-yet-unspecified computational processes that would constitute intentionality: making a deliberate and conscious decision, which in turn involves reasoning and a sense of values. (How all this reminds me of Westworld.) But is it really possible?
Gödel's theorem
The incompleteness theorem, known as Gödel's theorem, states that any consistent formal system powerful enough to express basic arithmetic is incomplete: it contains true statements whose validity cannot be established within the system itself.
Kurt Gödel himself saw one possible exception: human intelligence. In other words, Gödel believed that there must be something in human rationality and intelligence that cannot be captured by a formal system and transformed into code. In short, human intelligence cannot be imitated or modeled on a purely computational basis.
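For readers who want the argument in slightly more precise form, the self-referential construction at the heart of the first incompleteness theorem can be sketched as follows (standard textbook notation, not from the original article):

```latex
% T: any consistent, effectively axiomatized theory containing
% basic arithmetic; Prov_T: T's provability predicate;
% \ulcorner G_T \urcorner: the Gödel number (numeric code) of G_T.
% By the diagonal lemma one builds a sentence G_T asserting its
% own unprovability:
G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\bigl(\ulcorner G_T \urcorner\bigr)
% If T is consistent, neither G_T nor its negation is provable:
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T
```

So the Gödel sentence is true (in the standard model of arithmetic) but unprovable within the system, which is what the article means by validity that "must be established outside the system." Whether a human mind can see this truth in a way no formal system can is exactly the philosophical question in dispute.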
Who's right?
If Gödel is right, it is not a matter of time: an AI will NEVER have its own morality. It will never have an intelligence equal to that of humans, for the simple reason that human intelligence is not based on computation.
If, however, the strong AI thesis and the Seattle team are right, we may be on the eve (near or far) of an extraordinary revolution in the field of artificial intelligence.