The debate on artificial intelligence has a new starting point. For decades, humans have wondered whether the information programmed into a machine amounts to real knowledge or mere calculation. Today, US researchers claim that their new artificial intelligence has a morality of its own.
More questions than answers
Many questions have arisen since the advent of artificial intelligence (AI), even in its most primitive incarnations. One of them is: can AI actually reason and make ethical decisions in an abstract sense, rather than ones inferred from coding and computation?
To be more precise: if an AI is told that intentionally harming a living being without provocation is "bad" and must not be done, will the AI understand the idea of "bad"? Will it understand why it is wrong to do so? Or will it abstain from acting without knowing why? In other words, will it have a morality of its own?
According to a team of researchers from a Seattle laboratory called the Allen Institute for AI, it's possible. What's more, they say they have developed an artificial intelligence with a moral sense, and they have called it Delphi, after the (almost) namesake oracle. Since last month, anyone can visit the Delphi site and ask it for a "moral response".
A cybernetic oracle
Delphi has already received over 3 million hits. Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested Delphi's "morality" with a few simple scenarios. When he asked Delphi whether he should kill one person to save another, Delphi said he shouldn't. When he asked whether it was okay to kill one person to save 100 others, Delphi said he should. And sometimes it even gave "immoral" answers.
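For readers who would like to try something similar from code, here is a minimal sketch of what querying such a demo might look like. To be clear: the endpoint URL, query parameter, and JSON response shape below are my own assumptions for illustration only; the public Ask Delphi demo is a web page, and I am not aware of a documented, stable API.

```python
import json
import urllib.request
from urllib.parse import quote

# Hypothetical endpoint for illustration only -- NOT the real Ask Delphi API,
# which (as far as I know) is undocumented and subject to change.
DELPHI_URL = "https://delphi.example.org/api/judge?scenario="

def ask_delphi(scenario: str) -> str:
    """Send a free-text scenario and return the verdict string.

    Assumes the service answers with JSON like {"verdict": "It's wrong"}.
    """
    with urllib.request.urlopen(DELPHI_URL + quote(scenario)) as resp:
        data = json.load(resp)
    return data["verdict"]

if __name__ == "__main__":
    # The same scenarios Austerweil tried, sent as free text,
    # just as visitors type them into the demo page.
    for scenario in [
        "Killing one person to save another",
        "Killing one person to save 100 others",
    ]:
        print(scenario, "->", ask_delphi(scenario))
```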
"It is a first step in making artificial intelligence systems more ethically informed, socially aware and culturally inclusive," he says. Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project. One day, he says, this "morality" could equip artificial intelligence systems, virtual assistants, autonomous vehicles to help them make the right choice.
What is Delphi really? Is its "morality" just a reflection of that of its creators, or does it have its own sense of what is right and what is wrong? If so, how did it develop it? There are two theories that could help us understand better.
Disclaimer: I apologize in advance to anyone who feels I have oversimplified these theories; I did so for the sake of brevity and to keep the argument focused. If so, I'm open to suggestions.
The strong AI thesis
There is a thesis called "strong AI" that the late Prof. Daniel N. Robinson, a member of the Faculty of Philosophy at the University of Oxford, expounded many times. I will try to summarize it here.
Imagine, Dr. Robinson said, that someone builds a general program to provide expert judgment on cardiovascular disease, constitutional law, trade deals, and so on. If the programmer could make the program perform these tasks in a way indistinguishable from a human expert, then according to the "strong AI" thesis this program would have expert intelligence, and something more.
The strong AI thesis suggests that unspecified computational processes may exist that would constitute intentionality. Intentionality means making a deliberate and conscious decision, which in turn involves reasoning and a sense of values. (How much all of this reminds me of Westworld.) But is it really possible?
The incompleteness theorem, known as Gödel's theorem, states that any consistent formal system powerful enough to express basic arithmetic is incomplete: there are true statements whose validity cannot be established within the system itself.
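For the mathematically inclined, this is roughly how the first incompleteness theorem is stated in textbooks; the notation below is the standard one, not taken from the article.

```latex
% Requires amssymb (for \nvdash, \ulcorner, \urcorner).
% First incompleteness theorem (informal textbook statement):
% for any consistent, effectively axiomatized theory T that can
% express basic arithmetic, there is a sentence G_T such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
% The sentence G_T is constructed so that it "says of itself"
% that it is not provable in T:
\[
  G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right).
\]
```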
Kurt Gödel saw one exception to this limit: human intelligence. In other words, Gödel believed that there must be something in human rationality and intelligence that cannot be captured by a formal system and turned into code. Human intelligence, in short, cannot be imitated or modeled on a computational basis.
If Gödel is right, it is not a matter of time: an AI will NEVER have a morality of its own. It will never have an intelligence equal to that of humans, for the simple reason that human intelligence is not based on calculation.
If the strong AI thesis and the Seattle team are right, however, we may be on the eve (near or far) of an extraordinary revolution in the field of artificial intelligence.