The explosion of artificial intelligence and the classic split into "sects" of enthusiasts and catastrophists (never any balance) have reignited the debate on the ethics of AI. A debate that, to be honest, has been alive for decades. Ever since the word "robot" was coined, people have asked how to constrain machines so that they do not destroy humanity. Remember? Read some Asimov and off you go.
The work of Isaac Asimov and his laws
Isaac Asimov's laws of robotics are the most famous example of thinking about how to limit technology. In works such as the short story "Runaround" and the collection "I, Robot", they are built into every artificial intelligence as a safety measure.
Some people deluded themselves that they would somehow work in reality, or at least inspire similar solutions. They won't, and I'll keep it short: Asimov's laws are not real, and there is no way to implement them as written. They are already waste paper, as Midjourney also shows you.
Do you remember them? Shall we review?
Asimov's laws are four in total. The original three:
- First law: a robot cannot harm a human or, by inaction, allow a human to be harmed.
- Second law: a robot must obey orders given by humans, unless such orders contravene the First Law.
- Third law: a robot must protect its own existence, provided that such protection does not contravene the First or Second Law.
The most passionate readers of Asimov also know that there is a fourth law, introduced in 1985 with the novel "Robots and Empire". It is called the Zeroth Law and reads like this:
A robot cannot harm humanity or, through inaction, allow humanity to come to harm.
Isaac Asimov
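On paper, the laws look almost programmable: a strict priority order over what a robot may do. The sketch below (purely illustrative, every name is hypothetical) encodes them that way, and in doing so shows exactly where they break: each check is reduced to a boolean flag, while in reality deciding whether an action "harms a human" is the unsolved ethical problem itself.

```python
# Purely illustrative: Asimov's laws as a priority-ordered veto over
# candidate actions. All names here are hypothetical. The catch: each
# predicate (does this action "harm" a human?) is treated as a simple
# boolean flag, while in reality computing it is the whole problem.

def permitted(action: dict) -> bool:
    """Return True if the action passes the three laws, checked in order."""
    if action.get("harms_human"):     # First Law: never harm a human
        return False
    if action.get("disobeys_order"):  # Second Law: obey human orders
        return False
    if action.get("endangers_self"):  # Third Law: preserve yourself
        return False
    return True

candidates = [
    {"id": "open_door"},
    {"id": "crush_obstacle", "harms_human": True},
    {"id": "ignore_command", "disobeys_order": True},
]
allowed = [a["id"] for a in candidates if permitted(a)]
print(allowed)  # ['open_door']
```

The priority order is the only part that survives contact with reality: the vetoes are easy, the predicates are not.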
Now forget them.
Although he began writing and reasoning in the 1940s, Isaac Asimov understood early on that it would be necessary to program AIs with specific laws to prevent them from doing harm. He also realized that these laws would fail.
The first, because ethical problems are too complex to have a simple yes-or-no answer. The second is by its very nature unethical: it requires sentient beings to remain slaves. The third, because it implies permanent social stratification, with vast potential for exploitation. And the Zeroth Law? It fails on its own, along with all the others.
In summary: Asimov's laws represent an interesting starting point for thinking about the ethics of artificial intelligence, but the real world requires more concrete and adaptable solutions.
Which ones?
Experts are working to ensure that AI is safe and ethical by exploring several directions. The four main ones:
- Transparency and explainability: Algorithms should be transparent and explainable, so that users can understand how and why the AI makes certain decisions.
- Human values and bias: AI systems should be designed to respect fundamental human values and reduce unwanted biases. This includes training on diverse datasets and analyzing the effects of decisions made by AI on various groups of people.
- Safety and reliability: This one is self-explanatory. Malfunctions and cyber attacks must be prevented.
- Control and responsibility: It is important to establish who is responsible for the actions performed by the artificial intelligence, to assign consequences in case of problems.
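The bias point above is the most concrete of the four: the effects of an AI's decisions on different groups can actually be measured. A minimal sketch, with invented data and hypothetical names (real audits use dedicated fairness toolkits and far more careful statistics), is to compare positive-decision rates across groups:

```python
# Illustrative sketch: measure whether a model's positive-decision rate
# differs across groups (a crude "disparate impact" check).
# The data and the 0.8 rule of thumb are examples, not a real audit.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))  # a ratio well below 0.8 would flag the model
```

A check like this says nothing about *why* the gap exists, which is where the transparency and accountability points come back in.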
To these "new Asimov laws" (which are not Asimov's) we must add global regulations and ethical standards: that is why we need international cooperation on the development of artificial intelligence, not sectarianism.
Geoffrey Hinton, one of the "fathers" of AI, has called artificial intelligence "the new atomic bomb". I don't agree, and I'm not alone. It could become one, though, and that would be our fault, not artificial intelligence's. Especially if we conceive of it first as a "club" to wield against others.
Asimov's laws, goodbye. New laws, hurry up
The first autonomous (or rather, semi-autonomous) vehicles already have the "power" to kill people inadvertently. Weapons such as killer drones can even kill while acting autonomously. Let's be clear: AI is currently unable to understand laws, let alone follow them.
The emulation of human behavior has not yet been well studied, and the development of rational behavior has focused on narrow, well-defined domains. These are two very serious shortcomings, because they would allow a sentient AI (which, I stress, does not currently exist, and despite what its Pygmalions say, we do not know if it ever will) to misinterpret any instruction. And end up, in two simple words, out of control.
I don't know how much time we have for this. One year, ten, an eternity. What I do know is that, as with Asimov's laws, someone needs to solve the problem of how to prevent AI from harming humans, and do it now.
Skynet is only fiction, but in fiction it gave us no escape. And you know: science fiction doesn't predict the future, but it often inspires it.