The explosion of artificial intelligence and the classic split into factions, the exalted and the catastrophists (never any balance), have reignited the debate on the ethics of AI. It's a topic that has actually been alive for decades: ever since the word "robot" was coined, we have wondered how to constrain machines so that they do not destroy humanity. Remember? Asimov's Laws, and off we go.
The work of Isaac Asimov and his laws
They are the most famous example of thinking about how to limit technology: Isaac Asimov's Laws of Robotics, which in works such as the short story "Runaround" (collected in "I, Robot") are built into every artificial intelligence as a safety measure.
Some have deluded themselves that the laws would somehow work in reality, or at least inspire similar solutions. They don't, and I'll keep it short: Asimov's laws are not real, and there is no way to implement them in practice. They are already waste paper, as even Midjourney can show you.
First law: a robot cannot harm a human or, by inaction, allow a human to be harmed.
Second law: a robot must obey orders given by humans, unless such orders contravene the First Law.
Third law: a robot must protect its own existence, provided that such protection does not contravene the First or Second Law.
The most passionate Asimov readers know that there is a fourth law, introduced in 1985 with the novel "Robots and Empire". It is called the Zeroth Law and reads like this:
A robot cannot harm humanity or, by inaction, allow humanity to be harmed.
Isaac Asimov
Now forget them.
Though he began writing and thinking about this back in the 1940s, Isaac Asimov understood that it would be necessary to program AIs with specific laws to prevent them from doing harm. He also realized that those laws would fail.
The First Law fails because ethical problems are too complex to have a simple yes-or-no answer. The Second is by its very nature unethical: it requires sentient beings to remain slaves. The Third implies permanent social stratification, with vast potential for exploitation. And the Zeroth Law? It falls apart on its own, along with all the others.
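To see why "there is no way to implement them", it helps to try. Here is a deliberately naive sketch (entirely hypothetical, for illustration only) of what a literal implementation would require. The precedence logic between the laws is trivial to write; every predicate it depends on is not.

```python
# Hypothetical sketch: Asimov's laws as a precedence-ordered rule check.
# The control flow is easy. The predicates are where it all breaks down.

from dataclasses import dataclass

@dataclass
class Action:
    description: str

def harms_human(action: Action) -> bool:
    # "Harm" is context-dependent and often unknowable in advance:
    # does refusing to perform a painful surgery count? Emotional harm?
    raise NotImplementedError("no computable definition of 'harm'")

def inaction_allows_harm(action: Action) -> bool:
    # Requires predicting every consequence of *not* acting.
    raise NotImplementedError("requires forecasting every counterfactual")

def violates_order(action: Action) -> bool:
    # Which human? Which order, when orders conflict with each other?
    raise NotImplementedError("orders are ambiguous and can conflict")

def is_permitted(action: Action) -> bool:
    """First Law outranks the Second, the Second outranks the Third."""
    if harms_human(action) or inaction_allows_harm(action):
        return False  # First Law
    if violates_order(action):
        return False  # Second Law
    return True       # Third Law: self-preservation comes last
```

The structure compiles; the ethics don't. That gap is exactly the point of the previous paragraph.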
In summary: Asimov's laws represent an interesting starting point for reflecting on the ethics of artificial intelligence, but the real world requires more concrete and adaptable solutions.
Which ones?
Experts are working to make AI safe and ethical, exploring several directions. The four main ones:
Transparency and explainability: algorithms should be transparent and explainable, so users can understand how and why an AI makes certain decisions (see the first sketch after this list).
Human values and bias: AI systems should be designed to respect core human values and to reduce unwanted bias. This includes training on diverse datasets and analyzing how the decisions made by an AI affect different groups of people (see the second sketch after this list).
Safety and reliability: self-explanatory. The risk of malfunctions or cyber attacks must be minimized.
Control and accountability: it is important to establish who is responsible for the actions performed by an artificial intelligence, so that consequences can be attributed when problems arise.
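On the first point, a minimal sketch of what "explainability" can mean in practice, assuming scikit-learn is available (the dataset and model here are my choice, not anything prescribed by the principle itself). Global feature importances are a crude but honest first step toward answering "why did the model decide this?".

```python
# Minimal explainability sketch: which features drive the model's decisions?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Rank the features the model leans on most heavily.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:.3f}")
```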
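And on the second point, a minimal bias-audit sketch (the data and the 80% threshold are illustrative assumptions, not a legal standard): compare a model's positive-outcome rate across demographic groups, and flag large gaps for investigation.

```python
# Minimal bias-audit sketch over (group, model_decision) pairs.
# In practice these would come from a held-out evaluation set.
from collections import defaultdict

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("positive rates:", rates)

# Flag any group whose rate falls below 80% of the best-off group's rate
# (a "four-fifths rule"-style heuristic, used here purely as an example).
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("groups to investigate:", flagged)
```

A failed check doesn't prove discrimination; it tells you where to look in the training data and the decision pipeline.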
To these "new Asimov's laws" (which are not Asimov's), global ethical rules and standards must be added: that requires international cooperation on the development of artificial intelligence, not sectarianism.
Geoffrey Hinton, one of the "fathers" of AI, has called artificial intelligence "the new atomic bomb". I don't agree, and I'm not alone. It could become one, though, and that would be our fault, not the AI's. Especially if we conceive it first and foremost as a club to wield against others.
And don't sulk at me, come on.
Asimov's laws, farewell. New laws, hurry up
The first autonomous vehicles (or rather, semi-autonomous ones) already have the "power" to kill people inadvertently. Weapons like killer drones can, in fact, kill while acting autonomously. Let's face it: today's AI cannot understand laws, let alone follow them.
The emulation of human behavior has not yet been studied in depth, and the development of rational behavior has focused on narrow, well-defined domains. Two very serious shortcomings, because they would allow a sentient AI (which, I stress, does not exist at the moment and, despite what its Pygmalions say, we do not know whether it ever will) to misinterpret any instruction. And end up, in two simple words, out of control.
Because of this, I don't know how much time we have. A year, ten, an eternity. I do know that, as with Asimov's laws, someone needs to solve the problem of how to prevent AI from harming humans, and they need to do it now.
Skynet is only fiction, but in fiction it gave us no escape. And you know: science fiction doesn't predict the future, but it often inspires it.