According to the latest United Nations report on the Libyan civil war, the year now ending saw, for the first time, an autonomous weapon system, a killer robot, kill a human being. We may one day remember it as the starting point of a new arms race that could hurt us, very badly.
In recent days (from 13 to 17 December) the UN Convention on Certain Conventional Weapons met on the subject, as it does every five years. Since 1983 this assembly has chosen to limit certain cruel weapons, such as anti-personnel mines. This time it discussed a possible ban on these autonomous weapons. And it failed to reach a consensus to limit them.
It may have been a catastrophic mistake.
Autonomous weapon systems are killer robots. Throughout this post you will hear me call them exactly that: killer robots. Because that is what they are. Armed systems that can operate on their own. Systems built for war, to kill people. Governments around the world are investing heavily in this field.
On the other hand, of course, humanitarian organizations (one of them is Stop The Killer Robots) have been campaigning for rules and prohibitions on the development of these weapons. And rightly so. If we do not limit the use of autonomous weapons, it will end very badly. Disruptive technologies in this field can destabilize every military strategy, increasing the risk of preemptive attacks. Including chemical, biological and nuclear ones.
Given the pace of development of killer robots, the UN meeting that just ended may have been one of the last chances to avoid a new arms race. Perhaps the very last.
There are four dangers that should be obvious to anyone when it comes to killer robot systems.
First problem with killer robots: identification.
Will autonomous weapons ALWAYS be able to distinguish between a hostile soldier and a child with a toy gun? The difference between a single human error (which is also possible) and a badly set algorithm could scale the problem up enormously. The autonomous weapons expert Paul Scharre uses a metaphor: a faulty machine gun that keeps firing even after you take your finger off the trigger. It shoots until the ammunition runs out, because it is just a machine. It doesn't know it is making a mistake.
Whatever people say, artificial intelligence is not yet equipped with a morality of its own (and probably never will be; it may never be able to learn one).
The problem isn't just that an AI makes mistakes, for example when it classifies asthma as a factor that reduces the risk of dying from pneumonia, or when it labels people of color as gorillas. It is that when it makes a mistake, its creators do not know why it made the mistake, and do not know how to correct it. This is why I believe it impossible for killer robots to be developed with a "moral" criterion of any kind.
Second problem with killer robots: low-end proliferation.
The armed forces developing autonomous weapons assume they will be able to contain and control them. Have you ever heard a dumber idea? If the history of weapons technology teaches us one thing, only one thing, it is that weapons spread. This, too, was entirely predictable.
What happened in the past with the Kalashnikov, an assault rifle that became so accessible it ends up in anyone's hands, can be repeated with killer robots. Market pressures can lead to autonomous weapons that are effective, cheap and virtually impossible to stop. Above all: widespread. In the hands of governments, rogue actors, organized crime or terrorist groups.
It may have already happened. The Kargu-2, made by a Turkish defense contractor, is a cross between a drone and a bomb. It uses artificial intelligence to find and track targets. It is a killer robot, and it has already acted autonomously in the theater of the Libyan civil war to attack people.
Third problem of killer robots: high-end proliferation.
If we then consider the "high-end" risks, we reach the zenith. Nations could compete to develop ever more devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear weapons. The moral dangers of escalating weapon lethality would be magnified by escalating weapon use.
Sure, these killer robots are likely to come with expensive ethical controllers designed to minimize collateral damage, chasing the myth of the "surgical" strike. Good material for public opinion, in short. The truth is that autonomous weapons will alter even the most mundane cost-benefit analysis done before planning a war. They will become dice in a deadly game, rolled with ever less concern.
Asymmetric wars, waged on the ground by nations that lack competing technologies, will become more common. The result: enormously widespread instability.
Fourth and final problem: the laws of war
Killer robots (and their proliferation is a sure thing) will undermine humanity's last palliative against war crimes and atrocities: the international laws of war. These laws, codified in treaties beginning with the first Geneva Convention, are the fine line that separates war from something even worse, something I can hardly imagine.
The laws of war are fundamental because they also impose responsibility on those who wage war. Slobodan Milosevic was the president of a country, and he had to answer for his actions before the United Nations International Criminal Tribunal for the former Yugoslavia.
And now? What happens when a killer robot commits war crimes? Who gets tried? The weapon? The soldier? The soldier's commanders? The company that made the weapon? NGOs and experts in international law fear that autonomous weapons will lead to a grave accountability gap.
To convict a soldier of a crime committed using an autonomous weapon, it would be necessary to prove both that the soldier committed a guilty act and that he had the specific intention to do so. A rather complicated dynamic, in a world of killer robots.
A world of killer robots is a world without rules imposing meaningful human control over weapons. It is a world where war crimes will be committed with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be considerably weakened.
A new global arms race
Imagine everyone being able to use all the force they want, whenever they want. With fewer consequences for anyone. Imagine a planet where national and international militaries, insurgent groups and terrorists can deploy theoretically unlimited lethal force at theoretically zero risk, at times and places of their choosing, with no resulting legal liability.