The future of war could involve advanced artificial intelligence (AI) algorithms with the ability and authority to assess situations and attack enemies without human control.
It sounds like a scenario from sci-fi movies like Terminator and The Matrix: technology so advanced that it takes matters into its own hands, unleashing war robots during an armed conflict.
In the movies, AI always ends up attacking humans. In real life, AI could help the military in operations where human control would slow down the mission. An obvious disadvantage is that the enemy may employ equally advanced technology.
And who would the AI and its war robots ultimately attack? As always, humans.
A future of war robots that decide for themselves
The Pentagon is studying combat scenarios in which AI would be allowed to act on its own initiative based on pre-set objectives.
One such exercise took place near Seattle last August.
Several dozen military drones and tank-like war robots were deployed with a simple mission: to find terrorists suspected of hiding among several buildings.
The number of war robots involved was too large for a human operator to control. They therefore received preliminary instructions to find and, when necessary, eliminate enemy fighters. Then they set off.
Technical tests of automated warfare
Run by the Defense Advanced Research Projects Agency (DARPA), the simulation exercise did not involve real weapons. Instead, the drones and robots carried radio transmitters used to simulate interactions with hostile entities.
The drones and war robots were about the size of a large backpack. They were coordinated by artificial intelligence algorithms that devised attack plans.
Some of the war robots surrounded the buildings, others carried out surveillance. Some identified the enemy, others were destroyed by simulated explosives.
It was just one of several artificial intelligence drills conducted to simulate automation in military systems, in situations too complex and fast-moving for humans to decide.
The Pentagon wants war robots to decide
A Wired report explains that there is growing interest at the Pentagon in giving autonomous weapons some degree of freedom in executing orders.
A human would still make the high-level decisions, but AI could adapt to the situation on the ground better and faster than humans.
Another report, from the National Security Commission on Artificial Intelligence (NSCAI), recommended this May that the United States resist calls for an international ban on the development of autonomous weapons.
The commission's argument is that a ban is not a viable practice, however inhuman these weapons may seem in form and substance: as with nuclear weapons, the same algorithms that the United States could employ to power swarms of drones and war robots could be used by other militaries.
A robot apocalypse
"Lethal autonomous weapons that any terrorist can obtain are not in the interest of any national security," says MIT professor Max Tegmark, co-founder of the Future of Life Institute, a non-profit organization that opposes autonomous weapons.
For him, war robots and AI weapons should be "stigmatized and banned like biological weapons".