Artificial intelligence is revolutionizing numerous sectors, and the military field is no exception. The growing ability of machines to make autonomous decisions without human involvement raises crucial questions about the nature of warfare in the near future.
Paul Scharre, vice president of the Center for a New American Security and an expert on AI applied to defense, highlights the potential challenges and opportunities of this evolution. In this context, the recent film "The Creator", though an entertainment product, offers a cultural reflection on the same issues. Let's think about it together.
In the film "The Creator", set during a future war between humans and artificial intelligence, ex-special forces agent Joshua, distressed by the disappearance of his wife, is recruited to find and kill the "Creator", the architect of the advanced AI. In reality, this task could be far more complicated.
The first duel (lost by humans)? In the skies
There are several recent examples that highlight the potential of AI in the military sector. One of them (military drones that are already autonomous even in their decisions) we have discussed here. You have certainly heard of the other: DARPA's AlphaDogfight Challenge, which pitted a human pilot against an AI in a flight simulator. The result? The AI dominated the pilot by a score of 15 to 0, displaying almost superhuman capabilities, such as split-second precision shots that human pilots could never pull off.
In a real dogfight, an AI might already have the upper hand. But what does all this mean for the future of warfare?
AI on the battlefield
The idea of machines making life-or-death decisions raises complex ethical questions. Even if such machines aim for far greater, "almost" infallible precision, any errant action would constitute a war crime: but who would be directly responsible? According to Scharre, establishing responsibility and maintaining direct control over an AI may not be so simple.
In the near future, however, AI is likely to be used by humans primarily for tactics and analysis. AI can process information more efficiently, making militaries more effective. But this could also lead to an increasing reliance on AI for decision making, as the competitive advantage in a military environment may be too tempting to ignore.
In summary, for Scharre, humans are building systems that they do not fully understand and may not be able to control, leaving us to face unprecedented threats.
The AI arms race now seems inevitable
Experts have long since set aside their reservations on the matter: we will see a kind of arms race for artificial intelligence, similar to the one for nuclear weapons.
Some Chinese scholars have hypothesized a "battlefield singularity": a point where the speed of AI-driven decisions surpasses humans' ability to keep up. In such a scenario, we may have to "hand over the keys" to autonomous systems.
On the outcome of a hypothetical war between humans and an artificial intelligence beyond our control, Scharre is not optimistic. And he makes it clear with a provocative and striking comparison: could we ever lose a hypothetical war against monkeys? Draw your own conclusions.