While the world grows increasingly fascinated (and a little obsessed) with advances in artificial intelligence, Edward Snowden sounds a warning that goes against the grain. The former NSA contractor, known for his revelations about mass surveillance, shifts attention away from "expressive" AI models toward a far more tangible and immediate threat: killer drones and combat robots.
The distraction of AI chatbots
The Google Gemini incident, in which the AI chatbot showed limitations in generating relevant images, or even refused to generate any at all, sparked a lively debate. Even today Elon Musk, on his social network, hurls invective at Google's technology, "guilty" of spreading woke ideology and distorting history by depicting a black George Washington.
For Snowden, these controversies are just noise: a distraction from the real dangers. He openly criticizes those who, caught up in the frenzy of "sabotaging" AI chatbots with safety filters, lose sight of far more concrete threats to global security. Threats these companies may themselves help create, given that the same organizations (despite their professed ethics) are also open to working with military entities.
Snowden: Get your priorities right
According to Snowden, there is a serious disconnect from reality in how modern society sets its priorities. While a significant share of public opinion and experts focuses on the limits and dangers of AI, far more dangerous technologies, such as armed drones and military robots, are already a reality.
The latter, unlike "conversational" chatbots, already have the power to kill. Worse: they have already killed. They have been deployed in various conflicts and raise far more serious ethical and legal questions. And yet what are we focusing on instead?
The irony of AI "protection"
The AI debate is stale, Snowden says: stuck on the clash between freedom of expression and the need to regulate potentially harmful content. He notes, with some irony, that this "protection" paradoxically aims to limit AI's capabilities rather than expand them safely.
The Google Gemini affair is cited as an emblematic example of this contradiction.
Is the moderation of AI chatbots really a threat to freedom of expression or data security? The use of military drones is a far more direct threat to human life, and it is essential to reorient the public and political debate toward these lethal technologies.
Spoiler: I think Snowden is right
Snowden's remarks (if you want to learn more, you can find them here) open a window onto a reality often overlooked in the heat of the AI debate. His criticism is not limited to the technology itself but extends to society and its priorities. In a tone that oscillates between irony and seriousness, he invites us to reflect on what we should really fear and what we should really be fighting.
Snowden reminds us that there are even more pressing challenges and dangers demanding our immediate attention: dangers with a direct impact on life and death, which deserve a prominent place in the global debate on technology and safety. This time, more than ten years after those first revelations, shall we listen to him a little more?