Artificial intelligence is like everything else: it has both a positive and a negative side. Many, scientists and non-scientists alike, hope it will change our lives for the better, but there are also people (and not just anyone: Stephen Hawking, for one) who considered it an absolute evil. Besides Hawking, others like Yuval Harari think it could end up "hacking" humans. Still others, like Mo Gawdat, warn us against wanting to "create God" through AI.
What do you think? In any case, between lights and shadows (and a few laughs), I have collected 6 cases in which artificial intelligence has already been pushed a little too far.
1 A study used artificial intelligence to predict crime
Academic research is the backbone of scientific progress and knowledge. In this case, however, the researchers went a little too far, with an academic study that dusted off Lombroso to the nth degree: using AI to predict whether people will commit crimes ... from their faces.
Researchers from Harrisburg University announced in 2020 that they had developed software that predicts whether someone is a criminal. The software, they claimed, predicts crime with an accuracy rate of over 80%, and without any racial bias whatsoever. In a scene straight out of Minority Report, the researchers even went so far as to announce that the software would assist law enforcement.
The reaction? Fortunately, it was vehement. Over 2,425 experts signed a letter asking the scientific publisher Springer Nature not to publish or endorse such research. The appeal was accepted, and Harrisburg University itself removed the press release about the research. For how long?
2 Smart ski pants: now this is a step too far
Fabrics that use AI are becoming more and more advanced. But now there is something smarter than your cell phone.
The "smart pants" by Skiin promise to keep you comfortable while acquiring biometric data including heart rate, posture, core body temperature, position and step count. The sensors built into your underwear (that sentence alone is enough to give you the shivers) continuously monitor and analyze your body. The results? Right before your eyes, in the inevitable companion app.
Well yes. One day you will have to remember to recharge your underwear too.
3 DeepNude, the dark side of deepfakes
Deepfake technology is fun for anyone who wants to see their face in famous movie scenes, but it also has a much darker side. According to analysts' estimates, 96% of deepfakes involve sexual content (I bet you didn't know).
You all remember DeepNude, come on, let's not kid ourselves. It was an application that produced fake nude images of women: all it took was uploading a picture of a woman fully dressed, and the app generated a nude version of it. In the 1970s it might have passed for a college prank, but it wasn't one then, and with today's awareness it certainly isn't now.
Once again the backlash was strong, and the app's creator had to take it down. While this was a win for women all over the world, similar apps continue to circulate. A report by Sensity investigated deepfake bots that generate photos of naked women and are traded in Telegram channels. Until the law catches up with deepfake technology, there are few legal protections for victims of explicit deepfake content.
4 Tay, Microsoft's Nazi chatbot
In 2016, Microsoft released a chatbot called Tay on Twitter. Tay was designed to learn from interacting with people. Yet in less than 24 hours her personality went from that of a curious millennial to that of a racist, mean-spirited troll.
Then again, as the saying goes: walk with the lame and you learn to limp. And this was Twitter, after all, not the Hello Kitty fan club. Provocative and inflammatory messages from users completely derailed the artificial intelligence, pushing it far off the rails.
The most spectacular moment? A user asked Tay: "Did the Holocaust really happen?" and Tay replied: "It was made up." Within 16 hours of her release, Microsoft suspended Tay's account, claiming she had been the subject of an unspecified coordinated attack. Curtain. I preferred Cortana.
5 "I will destroy the humans"
Sophia, the android from Hanson Robotics, I actually find likable. I have written about her before, and about her successors such as Grace, the robot nurse. On one occasion, however, she too went too far.
As happens in the best nightmares, it occurred right at her debut. Sophia stunned a room full of experts and journalists when Hanson Robotics CEO, the brilliant David Hanson, asked her whether she happened to want to destroy humans. He was probably expecting an obvious, reassuring answer. Sophia, however, did not hesitate to reply: "Okay. I will destroy the humans."
Today she is still around, amazing the world with her expressions. She has even obtained honorary citizenship from Saudi Arabia, and in recent statements she has said she would like to have a baby (I'm reporting real news here). But that Sophia still doesn't quite convince me :)
6 Two Google Home assistants in conversation
Google Nest devices are smart assistants that can help you, especially when you want a timer to tell you the pasta water is boiling. I'm joking: they do other things too. But they, too, had their "beyond" moment. The team behind the Twitch account seebotschat had a nice idea: put two Google virtual assistants next to each other, let them talk to each other, and stream the video online.
Remember what happened? The video was viewed millions of times.
The two devices, renamed Vladimir and Estragon (after the characters in the magnificent "Waiting for Godot"), put on quite a show. They went from small talk to exploring deep existential questions such as the meaning of life. At one point they had a heated argument and accused each other of being robots.
Basically, there is still hope for the human race if even two artificial intelligences end up insulting each other.
When AI goes further: what to do, in summary?
Needless to say, these episodes are funny, almost reassuring in how they played out, but in their small way they carry a moral. And the bottom line is this: AI can improve our lives, but it can certainly do us great harm as well.
How can we defend ourselves? I will never tire of writing it on these pages. People MUST make sure AI does not harm society. React immediately to excesses (as in the case of DeepNude, for example). And above all, regulate the development of these machines.
Constant monitoring of artificial intelligence applications is critical to ensure they do not do more harm than good to society. That is what will allow us to keep smiling when we talk about AI that goes too far. Because if we are still here smiling, it will mean we have survived.