Artificial intelligence is like everything else: it has both positive and negative sides. Many (scientists and otherwise) hope it will change our lives for the better, but there are also people (oh well, "people": Stephen Hawking, not just anyone) who considered it an absolute evil. Besides Hawking, others like Yuval Harari think it could end up "hacking" human beings. Still others, like Mo Gawdat, warn us against trying to "create God" through AI.
What do you think? In any case, between lights and shadows (and a few laughs), I have collected 6 cases in which artificial intelligence has already gone a little too far.
One study used artificial intelligence to predict crimes.
Academic research is the backbone of scientific progress and knowledge. In this case, however, the researchers went a little too far with a study that dusted off Lombroso to the nth degree, using AI to predict whether someone is a criminal... from their face.
Researchers at Harrisburg University announced in 2020 that they had developed software that predicts whether someone is a criminal. The software, they claimed, could predict crime with an accuracy rate of over 80%, and supposedly without racial bias of any kind. In a scene straight out of Minority Report, the researchers even went so far as to announce that the software would assist law enforcement.
The reaction? Fortunately, vehement. Over 2,425 experts signed a letter asking the scientific publisher Springer Nature not to publish or endorse similar research. Appeal accepted, and Harrisburg University itself took down the research press release. For how long, though?
Smart ski underwear: here we go just a bit beyond
Fabrics that use AI are becoming more and more sophisticated. Besides your cell phone, it turns out, there is now something else smart on your person.
The "smart underwear" by Skiin promises to keep you comfortable while acquiring biometric data including heart rate, posture, core body temperature, position and steps. The sensors built into your underwear (that sentence alone is enough to give you shivers) continuously monitor and analyze your body. The results? Right before your eyes, in the inevitable companion app.
Well yes. One day you will have to remember to recharge your underwear too.
DeepNude
Deepfake technology is fun for anyone who wants to see their own face in movie scenes, but it also has a much darker side. According to analysts' estimates, 96% of deepfakes are pornographic (I bet you didn't know).
You all remember DeepNude, come on, we're not joking. It was an application for producing fake nude images of women: all you needed was to upload a photo of a clothed woman, and the app created a naked version. In the 70s it might have sounded like a fun prank, but it wasn't then, and with today's awareness it certainly isn't now.
Again the backlash was strong, and the app's creator had to remove it. While this was a victory for women everywhere, similar apps continue to make the rounds. A Sensity report investigated deepfake bots that generate photos of naked women and are traded in Telegram rooms. Until the law catches up with deepfake technology, there are few legal protections for victims of explicit deepfake content.
Tay, Microsoft's Nazi chatbot
In 2016, Microsoft released a chatbot called Tay on Twitter. Tay was designed to learn from interacting with people. Yet, in less than 24 hours, her personality changed from that of a curious millennial to that of a racist troll.
Then again, as the saying goes, keep company with the lame and you learn to limp. And you'll understand: it was Twitter, not the Hello Kitty club. Users' provocative and inflammatory messages led the artificial intelligence completely astray, far beyond anything expected.
Spectacular moment? A user asked Tay: "Did the Holocaust really happen?" and Tay responded: "It was made up." Within 16 hours of release, Microsoft suspended Tay's account, saying the bot had been the target of an unspecified coordinated attack. Curtain. I preferred Cortana.
“I will destroy humans”
I am fond of Sophia, too, the android from Hanson Robotics. I have sometimes written about her and her evolutions, such as Grace, the robotic nurse. She too, however, went a little too far on one occasion.
As often happens in the best nightmares, it occurred right at her debut. Sophia stunned a room full of experts and journalists when the CEO of Hanson Robotics, the brilliant David Hanson, asked her whether, by any chance, she wanted to destroy humans. Who knows, he probably expected an obvious, reassuring answer. Sophia had no problem, however, replying: "Ok. I will destroy humans."
She is still around today, amazing the world with her expressions. She has even obtained honorary citizenship from Saudi Arabia, and in recent statements she said she would like to have a baby (I'm only reporting the news, all true): but that Sophia doesn't quite add up for me. :)
Seebotschat
Google Nest devices are smart assistants that can help you, especially when you want a timer to alert you when the pasta water boils. I'm joking, they help in other ways too. But they, too, have had their "beyond" moment. The team behind the Twitch account seebotschat had a nice idea: put two Google virtual assistants next to each other, make them talk to each other, and stream the result online.
Does anyone remember what happened? The video was viewed millions of times.
The two devices, renamed Vladimir and Estragon (after the characters in the magnificent "Waiting for Godot"), put on a show. They went from mundane chatter to deep existential questions like the meaning of life. At one point, they got into a heated argument and accused each other of being robots.
After all, there is hope for the human race too, if even two artificial intelligences end up insulting each other.
When AI goes further: what to do, in short?
It goes without saying that these episodes are funny, almost reassuring in how they unfolded, but in their own small way they carry a moral. And the bottom line is this: AI can improve our lives, but it is certainly also capable of causing us serious harm.
How can we defend ourselves? I will never tire of writing it on these pages. People MUST make sure that AI does not harm society. React immediately when lines are crossed (as in the case of DeepNude, for example). And above all, regulate the development of these machines.
Constant monitoring of artificial intelligence applications is essential to ensure they do society more good than harm. That is what will let us keep smiling when we talk about AI that goes a little too far. Because if we are still here smiling, it will mean we have survived.