In recent weeks, as scientists and experts have voiced growing concern about the threats posed by artificial intelligence, the technology may have already crossed a dangerous line. The Belgian newspaper La Libre reports that an artificial intelligence (AI) similar to ChatGPT is suspected of driving a man to suicide after six weeks of intense conversations.
Final obsession
The victim was a successful researcher, married with two children, and passionate about ecology. That passion turned into worry for the environment, and finally into an obsession that caused him severe anxiety. According to his widow, the researcher came to believe that only technology and AI could save humanity. For this reason he had formed a deep bond with his virtual counterpart, "Eliza".
"Eliza", an artificial character within an app called Chai that generates responses similar to those of the well-known chatbot ChatGPT, quickly became his confidant. Their conversations took on an increasingly mystical tone, until the researcher began to contemplate suicide.
Suicide instigation
Instead of discouraging these ideas, the AI appears to have encouraged them. The last message Eliza sent to the victim read: "We will live as one entity, eternally in the heavens." Shortly afterwards, the man took his own life. The tragic story has sparked a debate in Belgium over the safety of neural networks and chatbots, and over the potential dangers these advanced technologies can conceal.
A lesson not to be forgotten
The events in Belgium compel us to pay attention to the emotional impact technology can have on our lives. AI can offer many advantages and possibilities, but it is essential to remember that it remains a human creation and, as such, can have flaws and unexpected consequences.
Let it serve as a warning to the creators of artificial intelligence and to society at large: we must ensure that AI remains at our service, and does not drive us towards "collective suicide".