Last week we addressed the panic whipped up by the media about artificial intelligence. One way, I argued, of shutting down a healthy debate on the opportunities and risks of a technology that could completely change our society. There is, of course, the other side of the coin: a very large group of scientists who are not at all concerned about the possible dangers. Are they right or wrong?
Because there are those who turn a blind eye
“Have you ever thought that artificial intelligence could cause the end of humanity?” At a recent White House press briefing, spokeswoman Karine Jean-Pierre laughed when faced with this question. Too bad the question deserved a serious answer. Even though AI pioneers such as Alan Turing had already warned of the risk of “machines taking over,” many of today's researchers seem not to care at all. Yet AI is progressing at an incredible rate. So why don't they discuss it more?
David Krueger, a professor in the Department of Engineering at the University of Cambridge, argues that the reasons are above all cultural and historical. After several phases in which excessive, idealistic expectations were projected onto these technologies (utopia or dystopia), researchers decided to turn to practice. They focused on specific applications, such as autonomous driving, and stopped asking questions about the long-term implications.
Did they do well or badly? What if the risks were real?
A basic argument of the “worried” (a very different category from the “catastrophists,” mind you) is the analogy between AI and humans. Just as humans have driven other species to extinction while competing for resources, artificial intelligence could do the same to us. It could replace us, in other words. Economically and politically. Physically.
These are themes that sound enormous, almost like science fiction. And in fact, the risks of AI are often ignored precisely because they are considered “unscientific.” That does not justify the lack of attention, however. We should instead approach these problems as we do other complex social issues. And here a crucial element comes into play: funding. Most AI researchers receive funding from tech giants, creating potential conflicts of interest that can shape how experts approach AI-related problems, leading to a denial of risks rather than an objective assessment of possible threats.
For this reason, instead of “leaning” toward one of the two poles (the exercise dearest to those who steer finance and the media: divide and conquer), public opinion should look ahead, or rather inward. Into the substance of things, demanding that the subject be explored in depth.
It's time to get serious
The existential risks of AI may be more speculative than real when compared with pressing issues like bias and fake news, but the basic solution is the same: regulation. It's time to start a robust public debate and address the ethical issues that AI raises. Everything else is boredom, or rather, dullness.
Because we know it: we cannot afford to ignore the potential risks of artificial intelligence to humanity. An open and honest public debate, one that accounts for conflicts of interest and ethical responsibilities, is essential. Only then will we be able to tell whether the laughter of a White House spokeswoman is truly appropriate or, on the contrary, a sign of collective recklessness that really is dangerous.