Artificial intelligence (AI) has the potential to revolutionize our world. But this promise, of course, comes with significant risks.
At least from where I stand, the pessimistic side of the discussion clearly prevails. For reasons I cannot explain, even AI developers themselves compete in proclaiming that "we will all die", while making good money from their platforms.
In contrast, a handful of people believe the prospect of an artificial intelligence boom will be our salvation.
In other words, it seems that humanity has only two options in its approach to artificial intelligence:
- We fear the potential negative consequences of advanced AI, and therefore try to stop its development;
- We appreciate the benefits of advanced AI, and therefore we strive to achieve these benefits as quickly as possible despite the risks.
In my native Naples, a famous saying would sum it up well: "either 1 or 90". What explains this total lack of balance? Is there any middle ground?
The path of responsibility
Our 'binary' vision, hardened by social media and by outlets that amplify sensationalist or apocalyptic statements, neglects a third path. And that is a great shame, because it is the best path, the only one that can deliver the desired benefits of advanced AI: responsible development.
Let's make an analogy: imagine a journey to a kind of golden valley. Imagine, then, that in front of this valley there is a sort of uncertain swamp, perhaps populated by hungry predators hiding in the shadows.
Do we only have the choice between walking through it and running away? Between fear (and escape) and recklessness (moving forward without precautions)? Or is there a third way: understanding the situation better and calmly seeking the safest way to cross the swamp? You already know the answer. Even the most frightened among you know it. And to those focused only on the "existential risk", and not on the fact that there is also an "existential opportunity", I recommend this reflection by the philosopher and emerging-technologies expert Max More.
Risk assessment requires balance
Metaphor aside, the journey is toward the "sustainable superabundance" that advanced AI could help us achieve, provided we act with wisdom and balance.
And how do we act wisely? First of all, by laying the foundations for sensible regulation. According to some philosophers and experts in emerging technologies, putting artificial intelligence "on rails" reduces its learning capacity and potential benefits. It depends.
In the near future we will have more and more "local" and task-specific artificial intelligences (even fully personal assistants), and rules can and must be calibrated to the scale of potential failures. When failures are local, there is merit in allowing them to occur. But where a global outcome is at risk, a different mentality will be needed: that of responsible development.
In a nutshell: it is not wise to stop artificial intelligence, nor is it wise to let it run freewheeling. As mentioned, neither of the two current paths (stopping or sprinting ahead) leads anywhere good.
We must not stop, we must not run
What is urgently needed is a deeper and more thoughtful investigation into the various scenarios in which AI failures can have catastrophic consequences. We must assess which sectors are most at risk and what damage we could suffer, then introduce new "rules" (ideally harder to circumvent than Asimov's) to keep AI within bounds.
Only in this way can we significantly increase our chances of finding a safe way to cross the swamp of uncertainty that lies between us and the technological Singularity.