Artificial intelligence (AI) has the potential to revolutionize our world. Of course, this promise comes with significant risk.
At least within my cognitive horizon, the pessimistic side clearly prevails. For some reason, even AI developers themselves compete in proclamations that "we will all die", while continuing to profit handsomely from their platforms.
In contrast, a handful of people consider the prospect of an artificial intelligence boom to be our salvation.
In other words, it seems that humanity has only two options in our approach to artificial intelligence:
- We fear the potential negative consequences of advanced AI, and therefore try to stop its development;
- We appreciate the benefits of advanced AI, and therefore we are committed to achieving these benefits as quickly as possible despite the risks.
In my Naples, the situation is neatly captured by a famous saying: "o 1, o 90" (either 1 or 90, that is, all or nothing). What explains this total lack of balance? Is there any middle ground?
The path of responsibility
Our 'binary' vision, hardened by social networks and media that amplify sensationalist or apocalyptic statements, neglects a third path. And that is a pity, because it is the best path, the only one that can deliver the hoped-for benefits of advanced AI: responsible development.
Let's make an analogy. Imagine a journey toward a kind of golden valley. Then imagine that in front of this valley lies an uncertain swamp, perhaps populated by hungry predators hiding in the shadows.
Is our only choice between crossing it and running away? Between fear (and flight) and recklessness (pressing ahead without precautions)? Or is there a third way: understanding the situation better and looking for the safest route across the swamp, with balance? You already know the answer. Even the most frightened among you know it. And to those focused only on the "existential risk", overlooking the fact that there is also an "existential opportunity", I recommend this reflection by the philosopher and expert on emerging technologies, Max More.

Risk assessment requires balance
Put simply, the journey is toward a "sustainable superabundance" that could be achieved with the assistance of advanced AI, provided we act wisely and with balance.
And how do we act wisely? First of all, by laying the foundations for traditional regulation. According to some philosophers and experts on emerging technologies, putting artificial intelligence on "rails" reduces its learning capacity and potential benefits. It depends.
In the near future, as we come to rely on increasingly "local" and specialized AI (even entirely personal assistants), our approach can and should be sensitive to the scale of potential failures. When failures are local, there is merit in allowing them to occur. But when there is a risk of a global outcome, a different mentality will be needed: that of responsible development.
In a nutshell: it is not wise to stop artificial intelligence, and it is not wise to let it freewheel. As mentioned, neither of the two current roads (stopping or running) leads us anywhere good.
We must not stop, we must not run
What is urgently needed is a deeper and more thoughtful investigation of the various scenarios in which AI failures could have catastrophic consequences. We must evaluate which sectors are most at risk and what damage we could suffer, then introduce new "rules" (possibly non-bypassable, like Asimov's laws) so that AI does not overstep its bounds.
Only in this way will we be able to significantly increase the probability of finding a safe way to cross the swamp of uncertainty that exists between us and the technological Singularity.