The advent of artificial intelligence, and in particular of a hostile AI that dominates mankind, has been a central topic of debate since the last century.
From E.M. Forster's story "The Machine Stops", published in 1909, through the "Terminator" saga, to the recent TV series "Westworld", our collective imagination has already contemplated this hypothesis. And it has pictured the advent of a hostile AI as carnage.
Moreover, this is a problem that may remain hypothetical only a little while longer. Scientists and engineers are seriously concerned that the "overtaking" of human intelligence by artificial intelligence (the event that would usher in the technological singularity anticipated by transhumanists) could prove to be humanity's greatest error.
Current trends show a real "arms race" to obtain this technology, which would confer an enormous competitive advantage. On closer inspection, the real questions become these: how can this unbridled and poorly regulated race be stopped? And how can we foster the harmonious development of an AI that respects ethical and safety constraints?
Developing an artificial superintelligence that does not become hostile poses two challenges.
First challenge: unity of purpose. This AI needs to hold in its memory (its mind?) the same goals as mankind. Without that alignment, an AI could "casually" destroy humanity as a useless trinket, or as an obstacle to be removed.
Second challenge: mutuality. This is a political problem: ensuring that the benefits of an AI do not accrue to a small elite, producing frightening social inequalities.
Under the current "arms race" model, AI developers tend to ignore both challenges in order to get there first. This makes the danger of a hostile AI emerging very concrete.
National policies are needed that curb unbridled competition, thin out the crowd of companies and entities developing AI, and impose safety requirements on those who want to compete. Less battle and more rules means less pressure and more attention to safety.
A study by researchers Wim Naudé and Nicola Dimitri suggests practical initiatives to avert the advent of a hostile AI. It describes the current picture with precision, and it opens with a surprising correction: there are in fact already few competitors, yet this does not guarantee safety.
Groups and entities in the USA, China and Europe are few in number because of the enormous cost of developing an AI, but they remain hyper-competitive.
For this reason, other strategies would be useful to nation states.
States have a duty to "calm" the race by offering binding public contracts to as many actors as possible: more money, even much more, for AI development, but under specific safety conditions.
They can also support those who want to develop "containment" technologies for AI, on the assumption that even the most intelligent and hostile AI may have weaknesses and not be completely invulnerable.
Furthermore, cooperation can be encouraged by reshaping taxes and incentives in favor of those who choose to collaborate.
Beyond these solutions (which the study explains in greater detail) there is a supreme one: taking this field out of private hands. As with atomic energy and other technologies and resources considered both advantageous and dangerous for everyone, AI too must be treated as a common good, and safeguarded so that it does not take the wrong path.
Nobody is sure of anything at the moment. Even the supposedly hostile nature of AI is under discussion. Some scientists believe there will never be a "rebellion" of the creation against its creator. Many others foresee a fusion of humans and AI: we will not be annihilated but improved.
In any case, our planet and humanity stand to benefit enormously from AI, and the signs are already visible today.