The advent of artificial intelligence, and in particular of a hostile AI that could dominate mankind, has been a central topic of debate since the last century.
From E.M. Forster's 1909 short story "The Machine Stops" to the recent TV series "Westworld", by way of the "Terminator" saga, our collective imagination has already contemplated this hypothesis, and it has concluded that the advent of a hostile AI would be carnage.
However, this is a problem that may not remain hypothetical for much longer. Scientists and engineers seriously worry that this "overtaking" of human intelligence by artificial intelligence (which would give rise to the technological singularity feared by transhumanists) could prove to be humanity's greatest error.
Current trends show a veritable "arms race" to obtain this technology, which would confer an enormous competitive advantage. On closer inspection, the real question becomes: how do we stop this reckless, unbridled race? How do we enable the more harmonious development of an AI that respects its own "ethics" and safety parameters?
Developing an artificial superintelligence that does not turn into a hostile AI brings two challenges with it.
First challenge: unity of purpose. Such an AI needs to hold in memory (in mind?) the same objectives as humankind. Without this, an AI could "absent-mindedly" destroy humanity, judging it a useless frill or an obstacle to be removed.
Second challenge: mutuality. This is a political problem: ensuring that the benefits of AI do not go to a small elite, producing frightening social inequalities.
Under the current "arms race" model, AI development tends to ignore both challenges in order to get ahead. This makes the danger of the emergence of a hostile AI truly concrete.
Possible solutions
National policies are needed that prevent unbridled competition, reduce the crowd of companies and entities developing AI, and impose safety requirements on those who want to compete. Less battle and more rules mean less pressure and more attention to safety.
A study by researchers Wim Naudé and Nicola Dimitri suggests practical initiatives to avoid the advent of a hostile AI. It describes the current picture with precision and opens immediately with a rebuttal: curiously, there are already few competitors, yet this does not guarantee safety.
The groups and entities involved, based in the USA, China, and Europe, are few in number because of the enormous costs of developing an AI, but they are still hyper-competitive.
For this reason, other strategies would be useful to nation states.
States have a duty to "calm" the battle by offering contracts that bind as many parties as possible: more money, even much more, for the development of AI, but under specific safety conditions.
They can also offer help to those who want to develop "containment" technologies for AI, on the assumption that even the most intelligent and hostile AI may have weaknesses and not be completely invulnerable.
Furthermore, cooperation can be encouraged by reshaping taxes and incentives to favor those who decide to collaborate.
Common good
In addition to these solutions (which the study explains in greater detail) there is an overarching one: taking this field out of private hands. As with atomic energy and other technologies and resources considered both advantageous and dangerous for everyone, AI must also be treated as a common good and protected so that it does not take the wrong path.
No one is certain of anything at the moment. Even the possibly hostile nature of AI is a matter of debate. For some scientists, a "rebellion" of the creation against its creator will never happen. For many others, there will be a fusion of humans and AI: we will not be destroyed but improved.
In any case, our planet and humanity will benefit enormously from AI, and the signs are already visible today.