Near future
Understand, anticipate, improve the future.
Robotica, Technology

AI, Asimov's laws are waste paper

The ethics of artificial intelligence will definitively send Asimov's famous laws of robotics to the back burner.

May 13, 2023
Gianluca Riccio
⚪ 5 minutes

The explosion of artificial intelligence, and the classic division into "sects" of enthusiasts and catastrophists (never any balance), have reignited the debate on the ethics of AI. A topic that has actually been alive for decades. Ever since the word "robot" was coined, we have wondered how to limit machines so that they do not destroy humanity. Remember? Asimov's laws, and off you go.

The work of Isaac Asimov and his laws

They are the most famous example of thinking about limiting technology: Isaac Asimov's laws of robotics, which in works such as the short story "Runaround" and the collection "I, Robot" are built into every artificial intelligence as a safety measure.

Some deluded themselves that the laws would somehow work in reality, or at least inspire similar solutions. They won't, and I'll be brief: Asimov's laws are not real, and there is no way to implement them. They are already waste paper, as the Midjourney images in this article also show.

Asimov's laws

Do you remember them? Shall we review?

Asimov's laws are four:

  1. First law: a robot cannot harm a human or, by inaction, allow a human to be harmed.
  2. Second law: a robot must obey orders given by humans, unless such orders contravene the First Law.
  3. Third law: a robot must protect its own existence, provided that such protection does not contravene the First or Second Law.

The most passionate readers of Asimov know that there is a fourth law, introduced in 1985 with the novel "Robots and Empire". It is called the Zeroth Law and reads like this:

A robot cannot harm humanity or, by inaction, allow humanity to be harmed.

Isaac Asimov

Now forget them.

Although he began writing and thinking in the 1940s, Isaac Asimov not only understood that it would be necessary to program AIs with specific laws to prevent them from doing harm; he also realized that these laws would fail.

The first, because ethical problems are too complex to have a simple yes-or-no answer. The second is by its very nature unethical: it requires sentient beings to remain slaves. The third, because it implies permanent social stratification, with a vast amount of potential exploitation. And the Zeroth Law? It fails on its own, along with all the others.

In summary: Asimov's laws represent an interesting starting point for reflecting on the ethics of artificial intelligence, but the real world requires more concrete and adaptable solutions.

Which ones?

Experts are working to ensure AI is safe and ethical, exploring several directions. The four main ones:

  1. Transparency and explainability: algorithms should be transparent and explainable, so users can understand how and why an AI makes certain decisions.
  2. Human values and bias: AI systems should be designed to respect core human values and reduce unwanted bias. This includes training on diverse datasets and analyzing the effects of AI decisions on various groups of people.
  3. Safety and reliability: this is self-explanatory. The risk of malfunctions or cyber attacks must be minimized.
  4. Control and responsibility: it is important to establish who is responsible for the actions performed by an artificial intelligence, so that consequences can be attributed when problems arise.
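
The first of these directions, explainability, can be made concrete with a minimal sketch: a linear scoring model whose decision decomposes into per-feature contributions, so a user can see why the model answered as it did. All names, weights, and the loan-screening scenario below are illustrative assumptions, not any real system.

```python
# Minimal explainability sketch: a linear "decision" model whose output
# can be decomposed into per-feature contributions. All names and weights
# here are illustrative assumptions, not a real deployed system.

def explain_decision(features, weights, bias=0.0, threshold=0.5):
    """Return the decision, the score, and each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    decision = score >= threshold
    return decision, score, contributions

# Hypothetical loan-screening example
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
applicant = {"income": 1.0, "debt": 0.5, "years_employed": 2.0}

decision, score, contributions = explain_decision(applicant, weights)
print(f"approved={decision} score={score:.2f}")
# List contributions from most to least influential, signed, so the
# user can see what pushed the decision in each direction.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

A genuinely transparent system would go further (counterfactuals, confidence, documentation of training data), but even this tiny decomposition is more than today's large opaque models offer out of the box.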

To these "new Asimov's laws" (which are not Asimov's at all), global ethical regulations and standards must be added: that requires international cooperation on the development of artificial intelligence, not sectarianism.

Geoffrey Hinton, one of the "fathers" of AI, has called artificial intelligence "the new atomic bomb". I don't agree, and I'm not alone. It could become one, though, and that would be our fault, not the AI's. Especially if we conceive of it first and foremost as a club to wield against others.

And don't sulk at me, come on.

Asimov's laws, goodbye. New laws, hurry up

The first autonomous vehicles (semi-autonomous, really) already have the "power" to kill people inadvertently. Weapons like killer drones can, in fact, kill while acting autonomously. Let's face it: AI currently cannot understand laws, let alone follow them.

The emulation of human behavior has not yet been well studied, and the development of rational behavior has focused on limited, well-defined areas. These are two very serious shortcomings, because they would allow a sentient AI (which, I stress, does not currently exist, and despite what its Pygmalions say, we do not know ever will) to misinterpret any instruction. And end up, in two simple words, out of control.

Because of this, I don't know how much time we have. A year, ten, an eternity. I do know that someone needs to solve the problem Asimov's laws were meant to solve, how to prevent AI from harming humans, and they need to do it now.

Skynet is only fiction, but in fiction it gave us no escape. And you know: science fiction doesn't predict the future, but it often inspires it.

Tags: artificial intelligence




The daily tomorrow.


Futuroprossimo.it provides news on the future of technology, science and innovation: if something is about to arrive, here it has already arrived. FuturoProssimo is part of the ForwardTo network, studies and skills for future scenarios.

© 2022 Near future - Creative Commons License
This work is distributed under license Creative Commons Attribution 4.0 International.
