In recent years, artificial intelligence has entered the technological firmament. AI is revolutionizing law, healthcare, education, and a wide range of business models.
The advance of Big Data and machine learning is changing the way we approach artificial intelligence. Rishi Bommasani, Percy Liang and colleagues at Stanford University say this paradigm shift has allowed AI models to be adapted to a wide range of applications beyond those they were originally built for, such as natural language processing (NLP) in assistants like Siri or Alexa, or image classification. However, they warn, this could quickly lead us to a crisis.
These AI models have huge potential, but they also come with significant risks. “Despite the imminent deployment of these models,” say Bommasani and colleagues, “we currently do not have a clear understanding of how they work, when they fail, and what their capabilities are.”
This is why the team decided to explore the nature of these underlying models and how a future crisis might be prevented, and their conclusions make for interesting reading.
Emergent behavior, unpredictable crises?
One problem with these models is that their behavior is emergent rather than designed, so it is not always possible to know what these systems will do, or when they will fail. “This is both a source of scientific excitement and anxiety about unintended consequences,” the researchers say.
Another problem is that these models now serve as the basis for many others. This means they can be applied to a wide range of circumstances, but also that any flaw persists: it is inherited by all descendant models. And the profit-driven environments of startups and large companies aren't necessarily the best places to explore potential AI problems.
The commercial incentive can lead companies to ignore the social consequences of a future crisis. I think of the technological displacement of labor, the health of an information ecosystem necessary for democracy, the environmental cost of computing resources, and the sale of technologies to non-democratic regimes.
Rishi Bommasani, Stanford University
A gold rush
When developing a new product, the drive to be first often overrides all other considerations and leads teams down paths that are difficult to justify. The team gives an example of this behavior: Clearview AI's use of photos scraped from the Internet to develop facial recognition software, done without the consent of the image owners or the hosting companies. Clearview then sold the software to organizations such as police departments.
Crisis and insidious consequences
The consequences of a crisis arising from the widespread use of artificial intelligence models could be even more insidious. “As a nascent technology, the norms for responsible development and deployment of foundation models are not yet well established,” the researchers say.
All of this must change, and quickly. Bommasani and colleagues argue that the academic community is well prepared to meet the challenge of AI's future crisis, because it brings together scholars from a wide range of disciplines who are not driven by profit. “Academia plays a crucial role in developing AI models that promote social benefits and mitigate the possible harms of their introduction. Universities can also contribute to the definition of standards by establishing ethical review committees and developing their own virtuous models.”
An important job ahead
Ensuring the fair and equitable use of AI must be a priority for modern democracies, not least because AI has the potential to threaten the livelihoods of a significant portion of the global population. With the Fourth Industrial Revolution rapidly approaching, it is difficult to know which jobs will be safe from automation. Few roles are likely to remain unaffected by automated decision-making in the future: a crisis, as noted, that must be prevented.