In recent years, artificial intelligence has become a fixture of the technology landscape. AI is reshaping law, healthcare, education and a range of business models.
The advance of big data and machine learning is changing the way we approach artificial intelligence. Rishi Bommasani, Percy Liang and colleagues at Stanford University say this paradigm shift has allowed AI models to be adapted to a wide range of applications beyond those they were originally trained for, from the natural language processing (NLP) behind chatbots such as Siri and Alexa to image classification. However, they argue, it is also rapidly leading us toward a crisis.
These models have enormous potential and can perform impressively in many settings, but they also come with significant risks. "Despite the imminent diffusion of these models," Bommasani and colleagues say, "we currently don't have a clear understanding of how they work, when they fail, and what their capabilities are."
That is why the team decided to examine the nature of these foundation models in order to head off a future crisis, and their conclusions make for interesting reading.


Emergent behavior, (un)predictable crises?
One problem with these models is that their behavior is emergent rather than designed, so it is not always possible to know what these systems will do, or when they will fail. "This is both a source of scientific excitement and anxiety about unforeseen consequences," say the researchers.
Another problem is that these models now serve as the basis for many others. This means they can be applied to a wide range of circumstances, but also that any flaws persist: they are inherited by every descendant model. And the profit-driven environments of startups and large corporations are not necessarily the best places to explore AI's potential problems.
The commercial incentive can lead companies to ignore the social consequences of the coming crisis, such as the technological displacement of labor, the health of an information ecosystem required for democracy, the environmental cost of computing resources, and the sale of technologies to non-democratic regimes.
Rishi Bommasani, Stanford University
A gold rush
When developing a new product, the drive to be first often overrides all other considerations and can take teams down paths that are hard to justify. The team cites as an example Clearview AI's use of photos scraped from the internet to develop facial recognition software, without the consent of the photos' owners or the image-hosting companies. Clearview then sold the software to organizations such as police departments.
Crisis and insidious consequences
The consequences of a crisis arising from the widespread use of foundation models could be even more insidious. "As a nascent technology, the rules for responsible development and deployment of foundation models are not yet well established," say the researchers.
All of this must change, and quickly. Bommasani and colleagues say the academic community is well placed to take on the challenge of the coming AI crisis, because it brings together scholars from a wide range of disciplines who are not driven by profit. "Academia plays a crucial role in developing foundation models that promote social benefit and in mitigating the possible harms of their introduction. Universities and colleges can also help set standards by establishing ethics review committees and developing their own models responsibly."
It will be important work
Ensuring the fair and equitable use of AI must be a priority for modern democracies, not least because artificial intelligence has the potential to threaten the livelihoods of a significant share of the global population. With the fourth industrial revolution fast approaching, it is hard to know which jobs will be safe from automation. Few roles are likely to remain untouched by automated decision-making in the future, a crisis, as noted above, that must be prevented.