The researchers built an algorithm capable of capturing the emotions evoked by images and paintings.
"This ability will be key to making AI not only smarter, but more humane, so to speak," he says Panos Achlioptas, PhD student in computer science at Stanford University
Artificial intelligence experts have become quite good at building computers that can "see" the world around them: algorithms that recognize objects, animals, and activities across many domains. AI is set to become a core technology in the cars, airplanes, and autonomous safety systems of the future.
Today a team of researchers is working to teach computers to recognize not only what the objects in an image or painting are, but also how those images and paintings make people feel.
We're talking about emotionally intelligent AI algorithms.
ArtEmis, understanding emotions through paintings
To achieve this, Achlioptas and his team collected a new dataset, called ArtEmis, which was recently published in a preprint on arXiv.
The dataset is built on 81,000 paintings from WikiArt. It consists of 440,000 written responses from over 6,500 people indicating how the paintings make them feel, along with explanations of why they chose that particular emotion.
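To make the dataset's shape concrete, a single annotation of this kind could be modeled as a small record. This is purely an illustrative sketch: the field names and the example painting identifier are hypothetical, not the actual ArtEmis schema.

```python
from dataclasses import dataclass

# Hypothetical record structure for one annotation (field names are
# illustrative; the real ArtEmis release may organize its data differently).
@dataclass
class EmotionAnnotation:
    painting: str       # identifier of a WikiArt painting
    annotator_id: int   # one of the ~6,500 human annotators
    emotion: str        # one of the eight emotion categories
    explanation: str    # free-text reason for the chosen emotion

ann = EmotionAnnotation(
    painting="wikiart/edvard-munch_the-scream",
    annotator_id=42,
    emotion="fear",
    explanation="The swirling sky and the figure's open mouth feel anguished.",
)
print(ann.emotion)  # fear
```

Each painting accumulates many such records, which is what lets the researchers study not just what a painting shows but how differently people react to it.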
Using these responses, Achlioptas and the team, led by Stanford engineering professor Leonidas Guibas, trained neural speakers: artificial intelligences that respond in written words, in this case generating emotional reactions to visual art and explaining them.

Why Art?
The researchers chose art specifically because an artist's goal is to evoke emotion in the viewer. ArtEmis works regardless of subject matter, from still lifes to human portraits to abstract painting.
The work is a new approach to computer vision, Guibas notes.
Classical captioning work in computer vision has dealt with literal content: "There are three dogs in the picture," or "Someone is drinking coffee from a cup." Instead, we needed descriptions that would capture the emotional content of an image.
Leonidas Guibas
8 emotional categories
The algorithm classifies images and paintings into one of eight emotional categories (from awe to amusement, from fear to sadness) and then explains in writing what in those images justifies the emotional reading.
The artificial intelligence does its job: show it an image it has never seen before, and it responds with how a human being might feel on seeing it.
Surprisingly, the researchers say, the captions reflect the abstract content of an image with an accuracy far beyond the capabilities of existing computer vision algorithms.
And not only that: the algorithm doesn't just capture the broad emotional experience of a complete picture. It can also decipher different emotions within a single painting.

ArtEmis goes even further: it also takes into account the subjectivity and variability of the human response.
Not everyone sees and feels the same thing when looking at a painting. One person may feel happy seeing the Mona Lisa, another sad. ArtEmis can distinguish these differences.
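One way to represent that subjectivity is as a distribution over emotions rather than a single label, built from many annotators' answers to the same painting. A minimal sketch in Python (the responses below are invented for illustration, not taken from the dataset):

```python
from collections import Counter

# Hypothetical responses from several annotators to the same painting;
# disagreement is expected, and is itself part of the signal.
responses = ["contentment", "contentment", "sadness", "awe", "contentment"]

counts = Counter(responses)
total = len(responses)
# Empirical emotion distribution: the share of annotators per emotion.
distribution = {emotion: n / total for emotion, n in counts.items()}

print(distribution)  # {'contentment': 0.6, 'sadness': 0.2, 'awe': 0.2}
```

A model trained on such data can then output a spread of plausible reactions instead of forcing a single "correct" emotion for every viewer.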
A tool for artists
In the short term, the researchers anticipate that ArtEmis could become a tool for artists to evaluate their work in progress and ensure it has the desired impact.
It could provide guidance and inspiration for the artist's work. A graphic designer working on a new logo could use ArtEmis to assess its emotional effect.
It's only the beginning. From paintings to people.
Later, after further research and refinement, emotion-based algorithms could be perfected: artificial intelligences that help bring emotional awareness to applications such as chatbots and conversational avatars.
"I see ArtEmis bringing insights from human psychology to artificial intelligence," says Achlioptas.