Marianne Reddan has spent the past 10 years scrutinizing human faces for traces of two distinct but closely related emotions: surprise and fear. And after all that time, she has barely learned to tell them apart.
That is why Reddan, a postdoctoral researcher at Stanford University, realized that something was about to change when she learned that EmoNet, a machine-learning system, had learned to distinguish the two emotions.
EmoNet doesn't just look at facial expressions to make sense of emotions; it also considers the overall context to determine the general feeling, much as a flesh-and-blood person would.
The research, published in the journal Science Advances, relied on a neural network "trained" on large amounts of data; it took researchers from the University of Colorado Boulder and Duke University a year to develop.
From objects to emotions
Reddan and her colleagues started from AlexNet, a deep-learning model inspired by the dynamics of the visual cortex that trains a computer to recognize objects. They repurposed it to classify emotions instead of objects.
Philip Kragel, a researcher at the Institute of Cognitive Science at the University of Colorado, fed the neural network 25,000 images and had it sort them into 20 emotion categories.
The extensive list included emotions such as anxiety and boredom, as well as less common emotional experiences such as "aesthetic appreciation" and "empathic pain."
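The repurposing step the researchers describe, taking an object-recognition network and retraining its final stage to output 20 emotion categories, can be sketched in miniature. The toy example below is illustrative only: the dimensions, data, and "backbone" are invented stand-ins, not the study's actual setup. It freezes a fixed feature extractor (playing the role of AlexNet's pretrained layers) and trains only a new 20-way softmax head.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 20   # emotion categories, as in the study
FEAT_DIM = 64    # toy stand-in for AlexNet's high-dimensional features
N_IMAGES = 200   # toy stand-in for the 25,000 training images

# Frozen "backbone": a fixed random projection playing the role of
# AlexNet's pretrained visual features (NOT retrained).
backbone = rng.normal(size=(32, FEAT_DIM))

def features(x):
    """Extract frozen features from raw inputs."""
    return np.maximum(x @ backbone, 0.0)  # ReLU

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: random "images" with random emotion labels.
X = rng.normal(size=(N_IMAGES, 32))
y = rng.integers(0, N_CLASSES, size=N_IMAGES)

# New classification head: the only trainable part.
W = np.zeros((FEAT_DIM, N_CLASSES))
b = np.zeros(N_CLASSES)

F = features(X)
onehot = np.eye(N_CLASSES)[y]

for step in range(200):       # plain gradient descent on cross-entropy
    probs = softmax(F @ W + b)
    grad = probs - onehot     # d(cross-entropy)/d(logits)
    W -= 0.01 * F.T @ grad / N_IMAGES
    b -= 0.01 * grad.mean(axis=0)

probs = softmax(F @ W + b)
print(probs.shape)            # one 20-way distribution per image
```

The design mirrors transfer learning as commonly practiced: the expensive visual representation is reused as-is, and only a small task-specific layer is fit to the new labels.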
In the second phase, the machine's categorizations were compared with human ones. Eight volunteers, placed in a functional magnetic resonance imaging (fMRI) scanner, viewed 112 images. Their brain activity was measured while the neural network processed the same images, so that the two kinds of response could be matched against the images (and emotions) the network had already categorized.
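The article does not detail how brain activity was matched to the network's output; one standard technique in this literature is representational similarity analysis, which asks whether images that look similar to the model also evoke similar brain responses. The sketch below is a toy illustration of that idea with simulated data (all sizes and the "brain" signal are invented), not a reconstruction of the study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

N_IMAGES = 112   # images viewed in the scanner, as in the article
N_CLASSES = 20   # EmoNet's emotion categories
N_VOXELS = 50    # toy stand-in for measured brain voxels

# Toy data: the model's per-image emotion scores, and simulated
# brain responses that partly depend on those scores plus noise.
model_scores = rng.random((N_IMAGES, N_CLASSES))
mixing = rng.normal(size=(N_CLASSES, N_VOXELS))
brain = model_scores @ mixing + 0.5 * rng.normal(size=(N_IMAGES, N_VOXELS))

def similarity_matrix(responses):
    """Pairwise image-by-image correlation of response patterns."""
    return np.corrcoef(responses)

# Compare the two similarity structures: does the model group the
# images the same way the (simulated) brain does?
sim_model = similarity_matrix(model_scores)
sim_brain = similarity_matrix(brain)

iu = np.triu_indices(N_IMAGES, k=1)   # upper triangle, no diagonal
match = np.corrcoef(sim_model[iu], sim_brain[iu])[0, 1]
print(match > 0)  # simulated brain was built from the model, so True
```

A high `match` here simply reflects how the toy data were generated; in a real study, this number is what the researchers would have to earn.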
Building a neural network that reproduces the human brain is a scientific challenge that has lasted for years. Yet even the most advanced machines lag far behind the full range of human experiences. "Emotions are a huge part of our daily lives," says Kragel. "If neural networks cannot decipher them properly, they will always offer limited insight into how the brain works."
Kragel was surprised by how well EmoNet works, but that's not to say the system is perfect yet. The two most accurately mapped categories are "sexual desire" and "craving," but the system sometimes struggles with emotions that unfold dynamically. Surprise, for example, can quickly evolve into joy or anger depending on the situation. EmoNet also has great difficulty distinguishing the nuances among emotions such as adoration, amusement, and joy, because they are so closely correlated.
Are there any risks?
Hannah Davis, a professor of generative music at New York University, believes that teaching a computer about emotions is not dangerous in itself. "It would be dangerous," she says, "if we began to distinguish emotions with the same schematic rigidity and the same scarcity of nuance."
How can you blame her? Encoding an emotion from a photo does not mean understanding it or feeling empathy. And already today, with social media, we may have the impression that people have reduced their emotions to the range of emoticons at their disposal.
"Is the model capable of feeling emotions? Definitely not. It is only sorting images into categories, certainly not grasping the complexity of human experience. Could it experience emotions in the future? I can't rule that out. Perhaps."