In a world flooded every day with new music (or "resurrected" old music), choosing the next song to listen to can become an epic undertaking. Today, streaming platforms like Spotify combine human intuition and artificial intelligence to select the most worthy tracks, with a success rate just above that of a coin toss. A new study, however, could radically change the rules of the game.
When Neuroscience meets Machine Learning
A group of American researchers has discovered that our brain's response to music can predict with astonishing accuracy which song will become a hit. Using a combination of machine learning techniques and neurophysiological data, they managed to achieve an impressive 97% accuracy in predicting musical hits.
Professor Paul J. Zak is the senior author of a study published in Frontiers in Artificial Intelligence (linked here). In the study, Zak explains how he and his team identified what appears to be the brain's rating system for social and emotional experiences, called Immersion. The team then applied this system to music, rating new songs using data gathered from non-invasive physiological sensors such as those found in smartwatches.
From your wrist to the playlist
In the study, the scientists analyzed a variety of songs from different musical genres, both hits and flops. The neurophysiological data collected from the study participants were then fed into Immersion, the platform created by the research group. Unlike more invasive methods such as direct brain imaging, Zak's approach relies on simpler and more accessible signals, such as heart rate, making the technology much more practical and affordable. And, apparently, extremely effective.
Neuroforecasting: predicting the future of music
The concept behind this method is called "neuroforecasting": using the neural activity of a small group of individuals to predict the responses of a much larger population. It has previously been used to predict stock market swings, viral videos, and election results.
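To make the neuroforecasting idea concrete, here is a minimal, purely illustrative sketch: it simulates per-listener "immersion" scores for a small panel, averages them per song, and uses a simple threshold to predict hit vs. flop. All numbers and feature names are synthetic assumptions, not the study's actual model or data.

```python
import random
import statistics

random.seed(0)

def panel_immersion(song_is_hit, panel_size=30):
    """Simulate immersion scores (0-1) from a small panel of listeners.

    Assumption for this sketch: hits tend to evoke stronger average
    physiological responses than flops (synthetic distributions).
    """
    base = 0.7 if song_is_hit else 0.4
    return [min(1.0, max(0.0, random.gauss(base, 0.1)))
            for _ in range(panel_size)]

def predict_hit(scores, threshold=0.55):
    """Neuroforecasting step: aggregate a small panel's responses
    and extrapolate to the wider population via a simple threshold."""
    return statistics.mean(scores) > threshold

# Evaluate the toy predictor on a synthetic catalogue of hits and flops.
songs = [True] * 20 + [False] * 20
correct = sum(predict_hit(panel_immersion(is_hit)) == is_hit
              for is_hit in songs)
accuracy = correct / len(songs)
print(f"toy accuracy: {accuracy:.2f}")
```

The point of the sketch is the aggregation step: a handful of listeners stands in for the whole audience, which is what makes the approach cheap compared with surveying thousands of people.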
What impact will this scientific "tsunami" have on the music of the future? While the sample of participants and songs analyzed in the study was relatively small, the impressive results suggest this approach could significantly change how music is selected and promoted. With wearable devices becoming more widespread, we could see a future where entertainment choices are tailored to our neurophysiology.
In the future, thanks in part to AR, people will "immerse themselves" (pun intended) in games and interactive experiences with sounds and music created in real time. Sound itineraries perfectly adapted to their state of mind and their sensations: the perfect, totally personalized soundtrack.
A winning solution for artists and listeners
Beyond the commercial implications, Zak's approach could also help young artists develop hit songs by resolving a dilemma every artist faces: will the public like what I like? Using Immersion, anyone could quickly gauge what people will like, stimulating their creativity and making content consumers happier.
A real win-win scenario? Or the death of unpredictability? Let me know what you think.