When we try to talk to someone in a crowd, our brain does a remarkably good job of locking onto the voice of the person we are speaking with: it is not easy, and sometimes a bit of shouting is required, but more often than not it succeeds.
The situation is different for the millions of people who depend on hearing aids: however advanced, these devices cannot focus on a specific sound and can be overwhelmed by background noise, which makes conversation in crowded places almost impossible for people with hearing loss.
To tackle the problem, a team of researchers at Columbia University has developed a new device that identifies, selects, and isolates only the voice the wearer wants to hear. The study started from an initial observation: the brain waves of a listener tend to “synchronize” with those of the speaker they are paying attention to.
Building on this, the researchers developed an AI model that encodes and separates the many voices present in an environment, compares each resulting vocal pattern with the listener's brainwaves, and amplifies only the one that matches best.
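In rough outline, the pipeline pairs a speech-separation front end with an attention decoder that predicts, from the neural signal, the envelope of the speech the listener is attending to; the separated voice whose envelope correlates best with that prediction is the one to amplify. The following Python sketch illustrates only that selection step. The function names, the simple linear stimulus-reconstruction decoder, and the correlation-based scoring are assumptions made for illustration; the actual study uses deep neural networks for both separation and decoding.

```python
import numpy as np

def envelope(audio, frame=160):
    """Crude amplitude envelope: mean absolute value per frame."""
    n = len(audio) // frame
    return np.abs(audio[:n * frame]).reshape(n, frame).mean(axis=1)

def reconstruct_envelope(eeg, decoder):
    """Linear stimulus reconstruction: map multichannel neural data
    to a predicted envelope of the attended speech. `decoder` stands
    in for a pre-fitted weight vector (an assumption here)."""
    return eeg @ decoder

def select_attended(separated_voices, eeg, decoder):
    """Pick the separated voice whose envelope correlates best with
    the envelope reconstructed from the listener's brain activity."""
    predicted = reconstruct_envelope(eeg, decoder)
    scores = []
    for voice in separated_voices:
        env = envelope(voice)
        n = min(len(env), len(predicted))
        scores.append(np.corrcoef(env[:n], predicted[:n])[0, 1])
    return int(np.argmax(scores)), scores

# Toy usage: two random "voices" and a fake 64-channel recording.
rng = np.random.default_rng(0)
voices = [rng.standard_normal(16000), rng.standard_normal(16000)]
eeg = rng.standard_normal((100, 64))     # 100 time samples x 64 channels
decoder = rng.standard_normal(64) / 64   # stand-in for a trained decoder
winner, scores = select_attended(voices, eeg, decoder)
print(f"amplify voice {winner}, correlations: {scores}")
```

In a real device, the winning voice would then be boosted relative to the others before being played back through the hearing aid, with the comparison repeated continuously so the system can follow shifts in the listener's attention.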
"The result is an algorithm that can separate voices without the need for any training," explains the doctor Nima Mesgarani, author of the study published by the journal Science Advances.
If the idea still seems abstract, take a look at the demonstration of this technology: its ability to isolate voices is impressive.
Below is a short animation from Columbia University that illustrates how the mechanism works.
Here is the study: Speaker-independent auditory attention decoding without access to clean speech sources.