Researchers at the University of California, San Francisco (UCSF) have developed a neural interface that allows patients who have lost the ability to speak to “speak” through the device.
This is a milestone in the field of neural prosthetics: the system monitors brain activity and converts it into words using an artificial voice (like those of the Google or Amazon voice assistants, for example). The software is highly advanced and produces an accurate virtual reconstruction of the larynx, tongue, lips and jaw.
Why do we lose the ability to speak?
Patients lose the ability to talk for many reasons: degenerative diseases, accidents or brain damage. Technologies already in use allow some of them to produce a few words by "translating" small facial movements, or through other mechanisms that in any case make communication slow and laborious.
The neural interface studied in San Francisco translates brain activity directly into natural-sounding speech, using an architecture that “mimics” the way the brain's speech centers coordinate the movements of the vocal tract.
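To make the two-stage idea concrete, here is a minimal Python sketch of a pipeline that first decodes neural signals into vocal tract movements and then turns those movements into acoustic features. All shapes, dimensions and the linear maps are illustrative assumptions for the sake of the example; the actual study used far more sophisticated models (recurrent neural networks), not toy matrices like these.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: decode articulatory kinematics (lip, tongue, jaw, larynx
# trajectories) from neural activity. Here: a toy linear map.
def decode_articulation(neural_signals, w_neural_to_artic):
    """Map neural features (time x channels) to vocal tract movements."""
    return neural_signals @ w_neural_to_artic

# Stage 2: synthesize acoustic features from the decoded movements.
def synthesize_acoustics(articulation, w_artic_to_audio):
    """Map vocal tract movements (time x articulators) to audio features."""
    return articulation @ w_artic_to_audio

# Hypothetical dimensions: 100 time steps, 256 electrode channels,
# 12 articulatory dimensions, 32 acoustic features.
T, n_channels, n_artic, n_acoustic = 100, 256, 12, 32
neural = rng.standard_normal((T, n_channels))
w1 = rng.standard_normal((n_channels, n_artic)) * 0.01
w2 = rng.standard_normal((n_artic, n_acoustic)) * 0.1

movements = decode_articulation(neural, w1)
acoustics = synthesize_acoustics(movements, w2)
print(movements.shape, acoustics.shape)  # (100, 12) (100, 32)
```

The key design choice this sketch reflects is that the brain signal is never mapped to sound directly: it passes through an intermediate articulatory representation, just as the speech centers of the brain control movements rather than sounds.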

“The relationship between the movements of the vocal tract and the sounds of speech is really complex,” says Gopala Anumanchipalli, one of the researchers involved in the project. “We thought that if these speech centers encode movements and translate them in some way, we could perform the same operation starting from the brain signals.”
What does it consist of?
For this reason the team created a "virtual" vocal tract that uses machine learning to produce progressively more accurate sounds. Volunteers read specific sentences aloud while their brain activity is monitored: the artificial intelligence analyses these signals and compares them with the corresponding movements of the vocal tract, learning exactly how those movements translate into each specific sound.
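The training step described above is, at its core, supervised learning: neural recordings are paired with the vocal tract movements inferred from the audio the volunteers produced, and a model learns the mapping. The sketch below illustrates that idea with synthetic data and a simple ridge regression; the data, dimensions and model are stand-in assumptions, not the study's actual method (which relied on neural networks trained on real recordings).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
T, n_channels, n_artic = 2000, 128, 12

# A hidden "ground-truth" mapping, used only to generate synthetic
# articulatory targets from synthetic neural recordings.
true_map = rng.standard_normal((n_channels, n_artic)) * 0.05
neural = rng.standard_normal((T, n_channels))
movements = neural @ true_map + 0.1 * rng.standard_normal((T, n_artic))

# Train on the first 80% of the session, evaluate on the held-out rest.
split = int(0.8 * T)
model = Ridge(alpha=1.0)
model.fit(neural[:split], movements[:split])

predicted = model.predict(neural[split:])
print(f"held-out R^2: {r2_score(movements[split:], predicted):.3f}")
```

As more paired examples are collected, the learned mapping improves on held-out data, which is the sense in which the virtual vocal tract produces "progressively more accurate sounds".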
“We still have some way to go before we can perfectly mimic spoken language,” says Josh Chartier, another of the researchers. “We are already at a very advanced stage for the slower or softer sounds, such as 'sh' or 'z', but we have difficulties with abrupt sounds such as 'b' and 'p'. However, the level of accuracy is increasing at an astonishing speed thanks to the use of machine learning.”
“People who cannot move their arms and legs have learned to control robotic prosthetics with their brains,” Chartier continues. “We are confident that one day people with speech disabilities will learn to speak again through this vocal prosthesis.”
The study was published in Nature.