Researchers at the University of California, San Francisco (UCSF) have developed a neural interface that lets patients who have lost the ability to speak "speak" again through the device.
It is a milestone in the field of neural prostheses: the system monitors brain activity and converts it into words spoken by an artificial voice, similar to those of Google's or Amazon's voice assistants. The software is highly sophisticated and includes an accurate virtual reconstruction of the larynx, tongue, lips and jaw.
Why do we lose the ability to speak?
Patients lose the ability to speak for a variety of reasons: degenerative disease, accidents, or brain damage. Technologies already in use allow some of them to produce a few words by "translating" small facial movements, or rely on other mechanisms that in any case make communication very slow and laborious.
The neural interface developed in San Francisco translates brain activity directly into natural-sounding speech, using an architecture that "mimics" the way the brain's speech centers coordinate the movements of the vocal tract.

"The relationship between the movements of the vocal tract and the sounds of words is really complex," says Gopala Anumanchipalli, one of the researchers involved in the project. "We thought that if these language centers code the movements and translate them in some way, we can also do this operation starting from the signals of the brain".
How does it work?
To this end, the team created a "virtual" vocal tract that uses machine learning to produce progressively more accurate sounds. A group of volunteers uttered specific phrases while their brain activity was monitored: the artificial intelligence scans these signals and compares them with the corresponding movements of the vocal tract, learning exactly how they translate into each specific sound.
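Continuing the sketch above, one training step for the first stage might look like the following. The tensors here are random noise standing in for a volunteer's recorded brain activity and the reference vocal-tract movements; everything is a simplified assumption, not the study's procedure:

```python
import torch

# Hypothetical training step for stage 1 of the sketch above.
model = NeuralToMovements()                       # defined in the previous sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

brain_activity = torch.randn(8, 100, N_ELECTRODES)    # recorded neural signals
true_movements = torch.randn(8, 100, N_ARTICULATORS)  # reference movements

for step in range(100):
    predicted = model(brain_activity)             # decoded movements
    loss = loss_fn(predicted, true_movements)     # distance from the reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # sounds become "more correct"
```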
"We have the ability to perfectly mimic spoken language," says Josh Chartier, another of the researchers. “We are already very advanced for the slower or softer sounds, like 'sh' or 'z', but we have a hard time with the truncated ones like 'b' and 'p'. However, the level of accuracy increases at a surprising speed thanks to the use of machine learning ".
"People who can't move their arms and legs have learned to control robotic prosthetics with their brains," Chartier continues. "We are confident that someday people with speech disabilities will again learn to speak through this voice prosthesis."
The study has been published in Nature.