Researchers at the University of California San Diego recently built a machine learning system that predicts, from neural activity, what a bird is about to sing.
If the practical possibilities aren't obvious, consider this: real-time predictive speech synthesis for voice prostheses would, on its own, be a major achievement. But the implications of understanding birdsong could go much further.
Birdsong, a remarkably articulate world
Birdsong is a complex form of communication that involves rhythm, pitch and, most importantly, learned behavior.
According to the researchers, teaching an AI to understand birdsong (and model it well enough to anticipate it) is a valuable step on the road to replacing biological human vocalization.
Motor prostheses were developed with primates as an animal model; no comparable model exists for voice prostheses. That may be why voice prostheses remain more limited in neural interface technology, brain coverage, and behavioral study design.

Teaching a machine to "think" in birdsong isn't easy, but it's an important step
Songbirds are an interesting model of complex, learned vocal behavior. Birdsong shares a number of striking similarities with human language, and studying it has already yielded valuable insights into the mechanisms and circuits underlying the learning, production, and maintenance of vocal motor skills.
But translating vocalizations in real time is no easy challenge. Current systems are still slow compared to the natural pace of thought and speech.
Think about it for a moment, because it's striking: even state-of-the-art natural language processing systems still struggle, badly, to keep up with human thinking.
We are still too fast for a machine
When we interact with Google Assistant or Alexa, there is often a longer pause than we would expect in a conversation with a real person. That's because the AI is processing our speech, determining the meaning of each word relative to its capabilities, and then working out which actions or programs to invoke in response.
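As a rough illustration, here is a minimal sketch of why that pause happens: each stage of a sequential pipeline adds its own latency, and the delays compound before any reply comes back. All function names and timings below are hypothetical placeholders, not any vendor's actual API.

```python
import time

# A minimal sketch of assistant latency: each stage in a sequential
# pipeline adds its own delay before a reply comes back.
# All names and timings here are hypothetical placeholders.

def transcribe(audio: bytes) -> str:
    """Speech-to-text: turn raw audio into words (simulated)."""
    time.sleep(0.3)  # stands in for network round trip + model inference
    return "turn on the lights"

def interpret(text: str) -> dict:
    """Natural-language understanding: map words to an intent (simulated)."""
    time.sleep(0.2)
    return {"intent": "lights_on", "room": "living room"}

def dispatch(intent: dict) -> str:
    """Find and run the program or skill that fulfils the intent (simulated)."""
    time.sleep(0.2)
    return "Okay, turning on the living room lights."

start = time.perf_counter()
reply = dispatch(interpret(transcribe(b"<audio>")))
print(f"{reply} (total latency: {time.perf_counter() - start:.1f}s)")
```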
Of course, it's already remarkable that these cloud-based systems respond as fast as they do. But they're still not fast enough to power a real-time interface that lets the voiceless speak at the speed of thought.
Research on birdsong

First, the team implanted electrodes in the brains of a dozen zebra finches, then recorded their neural activity as they sang.
But training an artificial intelligence to recognize the birds' neural activity during song is not enough: even a bird's brain is too complex to fully map how its neurons communicate.
To get around this, the researchers trained a second system that reduces songs, in real time, to recognizable patterns the AI can work with.
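To make that two-stage idea concrete, here is a minimal sketch assuming synthetic stand-in data: song spectrogram frames are compressed into a handful of features, and a decoder learns to predict those features from neural activity. PCA and ridge regression are illustrative stand-ins, not the models the researchers actually used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Illustrative sketch with synthetic stand-in data. PCA and ridge
# regression are placeholders chosen for simplicity; the study's
# actual models and data formats may differ.

rng = np.random.default_rng(0)
n_frames, n_channels, n_freq_bins = 2000, 32, 128

neural = rng.normal(size=(n_frames, n_channels))        # electrode activity per frame
spectrogram = rng.normal(size=(n_frames, n_freq_bins))  # song spectrogram per frame

# Stage 1: reduce each song frame to a small, recognizable pattern.
reducer = PCA(n_components=8).fit(spectrogram)
song_features = reducer.transform(spectrogram)

# Stage 2: learn to predict those patterns from brain activity alone.
decoder = Ridge(alpha=1.0).fit(neural, song_features)

# At run time: decode features from fresh neural activity, then map them
# back to a spectrogram frame that a synthesizer could turn into sound.
predicted = reducer.inverse_transform(decoder.predict(neural[:1]))
print(predicted.shape)  # (1, 128): one reconstructed song frame
```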
This is the genuinely interesting part, because it offers a solution to an open problem. The real-time birdsong processing is impressive, and replicating these results with human language would be historic.
But this early work isn't ready yet, and it hasn't been adapted to other vocalization systems; it may not work beyond birdsong.
If it does, though, it would be one of the first giant technological leaps for brain-computer interfaces since the deep learning resurgence of 2014.