A brain implant powered by artificial intelligence turned the thoughts of a paralysed woman into speech in near real time, US researchers announced on Monday.
Although the technology is still experimental, the breakthrough raised hopes that such devices could restore the ability to communicate to people who have lost their voices.
The California-based research team had previously employed a brain-computer interface (BCI) to decode the thoughts of Ann, a 47-year-old woman living with quadriplegia, and convert them into speech. However, there had been an eight-second delay between her thoughts and the computer-generated speech.
This delay had prevented Ann, a former high school math teacher who had been unable to speak since suffering a stroke 18 years ago, from engaging in a natural, flowing conversation. But with the team’s new model, detailed in the journal "Nature Neuroscience", Ann’s thoughts were translated into her old speaking voice with an 80-millisecond delay.
“Our new streaming approach converts her brain signals to her customised voice in real time, within a second of her intent to speak,” said senior study author Gopala Anumanchipalli of the University of California, Berkeley. “While we are still far from enabling that for Ann, this milestone takes us closer to drastically improving the quality of life of individuals with vocal paralysis.”
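The shift described here is essentially from batch decoding, where nothing is spoken until the whole utterance has been captured, to streaming decoding, where audio is emitted chunk by chunk as the brain signals arrive. The toy sketch below is purely illustrative and is not the authors' implementation; the 80 ms chunk size mirrors the latency figure reported above, and decode_chunk is a hypothetical stand-in for the real neural-to-speech model.

```python
CHUNK_MS = 80  # assumed processing window, echoing the reported latency


def decode_chunk(neural_frame):
    """Placeholder for a model mapping one window of brain signals to audio."""
    return f"<audio for {neural_frame}>"


def batch_decode(neural_frames):
    # Old approach: nothing is spoken until every frame has been collected,
    # so the listener waits for the whole utterance before hearing anything.
    return [decode_chunk(frame) for frame in neural_frames]


def streaming_decode(neural_frames):
    # New approach: each frame is decoded and played as soon as it arrives,
    # so the first audio is heard roughly one chunk (~80 ms) after intent.
    for frame in neural_frames:
        yield decode_chunk(frame)


if __name__ == "__main__":
    frames = [f"frame_{i}" for i in range(5)]
    print("batch output :", batch_decode(frames))   # delivered all at once
    for audio in streaming_decode(frames):          # delivered incrementally
        print("stream output:", audio)
```

The point of the contrast is that the decoding model itself can be the same in both cases; what changes the perceived delay is whether output is withheld until the end of the utterance or released as each window is processed.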
Ann’s ultimate aspiration is to become a university counsellor, Anumanchipalli added. During the research, Ann was shown sentences on a screen, such as "You love me then", which she would silently rehearse. The implant then converted her thoughts into a digital version of her pre-injury voice.
Hearing her voice again thrilled Ann. "I was very excited to hear my voice, and it felt like I was truly myself," she said. Anumanchipalli noted that Ann felt a deep sense of embodiment through the experience.
The brain-computer interface intercepts brain signals after the user has mentally prepared to speak, as co-author Cheol Jun Cho explained. "The BCI detects signals after we have decided what to say, what words to use, and how to move our vocal tract muscles."
The technology uses a deep learning model trained on thousands of sentences Ann had previously attempted to speak silently. Despite occasional inaccuracies and a vocabulary limited to 1,024 words, the model marks a significant advance in restoring communication to people with paralysis.
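As a rough illustration of what a closed-vocabulary neural decoder can look like, the minimal PyTorch sketch below maps windows of multi-channel brain-signal features to scores over 1,024 candidate words, the vocabulary size mentioned above. The channel count, hidden size and recurrent architecture are assumptions made for illustration; the study's actual model, which also synthesises audio in Ann's own voice, is not described here.

```python
import torch
import torch.nn as nn

N_CHANNELS = 253   # assumed number of electrode features per time step
HIDDEN = 256       # assumed hidden size
VOCAB = 1024       # closed word vocabulary reported in the article


class WordDecoder(nn.Module):
    """Toy recurrent classifier over a fixed word vocabulary."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, neural_windows):
        # neural_windows: (batch, time, channels) of preprocessed brain signals
        states, _ = self.rnn(neural_windows)
        return self.head(states)  # per-time-step scores over the 1,024 words


if __name__ == "__main__":
    model = WordDecoder()
    fake_signals = torch.randn(1, 50, N_CHANNELS)  # 50 windows of synthetic data
    word_scores = model(fake_signals)
    print(word_scores.shape)  # torch.Size([1, 50, 1024])
```

A closed vocabulary keeps the decoding problem tractable, which is one reason such systems are currently limited to roughly a thousand words rather than open-ended speech.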
Patrick Degenaar, a professor of neuroprosthetics at Newcastle University in the UK who was not involved in the study, called the work “very early proof of principle” but still “very cool.” Degenaar highlighted that the system used an array of electrodes that do not penetrate the brain, in contrast to the technology employed by Elon Musk's Neuralink.
“The surgery for installing these arrays is relatively common in hospitals for diagnosing epilepsy,” Degenaar noted, suggesting that the technology could be rolled out on a larger scale more easily.
Anumanchipalli projected that, with adequate funding, the technology could be helping people communicate within five to 10 years.