AI Brain Implant Restores Bilingual Communication for Stroke Survivor

Scientists have enabled a stroke survivor, who is unable to speak, to communicate in both Spanish and English by training a neuroprosthesis implant to decode his bilingual brain activity.

The research, published in Nature Biomedical Engineering, comes from the lab of University of California, San Francisco professor Dr. Edward Chang. It builds on his groundbreaking 2021 work with the same patient, which demonstrated that the brain activity of someone with severe paralysis could be translated into words.

In the most recent research, the neuroprosthetic decodes the brain activity of the same person—named Pancho—and, using a bilingual AI model, turns that brain activity into either Spanish or English words, depending on which language Pancho intends to communicate in. His words and sentences are then projected onto a computer screen.

Video 1. A neuroprosthesis implanted into a stroke survivor uses an AI-powered model to decode brain waves in Spanish and English

Both studies promise far less onerous communication for people who are unable to speak or who rely on touchscreen or eye-motion monitoring devices to communicate. These results also come around four years after the neuroprosthesis was originally implanted in Pancho, highlighting the longevity of the technology and its potential long-term impact.

A key component of the study is Pancho’s bilingualism. To differentiate between his intended communication in Spanish and English, the researchers used an AI model trained on both languages to track neural activity in the part of Pancho’s brain responsible for articulating words.

Researchers trained a large neural network model on Pancho’s brain activity using the NVIDIA cuDNN-accelerated PyTorch framework and NVIDIA V100 GPUs. The neuroprosthesis, implanted on the surface of—but not within—Pancho’s brain, differentiates between brain activity intended for Spanish or English communication.  
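The paper’s exact architecture isn’t reproduced here, but a minimal PyTorch sketch can illustrate the general shape of such a decoder: a shared encoder over multichannel neural recordings feeding a language classifier plus one word-classification head per language. The layer types, sizes, and vocabularies below are illustrative assumptions, not the published model.

```python
import torch
import torch.nn as nn

class BilingualDecoder(nn.Module):
    """Hypothetical sketch of a bilingual neural decoder: a shared
    recurrent encoder over multichannel brain recordings feeds a
    language classifier and a word-classification head per language.
    All layer choices and sizes are illustrative assumptions."""

    def __init__(self, n_channels=128, hidden=256, vocab_es=50, vocab_en=50):
        super().__init__()
        # Encode a window of neural activity: (batch, time, channels).
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.lang_head = nn.Linear(hidden, 2)       # Spanish vs. English
        self.head_es = nn.Linear(hidden, vocab_es)  # Spanish word logits
        self.head_en = nn.Linear(hidden, vocab_en)  # English word logits

    def forward(self, x):
        _, h = self.encoder(x)   # final hidden state: (1, batch, hidden)
        h = h.squeeze(0)
        return {
            "lang": self.lang_head(h),
            "es": self.head_es(h),
            "en": self.head_en(h),
        }

# The study trained on NVIDIA GPUs; here we use a CUDA device if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BilingualDecoder().to(device)
```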

As Pancho read and then tried to articulate words first in Spanish and then in English, the scientists recorded his brain activity and trained the model so that data from one language strengthened decoding in the other. Or, as they wrote in their paper, they used “neural data recorded in one language to improve the decoding in the other language.”
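To make that idea concrete, the hedged sketch below (reusing the hypothetical BilingualDecoder above) shows how a single training step on trials from either language updates the shared encoder, so data recorded in one language can sharpen the features used to decode the other. The batch layout and loss weighting are assumptions for illustration, not the paper’s training procedure.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, word_ids, lang_ids, language):
    """One optimization step on trials from one language ("es" or "en").
    The encoder inside the model is shared, so gradients from Spanish
    trials also reshape the representation used for English decoding,
    and vice versa; a loose analogue of the cross-language training
    described in the paper. Assumed inputs:
      x:        (batch, time, channels) neural-feature windows
      word_ids: (batch,) target word indices in that language
      lang_ids: (batch,) intended-language labels (0 = es, 1 = en)
    """
    out = model(x)
    loss = F.cross_entropy(out[language], word_ids)        # word decoding loss
    loss = loss + F.cross_entropy(out["lang"], lang_ids)   # language-detection loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```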

After training on Pancho’s brain activity, the AI model decoded his sentences with 75% accuracy. 

Over time, the model enabled researchers and Pancho to have unscripted conversations with one another. Alexander Silva, the lead author of the study, told Nature, “After the first time we did one of these sentences, there were a few minutes where we were [all] just smiling.” 

The study also has important implications for understanding how the brain functions when communicating through language. Earlier neuroscience studies suggested that communication in different languages originates in separate parts of the brain. This study, however, suggests that speech production in different languages originates in the same area of the brain.

The research also highlights how generative AI models can learn, improve, and adapt to new training data over time, playing a critical role in accurately translating brain activity into words.


Read the full research paper in Nature Biomedical Engineering.
Read about Dr. Chang’s original research on transforming brain waves into words.
Learn more in the article from Nature about Dr. Chang’s research.
Recreate the main figures in the research with the data available on GitHub.
