
Brain Implant Translates Neural Signals Into Sentences

— Neuroprosthesis decodes input from brain to vocal tract in stroke patient who cannot speak
An illustration of a neuroprosthesis implanted over the area of a man's sensorimotor cortex.

A neuroprosthesis helped a man with post-stroke anarthria communicate in sentences, translating signals from his brain to his vocal tract into words that appeared as text on a screen.

The device processed input from a microelectrode array implanted over the man's sensorimotor cortex and decoded sentences from cortical activity in real time at a median rate of 15.2 words per minute, reported Edward Chang, MD, of the University of California San Francisco, and colleagues in the New England Journal of Medicine.

"This trial tells us that, yes, we can restore words to someone who's lost speech from paralysis," Chang said in an interview shared with the media. "It's the very beginning, but it definitely tells us that this is possible."

The man, known as BRAVO1, was the first participant in the BRAVO trial, a clinical study to evaluate whether implanted electrodes can be used to restore communication in patients paralyzed by stroke, neurodegenerative disease, or traumatic brain injury.

Other research has looked at helping paralyzed people communicate using spelling-based approaches. In contrast, the BRAVO study aimed to translate signals intended to control muscles of the vocal system to tap into natural and fluid aspects of speech, Chang noted.

"When I say fluent language or fluent speech, what I'm referring to is the very fluid and effortless expression of words and communication that occurs when we're speaking," he said. "Right now, in this particular trial we've focused on text as the form of communication, but we're also actively working on restoring the actual voice through a synthetic generator of speech."

BRAVO1 was 36 at the start of the study and had severe spastic quadriparesis and anarthria (loss of ability to articulate speech) from a brainstem stroke when he was 20. His cognitive function was intact.

He could vocalize grunts and moans, but couldn't produce intelligible speech. He normally communicated with an assistive typing interface he controlled with head movements, typing at a speed of approximately five correct words a minute.

In 50 sessions over 81 weeks, BRAVO1 engaged in two types of tasks, an isolated-word task and a sentence task. The researchers collected approximately 27 minutes of neural activity on average during these tasks at each session.

In the isolated-word task, BRAVO1 attempted to produce individual words from a set of 50 common English words. In the sentence task, he attempted to produce word sequences using the 50-word vocabulary. In each trial, he was presented with a target sentence and tried to produce the words in that sentence in order, at the fastest speed he could perform comfortably. Throughout the trial, the word sequence decoded from his neural activity was updated in real time and displayed as feedback.

The researchers then prompted BRAVO1 by asking questions like "How are you today?" and "Would you like some water?" His attempted speech appeared on the screen as he responded "I am very good," and "No, I am not thirsty."

Researchers work in Dr. Eddie Chang's lab at UCSF's Mission Bay campus on Friday, June 7, 2019, in San Francisco. Pictured are postdoctoral scholar Pengfei Sun, PhD, research scientist Joseph Makin, PhD, and postdoctoral scholar David Moses, PhD. Photo: Noah Berger

Decoding performance was largely driven by neural activity patterns in the ventral sensorimotor cortex, a finding consistent with previous work implicating this area in speech production. Processed neural signals were analyzed with a speech-detection model.

A classifier computed word probabilities from each window of relevant neural activity. A decoding algorithm used these probabilities, together with word-sequence probabilities, to decode the most likely sentence given the neural activity data. A natural-language model that yielded next-word probabilities improved decoding performance by correcting grammatically and semantically implausible word sequences.
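The paper's implementation details are beyond the scope of this article, but the general idea of combining a word classifier's probabilities with a language model's next-word probabilities can be illustrated with a small beam-search sketch. Everything below, including the function names, the toy probability tables, and the beam width, is an illustrative assumption rather than the study's actual decoder.

```python
import math

# Toy sketch: decode a sentence by combining per-word classifier probabilities
# (one distribution per attempted word) with a language model's next-word
# probabilities, using a simple beam search. Illustrative only; not the
# decoder used in the study.

VOCAB = ["i", "am", "good", "thirsty", "not", "very"]

def language_model_prob(prev_word, next_word):
    # Stand-in for a natural-language model: returns P(next_word | prev_word).
    # A real system would use a trained n-gram or neural language model.
    bigrams = {
        ("<s>", "i"): 0.6, ("i", "am"): 0.7, ("am", "very"): 0.3,
        ("am", "not"): 0.2, ("very", "good"): 0.8, ("not", "thirsty"): 0.5,
    }
    return bigrams.get((prev_word, next_word), 0.01)

def decode_sentence(classifier_probs, beam_width=4):
    """classifier_probs: list of dicts, one per attempted word, mapping each
    vocabulary word to the classifier's probability for that word."""
    beams = [([], 0.0)]  # (word sequence, cumulative log probability)
    for word_probs in classifier_probs:
        candidates = []
        for seq, logp in beams:
            prev = seq[-1] if seq else "<s>"
            for word in VOCAB:
                p_clf = word_probs.get(word, 1e-6)
                p_lm = language_model_prob(prev, word)
                candidates.append(
                    (seq + [word], logp + math.log(p_clf) + math.log(p_lm)))
        # Keep only the highest-scoring partial sentences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Hypothetical example: three attempted words with noisy classifier outputs.
probs = [
    {"i": 0.5, "am": 0.3},
    {"am": 0.4, "not": 0.35},
    {"good": 0.45, "thirsty": 0.4},
]
print(" ".join(decode_sentence(probs)))  # e.g. "i am good"
```

In a sketch like this, the language-model term is what lets the decoder overrule a grammatically or semantically implausible word even when the classifier slightly favors it, which is the behavior the authors describe.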

During real-time sentence decoding, the median word error rate across 15 sentence blocks, each consisting of 10 trials, was 60.5% (95% CI 51.4-67.6%) without language modeling and 25.6% (95% CI 17.1-37.1%) with language modeling. Across all 150 trials, the median decoding rate was 15.2 words per minute when all decoded words were counted and 12.5 words per minute when only correctly decoded words were counted.
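Word error rate here is the standard speech-decoding metric: the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the target sentence, divided by the number of words in the target. A minimal sketch of that calculation follows; the example sentences are illustrative, not trial data.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance (substitutions, insertions, deletions)
    divided by the number of words in the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example, not trial data:
print(word_error_rate("i am very good", "i am good"))  # 0.25
```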

When the speech detector and word classifier were used to predict the identity of a target word from cortical activity, mean classification accuracy was 47.1%. Almost all attempts (98%) to produce words were successfully detected. Overall, decoding performance was maintained or improved without daily recalibration as more training data were obtained.

The researchers plan to expand the BRAVO trial to include more people with severe paralysis and communication deficits, Chang said. They also are working to increase the number of vocabulary words and improve the rate of speech.

  • Judy George covers neurology and neuroscience news for MedPage Today, writing about brain aging, Alzheimer's, dementia, MS, rare diseases, epilepsy, autism, headache, stroke, Parkinson's, ALS, concussion, CTE, sleep, pain, and more.

Disclosures

The research was supported by a research contract under Facebook's Sponsored Academic Research Agreement, the National Institutes of Health, Joan and Sandy Weill and the Weill Family Foundation, the Bill and Susan Oberndorf Foundation, the William K. Bowes, Jr. Foundation, and the Shurl and Kay Curci Foundation.

Researchers reported relationships with Facebook, Facebook Reality Labs, NIH, William K. Bowes Jr. Foundation, Howard Hughes Medical Institute, and Shurl and Kay Curci Foundation. They hold several patents related to decoding and generating speech.

Primary Source

New England Journal of Medicine

Moses DA, et al "Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria" N Engl J Med 2021; DOI: 10.1056/NEJMoa2027540.