In a neuroscience lab at Stanford University, scientists Sergey Stavisky and Krishna Shenoy are pointing to their heads.
The spot they’re trying to show is near the crown, a few inches up from the ear. That region of the brain helps control hand and arm movements – and, the researchers have now discovered, it is also surprisingly active during speech. The team was able to decode brain activity in this region to figure out which words human volunteers had spoken, they report December 10, 2019, in the journal eLife.
The work could one day help researchers build medical devices to restore speech to people who have lost the ability to talk, Stavisky says. Such a device could convert brain activity into words typed on a screen – or spoken by a computer.
“We’ve laid the groundwork for synthesizing speech from neural activity,” he says.
The team worked with two participants with tetraplegia as part of BrainGate2, a Food and Drug Administration pilot clinical trial. BrainGate2 was designed to test the safety of neural sensors implanted in the brain and to study whether people with paralysis can use the sensors to control devices such as computers and prosthetic limbs. The sensors record participants’ neural activity, and algorithms translate that activity into actions – like moving a cursor on a computer screen.
Because the implanted sensors reside in a hand-and-arm area of the motor cortex (known as the “hand knob”), the participants can control a cursor by imagining moving their arms. (In 2017, the team reported, participants were able to rapidly type out words by moving the cursor from letter to letter.)
Stavisky, Shenoy, and Stanford neurosurgeon Jaimie Henderson, all part of the clinical trial team, wondered if the hand knob brain area might be active during more than just attempted hand and arm movements. Previous studies, for example, had hinted that the neural circuits for manual gestures might be interlinked with those for speech. So the researchers decided to test it.
Stavisky, a postdoc, recorded neural activity when participants repeated short words like “beet” and “seal.” The words sparked different brain activity patterns, his team found. Stavisky could then “read” the patterns to decipher which words or syllables had been spoken.
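The kind of pattern-reading Stavisky describes can be illustrated with a toy decoder. This is not the team’s actual analysis – it is a minimal sketch with simulated data, assuming each trial is a vector of spike counts from the 100-electrode array, and using a simple nearest-centroid classifier: average the training trials for each word, then label a new trial with the closest average pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial is a 100-electrode spike-count vector
# recorded while a participant speaks one of a few short words.
WORDS = ["beet", "seal"]
N_ELECTRODES = 100
TRIALS_PER_WORD = 30

# Simulate a word-specific firing pattern per word, plus trial-to-trial
# noise (illustration only; real neural data is far messier).
templates = {w: rng.normal(10, 2, N_ELECTRODES) for w in WORDS}
X, y = [], []
for w in WORDS:
    for _ in range(TRIALS_PER_WORD):
        X.append(templates[w] + rng.normal(0, 1, N_ELECTRODES))
        y.append(w)
X, y = np.array(X), np.array(y)

# Nearest-centroid decoder: one average pattern ("centroid") per word.
centroids = {w: X[y == w].mean(axis=0) for w in WORDS}

def decode(trial):
    """Return the word whose centroid is closest to this trial."""
    return min(WORDS, key=lambda w: np.linalg.norm(trial - centroids[w]))

# Decode a fresh simulated trial of "beet".
new_trial = templates["beet"] + rng.normal(0, 1, N_ELECTRODES)
print(decode(new_trial))
```

With patterns this well separated, the decoder is nearly always right; the hard part in practice is that real neural responses overlap heavily, which is why the team reports classification over a fixed set of 10 words or syllables rather than open-ended speech.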
“We’ve found that this part of the brain is active when moving your tongue, mouth, and face,” says Shenoy, a Howard Hughes Medical Institute (HHMI) Investigator. “The next question is: Can we use this discovery to help people?”
HHMI spoke with Stavisky and Shenoy to hear the story behind their unexpected find.
Why did you think a brain region that controls arm movement might also be involved in speech?
Stavisky: There’s some evidence in the literature that there’s a close relationship between our neural circuits for hand gestures and speech gestures – movement of the tongue, the lips, the jaw. So, there are little clues out there, and we thought it wouldn’t be totally preposterous to look into the idea.
Shenoy: Sergey’s being modest. It was a classic science “I wonder what would happen” question. We said, “let’s go try it,” and when the results turned up positive, we followed the thread.
Why are you interested in studying the brain activity behind speech?
Shenoy: We know very little about speech because, until now, we’ve had very little access to it at single-neuron resolution. People have studied speech using other methods – like ECoG [electrocorticography], where electrodes are placed on the surface of the brain. But that averages together the activity of thousands of neurons. Our study is the first time activity from over a hundred electrodes – each recording from one or just a handful of individual cells from motor areas of the brain – has been studied in relation to producing speech. That’s the key distinction.
Stavisky: Our eventual goal is to use implanted brain devices to help people who cannot talk – people with stroke or ALS [amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease] who are unable to speak but can still try.
Tell me about your study’s participants.
Shenoy: They’ve been working with us for several years and already had electrode arrays implanted in the hand knob area of the motor cortex [the strip of wrinkly brain tissue that runs from crown to ear and controls body movements].
Stavisky: The participants in this study have tetraplegia caused by spinal cord injury – they can’t use their arms or legs – but they can still talk.
Shenoy: That gave us a golden opportunity where we can actually measure their neural activity while they are speaking out loud.
What does the electrode array look like?
Stavisky: It’s a 10x10 grid with 100 thin electrodes protruding from it – they’re like fine hairs.
Shenoy: Imagine a baby aspirin – it’s about that size. The black square is the array, and the wire bundle comes to a pedestal that’s placed on the participant’s head. Researchers are currently working on replacing the bundle with a wireless transmission system.
Our close collaborator, Professor Jaimie Henderson, placed the arrays on the surface of the brain and inserted them 1.5 millimeters in, so the electrode tips sit right next to cells in the cerebral cortex.
When you looked at the participants’ neural activity, what did you see?
Stavisky: When we gave them a cue to start speaking, we saw that the activity from their neurons changed quite noticeably. This was not a subtle effect. And it’s not just that the area becomes active when they’re speaking – it actually has a different pattern depending on what sound they’re producing.
How did you feel when you first saw those results?
Stavisky: I was very excited and surprised. I gave it a 10 percent chance it would work, but I thought it would be silly for us not to try.
Shenoy: We’ve presented the results at conferences, and scientists in the field have also been quite surprised.
Shenoy: There’s an idea that’s literally in textbooks that certain regions along the motor cortex strip relate to different parts of the body. High up on the head, you’ve got the leg, and lower down, you have voice.
Stavisky: The canonical textbook picture is of a human body draped over the brain.
Shenoy: It’s called the homunculus. It dates back over 70 years and has become doctrine – Sergey directly questioned it. He said maybe a brain area for the hand and arm is also related to speech.
Didn’t scientists already know that the brain wasn’t so neatly divided?
Stavisky: Yes, people knew that there was more nuance – that there’s some overlap between the brain regions that control the fingers and wrist and arm, for example. But we didn’t think that you’d see such a mix-up between major body areas – speech activity in the hand area.
What’s next for your team?
Stavisky: We want to build on this work. We can already identify which of 10 words or 10 syllables the participants said. Now, we want to record while participants speak long passages – whole phrases and sentences and paragraphs. We want to reconstruct those passages from the participants’ neural activity. We’re moving toward that medical, translational goal.
Shenoy: If we do a good job synthesizing speech, then we will feel like we’re in a good place to start thinking about trying to help people who can’t speak. We can also think about placing the arrays in areas of the brain that are actually implicated in directly producing speech – areas that are more lateral, closer to the ear.
When do you think a brain-implanted medical device for restoring speech could be available?
Shenoy: Ten years ago, I would have said I don’t even know what decade. But now, I think we’ll see something in the coming 10 years. Neuroscience, microelectronics technology, and advances in mathematical algorithms and machine learning are coming together in such a way that it’s a very different future than it was 10 years ago.
This interview has been edited for length and clarity.
Sergey Stavisky et al. “Neural ensemble dynamics in dorsal motor cortex during speech in people with paralysis.” eLife. Published online December 10, 2019. doi: 10.7554/eLife.46015