Sounds of Silence
Medical breakthroughs: BU scientists are helping a paralyzed man utter his first words in 10 years
Medical stories transcend all boundaries, because all of us — rich or poor, old or young, Mayflower descendant or recent arrival — hope for good health and need care sooner or later. This week, we revisit some intriguing medical reports from the past school year; the insights and breakthroughs they reveal could well shape our lives going forward.
One blustery day in February 2008, two Boston University neuroscientists arrived in Atlanta to visit a young man named Erik Ramsey. They’d never met Ramsey in person, but they were well acquainted with his brain, having watched its neural firings for more than a year. About nine years earlier, Ramsey, then a spirited 16-year-old who loved playing football, listening to heavy metal, and drawing monsters in class, was in a late-night car wreck that left him completely paralyzed. He could still see, smell, and hear. His body could still register the itch of a rash or the pleasure of a warm breeze. But he couldn’t speak or make any voluntary movements other than with his eyes. There is no treatment for his condition, which is known to neurologists as locked-in syndrome. Ramsey is trapped, his mind cut off from his body and his ability to communicate entirely dependent on those around him asking the right questions and interpreting the sometimes imperceptible shifting of his eyes — up for yes and down for no.
The BU scientists, Frank Guenther, a College of Arts & Sciences professor of cognitive and neural systems, and Jonathan Brumberg, a postdoctoral research associate in Guenther’s lab, are working to help Ramsey and others who have lost the ability to speak because of stroke or disease. Guenther has been developing a neural model of speech for more than two decades, one that he and Brumberg are using as a Rosetta Stone to create decoder software that they hope can translate thoughts into speech.
Before heading to Atlanta, Guenther and Brumberg analyzed data from about 40 neurons picked up by a tiny electrode that a fellow neuroscientist had implanted in a speech-related area of Ramsey’s motor cortex. The system was designed to receive Ramsey’s brain signals wirelessly from the implanted electrode as he imagined speaking and to decode that neural activity into real-time speech via a voice synthesizer. They’d come to Georgia to give the system its first trial run.
The speech prosthesis project puts Guenther and Brumberg at the forefront of brain-computer interface (BCI) research, a new field of science that offers hope for those with paralysis, amputated limbs, neurodegenerative diseases, or sensory impairments and that will eventually raise some very big questions about enhancing the healthy versus aiding the sick and about the distinction between human and machine. But on that February afternoon, all that mattered to Guenther and Brumberg was helping Ramsey utter his first bit of speech in nearly a decade.
“Everybody was excited, but they were also tense,” recalls Guenther. “Erik kept trying to orient his eyes toward Jonathan and me. He’d heard about us for a long time. We were the guys up in Boston working on this problem, and he was excited to get a look at us. But we didn’t know if this thing was going to work at all.”
The mission to give people like Ramsey back their voice began in the early 1990s, when Guenther created a computer-based model of the neural circuitry that fires every time we speak. The model, the first of its kind, ranges across several parts of the brain, including the areas responsible for the higher-level processes involved in formulating syllables and the motor cortex, which controls the tongue, lips, and jaw (known as articulators). The model also accounts for the feedback mechanism by which our brains compare the sounds we produce to what we meant to say and make any necessary adjustments. It uses inputs from a few thousand networked “neurons” to control what Guenther calls “a virtual vocal tract,” establishing the alignment of tongue, lips, and jaw that in turn produces speech through a synthesizer.
At first, all Guenther could use to build and refine his model were extrapolations from findings about nonspeech brain functions and from studies of people with brain lesions that had somehow short-circuited their ability to talk. But by the late 1990s, he started using functional magnetic resonance imaging (fMRI) technology. This allowed him to compare the model’s predictions to actual brain scans of people speaking words or syllables under particular constraints, such as restricted jaw movement or distorted auditory feedback. “We use the results to either verify the model or in the cases where the measured brain activities go against the model’s predictions, we improve the model so that it now accounts for the new data,” Guenther explains.
Based on the model, he and his fellow researchers at BU’s Cognitive and Neural Systems Speech Lab hypothesized that the premotor cortex, which most scientists believed controls only bodily movement, also contains neurons that generate a mental preview of speech sounds, such as “uh,” “ee,” and “ay.” These so-called formant frequencies in turn inform the motor neurons as they orchestrate the positioning of tongue, lips, and jaw. This idea would prove critical for decoding Ramsey’s neural signals, Guenther says, because the implanted electrode was in this particular region of his brain.
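To make the idea of formant frequencies concrete, here is a rough sketch, in Python, of how a pair of resonant frequencies can be turned into an audible vowel. The sample rate, bandwidths, and formant values below are textbook-style assumptions used only for illustration; this is not the lab’s synthesizer, which is far more sophisticated.

    import numpy as np
    from scipy.signal import lfilter

    FS = 16000  # sample rate in Hz (an assumed, illustrative value)

    def resonate(x, freq, bw):
        # Second-order (Klatt-style) resonance at `freq` Hz with bandwidth `bw` Hz.
        C = -np.exp(-2 * np.pi * bw / FS)
        B = 2 * np.exp(-np.pi * bw / FS) * np.cos(2 * np.pi * freq / FS)
        A = 1 - B - C
        return lfilter([A], [1, -B, -C], x)

    def synth_vowel(f1, f2, f0=100, seconds=0.5):
        # A pulse train at the voice pitch f0, shaped by resonances at F1 and F2.
        source = np.zeros(int(seconds * FS))
        source[::FS // f0] = 1.0
        return resonate(resonate(source, f1, 90), f2, 120)

    # Rough textbook formant targets in Hz -- assumed values, not measured ones.
    vowels = {"uh": (640, 1190), "ee": (270, 2290), "ah": (730, 1090)}
    audio = synth_vowel(*vowels["ee"])  # half a second of an "ee"-like sound

Played back, the result is a crude but recognizable vowel: the pitch comes from the pulse train, while the vowel’s identity comes almost entirely from where the first two formants sit.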
The mind-body disconnect
Around midnight on November 5, 1999, Erik Ramsey was in the passenger seat of a friend’s Camaro as they returned home from a movie on a dark, two-lane Georgia highway. They didn’t see the minivan making a U-turn until it was too late. The Camaro slammed into the minivan’s right front fender, flipped, and landed on an embankment. Firefighters needed the Jaws of Life to cut Ramsey free of the wreck.
He was screaming and writhing in pain when his father, Eddie, got to the emergency room. It took 15 hours of surgery to repair a collapsed lung, a lacerated spleen, a ruptured diaphragm, ripped tendons in his hand, and a femur that was broken in two places. Erik woke up in intensive care, but he didn’t speak, ask for pain medication, or respond to the doctors, his father, or his mother, Sandra. Tests later revealed that a blood clot had caused a brain-stem stroke that cut the connection between his mind and his body.
After a few weeks, the Ramseys took their son home, and with the help of a home health aide, began the daily routine of feeding him through a tube, bathing him, moving his limbs through range-of-motion exercises, keeping his eyes moist with drops, and clearing his lungs with a nebulizer. Soon, he learned to use his eyes to select from a letter board that his father designed, spelling out requests for movies (anything with vampires or other bloodthirsty creatures) and music (Ozzy Osbourne is a favorite). He’d occasionally play small pranks by spelling titles that didn’t exist. And when these tricks were discovered, Erik’s father recalls, “he would just die laughing,” an involuntary, spasm-like response that he still has when something amuses or excites him. But then two bouts of pneumonia robbed him of the stamina and reaction time needed to spell out words with the letter board. He was back to the limited and laborious yes or no of his eyes.
It was a nurse in the local school district who put the Ramseys in touch with Phil Kennedy, a pioneer in brain-computer interface research, who had been implanting electrodes — first in rats, then in monkeys, and eventually in humans — since 1986, and who had National Institutes of Health backing for a start-up company called Neural Signals, based in nearby Duluth, Ga. Kennedy’s first implants in humans had allowed paralyzed individuals to move a computer cursor with their thoughts and to work with basic text and drawing applications. Ramsey would be the first person to have an electrode implanted in a brain region known to be involved in speech. In December 2004, surgeons implanted a hollow glass electrode, about a millimeter and a half in size and carrying three wires, six millimeters deep into the left side of Ramsey’s brain.
In the months that followed, Kennedy ran tests in which he asked Ramsey to imagine trying to say various words or to think about moving his lips, tongue, or jaw. Kennedy could see that the neurons were firing during these exercises, but he couldn’t tell which speech sound, such as “pa” or “ooh” or “dee,” corresponded to which tangle of neural data. In 2006, he sought out Guenther, who along with Brumberg readily agreed to review the data that Kennedy had made available online.
The challenge, Guenther says, was that there were no distinct neuron spikes when Ramsey was trying to say one thing or another. “It’s not like there are neurons that start firing when he says ‘ah’ but no other sound,” he says. “All the neurons are firing a little bit all the time, but they change their firing rates. It’s the details in the patterns of those changes that are important.” By zeroing in on the part of the brain where the electrode was, says Brumberg, “the model gave us the guide to decode that part of the signal. It said, ‘Here’s what that part of the brain is trying to represent.’”
Over the next year, they used that guide to build the neural decoder software at the heart of the system they would bring down to Georgia to read Ramsey’s mind.
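The decoder itself is the lab’s own work, and its details aren’t spelled out here, but the general shape of the problem can be sketched: treat the binned firing rates of the roughly 40 recorded neurons as the input, and learn a mapping from those rates to a position in the two-dimensional formant space the model predicts the neurons encode. A minimal, assumed version (ordinary regularized linear regression on stand-in data, not the researchers’ actual algorithm) might look like this:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data: binned firing rates for 40 units over 500 time bins, and the
    # intended (F1, F2) position for each bin, e.g. from cued listen-then-speak runs.
    rates = rng.poisson(5.0, size=(500, 40)).astype(float)
    formants = rng.uniform([300.0, 900.0], [800.0, 2300.0], size=(500, 2))

    # Fit a linear map W (plus a bias term) by ridge-regularized least squares.
    X = np.hstack([rates, np.ones((len(rates), 1))])
    lam = 1.0  # regularization keeps the fit stable with few, noisy units
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ formants)

    def decode(rate_vector):
        # Map one bin of firing rates to an estimated (F1, F2) position.
        return np.append(rate_vector, 1.0) @ W

    print(decode(rates[0]))  # an estimated formant pair for the first time bin

The point Guenther makes above carries over directly: no single unit stands for “ah” or “ee.” The estimate comes from small, simultaneous changes in rate across the whole population, which is what the weights in W capture.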
The BU researchers spent two days in Georgia before meeting Ramsey — installing the decoder software and testing the receiver for the wireless data transmission and the synthesizer that would give that data voice. The goal for Ramsey was to create vowel sounds, a first step in speech, since vowels require only a single, steady configuration of the mouth.
Represented in an x and y coordinate system, different vowels appear in different locations: “uh” sits in the middle of the screen, while “ooh,” “ee,” and “ah” occupy the corners. Ramsey’s challenge would be to mimic a computerized voice, starting at “uh” and, using only his thoughts, moving a cursor dot from the center to the corner vowel sounds.
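As a rough illustration of that task (the layout, coordinates, and tolerance below are assumptions made for this sketch, not the experiment’s actual settings), the screen can be thought of as a normalized vowel plane with targets at the center and corners, and a trial counts as a hit when the decoded point lands close enough to the cued vowel:

    # Illustrative target layout on a normalized screen: "uh" at the center,
    # the other vowels in the corners. Positions and radius are assumed values.
    targets = {
        "uh": (0.5, 0.5),
        "ooh": (0.0, 1.0),
        "ee": (1.0, 1.0),
        "ah": (1.0, 0.0),
    }
    RADIUS = 0.15  # how close the dot must get to count as hitting the target

    def hit(decoded_xy, vowel):
        x, y = decoded_xy
        tx, ty = targets[vowel]
        return ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5 <= RADIUS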
“Listen,” the computer would say, “uhhhhhhhoooooooooh.” Then it would ask Ramsey to “speak,” and a synthesized voice (adapted from a recording of Eddie Ramsey to give it a familiar southern drawl) would sound out what the decoder made of Ramsey’s attempt to think that same sequence.
On his first five attempts, Ramsey failed to hit even one target vowel. The tension and frustration in the room were mounting, and everybody took a five-minute break. Eddie Ramsey went for a short walk, and the researchers played one of Erik’s favorite CDs — Headbangers Ball — to help him relax.
In his next series of tries, Ramsey started hitting targets. And by the third round, he nailed more than half, and started laughing in his excitement.
“We were cheering him on,” says Guenther. “Everything we’d done in our 16 years of existence as a lab had been theoretical. It was really heartwarming to see something from our research so directly impact somebody’s life.”
Since then, the researchers and Erik have worked through dozens of sessions. Three days a week, Eddie Ramsey rises well before dawn for his early morning shift at the post office. Then he heads home, picks up his son, and takes him to Neural Signals, where the experiments run throughout the afternoon. While Brumberg returns to Georgia about once a month, he and Guenther conduct most of the experiments from Boston via a video-enabled Skype connection with Kennedy’s lab.
Every session follows a similar pattern, with Ramsey improving as the afternoon progresses, lately achieving as much as 90 percent accuracy with his vowels. “We have only about 40 neurons here, and there are maybe a billion neurons involved in speech. So we have a very tiny window,” says Guenther. “But we can get him in the ballpark, and with practice he’s able to improve his accuracy.”
Guenther and Brumberg are collaborating with researchers at Georgia Tech to refine the decoder. Each improvement makes it easier for Ramsey to learn, but it also means that his brain must continually adjust and master a new system. “When we learn to speak as infants, it takes us months. It’s not an afternoon-long process,” says Guenther. “For Erik, the situation is as if a child woke up every day with a slightly different brain than he had the day before and had to relearn what he’d already learned.”
As a result, one short-term goal is to develop the decoder to a point where Ramsey can use the same one over and over, and thereby increase the pace of his progress. The researchers are simultaneously working on an “articulatory synthesizer” to get ready for Ramsey’s next challenge — consonants — which are far more complicated than vowels.
“There are more dimensions to work with for consonants,” says Brumberg. “You need to know where the tongue touches the back of the teeth when you’re saying a ‘t’ sound, for example.” And it gets even trickier when consonants and vowels are combined. “Saying something like ‘ahh-dah’ means your tongue has to rapidly go up to the closure of the vocal tract and then back down, all in about a 10th of a second,” says Guenther. Despite the complexity, the researchers are confident they’ll have Ramsey producing consonants within the year.
Help for many
Roger Miller, program director for neural prosthesis development at the NIH’s National Institute on Deafness and Other Communication Disorders, one of the project’s funders, says Guenther and Brumberg’s work is at the cutting edge of brain-computer interface research. But the speech prosthesis is just one of several BCI projects making headlines. In January 2008, scientists at Duke University were able to make a robot walk on a treadmill in Japan by transmitting neural activity from monkeys in North Carolina. A few months later, researchers at the University of Pittsburgh and Carnegie Mellon University trained monkeys to adopt brain-controlled robotic arms as their own, using them to feed themselves grapes and marshmallows.
For now, nearly all of the mainstream BCI research is directed at helping the disabled. Guenther and Brumberg, for their part, hope the speech prosthesis will one day be used by people who have the neurodegenerative disease amyotrophic lateral sclerosis (ALS) or who have lost their voice following throat cancer surgery. But they’ve also been approached by those interested in creating BCI applications to enhance the capabilities of healthy people, mainly to boost memory. “Their goal,” Brumberg says, “would be to upload and download information between a computer and your brain like it was a flash drive.” But, he notes, that would be extremely complicated, because scientists still don’t know exactly how memory is stored and retrieved in the brain.
“People in our field do think about things like what it would mean to be human with an enhancement implant,” says Brumberg. “But those are the sort of philosophical questions about the future that don’t seem as important as trying to get our device working its best for people like Erik, who need it right now.”
Indeed, another one of their current projects is devising a system whereby Ramsey can turn the synthesizer on and off on his own, which will be critical when he uses it in an actual conversation. And that’s something the researchers hope he’ll be able to do in about five years.
Ramsey’s father shares their optimism. He believes in a future where his son is not only talking again, but drawing as well, all with the power of his mind. “It’s kind of equivalent to watching your baby learn to walk,” he says. “He’s got the first steps out of the way, and as soon as he’s got his footing under him, he’ll be off.”
To read more about Erik Ramsey and research that will help locked-in patients, go to the Neural Interfacing Research Institute.
This article originally appeared in the Spring 2009 Bostonia.