Before an accident in 1999, Erik Ramsey was “a typical teenager,” according to his dad, Eddie Ramsey. He liked to draw and skateboard. He liked sports and girls. But on that November night, everything typical about Erik Ramsey’s life ended. A car crash caused a brain-stem stroke that left him with “locked-in syndrome” — completely paralyzed, but with total cognitive and sensory awareness. Ramsey, now 24, has almost no voluntary control over his body, except for his eyes, which he uses to answer questions — by looking up for yes or down for no. Now, thanks to a collaboration between Neural Signals, Inc., a company in the Ramseys’ home state of Georgia, and Frank Guenther, a College of Arts and Sciences professor of cognitive and neural systems, Ramsey may one day regain his ability to speak. With funding from the National Institutes of Health (NIH), researchers are creating a “speech prosthesis” that combines a wireless electrode and transmitter from Neural Signals, Inc., implanted in Ramsey’s brain, with a voice synthesizer run by software based on a computer model of the brain’s language centers developed by Guenther’s lab. Together, they aim to turn Ramsey’s thoughts into words.
The collaboration is about two years old, but since 1992 Guenther and his lab team have been working on a computational model of how the brain controls speech. Their model mimics the neural networks involved in producing words — from moving the jaw, lips, and tongue to babbling to processing “auditory targets” stored in the brain of how a word is supposed to sound. Continually refined with data from functional magnetic resonance imaging of people’s brains performing speech tasks, the model learns to control a computer-simulated vocal tract and translate neural signals into words.
In summer 2006, Guenther was contacted by Philip Kennedy, founder of Neural Signals, Inc., who had implanted an electrode about six millimeters long into Ramsey’s brain, in the area that controls the tongue, jaw, and lips. The electrode could wirelessly transmit the pulses of about 40 neurons surrounding it. Kennedy’s team had collected extensive data from the electrodes, gathered when researchers asked Ramsey to imagine speaking specific words. But they couldn’t decode it. Up to a billion neurons are activated when we speak, says Guenther, so to glean much from just 40, “you need to have extremely sensitive techniques.”
Guenther’s lab used its neural model of speech to guide the design of decoder software that learned to read Ramsey’s mind as he imagined saying vowel sounds. In a clinical trial last year, the researchers were able to predict what vowel sound Ramsey was thinking of with 80 percent accuracy, but not in real time. In February, they used an improved decoder and a new training protocol in which Ramsey imagined “singing along” to a series of vowel sounds that moved, for example, from oooh to ahhh. Once the decoder had been “trained” to recognize Ramsey’s signal patterns, it was able to drive a synthesized voice that produced the vowel sounds as soon as Ramsey thought them.
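The basic idea of such a decoder — learning which pattern of firing rates across a few dozen neurons corresponds to which imagined vowel — can be sketched in a few lines. This is a hypothetical toy illustration, not Neural Signals’ or Guenther’s actual software: the channel count, firing rates, noise level, and vowel set are all invented, and the nearest-centroid rule is a deliberately simple stand-in for the real decoding techniques.

```python
# Toy sketch of neural-signal vowel decoding (hypothetical; not the actual
# decoder described in the article). We simulate ~40 recorded neurons, where
# each imagined vowel evokes a distinct average firing pattern, then classify
# new trials by nearest template.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 40                     # roughly the number of neurons the electrode picked up
VOWELS = ["oooh", "ahhh", "eeee"]   # invented vowel set for illustration

# Invented "ground truth": each vowel's mean firing rate per channel (Hz).
true_patterns = rng.uniform(5, 50, size=(len(VOWELS), N_CHANNELS))

def record_trial(vowel_idx):
    """One simulated imagined-speech trial: mean pattern plus channel noise."""
    return true_patterns[vowel_idx] + rng.normal(0, 4, N_CHANNELS)

# "Training" phase: average repeated trials per vowel into a template,
# analogous to teaching the decoder to recognize the user's signal patterns.
templates = np.array([
    np.mean([record_trial(i) for _ in range(20)], axis=0)
    for i in range(len(VOWELS))
])

def decode(trial):
    """Classify a trial as the vowel whose template is nearest (Euclidean)."""
    dists = np.linalg.norm(templates - trial, axis=1)
    return VOWELS[int(np.argmin(dists))]

# Evaluate on fresh simulated trials.
n_trials = 30
correct = sum(decode(record_trial(i)) == VOWELS[i]
              for i in range(len(VOWELS)) for _ in range(n_trials))
accuracy = correct / (len(VOWELS) * n_trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

On this clean synthetic data the templates separate easily; the real problem is far harder, since actual neural recordings are noisy, nonstationary, and driven by far more than 40 neurons.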
“Everybody was just ecstatic that day,” says Eddie Ramsey — including Erik, who can still laugh, and did. The next step is consonants, which are more complex, because they involve the closing of the vocal tract. Meanwhile, Neural Signals, Inc., has FDA approval to implant electrodes in four more patients, which would accelerate the development and refinement of the decoder software.
As for Ramsey’s chances of being able to speak again, his father has no doubt it will happen. “It’s kind of equivalent to watching your baby learn to walk,” he says. “He’s got the first steps out of the way, and as soon as he’s got his footing under him, he’ll be off.”
This story was originally published in Research at Boston University 2008 magazine.
Chris Berdik can be reached at firstname.lastname@example.org.