
Virginia Best and colleague Gerald Kidd are studying how people with hearing impairments locate sounds, a skill called spatial hearing. Photo by Jackie Ricciardi
The next time you’re at a loud party, close your eyes and listen. At first, the sounds are just a fog of noise. But quickly you begin to pick out individual voices and locate them, even without looking. This ability to locate voices using sound alone is called spatial hearing, and it helps listeners follow conversations in noisy places, like cocktail parties and restaurants. For people with normal hearing, it happens almost effortlessly. But people with hearing loss often have trouble with spatial hearing, even when they have hearing aids on. Why?
“This is a problem that conventional hearing aids don’t solve,” says Gerald Kidd, a College of Health & Rehabilitation Sciences: Sargent College professor of speech, language, and hearing sciences, who heads BU’s Psychoacoustics Lab. “In a room full of people talking—a party, a social situation—sometimes people with hearing loss are lost and they disengage. It has a real human consequence.”
With the support of a five-year, $1.5 million National Institutes of Health grant, Virginia Best, a Sargent research associate professor of speech, language, and hearing sciences, will be examining how spatial hearing works differently in people with hearing impairments. The new research brings together Kidd and Best, the lead investigator, with experts in audiology, neuroscience, and biomedical engineering: neuroscientist Barbara Shinn-Cunningham, a College of Engineering professor of biomedical engineering; H. Steven Colburn, an ENG professor of biomedical engineering, who develops neural models of spatial hearing; and Jayaganesh Swaminathan, a Sargent research assistant professor and a hearing aid researcher at the Starkey Hearing Research Center in Berkeley, Calif. Their discoveries may one day guide the development of new hearing aids that give hearing-impaired listeners the location information they have been missing, potentially solving the cocktail party problem in a way not currently possible with traditional hearing aids.
Just as having two eyes helps us locate things in three dimensions, our two ears help us pick out the location of sounds. “A sound off to the right gets to your right ear a little bit before it gets to your left ear, and it also tends to be a little louder in the ear that’s closer,” says Best. The differences are so small that we don’t consciously notice them: the time delay is just a matter of microseconds, and the volume difference (that is, the difference in sound pressure on the ear) can be as little as a decibel. Yet the brain uses this tiny ear-to-ear discrepancy to draw up a remarkably precise mental sound map, accurate to about one degree, that it uses to locate and focus attention on a single voice.
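To put those microseconds in perspective, here is a minimal Python sketch of the timing cue, using the textbook Woodworth spherical-head approximation rather than anything from the BU lab itself; the head radius and speed of sound are generic reference values, chosen only for illustration. A voice just one degree off center reaches the nearer ear only about nine millionths of a second earlier, yet the brain resolves it:

    import math

    HEAD_RADIUS_M = 0.0875      # average adult head radius (~8.75 cm)
    SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at room temperature

    def itd_seconds(azimuth_deg):
        """Interaural time difference for a distant source at a given azimuth.

        Woodworth's spherical-head model: ITD = (a / c) * (theta + sin(theta)),
        where a is the head radius, c is the speed of sound, and theta is the
        source azimuth in radians (0 = straight ahead, 90 = directly to one side).
        """
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

    for az in (1, 10, 45, 90):
        print(f"azimuth {az:2d} deg -> ITD of about {itd_seconds(az) * 1e6:5.1f} microseconds")

Even for a sound coming from directly to one side, the model predicts a gap of only about 650 microseconds between the two ears, which is why Best describes the cues as too small to notice consciously.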
For people with hearing loss, though, this process breaks down, and Best wants to find out why. One hypothesis is that people with hearing loss are not getting the full timing and volume information they need to locate sounds accurately. Another possibility is that they are getting all the right information, but the brain cannot decipher it properly, so the resulting mental sound map comes out fuzzy.
Before they can begin to test these ideas, Best and her colleagues must first figure out how to untangle spatial hearing from the other auditory functions that hearing loss undermines. This is tricky, because although we often imagine that people with hearing loss experience the world with the volume knob turned down, the reality is more complicated. For some listeners, low-pitched sounds are clear while high-pitched sounds are muffled; for others, it’s the other way around; still others experience distortion all across the sound spectrum. “We want to estimate how much of the real-world difficulty experienced by a person with hearing loss can be attributed to the audibility of sounds, and how much can be attributed to spatial factors,” says Best. “These results could also help guide our colleagues in audiology and in the hearing-aid industry to focus their efforts in the appropriate places.”
Next, Best and her colleagues will bring volunteers into the lab to test their spatial hearing. Using headphones and arrays of loudspeakers, they will find out how well people with hearing impairments can locate the sources of computer-generated sounds. Similar experiments have been done before, but unlike those earlier studies, the new experiments will use speechlike sounds instead of electronic beeps. “Our sounds will still be computer-generated, but they will be more natural in their acoustical structure and their content,” says Best. By using realistic sounds, she hopes to more closely mimic the challenges hearing-impaired listeners face in the real world.
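The study’s actual stimuli are not described in detail, but the basic recipe for giving a headphone sound a direction is easy to sketch. The snippet below is illustrative only: a plain whole-sample delay stands in for the timing difference and a fixed broadband attenuation for the level difference, whereas real experiments typically use measured head-related transfer functions, and the “speechlike” carrier here is just a slowly modulated harmonic complex.

    import numpy as np

    FS = 44100  # sample rate in Hz (an assumption for this sketch)

    def spatialize(mono, itd_s, ild_db):
        """Impose interaural time and level differences on a mono signal.

        Returns a 2-column (left, right) array for headphone playback,
        with the source cued toward the left ear.
        """
        delay = int(round(itd_s * FS))                 # timing cue as a whole-sample delay
        right = np.pad(mono, (delay, 0))[:len(mono)]   # right ear hears it later...
        right = right * 10 ** (-ild_db / 20)           # ...and slightly quieter
        return np.column_stack([mono, right])

    # A toy "speechlike" carrier: a harmonic complex with a slow amplitude
    # envelope, standing in for the lab's computer-generated speech sounds.
    t = np.arange(int(0.5 * FS)) / FS
    carrier = sum(np.sin(2 * np.pi * f * t) for f in (200, 400, 600, 800))
    envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4 Hz modulation, roughly syllable rate
    stimulus = spatialize(carrier * envelope, itd_s=300e-6, ild_db=2.0)
    print(stimulus.shape)  # (22050, 2): half a second of stereo audio

Played over headphones, a listener with normal hearing would hear this sound pulled toward the left; the open question for the researchers is how faithfully a damaged ear, or a hearing aid, passes such cues along.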
While the researchers will compare hearing-impaired volunteers with volunteers who hear normally, they will also be looking for differences within the hearing-impaired group. The goal is to see if some subgroups—for instance, elderly people—have bigger spatial hearing losses than others. In the past, it has been difficult for researchers to isolate pure hearing loss from the effects of normal aging, because the two so often go hand in hand. But Boston, with its large population of students and other young people, is an ideal place to study hearing loss clear of age-related confounds.
Best and her colleagues will also be taking a closer look at how listeners tune in to specific speakers in noisy environments. This process of zeroing in happens quickly and automatically for people with normal hearing, usually within just a few words or sentences. Best wants to find out whether listeners with hearing loss experience something similar and to discover more about how it happens.
Ultimately, the researchers hope that they can use what they learn to help build better hearing aids. Some new noise-reducing hearing aids send exactly the same sounds to both ears, blotting out potentially helpful spatial cues. But, says Best, “there are ways of maintaining some of that spatial information, and it might be that different listeners need that to different extents, depending on how sensitive they are to that spatial information.” Best and Kidd have already tried this on a version of their visually guided hearing aid, an experimental device that uses eye tracking to guide a beam of amplification toward sounds coming from a particular direction. Early results are promising, but, says Kidd, it will take more basic research to invent a hearing aid that can untangle the cocktail party problem. “The real essence of the problem, the ability to hear one talker in uncertain and difficult situations,” he says, “is something that hasn’t been solved yet.”