Tuning In
Barbara Shinn-Cunningham taps brains to find and fix hidden hearing loss
On a cool May evening, the sounds of a tuning orchestra fill a performance hall in Concord, Massachusetts, spilling out into the otherwise quiet streets, past bistros with carved wooden signboards and quaint shops selling antiques, fine wine, and artisanal cheese.
Before the entire orchestra has assembled on the stage—in a historic, converted barn with klieg lights fixed to its rafters—it is still possible for an untrained ear to pluck specific instruments from the musical jumble. But as the official start of rehearsal nears, the growing ranks of musicians produce a louder and less decipherable cacophony. Finally, the conductor steps to his post and holds up a hand to signal “Quiet, please!”

Among the musicians waiting in the sudden silence that follows is Barbara Shinn-Cunningham, a professor of biomedical engineering in Boston University’s College of Engineering (ENG). By her side rest an oboe and an English horn (imagine an overgrown oboe), both of which she will play during the Concord Orchestra’s “Pops” rehearsal tonight.
Music isn’t a hobby for Shinn-Cunningham. Long before she was a scientist, she was a musician. She switched from piano to the oboe and then added the English horn in junior high. For a brief period in high school, she considered pursuing music professionally, but she enjoyed science and math too much to follow through. When she chose a research focus in graduate school, her deep love of music led her to study the neuroscience of hearing.
For nearly three decades, Shinn-Cunningham has studied how our brains make sense of sound. Her lab’s investigations stretch from the precise algorithms of auditory signal processing to the black boxes of cognition and how shifting attention changes the way our brains sort through the daily mix of sounds we encounter.
Recently, she and her colleagues have focused on hidden hearing loss—the trouble many people with “normal” hearing have deciphering competing, overlapping sounds, such as a conversation in a crowded room. Hidden hearing loss can be found in people of any age, but it is more common among older people. Indeed, the aging of the baby boomers has spurred intense interest among researchers, who began studying hidden hearing loss only within the last decade. Shinn-Cunningham calls hidden hearing loss “the most important discovery in hearing research that I have seen in my career.”
Eventually, she and her lab team hope to devise a new kind of hearing aid that could alleviate this vexing problem. Diagnosing and treating hidden hearing loss means spanning cognitive psychology, neuroscience, and engineering, an ambitious agenda that takes enormous energy, vast curiosity, and an affinity for collaboration. Those who know Shinn-Cunningham say she’s perfect for the job.
Music Meets Math
Shinn-Cunningham attended Brown University, where she majored in electrical engineering and math. In college, she met Rob Cunningham, a fellow engineering student and the lead teaching assistant for a circuit design course they had both taken.
“I was recruiting other teaching assistants, and I reached out to Barb,” Cunningham says. “She turned me down.”
But then, he asked her out on a date, and she accepted. They were married a year after graduation and several years later had two sons, Nick, now 21, and Will, 19.
When Shinn-Cunningham started graduate school at MIT, she envisioned a career designing computers. Once there, however, she met engineers studying auditory perception. It was a revelation to discover that something like music, which she loved and connected to on an emotional level, could also be understood from the mathematical perspective of an engineer.
“I’d never thought about sound as information that could be studied quantitatively,” she says. “How does sound physically get into the head, and how does the brain make sense of all that information, which eventually leads to those deep, emotional responses?”
Starting at your vibrating eardrum and quickly moving to about 30,000 nerve fibers in your inner ear, every sound you hear is pulled apart by frequency, and then analyzed and reassembled by your brain. Even when competing sounds—the voice of a conversation partner, street traffic, chirping birds—share frequencies, your brain somehow separates them.
“Exactly how it does all that is still a pretty big mystery,” says Shinn-Cunningham.
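To give a concrete, if simplified, feel for that engineering view of sound, here is a minimal Python sketch; it is not from Shinn-Cunningham’s lab, and the signal, sample rate, and band edges are all invented for illustration. It uses a Fourier transform to split a sound into frequency bands, a crude stand-in for the far finer frequency analysis the ear and brain perform.

```python
import numpy as np

# Invented example: one second of "sound" made of two tones plus noise,
# standing in for the mix of sounds reaching the eardrum.
fs = 16000                                        # sample rate in Hz (assumed)
t = np.arange(fs) / fs
sound = (np.sin(2 * np.pi * 440 * t)              # an A440 tone
         + 0.5 * np.sin(2 * np.pi * 1500 * t)     # a higher tone
         + 0.1 * np.random.randn(fs))             # background noise

# Split the signal by frequency with a Fourier transform.
spectrum = np.fft.rfft(sound)
freqs = np.fft.rfftfreq(len(sound), d=1 / fs)

# Measure how much energy falls in a few hypothetical frequency bands.
bands = [(0, 500), (500, 2000), (2000, 8000)]     # Hz, chosen for illustration
for lo, hi in bands:
    mask = (freqs >= lo) & (freqs < hi)
    energy = np.sum(np.abs(spectrum[mask]) ** 2)
    print(f"{lo:>4}-{hi:<4} Hz band energy: {energy:.1f}")
```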

The Cocktail Party Effect
“My early impression of Barb is very consistent with my later impression. She was very smart and had a sort of positive energy about her,” says Steve Colburn, an ENG professor of biomedical engineering and hearing researcher. Colburn was on the committee that reviewed Shinn-Cunningham’s master’s thesis at MIT. “She always seemed to have a broader perspective than a lot of people did.”
Specifically, the engineering approach to hearing builds precise, mathematical models of how a sound signal is processed, neuron by neuron. That is the kind of modeling Colburn does. As excited as Shinn-Cunningham was to explore that perspective, however, she was also keen to investigate the far murkier, top-down role of cognition.
“The brain has feedback all the way from the cortex back to the brainstem,” she explains. In short, auditory perception isn’t a one-way street. Higher-order brain activities—particularly attention—change how the sound signals are processed in the brain at every level.
The power of attention in hearing was famously demonstrated in the 1950s by the British cognitive scientist Colin Cherry. In a series of experiments, Cherry asked subjects to repeat one of two spoken messages played simultaneously over headphones. People had trouble with this task when both messages (spoken by the same voice) could be heard in both ears.
By contrast, listeners had no problem when one message was directed to each ear, because they were able to steer their attention to one ear or the other. The flip side of this ability was that pretty much nothing about the message in the unattended ear registered—most people couldn’t remember a single phrase from the “rejected” message, or notice when it included their name, switched from English to German, or was played backwards.
Cherry’s findings came to be known as the “cocktail party effect,” evoking the overlapping chatter of a crowded party or bar in which we must tune in to one speaker after another or risk missing large chunks of a conversation.

The cocktail party effect is a centerpiece of Shinn-Cunningham’s research agenda, because it is precisely in this interaction between attention and hearing where hidden hearing loss reveals itself. She and her lab team are puzzling out what’s going on in our brains during such complex listening tasks—or, more precisely, which part of that neural circuitry is breaking down among the 5 to 15 percent of Americans who have “normal” hearing but still tell their doctors they have trouble in crowded, noisy social situations.
Current hearing screenings can’t measure this common difficulty. That’s because these tests are based only on our brain’s bottom-up ability to detect tones of different frequencies. It would be as if your eye doctor just asked if you could see anything on each line in the eye chart, rather than ensuring you could see the letters clearly and reliably enough to tell them apart.
While studies of vision have long incorporated cognition—learning, memory, and attention have a lot to do with what we see, and what we don’t—hearing research has been dominated by engineers.
“Engineers like explanations that can be expressed in equations. And a lot of cognitive psychology is more boxes and arrows—this thing feeds into that thing, which changes it in some way,” says Frederick Gallun, a neuroscientist and hearing researcher with the VA Medical Center in Portland, Oregon, who worked with Shinn-Cunningham as a postdoc from 2003 to 2006.
“I was always impressed by Barb’s ability to blend these two approaches,” he says. “She’s a very good engineer, but if she can’t say what the math relationship is, then she can see that cognition is involved and needs to be accounted for if we’re going to solve the problem.”
Life Matters
Shinn-Cunningham crams her days with grant proposals, collaborator correspondence, and meetings with graduate students and postdoctoral researchers. The oversized whiteboard at the center of her office is covered in funny, offhand remarks made during lab meetings—“I just started thinking about half an hour ago”; “At first, it was like, ‘well, duh!’ but now the duh factor is reduced.”
When members of the lab move on to other endeavors, they keep in touch, sending emails, postcards, and pictures of weddings and new babies.
“Barb loves to get to know you,” says Adrian KC Lee, who worked with Shinn-Cunningham about a decade ago during his graduate studies in the Harvard-MIT Program in Health Sciences and Technology.
“I was very organized. I had all my bullet points ready for our meetings. But we’d only get through one, and she would just want to talk about life in general,” says Lee, now an associate professor of speech and hearing sciences at the University of Washington. “That used to frustrate me. But I find myself doing the same thing with my trainees. I understand now that it’s not just the work that matters. Life matters. Barb was really good at promoting that balance.”
Another former lab member, Virginia Best, adds, “She thinks faster than anybody I know. She’s always one step ahead of you. She’s famous for finishing people’s sentences.”
Supposedly, Shinn-Cunningham’s husband is such a frequent victim of the sentence-finishing that he often will change what he was going to say just to get the last word. He denies this is true, but Shinn-Cunningham confirms the story with a grin.
“I do that to everyone. I try to explain that it’s empathy. I don’t feel impatient. I feel enthusiastic,” she says. “But I think those things are so intertwined that it’s probably difficult for people to appreciate the fine distinction.”
Shinn-Cunningham’s colleagues do appreciate her enthusiasm, and collaborators praise her for being a consummate team player.
One recent collaborator is Helen Tager-Flusberg, a BU College of Arts & Sciences professor of psychology who studies kids with autism who rarely or never speak. Tager-Flusberg says Shinn-Cunningham had to modify her experiments for children who can’t readily communicate and have trouble staying on task or even sitting still for longer than a few minutes at a time.
“Barb figured out how to adapt what she does to fit the needs of the kids we’re studying,” she says. “At the same time, she never compromised on the quality of the data.”

“We live in a really loud world”
During a tour of her lab, Shinn-Cunningham holds up a net of 128 tiny electrodes that fits snugly around the heads of research subjects as they listen to tones or words over headphones in a soundproof room.
The net records an EEG (electroencephalogram), tracking brain activity in both cortical and subcortical areas as the sounds being processed change or are obscured. The data gives researchers a real-time, global look at the listening brain, but Shinn-Cunningham is quick to point out its limits.
“We can extract these big responses to different sounds [with EEG],” she says, “but we know the brain parts we’re measuring are changed by attention, and we can’t see those effects from the surface of the scalp.”
In other words, the EEG data can’t show exactly what’s happening inside the brain. Still, a big chunk of the research here is devoted to finding ways to use the EEG responses during simple auditory tasks to reliably diagnose hidden hearing loss.
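One common way to summarize EEG responses during simple auditory tasks is to cut the continuous recording into short epochs around each sound and average them, so the small stimulus-locked response stands out from the much larger background activity. The sketch below illustrates that idea only; the channel count, sampling rate, and trigger times are invented, and it is not the lab’s actual analysis pipeline.

```python
import numpy as np

def average_evoked_response(eeg, stim_onsets, fs, window_s=0.5):
    """Average EEG epochs time-locked to stimulus onsets.

    eeg:         array of shape (n_channels, n_samples), continuous recording
    stim_onsets: sample indices at which each sound started
    fs:          sampling rate in Hz
    window_s:    length of the post-stimulus window to average, in seconds
    """
    win = int(window_s * fs)
    epochs = [eeg[:, start:start + win]
              for start in stim_onsets
              if start + win <= eeg.shape[1]]
    # Averaging many repetitions boosts the small stimulus-locked response
    # relative to the much larger ongoing background activity.
    return np.mean(epochs, axis=0)

# Hypothetical data: 128 channels, 60 s at 1 kHz, one tone per second.
fs = 1000
eeg = 1e-5 * np.random.randn(128, 60 * fs)        # fake microvolt-scale signal
stim_onsets = np.arange(0, 59 * fs, fs)
evoked = average_evoked_response(eeg, stim_onsets, fs)
print(evoked.shape)                               # (128, 500): one averaged waveform per electrode
```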
The lab team is also digging into factors that lead to this hearing difficulty. Age would seem obvious, but the data says not so fast. A few years ago, when Shinn-Cunningham, 51, noticed her own growing difficulty with the cocktail party effect, she suggested that one of her graduate students, Dorea Ruggles, do a comparison study of young adults (aged 18 to 34 in the study) and middle-aged adults (aged 35 to 55), all with normal hearing.
Ruggles, who is now a postdoctoral researcher at the University of Minnesota, found a small increase in hidden hearing loss among older study subjects overall. But this effect was dwarfed by variability among young adults—some of whom showed no signs of hidden hearing loss while others suffered greatly from it. Why?
“We live in a really loud world,” says Shinn-Cunningham. Genetics likely play a role in making people more or less susceptible to hidden hearing loss, but she suspects a noisy lifestyle is a key trigger.
“We can’t really test how much noise exposure people have had over a lifetime,” she concedes. “But when we do these tests, we ask subjects questions, such as: Do you mow the lawn a lot? Do you like loud music? And the people with the most noise exposure tend to be the worst listeners.”
Finally, the lab is working on innovative hearing aids that might alleviate hidden hearing loss. Many existing hearing aids can actually make things worse in crowded, noisy social situations, because they simply make everything louder, creating a big blur of noise. It’s an engineering approach to a problem that requires something more.
While “directional hearing aids” can be more selective, amplifying only what they’re aimed toward, Shinn-Cunningham says they are not likely to help in many social situations, where conversations move quickly from speaker to speaker.
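As background on what “directional” means here, the sketch below shows textbook delay-and-sum beamforming for a two-microphone array; the sampling rate, microphone spacing, and steering angle are invented, and real hearing aids use far more sophisticated processing. The limitation Shinn-Cunningham points to is built into the approach: the device has to be aimed somewhere, and conversation jumps around faster than the aim can follow.

```python
import numpy as np

def delay_and_sum(mics, fs, spacing_m, angle_deg, c=343.0):
    """Steer a two-microphone array toward angle_deg by delaying and summing.

    mics:      array of shape (2, n_samples), one row per microphone
    fs:        sampling rate in Hz
    spacing_m: distance between the microphones in meters
    angle_deg: direction to "aim" the array (0 = straight ahead)
    """
    # Extra travel time to the second microphone for sound from that direction.
    delay_s = spacing_m * np.sin(np.radians(angle_deg)) / c
    delay_n = int(round(delay_s * fs))   # whole-sample delay; real devices use finer steps

    # Advance mic 2 to compensate for its later arrival (np.roll wraps at the
    # edges, which is fine for a sketch). Sound from angle_deg now adds in
    # phase across the two channels; sound from other directions partly cancels.
    aligned = np.roll(mics[1], -delay_n)
    return 0.5 * (mics[0] + aligned)

# Hypothetical usage with fake data: two mics 1.5 cm apart, sampled at 48 kHz.
fs = 48000
mics = np.random.randn(2, fs)            # stand-in for one second of recorded audio
output = delay_and_sum(mics, fs, spacing_m=0.015, angle_deg=30)
```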
Shinn-Cunningham’s lab wants to target hidden hearing loss specifically—to create a device for people who don’t currently have a hearing aid or a diagnosed hearing impairment, just a lot of trouble and embarrassment in noisy environments.
“If we understand what’s missing in the signal that somebody with hidden hearing loss is getting in their brain, then maybe we can deal directly with that problem,” she says.
Shinn-Cunningham won’t get into specifics, citing intellectual property concerns, but in 2015, she and Sharon Kujawa, associate professor of otolaryngology at Harvard Medical School, were awarded $100,000 in translational research funding to develop just such a device.
Jump in with Both Feet
“One of the things that’s fun about music is that it’s an artificial cocktail party effect. Musicians are trying to blend together and fool you into thinking they’re one thing,” says Shinn-Cunningham.
“You can only focus on one melody at a time, and composers know this,” she says. “The music will be going along, and the flute player starts holding a note, and at that moment you suddenly become aware of another melody that’s been going on the whole time.”
Unlike those of us in the audience who can lose ourselves in the music, the musicians themselves must keep track of the composition’s individual “voices” so they don’t lose their places.
At the orchestra rehearsal in Concord, the conductor stops the musicians every few minutes with admonishments and instructions, constantly tweaking the mix to bring just the right sounds to the listeners’ attention at just the right time.
“You’re late. You’re late. You’re late! You’re a whole bar late!” he says. And then, “We need less in the strings, more in the harp, and a little more glockenspiel.”
They will push on like this for nearly three hours, and Shinn-Cunningham is in her element. Despite everything she fits into her life, she never merely dabbles. That approach has rubbed off on the younger scientists she’s worked with over the years.
“Whenever opportunities present themselves, I always ask, what would Barb do?” says Gallun. “Usually, the answer is ‘jump in with both feet.’”