Singling Out a Sound
BU researchers shed new light on a mystery of sensory processing
By Patrick L. Kennedy
It’s called the “cocktail party problem,” and it’s not about picking the right outfit or juggling stuffed mushrooms and a wine glass. Arising in all manner of busy settings, from coffee shops to subway stations, the phenomenon that has puzzled neuroscientists for decades is this: How does your brain tune out all the other conversations and background noise and focus on the one speaker you’re paying attention to? Conversely, for those who struggle to hear that speaker, what is the brain not doing?
Researchers at Boston University have pinpointed a neuron type that helps perform this sound-isolating task, and their findings might someday be used to improve hearing aids and other assistive technology. Associate Professor Kamal Sen (BME) presented the results of his team’s study in a press conference that kicked off Neuroscience 2021, the annual meeting of the Society for Neuroscience.

Sen is director of the BU Natural Sounds and Neural Coding Laboratory. Through the Neurophotonics Center, Sen collaborated with Associate Professor Xue Han (BME), graduate student Jio Nocon, and Professor Howard Gritton at the University of Illinois at Urbana-Champaign to study a cognitive process that has been poorly understood to date.
“The ability to solve the cocktail party problem is one of the most impressive examples of sensory perception by the brain,” Sen said at the conference. But while people with normal hearing accomplish this with relative ease, the problem is thornier for the hearing impaired, as well as for many people with autism or ADHD. “Such humans feel socially and psychologically isolated, leading to severe hardships in life,” said Sen. Voice recognition programs such as Siri can also be confused in cocktail-party-like settings.

The team integrated theoretical and computational tools developed in Sen’s lab with optogenetics, a revolutionary technique pioneered by Han, to probe the auditory cortex in mice in a cocktail-party-like environment, with competing sounds presented from multiple speakers. The team used light to suppress a specific group of cells in the cortex called parvalbumin (PV) neurons, and found that the cortex’s ability to distinguish a target sound from background noise decreased due to degraded timing in the cortical responses.
Their findings suggest that PV neurons, when they’re operating normally, aid the brain’s sound-selection performance by enhancing the timing of cortical responses, just like “a good dance partner can improve your timing on the dance floor,” said Sen. “This newly discovered mechanism may improve treatments and assistive devices.”
In the press conference, Sen represented one of four international teams invited to share their research on the mechanisms of perception. Sponsored in large part by the National Institutes of Health, the studies all explored how our perception and interpretation of sights, sounds, and touch are shaped by cognitive processes such as attention and memory.
“The neuroscience findings presented today demonstrate the importance of comparative brain studies in long-standing issues in human perception and cognition,” wrote Sabine Kastner, a professor at the Princeton Neuroscience Institute and member of the Society for Neuroscience finance committee. “These advances show how research in different model systems can come together to inform our understanding of the human brain, from the neurobiological mechanisms of perception to our subjective perceptual experiences.”