A Solution to the Cocktail Party Problem—Hidden in the Brain
BU researchers are mixing neuroscience, photonics, and engineering to help people with hearing loss pick out sounds in noisy spaces
Imagine you’re trying to have a conversation with a friend at a loud party. To pick out what they’re saying, your brain has to focus on their voice and filter out all the other party sounds—the chatter, the music. That’s the cocktail party problem: the challenge of isolating a single sound in a noisy environment. Overcoming it is particularly difficult for those with hearing loss, ADHD, or autism, but Boston University researcher Kamal Sen is pursuing a potential solution.
An associate professor of biomedical engineering at the BU College of Engineering, Sen is refining an algorithm he developed that mimics the steps the brain takes to separate competing sounds; he aims to build it into new technology to help people with hearing loss.
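The article doesn't describe Sen's algorithm itself, but the core idea behind many sound-separation systems can be illustrated with a toy example: transform a mixture into the frequency domain, keep only the components belonging to the sound you're "attending" to, and transform back. Below is a minimal, stdlib-only sketch of that time-frequency masking idea; the two pure tones standing in for two talkers, and the crude binary mask, are illustrative assumptions, not Sen's method.

```python
# Illustrative sketch (not Sen's actual algorithm): recovering one "talker"
# from a mixture by masking in the frequency domain. Two pure tones stand
# in for a target voice and background noise.
import cmath
import math

N = 256                      # samples per frame
f_target, f_noise = 8, 32    # tone frequencies, in DFT bins (assumed values)

# Mixture of a "target talker" (bin 8) and "background" (bin 32)
mix = [math.sin(2 * math.pi * f_target * n / N) +
       math.sin(2 * math.pi * f_noise * n / N) for n in range(N)]

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demo)."""
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                for n in range(len(x))) for k in range(len(x))]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n_pts = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / n_pts)
                for k in range(n_pts)).real / n_pts for n in range(n_pts)]

# "Attend" to the target: keep only the spectral bins at its frequency
# (and its mirror bin), zeroing everything else -- a crude binary mask.
spectrum = dft(mix)
masked = [X if k in (f_target, N - f_target) else 0
          for k, X in enumerate(spectrum)]
recovered = idft(masked)

# Compare against the clean target tone
target = [math.sin(2 * math.pi * f_target * n / N) for n in range(N)]
err = max(abs(r - t) for r, t in zip(recovered, target))
print(f"max reconstruction error: {err:.2e}")
```

Real systems replace the hand-picked mask with one estimated from the signal (or, in Sen's brain-inspired work, from cues like spatial location), but the attend-and-filter structure is the same.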
To watch the brain in action as it processes competing noises, Sen has teamed up with a BU expert in neurophotonics, the science of using light to noninvasively monitor the brain.
David Boas, director of the BU Neurophotonics Center, is a pioneer in functional near-infrared spectroscopy (fNIRS), which uses infrared light to track the flow of blood in the brain and map neural activity. Using this technology, Boas has created a cap that employs light and sensors to measure what’s happening in different brain regions and that can be worn in everyday situations, allowing researchers to study people outside of the lab.
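The physical principle behind fNIRS is that oxygenated and deoxygenated hemoglobin absorb near-infrared light differently, so measuring intensity changes at two wavelengths lets you solve for changes in both. A minimal sketch of that conversion (the modified Beer-Lambert law) is below; the extinction coefficients, wavelengths, path geometry, and intensity values are assumed round numbers for illustration, not parameters of the BU system.

```python
# Illustrative sketch of the modified Beer-Lambert law underlying fNIRS:
# detected-light changes at two wavelengths -> changes in oxy- (HbO) and
# deoxy-hemoglobin (HbR) concentration. All numbers below are assumed
# for illustration only.
import math

# Assumed extinction coefficients [1/(mM*cm)] at 760 nm and 850 nm
eps = {760: {"HbO": 1.5, "HbR": 3.8},
       850: {"HbO": 2.5, "HbR": 1.8}}
d = 3.0      # assumed source-detector separation, cm
dpf = 6.0    # assumed differential pathlength factor (dimensionless)

def hemoglobin_change(I_ratio_760, I_ratio_850):
    """Solve the 2x2 Beer-Lambert system for (dHbO, dHbR) in mM, given
    detected-intensity ratios I(t)/I(baseline) at each wavelength."""
    # Change in optical density at each wavelength
    dOD = {760: -math.log10(I_ratio_760), 850: -math.log10(I_ratio_850)}
    a, b = eps[760]["HbO"] * d * dpf, eps[760]["HbR"] * d * dpf
    c, e = eps[850]["HbO"] * d * dpf, eps[850]["HbR"] * d * dpf
    det = a * e - b * c
    dHbO = (dOD[760] * e - b * dOD[850]) / det
    dHbR = (a * dOD[850] - dOD[760] * c) / det
    return dHbO, dHbR

# A small dip in detected light at both wavelengths, stronger at 850 nm,
# comes out as more HbO and less HbR -- the signature of increased blood
# flow to an active brain region.
dHbO, dHbR = hemoglobin_change(0.99, 0.97)
print(f"dHbO = {dHbO:+.4f} mM, dHbR = {dHbR:+.4f} mM")
```

Mapping these concentration changes across many source-detector pairs on a cap is what lets a system like Boas' localize activity to different brain regions.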
“When David joined BU, his wearable system for noninvasive brain measurements created a unique opportunity,” says Sen, director of the BU Natural Sounds & Neural Coding Laboratory. “We realized we could study humans in realistic, multisensory environments, [and] make real-time brain recordings.”
To better observe people working through the cocktail party problem, Sen has subjects watch and listen to two movies at the same time while wearing Boas' cap, prompting them on which film to focus on. The research team records each subject's brain activity as they attempt to focus on one movie and ignore the other.
“Our research now looks at how vision collaborates with hearing—because in natural settings we don’t just hear people, we see them too,” Sen says. “With David’s group, we’ve expanded our experiments to include vision, studying how auditory and visual systems converge to identify speech and faces in complex scenes.”
Sen hopes their work could be transformative for the millions of people with hearing difficulties—and help make parties a little more fun.
“Even for people with normal hearing, real-life environments can be mentally taxing, so such technologies could reduce listening effort,” Sen says. “These algorithms could be embedded in augmented or virtual reality systems, making it easier for users to focus on relevant sounds.”
In the video above, Sen and Boas, an ENG professor of biomedical engineering, show how they’re blending disciplines to solve the cocktail party problem.
This research was supported by a National Science Foundation Integrative Strategies for Understanding Neural and Cognitive Systems (NCS) FRONTIERS award.