NSF Grant Will Advance Study of the Cocktail Party Effect
BU researchers probe puzzling auditory phenomenon

Xue Han (left) and Kamal Sen (right) have been awarded nearly $1 million from the National Science Foundation’s Brain Research through Advancing Innovative Neurotechnologies initiative to study the cocktail party effect, an auditory phenomenon that enables people to selectively listen to a single sound source in a noisy environment. Photos courtesy of Boston University College of Engineering
The cocktail party effect, an auditory phenomenon that allows people to selectively listen to a single sound source in a noisy environment, has long puzzled researchers. Now two Boston University College of Engineering professors, aided by nearly $1 million from the National Science Foundation’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, will combine their areas of expertise to study brain processes that make it possible.
Kamal Sen, principal investigator and associate professor of biomedical engineering, and Xue Han, co–principal investigator and associate professor of biomedical engineering, will use the grant, awarded through the NSF Integrative Strategies for Understanding Neural and Cognitive Systems (NCS) program, to expand Sen’s previous work with songbirds and incorporate the optogenetic techniques Han has been developing in mice.
“I’m really excited about launching this new interdisciplinary collaboration with Xue,” says Sen. “By merging our expertise in computational analysis and modeling with Xue’s expertise in powerful optogenetic techniques to probe neural circuits, we can combine theory and sophisticated experiments to work towards solving the longstanding cocktail party problem.”
Optogenetic techniques use light to switch neurons on and off, allowing researchers to isolate specific neural pathways for study. Sen’s research with songbirds examined the electrical activity of neurons in the auditory cortex while the birds listened to a song from one source, either in isolation or in the presence of interfering noise from another source.
The researchers found that auditory neurons with different response properties react to different sound sources: depending on where the sounds are located, particular neurons in the bird’s brain are activated, allowing it to pick out the single song from the noisy interference. The findings suggest that a network of neurons, each with slightly different spatial response properties that depend on where a sound source sits relative to the listener, lets the brain separate the target sound from the interfering noise. The researchers also found that similar neurons exist in mice, whose underlying circuits can now be explored with optogenetic tools that silence or activate specific neuron types.
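The article stays at the conceptual level, but the basic idea, units tuned to different locations letting a downstream readout favor one source over another, can be sketched in a toy simulation. The Python snippet below is purely illustrative and is not the researchers’ model: the Gaussian tuning curves, the source locations, and the weighted readout are all assumptions chosen for the example.

```python
# Illustrative toy model (not the researchers' actual model): a small
# population of units with different spatial tuning lets a downstream
# readout favor a target source over an interferer at another location.
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs

# Two hypothetical sources: a "song" (target) and broadband "noise" (masker).
target = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
masker = rng.standard_normal(t.size)

target_az, masker_az = -45.0, +45.0   # assumed source locations (degrees azimuth)
centers = np.linspace(-90, 90, 7)     # preferred locations of 7 model units
width = 30.0                          # assumed tuning-curve width (degrees)

def spatial_gain(azimuth, center, width=width):
    """Gaussian spatial tuning: how strongly a unit responds to a location."""
    return np.exp(-0.5 * ((azimuth - center) / width) ** 2)

# Each unit hears a location-weighted mixture of the two sources.
responses = np.array([
    spatial_gain(target_az, c) * target + spatial_gain(masker_az, c) * masker
    for c in centers
])

# A readout that weights units by how well they are tuned to the target's location.
weights = spatial_gain(target_az, centers)
readout = weights @ responses / weights.sum()

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"raw mixture vs. target  r = {corr(target + masker, target):.2f}")
print(f"readout vs. target      r = {corr(readout, target):.2f}")
```

In this toy setup the location-weighted readout correlates far more strongly with the target than the raw mixture does, which is the intuition behind spatially tuned neurons supporting selective listening in a cocktail party setting.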
Uncovering the neural circuits involved in these processes could lead to better hearing aids and cochlear implants, and could also push speech recognition technology forward. Currently, even when medical devices restore hearing in patients with hearing loss, those patients still have difficulty concentrating on one sound source at a time. The research could also advance the capabilities of voice assistants such as Amazon’s Echo and Apple’s Siri.
“Technologies like the Amazon Echo or Apple’s Siri have difficulty functioning when there are multiple speakers,” says Sen. “But once we understand the biological circuit, we can translate that into an electronic one.”