Long-known auditory phenomenon remains little understood
By Liz Sheeley
Two ENG professors have been awarded nearly $1 million from the National Science Foundation (NSF) as part of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative. Principal Investigator Associate Professor Kamal Sen (BME) and Co-Principal Investigator Associate Professor Xue Han (BME) will study the neural networks that allow the brain to distinguish sounds from each other.
“I’m really excited about launching this new interdisciplinary collaboration with Xue,” says Sen. “By merging our expertise in computational analysis and modeling with Xue’s expertise in powerful optogenetic techniques to probe neural circuits, we can combine theory and sophisticated experiments to work towards solving the longstanding cocktail party problem.”
The cocktail party effect, an auditory phenomenon in which a listener can single out one sound source in a noisy environment, is not a well-understood biological process. Even when hearing is restored with medical devices, patients with hearing loss still have difficulty concentrating on one sound source at a time, which suggests a neurological basis for the effect.
The grant, part of the NSF Integrative Strategies for Understanding Neural and Cognitive Systems (NCS) program, will expand on Sen’s previous work with songbirds while transitioning to optogenetic techniques in mice, Han’s area of expertise. Optogenetic techniques use light to switch neurons on and off, allowing researchers to isolate specific neuronal pathways for study. Sen’s previous research recorded the electrical activity of neurons in the songbird auditory complex while the birds listened to a song from one source, either in isolation or in the presence of noise interference from another source.
The researchers found auditory neurons with different response properties that react to multiple sound sources. Depending on where those sounds are located, different neurons activate in the bird’s brain, allowing it to pick out the single song over the noisy interference. These results suggest a network of neurons, each with slightly different response properties, that fire depending on the locations of both the target sound and the noise source. That study laid the groundwork for the grant proposal, which showed that similar neurons exist in mice, opening up the opportunity to probe the underlying circuits with optogenetic tools that silence or activate specific neuron types.
Uncovering the neural circuits involved in these processes could lead to better hearing aids and cochlear implants, and could also push speech recognition technology forward.
“Technologies like the Amazon Echo or Apple’s Siri have difficulty functioning when there are multiple speakers,” says Sen. “But once we understand the biological circuit, we can translate that into an electronic one.”