Who Said That?
BME Researchers Probe Common, Age-Related Hearing Problem
By Mark Dwortzan
The older you get, the harder it becomes to communicate in restaurants, at parties and in other common social settings where sound reverberates, but ordinary hearing loss is not the primary reason. According to a new College of Engineering study funded by the National Institutes of Health, the main culprit is degraded selective attention, the auditory system’s ability to distinguish among multiple sound sources in noisy environments. Weakened selective attention compromises middle-aged and older individuals’ ability to decipher who said what, leading many seniors to avoid social gatherings and stay home.
Having shown in a previous study that one’s auditory sensitivity threshold—the decibel level below which one cannot detect sounds—does not predict how well one can pinpoint sound sources, the researchers set up this study to establish a clear link between age and selective attention capability.
Toward that end, they equipped 22 listeners aged approximately 21 to 55—all with normal auditory sensitivity thresholds—with headphones simulating three competing speakers with the same voice, positioned 15 degrees to the left of the subject, directly in front, and 15 degrees to the right. As the competing speakers enunciated sequences of digits (1, 2, 3, etc.), subjects were asked to report the numbers they heard from the source positioned in front of them. They completed the test in three simulated auditory environments: a pristine, echo-free chamber without walls; a normal room with ordinary walls; and an extremely reverberant space, akin to a tiled bathroom.
The oldest subjects showed the greatest decline in selective attention when the task moved from the pristine environment to the normal one. In the extremely reverberant space, most subjects reported digits no better than chance.
“Selective attention differences between younger and older subjects only showed up in real-world environments in which sound reverberates off walls,” said biomedical engineering professor Barbara Shinn-Cunningham. “The spatial cues in sound in a real room are very different from what most people experience in an echo-free room.”
In echo-free environments, the brain relies primarily on low-frequency sound waves to estimate where the sound is coming from, but in natural settings with reverberant noise, low-frequency sound signals get distorted before they reach the auditory system. Whereas younger listeners can interpret mid-to-high-frequency sound signals to pinpoint sound sources in reverberant environments, middle-aged listeners are less able to process these signals due to age-related physiological changes in the auditory brainstem, Shinn-Cunningham speculates.
“Common hearing aids are not designed to amplify high-frequency sound because it’s not critical to understanding speech content,” she observes. “Providing this capability may help older individuals to direct their attention in social situations that would otherwise be daunting.”