2020 Seminars

January 2020

HRC Seminar with Antje Ihlefeld January 31st

Antje Ihlefeld, New Jersey Institute of Technology

Title: Circling Back on Theories of Informational Masking

Abstract: A debilitating challenge for most people with hearing loss is that even moderate amounts of background sound make it impossible to understand verbal communication. No device or algorithm can currently restore hearing in these situations. Informational masking holds a key to this problem. However, informational masking has so far been described as a psychological construct rather than defined by its underlying brain-based mechanisms. We recently discovered parallels between informational masking and a related phenomenon in vision, called visual crowding. I will present behavioral data on this phenomenon suggesting that informational masking is not purely driven by failures in auditory segregation and attention. Further, I will show how we extend this paradigm toward studying cortical function under informational masking via functional near-infrared spectroscopy.

February 2020

HRC Seminar with Luna Prud’homme February 7th

Luna Prud’homme, Visiting PhD student from the University of Lyon

Title: Investigating the role of harmonic cancellation in masked speech intelligibility

Abstract: While there is evidence that harmonic cancellation plays a role in the segregation of harmonic sounds based on fundamental frequency (F0), its utility for natural sounds with non-stationary F0s is unclear. The aim of my PhD work is to understand the potential of harmonic cancellation for speech masked by speech. The first step was to modify a speech intelligibility model to include harmonic cancellation, so that it is able to predict speech intelligibility in the presence of monotonized harmonic complex maskers. The next step is to apply the model to more complex maskers with intonation and amplitude modulations, and to partially harmonic stimuli such as speech. To facilitate this step, a behavioral experiment was conducted in which speech reception thresholds (SRTs) were measured using maskers ranging from noise to speech. The different maskers provided a comparison between conditions with and without harmonic structure, amplitude modulation, and variations in F0 over time. The ability of the model to capture variations in the SRTs, and the contribution of harmonic cancellation to the predictions, will provide insight into the role of harmonic cancellation in cocktail party scenarios.

HRC Seminar with Erica Walker February 28th

Erica Walker, Boston University School of Public Health, Community Noise Lab

Title: Community Noise Lab: Exposure, Epidemiology, and Experiments

Abstract: Erica Walker is an environmental health scientist interested in understanding how both the physical and subjective components of sound and noise impact our health and well-being. Currently, she is a postdoctoral researcher at Boston University School of Public Health’s Department of Environmental Health and is the Principal Investigator of Community Noise Lab, whose research work has been generously funded by the Robert Wood Johnson Foundation. Community Noise Lab’s primary aim is to explore the relationship between community noise and health by working directly with communities to address their specific noise issues using real-time sound monitoring, smartphone technology, laboratory-based experiments, and community engagement activities. In this talk, Erica will discuss her current work on environmental sound and community noise perception within several communities across New England. She will also discuss her laboratory-based experiments investigating how sound acutely impacts our brain activity.

March 2020

HRC Seminar with Lee M. Miller March 6th

Lee M. Miller, University of California, Davis

Title: Neuroengineering the Cocktail Party: Improving Real-World Speech Communication Across the Life Span

Abstract: Understanding speech in noisy “cocktail party” environments is the most profound daily challenge for listeners with hearing loss. Unfortunately, present approaches are limited both in i) diagnosing how different listeners fail to understand speech-in-noise and ii) designing aids and other assistive devices to cope with dynamic and acoustically cluttered scenes. This talk will summarize recent work in our lab using auditory neuroscience and neuroengineering to advance both these aims. First, we have developed a novel EEG method that provides a rapid, hierarchical view into the functional health of the auditory-speech system – from brainstem to cortex – including how different processing stages may interact. Combining natural speech acoustics with FM sweep chirps, this CHEECH (chirp-speech) approach is adaptable to virtually any speech corpus and real-world perceptual or cognitive task. I will describe how CHEECH allows us to characterize auditory development and auditory-visual crossmodal plasticity in children with cochlear implants, and how it may reveal fine temporal processing deficits in older adults. A second project develops an assistive device that uses eye gaze tracking, microphone array beamforming, and virtual 3-D acoustic cues (HRTFs, or head-related transfer functions) to serve as an “attentional prosthesis”: wherever a listener looks, she hears that sound best. The system is implemented on PC and on a mobile platform (Android), with the ultimate goal of improving real-world comprehension, both in individuals with hearing loss and in healthy listeners at “cocktail parties”.