2013 Seminars
January 2013
HRC Seminar with Michael Heinz | January 11th |
Michael G. Heinz, Ph.D., Associate Professor of Speech, Language and Hearing Sciences and Biomedical Engineering, Purdue University
Title: Physiological correlates of degraded temporal fine structure sensitivity with sensorineural hearing loss
Abstract: Recent perceptual studies have suggested that listeners with sensorineural hearing loss (SNHL) have a reduced ability to use temporal fine-structure cues for speech and pitch perception. These results have fueled an active debate about the role of temporal coding in normal and impaired hearing, and have important implications for improving the ability of hearing aids and cochlear implants to restore speech perception in noise. This talk will describe some of our recent studies exploring the physiological bases for these perceptual results through a combination of neurophysiological, perceptual, and computational modeling approaches. Recordings from auditory-nerve fibers in chinchillas with noise-induced hearing loss suggest that the fundamental ability of fibers to phase lock to temporal fine structure in quiet conditions is not degraded by SNHL, but that degraded phase locking emerges in background noise. In addition, a number of other effects of SNHL have been observed that may also contribute to perceptual deficits in temporal processing of complex stimuli. These effects include changes in the relative encoding of envelope and fine structure, loss of tonotopicity, and reduced across-CF traveling-wave delays. Furthermore, our correlated neural modeling and human perception results using vocoded speech stimuli suggest the possibility that reported fine-structure deficits could be related (at least in part) to a reduction in recovered envelope cues, which result from cochlear transformations between acoustic fine structure and neural envelope.
HRC Seminar with Sarah Verhulst | January 18th |
Dr. Sarah Verhulst, Post-Doctoral Associate, Auditory Neuroscience Lab, Center for Computational Neuroscience and Neural Technology, Boston University
Title: Cochlear contributions to the precedence effect & hearing impairment through model predictions of brainstem responses
Abstract: This talk focuses on how psychoacoustical, physiological and modeling approaches can be combined to provide insights into the different processing stages along the auditory pathway, both for normal and impaired hearing. The first part of the talk compares click-evoked otoacoustic emissions (CEOAEs), auditory brainstem responses (ABRs) and psychoacoustical results to characterize the consequences of basilar-membrane interactions for the perception of double click pairs known to evoke the precedence effect. Perceptually, the click pairs were shown to give rise to fusion (i.e., the inability to hear out the second click in a lead-lag click pair) for inter-click intervals (ICIs) between 1 and 4 ms, regardless of whether they were presented monaurally or binaurally. The ICI range for which the percept was fused correlated well with the ICI range for which the CEOAE and ABR responses were reduced in level by the presence of a preceding click (i.e., lag suppression). These results suggest that peripheral suppression of a lagging click up to the level of the brainstem accounts for the perceptual aspects of the precedence effect for click stimuli. The second part of the talk explores the consequences of various forms of hearing damage on auditory brainstem responses using a model. Previous studies suggest that the latency of ABR wave-V decreases with increasing stimulus level in normal-hearing listeners, an effect often ascribed to broadened auditory filters. Following this logic, hearing-impaired subjects with broad auditory filters should exhibit shorter wave-V latencies than normal-hearing listeners. However, model predictions suggest that these ideas may not bear out. A number of recent studies suggest that noise exposure preferentially damages low-spontaneous-rate (LSR) auditory nerve fibers (ANFs) before high-spontaneous-rate fibers are affected. The model investigates how such preferential damage to LSR ANFs impacts the latency of ABR wave-V.
The adopted modeling approach can improve our understanding of how ABR wave-V latency reflects peripheral function, and thereby enhance its utility in diagnosing various forms of hearing impairment.
HRC Seminar with Konstantina Stankovic | January 25th |
Konstantina Stankovic, M.D., Ph.D., Assistant Professor of Otology and Laryngology, Harvard Medical School
Title: Vistas in Neurotologic Research: From Biologic Batteries to Biomarkers
Abstract: In this talk, we first review our recent collaborative work on powering electronics by the ear without affecting hearing. Next, we discuss our ongoing work on optical imaging of the inner ear for cellular-level diagnosis and therapy. We conclude with our research on the discovery of biomarkers in cochlear fluids and tissues using high-throughput technologies.
Bio: Konstantina Stankovic, M.D., Ph.D. is an auditory neuroscientist and a practicing neurotologic surgeon at Massachusetts Eye and Ear Infirmary, Massachusetts General Hospital and Harvard Medical School. She studied at MIT (BS and PhD degrees) and Harvard (MD degree, postdoctoral fellowship, residency in otolaryngology – head and neck surgery, and clinical fellowship in neurotology). Her present research program is cross-disciplinary: she combines tools of systems neuroscience with molecular, genetic and genomic studies in mice and humans to improve diagnostics and therapeutics for sensorineural hearing loss.
February 2013
HRC Seminar with Enrique Lopez-Poveda | February 1st |
Enrique A. Lopez-Poveda, Ph.D., University of Salamanca, Spain
Title: A word of caution about exceptionally sharp behavioral estimates of cochlear filter tuning
Seminar cancelled due to weather | February 8th |
Seminar cancelled due to weather. Rescheduled date TBA
Takayuki Ito, Ph.D., Haskins Laboratories
Title: Orofacial somatosensory function in speech processing
Abstract: In the orofacial somatosensory system, cutaneous inputs may contribute to speech processing because several orofacial muscles lack muscle spindles and tendon organs. It is important to know how orofacial cutaneous inputs function in speech motor control and in the perception of speech sounds. Here we consider two topics: (1) somatosensory afferents associated with facial skin deformation in speech motor control and learning, and (2) the interaction between orofacial somatosensory afferents and the perception of speech sounds. In a series of studies, we applied mechanical perturbations to the facial skin. By observing the responses to these perturbations during speech production and perception, we found a significant role of orofacial cutaneous inputs in both. The results also suggest that speech perception and production are intimately linked and accordingly may share common neural mechanisms.
HRC Seminar – ARO Practice Session | February 15th |
HRC Seminar with David McAlpine | February 22nd |
Dr. David McAlpine, Professor of Auditory Neuroscience, University College London
Title: Cocktail Party Listening – With a Twist!
Abstract: The ability to hear out a sound source against a background of competing sounds is critical to the survival of many species, and essential for human communication. Nevertheless, brain mechanisms contributing to such ‘cocktail-party’ listening remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source, when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. Here, employing human psychophysics and brain imaging (magneto-encephalography), and a novel stimulus in which binaural cues and sound amplitude are modulated with a fixed mutual relation, we demonstrate that the auditory brain emphasizes binaural information arising during the early, rising portion of each modulation cycle. Single-neuron recordings in small mammals, which confirm this finding, indicate that a process of adaptation occurring before binaural integration renders spatial information recoverable in an otherwise un-localizable sound.
March 2013
HRC Seminar with John Middlebrooks | March 29th |
John Middlebrooks, Ph.D., Professor, University of California at Irvine
Title: A cortical substrate for auditory scene analysis
April 2013
HRC Seminar with Jonas Braasch | April 5th |
Dr. Jonas Braasch, Associate Professor, Rensselaer Polytechnic Institute
Title: Application of top-down strategies in binaural models to analyze complex sound source scenarios
Abstract: Current models to explain binaural hearing generally focus on bottom-up processes of the central nervous system to simulate sound localization and other binaural tasks. While these models have been very successful in explaining a number of psychoacoustic phenomena, their architectures cannot simulate the active exploration of sound fields. In this talk, top-down feedback strategies to solve some of the current challenges in the field will be discussed. A precedence effect model that adapts its inhibition parameters to the test stimuli and a head-movement model that can resolve front/back confusions through strategic head movements are presented as examples of higher and lower-level feedback structures.
Cancelled – HRC Seminar with Graham Naylor | April 8th |
Please note that this seminar has been cancelled. We are working to reschedule.
HRC Seminar with Xiaoqin Wang | April 12th |
Xiaoqin Wang, Ph.D., Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University
Title: What is missing in auditory cortex under cochlear implant stimulation?
Abstract: The success of a cochlear implant (CI) depends on the central auditory system’s ability to adequately process and interpret electric stimuli delivered to the cochlea. While CI devices have been widely used clinically, our knowledge of how the central auditory system processes CI stimulation is limited, especially in the primate brain. Much of this knowledge has come from electrophysiological studies in anesthetized animals. We have developed a new non-human primate model for CI research using the common marmoset (Callithrix jacchus), a highly vocal New World monkey that has emerged as a promising model for studying the neural basis of hearing and vocal communication in recent years. Our recent work has demonstrated that this non-human primate species is suitable for studying single neuron responses in auditory cortex to CI stimulation in awake and behaving conditions. By implanting a CI electrode array in one cochlea and leaving the other cochlea acoustically intact, we were able to compare each neuron’s responses to acoustic and CI stimulation separately or in combination in the primary auditory cortex (A1) of awake marmosets. On the basis of extensive recordings from populations of single neurons, we have discovered that CI stimulation is surprisingly ineffective at activating many A1 neurons, particularly those in the hemisphere ipsilateral to the CI stimulation, in sharp contrast to acoustic stimulation, which is effective at activating the majority of neurons in A1 of both hemispheres. We further showed that CI-nonresponsive neurons exhibited greater acoustic stimulus selectivity than CI-responsive neurons, with narrower frequency tuning and greater non-monotonicity over sound level. Such cortical neurons may play an important role in perceptual behaviors requiring fine frequency and level discrimination.
Our findings suggest that selective populations of auditory cortex neurons are not effectively activated by commonly used CI stimulation patterns and provide important insights into mechanisms underlying poor performance in a wide range of perceptual tasks by CI users.
HRC Seminar with Agnes Leger | April 19th |
Dr. Agnes Leger, Postdoctoral Associate, Research Laboratory of Electronics, MIT
Title: Abnormal speech processing for hearing-impaired listeners in frequency regions where absolute thresholds are normal
Abstract: Speech intelligibility is reduced for listeners with sensorineural hearing loss, especially in noise. The extent to which this reduction is due to reduced audibility or supra-threshold deficits is still debated. The specific influence of supra-threshold deficits on speech intelligibility was investigated.
The intelligibility of nonsense speech signals in quiet and in various noise maskers was measured for normal-hearing (NH) and hearing-impaired (HI) listeners with hearing loss in high-frequency regions. Participants in both groups had a wide range of ages. The effect of audibility was limited by filtering the signals into low (≤1.5 kHz), mid (1-3 kHz) and low+mid (≤3 kHz) frequency regions, where pure-tone sensitivity was normal or near-normal for the HI listeners. The influence of impaired frequency selectivity on speech intelligibility was investigated for NH listeners by simulating broadening of the auditory filters using a spectral-smearing algorithm. Temporal fine structure sensitivity was estimated for NH and HI listeners by measuring sensitivity to interaural phase differences. Otoacoustic emissions and brainstem electrical responses were measured. HI listeners showed mild to severe intelligibility deficits for speech in quiet and in noise. Similar deficits were obtained for steady and fluctuating noises. Simulated reduced frequency selectivity also led to deficits in intelligibility for speech in quiet and in noise, but these were not large enough to explain the deficits found for the HI listeners. The results suggest that speech deficits for the HI listeners may result from supra-threshold auditory deficits caused by outer hair cell damage and by factors associated with aging. The influence of temporal fine structure sensitivity remains unclear.
These results suggest that speech intelligibility can be strongly influenced by supra-threshold auditory deficits. Audiometric thresholds within the “normal” range (better than 20 dB HL) do not imply normal auditory function.
HRC Seminar with Dan Sanes | April 26th |
Dr. Daniel H. Sanes, Center for Neural Science, New York University
Title: Sensitive periods for auditory perceptual experience
May 2013
New Date: HRC Seminar with Oded Ghitza | May 17th |
Dr. Oded Ghitza, Research Professor, Hearing Research Center and Department of Biomedical Engineering, Boston University
Title: Speech perception, cortical theta oscillations and auditory channel capacity
Abstract: Speech time-compressed by a factor greater than about three is unintelligible. Surprisingly, intelligibility is considerably restored by ‘repackaging’ – a process of dividing the compressed speech signal into segments, here called packets, and inserting gaps between the packets (Ghitza and Greenberg, 2009). Periodic repackaging is defined by two parameters, the packet duration and the packaging rate. The amount of information, in bits, carried by a packet can be inferred from the packet duration, and the information rate, in bits/sec, is determined by a mix of packet duration and packaging rate.
This talk is concerned with the following questions: (i) what is the maximum speech-information rate that can be perceived without error (i.e., what is the auditory channel capacity, in bits/sec)? and (ii) what is the cortical function that determines auditory capacity? To address these questions, we measured intelligibility of naturally spoken 7-digit telephone numbers time-compressed by factors up to eight, with parametrically varied repackaging parameters. The results show that, at capacity, packaging rate and packet duration are in correspondence with properties of cortical theta oscillations. For any prescribed compression factor, the packaging rate is 9 packets/sec – in correspondence with the upper limit of cortical theta (9 Hz). The information delivered by a packet is the information contained in one uncompressed intervocalic segment (one theta-syllable; Ghitza, 2013). These estimates are at capacity because, for any other packaging-rate/packet-duration combination with a higher information rate, intelligibility deteriorates.
HRC Seminar with Hideki Kawahara | May 24th |
September 2013
HRC Seminar with Agnes Leger | September 13th |
Agnès Léger, Post-Doctoral Associate, Research Laboratory of Electronics, MIT
Title: Abnormal speech processing for hearing-impaired listeners in frequency regions where absolute thresholds are normal
Abstract: Speech intelligibility is reduced for listeners with sensorineural hearing loss, especially in noise. The extent to which this reduction is due to reduced audibility or supra-threshold deficits is still debated. The specific influence of supra-threshold deficits on speech intelligibility was investigated. The intelligibility of nonsense speech signals in quiet and in various noise maskers was measured for normal-hearing (NH) and hearing-impaired (HI) listeners with hearing loss in high-frequency regions. Participants in both groups had a wide range of ages. The effect of audibility was limited by filtering the signals into low (≤1.5 kHz), mid (1-3 kHz) and low+mid (≤3 kHz) frequency regions, where pure-tone sensitivity was normal or near-normal for the HI listeners. The influence of impaired frequency selectivity on speech intelligibility was investigated for NH listeners by simulating broadening of the auditory filters using a spectral-smearing algorithm. Temporal fine structure sensitivity was estimated for NH and HI listeners by measuring sensitivity to interaural phase differences. Otoacoustic emissions and brainstem electrical responses were measured. HI listeners showed mild to severe intelligibility deficits for speech in quiet and in noise. Similar deficits were obtained for steady and fluctuating noises. Simulated reduced frequency selectivity also led to deficits in intelligibility for speech in quiet and in noise, but these were not large enough to explain the deficits found for the HI listeners. The results suggest that speech deficits for the HI listeners may result from supra-threshold auditory deficits caused by outer hair cell damage and by factors associated with aging. The influence of temporal fine structure sensitivity remains unclear.
These results suggest that speech intelligibility can be strongly influenced by supra-threshold auditory deficits. Audiometric thresholds within the “normal” range (better than 20 dB HL) do not imply normal auditory function.
HRC Seminar with Josh McDermott | September 20th |
Josh McDermott, Ph.D., Assistant Professor, Department of Brain and Cognitive Sciences, MIT
Title: Sound Texture Perception Via Statistics of the Auditory Periphery
Abstract: Humans infer many important things about the world from the sound pressure waveforms that enter the ears. In doing so we solve a number of difficult and intriguing computational problems. We recognize sound sources despite large variability in the waveforms they produce, extract behaviorally relevant attributes that are not explicit in the input to the ear, and do so even when sound sources are embedded in dense mixtures with other sounds. This talk will describe my recent work investigating how we accomplish these feats. I will focus in particular on the perception of sound textures – sounds produced by a superposition of multiple similar acoustic events, as are produced by rain, swarms of insects, or galloping horses. I will explore the hypothesis that textures are represented and recognized with statistics that integrate information over time to yield the average properties of a sound. We have examined this hypothesis by synthesizing textures from statistics measured in natural sounds, and by testing listeners’ discrimination of the resulting synthetic textures. We have also begun to explore the role of texture in auditory scene analysis, in which the need to integrate information over time conflicts with the need to segregate information from distinct sound sources.
October 2013
HRC Seminar with Monica Hawley | October 4th |
Monica Hawley, Ph.D.
Title: An Intervention to Expand the Auditory Dynamic Range for Loudness Among Persons with Hearing Losses and Hyperacusis
HRC Seminar with Takayuki Ito | October 11th |
Cancelled: HRC Seminar with Douglas Brungart | October 18th |
HRC Seminar with Tim Gardner | October 25th |
Tim Gardner, Assistant Professor, Department of Biology, Boston University
Title: Auditory objects in songbird pre-motor cortex
Rescheduled: HRC Seminar with Douglas Brungart | October 29th |
Douglas S. Brungart, Ph.D., Audiology and Speech Center, Walter Reed National Military Medical Center
Title: New Directions in Audiology and Speech Pathology: Hearing and Speech Research at Walter Reed Bethesda
November 2013
HRC Seminar with Magdalena Wojtczak | November 1st |
Magdalena Wojtczak, Ph.D., Research Associate Professor, Department of Psychology, University of Minnesota
Title: Exploring the role of the medial olivocochlear reflex in perceptual enhancement
Abstract: When a complex tone with equal-amplitude components is preceded by the same complex with components removed from a certain frequency region, the components from that region in the second complex are perceived as louder, standing out from the rest of the spectral content. The origin of this “pop-out” or “enhancement” effect is unknown. One of the proposed accounts for the effect is in terms of adaptation of suppression at the level of the cochlea. This study tests the hypothesis that the enhancement effect is at least partially mediated by the activation of medial olivocochlear efferents, which are known to elicit changes in cochlear amplifier gain over the course of a stimulus. The hypothesis is tested using noninvasive physiological measurements of stimulus frequency otoacoustic emissions (SFOAEs). The results from SFOAE measurements will be compared with psychophysical results from a task using the same enhancement-producing stimulus, and the challenges of using SFOAE measurements to verify interpretations of psychophysical findings will be discussed.
HRC Seminar with Edmund Lalor | November 18th |
Dr. Edmund Lalor, Assistant Professor, Trinity College
Title: The effects of attention and visual input on the representation of natural speech in EEG
Abstract: Traditionally, the use of electroencephalography (EEG) to study the neural processing of natural speech in humans has been constrained by the need to repeatedly present discrete stimuli. Progress has been made recently by the realization that cortical population activity tracks the amplitude envelope of speech. This has led to studies using linear regression methods that allow the presentation of continuous speech. In this talk I will present the results of several studies that use such methods to examine how the representation of speech is affected by attention and by visual inputs. Specifically, I will present data showing that it is possible to “reconstruct” a speech stimulus from single-trial EEG and, by doing so, to decode how a subject is deploying attention in a naturalistic cocktail party scenario. I will also present results showing that the cortical representation of the envelope of auditory speech occurs earlier when accompanied by visual speech. Finally, I will discuss some implications that these findings have for the design of future EEG studies into the ongoing dynamics of cognition and for research aimed at identifying biomarkers of clinical disorders.
December 2013
Cancelled: HRC Seminar with Ross Williamson | December 13th |