2009 Seminars

January 2009

Seminar with Sharon Kujawa January 16th

“Acute, Chronic and Delayed Consequences of Noise Exposure: Conventional Wisdom and Recent Findings”

Prof. Sharon Kujawa
Department of Otology and Laryngology
Harvard Medical School
Director of Audiology
Massachusetts Eye and Ear Infirmary

 

February 2009

Seminar with Astrid Klinge February 13th

“Frequency analysis in harmonic complexes in relation to grouping cues in Mongolian gerbils”

Dipl.-Biol. Astrid Klinge
Zoophysiology & Behaviour Group
Department of Biology and Environmental Science
Carl-von-Ossietzky Universität Oldenburg

Seminar with Hendrikus (Diek) Duifhuis February 26th

“What can a time-domain cochlea model tell us about biophysical cochlear mechanics?”

Prof.dr.ir. Hendrikus (Diek) Duifhuis
Professor of Biomedical Engineering
University of Groningen
Groningen, The Netherlands

 

March 2009

Seminar with Nicholas Lesica March 25th

“Adaptive Processing of Natural Sensory Stimuli”

Nicholas Lesica, Ph.D.
Ludwig Maximilians University Munich
Department of Biology

 

April 2009

Seminar with Robert Gilkey April 3rd

“The relation between molecular psychophysics and informational masking”

Robert Gilkey, Ph.D.
Professor of Psychology
Wright State University
Dayton, Ohio

Seminar with Allyn Hubbard April 10th

“Joining the TWAMP and SANDWICH models of the cochlea: Combining stereocilia and somatic hair cell forces.”

Prof. Allyn Hubbard, Ph.D.
Professor, Electrical and Computer Engineering Department
Hearing Research Center
Boston University

Seminar with Josh McDermott April 17th

“Sound Texture Perception via Texture Synthesis”

Josh McDermott, Ph.D.
Center for Neural Science
New York University

Seminar with Bo Wen April 24th

“Dynamic range adaptation to sound level statistics in the auditory nerve”

Bo Wen, Ph.D.
EPL Neural Coding Group
Postdoctoral Fellow
R.L.E., M.I.T.

 

May 2009

Seminar with Sasha Devore May 15th

“Neural correlates and mechanisms of sound localization in everyday reverberant settings”

Sasha Devore
Doctoral Candidate, EPL Neural Coding Group
M.I.T.

 

July 2009

Seminar with Jonas Braasch and John Ellison July 8th

“Investigating the precedence effect for non-impulsive sounds: Psychoacoustic evidence and simulation”

Abstract: The human ability to localize a direct sound source in the presence of reflected sounds is known as localization dominance, an aspect of the precedence effect, formerly also called “the law of the first wavefront.” This presentation focuses on the human ability to localize non-impulsive sounds in the presence of a single reflection. In the experiments reported here, the bandwidth and characteristics of the stimuli (noise vs. harmonic complexes) were varied. The second part of the talk will be dedicated to the simulation of these experiments using a newly designed binaural model. In this model, an auto-correlation algorithm determines the delay between lead and lag and their amplitude ratio for both channels. An inverse filter is then used to eliminate the lag signal before the remaining lead signal is localized with a standard localization algorithm. Interestingly, the filter contains both inhibitory and excitatory elements, and its impulse response looks somewhat similar to the response of a chopper cell. The algorithm operates robustly on top of a model of the auditory periphery (gammatone filterbank, halfwave rectification). Because of its linear nature, the model performs better if the full waveform is reconstructed by subtracting a delayed version of the halfwave-rectified signal, with a delay time that corresponds to half the period of each frequency band’s center frequency. The model is able to simulate a number of experiments with ongoing stimuli, and it performs robustly with onset-truncated and interaural-level-difference-based stimuli that were previously investigated psychoacoustically by Dizon and Colburn. The model can also be used to demonstrate the Haas effect.
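A minimal sketch (in Python with NumPy, not the authors' code) of the lead/lag estimation and inverse-filtering steps described above, for a single channel and a single reflection. The function names, the white-noise test signal, and the 0.6 reflection gain are illustrative assumptions; in the model itself these steps would run on each band of a gammatone filterbank after halfwave rectification and waveform reconstruction.

import numpy as np

def estimate_lag(x, fs, max_delay_ms=10.0):
    """Estimate the reflection delay (samples) and gain from the autocorrelation."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
    ac = ac / ac[0]                                     # normalize so ac[0] = 1
    max_lag = int(fs * max_delay_ms / 1000)
    lag = 1 + int(np.argmax(ac[1:max_lag]))             # skip the zero-lag peak
    # For x(t) = s(t) + g*s(t - d), the normalized peak at lag d is g / (1 + g^2);
    # invert that relation to recover the reflection gain g.
    peak = ac[lag]
    gain = (1 - np.sqrt(max(1 - 4 * peak**2, 0.0))) / (2 * peak) if peak > 0 else 0.0
    return lag, gain

def remove_lag(x, lag, gain, n_terms=20):
    """Truncated inverse filter 1 / (1 + g*z^-d): subtracts the estimated reflection."""
    y = np.copy(x)
    for k in range(1, n_terms):
        if k * lag >= len(x):
            break
        shifted = np.zeros_like(x)
        shifted[k * lag:] = x[:len(x) - k * lag]
        y += (-gain) ** k * shifted
    return y

# Example: white noise plus a single 3-ms reflection at fs = 16 kHz
rng = np.random.default_rng(0)
fs = 16000
s = rng.standard_normal(fs // 5)        # 200 ms of "direct" sound
d = int(0.003 * fs)
x = s.copy()
x[d:] += 0.6 * s[:-d]                   # add the reflection
lag, gain = estimate_lag(x, fs)         # expect lag near 48 samples, gain near 0.6
direct_only = remove_lag(x, lag, gain)  # lag signal largely removed before localization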

 

August 2009

Seminar with Pascal Clark August 6th

“Coherent Modulation Decompositions for the Analysis and Modification of Speech Signals”

Abstract: Speech signals are commonly modeled in terms of a sum of low-frequency envelopes that modulate higher-frequency carriers. A widespread practice is to first break up a speech signal into analytic subbands, and then for each subband take its magnitude as the Hilbert envelope and its phase as the carrier fine structure. We argue that this is not a unique solution, and is in fact far from the ideal solution. Closer examination of the modulation signal model reveals it to be a sum-of-products model with an infinite number of possible factorizations. This allows us flexibility to define a more meaningful modulation representation for speech signals, but also requires that we as system designers ask the questions: what are envelopes and carriers, and how should they behave? In this talk I will introduce coherent modulation as an alternative to the conventional Hilbert envelope, and show how it achieves key bandlimiting properties necessary for effective, distortion-free modulation filtering. I will also spend some time on issues of interpretation, especially since coherent modulation, as presently formulated, leads to two surprising results: complex-valued modulators and unintelligible carriers.
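For reference, here is a minimal Python/SciPy sketch of the conventional subband Hilbert decomposition that the abstract argues is not the ideal factorization (the coherent method presented in the talk is not reproduced here). The band edges and the test signal are illustrative choices, not values from the talk.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def hilbert_subband(x, fs, band=(1000.0, 1400.0)):
    """Return (envelope, carrier) of one analytic subband."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    subband = sosfiltfilt(sos, x)           # zero-phase bandpass subband
    analytic = hilbert(subband)             # analytic signal of the subband
    envelope = np.abs(analytic)             # Hilbert envelope (modulator)
    carrier = np.cos(np.angle(analytic))    # unit-amplitude fine-structure carrier
    return envelope, carrier

# The product envelope * carrier reconstructs the subband exactly, but the
# Hilbert envelope is generally not bandlimited to low modulation frequencies,
# which is the source of the distortion problems discussed in the talk.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1200 * t)
env, car = hilbert_subband(x, fs)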

 

September 2009

Seminar with Lincoln Gary September 11th

 

October 2009

Seminar with Cyrus Billimoria October 2nd

 

November 2009

Seminar with Patrick Kanold November 13th

“Circuits controlling cortical plasticity”

Prof. Patrick Kanold
Department of Biology
University of Maryland, College Park

Seminar with Peter Cariani November 20th

“Auditory and Visual Sensations: Yoichi Ando’s theory of architectural acoustics”

Abstract: Professor Yoichi Ando is a well-known architectural acoustician who designed the Kirishima International Concert Hall in Japan. His design method used genetic algorithms to optimize the acoustics according to the psychophysics of listener preferences. I served as guest editor for his most recent book on architectural acoustics and perception, which has just been published by Springer this month. The book summarizes decades of psychophysical experiments related to auditory perception and listener preferences, as well as neurophysiological observations (ABR, SVR, EEG, MEG) of their neural correlates, made by Ando and his colleagues. I will give an overview of Ando’s psychophysics-based approach to architectural acoustics, his group’s psychophysical and neurophysiological findings, and his correlation-based theory of hearing and vision. Ando proposes a correlation-based model of neuronal signal processing in which features of an internal autocorrelation representation subserve “temporal sensations” (pitch, timbre, loudness, duration), while features of an internal interaural cross-correlation representation subserve “spatial sensations” (sound location, size, diffuseness related to envelopment). Together these two representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Remarkably, Ando and colleagues have found many visual analogues of auditory percepts and preferences (e.g., the missing fundamental of flickering light, and preferences for flickering lights, oscillatory movements, and texture regularity).
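As a rough illustration (not Ando’s own implementation), the two correlation representations mentioned above can be reduced to a few standard quantities in a short Python/NumPy sketch: the delay and height of the dominant peak of the normalized autocorrelation, which relate to pitch and pitch strength among the “temporal sensations,” and the maximum of the normalized interaural cross-correlation within about one millisecond (often called the IACC), which relates to the “spatial sensations” of apparent source width and diffuseness. The test signals and search ranges are illustrative assumptions.

import numpy as np

def acf_dominant_peak(x, fs, fmin=50.0):
    """Delay (s) and height of the dominant non-zero-lag peak of the normalized ACF."""
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    max_lag = int(fs / fmin)
    lags = np.arange(1, max_lag - 1)
    is_peak = (acf[lags] > acf[lags - 1]) & (acf[lags] >= acf[lags + 1])
    peaks = lags[is_peak]
    if len(peaks) == 0:
        return None, 0.0
    best = peaks[np.argmax(acf[peaks])]
    return best / fs, acf[best]

def iacc(left, right, fs, max_lag_ms=1.0):
    """Maximum of the normalized interaural cross-correlation within +/- 1 ms."""
    left = left - np.mean(left)
    right = right - np.mean(right)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    xcorr = np.correlate(left, right, mode="full") / norm
    center = len(left) - 1
    m = int(fs * max_lag_ms / 1000)
    return np.max(xcorr[center - m:center + m + 1])

# Example: a two-component harmonic tone with a crude 0.5-ms interaural delay
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
left = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
right = np.roll(left, 8)
print(acf_dominant_peak(left, fs))   # dominant ACF peak near 0.005 s (200 Hz)
print(iacc(left, right, fs))         # IACC close to 1 for this coherent signal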

 

December 2009

Seminar with Judy Dubno December 4th

“Benefits of Spatial Separation for Aided Listening”

Abstract: Improvements in speech recognition when speech and noise are spatially separated derive largely from interaural level differences (primarily head shadow) and interaural time differences. Given that effective use of the interaural difference cues provided by spatial separation improves the functional signal-to-noise ratio, deficits in the use of these cues by older adults with and without hearing loss may contribute to their speech-recognition difficulties. Bilateral amplification should benefit speech recognition in noise by increasing speech audibility and should improve spatial benefit by restoring the availability of interaural level and timing cues. In contrast, bilateral hearing aids could reduce spatial benefit by altering these cues. This presentation will review the benefit of bilateral hearing aids and the benefit of spatial separation for speech recognition in noise in older adults. Comparisons of observed and predicted speech recognition determined the extent to which amplification improved audibility and increased the use of newly available binaural cues. [Work supported by NIH]

Judy R. Dubno
Professor, Department of Otolaryngology-Head and Neck Surgery
Medical University of South Carolina, Charleston, SC, United States

Seminar with Courtenay Wilson December 11th

“Interactions Between the Auditory and Vibrotactile Senses: A Study of Perceptual Effects”

Abstract: This project is an experimental study of perceptual interactions between auditory and tactile stimuli. The experiments present vibrotactile stimuli to the fingertip and auditory tones diotically in broadband noise. Our hypothesis is that if the auditory and tactile systems integrate, performance with the two sensory stimuli presented simultaneously will differ from performance with either stimulus presented alone. The research consists of work in two major areas: (1) studies of the detection of auditory and tactile sinusoidal stimuli at levels near the threshold of perception (masked thresholds for auditory stimuli and absolute thresholds for tactile stimuli); and (2) studies of loudness matching employing various combinations of auditory and tactile stimuli presented at supra-threshold levels. Results were compared to three models of auditory-tactile integration. The objective detection studies explore the effects of three major variables on perceptual integration: (a) the starting phase of the auditory stimulus relative to the tactile stimulus; (b) the temporal synchrony of stimulation within each of the two modalities; and (c) the frequency of stimulation within each modality. Detection performance for combined auditory-tactile (A+T) presentations was measured using stimulus levels that yielded 63%-77%-correct unimodal performance in a 2-interval, 2-alternative forced-choice procedure. The research demonstrates objective and subjective perceptual effects that support the mounting anatomical and physiological evidence for interactions between the auditory and tactual sensory systems.
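The abstract does not name the three integration models that were tested, so the Python sketch below is only an illustration of how such predictions are formed: two standard baselines for combined-detection data, probability summation under a high-threshold assumption and d'-summation in quadrature, applied to unimodal 2-interval, 2-alternative forced-choice percent-correct scores near the 63%-77% range mentioned above.

from scipy.stats import norm

def prob_summation_2afc(p_a, p_t):
    """Predicted 2AFC percent correct if either modality alone can trigger detection."""
    d_a, d_t = 2 * p_a - 1, 2 * p_t - 1     # high-threshold detection probabilities
    d_comb = 1 - (1 - d_a) * (1 - d_t)      # detect if either channel detects
    return (1 + d_comb) / 2

def dprime_summation_2afc(p_a, p_t):
    """Predicted 2AFC percent correct if the unimodal d' values add in quadrature."""
    dp_a = 2 ** 0.5 * norm.ppf(p_a)         # d' from 2AFC percent correct
    dp_t = 2 ** 0.5 * norm.ppf(p_t)
    dp_comb = (dp_a ** 2 + dp_t ** 2) ** 0.5
    return norm.cdf(dp_comb / 2 ** 0.5)

# Example: unimodal levels set for roughly 70% correct, as in the study design
print(prob_summation_2afc(0.70, 0.70))      # about 0.82
print(dprime_summation_2afc(0.70, 0.70))    # about 0.77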