2014 Seminars

January 2014

HRC Seminar with Hari Bharadwaj January 17th

Hari Bharadwaj, Ph.D. Candidate, Biomedical Engineering, Auditory Neuroscience Laboratory

Title: Individual differences in supra-threshold auditory perception

HRC Seminar with Jennifer Groh January 24th

Jennifer M. Groh, Ph.D., Professor, Duke Institute for Brain Sciences

Title: When Worlds Collide: Different neural codes for visual and auditory space

Abstract: The visual and somatosensory systems use maps to encode stimulus location. But recent work in mammals has suggested that the auditory pathway encodes sound location not through a map but via a meter – neural firing rates that are proportional to sound angle with respect to the axis of the ears. In a meter, the level of neural activity serves as the chief indicator of where a sound is coming from. A meter is as different from a map as an analog code is from a digital one. Given the profound differences between the native visual and auditory codes, I will discuss how different coding formats “collide” when visual maps and auditory meters co-exist in the same structure, the primate superior colliculus. These differences have important implications for our understanding of the coordination between different sensory modalities.
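To make the map/meter contrast concrete, the sketch below (an illustration added here, not material from the talk) decodes the same azimuth from a place-coded population and from a single rate-coded pool; all tuning parameters are invented.

```python
# Illustrative sketch (not from the talk): decoding one azimuth from a "map"
# (place code) versus a "meter" (rate code). All tuning parameters are invented.
import numpy as np

true_azimuth = 30.0                                # sound direction in degrees

# Map: many units, each tuned to a preferred direction; location is read out
# from WHICH unit responds most strongly.
preferred = np.linspace(-90, 90, 37)               # preferred directions, 5-degree spacing
rates = np.exp(-0.5 * ((true_azimuth - preferred) / 15.0) ** 2)   # Gaussian tuning curves
map_estimate = preferred[np.argmax(rates)]

# Meter: one pool whose firing rate is proportional to azimuth; location is
# read out from HOW MUCH the pool responds.
baseline, slope = 50.0, 0.5                        # spikes/s and spikes/s per degree
meter_rate = baseline + slope * true_azimuth
meter_estimate = (meter_rate - baseline) / slope

print(map_estimate, meter_estimate)                # both recover ~30 degrees, via different codes
```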

HRC Seminar with Stephanie Lien January 31st

Stephanie Lien

Title: Objective Assessment of Vocal Hyperfunction

Abstract: Vocal hyperfunction is a voice condition characterized by excessive laryngeal and paralaryngeal tension, and it accounts for nearly half of the cases referred to multidisciplinary voice clinics. Current clinical assessment of vocal hyperfunction is subjective and demonstrates poor inter-rater reliability. Recent work indicates that a new acoustic measure, relative fundamental frequency (RFF), is sensitive to the maladaptive functional behaviors associated with vocal hyperfunction and can potentially be used to objectively characterize vocal hyperfunction. In this talk, I will discuss methods to optimize the current protocol for RFF estimation and present an algorithm that can automate RFF estimation. Validated methods for automated RFF estimation could allow for large-scale clinical studies of the efficacy of RFF for assessment of vocal hyperfunction and thus provide a non-invasive and readily implemented solution for this long-standing clinical issue.
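For readers unfamiliar with the measure, the following is a minimal sketch of the semitone normalization that underlies RFF, using hypothetical per-cycle F0 values; it is not the speaker's estimation algorithm.

```python
# Minimal sketch of the semitone normalization behind relative fundamental
# frequency (RFF); hypothetical per-cycle F0 values, not the speaker's algorithm.
import numpy as np

def rff_semitones(cycle_f0_hz, reference_f0_hz):
    """Express each vocal cycle's F0 in semitones relative to a steady-state reference."""
    return 12.0 * np.log2(np.asarray(cycle_f0_hz) / reference_f0_hz)

# Hypothetical F0 of the last few vocal cycles before a voiceless consonant,
# referenced to a 200 Hz steady-state portion of the preceding vowel.
offset_cycles_hz = [200.0, 198.0, 195.0, 190.0]
print(rff_semitones(offset_cycles_hz, reference_f0_hz=200.0))   # 0.00, -0.17, -0.44, -0.89 semitones
```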

 

February 2014

HRC Seminar with Satra Ghosh February 7th

 

HRC Seminar – ARO Practice Session February 14th

 

March 2014

HRC Seminar with David McAlpine March 6th

David McAlpine, Professor of Auditory Neuroscience and Director, UCL Ear Institute

Title: The range of interaural delays in the mammalian brain

HRC Seminar with David Mountain March 28th

David C. Mountain, Ph.D., Professor, Biomedical Engineering, Boston University

Title: Hearing Underwater: Structure and Function in the Cetacean Auditory System

 

April 2014

HRC Seminar with Heidi Nakajima April 4th

 

HRC Seminar with Peter Cariani April 11th

Peter Cariani, Ph.D., Senior Research Scientist, Hearing Research Center, Boston University

Title: Musical pitch: Neural codes, computations, conundrums

Abstract: Musical pitch is the low pitch heard at the fundamental frequency of a repetitive sound pattern. Musical pitch plays a central role in tonal music: precise temporal sequences of pitched events form melodies, pitch combinations form harmonies, and recent statistics of presented pitches create tonal centers and distance hierarchies. A number of basic musical pitch phenomena beg for explanation (octave similarity, musical interval recognition, melodic transposition, pitch stability of chords, harmonic tension/relaxation). Phase-locking in the auditory nerve produces interspike interval representations for pitch that encode the subharmonic structure of sounds (note fundamentals, chord fundamental basses). Neural processing of this subharmonic structure may provide a basis for explaining tonal consonance, pitch stability/multiplicity of chords, and musical interval recognition. Despite some recent significant advances, how the central auditory system ultimately utilizes peripheral spike timing correlations remains a deep mystery. Prospects for and problems with putative central codes and computations for musical pitch will be discussed, as well as possible research strategies for elucidating them.
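As a generic illustration of the interval-based idea (not the speaker's model), the sketch below recovers a 200 Hz pitch from the autocorrelation of a harmonic complex whose fundamental component is absent; the rectification step is only a crude stand-in for phase-locked firing.

```python
# Generic illustration (not the speaker's model) of how interval/autocorrelation
# analysis recovers a low pitch at the fundamental of a harmonic complex,
# even when the fundamental component itself is absent from the sound.
import numpy as np

fs = 20000                      # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
f0 = 200.0                      # fundamental of the complex (Hz)
signal = sum(np.sin(2 * np.pi * h * f0 * t) for h in (3, 4, 5))   # harmonics only, no 200 Hz component

rectified = np.maximum(signal, 0.0)            # crude stand-in for phase-locked firing
ac = np.correlate(rectified, rectified, mode="full")[len(rectified) - 1:]
lags = np.arange(len(ac)) / fs

# Look for the autocorrelation peak between 2 ms and 20 ms (pitch range ~50-500 Hz).
valid = (lags > 0.002) & (lags < 0.02)
period = lags[valid][np.argmax(ac[valid])]
print(1.0 / period)             # ~200 Hz: the "missing fundamental" pitch
```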

HRC Seminar with Elisabeth Glowatzki April 18th

Elisabeth Glowatzki, Associate Professor, Johns Hopkins School of Medicine, Department of Otolaryngology-Head and Neck Surgery

Title: Cellular mechanisms underlying auditory nerve fiber activity in the inner ear

HRC Seminar with Dr. Saúl Maté-Cid April 25th

Dr. Saúl Maté-Cid, University of Liverpool

Title: Psychophysical experiments on vibrotactile perception of musical pitch

Abstract: Previous vibrotactile research has provided few definitive results on the discrimination and identification of pitch aspects that are important for musical performance, such as relative and absolute pitch. Psychophysical experiments using participants with and without hearing impairments have been carried out to determine vibrotactile detection thresholds on the fingertip and foot, as well as to assess the perception of relative and absolute vibrotactile musical pitch. These experiments have investigated the possibilities and limitations of the vibrotactile mode for musical performance. Over the range of notes between C1 (32.7 Hz) and C6 (1046.5 Hz), no significant difference was found between the mean vibrotactile detection thresholds, in terms of displacement, for the fingertip of participants with normal hearing and those with severe/profound hearing impairments. These thresholds have been used to identify an optimum dynamic range, in terms of frequency-weighted acceleration, for safely presenting vibrotactile music. Assuming a practical level of stimulation 10 dB above the mean threshold, the dynamic range was found to vary between 12 and 27 dB over the three-octave range from C2 to C5. Results on the fingertip indicated that temporal cues such as the transient and continuous parts of notes are important when considering the perception of vibrotactile pitch at suprathreshold levels. No significant difference was found between participants with normal hearing and those with severe/profound hearing impairments in the discrimination of vibrotactile relative pitch from C3 to C5 using the fingertip without training. For participants with normal hearing, the mean percentage of correct responses in the post-training test was greater than 70% for intervals between four and twelve semitones using the fingertip and three to twelve semitones using the forefoot. Training improved correct responses for larger intervals on the fingertip and for smaller intervals on the forefoot. However, relative pitch discrimination for a single semitone was difficult, particularly with the fingertip. After training, participants with normal hearing improved significantly in the discrimination of relative pitch with the fingertip and forefoot. However, identifying relative and absolute pitch was considerably more demanding, and the training sessions that were used had no significant effect.
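For reference, the note frequencies and interval sizes cited above follow from standard equal temperament; the short example below reproduces them (added here for context, not part of the study's methods).

```python
# Small worked example (added for context, not from the study's methods):
# equal-tempered note frequencies and semitone intervals for the ranges cited above.
import math

def note_freq_hz(midi_number, a4_hz=440.0):
    """Equal-tempered frequency; MIDI note 69 = A4 = 440 Hz."""
    return a4_hz * 2.0 ** ((midi_number - 69) / 12.0)

def semitones_between(f_low_hz, f_high_hz):
    return 12.0 * math.log2(f_high_hz / f_low_hz)

C1, C2, C5, C6 = (note_freq_hz(m) for m in (24, 36, 72, 84))
print(round(C1, 1), round(C6, 1))            # ~32.7 Hz and ~1046.5 Hz, as cited
print(semitones_between(C2, C5))             # 36 semitones: the three-octave test range
```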

HRC Seminar with Li I. Zhang April 30th

 

May 2014

HRC Seminar with Rob Froemke May 30th

Robert Froemke, Ph.D., New York University

Title: Cortical plasticity improves sensory perception

 

September 2014

HRC Seminar with Kate Christison-Lagay September 12th

 

HRC Seminar with Ingrid Johnsrude September 26th

 

October 2014

HRC Seminar with Jose Pena October 3rd

 

HRC Seminar with Tony Ricci October 10th

 

HRC Seminar with Ying-Yee Kong October 17th

Ying-Yee Kong, Northeastern University

Title: Selective attention and neural entrainment to continuous speech

Abstract: Top-down attention plays an important role in auditory perception in complex listening environments. Recent electrophysiology studies have demonstrated cortical entrainment to the attended speech stream in a two-talker environment. In this talk, I will present our recent work on the effect of top-down selective attention on neural tracking of the speech envelope in different listening conditions. Specifically, I will focus on how spectral degradation affects neural responses to the attended and unattended speech streams, and on the relationship between cortical entrainment and speech intelligibility. Finally, I will share our preliminary data obtained from hearing-impaired listeners.
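A generic sketch of envelope-tracking analysis is shown below (not the speaker's pipeline): the speech envelope is extracted and cross-correlated with a neural channel across lags, with the neural signal simulated here as a delayed, noisy copy of the envelope.

```python
# Generic sketch of envelope-tracking analysis (not the speaker's pipeline):
# extract the speech envelope and cross-correlate it with a neural channel.
# Here the "EEG" is simulated as a delayed, noisy copy of the envelope.
import numpy as np
from scipy.signal import hilbert

fs = 100                                         # analysis sampling rate (Hz)
rng = np.random.default_rng(0)
speech = rng.standard_normal(60 * fs)            # stand-in for a speech waveform
envelope = np.abs(hilbert(speech))               # amplitude envelope via the Hilbert transform

delay = int(0.1 * fs)                            # simulated cortical lag of ~100 ms
eeg = np.roll(envelope, delay) + 0.5 * rng.standard_normal(envelope.size)

# Correlate at a range of lags; tracking strength peaks at the neural delay.
lags = np.arange(0, int(0.5 * fs))
r = []
for lag in lags:
    n = envelope.size - lag
    r.append(np.corrcoef(envelope[:n], eeg[lag:lag + n])[0, 1])
print(lags[int(np.argmax(r))] / fs)              # ~0.1 s, the simulated delay
```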
 

November 2014

HRC Seminar with Jonas Obleser November 7th

Dr. Jonas Obleser, Max Planck Institute for Human Cognitive and Brain Sciences
 

December 2014

HRC Seminar with Richard Mooney December 5th

Richard Mooney, Ph.D., Department of Neurobiology, Duke University School of Medicine

Title: Synaptic cross-talk at the auditory motor interface: roles in vocal learning and auditory cortical function

HRC Seminar with Jeffrey R. Holt December 12th

Jeffrey R. Holt, Ph.D., Associate Professor of Otology and Laryngology, Harvard University

Title: The contributions of TMC proteins to sensory transduction in auditory and vestibular hair cells

Abstract: Identification of the components of the sensory transduction channel in auditory and vestibular hair cells has eluded neuroscientists for many years. Recently, we have focused on Transmembrane channel-like proteins (TMC) 1 and 2 because they are necessary for sensory transduction in mammalian hair cells and may be components of the elusive transduction channel. Tmc1 and Tmc2 are expressed in hair cells and their protein products can be localized to the tips of hair cell stereocilia. Mice deficient in both Tmc1 and Tmc2 have complete loss of hearing and balance function and lack sensory transduction, despite intact hair bundles and tip-links (Kawashima et al., 2011). Mice that express only TMC1 or only TMC2 have hair cell transduction currents with distinct single-channel conductances and calcium selectivities. A methionine-to-lysine substitution at position 412 in TMC1 causes deafness and reduces the single-channel current amplitude and calcium permeability (Pan et al., 2013). These results are consistent with the hypothesis that TMCs participate as essential components of the sensory transduction channel in auditory and vestibular hair cells, but the exact role of TMC proteins is not yet clear. They may form a vestibule at the mouth of the pore, the pore of the ion channel itself, or both (Holt et al., 2014). However, these suggestions have proven controversial, and alternative hypotheses have been presented, including that TMCs may function as non-essential accessory subunits (Beurg et al., 2014). For this seminar I will present the latest data from our lab and others and will discuss their implications for auditory and vestibular function and the potential for gene therapy to restore function in mouse models of genetic deafness.
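For orientation, single-channel conductance is related to single-channel current and driving force by the standard ohmic relation; the sketch below uses hypothetical numbers, not values from the studies cited above.

```python
# Worked example of the ohmic relation used for single-channel data
# (numbers are hypothetical, not values from the studies cited above):
# conductance g = i / (Vm - Erev).
i_single = -12e-12      # single-channel current: -12 pA (inward)
v_membrane = -0.080     # holding potential: -80 mV
e_reversal = 0.0        # transduction current reverses near 0 mV
g_single = i_single / (v_membrane - e_reversal)
print(g_single * 1e12)  # 150 pS; a smaller current at the same voltage implies a lower conductance
```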