Seminar Archive


January 2018

HRC Seminar with Luke Baltzell January 19th

Luke Baltzell, Department of Cognitive Science, University of California, Irvine

Title: The role of cortical entrainment in speech perception: some considerations

Abstract: It has been suggested that the cortical entrainment response reflects phase-resetting neuronal oscillations that track speech information. However, the extent to which the entrainment response reflects acoustic rather than linguistic features of the speech stimulus remains unclear, as does the neural representation of the speech stimulus being tracked. We present evidence that the entrainment response tracks acoustic rather than linguistic information, and that it does so within peripheral auditory channels.

HRC Seminar with Nace Golding January 26th

Nace Golding, University of Texas at Austin

Title: Beyond Jeffress: New Insights into the Sound Localization Circuitry in the Medial Superior Olive

Abstract: The circuitry in the medial superior olive (MSO) of mammals extracts azimuthal information from the interaural time differences (ITDs) of sounds arriving at the two ears. For the past 70 years, models of sound localization have assumed that MSO neurons represent a single population of cells with homogeneous properties. Here I will discuss new data showing that MSO neurons are in fact physiologically diverse, with properties that depend on cell position along the topographic map of frequency. In many neurons, high-frequency firing is promoted via fast subthreshold membrane oscillations. We propose that differences in these and other physiological properties across the MSO neuron population enable the MSO to duplex the encoding of ITD information in fast, sub-millisecond time-varying signals as well as in slower envelopes.
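
For context on the timescale involved (an illustrative aside, not material from the talk): a standard back-of-envelope estimate, the Woodworth spherical-head model, approximates the ITD of a source at azimuth θ as ITD(θ) ≈ (r / c)(θ + sin θ), where r is the head radius and c is the speed of sound. Assuming a typical adult human head radius of r = 0.0875 m and c = 343 m/s, a source directly to one side (θ = π/2) gives ITD ≈ (0.0875/343)(π/2 + 1) ≈ 0.66 ms, so the entire usable range of ITDs spans less than a millisecond, which is why sub-millisecond temporal precision in the MSO is essential.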


February 2018

HRC Seminar with Matt McGinley February 2nd

Matt McGinley, Baylor College of Medicine

Title: Pupil-indexed neuromodulation of brain state and cognition

Abstract: Moment-to-moment changes in the state of the brain powerfully influence cognitive processes such as perception and decision-making. For example, during a research seminar we may attend closely to the speaker, drift nearly to sleep, and then rouse rapidly and flee the room when a fire alarm sounds. Failure to notice the same alarm during deep sleep could have tragic consequences. The McGinley lab seeks to understand how these shifts in internal brain states, such as arousal and attention, shape our perception and actions. Brain state is powerfully controlled by the brain-wide release of neuromodulatory transmitters such as acetylcholine and norepinephrine. In addition to controlling brain state, these modulatory systems exert temporally precise control over the cerebral cortex to guide effective learning and decision-making. Our research aims to understand the natural cellular-, synaptic-, and circuit-level physiologic mechanisms by which neuromodulation of the cortex shapes cognition. We use the pupil as a proxy for neuromodulatory brain state, and we train mice in psychometric, value-based decision-making tasks. To dissect the underlying brain circuits, we conduct two-photon imaging, optogenetics, whole-cell recording, extracellular recording, and pharmacology, all during behavior. We also seek to develop closed-loop electrical interventions to treat related disorders, using novel biosensors and brain stimulation devices.


March 2018

HRC Seminar with Yue Sun March 23rd – Cancelled

Yue Sun, Max Planck Institute


HRC Seminar with Amanda Griffin March 30th

Amanda Griffin, Boston Children’s Hospital at Waltham

Title: Effects of Pediatric Unilateral Hearing Loss on Speech Recognition, Auditory Comprehension, and Quality of Life

Abstract: A growing body of research is challenging long-held assumptions that pediatric unilateral hearing loss (UHL) has minimal detrimental effects on children’s development. It is now well understood that children with UHL are at risk for speech and language delays, psychosocial issues, and academic underachievement. Despite this recognition, audiological service provision in this population has suffered from insufficient evidence of objective benefit from the variety of interventions that are available. Relatively few studies have expressly focused on understanding the variability in auditory abilities within this special population, which is imperative to inform intervention strategies. The current talk will briefly review the existing literature on global outcomes and then focus on newer auditory research exploring the effects of UHL on masked sentence recognition in a variety of target/masker spatial configurations, auditory comprehension in quiet and in noise, and hearing-related quality of life in school-aged children.


April 2018

HRC Seminar with Lauren Calandruccio April 6th

Lauren Calandruccio, Case Western Reserve University

Title: Speech-on-speech masking: Properties of the masker speech that change its effectiveness

Abstract: In this lecture, I will present two data sets evaluating sentence recognition in the presence of competing speech maskers, focusing on the importance of who is talking in the background and what they are saying. In the first set of experiments, we will assess whether one of the two talkers within the masker speech dominates the masker’s overall effectiveness. In the second set of experiments, we will explore whether the semantic meaning of the masker speech matters when controlling for syntax, lexical content, and the talker’s voice.

HRC Seminar with Matthew Masapollo April 13th

Matthew Masapollo, Boston University

Title: Speech Perception in Adults and Infants: Some Universal Characteristics and Constraints

Abstract: A fundamental issue in the field of speech perception is how perceivers map the input speech signal onto the phonetic categories of their native language. Over the years, considerable research has focused on addressing how the nature of the mapping between acoustic and phonetic structures changes with linguistic experience over the course of development. This emphasis on exploring what is language-specific as opposed to what is universal in the speech categorization process derived in part from research with adults, infants, and non-human primates on the well-studied phenomenon called the “perceptual magnet effect” (Kuhl, 1991), which revealed that early linguistic experience functionally alters perception by decreasing discrimination sensitivity near native phonetic category prototypes and increasing sensitivity near boundaries between categories. However, there is now growing evidence that young infants reared in different linguistic communities initially display universal perceptual biases that guide and constrain how they learn to parse phonetic space, and that these biases continue to operate in adult language users independently of language-specific prototype categorization processes. Recent findings on this issue, which are summarized in this talk, suggest that the categorization processes that map the speech signal onto categorical phonetic representations are shaped by a complex interplay between initial, universal biases and experiential influences.

HRC Seminar with Alexandra Jesse April 20th

Alexandra Jesse, University of Massachusetts

Title: Learning about speaker idiosyncrasies in audiovisual speech

Abstract: Seeing a speaker typically improves speech perception, especially in adverse conditions. Audiovisual speech is more robustly recognized than auditory speech, since visual speech assists recognition by contributing information that is redundant and complementary to the information obtained from auditory speech. The realization of phonemes varies, however, across speakers, and listeners are sensitive to this variation in both auditory and visual speech during speech recognition. But listeners are also sensitive to consistency in articulation within a speaker. When an idiosyncratic articulation renders a sound ambiguous, listeners use available disambiguating information, such as lexical knowledge or visual speech information, to adjust the boundaries of their auditory phonetic categories to incorporate the speech sound into the intended category. This facilitates future recognition of the sound. For visual speech to best aid recognition, listeners likewise have to flexibly adjust their visual phonetic categories to speakers. In this talk, I will present work showing how lexical knowledge and speech information can both assist the retuning of phonetic categories to speakers, and how these processes seem to rely on attentional resources. Furthermore, I will present work showing that listeners rapidly form identity representations of unfamiliar speakers’ facial motion signatures, which subserve talker recognition but may also aid speech perception.

HRC Seminar with Bharath Chandrasekaran April 27th

Bharath Chandrasekaran, University of Texas at Austin

Title: Cognitive-sensory influences on the subcortical representation of speech signals

Abstract: Scalp-recorded electrophysiological responses to complex, periodic auditory signals reflect phase-locked activity from neural ensembles within the subcortical auditory system. These responses, referred to as frequency-following responses (FFRs), have been widely utilized to index typical and atypical representation of speech signals in the auditory system. In this talk, I will discuss two studies from my lab that evaluated cognitive-sensory interactions in the subcortical representation of speech features. In one study (Xie et al., in revision), we used novel machine learning metrics to demonstrate the influence of cross-modal attention on the neural encoding of speech signals. We found that the relationship between visual attention and subcortical auditory processing is highly contingent on the predictability of incoming auditory streams: when attention is disengaged from the auditory system to process visual signals, subcortical auditory representation is enhanced when stimulus presentation is less predictable. We posit that, when attentional resources are allocated to the visual domain, a reduction in top-down auditory cortical control gears the subcortical auditory system towards novelty detection. In a second study (Reetzke et al., submitted), we examined the impact of long-term sound-to-category training on the subcortical representation of speech signals. We trained English-speaking adults on a non-native contrast (Mandarin tones) using a sound-to-category training task for > 4,400 trials over ~17 consecutive days. Each subject was monitored from a novice to an experienced stage of performance, defined as maintaining the target criterion (90% accuracy, the level achieved by native Mandarin speakers) for three consecutive days. Subjects were then over-trained for ten additional days to stabilize and automatize behavior. To assay neural plasticity, we recorded FFRs to the four Mandarin tones at various learning stages. Our results show that English-speaking adults can become as accurate and as fast at categorizing non-native Mandarin speech sounds as native Chinese adults. Learners were also able to generalize to novel stimuli and to demonstrate categorical perception of a tone continuum equivalent to that of native speakers. Notably, robust changes in neurophysiological responses to Mandarin tones emerge after the behavior is stabilized, and this neural plasticity, along with the behavior, is retained after two months without training. I will discuss the results of these two studies within the context of the predictive tuning model of auditory plasticity (Chandrasekaran et al., 2014).