Friday, November 2, 2018 | Session B, Conference Auditorium | 11:30am

Referential cues and novel sign learning in young deaf children
A. Lieberman, A. Borovsky

The social and referential cues that accompany early language input are key to helping young children map new words to their referents (e.g. Baldwin, 1993; Booth, McGregor, & Rohlfing, 2008). For hearing children, the association between language and its referents is made through simultaneous and multi-modal perception; language input is perceived through the auditory mode, while objects and referential cues are typically perceived through the visual mode. In contrast, deaf children acquiring a sign language such as American Sign Language (ASL) perceive both linguistic and non-linguistic information through the visual mode. Thus, in order to integrate language input with attention to objects in their environment, deaf children must learn how to optimally alternate visual attention between their conversational partners and the surrounding visual world. Currently, little is known about the referential cues that support word learning in deaf children under these unique perceptual conditions.

In the current study, we developed an eye-tracking paradigm to investigate how young deaf children use social and referential cues to learn novel signed words. Participants were 24 deaf or hard-of-hearing children between the ages of 18 and 60 months. In the exposure phase, children were introduced to six novel signs (two per condition). We varied the timing of referential cues (simultaneous gaze shifts and points) with respect to the signed label. Specifically, participants saw novel signs and novel objects along with a referential cue that occurred either before (Point-Sign) or after (Sign-Point) the signed label, or not at all (No Cue). We subsequently assessed novel sign recognition in test trials in which two previously labeled novel objects were presented together and one was labeled. Novel object trials were interleaved with familiar object trials (Figure 1).

In the exposure phase, children spent more time fixating the object in the Point-Sign condition than in the Sign-Point condition (p = .03) and looked longer at novel objects than at familiar objects (p = .03). In the test phase, children reliably mapped the novel sign to the novel object. In familiar sign trials, children looked to the target image more than the distractor image beginning shortly after sign onset. In the novel sign trials, children also looked to the target image more than the distractor image, although they took more time to arrive at the target and directed fewer fixations overall to the target than in the familiar sign trials (Figure 2). There were also differences in target looks based on the condition in which children had learned the novel sign. Surprisingly, signs learned in the No Cue condition appeared to elicit more looks to target in the test phase than signs learned in either of the cued conditions (Figure 3). This suggests that children’s self-directed gaze alternation may be more effective than input that provides explicit timed cues about when to shift gaze between an object and its label. This study is a first step in exploring deaf children’s ability to use referential cues to guide looking behavior and learn new words when all input occurs in the visual modality.