SignStream: A Multimedia Tool for Language Research
Although some research on ASL syntax has been done, the field still lacks consensus on even the most basic questions regarding ASL phrase structure. Many theoretical controversies may, in fact, be a consequence of inadequate means for written transcription and reporting of visual-gestural data. This has precluded both the replicability of results and the accessibility of the raw data for direct inspection by the scientific community. The representation of ASL signs using English-like glosses has obscured, and implicitly undervalued, extremely important manual and non-manual information expressed through signing. Glosses are inconsistent and frequently ambiguous or misleading, and reliance on them may well have given rise to the incompatible and contradictory theoretical claims found in the literature. Thus, in conjunction with our syntactic research, we are currently developing a linguistically oriented computerized multimedia database of ASL that allows simultaneous access to raw video data and to representations of that data in linguistically useful formats.
SignStream software (see this site for more information about the program and to download the application) facilitates the linguistic analysis of visual language data, allowing for the display of multiple synchronized video files (and a waveform for an associated sound file). Linguistic annotations are organized by a coding schema that can be tailored to particular annotation requirements, and they are displayed in multiple parallel fields, providing a visual display of the temporal relationships among events occurring in parallel on the hands and on the face and upper body. The current SignStream release is for MacOS, but a Java version with many additional features is currently under development.
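To illustrate the general idea of parallel, time-aligned annotation fields described above, here is a minimal sketch in Python. The field names, labels, and frame values are purely hypothetical and do not reflect SignStream's actual data format; the sketch only shows how events in separate tiers can be compared by temporal overlap, e.g. to find which manual glosses co-occur with a non-manual marker.

```python
from dataclasses import dataclass

# Hypothetical sketch of parallel annotation fields, each holding
# time-aligned events. Names and values below are illustrative only.

@dataclass
class Event:
    start: int   # start time, in video frames
    end: int     # end time, in video frames (inclusive)
    value: str   # annotation label for this interval

def overlaps(a: Event, b: Event) -> bool:
    """True if the two events share at least one frame."""
    return a.start <= b.end and b.start <= a.end

def co_occurring(field_a, field_b):
    """Pairs of events from two parallel fields that overlap in time."""
    return [(a, b) for a in field_a for b in field_b if overlaps(a, b)]

# Two parallel fields: manual glosses and a non-manual marker tier.
glosses = [Event(0, 14, "JOHN"), Event(15, 32, "BUY"), Event(33, 50, "HOUSE")]
nonmanual = [Event(10, 40, "brow-raise")]

pairs = co_occurring(glosses, nonmanual)
# The brow-raise (frames 10-40) overlaps all three glosses here.
```

A real annotation tool would of course track many more fields (eye gaze, head tilt, mouth gestures, etc.) against the same timeline; the point is only that interval overlap across tiers is what makes the temporal relationships among parallel events inspectable.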
A large amount of annotated video data is publicly available for use by linguists, language learners, computer scientists, and others. For information about available data, see the site for the National Center for Sign Language and Gesture Resources.
Supported by grants from the National Science Foundation.