Boston University
American Sign Language Linguistic Research Project


SignStream

SignStream™ is a database tool for the analysis of linguistic data captured on video. Although SignStream is being designed specifically for working with data from American Sign Language, the tool can be applied to any kind of language data captured on video. In particular, SignStream is well suited to the study of other signed languages, as well as to studies of the gestural component of spoken languages.

A SignStream database consists of a collection of utterances, where each utterance associates a segment of video with a detailed transcription of that video. SignStream provides a single environment for manipulating digital video and for linking specific frame sequences to the simultaneously occurring linguistic events encoded in a fine-grained, multi-level transcription. Items from different fields are vertically aligned to reflect their temporal relations. Not only does SignStream greatly simplify the transcription process and increase the accuracy of transcriptions (by virtue of the direct linking of linguistic events to video frames), but it also enhances the researcher's ability to perform linguistic analyses of various kinds. By providing sophisticated search capabilities, SignStream affords instant access to the coded data. In addition, multiple utterances can be open at the same time, permitting side-by-side comparison of data.
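
To make this organization concrete, the sketch below (in Python) shows one way that utterances, fields, and time-aligned items could be modeled, along with a simple search over them. It is a minimal illustration only: the names Item, Field, Utterance, co_occurring, and search are hypothetical and do not reflect SignStream's actual classes or file format.

from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for illustration; not SignStream's internal format.

@dataclass
class Item:
    """One coded linguistic event, linked to a span of video frames."""
    value: str        # e.g. a gloss, or a non-manual value such as "raised brows"
    start_frame: int  # first video frame of the event
    end_frame: int    # last video frame of the event

@dataclass
class Field:
    """One tier of the multi-level transcription (e.g. main gloss, eye brows)."""
    name: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Utterance:
    """A segment of video paired with its fine-grained, time-aligned transcription."""
    video_file: str
    start_frame: int
    end_frame: int
    fields: List[Field] = field(default_factory=list)

    def co_occurring(self, field_name: str, frame: int) -> List[Item]:
        """Return the items in the named field whose frame span covers `frame`,
        i.e. the events simultaneous with that moment of video."""
        for f in self.fields:
            if f.name == field_name:
                return [i for i in f.items if i.start_frame <= frame <= i.end_frame]
        return []

def search(utterances: List[Utterance], field_name: str, value: str) -> List[Utterance]:
    """Find every utterance containing an item with the given value in the given field."""
    return [u for u in utterances
            if any(i.value == value
                   for f in u.fields if f.name == field_name
                   for i in f.items)]

For example, a call such as search(db, "main gloss", "IX-3p") would return every utterance whose main gloss field contains an item with that value, which is the kind of query the search capabilities described above are meant to support.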

One goal of the SignStream project is to develop a large database of coded American Sign Language utterances.

SignStream is part of the American Sign Language Linguistic Research Project and is supported by the National Science Foundation. Further information can be found on these pages, and additional questions about the project may be addressed to Carol Neidle at carol@bu.edu.