Goals
This project makes available several different types of experimental resources and analyzed data to facilitate linguistic and computational research on signed languages and the gestural components of spoken languages.
Two dedicated facilities for the collection of video-based language data were established, one at Boston University and one at Rutgers University, each equipped with multiple synchronized digital cameras to capture different views of the subject.
A substantial corpus of American Sign Language (ASL) video data from native signers is being collected and made available. Data collection began in December 1999. The video data are being made available in both uncompressed and compressed formats.
Significant portions of the collected data are also being linguistically annotated using SignStream® (a program currently under development). The SignStream® databases are being made publicly available, as is the SignStream® application itself. (SignStream® was originally a MacOS Classic application; SignStream® 3 is a Java reimplementation. The data can be exported in text format for use on other platforms.)
The video data are also being analyzed by various computer algorithms. The SignStream® annotations of the data provide "ground truth" for evaluating such algorithms.
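To illustrate how ground-truth annotations can be used in such an evaluation, here is a minimal sketch in Python. It assumes a simplified interval representation (start frame, end frame, gloss label); this is a hypothetical illustration, not the actual SignStream® export format, and the example data are invented.

from typing import List, Tuple

# Each annotation is (start_frame, end_frame, gloss); format is assumed, not SignStream's.
Interval = Tuple[int, int, str]

def frame_labels(intervals: List[Interval], num_frames: int) -> List[str]:
    """Expand interval annotations into one gloss label per video frame."""
    labels = ["<none>"] * num_frames
    for start, end, gloss in intervals:
        for frame in range(start, min(end + 1, num_frames)):
            labels[frame] = gloss
    return labels

def frame_accuracy(truth: List[Interval], predicted: List[Interval], num_frames: int) -> float:
    """Fraction of frames where the algorithm's output matches the annotation."""
    gt = frame_labels(truth, num_frames)
    pred = frame_labels(predicted, num_frames)
    matches = sum(1 for g, p in zip(gt, pred) if g == p)
    return matches / num_frames

if __name__ == "__main__":
    ground_truth = [(0, 29, "BOOK"), (30, 59, "GIVE")]   # invented annotation
    algorithm_out = [(0, 24, "BOOK"), (25, 59, "GIVE")]  # invented recognizer output
    print(f"Frame-level accuracy: {frame_accuracy(ground_truth, algorithm_out, 60):.2f}")

In practice, evaluation criteria vary by task (e.g., sign recognition versus tracking of hands or facial features); the point is only that the linguistic annotations supply the reference against which algorithm output is scored.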
Data are being shared through our Data Access Interface (DAI: http://secrets.rutgers.edu/dai/queryPages/).
**********
A new DAI, and new corpora, are now available at http://dai.cs.rutgers.edu. Additional information about DAI 2 is also available.
**********
Thus, this project makes available sophisticated facilities for data collection, standardized protocols for such collection, and large amounts of language data.