National Center for Sign Language and Gesture Resources (NCSLGR) Corpus
IMPORTANT: To access the DAI, pop-ups must be enabled in your browser for the site to display the videos properly.
Available data
The National Center for Sign Language and Gesture Resources (NCSLGR) Corpus consists of linguistically annotated ASL data (continuous signing), with multiple synchronized video files showing views from different angles and a close-up of the face, as well as associated linguistic annotations available as XML.
The data can be browsed from the ASLLRP DAI (Data Access Interface) at http://secrets.rutgers.edu/dai/queryPages/.
For further information about the data collection and annotation, which were carried out at Boston University under the supervision of Carol Neidle and Stan Sclaroff, and about development of the Web interface (now being developed under the supervision of Christian Vogler), see http://www.bu.edu/asllrp/data-credits.html.
Access to our American Sign Language Lexicon Video Dataset (ASLLVD) is available here: http://secrets.rutgers.edu/dai/queryPages/search/search.php.
Subjects and Language Samples
Most of these data are from four native signers of ASL.
This dataset includes 1,866 distinct canonical signs (i.e., grouping together very slight variants in production). The total number of sign tokens is 11,854.
Restricting consideration to signs other than gestures and classifiers, there are 1,278 distinct canonical signs, and a total of 10,719 tokens.
1,002 of the utterances in this collection come from 19 short spontaneous narratives. The remaining 885 utterances were elicited to illustrate a variety of constructions and sentence types.
Video Files
Synchronized videos showing some or all of the following views are available, in compressed and uncompressed formats: a close-up view of the face; two frontal (stereoscopic) views; and a side view.
For information about downloading specific video files, see the charts (prepared by Christian Vogler) that can be downloaded here.
This is a zip archive containing an index of all the videos, which should give researchers all the information they need to find and download the movie files corresponding to a particular annotation.
There are two versions: an index by the video file names provided in the SignStream annotations/XML files, and an index by XML file name and utterance id.
Each index is provided in plain-text (CSV) format and as an Excel workbook.
Please note that some of the compressed files in the index play at half-speed (as on the CD-ROMs that were distributed for some of the stories). We do have compressed video files that play at the correct speed; they will be linked into this chart soon.
Unfortunately, not all views are available in all formats. Some of this data was lost as a result of hardware failures. Everything that is available is listed in these charts.
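For example, the CSV version of the utterance-id index can be queried with a few lines of Python using the standard csv module. The sketch below is only illustrative: the file name and column headers used here (xml_file, utterance_id, video_file) are hypothetical placeholders and should be replaced with the actual headers found in the downloaded chart.

import csv

# Hypothetical index file and column names; adjust them to match the
# headers of the chart actually downloaded from this page.
INDEX_FILE = "ncslgr_index_by_utterance.csv"

def find_videos(xml_name, utterance_id):
    # Return every video file listed for the given XML file and utterance id.
    matches = []
    with open(INDEX_FILE, newline="") as f:
        for row in csv.DictReader(f):
            if row["xml_file"] == xml_name and row["utterance_id"] == str(utterance_id):
                matches.append(row["video_file"])
    return matches

# Example lookup (the file name and utterance id here are made up):
print(find_videos("example_story.xml", 3))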
Linguistic Annotations
The linguistic annotations include the start and end frames for manual signs (represented by unique gloss labels) as well as non-manual behaviors, both anatomical (e.g., raised eyebrows, squinted eyes) and functional (e.g., wh-questions, negation). These are not yet searchable through the DAI, but are linguistically annotated for the entire data set.
These annotations are available in a variety of formats; if you're interested in the XML versions, you can jump to that information now.
SignStream® 2.2.2 files - ca. 2007
These data were annotated using SignStream® 2.2.2 (which runs only as a Classic application on older Macintosh computers). A new Java reimplementation of SignStream is expected to be released in 2016.
The original data set was released on CD-ROM (ca. 2007); these CD-ROMs are no longer available.
SignStream® 2.2.2 files - ca. 2011
In 2011, an updated set of SignStream annotations was produced. The revisions involved corrections and better enforcement of consistency with respect to the goal of having one-to-one correspondences between signs and gloss labels. The set of new SignStream files (which still require SignStream 2.2.2 for access) can be downloaded from here; they reference, and will access, the same movie files that were distributed on the original CD-ROMs.
XML files
The linguistic annotations from 2011 are also available in XML format; the XML files corresponding to the data available for browsing on the DAI (http://secrets.rutgers.edu/dai/queryPages/) can be downloaded from here: http://secrets.rutgers.edu/dai/xml/ncslgr-xml.zip
- The Coding Scheme, contained within the XML files, lists Fields and possible Values for each.
- The Media Files section contains a list of referenced video files.
- The coded data is in the Utterances section.
- Every utterance has at least one segment. (In the current corpus, there is a single segment per utterance.) Every segment contains a set of Tracks. A track's FID is a reference to a field in the Coding Scheme section. Every Track has a list of A elements, each with start and end times and a VID. The start and end times are expressed as "movie times"; the VID is a reference to a value of the field with id equal to FID. Some fields (e.g., the text fields used for glosses) contain text instead of VIDs. (A minimal parsing sketch follows below.)
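The following Python sketch illustrates one way to walk that structure with the standard library's ElementTree. The element and attribute names used here (UTTERANCE, SEGMENT, TRACK, A, FID, VID, and S/E for start and end movie times) are assumptions based on the description above, not a specification; check an actual XML file from the archive and rename accordingly.

import xml.etree.ElementTree as ET

# File name and element/attribute names are assumptions; verify them against
# a real NCSLGR XML file before relying on them.
root = ET.parse("example_story.xml").getroot()

for utt in root.iter("UTTERANCE"):
    # Utterance-level start/end movie times (milliseconds).
    print("utterance", utt.get("ID"), utt.get("S"), utt.get("E"))
    for seg in utt.iter("SEGMENT"):
        for track in seg.iter("TRACK"):
            fid = track.get("FID")  # reference to a field in the Coding Scheme
            for a in track.iter("A"):
                # Each A element has start/end movie times and either a VID
                # (a reference to a value of that field) or literal text (e.g., a gloss).
                label = a.get("VID") if a.get("VID") is not None else (a.text or "")
                print("  ", fid, a.get("S"), a.get("E"), label)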
With respect to converting between movie times and frame numbers: dividing the QuickTime video time code by 1000 gives the time in seconds, and multiplying by the frame rate (assuming 30 fps) then gives the frame number. For a sentence:

sentenceStartTime = float(utterance.start) / 1000.0
sentenceEndTime = float(utterance.end) / 1000.0
sentenceStartFrame = sentenceStartTime * 30.0
sentenceEndFrame = sentenceEndTime * 30.0
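Where this arithmetic is needed repeatedly, it can be wrapped in a small helper; this is just a convenience sketch, with the 30 fps frame rate kept as an explicit assumption.

def movie_time_to_frame(movie_time_ms, fps=30.0):
    # Movie times are in milliseconds: divide by 1000 for seconds,
    # then multiply by the frame rate (assumed to be 30 fps here).
    return int(round((movie_time_ms / 1000.0) * fps))

# e.g., an annotation spanning movie times 2500-4000 ms
# maps to frames 75-120 at 30 fps.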
Bonus (courtesy of Christian Vogler): Here is a Zip archive with the Python code for parsing SignStream XML files. This package contains everything needed to read a SignStream XML file into memory, and to query and manipulate the annotations contained therein. No installation is required; just place the analysis directory in the same directory as your main script, or in some other place where Python looks for packages. The README file contained within this archive contains further information.
Annotation conventions
The annotation conventions are documented in reports 11 and 13 available from this site: http://www.bu.edu/asllrp/reports.html . An update reflecting a few changes that have been made as part of the latest revisions is currently in production, and will be available by the end of 2012.
Related publications
Carol Neidle and Christian Vogler (2012) "A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface." The 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey, May 27, 2012.
Philippe Dreuw, Carol Neidle, Vassilis Athitsos, Stan Sclaroff, and Hermann Ney (2008) "Benchmark Databases for Video-Based Automatic Sign Language Recognition." The Sixth International Conference on Language Resources and Evaluation (LREC), Marrakech, Morocco, May 2008.
Neidle, C. (2002) "SignStream™: A Database Tool for Research on Visual-Gestural Language." In Brita Bergman, Penny Boyes-Braem, Thomas Hanke, and Elena Pizzuto, eds., Sign Transcription and Database Storage of Sign Information, a special issue of Sign Language and Linguistics 4 (2001):1/2, pp. 203-214.
Neidle, C., S. Sclaroff, and V. Athitsos (2001) "SignStream™: A Tool for Linguistic and Computer Vision Research on Visual-Gestural Language Data." Behavior Research Methods, Instruments, and Computers 33:3, pp. 311-320.
Related data set
The American Sign Language Lexicon Video Dataset (ASLLVD) http://www.bu.edu/av/asllrp/dai-asllvd.html