The ASLLRP includes:

    • investigation of the syntactic structure of American Sign Language, and the relationship of syntax to semantics and prosody;

    • development of multimedia tools to facilitate access to and analysis of primary data for sign language research;

    • collaboration with computer scientists interested in problems involved in computer-based recognition and generation of signed languages.

These projects have been funded by the National Science Foundation. For further details, see the menu to the left.

Data Available from various related projects

Terms of use for ASLLRP data

The data available from these pages may be used for research and educational purposes, but may not be redistributed without permission.

Commercial use without explicit permission is not allowed, nor is the filing of patents or copyright claims based on this material.

Those making use of these data must cite, in resulting publications or presentations, the National Center for Sign Language and Gesture Resources (NCSLGR) Corpus and this publication:

Carol Neidle and Christian Vogler (2012) "A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface," Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey.

and also include the following URLs: and

By accessing data from this site, you agree to the above terms of use.

(1) ASLLRP DAI (Data Access Interface) - Web access to the National Center for Sign Language and Gesture Resources (NCSLGR) corpus: linguistically annotated ASL data (continuous signing), with multiple synchronized video files showing views from different angles plus a close-up of the face, and with linguistic annotations available as XML.
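Because the annotations are distributed as XML, they can be processed with standard tools. The sketch below, using Python's standard library, is purely illustrative: the element and attribute names (CORPUS, UTTERANCE, SIGN, gloss, start, end) are hypothetical placeholders, not the actual NCSLGR schema, which should be consulted before writing any real processing code.

```python
# Illustrative sketch: extracting gloss annotations from an XML export.
# NOTE: element/attribute names here (CORPUS, UTTERANCE, SIGN, gloss,
# start, end) are hypothetical; consult the actual NCSLGR XML schema.
import xml.etree.ElementTree as ET

SAMPLE = """<CORPUS>
  <UTTERANCE id="u1">
    <SIGN gloss="JOHN" start="0" end="330"/>
    <SIGN gloss="LOVE" start="340" end="610"/>
    <SIGN gloss="MARY" start="620" end="900"/>
  </UTTERANCE>
</CORPUS>"""

def glosses(xml_text):
    """Return (gloss, start, end) tuples for every SIGN element."""
    root = ET.fromstring(xml_text)
    return [(s.get("gloss"), int(s.get("start")), int(s.get("end")))
            for s in root.iter("SIGN")]

print(glosses(SAMPLE))
```

Real annotation files would of course be read from disk (e.g. with `ET.parse`) rather than from an inline string.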

Information about the data collection and annotation, and about development of the Web interface

Annotation conventions are documented in the following two reports; an updated version is forthcoming in Spring 2012:

C. Neidle (2002) "SignStream™ Annotation: Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 11, Boston University, Boston, MA.

C. Neidle (2007), "SignStream™ Annotation: Addendum to Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 13, Boston University, Boston, MA.

Coming soon: 2012 Annotation Conventions

The data collection listed in (1) was created using SignStream® 2.2.2 (which runs as a Classic application on older Macintosh systems) for linguistic annotation. CD-ROMs containing the SignStream files with linguistic annotations and the associated files are also available: . (A new Java reimplementation of SignStream®, with many new features, is currently under development and scheduled for beta release by Spring 2012.)

C. Neidle (2002) "SignStream™: A Database Tool for Research on Visual-Gestural Language." In Brita Bergman, Penny Boyes-Braem, Thomas Hanke, and Elena Pizzuto, eds., Sign Transcription and Database Storage of Sign Information, a special issue of Sign Language and Linguistics 4 (2001):1/2, pp. 203-214.

D. MacLaughlin, C. Neidle, and D. Greenfield (2000) "SignStream™ User's Guide." American Sign Language Linguistic Research Project, Report 9, Boston University, Boston, MA.

Pending completion of the DAI download capabilities, see this page for access to complete sets of materials from the NCSLGR corpus (videos and annotations): .

(3) Additional data will also be available soon from the American Sign Language Lexicon Video Dataset (ASLLVD), a collection of over 3,000 signs based largely on the entries in the Gallaudet Dictionary of American Sign Language (video files from which were used as stimuli to elicit data from between 1 and 6 native signers for each entry; additional signs have been added to our collection). The data collection and linguistic annotations (including labels for start and end handshapes) were carried out at Boston University. This research project is a collaboration between Boston University (PIs Stan Sclaroff and Carol Neidle; PhD students Ashwin Thangali and Joan Nash) and the University of Texas, Arlington (Vassilis Athitsos, PI). Reports on this project:

V. Athitsos, C. Neidle, S. Sclaroff, J. Nash, A. Stefan, Q. Yuan, and A. Thangali (2008) "The ASL Lexicon Video Dataset," CVPR 2008 Workshop on Human Communicative Behaviour Analysis (CVPR4HB'08).

H. Wang, A. Stefan, S. Moradi, V. Athitsos, C. Neidle, and F. Kamanga (2010) "A System for Large Vocabulary Sign Search," Proceedings of the Workshop on Sign, Gesture and Activity (SGA), September 2010.

Ashwin Thangali, Joan P. Nash, Stan Sclaroff, and Carol Neidle (2011) "Exploiting Phonological Constraints for Handshape Inference in ASL Video," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011 (to appear).

Information will be forthcoming about access to the video files and linguistic annotations for the complete set. Note that the gloss labels used are consistent with those found in the data set listed in (1). Handshape labels follow the conventions shown in (4) below.

(4) Handshapes (with videos showing multiple angles of the hands in motion) and our labeling conventions for handshapes: