The ASLLRP includes:
A new version of our Data Access Interface (DAI 2) and a large new collection of linguistically annotated data will also be released in late Summer 2017.
The data available from these pages can be used for research and education purposes, but cannot be redistributed without permission.
Commercial use without explicit permission is not allowed, nor may any patents or copyrights be based on this material.
Those making use of these data must cite, in resulting publications or presentations, the National Center for Sign Language and Gesture Resources (NCSLGR) Corpus and this publication:
(1) Web Access to Linguistically Annotated Corpora: the ASLLRP DAI (Data Access Interface)
A. Web access to the National Center for Sign Language and Gesture Resources (NCSLGR) corpus: linguistically annotated ASL data (continuous signing), with multiple synchronized video files showing views from different angles and a close-up of the face and linguistic annotations available as XML.
Annotation conventions are documented in these two reports; an updated version will be forthcoming in Spring 2012:
C. Neidle (2002) "SignStream Annotation: Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 11, Boston University, Boston, MA.
C. Neidle (2007), "SignStream Annotation: Addendum to Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 13, Boston University, Boston, MA.
See also: C. Neidle and C. Vogler (2012), "A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface (DAI)," 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey, May 27, 2012.
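Since the annotations for the NCSLGR corpus are distributed as XML, they can be loaded with standard XML tooling. The sketch below is only an illustration: the element and attribute names ("UTTERANCE", "SIGN", "GLOSS", "START_FRAME", "END_FRAME") are hypothetical placeholders, not the actual SignStream export schema, which should be consulted before writing real parsing code.

```python
# Hypothetical sketch of reading gloss annotations from an XML export.
# NOTE: all element/attribute names below are assumed for illustration;
# the real SignStream XML schema may differ.
import xml.etree.ElementTree as ET

def load_glosses(path):
    """Collect (gloss, start_frame, end_frame) triples from an annotation file."""
    tree = ET.parse(path)
    glosses = []
    for utterance in tree.getroot().iter("UTTERANCE"):
        for sign in utterance.iter("SIGN"):
            glosses.append((sign.get("GLOSS"),
                            int(sign.get("START_FRAME")),
                            int(sign.get("END_FRAME"))))
    return glosses
```

Because the frame numbers are kept alongside each gloss, the result can be aligned with the synchronized video files described above.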
B. Additional data are available from the American Sign Language Lexicon Video Dataset (ASLLVD), a collection of almost 10,000 examples (of about 3,000 distinct signs, each produced by between 1 and 6 ASL signers), based largely on the entries in the Gallaudet Dictionary of American Sign Language.
Handshapes (with videos showing multiple angles of the hands in motion) and our labeling conventions for handshapes:
C. Neidle (2002) "SignStream: A Database Tool for Research on Visual-Gestural Language." In Brita Bergman, Penny Boyes-Braem, Thomas Hanke, and Elena Pizzuto, eds., Sign Transcription and Database Storage of Sign Information, a special issue of Sign Language and Linguistics 4 (2001):1/2, pp. 203-214.
D. MacLaughlin, C. Neidle, and D. Greenfield (2000) "SignStream User's Guide." American Sign Language Linguistic Research Project, Report 9, Boston University, Boston, MA.
Pending completion of the DAI download capabilities, see this page for access to complete sets of materials from the NCSLGR corpus (videos and annotations): http://www.bu.edu/asllrp/ncslgr-for-download/download-info.html .
(2) Software for Linguistic Annotation and Analysis of Visual Language Data
The data collection listed above in (1)-A was created using SignStream® 2.2.2 (runs as a Classic application on older Macintosh systems) for linguistic annotation. The data listed in (1)-B and data soon to be released through the DAI 2 include annotations that were carried out using SignStream® version 3, a Java application with many new features released in August 2017.
Integrated into SignStream® version 3 and DAI 2 is a SignBank, where individual signs from our data collections are stored along with their morpho-phonological features. This greatly speeds annotation, as previously annotated signs can be retrieved along with their relevant properties (which can be further edited in case of variations in production). This also helps to ensure consistency in labeling.
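The idea behind this kind of SignBank lookup can be sketched as a simple store that records each sign's features once and hands back an editable copy for every new token. This is only a minimal illustration of the retrieve-and-override pattern described above; the field names and API are invented here and do not reflect the actual SignBank implementation.

```python
# Hypothetical sketch of a SignBank-style lookup: each gloss is stored
# once with its morpho-phonological features, and later annotations
# reuse that record, optionally overriding a feature for a variant
# production. Feature names ("handshape", "movement") are illustrative.

class SignBank:
    def __init__(self):
        self._entries = {}  # gloss -> feature dict

    def add(self, gloss, features):
        """Store the canonical feature set for a sign."""
        self._entries[gloss] = dict(features)

    def retrieve(self, gloss, **overrides):
        """Return a copy of the stored features, with any per-token
        edits applied; the stored entry itself is left unchanged."""
        record = dict(self._entries[gloss])
        record.update(overrides)
        return record

bank = SignBank()
bank.add("BOOK", {"handshape": "B", "movement": "open"})
# A later annotation reuses the stored entry, editing one feature:
token = bank.retrieve("BOOK", movement="closed")
```

Because every token starts from the same stored record, labels stay consistent across annotators while still allowing per-production variation.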
(4) Tracking of Non-manual Features (head movement, facial expressions) and Detection of Corresponding Linguistic Information
In collaboration with Dimitris Metaxas and colleagues at Rutgers University, we are conducting research on tracking and 3D modeling of non-manual events (head positions and movements, and facial expressions) that convey essential grammatical information in signed languages. Visualizations of the ASL non-manual feature tracking and detection of the associated linguistic information are available here: http://www.bu.edu/av/asllrp/NM/ .