The ASLLRP includes:

    • Investigation of the linguistic structure of American Sign Language, with a focus on the relationship of syntax to semantics and prosody;

    • Collaboration with computer scientists to advance computer-based recognition and generation of signed languages;

    • Development of multimedia tools to facilitate access to and analysis of primary data for sign language research.

These projects have been funded by the National Science Foundation.

These include research on computer-based sign recognition from 2D video: "CHS: Medium: Collaborative Research: Scalable Integration of Data-Driven and Model-Based Methods for Large Vocabulary Sign Recognition and Search" (a collaborative project with Rutgers University (D. Metaxas) and RIT (M. Huenerfauth)); see http://www.bu.edu/asllrp/nsf.html#scalable

For information about our most recently funded NSF project, "NSF Convergence Accelerator [Phase I]–Track D: Data & AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, ASL Education & Linguistic Research" (a collaborative venture with Rutgers University (D. Metaxas, M. D'Imperio) and RIT (M. Huenerfauth)), see:
https://www.bu.edu/asllrp/facial-analytics.html


Major updates:

New data!

As of July 2021, a large amount of new data has been added to our publicly accessible corpus of continuous signing, available through our Data Access Interface (DAI 2), and to our ASLLRP Sign Bank.

For details, please see http://www.bu.edu/asllrp/about-datasets.pdf and ASLLRP Report No. 19.




July 2020: A new update for SignStream®, version 3.3, is now AVAILABLE!

It includes many new enhancements and bug fixes. The Sign Bank, accessible from within the application, also allows easy direct access to the online version, which offers many more examples for each sign.


A new version of our Data Access Interface (DAI 2) and a new collection of linguistically annotated data from the ASLLRP SignStream® 3 Corpus are also NOW AVAILABLE!

DAI 2 also provides access to our new ASLLRP Sign Bank.

See below.



Data Available from Various Related Projects

Terms of use for ASLLRP data

The data available from these pages can be used for research and education purposes, but cannot be redistributed without permission.

Commercial use without explicit permission is not allowed, nor may any patents or copyrights be based on this material.

Those making use of these data must, in resulting publications or presentations, cite the relevant datasets and publications, as appropriate:

Carol Neidle and Augustine Opoku [2020] A User's Guide to the American Sign Language Linguistic Research Project (ASLLRP) Data Access Interface (DAI) 2 — Version 2. BU ASLLRP Report No. 18, Boston, MA. http://www.bu.edu/asllrp/rpt18/asllrp18.pdf

Carol Neidle, Augustine Opoku, Gregory Dimitriadis, and Dimitris Metaxas [2018] NEW Shared & Interconnected ASL Resources: SignStream® 3 Software; DAI 2 for Web Access to Linguistically Annotated Video Corpora; and a Sign Bank, 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, LREC 2018, Miyazaki, Japan, pp. 147-154. https://open.bu.edu/handle/2144/30047

Carol Neidle and Christian Vogler [2012] A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface (DAI), 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey. http://www.bu.edu/linguistics/UG/LREC2012/LREC-dai-final.pdf

Carol Neidle, Ashwin Thangali, and Stan Sclaroff [2012] Challenges in the Development of the American Sign Language Lexicon Video Dataset (ASLLVD) Corpus, Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey. http://www.bu.edu/linguistics/UG/LREC2012/LREC-asllvd-final.pdf

and also include the following URLs, as appropriate: http://dai.cs.rutgers.edu/dai/s/dai, http://www.bu.edu/asllrp/, and http://secrets.rutgers.edu/dai/queryPages/.

By accessing data from this site, you agree to the above terms of use.

 

(1) Web Access to Linguistically Annotated Corpora: the ASLLRP DAI (Data Access Interface)

A. Web access to the National Center for Sign Language and Gesture Resources (NCSLGR) corpus: linguistically annotated ASL data (continuous signing), with multiple synchronized video files (showing views from different angles and a close-up of the face) and with linguistic annotations available as XML.
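Since the annotations are distributed as XML, they can be processed with standard tools. Below is a minimal, illustrative Python sketch; the file name and the attribute names used here (gloss, start, end) are hypothetical placeholders, not the corpus's actual schema, which is described in the download documentation:

    # Illustrative only: the attribute names below are hypothetical;
    # consult the downloaded XML files for the actual NCSLGR schema.
    import xml.etree.ElementTree as ET

    tree = ET.parse("ncslgr_utterance.xml")  # placeholder file name

    # Print every element that carries a gloss label, with its
    # start/end frame attributes when present.
    for elem in tree.getroot().iter():
        gloss = elem.get("gloss")
        if gloss is not None:
            print(gloss, elem.get("start"), elem.get("end"))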

Information about the data collection and annotation, and about development of the Web interface

Information about download: http://www.bu.edu/asllrp/ncslgr-for-download/download-info.html

See also: C. Neidle and C. Vogler (2012), A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface (DAI), 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey, May 27, 2012.

B. Handshapes (with videos showing multiple angles of the hands in motion) and our labeling conventions for handshapes:

http://www.bu.edu/asllrp/cslgr/pages/ncslgr-handshapes.html.

C. A new, enhanced Data Access Interface, DAI 2, is also now available. It provides many new features for displaying information from SignStream® 3 files, including start and end handshapes, and gives access to a new ASLLRP SignStream® 3 Corpus. DAI 2 also provides access to our new ASLLRP Sign Bank (see (4) below).

See ASLLRP Project Report No. 18:   

Neidle, C. and A. Opoku (2020), A User's Guide to the American Sign Language Linguistic Research Project (ASLLRP) Data Access Interface (DAI) 2 — Version 2 [pdf - 11 MB]

New data is now available, as of March 2021!

For details, please see http://www.bu.edu/asllrp/about-datasets.pdf.

 



(2) Software for Linguistic Annotation and Analysis of Visual Language Data

The data collection listed above in (1)-A was created using SignStream® 2.2.2 (which runs as a Classic application on older Macintosh systems) for linguistic annotation. More recent data, released through the DAI 2, include annotations that were carried out using SignStream® version 3 (http://www.bu.edu/asllrp/SignStream/3/), a Java application with many new features, first released in August 2017.

User guide: ASLLRP Report No. 15  Neidle, C. [2017]: A User's Guide to SignStream® 3
[pdf - 24 MB]

About the 3.1.0 update: ASLLRP Report No. 16  Neidle, C. [2018]: What's new in SignStream® 3.1.0?
[pdf - 1 MB]

About the 3.3.0 update: ASLLRP Report No. 17  Neidle, C. [2020]: What's new in SignStream® 3.3.0?
[pdf - 14 MB]

(3) Annotation Conventions

Annotation conventions are documented in these two reports:

C. Neidle (2002) "SignStream™ Annotation: Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 11, Boston University, Boston, MA.

C. Neidle (2007), "SignStream™ Annotation: Addendum to Conventions used for the American Sign Language Linguistic Research Project," American Sign Language Linguistic Research Project, Report 13, Boston University, Boston, MA.



(4) ASLLRP Sign Bank

Integrated into SignStream® version 3 and DAI 2 is a Sign Bank, where individual signs from our data collections are stored along with their morpho-phonological features. This greatly speeds annotation, as previously annotated signs can be retrieved along with their relevant properties (which can be further edited in case of variations in production). This also helps to ensure consistency in labeling.

The Sign Bank incorporates the data from the American Sign Language Lexicon Video Dataset (ASLLVD), a collection of almost 10,000 examples (of about 3,000 distinct signs, each produced by between 1 and 6 ASL signers), based largely on the entries in the Gallaudet Dictionary of American Sign Language. It also incorporates examples from the continuous signing data in the ASLLRP SignStream® 3 Corpus.
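In schematic form, the workflow looks like the Python sketch below: a stored entry is retrieved, its features are reused, and individual properties can be overridden for a variant production. The field names (gloss, start_handshape, end_handshape) are hypothetical and do not reflect the actual Sign Bank schema:

    # Schematic sketch of sign-bank-assisted annotation; field names and
    # values are hypothetical, not the actual ASLLRP Sign Bank schema.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class SignEntry:
        gloss: str
        start_handshape: str
        end_handshape: str

    sign_bank = {"BOOK": SignEntry("BOOK", "B", "B")}

    # Retrieve the stored entry, then edit one property to record a
    # variant production, keeping the other features consistent.
    entry = sign_bank["BOOK"]
    variant = replace(entry, end_handshape="5")
    print(variant)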

See ASLLRP Project Report No. 18:   

Neidle, C. and A. Opoku (May 2020), A User's Guide to the American Sign Language Linguistic Research Project (ASLLRP) Data Access Interface (DAI) 2 — Version 2 [pdf - 11 MB]


New data is now available, as of March 2021!

For details, please see http://www.bu.edu/asllrp/about-datasets.pdf.


(5) Tracking of Non-manual Features (head movement, facial expressions) and Detection of Corresponding Linguistic Information

In collaboration with Dimitris Metaxas and colleagues at Rutgers University, we are conducting research on tracking and 3D modeling of non-manual events (head positions and movements, and facial expressions) that convey essential grammatical information in signed languages. Visualizations of the ASL non-manual feature tracking and detection of the associated linguistic information are available here: http://www.bu.edu/av/asllrp/NM/
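As a point of reference (this is not the project's method, just a common baseline), head rotation can be estimated from 2D facial landmarks with OpenCV's solvePnP; tracking the resulting angles across video frames is one simple way to look for nods, shakes, and tilts. All coordinates below are illustrative placeholders:

    # Baseline sketch (not the ASLLRP method): estimate head rotation from
    # 2D facial landmarks with OpenCV's solvePnP. A real pipeline would
    # supply landmarks detected in each video frame.
    import numpy as np
    import cv2

    # Generic 3D face-model points: nose tip, chin, eye corners, mouth corners.
    model_points = np.array([
        (0.0, 0.0, 0.0),
        (0.0, -330.0, -65.0),
        (-225.0, 170.0, -135.0),
        (225.0, 170.0, -135.0),
        (-150.0, -150.0, -125.0),
        (150.0, -150.0, -125.0),
    ], dtype=np.float64)

    # Matching 2D landmark positions (pixels) for one frame -- placeholders.
    image_points = np.array([
        (359, 391), (399, 561), (337, 297),
        (513, 301), (345, 465), (453, 469),
    ], dtype=np.float64)

    # Simple pinhole-camera approximation based on the frame size.
    w, h = 640, 480
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, None)
    if ok:
        rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
        # Approximate Euler angles (convention-dependent); tracked across
        # frames, changes suggest head nods (pitch) and shakes (yaw).
        pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
        yaw = np.degrees(np.arcsin(-rot[2, 0]))
        print(f"pitch={pitch:.1f} deg, yaw={yaw:.1f} deg")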