NSF Support
for the American Sign Language Linguistic Research Project

    We are grateful for support from the National Science Foundation.

I. The Architecture of Functional Categories in American Sign Language

#SBR-9410562 November 1, 1994 - January 31, 1998
C. Neidle, P.I.; J. Kegl, co-P.I.; B. Bahan, co-investigator ($355,000)

#SBR-9729010 and #SBR-9729065 March 15, 1998-February 28, 2002
C. Neidle, P.I.; J. Kegl, co-P.I.; B. Bahan and D. MacLaughlin, co-investigators ($355,000)

II. SignStream: A Multimedia Tool for Language Research

#IRI-9528985 and #IIS-9528985 December 1, 1995 - May 31, 2000
C. Neidle, P.I. ($748,169)

III. National Center for Sign Language and Gesture Resources

#EIA-9809340 and #EIA-9809209 October 1, 1998 - September 30, 2003
Boston University: C. Neidle, P.I.; S. Sclaroff, co-P.I. ($649,999)
University of Pennsylvania: D. Metaxas, P.I.; N. Badler and M. Liberman, co-P.I.'s ($650,000)

IV. Essential Tools for Computational Research on Visual-Gestural Language Data

#IIS-9912573 May 15, 2000 - April 30, 2004
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($687,602)

V. Pattern Discovery in Signed Languages and Gestural Communication

#IIS-0329009 September 15, 2003 - August 31, 2007
C. Neidle, P.I.; M. Betke, G. Kollios, and S. Sclaroff, co-P.I.'s ($749,999)

VI. ITR-Collaborative Research: Advances in recognition and interpretation of human motion: An Integrated Approach to ASL Recognition

#CNS-0427988, 0427267, 0428231 October 15, 2004 - March 31, 2009
Boston University:
C. Neidle, P.I. ($500,000)
Gallaudet University: C. Vogler, P.I. ($249,998)
Rutgers University: D. Metaxas, P.I.; A. Elgammal and V. Pavlovic, co-P.I.'s ($1,099,815)

VII. HCC-Large Lexicon Gesture Representation, Recognition, and Retrieval


September 15, 2007 - September 30, 2011
Boston University:
S. Sclaroff, P.I.; C. Neidle, co-P.I. ($899,985)
University of Texas at Arlington: V. Athitsos, P.I.


Aims to develop sign "look-up" technology, to allow the computer to recognize and identify a sign produced by a user in front of a webcam (or a video clip for which the user has specified the start and end points). Such technology could, for example, serve as the interface to an ASL video dictionary. For purposes of developing and training computer algorithms for recognition, a set of about 3,000 signs in citation form was elicited from up to 6 native signers each, and these approximately 9,000 tokens have been linguistically annotated with unique gloss labels and start and end handshapes.

See details of the project here: http://www.bu.edu/av/asllrp/dai-asllvd.html

For preliminary reports of this work, see [2], [3], [5], and [8].

Further reports and a doctoral dissertation by Ashwin Thangali, related to the use of linguistic constraints on the composition of signs to improve recognition results, will be forthcoming. The complete, linguistically annotated data set will be made available as soon as possible, ideally early in 2012.

VIII. II-EN: Infrastructure for Gesture Interface Research Outside the Lab

#IIS-0855065 July 20, 2009 - August 31, 2012
Boston University:
S. Sclaroff, P.I.; C. Neidle and M. Betke, co-P.I.'s ($591,445)

IX. Collaborative Research: II-EN: Development of Publicly Available, Easily Searchable, Linguistically Analyzed, Video Corpora for Sign Language and Gesture Research (planning grant)

#CNS-0958442, 0958247, and 0958286 April 1, 2010 - March 31, 2012
Boston University:
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($70,000)
Rutgers University: D. Metaxas, P.I. ($20,000)
University of Texas at Arlington: V. Athitsos, P.I. ($10,000)

X. III: Collaborative Research: CI-ADDO-EN: Development of Publicly Available, Easily Searchable, Linguistically Analyzed, Video Corpora for Sign Language and Gesture Research

#CNS-1059218, 1059281, 1059235, and 1059221
August 1, 2011 - July 31, 2017
Boston University:
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($368,205)
Rutgers University: D. Metaxas, P.I. ($97,908)
Gallaudet University:
B. Bahan, P.I.; C. Vogler, co-P.I. ($92,257)
University of Texas at Arlington: V. Athitsos, P.I. ($66,630)


Aims to make publicly available the resources that have been developed in conjunction with all of the projects listed on this page. This includes especially linguistically annotated ASL video data (with synchronized video showing the signing from multiple angles, along with a close-up of the face) from native signers. An additional video collection from Gallaudet University will also be included.

See the data now available through the Data Access Interface, currently under development (to include additional data sets and enhanced possibilities for browsing, searching, and downloading data).

XI. III: Medium: Collaborative Research: Linguistically Based ASL Sign Recognition as a Structured Multivariate Learning Problem

#IIS-0964385 and 0964597
September 1, 2010 - August 31, 2015
Boston University:
C. Neidle, P.I. ($469,000)
Rutgers University: D. Metaxas, P.I. ($739,000)


The goal is to enable computer algorithms to distinguish among and identify different morphological classes of signs in ASL (e.g., lexical signs, fingerspelled signs, loan signs, and classifier constructions), which follow different compositional principles, and to exploit the appropriate linguistic constraints for each class in order to improve recognition of manual signs.

XII. HCC: Collaborative Research: Medium: Generating Accurate, Understandable Sign Language Animations Based on Analysis of Human Signing

#IIS-1065013, 1065009, and 1054965
July 1, 2011 - June 30, 2016
Boston University:
C. Neidle, P.I. ($385,957)
CUNY (Queens College): M. Huenerfauth, P.I. ($338,005)
Rutgers University: D. Metaxas, P.I. ($469,996)


Applies the models developed (with funding from VI above) for recognition of linguistically significant facial expressions and head movements (see, for example, these presentations: [1], [4], [6], [7]) to the production of more realistic signing avatars. One of the biggest challenges in generating sign language through signing avatars is incorporating the realistic facial expressions and head movements that are essential to the grammars of signed languages.

XIII. EAGER: Collaborative Research: Data Visualizations for Linguistically Annotated, Publicly Shared, Video Corpora for American Sign Language (ASL)

#1748016, 1748022
August 1, 2017 - July 31, 2018
Boston University:
C. Neidle, P.I. ($18,001)
Rutgers University: D. Metaxas, P.I. ($54,999)


The goal of this project is to further improve the existing SignStream 3 and DAI 2 applications by incorporating several powerful enhancements and additional functionalities, enabling the shared tools and data to support new kinds of research in both linguistics (analysis of linguistic properties of ASL and other signed languages) and computer science (work in sign language recognition and generation). Specifically, we will incorporate graphical representations of computer-generated analyses of ASL videos into the displays of both the annotation software and the Web interface, so that users will be able to visualize the distribution and characteristics of key aspects of facial expressions and head movements that carry critical linguistic information in sign languages (e.g., head nods and shakes, eyebrow height, and eye aperture).

Resulting publications: [9], [10], [11].

XIV. CHS: Medium: Collaborative Research: Scalable Integration of Data-Driven and Model-Based Methods for Large Vocabulary Sign Recognition and Search

#IIS-1763486, 1763523, 1763569
August 1, 2018 - July 31, 2022
Boston University:
C. Neidle, P.I. ($300,023)
Rutgers University: D. Metaxas, P.I. ($689,999)
RIT: M. Huenerfauth, P.I. ($209,896)


This research will create a framework that will enable the development of a user-friendly, video-based sign-lookup interface, for use with online ASL video dictionaries and resources, and for facilitation of ASL annotation. Input will consist of either a webcam recording of a sign by the user, or user identification of the start and end frames of a sign from a digital video. To test the efficacy of the new tools in real-world applications, the team will partner with the leading producer of pedagogical materials for ASL instruction in high schools and colleges, which is developing the first multimedia ASL dictionary with video-based ASL definitions for signs. The lookup interface will be used experimentally to search the ASL dictionary in ASL classes at Boston University and RIT. Project outcomes will revolutionize how deaf children, students learning ASL, and families with deaf children search ASL dictionaries. They will accelerate research on ASL linguistics and technology, by increasing the efficiency, accuracy, and consistency of annotations of ASL videos through video-based sign lookup. And they will lay the groundwork for future technologies to benefit deaf users, such as search by video example through ASL video collections, or ASL-to-English translation, for which sign recognition is a precursor. The new linguistically annotated video data and software tools will be shared publicly, for use by others in linguistic and computer science research, as well as in education.

This research will strategically combine state-of-the-art computer vision, machine-learning methods, and linguistic modeling. It will leverage the team's existing publicly shared ASL corpora and Sign Bank (linguistically annotated and categorized video recordings produced by native signers), which will be augmented to meet the requirements of this project.

XV. NSF Convergence Accelerator [Phase I]--Track D: Data & AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, ASL Education & Linguistic Research

September 15, 2020 - May 31, 2022
Award to:
Rutgers University:
D. Metaxas, P.I.; M. D'Imperio, co-P.I. ($960,000); with subcontracts to
Boston University: C. Neidle, P.I. ($213,342) and to
RIT: M. Huenerfauth, P.I.


The NSF Convergence Accelerator supports use-inspired, team-based, multidisciplinary efforts that address challenges of national importance and will produce deliverables of value to society in the near future.

This award will support development of “Data & AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, ASL Education & Linguistic Research.” Facial expressions and head gestures constitute an essential component of signed languages such as American Sign Language (ASL), which is the primary means of communication for over 500,000 people in the United States and the third most studied "foreign" language in the US. They also play an important role in spoken languages, but their role there has been much less well studied, in part because of the lack of analytic tools.

The team of linguists, computer scientists, deaf and hearing experts on ASL, and industry partners will address research and societal challenges through three types of deliverables targeted to diverse user and research communities. They will develop:

1. Tools to facilitate and accelerate research into the role of facial expressions in both signed and spoken languages.

2. An application to help ASL second-language learners produce the facial expressions and head gestures that convey grammatical information in the language. This is one of the most challenging aspects of second language acquisition of ASL.

3. An application to enable ASL users to have private conversations about sensitive topics, by de-identifying the signer in video communications while preserving the essential linguistic information expressed non-manually (through use of 4-dimensional face-tracking algorithms to separate facial geometry from facial movement and expression). This last deliverable addresses a real problem for ASL users who seek private communication in their own language. Obscuring the face is not an option for hiding the signer’s identity, since critical linguistic information expressed non-manually would be lost.



References cited above, in reverse chronological order; for a more complete list of publications see http://www.bu.edu/asllrp/talks.html, http://www.bu.edu/asllrp/reports.html, and http://www.bu.edu/asllrp/publications.html.

[1] Nicholas Michael, Peng Yang, Dimitris Metaxas, and Carol Neidle, A Framework for the Recognition of Nonmanual Markers in Segmented Sequences of American Sign Language, British Machine Vision Conference 2011, Dundee, Scotland, August 31, 2011.

[2] Ashwin Thangali, Joan P. Nash, Stan Sclaroff and Carol Neidle, Exploiting Phonological Constraints for Handshape Inference in ASL Video, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2011.

[3] Haijing Wang, Alexandra Stefan, Sajjad Moradi, Vassilis Athitsos, Carol Neidle, and Farhad Kamangar, A System for Large Vocabulary Sign Search. International Workshop on Sign, Gesture, and Activity (SGA) 2010, in conjunction with ECCV 2010. September 11, 2010. Hersonissos, Heraklion, Crete, Greece.

[4] Nicholas Michael, Carol Neidle, Dimitris Metaxas, Computer-based recognition of facial expressions in ASL: from face tracking to linguistic interpretation. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, LREC 2010, May 22-23, 2010.

[5] Vassilis Athitsos, Carol Neidle, Stan Sclaroff, Joan Nash, Alexandra Stefan, Ashwin Thangali, Haijing Wang, and Quan Yuan, Large Lexicon Project: American Sign Language Video Corpus and Sign Language Indexing/Retrieval Algorithms. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, LREC 2010, May 22-23, 2010.

[6] Nicholas Michael, Dimitris Metaxas, and Carol Neidle, Spatial and Temporal Pyramids for Grammatical Expression Recognition of American Sign Language. Eleventh International ACM SIGACCESS Conference on Computers and Accessibility. Philadelphia, PA, October 26-28, 2009.

[7] Carol Neidle, Nicholas Michael, Joan Nash, and Dimitris Metaxas, A Method for Recognition of Grammatically Significant Head Movements and Facial Expressions, Developed Through Use of a Linguistically Annotated Video Corpus. Workshop on Formal Approaches to Sign Languages, held as part of the 21st European Summer School in Logic, Language and Information, Bordeaux, France, July 20-31, 2009.

[8] V. Athitsos, C. Neidle, S. Sclaroff, J. Nash, A. Stefan, Q. Yuan, & A. Thangali, The ASL Lexicon Video Dataset. First IEEE Workshop on CVPR for Human Communicative Behavior Analysis. Anchorage, Alaska, June 28, 2008.

[9] D. Metaxas, M. Dilsizian, & C. Neidle, Linguistically-driven Framework for Computationally Efficient and Scalable Sign Recognition. Proceedings of LREC 2018. Miyazaki, Japan, May 2018.

[10] C. Neidle, A. Opoku, G. Dimitriadis, & D. Metaxas, NEW Shared & Interconnected ASL Resources: SignStream® 3 Software; DAI 2 for Web Access to Linguistically Annotated Video Corpora; and a Sign Bank. 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, pp. 147-154. LREC 2018. Miyazaki, Japan, May 2018.

[11] D. Metaxas, M. Dilsizian, and C. Neidle, Scalable ASL Recognition using Model-based Machine Learning and Linguistically Annotated Corpora. 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, pp. 127-132. LREC 2018. Miyazaki, Japan, May 2018.
