NSF Support
for the American Sign Language Linguistic Research Project

    We are grateful for support from the National Science Foundation.

I. The Architecture of Functional Categories in American Sign Language

#SBR-9410562 November 1, 1994 - January 31, 1998
C. Neidle, P.I.; J. Kegl, co-P.I.; B. Bahan, co-investigator ($355,000)

#SBR-9729010 and #SBR-9729065 March 15, 1998-February 28, 2002
C. Neidle, P.I.; J. Kegl, co-P.I.; B. Bahan and D. MacLaughlin, co-investigators ($355,000)

II. SignStream: A Multimedia Tool for Language Research

#IRI-9528985 and #IIS-9528985 December 1, 1995 - May 31, 2000
C. Neidle, P.I. ($748,169)

III. National Center for Sign Language and Gesture Resources

#EIA-9809340 and #EIA-9809209 October 1, 1998 - September 30, 2003
Boston University: C. Neidle, P.I.; S. Sclaroff, co-P.I. ($649,999)
University of Pennsylvania: D. Metaxas, P.I.; N. Badler and M. Liberman, co-P.I.'s ($650,000)

IV. Essential Tools for Computational Research on Visual-Gestural Language Data

#IIS-9912573 May 15, 2000 - April 30, 2004
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($687,602)

V. Pattern Discovery in Signed Languages and Gestural Communication

#IIS-0329009 September 15, 2003 - August 30, 2007
C. Neidle, P.I.; M. Betke, G. Kollios, and S. Sclaroff, co-P.I.'s ($749,999)

VI. ITR-Collaborative Research: Advances in recognition and interpretation of human motion: An Integrated Approach to ASL Recognition

#CNS-0427988, 0427267 and 0428231 October 15, 2004 - March 31, 2009
Boston University:
C. Neidle, P.I. ($500,000)
Gallaudet University: C. Vogler, P.I. ($249,998)
Rutgers University: D. Metaxas, P.I.; A. Elgammal and V. Pavlovic, co-P.I.'s ($1,099,815)

VII. HCC-Large Lexicon Gesture Representation, Recognition, and Retrieval

#HCC-0705749 September 15, 2007 - September 30, 2011
Boston University:
S. Sclaroff, P.I.; C. Neidle, co-P.I. ($899,985)
University of Texas at Arlington: V. Athitsos, P.I.

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0705749

Aims to develop sign "look-up" technology that would allow a computer to recognize and identify a sign produced by a user in front of a webcam (or a video clip for which the user has specified the start or end point). Such technology could, for example, serve as the interface to an ASL video dictionary. For purposes of developing and training computer algorithms for recognition, a set of about 3,000 signs in citation form was elicited from up to six native signers each; the resulting approximately 9,000 tokens have been linguistically annotated with unique gloss labels as well as start and end handshapes.
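As a rough illustration of what each annotated token comprises, a single entry might be represented as a record along the following lines (a minimal sketch in Python; the field names and example values are hypothetical and are not the project's actual annotation schema):

    from dataclasses import dataclass

    # Hypothetical record for one annotated sign token; the field names are
    # illustrative only, not the ASLLVD's actual annotation schema.
    @dataclass
    class SignToken:
        gloss: str            # unique gloss label identifying the sign
        signer_id: str        # which of the (up to 6) native signers
        start_frame: int      # first video frame of the sign
        end_frame: int        # last video frame of the sign
        start_handshape: str  # handshape label at the start of the sign
        end_handshape: str    # handshape label at the end of the sign

    # Example: one citation-form token produced by one signer.
    token = SignToken(gloss="EXAMPLE-SIGN", signer_id="signer-01",
                      start_frame=120, end_frame=168,
                      start_handshape="B", end_handshape="B")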

See details of the project here: http://www.bu.edu/av/asllrp/dai-asllvd.html

For preliminary reports of this work, see [2], [3], [5], and [8].

Further reports, as well as a doctoral dissertation by Ashwin Thangali on the use of linguistic constraints on the composition of signs to improve recognition results, are forthcoming. The complete, linguistically annotated data set will be made available as soon as possible, hopefully early in 2012.

VIII. II-EN: Infrastructure for Gesture Interface Research Outside the Lab*

#IIS-0855065 July 20, 2009 - August 31, 2012
Boston University:
S. Sclaroff, P.I.; C. Neidle and M. Betke, co-P.I.'s ($591,445)

IX. Collaborative Research: II-EN: Development of Publicly Available, Easily Searchable, Linguistically Analyzed, Video Corpora for Sign Language and Gesture Research (planning grant)*

#CNS-0958442, 0958247, and 0958286 April 1, 2010 - March 31, 2012
Boston University:
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($70,000)
Rutgers University: D. Metaxas, P.I. ($20,000)
University of Texas at Arlington: V. Athitsos, P.I. ($10,000)

X. III: Collaborative Research: CI-ADDO-EN: Development of Publicly Available, Easily Searchable, Linguistically Analyzed, Video Corpora for Sign Language and Gesture Research*

#CNS-1059218, 1059281, 1059235 and 1059221
August 1, 2011 - July 31, 2014
Boston University:
C. Neidle, P.I.; S. Sclaroff, co-P.I. ($368,205)
Rutgers University: D. Metaxas, P.I. ($97,908)
Gallaudet University:
B. Bahan, P.I.; C. Vogler, co-P.I. ($92,257)
University of Texas at Arlington: V. Athitsos, P.I. ($66,630)

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1059218

Aims to make publicly available the resources that have been developed in conjunction with all of the projects listed on this page. This includes, in particular, linguistically annotated ASL video data from native signers (with synchronized video showing the signing from multiple angles, along with a close-up of the face). An additional video collection from Gallaudet University will also be included.

See the data now available through the Data Access Interface, currently under development (to include additional data sets and enhanced possibilities for browsing, searching, and downloading data):

http://secrets.rutgers.edu/dai/queryPages/

XI. III: Medium: Collaborative Research: Linguistically Based ASL Sign Recognition as a Structured Multivariate Learning Problem*

#IIS-0964385 and 0964597
September 1, 2010 - August 31, 2013
Boston University:
C. Neidle, P.I. ($469,000)
Rutgers University: D. Metaxas, P.I. ($739,000)

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0964385

The goal is to enable computer algorithms to distinguish among and identify signs from the different morphological classes of ASL (e.g., lexical signs, fingerspelled signs, loan signs, and classifier constructions), which follow different compositional principles, and to exploit the linguistic constraints appropriate to each class in order to improve recognition of manual signs.
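As a minimal sketch of that two-stage idea (the class inventory follows the project description above; the function names and dispatch structure are hypothetical simplifications, not the project's actual method):

    # Illustrative sketch: first predict a sign's morphological class, then
    # apply a recognizer that exploits that class's compositional constraints.
    MORPHOLOGICAL_CLASSES = (
        "lexical", "fingerspelled", "loan", "classifier_construction",
    )

    def recognize_sign(features, classify_morph_class, recognizers):
        """Two-stage recognition: identify the morphological class, then
        run the class-specific recognizer for that class."""
        morph_class = classify_morph_class(features)
        # E.g., a fingerspelling recognizer can constrain its hypotheses to
        # letter sequences, while a lexical-sign recognizer can exploit
        # constraints on permissible start/end handshape combinations.
        return recognizers[morph_class](features)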

XII. HCC: Collaborative Research: Medium: Generating Accurate, Understandable Sign Language Animations Based on Analysis of Human Signing*

#IIS-1065013, 1065009 and 1054965
July 1, 2011 - June 30, 2014
Boston University:
C. Neidle, P.I. ($385,957)
CUNY (Queens College): M. Huenerfauth, P.I. ($338,005)
Rutgers University: D. Metaxas, P.I. ($469,996)

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1065013

Applies the models developed (with funding from VI above) for recognition of linguistically significant facial expressions and head movements (see, for example, the presentations [1], [4], [6], and [7]) to the production of more realistic signing avatars. One of the biggest problems in generating sign language through signing avatars is incorporating the realistic facial expressions and head movements that are essential to the grammars of signed languages.

* Currently Active

References cited above, in reverse chronological order

[1] Nicholas Michael, Peng Yang, Dimitris Metaxas, and Carol Neidle, A Framework for the Recognition of Nonmanual Markers in Segmented Sequences of American Sign Language, British Machine Vision Conference 2011, Dundee, Scotland, August 31, 2011.

[2] Ashwin Thangali, Joan P. Nash, Stan Sclaroff, and Carol Neidle, Exploiting Phonological Constraints for Handshape Inference in ASL Video, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2011.

[3] Haijing Wang, Alexandra Stefan, Sajjad Moradi, Vassilis Athitsos, Carol Neidle, and Farhad Kamangar, A System for Large Vocabulary Sign Search. International Workshop on Sign, Gesture, and Activity (SGA), in conjunction with ECCV 2010, Hersonissos, Heraklion, Crete, Greece, September 11, 2010.

[4] Nicholas Michael, Carol Neidle, and Dimitris Metaxas, Computer-Based Recognition of Facial Expressions in ASL: From Face Tracking to Linguistic Interpretation. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, LREC 2010, May 22-23, 2010.

[5] Vassilis Athitsos, Carol Neidle, Stan Sclaroff, Joan Nash, Alexandra Stefan, Ashwin Thangali, Haijing Wang, and Quan Yuan, Large Lexicon Project: American Sign Language Video Corpus and Sign Language Indexing/Retrieval Algorithms. 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, LREC 2010, May 22-23, 2010.

[6] Nicholas Michael, Dimitris Metaxas, and Carol Neidle, Spatial and Temporal Pyramids for Grammatical Expression Recognition of American Sign Language. Eleventh International ACM SIGACCESS Conference on Computers and Accessibility. Philadelphia, PA, October 26-28, 2009.

[7] Carol Neidle, Nicholas Michael, Joan Nash, and Dimitris Metaxas, A Method for Recognition of Grammatically Significant Head Movements and Facial Expressions, Developed Through Use of a Linguistically Annotated Video Corpus. Workshop on Formal Approaches to Sign Languages, held as part of the 21st European Summer School in Logic, Language and Information, Bordeaux, France, July 20-31, 2009.

[8] Vassilis Athitsos, Carol Neidle, Stan Sclaroff, Joan Nash, Alexandra Stefan, Quan Yuan, and Ashwin Thangali, The ASL Lexicon Video Dataset. First IEEE Workshop on CVPR for Human Communicative Behavior Analysis. Anchorage, Alaska, June 28, 2008.

See also http://www.bu.edu/asllrp/publications.html and http://www.bu.edu/asllrp/talks.html.

 
