Advances in Recognition and Interpretation of Human Motion:
An Integrated Approach to ASL Recognition

This project has aimed to advance the state of the art in the following areas: (1) integration of multimodal information in computer-based recognition and interpretation of human movements for communicative purposes; and (2) scientific understanding of the linguistic structure of American Sign Language.

Sign language recognition to date has focused primarily on manual signing. This is a major limitation, given that critical grammatical information is expressed by movements of the face and upper body occurring in parallel with manual signing. No system for sign language recognition or generation can succeed without properly modeling the linguistic use of non-manual expressions. This project included the development, informed by domain-specific linguistic knowledge, of innovative model-based algorithms to address the dynamic, data-driven aspects of sign language recognition.

This project was a collaboration among the ASLLRP, Rutgers University (Dimitris Metaxas, Ahmed Elgammal, and Vladimir Pavlovic), and Gallaudet University (Christian Vogler), supported by funding from the National Science Foundation.