BU Bridge

Week of 21 January 2000

Vol. III, No. 20

Feature Article

Signs of improvement

New facility will aid in understanding sign language and human movement

By Eric McHenry

In American Sign Language, some of the most important expressions are facial. Without the activity of the eyes, eyebrows, and mouth, much of what the hands are saying would be ambiguous at best.

"When people think about sign language, they usually think about movements of the hands. But that's only part of what's going on," says Carol Neidle, CAS associate professor of modern foreign languages and principal investigator at the National Center for Sign Language and Gesture Resources (NCSLGR), a joint venture between BU and the University of Pennsylvania. "In fact, about 80 percent of the grammar is on the face and on the body, in parallel with manual signing. Syntactic information -- about negation or question status, for example -- is expressed through movements of the eyebrows, the face, head nods, head tilts, eye gaze, that sort of thing."

Too many of the existing tools for the study of ASL, Neidle explains, aren't sensitive to the language's subtleties. Written transcriptions of signed conversations are often, for obvious reasons, incomplete. Video data have tended to be of insufficient quality -- poorly shot, inadequately annotated, and difficult to search for key words and phrases. And researchers have often failed to make use of an essential resource: people whose first language is ASL. With the NCSLGR, Neidle, co-principal investigator Stan Sclaroff, and colleagues at UPenn plan to help bring the study of sign language up to speed.

Using a shared $1.3 million National Science Foundation grant, the BU and UPenn research teams have set up two facilities for the recording and analysis of signed data. The BU lab, in the basement of 111 Cummington St., features four synchronized digital cameras to register four distinct views of an ASL-speaking subject. Over the course of the next four years, investigators at both universities hope to log many hours of ASL data from native signers, establish a standard protocol for the gathering of such data, annotate them using a multimedia tool developed by Neidle and colleagues, and create computer algorithms for their analysis.
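As a rough illustration of how recordings from the four synchronized cameras and their linguistic annotations might be organized, here is a minimal sketch in Python; the record fields, camera labels, and file names are hypothetical and stand in for whatever storage format the center actually adopts.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Utterance:
        """One signed utterance captured by the synchronized cameras (illustrative only)."""
        utterance_id: str
        signer_id: str                                          # native signer providing the data
        views: Dict[str, str] = field(default_factory=dict)     # camera label -> video file
        annotations: List[dict] = field(default_factory=list)   # frame-aligned linguistic labels

    example = Utterance(
        utterance_id="bu-2000-001",
        signer_id="signer-01",
        views={
            "front": "front.avi",          # frontal view of the signer
            "face": "face_closeup.avi",    # close-up of facial expression
            "left": "left_profile.avi",    # hypothetical third angle
            "right": "right_profile.avi",  # hypothetical fourth angle
        },
        annotations=[
            # one entry per annotated span: start/end frame plus labels
            {"start_frame": 0, "end_frame": 42, "gloss": "BOOK", "eyebrows": "raised"},
        ],
    )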

Dawn MacLaughlin (far right) speaks American Sign Language with Ben Bahan (seated in background) while Vassilis Athitsos (GRS'01) and Carol Neidle view Bahan's half of the conversation on a computer monitor. The facility they're using includes four digital cameras for the creation of an ASL video database. Photo by Kalman Zabarsky


Ideally, says Sclaroff, CAS assistant professor of computer science, such algorithms will lay the foundation for automatic sign language recognition devices of the sort that already exist for oral speech. For several years, he says, computational experts have been interested in the development of ASL recognition technology. But until now, the data they've used have been problematic.

"Linguistics scholars would look at the data that the computer scientists were using and say that they weren't linguistically controlled and that they weren't representative of natural signing by native signers," Sclaroff says. "It's analogous to someone in Germany developing an English speech recognizer by studying 10 words of English, pronouncing them with their German accents, and putting them together with a thick grammar that may not follow natural discourse rules."

The NCSLGR project's aim, Neidle says, is "to develop a facility that allows for high-quality data collection through collaborative efforts involving both computer scientists and linguists."

In addition to affording four complementary views of a subject, the facility's digital video cameras collect footage at twice the frame rate and with twice the spatial resolution of standard television cameras.

"That means that we're going to have very crisp data," says Sclaroff, "and that's critical for the study of hand motions, which are very fast. This is one of the best facilities in the country."

Using SignStream, the data analysis computer application she helped develop, Neidle demonstrates some of the center's capabilities. She clicks on one of the four windows that occupy her computer screen. A woman, seated in profile, executes a short series of gestures. Then Neidle cues up another screen, a frontal close-up of the same woman's face. It moves slowly through several dramatic changes of expression as the woman signs the same sentence.

All of the views, Sclaroff points out, are synchronized: a meter at the bottom of each screen tracks the footage frame by frame as it rolls, making it possible for researchers to determine, among other things, precisely which facial expressions correspond with which hand gestures.
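To make the idea concrete, here is a minimal Python sketch of that kind of frame-indexed lookup across synchronized annotation tiers; the span format and the example labels are assumptions for illustration, not SignStream's actual data model.

    from typing import List, Tuple

    # Each tier's annotations as (start_frame, end_frame, label) spans.
    # Because the four views are synchronized, a single frame index is
    # valid across every camera and every annotation tier.
    Span = Tuple[int, int, str]

    def labels_at_frame(spans: List[Span], frame: int) -> List[str]:
        """Return every label whose span covers the given frame."""
        return [label for start, end, label in spans if start <= frame <= end]

    # Hypothetical annotation tiers for one utterance.
    hand_tier = [(10, 55, "gloss: BOOK"), (56, 90, "gloss: READ")]
    face_tier = [(10, 90, "eyebrows: raised (question marking)")]

    frame = 60
    print(labels_at_frame(hand_tier, frame))  # ['gloss: READ']
    print(labels_at_frame(face_tier, frame))  # ['eyebrows: raised (question marking)']

Looking up the same frame number in the hand tier and the face tier is what lets a researcher say which facial expression co-occurs with which manual sign.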

"From a linguist's point of view," says Neidle, "the computer technology -- the availability of synchronized multiple views of signing -- is invaluable."

"And for me," Sclaroff says, "it's good to have the labeled data that the SignStream system generates." He points to a poster on the wall of Neidle's office, an enlarged printout of a computer screen where SignStream is being used. A signing subject is frozen midsentence; below him is a breakdown of his activity at that exact moment -- eyes, eyebrows, head. "All of that information," Sclaroff says, "can be used to train or validate a computer algorithm. Ideally, we would like to develop algorithms that can analyze the collected video and correctly classify the recorded movements."

Annotated data from the facility will also help Sclaroff and his team of computer science graduate students extend their independent research efforts. For the past five years, these have involved the development of new algorithms for the analysis, tracking, and recognition of human movement and gesture.

"The ongoing research project in my computer science group fits very nicely into the work of this facility," Sclaroff says, "because the technologies that we've been developing can be deployed in this new area. And that, we hope, will lead to new ideas and technologies."

The first data collection at the BU facility roughly coincides with the publication of The Syntax of American Sign Language: Functional Categories and Hierarchical Structure (MIT Press, 2000). Neidle coauthored the book with several BU-affiliated colleagues, including Dawn MacLaughlin (GRS'97), a visiting assistant professor of linguistics, and Ben Bahan (GRS'96), associate professor and chair of the deaf studies department at Gallaudet University. Both MacLaughlin and Bahan are also serving as consultants on the NCSLGR project. The Syntax of American Sign Language, Neidle says, is exactly the sort of research undertaking that will likely benefit, in the future, from databases and new technologies created by the NCSLGR. If computers can be trained to recognize and transcribe recorded sign language, she says, "we'll be able to generate larger quantities of data that will actually allow us to further our understanding of the grammar."

"And if we want eventually to do automatic recognition of sign languages," Sclaroff adds, "we have to first have a good linguistic analysis of sign languages. So as we make advances in the linguistic area, we will also inevitably be able to make advances in the computational domain."

The project, then, is a mutually beneficial partnership between two very different disciplines. "The results on each side," Neidle says, "are feeding the general enterprise."


For more information about the National Center for Sign Language and Gesture Resources, visit www.bu.edu/asllrp/ncslgr.html.