The Chronicle of Higher Education: Articles

INFORMATION TECHNOLOGY


November 7, 1997

Linguists Hope a New Computer Program Will Advance the Study of Sign Language

Multimedia data bases will enable scholars to examine the role of gestures and facial expressions

By VINCENT KIERNAN

BOSTON

"Is John buying a house?" Benjamin Bahan arches his eyebrows as he asks the question. "John is not buying a house," declares Dr. Bahan, shaking his head and frowning. "John is buying a house."

In each case, he makes the same signs with his hands -- signs for "John," "buy," and "house" -- but his face registers a different expression.

In fact, Dr. Bahan, an associate professor of deaf studies at Gallaudet University, doesn't care whether the hypothetical John is in the market for real estate. What interests him is the linguistic structure of American Sign Language and other forms of signed speech.

Until now, A.S.L. has been difficult for linguists to study. Signs are hard to represent in written form, and transcriptions of signed communications don't capture the interplay of gesture and facial expression that is the basis of meaning.

Now linguists are hoping to solve this problem with a new computer program that will compile digitized, multimedia data bases of signed expressions, enabling researchers to explore the languages in depth. Researchers could, for example, compare Dr. Bahan's various statements about John to assess the importance of gestures and facial expressions.

SignStream, the program that compiles the data bases, is scheduled to be released in a preliminary version by the end of this year. It could transform the scientific study of sign languages, say its developers and other specialists in the field.

Using digital video technology, SignStream allows researchers to make video recordings of individuals as they use sign language, and then to store the recordings on a computer's hard disk. The program also enables researchers to annotate the recordings extensively. They could note, for example, the precise moment at which Dr. Bahan -- a member of the SignStream development team -- starts to frown when he declares that John is not shopping for a home.

The program, being developed by researchers from Boston University, Dartmouth College, Gallaudet, and Rutgers University, has received $1.1-million in support from the National Science Foundation.

"This is the kind of tool that most researchers have been longing for," says Carol Neidle, an associate professor of linguistics at Boston University who heads the SignStream project.

"It's high time for this kind of development to come along," says Susan Duncan, a research associate in linguistics at the University of Chicago, who has tested an early version.

SignStream could bring a new level of scientific rigor to the study of sign language, says Dr. Neidle, by standardizing the methods researchers use and by making it easier for them to examine each other's raw data.

At present, linguists who study a sign language typically videotape people using the language. But scientific papers based on the videotapes cite only written transcriptions of signed expressions, says Dawn MacLaughlin, a research associate in linguistics at Boston who is a member of the SignStream team.

That makes it impossible for other researchers to judge whether the transcription was done properly and, consequently, whether linguistic conclusions based on the transcription are valid, she says.

Moreover, says Chicago's Dr. Duncan, the transcriptions are often crude, focusing on the words expressed by the hands but ignoring other gestures. Her work emphasizes the importance of "non-manual" gestures, such as the tilting of the speaker's head or the arching of eyebrows. Even if the transcriber has attempted to represent such non-manual aspects, the transcriptions show only approximately how they fit into the expression. "The written form that people use is not sufficient to allow someone who knows A.S.L. to reproduce the example," says Dr. MacLaughlin.

By contrast, SignStream lets a researcher step through an expression in sign language at intervals of a fraction of a second. At each instant, the researcher can annotate the recording. "We can go frame by frame and say, 'Here's where the eyebrows have started to go up, and here's where they've stopped,'" says Robert G. Lee, a doctoral student in linguistics at Boston who is also a member of the SignStream team.

The result is a detailed transcription, depicted on the computer screen as a series of parallel time lines representing hand gestures and facial and upper-body expressions. As the user plays the recording, a marker courses along the time lines, showing the precise moment when gestures begin or change.
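Conceptually, such a transcription is a set of labeled time intervals aligned to the video, one per track. The sketch below is purely illustrative -- the article does not describe SignStream's actual data format, and the labels and timings are hypothetical -- but it shows how parallel annotation tracks of this kind might be modeled:

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        track: str    # e.g. "right hand", "eyebrows", "head"
        label: str    # e.g. "BUY", "raised", "shake"
        start: float  # onset, in seconds from the start of the recording
        end: float    # offset, in seconds

    # One signed utterance; the on-screen "parallel time lines" are
    # these intervals grouped by track. All values are hypothetical.
    utterance = [
        Annotation("right hand", "JOHN", 0.00, 0.40),
        Annotation("right hand", "BUY", 0.45, 0.90),
        Annotation("right hand", "HOUSE", 0.95, 1.40),
        Annotation("eyebrows", "raised", 0.00, 1.40),  # question marker
    ]

    def tracks_at(time, annotations):
        """Report what every track is doing at a given instant."""
        return {a.track: a.label for a in annotations
                if a.start <= time <= a.end}

    print(tracks_at(0.5, utterance))
    # {'right hand': 'BUY', 'eyebrows': 'raised'}

Stepping frame by frame, as Mr. Lee describes, amounts to asking this question at successive instants of the recording.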

The program provides no new way to represent sign languages in written form in scientific papers, says Dr. Neidle. But authors of on-line papers on sign language could build hypertext links into their papers that would allow readers to summon SignStream recordings of sign-language expressions described in the paper. Readers could watch the original recording of the speaker and draw their own conclusions. Authors of papers in printed journals could supply an Internet address where the relevant SignStream data could be found, she says.

At present, Dr. Neidle says, "disputes about data generally degenerate into, 'Well, my data are different from your data,' and there's no way to verify them. There's no scientific scrutiny or replicability of claimed results."

"It has really slowed progress in the field, because people haven't been able to get beyond disputes about data," she adds. "People haven't been able to move to the interesting theoretical questions."

Sign-language specialists also have had difficulty in sharing their research because so much of it is recorded on videotape. A researcher might have to hunt through a closet full of tapes to find the one in question, then fast-forward through the entire tape to locate a particular phrase.

A search function that is still under development for SignStream will eventually allow a researcher to sift quickly through an entire digitized data base, says Boston's Mr. Lee. The researcher will be able to conduct simple searches -- say, for all uses of a given sign -- or more complex ones, such as for a specific combination of a sign and a facial expression.
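In terms of the interval representation sketched above, such a search reduces to filtering annotations for overlapping time spans. The helper below is a hypothetical illustration, not SignStream's actual search interface:

    def find_cooccurrences(utterances, sign, expression):
        """Yield each utterance in which the manual annotation `sign`
        overlaps in time with the non-manual annotation `expression`."""
        for annotations in utterances:
            signs = [a for a in annotations if a.label == sign]
            faces = [a for a in annotations if a.label == expression]
            if any(s.start < f.end and f.start < s.end  # intervals overlap
                   for s in signs for f in faces):
                yield annotations

    # e.g., every recording in which BUY is signed under raised eyebrows:
    # matches = list(find_cooccurrences(database, "BUY", "raised"))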

The SignStream researchers plan to create a repository of data that would be freely available to other linguists, Dr. Neidle says. The program itself is to be nearly free, costing only $25 for a license that would cover an entire university, she says.

Bencie Woll, chair of the department of sign language and deaf studies at City University in London, is pleased by the prospective access to digitized recordings cited in linguistics papers. "You look at somebody's transcription and you don't know if it's true or not," she says.

One drawback to SignStream is that it will operate only on Macintosh computers. "It's a pity that we only have a Mac version," Dr. Woll says, explaining that Macs are less common in European than in American universities, and that European linguists would have to buy them for use with SignStream.

Eventually, Dr. Neidle says, she would like to produce a version of SignStream for P.C.'s, but for now, the National Science Foundation's grant is sufficient only to produce a Macintosh version. She also believes that linguistic researchers will not have great difficulty in locating Macs. "First, we want to get a program that's up and operating," she says. Then she will worry about producing a P.C. version.

Although Dr. Neidle and her colleagues are developing SignStream specifically for use in studying American Sign Language, she says the program could be used to study any sign language.

Chicago's Dr. Duncan says the program could be used to compare features of two or more sign languages. The recordings would provide a convenient common format, she says.

Already, Dr. Neidle says, SignStream has led to new insights into the grammatical structure of American Sign Language, through analysis of more than 200 recorded expressions of native users of A.S.L., such as Dr. Bahan.

For example, the researchers have found that non-manual gestures expressing grammatical information -- such as lifted eyebrows or tilted head -- extend over entire phrases of words expressed by the hands. That means that American Sign Language, although it is expressed differently, has an abstract structure similar to that of spoken language, a structure comprising distinct phrases arranged in a pattern.
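Stated in terms of the earlier sketch, the finding is that the interval of a grammatical non-manual marker contains the intervals of all the manual signs in the phrase it marks. A hypothetical check along these lines is what makes such a claim testable against recorded data:

    def spans_phrase(marker, phrase_signs):
        """True if a non-manual marker's interval covers every
        manual sign in the phrase it is claimed to mark."""
        return all(marker.start <= s.start and s.end <= marker.end
                   for s in phrase_signs)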

"It's really made it possible for us to discover new generalizations and test them against the data that we've got," says Dr. Neidle. "We've discovered things that we probably never would have discovered otherwise."


http://chronicle.com
Date: 11/07/97
Section: Information Technology
Page: A27

Copyright (c) 1997 by The Chronicle of Higher Education, Inc. Posted with permission. This article may not be published, reposted, or redistributed without express permission from The Chronicle. To obtain such permission, please send a message to permission@chronicle.com. For subscription information, send a message to more-today@chronicle.com.



Neidle, MacLaughlin, and Lee

