BU Today


Help for the Hearing-Impaired

BU researchers developing new technology for learning American Sign Language

For the hearing-impaired and their families, learning American Sign Language can be a kind of Catch-22. There are ASL dictionaries in print, but because the language lacks a written form the signs are often organized according to their nearest English translation. “You can only look up a sign in the dictionary if you already know what it means,” says Carol Neidle, a College of Arts and Sciences professor of linguistics and coordinator of the Undergraduate Linguistics Program at BU.

Neidle and Stanley Sclaroff, a professor and chair of the CAS computer science department, hope that before long it will be possible to demonstrate signs in front of a camera and have a computer look up their meaning. 

With a three-year, $900,000 National Science Foundation grant, the two BU professors are collaborating with Vassilis Athitsos (GRS’06), an assistant professor of computer science and engineering at the University of Texas at Arlington, on computer technology that could identify a sign based on its visual properties.

One of the project’s aims is to apply such technology to multimedia dictionaries, which would enable signers to access definitions, etymology, and examples of usage, all in ASL. The researchers also hope to develop a way to perform Google-style searches, dubbed “sloogle,” over recorded databases of ASL literature, lore, educational courses, video conversations, and performances.
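The search idea can be illustrated with a minimal sketch: once the signs in recorded videos have been recognized and labeled, an inverted index can map each sign to every place it appears. The file names, glosses, and index structure below are purely illustrative assumptions, not part of the researchers’ actual system.

```python
from collections import defaultdict

# Hypothetical "sloogle"-style search sketch: map each recognized sign
# (identified by its English-based gloss) to the videos and positions
# where it occurs. All data here is made up for illustration.
recognized = {
    "lecture1.mov": ["HELLO", "TODAY", "TOPIC", "LINGUISTICS"],
    "story3.mov": ["LONG-AGO", "DEAF", "SCHOOL", "HELLO"],
}

# Build the inverted index: gloss -> list of (video, position) pairs.
index = defaultdict(list)
for video, signs in recognized.items():
    for position, gloss in enumerate(signs):
        index[gloss].append((video, position))

def search(gloss):
    """Return every (video, position) where the sign occurs."""
    return index.get(gloss, [])

print(search("HELLO"))  # → [('lecture1.mov', 0), ('story3.mov', 3)]
```

A real system would face the much harder upstream problem the article describes: recognizing the signs in the video in the first place.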

“Computer recognition technology for ASL is a really hard problem,” Neidle says. “But this grant allows us to bring together several different pieces of the puzzle we’ve already been working on.”

Computer vision technology is a passion for Sclaroff and his team. He and Athitsos are developing techniques to allow a computer to identify signs from video clips, an extension of Athitsos’ dissertation research on hand pose recognition.
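In broad strokes, identifying a sign from video means extracting visual features from the query clip and comparing them against every entry in a labeled lexicon. The toy sketch below assumes signs are reduced to short sequences of 2-D hand positions and matched by simple Euclidean distance; real systems use far richer features (hand shape, orientation, motion, facial expression) and alignment methods such as dynamic time warping. The trajectories and glosses are invented for illustration.

```python
import math

# Hypothetical lexicon: each sign is represented as a sequence of 2-D
# hand positions extracted from video frames. Toy data, not real ASL.
LEXICON = {
    "HELLO": [(0.2, 0.8), (0.4, 0.8), (0.6, 0.8)],
    "THANK-YOU": [(0.5, 0.7), (0.5, 0.5), (0.5, 0.3)],
}

def trajectory_distance(a, b):
    """Sum of Euclidean distances between corresponding frames.
    Assumes equal-length trajectories for simplicity."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def identify_sign(query):
    """Return the gloss of the lexicon entry closest to the query clip."""
    return min(LEXICON, key=lambda g: trajectory_distance(query, LEXICON[g]))

query = [(0.21, 0.79), (0.41, 0.81), (0.59, 0.80)]  # a noisy "HELLO"
print(identify_sign(query))  # → HELLO
```

The nearest-neighbor framing also shows why a comprehensive lexicon matters: the system can only return signs it has examples of.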

The first step is establishing a comprehensive ASL video lexicon — 3,000 to 5,000 signs. Native ASL users have already logged countless hours in front of a camera at BU’s National Center for Sign Language and Gesture Resources. Neidle and Sclaroff have developed a computer program called SignStream, which displays videos of ASL signing from multiple angles for linguistic annotation. 
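A lexicon entry of this kind has to tie together multiple camera angles and frame-aligned linguistic annotations. The structure below is a hypothetical sketch of what one entry might hold; SignStream’s actual data model is not shown here, and all field names and values are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one entry in a multi-angle ASL video lexicon.
@dataclass
class SignEntry:
    gloss: str                # English-based label used by annotators
    video_files: dict = field(default_factory=dict)  # camera angle -> file path
    annotations: list = field(default_factory=list)  # (start_frame, end_frame, tier, value)

entry = SignEntry(gloss="BOOK")
entry.video_files["front"] = "book_front.mov"   # illustrative paths
entry.video_files["side"] = "book_side.mov"
# Frame-aligned annotation on one linguistic tier (invented values):
entry.annotations.append((12, 47, "dominant-hand", "flat-B handshape"))

print(entry.gloss, len(entry.video_files))  # → BOOK 2
```

Keeping annotations frame-aligned is what lets linguists’ labels serve directly as training and test data for the vision algorithms.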

Analyzing and categorizing such data is painstaking. But the video inventory has been invaluable for collaboration between linguists and computer scientists from within and beyond BU, enabling testing and refinement of computer vision and recognition algorithms.

“Eventually, we’d like to be able to do automatic translations of ASL,” Sclaroff says. “But that’s down the road.”

Neidle says that the ASL look-up and search capabilities have important implications for improving education, opportunities, and access for the deaf and their families. “Ninety percent of deaf children are born into hearing families,” she says. “This would allow parents to look up a sign produced by their deaf child.”

It’s estimated that up to two million people use ASL as their primary language. Neidle has been a leading researcher of the linguistic properties of ASL, which until a few decades ago wasn’t even considered a true language, so there is still much to discover about its linguistic structure, she says.

Advances in computer vision and recognition research also have potential applications in national security — analyzing facial expressions for deception, for example — or for the severely handicapped, who could use personalized gestures to communicate, access the Internet, and control devices in their home, says Sclaroff.