Data Access Interface (DAI) 2 - Credits and Acknowledgments

 

1.

For details of the development of the Data Access Interface (DAI) and of the National Center for Sign Language and Gesture Resources (NCSLGR) corpus, to which it was originally designed to provide access, see:

         http://www.bu.edu/asllrp/data-credits.html#credits.

Christian Vogler was responsible for the first implementation of the DAI.

See also Carol Neidle and Christian Vogler, “A New Web Interface to Facilitate Access to Corpora: Development of the ASLLRP Data Access Interface,” 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey, May 27, 2012:

         http://www.bu.edu/linguistics/UG/LREC2012/LREC-dai-final.pdf 

2.

The DAI was extended by Jessy Sheng at the Rutgers Laboratory of Computer Science Research (LCSR), under the supervision of Augustine Opoku and in collaboration with Carol Neidle, to provide access to an additional corpus of citation-form signs, the American Sign Language Lexicon Video Dataset (ASLLVD); see (3) below.

3.

For information about the creation of the ASLLVD, see  

Neidle, C., Thangali, A., and Sclaroff, S. (2012). Challenges in Development of the American Sign Language Lexicon Video Dataset (ASLLVD) Corpus. Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon. LREC 2012, Istanbul, Turkey. May 2012. https://open.bu.edu/handle/2144/31899

The project began as part of the NSF-funded project #0705749 on “Large Lexicon Gesture Representation, Recognition, and Retrieval” (Stan Sclaroff, Carol Neidle, and Vassilis Athitsos, PIs). We are grateful to Ben Bahan, Rachel Benedict, Naomi Berlove Caselli, Elizabeth Cassidy, Lana Cook, Jaimee DiMarco, Danny Ferro, Alix Kraminitz, Joan Nash, Indya Oliver, Caelan Pacelli, Braden Painter, Chrisann Papera, Tyler Richard, Donna Riggle, Tory Sampson, Dana Schlang, Jessica Scott, Alexandra Stefan, Jon Suen, and Iryna Zhuravlova for their contributions. Special acknowledgment is due to Ashwin Thangali, who made enormous contributions to the development of tools for classification of the data. Linguistic annotations and data analysis were carried out at BU under the supervision of Carol Neidle, with the participation of large numbers of BU students; see (6) below. Funding for the continuation of linguistic annotation and classification of the data for ASLLVD development was provided by NSF grants #0855065 and #0964385.

4.

A new DAI, “DAI 2,” with many new features, including the ability to display handshapes from SignStream® 3 annotations, was implemented by Augustine Opoku, in collaboration with Carol Neidle.

It provides access to a newer corpus, the ASLLRP SignStream® 3 corpus. For their contributions, we are especially grateful to Ben Bahan, Carey Ballard, Cory Behm, Rachel Benedict, Justin Bergeron, Tess Dekker, Amanda Gaber, Graham Grail, Chelsea Hammond, Ryan Hevia, Kelsey Koenigs, Alix Kraminitz, Corbin Kuntze, Carly Levine, Rebecca Lopez, Jonathan McMillan, Travis Nguyen, Indya-Loreal Oliver, Caelan Pacelli, Braden Painter, Chrisann Papera, Emma Preston, Tyler Richard, Donna Riggle, Tory Sampson, Norma Bowers Tourangeau, Blaze Travis, Amelia Wisniewski-Barker, and Isabel Zehner, as well as the other participants in the American Sign Language Linguistic Research Project.

5.

Sign Bank: The DAI 2 also includes Sign Bank functionality, likewise implemented by Augustine Opoku. The initial Sign Bank was built from the ASLLVD data collection. However, the DAI 2 now incorporates data from our continuous signing corpora, and it will continue to expand significantly. The Sign Bank in the DAI 2 can also be accessed from within SignStream® 3, to improve the efficiency and consistency of annotations.

The expanding functionalities of the DAI are described here:

Neidle, C. and Opoku, A. (2020). A User's Guide to the American Sign Language Linguistic Research Project (ASLLRP) Data Access Interface (DAI) 2 — Version 2. ASLLRP Project Report No. 18, Boston University, Boston, MA. http://www.bu.edu/asllrp/rpt18/asllrp18.pdf

See also http://www.bu.edu/asllrp/New-features-DAI2.pdf

6.

For information about SignStream® 3 (the annotation software we have developed, which is shared publicly and which has been used for the annotation of the corpora shared here) and about corpus development, see:

         http://www.bu.edu/asllrp/SignStream/3/.

7.

The RIT data shared through the ASLLRP Sign Bank was collected by researchers at Rochester Institute of Technology under the supervision of Matt Huenerfauth as part of NSF grant IIS-1763569. Thanks to Abraham Glasser, Ben Leyer, Saad Hassan, and Sarah Morgenthal for their roles in helping with data collection and annotation.

8.

The DawnSignPress data sets are shown here courtesy of DawnSignPress, which has granted permission to view these data on this site but has restricted the use of its data, as indicated in the statement of terms of use on the DAI site.


For help with linguistic annotations, analysis, and software testing in recent years, we are especially grateful to Carey Ballard and Indya-Loreal Oliver.

Further descriptions of the development of these datasets and of the functionalities of the DAI 2 website, which make it possible to search, view, and download data from our continuous signing corpora and our ASLLRP Sign Bank, are available here:

Neidle, C., Opoku, A., Dimitriadis, G., and Metaxas, D. (2018). NEW Shared & Interconnected ASL Resources: SignStream® 3 Software; DAI 2 for Web Access to Linguistically Annotated Video Corpora; and a Sign Bank. Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community. LREC 2018, Miyazaki, Japan. May 2018. https://open.bu.edu/handle/2144/30047

Neidle, C., Opoku, A., and Metaxas, D. (2022). ASL Video Corpora & Sign Bank: Resources Available through the American Sign Language Linguistic Research Project (ASLLRP). arXiv:2201.07899. https://arxiv.org/abs/2201.07899

Most recently, functionality to enable download of Sign Bank data has been added.


We also gratefully acknowledge support for these projects from the National Science Foundation.