DH09 Wednesday, session 3: tools for text analysis

June 24th, 2009

Oh, this’ll be good.

Geoffrey Rockwell presents a paper he wrote with Stefan Sinclair, Piotr Organisciak and Stan Ruecker, “Ubiquitous Text Analysis.”

How do you connect texts with appropriate tools? The recent Tools for Data-Driven Scholarship workshop concluded that:

1. Tools need to work better with other tools.
2. Tools need to connect better with content and use content in a more robust way.
3. Tools need better mechanisms for discoverability by the scholars who need them.

Humanists are used to looking at documents, not to using them as tokens to be processed by tools. A pipe-and-flow diagram is too abstract for most humanists. The TAPoR project workbench is better than a diagram, but still too abstract for some people, who didn’t quite get the paradigm. So they surveyed users, and added an “analyze this” view: text on one side, tools (including suggested compatible tools) on the other. It does the same thing as the workbench, but allows you to view the text at the same time as you view the processed data.

But you still had to create an account and choose a text to study. So they’ve abandoned the user-account model, and are thinking more about ubiquitous text analysis.

From the beginning, TAPoR was conceived as a broker for tools as text services. They provided information for users interested in embedding tools into their projects. It also occurred to them to provide custom HTML: for example, a collapsible, discreet toolbar embedded into an online project. This is documented and other people can use it, but it tends to conflict with other CSS and JavaScript, so it’s not so easy to embed into a new project.
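To make the “tools as text services” idea concrete, here’s a minimal sketch of what calling such a service might look like. The endpoint URL and parameter name below are my own assumptions for illustration, not TAPoR’s actual API:

```python
# Minimal sketch of calling a text-analysis web service.
# The endpoint URL and the "text" parameter are hypothetical,
# for illustration only -- not TAPoR's actual API.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://example.org/text-services/wordfreq"  # hypothetical

def word_frequencies(text):
    """POST a text to the (hypothetical) service; parse the JSON reply."""
    data = urllib.parse.urlencode({"text": text}).encode("utf-8")
    with urllib.request.urlopen(ENDPOINT, data=data) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(word_frequencies("To be or not to be, that is the question."))
```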

Now they’re developing a YouTube-inspired application, originally called FlashTAT (Flash Text Analysis Tool) but now called TAToo (temporary tattoos forthcoming). It doesn’t conflict with existing JS/CSS, and the interface is simple and encourages you to explore further. It’s easily embeddable, though not extensively tested.

You can change some things: the endpoint URL (pointed at your own server, if you don’t want to depend on TAPoR’s), links to new CSS stylesheets so you can make the tool look like the rest of your project, and the root directory for specific text analysis.

Next tool Geoffrey presents is a Facebook plugin called Digital Texts 2.0. Social bibliography: what you’re reading, what other people are reading, and so on.

Another new tool: Voyeur. A set of tools embedded right into the prose of an essay. Not so much for content providers as for research authors.

So what do we learn? Tools are discoverable by users, but perhaps in unexpected ways. The authors’ approach is to embed tools right into the text (see Voyeur, above). That way they’re hard to miss; on the other hand, they don’t connect with each other very well.

Embedding these tools into larger projects with a lot of data presents challenges that are in the process of being addressed. One way to deal with that is for people to install the tools on their own servers, but then we have a code forking problem.

Embedded tools, especially opaque ones like those that use Flash, are difficult to customize, particularly in terms of graphic design, and don’t blend into their host sites.

Embedding tools is well and good, but tools need to be clearly distinguished from the original object, in part so that they don’t confuse other tools.

Most difficult challenge ahead: good, productive interaction between the cultures of digital libraries (which are about services [and implied stability of content and long-term maintenance]) and digital tool development (which is about experimenting and innovation [and implied changeability of content, and moving on to other projects]). One solution might be to hand the code of an innovative project over to a library, or better yet, to an entity that’s set up to run cyberinfrastructure in the long term.


Geoffrey Rockwell presents (again, joined by his co-authors) a paper written with Patrick Juola, Stefan Sinclair and Stephen Downie, “T-Rex: A Text Analysis Research Evaluation eXchange.”

Problem of tools: we’re reinventing them over and over. No one disputes the need for reliable tools that enable us to ask interesting questions about content. The problem is that we keep asking for The Perfect Tool that will scratch our itch, but The Perfect Tool isn’t there. Nor is it forthcoming!

If a problem doesn’t go away, it’s a fact. Suppose tool development were a fact of DH research; what then?

Hermeneutical assumptions: the development of analytical tools is interpretative; it is practiced iteratively; it is not service work, but it can inform such work with innovation. How, then, may we think through analytics for a community?

The authors propose a way we can reinvent tools together through evaluation and exchange.

First round of the TADA research eXchange competition (T-REX), spring 2008: the idea was to start developing a research conversation where tools could be documented, evaluated, compared and researched. Categories in the competition: best new web-based tool; idea for a web-based tool; idea for improving a current web-based tool; idea for improving the interface of the TAPoR Portal; and experiment in text analysis using high-performance computing.

They got eleven submissions, evaluated by a panel of three judges. Seven of the submissions were recognized as contributing to the imagination of the field.

What would a mature community like that look like? Stephen Downie talks about MIREX (Music Information Retrieval Evaluation eXchange). It began in 2005, meant to be an evaluation exchange and not a contest. Its tasks are defined by community debate. Data sets are collected and/or donated to MIREX; participants submit their code to IMIRSEL, and then meet at ISMIR to discuss results. Data is then posted to a wiki. Also, MIREX has a dedicated and mandatory half-day poster session at ISMIR.

The MIREX model: they have a standardized set of queries/tasks; standardized collections; and standardized evaluations of results. Since 2005, when MIREX was started with a suite of 10 tasks, they’ve grown healthily to 18 tasks in 2008, with 168 processing runs. That’s a lot of music information retrieval, used for a lot of research.
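The pattern itself is simple enough to sketch: every submitted system runs over the same collection and is scored with the same metric. A toy illustration of that pattern (the task, data and systems below are invented; this is not MIREX’s actual harness):

```python
# Toy sketch of the evaluation-exchange pattern: a standardized task,
# a standardized collection, and a standardized metric. All data and
# "systems" here are invented for illustration; this is not MIREX code.

def accuracy(predictions, ground_truth):
    """Standardized metric: fraction of items labeled correctly."""
    correct = sum(1 for item, label in ground_truth.items()
                  if predictions.get(item) == label)
    return correct / len(ground_truth)

# Standardized collection with known answers (say, artist per track).
ground_truth = {"track01": "dylan", "track02": "baez", "track03": "dylan"}

# Each participant submits a system mapping an item to a predicted label.
submissions = {
    "system_a": lambda item: "dylan",  # a naive baseline
    "system_b": lambda item: "baez" if item == "track02" else "dylan",
}

for name, system in submissions.items():
    predictions = {item: system(item) for item in ground_truth}
    print(name, accuracy(predictions, ground_truth))
```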

Tasks are stuff like audio artist identification, audio cover song identification (I give you Dylan, can you find me covers of his stuff?), score following, symbolic key finding, many others.

Ongoing issues:

- People still have a contest mentality, even though it’s not a contest. So they keep coming up with new metrics that will help them score best in some category. As a result, the data warps.
- Buggy code.
- Task definitions left to the last minute; expediency over the interesting/meaningful.
- Lack of qualitative evaluations.
- Need for stronger community leadership in defining tasks.

In the end, this is about people, not algorithms, so you have to get people to play with the tool.

Patrick Juola up next, talking about T-REX 2009. This year T-REX hopes to follow the MIREX format more closely; get community input on challenge problems; create a testbed for evaluation of working systems; and keep an open-ended forum for ideas. Sounds good.

Tasks: identify challenge problems; develop a framework for evaluation; establish community buy-in; create and distribute results (DH10?); and hold a kaffeeklatsch for gossip and discussion.

Challenges: critical mass of participants; clear challenge definitions; platform independence; forum availability.

Timeline: Sept-Oct: discussion; Nov: challenges complete; Feb: submissions; Mar-Apr: evaluation; May: results; June: posters.

If it’s not clear by now, they want YOU to participate. Won’t you? Contact Patrick Juola, use the wiki, go to the 2009 T-Rex website. Come play, they say.


Piotr Organisciak presenting a paper he co-wrote with Geoffrey Rockwell and Stan Ruecker, as well as Susan Brown and Kamal Ranaweera (are you seeing a collaborative authorship trend in this session?): “Mashing Texts: Supporting collections level text analysis.”

Piotr is here to inform us, not impress us. Got it.

At the 2005 Summit on Digital Tools in the Humanities, we asked: how can we easily locate documents (in multiple formats and multiple media), find specific information and patterns across large numbers of differently formatted documents, and share our results with others in a range of scholarly disciplines and social networks?

JiTR [pron. jitter]: Just-in-Time Research. It’s a Mashing Texts prototype, which allows you to manage collections of digital items and run tools to gather, clean or analyze them.
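Conceptually, a collection of that sort is a bag of digital items with tools applied over it in stages. A toy sketch of the idea (the class and method names are mine, not JiTR’s):

```python
# Toy model of a JiTR-style collection: digital items plus tools run
# over them to gather, clean, or analyze. All names here are invented
# for illustration; this is not JiTR's actual design.

class Collection:
    def __init__(self, name):
        self.name = name
        self.items = []  # raw texts (or other digital objects)

    def gather(self, source):
        """Auto-populate the collection from some source of texts."""
        self.items.extend(source)

    def clean(self, transform):
        """Apply a cleaning step (e.g., strip markup) to every item."""
        self.items = [transform(item) for item in self.items]

    def analyze(self, tool):
        """Run an analysis tool over each item and return the results."""
        return [tool(item) for item in self.items]

# Gather, clean, analyze -- just in time.
coll = Collection("shared-collection")
coll.gather(["<p>First text.</p>", "<p>Second   text here.</p>"])
coll.clean(lambda t: " ".join(t.replace("<p>", "").replace("</p>", "").split()))
print(coll.analyze(lambda t: len(t.split())))  # word count per item
```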

Texts are wrapped into collections [much like Omeka]. They’re operating on a Personas and Scenarios model: personas are imagined possible users that “act as stand-ins for real users”. If the end goal is value for users, why not begin with them? The main deliverable, at this time, is conceptual.

The model itself goes like this: personas → scenarios [actions they'd want to achieve; tool features connected to contexts] → wireframing [layout hypotheses without design] → mockups [adding design] → coding [translation of mockups into a product].

Example of a scenario: Kate, an independent researcher, creates a shared collection; runs some processes to auto-populate it; labels new items and annotates them; and is emailed whenever someone adds new stuff.

[demo demo demo. I wonder if Piotr's slides will be online.]

Implications of this project:
- Ecological fit (compatibility with existing tools and innovations) is important.
- Need to reconcile the editing and the just-in-time collection crowds.


One Comment on DH09 Wednesday, session 3: tools for text analysis

  • Thanks for the fantastic summaries! I’ve been at many of the sessions you’ve written up, but your summaries are typically much more complete than my notes. I’m impressed by how fast you get these up!
