DH09: opening plenary by Lev Manovich
I’m in College Park, MD, at Digital Humanities 09, the annual international digital humanities conference put on by the Alliance for Digital Humanities Organizations. It’s my home conference; I first attended it in 2001, and have been in love with this crowd ever since. It’s the most fantastically supportive bunch of people I’ve found in academe. More: this year’s conference is hosted by the Maryland Institute for Technology in the Humanities, which is celebrating its tenth anniversary this year, and the mood so far is downright festive.
Lev Manovich is a humanities researcher and artist at UCSD who studies cultural analytics. What does this mean? He’s about to talk to us about analysis and visualization of large [amounts of] cultural data. Please forgive what follows for its choppiness; live blogging is like that. Anything in quotation marks is likely quoted directly from a slide, or (more rarely) a direct quote of what Manovich says, as close as I’m able to replicate it in real time. If you’re interested, you can also follow the conference on Twitter at #dh09.
So, cultural data sets have exploded recently, in both number and size. Museums and libraries are putting their collections online, and individuals are producing massive amounts of data that is of great cultural interest. Tools for the study of these data have followed close behind; there are now great tools for visualizing very large data sets.
Manovich talks about a global “cultural brain.” He compares DH with neuroscience, which employs neural networks and neural maps, and also fMRI of the neural activity of the whole brain. We’re now doing with the humanities what neuroscientists were doing 50 years ago: recording and analyzing activity of a single cell or a small cell population. “We need to start tracking, analyzing and visualizing larger cultural structures.”
Another, often overlooked, new source of mass data: cultural globalization. Coroflot.com: art portfolios online. From all over the world! The cumulative number of new art biennials has spiked sharply between 1990 and today.
A new science of culture? “Until now, the study of human beings/cultural processes relied on two types of data: shallow data about many people/objects (statistics, sociology) or deep data about a few people/objects (psychology, … ethnography, [etc]). […] We can now collect detailed data about very large numbers of people/objects/cultural processes. We no longer have to choose between size and depth.” (Emphasis speaker’s.)
Cultural analytics: the term they use at the UCSD Software Studies Initiative lab, where Manovich works, to refer to the analysis of large humanistic data sets. Parallel to, among other terms, visual analytics: "the science of analytical reasoning facilitated by interactive visual interfaces."
What kinds of interfaces would we want for interactively exploring large data sets? Platform for Cultural Analytics research environment: HIPerSpace. 287 megapixels. If AT&T can have a control center with a wall full of screens, why can’t we? Today, though of course still with some substantial investment in hardware, we can. Well, at least it’s in development.
Every technology has an ideology behind it. HIPerSpace encourages people to think of every instance of culture as part of a global network, interwoven with many other instances of culture.
They’re creating software that in turn creates graphs composed of the actual media objects themselves. For example, all of Mark Rothko’s oeuvre.
You can visualize most anything. Manovich’s students took NBC news shows between 1960 and 2008 and built a chronological graph of frames, based on their dominant colors. You can see not only when black and white TV changed to color, but also when, for example, computer graphics were introduced into newscasts [and news became more entertainment].
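The basic move behind that newscast graph can be sketched in a few lines: reduce each frame to its dominant color, then lay the colors out in broadcast order. This is a hypothetical illustration, not the students' actual pipeline; the frame data here is synthetic.

```python
import numpy as np

def dominant_color(frame: np.ndarray) -> tuple:
    """Most frequent coarsely quantized RGB color in an HxWx3 uint8 frame."""
    quantized = (frame // 32) * 32                  # quantize to 8 levels per channel
    colors, counts = np.unique(quantized.reshape(-1, 3), axis=0,
                               return_counts=True)
    return tuple(int(c) for c in colors[counts.argmax()])

# toy frames: a gray one (black-and-white era) and a mostly red one
bw_frame = np.full((4, 4, 3), 130, dtype=np.uint8)
red_frame = np.zeros((4, 4, 3), dtype=np.uint8)
red_frame[..., 0] = 200

# chronological "timeline" of dominant colors, one entry per frame
timeline = [dominant_color(f) for f in (bw_frame, red_frame)]
print(timeline)  # [(128, 128, 128), (192, 0, 0)]
```

Plotted as a long strip, a timeline like this makes shifts such as the black-and-white-to-color transition visible at a glance.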
Surprise guest Jeremy Douglass joins Manovich in the plenary! Jeremy is a postdoc researcher working with Manovich, and is presenting his research on “big analyses” of games, gameplay, feature films, web comics and paintings.
A few tools and examples around the special case of webcomics: “juxtaposed images”, patterns in templating and reuse (see a softer world, which templates frames and reuses photos), dinosaur comics, which does the same (different text for the same images, hundreds of times over). Jeremy studies patterns like that.
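The post doesn't say how that photo reuse gets detected; one common, simple technique is a perceptual "average hash," sketched here on synthetic images (this is an assumed approach for illustration, not necessarily what Douglass uses).

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> int:
    """Shrink to size x size block means, threshold at the mean, pack into bits."""
    h, w = gray.shape
    gray = gray[:h - h % size, :w - w % size]       # crop to a multiple of size
    blocks = gray.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, (64, 64)).astype(float)
brightened = photo + 20.0                           # same photo, uniformly lighter

# the threshold is relative to the image's own mean, so a uniform
# brightness shift leaves every bit of the hash unchanged
print(hamming(average_hash(photo), average_hash(brightened)))  # 0
```

A small Hamming distance between two strips' image hashes would flag a likely template or photo reuse, exactly the kind of pattern described above.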
Jeremy talks about Warren Ellis’ Freakangels. We could quantize pages by panel count on each page. Or we could treat all the pages as a single sequence of images, auto-detect each frame on each page and number them all in order. We could quantize by the aspect ratio of the frames: one beat for a quarter-page frame, two beats for half-page frames. All kinds of “DNA sequences” could be constructed, and then we could try to extract information. Jeremy is working on software that auto-detects information and then, in Mac OS X, inserts it into image files as label and content information. Poof, Finder becomes a visual browser.
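The beat-quantization idea can be sketched minimally: map each detected panel to beats by its share of the page. The panel boxes below are made up; in practice the auto-detection step Jeremy describes would supply them.

```python
def beats(panel_area: float, page_area: float) -> int:
    """One beat for up to a quarter page, two for a half page or more."""
    frac = panel_area / page_area
    return 2 if frac >= 0.5 else 1

# toy page: panel bounding boxes as (width, height) on an 800x1200 page
page_area = 800 * 1200
panels = [(800, 600), (400, 300), (400, 300), (800, 300)]
sequence = [beats(w * h, page_area) for w, h in panels]
print(sequence)  # [2, 1, 1, 1]
```

Concatenated across a whole run of pages, sequences like this are the "DNA" one could then mine for pacing patterns.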
When we extract data like that from images or video, every frame is a data point. With gameplay, you can extract keystrokes or joystick moves to get a sense of the gameplay rhythm, or “what types of behavior the player is conducting.”
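As a toy version of the gameplay-rhythm idea: given keystroke timestamps extracted from a play session (the numbers here are invented), the inter-event intervals give a crude rhythm, and long gaps separate bursts of activity.

```python
# made-up keystroke timestamps from one play session, in seconds
timestamps = [0.0, 0.2, 0.25, 1.1, 1.15, 1.2, 3.0]
intervals = [round(b - a, 2) for a, b in zip(timestamps, timestamps[1:])]
print(intervals)  # [0.2, 0.05, 0.85, 0.05, 0.05, 1.8]

# split into bursts wherever the gap exceeds half a second
bursts, current = [], [timestamps[0]]
for t, gap in zip(timestamps[1:], intervals):
    if gap > 0.5:
        bursts.append(current)
        current = []
    current.append(t)
bursts.append(current)
print(len(bursts))  # 3 bursts of activity
```

Burst counts and interval distributions are one concrete way to characterize "what types of behavior the player is conducting."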
Manovich comes back to finish up, talking about theoretical issues around cultural data mining:
– Culture is not the same as cultural artifacts. How can we auto-analyze context?
– Statistical paradigm (using a sample) vs. data mining paradigm (analyzing the complete population). Modernity vs. Software Society.
– Pattern as a new epistemological object.
– New digital divide — between social and cultural activities/people which leave digital traces and those that do not.
– From a small number of genres to a multi-dimensional space of features, where we can look for clusters and patterns.
What do we want to do with all this? What new biases must we be aware of? What kinds of visualizations might be interesting to enact?