Tagged: events

DH09 Tuesday: Christine Borgman keynote

June 23rd, 2009 in DigiLib BLog 0 comments

[note to self: read Gibson’s Spook Country.]
[OK, I’ll admit: I’m tired and punchy. Hopefully, I’ll do some justice to Borgman’s talk.]

“Scholarship in the Digital Age: Blurring the boundaries between the sciences and the humanities.”

Borgman’s Scholarship in the Digital Age: Information, Infrastructure, and the Internet was published by MIT Press in 2007. Well received and worth reading. What’re you waiting for? Here it is! Encourage your local library to get a copy! And now, on to the talk.



DH09 Tuesday, session 4: collaboration, XML geekDOM, collaborative modeling

June 23rd, 2009 in DigiLib BLog 0 comments

Oooh, late to the Julia Flanders talk, “Dissent and Collaboration.” I’ll do what I can.

We have an implicit contract with future scholars, who need to know how we did what we did.

Is there a conduit through which collaborative negotiation can take place? There’s data itself, potentially a schema for the data; hopefully documentation of both; and an implicit agreement (social contract) to use a common language, and to use it according to its accepted usage.

These agreements, in a human world where scholarly expression has a high value and standards are still being developed, aren’t enough to ensure perfect collaboration. So what we need is not a common language but a common mechanism for the creation of such a language. TEI provides this: it’s a mechanism not for collaboration but for the creation of a common language.
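
[Not from the talk, just a note to myself to make the “schema as contract” idea concrete: a minimal sketch in Python with lxml. The tiny Relax NG fragment is a made-up stand-in for a real project schema (a TEI customization, say); the element and attribute names are mine, purely illustrative. The point is that once collaborators agree on a schema, conformance to the common language becomes something a machine can check.]

# A schema as the machine-checkable piece of the social contract.
# The Relax NG fragment is a toy stand-in; names here are illustrative, not from any real project.
from lxml import etree

# The agreed "common language": a <quote> element must carry a @source attribute.
schema = etree.RelaxNG(etree.fromstring("""
<element name="quote" xmlns="http://relaxng.org/ns/structure/1.0">
  <attribute name="source"/>
  <text/>
</element>
"""))

conforming = etree.fromstring('<quote source="Flanders 2009">Dissent and collaboration.</quote>')
nonconforming = etree.fromstring('<quote>Dissent and collaboration.</quote>')

print(schema.validate(conforming))     # True: the encoding follows the agreed usage
print(schema.validate(nonconforming))  # False: the implicit contract was broken (no @source)

(A real TEI project would express its customization as an ODD and generate the Relax NG from that, but the mechanism is the same.)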



DH09 Tuesday, session 3: Use Cases Driving the Tool Development in the MONK Project

June 23rd, 2009 in DigiLib BLog 0 comments

MONK Project is “a digital environment designed to help humanities scholars discover and analyze patterns in the texts they study. The MONK project has been generously supported by the Andrew W. Mellon Foundation, from 2007-2009. All code produced by the project is open source. MONK has a publicly available instance with texts contributed by Indiana University, the University of North Carolina at Chapel Hill, the University of Virginia, and Martin Mueller at Northwestern University.” So now you have context.



DH09 Tuesday, session 2: Supporting the Digital Humanities: Putting the Jigsaw Together

June 23rd, 2009 in DigiLib BLog 1 comment

[OK, not putting information copied from slides in quotes: no time. Thank you, panelists, for concise wording in your slides! If you want specific attribution, let me know.]

The big questions to be addressed by the panelists, as Martin Wynne proposes in his introductory remarks:

1. What specific problems have you identified, and how are you seeking to address them?
2. What services, if any, will you provide?
3. How might you link with other related initiatives?
4. What are the further elements of the jigsaw puzzle which are needed to create a coordinated and more complete research infrastructure?


DH09 Tuesday, session 1: Preserving Virtual Worlds: Models & Community

June 23rd, 2009 in DigiLib BLog, Preservation 2 comments

[Again, live blogging with all its pitfalls and disclaimers. I almost certainly won’t get most or all of the live discussion, in particular; if you remember the Q&As, please put those in comments.]

This panel is put on by members of the Preserving Virtual Worlds project, a multi-institutional collaboration “funded by the Preserving Creative America initiative under the National Digital Information Infrastructure and Preservation Program (NDIIPP) administered by the Library of Congress.” (quoted from the PVW site)


DH09: opening plenary by Lev Manovich

June 22nd, 2009 in DigiLib BLog 1 comment

I’m in College Park, MD, at Digital Humanities 09, the annual international digital humanities conference put on by the Alliance of Digital Humanities Organizations. It’s my home conference; I first attended it in 2001, and have been in love with this crowd ever since. It’s the most fantastically supportive bunch of people I’ve found in academe. What’s more, this year’s conference is hosted by the Maryland Institute for Technology in the Humanities, which is celebrating its tenth anniversary, and the mood so far is downright festive.



Events: Zotero trainers workshop at Emory this July

April 28th, 2009 in DigiLib BLog 0 comments

From the Zotero blog:

We are now accepting applications for the second Zotero trainers workshop, to be held July 30-31st at Emory University in Atlanta. At this info-packed and fun-filled two-day event, participants will acquire a solid understanding of Zotero’s capabilities and how those capabilities can best meet their users’ needs. Beyond acquiring a detailed understanding of the program, participants will learn: best-practices for demo-ing and supporting Zotero at their institution; approaches for developing institution-specific documentation; and steps for migrating user data to and from other research management tools.

Cost: $350. Application deadline: May 31. No more than two people from a single institution will be accepted. More details at the link above.


More from Chapel Hill: CHAT Festival, 2010

April 9th, 2009 in DigiLib BLog 0 comments

From Cathy Davidson’s HASTAC blog:

The Institute for the Arts and Humanities is coordinating a state-wide festival to showcase Collaborations: Humanities, Arts & Technology (CHAT) in February 2010.

CHAT will be a first step toward making UNC, and the Triangle area, a nationally recognized leader in the use of new technologies for collaborative scholarly research and education. This 10-day festival will be an opportunity for local and national communities to witness and participate in ongoing projects by artists, performers, scholars and technologists. We invite you to engage in new media and explore collaborative process through the use of technology.

The festival’s site is here.


Limited spaces still available at the DigCCurr Professional Institute at Chapel Hill

April 9th, 2009 in DigiLib BLog 0 comments

Got this through a mailing list:

DigCCurr Professional Institute: Curation Practices for the Digital Object Lifecycle

June 21-26, 2009 & January 6-7, 2010 (One price for two sessions)

University of North Carolina at Chapel Hill

Visit the institute’s site for more information and to register.

The institute consists of one five-day session in June 2009 and a two-day follow-up session in January 2010. Each day of the June session will include lectures, discussion and a hands-on “lab” component. A course pack and a private, online discussion space will be provided to supplement learning and application of the material. An opening reception dinner on Sunday, break time treats and coffee, and a dinner on Thursday will also be included.

This institute is designed to foster skills, knowledge and community-building among professionals responsible for the curation of digital materials.


An article, a CFP, and a useful site

April 7th, 2009 in DigiLib BLog 0 comments

The article, from the Public Library of Science, is this: “Clickstream Data Yields High-Resolution Maps of Science.” The authors collected “nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia,” says the abstract. From those interactions they built maps of science that “provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.” The most interesting illustration in this context is Figure 5: check out that big white and yellow cluster in the center. It’s worth the load time to view the larger image.

The CFP is for the next annual meeting of the Text Encoding Initiative Consortium. This year’s theme is text encoding in the era of mass digitization. The first three suggested topics are conceptually larger than TEI, and are intriguing: In-depth encoding vs. mass digitization; Is text encoding sustainable?; Is text encoding scalable? People are bound to talk about crowdsourcing metadata, which I think is the only hope we have of scaling semantic encoding. (The quality control issues, which are the first concern that usually arises when people talk about collaborative knowledge work, are real. But there are ways to deal with them, and data that can be corrected may well be better than no data at all.)

The site I came across today is FairShare. It allows people to track how their online publications are used and/or remixed. Haven’t played with it yet, but it looks promising, particularly in the context of an institutional repository. Imagine a researcher depositing an article, pointing FairShare at it and seeing others respond to her work. Just the psychological boost from that is valuable in spurring future work.
