Tagged: best practices

DH09 Tuesday, session 4: collaboration, XML geekDOM, collaborative modeling

June 23rd, 2009 in DigiLib BLog

Oooh, late to the Julia Flanders talk, “Dissent and Collaboration.” I’ll do what I can.

We have an implicit contract with future scholars, who need to know how we did what we did.

Is there a conduit through which collaborative negotiation can take place? There’s data itself, potentially a schema for the data; hopefully documentation of both; and an implicit agreement (social contract) to use a common language, and to use it according to its accepted usage.

These agreements, in a human world where scholarly expression has a high value and standards are still being developed, aren’t enough to ensure perfect collaboration. So what we need is not a common language but a common mechanism for the creation of such a language. TEI provides this: it’s a mechanism not for collaboration but for the creation of a common language.



Innovative teaching at BU

March 27th, 2009 in DigiLib BLog

Today I went to the instructional innovation conference organized by BU’s Center for Excellence in Teaching. Well, the first half of it, anyway: my phone charge ran out from checking my email and voice mail as if I were in the office, and I took that as a sign to go back and do some practical digital humanities: blogging and scanning old theses. The presentations I did see were exciting and diverse. People around the university are incorporating technology into their teaching in so many ways! Here are some highlights.

At the School of Medicine, audience response is used to create an interactive course review session. Faculty are using MS PowerPoint, with game-show-like templates to create review questions a la Jeopardy or Who Wants to Be a Millionaire. They integrate this with TurningPoint (warning: ~3min Flash video you can’t stop), which enables students to interact with the PowerPoint presentation’s multiple-choice questions via clickers. The questions are timed—students have sixty seconds to respond to each, and the answer bar graph is updated in real time. This simulates medical board exams, and allows the faculty member to tailor review according to the responses received. For instance, if student responses fell mostly on two out of four choices, one of them the correct one, specific differences between the two choices can be emphasized right there and then. I imagine this also gives the students an idea of how their class as a whole is thinking, which implicitly teaches them about the learning process.

At the School of Public Health, educators are interested in practice-based learning. They asked themselves how they might convey foundational knowledge without using all of the available classroom time for the purpose, leaving time to put the knowledge to practical use. The answer: have the students write the textbook, which is then used as the core reading for the course. This started fourteen years ago: student groups were each assigned a topical chapter and circulated it to their classmates the week before it was to be discussed. Every year since then, students have edited the previously created documents, updating and augmenting the information in the textbook. Right now they’re doing all of this in Word and emailing files to each other, but they are looking to transfer the process to more recent and perhaps better-suited technologies. Whatever the venue, what agency to give people in their own learning process! Retention must be through the roof.

At the School of Management, students have the chance to use the Team Learning Assistant web application in some of their courses. Aside from the subject-matter projects they pursue, teams work out contracts governing their participation: an agreed-upon common goal; performance standards; norms for behavior (how often do they expect each other and themselves to check email?); and plans for managing performance and conflict. Through mutual feedback and ratings, participants end up learning the teamwork skills that employers look for. Because TLA was developed at BU (see the 2004 bulletin announcement), students here can purchase a license at a discount ($12.50 for six months or $18 for nine months), far less than the price of some of their books. Faculty also have access to students’ reviews of themselves and each other, and can monitor both problems as they arise and how they get resolved. SMG’s next step is to track each student over multiple semesters, giving them a bird’s-eye view of how they do working with a variety of groups.

Again at the School of Public Health, in one course students are given a choice: write a standard term paper, or participate in a semester-long team project. They research a public health problem (say, the spread of malaria), and then produce a video to engage and educate the public about the biological bases for that problem. There are many hooks that get people interested in this option over the term paper: they learn new skills; they’ll be authors of videos that will then be made electronically available by a recognized public health organization; they can put this experience on their resume; and who knows, it might be fun. Not to mention all that teamwork. Unsurprisingly, students tend to lean towards video production, and add to the internet-enabled world’s knowledge base.

In the CAS Sociology Department, one professor uses the NetDraw social network visualization package to help learners think about social networks by constructing models of their own. In Engineering, wikis with their easy and flexible formatting capabilities are used for homework assignments, tests and project development. At the Metropolitan College, a computer science professor produces videos for his distance-learning courses using a tablet PC instead of videorecording the whiteboard, and the recording quality goes up significantly.
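The sociology exercise above, constructing a model of one's own social network, can be sketched in a few lines of plain Python (a stand-in, since the course uses the NetDraw GUI package; the names and ties here are invented for illustration):

```python
# Toy version of the "model your own social network" exercise.
# NetDraw is a visualization GUI; this stand-in just builds the
# network and computes a simple centrality measure.
from collections import defaultdict

# Hypothetical ego network: each pair is an acquaintance tie.
ties = [
    ("me", "Alice"), ("me", "Bob"), ("me", "Carol"),
    ("Alice", "Bob"), ("Carol", "Dan"),
]

# Build an undirected adjacency map.
adj = defaultdict(set)
for a, b in ties:
    adj[a].add(b)
    adj[b].add(a)

# Degree centrality: a person's tie count divided by the maximum
# possible (n - 1), so someone connected to everyone scores 1.0.
n = len(adj)
centrality = {person: len(nbrs) / (n - 1) for person, nbrs in adj.items()}

for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

Even a toy like this makes the pedagogical point: once students encode their own ties as data, questions like "who is central here, and why?" become computable rather than merely impressionistic.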

Domenic Screnci, Executive Director for Educational Media and Technology on the medical campus, presented the Echo360 package, which captures audio, the LCD projector signal, the video camera signal, and whatever’s on the podium PC, sends it all to a specialized (not cheap, but reusable!) appliance, and then allows for some editing and integrates everything into a single video. This video can then be thrown on a server and disseminated via web links and audio and/or video podcasts. Faculty have had quite legitimate concerns about this: copyright issues, what if mistakes get immortalized, what if live attendance flags. But all seems to be working out well. Editing is now possible (it wasn’t in a previous incarnation of the software, but that’s the beauty of new version releases), attendance isn’t negatively affected, and copyright, from what I understand, is handled case by case.

As I mentioned, the Echo360 setup isn’t cheap; but Screnci mentioned that as more people at BU buy in, the price goes down rather drastically. Interesting.

I can only imagine the innovative uses of technology shown during the second half of this conference, which I missed. Happily, the conference was recorded using Echo360, and slides and/or videos will (we were led to believe) be made available on CET’s website.


Best practices: what do we want to do with it?

February 20th, 2009 in DigiLib BLog

At the School of Theology Library, we’ve begun combing our first collection for public-domain imprints we’d like to have digitized by the Internet Archive. For the first batch we’ve chosen the Missions collection—logical, given the Digital Mission Project we’re pursuing.

The selection process is a huge amount of work, and we’re only dealing with about 3,000 records so far! Not only do we have to pull the books; we must also find out whether they’ve already been digitized; if so, whether there’s a good reason to digitize them again; and whether each item meets IA’s technical spec requirements.

Missions stuff is only the beginning of what we’ll eventually want to preserve and make available electronically. But, as the social internet has been teaching us, it’s not enough to digitize. Once artifacts are digitized, what do we want to be able to do with them? I’ve found a third-hand formulation that may be a useful starting point for answering that question in our specific context.

Dan Cohen, director of the Center for History and New Media at George Mason U, was a participant in the recent Smithsonian 2.0 meeting in Washington. Summarizing the meeting, he paraphrased David Recordon’s description of what he’d like to be able to do with Smithsonian Institution objects in the future (I quote from Dan’s post):

Before I visit Washington, I want to be able to go to the web and select items I’m really interested in from the entire Smithsonian collection. When I wake up the next morning, I want in my inbox a PDF of my personalized tour to see these objects. When I’m standing in front of an object in a museum, I want to see or hear more information about it on my cell phone. When an event happens related to an object I’m interested in, I want a text message about it. I want to know when it’s feeding time for the pandas, or when Lincoln’s handball will be on public display. And I want to easily share this information with my classmates, my friends, my family.

It’s unlikely that any theology library will ever have the same breadth of appeal as SI. But, as I said: it’s a starting point for thinking about what, in the exciting world of tomorrow (and tomorrow, and tomorrow), we’d like to be able to do with the objects we’re digitizing. What are the contexts for their use, inside and outside of academe? Who would want to share what with whom? What would your ideal user experience of digitized theological artifacts be like? Technosocial fantasies welcome in comments.
