Session Ideas – THATCamp CHNM 2009 (The Humanities And Technology Camp) – http://chnm2009.thatcamp.org

Mapping Literature – Fri, 26 Jun 2009 – http://chnm2009.thatcamp.org/06/26/mapping-literature/

Apologies for the extremely last-minute post, which I’m writing on the plane en route to THATCamp!

In a nutshell, what I’d like to discuss in this session is the mapping of literature. By this I mean not only strictly geographical mappings (i.e. cartographical/GIS representations of space and place) but also perhaps more abstract and conceptual mappings that don’t lend themselves so well to mathematical geospatial mash-ups.

How can we (and do we already) play with the possibilities of existing technology to create great DH tools to read literature spatially?

I’ll first demo my Litmap project and hopefully that’ll serve as a springboard for discussion. You can read more about Litmap and look at it ahead of time here.

Very much looking forward to a discussion with all of the great people who are going to be there!

Ways Past Facsimile – Fri, 26 Jun 2009 – http://chnm2009.thatcamp.org/06/26/ways-past-facsimile/

I’m wrestling with how we can move past the electronic facsimile as the standard digital humanities web-based presentation.  Projects such as The Valley of the Shadow (to select a well-known one at random) are presentations of static objects.  The metadata is usually searchable, and there are sometimes other tools for slicing and dicing the objects to support research.  Is there a way to move forward to something that plays more of the role of the monograph?

A facsimile performs a useful service to the scholarly community and supports further research, but does not itself usually form a narrative.  The monograph does, but presents a fairly linear argument (disregarding tables, figures, and plates, the text of a monograph can form a single string of glyphs that can be read as a linear set of words, sentences, paragraphs, chapters, etc., that take the reader from an initial state to some different, final state).

Online presentations consider the user to be the reader: a passive consumer instead of an active contributor.  This is similar to a book reader, but instead of turning pages, the online reader composes searches and flips through pages of results.

I’m trying to figure out how video game and film criticism can be turned into a critical tool for unpacking digital humanities web presentations and figuring out how to design projects that are participatory, encouraging information flow not only to the reader, but from the reader.

One of the texts I’ve found useful is Alexander Galloway’s “Gamic Action, Four Movements” (in Gaming: Essays on Algorithmic Culture, Electronic Mediations 18.  Minneapolis: Univ. of Minnesota Press, 2006).  He presents four modes of gamic action: play (operator-initiated diegetic action), algorithm (operator-initiated non-diegetic action), process (machine-initiated diegetic action), and code (machine-initiated non-diegetic action).  In the context of digital humanities presentation, these might be exploration, transformation, curation, and code.

There’s still something missing, though, and that’s participation by the reader.  All information still flows from the machine to the operator (site to the reader).  What if we flip things around and have information flow from the reader?  We still have exploration, transformation, curation, and code, but now from the machine perspective.  We can have systems prioritize prompts to maximize the information gained for the time spent by the participant, transform the information based on how much the system trusts the participant, have the participant track sets of objects they care about, and I’m still figuring out what “code” might be in this context.

"Us" vs. "Them" http://chnm2009.thatcamp.org/06/25/us-vs-them/ http://chnm2009.thatcamp.org/06/25/us-vs-them/#comments Fri, 26 Jun 2009 02:37:23 +0000 http://thatcamp.org/?p=267

One interesting discussion occurred on Twitter a few weeks back. Dan, Brian, and a few others were discussing the future of the Digital Humanities, and I (attempting to make what I believed at the time would be a “funny” science fiction joke) said that the definition of the Digital Humanities would be much cooler in the future. Dan Cohen’s response stuck with me, though: he said that, in the future, it would just be called “The Humanities.” The idea that the Digital Humanities is a transitional form, a sort of leap ahead into what everything should be. Now, Dan might not even agree with that statement (it was, after all, just a tweet), but I think it is an interesting thing to consider: are we simply what comes next, or will there always be classic (albeit technologically improved) academia and that group of Nerds in the corner using Zotero? To turn it into geek terms: are we Homo Sapiens (Homo Superior if you are a Bowie fan), or are we X-Men?

Running parallel to this topic is the notion of the “Digital Native.” That term has always caused a little discord for me – after all, according to the definition, I am one of them! However, it has always struck me as an odd term, either oddly placed or oddly defined. If oddly placed, it is because I have seen my fellow “digital natives” stare coldly at, and run frightened from, a wiki page, or even from saving a Word document. There are so many in my generation who refuse to go deeper than surface level, and in many cases treat technology as an unwanted obstacle. This is not an insignificant minority, by my observation. If it is oddly defined, then the problem comes with the expansion of the phrase. What I mean is that the strict definition I’ve heard is “someone who has grown up with technology,” which does apply to my generation, but it always comes with the addendum “and is therefore more comfortable with it and probably very knowledgeable.” For the same reasons as above, this is not always true – regrettably – and it therefore creates a sort of double bind: the digital natives are under-performing for the seemingly powerful title, and those bestowing the title are overestimating what it means.

I bring this up because both topics highlight an issue that has seemingly existed since the playground: the Us vs. Them mentality. Are the Digital Humanities a breeding ground for ideas that will one day be accepted, or are they a toolbox that the professors and academics of tomorrow will turn to in a time of experimentation? Will there always be geeks, or will everyone eventually be logged on? For complication’s sake, do you think that the Digital Humanities are “reaching far,” and only the more moderate pedagogies and academic memes will gestate into the population as a whole? For instance, digital archives will obviously be used in the future, but in-class use of wikis will not? iPhones, but not e-books?

I look forward to hearing your thoughts!

P.S. THATCamp is my very first academic conference. I am immensely excited, and look forward to seeing you all there!

Developing, but not overdeveloping, a collaborative space – Fri, 26 Jun 2009 – http://chnm2009.thatcamp.org/06/25/developing-but-not-overdeveloping-a-collaborative-space/

For the past few months, I’ve been involved in the development of the CUNY Academic Commons, a new project of the City University of New York whose stated mission is “to support faculty initiatives and build community through the use(s) of technology in teaching and learning.” This is no small goal, given the mammoth size and unruliness of CUNY: 23 institutions comprising some 500,000 students, 6,100 full-time faculty, and countless more staff and part-time faculty. The Commons – built on a collection of open-source tools like WordPress MU, MediaWiki, BuddyPress, and bbPress – is designed to give members of this diffuse community a space where they can find like-minded cohorts and collaborate with them on various kinds of projects.

My work as a developer for the Commons pulls me in several directions. Most obviously, I’m getting a crash course in the development frameworks that underlie the tools we’re using. These pieces of software are at varying stages of maturity and have largely been developed independently of each other. Thus, making them fit together to provide a seamless and elegant experience for users is a real challenge. This kind of technical challenge, in turn, leads me to consider critically the way that the site could and should serve the members of the CUNY community. How do you design a space where people with wildly different interests and wildly different ways of working can collaborate in ways that work for them? By making the system open enough to accommodate many ways of working and thinking, do you thereby alienate some of those individuals who need more structure to envision the utility that the site could hold for them? How do the choices you make when developing a tool – decisions about software, about organization, about design – mold or constrain the ways in which the site’s uses will evolve?

In light of these varying challenges, there are a couple different things that I would be interested in talking about at THATcamp. For one, I’d like to get together with people working with and on open-source software to talk nuts and bolts: which software are you using, how are you extending or modifying it to suit your needs, and so on. I’m also very interested in talking about strategies for fostering the kinds of collaboration that the CUNY Academic Commons has as its mission. I’m also anxious to discuss more theoretical questions about the design and development of tools that are meant to serve a diverse group of users. In particular, I’m interested in the interconnections between the designer, the software, and the designer’s knowledge and assumptions about the desires and capacities of the end user.

Omeka for an Education Dept. – Fri, 26 Jun 2009 – http://chnm2009.thatcamp.org/06/25/omeka-for-an-education-dept/

I’m worried this sounds a little boring compared to everyone else’s topics, but here goes!

I would be interested in talking about using Omeka in a somewhat unlikely way — to develop an archive of materials for a museum education department. The archive/“collection” would primarily include images and video of our programs and participants.

I am currently building two sites that primarily use the exhibit builder functionality of Omeka (not the collections functionality), but I would be interested in extending our use of Omeka to include creating an archive of department material and enabling some of the interactive features of Omeka like the Contribute plugin and “My Omeka.”

Taking a Rich Archive of Learning from Static to Social – Thu, 25 Jun 2009 – http://chnm2009.thatcamp.org/06/25/taking-a-rich-archive-of-learning-from-static-to-social/

I’m interested in sharing the Digital Storytelling Multimedia Archive with folks and brainstorming ideas on taking the site from its current, unfinished, static state to a truly social environment for students, teachers, and scholars of teaching and learning.

I see ties between this idea and those expressed around making digital archives social and also around taking archives and libraries public.  My apologies for how long this post is–I probably have way too much detail in here!–so I put some stuff in bold after the next paragraph to facilitate a skim.  The real heart of it is in the last couple of numbered points.

The Archive presents the results of a multi-campus study of the impact of student multimedia narrative production (or digital stories).  Digital stories are short (3-4 minute) films combining text, music, voice-over, and intertitles, and are used as an alternative tool for expressing academic arguments.  The Archive currently contains mostly interview clips with students and faculty from classes in Latina/o studies, American studies, media studies, and American history.  We have additional clips from ESL classes that we want to include at some point.

These interview clips are currently presented within a traditional hierarchical website organized by our three research questions. The three main sections present our ‘argument’ or ‘findings’ and folks drill down through statements of findings to evidence from student interviews.  We have an additional section which presents our findings within a ‘grid’ that ties together ‘dimensions’ of learning (the ‘grid’ is a little opaque at present, but it is cool to click around).  Finally, we have the ‘archive’ section, which at present is only a list of clip names with a link.

We are working on lots of obvious things like general clarity of writing.  We also have tags for all of the interview clips.   We want to make these tags public every time the clips appear (currently they are in a backend database).  In addition,  we have more digital stories to include and we want to tie examples of stories to interview clips.  We are also working on creating short, one-minute video “talking head” overviews of each section and also a screencast of how to use the grid.

However, what we want to do ultimately is to expand out the archive section and/or create a new social exhibits section.

1) Within the archive (really, throughout the site) we want to give folks the ability to add video of other interviews or of digital stories and to engage in their own commenting and tagging, including adding tags to the existing archive.  We would also love for there to be a way for folks to create their own grid by marking tags that they think are important and linked, and having those pulled together for their own presentation.

2) We’d also like (perhaps using Omeka?) to create an exhibits section. This could allow faculty to showcase stories and interviews from their own classes, to pull together multimedia essays about what they think they’re learning about multimedia work, or to have students play with putting stuff together.

And so, I’d love to get input from folks on these and other ideas: how best to implement them, what tools we could possibly use, and what other ideas there are for increasing the ‘social’ nature of the site.

Also, see some additional stories at: gnovisjournal.org/coventry

Digital libraries, Web 2.0 and historians – Thu, 25 Jun 2009 – http://chnm2009.thatcamp.org/06/25/digital-libraries-web-2-0-and-historians/

My post is to be linked with Larry Cebula’s first question:

«The first [question] is how to make my institution, the Washington State Digital Archives, more interactive, useful, and Web 2.0ish. We have 80 million documents online and an interface from 1997! I need not only ideas on how to change, but success stories and precedents and contacts to convince my very wary state bureaucracy that we can and have to change.»

My institution publishes a digital library called European NAvigator (ENA), devoted to the European integration process (i.e. the long process that led to today’s European Union), which has almost no online equivalent on this subject. At the beginning, it was intended for the use of high school teachers and for every citizen interested in the subject.

The site as you can see it now was put online in 2005. It obviously lacks participatory and “community” features – what are somewhat unfortunately called Web 2.0 features. We would like to use those kinds of features to give more services to our present audience, but also to extend this audience – with some special features – to researchers (in history, law, and political science).

I would like to propose a session on digital libraries, where I will present ENA and its future as we see it for 2010. But my point is to share a more general reflection on digital libraries and their future within Web 2.0 and, further, within the semantic Web. The idea is not to do some Web 2.0 for the sake of it, but to better focus on researchers and their needs.

Frédéric

Visualizing time – Wed, 24 Jun 2009 – http://chnm2009.thatcamp.org/06/24/visualizing-time/

For the last two years, I have been very interested in visualizing data that emerges within my particular field: literature. This interest emerged as I read Moretti’s Graphs, Maps, Trees at the same time that I was experimenting with using GIS tools like Google Earth as a portion of the analysis in the last chapter of my dissertation. In my last year as a graduate student, a fellowship in the Emory Center for Interactive Teaching gave me additional time to begin experimenting with timelines. Timelines in literary studies were nothing new, but I wondered if it would be possible to have a class collaboratively build one in a manner similar to writing a wiki. The result was–in turn–a collaboration with Jason Jones (@jbj) where I coded a timeline, he designed an assignment, and his students created the data for a timeline of the Victorian Age. I’ve since had the chance to play with the tool in my own classes.

Jason and I both thought that timelines would be a fruitful subject for conversation among THATCampers. And as many others have done, I thought I would share my original THATCamp proposal and then propose some ideas about where a discussion might go:

I would like to discuss the different web-based tools and software that can be used to produce interactive and collaborative timelines. The presentation would involve demonstrating the different tools, showing the strengths and the weaknesses of each one, and producing a finished product. The tools would range from CHNM’s Timeline Builder to xtimeline and from Bee Docs Timeline 3D to the Timeline and Exhibit widgets that were developed in MIT’s Simile project. Having already spent some time with these tools, I think that the tools from Simile might be the most interesting to THATCamp participants due to their flexibility in representing data in multiple ways – including color-coding events, sorting events, and incorporating GIS data – as well as their ability to grab data from sources as diverse as a Google Docs spreadsheet or Twitter. Perhaps the best demonstration of the usefulness of a timeline would be to create – during the session/event – a timeline of THATCamp.

My current thinking:

As I’ve been preparing for THATCamp, I have gone ahead and evaluated as many of the timeline tools as I’ve had time for; I’ll be looking at another one or two tomorrow. I’ve created a spreadsheet listing the capabilities of these different tools, along with some evaluation. Admittedly, some of the categories that I used to evaluate the timelines stem from my deep involvement with the Simile widgets, and so the comparison might not be completely fair to the competition.

Wanting to blend together both streams of data visualization that seemed valuable to me, I’ve also expanded on the original timelines that I designed for my courses by adding a Google Maps view this week. You can choose either to look at one view at a time or at a dual view.

While a conversation could certainly be held about the strengths and weaknesses of these different tools, most of the timeline tools that are available are going to be fairly easy for THATCampers to pick up and run with. The most complicated among them is the Simile tool, but I’ve heard there’s a fairly straightforward tutorial on building your own. Instead (or in addition), I wonder if it would be possible to have a conversation about other possible research and pedagogical uses for timelines beyond those to which Jason and I have put them thus far. One obvious approach would be to timeline a particular text (say, Slaughterhouse-Five) rather than a contextual time period. But what else could we do with timelines to make them valuable?
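To give a concrete sense of what “building your own” involves on the data side, here is a minimal sketch (in Python, not the code behind my class timelines) that turns a CSV export of a shared spreadsheet into the JSON event list the Simile Timeline widget loads. The column names, file names, and exact event fields are assumptions to adapt to your own spreadsheet and widget version.

```python
# Hypothetical sketch: convert a spreadsheet export (CSV) of class-built
# timeline entries into a Simile Timeline event list. Column names and the
# event fields ("start", "end", "durationEvent") are assumptions; check the
# documentation for your version of the widget.
import csv
import json

def csv_to_timeline_events(csv_path, json_path):
    events = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            event = {
                "start": row["date"],              # e.g. "1851-05-01"
                "title": row["title"],
                "description": row.get("description", ""),
            }
            if row.get("end_date"):                # optional span events
                event["end"] = row["end_date"]
                event["durationEvent"] = True
            events.append(event)
    with open(json_path, "w", encoding="utf-8") as out:
        json.dump({"dateTimeFormat": "iso8601", "events": events}, out, indent=2)

if __name__ == "__main__":
    csv_to_timeline_events("victorian_age.csv", "victorian_age.json")
```

The resulting JSON file is what an HTML page with the Timeline widget would point its event source at; the harder (and more interesting) work is deciding what the students put in the spreadsheet in the first place.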

Moreover, I wonder if a discussion about visualizing time could be a part of a larger discussion about visualization that seems to be on the minds of other THATCampers (at least per their blog posts), such as Tonya Howe and Amanda Watson. How best can we use such visualizations in our research and/or teaching? At what point are there diminishing returns on such projects? Since these tools are relatively easy to learn (as opposed to programming languages), are they a good gateway tool for “traditional faculty” to begin comfortably integrating new technologies into their research/teaching? And, perhaps most broadly, what is the relationship between digital humanities and visualization?

(I should mention that while Jason and I proposed related ideas to THATCamp, this post is my own. So don’t hold him responsible for my shortcomings in expression.)

Who is working with Drupal? (I am — here's why) – Wed, 24 Jun 2009 – http://chnm2009.thatcamp.org/06/24/who-is-working-with-drupal-i-am-heres-why/

Well, I’m finally caught up in reading these blog entries, so I’m taking my turn to post about my proposal. I hope this isn’t too late to get some response and maybe interest in participation this weekend. In short, I’m working with Drupal on my course websites, and I’ve developed some practices and tools with it that I’d like to share. Specifically, I’ve been working on adapting a gradebook module for my own purposes by adding in a mechanism for evaluating student blog entries. I’m basically a committed Drupal fanboy, so I’m really interested to hear if anyone else is doing cool things with this platform. I’d love to converse about my projects or yours, or just generally about best practices and future directions in Drupal development.

I don’t know if there’s enough interest for an entirely Drupal-focused session, but since a lot of the proposals here include comments like “I’d love to see what tools or solutions other people have come up with,” I’d be happy chiming in about what I’ve done with Drupal.

The main thing I’ve done recently (and what I initially proposed) is to use Drupal instead of an LMS (a la Blackboard) for class websites. I position my use of Drupal as part of the post-LMS conversation discussed in this Chronicle piece. Whether we want to call it edupunk or not, the point is that open, flexible tools let us make online class conversations that look (when they work) more like we’re constructing knowledge with our students and less like we’re managing learning. (Also, note how the Blackboard guy closes the article with the assertion that other tools are inferior because they lack a gradebook feature. Ha!)

To make this more about digital humanities and less ed tech, the thing I like about Drupal is that its flexibility is such that it doesn’t solve problems for me — it gives me tools to solve my own problems. If the defined problem is one of learning outcomes, then maybe Drupal can be built into an LMS. But since we don’t start with that paradigm when we download and install Drupal core, it instead gives us an opportunity to think about information structures, conversation, and knowledge in several different ways at once.

For example, what does it mean that one can use Drupal to think through an answer to Sherman Dorn’s question as well as Dave’s?

To put it more generally, what are the relative strengths and weaknesses of any platform, and how are those affordances related to knowledge construction in a (physical or virtual) classroom? I think we’d all agree that WordPress MultiUser allows different kinds of conversations to emerge (with arguably different stakes) than, say, a Blackboard discussion forum, but why are those differences really important, and does that difference also extend to research and publishing (yes, obviously)?

I realize some of these paths may be well-worn, but it’s what I think about as I try to build new Drupal sites, as I’m doing this summer. Anyone want to talk about it this weekend?

I’ve written about this some on my non-blog, and anyone who is interested is welcome to visit my recent courses. Also, for more on using Drupal for teaching, there are several groups and projects out there, including, most notably, Bill Fitzgerald’s DrupalEd project.

Archiving Social Media Conversations of Significant Events – Tue, 23 Jun 2009 – http://chnm2009.thatcamp.org/06/23/archiving-social-media-conversations-of-significant-events/

I’ve already proposed one session, but recent events in Iran and the various discussions of the role of social media tools in those events prompted this post.

I propose that we have a session where THATCampers discuss the issues related to preserving (and/or analyzing) the blogs, tweets, images, Facebook postings, SMS(?) of the events in Iran with an eye toward a process for how future such events might be archived and analyzed as well.  How will future historians/political scientists/geographers/humanists write the history of these events without some kind of system of preservation of these digital materials?  What should be kept?  How realistic is it to collect and preserve such items from so many different sources? Who should preserve these digital artifacts (Twitter/Google/Flickr/Facebook; LOC; Internet Archive; professional disciplinary organizations like the AHA)?

On the analysis side, how might we depict the events (or at least the social media response to them) through a variety of timelines/charts/graphs/word-clouds/maps?  What value might we get from following/charting the spread of particular pieces of information? Of false information?  How might we determine reliable/unreliable sources in the massive scope of contributions?

[I know there are many potential issues here, including language differences, privacy of individual communications, protection of individual identities, various technical limitations, and many others.]

Maybe I’m overestimating (or underthinking) here, but I’d hope that a particularly productive session might even come up with the foundations of: a plan, a grant proposal, a set of archival standards, a wish-list of tools, even an appeal to larger companies/organizations/governmental bodies to preserve the materials for this particular set of events and a process for archiving future ones.

What do people think?  Is this idea worth a session this weekend?

UPDATE: OK, if I’d read the most recent THATCamp proposals, I’d have seen that Nicholas already proposed a similar session, and I could have just added my comment to his… So, we have two people interested in the topic. Who else?

Digital Publishing-Getting Beyond the Manuscript – Mon, 22 Jun 2009 – http://chnm2009.thatcamp.org/06/22/digital-publishing-getting-beyond-the-manuscript/

Here is the original submission I made to THATCamp followed by some additional background ideas and thoughts:

Forget the philosophical arguments – I think most people at THATCamp are probably convinced that in the future scholarly manuscripts will appear first in the realm of the digital. I am interested in the practical questions here: What are born-digital manuscripts going to look like, and what do we need to start writing them? There are already several examples – Fitzpatrick’s Planned Obsolescence, Wark’s Gamer Theory – but I want to think about what the next step is. What kind of publishing platform should be used (is it simply a matter of modifying a content management system like WordPress)? Currently the options are not very inviting to academics without a high degree of digital literacy. What will it take to make this kind of publishing platform an option for a wider range of scholars? What tools and features are needed beyond, say, CommentPress – something like a shared reference manager, or at least an open API, to connect these digital manuscripts (Zotero)? Maybe born-digital manuscripts will just be the beta version of some books that are later published (again, e.g. Gamer Theory)? But I am also interested in thinking about what a born-digital manuscript can do that an analog one cannot.

Additional Thoughts:

So I should start by saying that this proposal is a bit self-serving. I am working on “a book” (the proverbial tenure book), but writing it first for the web. That is, rather than releasing the manuscript as a beta version of the book online for free, or writing a book and digitally distributing it, I want to leverage the web to do things that cannot be accomplished in manuscript form. It is pretty clear that the current academic publishing model will not hold. As I indicated in the original proposal above, I think that most participants at THATCamp are probably convinced that the future of academic publishing is in some ways digital (although the degree to which it will be digital is probably a point of difference). But in working on this project I have come to realize that the tools for digital self-publishing are really in the early stages – almost a pre-alpha release. Yes, there are options, primarily blogs, but for the most part these tend to mimic “book-centered” ways of distributing information. To be sure, there are examples of web tools which break from this model, namely CommentPress, but I am interested in thinking about what other tools might be developed and how we can integrate them. And at this point I think you have to be fairly tech-savvy, or have a “technical support team,” to be able to do anything beyond a simple blog or digital distribution of a manuscript (say, as a downloadable .pdf). One of the early models we can look to is McKenzie Wark’s Gamer Theory, but he had several people handling the “tech side.” I can get the tech help to do the things I cannot do on my own, but it seems pretty clear that until the tools are simple and widely available, digital publishing will either remain obscure or overly simple/conservative (just a version of the manuscript).

So, what tools do we need to be developing here? Should we be thinking about tools, or about data structures and then developing tools around them? (I realize this is not an either/or proposition.) I am imagining something like WordPress with a series of easy-to-install plugins that would open up web publishing to a much wider range of scholars. Perhaps a “publisher” could host these installs and provide technical support, making it even easier for academics. I have a fairly good idea of what I personally want for my project, but am interested in thinking and hearing about what other scholars, particularly those from other disciplines, would need/want.

Mobile digital collections – Sun, 21 Jun 2009 – http://chnm2009.thatcamp.org/06/21/mobile-digital-collections/

I’d like to share some work we have done at NC State to bring digital collections to the mobile environment. Now that libraries have made large parts of their photograph and image collections available in digital form on the desktop, the next step is to deliver them via mobile devices that, through the integration of (relatively) large touch screens, faster processors, high-speed connectivity, and location awareness, are becoming an increasingly attractive platform.

“WolfWalk,” a prototype application for the Apple iPhone and iPod Touch, is our attempt to leverage these technologies to provide access to a small subset of our library’s digital collections, in this case historic images of buildings on the NC State campus. Users can access these images, together with short descriptions of the buildings, through an alphabetical list or a map interface. Instead of having to access print photographs in a controlled library environment or viewing digital surrogates on the desktop, “WolfWalk” allows users to view these images “in the wild,” i.e., they can view them while at the same time experiencing the real object. Also, by (eventually) making use of the device’s location awareness, we can add a serendipitous aspect to the process of discovering images. Instead of having to browse through a search interface or a virtual representation of our campus, the campus becomes the interface when the application shows users buildings, related images and descriptions in their vicinity.

I’d be interested in hearing what others think about the impact of the mobile medium not only on digital collections, but also how these new technologies and practices could be leveraged in other contexts related to work in the digital humanities.

Visualization, Literary Study, and the Survey Class – Thu, 18 Jun 2009 – http://chnm2009.thatcamp.org/06/18/visualization-literary-study-and-the-survey-class/

I hope I’ve not missed the boat on the pre-un-conference idea-generating posts! In brief, I’d like to meet up with people interested in a web project that visually weights, by color, simple semantic relations in literary texts – and/or in putting together an NEH grant for said project. Caveat: I’m not an expert on this. Here’s my initial proposal, though in retrospect it looks rather stilted and sad:

For the past year or so, I’ve been interested in putting together a small team of like-minded folks to help bring to fruition a data visualization project that could benefit less-prepared college students, teachers in the humanities, and researchers alike. Often, underprepared or at-risk educational populations struggle to connect literary study with the so-called “real world,” leading to a saddening lack of interest in the possibilities of the English language, much less literary study. I am currently working with Doug Eyman, a colleague at GMU, to develop a web application drawing on WordNet—and particularly the range of semantic similarity extensions built around WordNet—to visually mark up and weight by color the semantic patterns emerging from small uploaded portions of text. This kind of application can not only help students attend more fully to the structures of representation in literature and the larger world around them—through the means of a tool emphatically of the “real world”—but also enable scholars to unearth unexpected connections in larger bodies of text. Like literary texts to many students, the existing semantic similarity tools available through the open source community can seem inaccessible, even foreign, to a lay audience; this project seeks to lay open the language that so many fear, while enabling the critical thinking involved in literary analysis. Ultimately, we hope to extend this application with a collaborative and growing database of user-generated annotations, and perhaps with time, to fold in a historically-conscious dictionary as well. We are seeking an NEH Digital Humanities startup grant to pursue this project fully, and I’d like the opportunity to throw our idea into the ring at THATcamp to explore its problems as well as possibilities, even gathering more collaborators along the way.

Here’s a hand-colored version of something like what I’m thinking; I used WordNet::Similarity to generate the numbers indicating degree of relatedness, and then broke those numbers into a visual weighting system. Implementation hurdles do come out pretty clearly when you see how the numbers are generated, so I’m hoping someone out there will have better insights into the how of it all.
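To make the number-generating step concrete, here is a minimal sketch of one way to do it in Python, using NLTK’s WordNet interface rather than the Perl WordNet::Similarity module behind my hand-colored mock-up. The scoring function, the word list, and the bucketing into “color weights” are assumptions for illustration, not our planned implementation.

```python
# Minimal sketch: score pairs of words for WordNet relatedness and bucket the
# scores into coarse "color weights" that a front end could render. Uses
# NLTK's WordNet corpus (run nltk.download('wordnet') once beforehand); the
# bucket thresholds are arbitrary placeholders.
from itertools import combinations
from nltk.corpus import wordnet as wn

def max_path_similarity(word_a, word_b):
    """Best path similarity over all sense pairs that share a part of speech."""
    best = 0.0
    for s1 in wn.synsets(word_a):
        for s2 in wn.synsets(word_b):
            if s1.pos() != s2.pos():
                continue
            score = s1.path_similarity(s2)
            if score is not None and score > best:
                best = score
    return best

def color_weight(score):
    """Map a 0-1 similarity score onto a small palette of weights."""
    if score >= 0.5:
        return "strong"
    if score >= 0.25:
        return "medium"
    if score > 0.0:
        return "weak"
    return "none"

if __name__ == "__main__":
    words = ["sea", "ship", "voyage", "garden"]
    for a, b in combinations(words, 2):
        s = max_path_similarity(a, b)
        print(f"{a:>8} ~ {b:<8} score={s:.2f} weight={color_weight(s)}")
```

A real version would tokenize an uploaded passage, restrict comparisons to content words, and hand the weights to the front end as markup (say, a CSS class per bucket) – which is exactly where the implementation hurdles show up.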

To a related, larger point: I always have the sneaking suspicion that this has been done before. Jodi Schneider mentioned LiveInk, a program that reformats text according to its semantic units so that readers can more effectively grasp and retain content. This strikes me as similar, as well, to the kinds of issues raised by Douglas Knox – using scale and format to retrieve “structured information.” Do the much-better-informed Campers out there know of an already-existing project like this? I wish the checklist of visual thinking tools that George Brett proposes were already here!

Visual Thinking & Tools Discussion – Mon, 15 Jun 2009 – http://chnm2009.thatcamp.org/06/15/visual-thinking-tools-discussion/

A tweet by @WhyHereNow (Brooke Bryan) – “thinking about how we create tools to do things, then the tools come to change the things that we do. #thatcamp” – spurred me to suggest a discussion about using visualization tools like mind maps, concept maps, or other graphical diagrams to augment research, information management, collaboration, and other work processes.

I have personally used mind maps to brainstorm ideas for a long time. Lately I take the early model and expand it into a visual notebook to store collected materials, as well as to do quick show-and-tell for colleagues. Recently I learned how to use multi-dimensional maps for analytical purposes using the Issue-Based Information System (IBIS) methodology.

Mind maps can be much more than quick brainstorm sketches. The available range of stand-alone and networked applications, along with a host of Web 2.0 mapping tools, continues to expand. The many ways these tools are being used, the tips and tricks of the experts, and advice about which one to use for what result are bits of information that really ought to be shared.

So, I’m proposing an informal session that could grow into an online annotated checklist of tools, or at least contribute to another resource like Digital Research Tools (DiRT).

Digital Collections of Material Culture – Wed, 10 Jun 2009 – http://chnm2009.thatcamp.org/06/10/digital-collections-of-material-culture/

Hello, everyone! I’ve been reading over everyone’s posts and comments, and letting it all percolate – but today’s my day to finally post my own thoughts.

Here’s my original proposal:

“Digital collections of material culture – how to make them, share them, and help students actually learn something from them!

– “quick and dirty” ways for faculty to develop digital collections for the classroom, without giving up on metadata. For the recent workshop we held at Vassar, I’ve been working on demo collections (see grou.ps/digitalobjects/wiki/80338 ) to evaluate 8 different tools,  including Omeka. In each, you can view the same 8 historic costumes and their metadata, with 43 jpeg images and 1 QTVR. I’m developing my work as a template, with support documentation, for others to use.

– how students can use digital collections and contribute to them, without requiring a huge technological learning curve, especially for students with non-traditional learning styles

– the potential of union catalogs”

Of course these issues cross over in many ways with issues that have already been posted. So, I’m not sure if this needs to be a session, or if it’s more about bringing this material culture perspective to other relevant sessions. That probably depends on how many other material culture people are coming – anyone?

Deep Digital Collections / The Thing-ness of Things

Projects that successfully represent 3D objects are still pretty rare. Current systems of image representation are not sufficient – 1 image per object is not enough. Artifacts also continue to defy controlled vocabularies and metadata schema. For example, one of my current projects involves studying a historic dress inside and out – I have over 100 detail images and complex data (see a sample blog post that shows the complexity of the object).

I’m working to create digital access to the Vassar College Costume Collection, our collection of historic clothing, with about 540 objects dating from the 1820’s to today. Just to clarify, in the field of costume history, the term “costume” refers to all clothing, not theatrical costume.  For about 7 years I’ve been exploring different ways of digitizing this collection, giving students access to a database of the objects, and then sharing their research projects, in a variety of digital formats, as virtual exhibitions.

“Quick and Dirty” Classroom Tools / Low Tech Digital Humanities

In my demos, you can view the same 8 historic costumes and their metadata, with 43 jpeg images and 1 QTVR, in Omeka, CollectiveAccess, Greenstone, Luna Insight, ContentDM, VCat, Filemaker, Excel, and GoogleDocs.

My inquiry has developed beyond the initial comparison of different available tools, to explore a kind of “division of labor” in the process. My approach has been very much on the DIY side, but couched in a collaborative experience. I initially created my demos for a NITLE sponsored workshop at Vassar this past March (entitled “Digital Objects in the Classroom”). Our workshop emphasized the importance of collaboration, and we asked participating institutions to send teams of faculty, librarians, instructional technologists, and media specialists. Perhaps ironically, the demos have mostly been my own work (with wonderful help from Vassar’s Systems Administrator and Visual Resources Librarian). I continue to search for the perfect compromise – for faculty and students to be able to quickly and independently get resources both into and out of collections, while administrators feel comfortable with the security and maintenance of the technology involved.

Student Contributions

Even if you’re not working in a traditional academic setting, I encourage you to view your audience as students. We can use technology as part of a suite of pedagogical tools to provide differentiated instruction for different styles of learners.  What I imagine is a way for students to add to the conversation in ways beyond tagging and commenting – to contribute their own images and research.  Our work in the last couple of years has reinforced this in a backward kind of way. We envisioned a large grant might allow us to carefully photograph and catalog much of the collection, which we could then present to students (on a platter?). Such a grant hasn’t come through yet, but the students have kept on coming! So, each semester brings us a new project, with new research about some of the objects, new photographs that students have taken to support their research, new citations of and links to supporting references. And the database grows. And I wonder, if we did present the virtual collection to students on a platter, would they be as inspired to work with the objects doing their own research? Would it seem as fresh to them? We need to keep the focus on our students and not on technology for its own sake.

Union Catalogs / Federated Searches

For each of our collections we’re working hard to structure our metadata and to define controlled vocabularies. But most of the time we aren’t taking advantage of the sharing that structured metadata allows. Either collections aren’t having their data harvested, or if they are, they’re going into giant collections like OAIster where it can be hard to find them. We need more union catalogs for material culture objects that are oriented for specific disciplines. By harvesting for a more specific kind of union catalog, we can transcend the “dumbing down” of data for Dublin Core standards and create variations that allow for richer data in each of our fields. We don’t have to reinvent the wheel, but building on Dublin Core or VRA or CDWA can really benefit our specific fields. For collections that have a strong visual component, some form of image needs to be a part of what is harvested and shows up in the federated search.
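For anyone who hasn’t worked with harvesting before, the protocol side is genuinely the easy part; the sketch below (in Python, against a placeholder endpoint) pulls the first page of simple Dublin Core records over OAI-PMH. Everything this post actually argues for – resumption tokens, richer schemas like VRA or CDWA, and carrying an image reference through to the federated search – is where the real work starts.

```python
# Minimal sketch of harvesting simple Dublin Core records over OAI-PMH.
# The repository URL is a placeholder; a real harvest also needs to follow
# resumptionTokens and will want a richer metadataPrefix than oai_dc.
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(base_url):
    """Return (title, identifier) pairs from the first page of ListRecords."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    results = []
    for record in root.iter(f"{OAI}record"):
        titles = [t.text for t in record.iter(f"{DC}title") if t.text]
        ids = [i.text for i in record.iter(f"{DC}identifier") if i.text]
        if titles:
            results.append((titles[0], ids[0] if ids else None))
    return results

if __name__ == "__main__":
    for title, identifier in harvest_titles("http://example.edu/oai"):
        print(title, "->", identifier)
```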

I look forward to reading your comments – and to meeting you all in person later this month!

Literary mapping and spatial markup – Wed, 10 Jun 2009 – http://chnm2009.thatcamp.org/06/10/literary-mapping-and-spatial-markup/

I’ve been thinking a lot lately about the uses of digital maps in literary study, partly because I’ve been thinking about the connections between place and memory for a long time, and partly because I got interested in GIS a few years ago, while working in the UVa Library’s Scholars’ Lab along with some extremely smart geospatial data specialists. There’s been talk of a “spatial turn” in the humanities lately, and there are already models for what I’m interested in doing. Franco Moretti’s maps of literary places in Atlas of the European Novel 1800-1900 and Barbara Piatti’s in-progress Literary Atlas of Europe have helped me think about the patterns that a map can help reveal in a work of fiction. I’m very much looking forward to hearing about Barbara Hui’s LitMap project, which looks a lot like what I’d like to make: a visualization of places named in a text and stages in characters’ journeys.

Since I came to the digital humanities via a crash course in TEI markup, I tend to think first of markup languages as a way to represent places, and capture place-related metadata, within a literary text. The TEI encoding scheme includes place, location, placeName, and geogName elements, which can be used to encode a fair amount of geographic detail, which can then be keyed to a gazetteer of place names. But there are also markup languages specifically for representing geospatial information (GML, SpatialML), and for displaying it in programs like Google Earth (KML). Using some combination of a database of texts, geographic markup, and map display tools seems like a logical approach to the problem of visualizing places in literature.
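As a toy illustration of that pipeline – and only of the easy, literal-minded part that the next paragraphs complicate – here is a sketch that pulls placeName elements out of a TEI P5 file, resolves them against a small hand-made gazetteer, and writes a bare-bones KML file that Google Earth can open. The file names and gazetteer entries are hypothetical.

```python
# Toy sketch of the markup-to-map pipeline: extract <placeName> elements from
# a TEI-encoded text, resolve them against a hand-made gazetteer, and emit a
# minimal KML file. File names and gazetteer entries are hypothetical, and
# unresolved (or fictional) places are simply skipped.
import xml.etree.ElementTree as ET

TEI = "{http://www.tei-c.org/ns/1.0}"
GAZETTEER = {                        # place name -> (longitude, latitude)
    "Newport": (-71.3128, 41.4901),
    "New York": (-74.0060, 40.7128),
}

def tei_places(tei_path):
    """Yield the distinct placeName strings found in a TEI document."""
    root = ET.parse(tei_path).getroot()
    seen = set()
    for el in root.iter(f"{TEI}placeName"):
        name = "".join(el.itertext()).strip()
        if name and name not in seen:
            seen.add(name)
            yield name

def write_kml(names, kml_path):
    """Write one KML Placemark per gazetteer-resolvable place name."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name in names:
        if name not in GAZETTEER:
            continue
        lon, lat = GAZETTEER[name]
        placemark = ET.SubElement(doc, "Placemark")
        ET.SubElement(placemark, "name").text = name
        point = ET.SubElement(placemark, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    ET.ElementTree(kml).write(kml_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    write_kml(tei_places("age_of_innocence.xml"), "age_of_innocence.kml")
```

The `continue` on unresolved names is, of course, exactly the problem: it throws away Skuytercliff and every other place that resists coordinates, which is what the rest of this post is about.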

But (as I’ve said on my own blog, with a different set of examples) I’m also interested in spatial information that’s harder to represent. There are a lot of ways in which literary settings don’t translate well to points on a map. Lots of authors invent fictitious places, and even when one can identify more or less where they’re supposed to be, one can’t supply exact coordinates. Consider Edith Wharton’s The House of Mirth and The Age of Innocence, set in New York at the turn of the century and in the 1870s, respectively. One of my ongoing Google Maps experiments is a map of named places in both novels, focused on New York and Newport, Rhode Island. Both novels explore an intricate, minutely grained social world, in which a character’s address says a great deal about his or her status. In some cases, the reader can precisely identify streets, points, and landmarks. And I think you can learn quite a lot about the world of Edith Wharton’s novels by looking at the spatial relationships between high society and everyone else, or between old and new money, or between a character’s point of entry into the world of the novel and where (physically and spatially) he or she ends up.

But in other cases the locations are harder to pin down. One can surmise where Skuytercliff, the van der Luydens’ country house in The Age of Innocence, is (somewhere on the Hudson River, not far from Peekskill), but it’s a fictional house whose exact location is left vague. The blob labeled “Skuytercliff” on my map represents a conjecture. And of course geoparsing won’t work if the place names are imaginary and the coordinates are unknown. So: what do we do with unreal places that still have some connection to the geography of the real world? And what if we want to visualize a writer’s completely imaginary spaces? What if we move out of fiction and into less setting-dependent literary forms, like poetry? How would one even begin to map settings in the work of, say, Jorge Luis Borges? Are there limits to the usefulness of visualization when we use it to analyze things that are fundamentally made out of words? Are some texts “mappable” and others much less so? (I’m pretty sure the answer to that last one is “yes.” I have yet to encounter an approach to literature that works equally well for everything from all time periods.)

So what I’d like to bring to the table at THATCamp is a set of questions to bounce off of people who’ve done more work with cartographic tools than I have. In some ways, my interests resonate with Robert Nelson’s post on standards, since I’m also thinking about what to do when the objects of humanistic study (in this case, literature) turn out to be too complex for the standards and data models that we have. If we end up having that session on standards, I’d like to be in on it. But I hope there are also enough people for a session on mapping and the representation of place.

How to get money, money, money for wild and crazy times!! – Wed, 10 Jun 2009 – http://chnm2009.thatcamp.org/06/09/how-to-get-money-money-money-for-wild-and-crazy-times/

Okay, not really.  But I do think this topic is particularly important right now.

This was my original proposal:
I’d like to talk about the role of faculty, IT, and administrators in collaborating to shape institutional strategic plans and planning in general for academic computing and the digital humanities.  I’ve spent nearly 18 months now involved in various strategic and practical planning committees at UMW regarding digital resources and goals for the humanities and social sciences.  Making sure that resources are allocated to the digital humanities requires broad commitments within administrative and strategic planning.  [Not as sexy or fun as WPMU or Omeka plug-ins, but sadly, just as important….]  I’d like to share my own experiences in the area and hear from others about theirs.

And today I would simply add that as UMW is closing in on a first draft of its strategic plan, I’m even more convinced that the college/university-wide planning process is something with which digital humanists need to be engaged.  In this time of dwindling economic resources, however, we also need to be, pardon the pun, strategic about it.  I think we need to figure out when we need to explain concepts, tools, the very notion of what digital humanities is and its place in the curriculum (something even THATCampers seem to be debating), when we need to do full-on DH evangelizing, and when we need to back off from our evangelizing in order to ease fears and/or recognize budgetary realities.  In any case, who else has had to make the case for Digital Humanities or academic technology as part of these processes?

Disciplinary Heresies and the Digital Humanities – Wed, 03 Jun 2009 – http://chnm2009.thatcamp.org/06/03/disciplinary-heresies-and-the-digital-humanities/

Cross-posted at Clio Machine:

(This post is a continuation of some of the questions I raised in my original THATCamp proposal.)

Are the humanities inherently valuable, both in terms of the skills they impart to students and because the value of humanistic scholarship cannot be validated by external (often quantitative) measures?  Or are the humanities experiencing a crisis of funding and enrollments because they have not adequately or persuasively justified their worth?  These debates have recently resurfaced in the popular press and in academic arenas.  Some commentators would point to the recession as the primary reason why these questions are being asked.  We should also consider the possibility that the mainstreaming of the digital humanities over the last couple of years is another (but overlooked) reason why questions about the value and worth of the traditional humanities are being taken more seriously.

As humanists have pursued academic prestige, they have long resisted the notion that intuition is important in their analysis and interpretation of texts.  (Although I think this is more true in history than in literary studies, perhaps because the latter is considered more “feminine” than the former.) Humanists have distanced themselves from the notion that their subjective study is somehow speculative or irrational.  They have been much more comfortable describing their work as imaginative and creative.  What all of this posturing overlooks are the advances that cognitive scientists have made in explaining intuition over the last few decades.  For instance, they have shown that humans are hardwired to instantly recognize the emotions felt by other people.  They have also explained how our minds are programmed to find patterns, even where none may exist.  This tension was captured in the title of a recent book by a respected psychologist, Intuition: Its Powers and Perils.  From this new perspective, then, intuition is taken for granted or ignored by almost all humanists, but it is actually central to much of their work.

This debate over intuition raises important questions for traditional humanists working in the digital era.  Would traditional humanists argue that their close reading of texts, which has become the hallmark of humanistic scholarship, is an example of this new concept of intuition at its best, since it is theoretically rigorous and excels at finding new patterns in old texts?  Or will traditional humanists increasingly feel that their research methodology is threatened by what Franco Moretti calls “distant reading,” precisely because it risks exposing the limitations or perils of their intuitive models of scholarship?  How would traditional humanists react if they knew that various digital humanists have searched Google Books to test the arguments set forth in some monographs and found them lacking when text mining revealed a significant number of counterexamples that were missed or ignored by the authors?  These and other examples should get us thinking seriously about the advantages and disadvantages of relying so heavily on anecdotal, case study, and close reading research methods in the humanities.

Data and databases have become the holy grail of the new class of information workers.  One recent book applies the term “super crunchers” to these data analysts.  Recent articles in the popular press describe how large data sets allow trained professionals to find new patterns and make predictions in areas such as health care, education, and consumer behavior.  In fact, we have probably reached the point in this country where it is impossible to change public policy without the use of statistics.  Even the American Academy of Arts and Sciences jumped on the statistics bandwagon when it launched its Humanities Indicators Prototype web site last year, presumably in plenty of time for congressional budget hearings.  The fact that the humanities were the last group of disciplines to compile this kind of data raises some troubling questions about the lack of quantitative perspectives in the traditional humanities.

The humanities and mathematically-driven disciplines operate at almost opposite poles of scholarly inquiry.  In the humanities, practitioners privilege crystallized intelligence, which is highly correlated with verbal ability.  This has given rise to the idea that a “senior scholar” in the humanities accomplishes his or her most important work in his or her 50s or 60s, after a lifetime of accumulating and analyzing knowledge in a particular specialization.  By contrast, the most mathematically-inclined disciplines prize the abstract thinking that characterizes fluid intelligence.  This other form of general intelligence peaks in a person’s 20s and 30s.  As a consequence, the Fields Medal, widely considered the highest award in mathematics, has never been awarded to a mathematician over the age of 40.  So if the digital humanities require young scholars to learn and excel at computational and algorithmic forms of thinking, we should be asking ourselves whether most senior scholars in the humanities will resist this as a perceived threat to their systems of seniority and authority.

Digital humanists have already written and talked quite a bit about how tenure and promotion committees have rejected some digital scholarship for being non-traditional.  Further compounding this problem are what appear to be significant cultural differences.  Almost all traditional humanists work on their scholarship in isolation; digital humanists collaborate often, sometimes because this is the only way to assemble the requisite technical knowledge.  Traditional humanists distinguish their scholarship from that produced in the social sciences, which they often think lowers itself to the level of policy concerns.  Digital humanists, by contrast, are almost universally oriented towards serving the needs of the public.  And while traditional humanists place a premium on theoretical innovation, digital humanists have so far focused much more on embracing and pioneering new methodologies.

Digital humanists will have to seriously ask themselves whether their embrace of social science methods will be considered heretical by traditional humanists.  Online mapping and work with GIS in the digital humanities is clearly borrowing from geography.  The National Historical Geographic Information System, which maps aggregate census data from 1790 to 2000, is obviously influenced by demographic and economic analysis.  The Voting America web site, overseen by the digital humanist Ed Ayers, builds on decades of studies in political science.  Text mining is catching on as digital humanists adapt the methods of computational linguistics and natural language processing.  What remains to be seen is whether the digital humanities will take this flirtation to its logical conclusion and follow the example of the computational social sciences.

All of this might sound quite unlikely to some of you.  After all, most, if not all, of us have heard the mantra that the digital humanities is a misnomer because in ten to fifteen years all humanists will be using digital methods.  But for that to be true, digital humanists will have to fall into the same trap as traditional humanists: believing that others will follow our example because the correctness of our way of doing things seems self-evident.  But as we have seen, there may actually be significant differences in the ways that digital humanists and traditional humanists think about and practice their disciplines.

Let me conclude with a few questions that I would love to see discussed, especially as part of a session at THATCamp.  Will the methodologies and mindset of the traditional humanities become increasingly anachronistic in today’s data-driven society?  Will the digital humanities have to team up with the computational social sciences and create a new discipline, similar to what happened with the emergence of cognitive science as a discipline, if traditional humanists realize that we could radically change their research methods and therefore decide that we are too heretical?  What if this departure from the traditional humanities is the only way for digital humanists to become eligible for some share of the 3 percent of the GDP that Obama has committed to scientific research?  If digital humanists decide instead to remain loyal to traditional humanists, then what are the chances that young humanists can overthrow the traditions enshrined by senior scholars?  Won’t traditional humanists fight attempts to fundamentally change their disciplines and oppose efforts to make them more open, public, collaborative, relevant, and practical?

]]>
http://chnm2009.thatcamp.org/06/03/disciplinary-heresies-and-the-digital-humanities/feed/ 10
A Giant EduGraph http://chnm2009.thatcamp.org/05/29/a-giant-edugraph/ http://chnm2009.thatcamp.org/05/29/a-giant-edugraph/#comments Fri, 29 May 2009 16:48:32 +0000 http://thatcamp.org/?p=70

Hi all,

Really exciting stuff so far! (Can we make this a week-long event?)

Here’s what I’m up to, thinking about, and hoping to get guidance about from the Manhattan-Project-scale brainpower at THATCamp.

I’ve been working on ways to use semantic web stuff to expose and connect more info about what actually goes on in our classes, and especially in our WPMU environment, UMWBlogs. So far, I’ve been slowly working on scraping techniques and visualizations of the blog data at Semantic UMW. It sounds like this is similar to Eric’s and Sterling’s interests — making connections — but in the domain of students and teachers and what they study.

The next phase of it is to get from the blog to the classroom. I want to ask and answer questions like:

  • Who’s studying the Semantic Web?
  • Is anyone teaching with “Semantic Web for the Working Ontologist”?
  • Anyone teaching from a Constructivist viewpoint?
  • What graduation requirements can I meet through courses that study things I’m interested in?
  • Can I study viral videos and meet a graduation requirement at the same time?
  • I’m a recruiter with a marketing firm. I need someone who has used Drupal, and is familiar with Linked Open Data.

I’d love to brainstorm about other kinds of questions/scenarios that people would like to answer!

(Here’s a test/demo of an earlier version, with a handful of both fake and real data. Hopefully I’ll have demos of the updated version ready to roll by THATCamp.)

Part of the mission, and one of the things I’d like to hear thoughts about, is a types classification for the things that classes study. Here’s the run-down of it right now (a rough RDFS sketch of it follows the list). I’d love to talk about where this succeeds and fails at being a general vocabulary for what classes study — maybe even whether there are things in LOC I need to learn from?

Agent (Person / Group)
Culture
Era
Language
Perspective
Phenomenon
–Social Phenomenon
–Natural Phenomenon
Place
Practice
Object
–Artifact
–Natural Object
Tool
Document
Work
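
For the RDF-curious, here is a minimal sketch in Python (using rdflib) of how a fragment of this classification might be declared as an RDFS vocabulary, reading “Agent (Person / Group)” as Agent with Person and Group as subclasses. The namespace URI is a placeholder of my own, not the actual Semantic UMW vocabulary:

```python
# A minimal sketch of a fragment of the classification as an RDFS vocabulary.
# The namespace URI below is a placeholder, not the real Semantic UMW vocabulary.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

UNIV = Namespace("http://example.org/univ#")  # placeholder namespace
g = Graph()
g.bind("univ", UNIV)

# Top-level types of things a class can study
for name in ["Agent", "Culture", "Era", "Language", "Perspective", "Phenomenon",
             "Place", "Practice", "Object", "Tool", "Document", "Work"]:
    g.add((UNIV[name], RDF.type, RDFS.Class))
    g.add((UNIV[name], RDFS.label, Literal(name)))

# Sub-types from the outline (reading "Agent (Person / Group)" as subclasses)
for child, parent in [("Person", "Agent"), ("Group", "Agent"),
                      ("SocialPhenomenon", "Phenomenon"),
                      ("NaturalPhenomenon", "Phenomenon"),
                      ("Artifact", "Object"), ("NaturalObject", "Object")]:
    g.add((UNIV[child], RDF.type, RDFS.Class))
    g.add((UNIV[child], RDFS.subClassOf, UNIV[parent]))

print(g.serialize(format="turtle"))
```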

So, that’s the kinds of stuff I’d like to share and get feedback about.

I’ve got a handful of posts on this idea (warning! some contain serious RDF geekery, some do not).

And for the folks who are interested and are familiar with SPARQL, here’s an endpoint containing the current state of the vocabs, in the graphs named www.ravendesk.org/univ# and www.ravendesk.org/univ_t#, plus a set of sample data in the graph example.university.edu/rdf/.
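
To make that a bit more concrete, here is a minimal Python sketch (using the SPARQLWrapper library) of the sort of query I have in mind for the first question above, “Who’s studying the Semantic Web?” The endpoint URL below is a placeholder for the endpoint linked here, the graph URI assumes an http:// prefix, and univ:studies / univ:taughtBy are hypothetical properties standing in for whatever the vocabulary actually uses:

```python
# Sketch of a query like "Who's studying the Semantic Web?" against the endpoint.
# Assumptions: the endpoint URL is a placeholder, the graph URI assumes an http://
# prefix, and univ:studies / univ:taughtBy are hypothetical properties.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # placeholder endpoint URL
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX univ: <http://www.ravendesk.org/univ#>
SELECT ?course ?teacher
FROM <http://example.university.edu/rdf/>
WHERE {
  ?course univ:studies ?topic ;
          univ:taughtBy ?teacher .
  ?topic  rdfs:label "Semantic Web" .
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["course"]["value"], row["teacher"]["value"])
```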

]]>
http://chnm2009.thatcamp.org/05/29/a-giant-edugraph/feed/ 2
Zotero and Semantic Search http://chnm2009.thatcamp.org/05/29/zotero-and-semantic-search/ http://chnm2009.thatcamp.org/05/29/zotero-and-semantic-search/#comments Fri, 29 May 2009 08:26:37 +0000 http://thatcamp.org/?p=62

Here is my original proposal for THATCamp, which I hoped would fit in with session ideas from the rest of you:

I would like to discuss theoretical issues in digital history in a way that is accessible and understandable to beginning digital humanists.  This is probably the common thread running through my interests and research.  I really wonder, for instance, whether digital history has its own research agenda or whether it simply facilitates the research agenda of traditional academic history.  I believe that Zotero will need a good theory for its subject indexing before it can launch a recommendation service.  Are any digital historians planning on producing any non-proprietary controlled vocabularies?  We need to have a good discussion of what the semantic web means for digital history.  Are we going to sit on our hands while information scientists hardwire the Internet with presentist ontologies?  Can digital historians create algorithmic definitions for historical context that formally describe the concepts, terms, and the relationships that prevailed in particular times and places?  What do digital historians hope to accomplish with text mining?  Are we going to pursue automatic summarization, categorization, clustering, concept extraction, entity relation, and sentiment analysis?  What methods from other disciplines should we consider when pursuing text mining?  What should be our stance on the attempt to reduce the “reading” of texts to computational algorithms and mathematical operations?  Will the programmers among us be switching over to parallel programming as chip manufacturers begin producing massively multi-core processors?  How prepared will we be to exploit the full capabilities of high-performance computing once it arrives on personal computers in the next few years?

Here is a post that just went up at my blog that addresses some of these issues and questions:

Zotero and Semantic Search

The good news is that Zotero 2.0 has arrived.  This long-awaited version allows a user to share her or his database/library of notes and citations with others and to collaborate on research in groups.  This will be a tremendous help to scholars who are coauthoring papers.  It also has a lot of potential for teaching research methods to students and facilitating their group projects.

The bad news is that I think Zotero is about to hit another roadblock.  The development roadmap says version 2.1 “will offer users a recommendation service for books, articles, and other scholarly resources based on the content in your Zotero library.”  This could mean simply that Zotero will aggregate all of the user libraries, identify overlap and similarity between them, and then offer items to users that would fit well within their library.  This would be similar to how Facebook compares my list of friends with those of other people in order to recommend to me likely friends with whom I already have a lot of friends in common.  If this was all there was to the process of a recommendation system in Zotero, then I think Zotero would meet its goal.  But if Zotero is to live up to its promise to enable users to discover relevant sources, then I think there is still a lot of work to be done.
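
To illustrate what that overlap-only approach would look like, here is a toy sketch in Python with made-up data — my own illustration of recommendation by shared items, not Zotero’s actual code:

```python
# Toy "shared information" recommender: suggest items held by the libraries
# most similar to mine, measured only by how many items we already share.
# All names and items below are made up for illustration.
from collections import Counter

libraries = {
    "library_1": {"Book A", "Book B", "Book C"},
    "library_2": {"Book A", "Book C", "Book D"},
    "library_3": {"Book E", "Book F"},
}
my_library = {"Book A", "Book C"}

def jaccard(a, b):
    """Overlap between two libraries: size of intersection over size of union."""
    return len(a & b) / len(a | b)

# Weight each candidate item by the similarity of the libraries that hold it.
scores = Counter()
for library in libraries.values():
    similarity = jaccard(my_library, library)
    for item in library - my_library:
        scores[item] += similarity

print(scores.most_common())  # items from overlapping libraries outrank those from dissimilar ones
```

Notice that nothing in this score knows what any of the items are about — which is exactly the limitation I worry about below.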

This may seem like a distinction without a difference.  My point is a subtle one and hopefully some more examples will illustrate what I am trying to say. But first let’s define the semantic web.  According to Wikipedia, “The semantic web is a vision of information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing, and combining information on the web.”  Zotero fulfils this vision when it captures citation information from web sites and makes it available for sharing and editing.  Amazon does something similar with its recommendation service.  It keeps track of what books people purchase, identifies patterns in these buying behaviors, and then recommends books that customers will probably like.  Zotero developers have considered using a similar system to run Zotero’s recommendation engine.  These are examples of the wisdom of the crowd in the world of web 2.0 at its best.

Unfortunately, there are limits to how much you can accomplish through crowdsourcing.  Netflix figured this out recently and is offering $1 million to whoever can “improve the accuracy of predictions about how much someone is going to love a movie based on their movie preferences.”  The programming teams in the lead have, through trial and error, figured out that they needed to extract rich content from sites like the Internet Movie Database in order to refine their algorithms and predictive models.  This is kind of like predicting the weather; the more variables you can include in your calculations, the better your prediction will be.  However, in the case of movies, the concepts used to classify them are somewhat subjective.  Without realizing it, these prize-seeking programmers have been developing an ontology for movies.  (That may be a new word for you–according to Wikipedia, “an ontology is a formal representation of a set of concepts within a domain and the relationships between those concepts.”)  Netflix is essentially purchasing a structured vocabulary and matching software that will allow it to vastly improve the accuracy of its recommendation engine when it comes to predicting what movies its customers will like.

One company that has taken ontologies quite seriously is Pandora, a “personalized internet radio service that helps you find new music based on your old and current favorites.”  The company tackled head-on the problem of semantically representing music by creating the Music Genome Project, which categorizes every song in terms of nearly 400 attributes.  And here is where the paradigm shift becomes evident.  Rather than aggregating and mining the musical preferences of groups of people, like what Amazon does with its sales data on books, Pandora defines similarity between songs in terms of conceptual overlap.  In other words, two songs are related to one another in the world of Pandora because they share a whole bunch of attributes–not because similar people listen to similar music.  (I told you this would be a subtle distinction.)  This is an example of how the semantic web trumps web 2.0.
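
Restating the same toy exercise in the Pandora style (again my own sketch, with a handful of invented attributes — the real Music Genome Project uses hundreds), similarity is now computed from what the songs are, not from who listens to them:

```python
# Attribute-based similarity in the Pandora style: two songs are related because
# they share attributes, not because similar people listen to them.
# The attributes below are invented; the Music Genome Project uses hundreds.
songs = {
    "Song A": {"minor key", "acoustic", "female vocals", "folk roots"},
    "Song B": {"minor key", "acoustic", "male vocals", "folk roots"},
    "Song C": {"major key", "electronic", "instrumental"},
}

def attribute_similarity(x, y):
    """Fraction of attributes the two songs have in common."""
    return len(songs[x] & songs[y]) / len(songs[x] | songs[y])

print(attribute_similarity("Song A", "Song B"))  # high: strong conceptual overlap
print(attribute_similarity("Song A", "Song C"))  # zero: nothing in common
```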

Now let’s return to our discussion of Zotero.  As mentioned earlier, the envisioned recommendation engine for Zotero has been compared to Amazon’s recommendation engine.  The ability of users to add custom tags underscores how Zotero was influenced by web 2.0 models.  Apparently Zotero developers looked forward to the day when the “data pool” in Zotero would reach critical mass and enable the recommendation system to predict what items users would want to add to their libraries.  As we have seen, these models have inherent limitations.  They make recommendations on the basis of shared information, rather than on the basis of similarity between concepts.  I think at some level Zotero developers sensed this problem.  That is probably why they designed Zotero to capture terms from controlled vocabularies as part of the metadata it downloads from online databases.  Unfortunately, though, some users and developers have said that imported tags, such as subject headings from library catalogs, are pretty much useless in Zotero.  Furthermore, the fact that Zotero comes with a button for turning off “automatic” tags, and that some translators are sloppy about capturing subject headings or fail to capture them at all, suggests that most users would rather avoid using these terms from controlled vocabularies.

And so the problem with Zotero is that its users and developers generally resist incorporating ontologies into their libraries (item types and item relations/functions are notable exceptions).  That may sound like a very abstract thing to say.  So let me provide you with some concrete examples of what this would look like.  The first is a challenge I would like to issue to the Zotero developers.  It has been said that Zotero would allow a group of historians to collaboratively build a library on “a topic lacking a chapter in the Guide to Historical Literature.”  My “bibliographer test” is a slight variation on this: 1) pick any section in this bibliographic guide, 2) enter all but one of the books in the given bibliography into Zotero, and 3) program Zotero’s recommendation engine so that, in the majority of cases, it can identify the item missing from the library.  Similarly, I would like to see us develop algorithms for “related records searches.”  You may think this is impossible, but this capability already exists in the Web of Science database.  And as we have already seen, Netflix and Pandora provide examples of the kind of semantic work it takes to make these types of searches feasible.
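
For anyone who wants to experiment, here is a rough Python harness for the “bibliographer test,” with hypothetical subject headings standing in for real controlled-vocabulary terms: hold one item out, rank candidates by conceptual overlap with the rest of the library, and see whether the held-out item comes back on top:

```python
# Rough harness for the "bibliographer test": can a concept-overlap recommender
# recover an item deliberately left out of the library?  The subject headings
# below are hypothetical placeholders for real controlled-vocabulary terms.
bibliography = {
    "Book A": {"Reconstruction", "Virginia", "Social conditions"},
    "Book B": {"Reconstruction", "Freedmen", "Virginia"},
    "Book C": {"Reconstruction", "Politics and government"},
}
held_out = "Book C"

# The test library is every heading from the bibliography except the held-out item.
library_terms = {term for title, terms in bibliography.items()
                 if title != held_out for term in terms}

# Candidate pool: the held-out item plus distractors that do not belong.
candidates = {
    held_out: bibliography[held_out],
    "Unrelated Book 1": {"Astronomy", "Telescopes"},
    "Unrelated Book 2": {"Cookery", "French"},
}

def conceptual_overlap(item_terms):
    """How many of the candidate's subject headings already appear in the library."""
    return len(item_terms & library_terms)

ranking = sorted(candidates, key=lambda t: conceptual_overlap(candidates[t]), reverse=True)
print("recovered" if ranking[0] == held_out else "missed")
```

A real version would of course use actual subject headings and much larger candidate pools, but the shape of the test is the same.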

After reading this post, you may feel that Zotero has been heading down the wrong path.  I prefer to think of Zotero as having made some amazing progress over the last three years.  And I think the genesis of the ideas it needs is already in place.  In my estimation, we need to think more expansively about what it means to carry out semantic searches with Zotero.  It also seems to me that we need to think more carefully about balancing the benefits of web 2.0 with the sophistication of the semantic web.  I will be excited to see what the developers come up with.  And maybe if I work more on my programming skills, I can help with writing the code.  As I see it, this will be an exciting opportunity for carrying out theoretical research in the digital humanities.

]]>
http://chnm2009.thatcamp.org/05/29/zotero-and-semantic-search/feed/ 14
Context and connections http://chnm2009.thatcamp.org/05/28/illuminating-context-and-connections/ http://chnm2009.thatcamp.org/05/28/illuminating-context-and-connections/#comments Thu, 28 May 2009 15:37:47 +0000 http://thatcamp.org/?p=60

I’ve been doing a lot of thinking about ways to get primary source documents to “talk” to each other and to the cloud of secondary sources that surround them.  For example, at Monticello we’re working on a digital version of Jefferson’s memorandum books (60 years’ worth of purchases made, places visited, people seen, etc.) and want to enrich it far beyond simply getting the text on the web.  Can we make that incredible information come alive in a rich and user-friendly way?  Put these and other primary sources into a broader context of people, events, ideas?  Connect these seamlessly with secondary sources treating the same topics?  Can we decentralize the process to pull information from non-Monticello assets?  What visualization tools will help?

Or another version of the same “problem.”  Thomas Jefferson wrote between eighteen and nineteen thousand letters in his lifetime and received several times that number from other writers.  What are ways to illuminate the connections among those letters?  What are ways to permit an easy understanding of the larger (political, social, material, geographic) contexts in which that correspondence took place?  Are there good tools that will let people explore letters by theme?  And beyond that, can the same solutions be applied to other correspondents at other times in other places (and, ultimately, turned into a giant killer spiderweb of correspondence)?

]]>
http://chnm2009.thatcamp.org/05/28/illuminating-context-and-connections/feed/ 12
Patchwork Prototyping a Collections Dashboard http://chnm2009.thatcamp.org/05/27/patchwork-prototyping-a-collections-dashboard/ http://chnm2009.thatcamp.org/05/27/patchwork-prototyping-a-collections-dashboard/#comments Wed, 27 May 2009 15:33:29 +0000 http://thatcamp.org/?p=53

In days of yore, the researcher had a limited set of tools at their disposal to get a broad sweeping view of what a research collection consisted of.   There might be a well-crafted NUCMC entry,  a quick glance at a finding aid, a printed catalogue, or a chat with an archivist or librarian.  Sometimes these pieces of information might tell you how much of a collection might meet your research needs (and correspondingly how many days you should plan to spend working with a collection).

Unfortunately many of our digital collections still rely on modes of presentation and description that are based on analog interfaces to collections.   With increasingly large repositories (gathered into even larger aggregations) it is often hard for the researcher to know just how deep a particular rabbit hole goes.  Improved search capabilities help solve part of this problem, but they can often impede serendipitous discoveries and unexpected juxtapositions of materials.

As part of our work to update the IMLS Digital Collections and Content project’s Opening History site, we are exploring ways that we can make the contours of a collection more explicit, develop modes of browsing that facilitate discovery, and provide researchers a sense of what’s available at different levels.

I’m looking forward to THATCamp because this looks like a great group of people to brainstorm with.   Thus far, we’ve been using a participatory design technique known as “patchwork prototyping.”  By the time of THATCamp we’ll have a few pieces of a prototype together for review.   If others are interested, I would be willing & able to lead a session that explores the general problem space using Opening History and any other collections that participants suggest.

]]>
http://chnm2009.thatcamp.org/05/27/patchwork-prototyping-a-collections-dashboard/feed/ 6
The ill-formed question http://chnm2009.thatcamp.org/05/25/the-ill-formed-question/ http://chnm2009.thatcamp.org/05/25/the-ill-formed-question/#comments Mon, 25 May 2009 11:58:59 +0000 http://thatcamp.org/?p=46

Since sending in the brief blurb for THATCamp I’ve gone through the latest edition of McKeachie’s Teaching Tips book and spent some time pondering what’s necessary to make a seminar work. In some ways this is designing from the back end: for online graduate programs in the humanities or social sciences to work for a large segment of potential students, the classes have to accomplish a certain number of things, and that requires a certain (but undefined) intensity of exchange. I’m afraid I’ve got the Potter Stewart problem with definition here: I can’t tell you what constitutes sufficient intensity, but I know it when I’ve experienced it as either a teacher or student.

It’s certainly possible to construct that intensity in live chats, but since most online classes I’ve seen or taught are asynchronous, I have to think differently from “Oh, I’ll just transpose.” (Here, you can insert platitudinous warnings about uploading PPTs and thinking you’re done.) But while several colleagues have pointed me to some of their online discussions with deep threads (and at least at face value, it seems like intensity to me), that doesn’t help, in the same way that telling a colleague, “Oh, my seminars work great; what’s wrong with you?” isn’t sufficient.

So let me step back and reframe the issue: the existence of great conversation in a setting is not helpful to the central problem of running a seminar. In some ways, it’s a type of chauvinism (“you can have better conversations in this setting”), and that prevents useful conversations about what a seminar experience requires. Not a seminar class online or a face-to-face seminar, but a seminar class in any setting.

Unfortunately, while I have searched, I have not been able to come across ethnographic or other qualitative research on this. There are plenty of how-to guides for running face-to-face discussion, but I am hungry for something beyond clinical-experience suggestions. There is some decent research on transactional distance, and cognitive apprenticeship is an interesting concept, but neither is that satisfying.

So back to basics and some extrapolation. In my most memorable literature classes, and in informal conversations around books, plays, movies, and poems, I’ve been entranced by how others think that writing works–maybe not in the same way that James Wood would parse it, but in some way.

“What does this mean? Was it good or bad? Why did that appear then? No, no, think about these moments, because she could have done something different. They swept in at the end, and that’s why it’s called deus ex machina.”

That’s the type of conversation I imagine for and remember from seminars: close readings, fast exchanges, excruciating pauses while I tried to piece ideas together, rethinking/reframing on the fly. Never mind that I’m an historian, and never mind the excruciating boredom in plenty of classes; the texture of intense conversation stuck in my brain is derived from conversations about novels, poems, plays, and movies.

And as fellow historians of ed David Tyack and Larry Cuban would point out, I have relied on this experience as a “grammar” that I would be predisposed to impose on online seminars. But as my original proposal for THATCamp pointed out, I don’t think the world (or learning) works in the same way everywhere.

What can be extrapolated from the best face-to-face seminars beyond the setting-specific events? I’ll propose that the best seminar classes are ill-formed questions, puzzles with weakly- but effectively-defined targets. Here, I am using “ill-formed” not in the sense of grammar but in the sense of a question that is not itself the best approach to a topic, and in this case, deliberately so. The best framing of an history class I ever took as either an undergraduate or graduate was Susan Stuard‘s course on early modern Europe. In essence, it was historiography, but framed as, “How do we explain the rise of modern just-pre-industrial Europe?” That was a great focus, but it was ill-formed in that it did not have a closed-form answer. The answers we read about and argued over were hypotheses that led to different questions. The course did not finish with our finding an (intellectual) pot of gold, but it was a great way to structure a class.

In many ways, problem-based learning uses the ill-formed question, “How do we solve this problem?” That question assumes a problem, a problem definition, and a potential solution, and of course the value is not in the solution itself but in the development of analysis and the application of important concepts in the setting of problems. In this case, the course goal is not the motivating question, but the question is essential to meeting the goal.

Problem-based learning is great when it fits the goals of the course. Not all courses can be designed around problems, and if a seminar is online and asynchronous, I suspect that the loose “how does literature work?” question is not going to… well, work. But the ill-formed question can appear in more than the examples I have described or experienced.

]]>
http://chnm2009.thatcamp.org/05/25/the-ill-formed-question/feed/ 4
Dorn proposal for 2009 THATCamp http://chnm2009.thatcamp.org/05/17/dorn-proposal-for-2009-thatcamp/ http://chnm2009.thatcamp.org/05/17/dorn-proposal-for-2009-thatcamp/#comments Sun, 17 May 2009 20:17:38 +0000 http://thatcamp.org/?p=39

For the record, below is what I proposed for THATCamp. Since I wrote the following months ago, I’ve had additional thoughts on where to go with this, but origins and drafts matter, so here it is, warts and all:

Rant/discussion/query:

Dialog: (How) can we generate and maintain the type of dramatic/performative classroom interaction in an online environment that exists in the best discussion and seminar classes? Face-to-face classes have a spontaneity that generates such dialogue, and teachers or facilitators can play the Devil’s-advocate role in a way that hones the issues moment to moment, iteratively. But in an asynchronous environment, there is no such inherent moment-to-moment tension and drama. This is one essence of humanities classes that I have been unable to replicate online, and technology skeptics such as Margaret Soltan doubt it is possible.

Central questions:
Are there elements of a live-dialogue drama that can be translated into an asynchronous environment, or should we give up on the “aha!” moment embedded in an argument?
If the first, what are those elements?
If the second, how do we pick different goals that still serve that conversational, perspective-shifting goal for the liberal arts?

]]>
http://chnm2009.thatcamp.org/05/17/dorn-proposal-for-2009-thatcamp/feed/ 8