
Literary mapping and spatial markup

Wednesday, June 10th, 2009 | amanda watson

I’ve been thinking a lot lately about the uses of digital maps in literary study, partly because I’ve been thinking about the connections between place and memory for a long time, and partly because I got interested in GIS a few years ago, while working in the UVa Library’s Scholars’ Lab along with some extremely smart geospatial data specialists. There’s been talk of a “spatial turn” in the humanities lately, and there are already models for what I’m interested in doing. Franco Moretti’s maps of literary places in Atlas of the European Novel 1800-1900 and Barbara Piatti’s in-progress Literary Atlas of Europe have helped me think about the patterns that a map can help reveal in a work of fiction. I’m very much looking forward to hearing about Barbara Hui’s LitMap project, which looks a lot like what I’d like to make: a visualization of places named in a text and stages in characters’ journeys.

Since I came to the digital humanities via a crash course in TEI markup, I tend to think first of markup languages as a way to represent places, and capture place-related metadata, within a literary text. The TEI encoding scheme includes place, location, placeName, and geogName elements, which can be used to encode a fair amount of geographic detail that can then be keyed to a gazetteer of place names. But there are also markup languages designed specifically for representing geospatial information (GML, SpatialML) and for displaying it in programs like Google Earth (KML). Using some combination of a database of texts, geographic markup, and map display tools seems like a logical approach to the problem of visualizing places in literature.
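
By way of illustration, here’s a minimal sketch in Python of the tail end of that pipeline: it reads a small TEI-style list of places and writes them out as KML placemarks that Google Earth can display. Everything in it is illustrative rather than authoritative: the place records and coordinates are invented, it assumes the lxml library, and it skips the things a real project would need (gazetteer lookup, places without coordinates, proper escaping, and so on).

```python
# A minimal, hypothetical sketch: pull <place> records out of a TEI-style
# list and write them out as KML placemarks for display in Google Earth.
# The places and coordinates below are invented for illustration.
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"

tei = etree.fromstring(f"""
<listPlace xmlns="{TEI_NS}">
  <place xml:id="newport">
    <placeName>Newport</placeName>
    <location><geo>41.4901 -71.3128</geo></location>
  </place>
  <place xml:id="fifth-avenue">
    <placeName>Fifth Avenue</placeName>
    <location><geo>40.7744 -73.9656</geo></location>
  </place>
</listPlace>
""".strip().encode())

placemarks = []
for place in tei.findall(f"{{{TEI_NS}}}place"):
    name = place.findtext(f"{{{TEI_NS}}}placeName")
    geo = place.findtext(f"{{{TEI_NS}}}location/{{{TEI_NS}}}geo")
    lat, lon = geo.split()
    # KML expects longitude,latitude order
    placemarks.append(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
    )

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(placemarks)
       + "</Document></kml>")

with open("wharton_places.kml", "w") as out:
    out.write(kml)
```

A fuller version would presumably keep the place list in the TEI header (or in a back-matter listPlace) and point in-text placeName elements at it with ref attributes, but the basic text-to-gazetteer-to-KML flow would be the same.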

But (as I’ve said on my own blog, with a different set of examples) I’m also interested in spatial information that’s harder to represent. There are a lot of ways in which literary settings don’t translate well to points on a map. Lots of authors invent fictitious places, and even when one can identify more or less where they’re supposed to be, one can’t supply exact coordinates. Consider Edith Wharton’s The House of Mirth and The Age of Innocence, both set in New York, one at the turn of the century and the other in the 1870s. One of my ongoing Google Maps experiments is a map of named places in both novels, focused on New York and Newport, Rhode Island. Both novels explore an intricate, minutely-grained social world, in which a character’s address says a great deal about his or her status. In some cases, the reader can precisely identify streets, points, and landmarks. And I think you can learn quite a lot about the world of Edith Wharton’s novels by looking at the spatial relationships between high society and everyone else, or between old and new money, or between a character’s point of entry into the world of the novel and where (physically and socially) he or she ends up.

But in other cases the locations are harder to pin down. One can surmise where Skuytercliff, the van der Luydens’ country house in The Age of Innocence, is (somewhere on the Hudson River, not far from Peekskill), but it’s a fictional house whose exact location is left vague. The blob labeled “Skuytercliff” on my map represents a conjecture. And of course geoparsing won’t work if the place names are imaginary and the coordinates are unknown. So: what do we do with unreal places that still have some connection to the geography of the real world? And what if we want to visualize a writer’s completely imaginary spaces? What if we move out of fiction and into less setting-dependent literary forms, like poetry? How would one even begin to map settings in the work of, say, Jorge Luis Borges? Are there limits to the usefulness of visualization when we use it to analyze things that are fundamentally made out of words? Are some texts “mappable” and others much less so? (I’m pretty sure the answer to that last one is “yes.” I have yet to encounter an approach to literature that works equally well for everything from all time periods.)

So what I’d like to bring to the table at THATCamp is a set of questions to bounce off of people who’ve done more work with cartographic tools than I have. In some ways, my interests resonate with Robert Nelson’s post on standards, since I’m also thinking about what to do when the objects of humanistic study (in this case, literature) turn out to be too complex for the standards and data models that we have. If we end up having that session on standards, I’d like to be in on it. But I hope there are also enough people for a session on mapping and the representation of place.

From History Student to Webmaster?

Wednesday, June 10th, 2009 | jamesdcalder

Here’s my original proposal (or part of it at least):

“I would like to discuss the jarring, often difficult and certainly rewarding experiences of those, like myself, who have somehow managed to make the leap from humanities student to digital historian/webmaster/default IT guy without any formal training in computer skills.  While I am hoping that such a discussion will be helpful in generating solutions to various technical and institutional barriers that those in this situation face, I am also confident that meeting together will allow us to better explain the benefits that our unique combination of training and experience brings to our places of employment.  I would also be very interested to see if we could produce some ideas about how this group could be better supported in our duties both by our own institutions and through outside workshops or seminars.”

I’m not sure if this is the right place for this discussion, as I’m guessing that many campers may not share these difficulties.  However, if enough people are interested, I think I’ll go with it.  Related to this discussion, I would also like to talk about people’s experiences or recommendations for resources that could be useful to digital historians in training, as well as better ways to get our message about web 2.0, open source technologies, freedom of information, etc. to our colleagues.

Anyways, let me know what you all think.

Omeka playdate open to THATCampers

Wednesday, June 10th, 2009 | Dave

The Friday before THATCamp (June 26th) we’ll be holding an Omeka “playdate” that’s open to campers and anyone else who would like to attend. Interested in learning more about Omeka? Already using Omeka and want to learn how to edit a theme? Want to build a plugin or have advanced uses for the software? This workshop is a hands-on opportunity to break into groups of similar users, meet the development and outreach teams, and spend part of the day hanging around CHNM.

We’ve increased the number of open spots, and would love to see some THATCampers sign up as well. If you plan on attending, please add your name to the wiki sign-up.  Remember to bring your laptop!

How to get money, money, money for wild and crazy times!!

Tuesday, June 9th, 2009 | Jeffrey McClurken

Okay, not really.  But I do think this topic is particularly important right now.

This was my original proposal:
I’d like to talk about the role of faculty, IT, and administrators in collaborating to shape institutional strategic plans and planning in general for academic computing and the digital humanities.  I’ve spent nearly 18 months now involved in various strategic and practical planning committees at UMW regarding digital resources and goals for the humanities and social sciences.  Making sure that resources are allocated to the digital humanities requires broad commitments within administrative and strategic planning.  [Not as sexy or fun as WPMU or Omeka plug-ins, but sadly, just as important….]  I’d like to share my own experiences in the area and hear from others about theirs.

And today I would simply add that as UMW is closing in on a first draft of its strategic plan, I’m even more convinced that the college/university-wide planning process is something with which digital humanists need to be engaged.  In this time of dwindling economic resources, however, we also need to be, pardon the pun, strategic about it.  I think we need to figure out when we need to explain concepts, tools, the very notion of what digital humanities is and its place in the curriculum (something even THATCampers seem to be debating), when we need to do full-on DH evangelizing, and when we need to back off from our evangelizing in order to ease fears and/or recognize budgetary realities.  In any case, who else has had to make the case for Digital Humanities or academic technology as part of these processes?

Standards

Monday, June 8th, 2009 | rob nelson

Here’s my original proposal for THATCamp. The questions and issues I’m interested in entertaining dovetail nicely, I think, with those that have been raised by Sterling Fluharty in his two posts.


The panel at last year’s THATCamp that I found the most interesting was the one on “Time.” We had a great discussion about treating historical events as data, and a number of us expressed interest in what an events microformat/standard might look like. I’d be interested in continuing that conversation at this year’s THATCamp. I know Jeremy Boggs has done some work on this, and I’m interested in developing such a microformat so that we can expose more of the data in our History Engine for others to use and mash up.

While I’d like to talk about that particular task, I’d also be interested in discussing a related but more abstract question that might be of interest to more THATCampers. Standards make sense when dealing with discrete, structured, and relatively simple kinds of data (e.g. bibliographic citations, locations), but I’m wondering whether much of the evidence we deal with as humanists requires so much individual interpretation to turn into structured data that developing interoperability standards might not make much sense. I’m intrigued by the possibility of producing data models that represent complex historical and cultural processes (e.g. representing locations and time in a way that respects and reflects a Native American tribe’s sense of time and space, etc.). An historical event doesn’t seem nearly that complicated, but even with it I wonder if as humanists we might not want a single standard but instead want researchers to develop their own idiosyncratic data models that reflect their own interpretation of how historical and cultural processes work. I’m obviously torn between the possibilities afforded by interoperability standards and a desire for interpretive variety that defies standardization.


In his first post, Sterling thoughtfully championed the potential offered by “controlled vocabularies” and “the semantic web.” I too am intrigued by the possibilities that ontologies, both modest and ambitious, offer, say, to find similar texts (or other kinds of evidence), to make predictions, to uncover patterns. (As an aside, but on a related subject, I’d be in favor of having another session on text mining at this year’s THATCamp if anyone else is interested.) Sterling posed a question in his proposal: “Can digital historians create algorithmic definitions for historical context that formally describe the concepts, terms, and the relationships that prevailed in particular times and places?” I’m intrigued by that ambitious enterprise, but as my proposal suggests I’m cautious and skeptical for a couple of reasons. First, I’m dubious that most of what we study and analyze as humanists can be fit into anything resembling an adequate ontology. The things we study–e.g. religious belief, cultural expression, personal identity, social conflict, historical causation, etc., etc.–are so complex, so heterogeneous, so plastic and contingent that I have a hard time envisioning how they can be translated into and treated as structured data. As I suggested in my proposal, even something as modest as an “historical event” may be too complex and subjective to be the object of a microformat. Having said that, I’m intrigued by the potential that data models offer to consider quantities of evidence that defy conventional methods, that are so large that they can only be treated computationally. I’m sure that the development of ambitious data models will lead to interesting insights and help produce novel and valuable arguments. But–and this brings me to my second reservation–those models or ontologies are, of course, themselves products of interpretation. In fact they are interpretations–informed, thoughtful (hopefully) definitions of historical, cultural relationships. There’s nothing wrong with that. But adherence to “controlled” vocabularies or established “semantic” rules or any standard, while unquestionably valuable in terms of promoting interoperability and collaboration, defines and delimits interpretation and interpretative possibility. I’m anti-standards in that respect. When we start talking about anything remotely complex–which includes almost everything substantive we study as humanists–I hope we see different digital humanists develop their own idiosyncratic, creative data models that lead to idiosyncratic, creative, original, thoughtful, and challenging arguments.

All of which is to say that I second Sterling in suggesting a session on the opportunities and drawbacks of standards, data models, and ontologies in historical and humanistic research.

Digital Humanities Manifesto Comments Blitz

Monday, June 8th, 2009 | Tom Scheinfeldt

I just managed to read UCLA’s Digital Humanities Manifesto 2.0 that made the rounds a week or so ago, and I noticed its CommentPress installation hadn’t attracted many comments yet. Anyone interested in a session at THATCamp where we discuss the document paragraph by paragraph (more or less) and help supply some comments for the authors?

Campfire Plans

Wednesday, June 3rd, 2009 | Richard Urban

Maybe this isn’t the right venue, but it’s never too early to start talking about extracurricular activities. What happens Saturday/Sunday night? Will Amanda French be leading us in a round of digital humanities songs around the campfire?

An installation

Wednesday, June 3rd, 2009 | david staley

Colleagues,

I, too, am eager for the camp to begin, and seeking your insights for the project I will be presenting.

I will be using the video wall in the Showcase center to display a digital installation titled “Syncretism,” which will run for both days of the camp. The piece is an associative assemblage of still images that each depict instances of cultural syncretism; juxtaposed, the images suggest associations and analogies, and thus a larger theme, between differing instances of cultural syncretism (for example, images of “English-style Indian food” set next to skyscrapers in Shanghai and a rickshaw driver in Copenhagen).

I am seeking feedback both on the visual message of the installation itself and on the idea of an installation as an example of scholarly performance in the humanities. Is there space in the humanities for a “humanities-based imagist”?

I don’t know if I should propose a separate session to discuss these themes, or whether I should informally speak with you all during the conference while the installation runs.

In any event, I am eager to hear your thoughts about the installation.

Disciplinary Heresies and the Digital Humanities

Wednesday, June 3rd, 2009 | sterling fluharty

Cross-posted at Clio Machine:

(This post is a continuation of some of the questions I raised in my original THATCamp proposal.)

Are the humanities inherently valuable, both in terms of the skills they impart to students and because the value of humanistic scholarship cannot be validated by external (often quantitative) measures?  Or are the humanities experiencing a crisis of funding and enrollments because they have not adequately or persuasively justified their worth?  These debates have recently resurfaced in the popular press and in academic arenas.  Some commentators would point to the recession as the primary reason why these questions are being asked.  We should also consider the possibility that the mainstreaming of the digital humanities over the last couple of years is another (but overlooked) reason why questions about the value and worth of the traditional humanities are being taken more seriously.


A Giant EduGraph

Friday, May 29th, 2009 |

Hi all,

Really exciting stuff so far! (Can we make this a week-long event?)

Here’s what I’m up to, thinking about, and hoping to get guidance about from the Manhattan-Project-scale brainpower at THATCamp.

I’ve been working on ways to use semantic web stuff to expose and connect more info about what actually goes on in our classes, and especially in our WPMU environment, UMWBlogs. So far, I’ve been slowly working on scraping techniques and visualizations of the blog data at Semantic UMW. It sounds like this is similar stuff to Eric’s interest and Sterling’s interest — making connections — but in the domain of students and teachers and what they study.

The next phase of it is to get from the blog to the classroom. I want to ask and answer questions like:

  • Who’s studying the Semantic Web?
  • Is anyone teaching with “Semantic Web for the Working Ontologist”?
  • Anyone teaching from a Constructivist viewpoint?
  • What graduation requirements can I meet through courses that study things I’m interested in?
  • Can I study viral videos and meet a graduation requirement at the same time?
  • I’m a recruiter with a marketing firm. I need someone who has used Drupal, and is familiar with Linked Open Data.

I’d love to brainstorm about other kinds of questions/scenarios that people would like to answer!

(Here’s a test/demo of an earlier version, with a handful of both fake and real data. Hopefully I’ll have demos of the updated version ready to roll by THATCamp.)

Part of the mission, and one of the things I’d like to hear thoughts about, is a types classification for the things that classes study. Here’s the run-down of it right now. I’d love to talk about where this succeeds and fails at being a general vocabulary for what classes study, and maybe even about whether there are things in LOC I need to learn from.

Agent (Person / Group)
Culture
Era
Language
Perspective
Phenomenon
–Social Phenomenon
–Natural Phenomenon
Place
Practice
Object
–Artifact
–Natural Object
Tool
Document
Work
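
To make that a bit more concrete, here’s a rough sketch in Python (using rdflib) of how a fragment of the classification might be declared as an RDF vocabulary and used to type something a class studies. The namespace, class names, and example topic below are placeholders of mine, not the actual Semantic UMW terms.

```python
# A hypothetical sketch, not the real Semantic UMW vocabulary: declare a few
# of the study-type classes as an RDFS hierarchy and type one example topic.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

STUDY = Namespace("http://example.org/univ#")  # placeholder namespace
g = Graph()
g.bind("study", STUDY)

# A few of the top-level types from the run-down above
for name in ("Agent", "Culture", "Era", "Place", "Phenomenon"):
    g.add((STUDY[name], RDF.type, RDFS.Class))

# The two Phenomenon sub-types
g.add((STUDY.SocialPhenomenon, RDFS.subClassOf, STUDY.Phenomenon))
g.add((STUDY.NaturalPhenomenon, RDFS.subClassOf, STUDY.Phenomenon))

# An example of something a class might study: viral videos as a social phenomenon
g.add((STUDY.ViralVideos, RDF.type, STUDY.SocialPhenomenon))
g.add((STUDY.ViralVideos, RDFS.label, Literal("Viral videos")))

print(g.serialize(format="turtle"))
```

One nice thing about doing it this way is that the sub-typing (Social vs. Natural Phenomenon, Artifact vs. Natural Object) lives in the schema rather than in the data, so the vocabulary itself can stay small.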

So, that’s the kinds of stuff I’d like to share and get feedback about.

I’ve got a handful of posts on this idea (warning! some contain serious RDF geekery, some do not).

And for the folks who are interested and are familiar with SPARQL, here’s an endpoint containing the current state of the vocabs, in graphs named www.ravendesk.org/univ# and www.ravendesk.org/univ_t#. There’s also a set of sample data in the graph example.university.edu/rdf/.
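
If you’d rather see it in code, a query against the sample data might look roughly like the sketch below (Python with the SPARQLWrapper library). The endpoint URL and the study:studies predicate and SocialPhenomenon class are guesses of my own rather than the actual terms; only the graph name and vocab namespace come from the paragraph above.

```python
# Hypothetical sketch: ask the endpoint which courses study a given kind of thing.
# The endpoint URL and the property/class names are guesses, not the actual
# Semantic UMW terms; the graph name is the one mentioned above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.university.edu/sparql")  # placeholder endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX study: <http://www.ravendesk.org/univ#>
    SELECT ?course ?topic WHERE {
      GRAPH <http://example.university.edu/rdf/> {
        ?course study:studies ?topic .
        ?topic a study:SocialPhenomenon .
      }
    }
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["course"]["value"], "studies", row["topic"]["value"])
```

The same pattern would presumably cover most of the questions in the list above (who’s teaching with a given book, which courses meet a requirement while studying topic X), just with different predicates.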
