Week 3: the gentle art of storytelling with GIS

Okay, so this week’s unit isn’t really about statistics, but I’ve felt my data illiteracy more strongly these past few days! Working with QGIS was an exercise in experimentation as much as frustration, and maybe this illustrates one of the assumptions of GIS projects: that researchers will already have a clear question they want to ask of the sources, or a specific hypothesis they want to test, rather than exploring in the hopes of discovery.

It wasn’t all bad to start; getting into QGIS and joining the polygon ‘map’ layers with a related dataset worked mostly as expected. Not having much experience with election maps (or much idea of what to do with these varied numbers), though, I had serious difficulty working with the Fairfax Congressional Election Results dataset. Thanks to a patient classmate, it became clear that this would require more than just joining two compatible tables—the Fairfax data would necessarily require multiple layers to do any kind of comparison between candidates. After more “see what happens if I do this,” I was able to get an approximation of an election map, showing which candidates won with at least 40% of the vote across the precincts. Intellectually, I recognize that this can be used—and misused—for many purposes. For one, these visualizations really only seem capable of highlighting the winning candidates. Without overlays or animations that could be controlled by a user, it’s almost impossible to see anyone outside of the two major parties.
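To make that reshaping problem concrete, here’s a minimal sketch of the winnowing step in pandas. The column names and vote counts are invented for illustration (the actual work happened through QGIS table joins, not code), but the logic is the same: compute each candidate’s share per precinct, then keep only winners who cleared 40%.

```python
import pandas as pd

# Invented stand-in for the long-format results: one row per
# precinct/candidate pair, which is why a simple 1:1 table join
# onto the precinct polygons wasn't enough.
results = pd.DataFrame({
    "precinct":  ["001", "001", "002", "002", "003", "003", "003"],
    "candidate": ["A", "B", "A", "B", "A", "B", "C"],
    "votes":     [520, 480, 300, 700, 380, 330, 290],
})

# Each candidate's share of their precinct's total vote.
results["share"] = (results["votes"]
                    / results.groupby("precinct")["votes"].transform("sum"))

# Keep the top candidate per precinct, but only where they cleared 40%.
winners = (results.sort_values("share", ascending=False)
                  .drop_duplicates("precinct"))
winners = winners[winners["share"] >= 0.40]
print(winners)
```

Precinct 003 drops out entirely because no candidate cleared 40%, which is exactly how a map like mine ends up hiding everyone outside the frontrunners.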

[Screenshot: QGIS visualization of Fairfax County congressional election results, 2014]
Please, nobody use this terrible visualization for your civics class.

With the London Plague dataset, I felt more comfortable figuring out what needed to be visualized (the spread of plague week by week) and recognizing how my choices in defining and grouping the data could tell this story. Too few classes, and it appears as though plague descended virtually overnight on half of London’s parishes; likewise, an arbitrary choice of distribution pattern obscures any useful visual data on when the plague reached certain regions. I wasn’t fully sure how each mode worked to divide/distribute the data, but the tried-and-true method of “click and see what happens” was surprisingly useful.1 I realized the “Equal Count (Quantile)” mode was evenly dividing the number of data points across the classes—which might be useful in other visualization applications, but maybe less so if one is interested in seeing a pattern of spread over time. Given that, I switched between different presets to eventually land on classes spanning roughly four weeks each, which seemed to provide enough granularity to show patterns of change over time. This approach made sense while also highlighting the perils of GIS: it’s very easy to lie with visualizations.
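For anyone else squinting at those mode names, here’s a rough sketch in pandas (with invented week numbers, not the real parish data) of why quantile classes flattened my timeline while equal-interval classes preserved it:

```python
import pandas as pd

# Invented week-of-first-outbreak per parish, skewed toward later weeks
# the way an epidemic's spread tends to be.
weeks = pd.Series([1, 3, 9, 10, 11, 12, 13, 13, 14, 15, 16, 16])

# "Equal Count (Quantile)": every class holds the same number of parishes,
# so the sparse early weeks get lumped into one wide class.
print(pd.qcut(weeks, 4).value_counts().sort_index())

# Equal interval: every class spans the same number of weeks (~4 here),
# so the pace of spread stays legible even though class sizes are uneven.
print(pd.cut(weeks, 4).value_counts().sort_index())
```

With skewed data like this, the quantile classes stretch one color band across weeks 1 through 9, while the equal-interval classes keep roughly four weeks per band, which is more or less where my clicking around eventually landed.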

It’s difficult to separate my frustrations and ineffectual fumbling with these tools from my thoughts on GIS as a whole. To be sure, it has its exemplars: for example, the Mapping Inequality project balances clear visuals and accessible annotations with robust documentation, and its relevance to contemporary concerns of racial disparity and wealth inequality is immediate (though this immediacy isn’t always necessary for DH projects). I keep returning to a question posed by Ian Gregory and Paul Ell:

“To what extent does [the visualization] advance our understanding of the topic?”

Gregory and Ell, Historical GIS, 104

Or: at what point are you just creating a boutique digital history project? This is a tricky question to navigate, and I don’t want to suggest that obscure or hyper-narrow fields of research need to justify themselves, or “prove” relevance by standard metrics or usage. However, at what point does the work of tracking down sources, digitizing them, painstakingly recreating boundaries and topologies over time, and synthesizing multiple disparate modes of data lead to new insights, rather than just create a fractalized approach to history? Does more and more sophisticated data always reveal greater insights, or is there a point of diminishing intellectual returns?

This feels particularly sharp in Siebert’s example of visualizing the spatial history of Tokyo, and the sheer volume of sources collected and reformatted. This may also simply be a difference in intent and methods—Siebert notes early on that this “data-driven” approach is built on finding and mapping information over time in order to spark questions based on these patterns, rather than starting with a question and seeking out sources to provide an answer.2

I am less convinced by Siebert’s assertion that the analysis is provided by the visualization, and that “interpretation can follow so quickly, one upon the other, that it is difficult to say which comes first or to differentiate them.”3 This suggests that the visualization IS the analysis, rather than a representation of a very specific set of conditions. Even Siebert’s own examples of revelations in the data seem to require further analysis—the visualizations may reveal change over time, but they leave us with little understanding of how or why this occurred, or of its overall impact. “There appears in this percentage view a rough visual balance, but there was actually a greater loss […] A comparison with population changes for other prefectures would probably reveal where many, but not all, of these people went.”4

This critique aside, one (maybe not) surprising commonality across this week’s readings and examples is the mass of paper-based historical and traditional methods behind each project. Each project spoke to the need for deep archival investigation and analysis, tracking down fugitive sources, and close reading of materials and comparative sources. (And maybe some of my reservations with GIS also stem from this: that an emphasis on visualization can so easily elide not just the volume of work and time but also the selection and interpretation on the part of historians or researchers.)

  1. Had I read Gregory and Ell before starting, I might have had a slightly better time with this. See: Ian Gregory and Paul Ell, “Using GIS to Visualise Historical Data,” in Historical GIS: Technologies, Methodologies and Scholarship (Cambridge: Cambridge University Press, 2007), 97–100.
  2. Loren Siebert, “Using GIS to Document, Visualize, and Interpret Tokyo’s Spatial History,” Social Science History 24, no. 3 (2000): 539.
  3. Siebert, “Using GIS,” 556.
  4. Siebert, “Using GIS,” 561.


6 Comments

  1. I also am struggling with Gregory and Ell’s question about visualization enhancing the data versus a “boutique” project. I think the Gettysburg map enhances the data because it reveals information about what the Confederacy and Union armies saw at key moments in time that impacted their strategic decisions — decisions that we probably could not visualize even by standing in the same locations at the battlefield itself.

  2. Robert Carlock

    You raise a good point about whether some HGIS projects have a return on the intellectual labor required to produce them. My immediate thought when considering this question is that for a field of people that study the past, historians are surprisingly forward thinking in their scholarship. As with all academics, we write in a specific way so that our argument can be found and distilled easily; we actively define our scope and methodology; we provide our sources in footnotes in arduous detail so that those who come after can find them as well; and in the case of GIS, create maps that hopefully someone else will utilize.

    I also think that HGIS projects are like many other pieces of historical scholarship, in that historians recognize their work is simultaneously contributing to/revising/refuting an existing narrative, while also recognizing that someone else will come along and do the same to their work. In this sense, all products of historical analysis are simply tools for furthering the study of history. HGIS is simply a different format for historians to add their analysis to the broader network of history.

    • stephanie

      Thank you, Robert — it’s good to have that reminder, because it’s easy to get lost in the labor of a project with the scope and depth of something like what Siebert was doing! I wanted to be mindful to keep the scope of my questions to general critique about where data stops and analysis begins (that is, how much detailed information does one need to do good history? Is there an upper or lower threshold?), while also recognizing that the work someone does now can have returns at a point unseen.

      (Honestly, with Siebert’s example, maybe it’s that it almost feels like he’s selling himself short on his role in the interpretation! Or maybe I’m taking a different meaning of “interpretation” here, but it felt like he was doing that all along, through careful work and adjustment of his sources and recognizing gaps and issues with the material (like unreliable census data from wartime Tokyo), and that the answers weren’t simply “driven” by the data falling into place.)

  3. Your phrasing of “see what happens if I do this” captures one of the most relatable processes as a scholar. Innovation requires…uncoordinated testing at times. I find that less foreknowledge of a subject can illuminate crevices and kinks that more experienced persons might skip over. (I see this approach as especially viable in oral histories, for example.) In some ways, being new to the subject helps researchers literally see from the audience’s perspective.

  4. Great post, Stephanie! I completely agree; after our bungling with the messy data last week, I’m a bit skeptical about the data underpinning these projects and how things are presented. These are all choices that can be manipulated. And I also agree about the strong points, too, like the Mapping Inequality project.

  5. And great job with all the nice formatting on the post, too, making it visually interesting besides!
