Category Archives: Distant reading

Hacking the Early Modern: the EEBO-TCP hackfest

[The original version of this post was first published by ABO Public: An Interactive Forum for Women and the Arts 1640-1830].

So in March, I was invited to my first hack. Me, an English Literature lecturer, expected to produce something with computers in a single day? Now read on …


Hunched over our laptops in the Weston library

This was the EEBO-TCP hackfest, an event designed to launch the release into the public online domain of over 25,000 texts from the fifteenth to the seventeenth century. These texts have been curated and encoded by the Text Creation Partnership, a collaborative project between the University of Michigan, the Bodleian Library, University of Oxford, and ProQuest, the publisher of the online database Early English Books Online. The idea of the hackfest was that humanities researchers and scholars would come together with digital researchers and technologists and create – in a day – innovative and imaginative ways of exploring, analysing, and developing this huge corpus. Now, while I’ve been tinkering with digital humanities approaches myself, I’m no programmer. Moreover, I’m an eighteenth-century-ist, so I was stepping a little outside my normal safety zone. So it was with some trepidation, yet also with considerable excitement, that I dipped a toe into my first digital hack. The setting was the new Bodleian Weston library: appropriately for a day spent building things, it was still under construction.

It started with a speed-date. Over plenty of coffee, thirty-or-so of us circulated around telling our stories and plans to anyone we could button-hole. Given humanists seemed to be in the majority, most people were looking for a tech person to help out, and in my case, slightly desperately so. My idea was to analyse some of the structural features of pre-eighteenth-century fiction, such as dedications, prefaces, letters to the reader, chapters, and illustrations. But what I didn’t know was how to extract that data from a large corpus and produce something potentially meaningful.

Detail of the XML file of Gabriel de Brémond, Hattige: or The amours of the king of Tamaran A novel. 1683.

I needn’t have worried. Everyone was incredibly receptive and eager to make our plans work, so I found my geek (I know he’s happy with that epithet!): the extraordinarily energetic Dan Q from the Bodleian’s digital team. Together with a couple of people

Dan Q leaning over my trusty mac

working with formal features of seventeenth-century alchemy texts, we found ourselves a table and began to work out how we might visualize this structural data. And this is the part that I found really exciting: within a couple of hours I had created a sub-corpus of fiction from the total of 25,000 texts, Dan had written some code to identify and count all the structural features I could think of (with some advice from Simon Charles from the TCP project about the TEI markup), and it had started producing some figures. With the knowledge that we all had to present our work at the end of the day, I had to think of ways to set out the results to suggest some kind of point to all this: in short, the ‘so what?’ question. (The crude but quick answer: by putting the texts in chronological order and colour-coding our Excel sheet, a hint of a pattern emerged.)
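A rough sketch of the kind of counting Dan’s script performed might look like this in Python. To be clear, this is my illustrative reconstruction, not the actual hackfest code: the sample XML and the `type` values are invented (real EEBO-TCP TEI files are far larger and use namespaced elements), but the principle – tallying structural divisions by their markup – is the same.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical snippet of a TEI-encoded text; illustrative only.
SAMPLE_TEI = """<TEI>
  <text>
    <front>
      <div type="dedication"><p>To the Right Honourable...</p></div>
      <div type="to_the_reader"><p>Gentle reader...</p></div>
    </front>
    <body>
      <div type="chapter"><p>Chapter I...</p></div>
      <div type="chapter"><p>Chapter II...</p></div>
    </body>
  </text>
</TEI>"""

def count_structural_features(tei_xml: str) -> Counter:
    """Tally <div> elements by their type attribute."""
    root = ET.fromstring(tei_xml)
    return Counter(div.get("type", "untyped") for div in root.iter("div"))

features = count_structural_features(SAMPLE_TEI)
print(features)  # Counter({'chapter': 2, 'dedication': 1, 'to_the_reader': 1})
```

Run over a whole sub-corpus, counts like these can be dropped straight into a spreadsheet, one row per text, which is essentially what our colour-coded Excel sheet was.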

Meanwhile, others in the room were experimenting with identifying the frequency of colour words, the use of Latin, simulating the shelves of the St Paul’s book-sellers, and even creating a game based on witch-trials (this by Sarah Cole, using Twine), while another team thought about how to make the archive user-friendly to a more diverse audience (see Sjoerd Levelt’s prize entry to the EEBO-TCP Ideas Hack competition). Given my idea was conceived off-the-cuff, it was rather splendid to share third prize with our colleagues working on the same table.

What impressed me were the advantages offered by the scale of the corpus and the rigour of its markup. Both of these features of the TCP project enabled Dan and me to produce – with surprising speed – a set of results for a question that would otherwise be much more difficult to answer. But what really blew my mind was how my tech guy took my simple question to another level: Dan wondered ‘how the structural differences between fiction and non-fiction might be usable as a training data set for an artificial intelligence that could learn to differentiate between the two’ (see his own blog post on the event).
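To give a flavour of Dan’s idea, here is a deliberately toy sketch of how such a training set might work: each text becomes a vector of structural-feature counts, and a simple nearest-centroid rule assigns new texts to whichever class they most resemble. The numbers below are entirely invented for illustration; a real experiment would use thousands of texts and a proper classifier.

```python
# Toy feature vectors: (dedications, prefaces, chapters) per text.
fiction = [(1, 1, 12), (0, 1, 20), (1, 0, 15)]
non_fiction = [(2, 1, 0), (1, 2, 1), (3, 0, 0)]

def centroid(vectors):
    """Average each feature across a class's texts."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(vector, centroids):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(vector, centroids[label]))

centroids = {"fiction": centroid(fiction), "non-fiction": centroid(non_fiction)}
print(classify((0, 1, 18), centroids))  # → fiction
```

The point is not the method (which is crude) but the reframing: the hand-counted structural features suddenly become machine-learnable evidence.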

‘Nice work, Stephen!’ ‘Nice work, Dan!’

I came away a slightly different academic, no longer intimidated by big data, enthused by digital collaboration, and now a big fan of the day-long hack.

Play, experiment, and digital pedagogy

First of all, a hat-tip to Willard McCarty: during a talk at Bath Spa University in March of this year, he quoted early-twentieth-century English critic I. A. Richards and it was this that crystallised my scattered thoughts on my students’ encounter with digital approaches to English literature. Richards prefaced his book Principles of Literary Criticism with the highly suggestive notion that ‘[a] book is a machine to think with’. Richards’ image was not an idle one: an ardent believer in the interplay between the arts and sciences, both his book and the book in the abstract – like any piece of technology from the automated looms of the late eighteenth century onwards – embodied human-designed creative procedures. Through the book, by bringing to bear those same human processes of thought, we are able to examine civilization and what it is to be human: the very task the book was designed to ‘re-weave’.[1] In the digital age it is hard to avoid the resonances: the preeminent machine of our age – the computer – is also governed by human procedures (programming) and ‘processing’ has now become almost entirely associated with computers. Yet we forget that books are, as Richards is implying, an invitation to be (re)processed by humans. What I want to emphasise is that this re-processing – what we less starkly call literary criticism – can be envisioned as a series of procedural building blocks.

What I’m also drawing upon has been defined by Ian Bogost as ‘procedural literacy’. Developing the idea that computer programming is a kind of literacy, Bogost proposed that ‘any activity that encourages active experimentation with basic building blocks in new combinations contributes to procedural literacy.’ Such a literacy in processes and procedures (such as I have described) becomes a foundation that can be applied elsewhere: ‘[e]ngendering true procedural literacy means creating multiple opportunities for learners—children and adults—to understand and experiment with reconfigurations of basic building blocks of all kinds.’[2]

This movement between play, experimentation and a critical awareness of the processes of interpretation was evident during a session on my undergraduate module Digital Literary Studies. Students were introduced to distant reading and invited to work with Voyant Cirrus on eighteenth-century novels. It was apparent in the workshops that the preliminary results of this analysis were not immediately significant or meaningful. So, the next stage involved playing with word choices, selecting synonyms to create clusters of meaning, or choosing antonyms to gain critical leverage. Given these were historical texts, another step involved researching historical inflections using the OED. Some students wanted to add another interpretative layer: using Google’s N-Gram Viewer (with caution) they zoomed out even further. It was interesting to watch. The movement between these steps was not linear: some students moved back into the print copy of the novel for a close reading; some students shuttled back and forth between a few key procedures.
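The synonym-cluster step the students performed can be sketched as a small piece of Python: count word frequencies, then sum them across hand-built clusters of near-synonyms. The passage and clusters below are invented examples, not my students’ actual data, and Voyant does this interactively rather than in code, but the underlying procedure is the same.

```python
import re
from collections import Counter

# A toy passage standing in for an eighteenth-century novel.
text = """She felt great joy at his return, though her delight was mixed
with sorrow, and the grief of parting still shadowed her happiness."""

# Hypothetical synonym clusters, the kind students might assemble by hand.
clusters = {
    "positive": {"joy", "delight", "happiness", "pleasure"},
    "negative": {"sorrow", "grief", "misery", "woe"},
}

words = re.findall(r"[a-z']+", text.lower())
freq = Counter(words)

# Sum the frequency of every member word in each cluster.
cluster_totals = {
    label: sum(freq[w] for w in members)
    for label, members in clusters.items()
}
print(cluster_totals)  # {'positive': 3, 'negative': 2}
```

Each revision of the cluster lists is, in effect, a new hypothesis about the text, which is exactly the iterative loop the workshop was designed to surface.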

The initial surprise that textual visualization did not produce an immediate interpretation was a useful warning about the technological lure of instant answers. Instead, results became merely a first step in a series of experiments: each set of word choices – let’s call them hypotheses – required us to re-think the interpretative assumptions about the text(s). Moreover, the significance of the results was also subject to constant discussion, as if the text itself was changing shape. What my students discovered via this experimentation is the fascinating tension between different processes of interpretation: between what I. A. Richards might call re-weaving and what Lisa Samuels and Jerome McGann termed ‘deformance.’[3] The aim of the session was to generate some analyses of the literary history of the novel between 1660 and 1799; but the session also enabled students to slow down and reflect on their processes of interpretation: it trained them to be procedurally literate.

I started by citing I. A. Richards, part of a group of critics and intellectuals who in the early twentieth century placed close reading at the heart of English Studies. Despite its varied fortunes it is still there. What is most resonant for me and my students is the interplay between close reading, digital reading and procedural literacy. Experimentation puts both students and tutor at the very edge of their knowledge, but it is a place that is productively challenging. In helping students to see their learning as a series of processes that can be modified and reiterated, we are also equipping them with a critical and creative self-awareness that fits them for the rapidly changing twenty-first-century world.

[1] I. A. Richards, Principles of Literary Criticism, 3rd ed. London: Kegan Paul, 1926, vii.

[2] Ian Bogost, ‘Procedural Literacy: Problem Solving with Programming, Systems, & Play’, 52:1&2 (Winter/Spring 2005), 32-36.

[3] Lisa Samuels and Jerome McGann, ‘Deformance and Interpretation.’ New Literary History 30:1 (1999), 25-56.


What is a novel in the eighteenth century? Some numbers …

Some of my undergraduates playing with data…

Digital Literary Studies

Students Ben Franks and Alice Creswell share their charts on some keyword searches conducted via the ‘Genre’ filter in ESTC across 1660-1799. The first chart breaks down the 2,880 hits from the genre term ‘Fiction’ into various title keywords:


This second pie-chart breaks down the 1,434 hits from the search term ‘Novels’:


We wondered about the ways in which the ESTC catalogue had tagged these genres and the extent to which they overlapped (meta-metadata questions?). But these results were given additional context and meaning by setting them against the same keyword searches on Google’s N-Gram Viewer and some more granular searches of the metadata of the 1,000 novels in the Early Novels Database.
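The chart-building step itself is simple enough to sketch. Given a list of catalogue titles, count how many contain each keyword and turn the counts into pie-chart shares. The five titles below are a made-up miniature of the real 2,880-record ESTC ‘Fiction’ set, and the keyword list is illustrative.

```python
from collections import Counter

# Invented mini-catalogue; the real ESTC 'Fiction' set has 2,880 records.
titles = [
    "The History of Tom Jones, a Foundling",
    "The Adventures of Roderick Random",
    "The History of Clarissa Harlowe",
    "Memoirs of a Woman of Pleasure",
    "The Adventures of Peregrine Pickle",
]

keywords = ["history", "adventures", "memoirs", "life"]

# Count titles containing each keyword (a title can match several).
counts = Counter()
for title in titles:
    lowered = title.lower()
    for kw in keywords:
        if kw in lowered:
            counts[kw] += 1

# Express each keyword's share of all matches, as in a pie chart.
total = sum(counts.values())
for kw, n in counts.most_common():
    print(f"{kw}: {n} ({100 * n / total:.0f}%)")
```

One caveat students quickly noticed: keyword counting inherits all the quirks of the catalogue’s own tagging, which is precisely the meta-metadata problem above.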

Ben and Alice’s favourite titles? The Devil Turn’d Hermit (check that full title!) and Adventures of a Bank-Note.
