Category Archives: Editing

Students and the digital edition. A polemic

This is the text of a talk I gave at the panel session for ‘Opening the book: reading and the evolving technology(ies) of the book’ for Academic Book Week, at the Institute of Historical Research, School of Advanced Study, London, 10th November 2016. This post first appeared on the IHR blog.

I want to talk about the undergraduate perspective on a particular kind of academic book – the edition. In fact my starting point is that, from the student perspective (and according to some scholars), there is no longer a clear idea of what that is.

The place and perceived value of the printed critical edition still seem firmly established. I once asked my students to identify and compare the value markers of the printed text in front of them and of an online version of the same text, and they made a pretty good case for the printed text, citing everything from the name of the publisher, to modes of reading, navigation, and interaction, and even pointing to the durability of its medium. And this in a digital humanities module. However, asking them to tell me how and why either of these versions looks the way it does proved a far trickier question. So my polemic is a plea for teaching in a way that puts students themselves in the position of editors and curators of literary texts, and for the claim that the best way of doing this is through an engagement with digital editing and curating.

But first, I’m going to begin by outlining how a dramatic rise in the online availability of our literary heritage drives certain changes in reading and studying practices. When many academics are running to catch up with the accelerating dissemination of the world’s literary heritage online – even in their own field, and I include myself – is it any wonder that our students, stepping off the path of the printed set text, also find themselves slightly taken aback and click on the top hit in Google? Because there is indeed a chaotic mass of types of texts they can find. In addition to catalogue entries and Amazon hits, there are texts from websites and web ventures that essentially depend upon some form of commercial revenue or profit (e.g. Google, Luminarium, editions via Kindle, and even apps), non-profit web organisations (e.g. Project Gutenberg, Poemhunter, Internet Archive, HathiTrust), nationally-supported or privately-endowed institutions (e.g. Folger digital texts, British Library Shakespeare Folios), University libraries (e.g. SCETI, Virginia, Adelaide, Bodleian), a whole host of academic projects (e.g. the Rossetti Archive, EEBO-TCP, the Correspondence of William Godwin, the Walt Whitman Archive) and, of course, institutionally-accessed and pay-walled commercial publishers (like Cengage or ProQuest). My essential point is that there is a blurring of the definition of the ‘edition’. What we see – sometimes for good reasons – are projects that describe themselves as digital archives, databases, digital library collections, social editions (like Transcribe Bentham), and apps (e.g. Touchpress’s The Waste Land). And texts that come via these platforms look, feel and function very differently.

Between the printed and the digital text, there’s a two-way process happening. The easy and quick availability of texts online drives a certain kind of reading of printed editions which makes invisible ‘the history of their own making’ (D. F. McKenzie).[1] At the same time, undergraduates don’t often spot the distinction between the kinds of texts they find online and the ones in their printed critical editions. This is partly because they see only the text in their editions, and not the ‘edition’ (introduction, textual note, annotations, etc.): the actual edition becomes invisible. I don’t want to denigrate undergraduates’ skills, and this isn’t entirely the students’ fault: it’s partly because English literary studies – at least in many seminar rooms – is still running with the idea of the literary text as an immaterial abstraction (despite the influence of various kinds of historicization). It’s this that renders invisible the processes that shape the form of the book in their hands. So I guess my rant is partly a plea for serious consideration of the materiality of the book and a bigger role for the history of the book in English Studies.

But I’m also thinking about the lack of attention (at undergraduate level) paid to how editions and texts end up on the web in the ways they do. Formats vary hugely, from poorly catalogued page facsimiles, to unattributed HTML editing of dodgy nineteenth-century editions, to high-standard scholarly editing with XML/TEI encoding. But there are still plenty of digital versions and collections that make it very difficult to see who these resources are for and how they came to look and function the way they do. And, as I’ve hinted at earlier, issues of format and accessibility are linked to how the various sites and projects are funded. In significant ways a lot of texts available digitally do much worse than the print edition at signalling ‘the history of their own making’.

So, the second half of my polemic is about how we should be making our students more aware of how the edition is remediated, based on an understanding of the limits and affordances of digital technology and of how the internet works.[2] Because this is where digital technology can open their books in a vital way. I’ve found it intensely interesting that the digital humanities community has been using a variety of material and haptic metaphors to describe what it is they are doing – ‘making’ or ‘building.’[3] For me, this is wonderfully suggestive. Asking my students to understand the processes involved in transforming a material book into a printed edition and then a digital edition is a necessarily haptic experience. This experience – a process that involves decisions about audience, purpose, authority, and technological affordances and constraints – enables a student to understand their literary object of study in a vital and transformative way. It might seem odd that I’m emphasising materiality in a debate thinking through the effects of what is, ostensibly, an immaterial medium, but technology is material, and digital editing should involve the material aspects of the book and material work. My undergraduate dissertation student is producing a digital edition of a work by Henry Fielding: she will be going to the British Library to see the source text as an essential part of her learning. In a few weeks’ time, my students will be building a digital scanner partly out of cardboard; after that even our training in digital markup will start with pencil and a printed sheet of paper.

So I’m arguing that we give students the opportunity to be academic editors of books, and not just in theory but in practice; to enable them to be creators and not merely consumers of texts, because the electronic editions of the future should be powered by an early and vital experience of digital making.

[1] D. F. McKenzie, quoted in Jerome McGann, ‘Coda. Why digital textual scholarship matters; or, philology in a new key,’ in The Cambridge Companion to Textual Scholarship, ed. Neil Fraistat and Julia Flanders (Cambridge: Cambridge University Press, 2013), pp. 274-88 (p.274).

[2] I’m always reminded of internet hacktivist Aaron Swartz’s maxim: ‘It’s not OK not to understand the internet anymore.’

[3] Most notably Stephen Ramsay, ‘On Building’.

 

Encoding with English Literature undergrads

This is an overview and reflection on a two-hour workshop I ran for English Literature undergraduates introducing XML/TEI. ‘Encoding worksheet’ (word doc) is here.

Previously I had taught XML/TEI in one-to-one tutorials, so this was the first time I had tried a group workshop, comprising two students whom I was supervising (their final-year dissertation projects were digital editions) and two students whose projects concerned print editing (from a module on Early Modern book history run by Prof. Ian Gadd). The knowledge base of these students was very varied: some had no experience of coding or markup; at the other end of the spectrum, one was already competent with HTML. What, then, was the best way into encoding given this varied cohort?

My answer was to start with the skills they already had (as @TEIConsortium emphasised), and to emphasise the continuum between digital encoding and the traditional literary-critical analysis students use when preparing any text. After all, we’re so frequently concerned with the relationship between form and meaning. And it is the particular capability of XML/TEI to render this relationship between form and meaning that distinguishes it from other kinds of electronic coding.

So the first part of the workshop started with pencil-and-paper tasks. We first annotated a photocopy of a poem. Then I gave them a printout of the transcribed poem stripped of some of its features – title, line spaces, peculiar line breaks, italicisation. I then asked them to annotate, or mark up, this version with a set of instructions to make it look like the ‘original’. The result was that the students not only marked up formal features, but clearly had a sense that these features also carried meaning. For example, I asked, “why was it important to render a line space?” I also pointed out that none of them had inserted the missing title in the plain-text version, which raised some eyebrows: “Is it part of the text?” “Well, how do you define the text?”, I replied. These questions were important for several reasons. I wanted to make the point that markup was a set of editorial and interpretative decisions about what the ‘text’ was and how it might be rendered and for what purpose. I also wanted to emphasise that both practices – whether pencil notes in the margin or encoding on a screen – involved very similar processes.
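To give a flavour of where this exercise points, here is a minimal sketch – with invented lines rather than the poem we actually used – of how those pencilled instructions might translate into TEI. The title, the stanza break and the italicised word each become an explicit element or attribute rather than a purely visual feature:

<lg type="stanza">
<head>A Hypothetical Title</head> <!-- the 'missing' title, restored as markup -->
<l n="1">An opening line of verse,</l>
<l n="2">with one word set in <hi rend="italic">italics</hi>,</l>
</lg>
<lg type="stanza"> <!-- the line space becomes a new line group -->
<l n="3">and a new stanza beginning here.</l>
</lg>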

I next wanted to translate these points into an electronic context by illustrating the difference between HTML as, essentially, a markup for how a text looks, and XML as a markup for describing that text. I did this by using my WordPress editor: by inserting a few HTML tags in the text-editor mode and then switching to the ‘visual’ mode, they could see these features reproduced.[1]
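A crude way of putting the difference on paper (an indicative sketch, not the exact tags we used in the workshop): the HTML tag tells a browser how a string should look, while the TEI tags say what the string is.

<!-- HTML: presentational -->
<p><i>Paradise Lost</i> was first published in 1667.</p>

<!-- XML/TEI: descriptive -->
<p><title rend="italic">Paradise Lost</title> was first published in <date when="1667">1667</date>.</p>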

At this point we moved to the computers and got down to some encoding in an XML editor (Oxygen). My main aim here was to enable them to mark up the same poem in an XML editor to see how easily their literary-critical procedure could be transferred to this medium. In this, I was very gratified: all the students were able to create an XML file and mark up the poem remarkably easily.[2] I spent the last section of the workshop answering the implicit question: “you can’t read XML, so what is this for?” Given the restrictions on time, I had to engage only briefly with some very broad issues of digitization, preservation, and the analysis of big data. Putting it simply, I remarked “computers are stupid” (my mantra), “but if we mark up our texts cleverly, we can get computers to look at large bodies of knowledge with precision.” Demonstrating this was tricky given the time restrictions, but I had a go by exemplifying the more complex encoding of meaning possible in XML/TEI. I used a former student’s markup of Defoe’s Hymn to the Pillory and an XML file of A Journal of the Plague Year. The former demonstrated the encoding of names; for example, I asked “how would a computer know that ‘S—ll’ is Dr Henry Sacheverell unless you have a way of encoding that?” The Journal was useful for demonstrating the highly structured nature of TEI and our ability to mark up structural features of texts in precise ways: features that a computer can then process.
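By way of illustration – a hedged sketch rather than the actual files we looked at – the answer to the Sacheverell question, and the kind of structural markup the Journal invites, might look something like this:

<!-- an obscured name made explicit for the machine -->
<persName ref="#sacheverell">S—ll</persName>

<!-- structural markup of prose: numbered paragraphs and machine-readable dates -->
<p n="1">It was about the beginning of <date when="1664-09">September, 1664</date>, that I, among the rest of my neighbours, heard ...</p>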


I also demonstrated the flexibility of TEI: when you type a new ‘<’ after a closing tag, the XML editor automatically shows a dropdown list of possible markup elements and attributes. But my key point was that deciding which features to encode – out of all the possible features of a text – was an interpretative and editorial decision.

My aim for the workshop was modest: to enable students to make the leap from so-called ‘traditional’ literary-critical skills to the basics of encoding in XML, and in this I think the session was successful. On reflection, I think there were two points I hadn’t judged quite right. I hadn’t anticipated how quickly they could mark up a poem in XML; I think that was because the transition from pencil annotations to coding on screen worked very well. The last section – on the bigger point of getting computers to read literary texts – turned out to be more important than I had presumed, and I would do this differently if I were to run the workshop again. This might involve a follow-up session that, given the success of the hands-on tasks in the first part, would ask students to mark up some more complex textual issues with TEI. This could be combined with a demo that showed not only some well-encoded texts but also the results of some data-mining of a medium-sized XML/TEI corpus.

I’ll keep you posted …

[1] There are probably better ways to demonstrate this, given the limitations of the WP text editor, but it was very much to hand.

[2] I acknowledge here my use of teaching materials from the Digital Humanities Oxford Summer School (the very same ones from which I had learnt TEI).

Digital editing: students, building, sharing


I’ve posted before on an undergraduate digital editing project for my final-year English degree students, but Adam Kirsch’s recent summary and critique of the digital humanities has prompted some further thoughts about my students’ work and what I’d hoped to help them achieve. I’m not going to presume to add to the solid body of responses to Kirsch’s piece (see Mark Sample’s piece), so this is a focused and brief reaction to his depiction of “the application of computer technology to traditional scholarly functions, such as the editing of texts” as ostensibly “minimalist” digital humanities work. Part of the problem with this back-handed compliment is that it devalues what Ryan Cordell’s response rightly characterises as “arguably the longest-standing and most influential thread of digital humanities’ history in literary studies: the preservation, annotation, and representation of historical-literary works for the new medium of our time.”

But more importantly, I don’t think my students would recognise their work as either minimalist or traditional. In this project I ask for volunteers to create an online digital edition of an eighteenth-century text in conjunction with the scholarly digital platform 18thConnect (Mandell, IDHMC, Texas). The project was built out of my belief that digital technology could offer English Literature students a way to demonstrate their critical skills in a more tangible way than in written coursework: to create an artefact that carries them beyond the confines of the hermetic world of student/tutor/institution. Simultaneously, it was a response to what I perceived to be students’ limited knowledge about the nature of the digitized texts they accessed via databases such as EEBO, ECCO, or even Google Books.

Crucial to the project was the ability of students to reflect upon and rationalise the use of digital technology; in effect, their answers to the questions: ‘What is a text in a digital context?’, ‘Why digital?’ and ‘Who is this for?’ The interconnectedness of these questions draws upon two definitions of digital humanities easily misread as dichotomous. Stephen Ramsay’s post ‘On Building’ posited that “the move from reading to making” enables a different experience of interpretation and so produces new insights. In this project, for example, encoding their edition in XML/TEI demands – and enables – students to reflect upon the nature and authority of the text in new ways. The ‘why digital’ question also asks students to think about audience: what are the best ways of building digitally to render biographical, literary, or historical meanings? So the students reflect upon, as Mark Sample put it, “the way the digital reshapes the representation, sharing, and discussion of knowledge” (‘The digital humanities is not about building, it’s about sharing’). The project, then, is about how students can explore the intimacy between (contra Kirsch) interpretation and digital creation, building and sharing.

Note, this is a summary of a more expansive talk I gave at the Digital Humanities Congress 2014 in Sheffield, hosted by the HRI and Centernet, and at the ‘Teaching Digital Humanities’ conference at the University of Reading. Here are the slides:

‘Students, building, sharing’ (slides created with Haiku Deck)


Digital editing with undergraduates: some reflections

Digital Editing Project outline and Digital editions criteria

[Added 2015]

In 2012 I started supervising an English undergraduate dissertation: this was an online digital edition, and it was my first experience of supervising a student’s digital project. What follows is a joint blog post of two parts – one from me and the other from Jess McCarthy (the student) – that reflects upon our experiences. You can see the final online edition here:


 

Thoughts from me, the supervisor

A couple of years ago, I decided to learn a little more about the back end of digitized primary resources. I attended a boot camp on the why and how of encoding, using XML and the protocols of the TEI, at the Digital Humanities Summer School at Oxford University. Just over a year ago (late spring 2012) I decided that the best way to learn is to teach. Simultaneously, I wanted to conduct a trial of producing a digital edition of a Defoe text that used up-to-date protocols of digital editing and embraced the open-access ethos of the great majority of current digitization projects. So I asked our third-year English undergraduates whether anybody would be willing to do this for their dissertation project. Luckily, I had a volunteer, Jessica McCarthy.

I left it up to Jess to decide which Defoe text she would like to work on: as with any large-scale project, sustaining enthusiasm is essential. But it also meant that Jess would find out a lot for herself about Defoe’s writings. An important factor, however, was that I was not expecting Jess to spend time transcribing the text, and so we had to source a reliable electronic copy in plain text. This would give Jess the freedom to decide how she wanted to encode it and how it would be presented online. It also occurred to me that the question of a ‘reliable’ electronic copy in plain text was an interesting issue of discussion in itself: what different kinds of texts, and what kind of reliability, are offered by, for example, Project Gutenberg, Google Books, Jack Lynch’s Eighteenth-Century Resources, or Romantic Circles? Examples that directly raised other questions were close by: at Bath Spa University we are lucky enough to have access to the large-scale digital resources of EEBO and ECCO. Texts accessed via these different resources come in various forms: digital facsimiles, plain-text transcriptions from post-1800 print editions, hyperlinked and encoded texts, or a combination of plain text and facsimile. So this first stage of the project actually involved a deeper understanding of the nature of existing electronic resources, databases and archives, and would more effectively immerse Jess in important questions concerning the format, usability and accessibility of historical literary texts. How are issues of access related to the kind of texts one is accessing? What does the format of these texts have to say about how they can be used and who is using them? What processes lie behind the types of text available on these resources? What is a ‘text’ in a digital context anyway?

Such questions are important, first, because undergraduate students do not often understand why different online resources look and feel the way they do. So I try to make explicit to students the differences between a facsimile, an edition, and an encoded text, and the significance of those differences for how the text is to be used and for whom. The facsimile usually presents no problem; although, in the case of ECCO for example, the relation between the image and the text (unseen, and what one actually searches) is not fully grasped by many undergraduates, which provokes some interesting discussion. Second, this contextual understanding is essential for students to decide what kind of edition they are going to create. Here I ask students to consider their readership or, as Dan Cohen put it in ‘The Social Contract of Scholarly Publishing’, the ‘demand side’ of scholarly publishing. Cohen argued that the print model has built-in assumptions about value and audience: ‘The book and article have an abundance of these value triggers from generations of use, but we are just beginning to understand equivalent value triggers online.’ Jess, for her own project – as you can see – decided to provide two editions to appeal to a variety of readerships: one an online edition with hyperlinked notes and a textual commentary; the other an encoding of that text. (In this, we looked to an edition on Romantic Circles as our model.)

So, back to an earlier stage of decision-making. If we were after plain text copies of eighteenth-century editions, and not texts that were edited at some point later, that left two options for sources: the Oxford Text Archive and 18thConnect. There are currently 728 texts attributed to Defoe available via 18thConnect and 121 via OTA. Despite the ease with which one can download texts in a variety of file formats from OTA, I deliberately steered Jess towards 18thConnect because of its use of TypeWright. This software enables users to correct a number of individual 18c texts released to 18thConnect by ECCO (as frequent users of ECCO will know, the text that users are able to search is a rather mangled version, the product of now dated OCR software trying to decipher 18c typography via microfilm).


I may well continue to use this, since the advantage for any student is not only the knowledge gained about the workings and limitations of large-scale digital resources like ECCO that might be normally taken for granted, but also the added perspective gained on the processes of transformation from material document to electronic text.

Why encode and why TEI/XML?

Most databases allow one to perform searches based on a variety of categories (author, place of publication, title, date, etc.) because the texts have been ordered and sorted according to these categories. One can also perform ‘all text’ searches. But I struggled, at first, to explain the limitations of this kind of markup to my students. So I’ll give you a similar kind of example to the one I gave Jess in relation to ECCO. Let’s imagine I’m searching some works by Defoe and I want to find references to the High Church clergyman Henry Sacheverell (bap. 1674, d. 1724). Unsurprisingly there are quite a few hits, but the search misses a number of important Defoe poems. Now, I happen to know Sacheverell is mentioned in More Reformation and in The Double Welcome, but ECCO didn’t find these. Why? Because in The Double Welcome his name is spelt ‘Sachevrel’, and in More Reformation it is ‘Sachavrell’. We could of course put in alternative spellings or use fuzzy searching. But this wouldn’t find more oblique references such as the one in Hymn to the Pillory, where his name is pseudo-anonymously presented as ‘S———ll’. A machine does not know this is Henry Sacheverell. Similarly, it would not correctly identify him if Defoe had ever called him ‘Henry’ or ‘old Sacha’, or something more figurative like ‘the Devil in a pulpit’ that we human readers would be able to interpret. More importantly, what if we didn’t know how Defoe alluded to Sacheverell at all?

A machine searches for strings of symbols and cannot recognise that one string of symbols represents another, different string of symbols unless we tell it that each of those particular combinations of symbols represents the same named entity. As Lou Burnard put it, “only that which is explicit can be digitally processed”; or, to put it another way, encoding is to “make explicit (for a machine) what is implicit (to a person)”.
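What ‘telling it’ might look like in TEI is something like the following sketch (the ref value and the personography entry are my own illustration, not the project’s actual encoding): each variant spelling keeps its printed form on the surface, while the ref attribute makes the identity explicit and therefore searchable.

<!-- in More Reformation -->
<persName ref="#sacheverell">Sachavrell</persName>

<!-- in The Double Welcome -->
<persName ref="#sacheverell">Sachevrel</persName>

<!-- in A Hymn to the Pillory -->
<persName ref="#sacheverell">S———ll</persName>

<!-- a single personography entry to which all three point -->
<person xml:id="sacheverell">
<persName>Henry Sacheverell (bap. 1674, d. 1724)</persName>
</person>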

For me, then, the project has enabled me to reflect upon strategies for teaching digital technology and identifying – or beginning to – what issues are essential to introduce to students: the how and why of digital editing.

Jess McCarthy’s perspective: decentring authority?

I’m going to be going on a slightly different track; I’ll be talking about how in some ways my edition decentres some of the authority of a traditional printed edition of a text.

It wasn’t until I’d started researching my reflective essay that I realised that my edition achieves this, to an extent, through my encoding of variants in the XML version. Most modern scholarly editions work on the basis of editorial interpretation and intervention in creating a definitive edition which most closely presents the editor’s understanding of the author’s intentions. These editions are usually created through extensive use of textual apparatus, such as tables of variants and considered reasoning supporting the inclusion of one variant and the exclusion of another. Digital methods of presenting texts have brought into sharper focus how this approach to assembling an edition is based largely on the limitations of its publication media. Marilyn Deegan and Kathryn Sutherland pointed out that,

for some the new technology has prompted the recognition of the prescriptive reasoning behind such editions as no more than a function of the technological limits of the book, less desirable and less persuasive now that the computer makes other possibilities available; namely, multiple distinct textual witnesses assembled in a virtual archive or library of forms. [1]

I aimed to achieve a presentation of multiple textual witnesses in my own edition by encoding variant readings into my XML document. This made it possible to present the different states of the text without privileging one state over another. This approach questions the idea of an ideal or more representative version of the text by presenting each state as equally valid and as existing simultaneously. Although I was able to present variants within my encoding without making any claims as to which witness was more authoritative, this was only really achievable within the encoded document. For example:

<l n="19">The undistinguish'd Fury of the Street,</l>
<l n="20"><app>
<rdg wit="#Q2">With</rdg>
<rdg wit="#Q1">which</rdg>
</app> Mob and Malice Mankind Greet:</l>

To present the text on the website I had to choose a copy text based on what I considered to be the most complete representation of Daniel Defoe’s intentions in A Hymn to the Pillory. I based my edition on the second edition, corrected with additions. This decision was reached early in the project, on the logic that this was the earliest available edition presenting a fuller version of the text. Given the common editorial practice of selecting either the first available edition or the last edition known to have been produced by the author, I would reconsider my choice of copy text were I to start again. However, although this is an unorthodox choice of copy text, contemporary editions of A Hymn to the Pillory based on the first edition also include the later additions found in the second edition, and since the variants between the two texts have been included, I don’t think that my earlier decision undermines the authority of the text presented in a significantly damaging way.

This concern might seem to conflict with my encoding of variants. There I have deliberately not identified a lemma and have chosen instead to present multiple, simultaneous witnesses that destabilise the assumption that some readings are more valid than others. This approach works well if you are concerned with textual criticism or with data mining to create distant readings of texts. However, I wanted my edition to be as useful as possible to the widest possible audience, so the traditional concern of the humanities with close reading and interpretation had to be considered, and close reading depends on a stable text to interpret. Marilyn Deegan and Kathryn Sutherland acknowledge this, pointing out that ‘the editor’s exercise of proper expertise may be more liberating for more readers than seemingly total freedom of choice.’[2] Although digital technologies are highlighting how text can be treated differently in electronic formats, the primary concern for most readers of literature is still interpreting the meaning of the text (rather than how it was composed or its variant states); and to interpret the meaning rather than the textual history, a stable edition needs to be presented.
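For comparison – a sketch only, not what my encoding actually does – identifying a lemma would privilege one witness inside the apparatus and give any processor an obvious reading to extract when building a stable reading text:

<l n="20"><app>
<lem wit="#Q2">With</lem>
<rdg wit="#Q1">which</rdg>
</app> Mob and Malice Mankind Greet:</l>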

I wanted to support the authority of my edition as a serious scholarly work so I included all of the textual apparatus that you would expect to find in a scholarly print edition. C. M. Sperberg-McQueen argues that ‘electronic editions without apparatus, without documentation of editorial principles, and without decent provisions for suitable display are unacceptable for serious scholarly work.’[3] While this doesn’t necessarily mean that apparatus for digital editions has to work in the same way or with the same concerns as print editions, it situates intellectual integrity as remaining a key concern for supporting the authority of an online edition.

I used hyperlinks as a way to point discreetly to textual annotations from A Hymn to the Pillory and also to direct readers to further online points of interest, either from the annotations themselves or from the further reading. Phillip Doss argues that ‘by allowing escape from the context of a single documentary sequence, hypertext allows a reader to escape the linearity imposed by print media.’[4] There are positive and negative implications to the use of hypertext links that I tried to consider within my edition. An obvious limitation of using hypertext is exactly that it allows readers to escape the linearity of the text. On the other hand, by using hyperlinks I have been able to provide easy access to extra-textual material that would not be possible to include in a print edition. For instance, where I have been able to find them, I have included works by people who are mentioned in A Hymn to the Pillory. This has meant that intertextual relationships can be explicitly explored, rather than simply acknowledged. In this way the text is shown to be the product of many and various influences in a way that is more difficult to achieve using physical means of publication; and although the text is still the main focus of the edition, it is presented less in isolation.
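In the encoded version the same principle can be expressed with TEI pointers (again a sketch, with an invented note identifier rather than my actual markup): a ref in the text points unobtrusively to a note, and the note itself can point outward to extra-textual material.

<l>... <ref target="#note-pillory">Pillory</ref> ...</l>

<note xml:id="note-pillory">A wooden framework for public punishment; see also
<ref target="http://www.example.org/further-reading">an external resource</ref>.</note>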

Lisa Spiro’s essay ‘“This Is Why We Fight”: Defining the Values of the Digital Humanities’ argues that ‘for the Digital Humanities, information is not a commodity to be controlled but a social good to be shared and reused.’ This is very much the attitude that I adopted in my approach to this project. My website is open access, making it freely available to anyone who wants to use the information presented. However, although this project is not formally associated with Bath Spa University, as an undergraduate studying there I had the privilege of institutional access to specialist resources that I would not otherwise have been able to use to support my research. Access to services such as the Dictionary of National Biography (DNB) and Eighteenth Century Collections Online (ECCO) allowed me to work from facsimiles of the copy text and to research biographical annotation with confidence in the reliability and authority of my sources. I chose to hyperlink these sites where I have relied on them for my research in order to maintain the integrity of my sources. Although this means that some users may not be able to access the sites at the end of the hyperlinks, I believe that being able to present information based on what these resources provide goes a small way towards democratising the information that they contain. Working with the knowledge that not all users will be able to check my sources, I tried to make my annotations as comprehensive as possible while still maintaining a focus on how they are relevant to the text.

At its core this project has an engaged interest in making specialist information freely available in the most useful, reliable form possible. It has supported ongoing work to make other scholarly resources more reliable by using 18thConnect’s TypeWright, and it aims to engage the widest possible audience not only by providing what is traditionally expected from an authoritative edition of a text, but also by incorporating the formats that digital encoding supports for more specialist pursuits and for longevity.


[1] Marilyn Deegan and Kathryn Sutherland, Transferred Illusions: Digital Technology and the Forms of Print (Farnham: Ashgate, 2009), p.87.

[2] Transferred Illusions, p.71.

[3] C. M. Sperberg-McQueen, ‘Textual Criticism and the Text Encoding Initiative’, The Literary Text in the Digital Age, ed. Richard J. Finneran (Michigan: University of Michigan Press, 1999), p.41.

[4] Phillip E. Doss, ‘Traditional Theory and Innovative Practice: The Electronic Editor as Poststructuralist Reader’, The Literary Text in the Digital Age, p.218.