
E-text

Summary and Keywords

Electronic text can be defined on two different, though interconnected, levels. On the one hand, electronic text can be defined by taking the notion of “text” or “printed text” as the point of departure. On the other hand, electronic text can be defined by taking the digital format as the point of departure, where everything is represented in the binary alphabet. While the notion of text in most cases lends itself to being independent of medium and embodiment, it is also often tacitly assumed that it is in fact modeled on the print medium, instead of, for instance, on hand-written text or speech. In the late 20th century, the notion of “text” was subjected to increasing criticism, as can be seen in the question that has been raised in literary text theory about whether “there is a text in this class.” At the same time, the notion was expanded by including extralinguistic sign modalities (images, videos). A basic question, therefore, is whether electronic text should be included in the enlarged notion of text as a new digital sign modality added to the repertoire of modalities or whether it should be included as a sign modality that is both an independent modality and a container that can hold other modalities. In the first case, the notion of electronic text would be paradigmatically formed around the e-book, which was conceived as a digital copy of a printed book and is thus a deliberately closed work. Even closed works in digital form will need some sort of interface and hypertextual navigation that together constitute a particular kind of paratext needed for accessing any sort of digital material.

In the second case, the electronic text is defined by the representation of content and (some parts of the) processing rules as binary sequences manifested in the binary alphabet. This wider notion would include, for instance, all sorts of scanning results, whether of the outer cosmos or the interior of our bodies, and of digital traces of other processes in-between (machine readings included). Since other alphabets, such as the genetic alphabet, and all sorts of images may also be represented in the binary alphabet, such materials will also belong to the textual universe within this definition. A more intriguing implication is that born-digital materials may also include scripts and interactive features as intrinsic parts of the text.

The two notions define the text on different levels: one is centered on the Latin alphabet, the other on the binary alphabet, and both definitions include hypertext, interactivity, and multimodality as constituent parameters. In the first case, hypertext is included as a navigational, paratextual device; whereas in the second case, hypertext is also incorporated in the narrative within an otherwise closed work or as a constituent element of the textual universe of the web, where it serves the ongoing production of (possibly scripted) connections and disconnections between blocks of textual content. Since the early decades of the 21st century still represent only the very early stages of the globally distributed universe of web texts, this is also a history of the gradual unfolding of the dimensions of these three constituents—hypertext, interactivity, and multimodality. The result is a still-expanding repertoire of genres, including some that are emerging via path dependency; some via remediation; and some as new genres that are unique to networked digital media, including “social media texts” and a growing variety of narrative and discursive multiple-source systems.

Keywords: digital media, digital text, e-literature, hypertext, media text, materiality and media, humanities computing, digital humanities, writing space, language of new media

The spread of digital media has resulted in disturbances in the array of core notions: “text,” “work,” “document,” “corpus,” “collection,” “archive,” “machine,” and “materiality.” Each of these was complex and unstable before it was mobilized to deal with digital materials, and digital media have brought a set of still-developing properties, which cannot but add to the complexity given the fluid and dynamic nature of these media. In spite of this, these notions have prevailed even if they have been reinterpreted to deal with a fast-growing variety of digital materials. The notion of e-text is a very significant case in point. One reason is that the foregrounding of the “electronic” aspect is an obstacle to many theories of text. It is often doubted that e-text can actually be made into a concept. The e-prefix refers to physical characteristics, which did not enter into previous conceptualizations of text. A second reason is that the role of the medium for the message, insofar as this role is accepted, is subject to a variety of interpretations. A third reason is that the conceptualizations of both the electronic and the textual components are subject to change in ways that bring the relation between the two components into play.

The disturbances have important empirical and historical dimensions. The physical production of printed text and the features that characterize digital media are constantly developing. Today’s printed texts are usually produced with the help of an e-text. As a result, they can be printed in any number, at any place, in any design and format, and at any time in the contemporary networked media landscape. How this will affect the concepts of document, work, and edition, not least scholarly editions, remains to be seen. Digital media provide printed texts with a range of characteristics that were not available using the former print technologies. However, the disturbances go further than that.

Regarding the notion of text, there is no commonly agreed-upon definition of the meanings of “electronic” or of the constellation of “e-text,” and it is not clear what sorts of materials should be included. A fundamental question is whether e-text forms a subcategory of linguistic text, as opposed to, for instance, printed text and electronic images, or whether e-text is a new and distinct category with its own set of characteristics that includes digital linguistic text.

If e-text is considered a subcategory of text, it is primarily defined as a digital representation of linguistic text, whether written or printed, though it may eventually include a broader range of media texts; this conception is referred to here as “e-text type 1.” One question, then, concerns the electronic part of the text. Is it an intrinsic part of the text, or is it extrinsic? A further question is whether the relation between the text and the electronic format differs from the relation between the text and the printed format. The answers to these questions depend on both the notion of “text” and the notion of the “computer.”

E-text can also be considered a new and distinct category of text that includes all sorts of digital materials. What these materials share is that they are coded, manifested in the binary alphabet, and processed bit for bit.1 Many of these processes can be sequenced and automatized with the help of algorithms. The algorithms inherit the editable functional architecture of the device, but algorithms can also be a significant part of a message because they are represented and processed bit for bit on a par with other data. They remain editable software. Whether binary sequences serve as data or as algorithmic structures, they can be combined and made into intentional and discursive wholes in which linguistic articulations are mixed with other audible, visual, kinetic, and formal semiotic modalities. Digital materials manifested in the binary alphabet, whether as data or as code and in any possible interrelation between the two, are referred to here as “e-text type 2.” The two types overlap because type 1 is included in type 2, though the electronic dimension is conceived differently in the two perspectives.
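A minimal Python sketch may illustrate the type 2 perspective, in which linguistic text, image data, and executable instructions are all manifested in the same binary alphabet and remain editable on a par with one another. All values and strings below are hypothetical illustrations, not examples from the article.

```python
# Hypothetical sketch: text, image data, and program code are all stored as
# sequences of bits and can be inspected and edited in the same way.

text = "E-text".encode("utf-8")             # linguistic text as bytes
pixels = bytes([0, 255, 0, 255])            # a toy image fragment as bytes
script = "print('hello')".encode("utf-8")   # program code stored as bytes, like any other data

for label, blob in [("text", text), ("image", pixels), ("code", script)]:
    bits = " ".join(f"{byte:08b}" for byte in blob)
    print(label, "->", bits)

# Because code is stored on a par with data, it remains editable,
# and the edited instruction can itself be executed.
edited = script.replace(b"hello", b"e-text")
exec(edited.decode("utf-8"))
```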

Although the frameworks coexist and are sometimes mixed together, they can be related to three major paradigms in the history of digital media. The three paradigms form around the notions of the computer as a rule-based machine or automaton, as a tool for human-computer interaction, and as networked digital media, respectively. Each conceptualization is connected with particular sets of e-texts representing different blendings of e-text types 1 and 2.

The relation between the three paradigms and the notion of text can be traced by following a variety of concepts, such as author-reader configuration, types of interactivity, conceptualizations of the relation between text and image, development of markup languages, and so forth. However, the relations between text and e-texts will be illuminated in the successive unfolding of three different types of relations between text and hypertext. Hypertext is always present in any kind of e-text, and hypertext relations are intimately connected both to the concept of the computer and to the concepts of text, interactivity, author-reader configurations, and so on. The notion of hypertext is contested, however, and will be included as part of the themes it is used to structure.

Hypertext can be defined as a coded relation between anchor, link, and destination. A link in digital media will always include the explicit address of the destination and a specific, explicitly stated instruction about what to do at the destination. All the elements, however, remain open to further modification, thus trespassing the physical closure of printed texts. These qualities, taken together, make hypertext distinct from printed node-link relations, such as footnotes, annotations, and other types of referential devices that are used in print. The definition makes it clear that a hypertext link in digital media has to include an editable instruction, though this dimension is ignored in many accounts of hypertext. The definition allows for the inclusion not only of surface links but also of random access, search, interactivity, automated updating, and a variety of more complex configurations utilizing the editable semiotic space between storage and interface. It is possible to characterize significant differences between various types of digital materials according to their particular configurations of hypertext relations.
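The anchor-address-instruction definition can be sketched, under stated assumptions, as a small data structure. The field names, URLs, and instruction values below are hypothetical illustrations; only the terms anchor, destination, address, and instruction come from the definition above.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """A coded hypertext relation: an anchor, an explicit destination address,
    and an explicitly stated, editable instruction for what to do there."""
    anchor: str        # where the link is attached in the source text
    destination: str   # explicit address, e.g. a URL or a storage address
    instruction: str   # what to do at the destination (hypothetical values below)

# Unlike a footnote fixed on a printed page, every element remains editable.
link = Link(anchor="note 12",
            destination="https://example.org/chapter2#p4",
            instruction="retrieve and display")

link.instruction = "retrieve, highlight anchor term, and display"  # later modification
link.destination = "https://example.org/chapter2-revised#p4"       # re-targeting the link
print(link)
```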

Three major configurations structure the following:

  1. Hypertext as a navigational (paratextual) device to manipulate and navigate e-books, understood as digital “copies” or “translations” of nondigital, finite texts and corpora. Hypertext is used as a tool and eventually as a methodological device for manipulation and analysis. The field of e-texts is delimited by the conceptualizations of linguistic text and literary text as being independent of the electronic format, which is seen only as a physical instantiation and conceived of as being external to the text. The use of hypertext practices may be unacknowledged or eventually denoted with such terms as “search,” “navigation,” or “annotation.”

  2. Hypertext as a textual feature built into the narrative of a text created in electronic form or into the original structure of an electronic corpus. The coded link and the connected nodes are intrinsic to the text or corpus. Hypertext is considered both part of the content and a materialized dynamic feature. It is mechanically executed on the level of the binary alphabet. The execution is coded, and some hypertext relations are significant parts of the text.

  3. Hypertext as the basic globalized landscape and networked infrastructure that connects all sorts of digital materials. Links can be extrinsic to, or intrinsic parts of, all sorts of digital materials. They can be established in the original production or in later use. At the same time, links serve as a means to deliberately create porous delimitations between texts and parts of texts and to establish ever-ongoing new connections and disconnections between any deliberately chosen sequences of bits. This brings to the fore issues concerning time and text, closure, self-identity, machine, and materiality.

Hypertext as External to Text and Computer

Humanities Computing from the Computational Paradigm to Theories of Text

The notion of “e-text” was not widely used until the late 1970s, following the development of electronic typewriters, desktop machines, text editors, and word processors for producing linguistic texts in digital formats. In humanities computing, the term “natural language texts” was most often used until the early 1980s.2

The history of natural language processing, however, goes back further, to the pioneering work of Father Roberto Busa, a Catholic priest who, in the late 1940s, took on the task of creating an edition of the works of Thomas Aquinas. His goal was to create a trustworthy, authoritative edition, rooted in established bibliographical studies, that would reveal the original intentions of the author. To achieve this goal, Busa looked for “machines for the automation of the linguistic analysis of written texts.” He convinced IBM to sponsor the project, seeing the computer as a tool that should be able to help to automatize scholarly routines as much as possible. He considered the digitized representation of the text on punch cards and magnetic tapes processed in mainframes to be tantamount to a carefully edited copy of a printed or written original. The digital format was considered external to the text.3

There were other early efforts to introduce computers into the study of linguistic texts. During the 1950s, machine translation, pioneered by Andrew Booth and, more indirectly, also by Claude Shannon and Warren Weaver, attracted great academic and financial interest.4 In machine translation, the focus is not on the text but on the underlying, predominantly statistical methods used to automatize text translation. But these early experiments with statistical methods did not meet expectations.5 For Yehoshua Bar-Hillel, the main obstacles were linguistic polysemy and—a notion inspired by Noam Chomsky—the lack of insight into the transformations required for an adequate description of the syntax of any given language. There were other, more rudimentary attempts to build text generators, such as Christopher Strachey’s “love letter generator,” which aimed to generate a literary text from scratch. Completed in 1952, it is assumed to be “the first known experiment in Digital Literature.”6 A related attempt was Joseph Weizenbaum’s ELIZA program, from 1966. If Strachey’s project was done for fun, Weizenbaum intended to show the lack of deep knowledge of language and the superficiality of human–machine communication.7 Taken together, these early approaches cover both digital representations of textual material and issues related to the use of formalisms in text production from scratch and in statistical machine translation.

With the emergence of the humanities computing community in the 1960s and 1970s, Busa’s idea that the computer could be used as a tool in linguistics was raised to a higher level. Because of its formal rigor and demand for disambiguation, the computer was now assumed to be capable of providing a fundamental scholarly methodology.

A broader range of issues was addressed at the same time. The journal Computers and the Humanities, launched in 1966, published articles on development of concordances and text-retrieval programs, literary analysis, stylometrics and attribution studies, dictionaries, lexical databases, applications for archaeology, and visual arts and musical studies. Historians had a different focus; they tended to prefer a diplomatic edition, faithfully transcribed from its appearance in a particular document or a facsimile of a particular document.8

The focus on computational methods was aimed at bridging the gap between the sciences and the humanities by adhering to the rigor and systematic, unambiguous procedural methodologies ascribed to the sciences while opposing the loose reasoning ascribed to the humanities.9 Formal methodologies would be the defining characteristics of humanities computing. Stephen Ramsay later described the analogy to science “as a backward path” because the relation between the two cultures was seen as a one-way street.10

Theories of Text

During the 1980s, the computational paradigm was questioned by the human-computer interaction (HCI) paradigm and by neural network theories (or connectionism) and the related ideas of parallel computing.11 Neural network theory inspired exploratory efforts to analyze, for instance, metaphors.12 It turned out that “computation” itself might have different meanings both within the classic paradigm of the rule-based automata and within the HCI paradigm.13

Both paradigms influenced the humanities computing community. The computational paradigm was transformed into a modeling paradigm that was increasingly oriented toward analyses of the limitations of the models based on exploratory approaches. Analysis using the computer would more often be described as “computer-assisted” analysis or “computer-assisted interpretation”; or, the computer would be used for modeling and experimental exploration instead of computation.14 Modeling was not simply a method for detecting the shortcomings of the computational paradigm; it was also a method for gaining insights about the modeled texts.

The computer is now a toolbox used in research, for instance with statistical methods, or to compare and test hypotheses. In this respect, humanities computing accords with the emerging HCI paradigm, and thus the ideas of interface, interactivity in human–machine relations, hypertext, and multimodality appear more frequently.15 At the same time, a range of new disciplines, including cultural studies, media studies, media ethnography, information studies, and computer semiotics, entered the scene outside humanities computing.16

In humanities computing, the epistemological role of the computer was replaced by theories of text and principles of critical scholarly text editions. In critical bibliographic theory, the ideal text was considered to be an expression of the intention of an author (the McKerrow-Greg paradigm). Whether and how it was possible to achieve this goal became a still more contested issue.17 The doubts were both a result of the practical experiences of the scholarly editing community and the influence of modern and postmodern literary theories. Modern literary theory had a focus on the text as an abstract Neoplatonic entity, as a freestanding, compositional whole defined by its own internal structures.18 Postmodern literary theory moved the focus from the author and the work to the infinitely ongoing intertextual transactions.19

The efforts to produce digitized editions of nondigital originals also raised awareness of the intricacies inherent in the notions of “works” and “text” and of the need for standardized methods of encoding. At the same time, a trend developed in commercial information technology and in the publishing industry that aimed to establish a robust, content-based format for the digital representation of text.20

Both traditions were concerned with establishing the authenticity of digital documents. The focus in the publishing industry could be either on the electronic document itself or on its use as a source for a printed copy.21 Among the elements was the development of descriptive markup protocols, such as Generalized Markup Language (GML), conceived by IBM employee Charles F. Goldfarb, which was later developed into Standard Generalized Markup Language (SGML), advancing from older forms of procedural and punctuational markup.22 The format-content distinction that originated in the batch processing of text on mainframes gradually led to more analytical questions about the structure of texts and efforts to identify the codes at an editorial rather than typographical level.23

In the 1980s and 1990s, there was a landmark development in content-based encoding with the releases of SGML (in 1986) and of the first draft of the Text Encoding Initiative (TEI) guidelines for scholarly editing (1990).24 SGML was based on the idea that texts are abstract compositions of content, which are organized in an Ordered Hierarchy of Content Objects (OHCO), and it aimed to provide a standardized platform for e-text production and the processing of commercial and institutional documents.25

The original version was based on two strong claims. The first was that the essential parts of documents are content objects and include a variety of types, such as paragraphs, quotations, emphatic phrases, and attributions. According to this view, the graphical form, layout, technology, and medium are not essential. The second was that content objects are always organized in an ordered hierarchy, without overlapping hierarchies. This principle was soon modified by the suggestion that hierarchies should not be considered part of the text but part of an analytical perspective applied to the text and thus extrinsic. If two hierarchies overlapped, each could be considered an ordered hierarchy.26 Both the OHCO model and the pluralist, multiperspective model were based on a structuralist approach. The text is an abstract entity with one overarching hierarchy or with a series of hierarchies. The material instantiation was contingent. Allen Renear described the development from SGML and OHCO to the revised multiple-perspective theory as a development from a platonic version of content-based encoding to a pragmatic-scientific version.27 The methodology was not guaranteed by the rigor of the computer, as it had been earlier, but by a theory of the abstract text. Even if the OHCO model is independent of the machinery, it is fully in accordance with the concept of the computer as a rule-based machine processing content objects that are ordered in a database. The modified OHCO model, however, brings the theory closer to the postmodern issues of the relation between analytical perspectives, which cannot but be a relation between different interpretations.28
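The OHCO claim, and the overlap problem that prompted its revision, can be illustrated with a small Python sketch. The nested structure, the sample verse, and the span calculations are hypothetical illustrations under the assumptions stated in the comments, not material from the OHCO literature itself.

```python
# Hypothetical sketch of the OHCO view: a text as an ordered hierarchy of
# content objects (chapter > paragraph > quotation), independent of layout.
ohco_text = {
    "type": "chapter",
    "children": [
        {"type": "paragraph",
         "children": ["Opening sentence. ",
                      {"type": "quotation", "children": ["A quoted phrase."]},
                      " Closing sentence."]},
    ],
}

def flatten(node):
    """Recover the linear text by an ordered traversal of the hierarchy."""
    if isinstance(node, str):
        return node
    return "".join(flatten(child) for child in node["children"])

print(flatten(ohco_text))

# The overlap problem: a grammatical sentence and a verse line may cross each
# other's boundaries, so they cannot both be nested in one strict hierarchy.
# The revised, pluralist model treats each as a separate analytical hierarchy
# laid over the same character sequence (here sketched as start/end offsets).
verse_text = "When I consider how my light is spent ere half my days"
split_at = verse_text.index("ere")
sentence_spans = [(0, len(verse_text))]                      # one sentence across both lines
verse_spans = [(0, split_at), (split_at, len(verse_text))]   # two verse lines cutting across it
```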

A strength of the model is that it allows for different types of digital resources. The three major examples—the use of the document as database, hypertext, or network—happen to reflect the three waves in the history of digitization. The use of the document as a database reflects the classic computer paradigm, though now on a par with other conceptualizations of usage. Hypertext now appeared explicitly, referring primarily to user navigation considered extrinsic to the machine-readable text. The focus is the option of multiple representations of the same document. The network perspective primarily refers to the collaboration among scholars working on large editorial projects. Finally, the binary representation appeared, though so far only as disturbing “binary arcana,” which are not represented.29 The SGML platform did not quite meet the demands of the humanities computing community, but it served as a template for the development of the TEI guidelines that establish a series of norms for how to ensure coded closures of text on many levels and was later often used as a standard in e-book editions.30

During the discussion of the multiple-perspective model, the “binary arcana” are also addressed in a way that opens up new considerations of the relation between the text and the machine. According to Sperberg-McQueen, it is “an incontrovertible fact” that “texts cannot be put into computers. Neither can numbers.” Computers can only “contain and operate on patterns of electronic charge.” As an implication, Sperberg-McQueen considered computer processes as representations of data reflecting conscious and unconscious human judgments and biases. Applied to issues of markup, one implication is that any kind of markup represents a theory of the specific text. This again leads to the question of whether the markup tags inserted in the e-text are part of the text because they represent a theory of the text, which could not be processed without them.31 In this view, the binary coded electronic charge, the physical instantiation, remains external to the notion of text. The physical characteristics of texts were primarily identified as the visual characteristics related to the electronic representation of images and facsimiles.32 Even if the codes are physically enacted on the level of patterns of electronic charge, there are as yet no further considerations of the characteristics of digital media. The focus is on the description of the physical characteristics of the nondigital originals, and there is a strong demarcation between the codes in the machine and in the text even though the material characteristics of printed text need to be interpreted and represented in a digital copy.

If the OHCO model and the TEI represented crucial achievements based on the abstract-text theory, the paradigm was also met with criticism, both with respect to the theory of text and the conceptualization of the computer.

Hypertext: From Extrinsic to Nearly Intrinsic to the Text

A comprehensive critique of the OHCO model and the underlying theoretical assumptions was articulated by Jerome McGann, who put forward an alternative built on the notions of “overlapping hierarchies,” “decentered text,” and “radiant text.” The aim was to develop critical tools for studying both the full range of interpretations and the material traces surrounding the linguistic text, including textual materials with a significant “visible” component.33

A core question has to do with how scholarly editions of literary works can account for the ambiguities of a text that is not “self-identical,” because each reading represents a new interpretation. In this way, the visual and textual ambiguities are expressed as a series of different readings that eventually evolve over the years. Thus, the platonic model of the abstract text was replaced by a notion of text that aimed to include the material (visual) characteristics, as well as the social history, of the text in the form of later interpretations, as a genuine part of the scholarly edition.

Early on, the focus on visual manifestation was anchored in McGann’s poetics related to the semiotic ambiguities in the literary text. Inspired by Gérard Genette and his notion of paratext and by Johanna Drucker’s work on graphic forms, McGann would add a set of “bibliographic codes” that constituted the material (visual) characteristics having semiotic relevance for any particular written or printed text.34 At the same time, he helped to unfold the notion of hypertext and the materiality of e-texts by pointing to the “virtual space-time” of digital media as distinct from the reading space of print.35 In this view, the virtual space-time means that “the book’s semantic and visual features can be made simultaneously present to each other.”36

If the printed and electronic versions of the text are identical from a linguistic perspective, they nonetheless differ with respect to their material characteristics. These are articulated in the different types of “markup”; in the mechanisms of closure; in the time-spaces provided for production, editing, and reading; and in the difference between the “bibliographical” codes related to the physical appearance on the interface and the coding of the e-text in editable binary sequences. The e-text is thus considered a translation of a written or printed text.

To combine the notion of the “work” with the ever-growing number of new interpretations, McGann introduced the notion of an autopoietic system, which he had borrowed from the biologists Humberto Maturana and Francisco Varela.37 As a notion of text, an autopoietic system includes both the production and the readings. This again points to the social nature of text processing. Thus the notion of an autopoietic system should keep together the wholeness of the object of attention in a now infinitely open time dimension, allowing for the gradual inclusion of multiple interpretations.38

In a discussion of the Rossetti Archive based on these ideas, McGann, in just a few lines, used the three notions of “work,” “scholarly edition,” and “archive” for the same “object of attention.”39 If the Rossetti Archive as a whole is considered a work, the hyperstructure would be an intrinsic part of the system. If it is considered an autopoietic system, it would include generative procedures that allow for the creation of new links in future operations. If it is considered an open-ended archive combining a growing range of interpretations, it remains a navigational device extrinsic to the connected works. At the same time, the three notions imply three different configurations of hypertext: intrinsic and closed, open-ended and intrinsic-link generating, and extrinsic to the archived entities.

If not a full-blown paradox, the sliding back and forth between these terms nonetheless creates a tension, though it is a productive one. Hypertext features allow for the accumulation of interpretations, but they also open up experimental ways of exploring the sources by implementing features for explorative methodologies, juxtaposition, “backward reading,” “deformative” readings, and other, often antithetical, ways of reading.40 Thus hypertext was directly connected to interpretative operations involving a high cognitive load that went well beyond the broadly acknowledged (goal-directed) navigation and (associative) browsing approaches associated with hypertext that were widely accepted within the humanities computing community and often considered to be mere prolongations of already developed practices in critical bibliography.41

With the introduction of hypertext as a fundamental compositional principle in the production of critical scholarly editions of literary works, McGann added to the methodology of critical text editing. In The Rossetti Archive, the focus was on the link relations between texts and between text and image, and on providing the reader with the possibility of playfully exploring the work and all the associated documents.42 The new medium thus changes the analytical focus from “finding” order in the text to “mak[ing] order and then to make it again and again, as established orderings expose their limits.”43

The theory also contributed to an enrichment of the notion of hypertext because McGann placed a high cognitive load of interpretation in the use of link relations. This was not least a result of his professional interest in critical editions of texts and his approaching hypertext from the position of an editor, between the author and reader positions. The editor position differs in principle from the author position because the editor deals with existing text; and it differs from the reader position because it allows for an interpretation to impact the “object of attention.” In print media, the editor position ends with the publication of the printed version. In digital media, the editor position remains open, subject only to coded closures, which remain editable. Hypertext thus incorporates an editable time dimension in all kinds of e-texts, making these distinct from printed text even when the e-text is meant to be a copy of a printed original. This, again, can be utilized to allow the reader to switch between the author, editor, and reader modes.

The rationale behind the idea of the “autopoietic” work can perhaps be made clear if it is seen as a result of the tension between hypertext that is considered to be extrinsic and hypertext that is considered to be intrinsic to the object of attention. For the editor, it is both extrinsic and intrinsic at the same time. McGann’s ambition was to maintain some sort of interpretational wholeness associated with multiple interpretations of particular texts even if their meanings are ambiguous. McGann was aware of the dissolution of the physical support of intentional closure, a characteristic of printed books, but he only vaguely foresaw the tensions related to the complex time dimensions of web-based hypertext.

In the history of humanities computing, McGann’s work marks a transition from the unrecognized use of hypertext to the acknowledgment of hypertext as a methodological device for navigation, browsing, and interpretation, external to the materials; it stands on the brink of a full-fledged conceptual inclusion of hypertext as an integral part of the text, as well as on the brink of the incorporation of culture and society into a networked, hypertext-based media landscape. Thus, the conceptualization remains dependent on previous notions of digital media. Hypertext is either external to the text or something added to the text. It is still not recognized as the very feature allowing for the digitization of text in the first place.

Hypertext as Intrinsic to the Text

The Computer Reconsidered

During the 1980s and 1990s, the production of e-text developed in many unprecedented directions because of the spread of “personal computers,” graphical user interfaces, and a quickly expanding range of application programs that would always include text-editor software. The computational paradigm was supplemented with the HCI paradigm and the notion that the computer is a toolbox for a growing variety of particular purposes supported by specialized application programs. The process represents a breakthrough for what Alan Turing described as “choice machines,” “whose motion is only partially determined by the configuration . . . When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator.”44

For Turing, this was a rather trivial precondition. With few exceptions, the idea of the choice machine remained trivial until the potentials were gradually unfolded as the use of personal computers spread among both office workers and a broad range of experts in fields beyond engineering and computer science in the late 1980s.45

In one of the most significant interpretations of this shift, Jay Bolter described the computer as a fourth fundamental type of writing technology in the history of humankind, after modern print technology, the codex technology of the Middle Ages, and the papyrus roll of antiquity.46 Based on this idea, he also provided a full-scale reinterpretation of the computer by replacing rule-based programming with hypertext as the central operating principle. The argument is based on the semiotic distinction between “a sign and its reference,” which is inherent in the relation “between the address of a location in the storage and the value stored at that address.” This distinction between the sign and its reference has “to be learned in any kind of writing and programming.” The editable co-relation between address and content constitutes the architecture of all digital media. At the same time, it is the “essence of hypertext and of programs for artificial intelligence, in all of which text is simply a texture of signs pointing to other signs.”47 As a consequence, both the address and its content can be edited via the interface. This, again, provides the computer with an invisible space behind the visible representation of the text, which in former media “has been all image, never anything more than the ink we see on the paper or the scratches in clay or stone.”48
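Bolter’s point about the editable co-relation between an address and the value stored at that address can be sketched with a toy Python mapping. The addresses, contents, and the resolve function below are hypothetical illustrations, not an account of any actual storage architecture.

```python
# Toy sketch of the sign/reference relation: an address in storage and the
# value stored at that address, both of which remain editable "behind" the
# visible text.
storage = {
    "0x01": "Call me Ishmael.",
    "0x02": "0x01",   # this cell holds not text but a reference to another address
}

def resolve(address):
    """Follow references until a textual value is reached."""
    value = storage[address]
    while value in storage:          # a sign pointing to another sign
        value = storage[value]
    return value

print(resolve("0x02"))               # -> "Call me Ishmael."

# Either side of the relation can be edited without touching the other:
storage["0x01"] = "Call me Ishmael, again."   # edit the content
storage["0x03"] = "A different opening."
storage["0x02"] = "0x03"                       # or re-point the reference
print(resolve("0x02"))               # -> "A different opening."
```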

Television and radio are also based on invisible sources appearing on a screen or through a receiver, but the relation between source and representation is limited to a few mechanical variables on the receiver’s side, which is also separate from the sender and the storage. There is no access to the storage and no editable hyperlinks between the storage and the interface. Thus, Bolter’s claim concerning former media remains true for radio and television as well. In digital media, the interpretation of the text is generated by the interactions of the machine and the reader due to the “kinetic” nature of the textual representation.49 The visible text is only the “superficial text” in between the poles of the machine and the reader, which together establish a new type of writing space.

Bolter’s conceptualization of the computer also included a second expansion of the notion of text by stressing the spatial dimension as a fundamental semiotic dimension manifested on the screen but codable in the interaction between the operator and the codes stored in the machine.

Bolter called this spatial perspective “topographic writing,” derived from “topography,” denoting a written description of a place, and later understood as “mapping” or “charting” a space. Bolter’s reinterpretation refers to relations between the verbal and the visual appearance, as well as between the interface and storage. The topographical nature of e-text thus means that it is possible to compose the screen interface or any visual interface by writing “with places, spatially realized topics.”

There was an echo of poststructuralism, perhaps a bit too much so, when he also added that his concept of topographic writing refers to a mode that is not limited to the computer medium, since you can also divide a printed text into unitary topics and organize them in a connected structure. This is true, but only for the fixed visual representation. The dynamic codes behind the screen are put aside, in contrast to his definition of computer-mediated signs as a link-based relation between editable (hidden) codes and the visual representations.50

At the same time, text is extended to include mathematical, verbal, and pictorial signs. If media include such nontextual modes, they are often denoted as “hypermedia,” a term originally suggested by Ted Nelson. In Bolter’s theory, the inclusion of these different semiotic modalities is not simply a notion for a set of additional semiotic modes but is derived from the semiotic relation between address and content, which allows for the full array of semiotic modes to be deliberately incorporated and mixed in the same architecture of binary sequences.

The theory has been influential, but the fundamental analysis of the computer as a writing technology and the conceptualization of hypertext have perhaps not yet been fully appreciated.

In early hypertext theory, hypertext was primarily understood at the level of the interface as a technique that could allow authors and readers to compose and read text in new ways by offering multiple pathways through a text and between texts. The positions of author and reader remained conceptually separate. Nelson, who coined the term hypertext, described it as nonsequentially read text, because links were inserted in a primary text as references.51 He later gave a more dynamic description, saying that hypertext was best described as “branching and responding text. Best read at a computer screen.”52 A more far-reaching idea of his was that all documents could be incorporated and supplied with semantically motivated links in a fully cross-referenced “docuverse” that could connect any text to any other relevant text or passage of text.53

In the 1980s, the writer’s perspective became more significant. Now the computer could be used to link information together and create paths through a corpus of related material and, at the same time, to incorporate hypermedia perspectives to compose and combine heterogeneous sequences “created with different applications such as a painting program, a chart package, or a music editor.”54 In accordance with this, George P. Landow described hypertext as a simple node-link relation based on associations, similar to a footnote.55 Landow later viewed this as an incarnation of postmodern theories of intertextuality, and he argued that the hierarchies of the textual world would be replaced with nonhierarchical networks.56

The closeness of the writer and the reader positions would soon lead to reconsiderations of the relations between the author, work, and reader; perhaps the most radical of these was Landow’s idea that they would converge into the “wreader,” a term meant to characterize how a reader can utilize hypertext in an interpretative interaction with a text. The two extremes in the reader–writer relations are relatively easy to identify.57 At one end, hypertext is delimited as a navigation tool that allows a reader to add her own markers and comments in the margins, which, like footnotes, are external to the text. In most cases, such remarks would remain unpublished. To the extent that they represent a scholarly interpretation, however, they might be included in a scholarly edition, as Jerome McGann has suggested. This would eventually become a dedicated archive, which would continue to be edited and thus evolve over time. In this case, the physical closure of a printed text is replaced by a coded closure in an e-text. Since such coded closures are inserted as text into the text, they can be manipulated on a par with any other element in that text. At the other end, the most extreme interference in any e-text is complete rewriting or deletion. The question, then, is how the semiotic space between these two poles is exploited for meaningful articulation.

Hypertext as a Tool and as a Signifying Feature in the Literary Text

Two major traditions emerge in response to this question. One is rooted within the HCI paradigm, focused on the tool perspective that describes “hypertext-as-interaction with information to build associations, and through associations to build knowledge.”58 A second tradition, whose main focus is on hypertext as a node-link relation incorporated into literary works, emerges primarily in hypertext fiction and literary studies. The two traditions have been said to be incompatible because of their different epistemological roots, in computer science and poststructuralism, respectively.59 In Bolter’s view, they may just as well be seen as different hypertext configurations. The HCI-tool perspective has developed a strong focus on creating editable tools for knowledge building and for modifying the functional architecture of the computer. Literary hypertext theory and practice developed at the same time, but with a strong focus on utilizing hypertext as a fully integrated part of a literary work.

In an attempt to establish a canon of literary hypertext, Astrid Ensslin set up a three-polar typology taking as the point of departure pioneering works by Michael Joyce, Stuart Moulthrop, and others that formed what N. Katherine Hayles calls the early, classic hypertext literature.60 This generation of work was dominated by an author-centered approach in which the author provides the reader with a set of narrative pieces. To continue reading, the reader has to choose among a set of links provided at each screen page in the work, such as in the pioneering work Afternoon, a Story by Michael Joyce.61 This is indeed several steps away from Ted Nelson’s original concern with hypertext as an instrument for a writer for organizing notes and manuscripts to meet ever-changing associative and interpretational needs.62 The focus is text centric, and the links are predominantly simple connections between two pieces of text or between two pages, each with a restricted number of texts and options. The author- and text-centric perspective implied that hypertext was mainly, though not solely, considered to be a feature that is intrinsic to a closed work, the delimitation of which from other texts was not part of the experiments. The focus was primarily on the modal shift between the ordinary reading mode and the navigation or browsing mode, leaving the reader with the question of how to make sense of and decide the next step. The editor mode is not yet facilitated as an option in the reader position. Hayles also addressed the narrow, screen-focused interpretation of literary hypertext; she saw the “first generation hypertext” as building on a rather simple print-based convention of “moving through a text passage by passage.”63 The notion of the work was opened mainly toward its readings, even if many issues concerned the implications for the authoring of hypertexts as works.

Matthew Kirschenbaum traced a number of shared tropes in the early literary hypertext literature that were linked by such notions as the “flexible,” “fluid,” “ephemeral,” “instantly transformational,” and “flickering” and culminated in the “ultimate apotheosis, ‘immaterial.’” He described the theoretical framework as the advent of “a media ideology,” rather than “point[ing] to a transparent and self-sufficient account of the ontology of the medium itself.” Still, there are interesting insights in the lingering moves between materiality and immateriality, in the editable relation between address and content and between hidden code and visible manifestation, and in the variety of coded closures.64

Marie-Laure Ryan criticized the early hypertext literature for its lack of aesthetic quality, of “pleasurability,” and of “allegoric meaning to the actions of moving through a textual network.” The link system is treated as an invariant generic message related to the medium instead of giving unique meaning “to each particular text, and ideally recreated with every use of the device.”65 Janet Murray argued that there was a lack of literary quality, and Hayles came to the conclusion that (early) hypertext had failed to deliver immersion.66 For Hayles, a further limitation of the first-generation perspective was its lack of innovative use of e-text components belonging to the textual universe, such as “cut-outs, textures, colours, movable parts, and page order.”67

According to Ensslin, the second generation took up the hypermedia perspective, which had already been addressed by Nelson, Alan Kay, Adele Goldberg, Yankelovich and colleagues, and Bolter, but was now also being manifested in artistic works, such as Deena Larsen’s Marble Springs.68 For Hayles, the second generation was characterized by the inclusion of “all the other signifying components of e-texts, including sound, animation, motion, video, kinaesthetic involvement, and software functionality, among others.” With the inclusion of software, the notions of e-text, hypertext, and hypermedia are taken a step further, as the editable program functionality is, in this view, included as a signifying part of the text.69

For the third generation, Ensslin counted works that somehow fit into the theoretical concept of cybertext that was put forward by Espen Aarseth and then widely recognized as a major advance in hypertext theory. As far as Aarseth was concerned, however, cybertext was an alternative to hypertext theory, and he accepted only a limited set of literary hypertexts as cybertexts.70

The distinct material and technological features of e-text were often a main focus in the hypertext theories of the 1990s, but they are deliberately absent in Aarseth’s conceptualization of cybertext. Aarseth’s purpose in introducing the notion of cybertext was to get rid of “vague and unfocused terms as digital text or electronic literature . . . [italics in the original] and to develop a function-oriented perspective, in which the rhetoric of media chauvinism will have minimal effect on the analysis.”71 Cybertext is “a perspective on all forms of textuality.”72 The concept thus includes only the characteristics that are shared with other media. Compared to McGann’s theory of autopoietic systems based on reflexive second-order cybernetics, Aarseth relied more on Norbert Wiener’s first-order cybernetics. This conceptualization of the computer is consistent with the computational paradigm.

Despite the return to an abstract notion of “ergodic” computational text, Aarseth contributed to the understanding of both hypertext and e-text through his focus on the modal shift from a reading mode to a participative mode in “nontrivial” hypertext systems and on the value of the explorative potentials. In this respect, he aligned with Michael Joyce, who introduced the distinction between explorative hypertext and constructive hypertext, and with McGann.73 Aarseth added to the explorative perspective by focusing on the variety of combinatorial reading strategies; he also added to the understanding of the huge potential of hypertext-based interactive media, computer games included. He also recognized that hypertext relations are not always flat and open networks. They can also serve as a means of navigating and interpreting hierarchies by allowing multiple pathways to any given destination and relative to a variety of possible anchors. This, again, accords with McGann.74 Finally, with Bolter, he stressed the double nature of hidden and visual representation, which Aarseth denoted as textons and scriptons.75 The cybertext notion, however, limits some of the insights because it ignores the fact that in digital media there are never simply textons and scriptons. The relation between stored sequences and visual representations is itself organized as a coded, editable, and externalized instruction, as hypertext.
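The texton/scripton distinction, and the point that in digital media the relation between the two is itself a coded and editable instruction, can be sketched in Python. The sample strings and both traversal rules below are hypothetical illustrations; only the terms texton and scripton come from Aarseth.

```python
import random

# Hypothetical sketch: textons are the stored strings; scriptons are the strings
# actually presented to a reader; a traversal function generates the latter from
# the former. In digital media that function is itself stored, coded, and editable.
textons = [
    "A reader opens the archive.",
    "A hidden note appears in the margin.",
    "The passage links to an earlier draft.",
]

def traversal(textons, seed=None):
    """One possible traversal rule: present two textons in a (seeded) random order."""
    rng = random.Random(seed)
    return rng.sample(textons, k=2)

scriptons = traversal(textons, seed=42)
print(" / ".join(scriptons))

# The traversal rule is data too, and can be replaced without touching the textons:
def revised_traversal(textons):
    return list(reversed(textons))   # a different, deterministic presentation

print(" / ".join(revised_traversal(textons)))
```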

The hypertext literature survived and developed, not least in the “electronic literature” community, which organizes itself in the Electronic Literature Organization (ELO), founded in 1999.76 As the ELO defines it, electronic literature refers “to works with important literary aspects that take advantage of the capabilities and contexts provided by the stand-alone or networked computer.”77 “Importance,” of course, is in the eye of the reader. The ELO definition of “electronic literature” implies that it is distinct from “print literature” because of the characteristics of the computer, not because the two forms represent distinct sets of aesthetic values. The “literary aspects” and the value system of print, at the same time, have priority over other modalities even if “important” literary aspects are found in works dominated by paint, video, music, games, or codes.78 In recent years, the delimitation of text to linguistic and literary text has been further challenged within the ELO community by the inclusion of virtual reality stories; visualizations of scholarly and scientific fields; bio-texts; and other experimental practices related to issues of narration, interfaces, and interaction.79 The importance of the various modalities and their interrelations within a work is not so much used as a criterion for exclusion or inclusion in a canon as it is an issue for the interpretation of any individual work.

Digitization brings the various symbolic modalities and their related artistic practices that previously existed in separate materials and media into new kinds of interference and interrelations, without necessarily breaking down their boundaries. The boundaries can be either maintained or opened for flux and blendings as a part of a particular creative work. This is possible because of the “universality” of the binary alphabet, which allows all sorts of symbolic content to be coded, stored, and modified in the same alphabet, if not always without loss.

Materiality has been ascribed a growing variety of meanings. First, it refers to the perceptual conditions: symbols need to be physically manifested to be perceived. Second, it refers to the particular perceptual conditions on a screen interface, for example, the flickering on the screen. Third, it refers to the time-space conditions of a particular medium or, eventually, to the particular time-space conditions built into an application or a work. Fourth, it refers to the interpretation of cyberspace as an immaterial (virtual as potential or as fictive) space as opposed to “real life.” Fifth, it may also refer to the sequences of bits and the mechanical devices needed to perform the invisible physical processing in digital media. As there is as yet no dominant conceptualization, the theme will be addressed below.

The E-text Immersed in the World Wide Web

From “Tamed” to “Feral” Hypertext

In the first wave in the history of e-text, the various concepts were primarily related to interpretations of the notion of “text.” The electronic dimension was considered extrinsic and of minor relevance. The notion of “hypertext” was articulated, but it existed in the shadow of the computational paradigm of the dominating mainframe culture.

In the second wave, the focus shifted. Hypertext, interactivity, code, node-link relations, storage and interface, screen oscillation, reading and writing relations, and a growing variety of semiotic modalities, including codes, entered into the interpretation of e-text type 1. This was manifested in the development of application programs, in the tool perspective of the critical text edition, and in the literary version of e-text theory.

The first two waves developed in close relation to stand-alone machines, but they differed in the conceptualization of the computer: in the first wave as a computational automaton, and in the second, as a toolbox for human operators and authors. The assumption was that users or readers were to be served rather than to be servers themselves.

On the crest of the second wave, a third wave took off, sparked by the release of the World Wide Web (WWW) protocols, which more or less overnight transformed the existing Internet based on the TCP/IP protocols into a new, globally distributed, electronically integrated communicative infrastructure formed around networked digital media.80 The WWW protocols also provided a convenient interface, which made the Internet accessible to a fast-growing part of the world’s population. In the following years, a cascade of new software genres and communicative practices that served a rapidly increasing array of purposes emerged, and the processes of digitization spread into almost all segments of society and culture.

Both the TCP/IP Internet and the web-based parts of it build on hypertext connections that allow new addresses and connections between existing addresses to be added and content to be edited deliberately. The quantitative changes in scale, reach, and access were thus obtained with the help of an infrastructure in which hypertext, whether extrinsic or intrinsic, was not something additional but the very “landscape in which the text is immersed.”81

In the late 20th century, hypertext was primarily developed within the context of the stand-alone computer, in which the sequences of bits are controlled by a central processing unit. For networked digital media, there is no such central unit. Networked digital media can be programmed to interfere with each other on all levels, including the functional architecture of any machine. Even though networked digital media are still mechanical machines, they lose autonomy when immersed in the network of fluctuating hypertext connections, whether these connections are made visible or not.

The full range of the ever-developing implications that are inherent in the emergence of networked digital media is beyond comprehension from both an overall societal or cultural perspective and the narrow perspective of digital media genres.

In a large-scale perspective, it would be necessary to consider the known drivers of the development, such as the exponential growth in knowledge production, climate change and threats to the biosphere, globalization, migration, urbanization and modernization, new types of mediatization, and new types of social interaction and communication. Among the issues raised are questions of authority, of democratization and the deflation of cultural value systems, of copyright and privacy, and of networked forms of social collaboration.

In the narrow perspective of conceptualizations of e-text, it is still possible to trace significant trends by looking at a variety of hypertext configurations from a growing range of multiple-source knowledge systems.

In 2005, Jill Walker described the “unleashing” of hypertext “into the world wide web” as a transition in which the concept “goes feral,” and feral hypertexts “refuse to stay put within boundaries we have defined.”82 The term “feral hypertext” denotes the emergence of hypertext structures that may or may not trespass any structural delimitations. They cannot be restricted to navigational features outside a work—as Ted Nelson originally imagined—or kept within the closure, as in the literary hypertext tradition. For networked digital media, hypertext is both inside and outside and the connection in between.

The dichotomy between “tamed” and “feral” echoes the wider philosophical controversies about whether digital media, and particularly the “setting free” of the hypertext repertoire on the internet, represent a decline in literacy, logic, and rationality, leaving us only with a kind of “freedom” where there is nothing left to lose. The dichotomy is also set between “unplanned structures” and the “massive possibility for collaboration and emergence in the network that creates truly feral, uncontrollable hypertext.”83

Among the network-dependent genres there are, however, also highly organized and controlled configurations of hypertext. These include a number of multiple-source systems for the real-time monitoring of climate, weather, pollution, human behavior, traffic, market developments, and so forth, often also combining real-time data, interactive transactions, and other sources. Besides these primarily research-initiated sources there are also increasingly important commercial sources, such as the data repositories of Google, Facebook, Twitter, Amazon, and other service providers, as well as numerous civic projects in a variety of crowdsourcing formats, including Wikipedia. A range of such systems has also been developed in relation to the United Nations’ seventeen Sustainable Development Goals.84 Some systems are oriented toward user interactions in a growing variety of formats, such as online games, virtual reality systems like Second Life, social media sites, crowdsourcing sites, online services like Google search, and a variety of personalized news services. Others occupy the many intermediary positions between centrally controlled and feral hypertext configurations.

Multiple-source systems as such are not unique to digital media. Encyclopedias, dictionaries, newspapers, journals, catalogs, phonebooks, and collections of any sort stored in libraries, museums, and archives, and many research corpora are all based on the aggregation of materials from a wide range of sources. They are gathered with respect to a variety of purposes and criteria for inclusion. Networked digital media allow for fundamental changes in the character and functioning of such systems.

Corpus linguistics provides a very illuminating example. In a study of “the web as Corpus” within a corpus linguistic framework, Maristella Gatto concluded that “the idea of a ‘web of texts’ has brought about notions of non-finiteness, flexibility, de-centring/re-centring, and provisionality” to be added to the established notion of a corpus as a “body of text of finite size, balance, part whole relationship, and permanence.” A summary of the methodological implications suggests that the study of a corpus of linguistic web materials raises questions about data stability, the reproducibility of the research, and the reliability of the results, which formerly could be taken for granted. These seemingly negative characteristics are counterbalanced by an array of new methodological possibilities. Gatto did not mention hypertext, but she described a great variety of instantiations with more specific terms.85

In the debates about the exponential growth of data, it is often taken for granted that the majority of these materials are “unstructured,” “messy,” or heterogeneous because of the still more diverse purposes that are articulated in distinct software paradigms, resulting in a growing diversity of knowledge formats.86 Not least among the reasons for this is the hypertext infrastructure that allows the ever-ongoing connection and disconnection of any deliberately chosen sequence of bits located anywhere on the web.87 Even from a strict linguistic text perspective, networked digital media bring with them fundamental changes in the conceptualizations and analyses of linguistic materials. Early in the 21st century, David Crystal, in an analysis based on e-mail, chat groups, and virtual worlds, reached a more moderate conclusion: although the internet provided a wide range of variations of reading modes, it remained on the whole a linguistic text, “an analogue of the written language that is already ‘out there’ in the paper-based world.”88 Notwithstanding that, he described in detail a range of digital-only features, such as written synchronous communication across distances and the incorporation of links, often signified by color codes, into the text, among others.

Later, Naomi S. Baron described how the move of computer-mediated communication (CMC) “beyond academics in the 1990s” was accompanied by the question of whether CMC in general, or at least e-mail and instant messaging, “more closely resembles speech or writing,” and she set out to analyze “the new forms of language: online and mobile language.”89 CMC is now established as a distinct subfield of communication studies, organized around the Journal of Computer-Mediated Communication.90 To a large extent, CMC can be seen as the ongoing study of language and linguistic text as it develops in the new, networked landscape, with frequent observations of how the gaps between previously separate semiotic fields, such as speech and writing, are filled with a variety of intermediary forms. As a continuation of speech-act theories, the CMC tradition also adds to the understanding of the dynamic nature of e-text by focusing mainly on the interactive relations between communicating people; the interactions with the functional architecture of the machinery seem to be given less priority.

In the literary hypertext tradition, one of the main changes has had to do with the expansion of interactivity to include readers who can alter the text, and this will soon be further extended by the development of new forms of collaboration and interaction. The move of CMC from academia to the broader society is also a move into popular culture. Thus there is a growing tension between the literary value system of, say, the Electronic Literature Organization (ELO) and the electronic forms of popular culture. The explorative dimension seems to fall outside the literary value system insofar as it is oriented toward networked digital media.

Multiple Source Cultures and the Language of New Media

Writing about internet literature in China, Michel Hockx addressed the elitism of literary studies that ignore popular genres like fan fiction even though fan communities are involved in the exploration of the expressive potentials in networked digital media.91 Popular culture is not that obsessed with the notion of authorship and does not privilege text over multimodal and participatory communication. Hockx made his case by analyzing a variety of genres ranging from avant-garde experimentalism, blogging, fan fiction, and online poetry to mass-produced semipornographic fiction. For Hockx, the notion of electronic literature needs to include multimodal, interactive, and participatory expressions. To some extent, he maintained the expectation, rooted in the literary tradition, that the texts chosen for analysis should demonstrate a certain level of reflexivity about aesthetic aims, and he prioritized work that explores boundaries.92 If the ELO tradition sticks with closure in a mimetic relation to print, popular culture enters into situational closures, which are to some extent negotiated by the participants.

A similar and perhaps more radical step had already been taken in media studies, as in Henry Jenkins’s analyses of the complex interrelations between commercial content providers and their increasingly interactive and participatory consumers. A main theme is the claim that stories today are told across multiple media platforms and semiotic modalities.93

For Jenkins, “convergence” refers to the coordinated use of many channels on the side of the culture industries, while users meet in participatory communities to widen the narratives with their own contributions. These contributions are not always that interesting, but sometimes these communities have developed new genres, such as fan fiction created around the Harry Potter films, utilizing a range of semiotic regimes, be they textual, pictorial, video, or audio. Fan cultures have the potential to develop social media skills, which include the capacity to incorporate the array of multimodal regimes in the narrative. They do not restrict themselves to digital linguistic text, but are moving into multimodal texts and the hypertext-based coding repertoire of e-text.

Writing in 2006, Jenkins foresaw that the clash of corporate interests and fan communities might lead to the closure of what was a still-open window for user-generated content. Then came the breakthrough of Facebook, which provided a commercial platform for social communication. Unlike the fan-culture sites, Facebook did not require that fans become part of a community. Facebook allowed a more limited semiotic repertoire, centered on linguistic text and fixed topographic space flows. Still, Facebook provides real-time, typed, public or semipublic communication. The timescale of Facebook communication is therefore the same as that of spoken communication, but the messages are stored. To respond, subscribers need to be present on the site within a time-limited “window-of-interaction” that is partly controlled or “edited” by the service provider, who forms the streaming of messages on the news page. The subscriber can trace back the stream as it is stored, but the time for response may have passed anyway.94

The demand for response presence—the window of interaction—demonstrates the significance of the time dimension in a number of hypertext systems. A perhaps more extreme example is described by Karin Knorr Cetina, who analyzed various cases of the mediatization of face-to-face encounters.95 One of those was the global currency trading system, which provides a huge array of information distributed on six to eight screens and continuously updated with real-time financial data and with relevant news from around the world provided by professional journalists. The system includes facilities for private communication between currency dealers. It also includes their individual trade actions, as well as deals performed by preprogrammed algorithms. The traders thus both read and write information into the system and have to be able to respond to a constant flow of updated information in just fractions of a second, so as not to lose out in a fluctuating market. Knorr Cetina described the communication of the traders as follows: it is “as if a trader’s brain was attached to the market . . . unthinkingly.”96 The system defines the need for “response presence” (as distinct from embodied presence) as a very narrow window of interaction, determined by the speed of market fluctuations as these are filtered in this particular hypertext configuration.

From a semiotic perspective, the system includes numbers (e.g., exchange rates and timestamps), icons, charts, graphs (representing numbers), alphanumeric text, color codes, and probably other formats. The materials are processed sequentially, but they are likely to be read based on visual recognition of the changes on the screens due to updating frequencies, color-code changes, and other indicators. Reading still takes place in linear time, but does not necessarily traverse the screen space in a standard routine order as is often assumed for printed texts. The use of graphical markers to call attention to parts of a printed page that break the standard reading order is a familiar technique used in printed newspapers and magazines and is often seen in the perspective of the montage technique of the early 20th century. In digital media, the array of such markers is expanded both in number and function, not least because they can be dynamic and time coded. They can also be used as markers of hypertext anchors to trigger actions at a destination. This can be done for any fraction of the screen: a pixel, a single letter, a word, or any other arbitrary delimitation of screen space. Because these features are available to be utilized as “signifying components of e-texts,” it can be argued that they should be included in a contemporary interpretation of the notion of e-text.97

Why introduce a notion of e-text broader than the language-centered e-text type 1 elaborated in the 20th century? The answer is that it is necessary to incorporate different modalities, codes, and dynamic time dimensions in the analyses of digital media narratives insofar as they are used as signifiers in these narratives. These visual, aural, and kinetic modalities include both the full array of semiotic modalities on the perceptual level and the array of binary sequences, either codes or data or both, manifested in binary coded electromagnetic signals on the level of machine processing that is below our sensory capacities. Because hypertext is rooted in the basic address system, link relations such as the use of the keys on the keyboard in a word processor are “trivial” in most cases. A distinction between trivial and nontrivial hypertext can be made based on the criterion suggested by Hayles of whether a link is utilized as a signifying component in a message.98

The most widely used notion for digital materials is arguably the term “data” inherited from the computational paradigm. So why introduce the disturbing term “e-text type 2” between the terms “data” and “e-text type 1”? In this case, the answer is that “data” refers to passive objects. Data does not include mechanical transactions performed by the bits or sequences organized as programs and scripts, and it does not include the significant hypertext relations. The term “e-text type 2” qualifies because it includes all sorts of digital materials, data as well as codes and links. At the same time, it comes with an increased focus on signifying potentials, on the one hand, and on the messiness and noisy character of data related to networked digital media, on the other.

If there are good reasons to establish e-text as embracing the full array of “signifying components,” the question is, what would its main characteristics be? Is it possible to identify a common denominator for CMC, a language of networked digital media? In The Language of New Media, Lev Manovich provided an affirmative answer. Digital media have a set of shared characteristics. The theory takes its point of departure in the history of film. This resonates well with Manovich’s primary interest in visual representation, and it adds a variety of interesting aspects compared to Bolter’s analyses of digital media as writing technologies.99 Manovich introduced transcoding as the process in which the characteristics of digital media become cultural forms. Thus, the conceptualization of the computer becomes central in this notion of culture. At the same time, new media are influenced by conventions developed in former media, but this influence has to be articulated in the cultural forms of digital media.100

The generalization of the transcoding principle may be far-fetched. It makes perfect sense, however, as a description of the relation between the stored sequences of bits and those that are made visible on the screen or otherwise perceptible. Ignoring former interpretations of this distinction between storage and interface (e.g., Bolter and Aarseth), Manovich conceptualized a relation between, on the one hand, “the computer’s own cosmogony,” characterized by the database format and complete separation between data and programs, and, on the other hand, the interface level that makes sense to human users.101

The separation of data and program, or data and software, reinvokes the computational paradigm. Still, the distinction between hardware and software is also essential, but the reason given is unusual, as Manovich traces this separation back to analogue electronic media in the 19th century. With the shift from tangible and sensible physical objects to invisible electronic signals, he argues, digital coding of electronic signals is only a minor change compared to “analogue” coding of, for instance, amplitude and frequency, brightness, and contrast.102 Yet the “minor change” had a major impact because there was no editable software in the predigital media world. The theory hides the semiotic array of possible connections in the coding of links between storage and interface.

Manovich’s theory adds to the interpretation of digital media as they become integrated into culture at large. This is not least due to his overall approach, which characterizes the language of digital media as based on numerical representation, modularity, automation, variability, and transcoding.103 Even if each of the five principles needs to be further elaborated, perhaps reinterpreted, they constitute a set of essential dimensions.

The theory is an alternative to Bolter’s writing-space concept because it embeds HCI concepts, such as modularity and variability, within the computational paradigm, which remains the overarching principle in Manovich’s interpretation of modularity, variability, and transcoding. For instance, variability is “closely” linked to automation because data “can be assembled into numerous sequences under program control.”104 This may be true, but it does not include all dimensions of the language of new media, since programs are programmed and controlled by human operators using Turing’s choice machine to create deliberately composed hypertext configurations, including a variety of automata and robots. Bits are not numerical representations, but when they are combined in ordered sequences, they may represent letters, numeric and operational characters, formal rules, instructions, images, signals, and addresses.

Recalling Bolter’s theory, it can also be argued that it is not the numerical values of data that allow for data processing, but the storage of binary coded data in addressable form. This allows for random access to any sequence of bits independent of their semantic values, meanings, and functions. What counts in the machine is their mechanical function on the physical level. Writing on the brink of the third wave of digitization, Manovich contributed to the increased focus on the role of the computer as a medium in society at large. He also added significantly to the interpretation of the multimodal nature of digital media, especially in his elaborate account of coded visual materials.
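This point about addressable storage can be illustrated with a minimal sketch (in Python, with invented values chosen purely for illustration and not drawn from any of the works discussed here): the same stored bytes can be accessed at an arbitrary address, read as text or as a number, or edited in place, the interpretation depending entirely on the processing applied rather than on any intrinsic numerical value.

    # A minimal, hypothetical sketch: stored bit sequences are accessed by address
    # (offset), independently of whatever they may "mean" on the interface level.
    store = bytearray("HAMLET To be, or not to be".encode("utf-8"))

    chunk = store[7:12]                  # random access to a sequence of bits by address
    print(chunk.decode("utf-8"))         # the bits read as characters: "To be"
    print(int.from_bytes(chunk, "big"))  # the very same bits read as a number
    store[7:12] = b"Or not"              # the stored sequence remains editable in place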

Bolter’s writing space concept of the computer also competed with the writing machine perspective articulated by N. Katherine Hayles, which has been elaborated, and is still evolving, over three decades, dealing both with literary perspectives and with the interpretation of the computer. A main distinction between the two perspectives is indicated by the difference between “a space” and “a machine.” Both theories, however, incorporate a communicative space as well as mechanical and dynamic properties. They differ in their conceptualization of the connection, and the difference is most clearly visible in their respective notions of hypertext. For Hayles, hypertext exists both in print media and electronic media and “minimally” includes only multiple reading paths; text that is chunked in some way; and some kind of linking mechanism that connects the chunks together so as to create the multiple reading paths.105 In Hayles’s account, the “machine” is coded, but the coded link relation between the storage and the interface remains black boxed. In Bolter’s account, the machine is built with algorithms that remain editable because any single sequence is stored at one or another address, from where it can be accessed, modified, or moved.

In Hayles’s perspective, linking is a rather simple process that equates the reader’s interpretation of a superscript numeral in a text as a reference to a footnote with the coded anchor-destination relation of a link. The digital equivalent, however, includes an instruction of what to do at the destination, and all of it remains editable. An editable timescale is thus built into any deliberately chosen part of the e-text. Since the timescales are editable, they can be used as significant semiotic elements in an e-text type 2. With the author and reader positions in focus, Hayles contributed to the elaboration of terminology for these modes and the related textual perspectives. The editing mode made available by the hypertext relation between storage and interface is left out. In this respect, Hayles is aligned with Aarseth and Manovich. Contrary to them, however, Hayles includes the codes as an integral part of her notion of text. None of these theories is yet fully capable of including the characteristics of e-text type 2.
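The difference can be indicated with a minimal sketch (in Python, with hypothetical names invented only for illustration): where a printed footnote reference merely points, a coded link couples an anchor to a destination together with an instruction of what to do there and an editable timescale, and every element of that coupling can itself be rewritten.

    # A hypothetical sketch of a coded, nontrivial link: anchor, destination,
    # an instruction to be executed at the destination, and an editable timescale.
    link = {
        "anchor": "note-7",                # the visible, clickable fragment
        "destination": "notes.html#7",     # where the link resolves
        "action": "scroll_and_highlight",  # what the machine does on arrival
        "delay_ms": 0,                     # a timescale for the transition
    }
    link["delay_ms"] = 400                 # the timescale remains editable and can itself signify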

The Text in the Machine

The original computational paradigm, modified into an explorative modeling paradigm, now competes with HCI interpretations of the computer as a toolbox that can be adjusted by hypertext menus and with networked digital media interpretations. Networked digital media are generally accepted as the new landscape. The interpretations range from seeing the internet as a communicative landscape external to the scholarly foci to a growing integration of the global network facilities in new interactive hypertext genres.

Stephen Ramsay describes “coding and structure” and modularity as the basic characteristics of computers. Ramsay’s position echoes Lev Manovich’s inclusion of modularity in the computational paradigm. The program remains the central compositional feature. The machine is distinct from the text it processes. This is a modest theory that conceals the role of the programmer, even though Ramsay actually exploits McGann’s explorative and “deformative” ideas related to the interpretations of literary texts.

Quite different interpretations include Jay Bolter’s notion of a “writing space,” constituted by hypertext connections between the storage and the screen and between connected machines.106 This position is continued in the works of, among others, Henry Jenkins and of Axel Bruns in Blogs, Wikipedia, Second Life and Beyond: From Production to Produsage, which introduces the notion of “produsage” for the variety of ways in which citizens are involved in the production and reproduction of content.107 In this perspective, hypertext, the kernel in Turing’s choice machine, is the basic compositional mechanism used by professional or civic programmers to produce, connect, and disconnect the modules. The text is part of the machine and can also be used to control the mechanical processes.

Steven Roger Fischer and Peter Sahle address the question of how to delimit the electronic part of a text (type 1) from the software in which it is embedded or, more precisely, how the codes that represent “the text” can be delimited from the surrounding codes, such as markup codes, ASCII codes, or hyperlinks. Which parts of the electronic materials are intrinsic to the text and which are extrinsic? This relates to the question of how to establish closures in a medium in which the materials, the coded algorithms included, remain editable. The question may apply both to digitized and born-digital linguistic text.

In A History of Writing, Steven Roger Fischer argues that e-text differs from alphabetic text because it does not rely on spoken language but on electronic programming. Computers can “write” both messages and entire programs between themselves. Both kinds are considered to be “complete writing.”108 If the computer “writes,” and program execution is part of that writing, then the machine is defined by the writing of the program. If so, it is only a short step to acknowledging Bolter’s writing space perspective and including not only “ASCII texts” but all sorts of digital materials, the writing instructions included, in the concept of e-text type 2.

A related argument is made by Peter Sahle in Digitale Informationsformen, which describes both the program and the hypertext as part of the invisible and linear text on one level, while hypertext on a different level is considered nonlinear and thus oppositional to the linear text.109 It may be argued that this is only possible because both e-text type 1 and the hypertext link are materialized in e-text type 2. Sahle wanted to delimit his theory to linguistic texts (e-text type 1), which are manifested within a delimited set of characters (such as ASCII or UNICODE). According to Sahle, an e-text can be distinguished from the e-image (Bild) because each is based on a different type of semiotic coding. The e-image on the screen is defined by coded pixels, and the text is composed of characters that are both visible and can be processed as semiotic units.110 Facsimiles of printed text, however, can be converted with character recognition software. E-text and e-image do not have to share any algorithms or codes, but they do need to be manifested and processed in the same binary alphabet, as does any particular algorithm. From this perspective, the interpretation of e-text type 1 cannot but move fast toward inclusion as a particular type within e-text type 2.
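The shared ground that Sahle points to—characters and pixels are coded according to different semiotic principles, yet both are manifested in the same binary alphabet—can be shown in a small, purely hypothetical sketch (in Python):

    # A hypothetical sketch: character-coded text and pixel-coded image data are
    # distinct semiotic codings, yet both end up as sequences in the binary alphabet.
    text_bytes = "Text".encode("utf-8")         # characters coded via UNICODE/UTF-8
    pixel_bytes = bytes([255, 255, 0, 0, 255])  # grayscale pixel values of a tiny facsimile
    print(text_bytes, pixel_bytes)              # two codings, one underlying alphabet
    # Character recognition (OCR) is the conversion from the pixel coding to the character coding.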

The implications of this are amplified insofar as individual computers and other digital devices are interconnected, because the interconnection is based on hypertext links between destinations and addresses. The hypertext connection of machines implies that the individual machine loses functional autonomy. Networked digital media facilitate communicative exchanges of content by interfering in each other’s functional architectures. The physical devices may be dedicated to particular kinds of usage, but their functional architectures are still defined on the level of the binary alphabet.

The computer was originally used to mechanize calculation, text processing, and other processes. This meant that the text, as well as the functional architecture, algorithms, and address systems, had to be represented as sequences of the—editable—bits already recognized but not further explored by Turing. The mechanization of text processing in digital media is based on the textualization of the functional architecture of the machine. Thus electronic text type 2 may also include the functional architectures of “automata” and “robots” performing via remote-controlled programs, as deliberately coded closed works. Robots may be monitored in real time, like drones, or unmonitored, like “self-driving” cars. In all cases, the algorithms are both part of the functional architecture and of the individual “messages” processed.
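A minimal sketch (in Python, hypothetical and invented for illustration only) may make this “textualization” concrete: the instructions governing the machine’s behavior are themselves stored as an editable sequence of characters and are processed by the same machinery that processes any other e-text.

    # A hypothetical sketch: the "program" is stored as editable text and can be
    # rewritten, like any other e-text, before or between executions.
    program = "UP 2; RIGHT 3; UP 1"                  # functional behavior represented as text
    position = [0, 0]
    for step in program.split(";"):
        command, amount = step.split()
        if command == "UP":
            position[1] += int(amount)
        elif command == "RIGHT":
            position[0] += int(amount)
    print(position)                                   # [3, 3]
    program = program.replace("RIGHT 3", "RIGHT 5")   # the architecture-as-text remains editable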

Electronic text type 2 is both the basis for further developments of automates and robots governed by either externalized or real-time monitored remote controls and for a fast-growing array of less controlled and semantically richer narratives.

Materialities

As a consequence of the growing awareness of the intricate relations between text and machine, the issue of the materiality of text and of digital materials becomes increasingly dominant. This is partly because of the emergence of a tremendous variety of new genres. Digital media are used to produce texts, as well as images, sounds, 3D virtual spaces, 3D printing, and an array of other physical and social processes in a variety of physical materializations that go far beyond oscillating screen images. The issue of materiality is also conceptual in nature, as it relates to the hardware, software, and materials processed and is ascribed a growing variety of meanings. It refers to a set of perceptual conditions; to the time-space conditions of a medium and, eventually, within a particular text, to the interpretation of virtuality and potentiality; to the processing of the sequences of bits and the dynamic impacts of these processes; and to physical and formal characteristics of the hardware and software. These refer both to tangible physical objects and energy and to digital processes that are often described as purely immaterial or virtual processes.

What is seen as the materiality of media and of messages is thus a matter of epistemology, discussed from a range of different perspectives that are more or less closely related to digital media. The question of materiality is a recurrent theme in the works of N. Katherine Hayles, which relate both to discourses of embodiment, as in the analyses of Writing Machines (2002), and to the dissolution of the modern “I,” in, for instance, How We Became Posthuman (1999), and are further elaborated in How We Think: Digital Media and Contemporary Technogenesis (2012). A central claim is that materiality cannot be specified in advance, as if it preexisted the specificity of the work. For Johanna Drucker, materiality “inheres a process of interpretation rather than a positing of the characteristics of the object.”111 Further reflections in this direction include the collection edited by Diana Coole and Samantha Frost, New Materialisms: Ontology, Agency, and Politics (2010), and Jay Bolter’s “Posthumanism” (2016).112

From a literary media perspective, Matthew Kirschenbaum, in Mechanisms: New Media and the Forensic Imagination (2008), analyzes the hardware devices in a kind of close-reading perspective, arguing that the physical characteristics of, say, a drive are particularly relevant because they may provide clues to or impose constraints on usage. Kirschenbaum elaborates on Drucker’s and Hayles’s idea that materiality is defined in the act of signification, which he calls “forensic materiality.” In the case of digital media, Kirschenbaum suggests adding the concept of “formal materiality” to refer to “the simulation or modeling of materiality via programed software processes.”113 A question then is how repetitive, programmed processes and individual messages are connected, since they are materialized in two completely distinct forms. Media theories, not least medium theory and parts of mediatization theory, would argue that physical characteristics in these analyses do not refer to physics, but to the organization and, eventually, institutionalization of physical materials that are used for human communication. This holds both for the medium itself and for the repertoire of possible variations within a medium, and within combinations of media, that can be used to articulate individual messages or sequences of such.114 An indication of a turn to media theories is also found in N. Katherine Hayles and Jessica Pressman’s recent Comparative Textual Media.115 Further complications awaiting closer inspection relate to the distinctions between physics and biology, notions of embodiment and biological tissue, biological and mental processes, not to say life and death.

The overall, recurrent theme in this literature is the post-Cartesian relation between brain and mind, between physical process and mental content. Both dimensions exist within the same time and space insofar as ideas and thoughts are conceived of as materialized in the brain or in external mediated forms, ranging from fluid speech through fixed objects to digital media, which are physically fixed devices made fluid by codable software and messages.

The generalizations in these meta-reflections and meta-perspectives somehow mirror the generalization of the representation of everything—be it things, physical processes, or mental content—in the very same binary alphabet.

Daniel Miller presents an anthropological perspective in his edited collection Materiality (2005), which argues that there is a need to give room for the analysis of particular manifestations and conceptualizations of materiality throughout human history. This may fit well both with the recent efforts to include human behavioral data in climate research, indicating that human culture, as it is always materialized, plays a significant role in the history of nature, and with the conceptualization of the Anthropocene as a new geological epoch.116

Discussion of the Literature

E-text Type 1

The literature on e-texts is extensive, while the literature on the notion of e-text is sparse. The term is most often used without further qualification but is primarily associated with e-text type 1. There is, however, no established canon across all relevant disciplines for the conceptualization of e-text type 1. In the broader area of e-text type 2, which includes all kinds of digital materials, the situation is even worse because of the exponential growth in amounts, types, and genres of digital materials, as well as in areas and purposes of digitization.

Some disciplines, however, qualify in particular ways in particular epochs in the history of digitization. Regarding e-text type 1, this was the case for the humanities computing community in the second half of the 20th century. Humanities computing qualifies because of a sophisticated understanding of texts and of the interpretative subtleties of e-text processing. The tradition also opens a range of new issues. Former assumptions and theories of text are revised in the attempts to take advantage of a technology that gradually turns into a medium bringing its own set of underlying characteristics into play. The interpretation of these characteristics becomes a still more significant part of the story.

In the second half of the 20th century, the humanities computing community was surrounded by postmodernist theory but remained rooted in modernist thinking, for a while also subscribing to computational epistemology. The history of and developments within humanities computing are documented in Journal of Computers and the Humanities (1966–2004), Natural Language and Linguistic Theory (1983–), and Literary and Linguistic Computing (1986–2014), and in an array of anthologies published throughout the period. These include, among many others, collections edited by Raimonda Modiano, Leroy F. Searle, and Peter Shillingsburg, Voice, Text, Hypertext: Emerging Practices in Textual Studies, and by Marilyn Deegan and Kathryn Sutherland, Text Editing, Print, and the Digital World. Modiano and colleagues cover “oral, material and e-text” from a wide range of periods studied in a wide range of countries, focusing on methodological issues related not least to the application of hypertext tools. Deegan and Sutherland cover the incorporation of digital media as a workplace and methodological tool in critical text editing. Peter L. Shillingsburg, in From Gutenberg to Google, discusses the electronic infrastructures needed for the transfer of written or printed text to e-text type 1, based on his theory of script acts comprising every sort of act related to the written, printed, and electronic representation of text. In this view, such a theory is needed precisely because the electronic representations alter “the conditions and principles of textuality” due to the unique capabilities of digital media beyond simple hypertext search and navigation.117 The wider implications for cultural criticism of the incorporation of a broad range of digital features in literary studies are discussed by Alan Liu in “Social Computing,” among others, in the Modern Language Association encyclopedia Literary Studies in a Digital Age: An Evolving Anthology, edited by Kenneth Price and Ray Siemens.118

A second strand of linguistic and literary text theory in the 20th century developed from attempts to automate language production and translation, starting with the early experiments inspired by Shannon and Weaver’s statistical approach and Noam Chomsky’s transformational grammar approach in the 1950s. Both cases were touched by artificial intelligence ambitions, as in Herbert Simon’s heuristic strategies or in the connectionist paradigm of the 1980s.119 The EU-funded EUROTRA project was perhaps among the most ambitious of these and generated many insights into the subtleties of translation, but without reaching the ambition of full-scale automated translation.120 Today, Google Translate seems to be the best bid, though the quality is questionable. It builds on huge amounts of heterogeneous linguistic datasets and a big data approach, but without revealing why it translates as it does.121 For Bar-Hillel, a main obstacle was linguistic polysemy.122 Fifty years later, these issues are still on the agenda for the Google Translate team: “The same meaning can be expressed in many different ways, and the same expression can express many different meanings.”123 Nevertheless, in a pragmatic perspective there has been some progress in the use of statistical analysis. Like Google Search, which has become a serious rival to information retrieval theories in library and information science, Google Translate has become a serious rival to certain areas in linguistics.

A third strand is the corpus-linguistic tradition, comprising a range of approaches based on the analysis of a corpus of “real language” materials, usually collected according to research-defined criteria. Regular conferences have been held biennially since 2001.124 The International Journal of Corpus Linguistics has been published since 1996; Corpus Linguistics and Linguistic Theory, since 2005; and Corpora, since 2006. Corpus linguistics is mainly focused on e-text type 1. As these texts are increasingly immersed in e-text type 2, for example, as web texts, the notion of a corpus as a “body of text of finite size, balance, part whole relationship, and permanence” is questioned, as in Maristella Gatto’s Web as Corpus: Theory and Practice; Studies in Corpus and Discourse, in which notions like “non-finiteness, flexibility, de-centering/re-centering, and provisionality” are used to characterize web-based text corpora.125

The efforts concerned with e-text type 1 are thus increasingly situated within the landscape of e-text type 2, which influences the delimitation of e-text type 1.

E-text Type 2: In the Wilderness

Humanities computing, with its focus on the digitalization of nondigital originals, is a main source of the conceptualization of e-text type 1. Further sources are needed for dealing with born-digital materials. Literary and linguistic theories are often still the most elaborate, but now also media studies and media ethnography, media archaeology, HCI studies, hypertext theories, CMC studies, social media studies, network analyses, “big data” studies, web studies, and a wider range of theories of text and social text bring with them a range of new perspectives manifested in a fast-growing range of specialized studies generated from almost any possible discipline concerned with contemporary culture.

During the same process, humanities computing became a main pillar within an emerging digital humanities community, which embraced a wider range of approaches to digital media initiated by the spread of a still more diversified set of digital media into all spheres of society.126

The humanities are thus confronted simultaneously with two different processes of digitization. One process is created by the ongoing digitalization of nondigital cultural heritage materials, which relates to the humanities computing tradition insofar as these efforts center on the digitization of nondigital originals. As an implication, hypertext will remain a tool that will eventually be used for navigation, modeling, and exploration as a part of scholarly methodologies. It will never be a part of the nondigital originals, though it will inevitably be part of any representation of and transaction with the digitized translations.

A second process emerges as a response to the fast-growing amounts of born-digital materials related to the spread of networked digital media, involving materials produced independently of scientific and scholarly purposes and including complicating features such as scripts, interactivity, and hypertext in texts that are at the same time immersed in a global hypertext infrastructure. The two processes interfere both with respect to the materials of attention and with respect to the conceptualization of digital media and of the methods and epistemologies used. The transition from humanities computing to digital humanities is documented in a series of Companions to . . . publications and a range of anthologies, most recently, for instance, Gold and Klein’s Debates in the Digital Humanities.127 Steven E. Jones discusses these developments in The Emergence of the Digital Humanities from a broad perspective, with a strong focus on developments in the United States.128 The international Alliance of Digital Humanities Organizations was formed in 2002. The community gradually developed into a rather diversified “big tent.”129 Other metaphors have also been applied, but the lack of a consistent delimitation is often addressed within the tradition, not least within the “classicist” part anchored in the study of nondigital originals. See, for instance, John Unsworth, “What Is Humanities Computing and What Is Not?”; Stephen Ramsay, “Hard Constraints”; and Willard McCarty, “Tree, Turf, Centre, Archipelago; or Wild Acre: Metaphors and Stories for Humanities Computing.”130 Despite this big tent, digital humanities has also been criticized for being narrowly centered on UK and US perspectives and for the lack of wider cultural critical perspectives, for example, by Domenico Fiormonte and Alan Liu.131

The array of digital materials and related methods is overwhelming. A New Companion to Digital Humanities includes thirty-seven chapters, each covering a particular theme or area and types of data—predominantly materials that may be classified as e-text type 1, though often immersed in e-text type 2.132 Despite this diversity, e-text type 2 includes materials and methods that are far beyond the current foci in digital humanities, even if it might be argued that all sorts of digital materials would be worth studying from the perspective of the humanities because they are genuine human artefacts.

Although A New Companion to Digital Humanities includes a chapter on digital preservation by William Kilbride, it does not have any chapters on web materials or archived web materials.133 Web materials appear in corpus linguistics, though the web materials that are studied are primarily linguistic. For the broader range of web materials—itself a moving target—a series of web archives have been established, pioneered by the private American Internet Archive, founded in 1996. Since then, national web archives have been established in a number of countries, often under the auspices of their national libraries. These archives utilize a variety of criteria for collecting, preserving, and accessing these materials. Peter Lyman deals with the main issues in “Archiving the World Wide Web,” as does Julien Masanès in the edited collection Web Archiving.134 In 2003, the International Internet Preservation Consortium (IIPC) was established. Besides these general archives, there are an unknown number of targeted web archives. Niels Brügger and Niels Ole Finnemann discuss the distinction between the digital versions of nondigital originals and born-digital materials in The Web and Digital Humanities: Theoretical and Methodological Concerns, providing web materials and archived web materials as examples of born-digital materials that include link instructions, scripts, and interactive sequences, as well as formats that cannot be captured with existing tools.135

Notwithstanding these archiving efforts, web archives are only capable of collecting a very tiny fraction of the global data production. It is not possible to preserve all data, and indeed, much might not be worth preserving. It still makes sense, however, to develop criteria for what should be considered worth preserving, whether for the purposes of creating a cultural heritage, aiding future research, or being employed for commercial, civic, and personal purposes in the future. The notion of “archive” and the value of maintaining archives are contested, but the choice is not between archiving or not; it involves complicated issues about who, what, where, when, and why archives are produced and maintained in a networked culture in which, as Mike Featherstone has stated, “the boundaries between archive and everyday life becomes blurred through digital recording and storage technologies.”136

The range of digital materials includes the fast-growing number of multiple-source knowledge systems for real-time scanning of everything from outer space to the interior of our bodies, and everything in between.137 Some aspects can be seen in the existing analyses of multiple-source knowledge systems, such as Knorr Cetina’s analysis of the international currency trading system.138 This analysis has demonstrated that networked digital media provide the basis for new types of knowledge organization that cannot be sufficiently analyzed within the framework of e-text type 1, because networked digital media are built with intricate time dimensions, which need further analysis.

Literature aiming to create an overview and to characterize various sorts of digital materials can be found within a variety of areas related to, for instance, web archiving, content analysis, social media analysis, media ethnography, and media archeology. More general approaches that discuss the character of data materials and the questions of which data to preserve and how to preserve such data are found in Lisa Gitelman’s edited collection Raw Data Is an Oxymoron, Christine Borgman’s Big Data, Little Data, No Data: Scholarship in the Networked World, Rob Kitchin’s The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences, and Eric T. Meyer and Ralph Schroeder’s Knowledge Machines.139 From an archive perspective, there is also Arjun Sabharwal’s Digital Curation in the Digital Humanities: Preserving and Promoting Archival and Special Collections.140 The complexity of the materials is reproduced in the humanities communication outputs, both within and outside the peer-review domain. This process has led to the establishment of “altmetrics” as a new branch within or emerging from bibliometrics.141

The tensions between the established disciplines in the humanities and the digital humanities are arguably more intense than the earlier tensions surrounding the rather marginalized position of the humanities computing tradition within the humanities at large in the 20th century. Digital humanities now to some extent includes the study of born-digital materials and a growing range of new genres with significance in culture at large. The legitimacy of the classical humanities is questioned. One of the most ambitious attempts to bridge the gaps by identifying the search for principles and patterns as an ongoing, long-term effort is Rens Bod’s A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present.142 Others would argue that there are also deviations, exceptions, unique instances, experiences, and redundancies that call for richer narratives. The nature inhabited by man is still a culture identified by proper nouns and otherwise-named entities. The relation between the search for principles and patterns and narrative is still on the agenda, which means that there is still what Hayles describes as the tension between “the strictness of code and the richness of language.”143 This tension runs throughout the entire history of digitization, from Bar-Hillel’s reflections on polysemy, to Busa’s claim, in 1980, that words are “deeply different from . . . numbers and symbols” because, for instance, of the unique occurrence of metaphors and the multiple diversity of language, to Stephen Ramsay’s attempt to bridge the gap with an explorative, algorithmic criticism.144 Thus the question remains: will further elaborations or explorative compositions of hypertext configurations be capable of delivering a cure?

Alliance of Digital Humanities Organizations (ADHO).

Corpus Linguistic Conference series.

Electronic Literature Organization (ELO).

Functional Requirements for Bibliographic Records: Final Report/IFLA Study Group on the Functional Requirements for Bibliographic Records. (1998).

Hypertext: Yearly ACM Hypertext conferences since 1987.

ACM Sigweb conferences since 1991.

International Internet Preservation Consortium (IIPC).

Internet Archive.

Liu, Alan. The Voice of the Shuttle.

McGann, Jerome J., ed. 2008. The Rossetti Archive, IATH and the NINES consortium.

Roberto Busa, S. J., and associates. Corpus Thomisticum—Index Thomisticus. Web edition by Eduardo Bernot and Enrique Alarcón. English version.

Scott Rettberg, Roderick Coover, Daria Tsoupikova, Arthur Nishimoto. Hearts and Minds: The Interrogations Project.

Text Encoding Initiative Consortium. “TEI: Text Encoding Initiative Guidelines,” 1994–.

Text Encoding Initiative. “TEI: History.”

Text Encoding Initiative Consortium. “P5 Guidelines for Electronic Text Encoding and Interchange.” Version 3.0.0, revision 89ba24e, March 29, 2016.

Further Reading

Bod, Rens. A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present. Oxford: Oxford University Press, 2013.

Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Lawrence Erlbaum, 1991.

Day, Ronald E. Indexing It All: The [Subject] in the Age of Documentation, Information, and Data. Cambridge, MA: MIT Press, 2014.

Drucker, Johanna. Graphesis: Visual Forms of Knowledge Production. Cambridge, MA: Harvard University Press, 2014.

Gitelman, Lisa, ed. Raw Data Is an Oxymoron. Cambridge, MA: MIT Press, 2013.

Hayles, N. Katherine, and Jessica Pressman. Comparative Textual Media: Transforming the Humanities in the Postprint Era. Minneapolis: University of Minnesota Press, 2013.

Huhtamo, Erkki, and Jussi Parikka, eds. Media Archaeology: Approaches, Applications, and Implications. Berkeley: University of California Press, 2011.

Jones, Steven E. The Emergence of the Digital Humanities. London: Routledge, 2014.

Kirschenbaum, Matthew. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: MIT Press, 2008.

Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: SAGE, 2014.

Knorr Cetina, Karin. “Scopic Media and Global Coordination: The Mediatization of Face-to-Face Encounters.” In Mediatization of Communication. Edited by Knut Lundby, 39–62. Handbooks of Communication Science 21. Berlin: Mouton de Gruyter, 2014.

Meyer, Eric T., and Ralph Schroeder. Knowledge Machines: Digital Transformations of the Sciences and Humanities. Cambridge, MA: MIT Press, 2015.

Moretti, Franco. Distant Reading. London: Verso, 2013.

Ramsay, Stephen. Reading Machines: Toward an Algorithmic Criticism. Champaign: University of Illinois Press, 2011.

Rockwell, Geoffrey, and Stéfan Sinclair. Hermeneutica: Computer-Assisted Interpretation in the Humanities. Cambridge, MA: MIT Press, 2016.

Schreibman, Susan, Ray Siemens, and John Unsworth, eds. A New Companion to Digital Humanities. London: Wiley, 2016.

Notes:

(1.) The term “binary alphabet” is used here to denote the two bits, which are often referred to metaphorically as “0” and “1”. These metaphors give the impression that the bits represent numbers and carry a particular semantic value, but they have no such value beyond being distinct from each other. Bit sequences are used in computers to represent numbers, letters, images, and sounds, as well as processing rules. The bits function more like letters in linguistic alphabets than like units in formal languages.

(2.) Roy A. Wisbey, ed., The Computer in Literary and Linguistic Research: Papers from a Cambridge Symposium (Cambridge, U.K.: Cambridge University Press, 1971), vii.

(3.) Perry Willett, “Electronic Texts: Audiences and Purposes: History,” in A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, and John Unsworth (Oxford: Blackwell, 2004), n.p.; Roberto Busa, “Foreword: Perspectives on the Digital Humanities,” in Schreibman, Siemens, and Unsworth, Companion to Digital Humanities; Steven E. Jones, Roberto Busa, S.J., and the Emergence of Humanities Computing: The Priest and the Punched Cards (London: Routledge 2016); Roberto Busa, Index Thomisticus (Stuttgart: Frommann-Holzboog, 1974); and Roberto Busa, “The Annals of Humanities Computing: The Index Thomisticus,” Journal of the Computer and the Humanities 14 (1980): 83–90.

(4.) Sergei Nirenburg, Harold. L. Somers, and Yorick Wilks, eds., Readings in Machine Translation (Cambridge, MA: MIT Press, 2003); Sean Gouglas, et al., “Before the Beginning: The Formation of Humanities Computing as a Discipline in Canada,” Digital Studies/Le champ numérique, February 3, 2013; Yehoshua Bar-Hillel, “The Present Status of Automatic Translation of Languages,” Advances in Computers 1 (1960): 91–163; and Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1969).

(5.) Yehoshua Bar-Hillel, “The Present State of Research on Mechanical Translation,” American Documentation 2.4 (1951): 229–237; Yehoshua Bar-Hillel, “The Present Status”; and John W. Hutchins, “Yehoshua Bar-Hillel: A Philosopher’s Contribution to Machine Translation,” in Early Years in Machine Translation, ed. John W. Hutchins (Amsterdam: John Benjamins, 2000), 299–312.

(6.) Noah Wardrip-Fruin, “Digital Media Archeology: Interpreting Computational Processes,” in Media Archaeology: Approaches, Applications, and Implications, ed. Erkki Huhtamo and Jussi Parikka (Berkeley: University of California Press, 2011), 303.

(7.) Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (New York: W. H. Freeman, 1976).

(8.) Journal of Computers and the Humanities, vols. 1–3 (1966–1969); Wisbey, Computer in Literary and Linguistic Research; Susan Hockey, Electronic Texts in the Humanities: Principles and Practice (Oxford: Oxford University Press, 2000); Willett, “Electronic Texts,” in Schreibman, Siemens, and Unsworth, Companion to Digital Humanities; and Marcus Walsh, “Theories of Text, Editorial Theory, and Textual Criticism,” in The Oxford Companion to the Book, ed. Michael F. Suarez, S.J., and H. R. Woudhuysen (Oxford: Oxford University Press, 2010).

(9.) Rosanne G. Potter, Literary Computing and Literary Criticism: Theoretical and Practical Essays on Theme and Rhetoric (Philadelphia: University of Pennsylvania Press, 1989); Susan Hockey, “The History of Humanities Computing,” in Schreibman, Siemens, and Unsworth, Companion to Digital Humanities, 3–19; John Unsworth, “What Is Humanities Computing and What Is Not?” Jahrbuch für Computerphilologie 4 (2002): n.p.; and Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Champaign: University of Illinois Press, 2011).

(10.) Ramsay, Reading Machines, 3.

(11.) Stuart K. Card, Thomas P. Moran, and Allen Newell, The Psychology of Human-Computer Interaction (Mahwah, NJ: Lawrence Erlbaum, 1983); Lucy Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication (Cambridge, U.K.: Cambridge University Press, 1987); Donald Norman and Steven W. Draper, eds., User Centered System Design: New Perspectives on Human-Computer Interaction (Hillsdale, NJ: Lawrence Erlbaum, 1986); and James L. McClelland and David E. Rumelhart, Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises (Cambridge, MA: MIT Press, 1988).

(12.) Christian Koch, “On the Benefits of Interrelating Computer Science and the Humanities: The Case of Metaphor,” Computing and the Humanities 25 (1991): 289–295.

(13.) Liam Bannon and Zenon Pylyshyn, eds., Perspectives on the Computer Revolution (Norwood, NJ: Ablex, 1989); and Paul Mayer, ed., Computer Media and Communication: A Reader (Oxford: Oxford University Press, 1999).

(14.) Hockey, Electronic Texts. Paul Eggert, “The Book, the E-text, and the ‘Work-site’,” in Text Editing, Print, and the Digital World, ed. Marilyn Deegan and Kathryn Sutherland (Burlington, VT: Ashgate, 2009), 63–82; Ramsay, Reading Machines; Jerome J. McGann and Lisa Samuels, “Deformance and Interpretation,” in Radiant Textuality: Literature after the World Wide Web, by Jerome J. McGann (New York: Palgrave, 2001), 104–137; and Willard McCarty, “Modeling: A Study in Words and Meanings,” in Schreibman, Siemens, and Unsworth, Companion to Digital Humanities.

(15.) Hockey, Electronic Texts.

(16.) The development is described from the humanities computing perspective in Susan Schreibman, “Digital Humanities: Centres and Peripheries,” Historical Social Research/Historische Sozialforschung 37.3 (2012): 46–58.

(17.) Walter W. Greg, “The Rationale of Copy-Text,” Studies in Bibliography 3 (1950–51): 19–37, available online; Walsh, “Theories of Text”; Hockey, Electronic Texts; Allen H. Renear, Elly Mylonas, and David G. Durand, “Refining Our Notion of What Text Really Is” (1993); McGann, Radiant Textuality; Paul Eggert, “Text-Encoding, Theories of the Text, and the ‘Work-Site’,” Literary and Linguistic Computing 20.4 (2005): 425–435; Paul Caton, “On the Term ‘Text’ in Digital Humanities,” Literary and Linguistic Computing 28.2 (2013): 209–220; and G. Thomas Tanselle, “The Editorial Problem of Final Authorial Intention,” Studies in Bibliography 29 (1976): 167–211, available online.

(18.) Jonathan Culler, Structuralist Poetics: Structuralism, Linguistics, and the Study of Literature (Ithaca, NY: Cornell University Press, 1973); Paul Ricoeur, “The Model of the Text: Meaningful Action Considered as a Text,” Social Research 38.3 (1971): 529–562, available online; Paul Ricoeur, “Humanities between Science and Art” (transcript of speech delivered at the Turn of the Millennium Conference, University of Aarhus, Denmark, June 4, 1999); and Walsh, “Theories of Text.”

(19.) Julia Kristeva, “Word, Dialogue, and Novel,” in The Kristeva Reader, ed. Toril Moi (Oxford: Blackwell, 1986), 35–61.

(20.) Charles F. Goldfarb, “A Generalized Approach to Document Markup,” Proceedings of the ACM SIGPLAN SIGOA Symposium on Text Manipulation (New York: Association for Computing Machinery, 1981), 68–73; James H. Coombs, Allen H. Renear, and Steven J. DeRose, “Markup Systems and the Future of Scholarly Text Processing,” Communications of the ACM 30.11 (1987): 933–947; C. Michael Sperberg-McQueen, “Text in the Electronic Age: Textual Study and Text Encoding with Examples from Medieval Texts,” Literary and Linguistic Computing 6.1 (1991): 34–46; Renear, Mylonas, and Durand, “Refining Our Notion of What Text Really Is”; Nancy Ide and Jean Veronis, “Text Encoding Initiative: Background and Context,” Computers and the Humanities 29.1–3 (1995): n.p.; Eggert, “Text-Encoding, Theories of the Text”; Marilyn Deegan and Kathryn Sutherland, eds., Text Editing, Print, and the Digital World (Burlington, VT: Ashgate, 2009); Allen H. Renear, “Out of Praxis: Three (Meta) Theories of Textuality,” in Electronic Text: Investigations in Method and Theory, ed. Kathryn Sutherland (Oxford: Oxford University Press, 1997), 107–126; and Matthew Kirschenbaum, Mechanisms: New Media and the Forensic Imagination (Cambridge, MA: MIT Press, 2008).

(21.) Thomas Haigh, “Remembering the Office of the Future: The Origins of Word Processing and Office Automation,” IEEE Annals of the History of Computing 28.4 (2006): 6–31.

(22.) Goldfarb, “Generalized Approach”; and Coombs, Renear, and DeRose, “Markup Systems.”

(23.) Renear, “Out of Praxis,” 7.

(24.) Text Encoding Initiative, “TEI: History,” November 19, 2014, Text Encoding Initiative Consortium website, Charlottesville, VA.

(25.) Steven J. DeRose, David G. Durand, Elly Mylonas, and Allen H. Renear, “What Is Text, Really?,” Journal of Computing in Higher Education 1.3 (1990): 3–26.

(26.) Renear, Mylonas, and Durand, “Refining Our Notion”; Renear, “Out of Praxis.” The principle was criticized by, among others, C. Michael Sperberg-McQueen, “Textual Criticism and the Text Encoding Initiative,” in The Literary Text in the Digital Age, ed. Richard J. Finneran (Ann Arbor: University of Michigan Press, 1996), 37–61; Claus Huitfeldt, “Multi-Dimensional Texts in a One-Dimensional Medium,” Computers and the Humanities 28 (1995): 235–241; C. Michael Sperberg-McQueen and Claus Huitfeldt, “Concurrent Document Hierarchies in MECS and SGML,” Literary and Linguistic Computing 14.1 (1999): 29–42; Susan Hockey, “History of Humanities Computing”; and Eggert, “Text-Encoding, Theories of the Text.”

(27.) Renear, “Out of Praxis.”

(28.) Sperberg-McQueen, “Text in the Electronic Age”; Huitfeldt, “Multi-Dimensional Texts”; and C. Michael Sperberg-McQueen and Claus Huitfeldt, “GODDAG: A Data Structure for Overlapping Hierarchies,” in DDEP-PODDP 2000, ed. P. King and E. V. Munson, Lecture Notes in Computer Science 2023 (Berlin: Springer, 2004), 139–160.

(29.) DeRose, Durand, Mylonas, and Renear, “What Is Text, Really?,” 15.

(30.) Sperberg-McQueen, “Textual Criticism”; Ide and Veronis, “Text Encoding Initiative: Background and Context”; and Text Encoding Initiative Consortium, “P5: Guidelines for Electronic Text Encoding and Interchange,” version 3.0.0, revision 89ba24e, March 29, 2016.

(31.) Sperberg-McQueen, “Text in the Electronic Age,” 34.

(32.) Sperberg-McQueen, “Text in the Electronic Age,” 40.

(33.) McGann, Radiant Textuality; Dino Buzzetti and Jerome J. McGann, “Critical Editing in a Digital Horizon,” in Electronic Textual Editing, ed. Lou Burnard, Katherine O’Brien O’Keeffe, and John Unsworth (New York: Modern Language Association, 2007), n.p.; and Jerome J. McGann, “Marking Texts of Many Dimensions,” in Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities, 358–376.

(34.) Gérard Genette, Paratexts: Thresholds of Interpretation, trans. Jane E. Lewin (Cambridge, U.K.: Cambridge University Press, 1997); and Johanna Drucker and Jerome J. McGann, “Images as the Text: Pictographs and Pictographic Logic” (research paper for Institute for Advanced Technology in the Humanities, University of Virginia, n.d.).

(35.) McGann, Radiant Textuality, 12. This book contains a number of McGann’s articles from the early to late 1990s.

(36.) McGann, “Rationale of Hypertext,” 56–57.

(37.) Humberto R. Maturana and Francisco J. Varela, Autopoiesis and Cognition: The Realization of the Living (Boston: D. Reidel, 1980); and Humberto R. Maturana and Francisco J. Varela, The Tree of Knowledge: The Biological Roots of Human Understanding (New York: Random House, 1992).

(38.) McGann, “Rationale of Hypertext,” 218.

(39.) McGann, “Rationale of Hypertext,” 69.

(40.) Jerome J. McGann and Lisa Samuels, “Deformance and Interpretation,” in McGann, Radiant Textuality, 105–135; and Ramsay, Reading Machines, 33–38.

(41.) G. Thomas Tanselle, “Critical Editions, Hypertexts, and Genetic Criticism,” Romanic Review 86.3 (1995): 581; and Eggert, “Text-Encoding, Theories of the Text.”

(42.) McGann, “Rationale of Hypertext,” 57; and Jerome McGann, “Imagining What You Don’t Know: The Theoretical Goals of the Rossetti Archive,” in Voice, Text, Hypertext: Emerging Practices in Textual Studies, ed. Raimonda Modiano, Leroy F. Searle, and Peter Shillingsburg (Seattle: University of Washington Press, 2004), 378–400.

(43.) McGann, “Rationale of Hypertext,” 71.

(44.) Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, ser. 2, 42 (1937): 230–265, at 232.

(45.) Among the exceptions are the basic principle of random access; LISP programming, based on simple sequencing of instructions; Ted Nelson’s idea of hypertext, in Theodor Holm Nelson, “Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate,” Proceedings of the 1965 20th National Conference (New York: Association for Computing Machinery, 1965), 84–100; and Douglas Engelbart’s idea of the augmentation of human intellect, in “Augmenting Human Intellect: A Conceptual Framework” (Stanford Research Institute Summary Report AFOSR-3223, Washington, DC, October 1962).

(46.) Jay David Bolter, Writing Space: The Computer, Hypertext, and the History of Writing (Hillsdale, NJ: Lawrence Erlbaum, 1991).

(47.) Bolter, Writing Space, 195.

(48.) Bolter, Writing Space, 198.

(49.) Bolter, Writing Space, 198.

(50.) Bolter, Writing Space, 25.

(51.) Nelson, “Complex Information Processing”; Theodor Holm Nelson, Literary Machines (Sausalito, CA: Mindful Press, 1993); and Belinda Barnet, Memory Machines: The Evolution of Hypertext (London: Anthem, 2013).

(52.) Nelson, Literary Machines, quoted in Barnet, Memory Machines, xxi.

(53.) Nelson, Literary Machines; and Cliff McKnight, Andrew Dillon, and John Richardson, Hypertext in Context (Cambridge, U.K.: Cambridge University Press, 1991).

(54.) Nicole Yankelovich, Norman Meyrowitz, and Andries van Dam, “Reading and Writing the Electronic Book,” Computer 18.10 (1985): 18–19.

(55.) George P. Landow, “Hypertext in Literary Education, Criticism, and Scholarship,” Computers and the Humanities 23.3 (1989): 174. Available online.

(56.) George P. Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology (Baltimore: Johns Hopkins University Press, 1992).

(57.) George P. Landow, ed., Hyper/Text/Theory (Baltimore: Johns Hopkins University Press, 1994), 14.

(58.) Monica C. Schraefel, Les Carr, David De Roure, and Wendy Hall, “You’ve Got Hypertext,” Journal of Digital Information 5.1 (2004): n.p.

(59.) Noah Wardrip-Fruin, “What Hypertext Is,” Proceedings of the Fifteenth Annual ACM Conference on Hypertext and Hypermedia. Hypertext ’04 (New York: Association for Computing Machinery, 2004), 126–127.

(60.) Astrid Ensslin, Canonizing Hypertext: Explorations and Constructions (London: Continuum, 2007); and N. Katherine Hayles, Writing Machines (Cambridge, MA: MIT Press, 2002).

(61.) Michael Joyce, afternoon: a story (Cambridge, MA: Eastgate Systems, 1990); and Stuart Moulthrop, Victory Garden (Cambridge, MA: Eastgate Systems, 1991).

(62.) Nelson, “Complex Information Processing.”

(63.) Hayles, Writing Machines, 27.

(64.) Kirschenbaum, Mechanisms, 43.

(65.) “Narrative as Puzzle!? An Interview with Marie-Laure Ryan,” Dichtung Digital 2.10 (March 2000), ISSN 1617-6901; Marie-Laure Ryan, ed., Cyberspace Textuality: Computer Technology and Literary Theory (Bloomington: Indiana University Press, 1999); and Ensslin, Canonizing Hypertext, 66.

(66.) Janet Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace (New York: Free Press, 1997); and Hayles, Writing Machines, 36.

(67.) Hayles, Writing Machines, 20.

(68.) Nelson, “Complex Information Processing”; Alan Kay and Adele Goldberg, “Personal Dynamic Media,” Computer 10.3 (1977): 31–41; Alan Kay, “Computer Software,” Scientific American 251.3 (1984): 41–47; Yankelovich, Meyrowitz, and van Dam, “Reading and Writing the Electronic Book”; Bolter, Writing Space; and Deena Larsen, Marble Springs, electronic text (Watertown, MA: Eastgate Systems, 1993).

(69.) Hayles, Writing Machines, 20.

(70.) Espen Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore: Johns Hopkins University Press, 1997).

(71.) Aarseth, Cybertext, 19.

(72.) Aarseth, Cybertext, 18.

(73.) McGann, “Rationale of Hypertext”; and Michael Joyce, “Siren Shapes: Exploratory and Constructive Hypertexts,” Academic Computing 3 (1988): 10–14, 37–42.

(74.) McGann, “Rationale of Hypertext.”

(75.) Bolter, Writing Space.

(76.) Electronic Literature Organization, “History.”

(77.) Electronic Literature Organization, “What Is E-Lit?”; N. Katherine Hayles, Electronic Literature: New Horizons for the Literary (Notre Dame, IN: University of Notre Dame Press, 2008), 3; and Michel Hockx, Internet Literature in China (New York: Columbia University Press, 2015), 7.

(78.) Hayles, Electronic Literature, 3.

(79.) A recent virtual reality example is Walter Scott Rettberg, Roderick Coover, Daria Tsoupikova, and Arthur Nishimoto, “Hearts and Minds: The Interrogations Project,” an interactive video project developed at the Electronic Visualization Laboratory, Chicago, 2014, and first presented in the Proceedings of the IEEE VIS 2014 Arts Program (VISAP’14).

(80.) The WWW protocols comprise a protocol for hypertext transmission (HTTP), a markup language (HTML), and a global address system (URL), built on top of the TCP/IP protocols; the system was first proposed by Tim Berners-Lee in 1989. The hypertext-based WWW was not the only possible architecture. Many types of electronic networks were developed in the 1970s and 1980s, some of them proprietary networks delivering services to users on a market basis or as a public service, and in this way limiting the reach of hypertext relations.
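To make the division of labor among the three components named above concrete, the following minimal sketch (in Python, using only the standard library) retrieves a resource by its URL over HTTP(S) and lists the href targets of the HTML anchor elements it contains; the address https://example.org/ is an illustrative assumption, not a source discussed in this article.

# Minimal sketch of the WWW building blocks named in note 80:
# a URL addresses a resource, HTTP transfers it, and HTML markup
# carries the hypertext links. The example URL is an assumption.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collect the href targets of <a> elements, i.e., the outgoing hypertext relations."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


if __name__ == "__main__":
    url = "https://example.org/"          # global address (URL)
    html = urlopen(url).read().decode()   # transfer via HTTP(S)
    collector = LinkCollector()
    collector.feed(html)                  # parse the HTML markup
    print(collector.links)                # the hypertext links found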

(81.) Niels Ole Finnemann, “Hypertext Configurations: Genres in Networked Digital Media,” Journal of the Association for Information Science and Technology 68.4 (2017): 845–854. Specific forms of immersive space and immersive textuality are discussed in Murray, Hamlet on the Holodeck; and Neil Fraistat and Steven E. Jones, “Immersive Textuality: The Editing of Virtual Spaces,” Text 15 (2003): 69–82.

(82.) Jill Walker, “Feral Hypertext: When Hypertext Literature Escapes Control,” in Hypertext ’05: Proceedings of the Sixteenth ACM Conference on Hypertext and Hypermedia (New York: Association for Computing Machinery, 2005), 46–53, at 47.

(83.) Walker, “Feral Hypertext,” 47.

(85.) Maristella Gatto, Web as Corpus: Theory and Practice, Studies in Corpus and Discourse (London: Bloomsbury, 2014), 211–212.

(86.) Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (New York: Riverhead, 2013); Christine Borgman, Big Data, Little Data, No Data: Scholarship in the Networked World (Cambridge, MA: MIT Press, 2015); Lisa Gitelman, ed., Raw Data Is an Oxymoron (Cambridge, MA: MIT Press, 2013); and Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (London: SAGE, 2014).

(87.) Niels Brügger and Niels Ole Finnemann, “The Web and Digital Humanities: Theoretical and Methodological Concerns,” Journal of Broadcasting and Electronic Media 57.1 (2013): 66–80.

(88.) David Crystal, Language and the Internet (Cambridge, U.K.: Cambridge University Press, 2001), 198.

(89.) Naomi S. Baron, Always On: Language in an Online and Mobile World (Oxford: Oxford University Press, 2008), 28–29.

(90.) Journal of Computer-Mediated Communication, 1995–.

(91.) Hockx, Internet Literature in China, 8.

(92.) Hockx, Internet Literature in China, 8.

(93.) Henry Jenkins, Convergence Culture: Where Old and New Media Collide (New York: New York University Press, 2006).

(94.) Finnemann, “Hypertext Configurations.”

(95.) Karin Knorr Cetina, “Scopic Media and Global Coordination: The Mediatization of Face-to-Face Encounters,” in Mediatization of Communication, ed. Knut Lundby, Handbooks of Communication Science 21 (Berlin: Mouton de Gruyter, 2014), 39–62.

(96.) Knorr Cetina, “Scopic Media,” 52; and Niels Ole Finnemann, “Digital Humanities and Networked Digital Media,” MedieKultur 30.57 (2014): 94–114.

(97.) Hayles, Writing Machines, 27.

(98.) Hayles, Writing Machines, 20.

(99.) Lev Manovich, The Language of New Media (Cambridge, MA: MIT Press, 2001).

(100.) Manovich, The Language of New Media, 45–49.

(101.) Manovich, The Language of New Media, 63; Bolter, Writing Space; and Aarseth, Cybertext.

(102.) Manovich, The Language of New Media, 128, 165.

(103.) Manovich, The Language of New Media, 43–70.

(104.) Manovich, The Language of New Media, 56.

(105.) Hayles, Writing Machines, 21.

(106.) Stephen Ramsay, “Hard Constraints: Designing Software in the Digital Humanities,” in Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities; and Jay Bolter, quoted in Barnet, Memory Machines, 20.

(107.) Axel Bruns, Blogs, Wikipedia, Second Life and Beyond: From Production to Produsage (New York: Peter Lang, 2008).

(108.) Steven Roger Fischer, A History of Writing (London: Reaktion, 2001), 3–8.

(109.) Patrick Sahle, Digitale Editionsformen: Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels, Teil 3: Textbegriffe und Recodierung (Norderstedt, Germany: Books on Demand, 2013), 216.

(110.) Sahle, Digitale Editionsformen, 96.

(111.) Hayles, Writing Machines, 32–33; N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999); N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago: University of Chicago Press, 2012); and Johanna Drucker, The Visible Word: Experimental Typography and Modern Art 1909–1923 (Chicago: University of Chicago Press, 1994), 43.

(112.) Diana Coole and Samantha Frost, eds., New Materialisms: Ontology, Agency, and Politics (Durham, NC: Duke University Press, 2010); Sidney J. Shep, “Digital Materiality,” in Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities; and Jay Bolter, “Posthumanism,” in The International Encyclopedia of Communication Theory and Philosophy, ed. Klaus Bruhn Jensen and Robert T. Craig (Blackwell, Wiley Online Library, 2016), 1–8.

(113.) Kirschenbaum, Mechanisms, 9.

(114.) See, for instance, John Durham Peters, Speaking into the Air: A History of the Idea of Communication (Chicago: University of Chicago Press, 1999); and Knut Lundby, ed., Mediatization of Communication, Handbooks of Communication Science 21 (Berlin: Mouton de Gruyter, 2014).

(115.) N. Katherine Hayles and Jessica Pressman, eds., Comparative Textual Media: Transforming the Humanities in the Postprint Era (Minneapolis: University of Minnesota Press, 2013).

(116.) Daniel Miller, ed., Materiality (Durham, NC: Duke University Press, 2005); Will Steffen, Katherine Richardson, Johan Rockström, Sarah E. Cornell, Ingo Fetzer, Elena M. Bennett, Stephen R. Carpenter, et al., “Planetary Boundaries: Guiding Human Development on a Changing Planet,” Science 347.6223 (2015): n.p.; Paul J. Crutzen and Eugene F. Stoermer, “The Anthropocene,” Global Change Newsletter 41 (May 2000): 17–18; and Timothy Clark, Ecocriticism on the Edge: The Anthropocene as a Threshold Concept (London: Bloomsbury, 2015). For a discussion of “nature” as including culture, see also Hans Fink, “Three Sorts of Naturalism,” European Journal of Philosophy 14.2 (2006): 202–221.

(117.) Raimonda Modiano, Leroy F. Searle, and Peter Shillingsburg, eds., Voice, Text, Hypertext: Emerging Practices in Textual Studies; Deegan and Sutherland, Text Editing, Print, and the Digital World; and Peter L. Shillingsburg, From Gutenberg to Google: Electronic Representations of Literary Texts (Cambridge, U.K.: Cambridge University Press, 2006), 12.

(118.) Kenneth Price and Ray Siemens, eds., Literary Studies in a Digital Age: An Evolving Anthology (Modern Language Association Commons, 2013–); and Alan Liu, “From Reading to Social Computing,” in Price and Siemens, Literary Studies in a Digital Age.

(119.) Shannon and Weaver, Mathematical Theory of Communication; Noam Chomsky, Syntactic Structures (The Hague: Mouton, 1957); Herbert A. Simon, The Sciences of the Artificial (Cambridge, MA: MIT Press, 1969); John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985); Paul Churchland, Matter and Consciousness (Cambridge, MA: MIT Press, 1984); Patricia Churchland, Neurophilosophy: Toward a Unified Science of the Mind-Brain (Cambridge, MA: MIT Press, 1986); and McClelland and Rumelhart, Explorations in Parallel Distributed Processing.

(120.) S. Krauwer, The Eurotra Project (Utrecht Institute of Linguistics UiL OTS document list, 2014); and Brian Oakley et al. (Eurotra Evaluation Panel), Final Evaluation of the Results of Eurotra: A Specific Programme concerning the Preparation of the Development of an Operational EUROTRA System for Machine Translation (Diane Publishing Company, 1995). Also available online.

(121.) Alon Halevy, Peter Norvig, and Fernando Pereira, “The Unreasonable Effectiveness of Data,” IEEE Intelligent Systems 24.2 (2009): 8–12.

(122.) Bar-Hillel, “Present Status.”

(123.) Halevy, Norvig, and Pereira, “Unreasonable Effectiveness of Data.”

(125.) Gatto, Web as Corpus.

(126.) Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp, Digital_Humanities (Cambridge, MA: MIT Press, 2012).

(127.) Susan Schreibman, Ray Siemens, and John Unsworth, eds., A Companion to Digital Humanities (Oxford: Blackwell, 2004); Ray Siemens and Susan Schreibman, eds., A Companion to Digital Literary Studies (Oxford: Blackwell, 2008); Susan Schreibman, Ray Siemens, and John Unsworth, eds., A New Companion to Digital Humanities, 2d ed. (Chichester, U.K.: Wiley, 2016); Matthew K. Gold and Lauren F. Klein, eds., Debates in the Digital Humanities 2016 (Minneapolis: University of Minnesota Press, 2016); and Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson, eds., The Johns Hopkins Guide to Digital Media (Baltimore: Johns Hopkins University Press, 2014).

(128.) Steven E. Jones, The Emergence of Digital Humanities (London: Routledge, 2014).

(129.) Alliance of Digital Humanities Organizations. “Big Tent Digital Humanities” was the theme of the Digital Humanities 2011 conference at Stanford University; the term “big tent” is used frequently and is considered the best expression of the state of affairs in contemporary digital humanities. See Melissa Terras, Julianne Nyhan, and Edward Vanhoutte, eds., Defining Digital Humanities: A Reader (London: Routledge, 2016).

(130.) John Unsworth, “What Is Humanities Computing and What Is Not?,” 4; Ramsay, “Hard Constraints,” in Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities, 449–457; and Willard McCarty, “Tree, Turf, Centre, Archipelago; or Wild Acre? Metaphors and Stories for Humanities Computing,” Literary and Linguistic Computing 21.1 (2006): 1–13.

(131.) Domenico Fiormonte, “Towards a Cultural Critique of the Digital Humanities,” Historical Social Research/Historische Sozialforschung 37.3 (2012): 59–76; and Alan Liu, “The History and Future of the Digital Humanities,” in Debates in the Digital Humanities, ed. Matthew K. Gold (Minneapolis: University of Minnesota Press, 2012), 490–509.

(132.) Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities.

(133.) William Kilbride, “Saving the Bits: Digital Humanities Forever?,” in Schreibman, Siemens, and Unsworth, New Companion to Digital Humanities, 408–419.

(134.) Peter Lyman, “Archiving the World Wide Web,” in Building a National Strategy for Preservation: Issues in Digital Media Archiving (report by the Council of Library and Information Resources, Washington, DC, April 2002), 38–51; and Julien Masanès, ed. Web Archiving (Berlin: Springer, 2006).

(135.) Brügger and Finnemann, “Web and Digital Humanities.”

(136.) Mike Featherstone, “Archive,” Theory, Culture and Society 23.2–3 (2006): 591–596.

(137.) Kitchin, Data Revolution.

(138.) Knorr Cetina, “Scopic Media.”

(139.) Gitelman, Raw Data Is an Oxymoron; Borgman, Big Data, Little Data, No Data; Kitchin, Data Revolution; and Eric T. Meyer and Ralph Schroeder, Knowledge Machines: Digital Transformations of the Sciences and Humanities (Cambridge, MA: MIT Press, 2015).

(140.) Arjun Sabharwal, Digital Curation in the Digital Humanities: Preserving and Promoting Archival and Special Collections (Kidlington, U.K.: Chandos, 2015).

(141.) For a founding document, see Altmetrics: A Manifesto. For a brief history, see Mike Thelwall, “A Brief History of Altmetrics,” Research Trends 37 (June 2014): n.p.

(142.) Rens Bod, A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present (Oxford: Oxford University Press, 2013).

(143.) Hayles, Writing Machines, 16.

(144.) Bar-Hillel, “Present Status”; Busa, “Annals of Humanities Computing”; and Ramsay, Reading Machines.