Everyone knows the myths of the mechanical servant, such as the sorcerer's apprentice, Frankenstein's monster, or the golem. These servants are stronger than any human being, but it is difficult to give them the right instructions; they do too much or too little, and may even turn against their masters. The myths symbolise two relations that have always been somewhat uneasy and suspicious: between master and servant, and between humanity and machines.
When literary critics first began to use the computer, about forty years ago, they believed that they had found a friendly golem to help them with their tasks. The golem was a man of clay, made by a learned rabbi and brought to life by putting into his mouth a piece of paper with a sacred text written on it. He would then be a faithful worker and a defender of the Jews against persecution; when he was no longer needed, the rabbi took the paper from the golem's mouth and he was again just a clay figure. Critics had a similar idea: that the computer would awake, perform its work, and then go back to sleep. With its help, though, critics would progress beyond mere impressionism. The computer would deliver firm evidence for any assertions made about the literary work, and be the most powerful of readers, discovering in literary texts patterns of meaning that the weaker eyes of the critic had failed to discern.
Now, looking back over several decades of work in this tradition, the prevailing judgement is one of disappointment. In the 1970s, philosophers and critics argued convincingly that the computer would never learn to read like a human being, or produce interpretations of texts. More recently, Mark Olsen has pronounced the general failure of computer-aided literary studies to have a significant impact on the research community as a whole. It seems true that, after some early successes in such areas as concordance-building or authorship attribution, computer-aided criticism has not been able to make steady progress: it has climbed some mountains, but shows no promise of getting to the moon.
Even the most sophisticated programs for textual analysis, using automated parsing and artificial intelligence, do not seem able to cope with the complexities of literary works. An AI program may combine knowledge of a restricted domain with scripts for the typical sequences that occur in the domain; Roger Schank's restaurant script is a well-known example. But it seems to be constitutive of literary narratives that they occupy a space where different domains overlap and create effects - such as irony, humour or overdetermination - that cannot be specified by the computer. Let us take the story of the two Finns who go to a bar. After an hour of silence, one Finn says: "Well, how are you?" After another hour has passed, the second Finn says: "Are we here to drink, or to talk?" Gregory Ulmer observes that such stories "foreground the deviation, but within the context of the normal expectation." The story presents "two sides of the same operation - logic and joke - and . . . it is difficult to find one's way without the constant interplay between the two." An AI program might understand what people do when they go to a bar, but other domains are relevant here without being formally specified: differences between Finns and other nationalities, between male and female friendship, between being drunk and being sober, and so on. If all this is active in a story of thirty words, there would seem to be little chance of formalising the interplay between narrative and context in a work the size of a novel.
But there are further reasons for the eclipse of computer-aided literary studies. Critics of this school might still argue that they will continue to make progress towards greater authority and refinement in the handling of literary evidence. Unfortunately for them, the very idea of progress in literary criticism has fallen under suspicion, as a misplaced ambition of becoming scientific and thus a last gasp of modernism. The vogue for literary structuralism did not pass because of any failure to identify complex structures in literary works, for the structures were easily produced by structuralist methods themselves. Structuralism died because critics became tired of doing it; they simply decided to move on, and to leave behind them scorched earth that would discourage any successors from occupying the territory. The project of constructing a comprehensive poetics was abandoned as "a little scientific delirium," in Barthes' phrase, and criticism became a series of interpretations, without goal and without end.
Paradoxically, if literary criticism has become nothing more than a fashion system regulating interpretation, then the computer can still find a place for itself - provided it conforms to fashion. It will no longer be necessary for scholars to sit at the keyboard, writing programs that classify linguistic features in a text. Rather, the computer offers a symbolic universe and a terminology that can help to satisfy criticism's constant hunger for new discourses to appropriate. These discourses may bring with them an aura of scientific prestige, but within criticism their use is no more than heuristic, or merely ludic.
In the recent era, the first great example of such an appropriated discourse was the binarism of structural linguistics, which provided the language of Lévi-Strauss's structural anthropology and, later, literary structuralism. That Saussure was not a literary critic was, if anything, an advantage: the critic could take a principle like "there are only differences without positive terms" and apply it to literature without any restraint. What was in linguistics part of a rigorous system became, in literary criticism, a highly productive metonymic chain.
In the brief golden age of literary structuralism, around 1965-1970, critics seemed to have acquired true precision and authority from their adaptation of an external discourse. But postmodernism soon discredited the grand narratives, whether structuralist, Freudian, or Marxist; and criticism was again enclosed in its own hermeneutic circle. Further, the collapse of structuralism led to a distrust of all claims to privileged interpretation. Now, interpretation proceeds by any means necessary, within multiple and irreconcilable systems, and subject to more and more rapid obsolescence. External discourses are still in demand, but no longer as sources of validation for criticism; rather, they are worn, as ethnic or period styles are incorporated into metropolitan fashions in clothing. The distinction (and hierarchy) between original literary texts and commentary also tends to dissolve. Discourses mingle promiscuously; legal or historical texts are turned into literature, and vice versa; and all the human sciences seem to be engaged now in the weaving of one gigantic text of the world.
Contemporary criticism thus bases itself on the merging of texts, the increased size of the unit of literary analysis, and on intertextuality as a system of relations between contiguous blocks of discourse. This widening of the literary field corresponds to a shift in key metaphors: as "structure" declined, it was replaced by "network", a locus of connections without hierarchy or centre. "The work is never monumental," Barthes said; "it is a proposition that everyone fills out as they like, as they can: I give you semantic material to run about in, like a ferret." This shift was already under way by 1970, and preceded a corresponding shift in the domain of computing: the rise of text-based computing to parity with numerical computing, and the merging of computers as individual processing units into a wider domain, that of the network. We now have in sight a universally inclusive network that will be the infrastructure for a truly global consciousness, transcending national and spatial boundaries. At the same time as the Net extends itself as a universal information utility, it moves towards convergence with the mass media and telecommunications and, beyond that, to a totalising harmonisation of systems in the future information society.
In trying to predict the effects of these new technologies, we naturally look to the closest historical analogy, the invasion of medieval manuscript culture by the printing press. However, Alvin Kernan argues that it took more than two hundred years for a true print culture to establish itself in the literary world. Like the computer, printing was conceived as a mechanical invention; only gradually did it extend its influence as the first of the great modern technologies of the mind, which both modify and augment our capacities of memory, analysis and communication. Later technologies of this kind include telecommunications, the mass media, and now computing. Their most significant quality is their production of second-order effects and unforeseen consequences - in contrast to mechanical or industrial technologies that are much more closely linked to increases in material productivity. It was easy to see that printing would make books much cheaper; but no one could have foreseen, in detail, printing's contribution to the Reformation or the rise of bourgeois individualism and representative democracy. Similarly, if one enquired into the effects of printing on literary criticism, one could easily point to immediate influences on the development of indexes, libraries, or systems of annotation and cross-referencing. Much more important in the long run, though, were changes we can ascribe to the sheer multiplication of texts (quantity evolving into quality), and to the metaphors that arose from the idea of mechanical reproduction. And most important of all, though also most distant from the invention itself, were the new kinds of writing that eventually emerged - above all, the rise of the novel.
With the computer, similarly, the first impression that it made on the literary scene was one of immediate technical efficacy: this was the age of the golem, in the 1950s, when scholars put the new machine to work at making concordances and word-counts. But with every decade that passes, and especially since the arrival of the microcomputer some fifteen years ago, the effects of computing have become steadily more diffuse and, by the same token, more pervasive. It has become evident that the computer's performance of any set task is much less significant than its creation of a new kind of mental environment.
These environments make available new metaphors and modes of communication that have gradually infiltrated the domain of literary criticism. We observed earlier how binarism entered the critical sphere from structural linguistics; later, some structuralists tried to work directly with the binary notation used in computing to represent data at the level of the machine. But our argument has been that there can be no fixed rules about the place of such metaphors in critical practice. The classic metaphors have not been systematic, but images that proved fruitful for a particular mode of criticism: Plato's allegory of the cave, Aristotle's katharsis, Baudelaire's forest of symbols, the well-wrought urn of the American New Criticism, and so on. Such metaphors stimulated invention and connected criticism to the spirit of its age; to claim anything more than this is to embrace the fallacy that literature can be subordinated to some external master-discourse.
If criticism cannot be reduced to any such sub-system, metaphors drawn from computing have no privileged validity as descriptions of literary effects, or as markers of progress towards an authoritative science of literature. Such claims are only mystifications, strategically deployed by those striving to prevail in the merciless fashion system of contemporary criticism. Thirty or so years ago, computer critics might have claimed that they were deploying a powerful new analytic tool, one that would eventually sweep away impressionistic criticism that rested on no substantial body of evidence. Such claims have long faded. The computer is no longer a tool in the instrumental sense, but rather the creator of a cybernetic milieu of interaction between man and machine, from which we borrow metaphors as casually as a peasant does from the care of his flocks.
An important region of this milieu is that of hypertext, with its terminology of blocks, links, and frames. George Landow has argued that hypertext represents a literal implementation of concepts like intertextuality, fragmentation and decentering in such critics as Derrida, Todorov, Kristeva, Barthes and Foucault. However, the homology between the two realms is a product of separate development. The dispersal and decentering of the text by structuralist and post-structuralist critics belongs to the 1960s and 1970s; implementation of hypertext concepts on computers began in the early 1960s, though it was inspired by Vannevar Bush's Memex proposal of 1945. But structuralist literary critics made little direct use of the computer, while hypertext pioneers like Andries van Dam and Ted Nelson worked in the computer science milieu and owed nothing to Barthes, Derrida, and the rest.
French criticism from the early 1960s broke up long-established ideas about the boundaries of the literary work, the status of the author, linear narrative, etc. At about the same time, hypertext programs were being developed on mainframe computers, primarily as document-handling systems for very large manuals and archives. These two groups were thus working in opposite directions: the French critics were effecting the disintegration of book-length literary texts, while the hypertext programmers were working on the integration of short documents into a complex structure of blocks and links.
When hypertext began to be applied to literary studies at Brown University from the late 1970s, its users continued van Dam's emphasis on knitting documents together into a large resource: primary literary texts were linked to annotations and related blocks of text or graphics from other disciplines. Only for about the past seven years have critics started to build a direct connection between French literary theory and practical implementations of hypertext. This is a promising alliance, though already it is pursuing two quite different directions. One is the adoption of hypertext structures and technical vocabulary in literary analysis; this is consistent with the successive invasions of criticism by external discourses, as we discussed above. The other direction is the development of a new genre of writing, composed in hypertext or hypermedia rather than in traditional print form. Here the computer makes possible a new relationship between writers and readers, mediated by programs like Storyspace or HyperCard. From the side of authors, the computer becomes an instrument on which they both compose and perform their work; from the side of readers, each time they access the work they choose links that create a single actualised experience out of innumerable other potential ones.
It is not surprising that the first phase of work on hypertext fiction has focused on the issue of narrative structure. At its simplest level, narrative is a sequential disposition of elements from left to right, these elements ranging in size from individual letters to episodes and chapters. The spatial presentation of these elements will correlate in some way (though not always directly) with a temporal sequence. Fiction in hypertext format has typically used blocks of self-contained narrative units varying between a sentence and a paragraph in length. Instead of such blocks flowing steadily down the page, as they would in a printed text, a hypertext narrative can offer an infinity of paths through the structure of blocks and links provided by the author.
Hypertext can thus create a kind of zero-degree narrative, entirely liberated from the sequential imperatives of a standard plot. But does this achieve any more than those books of the 1960s that were issued unbound, so that the pages could be shuffled and read in any order? And if the work is delivered as a labyrinth of possibilities, where is the object of criticism to be situated? In his valuable discussion of hypertext fiction, George Landow observes: "Since some narratologists claim that morality ultimately depends upon the unity and coherence of a fixed linear text, one wonders if hypertext can convey morality in any significant form or if it is condemned to an essential triviality."
The value of hypertext fiction may turn out to lie in other capacities than the disruption of narrative. One is in the opportunity of combining narrative and spatial structures in new ways. Robert Coover notes that there is "a tension in narrative, as in life, between the sensation of time as a linear experience, one thing following sequentially (causally or not) upon another, and time as a patterning of interrelated experiences reflected upon as though it had a geography and could be mapped". Because hypertext has an extra dimension, compared to the printed page, it can make explicit those spatial forms that the reader is vaguely conscious of when reading a novel. This extra dimension may also be used to make the printed text into a libretto, accompanied at suitable points by musical or visual elements. Finally, hypertext offers possibilities for collaboration that altogether explode the idea of the work as object, and turn it into an environment. The Multi-User Dungeons (MUDs) that are now popular on computer networks are gigantic narrative fields where hundreds of players can assume characters and construct episodes; a MUD is thus inhabited rather than read, and the only possible criticism of it would be to enter this imaginative space and try to modify it.
One other aspect of computerised textuality may be a useful model for criticism: the window within which text appears on screen. This window corresponds to the page of a printed book; but where the page has only a horizontal dimension of intertextuality, the window also has behind it a universe of virtual textuality that can be pulled up into view by various activities of search, selection or re-formatting. Derrida's maxim of il n'y a pas de hors-texte ["there is no outside-the-text", with a pun on hors-texte, the printer's term for plates bound into a book] implies a horizontal plenitude, with texts extending to the horizon and beyond. The management of text on the computer implies an under-text that creates an interesting potential for layering, folding or knotting one plane of text with another. The text in a window can be linked horizontally on screen to blocks of text in other windows, or it can be threaded up through the window from the virtual textbase behind it.
Our assumption here is that literary critics have profoundly internalised, over the past five hundred years, the printed book as a neutral and unproblematic container for texts. The existence of the book as a stable and easily replicable physical object has supported the various critical tendencies that have constituted the literary text as an aesthetic object with a more or less fixed internal structure. The electronic book of the future will not be delivered to the reader as a discrete object containing the work and nothing but the work. We cannot predict the precise consequences of the shift from a printed page to a programmable window, but we already know that the styles and institutions of the various criticisms - of visual art, music, film, etc. - vary according to the media in which their objects are delivered. Within universities, for example, a department of art history is likely to have a different culture from a department of literature. So the shift from a fixed literary text to a virtual one, with many new possibilities for manipulation, is sure to modify critical practice. We can expect the hypertext revolution, which transforms the textual object after an extraordinarily long period of stability, to create a corresponding revolution in critical approaches to the objects that are now emerging.
One could view the transmission of texts over networks as simply an extension of hypertext blocks and links into a wider area of exchange. It may not matter to the individual critic whether the text appearing on his screen is called up from local storage, or transmitted over the network from a central archive at the Bibliothèque de France. Nonetheless, such institutions as the Net and the Bibliothèque de France do much more than simply deliver texts on demand. They are also collective and political projects of great significance, as expressions of national will in the information age - equivalent to the canals and railways of the industrial revolution.
In a literary perspective the Net, together with its digitised archives, extends the virtual resource of text held in local storage into a fathomless and unified mega-text, a book of the world that can be read and written by all. This global connectivity can be seen as a revival of certain features of pre-literate cultures. The death of the author, as proclaimed by Barthes and Foucault, dissolved the individual literary work into various larger and more impersonal fields of discourse. But the printed book - as a compact physical object, in some irreducible sense the product or possession of its author - was not quite so easily dissolved as Barthes or Foucault might have assumed at the time. It was only some twenty years later that the widespread use of the Net for the transmission of texts began to create a practical implementation of these critics' vision. The network makes available a seething mass of texts, often fragmentary, separated from their authors, or arriving at the user's terminal unformatted. Such texts do not arrive as a discrete series of works, but rather as the scraps of ephemeral chatter that one picks up in a crowded room. We are dealing here with an electronic orality that contrasts with the much more focused encounter between a single book and a solitary reader.
The proposed reading station for the Bibliothèque de France will interpose a third party between text and reader, in the form of a powerful front end that filters delivery of texts from the archive. The reading station will thus support all kinds of sorting and searching operations that will adapt the text appearing on the screen to the particular task undertaken by the critic. Further, the critic no longer encounters a mere collection of books, but a corpus that can be grasped - for some analytic purposes, at least - as a single whole. Mark Olsen has argued that computer-aided literary studies should direct themselves towards such large corpuses, in order to draw conclusions about phenomena of longue durée and mentalités, rather than about individual works or authors. In this way, the idea of the computer as golem - the assistant who performs faithfully a set task - would be replaced by the idea of the agent: a program that rummages through the corpus in search of information of special interest to its trainer. The agent would display initiative in the sense of bringing back material that its master would not otherwise have known about.
The Net may have first interested literary critics as an instantiation of poststructuralist principles of decentering or de-personalisation; but now it may be of more significance as a powerful part of the knowledge infrastructure. Peter Brooks has attributed the rise of New Historicist criticism in the U.S. to the return of historical contextualization, which placed literature in networks of other discourses and other deployments of social power. Brooks is here using "networks" metaphorically; but critics are becoming steadily more aware of computer networks as contextualising machines, weaving the entire universe of knowledge into a single field. Further, we cannot miss the importance of such networks as both instruments and expressions of power, exerted in such elements as databases, dossiers, and personal communications. Foucault first aroused interest in nineteenth-century technologies of surveillance such as the panopticon; but the computerised surveillance of today is infinitely more pervasive as an instrument of control.
We have made a survey of computer-aided criticism, computer-aided writing (such as hypertextual narratives), and of networks as a new kind of textual space. But we should also speak briefly about a new object of criticism: the representation of computers in literature. The standard literary genres are typically slow to adopt new subject matter, and few works of high literature have explored the potential of the world of computing. But this deficiency has been made good in the populist genre of science fiction, and especially in the cyberpunk school comprising such authors as Gibson, Sterling, Vinge and Stephenson. Their books have become an important reference point for theorists of postmodernism; they present a dystopian near future for western capitalism, and show how technology might bring about the death of the humanist subject. Cyberpunk also suggests the death of the psychological novel. Complexity is transferred from the human subject to information systems in cyberspace, controlled by transnational corporations; and personal history is dissolved into the popular mythologies of postindustrial (and, to a large degree, postliterate) society. This literature of paranoia and conspiracy is, for Fredric Jameson, "a degraded attempt - through the figuration of advanced technology - to think the impossible totality of the contemporary world system."
The pessimism of cyberpunk fiction contrasts with the earlier era of the computer as golem, a powerful and submissive machine that would aggrandise its master. Now we face an impersonal realm of systems and networks, whose rapidly growing digital archives seem to overshadow any individual will. Criticism in the 1960s and 1970s certainly reflected a shift towards impersonal systems, but it made this shift within a linguistic and rhetorical model derived from Nietzsche, Saussure and Heidegger. Although the technological and economic foundations of the information society were being laid at that time also, literary critics did not pay much attention to what was growing up around them. Now, the microcomputer and the exponential growth of network resources cannot be ignored; rather, they must be assigned a place in literary culture. The special issue on "Future Libraries" of the New Historicist journal Representations is an important landmark of this recognition. We can observe the emergence of a new cybercriticism based on the universal mediation of writing by the computer, and the replacement of the fixity of print by virtual texts. This is not to claim that cybercriticism will dictate the entire mode and substance of literary criticism. But just as the problematic of the language system became an implicit context for all criticism after 1970, so may the digitisation of texts and the globalisation of networks be the air that criticism breathes by the year 2000.