In her book Electronic Literature: New Horizons for the Literary, Katherine Hayles defines “electronic literature” as “work with an important literary aspect that takes advantage of the capabilities and contexts provided by the stand-alone or networked computer” (3). In the classic sense, literature refers exclusively to the written medium; “e-lit,” by contrast, has many distinct characteristics that validate its existence as a separate medium. It affords certain functions that augment the reading experience in a unique way: compared with popular media such as film, video games, or print, e-lit bridges the gap between text and multimedia most effectively. In other words, its balance between writing and audiovisual content is the most even. In addition, e-lit can draw on a wider variety of tools that enable authors to create a very specific experience for their audience (more so than static illustrations alone). These artistic media elements can be merely supplementary or integral components of a work, and both cases can be seen in Eric LeMay’s “Losing the Lottery” and Mark Marino’s “Living Will.”
LeMay’s “Losing the Lottery” features an interesting media element that supplements its writing. The work first brings the reader to a lottery mini-game: the screen shows rapidly moving, numbered lottery balls, six of which the reader must choose to continue. The reader is then shown a two-column layout with a lottery simulator on the right side and a collection of 49 pages on the left. Each of the 49 pages holds a short paragraph, quote, or absurd statistic related to the unlikelihood of winning the lottery, interspersed with personal anecdotes and thoughts from the author. The lottery simulator takes the six numbers chosen by the reader and cycles through randomly generated lottery sequences, with headers that show the number of times the reader has won, the degree to which the numbers match, and the time and money spent/earned playing the lottery.
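The simulator’s mechanics, as described above, amount to a simple loop. The following Python sketch is a hypothetical reconstruction of that logic, not LeMay’s actual code; the 6-of-49 format, the function name, and the ticket price and jackpot figures are all assumptions made for illustration.

```python
import random

def run_lottery(player_numbers, draws, ticket_price=1.0, jackpot=5_000_000.0):
    """Simulate repeated 6-of-49 lottery draws against a fixed set of
    player numbers, tallying wins, best match, and money spent/earned."""
    player = set(player_numbers)
    wins, best_match, spent, earned = 0, 0, 0.0, 0.0
    for _ in range(draws):
        draw = set(random.sample(range(1, 50), 6))  # six unique balls, 1-49
        matches = len(player & draw)
        best_match = max(best_match, matches)
        spent += ticket_price
        if matches == 6:  # for simplicity, only a full match pays out
            wins += 1
            earned += jackpot
    return {"wins": wins, "best_match": best_match,
            "spent": spent, "earned": earned}

result = run_lottery([3, 11, 19, 27, 35, 43], draws=10_000)
```

Even over ten thousand simulated draws, `wins` will almost always be zero, since a 6-of-49 jackpot has odds of roughly 1 in 14 million; that gap between money spent and money earned is precisely the “progress” the reader watches accumulate.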
This media element is supplemental to the text, but it definitely enhances the simple message being conveyed. The combination of short excerpts and the minuscule winnings shown by the simulator demonstrates to the reader, on a deeper level, just how futile playing the lottery is. The simulator serves as a personalized, firsthand experience, quietly running alongside the text. It allows the reader to glance over to the right and view his or her “progress” while simultaneously taking in information about how difficult it is to win. Although the message of this piece is simple, it is a very clear demonstration of how such a media element can effectively reinforce the ideas of a text. The interactive simulator is a far more interesting way of visualizing an idea than merely showing aggregate data in a graph or table. While the simulation might not be essential, the work would be far less interesting without it.
Media elements can also play an integral role in the consumption of an e-lit work; Mark Marino’s “Living Will” is a great example. This piece functions as a highly interactive click-and-scroll story that lets the reader choose, on a whim, what happens next. As the reader scrolls through and reads the will, different parts of the text become clickable, and depending on what the reader selects, the document alters itself instantly. The left-hand side of the page holds a box that explains who the reader is and what he or she is reading, while the right side features a simulator (much like LeMay’s) that runs as the reader moves through the document, tallying up the inheritance (bequests, fees, taxes, etc.). The simulator also offers multiple points of view that let the reader see how much money the work’s different characters have earned as a result of the reader’s path through the document. This media element is very immersive and provides a rich storytelling experience. In a way, the piece feels like a role-playing video game (RPG), in that the decisions the reader makes alter the course of the story. However, each of the various permutations of the path follows a parent storyline, implying that all arcs eventually lead to the same conclusion.
This type of e-lit piece shows the powerful applications the medium can have for fictional storytelling. It is hard to compare it to film or video because it is composed mostly of words rather than moving images; however, it is a dynamic experience that could more aptly be described as a video game of sorts (maybe not the most entertaining or colorful, but certainly interactive).
One flaw that e-lit pieces currently cannot rectify is accessibility. While they can serve as very interesting and immersive ways of consuming literature, one must have a computational device (a smartphone, computer, or tablet) to experience them, which restricts access for many people. Also, the full experience can vary depending on which device is used: personally, I would not enjoy navigating “Living Will” on a small smartphone screen rather than a regular laptop screen. These are all considerations the author must take into account when producing a work. Furthermore, while this is not necessarily a limitation of the medium, it is difficult to reproduce such a work and present it in other media, as the author has purposefully designed it with a specific representation in mind. A knowledge of Flash, Java, or HTML coding would be quite useful in attempting to do so.
While “Losing the Lottery” and “Living Will” are quite in-depth, they do not showcase the entire range the e-lit medium possesses. Other e-lit works, such as Robert Kendall’s “Candles for a Street Corner” or Campbell and Jhave’s “Zone,” feature much more audiovisual content than text. In this way, they are more similar to visual-heavy media such as graphic novels or film: there is an emphasis on what is seen in order to communicate certain emotions and feelings more effectively than text alone. However, neither element can exist without the other in e-lit pieces; without writing to justify the media element, it is enormously difficult (and often unsatisfying) to navigate these elements without direction or apparent purpose. This is one of the reasons the e-lit medium has such great potential as a means of telling a story or communicating information: the tools in its arsenal for relaying a multidimensional experience far outnumber those that books can employ, helping the reader understand works on a significantly more personal and profound level.
Campbell, Andy, and Jhave. “Dreaming Methods : Zone.” Dreaming Methods : Zone. Dreaming Methods, 2013. Web. Nov.-Dec. 2013. <http://labs.dreamingmethods.com/zone/>.
Hayles, Katherine. Electronic Literature: New Horizons for the Literary. Notre Dame, IN: University of Notre Dame, 2008. Print.
Kendall, Robert, and Michele D’Auria Studio. “Candles for a Street Corner.” Candles for a Street Corner. Michele D’Auria Studio, July 2004. Web. Nov. 2013. <http://www.bornmagazine.org/projects/candles/>.
LeMay, Eric. “Losing the Lottery.” DIAGRAM. N.p., n.d. Web. 4 Dec. 2013. <http://thediagram.com/11_5/lemay.html>.
Marino, Mark. “Living Will.” Living Will. Markcmarino.com, 2010. Web. Nov. 2013. <http://markcmarino.com/tales/livingwill.html>.
Is literature data? Yes and no.
From a layman’s perspective, no. The fact is that nobody except critics of the digital humanities has ever seriously considered this question. We were all brought up believing that every piece of literature we read, whether fiction, non-fiction, poetry, or prose, is an artistic work that reflects the author’s intentions, aspirations, and quite possibly hidden philosophical ideas. To consider a piece of literature a mere accumulation of written language symbols, alphabetical letters, individual sound waves, or paint strokes seems absurd to many of us, and fairly so. Literature is, in a sense, not data, because analyzing this art by purely scientific means strips away its most significant aspects: the author’s artistic and creative elements. These factors simply cannot be “measured” and summarized the way computer algorithms measure and summarize quantities. In fact, a specific literary work might garner thousands upon thousands of different interpretations, each with its own unique analytic aspects, whereas data, with its ultra-clear structure and quantitative properties, might yield only one result. Stephen Marche, in his critique “Literature Is Not Data,” even goes as far as to claim that “the story of literature [regarded as] data is a series of…failures.” From this perspective, therefore, literature cannot stringently be considered data.
We also, however, have to accept that literary works, though fraught with myriad interpretations, are at the most basic level still constructions of numerous individual elements that ultimately come together, and that those elements can be analyzed one by one to find principles and relationships in a piece. Franco Moretti advocated the notion of “distant reading,” by which we should interpret written literature not by studying specific texts, but “by aggregating and analyzing massive amounts of data.” This may seem a radical proposal, but it does have some meaning attached to it and is a valuable exercise to engage in. Literary works have long been segmented into parts by their many meanings: genres, chapters, settings, plot details, persons (protagonist, antagonist, etc.), archetypes, symbols, and so on. Furthermore, as books and the like appear ever more prolifically around us, it becomes increasingly harder to read and analyze all written text. Data-aggregation software and websites such as Google Ngram and Understanding Shakespeare have opened up opportunities for people to achieve their research objectives without having to skim through piles of books. With these new abilities, the digital humanities and its many revolutionary methods (i.e., distant reading) have practically “augmented” scholarship and reality alike by creating new paths for research, interpretation, and exploration for both classical and newly emerging literary works.
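At bottom, Ngram-style aggregation is word counting over a corpus. A minimal Python sketch of that basic operation follows; the two-line corpus is invented purely for illustration, and real distant-reading tools of course operate over millions of texts rather than two.

```python
from collections import Counter

def term_frequencies(corpus):
    """Aggregate word counts across a corpus of texts: the basic
    counting operation behind Ngram-style distant reading."""
    counts = Counter()
    for text in corpus:
        counts.update(text.lower().split())
    return counts

corpus = [
    "Call me Ishmael",
    "call me anything but late",
]
print(term_frequencies(corpus).most_common(2))  # → [('call', 2), ('me', 2)]
```

Scaled up from two sentences to two centuries of books, and plotted over publication year, this is essentially what lets a researcher trace the rise and fall of a word without reading a single volume.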
 Marche, Stephen. “Literature Is Not Data: Against Digital Humanities.” http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/. Accessed Oct. 2, 2013.
 Google Ngram Viewer, Google Inc., http://books.google.com/ngrams. Accessed Oct. 2, 2013.
 Understanding Shakespeare, http://www.understanding-shakespeare.com/. Accessed Oct. 2, 2013.
Generally, whether or not literature is data depends on your definition of data. If one classifies data simply as information that can be quantified or analyzed in some way, then literature absolutely fits that definition. Data is not just scientific observations, mathematical figures, or sets of graphs; media can be considered data as well. Music, literature, even paintings: one can perform all sorts of analyses on these works to generate data, both quantitative and qualitative. Marche’s article refers to the analysis of literature as data as “distant reading.” While he argues that this approach ruins the reading experience as we know it, I believe it is instead a different, valuable sub-discipline of literature. Distant reading, or macroanalysis, allows one to develop a multidimensional understanding of a work. Its context in a larger literary ecosystem (period in time, cultural significance, etc.) can be understood by treating the book on a more holistic level. One can understand writing styles, forms, and conventions by looking at literature objectively; temporarily setting aside subjective plot or thematic analyses and examining the mechanical details of literature opens it up to an entirely different type of scholarship, namely the digital humanities. This additional perspective on the same work should be welcomed and valued. The projects studied in the course improve the quality of literary scholarship; they are tools we can use to gain another perspective beyond the scope of unassisted brainpower alone. Especially with larger volumes, tools for distant reading can almost instantly compile word patterns, trends, and more, and present them in ways that facilitate our digestion of the information. In this sense, these projects augment reality. They give us “superpowers” of analysis. They allow us to access an entire history of literature and academia instantly, which would otherwise be impossible.
The most obvious value in using digital tools to analyze literature as data is that it allows us to handle large volumes of information much more easily and efficiently.
Marche, Stephen. “Literature is not Data: Against Digital Humanities.” Los Angeles Review of Books. 28 Oct 2012: n. page. Web. 2 Oct. 2013.
The argument over whether literature is data depends upon the definition of data. Data can be viewed with a negative connotation, as something that removes the artistic and creative elements and turns a work into a merely quantitative subject. It can also be viewed simply as a form of information, from which we can establish interpretations and analyses that we can learn from. In his article “Literature Is Not Data: Against Digital Humanities,” Stephen Marche makes several bold statements, claiming that the introduction of digital books has brought about the “…end of the book as we know it.” However, digitizing literature offers us an additional medium through which literature can be experienced, analyzed, and interpreted in different ways. It is not the end of the book as we know it, but rather the expansion of the book as we know it. Literature has always been taken apart: quotes, syntax, characters, plots, symbols, themes, and more have been discussed, interpreted, debated, and given meaning since their origin. Digitizing books doesn’t put an end to this kind of thinking, but rather provides tools that allow us to go even further in depth. Digital tools, such as Google Ngram, allow us to compare thousands of works of literature in seconds. Computer algorithms let us delve, in as little as a few seconds, into details that would otherwise take years of collecting and studying to analyze.
Literature has always been data: we have always learned from it and used it as a tool to examine elements of writing, the human psyche, cultural reflections, and more. Just because there are new means of examining this data doesn’t mean the old form is nonexistent. The book as we know it today still exists, and personally I prefer reading a hard copy. I can choose not to engage with the digital tools being developed and experience books in the more nostalgic form. However, the fact that those tools exist gives me the opportunity to expand my knowledge and understanding of a book, if I choose. Digital tools open the window for books to be examined on a large scale, going in depth with details such as word choice while covering a wide sample, which could range from several works to an entire era’s worth of literature.
Marche, Stephen. “Literature is not Data: Against Digital Humanities.” Los Angeles Review of Books. 28 Oct 2012: n. page. Web. 2 Oct. 2013.
The digital humanities project “The Future of the Past” is a unique use of digital humanities methods. When scrutinized under the figurative lens of Shannon Mattern’s criteria for evaluating multimodal work, one can understand why it was awarded “Best use of digital humanities project for fun” by the expert consortium administering the annual DH Awards: it is fun, but it is perhaps not entirely effective. To simplify her criteria, we made our own rubric with which to analyze it.
For readers unfamiliar with the project, a brief overview can be found on the website by clicking “the story…” on the right side of the homepage. This explains how the author, Tim Sherratt, turned 10,000 newspaper articles into a digital humanities project. His aim was to archive every Australian news article from the 19th and 20th centuries that contained the phrase “the future” and to create a site for exploring how the future was perceived in the past. Sherratt’s website includes evidence of research, links to his sources, and links to and from his site that reinforce his underlying thesis in a cohesive manner: that throughout the past, the future has been discussed in different contexts. On the website, a link can be found to the newspapers he used, which demonstrates citation and academic integrity, imperative components of Mattern’s ideal digital humanities model.
He extracted every word from his archive of collected articles containing the word ‘future’ and made a database. He then made an interactive word-based interface so that whenever a reader accesses the site they find a compilation of words from the articles that act as hyperlinks.
It is through this organization that he was able to take the newspaper articles and recontextualize them into a format useful for his digital humanities project. One particularly impressive aspect of Sherratt’s project was how much feedback he received, and his constructive dialogue with users, throughout the developmental stages of his site. Clicking “the story…” allows the site’s viewers to follow Sherratt’s creative process. Additionally, he tweeted his progress in real time, and people tweeted at him with questions about his project, to which he appeared happy to reply. These tweets act as a form of pseudo-peer review. A series of lectures explaining his project allowed additional public understanding. This would not necessarily be considered collaboration, as he developed and continues to develop the project on his own, but the public input serves as a form of joint effort.
Exploring the website and using the tools he provides make it easy to deduce that Sherratt has a very clear vision and a comprehensive understanding of the mechanisms behind his project. However, the format of the website is the project’s biggest downfall: the tool is simple enough to use (user-friendly), because one just points and clicks, but there is nothing to balance that simplicity beyond the format of the site. Despite the simplicity of pointing and clicking, someone who came to the page on their own would find it difficult to grasp what the site is attempting to achieve. Although it does not initially appear accessible, reading “the story…” provides some clarity. Additionally, if one came to the site to learn how the future was once viewed, the visit would yield only a random acquisition of knowledge rather than a particular route. A direct search for a particular year, event, or phrase is not possible, which makes the site less useful than one would hope. Perhaps we are not entirely grasping what he is trying to achieve, which would make our critique a little unfair. If he is trying to build a database for random knowledge acquisition (as is perhaps suggested by its winning the Fun category), then he has effectively created one. From our perspective, however, this does not appear to be the most conducive format for the project, because one cannot purposefully acquire knowledge.
Despite our inability to determine the exact purpose of the site, we will proceed under the impression that it is made for random knowledge acquisition. Under these conditions it is appropriately formatted and effectively organized. The fact that each time you open the page a random subset of words appears, leading you down a different path each time, is an innovative way to build a site and organize this information. But how well the page is linked together, its cohesiveness, is arguably the most difficult criterion to judge. If it is judged on the understanding that it is supposed to be random, then yes, the fact that it is a jumble of words that lets you arbitrarily click on an appealing word and learn more is fantastic. Conversely, if a reader wants to acquire specific data, we would dispute how cohesive the page is.
Since Sherratt appears to demonstrate mastery of the tool, it is therefore adaptable. Mastery correlates with adaptability because a complete understanding of how the tool works (the tool being the way he tied together all the words) suggests that changes could be made if the site needed to adapt. This adaptability is one of the most exciting aspects of the project. If he were to come across more data, he could expand the comprehensiveness of the database. Currently, the data is limited to a certain time period and geographic range (Australia). However, if he were to collaborate with partners in various nations, he could expand the database so that readers could learn “The Future of the Past” of more nations across a longer expanse of time. With such an extensive database, readers could compare “the future” not only across time but across space. Another improvement we would suggest is a more comprehensive explanation of how to use the site, since a better understanding allows for a more user-friendly experience. It would also be nice to have different navigation interfaces for those who do not find the word-link interface useful: if the back-end database is robust and adaptable, that data should be able to feed multiple interfaces, allowing for very different web “faces.”
Based on Shannon Mattern’s criteria, Sherratt created a commendable digital humanities project. It fulfills most of the requirements she presents, and its adaptability could allow it to fulfill the rest. Overall, it is an impressive project that understandably won the award for “Best use of digital humanities project for fun.”
Co-Authors: Shane and Joy
Thank you to Amanda Gould for her assistance in reviewing our work
Collaboratively written by Mithun Shetty, Kim Arena, and Sheel Patel
The efficacy of a digital humanities project can be vastly improved if its delivery and interface are thoughtfully designed and skillfully executed. The following two websites, the “Speech Accent Archive” and “10 PRINT eBooks,” both utilize non-traditional forms of displaying content that alter the experience of their internet audience. Both projects will be critically assessed according to the guidelines described in Shannon Mattern’s “Evaluating Multimodal Work, Revisited” and Brian Croxall and Ryan Cordell’s “Collaborative Digital Humanities Project.”
The Speech Accent Archive is an online archive of audio recordings in which speakers from a variety of regional and ethnic backgrounds read a specific set of sentences. These recordings are submitted by the public and reviewed by the project administrators before being added to the site. The purpose of this media element is to expose users to the phonetic particularities of different global accents. The website is useful because it provides insight into the various factors that affect the way people talk and how these factors interconnect, from speakers’ ethnic backgrounds to their proximity to other countries. The site proves to be a useful tool for actors, academics, speech-recognition software designers, people with a general appreciation for the cultural connections in languages, and anyone studying linguistics, phonetics, or global accents.
The website’s layout is ideal for accomplishing this purpose: users can browse the archive by language/speakers or atlas/regions, or can browse a native phonetic inventory. This allows users to explore accents on a regional basis, which makes it easier to see similarities between local dialects. The audio recordings are all accompanied by a phonological transcription, showing the breakdown of consonantal and vowel changes as well as the syllable structure of the passage. Each user submission is accompanied by personal data, including the speaker’s background, ethnicity, native language, age, and experience with the English language. The site also has a very comprehensive search feature with many demographic search options, ranging from ethnic and personal background to speaking and linguistic generalization data. This level of detail is an invaluable resource for those who study cultural anthropology, phonetics, languages, and other areas, as it allows for specific manipulation of the data presented. The quality of user contributions is also consistently high: it is very easy to follow the playback of the recordings.
However, the project does have its limitations. The passage read by contributors is in English, regardless of the speaker’s fluency or familiarity with the language, so pronunciations of the passage may not reflect the natural sound of the languages represented. Further, because the audio samples are user-contributed, it is hard to maintain a constant level of English fluency among contributors. Another limitation is that many sections of the site have few or no recordings; this is merely due to a lack of user contributions and could be resolved by promoting the website, and since the project is ongoing, the database will continue to grow. A further limitation is that the site lacks any sort of comparison algorithm. The accents are all stored on their own specific web pages and load via individual QuickTime audio scripts; consequently, it is very difficult to perform a side-by-side comparison of accent recordings, and the project does not really make any conclusions or arguments with its data. This could be improved by allowing users to stream two separate files at the same time, or by allowing a statistical comparison of the demographic information accompanying each recording. It would also be interesting if an algorithm or visualization could recognize the slight differences between voices and arrange them by similarity, along with the demographic data that accompanies each sample. Further, the project could establish a tree-like comparison of regions and accents, visually representing the divergences and connections between where people live or have lived and the way they speak.
With these additions, it would be easier to understand aurally the effects of background or ethnicity on speech accents. Still, the website shines despite these setbacks: it offers a tremendous amount of data in an organized manner, presenting many opportunities for further research and applications of the information.
The book titled 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 is a collaboratively written book that describes the discovery and deeper meaning behind the eponymous maze-building program created for the Commodore 64. The book can be seen as a way of looking at code not just as a functional line of characters, but also as a medium that holds culture. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 uses the code as a jumping-off point to talk about computer programming in modern culture and the randomness and unpredictability of computer programming and art, exploring how computation and digital media have transformed culture. Alongside the book, one of the authors, Mark Sample, created a “twitterbot” that uses a Markov chain algorithm to produce and send tweets. The @10PRINT_ebooks twitterbot takes the probability that one word will follow another, scans the entire book, and tweets the random phrases it generates.
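A word-level Markov chain of the kind described, in which each next word is chosen with probability proportional to how often it followed the current word in the source text, can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions, not Sample’s actual bot code, and the one-line source text is invented for the example.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it;
    duplicates in the list encode the transition probabilities."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=12, seed=None):
    """Random-walk the chain from a start word, stopping early if a
    word has no recorded followers."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

source = "the maze is code and code is culture and the maze goes on"
phrase = generate(build_chain(source), "the")
```

Because the walk only knows which word tends to follow which, the output is locally plausible but globally aimless, which is exactly why the bot’s tweets hover between nonsense and accidental coherence.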
The clear goal of this book is to demonstrate that reading and writing code need not always be approached in the two-dimensional, functional sense that many people bring to it. The authors argue that code can be read and analyzed just like a book. They do so by delving into how the 10 PRINT program was created, its history, its porting to other languages, its assumptions, and why all of that matters. They also discuss the randomness of both computing and art, using the 10 PRINT program as a lens through which to view these broader topics. The purpose of the book is stated very clearly by one of the co-authors, Mark Sample: “Undertaking a close study of 10 PRINT as a cultural artifact can be as fruitful as close readings of other telling cultural artifacts have been.”
The implementation and format of the book and twitterbot are a little difficult to understand and do not necessarily help the authors establish their goals, especially in the case of the twitterbot. The book itself is co-authored by ten professors (literary, cultural, and media theorists whose main research topics are gaming, platform studies, and code studies), which gives it a broad range of perspectives. It also demonstrates that code, just like a book, can be co-authored and can incorporate the views and ideas of more than one person. This draws on the parallels the authors are trying to establish between coding and literary work. Code is not one-dimensional; it can incorporate the creative and artistic ideas of many people and can take many different forms that often serve very similar functions in the end. In this sense, the co-authoring of the book inherently showcases its main message about how code should be viewed. The book also traces the history of this BASIC program and how it coincided with cultural changes brought on by the advent of the personal computer. Sample’s twitterbot, on the other hand, leaves the user more often confused than educated, but that may be the point. Using its algorithm, it spits out random, syntactically plausible sentences that sometimes mean absolutely nothing, but occasionally it creates coherent thoughts from the words in the book. The occasional coherent sentence the bot produces may itself be a demonstration of code: the user sees that meaning can be pulled from jumbles of code (or, in the case of the book, words) if it falls into the correct syntax. The form also definitely fits: the randomness of the twitterbot lets people see that even by coincidence there can be substance in code. If this were done by having people point out specific parts of the code, we would be limited to their interpretations.
Having a machine randomly spew out phrases allows for many different interpretations.
This tool, although abstractly useful, could be implemented much better. If the twitterbot is only “occasionally” genius, the project would be more effective if it implemented some sort of ranking system for the most interesting or coincidental tweets. With such a sorting mechanism, the project might be more convincing in arguing that code can carry a creative license or brand.
Regardless of the various limitations both projects may have shown, it is abundantly clear that their media elements vastly improve their ability to illustrate their ideas and accomplish their purposes. It would be practically impossible to present these projects with text alone. The Speech Accent Archive’s audio recordings give concrete examples of an entirely aural concept, which is infinitely more useful than simply listing phonetic transcriptions. The 10 PRINT eBooks twitterbot, while difficult to understand, is an interesting concept that also generates concrete examples of what the project is trying to illustrate: that code is multidimensional in its structure and can be interpreted and analyzed like a complex literary work.
@10PRINT_ebooks, “10 PRINT ebooks”. Twitter.com. Web. https://twitter.com/10PRINT_ebooks
Baudoin, Patsy; Bell, John; Bogost, Ian; Douglass, Jeremy; Marino, Mark C.; Mateas, Michael; Montfort, Nick; Reas, Casey; Sample, Mark; Vawter, Noah. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. November 2012. Web. http://10print.org/
Cordell, Ryan, and Brian Croxall. “Technologies of Text (S12).” Technologies of Text S12. N.p., n.d. Web. 15 Sept. 2013. http://ryan.cordells.us/s12tot/assignments/
Mattern, Shannon C. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities. 28 Aug. 2012. Web. 15 Sept. 2013. http://journalofdigitalhumanities.org/1-4/evaluating-multimodal-work-revisited-by-shannon-mattern/
Weinberger, Steven H. The Speech Accent Archive. 2012. Web. http://accent.gmu.edu/about.php