Technoscience / Ecomateriality / Literature

Author Archives: Greg Lyons

DH Critique: Diego Nogales and Greg Lyons

September 22nd, 2014 | Posted by Greg Lyons in Uncategorized - (0 Comments)

Stephan Thiel’s “Understanding Shakespeare” project succeeds as a digital design project, but it falls slightly short when viewed as a digital humanities project (which, in our opinion, requires effective analysis and original conclusions). Thiel aims to present a “new form of reading drama” (Thiel) to add new insights to Shakespeare’s works through information visualization. The project is broken into five separate approaches, each of which turns the words and events of Shakespearean drama into data and then presents said data in an informative visual display. While Thiel’s intentions (the “new form” stated above) constitute a worthy design goal, they do not serve as a strong thesis to guide the literary implications of his project (or lack thereof – literary conclusions are mostly absent). The separate approaches are not linked to support a core argument.

Each approach has a short, concise description of its purpose and presents data in a visual form that any average reader can navigate and explore. In viewing Shakespeare's words as information to be processed (by methods described below), Thiel goes against the opinions of Stephen Marche and others who argue that "literature is not data" (Marche). Marche fears the advent of the digital humanities and criticizes the field for being "nothing more than being vaguely in touch with technological reality" (Marche). He goes on to describe the sorts of algorithms that Thiel uses as "inherently fascistic" (Marche). Most digital humanities scholars will dismiss Marche's fears of algorithms as irrational and exaggerated. However, there is a danger to the scholarly pursuit of literary analysis when projects claim to serve a literary purpose but do relatively little literary research. Although Thiel's project is primarily a design project, his self-stated goals are a little too ambitious and reflect literary intentions that he does not satisfy. For example, his "Shakespeare Summarized" approach uses a word-frequency algorithm to condense each speech in a play into one "most representative sentence," which he claims will create a "surprisingly insightful way to 'read' a play in less than a minute" (Thiel). This is a far-fetched claim, as the "Shakespeare Summarized" charts each turn out to be a disjointed collection of hit-or-miss quotes rather than a coherent narrative. The charts give no detail with regard to plot events or characters, and viewing this data cannot be compared to the experience of reading Shakespeare's full text. The data presented is of little value to someone who has not previously read the associated work. Therefore, Thiel falls short of repurposing the data into an analytic digital humanities project – instead, he simply gathers the data and presents it visually.
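Thiel's description suggests a simple extractive technique: score every sentence of a speech by how frequent its words are across that speech, then keep the top scorer. The sketch below is our own illustration of that general approach; the sample text and the exact scoring rule are assumptions, not Thiel's actual code (which, per his "About" page, relies on the Classifier4J library):

```python
import re
from collections import Counter

def most_representative_sentence(speech):
    """Return the sentence whose words are, summed together, the most
    frequent in the whole speech -- a crude extractive summary."""
    sentences = [s.strip() for s in re.split(r"[.!?]", speech) if s.strip()]
    # Word frequencies over the entire speech
    freq = Counter(re.findall(r"[a-z']+", speech.lower()))
    # Score each sentence by the summed frequency of its words
    return max(sentences,
               key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())))

speech = ("To be, or not to be, that is the question. "
          "Whether 'tis nobler in the mind to suffer "
          "the slings and arrows of outrageous fortune. "
          "Or to take arms against a sea of troubles.")
print(most_representative_sentence(speech))
```

Even this toy version exhibits the hit-or-miss quality described above: the winning sentence is chosen for its common words, not for carrying plot or character information.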

Another of the approaches, “Me, You and Them” (Thiel), serves to identify each character’s role by compiling statements that begin with personal pronouns. Thiel claims that this approach “reveals the goals and thoughts of each character” (Thiel), though the project itself does no analysis of the data. Scholars who are familiar with the work may be able to examine Thiel’s compiled data and draw conclusions from it, but there are no conclusions put forth as part of the project.
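Mechanically, compiling such statements is trivial, which underscores how much interpretive work is left to the reader. A hypothetical sketch of the filtering step (the sample lines and pronoun list are our own illustration, not Thiel's data):

```python
import re

# Personal pronouns used to group statements, per the approach's premise
PRONOUNS = ("i", "me", "you", "we", "us", "they", "them")

def pronoun_statements(lines):
    """Group lines by the personal pronoun they begin with;
    lines that open with any other word are dropped."""
    groups = {}
    for line in lines:
        words = re.findall(r"[a-z']+", line.lower())
        if words and words[0] in PRONOUNS:
            groups.setdefault(words[0], []).append(line)
    return groups

sample = [
    "I have of late lost all my mirth.",
    "You cannot, sir, take from me any thing.",
    "They say he made a good end.",
    "The rest is silence.",
]
grouped = pronoun_statements(sample)
```

The grouping itself draws no conclusions; any claim about a character's "goals and thoughts" still depends entirely on a human reading the collected lines.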

Judged on design and technique, it is clear that this digital humanities project developed in sync with its concept and its tools. Thiel is well aware of the affordances of his tools (the capabilities of each algorithm for useful visualization), and he is effective in organizing the data in a readable manner. The approach titled "Visualizing the Dramatic Structure" introduces Shakespeare's plays through a fragmented lens: each fragment signifies either a major character within the entire play or a character important within a single scene. To produce this while still maintaining the authentic feel of reading a play, the approach uses a very inventive page structure. The layout follows that of a novel, but the story is divided by vertical lines into horizontal portions that summarize each scene or character's most important lines. This fragmented yet organized display demonstrates the affordances of the overall project. Thiel focuses on technology that affords him the ability to examine the scope of an entire story by highlighting smaller, important details. The only major flaw in the design is that the visuals are presented through Flickr, which makes it somewhat difficult to zoom in far enough and, even more so, to navigate the tall vertical images. Higher-resolution visuals in a different medium would have pushed the design to a higher level of sophistication.


(Hamlet, Prince of Denmark – Understanding Shakespeare Project)

It is not sufficient to view only the final presentation of a digital humanities project. Examining a project's development is imperative to fully appreciating the level of work and rigor involved in its creation, and studying the design process can also reveal biases or assumptions inherent in the project. "Understanding Shakespeare" succeeds in documenting the entire process, from the digitization of the plays, to the manipulation of the data in code, to the finished visualizations. The process is presented through a series of YouTube videos fast-forwarding through the various mini-projects. These are a great way to observe and, to an extent, understand the algorithms that organize the words or lines of each play by frequency. The major dilemma with this presentation, however, is that without a background in computer science it may be impossible to understand the code by watching the videos. What is missing is narration walking the viewer through the process as the video plays. So even though the documentation is there, the project's development is not truly transparent.

This Shakespeare project not only documents the entire process leading to the final product but also thoroughly credits the platforms and software used. All acknowledgements appear in the "About" tab. It states that the data comes from the WordHoard project at Northwestern University, and that the libraries Toxiclibs and Classifier4J were used to manipulate the data into visual arrangements based on frequency. In terms of visibility, the project's open web access allows any academic scholar to examine Thiel's charts. It is also open and simple enough to accommodate the layperson who may only be attracted to the visuals of one play that he or she has read. It is worth noting, however, that Thiel does not make the raw data available to the public – he displays only the data visualizations.

To sum up "Understanding Shakespeare" as a digital humanities project, it helps to look through the lens of a prominent digital humanities scholar like Katherine Hayles. In her book "How We Think", Hayles describes how "machine reading" processes like Thiel's algorithms could supplement traditional reading by providing a "first pass towards making visible patterns that human reading could then interpret" (Hayles 29). However, this relationship implies that machine reading could inform readers who have not yet read the work traditionally, and in the case of "Understanding Shakespeare" the data is of little use without prior familiarity with the drama. So far, no scholars have taken advantage of Thiel's project to make literary arguments, and thus it sits idly as what Mattern would call a "cool data set" (Mattern). Standing alone as data, the project leaves lingering questions: could these techniques be applied effectively to the works of other authors, and, more importantly, what are the literary implications of this type of data?

 

Citations:

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Web.

Marche, Stephen. "Literature Is Not Data: Against Digital Humanities." The Los Angeles Review of Books. N.p., 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.

Mattern, Shannon. "Evaluating Multimodal Work, Revisited." Journal of Digital Humanities, Fall 2012. Web. 22 Sept. 2014.

Thiel, Stephan. “Understanding Shakespeare.” Understanding Shakespeare. 2010. Web. <http://www.understanding-shakespeare.com/>.

Blog 2: DH Projects

September 12th, 2014 | Posted by Greg Lyons in Uncategorized - (0 Comments)

New digital humanities projects are constantly augmenting scholarship in new ways. Most DH projects share a common thread of extracting and amassing data from collections of texts (literary works, scholarly works, web data, etc.); however, the true augmentation lies in the wide range of research done after the data is collected. This data can provide a model for examining more nebulous phenomena, such as emotion. In a study titled "Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter," Peter Sheridan Dodds and other researchers scored individual tweets by the frequency and emotional significance of certain words to gain insight into hedonism and emotion. The study operates on the principle that "the raw word content of tweets does appear to reflect people's current circumstances" (Dodds). In this sense, Twitter and other forms of social media serve as additional embodied human communication tools – rather than being separate entities from the humans who use them, these Twitter accounts are an auxiliary part of the humans themselves. With progress being made in DH, it is possible for humans to be identified through analysis of their auxiliary communication tools. In her article for National Geographic, Virginia Hughes describes how scholars examined literary data to determine that the pseudonymous writer Robert Galbraith was in fact famed author J.K. Rowling. The idea that simply examining words and word patterns could point to a conclusion of "very characteristically Rowling" (Hughes) certainly lands somewhere on the "awesome-creepy" scale.
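The hedonometric method the study describes amounts to averaging happiness ratings over the recognized words of each tweet. Here is a toy sketch of that averaging step; the lexicon scores below are made up for illustration (the actual study uses crowd-rated scores on a 1–9 scale for roughly 10,000 frequent words):

```python
# Illustrative happiness lexicon -- these scores are invented,
# not taken from the Dodds et al. dataset.
HAPPINESS = {"love": 8.4, "happy": 8.3, "rain": 4.8, "hate": 2.3, "war": 1.8}

def tweet_happiness(tweet):
    """Average the lexicon scores of a tweet's recognized words;
    return None when no word in the tweet appears in the lexicon."""
    scored = [HAPPINESS[w] for w in tweet.lower().split() if w in HAPPINESS]
    return sum(scored) / len(scored) if scored else None
```

For example, `tweet_happiness("love the rain")` averages only "love" and "rain" (the word "the" is unscored), illustrating how the method reflects word content rather than full meaning.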

In her book How We Think, Hayles examines what can make this sort of literary data extraction unsettling. She discusses the differences between human interpretation of literary material and "machine reading", noting that human egocentricity may lead to the principle that "human interpretation should be primary in analyzing how events originate and develop" (Hayles 29). Traditional humanities scholars often rush to discredit the digital humanities and techniques of machine reading or "distant reading," but they lose sight of the fact that DH seeks to augment existing scholarship rather than replace it. These scholars remind me of the "console cowboys" of Neuromancer, who see simstim as an inferior tool compared to jacking in to cyberspace. Eventually, Case sees that simstim can be an extremely powerful tool serving different purposes (Gibson). DH represents a model of scholarship that uses as many tools as possible to explore academic inquiries.

 

Citations:

Dodds PS, Harris KD, Kloumann IM, Bliss CA, Danforth CM (2011) Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter. PLoS ONE 6(12): e26752. doi:10.1371/journal.pone.0026752

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Hughes, Virginia. “How Forensic Linguistics Outed J.K. Rowling (Not to Mention James Madison, Barack Obama, and the Rest of Us).” Phenomena. National Geographic, 19 July 2013. Web. 11 Sept. 2014.

Novel response – Neuromancer

September 8th, 2014 | Posted by Greg Lyons in Uncategorized - (0 Comments)

The futuristic setting of Neuromancer is obviously a different world than the one we inhabit, yet for me the more interesting aspects lie in smaller details rather than big-picture disparities. I am fascinated by the intersection between aspects of modern culture that already exist and Gibson's speculations about what will be commonplace in the future. Right from the start of the novel, we see several different types of weapons, ranging from knives and shurikens to modern and futuristic firearms. Knives and shurikens are old, rudimentary weapons, yet in Gibson's world they coexist with weapons such as the "Smith & Wesson riot gun," a firearm that fires "subsonic sandbag jellies" as ammunition. At one point, Case borrows a "fifty-year-old Vietnamese imitation of a South American copy of a Walther PPK". Gibson's choice to place older weapons alongside his speculative guns and other futuristic technologies helps characterize the seedy underworld of the city he creates. The intersection of futuristic technology and crude urban crime sets the scene for the events of the cyberpunk novel.

 

Gibson treats drugs in a similar manner.  He mentions familiar methods of taking drugs (pills and needles), as well as familiar drugs (speed).  However, Gibson also mentions other unfamiliar drugs (“pituitaries” and others).  In fact, the whole process of jacking-in to the matrix is essentially a drug in its own right.  The DiVE experience was along those same lines, although to a lesser extent.  When I was in the DiVE, I was always fully aware of my body and did not feel like it was affecting my brain state as much as a drug might (although it certainly interacted with my brain in interesting ways!).  Within the novel, the presence of today’s street drugs along with other new drugs further adds to the atmosphere that Gibson creates.  Gibson isn’t out to create a chrome utopia where society has solved all problems – he aims to create a believably grim haven for illicit activities of the future.  The presence of crude weapons and current street drugs alongside new technologies of the future sets the tone right off the bat for the world in which Neuromancer’s events take place.

Citations from web version of Neuromancer:

Gibson, William. Neuromancer. 1984. Web.