Technoscience / Ecomateriality / Literature

Tag: Digital Humanities

Digital Humanities Project Critique By Cathy Li and Norma De Jesus

Digital humanities projects use a variety of mediums to present their information. They are not simply two-dimensional artistic representations of a humanities piece; they are a means by which humanities works can be transformed. Although the infographic "Every Scene in The Great Gatsby" is not technically a digital humanities project, we will compare it to other projects, explain why it is not acknowledged as a digital humanities project, and propose how to turn it into one.

First of all, the creator of the infographic has done a good job of representing the novel's series of events graphically. Each major event is represented through a picture. At the top of the infographic, a map shows the protagonists' geographic movements in The Great Gatsby, particularly around Gatsby's death. The body of the infographic is divided according to the chapters of the novel, and the characters, represented as circles bearing the initial letters of their names, appear in each chapter in linear temporal order, showing the reader how the characters interact with one another throughout the novel.

Googling the title of the infographic turns up few articles regarding its merit. In fact, many articles note that the project was produced by Pop Chart Labs, an infographic poster company that specializes in visualizing popular culture. In a sense, the project loses some merit because it was not created for the sole purpose of advancing scholarship. Nevertheless, many who stumble upon this Great Gatsby infographic find it useful. It has been described as "a stylish, elegant and beautifully designed graphic – another classic" (infographick.com). Although not necessarily a classic per se, it does give its audience a way to understand the book better. Some dialogue about the project does exist: it circulates on social media such as Pinterest and Twitter, showing that the general public finds it useful enough to share. Fastcodedesign.com even has an article breaking down the project, with comments about how it helps the reader.

Ultimately, however, there is not enough dialogue about this project on the internet, at least not enough to reveal its biases. Also, although it is a platform that presents media objects, it does not really advance an argument. According to Shannon Mattern, a useful digital project is judged on whether it makes an argument and can be responded to. This infographic lacks the elements needed to qualify. It offers few links or annotations, though it does do justice to the original literary work, even as a simple derivative of The Great Gatsby.

The novel representation and the simple drawing do offer the reader a clearer outline of the novel. However, one could easily redraw the infographic on a piece of paper, so the project can hardly be called a digital humanities project. Still, one should not give up on a brilliant idea such as this, but should instead turn it into something more modern, useful, and technologically advanced.

Shannon Mattern, in her paper "Evaluating Multimodal Work, Revisited," emphasizes the importance of "a strong thesis or argument at the core of the work," which is obviously lacking from this infographic (Mattern). Transforming a dull poster that merely retells a story into a vivid digital humanities project requires a strong motivation to make a point. In this case, the creator should reevaluate the essential ideas Fitzgerald tried to convey, such as Daisy's vanity and Gatsby's unconditional affection. What the revisor, as a reader, thinks of these (are they in vain? valorous? pathetic?) should be incorporated into the project, and the details of the novel that embody that point should become its main theme. The revisor's motivation plays a crucial part in the project because it determines what technical effort should be made, and why, to finish the project; it differentiates a thoughtful project from a directionless "cool data set" that cannot be interpreted.

There are many ways to transform this simplistic infographic into something with more digital affordances. Images are essentially the only medium the creator uses. Audio, code, and other technologies could have made the infographic livelier. After the creator settles on the point of the project, the structure and technical details need to be filled in.

One must first decide which affordances to utilize – whether the project will be a visual computer interface the reader clicks through or an immersive environment that activates the reader's other senses (auditory, olfactory, tactile, etc.). Technical availability limits what a project can do; since the design and technique are concept- and content-driven, as aforementioned, the revisor must consider whether switching from one affordance to another will affect the reader's understanding of the gist and motivation of the project. For example, an easy revision would be a programmed interface in which the main body of the infographic remains the same but extra function buttons are added. The reader could click on different scenarios throughout the novel, and a clip of the film would play or a segment of the novel would be read aloud. It could also be made interactive: the reader could ask the characters questions about the novel, and the characters would respond according to the content of the story. Or the book could simply be brought to life through the infographic itself: the creator could keep the temporal and spatial elements already incorporated and add more movement through programming and audio.

More types of data could have been drawn from other sources to create a more credible project, and more technology and design would certainly have helped the infographic meet Dr. Mattern's criteria for a multimodal project. By enriching the infographic with more data and research, delivered through various mediums, a multidimensional project like this would provide an immersive environment for the audience, granting them a more interactive experience. In essence, scholarship and multimedia should be synthesized so that the audience reaps the full benefit of the medium. By keeping the audience in mind and providing a digital resource that helps them better understand The Great Gatsby, the creator could have invented a whole new, innovative way to make literary media more digital.

Work Cited:

Mattern, Shannon. "Evaluating Multimodal Work, Revisited." Journal of Digital Humanities. Journal of Digital Humanities, Fall 2012. Web. 22 Sept. 2014.
Wilson, Mark. "Infographic: Every Scene in The Great Gatsby." 25 July 2013. Web.
"The Great Gatsby Chart Infographic." Infographick.com. N.p., 20 July 2013. Web. 22 Sept. 2014.

DH Critique: Diego Nogales and Greg Lyons

Stephan Thiel’s “Understanding Shakespeare” project succeeds as a digital design project, but it falls slightly short when viewed as a digital humanities project (which, in our opinion, requires effective analysis and original conclusions). Thiel aims to present a “new form of reading drama” (Thiel) to add new insights to Shakespeare’s works through information visualization. The project is broken into five separate approaches, each of which turns the words and events of Shakespearean drama into data and then presents said data in an informative visual display. While Thiel’s intentions (the “new form” stated above) constitute a worthy design goal, they do not serve as a strong thesis to guide the literary implications of his project (or lack thereof – literary conclusions are mostly absent). The separate approaches are not linked to support a core argument.

Each approach display has a small, concise description of its purpose, and presents data in a visual form that is easy for any average reader to navigate and explore. In viewing Shakespeare’s words as information to be processed (by methods described further on), Thiel goes against the opinions of Stephen Marche and others who argue that “literature is not data” (Marche). Marche fears the advent of the digital humanities and criticizes the field for being “nothing more than being vaguely in touch with technological reality” (Marche). He goes on to describe the sorts of algorithms that Thiel uses as “inherently fascistic” (Marche). Most digital humanities scholars will dismiss Marche’s fears of algorithms as irrational and exaggerated. However, there is a danger to the scholarly pursuit of literary analysis when projects claim to serve a literary purpose but instead do relatively little literary research. Although Thiel’s project is primarily a design project, his own self-written goals are a little too ambitious and reflect literary intentions that he does not satisfy. For example, his “Shakespeare Summarized” approach uses a word frequency algorithm to condense speeches from a play into one “most representative sentence” each, which he claims will create a “surprisingly insightful way to ‘read’ a play in less than a minute” (Thiel). This is a far-fetched claim, as the “Shakespeare Summarized” charts each turn out to be more of a disjointed collection of hit-or-miss quotes rather than a coherent narrative. The charts give no detail with regards to plot events or characters, and viewing this data cannot be compared to the experience of reading Shakespeare’s full prose. The data presented is of little value to someone who has not previously read the associated work. Therefore, Thiel falls short in re-purposing the data to create an analytic digital humanities project – instead, he simply gathers the data and presents it visually.
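Thiel does not publish the algorithm behind "Shakespeare Summarized," but the kind of frequency-based extractive summarization he describes can be sketched in a few lines. This is a minimal illustration under our own assumptions, not Thiel's actual code; the function name is ours:

```python
import re
from collections import Counter

def most_representative_sentence(speech: str) -> str:
    """Pick the sentence whose words are most frequent across the whole
    speech (length-normalized) -- a crude extractive summarizer."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", speech) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", speech.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    return max(sentences, key=score)
```

Applied speech by speech, this yields the "one sentence per speech" condensation the approach describes, and it also shows why the result reads as a disjointed collection of quotes: each sentence is scored in isolation, with no model of plot or character.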

Another of the approaches, “Me, You and Them” (Thiel), serves to identify each character’s role by compiling statements that begin with personal pronouns. Thiel claims that this approach “reveals the goals and thoughts of each character” (Thiel), though the project itself does no analysis of the data. Scholars who are familiar with the work may be able to examine Thiel’s compiled data and draw conclusions from it, but there are no conclusions put forth as part of the project.
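Again, Thiel's code is not exposed, but a rough analogue of the "Me, You and Them" compilation can be sketched by grouping lines of dialogue under the personal pronoun they open with. The pronoun set and function name below are our own illustrative assumptions:

```python
import re
from collections import defaultdict

PRONOUNS = {"i", "me", "you", "we", "they", "he", "she"}

def pronoun_statements(lines):
    """Group lines of dialogue by the personal pronoun they begin with;
    lines opening with any other word are ignored."""
    groups = defaultdict(list)
    for line in lines:
        match = re.match(r"\s*([A-Za-z']+)", line)
        if match and match.group(1).lower() in PRONOUNS:
            groups[match.group(1).lower()].append(line.strip())
    return dict(groups)
```

The output is exactly the kind of compiled-but-uninterpreted data the project displays: the grouping is mechanical, and any conclusion about a character's "goals and thoughts" is still left to the reader.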

Judged by design and technique criteria, it is clear that this digital humanities project's concept and tools developed in sync. Thiel is well aware of the affordances of his tools (the capabilities of each algorithm for useful visualization), and he organizes the data in a readable manner. The approach titled "Visualizing the Dramatic Structure" introduces Shakespeare's plays through a fragmented lens: each fragment signifies a major character within the entire play, or a character important within one scene. To produce this while maintaining the authentic feel of reading a play, the approach uses a very inventive page structure. The structure follows that of a novel; however, the story is divided by vertical lines that create horizontal sections for each scene/character, summarizing their most important lines. This fragmented yet organized display demonstrates the affordances of the overall project. Thiel focuses on technology that affords him the ability to examine the scope of an entire story by highlighting smaller, important details. The only major flaw in the design is that the visuals are presented through Flickr, which makes it somewhat difficult to zoom in far enough and, even more so, to navigate the tall vertical images. Higher resolution and a different media type for the visuals would have pushed the design to a higher level of sophistication.

(Hamlet, Prince of Denmark – Understanding Shakespeare Project)

It is not sufficient to view only the final presentation of a digital humanities project. Examining the development of a project is imperative to fully appreciating the level of work and rigor involved in its creation, and studying the design process can also reveal biases or assumptions inherent in the project. "Understanding Shakespeare" successfully records and documents the entire process, from the digitization of the plays, to the coded manipulation of the data, to its fruition. The process is presented through a series of YouTube videos fast-forwarding through the various mini-projects. These are a great way to observe and, to an extent, understand the coding algorithms used to organize the words or lines of the plays by frequency. The major dilemma, however, is that without a computer science background it may be impossible to understand the coding process from the video alone. What is missing is a narrated walkthrough of the process as the video plays. So even though the documentation exists, the project's development is not truly transparent.

This Shakespeare project not only documents the entire process leading to the final product but also thoroughly credits the platforms and software used. All acknowledgements are made in the "About" tab, which notes that the data comes from the WordHoard project at Northwestern University and that the software libraries "Toxiclibs" and "Classifier4J" were used to manipulate the data into visual arrangements based on frequency. In terms of visibility, the project's open web accessibility allows any academic scholar to examine Thiel's charts. It is also open and simple enough to accommodate the layman who may only be attracted to the visuals of one play he or she has read. It is worth noting, however, that Thiel does not make the raw data available to the public – he displays only the data visualizations.

To sum up “Understanding Shakespeare” as a digital humanities project, it helps to look through the lens of a prominent digital humanities scholar like Katherine Hayles. In her book “How We Think”, Hayles describes how “machine reading” processes like Thiel’s algorithms could supplement traditional reading experiences by providing a “first pass towards making visible patterns that human reading could then interpret” (Hayles 29).  However, this relationship implies that machine reading could inform readers who have not yet read the work traditionally, and in the case of “Understanding Shakespeare”, the data is not of much use without previous familiarity with the drama. As of yet, no scholars have taken advantage of Thiel’s project to make literary arguments, and thus it still sits idly as what Mattern would describe as a “cool data set” (Mattern). Standing alone as data, the project leaves lingering questions: Could these techniques be applied effectively to the works of other authors, and more importantly, what are the literary implications of this type of data?

Citations:

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Web.

Marche, Stephen. “Literature Is Not Data: Against Digital Humanities – The Los…” The Los Angeles Review of Books. N.p., 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.

Mattern, Shannon. "Evaluating Multimodal Work, Revisited." Journal of Digital Humanities. Journal of Digital Humanities, Fall 2012. Web. 22 Sept. 2014.

Thiel, Stephan. “Understanding Shakespeare.” Understanding Shakespeare. 2010. Web. <http://www.understanding-shakespeare.com/>.

DN Digital Humanities Blog Post

After reading this week’s texts, I can see the augmentation potential in the digital humanities. I envision a two-step revolution. The first stage was the gathering of text from millions of traditionally printed works. A perfect example was Larry Page’s project to digitize books using a “crowd-sourced textual correction… program called reCAPTCHA” (Marche). This revolutionary step attracted criticism, and as a relatively new concept, the direction of digital humanities and language-analyzing algorithms was uncertain. A major part of this active debate is whether literature is data. Some critics argue, “Literature is the opposite of data. The first problem is that literature is terminally incomplete” (Marche). Another common argument is that “the [text] data are exactly identical; their meanings are completely separate” (Marche). I can accept these criticisms of the limitations of digitized text. However, I also think these arguments will become obsolete within the next decade, if not sooner.

Looking at developing projects that use coding algorithms to analyze text, the augmentation of analysis is already present. Through the digital humanities, one can grasp patterns in millions of words of “data” and learn something from them. One example is the interactive visual representation of the most-used words in each year’s State of the Union address, from George Washington to Barack Obama. This effective augmentation of scholarship is exposed not only to the academic community but to the entire general population of the United States. The ability to analyze hundreds of speeches at a macro level within a few minutes simply could not exist without the digitization of text. This tool is just the tip of the iceberg, as the second step of the digital humanities is only beginning.
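The Tableau visualization itself is interactive, but the underlying computation is simple word counting. A minimal sketch of the idea (hypothetical input format; the stopword list here is deliberately abbreviated):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "that", "we", "is",
             "our", "for", "it", "be", "have", "this"}  # abbreviated list

def top_words(address: str, n: int = 5):
    """Most frequent non-stopword terms in a single address."""
    words = re.findall(r"[a-z']+", address.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

def yearly_top_words(addresses, n: int = 5):
    """addresses maps year -> full text of that year's address."""
    return {year: top_words(text, n) for year, text in addresses.items()}
```

Run over two centuries of digitized addresses, a loop this simple surfaces the macro-level shifts in political vocabulary that no manual count could feasibly produce.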

This second step will close the gap between raw data and literature with meaning. The use of deep-learning techniques implemented through coding algorithms is the direction in which the digital humanities are heading. Google is spearheading “deep-learning software designed to understand the relationships between words with no human guidance” (Harris). This open-source tool, called word2vec, is part of a movement that will push computational text analysis to new levels. The movement points back to Hayles’s How We Think, because it will be only a matter of time before the distinction between “machine reading” and human interpretation becomes unnoticeable (Hayles 29).
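word2vec itself trains a neural network, but the distributional intuition behind it – that a word is characterized by the company it keeps – can be sketched without any machine learning, using raw co-occurrence counts and cosine similarity. This is a toy stand-in for illustration, not the word2vec algorithm:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Count, for each word, how often every other word appears within
    `window` positions of it -- a crude context vector per word."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        tokens = sent.lower().split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vecs[word][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words used in similar contexts ("king" and "queen" in parallel sentences) end up with similar vectors. word2vec learns far denser and more powerful representations with no hand-built counting, but the relationship between words it captures is of this kind.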


Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Harris, Derrick. “We’re on the Cusp of Deep Learning for the Masses. You Can Thank Google Later.” Gigaom. Gigaom, Inc., 16 Aug. 2014. Web. 12 Sept. 2014. <https://gigaom.com/2013/08/16/were-on-the-cusp-of-deep-learning-for-the-masses-you-can-thank-google-later/>.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Hom, Daniel. “State of the Union, in Words.” Business Intelligence and Analytics. Tableau, 12 Feb. 2013. Web. 12 Sept. 2014. <http://www.tableausoftware.com/public/blog/2013/02/state-union-words-1818>.

Marche, Stephen. “Literature Is Not Data: Against Digital Humanities – The Los…” The Los Angeles Review of Books. N.p., 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.

Blog 2: Digital Humanities

From bringing a book to life to utilizing graphs to map out human nature, it is clear that technology can play an essential role in subjects within the humanities. The project of mapping the character relationships and storyline of The Great Gatsby with a computer shows how digital technology augments scholarship by providing yet another spectrum through which the story can be retold. In other words, by incorporating digital media, the novel evolves into other useful mediums, giving the audience another way to look at and understand the plot. Reality is augmented in this project through computation, in an effort to bring the story to life. Another story brought to life by augmenting the way we perceive reality is human emotion. In the article “Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter,” the authors use data and graphs to show how a study of human happiness was conducted through social media, another technology that amplifies the human experience. By digitally bringing an aspect of humanity to life, we gain insight into contextual information through other mediums; this is the value the digital adds to the humanities. All in all, Hayles’s How We Think prepared me to understand the vital role digital technology plays in these articles. Had I not been informed by Hayles’s perspective, I would not have understood that digitizing the humanities can provide other means of understanding a novel, human emotion, and other human-made subjects. Through many different apparatuses, the digital humanities are advancing the way we perceive the world around us. Take Neuromancer, for example: Gibson’s use of cyberspace provides a distorted reality in which digital influences prevail throughout the novel.
If we were to achieve such a world, where we allow technology to tamper with our comfortable reality, perhaps a dystopia would arise similar to the one found in Neuromancer. But for now, we should appreciate how technology has given us ways to digitally alter our perception of subject matter within the humanities.

Dodds PS, Harris KD, Kloumann IM, Bliss CA, Danforth CM (2011) Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter. PLoS ONE 6(12): e26752. doi:10.1371/journal.pone.0026752

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Wilson, Mark. "Infographic: Every Scene in The Great Gatsby." 25 July 2013. Web.

Blog 2: DH vs H

From my reading of Hayles, I would say that the digital humanities differ from the plain humanities in that digital humanities projects allow you to interact with the text and encourage different paths to its main purpose. The projects we were presented with are prime examples, each presenting new ways to interact with the media and giving us information beyond what the media itself affords. Take, for example, the video showing how brushstrokes can help identify authentic Van Gogh paintings from copies. With just the physical painting, we can see—well, the painting. But by digitizing it, converting it to a black-and-white image, and using computer algorithms to analyze the brushstroke patterns, we gather two pieces of information the physical painting could never have told us: what exactly Van Gogh’s distinctive brush style is, and whether a painting, judged by its brush style compared to Van Gogh’s, is a genuine painting or a worthless copy.

Digitizing a text gives it dozens of affordances that a hard copy simply cannot offer. For example, in this TED talk, analysts show us historical trends and social culture through word counts of texts written in the past. This is simply not possible with hard copies of books, because the time and effort required to do a manual word count on enough literature to draw any conclusions would vastly outweigh the benefit of discovering that information. By adding the ‘digital’ to humanities, we get the opportunity to look through hundreds of thousands of pieces of information and extract what we need in a matter of minutes. This incredible benefit is, in fact, the foundation of Google’s project to digitize all of the literature available to us—a project Marche dismisses as “a story of several, related intellectual failures.” Marche argues that digitizing the humanities removes the “humanity” from the work altogether, but I disagree. I, like many others who have written in response to him, would like to believe that this ease of analysis makes us more likely to engage with a piece of literature, because we can go directly to the part we want rather than poring through the entire book and sifting through endless pages of information that is not useful for the task at hand.

Aiden, E., and Michel, J. "What We Learned from 5 Million Books." TED talk, 20 Sept. 2011. Web. <http://www.youtube.com/watch?v=5l4cA8zSreQ>.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Eck, Allison. "How Forensic Linguistics Revealed J. K. Rowling's Secret." PBS. PBS, 19 July 2013. Web. 12 Sept. 2014.
