Technoscience / Ecomateriality / Literature

#Augrealities @Personas tweet!

September 23rd, 2014 | Posted by Amanda Starling Gould in Uncategorized

DH Critique: Diego Nogales and Greg Lyons

September 22nd, 2014 | Posted by Greg Lyons in Uncategorized

Stephan Thiel’s “Understanding Shakespeare” project succeeds as a digital design project, but it falls slightly short when viewed as a digital humanities project (which, in our opinion, requires effective analysis and original conclusions). Thiel aims to present a “new form of reading drama” (Thiel) to add new insights to Shakespeare’s works through information visualization. The project is broken into five separate approaches, each of which turns the words and events of Shakespearean drama into data and then presents said data in an informative visual display. While Thiel’s intentions (the “new form” stated above) constitute a worthy design goal, they do not serve as a strong thesis to guide the literary implications of his project (or lack thereof – literary conclusions are mostly absent). The separate approaches are not linked to support a core argument.

Each approach display has a small, concise description of its purpose and presents data in a visual form that any average reader can navigate and explore. In viewing Shakespeare’s words as information to be processed (by methods described further on), Thiel goes against the opinions of Stephen Marche and others who argue that “literature is not data” (Marche). Marche fears the advent of the digital humanities and criticizes the field for being “nothing more than being vaguely in touch with technological reality” (Marche). He goes on to describe the sorts of algorithms that Thiel uses as “inherently fascistic” (Marche). Most digital humanities scholars will dismiss Marche’s fears of algorithms as irrational and exaggerated. However, there is a danger to the scholarly pursuit of literary analysis when projects claim to serve a literary purpose but do relatively little literary research. Although Thiel’s project is primarily a design project, his own self-written goals are a little too ambitious and reflect literary intentions that he does not satisfy. For example, his “Shakespeare Summarized” approach uses a word frequency algorithm to condense each speech of a play into one “most representative sentence,” which he claims creates a “surprisingly insightful way to ‘read’ a play in less than a minute” (Thiel). This is a far-fetched claim, as the “Shakespeare Summarized” charts each turn out to be more a disjointed collection of hit-or-miss quotes than a coherent narrative. The charts give no detail with regard to plot events or characters, and viewing this data cannot be compared to the experience of reading Shakespeare’s full text. The data presented is of little value to someone who has not previously read the associated work. Therefore, Thiel falls short of repurposing the data into an analytic digital humanities project – instead, he simply gathers the data and presents it visually.
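Thiel does not publish the details of his algorithm, but the core of a frequency-based “most representative sentence” picker can be sketched in a few lines of Python. This is our illustration, not Thiel’s implementation; the exact scoring rule (average word frequency per sentence) is an assumption:

```python
from collections import Counter
import re

def most_representative_sentence(speech):
    """Score each sentence by the frequency of its words across the
    whole speech and return the highest-scoring one. Dividing by the
    token count keeps long sentences from winning automatically."""
    sentences = [s.strip() for s in re.split(r"[.!?]", speech) if s.strip()]
    words = re.findall(r"[a-z']+", speech.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return max(sentences, key=score)

speech = ("To be or not to be, that is the question. "
          "Whether tis nobler in the mind to suffer. "
          "To die, to sleep, no more.")
print(most_representative_sentence(speech))
```

As the critique above suggests, a sentence chosen this way reflects recurring vocabulary, not plot, which is exactly why such summaries read as disjointed quotes.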

Another of the approaches, “Me, You and Them” (Thiel), serves to identify each character’s role by compiling statements that begin with personal pronouns. Thiel claims that this approach “reveals the goals and thoughts of each character” (Thiel), though the project itself does no analysis of the data. Scholars who are familiar with the work may be able to examine Thiel’s compiled data and draw conclusions from it, but there are no conclusions put forth as part of the project.
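The extraction behind “Me, You and Them” can likewise be sketched; the pronoun list and the first-word rule below are our assumptions about the filter, not Thiel’s actual code:

```python
import re

# Hypothetical pronoun list for illustration.
PRONOUNS = {"i", "me", "my", "you", "your", "we", "us", "our", "they", "them"}

def pronoun_statements(lines):
    """Return the lines whose first word is a personal pronoun."""
    out = []
    for line in lines:
        m = re.match(r"\s*([A-Za-z']+)", line)
        if m and m.group(1).lower() in PRONOUNS:
            out.append(line.strip())
    return out

hamlet = ["I have heard of your paintings too, well enough.",
          "Get thee to a nunnery.",
          "We are arrant knaves, all."]
print(pronoun_statements(hamlet))
```

The filter only gathers candidate statements; as the critique notes, drawing conclusions about a character’s goals from them is left entirely to the reader.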

Judging the overall project on design and technique, it is clear that concept and tool application developed in sync. Thiel is well aware of the affordances of his tools (the capabilities of each algorithm for useful visualization), and he organizes the data in a readable manner. The approach titled “Visualizing the Dramatic Structure” introduces Shakespeare’s plays through a fragmented lens: each fragment represents a major character within the entire play, or a character important within a single scene. To produce this while still maintaining the authentic feel of reading a play, the approach uses a very inventive page structure. It follows that of a novel; however, the story is divided by vertical lines that create horizontal portions for each scene/character, summarizing their most important lines. This fragmented yet organized display demonstrates the affordances of the overall project. Thiel focuses on technology that affords him the ability to examine the scope of an entire story while highlighting smaller, important details. The only major flaw in the design was that the visuals were presented through Flickr, which made it difficult to zoom in far enough and, even more so, to navigate the vertical Flickr photo. A higher resolution and a different media type for the visuals would have pushed the design to a higher level of sophistication.


(Hamlet, Prince of Denmark – Understanding Shakespeare Project)

It is not sufficient to view only the final presentation of a digital humanities project. Examining the development of any project is imperative to fully appreciating the level of work and rigor involved in its creation. Studying the design process can also reveal biases or assumptions inherent in the project. The “Understanding Shakespeare” project was successful in recording and documenting the entire process, from the digitization of the plays, to the coded manipulation of the data, to its fruition. The process is presented through a series of YouTube videos fast-forwarding through the various mini-projects. These are a great tool for observing and, to an extent, understanding the coding algorithms used to organize the words or lines of the plays by frequency. The major dilemma with this entire process, however, is that without a computer science background, it may be impossible to understand the coding process from the video alone. What is missing on this page is narration walking the viewer through the process as the video plays. So even though the documentation is there, the transparency of the project’s development is not.

This Shakespeare project not only documents the entire process up to the final product, but also thoroughly credits the different platforms and software used. In the “About” tab, all the acknowledgements are made. It certifies that the data used was drawn from the WordHoard Project and Northwestern University. In addition, it reveals that the software libraries “Toxiclibs” and “Classifier4J” were used to manipulate the data into an interesting visual arrangement based on frequency. In terms of project visibility, the open web accessibility of this project allows any academic scholar to examine Thiel’s charts. Furthermore, it is also open and simple enough that it accommodates the layman who may only be attracted to the visuals of one play that he or she has read. It is worth noting, however, that Thiel does not make the raw data available to the public – he only displays the data visualizations.

To sum up “Understanding Shakespeare” as a digital humanities project, it helps to look through the lens of a prominent digital humanities scholar like Katherine Hayles. In her book How We Think, Hayles describes how “machine reading” processes like Thiel’s algorithms could supplement traditional reading experiences by providing a “first pass towards making visible patterns that human reading could then interpret” (Hayles 29). However, this relationship implies that machine reading could inform readers who have not yet read the work traditionally, and in the case of “Understanding Shakespeare”, the data is not of much use without previous familiarity with the drama. As yet, no scholars have taken advantage of Thiel’s project to make literary arguments, and so it sits idle as what Mattern would describe as a “cool data set” (Mattern). Standing alone as data, the project leaves lingering questions: Could these techniques be applied effectively to the works of other authors, and more importantly, what are the literary implications of this type of data?

 

Citations:

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Web.

Marche, Stephen. “Literature Is Not Data: Against Digital Humanities – The Los…” The Los Angeles Review of Books. N.p., 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.

Mattern, Shannon. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities, Fall 2012. Web. 22 Sept. 2014.

Thiel, Stephan. “Understanding Shakespeare.” Understanding Shakespeare. 2010. Web. <http://www.understanding-shakespeare.com/>.

Digital Humanities Critique by Cathy & Norma

September 22nd, 2014 | Posted by Norma De Jesus in Uncategorized

Because the project we are critiquing, the infographic “Every Scene in The Great Gatsby,” is not technically a digital humanities project, we will focus on comparing it to other projects, on why it is not acknowledged as a digital humanities project, and on how to make it into one.

First of all, the creator of the infographic has done a good job representing the series of events graphically. At the top of the picture, a map shows the protagonists’ movements through the novel’s geography, particularly around Gatsby’s death. The body of the picture is divided by chapter, and the characters, represented as circles bearing the initial letters of their names, appear in each chapter in linear temporal order, showing the reader how the characters interact with one another throughout the novel.

Google the title of the infographic, and not many articles regarding its merit appear. In fact, many articles state that the producer of this project is Pop Chart Labs, a design company that specializes in turning popular-culture items into infographic posters. In a sense, the project loses some merit given that it was not created for the sole purpose of advancing scholarship. Nevertheless, many who stumble upon this Great Gatsby infographic find it useful. It is described as “a stylish, elegant and beautifully designed graphic – another classic” (infographick.com). Although not necessarily a classic per se, it does provide its audience a way of understanding the book better. There is some dialogue around the project: it appears on social media such as Pinterest and Twitter, showing that the general public finds it useful enough to share. Fastcodedesign.com even has an article breaking down the project, along with comments about how it helps the reader.

Still, it is clear that not enough dialogue about this project is present throughout the internet, at least not enough to reveal the project’s biases. Also, although it is a platform that presents media objects, it doesn’t necessarily provide an argument. A useful digital project is created on the basis of whether it can be argued with or responded to, and this infographic lacks the elements to even be labeled as such. There aren’t sufficient links or annotations, but it does do justice to the original literary work, even as a simple derivative of The Great Gatsby.

The novel representation and the simple drawing do offer the reader a clearer outline of the story. However, one could surely remake the infographic on a piece of paper, so this project can hardly be called a digital humanities project. Nevertheless, one should never give up on a brilliant idea such as this, but should instead turn it into something more modern, useful, and technologically advanced.

Shannon Mattern, in her paper “Evaluating Multimodal Work, Revisited,” emphasized the importance of “a strong thesis or argument at the core of the work,” which this infographic obviously lacks. Transforming a dull poster that merely retells a story into a vivid digital humanities project requires a strong motivation to make a point. In this case, the revisor should reevaluate the essential ideas Fitzgerald tried to convey, such as Daisy’s vanity and Gatsby’s unconditional affection. What the revisor, as a reader, thinks of these (are they in vain? valorous? pathetic?) should be incorporated into the project, and the details of the novel that embody the point should become its main theme. The revisor’s motivation plays a crucial part because it determines what technical effort should be made, and why, to finish the project; it differentiates a thoughtful project from a directionless “cool data set” that cannot be interpreted.

There are many ways to transform this simple infographic into something that uses more digital mediums. Images are essentially the only medium the creator uses. But audio, code, and other types of technology could’ve helped make this infographic livelier. After the revisor settles on the point of the project, the structure and technical details need to be filled in.

Here, one must first decide what affordances will be utilized – whether it is going to be a visual computer interface the reader clicks on, or an immersive environment that activates the reader’s other senses (auditory, olfactory, tactile, etc.). Of course, technical availability limits what a project can do; since design and technique are concept/content driven, as aforementioned, the revisor must consider whether switching from one affordance to another will affect the reader’s understanding of the gist and motivation of the project. For example, an easy revision of the project could be a programmed interface wherein the main body of the infographic remains the same but extra function buttons are added. The reader could click on different scenarios throughout the novel, and a clip of the movie would be replayed or a segment of the novel reread for them. It could also be made interactive, with the reader asking the characters questions about the novel and the characters responding according to the content of the story. Or the book could simply have been brought to life through the infographic itself: the creator could keep the temporal and spatial elements he incorporated and add more movement through programming and audio.

More types of data could’ve been extracted from other sources to create a more credible project, and more technology and design would’ve most definitely helped the infographic fit Dr. Mattern’s criteria for a multimodal project. By tweaking this infographic with more data and research, along with various mediums, a multidimensional project like this would provide an immersive environment for the audience, granting them a more interactive experience. In essence, scholarship and multimedia should be synthesized in order for the audience to reap more benefits from the medium. By keeping the audience in mind and providing them with a digital resource that could help them better understand The Great Gatsby, the creator would’ve invented a whole new, innovative way to make literary media more digital.

Work Cited:
Mattern, Shannon. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities, Fall 2012. Web. 22 Sept. 2014.
Wilson, Mark. “Infographic: Every Scene in The Great Gatsby.” Fast Company Design, 25 July 2013. Web.
“The Great Gatsby Chart Infographic.” Infographick.com. N.p., 20 July 2013. Web. 22 Sept. 2014.

Partnered Critique -Pooja and David(Edited)

September 21st, 2014 | Posted by Pooja Mehta in Uncategorized

On the Origin of Species: The Preservation of Favoured Traces is a digital humanities project by Ben Fry on the evolution of Darwin’s famous On the Origin of Species. It begins with three introductory paragraphs that, respectively, clarify the purpose of the project, give a couple of examples of significant changes in Darwin’s work across its six editions, and credit the sources, tools, and motivations behind the project. Below these paragraphs lies the main media element: a hyper-minimized copy of Darwin’s text that can be played as a time lapse at two different rates, demonstrating the changes across the six editions by color-coding them.


The core conceptual content of Ben Fry’s project is that Darwin’s seminal work on evolution itself evolved substantially throughout its six editions. When we evaluate this thesis, however, we see that it is not contestable, although it is both defensible and substantive (Galey 1). It fails to be contestable for the simple reason that anyone aware that Darwin’s Origin of Species went through six editions will recognize that it underwent such an evolution. This failure could easily have been remedied in a number of ways. Rather than simply presenting the data about how the book transformed, for example, Fry could have analyzed it. We get a very small dose of analysis in the second introductory paragraph, when he points to the addition of “by the Creator” in the second edition of Darwin’s text and notes that the phrase “survival of the fittest,” inspired by a British philosopher, only appeared in the fifth edition. Continuing this line of thought, it would be a natural extension of Fry’s work to address which changes in Darwin’s text were merely matters of detail and which were significant conceptual changes. Another question Fry could have analyzed is the immediate one any user has after interacting with the project: what happened to section VII in the sixth edition? In the time lapse, it is clear that the entire content of the section is original to the sixth edition, so it is natural to wonder to what extent this change influenced the conceptual core of the Origin of Species. Another weakness of the project is that it really relies on only one source, Dr. John van Wyhe’s The Complete Work of Charles Darwin Online. Additionally, it does not incorporate data from other digital humanities projects, and it suffers from a lack of links or annotations.
The last hyperlink in the third introductory paragraph, which is found in the sentence “More about the project can be found here”, does, however, provide some interesting autobiographical motivation for doing this project. In it, he says that after completing the project, he came to a greater appreciation of Darwin’s original ideas and discovered that they were not in fact stolen from some of his contemporaries. Again, the reasons he came to this conclusion would have been an excellent piece to analyze.
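The change-tracking at the heart of Fry’s visualization – aligning two editions and tagging what the later one introduces – can be approximated with Python’s standard difflib. This is a sketch of the general technique, not Fry’s implementation, and the sample sentences are invented rather than Darwin’s actual text:

```python
import difflib

def tag_changes(old_edition, new_edition, edition_no):
    """Mark each word of the new edition as unchanged (None) or as
    introduced in `edition_no`, loosely mimicking Fry's color coding
    of insertions by edition."""
    old_words = old_edition.split()
    new_words = new_edition.split()
    sm = difflib.SequenceMatcher(a=old_words, b=new_words)
    tagged = []
    for op, _i1, _i2, j1, j2 in sm.get_opcodes():
        for word in new_words[j1:j2]:
            tagged.append((word, None if op == "equal" else edition_no))
    return tagged

first = "natural selection acts by life and death"
second = "natural selection acts solely by accumulating slight variations"
changes = tag_changes(first, second, edition_no=2)
print([w for w, ed in changes if ed == 2])
```

Run edition by edition, this kind of alignment yields exactly the color-coded word stream the project animates; what it does not yield, as argued above, is any judgment about which changes matter conceptually.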

The format and design Fry used for his project are also somewhat of a mixed bag. As a positive, he succeeded in making his thesis “experiential” by color-coding changes based on the edition and by letting the user experience those changes through a time lapse that can run at two different rates. Furthermore, these media elements are not at all gratuitous; they all have explicit connections to the conceptual core of the project. Additionally, the format and design are essentially digital; the time lapse, for example, could not have been done on a piece of paper. These positives, however, come with limitations. Users will find it very inconvenient to identify what the edits actually are. The project does let one zoom in on a couple of lines at a time by scrolling over the text; however, it would be much more usable if it let a user zoom in to individual chapters or paragraphs and see the edits in a more contextualized setting, rather than one line at a time. Changing this one thing could have vastly expanded the project’s audience.

The academic integrity of the project is hurt by the fact that it really had only one source, and it also did not have any collaborators. Fry did succeed, however, in documenting his intentions for the project in the concluding hyperlink: he began it to understand to what extent Darwin stole ideas from his contemporaries. The project is occasionally linked to by other websites, but only to direct audiences to it rather than to use or analyze it in any depth (see here and here). The work does not appear to use expert consultations, and it is not peer reviewed.

After looking through the criteria for a multimedia project and applying the sets of questions given by Shannon Mattern, we would argue that this is not a multimedia project – it is a media project at best, and a glorified e-text at worst. That is not to say it is not a good project. According to Fry, “We often think of scientific ideas, such as Darwin’s theory of evolution, as fixed notions that are accepted as finished. In fact, Darwin’s On the Origin of Species evolved over the course of several editions he wrote,” and this project strives to show the evolution of the idea of evolution. He does a good job of demonstrating this through the use of Processing, but that, along with the original text, is the only medium he used. While the project does repurpose the text by putting all six editions together, letting us see differences a hard copy simply could not reveal, we would argue that Fry left out a lot that could have been done. For example, instead of just showing us the changes, embedding tags that offer explanations or hypotheses for certain changes would make the project much more informative and take advantage of the fact that it has the entire knowledge of the internet available to it. This would also give more credit to the format of the project. As it stands now, while it is cool to watch the text change and grow, printing out the final result would cause no loss of information. If there were tags and other external resources embedded directly into the project, keeping it in a multimedia format would be necessary. But after looking at some of Fry’s other projects, it seems to us that most of his work is done with the intent of being displayed in print. So this project does do what Fry wanted, but it does not qualify as a multimedia project.

Mattern, Shannon C. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities, 1 Sept. 2012. Web. 21 Sept. 2014.

Fry, Ben. “Projects.” Projects | Ben Fry. Ben Fry, n.d. Web. 21 Sept. 2014.

Fry, Ben. “On the Origin of Species: The Preservation of Favoured Traces.” On The Origin of Species. Ben Fry, 2009. Web. 21 Sept. 2014.

Galey, Alan. “How a Prototype Argues.” Literary and Linguistic Computing. Oxford Journals, 27 Oct. 2010. Web. 21 Sept. 2014.

 

Powerful Digital Representation

September 15th, 2014 | Posted by Diego Nogales in Uncategorized

Hey guys, I recently watched a TED talk that I feel is truly relevant to this class and to my post. Hans Rosling is a global health professor who loves data and statistics. His presentation was powerful, explaining changes in global economic development over a 200-year span in a way that, in my opinion, no printed reading could have matched. This kind of visual representation of data and statistics shows how useful computers are, and how interactive digital representations can give you a clearer perspective than a printed report would.

Here is the link, start at TIME 3:00 and just watch the next few minutes.

https://www.youtube.com/watch?v=usdJgEwMinM

 

DN Digital Humanities Blog Post

September 15th, 2014 | Posted by Diego Nogales in Uncategorized

After reading this week’s texts, I can see the augmentation potential in Digital Humanities. I envision a two-step revolution. The first stage was the gathering of text from millions of traditionally printed works. A perfect example was Larry Page’s project to digitize books and use a “crowd-sourced textual correction… program called reCAPTCHA” (Marche). This revolutionary step attracted criticism, and as a relatively new concept, the direction of digital humanities and language-analyzing algorithms was uncertain. A major part of this active debate is whether literature is data. Some critics suggest, “Literature is the opposite of data. The first problem is that literature is terminally incomplete” (Marche). Another perfectly acceptable argument is that “the [text] data are exactly identical; their meanings are completely separate” (Marche). I can agree with these criticisms regarding the limitations of the digitization of text. However, I also think these arguments will become obsolete within the next decade, if not sooner.

Looking at developing projects that use coding algorithms to analyze text, the augmentation of analysis is already present. Through the digital humanities, one is able to grasp patterns in millions of words, or “data,” and learn something from them. One example is the interactive visual representation of the most used words in the State of the Union address for each year, from George Washington to Barack Obama. This effective augmentation of scholarship is exposed not only to the academic community but to the entire general population of the United States. The ability to analyze hundreds of speeches at a macro level within a few minutes simply could not exist without the digitization of text. This tool is just the tip of the iceberg, as the second step of Digital Humanities is just beginning.
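The computation underneath a most-used-words visualization is straightforward to sketch. The snippet below is our illustration, not the Tableau project’s code: the stopword list is a minimal stand-in, and the sample passage paraphrases Roosevelt’s 1941 “Four Freedoms” address:

```python
from collections import Counter
import re

# Minimal stand-in stopword list; real visualizations use much larger ones.
STOPWORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "our", "we"}

def top_words(address, n=3):
    """Return the n most frequent non-stopword terms in one address,
    the basic computation behind a most-used-words chart."""
    words = re.findall(r"[a-z']+", address.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

address_1941 = ("In the future days, we look forward to a world founded upon "
                "four essential human freedoms. Freedom of speech, freedom of "
                "worship, freedom from want, freedom from fear.")
print(top_words(address_1941))
```

Repeating this over two centuries of addresses is what yields the year-by-year picture described above; the work is trivial per speech but only feasible at scale once the texts are digitized.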

This second step will close the gap between raw data and literature with meaning. The use of deep learning techniques through coding algorithms is the direction in which digital humanities is going. Google is spearheading “deep-learning software designed to understand the relationships between words with no human guidance” (Harris). This open-source tool, called word2vec, will push the computational analysis of text to new levels. This development recalls Hayles’s How We Think, because it will only be a matter of time before the distinctions between “machine reading” and human interpretation become unnoticeable (Hayles 29).
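word2vec itself trains a shallow neural network, which is beyond a blog sketch; but the intuition it builds on – words that occur in similar contexts receive similar vectors – can be illustrated with plain co-occurrence counts. This is our toy example of the underlying idea, not Google’s algorithm:

```python
from collections import defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    """Count, for each word, the words that appear within `window`
    positions of it; these counts serve as crude context vectors."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

sents = ["the king rules the realm", "the queen rules the realm",
         "the peasant tills the field"]
v = cooccurrence_vectors(sents)
# "king" and "queen" share contexts, so their vectors end up closer
# than "king" and "peasant".
print(cosine(v["king"], v["queen"]), cosine(v["king"], v["peasant"]))
```

word2vec replaces these raw counts with dense learned embeddings, but the distributional premise is the same.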

 

 

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Harris, Derrick. “We’re on the Cusp of Deep Learning for the Masses. You Can Thank Google Later.” Gigaom. Gigaom, Inc., 16 Aug. 2014. Web. 12 Sept. 2014. <https://gigaom.com/2013/08/16/were-on-the-cusp-of-deep-learning-for-the-masses-you-can-thank-google-later/>.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Hom, Daniel. “State of the Union, in Words.” Business Intelligence and Analytics. Tableau, 12 Feb. 2013. Web. 12 Sept. 2014. <http://www.tableausoftware.com/public/blog/2013/02/state-union-words-1818>.

Marche, Stephen. “Literature Is Not Data: Against Digital Humanities – The Los…” The Los Angeles Review of Books. N.p., 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.

Reading response

September 12th, 2014 | Posted by Cathy Li in Uncategorized

The readings this week are all very interesting. The article “How a Prototype Argues” presents an optimistic perspective on utilizing digital humanities; “Literature Is Not Data” (“interesting” in its particular way) and two brilliantly written objections each have very clear and distinct standpoints; the deep learning article pertains to my research interest (yay); and the TED talk introduces us to how Google searches our words in its ginormous databases of digital texts.

The particularly interesting article mentioned above, Marche’s “Literature Is Not Data,” emphasizes the negative sides of exploiting digital technology. Practices such as distant reading or converting literature to data, whatever that means, supposedly jeopardize our ability to understand literature and distinguish the bad from the good or the worse. This point is easily dismissed for its lack of evidence, and contradicted by the fact that my favorite “Virginia Woolf is no danger on this count” (Selisker). As Syme mentioned, Marche’s negative language also does not offer an objective description of what Google is doing or how people perceive Google Books. Just as a personal experience, none of my professors have stopped me from using online textbooks. In fact, lots of very well written books and manuals have already been put online for free by their authors, such as this and this and countless others. The author of the former also humorously linked to the “dead-tree version” of his book on Amazon. I guess Marche has not realized some of the erroneous assumptions he has made about scholars nowadays. Most academic scholars are not money-grabbers like John Green who live on publishing books; they write books outside of their academic practices, in their leisure hours, so it does not matter to these authors what form their books are published in. Not to mention that lots of ebook-selling websites use DRM to make sure readers have paid for what they read, so no copyright infringement or monetary loss is caused. While academic journals are another story, well written papers such as “Emergence of Limit-periodic Order in Tiling Models” (by my physics TA) and “Where Am I” (by Daniel Dennett) will surely circulate in their academic pools. The assertion that being published online either undermines the judgment of a work’s potential readership or buries it under millions of pieces of rubbish is simply not true.

Another personal experience of mine with digitizing the learning experience comes from the website Mathigon. It’s a website full of AWESOMENESS that made me lament not having been born a little later so I could enjoy learning math through this interface.

Our future generation is going to be so much smarter.

Blog 2: Digital Humanities

September 12th, 2014 | Posted by Norma De Jesus in Uncategorized

From bringing a book to life to utilizing graphs to map out human nature, it is clear that technology can play an essential role in subjects within the humanities. The project of mapping out the character relationships and storyline of The Great Gatsby with a computer shows how digital technology is augmenting scholarship by providing yet another spectrum through which the story can be retold. In other words, by incorporating digital media, the novel begins to evolve into other useful mediums, giving the audience another way to look at and understand the plot. Reality is being augmented in this project through the use of computation to bring the story to life. Another story brought to life by augmenting the way we perceive reality is human emotion. In the article “Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter,” the authors use data and graphs to show how a study of human happiness was conducted through social media, another form of technology that amplifies the human experience. By digitally bringing an identity of humanity to life, it provides us insight into contextual information through other mediums. This is precisely the value the digital places on the humanities. All in all, Hayles’s How We Think prepared me to understand how digital technology plays a vital role in these articles. Had I not been informed by Hayles’s perspective, I wouldn’t have understood that digitizing the humanities could provide other means of understanding a novel, human emotion, and other man-made subjects. Through many different apparatuses, the digital humanities are advancing the way we perceive the world around us. Take Neuromancer, for example: Gibson’s use of cyberspace provides a distorted reality where digital influences prevail throughout the novel. If we were to achieve such a world, where we allow technology to garble our comfortable reality, perhaps a dystopia would arise similar to the one found in Neuromancer. But for now, we must appreciate how technology has provided us with ways to digitally alter our perception of subject matter within the humanities.

Dodds PS, Harris KD, Kloumann IM, Bliss CA, Danforth CM (2011) Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter. PLoS ONE 6(12): e26752. doi:10.1371/journal.pone.0026752

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Wilson, Mark. (2013, July 25). Infographic: "Every Scene in The Great Gatsby."

Blog 2: DH vs H

September 12th, 2014 | Posted by Pooja Mehta in Uncategorized - (Comments Off on Blog 2: DH vs H)

From my perception of the Hayles reading, I would say that the digital humanities differ from the traditional humanities in that they truly allow you to interact with the text and encourage different paths to its main purpose. The projects we were presented with are prime examples of the digital humanities, with each one presenting new ways to interact with the media and giving us information beyond what the media itself affords. Take, for example, the video that presented how brushstrokes can help us distinguish authentic Van Gogh paintings from copies. With just the physical painting, we can see—well, the painting. But by digitizing it, converting it to a black-and-white image, and using computer algorithms to analyze the brushstroke patterns, we gather two pieces of information that the physical painting could never have told us: what exactly Van Gogh's distinctive brush style is, and whether the painting, judged by its brush style against Van Gogh's, is a genuine work or a worthless copy.

Digitizing texts gives them dozens of affordances that a hard copy simply cannot offer. For example, in this TED talk, we see analysts reveal historical trends and social culture through the word counts of texts written in the past. This is simply not possible with hard copies of books, because the time and effort it would take to do a manual word count on enough literature to draw any conclusions would vastly outweigh the benefit of discovering that information. By adding the 'digital' to the humanities, we now get the opportunity to look through hundreds of thousands of pieces of information and extract what we need in a matter of minutes. This incredible benefit is, in fact, the foundation for Google's project to digitize all of the literature available to us, a benefit that Marche dismisses as "a story of several, related intellectual failures." Marche argues that digitizing the humanities removes the "humanity" from the work altogether, but I disagree. I, like many others who have written in response to him, would like to believe that by giving us this ease in analyzing texts, we are now more likely to engage with a piece of literature, because we can go directly to the part we want rather than having to pore over the entire book and sift through endless pages of information that is not useful for the task at hand.
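The word-frequency trends behind that TED talk can be illustrated with a minimal sketch. This is not the Google Books pipeline itself, just a toy version of the idea: represent a digitized corpus as (year, text) pairs and count how often target words appear in each year. The corpus and word choices here are invented for illustration.

```python
from collections import Counter
import re

def ngram_trends(corpus, words):
    """Count how often each target word appears per year.

    corpus: list of (year, text) pairs -- a stand-in for a digitized library.
    words: the words whose historical frequency we want to trace.
    """
    trends = {w: Counter() for w in words}
    for year, text in corpus:
        # Lowercase and split into word tokens before counting.
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(tokens)
        for w in words:
            trends[w][year] += counts[w]
    return trends

# Toy corpus: two "books" from different years.
corpus = [
    (1900, "The telegraph carried the news across the wire."),
    (2000, "The internet carried the news across the wire and the web."),
]
print(ngram_trends(corpus, ["telegraph", "internet"]))
```

At the scale of millions of books, the same counting idea (run over n-grams rather than single words) is what makes cultural trends visible in minutes instead of lifetimes.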

Aiden, E., and Michel, J. (2011, September 20). "What We Learned from 5 Million Books." http://www.youtube.com/watch?v=5l4cA8zSreQ. Video.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Eck, Allison. "How Forensic Linguistics Revealed J. K. Rowling's Secret." PBS. PBS, 19 July 2013. Web. 12 Sept. 2014.

Blog 2: DH Projects

September 12th, 2014 | Posted by Greg Lyons in Uncategorized - (Comments Off on Blog 2: DH Projects)

New Digital Humanities projects are constantly serving to augment scholarship in new ways.  Most DH projects share a common thread of extracting and amassing data from collections of texts (literary works, scholarly works, web data, etc.); however, the true augmenting lies in the wide range of research that is done after the data is collected.  This data can provide a model for examining more nebulous phenomena, such as emotion.  In a study titled "Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter," Peter Sheridan Dodds and his fellow researchers studied individual tweets based on the frequency and significance of certain words to gain insight into happiness and emotion.  The study operates under the principle that "the raw word content of tweets does appear to reflect people's current circumstances" (Dodds).  In this sense, Twitter and other forms of social media serve as additional embodied human communication tools – rather than being separate entities from the humans who use them, these Twitter accounts are an auxiliary part of the humans themselves.  With progress being made in DH, it is possible for humans to be identified by analysis of their auxiliary communication tools.  In her article for National Geographic, Virginia Hughes describes how scholars were able to examine literary data to determine that the real identity of pseudonymous writer Robert Galbraith was in fact famed author J.K. Rowling.  The idea that simply examining words and word patterns could point to a conclusion of "very characteristically Rowling" (Hughes) certainly finds itself somewhere on the "awesome-creepy" scale.
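The word-scoring idea behind hedonometrics can be sketched in a few lines. This is a simplified illustration, not the researchers' actual pipeline: the study averages crowd-sourced happiness ratings (on a 1–9 scale) over the rated words in a text, and the tiny lexicon below is invented for the example, whereas the real labMT word list contains roughly 10,000 rated words.

```python
def tweet_happiness(tweet, lexicon):
    """Average the happiness ratings of the rated words in a tweet.

    lexicon: word -> happiness score (1-9), labMT-style.
    Words missing from the lexicon are simply ignored.
    """
    scores = [lexicon[w] for w in tweet.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else None

# Hypothetical mini-lexicon for illustration only.
lexicon = {"happy": 8.3, "love": 8.4, "traffic": 3.6, "rain": 5.0}
print(tweet_happiness("stuck in traffic again", lexicon))   # low score
print(tweet_happiness("love this happy morning", lexicon))  # high score
```

Averaged over millions of tweets per day, even this crude per-tweet score yields the large-scale temporal patterns the study reports.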

In her book How We Think, Hayles examines what can make this sort of literary data-extraction unsettling.  She discusses the differences between human interpretation of literary material and "machine reading," and notes that human egocentricity may lead to the principle that "human interpretation should be primary in analyzing how events originate and develop" (Hayles 29).  Traditional humanities scholars often rush to discredit the digital humanities and the techniques of machine reading or "distant reading," but in doing so they lose sight of the fact that DH seeks to augment existing scholarship rather than replace it.  These scholars remind me of some of the "console cowboys" from Neuromancer, who see simstim as an inferior tool compared to jacking in to cyberspace.  Eventually, Case sees that simstim can be an extremely powerful tool serving different purposes (Gibson).  DH represents a model of scholarship that uses as many tools as possible to explore academic inquiries.

 

Citations:

Dodds PS, Harris KD, Kloumann IM, Bliss CA, Danforth CM (2011) Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter. PLoS ONE 6(12): e26752. doi:10.1371/journal.pone.0026752

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Hughes, Virginia. “How Forensic Linguistics Outed J.K. Rowling (Not to Mention James Madison, Barack Obama, and the Rest of Us).” Phenomena. National Geographic, 19 July 2013. Web. 11 Sept. 2014.