Technoscience / Ecomateriality / Literature

Tag: augmentation

Hyper Use of Tech DN

This week we explored how AR devices may change daily activities in the future. I was personally fascinated by the short film Sight, by Eran May-raz and Daniel Lazo. The most troubling issue presented in the film was the possibility of people hacking or taking control of someone else’s brain. This ending was daunting, and it makes you wonder how the effects of hacking and stolen data could evolve into consequences of that nature if we continue to give companies more and more of our personal information.

Aside from this message, what struck me most was the constant level of interaction with the web and cloud that the future holds. On a larger scale, I am worried about how my generation is so active on social networks and online media, and just always connected. This is generally seen as a good thing, because it means that data is much more accessible to us and that we face fewer limitations in keeping ourselves informed. However, I also think this high level of dynamic interaction has shortened attention spans and created the need to constantly be doing something. I find myself always wanting to multitask or fidget with my phone whenever I have free time. Often, the technology we have at our fingertips becomes exhausting to me. I am always refreshing different apps; it used to be just Facebook and email, but now you have to be on Yik Yak, Facebook, Instagram, GroupMe, and Snapchat to really “stay connected”. The thought of Google Glass joining television, phones, and computers as part of our addictive daily media use is overwhelming. As we discussed in class, one example of this hyper use of technology is the possibility of advertisements appearing in your Google Glass. This suggests that you will be constantly flooded with online information without any real escape.

The more I read articles on new technologies, the more aware I become of the possible downsides hidden behind the great innovation in them.

DN Digital Humanities Blog Post

After reading this week’s texts, I can see the augmentation potential in the Digital Humanities. I envision a two-step revolution. The first stage was the gathering of text from millions of traditionally printed works. A perfect example of this was Larry Page’s project to digitize books and use a “crowd-sourced textual correction… program called reCAPTCHA” (Marche). This revolutionary step attracted criticism, and as a relatively new concept, the direction of digital humanities and language-analyzing algorithms was uncertain. A major part of this active debate is whether literature is data. Some critics suggest, “Literature is the opposite of data. The first problem is that literature is terminally incomplete” (Marche). Another reasonable argument is that “the [text] data are exactly identical; their meanings are completely separate” (Marche). I can agree with these criticisms regarding the limitations of digitizing text. However, I also think that these arguments will become obsolete within the next decade, if not sooner.

Looking at developing projects that use coding algorithms to analyze text, the augmentation of analysis is already apparent. Through the digital humanities, one is able to grasp patterns in millions of words of “data” and learn something from them. One example is the interactive visual representation of the most-used words in each year’s State of the Union address, from George Washington to Barack Obama (Hom). This effective augmentation of scholarship is exposed not only to the academic community but to the entire general population of the United States. The ability to analyze hundreds of speeches at a macro level within a few minutes simply could not exist without the digitization of text. This tool is just the tip of the iceberg, as the second step of the Digital Humanities is only beginning.
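To make this concrete, the kind of macro-level word counting behind such a visualization can be sketched in a few lines of Python. This is a minimal illustration, not the Tableau project’s actual method; the “sotu_speeches” directory name and one-file-per-speech layout are assumptions:

```python
# Count the most frequent words across a directory of plain-text
# speeches (e.g., one .txt file per State of the Union address).
# The "sotu_speeches" directory and file layout are assumptions.
import re
from collections import Counter
from pathlib import Path

# A tiny stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "of", "and", "to", "in", "a", "that", "we", "our", "is"}

def top_words(directory, n=20):
    counts = Counter()
    for path in Path(directory).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        words = re.findall(r"[a-z']+", text)
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    for word, count in top_words("sotu_speeches"):
        print(f"{word}: {count}")
```

Even this toy version makes the scale argument clear: once the text is machine-readable, counting across two centuries of speeches takes seconds.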

This second step will close the gap between raw data and literature with meaning. The use of deep-learning techniques in coding algorithms is the direction in which the digital humanities are headed. Google is spearheading “deep-learning software designed to understand the relationships between words with no human guidance” (Harris). This open-source tool, called word2vec, will push the computational analysis of text to new levels. This development recalls Hayles’s How We Think, because it will only be a matter of time before the distinction between “machine reading” and human interpretation becomes unnoticeable (Hayles 29).
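As a small illustration of what word2vec does, here is a sketch using the gensim library’s Python reimplementation (gensim 4.x API; Google’s original release is a standalone C tool, and the three-sentence corpus below is purely illustrative):

```python
# A toy word2vec run using gensim's Python reimplementation
# (gensim 4.x API). Google's original word2vec is a C tool;
# the three-sentence corpus below is purely illustrative.
from gensim.models import Word2Vec

sentences = [
    ["literature", "is", "not", "data"],
    ["machine", "reading", "differs", "from", "human", "interpretation"],
    ["algorithms", "learn", "relationships", "between", "words"],
]

# Train a tiny model; real corpora need millions of sentences.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

# The model maps each word to a 50-dimensional vector and can
# report which words ended up with similar vectors.
vector = model.wv["literature"]
print(model.wv.most_similar("words", topn=3))
```

With a real corpus, the learned vectors begin to encode the relationships between words that Harris describes, with no human labeling involved.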

Gibson, William. Neuromancer. New York: Ace, 1984. Print.

Harris, Derrick. “We’re on the Cusp of Deep Learning for the Masses. You Can Thank Google Later.” Gigaom. Gigaom, Inc., 16 Aug. 2013. Web. 12 Sept. 2014. <https://gigaom.com/2013/08/16/were-on-the-cusp-of-deep-learning-for-the-masses-you-can-thank-google-later/>.

Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago P, 2012. Print.

Hom, Daniel. “State of the Union, in Words.” Business Intelligence and Analytics. Tableau, 12 Feb. 2013. Web. 12 Sept. 2014. <http://www.tableausoftware.com/public/blog/2013/02/state-union-words-1818>.

Marche, Stephen. “Literature Is Not Data: Against Digital Humanities.” The Los Angeles Review of Books, 28 Oct. 2012. Web. 15 Sept. 2014. <http://lareviewofbooks.org/essay/literature-is-not-data-against-digital-humanities/>.
