Lit 80, Fall 2013
Header

Author Archives: Mithun Shetty

Is Literature Data?

October 4th, 2013 | Posted by Mithun Shetty in Uncategorized - (0 Comments)

Generally, whether or not literature is data depends on one's definition of data. If data is simply information that can be quantified or analyzed in some way, then literature absolutely fits that definition. Data is not just scientific observations, mathematical figures, or sets of graphs; media can be data as well. Music, literature, even paintings: one can perform all sorts of analyses on these works to generate data, both quantitative and qualitative. Marche's article refers to the analysis of literature as data as "distant reading." While he argues that this approach to reading ruins the experience as we know it, I believe it is instead a different, valuable sub-discipline of literary study.

Distant reading, or macroanalysis, allows one to form a multidimensional understanding of a work. Its context in a larger literary ecosystem (period in time, cultural significance, etc.) can be understood by treating the book on a more holistic level. One can study writing styles, forms, and conventions by looking at literature objectively; temporarily setting aside subjective plot or thematic analysis and examining the mechanical details of literature opens it up to an entirely different type of scholarship, namely the digital humanities. This additional perspective on the same work should be welcomed and valued.

The projects studied in the course improve the quality of literary scholarship; they are tools we can use to gain a perspective beyond the scope of unassisted brainpower alone. Especially with larger volumes, distant-reading tools can almost instantly compile word patterns, trends, and more, and present them in a way that makes the information easy to digest. In this sense, these projects augment reality. They give us "superpowers" of analysis, granting instant access to an entire history of literature and academia that would otherwise be impossible to survey. The most obvious value in using digital tools to analyze literature as data, then, is that they allow us to handle large volumes of information far more easily and efficiently.
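The kind of pattern-compilation described above can be sketched in a few lines of Python. This is only a toy illustration of what distant-reading tools do at scale; the sample text and the `word_frequencies` helper are my own, not drawn from any project discussed here.

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """A toy 'distant reading' pass: tokenize a text and count its
    most common words, the kind of tally a macroanalysis tool
    computes across whole corpora rather than a single sentence."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = ("It was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness")
print(word_frequencies(sample))
```

Run over thousands of novels instead of one sentence, tallies like this are what surface the period-wide trends in style and vocabulary that no unassisted reader could compile.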

Marche, Stephen. “Literature is not Data: Against Digital Humanities.” Los Angeles Review of Books. 28 Oct 2012: n. page. Web. 2 Oct. 2013.

Game(r) Critique

September 30th, 2013 | Posted by Mithun Shetty in Uncategorized - (0 Comments)

I have played video games for most of my life. My first device was the Game Boy Color, on which I religiously played Pokémon Yellow, Red, and Blue, among an assortment of others. My first console, the Nintendo 64, was also played into the ground; the 3D environments of The Legend of Zelda and Mario 64 brought me to a place that a 2D handheld screen could not. Regardless of what a game looked like, though, I have always been able to immerse myself in both its environment and its storyline. Rather than a mere diversion, most games became an alternate reality for me; I handled the tasks and challenges of a game seriously and sometimes forgot I was playing one at all. This early exposure to video games was definitely formative for me as an individual.

Games have profound academic value and should be studied; they are multidimensional works that matter to the natural sciences and humanities alike. A game is absolutely a work of art, as is evident in Journey for the PlayStation 3. Simply experiencing the game is breathtaking: serene landscapes, minimalist ambient sound, and carefully detailed character designs combine into a beautiful audiovisual whole. One could study a game environment much as one might study a painting in a museum; more often than not, game environments are designed with a specific purpose (as opposed to simply repeating a series of textures to fill empty space). Perhaps an environment seeks to disorient the player with bright colors or strobe-like visuals. These types of games could be classified as abstract art:

LA Game Space Experimental Game Pack 01: “DEPTH”

As computer hardware continues to advance, so too do the graphics of the games being produced, coming ever closer to replicating real-world images. Games are becoming remarkably lifelike through texture detail, color, shading, and a slew of other computational tricks. In a single decade, one series managed to completely overhaul its image while maintaining its gameplay:

Halo: Combat Evolved Anniversary Edition (2011) vs. Halo: Combat Evolved (2001)

These types of games are prime examples of realism in art and can be classified into their own genre.

Further, games can also be studied from a psychological or neurological perspective. Exposure to video games, especially at a younger age, changes how individuals choose to handle real-life situations (BBC). Depending on its difficulty, a game can make an individual more detail-oriented or thorough in real-life tasks. Studies have also shown a correlation between playing video games and hand-eye coordination (CNN), and the effects of game environments on the visual cortex can reveal patterns in how humans direct their attention to external stimuli (USC). The level of engagement individual games require would be interesting to study as well. In "Interactivity or Interpassivity," Laetitia Wilson distinguishes between interactive and interpassive experiences of digital play. An interactive game (such as Portal) requires a serious level of engagement from the player: critical thinking, spatial reasoning, an understanding of changing physics, and much more. Interpassive games, such as Flow, are far less taxing and serve a different purpose; they are "calming" games that feature soothing colors, music, or simplified gameplay. Interactive games have players creating their own emotional states, whereas interpassive ones transfer emotional states to the player passively.

Games are also quite useful for giving concrete form to abstract philosophical concepts. Many philosophical or ethical dilemmas are difficult to understand on paper; video games allow players to understand a dilemma more profoundly by "living" the experience. For example, The Company of Myself requires the player to kill a character assumed to be his girlfriend in order to progress; the player can choose to continue the story and kill her, or cease playing. Another example is found in the BioShock series, which takes place in a dystopian society under the sea. Without going into too much detail: the player is oftentimes presented with the choice to harvest a parasite from certain characters (which yields more monetary gain but kills them) or to rescue those characters (which saves them but yields less gain). The effects of this decision are manifested in the ending of the game, which changes according to how many characters were killed or saved.

Bioshock 2: Harvest or Adopt (Rescue) Little Sisters?

Bogost says that a medium is characterized by the variety of uses it has: "we can understand the relevance of a medium by looking at the variety of things it does" (Bogost 3). The examples above already qualify games as a medium, but they merely scratch the surface of what games have to offer. It is easy to see games as a medium when they are compared to classical examples of media. Books, for instance, contain information and, in some cases, rely on readers' imaginations to transport them to a different reality. Video games do much the same: they contain textual and graphical information, and they too transport the player to an alternate reality. Playing certain games offers an escape from the real world; the avatar or perspective in the game is a conduit through which the player experiences a wider variety of physical skills and abilities. This alternate world lets us use our imaginations and experience sensations beyond our physical capabilities. Cloud gives players the gift of flight; The Company of Myself and Braid give players the ability to control time. These functions provide something books cannot, conveying information across a wider range of forms. People likely resist calling games a medium because of the subject matter and general stigma of many commercial titles (violence, sexual themes, time-wasting, etc.). However, games have serious potential as tools for transmitting information and should be utilized as such, in addition to their recreational purposes.

Works Cited

Bogost, Ian. How To Do Things With Video Games. Minneapolis: University of Minnesota, 2011.

Fleming, Nic. “Why Video Games May Be Good for You.” BBC.com. BBC, 26 Aug. 2013. Web. Nov. 2013. <http://www.bbc.com/future/story/20130826-can-video-games-be-good-for-you>.

Roach, John. “Video Games Boost Visual Skills, Study Finds.” National Geographic. National Geographic Society, 28 May 2003. Web. Nov. 2013. <http://news.nationalgeographic.com/news/2003/05/0528_030528_videogames.html>.

Steinberg, Scott. “How Video Games Can Make You Smarter.” CNN. Cable News Network, 31 Jan. 2011. Web. Nov. 2013. <http://www.cnn.com/2011/TECH/gaming.gadgets/01/31/video.games.smarter.steinberg/>.

Tortell, Rebecca, and Jacquelyn F. Morie. "Videogame Play and the Effectiveness of Virtual Environments for Training." USC Institute for Creative Technologies, 2006. Web. <http://ict.usc.edu/pubs/Videogame%20play%20and%20the%20effectiveness%20of%20virtual%20environments%20for%20training.pdf>.

Wilson, Laetitia. "Interactivity or Interpassivity: A Question of Agency in Digital Play." Fine Art Forum 17.8 (2009).

Collaboratively written by Mithun Shetty, Kim Arena, and Sheel Patel

The efficacy of a digital humanities project can be vastly improved if its delivery and interface are thoughtfully designed and skillfully executed. The following two projects, the "Speech Accent Archive" and "10 PRINT ebooks," both utilize non-traditional forms of displaying content that alter the experience of their internet audience. Both projects will be critically assessed according to the guidelines described in Shannon Mattern's "Evaluating Multimodal Work, Revisited" and Brian Croxall and Ryan Cordell's "Collaborative Digital Humanities Project."

The Speech Accent Archive is an online collection of audio recordings in which speakers from a variety of regional and ethnic backgrounds read a fixed set of sentences. Recordings are submitted by the public and reviewed by the project administrators before being added to the site. The purpose of this media element is to expose users to the phonetic particularities of different global accents. The site is useful because it provides insight into the various factors that affect the way people talk and how these factors interconnect, from speakers' ethnic backgrounds to their proximity to other countries. It proves to be a useful tool for actors, academics, speech-recognition software designers, anyone with a general appreciation for the cultural connections in languages, and anyone studying linguistics, phonetics, or global accents.

The website's layout is ideal for accomplishing this purpose: users can browse the archive by language/speakers, by atlas/regions, or through a native phonetic inventory. This lets users explore accents on a regional basis, making it easier to see similarities between local dialects. Each audio recording is accompanied by a phonetic transcription showing the breakdown of consonant and vowel changes as well as the syllable structure of the passage. Each submission also includes personal data about the speaker: background, ethnicity, native language, age, and experience with English. The site's comprehensive search feature offers many demographic options, ranging from ethnic and personal background to speaking and linguistic-generalization data. This level of detail is an invaluable resource for those who study cultural anthropology, phonetics, languages, and other areas, as it allows for specific manipulation of the data presented. The quality of user contributions is also consistently high; the recordings are very easy to follow.

However, the project has its limitations as well. The passage contributors read is in English, regardless of the speaker's fluency or familiarity with the language, so pronunciations of the passage may not reflect the natural sound of the languages represented. Further, because the audio samples are user-contributed, it is hard to maintain a consistent level of English fluency among contributors. Another limitation is that many sections of the site have few or no recordings; this is merely due to a lack of user contributions, could be resolved by promoting the site, and will improve as the ongoing project's database grows. The site also lacks any sort of comparison feature. The accents are stored on their own separate web pages and load via individual QuickTime audio players; consequently, it is very difficult to perform a side-by-side comparison of recordings, and the project does not really make any conclusions or arguments with its data. This could be improved by allowing users to stream two files at the same time, or by enabling statistical comparison of the demographic information accompanying each recording. It would also be interesting if an algorithm or visualization could recognize slight differences between voices and arrange them by similarity alongside the demographic data that accompanies each sample. Further, the project could establish a tree-like comparison of regions and accents, visually representing the divergences and connections between where people live or have lived and the way they speak.

With these additions, it would be easier to understand, aurally, the effects of background and ethnicity on speech accents. Still, the website shines despite these setbacks: it offers a tremendous amount of data in an organized manner, presenting many opportunities for further research and applications of the information by those who study cultural anthropology, phonetics, languages, and much more.

The book 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 is a collaboratively written work that describes the history and deeper meaning behind the eponymous maze-building program created for the Commodore 64. The book can be seen as a way to look at code not just as a functional line of characters, but as a medium for holding culture. It uses the program as a jumping-off point to discuss computer programming in modern culture and the randomness and unpredictability shared by programming and art, exploring how computation and digital media have transformed culture. Alongside the book, one of the authors, Mark Sample, created a twitterbot that uses a Markov chain algorithm to produce and send tweets: @10PRINT_ebooks takes the probability that one word will follow another, scans the entire book, and tweets the random phrases it generates.
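The one-line BASIC program the book takes its title from draws an endless maze by printing one of two diagonal characters chosen at random. A rough Python approximation, substituting Unicode diagonals for the PETSCII characters 205 and 206, might look like this (the function name and parameters are my own):

```python
import random

def ten_print(width=32, rows=8):
    """Approximate the C64 one-liner 10 PRINT CHR$(205.5+RND(1)); : GOTO 10.
    On the C64, RND(1) yields a value in [0, 1), so CHR$ receives the
    PETSCII diagonal 205 or 206 after truncation; here those diagonals
    are swapped for the Unicode characters U+2571 and U+2572."""
    return "\n".join(
        "".join(random.choice("╱╲") for _ in range(width))
        for _ in range(rows)
    )

print(ten_print())
```

Because adjacent diagonals happen to connect, the random stream reads as a coherent maze, which is exactly the accident of form the book lingers over.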

The clear goal of this book is to demonstrate that reading and writing code does not always have to be viewed in the two-dimensional, functional sense many people see it in. The authors argue that code can be read and analyzed just like a book. They do so by delving into how the 10 PRINT program was created, its history, its ports to other languages, its assumptions, and why all of that matters. They also discuss the randomness of both computing and art, using the program as a lens through which to view these broader topics. The purpose of the book is stated very clearly by co-author Mark Sample: "Undertaking a close study of 10 PRINT as a cultural artifact can be as fruitful as close readings of other telling cultural artifacts have been."

The implementation and format of the book and twitterbot are a little difficult to understand and do not necessarily help establish the project's goals, especially in the twitterbot's case. The book is co-authored by ten professors, literary, cultural, and media theorists whose main research areas are gaming, platform studies, and code studies, which lends a broad range of perspectives to the topics. The collaboration also demonstrates that code, just like a book, can be co-authored and can incorporate the views and ideas of more than one person; this reinforces the parallels the authors draw between coding and literary practice. Code is not one-dimensional: it can incorporate the creative and artistic ideas of many people and take many different forms that often serve very similar functions in the end. In this sense, the co-authoring of the book inherently showcases its main message about how code should be viewed. The book also traces the history of this BASIC program and how it coincided with the cultural changes brought by the advent of the personal computer. Sample's twitterbot, on the other hand, leaves the user more often confused than educated, but that may be the point. Using its algorithm, it spits out random, syntactically plausible sentences that sometimes mean absolutely nothing, but occasionally it creates coherent thoughts from the words in the book. Those occasional coherent sentences may themselves be a demonstration of code: within jumbles of code (or, in the book's case, words), meaning can be pulled out when things fall into the correct syntax. The form fits, too. The randomness of the twitterbot lets people see that there can be substance to code even by coincidence; if the project instead had people point out specific parts of the code, we would be limited to their interpretations. Having a machine randomly spew out phrases allows for many different interpretations.
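The bot's underlying technique, described above as taking "the probability that one word will follow another," can be sketched as a simple word-level Markov chain. This is a generic illustration of the method, not Sample's actual code; the corpus, function names, and parameters are invented for the example.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it;
    repeated followers are kept, so frequent pairs are proportionally
    more likely to be chosen during generation."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=None):
    """Walk the chain from a start word, picking each next word at
    random, roughly the way a @10PRINT_ebooks-style bot assembles a
    tweet. Stops early if a word has no recorded followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the maze is a program and the program is a maze "
          "and the maze is culture")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because each step only looks one word back, the output is locally plausible but globally aimless, which is precisely why the bot's tweets hover between nonsense and accidental coherence.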

This tool, although abstractly useful, could be implemented much better. If the twitterbot's output is only occasionally inspired, the website would be more effective with some sort of ranking system for the most interesting or coincidental tweets. With a sorting mechanism of that kind, the project might be more convincing in arguing that code can carry a creative license or brand.

Regardless of the limitations both projects show, it is abundantly clear that their media elements vastly improve their ability to illustrate their ideas and accomplish their purposes; it would be practically impossible to present these projects with text alone. The Speech Accent Archive's audio recordings give concrete examples of an entirely aural concept, which is far more useful than simply listing phonetic transcriptions. The 10 PRINT ebooks twitterbot, while difficult to understand, is an interesting concept that also generates concrete examples of what the project is trying to illustrate: that code is multidimensional in its structure and can be interpreted and analyzed like a complex literary work.

 

Sources:

@10PRINT_ebooks. "10 PRINT ebooks." Twitter.com. Web. <https://twitter.com/10PRINT_ebooks>.

Baudoin, Patsy; Bell, John; Bogost, Ian; Douglass, Jeremy; Marino, Mark C.; Mateas, Michael; Montfort, Nick; Reas, Casey; Sample, Mark; Vawter, Noah. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. November 2012. Web. http://10print.org/

Cordell, Ryan, and Brian Croxall. “Technologies of Text (S12).” Technologies of Text S12. N.p., n.d. Web. 15 Sept. 2013. http://ryan.cordells.us/s12tot/assignments/

Mattern, Shannon C. “Evaluating Multimodal Work, Revisited.” » Journal of Digital Humanities. N.p., 28 Aug. 2012. Web. 15 Sept. 2013. http://journalofdigitalhumanities.org/1-4/evaluating-multimodal-work-revisited-by-shannon-mattern/

Weinberger, Steven H. The Speech Accent Archive. George Mason University, 2012. Web. <http://accent.gmu.edu/about.php>.

Neuromancer: Novel Response

September 6th, 2013 | Posted by Mithun Shetty in Uncategorized - (0 Comments)

Aside from its bizarrely accurate foresight, Neuromancer is an interesting novel because of the questions it raises about the relationship between humans and technology. In a matter of decades, our society has transformed from a largely disconnected, isolated set of communities into a thoroughly interconnected digital network; in one way or another, we are constantly transmitting or receiving data in our daily lives. From the infrastructure of our cities (traffic, navigation, consumerism) to the array of electronic devices we keep ourselves updated on, urbanized areas are almost completely dependent on technology. Moreover, this technology is tending toward ever deeper integration with our biological systems; new inventions are changing the way we perceive information from our environments. QR codes, Google Glass, and other marvels of augmented reality are steps toward the complete unification of man and machine.

This is an idea that Neuromancer dwells on for the majority of the novel. Where exactly is the line between human and technology? As we become more dependent on our devices to orient ourselves in changing environments, will we lose the characteristics we currently consider make us "human"? The characters in Gibson's novel all feature some sort of technological miracle: cured drug addictions, veritable superpowers (blades beneath the fingernails, cybernetic implants), even immortality. Gibson also introduces characters who can willfully suspend their consciousness or jack into an alternate form of reality, "the matrix." Such examples represent the extremes of technological integration, and it is telling that Gibson chooses to portray the characters' reliance on technology as something akin to drug dependence.

The reader may find it very difficult to classify certain characters as human or machine. Most notably, the Dixie Flatline, who is deceased but has his mind and consciousness stored on a ROM, is able to interact with Case, Molly, and the novel's other characters. Would we consider him human? Though he is not the physical manifestation of McCoy Pauley, we can still access his mind. Perhaps he is not human because he cannot create or learn new thoughts; that distinction could be a vital part of the definition of a human being. Likewise, characters with serious prosthetics (Molly, for example) lead readers to wonder how many technological additions or replacements it would take to cross the threshold from man to machine. When does a character like Molly cease being human and become a superintelligent bio-computer?

We probably will not see technological advances as drastic as those in Gibson's novel in the foreseeable future. These examples nevertheless illustrate something happening in our present lives: we are co-evolving with the technology we produce at an alarming rate, which brings real benefits, such as increased perception and function, alongside real risks of total dependence on technology and the loss of a separate, human identity.