Lit 80, Fall 2013

The Future of the Past: A Critique

The digital humanities project “The Future of the Past” is a unique application of digital humanities methods. When scrutinized under the figurative lens of Shannon Mattern’s criteria for evaluating multimodal work, one can understand why it was awarded “Best use of digital humanities project for fun” by the expert consortium administering the annual DH Awards: it is fun, but it is perhaps not entirely effective. To simplify Mattern’s criteria, we made our own rubric to analyze the project.

Screenshot of the Rubric we created to simplify the Mattern Criteria

For readers unfamiliar with the project, a brief overview can be found on the website by clicking “the story…” on the right side of the homepage. This explains how the author, Tim Sherratt, turned 10,000 newspaper articles into a digital humanities project. His aim was to archive every Australian news article from the 19th and 20th centuries that contained the phrase “the future” and to create a site for exploring how the future was perceived in the past. Sherratt’s website includes evidence of research, links to his sources, and links to and from his site, all of which reinforce his underlying thesis in a cohesive manner: that throughout the past, the future has been discussed in different contexts. The website links to the newspapers he used, demonstrating citation and academic integrity – imperative components of Mattern’s ideal digital humanities model.

He extracted every word from his archive of collected articles containing “the future” and made a database. He then built an interactive word-based interface so that whenever a reader accesses the site, they find a compilation of words from the articles that act as hyperlinks.

Screenshot of the Homepage of the DHP

It is through this organization that he was able to take the newspaper articles and recontextualize them into a format useful for his digital humanities project. One particularly impressive aspect of Sherratt’s project was how much feedback he received – and his constructive dialogue with users – throughout the developmental stages of his site. Clicking “The Story” allows the site’s viewers to follow Sherratt’s creative process. Additionally, he tweeted his progress in real time, and people tweeted questions about the project, to which he appeared happy to reply. These tweets act as a form of pseudo-peer review. A series of lectures explaining the project fostered additional public understanding. This would not necessarily be considered collaboration, as he developed and continues to develop the project on his own, but the public input serves as a form of joint effort.

Exploring the website and using the tools he provides makes it easy to deduce that Sherratt has a very clear vision and a comprehensive understanding of the mechanisms behind his project. However, the format of the website is the project’s biggest downfall: the tool is simple enough to use (one just points and clicks), but that simplicity is not balanced by anything new beyond the format of the site. Despite the simplicity of pointing and clicking, someone arriving at the page on their own would find it difficult to grasp what the site is attempting to achieve. Although it does not initially appear accessible, reading “the story…” provides some clarity. Additionally, a visitor hoping to learn how others viewed the future would acquire knowledge only at random rather than along a particular route. A direct search for a particular year, event, or phrase is not possible, which makes the site less useful than one would hope. Perhaps we are not entirely grasping what he is trying to achieve, which would make our critique a little unfair. If his goal is a database for random knowledge acquisition (as winning the Fun category perhaps suggests), then he has effectively created one. From our perspective, however, this does not appear to be the most conducive format for the project, because one cannot purposefully acquire knowledge.

Despite our inability to determine the exact purpose of the site, we will proceed under the impression that it is made for random knowledge acquisition. Under these conditions, it is appropriately formatted and effectively organized. The fact that each time you open the page a random subset of words appears, leading you down a different path each time, is an innovative way to build a site and organize this information. But how well the page is linked together – its cohesiveness – is arguably the most difficult criterion to judge. Judged on the understanding that it is supposed to be random, the jumble of words that lets you arbitrarily click on an appealing word and learn more is fantastic. Conversely, if a reader wants to acquire specific data, we would dispute how cohesive the page is.

Since Sherratt appears to demonstrate a mastery of the tool, the tool is therefore adaptable. Mastery correlates with adaptability because a complete understanding of the tool (the tool being the way he tied together all the words) suggests that changes could be made if the site needed to adapt. This adaptability is one of the most exciting aspects of the project. If he were to come across more data, he could expand the comprehensiveness of the database. Currently, the data is limited to a certain time period and geographic range (Australia). However, if he were to collaborate with partners in various nations, he could expand the database so that readers could explore “The Future of the Past” of more nations across a longer expanse of time. With such an extensive database, readers could compare “the future” not only across time but across space. Another improvement we would suggest is a more comprehensive explanation of how to use the site, because a better understanding allows for a more user-friendly experience. It would also be nice to have different navigation interfaces for those who do not find the word-link interface useful. If the back-end database is robust and adaptable, it should be possible to feed that data into multiple interfaces, allowing for very different web ‘faces’.

Based on Shannon Mattern’s criteria, Sherratt created a commendable digital humanities project. It fulfills most of the requirements she presents, and its adaptability could allow it to fulfill the rest. Overall, it is an impressive project that understandably won the award for “Best use of digital humanities project for fun.”

 

Co-Authors: Shane and Joy

 

Thank you to Amanda Gould for her assistance in reviewing our work

#dh Project Critiques: “Speech Accent Archive” and “10 PRINT eBooks”

Collaboratively written by Mithun Shetty, Kim Arena, and Sheel Patel

    The efficacy of a digital humanities project can be vastly improved if its delivery and interface are thoughtfully designed and skillfully executed. The following two websites, “Speech Accent Archive” and “10 PRINT eBooks,” both utilize non-traditional forms of displaying content that alter the experience of their internet audience. Both projects will be critically assessed according to the guidelines described in Shannon Mattern’s “Evaluating Multimodal Work, Revisited” and Brian Croxall and Ryan Cordell’s “Collaborative Digital Humanities Project.”

    The Speech Accent Archive is an online archive of audio recordings in which speakers from a variety of regional and ethnic backgrounds read a specific set of sentences. These recordings are submitted by the public and reviewed by the project administrators before being added to the site. The purpose of this media element is to expose users to the phonetic particularities of different global accents. The website is useful because it provides insight into the various factors that affect the way people talk and how these factors interconnect – from ethnic background to proximity to other countries. The site proves to be a useful tool for actors, academics, speech recognition software designers, people with a general appreciation for the cultural connections in languages, and anyone studying linguistics, phonetics, or global accents.

    The website’s layout is ideal for accomplishing this purpose: users can browse the archive by language/speakers, by atlas/regions, or through a native phonetic inventory. This allows users to explore accents on a regional basis, which makes it easier to see similarities between local dialects. The audio recordings are all accompanied by a phonological transcription showing the breakdown of consonantal and vowel changes as well as the syllable structure of the passage. Each user submission is accompanied by personal data, including the speaker’s background, ethnicity, native language, age, and experience with the English language. The site also has a very comprehensive search feature with many demographic search options, ranging from ethnic and personal background to speaking and linguistic generalization data. This level of detail is an invaluable resource for those who study cultural anthropology, phonetics, languages, and other areas, as it allows for specific manipulation of the data presented. Also, the quality of user contributions is consistently high – it is very easy to follow the playback of the recordings.

    However, the project has its limitations as well. The passage read by contributors is in English, regardless of the speaker’s fluency or familiarity with the language, so pronunciations of the passage may not reflect the natural sound of the languages represented. Further, because the audio samples are user-contributed, it is hard to maintain a consistent level of English fluency among contributors. Another limitation is that many sections of the site have few or no recordings; this is merely due to a lack of user contributions and could be remedied by promoting the website. The project is ongoing, so the database will continue to grow. The site also lacks any sort of comparison feature. The accents are all stored on their own web pages and load via individual QuickTime audio scripts; consequently, it is very difficult to perform a side-by-side comparison of accent recordings. As a result, the project does not really make any conclusions or arguments with its data. This could be improved by allowing users to stream two files at the same time, or by enabling statistical comparison of the demographic information accompanying each recording. It would also be interesting if an algorithm or visualization could recognize the slight differences between voices and arrange them by similarity alongside the accompanying demographic data. Further, the project could establish a tree-like comparison of regions and accents, visually representing the divergences and connections between where people live or have lived and the way they speak.

    With these additions, it would be easier to understand aurally the effects of background or ethnicity on speech accents. Still, the website shines despite these setbacks. The project offers a tremendous amount of data in an organized manner, presenting many opportunities for further research and applications of the information.

   The book titled 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 is a collaboratively written book that describes the discovery and deeper meaning behind the eponymous maze-building program created for the Commodore 64. The book can be seen as a way to look at code not just as a functional line of characters, but also as a medium that holds culture. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 uses the code as a jumping-off point to talk about computer programming in modern culture and the randomness and unpredictability of computer programming and art. The book explores how computation and digital media have transformed culture. Alongside the book, one of the authors, Mark Sample, created a “twitterbot” that uses a Markov chain algorithm to produce and send tweets: the @10PRINT_ebooks twitterbot estimates the probability that one word will follow another by scanning the entire book, and tweets the random phrases it generates.
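To make the technique concrete, here is a minimal sketch of the kind of word-level Markov chain such a bot relies on. This is our own illustration, not Sample’s actual code: the chain maps each word to the words that follow it in the source text, and generation repeatedly picks a follower at random, so frequent word pairs are reproduced in proportion to how often they occur.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the text.

    Repeated followers are kept, so random.choice() later picks each
    next word in proportion to how often it followed the current one.
    """
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain from a starting word, producing up to `length` words."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: the last word never appears mid-text
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Run over an entire book, a chain like this yields mostly nonsense with occasional accidental coherence – exactly the effect the critique describes.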

     The clear goal of this book is to demonstrate that reading and writing code does not always have to be looked at in the purely functional sense in which many people see it. The authors argue that code can be read and analyzed just like a book. They do so by delving into how the 10 PRINT program was created, its history, its porting to other languages, its assumptions, and why all of that matters. They also discuss the randomness of both computing and art, using the 10 PRINT program as a lens through which to view these broader topics. The purpose of the book is stated very clearly by one of the co-authors, Mark Sample: “Undertaking a close study of 10 PRINT as a cultural artifact can be as fruitful as close readings of other telling cultural artifacts have been.”
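The porting the book discusses is easy to demonstrate. The original one-liner prints an endless stream of PETSCII characters 205 and 206 (the two diagonals), since 205.5 + RND(1) truncates to 205 or 206; a rough Python port (our illustration, substituting the Unicode box-drawing diagonals for PETSCII) might look like this:

```python
import random

def ten_print(n=80):
    """Emit n random diagonal characters, mirroring the C64 one-liner
    10 PRINT CHR$(205.5+RND(1)); : GOTO 10, which endlessly prints
    PETSCII 205/206 (here the Unicode diagonals).
    """
    return "".join(random.choice("\u2571\u2572") for _ in range(n))

# Print a few rows of the "maze"; the original loops forever via GOTO 10.
for _ in range(10):
    print(ten_print())
```

Even this small port illustrates the book’s point: the same cultural artifact takes a different shape in each language it is carried into.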

The implementation and format of the book and twitterbot are a little difficult to understand and do not necessarily help establish the authors’ goals, especially in the case of the twitterbot. The book itself is co-authored by ten professors – literary, cultural, and media theorists whose main research topics are gaming, platform studies, and code studies – which gives a broad range of perspectives on the material. The co-authorship also demonstrates that code, just like a book, can be written jointly and can incorporate the views and ideas of more than one person. This draws on the parallels the authors develop between coding and literary work: code is not one-dimensional; it can incorporate the creative and artistic ideas of many people and can take many different forms that often serve very similar functions in the end. In this sense, the co-authoring of the book inherently showcases its main message about how code should be viewed. The book also traces the history of this BASIC program and how it coincided with cultural changes brought on by the advent of the personal computer.

Sample’s twitterbot, on the other hand, leaves the user more often confused than educated, but that may be the point. Using its algorithm, it spits out random, syntactically plausible sentences that sometimes mean absolutely nothing, but occasionally it creates coherent thoughts from the words of the book. The occasional coherent sentence may itself be a demonstration of code: within a jumble of code – or, in the case of the book, words – meaning can emerge when the pieces fall into the correct syntax. The form also fits the argument. The randomness of the twitterbot lets people see that even by coincidence there can be substance in code. Had the authors instead pointed out specific parts of the code, readers would be limited to their interpretations; having a machine randomly spew out phrases allows for many different interpretations.

This tool, although abstractly useful, could be implemented much better. If the twitterbot’s output is only “occasionally” genius, the project would be more effective with some sort of ranking system for the most interesting or coincidental tweets. With such a sorting mechanism, the project would be more convincing in arguing that code can carry a creative license or brand.

Regardless of the limitations both projects have shown, it is abundantly clear that their media elements vastly improve their ability to illustrate their ideas and accomplish their purposes. It would be practically impossible to present these projects with text alone. The Speech Accent Archive’s audio recordings give concrete examples of an entirely aural concept, which is infinitely more useful than simply listing phonetic transcriptions. The 10 PRINT eBooks twitterbot, while difficult to understand, is an interesting concept that also generates concrete examples of what the project is trying to illustrate: that code is multidimensional in its structure and can be interpreted and analyzed like a complex literary work.

 

Sources:

@10PRINT_ebooks, “10 PRINT ebooks”. Twitter.com. Web. https://twitter.com/10PRINT_ebooks

Baudoin, Patsy; Bell, John; Bogost, Ian; Douglass, Jeremy; Marino, Mark C.; Mateas, Michael; Montfort, Nick; Reas, Casey; Sample, Mark; Vawter, Noah. 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. November 2012. Web. http://10print.org/

Cordell, Ryan, and Brian Croxall. “Technologies of Text (S12).” Technologies of Text S12. N.p., n.d. Web. 15 Sept. 2013. http://ryan.cordells.us/s12tot/assignments/

Mattern, Shannon C. “Evaluating Multimodal Work, Revisited.” Journal of Digital Humanities. N.p., 28 Aug. 2012. Web. 15 Sept. 2013. http://journalofdigitalhumanities.org/1-4/evaluating-multimodal-work-revisited-by-shannon-mattern/

Weinberger, Steven H. The Speech Accent Archive. 2012. Web. http://accent.gmu.edu/about.php
