
David Builes

Portal 2 As A Speculative Future Environment

Part I: Introductory Remarks

This project has two intertwined goals. The first is to come to a deeper understanding and appreciation of Portal 2 as a piece of creative art and an example of a thoroughly immersive gaming experience. This will segue into the second main goal, which is to demonstrate the viability of gaming as a serious literary medium worthy of analysis by examining Portal 2 as a speculative future environment. While some elements of the plot will be exposed as unreasonable, as is only to be expected of a video game, several of the central ideas of Portal 2, such as the need for Artificial Intelligence (AI) to conduct our science for us and the need to create ethical AI that we ourselves must treat ethically, will be vindicated and explored as serious possibilities for our own future that demand serious attention.

Figure 1: Official Cover Art of Portal 2

As will be substantiated throughout this project, Portal 2 is a hugely successful, entertaining, and thought-provoking video game released by Valve Corporation on April 19, 2011. It can be played on several platforms, including Windows, Mac, Linux, PlayStation 3, and Xbox 360. As the title suggests, it is the sequel to Portal, released by the same company on October 9, 2007. Portal received very positive reviews from critics, as evidenced by its score of 90 out of 100 on Metacritic and the number of awards it received. At the 2008 Game Developers Choice Awards, Portal won Game of the Year, the Innovation Award, and Best Game Design (Thorsen and Sinclair). GamesRadar even named it the best game of all time (GamesRadar Staff)! Portal 2 met the high expectations set by its predecessor, earning a score of 95 out of 100 on Metacritic, winning several awards, and prompting several commentators to dub it one of the “best games of all time” (Freeman, Kuo). Portal 2 is a masterpiece of gaming, and as such it is worthy of analysis on its merits as a game. However, as will be shown below, the lessons that can be learned from Portal 2 far exceed those intrinsic merits.

To situate the story of Portal 2, a brief plot synopsis of its predecessor Portal is necessary. The protagonist of Portal, the character the gamer plays as, is named Chell. She is the pawn of an AI named GLaDOS, who runs experiments on her for an organization called Aperture Science. As Chell advances through increasingly difficult obstacle courses with her signature portal gun, like a mouse in a maze, GLaDOS’ motives become ever more sinister as she shows her complete disregard for Chell’s well-being. In the rest of the game, Chell attempts to, and succeeds in, “killing” GLaDOS. However, after she succeeds, she is dragged back into Aperture Science, into the Relaxation Chamber.

Figure 2: Chell with GLaDOS using a potato battery (Portal 2)

Portal 2 takes place an indefinite amount of time after Portal. It begins with an AI named Wheatley arriving at the room where Chell is awakening. Together, they attempt to escape Aperture Science, a facility now crumbling into ruin. Along the way, however, they accidentally revive GLaDOS, who is understandably seeking revenge. Chell and Wheatley attempt to shut down GLaDOS again, and in doing so they swap the personality programming of Wheatley and GLaDOS. Wheatley becomes corrupted by all the power of the facility and, as revenge on GLaDOS, shoves her into a potato battery. GLaDOS insists that Chell help her get back to her original body so that they can work together to escape Wheatley’s control. Eventually, Chell and GLaDOS best Wheatley by using Chell’s portal gun to transport all of them to the surface of the moon, where GLaDOS takes control of her original body, knocks Wheatley into space, and returns to the facility with Chell.

Figure 3: Chell shooting at the moon with the portal gun (Portal 2)


Part II: Portal 2 – The Game

The reasons why Portal 2 has received so many accolades are multi-faceted; they range from its creativity as a puzzle game and its quirky sense of humor to the many memes it has spawned in gamer culture. As a puzzle game, it uses many different future “technologies” to enhance the difficulty, and hence the enjoyment, of its many levels. The primary tool used in every level of Portal 2 is, of course, the portal gun. The portal gun shines as a puzzle tool because of both its simplicity and its complexity. It is deceptively simple in that all it does is create a “hole” in a surface such that entering the first hole leaves one at the entrance of a second hole. One can only appreciate the complexity of the portal gun in more advanced levels, especially when it is used in conjunction with other game mechanics. This is best displayed in video form: in Video 1 one can see a creative use of the portal gun along with another piece of technology called “propulsion gel”, a slippery substance that, unsurprisingly, propels whoever slides across it through the air. Other puzzle mechanics include laser beams, turrets, reflecting boxes, repulsion gel, and many more. Although the video shows one creative use of these materials, the best way to appreciate their applications is of course to play the game itself.

Comedic relief is always welcome when running through the mentally strenuous puzzles that Portal 2 throws at you, so thankfully Portal 2 has its fair share. One funny part in the game is when GLaDOS, the once extremely powerful AI in control of all of Aperture Science, is reduced to being powered by a potato. This comedic turn of events also turns out to be central to the plot. It is only because GLaDOS becomes a potato that she transitions from antagonist to protagonist by allying with Chell; essentially, GLaDOS is unable to lie to Chell while running on only 1.1 volts of electricity! Another constant source of humor is the character Wheatley. For starters, while GLaDOS’ voice sounds like an unnatural, electronic AI, Wheatley’s voice sounds like it is coming from a perfectly ordinary man with a British accent. In fact, Wheatley is voiced by the English comedian Stephen Merchant! To his dismay, Wheatley is often referred to as a “moron”, which serves as another stark contrast with GLaDOS and is the source of many of his humorous remarks. Because humorous remarks are best appreciated by example, a brief clip where Wheatley is first introduced, which takes place near the beginning of the game, is given in Video 2.

Figure 4: GLaDOS as a potato telling Chell she can’t lie (Portal 2)

Thirdly, several humorous memes from Portal 2 have been popular with the gaming community. Probably the most famous is a forty-second rant given by Cave Johnson, the CEO of Aperture Science, about what to do when life hands you lemons, which can be found here. Lastly, there is the closing song that GLaDOS sings in her AI voice, titled “Want You Gone”, about her feelings for Chell. The audio can be found in Audio 1. Its opening lines are “Well here we are again. It’s always such a pleasure. Remember when you tried to kill me twice?”

In summary, Portal 2 should be considered a serious literary and artistic medium because it contains poetry in the form of song, humor and irony in the form of rants and other memes, a complex plot where the antagonist and one of the protagonists, who are both AI, change roles, and a creative use of a wide variety of imagined future technologies which are used in wildly different settings from the heart of an incinerator to the surface of the moon in outer space. Next, we will see how Portal 2 may illuminate our own reality.


Part III: Portal 2 And The Future

Just how plausible is Portal 2 as a speculative future environment for our own world? It should be conceded that some elements of the video game’s universe cannot be true, such as the fact that Portal is supposed to take place around the year 2010, which we know is well before any AI will surpass human intelligence. Furthermore, there is no plausible scientific account of how anything like a portal gun could work. At best, one can think of the portal gun as creating a wormhole between two distinct regions of space-time, but the technological advances needed to make such a handheld machine possible lie nowhere in the foreseeable future. However, two of the main conceptual themes that Portal 2 is built upon, namely the relation between AI and science and the relation between AI and ethics, are very relevant to our own future.

In speaking to the relationship between AI and science, I would claim that Portal 2, in broad strokes, accurately reflects what the relation between AI and science will be in the future. In Portal 2, Aperture Science is a corporation in which AIs advance the understanding of science. There is good reason to think that such a corporation will exist in our future. Once AI are created in the real world, there are a multitude of reasons why it would be natural for them, rather than humans, to begin advancing science. One clear reason is that the cognitive abilities of human beings are limited in a variety of ways. No human being can memorize the entire human genome, but it is fairly easy to store massive amounts of biological information in a computer. We have already made AI that are better at certain tasks than humans, such as Deep Blue, which bested the world chess champion, and Watson, which has beaten the best human players at Jeopardy! In popular culture, it is sometimes said that machines will never be able to surpass human beings completely, a thought that is also sometimes backed by religious reasons. However, to refute the bold claim that AI could never replicate human activity, we can simply imagine a computer that performs any activity by simulating every atom in a human’s body and making the simulated human do the activity for it. While this is of course wildly unfeasible, it is enough to show that there is little reason to believe the bold claim that AI will never surpass human intelligence.

Apart from these theoretical reasons, there are practical reasons, already emerging in modern science, for thinking that not only would AI be better than humans at doing science, but that certain branches of science are simply impossible for humans yet possible for AI. In the interview here on the “limits of science”, Professor of Philosophy Massimo Pigliucci discusses the fact that certain sciences are already reaching the point beyond which humans can understand them. In particular, extracting any meaning or human understanding from the massive data sets that come out of gene-gene interaction experiments seems unfeasible. Since big data will only increasingly become a component of science, it will be necessary to use AI to advance the state of science in these areas.

Lastly, the relation between AI and ethics is also accurately played out in Portal 2. The first issue is the need to make ethical AI. Portal 2 explores some of the consequences of an AI with unethical motivations, such as Wheatley after he has become corrupted by power. Since AI will potentially be very powerful and very intelligent, if they do not have ethical intentions, they could wreak havoc. This raises a question that is at once practical and philosophical: what exactly are “ethical intentions”? This theoretical issue will have immense practical import when we are designing intelligences with what we take to be the “true” ethical intentions. These questions are currently being explored academically in a field called “Friendly AI”. Figuring out how to create ethical AI, and figuring out just what it would mean to create an ethical AI, are among the major research projects of the Machine Intelligence Research Institute (MIRI). A look at the abstract of one of MIRI’s papers illuminates just what questions they are tackling:

Many researchers have argued that a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.” (Muehlhauser and Helm)

The second issue is whether AIs should “count”, morally speaking. Is it unethical to turn off an AI, as Chell did to GLaDOS at the end of Portal? The answer to this question, which will have enormous moral significance once AI exist, depends on whether AI are conscious. If AI are not conscious, then it is no more unethical to shut down an AI than it is to shut down my iPad (assuming my iPad is not conscious!). If, however, they are conscious, then shutting down an AI should be seriously considered as having the same status as murder. To know whether AI are conscious, we need a solution to what David Chalmers calls the “hard problem of consciousness”: the question of why subjectivity exists in the first place and what its relation to the objective world is, explored in his book The Conscious Mind.

There are two basic positions in the philosophy of mind that give contradictory answers to whether we should be ethical toward AI. One view, called functionalism, holds that whether or not something is conscious depends entirely on its functional role (Searle). In other words, the medium in which the object carries out its functions is irrelevant. This implies that consciousness is not restricted to biology; it can be realized in any medium that functions in the relevant ways. Another view, called biological naturalism, is much more skeptical about this possibility. According to biological naturalism, consciousness is the result of essentially biological processes, which may or may not be replicable non-biologically. On this view, we should seriously consider the possibility that AI really shouldn’t “count”, morally speaking.

The argument that ultimately convinces me that functionalism is true is the so-called “fading qualia” argument defended in The Conscious Mind. The thought experiment involves gradually changing a conscious biological organism into an organism made out of, say, silicon chips while keeping it behaviorally and functionally identical. For example, one may imagine replacing one neuron at a time with a silicon chip that functions in the same way as the neuron. If this process could in principle be carried out, then, if silicon beings cannot be conscious, either the organism’s consciousness will suddenly snap off without it “noticing” (since it will behave in identical ways) or it will gradually fade out, again without it ever noticing that anything strange is going on!

At the end of the day, much more empirical and conceptual work must be done to understand how AI will affect our future. However, many think that this work is very urgent indeed. According to Oxford philosopher Nick Bostrom:

Superintelligence is one of several “existential risks” as defined by Bostrom (2002): a risk “where an adverse outcome would either annihilate Earth‐originating intelligent life or permanently and drastically curtail its potential”.  Conversely, a positive outcome for superintelligence could preserve Earth‐originating intelligent life and help fulfill its potential. It is important to emphasize that smarter minds pose great potential benefits as well as risks.

It may be that how humanity deals with the coming of AI will literally decide the fate of the human species!

Part IV: Media Appendix

Video 1 (Portal 2)


Video 2 (Portal 2)


Audio 1 (Portal 2)



Works Cited

Bostrom, Nick. “The Ethics of Artificial Intelligence.” Cambridge Handbook of Artificial Intelligence (n.d.): n. pag. Web. 12 Dec. 2014.

Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford UP, 1996. Print.

Freeman, Will. “Portal 2 – Review.” The Guardian, n.d. Web. 1 Dec. 2014.

GamesRadar Staff. “The 100 Best Games of All Time.” GamesRadar, n.d. Web. 1 Dec. 2014.

Kuo, Ryan. “Portal 2 Is A Hole In One.” Speakeasy RSS. N.p., n.d. Web. 01 Dec. 2014.

Muehlhauser, Luke, and Louie Helm. “Intelligence Explosion and Machine Ethics.” Singularity Hypotheses: A Scientific and Philosophical Assessment (n.d.): n. pag. MIRI. Web. 12 Dec. 2014.

Portal 2. Valve Corporation. 2011. Video Game.

“Portal for PC Reviews – Metacritic.” Portal for PC Reviews – Metacritic. Electronic Arts, Web. 01 Dec. 2014.

Searle, John R. Mind: A Brief Introduction. Oxford: Oxford UP, 2004. Print.

Thorsen, Tor, and Brendan Sinclair. “Portal BioShocks GDC Awards.” GameSpot. N.p., Web. 01 Dec. 2014.

