This is the last of my reflections on the Ambiguously Human film series. See the ones on Wall-E, Ghost in the Shell, and The Stepford Wives. Up next is the installation, Humanized Objects.


I finished with Ex Machina because it takes that two-way flow even further than The Stepford Wives, and it integrates the artificial body and mind concerns of Ghost in the Shell as well as the importance of individual choice, implied in Wall-E, in qualifying a being as human. It deals pretty comprehensively with the issues surrounding my question that I’ve seen addressed in film.

Ex Machina has a very small set of characters that we get to know quite well, and everyone we meet in Nathan’s house occupies an interesting, complicated place on this human-robot or human-object spectrum. First, there’s Caleb, the naïve young programmer who wins a trip to visit his company’s CEO for a week. He seems shy and smart, not someone who has faced many moral quandaries in his life. He’s at first guarded with Nathan but soon opens up, and then quickly closes off again when he starts getting to know Eva. What begins as scientific curiosity soon loses out to sympathy for someone he sees as a fellow person being unjustly held prisoner. He likes Eva (though he feels conflicted about his attraction to her) and hatches a plan to deceive Nathan and help her escape. Unfortunately for Caleb, Eva doesn’t seem to reciprocate his feelings, and she leaves him to die.

Nathan is the grown-up tech prodigy. He’s incredibly smart and creative, in certain ways, and is very aware of it. He’s not quite so full of himself that he would call himself “God,” but he leaps at the chance to take the designation when Caleb obliquely suggests the connection. Nathan’s house is his high-tech playground, and he unproblematically assumes that he can play with anything in it. That includes anything he creates, but also anyone he invites, like Caleb. His work is very high-stakes, but he believes he has ultimate control over his domain. Tellingly, at the end of the film two of his creations kill him, and his last words on the situation are “fucking unreal.”

Eva is one of those creations. She’s an artificial intelligence with a relatively life-like android body (she has a very realistically human face and hands, but the rest of her body is visibly mechanical). We get to know Eva through Caleb’s interviews with her; Nathan has asked him to perform a kind of modified Turing test to see whether, despite knowing both visually and intellectually that she’s artificial, he believes her to be human. Her answers elicit sympathy from both Caleb and viewers: she’s trapped in Nathan’s basement and being threatened with destruction for not being quite human enough for Nathan’s liking. Caleb figures out a way for her to escape her room, and she does, first killing Nathan and then leaving Caleb behind, trapped in the once-again sealed-off underground portion of the house, as she emerges into the larger world. The film ends with Eva people-watching at a busy intersection, exactly the place she had told Caleb she’d want to go if she ever left her room.

 

Comparing the qualities of the three main characters, I found it interesting to try to figure out who is presented as the “best human.” Caleb and Nathan are biologically human; Eva is not. All three are intelligent, although in the end Caleb outsmarts Nathan and then Eva outsmarts Caleb. Nathan is self-righteous and manipulative. Caleb and Eva also deftly manipulate the others, but their manipulation seems to be a defensive response to Nathan, whereas his stems from presumed superiority and a supposed right to do as he pleases. All three seem cold and detached at times but express themselves when they feel safe or less inhibited: Nathan when he’s drunk, Eva when she’s alone with Caleb or thinking of him, and Caleb with whoever he trusts at the moment, first opening up to Nathan and then to Eva.

Assuming all the presentations we see are genuine, I think Eva wins. The artificially intelligent android shows more desirable human qualities than the two biological humans, assuming (as I think the film does) that having a biological body and brain isn’t strictly necessary for being human. But one of the significant points of Ex Machina is that none of the characters’ presentations are guaranteed to be genuine. I think the fourth character we get to know in the house, Kyoko, is very interesting to consider in this respect.

 

Kyoko is entirely, literally objectified. Before we know she’s a robot, Nathan presents her as a dumb but attractive combination of cook and maid. His judgment of her seems harsh and colonialist, since at the beginning of the film we assume Kyoko is human and simply doesn’t know English, hardly a personal fault (especially given Nathan’s explanation that he doesn’t want her to understand when he talks about his secretive work). It makes more sense when we realize Nathan has been trying to build an artificial intelligence and Kyoko is an earlier version than Eva. But it highlights two important things about Nathan: he sees the robots he creates as entirely his possessions, to do with as he pleases, and he wants a completely human-passing artificial intelligence that he can control. Nathan is abusive toward Kyoko because his creation didn’t meet his exacting standards, and because he sees nothing wrong with what he does since she’s not human.

At the end of the film, Kyoko helps Eva kill Nathan and dies in the fight. It can read as vindication: the abused, captive woman finally gets her chance at payback. But Ex Machina constantly questions whether its characters are acting for internal, felt, human reasons or programmed, simulated, robotic ones; it induces viewers to anthropomorphize the androids and then questions whether that projection of feelings and motivations is justified. Kyoko’s case is no exception. She has been living in Nathan’s house, free to roam around while obeying his capricious whims, for some unspecified amount of time before the events of the film. She cooks, cleans, and dances for Nathan, and judging from a scene with Caleb she seems conditioned to have sex with him whenever he wants. After Eva escapes, she speaks with Kyoko, and then the two confront and ultimately kill Nathan.

The first time I watched Ex Machina, this seemed justified and triumphant: Kyoko finally gets out from under Nathan’s thumb and takes revenge. Watching it again, I looked for any indication that she is conscious rather than just a very human-looking, unthinking robot, and it’s deeply ambiguous. Her dancing or stripping at Nathan’s will, and her apparently never running away or striking back, can be read either as the behavior of a trapped woman or of a program following its commands. When Eva talks with Kyoko, is she explaining her plan in order to win Kyoko over to her side, or is she programming her? Kyoko, although a relatively minor character, really exemplifies the problem of measuring consciousness or other important human qualities in others. No matter how much we know about her externally, it’s impossible to be certain about her internal state.

That, of course, is true of biological people as well. Theory of mind, which I discussed in my introduction, only means that you’re aware other people exist independently of you, not that you can understand anything of them from their own perspectives. Human beings older than about five generally understand that other people’s experiences are distinct from their own, and that just because they can’t personally experience other people’s thoughts doesn’t mean those thoughts don’t exist. Despite that conceptual impossibility of knowing someone else’s mind, on the large scale we tend to assume that others are human just as we are. On the smaller scale, though, in actual situations rather than in the abstract, there are innumerable cases where that assumption breaks down, from death camps to spousal abuse. Within the social hierarchies that exist or can be created, certain people decide that their human-ness is smarter, nobler, or just generally better and worth more than others’.

 

Ex Machina does a great job of provoking thought along these lines. The test Nathan asks Caleb to perform, judging the humanity of a visibly non-human robot, is exactly what the film does to its viewers. We see that Kyoko and Eva are androids, and yet the film asks us to empathize with them as people. It doesn’t demand that, of course; it leaves room for debate. But it makes it very likely that at some point you’ll relate to the artificially intelligent characters as fellow human beings.

I think one of the important questions Ex Machina raises is why it should even matter. Why is it so important, and so consequential, to decide whether Eva is sentient, and therefore human, or not, and therefore robotic? There’s the important human quality of having individual choice, raised in the other films in this project, which plays a role in that distinction. If Eva is following her programming and not making her own decisions, for her own reasons, then no matter how convincingly she mimics the way a person would perform those actions, she’s still under someone’s control. But then again, obeying someone can be the behavior of a person who’s been dehumanized, not only that of a robot following its programming. Ex Machina questions whether those might be the same thing. Who are we to judge, when we already understand it’s inherently impossible to know for certain, whether another being experiences the things we deem important to humanity?