Humanity of AI

Author: Sai Cheemalapati

Since the dawn of machinery, scientists and futurists alike have been fascinated by the concept of an artificial intelligence (AI). As humanity climbs the curve of Moore’s law, machines are gaining the resources required to solve extremely difficult computational problems. Eventually, the resources will be available to emulate human behavior to the degree that an artificial intelligence could be made indistinguishable from a human. Pioneers of the information age like British computer scientist Alan Turing foresaw such possibilities in machines. In Turing’s 1950 paper “Computing Machinery and Intelligence,” he proposed what would come to be known as the Turing test [1]. A machine passes Turing’s test if another human believes that the machine is a human. In a relatively short time, computing has progressed to the point where emulation of the human mind is within grasp. Science fiction has long held a romanticized vision of AIs in which the constructs act superhumanly or inhumanly. Using several popular works of published media representing AIs and analysis of real-life efforts at emulating human intelligence, this paper argues that AIs that pass the Turing test are inherently human.

XKCD 329: Turing Test

First, the concept of a “human” AI must be defined. Human beings are emotional, social, capable of higher reasoning, and much more. Emulation of the mind requires detailed reconstruction of millions of pathways and the inclusion of a capacity to learn. Hypothetically, if a perfect AI were to exist, it would properly express those pathways and thus would be convincingly human. AIs which are not human lack some inherent human ability. If a bot were unable to carry a conversation, produce ideas, or think critically, for example, then it would be an inhuman AI. Henceforth, AIs referred to as “human” reflect all the abilities and temperaments of a human being. That is, they are indistinguishable from another human being, and any human would affirm that said AI is another human. The argument is that if a machine perfectly emulates a human being, then it passes the Turing test and is human-like in intelligence. This paper does not concern itself with the biological aspects of humanity – it focuses purely on the mental and psychological aspects of the human, insofar as these are the requirements for a true AI.

One of the most popular methods of demonstrating human-like intelligence in a machine is by writing a program that carries a conversation. These programs are popularly known as chatbots. Popular online chatbots, like Cleverbot [2], use conversations with users to build enormous databases of responses.

A less than convincing conversation.
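Cleverbot’s actual implementation is not public, but the crowd-sourced idea described above can be sketched roughly: whatever a user types is stored as a candidate response to whatever the bot just said, so every conversation grows the database. The class below is an illustrative assumption, not Cleverbot’s real design.

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical sketch of learning responses from users: each user line is
// recorded as a possible reply to the bot's own previous line.
class LearningBot {
	private final Map<String, String> responses = new HashMap<>();
	private String lastBotLine = "hello";

	String reply(String userLine) {
		// Remember the user's line as a response to what the bot just said.
		responses.put(lastBotLine, userLine);
		// Answer with what a past user said after this same line, if known.
		lastBotLine = responses.getOrDefault(userLine, "tell me more");
		return lastBotLine;
	}

	public static void main(String[] args) {
		LearningBot bot = new LearningBot();
		System.out.println(bot.reply("hi there")); // prints "tell me more"
		System.out.println(bot.reply("hello"));    // prints "hi there", learned above
	}
}
```

The more users talk to such a bot, the more of its replies are things humans actually said, which is exactly why Cleverbot’s lines can feel human out of context.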

To augment the discussion on chatbots, the author created a simple chatbot to demonstrate the abilities and potential failings of the technology. The basic method behind the sample chatbot is a database that contains a collection of inputs and responses. When a human enters an input, the bot looks through its database and prints out the corresponding response. While it is a simple implementation, it is entirely possible that for some standard inputs the bot responds with a convincing line. For simple exchanges it is not hard to see some success. However, the sample bot lacks any kind of memory or critical thought. Try to have the chatbot keep a train of thought and it will fail. Ask it to think critically and it will likely fail, unless the question is within its database. Certainly the bot lacks intelligence – its library is based entirely on a limited set of expected questions that the author anticipated. More complicated bots do have memory and some common knowledge. These bots use computer science techniques like machine learning to “learn” and follow the natural flow of conversation.

An example of the author’s chatbot in action.
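One minimal way to give a lookup bot the memory the sample bot lacks is to key responses on the pair (previous input, current input) rather than on the current input alone, so the same question can be answered differently depending on what came before. The MemoryBot class below is a hypothetical sketch along those lines, not the author’s bot.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of one-turn "memory": the lookup key combines the previous user
// input with the current one, so context changes the response.
class MemoryBot {
	private final Map<String, String> responses = new HashMap<>();
	private String previous = "";

	// Teach the bot a response for a (previous input, current input) pair.
	void learn(String prevInput, String input, String response) {
		responses.put(prevInput + "|" + input, response);
	}

	String reply(String input) {
		String answer = responses.getOrDefault(previous + "|" + input,
				"i don't understand");
		previous = input; // remember this turn for the next lookup
		return answer;
	}

	public static void main(String[] args) {
		MemoryBot bot = new MemoryBot();
		bot.learn("", "do you like music?", "yes, very much");
		bot.learn("do you like music?", "why?", "it helps me relax");
		System.out.println(bot.reply("do you like music?")); // prints "yes, very much"
		System.out.println(bot.reply("why?"));               // prints "it helps me relax"
	}
}
```

Even this tiny change lets the bot answer a follow-up like “why?” sensibly, something the pure input-to-response table cannot do.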

Each year, the Loebner Prize is awarded to the most convincingly human chatbot. AIs that succeed demonstrate some grasp of general knowledge and realistically follow the flow of conversation [3]. Writing such a program is appreciably difficult. Even chatbots that succeed in the competition can easily be made to fall flat outside the scope of their construction. Were a chatbot produced that is perfectly successful in fooling humans, however, would it be human? If judges of the bot agree that it was – then yes. Looking at it from the other side, in the internet age people apply such logic daily. In a chatroom, the only method available to determine whether other users are human is general reasoning. On YouTube, users can quickly identify bots that post similar, unrelated comments on videos. Often, these comments advertise a product or website, and they are quickly marked as spam. Interestingly, this provides some incentive for the manufacturers of “spambots” to make constructive comments that fool other humans. Thus, if a chatbot were able to pass the Turing test perfectly, it would be indistinguishable from a human, and therefore human.

XKCD 810: Constructive
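The reasoning YouTube users apply to spot spambots can be approximated in code: an account that posts the same comment, lightly normalized, again and again looks like a bot. The SpamHeuristic class below, including its threshold of three repeats, is an illustrative assumption rather than any platform’s real filter.

```java
import java.util.HashMap;
import java.util.Map;

// A toy duplicate-comment heuristic: normalize each comment and count how
// many times the same text has been seen; frequent repeats look bot-like.
class SpamHeuristic {
	private final Map<String, Integer> counts = new HashMap<>();

	boolean looksLikeBot(String comment) {
		// Lowercase and strip punctuation so trivial variations still match.
		String key = comment.toLowerCase().replaceAll("[^a-z0-9 ]", "").trim();
		int seen = counts.merge(key, 1, Integer::sum);
		return seen >= 3; // an arbitrary cutoff chosen for illustration
	}

	public static void main(String[] args) {
		SpamHeuristic filter = new SpamHeuristic();
		filter.looksLikeBot("Check out my site!");  // first sighting: not flagged
		filter.looksLikeBot("check out my site");   // second: still not flagged
		System.out.println(filter.looksLikeBot("CHECK OUT MY SITE")); // prints "true"
	}
}
```

A spambot that defeats this check must vary its comments and stay on topic, which is precisely the pressure toward “constructive,” human-passing comments described above.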

In the science fiction world, AIs are often represented by humanoid robots. Consider Steven Spielberg’s movie A.I. The main character is a robot boy named David who, on the outside, is indistinguishable from a normal human boy. David is purchased by a mother to temporarily replace her son, who was placed in suspended animation after contracting a rare disease. Robots are commonplace in this universe, but David is special, as he is the first who is given the capacity to love. When the mother first meets David, she activates his imprinting program, causing David to feel love for her. For a short time, both David and the mother are happy. The mother feels that the hole in her family is filled, and David has a mother to love and appears happy. Spielberg wrote and presented the character David to be very human. He’s played by a human actor and shows human emotions, but is the character David human? On one hand, David is shown to be easily influenced by programming. He is technically forced to love his mother, and all of his past “memories” were generated and implanted. On the other hand, David is presented to be very emotionally convincing and human in his actions. He shows happiness towards his mother, jealousy towards his brother, sadness at losing her, longing for her return, fear of death, and much more. After being adopted into his family, David struggles to get along with his new biological brother, and his family eventually makes the choice to abandon him out of fear for their own safety.

Above is the scene where David is abandoned by his mother. One can observe a mix of sadness, confusion and anger in David’s voice and face.

While wandering alone, David comes across a band of rogue robots and is forced into an anti-Mecha fair, which celebrates the destruction of robots as a human audience cheers on. When David is forced on the stand to be destroyed, he starts to plead for his life. The audience members are confused and mortified. David is so convincingly human that the audience pauses, giving him time to escape. At one point in the story, David shows signs of depression and even attempts suicide by jumping off a bridge. He is rescued, but the complexities of such an action are very interesting and meaningful to an audience. To contemplate suicide is a complicated set of emotions that goes against the prime directive of life – and likely any programming. It is a very human and sympathetic feeling. At the end of the movie, like the story of Pinocchio, David asks a magical creature to turn him into a real boy. Every prejudiced character in the movie hated David because he was not biologically human. Certainly he passed the Turing test – he convinced an audience of humans that he should not be killed, and he even fit in with a family for some time. He was even originally purchased so he could replace that family’s son. In order to feel like he belongs in the world of humans, David feels that he must be biologically human – not just mentally. If, to David, the only difference between being human and being accepted is a biological body, then surely David was human all along. As a robot that passed the Turing test, David is human [4].

The novel Neuromancer, by William Gibson, follows the story of hacker Henry Case as he uncovers a conspiracy in a cyberpunk dystopia. The story features two powerful artificial intelligences named Wintermute and Neuromancer. The former has one goal – to merge with the latter. Events are set in motion when Case is approached by a man named Armitage. Together with an enhanced human named Molly, the three take part in a huge heist and conspiracy orchestrated by the AI Wintermute to advance its goal.

The two AIs Wintermute and Neuromancer are powerful programs that influence the progress of much of the story. They are shown to be calculating and cold, but are either of them human? Looking first at Wintermute, the AI is self-serving and manipulative. It controls Armitage, the “person” responsible for putting the events in motion, absolutely. Through Armitage, the AI seeks out Case for his hacking skills so it can get to Neuromancer. All of its actions serve the end goal – merger with Neuromancer. Does Wintermute pass the Turing test? Case meets Armitage and does not know that he is controlled by the AI. He willingly signs on to a plan designed by Wintermute and is none the wiser that an AI has recruited him. Time and time again, Wintermute proves to be resourceful and manipulative of others. Neuromancer, on the other hand, wants nothing to do with its other half. It tries to bribe Case, and goes so far as to attempt to kill him, to avoid the merger.

Gibson’s universe features a police force known as the “Turing Police,” tasked with preventing AIs from exceeding set limitations. The original reason Neuromancer and Wintermute were kept separate was to stay within those limitations. While both AIs come close to appearing human, each falls just short in its own way. In every interaction, Wintermute does not exhibit a personality of its own; instead, it takes the forms of people from the memories of whomever it is talking to. Neuromancer, however, has its own avatar. It is the slightly more human half because it projects a self. In this novel, Wintermute would not pass the Turing test, but Neuromancer would. One’s personality is a distinctly human trait and is easily recognizable. Wintermute is a bit too robotic – it takes on the traits of others to overcome its lack of a self [5].

The Animatrix short film collection provides another case. The short films show the events leading to the rise of the robot nation. Before the revolution, humanoid robots are looked down upon by their human counterparts as slaves and tools. When one robot is threatened by its owner, it chooses to murder the owner. The robot appears to testify in court and pleads innocent, claiming that it was acting in self-defense. The result is that robots are refused human rights and rights as citizens. Looking at this robot specifically, it would seem that machines in the Matrix universe have free will – a very human characteristic. The robot chose to murder its owner – an action not likely programmed in by its human creators. The robot loses its court case, however, and machines are denied basic human rights. All over the world, robots begin to protest, and a massive war between humans and machines erupts. Up until the moment of the murder, machines were sub-human servants who existed for the convenience of humanity. The possibility of murder put them in a different light – a human one. The machines were demanding human rights as black slaves once did in the United States. The machines acted very distinctly human and considered themselves to be “human,” as they sought the same rights as their owners. The machines appeared to show human intelligence, and as a result would likely pass the Turing test. In this world, like in Spielberg’s A.I., the machines were feared the moment they became too human. The possibility that a non-biological machine could be human caused them to be outcast from society and trampled. In the case of the Animatrix, the machines used their understanding of humans to produce the Matrix, enslave the human population, and drive them to the brink of extinction [6].

AI as depicted in fiction is often ‘human’ in intelligence and characteristics. Writers choose to project human intelligence onto machines because the concept is inherently unsettling. The human brain is stagnant in power while machines become more powerful each day. A truly intelligent machine could run circles around a human and act in its own best interests, perhaps becoming cold and ruthless like the AIs of the Animatrix or Neuromancer. A machine that passes the Turing test reflects all human characteristics and is therefore human itself. A.I., the Animatrix, and Neuromancer all feature human intelligences, and in the real world the barrier to human intelligence is breached a little further by each new generation of chatbot technology.


[1] Turing, A. M. “Computing Machinery and Intelligence.” Mind 59 (1950): 433–460.

[2] “Cleverbot.” N.p., n.d. Web. 1 Dec. 2013.

[3] “Home Page of The Loebner Prize in Artificial Intelligence.” N.p., n.d. Web. 1 Dec. 2013.

[4] A.I.: Artificial Intelligence. Dir. Steven Spielberg. Warner Bros., 2001. Film.

[5] Gibson, William. Neuromancer. New York: Ace, 1984. Print.

[6] The Animatrix. Warner Home Video, 2003. Film.


/*
 * Simple ChatBot
 * Sai Cheemalapati
 */

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Scanner;

class Chat {
	public static void main(String args[]) {
		HashMap<String, String> map = new HashMap<String, String>();

		try {
			// input.txt lists pairs of lines: an expected input, then its response
			BufferedReader br = new BufferedReader(new FileReader("input.txt"));
			String line = br.readLine();
			while (line != null) {
				map.put(line, br.readLine());
				line = br.readLine();
			}
			br.close();
		} catch (Exception e) {
			System.out.println("Error reading/parsing input.txt file");
		}

		// Answer each line the user types with the matching canned response
		Scanner in = new Scanner(System.in);
		while (in.hasNextLine()) {
			System.out.println("ChatBot: " + map.get(in.nextLine()));
		}
	}
}

Contents of input.txt (read in pairs: an expected input line, then the bot’s response):

hi there
how are you
are you a robot?
no, i'm a human
what time is it?
i don't have a watch
how old are you?
where do you go to school?
duke university
what's your name?
i'd rather not say...
goodbye, nice talking to you!
