Imagine a world where someone got their hands on Thanos’s Infinity Gauntlet and, with a single snap, replaced all the words referring to transportation with the word “vehicle.” No one has a word for “car,” “bike,” “ship,” “airplane,” or “spaceship.”
It would be absolute chaos. One person would say, “Vehicles are great for the environment,” when talking about bicycles, while another may respond, “No! They are destroying our planet,” because they are thinking of heavy-fuel-oil-powered cargo ships. Meanwhile, news of a recently designed, highly efficient, electric-powered, small airplane has people thinking their next car might fly.
It would take years, maybe generations, to rebuild the lost vocabulary. It takes time to articulate the differences between two different vehicles, create a word to differentiate them, and then get others to adopt the word until there’s a shared understanding. And where there’s ambiguity, there’s opportunity to take advantage of the confusion for your own gain.
So, in this world, salespeople lean into this confusion. They advertise a scooter with a basket as a “smart personal vehicle with space for cargo” and point out how it is sleek and eco-friendly. From the ad, it sounds like a car. In reality, it doesn’t go far and barely holds anything. But when the word vehicle could just as easily mean a cargo ship or a skateboard, who’s to say what you’re supposed to expect?
This is what it’s like with the word “AI” right now. So, in this learning objective, we will start building our vocabulary so that “AI” is seen as being just as generic as the word “vehicle,” and we become suspicious when no other details are provided. With a greater vocabulary, we can be skeptical in a more precise way.
Terminology
AI is a very large umbrella term. Overall, it can be broken down into two main categories and a hodgepodge of other things. One category is predictive AI, which refers to a system that uses data to estimate future outcomes or classify current states. How a system does this depends on the data it has and the mathematical model built from that data. The other main category is generative AI, which refers to a system that creates new content based on the data it is trained on. Rather than predicting an outcome or state, it produces something that resembles its training data but is not (usually) exactly like that data. Because AI is an umbrella term, there are other things labeled AI that do not fall into these two categories. We list a few of them in the table below as well.
Note: This is not an exhaustive list. AI is a large and rich research field, with new applications emerging continually. In addition, the categorization between predictive and generative AI is arguable for some of these terms. Some AI systems could really be both. For example, it’s not uncommon for generative AI to rely on predictive AI in some way under the hood. However, for the purposes of this class, we do not have the time to go into all the shades of gray. Therefore, this is how we will characterize things and what we will use for this class.
| Kind | Category | Purpose/Goal | Output | Example |
| --- | --- | --- | --- | --- |
| Classification | Predictive | Assigns a known label to its input | A textual label like "cat" for images or yes/no, depending on the model | Classifying whether an email is spam or not spam based on the email content and where it came from |
| Recommendation | Predictive | Suggests relevant content or items based on a set of input data | A ranked list of items, content, or actions | A ranked list of recommendations of what video to watch next based on a particular user's watch history, video likes/dislikes, and the data from other users that are similar to this user. |
| Decision Making | Predictive | Guides or automates decisions based on a specific context and predicted outcome | A recommended action to take | A tool recommending whether to approve a car loan |
| Translation | Predictive | Converts text from one language to another | Text in the targeted language | Translating from English to Spanish |
| Synthetic text generator | Generative | Produces new text based on a given prompt | Text | Producing text to describe a product for marketing |
| Chatbot | Generative | A subclass of synthetic text generators that focuses on turn-taking conversations | Multi-round conversational text from two or more entities | A customer service chatbot that tries to help the customer without the need for a human |
| Synthetic image generator | Generative | Produces an image based on a text prompt | Image | Producing an image of a space alien from the prompt "draw me a space alien" |
| Synthetic audio generator | Generative | Produces audio based on text or structured input | Audio, such as music, speech, or sound effects | An app that turns text into a short audio clip of part of a song |
| Synthetic video generator | Generative | Produces a video based on (likely) a sequence of prompts | Video | Producing a short video clip of a dancing cat alien |
| Automation | Other | Performs repetitive tasks or a set of predefined tasks without human intervention | A completed task or process | A machine that installs the door of a car at a manufacturing plant without aid from a human |
| Robotics | Other | A physical machine that senses the world around it and interacts with it | Physical actions such as movement, manipulation, and sensing | A robot that plays soccer |
| Artificial General Intelligence (AGI) | Other | A computer that replicates human-level reasoning across any task or domain | Problem-solving and reasoning ability like a human's | This does not exist, and there is no agreed-upon definition or test for it. Anyone who claims to have achieved AGI has simply made up a definition and declared it satisfied. |
On the Anthropomorphization of AI
Anthropomorphism is the human tendency to attribute human traits, like intention, emotion, or consciousness, to non-human things. In the context of AI, this becomes especially problematic with synthetic text generators like chatbots. Language is central to how we understand and relate to one another. Linguistics research shows that when we encounter coherent language, we instinctively imagine a mind behind it, a person who is thinking, feeling, and trying to communicate. This is how we evolved to interpret language, and it works well for human relationships. Synthetic text generators have no mind, no goals, no moral judgment, and no understanding. They do not care about us because there is nothing there that can care. They are simply remixing patterns from massive datasets to produce plausible-sounding responses.
A person may anthropomorphize their car and say it “takes care of them on road trips,” but we don’t actually believe the car has emotions or intentions. And yet many of us still treat synthetic text generators as if they had empathy, insight, feelings to be hurt, etc. That’s because language itself triggers social and emotional instincts. We imagine a mind behind the text.
This problem is exacerbated by a common bias that leads people to believe computers are more objective, neutral, and trustworthy than humans. As a result, we are more likely to place undue trust in AI-generated outputs, excuse harmful output as accidental “mistakes,” or assume good intentions where there are none, even though a computer has no mind to hold intentions at all. Synthetic image generators rarely evoke this illusion of sentience, but synthetic text generators routinely do because of how closely human language is tied to our understanding of thought and emotion.
This false perception of humanity in a synthetic text generator and our bias to believe computers are neutral have serious implications. When an AI causes harm, we risk blaming the AI instead of its creators. The creators are the ones who designed it. They decided what data to train it on, what outputs were reasonable, where to use it, and how to profit from it. If a car had a fundamental manufacturing flaw, we would not blame the car. We would blame the automaker and hold them accountable. We should do the same for the creators of the AI.
Learning A/B Test
Now that we have finished the themes “What is learning?” and “How does learning work?” and entered the “What is AI?” theme of the course, you will apply what you have learned to observe and analyze your own learning. You will do this by running an A/B test as you learn LO8 through LO11. An A/B test is a user-experience research method in which two variants (A and B) are experienced by either the same or different users to determine which variant is more effective. Of course, by this point you know that learning is far more complicated than anything a simple A/B test can capture. The purpose of this experience is to start deliberately and carefully exploring how AI affects your learning.
You will do this A/B test as follows:
- There are two learning units in the “What is AI?” theme:
- Data visualization: LO8 and LO9
- Probability: LO10 and LO11
- For one unit, you will commit to not using AI in any way, shape, or form to help you learn the LOs for that unit until after the first checkpoints that test those LOs.
- For the other unit, you will plan to use AI to help you learn the material.
The units are as follows:
- Data Visualization
- LO8 – Basics of data types, chart types, which data types best match each chart type, how to read a chart, and common ways charts can be ineffective or misleading.
- LO9 – Creating charts using Excel, and converting/transforming data in a spreadsheet into a format that lets you create the chart you want.
- Probability
- LO10 – Basics of probability, such that given a scenario, you can calculate the probability of that scenario happening.
- LO11 – Conditional probability, such that given a conditional probabilistic scenario, you can calculate the probability of that scenario happening.
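As a small preview of what the probability unit covers, here is the standard definition of conditional probability with a tiny worked example. The numbers are purely illustrative, not taken from the course materials.

```latex
% Conditional probability: the probability of A given that B occurred.
P(A \mid B) = \frac{P(A \cap B)}{P(B)}

% Illustrative example (made-up numbers): if 30\% of all emails are spam,
% and 24\% of all emails are spam AND contain the word "free", then the
% probability that a spam email contains "free" is
P(\text{free} \mid \text{spam}) = \frac{0.24}{0.30} = 0.8
```

In other words, conditioning on an event (here, “the email is spam”) shrinks the space of possibilities you divide by, which is the core idea LO11 builds on.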
Additional Optional Readings (if you are interested)
References
The text above was written based on the following materials:
- Narayanan, A., & Kapoor, S. (2024). AI Snake Oil. Penguin Press.
- The vehicle metaphor comes from this book
- Bender, E. M., & Hanna, A. (2025). The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper.
- Much of the terminology, as well as the discussion of anthropomorphization, comes from this book
- Bender, E. M., & Hanna, A. (Hosts). (2023–present). Mystery AI Hype Theater 3000 [Podcast]. Distributed AI Research Institute.