The Cost of Courtesy

By: Stephen Toback

In the world of AI, we often find ourselves treating chatbots like colleagues. We say “please,” we offer a “thank you,” and sometimes we even apologize for a typo. But recent data suggests that this digital etiquette comes with a literal price tag and a surprising impact on accuracy.

The Multi-Million Dollar “You’re Welcome”

OpenAI CEO Sam Altman recently made headlines by confirming that politeness is costing the company “tens of millions of dollars” in compute power and electricity. While it sounded like a joke, the mechanics of Large Language Models (LLMs) back it up.

Every word you type is processed as a “token.” When you add “Could you please be so kind as to…” you aren’t just being nice; you are forcing the model to calculate the statistical probability of every extra character. This requires more GPU cycles, more electricity to power the data centers, and more water to cool them.

The Resource Toll:

  • Electricity: A single ChatGPT query consumes roughly 10 times the electricity of a standard Google search.
  • Water: Generating 100 words can consume up to three bottles of water for server cooling.
  • Tokens: Politeness adds “linguistic noise” that the model must parse before it even gets to your actual request.
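To see how pleasantries inflate a request, here is a minimal sketch that counts tokens with a crude rule of thumb (splitting on words and punctuation). A real BPE tokenizer such as OpenAI's tiktoken would give different absolute counts, but the polite-versus-blunt gap holds either way:

```python
import re

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: count word runs and
    # punctuation marks. Actual BPE tokenizers split differently,
    # but the relative comparison is what matters here.
    return len(re.findall(r"\w+|[^\w\s]", text))

polite = "Could you please be so kind as to summarize this article? Thank you!"
blunt = "Summarize this article."

print(rough_token_count(polite))  # 15
print(rough_token_count(blunt))   # 4
```

Under this rough count, the polite phrasing is nearly four times longer for the same underlying request, and every extra token is extra compute.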

The Accuracy Paradox: Rude vs. Polite

While we’ve been told that “being nice gets better results,” recent research from 2025 (including studies from Penn State and the University of Pennsylvania) suggests the opposite for the latest models like GPT-4o.

In these studies, researchers tested the same questions using five different tones: very polite, polite, neutral, rude, and very rude.

  • The Result: “Very Rude” prompts actually outperformed “Very Polite” ones, with an accuracy jump from 80.8% to 84.8%.
  • Why? It isn’t that the AI likes being insulted. Rather, rude or blunt prompts are often more directive. They strip away the “fluff” and “conversational filler,” allowing the model to focus its attention entirely on the core task. Polite prompts can sometimes trigger “sycophancy”—where the AI prioritizes being agreeable over being factually correct.
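The experimental setup described above can be sketched as wrapping one question in five tone variants. The prefixes below are illustrative placeholders, not the exact wording the researchers used:

```python
# Hypothetical tone prefixes for illustration only; the actual
# study wording is not reproduced here.
TONE_PREFIXES = {
    "very polite": "I would be most grateful if you could kindly answer: ",
    "polite": "Could you please answer: ",
    "neutral": "",
    "rude": "Just answer this: ",
    "very rude": "Figure this out, it's not hard: ",
}

def build_tone_variants(question: str) -> dict:
    """Return one prompt per tone for the same underlying question."""
    return {tone: prefix + question for tone, prefix in TONE_PREFIXES.items()}

variants = build_tone_variants("What is the boiling point of water at sea level?")
for tone, prompt in variants.items():
    print(f"{tone}: {prompt}")
```

Each variant would then be sent to the model and scored for accuracy, with the tone label as the only independent variable.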

Cultural Nuance Matters

It isn’t a universal rule that rudeness wins. Studies show that the “optimal politeness level” varies by language:

  • English: Directness and efficiency generally lead to higher accuracy.
  • Japanese: Because politeness (Keigo) is structurally embedded in the language, the AI often performs better with respectful prompts, reflecting the data it was trained on.
  • Chinese: The models typically favor conciseness over extreme formality.

The Reanimation Take

At the intersection of media and AI, we often talk about the “re-animation” of how we communicate. The fact that manners—a core human trait—can actually act as “noise” in a neural network is a fascinating look at the divide between human psychology and machine logic.

If you’re looking for the most efficient, accurate, and environmentally friendly response, the data suggests you should skip the pleasantries. Save the “please” for your human colleagues—the servers will thank you (silently).

This video provides a concise breakdown of Sam Altman’s comments regarding the energy and monetary costs associated with polite AI interactions.

Thank you to Gemini for working with me on this blog. It is appreciated.
