Ms. Gwyneth Bernier on “From Redcoats to Bit Code: Balancing American Liberty and Security in the Age of AI”
Today’s post is by Duke Law 2L (and one of my Research Assistants!) Ms. Gwyneth Bernier. Gwyneth takes a very innovative approach to what she describes as the “tension between liberty and artificial intelligence (AI)” by drawing a parallel between the concerns of Revolutionary-era Americans and the issues many people raise about AI today.
I’ve always believed there is much to learn from history, and Gwyneth’s essay is a textbook example of that. I think you’ll really enjoy her thoughtful analysis!
From Redcoats to Bit Code: Balancing American Liberty and Security in the Age of AI
by Gwyneth Bernier
Independence Day celebrations this past July unexpectedly brought the tension between liberty and artificial intelligence (AI) to the forefront of our national consciousness. Journalist Karen Hao recently published a book titled Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. In it, Hao draws parallels between the rise of AI and European colonialism by framing the British East India Company as an analog to today’s Big Tech.
According to Hao, the British East India Company started as a commercial enterprise trading in India and, over time, accumulated so much economic influence that it began exercising political control—effectively ruling vast territories like India—not unlike how AI platforms are starting to amass power today.
But Hao’s comparison could also extend to British rule of the American colonies. Just as the founding fathers resisted distant and unaccountable colonial rule, Americans today must confront opaque private power in the form of invisible governance by algorithms.
AI’s potential
Let’s start by acknowledging that the potential for artificial intelligence to transform national security for good is undeniable. Otherwise, the Department of Defense would not have entered into an $800 million development contract with four major AI platforms in June.
AI enhances intelligence gathering by analyzing data from sources such as satellite imagery to identify patterns. It bolsters cybersecurity by quickly detecting vulnerabilities and cyberattacks. And in military operations, AI enables autonomous reconnaissance and strikes while reducing risk to human personnel.
Just last year, the Pentagon’s Project Maven, which enables military personnel to decide on 80 targets per hour as opposed to 30 without AI, located rocket launchers in Yemen and surface vessels in the Red Sea for destruction. With a targeting cell of just twenty people, its efficiency was comparable to that of Operation Iraqi Freedom, which relied on a targeting cell of 2,000.
Colonial Rule: Three key issues
But colonial rule also offered significant national security benefits to the growing American colonies. Britain’s military presence fortified the colonies’ northern border against the French and the southern border against the Spanish. Additionally, the British navy’s global influence deterred potential attacks from other foreign powers that might have sought control of the colonies’ abundant natural resources and strategically important trade routes.
Yet the colonists eschewed this protection because of its steep price: the constraints on their liberty. “Give me liberty or give me death,” Patrick Henry famously declared. But what made English rule so intolerable?
First, the colonists abhorred taxation without representation. They grew tired of being subject to laws drafted by political elites behind closed doors in London, far removed from colonial realities—so much so that they tossed 46 tons of taxed British tea into Boston Harbor.
At the Stamp Act Congress of October 1765, nine colonies dispatched delegates to pass a Declaration of Rights and Grievances, which asserted that “it is inseparably essential to the freedom of a people, and the undoubted rights of Englishmen, that no taxes should be imposed on them, but with their own consent, given personally, or by their representatives.”
Second, the colonists resented the British extracting raw materials and labor from the colonies without appropriate compensation. In Letters from a Farmer in Pennsylvania, Pennsylvania legislator John Dickinson remarked that the colonists
“have been told, that they are sinking under an immense debt…contracted in defending the colonies—that the[y] are so ungrateful and undutiful…even to the support of the army now kept up for their ‘protection and security.’” Yet, Dickinson countered, it was still intolerable that “British colonies [were] to be drained of the rewards of their labour.”
Third, the colonists feared British tyranny in the form of surveillance. Writs of assistance were a major point of contention leading up to the Revolutionary War. Unlike specific search warrants, which require probable cause and must describe the place to be searched and the items to be seized, writs of assistance were broad, allowing the search of any place at any time with no specific justification.
In his 1761 courtroom remarks, Massachusetts attorney James Otis argued “this writ of assistance is…the worst instrument of arbitrary power, the most destructive of English liberty and the fundamental principles of law, that ever was found in an English law-book. Every one with this writ must be a tyrant…one of the most essential branches of English liberty is the freedom of one’s house…This writ…would totally annihilate this privilege.”
It is clear that the soon-to-be American people yearned for autonomous and transparent self-government, even if it meant risking their national security—as it would be risked in the Spanish-American War, the American Indian Wars, and the Quasi-War with France.
Parallels between past colonial concerns and present concerns about AI
The colonists’ three main concerns from nearly a quarter millennium ago bear striking similarities to growing public concern over AI’s potential drawbacks.
First, just as the colonists once questioned rule by a distant monarchy, Americans today are questioning whether the AI boom is eroding their right to self-determination. The most advanced AI systems—such as OpenAI’s GPT models or Google DeepMind’s Gemini—operate with minimal transparency, often prioritizing corporate interests over individual rights.
Yet these models are increasingly being used to make decisions in critical areas like employment and healthcare without democratic oversight. American legal scholar Frank Pasquale described this phenomenon as the rise of a “Black Box Society,” where decision-making power in democratic societies becomes concentrated in technological entities that are unknowable and unaccountable to the people they affect.
This trend has drawn growing bipartisan concern from policymakers; Senator Elizabeth Warren (D‑MA) and Senator Josh Hawley (R‑MO), among others, criticized closed government hearings with Bill Gates and Elon Musk as a forum for Big Tech executives to steer AI regulation policy from the shadows. Warren in particular astutely summed up the legislators’ concern, saying “These tech billionaires want to lobby Congress behind closed doors with no questions asked. That’s just plain wrong.”
Second, just as the colonists resented Britain extracting their raw materials and labor, critics today highlight how AI platforms are extracting millions of Americans’ data and intellectual labor without compensation to enrich a handful of AI companies. AI companies harvest vast amounts of data—often without explicit consent or fair compensation—to train their models, profiting enormously from collecting everyday users’ personal information.
As one commentator has argued:
“We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI…Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds. You are owed profits for your data that powers today’s AI.”
Even worse, OpenAI and other firms are taking intellectual property from non-users by scraping tens of thousands of copyrighted books and millions of websites without permission.
Authors Richard Kadrey, Sarah Silverman, and Christopher Golden are among the plaintiffs in an ongoing class action lawsuit against Meta, alleging:
“Because [Meta’s] LLaMA language models cannot function without the expressive information extracted from Plaintiffs’ Infringed Works and retained inside the LLaMA language models, these LLaMA language models are themselves infringing derivative works, made without Plaintiffs’ permission and in violation of their exclusive rights under the Copyright Act.”
Third, just as the colonists once objected to British writs of assistance, today’s Americans are speaking out against AI-driven predictive policing systems. These tools ingest years of crime reports, arrest records, and social media data to forecast where crime is “likely” to happen and who might commit it.
The problem, the NAACP points out, is that these datasets reflect decades of over-policing in marginalized neighborhoods, meaning the algorithms simply send officers back to the same areas, amplifying a problematic cycle. People now find themselves under surveillance simply for living in a “predicted” hot spot. An article from Deloitte on AI surveillance and predictive policing aptly describes how “cities need to consider if using technology for surveillance and policing implies making concessions to convenience at the expense of freedom.”
Concluding thoughts
How should Americans deal with these challenges? Notwithstanding Keanu Reeves’s on-screen crusade in The Matrix, the kind of revolution needed to be effective against intangible technological entities is not one of arms and bloody battlefields, but rather an intellectual one backed by concrete action.
For example, a group of scholars, in a position paper for the July 2025 International Conference on Machine Learning, offers one such path: a world in which people “meaningfully participate in the development and governance of the AI systems that shape their lives.”
On the premise that “grassroots participatory methodologies” can reduce AI bias while increasing AI responsiveness to social needs, the authors envision collective and inclusive data ownership models (such as local data trusts), transparent design practices, and oversight led by stakeholders.
The lesson from 1776 is clear: the true strength of a nation lies not only in its defenses, but also in its people’s ability to govern themselves. In the age of AI, that means building technological systems whose power serves and enhances the American people’s liberty, rather than forcing them to relinquish it.
About the Author:
Gwyneth Bernier (J.D. 2027) is a 2L at Duke University School of Law. She is from New York City and graduated from Duke University, where she majored in International Studies and French. At the law school, Gwyneth is involved with the Veterans Pro Bono Project and the Moot Court Board. She worked as a Research Assistant for Professor Steven Schwarcz during her 1L summer, and will spend her 2L summer at Davis Polk in New York City.
Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!