LENS Essay Series: “Wartime Propaganda in the Age of Generative Chatbots”

Do you know what generative artificial intelligence (AI) is?  Do you know how its use in future conflicts could, among other things, super-empower propaganda?  Today’s post can help to answer those questions and more.


Historically, the use of propaganda has been a permissible means and method of warfare (subject to some limitations, such as the prohibition on using it to incite violations of the law of war).  Despite its obvious utility to wartime success, the U.S. Department of Defense Law of War Manual currently claims that “[d]iminishing the morale of the civilian population and their support for the war effort does not provide a definite military advantage” (but see here for a different perspective).

Because an object must, among other things, provide a “definite military advantage” in order to be targetable as a military objective, the instrumentalities that spread propaganda (e.g., radio and television stations) have typically not been considered proper military objectives for attack if that was the only reason for targeting them.

Does powerful new technology that supercharges propaganda warrant a re-evaluation of how it is traditionally viewed by the law of war?

In recent years, it has become evident that the enormous technical capabilities of the digital age have significantly changed modern propaganda and, it is argued, made it far more dangerous.

Now we are seeing yet another new technology – generative AI – coming on the scene and it may dwarf the propaganda potential of even the most powerful information technologies that have emerged in recent years.  Writing in the April issue of Foreign Affairs, Josh Goldstein and Girish Sastry explain why:

Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

With technology now able to supercharge disinformation, the capabilities of propaganda have been revolutionized to a point far beyond what was in the realm of the possible just a few years ago.

Consequently, should this revolutionary technology be treated by the law of armed conflict in the same way as legacy propaganda instrumentalities like radio and television stations are currently?  Today’s addition to the LENS Essay Series grapples with such issues and more.

In her cutting-edge essay, “Wartime Propaganda in the Age of Generative Chatbots,” Ashley DaBiere, who just graduated from Duke Law, gives us an analysis of these most timely issues.  Here’s the abstract:

Along with the initial buzz of excitement about the seemingly endless capabilities of ChatGPT came voices of concern that generative chatbots could be the end of life as we know it. Although this doomsday mentality is perhaps more pessimistic than many would like to believe, and while it is true that generative chatbots could provide humanity with many benefits, one danger is especially apparent: the possibility of AI-generated wartime propaganda. Because of the nature of the technology, as well as the potential difficulties of regulating its use from an international law perspective, an advanced generative chatbot could provide a novel and modern platform capable of influencing the masses. This possible danger raises a host of issues under the law of armed conflict. This Essay considers: (1) the legality of a military targeting data centers hosting a generative chatbot that is disseminating wartime propaganda, and (2) what players can be held responsible under international law for war crimes “committed” by a generative chatbot.

If Ashley’s name seems familiar to you, it may be that you read her previous contribution to the Essay Series: “Which Protectors Need More Protection? Analyzing Legal Possibilities of Reducing Patent Protection to Protect National Defense Companies.”  Ashley, who is something of a polymath, is the first person to have two essays selected for the Series!  Expect to hear great things about her in the years to come!

Be sure to read her full essay (found here).

About the Author:

Ashley DaBiere graduated from Duke University School of Law in 2023 with a J.D., where she was an Executive Editor for the Duke Journal of Constitutional Law & Public Policy and a Senior Research Editor for the Duke Law & Technology Review. Ashley graduated from Cornell University in 2019 after majoring in Biological Sciences with a concentration in Neurobiology, where she spent several semesters studying neuron regeneration through an independent research project. During her 1L summer, Ashley interned in the Department of In-House Counsel at Catalent Pharma Solutions. During her 2L summer, she worked at Desmarais, LLP, a patent litigation firm in New York City.

Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself! 
