Ms. Dora Velenczei on “Perfidious Warfare and AI Systems: Assessing the Capacity for Deception in an International Armed Conflict”
There is much discussion these days about military uses of artificial intelligence (AI). While AI can provide many advantages, it also raises thorny questions under international humanitarian law (IHL), otherwise known as the law of armed conflict or the law of war.
Scholar Dora Vanda Velenczei of Australia’s Monash University is a new Lawfire® contributor, and she tackles some of the questions raised by the fact that AI systems lack human intuition and empathy. For example, how should we think about violations of IHL such as perfidy that require, as she puts it, “a mental state capable of consciousness and intentionality”?
As AI systems are rapidly being incorporated into militaries, scholars like Ms. Velenczei are needed to identify, as she does here, gaps in the law of armed conflict and help to develop solutions.
Perfidious Warfare and AI Systems: Assessing the Capacity for Deception in an International Armed Conflict
by Dora Vanda Velenczei
The use of artificial intelligence (AI) in military technology has triggered numerous questions about its ethical use and humans’ ability to control these enhanced means of warfare. The US Department of Defense describes AI as the “…ability of machines to perform tasks that normally require human intelligence.”
As AI capabilities rapidly advance, many states are investing heavily in AI-powered systems to enhance their military effectiveness, improve decision-making speed, and reduce risks to human soldiers.
However, the growing reliance on AI and the deployment of autonomous AI systems in armed conflict raise profound legal and ethical questions, particularly concerning such systems’ susceptibility to deception under international humanitarian law (IHL) and the commission of the prohibited act of perfidy.
Perfidy relies on the concepts of “belief,” “confidence,” and “trust.” Scholars and lawyers alike have generally understood these concepts to require a mental state capable of consciousness and intentionality. In other words, perfidy requires traits traditionally ascribed only to humans.
This article examines whether autonomous AI systems, whose decision-making is powered by advanced deep learning algorithms, can form the requisite “belief,” “confidence,” and “trust” necessary for perfidious conduct to occur.
Definition of Perfidy under International Humanitarian Law
While IHL allows for the use of ruses in wartime, that is, acts intended to mislead or confuse the enemy, it prohibits parties from killing, injuring, or capturing adversaries by resort to perfidy.
Article 37(1) of Additional Protocol I of the Geneva Conventions describes perfidy as “acts inviting the confidence of an adversary to lead him to believe he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict with intent to betray that confidence.”
The following acts are examples of perfidy listed under Article 37(1):
(a) The feigning of an intent to negotiate under a flag of truce or of a surrender;
(b) The feigning of an incapacitation by wounds or sickness;
(c) The feigning of civilian, non-combatant status; and
(d) The feigning of protected status by the use of signs, emblems or uniforms of the United Nations or of neutral or other States not Parties to the conflict.
As Article 37 describes, perfidy involves a party inviting the “confidence” of its adversary with the intent to later betray that confidence. The intent to betray is thus the subjective element of the offense, while the objective element is that the adversary is led to believe the perpetrator is entitled to, or obliged to accord, protection under IHL.
Overview of AI Systems
The use of artificial intelligence is becoming increasingly common in every aspect of our lives, whether civilian or military. Microsoft explains that “general” or “strong” AI refers to systems able to outperform humans in any intellectual task.
General AI works through deep learning. An article published by IBM defines deep learning as “…a subset of machine learning that uses multilayered neural networks, called deep neural networks, to simulate the complex decision-making power of the human brain.”
The authors also note that deep learning powers the majority of AI applications used today. Essentially, it enables AI to process complex, unstructured data and perform tasks with minimal human intervention.
Three different types of deep learning
AI models can be built through “unsupervised, supervised, or reinforcement machine learning techniques.” Deep learning thus draws on three different types of training.
The first type is supervised learning, where human intervention is still needed to label data correctly. The second type is called unsupervised learning, which does not require the labeling of datasets.
The third type is called reinforcement learning, a machine learning technique in which a computer learns to perform tasks through repeated trial and error, thereby imitating the way humans learn (IBM). Thus, different AI systems vary in their levels of autonomy and adaptiveness after deployment.
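For readers less familiar with these terms, the following minimal Python sketch contrasts the three training paradigms described above. It is purely illustrative: the toy signals, thresholds, and the hypothetical “hold_fire” and “engage” action labels are assumptions for demonstration only, not drawn from the sources cited here or from any deployed military system.

```python
# Illustrative toy examples of the three learning paradigms described above.
# All data, thresholds, and action names are hypothetical.
import random

# 1. Supervised learning: a human supplies labelled examples.
labelled_data = [(0.2, "friendly"), (0.9, "hostile"), (0.1, "friendly")]

def supervised_classify(signal, threshold=0.5):
    """Classify a signal using a rule fitted to human-labelled examples."""
    return "hostile" if signal > threshold else "friendly"

# 2. Unsupervised learning: no labels; group raw readings by similarity.
def unsupervised_cluster(readings, split=0.5):
    """Split unlabelled readings into two clusters around a boundary."""
    return {
        "cluster_a": [r for r in readings if r <= split],
        "cluster_b": [r for r in readings if r > split],
    }

# 3. Reinforcement learning: learn by trial and error from reward feedback.
def reinforcement_learn(trials=1000):
    """Estimate the value of two hypothetical actions from noisy rewards."""
    values = {"hold_fire": 0.0, "engage": 0.0}
    counts = {"hold_fire": 0, "engage": 0}
    for _ in range(trials):
        action = random.choice(list(values))  # explore both actions
        reward = random.gauss(1.0 if action == "hold_fire" else 0.2, 0.1)
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

if __name__ == "__main__":
    print(supervised_classify(0.7))
    print(unsupervised_cluster([0.1, 0.4, 0.8, 0.9]))
    print(reinforcement_learn())
```

The key contrast is that the supervised rule is fitted to human-labelled examples, the unsupervised routine finds structure without labels, and the reinforcement loop improves its value estimates only from reward feedback, echoing the trial-and-error learning described above.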
In terms of applying AI systems, the literature distinguishes among high automation, autonomy, and decision support. Highly automated systems can identify and engage a target without additional human input.
These systems’ ability to act independently is limited by algorithms that determine their responses by imposing rules of engagement and setting mission parameters (Oslo Manual, Rule 37).
Autonomous systems are more sophisticated, deep learning-based systems designed to better support the human decision-maker (Oslo Manual, Rule 38).
Lastly, as the Geneva Academy’s AI Consultation Report notes, decision-support systems are designed to inform, rather than replace, human decision-making, although senior military decision-makers may come to place extensive reliance on the input they receive from such systems.
Can AI Recognize Deception?
This article centers on whether “belief” is a uniquely human trait or whether AI can experience it. In other words, can the AI system be misled into believing that it must refrain from attacking the enemy?
Research demonstrates that a range of currently deployed AI systems have learned, through their deep learning capabilities, how to deceive humans. Language models and other AI systems have already learned from their training to deceive through manipulation and cheating.
However, scholars have observed that the question of whether an AI military system in active deployment can be deceived has largely been overlooked.
If an autonomous AI system is carrying out military activities independently, it is unclear whether the conditions of perfidy could be met. If the AI system is remotely operated, establishing belief may be easier, given that it is the human operator who is deceived via the AI system.
Boothby illustrates a scenario in which an AI weapon programmed to hold fire upon recognizing a white flag is deceived when an adversary abuses that protection to attack personnel.
While the perpetrator invites confidence, the question remains whether the AI system can genuinely form the belief required for perfidy, given its pre-programmed nature and perceived incapacity to form human emotions and beliefs.
Deep learning is a fundamental characteristic of autonomous AI systems. The most commonly employed deep learning networks are the convolutional neural network (CNN) and the recurrent neural network (RNN).
Trzun likens CNNs to the functioning of the central nervous system of living organisms. At the same time, he concedes that autonomous AI systems lack emotional intelligence and contextual understanding. AI systems may misinterpret complex social and cultural contexts, resulting in poor or inappropriate decisions.
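To make the CNN concept more concrete, below is a minimal sketch of a convolutional neural network written with the PyTorch library. It is an assumption-laden toy: the two output classes, the 32x32 input size, and the layer sizes are hypothetical choices for illustration, not a description of any actual weapon system’s architecture.

```python
# Minimal sketch of a convolutional neural network (CNN) in PyTorch,
# purely to illustrate the kind of architecture referenced above.
# The input size, layer sizes, and two-class output are hypothetical.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

if __name__ == "__main__":
    model = TinyCNN()
    dummy_image = torch.randn(1, 3, 32, 32)  # one fake 32x32 RGB image
    logits = model(dummy_image)               # raw scores for each class
    print(logits.shape)                       # torch.Size([1, 2])
```

The point of the sketch is simply that a CNN’s “recognition” is the output of stacked numerical filters applied to pixel values; nothing in the computation involves belief, trust, or an understanding of why a white flag or protective emblem matters.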
Consequences of the lack of human intuition and empathy
The lack of human intuition and empathy also limits AI systems’ ability to handle nuanced, unpredictable situations. This limitation is crucial because the legal concepts of “belief” and “confidence” under Article 37(1) of Additional Protocol I entail a subjective mental state, involving trust or reliance that safeguards the adversary’s good-faith expectations under IHL.
Military actors frequently emphasize trust and ethics as intertwined and essential in the development and deployment of autonomous AI systems in warfare.
However, as Troath argues, this trust is often a political and strategic construct designed to legitimize AI use rather than a reflection of genuine human-like belief or understanding. In this sense, trust in AI is instrumental, shaped by social and institutional contexts rather than by AI’s cognitive capacities.
Complementing this perspective, Vestrucci, Lumbreras, and Oviedo argue that “…AI systems’ capacities come close to some aspects of the believing process, such as believing in something as a consequence of a recurring pattern of events, or believing as the result of learning from new data.” However, they, too, conclude that AI systems are currently unable to form and express beliefs.
Accounting for the Gap between how AI Functions and IHL
Together, these insights reveal a fundamental gap between the political construction of trust in AI systems and the legal requirements for perfidy, suggesting that an autonomous AI system’s capacity to be deceived under Article 37(1) of Additional Protocol I is highly limited, if not non-existent, under current understandings.
Perfidy presupposes an actor capable of subjective reliance, which AI lacks. Nonetheless, as AI systems become increasingly autonomous, there is an emerging debate about whether legal frameworks should evolve to recognize AI’s functional decision-making as a form of “belief” relevant for perfidy.
This gap necessitates a reexamination of IHL’s protective purposes and could impact accountability mechanisms. Practically, AI’s inability to contextualize or empathize means that deceptive tactics exploiting these limits may not legally qualify as perfidy, raising challenges for compliance and enforcement.
Dora Vanda Velenczei is a PhD scholar and sessional academic at Monash University. Her research interests include emerging technologies of military significance, the law of armed conflict, and national security law.
__________________________________________________
The views expressed by guest authors do not necessarily reflect my views or those of the Center on Law, Ethics and National Security, or Duke University. (See also here).
Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!




