The law and the U.S.’ new declaration on military uses of AI: some observations
Is the U.S. becoming the leader in the development of norms for the military use of artificial intelligence (AI)? Maybe, but there is still much work to do and some red flags to be addressed. Among other things, the U.S. needs to be clear as to which “guidelines” and “best practices” are actually mandated by international law, and which are simply policy-driven norms.
As to the latter, the U.S. should advocate strict adherence to existing law, but should not let its eagerness to lead AI norm development cause it to agree to significant policy restrictions that are not legally required, or to compromise its current interpretations of international law. The U.S. also ought to continue its opposition to a treaty banning AI weapons.
Here’s the context: last Thursday, the U.S. Department of State (“DoS”) released what it calls a “non-legally binding guideline” entitled “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” (the “Declaration”).
According to DoS:
The aim of the Declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to help guide states’ development, deployment, and use of this technology for defense purposes to ensure it promotes respect for international law, security, and stability. (Emphasis added.)
In announcing its recently updated Directive 3000.9 (“Autonomy in Weapon Systems”), the U.S. Department of Defense (“DoD”) unabashedly expressed its leadership aspirations:
The update reflects DoD’s strong and continuing commitment to being a transparent global leader in establishing responsible policies regarding military uses of autonomous systems and artificial intelligence (AI). (Emphasis added.)
The U.S.’ evident desire to lead in the development of norms as to AI weapons and processes is understandable. Last November, the Congressional Research Service pointed out that the U.N. has sponsored a “Group of Governmental Experts” that has examined these issues for at least five years, while “approximately 30 countries and 165 nongovernmental organizations have called for a preemptive ban on [lethal autonomous weapons].”
Given that the U.S., cognizant that several potential adversaries are actively developing such potentially game-changing weaponry, doesn’t support a ban, the current effort is an admirable step to preempt attempts to regulate this rapidly advancing technology in a way that doesn’t serve U.S. interests or, for that matter, those of its friends and allies.
However, as with so many other “political declarations,” the devil is in the details. Let’s unpack a few of them.
“Non-legally binding”? “Best practices”?
DoS describes its Declaration as “a series of non-legally binding guidelines describing best practices for responsible use of AI in a defense context.”[1] Actually, some of the statements are inarguably legally binding and not merely “guidelines” or “best practices.” Professor Mike Schmitt, one of the world’s top experts in international law related to armed conflicts, expressed this concern about the Declaration in an email to me:
I agree that the Political Declaration, considered in light of the DoDDI 3000.9 update, accurately reflects appropriate behavior with regard to military AI use. My only concern, as, for example, with the UN GGE’s so-called “Voluntary Non-binding Norms of Responsible State Behavior” for cyber operations, is that States must not conclude that, as political norms, they are necessarily non-binding.
For instance, States shoulder a legal obligation to review weapons (means of warfare), including those with AI capabilities. Similarly, a commander or other decision-maker introducing a destructive AI-capable system into the battlespace is legally required to have an appropriate level of understanding as to how that system will likely operate in that environment.
Therefore, all such hortatory best-practices statements would benefit from a caveat that the inclusion of a norm does not mean it is not reflective of binding international law, as has been done by some States in the cyber context.
What law?
As Mike suggests, if the U.S. is to lead, it needs to be explicit about what the law currently mandates with respect to AI and work through how its views mesh (or don’t mesh) with allies and partners.
In this respect, it is curious that DoS chose the term “international humanitarian law” rather than “law of war,” the term DoD uses in DoDDI 3000.9 as well as in its Law of War Manual. Though many believe the terms may be interchangeable, DoD notes the nuance in the Law of War Manual ¶ 1.3.1.2:
International humanitarian law is an alternative term for the law of war that may be understood to have the same substantive meaning as the law of war. In other cases, international humanitarian law is understood more narrowly than the law of war (e.g., by understanding international humanitarian law not to include the law of neutrality).
If the U.S. is aspiring to global leadership in AI development, internal nomenclature consistency will help.
More substantively, the U.S. has interpretations of international law that are at odds with the views of many (and, in some cases, most) countries. For example, Article 2(4) of the U.N. Charter generally bars the threat or use of “force,” but Article 51 permits the right of self-defense only “if an armed attack occurs.”
Thus, most countries interpret this language to mean that merely being the victim of a use of force does not trigger a right to self-defense unless that force rises to the level of an “armed attack.”
The U.S. doesn’t see it that way. In 2012, then-DoS legal adviser Harold Koh observed that “some other countries and commentators have drawn a distinction between the ‘use of force’ and an ‘armed attack,’” but insisted that “the United States has for a long time taken the position that the inherent right of self-defense potentially applies against any illegal use of force.”
Additionally, the U.S. has a range of differences, even with allies and partners, as to what the law of war comprises and how it applies in a given circumstance. For example, while the U.S. considers much of what is found in Protocol I of the Geneva Conventions to be customary international law, it doesn’t so conclude as to other key provisions (see, e.g., here). Consequently, the U.S. is not a party to it (rightly in my view) even though 168 states are.
The U.S. needs to examine the statements in the Declaration against its own interpretations of international law to determine where, if at all, there is potential for divergent readings that may cause friction with friends and allies with respect to AI.
In evaluating the law with respect to AI, it is vitally important that there be no confusion between what policy-driven rules of engagement might have provided in the past and what rules for AI should be. Among other things, the most urgent need for AI weaponry would not necessarily be in a counterinsurgency (COIN) or counterterrorism (CT) setting, but rather in one involving large-scale combat operations (LSCO) against an adversary also employing advanced AI weaponry and systems.
In a 2021 article, Lt Gen Chuck Pede and Colonel Peter Hayden warned about the risks of failing to understand the need for “legal maneuver space” during LSCO. They called it a “capability gap” and assessed it to be “one of the greatest dangers to our future success.” They said:
Twenty years of COIN and CT operations have created a gap in the mindset—in expectations—for commanders, soldiers, and even the public. Army forces suffer our own CT “hangover,” having become accustomed to operating under highly constrained, policy-driven rules of engagement. Compounding this phenomenon is public perception. Nongovernmental organizations, academics, and critics consider “smart bombs” and CT tactics to have become normative rules in warfighting. In short, they are not.
This gap—the space between what the law of war actually requires, and a growing expectation of highly constrained and surgical employment of force born of our own recent experience coupled with our critics’ laudable but callow aspirations—left unchecked, threatens to unnecessarily limit a commander’s legal maneuver space on the LSCO battlefield.
True, Pede and Hayden were not speaking specifically to AI, but their concerns are applicable to it.
The U.S. needs to avoid imposing restrictions on itself and its allies that are not required by international law but that could hobble commanders’ ability to exploit the battlefield potential of AI against an AI-equipped adversary unburdened by such self-imposed limits (and perhaps unburdened even by the law itself).
To reiterate, there needs to be a keen understanding that the policy preferences of the past that limited the lawful use of military capabilities should not be indiscriminately applied (if applied at all) to AI capabilities. We should not forget that AI’s most imperative use would likely be in LSCO against a ruthless, AI-equipped adversary, not in a low-tech insurgency. Failing to appreciate this could be a formula for catastrophic defeat.
Definitions matter. The Declaration uses several phrases and terms that call for more refined definitions. Indeed, Human Rights Watch has already declared the Declaration “flawed,” largely because of complaints about what particular language really means. Some definitions may not lend themselves to much further elaboration, but they could be usefully illustrated with examples and hypotheticals.
Draftsmanship matters too. Consider this somewhat curious statement in the Declaration’s list:
States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
Is that meant to suggest that “human control and involvement for all actions” is necessary only if nuclear weapons employment is involved? It may be that this is meant to bar “dead hand” devices. A Military.com article said that the “Soviet Union developed a world-ending mechanism that would launch all of its nuclear weapons without any command from an actual human…if its entire armed forces were wiped out.” (The U.K. seems to have a more analog process for such circumstances; see here.)
In any event, there may be other technologies and capabilities that should not have a “dead hand,” but, applying the interpretive axiom of expressio unius est exclusio alterius (“the expression of one thing is the exclusion of the other”), it would seem no such “anti-dead hand” norm is being advocated for systems other than nuclear weapons.
All of this simply means that there is work yet to do; the Declaration should not be considered the end state. The project needs ongoing attention; dormancy will neuter the effort.
The role of law and AI: domestic and international
Not to put too fine a point on it, but there are well-meaning “political declarations,” and there are hard military realities. With respect to the military uses of AI, Vladimir Putin, for all his faults, may have been right when he said in 2017:
Artificial intelligence is the future, not only for Russia, but for all humankind . . . It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.
This eerily echoes a claim made by two Canadian scholars who ominously warned in 1988 that “technology has permitted the division of mankind into ruler and ruled.”
With respect to technologies like AI that are, as some say, revolutionizing warfare at hyper speed, how can the law keep up? We need to be conscious that law, however well-intended, can create challenges.
Domestic law can be problematic, even when national security interests are involved. Although I wasn’t thinking about AI, in a 1999 monograph, “Technology and the 21st Century Battlefield: Recomplicating Moral Life for the Statesman and the Soldier,” I tried to identify the complications law can create and the potential dangers posed by nations for which the law is easily manipulated, if it exists at all.
[E]ven America’s vaunted free-enterprise system, the engine that fuels its technological might, has its own recomplications.
Consider that American values—in this instance the commitment to full and fair competition within a capitalistic economy—might deny U.S. troops the best technology on 21st century battlefields. Author David Shukman explains: “While the Western military struggle for a decade on average to acquire new weapons, a country with commercially available computer equipment and less rigorous democratic and accounting processes could field new systems within a few years. It is the stuff of military nightmares.”
Although high-tech systems are touted as a means to get inside an adversary’s “decision loop,” the reality is that nations unencumbered by Western-style procurement regulations may well be able to get inside our “acquisition loop” and field newer weaponry even before the United States finishes buying already obsolete equipment.
Just as the speed of technological change creates difficulties for the procurement process, so it does for those concerned with law, ethics, and policy. President Harry Truman once remarked that he feared that “machines were ahead of morals by some centuries.”…Consequently, statesmen and soldiers must accelerate their efforts to develop norms of law, ethics, and policy that honor this nation’s finest ideals while at the same time appreciating that “technology is America’s manifest destiny.”
With respect to international law, however, it is vitally important, as already suggested above, that the U.S. and its allies not fall into the trap of creating norms that unnecessarily put them at a disadvantage with respect to belligerents who will not honor the rules. This means being very wary of creating new AI-specific legal restrictions beyond those already applicable to armed conflicts.
One of the most unwise approaches would be the kind of AI-specific treaty that Human Rights Watch and others have advocated. I have always believed that technology-specific bans are usually misguided. In the first place, properly used, AI weapons carry great potential to help protect civilians, and they can do so within the existing framework of international law.
As I said in a 2015 essay (“A Better Way to Protect Civilians and Combatants than Weapons Bans: Strict Adherence to the Core Principles of the Law of War”):
[T]he law of war ought to be technologically agnostic. Banning a specific weapon that can, in fact, be used in compliance with the core principles of the law of war invites science to come up with other ‘legal’ weapons equally or more devastating. But more than that, we are inhibiting the ability of science to produce new means that can accomplish the necessary military mission, but do so in a less destructive and less lethal manner, or even without any of the permanent physical injuries conventional weaponry causes.
In another essay, I argued:
All of this highlights the complications that can arise when international law departs from focusing on principles and chooses instead to simply denounce particular technologies. Given the pace of accelerated scientific development, the assumptions upon which the law relies to justify barring certain technologies could become quickly obsolete in ways that challenge the wisdom of the prohibition.
In short, the weight of the effort should be focused on understanding the key features of AI, and then applying existing legal principles—which are time and experience proven—to the emerging technologies.
Put another way, an informed and energetic effort to apply the facts of AI capabilities to the current framework of international law needs to take place. Only when that effort demonstrably fails should experimentation with new rules occur.
To reiterate, the Declaration is a useful starting point, but the need for further work should not be underestimated or ignored.
Notes
[1] Below are the twelve “best practices” the Declaration lists:
- States should take effective steps, such as legal reviews, to ensure that their military AI capabilities will only be used consistent with their respective obligations under international law, in particular international humanitarian law.
- States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
- States should ensure that senior officials oversee the development and deployment of all military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
- States should adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations.
- States should ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
- States should ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities.
- States should ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation.
- States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those capabilities and can make context-informed judgments on their use.
- States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
- States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. Self-learning or continuously updating military AI capabilities should also be subject to a monitoring process to ensure that critical safety features have not been degraded.
- States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior. States should also implement other appropriate safeguards to mitigate risks of serious failures. These safeguards may be drawn from those designed for all military systems as well as those for AI capabilities not intended for military use.
- States should pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, to promote the effective implementation of these practices, and the establishment of other practices which the endorsing States find appropriate. These discussions should include consideration of how to implement these practices in the context of their exports of military AI capabilities.
Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!