Guest Post: A Quick Primer on International Laws Applicable to Lethal Autonomous Weapon Systems

Fuzzy about the international law applicable to Lethal Autonomous Weapon Systems?  Allow today’s guest blogger, Zhanna Malekos Smith, to help clarify. Readers may recall that Zhanna joined the Center on Law, Ethics, and National Security (LENS) and the Duke Center on Law & Technology (DCLT) in the fall of 2018 as the Law School’s inaugural Reuben Everett Cyber Scholar.  Here’s her quick take on this complicated topic:

This post offers a quick primer on the international laws applicable to lethal autonomous weapon systems.

According to U.S. Air Force Lt. Gen. Jack Shanahan, who leads the Pentagon’s Joint Artificial Intelligence Center (JAIC), “We are nowhere close to the full autonomy question that most people seem to leap to a conclusion on when they think about DoD and AI.” The JAIC was established in summer 2018 and functions as the central conduit for strategy development and synchronization of all DoD AI activities.

With regard to the broad development and application of military AI, the Congressional Research Service’s latest report on Artificial Intelligence and National Security notes that AI technologies have already “been incorporated into military operations in Iraq and Syria.” Further, by 2035 the Joint Force is expected to have implemented human-machine teaming with armed robotics in warfighting, based on the 2016 Joint Concept for Robotic and Autonomous Systems.

At the same time, however, advocacy groups like Human Rights Watch (HRW) and the Campaign to Stop Killer Robots are asking Congress to ban “killer robots” because they “would increase the risk of death or injury to civilians during armed conflict” and violate international humanitarian law and the law of armed conflict.

International law scholar Michael Schmitt is critical of this approach and posits: “Of course, the fact that autonomous weapon systems locate and attack persons and objects without human interaction raises unique issues. These challenges are not grounds for banning the systems entirely.” Other voices in the field express concern that it is “too early for a ban” and recommend regulating the methods for employing these systems.

To engage with the merits of both sides of the debate, however, an understanding of the core international legal framework is vital.

First, the 1868 Saint Petersburg Declaration is one of the earliest instruments to ban the use of certain weapons in warfare. The U.S. is not a party to this Declaration. It states in relevant part that “the progress of civilization should have the effect of alleviating as much as possible the calamities of war” and that there should be a prohibition on weapons that “uselessly aggravate the sufferings” of combatants.

But what legal standards regulate the means and methods of armed conflict?

To begin, Additional Protocol I to the Geneva Conventions is a helpful starting point of reference. A caveat to this interpretative lens, however, is that while the U.S. recognizes much of the Protocol as customary international law, it is not a party to the treaty.

Additional Protocol I requires the military to distinguish between combatants and civilians. Article 51(2) states that “[t]he civilian population as such, as well as individual civilians, shall not be the object of attack.” Additionally, the “constant care” obligation embodied in Article 57(1) of Additional Protocol I echoes the centrality of observing “risk mitigation and harm prevention” practices in military operations. It provides that “[i]n the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects.”

The Martens Clause of 1899 also imparts clarity here. It states that even under circumstances where the rules and customs surrounding armed conflict are undergoing development (e.g., using autonomous weapon systems in warfighting), the principles of international humanitarian law still protect individuals:

“Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”

From HRW’s perspective, fully autonomous weapons would violate the Martens Clause because the design concept fails the “requirements of the public conscience[.]” The logic is that “[b]y eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians.”

The Department of Defense’s (DoD) position, however, is that weapon systems should not supplant human judgment in employing lethal force. Specifically, DoD Directive 3000.09, Autonomy in Weapon Systems, provides that “[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Another instructive legal source here is Article 35 of Additional Protocol I (1977), which provides three overarching principles on the proper methods or means of warfare:

“1. In any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited.
2. It is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.
3. It is prohibited to employ methods or means of warfare which are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment.”

In addition, Article 36 mandates that each High Contracting Party review a new weapon prior to its use in warfare, in order to ensure it comports with international humanitarian law:

“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”

Lastly, while the Convention on Certain Conventional Weapons (CCW), as amended in 2001, does not expressly mention AI or automated systems, the general principles of the Convention might still apply to these emerging weapons. How? The articulated purpose of the CCW is to “ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” According to Michael Meier, the Campaign to Stop Killer Robots is advocating for a “new protocol in the CCW that would preemptively ban the development” of lethal autonomous weapon systems. Given the ongoing debate over whether lethal autonomous weapons could cause unnecessary suffering to combatants or civilians, their status under the CCW remains an open question.

In terms of recent AI policy developments, on February 11, 2019, U.S. President Donald Trump signed an executive order to prioritize federal funding for artificial intelligence (AI) research and development. The following day, the Department of Defense (DoD) released a 17-page strategy report, Harnessing AI to Advance Our Security and Prosperity.

The report identifies the DoD’s strategic focus areas: cultivating partnerships, building an AI workforce, supporting research and development, and leading the way in military ethics and AI safety. Armed with this basic framework of legal regimes, one can begin to engage with the competing perspectives on how best to regulate the application of AI technologies in warfighting.

As we like to say at Lawfire®, check the facts, assess the arguments, and decide for yourself!!!
