“Changing the Conversation: The ICRC’s New Stance on Autonomous Weapon Systems”

Today we continue our examination of the International Committee of the Red Cross' (ICRC) new stance on autonomous weapons with an essay by my friend, Richmond Law's Dr. Rebecca Crootof. (You may recall her from the podcast earlier this year, "Artificial Intelligence and the Future of Warfighting," with Army Brigadier General Patrick Huston.)

Lawfire's® dialogue on this new ICRC position was kicked off by Cornell Law's Brian L. Cox's essay, "In Backing Future Autonomous Weapons Ban, the ICRC Appears Intent on Repeating Past Mistakes."

Rebecca takes a different tack in her analysis. She shares skepticism about outright weapons bans, but argues that there "is a real need for new rules," and says the "ICRC's suggested rules are a critical step towards [them]." At the same time, she offers some cautions and introduces her own innovative ideas.

Here’s Dr. Crootof’s very interesting and ‘fresh’ read on a complicated issue: 

Changing the Conversation: The ICRC’s New Stance on Autonomous Weapon Systems

Rebecca Crootof

On May 12, the International Committee of the Red Cross (ICRC) took a new stance on the regulation of autonomous weapon systems, which it defines as systems that “select and apply force to targets without human intervention.” It abandoned its previous agnosticism to explicitly state that there should be “new legally binding rules that specifically regulate AWS.”

This marks a seismic and much-needed shift in the international conversation. For too long, we have been mired in an unproductive to-ban-or-not-to-ban debate. While I share many of the ban advocates’ concerns, my analysis of factors relevant to a weapon ban’s success suggests that a comprehensive prohibition on autonomous weapon systems is unlikely to succeed. In terms of their social usage, autonomous weapon systems are far more akin to crossbows, submarines, and other technologies which have not been successfully banned than the (oft-cited, but laughably narrow) “successful” ban on permanently blinding lasers.

But there is a real need for new rules. While autonomous weapon systems can sometimes be usefully analogized to other weapons, in other situations there is no apt analogy—especially when evaluating who should be accountable for the unexpected, harmful consequences of employing war algorithms.

Happily, states are often willing to regulate weapons that they will not voluntarily relinquish, and the ICRC’s suggested rules are a critical step towards doing so.

The ICRC has proposed two prohibitions on certain types of autonomous weapon systems and four usage regulations, many of which are clearly derived from the SIPRI/ICRC “Limits on Autonomy in Weapons Systems” report. Let’s unpack them.

Proposal 1: A Ban on Unpredictable Autonomous Weapon Systems

Unpredictable autonomous weapon systems should be expressly ruled out, notably because of their indiscriminate effects. This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained.

There’s something here for everyone!

At first glance, and if one were so inclined, this could be read as a roundabout way of supporting a ban on autonomous weapon systems. After all, as detailed in the UNIDIR “Known Unknowns” report, all autonomous weapon systems will sometimes act unpredictably, resulting in unpredictable effects.

But if the ICRC intended to propose a ban, there would be no need to suggest any additional rules; all autonomous weapon systems would be prohibited as inherently unpredictable. Instead, the ICRC is exceedingly careful not to use the word “ban” in its recommendations or background position paper, save only for references to the “Mine Ban Convention.” Clearly, it has something else in mind.

Meanwhile, those who tend to believe that all new problems can be solved with extant, time-tested international humanitarian law might read this as a simple rearticulation of the treaty and customary prohibition on the use of weapons which are inherently indiscriminate because their effects “cannot be limited as required by international humanitarian law.” There is some support for this reading in the ICRC’s background paper, which explicitly links this proposal to the longstanding prohibition on inherently indiscriminate weapons.

While some decry tech-specific statements of familiar tech-neutral international humanitarian law rules, articulating tech-specific rules helps focus attention on particular problems, fosters shared understandings, and constrains later actors from engaging in self-interested interpretations. This recommended prohibition prompts us to confront the question of when an autonomous weapon system's effects will be "sufficiently" understandable, predictable, and explainable to legitimize its use.

I also read this as operationalizing one of my preferred regulations: prohibiting in-field learning. Machine learning systems can be trained in a “sandbox” and then frozen before being deployed; alternatively, they might be permitted to continue to adjust their parameters while in the field. While the latter approach enables the system to take action that is more tailored to its use environment, it also introduces a terrifying amount of unpredictability. The ICRC’s recommendation would forbid this, as—per their background paper—a system’s functioning would violate this prohibition if the system “changes during use in a way that affects the use of force (e.g. machine learning enables changes to targeting parameters over time).”
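For readers who want to see the distinction concretely, here is a minimal, hypothetical sketch (in PyTorch) of the difference between deploying a model that has been frozen after sandbox training and permitting in-field learning. The model, data, and update loop are illustrative placeholders, not a depiction of any actual weapon system or of the ICRC's proposed rule text.

```python
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be a model trained in a "sandbox."
def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# --- Option A: freeze after sandbox training, then deploy ---
frozen_model = make_model()
for param in frozen_model.parameters():
    param.requires_grad = False          # parameters are locked
frozen_model.eval()                      # behavior is fixed; field data cannot change it

# --- Option B: in-field (online) learning, which the proposal would rule out ---
adaptive_model = make_model()
optimizer = torch.optim.SGD(adaptive_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def online_update(field_input: torch.Tensor, field_label: torch.Tensor) -> None:
    """Each call adjusts the deployed model's parameters based on field data,
    so its behavior can drift in ways that were never tested or reviewed."""
    optimizer.zero_grad()
    loss = loss_fn(adaptive_model(field_input), field_label)
    loss.backward()
    optimizer.step()

# One hypothetical field observation nudges the adaptive model's parameters,
# while the frozen model is guaranteed to behave exactly as it did when evaluated.
online_update(torch.randn(1, 128), torch.tensor([0]))
```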

Proposal 2: A Ban on Anti-Personnel Autonomous Weapon Systems

In light of ethical considerations to safeguard humanity, and to uphold international humanitarian law rules for the protection of civilians and combatants hors de combat, use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.

The second proposal is essentially a ban on anti-personnel autonomous weapon systems. It is explicitly not a ban on all "lethal autonomous weapon systems"; as is made clear in the background paper, the ICRC anticipates that autonomous weapon systems will be used to target military vehicles, vessels, and aircraft.

This is a sensible rule given the current state of the technology. As detailed in a recent article, in an October 2020 demonstration of military sensors, an algorithm “marked a human walking in a parking lot and a tree as identical targets.” Forget fancy adversarial attacks; in a March 2021 post, AI researchers described how they were able to convince an object-recognition algorithm that a Granny Smith apple was an iPod by . . . putting a label that said “iPod” on the apple.

Yikes.
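For those curious about the mechanics behind that last example, here is a minimal, hypothetical sketch of the kind of zero-shot image/text matching classifier involved, using OpenAI's publicly released CLIP package purely for illustration. The image file and candidate labels are made-up placeholders; the point is only that a model which scores images against text prompts can be swayed by literal text appearing in the image.

```python
import torch
import clip  # OpenAI's CLIP package, used here only as an illustrative classifier
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical photo of a Granny Smith apple with a handwritten "iPod" label stuck on it.
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
labels = ["a Granny Smith apple", "an iPod"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

# Because classification works by matching the image against text prompts,
# the literal word "iPod" in the picture can outweigh the visual evidence.
for label, p in zip(labels, probs):
    print(f"{label}: {p:.1%}")
```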

Still, a complete prohibition on anti-personnel autonomous weapon systems risks inadvertently limiting the use of future technology that is as capable as, or even better than, humans at context-specific distinction. Given that algorithmic classifiers will continue to be developed and perfected in the civilian realm, it is theoretically possible that at some future time, one rationale for this prohibition—that algorithms are incapable of distinguishing between lawful and unlawful targets—will be undermined by technological developments.

Some may find this risk of little concern. If one believes that international humanitarian law mandates human involvement in decisions that implicate the use of lethal force—such as whether a combatant is wounded or a civilian is directly participating in hostilities—this prohibition enshrines and preserves that human involvement.

If, however, one believes that international humanitarian law requires prioritizing minimizing civilian harm, this prohibition risks becoming, at best, jurisprudential space junk—“laws on the books that are theoretically in force but actually simply clutter and confuse the relevant legal regime”—and, at worst, an obstacle to employing technologies which might better fulfill that aim.

While I’m concerned about the many ways in which autonomous weapon systems can fail, I tend towards the latter camp, and so I see this as an ideal situation for incorporating a technological sunset. As I and a co-author observe, technological sunsets allow “lawmakers to capture the benefits of a sunset provision—namely, its ability to mitigate the difficulties of regulating despite inadequate information—without the arbitrariness of picking an expiration date that may bear no relation to changes in the use or format of the relevant technologies.” For example, instead of a blanket ban, there could be a prohibition on anti-personnel autonomous weapon systems . . . unless and until the fielding state can prove that the system is capable of being used in compliance with the distinction requirement in the intended use environment.

Proposal 3: REGULATIONS! 

In order to protect civilians and civilian objects, uphold the rules of international humanitarian law and safeguard humanity, the design and use of autonomous weapon systems that would not be prohibited should be regulated, including through a combination of: 

      • limits on the types of target, such as constraining them to objects that are military objectives by nature

The first proposed regulation seems to be another tech-specific restatement of a venerable treaty and customary obligation—here, the requirement that all attacks distinguish between lawful and unlawful targets. I suspect that states will be reluctant to endorse the suggested limitation—that targets be limited to objects that are military objectives by nature (like warships)—as it excludes objects that are military objectives by location, purpose, or use (like a civilian ship commandeered for use by military forces). Again, though, there is utility in prompting a debate as to what autonomous weapon systems may legitimately target.

      • limits on the duration, geographical scope and scale of use, including to enable human judgement and control in relation to a specific attack

I’m admittedly biased towards the second proposed regulation; Paul Scharre’s Operational Risk report convinced me long ago that one of the best ways to minimize the risks associated with autonomous weapon systems is to limit their damage potential, which “depends upon the inherent hazard of the system—the type of task being performed and the environment in which it is operating—as well as the [type of human] control.” Building off this, subsequent scholarship, and state commentary, this recommendation suggests that those fielding autonomous weapon systems must take diverse factors into consideration. Of course, like proportionality and feasible precautions evaluations, this will require context- and tech-specific information.

      • limits on situations of use, such as constraining them to situations where civilians or civilian objects are not present

Like the prohibition on anti-personnel autonomous weapon systems, the third regulation and its suggested limitation make sense given the current technology’s inability to sufficiently differentiate between lawful and unlawful targets.

And again, if one has a more effects-based, consequentialist approach to international humanitarian law, the suggested constraint may become regressive should future technological developments enable accurate algorithmic assessments in more mixed environments. If phrased to incorporate a technological sunset, it’s a solid, common-sense guideline.

      • requirements for human–machine interaction, notably to ensure effective human supervision, and timely intervention and deactivation.

The fourth regulation, like the many variants on the concept of “meaningful human control,” will likely garner widespread support—who wouldn’t want to ensure effective human supervision?—but there may be little consensus as to what is actually required.

This indeterminacy is not necessarily a weakness! As I’ve noted, “International law is built on state consensus, and it is often easier to get states to first agree to a progressive but vague statement or principle—say, that everyone has the right to life—and later hash out what it actually entails.” Indeed, “flexible terms that simultaneously draw a line prohibiting certain extreme developments while allowing for adaptive interpretations are of particular use in law intended to regulate new technology, especially weapons technologies.”

Still, any final version of this rule will need to grapple with the risk that, in certain scenarios, a legal requirement for human oversight combined with practical incentives that favor the superhuman speed of algorithmic decisionmaking will foster retaining a human in the loop purely as a scapegoat or liability sponge.

Concluding Thoughts

Back in 2017, I and a co-author argued that a conversation on how best to regulate the legal uncertainties raised by autonomous weapon systems would better achieve everyone’s shared goal—to proactively address the risks associated with increasing autonomy in weapon systems—than pushing a narrow ban on futuristic weaponry or relying overmuch on extant law.

The ICRC’s new position is a welcome invitation to do exactly that: to focus on crafting rules that will address the full range and depth of legal challenges raised by autonomous weapon systems. And, for many, the ICRC actually has the legitimacy and clout to change the conversation.

The proposed regulations are far from comprehensive—they don’t address many longstanding questions, like what legal review is required or who should be held accountable for accidents (spoiler: the state). They also leave a lot open to interpretation: How much predictability is sufficient?  What limits should there be on usage? What is required to ensure “effective human supervision”?

But it is incredibly useful to have draft text that can now be evaluated, debated, and refined—it provokes states, international organizations, and other interested parties to form and express positions in a new conversational context.

I look forward to seeing how the discussion develops.

About the author

Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof’s primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberspace, robotics, and the Internet of Things.

The views expressed by guest authors do not necessarily reflect the views of the Center on Law, Ethics and National Security, or Duke University.

Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!
