Autonomous Weapons: An Update
The Journal of International and Comparative Law at Temple University’s Beasley School of Law has just finished revitalizing its website and has posted a very interesting collection of essays on the legal issues associated with autonomous weapons. These articles grew out of an excellent symposium held at Temple a little more than a year ago and chaired by Associate Dean Duncan Hollis.
I was privileged to participate, and my contribution is a short essay entitled Accountability and Autonomous Weapons: Much Ado About Nothing? My paper focuses on a monograph put out by Human Rights Watch (HRW) with the help of Harvard’s International Human Rights Clinic (IHRC). This was the second effort by HRW/IHRC to scuttle the whole concept of autonomous weapons. The first, entitled Losing Humanity: The Case Against Killer Robots, was intellectually demolished by Mike Schmitt in a rebuttal piece, which I summarized in my Temple article as follows:
[Mike] found that a principal flaw in the analysis is a blurring of the distinction between international humanitarian law’s prohibitions on weapons per se and those on the unlawful use of otherwise lawful weapons. He went on to convincingly conclude that autonomous weapon systems are not unlawful per se, adding:
Their autonomy has no direct bearing on the probability they would cause unnecessary suffering or superfluous injury, does not preclude them from being directed at combatants and military objectives, and need not result in their having effects that an attacker cannot control. Individual systems could be developed that would violate these norms, but autonomous weapon systems are not prohibited on this basis as a category.
The second HRW/IHRC effort, which was the one I addressed in my piece, was entitled Mind the Gap: The Lack of Accountability for Killer Robots. It repeated many of the flawed concepts of Losing Humanity but took a somewhat different tack, claiming that because a robot could not be held accountable in a criminal trial for lethal effects that went awry, and because obtaining a civil judgment would be virtually impossible, those conditions somehow made all autonomous weapons intrinsically unlawful.
I found their contentions puzzling and implausible. In the first place, the basic lawfulness or unlawfulness of a weapon under international law does not turn on the ability to fix liability on some particular individual. Rather, it is the potential use of the device that really matters. If it can be used in a discriminate and otherwise lawful manner, there is no reason to ban it.
Regarding liability for illicit use, the HRW/IHRC monograph engages in a rather meandering discussion of criminal law but never seems to grasp that the law does, in fact, impose criminal responsibility on those who set weaponry in motion, and that would include autonomous systems. In essence, the obligation is to have a reasonable understanding of how the device works and a reasonable expectation that it will apply force within the parameters of the law. Using an autonomous weapon, or really any weapon, without such understanding and expectation opens several avenues of potential prosecution, especially under U.S. military law (an entire corpus of jurisprudence of which HRW/IHRC seems unaware).
In particular, HRW/IHRC seems to think that criminal liability would “likely apply only in situations where the humans specifically intended to use the robots to violate the law.” Putting prosecutorial discretion aside, this is demonstrably untrue as a matter of basic criminal law. Even civilian law has a range of manslaughter charges that would not require such intent. For example, one authority describes manslaughter in a fairly standard manner as:
The unjustifiable, inexcusable, and intentional killing of a human being without deliberation, premeditation, and malice. The unlawful killing of a human being without any deliberation, which may be involuntary, in the commission of a lawful act without due caution and circumspection.
I also discuss how, under U.S. military law, criminal liability can arise from simple negligence, and I use as an example a case in which an accused was convicted of negligent homicide merely because he lent his car to a drunken driver who then killed himself in an automobile accident. It isn’t very hard to envision how such a standard could be employed, if necessary, to criminalize the improper use of an autonomous weapon.
The HRW/IHRC monograph also discusses civil liability, but it never explains how the absence of civil remedies determines the legality or illegality of any weapon under international law. In fact, I note in my article that their discussion mainly centers on the complexity of U.S. tort litigation generally, rather than on anything to do with weapons law or the law of war.
By the way, in connection with the domestic and international law concept of command responsibility, I cite Professor Peter Margulies’s concept of “dynamic diligence,” which he says calls for “a three-pronged approach entailing a flexible human/machine interface, periodic assessment, and parameters tailored to [International Humanitarian Law] compliance.” I am pleased to report that Professor Margulies has refined his thinking in a new paper (“Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts”), which will be incorporated into a new book, Research Handbook on Remote Warfare (Jens David Ohlin ed., forthcoming 2017).
Since the Temple conference, there have been several developments. Last December, Chris Jenks reported on the decision at the Fifth Review Conference of the Convention on Conventional Weapons (CCW) “to create a UN Group of Governmental Experts (GGE), which will meet for 10 days in 2017 to discuss emerging technologies in the area of lethal autonomous weapon systems (LAWS).”
Chris concludes – and I agree – that it is unlikely that any new restrictions will emerge from the 2017 GGE meeting. He points out that the anti-LAWS movement – which has attracted only 16 supporters among the 122 CCW countries – is premised on totally banning autonomous weapons, even though systems with significant autonomy have been around for decades and have caused few legal issues. Chris explains:
Fully autonomous weapon systems don’t yet, and may never, exist. Framing the CCW discussion by describing weapons that are fully autonomous has enabled ban proponents to employ moral panic, casting into the indeterminate yet looming future and projecting visions of “killer robots.” Additionally, full autonomy facilitates ignoring the reality that if weapons system autonomy was thought of in terms of the critical functions of selecting and engaging targets, the resulting discussion would have to acknowledge that since 1980, roughly 30 countries have manufactured and/or employed weapons that are capable of autonomously selecting and engaging targets. In essence, LAWS aren’t coming, they’re here, and they’ve been here.
I agree with Chris, and it is inevitable that we will see more autonomous weapons, especially swarming drones. Autonomous swarming drones are poised to revolutionize warfare, and it’s naïve to expect nations to forgo developing and deploying them, especially given their very real potential to be better than humans at discriminating between lawful and unlawful targets. That said, as I’ve written elsewhere, even the lawful use of autonomous weapons will enable what I call the “hyper-personalization of war” that will, I predict, be more unsettling to militaries than even the current drones are.
I have never thought that special rules for particular weapons were an especially good idea. As I said on the ICRC’s Intercross blog, I believe the better way to protect people is to focus on strict adherence to the basic principles of the law of armed conflict. Bans on particular weapons keyed to the technology of a given moment in time can quickly become outdated and counterproductive. I concluded that:
[W]hile weapons bans may have utility in certain circumstances, the better course for advanced nations like the U.S. and its allies is to avoid becoming a party to them. The fact is that such rule-of-law nations can – and do – employ complex weapons systems in full compliance with the core principles of the law of war. They carefully train their forces to follow the law, and hold them accountable when that does not occur. They also have prohibitions on the transfer of arms to rogue regimes. At the same time, if not barred by outright bans, the U.S. and other highly-developed countries have the ability to take advantage of advances in science to develop weaponry that can accomplish the military mission in less deadly ways. It makes humanitarian sense, therefore, to avoid agreements that could limit their ability to do so.
I continue to believe that, especially as to autonomous weapons. But, as we like to say on Lawfire, make your own judgment!