Guest Post: Zhanna Malekos Smith on “Who’s Driving This Train? Intelligent Autonomy and Law”

Concerned about the relationship between the law and autonomous weapons? So is today’s guest blogger, Zhanna Malekos Smith. Zhanna joined the Center on Law, Ethics, and National Security (LENS) and the Duke Center on Law & Technology (DCLT) in the fall of 2018 as the Law School’s inaugural Reuben Everett Cyber Scholar. Here’s her view:

In August 2018 the United Nations Group of Governmental Experts (UN GGE) held its second session on autonomous weapons systems in Geneva. The delegation examined a variety of subjects, including the human-machine interface, accountability, and intelligent autonomy.

This article first describes the concept of intelligent autonomy and then offers a rather pointed critique of one view expressed in the UN GGE Chair’s Report on the delegation’s discussion, an advance copy of which is available here.

Intelligent Autonomy

Autonomy refers to the ability of a machine to function without a human operator.

The UN GGE’s report describes autonomy as a spectrum, noting that there are variations based on machine performance and on technical design characteristics like “self-learning” and “self-evolution,” which essentially amount to machine-based learning without human design input.

Autonomous systems function differently from automatic systems. The U.S. Department of Defense’s report, Unmanned Systems Integrated Roadmap FY2011-2036, describes automatic systems as largely self-steering, “follow[ing] an externally given path while compensating for small deviations caused by external disturbances.”

In contrast to these systems, according to DoD Directive 3000.09, an autonomous weapon system “can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system[.]”
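To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Every name in it is hypothetical and no real weapon system is remotely this simple; the point is only where the choice resides. The automatic routine merely corrects deviations from a human-given path, while the autonomous routine itself selects from among detected contacts, subject to an operator override of the kind Directive 3000.09 contemplates.

```python
from typing import Callable, List, Optional

def automatic_step(desired: float, actual: float, gain: float = 0.5) -> float:
    """Automatic system: compensates for a small deviation from an
    externally given path; the objective itself was set by a human."""
    return gain * (desired - actual)  # correction back toward the pre-set path

def autonomous_select(
    contacts: List[str],
    criteria: Callable[[str], bool],
    operator_override: Optional[str] = None,
) -> Optional[str]:
    """Autonomous system: selects a target on its own, unless a
    supervising human operator overrides the choice."""
    if operator_override is not None:
        return operator_override  # the human-supervised override path
    matches = [c for c in contacts if criteria(c)]
    return matches[0] if matches else None  # the machine makes the selection

# The automatic controller merely corrects; the autonomous function decides.
print(automatic_step(desired=10.0, actual=9.2))                           # ~0.4
print(autonomous_select(["decoy", "emitter"], lambda c: c == "emitter"))  # emitter
```

In the first function a human fixed the objective and the machine only holds the course; in the second, the machine picks the target unless a supervising operator intervenes.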

Although fully autonomous weapon (FAW) systems operate according to control algorithms set by system operators, they do not require human command to perform combat and support functions. These specialized systems are currently being developed by the U.S., China, the U.K., Russia, Israel, and South Korea.

The Congressional Research Service’s report on U.S. Ground Forces Robotics and Autonomous Systems provides specific examples of how other states have integrated armed robots into warfighting: “South Korea has deployed a robot sentry gun to its border with North Korea. Israel has sent an armed robotic ground vehicle, the Guardium, on patrol near the Gaza Border. Russia is building an array of ground combat robots and has plans to build a robot tank.”

A Critique of the UN GGE Chair’s Summary Report

One point of friction in the summary report concerns the vitality of the relationship between law and autonomous weapons.

For instance, section four, paragraph B(27)(e) reads:

“Autonomy in the military targeting and engagement cycle has to be studied further keeping in view that autonomy can exist throughout or during parts of the targeting cycle and could start to be applied increasingly in other contexts [such] as close combat.” (emphasis added).

However, section four, paragraph E(33) states: “As IHL [international humanitarian law] is fully applicable to potential lethal autonomous weapons systems a view was also expressed that no further legal measures were needed.” (emphasis added).

Really?

No additional inquiry is necessary to develop legal measures addressing autonomous weapons, but we must continue testing these systems in military targeting?

How can ‘no further legal measures be needed’ if the summary report is silent on:

  • How international law applies to situations in which a non-state actor uses an autonomous weapon system to harm persons or objects;
  • How the international legal principle of state responsibility extends to this technology;
  • How the international legal principle of reciprocity applies here;
  • How the use of FAWs should inform states’ decisions on “when to resort to force”; and
  • How a state’s inherent right to self-defense under Article 51 of the United Nations Charter might be challenged if proper and timely attribution of a FAW attack is encumbered?

This simultaneous call for continued research and development, coupled with implicit support for the stagnation of international law, is befuddling, much like a train conductor urging travelers on the station platform to hop aboard before the train departs while at the same time barring anyone from getting on or off.

Case in Point: Reciprocity and FAWs

Focus first on the challenges reciprocity presents. The functioning of international humanitarian law and the law of armed conflict (IHL/LOAC) depends largely on states agreeing to be held accountable for their actions. How, then, will the legal concept of reciprocity translate into a control algorithm for FAWs?

Reciprocity is the legal and diplomatic concept that whatever rules and customs states agree to, each party shall abide by their terms. In jus in bello, reciprocity encourages combatants to abide by the state-sponsored customs of war. For example, a predominant feature of IHL/LOAC is its recognition of the need to reduce the means and methods of warfighting that risk unnecessary suffering to combatants and civilians. Human Rights Watch argues that FAWs risk unnecessary suffering because they “lack the human qualities necessary to meet the rules of international humanitarian law.”

Responding to this concern, international legal scholar Michael Schmitt provides countervailing evidence about FAWs’ capabilities: “Modern sensors can, inter alia, assess the shape and size of objects, determine their speed, identify the type of propulsion being used, determine the material of which they are made, listen to the object and its environs, and intercept associated communications or other electronic emissions.”

On the issue of target discrimination, however, The Verge reports that military commanders are leery of “surrendering control to weapons platforms partly because of a lack of confidence in machine reasoning, especially on the battlefield where variables could emerge that a machine and its designers haven’t previously encountered.” With such compelling counter-viewpoints and burgeoning areas of law yet to explore, how can the position that “no further legal measures are needed” be reasonably supported?

Interpreting the delegation’s intent becomes even murkier when paragraph E(33) is read alongside paragraph C(b):

“Where feasible and appropriate, inter-disciplinary perspectives must be integrated in research and development, including through independent ethics reviews bearing in mind national security considerations and restrictions on commercial proprietary information.”

This passage signposts that there are international legal issues yet to be grasped. And yet, the ‘train conductor’ in paragraph E(33) takes the stance that ‘none shall pass.’

Pressing Ahead – Intelligent Law 

Discussions at the 2019 UN GGE meeting on lethal autonomous weapons systems must include, and cannot sacrifice, an examination of how IHL/LOAC applies to the areas identified above, so as to develop greater granularity. “Reason is the life of the law,” as the 16th-century English jurist Sir Edward Coke observed, and indirectly encouraging lethargy in legal analysis is neither a healthy nor a reasonable approach to driving this train.

Jessica ‘Zhanna’ Malekos Smith, J.D., the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.

As we like to say at Lawfire®, check the facts, assess the arguments, and decide for yourself!
