Guest Post: Tyler Jang on “The Legal Status of Autonomous Weaponry: Recapping the 2019 LENS Conference”

Mr. Jang

Today’s guest post is by Mr. Tyler Jang, a freshman at Duke who attended our 24th Annual National Security Law Conference. In a post that first appeared on Duke’s American Grand Strategy website, Tyler discusses the autonomous weapons update provided by Army LTC Chris Ford (which you can find here).

(LTC Ford was speaking in his personal capacity, and his views and opinions do not necessarily reflect those of the U.S. Department of Defense or any other entity of the U.S. Government.)

“The Legal Status of Autonomous Weaponry: Recapping the 2019 LENS Conference”

by Tyler Jang

While we’re certainly a long way from the killer robots of the Terminator series, autonomous weaponry is a subject of growing importance and is one we should approach cautiously.

LTC Ford

On February 23rd, U.S. Army Lieutenant Colonel Chris Ford spoke at the Duke 2019 Law, Ethics, and National Security (LENS) Conference, summarizing the current reality of, and legal framework for, autonomous weapons. The U.S. Department of Defense’s current operating definition of an autonomous weapon system is:

“A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

This definition, like others employed in the debate, still leaves significant ambiguity, which makes regulating such systems difficult.

Autonomous weapons systems operate on a broad spectrum of capabilities. Just as there is no consensus definition of “artificial intelligence,” there is a range of functions that could be considered, in a military weapons sense, “autonomous.” Understanding the implications of autonomous weapons systems requires defining that spectrum of autonomy.

For the U.S. military, this conversation is informed by the existing United States Targeting Cycle—which breaks down distinct steps to find, fix, track, target, engage, and assess threats. Most existing weapons systems handle one or two of these functions, particularly the tracking and targeting phases. But looking forward, that will change: much of the current debate on artificial intelligence in the military centers around the implications of letting these systems handle the engagement phase.
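To make the idea of partial autonomy more concrete, here is a minimal, purely illustrative Python sketch. The phase names come from the targeting cycle described above; the example system and the set of phases it automates are hypothetical, not any real weapons program or DoD taxonomy.

```python
# Illustrative only: a toy model of the targeting-cycle phases and the
# subset a hypothetical system performs without human input.
from enum import Enum, auto


class Phase(Enum):
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()


# A hypothetical system that automates only tracking and targeting;
# a human operator still decides whether to engage.
automated_phases = {Phase.TRACK, Phase.TARGET}

# The question driving much of the current debate: is ENGAGE in the set?
human_decides_engagement = Phase.ENGAGE not in automated_phases
print(human_decides_engagement)  # True for this example system
```

The point of the sketch is simply that “autonomy” is not all-or-nothing: different systems automate different slices of the same cycle.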

In this same vein, the replicability and predictability of AI raise major concerns. AI, particularly machine learning, is built by letting a computer “learn” patterns from the data sets it is given. Problems arise when a system is deployed in an environment that differs from its training data, causing it to respond unpredictably. This was seen in the case of Microsoft’s Tay, a chatbot that learned from the Tweets it received and quickly devolved into a discriminatory disaster.
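As a rough illustration of why this matters (a toy sketch with entirely synthetic numbers, using scikit-learn, and not a model of any real system), consider a simple classifier trained on one data distribution and then scored on data drawn from a shifted distribution:

```python
# Toy illustration of distribution shift: a model fit on one data
# distribution performs far worse when the deployment data differ.
# All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters.
X_train = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on training-like data:", clf.score(X_train, y_train))

# "Deployment" data: the clusters have shifted, so the decision boundary
# learned in training no longer applies and accuracy collapses.
X_shift = np.vstack([rng.normal(2, 1, (200, 2)), rng.normal(-2, 1, (200, 2))])
y_shift = np.array([0] * 200 + [1] * 200)
print("accuracy after distribution shift:", clf.score(X_shift, y_shift))
```

The same model that is nearly perfect on data resembling its training set fails badly once the environment changes, which is the core of the predictability concern.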

Tay’s behavior is neither an established norm nor an inevitable outcome of every application of AI, but it serves as a cautionary tale. If autonomous systems are not predictable, how can military personnel responsibly deploy them to complete certain tasks, especially when commanders must weigh the proportionality of a military action’s gain against its potential for collateral damage? Some analysts seem to think they won’t be able to, at least not reliably in the near future. The Campaign to Stop Killer Robots, for instance, sees this problem as irresolvable, with disastrous moral implications, and is lobbying to ban autonomous weaponry altogether.

Other state actors are spearheading their own research into AI. Russia in particular is pursuing adaptive drones that utilize “swarm” technology to coordinate the tracking and targeting of enemy combatants.

As it stands, there is no customary international law with respect to autonomous weapons, but the Group of Governmental Experts is scheduled to meet in March to once again discuss potential regulation aimed at ensuring that human accountability is preserved in these systems. With regulation lagging behind the technological frontier, artificial intelligence is poised to be at the center of international competition.

Ford stresses that while these evolving technologies have potential for both good and ill, many questions remain. With much of the technological prowess coming from Silicon Valley, who will drive regulation, and will it move fast enough to keep pace with a rapidly progressing field?

Tyler Jang is a freshman at Duke University from Anchorage, Alaska, studying Electrical and Computer Engineering. At Duke, he is a member of the Cyber Team, where he enjoys learning about artificial intelligence and cloud computing.

As we like to say on Lawfire®, check the facts, assess the law and arguments, and decide for yourself!
