Podcast: Artificial Intelligence and the Future of Warfighting

I am happy to share the recording of my dialogue with my friends Professor Rebecca Crootof and Brigadier General Patrick Huston on Artificial Intelligence and the Future of Warfighting. The video of our discussion is now available here.

Our guests joined us, along with other world-class speakers and panelists, as part of LENS’ recently completed 26th Annual National Security Law Conference.

Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Her primary areas of research include technology law, international law, and torts, and her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberoperations, robotics, and the Internet of Things.

BG Huston is an Army lawyer focused on the legal and ethical development of AI, robotics, and other emerging technologies. A former Army Ranger and helicopter pilot, he served five combat tours as a legal advisor in Iraq and Afghanistan. He also served as the Commanding General of the federal government’s only ABA-accredited law school.

During the discussion, General Huston and Professor Crootof offered their thoughts on a range of topics related to Artificial Intelligence (AI), including its weaponized and non-weaponized uses in military and national security contexts, AI systems’ compliance with the law of armed conflict, contemporary commercial-led technological development processes, proper regulation and accountability when using AI systems, and the challenges AI may or may not create with regard to international human rights law and the protection of civilians.

General Huston reminded the audience that while “the Pentagon is the world’s biggest user of AI…the bulk of (AI) projects are not killer robots.” A “key point…is that the vast majority of AI programs in the Pentagon are benign, innocuous programs simply designed to increase efficiency and reduce costs.”

Regarding weaponized uses of AI, Professor Crootof described autonomous weapons systems and AI decision assistance, two “main ways in which…AI might affect decisions regarding the use of lethal force.”

Inevitably, “AI is going to be used in gathering information, in processing, in analyzing that information, in assessing potential threats, as alert systems in recommending targets, and so on,” she said, and there are “a lot of opportunities for errors and accidents in these processes” for which we must account.

General Huston expressed a sincere belief that “AI has the potential to make warfare more human, or at least less inhumane.” To help mitigate the risks of using autonomous systems, the Pentagon is trying to “include human machine teaming, the right mix of both, maintaining appropriate human judgment in the process…and mak(ing) sure that people are accountable for the use of these systems.” The “less reliable” AI technology is, the “more you have to do to invoke human judgment to ensure that things don’t go astray.”

While Professor Crootof was not optimistic that a ban on autonomous weapons systems would be successful, she articulated another potential limitation on the use of AI systems: a prohibition on in-field machine learning.

Professor Crootof argued the prospect of machines learning not only in a lab, but out in the field, is “a huge concern…in terms of potential loss of control, potential loss of understanding of what the system has the capacity to do, or how it might react in different situations.” She said, “I would love to see more agreement around even that basic level of norm about when we allow machine learning to happen.”

General Huston also spoke to the challenges posed by the contemporary research and development model in which “most new technology…is being developed by the commercial sector for private markets.” Because “the technology is being developed in the civilian sector primarily for commercial uses, and then it’s migrated for military uses…it’s very difficult to put it back in the box, very difficult to control the development of this technology, and it makes it very hard to control the spread of this technology to implement effective arms control.”

What facet of AI keeps the panelists up at night? For General Huston, “it’s deepfakes technology,” which has the potential to “completely undermine judicial systems” and “undermine global stability.” He elaborated: “it’s really important that the Pentagon works with private industry and with academia to help solve” the problem of deepfakes. (See the discussion of ‘deep fakes’ by Erin Wirtanen and Shane Stansbury here).

For Professor Crootof, it’s the concern that, when it comes to weaponized AI, “we’re so focused on capabilities, we’re not paying as much attention to the potential for accidents.”

You can hear/watch all this and more here.
