Podcast: Lt Gen Jack Shanahan, USAF (Ret.) on “The Military Uses of Artificial Intelligence”
Want to get caught up on the latest about artificial intelligence (AI) in the armed forces? Then today’s video of Prof Gary Corn’s fireside chat with Lt Gen John N.T. “Jack” Shanahan, USAF (Ret.), the former Director of the Department of Defense’s Joint Artificial Intelligence Center, on “The Military Uses of Artificial Intelligence” is for you.
This is the latest in our series of podcasts based upon presentations at the recently completed 30th Annual National Security Law conference. Given that Lt Gen Shanahan is genuinely the world’s top expert on this topic, you’ll definitely want to watch (or listen to) this one. Among many other things, you’ll hear about AI-enabled military systems, the essential role of lawyers (who understand the technology), the involvement of China, and much, much more.
Here are a few snippets of what you’ll hear:
“I think we’re at the cusp of a digital revolution. There was an Agrarian Revolution, there was an Industrial Revolution. I do not call this the Fourth Industrial Revolution [though] some like to call it that.”
“I think it is fundamentally going to be different because of a merger of humans, data, and machine that goes well beyond symbiosis, the idea of cognition at scale that we have never seen before that will affect power, economics, military strength. But I don’t know when that’s going to happen yet. I would say the AI we have today is nowhere close to that.”
“So I’m predicting, based on what we think is coming down in the future, I think it will be seen in retrospect as a digital revolution.”
He says that the “pace of change” occasioned by AI will be uncomfortable for some:
“Everybody wants to look at AI as this magic thing, as an enabling technology. It’s like electricity to me. It’s not a capital asset. What really matters to me is how this technology will diffuse across the military over the next two decades. It won’t happen instantly. You’ll need organizational redesign. Reform. Operating concepts will have to change. It’s going to take some time. Every technology throughout history has gone through the same cycle, it’s just going to go through much faster.”
“And that’s what’s uncomfortable for some people with AI today, is the pace of change.”
Lt Gen Shanahan explained how he has qualified his view on the nature of war:
“And there’s another thing. I think it’s worth saying here because I’ve never thought of this until my keynote, I put it in print. For 36 years– and in fact, for four years, almost, since retirement– almost five, I’ve always said the character of warfare constantly evolves with new technology, but the nature of war, the Thucydides part, why we fight, is immutable.”
“I, for the first time, qualified that assertion by a fully autonomous system future in which an autonomous system can interpret and execute human instructions in a way that no further human intervention is allowed or permissible, whether by mistake or design. And once a machine can decide when and how to initiate, sustain, and terminate conflict, we are in a different world, and that’s not a world I’m particularly optimistic about.”
“We’re not there yet, which is why it’s so important to talk about the intersection of law and AI right now, to, one, get the ethicists in the room; but two, the lawyers in the room. And I did that all the time with these technologies.”
He also spoke about what he called the “most common” AI issues: the accountability problem, the black box problem, and the control problem. Along this line, he notes that there is much discussion about the need for a “human in the loop.” However, he sees it differently:
“I don’t even use the term ‘in the loop,’ ‘on the loop.’ I avoid them like the plague. The question is, where does the human come in? I think it’s going to be more in the design and development phase with lawyers there at the table because at some point, when you field them and they’re fully autonomous, it’s too late. Unless you build in a failsafe method, which may be. That’s why you have these discussions early on.”
What about the prospects for a career in the field? Lt Gen Shanahan says:
“If you’re at the intersection of AI and national security law, you’re employed for life. You’re good. Don’t worry about it. Whether you’re in uniform or not, we need it. We need more of it because it is a growth industry, and you focus on the things that are most different. I think the law is the law. Ethics should be ethics. Morality should be morality, but let’s focus on the things that will surprise us when these systems work in ways that we didn’t quite intend.”
“That could happen in the future as we get to these very advanced large language models and we start crossing the digital and physical divide by human instructions being then interpreted by a system that then acts on it using this thing called AI agents– that’s the phrase of the day, agentic AI. Then you put that in a robot or autonomous system and it’s acting in a physical world based on your digital instructions. That’s what’s coming in the Department of Defense.”
Believe me, this is just a tiny sampling of what you’d hear in this truly mesmerizing podcast. Again, you can find it here.
The views expressed by guest speakers do not necessarily reflect my views or those of the Center on Law, Ethics and National Security, or Duke University. (See also here).
Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!
Watch this space for additional podcasts from the conference. Some presentations, however, were for attendees only, so save the date to attend LENS 31, set for 27-28 Feb 2026.