Guest Post: BG Pat Huston on “Future War and Future Law”

Interested in how artificial intelligence (AI) and the law will operate together (or not) in future conflicts? If so, then today’s guest post is exactly what you are looking for!   

U.S. Army judge advocate Brigadier General Pat Huston will share his perspective – using real-world examples – to provide some context for thinking about AI, and to give us some ideas as to where this technology might be headed.

I hasten to add that Pat is no ordinary lawyer-warrior. As his bio below notes, he’s completed five combat tours in Iraq and Afghanistan. That experience, along with the coveted Ranger tab he earned and his early military service as a helicopter pilot, gives him a perspective few other lawyers can possibly achieve. His academic credentials are equally formidable: a degree in engineering from West Point, a law degree from the University of Colorado at Boulder, a Master of Laws degree from the U.S. Army JAG School, and a Master of Strategic Studies degree from the U.S. Army War College. Pat is clearly one of the nation’s premier lawyers.

Here is Pat’s introductory note: This essay is based on my remarks during the National Security Law Workshop co-hosted by the University of Texas Law School, the South Texas College of Law, and The Judge Advocate General’s Legal Center and School. I sincerely thank Professors Bobby Chesney, Geoff Corn and Todd Huntley for their invitation to speak, and Professor Charlie Dunlap at Duke for his willingness to publish these comments.

The long title for these remarks is “Future War and Future Law,” but the short title is simply “AI” because Artificial Intelligence is what this conversation is really all about. My goal is to provide you several real-world AI examples to help put this in context. I’ll start with a basic primer about this technology, and then discuss where this is likely headed. I’ve broken my comments down into five topics.

(1) The rise of artificial intelligence in society.

(2) The future of AI in the practice of law.

(3) The future of war, particularly the role of autonomous weapons.

(4) The future of International Humanitarian Law (IHL) or the Law of Armed Conflict (LOAC) in this context, and finally

(5) A few legal, ethical, and practical considerations.

I’m not one to keep you in suspense, guessing about my conclusions, so let me start by giving you the top three takeaways from my presentation:

(1) Human Judgment: We must ensure that autonomous weapons allow commanders to exercise appropriate levels of human judgment. In other words, we need humans to make certain key decisions, and we can’t unleash an autonomous weapon over which we expect to lose control.

(2) Accountability: All weapons use must comply with IHL, and commanders must remain responsible for all weapons they employ. These are fundamental principles.

(3) The importance of Government cooperation with Industry: The best and brightest AI researchers should insist on legal and ethical conduct for all military AI uses, and they should work with compliant governments (including the US) to this end. If they boycott military projects, the void will be filled by researchers who are less capable, less ethical, or both, and I think that would be a recipe for disaster.

(1) The rise of Artificial Intelligence in society. 

So let’s jump to the first of the five topics. Artificial Intelligence is not just science fiction like the Terminator movies or HBO’s Westworld. AI is real, it is here now, and it is all around us. You know about self-driving cars, “smart” thermostats, and robot vacuum cleaners, but not everyone uses those. Nearly everyone, however, has done a Google search, which uses AI-enhanced algorithms to optimize the results. Amazon and Netflix do the same, making recommendations based on your past purchases and the movies you’ve watched. And there are many other everyday interactions with AI, such as Siri, Alexa and Google Translate.

So what’s next? Where is this technology headed? More than twenty years ago, IBM’s Deep Blue computer program beat world chess champion Garry Kasparov. That technology has kept evolving. There’s a traditional Chinese board game called Go that is far more complex than chess. Recently, Google DeepMind’s program – called AlphaGo – used a new type of AI to beat the world champion Go player. This was huge because it shows the potential for a new level of machine learning, where the program starts off knowing nothing about the game and teaches itself by playing millions of games against itself to see what works and what doesn’t. And as it learns, it continually adjusts its own internal model – no human programmer writes the rules it follows. This next generation of advanced AI is generally referred to as “deep learning,” here combined with reinforcement learning through self-play.
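For readers who want to see the self-play idea in miniature, here is a deliberately simple, illustrative sketch – not AlphaGo’s actual method, and with every game, parameter, and name invented for the example. A small program starts with no knowledge of a simple stick-taking game (Nim), plays thousands of games against itself, and reinforces the moves that led to wins:

```python
import random
from collections import defaultdict

STICKS = 10          # sticks on the table at the start of each game
MAX_TAKE = 3         # a player may remove 1 to 3 sticks per turn
EPISODES = 50_000    # number of self-play games
ALPHA = 0.1          # learning rate
EPSILON = 0.2        # exploration rate while training

Q = defaultdict(float)   # Q[(sticks_remaining, sticks_taken)] -> learned value

def legal_moves(sticks):
    return list(range(1, min(MAX_TAKE, sticks) + 1))

def choose(sticks, explore=True):
    moves = legal_moves(sticks)
    if explore and random.random() < EPSILON:
        return random.choice(moves)                  # try something new
    return max(moves, key=lambda m: Q[(sticks, m)])  # exploit the best known move

for _ in range(EPISODES):
    sticks, player = STICKS, 0
    history = {0: [], 1: []}                         # moves made by each player
    while sticks > 0:
        move = choose(sticks)
        history[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            winner = player                          # taking the last stick wins
        player = 1 - player
    for p in (0, 1):                                 # reinforce the outcome
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            Q[state_action] += ALPHA * (reward - Q[state_action])

# After training, the policy should (mostly) rediscover the known optimal
# strategy for this game: leave the opponent a multiple of 4 sticks.
for sticks in range(1, STICKS + 1):
    print(f"{sticks} sticks left -> take {choose(sticks, explore=False)}")
```

The scale is toy-sized, but the principle is the one that matters: nobody tells the program how to play; it discovers a strategy purely from the outcomes of its own games.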

AI-enabled technology is everywhere. But it’s not just the US pursuing this technology. China has invested heavily in its AI research and development, and has established fusion centers with universities. China also declared its goal of becoming the global leader in AI and robotics by 2030. In short, AI is all around us and there is a global AI “Arms Race” underway. 

(2) The Future of AI in the Practice of Law.

Since this audience is mostly lawyers, I’ll highlight AI’s use in the practice of law simply to illustrate its impact in that context, but you need to understand that AI is disrupting other industries in similar ways.

I’ll start with some really simple AI uses in the legal profession. Most of you are aware of the speech recognition software used by many court reporters. Those of you involved in litigation – especially civil or commercial litigation – understand how eDiscovery is now being used extensively. AI-enhanced search tools can sort through terabytes of electronic records and e-mails and extract relevant and responsive documents far faster than any human attorney or paralegal. They can also tag documents as potential attorney work product or privileged, so that they’re not automatically turned over. Government agencies can use this technology to perform Freedom of Information Act (FOIA) searches. This technology is so effective that some predict due diligence will soon require the use of AI tools in large discovery cases. In other words, failure to use this technology could be legal malpractice in some circumstances.
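To give non-technical readers a feel for how such a tool might work, here is a toy, hedged sketch – not any real eDiscovery product. A simple text classifier scores each document for responsiveness, and anything that looks potentially privileged is held for attorney review rather than produced automatically. The tiny training set, labels, and privilege keywords are all made up for the example; a real system would be trained on large, attorney-coded seed sets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: 1 = responsive to the document request, 0 = not.
train_docs = [
    "quarterly invoice for widget shipment to acme corp",
    "meeting notes on widget contract pricing terms",
    "office holiday party planning and catering menu",
    "fantasy football league standings and trash talk",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(train_docs), train_labels)

# Invented keyword list: anything that looks privileged goes to a lawyer.
PRIVILEGE_TERMS = ("attorney", "counsel", "legal advice", "work product")

def triage(document: str) -> str:
    """Return a review decision for a single document."""
    if any(term in document.lower() for term in PRIVILEGE_TERMS):
        return "HOLD - potential privilege, route to attorney review"
    score = classifier.predict_proba(vectorizer.transform([document]))[0][1]
    label = "PRODUCE" if score > 0.5 else "SET ASIDE"
    return f"{label} (responsiveness score {score:.2f})"

print(triage("revised widget contract pricing from acme"))
print(triage("email to outside counsel seeking legal advice on the acme deal"))
```

The design choice to flag – rather than auto-produce – anything touching privilege is the same “human checkpoint” theme that runs through the rest of these remarks.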

Some courts in the U.S. are using AI-enhanced tools to help make parole decisions – to evaluate convicted criminals for the likelihood of recidivism. That raises several questions, but think about that from a Due Process standpoint. I recently saw a report that courts in Buenos Aires are using AI to help them assess guilt or innocence, and then to draft their court opinions.

The entire history of our Supreme Court decisions has been fed into an AI system. The system has reportedly identified the voting patterns of individual justices and can predict future court decisions with about 80% accuracy. That is better than predictions by the human SCOTUS experts who routinely practice before our high court.

This is one of my favorites: A study last year pitted British insurance lawyers against an AI system called CaseCrunch to evaluate insurance claims. The AI system was the clear winner. The lawyers had a 62% accuracy rate, and AI scored 87%. That’s a D- versus a B+!

Three American universities ran a similar study this year. Stanford, Duke and USC had a lawyer versus AI competition to evaluate contracts for thirty different contract law issues. Again, the AI system won: 85% accuracy for lawyers and 95% for AI. But what was even more impressive than the accuracy was the speed. It took the average attorney 92 minutes to review those contracts. It took the AI system just 26 seconds.

If you’re a client, would you rather pay a lawyer for an hour and a half of work that is 85% accurate, or a computer for a half-minute of work that’s 95% accurate? The bottom line is that AI could rapidly change the practice of law in some areas and if we ignore these changes, we risk getting left behind.

(3) The Future of War, particularly the role of autonomous weapons. 

Let’s talk about war … future war. The biggest looming issue is the use of fully autonomous weapons that unilaterally select and engage targets. And they could do this on a scale and at a speed that could overwhelm humans and render traditional warfare obsolete. This is not as far-fetched as you might think.

Secretary of Defense Jim Mattis is an avid military strategist and a retired Marine General who is not prone to exaggeration or alarm. He has carefully studied weapons developments and previously said that “the fundamental nature of war does not change.” But earlier this year, he changed his tune. He has been studying the impact of AI on warfare and essentially concluded that AI is a game-changer. He said the potential development of fully autonomous weapons has caused him to question his entire premise about the future of war.

Now to be clear, most military AI projects are not controversial: smart maintenance scheduling for aircraft, self-driving supply trucks, robots instead of people on “bomb squads,” pilotless search-and-rescue aircraft, tele-medicine to treat wounded soldiers in remote areas, and so on. The Navy has several AI-enhanced training programs that have significantly improved the quality of language and other technical skills courses while also reducing training times, which saves a lot of time and money. The Army has AI-enhanced combat training facilities that are extremely realistic. Secretary Mattis said that he wants every Soldier and Marine to fight twenty-five battles in these synthetic trainers before they ever set foot on a real battlefield. And all of the services use AI to enhance pilot training. This saves money, but more importantly, it saves lives by fostering realistic and effective training in a “safe” environment.

Intelligence is a perfect place for AI tools because it involves massive amounts of data that need to be sifted through, and it is always time sensitive. Analysts must work through photos, video feeds, written reports, phone and radio intercepts, emails, texts, and social media accounts. In many cases, all of this needs to be translated before any analysis can be done. AI-enhanced software can translate and then analyze everything and predict what the enemy will do next. This type of “predictive analysis” is the fundamental role of intelligence, and it is the place where AI can help. But there are concerns. If we are going to target someone based on a computer’s analysis of their threat or their role, how reliable is it? Are there biases?
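To make that reliability concern concrete, here is a hedged, hypothetical sketch – the translate() and assess_threat() functions, the data structure, and the confidence threshold are all invented stand-ins, not any real system. The design choice it illustrates is simple: every automated judgment carries a confidence score, and anything below a preset floor is routed to a human analyst rather than acted on automatically:

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90    # illustrative threshold, not a real standard

@dataclass
class Assessment:
    source_text: str
    threat_score: float    # model's estimate that the item describes a threat
    confidence: float      # model's confidence in its own estimate

def translate(text: str, language: str) -> str:
    """Placeholder for a machine-translation model."""
    return text            # pretend the text is already in English

def assess_threat(text: str) -> Assessment:
    """Placeholder for a predictive-analysis model."""
    score = 0.8 if "convoy" in text.lower() else 0.1
    return Assessment(source_text=text, threat_score=score, confidence=0.75)

def triage(raw_text: str, language: str) -> str:
    english = translate(raw_text, language)
    result = assess_threat(english)
    if result.confidence < CONFIDENCE_FLOOR:
        return "ROUTE TO HUMAN ANALYST - model confidence too low to act on"
    return "FLAG FOR ACTION" if result.threat_score > 0.5 else "NO ACTION"

print(triage("convoy departs at dawn", language="unknown"))
```

In this toy run the model’s confidence is below the floor, so the item goes to a human – which is exactly the point: the machine accelerates the sifting, but the consequential call remains with an analyst.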

OK, now let’s talk about weapons. DOD calls these Lethal Autonomous Weapons Systems, or LAWS. Human Rights Watch calls them “Killer Robots,” which I admit is much catchier. There are offensive and defensive systems; semi-autonomous and fully autonomous systems; systems with humans in- or on-the-loop (to monitor or abort) and those with no humans involved.

C-RAM

U.S. soldiers who have spent time in Iraq or Afghanistan are familiar with the “Counter Rocket, Artillery and Mortar” system. The C-RAM is a defensive autonomous system that scans the sky for incoming rounds. When it finds one, a loudspeaker – called “the big voice” — blares “Take Cover — Incoming rounds,” and it fires a machine gun to disable the incoming round while it’s in the air. This all happens in a few seconds, which is all the time you have with incoming rounds.
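For the engineers in the audience, here is a hypothetical, greatly simplified sketch of the “human on-the-loop” arrangement just described: the system detects and tracks the threat autonomously because there are only seconds to react, but an operator monitors the engagement and can abort it at any point before the system fires. The sensor, console, and weapon interfaces below are invented placeholders, not the actual C-RAM design:

```python
import time

# Illustrative only: timelines, interfaces, and names are stand-ins.
ENGAGEMENT_WINDOW_SECONDS = 1.0

def detect_incoming_round(radar_feed) -> bool:
    """Placeholder for the radar track classifier."""
    return radar_feed.get("track_is_incoming", False)

def operator_has_aborted(console) -> bool:
    """Placeholder: poll the human operator's abort switch."""
    return console.get("abort_pressed", False)

def sound_warning(message: str) -> None:
    print(message)                        # stand-in for the "big voice" loudspeaker

def fire_interceptor() -> str:
    return "interceptor fired"            # stand-in for the weapon interface

def engage(radar_feed, console) -> str:
    if not detect_incoming_round(radar_feed):
        return "no threat detected"
    sound_warning("TAKE COVER - INCOMING ROUNDS")
    deadline = time.monotonic() + ENGAGEMENT_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if operator_has_aborted(console):             # human on-the-loop: abort wins
            return "engagement aborted by human operator"
        time.sleep(0.05)                              # keep polling the abort switch
    return fire_interceptor()                         # no abort: complete the intercept

print(engage({"track_is_incoming": True}, {"abort_pressed": False}))
print(engage({"track_is_incoming": True}, {"abort_pressed": True}))
```

The key architectural point is that the machine’s speed is used for detection and tracking, while a human retains a standing veto for as long as the engagement timeline allows.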

According to Paul Scharre, author of “Army of None,” the Navy has a more complex system called Aegis to defend its ships against missiles and aircraft, but the basic concept is similar. South Korea has similar “Robot Sentries” on the DMZ. Israel has unmanned patrol boats off its coast. Many other countries have similar defensive autonomous systems to protect borders or military installations.

The US is testing drone swarms to overwhelm enemy defenses. Initial indications are that these swarms can’t be countered without AI-enhanced defenses, so you can see the potential for rapid military escalation.

China and Russia are doing the same things. China has developed Military-Civilian AI Fusion Centers at its top universities. In Russia, President Putin announced that the nation which leads in AI will “be the ruler of the world.” And it’s clear that he intends for that to be Russia. Russia has several autonomous “Warbots” with names like Platform-M, Argo & Wolf-2, a driverless APC called the A-800 MARS and a fully autonomous tank called the URAN-9. I certainly think this trend will continue.

(4) The Future of International Humanitarian Law (IHL) or the Law of Armed Conflict (LOAC) in this context.

So what does this mean for the future of IHL or LOAC? The general consensus – and certainly my opinion – is that the current LOAC framework and its principles are sufficient to govern this emerging technology, including autonomous weapons. Even though the legal framework is sufficient, however, it may become more complex and difficult to make the factual determinations needed to meet the legal standards.

Let’s look at the two most important LOAC principles: distinction and proportionality. Starting with distinction: even if you confirm the target is a combatant, fratricide is always a concern, so you have to differentiate between friend and foe. This is not a new concept. During World War II, the Germans developed torpedoes that used sonar to home in on enemy ships. These torpedoes were autonomous once launched. But two of them did U-turns and sank the German U-boats that launched them.

But let’s say the autonomous system does properly distinguish between friend and foe, and you know it has properly identified the enemy through advanced facial recognition technology that has access to pictures of every member of the enemy’s armed forces. It must still go a step further to confirm that the enemy is targetable: that they’re not waving a white flag or raising their hands in surrender; that they’re not marked with a red cross; not wounded and out of combat. These are important steps that every commander knows to follow. We have to ensure that AI-enhanced systems follow these rules too.

And there are other vulnerabilities. Let’s say you’re relying on facial recognition technology to identify a specific enemy commander. How confident are you in the program’s ability to get it right? Some MIT students spoofed an AI visual recognition program to consistently conclude that a plastic turtle was a rifle. In some ways, this is no different than the wooden decoy tanks we placed all over Britain before D-Day to fool the Germans about troop concentrations and where we would invade France. But because AI is all computer based, there are also cyber vulnerabilities at every turn, so it’s just more complex and harder to predict what you know and what you don’t know.

We’ve already said that data is the key to effective AI systems. We’ve all heard the expression that “garbage in equals garbage out.” But war has always been marked by incomplete or inaccurate information. Plus there is often chaos and confusion, called the “fog of war.” If all that wasn’t enough, both sides are deliberately trying to deceive the other. Deception is an inherent part of any military operation.

Even if you satisfy the distinction test, you also have to satisfy proportionality. In simple terms, commanders must assess – based on the information available at the time – that the expected collateral damage is not excessive compared to the expected military advantage. There is more room for uncertainty and subjectivity here. And the commander employing an autonomous system has to understand the system well enough to have confidence in what it will do and what it won’t do. This doesn’t have to be a perfect prediction – legacy weapon systems, and certainly humans, can be unpredictable too – but in some cases, an AI system may be so unpredictable that the commander is unable to satisfy this requirement.

This basic principle of accountability or command responsibility can’t be abdicated. Commanders are always responsible for the systems they employ, including autonomous weapons.

(5) A Few Legal, Ethical and Practical Considerations. 

Let me tell you about the controversy involving Google, and the Campaign to Stop Killer Robots.

First, Google has a contract with the U.S. military to use AI tools to review and analyze video feeds from remotely piloted aircraft, or drones. AI can do it faster and, in some cases, better than human intelligence analysts. But many Google employees – including some of the nation’s top AI researchers – were concerned that their work was contributing to military operations, and they pressured Google not to extend the contract. Google announced a few months ago that it will not continue when the contract expires. It also announced a series of ethical principles that essentially say it will not participate in AI projects related to weapons.

This led to a larger movement in July 2018, when more than 2,400 workers at over 160 technology companies signed a letter demanding laws banning lethal autonomous weapons. This parallels an effort called the “Campaign to Stop Killer Robots,” led by Human Rights Watch and Harvard Law School’s International Human Rights Clinic. The campaign calls for an outright ban on developing or employing lethal autonomous weapons. These initiatives have generated significant public debate.

Let me state up front that I think these groups’ underlying concerns are valid. Nobody wants to see uncontrollable weapons unleashed on the world, or robots that turn on their creators as in the Terminator films. However, I don’t think their proposed solution is the most effective way to address those concerns. In fact, I think their proposed total boycott would significantly worsen the problem.

First, if industry’s best and brightest AI researchers and coders — who are concerned about the legal and ethical compliance of these new technologies — withdraw from the discussion, the void will be filled by others who are less capable, less ethical, and less law-abiding. If our talented and ethics-minded coders just quit or bury their heads in the sand, it could be a recipe for disaster.

Second, I don’t think a boycott will slow Russian or Chinese efforts. They do not appear to be nearly as concerned as the U.S. government about the legal or ethical implications of autonomous weapons.

Third, a boycott seems to ignore the likelihood that AI can in some cases produce better results than humans. This creates ethical obligations, or at least raises significant ethical questions. For example, what if AI can produce more precise weapons that reduce collateral damage and cause less suffering? Would we be legally required to leverage that technology? Morally or ethically required? This goes back to the evolving eDiscovery standards which may require us to use technology that outperforms humans.

I’m also concerned about the risk of escalation. Most AI systems operate at incredible speeds. In the Wall Street context, rapid AI-enhanced stock market trading caused several “Flash Crashes.” Do we need some sort of ethical governor, circuit breaker, or kill switch on these systems to prevent a rapid escalation of autonomous weapons to avoid a “Flash War”? I think the answer is “yes,” and we should all want our most talented developers solving this problem.
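To illustrate what such a “circuit breaker” might look like in principle, here is a hypothetical sketch borrowed from the way stock exchanges halt trading after abrupt price moves. The thresholds, class name, and interfaces are all invented for the example; the point is simply that an autonomous system can monitor its own engagement rate, halt itself when that rate exceeds a preset bound, and require a deliberate human decision before it resumes:

```python
import time
from collections import deque

class EngagementCircuitBreaker:
    """Trip (halt) if more than `max_engagements` occur within `window_seconds`."""

    def __init__(self, max_engagements: int = 3, window_seconds: float = 60.0):
        self.max_engagements = max_engagements
        self.window_seconds = window_seconds
        self.recent = deque()      # timestamps of recent engagements
        self.tripped = False

    def authorize(self) -> bool:
        """Return True if another engagement may proceed automatically."""
        if self.tripped:
            return False
        now = time.monotonic()
        while self.recent and now - self.recent[0] > self.window_seconds:
            self.recent.popleft()                  # discard engagements outside the window
        if len(self.recent) >= self.max_engagements:
            self.tripped = True                    # rate too high: halt the system
            return False
        self.recent.append(now)
        return True

    def human_reset(self) -> None:
        """Only a deliberate human decision re-enables the system."""
        self.recent.clear()
        self.tripped = False

breaker = EngagementCircuitBreaker(max_engagements=3, window_seconds=60.0)
for i in range(5):
    decision = "authorized" if breaker.authorize() else "HALTED - human review required"
    print(f"engagement {i + 1}: {decision}")
```

It is a crude governor, but it captures the design goal: machine speed for individual engagements, human judgment before any rapid cascade of them.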

Finally, I’ll note that the current U.S. military policy (DoD Directive 3000.09) includes a provision for a human role in the process, which I think is important. It requires that all autonomous systems “allow commanders … to exercise appropriate levels of human judgment over the use of force” and states that these systems can only be employed when all LOAC principles are satisfied. And I think these standards strike the right balance.

So in conclusion, I want to circle back to, and end with, my three key takeaways:

(1) Human Judgment: We must ensure that autonomous weapons allow commanders to exercise appropriate levels of human judgment. In other words, we need humans to make certain key decisions, and we can’t unleash an autonomous weapon over which we expect to lose control.

(2) Accountability: All weapons use must comply with IHL, and commanders must remain responsible for all weapons they employ. These are fundamental principles.

(3) The importance of government cooperation with industry: The best and brightest AI researchers should insist on legal and ethical conduct for all military AI uses, and they should work with compliant governments (including the US) to this end. If they boycott military projects, the void will be filled by researchers who are less capable, less ethical, or both, and I think that would be a recipe for disaster.

Thank you all for your time and attention.  

BG Huston

Brigadier General Pat Huston is the Commanding General of The Judge Advocate General’s Legal Center and School in Charlottesville, Virginia. His current work focuses on the legal and ethical development and use of artificial intelligence, autonomous weapons, and other emerging technologies. General Huston has completed five combat tours in Iraq and Afghanistan, and was the General Counsel (Staff Judge Advocate) of three major defense organizations: the 101st Airborne Division, the Joint Special Operations Command (JSOC), and the U.S. Central Command (CENTCOM).  

The views expressed here are those of the author and do not reflect the official position of The Judge Advocate General’s Legal Center and School, the United States Army, or the Department of Defense.

As we like to say at Lawfire®, check the facts, assess the arguments, and decide for yourself!

 
