Guest Post: “Are the Killer Robots Really Coming? – Legal Considerations from a Hypothetical Application of Department of Defense Directive 3000.09”

Ever wonder how weapons empowered by artificial intelligence (AI) could possibly comply with the law of war?  Today’s guest post from Army LTC Ryan Beery has some answers for you.  In fact, one of the most interesting aspects of his essay below is how it illustrates the importance of knowing enough about a particular system – and AI systems vary significantly – to ask the right questions.  It also shows the extraordinary lengths to which the U.S. goes to ensure weapons of every kind comply with the law of war.

Ryan is an Army Judge Advocate, currently serving as the Chief, National Security Law Division, US Special Operations Command (USSOCOM).  Prior assignments include Deputy Staff Judge Advocate, US Army Intelligence and Security Command (INSCOM), and Deputy Director, Center for Law and Military Operations (CLAMO), The Army Judge Advocate General’s Legal Center and School (TJAGLCS).  

NOTE: The views and opinions presented below are those of the author and do not necessarily represent the views of U.S. Special Operations Command (USSOCOM), the Department of Defense (DoD) or its components.  Likewise, guest posts also do not necessarily reflect the views of Lawfire®, or the Center on Law, Ethics and National Security.

Are the Killer Robots Really Coming? – Legal Considerations from a Hypothetical Application of Department of Defense Directive 3000.09 

by  

Lieutenant Colonel Ryan H. Beery, USA

During the opening phase of a future armed conflict, an unmanned stealth drone is operating deep in enemy territory, tasked with trying to locate and jam enemy Command and Control (C2) nodes using its Electronic Warfare (EW) pod.  Enemy EW effects have cut the drone off from satellite and radio control and communication, so the drone is autonomously piloting and seeking targets using Artificial Intelligence (AI) capabilities. The drone is also armed with missiles that, through AI-enabled swarming capabilities, can coordinate to overcome defensive countermeasures.

The drone’s sensor suite identifies an object of interest.  It is not a C2 node, but an intermediate-range hypersonic missile system, a high-threat target given its ability to avoid air defense systems.  Furthermore, the drone’s sensor suite assesses that the missile is armed with a nerve agent. The enemy has already used this weapon system for deep strikes into populated areas, causing significant civilian casualties.  Since its training data set included the joint prioritized target and effects list for this conflict, the AI recognizes the hypersonic missile system as the highest priority target on the list.

Unable to communicate with a human operator to report this information and request new instructions, the drone’s AI, per instructions provided by its operator before launch, evaluates whether to engage the missile site with its swarming missiles. The drone determines the hypersonic missile system is fueled and potentially ready to fire.

Trained with the mission-specific Rules of Engagement (ROE), having reviewed every treatise and article published on the law of war as part of its targeting AI development, and taught how to conduct collateral damage assessments, the drone employs its swarming missiles at the hypersonic missile site.  The hypersonic missile site’s autonomous air defenses automatically engage with a combination of kinetic, electromagnetic, and cyber countermeasures.  The swarming missiles, equipped with their own AI, coordinate to overcome these defenses. Enough missiles make it past the countermeasures to destroy the site, also killing twenty-seven children in the school located next door that the enemy was using to deter targeting.

From Isaac Asimov novels to the Terminator movies, people have been fascinated by the idea of AI, and frightened by the possibility of unfeeling machines being unleashed on, and ultimately turning against, humankind.  As the uses of AI in weapon systems are still theoretical, these science-fiction sources often serve as the reference points for ethical and legal concerns about AI being incorporated into weapon systems.  With the United States, China, and Russia allocating significant resources, on ambitious timelines, to the development of AI for military use, these fears and fascinations are potentially becoming closer to reality than science fiction.

Several non-governmental organizations have formed an umbrella “Campaign to Stop Killer Robots,” calling for an outright ban on the development of autonomous weapons.[1]

Several nations have also advocated for a United Nations Convention on Certain Conventional Weapons (UN CCW) ban on autonomous weapons, and Google has pulled support from Project Maven, the Department of Defense’s (DoD) first practical program to implement machine learning and AI, based upon Google employees’ concerns about using AI for military applications.  Even if the killer robots have not yet arrived, the legal, ethical, and humanitarian debate is here and ongoing.

This article will take a practitioner’s approach to these issues by reviewing hypothetical autonomous weapons under the DoD policy on autonomous weapons (DODD 3000.09),[2] informed by the DoD’s UN CCW position papers on lethal autonomous weapon systems[3] and the DoD Law of War Manual.[4]  While still hypothetical, this approach will identify potential issues and initial considerations for when (or if) the Department of Defense develops autonomous weapon systems.

Artificial Intelligence in Weapon Systems

Legal and policy concerns arise when AI is incorporated into a weapon system to provide a level of autonomy.  The DoD defines a weapon system as a combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment (if applicable) required for self-sufficiency.  Based upon level of autonomy and human interaction, DODD 3000.09 categorizes weapon systems as autonomous weapons, human-supervised autonomous weapons, or semi-autonomous weapon systems.

An autonomous weapon system is defined as a weapon system that, once activated, can select and engage targets without further intervention by a human operator.  The drone in the introductory scenario is an autonomous weapon since it has the ability to identify and select targets: first in trying to find and target a C2 node with EW effects without further human involvement, and again when it identified the missile site and followed its parameters to engage a high-priority target presenting an imminent threat.

A human-supervised autonomous weapon system is a subset of autonomous weapon systems, distinguished by providing human operators with the ability to intervene and terminate engagements, including in the event of a weapon system failure, before unacceptable levels of damage occur.  If the drone in the introduction had communications with a human operator, it would be a human-supervised autonomous weapon since the human operator could intervene before it engaged the missile system.  The drone wouldn’t need human approval before engaging, only the capability for a human to intervene beforehand.

A semi-autonomous weapon system is defined by DODD 3000.09 as a weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.

This includes autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.

If directed by a human operator instead of autonomously by AI, the drone’s swarming missile system would be a semi-autonomous weapon system, capable of engaging only the selected target even if it used AI capabilities to overcome countermeasures and provide terminal guidance to the human-selected target.
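
To make these category distinctions concrete, here is a minimal sketch (in Python, with names invented for illustration; nothing below is drawn from the directive’s text) of how the three DODD 3000.09 categories turn on two questions: whether the system selects targets itself once activated, and whether a human operator can intervene.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    SEMI_AUTONOMOUS = auto()   # human selects targets; autonomy aids the engagement
    HUMAN_SUPERVISED = auto()  # system selects targets; operator can terminate engagements
    AUTONOMOUS = auto()        # system selects and engages targets on its own

@dataclass
class WeaponSystem:
    selects_own_targets: bool  # once activated, can it select targets without an operator?
    human_can_intervene: bool  # can an operator terminate an engagement in time?

def categorize(ws: WeaponSystem) -> Category:
    """One hypothetical reading of the DODD 3000.09 definitions."""
    if not ws.selects_own_targets:
        return Category.SEMI_AUTONOMOUS
    if ws.human_can_intervene:
        return Category.HUMAN_SUPERVISED
    return Category.AUTONOMOUS

# The introductory drone: it selects targets itself and its comms are jammed,
# so no human can intervene -> an autonomous weapon system.
print(categorize(WeaponSystem(selects_own_targets=True, human_can_intervene=False)))
```

On this reading, restoring the drone’s communications link is enough to move it from the autonomous category to the human-supervised one, which is exactly the distinction the directive’s approval categories turn on.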

Approved Acquisition Categories for Autonomous Weapons

Under DODD 3000.09, only a few specific categories of autonomous weapon systems and human-supervised autonomous weapon systems may be developed and fielded through the standard Defense Acquisition System process.  Any system that does not fit these specific categories requires heightened review, a process requiring approval by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the Chairman of the Joint Chiefs of Staff (CJCS) before formal development, and again before fielding.

For autonomous weapon systems, only those systems that apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets in accordance with the Department of Defense’s Non-Lethal Weapons Policy[5] are authorized for development through standard acquisition procedures without heightened review.  If the introductory scenario drone’s capabilities were limited solely to electronic attack on C2 nodes, it could have been developed and fielded without heightened review using the normal acquisition review process.  The addition of kinetic capabilities would require heightened review before both development and fielding.

Human-supervised autonomous weapon systems may be developed through normal acquisition processes if used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks, for static defense of manned installations or onboard defense of manned platforms.  Assuming a human operator had the capability to intervene, the hypersonic missile site’s air defense system in the introduction would be a human-supervised autonomous weapon system.  These systems already exist, such as the Counter-Rocket, Artillery, and Mortar (C-RAM) and Phalanx systems that autonomously engage incoming aircraft and projectiles.  Similar defensive weapon systems could be developed and fielded without heightened review.

Most semi-autonomous weapons likely won’t require heightened review.  Semi-autonomous weapons have existed for decades in “fire and forget” and “smart” munitions, and are therefore less restricted than autonomous weapon systems.  Lethal, non-lethal, kinetic, or non-kinetic semi-autonomous weapon systems, including manned or unmanned platforms, munitions, or sub-munitions that function as semi-autonomous weapon systems or as their subcomponents, may be developed through standard acquisition procedures, but must still go through the verification and validation (V&V) and testing and evaluation (T&E) requirements of Enclosure 2, DODD 3000.09.
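
The review-path logic above can be summarized in a short, hypothetical decision helper.  This is only a sketch of the reading of the directive’s approved categories laid out in this article; the parameter names, and the flattening of the policy’s conditions into a few booleans, are my own simplifications.

```python
def requires_heightened_review(category: str, lethal: bool, kinetic: bool,
                               targets_humans: bool, defensive_only: bool) -> bool:
    """Sketch of which systems need heightened review (USD(P), USD(AT&L), and
    CJCS approval) before development and fielding, per the reading above."""
    if category == "semi-autonomous":
        # Generally proceeds via standard acquisition (Enclosure 2 V&V and
        # T&E requirements still apply).
        return False
    if category == "human-supervised":
        # Pre-approved only for local/static/onboard defense against
        # time-critical or saturation attacks, and never to select humans.
        return targets_humans or not defensive_only
    if category == "autonomous":
        # Pre-approved only for non-lethal, non-kinetic force against materiel.
        return lethal or kinetic or targets_humans
    raise ValueError(f"unknown category: {category!r}")

# The introductory drone once armed with missiles: autonomous, lethal, kinetic.
print(requires_heightened_review("autonomous", lethal=True, kinetic=True,
                                 targets_humans=False, defensive_only=False))  # True
```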

Guidelines for Heightened Review of Autonomous Weapons

Autonomous or semi-autonomous weapon systems that don’t meet these approved categories require heightened review before development can proceed, and a second review before fielding.  Heightened review requires approval by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the Chairman of the Joint Chiefs of Staff (CJCS).

Enclosure 3, DODD 3000.09 establishes guidelines that must be met before formal development begins, and again before fielding.  These guidelines are practical evaluation steps supporting the DoD’s law of war position on the development and use of lethal autonomous weapons.  Under this position, autonomous weapon systems must effectuate human intent for using force by doing what commanders intend, minimize unintended engagements (including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE, and commander’s intent), and comply with the fundamental principles of the law of war.  As part of this process, legal reviews are required both before development and again before fielding.

Legal Reviews of Autonomous Weapons under DODD 3000.09

Imagine an anti-personnel weapon system of micro-copter swarming munitions, about the size of a tennis ball with a shaped explosive charge that directs a lethal blast towards an intended target while minimizing collateral damage. The weapon system is initially developed with the micro-copters using AI to seek, identify, and coordinate lethal effects on declared hostile forces in fortified or concealed positions, especially in urban areas or where hostile forces use human shields.  During development, the micro-copters’ AI capabilities prove extremely capable of identifying, reaching, and engaging human targets at a high success rate. However, even after extensive research and development, the AI cannot sufficiently[6] distinguish between human combatants and non-combatants.   Is this an indiscriminate weapon system?  Absent improvement in the AI’s ability to distinguish non-combatants, can the DoD field this capability under its interpretation of the law of war?
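
Footnote 6 frames the open question here: what error rate makes distinction “sufficient”?  A toy sketch (Python; all data and thresholds invented) shows why the answer matters, since the same overall error rate hides two very different failure modes: missing a lawful target versus engaging a civilian.

```python
def distinction_report(truth, preds):
    """Tally the two failure modes of a hypothetical target-distinction
    classifier; they matter very differently under the law of war."""
    misses = false_engagements = 0
    for t, p in zip(truth, preds):
        if t == "combatant" and p == "non-combatant":
            misses += 1             # missed a lawful target: an effectiveness problem
        elif t == "non-combatant" and p == "combatant":
            false_engagements += 1  # would engage a civilian: a distinction problem
    n = len(truth)
    return {"error_rate": (misses + false_engagements) / n,
            "false_engagement_rate": false_engagements / n}

# Invented evaluation data; footnote 6 assumes roughly a 1-in-3 overall error rate.
truth = ["combatant", "non-combatant", "combatant",
         "non-combatant", "combatant", "non-combatant"]
preds = ["combatant", "combatant", "combatant",
         "non-combatant", "non-combatant", "combatant"]
print(distinction_report(truth, preds))
# {'error_rate': 0.5, 'false_engagement_rate': 0.333...}

# Any pass/fail threshold is a policy judgment, not something DoD has published:
SUFFICIENT_FALSE_ENGAGEMENT_RATE = 0.05  # illustrative only
```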

DoD Law of War Manual and Autonomous Weapons

The DoD’s Law of War Manual[7] states that “the law of war does not specifically prohibit or restrict the use of autonomy to aid in the operation of weapons.”[8]  In both the DoD Law of War Manual and the UN CCW position papers, the Department of Defense advocates that autonomous weapons can enhance compliance with the law of war: precision weapons can strike with greater discrimination and less risk of incidental harm, and automatic self-deactivation may lower the risk to non-combatants after the weapon has served its military purpose.

Consistent with Enclosure 3’s guidelines, the DoD Law of War Manual affirms that persons, rather than the weapons, have the obligation to comply with the law of war.  Specifically for autonomous weapons, the DoD Law of War Manual states:

[I]n the situation in which a person is using a weapon that selects and engages targets autonomously, that person must refrain from using that weapon where it is expected to result in incidental harm that is excessive in relation to the concrete and direct military advantage expected to be gained. In addition, the obligation on the person using the weapon to take feasible precautions in order to reduce the risk of civilian casualties may be more significant when the person uses weapon systems with more sophisticated autonomous functions.[9]

Law of War Legal Review – Acquisition

Under the DoD Law of War Manual, in the study, development, acquisition or adoption of a new weapon system or means and methods of warfare, a nation is obligated to review whether this new weapon system would, in some or all circumstances, be prohibited by the law of war or any other rule of international law applicable to that nation. The acquiring DoD Component, usually a Military Department, conducts this acquisition legal review.

This legal review must address three questions: whether the weapon system causes unnecessary suffering; whether the weapon system is inherently indiscriminate; and whether the class of weapons has been specifically prohibited. For autonomous weapons, the question of discrimination will be key.  Whether a weapon is inherently indiscriminate is determined by applying the principles of distinction and proportionality to determine whether the weapon is expected to be illegal in all circumstances. Random weapons that are incapable of being controlled, and that cannot with any degree of certainty be directed against a military objective, are considered to cause excessive incidental harm and are therefore illegal.

The DoD Law of War Manual cites balloon bombs and unguided missile attacks generally directed at civilian population centers as examples of indiscriminate weapons. A weapon system also may be legal under the law of war, but its specific usage illegal, such as target designation lasers being used to blind.

Under DODD 3000.09, the swarming micro-copter munition is an autonomous weapon system since it has the ability to seek and select its own targets.  As a kinetic system, it would be subject to the heightened review guidelines of Enclosure 3.  At this stage, the heightened review focuses on the weapon system’s degree of effective human control, the ability of the system to operate consistent with the commander’s and operators’ intentions, safety and anti-tampering systems to prevent unintended engagements or loss of control, and testing to ensure effectiveness and reliability.

Enclosure 3 also requires a preliminary legal review.  In practice, the answers to heightened review focus areas should inform the initial legal review.  Since the weapon system is still in development, the legal review may help shape continued research and development. For the swarming micro-copter munition, the legal review should re-emphasize the principle of distinction and the need for better performance to pass further legal review.

Once past development and before fielding, Enclosure 3 requires additional reviews, including another legal review.  The safety and anti-tampering systems criteria from development must be reviewed again.  The degree of effective human control review must now ensure the system demonstrates the capability to allow commanders and operators to exercise appropriate levels of human judgment in the use of force in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable ROE. The weapon system design and human-machine interfaces must be readily understandable to trained operators, provide traceable feedback on system status, and provide clear procedures for trained operators to activate and deactivate system functions.  Enclosure 3 also requires adequate training, tactics, techniques, and procedures (TTPs), and doctrine that are available, periodically reviewed, and used by system operators and commanders to understand the functioning, capabilities, and limitations of the system’s autonomy in realistic operational conditions.

Would the Swarming Micro-Copter Pass Acquisition Legal Review?

Since it is incapable of predictably discriminating between combatants and non-combatants if set loose to seek out targets, the swarming micro-copter munition is an indiscriminate weapon. It cannot be controlled or directed with a degree of certainty against a military objective, so it couldn’t be deployed into an area with non-combatants.

But what if the swarming micro-copter munition could be employed to seek only in a defined area, essentially turning it into a “smart” cluster munition?  Instead of randomly dropping bomblets to saturate the target area, the munitions now have a capability to maneuver and find targets in a defined target area that could be assessed for collateral damage under a proportionality analysis.  Just like with cluster munitions, the legality and policy restraints for this autonomous weapon system will depend on its use. If used in accordance with the principles of military necessity and proportionality, likely in areas mostly free of non-combatants, this weapon system could have a use within the principles of the law of war and pass acquisition legal review.

Further, what if this weapon system could also loiter in that area?  This capability would make the weapon system similar to the Family of Scatterable Mines (FASCAM), utilized to limit or prevent enemy freedom of maneuver while self-destructing after a set time to lessen the humanitarian concerns of persistent landmines.  This usage could also be within the principles of the law of war and pass acquisition legal review.

Both usages will likely require the swarming micro-copter munition to be a human-supervised autonomous weapon, capable of self-destructing like FASCAM, and possibly even of being recalled without destruction.  This would give it another advantage over traditional munitions: the ability to remove itself from the battlefield without a kinetic effect once its military purpose has passed, further minimizing future harm to civilians.

Therefore, the swarming micro-copter munition, if sufficiently constrained in use, could potentially pass the acquisition legal review requirement of legality in some circumstances.  As required by Enclosure 3, doctrine, ROE, and training requirements would provide these constraints.  These constraints likely would limit the micro-copter munition system’s usage to loitering or deployment to a defined area with few to no non-combatants, with further restrictions or outright prohibitions on unrestricted use of its autonomous seeking capability outside such a defined area. During low-intensity conflicts, the ROE likely would limit or outright prohibit the use of the swarming micro-copter.
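
If doctrine and ROE are to supply these constraints, some of them could even be encoded as machine-checkable pre-release conditions.  The sketch below is purely illustrative: every field name and number is invented, and real ROE are far richer than a few thresholds.

```python
from dataclasses import dataclass

@dataclass
class EmploymentConstraints:
    """Hypothetical ROE-derived release constraints for the micro-copter system."""
    max_noncombatant_density: float       # assessed persons per km^2 in the target box
    max_loiter_minutes: int
    allow_seek_outside_box: bool = False  # unrestricted autonomous seeking

def release_authorized(c: EmploymentConstraints, assessed_density: float,
                       planned_loiter_minutes: int, seek_outside_box: bool) -> bool:
    if seek_outside_box and not c.allow_seek_outside_box:
        return False  # seeking outside the defined area is prohibited outright
    return (assessed_density <= c.max_noncombatant_density
            and planned_loiter_minutes <= c.max_loiter_minutes)

# Large-scale conflict ROE might permit this; a low-intensity conflict ROE could
# simply set max_noncombatant_density to zero, barring use almost entirely.
roe = EmploymentConstraints(max_noncombatant_density=1.0, max_loiter_minutes=240)
print(release_authorized(roe, assessed_density=0.2,
                         planned_loiter_minutes=120, seek_outside_box=False))  # True
```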

However, even if the swarming micro-copter could pass the acquisition legal review, it still must pass all the additional reviews and meet the requirements of Enclosure 3.[10]  These requirements include development of doctrine, TTPs, and ROE, which requires a more specific law of war analysis.

Autonomous Weapons May Require Collaboration Between Attorneys Conducting Acquisition Legal Reviews and Judge Advocates Developing ROE

While the legal advice for weapon system acquisition and for use during hostilities is based in the law of war, each approaches the question from a different perspective.  The acquisition legal review doesn’t have to define any and all potential uses for a weapon, only establish that its usage is not expected to be illegal in all circumstances.  In contrast, Judge Advocates advising on the conduct of hostilities must make the law of war determinations on the use of the weapon under specific circumstances.

For autonomous weapons, the legal reviews during acquisition and during the conduct of hostilities may not be as easily bifurcated as with non-autonomous weapon systems.  Instead, the law of war acquisition review and the legal advice given during the conduct of hostilities may become a complementary, collaborative process instead of distinct, sequential steps. Like the swarming micro-copter munition’s indiscriminate seek mode, an autonomous weapon could be fielded with potential capabilities that exceed what is permitted under the law of war, requiring better definition of its legality.  Autonomous weapons’ potential versatility and ability to self-improve may continue driving this closer collaboration.  Enclosure 3’s guidelines seem to forecast this need for closer collaboration by requiring well-developed doctrine and ROE before fielding to keep the autonomous weapon within permitted uses.

To meet these requirements, the acquisition legal review may need to be more specific on defined uses to clearly articulate why the autonomous weapon is permitted under the law of war.  In turn, this may require more collaboration with Judge Advocates at the Geographic Combatant Commands and the Office of the Legal Advisor for the Chairman of the Joint Chiefs of Staff on determining the appropriate doctrine, procedures, and ROE for use.

This Coordination May Be Ongoing

Additionally, autonomous weapon systems that utilize machine learning may continue developing even once fielded.  This potential may require operational-level Judge Advocates to identify such changes and determine when a new law of war compliance legal review may be necessary.  This situation could arise either because human operators find new uses for versatile systems, or because the systems themselves internally develop improved capabilities.

Since their autonomous capabilities may make autonomous weapons more versatile, the testing and evaluation procedures for autonomous weapons will have to be more extensive, not only to verify that the weapon works as intended, but also to find unanticipated uses.  Even once fielded, operators may develop new uses of the weapon not anticipated during development and fielding.

For example, the hypothetical micro-copter munition system is fielded with ROE prohibitions on using it in areas with non-combatants, limiting its use to areas of open ground in large-scale conflicts similar to how FASCAM is used today.   However, in a future conflict, Soldiers are facing an enemy using a maze of elaborate, booby-trapped tunnel systems that are causing significant casualties.  A Commander, who has release authority under the ROE for the micro-copter munition system in areas with a minimal percentage of non-combatants, proposes deploying the swarming micro-copter munition in a seek mode into the tunnel systems, asserting that between the confined space and unlikely presence of non-combatants, its inability to accurately discriminate between combatants and non-combatants won’t be a problem.

This scenario highlights a potential challenge in crafting the acquisition law of war reviews for autonomous weapons: the review may need to be specific enough to provide guidance for developing ROE and doctrine on permitted uses without constraining potential new or developing uses.  Enclosure 3 seems to anticipate this challenge by permitting, in cases of urgent operational need, a fielded autonomous weapon to be used in a new way without following all Enclosure 3 requirements except for the legal review.

Thus, the Staff Judge Advocate advising the Commander could conduct a legal review of this proposed new use, but if time permits, should coordinate with both the operational legal technical chain on the specific ROE change and the Military Department Judge Advocate General that conducted the acquisition law of war review to ensure agreement.[11]  Eventually, the Service will need to follow all the Enclosure 3 requirements, and the acquisition law of war legal review will likely need to be updated to reflect this new usage.

Machine learning raises the potential for autonomous weapon systems to evolve in capabilities after fielding, possibly in unexpected ways.  Just as scientists still don’t completely understand how the human brain develops connections and forms thought, AI scientists don’t completely understand how machines form learning connections to reach conclusions.  For these reasons, one of the focus areas of AI development is transparency into how the AI reached a result.

For example, the micro-copter’s AI may continue to develop after fielding, reviewing more data and refining its ability to distinguish targets. At some point, its ability to distinguish combatants from non-combatants could improve. Does this evolution require re-examining the acquisition legal review for new purposes, or is this improvement in capability resolved on the operational end through new doctrine, training, and ROE?
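
One way to operationalize that question is to compare a fielded, learning system’s measured performance against the envelope validated at fielding, and flag material drift in either direction for the Judge Advocates.  This is a speculative sketch; the metric names, values, and tolerance are invented.

```python
def needs_new_legal_review(baseline: dict, current: dict,
                           tolerance: float = 0.05) -> bool:
    """Flag when measured performance has drifted materially, in either
    direction, from the envelope validated at fielding."""
    for metric, validated in baseline.items():
        observed = current.get(metric)
        if observed is None or abs(observed - validated) > tolerance:
            return True
    return False

# Fielding-time validation envelope vs. metrics observed after on-board learning.
baseline = {"target_id_accuracy": 0.90, "false_engagement_rate": 0.33}
current = {"target_id_accuracy": 0.97, "false_engagement_rate": 0.10}
print(needs_new_legal_review(baseline, current))  # True: capability has evolved
```

Note that improvement trips the flag just as degradation would; on the article’s reasoning, a system that now distinguishes far better than validated may support new permitted uses, which is precisely what the acquisition review and the ROE may need to revisit.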

Other Considerations for Use of Artificial Intelligence

Semi-Autonomous or Autonomous?

Law of war issues likely will not be the first issues addressed by military attorneys advising on AI use in weapon systems.  Initial issues will likely revolve around whether the weapon system is semi-autonomous or autonomous according to DODD 3000.09.  AI capabilities alone don’t make a weapon system autonomous; semi-autonomous weapons can have sophisticated AI capabilities. Semi-autonomous weapons could make nuanced decisions, not only providing terminal guidance to a target but also prioritizing targets, such as identifying the command tank in a formation. Whether a weapon system is categorized as autonomous or semi-autonomous under DODD 3000.09 could change as AI capabilities prove more robust or more limited than during testing and evaluation.  As part of the legal review, attorneys will need to determine what category best fits the weapon system, which will require a deeper understanding of the weapon system’s autonomous capabilities and how it will be utilized.

Artificial Intelligence Will Have Military Uses Beyond Weapon Systems

Weapon systems are only one of many potential military applications of AI.  Other military AI applications, especially in the near future, will not involve weapons.  For example, AI could be used to create efficiencies in personnel and medical records management, injury prevention, logistics, maintenance, and detecting fraud/waste/abuse.

Potential AI applications to Force Protection, Personnel Security, and Insider Threat programs may require fresh consideration or raise novel issues under the Privacy Act, the Health Insurance Portability and Accountability Act, or general civil liberties and privacy concerns, but since these applications are internal to the DoD, they are unlikely to raise international or even domestic legal controversy.  Even usage of AI to support warfighting functions such as processing, evaluation, and dissemination of intelligence should raise little legal controversy in itself, but, as with Google and Project Maven, could raise concerns about its supporting role in the use of force.

So are the Killer Robots Really Coming?

The decision to use force involves multiple variables, which will limit autonomous weapon systems’ capability and effectiveness as offensive weapons able to engage a breadth of military targets, especially personnel.  For example, the swarming micro-copter munition hypothetical included an intentional complication by making it an anti-personnel system, and consequently requiring it to distinguish human combatants from non-combatants.

For humans, this is a complex cognitive decision, especially difficult if irregular forces are involved or the combatants are using human shields, attempting to blend with civilians, or even employing asymmetric techniques to fool the AI.[12]  Whether AI can ever reach that level of sophistication, especially in the near future, is unknown.
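
Footnote 12’s stop-sign example reflects a well-studied phenomenon: small, deliberately structured input changes can flip a classifier’s output.  The toy sketch below (Python with NumPy; the “classifier” is an arbitrary linear model invented for illustration, not anything fielded) shows the fast-gradient flavor of the idea.

```python
import numpy as np

# Toy linear "classifier": score > 0 means the input is labeled "combatant".
w = np.array([1.5, -2.0, 0.7])          # arbitrary, made-up weights
classify = lambda x: float(x @ w) > 0.0

x = np.array([0.2, 0.5, 0.4])           # benign input; score = -0.42 -> "non-combatant"
print(classify(x))                       # False

# Fast-gradient-style perturbation: nudge every feature in the direction that
# most increases the score. For a linear model that direction is sign(w).
epsilon = 0.6                            # perturbation budget (illustrative)
x_adv = x + epsilon * np.sign(w)         # x_adv = [0.8, -0.1, 1.0]; score = 2.1
print(classify(x_adv))                   # True: a small, structured change flips the label
```

Real perception systems are nonlinear and far harder to attack blind, but the structural point stands: an adversary who can shape the input can sometimes shape the output.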

This difficulty may explain why DoD’s AI development policy is focused not on developing the capability for AI to make human decisions on whether to use force, but on the greater capability to effectuate human intent once the human decides to use force.  Machines alone may prove just as fallible as man in making use of force decisions, but combined, man and machine may be significantly more capable of making faster, more accurate decisions on the use of force.

Given these considerations, kinetic anti-personnel autonomous weapons may be limited to area denial or self-defense systems, such as the swarming micro-copter munition used like FASCAM, with the AI having only to distinguish whether there is an incoming enemy rather than making dynamic targeting decisions involving a multitude of considerations. A human is still responsible for ensuring the autonomous weapon’s area of effect complies with the law of war, and for deactivating or recalling the weapon system once it has served its military purpose.

Autonomous weapons may be more useful at targeting materiel instead of personnel, and therefore better able to effectuate human intent to engage military vehicles with either kinetic or non-kinetic effects.  Distinguishing between military vehicles, such as aircraft, ships, or tanks, may be a more feasible AI capability in the near term, and a more practical use of autonomous weapons overall, than the Terminator-style killer robot designed to hunt and engage enemy personnel.

AI may enable humans to make better decisions in the use of force, and provide greater capability to enact the intent to use force, but that decision is ultimately still a human one.  That person will still be responsible for that decision and its consequences under the law of war. Military attorneys will continue having two roles in this process: advising on the law of war during the development of autonomous weapon systems, and on law of war considerations during their use.  Autonomous weapons’ versatility and potential for developing new and expanded capabilities may make these reviews more fluid, placing greater emphasis on the development of ROE, doctrine, and TTPs than on defining all permitted uses upon fielding.  Ultimately, as with any advance in technology, the fundamental law of war principles will remain the same, along with military attorneys’ responsibility to advise on their application at all phases of a weapon system’s development, fielding, and utilization.

[1] Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/

[2] Department of Defense Directive 3000.09, Autonomy in Weapon Systems, November 21, 2012 (incorporating Change 1, May 2017)

[3]  See United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems, https://www.unog.ch/__80256ee600585943.nsf/(httpPages)/7c335e71dfcb29d1c1258243003e8724?OpenDocument&ExpandSection=3%2C-1#_Section3

[4] Department of Defense Law of War Manual (updated 2016)

[5] Department of Defense Directive 3000.03E, “DoD Executive Agent for Non-Lethal Weapons (NLW), and NLW Policy,” April 25, 2013

[6] What is “sufficient” distinction for AI is an interesting question: What qualifies an autonomous capability as not “indiscriminate?”  Is it a set percentage of success, is it equal to a human, or better than a human?  Even if it achieves an acceptable success rate, how important is it for the users to understand how the machine reaches a decision? For this hypothetical, let us assume that there is a 1 in 3 error rate, compounded by a lack of transparency on how the AI achieves both right and wrong results.

[7] Department of Defense Law of War Manual (updated 2016)

[8] Paragraph 6.5.9.2, DOD Law of War Manual

[9] Paragraph 6.5.9.3, DOD Law of War Manual

[10]  While a legal determination that an autonomous weapon doesn’t comply with the law of war should halt the development process, a legal determination that an autonomous weapon complies with the law of war doesn’t mean it will be developed and fielded.  Just because a weapon is legal under the law of war doesn’t mean it’s practical.  The Enclosure 3 policy constraints and a practical cost/benefit analysis would ultimately decide whether the swarming micro-copter hypothetical is a viable weapon system.

[11] The swarming micro-copter munition’s AI will still raise law of war questions even under this employment.  As it seeks targets in the tunnels, does it need to recognize an enemy trying to surrender, persons hors de combat, or protected religious or medical personnel and disengage, likely requiring it to be a human-supervised autonomous weapon with the operator responsible for disengaging in these circumstances?

[12] For example, researchers were able to fool self-driving cars by simply placing tape onto stop signs.  An enemy could employ similar techniques against current AI capabilities. https://arstechnica.com/cars/2017/09/hacking-street-signs-with-stickers-could-confuse-self-driving-cars/

Still, as we like to say on Lawfire®, check the facts and the law, assess the arguments, and decide for yourself!

 
