Guest Post: Maziar Homayounnejad on “Precautions in (Autonomous) Attack: Mere Rules, or a Legal Principle?”

Today’s guest blogger is Maziar Homayounnejad, a PhD student at King’s College London. His essay is a brief synopsis of some research done at King’s College London and South Texas College of Law, on how US and NATO forces might deploy autonomous weapon systems in a safe and lawful manner. The overall thesis is that legal compliance efforts should begin by focusing on the existing rules on precautions in attack, because of how well these rules sit with military operational practice. Maziar thanks Professor Geoff Corn for his invitation to research at STCL during Fall 2017, and for his helpful comments on earlier versions of this work.

Here’s Maziar:

Introduction

U.S. Dept of Defense image by Peggy Frierson

Brigadier General Pat Huston’s post on ‘Future War and Future Law’ aptly pointed out that lethal autonomous weapon systems (LAWS) will raise new challenges in ensuring compliance with the law of armed conflict (LOAC). The key distinguishing feature of LAWS is that the machine itself will select and engage targets; humans will define (program) its behavior, but this will occur before deployment and at some distance away from the intended strike site. Thus, while the current legal framework is arguably sufficient to govern these kinds of emerging technologies, Huston argues that “it may be more complex and difficult to make the factual determinations in order to meet the legal standards”.

In this context, Huston focused on the two most commonly-cited LOAC principles: distinction (with respect to both persons and objects), and proportionality. Distinction requires the parties to a conflict to distinguish between civilians and legitimate military targets, and to only ever direct their attacks towards the latter. Once this test is satisfied, proportionality stipulates that the expected collateral damage must not be “excessive” in relation to the military advantage anticipated.

Any deployment of a LAWS must comply with these principles but, as Huston pointed out, there is some uncertainty as to whether the technology of autonomy will be sufficiently robust to afford such compliance. For example, will the sensory hardware and software be able to distinguish between combatants and civilians? What about targetable military objectives versus civilian objects?

Then there is the problem of ‘adversarial examples’, where minor deliberate changes to an image or an object fool the system into seeing something completely different: in past research, a dog has been perceived as an ostrich, and a 3D-printed turtle has been classified as a rifle. This should be expected to pose serious problems in armed conflict, especially during the heat and chaos of battle, and with the inherent tendency of both sides to try to deceive each other.
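
To make the mechanics concrete, here is a minimal sketch (my illustration, not drawn from the post or the cited research) of how an adversarial perturbation works against a toy linear classifier: a change much smaller than the input’s typical feature values is enough to flip the model’s decision. Real attacks, including the dog-to-ostrich and turtle-to-rifle results mentioned above, apply the same idea to deep image classifiers; all names and values below are hypothetical.

```python
import numpy as np

# Toy linear classifier: weights w score an input x; sign(w @ x) is the predicted label.
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # hypothetical model weights
x = rng.normal(size=100)            # an input the model classifies "correctly"

score = w @ x
label = np.sign(score)              # original decision, e.g. +1 = "dog"

# Fast-gradient-style perturbation: nudge every feature slightly in the direction
# that most changes the score, using just enough budget to flip the decision.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - label * epsilon * np.sign(w)

print("original label:     ", label)
print("adversarial label:  ", np.sign(w @ x_adv))   # flipped
print("per-feature change: ", epsilon)              # small relative to typical |x_i|
```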

In addition, there is the indeterminate nature of the proportionality test: at what point does the expected collateral damage become “excessive” and how, if at all, can this be defined in machine-executable code?

At the very least, Huston argues that commanders must “understand the system well enough to have confidence in what it will do and what it won’t do”. Yet, unpredictability is the Achilles’ heel of most advanced autonomous systems, especially those predicated on (deep) machine learning techniques, which tend to make the systems behave like black boxes. Barring any major technological shift, these vulnerabilities look set to continue into the foreseeable future (for a semi-technical explanation, see pages 27-32 of the JASON Group report to the DoD).

I share these concerns, but I also believe strongly in the mitigating effects of another set of LOAC rules: precautions in attack. Not only are these an indispensable means of complying with the principles of distinction and proportionality but, as will be argued below, there is good reason to conceive of precautions as a LOAC principle in their own right. In that regard, there are three key takeaways from this post:

  • Centaur Warfighting: By understanding the respective cognitive strengths of humans and machines, we can carefully delineate tasks along these lines during the targeting process. This will optimize the human-machine team in a LAWS deployment, and increase the likelihood of LOAC compliance.
  • Precautionary Rules: There are a limited number of specific rules in the LOAC, which provide commanders with a package of relatively tangible measures for mitigating civilian risk. Focusing on these as conduits for meeting the distinction and proportionality obligations will increase the likelihood of LOAC compliance.
  • A Precautionary Principle: As the LOAC has not necessarily evolved or been written with advanced technologies in mind, not all of the specific rules will be relevant in every LAWS deployment, while other deployments may call for hitherto unfamiliar precautionary measures. Thus, regarding precautions as a full LOAC principle may encourage the development of more apt, LAWS-specific measures that will be more effective for mitigating civilian risk, while retaining the military advantage of the systems.

Human versus Machine Cognition

Humans and machines have different cognitive strengths and weaknesses, and this is brought into sharp focus when we consider the difference between automatic and controlled processing. The former refers to the fast processing of routine data for deductive reasoning; machines generally do this better than humans. The latter refers to slower deliberative processing for inductive reasoning, recognizing novel patterns, metacognition and meaningful judgment; humans do this better than machines.

The British computer scientist Noel Sharkey argues that only when these attributes are brought into optimal balance through ‘human-machine collaboration’ can a weapon system have a superior humanitarian impact (see pages 30-34 of his work on human control of weapon systems). This is because machines perform precisely but only what they are programmed to do; they cannot effectively respond to unexpected circumstances or abstract criteria in the way humans can. Accordingly, Paul Scharre advocates the idea of a ‘centaur warfighter’ that will “leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence”.

The implication is clear: the lawful deployment of LAWS will require an effective ‘division of labor’ that bounds autonomy to tasks in which machines are objectively superior, while humans decide on anything that requires deliberative thinking.

The Joint Targeting Cycle

This painstaking process of dividing and allocating tasks between man and machine will occur during design and development, but also continuously through the formal targeting process. The finer details of this process can be found in US doctrine (Chapter II) and NATO doctrine (Chapter 2) on the Joint Targeting Cycle.

It is beyond our scope here to go into an in-depth analysis, but suffice it to say that the deliberate targeting cycle encompasses six distinct phases, during which there is ample opportunity for commanders and their battle staffs to divide and allocate tasks. Briefly, these phases are:

  • End State and Commander’s Objectives: Broad strategic guidance from elected officials or from higher headquarters is translated into an operational plan.
  • Target Development and Prioritization: Intelligence analysts, legal advisers and a range of battle staffs work to identify potential targets, and to further develop these before nominating and prioritizing them for attack. This very detailed phase includes a great deal of vetting and validation of those potential targets.
  • Capabilities Analysis: Once prioritized targets are known, commanders and their staffs will rigorously analyze their available weapons, and how specific precautions may be taken to mitigate civilian risk while still achieving the desired effect.
  • Commander’s Decision and Force Assignment: Here, commanders, often supported by a Joint Targeting Coordination Board, will match capabilities against prioritized targets and assign those capabilities accordingly.
  • Mission Planning and Force Execution: This phase is carried out by unit commanders on the ground, who largely replicate phases 1-4 but on a more detailed and tactical level. Goals are re-evaluated, additional intelligence is collected, targets are further refined, and weapons are chosen from within the assigned unit that are best suited to achieve the goals.
  • Assessment: The final stage measures if, and to what extent, the planned effects have been realized, after tactical activities have been executed.

This process is both cyclical and iterative, thereby enabling continuous improvement of targeting efforts. More importantly, it brings together a large number of professional staffs who are able to exercise deliberative judgment for tasks that directly require it, and for deciding which tasks will need controlled versus automatic processing in the first instance.

Precautionary Rules

Moving on to the actual precautionary rules, there is a specific obligation to verify (as far as possible) that every target is a legitimate military target. For a LAWS deployment, this rule has the following implications:

  • It will put greater onus on intelligence analysts during phase 2 of the targeting cycle, to verify all aspects of target development that require controlled processing (as such cognitive skills may not be available on deployment).
  • It may also put greater onus on the battle staffs during phase 3, to ensure that the military status of the targets they are allocating for autonomous attack is non-changeable (e.g. tanks, rather than civilian dwellings being used by insurgents).
  • Moreover, when a LAWS is being deployed at phase 5, it will almost certainly require full use of onboard sensors. However, in some circumstances commanders may need to prioritize certain sensors for their objective strengths (e.g. GPS guidance systems, when engaging fixed targets like a bridge), and may even need to use external sensors (e.g. surveillance drones) to fully utilize the speed and precision of automatic data-processing.

A potential and fortunate flipside is that, when an unexpected mass of presumed civilians (human heat signatures) appears on the scene, a LAWS may be quicker than a human operator to cancel or suspend the attack, given its electronic data-processing speeds compared with a human neuromuscular delay of up to 0.5 seconds.

Another precautionary rule is to avoid or minimize collateral damage, and this also has several implications:

  • In some cases, it may simply require that commanders alter the timing of deployment. Consider a situation where a fixed object like a military barracks is being targeted by a LAWS that cannot distinguish between combatants and civilians. Where intelligence reveals that civilian flows into the area only begin after 7.00 am, an early morning attack (i.e. before 7.00 am) may be enough to comply (a simple sketch of this timing check appears after this list).
  • Another way to minimize collateral damage may be through the direction of attack: hitting the barracks from an angle that leads into an empty field (rather than going towards a built-up area) will be likely to spare any nearby civilian structures.
  • Finally, where there is still a risk of civilians unexpectedly turning up, a LAWS can – unlike a fire-and-forget munition – loiter until passing civilians have moved out of the area. Alternatively, it can follow a moving target into an isolated or less populated area, before weapons release; again, minimizing collateral damage.
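
To illustrate the first of these measures, the following is a minimal sketch (my own, with hypothetical times, thresholds and function names) of the kind of timing check a planning staff might encode: the deployment is cleared only if the strike is expected to conclude comfortably before the intelligence-derived start of civilian flows into the area.

```python
from datetime import time

CIVILIAN_FLOW_STARTS = time(7, 0)       # earliest civilian presence, per intelligence

def strike_window_clear(planned_strike: time, buffer_minutes: int = 30) -> bool:
    """True if the planned strike ends well before civilians begin arriving."""
    cutoff = CIVILIAN_FLOW_STARTS.hour * 60 + CIVILIAN_FLOW_STARTS.minute
    strike = planned_strike.hour * 60 + planned_strike.minute
    return strike + buffer_minutes <= cutoff

print(strike_window_clear(time(5, 45)))   # True  -> early-morning attack complies
print(strike_window_clear(time(6, 50)))   # False -> too close to civilian flows
```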

Where possible, attackers must give effective advance warning to civilians before an attack. The aim is for as many civilians as possible to leave the area, or at least to get out of the collateral effects zone. In a LAWS context, commanders will have to decide during the targeting process whether advance warning is compatible with mission goals and, if so, whether this should be done through traditional means (e.g. pamphlet-drops), or via the LAWS.

If they opt for the latter, a LAWS fitted with non-lethal munitions can discharge these before weapons release, similar to the Israeli ‘roof-knocking’ technique. Once a non-lethal munition is released, the LAWS can loiter and survey the area, performing repeated split-second calculations on fleeing civilians and vehicles in relation to the collateral effects radius. At the optimum time of attack, when observable civilians are sufficiently clear of the kill zone, the weapon system can then discharge its lethal munitions.
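
As a rough sketch of that sequence (my own illustration, not a description of any fielded system), the logic reduces to a warn-then-reassess loop; the sensing step, effects radius and loiter limit below are placeholders.

```python
import time as clock

COLLATERAL_RADIUS_M = 150           # hypothetical collateral effects radius
MAX_LOITER_S = 600                  # abort if the area has not cleared within this limit

def discharge_warning_munition() -> None:
    """Placeholder for the non-lethal 'roof-knock' warning."""

def civilians_within(radius_m: float) -> int:
    """Placeholder for an onboard-sensor estimate of civilians inside the radius."""
    raise NotImplementedError

def warn_then_engage() -> str:
    discharge_warning_munition()
    deadline = clock.monotonic() + MAX_LOITER_S
    while clock.monotonic() < deadline:
        if civilians_within(COLLATERAL_RADIUS_M) == 0:
            return "release"        # observable civilians are clear of the kill zone
        clock.sleep(1)              # keep loitering, then reassess
    return "abort"                  # the area never cleared within the loiter limit
```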

Command Center; U.S. Air Force photo

Finally, there are precautionary rules on target selection, which focus on minimizing danger to civilians when a number of alternative targets offer the same military advantage. An example may be an attack on a railway line that aims to block a vital supply route, where the same military advantage is gained whether the line is cut close to a highly populated train station or in a more remote, uninhabited stretch. Either way, the enemy’s supply route is cut off, though in the latter scenario there is little, if any, collateral damage.

This task would seem amenable to automatic processing, as the LAWS would simply need to be programmed to recognize that attacking any point along a rail (or road) network is equally advantageous; beyond that, its integrated sensors would enable it to select the part of the overall target that poses the lowest civilian risk.
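
A minimal sketch of that selection step (my own, with invented figures) is below: since every candidate point severs the same supply route, military advantage is constant, and the choice reduces to minimizing estimated civilian presence within the effects radius.

```python
# Each candidate point: (description, estimated civilians within effects radius)
candidates = [
    ("rail segment A - adjacent to busy station", 120),
    ("rail segment B - suburban crossing", 15),
    ("rail segment C - open countryside", 0),
]

# Military advantage is identical across candidates, so the precautionary rule
# reduces to choosing the point with the lowest expected civilian harm.
selected = min(candidates, key=lambda c: c[1])
print(selected)   # ('rail segment C - open countryside', 0)
```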

A Precautionary Principle?

The precautionary rules are undoubtedly a more tangible way to comply with the principles of distinction and proportionality, and the above provides just a sample of the potential applications to LAWS deployments. However, as the technology of autonomy raises new challenges, there is arguably a need to expand the number of precautionary measures available for mitigating civilian risk. Perhaps the best way to achieve this is to elevate precautions to the status of a full LOAC principle. But what is the difference between a rule and a principle, and why does it matter for LAWS deployments? 

Rules are relatively precise, and applicable only to the specific contexts that the drafters had in mind when writing them; all the above precautionary rules are clear examples of this. Principles are more vaguely drafted, and they serve as a general source of guidance; both distinction and proportionality clearly embody this.

Therefore, principles guide the interpretation and application of specific rules, and they guide decision-making where no discernible rule exists. This has particular value when a new technology is being deployed in a safety-critical context, with humans out-of-the-loop at the point of attack. Namely, a precautionary principle will ensure that the novel challenges raised by that technology will not be left unaddressed for want of any concrete rules, as the general guidance will encourage the development of new precautionary rules that are more apt for LAWS deployments.

Helpfully for US and NATO forces, the Joint Targeting process naturally accommodates new precautionary practices, to suit the specific circumstances of a given deployment. Indeed, as Geoffrey Corn has argued, this process in itself has enormous precautionary value, because of its multiple sub-processes, and its robust checks and balances – all of which incorporate numerous professional battle staffs and timely legal advice. Hence, there is an undeniable symmetry between military operational practice and the precautionary rules.

Yet, Joint Targeting doctrine is an operational imperative, not a legal one, and the extraordinary efforts expended by US and NATO forces are not necessarily replicated by other States. So, might there be a legal basis for requiring that precautions in attack be treated as a principle, rather than a mere set of closed rules?

Perhaps yes. Within the LOAC, there is a basic obligation to take ‘constant care’ in the conduct of military operations, to spare civilians and civilian property. While not specifically defined, ‘constant care’ is clearly a pervasive and open-ended obligation, and the focus on broader ‘military operations’ – as opposed to narrower precautions ‘in attack’ – arguably stretches this obligation to both pre- and post-attack activities.

With these legal niceties out of the way, what kinds of precautionary rules and practices might we expect to see develop out of a precautionary principle? To an extent, answering this question is quite a speculative exercise because such practices will depend on circumstances. But it is worth noting a few broad observations already made by other authors.

For example, Larry Lewis advocates the front-loading of critical tasks, to mitigate both civilian risk and fratricide. This means commanders and their battle staffs will undertake, pre-deployment, as many of the tasks and decisions requiring controlled processing as possible, thereby reducing the number of algorithmic decisions that need to be made on a chaotic battlefield. All else being equal, this should optimize the impact of the speed and precision of automatic processing.

Another precautionary practice may be to always set the tightest operational parameters without undermining the military advantage of a LAWS deployment. In other words, rather than deploying a LAWS to attack ‘enemy vehicles’ over a broad time and space of operation, it is arguably better to restrict all of this to the bare minimum, to shrink the margin of error while humans remain out-of-the-loop.

Thus, attacking specific targets that are unique to the enemy (e.g. ‘T-80 tank’, rather than just ‘large armored vehicle’), for the minimum length of time and over the narrowest geographical space possible, is arguably the preferred approach. In another paper (at pages 50-58), I develop this argument further, using examples of how this has been applied in other weapons treaties.
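
As a minimal sketch (my own, with hypothetical names, times and coordinates) of what ‘tightest operational parameters’ might look like in practice, the engagement envelope below is narrowed to a specific target signature, a short time window and a small geographic box, and anything outside it is rejected.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass(frozen=True)
class OperationalEnvelope:
    target_signature: str               # e.g. "T-80 tank", not "large armored vehicle"
    start: datetime                     # earliest permitted engagement time
    end: datetime                       # latest permitted engagement time
    lat_bounds: Tuple[float, float]     # (min, max) latitude of the approved area
    lon_bounds: Tuple[float, float]     # (min, max) longitude of the approved area

    def permits(self, signature: str, when: datetime, lat: float, lon: float) -> bool:
        return (
            signature == self.target_signature
            and self.start <= when <= self.end
            and self.lat_bounds[0] <= lat <= self.lat_bounds[1]
            and self.lon_bounds[0] <= lon <= self.lon_bounds[1]
        )

envelope = OperationalEnvelope(
    target_signature="T-80 tank",
    start=datetime(2018, 1, 1, 4, 0),
    end=datetime(2018, 1, 1, 6, 0),
    lat_bounds=(0.10, 0.14),            # illustrative coordinates only
    lon_bounds=(0.50, 0.56),
)
print(envelope.permits("T-80 tank", datetime(2018, 1, 1, 5, 0), 0.12, 0.52))              # True
print(envelope.permits("large armored vehicle", datetime(2018, 1, 1, 5, 0), 0.12, 0.52))  # False
```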

Finally, recall Huston’s argument that commanders will at least need a good understanding of system capabilities and limits, in order to confidently deploy LAWS in a lawful manner. This implicates training and staffing as precautionary measures in and of themselves (both being arguments already made in a general context by Geoffrey Corn). Namely, this calls for LAWS-specific LOAC training for commanders, and potentially the inclusion of roboticists and software engineers in the battle staffs.

Currently, the law requires that commanders have access to legal advisers when necessary, but there is no equivalent rule for technical personnel to give advice on weapons performance and effects. Yet, there is no reason why such a requirement cannot be read into a broader precautionary principle, given the highly complex and technical nature of LAWS. 

Conclusion

So, to summarize: while the principles of distinction and proportionality are undeniably crucial for securing overall LOAC compliance, how we get there is often where the focus should be, even more so in a LAWS context. As a reminder, my three takeaways from all of this are:

  • Centaur Warfighting: By understanding the respective cognitive strengths of humans and machines, we can carefully delineate tasks along these lines during the targeting process. This will optimize the human-machine team in a LAWS deployment, and increase the likelihood of LOAC compliance.
  • Precautionary Rules: There are a limited number of specific rules in the LOAC, which provide commanders with a package of relatively tangible measures for mitigating civilian risk. Focusing on these as conduits for meeting the distinction and proportionality obligations will increase the likelihood of LOAC compliance.
  • A Precautionary Principle: As the LOAC has not necessarily evolved or been written with advanced technologies in mind, not all of the specific rules will be relevant in every LAWS deployment, while other deployments may call for hitherto unfamiliar precautionary measures. Thus, regarding precautions as a full LOAC principle may encourage the development of more apt, LAWS-specific measures that will be more effective for mitigating civilian risk, while retaining the military advantage of the systems.

Maziar Homayounnejad is a PhD student at King’s College London, where he researches weapons law and targeting law issues in relation to new weapon systems. For two weeks during Fall 2017, he was a Visiting Scholar at South Texas College of Law, and the previous Fall (2016) he was a Visiting Scholar at Warwick Law School, UK. Maziar recently completed a thesis on ‘Lethal Autonomous Weapon Systems Under the Law of Armed Conflict’.

As we like to say at Lawfire®, check the facts, assess the arguments, and decide for yourself!
