Jack McDonald


# Lethal Legibility

Note: this is a sketch of some work that I’ll be writing on over the next couple of years. The core of this work is dissatisfaction with the revisionist account of the ethics of war, how this in turn informs my perception of debates regarding the development and use of lethal autonomous systems, and what I think this implies for the future of warfare. The TL;DR: expert systems will mean that any object legible to machines as a military target is dead, whereas those that aren’t (humans, for example) will continue to require human decision-making, and will thus be less vulnerable. This is because of the way we think about ethics and international law, not in spite of either. The future of warfare will be in part defined by the forced obsolescence of platforms that are legible to autonomous systems, or by the vast amount of resources required to keep such things alive in armed conflicts featuring autonomous systems.

## The Deficiencies of Warfare-as-Trolleyology

The enduring strength of Michael Walzer’s Just and Unjust Wars is the fact that he pegs his entire book to historical examples of moral problems. This, I think, makes it markedly more readable and intelligible to people who aren’t specialists in the ethics of war.1 At the same time, Walzer’s work isn’t without its critics, notably recent “just war revisionists” such as David Rodin and Jeff McMahan.2 Some of their arguments have proved highly controversial, notably the idea that soldiers have no right to participate in an unjust war. I disagree with a lot of their arguments, but for present purposes, two disagreements are important: the condition of war, and the modelling of moral relationships/choices.

The essence of McMahan’s arguments regarding the rights and wrongs of killing in war is that it is “absurd” to hold that the declaration of a state of war alters the moral relationships between persons (expressed in the sense that civilians lose their peacetime rights upon such a declaration). In this view, we shouldn’t treat war as a special condition. As someone with a PhD in War Studies, I’m probably predisposed towards rejecting this idea. Before getting to my view, we should recognise that this runs counter to commonly held social and political understandings of war. At the same time, this is also a continuation of the attitude in moral philosophy that seeks to define universally applicable principles from coherent ethical theories. In this, Walzer’s outline of a liberal theory of just war was clearly incomplete. It’s somewhat difficult to understand the project of universally applicable principles of justice in war without first referring to Walzer’s get-out clause of “supreme emergency.” After all, once one introduces the idea that in extreme danger morality doesn’t hold, then where in war do moral considerations apply?3 If political declarations that the political community is at stake are not sufficient to explain the difference between war in general and supreme emergency, then McMahan’s question of why we should care about political declarations of war in the first place makes more sense.

For me, the most important aspect of McMahan’s work is that it is a serious and sustained challenge to the ethical relationships that exist in war and warfare. That I disagree with his view is almost beside the point.4 Whereas in Walzer’s view soldiers retained their moral status regardless of the justice of their war, in McMahan’s the justness of the war is (almost) everything. Individuals, in McMahan’s view, have no right to kill for an unjust cause. What interests me most about McMahanite soldiers is the ethical demands placed upon them. After all, they are being asked to make complex political judgements and, more to the point, to distinguish between objective and subjective justifications for their actions. At heart, it’s the distinction between the kind of moral reasoning that takes place in philosophy texts and the epistemic conditions of the world we live in that I think is a real problem for the account of justifiable killing in war that McMahan advances. In my view, the existence of armed conflict and war presupposes fundamental differences in how two political groups view the world. Who, or what, can define in such circumstances which beliefs are objectively justified and which are merely subjectively justified? McMahan does take on the “epistemic argument” when examining the moral equality of combatants, but again, this is moral theory to render judgement, rather than moral theory to help individuals guide their actions. The overarching problem, I think, is that we can’t really explain moral agency in war without engaging with the fundamental epistemic limits war places on soldiers and politicians alike. In fact, I’d go so far as to say that it’s impossible to understand ethical agency in war without reference to the social structures of military forces and wider society.

Understood in this way, revisionist just war theory desaturates inherently uncertain and value-laden situations of both uncertainty and value. This, I think, is the same problem we have with “trolleyology”, or the reduction of moral choices/reasoning to near-infinite varieties of the trolley problem. The classical problem, we should remember, is whether a person in control of a runaway trolley car should switch the car from its initial course, where it will kill five people, to a track where it will kill only one person who would otherwise be unharmed. The problem with the trolley problem is not necessarily the trolley itself, but the alternate model of ethical discussion that accompanied it. Philippa Foot’s article that gave rise to this way of discussing moral problems was, we should remember, about ethical discussions regarding abortion. More to the point, the trolley problem itself was a simplification of a far more ambiguous example:

> Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed.

The point is that the example of the judge is something that can be argued without end, whereas in the case of the simple trolley problem, most people (myself included) appear ready to flip the switch to kill an otherwise unharmed person.

There are now lots and lots of books written on the trolley problem.5 The problem, I think, is that whereas the standardisation and simplification of moral choices into flipped switches, track changes, and people with infeasibly large backpacks6 pushed off bridges allows for the discussion and communication of theories and principles of ethics, this mode of discussion bars entry to values, political membership, and uncertainty. More to the point, it implicitly reduces the context of a moral choice to a lonely individual standing by a switch or a potential (unwilling) sacrifice.

This mode of ethical discussion abstracts everything from war and warfare that makes war and warfare a problem. If political ties didn’t matter, if everyone subscribed to the same rational set of ethical principles, if the consequences of our actions were both foreseeable and predictable, and if we could ascertain everything we needed to make a moral choice alone, then there wouldn’t be war in the first place. But whereas the first two points (objective value, community ties) can be argued back and forth, it’s the fact that uncertainty is integral to warfare, and that most decisions in warfare are not taken by singular individuals, that I think poses a real challenge to the way moral philosophers have taken to discussing the ethics of war. If your model of ethical judgement presupposes certainty regarding outcome or effect, or if it conceives of soldiers as gathering all the information for their choices independently of other people, then that model cannot account for ethical agency in the context of warfare, both contemporary and historical. The problem this poses for the LAWS debate is that this model of rational liberal individuals as sole moral agents forms the basis of all discussion regarding the employment of LAWS, and it also shapes the very discussions that we are having regarding the use of artificial intelligence or autonomous systems in a lethal context.

## Meaningful Human Control

The connection between contemporary debates on just war and debates regarding the development/deployment of lethal autonomous weapon systems (LAWS) is that both rely upon processes of ethical modelling, abstraction, and argument. Both are also concerned with the morality of killing in war. The model individual in McMahan’s world is a soldier (or other service member) free of political and social ties, free to analyse the world in order to ascertain the justice or injustice of large-scale social phenomena prior to action. The problem is that the world is complex, in the technical sense of the word, meaning that large-scale phenomena cannot necessarily be predicted, observed, or comprehended by individual agents, and, most importantly, both military practice and ethics are something of a response to this epistemic condition.

Why does all of this matter for “killer robots” and LAWS? As I see it, there are four primary ethical debates currently milling about. The first is whether it is ever ethical for a non-human agent to make a lethal decision, the second concerns the ethical limits of human supervision of lethal non-human agents, the third is whether non-human agents can ever be equated to human agents in ethical terms, and the fourth is the “YOLO” ethical debate over replacing humans with non-human agents wherever the machines are better than their human equivalents.7 Where these ethical debates depart from previous ones is in their obvious emphasis on the ethical relationship between human and non-human agents. There is also an emphasis on command relationships, at minimum the close coupling of a human being with a non-human agent in a supervisory role. At the same time, the act of killing is again the primary focus for debate, as is the individual agent (human or non-human).

This focus on the decisions of individual agents is inherited from the just war tradition, with the obvious twist that these questions now involve non-human agency. What I think has been most rewarding about the entire debate on autonomous systems is the disaggregation of decision-making, and the widening of the issue of responsibility. Let me explain. We can take as a given the three classes of control - humans in the loop, on the loop, and out of the loop.8 Human decisions in this scheme are either positively authorising an action, saying “no” to a machine, or absent from the immediate decision entirely. So far, so command responsibility. In fact, we can probably pull analogous examples from war crimes cases such as Yamashita, Medina, and so on. The problem, obviously, is that we’d like responsibility to attach to all uses of force, so rather than just looking at the responsibility of commanders, suddenly we have to think about the idea of humans taking responsibility for the autonomous decisions of a non-human agent - you tasked it, you own it, and so on and so forth. Unfortunately for military types, the kind of legal/policy innovations we see in the tech/auto sector9 to speed the adoption of autonomous technology won’t really work in a war-fighting scenario. This means that there is an intense focus upon the degrees and definitions of autonomous systems,10 and the point at which a human can ultimately be held responsible for the operation of one. It’s perhaps because of this that we now have a debate regarding “meaningful human control” of weapon systems. In this sense, “meaningful” means something different to nearly everyone.11 Writing as someone who has been reading a lot of existentialist philosophy in recent months, I have to admit that the back-and-forth regarding the meaning of meaningful control causes me to smirk from time to time. It’s a perfect parking lot phrase, really.
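
To keep those three classes of control straight, here is a minimal sketch in Python. The names, and the reduction of each class to a single yes/no rule, are my own simplification for illustration rather than a definition drawn from Scharre or anyone else in the debate.

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # a human must positively authorise each engagement
    ON_THE_LOOP = auto()      # the system acts unless a human vetoes it in time
    OUT_OF_THE_LOOP = auto()  # the human is absent from the immediate decision

def engagement_proceeds(mode: ControlMode, human_authorised: bool, human_vetoed: bool) -> bool:
    """Toy decision rule for whether a proposed engagement goes ahead."""
    if mode is ControlMode.IN_THE_LOOP:
        return human_authorised       # nothing happens without a positive "yes"
    if mode is ControlMode.ON_THE_LOOP:
        return not human_vetoed       # the default is action; the human can only say "no"
    return True                       # out of the loop: no human in the immediate decision
```

Much of the “meaningful human control” argument is, in effect, about which of these modes (and which supporting processes wrapped around them) count as control worth the name.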

The problem with lethal autonomous systems is that they go back a long way.12 If there’s no “cut-off”, then what separates the autonomous decision-making of a guided munition, a Phalanx CIWS, or a loitering anti-radar munition? I don’t think ethics will provide an answer to this question, but I do think that the way in which we talk about these issues in ethical terms says something about our various societies. What I’m more interested in is the relationship between responsibility, decisions, and information transfers, and it’s at this point that I’ll return to my earlier criticism of the trolley problem as a method for examining moral problems.

If a human being constructs justified belief for a machine, is the machine at fault for the consequences? There are two quite obvious objections to this: 1) a machine isn’t able to be held accountable, and 2) the machine was acting on the world-view passed to it by a human being. Let’s flip the question, then: if a machine constructs justified belief for a human being, is the human being at fault for the consequences? Of our two prior objections, the first falls away (a human can be held to account for their actions), but the second still holds. So what happens if we start to offload things like intelligence processing to machines? What happens when human beings are making lethal decisions that are fundamentally reliant upon artificial intelligence?

I think these questions are part of an under-explored area for LAWS. This is in part because the kind of expert systems that would be involved would be similar to IBM’s Watson or Google’s DeepMind, and while a simple Google search will tell us what these machines have done, we don’t really know what they look like. As I see it, the LAWS debate focuses upon “downstream” systems, but a lot of the viable AI developments are essentially software platforms. This brings us to the wider world of situating autonomous human beings within systems and organisational processes. If you want to understand the ethics of human/AI interactions, then I’d suggest looking at the impact of AI on nursing protocols, algorithms (the non-digital kind), and the types of bureaucratic management tools that exist in healthcare systems. Look at clinical decision support systems, and the way in which professionals charged with life and death interact with them. Rudimentary If-Then AI has been kicking around in healthcare for decades and, statistically speaking, is likely to have resulted in the death of someone by now. It’s the transition from explicit knowledge to systems based on machine learning (like the aforementioned Google DeepMind and IBM Watson platforms) that raises interesting questions. After all, if the decision-making of the machine is illegible to humans, and therefore effectively a black box, how can clinicians rely upon such a system to make a diagnosis, if ever?
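
To make that contrast concrete, here is a minimal sketch of the difference between the explicit If-Then knowledge that has lived in clinical decision support for decades and a learned model whose reasoning is opaque to the clinician acting on it. The drug names, the threshold, and the function names are invented for illustration only.

```python
from typing import Callable

# Hand-written rule: every step is legible and attributable to the humans who wrote it.
def flag_drug_interaction(prescriptions: set[str]) -> bool:
    # Hypothetical rule: warfarin plus aspirin raises bleeding risk, so flag for review.
    return "warfarin" in prescriptions and "aspirin" in prescriptions

# Learned model: the clinician sees a risk score, not the reasoning that produced it.
def flag_learned(predict_risk: Callable[[list[float]], float],
                 patient_features: list[float]) -> bool:
    # predict_risk stands in for any trained model (e.g. a neural network) whose
    # internals are effectively a black box to the person acting on its output.
    return predict_risk(patient_features) > 0.5
```

The first function can be audited line by line; the second can only be audited statistically, and that is exactly the shift that makes reliance on it a live question.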

The importance of this detour is that if expert systems utilising machine learning do get used in the processing or production of intelligence, then the subsequent decisions based upon said intelligence are also illegible to human beings. If all the processes, procedures, and standardised rules of military organisations are built around human beings as culpable agents, what happens when you introduce machine-processed information into that system? If a team of soldiers raids a building on an intelligence-led operation, could they be sure that another human was responsible for the production of that intelligence? Circling back again to the use of trolley-esque scenarios and logic in just war theory, the problem isn’t the scenario; the problem is how these kinds of scenarios get produced within a war. In other words, the way militaries cope with uncertainty is to reduce the moral agency of individuals (sometimes significantly, sometimes by not much) and give them set procedures/structures to engage with. These are all built upon a foundation of information shared by and between human agents. In my mind, the problem is that AI shows more promise in aiding and augmenting these kinds of higher-level staff processes, and in doing so might undermine the integrity of the system itself.
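
One way to see the problem is in terms of provenance. Here is a minimal sketch; the fields, and the idea of tagging each report with its producer, are my own illustration rather than a description of any real intelligence workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntelligenceReport:
    content: str
    produced_by: str   # e.g. "analyst:<name>" or "model:<system>"

def accountable_human(report: IntelligenceReport) -> Optional[str]:
    """Return the culpable human producer of a report, if there is one."""
    if report.produced_by.startswith("analyst:"):
        return report.produced_by.removeprefix("analyst:")
    return None  # produced by a model: the chain of human culpability has a gap

# A raid planned on reports where accountable_human(...) is None rests on information
# whose production no individual human being can straightforwardly answer for.
```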

## LAWS, Legibility, and Future Warfare

That’s all well and good, but where’s the policy problem? Here goes: I think our current moral frameworks can, and will, accommodate LAWS. Regardless of how you think about western governments, their professional militaries don’t appear to be in a rush to design themselves out of a job anytime soon. I think, given the way the field of military robotics is going, that we’re far more likely to see specific systems that can beat humans in a given domain, but there are plenty of areas of warfare in which humans will always dominate AI. The “centaur” concept of humans paired with autonomous systems is likely to work at all levels of warfare.

In our ethical (and legal) understanding of the rules of war, there are permissible and impermissible targets. This is a binary distinction that admits no gap between the two. Where values and human judgement come into play is in identifying persons or objects as permissible or impermissible targets, and in making inherently ambiguous calculations like “Is it okay to kill that general with an airstrike if I think there’s a 50% chance of killing two civilians?” I doubt that humans will allow machines into that kind of judgemental domain anytime soon, unless pushed to by enemy action. Decisions to target and kill humans are, for machines, inherently difficult. After all, how do you identify a person as a legitimate target of attack?
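
To be clear about where the machine-computable part of that question stops, here is the arithmetic of the example as a minimal sketch; the function and its inputs are purely illustrative, not any real targeting rule.

```python
def expected_civilian_harm(p_harm: float, civilians_at_risk: int) -> float:
    """The part a machine can compute: expected civilian casualties for a strike."""
    return p_harm * civilians_at_risk

# A 50% chance of killing two civilians gives an expected harm of 1.0 civilian deaths.
print(expected_civilian_harm(0.5, 2))  # 1.0

# What no amount of arithmetic settles is whether that expected harm is excessive
# relative to the anticipated military advantage. That weighing is the value-laden,
# judgemental part of proportionality, and it is the part I doubt we will hand over.
```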

The debate about targeting humans distracts us from the proverbial pink elephants in the room, which are aircraft carriers and fast jets. These are physical objects that can be sensed by machines and, given the domains that they operate in, are relatively trivial to differentiate from civilian objects in the same domain. Identifying a person as a “direct participant” in hostilities is difficult. Identifying a radar-suppressed aircraft travelling near Mach 1.0 as a military target? Not so much. For the F-35, see also Nimitz-class aircraft carriers, and so on. This, again, is why The Terminator exerts a terrible influence over present debate and thinking. After all, if your primary argument against the deployment of LAWS is the potential for civilian casualties, then there are easy ways to design mission parameters that enable LAWS to operate independently of human oversight while eliminating the possibility of civilian casualties.13 The long-term effect, I think, will be a hollowing out of military capability. At the top end, states that are able to field effective countermeasures will still be able to field the kind of kit that enables them to project power. At the bottom end, insurgent groups and terrorists might not see much difference. After all, if it is difficult for an autonomous system to differentiate between a flatbed truck and a flatbed truck with a heavy machine gun built onto it, then this kind of military platform is less vulnerable to automated attack than, say, a tank. In the middle is where things get interesting. After all, mid-level military powers might have the resources to buy shiny bits of kit from industry suppliers, but what if there is a cheap autonomous system that can seek these out and destroy them?
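
To illustrate why a fast jet or a carrier is so much more legible to a machine than a person, here is a toy target filter built purely on physical signatures. The fields, the thresholds, and the two-rule structure are invented for illustration and nothing more.

```python
from dataclasses import dataclass

@dataclass
class Track:
    domain: str                     # "air", "sea", or "land"
    speed_m_s: float                # measured speed in metres per second
    emits_fire_control_radar: bool  # military-band emissions detected?

def machine_legible_military_object(track: Track) -> bool:
    """Toy filter: some platforms betray themselves through physical signatures alone."""
    if track.emits_fire_control_radar:
        return True                 # fire-control radar has no civilian counterpart
    if track.domain == "air" and track.speed_m_s > 300:
        return True                 # toy threshold: civilian airliners cruise well below this
    return False

# A person's status as a "direct participant in hostilities" has no equivalent physical
# signature, which is why the same trick does not transfer to the targeting of human beings.
```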

I’m well aware that I’m in think-piece territory by this point, but I think the ultimate effect might be that mid-level powers turn to “illegible” pieces of kit: inefficient, up-armoured technical fighting vehicles, for example. Another option would be to cling to the civilian population. After all, if the law-abiding, LAWS-using power can only use these pieces of kit free of direct oversight in unpopulated areas/domains, then this creates a clear incentive to move any/all expensive pieces of kit into urban terrain. Here we might want to reconsider how we think about human shields and the principle of distinction. Would moving tanks into a city to benefit from the “civilian clutter” be equivalent to using the civilian population as human shields? We often focus upon the direct threat of weapon systems, but it is the responses to these threats that often have far wider-reaching effects.

The second challenge, as I see it, is the idea of social legibility.14 As mentioned above, expert systems are the future, and it’s the ones that we don’t see that I think are the biggest challenge for the way we think about the ethics of war. I think AI/expert systems have the capability to take war to the personal level in a way that we haven’t seen before. In my upcoming book on targeted killings, I call the way that America is developing its targeting systems and actions “individuated warfare.”15 It’s a theme that I have been working on during/since my PhD. There are also, I note, plenty of people now working on the issue of warfare at the individual level.16 My personal interest is in the kinds/types of information and data used to identify people as belonging to non-conventional fighting forces (and, well, fighting forces in general). Here, I think there is a specific role for expert systems in processing the vast quantities of information produced and transmitted by present-day societies, “filtering the swamp” so to speak. I’m not predicting that they’ll be good or effective (since they’re built by humans, after all), but they will enable states to do more with less. I think there are challenges associated with this (when you have a very good hammer, I’m sure even screws start to look like nails), but the fundamental issue, I think, is that it again highlights the absences in military ethics. After all, if the ethics of war is constructed around “killing well,” what place is there for privacy rights? More to the point, and echoing the discussion above, how should we integrate the information transfers inherent in contemporary warfighting into an ethical framework for understanding the rights and wrongs of war?

The consequence of the use of expert systems to target irregular actors is, again, likely to be felt at the mid-point. If I were to make a prediction, I’d say it’s likely to enable action against medium-sized groups moving from terrorism to insurgency. That’s the point at which small-scale low visibility and small-group discipline break down, but before a group is robust enough to resist sustained targeting by a state. For democracies, we’ll probably be thinking of this in terms of counter-insurgency or counter-terrorism, but other states will be thinking about it in terms of quelling dissent and rebellion.

Anyway, that’s enough for one day.


  1. AKA: Most of us. ↩︎

  2. Key texts here are Rodin’s War & Self-Defense and McMahan’s Killing In War. Of course there are plenty of other authors working in this field, but I don’t have time to address them all at this point. ↩︎

  3. Yes, this is a very, very brief account of Walzer’s theory of supreme emergency, but at the same time, his arguments regarding the boundary between emergencies in war that are supreme and the general emergency of war aren’t nearly as well delineated as the rest of his account of the ethics of war. ↩︎

  4. Personally I’m pretty much in agreement with Alasdair MacIntyre’s Whose Justice? Which Rationality?, albeit from an existentialist perspective. I usually refer to the “just war tradition” rather than “just war theory” primarily due to MacIntyre’s connection between practical reason and justice. In a similar vein, Chris Brown’s division between just war thinking and just war theory keys into this idea of a tradition of thought, rather than an all-explanatory, or objective, ethical theory. Personally I’m interested in the way in which different societies/cultures/schools of thought approach common problems, and what that “says” (or could be held to say) about them. ↩︎

  5. A very readable rundown of the field for lay readers is Would You Kill the Fat Man? by David Edmonds. Foot’s point was picked up by Judith Jarvis Thomson in 1976, and from there, the myriad variations of uncontrollable trolleys proliferated. ↩︎

  6. Some argue that pushing fat men off bridges is unfair or harmful to people large enough to stop an out of control trolley, hence replacing them with persons wearing heavy backpacks that cannot be removed. ↩︎

  7. That’s “YOLO” in the sense that it can appear to be needlessly provocative in the face of the first three ethical debates. After all, if one person is worried about the dehumanising effect of robots being permitted to kill humans, having another proclaim that robots should be used to kill humans if they can do it better is hardly a good basis for discussion. It is, however, a serious philosophical question, and a practical one if you happen to be a bystander to armed conflict. ↩︎

  8. For a good explanation of these terms, see Scharre (2016). ↩︎

  9. See, for example, the adoption of liability by the vendor for autonomous vehicles. ↩︎

  10. If you’ve been following the rise of autonomous machines, it’s clear that experts with cutting edge technical knowledge of autonomous systems disagree about the definition of autonomy as much as columnists and activists. Unfortunately for activists, until you have a meaningful description of autonomous weapon systems, it’s kind of difficult to design legal architecture to ban them. ↩︎

  11.  ↩︎

  12. As Chris Jenks put it at a talk at SOAS last year, there’s the “30/30” problem of roughly 30 states using similar systems for roughly 30 years. He’s got a great working paper here. ↩︎

  13. Of course, sinking a nuclear-powered carrier is going to cause collateral damage to sea life, but then again if you find yourself in the position of needing to target a Nimitz carrier, I doubt that this will ever figure much in the proportionality calculation. More to the point, if you’re in the position of needing to target a Nimitz carrier it’s probable that you won’t be alive to assess the results. ↩︎

  14. I’ll freely admit that I’m riffing off the ideas of James C. Scott here. ↩︎

  15. I use “individuation” in a different way to Issacharoff and Pildes, but I think I’m on a similar wavelength to them. ↩︎

  16. One project that looks really impressive is the Oxford/EUI research project on “The Individualisation of War: Reconfiguring the Ethics, Law and Politics of Armed Conflict” led by Jennifer Welsh. ↩︎