What if Military AI is a Washout?

Military applications of artificial intelligence, we are told, are poised to transform military power. They might make the oceans transparent to sensor systems, threatening at-sea nuclear deterrent systems like the UK’s Trident. They might enable autonomous aircraft that could outfight human-crewed planes. They could transform intelligence processing in war and enable all sorts of complex weapons that would make things like tanks and aircraft carriers yesterday’s news. The sky, it appears, is the limit.

In this light, big states are making large investments in military AI. One aspect of the UK’s recent Integrated Review (ahem, “Global Britain in a Competitive Age”) and Command Paper (ahem, “Defence in a competitive age”) is a bet that investment in military applications of artificial intelligence will offset cuts to things like tanks and troop numbers. For the UK, this is a big bet. It has consequences for the way in which the UK will contribute to NATO, and for its key alliances such as the “special relationship” with the United States.

So what happens if military AI doesn’t pay off? In a more British sense, what happens if military AI turns out to be a bit pants? This post is my way of reasoning through my increased scepticism about an AI-driven “military revolution” in the near future. About five years ago I wrote about the possibility of AI systems being used to help detect insurgent networks, but this post is a roll-back from even that (limited) capability.

The argument goes something like this: The socially transformative vision of AI sold by venture capital over the last decade and a bit looks like it is going to wash out as a few niche areas of tremendous improvement, but no self-driving taxi fleets in London. The integration of some AI technologies will enable automation/autonomy in parts of pre-existing military processes (e.g. kill chains), but no robot super-soldiers, no HAL 9000 strategists, and limited institutional change. This would still have a huge impact on warfare by rendering machine-recognisable objects vulnerable to automated destruction by any variety of autonomous systems. This visibility asymmetry will make it harder to project power and sustain military forces in the field, and will reduce the capability gap that state militaries seek to maintain relative to non-state actors. Reactions to this asymmetry will drive warfare towards urban environments. In short: worry about marginal improvements to what has already been fielded, and ways to constrain proliferation of those platforms, because the future is now (to shamelessly quote Non Phixion).

I: Selling Generalised AI is Easy, Making Money is Hard

We have been living through an era of big venture capital bets on AI in one form or another. Part of my scepticism over the last decade of AI hype has been driven by the dissonance between years of pitches and what has actually matured into industry-defining technologies. A good rule of thumb in this regard is employment: it’s hard to reap pure profit (though not impossible, see Google and Facebook switching a large percentage of global advertising/marketing budgets to their systems), but reducing headcounts, salaries, and pensions is a pretty good way to make more money. In this regard, the stalking horse behind one of the leading venture capital bets - self-driving vehicles - has always been the promise that you might not have to pay for your taxi driver anymore (or that you might not have to factor a long distance trucker’s wages into the cost of your physical goods).

So, how’s that going?

Many of the big and public venture capital bets on AI-enabled self-driving cars appear to be unwinding before our eyes. Lyft, Uber, and Tesla have all pitched self-driving cars for the past decade. Tesla, at least, has made some huge inroads into the problem of directing a hunk of metal at 70mph along a road without killing the people sitting inside it. Uber, however, has offloaded its driverless car effort to another startup, and Lyft has sold its self-driving division to Toyota. The problem appears to lie in getting from a technology that is almost there, to a technology that regulators consider safe enough to let the driver put their feet up and browse TikTok from A to B with no intervention.

How does this matter in military terms? Some military AI problems are harder than getting a Tesla to navigate the street grid of Los Angeles without confusing a red light for a green light, or taking a right turn through a cyclist or pedestrian. Any kind of autonomous military logistics system will have to deal with ad hoc depots and bases, off-road scenarios, and roads that would make a Tesla’s brain shut down. But some military AI problems are perhaps simpler than self-driving cars. Advances in computer vision/sensors and information processing make picking out really fast incoming missile-shaped objects easier, particularly if you’re at sea and don’t have to worry about random objects cluttering the view. Also, deciding how a missile defence system should respond to an incoming missile is somewhat more straightforward than the infinite trolley problematisation of self-driving cars (“Should a car drive itself into a wall to avoid killing five people?”, “What if there is a baby in the car and the five people are really old and about to die anyway?” and so on).
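
To make that contrast concrete, here is a minimal sketch in Python. Every name, field, and threshold below is invented for illustration rather than drawn from any real system; the point is only that a naval point-defence decision can plausibly be reduced to a handful of pre-specifiable checks in a way that an urban driving decision cannot:

```python
# Illustrative sketch only: a toy decision rule for a ship-based point-defence
# system. Class names, fields, and thresholds are all invented; no real system
# is this simple, but the decision space is narrow: over open water, anything
# small, fast, inbound, and not identifying as friendly is worth engaging.

from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float     # closing speed in metres per second
    range_m: float       # distance from the ship in metres
    closing: bool        # is the object heading towards us?
    iff_friendly: bool   # did it answer identification-friend-or-foe?

def should_engage(track: Track) -> bool:
    """Toy engagement rule: fast, close, inbound, and not squawking friendly."""
    return (
        track.closing
        and not track.iff_friendly
        and track.speed_mps > 250      # faster than any bird or small boat
        and track.range_m < 15_000     # inside the notional defensive bubble
    )

# A sea-skimming missile profile trips the rule; a slow friendly helicopter does not.
print(should_engage(Track(speed_mps=900, range_m=8_000, closing=True, iff_friendly=False)))  # True
print(should_engage(Track(speed_mps=70, range_m=8_000, closing=True, iff_friendly=True)))    # False
```

Real point-defence systems are of course far more complicated, but the decision space - open water, fast inbound object, engage or not - can be specified in advance, which is exactly what an urban street cannot be.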

At the same time, artificial decision-making has replaced human decision-making in some areas. Consider two domains in which human decision-making has largely been turned over to computer systems, at least in terms of the ratio of human decisions to autonomous ones: High-Frequency Trading (HFT), and online advertising.

High-frequency trading works because computer systems can outcompete human beings in terms of speed. Once front-running market movements becomes an arena for algorithmic competition, then humans need to (politely) get the fuck out of the way because they are going to lose their shirt, and their investors’ shirts. But when it comes to slow decisions, things like whether or not to invest in a business or sector, then the speed aspect is not as decisive. Moreover, the risks inherent in HFT, as demonstrated in the “flash crash” of 2010 where a bunch of computer systems trading with each other managed to tank the market for no apparent reason, remain. Algorithms might outperform humans in a small slice of the market, and they can improve human performance, but they’re not replacing human traders, despite predictions.

In a similar fashion, the advert markets that determine what adverts you get served on 99.999% of the pages that you view on the internet these days do not require human decisionmaking. Someone buying ad space will make a decision, then computer magic happens, and their ads hit the right eyeballs. In theory, at least. That’s because an awful lot of online advertising appears to be a non-provable shell game that separates both advertising vendors and buyers from their money. Everyone selling services in this multi-level snake-oil scam claims their product will help an advertiser to get in front of the right set of eyeballs, but there appears to be little to no evidence that this is actually the case.

Given the kind of potential profit that could be reaped from safe autonomous vehicles, financial AI, and marketing, shouldn’t we expect these to be some of the most cutting-edge areas of AI itself? Yet where AI truly appears to succeed is largely in controllable environments (factories), or in environments that can be rendered controllable (e.g. mining operations).

So consider this: if three distinct domains in which artificial intelligence has to interact with the messiness of human society appear to have significant flaws that ultimately limit its utility (and profit potential), why should we expect military AI to be any different?

II: Some Hunches About Military AI

Here are some hunches about how I think military AI plays out. I use hunches here because I think hunches are a better currency than predictions. Predictions are ten-a-penny and always subject to retroactive revision (“Well I said that AI was going to transform warfare but if you look at it this way then cleaning the operations room with a Roomba is transformative”). Hunches are like predictions but without the veneer of professional expertise. Everyone can have hunches. Hunches are often more descriptive of underlying thinking than they are of the end product, so to speak. Like predictions, hunches require little to no support, but in terms of plain language they are far more open about this fact.

Hunch 1: Waging war requires social processes, and AI will be able to optimise some of them

We like to think of ourselves as living in a post-industrial era, but tell that to someone breaking their body in an Amazon fulfilment centre.

The industrial revolution was characterised by a shift from craft production to factory settings, where workers were organised around the rhythm of industrial machinery. Fordism and Taylorism brought task-specific time management to these settings, steadily regimenting the lives of workers. Now AI has brought a neo-Taylorist revolution in workplace optimisation that goes beyond traditional warehouse and factory settings to potentially optimise the lives of white collar information workers in the service economy.

What does labour productivity have to do with military AI? I think a fair bit, or at least more than a weapon-centric view of military AI might lead you to believe. Once there are niches within an organisation where technology can either radically increase worker productivity, or supplant the need for paid labour entirely, it is difficult - under competitive pressure - to resist change. You might enjoy manufacturing pins by hand, but you’ll never outperform a factory in the long run. Similarly, you might enjoy visually scanning satellite feeds for potential targets, but if your opponent’s feed is pre-annotated with potential targets, then you are at a distinct disadvantage in a peer conflict.

Integrating AI systems into organisational processes distorts them where it is possible (just as integrating desktop computers, or typewriters, or new production processes changes an organisation). If you have a factory where you can present a computer model of a required output and the factory itself will optimise the tooling and production lines to make it, then you’ll get a jump on competitors if you compete in terms of taking novel items to market. If, on the other hand, you make an error in the model, that error is now likely much more expensive, as there is less time to identify and rectify it before the (expensive) tooling to mass produce the item has been made. Result: fewer people in tooling, more people in model quality assurance. In my view, the same goes for targeting processes. If bits of the “kill chain” get automated with AI, then the risks posed by prior incorrect human (or machine) judgement increase. Result: re-shaping military organisations to account for potential optimisations offered by AI, and to minimise risks of errors.
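
The error point can be illustrated with a toy Monte Carlo sketch. Everything here (the stage names, the per-stage error rate, the trial count) is invented purely for illustration; the structural claim is just that removing human checkpoints from a pipeline lets upstream mistakes travel further before anyone can catch them:

```python
# Toy Monte Carlo sketch: the fewer human checkpoints in a pipeline, the more
# often an upstream mistake travels all the way to execution. Stage names, the
# per-stage error rate, and the trial count are invented for this sketch.

import random

STAGES = ["collect", "annotate", "validate", "approve", "engage"]

def error_reaches_execution(human_checks, error_rate=0.1):
    """Simulate one pass through the pipeline. A human check catches any error
    present at that stage; an automated stage simply passes it along."""
    error_present = False
    for i in range(len(STAGES)):
        if random.random() < error_rate:
            error_present = True        # a mistake enters at this stage
        if i in human_checks and error_present:
            return False                # caught before execution
    return error_present                # True if the error reached "engage"

def failure_rate(human_checks, trials=50_000):
    return sum(error_reaches_execution(human_checks) for _ in range(trials)) / trials

print("human checks at validate and approve:", failure_rate({2, 3}))
print("no human checks after collection:    ", failure_rate(set()))
```

The numbers are arbitrary; the point is that once the downstream stages are automated, an upstream mistake has nowhere left to be caught.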

This, in a nutshell, is my underlying hunch about AI: It will force changes in organisational practices where adopted, and it is likely to be adopted due to competitive pressure (or perceived competitive pressure). Some people are okay with this, some people are definitely not okay with this. I am somewhat agnostic on the issue: if the starting point is a social process designed to kill people, then so long as humans are still at the top of the decision chain, there isn’t that much normative difference between an AI-reliant force and a computer-reliant force, just as there isn’t much normative difference between a mid-WW2 industrial war machine and its predecessors.

Hunch 2: Military AI will automate some bits of warfare, but probably not most of them

In this view artificial intelligence is essentially automation. We take something that would require human cognition and action, instantiate it in a physical system, and then something that used to require a human being no longer requires a human being. “That is not AI”, I hear you say. Well, in response, consider how many automatic things were once autonomous things. Fire-and-forget missiles have gone from being discussed as autonomous systems to simply being an automatic function of a system. Automated Target Recognition systems perform cognitive work equivalent to that of human beings (recognising objects from sense data). It’s just that they can make sense of many different types of data, and do it faster than we can, enabling forms of action beyond human capabilities.

As I see it, object recognition is a key domain in which AI will eventually outperform us - at least for big, recognisable pieces of kit. Therein lies the asymmetry: big pieces of recognisable military kit will be vulnerable to recognition by autonomous systems, whereas distinguishing whether human beings are combatants or civilians is going to be hard, if not impossible, to achieve.
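
Here is a minimal sketch of that asymmetry in code. The detector is a stub standing in for a real model, and the class labels, confidence values, and threshold are all invented for illustration. The structural point is that a label like “main_battle_tank” is the kind of thing a vision model can output, whereas “combatant” never appears in the label space at all:

```python
# Minimal, self-contained sketch of the recognition asymmetry. The detector is a
# stand-in (a stub returning hard-coded example detections); all class labels,
# confidence values, and the threshold are invented for illustration.

from typing import List, Tuple

Detection = Tuple[str, float]   # (class_label, confidence)

ENGAGEABLE_CLASSES = {"main_battle_tank", "self_propelled_artillery", "radar_vehicle"}
CONFIDENCE_FLOOR = 0.9          # arbitrary threshold for the sketch

def detect(frame) -> List[Detection]:
    """Stand-in for a real object detector; ignores its input and returns
    example output of the sort a trained vision model could plausibly give."""
    return [("main_battle_tank", 0.97), ("pickup_truck", 0.88), ("person", 0.95)]

def machine_targetable(detections: List[Detection]) -> List[str]:
    """Keep only high-confidence hits on classes the model was trained to see.
    Note what is absent: there is no 'combatant' class to filter on, because
    combatant status depends on context a per-frame classifier cannot observe."""
    return [label for label, conf in detections
            if label in ENGAGEABLE_CLASSES and conf >= CONFIDENCE_FLOOR]

print(machine_targetable(detect("frame_001.png")))   # ['main_battle_tank']
```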

I see this as a hard limit: most of the concepts we use to make sense of war are too nebulous for machines to efficiently automate. Attempting to use AI for strategic goals will inevitably lead to outcomes akin to a paperclip maximiser - where the modelling of a necessarily indeterminate set of desired goals leads to unwanted optimal solutions. Similarly, things like “combatant” or “civilian” are likely never going to be amenable to machines - at least in the sense that human beings approach them.

From this perspective AI’s transformative effect on warfare will be the consequence of how states and non-state actors respond to a variation in the ability of machines to automatically recognise military objects. Here I mean military objects in the physical sense, as in tanks, planes, ships, etc, rather than which bits of a tower block count as a military object due to its use by an opposing force.

Following from hunch 1, this has a similarly distorting effect on warfare itself. Persons and objects that might well be military objects by any lawyer’s definition will require GOFHT (Good Old Fashioned Human Targeting), whereas an assortment of key military platforms and objects will be liable to automated destruction. Less “Siri, kill that guy” and more “Siri, destroy any enemy MBTs in that grid square, but mind the Church”. In this view, ideas matter a lot - what kinds of automated destruction are tolerable to a given military force (and the inherent risk of civilian casualties/collateral damage that comes with them) will determine where and when it will employ systems that enable the autonomous destruction of recognisable pieces of military kit.

Hunch 3: Military AI will create some unavoidable lethality/vulnerability issues for recognisable military platforms

Following from the above, automated recognition enabled by artificial intelligence will make some systems more vulnerable to attack. It will also help to make some systems more survivable. The kinds of active protection systems that we now see protecting vehicles from incoming rockets are a good example of this.

I think this is a big and unavoidable problem. I don’t think that most of the objections to lethal autonomous weapon systems (LAWS) really work against strikes on big machine-recognisable pieces of kit where LAWS are given guardrails in the form of geofencing (to prevent them operating in urban areas, etc) and other constraints that are also programmable (only attacking objects that aren’t near to humans after scanning areas for the presence of human beings, etc).
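
As a sketch of what such programmable guardrails might look like (with names, coordinates, and the 200m stand-off distance all invented for illustration): a geofence that confines engagements to a designated grid square, plus a human-proximity veto checked before anything is authorised.

```python
# Sketch of programmable guardrails: a geofence restricting engagements to a
# designated grid square, plus a human-proximity veto. Names, coordinates, and
# the 200 m stand-off distance are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    label: str

@dataclass
class EngagementZone:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, t: Target) -> bool:
        return self.x_min <= t.x <= self.x_max and self.y_min <= t.y <= self.y_max

def cleared_to_engage(target: Target, zone: EngagementZone,
                      nearest_person_m: float, standoff_m: float = 200.0) -> bool:
    """Engage only inside the permitted zone, and only if the nearest detected
    person is beyond the stand-off distance."""
    return zone.contains(target) and nearest_person_m > standoff_m

rural_box = EngagementZone(0, 10_000, 0, 10_000)   # grid square away from the town
tank = Target(x=4_200, y=7_800, label="main_battle_tank")

print(cleared_to_engage(tank, rural_box, nearest_person_m=650))   # True
print(cleared_to_engage(tank, rural_box, nearest_person_m=40))    # False: people too close
```

Whether constraints of this sort satisfy anyone’s legal or ethical objections is a different question; the point is only that they are programmable.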

We’re already starting to see what drones can do to big pieces of military hardware without air defence (see the Idlib turkey shoot and Nagorno-Karabakh in 2020). Autonomous systems in the form of more advanced loitering munitions (and like devices) will exacerbate this.

This leads to a weird pattern of vulnerability asymmetry. Military objects that are not easy for machines to distinguish as military objects (e.g. a Toyota truck with a heavy machine gun on the back) will be less vulnerable to these systems than things that are easily distinguishable (tanks, large artillery pieces, etc).

I am not predicting the death of the tank, by the way, only that in order to survive, machine-recognisable pieces of kit will need a protective bubble that defeats LAWS that function akin to loitering munitions, whatever form that takes. I’d imagine that such a bubble would be expensive to generate and maintain, so this hunch is particularly likely to apply to middle powers who might not be able to afford it.

III: Closing the Gap

For good reason, big states and great powers worry about the military capabilities of their peers. Losing a war in Afghanistan might be incredibly painful for the US to accept, but losing a shooting war in the South China Sea is likely to have greater consequences. It is natural, therefore, to prioritise the analysis of capability gaps between peers: Will X technology close the gap between China and the US? Or will Y technology allow the US to offset its competitors once again?

Many military technologies diffuse through the international system, and many are eventually picked up by non-state actors and sub-state competitors. Imitation is difficult, as Andrea Gilli and Mauro Gilli have pointed out, but lesser quality systems can be emulated given time and effort. The smart bombs of the Gulf War - best explained, as Michael Horowitz and Joshua Schwartz have done, as part of the “precision strike complex” - are now increasingly available to both states and armed groups beyond NATO. Maybe Iran’s precision strike capabilities aren’t as good as America’s, and maybe it doesn’t have the capability (in theory) to hit a specific spot on the globe at short notice. It still has the ability to hit American bases, and that, in the end, is what matters.

One way to look at military AI is as something that will widen gaps between states that have it and states that don’t. Certainly this is how most people in the international security space seem to frame it.

The way I see it, the kind of AI that I am referring to in this article is more of a levelling force. States might build all sorts of wonderful gizmos that are miles ahead of the next competitor state, but the fact that non-state armed groups have access to rudimentary forms of AI means that the gap between organised state militaries and their non-state military competitors gets smaller. If, as pointed out above, the only way to keep the platforms that differentiate state forces from non-state competitors (main battle tanks and artillery, for example) alive on the battlefield gets more expensive, then fewer and fewer states are going to be able to field them without significant risk of loss. A washout AI world, in other words, is one in which middle powers and re-users of non-upgraded Soviet gear run significant risks.

One light at the end of the tunnel is that it would be quite difficult for a non-state actor to create its own loitering-munition-like system in the same way that armed groups have developed their own drones or rudimentary precision strike capabilities. Rudimentary AI systems might also struggle to defeat protective measures employed by great powers.

That said, I don’t think we should discount the possibility of armed groups fielding otherwise high-end systems in the same hi-lo technology mixes that we currently see in contemporary conflicts (and that, arguably, were an integral feature of military assistance throughout the Cold War). If rudimentary systems work against most state militaries, or require inordinately expensive kit to defend against, then any armed group that can negotiate external support/supplies will be well placed to blunt the forces of states. In this sense, the future of war isn’t swarms of kill bots, it is ex-Soviet tanks and artillery getting toasted by an extremely smart missile launched from the back of a Toyota.

A second issue is that many military forces might be more open to imprecise destruction, both of persons and objects. If systems rigged together from commercial components are able to recognise people or vehicles (likely, at least the way things are going), then suicide munitions that target packed clusters of human beings or any recognisable vehicle make sense as a means of attacking bases. If it is lawful to shell a military position with unguided artillery, then sending something in to selectively kill the humans present in that location seems like a plausible future. The further option - that these systems will be used to target and kill civilians in intentionally indiscriminate attacks - is also a possibility, but then it is a possibility with pretty much any weapon system that exists.

IV: War in the Cities

Urban warfare is currently a hot topic. The ICRC is talking about it, the Modern War Institute has a project dedicated to it, Anthony King just published a book on it. The scale of destruction wrought in Syria and Iraq serves as an immediate reminder of the carnage involved.

Even in the case of an AI washout, I think one of the long-term effects of increased AI use is to drive warfare to urban locations. This is for the simple reason that any opponent facing down autonomous systems is best served by “clutter” that impedes their use. This is not the only long-term driver of conflict towards and within urban locations. Cities have historically been the object of attack and conquest, and the 21st century is no different. Moreover, humanity is now an urban species - most of us now live in urban or peri-urban spaces - and thus most of the prizes of conflict are likely to be found in cities to a greater degree than previously.

If my reasoning outlined in the sections above holds - that the primary consequence of military AI will be some advances in computer vision, but not much else; that it will enable non-state actors to make good-enough stuff that renders military kit vulnerable - then anyone seeking to use big pieces of military kit will need either expensive protective systems or clutter to keep them alive, or a combination of both.

Let’s be clear: clutter means civilian objects, and civilian human beings. I’ll leave the legal review to someone else, but this isn’t so much active shielding (e.g. setting up shop in a civilian apartment block, using human shields, co-locating with protected objects like hospitals and healthcare facilities) as passive shielding, in the sense that autonomous systems akin to loitering munitions (however advanced) are unlikely to be reliable or predictable enough to use in a fully-autonomous fashion in such an environment without causing significant civilian casualties. It is one thing for a commander to set a system off to scour a forest for armoured vehicles, safe in the knowledge that there are no nearby civilians; it is another to do the same not knowing whether those vehicles will be driving along a backroad with nobody around, or driving past a school at the moment the system selects them for destruction.

In this sense, the net effect of washout AI in military terms would be to drive conflict to cities. How states and non-state actors operate in those environments would be crucial to determining the net effects of such systems. If cities become no-autonomy zones, then the ultimate effect would be no significant difference to the kind of damage that urban warfare inflicts on cities and their populations in the present day. However, if one or more sides to a conflict are okay with the unpredictability of autonomous systems in these environments (not a good thing, in my view, but entirely possible), then the consequences would be greater chaos, and likely more civilian deaths than we currently see. Given that some states and non-state actors are okay with lobbing unguided artillery and rockets at targets in urban locations, it seems likely that they’d be okay with using autonomous systems in those same places.

V: Conclusion

AI doesn’t have to be revolutionary to have significant effects on the conduct of war. Marginal increases in the performance of one kind of system make other systems more vulnerable to destruction, and may therefore significantly increase the cost of certain forms of warfare. While thinking about the long-term revolutionary consequences of all the wonderful possible forms of AI is all well and good, my hunch is that we should pay closer attention to the kind of stuff that seems achievable using already demonstrated technologies.

My technological predictions here are pretty limited by design: small further advances in computer vision, weaker versions of bleeding-edge weapon technologies being developed by middle powers, commercial object recognition technologies that can be bodged into functional weapon systems by non-state actors. This is a world of kinda-good loitering munitions used by non-state actors, rather than unsupervised uncrewed ground vehicles coordinating assaults on the basis of higher level commands by human beings. What does warfare look like when an insurgent can simply lob an anti-personnel loitering munition at the FOB on the hill, rather than pestering it with ineffective mortar fire? From the perspective of states, and those who defend a state-centric international order, it’s not good.