Jack McDonald


rAndom International’s Rain Room

Why discuss automated targeting and art installations? In part, the genesis of this article (and some others that I have planned) is that I’m something of a war-junkie who can’t switch off thinking about war when stepping into art galleries. I do my best to engage, but sometimes an artist’s work makes me think about my ‘day job’ a little differently.

At the moment, I’m doing a fair bit of research about autonomous weapons and targeting - what’s commonly known as drones, and what is commonly depicted as Skynet. Like many, I think automated machines are likely to be further integrated into the conduct of warfare. But I’m slightly unsatisfied with the way in which most people appear to think that it will happen, the, uhhh, ‘Terminator’ model of robot war. Think, for a second, about the development of UAVs and drones. They no longer attempt to replicate human-piloted machine capabilities, and instead provide capabilities that could never be attempted with humans in the cockpit. Furthermore, personal robots are an area of growth - drones that can be launched and operated by infantry, rather than piloted from afar. Giving them lethal capability is a Rubicon, of sorts, and giving them some form of autonomy is another. Ensuring that autonomous robots with guns don’t shoot the wrong person (or anyone) is an important issue, perhaps the most technically challenging one, but the opposition to it appears rooted in a very simplistic idea of how people are routinely targeted in war. Soldiers don’t recognise fellow combatants as individuals with their own identity; they identify whether killing them adheres to standard rules of engagement and theatre-specific ROE. To do that, they make quick and dirty calculations (“Does that guy have a gun? Is he pointing it at me?”) and kill people on that basis. Computers can’t do that, but they can make other calculations (“Is that drone on our side? Is blowing it out of the sky going to hurt anyone?”), and I think this second, not directly lethal, way of thinking is likely to be an interesting area in future. Call it parallel robot warfare - humans kill each other, robots destroy the robots helping the humans on each side. One side might use robots to kill humans, but most Western militaries have serious reservations about that point, so might restrict themselves to building badass robot-killing robots.
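
To make that second way of thinking a little more concrete, here’s a rough sketch of what such an engagement check might look like in code. The track data, the IFF-style check and the ‘does this endanger anyone’ estimate are all placeholders of my own invention, not a description of any real system:

```python
# Hypothetical sketch of a 'parallel robot warfare' engagement check: a robot
# that only ever considers firing at other machines, and only when doing so
# endangers nobody. All names, fields and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Track:
    is_machine: bool        # sensor/classifier verdict: machine, not human
    is_friendly: bool       # IFF-style "is that drone on our side?" check
    humans_endangered: int  # estimate of humans put at risk by engaging

def may_engage(target: Track) -> bool:
    """Fire only at hostile machines whose destruction puts no human at risk."""
    if not target.is_machine:
        return False        # never target a human
    if target.is_friendly:
        return False        # never target a friendly robot
    return target.humans_endangered == 0
```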

Imagine, if you will, a military force that doesn’t want to use robots to kill humans, not even those sitting in tanks, fighting a force that has the same sort of technology but doesn’t have the same misgivings about HAL-on-human uses of force. Team Humanity is going to be at a distinct disadvantage versus Team HAL. HAL can put a couple of hundred tactical robots armed with guns into the air; they’re going to be quadcopters, and they’re going to be better than humans at killing humans. They’re going to be fast, and they’re going to be effective. For those who believe in the inevitability of robot war, Team Humanity has the choice to abandon their principles, or die a principled death. Quadcopters packing guns, limited autonomy and human identification algorithms will do to the 21st-century squad what the machine gun did to the massed infantry charge. Team Humanity could adapt, maybe, dispersing among the population (which, umm, violates the laws of war that they’re seeking to protect), but if Team HAL doesn’t mind killing civilians, that might not work. The nightmare scenario is that this is what happens, and therefore it is inevitable that Team Humanity will collectively say ‘Sod it’ and swap strict adherence to their principles for their own swarm of gun-toting quadcopters. The principle of humans being ‘in the loop’ for killing other humans dies a death.

Instead of this artificial choice, let’s imagine that Team Humanity sits down at a drawing board and draws up a list of what they need autonomous robots to do, and what they definitely don’t want them to do. The short version might be: “Kill those goddamn robots. Don’t kill humans, don’t even endanger them.” From this perspective, Team Humanity would be paying top dollar for robots equal or greater in capability to HAL’s, but they’d also be searching for ways to prevent them from ever killing a human being. We haven’t yet seen a war where both sides use comparable robotic/automated technologies, but we certainly play them out in games. Anyone with a passing familiarity with Call of Duty multiplayer will have witnessed someone blowing an enemy UAV out of the sky with an RPG (I know, I know, but it’s just a game). What happens when two squads with equal auxiliary UAV/robot capabilities fight? The one with the better anti-robot capability will likely win. Therefore, having automated robot-killing capability will probably be important. Western forces that abstain from deploying ‘killer robots’ are going to need a veritable army of ‘counter killer robot’ robots in order to fight regular or irregular armies that are willing to use them. One slightly uncomfortable point for the Campaign to Stop Killer Robots to consider is that nothing is going to prevent irregular forces that don’t care about raping and mutilating civilian populations from acquiring ‘good enough’ versions of these technologies in future (see also: the IED). Furthermore, counter-robot capability is near-indistinguishable from ‘killer robot’ capability (itself utterly indistinguishable from any number of civilian/commercial research projects at MIT/Google etc.).

Future-prediction is a fool’s game at the best of times, but what the above points to is a need to think about how potentially lethal robots could operate in an uncertain environment containing humans. After all, the technical capability to identify something as a human differs substantially from the capability required to differentiate between humans. What objections could be raised to fully automated machines that purposefully avoid endangering humans, while engaging machines that might harm humans? For the life of me, I can’t identify any legal or moral constraint on robots destroying robots in parallel to humans killing humans. I’m fully aware that the difference between a robot that can identify a human so as to not shoot them and one that identifies humans in order to shoot them is about zero, but then again, the same could be said of any piece of lethal military hardware. So the real focus of my thought isn’t a catch-all position on the use of autonomous robots, but how people who like the laws of war might seek to preserve concepts like ‘distinction’ in the face of opponents that don’t. This, oddly enough, leads me to the rain room.

If you happen to live in New York City, rAndom International’s Rain Room exhibit is closing this week at MoMA. The installation itself is a hell of a lot of fun, which is why the queues for it routinely stretched to 3-4 hours when it was on display at the Barbican earlier this year. For those unfamiliar with the installation, the premise is that visitors walk through a hallway of artificial rain while sensors track them and keep them dry by turning off the rain around them as they walk. Of course, such a bare-bones description doesn’t really capture the marvel of the experience. Coming from a country of interminable rain, I found walking through and staying relatively dry quite fun. Moreover, the rain falls straight down, which captures the single light source at the end of the exhibition in a haunting fashion. It’s a solid piece of experiential art - the individual (or group, I didn’t see many solo visitors) makes their own experience by interacting with the installation. You do something that is normally impossible, thanks to the artist and gallery, and it makes you see the world a little differently for ten minutes or so.

Many people described the experience as ‘feeling like God’. Since I’m something of a pessimist, I’d describe it as being made acutely aware of our technological gods. The exhibit isn’t without its technical flaws - wear black and the system can’t track you that well, making for a very damp cultural experience. More to the point, you’re not the one in control - the system is. Walk too fast for the system to track, and you get wet, and so on. A person entering the rainfall is pretty much at the mercy of entirely unseen sensor systems. I suspect that the artists have hacked something together using Microsoft Kinects, but if they’ve managed to make a 3D sensing array without them, fair play.

Under the hood, the rain room tracks people as they walk through, with the express intent of keeping them dry. I imagine that the artists might be able to flip a switch to make possibly the most depressing installation ever - a rain storm that follows only you through a room - but the intent to keep the audience dry is there. One aspect of this is that multiple people walking through the room create their own ‘safe zones’, which merge and separate as they move nearer to and further from one another. This is quite close to the inverse of a ‘kill box’, a military method of defining spaces as entirely free from friendly forces, meaning that anyone can fire into them at targets. Kill boxes allow different branches of a particular military, or their allies, to effectively combine their lethal output in a flexible manner, without risking excessive friendly fire and the resulting ‘blue-on-blue’ casualties. By dividing up physical landscapes into boxes (allowing fire at ground targets inside, and at airborne targets flying under a set altitude above them), commanders can quickly allow friendly forces to pour fire onto targets ahead of an advance, and just as easily turn the boxes ‘off’ when friendly forces enter an area. This second point is important - on an operational level, militaries tend to restrict fire so that shells and missiles can’t transgress spaces near to friendly forces without their express consent, so a kill box acts as a method of dividing up physical geography into permissible or impermissible space - a way for multiple humans to fight near each other with extremely lethal weapon systems without accidentally killing each other.
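
To make the ‘safe zone’ idea concrete, here’s a toy sketch of how a Rain Room-style grid of valves might decide where to stop the rain - a guess on my part, not how the actual installation works:

```python
# A toy sketch of a Rain Room-style grid of valves: every tracked person gets
# a dry radius, and any overhead valve inside somebody's radius shuts off.
# The grid, the radius and the tracker are all assumptions of mine.

DRY_RADIUS = 1.5  # metres of dry space kept around each tracked person

def valve_open(valve_xy, people_xy, dry_radius=DRY_RADIUS):
    """A valve keeps raining unless it sits inside someone's dry zone."""
    vx, vy = valve_xy
    for px, py in people_xy:
        if (vx - px) ** 2 + (vy - py) ** 2 <= dry_radius ** 2:
            return False   # inside a person's safe zone: shut the rain off
    return True

# Two people standing close together produce one merged dry zone, simply
# because their radii overlap - no extra logic is needed for merging.
people = [(2.0, 2.0), (3.0, 2.5)]
raining = {(x, y): valve_open((x, y), people) for x in range(6) for y in range(6)}
```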

The reason kill boxes interest me, as an academic, is that they are a method of co-ordinating and controlling the use of force that exists outside the laws of war, but is designed to work in compliance with them. The laws of war enforce principles such as distinction and necessity, and kill boxes allow commanders to uphold those principles without signing off on every decision. I think these methods are important when talking about automatic targeting because, too often in this debate, people speak as if robots are to be equipped to make every decision possible, when in real life most soldiers don’t need to make those decisions - they need to have them okayed by a commanding officer. For example, the creation of a kill box is an effective signal that the commanding officer has determined that it is necessary to destroy or incapacitate all enemy forces within an area. Furthermore, kill boxes negate the need for direction from a person on the ground (a terminal attack controller) - friendly forces can engage at will, subject to the precise nature of the kill box.
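
Thought of this way, a kill box is less a targeting decision than a permission structure, something you could caricature in a few lines of code - the fields and the altitude rule below are illustrative assumptions on my part, not doctrine:

```python
# A toy model of a kill box as a permission structure: the commander opens or
# closes it, and a weapon system simply asks whether fire at a given point is
# currently permitted. Fields and the altitude rule are invented for illustration.

from dataclasses import dataclass

@dataclass
class KillBox:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    ceiling: float          # air targets may only be engaged below this altitude
    active: bool = False    # the commander's on/off switch

    def fire_permitted(self, x: float, y: float, alt: float = 0.0) -> bool:
        """Fire is permitted only while the box is open and the point lies inside it."""
        return (self.active
                and self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and alt <= self.ceiling)

box = KillBox(0, 1000, 0, 1000, ceiling=500)
box.active = True    # opened ahead of an advance: friendly forces may pour fire in
box.active = False   # closed again as friendly forces enter the area
```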

But what about distinction? Here we get into the tricky territory of target recognition. The most oft-levelled attack on robot warfare concerns distinguishing between combatants and non-combatants - how can a computer tell the difference between a soldier and a civilian, or between a partisan and a civilian? If it can’t, how could it ever fulfil the principle of distinction? It is a good point, and one that I’m not sure computers will be able to manage in the next couple of decades. But what about tanks? That seems to me a slightly easier distinction to make. Could we teach a computer to distinguish between tanks and other vehicles? It seems far more likely than distinguishing between humans. At the far end of automated recognition, naval point defence systems can already destroy incoming anti-ship missiles without a human in the loop, so what about machines that automatically scour kill boxes looking for tanks and destroying them? If we’re uncomfortable with machines killing humans, what about machines that automatically scour kill boxes looking for other machines and destroying them? The robot equivalent of counter-battery fire? What if you had a means of ensuring that those machines never knowingly endangered a human? Like, say, a personal safety zone that they could not transgress or fire through?
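
As a sketch, that narrower recognition problem might look something like this - the labels, the confidence threshold and the classifier are all invented for illustration:

```python
# A hedged sketch of the narrower recognition problem: instead of asking a
# classifier to separate combatants from civilians, only ever engage objects
# it labels as machines (tanks, other robots) with high confidence, and only
# inside an open kill box. Labels, threshold and classifier are assumptions.

MACHINE_LABELS = {"tank", "armoured_vehicle", "uav", "ugv"}
CONFIDENCE_FLOOR = 0.95   # refuse to fire at anything the classifier is unsure about

def engageable(label: str, confidence: float, inside_open_killbox: bool) -> bool:
    """Engage only high-confidence machine classifications inside an open kill box."""
    return (inside_open_killbox
            and label in MACHINE_LABELS
            and confidence >= CONFIDENCE_FLOOR)
```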

In the Humanity/HAL war, the idea of permissive/non-permissive space operates in two comparable but asymmetric ways. HAL defines an area as a kill box and throws robots into the breach. HAL’s robots operate on a simplistic level - identifying humans and killing them (it doesn’t particularly matter here whether they can differentiate between civilians and combatants). Team Humanity uses non-permissive space (perhaps in a kill box of its own devising), like the rain room, in order to ensure that robots don’t hurt humans. Where one of Humanity’s robots identifies a human, it instantly calculates a volume of non-permissive space around them, which it can’t fire into, nor fire through. It doesn’t particularly matter who the human is - whether they’re a civilian, a combatant or have their hands on a world-ending nuclear device - the robots let the humans do the heavy lifting of killing other humans. Humanity’s robots operate in an environment defined by the presence of hundreds, if not thousands, of mini safe spaces surrounding the humans that they identify. If one of HAL’s bots gets too close to a human, or if shooting at the bot means shooting through ‘protected’ space, then HAL’s bot is safe; otherwise, the bot on Humanity’s side will let rip. All of the above could occur quite independently of human control or specific direction. Team Humanity’s bots fight at a disadvantage (and HAL’s bots could be programmed to specifically contravene the rules of war and take advantage of the safe spaces - autonomously using humans as shields), but it would mean that Team Humanity wouldn’t have to rip up their rule book and resort to autonomous targeting and robotic killing.
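
The geometric core of that rule - refuse any shot whose line of fire enters a human’s protected bubble - is simple enough to caricature; the radius and the maths below are my own assumptions, purely for illustration:

```python
# A minimal geometric sketch of the 'non-permissive space' rule: every
# identified human gets a protective sphere, and a shot is refused if its
# line of fire passes into or through any such sphere.

import math

PROTECTED_RADIUS = 25.0  # metres of space kept around each identified human

def _dist_point_to_segment(p, a, b):
    """Shortest distance from point p to the segment a-b (all 3D tuples)."""
    ax, ay, az = a; bx, by, bz = b; px, py, pz = p
    ab = (bx - ax, by - ay, bz - az)
    ap = (px - ax, py - ay, pz - az)
    ab_len2 = sum(c * c for c in ab) or 1e-9
    t = max(0.0, min(1.0, sum(i * j for i, j in zip(ap, ab)) / ab_len2))
    closest = (ax + t * ab[0], ay + t * ab[1], az + t * ab[2])
    return math.dist(p, closest)

def shot_permitted(shooter, target, humans, radius=PROTECTED_RADIUS):
    """Refuse any shot whose trajectory enters a human's protected sphere."""
    return all(_dist_point_to_segment(h, shooter, target) > radius for h in humans)
```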

Returning to the rain room, the above is nice in theory, but my mate still got drenched for wearing a black t-shirt. Then again, he had a choice to step into the room, whereas we won’t be able to stop every single belligerent force from putting guns on robots.