Jack McDonald


Counter-Singularity Warfare

Nick Bostrom’s latest book, Superintelligence: Paths, Dangers, Strategies, is a tour-de-force analysis of the consequences of research into artificial intelligence. One element of Bostrom’s book that I find enviable is that he manages to pack so many ideas into a single volume. Different conjectures and ways of thinking about AI pretty much fly off the page. Although much of what Bostrom writes about could be found elsewhere, I can’t think of a book that addresses such a wide range of issues associated with AI. It leaves me wishing that it had been written 3-ish years ago, when I started reading deeper into the literature on AI. That’s not to say that there aren’t problems with the book; I think I disagree with the chapter on ethics in particular (and that’s probably the only one I’m qualified to disagree with, anyway).

A second element of Bostrom’s book, one that some might consider objectionable, is that it is a deep, deep dive into the future. Bostrom is primarily concerned with the threat that AI poses to, well, everything humans do. Superintelligence attempts to work through the pathways to, and the consequences of, the development of a superintelligent agent (or system) by human beings. To give some sense of the scale of the ramifications that Bostrom takes into account: by the end you’ll understand what the Hubble volume is, and why it matters when considering the consequences of AI research. Nightmare scenarios of nanotechnology research resulting in “grey goo” carpeting the Earth pale somewhat in comparison to the prospect of a self-replicating superintelligence terraforming the entirety of the observable universe. Nonetheless, Bostrom has a point, and a policy prescription: namely that AI research should be open, conducted for the common good of humanity, and, most importantly, examined very carefully so as not to accidentally create an agent that would quickly become more powerful than humans could comprehend.

The central element of Bostrom’s argument is that we can’t predict what a strong superintelligence (“a level of intelligence vastly greater than contemporary humanity’s combined intellectual wherewithal”) will be like, but that by definition it would be very difficult to control, would not necessarily share human values, and would have final values that we are highly unlikely to be able to predict (and that could be hostile to humans, or blithely catastrophic). Bostrom characterises the shift from a human-level AI to superintelligence as a “takeoff” period. This could be slow (decades or centuries), moderate (months or years), or fast (minutes, hours, or days). I think it’s fair to say that while Bostrom considers there to be reason to hope that humanity (as a collective) can come together and ameliorate or avoid the potential existential challenge of superintelligence, most of his book demonstrates that reversing a superintelligent takeover would be either impossible or close enough to impossible that humanity shouldn’t rely on doing so.

One of the voids in Bostrom’s book is international politics. The reason I find this interesting is not because it detracts from the book itself, but because it’s my kind of bag. In terms of Bostrom’s work, what interests me is the responses of states to the prospect of the emergence of a superintelligence. In short: at what point would such an emergence become irreversible? More to the point, what would states do, faced with the prospect of superintelligence? In a fast scenario, there might be nothing that could stop a superintelligence, but in a moderate or slow takeoff, it’s fair enough to reason that states worried about the prospect of superintelligence might take action to prevent takeoff. If the UK happened to make some breakthrough in AI, and refused to stop when asked by the President of the United States, could that give rise to war? Who knows (I’m not in the business of predictions), but it’s an interesting question to consider, if for nothing else than geek interest. I guess I could class this as an interest in “counter-singularity warfare”.

Perhaps as a reaction to the fact that every second article on autonomous weapons contains a lazy Terminator reference, I’ve recently embarked on trying to watch every film I can lay my hands on that references AI in some form or another (good and bad; there’s a whole lot of bad out there). What’s interesting about the constant references to Terminator is that, in Bostrom’s schema, humanity got off pretty well, considering. Sure, there was Judgement Day, the nuclear annihilation of most of humanity, and so on, but at the end of the day humans won, hence the need for Skynet to send lethal robotic assassins back in time to try to change the future. That kinda beats a superintelligence refashioning the entire volume of reachable space to suit its goals.

Consider the alternative in Colossus: The Forbin Project, where a pair of supercomputers attain sentience and then decide to take control of the planet for humanity’s own good. After much back and forth and valiant attempts to prevent the re-organisation of the planet in service of a computer overlord, the machines win. Sure, nuclear war doesn’t wipe out billions of people, but in the long run, which is the better outcome? The Faraday cage/artillery combo in Transcendence might be on an all-too-human scale when compared to the anti-superintelligence actions in A Fire Upon the Deep (Transcendence is also a terrible film), but at the same time it’s interesting to consider the ramifications of AI development from the point of view of trying to prevent or reverse the emergence of a superintelligence. After all, The Matrix is premised on the (failed) attempt to stop AI by intentionally creating a nuclear winter. Given the possible outcome of perpetual enslavement by superintelligence, wouldn’t that be an entirely rational choice?

Edge cases matter, as do breakout scenarios. If you were President of the United States, would you take unilateral action to physically destroy global communications networks in order to “box in” a superintelligence that would otherwise pose an undefeatable threat to humanity in general? If, in generations to come, the prospect of a superintelligent AI becomes a reality rather than science fiction, there might be an awful lot of itchy trigger fingers in capitals across the world. I’m not too sure that the measures of openness and co-operation that Bostrom advocates will save us, either. We are, after all, only human.