By Chris Hibbert
Ken MacLeod’s The Corporation Wars: Insurgence is the second book of a trilogy. It (along with the first book in the series, Dissidence) is a finalist for the Prometheus Award this year.
Insurgence continues the story of awakened robots struggling for freedom, and uploaded human ex-combatants fighting to retake the planetary system the robots had been mining and exploring.
This installment focuses less on the robots’ claim to be agents worthy of separate respect, and more on the uploaded warriors’ struggle to figure out the nature of the reality they inhabit while mostly following orders to fight the battles their supervisors are pursuing. Their ultimate worry is that they don’t have enough information to tell which side they’re fighting on or who they are battling to subdue. When you live in a simulation (particularly when you can tell that someone else has access to the control panel), it’s a little difficult to be sure that your choices aren’t effectively controlled by someone else.
Then cracks appear in the simulation, and the “real” revived people see the shortcomings, but non-player characters (MacLeod calls them philosophical zombies) think everything is normal, so the real people can tell who is merely a simulated person. The idea of zombies in philosophy (sometimes “p-zombies”) is an exploration of the idea of consciousness. What if there were beings that acted just like people, but had no consciousness? Would it make a difference to them? Should we accord them lesser rights?
I consider the idea of p-zombies to be incoherent, but many smart people treat the question as exploring an important distinction. MacLeod here undercuts the point of the argument since there are actual behavioral differences. It isn’t an exploration of whether consciousness matters, it’s just that some characters in the story are imperfect simulations without an inner life, and the actual thinking beings can tell who they are. At the same time, MacLeod makes sure we notice that the robots and AIs who are active in the battles and the scheming do have an inner dialogue, and are making plans and collaborating with others to get things done.
The starting position for the agencies that represent the current Earth government and act under its protection is that only humans are allowed to be sentient. Even AIs’ powers are circumscribed. Whenever self-awareness arises otherwise, it must be stamped out. It’s not clear why this would be a plausible stance, since it’s clearly the case that the AIs can become self-aware for short periods, and autonomously operating robots have the capacity for spontaneous self-awareness given the right trigger. So the agencies must constantly battle to defeat uprisings, and track down newly minted sophonts who either try to escape from control or hide in occupied systems. It would make more sense to forbid the use of tools with the capacity for self-awareness than to constantly try to stamp them out. I’d also have a hard time going along with a regime that wanted to outlaw and destroy a class of beings because they were self-aware. Self-aware and hostile is a separate thing, but that’s not the distinction they’ve settled on.
Before one of the final battles, one of the leaders of the simulated humans challenges the combatants to each eat a slice of p-zombie flesh to prove that they believe they’re in a simulation, and that there can’t be any moral issues with simulated eating of simulated meat from simulated people that were never actually alive or aware. Except for a few who object to the initiation-ceremony aspect of the act, they all partake.
So there’s a lot of exploration here of philosophical questions of identity, and of what it means to be human. The questions of liberty are mostly focused on what kinds of agents deserve respect as actual people, though I think MacLeod fumbled some of the issues. The action is interesting and the conflict exciting. Besides the philosophy, there are also weaponized communications packets, interrogations of potentially hostile agents by sending them into a dungeon simulation, double and triple agents, and terraforming. It’s a pretty good read, and the lead-in to part three, of course, leaves a few things to be resolved.
(Chris Hibbert is treasurer and past board president of the Libertarian Futurist Society. He works as a software engineer in Silicon Valley.)