There are no men, only artillery, infantry, cavalry. Huge masses and the instruments of their direction. Each member of these masses remembers everything and completely forgets himself. In this there must be and is pleasure…
Warfare is about killing people. Everyone seems to acknowledge that normal rules of moral behavior go out the window during war, but also that war is not completely free of rules – there are different codes of conduct that hold, and violating those rules will get you in trouble, especially if your side ends up losing. Nobody is quite sure what those rules are, and even less sure how they are to be enforced. Soldiers are trained to kill, yet expected (at least in the modern era) to keep their killing carefully circumscribed. Killing civilians is a criminal atrocity when done at ground level, but perfectly acceptable when done from above. Or maybe the distinction is not altitude but scale, or whether the killing is authorized by someone who went to college.
There is a large sub-industry of philosophers and others who claim to have a theory of the moral conduct of war. I am mostly unimpressed with this body of work, and I think the endless nattering hides a set of more interesting questions around the issue of agency. The military, like other institutions, has techniques for subduing or harnessing the agency of individuals and replacing it with a constructed group agency. The military has been at this longer than most other institutions and so has a set of well-established and finely honed techniques (e.g., basic training and drilling) for doing so. The desired end result is a structure for projecting controlled and overwhelming violence that is not hampered by individual morality, fear, or particularity.
At a larger scale, even the institutional agency of military organizations fades into the larger dynamics of conflict that are beyond their control. This can be seen most clearly in the political history of run-ups to major conflicts, which take on an autonomy of their own, beyond the control of any of the parties:
The emergence of a self-reinforcing cycle of heightened military preparedness and more acute political conflict…was an essential element in the conjuncture that led to disaster.
— David Stevenson, Armaments and the Coming of War: Europe, 1904-1914
The masters of war seem to have almost as little control over it as the grunts or hapless civilians in the way. War itself seems to have an agency; conflicts arise when conditions are right, like tornadoes, and like them create a vortex of violence that sweeps up everything in its path.
Drone warfare seems to horrify people in a special way, as if it breaks one of the intuitive rules of war. It’s not that it is especially destructive; in fact it is far more “surgical” than accepted techniques of war such as high-altitude carpet bombing, and so far less likely to cause massive civilian deaths. I assume the horror of drones lies in the fact that the killer removes himself from risk – he has no “skin in the game” (an ethical concept I picked up from Taleb’s Antifragile). In a normal war situation, the two sides put themselves on a roughly equal risk footing. Even given the extreme technical superiority of one side, such as the bombers that were napalming the Vietnamese countryside, the bomber crews were still at some risk of being shot down and killed or captured. In drone warfare, the killers reside in suburban office parks and drive home for dinner like any other cubicle worker.
The concept of moral hazard tends to be invoked to explain why drones are objectionable, but war always involves moral hazard – the people making the decisions to go to war are not, in general, the ones who stand to lose their lives. No, it’s more like the same unease generated by the concept of “friend” on Facebook – just like there is something inescapably inauthentic about Facebook relationships, there is something inauthentic about killing people by remote control. Something is missing, and we mourn the old days when killing involved actual presence.
Autonomous Killing Machines
Even more challenging to conventional theories of war-morality is the possibility of autonomous military robots. This science fiction staple seems poised to become reality in the near-term future:
The key issue identified by Heyns in his UN submission is whether future weapons systems will be allowed to make the decision to kill autonomously, without human intervention. In military jargon, there are those unmanned weapons where “humans are in the loop” – ie retain control over the weapon and ultimately pull the trigger – as opposed to the future potential for autonomous weapons where humans are “out of the loop” and the decision to attack is taken by the robot itself.
Autonomous killer robots will take life not because Joe Soldier pulls on a little piece of metal (or clicks his mouse on a screen) but because a rule has been matched (aka ‘triggered’, a salient little coinage there). This little rule-machine, although built by men, runs without their intervention or control. The ethics of such devices are clearly problematic – they don’t have any, so any ethics built into them has to be at the meta level, that is, in their designers and programmers. (We might imagine robots that do implement ethical reasoning; indeed, that was the central conceit of Asimov’s classic robot stories, but that seems extremely unlikely to happen in real life any time soon).
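The “in the loop” versus “out of the loop” distinction amounts to where, if anywhere, a human confirmation sits in the control flow. A minimal sketch, purely for illustration – the names `engage` and `human_confirms` are invented here and model no real system:

```python
# Toy contrast between "human in the loop" and "out of the loop" weapons.
# Invented for illustration; this models no real targeting system.

def engage(target_matched: bool, human_in_loop: bool,
           human_confirms: bool = False) -> str:
    """Decide whether to attack once a targeting rule has matched."""
    if not target_matched:
        return "hold"
    if human_in_loop:
        # A person still pulls the (real or virtual) trigger.
        return "attack" if human_confirms else "hold"
    # Out of the loop: the matched rule IS the decision.
    return "attack"

print(engage(True, human_in_loop=True))                       # hold
print(engage(True, human_in_loop=True, human_confirms=True))  # attack
print(engage(True, human_in_loop=False))                      # attack
```

The moral weight moves with the branch: out of the loop, no line of the program waits on a person, so whatever ethics exist live entirely in whoever wrote the rule.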
Fortunately we don’t have to look to science fiction for examples of autonomous lethal warfare, since devices like this have been in common use for hundreds of years – the land mine. These devices, like the more sophisticated robots of SF, implement a single rule: if someone steps on you, blow him up. The history of such devices shows that they are both irresistible and problematic. The problem, of course, is that they can’t be turned off, and so are likely to present a risk to their creators as well as their intended targets.
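The mine’s single rule can be caricatured in a few lines of code. This is a deliberately crude sketch (the names `Mine` and `on_pressure` are invented here), but it makes the two defining properties visible: the rule fires without human intervention, and nothing in the rule checks who triggered it:

```python
# Caricature of the land mine as a one-rule machine.
# Invented names; purely illustrative.

class Mine:
    def __init__(self):
        self.armed = True  # once deployed, its creators lose control

    def on_pressure(self, stepper: str) -> str:
        """The single rule: if someone steps on you, blow him up."""
        if self.armed:
            return f"detonate on {stepper}"  # no check of WHO stepped
        return "inert"

mine = Mine()
print(mine.on_pressure("intended target"))  # detonate on intended target
print(mine.on_pressure("its own maker"))    # detonate on its own maker
```

The bug that matters morally is not in the code but in its deployment: there is no channel back to a human who could revise the rule once it is in the ground.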
System agency escapes from human agency
The main slightly original point of this post is this: none of these developments are all that radical. Rather, they are natural extensions of what militaries have done for thousands of years – that is, creating structures that have their own agency and powers, different from the agency and powers of the individuals that constitute them. The military has always been very conscious of the need to replace the normal agency of autonomous individuals with the constructed nonhuman agency of a command structure.
In the past, these techniques were institutional and psychological, but if the same results can be achieved through technology, then you may be sure that avenue will be explored. Thus drones and autonomous warfare are just another step along the path to ensuring that normal human moral decision-making does not gum up the machinery of aggression. They don’t raise any new moral issues; they just take old ones to a new extreme.
The military is another example of what I recently identified as “hostile AI” or, perhaps more accurately, human-hostile systems. These systems are social constructs that, despite being created and powered by human action, act in ways that are generally inimical to human interests. This idea is, to be sure, in danger of being applied in a facile manner. Who decides what is inimical? These systems provide a benefit for some humans, or they wouldn’t exist. Nonetheless, there are some pretty clear cases. The military system (consisting of all the armies and armed aggressors in the world) is clearly inimical to human life. It exists because one part of it can justify itself by pointing to another part. This is clearest in cases of arms races, especially the nuclear arms race. Perhaps aggression and defense are inescapable aspects of human existence. But regardless of whether or not that is the case, once a military exists, it acts to strengthen and perpetuate itself far beyond whatever original purposes it may have served.
This is not to say that the military is uniquely evil or that people whose livelihood depends on the military are bad people. If they are, then all of us citizens who enjoy the benefits of their protection are also morally complicit. These institutions are part and parcel of society, and we would be something else without them (for one thing, we wouldn’t have the internet). Corporations and other institutions are also created to serve genuine human needs, but as they become more powerful they pose the perpetual threat that their own self-aggrandizement will override the welfare of the individuals they ostensibly serve. Solving this problem is sadly beyond the scope of a blog post, but may be essential to the future welfare of the human species.