Introduction

A significant proportion—perhaps even the majority—of contemporary robotics research is funded by the military.Footnote 1 The US-led invasions and occupations of Iraq and Afghanistan have demonstrated the military worth of—and greatly increased the demand for—Uninhabited Aerial Vehicles (UAVs) such as Global Hawk, Uninhabited Combat Aerial Vehicles (UCAVs) such as Predator and Reaper, and also robots for bomb and IED disposal (Butler 2007; Hockmuth 2007; Office of the Secretary of Defense 2005a, b). Unmanned systems (UMS) in the form of UAVs, UCAVs, Unmanned Ground Vehicles (UGVs), Unmanned Undersea Vehicles (UUVs), and Unmanned Surface Vehicles (USVs) are currently being developed and deployed by nations including the United States, Israel, South Korea, Britain, France, Germany, Denmark, Sweden, China, and India (Card 2007; Kenyon 2006; Legien et al. 2006; McMains 2004; Masey 2006; Office of the Secretary of Defense 2005a, Appendix A; Office of the Secretary of Defense 2005, pp. 38–40; Peterson 2005).Footnote 2 As a consequence, many engineers, computer programmers, and roboticists are now working on systems for military applications.

The use of robots in military applications generates a large number of important ethical issues (Veruggio and Operto 2006, p. 1517)—indeed, so many that they defy treatment in a single journal article. Will these systems make conflict more likely by promoting the idea that wars can be waged without casualties? Will they encourage asymmetric warfare by rendering it impossible for any but the most high-tech of militaries to triumph in conventional battle? What are the implications for just war theory if nations can fight wars without putting the lives of their warfighters at risk and, conversely, what does just war theory have to say about wars being waged in this fashion? Will the development of robot weapons generate an “arms race to autonomy”, thus effectively forcing the deployment of weapon systems in “fully autonomous mode”? Who should we hold responsible for killings committed by autonomous weapon systems? I have dealt with some of these issues elsewhere (Sparrow 2007) and hope to deal with the remainder in future publications.Footnote 3

It is possible that the answers to some of these larger questions may give us reason to doubt the wisdom and the ethics of developing robots for particular military applications.Footnote 4 However, it is exceedingly unlikely that—if we must have militaries—there is no role for the ethical development and use of robotic weapons. Thus, in this paper, I focus on a subset of the larger issues raised by the development of robot weapons: those questions that confront the designers of these systems as designers. I group the ethical issues involved in UMS design under two broad headings, Building Safe Systems and Designing for the Law of Armed Conflict, and identify and discuss a number of issues under each of these headings. As well as identifying issues, I offer some analysis of their implications and how they might be addressed.

I must acknowledge at the outset that not all of the issues I treat in this paper are unique to UMS. Many of them clearly also arise in relation to the use of long-range and/or precision munitions more generally. Genuinely new issues may arise if future unmanned systems develop the capacity for truly autonomous decision-making due to advances in “Strong AI” (Sparrow 2007). However, it is difficult to assess the likelihood that this will come about; past predictions about the prospects for genuine machine intelligence of the sort necessary to allow machines to constitute autonomous moral agents have repeatedly been shown by events to have been overly optimistic. In the meantime, the many ethical questions involved in the design of existing UMS, which lack the capacity for morally autonomous action, are no less important or urgent for not being entirely unique. Moreover, as I note at several points below, familiar issues may arise in new guises or with increased urgency in relation to the design of unmanned systems as compared to manned systems. My approach in this paper is therefore to examine and highlight the ethical issues involved in the design of UMS without denying that similar issues may arise elsewhere in engineering, especially in the design of other sorts of long-range weapon systems.

Building Safe Systems

Before it will be ethical to field UMS, they must be safe to operate and maintain, and safe to fight alongside. More precisely, they must be as safe as comparable manned systems capable of achieving similar performance in comparable roles. Their routine operation should not pose more of a threat to the health and safety of the operators than the systems they might replace. As far as is possible, they should not expose their operators to enemy line-of-sight fire. They must be capable of combined operations with manned systems during military exercises and wartime and also of being used for training and in peacetime operations alongside civilian systems (Barry and Zimet 2001; Hockmuth 2007, p. 73). This requirement is especially burdensome for UCAVs and UAVs, which must be capable of sharing airspace with both military and civilian aircraft (Hockmuth 2007; Kenyon 2006; Kochan 2005; Lazarski 2002; Wise 2007). UMS, their operators, and the other military forces that they are intended to operate alongside must be linked by reliable and robust communications systems in order to minimise the possibility of dangerous accidents (Barry and Zimet 2001; Kenyon 2006, p. 44). They must be capable of reliable “friend or foe” identification in order to avoid causing casualties by “friendly fire” (Kainikara 2002; Marks 2006). Less obviously, as I discuss below, UMS must be safe to use around enemy non-combatants (Arkin 2007, p. 4). All of these requirements are likely to become more demanding as the scope of autonomous action available (and allowed) to UMS increases.

Because the relevant safety comparison between manned and unmanned systems involves comparing “systems capable of achieving similar performance in comparable roles”, this comparison actually speaks strongly in favour of the application of UMS in roles which currently involve a high risk to human life. Thus, for instance, the use of UMS in bomb and IED disposal and mine clearance is not merely ethical but ethically mandated where possible—even with existing systems. Similarly, once they develop the capacity, UMS may quickly become the option of choice for conducting suppression of enemy air defences (SEAD) and intelligence, surveillance, and reconnaissance (ISR) missions (Barry and Zimet 2001; Mustin 2002; Office of the Secretary of Defense 2005b, Appendix A; Peterson 2005; Sullivan 2006).

Initially, at least, the problem of making robots safe is for the most part a technological rather than an ethical challenge. It will be possible to greatly improve the safety of military robots by improving the technology in ways that can be specified independently of any commitments on controversial ethical questions. However, as the rest of my discussion makes clear, the use of UMS in armed conflict may generate a range of unanticipated problems and consequences involving harm, or the risk of harm, to various parties, the avoidance and/or mitigation of which will require making difficult choices between competing and sometimes incommensurable values. Designing “safe” systems will therefore require making ethical decisions.

Keeping Humans Out of Harm’s Way?

The literature on military robotics typically emphasises the value of UMS in keeping human beings out of harm’s way and thus saving (“friendly”) human lives (Boland 2007; Chapman et al. 2002; Featherstone 2007; Fielding 2006; Kainikara 2002; Legien et al. 2006; Office of the Secretary of Defense 2005a; Peterson 2005; Scarborough 2005; Sherman 2005; Veruggio 2006, §7.7.2). Yet, as UMS become more central to warfighting, it is likely that circumstances will arise in which this relationship will be reversed; human beings will start to be placed in harm’s way as a result of the operations of robots.

There are three scenarios in which this might be expected to occur.

First, as soon as robots can play a useful role in military operations, warfighters will rely on them to complete the tasks to which they have been assigned. For instance, where a robotic mine clearer has swept a minefield, warfighters will expect the area to be free of mines. If robots have been used to secure and patrol a perimeter, warfighters will expect the area to be clear of the enemy. If robots have been sent to conduct reconnaissance then warfighters will assume that they have reliable knowledge of enemy assets. Consequently, if robots fail in these tasks, these expectations will be confounded and human lives may be placed at risk.

Second, human lives might be placed at risk in order to defend, service, or recover UMS that are threatened in combat or other operations and also to prevent the enemy gaining valuable intelligence if they should capture a UMS. As UMS develop they are likely to become increasingly sophisticated and expensive systems and consequently important military assets. Where a high-value UMS is in jeopardy due to a malfunction, or to enemy action, there will be a significant temptation for the commanding officer to commit human forces to protect or recover it. If the system is valuable enough and the apparent risk is low enough, the commanding officer may even be obligated to do so (Wall 2006). My suspicion is that, given the extent of the operations in which Predator UCAVs have been engaged and the $4.5-million-plus price tag of each Predator, such circumstances have already arisen in the current conflicts in Iraq and Afghanistan.Footnote 5 Of course, similar scenarios may also arise with military systems without a robotic or remote component. However, these “rescue” or “salvage” missions are more likely with the use of UMS because the nominal capability of these systems to carry out operations without incurring friendly casualties is likely to lead to their being deployed in circumstances and in pursuit of objectives where manned systems would not be deployed (Mustin 2002).

Third, the availability of UMS may mean that military conflicts are initiated with the intention that they can be completed without placing warfighters in harm’s way, only to discover that victory can only be achieved—or the operation aborted without disaster—with the involvement of human beings in the theatre of operations. This may occur because the weapon system malfunctions, because victory turns out to be beyond the capabilities of the UMS due to changed circumstances, including enemy action, or because the operation was ill-conceived in the first place. As a result, human warfighters, especially Air Force pilots and Special Operations forces, may find themselves involved in conflicts because robots have failed to achieve their objectives (Mustin 2002).Footnote 6

The obvious way to minimise these risks is to design and manufacture robots that are more reliable, effective, and also flexible, so that they seldom break down or fail in their missions.Footnote 7 A complex and interesting set of trade-offs then arises. More sophisticated robots are likely to be better able to complete their missions but also more expensive and therefore more likely to drag human beings into combat to repair or recover them when they do fail. Less versatile—and therefore less expensive—robots might be more “disposable” and consequently less likely to put human lives in jeopardy as a result of the need to recover them, but also less reliable and more likely to involve humans in combat if they fail in their missions. On the other hand, “better” robots may be more likely to be sent on more ambitious missions and have an increased risk of failure as a result. People may also be inclined to rely more heavily on what they perceive to be “better” robots. More elaborate and/or ambitious missions may also be more likely to require human intervention to complete or “salvage” them. It will be important for designers of UMS to think through these trade-offs before setting out to build robots that might avoid the dangers I have highlighted here.

The Psychological Stresses on Remote Operators

One issue related to the safety of UMS that is worth singling out for special attention is the possibility that the deployment of UMS in armed conflict will expose their operators to new forms of psychological stress. Designers will need to consider the impact of operating UMS on their operators when designing these systems.

At first sight, the use of UMS might be expected to reduce stress and trauma in warfighters by taking them off the battlefield. However, while the use of UMS may reduce physical trauma and some sorts of mental and emotional trauma, it also seems likely to generate significant psychological stress related to the operator being simultaneously “present” in, and absent from, the battlefield. Through their link with the UMS, operators may witness, and may even participate in, events that are psychologically distressing. In particular, operators of UMS may find themselves witnessing events in which they are powerless to intervene because of the limited capacities of the systems they are operating. Thus, for instance, a UMS operator may witness at close quarters the warfighters they were fighting alongside being killed, massacres of civilians, or other war crimes, yet be helpless to prevent them.

Of course, human beings involved in war have always been subject to the danger that their experiences will scar them psychologically. It is also true that modern-day warfighters, especially pilots, tankers, gunners, and submariners, already experience many aspects of combat through various electronic or other sensing systems, which mediate that experience (Shurtleff 2002). However, what may be new with the application of UMS is a near-complete disjunction between the simultaneous experience of the consequences of war and the experience of the process of fighting it. In the past there has always been a significant gap between the consequences of war and the experience of fighting it, for most of its participants. In the future, however, the operators of UMS may find themselves simultaneously inhabiting a pleasant office space and the chaotic streets of an urban environment in the midst of violent battle. Operating a UGV or UAV may provide them with a “point of view” in the midst of the battlespace that exposes them to a vivid experience of the chaos of modern war and its consequences (Bender 2005). According to Capt. Steven Rolenc, spokesman for Predator operations at Nellis Air Force Base (quoted in Sherman 2005, p. 35),

[When] “you put your hands on the controls and your eyes on the screens, you feel as though you’re flying over Iraq or flying over Afghanistan. You get yourself into that reality. It’s not a video game. It’s the real deal”.

Similarly, Air Force Major Shannon Rogers (quoted in Donnelly 2005) claims that,

Physically, we may be in Vegas, but mentally, we’re flying over Iraq. It feels real.

Yet when they “log off” the operating system for the UMS, the operators may return to a peaceful office environment and perhaps even, later that day, to their families (Shachtman 2005a). This geographic and psychological distance between the operators and the environment in which they are operating may negatively affect the coping mechanisms that warfighters currently use to process the stresses they experience in combat. In ordinary warfare, the larger context of being physically present in the theatre of operations may allow warfighters to prepare for combat through a process of anticipation that makes reference to local circumstances and to deal with its stresses afterwards through conversation and interaction with others who may have shared similar experiences. Fighting a war in a country that one has never set foot in, alongside people one has never met, may be uniquely difficult in terms of the opportunities for warfighters to “process” their experiences.

It may well prove preferable to expose operators to these stresses rather than the stresses they would experience if they were actually present in the battlespace. However, armed forces that operate UMS would be well advised to monitor the impact of any such stresses on the operators of UMS and to be open to the possibility that the combination of close proximity to, and extreme distance from, violence and its consequences made possible by the development of UMS may have a unique effect on the mental and physical well-being of their operators. According to several reports, these effects are already being felt by the US Air Force operators of Predator UCAVs who pilot these vehicles through the skies of Iraq and Afghanistan by satellite link from the US mainland (Donnelly 2005; Kaplan 2006). The result has been physical and emotional exhaustion amongst the operators and also significant stresses on their families (Kaplan 2006).

It seems likely that the extent and consequences of such stresses will be, at least in part, a function of the design of UMS. In particular, two features of systems may play a role in determining the level of stress to which their operators are exposed.

Firstly, the nature of the interface used to operate the system will play an important role in the stress levels of operators. Systems which provide only abstract and mediated images of the battlespace might be expected to induce little, if any, trauma in those who operate them. On the other hand, the development of more sophisticated virtual reality displays, which will immerse the robot’s operator in the robot’s point of view, might be expected to increase the stress to which operators will be exposed. Designers will therefore need to consider these important human factors when designing the human-machine interface for UMS.

Secondly, the capacities of the systems to intervene in the events that they allow the operator to witness are likely to be relevant. However, the relationship between this capacity and the stress on the operators is unlikely to be straightforward. There are, in fact, two different possible types of stress that operating a UMS may produce: stresses relating to being a witness of traumatic events; and stresses relating to being a participant in traumatic events.

The most intense source of psychological stress on the operators of UMS is likely to be regret arising from what the operator did—or failed to do—during combat operations. Systems that provide no capacity to intervene may therefore provoke less stress than those that do offer some capacity to shape the outcome of events in the battlespace. However, even these “passive” systems may traumatise their operators if they are forced to witness atrocities of various sorts; machines that provide the experience of “being there” are likely to be especially problematic in this regard.

Once the system does offer its operator some way to affect the battlespace, then the operator is likely to feel responsibility for the exercise of this power and to be prone to regret if they come to believe that they should have done otherwise than they did. It is therefore tempting to conclude that the “best” systems would be those that provided their operators with the largest possible scope of action so that they could prevent or avoid outcomes that might otherwise traumatise them. That is to say that more powerful systems, which make possible a wider range of actions, would be less likely to expose their operators to “participant” trauma. However, if operators have more power to affect things, then they may also have more to regret. In particular, operators may come to regret what they did do, as much as what they didn’t. Increasing the capacities of systems therefore provides no easy solution to the problem of the stresses to which operators are exposed. However, it is apparent that, in terms of the risk of psychological trauma to operators, the worst systems will be those that leave the operator with the sense that they could have altered the outcome of the scenario when in fact they could not have. This conclusion is more significant than it first appears because, despite the “hype” surrounding them, many unmanned systems are severely limited in the range of actions that they make possible.Footnote 8 It will therefore be crucial to ensure that operators have realistic understandings of the effective capacities of the systems that they are operating in order to minimise the chance that they will regret outcomes that they were in fact unable to alter.

Because of the difficulties involved in locating unclassified research on the operations of UMS in military roles, these remarks are necessarily, to some degree, speculative. However, at the very least, it is clear that good design in relation to these issues must be guided by the results of empirical research into the appropriate questions in the fields of human-robot interaction and combat psychology.

Designing for the Law of Armed Conflict

The ethical use of UMS in armed conflict, like that of other weapons, will need to be governed by the Law of Armed Conflict (LOAC) (Gulam and Lee 2006; Klein 2003; Lazarski 2002). The details of the application of the LOAC to the operation and design of UMS are a matter for military lawyers (Dalton 2006; Gulam and Lee 2006; Royal Australian Air Force 2004, pp. 74–75). However, several core requirements of the law of armed conflict do place significant constraints on the ethical use and design of these systems and are therefore worth considering here.

Can Robots Meet the Criteria of Discrimination and Proportionality?

It is clear that, at a bare minimum, any ethical use of UMS will need to comply with the principles of discrimination (Bender 2005) and proportionality, which derive from the just war doctrine of jus in bello (Schmitt 2005). That is to say, UMS must be capable of discriminating between legitimate and illegitimate targets and of applying force proportionate to the pursuit of legitimate military ends (Arkin 2007, p. 2). These are perhaps the principal design challenges involved in the future development of UMS, especially UMS that are intended to have the capacity to operate autonomously.

One possible way to “work around” these problems is to deploy UMS in such a way as to exclude the possibility that they will attack illegitimate targets. For instance, armed sentry robots could patrol inside a perimeter fence with suitable warnings on it to prevent any possibility of non-combatant presence (Arkin 2007, pp. 92–93). However, it may be difficult to reliably prevent robots from acquiring illegitimate targets in this way, especially during wartime. Moreover, limiting the use of UMS to circumstances where they will not come into contact with non-combatants will severely limit their military value. Canning (2005, 2006) has championed another version of this approach, in which autonomous weapon systems would be tasked with targeting only enemy weapon systems rather than enemy warfighters, thus minimising the chance of killing non-combatants. Similar problems seem likely to beset this approach. In many circumstances, it will be difficult to distinguish a hostile insurgent from, for instance, an armed tribesman carrying an AK-47 because this is the local cultural practice. There is also the danger that the number of non-combatants killed when an attack is launched against an enemy weapon system (for instance, a mortar fired from a hospital car park) will be such as to render the force used disproportionate. Unwillingness to risk these outcomes, on the other hand, will significantly limit the contexts in which automated weapon systems may ethically be used.

Another possible solution—to the problem of discrimination at least—is to try to equip the systems themselves with the capacity to make the requisite judgements as to when a target is legitimate (Arkin 2007). Some weapon systems, including anti-tank, anti-aircraft, anti-ship, and counter-fire systems, may be capable of distinguishing between military and civilian targets. If the target recognition algorithms on weapons of this sort are sufficiently reliable, it might appear that there would be fewer ethical barriers to deploying them (Arkin 2007).

However, even with these weapons there is the possibility that a potential military target may have indicated its desire to surrender or may no longer pose sufficient military threat to be a legitimate target of attack (Fielding 2006). The fact that the principle of discrimination is extremely context-dependent suggests that autonomous weapon systems would have to have a very high level of autonomy indeed to be able to make the judgements necessary in order to comply with its requirements. Similarly, decisions about what constitutes a level of force proportionate to the threat posed by enemy forces are extremely complex and context-dependent, and it seems unlikely that machines will be able to make these decisions reliably for the foreseeable future. Barring some remarkable breakthrough in artificial intelligence research, it will therefore be necessary to include a human “in the loop” before deploying lethal ordnance from a UMS in order to ensure that the use of UMS does not violate the LOAC in this regard (Gulam and Lee 2006; Fitzsimonds and Mahnken 2007; Kenyon 2006, p. 43).

If the application of UMS is going to rely on a “human in the loop” to ensure compliance with the requirements of discrimination and proportionality, then these systems must provide their operators with sufficient information to make the necessary judgements reliably. Given the limited situational awareness available to operators of UMS (Kainikara 2002; Mustin 2002) this may require that operators have access to independent sources of information about the nature of the targets they are attacking. This, in turn, has implications for the nature and capacities of the communication systems that are required in order to be able to use UMS ethically (Thorton 2005).

Locating Responsibility

As I have argued more extensively elsewhere (Sparrow 2007), the application of the law of armed conflict to the use of UMS requires that a clear chain of responsibility can be established between the consequences of any such use and the person who is responsible for them (Arkin 2007, pp. 76–83; Asaro 2007; Foster 2006). If violations of ethical standards occur, it must be possible to identify the sources of violation in order that such violations can be addressed and, if necessary, the relevant party criticised, disciplined, or prosecuted. This will be a significant challenge given that multiple parties may be involved in the operations of UMS (Featherstone 2007; Sullivan 2006).

The need to be able to determine responsibility for the activities of UMS has implications for engineers, roboticists, and computer scientists working on UMS in at least three dimensions.

Firstly, the designers are an obvious and important possible endpoint for the allocation of responsibility for the consequences of the operation of these systems. Thus, for instance, if a design error in a UMS results in the killing of civilians or friendly forces then the designers will be partially responsible for these deaths. That the designers of UMS might be—and might be held to be—responsible for the deaths of innocents should serve as a reminder of the gravity of their role and an incentive for them to perform this role diligently.

Secondly, a concern for the attribution of responsibility has implications for the organisation of the process of designing and operating unmanned systems, which should clearly allocate responsibility for each distinct function of the system and also for the functioning of the system as a whole. If a problem arises with the operations of an unmanned weapon system, it must be possible to identify those responsible (Marino and Tamburrini 2006). Of course, this is but one instance of the application of more general principles of good design of complex systems and of good project management. However, this fact does not render their application any less important in this instance. Moreover, the requirement that it must be possible to identify those responsible for system failures will be especially demanding if an unmanned weapon system has the capacity to operate in a “fully autonomous” mode, in which case it will be necessary to assign responsibility for the outcomes of the actions of the machine in this mode to appropriate parties (Marino and Tamburrini 2006), which may include the commanding officer and/or those who have designed or programmed the UMS.Footnote 9

Thirdly, the capacity to sustain a chain of responsibility during the operations of UMS is itself an important criterion of good design of these systems. Thus, for instance, “ethical” systems will have robust communications linking them with their operators and will have mechanisms in place to record telemetry data in order that problems with—and responsibility for—the operations of the systems can be identified when necessary.

“Designing Out” War Crimes

The combination of a number of features of UMS may function to lower the psychological, social, and institutional barriers to the commission of war crimes. In particular, the geographic and psychological distance between the operator and the UMS may make it easy for the operator to perform actions that they would not perform if they were physically present in the battlespace (Cummings 2004, 2006; Graham 2006; Ulin 2005). As a result, they may be more likely to attack illegitimate targets or to use a disproportionate amount of force in attacking legitimate targets.

Of course, this possibility exists with any long-range weapon. Modern warfighters already possess a godlike power to call down destruction from the skies upon their enemies. Telescopic sights, night-vision goggles, and other targeting systems already distance those exercising lethal force from the human beings their actions affect. These existing technologies may therefore already serve to lower some of the psychological barriers to illegitimate killing (Dunlap 1999; Shurtleff 2002).

However, the use of UMS is likely to exacerbate this distancing effect whilst at the same time increasing “contact” with the enemy. In particular, UAVs may make surveillance ubiquitous on the battlefield and also extend it throughout the entire territories of the warring states. The operators of the Predator UAV can watch people going about their daily activities in real time from half a world away. The Predator UAV flies at such a height and produces so little engine noise that those being tracked via this system may have no idea that they are under observation (Fulghum 2003; Kaplan 2006). Watching targeting video taken from the Predator (available via YouTube!), one is struck by the bizarre sense of intimacy this footage generates (Blackmore 2005, p. 202).Footnote 10 One description of the Predator UCAV in action recounts how it is possible for the operators to observe individuals in Afghanistan walking outside to defecate and confirm that they have done so by the infrared traces of the faeces they have left behind (Kaplan 2006). Yet the explosions that inevitably follow in the footage that I have watched seem entirely unreal, flickerings on a cathode ray screen made trivial by their similarity to images we have seen a thousand times before in film and on television. By simultaneously increasing the amount of contact with the enemy whilst distancing warfighters from them, UMS may contribute to weapons operators coming to see enemy combatants and non-combatants as mere distant annoyances, to be destroyed on the merest of whims. This in turn may lead to (more) violations of the requirements of jus in bello.

It has been suggested to me that, on the contrary, by reducing the risks to which their operators are exposed, UMS may also lower the levels of anger, fear, and hatred among warfighters. The strong emotions that warfighters in combat feel towards their enemy are in part a product of the fact that that enemy often poses an immediate physical threat.Footnote 11 As the lives of operators of UMS are not at risk they may be less inclined to experience these passions. Insofar as anger, hatred, or fear are implicated in the commission of war crimes, reducing the level of these emotions amongst warfighters might also be expected to reduce the number of war crimes.

However, whether or not the operators of UMS are likely to develop more or less compassion and respect for their enemy, or be more or less motivated by anger, fear, and hatred, seems to me an open question. Anecdotal evidence suggests that the operators of UMS become emotionally engaged in, and experience strong emotions in response to, events in the battlespace regardless of their geographical distance from it. Bender (2005) quotes one operator reporting that,

… “the feeling of anger you get is pretty powerful” when American troops are seen or heard taking fire from insurgents. Others speak of the “adrenaline rush” in the room when the Predator destroys an enemy target. (p. A6)

Another report (Shachtman 2005a) quotes Major Shannon Rogers describing his experience with a UAV:

“We left their truck one big smoking hole”, he remembers. “My heart was pumping as we were doing our business. It felt just as real to me, however many thousands of miles away, as if I were sitting right there in that cockpit”.

These reports suggest that the operators of UMS do experience those emotions implicated in the commission of war crimes. Yet while these systems are capable of generating and sustaining these powerful negative emotions, it is less clear that they are capable of generating the emotions and moral attitudes that might serve to prevent war crimes. Emotions such as compassion, joy, love, or empathy, or moral attitudes such as respect, are unlikely to develop or be sustained in a context where warfighters are thousands of miles away from their purported objects. Indeed, a likely consequence of increased use of UMS is a decrease in the number of military personnel involved in conflicts who have ever met or spoken with the people who inhabit the territory in which war is being fought. In the absence of any human relations with those whom their actions will affect, warfighters may be less inclined to resist impulses of fear, hatred, or anger when they arise.

As Cummings (2004, 2006) has argued, the alienation of the operators from the persons that their actions affect means that building ethics “into” the user interface represents a key challenge in the design of UMS. The interface is the primary means by which the operators gain access to information on the basis of which they must make life-and-death decisions; it is also the mechanism whereby the operators act on the basis of these decisions. The design of the interface can therefore be expected to exercise a significant influence on the actions of operators. This in turn suggests that the designers of the interfaces must take ethical considerations into account when designing them. The interface for a UMS should facilitate killing where it is justified and frustrate it where it is not. Obviously, it will not be possible for designers to make this discrimination at the level of individual actions; nor will it be possible to prohibit deliberate unethical use of UMS by an ill-motivated operator. However, it should be possible to take into account the morally relevant features of the circumstances in which a UMS is designed to be used and also those of its typical use, and to design systems that promote ethical usage in these circumstances. In particular, the designers of UMS should work to discourage and counteract the alienation, noted above, between operators and those whose lives their actions affect.

Perhaps the most important feature of systems in relation to this imperative is their sensors and the information that they provide to operators. “Good” sensors and good interfaces will present the operator with as much as possible of the information that is morally relevant to the decisions that they must make. This might include information relevant to the identity, intentions, and history of potential targets, as well as their current location and activities. This in turn is likely to require providing operators with access to available “human intelligence” and to any other relevant information available via other networks or systems. This information should help operators distinguish between legitimate and illegitimate targets according to LOAC and also distinguish situations where killing of (legally, rather than morally) legitimate targets is justified from situations where it may not be. Moreover, the sensors and information systems should facilitate the operators being able not only to reliably identify targets but also to comprehend what happens to them when lethal force is deployed. That is, as much as is possible, the system’s sensors should communicate the moral reality of the consequences of the actions of the operator. It is essential that operators have a vivid awareness of what is at stake when they make decisions, so that they can learn to make them responsibly and well.

It must be acknowledged that these are demanding goals and that there are likely to be significant limits on designers’ capacity to achieve them, especially given the concerns expressed above about whether remote systems are capable of transmitting and representing the full range of moral and emotional information relevant to combat, and also the budgetary and other design constraints under which the designers of UMS must operate. Nevertheless, it is important to clarify and state these goals in this context in order that they may at least provide some direction in relation to ethical design. It is also worth noting that at least some of the imperatives arising out of a concern for ethical design align with those arising out of a concern with the operational demands on the systems.

Where possible, UMS should possess a range of capacities beyond the firing of deadly weapons. Lethal force should not be the first and only recourse available to operators. Thus, for instance, UGVs for operations in urban environments should have the capacity to communicate with other persons in the battlespace, including non-combatants. Consideration should also be given to equipping such systems with “non”—or, more accurately, “sub”—lethal weapons. Providing operators with a larger range of options will allow them to respond more appropriately—and hopefully, more ethically—to each particular situation.

A possible policy response to the geographical and psychological distance between operators and their (potential) targets, with design implications, would be to return the operators of UMS to the theatre of operations in which the systems they operate function, so that they have some experience of the local culture and contact with the people whose lives they are affecting. This would have the obvious disadvantage of placing the operators at higher risk by virtue of being closer to the front line and would also involve expenses associated with the transport and supply of these personnel and associated systems.Footnote 12 However, it would go some way to addressing the concerns about “remote control” killing surveyed above.Footnote 13 Whether these benefits are sufficient to justify putting the operators closer to the front line will depend on the details of the particular UMS and the roles for which it is intended, as well as the intensity of the particular conflict. Obviously, the range over which systems are intended to be used will have significant design implications. Armed forces will therefore need to consider these issues early in the design of such systems.

Another important aspect of the project of ethical design will be working to avoid the creation of other types of what Cummings (2004, 2006) describes as “moral buffers” between the operator and their actions. Many UMS have a good deal of automation built into them, into their sensors as well as into the operations of the system itself. Cummings suggests that the operators may come to over-rely on the automation in these systems, to the detriment of the decisions they make. Moreover, Cummings argues, the presence of this automation may form a (further) moral buffer between the users of the system and their actions, allowing them to tell themselves that “the machine” made the decision. This, in turn, may encourage unethical choices by operators who are thereby able to distance themselves morally and emotionally from the consequences of their actions. Good design of UMS will therefore not only (as discussed above) ensure that we can identify those responsible for the consequences of the operation of the system, it will also ensure that those who are responsible feel responsible—and know precisely what they are responsible for.

A final, important consideration relating to the ethical design of UMS is that, because the operators of the UMS are themselves not in any danger while they are in combat, it seems as though they should err on the side of caution when it comes to making decisions about killing.Footnote 14 This means that there is some room, in designing these systems, for checks and balances, such as the gathering of more information outlined above, which would not be practical for warfighters actually present in the battlespace. Of course, importantly, the lives of other friendly forces in the battlespace may well be at stake, which means that many missions will still be time-critical, such that it will still be necessary for the UMS to be capable of responding quickly as required. Nevertheless, the fact that the operators of UMS are safe from harm does alter what it is reasonable to expect from them by way of a concern for the requirements of jus in bello and therefore the requirements of ethical design of these systems.

The challenge of ethical design is one that, arguably, has yet to be adequately met. A number of articles on UMS report that the control units for these systems have deliberately been designed around the controls for the PlayStation game console or around a “Gameboy” controller in order to take advantage of potential operators’ existing familiarity with these controls (Graham 2006; Hambling 2007; Kenyon 2007; Thorton 2005). Unsurprisingly, this has led to reports that there is a tendency for the operators to mistake their activities for playing computer games.

According to Bender (2005),

For the Predator crews in Nevada, however, the main challenge is simply to remember they are not playing a video game when they step out of their air-conditioned office for a Wendy’s hamburger.

“We have to impress upon them that they are not just shooting electrons,” said Major Sam Morgan, a trainer of Predator pilots. “They’re killing people.”

Shachtman (2005b) also cites an analyst saying, of the operation of these systems:

It’s like a video game. It can get a little bloodthirsty. But it’s fucking cool.

Presuming that we do not want warfighters making life-and-death decisions in the subconscious belief that they are playing a video game, these remarks suggest that much work remains to be done in promoting ethical behaviour through the design of UMS.

Conclusion

The development of UMS for military applications poses ethical as well as technical design challenges for roboticists, engineers, computer scientists, and others involved in the design of these systems.

I have argued that one set of ethical issues arises out of a concern for the safety of the operators of the system and of those who will work alongside them. While the development of UMS may keep some warfighters safe from harm, there are also circumstances in which it may place others at risk. Responding to this phenomenon will require negotiating a complex set of trade-offs regarding the capacities—and expense—of these systems. I have also highlighted the possibility that the operation of UMS may expose operators to unique psychological stresses.

There is also another, arguably more difficult, set of issues that arises out of the need for the operations of UMS to meet the requirements of the law of armed conflict. The profound difficulties involved in enabling fully autonomous weapon systems to distinguish between legitimate and illegitimate targets and assess the proportionate use of force suggest that it will be necessary to retain a “human in the loop” for the foreseeable future. A critical requirement for the ethical design and operation of UMS is that those responsible for each aspect of their design and operations can be clearly identified; this may require assigning responsibility in advance to appropriate parties if these systems are used in a “fully autonomous” mode. Finally, ethical design of UMS will require paying attention to the ways in which these systems may facilitate unethical behaviour by separating the operators from the consequences of their actions, and working to overcome this and other “moral buffers” that may arise in the operation of robotic weapons.

Throughout, I have tried to suggest how these issues might begin to be addressed by appropriate design of UMS. My investigations suggest that some of these ethical issues can be “designed out” or at least “designed around”. That is to say, with sufficient ingenuity and more sophisticated technology, engineers can avoid them and/or minimise their impacts. For instance, better neural networks, better sensors, and better target recognition algorithms might mitigate some of the difficulties involved in the problem of discrimination and expand the number of roles in which it is appropriate to employ autonomous weapon systems. Sensors that make possible a more vivid appreciation of the battlespace might also reduce the psychological distance between operators and their targets and thus reduce the extent to which the distance between them acts as a “moral buffer”.

However, it is important to note that there are real trade-offs involved in attempting to address some of these issues, such that efforts to address one will intensify another. Thus, for instance, technologies that make possible a larger scope for autonomous operations of robotic weapons systems exacerbate the problem of locating the responsibility for the consequences of the operations of these systems. Systems with sensors that make the battle more “real” for the operators will also increase the risk that they will suffer psychological stress as a result. Controlling UMS over long distances will decrease the risk to the operators but increase the “moral buffer” between them and their actions. Designers of UMS will therefore need to consider the relative priorities of these issues, the relationships between them, and the trade-offs involved in attempts to address them, in the pursuit of “ethical” design.

Unmanned systems are already playing a key role in contemporary conflicts. Given current enthusiasm for UMS and their obvious utility, it seems likely that robotic systems will be used much more widely in military roles in the future (Featherstone 2007; Graham 2006; Hanley 2007; Hockmuth 2007; Office of the Under Secretary of Defense 2006; Peterson 2005; Scarborough 2005). It also seems likely that the scope of autonomous action allowed to UMS will increase as the technology improves.Footnote 15 These trends make the challenges of ethical design of unmanned systems that I have highlighted here all the more urgent.