1 Introduction

The aim of this chapter is to explore the myriad law of armed conflict issues surrounding the potential future development and deployment of autonomous weapon systems (AWS). The United States (US) military defines AWS as those weapons that ‘once activated, can select and engage targets without further intervention by a human operator’.Footnote 1 While many existing weapon systems possess features which are autonomous, fully autonomous weapons do not currently exist in any nation’s military arsenal. Experts predict this will change in the future as AWS begin to supplant manned and remotely piloted systems. Some projections are that within 20 years AWS will be the primary weapons used on the battlefield.Footnote 2 Fearful that these systems might not be able to comport with international law, several non-governmental organisations and human rights advocacy groups have already voiced their opposition to AWS and have called for a pre-emptive international treaty banning their development and use.Footnote 3 Furthermore, in April 2013, a United Nations (UN) Special Rapporteur issued a report to the UN Human Rights Council recommending a moratorium on all AWS testing and deployments until nations can agree on a legal and regulatory framework for their use.Footnote 4

States are keenly aware of these concerns and of the uncertainties surrounding the potential deployment of AWS. Some states have already begun implementing measures meant to ensure compliance with the law of armed conflict.Footnote 5 Yet the draw of autonomous weapons is significant. AWS offer tremendous promise in terms of protecting one’s own forces from harm, and they may even be able to provide greater protection to civilians by delivering more precise and accurate strikes. Given such possibilities, it seems unlikely at this stage that a consensus of states will emerge to pre-emptively ban these systems. This chapter critically examines the crux of this emerging debate, namely whether AWS can be designed and used in ways that comply with the fundamental principles of the law of armed conflict.

In order to fully detail the legal implications of AWS, the chapter is organised as follows. Section 13.2 explains how militaries are presently using autonomous technology and predicts how autonomy may be embedded into weapons in the future. Section 13.3 describes the relevant tenets of the law of armed conflict in terms of identifying whether an autonomous weapon itself would be lawful and whether such a weapon could be used in a lawful manner. Section 13.4 then identifies areas of the law that may need to be evaluated in a new light in response to the unique challenges raised by AWS. The author will focus particularly on the subjective judgements inherent in targeting and on responsibility. Finally, Sect. 13.5 draws conclusions as to whether AWS will ultimately be deemed lawful under the law of armed conflict.

2 The Context: Autonomous Weapon Systems Technology

Technology and science have dramatically advanced warfare and improved the capabilities of weapons throughout history, but the emergence of autonomous technology may well represent a revolution for modern warfare. Humans have always been ‘in the loop’ with regard to lethal targeting decisions. The potential creation of autonomous systems capable of independently selecting and engaging targets with lethal force may alter that paradigm. This shift may occur rapidly if recent leaps in artificial intelligence are any indication. Some experts even contend that the technology to develop fully autonomous weapons essentially exists today.Footnote 6

2.1 Current State and Uses of Autonomy

Tremendous strides taken by artificial intelligence researchers in recent years make the prospects for fully autonomous systems more certain. Computer scientists are now successfully coupling sophisticated computer algorithms together in novel approaches designed to allow wider separation between human operators and robotic systems.Footnote 7 Innovative computing approaches, including those using so-called ‘machine learning’ processes, have enabled computers to more closely simulate human thought patterns.Footnote 8 Increasingly, these computer systems are able to decipher answers to complex problems by ‘improv[ing] automatically through experience,’ an approach which is similar to the way humans learn by example.Footnote 9 Given these artificial intelligence enhancements, embedded autonomous capabilities are becoming more prevalent across society. For instance, automobile manufacturers are outfitting new vehicles with a host of autonomous features, and many car makers and experts now predict that self-driving vehicles will become the norm in coming decades.Footnote 10 The uses of autonomous technology are not, however, confined to commercial products. Militaries around the world are quickly embracing these opportunities.

Militaries are particularly well situated to capitalise on these advances because they have consistently been at the forefront of innovation. Advanced militaries began incorporating automatic or autonomous features into their weaponry years ago. The US Navy, for example, has long been using naval mines that respond automatically to ‘acoustic, electromagnetic or pressure’ signatures.Footnote 11 The Navy has also been using close-in weapon systems, like the Phalanx and other similar systems, on warships since the mid-1970s as a protective measure of last resort in self-defence. The systems are designed to defeat incoming missile and rocket attacks against ships by reacting automatically with lethal force in response to the signatures of such threats. Many air defence weapon systems, like the US Army’s Patriot Missile system and the Israeli Iron Dome, have also been able to function for many years with various degrees of autonomy to defeat incoming artillery or missile attacks on ground forces.

While the above examples showcase limited uses of autonomous technologies, the newest defence systems are set to truly harness the latest innovations in artificial intelligence and represent a significant step towards the development of fully autonomous systems. One example is the ‘K-MAX’ variant helicopters developed by the US Army and Marines, which have already flown autonomously along pre-programmed routes in Afghanistan to deliver cargo to forward operating bases. The US Navy has also produced a combat aircraft, known as the X-47B, designed to autonomously take off and land on an aircraft carrier. The British Royal Air Force is developing the Taranis attack aircraft, which will be capable of supersonic autonomous flight. The US Defense Advanced Research Projects Agency (DARPA) has begun perfecting autonomous mid-air refuelling techniques. The US has even developed underwater systems which are capable of autonomously adjusting themselves to maintain their position in the water for months. Although these exceptional developments in autonomy have not yet included autonomous attack features, that is expected to change as these systems become further refined.

Militaries will likely pursue autonomous targeting capabilities, in part, to counter several perceived operational gaps and shortcomings with the current fleet of manned and remotely controlled systems. First, current unmanned systems that are operated remotely by a human pilot are susceptible to the ever-increasing communications jamming and cyber attack capabilities of adversaries. If the communications link between the human controller and the unmanned system is cut, the system becomes unable to complete its assigned mission. An AWS, on the other hand, will likely fare better in such electronically contested environments as it is not dependent on a tethered link to an operator. Second, remotely controlled systems are heavily dependent on large numbers of pilots and analysts. As the demand for unmanned systems continues to grow (and correspondingly, the volume of data generated by these systems exponentially expands),Footnote 12 these support requirements may increase to the point of becoming prohibitively burdensome for militaries. Autonomous systems will require fewer human observers as tasks are instead delegated to computerised systems.Footnote 13 Third, the pace of combat in the future is expected to become too fast for human operators. Manned and remotely piloted systems might simply prove too slow and ineffective against an enemy who possesses autonomous systems.Footnote 14 Rather than being put at a competitive disadvantage, more nations will likely seek this capability and develop AWS.Footnote 15 Given these operational realities, it is apparent that AWS will play a continuing and more expansive role in future combat.

2.2 Future Technological Possibilities

The forecast for continued improvements in autonomous capabilities seems optimistic, but predicting exactly how technology might be developed for future weapon systems is difficult. One safe assumption is that the processing power of future AWS’s on-board computers will be dramatically faster and more capable than anything presently appearing on an unmanned system. As computing capabilities improve, AWS will increasingly be embedded with advanced artificial intelligence applications. These programs will likely feature a branch of artificial intelligence known as general or strong artificial intelligence, which will enable the systems to independently react to complex problems.Footnote 16 Systems powered by this strong artificial intelligence will adapt and learn from their experiences and their environment.Footnote 17 In fact, some contend that this feature may ultimately help AWS ‘behave more ethically and far more cautiously on the battlefield than a human being’.Footnote 18

Another likely facet of AWS is that they may not resemble contemporary unmanned systems. Radical changes in form and shape may be possible because of future computers, which will not only be faster and more powerful but also tremendously more compact. Thus, AWS of the future can be expected to operate both at dramatically increased ranges and without human interaction for extended periods. For instance, the US is developing designs for an anti-submarine warfare vessel capable of hunting enemy submarines autonomously for up to 3 months.Footnote 19 Other AWS will be smaller, more expendable, and able to operate collaboratively as part of a swarm. In fact, swarm technology holds great promise for rapidly engaging and overwhelming an enemy.Footnote 20 Regardless of the exact form these technological advances yield, future AWS will possess almost unimaginable increases in capability.

3 Legal Implications of Autonomous Weapon Systems

If AWS are indeed to become a technological reality to be used in battle, it is vital to examine the relevant law that would be applied. Unquestionably, the law that would govern AWS is the law of armed conflict. The law of armed conflict has evolved over time due to changes in weaponry and tactics. Whenever a new weapon, such as an autonomous weapon, is developed and considered for fielding, two distinct aspects of the law need to be fully scrutinised: weapons law and targeting law.Footnote 21 The former concentrates on whether the weapon itself is lawful per se. The latter focuses strictly on the prohibited uses of the weapon system. Both aspects must be satisfied before a weapon can be sent into battle.

3.1 Weapons Law Rules

Two separate weapons law rules comprise the heart of any examination of the lawfulness of a weapon system per se. The first rule prohibits any weapon system that is indiscriminate by its very nature. Weapons are indiscriminate by nature when they cannot be aimed at a specific objective and would be as likely to strike civilians as they would combatants. Generally considered reflective of customary international law,Footnote 22 this rule is codified in Article 51(4)(b) of Additional Protocol I.Footnote 23 As a customary rule, all states, even those not a party to the Protocol, are bound to obey this prohibition against indiscriminate attack. The fact that the autonomous system, as opposed to a human controlled system, may make the final targeting decision is irrelevant under this rule. So long as the autonomous weapon can be supplied with sufficiently reliable and accurate data to enable it to be directed at a specific military target, the weapon system would not be indiscriminate by nature, and thus not unlawful per se.

The second rule, codified in Article 35(2) of Additional Protocol I and reflective of customary international law,Footnote 24 prohibits any weapon that causes unnecessary suffering or superfluous injury. The prohibition seeks to prevent inhumane or needless injuries to combatants. Warheads filled with glass are a classic example of an unlawful weapon under this rule. Such warheads unnecessarily complicate medical treatment and thus violate the prohibition. Except in the unlikely scenario where a state equips its AWS with such unlawful munitions, this rule will generally not impede the creation of AWS. While AWS can obviously only be armed with weapons and ammunition that do not cause unnecessary suffering or superfluous injury, autonomous features by themselves would not affect or violate this prohibition.

To ensure both aforementioned rules are satisfied, a state wishing to deploy a new weapon must perform a legal review. Codified in Article 36 of Additional Protocol I, the legal review requirement ensures that the weapon is not indiscriminate and that it would not cause unnecessary suffering or superfluous injury. The review further determines whether any other provision of the law of armed conflict might prohibit the use of the weapon. This legal review of weapons and weapon systems is generally considered a rule of customary international law. Thus, such reviews are required by all states, including those not party to the Protocol.Footnote 25 Additionally, if a weapon is substantially modified after fielding, then a further legal review is necessary. Clearly, prior to use, any AWS would require a legal review.

3.2 Targeting Law Requirements

Assuming a specific autonomous weapon satisfies the weapons law rules described above, it must still be examined under targeting law to ascertain whether the use of the weapon system might be prohibited. This analysis requires examination of three key requirements of the law of armed conflict: distinction, proportionality, and precautions in the attack. Even if a weapon satisfies the weapons law rules discussed above, it may still not be deployed if its use violates any one of these three targeting law tenets.

The first targeting law requirement is distinction. Distinction is recognised as a ‘cardinal’ principle of the law of armed conflict.Footnote 26 Reflective of customary international law, distinction requires a combatant to differentiate between combatants and civilians, as well as between military and civilian objects.Footnote 27 The rule is codified in Article 48 of Additional Protocol I with accompanying rules in Articles 51 and 52.Footnote 28 The principle is intended to protect civilians by directing military attacks against only military objectives,Footnote 29 and it unequivocally applies to AWS.

When analysing whether the use of an autonomous weapon complies with the principle of distinction, the surrounding context and environment are of critical importance. Circumstances may exist in which AWS would only need a low-level ability to distinguish in order to comply with the rule. Examples include conflicts against declared hostile forces where the fighting occurs in remote areas, such as underwater, deserts, or places like the Demilitarised Zone in Korea. In less clear-cut environments, the demands on AWS to distinguish civilians from legitimate military targets are much higher. For instance, on cluttered battlefields or in urban areas, AWS may need to be equipped with robust sensor packages and advanced recognition software. Even with such cutting-edge capabilities, there could be complex situations where AWS are simply unable to fulfil this requirement and, therefore, could not lawfully be used. In the end, AWS may only be lawfully used if the systems are able to reasonably distinguish between combatants and civilians (and between military objectives and civilian objects), given the specific battlefield circumstances ruling at the time.

The second targeting law requirement is proportionality, which requires combatants to examine whether the expected collateral damage from an attack would be excessive in relation to the anticipated military advantage. This complex principle is reflective of customary international law,Footnote 30 and is codified in both Article 51(5)(b) and Article 57(2)(a)(iii) of Additional Protocol I.Footnote 31 To comply with the principle, AWS would need to be able to estimate the expected amount of collateral civilian damage that might occur as a result of an attack. Modern militaries have developed a procedure, known as the Collateral Damage Estimation Methodology, for making these required estimates.Footnote 32 The methodology relies on objective and scientific criteria, and, as such, AWS should undoubtedly be able to conduct this quantitative analysis. The next step in the proportionality analysis, however, will be more complicated for AWS.

If any civilian casualties are likely to result from an attack, the proportionality rule next requires AWS to compare that amount of collateral harm against the military advantage anticipated to be gained from destroying the target. This step may present challenges for AWS, because the military advantage of a particular target is contextual. These determinations are generally made on a case-by-case basis. It is, however, conceivable that AWS could lawfully operate upon a framework of pre-programmed values. The military operator setting these values would, in essence, pre-determine what constitutes excessive collateral damage for a particular target. Given the separation of the operator from the AWS and the fact that the judgements would be made in advance, these values would invariably need to be set at the extremely conservative end to comply with the rule.Footnote 33 Military controllers might also help AWS comport with this principle by establishing other controls, such as geographic or time limits on the use of these systems. States will have to diligently sort through these thorny proportionality issues prior to using an autonomous weapon on the battlefield.

The third and final targeting law requirement is the obligation to take feasible precautions in the attack. Customary in nature and codified in Article 57 of Additional Protocol I,Footnote 34 these precautions apply to the use of AWS and may present challenges for states wanting to deploy them. One such challenge is the requirement to do everything feasible to choose a means of attack ‘with a view to avoiding, and in any event minimizing’ collateral damage.Footnote 35 Feasible, in this discussion, means ‘that which is practicable or practically possible, taking into account all circumstances prevailing at the time, including humanitarian and military considerations’.Footnote 36 Under some circumstances, this precaution may prohibit the use of AWS if instead a different system could feasibly perform the mission and better protect civilians without sacrificing military advantage. Conversely, there may be circumstances where the use of AWS would be required, such as when their use is feasible and would offer greater protection to civilians. Another challenge may be posed by the obligation to do everything feasible to verify that a target is a military objective.Footnote 37 In many cases, the advanced recognition capabilities of AWS would be sufficiently precise and reliable to fulfil this requirement. Yet at other times, depending on the situation and what is practically possible, a force may have to augment AWS with other sensors to help validate the target. Of course, one must always be mindful that the standard in dealing with these rules on feasibility, as with other rules of the law of armed conflict, is reasonableness. AWS should not be held to an absolute standard or required to do more than is expected of manned or human controlled systems.

4 Legal Lacunae in Autonomous Weapon Systems?

While the above section outlined the basic legal standards applicable to AWS and their use, this section delves more deeply into two areas of the law that may need further development as AWS become operational. The first topic is the subjective nature of targeting decisions and how those judgements might be made in the context of autonomous systems. The second is the issue of responsibility with respect to actions by AWS. Both subjects showcase the complex and unique legal implications of AWS.

4.1 Subjectivity in Targeting

Subjectivity plays a significant role in various facets of targeting law. When analysing proportionality, for instance, the military operator ordering the strike must subjectively decide the value of the target from a military advantage perspective. That person must also subjectively evaluate whether the expected collateral harm is excessive in relation to the anticipated military advantage to be gained. Similarly, with all of the required precautions in attack, there are inherently subjective judgements that must be made regarding whether all feasible precautions have indeed been taken.

It is doubtful that AWS will be able to make these subjective determinations themselves in the foreseeable future, even with the most optimistic projections for artificial intelligence advancements. Many opponents of AWS and some scholars contend that the systems are unlawful because they lack that ability to make subjective determinations.Footnote 38 This view is somewhat misguided, however, as it fails to fully appreciate how the AWS targeting process will actually occur. In autonomous attacks, the main targeting decisions remain subjective, and those value judgements will continue to be made exclusively by humans. However, the subjective choices may be made at an earlier stage of the targeting cycle than with the more traditional human controlled systems. Sometimes these judgement calls will be made before the AWS are even launched. This difference does not necessarily make AWS unlawful. On the contrary, it merely represents a new way of looking at the subjectivity requirements.

To comply with the law, humans will need to inject themselves at various points into the process and make the necessary subjective determinations. The first such point is when a military operator programs the autonomous weapon. Depending on the weapon system’s sophistication, this may transpire in the design phase, before launching the system, or perhaps even remotely during the mission. The controller must subjectively decide what numerical or other values to assign to targets as a guide for the autonomous system during its mission. In providing attack criteria or thresholds, the human operator is framing the environment within which the autonomous weapon will operate. The human operator is essentially providing the subjective answers in advance. Then, with that guidance embedded into its software, the autonomous system will be tasked with making objective calculations about how to perform on the battlefield.Footnote 39 For example, if an autonomous weapon objectively calculates that a potential strike will cause more collateral damage than the human controller has authorised, then the weapon would refrain from launching the attack and seek additional guidance or continue its mission elsewhere. In the end, it is the human, not the autonomous system, who makes the qualitative and subjective choices.
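This division of labour, in which the human supplies the subjective values in advance and the system performs only objective checks against them, can be sketched in simplified form. The following is a purely illustrative sketch and not a depiction of any real weapon system's software; every name, threshold, and data structure here is a hypothetical assumption introduced for this example.

```python
# Illustrative sketch only: pre-programmed human judgements (subjective) versus
# the machine's runtime checks against them (objective). All names and values
# are hypothetical assumptions, not features of any actual system.

from dataclasses import dataclass

@dataclass
class EngagementCriteria:
    """Subjective judgements supplied in advance by a human operator."""
    target_class: str
    max_collateral_estimate: int       # conservative ceiling set by the operator
    geographic_bounds: tuple           # (lat_min, lat_max, lon_min, lon_max)

def may_engage(criteria: EngagementCriteria,
               observed_class: str,
               collateral_estimate: int,
               position: tuple) -> bool:
    """Objective checks performed against the pre-set human values; any
    failure means the system refrains and seeks further guidance."""
    lat, lon = position
    lat_min, lat_max, lon_min, lon_max = criteria.geographic_bounds
    in_bounds = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return (observed_class == criteria.target_class
            and collateral_estimate <= criteria.max_collateral_estimate
            and in_bounds)

criteria = EngagementCriteria("armoured_vehicle", 0, (34.0, 34.5, 69.0, 69.5))
# Estimated collateral harm exceeds the authorised ceiling: refrain from attack.
print(may_engage(criteria, "armoured_vehicle", 2, (34.2, 69.2)))  # False
```

The point of the sketch is that nothing inside `may_engage` is a value judgement; the qualitative choices all reside in the operator-supplied `EngagementCriteria`.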

Another critical point in the process is when a military operator orders an autonomous weapon into battle. That choice is clearly a subjective one. The operator must personally decide whether the autonomous weapon can perform lawfully given the specific battlefield situation. To make such a judgement, the military controller must be thoroughly familiar with the system’s particular capabilities and must know what embedded values have been pre-programmed into it. Given what the operator knows about the battlefield environment and how the autonomous weapon is programmed to react in that environment, he or she must subjectively determine whether AWS are the correct weapons for the mission. Operators must be confident that the AWS will perform in compliance with the law of armed conflict before ordering the systems into such a situation.


These human subjective decisions will ultimately be examined for reasonableness. Timing plays a pivotal role in any measure of reasonableness with AWS. The longer the interval between the last human operator input and the autonomous strike itself, the greater the risk that changes on the battlefield will result in an unanticipated action. The greater the risk, the less reasonable the decision to deploy AWS into that battle generally becomes. Certainly, the risk could be lowered if the autonomous weapon is capable of regularly submitting data about the environment back to a human operator who could adjust the engagement criteria. This may not always be an option, however.Footnote 40 In the end, the human will be expected to make a reasonable decision about the appropriate amount of risk in using the autonomous systems.
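One way this timing concern could be operationalised is to treat pre-set engagement criteria as valid only within a bounded window, after which the system must seek fresh operator input. The following is an illustrative assumption only; the function name and the one-hour window are invented for this sketch, and the law prescribes no fixed interval — reasonableness remains context-dependent.

```python
# Hypothetical sketch: bounding the staleness of human guidance. The one-hour
# window is an arbitrary illustrative value, not a legal standard.

from datetime import datetime, timedelta

def guidance_is_current(last_human_input: datetime,
                        now: datetime,
                        max_staleness: timedelta = timedelta(hours=1)) -> bool:
    """Return True only while the operator's pre-set criteria remain within
    the permitted window; otherwise the system should hold and seek input."""
    return now - last_human_input <= max_staleness

now = datetime(2015, 6, 1, 12, 0)
print(guidance_is_current(datetime(2015, 6, 1, 11, 30), now))  # True
print(guidance_is_current(datetime(2015, 6, 1, 9, 0), now))    # False
```

A shorter window lowers the risk of unanticipated battlefield change but demands more frequent human contact, which, as noted above, may not always be available.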

The fact that AWS will not be making subjective decisions themselves should not affect the lawfulness of the systems. On the contrary, the legal requirements will be met by the subjective human input throughout the targeting process. Given that much of this input may occur early in the targeting cycle, however, a new way of looking at the subjective requirements of targeting law may be required.

4.2 Responsibility

In many ways, AWS represent a greater separation of humans from the battlefield. Therefore, significant questions arise when one looks to assess legal responsibility for battlefield conduct.Footnote 41 Opponents of AWS argue that the removal of humans from the final targeting decisions prevents the proper assignment of legal responsibility.Footnote 42 This position fails to take into account the full involvement of human commanders in the overarching targeting process. Contrary to the critics’ concerns, humans can be held legally responsible for the results of AWS attacks, even when they are not controlling the specific actions of the system.Footnote 43

Some responsibility issues are relatively straightforward. An individual who intentionally programs AWS to engage in actions that amount to a war crime would certainly be liable. Likewise, an individual would be responsible for using a system in an unlawful manner, such as deploying AWS that are incapable of distinguishing combatants from civilians into areas where civilians are expected to be located. Superior commanders of such individuals could also be held responsible if they knew or should have known about the deliberate programming or unlawful use of the system and did not try to stop the action.Footnote 44

Other AWS responsibility issues are more complex. Humans can be held responsible for the decisions they make in programming the weapon and in deploying it onto a battlefield in the prevailing circumstances. A human could also be held responsible for the underlying subjective targeting decisions that laid the foundation for the ultimate strike. These actions would be measured for reasonableness. As discussed above, there are two components of the subjective targeting decisions that must be evaluated for reasonableness. First, the decision to deploy the autonomous weapon under the circumstances must be reasonable. Second, the expected length of time from the launch of the system to the strike on the target must also be reasonable. Given these unique challenges, a new, broader way of looking at legal responsibility for the deployment of AWS may be called for. In the end, however, it is these human judgements, rather than the autonomous actions of pulling the trigger or pushing the button, that are the critical ones in any issue of responsibility.

5 Conclusion

Before they have even been developed for use on a battlefield, AWS have sparked intense controversy. In this emerging debate, the law has often been conflated with policy, morality, and ethical concerns. This chapter sought only to distil the legal issues surrounding AWS. If AWS development continues unabated, the law of armed conflict and its weapons and targeting rules will play a prominent role in shaping the systems. The unique application of autonomous targeting may influence the traditional interpretation of some of these rules, particularly in the areas of subjectivity and responsibility. As has been the case with other transformative developments in weaponry throughout history, the law will adapt and evolve as needed. In general, however, autonomy and the law of armed conflict are not inconsistent or incompatible.

In actuality, AWS and their use will likely be deemed lawful in many scenarios. When examining AWS under weapons law rules, there is little evidence to suggest that the systems would be unlawful per se. After all, a weapon that is autonomous is no more likely to cause unnecessary suffering or superfluous injury than any other type of weapon. Nor do the autonomous facets of a weapon prevent it from being directed at specific military targets. While targeting law requirements of distinction, proportionality, and precautions in attack do present more obstacles for the use of AWS, the challenges will not be insurmountable. There will undoubtedly be situations involving complex battlefields where AWS will be unable to comply with those principles and thus prohibited from being used. However, the frequency of such situations is likely to be diminished over time as technology improves.

Even as technology advances and autonomous systems gradually take greater control over actions, humans will continue to serve a critical function. Military operators are essential to making the subjective value judgements as required by the law. This role cannot be abdicated. While the subjective decision-making may occur earlier in the targeting process than has traditionally been the case, these judgements can nonetheless satisfy the legal requirements. Operators should further expect to be held responsible for the reasonableness of those decisions. Any human recklessness or criminal misuse of AWS will trigger the same war crimes accountability mechanisms that already exist under the law.

Autonomous weapons will profoundly affect how states fight wars. Weapons that can independently select and engage targets will prove invaluable in the frenetic pace of future warfare. Still, one should expect the moves towards fully autonomous weapons to be incremental and subtle. Systems will gradually become ever more autonomous, and humans will slowly begin to play a smaller role in the execution of actions. Ultimately, every shift towards autonomy on the battlefield will remain dependent on whether technology can indeed deliver autonomous systems discerning and sophisticated enough to comply with the law of armed conflict.