A soldier operates the remote-controlled Mark 8 Wheelbarrow counter-IED robot. The soldier is part of the EOD (Explosive Ordnance Disposal) and Search team based out of FOB (Forward Operating Base) Ouellette. Courtesy of the UK Ministry of Defense.

Online Exclusive 03/23/2015 Essay

Terminator Ethics: Should We Ban “Killer Robots”?

Lethal Autonomous Weapon Systems (LAWS), often called “killer robots,” are theoretically able to target and fire with neither human supervision nor interference. Their development corresponds to a widespread and inevitable trend of military robotization. Such systems potentially offer economic advantages, reducing costs and personnel; operational advantages, increasing the speed of decision-making and reducing dependence on communications; security advantages, replacing or assisting humans and minimizing the risks they face; and even humanitarian advantages, potentially being able to respect the laws of war better than humans.

LAWS have already generated great debate, launched by non-governmental organizations (NGOs)[1. Since 2009 with the International Committee for Robot Arms Control (ICRAC). In 2013, Human Rights Watch (HRW), Article 36 and PAX, along with a dozen other NGOs, launched the Campaign to Stop Killer Robots. See www.stopkillerrobots.org.] and the UN Special Rapporteur on extrajudicial, summary or arbitrary executions,[2. In 2010 Philip Alston drew attention to this topic, while his successor, Christof Heyns, called for a ban in 2013.] who demand a preventive ban. In May 2014, France organized and took part in the UN’s first multilateral conference devoted to the subject, an Expert Meeting at the Convention on Certain Conventional Weapons (CCW) in Geneva, in which 24 national delegations and many NGOs expressed their opinions on the matter. The next CCW Expert Meeting will take place April 13 to 17, 2015.

WHAT ARE WE TALKING ABOUT?

The debate has caused confusion due to three distinct difficulties. First, there is a semantic ambiguity. The terms employed are very diverse (“killer robots,” “lethal autonomous robots,” “robotic weapons systems,” “autonomous weapons systems,” “lethal autonomous weapons systems”), each with its own definition, making the debate harder to understand. The term Lethal Autonomous Weapons Systems, the one most commonly used today, has the advantage over Lethal Autonomous Robots (LAR) in that it does not depend on the ambiguous word “robot.” Coined in 1920 by the Czech author Karel Čapek to name machines that could accomplish tasks in place of humans, “robot” does not have a universally accepted definition. In a wider sense, a robot is a programmable machine, endowed with sensors and able to react to its environment.

With an aircraft’s autopilot or a drone, a human is still present, and so they are referred to as “robotized systems.” Hence, the label of “robotized military systems” (RMS), which is equally popular, refers to something different. LAWS are not RMS: the idea behind the former is precisely that humans are outside of the loop. The distinction is therefore based on their level of autonomy.

Furthermore, the adjective “lethal” is contingent. The International Committee of the Red Cross (ICRC) states that the lethality of a weapon depends on its context. Others highlight that adjectives like “lethal” or “killer” restrict the debate to systems that kill humans, whereas it should also include those that merely wound or cause material damage. However, it is questionable whether those who oppose LAWS out of principle, and independently of the consequences of their use, would change their views if the discussion instead concerned more-or-less-autonomous systems that used non-lethal munitions, such as rubber bullets (a realistic assumption, given that riot police could use such systems).

The second difficulty in the debate stems from its speculative nature. It involves evaluating the legality and legitimacy of non-existent weapons, the technological development of which cannot be foreseen. This harms the discussion by: (i) maintaining an ambiguity between present and future, between what exists and what is merely being contemplated, if not fantasized; (ii) allowing imaginations to run wild: the defenders of LAWS predict that the systems will be able to respect international humanitarian law, even to a higher degree than humans can, while their opponents predict the opposite, with neither side being able to prove anything; and (iii) encouraging the opponents to exploit the ambiguity of the terms, through which they are able to merge the debate with ones regarding existing weapons that they also oppose.

While in international circles NGOs adopt the same, more neutral terms that states use (LAR, LAWS), in public discourse they adopt the term “killer robots.”[3. However, they are by no means the only ones to use this expression, which was first introduced by the philosopher Robert Sparrow (“Killer Robots,” Journal of Applied Philosophy, 24:1, 2007, p. 62–77) and the political scientist Armin Krishnan (Killer Robots: Legality and Ethicality of Autonomous Weapons, Ashgate, 2009).] This is not simply due to the term’s sensationalism, but mostly because of its broader scope: it is not restricted to autonomous machines and so can also include existing weapons such as mines, missiles, and, above all, armed drones. This generates confusion among the public about the difference between the “evil” killer robots of science-fiction films and existing remote-controlled armed drones. This miscategorization, the Terminator Syndrome, propagates misinformation and should be resisted.

The third difficulty lies in the notion of autonomy. Abandoning the term “robot” (which has the problem of evoking images of science-fiction humanoids) in favor of “LAWS” does not resolve the definitional problem. The concept of “autonomy,” which it includes, is still unclear and needs further examination.

THE NOTION OF AUTONOMY

Firstly, automated systems, pre-programmed to accomplish specific tasks in a predefined and controlled environment, need to be distinguished from autonomous systems, which decide if and when to accomplish tasks in a changing and unpredictable environment.[4. UK Ministry of Defence, Development, Concepts and Doctrine Centre, Joint Doctrine Publication 0-01.1: UK Supplement to the NATO Terminology Database, September 2011, p. A2.] The former’s behavior is based on rules and is deterministic, and so predictable. The Phalanx system employed by the U.S. Navy since 1980, its land-based version Counter Rocket, Artillery, and Mortar, the Israeli Iron Dome, and sensor-fused weapons are more advanced than vending machines, but operate on the same model. They carry out a set action after a set signal, dependably and unquestioningly. Autonomous systems are more independent, enjoying a certain freedom of behavior, and so are less predictable.

These two categories are neither mutually exclusive nor homogenous, as there are levels of automation and autonomy. There is no absolute distinction between automation and autonomy, but rather a continuum between the two. Future systems could be multimodal, and so ‘hybrid’: automated for certain roles (targeting, firing) and autonomous for others (movement). The question at hand does not concern navigation, however, but rather targeting and firing.

More-or-less-autonomous weapons are often divided into three categories. One: semi-autonomous weapons (human-in-the-loop), where the decision to fire is made by a human, and whose lethal offensive and defensive use is seen as acceptable (for example, homing missiles, armed drones, intercontinental ballistic missiles, etc.). Two: supervised autonomous weapons (human-on-the-loop), which independently designate and engage targets while remaining under the supervision of a human who can interrupt their actions. Currently, only their defensive lethal use is seen as acceptable, and against material rather than directly human targets (for example, antimissile defense systems). And three: fully autonomous weapons (human-out-of-the-loop), which independently designate and engage targets without supervision, and whose use is currently seen as acceptable only when non-lethal and against material targets (electronic jamming systems). The main question that hypothetical LAWS pose is whether it could be acceptable to use such autonomy lethally and against human targets.

This conventional description simplifies matters and does not take into account the fact that autonomy does not consist of three levels, but rather is a continuum of many degrees. The move from “total” control to “partial” control to a lack of control is blurred, and the weapons are likely to change categories as they evolve. However, the “in/on/out of the loop” distinction illustrates that LAWS are not always autonomous, but can be hybrid systems that can switch from a remote-controlled mode to a supervised mode to an unsupervised mode. The latter two both delegate the decision to fire, while the last (LAWS-mode) does so without human control.

Many proponents of LAWS prefer to give up on all attempts at a definition, believing that this would avoid the current debate and protect their future interests. This is a miscalculation. On the contrary, LAWS must be restrictively defined to avoid them being grouped with existing technologies (armed drones, missiles or anti-missile defense systems).

LAWS can be defined as weapons systems which, once activated, are able to acquire and engage targets independently, meaning without human interference or supervision, adapting to a changing environment. The U.S. government, the UN Special Rapporteur, and Human Rights Watch (HRW) all use a more condensed version: “robotic weapon systems that, once activated, can select and engage targets without further intervention by a human operator.”[5. UN Doc A/HRC/23/47 (9 April 2013), para. 38. See also U.S. Department of Defense, “Autonomy in Weapon Systems”, Directive 3000.09 (21 November 2012); UK Ministry of Defense, “The UK Approach to Unmanned Aircraft Systems”, Joint Doctrine Note 2/11 (30 March 2011); HRW, Losing Humanity: The Case Against Killer Robots (2012), p. 2.]

THE FALSE PROBLEM OF “FULLY” AUTONOMOUS WEAPONS SYSTEMS

The specification “once activated” is important, as before activation there is necessarily human interference in designing, programming, and deploying the system. The applicability of the expression “fully autonomous” is therefore questionable, as all current and future systems will depend on some initial human interference. Only once the machines are designed and programmed automatically by other machines—a scenario out of a science-fiction film—will it be correct to use the term “fully autonomous.”

It is therefore incorrect to speak of a fully autonomous weapon, as certain manufacturers and states do to show off their systems. The British Ministry of Defense presented the prototype of the Taranis combat drone in this way, implying that it could choose and engage targets without human interference or supervision, something which is yet to be demonstrated. BAE Systems and the Royal Air Force likewise present the Brimstone missile,[6. A fire-and-forget air-to-surface missile, autonomous after launch.] manufactured by the European company MBDA, as fully autonomous although, like sensor-fused weapons, there is still human control over the targeting parameters and area choice. The Samsung SGR-A1, the famous immobile sentry deployed on the border between North and South Korea, is also referred to as “autonomous,” even though autonomy is only one of its settings. The Russian Ministry of Defense has similarly presented its mobile robotic complex, developed by Izhevsk Radio Plant, which protects ballistic missile installations and is supposedly able to target and fire without human interference or supervision, although this too is yet to be demonstrated.

The only two countries to have an official policy on LAWS, the United States and United Kingdom, have claimed they are not seeking fully autonomous systems: for the former, “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”;[7. U.S. Department of Defense, “Autonomy in Weapon Systems”, op. cit.] and for the latter, “let us be absolutely clear that the operation of weapons systems will always—always—be under human control.”[8. Lord Astor of Hever (Parliamentary Under Secretary of State, Defense; Conservative), House of Lords debate, 26 March 2013, Column 960.]

The Americans do not specify what counts as an “appropriate” level of human judgment, while the British consider that human control can take place at the programming stage, which implies that a system acting without human supervision can still be considered “under human control” to the extent that it was programmed by a human.

It is in no one’s interests to produce “totally” autonomous weapons, but this does not stop them from frightening the public. A poll conducted by Charli Carpenter of the University of Massachusetts concluded that a majority of Americans are “firmly opposed” to the use of totally autonomous lethal robots—among the most opposed being those in the military.[9. C. Carpenter, “US Public Opinion on Autonomous Weapons”, 2013, online.] The NGO PAX cited this study to illustrate that when members of the public become aware of such systems “the primary reaction is confusion, almost always followed by serious shock.” However, the NGO dropped the word “totally” and speaks instead of the respondents’ opposition to “autonomous robots.”[10. PAX, Deadly Decisions: 8 Objections to Killer Robots, 2014, p. 6.] Opponents want to exploit the unanimous opposition to (non-existent) total autonomy—which unites NGOs, governments, civilians, and even those in the military—to conflate it with all other forms of autonomy, and so sweep in existing weapon systems.

Conversely, one strategy states employ to eliminate the problem is to define LAWS restrictively as “fully autonomous lethal weapons systems,” and thus to recognize that such uncontrollable weapons are of little operational interest and will undoubtedly never be constructed.[11. The Japanese, for example, consider LAWS “as ‘fully’ lethal autonomous weapon systems, which once activated, can effectively select and engage a target without human intervention” and they “are not convinced of the need to develop ‘fully’ autonomous lethal weapon systems which [are] completely out of control of human intervention,” (statement by H. E. Ambassador Toshio Sano at the CCW Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13 May 2014).] NGOs have anticipated such a strategy: “one potential hurdle for the Campaign [to ban killer robots] is likely to be states that support the idea of a ban, but set the threshold for autonomy so high that it will not affect any of the robot systems they wish to deploy.”[12. N. Marsh, Defining the Scope of Autonomy, PRIO Policy Brief, February 2014, p. 3.]

NGOs have understood that the debate does not concern the legitimacy or legality of fully autonomous weapons, which will never be produced, but rather partially autonomous weapons and their necessary level of human control. In Geneva, states have not ceased to demand “a human in the loop” and “human control,” while NGOs have not ceased asking them what exactly that entails. Certain states, like Ireland, have specified that human control should not be merely symbolic, but “sufficient,” “adequate,” and “meaningful.” NGOs have therefore moved away from campaigning against fully autonomous weapons to focus on the notion of “meaningful human control.” The example of meaningless human control usually given is that of a person who presses “fire” each time a signal lights up, with no other information. To exercise meaningful human control, the operator should take into account information concerning the target, the context, and the probable effects of the strike.

States do not look on this development favorably, since it draws a wider range of technologies into the debate, including existing systems that all depend on a certain level of human control. However, the calls to avoid this debate are futile: the debate has already begun, and fearing it is unjustified. As with all of the concepts of just war theory (legitimate authority, just cause, right intention, last resort, probability of success, proportionality), the notion of meaningful human control is itself vague and debatable. It gives rise to diverging interpretations, with little chance of generating the consensus needed to support the preventive ban that NGOs are calling for.

THE MORAL DEBATE

Numerous NGOs and certain states such as Pakistan and Cuba are calling for a preventive prohibition of LAWS. They are motivated by deontological and consequentialist reasoning. On the deontological side, certain philosophers such as Peter Asaro and Robert Sparrow, most NGOs, and the Vatican all argue that delegating the decision to target and open fire to a machine violates human dignity, and that people have the “right not to be killed by a machine.” In support of their position they repeatedly cite the Martens Clause.[13. Preamble to the Second Hague Convention of 1899. Named after the Russian delegate who made this declaration at the conference, the clause states that “until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.”]

The UN Special Rapporteur likewise expressed a deontological approach when he wrote that even if it “can be proven that on average and in the aggregate they will save lives, the question has to be asked whether it is not inherently wrong to let autonomous machines decide who and when to kill. . . . The question here is whether the deployment of LARs against anyone, including enemy fighters, is in principle acceptable, because it entails non-human entities making the determination to use lethal force.”[14. UN Doc A/HRC/23/47 (9 April 2013), para. 92-93.]

Those who answer that delegating to a machine the decision to fire on a target is unacceptable in principle are begging the question. They do not define the “human dignity” they invoke, nor do they explain how exactly it is violated. The Martens Clause, for its part, is less a rule to be followed to the letter than a reminder that technologies not covered by any particular convention remain subject to other international norms. It certainly does not justify the prohibition of LAWS.

If the target is legal and legitimate, does the question of who kills it (a human or a machine) have any moral relevance? And is it the machine that kills, or the human who programmed it? Its autonomy is not a Kantian “autonomy of the will,” a capacity to follow one’s own set of rules, but rather a functional autonomy, which simply implies mastering basic processes (physical and mental), in order to achieve a set goal.

Furthermore, to claim, as the deontological opponents of LAWS do, that it is always worse to be killed by a machine than by a human, regardless of the consequences, can lead to absurdities. Sparrow’s deontological approach forces him to conclude that the bombings of Hiroshima and Nagasaki—which he does not justify—were more “human,” and so more respectful of their victims’ “human dignity,” than any strike by LAWS would be, for the simple reason that the bombers were piloted.[15. His answer to my question at the International Studies Association (ISA) 2014 Annual Congress, Toronto, 28 March 2014.]

More reasonable objections do not rest on obscure principles but on probable outcomes. Many researchers, NGOs, and also states think that LAWS will have negative consequences on affected communities, which will ultimately compromise any operation’s effectiveness, making them counterproductive for the user.

The first objection is that LAWS are incapable of “winning hearts and minds.” For example, the drone strikes in Pakistan—where the killing appears to be carried out by a machine, even though the drone is remote-controlled and it is therefore a human who pulls the trigger—have clearly caused particular indignation and feelings of injustice among the local population.

The second objection is that LAWS lower the threshold for entering into a conflict: by reducing the human costs, they encourage states to wage war. NGOs are already envisaging governments circumventing the democratic process (a vote in parliament) to send an army of robots to fight in the place of people.[16. For instance PAX, Deadly Decisions: 8 Objections to Killer Robots, 2014, p. 9.]

These fears both assume a process of dehumanization: the eventual replacement of humans by robots. This is revealed in phrases recited by NGOs, such as “first, you had human beings without machines. Then you had human beings with machines. And finally you have machines without human beings.”[17. John Pike (GlobalSecurity.org), quoted in F. Reed, “Robotic Warfare Drawing Nearer,” Washington Times, 9 February 2005, and underlined by PAX, Deadly Decisions, op. cit., p. 1.] Yet military robotization—which, moreover, implies a reassignment of personnel (fewer troops on the ground, more in programming) rather than a reduction—consists of integrating robots into human units. It does not involve replacing humans on all their missions, but rather assisting them on certain ones.

Furthermore, there is the objection that LAWS dilute responsibility. In response to this, while it will certainly be more complicated to establish responsibility, it will not be impossible. The state could be held responsible, having decided to deploy the weapons, but equally so could the inventor, the programmer, the contractor, or the commander (to the extent that, under certain conditions, the doctrine of command responsibility applies).

RESPECT FOR INTERNATIONAL HUMANITARIAN LAW

Finally, the most important consequentialist objection is that these arms would never be able to respect international humanitarian law (IHL), as believed by NGOs, many researchers, and several states (Pakistan, Austria, Egypt, Mexico).

According to the ICRC, “there is no doubt that the development and use of autonomous weapon systems in armed conflict is governed by international humanitarian law.”[18. ICRC, Report of the ICRC Expert Meeting on ‘Autonomous Weapon Systems’, 9 May 2014, p. 12.] States recognize this: those who participated in the first UN Expert Meeting in May 2014 acknowledged respect for IHL as an essential condition for the deployment of LAWS. Predictions diverge: certain states believe LAWS will be unable to meet this criterion, while others (Japan, Australia) underline the difficulty of adjudicating at this stage without knowing the weapons’ future capabilities. All equally insist on ex ante verification of the systems’ conformity to IHL before they are put into service, by virtue of Article 36 of Additional Protocol I to the Geneva Conventions.

Weapons that by their nature cannot discriminate, that cannot be directed only at military objectives, or that cause superfluous harm are illegal. The first of these risks raises the following question: Can LAWS be directed against only military targets, and can their effects be limited? A negative answer would mean that they violate the principle of precaution (Additional Protocol I, art. 57(2)).

Roboticists often exaggerate their ability to program IHL and convert legal rules into algorithms. Non-jurists often have a simplistic understanding of the rules, reducing them to univocal commands: “If it’s a civilian, do not fire”/“If it’s a combatant, fire.” Aside from the fact that it is becoming increasingly difficult in contemporary conflicts to distinguish civilians from combatants, it can be legal to fire at a civilian if they are directly participating in hostilities—a complex notion that gives rise to diverging interpretations—while it can be illegal to fire on a combatant if they are hors de combat, which is also hard to establish.[19. It can also be useful not to fire on enemy combatants for tactical reasons, such as to conceal one’s position.] The application of the principle of distinction—to know who or what to fire at—depends on the context. The application of the principle of proportionality is even more difficult since it involves comparing an action’s potentially excessive collateral damage to its anticipated military benefits. It requires a case-by-case strategic and military evaluation, which a machine simply could not comprehend.
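
To make the previous point concrete, here is a minimal, purely illustrative sketch in Python (the class, its fields, and the predicates are hypothetical, not drawn from any real targeting system): even a rule only slightly less naive than “if civilian, do not fire” must allow conduct and circumstances to override status, and establishing those contextual facts is precisely what is hard to automate.

```python
# Illustrative sketch only: the class and its fields are hypothetical, not a real
# targeting system. It shows why "if civilian, do not fire / if combatant, fire"
# is too simple: the lawful answer depends on context, not on status alone.

from dataclasses import dataclass

@dataclass
class Contact:
    status: str                   # "civilian" or "combatant" (itself hard to establish)
    directly_participating: bool  # civilians lose protection while directly participating in hostilities
    hors_de_combat: bool          # combatants who are wounded or surrender are protected

def naive_rule(contact: Contact) -> bool:
    """The univocal command imagined by non-jurists."""
    return contact.status == "combatant"

def contextual_rule(contact: Contact) -> bool:
    """Closer to IHL: status can be overridden by conduct and circumstances."""
    if contact.status == "civilian":
        return contact.directly_participating   # may then lawfully be engaged
    if contact.status == "combatant":
        return not contact.hors_de_combat       # protected once hors de combat
    return False                                # if status is unclear, do not fire

# The two rules disagree exactly where the legal judgment is hardest:
print(naive_rule(Contact("civilian", True, False)))        # False
print(contextual_rule(Contact("civilian", True, False)))   # True: direct participation
print(contextual_rule(Contact("combatant", False, True)))  # False: hors de combat
```

Even this “contextual” version only pushes the difficulty one level down: it presupposes that the facts it consumes have already been reliably established, which is the contested part.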

Nevertheless, humans themselves turn out to have a limited capacity to respect IHL and often violate it, causing endless doctrinal and judicial controversies. The argument that LAWS will never be able to respect IHL completely should be rejected, for this is to demand that they be infallible, rather than fallible only to the extent that humans are. One should instead require that the systems pass what George Lucas calls the “Arkin test”[20. Named after the American roboticist Ronald C. Arkin.]—an adaptation of the famous Turing test in artificial intelligence, in which the behavior of a machine could be indistinguishable from that of a human in a given context. A robot satisfies the legal and moral requirements—and can consequently be deployed—when it can be demonstrated that it respects the laws of war as well as or better than a human in similar circumstances.[21. See for example G. R. Lucas, “Automated Warfare”, Stanford Law & Policy Review, 25:2, 2014, p. 322, 326 and 336.] Systems must be required, for example, “to be able to recognize the wounded, not like God would do, but like a human being would.”[22. M. Sassòli, CCW Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 14 May 2014.] No one can say that LAWS will not be capable of this one day. One could already implement a test protocol in which the system has to identify and characterize behavior depicted in a video, for example, and then compare the results with those of humans.
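
As a hedged sketch of what such a test protocol might look like (the behavior labels, clips, and pass criterion below are illustrative assumptions, not a standardized benchmark), the comparison described above can be reduced to scoring the system and a human baseline against the same ground truth:

```python
# Illustrative sketch only: the labels, clips, and pass criterion are assumptions.
# An Arkin-test-style protocol scores a system's classifications of recorded
# behavior against the same ground truth as a human baseline.

def accuracy(judgments: list, ground_truth: list) -> float:
    """Fraction of clips classified correctly."""
    correct = sum(j == t for j, t in zip(judgments, ground_truth))
    return correct / len(ground_truth)

def passes_arkin_test(system_judgments, human_judgments, ground_truth) -> bool:
    """The system 'passes' when it matches the relevant classifications at least
    as well as the human baseline does in the same circumstances."""
    return accuracy(system_judgments, ground_truth) >= accuracy(human_judgments, ground_truth)

# Hypothetical labels for four video clips:
truth  = ["wounded", "combatant", "civilian", "wounded"]
human  = ["wounded", "combatant", "combatant", "wounded"]   # 3/4 correct
system = ["wounded", "combatant", "civilian", "combatant"]  # 3/4 correct
print(passes_arkin_test(system, human, truth))  # True: as well as the human baseline
```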

Potentially, if a system passes this test, not only can it be deployed but, by virtue of what Lucas (following Bradley Jay Strawser)[23. R. C. Arkin, Governing Lethal Behavior in Autonomous Robots, Chapman and Hall/CRC, 2009 and “The Case for Ethical Autonomy in Unmanned Systems”, Journal of Military Ethics, 9:4, 2010, p. 332.] calls “the principle of unnecessary risk,” there would even be a moral obligation to deploy it.[24. B. J. Strawser, “Moral Predators: The Duty to Employ Uninhabited Vehicles”, Journal of Military Ethics, 9:4, 2010, p. 342.] In the context of a legally and morally justifiable conflict we have the obligation to minimize the risks incurred by the combatants[25. G. R. Lucas, “Automated Warfare”, op. cit., p. 334.] and so to replace or assist them with machines, so long as those machines pass the Arkin test.

Whether LAWS will one day be capable of respecting IHL depends on whether conformity with IHL requires something inherent to human judgment. A distinctive feature that separates machines from humans is that they are devoid of emotions. Defenders of LAWS claim that the systems can therefore respect IHL better than humans: without a self-preservation instinct, they will never be driven to use excessive force to protect themselves, and without stress and emotions like fear, revenge, or hatred, they will commit fewer war crimes. Not fearing judicial proceedings, they will never have a reason to conceal information. LAWS could even, when present in a human team, encourage soldiers to respect IHL better. By recording the humans’ actions, the system could fulfill a monitoring role and conceivably even report soldiers for violations of IHL directly to their superiors.

Of course, the human commander remains subject to these emotions, and one cannot discount the possibility that they might abuse LAWS, for example, in an act of vengeance. More generally, the system’s behavior depends on its programming, and not all humans are well intentioned. Those opposed to LAWS envisage that a system could be specifically programmed to commit war crimes. In principle, human soldiers exercise compassion, are endowed with a moral sense, and have a natural inhibition toward killing,[26. This is at least the argument of Lieutenant Colonel D. Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society, E-Reads, 1995.] which may lead them to disobey orders. Machines, lacking these qualities, follow any order without question. The absence-of-emotion argument therefore cuts both ways: LAWS are deprived of human emotions that can both cause war crimes and prevent them.

The risk of such a weapon being used illegally or immorally, which exists for all weapons, is insufficient to prohibit them. The production of airplanes was not halted because they can be hijacked by terrorists. “The use that wicked men make of a thing,” said Hugo Grotius in 1625, “does not always hinder it from being just in itself. Pirates sail on the seas, and thieves wear swords, as well as others.”[27. H. Grotius, The Rights of War and Peace, Book 2, chap. 25, VIII, 4.]

AGAINST A PREVENTIVE BAN

Some believe that existing laws sufficiently regulate LAWS, while others are calling for a preventive ban. This would be an unusual but not unique measure: the 1995 ban on blinding lasers set a precedent. NGOs are demanding such a measure for all weapons systems operating without meaningful human control over individual attacks.

The precautionary nature of this prohibition undermines the parallel that these organizations try to draw with antipersonnel mines to show that a major mobilization of civil society can achieve a prohibition (the 1997 Ottawa Convention). The difference is evident: mines, which had killed and continue to kill millions of civilians, had demonstrated their illegality with regard to IHL (violating the principle of distinction), while LAWS are yet to be observed in operation and do not ex ante violate the principles of IHL. For this reason, states Marco Sassòli, “a prohibition cannot be based on IHL.”[28. M. Sassòli, CCW Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 14 May 2014.]

Those who recognize this thus invoke the principle of precaution. But this principle can be invoked equally well by the defenders of LAWS, who believe that these machines will be more capable than humans of respecting IHL, one of whose principles is precisely the precaution not to cause harm to civilians. The duty not to develop potentially dangerous technologies is counterbalanced by the duty to do so if it could reduce the impact of armed conflicts on one’s own forces and civilians. Potentially, therefore, it would not only be moral to use such weapons, it would be immoral not to. Given that at this stage it is impossible to confirm or refute these two hypotheses, or to predict the future developments that could clarify the decision, there is insufficient justification for a prohibition.

Furthermore, a preventive ban could even have perverse effects. On one hand, it might not be respected, and it is dangerous and counterproductive to construct a legal regime that is not respected. On the other hand, these technologies have dual uses, and a ban could prevent the useful and promising development of civilian applications. A wiser option is to install safeguards.

INSTALLING SAFEGUARDS

The first safeguard is the law itself. The opponents of LAWS who presume it will be impossible for the systems to respect IHL raise a false problem. In the same way that it is pointless to worry about totally autonomous weapons that will not exist as long as humans have no interest in producing them, it is pointless to fear weapons that are incapable of respecting IHL. If such weapons were created, they would simply never enter into service, both for legal reasons (Additional Protocol I, art. 36) and because it is not in the users’ interests to have a weapon that is incapable of targeting only military targets and that would therefore kill too many civilians.

Other safeguards could help minimize the risks linked to unpredictability. The first is to use LAWS only against certain military targets. IHL distinguishes between objects that are by their nature military (installations, vehicles, weapons systems, etc.) and those that become so by their “location, purpose or use” (Additional Protocol I, art. 52(2)). To prevent LAWS from having to make difficult decisions, such as whether or not a civilian object (e.g., an ambulance) has lost its protection and become a legitimate military target due to its location, purpose, or use, it is sufficient to restrict their use to the first type of military objective (“by their nature”). Machines do not need to be able to distinguish civilians from combatants to identify a tank or an anti-aircraft battery.

An objection to this proposition is that a “by nature” military objective can cease to be so due to its location: for example, if a tank or an anti-aircraft battery is in a school playground or a column of refugees passes into its proximity. Once again, the legality of the target is contextual. LAWS can be programmed to detect the objects, but can they evaluate the objects’ environments? The response to this is simply that it is sufficient to control the context in which the weapons are deployed.

Second, therefore, LAWS should only be used in certain contexts. Opponents picture the systems as land-based weapons, deployed in populated areas where it is hard to make the distinction between civilians and combatants. They therefore conclude that LAWS will not be able to distinguish between targets. In doing so, the opponents ignore the importance of the situations in which these systems are deployed. The systems are best suited to underwater, maritime, aerial, or space environments, where the risk of accidentally targeting civilians is low. In urban environments they are of very limited operational use, precisely because the risk of collateral damage compromises the objective of winning the hearts and minds of the population, which is crucial in counterinsurgency campaigns.

More autonomous and therefore more unpredictable weapons have already been removed from densely populated areas. The Special Weapons Observation Reconnaissance Detection Systems (SWORDS),[29. A land-based mobile robot, able to be equipped with diverse weaponry (machine guns, grenade launchers, incendiary weapons). It is not autonomous—firing is controlled by an operator. Its successor is the Modular Advanced Robotic System (MAARS).] deployed in Iraq in 2007, were finally withdrawn due to their dangerous behavior. Sensor-fused weapons are very rarely used, and the Brimstone missile had to be modified, on the orders of the British Minister of Defense, to conform to the rules of engagement in a complex environment such as Afghanistan: it was replaced by a laser-guided version with a human operator choosing the target.[30. N. Marsh, Defining the Scope of Autonomy, PRIO Policy Brief, February 2014, p. 3.]

It is a sophism to say that LAWS should be banned because they are not or will not be able to distinguish between a civilian and a combatant, if they are not deployed in contexts where they will have to make such a distinction. This supposed incapacity, which remains to be proven, is a sufficient reason not to deploy them in urban zones, but not to prohibit them outright, since not all battlefields contain civilians or civilian objects.[31. M. Schmitt, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to Critics”, Harvard National Security Journal, 2013, p. 11.]

Third, military targets can even be prioritized. IHL insists that targets make “an effective contribution to military action” and that their destruction offers “a definite military advantage” (Additional Protocol I, art. 52(2)). The two expressions have given rise to differing interpretations: a strict one supported by the ICRC and a loose one by the United States. Will LAWS develop and apply their own special interpretation, too? No, these subtleties only apply to limited cases such as dual-use objects, against which these systems should not be deployed in any event. If their use is limited to “by nature” military targets (a military base, fighter-bomber, etc.), it is hard to argue that their destruction does not offer a definite military advantage. It is furthermore possible to program autonomous prioritization: Lockheed Martin’s Low Cost Autonomous Attack System processes targets in a programmed priority order. For example, it can identify 9K33 surface-to-air missiles and T-72 tanks, but will only destroy the former.
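
A purely illustrative sketch of how a “by nature” whitelist and a programmed priority order might combine (the class names and priority values below are hypothetical and are not LOCAAS parameters):

```python
# Illustrative sketch only: the target classes and priority values are hypothetical,
# loosely echoing the programmed prioritization described above (engage surface-to-air
# missiles, recognize but do not engage tanks).

from typing import List, Optional

# Whitelist restricted to "by nature" military objectives, with a priority order
# (lower number = engaged first). Anything absent from the dictionary is never engaged.
ENGAGEMENT_PRIORITY = {
    "surface_to_air_missile": 1,
    # "tank": 2,  # recognizable, but deliberately left off the engagement list
}

def select_target(detected_objects: List[str]) -> Optional[str]:
    """Return the highest-priority detected object cleared for engagement, or None."""
    cleared = [obj for obj in detected_objects if obj in ENGAGEMENT_PRIORITY]
    if not cleared:
        return None  # nothing engageable: hold fire
    return min(cleared, key=lambda obj: ENGAGEMENT_PRIORITY[obj])

print(select_target(["tank", "surface_to_air_missile"]))  # surface_to_air_missile
print(select_target(["tank", "ambulance"]))               # None
```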

Fourth, “doubt” can also be programmed. Faced with an unforeseen event, LAWS could stop and consult their commanders, applying the rule “If in doubt, do not fire.” However, the enemy could exploit this to create unforeseen situations to paralyze the systems. In the current state of IHL, this would not necessarily constitute an act of perfidy.[32.M. Sassòli, CCW Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 14 May 2014.]
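
A minimal sketch of how such programmed “doubt” could work, assuming a hypothetical confidence score and threshold (both of which would themselves be human design decisions):

```python
# Illustrative sketch only: the confidence score, the threshold, and the consultation
# channel are assumptions. "Doubt" is modeled as confidence below a set threshold,
# triggering the rule "if in doubt, do not fire" and a referral to the commander.

CONFIDENCE_THRESHOLD = 0.95  # hypothetical value; choosing it is itself a human decision

def decide(classification: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return "hold fire and consult commander"  # unforeseen or ambiguous situation
    if classification == "military_objective_by_nature":
        return "engage"
    return "hold fire"

print(decide("military_objective_by_nature", 0.99))  # engage
print(decide("military_objective_by_nature", 0.60))  # hold fire and consult commander
print(decide("unknown_object", 0.99))                # hold fire
```

The exploit mentioned above is visible here: an adversary who can reliably push confidence below the threshold paralyzes the system, or at least floods its commander with consultations.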

Finally, humans can also retain the possibility of deactivating the firing function (veto power). This precaution conforms to rule 19 of customary IHL, to “do everything feasible to cancel or suspend an attack if it becomes apparent that the target is not a military objective or that the attack may be expected to cause incidental loss of civilian life” (ICRC). It remains relevant to speak of an “unsupervised” mode, since what is absent is continuous oversight, not an override function.
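
As a last illustrative sketch, under the assumption of a hypothetical engagement sequence, the veto power amounts to checking an abort signal before each irreversible step:

```python
# Illustrative sketch only: a hypothetical engagement sequence in which a human
# retains veto power. Each step checks whether an abort signal has been received,
# reflecting the duty to cancel or suspend an attack when necessary.

import threading

abort_signal = threading.Event()  # set by the human operator to veto the engagement

def engagement_sequence(steps):
    for step in steps:
        if abort_signal.is_set():
            print("Engagement cancelled by human veto before:", step)
            return
        print("Executing:", step)

engagement_sequence(["acquire target", "confirm objective", "fire"])  # runs to completion
abort_signal.set()
engagement_sequence(["acquire target", "confirm objective", "fire"])  # cancelled immediately
```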

The principle behind these safeguards is predictability, which may seem counterintuitive, since by definition LAWS take initiative and may have uncontrollable effects linked to their autonomy. It is precisely to prevent this unpredictability from becoming a problem that their autonomy will not be total. LAWS will only be deployed if and where their behavior is seen as predictable.

* This essay was first published in French as “Terminator Ethics: Faut-il Interdire les ‘Robots Tueurs’?”, Politique Etrangère, December 2014, p. 151-167. Translated with the help of Andreas Capstack. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. This version has been slightly altered for length and to conform to house style.

The views, opinions and positions expressed by the author in this article are his alone, and do not necessarily reflect the views, opinions or positions of the French Ministry of Foreign Affairs.