Online Exclusive Blog, April 25, 2015

Killer Robots: Toward the Loss of Humanity

This April, nations will join together at the United Nations in Geneva to hold formal talks on “lethal autonomous weapons systems,” also known as “killer robots,” under the auspices of the Convention on Certain Conventional Weapons. Lethal autonomous robotic weapons are systems that, once programmed and activated, can select targets without further human intervention. While such weapons do not yet exist, it appears that some militaries and police forces have begun to see these systems as their best option for waging war, ensuring national defense, and enforcing the law.

The United States and many other countries are working to develop killer robots and other autonomous weapons, and may succeed in the not-so-distant future. Prototypes already exist. The loss of effective human control in waging war, and thus the loss of humanity, is a real possibility.

How can globally agreed norms be created and strengthened across all areas of autonomous weapons production, use, and proliferation to safeguard future generations from the scourge of violence? This question is particularly pressing with regard to emerging technologies. Because of the many complex ethical, legal, security, and moral implications of these weapons, states, nonstate entities, researchers, and activists find themselves in two camps.

Discussions of killer robots are vigorous at the international level, as demonstrated by the reach of the Campaign to Stop Killer Robots, which includes 54 NGOs spanning more than 25 countries, and by the unprecedented speed with which the topic reached the international agenda at the United Nations. The Campaign calls for a preemptive ban on the development, production, and use of fully autonomous weapons. It advances the idea that decisions to kill should remain under “meaningful human control” in all weapons systems. This solution would keep humans “in the loop” of the critical functions of weapons systems. States such as Austria, Brazil, Egypt, France, Germany, Switzerland, South Africa, and many others are considering the idea as a framework for a new international treaty, or wish to study it further.

Meaningful Human Control, a concept coined by the British nongovernmental organization Article 36, has gained considerable traction in the discussions at the United Nations. Under Meaningful Human Control, decisions to kill rest on three pillars. First, information must be provided to the attacker at every level of a planned military action, with full knowledge of the target area and mission objectives. This is entirely consistent with the principle of precaution in International Humanitarian Law (IHL), which regulates the excesses committed during war and requires states to exercise precaution, particularly in distinguishing between civilians and combatants. Second, lethal action must follow deliberate human consideration. Third, there must be accountability, whereby those who evaluate the information for each attack can be held responsible. This is fully consistent with the principle of “attributability,” which is essential to international law. The Campaign to Stop Killer Robots argues that Meaningful Human Control must be adopted as the international standard.

Those who generally support the development of lethal autonomous weapons promote “ethical autonomy” as the solution to the ethical dilemmas associated with the decision to kill a human being. This proposal suggests that self-governing robotic systems can be programmed with principled artificial intelligence that will ensure they act in compliance with international law and other legal and normative obligations.

Supporters of “ethical autonomy” argue that lethal autonomous weapons technology would significantly reduce casualties on the battlefield, as well as atrocities committed by humans out of rage or revenge. They also contend that these new systems will likely be able to comply with the existing requirements of international law, especially IHL, by distinguishing between civilians and combatants, avoiding collateral damage, ensuring weapons do not cause indiscriminate harm, and avoiding superfluous injury. They further argue that robotic systems could improve precision and increase the “tempo” of war, that is, the speed at which defensive systems respond to rapidly incoming attacks. The ability of such machines to remain in constant surveillance mode would maximize military efficiency; they would also bring economic benefits by allowing states to field fewer or no troops on the ground.

Taken together, these seem to be convincing arguments. Several objections, however, are in order:

First, there has been no demonstration of ethical autonomy, and there is no scientific evidence that every machine will be able to comply fully with IHL. The arguments for ethical autonomy boil down to speculation about the future development of the technology. Autonomous systems are inherently unpredictable, and there can be no guarantee that they will improve precision.

Second, there is the ever-present danger of proliferation. Even if “ethical autonomy” can be implemented by those international actors concerned with reducing battlefield and civilian casualties, the weapons will inevitably be obtained by less scrupulous actors.

Third, we have no guarantees against malfunction caused by, for example, coding errors. Robots will have no capacity to recognize humanity’s treasures, signals of neutrality, or the sanctity of many places and spaces throughout the planet. Supporters of ethical autonomy say that robots will be better at targeting; yet through some defect, these systems could choose wrongful targets, mistaking, for instance, a small child pointing a toy for a kneeling soldier pointing a gun.

Given these dangers, supporters of Meaningful Human Control contend that new global norms on killer robots are in fact essential to the hopes for a more peaceful future for humanity. It is not yet clear, however, whether the governing structures at the United Nations are sufficient to tackle the current challenges. There are noticeable gaps between developments in the technology of warfare and the legal regimes regulating them. It is promising, therefore, that states are meeting on these issues at the United Nations in April 2015.

Discussions in both camps rightly examine whether lethal autonomous weapons will be able to comply with existing international law, that is, whether existing law on the use and limits of force, IHL, state responsibility, and human rights can be encoded in the weapons systems’ algorithms. Perhaps it can. Even that, however, will not be good enough: the profoundly complex judgments that arise from moral discernment are a human trait. Robots cannot be programmed with them.

States are at a critical juncture to forge a more humane future for humankind, one that ensures that the decision to kill is not relegated to weapons systems with no sense of morality. Moreover, even as the world is plagued by problems of a taller order, billions of dollars will be invested in new lethal weapons technology, with increasing degrees of autonomy in the realm of artificial intelligence and far-reaching implications for human welfare and world order. Stephen Hawking has warned that “successful Artificial Intelligence would be the biggest event in human history. Unfortunately, it might also be the last.”

Clearly, robotic systems are useful in many ways: reconnaissance, post-disaster management, demining and removal of unexploded ordnance, and surveillance of crimes against humanity. Without a doubt, the future of war will involve an amplified role for robots. However, the decision to kill human beings is fraught with moral and ethical dilemmas that are enshrined in “ius cogens” norms, that is, non-derogable obligations from which no state may depart, even in times of war. There are, additionally, implications for accountability, dignity, and justice. The decision to end life should remain firmly circumscribed within this peremptory parameter to protect humankind. Any deviation would represent a retrogression from long-accepted norms and mores of civilized nations. Robots should not be allowed to make life-and-death decisions. Those decisions should stay firmly within the human purview.

Denise Garcia is the Sadeleer Research Faculty and associate professor in the Department of Political Science and the International Affairs program at Northeastern University in Boston. She is a member of the International Committee for Robot Arms Control and the Academic Council on the United Nations System.