Autonomous Weaponry: Are Killer Robots in Our Future?

February 14, 2020

Author: Coley Felt

As technology rapidly advances, the seemingly distant, futuristic societies depicted in science fiction are drawing closer to reality. The continued development of artificial intelligence (AI) promises to revolutionize human capacity across sectors including healthcare, agriculture, and scientific research. Some uses of AI, however, are deeply controversial. One of these is the development of autonomous weaponry.

Autonomous weaponry has been called the third revolution in warfare, following gunpowder and nuclear arms. Although the subject is debated by both states and the private sector worldwide, there is no agreed-upon definition of what autonomous weaponry really is. Moreover, autonomous weaponry takes various forms, chiefly semiautonomous weapon systems and fully autonomous weapon systems. In response, the United Nations has established a Group of Governmental Experts to tackle the issue at the state-to-state level, while many coalitions have formed to take action in the private sphere.

If global cooperation is not reached quickly, the development of such systems could put the international system, nations, and individuals at risk. International discussions must reach further agreement on the laws surrounding autonomous weaponry, and the private sector must work collaboratively with governments to ensure that these systems are properly developed and regulated.

What is an Autonomous Weapon System?

A US Department of Defense Directive defines a fully autonomous weapon system as “a weapon that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation” (Department of Defense Directive 3000.09, 2012). In contrast, semiautonomous weapon systems are intended to engage only individual targets or specific target groups that have been selected by a human operator.

Outlining the Debate

The main arguments in support of developing autonomous weaponry are military advantage and cost reduction. Autonomous weapons systems would create a military advantage because fewer warfighters would be needed, the battlefield could be expanded into previously inaccessible areas, and fewer casualties would occur because humans would be removed from dangerous missions (Etzioni, 2017).

Furthermore, robots do not have the mental and physiological constraints that humans possess. Their judgments would not be clouded by emotion, and they could process more sensory information without the distortion of preconceived notions (Etzioni, 2017). Figures from the US Department of Defense show that each soldier in Afghanistan costs the Pentagon about $850,000 per year. By contrast, a small rover equipped with weapons costs roughly $230,000 (Etzioni, 2017).

The primary arguments against the development of autonomous weaponry concern compliance with international humanitarian law, morality, and accountability. On compliance with international humanitarian law, the argument is that the principles of distinction, proportionality, and precaution necessitate a minimum level of human supervision and control (Evans, 2018). The self-learning and automation capabilities of Lethal Autonomous Weapons Systems (LAWS) leave the level of human control unclear, possibly violating these principles. Furthermore, Article 36 of the First Additional Protocol to the Geneva Conventions requires weapons to be legally reviewed before use (Evans, 2018). While there are standards for these reviews, the application of meaningful human control is a gray area that has fueled disagreement among states. Some believe that autonomous weapons could be developed within the confines of international law, while others believe this is impossible given the uncertainty surrounding human control.

Additionally, there is a moral concern about delegating life-or-death decisions to machines. The “Scientists’ Call to Ban Autonomous Lethal Robots,” issued in 2013 by AI and robotics experts from 37 countries, states that “decisions about the application of violent force must not be delegated to machines” (Etzioni, 2017). This concern is tied to the unclear line of accountability for LAWS: in the case of an error, it is difficult to know who is responsible. For traditional soldiers on the battlefield, there is a clear line of accountability running from who pulled the trigger to who gave the order. Autonomous weapons systems that act on their own, by contrast, make assigning accountability a challenge. Moreover, such systems make independent decisions and, unlike warfighters, cannot be threatened with punishment. The long chain of programmers, system verifiers, commanders, and others responsible for a weapon’s development and use makes it extremely difficult to hold any one person accountable (Sharkey, 2010).

Closely related to the accountability argument is the concept of the war algorithm. According to a Harvard Law School report, a war algorithm is “any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict” (Lewis, Blum, & Modirzadeh, 2016). Linking the war algorithm to accountability stretches the chain of responsibility from states and their armed forces to developers, operators, lawyers, industry bodies, and more. As these algorithms advance, they challenge some of the fundamental legal concepts that underpin the regulation of armed conflict. The development of fully autonomous weapons thus raises the possibility of “replacing human judgement with algorithmically-derived decisions” on the battlefield (Lewis et al., 2016).

Furthermore, there is always a risk of software vulnerabilities, which raises concern about a potential hacker takeover (Lewis et al., 2016). Not only would these machines’ decisions be the product of an algorithm, but ensuring the security and reliability of that algorithm presents a new set of issues. The capabilities of these weapons could be especially attractive to malicious actors, whether other states or individuals. These characteristics, unique to autonomous weapons systems, suggest the need for a specific legal category that governs both the design and the behavior of the machines.

UN Group of Governmental Experts on Lethal Autonomous Weapons Systems

The United Nations convened a Group of Governmental Experts to create a platform for international discussions of autonomous weaponry. Little progress has been made since talks started in 2017, as nations hold opposing views on the subject.

In 2016, the United Nations Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) (Evans, 2018). The first UN GGE on LAWS meeting took place in November 2017, with a mandate to “examine emerging technologies in the area of LAWS, in the context of the objectives and purposes of the CCW, and with a view toward identification of the rules and principles applicable to such weapon systems” (Evans, 2018). At that first meeting, the majority of states shared a common understanding of the importance of retaining human control over weapon systems, including control of both the selection and engagement of targets. The GGE affirmed that international humanitarian law applies fully to all weapons systems and that responsibility for deployment remains with states. Twenty-one states expressed their desire to preemptively ban the development of LAWS (Evans, 2018).

The next rounds of UN GGE on LAWS meetings took place in April and August of 2018. At the August meeting, a total of 26 states supported a ban on LAWS, while 12 states opposed even negotiating such a treaty (Evans, 2019). To further discussions, three main proposals were put forward for dealing with LAWS. The first, to “negotiate a legally-binding instrument” to address LAWS, was favored by the majority of states supporting either a ban or regulation (Evans, 2019). The second was to continue discussing current commitments under international law and to articulate best practices under international humanitarian law. The third was “a political declaration to formally express areas of consensus and elaborate guiding principles regarding human control and accountability” (Evans, 2019). At least ten states voiced support for this final proposal.

Another UN GGE on LAWS meeting was held in March 2019. At this meeting, delegates emphasized that without arms control and disarmament measures, lethal autonomous weapons systems could fuel a global arms race driven by both state and non-state actors (“UN GGE,” n.d.). There was also a deeper focus on the technological aspects of LAWS, including the risks posed by different kinds of datasets and the challenges that self-learning systems pose for systems reviews (“UN GGE,” n.d.). It was agreed that a weapon’s lawfulness must be determined by its intended use and that additional legal review systems would be necessary (“UN GGE,” n.d.). While progress was made on systems review requirements and consensus on the technological challenges, the GGE did not establish any legally binding treaties or rules. A subsequent meeting occurred in August 2019.

Major Military Powers and Autonomous Weaponry

With little legal progress made at the UN GGE meetings on LAWS, understanding the major powers’ stances can provide insight into future negotiations. China has not held a consistent position on the development of LAWS, while the US, UK, and Russia have consistently opposed a ban on LAWS development.

China has changed its position on the development of LAWS in recent years, possibly to its advantage. In 2016, the country stated that it “supports the development of a legally binding protocol on issues related to the use of LAWS… to fill the legal gap” (Kania, 2018). In 2018, however, China released a position paper that did not include support for such a ban, focusing instead on “full consideration of the applicability of general legal norms” to fully autonomous weapons systems (Kania, 2018). This shift suggests that China may be strategizing the development of its own autonomous weapons: by remaining ambiguous about international law, China can publicly oppose the development of LAWS while preserving the flexibility to develop such weapons within legal confines.

Russia, the United Kingdom, and the United States, on the other hand, all oppose a ban on the development of LAWS, arguing that such a ban is premature. Throughout the UN GGE meetings on LAWS, all three countries have asked to discuss the benefits of such weapons systems, arguing that a preemptive ban is unnecessary (Evans, 2019). Russia and the US have gone further, questioning the relevance of international humanitarian law to autonomous weapons systems and refusing even to negotiate a treaty. The UK and US have also argued that existing national weapons reviews are the best way to deal with LAWS, disregarding many other states’ desire for multilateral work or an additional legal requirement (Evans, 2019).

Other Discussions of Autonomous Weaponry

The private sector holds a critical position in this debate, as companies are often the ones that actually develop autonomous weapons systems. Various campaigns have been launched, primarily warning of the dangers that could result if laws are not established, and many scientists and experts have voiced their opinions, underscoring the importance of this international debate.

Beyond the UN GGE on LAWS, the debate around autonomous weaponry is also playing out across the private sector. The Campaign to Stop Killer Robots is the largest coalition of non-governmental organizations attempting to ban fully autonomous weaponry. Established in 2013, the campaign currently counts 93 member organizations spanning 53 countries (“Campaign to Stop Killer Robots,” n.d.). The campaign released a viral video titled Slaughterbots that portrays a future in which automated drones kill groups of innocent civilians on the orders of an unknown operator. While autonomous weapons have not yet reached this degree of advancement, the video serves as a warning of what could be developed absent a ban on LAWS. It was shown at the UN Convention on Certain Conventional Weapons in 2017 (Nield, n.d.).

Furthermore, an open letter calling for “a ban on offensive autonomous weapons beyond meaningful human control” was signed by Elon Musk, Stephen Hawking, and more than 3,000 AI and robotics experts in 2015 (Etzioni, 2017). The letter highlights that this technology requires no costly or hard-to-obtain raw materials, raising concern about how easily such weapons could be produced. Researchers also emphasize that LAWS are well suited to malicious tasks such as assassinations, subduing populations, destabilizing nations, and selectively killing specific groups (Busby, 2018).

In the United States, tension is high between Silicon Valley and the federal government over this controversial topic. The Pentagon’s Project Maven, for example, aims to invest billions of dollars in artificial intelligence research and development for the military. Google was originally contracted to work on the project, but an employee protest led the company to let the contract lapse rather than renew it. Many Google employees do not support fully autonomous weaponry and refused to be part of its possible development (Fryer-Biggs, 2018).

Implications

With such strongly opposed positions on autonomous weapons systems, reaching international agreement will not be easy. At the same time, artificial intelligence is advancing so quickly that scientists and researchers have raised their own concerns. The UN GGE on LAWS is currently caught in a time-constrained debate in which progress has been slowed by vastly different attitudes around the world. To safeguard not only the future of warfare but humanity as a whole, cooperation is necessary. The UN GGE on LAWS needs to accelerate its progress, which means some countries will have to compromise on their positions. Furthermore, the public sector must listen to the voices of the private sector to guarantee that, if these weapons are further developed, they are both safe and reliable. Collaboration between the private and public sectors, along with consensus from the UN GGE meetings, will determine whether killer robots are in our future.

Bibliography

Busby, M. (2018, April 9). Killer robots: pressure builds for ban as governments meet. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/apr/09/killer-robots-pressure-builds-for-ban-as-governments-meet

Campaign to Stop Killer Robots. (n.d.). Retrieved March 23, 2019, from https://www.stopkillerrobots.org/about/

US Department of Defense. (2012, November 21). Directive 3000.09: Autonomy in Weapon Systems. Retrieved from https://www.hsdl.org/?abstract&did=

Etzioni, A. (2017). Pros and Cons of Autonomous Weapons Systems. Military Review. Retrieved from https://www.academia.edu/37885763/Pros_and_Cons_of_Autonomous_Weapons_Systems

Evans, H. (2018, April 9). Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings. Retrieved March 23, 2019, from Lawfare website: https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings

Evans, H. (2019, March 7). Lethal Autonomous Weapons Systems: Recent Developments. Retrieved March 23, 2019, from Lawfare website: https://www.lawfareblog.com/lethal-autonomous-weapons-systems-recent-developments

Fryer-Biggs, Z. (2018, December 21). Inside the Pentagon’s Plan to Win Over Silicon Valley. Wired. Retrieved from https://www.wired.com/story/inside-the-pentagons-plan-to-win-over-silicon-valleys-ai-experts/

Kania, E. (2018, April 17). China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems. Retrieved June 11, 2019, from Lawfare website: https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems

Lewis, D. A., Blum, G., & Modirzadeh, N. (2016). War-Algorithm Accountability. Retrieved June 11, 2019, from HLS PILAC website: https://pilac.law.harvard.edu/war-algorithm-accountability-report

Nield, D. (n.d.). This Horrifying “Slaughterbot” Video Is The Best Warning Against Autonomous Weapons. Retrieved March 23, 2019, from ScienceAlert website: https://www.sciencealert.com/chilling-drone-video-shows-a-disturbing-vision-of-an-ai-controlled-future

Sharkey, N. (2010). Saying ‘No!’ to Lethal Autonomous Targeting. Journal of Military Ethics, 9(4), 369–383. https://doi.org/10.1080/15027570.2010.537903

UN GGE. (n.d.). Retrieved from Digital Watch Observatory website: https://dig.watch/processes/ungge

This publication was made possible in part by a grant from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author.