Challenges of Lethal Autonomous Weapon Systems
Although there is no international legal definition, lethal autonomous weapon systems (LAWS) can be understood as weapons that, once activated, select and engage targets without further human intervention: put crudely, weapons that can decide to kill humans on their own.
Despite the name, LAWS are defined not by platforms but by certain functions. These emerging functions are enabled by advances in sensors, processors and software capabilities such as machine learning, which allow machines to fulfil certain predefined tasks in dynamic environments. Although machines can perform some tasks faster and more accurately than humans, these techniques do not yield human-like situational understanding or intelligence.
Nevertheless, the shift of certain decisions from humans to machines in the targeting process leads to a new quality in the use of force. It comes with various military advantages but also substantial challenges.
On the advantages: in contrast to existing remotely piloted systems such as drones, weapons with autonomous targeting functions would not require a constant communication link. Such systems would not necessarily be less prone to hacking, but they would avoid technical latencies and slower human reaction times. In addition, they would allow operations in secluded areas, including undersea or in bunkers. Autonomous functions thus enable faster operations and new types of operations.
But these military advantages come at a price. From the perspective of international humanitarian law (IHL), autonomous weapon systems – like every other weapon – must distinguish between civilians and combatants and allow for the application of IHL principles, such as military necessity and the proportionality of an attack. Some argue that these legal decisions always require a human as the subject of law.[1] This does not necessarily forbid the application of autonomous targeting functions, but it considerably limits their permissible scope.
Another legal challenge is the question of responsibility in the event of erroneous machine actions.[2] The machine or algorithm cannot be held legally accountable; product liability might not cover all errors; and the commander or operator might not be able to understand the complex system and its possible interactions with the environment, despite extensive testing and training. Nevertheless, at least some form of strict liability for the use of dangerous objects could apply.
Even if the legal concerns can be resolved, ethical concerns would remain, in particular those regarding human dignity. A machine cannot value human life or understand what mortality means. If no human is sufficiently involved in the targeting process, this could break the moral link between action and effect and violate human dignity.[3] Depending on one's moral position, this is either one aspect to weigh against possible gains (utilitarianism) or grounds to categorically prohibit the use of lethal autonomous weapon systems (deontological ethics).[4]
From an operational security perspective, the use of LAWS bears risks because such systems are intended to react to unforeseen situations, making their behaviour inherently unpredictable. These risks could multiply if a LAWS interacts with another (possibly hostile) LAWS. The increasing speed of operations would make it difficult for humans to intervene to prevent errors or violations of IHL. On a strategic level, this emerging technology is fuelling an arms race, especially since the United States, China, Israel, South Korea, Russia and the United Kingdom are investing in LAWS, while other countries are, at a minimum, looking into applications of AI techniques to support warfare.
These inherent legal, ethical and security concerns show that human control in the use of force is needed. This could mean maintaining situational understanding and providing options for the intervention of human operators at various steps of the targeting cycle.[5] An arms control treaty, be it a ban or another form of regulation, is necessary to establish and strengthen human control.
The CCW and the EU in this Process
The international arms control debate on autonomous weapon systems started in 2013, with the first informal expert meeting under the Convention on Certain Conventional Weapons (CCW) taking place in 2014. To date, States Parties have not reached consensus on the necessity and scope of future regulation. The different positions illustrate the spectrum of views: these range from a comprehensive ban on development and use, through a politically binding acknowledgement of the need for human control in the use of force, to the assumption that no additional regulation is necessary because IHL is sufficient. The different positions are often linked to national capabilities and plans to develop and use LAWS.
This spectrum of political positions is also visible in Europe. Austria is calling for a ban, France and Germany propose a political declaration, the United Kingdom opposes any form of regulation, and others are, at a minimum, sceptical about a ban. This division is reflected in the lack of substantial EU statements in the CCW: no common position beyond noting the general applicability of IHL exists.
In September 2018, the European Parliament adopted a resolution demanding a ban on LAWS. A large majority across all parties urged Member States to regulate this technology in the CCW and beyond.[6] The European Parliament also initially called for an exclusion of LAWS from the European Defence Fund, but this was opposed by the Council and Commission. The current proposal for the Defence Fund, of June 2018, avoids explicit exclusions and merely reiterates the importance and applicability of IHL.[7] Although the potential funding for technologies leading to LAWS would be relatively small, and would have only a minor effect on the development of LAWS overall, this compromise sends a problematic and contradictory political message.
Conclusion and Recommendations
In March and August 2019, the CCW States Parties will discuss the issue of LAWS again. After very slow progress over the past few years, agreeing on some form of outcome is necessary. To establish consensus on the overall need for human control in the use of force, States Parties should adopt a regulatory document. Since a legally binding text is unlikely to find consensus, a politically binding acknowledgement is a feasible first step. To prevent discussions from stalling afterwards, it must include a call for further deliberations and reviews.
A unified European voice would increase the chances of adopting regulation to mitigate the risks of autonomous functions in the use of force. The Franco-German proposal could be one starting point for this discussion. In addition, the European Parliament could support this debate by using the momentum of its resolution on LAWS to keep up the pressure on the Commission and the Council. Regulation of LAWS is necessary and urgent. Every state, as well as the EU, should take up that responsibility.
[1] See International Committee of the Red Cross (2018), Further Consideration of the Human Element in the Use of Lethal Force; Aspects of Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, https://www.unog.ch/80256EDD006B8954/(httpAssets)/5216D20D2E98E7AAC12582720057E6FC/$file/2018_LAWS6b_ICRC1.pdf.
[2] See Human Rights Watch (2015), Mind the Gap – The Lack of Accountability for Killer Robots, https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots.
[3] See Asaro, Peter (2016), "Jus Nascendi, Robotic Weapons and the Martens Clause", in: Ryan Calo, Michael Froomkin & Ian Kerr (eds.), Robot Law, Edward Elgar Publishing, pp. 367-386.
[4] See International Panel on the Regulation of Autonomous Weapons (August 2018), Focus on Ethical Implications for a Regulation of LAWS, https://www.ipraw.org/wp-content/uploads/2018/08/2018-08-17_iPRAW_Focus-On-Report-4.pdf.
[5] See International Panel on the Regulation of Autonomous Weapons (March 2018), Focus on the Human-Machine Relation in LAWS, https://www.ipraw.org/wp-content/uploads/2018/03/2018-03-29_iPRAW_Focus-On-Report-3.pdf.
[6] See European Parliament resolution of 12 September 2018 on autonomous weapon systems (2018/2752(RSP)), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2018-0341+0+DOC+XML+V0//EN&language=EN.
[7] See European Commission (June 2018), Proposal for a Regulation of the European Parliament and of the Council establishing the European Defence Fund, https://ec.europa.eu/commission/sites/beta-political/files/budget-may2018-eu-defence-fund-regulation_en.pdf.
The opinions articulated above do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN's aim is to encourage debates that will help develop Europe's capacity to address pressing foreign, defence, and security challenges.