Lethal Autonomous Weapon Systems in 2050: The World of Tomorrow, Today?

#WR2050 6 / March 2021

Authors: Tom Watts (CHIP) and Ingvild Bode (University of Southern Denmark)

In both the popular consciousness and key international forums such as the United Nations Convention on Certain Conventional Weapons, Lethal Autonomous Weapon Systems (LAWS) are the subject of significant attention. LAWS come in many different shapes and sizes. In essence, they are weapon systems that can identify, select and engage targets without direct human intervention.

If ever developed, fully autonomous LAWS – or “Killer Robots”, as they are sometimes colloquially called – would transform the ethics, laws and conduct of war. Views on the desirability of this outcome differ significantly amongst the international community. On the one hand, some states, including the UK, US and Russia, have opposed pressure to regulate the development of such technologies, arguing that the greater use of artificial intelligence could help military commanders reduce civilian casualties. On the other, key civil society organisations such as the Campaign to Stop Killer Robots and global leaders in the field of robotics have called for a ban on fully autonomous LAWS. Speaking in 2017, the late Stephen Hawking argued that “[u]nless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many” [1].

By 2050, major advances are predicted in artificial intelligence (AI), the technologies which underpin LAWS, and according to some these will transform how we work, travel and navigate our daily lives. Professor Toby Walsh of the University of New South Wales has predicted that by 2050 autonomous systems will dominate not only the roads and healthcare but even the FIFA World Cup, as ‘[r]obots will have superior ball skills, including unfailing accuracy in passes and penalties’ [2]. What will this mean for the development of LAWS? Our new report, ‘Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons’, published in collaboration between Drone Wars UK and the Centre for War Studies at the University of Southern Denmark, suggests that ongoing technological developments will likely further reduce the control which human agents have over specific use-of-force decisions involving autonomous systems.

Cultural touchstones such as The Terminator and Battlestar Galactica present LAWS as existential threats to humanity. Bipedal ‘Killer Robots’ with human-like intelligence of the kind depicted in these works are unlikely to be developed by 2050, if ever. As technical experts have noted, the technical obstacles to the development of general forms of AI, which are comparable to human intelligence, are significantly greater than those facing narrow forms of AI, which are designed to enable machines to perform specific tasks [3]. Advances in narrow forms of AI are, however, likely to continue through 2050, further increasing the speeds at which LAWS can identify, select and engage targets, the complexity of the tasks they can perform, and the demands which their operation places on human agents. Over time, these developments will further reduce the control which human operators have over specific decisions to use force. In this way, even a world without Terminators and Cylons will present significant challenges to the already diminishing control which human agents exercise over machine decisions to project lethal force.

Our ‘Meaning-less human control’ report draws on a new data catalogue to examine the integration of autonomous features into 28 air defence systems used across the world since the 1970s, and analyses several high-profile incidents in which the use of these technologies failed, including the downing of Iran Air Flight 655 (1988), Malaysia Airlines Flight MH17 (2014) and Ukraine International Airlines Flight PS752 (2020), as well as two instances of fratricide involving the Patriot air defence system in the Second Gulf War (2003). Our central argument is that the integration of autonomy and automation into the critical functions of air defence systems has, under some conditions, already made human control over specific use-of-force decisions essentially meaningless. In practice, human operators have come to fulfil roles that are minimal yet impossibly complex: they lack a sufficient understanding of the decision-making processes of the weapon systems they operate, sufficient situational awareness, and the time to properly deliberate over their decisions. If current trends continue, these pressures will only have multiplied by 2050, creating new and potentially unforeseen challenges.

Read the full ‘Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons’ report, published in collaboration between Drone Wars UK and the Centre for War Studies at the University of Southern Denmark.

  1. Kharpal, A. (2017). Stephen Hawking says A.I. could be ‘worst event in the history of our civilization’ [Online]. CNBC. https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html
  2. Mackay, S. (2017). Artificial intelligence: 10 ways society will change by 2050 [Online]. Engineering Institute of Technology. https://www.eit.edu.au/artificial-intelligence-10-ways-society-will-change-by-2050/
  3. Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(1), 1–9.