The Importance of Language: Autonomous Weapon Systems vs Weapon Systems With Autonomous Functions

By Clea Strydom 

I.  Introduction

States and corporations are utilising Artificial Intelligence (AI) technology to create more ‘intelligent’ weapon systems with autonomous functions. The international community is divided on whether this technological development is positive. Many have called for fully autonomous weapon systems to be banned,[1] while others feel that this reaction goes too far and stands in the way of ‘progressive’ development.[2] The prevalent terms used by NGOs, researchers, academics, states, and international organizations to label weapons that can perform tasks autonomously are Autonomous Weapon Systems (AWS) and Lethal Autonomous Weapon Systems (LAWS).[3] However, there is to date no universally accepted definition for these labels,[4] nor any agreement on what constitutes such a weapon. The terms are misleading and ambiguous, and often conjure up images of rogue killer robots. This article postulates that in order to have a rational debate about these weapon systems, the AWS and LAWS labels need to be discarded in favour of more accurate descriptors.

II.  Where Confusion Reigns

It is no surprise that machines that are becoming more intelligent, and therefore more autonomous, already play a key role in military operations. Due to their numerous applications and advantages, autonomous systems are also being weaponised: states and corporations are using Artificial Intelligence (AI) to develop weapon systems with autonomous functions. The substantial and growing investment in the research and development of weapon systems with ever greater autonomy in their functions and capabilities has given rise to a global debate about the legal and ethical implications of such systems. For militaries and policymakers, increased autonomy in weapon systems is both an opportunity and a challenge.[5]

The High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) in 2016, which has since met three times.[6] The prevalent terms used to label weapons that can perform tasks autonomously are Autonomous Weapon Systems (AWS) and Lethal Autonomous Weapon Systems (LAWS).[7] However, there is to date no universally accepted definition of such weapons,[8] and political motives and varying perspectives seep through the working definitions that have been formulated by various states and organizations. Common to all the working definitions is that these weapons are able to complete critical functions, i.e. identify and engage a target, by way of algorithms without human intervention.[9] This gives the impression that these weapons can act completely independently. However, autonomous weapons are never completely human-free, as the system needs to be programmed in order to operate within certain parameters.[10]

The aim of the CCW is to create law, and the GGE on LAWS meetings are geared towards developing a Protocol on weapons with autonomous functions. But there can be no Protocol without agreed-upon definitions. It is not just at the CCW meetings that confusion reigns and terms are used incorrectly; the problem is also evident in governmental and NGO reports, academic journal articles, at conferences, and of course in the media. All these sources influence each other and add to the confusion.[11]

The difference between the United States and United Kingdom definitions of these weapons is illustrative of the confusion around the different meanings attached to AWS/LAWS. When talking about autonomous weapons, we could simply be referring to a weapon that “could search for, select, and engage targets on its own”, as defined by the United States;[12] or we could be referring to “machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control”, as defined by the UK.[13] While on the face of it these two descriptions may appear to come down to the same thing, this is in fact not the case. If we go with the United States’ definition, then “autonomous” weapon systems already exist.[14] The United States’ definition is more in line with narrow-AI. A system equipped with narrow-AI cannot set its own goals; it can only adjust its actions to achieve the goal set by its human programmer, and it can only apply lessons learned from a task to the same task or very similar tasks. Such a system can perform more than one task, but it needs to be programmed and trained in each task individually.[15] Narrow-AI is already used widely, including in weapon systems. The UK definition, by contrast, suggests that machines will be equipped with general-AI. In theory, general-AI will be able to perform any cognitive task that a human being can, across an array of different tasks.[16] A system equipped with general-AI would be able to set its own goals, reflect on them, and change them if necessary.[17] In effect, it would be able to apply perspective and judgment to a situation.[18] Furthermore, the system would be able to take the lessons it learns from one task and apply them to a completely different task.[19] For the time being, general-AI is completely hypothetical.[20]

III.  AWS/LAWS vs Weapon Systems with Autonomous Functions

If we better understand the various components of the underlying technology of “autonomous weapons” and properly conceptualise them, it will be possible to have a rational conversation about the advantages, disadvantages, and future of weapon systems with autonomous functions. Much of the conversation surrounding these weapons uses emotive language, such as “lethal” or “killer”, and creates the impression that the weapon systems will be completely autonomous, with no human oversight.

The international community has also tried to determine and define levels of autonomy in order to decide how much autonomy is too much. Paul Scharre, however, postulates that it is more fruitful to think of autonomy as having three dimensions instead of trying to define different levels of autonomy. The three dimensions are: “the human–machine command-and-control relationship; the sophistication of the machine’s decision-making process; and the types of decisions or functions being made autonomous”.[21] A system’s autonomy does not depend on just one of these dimensions but runs along all three.[22] The first dimension, the relationship between humans and the machine, is the one most often used to determine autonomy; the second, the complexity of a system’s decision-making abilities, is a more technical approach and is relied upon to classify systems as automated or autonomous; the third is a functional approach and focuses on the nature of the tasks that the system can complete autonomously.[23] Scharre makes it clear that autonomy does not run along a single spectrum but along all three dimensions, and that instead of referring to a whole machine as autonomous, it is therefore more advantageous to consider the ‘autonomous functions’ of a system.[24]

Autonomy does not describe the whole system but is merely a general attribute that can be implemented in a system in numerous ways.[25] Williams points out that autonomy is not a fixed trait of a system, like colour, weight, or temperature, but is rather determined by the system’s interaction with its environment.[26] The incorrect use of the term is a hindrance to the adoption and proper regulation of weapons with autonomy as a capability, because it implies that the weapon systems act completely independently.[27] Stensson and Jansson believe that there is an inherent problem in describing technology as autonomous in general, i.e. using the term autonomous weapon system, because the term indicates attributes that technology cannot possess.[28] When we use terms usually connected to humans, such as intelligence and autonomy, we might expect the technology to operate in the same way humans do; it does not.[29] While the term ‘autonomy’ is not ideal in relation to non-human entities, we use it to describe the systems in question because attributing human-like characteristics to technology helps us understand it in the same way we understand ourselves.[30] In its Summer Study on Autonomy, the United States Department of Defense (DoD) held that speaking of an “autonomous system” is problematic because a machine cannot be truly autonomous; the term “autonomous capabilities” is therefore preferable.[31] Ideally, we should do away with the term “autonomous” in relation to technology completely, but given its prevalent use this is unlikely to happen. Williams therefore suggests that it is more practical and accurate to label the weapon systems in question “weapon systems with autonomous functions” rather than the more widely used but ambiguous labels AWS or LAWS.
“Weapon systems with autonomous functions” implies that instead of the whole machine being autonomous, it can perform certain functions with varying degrees of human interference, depending on various factors such as the system’s design, i.e. its “intelligence”, the external environmental conditions in which the system will be required to operate, the nature and complexity of the mission, and international law principles and policy standards.[32]

IV.  Conclusion

The interchangeable use of numerous terms, and emotive language such as “killer robots” or “lethal autonomous weapons”, is not conducive to rational debate. These terms inspire images of robot armies going rogue.[33] The term “weapon systems with autonomous functions” makes it clear that machines cannot have consciousness as humans do, but can merely display autonomous-like behaviour, depending on their programming and environment. Misunderstandings about what these weapon systems can and cannot do may result in over-cautious responses,[34] or in over-reliance and trust bias.


[1] See Human Rights Watch (HRW), Making the Case: The Dangers of Killer Robots and the Need for a Pre-emptive Ban (2016); for a full list of opponents of the weapons discussed, see Campaign to Stop Killer Robots (18 June 2019); 26 states have called for a ban: Campaign to Stop Killer Robots, Country Views on Killer Robots (2018); and 4501 AI/robotics researchers and 26215 others have signed an open letter calling for a ban on AI weapons: Future of Life Institute, Autonomous Weapons: An Open Letter from AI and Robotics Researchers (2015).

[2] See e.g. Ronald Arkin, Governing Lethal Behavior in Autonomous Robots (2009); and Michael N. Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, Harvard National Security Journal Features (4 December 2012).

[3] See High Contracting Parties of the Convention on Certain Conventional Weapons, Discussions on Emerging Technologies in the Area of LAWS.

[4] UNIDIR, The Weaponization of Increasingly Autonomous Technologies: Concerns, Characteristics and Definitional Approaches 19 (2017).

[5] Paul Scharre, The Opportunity and Challenge of Autonomous Systems, in Autonomous Systems: Issues for Defence Policymakers 4 (Andrew P. Williams and Paul D. Scharre eds., 2015).

[6] United Nations Office at Geneva, Background on Lethal Autonomous Weapons Systems in the CCW, Disarmament – The Convention on Certain Conventional Weapons (2019).

[7] See High Contracting Parties of the Convention on Certain Conventional Weapons, Discussions on Emerging Technologies in the Area of LAWS.

[8] UNIDIR, supra note 4, at 19.

[9] For various working definitions of AWS/LAWS, see UNIDIR, supra note 4, at 21-32; Robin Geiss, The International Law Dimension of Autonomous Weapon Systems 6 (2015).

[10] Schmitt, supra note 2, at 4.

[11] Merel A.C. Ekelhof, Complications of a Common Language: Why It Is So Hard to Talk about Autonomous Weapons, 22 Journal of Conflict & Security Law 311, 315 (2017).

[12] Paul Scharre, Army of None: Autonomous Weapons and the Future of War 96 (2018).

[13] British Ministry of Defence, Development, Concepts and Doctrine Centre, Unmanned Aircraft Systems: Joint Doctrine Publication 0-30.2, 43 (August 2017).

[14] See Scharre, supra note 12.

[15] Id. at 127.

[16] Chace, Artificial Intelligence and the Two Singularities 5 (2018); Scharre, supra note 12, at 231.

[17] Chace, supra note 16, at 5.

[18] Scharre, supra note 12, at 231.

[19] Chace, supra note 16, at 5.

[20] Scharre, supra note 12, at 231.

[21] Scharre, supra note 5, at 56.

[22] Id. at 9.

[23] Vincent Boulanin & Maaike Verbruggen, Mapping the Development of Autonomy in Weapon Systems 5-6 (2017).

[24] Scharre, supra note 5, at 11.

[25] Id. at 12.

[26] Andrew P. Williams, Defining Autonomy in Systems: Challenges and Solutions, in Autonomous Systems: Issues for Defence Policymakers 53 (Andrew P. Williams and Paul D. Scharre eds., 2015).

[27] United States Department of Defense, Summer Study on Autonomy: Report of the Defense Science Board Summer Study 1 (June 2016).

[28] Patrick Stensson & Anders Jansson, Autonomous Technology – Sources of Confusion: A Model for Explanation and Prediction of Conceptual Shifts, 57(3) Ergonomics 455, 455 (2014).

[29] Williams, supra note 26, at 54.

[30] Stensson & Jansson, supra note 28, at 458.

[31] US DoD, supra note 27, at 5.

[32] Williams, supra note 26, at 57.

[33] Id. at 28.

[34] Id. at 54.