By Clea Strydom
I. Introduction
States and corporations are utilising Artificial Intelligence (AI) technology to create more ‘intelligent’ weapon systems with autonomous functions. The international community is divided on whether this technological development is positive. Many have called for fully autonomous weapon systems to be banned,[1] while others feel that this reaction goes too far and stands in the way of ‘progressive’ development.[2] The prevalent terms used by NGOs, researchers, academics, States, and International Organizations to label weapons that can perform tasks autonomously are Autonomous Weapon Systems (AWS) and Lethal Autonomous Weapon Systems (LAWS).[3] However, there is to date no universally accepted definition of these labels,[4] nor any agreement on what constitutes such a weapon. The terms are misleading and ambiguous, and often conjure up images of rogue killer robots. This article postulates that, in order to have a rational debate about these weapon systems, the AWS and LAWS labels need to be discarded in favour of more accurate descriptors.