Making autonomous weapons accountable: command responsibility for computer-guided lethal force in armed conflicts
Research handbook on remote warfare
Cheltenham; Northampton: E. Elgar, 2017
Some commentators view autonomous weapons systems (AWS) in armed conflict as a potential cure for defects in human perception and judgment. In contrast, AWS opponents warn of killer robots going rogue and urge a ban on the development and deployment of AWS. Proponents of a ban often also raise the specter of impunity, asserting that it will be impossible to hold a human accountable for the mistakes of a computer (a ‘machine’ or ‘agent’, in data scientists’ parlance). This chapter argues that a ban on AWS is unwise. Adaptations in current procedures for the deployment and use of weapons can ensure that any AWS used in the field complies with international humanitarian law (IHL). Solving the AWS accountability problem hinges on the doctrine of command responsibility, applied in a three-pronged approach that the chapter calls ‘dynamic diligence’.