The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering increasing attention. Much of the argument revolves around whether such machines could uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly the generation of military objectives and adherence to a targeting process. This paper argues that we must look to the targeting process if we are to gain a fuller picture of the consequences of creating or fielding lethal autonomous robots. Once we examine how militaries actually create military objectives, and thus identify potential targets, we face an additional problem: the Strategic Robot Problem. The ability to create targeting lists using military doctrine and targeting processes is inherently strategic; handing this capability over to a machine undermines existing command and control structures and renders human involvement redundant. The Strategic Robot Problem therefore provides prudential and moral reasons for caution in the race for increased autonomy in war.