Georgetown Journal of International Law, Vol. 45, Issue 3, 2014, pp. 617-681
Although remote-controlled robots flying over the Middle East and Central Asia now dominate reports on new military technologies, robots capable of detecting, identifying, and killing enemies on their own are quietly but steadily moving from the theoretical to the practical. As these machines gain autonomy, the difficulty of assigning responsibility to humans and states for their actions grows enormously. These developments raise serious legal, ethical, and societal concerns. This Article focuses on the accountability of states, and the underlying responsibilities of humans, for autonomous weapons under international humanitarian law, also known as the law of armed conflict. After reviewing the evolution of autonomous weapon systems and the diminishing human involvement in these systems along a continuum of autonomy, this Article argues that the elusive search for individual culpability for the actions of autonomous weapons foreshadows fundamental problems in assigning responsibility to states for the actions of these machines. It further argues that the central legal requirement for determining accountability, especially for violations of the most important international legal obligations protecting the civilian population in armed conflict, is human judgment. Access to effective human judgment already appears to be emerging as the deciding factor in establishing practical restrictions and framing legal concerns with respect to the deployment of the most advanced autonomous weapons.