At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with international humanitarian law
Alan L. Schuller
Host item entries:
Harvard national security journal, Vol. 8, issue 2, 2017, p. 379-425
Lawyers and scientists have repeatedly expressed a need for practical, substantive guidance on developing autonomous weapon systems (AWS) consistent with the principles of international humanitarian law (IHL). Less proximate human control in the context of machine learning poses challenges for IHL compliance, since this technology carries the risk that subjective judgments on lethal decisions could be delegated to artificial intelligence (AI). Lawful employment of such technology depends on whether one can reasonably predict that the AI will comply with IHL in conditions of uncertainty. Guided by this principle, the article proposes clear, objective criteria for avoiding unlawful autonomy: the decision to kill may never be functionally delegated to a computer; AWS may be lawfully controlled through programming alone; IHL does not require temporally proximate human interaction with an AWS prior to lethal action; reasonable predictability is required only with respect to IHL compliance; and close attention should be paid to the limitations on both the authorities and the capabilities of AWS.