The humanitarian imperative for minimally-just AI in weapons
Author zone:
Jason Scholz and Jai Galliott
In:
Lethal autonomous weapons : re-examining the law and ethics of robotic warfare
Publication:
Oxford : Oxford University Press, 2021
Physical description:
p. 57-72
Languages:
English
Abstract:
For the use of force to be lawful and morally just, future autonomous systems must not commit humanitarian errors or acts of fratricide. To achieve this, the authors distinguish a novel preventative form of minimally-just autonomy using artificial intelligence (MinAI) to avert attacks on protected symbols, protected sites, and signals of surrender. MinAI compares favorably with the maximally-just forms proposed to date. The authors examine how fears of speculative artificial general intelligence have distracted resources from making current weapons more compliant with international humanitarian law, particularly Additional Protocol I of the Geneva Conventions and its Article 36.