This contribution argues that autonomous weapon systems may offer advantages from the perspective of ensuring better respect for international humanitarian law (IHL). This may be the case if such systems are one day capable of perceiving the information necessary to comply with IHL, of applying IHL to that information, and if it can be ensured that they will not deviate from the ways in which humans have programmed them. In the author's view, targeting decisions do not require subjective value judgments that a machine would be unable to make. To ensure that IHL is respected in the use of autonomous weapon systems, agreement must be reached on how to interpret certain IHL rules properly when a machine executes autonomous attacks according to parameters established by human beings.