Within the broader context of the problems raised by human–machine interaction in weaponry and targeting, this paper addresses the specific issue of the mens rea required to establish responsibility for the war crime of indiscriminate attacks, in the context of attacks carried out with semi-autonomous weapons or with the support of artificial intelligence (AI) in targeting decision-making. The author examines the difficulties created by the interaction between humans and machines and argues that an interpretation admitting risk-taking mental elements, such as dolus eventualis and recklessness, within the framework of the war crime of attacking civilians would better capture the criminality of a person who knowingly accepts the risk of killing civilians as part of an AI-supported attack in which hitting the civilian target is one of the possible outcomes. However, this construction can be employed only in specific circumstances: in most human–machine teaming scenarios, even these lowered mens rea requirements would not be met, and lower forms of intent such as dolus eventualis would still be insufficient for the ascription of criminal responsibility for indiscriminate attacks against civilians. This is because of the specific risks posed by the integration of autonomy into the targeting process and the resulting changes to the cognitive environment in which human agents operate, which significantly affect specific components of mens rea standards.