While the use of artificial intelligence (AI) and machine-learning algorithms in the context of armed conflicts has been the subject of scholarly and political debate for at least the past half-decade, discussions to date have focused on the possible development and deployment of lethal autonomous weapon systems. Going beyond this narrow perspective, the article draws attention to other military uses of AI that are conceivable or in fact already exist, for example for the purposes of detention, force protection, equipment maintenance, or reconnaissance. It critically examines these applications from a legal and ethical perspective, exposing some of the challenges inherent in the technology, such as algorithmic bias or limited predictability. On the basis of existing and emerging approaches to the regulation of ‘civilian’ AI, the article concludes by proposing a granular, tiered approach to the future regulation of military AI that proceeds from the criticality of each particular application.