This article tackles the tricky legal issues associated with autonomy and automation in attack. Having clarified the meanings of these notions, the article assesses the implications of the rules of weapons law for such technologies. More challenging issues, however, are raised by the law of targeting, and in particular by the evaluative assessments required of attackers, for example in relation to the precautions in attack prescribed by Additional Protocol I. The article therefore addresses how these rules can sensibly be applied when machines undertake such decision-making. Human Rights Watch has called for a comprehensive ban on autonomous attack technologies, and the article assesses the appropriateness of such a proposal at the present stage of technological development. It then draws conclusions.