This is a call for the prohibition of autonomous lethal targeting by free-ranging robots. This article first points out the three main international humanitarian law (IHL)/ethical issues with armed autonomous robots and then moves on to discuss a major stumbling block to their evitability: misunderstandings about the limitations of robotic systems and artificial intelligence. This is partly due to a mythical narrative from science fiction and the media, but the real danger lies in the language used by military researchers and others to describe robots and what they can do. The article examines some of the anthropomorphic ways in which robots have been discussed by the military and then provides a robotics case study in which the language used obfuscates the IHL issues. Finally, the article looks at problems with some of the current legal instruments and suggests a way forward to prohibition.