Cordeschi and Tamburrini (2005) and Cordeschi (2013) lay out a view of autonomous weapons that situates these systems within their historical origins in cybernetics. Viewing autonomous weapons from this perspective is valuable not only as a historical exercise but also because it reveals deeper notions about what autonomous weapons are, why we may find them objectionable or undesirable, and how we might "tame" them through engineering. In particular, Cordeschi (2013) develops the arguments of Wiener (1960) on the reliability of autonomous systems and the potential dangers arising from their unreliability. He also extends these arguments to human-machine interactions and their inherent unreliability. In response to calls from academics, including myself, for a ban on autonomous weapons, Cordeschi further explored how the precautionary principle might be applied. While he found the precautionary principle wanting, and a ban on autonomous weapons unworkable, his analysis of these questions reveals why some might share these conclusions. In this paper, I review his analysis, challenge some of the assumptions made by Wiener and the early cyberneticians regarding teleology and epistemology, and offer a revised view, drawn from the insights of second-order cybernetics, that explores both the risks of autonomous weapons and the practical value of banning them. It is a view compatible with Cordeschi's (2002) own account of cybernetic history, and one of which I wish I had had the opportunity to try to convince him.
Keywords: Autonomous weapons, History of cybernetics and AI, Machine ethics, Precautionary principle, Reliability, Second-order cybernetics