The Case for New Strategies for Autonomous Weapons Systems

We are at another tipping point in the history of military weapons; the United States risks joining the many nations that fell behind or became obsolete because they underestimated the application of emerging technologies in warfare. Autonomous weapons systems must be incorporated into national security strategies, even if only for defensive purposes. However, implementing autonomous weapons systems within existing strategies is easier said than done. Keeping humans involved in these systems' decision-making process is crucial to ensuring safety, accountability, and control. Still, a question worth raising: is making it an objective never to compromise democratic and legal norms really the best-suited approach? Have we not learned the damage that resistance to adaptation has caused in the past?

When we refer to the human-in-the-loop concept in discussing autonomous weapons systems, we are also discussing artificial intelligence: the software that allows the weapons system to be autonomous and make decisions in the first place. Payne outlined a particular challenge with artificial intelligence, the "control problem" (13): the situations in which these systems are most advantageous unfold at high speed and carry the risk of letting a computer program decide the fate of a situation. Anyone who has interacted with technology knows that literal interpretations and logical flaws can make for a wonky interaction. As Payne articulated in his article, "The automation of decision-making in response to fast-moving threats entails a risk of inadvertently initiating hostilities – particularly … if there is an automatic capacity for escalation in response to perceived threats" (13). The chaos and speed of the situations in which these systems may be used make it difficult, as Payne noted, to integrate humans into the process as a sanity check on the system and the decisions being made.

However, keeping humans as an authority within the decision-making process may not be possible, especially if doing so counteracts the benefits of using autonomous weapons. As Brooks described, artificially intelligent software is easier to control than humans and delivers more consistent decision-making against a given objective; these systems are also more reliable decision-makers and do not experience fatigue the way humans do. As Brooks put it, "computers will be far better than human beings at complying with international humanitarian law" (2), also noting that "Computers ... are excellent in crisis and combat situations. They don't get mad, they don't get scared, and they don't act out of sentimentality. They're exceptionally good at processing vast amounts of information in a short time and rapidly applying appropriate decision rules" (3). Ultimately, those not involved in the conflict, and even combatants who are not targets, are safer than if humans were at the helm making every decision. That kind of speed, analysis of multiple data points, and rigid adherence to the rule of law sounds like a weapons system that would provide a clear advantage.

Geis and Hailes observed this dynamic as well; they wrote:

"With several new technologies operating either at or near the speed of light, this decision loop is moving toward a point requiring much more rapid capabilities to observe and attribute incoming attacks. The nation-states that comprise our global security system are similarly chaotic and capable of rapidly tipping from one state to the next. The human system in which we must deter is complex and chaotic while the credibility of deterrence hinges on the capacity to accurately attribute such actions at ever-increasing speeds" (60).

Additionally, Marchant et al. discussed another critical element of the equation, the concept of "command responsibility": ensuring there is responsibility and accountability in the event of an issue or malfunction (296). Again, a particularly critical and sensitive component of autonomous weaponry, especially lethal systems, is ensuring checks and balances are in place. Without them, these weapons systems would be a danger to all United States citizens, since they could be used without repercussion. Command responsibility is therefore a vital part of the strategic implementation equation for United States strategy.

It does not require much of a stretch of the imagination to understand how quickly something like this could become a devastating national or global disaster. If we integrate and build autonomous weapons strategies around humans, including human limitations, we lose many of the systems' benefits. Yet if the United States fails to implement them strategically, effectively, and meaningfully, it risks a catastrophic disadvantage against nations with less concern for ethics and human rights and no qualms about letting their autonomous weapons systems off the leash.

So, where do we go from here? The A.I. software used in autonomous weapons has often been compared to nuclear arms as posing a similar strategic and ethical dilemma. It might be tempting to copy nuclear strategy and treat autonomous weapons as an extension of existing powers and liabilities, in order to minimize any compromise of democratic or legal norms. However, this would not suffice; the systems in question are far too dynamic and fast-evolving for the nuclear framework to fit them appropriately. Moreover, the emergence of nuclear weapons itself caused massive disruptions to existing approaches to strategy, as Payne observed: "[Nuclear weapons] produced a profound break with the past, reshaping the distribution of power, changing the character of warfare and greatly enhancing the destructive force available to states that possessed them. This required the development of new strategic thinking, new organisational structures and new equipment" (16).

Autonomous weapons are the nuclear weapons of the 21st century; they do not fit easily into existing strategic frameworks. Additionally, as previously mentioned, an effort to force autonomous weapons, something new and outside the norm, into existing norms as a compromise does not seem practical. Much also depends on the type of autonomous weapon in question: for some systems, permissive or highly permissive strategies would be unwise, while for others, restrictive strategies might prevent the U.S. from fully benefiting from autonomous weapons at all.

Let us look at what an autonomous weapons strategy might include. As Payne mentioned, nuclear weapons strategies were the product of extensive "debate and deliberation on how best to employ nuclear weapons to coerce or deter adversaries" (16). By the same reasoning, strategies for autonomous weapons also need a clearly defined intention. Though these systems can coerce and deter adversaries like nuclear weapons, they would just as often be employed in practical, on-the-ground weapons applications, something nuclear weapons are never used for (thank goodness!). Additionally, Payne lists examples of the strategic priorities identified with nuclear weapons: "the importance of retaining … second-strike capabilities via concealment and hardening; the advisability of using many bombs … to ensure delivery in the face of countermeasures; the tension between counterforce … and counter-value … in terms of what constituted the best deterrent; and the rational application of calibrated force in a crisis to deter an adversary from further escalation" (Payne 16). Because A.I. systems have been compared so often to nuclear weapons, this provides a clear example of why autonomous systems need their own "intense debate and deliberation" to determine the best strategies (Payne 16). For that reason, compromise with existing strategies should be approached with caution.

The dynamic nature of autonomous weapons systems merits new strategies, built on a framework of lessons learned from past emerging technologies. When nuclear weapons were an emerging technology, it was neither practical nor safe to force nuclear strategies into existing strategic frameworks. We are at the dawn of a new era, dancing with the beginnings of digital nuclear capabilities, and the same care and caution should be exercised now as was exercised then.


Brooks, Rosa. “In Defense of Killer Robots.” Foreign Policy, 18 May 2015, foreignpolicy.com/2015/05/18/in-defense-of-killer-robots.

Geis, John P., and Theodore C. Hailes. “Deterring Emergent Technologies.” Strategic Studies Quarterly, vol. 10, no. 3, 2016.

Marchant, Gary E., et al. “International Governance of Autonomous Military Robots.” Columbia Science and Technology Law Review, 2011.

Payne, Kenneth. “Artificial Intelligence: A Revolution in Strategic Affairs?” Survival, vol. 60, no. 5, 2018, pp. 7–32, doi:10.1080/00396338.2018.1518374.
