Balancing the Lopsided Debate on Autonomous Weapon Systems

The question of whether new international rules should be developed to prohibit or restrict the use of autonomous weapon systems has preoccupied governments, academics and arms-control proponents for the better part of a decade. Many civil-society groups claim to see growing momentum in support of a ban. Yet broad agreement, let alone consensus, about the way ahead remains elusive.


In some respects, the discussion that’s occurring within the UN Group of Governmental Experts (GGE) on lethal autonomous weapons systems differs from any arms-control negotiations that have taken place before. In other respects, it’s a case of déjà vu.

To begin with, disagreements about the humanitarian risk–benefit balance of military technology are nothing new. Chemical weapons and cluster munitions provide the clearest examples of such controversies.

Chemical weapons have come to be regarded as inhumane, mainly because of the unacceptable suffering they can cause to combatants. But the argument has also been made that they’re more humane than the alternatives: some have described the relatively low ratio of deaths and permanent injuries resulting from chemical warfare as an ‘index of its humaneness’.

Cluster munitions, meanwhile, have been subjected to regulation because of the harm they can inflict on civilians and civilian infrastructure. Yet many have claimed that these weapons are particularly efficient against area targets, and that banning them is therefore counter-humanitarian because it leads to ‘more suffering and less discrimination’.

Autonomous weapon systems have triggered a similar debate: each side claims to be guided by humanitarian considerations. But the debate remains lopsided.

The autonomous weapons debate is unique at least in part because its subject matter lacks proper delimitation. Existing arms-control agreements deal with specific types of weapons or weapon systems, defined by their effects or other technical criteria. The GGE, in contrast, is tasked with considering functions and technologies that might be present in any weapon system. Unsurprisingly, then, it has proven difficult to agree on the kinds of systems that the group’s work should address.

Some set the threshold quite high, envisaging a futuristic system that ‘can learn autonomously, [and] expand its functions and capabilities in a way exceeding human expectations’. Others consider autonomy to be a matter of degree rather than of kind, with the functions of different weapon systems falling along a spectrum of autonomy. On that view, autonomous weapons include systems that have been in operation for decades, such as air-defence systems (Iron Dome), fire-and-forget missiles (AMRAAM) and loitering munitions (Harop).

All of this has made it harder to pin down the object of the discussion. The GGE so far hasn’t made much headway on clarifying the amorphous concept. Indeed, rather than treat autonomous weapon systems as a category of weapons, the group’s recent reports refer circuitously to ‘weapons systems based on emerging technologies in the area of lethal autonomous weapons systems’. No wonder participants in the debate keep talking past each other.

Because it remains uncertain what autonomous weapon systems actually are, discussion of their adverse effects rests largely on hypotheses. The regulation of most other weapons has been achieved in large part because their humanitarian harm was demonstrable or clearly predictable. This is true even of blinding laser weapons, whose pre-emptive prohibition is often touted as a model for autonomous weapon systems: early evidence of the battlefield effects of laser devices enabled reliable predictions about the humanitarian consequences of their wide-scale use.

When autonomous weapon systems are treated as a yet-to-exist category, it’s only possible to talk about potential adverse humanitarian consequences—in other words, humanitarian risks. The possible benefits of autonomous weapon systems are likewise uncertain. However, the limited autonomous functionality already present in existing systems allows some generalisations and projections to be made.

The range of risks has been discussed in detail and explicitly referenced in the consensus-based GGE reports. Such risks include harm to civilians and combatants in contravention of international humanitarian law, a lowering of the threshold for use of force, and vulnerability to hacking and interference.

Potential benefits of autonomous functions—for example, increased accuracy in some contexts or autonomous self-destruction, both to reduce the risk of indiscriminate effects—barely find their way into the GGE reports. The closest the most recent report gets to this issue is a suggestion that consideration be given to ‘the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with … applicable international legal obligations’. This vague language has been used despite some governments highlighting a range of military applications of autonomy that further humanitarian outcomes, and others noting that autonomy helps to overcome many operational and economic challenges associated with manned weapon systems.

The issue has become politicised and ideological: many see a discussion of benefits in this context as a way to legitimise autonomous weapon systems, thus getting in the way of a ban.

We do not wish to suggest that the risks of autonomous technology be disregarded. Quite the opposite: thorough identification and careful assessment of the risks associated with autonomous weapon systems remain crucial. However, rejecting the notion that there might also be humanitarian benefits to their use, or refusing to discuss them, is highly problematic and likely to jeopardise the prospects of finding a meaningful resolution to the debate.

Reasonable regulation cannot be devised by focusing on risks or benefits alone; some form of balancing must take place. Indeed, humanitarian benefits might sometimes be so significant as to make the use of an autonomous weapon system not only permissible, but also legally or ethically obligatory.