Inside the United Nations’ effort to regulate autonomous killer robots

Amandeep Gill has a difficult job, though he won’t admit it himself. As chair of the United Nations’ Convention on Conventional Weapons (CCW) meetings on lethal autonomous weapons, he has the task of shepherding 125 member states through discussions on the thorny technical and ethical issue of “killer robots” — military robots that could theoretically engage targets independently. It’s a subject that has attracted a glaring media spotlight, along with pressure to ban such machines outright from NGOs like the Campaign to Stop Killer Robots, which is backed by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman.
Gill has to corral national delegations — diplomats, lawyers, and military personnel — as well as academics, AI entrepreneurs, industry associations, humanitarian organizations, and NGOs in order for member states to try to reach a consensus on this critical security issue.
The subject of killer robots can spark heated emotions. The Future of Life Institute, a nonprofit that works to “mitigate existential risks facing humanity” such as artificial intelligence, launched its sensationalistic short film Slaughterbots at a side event hosted by the Campaign to Stop Killer Robots at the CCW’s meetings last November. The film, which depicts a dystopian near-future menaced by homicidal drones, immediately went viral.
Gill, a former disarmament ambassador for India, sought to quell the rising hysteria sparked by a vision of murderous drone armies. “Ladies and gentlemen, I have news for you,” Gill said, speaking to the press after the initial round of CCW meetings. “The robots are not taking over the world. Humans are still in charge.”
An engineer by training, Gill honed his negotiating chops during his participation in discussions on the Comprehensive Nuclear-Test-Ban Treaty (CTBT) and civil nuclear agreements between India and its partners; he was also a member of India’s Artificial Intelligence Task Force. Gill is slated to be the executive director at the UN’s newly created High-Level Panel on Digital Cooperation, which is co-chaired by Melinda Gates and Alibaba Group’s Jack Ma.
The CCW will meet for the third time for discussions on lethal autonomous weapons (LAWs) from August 27th through 31st, after which it will likely issue a report and decide on continuing discussions next year. The Verge spoke to Gill about Hollywood depictions of dangerous machines, weapons that already exist or are in development, and a potential ban on killer robots.
This interview has been edited for length and clarity.
I understand that the official definition of lethal autonomous weapons is still under discussion, but give us a sense of what we’re talking about when we say “killer robots.” Drones, planes, ships, tanks, computer systems?
The jury is still out on whether we have lethal autonomous weapons, as some people define them, out there yet. For others, yes, there are such weapons systems out there in labs and so on… To give you a concrete example, take the C-RAM (Counter-Rocket, Artillery, and Mortar) system that the US deployed in the Iraq theater some years back. This system responds in an autonomous fashion to incoming fire, but there is a degree of human control that is exercised. The discussion at our convention was around how meaningful that control is. If you had another system of response, would that be more respectful of IHL (international humanitarian law) or not? That was a useful way of visualizing some of the challenges with future systems.
So we’re not talking about the Terminator or tiny autonomous drones exploding on people’s heads.
I don’t think that these visualizations of Terminators or drones going berserk are very helpful in having an advanced conversation about intelligent autonomous systems. But we have to deal with what’s there — Hollywood and the rest of it. I think the best way would be thinking about the loss of human control. So the systems that we’re dealing with, whether it’s in the civilian space or in the military space, if they exhibit this aspect of autonomy, whereby human supervision becomes hard to implement in practice, we have a difficulty.
That difficulty might be a safety-related challenge with regard to autonomous vehicles or a hacking challenge in the civilian space — people hacking into autonomous vehicles or poisoning the data that’s used for training these systems — or, in the military, a loss of control over these weapons systems in the battle space by commanders that results in friendly fire or the accidental triggering of hostilities among states.
There are some governments and NGOs that would like to see a ban on lethal autonomous weapons. Do you see that as a possibility or a likelihood?
The Convention on Conventional Weapons provides a range of possibilities for controlling weapons use, either banning systems in advance or accepting their inevitability but proscribing their use in certain scenarios, or prescribing some ways of exchanging information or warning people on their use, etc. So banning LAWs is one of the possibilities among the options. But there could be yet another option. There are some states that are quite content with leaving this to national regulations, to industrial standards. So at this point in time, there is no consensus on any option.
As chair, I don’t have a view on what option states should take. I have to make sure that whatever option states decide on, the results of the discussion are able to support that option.
There is a degree of common understanding in the room that the notion of human accountability for the use of force cannot be dispensed with. So if that is so, what is the quality of the human-machine interface that we are looking for?
Unlike nuclear weapons, an issue you’ve also worked on, the tools to build autonomous weapons are relatively accessible. Does that pose a challenge to controlling this technology?
Any military system today uses a number of technologies that are available off the shelf. But the international community has found ways to regulate these systems, to control their proliferation, whether it is through technology export controls or treaties and conventions that have broader applicability or other ways of working with industry — such as in the area of nuclear security, for example — of managing the risks and unintended consequences.
So AI is perhaps not so different from these earlier examples. What is perhaps different is the speed and scale of change, and the difficulty in understanding the direction of deployment. That is why we need to have a conversation that is open to all stakeholders. If we set out to govern these through only one level, let’s say the international treaty-making level, ignoring what is done at the national level or by industry, then our chances of success would be not that great. That is my experience of the past couple of years. But if all these different levels move in sync, move in full cognizance of what the other levels are attempting, then we have a better chance of succeeding in managing some of the risks that are associated with AI.
What are your thoughts on the efforts of the Campaign to Stop Killer Robots? In 2014, Elon Musk said that “killer robots” would be here in five years, so that would be next year. Do you have any response?
No, I don’t want to comment on… There are a lot of predictions, a lot of assessments around, so I think it would be very brave of me to comment on any one of these predictions.
I respect your not wanting to sensationalize the subject, but I think there is this fear out there. I’m just wondering what your response to that fear is.
I think making policy out of fear is not a good idea. So as serious policy practitioners, we have to look at where the situation stands in terms of technology development and what is likely to happen. What is the context we are dealing with? Here in the Geneva discussions, our context is international humanitarian law, the laws of armed conflict, and other concerns related to the international security implications of the possible deployment of lethal autonomous weapons systems. So we have to keep that context in mind and deal with it in a rational, systematic manner that carries along all 125 states that are in the CCW. I don’t think being fearful or being paralyzed into inaction — or being cavalier about the risk either — is very helpful.
Do you have goals for this cycle of meetings in August?
Yes, indeed. One goal is to build on the consensus outcome of last year, whereby we shaped the agenda into four distinct sets of issues. We agreed on a set of understandings around the concerns, in particular the understanding that existing international humanitarian law continues to apply to weapons systems in whatever shape or form.
The four agenda items are: first, the characterization issue — how do you define lethal autonomous weapons systems? Second, what should be the nature of the human element in the use of force through such systems? What should be the human-machine interface when such systems are deployed or developed? Third, what are the various options for dealing with the international humanitarian law concerns and the international security-related concerns arising from the potential deployment of such weapons systems?
When I say “options,” I mean whether it should be a legally binding instrument, another protocol to the convention, or whether it should be a political declaration, a politically binding set of rules and principles, or whether it should center on the applicability of the existing rules, such as weapons reviews.
The fourth point is about technology review. In this field, more than any other today, technology is evolving very rapidly. So you want your policy responses to be tech-neutral. They should not have to be fundamentally revised when technology changes. At the same time, you want to make sure that the implementation stays in step with technical developments. … In the August meeting, it is my hope that we come up at the end of the meeting with a good report that captures some building blocks in these four areas.