Google’s artificial intelligence ethics won’t curb war by algorithm

On March 29, 2018, a Toyota Land Cruiser carrying five members of the Al Manthari family was travelling through the Yemeni province of Al Bayda, inland from the Gulf of Aden. The family were heading to the city of al-Sawma’ah to pick up a local elder to witness the sale of a plot of land. At two in the afternoon, a rocket from a US Predator drone hit the vehicle, killing three of its passengers. A fourth later died. One of the four men killed, Mohamed Saleh al Manthari, had three children aged between one and six. His father, Saleh al Manthari, says Mohamed was the family’s only breadwinner.
The US took responsibility for the strike, claiming the victims were terrorists. Yet Yemenis who knew the family claim otherwise. “This is not a case where we’re just taking the community’s word for it – you’ve had verification at every level,” says Jen Gibson, an attorney with legal organisation Reprieve, which represents the Al Manthari family. “You’ve got everyone up to the governor willing to vouch for the fact that these guys were civilians.” The US Central Command (CENTCOM) has in the past few weeks opened an investigation – a “credibility assessment” – into the circumstances of the strike, which lawyers describe as unusual.
The Al Mantharis’ lawyers worry their clients may have been killed on the basis of metadata, which is used to select targets. Such data is drawn from a web of intelligence sources, much of it harvested from mobile phones – including text messages, email, web browsing behaviour, location and patterns of behaviour. While the US army and the CIA are secretive about how they select targets – a process known as the kill chain – metadata plays a role. Big data analytics, business intelligence and artificial intelligence systems are then used to spot the correlations that supposedly identify the target. “We kill people based on metadata,” said Michael Hayden, former head of the CIA, in 2014.
Armies and secret services don’t do this work alone: they rely heavily on the research programmes of commercial companies, which in turn are keen to secure government business to recoup some of their research and development investments. As a result, companies that have not traditionally been associated with the military are becoming involved, Gibson says. “To date, most of the private actors that have been tied to the drone programme have been your traditional defence industry companies, your General Atomics, your Leidos, your typical kind of military contractors,” she says.
One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US military’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested against their company’s involvement; their peers at Amazon and Microsoft have made similar complaints, with Amazon employees calling on their employer not to sell its facial recognition tool Rekognition for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?
The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren’t involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force, one that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.
Gibson calls this a flawed rationale. Places like Yemen, she says, have become testbeds for a far more expansive programme of drone warfare, which is now being rolled out on a larger scale. “Strikes in Yemen have tripled under Trump, and we are not at present sure of the legal framework under which the programme operates,” Gibson claims. Several non-governmental organisations, among them Amnesty International and the American Civil Liberties Union, have accused the Trump administration of reducing the checks and balances on targeted killings abroad using drones.
According to The Bureau of Investigative Journalism, the first armed drones appeared about 18 years ago. Since then, the Bureau estimates, some 1,555 civilians have been killed during US drone strikes. The US government releases no official statistics about drone deaths.
In the case of the Al Mantharis, the killings happened where US forces are not formally at war, in what appears to have been a so-called “signature strike”. The identities of the people targeted in these strikes are often unknown, reports The New York Times, but attacks are deemed valid based on “certain predetermined criteria... a connection to a suspected terrorist phone number, a suspected Al Qaeda camp or the fact that a person is armed”.
In early June, Google CEO Sundar Pichai published a blog post outlining a new code of ethics for the company’s work in AI, following a campaign among company staff that saw 12 people resign and more than 3,100 sign an open letter.
Citing fears that military work would damage Google’s reputation, the employees told their leadership that “Google should not be in the business of war”. Google responded by promising that it would not renew the Project Maven contract when it expires next year.
Gibson claims that Project Maven has deep implications for the US government’s programme of targeted killings: “Right now, what they’re doing sounds very innocuous, very innocent: you’re just teaching computers how to identify objects on a screen. But that same technology then can be used as it’s developed to automate the selection of individuals for targeting, to eventually even fire the weapon. Helping the programme in any sort of way puts you in the so-called kill chain.”
That seemingly puts the programme in direct conflict with Google’s new AI ethics code, which states that the company will not design or deploy AI in “technologies that cause or are likely to cause overall harm” or in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. Significantly, Google will continue to work with the military on “cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue”.
But the guidelines offer room for manoeuvre. When does a software programme become a weapon, and how do Google’s principles relate to the terms of international law, Gibson asks.
Google’s guidelines promise that “where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” This raises the question of how Google plans to balance notional risks and benefits.
How, asks Kate Crawford at New York University’s AI Now Institute, does Google propose to implement its AI guidelines? Cathy O’Neil, a mathematician, data scientist and author of Weapons of Math Destruction, calls for independent AI auditors to keep development in check, and for government oversight to form part of this regulatory ecosystem. Joanna Bryson, a professor in the Department of Computer Science at the University of Bath, argues that we shouldn’t stop there. “For every AI product, we need to know, if something goes wrong, why it went wrong.”
The weevil in the woodwork is whether the work of Google and others will result in the development of autonomous weapons that identify their own targets and kill of their own accord. This vision, says Scharre, is actually a long way off, especially when it comes to the selection of targets. Still, the global race to develop more intelligent weapons has already begun.
All this raises a range of legal, practical and ethical issues, from the incompatibility of autonomous weapons with international humanitarian law to the idea that it is an “affront to human dignity to have a machine delegated with the decision to kill”, in the words of Noel Sharkey, emeritus professor of AI and robotics at the University of Sheffield and the chair of the International Committee for Robot Arms Control, which published an open letter against Google’s involvement in Project Maven, signed by hundreds of prominent public thinkers.
The role of AI in law enforcement, meanwhile, is far more advanced. In China, police have already caught several alleged criminals using face recognition systems deployed at music events. In the United States, the Department of Homeland Security is on the cusp of mass-deploying facial recognition AI, says techno-sociologist Zeynep Tufekci. Such technologies, however, are frequently inaccurate, with UK civil rights groups arguing that police facial recognition systems are wrong nine times out of ten.
Policing and modern warfare are two sides of the same coin, Gibson argues, with both relying on illusory algorithmic accuracy. “The idea that we’re going to be inaccurate at the domestic level, but accurate at the international level – in cultural contexts that we neither understand, nor have accurate intelligence about – slightly beggars belief.”