The UK government is actively funding the development of flying “killer robots” despite publicly stating it has no plans to develop them, a study claims.
Research into fully autonomous drone weapons by the campaign group Drone Wars UK revealed that the UK’s Defence and Security Accelerator (Dasa) is funding research into weapons able to kill without direct human input.
The report, titled Off the Leash: The Development of Autonomous Military Drones in the UK, highlighted the Taranis drone, which is capable of autonomously flying, plotting routes and locating targets.
The Taranis drone is the culmination of more than a decade’s work by BAE Systems and the Ministry of Defence that has so far cost over £200m. The year-long study also uncovered dozens of other similar research programmes being funded by the MoD.
“There is tangible evidence that the MoD, military contractors and universities in the UK are actively engaged in research and development of the underpinning technology with the aim of using it in military applications,” said Peter Burt, author of the report.
“We have already seen the development of drones in Britain which have advanced autonomous capabilities, such as the Taranis stealth drone developed by BAE Systems, and the development of a truly autonomous lethal drone in the foreseeable future is now a real possibility.
“The government should be supporting international initiatives to prevent the development and use of fully autonomous weapons, and should be investigating the enormous potential of artificial intelligence to identify potential conflict areas and prevent wars before they start.”
Current Ministry of Defence policy states the UK opposes the development of autonomous weapons systems, with the government stating it “does not possess fully autonomous weapons and has no intention of developing them.”
A spokesperson for the MoD said: “There is no intent within the MoD to develop weapon systems that operate entirely without human input. Our weapons will always be under human control as an absolute guarantee of oversight, authority and accountability.”
Artificial intelligence experts have previously warned that delegating life-or-death decisions to machines crosses a moral line, with hundreds of academics calling for a pre-emptive ban on lethal autonomous robotics in 2017 as part of the Campaign to Stop Killer Robots.
“It’s not the Terminator that experts in AI and robotics like myself are worried about but much simpler technologies currently under development, and only a few years away from deployment,” Toby Walsh, Scientia Professor of AI at UNSW Sydney, said at the time.
“Without a ban, there will be an arms race to develop increasingly capable autonomous weapons. These will be weapons of mass destruction. One programmer will be able to control a whole army.”