Guarding against the threat from IoT killer drones

[Image: hovering drone with camera and propeller blades]




IoT is being weaponized. The same sensors, networks and real-time data analysis used to monitor classrooms can morph into weapons for targeted killing. How do such malicious drones operate, and what can be done to protect against their airborne threat?

Background

Here are three data points on weaponized drones.
  1. The recent assassination attempt on the President of Venezuela with drones. “Aug 4, 2018. CARACAS, Venezuela — A drone attack caused pandemonium at a military ceremony where President Nicolás Maduro of Venezuela was speaking on Saturday, sending National Guard troops scurrying in what administration officials called an assassination attempt.”
  2. The use of drones to shoot down incendiary kites in the Israeli-Palestinian conflict. “IDF reservists to help; troops able to shoot down flying objects 40 seconds from detection”
  3. Slaughterbots. “A video by the Future of Life Institute and Stuart Russell, a professor of computer science at Berkeley, presenting a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition to assassinate political opponents based on preprogrammed criteria.”

How do they work?

Drones are aerial IoT devices. They’re mounted with sensors that relay their location, altitude and other readings, such as images, to a back-end system or controller, which determines what action the drone should take. Such drones have to remain within sight for a human controller to operate them. The Federal Aviation Administration (FAA) stipulates that Unmanned Aircraft Systems (UAS) users must (1) register their UAS with the FAA and (2) fly the UAS within visual line of sight. The examples above, the Venezuelan assassination attempt and the shooting down of incendiary kites, both involve human controllers.
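The sensor-to-controller loop described above can be sketched in a few lines. This is a hypothetical illustration: the field names, thresholds and command strings are assumptions for the example, not a real drone API.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """One sensor reading relayed from the drone (illustrative fields)."""
    lat: float
    lon: float
    altitude_m: float
    battery_pct: float

def controller_decide(reading: Telemetry) -> str:
    """Back-end controller maps a telemetry reading to a command."""
    if reading.battery_pct < 20:
        return "RETURN_TO_HOME"
    if reading.altitude_m > 120:  # FAA Part 107 ceiling is 400 ft (~122 m)
        return "DESCEND"
    return "CONTINUE"

print(controller_decide(Telemetry(10.48, -66.90, 150.0, 80.0)))  # DESCEND
```

In a real system this decision loop runs continuously over a radio link; the point here is only the architecture: sensors upstream, decisions downstream.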
Commercial drones are used to inspect pipelines for leaks. They can fly long distances on their own looking for signs of an oil leak, and the images of leaks are easy to recognize. They can travel farther because they do not need an onboard power source for long-range transmission to a pilot. Extending pattern recognition from oil leaks to facial recognition that identifies a human target is not as big a hurdle as you might think. Chip technology is advancing to the point that small onboard chips could enable a drone to find a target within a crowd on its own.
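At its core, the recognition step works the same way for pipelines and faces: compare a feature vector extracted from an image against a reference. A minimal sketch, assuming embeddings have already been extracted by some upstream model (the vectors and the 0.9 threshold are invented for illustration):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(embedding: list, reference: list, threshold: float = 0.9) -> bool:
    """True if the observed embedding is close enough to the reference."""
    return cosine_similarity(embedding, reference) >= threshold
```

Whether the reference vector encodes "oil on gravel" or a particular face is irrelevant to the matching logic, which is why the extension is a smaller hurdle than it first appears.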
This isn’t science fiction as The Perpetual Line-Up explains, “the Government Accountability Office revealed that close to 64 million Americans do not have a say in the matter: 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm.” The Verge reports “a major recipient of AI funding in China is facial recognition. This technology is widespread in the country’s cities, used for everything from identifying jaywalkers to allocating toilet paper. More significantly, it’s also been embraced by the government as a tool for surveillance and tracking”.
Similar artificial intelligence (AI) enables drones to operate autonomously. They can be programmed with a route or instructions and then navigate to the destination on their own. This makes it possible for a swarm of drones to operate collectively, without human operators.
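Autonomous waypoint navigation, stripped to its essentials, is just iterating toward each programmed point in turn. A simplified sketch in a flat 2-D world (real autopilots use GPS fixes and control loops; coordinates and step size here are assumptions):

```python
def navigate(position, waypoints, step=1.0):
    """Step toward each waypoint in turn; return the flight path taken."""
    path = [position]
    x, y = position
    for wx, wy in waypoints:
        while abs(wx - x) > 1e-6 or abs(wy - y) > 1e-6:
            dx, dy = wx - x, wy - y
            dist = (dx * dx + dy * dy) ** 0.5
            move = min(step, dist)        # don't overshoot the waypoint
            x += move * dx / dist
            y += move * dy / dist
            path.append((x, y))
    return path
```

Because each drone only needs its own waypoint list, a swarm is simply many copies of this loop running in parallel, which is what makes collective operation without human operators feasible.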

Defensive strategies

How do you protect against such a ‘smart’ weapon? Here are three possible defenses:

1. Block communications

Pilots communicate with their drone over a transmitted command-and-control (C2) link. WhiteFox Defense provides an RF counter-drone security system that constantly surveys for these signals and analyzes them in real time to determine the danger posed by a drone. This information can be used to lock out the drone pilot and mitigate the threat. This approach has limited value, though, when a drone operates autonomously and is not in regular communication with its pilot, as there is no signal to block.
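The RF surveying idea can be illustrated by classifying observed emissions against known C2 frequency bands. The bands below are the common consumer ISM allocations; the labels and matching logic are illustrative assumptions, not WhiteFox's actual product behavior:

```python
# Common unlicensed bands used by consumer drone C2 and video links.
KNOWN_C2_BANDS_MHZ = [
    (2400.0, 2483.5, "2.4 GHz ISM (common consumer C2 link)"),
    (5725.0, 5875.0, "5.8 GHz ISM (FPV video / C2)"),
]

def classify_emission(freq_mhz: float) -> str:
    """Label an observed RF emission by the known band it falls in."""
    for lo, hi, label in KNOWN_C2_BANDS_MHZ:
        if lo <= freq_mhz <= hi:
            return label
    return "unknown emission"
```

Note the limitation the text raises: this entire approach assumes there is an active C2 emission to detect. A fully autonomous drone that flies radio-silent leaves nothing to classify or jam.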

2. Airspace monitoring

This approach resembles anti-virus software, where network packets are compared against a list of known virus ‘signatures’ to identify and block threats. Unmanned Defence Specialists provides software that continuously displays real-time airspace information and detects and identifies drones using "DroneDNA" pattern recognition. With this information, defensive measures against hostile drones can be taken automatically. These systems, however, work best in areas with a known boundary and may not be suitable for protecting large areas.
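The anti-virus analogy can be made concrete: an observed track's features are checked against a library of known drone signatures. The signature library, feature names and matching rules below are invented for the example and are not how "DroneDNA" actually works:

```python
# Hypothetical signature library: airframe name -> distinguishing features.
SIGNATURES = {
    "quadcopter-consumer": {"rotor_count": 4, "max_speed_mps": 20},
    "fixed-wing-survey":   {"rotor_count": 1, "max_speed_mps": 30},
}

def match_signature(track: dict):
    """Return the first known signature the observed track matches, else None."""
    for name, sig in SIGNATURES.items():
        if (track["rotor_count"] == sig["rotor_count"]
                and track["speed_mps"] <= sig["max_speed_mps"]):
            return name
    return None
```

As with anti-virus software, the weakness is the same: a drone whose profile is not in the library matches nothing, and coverage degrades as the monitored airspace grows.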

3. Policy changes on weaponizing drones

The Future of Life Institute and Stuart Russell, a professor of computer science at Berkeley, released the video ‘Slaughterbots’. The Future of Life Institute (FLI) is a volunteer-run research and outreach organization that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). "This short film is more than just speculation; it shows the results of integrating and miniaturizing technologies that we already have... AI's potential to benefit humanity is enormous, even in defense, but allowing machines to choose to kill humans will be devastating to our security and freedom", explains Russell.
Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, disagrees with Russell. "Every military technology has a countermeasure, and countermeasures against small drones aren't even hypothetical. The U.S. government is actively working on ways to shoot down, jam, fry, hack, ensnare, or otherwise defeat small drones. The microdrones in the video could be defeated by something as simple as chicken wire," Scharre explained. He also stated that Russell's implied proposal, a legally binding treaty banning autonomous weapons, "won't solve the real problems humanity faces as autonomy advances in weapons. A ban won't stop terrorists from fashioning crude DIY robotic weapons".

Summary

“Just as the Industrial Revolution spurred the creation of powerful and destructive machines like airplanes and tanks that diminished the role of individual soldiers, artificial intelligence technology is enabling the Pentagon to reorder the places of man and machine on the battlefield the same way it is transforming ordinary life with computers that can see, hear and speak and cars that can drive themselves”, reported the NYT. “The new weapons would offer speed and precision unmatched by any human while reducing the number — and cost — of soldiers and pilots exposed to potential death and dismemberment in battle. The challenge for the Pentagon is to ensure that the weapons are reliable partners for humans and not potential threats to them.”
Killer drones may not be here yet. But they’re closer than you’d think.

networkworld.com