
Battlefield 2.0


How Edge Artificial Intelligence Is Setting Man Against Machine




Target Acquisition: The detection, identification and location of a target in sufficient detail to permit the effective employment of weapons. [US Defense Department’s Dictionary of Military and Associated Terms]

The Killing House

Undisclosed location, Asia, 2018. The control room falls silent as multiple video streams from live bodycams fill its display monitors. On screen, four men, bulked by tactical body armor and carrying AR-15s, approach a single-story concrete structure. The windowless building, its meter-thick walls baking in the late-afternoon heat, carries no signage, no identifying markers. As the men enter, the control room displays adjust from sunlight to show a near pitch-black corridor with doorways visible to the left. The audio is silent, save for the tread of rubber boots through pooled condensation and the hum of a generator somewhere inside.

The four men had seen images of the hostage before they reached the building. Some limited intelligence was available on the suspected five to six hostiles. Nothing definite, and individual features blur under the glow of an image intensifier in any case. This time it doesn’t matter. The AI bodycams clipped to their flak jackets will see in the dark, streaming live operational video back to the controllers. And the devices know exactly who they are looking for.

The first contact comes as they enter the corridor. Someone steps out of a room, weapon raised. Four rapid shots and he’s down. Facial recognition on one of the bodycams identifies him as a known hostile. This despite the darkness and his face paint. The men move further along the corridor. When they reach the third room, chaos ensues. Six individuals. None visibly armed. All moving frantically. Arms waving. Empty hands outstretched. Pleading. Shouting loudly, incoherently. Steered by the bodycams, the men instantly recognize the hostage and bundle him from the room. The others searched, disarmed, zip-tied. Absolute clarity. Despite the dark, the adrenaline, the conditions. Despite the risk of confusion. Terrorists hide amongst hostages. Enemy combatants amongst locals. Not any longer.

Back in the control room, the multiple live feeds and facial recognition matches display in real time. Voices talk softly through headsets. Nothing is left to chance. Servers capture every piece of data for the AI engine to process. The next generation of mobile devices will carry object classifiers for automated weapon detection. Behavioral analytics. The standoff biometrics will be faster, more automated, more powerful. This exercise is close quarters battle training at a dedicated facility: the hostage, the rescue team and the hostiles are all serving officers; but the AI is real and already battle-tested.

The Urban Battlefield

Montreal, Canada, 2018. In September this year, under the auspices of The Technical Cooperation Program (TTCP), the ‘Five Eyes’ nations collaborated on the Contested Urban Environment 2018 (CUE 18) evaluation exercise. Hundreds of scientists, researchers and military personnel from the US, UK, Canada, Australia and New Zealand came together to see how the latest tech would play in military hands.


One technology on show was Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT), co-developed over four years by the UK Government’s Defence Science and Technology Laboratory (Dstl) and industry partners. SAPIENT is an integrated system of autonomous sensors, architected as a hierarchy so that lower-level surveillance monitoring decisions can be made autonomously by an AI module. Through this tiered structure, SAPIENT can connect a range of edge sensors to a central system, which then automatically monitors the multiple feeds, identifying risks and potential hostiles and saving military time, resources and potentially lives.
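The tiered pattern Dstl describes can be sketched in outline: autonomous edge modules take the low-level keep/discard decisions locally, and only flagged detections ever reach the central fusion layer. The sketch below is an illustrative assumption, not SAPIENT’s actual interfaces; the class names, labels and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    label: str         # e.g. "armed_person", "vehicle" (illustrative labels)
    confidence: float  # classifier score in [0, 1]

class EdgeSensorModule:
    """Low-level autonomous node: decides locally what is worth reporting."""
    def __init__(self, sensor_id: str, report_threshold: float = 0.8):
        self.sensor_id = sensor_id
        self.report_threshold = report_threshold

    def process(self, raw: list[Detection]) -> list[Detection]:
        # The low-level decision is taken at the edge: weak detections are
        # discarded here, so only candidate threats consume uplink bandwidth.
        return [d for d in raw if d.confidence >= self.report_threshold]

class FusionCentre:
    """Higher-level node: monitors multiple autonomous feeds for risks."""
    def __init__(self, watch_labels: set[str]):
        self.watch_labels = watch_labels
        self.alerts: list[Detection] = []

    def ingest(self, reports: list[Detection]) -> None:
        for d in reports:
            if d.label in self.watch_labels:
                self.alerts.append(d)

# Usage: one edge node feeding one fusion centre.
centre = FusionCentre(watch_labels={"armed_person"})
edge = EdgeSensorModule("cam-01")
raw = [Detection("cam-01", "armed_person", 0.93),
       Detection("cam-01", "vehicle", 0.55)]  # dropped at the edge
centre.ingest(edge.process(raw))
print(len(centre.alerts))  # 1
```

The design point is that the threshold check runs on the sensor itself, so the central system never sees, stores or transmits the discarded detections at all.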

British Defence Procurement Minister, Stuart Andrew, said of SAPIENT: “This British system can act as autonomous eyes in the urban battlefield. This technology can scan streets for enemy movements so troops can be ready for combat with quicker, more reliable information on attackers hiding around the corner.”

In practice, this means the identification and target acquisition of enemy combatants by edge surveillance sensors and AI. Low-level sensors feed continuous streams of collected data to a higher-level computer. Do we know the enemy combatants? Can we identify them based on their movements, behaviors, objects? Soldiers on the ground will be directed towards threats by the complex networks deployed with them. Ground and air surveillance will be joined up seamlessly. Data will be analyzed, processed and checked before it’s tailored and delivered.

All About Data

In counter-terrorism, counter-insurgency, and of course national defense itself, there is simply too much data of every variety to manage effectively. Cyber collection and physical world surveillance programs collect and store constantly. Whole industries have developed around the offline analysis of this data. But battlefields are real-time. Decisions are taken live, in the moment. There is too much data for those on the ground to process. And it’s going to get worse, a lot worse.

The US Army Research Laboratory (ARL) writes about an integrated smorgasbord of sensors, wearables, weaponry and vehicles, with the objective “to develop the fundamental understanding of dynamically-composable, adaptive, goal-driven IoBT (Internet of Battlefield Things) to enable predictive analytics for intelligent command and control and battlefield services.” Alexander Kott, chief of ARL’s Network Science Division, and colleagues call this “the emerging reality of warfare.” What they mean is data, data and more data. Unlimited numbers of machines can be deployed, and those machines can capture unlimited amounts of information. Only machines can then process that information for simpler humans to act on. Predictive analytics means taking actions based on AI-derived assumptions from algorithms trained on past events. Targeting based on what is likely to happen, not what has actually happened. First-mover advantage.

But as Kott and colleagues acknowledge, “human warfighters, under extreme cognitive and physical stress, will be strongly challenged by the massive complexity of the IoBT and of the information it will produce and carry. IoBT will have to assist the humans in making useful sense of this massive, complex, confusing, and potentially deceptive ocean of information… at the very least, the IoBT’s colossal volume of information must be reduced to a manageable level, and to a reasonably meaningful content, before it is delivered to humans and intelligent things.” Essentially, people cannot possibly process this amount of battlefield information. It’s too much. The common thread in these research and development programs is that AI layers will analyze and filter data, taking lower-level decisions in order to provide filtered intelligence for simpler humans to process. People then make the final decisions. For now.

The Bigger Picture

The challenge of future warfare’s information overload sits within the bigger picture of the emerging AI Cold War. The potential for a high-tech battle or even World War III pitting the US and allies against China, or Russia, or both. Here scientists envisage laser missile shields and battlefields where ground and airborne vehicles, and even soldiers themselves, are autonomous robots. “Robots probably will fight robots,” says Kott, “there’s no question about it.” There is also talk of swarms of smart drones, of next-generation kinetic weapons, and of course offensive cyber attacks on critical infrastructure, command and control systems, the very fabric of a nation-state.

Russia’s President Putin said last year: “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” President Trump’s $700 billion defense budget and his escalating trade war with China are set in this context. China’s ambition is that by 2030 the country’s “AI theories, technologies, and applications should achieve world-leading levels, making it the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.”


AI is set to become the dominant economic and military technological development area for generations to come. This underpins everything.

Here And Now: All About Edge-AI And Bandwidth

As Kott and colleagues concede, “communications between things will also be challenged by high complexity, the dynamics and the scale of IoBT.” They point out that militaries will have to make use of commercial communications infrastructure in foreign countries to power their IoBT. Their technologies will operate over the broader IoT infrastructures now being deployed. And networks, with their associated bandwidth, are also today’s major inhibitor to live AI battlefield application systems.

The bodycams used in the killing house example above are edge computers, not the basic recording devices becoming ubiquitous in law enforcement. These devices provide live connectivity and edge-AI that can operate offline or in sync with the center, distributing computing in real-time, localizing the first layer of processing such as face or object detection to reduce the bandwidth load and ensure split-second timing.
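To make the bandwidth saving concrete: rather than streaming full video, an edge device can run detection locally and transmit only compact event records. The sketch below is a hypothetical illustration, not any vendor’s bodycam API; the `detect_faces` stand-in, the field names and the watchlist identifier are all assumptions.

```python
import json

# One raw 1080p frame at 24 bits per pixel, roughly 6.2 MB.
FRAME_BYTES = 1920 * 1080 * 3

def detect_faces(frame_id: int) -> list[dict]:
    """Stand-in for an on-device face detector (assumption, not a real API)."""
    return [{"frame": frame_id, "track": "t-7", "match": "watchlist:alpha",
             "confidence": 0.91, "bbox": [412, 160, 96, 96]}]

def edge_report(frame_id: int) -> bytes:
    # The first layer of processing happens on the device;
    # only this small metadata record ever crosses the network.
    return json.dumps(detect_faces(frame_id)).encode("utf-8")

report = edge_report(frame_id=1042)
# Bandwidth reduction factor: raw frame size vs. metadata record size.
print(FRAME_BYTES // len(report))
```

Even before video compression is considered, shipping a detection record instead of the frame it came from cuts the per-event uplink cost by several orders of magnitude, which is what makes split-second central matching feasible over constrained links.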

Thus far, most AI facial and object detection, classification and matching have focused on cloud- or server-centric systems feeding on full-fat video piped to a central storage structure. Most analysis is offline. But where bandwidth and processing allow, it can be real-time. Now we are seeing the emergence of ‘AI chips’ and extensive use of GPUs to bring processing to the edge. Decision making is becoming tiered. This requires edge processing, but it also requires connectivity. And to make all this work there have to be limits on the amount of data that needs to move at zero latency in absolute real time.

This means that for military systems to operate on the edge, they need to form part of a distributed, fully connected architecture that can operate in either an online or offline mode. Data needs to be focused and cut into manageable packets for live transmission and processing. With the evolving intelligence picture on a live operation and with the nature of the data being analyzed and matched in real-time, mass-market systems will not suffice. All this means architecting the edge and central systems to work in lock-step. And it means managing available bandwidth efficiently on a live operation, in what will likely be a hostile or uncontrolled environment.
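The online/offline requirement described above is, at its simplest, a store-and-forward pattern: the edge node keeps accepting events while the link is down, then drains its backlog in size-bounded packets when connectivity returns. A minimal sketch, with an invented class and an arbitrary packet size:

```python
from collections import deque

class StoreAndForward:
    """Edge-side buffer: works offline, flushes in bounded packets when online."""
    def __init__(self, max_packet: int = 4):
        self.queue: deque = deque()
        self.max_packet = max_packet

    def record(self, event: bytes) -> None:
        self.queue.append(event)  # always accepted, even with no link

    def flush(self, link_up: bool) -> list:
        # Cut the backlog into manageable packets for live transmission.
        packets = []
        while link_up and self.queue:
            size = min(self.max_packet, len(self.queue))
            packets.append([self.queue.popleft() for _ in range(size)])
        return packets

buf = StoreAndForward(max_packet=4)
for i in range(10):
    buf.record(f"event-{i}".encode())

print(len(buf.flush(link_up=False)))  # 0: offline, all data retained
print(len(buf.flush(link_up=True)))   # 3: packets of 4 + 4 + 2 events
```

Bounding the packet size is the crude stand-in here for managing available bandwidth: a real system would prioritize, compress and re-order the backlog, but the edge/central lock-step reduces to the same contract of buffer locally, transmit in budgeted chunks.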

Surveillance is most effective when every node of a system operates live and in real-time. Watchlists are constantly updated. Results are shared across networks. The architecture can accommodate tiers of requirements from a specific operational level up to national level. Given enough time and equipment, dedicated networks can be established in the field. Bandwidth can be deployed just like any other resource. But the nature of modern warfare is that it roams wide areas in often undeveloped or challenging terrain. Local communications infrastructure can be basic. Installing dedicated systems takes time and a good deal of complex equipment. This is where edge-AI and a distributed architecture built across bandwidth-efficient systems delivers in the field.


Man Vs Machine

Next-generation systems are already being deployed, with edge-AI a core part of their architectures. These systems and wider networks of systems will drive efficiencies in decision making and information management. The sharp end of edge-AI on today's battlefield, whatever size and shape that battlefield might be, is target acquisition. But it’s a fine line between pointing out the enemy and taking out the enemy.

Standoff biometrics with better than 99.9% accuracy and ethnic classification, open source images, including social media scraped datasets, as well as behavioral analytics and object classifiers, all in addition to endless cyber capabilities, will find their way onto the frontline. Today, the AI points and the human decides. But two, three, five years from now, long before we see which way AI’s Cold War will play out, in a hostage rescue situation or on the streets of Syria or Iraq or Afghanistan, will it be man or machine pulling the trigger?