Robotic warfare: training exercise breaches the future of conflict

Robotics and artificial intelligence (AI) are nothing new for many of the world’s militaries. But their use is growing, as an exercise in Germany earlier this year showcased. In a first-of-its-kind exercise, robots were used to breach an obstacle during a joint drill by US and British forces. First Lieutenant Cody Rothschild told US defence publication Stars & Stripes: “We did a robotic breach today, which has never been done before. This is a great step forward for the Army, and for robotics.”

The exercise involved remotely controlled robots clearing a path for forces, supported by M1A2 Abrams tanks and Bradley Fighting Vehicles. The unmanned systems disabled landmines and built a land bridge enabling infantry to navigate a tank trench.

“Testing and exercises are key to learning what works and what does not. It is akin to the lessons of the Louisiana Maneuvers or Fleet Problem exercises of the last century,” says Peter W Singer. The multi-award-winning author, scholar and political scientist is one of the world’s leading voices on warfare and defence. “The wargames were where the key insights of future battles were gained.”
Advances in military robotics around the world

But it’s not just the West that is advancing in military robotics. In 2017 Business Insider declared Russia was a “leader in weaponised robots”. It said, confirming what many analysts believed, that Russia was advancing its efforts to develop unmanned ground vehicles, notably the Nerekhta, Uran-9, and the Vikhr.

The news should be of little surprise given militaries the world over are using robotic and AI systems, according to Singer. “It is more a question of who is not using them,” he says. “We’ve tracked robotics use by everyone from China and Israel to Belarus and Belgium. It also includes non-state actors that range from ISIS to journalists.”

And it won’t stop here, he adds. “This is clearly the future, whether it is in war or civilian life. Indeed, we will likely one day look at AI akin to how we look at electricity now. It was once revolutionary and now it is woven into our world in ways we no longer notice.”

That is a scenario that some, perhaps many, feel uneasy with. The late Stephen Hawking and Tesla’s Elon Musk have warned of the risks, often quite vocally, in the past. Suggesting that AI could be the biggest event in the history of civilisation, and perhaps the world, Hawking said we should prepare for the risks it brings. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” he said. Meanwhile, Musk has warned that AI could lead to a Third World War.

Although perhaps not for the same reasons, just yet, Singer shares their concern about what autonomy on the battlefield could mean. “As we create distance from the point of violence, whether it is geographic distance through remote operation or temporal distance by autonomy, it changes the way we think about war itself,” he says. “You can already see tastes of that in how politicians and media talk about the ‘drone wars’, the air strike campaigns in places like Pakistan and Yemen, where the technology has shaped thinking.”
The next generation of military vehicles

Earlier this year Colonel Gerald (Andy) Boston told us of the advances being made in the development of next-generation combat vehicles. AI and autonomy were key to those innovations, he said, although not critical – for now at least.

“The hypothesis that we’re working on is that we are going to be able to, in the future, deliver decisively on the battlefield, using robotic combat vehicles,” he said, acknowledging that significant R&D funding has been, and continues to be, made available.

However, Boston said a human has to have the final say before critical decisions are made, such as firing at a target. “I do think that increasing levels of autonomy and AI have a big role to play in the future,” he added. “We look at it this way: artificial intelligence and autonomy should be used on combat vehicles in very discrete ways, based on the functions the vehicle is going to be performing… But the human has to be the one to make the decision: is that target hostile? Is it the correct target? Is the target showing hostile intent, and is it legitimate? The human has to make the decision as to whether or not to engage. Once you make the decision to engage, the machine can help you do that faster.”

As the US-British military exercise in April demonstrated, there is much to excite us about the use of unmanned ground vehicles, none more so than the ability to protect personnel from the dangers of conflict.

Breach manoeuvres are often among the most dangerous operations in active conflict, with units coming under artillery fire from adversaries. Being able to carry them out from a distance is therefore a convincing argument for the use of robots in theatre, and it doesn’t stop there, says Singer. “Protecting lives certainly seems good to me. But it is not just about that. It is also about providing better outcomes.”
Unmanned conflict in the future

Although the April exercise and others carried out by the likes of Russia and China, together and on their own, are significant, they don’t mean we’re close to anything like the premise of the Terminator movies – at least not any time soon.

“The challenges we still face involve everything from navigating complex environments to how to react to humans,” Singer says. “Things that are simple for us to sense and communicate are difficult for machines, and vice versa.” He adds that in the future we will see more advanced, intelligent robots, and in a wider variety of forms than just direct replacements for manned machines.

However troubling some find the development of robots, and the exercises we’ve seen in recent times, Singer would prefer we test rather than simply assume we’ve got it right. “I would prefer we do far more of this kind of testing and learning, rather than thinking we know all that we need to know on acquisitions or doctrine.”

He does, however, have some concerns. “The growing use of robotics and AI is akin to the introduction of the steam engine or the airplane. It is disruptive to everything from tactics to doctrine.” He warns that we are not adjusting quickly enough.

Wars of the future will likely be very different to those of yesteryear. Conflict, as deleterious as it is, is unfortunately inevitable. The question is not how we prevent it; it is how we fight it as effectively as possible at the lowest possible cost – both in terms of finances and human lives. Robotics and AI will not answer that question alone, but they may go some way towards helping us find the answers – if they’re used correctly, which is another matter entirely.