A killer robot by any other name is far more palatable to the general public. That may be part of the logic behind Army Research Laboratory chief scientist Alexander Kott’s decision to refer to thinking and moving machines on the battlefield as “mobile intelligent entities.” Kott pitched the term, along with ARL’s new concept of Fully Autonomous Maneuver, at the 2nd Annual Defense News Conference yesterday, in a panel on artificial intelligence that kept circling back to underlying questions of great power competition.
If there is a canon against which this autonomy seems heretical, it is likely the international community’s recent conference and negotiations over how, exactly, to permit or restrict lethal autonomous weapon systems. The most recent meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems took place last week in Geneva, Switzerland, and concluded with a draft of recommendations on August 31.
This diplomatic process, and the potential verdict of international law, could check or halt the development of AI-enabled weapons, especially ones in which machines select and attack targets without human intervention. That’s the principal objection raised by humanitarian groups like the Campaign to Stop Killer Robots, as well as by the nations that have called for a preemptive ban on such autonomous weapons.
Kott likely overstates the uniformity of belief among those who may cast his work as heretical. Still, he understands the ethical concern at the heart of the objection, drawing an analogy to the moral concerns and tradeoffs in developing self-driving cars.
“All know about self-driving cars, all the angst, the issues of mobility, errors, ethics. Take all this concern and multiply it by orders of magnitude, and now you have the issues of mobility on the battlefield,” said Kott. “Mobile intelligent entities on the battlefield have to deal with a much more unstructured, much less orderly environment than what self-driving cars have to do. This is a dramatically different world of urban rubble and broken vehicles, and all kinds of dangers, in which we are putting a lot of effort.”
Throughout the panel, where Kott was joined by Jon Rambeau of Lockheed Martin C6ISR, Rear Adm. David Hahn of the Office of Naval Research, and Maj. Gen. William Cooley of the Air Force Research Laboratory, the answers skirted the edges of lethal autonomy, focusing instead on the other degrees of autonomy that will be developed in accordance with the Department of Defense’s own policy guidelines mandating human-in-the-loop control.
“As industry then takes on [the] mantle of developing some of these highly capable AI-enabled systems, our responsibility [is] to make sure that we develop within those boundary conditions,” said Rambeau. This is likely to be a continuous process: because AI systems are continuously updated, they will likely need regular evaluation to make sure they do not develop on their own in malicious or unexpected ways.
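In software terms, that kind of recurring check often takes the form of a regression-evaluation gate run after each model update. The sketch below is a minimal, hypothetical illustration in Python, not a system described by any of the panelists; the EvalCase structure, the obstacle-avoidance toy model, and the 95 percent pass threshold are all invented for this example.

    # Hypothetical sketch: re-run a fixed behavioral suite after each model
    # update and block deployment if the update drifts from approved behavior.
    # The model interface, test cases, and threshold are illustrative only.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EvalCase:
        description: str
        inputs: dict
        expected: str  # behavior the reviewed-and-approved baseline exhibited

    def evaluate(model: Callable[[dict], str], suite: list[EvalCase],
                 pass_threshold: float = 0.95) -> bool:
        """Return True only if the updated model still matches approved behavior."""
        passed = sum(1 for case in suite if model(case.inputs) == case.expected)
        rate = passed / len(suite)
        print(f"{passed}/{len(suite)} cases passed ({rate:.0%})")
        return rate >= pass_threshold

    if __name__ == "__main__":
        # Toy stand-in for an updated navigation model on a mobile platform.
        def updated_model(inputs: dict) -> str:
            return "stop" if inputs["obstacle_m"] < 5.0 else "proceed"

        suite = [
            EvalCase("halts for a close obstacle", {"obstacle_m": 2.0}, "stop"),
            EvalCase("proceeds when path is clear", {"obstacle_m": 40.0}, "proceed"),
        ]
        if not evaluate(updated_model, suite):
            raise SystemExit("Update rejected: behavioral drift detected")

The design point is simply that approved behavior is pinned down as a fixed test suite, so an update that silently changes that behavior fails the gate before it is fielded rather than after.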
However AI develops, whether tightly controlled and regulated or allowed to process information more organically and reach its own conclusions without constant checks, the presence of AI on the battlefields of the future is likely to change how nations fight wars, and maybe even how people understand war itself.