The US Defense Department has revised its description of an initiative designed to use artificial intelligence to give tanks the ability to identify and engage targets on their own.
The change came after Quartz published details of the US Army’s ATLAS program as revealed in a solicitation to vendors and academics. ATLAS, which stands for “Advanced Targeting and Lethality Automated System,” aims to use artificial intelligence and machine learning to give ground-combat vehicles autonomous targeting capabilities at least three times faster than a human’s.
In Quartz’s Feb. 26 article, the Army said it is not planning to replace soldiers with machines but seeks to augment their abilities. ATLAS is primarily designed to increase the amount of response time tank gunners get in combat, Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security, a bipartisan think tank in Washington, DC, told Quartz.
Yet, Stuart Russell, a professor of computer science at UC Berkeley, said even this was a step too far. “It looks very much as if we are heading into an arms race where the current ban on full lethal autonomy”—a US military policy that mandates some level of human interaction when actually making the decision to fire—“will be dropped as soon as it’s politically convenient to do so,” said Russell, an AI expert.
The updated language added to the solicitation by the Army states:
All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.
According to Defense One, the Army will also be drafting new talking points to use when discussing ATLAS.
The machines are only partially taking over
US military leaders appeared March 12 before the Senate Armed Services Committee to discuss the state of the Pentagon’s AI initiatives. They emphasized that ethical guidelines concerning AI use have been developed. Lt. Gen. Jack Shanahan, who runs the Defense Department’s AI center, used the word “ethics” or “ethical” four times during his prepared testimony.
An Army spokesman acknowledged a request for further details on the new ATLAS description and talking points but has not yet provided any.
No “human in the loop” requirement
ATLAS will require a soldier to throw a switch before firing, the Army told the specialist website Breaking Defense, which published a March 4 follow-up to Quartz’s reporting and expanded it into a four-part series on the ethics of autonomous weaponry.
Defense Department directive 3000.09 instructs everyone along the official chain of command that “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
However, as Scharre told Breaking Defense, “The US Defense Department policy on autonomy in weapons doesn’t say that the DoD has to keep the human in the loop. It doesn’t say that. That’s a common misconception.” (“The Directive does not use the phrase ‘human in the loop,’ so we recommend not indicating that DoD has established requirements using that term,” a Pentagon spokesperson told Breaking Defense.)
Even mechanical firing systems can be made to operate on their own, Russell told the site, further warning of “automation bias” or “artificial stupidity.” This refers to instances in which technology reduces humans to button-pushers who blindly follow a robot’s commands.
Further, directive 3000.09 says the deputy secretary of defense can waive its restrictions—after a mandatory legal review—in times of “urgent military operational need.”
Worries of a “firestorm”
Military language is “at once abstrusely technical and sloppy,” wrote Breaking Defense’s Sydney Freedberg, and the Army’s definition of “lethality” can be quite different from a civilian’s. There were “people in the Pentagon… who were aware of how this all sounded,” well before the Quartz article was ever written, he reported. Within hours of the original solicitation going online, the head of the Pentagon’s Joint Artificial Intelligence Center expressed concerns over what he feared would be a “firestorm” of negative news coverage when it was spotted, Freedberg wrote.
Scharre describes the current crop of autonomous weaponry, such as ATLAS, as akin to blind-spot monitors on cars, and says such systems would reduce the chances of missing an intended target.
Still, critics of AI-assisted weaponry (who include Elon Musk) fear the lack of concrete, universally accepted guidelines. They say only a total ban will prevent eventual catastrophe.
As Article 36, a UK-based NGO that works to “prevent the unintended, unnecessary or unacceptable harm caused by certain weapons,” states on its website: “Action by states is needed now to develop an understanding over what is unacceptable when it comes to the use of autonomous weapons systems, and to prohibit these activities and development through an international treaty.”