AI Safety: Charting out the High Road

This past year, revelations have emerged about the plight of Muslim Uighurs in China: massive-scale detentions and human rights violations of this ethnic minority of the Chinese population.

Last month, additional classified Chinese government cables revealed that this policy of oppression was powered by artificial intelligence (AI): algorithms fueled by massive data collection on the Chinese population were used to make decisions regarding the detention and treatment of individuals. China's failure to uphold the fundamental and inalienable human rights of its population is not new; indeed, tyranny is as old as history. But the Chinese government is harnessing new technology to do wrong more efficiently.

Concerns about how governments can leverage AI also extend to the waging of war. Two major concerns about the application of AI to warfare are ethics (is it the right thing to do?) and safety (will civilians or friendly forces be harmed, or will the use of AI lead to accidental escalation and conflict?). With the United States, Russia, and China all signaling that AI is a transformative technology central to their national security strategies, and with their militaries planning to move quickly on military applications of AI, should this development raise the same kinds of concerns as China's use of AI against its own population? In an era when Russia targets hospitals in Syria with airstrikes in blatant violation of international law, and indeed of basic humanity, could AI be used to conduct war crimes more efficiently? Will AI in war endanger innocent civilians as well as protected entities such as hospitals?

To be clear, any technology can be misused in war. A soldier could use a rock to commit a war crime. Even a simple, low-tech land or sea mine can endanger civilians if it is used indiscriminately. A transformative technology like AI can be used responsibly and safely, or it can fuel a much faster race to the bottom.

The United States has declared that it will take the high road with military applications of AI. For example, the Department of Defense AI strategy lists "AI ethics and safety" as one of its fundamental lines of effort. And this is not an empty promise: The Defense Innovation Board just released its principles for the ethical military use of AI, the culmination of a year-long, deliberate initiative that drew in AI experts, ethicists, and the general public. Through this laudable effort, the United States has shown leadership in the responsible and principled use of this technology in war.

But there is something missing: The AI strategy's commitment was to ethics and safety. To date, the Department of Defense has not shown a similarly concerted focus on AI safety. Despite commitments made to the international community and in its own AI strategy, the Pentagon has done little to act on promises to address safety risks unique to the technology of AI or to use AI to enhance safety in conflict. My recent research has shown that this inaction creates risks for those on the battlefield, civilians and combatants alike, and increases the likelihood of accidental escalation and conflict. In an era when the technology of AI can so easily be exploited by governments to violate the principles of humanity, the United States can demonstrate that the high road is possible, but to do so it needs to keep its promises: to address the safety risks intrinsic to AI and to seek out ways to use AI for good.

Promise 1: Addressing Safety Risks Unique to AI

In its AI strategy, the Department of Defense promised to address the safety risks unique to AI technology. This reflects America's long record of commitment to safety and adherence to the international law of armed conflict. For example, all military systems are subject to test and evaluation activities to ensure that they are reliable and safe, as well as legal reviews to ensure they are consistent with international humanitarian law (e.g., the Geneva Conventions). It is not surprising, therefore, that safety features prominently in the defense AI strategy.

Though the intention is commendable, the strategy has not yet resulted in significant institutional steps to promote AI safety. The U.S. military has been busy supporting the Defense Innovation Board's development of AI ethics principles, with the Joint AI Center also emphasizing the critical role ethics plays in AI applications, yet the pursuit of safety — for example, avoiding civilian casualties, friendly fire, and inadvertent escalation — has not received the same sort of attention.

I acknowledge that a few steps are being taken toward promoting AI safety. For example, the Defense Advanced Research Projects Agency has a program working to develop explainable AI, which helps address the challenges of using machine learning as a black-box technology. Explainability will enhance AI safety: for example, by making it possible to explain why an AI application does something amiss in testing or operations and to take corrective action. But such steps, while important, do not amount to a comprehensive approach that identifies and then systematically addresses AI safety risks. To that end, our most recent report applies a risk management approach to AI safety: identifying risks, analyzing them, and then suggesting concrete actions to begin addressing them.

From this we see two types of safety risks: those associated with the technology of AI in general (e.g., fairness and bias, unpredictability and unexplainability, cybersecurity and tampering), and those associated with specific military applications of AI (e.g., lethal autonomous systems, decision aids). The first type will require the U.S. government, industry, and academia to work together to address existing challenges. The second type, tied to specific military missions, is a military problem with a military solution: experimentation, research, and concept development to find ways to promote effectiveness along with safety.

Promise 2: AI for Good

A second promise made by the U.S. government was to use AI to better protect civilians and friendly forces, as first expressed in international discussions. The United Nations Convention on Certain Conventional Weapons, a forum that considers restrictions on the design and use of weapons in light of the requirements of international humanitarian law, has held discussions regarding lethal autonomous weapon systems since 2014. Over time, the topic of those discussions has informally broadened from purely autonomous systems to also include the use of AI in weapon systems in general. As the State Department's senior advisor on civilian protection, I was a member of the U.S. delegation to the discussions on lethal autonomous weapon systems. The U.S. position paper in 2017 emphasized how, in contrast to the concerns of some over the legality of autonomous weapons, such weapons carried promise for upholding the law and better protecting civilians in war. This was a sincere position: Several of us on the delegation were also involved in drafting the U.S. executive order on civilian casualties, which contained a policy commitment to make serious efforts to reduce civilian casualties in U.S. military operations. The thoughtful use of AI and autonomy represented one way to meet that commitment.

The 2018 Department of Defense AI strategy contained a similar promise to use AI to better protect civilians in war. As described in the unclassified summary of the strategy, one of its main commitments was to lead internationally in military ethics and AI safety. This included the development of specific applications that would reduce the risk of civilian casualties.

That last commitment, made both in the strategy and in U.S. government position papers, is probably the one that draws the most skepticism. When Hollywood portrays AI, autonomous systems, and the use of force, it is often to show machines running amok and killing innocents, as in the Terminator movies. But using AI for good in war is not a fanciful notion: At CNA, our analysis of real-world incidents shows specific areas where AI can be used for this purpose. We have worked with the U.S. military and others to better understand why civilian casualties occur and what measures can be taken to avoid them. Based on analysis of the underlying causes of over 1,000 incidents, AI technologies could be used to better avoid civilian harm in ways including:
Monitoring targeted areas for potential changes in the estimated collateral damage in order to avoid civilian casualties;
Mining military and open source data to better identify and reduce the risk to civilian infrastructure (e.g., power, water) in conflict areas, helping to avoid longer-term humanitarian impact from the use of force;
Using image processing techniques to better identify hospitals and avoid inadvertent attacks (a simplified sketch of this idea follows this list); and
Using AI-driven adaptive learning to improve military training for civilian harm mitigation.
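
To make the third item above more concrete, the sketch below shows, in simplified Python, how such an image-processing aid might be structured: imagery tiles are scored by a classifier, and any tile that might contain a hospital is routed to a human analyst for review and addition to a no-strike list. The classifier here is a stub, and every name, threshold, and data format is an assumption made for illustration; this is not a description of any existing or planned system.

from dataclasses import dataclass
from typing import List


@dataclass
class ImageTile:
    """A patch of overhead imagery (all fields are illustrative)."""
    tile_id: str   # identifier for the imagery tile
    lat: float     # center latitude of the tile
    lon: float     # center longitude of the tile
    pixels: bytes  # raw image data (placeholder)


def hospital_likelihood(tile: ImageTile) -> float:
    """Stub for a trained image-recognition model that scores how likely
    a tile is to contain a hospital or other protected medical facility."""
    return 0.0  # placeholder score; a real model would analyze tile.pixels


def flag_for_review(tiles: List[ImageTile], threshold: float = 0.5) -> List[ImageTile]:
    """Return tiles scoring above the threshold so that a human analyst
    can confirm them and add them to a no-strike list."""
    return [t for t in tiles if hospital_likelihood(t) >= threshold]


if __name__ == "__main__":
    candidates = [
        ImageTile("tile-001", 36.20, 37.13, b""),
        ImageTile("tile-002", 36.21, 37.15, b""),
    ]
    flagged = flag_for_review(candidates)
    print(f"{len(flagged)} of {len(candidates)} tiles flagged for human review.")
    for tile in flagged:
        print(f"Review {tile.tile_id} at ({tile.lat}, {tile.lon}) as a possible protected medical site.")

The design point of the sketch is that the AI flags candidates for human review rather than making engagement decisions itself.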

These are just some examples of concrete applications of AI to promote civilian protection in conflict. The Department of Defense could be a leader in this area, and it is easy to imagine other countries following a U.S. lead. For example, many countries lament the frequency of military attacks on hospitals in recent operations, and the UN Security Council unanimously passed a resolution in 2016 to promote the protection of medical care in conflict. If the United States were to announce that it was leading an effort to use AI to better protect hospitals, other countries would likely be interested in cooperating with such an effort.

Safety Is Strategically Smart

Why is it a problem if the United States does not take concrete steps to emphasize AI safety? After all, current and former U.S. government leaders have said that neither Russia nor China will slow down its AI efforts to address ethical or safety issues. Does it matter?

A focus on safety and care in the conduct of operations has served the United States well. During the second offset, Washington developed precision capabilities to help counter the Soviet Union's advantage in troop numbers. These developments then enabled the United States to take additional steps to promote safety in the form of reduced civilian casualties: developing and fielding new types of munitions for precision engagements with reduced collateral effects, building intelligence capabilities for more accurately identifying and locating military targets, and creating predictive tools to help estimate and avoid collateral damage. These steps had strategic as well as practical benefits, enhancing freedom of action and boosting the legitimacy of U.S. actions while reducing the civilian toll of recent operations. If AI is leveraged to help promote safety on the battlefield, it can yield similar strategic and practical benefits for the United States.

AI safety also has relevance to U.S. allies. Unlike its peer competitors, Russia and China, the United States almost always operates as part of a coalition. This is a significant advantage politically, numerically, and in terms of the additional capabilities that can be brought to bear. But allied cooperation, and the interoperability of partners within a coalition, will depend on which capabilities our allies are willing to adopt or to operate alongside in the same battlespace. It is important that the United States be able to convince would-be allies of both the effectiveness and the safety of its military AI.

AI and America’s Future

The revelation that China used AI to violate human rights contrasts starkly with U.S. promises to take the high road in its military applications of AI. Eric Schmidt and Bob Work have warned that the United States could easily lose its leadership in AI if it does not act urgently. The leadership the United States has shown on AI ethics is commendable, but to fully be the leader it needs to be for our national security and our prosperity, America must lead the way on safety too. The opportunity to develop AI of unrivaled precision is historic. If it can build AI that is lethal, ethical, and safe, the United States will have an edge in both future warfare and the larger climate of competition that surrounds it. Developing safer AI would once again show the world that there is no better friend and no worse enemy than the United States.