In April, thousands of Google employees called for their company to discontinue support to Project Maven, a Department of Defense artificial intelligence initiative, stating that “Google should not be in the business of war.”
Following suit, tech workers from Microsoft, IBM and Amazon urged Google to break its contract with the department and called on other companies to refuse to work with it. On June 1, the company informed employees that it would not renew its contract with the Pentagon.
For several years now, tech luminaries such as Elon Musk and others have called for an outright ban on the development of AI-enabled “killer robots.”
But in order to remain a global military leader and preserve U.S. national security, the department must make substantial investments in AI technology and leverage the expertise of Silicon Valley. Indeed, during a trip to the region last year, Secretary of Defense James Mattis insisted that AI has “got to be better integrated by the” Defense Department.
Obviously, Silicon Valley and the Pentagon are out of sync. Meanwhile, China and Russia are setting the conditions within their respective industrial bases to become world leaders in AI. In July 2017, the Chinese State Council released a comprehensive strategy entitled "New Generation Artificial Intelligence Development Plan," directing a whole-of-government approach and strongly encouraging large technology companies like Alibaba and Baidu to invest heavily in the technology, with the goal of making China the world leader in AI by 2030.
Former Deputy Secretary of Defense Robert Work has called China’s issuance of this strategy a “Sputnik moment.” Russia has taken similar steps to increase its AI capabilities. Last September, Russian President Vladimir Putin remarked: “Whoever becomes the leader in this sphere will become the ruler of the world. Artificial intelligence is the future not only of Russia but of all of mankind.
There are huge opportunities, but also threats that are difficult to foresee today.”
With China and Russia poised to become AI leaders, the current dissonance between Silicon Valley and the Pentagon is a real problem that cannot be dismissed as mere cultural differences or emblematic of a greater civilian-military divide.
The U.S. government has a clear need to rely on Silicon Valley, and it cannot expect notions of patriotism or national security to inspire tech companies into service. Nor is it realistic to expect the Pentagon to lure top talent away from the private sector, where annual salaries can exceed $1 million. Instead, the department should strengthen its relationship with Silicon Valley through education, presence and engagement.
First, education: the world is an exceedingly dangerous place. As noted in an unclassified summary of the 2018 National Defense Strategy: "We are facing increased global disorder, characterized by decline in the longstanding rules-based international order — creating a security environment more complex and volatile than any we have experienced in recent memory."
In addition to the threats posed by Russia and China, the United States faces threats from North Korea and Iran and continues to battle violent non-state extremist organizations across several continents. Silicon Valley is far removed from these dangers and may benefit from realizing that keeping them at bay is vital to the safety, security and prosperity of its businesses.
Further, the Defense Department is not the war-mongering, baby-killing machine that Silicon Valley may perceive it to be. While there are many ways to dispel this misperception, a few key points are worth mentioning here. First, as noted in the National Defense Strategy, the Defense Department seeks to deter war, and to fight only if deterrence fails. The ability to develop strong AI capabilities will be crucial to that deterrence.
Existing policy takes a cautious approach toward autonomous weapons and requires some level of human control over lethal actions. Further, Project Maven, the program that catalyzed the Google employee movement to stop working with the Pentagon, is used for the nonlethal purpose of detecting objects, and may help save the lives of soldiers and civilians overseas.
Finally, the U.S. military strives to minimize civilian harm, which is both required by the law of armed conflict and paramount to its strategic success in multiple theaters of operations. AI-enabled technologies might help achieve these objectives.
Education is a two-way street, however, and the Pentagon must likewise work to understand Silicon Valley. Judging from some of the questions Congress asked Facebook CEO Mark Zuckerberg during his recent hearing, many in Silicon Valley worry that Washington's poor understanding of technology could lead to overly burdensome regulation. Similarly, Pentagon officials may not grasp the issues surrounding AI as deeply as those on the cutting edge of the commercial sector.
Accordingly, the Defense Department should work not only to better understand the technology that Silicon Valley is producing, but also understand how Silicon Valley decision-makers, scientists and attorneys assess risks with regard to such technology. Such engagement could help inform the analysis of AI-enabled systems before fielding, and best ensure compliance with laws and regulations.
Also important is "presence." Merely increasing the presence of military personnel in Silicon Valley could help bridge some gaps. Due to a series of base closures in the 1990s and 2000s, the U.S. military's presence there has become insignificant. As M.L. Cavanaugh wrote recently in the Wall Street Journal, the Defense Department could increase the number of fellowships with the tech industry, increase the size of the Defense Innovation Unit-Experimental (DIUx), and expand ROTC opportunities at Silicon Valley feeder schools. Improving the placement and access of military personnel throughout Silicon Valley would make the department part of that ecosystem rather than an outsider.
And finally, there is "engagement." On its face, Google's "don't be evil" mantra can seem at odds with military rhetoric. For example, during a breakfast hosted by the Defense Writers Group earlier this year, Air Force Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, commented: "Wouldn't it be cool if you could shoot somebody in the face at 200 kilometers and they don't even know you're there? That's the kind of man-machine teaming we really want to get after."
While this sort of commentary might resonate with warriors in the military or those deeply embedded in the defense industry, to an external audience it offers an incomplete portrayal of the Defense Department's views and practices on fighting wars.
Further, this type of rhetoric may unintentionally feed into a narrative that the U.S. military will irresponsibly spawn a horde of autonomous killer robots. Accordingly, Defense Department personnel at all levels must refine engagement with regard to potential uses of AI, and remind the public that the overall goal is to preserve national security, deter war and save lives.
If Silicon Valley continues to resist working with the Defense Department, China and Russia will not be effectively deterred and could use their respective industrial bases to surpass U.S. military technology. If this occurs, U.S. national security will be at risk and it will be too late for tech companies to lend their support.
Accordingly, it is imperative for the Pentagon to cement strategic relationships and win hearts and minds in Silicon Valley through mutual education, broader presence and considered engagement.