Do the benefits of artificial intelligence outweigh the risks?

This essay is the winner of The Economist’s Open Future essay competition in the category of Open Progress, responding to the question: “Do the benefits of artificial intelligence outweigh the risks?” The winner is Frank L. Ruta, 24 years old, from America. 
*    *    * 
Towards the end of the second world war, a group of scientists in America working to develop an atomic bomb for the Manhattan Project warned that using the weapon would inevitably lead to a nuclear arms race, forcing America to outpace other nations in building up nuclear armaments. They recommended that if the military did choose to use the weapon, an international effort for nuclear non-proliferation should promptly be established.
The scientists’ warnings went unheeded. After the nuclear attacks on Hiroshima and Nagasaki, the warnings proved eerily prescient. The arms race between America and the Soviet Union escalated during the cold war, and today rogue states such as North Korea threaten peace with their nuclear arsenals.
A potentially even more transformative technology is now being developed: one that could easily be distributed to rogue nations and terrorist groups without the need for expensive, specialised equipment. Prominent scientists and technologists, including the late Stephen Hawking and Elon Musk, have voiced concern about the risks associated with the accelerating development of artificial intelligence (AI).
Many experts in the field, like Stuart Russell, the founder of the Centre for Human-Compatible AI at UC Berkeley, believe that concerns about the misuse of AI should be taken seriously. More than 8,000 researchers, engineers, executives, and investors have signed an open letter recommending a direction for responsible AI research that recognises social impact and seeks to build robust technology that aligns with human values.
To avoid repeating history, policymakers should begin to think about regulating AI development now that the community itself is calling for policy action. As with past technologies, well-structured regulation can mitigate costly externalities, while ill-informed regulatory measures can interfere with progress. Policymakers must cooperate closely with researchers to implement protocols that align AI with human values without being overly burdensome to developers.
The emerging field of AI safety has already begun discussing guidelines to tackle the potential dangers of the technology. Sessions devoted to AI safety and ethics have taken place at major scientific conferences, and several books and articles on the topic have been published. If regulators understand researchers’ concerns and address the dangers of AI, the benefits of the technology will greatly outweigh the risks.
AI is a general term for software that mimics human cognition or perception. Because AI encompasses a broad set of algorithms, policymakers must take a nuanced approach to regulation, underscoring the need for technical collaboration. At a high level, a distinction is made between narrow AI and artificial general intelligence (AGI).
Narrow AI is more intelligent, or at least faster, than humans at a specific task or set of tasks, like playing the board game Go or finding patterns in large datasets. On the other hand, an AGI would beat humans at a number of cognitive tasks, termed cognitive superpowers by Nick Bostrom, a philosopher at the University of Oxford. These include intelligence amplification, strategising, social manipulation, hacking, technology development and economic productivity.
Narrow AI is responsible for many useful tools that have already become mainstream: speech and image recognition, search engines, spam filters, product and movie recommendations. The list goes on. Narrow AI also has the potential to enable promising technologies like driverless cars, tools for rapid scientific discovery and digital assistants for medical image analysis.
In the near term, some of these technologies could be abused by malicious groups. The cost of attacks requiring human labour or expertise could fall, and new threats exploiting vulnerabilities in AI systems could emerge. AI can automate labour-intensive cyberattacks, coordinate fleets of drones, enable mass surveillance through facial recognition and social data mining, or generate realistic fake videos for political propaganda.
Furthermore, increased automation gives digital systems more physical control, making cyberattacks even more dangerous. Regulation can ensure that AI engineers employ best practices in cybersecurity and limit the distribution of military technology. Given the portability of AI software, enforcing these rules will be difficult and international cooperation will likely be necessary.
Some researchers are concerned that, since algorithms are only as good as the data they are fed, narrow AI can make biased decisions: biases and gaps in the training data will be reflected in the output. One study of a machine-learning program trained on text found that names associated with European-Americans were significantly more likely to be correlated with pleasant terminology than African-American names. AI systems that make consequential decisions, such as hiring job candidates or predicting recidivism, should be screened before being adopted. Regulatory agencies will have to decide whether an AI makes fair decisions, for example by combing through its training data for stereotypes.
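To illustrate what such screening could look like in practice, the sketch below computes a rough word-association gap in the spirit of the study mentioned above. It is only a minimal sketch: the word lists and toy vectors are illustrative placeholders, and a real audit would load the trained embeddings actually under review.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two word vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association_gap(embeddings, group_a, group_b, pleasant, unpleasant):
        # For each name, compare its mean similarity to "pleasant" words with its
        # mean similarity to "unpleasant" words, then report the average difference
        # between the two name groups. A gap far from zero suggests the embeddings
        # encode a stereotype.
        def score(word):
            vec = embeddings[word]
            pos = np.mean([cosine(vec, embeddings[w]) for w in pleasant])
            neg = np.mean([cosine(vec, embeddings[w]) for w in unpleasant])
            return pos - neg
        return np.mean([score(w) for w in group_a]) - np.mean([score(w) for w in group_b])

    # Toy vectors for illustration only; a real screen would use the model's own embeddings.
    rng = np.random.default_rng(0)
    vocab = ["emily", "greg", "jamal", "lakisha", "joy", "love", "agony", "failure"]
    embeddings = {w: rng.normal(size=50) for w in vocab}

    gap = association_gap(
        embeddings,
        group_a=["emily", "greg"],      # illustrative European-American names
        group_b=["jamal", "lakisha"],   # illustrative African-American names
        pleasant=["joy", "love"],
        unpleasant=["agony", "failure"],
    )
    print(f"association gap: {gap:+.3f}")

A regulator running such a screen on genuine embeddings would look for gaps that are consistently and significantly different from zero, rather than relying on any single pair of word lists.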
On the other hand, the possibility of AGI is uncertain but some futurists believe its unchecked consequences could be apocalyptic. Some speculate that an AGI could appear within the next few decades in a so-called hard take-off, where its capabilities increase very rapidly as the program undergoes a process of recursive self-improvement. At the same time, others believe that intelligent agents have intrinsic limitations to augmenting their predictive capabilities autonomously and doomsday scenarios are unlikely, if not provably impossible.
Nonetheless, researchers are already discussing the dangers that machine superintelligence might pose. One thesis claims that an AGI with almost any programmed goal would develop a set of “basic AI drives,” such as self-preservation, self-improvement and resource acquisition. In this model, the AGI would be motivated to spread itself across computer networks and evade programmers. The AGI would leverage its cognitive superpowers to escape containment and achieve self-determination.
For example, the AGI might train itself on psychology and economics textbooks and use personal information about its developers to learn how to bribe its way to freedom. The AGI may then see humans as a threat to its self-preservation and seek to extinguish the human species. 
Researchers have suggested several ways to contain an AGI during testing, which policymakers can use as guidelines for drafting regulations. Containment strategies range from filtering training data for sensitive information to significantly handicapping the development process by, for example, limiting output to simple yes/no questions and answers. Some researchers have suggested dividing containment procedures into light, medium, and heavy categories. Regulations should avoid slowing progress when possible, so the weight of containment should vary with the maturity of the AGI program.
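As a concrete illustration of the first of these strategies, below is a minimal sketch of a pre-training filter that drops documents mentioning sensitive topics. The patterns and corpus are hypothetical examples, not drawn from any published containment protocol.

    import re

    # Hypothetical sensitive patterns: topics a system under test should not see,
    # such as credentials or details about its own operators and source code.
    SENSITIVE_PATTERNS = [
        r"\bpassword\b",
        r"\bapi[_ ]?key\b",
        r"\bsource code\b",
        r"\boperator\b",
    ]

    def filter_training_corpus(documents):
        # Keep only documents that match none of the sensitive patterns.
        compiled = [re.compile(p, re.IGNORECASE) for p in SENSITIVE_PATTERNS]
        return [doc for doc in documents if not any(p.search(doc) for p in compiled)]

    corpus = [
        "A survey of reinforcement learning benchmarks.",
        "The operator password is stored in config.yaml.",  # would be excluded
    ]
    print(filter_training_corpus(corpus))

Real containment regimes would of course go far beyond keyword filtering, but the example shows how the "light" end of the spectrum could be automated with little burden on developers.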
Containment is a short-term solution for AGI testing. In the long run, regulations must ensure that an internet-enabled AGI is indefinitely stable and has benevolent properties such as value learning and corrigibility before being deployed. Value learning is an AGI’s ability to learn what humans value and act in accordance with those values. Corrigibility refers to an AGI’s lack of resistance to bug-fixes or recoding.
One can imagine how an ideal AGI with a conception of justice and solidarity would be beneficial. Such an AGI could replace corrupt governments and biased judicial systems, making decisions according to a democratically determined objective function. Moreover, a sufficiently sophisticated AGI could perform virtually any job done by a human. It is conceivable that the economy would be restructured so that humans are free to pursue their creative passions while AGI drives productivity. As with past technologies, there will also be useful applications that we cannot even foresee.
There are many unknowns in the progress of AI and concerns should be met with due caution. But a fear of the unknown should not stop the advance of responsible AI development. Rather than ignoring researchers’ concerns until the technology is mature, as with nuclear weapons, governments should open dialogue with AI researchers to design regulations that balance practicality with security.
AI is already making our lives easier and its progress will continue to produce useful applications. With the right policies, we can work towards a future where AGI systems are friendly and military AI applications are out of the hands of malicious agents, while the underlying technology continues to be a driver of productivity and innovation. 