In 2050, automated weapons systems (AWS), i.e., weapons systems driven by artificial intelligence that operate with the human out of the loop, will be the norm. The initial deployment of these systems, pre-2050, will be for anti-access/area denial (A2/AD), serving primarily as fixed-station or mobile patrol sentries in areas where civilians are not allowed (e.g., the Korean Demilitarized Zone). By 2050, however, these weapons systems may be ready for deployment as mobile platforms accompanying units of warfighters. The ethical questions today are: 1) are AWS ethical weapons systems, and 2) under what circumstances is it ethically acceptable for fully automated weapons systems to be used in just wars (jus ad bellum) and in ways that protect humanitarian interests (jus in bello)?
The deployment of automated weapons systems poses significant ethical challenges, especially under jus in bello. Chief among them is the question of whether we should possess weapons that can decide which humans to kill. Although such systems may well save the lives of just warfighters, they raise the jus in bello concern of how well an AI weapons system discriminates between combatants and non-combatants.
In this analysis I will describe the ethical obligation of commanders to utilize AWS once they are technologically feasible. I will then examine each of the ethical objections one might raise to the use of AWS and offer a response. In the end, I will produce a set of algorithmic and mathematical criteria, collectively called "policies," for the ethical operation of AWS in combat situations; a minimal illustration of what such a policy might look like follows.
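To make the notion of a "policy" concrete, the sketch below is a hypothetical illustration in Python, not drawn from any fielded system or from this analysis's final criteria. It frames a policy as a decision rule that gates lethal engagement on the system's confidence that a target is a combatant and on the estimated risk to non-combatants, holding fire or deferring to a human otherwise. All names, thresholds, and structure are illustrative assumptions.

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        HOLD_FIRE = "hold_fire"
        DEFER_TO_HUMAN = "defer_to_human"
        ENGAGE = "engage"

    @dataclass
    class Observation:
        combatant_confidence: float  # classifier estimate that target is a combatant, 0..1
        collateral_risk: float       # estimated probability of harm to non-combatants, 0..1

    def policy(obs: Observation,
               min_confidence: float = 0.99,
               max_collateral: float = 0.01) -> Action:
        """Hypothetical discrimination policy: engage only when the system is
        highly confident the target is a combatant AND the expected harm to
        non-combatants falls below a fixed ceiling; otherwise hold or defer."""
        if obs.combatant_confidence < min_confidence:
            return Action.HOLD_FIRE          # discrimination not satisfied
        if obs.collateral_risk > max_collateral:
            return Action.DEFER_TO_HUMAN     # proportionality judgment escalated
        return Action.ENGAGE

    # Example: high confidence but non-trivial collateral risk -> defer to a human.
    print(policy(Observation(combatant_confidence=0.995, collateral_risk=0.05)))

The point of the sketch is structural: a policy in this sense is an auditable mapping from observations to actions, in which the jus in bello constraints of discrimination and proportionality appear as explicit, inspectable conditions rather than implicit behavior.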