Lethality and Autonomous Robots: An Ethical Stance


Ronald C. Arkin and Lilia Moshkina
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
{arkin,lilia}@cc.gatech.edu

Abstract

This paper addresses a difficult issue confronting the designers of intelligent robotic systems: their potential use of lethality in warfare. As part of an ARO-funded study, we are currently investigating the points of view of various demographic groups, including researchers, regarding this issue, as well as developing methods to engineer ethical safeguards into the use of these systems on the battlefield.

1. Introduction

Battlefield ethics has for millennia been a serious question and constraint for the conduct of military operations by commanders, soldiers, and politicians, as evidenced, for example, by the creation of the Geneva Conventions, the production of field manuals to guide appropriate conduct for the warfighter on the battlefield, and the development and application of specific rules of engagement for a given military context.

Breaches of military ethical conduct often have extremely serious consequences, both politically and pragmatically, as evidenced recently by the Abu Ghraib and Haditha incidents in Iraq, which can be viewed as increasing the risk to U.S. troops there, in addition to the concomitant damage to the United States' public image worldwide.

If the military continues at its current rapid pace toward the deployment of intelligent autonomous robots, we must ensure that these systems are deployed ethically, in a manner consistent with standing protocols and other ethical constraints drawn from cultural relativism (our own society's or the world's ethical perspectives), deontology (rights-based approaches), or other related ethical frameworks.
Under the assumption that warfare, unfortunately and inevitably, will continue into the foreseeable future in different guises, the question arises as to how the advent of autonomous systems on the battlefield will affect the conduct of war. There already exist numerous conventions, laws of war, military protocols, codes of conduct, and rules of engagement, sometimes global in their application and at other times contextual, which are used to constrain or guide a human warfighter. Historically, mankind has often been unable to adhere to these rules and laws, resulting in serious violations and war crimes.

Can autonomous systems do better? In this paper, we examine the underlying thesis that robots can ultimately be more humane than human beings in military situations, potentially resulting in a significant reduction of ethical violations. This class of autonomous robots that maintain an ethical infrastructure to govern their behavior will be referred to as humane-oids.

2. Understanding the Ethical Aspects of Lethal Robots

As the Army's Future Combat System (FCS) moves closer to deployment, including weapons-bearing successors to DARPA's Unmanned Ground Combat Vehicle program, serious questions arise as to just how and when these robotic systems should be deployed. There are essentially two different cases:

1. The robot as an extension of the warfighter. In this relatively straightforward application, the human operator/commander retains all of the decisions regarding the application of lethality, and the robot is in essence a tool or weaponized extension of the warfighter. In this case, it appears clear that conventional ethical decision-making regarding the use of weaponry applies. A human remains in control of the weapons system at all times.

2. The robot acting as an autonomous agent.
Here, the robot reserves the right to make its own local decisions regarding the application of lethal force directly in the field, without requiring human consent at that moment, while acting either in direct support of an ongoing military mission or for its own self-preservation. The robot may be tasked to conduct a mission that possibly includes the deliberate destruction of life. The ethical aspects of this sort of autonomous robot are unclear at this time, and they serve as the focal point of this article.

In order to fully understand the consequences of deploying autonomous machines capable of taking human life under military doctrine and tactics [1,2], a systematic ethical evaluation needs to be conducted to guide users (e.g., warfighters), system designers, policymakers, and commanders regarding the intended future use of this technology. This study needs to be conducted prior to the deployment of these systems, not as an afterthought.

Toward that end, a three-year research effort on this topic is being conducted in our laboratory for the Army Research Office; we are currently in the first year. Two topics are being investigated:

(1) What is acceptable? Can we understand, define, and shape expectations regarding battlefield robotics? A survey spanning the public, robotics researchers, policymakers, and military personnel is being conducted to ascertain the current point of view maintained by various demographic groups on the use of lethality by autonomous systems.

(2) What can be done? Artificial conscience and reflection.
We are designing a computational implementation of an ethical code within an existing autonomous robotic system, i.e., an "artificial conscience", that will be able to govern an autonomous system's behavior in a manner consistent with the rules and laws of war.

This paper focuses on the survey-related procedural aspects of this work, as the design and software implementation of an ethical code will be conducted in years 2 and 3 of the project. It is also too early to report the survey results, since the survey is still open and we want to ensure that experimental bias is removed as far as possible from the results. When the survey is closed, the results will be reported in a future article.

3. Survey Design

A web-based public opinion survey is currently being conducted to establish what is acceptable to the