Creating Moral Robots
Kiah Breidenbach
Important Terms
- Machine Ethics
- Superintelligence
Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Added later:
- 0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
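The laws above form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. A minimal sketch of that idea, not from the presentation; the `Action` fields and law names are hypothetical illustrations:

```python
# Sketch: Asimov's laws as a priority-ordered rule list. An action is judged
# by the highest-priority law it violates; lower-priority laws never override
# higher ones. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Laws listed from highest priority (Zeroth) down to lowest (Third).
LAWS = [
    ("Zeroth: do not harm humanity", lambda a: a.harms_humanity),
    ("First: do not harm a human", lambda a: a.harms_human),
    ("Second: obey human orders", lambda a: a.disobeys_order),
    ("Third: protect own existence", lambda a: a.endangers_self),
]

def first_violated_law(action: Action):
    """Return the highest-priority law the action violates, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

# An order that would harm a human is judged under the First Law,
# even though refusing the order also breaks the Second Law.
print(first_violated_law(Action(harms_human=True, disobeys_order=True)))
```

The ordering of `LAWS` is what encodes the hierarchy: the loop returns at the first match, so a First Law violation is reported before a Second Law one.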
What Is A Moral Robot?
- A robot with one or more competencies considered important for living in a moral community
- A robot with the capacity for moral judgment
How?
- Ethical theories
- Legal principles
- Moral competence
In Favor of Creating Moral Robots
- “The greater the freedom of a machine, the more it will need moral standards” (Rosalind Picard, MIT Affective Computing Lab)
- There is already a need for moral robots
- They could be capable of teaching humans more about ethics and making moral decisions
Need For Moral Robots
Arshia Khan with Robot Pepper
The U.S. military is already pursuing moral robots
Opposing View
Some experts warn that unsupervised advancements in AI and robotics could lead to the end of the human race.
- Results are indeterminable, with unintended effects
- Large-scale harm due to crude ethical assessments
- Mistakes in code could have severe consequences
- Robots would be allowed to make decisions without human supervision
Case Study
Suppose moral robots were created… A moral robot is tasked with monitoring a person in a nursing home to lessen the workload of human staff.
Kantian Analysis
- Motive: helping the patient, lessening discomfort
- Universal moral rule: if you have the ability to help someone, you have the responsibility to do so.
From a Kantian standpoint, the robot was indeed acting ethically, and the implementation of moral robots was an ethically correct decision.
Virtue Ethics
Were human staff available to care for the patient, they would have made the same decision. We can conclude that the robot was acting virtuously and thus made the ethically correct decision.
Act Utilitarian Analysis
From an act utilitarian standpoint, implementation of moral robots is the ethically correct choice
With moral robots:
- Robot was able to administer additional pain medication, benefiting the patient: +2
- Human staff are not overworked: +1
- Moral robots in conjunction with human staff ensure all patients receive adequate care: +3
- Robot could have an unforeseen error in its code, resulting in indeterministic results: -3
Total: +3

Without moral robots:
- No one was available to help the patient, who remained in pain longer than necessary: -2
- Human staff are overworked due to lack of staff: -1
- Lack of staff results in inadequate care of patients: -3
- Potential harms of moral robots are avoided: +3
Total: -3
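The act-utilitarian tally above is just a signed sum per scenario, compared to pick the higher total. A minimal sketch of that arithmetic, using the presentation's own utility estimates (the dictionary names are illustrative):

```python
# Act-utilitarian tally: sum signed utilities for each scenario and
# choose the scenario with the greater total happiness.
with_robots = {
    "robot administers additional pain medication": +2,
    "human staff are not overworked": +1,
    "robots plus staff ensure adequate care for all patients": +3,
    "possible unforeseen error in robot code": -3,
}
without_robots = {
    "patient remains in pain longer than necessary": -2,
    "human staff are overworked": -1,
    "lack of staff results in inadequate care": -3,
    "potential harms of moral robots are avoided": +3,
}

total_with = sum(with_robots.values())        # +2 +1 +3 -3 = +3
total_without = sum(without_robots.values())  # -2 -1 -3 +3 = -3
better = "with robots" if total_with > total_without else "without robots"
print(total_with, total_without, better)
```

Since +3 > -3, the act-utilitarian calculus favors implementing moral robots, matching the slide's conclusion.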
In My Opinion:
As long as potential threats can be sufficiently mitigated and the robots do not possess autonomy, a sense of self, or personal emotions, I think moral robots could have a powerful positive impact on our world.