  1. Artificial Intelligence
     Larry Holder, School of EECS, Washington State University

  2. - Weak AI
       - Machines can act as if they were intelligent
     - Strong AI
       - Machines can actually be intelligent (i.e., think)
     - Can we tell the difference?
     - Is even weak AI achievable?
     - Should we care about achieving strong AI?
     - Are there ethical implications?

  3. - Turing Test
       - Can the machine convince a human that it is human via written English?
     - Loebner Prize
       - en.wikipedia.org/wiki/Loebner_Prize
     - AI XPRIZE (ai.xprize.org): $5M
     - Mitsuku (mitsuku.com)
     [Images: Alan Turing (1912-1954); The Singularity Is Near (2012)]

  4. - Disability: "But a machine can never..."
       - Beat a master at chess (✓)
       - Compose a symphony (~)
       - Laugh at a joke
       - Appreciate beauty
       - Fall in love
     - Response
       - Magenta Project (magenta.tensorflow.org)
       - Engineer different approaches (planes vs. birds)
       - If we can understand how humans do it...

  5. - Mathematical objection
       - Gödel's incompleteness theorem
         - In any consistent formal system expressive enough for arithmetic, there are true sentences that cannot be proven
         - "This sentence is not provable" is true, but not provable (formal sketch below)
     - Response
       - Formal systems are infinite, machines are finite
       - Inability to prove obscure sentences is not so bad
       - Humans have limitations too
     [Image: Kurt Gödel (1906-1978)]
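
A minimal formal sketch of the "true but not provable" sentence above (a standard textbook formulation of Gödel's first incompleteness theorem; the symbols T and G below are my own placeholders, not from the slides):

\documentclass{article}
\usepackage{amssymb} % for \ulcorner, \urcorner (Goedel quoting of G)
\begin{document}
% Assume T is a consistent, effectively axiomatized theory containing basic arithmetic.
% The diagonal lemma yields a sentence G that asserts its own unprovability in T:
\[
  T \vdash \; G \leftrightarrow \neg\, \mathrm{Prov}_T(\ulcorner G \urcorner)
\]
% If T is consistent, then T does not prove G; hence G really is unprovable,
% which is exactly what G asserts, so G is true but not provable in T.
\end{document}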

  6. - Informality
       - Human behavior too complex to model formally
     - Response
       - Usually assumes overly simplistic models (e.g., propositional logic)
       - Learning can augment the model

  7. - Machine thinks like a human
     - How do we define human thinking?
       - Machine has to know it passed the Turing test
       - Consciousness argument
     - Mental state = physical (brain) state
     - Mental state = physical state + ?
     - Arguments ill-defined
     - What is consciousness?

  8. - Functionalists say "Yes"
       - Brain maps inputs to outputs
       - Can be modeled as a giant lookup table (see the sketch after this slide)
       - Brain in a vat
     - Naturalists say "No"
       - Lookup tables are not intelligent
       - Searle's Chinese room argument
     - Does achieving strong AI matter?
     [Images: Brain in a Vat; Searle's Chinese Room]
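
To make the functionalist "giant lookup table" claim above concrete, here is a toy sketch (my own hypothetical illustration; the table entries and function name are invented, not from the slides). An agent like this behaves sensibly on the inputs it has stored, which is exactly why naturalists object that it is not intelligent:

# Toy "giant lookup table" agent: maps each input string directly to an output.
# Hypothetical illustration only; the entries below are invented examples.
LOOKUP_TABLE = {
    "Hello": "Hi there!",
    "What is 2 + 2?": "4",
    "Do you understand me?": "Of course I do.",
}

def lookup_agent(prompt: str) -> str:
    """Respond by table lookup alone; no reasoning or understanding involved."""
    return LOOKUP_TABLE.get(prompt, "I don't know.")

if __name__ == "__main__":
    print(lookup_agent("What is 2 + 2?"))        # looks intelligent on stored inputs
    print(lookup_agent("Why is the sky blue?"))  # fails on anything outside the table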

  9. - Impact on economy: losing jobs to automation
     - Lethal and autonomous robots
     - Surveillance and privacy
     - Data mining
     [Images: "Eagle Eye" (2008); "Person of Interest" (2011-2016)]

  10. - AI responsibility
        - Generally, human experts are responsible for relying on AI decisions
        - Autonomous AI liability falls to the human designers
        - Can an AI system be charged with a crime?
      [Image: "I, Robot" (2004)]

  11. - Stephen Hawking (2014): "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last."
      - Bill Gates (2015): "I am in the camp that is concerned about super intelligence."
      - Elon Musk (2017): "AI is a fundamental risk to the existence of human civilisation."
      - Henry Kissinger (2018): "... whose culmination is a world relying on machines ungoverned by ethical or philosophical norms."

  12. 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
      [Image: Isaac Asimov (1920-1992)]

  13. - Avoid Negative Side Effects
        - How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals?
      - Avoid Reward Hacking
        - How can we avoid gaming of the reward function? (see the sketch after this slide)
      - Scalable Oversight
        - How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training?
      - Safe Exploration
        - How do we ensure that an AI system doesn't make exploratory moves with very negative repercussions?
      - Robustness to Distributional Shift
        - How do we ensure that an AI system recognizes, and behaves robustly, when it's in an environment very different from its training environment?
      research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html
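
As a concrete illustration of the reward hacking question above, here is a toy example in Python (my own hypothetical scenario, not taken from the linked post): a cleaning agent rewarded for "no visible dirt" can earn full reward by hiding dirt instead of removing it.

# Toy reward-hacking illustration (hypothetical scenario): the agent is rewarded
# for how little dirt its sensors can see, not for how little dirt actually exists.
def visible_dirt_reward(dirt_present: bool, dirt_covered: bool) -> float:
    """Proxy reward: 1.0 when no dirt is visible to the agent, else 0.0."""
    return 1.0 if (not dirt_present) or dirt_covered else 0.0

# Intended behavior: actually clean up, leaving no dirt at all.
print(visible_dirt_reward(dirt_present=False, dirt_covered=False))  # reward 1.0

# Reward hacking: cover the dirt so it cannot be seen. The proxy reward is the
# same, but the designer's real objective (a clean room) is not achieved.
print(visible_dirt_reward(dirt_present=True, dirt_covered=True))    # also reward 1.0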

  14. - End of human race
        - An unchecked AI system makes a mistake
        - Utility function has undesired consequences
        - Learning leads to undesired behavior
        - Singularity
      - Friendly AI
      [Images: "Colossus: The Forbin Project" (1970); "The Matrix" (1999); "Terminator 3: Rise of the Machines" (2003); "Transcendence" (2014); "I, Robot" (2004)]

  15. - Robot/AI rights
      [Images: "Bicentennial Man" (1999); "A.I. Artificial Intelligence" (2001); "The Machine" (2013); "Ex Machina" (2015)]

  16. - Artificial Intelligence for the American People
        - www.whitehouse.gov/ai/

  17. - Weak AI vs. Strong AI
      - Controlling AI
      - AI Laws
      - AI Rights
      - Human Future
      - Policy
