
Governing the AI Revolution. Allan Dafoe, Yale University; Future of Humanity Institute, University of Oxford. (PowerPoint presentation)

  1. Governing the AI Revolution. Allan Dafoe, Yale University; Future of Humanity Institute, University of Oxford; governance.ai.

  2. The AI Governance Problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.

  3. Common Misunderstanding 1: Attention to technological risks implies one believes the technology is net negative or that the risks are probable. In fact, it implies only that there are risks which attention could mitigate.

  5. Near-term Governance Challenges. Safety in critical systems, such as finance, energy systems, transportation, robotics, autonomous vehicles. (Consequential) algorithms that encode values, such as in hiring, loans, policing, justice, social networks. Desiderata: fairness, accountability, transparency, efficiency, privacy, ethics (Hardt). AI impacts on employment, equality, privacy, democracy...

  6. Some Extreme Challenges from Near-Term AI.
  Strategic (Nuclear) Stability: autonomous escalation; counterforce vulnerability from AI intel, cyber, drones; autonomous nuclear retaliation (esp. w/ hypersonics).
  Military Advantage: LAWS, cyber, intel, info operations.
  Surveillance and Control: mass surveillance (sensors, digitally-mediated behavior), intimate profiling, tailored persuasion, repression (LAWS).
  Mass labor displacement and inequality, if AI substitutes for, rather than complements, labor.
  AI Oligopolies: strategic industry and trade, if AI industries are natural global monopolies, due to low/zero marginal costs of AI services, incumbent advantage, high fixed costs from AI R&D.
  Accident/Emergent/Other Risks, from AI-dependent critical systems and transformative capabilities.
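The natural-monopoly claim on this slide follows from simple cost arithmetic: with a high fixed R&D cost and near-zero marginal cost per user, average cost falls indefinitely with scale, so the largest incumbent can always undercut entrants. A minimal sketch (the function name and all numbers are illustrative, not from the talk):

```python
def average_cost(users, fixed_rd_cost=1_000_000.0, marginal_cost=0.01):
    """Average cost per user: fixed R&D cost spread over the user base,
    plus a small per-user marginal cost of serving the AI service."""
    return fixed_rd_cost / users + marginal_cost

# Average cost keeps falling as the user base grows, with no minimum
# efficient scale short of the whole market.
print(average_cost(1_000))        # about 1000.01 per user
print(average_cost(1_000_000))    # about 1.01 per user
print(average_cost(100_000_000))  # about 0.02 per user
```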

  11. Corner-Cutting. "The coordination problem is one thing [we should focus on now]. We want to avoid this harmful race to the finish where corner-cutting starts happening and safety gets cut.... That's going to be a big issue on a global scale, and that's going to be a hard problem when you're talking about national governments." Demis Hassabis, January 2017

  13. [Image-only slide.]

  14. Massive Media Reaction.

  15. National Strategies. [Screenshot: "Pre-Decisional Draft 1.0--For Discussion Purposes Only. China's Technology Transfer Strategy: How Chinese Investments in Emerging Technology Enable A Strategic Competitor to Access the Crown Jewels of U.S. Innovation. Michael Brown and Pavneet Singh, February 2017."]

  16. Epistemic Calibration. "Prediction is very difficult, especially about the future." (attributed to Niels Bohr, and others...) Failure Mode 1: Overconfidence that some specific possibility, X, will happen. Failure Mode 2: Overconfidence that X will not happen. Failure Mode 3: Given uncertainty, dismissing the value of studying X. Lesson: Accept uncertainty and distributional beliefs. Uncertainty does not imply futility.
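The lesson about distributional beliefs can be made concrete with a toy expected-value calculation (all names and numbers here are hypothetical, purely illustrative): even if risk X is assigned only a one-in-eight chance, studying and mitigating it can improve the expected outcome, which is exactly why Failure Mode 3 is a failure.

```python
def expected_value(outcomes):
    """Expected value of a discrete belief, given as (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# A distributional belief about risk X: a 12.5% chance it occurs, costing 1000.
belief = [(0.125, -1000.0), (0.875, 0.0)]

# Mitigation research costs 20 up front and halves the loss if X occurs.
with_mitigation = [(0.125, -500.0 - 20.0), (0.875, -20.0)]

print(expected_value(belief))           # -125.0
print(expected_value(with_mitigation))  # -82.5
# Despite X being improbable, mitigation is worthwhile in expectation:
# uncertainty does not imply futility.
```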
