  1. RESPONSIBLE ARTIFICIAL INTELLIGENCE - A GLASS BOX APPROACH
     Prof. Dr. Virginia Dignum
     Chair of Social and Ethical Artificial Intelligence - Department of Computer Science
     Email: virginia@cs.umu.se - Twitter: @vdignum

  2. RESPONSIBLE AI: WHY CARE?
     • AI systems act autonomously in our world
     • Eventually, AI systems will make better decisions than humans
     • AI is designed; it is an artefact
     • We need to be sure that the purpose put into the machine is the purpose which we really want (Norbert Wiener, 1960, as quoted by Stuart Russell)
     (illustration: King Midas, c. 540 BCE)

  3. TAKING RESPONSIBILITY
     • Responsibility / Ethics in Design
       o Ensuring that development processes take into account the ethical and societal implications of AI as it integrates with and replaces traditional systems and social structures
     • Responsibility / Ethics by Design
       o Integration of ethical abilities as part of the behaviour of artificial autonomous systems
     • Responsibility / Ethics for Design(ers)
       o Research integrity of researchers and manufacturers, and certification mechanisms

  4. TAKING RESPONSIBILITY
     • Responsibility / Ethics in Design
       o Ensuring that development processes take into account the ethical and societal implications of AI as it integrates with and replaces traditional systems and social structures
     • Responsibility / Ethics by Design
       o Integration of ethical abilities as part of the behaviour of artificial autonomous systems
       o Can we guarantee that behaviour is ethical?
     • Responsibility / Ethics for Design(ers)
       o Research integrity of researchers and manufacturers, and certification mechanisms

  5. ETHICS BY DESIGN
     Can AI artefacts be built to be verifiably ethical?
     • What does that mean?
     • What is needed?
     • Which values?
     • Whose values?
     • Which ethical rules?
     • Which interpretation?

  6. VALUES IN CONTEXT
     Fairness? (three images, each asking what fairness means in a different context)

  7. DECISIONS MATTER!
     Design for Values diagram: values (fairness) → interpretation → norms (equal resources, equal opportunity, …) → concretization → functionalities (…)

  8. DECISIONS MATTER!
     Design for Values diagram: values (safety) → interpretation → norms (limit speed, ensure crash-worthiness, …) → concretization → functionalities (…)
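
Not part of the slides, but a minimal sketch of how the value → interpretation → norm → concretization → functionality chain on slides 7 and 8 could be recorded explicitly as a design artefact. All class and field names here are my own hypothetical choices, not Dignum's notation.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative (hypothetical) record of the Design for Values refinement chain,
# so that every interpretation and concretization decision stays documented.

@dataclass
class Norm:
    statement: str                                             # e.g. "limit speed"
    functionalities: List[str] = field(default_factory=list)   # concrete features realizing the norm

@dataclass
class Value:
    name: str                                                  # e.g. "safety" or "fairness"
    norms: List[Norm] = field(default_factory=list)

    def trace(self) -> List[str]:
        """Human-readable value -> norm -> functionality links for design documentation."""
        links = []
        for norm in self.norms:
            for func in norm.functionalities or ["<not yet concretized>"]:
                links.append(f"{self.name} -> {norm.statement} -> {func}")
        return links

# Example mirroring slide 8: safety interpreted as speed limits and crash-worthiness.
safety = Value("safety", norms=[
    Norm("limit speed", ["software speed governor"]),
    Norm("ensure crash-worthiness", ["crumple zones", "crash-test requirements"]),
])
print("\n".join(safety.trace()))
```

The point of such a record is traceability: each functionality can be traced back to the norm and value it is meant to realize, which is what the "question, motivate, document" guidelines on the next slide ask for.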

  9. GUIDELINES – BE OPEN AND EXPLICIT
     • Question your options and choices
     • Motivate your choices
     • Document your choices and options
     • Compliance
       o External monitoring and control
       o Norms and institutions
     • Engineering principles for policy
       o Analyze – synthesize – evaluate – repeat
     https://medium.com/@virginiadignum/on-bias-black-boxes-and-the-quest-for-transparency-in-artificial-intelligence-bcde64f59f5b

  10. ASK YOURSELF
      • Who will be affected?
      • What are the decision criteria we are optimising for?
      • How are these criteria justified?
      • Are these justifications acceptable in the context we are designing for?
      • How are we training our algorithm?
        o Does the training data resemble the context of use?
      IEEE P7003 standard on algorithmic bias: https://standards.ieee.org/project/7003.html

  11. ALGORITHMS - THE BLACK BOX?
      (diagram: input → black box → output)

  12. GOVERNANCE - THE GLASS BOX
      (diagram: input → glass box → output)

  13. GOVERNANCE - THE GLASS BOX
      (diagram: value "Fairness" → norm "Equal opportunity" → concretization "Minimize pre-existing bias", applied as checks over the system's input and output)

  14. GOVERN AND VERIFY - GLASS BOXES
      • Verify limits to action and decision
      • Define the ethical borders
        o Formal principles
        o Monitoring input–output compliance
      • Governance
        o Monitor compliance with the principles
        o "Block" undesirable inputs and outputs
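
Not from the presentation, but as an illustrative sketch of the monitor-and-block idea on slides 12 to 14: a wrapper that leaves the inner model opaque while checking every input and output against explicitly declared principles. The class, the example principle and the dummy model below are hypothetical placeholders.

```python
from typing import Any, Callable, Dict, List

# Hypothetical glass-box wrapper: the inner model remains a black box, but all
# inputs and outputs are checked against declared principles, logged, and
# blocked when they fall outside the ethical borders.

Check = Callable[[Dict[str, Any]], bool]          # returns True when the principle holds

class GlassBox:
    def __init__(self, model: Callable[[Dict[str, Any]], Any],
                 input_checks: List[Check], output_checks: List[Check]):
        self.model = model
        self.input_checks = input_checks
        self.output_checks = output_checks
        self.log: List[str] = []                  # audit trail supporting transparency

    def decide(self, request: Dict[str, Any]) -> Any:
        for check in self.input_checks:
            if not check(request):
                self.log.append(f"blocked input: {request}")
                raise ValueError("input violates a declared principle")
        result = self.model(request)
        for check in self.output_checks:
            if not check({"input": request, "output": result}):
                self.log.append(f"blocked output: {result}")
                raise ValueError("output violates a declared principle")
        self.log.append(f"compliant decision: {result}")
        return result

# Example principle (an assumption, not from the slides): the protected
# attribute "gender" must not appear among the model's input features.
no_protected_input: Check = lambda request: "gender" not in request

box = GlassBox(model=lambda request: {"decision": "invite to interview"},
               input_checks=[no_protected_input], output_checks=[])
print(box.decide({"cv_score": 0.8}))
```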

  15. EXAMPLE - FAIRNESS
      • Value: Fairness (Dutch law)
      • Norm: Equal opportunity
      • Implementation: output evaluation (university employment agreements)
        o P(job | female) = P(job | male)   (1)
      • Governance: monitor compliance with (1)
        o Cut-off
        o Flag-out
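
One possible reading of criterion (1), P(job | female) = P(job | male), is a statistical-parity check over the system's outputs. The sketch below uses made-up record fields and an assumed tolerance threshold; the slide itself specifies neither.

```python
from typing import Dict, List

# Illustrative output evaluation for criterion (1): compare the rate of positive
# ("job") outcomes per gender and flag the batch when the gap exceeds a tolerance.

def positive_rate(decisions: List[Dict[str, str]], gender: str) -> float:
    group = [d for d in decisions if d["gender"] == gender]
    return sum(d["outcome"] == "job" for d in group) / len(group) if group else 0.0

def parity_gap(decisions: List[Dict[str, str]]) -> float:
    return abs(positive_rate(decisions, "female") - positive_rate(decisions, "male"))

decisions = [
    {"gender": "female", "outcome": "job"},
    {"gender": "female", "outcome": "no job"},
    {"gender": "male", "outcome": "job"},
    {"gender": "male", "outcome": "job"},
]

TOLERANCE = 0.05   # assumed cut-off; in practice this threshold must itself be justified and documented
gap = parity_gap(decisions)
print(f"gap = {gap:.2f}:", "flag for review" if gap > TOLERANCE else "within tolerance")
```

A plausible mapping to the slide's two governance options is that "cut-off" rejects decisions from a non-compliant batch outright, while "flag-out" surfaces them for human review.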

  16. GOVERNANCE TRANSPARENCY (extending T. C. King et al., AAMAS 2015)
      We can also check the consistency of institutions!

  17. RESPONSIBLE DESIGN – ART OF AI
      • Principles for Responsible AI = ART
        o Accountability
          - Explanation and justification
          - Design for values
        o Responsibility
          - Autonomy
          - Chain of responsible actors
        o Transparency
          - Data and processes
          - Algorithms

  18. ART METHODOLOGY
      • Socially accepted
        o Participatory
      • Ethically acceptable
        o Ethical theories and human values
      • Legally allowed
        o Laws and regulations
      • Engineering principles
        o Cycle: Analyse – synthesize – evaluate – repeat
        o Report: Identify, Motivate, Document
      https://medium.com/@virginiadignum/on-bias-black-boxes-and-the-quest-for-transparency-in-artificial-intelligence-bcde64f59f5b

  19. TAKE AWAY MESSAGE
      • AI influences and is influenced by our social systems
      • Design is never value-neutral
      • Society shapes and is shaped by design
        o The AI systems we develop
        o The processes we follow
        o The institutions we establish
      • Openness and explicitness are key!
        o Accountability, Responsibility, Transparency
      • AI systems are artefacts built by us for our own purposes
      • We set the limits
      Center for Responsible AI @Umeå: a research institute dedicated to developing AI systems that meet their social responsibility:
        - Understand social implications
        - Develop theories, models and tools for oversight, accountability and verification
        - Methods to design, measure and audit social implications
      http://people.cs.umu.se/virginia
      We are hiring!!
