Five or so actionable tips for building trust and being trustworthy (in interactive learning)



  1. Five or so actionable tips for building trust and being trustworthy (in interactive learning). Stefano Teso, University of Trento, stefano.teso@unitn.it

  2. ML is increasingly being used in sensitive domains like criminal justice, hiring, and so on. Black-box ML models can be whimsical and hard to control. How can we build justifiable trust in black-box ML models?

  3. How do humans establish or reject trust in others?

  4. How do humans establish or reject trust in others? Understanding: trust involves understanding the other’s beliefs and intentions; it depends on perceived understandability and competence [1]. Interaction: trust is updated dynamically, and interaction establishes expectations [2]; it relies on directability. (We have “hardware support” for all of this: theory of mind, mirror neurons, ...) [1] R. Hoffman et al. “Trust in automation”. IEEE Intelligent Systems (2013). [2] Luke Chang et al. “Seeing is believing: Trustworthiness as a dynamic belief”. Cognitive Psychology (2010).

  5. Alas, explainable AI is passive and interactive ML is opaque.

  6. Local Explanations with LIME. This helps to identify potential “Clever Hans” behavior [3], but it does not provide the means to fix it. [3] Sebastian Lapuschkin et al. “Unmasking Clever Hans predictors and assessing what machines really learn” (2019).
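
For concreteness, here is a minimal sketch of the local-surrogate idea behind LIME, written from scratch rather than with the lime package: perturb the instance, weight the perturbations by their proximity to it, and fit a simple linear model whose coefficients act as the explanation. The function name, the Gaussian perturbation scheme, and the Ridge surrogate are illustrative choices, not the exact recipe used in the talk.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(x, predict_proba, num_samples=500, scale=0.5):
    """Explain a black-box prediction around x with a weighted linear model.

    x             : 1-D feature vector of the instance being explained
    predict_proba : black-box function mapping a batch of instances to the
                    probability of the class being explained
    Returns one importance weight per feature (the local explanation).
    """
    rng = np.random.default_rng(0)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=scale, size=(num_samples, x.shape[0]))
    # 2. Query the black box on the perturbations.
    y = predict_proba(Z)
    # 3. Weight perturbations by proximity to x (RBF kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * scale ** 2))
    # 4. Fit an interpretable surrogate locally; its coefficients say which
    #    features pushed the prediction around x, and by how much.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_
```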

  7. Active Learning. The user (a) does not know the model’s beliefs, (b) cannot affect them directly, and (c) has no idea what their feedback does!
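
For reference, a bare-bones pool-based active-learning loop with least-confident uncertainty sampling (the logistic-regression learner, the function name, and the seeding strategy are illustrative; the sketch assumes the random seed set already covers every class). Note how little the human sees: only a stream of isolated query instances, which is exactly the opacity criticized above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_oracle, n_seed=10, n_queries=20):
    """Pool-based active learning with least-confident queries (sketch)."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=n_seed, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        model.fit(X_pool[labeled], y_oracle[labeled])
        proba = model.predict_proba(X_pool[unlabeled])
        # Query the instance with the smallest top-class probability,
        # i.e., the one the model is least confident about.
        query = unlabeled[int(np.argmin(proba.max(axis=1)))]
        labeled.append(query)      # the oracle reveals y_oracle[query]
        unlabeled.remove(query)
    return model.fit(X_pool[labeled], y_oracle[labeled])
```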

  8. Explanatory Active Learning

  9. Explanatory Active Learning [4]: (a) explain predictions (competence, understandability); (b) allow the user to correct explanations (directability). [4] Stefano Teso and Kristian Kersting. “Explanatory interactive machine learning” (2019).

  10. Explanation Corrections. 1. The user’s correction indicates the false-positive segments. 2. The system converts the correction into counterexamples, i.e., fills the marked segments with random values while keeping the same label. Example: a husky predicted right for the wrong reasons. (A sketch of this conversion follows below.)
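
A minimal sketch of that correction-to-counterexample step (illustrative names and a feature-vector setting; the original setting works on image segments): each counterexample keeps the label but randomizes exactly the features the user marked as falsely relevant, signalling that the label does not depend on them.

```python
import numpy as np

def counterexamples_from_correction(x, y, false_positive_mask, X_pool,
                                    n_counterexamples=10, seed=0):
    """Turn a user's explanation correction into counterexamples (sketch).

    x, y                : the explained instance and its (correct) label
    false_positive_mask : boolean mask over features the user marked as
                          wrongly relied upon by the model
    X_pool              : pool of instances used as a source of random values
    """
    rng = np.random.default_rng(seed)
    counterexamples = []
    for _ in range(n_counterexamples):
        x_new = x.copy()
        donor = X_pool[rng.integers(len(X_pool))]
        # Overwrite only the falsely relevant features; keep the label.
        x_new[false_positive_mask] = donor[false_positive_mask]
        counterexamples.append((x_new, y))
    return counterexamples
```

The counterexamples are then added to the training set, pushing the model away from the “Clever Hans” shortcut.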

  11. Explanatory Guided Learning and Beyond

  12. Ongoing work with Teodora Popordanoska and Mohit Kumar (KU Leuven).

  13. By witnessing that the model’s beliefs improve over time, the human “teacher” builds trust in the “student” model.

  14. Problem: nothing prevents the machine from repeatedly choosing instances on which it already does well. This is not far-fetched: the machine does not know how to choose difficult instances; think of high-loss unknown unknowns.

  15. Example: dog vs. wolf, with the machine very certain everywhere.

  16. What about unknown unknowns?

  17. Active learning does not help with unknown unknowns; the uncertainty estimates may be wrong too.

  18. Idea: let the user choose the challenging instances. This is what professors do when testing students.

  19. Piggy-back on Guided Learning: the machine chooses a rare label and the user searches for an example of it. This is useful for tackling class imbalance, where active learning fails. (A sketch follows below.)
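
A tiny sketch of that guided-learning-style query (the function name and the “pick the least-seen class” heuristic are assumptions made for illustration): instead of selecting an instance, the machine selects the class it has the fewest examples of and asks the human to go find one.

```python
from collections import Counter

def rarest_label_request(y_labeled, classes):
    """Return the class the learner has seen least often (sketch).

    The human is then asked to search for an example of that class and
    label it, rather than labeling a machine-chosen instance."""
    counts = Counter(y_labeled)
    return min(classes, key=lambda c: counts.get(c, 0))

# Example: with y_labeled = ["dog", "dog", "dog", "wolf"] and
# classes = ["dog", "wolf"], the request is "wolf".
```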

  20. The human teacher is blind: • it is impossible to establish justifiable trust; • she may provide examples that teach nothing new to the machine. How can we expect the human to provide useful examples? [5] [5] Interactive machine teaching with black-box models shows that blind teachers cannot do better than random teachers.

  21.–24. Explanatory Guided Learning (figure slides; the images are not captured in this transcript).

  25. Results.

  26. Plan: 1. polish the experiments with an “imperfect user”; 2. run a case study with real users (!); 3. hook up iterative machine teaching theory.

  27. A “Theory of Mind” for Machine-Human Teams: mutual understanding guides trust, learning, and teaching.

