  1. Honorable Mention at the 25th International Conference on Intelligent User Interfaces (IUI '20)
How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine Learning?
Fumeng Yang 1, Zhuanyi (Yi) Huang 2, Jean Scholtz 3, and Dustin L. Arendt 2
1 Fumeng Yang is with Brown University. She conducted this research as a Ph.D. intern at Pacific Northwest National Laboratory.
2 Zhuanyi Huang and Dustin L. Arendt are with Pacific Northwest National Laboratory.
3 Jean Scholtz retired from Pacific Northwest National Laboratory in September 2018.
Brown Visual Computing Seminar | May 11, 2020

  2. Highlights
• Visual explanations improve end users’ trust in an automated system.
• Such trust must be appropriate.
• The design of visual explanations affects users’ appropriate trust.

  3. “Human-computer trust is defined in this study to be the extent to which a user is confident in, and willing to act on the basis of, the recommendations, actions, and decisions of an artificially intelligent decision aid.” — Madsen and Gregor
Madsen, M., & Gregor, S. (2000, December). Measuring human-computer trust. In 11th Australasian Conference on Information Systems (Vol. 53, pp. 6–8).

  4. Appropriate Trust is the alignment between the perceived and actual performance of the system.
McBride, M., & Morgan, S. (2010). Trust calibration for automated decision aids. Institute for Homeland Security Solutions. [Online]. Available: https://www.ihssnc.org/portals/0/Documents/VIMSDocuments/McBride_Research_Brief.pdf
McGuirl, J. M., & Sarter, N. B. (2006). Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors, 48(4), 656–665.
Marsh, S., & Dibben, M. R. (2005, May). Trust, untrust, distrust and mistrust – an exploration of the dark(er) side. In International Conference on Trust Management (pp. 17–33). Springer, Berlin, Heidelberg.
de Visser, E. J., Cohen, M., Freedy, A., & Parasuraman, R. (2014, June). A design methodology for trust cue calibration in cognitive agents. In International Conference on Virtual, Augmented and Mixed Reality (pp. 251–262). Springer, Cham.

  5. Appropriate Trust

  User Decision | System Recommendation
                | Correct             | Incorrect
  Follow        | Appropriate trust   | Overtrust
  Not follow    | Undertrust          | Appropriate trust

Marsh, S., & Dibben, M. R. (2005, May). Trust, untrust, distrust and mistrust – an exploration of the dark(er) side. In International Conference on Trust Management (pp. 17–33). Springer, Berlin, Heidelberg.

  6. Example: my trust in an iRobot
My confidence that it can clean the floor, and my willingness to let it do the work. Overtrust is when I think it will avoid hitting the wall, but it does not; undertrust is when I think it will hit the wall, but it makes a turn.

  7. Goals
• The relationship between users' trust in a system and visual explanations;
• The effects of different visualization designs on users' trust in machine learning;
• An understanding of users' appropriate trust for proper usage of an automated system.

  8. Experiment
• Materials: example-based explanations
• Experimental variables: instance representation, spatial layout
• Measures: appropriate trust metrics, usability, individual differences
• Task: act as assistant botanists and classify leaves, aided by classifiers with or without visual explanations

  9. Example-based Explanation: “Escape Routes”
The shortest paths to travel to another state (class).
[Figure: a road map of Arkansas, Kentucky, and Tennessee with routes between cities, as an analogy for escape routes between classes]
• k-nearest neighbors graph
• Internal representation of the training set
• Minkowski distance
• A shortest path tree rooted at the input node
• Prune until only leaves may have a different class from the input node
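The steps above can be sketched in plain Python. This is a minimal re-implementation of the stated recipe, not the authors' code; the dataset, the choice of k, and the Minkowski order p below are illustrative.

```python
import heapq

def minkowski(a, b, p=2):
    """Minkowski distance of order p between two feature vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def knn_graph(points, k, p=2):
    """Symmetric k-nearest-neighbors graph as an adjacency dict."""
    adj = {i: {} for i in range(len(points))}
    for i, pt in enumerate(points):
        nearest = sorted(
            (minkowski(pt, q, p), j) for j, q in enumerate(points) if j != i
        )[:k]
        for d, j in nearest:
            adj[i][j] = d
            adj[j][i] = d  # keep edges undirected
    return adj

def shortest_path_tree(adj, root):
    """Dijkstra from the input node; returns parent pointers."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

def prune(parent, labels, root):
    """Keep only root-to-node paths whose interior nodes share the
    root's class, so only leaves may have a different class."""
    keep = {root}
    for v in parent:
        if labels[v] == labels[root]:
            continue
        path, u, ok = [], parent[v], True
        while u is not None:
            if labels[u] != labels[root]:
                ok = False
                break
            path.append(u)
            u = parent[u]
        if ok:
            keep.add(v)
            keep.update(path)
    return keep
```

For a toy 2D dataset with two clusters, the pruned tree keeps the input node, the same-class nodes on the way out, and the nearest other-class nodes as leaves — the "escape routes."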

  10. Instance Representation
To represent each instance in a dataset:
• Images
• Rose charts (“roses”) for feature vectors

  11. Spatial Layout
To arrange instances and illustrate the relationships between them:
• Grid: sort instances within a column by their weighted geodesic distance to the input node
• Tree: use a layered graph layout of the pruned shortest path tree
• Graph: use a force-directed layout algorithm to arrange instances based on their connections

  12. Examples [screenshots: grid, tree, and graph layouts]

  13. Examples [screenshots: grid, graph, and tree layouts]

  14. Interface & Task [screenshot of the study interface]

  15. Measuring Trust in the Classifier
“Participants’ willingness to follow the recommendation and their self-confidence in the decision.”
• Will you follow this recommendation?
• How do you feel about your decision above?
• Was the explanation helpful in making the decision above?
• A linear “Trust Meter” ranging from −100 to +100

  16. Experimental Design
• A complete within-subjects design: each participant finished two instance representations on two different days; three layouts and a control condition (no explanation), e.g., tree + roses, none + images
• A series of trials: 27 trials for each condition; 20 correct, 7 incorrect = 74% accuracy vs. the classifier's 71%; a fixed sequence by MC with randomized instances
• 33 participants from PNNL: 19 female, 14 male; 16 data scientists, 17 others

  17. Data Collected
Trust measures:
• Appropriate trust: correct decision rate
• Overtrust: follow an incorrect recommendation
• Undertrust: not follow a correct recommendation
Also collected: self-confidence, perceived helpfulness, and the trust meter.
8,184 / 7,128 trials; 7,128 = (3+1) layout conditions × 2 representations × 27 trials × 33 participants
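The three trust rates above can be computed directly from per-trial logs. A minimal sketch, assuming each trial records whether the recommendation was correct and whether the participant followed it (the function and variable names are illustrative, not from the paper):

```python
def trust_rates(trials):
    """Trust metrics from (recommendation_correct, followed) pairs.

    Appropriate trust: correct decisions, i.e., following a correct
    recommendation or rejecting an incorrect one.
    Overtrust: following an incorrect recommendation.
    Undertrust: not following a correct recommendation.
    """
    n = len(trials)
    appropriate = sum((c and f) or (not c and not f) for c, f in trials) / n
    overtrust = sum((not c) and f for c, f in trials) / n
    undertrust = sum(c and (not f) for c, f in trials) / n
    return appropriate, overtrust, undertrust
```

By construction the three rates sum to 1, matching the 2×2 breakdown of user decision against recommendation correctness shown earlier.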

  18. Analyses and Results
• Research questions: five research questions (four for this talk)
• Methods: bootstrapped 95% CIs, effect sizes, mixed-effects models for individual differences; aggregated within each participant and subtracted within participants
• Interpretation: summarizing all confidence intervals
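One plausible reading of "bootstrapped 95% CIs" over per-participant scores is a percentile bootstrap of the mean; the paper's exact procedure may differ, so this is only a sketch of the general technique:

```python
import random
import statistics

def bootstrap_ci(values, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-participant scores.

    Resample the participants with replacement n_boot times, take the
    mean of each resample, and read off the alpha/2 and 1 - alpha/2
    percentiles of the resampled means.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = sorted(
        statistics.mean(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Reporting CIs rather than p-values fits the slide's "summarizing all confidence intervals" framing: overlapping or sign-crossing intervals temper the claims made for each RQ below.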

  19. RQ1: Do our visual explanations foster more appropriate trust?
[Figure: differences by subtraction (grid − none, tree − none, graph − none, grid/tree/graph − none), for images and roses, across (a) appropriate trust, (b) overtrust, (c) undertrust, and (d) self-confidence; means and 95% CIs]
All our visual explanations largely increase appropriate trust, decrease overtrust and undertrust, and improve self-confidence.

  20. RQ2: How did the three spatial layouts (grid, tree, and graph) affect users’ trust?
[Figure: differences by subtraction (grid − tree, tree − graph, graph − grid), for images and roses, across (a) appropriate trust, (b) overtrust, (c) undertrust, (d) self-confidence, and (e) helpfulness; means and 95% CIs]
Images: grid explanations are slightly more helpful than tree explanations, which are slightly more helpful than graph explanations.
Roses: tree and graph explanations, especially tree, lead to more appropriate trust than grid explanations.

  21. RQ3: How did the two instance representations (images and roses) affect users’ trust?
[Figure: differences by subtraction (images − roses), for the none, grid/tree/graph, grid, tree, and graph conditions, across (a) appropriate trust, (b) overtrust, (c) undertrust, (d) self-confidence, and (e) helpfulness; means and 95% CIs]
Image-based explanations outperform rose-based explanations on all the dimensions.

  22. RQ4: How did individual differences (e.g., expert vs. non-expert users, prior knowledge, and propensity to trust) affect users’ trust?
[Figure: coefficients of fixed effects (propensity to trust, non-scientists cf. scientists, leaf familiarity, images cf. roses, grid/tree/graph cf. none) for (a) appropriate trust, (b) overtrust, (c) undertrust, and (d) self-confidence]
The strongest effects come from the two experimental variables: images outperform roses, and having a visual explanation outperforms no explanation. The only exception is that non-expert users seem to have more confidence in their decisions.

  23. Summary & Takeaways
• Use a grid layout if the representation is easy to understand; use a tree layout if the representation is difficult to read or its usability is unknown.
• Understanding and trust are related but different.
• Future research should consider appropriate trust instead of simply measuring an increase in users' trust. Overtrust and undertrust should be avoided.

  24. Thank You
“How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine Learning?”
Fumeng Yang fy@brown.edu
Zhuanyi (Yi) Huang zhuanyi.huang@pnnl.gov
Jean Scholtz jean.scholtz@pnnl.gov
Dustin L. Arendt dustin.arendt@pnnl.gov
http://www.fmyang.com/projs/ml-trust
