Sensitivity to risk profiles of users when developing AI systems


SLIDE 1

Sensitivity to risk profiles of users when developing AI systems

Robin Cohen Rishav Agarwal Dhruv Kumar Alexander Parmentier Tsz Him Leung Cheriton School of Computer Science

SLIDE 2

Position Paper

  • Message about Trusted AI
  • Engender trust from users and organizations
  • Includes explainability, transparency, fairness, safety and ethics
  • Differing solutions for different users
  • Not one size fits all
  • User risk tolerance a factor
SLIDE 3

Motivation

Image: xkcd.com

SLIDE 4

Trusted AI

  • Think beyond a homogeneous user base
  • Risk profiles of users important
  • One factor: risk-averse users may require more explanation
  • Diverse solutions not just for trust but for fairness

Image: boston.com

SLIDE 5

Background

  • Others advocate personalization (user preferences matter)
  • Explainable AI (Anjomshoae et al.): context-awareness important
  • Trust in robots (Rossi et al.): differing user tolerances and emotions
  • Elements of trustworthiness (Mayer et al.): risk-taking perceptions


SLIDE 6

Background

  • Inter-related concerns of fairness, explainability, trust (Cohen et al.)
  • AI solutions to these problems matter
  • Dedicated effort examining planning and explainability (Kambhampati and coauthors)
  • Trading cost of computation vs. serving the user
  • e.g. less accurate but more explainable


SLIDE 7

Trust and Risk Profiles: Our Models

  • Two models, building on the Kambhampati approach (Sengupta et al., AAMAS 2019 Trust workshop)
  • Game theoretic: reasoning about costs and actions

Explainability assumes a cost

  • Observe vs. execute
  • Agent's reasoning is in terms of user risk profiles

Allowing risky plans

  • Build up trust
  • Allow risk to be updated

SLIDE 8

Game-Theoretic Models

SLIDE 9

Explainability, Cost and Risk Profiles

  • Agent has a model of the Human's assessment of the Agent
  • Use risk profile as a proxy for mental models
  • Risk profile is perceived
  • Consider costs of planning, explaining and not achieving goals

SLIDE 10

Explainability, Cost and Risk Profiles

  • π΅π‘•π‘“π‘œπ‘’ 𝐷𝑝𝑑𝑒𝑑
  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘›π‘π‘™π‘—π‘œπ‘• π‘žπ‘šπ‘π‘œ π·π‘ž

𝐡 πœŒπ‘„ .

  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘“π‘¦π‘žπ‘šπ‘π‘—π‘œπ‘—π‘œπ‘• 𝑗𝑑 𝐷𝐹

𝐡 πœŒπ‘„ .

  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘“π‘¦π‘žπ‘šπ‘π‘—π‘œπ‘—π‘œπ‘• π‘£π‘œπ‘’π‘—π‘š 𝑏 π‘žπ‘π‘ π‘’π‘—π‘π‘š π‘žπ‘šπ‘π‘œ ො

πœŒπ‘ž

  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘œπ‘π‘’ π‘π‘‘β„Žπ‘—π‘“π‘€π‘—π‘œπ‘• π‘•π‘π‘π‘š 𝐻 𝑗𝑑 𝐷 ΰ· 

𝐻 𝐡.

  • 𝑋𝑓 π‘‘π‘π‘œ 𝑏𝑑𝑑𝑣𝑛𝑓 π‘’β„Žπ‘π‘’ π‘’β„Žπ‘“ 𝑑𝑏𝑔𝑓𝑑𝑒 π‘žπ‘šπ‘π‘œ π‘’π‘π‘“π‘‘π‘œβ€²π‘’ β„Žπ‘π‘€π‘“ 𝑏 𝑑𝑝𝑑𝑒 𝑝𝑔 π‘”π‘π‘—π‘šπ‘£π‘ π‘“

10

SLIDE 11

Explainability, Cost and Risk Profiles

  • πΌπ‘£π‘›π‘π‘œ 𝐷𝑝𝑑𝑒𝑑
  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘π‘π‘‘π‘“π‘ π‘€π‘—π‘œπ‘• π‘’β„Žπ‘“ π‘žπ‘šπ‘π‘œ π‘£π‘œπ‘’π‘—π‘š 𝑑𝑝𝑛𝑓 π‘žπ‘šπ‘π‘œ β„Žπ‘π‘‘ π‘π‘“π‘“π‘œ 𝑓𝑦𝑓𝑑𝑣𝑒𝑓𝑒 𝑗𝑑 π·π‘ž

𝐼 ො

πœŒπ‘ž .

  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘π‘π‘‘π‘“π‘ π‘€π‘—π‘œπ‘• 𝑏𝑒 π‘’β„Žπ‘“ π‘“π‘œπ‘’π‘‘ 𝐷𝐹

𝐼 πœŒπ‘„ .

  • 𝐷𝑝𝑑𝑒 𝑝𝑔 π‘œπ‘π‘’ π‘π‘‘β„Žπ‘—π‘“π‘€π‘—π‘œπ‘• π‘•π‘π‘π‘š 𝑗𝑑 𝐷 ΰ· 

𝐻 𝐼.

  • 𝑆𝑗𝑑𝑙 𝑝𝑔 π‘“π‘¦π‘“π‘‘π‘£π‘’π‘—π‘œπ‘• 𝑏 π‘žπ‘šπ‘π‘œ 𝑆𝐼 πœŒπ‘„

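The agent-side and human-side cost terms above can be collected into a small data model for experimentation. A minimal sketch in Python; the class and field names are our own invention, not notation from the paper:

```python
from dataclasses import dataclass

@dataclass
class AgentCosts:
    """Agent-side costs (symbols follow the slides)."""
    plan: float             # C_p^A(pi): cost of making plan pi
    explain: float          # C_E^A(pi): cost of explaining plan pi
    explain_partial: float  # C_E^A(pi_hat): cost of explaining up to partial plan pi_hat
    goal_failure: float     # C~_G^A: cost of not achieving goal G

@dataclass
class HumanCosts:
    """Human-side costs (symbols follow the slides)."""
    observe_partial: float  # C_p^H(pi_hat): cost of observing until pi_hat has executed
    observe_full: float     # C_E^H(pi): cost of observing to the end
    goal_failure: float     # C~_G^H: cost of the goal not being achieved
    risk: float             # R^H(pi): risk of executing plan pi
```

Keeping the two sides in separate records mirrors the slides' split between what the Agent pays and what the Human pays.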

SLIDE 12

Explainability, Cost and Risk Profiles

  • Observe, Not Execute: Safe plan
  • Observe and Execute: Any plan

SLIDE 13

Explainability, Cost and Risk Profiles

  • Risk Averse, Observe Not Execute: cost of achieving the goal must at least be greater than the cost of explaining the rest of the task
  • Risk Averse, Observe and Execute: any plan

SLIDE 14

Explainability, Cost and Risk Profiles

  • Risk Taking, Observe Not Execute: risk must be less than the cost of not achieving the goal
  • Risk Taking, Observe and Execute: any plan
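The conditions on the two preceding slides amount to threshold checks on the cost terms. The sketch below is one hedged reading of those rows; the function name, the mapping of conditions to modes, and the argument names are our assumptions, not the paper's procedure:

```python
def choose_mode(risk_averse: bool,
                cost_goal: float,          # cost of achieving the goal
                cost_explain_rest: float,  # cost of explaining the rest of the task
                risk: float,               # R^H(pi): risk of executing plan pi
                cost_goal_failure: float   # C~_G^H: cost of not achieving the goal
                ) -> str:
    """Pick between 'observe, not execute' and 'observe and execute'."""
    if risk_averse:
        # Risk-averse reading: execute only when achieving the goal costs
        # more than explaining the rest of the task; otherwise keep observing.
        if cost_goal > cost_explain_rest:
            return "observe and execute"
        return "observe, not execute"
    # Risk-taking reading: execute any plan whose risk is below the
    # cost of not achieving the goal at all.
    if risk < cost_goal_failure:
        return "observe and execute"
    return "observe, not execute"
```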

SLIDE 15

Explainability, Cost and Risk Profiles

[Diagram: the user's risk profile drives the choice among Explain, Observe, and Plan and Execute]

SLIDE 16

Trust Boundaries and Risk Profiles

  • Human's lack of trust suggests a safe plan (Sengupta et al. 2019)
  • Trust boundary ensures the Agent does not execute a risky plan
  • Yet a riskier, lower-cost plan might be preferred by the user
  • If trust has built up enough to take that risk

SLIDE 17

Trust Boundaries and Risk Profiles

Trust boundary

SLIDE 18

Trust Boundaries and Risk Profiles

Trust boundary


  • Human will reason: the cost of executing the plan is considered
  • Allows going beyond the Agent simply modeling User trust for its decisions
  • Progressive updates of user profiles and trust should be possible
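One way to realize "progressive updates" of a profile is a running estimate of the user's risk tolerance that is nudged after each observed plan outcome. The slides do not specify an update rule, so the function below is purely illustrative:

```python
def update_risk_tolerance(tolerance: float, plan_succeeded: bool,
                          step: float = 0.1) -> float:
    """Nudge an estimated risk tolerance in [0, 1] after one plan outcome.

    Successful risky plans build trust (tolerance rises); failures erode it.
    The fixed step size is an illustrative choice, not from the paper.
    """
    tolerance += step if plan_succeeded else -step
    # Clamp to the valid range so repeated updates stay well-defined.
    return min(1.0, max(0.0, tolerance))
```

A smoother scheme (e.g. a Bayesian estimate) would fit the same interface; the point is only that the Agent's model of the user need not be static.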
SLIDE 19

Fairness


SLIDE 20

Fairness and Differing User Tolerances

  • User preferences for fairness and explainability also an issue
  • For example: in hiring, users may be very risk averse to unfairness
  • People's positions on algorithmic fairness will be an influence

Image: offthemark.com

SLIDE 21


Fairness and Differing User Tolerances

  • The most accurate solution may not be the most fair
  • Concerns with bias to be taken into consideration
  • Key importance of which definition of fairness is at hand for user
  • Differing preferences need to be considered
SLIDE 22

Fairness and Differing User Tolerances

  • Current models designed to be more accurate than fair
  • Some users have less risk aversion to unfairness
  • e.g. more concerned with explainability
  • Again drives towards knowing user preferences
  • Risk tolerance can continue to be a determiner
  • Metrics: disparate impact (independent attributes), individual fairness (equal opportunity), equalized odds (favors majority)
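The metrics named in the last bullet have standard quantitative forms. A toy sketch using textbook definitions of disparate impact and the equalized-odds gap (the definitions are the commonly used ones, not taken from the slides):

```python
def disparate_impact(pos_rate_minority: float, pos_rate_majority: float) -> float:
    """Ratio of positive-outcome rates between groups.

    The common "80% rule" flags values below 0.8 as potentially unfair.
    """
    return pos_rate_minority / pos_rate_majority

def equalized_odds_gap(tpr_a: float, tpr_b: float,
                       fpr_a: float, fpr_b: float) -> float:
    """Largest gap in true/false positive rates between two groups.

    A gap of 0 means the classifier satisfies equalized odds exactly.
    """
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```

Which of these a user cares about is exactly the kind of preference, and risk tolerance, the slides argue the system should model.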

SLIDE 23

Outstanding Concerns

  • Acquiring user profiles
  • Important to consider elicitation across contexts
  • Engendering trust varies according to user tolerances
  • Expand the concept of risk aversion
  • Consider a collection of user profile preferences


SLIDE 24

Conclusion

  • Continue to imagine personalized trusted AI solutions
  • Leverage the important concern of risk profiles
  • Tradeoffs in accuracy, explainability, fairness and other desiderata
  • Some suggested models for reasoning and decision making by agents


SLIDE 25

References

  • Anjomshoae, S., Framling, K., Najjar, A.: Explanations of black-box model predictions by contextual importance and utility. In: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems. pp. 95–109. Springer (2019)
  • Cohen, R., Schaekermann, M., Liu, S., Cormier, M.: Trusted AI and the contribution of trust modeling in multiagent systems. In: Proceedings of AAMAS. pp. 1644–1648 (2019)
  • Kambhampati, S.: Synthesizing explainable behavior for human-AI collaboration. In: Proceedings of AAMAS. pp. 1–2. Richland, SC (2019)
  • Mayer, R.C., Davis, J.H., Schoorman, F.D.: An integrative model of organizational trust. Academy of Management Review 20(3), 709–734 (1995)
  • Rossi, A., Holthaus, P., Dautenhahn, K., Koay, K.L., Walters, M.L.: Getting to know Pepper: Effects of people's awareness of a robot's capabilities on their trust in the robot. In: Proceedings of the 6th International Conference on Human-Agent Interaction. pp. 246–252. ACM (2018)
  • Sengupta, S., Zahedi, Z., Kambhampati, S.: To monitor or to trust: Observing robot's behavior based on a game-theoretic model of trust. In: Proc. Trust workshop at AAMAS 2019 (2019)

SLIDE 26

Questions?

  • Robin Cohen: rcohen@uwaterloo.ca
  • Rishav Raj Agarwal: http://rishavrajagarwal.com (rragarwal@uwaterloo.ca)
  • Dhruv Kumar: d35kumar@uwaterloo.ca
  • Alexander Parmentier: aparmentier@uwaterloo.ca
  • Tsz Him Leung: th4leung@uwaterloo.ca


SLIDE 27

Agent Decision Procedure

  • Key factor (cost of risk less than cost of not achieving goal): R^H(π) < C̃_G^H

  • Focus on explainability at expense of accuracy (optimality) with a risk averse human
  • Allow Human more agency (mixed-initiative dialogue)
  • Agent could reason at each step of the plan
  • Is the cost of achieving the goal more than the cost of explanation?
  • User risk profile model can be updated
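The per-step reasoning above can be sketched as a single decision function built around the key factor R^H(π) < C̃_G^H. Names and structure are our illustration, not the authors' implementation:

```python
def agent_step(risk: float,               # R^H(pi): risk of executing this plan
               cost_goal_failure: float,  # C~_G^H: cost of not achieving the goal
               cost_goal: float,          # cost of achieving the goal
               cost_explain: float,       # cost of explaining this step
               risk_averse: bool) -> str:
    """Decide the agent's next move at one step of the plan."""
    if risk >= cost_goal_failure:
        # Key factor violated (R^H(pi) >= C~_G^H): the plan is too risky.
        return "replan with safer plan"
    if risk_averse and cost_goal > cost_explain:
        # Favour explainability over optimality for a risk-averse human:
        # explaining this step is cheaper than pushing on toward the goal.
        return "explain this step"
    return "execute this step"
```

Re-running this check at every step, with the risk profile updated as trust builds, is the mixed-initiative behaviour the slide describes.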