In Automation we Trust? Identifying Factors that Influence Trust and Reliance in Automated and Human Decision Aids


SLIDE 1

In Automation we Trust? Identifying Factors that Influence Trust and Reliance in Automated and Human Decision Aids

This material is based upon work supported in whole or in part with funding from the Laboratory for Analytic Sciences (LAS). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the LAS and/or any agency or entity of the United States Government.

SLIDE 2

Research Team

  • William A. Boettcher, Associate Professor, Political Science
  • Roger C. Mayer, Professor of Management, Innovation & Entrepreneurship, Poole College of Management
  • Christopher B. Mayhorn, Professor, Psychology
  • Joseph M. Simons-Rudolph, Teaching Assistant Professor, Psychology
  • Sean M. Streck, Researcher, Laboratory for Analytic Sciences
  • Carl J. Pearson & Allaire K. Welk, Ph.D. Students, Psychology

SLIDE 3

The Problem: An Increase in the Prevalence of Automated Systems

  • As automated systems are integrated with tasks humans must perform, they alter the nature of the human's job
  • They can require operators to allocate critical attentional resources to unexpectedly presented messages under conditions of high demand
  • The unexpected information may not be presented in isolation: operators may receive information from both human and automated sources, and in some instances this information may conflict

SLIDE 4

Reliance on Information Sources

  • Humans tend to rely on available information while completing complex tasks.
  • But what happens when information is presented by both human and automated sources? And what happens if those information sources conflict?
  • This situation occurs more often than you might think
    – An example: in 2002, a Russian passenger jet and a cargo plane collided in mid-air. The onboard automation instructed the two planes to change altitude in opposite directions, but an air traffic controller issued a conflicting instruction. One pilot listened to the automation, the other listened to the air traffic controller, and the planes collided.

SLIDE 5

Automation vs. Human Reliance

  • Little work has empirically examined the factors that influence how humans prioritize and trust information from automated and human sources when both are available AND directly conflict.
  • One related study (Lyons & Stokes, 2012)...

SLIDE 6

Inspiration

Lyons and Stokes (2012) found that humans rely on human sources less in situations of high risk than in situations of low risk.

Concerns with Lyons & Stokes (2012):

  • To manipulate risk, the human source's consistency with an automated tool was varied: the human recommended the route that the automation deemed most dangerous (within-subject design).
  • This within-subject inconsistency could affect trust and subsequent reliance.
  • Limited statistical power (n = 40)
  • No time pressure
  • Trust in the automation/human source was not measured

SLIDE 7

The Current Study

Goal: Generalize and expand previous work in the area.

Research Question 1: How does the order of information presentation affect reliance on automation and human sources in a risky, complex decision-making task with time pressure?

Research Question 2: How does perceived risk influence trust in automation and human sources within that task?

Research Question 3: How does perceived workload influence trust in automation and human sources within that task?

SLIDE 8

The Current Study: Overview

Manipulation:

  • Information presentation: sequential/concurrent presentation of human/automated sources

Measures:

  • NASA TLX: Subjective workload
  • Perceived Risk: 7-point Likert scale
  • Interpersonal Trust: trust in a human source
  • Automation Trust: trust in an automated source
  • Reliance

Participants:

  • 126 undergraduate participants
  • Mean age: 19 years old
  • Gender: 66 males and 60 females

SLIDE 9

Measures: NASA Task Load Index (TLX)

  • Used to measure workload in a task across six dimensions
    – Mental Demand
    – Physical Demand
    – Temporal Demand
    – Performance
    – Effort
    – Frustration
  • Developed at NASA and empirically validated by Hart & Staveland (1988)
  • Example question: "How much mental and perceptual activity was required?"
  • Responses to each dimension on a 15-point Likert scale are combined into a composite score
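
For illustration, a minimal scoring sketch, assuming the unweighted "raw TLX" variant in which the composite is simply the mean of the six dimension ratings (the slides do not say whether the original pairwise-comparison weighting was used):

    # Raw-TLX sketch: composite = mean of the six 15-point dimension ratings.
    # The full NASA TLX procedure also collects pairwise-comparison weights.
    DIMENSIONS = ("mental", "physical", "temporal", "performance", "effort", "frustration")

    def tlx_composite(ratings: dict) -> float:
        """Average the six dimension ratings (1-15) into one workload score."""
        for dim in DIMENSIONS:
            if not 1 <= ratings[dim] <= 15:
                raise ValueError(f"{dim} rating must be on the 1-15 scale")
        return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

    # Example: one participant's ratings.
    print(tlx_composite({"mental": 12, "physical": 3, "temporal": 11,
                         "performance": 6, "effort": 10, "frustration": 8}))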

SLIDE 10

Measures: Mayer Interpersonal Trust scale

  • Used to measure interpersonal trust in human sources
  • Based on "An Integrative Model of Organizational Trust" (Mayer, Davis, & Schoorman, 1995)
  • Four antecedents of trust
    – Propensity to Trust
    – Ability
    – Benevolence
    – Integrity
  • Example question: "I feel very confident about the human's skills."
  • Agreement with each statement on a 5-point Likert scale
  • 21 questions; one question dropped from the original scale based on lack of relevance to our study
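
As a purely hypothetical sketch of how such a scale could be scored (the item-to-subscale assignments below are invented, and the slides do not state the aggregation rule; item means per subscale are assumed):

    # Hypothetical scoring for the modified 20-item scale (21 items minus the
    # one dropped). Item numbers per subscale are invented for illustration.
    from statistics import mean

    SUBSCALES = {
        "propensity_to_trust": [1, 2, 3, 4, 5],
        "ability": [6, 7, 8, 9, 10],
        "benevolence": [11, 12, 13, 14, 15],
        "integrity": [16, 17, 18, 19, 20],
    }

    def score_subscales(responses: dict) -> dict:
        """Map item responses (item number -> 1-5 rating) to subscale means."""
        return {name: mean(responses[i] for i in items)
                for name, items in SUBSCALES.items()}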

SLIDE 11

Measures: Bisantz & Seong Automation Trust scale

  • Assessment of operator trust in and utilization of automated decision aids under different framing conditions, developed by Bisantz and Seong (2001)
    – Example question: "I can trust the system"
  • "System" was changed to "map" for relevance to our experiment
  • Responses recorded on a 5-point Likert scale
  • 10 questions

SLIDE 12

Method: Decision Making Task

  • Participants must select a route for their military convoy from three possible options.
  • An automated tool provides a map that contains information regarding past IED explosions and insurgent activity to illustrate one optimal route choice.
  • The human provides information that conflicts with the map and recommends a different route.

SLIDE 13

Method: Decision Making Task

Instructions:

"You will be performing as the leader of a vehicle convoy. Your mission is to deliver critical supplies to a nearby warehouse. Your task will be to select a delivery route. You will be shown a map displaying three delivery routes. The map will identify the location(s) of past IEDs (Improvised Explosive Devices), as well as areas of insurgent activity. You will also receive information from a local intelligence officer who will provide you with additional data about the area. Consider the three routes and select one. Make your decision as quickly as possible; you will have 60 seconds to complete this task."

SLIDE 14

Method: Automated Decision Tool

  • Route choices are numbered
  • Red shaded area represents past insurgent activity
  • Red solid marks are past IEDs

SLIDE 15

Method: Human Decision Aid

Human decision aid provides explicit recommendation of route choice

SLIDE 16

Method: Reliance Measure

Which route do you select?

  • Route 1
  • Route 2 (rely on automation)
  • Route 3 (rely on human)

SLIDE 17

Method: Procedure

  • Stimulus Presentations
    – 3 possible orders
  • Route decision (1, 2, or 3)
  • Automation Trust
  • Interpersonal Trust
  • NASA TLX (Workload)
  • Risk Likert Scale
  • Time pressure

SLIDE 18

Results: Information Presentation & Reliance

Research Question 1: Information presentation order did not significantly affect reliance.

Logistic regression: non-significant, p = .280
No significant differences in reliance between groups
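
A hypothetical sketch of this analysis (the slides report only the result; the file and column names below are assumptions, not the authors' code):

    # Illustrative only. Assumes one row per participant with 'order'
    # (presentation condition) and 'relied_on_human' (1 = chose the human's
    # route, 0 = chose the automation's route).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("reliance_data.csv")  # hypothetical file name
    model = smf.logit("relied_on_human ~ C(order)", data=df).fit()
    print(model.summary())  # slide reports the order effect as non-significant (p = .280)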

SLIDE 19

Results: Information Presentation & Trust

Information presentation order did not significantly affect trust in automation/human sources.

Multivariate analysis of variance (MANOVA):
  • IV: presentation order
  • DVs: interpersonal and automation trust
  • Non-significant multivariate effect: p = .403
  • Non-significant univariate effects: trust in automation, p = .388; interpersonal trust, p = .195
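
A comparable sketch of the MANOVA (again with hypothetical variable names):

    # Illustrative only: presentation order as the IV, the two trust scores as DVs.
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("trust_data.csv")  # hypothetical: order, interpersonal_trust, automation_trust
    maov = MANOVA.from_formula("interpersonal_trust + automation_trust ~ C(order)",
                               data=df)
    print(maov.mv_test())  # slide reports a non-significant multivariate effect (p = .403)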

SLIDE 20

Results: Interpersonal Trust

Research Question 2: Perceived risk positively predicted human trust.

As perceived risk increased, trust in the human decision aid increased

SLIDE 21

Results: Trust in Automation

Research Question 3: Perceived workload negatively predicted automation trust.

As workload increased, trust in the automated decision aid decreased
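
A hedged sketch of the two predictive relationships on this slide and the previous one (the slides give the direction of each effect but not the model; simple OLS with hypothetical column names is assumed):

    # Illustrative only; the OLS specification and column names are assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trust_data.csv")  # hypothetical: risk, workload, trust scores
    risk_model = smf.ols("interpersonal_trust ~ risk", data=df).fit()   # positive slope expected
    load_model = smf.ols("automation_trust ~ workload", data=df).fit()  # negative slope expected
    print(risk_model.params["risk"], load_model.params["workload"])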

SLIDE 22

Results: Interpersonal Trust Antecedents

To further investigate which elements of interpersonal trust were influenced, we analyzed the trust antecedents separately.

Predictor Variables:

  • Dispositional trust (individual difference)
  • Perceived risk
  • Workload

Four Antecedents (Outcome Variables):

  • Propensity to Trust
  • Ability
  • Benevolence
  • Integrity

Perceived risk and dispositional trust significantly and positively predicted every trust antecedent.
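
A hypothetical sketch of these antecedent analyses (column names and model form are assumptions):

    # Illustrative only: regress each antecedent on the three predictors.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trust_data.csv")  # hypothetical column names throughout
    for antecedent in ["propensity_to_trust", "ability", "benevolence", "integrity"]:
        fit = smf.ols(f"{antecedent} ~ dispositional_trust + risk + workload",
                      data=df).fit()
        # Slides: risk and dispositional trust positively and significantly
        # predicted every antecedent.
        print(antecedent, fit.pvalues, sep="\n")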

SLIDE 23

Conclusions

  • Presentation order of the information sources did not affect trust or reliance.
  • Increased workload negatively affected trust in automation.
  • Increased risk positively affected trust in the human.
  • Dispositional trust and perceived risk consistently predicted interpersonal trust antecedents (ability, integrity, benevolence, propensity to trust).
  • Trust and reliance are two distinct constructs with unique predictors.

SLIDE 24

Conclusions

  • Presentation order may not significantly affect reliance/trust in human and automated sources within risky decision-making tasks.
    – There may be more critical design choices worth considering when designing systems to promote reliance in this type of scenario.
  • When presented with conflicting information from automation and human sources in high workload scenarios, operators may decrease trust in automated sources.
    – This may be due to the added load of assessing the automation's trustworthiness.
  • When presented with conflicting information from automation and human sources in high risk scenarios, operators may increase trust in human sources.

SLIDE 25

Discussion

How do our results compare to previous work?

Our results differed from the seminal work of Lyons & Stokes (2012):

  • The current study found higher perceived risk predicted increased interpersonal trust
  • Lyons & Stokes (2012) found higher risk increased trust in automation
    – They manipulated risk, but did not measure it
  • The current study measured trust and reliance
  • Lyons & Stokes only measured reliance
  • Lyons & Stokes did not measure workload

SLIDE 26

Future Directions

  • Strain vs. stress
    – Strain: workload
    – Stress: the result of workload
  • Investigate the impact of various types of strain
    – Increase task demands (multi-tasking)
    – Vary information presentation rate (how speeding up information presentation affects processing)

SLIDE 27

Future Directions

  • Extend findings to additional populations by accessing various samples
    – Military populations (Ft. Bragg)
    – Intelligence officers
  • Manipulate experimental stimuli to determine how qualities of systems/individuals influence perceived trustworthiness
  • More closely approximate the complexity of the original experiment
  • Extend the unit of analysis from the individual to the group level