

Slide 1

Enabling Trust with Behavior Metamodels

Scott Wallace WSU Vancouver

Slide 2

Scott A. Wallace AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants. March, 2007

Challenges

Software assistants are increasingly a part of everyday life.

What will constrain the use of these assistants? Technology? Psychology?

Slide 3

Technology as a Constraint

The most obvious constraint on tomorrow’s intelligent assistants:

“We aren’t doing that yet because we don’t know how.”

“…because we don’t have computers/sensors/algorithms/etc. that are precise/fast enough.”

The focus of most AI research.

Slide 4

Psychology as a Constraint

A less obvious, less explored possibility: perhaps we aren’t willing to turn all tasks over to computerized assistants…

How do engineers weigh the risks and benefits of the technology they develop?

How do end users determine when and what technology to adopt?

Slide 5

Psychological Constraints

Potential concerns:

Will this project/invention be safe for society?

Will it be a useful tool?

Approach: validation / testing. Did we make what we intended to?

Slide 6

The end user…

Potential concerns:

Will this project/invention be safe?

Will it be useful to me?

Needs: marketing? Trust.

Slide 7

Thesis in a Nutshell

Trust is a critical factor in developing human-human and human-computer relationships.

We can design systems so as to help facilitate trust.

Trust seems most important for end users, especially early adopters, but the underlying components of trust will also benefit developers.

Slide 8

Trust

Examined three models of trust:

Recently cited / multi-disciplinary.

Developed from models with a longer history.

Based on this survey, four common properties can be identified: understandability, predictability, similarity, and liability.

Slide 9

Understandability/Predictability

Based on reputation of the other party.

Based on knowledge of the other party’s behavior:

Knowledge-based trust (Ratnasingham). Cognitive trust (Lewis & Weigert). Habitus (Misztal via Fahrenholtz & Bartelt).

Important for end users and developers.

Slide 10

Between Humans & Computers

An explanation of why software aids may make mistakes can increase trust.

Systems that can justify their actions engender greater trust.

Slide 11

Similarity

Can the parties in the trust relationship find common ground?

Empathy, common values (Lewis & Weigert).

Solidarity, familial associations (Misztal via Fahrenholtz & Bartelt).

Slide 12

Between Humans & Computers

Users find programs more credible when they are considered part of the same group as the user (e.g., the same company).

Agents that use a conversational strategy consistent with the user’s behavior engender more trust.

Slide 13

Liability

Deterrence-based trust (Ratnasingham):

An early form of trust, supported by the threat of punishment.

Emotional trust (Lewis & Weigert):

Entering a trust relationship creates a bond; breaking the bond causes pain/wrath.

Slide 14

Metamodels: Enabling Trust

Metamodels are high-level descriptions of the agent’s behavior. They are:

Easy to create.

“Easy” to understand.

Consistent with the agent’s behavior.

Slide 15

A Hierarchical Representation

Similar to a finite state machine or an AND/OR tree.

Describes potential sequences of behavior.

[Figure: hierarchical metamodel for “Plan Trip,” with subtask “Make Reservations” decomposing into “Reserve Flight,” “Reserve Hotel,” and “Reserve Car,” linked by a “Before” ordering constraint.]
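The hierarchy above can be read as a generator of allowed action sequences. A minimal sketch in Python, assuming a hypothetical tuple encoding (`"and"` for ordered subtasks, `"or"` for alternatives) that is not from the original slides:

```python
from itertools import product

def sequences(node):
    """Enumerate all action sequences a metamodel node can produce."""
    kind, children = node[0], node[1:]
    if kind == "action":                 # leaf: a single primitive step
        return [[children[0]]]
    if kind == "and":                    # ordered conjunction of subtasks
        return [[step for part in combo for step in part]
                for combo in product(*(sequences(c) for c in children))]
    if kind == "or":                     # choose exactly one alternative
        return [s for c in children for s in sequences(c)]
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical trip-planning metamodel modeled on the slide's example
plan_trip = ("and",
             ("action", "reserve-flight"),
             ("action", "reserve-hotel"),
             ("or", ("action", "reserve-car"), ("action", "skip-car")))

for seq in sequences(plan_trip):
    print(seq)
```

Enumerating the leaves of the tree makes the “potential sequences of behavior” concrete: each path through the AND/OR structure is one behavior the agent may exhibit.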

Slide 16

How Metamodels May Aid Trust

Understandability: illustrates the reasoning path leading to a state.

Predictability: illustrates the reasoning path extending from a state.

Similarity: the agent’s reasoning process may map to the user’s.

Liability: ???
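The understandability and predictability queries above amount to walking the hierarchy in opposite directions: up to the root for the path that led here, forward through the remaining subtasks for what comes next. A small illustrative sketch (the task names and flat dict encoding are hypothetical, not from the slides):

```python
# Hypothetical parent/child encoding of the trip-planning hierarchy
TREE = {
    "plan-trip": ["make-reservations"],
    "make-reservations": ["reserve-flight", "reserve-hotel", "reserve-car"],
}
PARENT = {child: parent for parent, kids in TREE.items() for child in kids}

def path_to(state):
    """Goal hierarchy leading to `state`, root first (understandability)."""
    path = [state]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return list(reversed(path))

def upcoming(state):
    """Sibling subtasks scheduled after `state` (predictability)."""
    siblings = TREE.get(PARENT.get(state), [])
    return siblings[siblings.index(state) + 1:] if state in siblings else []

print(path_to("reserve-hotel"))   # ['plan-trip', 'make-reservations', 'reserve-hotel']
print(upcoming("reserve-hotel"))  # ['reserve-car']
```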

Slide 17

Exploring Understandability…

How can we see if metamodels improve understandability? By watching developers find bugs in a program.

Begin with an existing Soar agent performing a simple goal-directed task: “Correct Behavior.”

Create two more agents based on this original: “A” and “B.” The new agents behave somewhat differently.

Participants observe correct and flawed behavior. Do metamodels help find behavioral differences?

Slide 18

Three Agent Programs

Original agent (“Correct Behavior”):

This is the specification of how to behave. It serves a role identical to that of a human expert we may want to emulate. 4 distinct behaviors.

Flawed agent “A”:

Occasionally pursues inappropriate goals. 12 distinct behaviors; 4 are consistent with the specification.

Flawed agent “B”:

Occasionally substitutes one goal for another (inappropriately). 8 distinct behaviors; 4 are consistent with the specification.
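The consistency counts above reduce to set comparisons between each agent's observed behaviors and the specification's. An illustrative sketch with made-up behavior labels standing in for the real Soar behavior sequences:

```python
# Made-up labels: "b*" are correct behaviors shared with the specification,
# "a*" and "x*" are each flawed agent's deviating behaviors.
spec    = {f"b{i}" for i in range(4)}             # 4 distinct correct behaviors
agent_a = spec | {f"a{i}" for i in range(8)}      # 12 behaviors, 4 consistent
agent_b = spec | {f"x{i}" for i in range(4)}      # 8 behaviors, 4 consistent

for name, observed in [("A", agent_a), ("B", agent_b)]:
    flaws = observed - spec                       # behaviors the spec never produces
    print(f"Agent {name}: {len(observed)} behaviors, "
          f"{len(observed & spec)} consistent, {len(flaws)} flawed")
```

The debugging task then becomes locating the behaviors in `observed - spec`, which is what the metamodel comparison is meant to make easier.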

Slide 19

Participants’ Task

Five participants, all experienced with Soar. Each participant looks for bugs in “A” and “B”:

On one agent, users were aided by metamodels; on the other agent, users were unaided.

In the aided task, users get a metamodel of the specification and a metamodel of the flawed agent.

In the unaided task, users get behavior sequences (observations) from the specification.

In both tasks, users must identify the error verbally, then proceed to fix it.

Slide 20

Finding & Fixing Error

Slide 21

Finding Error

Slide 22

Fixing

Slide 23

Conclusions

Multi-resolution models of behavior may be valuable tools for helping both developers and end users.

A key challenge is to abstract away the correct features.

As we vary the level of abstraction, there should be a cost-benefit curve associated with interpreting the model. Can we quantify this?