Towards a Grounded Dialog Model for Explainable Artificial Intelligence
Prashan Madumal, Tim Miller, Frank Vetere and Liz Sonenberg
School of Computing and Information Systems, University of Melbourne, Australia

SLIDE 1

Towards a Grounded Dialog Model for Explainable Artificial Intelligence

Prashan Madumal, Tim Miller, Frank Vetere and Liz Sonenberg

School of Computing and Information Systems University of Melbourne, Australia pmathugama@student.unimelb.edu.au

Prashan Madumal, Tim Miller, Frank Vetere and Liz Sonenberg Towards a Grounded Dialog Model for Explainable Artificial Intelligence

SLIDE 2

Introduction

Explainable Artificial Intelligence (XAI): explaining the behaviours, actions and decisions made by AI systems.


SLIDE 3

Introduction

Explanation as a Process1 (Miller, 2017)

Cognitive process: the process of determining an explanation for a given event.
Social process: transferring knowledge between explainer and explainee.

1 Miller, “Explanation in Artificial Intelligence: Insights from the Social Sciences”.

SLIDE 4

Motivation

Trust.
Transparency and ethics.

SLIDE 5

Human Explanation

Explanation as a continuous interaction: how humans engage in conversational explanation.
Explanation models influenced by human explanation are more likely to be accepted.
It is easier for the AI to emulate human explanations.

SLIDE 6

Goal

Introduce a human explanation model, grounded in data.
Analyze relationships between dialog components.

SLIDE 7

Related work

Early work

Kass and Finin (Kass, 1988)2 and Moore and Paris (Moore, 1991)3 discussed the requirements a good explanation facility should have, including characteristics like “Naturalness”. Cawsey’s (Cawsey, 1993)4 EDGE system also focused on user interaction and user knowledge.

2 Kass and Finin, The Need for User Models in Generating Expert System Explanations.

3 Moore and Paris, “Requirements for an expert system explanation facility”.

4 Cawsey, “Planning interactive explanations”.

SLIDE 8

Related work

Explanation dialog models

Walton’s shift model (Walton, 1993)5.

Figure: Argumentation and explanation in dialogue (Walton, 1993)

5 Walton and Bex, “Combining explanation and argumentation in dialogue”.

SLIDE 9

Research Design and Methodology

Grounded theory6 (Glaser, 1967) as the methodology. Gain insights into three areas:

1. Key components that make up an explanation dialog.
2. Relationships that exist within those components.
3. Component sequences and cycles that occur in an explanation dialog.

6 Glaser and Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research.

SLIDE 10

Data

Six data sources, covering six types of explanation dialogs: 398 explanation dialogs in total. Sources are text based; some are transcribed from voice- and video-based interviews.

Table: Coded data description.

Explanation Dialog Type              # Dialogs   # Transcripts
1. Human-Human static explainee          88            2
2. Human-Human static explainer          30            3
3. Human-Explainer agent                 68            4
4. Human-Explainee agent                 17            1
5. Human-Human QnA                       50            5
6. Human-Human multiple explainee       145            5
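The dialog counts in the table above can be checked with a few lines; the dictionary below simply transcribes the table, and the sum recovers the total of 398 reported on this slide:

```python
# Number of coded explanation dialogs per data source,
# transcribed from the table above.
dialog_counts = {
    "Human-Human static explainee": 88,
    "Human-Human static explainer": 30,
    "Human-Explainer agent": 68,
    "Human-Explainee agent": 17,
    "Human-Human QnA": 50,
    "Human-Human multiple explainee": 145,
}

# Total number of coded explanation dialogs across all six sources.
total = sum(dialog_counts.values())
print(total)  # 398, matching the total stated on the slide
```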

SLIDE 11

Data cont.

Different combinations of explainee and explainer participants.

Table: Explanation dialog type description.

Participants       Number         Medium   Data source
1. Human-Human     One-to-one     Verbal   Journalist interview transcripts
2. Human-Human     One-to-one     Verbal   Journalist interview transcripts
3. Human-Agent     One-to-one     Text     Chatbot conversation transcripts
4. Agent-Human     One-to-one     Text     Chatbot conversation transcripts
5. Human-Human     Many-to-many   Text     Reddit AMA records
6. Human-Human     One-to-many    Verbal   Supreme court transcripts

SLIDE 12

Data cont.

Codes, their categories, and definitions.

Code                        Category        Description
QE start                    Dialog          Explanation dialog start
QE end                      Dialog          Explanation dialog end
How                         Question Type   How questions
Why                         Question Type   Why questions
What                        Question Type   What questions
Explanation                 Explanation     Explanation given for questions
Explainee Affirmation       Explanation     Explainee acknowledges explanation
Explainer Affirmation       Explanation     Explainer acknowledges explainee’s acknowledgment
Question context            Information     Background to the question provided by the explainee
Preconception               Information     Preconceived idea that the explainee has about some fact
Counterfactual case         Information     Counterfactual case of the how/why question
Argument                    Argumentation   Argument presented by explainee or explainer
Argument-s                  Argumentation   An argument that starts the dialog
Argument-a                  Argumentation   Argument affirmation by explainee or explainer
Argument-c                  Argumentation   Counter argument
Argument-contrast case      Argumentation   Argumentation contrast case
Explainer Return question   Questions       Clarification question by explainer
Explainee Return question   Questions       Follow-up question asked by explainee
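As a sketch, the coding scheme above can be represented as a code-to-category lookup, which also makes tallying category frequencies in a coded dialog a one-liner. The example dialog at the end is hypothetical, purely for illustration:

```python
from collections import Counter

# Code -> category lookup, transcribed from the coding table above.
CODE_CATEGORIES = {
    "QE start": "Dialog",
    "QE end": "Dialog",
    "How": "Question Type",
    "Why": "Question Type",
    "What": "Question Type",
    "Explanation": "Explanation",
    "Explainee Affirmation": "Explanation",
    "Explainer Affirmation": "Explanation",
    "Question context": "Information",
    "Preconception": "Information",
    "Counterfactual case": "Information",
    "Argument": "Argumentation",
    "Argument-s": "Argumentation",
    "Argument-a": "Argumentation",
    "Argument-c": "Argumentation",
    "Argument-contrast case": "Argumentation",
    "Explainer Return question": "Questions",
    "Explainee Return question": "Questions",
}

def category_frequencies(coded_dialog):
    """Tally how often each category occurs in a coded dialog (a list of codes)."""
    return Counter(CODE_CATEGORIES[code] for code in coded_dialog)

# Hypothetical coded dialog, for illustration only:
freqs = category_frequencies(
    ["QE start", "Why", "Explanation", "Explainee Affirmation", "QE end"]
)
```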

SLIDE 13

Explanation Dialog Model

Figure: State diagram of the explanation dialog model. States: Composite Question, Explanation Presented, Explainee Affirmed, Explainer Affirmed, Successful Explanation, Explanation End, Composite Argument Presented, Argument Affirmed, Counter Argument Presented, Argument End. Transitions are explainee (Q) and explainer (E) moves: Q: Ask Question, E: Explain, Q: Affirm, E: Affirm, E: Explainer Return Question, Q: Explainee Return Question, Q: Preconception Statement, E: Further Explanation, Q: Argue, E: Argument Explanation, E: Counter Argument, E: Counter Argument Explanation.
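A dialog model of this kind can be sketched as a small state machine. The state names below come from the figure, but the transition set is a simplified illustration covering only the basic question-explanation-affirmation path, not the paper's full model:

```python
# Simplified sketch of the explanation dialog as a state machine.
# (state, move) -> next state; Q: explainee moves, E: explainer moves.
TRANSITIONS = {
    ("Start", "Q: Ask Question"): "Composite Question",
    ("Composite Question", "E: Explain"): "Explanation Presented",
    # A follow-up question loops back into the question state.
    ("Explanation Presented", "Q: Explainee Return Question"): "Composite Question",
    ("Explanation Presented", "Q: Affirm"): "Explainee Affirmed",
    ("Explainee Affirmed", "E: Affirm"): "Explanation End",
}

def run_dialog(moves, state="Start"):
    """Advance through the dialog, one move at a time; raises KeyError on an illegal move."""
    for move in moves:
        state = TRANSITIONS[(state, move)]
    return state

final = run_dialog(["Q: Ask Question", "E: Explain", "Q: Affirm", "E: Affirm"])
# final == "Explanation End"
```

The loop from Explanation Presented back to Composite Question mirrors the cyclic follow-up questioning the model emphasizes: a dialog can pass through the explanation state several times before both parties affirm.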

SLIDE 14

Explanation Dialog Model cont.

Figure: State diagram of the explanation dialog model (same figure as Slide 13, revealed incrementally).

SLIDE 15

Explanation Dialog Model cont.

Figure: State diagram of the explanation dialog model (same figure as Slide 13, revealed incrementally).

SLIDE 16

Explanation Dialog Model cont.

Figure: State diagram of the explanation dialog model (same figure as Slide 13, revealed incrementally).

SLIDE 17

Explanation Dialog Model cont.

Figure: State diagram of the explanation dialog model (same figure as Slide 13, revealed incrementally).

SLIDE 18

Model Comparison

Walton’s model (Walton, 1993) focuses on combining explanation and examination dialogs with argumentation.

Figure: Argumentation and explanation in dialogue (Walton, 1993)

SLIDE 19

Model Comparison

Two differences:

1. Our model lacks the examination dialog shift.
2. Walton’s model focuses on evaluating whether an explanation succeeded.

The differences lie at a more detailed level than the high level; our model captures the subtleties.

SLIDE 20

Analysis and Evaluation

Analysis on three areas:

1. Key components of an explanation dialog.
2. Relationships between these components and their variations between different dialog types.
3. The sequence of components that can successfully carry out an explanation dialog.

Figure: Code frequencies across all explanation dialogs (codes: How, Why, What, Explanation, Explainee Affirmation, Explainer Affirmation, Context, Counterfactual, Argumentation, Initial Argumentation, Argumentation Affirmation, Counter Argument, Argumentation Contrast Case, Explainer Return Question, Explainee Return Question, Preconception).

SLIDE 21

Code Frequency Analysis

Figure: Average code occurrence in different explanation dialog types (Human-Human static explainee, Human-Human static explainer, Human-Explainer agent, Human-Explainee agent, Human-Human QnA, Human-Human multiple explainee).

SLIDE 22

Code Occurrence Analysis per Dialog

Figure: Average code occurrence per dialog in different dialog types, for Explainer Affirmation, Explainee Affirmation, Explainee Return Question and Explainer Return Question (occurrence per dialog: 0, 1, 2, or more than 2).

SLIDE 23

Explanation Dialog Ending Sequence Analysis

Figure: Average ending code percentage in different dialog types (dialogs ending in explanation, Explainee Affirmation, Explainer Affirmation, Preconception, Argument Affirmation, Counter Argument, or Other).

SLIDE 24

Discussion

Using the proposed model in explainable AI systems: providing interactive explanations, with the freedom to question and argue. Limitation: inability to evaluate the effectiveness of the delivered explanation.

SLIDE 25

Conclusion and Future Work

The explanation dialog model is derived from different types of natural conversations between humans, and between humans and agents. We formulate the model by analysing the frequency of occurrence of patterns, to identify the key components that make up an explanation dialog, the relationships between components, and their sequence of occurrence inside a dialog. XAI systems can build on top of this explanation dialog model to provide better explanations to the intended user. Future work: evaluate the model in a human-agent setting.

SLIDE 26

References I

Cawsey, Alison. “Planning interactive explanations”. In: International Journal of Man-Machine Studies 38.2 (1993), pp. 169–199. doi: 10.1006/imms.1993.1009.

Glaser, Barney G and Anselm L Strauss. The Discovery of Grounded Theory: Strategies for Qualitative Research. Vol. 1. 1967, p. 271. isbn: 0202302601.

Kass, Robert and Tim Finin. The Need for User Models in Generating Expert System Explanations. Tech. rep. University of Pennsylvania, 1988, pp. 1–32. url: http://repository.upenn.edu/cis_reports/585.

SLIDE 27

References II

Miller, Tim. “Explanation in Artificial Intelligence: Insights from the Social Sciences”. 2017. arXiv: 1706.07269. url: http://arxiv.org/abs/1706.07269.

Moore, Johanna D. and Cecile L. Paris. “Requirements for an expert system explanation facility”. In: Computational Intelligence 7.4 (1991), pp. 367–370. doi: 10.1111/j.1467-8640.1991.tb00409.x.

Walton, Douglas and Floris Bex. “Combining explanation and argumentation in dialogue”. In: Argument and Computation 7.1 (2016), pp. 55–68. doi: 10.3233/AAC-160001.

SLIDE 28

Q&A

Prashan Madumal: pmathugama@student.unimelb.edu.au
