What Gets Echoed? Understanding the Pointers in Explanations of Persuasive Arguments



SLIDE 1

What Gets Echoed?

Understanding the “Pointers” in Explanations of Persuasive Arguments

David Atkinson, Kumar Bhargav Srinivasan, and Chenhao Tan

{david.i.atkinson, kumar.srinivasan, chenhao.tan}@colorado.edu

SLIDE 2

Explanations are important.1-7

1 (Keil, 2006)  2 (Ribeiro et al., 2016)  3 (Lipton, 2016)  4 (Guidotti et al., 2019)  5 (Miller, 2019)  6 (Doshi-Velez and Kim, 2019)  7 ...and so on.

SLIDE 3-5

What is this?8

[Figure: an example pair, with the two parts labeled "Explanandum" and "Explanation"]

8 (Wagner et al., 2019)

SLIDE 6

What about natural language explanations?

SLIDE 7

Virginia Heffernan, writing in Wired:

“In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise.”

SLIDE 8-11

r/ChangeMyView

[Figure: an example r/ChangeMyView thread, built up across slides]

SLIDE 12

Pointers are common

SLIDE 13

How do explanations selectively incorporate pointers from their explananda?

SLIDE 14

One answer:

[Figure: probability of echoing vs. word frequency]
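To make the plotted quantity concrete, here is a minimal sketch of how an echoing-vs-frequency curve can be computed over a corpus of (explanandum, explanation) pairs. The tokenizer is a crude stand-in for the paper's stemming pipeline, and grouping by raw corpus frequency is an illustrative assumption.

```python
# A minimal sketch: for each word, estimate the probability that it is
# echoed, then average those probabilities within each frequency bucket.
from collections import Counter, defaultdict

def tokens(text):
    """Lowercased, lightly cleaned tokens (a stand-in for real stemming)."""
    return {w.strip(".,!?;:").lower() for w in text.split()}

def echo_rate_by_frequency(pairs):
    """pairs: iterable of (explanandum, explanation) strings.
    Returns {corpus frequency: mean echoing probability}."""
    appears, echoed = Counter(), Counter()
    for explanandum, explanation in pairs:
        explanation_tokens = tokens(explanation)
        for w in tokens(explanandum):
            appears[w] += 1
            echoed[w] += int(w in explanation_tokens)
    by_freq = defaultdict(list)
    for w, n in appears.items():
        by_freq[n].append(echoed[w] / n)  # per-word echoing probability
    return {n: sum(r) / len(r) for n, r in sorted(by_freq.items())}
```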

SLIDE 15

A prediction task!

SLIDE 16

The task

  • 1. Take the set of unique stems in the explanandum.
  • 2. For every such stem s, assign the label 1 if s is in the set of unique stems in the explanation, and 0 otherwise (see the sketch below).
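A minimal sketch of this labeling step, assuming NLTK's PorterStemmer and whitespace tokenization (the paper's exact preprocessing may differ):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_set(text):
    """The set of unique stems in a text (crude whitespace tokenization)."""
    return {stemmer.stem(tok.lower().strip(".,!?;:")) for tok in text.split()}

def label_stems(explanandum, explanation):
    """Label each unique stem in the explanandum: 1 if it is echoed
    (appears among the explanation's stems), 0 otherwise."""
    echoed = stem_set(explanation)
    return {s: int(s in echoed) for s in stem_set(explanandum)}

# Example:
labels = label_stems(
    explanandum="Taxes on sugar would reduce consumption",
    explanation="A sugar tax mostly shifts consumption elsewhere",
)
# labels["sugar"] == 1 (echoed), labels["reduc"] == 0 (not echoed)
```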

SLIDE 17-20

What could affect pointer use? (↑ marks features positively associated with echoing, ↓ negatively.)

  • 1. Non-contextual properties. For example: IDF (↓), word length (↓).
  • 2. OP and PC usage. For example: POS tags (verb in OP: ↓, verb in PC: ↑, noun in OP: ↓, noun in PC: ↓), term frequency (↑), # of surface forms (↑), appears in a quotation (↑).
  • 3. How the word connects the OP and PC. For example: word is in both OP and PC (↑), # of the word's surface forms in OP but not in PC (↓) and vice versa (↑), JS divergence between the word's OP and PC POS distributions (↓; see the sketch after this list).
  • 4. General properties of the OP or PC. For example: OP length (↓), PC length (↑), depth of PC in the thread (↑), difference between the avg. word lengths in OP and PC (↓).
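As promised above, a sketch of one contextual feature from category 3: the Jensen-Shannon divergence between the POS-tag distributions a word takes in the OP and in the PC. Using NLTK's tagger is an assumption for illustration; the paper's tagging pipeline may differ.

```python
import math
from collections import Counter

import nltk  # requires the punkt and averaged_perceptron_tagger models

def pos_distribution(word, text):
    """Distribution over POS tags for occurrences of `word` in `text`."""
    tags = Counter(tag for tok, tag in nltk.pos_tag(nltk.word_tokenize(text))
                   if tok.lower() == word)
    total = sum(tags.values())
    return {t: c / total for t, c in tags.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    kl = lambda a: sum(v * math.log2(v / m[k]) for k, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Example: is "view" used the same way (noun vs. verb) in the OP and the PC?
op = "I view this policy as harmful. My view will not change easily."
pc = "Consider the long view: the data tells a different story."
feature = js_divergence(pos_distribution("view", op),
                        pos_distribution("view", pc))
```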

SLIDE 21-22

Our features improve on LSTMs
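On the prediction side, here is a minimal sketch of the feature-based setup: one row per candidate stem, one binary echo label, and a standard classifier. LogisticRegression and the loader below are illustrative stand-ins, not the paper's exact model or data pipeline.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# X: (n_stems, n_features) matrix of the features above (IDF, word length,
# term frequency, OP/PC overlap, ...); y: 1 if the stem was echoed.
X, y = load_stem_features()  # hypothetical loader, for illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```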

SLIDE 23-24

Some parts of speech are more reliably predicted9

9 (Reynolds and Flagg, 1976)

SLIDE 25-26

Which features matter?

SLIDE 27-28

Our features can improve the generation of explanations

Pointer-generator network with coverage10 + our features

10 (See et al., 2017; Klein et al., 2017)
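One way to picture the augmentation: feed each token's feature vector into the encoder alongside its word embedding, so the attention and copy mechanism can condition on the features. Below is a minimal PyTorch-style sketch of that idea; module shapes and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FeatureAugmentedEncoder(nn.Module):
    """Encoder whose inputs are [word embedding ; per-token features]."""
    def __init__(self, vocab_size, emb_dim=128, feat_dim=32, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids, token_feats):
        # token_ids: (batch, seq); token_feats: (batch, seq, feat_dim)
        x = torch.cat([self.embed(token_ids), token_feats], dim=-1)
        # Encoder states feed the attention and copy distributions of the
        # pointer-generator decoder (not shown).
        return self.lstm(x)
```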


SLIDE 29-30

...and increase copying

SLIDE 31-37

Takeaways

Our Dataset:

  • 1. We assemble a novel, large-scale dataset of naturally occurring explanations.

Our Findings:

  • 2. Pointers are common.
  • 3. Nouns are especially important.
  • 4. Non-contextual properties matter for stopwords, contextual properties for content words.

Our Features:

  • 5. Improve on the prediction performance of vanilla LSTMs.
  • 6. Improve the quality of generated explanations.

Thank you!

Code + data: github.com/davatk/what-gets-echoed