What Gets Echoed?
Understanding the “Pointers” in Explanations of Persuasive Arguments
David Atkinson, Kumar Bhargav Srinivasan, and Chenhao Tan
{david.i.atkinson, kumar.srinivasan, chenhao.tan}@colorado.edu
Explanations are important.
(Keil, 2006; Ribeiro et al., 2016; Lipton, 2016; Guidotti et al., 2019; Miller, 2019; Doshi-Velez and Kim, 2019; and many more)
Explanandum and Explanation (Wagner et al., 2019)
Virginia Heffernan, writing in Wired:
“In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise.”
r/ChangeMyView
One answer:
Label for a stem s: 1 if s is in the set of unique stems in the explanation, 0 otherwise.
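The binary labeling described above can be sketched in code. This is a minimal sketch: the tokenizer is a crude stand-in for whatever stemming the authors used, and the function names are mine, not from the talk.

```python
def stems(text):
    """Crude tokenizer/stemmer stand-in: lowercase words, punctuation stripped.
    The talk does not specify the actual stemming procedure."""
    return {w.strip(".,!?\"'()").lower() for w in text.split()}

def echo_labels(op, pc, explanation):
    """For each unique stem in the OP or PC, label it 1 if it also
    appears in the explanation, 0 otherwise."""
    explained = stems(explanation)
    return {s: int(s in explained) for s in stems(op) | stems(pc)}

labels = echo_labels(
    op="I think cats are better than dogs",
    pc="Dogs are loyal companions",
    explanation="The point about loyal dogs changed my view",
)
```

Here `labels["dogs"]` is 1 (echoed) while `labels["cats"]` is 0 (not echoed).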
Feature groups:
- properties of the word itself
- how the word connects the OP and PC. For example: in OP but not in PC (↓), and vice versa (↑); distributions for the word (↓)
- properties of the word in the OP or PC. For example: noun in OP (↓), noun in PC (↓); lengths in OP and PC (↓)
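Two of the "connects the OP and PC" features above (whether a word appears in the OP but not the PC, and vice versa) can be sketched as follows. Illustrative only: the feature names are assumptions, and the arrows on the slide indicate each feature's direction of association with echoing, which this sketch does not model.

```python
def connection_features(stem, op_stems, pc_stems):
    """Indicator features for how a stem connects the OP and the PC:
    present only in the OP, only in the PC, or in both."""
    in_op = stem in op_stems
    in_pc = stem in pc_stems
    return {
        "in_op_only": int(in_op and not in_pc),
        "in_pc_only": int(in_pc and not in_op),
        "in_both": int(in_op and in_pc),
    }

feats = connection_features("dog", {"dog", "cat"}, {"dog", "loyal"})
```

For a stem shared by both sides, `in_both` fires and the two exclusive indicators stay 0.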
(Reynolds and Flagg, 1976)
Pointer generator network, with coverage (See et al., 2017; Klein et al., 2017)
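The pointer-generator network named above (See et al., 2017) mixes a vocabulary distribution with a copy (attention) distribution over source tokens via a gate p_gen. A minimal numeric sketch of that final output distribution follows; it is a generic illustration, not the authors' implementation, and the coverage mechanism (which penalizes repeatedly attending to the same source position) is omitted.

```python
def final_distribution(p_gen, p_vocab, attention, source_ids):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention mass
    on source positions whose token id is w (See et al., 2017, Eq. 9)."""
    p_final = [p_gen * p for p in p_vocab]
    for a_i, w in zip(attention, source_ids):
        p_final[w] += (1.0 - p_gen) * a_i
    return p_final

# Toy example: vocabulary of 4 ids, source sentence = tokens with ids [2, 3].
dist = final_distribution(
    p_gen=0.6,
    p_vocab=[0.1, 0.2, 0.3, 0.4],
    attention=[0.75, 0.25],
    source_ids=[2, 3],
)
```

Because both input distributions sum to 1 and p_gen is a convex weight, the mixed distribution also sums to 1; heavily attended source tokens get extra probability mass, which is what lets the decoder echo words from the OP and PC.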
Our Dataset: a novel, large-scale dataset of naturally occurring explanations.
Our Findings: … common. Properties for stopwords, contextual for content words.
Our Features: … prediction performance of vanilla LSTMs. … generated explanations.
Code + data: github.com/davatk/what-gets-echoed