Rigorous Explanations for Machine Learning Models

SLIDE 1

Rigorous Explanations for Machine Learning Models

Joao Marques-Silva (joint work with A. Ignatiev and N. Narodytska)

University of Lisbon, Portugal
AITP 2019 Conference, Obergurgl, Austria

April 2019


SLIDES 2-4

Progress in automated reasoning

  • Automated reasoners (AR):

    – SAT
    – ILP
    – ASP
    – SMT
    – FOL
    – Reasoners as oracles
    – Reasoners within reasoners

SLIDE 5

Progress in automated reasoning & our work

Propositional abduction:

[Scatter plot: runtimes of Hyper⋆ vs. AbHS+, log scale from 10^−3 to 10^4 s; 1800 sec. timeout]

SLIDE 6

Progress in automated reasoning & our work

Model-based diagnosis:

[Scatter plot: runtimes of wboinc vs. scrypto, log scale from 10^−2 to 10^3 s; 600 sec. timeout]

SLIDE 7

Progress in automated reasoning & our work

Axiom pinpointing for EL+:

[Scatter plot: runtimes of EL2MUS vs. EL+SAT, log scale from 10^−2 to 10^4 s; 3600 sec. timeout]

SLIDE 8

The question: how can AR improve ML’s robustness?

  • M. Vardi, MLMFM’18 Summit


SLIDES 9-10

Machine learning vs. automated reasoning

  • Exploit ML to improve reasoners (efficiency)
  • Exploit reasoners to improve ML (robustness)

SLIDES 11-13

Our work ...

  • Focus on classification problems
  • Globally correct (i.e., rigorous) explanations for the predictions made
  • Disclaimer: first inroads into ML & XAI; comments welcome

SLIDE 14

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning
  • Results

SLIDE 15

Some ML successes & expectations

  • IBM Watson
  • DeepMind AlphaGo & AlphaZero
  • Image Recognition
  • Speech Recognition
  • Financial Services
  • Medical Diagnosis
  • ...

Opportunities for AI / ML until 2025, circa 2017 (source: Goldman-Sachs):

  • Agriculture: $20bn addressable market
  • Finance (US): $34bn~$43bn savings & revenue
  • Healthcare: $54bn savings
  • Retail: $54bn + $41bn savings + revenue
  • Energy: $140bn savings

SLIDES 16-17

Many more applications expected

[Images of application areas; sources: Google, Wikipedia; © DARPA]

SLIDES 18-19

But ML models are brittle

[Adversarial-example images. Source: http://gradientscience.org/intro_adversarial/]

SLIDES 20-21

Also, some ML models are interpretable

[Table: examples e1-e7 over Boolean features Vacation (V), Concert (C), Meeting (M), Expo (E), Hike (H)]

[Decision tree branching on M and V]

Example rules:

    if ¬Meeting then Hike
    if ¬Vacation then ¬Hike

Interpretable families: decision/rule lists and sets, decision trees.

SLIDE 22

But other ML models are not (interpretable)...

Why does the NN predict a cat?

[Cat image; © DARPA]

SLIDE 23

Sample of ongoing efforts

  • Verification of NNs:

    – Sound vs. unsound vs. complete [M.P. Kumar, VMCAI’19]
    – E.g. Reluplex: dedicated reasoning within an SMT solver

  • Explanations for non-interpretable (i.e., black-box) models:

    – Until recently, most approaches heuristic-based

SLIDE 24

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning
  • Results

SLIDES 25-26

What is eXplainable AI (XAI)?

[Illustration; © DARPA]

SLIDES 27-31

Why XAI?

[Illustration; © DARPA]

SLIDES 32-33

Relevancy of XAI & hundreds(?) of recent papers

[Illustration; © DARPA]

SLIDES 34-37

How to XAI?

  • Main challenge: black-box models
  • Heuristic approaches, e.g. LIME & Anchor [Ribeiro et al., KDD’16, AAAI’18]

    – Compute local explanations ...
    – ... but offer no guarantees

  • Recent efforts on rigorous approaches:

    – Compilation-based, e.g. for BNCs [Shih, Choi & Darwiche, IJCAI’18]
      ◮ Issues with scalability
    – Abduction-based, e.g. for NNs [Ignatiev, Narodytska, M.-S., AAAI’19]
      ◮ Issues with scalability, but less significant

SLIDES 38-39

Some current challenges

  • For heuristic methods: lack of rigor (more later)
  • For rigorous methods: scalability, scalability, scalability...

SLIDE 40

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning
  • Results

SLIDES 41-46

From ML model to logic

[Illustration; © DARPA]

  • Encode the ML model as a formula F
  • Encode the input instance as a cube C
  • Encode the prediction as a literal E
  • Then the prediction is entailed: C ∧ F ⊨ E
  • Must be able to encode the ML model, e.g. in SMT, ILP, etc.

SLIDES 47-50

Abductive explanations of ML models

Given a classifier F, a cube C, and a prediction E, compute a (subset- or cardinality-) minimal Cm ⊆ C s.t.

    Cm ∧ F ⊭ ⊥    and    Cm ∧ F ⊨ E

⇒ an iterative explanation procedure (a toy example follows)
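A toy illustration (not from the talk): let F ≡ (e ↔ (a ∧ b)), i.e., the classifier predicts e exactly when features a and b hold, and take the instance C = a ∧ b ∧ c with prediction E = e. Then C ∧ F ⊨ e, and the subset-minimal explanation is Cm = a ∧ b: dropping c preserves the entailment, while dropping a or b admits a model of F in which e is false.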

SLIDES 51-55

Computing primes

1. Cm ∧ F ⊭ ⊥ — tautology (holds for any cube Cm over the inputs, since F encodes a total function)
2. Cm ∧ F ⊨ E ⇔ Cm ⊨ (F → E) (every model of Cm must satisfy F → E)

Hence Cm is a prime implicant of F → E.

SLIDES 56-57

Computing one minimal explanation

  • One subset-minimal explanation (a Python sketch follows below):

    Input: F under M, initial cube C, prediction E
    Output: Subset-minimal explanation Cm
    begin
        for l ∈ C:
            if Entails(C \ {l}, F → E):
                C ← C \ {l}
        return C
    end

  • One cardinality-minimal explanation:

    – Harder than computing a subset-minimal explanation
    – Exploit implicit hitting set dualization
    – Details in earlier papers
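The loop is a deletion-based linear search: each literal is tentatively dropped, and the drop is kept only if the remaining cube still entails the prediction. A minimal Python sketch using PySMT (the implementation reportedly supports PySMT; the function names here are illustrative, not from the talk):

    from pysmt.shortcuts import And, Not, is_unsat

    def entails(cube_literals, F, E):
        # Cm entails (F -> E)  iff  Cm /\ F /\ ~E is unsatisfiable
        return is_unsat(And(And(cube_literals), F, Not(E)))

    def subset_minimal_explanation(cube, F, E):
        # Deletion-based linear search over the literals of the cube
        cube = list(cube)
        for lit in list(cube):
            rest = [l for l in cube if l is not lit]
            if entails(rest, F, E):
                cube = rest  # literal not needed for the prediction
        return cube

Each call to entails is one oracle query, so the procedure issues |C| queries in total.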

SLIDE 58

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning

    – Encoding Neural Networks

  • Results

SLIDES 59-60

Encoding NNs

[Diagram: feed-forward network with four inputs, one hidden layer, and an output layer]

  • Each layer (except the first) is viewed as a block (see the sketch below):

    – Compute x′ given input x, weights matrix A, and bias vector b
    – Compute output y given x′ and the activation function

  • Each unit uses a ReLU activation function
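For reference, the block computation in NumPy (an illustrative sketch, not code from the talk):

    import numpy as np

    def relu_block(x, A, b):
        # One NN block: affine transform x' = A.x + b, then ReLU y = max(x', 0)
        x_prime = A @ x + b
        return np.maximum(x_prime, 0.0)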

SLIDES 61-62

Encoding NNs using MILP

Computation for a NN ReLU block:

    x′ = A · x + b
    y = max(x′, 0)

Block encoded as follows [Fischetti&Jo, CJ’18]:

    Σ_{j=1..n} a_{i,j} · x_j + b_i = y_i − s_i
    z_i = 1 → y_i ≤ 0
    z_i = 0 → s_i ≤ 0
    y_i ≥ 0, s_i ≥ 0, z_i ∈ {0, 1}

    – Simpler encodings are not as effective [Katz et al., CAV’17]

(a docplex sketch of this encoding follows)
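One way to write this block encoding with CPLEX's docplex Python API (the talk mentions CPLEX 12.8.0, but this particular code and its names are a sketch of mine, not the talk's implementation); the two implications map directly onto indicator constraints:

    from docplex.mp.model import Model

    def encode_relu_layer(mdl, x, A, b):
        # Encode y = max(A.x + b, 0) for one layer, following Fischetti & Jo
        n_out = len(b)
        y = mdl.continuous_var_list(n_out, lb=0, name="y")  # ReLU outputs
        s = mdl.continuous_var_list(n_out, lb=0, name="s")  # negative parts
        z = mdl.binary_var_list(n_out, name="z")            # phase indicators
        for i in range(n_out):
            # sum_j a_{i,j}.x_j + b_i = y_i - s_i
            mdl.add_constraint(
                mdl.sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
                == y[i] - s[i])
            mdl.add_indicator(z[i], y[i] <= 0, active_value=1)  # z_i = 1 -> y_i <= 0
            mdl.add_indicator(z[i], s[i] <= 0, active_value=0)  # z_i = 0 -> s_i <= 0
        return y

Chaining one such call per layer, with x ranging over the previous layer's outputs, yields the MILP representation of the whole network F.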

SLIDE 63

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning
  • Results

SLIDES 64-67

Experimental setup

  • Implementation in Python

    – Supports SMT solvers through PySMT (Yices2 used; see the sketch below)
    – Supports CPLEX 12.8.0

  • ReLU-based neural networks [Fischetti&Jo, CJ’18]

    – One hidden layer with i ∈ {10, 15, 20} neurons
    – Pick the NN that achieves good accuracy

  • Benchmarks selected from:

    – UCI Machine Learning Repository
    – Penn Machine Learning Benchmarks
    – MNIST Digits Database

  • Machine configuration:

    – Intel Core i7 2.8GHz, 8 GByte RAM
    – Time limit: 1800s
    – Memory limit: 4 GByte
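For context, PySMT dispatches to an installed backend by name, so selecting Yices2 takes one line (a generic PySMT usage sketch, not code from the talk):

    from pysmt.shortcuts import And, GE, LE, Real, Solver, Symbol
    from pysmt.typing import REAL

    x = Symbol("x", REAL)
    with Solver(name="yices") as solver:   # "yices" selects the Yices2 backend
        solver.add_assertion(And(GE(x, Real(0)), LE(x, Real(1))))
        print(solver.solve())              # True: the constraints are satisfiable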

SLIDES 68-71

Sample of experimental results

(m/a/M = minimum/average/maximum per dataset; number of features in parentheses; "—" = no result)

Dataset               Minimal explanation           Minimum explanation
                      size   SMT (s)  MILP (s)      size   SMT (s)  MILP (s)
australian (14)   m   1      0.03     0.05          —      —        —
                  a   8.79   1.38     0.33          —      —        —
                  M   14     17.00    1.43          —      —        —
backache (32)     m   13     0.13     0.14          —      —        —
                  a   19.28  5.08     0.85          —      —        —
                  M   26     22.21    2.75          —      —        —
breast-cancer (9) m   3      0.02     0.04          3      0.02     0.03
                  a   5.15   0.65     0.20          4.86   2.18     0.41
                  M   9      6.11     0.41          9      24.80    1.81
cleve (13)        m   4      0.05     0.07          4      —        0.07
                  a   8.62   3.32     0.32          7.89   —        5.14
                  M   13     60.74    0.60          13     —        39.06
hepatitis (19)    m   6      0.02     0.04          4      0.01     0.04
                  a   11.42  0.07     0.06          9.39   4.07     2.89
                  M   19     0.26     0.20          19     27.05    22.23
voting (16)       m   3      0.01     0.02          3      0.01     0.02
                  a   4.56   0.04     0.13          3.46   0.3      0.25
                  M   11     0.10     0.37          11     1.25     1.77
spect (22)        m   3      0.02     0.02          3      0.02     0.04
                  a   7.31   0.13     0.07          6.44   1.61     0.67
                  M   20     0.88     0.29          20     8.97     10.73


SLIDES 72-75

Comparing quality to the compilation-based BNC approach [Shih, Choi & Darwiche, IJCAI’18]

  • "Congressional Voting Records" dataset
  • (0 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1) — data sample (16 features)

Smallest-size explanations computed by the compilation-based approach:

  • (0 1 1 0 0 0 1 1 0) — 9 literals
  • (0 1 1 1 0 0 1 1 0) — 9 literals

Subset-minimal explanations computed by our approach:

  • (1) — 4 literals
  • (1) — 3 literals
  • (0 1) — 5 literals
  • (0 1 1) — 5 literals

SLIDE 76

There are many explanations of different quality

[Figure, for digits 1 and 3: (a) the digit; (b) simple explanation; (c) explanation using central pixels; (d) explanation using light pixels]

SLIDE 77

Outline

  • Successes & Pitfalls of ML
  • Explainable AI
  • Explanations with Abductive Reasoning
  • Results

    – Assessing Local Explanations – Recent Work

SLIDE 78

Assessing precision with model counting

  • Evaluated Anchor [Ribeiro et al., AAAI’18]

    – Anchor is more accurate than LIME
    – Anchor computes an accuracy estimate for each explanation

  • Represented the ML model as a propositional formula

    – E.g. binarized NNs (BNNs)
    – Use an (approximate) model counter to assess the precision of the ML model on each explanation (anchor) computed by Anchor (see the sketch below)
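Precision here is the fraction of the inputs covered by an anchor on which the model actually returns the explained class. A toy Python sketch, with exact enumeration standing in for the (approximate) model counter that realistic feature counts would require (all names are illustrative):

    from itertools import product

    def anchor_precision(model, anchor, label, n_features):
        # anchor: partial assignment {feature index: fixed 0/1 value}
        covered = agree = 0
        for bits in product((0, 1), repeat=n_features):
            if all(bits[i] == v for i, v in anchor.items()):
                covered += 1                       # input matches the anchor
                agree += int(model(bits) == label)
        # precision = #(anchor AND prediction = label) / #(anchor)
        return agree / covered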

SLIDES 79-80

Preliminary results

  • Anchor often claims ≈ 99% precision; this cannot be confirmed

SLIDES 81-88

Summary and future work

  • Principled approach to XAI
  • Based on abductive reasoning
  • Applies a reasoning engine, e.g. SMT or MILP
  • Provides minimality guarantees
  • Tested on ReLU-based NNs
  • First results on the precision of Anchor’s explanations
  • Other ML models?
  • Address scalability:

    – Better encodings?
    – More advanced reasoners?

  • Explanation enumeration? + preferences?

SLIDE 89

Questions?

SLIDE 90

References to our work

  • A. Ignatiev, N. Narodytska, J. Marques-Silva: Abduction-Based Explanations for Machine Learning Models. AAAI 2019.
  • N. Narodytska, A. Ignatiev, F. Pereira, J. Marques-Silva: Learning Optimal Decision Trees with SAT. IJCAI 2018: 1362-1368.
  • A. Ignatiev, F. Pereira, N. Narodytska, J. Marques-Silva: A SAT-Based Approach to Learn Explainable Decision Sets. IJCAR 2018: 627-645.

SLIDE 91

Additional references I

  • M. T. Ribeiro, S. Singh, C. Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144.
  • G. Katz, C. W. Barrett, D. L. Dill, K. Julian, M. J. Kochenderfer: Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV (1) 2017: 97-117.
  • M. T. Ribeiro, S. Singh, C. Guestrin: Anchors: High-Precision Model-Agnostic Explanations. AAAI 2018: 1527-1535.
  • A. Shih, A. Choi, A. Darwiche: A Symbolic Approach to Explaining Bayesian Network Classifiers. IJCAI 2018: 5103-5111.
  • M. Fischetti, J. Jo: Deep neural networks and mixed integer linear optimization. Constraints 23(3): 296-309 (2018).

SLIDE 92

Additional references II

  • B. Goodman, S. R. Flaxman: European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation". AI Magazine 38(3): 50-57 (2017).
  • A. M. Aung, Y. Fadila, R. Gondokaryono, L. Gonzalez: Building Robust Deep Neural Networks for Road Sign Detection. CoRR abs/1712.09327 (2017).
  • K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, D. Song: Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR 2018: 1625-1634.
  • A. Madry, L. Schmidt: A Brief Introduction to Adversarial Examples. http://gradientscience.org/intro_adversarial/, 2018.
  • M. P. Kumar: Tutorial: Neural Network Verification. VMCAI 2019 Winter School, http://mpawankumar.info/tutorials/vmcai2019/, 2019.