SLIDE 1

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

David L Dill Stanford University

SLIDE 2

Based on: “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”
Guy Katz, Clark Barrett, David Dill, Kyle Julian and Mykel Kochenderfer
Available on arXiv: 1702.01135
Accepted at the CAV conference in July 2017.

SLIDE 3

Artificial Neural Networks

 An emerging solution to a wide variety of problems
 A “black box” solution

  • An advantage, but also a drawback

 Goal: prove that certain properties of a neural network are guaranteed to hold

SLIDE 4

Case Study: ACAS Xu

 A new standard being developed for collision-avoidance advisories for unmanned aircraft

 Produces advisories:

1. Strong left (SL)
2. Weak left (L)
3. Weak right (R)
4. Strong right (SR)
5. Clear of conflict (COC)

 Implementation: an array of 45 neural networks

  • Under consideration by the FAA
SLIDE 5

Certifying ACAS Xu Networks

 Neural networks generalize to previously-unseen inputs
 Show that certain properties hold for all inputs
 Examples:

  • If intruder is distant, advisory is always COC
  • If intruder is nearby on the left, advisory is always “strong right”

 Crucial for increasing the level of confidence

SLIDE 6

Reasoning About Neural Nets

 Can we manually prove properties?
 Networks are too big to reason about manually

  • ACAS Xu networks: 8 layers, 310 nodes
  • (times 45)
SLIDE 7

Verifying Neural Nets

 A possible answer: automatic verification
 … but existing verification tools don’t scale up!
 A difficult (NP-complete) problem

SLIDE 8

Activation Functions

 The difficulty: Rectified Linear Units (ReLUs)
 ReLU(x) = max(0, x)

  • x ≥ 0: active case, return x
  • x < 0: inactive case, return 0

[Figure: a small example network. On the input (1, 3, −2), one hidden node computes the weighted sum 1 ⋅ 1 + (−2) ⋅ 3 + 0 ⋅ (−2) = −5 (inactive, ReLU outputs 0) and another computes 2 ⋅ 1 + 0 ⋅ 3 + (−1) ⋅ (−2) = 4 (active, ReLU outputs 4).]
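To make the active and inactive cases concrete, here is a minimal Python sketch (mine, not from the paper) that reproduces the figure’s numbers; the weight vectors (1, −2, 0) and (2, 0, −1) are read off the example:

    # Minimal sketch reproducing the slide's example network.
    def relu(x):
        return max(0.0, x)

    inputs = [1.0, 3.0, -2.0]
    weights = [
        [1.0, -2.0, 0.0],   # 1*1 + (-2)*3 + 0*(-2) = -5  (inactive case)
        [2.0, 0.0, -1.0],   # 2*1 + 0*3 + (-1)*(-2) = 4   (active case)
    ]

    for w in weights:
        s = sum(wi * xi for wi, xi in zip(w, inputs))
        print(f"weighted sum {s:+.0f} -> ReLU output {relu(s):.0f}")
    # Output: weighted sum -5 -> ReLU output 0
    #         weighted sum +4 -> ReLU output 4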

SLIDE 9

Activation Functions (cont’d)

 ReLUs require case-splitting:

  • Fix active/inactive states of ReLUs
  • Solve the resulting sub-problem (no activation functions)

 Property holds iff it holds for all sub-problems
 n ReLUs imply 2ⁿ combinations
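As an illustration of why naive case-splitting blows up, here is a schematic Python sketch (mine, not the paper’s); solve_subproblem stands in for a hypothetical check of the linear program obtained by fixing the ReLU states:

    from itertools import product

    def verify_by_case_splitting(relu_ids, solve_subproblem):
        """Check a property by fixing every ReLU active/inactive: 2**n cases."""
        for pattern in product((True, False), repeat=len(relu_ids)):
            fixed_states = dict(zip(relu_ids, pattern))
            # Each sub-problem is a plain linear program (no activation functions).
            if not solve_subproblem(fixed_states):
                return False          # property violated in one combination
        return True                   # property holds in all 2**n combinations

    # With 310 ReLU nodes this loop would need 2**310 iterations -- hopeless.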

SLIDE 10

Our Contributions

 A technique for solving linear programs with ReLUs

  • Can encode neural networks as input

 Extends the simplex method

  • Reluplex: ReLUs with simplex

 Does not require case splitting in advance

  • ReLU constraints satisfied incrementally
  • Split only if we must

 Scales to the ACAS Xu networks

  • An order of magnitude larger than previously possible
SLIDE 11

Agenda

 Reluplex  Evaluation on the ACAS Xu Networks  Conclusion

SLIDE 12

Agenda

 Reluplex  Evaluation on the ACAS Xu Networks  Conclusion

SLIDE 13

ReLUs As Variable Pairs

 Split each ReLU into:

1. A weighted sum variable x^w
2. An activation function variable x^a

 Allow ReLU violations (x^a ≠ max(0, x^w))
 Fix them gradually during simplex
 Split cases (x^w < 0, x^w ≥ 0) as a last resort

[Figure: a four-node example network with weights 1, −1, 1, 1. Each hidden node is split into a weighted-sum variable and an activation variable, x2^w/x2^a and x3^w/x3^a, connected by ReLU constraints.]
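The following is a highly simplified Python sketch of this idea, not the paper’s algorithm: each ReLU is a pair (x^w, x^a), violations are tolerated during search, and a violated pair is repaired by moving one of its two variables:

    def violated_pairs(val, pairs):
        """Pairs (w, a) whose values break the constraint x^a = max(0, x^w)."""
        return [(w, a) for (w, a) in pairs if val[a] != max(0.0, val[w])]

    def repair_one(val, pair):
        w, a = pair
        # Cheapest fix: move the activation variable to match the weighted sum.
        # (The real method may instead update x^w, and splits if repairs loop.)
        val[a] = max(0.0, val[w])

    # Toy usage: one ReLU pair ("x2_w", "x2_a") with a violated assignment.
    val = {"x2_w": -5.0, "x2_a": -5.0}
    pairs = [("x2_w", "x2_a")]
    while violated_pairs(val, pairs):
        repair_one(val, violated_pairs(val, pairs)[0])
    print(val)   # {'x2_w': -5.0, 'x2_a': 0.0} -- ReLU constraint now satisfied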

SLIDE 14

Termination

 Can we always find a solution using pivots and updates?
 No: we can sometimes get into a loop
 Then we have to split on ReLU variables
 Use a Satisfiability Modulo Theories (SMT) framework to manage the decision tree that results from splitting

  • A generic, efficient framework for combining Boolean reasoning (i.e. splitting) with reasoning in a theory (here, linear arithmetic with ReLUs)
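A sketch of how such a framework can manage the decision tree (schematic, with a hypothetical check function standing in for the real theory solver):

    def smt_style_search(relu_ids, check):
        """Depth-first search over ReLU phase splits.

        `check` takes a partial phase assignment and returns "sat", "unsat",
        or "unknown" (simplex looped, so we must split further). It is assumed
        to return "unknown" only while some ReLU is still unfixed.
        """
        stack = [dict()]                      # frontier of partial assignments
        while stack:
            phases = stack.pop()
            verdict = check(phases)
            if verdict == "sat":
                return "sat"                  # found a satisfying assignment
            if verdict == "unknown":
                r = next(r for r in relu_ids if r not in phases)
                stack.append({**phases, r: "active"})    # case x^w >= 0
                stack.append({**phases, r: "inactive"})  # case x^w < 0
        return "unsat"                        # every branch was refuted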

SLIDE 15

The Reluplex Method

 A method for solving linear programs with ReLUs
 Sound and terminating
 Efficient

  • For an efficient implementation, see the paper
SLIDE 16

Agenda

 Reluplex  Evaluation on the ACAS Xu Networks  Conclusion

SLIDE 17

Properties of Interest

1. No unnecessary turning advisories
2. Alerting regions are consistent
3. Strong alerts do not appear when vertical separation is large

SLIDE 18

A Few Simple Properties

 Satisfiable properties: y ≥ c for an output node y
 Solution times in seconds on eight such properties, φ1–φ8 (solvers with missing entries timed out on the remaining properties):

Solver   | Solved | Times (s)
CVC4     | 0 of 8 | —
Z3       | 0 of 8 | —
Yices    | 2 of 8 | 1, 37
MathSat  | 2 of 8 | 2040, 9780
Gurobi   | 3 of 8 | 1, 1, 1
Reluplex | 8 of 8 | 11, 3, 9, 10, 155, 7, 10, 14

SLIDE 19

Example 1

 “If the intruder is near and approaching from the left, the network advises strong right”

  • Distance: 12000 ≤ ρ ≤ 62000
  • Angle to intruder: 0.2 ≤ θ ≤ 0.4
  • Intruder’s heading angle: −π ≤ ψ ≤ −π + 0.005
  • Ownship speed: 100 ≤ v_own ≤ 400
  • Intruder speed: 0 ≤ v_int ≤ 400
  • Previous advisory: COC
  • Time to loss of vertical separation: τ = 0

 Proof time: 01:29:29 (using 4 machines)
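A sketch of how such a query might be assembled (illustrative data only; the dictionary layout and the rule that the lowest-scoring advisory is selected are my assumptions, not Reluplex’s actual API):

    # Input region from the slide (units as used by the ACAS Xu networks).
    PI = 3.141592653589793
    input_bounds = {
        "rho":   (12000.0, 62000.0),     # distance to intruder
        "theta": (0.2, 0.4),             # angle to intruder
        "psi":   (-PI, -PI + 0.005),     # intruder heading angle
        "v_own": (100.0, 400.0),         # ownship speed
        "v_int": (0.0, 400.0),           # intruder speed
    }

    # To *prove* "the advisory is strong right", ask the solver for a
    # counterexample: an input in the box where some other advisory scores
    # at least as well as SR. UNSAT means the property holds everywhere.
    query = {
        "bounds": input_bounds,
        "negated_property": "exists adv != 'SR' with score[adv] <= score['SR']",
    }
    print(query)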

SLIDE 20

Example 2

 “For a distant intruder, the network advises COC”

  • Distance: 36000 ≤ ρ ≤ 60760
  • Angle to intruder: 0.7 ≤ θ ≤ π
  • Intruder’s heading angle: −π ≤ ψ ≤ −π + 0.01
  • Ownship speed: 900 ≤ v_own ≤ 1200
  • Intruder speed: 600 ≤ v_int ≤ 1200
  • Previous advisory: strong right
  • Time to loss of vertical separation: τ = 20

 Proof time: 01:25:15 (using 4 machines)

SLIDE 21

Example 3

 “If vertical separation is large and the previous advisory is weak left, the network advises COC or weak left”

  • Distance: 0 ≤ ρ ≤ 60760
  • Angle to intruder: −π ≤ θ ≤ −0.75 ⋅ π
  • Intruder’s heading angle: −0.1 ≤ ψ ≤ 0.1
  • Ownship speed: 600 ≤ v_own ≤ 1200
  • Intruder speed: 600 ≤ v_int ≤ 1200
  • Previous advisory: weak left
  • Time to loss of vertical separation: τ = 100

 Time to find a counterexample: 11:08:22 (using 1 machine)

  • Previously observed also in simulation
SLIDE 22

Agenda

 Reluplex  Evaluation on the ACAS Xu Networks  Conclusion

SLIDE 23

Conclusion

 Reluplex: a technique for solving linear programs with ReLUs
 Can encode neural networks and properties as Reluplex inputs
 Scalable
 Sound and terminating

  • Modulo floating point
SLIDE 24

Next Steps

 More complete verification of ACAS Xu
 Improving performance

  • Adopt more SMT techniques (e.g. conflict analysis)
  • Adopt more linear programming techniques (e.g. sum of infeasibilities)

 Soundness guarantees

  • Replay floating-point computations using exact arithmetic (see the sketch after this list)
  • Proof certificates

 More expressiveness

  • Additional kinds of properties
  • Additional kinds of activation functions
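As a taste of the replay idea, a small sketch (mine, not the paper’s machinery): re-running a simplex-style dot product with Python’s exact rationals removes rounding error from the final verdict:

    from fractions import Fraction

    # A tableau row and a variable assignment, as exact rationals.
    row    = [Fraction(1, 3), Fraction(-2, 7), Fraction(5)]
    values = [Fraction(3),    Fraction(7),     Fraction(1)]

    exact  = sum(r * v for r, v in zip(row, values))             # exactly 4
    approx = sum(float(r) * float(v) for r, v in zip(row, values))

    print(exact)    # 4
    print(approx)   # may drift in the last bits, e.g. 3.9999999999999996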
SLIDE 25

Questions?

Thank You!

Available on arXiv: 1702.01135