Decision Tree Based Learning of Program Invariants
Deepak D'Souza


  1. Decision Tree Based Learning of Program Invariants
Deepak D’Souza, Department of Computer Science and Automation, Indian Institute of Science, Bangalore.
FM Update Meeting, IIT Mandi, 17 July 2017.

  2. What this talk is about
Based on the paper "Learning Invariants using Decision Trees and Implication Counterexamples" by Garg, Neider, Madhusudan, and Roth, POPL 2016.
A way to automate deductive-style program verification.
Extends the decision-tree classification technique from machine learning to handle implication samples, with applications to finding proofs of programs.
The talk also covers some directions in which to extend this work.

  3. Outline of this talk
1. Floyd-Hoare Style Verification
2. Decision Tree Learning
3. ICE Learning
4. Proofs with Multiple Invariants

  4. Proving assertions in programs

// Pre: 10 <= y
y := y + 1;
z := x + y;
// Post: x <= z

// Pre: true
if (a <= b)
  min = a;
else
  min = b;
// Post: min <= a && min <= b

// Pre: 0 <= n
int a = m;
int x = 0;
while (x < n) {
  a = a + 1;
  x = x + 1;
}
// Post: a = m + n

  5. Proving assertions in programs (contd.)
The same three programs as above. Two broad approaches to proving such assertions: model checking vs deductive reasoning.

  6. Floyd-Hoare Style of Program Verification
Robert W. Floyd, "Assigning Meanings to Programs", Proceedings of Symposia in Applied Mathematics, American Mathematical Society (1967).
C. A. R. Hoare, "An Axiomatic Basis for Computer Programming", Communications of the ACM (1969).

  7. Example proof

{ y > 10 }
⇒ { y ≥ 0 }
y := y + 1
{ y ≥ 1 }
z := x + y
{ y ≥ 1 ∧ z = x + y }
⇒ { z > x }

  8. Example proof of add program

{ n ≥ 0 }
a := m;
{ n ≥ 0 ∧ a = m }
x := 0
{ a = m + x ∧ x ≤ n }   (loop invariant)
while (x < n) {
  a := a + 1
  x := x + 1
}
{ a = m + n }

  9. Problems with automating such proofs
To check:
{ y > 10 } y := y + 1; z := x + y; { x < z }
use the weakest-precondition rules to generate the verification condition:
(y > 10) ⇒ (y > −1).
Check the verification condition by asking a theorem prover / SMT solver whether the formula
(y > 10) ∧ ¬(y > −1)
is satisfiable. If it is unsatisfiable, the verification condition is valid and the triple holds.
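As a concrete illustration, the following minimal sketch discharges this verification condition with the z3-solver Python bindings; the choice of solver and API is an assumption, since the slides do not name a particular tool.

from z3 import Int, And, Not, Solver, unsat

y = Int('y')
s = Solver()
# The triple holds iff (y > 10) /\ not(y > -1) has no model.
s.add(And(y > 10, Not(y > -1)))
print("valid" if s.check() == unsat else "not valid")   # prints: valid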

  10. What about programs with loops?
Find an adequate inductive invariant Inv for the loop:

assume Pre
S1
invariant Inv
while (b) {
  S2
}
S3
assert Post

1. Pre ⇒ WP(S1, Inv)   ("inductive invariant")
2. (Inv ∧ b) ⇒ WP(S2, Inv)   ("inductive invariant")
3. (Inv ∧ ¬b) ⇒ WP(S3, Post)   ("adequate")

  11. Adequate loop invariant
For the add program:

{ n ≥ 0 }
a := m;
x := 0
while (x < n) {
  a := a + 1
  x := x + 1
}
{ a = m + n }

an adequate loop invariant Inv ≡ a = m + x ∧ x ≤ n needs to satisfy:
{ n ≥ 0 } a := m; x := 0 { a = m + x ∧ x ≤ n }
{ a = m + x ∧ x ≤ n ∧ x < n } a := a + 1; x := x + 1 { a = m + x ∧ x ≤ n }
{ a = m + x ∧ x ≤ n ∧ x ≥ n } skip { a = m + n }
Verification conditions are generated accordingly.
Note that a = m + x alone is not an adequate loop invariant: on exit we only know x ≥ n, which without x ≤ n does not force x = n.
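A sketch of discharging all three verification conditions for Inv ≡ a = m + x ∧ x ≤ n, again assuming the z3 Python bindings (z3.prove prints "proved" when an implication is valid):

from z3 import Ints, And, Implies, Not, prove

m, n, a, x = Ints('m n a x')
Inv = lambda av, xv: And(av == m + xv, xv <= n)

# 1. Initiation: Inv holds after a := m; x := 0.
prove(Implies(n >= 0, Inv(m, 0)))
# 2. Preservation: one iteration of the loop body keeps Inv.
prove(Implies(And(Inv(a, x), x < n), Inv(a + 1, x + 1)))
# 3. Adequacy: Inv at loop exit gives the postcondition.
prove(Implies(And(Inv(a, x), Not(x < n)), a == m + n))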

  12. Learning loop invariants
The main hurdle in automating program verification is coming up with adequate loop invariants.
Several white-box approaches have been used (CEGAR, Lazy Annotation, interpolation-based techniques, and tools like SLAM/BLAST and Synergy).
Instead, explore a black-box approach based on a Teacher-Learner model.
[Figure: a Learner (a dynamic engine plus a constraint solver) proposes a hypothesis H to a Teacher, who answers with labelled (+/−) samples drawn from the program.]

  13. Black-box learning for add program
Run the program on inputs such as (m ↦ 2, n ↦ 3) and (m ↦ 1, n ↦ 1) and record the states (m, n, a, x) reached at the loop head.
Positive samples: (2, 3, 2, 0), (2, 3, 3, 1), (2, 3, 4, 2), (2, 3, 5, 3), (1, 1, 1, 0), (1, 1, 2, 1).
Negative samples (states that would break the proof): (1, 1, 3, 2), (2, 3, 3, 0).
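The "dynamic engine" can be as simple as running the program and logging the loop-head states. A sketch in Python; all names here are illustrative, not from the paper:

def loop_head_samples(m, n):
    # Execute the add program, recording (m, n, a, x) at the loop head.
    samples = []
    a, x = m, 0
    while True:
        samples.append((m, n, a, x))   # reachable state: a positive sample
        if not (x < n):
            break
        a, x = a + 1, x + 1
    return samples

print(loop_head_samples(2, 3))   # [(2, 3, 2, 0), (2, 3, 3, 1), (2, 3, 4, 2), (2, 3, 5, 3)]
print(loop_head_samples(1, 1))   # [(1, 1, 1, 0), (1, 1, 2, 1)]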

  14. Decision Tree Based Learning
Given a set of positive samples S+ and negative samples S−, learn a predicate H from a given concept class.
Example concept class: Boolean combinations of atomic predicates of the form x ≤ c, where x is a program variable and c ≤ 10. Or octagonal constraints ±x ± y ≤ c, etc.
A brute-force search is always possible, but we would like to be more efficient in practice.

  15. Decision Tree learning algorithm
Maintain a tree whose nodes correspond to subsets of the sample points; the root node contains all given samples.
Choose an unfinished node n and an attribute a to split on. Create two children n_a and n_¬a of n with the corresponding subsets of samples.
If a node is "homogeneous" (all positive or all negative), mark it pos/neg and finished.
Recurse till all nodes are finished.
Output the predicate corresponding to the disjunction of all positive leaves, each contributing the conjunction of the splits on its path; see the sketch below.
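A minimal sketch of this splitting loop in Python, not the authors' implementation; attributes are tried in a fixed order here, where the entropy-based choice of the next slides would normally pick the split:

def learn(samples, attrs):
    # samples: list of (state, label) pairs; attrs: list of (name, pred).
    # Returns a list of conjunctions, one per positive leaf.
    if not samples or all(not lbl for _, lbl in samples):
        return []                 # empty or all-negative leaf: contributes nothing
    if all(lbl for _, lbl in samples):
        return [[]]               # all-positive leaf: the empty conjunction
    if not attrs:
        raise ValueError("mixed samples but no attributes left to split on")
    (name, pred), rest = attrs[0], attrs[1:]
    left = [(s, l) for s, l in samples if pred(s)]
    right = [(s, l) for s, l in samples if not pred(s)]
    return ([[name] + c for c in learn(left, rest)]
            + [["not(" + name + ")"] + c for c in learn(right, rest)])

attrs = [("y <= 1", lambda s: s["y"] <= 1),
         ("x <= 3", lambda s: s["x"] <= 3)]
pts = [({"x": 1, "y": 0}, True),
       ({"x": 2, "y": 3}, False),
       ({"x": 4, "y": 4}, True)]
print(learn(pts, attrs))   # [['y <= 1'], ['not(y <= 1)', 'not(x <= 3)']]

The output corresponds to the predicate y ≤ 1 ∨ (y > 1 ∧ x > 3) of the example on the next slide.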

  16. Decision Tree learning by example
[Figure: sample points in the (x, y) plane; the root is split on y ≤ 1, and the y > 1 child is then split on x ≤ 3.]
Predicate learnt: y ≤ 1 ∨ (y > 1 ∧ x > 3).

  17. Choosing attribute based on entropy
If n has P positive and N negative samples:
Entropy(n) = −(P/(P+N)) · log₂(P/(P+N)) − (N/(P+N)) · log₂(N/(P+N))
Entropy measures the uncertainty (in bits) in a node's labels, giving a measure of the "impurity" of the node.
Choose the attribute a which maximizes the information gain
Entropy(n) − (|n_a|/|n|) · Entropy(n_a) − (|n_¬a|/|n|) · Entropy(n_¬a).
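In code, with the standard weighted (ID3-style) gain; a sketch, not necessarily the paper's exact heuristic:

from math import log2

def entropy(P, N):
    # Uncertainty (in bits) of a node with P positive and N negative samples.
    if P == 0 or N == 0:
        return 0.0
    p = P / (P + N)
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(samples, pred):
    # samples: list of (state, label) pairs; pred: the candidate attribute.
    def counts(part):
        pos = sum(1 for _, lbl in part if lbl)
        return pos, len(part) - pos
    left = [s for s in samples if pred(s[0])]
    right = [s for s in samples if not pred(s[0])]
    weighted = lambda part: (len(part) / len(samples)) * entropy(*counts(part))
    return entropy(*counts(samples)) - weighted(left) - weighted(right)

The learner then splits on the attribute with the largest information_gain over the candidate predicates.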

  18. Decision Tree: Example where entropy does not do well
[Figure: a grid of alternating positive and negative sample points.]
The best attribute would be y ≤ 1 followed by x ≤ 1, but entropy would choose x ≤ 3 as the first split.

  19. ICE: The need for implication counterexamples
Introduced by Garg, Löding, Madhusudan, and Neider in a paper at CAV 2014.
Just Examples (positive) and Counterexamples (negative) are not enough: the Teacher needs to give Implication samples as well. An implication sample is a pair of states (s, s′) where one loop iteration takes s to s′: any invariant containing s must also contain s′.
This way the Teacher is honest, never precluding a candidate invariant by an arbitrary answer. Leads to a robust learning framework.
[Figure: the annotated loop (assume Pre; S1; invariant Inv; while (b) { S2 }; S3; assert Post) with a state of unknown (?) polarity between a + and a − sample.]
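To make the Teacher's three kinds of answers concrete, here is a sketch of an honest ICE teacher for the add program, built on z3; the CAV 2014 paper defines the framework abstractly, so the solver choice and the names below are illustrative:

from z3 import Ints, And, Not, Solver, substitute, sat

m, n, a, x = Ints('m n a x')

def model_of(formula):
    s = Solver()
    s.add(formula)
    return s.model() if s.check() == sat else None

def teacher(H):
    # H: a z3 formula over (m, n, a, x), the learner's conjecture.
    # Positive example: a loop-entry state that H wrongly excludes.
    mdl = model_of(And(n >= 0, a == m, x == 0, Not(H)))
    if mdl is not None:
        return ("+", mdl)
    # Negative counterexample: an H-state at loop exit violating Post.
    mdl = model_of(And(H, Not(x < n), Not(a == m + n)))
    if mdl is not None:
        return ("-", mdl)
    # Implication sample: an H-state s whose one-iteration successor s'
    # falls outside H; the learner must add s' or drop s.
    H_next = substitute(H, (a, a + 1), (x, x + 1))
    mdl = model_of(And(H, x < n, Not(H_next)))
    if mdl is not None:
        return ("->", mdl)
    return None   # H is an adequate inductive invariant

print(teacher(And(m <= n, x <= a)))       # ('+', ...): H excludes a reachable entry state
print(teacher(And(a == m + x, x <= n)))   # None: adequate inductive invariant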

  20. ICE learning by example
1. Learner conjectures H: m ≤ n ∧ x ≤ a.
[Figure: the add program with the positive and negative loop-head samples from slide 13, plus additional points (2, 2, 4, 1) and (2, 2, 5, 2).]

  21. ICE learning by example (contd.)
1. Learner conjectures H: m ≤ n ∧ x ≤ a.
2. Teacher replies with the (positive) Example (2, 1, 2, 0): a reachable loop-head state (m = 2, n = 1, a = m, x = 0) that H wrongly excludes, since m ≤ n fails.
