Program Analysis with Local Policy Iteration

George Karpenkov

VERIMAG

May 6, 2015


Outline

  • Introduction
    • Motivation
    • Finding Inductive Invariants
  • Background
    • Template Constraints Domain
    • Policy Iteration Algorithm
    • Path Focusing
  • LPI
    • Motivation
    • Algorithm
    • Example
    • Contribution
  • Results



Motivation

  • Program verification
  • Finding inductive invariants
  • LPI:
    • A scalable algorithm for policy iteration
    • Submitted to FMCAD'15


Program Modeling

  • Control Flow Automaton (CFA)

int i = 0;
while (i < 10) {
    i++;
}

[CFA: initial edge i′ = 0 into node A; self-loop on A labeled i < 10 ∧ i′ = i + 1]
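Later sketches in this document write CFAs down as plain edge lists; a hypothetical encoding of this automaton (the representation is our illustrative choice, not the talk's):

    # Nodes are strings; each edge carries its transition formula as text.
    cfa = [
        ("I", "i' = 0",              "A"),  # initialization edge
        ("A", "i < 10 ∧ i' = i + 1", "A"),  # loop self-loop
    ]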


Inductive Invariant

Motivation

  • Task: verify program properties
  • Prove: by induction
  • Aim: find an inductive invariant
    • Includes the initial states
    • Closed under the transition relation

[Figure: an inductive invariant I contains the initial states and is closed under the transition relation τ]
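In symbols (our formalization of the two conditions above), with initial states \(\mathit{Init}\) and transition relation \(\tau\):

\[
\mathit{Init} \subseteq I
\qquad\text{and}\qquad
\{\, s' \mid s \in I \wedge (s, s') \in \tau \,\} \subseteq I
\]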


Abstract Interpretation Limitations

  • Usual tool: abstract interpretation
  • Relies on widenings/narrowings to enforce convergence
  • Can be very brittle


Policy Iteration

Historical Perspective

  • Game-theoretic technique
  • Solving Markov processes
  • Used for poker AI


Policy Iteration

Introduction - 2

  • Finds the least inductive invariant in the given abstract domain
  • Considers the program as a set of equations
  • Game-theoretic algorithm adapted to finding inductive invariants
  • Requires the abstract semantics to be monotone & concave

Guarantees

Least inductive invariant, not least invariant in general!



Template Constraints Domain

Domain used in our work

  • Choose linear inequalities ("templates") to be tracked before the
    analysis, e.g. x, y, x + y
  • We want to find an inductive invariant
    x ≤ d1 ∧ y ≤ d2 ∧ x + y ≤ d3 for all control states
  • An element of the domain above is a vector of bounds: (3, 2, 4)
    corresponds to x ≤ 3 ∧ y ≤ 2 ∧ x + y ≤ 4
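In the usual template-polyhedra notation (our formalization, consistent with the example above), an abstract element is the vector of bounds \(d\), with concretization

\[
\gamma(d_1, \dots, d_n) \;=\; \{\, x \mid t_i \cdot x \le d_i \ \text{for all templates } t_i \,\}.
\]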


Template Constraints Domain

Abstract Semantics

  • Abstract Semantics: transition relation in the abstract domain
  • Convex optimization:
    • Template x, transition x′ = x + 1, previous element x ≤ 5
    • New element given by max x′ s.t. x′ = x + 1 ∧ x ≤ 5
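Such a one-template step can be reproduced with any optimizing solver; a minimal sketch using the z3 Python bindings (our choice of solver, not necessarily what the talk's implementation uses):

    from z3 import Optimize, Int, sat

    x, x1 = Int('x'), Int("x'")     # x1 stands for the primed variable x'
    opt = Optimize()
    opt.add(x1 == x + 1, x <= 5)    # transition relation and previous bound
    h = opt.maximize(x1)            # template x, evaluated on the successor
    assert opt.check() == sat
    print(opt.upper(h))             # 6: the new element is x <= 6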



Policy Iteration

Simple Example

[CFA: initial edge i′ = 0 into node A; self-loop on A labeled i < 1000000 ∧ i′ = i + 1]

  • Template constraints domain {i}
  • Aim: find the smallest d s.t. i ≤ d is an inductive invariant
  • Use semantic equations for d
  • Necessary and sufficient condition:
    d = sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d) ∨ (i′ = 0) ∨ ⊥
  • Disjunctions come from multiple edges
  • ⊥ represents the unreachable state
  • We take the supremum, as the answer can be ∞ (unbounded) or −∞ (unreachable)



Policy Iteration

Explanation By Example

  • We have a min-max equation:
    d = min (sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d) ∨ (i′ = 0) ∨ ⊥)
  • We consider separate cases for the disjunctions
  • Replacing each disjunction with one of its arguments gives, e.g.,
    d = sup i′ s.t. i′ = 0
  • Such a choice is referred to as a policy


Policy Iteration

Explanation By Example - 2

  • Simplified system (with no disjunctions): d = sup i′ s.t. i′ = 0
  • Monotone and concave
  • At most 2 fixpoints
  • Can be solved using LP (see the sketch below)
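As an illustration of the LP encoding, here is a sketch with SciPy for the more interesting loop policy of the running example (our encoding; the strict integer guard i < 1000000 is relaxed to i ≤ 999999):

    from scipy.optimize import linprog

    # Value determination for the policy  i' = i + 1 ∧ i < 1000000 ∧ i ≤ d,
    # as an LP over variables (i, d): maximize d  s.t.  d = i + 1,
    # i ≤ d, i ≤ 999999.
    res = linprog(
        c=[0, -1],                           # maximize d == minimize -d
        A_ub=[[1, -1], [1, 0]],              # i - d ≤ 0,  i ≤ 999999
        b_ub=[0, 999999],
        A_eq=[[-1, 1]], b_eq=[1],            # d - i = 1
        bounds=[(None, None), (None, None)],
    )
    print(res.x[1])                          # 1000000.0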



Policy Iteration Example

Algorithm Run

  • d = sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d) ∨ (i′ = 0) ∨ ⊥
  • 1. Equation d = sup i′ s.t. ⊥ evaluates to d = −∞
  • 2. Substitute the value; does not hold:
    −∞ = sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ −∞) ∨ (i′ = 0) ∨ ⊥
  • 3. Increase the value to 0 using policy d = sup i′ s.t. i′ = 0
  • 4. Substitute; does not hold:
    0 = sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ 0) ∨ (i′ = 0) ∨ ⊥
  • 5. Increase to 1000000 using d = sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d
  • 6. Substitute; holds!
    1000000 = sup i′ s.t. (i′ = i + 1 ∧ i < 1000000 ∧ i ≤ 1000000) ∨ (i′ = 0) ∨ ⊥
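The whole run can be mimicked in a few lines. A hand-evaluated sketch (the real algorithm uses an SMT solver for policy improvement and an LP solver for value determination; here both are specialized to this one equation by hand):

    NEG_INF = float("-inf")

    def improve(d):
        """Value of each policy's sup at the current d (policy improvement)."""
        loop = NEG_INF if d == NEG_INF else min(d, 999999) + 1  # i'=i+1, i<10^6, i<=d
        return {"loop": loop, "init": 0, "bottom": NEG_INF}     # i'=0; unreachable

    def value_determination(policy):
        """Least fixpoint of the chosen policy's equation d = f(d)."""
        if policy == "init":
            return 0
        if policy == "loop":
            return 1000000        # unique solution of d = min(d, 999999) + 1
        return NEG_INF

    d = NEG_INF
    while True:
        vals = improve(d)
        best = max(vals, key=vals.get)
        if vals[best] <= d:       # no policy improves the value: fixpoint reached
            break
        d = value_determination(best)
    print(d)                      # 1000000, after visiting -inf and 0 as above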


Policy Iteration

Algorithm Overview

[Flowchart: start from policy p0; alternate value determination (producing v) and policy improvement (producing p); exit when v has converged]

p ← p0
repeat
    v ← value determination based on p
    p ← policy based on v
until v converges


Policy Iteration

Algorithm Continued

  • Policy Improvement: SMT call
    • Which policy can improve the current value?
  • Value Determination: LP call
    • What is the maximum value for the current policy?


Path Focusing

Similar to Large Block Encoding

  • One unknown per node, per template
  • Over-approximates the invariant in the abstract domain
  • Loss of precision

if (unknown()) { x = -1; } else { x = 1; }
assert(x != 0);

[CFA: edges x′ = 1 and x′ = −1 from A to B, so the abstract value at B is x ∈ [−1, 1]; the edge x = 0 from B to C is then not ruled out]



Path Focusing

Adding disjunctions

  • Solution: remove nodes!
  • CFG compaction:
    • Edges (A, τ1, B), (B, τ2, C), with no other edge incoming to B:
      converted to (A, τ1 ∧ τ2, C), and B is removed
    • Edges (A, τ1, B), (A, τ2, B), with no other edge incoming to B:
      converted to (A, τ1 ∨ τ2, B)

[Before/after: the A→B edges x′ = 1, x′ = −1 and the B→C edge x = 0 compact to a single A→C edge (x′ = 1 ∨ x′ = −1) ∧ (x = 0)]
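A minimal sketch of the two compaction rules on such an edge list (guards kept as strings; all names are our own illustration):

    def merge_parallel(edges):
        """(A, t1, B), (A, t2, B)  ->  (A, t1 ∨ t2, B)."""
        merged = {}
        for src, guard, dst in edges:
            key = (src, dst)
            merged[key] = guard if key not in merged else f"({merged[key]}) ∨ ({guard})"
        return [(s, g, d) for (s, d), g in merged.items()]

    def inline_node(edges, b):
        """Remove node b: fuse each (A, t1, b) with each (b, t2, C) into
        (A, t1 ∧ t2, C).  Assumes b is not a loop head (no self-loop)."""
        ins = [(s, g) for s, g, d in edges if d == b]
        outs = [(g, d) for s, g, d in edges if s == b]
        rest = [e for e in edges if b not in (e[0], e[2])]
        return rest + [(a, f"({g1}) ∧ ({g2})", c) for a, g1 in ins for g2, c in outs]

    edges = [("A", "x' = 1", "B"), ("A", "x' = -1", "B"), ("B", "x = 0", "C")]
    print(inline_node(merge_parallel(edges), "B"))
    # [('A', "((x' = 1) ∨ (x' = -1)) ∧ (x = 0)", 'C')]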


Path Focusing

Properties

  • For a well-structured graph: only loop heads remain
  • Disjunctions create new policies
  • Possible improvement: cut-set
    • A set of nodes which cuts all the cycles in the graph
  • Disadvantage: requires pre-processing


Problems of Policy Iteration

Motivation for our work

  • Problems with the approach above:
    • Scalability: at each step, we update every policy, and at each step we
      solve the global equation system (of the size of the entire program)
    • Interoperability: policy iteration does not fit into any existing
      framework, and pre-processing makes it worse


Problems of Policy Iteration

Our Contribution

  • Our work: LPI (Local Policy Iteration)
  • Exploits locality to avoid redundant computation
  • Avoids solving the global equation at each point
  • Unifies policy iteration with other approaches using CPA
    (Configurable Program Analysis framework)
  • No pre-processing is involved



LPI as a Configurable Program Analysis

[Example transition: guard x ≤ 4, update x′ = x + 1]

  • Transfer relation:
    • Similar to abstract interpretation
    • Record the bound along with the policy
  • Global map M : location → abstract state
  • When two states reach the same node, merge
  • When a merge closes a loop, perform value determination
    • Follow backpointers to re-create the global problem



Abstract Domain

  • Two lattices:
    • Abstracted state (element of the template constraints domain)
    • Intermediate state (formula)
  • Idea: avoid pre-processing
    • Propagate intermediate states; convert to abstracted states at loop heads



LPI Abstract Domain

Abstracted States

Abstracted State

A set of tuples (bound, policy, backpointer), one per template: the current bound, the policy used to derive it, and the predecessor value it was derived from.

  • Abstracted-state example: {i : (0, i′ = 0, A)}
  • Partial order given by component-wise comparison of bounds
  • On merge:
    • Pick the upper bound for each template
    • Keep the corresponding policy and backpointer
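A minimal sketch of this join (names and types are our own; in the talk's tool this lives inside CPAchecker):

    from dataclasses import dataclass

    @dataclass
    class TemplateValue:
        bound: float      # current bound d for the template
        policy: str       # policy (formula) that produced the bound
        backpointer: str  # predecessor state the bound was derived from

    def merge(a, b):
        """Join of two abstracted states (dicts: template -> TemplateValue).
        Per template, keep the larger bound with its policy and backpointer."""
        out = {}
        for t in a.keys() | b.keys():
            va, vb = a.get(t), b.get(t)
            if va is None or (vb is not None and vb.bound > va.bound):
                out[t] = vb
            else:
                out[t] = va
        return out

    s1 = {"i": TemplateValue(1, "i' = i + 1", "A")}
    s2 = {"i": TemplateValue(0, "i' = 0", "I"), "j": TemplateValue(0, "j' = 0", "I")}
    print(merge(s1, s2))  # i keeps bound 1 (from s1); j comes from s2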



LPI Abstract Domain

Intermediate States

Intermediate State

  • A formula φ(X′) representing a set of reachable states
  • Ω meta-variables instead of backpointers
  • Intermediate-state example: x′ = 1 ∧ Ω = A ∨ x′ = 0 ∧ Ω = B
  • Propagation: symbolic execution
  • Can be converted to an abstracted state using abstraction:
    • Maximizing for every template
    • Recording the policy and backpointer



LPI

Propagation Overview

  • Start with an abstracted state at node A: {x : (0, x′ = 0, I)}
  • Successor under edge x′ = x + 5
  • Intermediate state: x′ = x + 5 ∧ x ≤ 0 ∧ Ω = A
  • If we need to perform abstraction, we get {x : (5, x′ = x + 5, A)}


Local Value Determination

  • On closing the loop (at an abstracted state):
    • Follow backpointers, collecting constraints
    • Create the value determination problem
    • Problem size: potentially only the largest loop, not the whole program



Local Policy Iteration

Algorithm Example

[CFA: edge i′ = 0 ∧ j′ = 0 from I to A; self-loop on A labeled i < 10 ∧ i′ = i + 1; edge ¬(i < 10) from A to B; self-loop on B labeled j < 10 ∧ j′ = j + 1]

  • 1. Start with abstracted state ⊤
  • 2. Intermediate state i′ = 0 ∧ j′ = 0 ∧ Ω = I
  • 3. Abstracted to {i : (0, I), j : (0, I)}
  • 4. Intermediate state i ≤ 0 ∧ j ≤ 0 ∧ Ω = A
  • 5. Abstracted to {i : (1, A), j : (0, A)}
  • 6. Merge A, val. det.: {i : (10, A), j : (0, I)}
  • 7. Intermediate i ≤ 10 ∧ j ≤ 0 ∧ ¬(i < 10) ∧ Ω = A
  • 8. Abstracted: {i : (10, A), j : (0, A)}
  • 9. Intermediate:

i = 10 ∧ j ≤ 0 ∧ j′ = j + 1 ∧ Ω = B

  • 10. Abstracted: {i : (10, B), j : (1, B)}
  • 11. Merge B, val. det.: {i : (10, B), j : (10, A)}



Reachability of Bad States

  • Whether we are safe:
    • φ ∧ E is unsat
    • Example: (x ≤ 10) ∧ (x = 11)
  • Whether we are unsafe:
    • φ ∧ ¬E is unsat
    • Example: (x = 0) ∧ ¬(x = 0)
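Each check is a single satisfiability query; a sketch of the safe case with the z3 Python bindings (our choice of solver):

    from z3 import Int, Solver, unsat

    x = Int('x')
    phi = x <= 10               # computed invariant at the error location
    err = x == 11               # error condition E
    s = Solver()
    s.add(phi, err)
    print(s.check() == unsat)   # True: no invariant state is an error state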



Algorithm Properties

  • Soundness
    • Only terminate when the invariant is inductive
  • Termination
    • Bounds can only grow
    • Each bound corresponds to some policy
    • There are finitely many policies
  • Least-invariant property
    • Only feasible policies are selected



LPI Configurations

  • Configurations:
    • Intervals (±x)
    • Octagons (the above and ±x ± y)
    • Rich templates (the above and ±2x ± y, ±x ± y ± z, ±2x ± y ± z)
    • Unrolling
    • Simple congruence analysis
  • Refinement: progressively switch to a more expensive configuration


Contrast with Classical Policy Iteration

  • Only update policies which need updating
  • Run value determination on a reduced program section
  • Stated in the CPA framework
  • (Unguided) refinement of template precision
  • Local value determination optimizations (not covered in this presentation)


Local Policy Iteration

Code analysis tool

Tool

Included in CPAchecker trunk.

  • https://github.com/dbeyer/cpachecker
  • Configurations:
    • -policy-intervals
    • -policy
    • -policy-ensemble
    • -policy-counterexample-checking


LPI Evaluation

  • Evaluated on the SV-COMP "Loops" category
  • Compared with:
    • BLAST (2014)
    • CPAchecker-SVcomp15
    • PAGAI
  • Across "true" (safe) benchmarks


Results

Comparison of Approaches

  • Difference between approaches; a cell reads: row tool vs. column tool

                  vs. LPI   vs. PAGAI   vs. BLAST   vs. CPAchecker   Unique   Verified   Incorrect
    LPI              —         13          21             22            8        60          1
    PAGAI            5          —          14             15                     52          1
    BLAST            4          5           —              7                     43          1
    CPAchecker      19         20          21              —           12        57          2


Results

Timing Results

[Plot: CPU time (s), log scale from 10⁻² to 10³, comparing PAGAI, LPI-Refinement, BLAST (2014), CPAchecker, and LPI-Intervals]


Contributions

Re-iterating

  • A new scalable algorithm for policy iteration
  • A tool for program analysis (built on the CPAchecker framework)
  • The only policy-iteration-based tool capable of dealing with C


Questions?


Policy Iteration

Fixpoints and Concavity

  • Concavity and monotonicity limit the number of fixpoints
  • Can solve for x = f(x)

[Plot: y = x against a concave y = f(x); they intersect at the fixpoints f1 = f(f1) and f2 = f(f2), and the Kleene iterates x0, f(x0), f(f(x0)), ... climb toward the least fixpoint]
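The staircase in the figure is just repeated application of f. A tiny sketch with an illustrative monotone, concave map (our choice; a minimum of affine functions):

    # Iterating from below climbs the staircase x0, f(x0), f(f(x0)), ...
    # up to the least fixpoint, here x = 10.
    f = lambda x: min(x + 1.0, 10.0)
    x = 0.0
    while f(x) != x:
        x = f(x)
    print(x)   # 10.0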


Policy Iteration

Concavity of Abstract Semantics

  • Linear Semantics: x′ = Tx ∧ G(x)
  • Let t′ = t(Tx)

[Plot: the abstract semantics d ↦ max t′x s.t. x ∈ G(x) ∧ tx ≤ d is a concave function of d, bounded above by max t′x s.t. x ∈ G(x)]