From Program Verification to Program Synthesis - Overview (Jaak Ristioja) - PowerPoint presentation transcript




SLIDE 1

From Program Verification to Program Synthesis

Overview Jaak Ristioja March 30, 2010

SLIDE 2

Reference

From Program Verification to Program Synthesis.

POPL ’10; January 17-23, 2010
Saurabh Srivastava, University of Maryland, College Park
Sumit Gulwani, Microsoft Research, Redmond
Jeffrey S. Foster, University of Maryland, College Park
doi:10.1145/1706299.1706337 (the paper includes numerous typos and ambiguities)

SLIDE 4

Introduction

Automated program synthesis

◮ Correct-by-construction
◮ Eases the task of programming
◮ Automated debugging
◮ Programmer only deals with high-level design
◮ New non-trivial algorithms could be discovered
◮ Difficult to implement
SLIDE 5

Introduction

Verification and synthesis

Program verification

◮ synthesizes program proofs from programs
◮ for loops it uses
  ◮ inductive invariants for partial correctness
  ◮ ranking functions for termination
◮ does verification

Synthesis problem → verification problem

◮ encoding guards, statements, etc. as logical facts
◮ using verification tools for synthesis
◮ by verification we infer statements, guards, etc.

Proof-theoretic synthesis

◮ Proof for the program is synthesized alongside the program

SLIDE 8

Motivating example

Bresenham’s line drawing algorithm

Pre- and post-condition for a line drawing program:

τpre : 0 < Y ≤ X
τpost : ∀k : 0 ≤ k ≤ X ⇒ 2|out[k] − (Y/X)k| ≤ 1

and resource constraints, for example constraints for
◮ control flow,
◮ stack space,
◮ available operations, etc.

can we synthesize the program?

SLIDE 10

Motivating example

Bresenham’s line drawing algorithm

Example

Bresenhams(int X, int Y)
  v1 := 2Y − X; y := 0; x := 0;
  while (x ≤ X)
  | out[x] := y;
  | if (v1 < 0)
  | | v1 := v1 + 2Y;
  | else
  | | v1 := v1 + 2(Y − X); y++;
  | x++;
  return out;
SLIDE 11

Motivating example

Bresenham’s line drawing algorithm

Observations

◮ We can write statements as equality predicates
◮ We can write acyclic program fragments as transition systems

Example

◮ x := e becomes an equality predicate x′ = e where
  ◮ x′ is a renaming of x to its output value
  ◮ e is the expression over the non-primed values
◮ y := x; x := y becomes y′ = x ∧ x′ = y′
◮ if (x > 0) x := y; else skip; becomes
  [] x > 0 → x′ = y
  [] x ≤ 0 → true

SLIDE 12

Motivating example

Bresenham’s line drawing algorithm

Example

[] true → v1′ = 2Y − X ∧ y′ = 0 ∧ x′ = 0

while (x ≤ X)
| [] v1 < 0 → out′ = upd(out, x, y) ∧
|             v1′ = v1 + 2Y ∧
|             y′ = y ∧
|             x′ = x + 1
| [] v1 ≥ 0 → out′ = upd(out, x, y) ∧
|             v1′ = v1 + 2(Y − X) ∧
|             y′ = y + 1 ∧
|             x′ = x + 1

SLIDE 13

Motivating example

Bresenham’s line drawing algorithm

To prove partial correctness, we can write down the inductive loop invariant for the while-loop:

τ : 0 < Y ≤ X ∧
    v1 = 2(x + 1)Y − (2y + 1)X ∧
    2(Y − X) ≤ v1 ≤ 2Y ∧
    ∀k : 0 ≤ k < x ⇒ 2|out[k] − (Y/X)k| ≤ 1

and the verification condition can be written as four implications, one for each of the four paths in the program:

τpre ∧ sentry ⇒ τ′
τ ∧ ¬gloop ⇒ τpost
τ ∧ gloop ∧ gbody1 ∧ sbody1 ⇒ τ′
τ ∧ gloop ∧ gbody2 ∧ sbody2 ⇒ τ′

where τ′ is the renamed (primed) version of the loop invariant.
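As a sanity check (not a proof), we can simulate the loop and evaluate τ at every loop head; the helper name `tau` below is ours, and the array clause again uses the integer form of the inequality:

```python
# Brute-force sanity check (not a proof): evaluate the loop invariant tau
# at every loop head of Bresenham's loop, for all small 0 < Y <= X.

def tau(X, Y, v1, y, x, out):
    return (0 < Y <= X
            and v1 == 2 * (x + 1) * Y - (2 * y + 1) * X
            and 2 * (Y - X) <= v1 <= 2 * Y
            and all(abs(2 * (out[k] * X - Y * k)) <= X for k in range(x)))

for X in range(1, 12):
    for Y in range(1, X + 1):
        out = [0] * (X + 1)
        v1, y, x = 2 * Y - X, 0, 0
        while x <= X:
            assert tau(X, Y, v1, y, x, out)   # invariant holds at the loop head
            out[x] = y
            if v1 < 0:
                v1 += 2 * Y
            else:
                v1 += 2 * (Y - X)
                y += 1
            x += 1
        assert tau(X, Y, v1, y, x, out)       # ... and when the loop exits
print("tau is preserved on all samples")
```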

SLIDE 14

Motivating example

Bresenham’s line drawing algorithm

sentry : v1′ = 2Y − X ∧ y′ = 0 ∧ x′ = 0
gloop : x ≤ X
gbody1 : v1 < 0
sbody1 : out′ = upd(out, x, y) ∧ v1′ = v1 + 2Y ∧ y′ = y ∧ x′ = x + 1
gbody2 : v1 ≥ 0
sbody2 : out′ = upd(out, x, y) ∧ v1′ = v1 + 2(Y − X) ∧ y′ = y + 1 ∧ x′ = x + 1

SLIDE 15

Motivating example

One can validate that the loop invariant τ satisfies the verification condition.

◮ e.g. by using SMT (Satisfiability Modulo Theories) solvers

There are also powerful program verification tools that can prove total correctness by
◮ automatically generating fixed-point solutions for loop invariants, such as τ
◮ inferring ranking functions (ϕ) to prove termination

So if we can infer the verification condition, perhaps we can also infer
◮ the guards gi and
◮ the statements si
at the same time?

SLIDE 17

Motivating example

How to infer guards and statements

1. encode programs as transition systems
2. assert appropriate constraints
3. use verification tools to systematically infer solutions for the unknowns in the constraints

The unknowns are
◮ invariants
◮ statements
◮ guards

Types of constraints
◮ well-formedness constraints, to get solutions corresponding to real-life programs
◮ progress constraints, to ensure termination

SLIDE 19

Specification for proof-theoretic approach

For synthesis we first need a specification for the program we want to construct.

Synthesis scaffold

⟨F, D, R⟩

◮ F - functional specification
◮ D - domain constraints
◮ R - resource constraints

SLIDE 20

Specification for proof-theoretic approach

Synthesis scaffold

Functional specification F

Let vin and vout be vectors containing the input and output variables.

F = (Fpre(vin), Fpost(vout))

where Fpre(vin) and Fpost(vout) are formulas that hold at the program entry and exit locations, respectively.

SLIDE 21

Specification for proof-theoretic approach

Synthesis scaffold

Domain constraints D

D = (Dexp, Dgrd) where Dexp is the domain of expressions in the program and Dgrd is the domain of boolean expressions used in program guards.

Proof domain Dprf

◮ Proof-theoretic synthesis needs to synthesize proof terms from a proof domain Dprf.
◮ Dprf needs to be at least as expressive as Dexp and Dgrd.
◮ We need a solver capable of handling Dprf.

SLIDE 23

Specification for proof-theoretic approach

Synthesis scaffold

Resource constraints R

R = (Rflow, Rstack, Rcomp)

◮ Rflow is a flowgraph template from the grammar
  T ::= ◦ | ∗(T) | T; T
◮ Rstack : type → ℕ is a mapping indicating the number of extra temporary variables of each type available to the program.
◮ Rcomp : op → ℕ is a mapping defining how many operations of each type can be included in the program. Rcomp = ∅ indicates no constraints.

SLIDE 24

Specification for proof-theoretic approach

Synthesis scaffold

Example

◮ F = (x ≥ 1, (i − 1)² ≤ x < i²)
◮ Dexp limited to linear arithmetic (LA) expressions (no √)
◮ Dgrd limited to quantifier-free first-order logic (FOL) over LA
◮ Rflow = (◦; ∗(◦); ◦), Rstack = {(int, 1)}, Rcomp = ∅

IntSqrt(int x)
  v := 1; i := 1;
  whileτ,ϕ (v ≤ x)
  | v := v + 2i + 1; i++;
  return i − 1;

◮ Invariant τ : v = i² ∧ x ≥ (i − 1)² ∧ i ≥ 1
◮ Ranking function ϕ : x − (i − 1)²
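A runnable sketch of this program, with τ, ϕ and the postcondition turned into assertions (the assertion placement is ours; this checks the proof artifacts dynamically rather than proving them):

```python
# The IntSqrt program from the slide, with the invariant tau and ranking
# function phi asserted at each loop head (a sanity check, not a proof).

def int_sqrt(x):
    assert x >= 1                       # precondition F_pre
    v, i = 1, 1
    prev_phi = None
    while True:
        assert v == i * i and x >= (i - 1) ** 2 and i >= 1   # tau
        phi = x - (i - 1) ** 2
        assert phi >= 0                                      # tau implies phi >= 0
        if prev_phi is not None:
            assert phi < prev_phi                            # phi strictly decreases
        if not (v <= x):
            break
        prev_phi = phi
        v, i = v + 2 * i + 1, i + 1
    assert (i - 1) ** 2 <= x < i ** 2   # postcondition F_post
    return i - 1

print([int_sqrt(x) for x in range(1, 11)])  # [1, 1, 1, 2, 2, 2, 2, 2, 3, 3]
```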

SLIDE 25

Synthesis conditions

Transition systems for acyclic code

One way to infer a set of acyclic statements that transform a precondition into a postcondition would be to use assignments:

{φpre} x := ex; y := ey; {φpost}

Using Hoare’s axiom for assignment, we can generate the assignment condition

φpre ⇒ (φpost[x → ex])[y → ey]

Shortcomings with respect to our task:
◮ substitutions are hard to reason about
◮ the order of assignments matters
◮ we may need more than a fixed number of statements
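The order-dependence of sequential assignment, versus the order-independence of a parallel transition, can be seen in a few lines of Python (helper names are made up):

```python
# Contrast sequential assignment, where order matters, with a parallel
# transition, where it does not: the motivation for moving to transition
# systems. Predates nothing in the paper; purely illustrative.

def run_sequential(state, assignments):
    """Execute assignments one after another; each expr sees earlier updates."""
    s = dict(state)
    for var, expr in assignments:
        s[var] = expr(s)          # expr reads the *current* state
    return s

def run_transition(state, assignments):
    """Execute all assignments in parallel: every expr reads the old state."""
    s = dict(state)
    s.update({var: expr(state) for var, expr in assignments})
    return s

swap = [("x", lambda s: s["y"]), ("y", lambda s: s["x"])]
init = {"x": 1, "y": 2}

print(run_sequential(init, swap))                  # {'x': 2, 'y': 2}: y := x; x := y collapses both
print(run_sequential(init, list(reversed(swap))))  # {'x': 1, 'y': 1}: different order, different result
print(run_transition(init, swap))                  # {'x': 2, 'y': 1}: a genuine, order-independent swap
```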

SLIDE 28

Synthesis conditions

Transition systems for acyclic code

Transitions

A transition is a (possibly parallel) mapping of input variables (x) to output variables (x′):

{φpre} ⟨x′, y′⟩ = ⟨ex, ey⟩ {φ′post}

Corresponding verification condition:

φpre ∧ x′ = ex ∧ y′ = ey ⇒ φ′post

Every assignment (state update) can be written as a single transition.

Example

For x := ex; y := ey we will have

{φpre} ⟨x′, y′⟩ = ⟨ex, ey[x → ex]⟩ {φ′post}

φpre ∧ x′ = ex ∧ y′ = ey[x → ex] ⇒ φ′post

SLIDE 30

Synthesis conditions

Transition systems for acyclic code

Transitions

A transition is a (possibly parallel) mapping of input variables (x) to output variables (x′). {φpre}

  • x′, y′

= ex, ey

  • φ′

post

  • Corresponding verification condition:

φpre ∧ x′ = ex ∧ y′ = ey ⇒ φ′

post

Every assignment (state update) can be written as a single transition

Example

For x := ex; y := ey we will have {φpre}

  • x′, y′

= ex, ey[x → ex]

  • φ′

post

  • φpre ∧ x′ = ex ∧ y′ = ey[x → ex] ⇒ φ′

post

30 / 91

slide-31
SLIDE 31

Synthesis conditions

Transition systems for acyclic code

Transitions

A transition is a (possibly parallel) mapping of input variables (x) to output variables (x′). {φpre}

  • x′, y′

= ex, ey

  • φ′

post

  • Corresponding verification condition:

φpre ∧ x′ = ex ∧ y′ = ey ⇒ φ′

post

Every assignment (state update) can be written as a single transition

Example

For x := ex; y := ey we will have {φpre}

  • x′, y′

= ex, ey[x → ex]

  • φ′

post

  • φpre ∧ x′ = ex ∧ y′ = ey[x → ex] ⇒ φ′

post

31 / 91

slide-32
SLIDE 32

Synthesis conditions

Transition systems for acyclic code

Guarded transitions

Let’s extend transitions to guarded transitions [] g → s, meaning that the statements s are executed only if the quantifier-free guard g holds.

Transition systems

We can represent arbitrary acyclic program fragments using sets of guarded transitions:

{φpre} {[] gi → si}i {φ′post}

The corresponding verification condition is:

⋀i (φpre ∧ gi ∧ si ⇒ φ′post)

Note that this is much simpler:
◮ no reasoning about statement ordering to puzzle us
◮ guards gi and statements si are facts, just like pre- and postconditions
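For a finite domain, the verification condition ⋀i (φpre ∧ gi ∧ si ⇒ φ′post) can be checked by brute force. The sketch below encodes the earlier `if (x > 0) x := y; else skip;` example, modelling skip as x′ = x; all names and the chosen pre/postconditions are illustrative:

```python
# Check the verification condition of a guarded transition system by
# brute force over a tiny domain (illustrative only; real tools use SMT).

def holds(pre, guard, stmt, post, domain):
    """For all states: pre /\ guard /\ stmt => post on the output state."""
    for x in domain:
        for y in domain:
            for xp in domain:          # candidate output value x'
                if pre(x, y) and guard(x, y) and stmt(x, y, xp):
                    if not post(xp, y):
                        return False
    return True

domain = range(-3, 4)
pre  = lambda x, y: x >= 0 and y >= 0
post = lambda x, y: x >= 0            # property of the output state

# [] x > 0 -> x' = y        [] x <= 0 -> x' = x (skip)
t1 = (lambda x, y: x > 0,  lambda x, y, xp: xp == y)
t2 = (lambda x, y: x <= 0, lambda x, y, xp: xp == x)

ok1 = holds(pre, *t1, post, domain)
ok2 = holds(pre, *t2, post, domain)
print(ok1, ok2)  # True True: both conjuncts of the VC hold
```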

SLIDE 36

Synthesis conditions

◮ Program verification tools find fixed-point solutions (invariants) to satisfy verification conditions
◮ These conditions have known statements and guards
◮ For synthesis, we generalize this problem: we make the statements and guards also unknowns in the formulas
  ◮ verification conditions for verification
  ◮ synthesis conditions for synthesis
◮ If a program is correct (verifiable), then its verification condition is valid
◮ If a valid program exists for a scaffold, then its synthesis condition has a satisfying solution

SLIDE 39

Synthesis

Expanding the flowgraph

Transition system language (Tsl)

p ::= choose {[] gi → si}i | whileτ,ϕ (g) {p} | p; p

SLIDE 40

Synthesis

Expanding the flowgraph

Expand function

Expand^{n,Dprf}_{D,R}(◦) = choose {[] gi → si}i=1...n
Expand^{n,Dprf}_{D,R}(∗(T)) = whileτ,ϕ (g) { Expand^{n,Dprf}_{D,R}(T) }
Expand^{n,Dprf}_{D,R}(T1; T2) = Expand^{n,Dprf}_{D,R}(T1); Expand^{n,Dprf}_{D,R}(T2)

where all gi, si, g, τ and ϕ are newly generated unknowns, and

s = ⋀i xi′ = ei   where xi ∈ V, ei ∈ Dexp|V
τ ∈ Dprf|V
g ∈ Dgrd|V

and V = vin ∪ vout ∪ T ∪ L, where
◮ T is subject to Rstack
◮ ei is subject to Rcomp
◮ L is the set of iteration counters and ranking-function tracker variables

SLIDE 42

Synthesis

Expanding the flowgraph

Example

◮ F = (x ≥ 1, (i − 1)² ≤ x < i²)
◮ Dexp limited to linear arithmetic (LA) expressions (no √)
◮ Dgrd limited to quantifier-free first-order logic (FOL) over LA
◮ Rflow = (◦; ∗(◦); ◦), Rstack = {(int, 1)}, Rcomp = ∅

For n = 1 and FOL over quadratic expressions as Dprf we get:

expsqrt = Expand^{n,Dprf}_{D,R}(Rflow) =
  choose {[] g1 → s1};
  whileτ,ϕ (g0) {
    choose {[] g2 → s2};
  };
  choose {[] g3 → s3};

where vin = vout = {x}, T = {v}, L = {i, r}.

SLIDE 44

Synthesis

Safety conditions

To encode the validity of a Hoare triple as a formula, we define

PathC : φ × Tsl × φ → φ

which takes a precondition, a sequence of statements, and a postcondition, and returns the safety condition:

PathC(φpre, choose {[] gi → si}i, φpost) = ⋀i (φpre ∧ gi ∧ si ⇒ φ′post)

PathC(φpre, whileτ,ϕ (g) {pl}, φpost) =
  (φpre ⇒ τ′) ∧ PathC(τ ∧ g, pl, τ) ∧ (τ ∧ ¬g ⇒ φ′post)

Encoding sequences of statements is a bit more difficult because of variable renaming (primed versions of τ and φpost).

SLIDE 47

Synthesis

Safety conditions

Note: any two consecutive acyclic fragments with n1 and n2 transitions can be collapsed into one with n1 · n2 transitions.

PathC(φpre, whileτ,ϕ (g) {pl}; p, φpost) =
  (φpre ⇒ τ′) ∧ PathC(τ ∧ g, pl, τ) ∧ PathC(τ ∧ ¬g, p, φpost)

PathC(φpre, choose {[] gi → si}i; whileτ,ϕ (g) {pl}, φpost) =
  ⋀i (φpre ∧ gi ∧ si ⇒ τ′) ∧ PathC(τ ∧ g, pl, τ) ∧ (τ ∧ ¬g ⇒ φ′post)

PathC(φpre, choose {[] gi → si}i; whileτ,ϕ (g) {pl}; p, φpost) =
  ⋀i (φpre ∧ gi ∧ si ⇒ τ′) ∧ PathC(τ ∧ g, pl, τ) ∧ PathC(τ ∧ ¬g, p, φpost)

SafetyCond(exp, F) = PathC(Fpre, exp, Fpost)

SLIDE 50

Synthesis

Safety conditions

Example

◮ F = (x ≥ 1, (i − 1)² ≤ x < i²)
◮ expsqrt =
  choose {[] g1 → s1};
  whileτ,ϕ (g0) {
    choose {[] g2 → s2};
  };
  choose {[] g3 → s3};

SafetyCond(expsqrt, F) =
  (x ≥ 1 ∧ g1 ∧ s1 ⇒ τ′) ∧
  (τ ∧ g0 ∧ g2 ∧ s2 ⇒ τ′) ∧
  (τ ∧ ¬g0 ∧ g3 ∧ s3 ⇒ (i′ − 1)² ≤ x′ < i′²)

where gi, si and τ are all unknowns.

SLIDE 52

Synthesis

Well-formedness conditions

WellFormTS({[] gi → si}i) ≐ ⋀i valid(si) ∧ (⋁i gi)

where
◮ valid(si) ensures that each variable is assigned only once in si
◮ (⋁i gi) guarantees that the whole space is covered by the guards gi
◮ the guards do not have to be mutually exclusive

WellFormCond(exp) = ⋀ { WellFormTS({[] gi → si}i) : (choose {[] gi → si}i) ∈ cond(exp) }

where cond(exp) is the set of all choose statements in the expanded scaffold exp. This is called non-iterative upper-bounded search; iterative lower-bounded search is also possible (recall the parameter n at expansion).

SLIDE 55

Synthesis

Well-formedness conditions

Example

◮ expsqrt =

choose {[] g1 → s1};
whileτ,ϕ (g0) {
  choose {[] g2 → s2};
};
choose {[] g3 → s3};

WellFormCond(expsqrt) = valid(s1) ∧ valid(s2) ∧ valid(s3) ∧ g1 ∧ g2 ∧ g3

SLIDE 57

Synthesis

Progress conditions

prog(whileτ,ϕ (g) {p}) ≐ (r = ϕ) ∧ (τ ⇒ r ≥ 0) ∧ PathC(τend ∧ g, end(p), r > ϕ)

where
◮ r is a new progress-tracking variable (not an unknown)
◮ τend is the invariant of the last loop in p
  ◮ meaning that we require intermediate loop invariants to carry enough information
◮ end(p) is the fragment of p after the last loop

RankCond(exp) = ⋀ { prog(whileτ,ϕ (g) {p}) : (whileτ,ϕ (g) {p}) ∈ loops(exp) }

where loops(exp) is the set of all while statements in the expanded scaffold exp.

SLIDE 59

Synthesis

Progress conditions

Example

◮ expsqrt =

choose {[] g1 → s1};
whileτ,ϕ (g0) {
  choose {[] g2 → s2};
};
choose {[] g3 → s3};

RankCond(expsqrt) = (r = ϕ) ∧ (τ ⇒ r ≥ 0) ∧ (τ ∧ g0 ∧ g2 ∧ s2 ⇒ r′ > ϕ′)

SLIDE 61

Synthesis

Entire synthesis algorithm

◮ Input:
  ◮ scaffold ⟨F, D, R⟩
  ◮ maximum number of transitions n
  ◮ proof domain Dprf
◮ Output: executable program or FAIL

exp := Expand^{n,Dprf}_{D,R}(Rflow);
sc := SafetyCond(exp, F) ∧ WellFormCond(exp) ∧ RankCond(exp);
π := Solver(sc);
if (unsat(π))
| return FAIL;
return Exeπ(exp);
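To illustrate the shape of the algorithm (not the paper's actual solver-based method), here is a toy enumerative stand-in for `Solver(sc)`: it searches a tiny space of linear loop-body updates for the IntSqrt scaffold and keeps a candidate only if it meets the functional specification on sample inputs. All names, the coefficient grammar, and the sampling are our simplifications; the real approach solves the synthesis condition symbolically, for all inputs:

```python
# Toy enumerative "synthesis" for the IntSqrt scaffold:
#   v := 1; i := 1; while (v <= x) { v, i := linear updates }; return i - 1
import itertools

def run_candidate(x, dv, di, bound=32):
    """dv, di are coefficient triples (a, b, c) meaning var' = a*v + b*i + c."""
    v, i = 1, 1
    for _ in range(bound):               # step bound rejects non-terminating candidates
        if not (v <= x):
            return i - 1
        v, i = (dv[0] * v + dv[1] * i + dv[2],
                di[0] * v + di[1] * i + di[2])
    return None

def meets_spec(x, r):                    # F_post with r = i - 1
    return r is not None and r ** 2 <= x < (r + 1) ** 2

def synthesize(samples, coeffs=(0, 1, 2, 3)):
    for dv in itertools.product(coeffs, repeat=3):
        for di in itertools.product(coeffs, repeat=3):
            if all(meets_spec(x, run_candidate(x, dv, di)) for x in samples):
                return dv, di            # first candidate meeting the spec on all samples
    return None                          # FAIL

sol = synthesize(range(1, 33))
print(sol)   # a coefficient pair such as v' = v + 2i + 1, i' = i + 1
```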

SLIDE 62

Synthesis

Concretization algorithm

Exeπ(p1; p2) = Exeπ(p1); Exeπ(p2)
Exeπ(whileτ,ϕ (g) {p}) = whileπ(τ),π(ϕ) (π(g)) { Exeπ(p) }
Exeπ(choose {[] g → s}) = if (π(g)) { Stmt(π(s)) } else { skip }
Exeπ(choose {[] gi → si}i=1...n) = if (π(g1)) { Stmt(π(s1)) } else { Exeπ(choose {[] gi → si}i=2...n) }
Stmt(⋀i=1...n xi′ = ei) = t1 := e1; ... ; tn := en; x1 := t1; ... ; xn := tn;
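The two interesting cases, turning a solved choose into an if/else chain and realizing a parallel update via temporaries, can be sketched as follows (helper names are ours):

```python
# Sketch of concretization: a solved choose block becomes an if/else chain
# over its guards, and a parallel update /\_i xi' = ei becomes straight-line
# code via temporaries t_i := e_i; ...; x_i := t_i, so every e_i still
# reads the old state. Purely illustrative.

def stmt(updates):
    """updates: list of (var, expr) pairs; returns a sequential realization."""
    def run(state):
        temps = [expr(state) for _, expr in updates]   # t_i := e_i (old state)
        new_state = dict(state)
        for (var, _), t in zip(updates, temps):        # x_i := t_i
            new_state[var] = t
        return new_state
    return run

def exe_choose(transitions):
    """transitions: list of (guard, updates); apply the first enabled one,
    mirroring Exe's if (g1) {...} else if (g2) {...} chain."""
    def run(state):
        for guard, updates in transitions:
            if guard(state):
                return stmt(updates)(state)
        return dict(state)    # no guard enabled; well-formedness rules this out
    return run

# choose { [] x > 0 -> x' = y   [] x <= 0 -> x' = x }
step = exe_choose([
    (lambda s: s["x"] > 0,  [("x", lambda s: s["y"])]),
    (lambda s: s["x"] <= 0, [("x", lambda s: s["x"])]),
])
print(step({"x": 3, "y": 7}))   # {'x': 7, 'y': 7}
print(step({"x": -1, "y": 7}))  # {'x': -1, 'y': 7}
```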

SLIDE 64

Synthesis

Example

The solver finds a satisfying solution π to the synthesis condition

(x ≥ 1 ∧ g1 ∧ s1 ⇒ τ′) ∧ (τ ∧ g0 ∧ g2 ∧ s2 ⇒ τ′) ∧ (τ ∧ ¬g0 ∧ g3 ∧ s3 ⇒ (i′ − 1)² ≤ x′ < i′²)
∧ valid(s1) ∧ valid(s2) ∧ valid(s3)
∧ (r = ϕ) ∧ (τ ⇒ r ≥ 0) ∧ (τ ∧ g0 ∧ g2 ∧ s2 ⇒ r′ > ϕ′)

with

τ : (v = i²) ∧ (x ≥ (i − 1)²) ∧ (i ≥ 1)
g0 : v ≤ x
ϕ : x − (i − 1)²
s1 : v′ = 1 ∧ i′ = 1 ∧ x′ = x ∧ r′ = r
s2 : v′ = v + 2i + 1 ∧ i′ = i + 1 ∧ x′ = x ∧ r′ = r
s3 : v′ = v ∧ i′ = i ∧ x′ = x ∧ r′ = r
SLIDE 67

Synthesis

Requirements for solvers

◮ Support for multiple positive and negative unknowns
  ◮ e.g. in (τ ∧ g ⇒ τ′) ∧ (τ ∧ ¬g ⇒ φpost), the unknown τ occurs both positively and negatively
◮ Solutions are maximally weak, ensuring that the non-standard conditions valid(si) will hold

SLIDE 69

Experimental case studies

Tools

The VS3 project

◮ Arithmetic verification tool VS3_LIA
  ◮ works over the theory of linear arithmetic
  ◮ discovers (quantifier-free) invariants in DNF form, with linear inequalities over program variables as the atomic facts
  ◮ supports limits on data size in bits and a limit on the number of conjunctions/disjunctions
◮ VS3_QA = VS3_LIA + quadratic expressions (incomplete)
◮ Predicate abstraction verification tool VS3_PA
  ◮ works over a combination of the theories of equality with uninterpreted functions, arrays, and linear arithmetic
  ◮ discovers (possibly) quantified invariants
  ◮ requires a boolean template for the invariant and a set of predicates to put into the template holes
    ◮ e.g. [−] ∧ ∀k : [−] ⇒ [−]
◮ VS3_AX = VS3_PA + user-specified axioms over uninterpreted symbols

69 / 91

slide-73
SLIDE 73

Experimental case studies

Flowgraphs with initialization and finalization

Instead, loops ∗(T) in Expand are treated as ◦; ∗(T); ◦ to make things easier for the verification tools.

73 / 91

slide-74
SLIDE 74

Experimental case studies

Swapping of values

Example

◮ Fpre ≐ (x = c1) ∧ (y = c2)

◮ Fpost ≐ (x = c2) ∧ (y = c1)

◮ Rflow ≐ ◦

◮ Rcomp ≐ ∅

◮ Rstack ≐ ∅

Synthesizer generates various versions, including

Swap(int x, int y)
  x := x + y;
  y := x − y;
  x := x − y;

74 / 91
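The synthesized three-assignment swap transcribes directly to executable code; the final tuple satisfies Fpost given inputs satisfying Fpre, with no extra stack variables (Rstack = ∅):

```python
def swap(x, y):
    x = x + y
    y = x - y   # y now holds the original x
    x = x - y   # x now holds the original y
    return x, y
```

Starting from (x, y) = (c1, c2), the result is (c2, c1), as Fpost requires.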

slide-76
SLIDE 76

Experimental case studies

Integral square root

Example

◮ F = (x ≥ 1, (i − 1)² ≤ x < i²)

◮ Rflow ≐ ∗(◦) and Rcomp ≐ ∅

◮ Rstack ≐ {(int, 1)} + quadratic expressions in Dexp, Dgrd: sequential search

◮ Rstack ≐ {(int, 2)} + linear expressions in Dexp, Dgrd: sequential search

v := 1; i := 1;
while τ,ϕ (v ≤ x)
  v := v + 2i + 1;
  i++;
return i − 1;

◮ Rstack ≐ {(int, 3)} + quadratic + extra assumptions

◮ binary search (temporaries hold the search range)

76 / 91
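The synthesized sequential-search program above transcribes directly to Python; the asserts restate Fpre (x ≥ 1) and Fpost ((i − 1)² ≤ x < i²):

```python
def isqrt(x):
    assert x >= 1                 # Fpre
    v, i = 1, 1                   # invariant tau: v = i^2 ∧ x >= (i-1)^2 ∧ i >= 1
    while v <= x:                 # guard g0; ranking function phi = x - (i-1)^2
        v = v + 2 * i + 1
        i += 1
    assert (i - 1) ** 2 <= x < i ** 2   # Fpost
    return i - 1
```

The invariant v = i² is maintained because v + 2i + 1 = (i + 1)², so no multiplication appears in the loop body.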

slide-80
SLIDE 80

Experimental case studies

Non-recursive sorting

Example

◮ F = (true, ∀k : 0 ≤ k < n ⇒ A[k] ≤ A[k + 1])

◮ Dexp includes swapping of array elements, Rcomp allows swapping only, Rflow ≐ ∗(∗(◦))

◮ Rstack ≐ ∅: Bubble Sort and a non-standard version of Insertion Sort.

◮ Rstack ≐ {(int, 1)}: Selection Sort.

80 / 91
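A hedged sketch of one solution in this space: the template ∗(∗(◦)) is two nested loops whose acyclic body ◦ is restricted to a guarded swap of array elements (Rcomp allows swapping only), which with no extra stack variables admits Bubble Sort. This transcription is mine, not the tool's output:

```python
def bubble_sort(A):
    n = len(A)
    for i in range(n):              # outer loop of *(*(o))
        for j in range(n - 1):      # inner loop; the body o is a guarded swap
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
    return A
```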

slide-83
SLIDE 83

Experimental case studies

Recursive divide-and-conquer sorting

Example

◮ F = (true, ∀k : 0 ≤ k < n ⇒ A[k] ≤ A[k + 1])

◮ Dexp includes swapping and moving of array elements

◮ Flowgraph template includes the recursive call ⊛

◮ Rstack ≐ ∅, Rflow ≐ ⊛; ⊛; ◦: Merge Sort.

◮ Rstack ≐ {(int, 1)}, Rflow ≐ ◦; ⊛; ⊛: Quick Sort.

83 / 91
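A hedged sketch of the first shape: Merge Sort matches the template ⊛; ⊛; ◦ — two recursive calls followed by the non-recursive merge code — while Quick Sort puts the partitioning fragment before its two recursive calls (◦; ⊛; ⊛). The transcription below is mine, not the tool's output:

```python
def merge_sort(A):
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])      # first recursive call (first ⊛)
    right = merge_sort(A[mid:])     # second recursive call (second ⊛)
    out, i, j = [], 0, 0            # trailing merge fragment
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```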

slide-86
SLIDE 86

Experimental case studies

Dynamic programming

Example

◮ Fibonacci

◮ Longest Common Subsequence

◮ Path-finding

◮ Checkerboard (least-cost path on a rectangular grid)
◮ Single-Source Shortest Path
◮ All-Pairs Shortest Path

◮ Matrix Chain Multiply (minimizing the number of multiplications)

86 / 91
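The simplest benchmark above can be sketched as the iterative dynamic-programming Fibonacci that a synthesizer with two integer temporaries could target (my transcription, not the tool's output):

```python
def fib(n):
    a, b = 0, 1           # two stack temporaries carry the DP state
    for _ in range(n):
        a, b = b, a + b   # bottom-up update instead of naive recursion
    return a
```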

slide-87
SLIDE 87

Experimental case studies

Benchmarks

◮ Synthesis time 0.12–9658.52 seconds (median 14.23)

◮ Slowdown with respect to verification 1.09×–92.28× (median 6.68×)

Limitations not easily overcome

◮ Need to add extra assumptions to compensate for the incomplete VS3 QA (quadratic expression handling) and the inefficient VS3 AX.

◮ Need a set of candidate predicates for VS3 AX.

Scalability

◮ More efficient verifiers are needed.

Relevance

◮ Multiple solutions differ in performance and readability.

87 / 91

slide-91
SLIDE 91

E O F

91 / 91