

SLIDE 1

Complexity of Well-Quasi-Orderings and Well-Structured Transition Systems

Part IV: Complexity of WSTS Verification

Philippe Schnoebelen LSV, CNRS & ENS Cachan + Oxford 1-year visitor Oxford Dept. Comp. Sci, Mar. 9th, 2012

SLIDE 2

Part IV.a: Upper Bounds via the Length-Function Theorem

2/23

SLIDE 3

IF YOU MISSED PART III

L_{A,g}(n) := the length L of the longest controlled bad sequence x_0, x_1, ..., x_L over the WQO A (where "controlled" means |x_i| ≤ g^i(n)).

Length Function Theorem. If g is a smooth control function in F_γ and A is an exponential WQO such that o(A) < ω^{β+1}, then L_{A,g} is:
– in F_β if γ < ω ≤ β,
– in F_{γ+β} if γ ≥ 2 and β < ω.

In a nutshell: in F_m for N^m, in F_{ω^{m−1}} for Γ*_m, in F_{ω^{ω^m}} for (N^m)*, etc., where Ackermann's function sits at level F_ω.

(See [Schmitz & Schnoebelen, 2011] for all details.)

3/23

SLIDE 4

COUNTER MACHINES

Finite state control + finite number of “counters” (say m) + simple instructions and tests

[state diagram: locations ℓ0 → ℓ1 → ℓ2 → ℓ3 with instructions c1++, c2>0? c2--, c3=0?; counter values c1 = 1, c2 = 4, c3 = 0]

Operational semantics:
– Configurations: Conf := Loc × N^C = {s, t, ...}, e.g., s0 = (ℓ0, 1, 4, 0)
– Steps: (ℓ0,1,4,0) → (ℓ1,2,4,0) → (ℓ2,2,3,0) → (ℓ3,2,3,0) → ···

A well-known model, Turing-powerful as soon as there are 2 counters

4/23
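The reliable semantics is easy to prototype; the instruction encoding and location names below are invented for this sketch, which replays the example run from the slide:

```python
# A counter machine as a list of transitions (source, instruction, target);
# instructions: ("inc", i) for c_i++, ("dec", i) for "c_i>0? c_i--",
# ("zero", i) for the zero test "c_i=0?".
PROG = [
    ("l0", ("inc", 0), "l1"),    # c1++
    ("l1", ("dec", 1), "l2"),    # c2>0? c2--
    ("l2", ("zero", 2), "l3"),   # c3=0?
]

def successors(conf, prog=PROG):
    """Reliable steps from a configuration (location, counter vector)."""
    loc, cs = conf
    for src, (op, i), tgt in prog:
        if src != loc:
            continue
        if op == "inc":
            yield (tgt, cs[:i] + (cs[i] + 1,) + cs[i + 1:])
        elif op == "dec" and cs[i] > 0:
            yield (tgt, cs[:i] + (cs[i] - 1,) + cs[i + 1:])
        elif op == "zero" and cs[i] == 0:
            yield (tgt, cs)

# Replay: (l0,1,4,0) -> (l1,2,4,0) -> (l2,2,3,0) -> (l3,2,3,0)
run = [("l0", (1, 4, 0))]
while True:
    nxt = next(successors(run[-1]), None)
    if nxt is None:
        break
    run.append(nxt)
```

This toy machine is deterministic, so following the single enabled transition at each step reproduces the step sequence above.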

SLIDE 6

LCM = LOSSY COUNTER MACHINES

LCM = counter machines with unreliability: "counters may decrease nondeterministically". A (weaker) computational model, useful e.g. for logics like XPath or LTL+data.

  • Semantics. Reliable steps: s →rel t as above. Lossy steps:
    s → t  :⇔  s ⩾ s′ →rel t′ ⩾ t for some s′ and t′,
    where (ℓ, a1, ..., am) = s ⩾ s′ = (ℓ′, b1, ..., bm)  :⇔  ℓ = ℓ′ ∧ a1 ≥ b1 ∧ ... ∧ am ≥ bm.
    I.e., (Conf, ⩽) = (Loc, Id) × (N, ≤) × ··· × (N, ≤), hence ⩽ is a WQO.

  • Prop. [Monotony] s →+ t implies s′ →+ t′ for all s′ ⩾ s and t′ ⩽ t
5/23
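The lossy step relation can be prototyped directly from its definition, closing a reliable step under the ordering on both sides; the toy program and helper names are invented:

```python
from itertools import product

# Toy transitions: at l0, "c1>0? c1--" (stay) and "c2++" (move to l1).
PROG = [("l0", ("dec", 0), "l0"), ("l0", ("inc", 1), "l1")]

def reliable(conf, prog=PROG):
    loc, cs = conf
    for src, (op, i), tgt in prog:
        if src == loc and op == "inc":
            yield (tgt, cs[:i] + (cs[i] + 1,) + cs[i + 1:])
        if src == loc and op == "dec" and cs[i] > 0:
            yield (tgt, cs[:i] + (cs[i] - 1,) + cs[i + 1:])

def leq(s, t):
    """(l,a1,...,am) <= (l',b1,...,bm) iff l = l' and ai <= bi for all i."""
    return s[0] == t[0] and all(a <= b for a, b in zip(s[1], t[1]))

def below(conf):
    """All configurations <= conf (losses only decrease counters)."""
    loc, cs = conf
    for ds in product(*(range(c + 1) for c in cs)):
        yield (loc, ds)

def lossy(conf):
    """s -> t iff s >= s' ->rel t' >= t for some s', t'."""
    return {t for s2 in below(conf) for t2 in reliable(s2) for t in below(t2)}
```

Monotony is visible here: enlarging the source configuration can only enlarge the set of lossy successors.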

SLIDE 9

DECIDING TERMINATION FOR LCM’S

(Non-)Termination. There is an infinite run s_init = s0 → s1 → s2 ··· iff there is a loop s_init = s0 → ··· → sk → ··· → sn = sk. Hence termination is co-r.e. for LCMs.

  • Furthermore. There is a loop from s_init iff there is a loop that is a bad sequence (up to s_{n−1}).

  • Proof. Assume a length-n loop has an increasing pair s_i ⩽ s_j for i < j < n. Then we obtain a shorter loop by replacing s_{j−1} → s_j with s_{j−1} → s′_j = s_i. Thus the shortest loop has no increasing pair.

  • Furthermore. Since necessarily s → t implies |t| ≤ |s| + 1, any run is Succ-controlled. Hence n ≤ L_{A,Succ}(|s_init|) for A = Loc × N^m (with m = |C|).

  • Cor. Termination of LCMs can be decided with complexity in F_ω, and in F_m when we fix |C| = m

6/23
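This argument is effectively an algorithm: explore lossy runs depth-first; if the current configuration dominates an earlier one, a loop exists (by monotony the segment between them can be repeated forever); otherwise every explored run is a bad sequence, so the search terminates. A toy sketch, with invented machines and names:

```python
from itertools import product

def leq(s, t):
    return s[0] == t[0] and all(a <= b for a, b in zip(s[1], t[1]))

def lossy_succs(prog):
    """Lossy successor function: s -> t iff s >= s' ->rel t' >= t."""
    def below(conf):
        loc, cs = conf
        return ((loc, ds) for ds in product(*(range(c + 1) for c in cs)))
    def reliable(conf):
        loc, cs = conf
        for src, (op, i), tgt in prog:
            if src == loc and op == "inc":
                yield (tgt, cs[:i] + (cs[i] + 1,) + cs[i + 1:])
            if src == loc and op == "dec" and cs[i] > 0:
                yield (tgt, cs[:i] + (cs[i] - 1,) + cs[i + 1:])
    def succs(conf):
        return {t for s2 in below(conf) for t2 in reliable(s2) for t in below(t2)}
    return succs

def may_diverge(init, succs):
    def explore(conf, path):
        if any(leq(p, conf) for p in path):
            return True          # increasing pair: the loop can run forever
        return any(explore(t, path + [conf]) for t in succs(conf))
    return explore(init, [])     # terminates: every path is a bad sequence

dec_loop = [("l0", ("dec", 0), "l0")]   # c>0? c--  : always terminates
inc_loop = [("l0", ("inc", 0), "l0")]   # c++       : diverges, even lossily
```

The Length Function Theorem is exactly what bounds the depth of this search, hence its F_ω complexity.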

SLIDE 13

DECIDING REACHABILITY FOR LCM’S

Same ideas work for reachability: "is there a run from s_init to s_goal?"

  • Proof. If a run s_init = s0 → s1 → ··· → sn = s_goal has a decreasing pair s_i ⩾ s_j for 0 < i < j, it can be shortened to s0 → ··· → s_{i−1} → s_j → ··· → sn.

  • Cor. If s_goal can be reached from s_init, this can be achieved via a run that is a (reversed) bad sequence.

  • But. How is the reversed run g-controlled, and for which g?

  • Prop. In the smallest run, |s_i| ≤ |s_{i+1}| + 1 for all 0 < i < n.

  • Cor. Reachability in LCMs can be decided with complexity in F_ω, or F_m (same as Termination).

  • NB. The generic technique extends to other problems/models

7/23
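A sketch of the resulting decision procedure: search only runs with no decreasing pair, and cut the search at the length bound that the Length Function Theorem guarantees for a minimal witness (the bound is passed in explicitly here; the one-counter toy machine and all names are invented):

```python
def leq(s, t):
    return s[0] == t[0] and all(a <= b for a, b in zip(s[1], t[1]))

def lossy_succs(conf):
    """Lossy successors of a toy machine: at l0, a single c++ loop."""
    loc, (c,) = conf
    if loc != "l0":
        return set()
    out = set()
    for c2 in range(c + 1):                               # lose before the step
        t2 = c2 + 1                                       # reliable step: c++
        out.update(("l0", (d,)) for d in range(t2 + 1))   # lose after
    return out

def reachable(init, goal, succs, max_len):
    """Search for a run init ->* goal without a decreasing pair s_i >= s_j
    (i < j): a minimal witness has this shape, and its length is bounded
    via the Length Function Theorem (here the parameter max_len)."""
    def explore(conf, path):
        if conf == goal:
            return True
        if len(path) >= max_len or any(leq(conf, p) for p in path):
            return False       # too long, or a shorter witness would exist
        return any(explore(t, path + [conf]) for t in succs(conf))
    return explore(init, [])
```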

SLIDE 18

LCS = LOSSY CHANNEL SYSTEMS

A configuration σ = (ℓ1, ℓ2, w1, w2) with ℓi ∈ Loc_i and wi ∈ Σ*, e.g., w1 = hup.ack.ack.
Reliable steps read at the front of channels and write at the end (FIFO).
Lossy steps: messages may be lost nondeterministically:
σ → σ′  :⇔  σ ⊒ ρ →rel ρ′ ⊒ σ′ for some ρ, ρ′,
where (Conf, ⊑) is the WQO Loc1 × Loc2 × (Σ*)^C (subword embedding on channel contents).
A model useful for concurrent protocols, but also timed automata, metric temporal logic, products of modal logics, ...

8/23

SLIDE 21

LCS VERIFICATION (IN A NUTSHELL)

Termination and Reachability are decidable for LCSs (as for LCMs):
— Non-termination can be witnessed by a loop that is a bad sequence, and reachability can be witnessed by a run that is a reversed bad sequence.
— These bad sequences are Succ-controlled.
— Hence upper bounds via the Length Function Theorem: LCS verification is in F_{ω^ω}, and in F_{ω^{m−1}} when we fix |Σ| = m.

  • NB. Here the main parameter for complexity is the size of the message alphabet Σ
  • NB. The generic technique extends to other problems/models

9/23

SLIDE 23

Part IV.b: Lower Bounds via Simulation of Fast-Growing Functions

10/23

SLIDE 24

PROBLEM STATEMENT

We have upper bounds on the complexity of verification for lossy counter machines and lossy channel systems. Do we have matching lower bounds?
— Yes for the simple-minded algorithms we presented (see Part II).
— No for the underlying decision problems (witness: Petri nets).

Reduction strategy for proving lower bounds in lossy systems:

  • 1. Compute fast-growing functions unreliably: the Hardy hierarchy
  • 2. Use this as an unreliable computational resource
  • 3. "Check" at the end that nothing was lost
  • 4. This also requires computing, unreliably, the inverses of the fast-growing functions

11/23

SLIDE 27

FAST-GROWING VS. HARDY HIERARCHY

F_0(n) := n + 1                          H_0(n) := n
F_{α+1}(n) := F_α^{n+1}(n) = F_α(F_α(... F_α(n) ...))   (n+1 times)
                                         H_{α+1}(n) := H_α(n + 1)
F_λ(n) := F_{λ_n}(n)                     H_λ(n) := H_{λ_n}(n)   (λ a limit ordinal)

  • Prop. H_{ω^α}(n) = F_α(n) for all α and n

  • NB. H_α(n) can be evaluated by transforming a pair:
    ⟨α, n⟩ = ⟨α_0, n_0⟩ →_H ⟨α_1, n_1⟩ →_H ⟨α_2, n_2⟩ →_H ··· →_H ⟨α_k, n_k⟩
    with α_0 > α_1 > α_2 > ··· until eventually α_k = 0 and n_k = H_α(n)   % tail-recursion!!

Below we compute fast-growing functions and their inverses by encoding ⟨α, n⟩ →_H ⟨α′, n′⟩ and ⟨α′, n′⟩ →_H^{-1} ⟨α, n⟩.

12/23
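The pair transformation is directly executable for α < ω^ω when α is represented by its CNF coefficient vector [a_m, ..., a_0] (the encoding reused for the LCM construction on the following slides); the function names are mine. The last assertion checks the Prop H_{ω^k}(n) = F_k(n) on small values:

```python
def hardy(alpha, n):
    """Evaluate H_alpha(n), alpha < omega^omega given as its CNF coefficient
    vector [a_m, ..., a_0], i.e. omega^m.a_m + ... + omega^0.a_0, via the
    tail-recursive pair transformation <alpha_i, n_i> ->_H <alpha_i+1, n_i+1>."""
    a = list(alpha)
    while any(a):
        if a[-1] > 0:            # successor case: H_{b+1}(n) = H_b(n+1)
            a[-1] -= 1
            n += 1
        else:                    # limit case: H_l(n) = H_{l_n}(n)
            k = max(i for i, c in enumerate(a) if c > 0)
            a[k] -= 1
            a[k + 1] = n + 1     # (g + w^{j+1})_n = g + w^j.(n+1)
    return n                     # alpha strictly decreases: termination

def F(k, n):
    """Fast-growing F_k for finite k: F_0(n) = n+1, F_{k+1}(n) = F_k^{n+1}(n)."""
    if k == 0:
        return n + 1
    for _ in range(n + 1):
        n = F(k - 1, n)
    return n
```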

SLIDE 30

LCM WEAKLY COMPUTING →_H FOR α < ω^ω

Write α in CNF with coefficients: α = ω^m·a_m + ω^{m−1}·a_{m−1} + ··· + ω^0·a_0. The encoding of α is [a_m, ..., a_0] ∈ N^{m+1}.

⟨[a_m, ..., a_0 + 1], n⟩ →_H ⟨[a_m, ..., a_0], n + 1⟩                        % H_{α+1}(n) = H_α(n+1)
⟨[a_m, ..., a_k + 1, 0, 0, ..., 0], n⟩ →_H ⟨[a_m, ..., a_k, n + 1, 0, ..., 0], n⟩   % H_λ(n) = H_{λ_n}(n)

Recall (γ + ω^{k+1})_n = γ + ω^k · (n + 1).

13/23

SLIDE 33

LCM WEAKLY COMPUTING →_H^{-1} FOR α < ω^ω

⟨[a_m, ..., a_0], n + 1⟩ →_H^{-1} ⟨[a_m, ..., a_0 + 1], n⟩                        % H_{α+1}(n) = H_α(n+1)
⟨[a_m, ..., a_k, n + 1, ..., 0], n⟩ →_H^{-1} ⟨[a_m, ..., a_k + 1, 0, ..., 0], n⟩   % H_λ(n) = H_{λ_n}(n)

  • Prop. [Robustness] a ⩽ a′ and n ⩽ n′ imply H_{[a]}(n) ⩽ H_{[a′]}(n′)

14/23
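A minimal sketch of the forward rules and their inverses on coefficient vectors (function names are mine). In the actual LCM, the choice of which inverse rule to fire is a nondeterministic guess whose correctness is only checked at the very end; here we simply verify that each inverse undoes the matching forward step:

```python
def h_step(a, n):
    """One forward step <alpha, n> ->_H <alpha', n'> on a CNF vector [a_m..a_0]."""
    a = list(a)
    if a[-1] > 0:                # H_{b+1}(n) = H_b(n+1)
        a[-1] -= 1
        return a, n + 1
    k = max(i for i, c in enumerate(a) if c > 0)
    a[k] -= 1                    # H_l(n) = H_{l_n}(n)
    a[k + 1] = n + 1
    return a, n

def succ_inv(a, n):
    """[a_m,...,a_0], n+1  ->_H^-1  [a_m,...,a_0 + 1], n"""
    a = list(a)
    a[-1] += 1
    return a, n - 1

def limit_inv(a, n):
    """[a_m,...,a_k, n+1, 0,...,0], n  ->_H^-1  [a_m,...,a_k + 1, 0,...,0], n"""
    a = list(a)
    j = max(i for i, c in enumerate(a) if c > 0)
    assert a[j] == n + 1 and j >= 1     # the rule's pattern must match
    a[j] = 0
    a[j - 1] += 1
    return a, n
```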

SLIDE 34

COUNTER MACHINES ON A BUDGET

From M, build M^b by adding a "budget" counter B: every increment of a counter of M is paid for by decrementing B, and every decrement refunds one unit to B, so B + |a| (with |a| the sum of M's counters) is preserved by reliable steps. This ensures:

  • 1. M^b ⊢ (ℓ,B,a) →*rel (ℓ′,B′,a′) implies B + |a| = B′ + |a′|

  • 2. M^b ⊢ (ℓ,B,a) →*rel (ℓ′,B′,a′) implies M ⊢ (ℓ,a) →*rel (ℓ′,a′)

  • 3. If M ⊢ (ℓ,a) →*rel (ℓ′,a′) then ∃B,B′: M^b ⊢ (ℓ,B,a) →*rel (ℓ′,B′,a′)

  • 4. If M^b ⊢ (ℓ,B,a) →* (ℓ′,B′,a′) (lossy) then M^b ⊢ (ℓ,B,a) →*rel (ℓ′,B′,a′) iff B + |a| = B′ + |a′|

15/23
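The invariant behind the construction can be checked on a toy sketch: each increment pays one unit from a budget counter B (stored last) and each decrement refunds one, so reliable steps preserve B + |a|. In the real M^b the pairing is done with intermediate locations; the encoding and names here are invented:

```python
def reliable_budget(conf, prog):
    """Steps of the budgeted machine: each c_i++ also does a guarded B--,
    and each guarded c_i-- also does B++; B is the last counter."""
    loc, cs = conf
    for src, (op, i), tgt in prog:
        if src != loc:
            continue
        new = list(cs)
        if op == "inc" and cs[-1] > 0:         # pay for the increment
            new[i] += 1
            new[-1] -= 1
            yield (tgt, tuple(new))
        elif op == "dec" and cs[i] > 0:        # refund on decrement
            new[i] -= 1
            new[-1] += 1
            yield (tgt, tuple(new))

PROG = [("l0", ("inc", 0), "l0"), ("l0", ("dec", 0), "l0")]
start = ("l0", (2, 5))                         # c = 2, budget B = 5
seen, frontier = {start}, [start]
while frontier:
    for t in reliable_budget(frontier.pop(), PROG):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
```

Every reachable configuration satisfies B + |a| = 7, and the budget caps the counters: only the 8 configurations with c + B = 7 are reachable.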

SLIDE 36

M(m): WRAPPING IT UP

  • Prop. M(m) has a lossy run (ℓ_H, a_m = 1, 0, ..., n = m, 0, ...) →* (ℓ_{H−1}, a_m = 1, 0, ..., n = m, 0, ...)
    iff M(m) has a reliable run (ℓ_H, a_m = 1, 0, ..., n = m, 0, ...) →*rel (ℓ_{H−1}, a_m = 1, 0, ..., n = m, 0, ...)
    iff M has a reliable run from ℓ_ini to ℓ_fin that is bounded by H_{ω^m}(m), i.e., by Ackermann(m).

  • Cor. LCM verification is Ackermann-complete

16/23

SLIDE 39

ENCODING ORDINALS < ω^{ω^ω} IN CHANNELS

We use Σ = {a0, ..., am} ∪ {⋆} (with ⋆ a separator symbol) to encode ordinals α < ω^{ω^{m+1}}. Two-level "differential" encoding:

β : {a0, ..., am}* → ω^{m+1}
β(a_{r1} ... a_{rk}) := ω^{r1} + ··· + ω^{rk}
E.g. β(ε) = 0, β(a3 a0 a0) = ω^3 + 2.

α : Σ* → ω^{ω^{m+1}}
α(u1 ⋆ u2 ⋆ ... ul ⋆) := ω^{β(u1 u2 ... ul)} + ··· + ω^{β(u1 u2)} + ω^{β(u1)}
E.g. α(⋆⋆⋆) = ω^0 + ω^0 + ω^0 = 3, and α(a1 a0 ⋆ ⋆ a1 ⋆) = ω^{ω·2} + ω^{ω+1}·2.

The point of this encoding: w ⊑ w′ implies α(w) ⩽ α(w′).
Difficulty: the sum defining α(w) is not always a CNF.

17/23
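Both levels of the encoding can be prototyped with CNF ordinals represented as non-increasing lists of exponents; for such lists, Python's lexicographic list comparison agrees with the ordinal order, which is all the addition rule needs. The separator is written '*' here, and the helper names are mine:

```python
def oadd(cnf, e):
    """cnf + omega^e on CNF exponent lists: adding omega^e on the right
    absorbs all strictly smaller trailing terms."""
    return [x for x in cnf if x >= e] + [e]

def beta(letters):
    """beta(a_r1 ... a_rk) = omega^r1 + ... + omega^rk (letters as indices)."""
    cnf = []
    for r in letters:
        cnf = oadd(cnf, r)
    return cnf

def alpha(word):
    """word mixes letter indices and '*' separators; each '*' contributes
    omega^beta(letter-prefix), summed from the last separator to the first."""
    expos, pref = [], []
    for sym in word:
        if sym == "*":
            expos.append(list(pref))
        else:
            pref = oadd(pref, sym)
    cnf = []
    for e in reversed(expos):     # largest-prefix term first
        cnf = oadd(cnf, e)
    return cnf
```

E.g. beta([3, 0, 0]) is ω^3 + 2 and alpha of a1 a0 ⋆ ⋆ a1 ⋆ comes out as ω^{ω·2} + ω^{ω+1}·2, matching the slide; the normalization inside oadd is what deals with the "not always a CNF" difficulty.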

SLIDE 43

WEAKLY COMPUTING →_H WITH LCSs

(⋆ w, n) →_H (w, n + 1)                            % H_{α+1}(n) = H_α(n+1)
(u a0 ⋆ w, n) →_H (u ⋆^{n+1} a0 w, n)              % H_{γ+ω}(n) = H_{γ+n+1}(n)
(u a_{r+1} ⋆ w, n) →_H (u a_r^{n+1} ⋆ a_r w, n)    % H_{γ+ω^{k+1}}(n) = H_{γ+ω^k·(n+1)}(n)
(··· similar rules for →_H^{-1} ···)

  • Prop. [Robustness] w ⊑ w′ and n ⩽ n′ and w′ pure imply H_{α(w)}(n) ⩽ H_{α(w′)}(n′), where purity means that w′ has no superfluous symbols (a regular condition that can be enforced by LCSs)

18/23

SLIDE 45

COMPUTING →_H WITH LCSs: FIRST RULE

We now store u and n as two strings (with endmarker #) on two channels p and d:

p : ⋆u#        →*        p : u#
d : ⋆^n#                 d : ⋆^{n+1}#

19/23

SLIDE 46

COMPUTING →_H WITH LCSs: SECOND RULE

p : a_{i1} ... a_{ip} a0 ⋆ u#   →*   p : a_{i1} ... a_{ip} ⋆^{n+1} a0 u#
d : ⋆^n#                              d : ⋆^n#

20/23

SLIDE 47

WRAPPING IT UP (SKETCHILY)

As we did for lossy counters, this time with channels. Bottom line: an LCS with |Σ| = m + 3
— can build a workspace of size H_{ω^{ω^{m+1}}}(n),
— use this as a computational resource,
— and fold the workspace back by computing the inverse of H.
Checking that the above computation is performed reliably can be stated as (reduces to) a reachability (or termination) question. (See [Chambart & Schnoebelen, 2008] for the complete construction.)

  • Cor. LCS verification is complete for F_{ω^ω}

This confirms: the main parameter for complexity is the size of the message alphabet.

21/23

SLIDE 50

CONCLUSION

The length of bad sequences is key to bounding the complexity of WQO-based algorithms. Here verification people have a lot to learn from proof theory and combinatorics.
Proving matching lower bounds is not necessarily tricky (it is easy for LCMs or LCSs), but we still lack:
— a collection of hard problems: Post Embedding Problem, ...
— a tutorial/textbook on subrecursive hierarchies (like the fast-growing and Hardy hierarchies),
— a toolkit of coding tricks and lemmas for ordinals.
The approach seems workable: recently we could characterize the complexity of Timed-Arc Petri Nets and Data Petri Nets at F_{ω^{ω^ω}}.

22/23

SLIDE 53

BIBLIOGRAPHICAL POINTERS

  • Finkel & S., Theor. Comp. Sci. 2001: well-structured transition systems
  • Baier, Bertrand & S., LPAR 2006: more on well-structured transition systems (games, probabilities, ...)
  • Schmitz & S., ICALP 2011: compositional length of bad sequences
  • S., MFCS 2010: hardness for LCM's and related models
  • Chambart & S., LICS 2008: hardness for LCS's
  • S., RP 2010: decidability for LCM's
  • Chambart & S., ICALP 2010: Post Embedding Problem

23/23