Complexity of Well-Quasi-Orderings and Well-Structured Transition Systems
Part IV: Complexity of WSTS Verification
Philippe Schnoebelen, LSV, CNRS & ENS Cachan (1-year visitor at Oxford). Oxford Dept. of Comp. Sci., Mar. 9th, 2012
Length functions for controlled bad sequences:

L_{A,g}(n) def= length of the longest controlled bad sequence x_0, x_1, ..., x_L over the WQO A
(where "controlled" def⇔ |x_i| ≤ g^i(n))

Length Function Theorem. If g is a smooth control function in F_γ and A is an exponential WQO such that o(A) < ω^{β+1}, then L_{A,g} is:
– in F_β if γ < ω ≤ β,
– in F_{γ+β} if γ ≥ 2 and β < ω.

In a nutshell: in F_m for ℕ^m, in F_{ω^{m−1}} for Γ*_m, in F_{ω^{ω^m}} for (ℕ^m)*, etc., where Ackermann's function is in F_ω.
(See [Schmitz & Schnoebelen, 2011] for all details.)
Finite-state control + a finite number of "counters" (say m) + simple instructions and tests.

[Diagram: locations ℓ0, ℓ1, ℓ2, ℓ3 with transitions c1++, then c2>0? c2--, then c3=0?; counter values c1 = 1, c2 = 4, c3 = 0]

Operational semantics:
– Configurations: Conf def= Loc × ℕ^C = {s, t, ...}, e.g., s0 = (ℓ0, 1, 4, 0)
– Steps: (ℓ0,1,4,0) → (ℓ1,2,4,0) → (ℓ2,2,3,0) → (ℓ3,2,3,0) → ⋯

A well-known model, Turing-powerful as soon as there are 2 counters.
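To make the step relation concrete, here is a minimal Python sketch of the reliable semantics; the instruction encoding and all names (`step`, `prog`, the op tags) are our own, not from the slides:

```python
# Minimal Minsky-style counter machine: locations + m counters, with
# increment, guarded decrement, and zero-test instructions.
def step(config, program):
    """Return all configurations reachable in one reliable step."""
    loc, counters = config
    succs = []
    for (src, op, arg, dst) in program:
        if src != loc:
            continue
        c = list(counters)
        if op == "inc":                       # c++
            c[arg] += 1
            succs.append((dst, tuple(c)))
        elif op == "dec" and c[arg] > 0:      # "c>0? c--" combined guard+decrement
            c[arg] -= 1
            succs.append((dst, tuple(c)))
        elif op == "zero" and c[arg] == 0:    # "c=0?" test
            succs.append((dst, tuple(c)))
    return succs

# The run from the slide: l0 --c1++--> l1 --c2>0? c2--,--> l2 --c3=0?--> l3
prog = [("l0", "inc", 0, "l1"), ("l1", "dec", 1, "l2"), ("l2", "zero", 2, "l3")]
s0 = ("l0", (1, 4, 0))
```

Chaining `step` from `s0` reproduces the example run (ℓ0,1,4,0) → (ℓ1,2,4,0) → (ℓ2,2,3,0) → (ℓ3,2,3,0).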
LCM = counter machines with unreliability: "counters decrease nondeterministically". A (weaker) computational model useful, e.g., for logics like XPath or LTL+data.

Reliable steps: s →rel t as above. Lossy steps:
s → t def⇔ s ⊒ s′ →rel t′ ⊒ t for some s′ and t′
where s = (ℓ, a1, ..., am) ⊑ (ℓ′, b1, ..., bm) = s′ def⇔ ℓ = ℓ′ ∧ a1 ≤ b1 ∧ ⋯ ∧ am ≤ bm.
I.e., (Conf, ⊑) = (Loc, Id) × (ℕ, ≤) × ⋯ × (ℕ, ≤), hence ⊑ is a WQO.

Note: s → t implies s′ → t′ for all s′ ⊒ s and t′ ⊑ t.
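A sketch of this lossy semantics in Python (our own illustrative helpers; losses are simulated by enumerating all smaller counter vectors, which only works for small examples):

```python
from itertools import product

def leq(s, t):
    """(l, a1..am) <= (l', b1..bm) iff same location and ai <= bi for all i."""
    (l, a), (lp, b) = s, t
    return l == lp and all(x <= y for x, y in zip(a, b))

def lossy_succs(s, rel_succs):
    """s -> t  iff  s >= s' ->rel t' >= t for some s', t'."""
    loc, cs = s
    out = set()
    for smaller in product(*(range(c + 1) for c in cs)):   # losses before the step
        for (lp, cp) in rel_succs((loc, smaller)):
            for t in product(*(range(c + 1) for c in cp)): # losses after the step
                out.add((lp, t))
    return out

# A single "c1++" instruction from l0 to l1, as reliable semantics:
rel = lambda s: [("l1", (s[1][0] + 1, s[1][1]))] if s[0] == "l0" else []
```

From (ℓ0, 1, 2) one lossy step reaches the reliable successor (ℓ1, 2, 2) and everything below it, illustrating the compatibility note above.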
(Non-)Termination. There is an infinite run sinit = s0 → s1 → s2 → ⋯ iff there is a loop sinit = s0 → ⋯ → sk → ⋯ → sn = sk. Hence termination is co-r.e. for LCM's.

The shortest loop is a bad sequence (until s_{n−1}): suppose si ⊑ sj for some i < j < n. Then we obtain a shorter loop by replacing sj−1 → sj with sj−1 → s′_j = si (allowed, since si ⊑ sj and steps are lossy). Thus the shortest loop has no increasing pair.

Since s → t implies |t| ≤ |s| + 1, any run is Succ-controlled. Hence n ≤ L_{A,Succ}(|sinit|) for A ≡ Loc × ℕ^{|C|} ≡ ℕ^m × |Loc|.
⇒ Termination for LCM's is in F_ω, and in F_m when we fix |C| = m.
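The key check in this argument, finding an increasing pair si ⊑ sj, can be sketched as follows (illustrative helper names; configurations abstracted to counter vectors under the componentwise ordering):

```python
def leq_vec(a, b):
    """Componentwise ordering on N^m (the WQO underlying each location)."""
    return all(x <= y for x, y in zip(a, b))

def find_increasing_pair(seq, leq=leq_vec):
    """Return (i, j) with i < j and seq[i] <= seq[j], or None if seq is bad."""
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if leq(seq[i], seq[j]):
                return (i, j)
    return None

def is_bad(seq, leq=leq_vec):
    return find_increasing_pair(seq, leq) is None
```

Whenever `find_increasing_pair` succeeds on a loop, the loop can be shortened; so the shortest loop satisfies `is_bad`, and its length is bounded by the Length Function Theorem.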
Same ideas work for reachability: "is there a run from sinit to sgoal?"

If a run sinit = s0 → s1 → ⋯ → sn = sgoal has a decreasing pair si ⊒ sj for 0 < i < j, it can be shortened as s0 → ⋯ → si−1 → sj → ⋯ → sn. Hence the shortest witness run has no decreasing pair, that is, it is a (reversed) bad sequence.

⇒ Reachability for LCM's is in F_m when we fix |C| = m (same as Termination).
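The shortening step itself can be sketched as follows (illustrative; again with configurations abstracted to counter vectors):

```python
def geq_vec(a, b):
    return all(x >= y for x, y in zip(a, b))

def shorten(run, geq=geq_vec):
    """Splice out run[i..j-1] whenever run[i] >= run[j] (0 < i < j):
    lossiness lets the step run[i-1] -> run[i] be replaced by run[i-1] -> run[j]."""
    changed = True
    while changed:
        changed = False
        for i in range(1, len(run)):
            for j in range(i + 1, len(run)):
                if geq(run[i], run[j]):
                    run = run[:i] + run[j:]
                    changed = True
                    break
            if changed:
                break
    return run
```

The fixpoint of `shorten` has no decreasing pair after position 0, i.e., it is a reversed bad sequence, so its length is again bounded by the Length Function Theorem.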
LCS = lossy channel systems. A configuration is σ = (ℓ1, ℓ2, w1, w2) with ℓi ∈ Loci and wi ∈ Σ*, e.g., w1 = hup.ack.ack.
Reliable steps read at the front of channels and write at the end (FIFO). Lossy steps: messages may be lost nondeterministically:
σ → σ′ def⇔ σ ⊒ ρ →rel ρ′ ⊒ σ′ for some ρ, ρ′
where (Conf, ⊑) is the WQO Loc1 × Loc2 × (Σ*)^C, with channel contents compared by subword embedding.
A model useful for concurrent protocols, but also timed automata, metric temporal logic, products of modal logics, ...
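The subword (Higman) ordering underlying (Σ*, ⊑) can be checked with a standard greedy scan (an idiom of ours, not from the slides):

```python
def subword(w, v):
    """w <= v iff w embeds into v as a scattered subword (Higman's ordering)."""
    it = iter(v)
    return all(ch in it for ch in w)  # 'in' advances the iterator greedily

def conf_leq(c, d):
    """Ordering on LCS configurations (l1, l2, w1, w2): equal locations,
    channel contents compared by subword embedding."""
    return (c[0] == d[0] and c[1] == d[1]
            and subword(c[2], d[2]) and subword(c[3], d[3]))
```

Greedy matching is sound here because taking the leftmost possible occurrence of each letter never blocks a later match.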
Termination and Reachability are decidable for LCS's (as for LCM's).
— Non-termination can be witnessed by a loop that is a bad sequence, and reachability by a run that is a reversed bad sequence.
— These bad sequences are Succ-controlled.
— Hence upper bounds follow from the Length Function Theorem:
⇒ LCS verification is in F_{ω^ω}, and in F_{ω^{m−1}} when we fix the message alphabet |Σ| = m.
We have upper bounds on the complexity of verification for lossy counter machines and lossy channel systems. Do we have matching lower bounds? Yes for the simple-minded algorithms we presented (see Part II). No for the underlying decision problems (witness: Petri nets). Reduction strategy for proving lower bounds in lossy systems:
The fast-growing (F) and Hardy (H) hierarchies:

F_0(n) def= n + 1        H_0(n) def= n
F_{α+1}(n) def= F_α^{n+1}(n) = F_α(F_α(⋯ F_α(n) ⋯))  (n+1 times)        H_{α+1}(n) def= H_α(n + 1)
F_λ(n) def= F_{λ_n}(n)        H_λ(n) def= H_{λ_n}(n)        (λ a limit ordinal)

Hardy computations as rewriting:
⟨α, n⟩ = ⟨α0, n0⟩ →_H ⟨α1, n1⟩ →_H ⟨α2, n2⟩ →_H ⋯ →_H ⟨αk, nk⟩
with α0 > α1 > α2 > ⋯ until eventually αk = 0, and then nk = H_α(n). % tail-recursion!!

Below we compute fast-growing functions and their inverses by encoding ⟨α, n⟩ →_H ⟨α′, n′⟩ and ⟨α′, n′⟩ →_H⁻¹ ⟨α, n⟩.
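At finite levels the F hierarchy is easy to compute directly; a Python sketch (only feasible for tiny arguments, since F_3 already grows as a tower of exponentials):

```python
def F(k, n):
    """Fast-growing hierarchy at finite levels:
    F_0(n) = n + 1,  F_{k+1}(n) = F_k^{n+1}(n)  (n+1-fold iteration)."""
    if k == 0:
        return n + 1
    for _ in range(n + 1):
        n = F(k - 1, n)
    return n
```

One can check that F_1(n) = 2n + 1 and F_2(n) = 2^{n+1}(n + 1) − 1, matching the closed forms; diagonalizing over k gives Ackermann-like growth at level ω.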
Computing H with counters

Write α in CNF with coefficients: α = ω^m·a_m + ω^{m−1}·a_{m−1} + ⋯ + ω^0·a_0. The encoding of α is [a_m, ..., a_0] ∈ ℕ^{m+1}.

⟨[a_m, ..., a_0 + 1], n⟩ →_H ⟨[a_m, ..., a_0], n + 1⟩        % H_{α+1}(n) = H_α(n+1)
⟨[a_m, ..., a_k + 1, 0, 0, ..., 0], n⟩ →_H ⟨[a_m, ..., a_k, n + 1, 0, ..., 0], n⟩        % H_λ(n) = H_{λ_n}(n)

Recall (γ + ω^{k+1})_n = γ + ω^k · (n + 1).
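The two rewrite rules on the vector encoding [a_m, ..., a_0] can be sketched in Python; iterating them until α = 0 is exactly the tail-recursive Hardy computation of the previous slide:

```python
def hardy_step(a, n):
    """One rewrite <[a_m..a_0], n> ->_H <alpha', n'> (requires some a_i > 0)."""
    a = list(a)
    if a[-1] > 0:                  # successor rule: H_{a+1}(n) = H_a(n+1)
        a[-1] -= 1
        return a, n + 1
    i = max(j for j, c in enumerate(a) if c > 0)   # rightmost nonzero coeff
    a[i] -= 1                      # limit rule, using
    a[i + 1] = n + 1               # (g + w^{k+1})_n = g + w^k * (n+1)
    return a, n

def hardy(a, n):
    """H_alpha(n) for alpha = w^m*a_m + ... + a_0. Grows enormously!"""
    while any(a):
        a, n = hardy_step(a, n)
    return n
```

For instance H_ω(3) = H_4(3) = 7 and H_{ω²}(2) = 23, both reachable by a short chain of rewrites.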
Computing H⁻¹ with counters

⟨[a_m, ..., a_0], n + 1⟩ →_H⁻¹ ⟨[a_m, ..., a_0 + 1], n⟩        % H_{α+1}(n) = H_α(n+1)
⟨[a_m, ..., a_k, n + 1, 0, ..., 0], n⟩ →_H⁻¹ ⟨[a_m, ..., a_k + 1, 0, ..., 0], n⟩        % H_λ(n) = H_{λ_n}(n)
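The inverse rules as code (illustrative; applied blindly →_H⁻¹ is nondeterministic, so each rule carries its own applicability guard):

```python
def inv_succ(a, n):
    """<[.., a_0], n+1> ->_H^{-1} <[.., a_0 + 1], n>: undoes a successor step."""
    assert n > 0
    a = list(a)
    a[-1] += 1
    return a, n - 1

def inv_limit(a, n):
    """<[.., a_k, n+1, 0..0], n> ->_H^{-1} <[.., a_k + 1, 0, 0..0], n>:
    undoes a limit step (rightmost nonzero coefficient must equal n+1)."""
    a = list(a)
    i = max(j for j, c in enumerate(a) if c > 0)
    assert i > 0 and a[i] == n + 1     # shape required by the rule
    a[i] = 0
    a[i - 1] += 1
    return a, n
```

Composing an `inv_*` rule after the matching forward rule returns to the original pair ⟨α, n⟩, which is what the lower-bound construction exploits to "fold back" a computation.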
The budget construction M^b (M extended with a budget counter B) ensures:
– M^b ⊢ (ℓ, B, a) →rel (ℓ′, B′, a′) implies B + |a| = B′ + |a′|
– M^b ⊢ (ℓ, B, a) →rel (ℓ′, B′, a′) implies M ⊢ (ℓ, a) →*rel (ℓ′, a′)
– if M ⊢ (ℓ, a) →*rel (ℓ′, a′) then ∃B, B′: M^b ⊢ (ℓ, B, a) →*rel (ℓ′, B′, a′)
– if M^b ⊢ (ℓ, B, a) →* (ℓ′, B′, a′) (lossy) then M^b ⊢ (ℓ, B, a) →*rel (ℓ′, B′, a′) iff B + |a| = B′ + |a′|
⟨ℓ_H, a_m:1, 0, ..., n:m, 0, ...⟩ →* ⟨ℓ_{H⁻¹}, 1, 0, ..., m, 0, ...⟩ iff M(m) has a reliable run.
⟨ℓ_H, a_m:1, 0, ..., n:m, 0, ...⟩ →*rel ⟨ℓ_{H⁻¹}, a_m:1, 0, ..., n:m, 0, ...⟩ iff M has a reliable run from ℓ_ini to ℓ_fin that is bounded by H_{ω^m}(m), i.e., by Ackermann(m).
We use Σ = {a0, ..., am} ∪ {▸} (with ▸ a separator symbol) to encode ordinals α < ω^{ω^{m+1}}. Two-level "differential" encoding:

β : {a0, ..., am}* → ω^{m+1}        β(a_{r1} ⋯ a_{rk}) def= ω^{r1} + ⋯ + ω^{rk}
E.g. β(ε) = 0, β(a3 a0 a0) = ω^3 + 2.

α : Σ* → ω^{ω^{m+1}}        α(u1 ▸ u2 ▸ ⋯ ul ▸) def= ω^{β(u1 u2 ⋯ ul)} + ⋯ + ω^{β(u1 u2)} + ω^{β(u1)}
E.g. α(▸▸▸) = ω^0 + ω^0 + ω^0 = 3, α(a1 a0 ▸ ▸ a1 ▸) = ω^{ω·2} + ω^{ω+1} · 2.

Point of this encoding: w ⊑ w′ implies α(w) ≤ α(w′). Difficulty: α is not always in CNF.
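The one subtlety in β is left absorption (ω^r + ω^s = ω^s when r < s); a sketch computing β as a CNF coefficient vector, with letters a_r given as their indices r:

```python
def beta(word, m):
    """beta(a_{r1}...a_{rk}) = w^{r1} + ... + w^{rk}, returned as CNF
    coefficients [c_m, ..., c_0]; appending w^r absorbs all smaller terms."""
    coeffs = [0] * (m + 1)
    for r in word:
        for s in range(r):       # w^s + w^r = w^r when s < r
            coeffs[m - s] = 0
        coeffs[m - r] += 1
    return coeffs
```

This reproduces the slide's example β(a3 a0 a0) = ω^3 + 2, while reversing the word gives β(a0 a0 a3) = ω^3: the two leading ω^0 terms are absorbed.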
Computing H with ordinal words

⟨▸ w, n⟩ →_H ⟨w, n + 1⟩        % H_{α+1}(n) = H_α(n+1)
⟨u a0 ▸ w, n⟩ →_H ⟨u ▸^{n+1} a0 w, n⟩        % H_{γ+ω}(n) = H_{γ+n+1}(n)
⟨u a_{r+1} ▸ w, n⟩ →_H ⟨u a_r^{n+1} ▸ a_{r+1} w, n⟩        % H_{γ+ω^{r+1}}(n) = H_{γ+ω^r·(n+1)}(n)
(⋯ similar rules for →_H⁻¹ ⋯)

w ⊑ w′ and n ≤ n′ and w′ pure imply H_{α(w)}(n) ≤ H_{α(w′)}(n′), where purity means that w′ has no superfluous symbols (a regular condition that can be enforced by LCS's).
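The three rules as string rewriting in Python (a sketch under the reconstruction above: letters a_r are represented by the integer r, "|" is the separator; iterating until no separator remains yields H_{α(w)}(n)):

```python
SEP = "|"

def hardy_word_step(w, n):
    """One ->_H rewrite on <w, n>, acting at the leftmost separator."""
    j = w.index(SEP)
    if j == 0:                                   # <|w, n> -> <w, n+1>
        return w[1:], n + 1
    r = w[j - 1]
    if r == 0:                                   # <u a0 | w, n> -> <u |^{n+1} a0 w, n>
        return w[:j - 1] + [SEP] * (n + 1) + [0] + w[j + 1:], n
    # <u a_r | w, n> -> <u a_{r-1}^{n+1} | a_r w, n> for r >= 1:
    return w[:j - 1] + [r - 1] * (n + 1) + [SEP, r] + w[j + 1:], n

def hardy_word(w, n):
    while SEP in w:
        w, n = hardy_word_step(w, n)
    return n
```

Sanity checks against direct computation: [0, "|"] encodes ω and H_ω(2) = H_3(2) = 5; [1, "|"] encodes ω^ω and H_{ω^ω}(1) = H_{ω²}(1) = ⋯ = 7.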
Implementing →_H on channels

We now store u and n (in unary) as two strings (with endmarker #) on two channels p and d. E.g. the successor rule:

p: ▸u#    d: ▸^n#    →*    p: u#    d: ▸^{n+1}#
Implementing →_H on channels, continued

The a0-rule, with prefix a_{i1} ⋯ a_{ip}:

p: a_{i1} ⋯ a_{ip} a0 ▸ u#    d: ▸^n#    →*    p: a_{i1} ⋯ a_{ip} ▸^{n+1} a0 u#    d: ▸^n#
As we did for lossy counters, this time with channels. Bottom line: an LCS with |Σ| = m + 3
— can build a workspace of size H_{ω^{ω^{m+1}}}(n),
— use this as a computational resource,
— and fold back the workspace by computing the inverse of H.
Checking that the above computation is performed reliably can be stated as (reduces to) a reachability (or termination) question.
(See [Chambart & Schnoebelen, 2008] for the complete construction.)

Confirms: the main parameter for complexity is the size of the message alphabet.
Length of bad sequences is key to bounding the complexity of WQO-based algorithms. Here verification people have a lot to learn from proof theory and combinatorics.
Proving matching lower bounds is not necessarily tricky (and is easy for LCM's or LCS's), but we still lack:
— a collection of hard problems: Post Embedding Problem, ...
— a tutorial/textbook on subrecursive hierarchies (like the fast-growing and Hardy hierarchies)
— a toolkit of coding tricks and lemmas for ordinals
The approach seems workable: recently we could characterize the complexity of Timed-Arc Petri nets and Data Petri nets at F_{ω^{ω^ω}}.
References:
— Finkel & S., Theor. Comp. Sci. 2001: well-structured transition systems
— Baier, Bertrand & S., LPAR 2006: more on well-structured transition systems (games, probabilities, ...)
— Schmitz & S., ICALP 2011: compositional length of bad sequences
— S., MFCS 2010: hardness for LCM's and related models
— Chambart & S., LICS 2008: hardness for LCS's
— S., RP 2010: decidability for LCM's
— Chambart & S., ICALP 2010: Post Embedding Problem