
Distributed synthesis: synchronous and asynchronous semantics

Paul Gastin
LSV, ENS de Cachan & CNRS (Paul.Gastin@lsv.ens-cachan.fr)

EPIT, May 31st, 2006


Outline

1. Control for sequential systems
2. Control for distributed systems
   - Synchronous semantics
   - Asynchronous semantics


Open / Reactive system

An open system S receives inputs from the environment E and produces outputs to E.

Model for the open system
◮ Transition system A = (Q, Σ, q0, δ)
  ◮ Q: finite or infinite set of states
  ◮ δ: deterministic or non-deterministic transition function
◮ Σ = Σc ⊎ Σuc: controllable / uncontrollable events
◮ Σ = Σo ⊎ Σuo: observable / unobservable events
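The model can be read as a small data structure; the following sketch and its toy instance are illustrative, not from the slides:

```python
from dataclasses import dataclass

# Minimal sketch of the model A = (Q, Sigma, q0, delta); all names are
# illustrative.  delta is non-deterministic in general, so it maps
# (state, event) to a *set* of successor states.
@dataclass
class TransitionSystem:
    states: set        # Q
    events: set        # Sigma
    controllable: set  # Sigma_c  (Sigma_uc = events - controllable)
    observable: set    # Sigma_o  (Sigma_uo = events - observable)
    initial: object    # q0
    delta: dict        # (q, a) -> set of successors

    def successors(self, q, a):
        return self.delta.get((q, a), set())

# Toy instance: "go" is controllable, "fail" is not.
A = TransitionSystem(
    states={"idle", "busy"},
    events={"go", "fail"},
    controllable={"go"},
    observable={"go", "fail"},
    initial="idle",
    delta={("idle", "go"): {"busy"}, ("busy", "fail"): {"idle"}},
)
print(A.successors("idle", "go"))  # {'busy'}
```

The elevator on the next slide is exactly an instance of this shape, with a larger state space.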


Example: Elevator

Transition system
◮ States:
  ◮ position of the cabin
  ◮ a flag is_open for each door
  ◮ a flag is_called for each level
  ◮ number of persons in the cabin
◮ Events:
  ◮ uncontrollable (Σuc): call level i (observable, in Σo); enter/exit cabin (unobservable, in Σuo)
  ◮ controllable (Σc): open/close door i; move 1 level up/down
◮ This easily yields a finite, deterministic transition system.


Specification

Linear time: LTL, FO, MSO, regular, . . .
◮ Safety: G(level = i → is_closed_i)
◮ Liveness: G(is_called_i → F(level = i ∧ is_open_i))

Branching time: CTL, CTL∗, µ-calculus, . . .
◮ AG ⟨call_i⟩⊤  (call_i is uncontrollable)
◮ AG EF(level = 0 ∧ is_open_0)


Control problem

An open system S with inputs from E and outputs to E; a specification ϕ; a controller C which observes the system and enables/disables actions.

Two problems
◮ Control: given a system S and a specification ϕ, decide whether there exists a controller C such that S ⊗ C ⊨ ϕ.
◮ Synthesis: given a system S and a specification ϕ, build a controller C (if one exists) such that S ⊗ C ⊨ ϕ.


Controller

Under full state-event observation
◮ Controller: f : Q(ΣQ)∗ → 2^Σ with Σuc ⊆ f(x) for all x ∈ Q(ΣQ)∗.
◮ Controlled behavior: q0, a1, q1, a2, q2, . . . with (q_{i−1}, a_i, q_i) ∈ δ and a_i ∈ f(q0 a1 q1 · · · q_{i−1}) for all i > 0.
◮ Controlled execution tree: t : D∗ → Σ × Q with
  ◮ t(ε) = (a, q0)  (a ∈ Σ fixed arbitrarily)
  ◮ for all x = d1 · · · dn ∈ D∗ with t(d1 · · · di) = (a_i, q_i), we have t(sons(x)) = {(a, q) | a ∈ f(q0 a1 q1 · · · an qn) and (qn, a, q) ∈ δ}.

Under full event observation
◮ Controller: f : Σ∗ → 2^Σ with Σuc ⊆ f(x) for all x ∈ Σ∗.
Remark: same as full state-event observation if the system is deterministic.

Under partial event observation
◮ Controller: f : Σo∗ → 2^Σ with Σuc ⊆ f(x) for all x ∈ Σo∗.
◮ Controlled behavior: q0, a1, q1, a2, q2, . . . with (q_{i−1}, a_i, q_i) ∈ δ and a_i ∈ f ◦ Π_{Σo}(a1 · · · a_{i−1}) for all i > 0.
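Under full event observation, the definitions above amount to the following loop; the toy system and controller are illustrative, not from the slides:

```python
# Controlled run under full event observation: the controller f maps the
# event history to a set of enabled events and must always enable the
# uncontrollable events.  Deterministic delta for simplicity; all names
# are illustrative.
def controlled_run(q0, delta, f, uncontrollable, steps, pick):
    """pick chooses the next event among the enabled fireable ones."""
    run, history, q = [q0], (), q0
    for _ in range(steps):
        enabled = f(history) | uncontrollable        # Sigma_uc subset of f(x)
        fireable = [a for a in enabled if (q, a) in delta]
        if not fireable:
            break                                    # deadlock
        a = pick(q, fireable)
        q = delta[(q, a)]
        history += (a,)
        run += [a, q]                                # q0 a1 q1 a2 q2 ...
    return run

# Toy system; the controller disables "b" after the first event.
delta = {(0, "a"): 1, (1, "a"): 0, (1, "b"): 1}
f = lambda h: {"a", "b"} if len(h) == 0 else {"a"}
run = controlled_run(0, delta, f, uncontrollable=set(), steps=3,
                     pick=lambda q, fs: sorted(fs)[0])
print(run)  # [0, 'a', 1, 'a', 0, 'a', 1]
```

Partial observation only changes the argument of f: the history is first projected onto the observable events.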


Control versus Game

Correspondence
◮ Transition system = game arena (graph)
◮ Controllable events = actions of player 1 (controller)
◮ Uncontrollable events = actions of player 0 (opponent, environment)
◮ Behavior = play
◮ Controller = strategy
◮ Specification = winning condition
◮ Finding a controller = finding a winning strategy

Control problem
Given a system S and a specification ϕ, does there exist a controller C such that L(C ⊗ S) ⊆ L(ϕ)?

Theorem
If the system is finite-state and the specification is regular, then the control problem is decidable. Moreover, when (S, ϕ) is controllable, we can synthesize a finite-state controller.
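The decidability claim can be made concrete for safety conditions ("never leave a safe set"): the controller's winning region is a simple fixpoint. A minimal sketch under illustrative conventions (assumptions, not from the slides: uncontrollable events are always enabled, deadlocks are losing):

```python
# Winning region of the controller for a safety objective: iteratively
# remove states from which safety cannot be enforced.
def safety_winning_region(states, edges, controllable, safe):
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for q in list(win):
            out = [(a, t) for (s, a, t) in edges if s == q]
            unc = [(a, t) for (a, t) in out if a not in controllable]
            ctrl_ok = any(a in controllable and t in win for (a, t) in out)
            if any(t not in win for (_, t) in unc):
                win.discard(q)          # an uncontrollable event escapes win
                changed = True
            elif out and not unc and not ctrl_ok:
                win.discard(q)          # controller can only move outside win
                changed = True
    return win

# Toy arena: from state 1 the uncontrollable "fail" reaches the bad state 2.
edges = {(0, "stay", 0), (0, "go", 1), (1, "fail", 2), (1, "go", 0)}
print(safety_winning_region({0, 1, 2}, edges, {"stay", "go"}, safe={0, 1}))
# {0}
```

A winning memoryless strategy can then be read off the fixpoint: in each winning state, enable only the controllable edges that stay inside the region.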


Ramadge-Wonham 87

Control problem (exact)
Given a system S (with accepting states) and a specification K ⊆ Σ∗, does there exist a controller C such that L(C ⊗ S) = K?

Theorem
◮ (S, Pref(K)) is controllable iff Pref(K) · Σuc ∩ Pref(L(S)) ⊆ Pref(K).
◮ (S, K) is controllable without deadlock iff
  ◮ Pref(K) · Σuc ∩ Pref(L(S)) ⊆ Pref(K), and
  ◮ Pref(K) ∩ L(S) = K.
◮ If S is finite-state and K is regular, then the control problem is decidable. When (S, K) is controllable, we can synthesize a finite-state controller.

Other results
◮ control under partial observation
◮ maximal controllable sub-specification
◮ generalization to infinite behaviors (Thistle-Wonham)
◮ . . .
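The first controllability condition can be checked directly when K and L(S) are given as explicit finite sets of words; the helper names below are illustrative:

```python
# Check the Ramadge-Wonham controllability condition
#   Pref(K) . Sigma_uc  intersect  Pref(L)  subset of  Pref(K)
# on explicitly given finite languages (words as tuples of events).
def prefixes(language):
    return {w[:i] for w in language for i in range(len(w) + 1)}

def controllable(K, L, sigma_uc):
    pref_K, pref_L = prefixes(K), prefixes(L)
    # After any prefix of K, an uncontrollable event that the plant
    # allows must keep us inside Pref(K).
    return all(w + (a,) in pref_K
               for w in pref_K for a in sigma_uc
               if w + (a,) in pref_L)

# Plant L over events 'c' (controllable) and 'u' (uncontrollable).
L = {("c", "u"), ("c", "c")}
print(controllable({("c", "u")}, L, {"u"}))  # True
print(controllable({("c", "c")}, L, {"u"}))  # False: L allows 'u' after 'c'
```

For regular K and L(S) the same check runs on the product automaton instead of explicit prefix sets, which is what makes the problem decidable.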


Synthesis of reactive programs

Pnueli-Rosner 89

A program repeatedly reads an input variable x and writes an output variable y.
◮ Qx: domain for input variable x
◮ Qy: domain for output variable y
◮ Program: f : Qx^+ → Qy
◮ Input: x1 x2 · · · ∈ Qx^ω.
◮ Behavior: (x1, y1)(x2, y2)(x3, y3) · · · with yn = f(x1 · · · xn) for all n > 0.

Implementability problem
◮ Given a linear-time specification ϕ over the alphabet Σ = Qx × Qy, does there exist a program f such that all f-behaviors satisfy ϕ?
◮ Given a branching-time specification ϕ over the alphabet Σ = Qx × Qy, does there exist a program f such that its run-tree satisfies ϕ?


Synthesis of reactive programs

Implementability problem
Given a linear-time specification ϕ over the alphabet Σ = Qx × Qy, does there exist a program f such that all f-behaviors satisfy ϕ?

Implementability ≠ Satisfiability
◮ Qx = {0, 1} and ϕ = F(x = 1)
◮ ϕ is satisfiable: (1, 0)^ω ⊨ ϕ
◮ ϕ is not implementable since the input is not controllable.

Implementability ≠ Validity of ∀x ∃y ϕ
◮ Qx = Qy = {0, 1} and ϕ = (y = 1) ↔ F(x = 1)
◮ ∀x ∃y ϕ is valid.
◮ ϕ is not implementable by a reactive program: y1 would have to predict the future of x.

For non-reactive terminating programs, Implementability = Validity of ∀x ∃y ϕ.


Synthesis of reactive programs

Implementability problem
Given a linear-time specification ϕ over the alphabet Σ = Qx × Qy, does there exist a program f such that all f-behaviors satisfy ϕ?

Theorem (Pnueli-Rosner 89)
◮ The specification ϕ ∈ LTL is implementable iff the formula Aϕ ∧ AG(⋀_{a∈Qx} EX(x = a)) is satisfiable.
◮ When ϕ is implementable, we can construct a finite-state implementation (program) in time doubly exponential in ϕ.


Program synthesis versus System control

Equivalence
The implementability problem for a program reading x and writing y is equivalent to the control problem for the corresponding open system with uncontrollable inputs in Qx and controllable outputs in Qy.


Outline

1. Control for sequential systems
2. Control for distributed systems
   - Synchronous semantics
   - Asynchronous semantics


Distributed control

An open distributed system S made of components S1, S2, S3, S4, with inputs from E and outputs to E; each component Si is composed with a local controller Ci; specification ϕ.

Two problems, again
◮ Control: decide whether there exists a distributed controller such that (S1 ⊗ C1) ∥ · · · ∥ (Sn ⊗ Cn) ∥ E ⊨ ϕ.
◮ Synthesis: if so, compute such a distributed controller.

Peterson-Reif 1979, Pnueli-Rosner 1990
In general, both problems are undecidable.


Architectures with shared variables

Architecture A = (P, V, R, W)
◮ P: finite set of processes/agents
◮ V: finite set of variables
◮ R ⊆ P × V: (a, x) ∈ R iff process a reads variable x
  ◮ R(a): variables read by process a ∈ P
  ◮ R^{-1}(x): processes reading variable x ∈ V
◮ W ⊆ P × V: (a, x) ∈ W iff process a writes to variable x
  ◮ W(a): variables written by process a ∈ P
  ◮ W^{-1}(x): processes writing to variable x ∈ V

Example
An architecture over variables x0, . . . , x5 with processes a1, . . . , a4 (diagram omitted).


Distributed systems with shared variables

Distributed system / plant / arena
◮ A = (P, V, R, W): architecture
◮ Qx: (finite) domain for each variable x ∈ V
◮ δa ⊆ Q_{R(a)} × Q_{W(a)}: legal actions/moves for process/player a ∈ P
◮ q0 ∈ QV: initial state

where Q_I = ∏_{x∈I} Qx for I ⊆ V.
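A minimal sketch of these definitions, with an illustrative two-process instance (all variable and process names are assumptions, not from the slides):

```python
from itertools import product

# Sketch of a distributed system over A = (P, V, R, W): each process a has
# a move relation delta_a, a subset of Q_R(a) x Q_W(a).
R = {"a": ("x",), "b": ("x", "v")}   # variables read by each process
W = {"a": ("y",), "b": ("z",)}       # variables written by each process
Q = {"x": {0, 1}, "v": {0, 1}, "y": {0, 1}, "z": {0, 1}}   # domains Q_x

def Q_I(variables):
    """Q_I = product of the domains of the variables in I."""
    return set(product(*(Q[x] for x in variables)))

# Unrestricted moves, as in the Pnueli-Rosner setting:
# delta_a = Q_R(a) x Q_W(a).
delta = {a: set(product(Q_I(R[a]), Q_I(W[a]))) for a in R}

print(len(Q_I(("x", "v"))))  # 4 joint read-values for process b
print(len(delta["b"]))       # 8 = 4 read-values x 2 write-values
```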


Distributed Synthesis

Problem
Given a distributed system and a specification, decide the existence of (and synthesize) programs/strategies for the processes/players such that the system satisfies the specification, whatever the environment/opponent does.

Main parameters
◮ Which subclass of architectures?
◮ Which semantics? Synchronous (with or without delay), asynchronous.
◮ What kind of specification? LTL, CTL∗, µ-calculus; rational or recognizable word/tree languages.
◮ What kind of memory for the programs? Memoryless, local memory, or causal memory; finite or infinite memory.


Outline

1. Control for sequential systems
2. Control for distributed systems
   - Synchronous semantics
   - Asynchronous semantics


Pnueli-Rosner (FOCS’90)

Pipeline
A pipeline architecture over processes a1, . . . , a4: the environment writes the input x, internal variables y1, y2, y3 link consecutive processes, and z1, . . . , z4 are output variables (diagram omitted).

Restrictions
◮ Unique writer: |W^{-1}(x)| = 1 for all x ∈ V
◮ Unique reader: |R^{-1}(x)| = 1 for all x ∈ V
◮ Acyclic graph (0-delay)
◮ No restrictions on moves: δa = Q_{R(a)} × Q_{W(a)} for all a ∈ P
◮ Synchronous behaviors: q0 q1 q2 · · · where the qn ∈ QV are global states
◮ Programs with local memory: fa : Q_{R(a)}^+ → Q_{W(a)} for all a ∈ P
◮ Specification: LTL over input and output variables only
  ◮ input variables: In = W(environment)
  ◮ output variables: Out = R(environment)


0-delay synchronous semantics

Example
Process a reads u and writes x; process b reads x and v and writes z.

Programs: fx : Qu^+ → Qx and fz : (Qx × Qv)^+ → Qz.

◮ Input: the pair of streams u1 u2 u3 · · · and v1 v2 v3 · · · in (Qu × Qv)^ω.
◮ Behavior: the streams u, v, x, z with xn = fx(u1 · · · un) and zn = fz((x1, v1) · · · (xn, vn)) for all n > 0. Note the 0-delay: zn may already depend on xn.
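The 0-delay behavior can be unrolled on a finite input prefix; the two toy strategies below are illustrative, not from the slides:

```python
# Unroll the 0-delay synchronous semantics on a finite input prefix:
# x_n = fx(u_1..u_n) is produced first, and z_n = fz((x_1,v_1)..(x_n,v_n))
# may already read x_n in the same round (0 delay).
def behavior(us, vs, fx, fz):
    xs, zs = [], []
    for n in range(len(us)):
        xs.append(fx(tuple(us[: n + 1])))
        zs.append(fz(tuple(zip(xs, vs[: n + 1]))))
    return list(zip(us, vs, xs, zs))

# Illustrative toy strategies: a outputs the running parity of u,
# b outputs x XOR v of the current round.
fx = lambda u_hist: sum(u_hist) % 2
fz = lambda xv_hist: xv_hist[-1][0] ^ xv_hist[-1][1]

print(behavior([1, 0, 1], [0, 1, 1], fx, fz))
# [(1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 1)]
```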


Undecidability

Architecture A0
Two independent processes: a reads u and writes x; b reads v and writes y.

Theorem (Pnueli-Rosner FOCS’90)
The synthesis problem for architecture A0 and LTL (or CTL) specifications is undecidable.

Proof
Reduction from the halting problem on the empty tape.


Undecidability proof 1

SPEC1: processes a and b must output configurations

If the input v counts n(v) = p (i.e. v = 0^q 1^p 0 · · ·), then y must spell #^{q+p} C #^ω, where C ∈ Γ^∗ Q Γ^+ is a Turing machine configuration (and symmetrically for u and x). In LTL, for process b:

(v = 0 ∧ y = #) W (v = 1 ∧ ((v = 1 ∧ y = #) W (v = 0 ∧ y ∈ Γ^∗QΓ^+#^ω)))

where y ∈ Γ^∗QΓ^+#^ω is defined as

(y ∈ Γ) U ((y ∈ Q) ∧ X((y ∈ Γ) U ((y ∈ Γ) ∧ X G y = #)))

Undecidability proof 2

SPEC2: processes a and b must start with the first configuration

If n(v) = 1 (i.e. v = 0^q 1 0 · · ·), then y must spell #^{q+1} C1 #^ω, where C1 is the initial configuration. In LTL:

(v = 0) W (v = 1 ∧ X((v = 0) → y ∈ C1#^ω))

Undecidability proof 3

SPEC3: if n(u) = n(v) are synchronized then x = y

If both inputs are 0^q 1^p 0 · · ·, then both outputs must spell the same #^{q+p} C #^ω. In LTL:

n(u) = n(v) → G(x = y)

where n(u) = n(v) is defined as

(u = v = 0) U (u = v = 1 ∧ ((u = v = 1) U (u = v = 0)))


Undecidability proof 4

SPEC4: if n(u) = n(v) + 1 are synchronized then Cy ⊢ Cx

If u = 0^q 1^{p+1} 0 · · · and v = 0^{q+1} 1^p 0 · · ·, then x = #^{q+p+1} Cx #^ω and y = #^{q+p+1} Cy #^ω where Cx is the successor configuration of Cy. In LTL:

n(u) = n(v) + 1 → ((x = y) U (Trans(y, x) ∧ X^3 G x = y))

where Trans(y, x) is defined by

⋁_{(p,a,q,b,←)∈T, c∈Γ} (y = cpa ∧ x = qcb) ∨ ⋁_{(p,a,q,b,→)∈T, c∈Γ} (y = pac ∧ x = bqc) ∨ ⋁_{(p,a,q,b,→)∈T} (y = pa# ∧ x = bq✷)

(here y = cpa abbreviates that the next three letters of y are c, p, a, and ✷ is the blank symbol).


Undecidability proof 5

Lemma: winning strategies must simulate the Turing machine
For each p ≥ 1, if n(u) = p then Cx = Cp, the p-th configuration of the Turing machine starting from the empty tape.

Proof
By induction on p. For p = 1, SPEC2 forces x = #^{q+1} C1 #^ω. For the inductive step, SPEC3 forces y = #^{q+p+1} Cp #^ω when n(v) = p (process b cannot see u, so its output is determined by n(v) alone), and then SPEC4 forces x = #^{q+p+1} C_{p+1} #^ω when n(u) = p + 1.

Corollary
Specifications 1-4 together with SPEC5: G(x ≠ stop) are implementable iff the Turing machine does not halt starting from the empty tape.


Communication allows cheating

Architecture with communication
As in A0, a reads u and writes x, and b reads v and writes y; but now a also writes a variable z read by b.

◮ Strategy for a:
  ◮ copy u to z;
  ◮ if u = 0^q 1^p 0 · · · then x = #^{p+q} C1 #^ω if p = 1 (for SPEC2), and x = #^{p+q} C2 #^ω otherwise (for SPEC4).
◮ Strategy for b: if z = 0^{q′} 1^{p′} 0 · · · and v = 0^q 1^p 0 · · · then
  ◮ y = #^{p+q} C1 #^ω if p = 1 (for SPEC2),
  ◮ y = #^{p+q} C2 #^ω if p = p′ > 1 and q = q′ (for SPEC3),
  ◮ y = #^{p+q} C1 #^ω otherwise (for SPEC4).

These strategies satisfy SPEC1-4 without simulating the Turing machine beyond C2, so the reduction breaks down.
slide-54
SLIDE 54

29 / 65

More undecidable architectures

Exercices

  • 1. Show that the architecture below is undecidable.

u w x v y z a b

  • 2. Show that the undecidability results also hold for CTL specifications

Uncomparable information

Definition
For an output variable y, View(y) is the set of input variables x such that there is a path from x to y.

Definition
An architecture has uncomparable information if there exist output variables y1, y2 such that View(y2) \ View(y1) ≠ ∅ and View(y1) \ View(y2) ≠ ∅. Otherwise it is said to have preordered information.

Example: inputs x1, x2 and outputs y1, y2 (diagram omitted).
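Both definitions are easy to compute from the read/write relations; a sketch, using the undecidable architecture A0 as the illustrative instance:

```python
# Compute View(y) = set of input variables with a path to y, then test the
# "uncomparable information" condition.  R and W map each process to the
# variables it reads / writes; all instance names are illustrative.
def view_sets(inputs, outputs, R, W):
    succ = {}                          # variable -> variables one step away
    for a in R:
        for x in R[a]:
            succ.setdefault(x, set()).update(W[a])
    def reach(x):                      # forward reachability from x
        seen, stack = set(), [x]
        while stack:
            for w in succ.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    return {y: {x for x in inputs if y in reach(x)} for y in outputs}

def uncomparable(view):
    return any(view[y1] - view[y2] and view[y2] - view[y1]
               for y1 in view for y2 in view)

# The undecidable architecture A0: a reads u, writes x; b reads v, writes y.
R = {"a": {"u"}, "b": {"v"}}
W = {"a": {"x"}, "b": {"y"}}
v = view_sets({"u", "v"}, {"x", "y"}, R, W)
print(v["x"], v["y"])   # {'u'} {'v'}
print(uncomparable(v))  # True
```

A two-process pipeline, by contrast, has preordered information: every view is a subset of the next one along the chain.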


Uncomparable information yields undecidability

Theorem
Architectures with uncomparable information are undecidable for LTL or CTL input-output specifications.

Proof for LTL specifications
Embed the undecidable architecture A0 (inputs x0, x1, outputs y0, y1) into the two uncomparable views (diagram omitted).


Decidability

Pipeline
The pipeline architecture: input x, internal variables y1, y2, y3 linking processes a1, . . . , a4, outputs z1, . . . , z4.

Pnueli-Rosner (FOCS’90)
The synthesis problem for pipeline architectures and LTL specifications is decidable, with non-elementary complexity.


Decidability proof 1

Pipeline
A two-process pipeline: a reads x and writes y, b reads y and writes z. Compare it with the merged system where a single process a & b reads x and writes (y, z).

From distributed to global
If fy : Qx^+ → Qy and fz : Qy^+ → Qz are local (distributed) strategies, then we can define an equivalent global strategy h = fy ⊗ fz : Qx^+ → Qy × Qz by

h(x1 · · · xn) = (yn, fz(y1 · · · yn))  where yi = fy(x1 · · · xi).

From global to distributed
z should only depend on y. We cannot transmit x through y if |Qy| < |Qx|. So we have to check whether there exists a global strategy that can be distributed.
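The distributed-to-global direction is just function composition; a minimal sketch with illustrative toy strategies:

```python
# Product h = fy (x) fz of local pipeline strategies into the equivalent
# global strategy h(x_1..x_n) = (y_n, fz(y_1..y_n)), as on the slide.
def tensor(fy, fz):
    def h(xs):
        ys = tuple(fy(xs[: i + 1]) for i in range(len(xs)))
        return ys[-1], fz(ys)
    return h

# Illustrative toy strategies: fy forwards the last input, fz outputs parity.
fy = lambda xs: xs[-1]
fz = lambda ys: sum(ys) % 2

h = tensor(fy, fz)
print(h((1, 0, 1)))  # (1, 0): y_3 = 1 and parity(1, 0, 1) = 0
```

The hard direction is the converse, handled by the tree-automata construction on the following slides.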


Decidability proof 2

Pipeline
The same two-process pipeline (a: x → y, b: y → z) versus the merged process a & b: x → (y, z).

Proof
1. We first solve the global game: we obtain a nondeterministic (ND) tree automaton A accepting the global strategies h : Qx^+ → Qy × Qz that implement the specification. It is easily obtained from an ND tree automaton for the specification.
2. We build from A an alternating tree automaton A′ accepting a local strategy fz : Qy^+ → Qz iff there exists a local strategy fy : Qx^+ → Qy such that h = fy ⊗ fz : Qx^+ → Qy × Qz is accepted by A.



Tree automata

Non-deterministic transitions
In state p at a node labeled a, the automaton sends exactly one state to each son: the son a1 in direction 1 gets state p1, the son a2 in direction 2 gets state p2.

Alternating transitions
In state p at a node labeled a, the automaton may send a conjunction of states in a direction, e.g. p1 in direction 1 and p2 ∧ p3 in direction 2. The run tree then explores the son a2 twice, once in state p2 and once in state p3.


Decidability proof 3

Proof

(architecture diagram: pipeline x → y → z, specification on a and b)

  • 1. We first solve the global game: we obtain an ND tree automaton A accepting the global strategies h : Qx+ → Qy × Qz that implement the specification. It is easily obtained from an ND tree automaton for the specification.
  • 2. We build from A an alternating tree automaton A′ accepting a local strategy fz : Qy+ → Qz iff there exists a local strategy fy : Qx+ → Qy such that h = fy ⊗ fz : Qx+ → Qy × Qz is accepted by A.

(run diagrams: A runs on the Qx-tree labelled by pairs (y, z); A′ runs on the Qy-tree labelled by z, guessing the pairs (x, p) of input letter and A-state)

slide-77
SLIDE 77

37 / 65

Decidability proof 4

Proof

(diagram: pipeline x → y1 y2 y3 → z1 z2 z3 z4, specification on a1 a2 a3 a4; the iteration alternates A′ alternating, A1 non-deterministic, A′1, A2, A′2, A3, . . . along the pipeline)

  • 1. We first solve the global game: we obtain an ND tree automaton A accepting the global strategies h : Qx+ → Qy × Qz that implement the specification. It is easily obtained from an ND tree automaton for the specification.
  • 2. We build from A an alternating tree automaton A′ accepting a local strategy fz : Qy+ → Qz iff there exists a local strategy fy : Qx+ → Qy such that h = fy ⊗ fz : Qx+ → Qy × Qz is accepted by A.
  • 3. Transform the alternating TA A′ into an equivalent non-deterministic TA A1 (Muller and Schupp 1985). Exponential blow-up.
  • 4. Iterate and check the last automaton for emptiness.
slide-84
SLIDE 84

38 / 65

Decidability

Pipeline

(diagram: pipeline x → y1 y2 y3 → z1 z2 z3 z4, specification on a1 a2 a3 a4)

Pnueli-Rosner (FOCS’90)

The synthesis problem for pipeline architectures and LTL specifications is non-elementary decidable.

Peterson-Reif (FOCS’79)

Multi-person games with incomplete information =⇒ non-elementary lower bound for the synthesis problem.

slide-85
SLIDE 85

39 / 65

Decidability

Kupferman-Vardi (LICS’01)

The synthesis problem is non-elementary decidable for

◮ one-way chain, one-way ring, two-way chain and two-way ring,
◮ CTL∗ specifications (or tree-automata specifications) on all variables,
◮ synchronous, 1-delay semantics,
◮ local strategies.

(diagram: one-way chain x → y1 y2 y3 → z1 z2 z3, specification on a1 a2 a3)

slide-88
SLIDE 88

40 / 65

1-delay synchronous semantics

Example

(architecture: input u feeds x, then x and input v feed z; specification on a, b)

Programs: fx : Qu∗ → Qx and fz : (Qx × Qv)∗ → Qz.

◮ Input: (u1, v1)(u2, v2)(u3, v3) · · · ∈ (Qu × Qv)ω.

◮ Behavior: the sequence (u1, v1, x1, z1)(u2, v2, x2, z2)(u3, v3, x3, z3) · · · with

xn+1 = fx(u1 · · · un) and zn+1 = fz((x1, v1) · · · (xn, vn)) for all n > 0.
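The 1-delay behavior above can be sketched directly. This is an illustration only: the concrete strategies `fx` and `fz` below (forward the most recent value seen) are made up, and 0 is an arbitrary initial value for the first step.

```python
# Sketch of the 1-delay synchronous semantics on the example
# architecture: fx reads u with one step of delay, fz reads (x, v).

def run(fx, fz, us, vs):
    """Compute x_{n+1} = fx(u_1..u_n) and z_{n+1} = fz((x_1,v_1)..(x_n,v_n))."""
    xs, zs = [], []
    for n in range(len(us)):
        xs.append(fx(us[:n]))                     # x_{n+1} sees u_1..u_n only
        zs.append(fz(list(zip(xs[:n], vs[:n]))))  # z_{n+1} sees (x_1,v_1)..(x_n,v_n)
    return xs, zs

def fx(hist):  # hypothetical strategy: forward the last u seen, 0 initially
    return hist[-1] if hist else 0

def fz(hist):  # hypothetical strategy: forward the last x seen, 0 initially
    return hist[-1][0] if hist else 0

xs, zs = run(fx, fz, us=[1, 2, 3, 4], vs=[7, 7, 7, 7])
print(xs)  # [0, 1, 2, 3] -- u reaches x with one step of delay
print(zs)  # [0, 0, 1, 2] -- u reaches z with two steps of delay, via x
```

The printed sequences make the delay visible: each value of u appears in x one step later and in z two steps later.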

slide-89
SLIDE 89

41 / 65

Decidability

Adequately connected sub-architecture: Qx = Q for all x ∈ V

(diagram: inputs u, v feed the internal variable x = u ⊗ v, which feeds y and z; specification on a, b, c; the sub-architecture collapses to a single node)

Pnueli-Rosner (FOCS’90)

◮ An adequately connected architecture is equivalent to a singleton architecture.
◮ The synthesis problem is decidable for LTL specifications and pipelines of adequately connected architectures.

slide-93
SLIDE 93

42 / 65

Information fork criterion (Finkbeiner–Schewe LICS ’05)

(architecture diagram on variables u, v, p, q, w, x0, x1, a, b, y0, y1, illustrating an information fork)

slide-96
SLIDE 96

43 / 65

Uniformly well connected architectures

Definition

An architecture is uniformly well connected if there is a uniform way to route variables in View(y) to y for each output variable y.

Example

u v w p p s t p p p x y z

slide-98
SLIDE 98

44 / 65

Uniformly well connected architectures

Definition

An architecture is uniformly well connected if there is a uniform way to route variables in View(v) to v for each output variable v.

◮ If the capacity of the internal variables is big enough then the architecture is uniformly well connected.

◮ If the architecture is uniformly well connected then we can use causal strategies instead of local ones.

Proposition

Checking whether a given architecture is uniformly well connected is NP-complete.

Proof

Reduction to the multicast problem in Network Information Flow. The multicast problem is NP-complete (Rasala Lehman-Lehman 2004).


slide-100
SLIDE 100

45 / 65

Uniformly well connected architectures

Theorem (PG, Nathalie Sznajder, Marc Zeitoun)

Uniformly well connected architectures with preordered information are decidable for CTL* external specifications.

Proof.

(proof diagram: the architecture with inputs x1, . . . , x4 and outputs y1, . . . , y4 is reduced to a pipeline in which yi is given xi, xi+1, . . . , x4)

Theorem: Kupferman-Vardi (LICS’01)

The synthesis problem is decidable for pipeline architectures and CTL∗ specifications on all variables.
slide-107
SLIDE 107

46 / 65

Robust specifications

Definition

A specification ϕ is robust if it can be written ϕ = ⋀z∈Out ϕz, where ϕz depends only on View(z) ∪ {z}.

Theorem

The synthesis problem for uniformly well-connected architectures and external and robust CTL∗ specifications is decidable.

Proof.

slide-114
SLIDE 114

47 / 65

Open problem

◮ Decidability of the distributed control/synthesis problem for robust and external specifications.

slide-115
SLIDE 115

48 / 65

Outline

1 Control for sequential systems

2 Control for distributed systems

3 Synchronous semantics

4 Asynchronous semantics

slide-116
SLIDE 116

49 / 65

An example: Romeo and Juliet

(diagram: R and J each connect to one of the lines 1–4; one line is broken)

Romeo and Juliet against the environment

◮ They want to communicate through the same communication line.
◮ At any time, one line is broken.
◮ The environment looks at where R&J are connected and then, atomically, (possibly) changes the broken line.
◮ Romeo/Juliet looks at the status of the lines and, atomically, chooses where to connect.

slide-120
SLIDE 120

50 / 65

Romeo and Juliet (continued)

Architecture

◮ Variables:
  ◮ x1: Romeo’s current line. Q1 = {1, 2, 3, 4}
  ◮ x2: broken line. Q2 = {1, 2, 3, 4}
  ◮ x3: Juliet’s current line. Q3 = {1, 2, 3, 4}
◮ Agents: Romeo, Juliet and Environment.
◮ Read/Write table:

          Romeo      Juliet     Environment
  Read    {x1, x2}   {x2, x3}   {x1, x2, x3}
  Write   {x1}       {x3}       {x2}

slide-121
SLIDE 121

51 / 65

Romeo and Juliet (continued)

Legal moves: δa ⊆ QR(a) × QW(a)

(example moves: E reads (x1 : 3, x2 : 1, x3 : 4) and writes x2 : 4; R reads (x1 : 1, x2 : 1) and writes x1 : 3)

A distributed play of the asynchronous system, R & J against E (trace diagram: from the initial state x1 = x2 = x3 = 1, moves of J, R and E interleave)
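A step of the asynchronous system can be sketched as an atomic read/write: an agent reads its readable variables and writes its own. This is a minimal illustration; the "lowest unbroken line" rule for Romeo below is made up, not part of the slides.

```python
# Sketch of the Romeo-and-Juliet architecture: each agent atomically
# reads its variables READ[agent] and writes its variable WRITE[agent],
# i.e. a move relation delta_a ⊆ Q_R(a) × Q_W(a).

READ = {'R': ('x1', 'x2'), 'J': ('x2', 'x3'), 'E': ('x1', 'x2', 'x3')}
WRITE = {'R': 'x1', 'J': 'x3', 'E': 'x2'}

def legal_R(read_vals):
    """Romeo's legal choices: any line except the broken one (x2)."""
    x1, x2 = read_vals
    return [line for line in (1, 2, 3, 4) if line != x2]

def apply_move(state, agent, choose):
    """One atomic step: read, choose a write value, update the state."""
    read_vals = tuple(state[v] for v in READ[agent])
    new = dict(state)
    new[WRITE[agent]] = choose(read_vals)
    return new

state = {'x1': 1, 'x2': 1, 'x3': 1}
# Hypothetical rule: Romeo reconnects to the lowest unbroken line.
state = apply_move(state, 'R', lambda rv: min(legal_R(rv)))
print(state)  # {'x1': 2, 'x2': 1, 'x3': 1}
```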

slide-123
SLIDE 123

52 / 65

Distributed Behaviors

A play is a Mazurkiewicz (real) trace

◮ A finite play: (trace diagram omitted)
◮ Move: extension of the current Mazurkiewicz trace following the rules.
◮ The game is neither “position based” nor “turn based”.
◮ Winning condition: a set of finite or infinite Mazurkiewicz traces W ⊆ R(Σ, D). Team 0 wins the plays in W and loses the plays in R(Σ, D) \ W.

Romeo and Juliet

W imposes fairness conditions on the environment.
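The trace view can be made concrete: two sequential words describe the same play iff one can be obtained from the other by swapping adjacent independent letters. The sketch below (an illustration, feasible only for short words) enumerates a trace's linearizations; here R and J are independent of each other, while E depends on both, matching the read/write table.

```python
# Sketch: the equivalence class of a word under Mazurkiewicz trace
# equivalence, computed by closing under swaps of adjacent independent
# letters. INDEP is the independence relation I ⊆ Σ × Σ.

INDEP = {('R', 'J'), ('J', 'R')}   # R and J share no variable; E shares with both

def trace_class(word):
    """All linearizations of the trace of `word` (fine for short words)."""
    seen, todo = {word}, [word]
    while todo:
        w = todo.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in INDEP:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    todo.append(swapped)
    return seen

print(sorted(trace_class("RJE")))  # ['JRE', 'RJE'] -- E cannot move past R or J
```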

slide-124
SLIDE 124

53 / 65

Memory for strategies

Memory

◮ Each player only has a partial view of the global history.
◮ Memoryless: a move can depend only on the current state.
◮ Local memory: a player can remember its read history.

(trace diagrams omitted)

Causal memory (intuitively, the maximal history a player can observe)

◮ Players gather and forward as much information as possible,
◮ but there is no global view: the choice of an action cannot depend on a concurrent event.

SLIDE 129

54 / 65

Winning strategies

Tuple (fa)a∈P0 where fa tells player a ∈ P0 how to play.

Memoryless: fa : QR(a) → QW(a) ∪ Stop
Local memory: fa : (QR(a))∗QR(a) → QW(a) ∪ Stop
Causal memory: fa : M(Σ, D) × QR(a) → QW(a) ∪ Stop

(trace diagram omitted)

f-maximal f-plays

Given a strategy f = (fa)a∈P0, one looks at plays t that are

◮ consistent with f: all a-moves are played according to fa (an f-play),
◮ maximal: f predicts Stop for all a-moves enabled at t with a ∈ P0.

Winning strategies

A strategy f is winning in G if all f-maximal f-plays in G are in W.

slide-131
SLIDE 131

55 / 65

Finite abstraction of the causal memory

Distributed memory

A distributed memory is a mapping µ : M(Σ, D) → M satisfying the following equivalent properties:

  • 1. µ−1(m) is recognizable for each m ∈ M,
  • 2. µ is an abstraction of an asynchronous mapping (cf. Zielonka),
  • 3. µ can be computed in a distributed way

(allowing additional contents inside existing communications (piggy-backing), but no extra communications).

Strategy with memory µ

Given fa : M × QR(a) → QW(a) ∪ Stop, the associated strategy fµ is defined by fµa(t, q) = fa(µ(t), q).

If M is finite then fµ is a distributed strategy with finite memory. If |M| = 1 then fµ is memoryless.

slide-132
SLIDE 132

56 / 65

Embedding causal memory inside games

Proposition: PG-Lerman-Zeitoun (LATIN’04)

For a distributed game G and a distributed memory µ, one can build a game Gµ such that team 0 has a WDS in G with memory µ iff team 0 has a memoryless WDS in Gµ.

Proof.

Gµ = G × µ

slide-133
SLIDE 133

57 / 65

From distributed to sequential games

Theorem: PG-Lerman-Zeitoun (LATIN’04)

Given a finite distributed game (G, W), we can effectively build a finite sequential two-player game (G̃, W̃) s.t. the following are equivalent:

◮ There exists a memoryless distributed WS for team 0 in (G, W).
◮ There exists a memoryless WS for player 0 in (G̃, W̃).
◮ There exists a WS for player 0 in (G̃, W̃).

Moreover, if W is recognizable then so is W̃.

Naive idea: Consider the game on the global transition system.

Main problem: The controller has more information than its causal memory.

Solution:

◮ The opponent controls the linearization to be played.
◮ Using reset moves, he can replay different linearizations of the same play.
◮ The winning condition W̃ makes sure that the strategy followed by the controller is indeed distributed.

SLIDE 136

58 / 65

(Un)deciding games

Proposition (Folklore)

Deciding whether team 0 has a distributed WS with causal memory is undecidable for rational winning conditions.

Proof. Simple reduction from the universality problem for rational trace languages.

Peterson-Reif, Madhusudan-Thiagarajan, Bernet-Janin-Walukiewicz

Deciding whether team 0 has a distributed WS with local memory is undecidable even:

◮ for reachability or safety winning conditions,
◮ with 3 players against the environment.

slide-138
SLIDE 138

59 / 65

Series-parallel architectures

Theorem: PG-Lerman-Zeitoun (FSTTCS’04)

Distributed games with recognizable winning conditions are decidable for series- parallel systems and causal memory strategies.

Definition: let A = (P, V, R, W) be an architecture.

◮ A is a parallel product if P = A ⊎ B with R(a) ∩ W(b) = ∅ for all (a, b) ∈ A × B.

◮ A is a serial product if P = A ⊎ B with R(a) ∩ W(b) ≠ ∅ for all (a, b) ∈ A × B.

◮ A is series-parallel if it can be obtained from singletons (|P| = 1) using serial and parallel compositions.

◮ A is series-parallel iff the associated dependence relation does not contain a P4 (an induced path a − b − c − d).

◮ Behaviors of series-parallel architectures are series-parallel posets.
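The P4 criterion above is easy to check by brute force (P4-free graphs are exactly the cographs, which can also be recognized in linear time). The sketch below is an illustration on small hand-made graphs, not part of the slides.

```python
# Sketch: checking the P4-freeness criterion for series-parallel
# architectures. The dependence graph has an edge a-b iff the two
# agents share a variable; an architecture is series-parallel iff
# this graph has no induced path on 4 vertices.

from itertools import combinations, permutations

def has_induced_p4(vertices, edges):
    adj = lambda x, y: frozenset((x, y)) in edges
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            # a path a-b-c-d with none of the three possible chords
            if (adj(a, b) and adj(b, c) and adj(c, d)
                    and not adj(a, c) and not adj(a, d) and not adj(b, d)):
                return True
    return False

E = lambda *pairs: {frozenset(p) for p in pairs}

path4 = E('ab', 'bc', 'cd')         # the forbidden pattern itself
cycle4 = E('ab', 'bc', 'cd', 'da')  # C4 has no induced P4 (it is a cograph)
print(has_induced_p4('abcd', path4))   # True  -> not series-parallel
print(has_induced_p4('abcd', cycle4))  # False -> series-parallel
```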


slide-140
SLIDE 140

60 / 65

Proof outline

Team 0 has a WDS ⇒ it has a WDS with a “small” distributed memory. Induction on Σ. Difficult case: the serial product.

  • 1. A WS on A ⊎ B induces WS on the restrictions of the game to A and B.
  • 2. Replace the WS on A and B by WS with small memory (induction).
  • 3. Finally, glue these WS on A and B together to obtain a WS on A ∪ B using small memory.

Main problem

◮ Team 0 must know on which small game it is playing.
◮ Team 0 has to compute this information in a distributed way.

slide-141
SLIDE 141

61 / 65

Madhusudan and Thiagarajan (Concur’02)

Setting

◮ Architecture: A = (P, V, R, W) with R(a) = W(a) for all a ∈ P.

◮ Moves: δa are built from local moves for variables δa,x ⊆ Qx × Qx:

δa = ∏x∈R(a) δa,x

◮ Strategies with local memory: associated with variables, not with agents, and they only predict the next actions, not the next state:

fx : Qx∗ → 2^(R⁻¹(x))

Action a is enabled by (fx)x∈V at some finite play t if ∀x ∈ R(a), a ∈ fx(πQx(t)).

◮ The environment decides which a-transition should be taken among the actions a enabled by the strategies.
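The enabling rule above can be made concrete: an action a is enabled at a finite play t iff every variable x it reads proposes a, based only on the projection of t onto x's local state sequence. A small Python sketch with illustrative names (none are from the slides):

```python
# Hypothetical sketch of the enabling rule for local-memory strategies:
# a is enabled iff for all x in R(a), a is in f_x applied to x's local history.

def enabled_actions(play, R, strategies, actions):
    """play: list of global states (dicts mapping variable -> local state).
    R: dict mapping each action to the set of variables it reads (= writes).
    strategies: dict mapping each variable x to a function f_x from x's
    local history (a tuple of local states) to the set of proposed actions."""
    enabled = set()
    for a in actions:
        ok = True
        for x in R[a]:
            # Projection pi_Qx(t): the sequence of x's local states along t.
            local_history = tuple(state[x] for state in play)
            if a not in strategies[x](local_history):
                ok = False
                break
        if ok:
            enabled.add(a)
    return enabled
```

The environment then picks one transition among the returned actions, which is exactly the nondeterminism the slide attributes to it.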

slide-142
SLIDE 142

62 / 65

Madhusudan and Thiagarajan (Concur’02)

Restricted control synthesis problem

Given: a distributed system and a recognizable specification.
Question: existence of a clocked and com-rigid non-blocking winning distributed strategy with local memory.

◮ clocked: fx(w) only depends on |w|.
◮ com-rigid: a, b ∈ fx(w) implies R(a) = R(b).

Theorem

  • 1. The restricted control synthesis problem is decidable.
  • 2. It becomes undecidable if either of the conditions (clocked, com-rigid) is dropped.
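For a finite strategy table, both restrictions are directly checkable. A minimal sketch, assuming f_x is given as a dict from local histories (tuples of local states) to sets of proposed actions; the function names are illustrative:

```python
# Hypothetical sketch: checking the two restrictions on a local-memory
# strategy f_x given as a finite table.

def is_clocked(fx_table):
    """clocked: f_x(w) depends only on |w|."""
    by_length = {}
    for history, proposed in fx_table.items():
        n = len(history)
        if n in by_length and by_length[n] != proposed:
            return False
        by_length[n] = proposed
    return True

def is_com_rigid(fx_table, R):
    """com-rigid: actions proposed together read the same set of variables."""
    for proposed in fx_table.values():
        reads = {frozenset(R[a]) for a in proposed}
        if len(reads) > 1:
            return False
    return True
```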
slide-144
SLIDE 144

63 / 65

Mohalik and Walukiewicz (FSTTCS’03)

Restrictions

◮ Controllable actions: R(a) = W(a) is a singleton for all a ∈ P0.
◮ Environment actions: R(e) = W(e) = V and P1 = {e}.
◮ Moves: δe ⊆ QV × QV.
◮ Strategies: local memory with stuttering reduction, so that a player a ∈ P0 cannot see how long it has been idle.

Theorem

◮ Previous settings with local memory can be encoded.
◮ Two constructions to solve the distributed control problem, subsuming previously known decidable cases with local memory.

slide-145
SLIDE 145

64 / 65

Open problems

◮ Generalization to arbitrary symmetric architectures.
◮ Generalization to non-symmetric architectures.
◮ Reasonable upper bounds for synthesis?

slide-146
SLIDE 146

65 / 65

Symmetric architecture

Architecture A = (P, V, R, W)

◮ Restrictions:

∀a ∈ P: ∅ ≠ W(a) ⊆ R(a)
∀a, b ∈ P: R(a) ∩ W(b) = ∅ ⇐⇒ R(b) ∩ W(a) = ∅

◮ Dependence: a D b ⇐⇒ R(a) ∩ W(b) ≠ ∅ ⇐⇒ R(b) ∩ W(a) ≠ ∅

Legal and forbidden architectures

(Figure: example architectures, two marked OK and one marked Forbidden because it is not symmetric.)
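The symmetry restriction and the derived dependence relation can be sketched in code. Assuming R and W are given as dicts from processes to sets of variables (names are illustrative, not from the slides):

```python
# Hypothetical sketch: checking the symmetry restriction on an architecture
# A = (P, V, R, W) and deriving its dependence relation D.

def is_symmetric(P, R, W):
    """Every process writes a nonempty subset of what it reads, and
    communication is two-way: a reads from b iff b reads from a."""
    for a in P:
        if not W[a] or not W[a] <= R[a]:
            return False
    for a in P:
        for b in P:
            # bool(...) tests non-emptiness of the intersection.
            if bool(R[a] & W[b]) != bool(R[b] & W[a]):
                return False
    return True

def dependence(P, R, W):
    """a D b iff R(a) intersects W(b) (equivalently, R(b) intersects W(a))."""
    return {(a, b) for a in P for b in P if R[a] & W[b]}
```

An architecture failing the two-way condition, where a never reads what b writes but b reads what a writes, is exactly the "Forbidden (not symmetric)" case of the figure.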