slide-1
SLIDE 1

On the Metric-based Approximate Minimization of Markov Chains*

Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, Radu Mardare Aalborg University

REPAS Meeting

Ljubljana, 13th June 2017

1/33

(*) accepted for publication at ICALP’17

slide-2
SLIDE 2

Introduction

  • Moore '56, Hopcroft '71: minimization algorithm for DFA (partition refinement wrt Myhill-Nerode equiv.)
  • Minimization via partition refinement:
      • Kanellakis-Smolka '83: minimization of LTSs wrt Milner's strong bisimulation
      • Baier '96: minimization of MCs wrt Larsen-Skou probabilistic bisimulation
      • Alur et al. '92, Yannakakis-Lee '97: minimization of timed & real-time transition systems
  • and many more…

2/33

slide-4
SLIDE 4

A fundamental problem

Jou-Smolka '90 observed that behavioral equivalences are not robust for systems with real-valued data

[Figure: an MC with states m0, m1, m2 (transition probabilities 1/3, 2/3, 1) and an ε-perturbed MC with states n0…n3 (probabilities 1/3+ε, 1/3, 1/3−ε, 1); for any ε > 0 the two are not bisimilar]

Solution: replace the equivalence with a distance d(m0,n0)

3/33

slide-5
SLIDE 5

Metric-based Approximate Minimization

Closest Bounded Approximant (CBA)
Minimum Significant Approximant Bound (MSAB)

4/33


slide-9
SLIDE 9

Metric-based Approximate Minimization

Closest Bounded Approximant (CBA): [diagram: M ∈ MC, N ∈ MC(k); minimize d(M,N)]

Minimum Significant Approximant Bound (MSAB): [diagram: M ∈ MC, N ∈ MC(k) with d(M,N) < 1; minimize k]

4/33

slide-10
SLIDE 10

CBA: Example*

[Figure: a 6-state input MC with states m0…m5 and transition probabilities 1, 1/2, 1/6, 1/3; the bound is MC(5)]

(*) With respect to the undiscounted probabilistic bisimilarity distance

5/33


slide-13
SLIDE 13

CBA: Example*

[Figure: the input MC and two different approximants with at most 5 states, obtained by merging m1 and m2 into m12, at distances 4/9 and 1/6 from the input]

No unique solution!

(*) With respect to the undiscounted probabilistic bisimilarity distance

5/33

slide-16
SLIDE 16

CBA: Example*

[Figure: a 5-state input MC with states m0…m4 and probabilities 79/100, 21/100, 1, and a parametric 3-state approximant with states n0, n1, n2 and probabilities x, y, 1−x−y]

Optimal parameters may be irrational!

x = (10 + √163)/30,  y = 21/200

Optimal distance is irrational!

δ(m0, n0) = 436/675 − 163√163/13500 ≈ 0.49

(*) With respect to the undiscounted probabilistic bisimilarity distance

6/33

slide-17
SLIDE 17

Talk Outline

Probabilistic bisimilarity distance

  • fixed point characterization (Kantorovich oper.)
  • remarkable properties
  • relation with probabilistic model checking

Metric-based Optimal Approximate Minimization

  • Closest Bounded Approximant (CBA) — definition, characterization, complexity
  • Minimum Significant Approximant Bound (MSAB) — definition, characterization, complexity
  • Expectation Maximization-like algorithm — 2 heuristics + experimental results

7/33


slide-21
SLIDE 21

Probabilistic bisimulation

[Figure: the two MCs m0…m2 and n0…n3 from before; the transition probabilities of m0 (1/3, 2/3) are matched piecewise against those of n0 (1/3, 1/3, 1/3)]

It tries to match the behaviors "quantitatively"

8/33

slide-22
SLIDE 22

Coupling

Definition (W. Doeblin '36). A coupling of a pair (μ,ν) of probability distributions on M is a distribution ω on M×M such that
  • ∑n∈M ω(m,n) = μ(m) (left marginal)
  • ∑m∈M ω(m,n) = ν(n) (right marginal).

One can think of a coupling as a measure-theoretic relation between probability distributions.

9/33
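The two marginal conditions are easy to check mechanically. Below is a minimal sketch (state names and the ε-perturbation are illustrative, echoing the earlier example) that verifies Doeblin's conditions for a concrete coupling, and shows that under the discrete ground distance the cost of the best coupling is the total-variation distance:

```python
from fractions import Fraction as F

def marginals_ok(omega, mu, nu):
    """Check the two coupling conditions from Doeblin's definition."""
    states = set(mu) | set(nu)
    left  = {m: sum(omega.get((m, n), F(0)) for n in states) for m in states}
    right = {n: sum(omega.get((m, n), F(0)) for m in states) for n in states}
    return all(left[m] == mu.get(m, F(0)) for m in states) and \
           all(right[n] == nu.get(n, F(0)) for n in states)

# Slightly perturbed distributions, as in the ε-example from the slides.
eps = F(1, 100)
mu = {"a": F(1, 3), "b": F(2, 3)}
nu = {"a": F(1, 3) + eps, "b": F(2, 3) - eps}

# A coupling that keeps as much mass as possible on the diagonal.
omega = {
    ("a", "a"): F(1, 3),
    ("b", "b"): F(2, 3) - eps,
    ("b", "a"): eps,          # the mismatched mass
}
assert marginals_ok(omega, mu, nu)

# Under the discrete ground distance (0 iff u == v) the cost of this
# coupling is exactly the off-diagonal mass...
cost = sum(w for (u, v), w in omega.items() if u != v)

# ...which matches the total-variation distance, the minimum over all
# couplings for the discrete ground distance.
tv = 1 - sum(min(mu.get(s, F(0)), nu.get(s, F(0))) for s in set(mu) | set(nu))
assert cost == tv == eps
```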

slide-23
SLIDE 23

A quantitative generalization

[Figure: a coupling of the transition distributions of m0 and n0; the mismatched mass is ε]

minimize ∑u,v∈M ω(u,v) d(u,v)

10/33

slide-25
SLIDE 25

A quantitative generalization of probabilistic bisimilarity

The λ-discounted probabilistic bisimilarity pseudometric is the smallest dλ: M×M→[0,1] such that

dλ(m,n) = 1, if ℓ(m) ≠ ℓ(n)
dλ(m,n) = min{ λ ∑u,v∈M ω(u,v) dλ(u,v) : ω ∈ Ω(τ(m),τ(n)) }, otherwise

The minimum above is a Kantorovich distance: K(d)(μ,ν) = min{ ∑u,v∈M ω(u,v) d(u,v) : ω ∈ Ω(μ,ν) }

11/33
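The fixed-point characterization suggests a simple way to compute dλ on small chains: Kleene iteration of the Kantorovich operator starting from the all-zero function. The sketch below is only a special case (state names are illustrative, and the closed-form coupling minimization is valid only because every support here has size at most 2, so the coupling polytope is a segment); it recovers dλ(s0,t0) = λ·ε for two states whose successor distributions differ by an ε-perturbation:

```python
from fractions import Fraction as F

def kanto(mu, nu, d):
    """Kantorovich distance for distributions whose supports have size
    at most 2: the coupling polytope is then a segment, so the linear
    cost is minimized at one of its two endpoints."""
    su, sv = sorted(mu), sorted(nu)
    if len(su) == 1:
        return sum(q * d[su[0], v] for v, q in nu.items())
    if len(sv) == 1:
        return sum(p * d[u, sv[0]] for u, p in mu.items())
    (u1, u2), (v1, v2) = su, sv
    p, q = mu[u1], nu[v1]
    def cost(t):  # cost of the coupling with omega(u1,v1) = t
        return (t * d[u1, v1] + (p - t) * d[u1, v2]
                + (q - t) * d[u2, v1] + (1 - p - q + t) * d[u2, v2])
    return min(cost(max(F(0), p + q - 1)), cost(min(p, q)))

def bisim_dist(states, label, succ, lam, rounds=10):
    """Iterate the fixed-point characterization from below; converges
    geometrically for lam < 1."""
    d = {(m, n): F(0) for m in states for n in states}
    for _ in range(rounds):
        d = {(m, n): F(1) if label[m] != label[n]
             else lam * kanto(succ[m], succ[n], d)
             for m in states for n in states}
    return d

# s0 and t0 carry the same label and move to absorbing states s1
# (label A) and s2 (label B) with probabilities differing by eps.
eps, lam = F(1, 100), F(9, 10)
states = ["s0", "t0", "s1", "s2"]
label = {"s0": "go", "t0": "go", "s1": "A", "s2": "B"}
succ = {"s0": {"s1": F(1, 2), "s2": F(1, 2)},
        "t0": {"s1": F(1, 2) + eps, "s2": F(1, 2) - eps},
        "s1": {"s1": F(1)}, "s2": {"s2": F(1)}}
d = bisim_dist(states, label, succ, lam, rounds=5)
assert d["s0", "t0"] == lam * eps   # exactly 9/1000
```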

slide-26
SLIDE 26

Remarkable properties

Theorem (Desharnais et al. '99). m ~ n iff dλ(m,n) = 0

Theorem (Chen, van Breugel, Worrell '12). The probabilistic bisimilarity distance can be computed in polynomial time

12/33

slide-28
SLIDE 28

Relation with Model Checking

Theorem (Chen, van Breugel, Worrell '12). For all φ ∈ LTL, |Pr(m ⊨ φ) − Pr(n ⊨ φ)| ≤ d1(m,n)

[Figure: Pr(m ⊨ φ) and Pr(n ⊨ φ) lie within d1(m,n) of each other on [0,1], so checking φ on N gives an approximate answer for M]

…imagine that |M| ≫ |N|: we can use N in place of M

13/33
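The bound can be seen concretely on the ε-perturbed chains used earlier. A minimal sketch (illustrative names; the value of d1 is the hand-computed optimal-coupling cost ε, stated as an assumption rather than derived in code) computes the two reachability probabilities by value iteration and checks the inequality, which here is tight:

```python
from fractions import Fraction as F

def reach_prob(succ, label, goal, rounds=10):
    """Value iteration for p(s) = Pr(s ⊨ ◇ goal): p is 1 on goal-labelled
    states, otherwise the expectation of p over the successors."""
    p = {s: F(1) if label[s] == goal else F(0) for s in succ}
    for _ in range(rounds):
        p = {s: F(1) if label[s] == goal
             else sum(q * p[u] for u, q in succ[s].items())
             for s in succ}
    return p

eps = F(1, 100)
label = {"s0": "go", "t0": "go", "s1": "A", "s2": "B"}
succ = {"s0": {"s1": F(1, 2), "s2": F(1, 2)},
        "t0": {"s1": F(1, 2) + eps, "s2": F(1, 2) - eps},
        "s1": {"s1": F(1)}, "s2": {"s2": F(1)}}

p = reach_prob(succ, label, goal="A")

# For this pair of chains the optimal coupling misplaces exactly eps of
# the transition mass, so d1(s0, t0) = eps (hand-computed assumption).
d1 = eps

# The theorem's bound for the LTL formula ◇A, tight in this example.
assert abs(p["s0"] - p["t0"]) <= d1
```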

slide-29
SLIDE 29

Talk Outline

Probabilistic bisimilarity distance

  • fixed point characterization (Kantorovich oper.)
  • remarkable properties
  • relation with probabilistic model checking

Metric-based Optimal Approximate Minimization

  • Closest Bounded Approximant (CBA) — definition, characterization, complexity
  • Minimum Significant Approximant Bound (MSAB) — definition, characterization, complexity
  • Expectation Maximization-like algorithm — 2 heuristics + experimental results

14/33

slide-31
SLIDE 31

The CBA-λ problem

The Closest Bounded Approximant wrt dλ
Instance: an MC M and a positive integer k
Output: an MC Ñ with at most k states minimizing dλ(m0,ñ0)

dλ(m0,ñ0) = inf { dλ(m0,n0) | N ∈ MC(k) }

we get a solution iff the infimum is a minimum

(a generalization of the bisimilarity quotient)

15/33

slide-35
SLIDE 35

CBA-λ as a Bilinear Program

dλ(m0,ñ0) = inf { dλ(m0,n0) | N∈MC(k) } = inf { d(m0,n0) | Γλ(d) ≤ d, N∈MC(k) }

minimize d_{m0,n0}
such that
  d_{m,n} = 1, for ℓ(m) ≠ α(n)
  λ ∑_{(u,v)∈M×N} c^{m,n}_{u,v} · d_{u,v} ≤ d_{m,n}, for ℓ(m) = α(n)
  ∑_{v∈N} c^{m,n}_{u,v} = τ(m)(u), for m, u ∈ M, n ∈ N
  ∑_{u∈M} c^{m,n}_{u,v} = θ_{n,v}, for m ∈ M, n, v ∈ N
  c^{m,n}_{u,v} ≥ 0, for m, u ∈ M, n, v ∈ N

what labels should the MC N have?

16/33

slide-37
SLIDE 37

CBA-λ as a Bilinear Program

Lemma (Meaningful labels). For any N∈MC(k), there exists N'∈MC(k) with labels taken from M, such that dλ(M,N) ≥ dλ(M,N')

minimize d_{m0,n0}
such that
  λ ∑_{(u,v)∈M×N} c^{m,n}_{u,v} · d_{u,v} ≤ d_{m,n}, for m ∈ M, n ∈ N
  1 − α_{n,l} ≤ d_{m,n} ≤ 1, for n ∈ N, l ∈ L(M), ℓ(m) ≠ l
  α_{n,l} · α_{n,l′} = 0, for n ∈ N, l, l′ ∈ L(M), l ≠ l′
  ∑_{l∈L(M)} α_{n,l} = 1, for n ∈ N
  ∑_{v∈N} c^{m,n}_{u,v} = τ(m)(u), for m, u ∈ M, n ∈ N
  ∑_{u∈M} c^{m,n}_{u,v} = θ_{n,v}, for m ∈ M, n, v ∈ N
  c^{m,n}_{u,v} ≥ 0, for m, u ∈ M, n, v ∈ N

17/33


slide-39
SLIDE 39

CBA-λ as a Bilinear Program

this characterization has two main consequences…

  1. CBA-λ always admits a solution (the feasible set is a finite intersection of closed subsets)
  2. CBA-λ can be approximated up to any precision

18/33

slide-41
SLIDE 41

Complexity of CBA-λ

"To study the complexity of an optimization problem one has to look at its decision variant" (C. Papadimitriou)

Bounded Approximant threshold wrt dλ
Instance: an MC M, a positive integer k, and a rational ε > 0
Output: yes iff there exists N with at most k states such that dλ(m0,n0) ≤ ε

19/33

slide-42
SLIDE 42

Complexity upper bound

Theorem. BA-λ is in PSPACE.

Proof sketch: the question ⟨M,k,ε⟩ ∈ BA-λ reduces to checking the feasibility of a set of bilinear inequalities. This can be phrased as a decision problem for the existential theory of the reals, and thus solved in PSPACE [Canny, STOC'88].

20/33

slide-44
SLIDE 44

Complexity lower bound

Theorem. BA-λ is NP-hard (so CBA is unlikely to be solvable as a simple linear program).

Proof idea: we provide a reduction from VERTEX COVER. (see the appendix for a sketch of the reduction)

21/33

slide-47
SLIDE 47

The MSAB-λ problem

The Minimum Significant Approximant Bound wrt dλ
Instance: an MC M
Output: the smallest k such that dλ(m0,n0) < 1, for some N ∈ MC(k)

For λ < 1, the MSAB-λ problem is trivial: the solution is always k = 1.
For λ = 1, the same problem is surprisingly difficult…

22/33

slide-50
SLIDE 50

Complexity of MSAB-1

…as before, we look at its decision variant:

Significant Bounded Approximant wrt d1
Instance: an MC M and a positive integer k
Output: yes iff there exists N with at most k states such that d1(m0,n0) < 1

Theorem. SBA-1 is NP-complete.

23/33

slide-52
SLIDE 52

SBA-1 is in NP

Lemma. Assume M is maximally collapsed. Then ⟨M,k⟩ ∈ SBA-1 iff G(M) has a path m0…mn reaching a BSCC C such that h + |C| ≤ k, where h is the number of labels occurring in m0…mn−1.

Proof sketch: compute with Tarjan's algorithm all the SCCs of G(M). Then nondeterministically choose a BSCC and a path to it. In polynomial time we can count the number of labels in the path and the size of the BSCC.

24/33
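The certificate described in the proof sketch can be checked with standard graph machinery. The sketch below (illustrative toy graph; the minimization over paths is brute-force, which matches the guess-and-check reading of NP membership but is exponential in general) computes the SCCs with Tarjan's algorithm, keeps the bottom ones, and minimizes h + |C| over simple paths from m0:

```python
from itertools import count

def sccs(graph):
    """Tarjan's algorithm, returning the list of SCCs of a digraph
    given as {node: [successors]}."""
    index, low, stack, on, out, c = {}, {}, [], set(), [], count()
    def strong(v):
        index[v] = low[v] = next(c)
        stack.append(v); on.add(v)
        for w in graph[v]:
            if w not in index:
                strong(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on.discard(w); comp.add(w)
                if w == v: break
            out.append(comp)
    for v in graph:
        if v not in index: strong(v)
    return out

def bsccs(graph):
    """Bottom SCCs: components with no edge leaving them."""
    return [c for c in sccs(graph)
            if all(w in c for v in c for w in graph[v])]

def min_significant_bound(graph, labels, m0):
    """Minimize (#labels along m0…m(n-1)) + |C| over simple paths
    from m0 ending in a BSCC C, as in the lemma."""
    bottom, best = bsccs(graph), [None]
    def dfs(path):
        v = path[-1]
        for comp in bottom:
            if v in comp:
                cost = len({labels[u] for u in path[:-1]}) + len(comp)
                if best[0] is None or cost < best[0]:
                    best[0] = cost
        for w in graph[v]:
            if w not in path:
                dfs(path + [w])
    dfs([m0])
    return best[0]

# Toy instance: m0 branches into two absorbing states with fresh labels.
graph = {"m0": ["a", "b"], "a": ["a"], "b": ["b"]}
labels = {"m0": "x", "a": "y", "b": "z"}
assert min_significant_bound(graph, labels, "m0") == 2
```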

slide-54
SLIDE 54

SBA-1 is NP-hard

Proof sketch: by reduction from VERTEX COVER: ⟨G,h⟩ ∈ VERTEX COVER iff ⟨MG, h+m+1⟩ ∈ SBA-1

[Figure: gadget MC MG for a graph G with vertices 1…4 and edges e1, e2, e3: a chain of edge states e3, e2, e1, e0 branching with probability 1/2 to the endpoint vertices, ending in a sink]

paths from e3 to e0 describe all vertex covers of G

25/33


slide-57
SLIDE 57

Towards an Algorithm…

  • CBA can be solved as a bilinear program. Theoretically nice, but practically infeasible! (our implementation in PENBMI can handle MCs with at most 5 states…)
  • We are happy with sub-optimal solutions if they can be obtained by a practical algorithm.

26/33

slide-58
SLIDE 58

EM-like Algorithm

  • Given the MC M and an initial approximant N0,
  • it produces a sequence N0, …, Nh of approximants having strictly decreasing distance from M;
  • Nh may be a sub-optimal solution of CBA-λ.

[Diagram: N0, N1, …, Nh ∈ MC(k) moving towards M, with d0 > d1 > … > dh]

27/33

slide-60
SLIDE 60

EM-like Algorithm

Intuitive Idea: UpdateTransition assigns greater probability to the transitions that are most representative of the behavior of M

Algorithm 1 Expectation Maximization
Input: M = (M, τ, ℓ), N0 = (N, θ0, α), and h ∈ ℕ
1. i ← 0
2. repeat
3.   i ← i + 1
4.   compute an optimal coupling C ∈ Ω(M, Ni−1) attaining dλ(M, Ni−1)
5.   θi ← UpdateTransition(θi−1, C)
6.   Ni ← (N, θi, α)
7. until dλ(M, Ni) > dλ(M, Ni−1) or i ≥ h
8. return Ni−1

28/33
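Stripped of the AM/AE specifics, Algorithm 1 is a generic "improve while it helps" loop. The sketch below captures only that control flow; the `distance` and `update` callables are stand-ins for computing dλ and for UpdateTransition (the toy instance below is illustrative, not the paper's heuristics):

```python
def em_like(distance, update, theta0, h):
    """Control-flow skeleton of Algorithm 1: keep updating the
    parameters while the distance strictly decreases, for at most h
    rounds; on a non-improving step, return the previous iterate."""
    theta, d = theta0, distance(theta0)
    for _ in range(h):
        theta2 = update(theta)        # stands in for UpdateTransition
        d2 = distance(theta2)         # stands in for computing dλ
        if d2 >= d:                   # no strict improvement: stop
            return theta, d
        theta, d = theta2, d2
    return theta, d

# Toy instance: "distance" is how far a single transition probability
# is from a target value unknown to the loop itself.
target = 0.9
dist = lambda t: abs(t - target)
upd = lambda t: t + 0.5 * (target - t)    # move halfway toward target
theta, d = em_like(dist, upd, theta0=0.5, h=20)
assert d < 1e-3
```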

slide-62
SLIDE 62

Two update heuristics

  • Averaged Marginal (AM): given Nk we construct Nk+1 by averaging the marginals of certain "coupling variables", obtained by optimizing the number of occurrences of the edges that are most likely to be seen in M.
  • Averaged Expectations (AE): similar to the above, but Nk+1 looks only at the expectation of the number of occurrences of the edges likely to be found in M.

UpdateTransition runs in polynomial time for both heuristics!

29/33

slide-63
SLIDE 63

Case        |M|   k |  λ = 1:  δλ-init  δλ-final   #    time |  λ = 0.8: δλ-init  δλ-final   #    time
IPv4 (AM)    23   5 |          0.775    0.054      3     4.8 |           0.576    0.025      3     4.8
             53   5 |          0.856    0.062      3    25.7 |           0.667    0.029      3    25.9
            103   5 |          0.923    0.067      3   116.3 |           0.734    0.035      3   116.5
             53   6 |          0.757    0.030      3    39.4 |           0.544    0.011      3    39.4
            103   6 |          0.837    0.032      3   183.7 |           0.624    0.017      3   182.7
            203   6 |          –        –          –     TO  |           –        –          –     TO
IPv4 (AE)    23   5 |          0.775    0.109      2     2.7 |           0.576    0.049      3     4.2
             53   5 |          0.856    0.110      2    14.2 |           0.667    0.049      3    21.8
            103   5 |          0.923    0.110      2    67.1 |           0.734    0.049      3   100.4
             53   6 |          0.757    0.072      2    21.8 |           0.544    0.019      3    33.0
            103   6 |          0.837    0.072      2   105.9 |           0.624    0.019      3   159.5
            203   6 |          –        –          –     TO  |           –        –          –     TO
DrkW (AM)    39   7 |          0.565    0.466     14   259.3 |           0.432    0.323     14   252.8
             49   7 |          0.568    0.460     14   453.7 |           0.433    0.322     14   420.5
             59   8 |          0.646    –          –     TO  |           0.423    –          –     TO
DrkW (AE)    39   7 |          0.565    0.435     11   156.6 |           0.432    0.321      2    28.6
             49   7 |          0.568    0.434     10   247.7 |           0.433    0.316      2    46.2
             59   8 |          0.646    0.435     10   588.9 |           0.423    0.309      2   115.7

Table 1. Comparison of the performance of the EM algorithm on the IPv4 zeroconf protocol and the classic Drunkard's Walk w.r.t. the heuristics AM and AE.

30/33

slide-64
SLIDE 64

What we have seen

Metric-based state space reduction for MCs

Theoretical:
  1. Closest Bounded Approximant (CBA): encoded as a bilinear program
  2. Bounded Approximant (BA): in PSPACE & NP-hard for all λ ∈ (0,1]
  3. Significant Bounded Approximant (SBA): NP-complete for λ = 1

Practical: we proposed an EM-like method to obtain sub-optimal approximants

31/33

slide-65
SLIDE 65

Future Work

  • Is BA-λ SUM-OF-SQUARE-ROOTS-hard? (conjecture: for λ < 1, BA-λ is in NP)
  • Can we obtain real/better EM heuristics?
  • What about different models/distances?
  • What about different constraints? — beyond minimization!

32/33

slide-66
SLIDE 66

Thank you for your attention

slide-67
SLIDE 67

Appendix

slide-68
SLIDE 68

BA-λ is NP-hard

⟨G,h⟩ ∈ VERTEX COVER iff ⟨MG, m+h+2, λ²/2m²⟩ ∈ BA-λ

[Figure: gadget MC MG with vertex states v1…v4, edge states e1…e4, a root r and a sink s; transition probabilities 1/m², 1/2m and 1−(1/m)]

slide-69
SLIDE 69

EM-like algorithm (experimental results)

slide-73
SLIDE 73

IPv4 Zero Conf Protocol, Averaged Marginal (AM)

[Figures: the input model and the successive approximants N0, N1, N2 produced by the AM heuristic]

d0.9(M,N0) ≈ 0.67
d0.9(M,N1) ≈ 0.043
d0.9(M,N2) ≈ 0.041

slide-76
SLIDE 76

IPv4 Zero Conf Protocol, Averaged Expectations (AE)

[Figures: the input model and the approximants N0, N1, N2 produced by the AE heuristic]

d0.9(M,N0) ≈ 0.67
d0.9(M,N1) ≈ 0.08
d0.9(M,N2) ≈ 0.11

(the distance increases from N1 to N2, so the algorithm stops and returns N1)

slide-80
SLIDE 80

Drunkard's Walk, Averaged Marginal (AM)

[Figures: the input model and the approximants N0, N1, N2 produced by the AM heuristic]

d0.9(M,N0) ≈ 0.64
d0.9(M,N1) ≈ 0.56
d0.9(M,N2) ≈ 0.567

slide-84
SLIDE 84

Drunkard's Walk, Averaged Expectations (AE)

[Figures: the input model and the approximants N0, N1, N2, N3 produced by the AE heuristic]

d0.9(M,N0) ≈ 0.64
d0.9(M,N1) ≈ 0.56
d0.9(M,N2) ≈ 0.543
d0.9(M,N3) ≈ 0.540