On the Metric-based Approximate Minimization
of Markov Chains*
Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, Radu Mardare Aalborg University
REPAS Meeting
Ljubljana, 13th June 2017
1/33
(*) accepted for publication at ICALP’17
(partition refinement wrt Myhill-Nerode equiv.)
Milner’s strong bisimulation
probabilistic bisimulation
timed & real-time transition systems.
2/33
Jou-Smolka’90 observed that behavioral equivalences are not robust for systems with real-valued data

[Figure: two Markov chains; m0 moves to m1 and m2 with probabilities 1/3 and 2/3, while n0 moves to n1, n2, n3 with probabilities 1/3+ε, 1/3, 1/3−ε. However small ε > 0 is, the two chains are no longer equivalent.]

3/33
Idea: substitute the equivalence with a distance d(m0,n0)
Closest Bounded Approximant (CBA): given an MC M and k ∈ N, find N ∈ MC(k) minimizing the distance d from M

Minimum Significant Approximant Bound (MSAB): given an MC M, minimize k such that some N ∈ MC(k) has distance d < 1 from M

4/33
[Figure: a 6-state MC over m0…m5 (with transition probabilities 1/2, 1/2, 1/2, 1/2, 1/2, 1/6, 1/3) and two candidate smaller approximants, both merging m1 and m2 into a state m12, at distance 4/9 and 1/6 respectively from the original.]

No unique solution!

(*) With respect to the undiscounted probabilistic bisimilarity distance

5/33
[Figure: a 5-state MC over m0…m4 with transition probabilities 79/100 and 21/100, and a 3-state parametric approximant over n0, n1, n2 with transition probabilities x, y, and 1 − x − y.]

Optimal parameters may be irrational!

x = (1/30)(10 + √163)    y = 21/200

Optimal distance is irrational!

δ(m0, n0) = 436/675 − (163√163)/13500 ≈ 0.49

(*) With respect to the undiscounted probabilistic bisimilarity distance

6/33
Probabilistic bisimilarity distance
— definition, characterization, complexity

Metric-based Optimal Approximate Minimization
— definition, characterization, complexity
— 2 heuristics + experimental results

7/33
[Figure: the two MCs from the Jou-Smolka example, with arrows pairing each transition of m0 with a transition of n0.]

It tries to match the behaviors “quantitatively”

8/33
Definition (W. Doeblin ’36) A coupling of a pair (μ,ν) of probability distributions is a joint probability distribution ω whose left and right marginals are μ and ν, respectively.

One can think of a coupling as a measure-theoretic relation between probability distributions

9/33
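As a quick illustration (helper names are ours, not from the slides), the two marginal conditions can be checked directly; the product distribution is a coupling that always exists:

```python
def is_coupling(omega, mu, nu, tol=1e-9):
    """Check that omega's row marginals equal mu and column marginals equal nu."""
    rows = [sum(row) for row in omega]
    cols = [sum(omega[i][j] for i in range(len(omega))) for j in range(len(omega[0]))]
    return (all(abs(r - m) < tol for r, m in zip(rows, mu)) and
            all(abs(c - n) < tol for c, n in zip(cols, nu)))

mu = [1/3, 2/3]
nu = [1/2, 1/2]
# The independent (product) coupling: omega(u, v) = mu(u) * nu(v).
product = [[m * n for n in nu] for m in mu]
assert is_coupling(product, mu, nu)
```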
[Figure: the two MCs from the Jou-Smolka example, with an optimal coupling ω of (τ(m0), τ(n0)); only the off-diagonal mass ε pays a positive cost.]

minimize ∑_{u,v∈M} ω(u,v) d(u,v)

10/33
The λ-discounted probabilistic bisimilarity pseudometric is the smallest dλ: M×M→[0,1] such that

dλ(m,n) = 1                                                      if ℓ(m) ≠ ℓ(n)
dλ(m,n) = λ · min_{ω∈Ω(τ(m),τ(n))} ∑_{u,v∈M} ω(u,v) dλ(u,v)      otherwise

Kantorovich distance: K(d)(μ,ν) = min_{ω∈Ω(μ,ν)} ∑_{u,v∈M} ω(u,v) d(u,v)

11/33
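The Kantorovich minimization has a simple closed form in the special case where d is the discrete metric (cost 1 exactly when u ≠ v): the optimal coupling keeps mass min(μ(u), ν(u)) on the diagonal, so the minimum equals the total variation distance 1 − ∑_u min(μ(u), ν(u)). A stdlib-only sketch of this special case (not the general LP of the slides):

```python
def kantorovich_discrete(mu, nu):
    """min over couplings of sum omega(u,v) * [u != v].

    With 0/1 cost, the optimal coupling keeps as much mass as possible
    on the diagonal, i.e. min(mu(u), nu(u)) at (u, u); the rest pays 1.
    """
    return 1 - sum(min(m, n) for m, n in zip(mu, nu))

# The Jou-Smolka perturbation: the distance is exactly epsilon.
eps = 0.1
mu = [1/3, 1/3, 1/3]
nu = [1/3 + eps, 1/3, 1/3 - eps]
assert abs(kantorovich_discrete(mu, nu) - eps) < 1e-9
```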
Theorem (Desharnais et al. ’99) dλ(m,n) = 0 if and only if m and n are probabilistic bisimilar

Theorem (Chen, van Breugel, Worrell ’12) The probabilistic bisimilarity distance can be computed in polynomial time

12/33
Theorem (Chen, van Breugel, Worrell ’12) For all φ ∈ LTL, | Pr(m ⊨ φ) − Pr(n ⊨ φ) | ≤ d1(m,n)

[Figure: Pr(m ⊨ φ) and Pr(n ⊨ φ) lie within d1(m,n) of each other on the [0,1] scale, so N yields an approximate solution on φ.]

…imagine that |M| ≫ |N|: then we can use N in place of M

13/33
Probabilistic bisimilarity distance
— definition, characterization, complexity

Metric-based Optimal Approximate Minimization
— definition, characterization, complexity
— 2 heuristics + experimental results

14/33
The Closest Bounded Approximant wrt dλ
Instance: An MC M, and a positive integer k
Output: An MC Ñ, with at most k states, minimizing dλ(m0,ñ0)

dλ(m0,ñ0) = inf { dλ(m0,n0) | N ∈ MC(k) }

we get a solution iff the infimum is a minimum

(a generalization of the bisimilarity quotient)

15/33
dλ(m0,ñ0) = inf { dλ(m0,n0) | N∈MC(k) } = inf { d(m0,n0) | Γλ(d)≤d, N∈MC(k) }

minimize  d_{m0,n0}
such that d_{m,n} = 1                                          if ℓ(m) ≠ α(n)
          λ ∑_{(u,v)∈M×N} c^{m,n}_{u,v} · d_{u,v} ≤ d_{m,n}    if ℓ(m) = α(n)
          ∑_{v∈N} c^{m,n}_{u,v} = τ(m)(u)                      for m, u ∈ M, n ∈ N
          ∑_{u∈M} c^{m,n}_{u,v} = θ_{n,v}                      for m ∈ M, n, v ∈ N
          c^{m,n}_{u,v} ≥ 0                                    for m, u ∈ M, n, v ∈ N

what labels should the MC N have?

16/33
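A candidate point of the program above can be checked mechanically. Below is a sketch with our own encoding (tau[m][u] and theta[n][v] for the transition functions, c[m][n][u][v] for the coupling variables); in the toy instance every state shares one label, in which case d = 0 with product couplings is feasible:

```python
def feasible(d, c, tau, theta, labels_M, labels_N, lam, tol=1e-9):
    """Check a candidate (d, c) against the bilinear constraints."""
    M, N = range(len(tau)), range(len(theta))
    for m in M:
        for n in N:
            # Marginal and non-negativity constraints on the coupling c^{m,n}.
            for u in M:
                if abs(sum(c[m][n][u][v] for v in N) - tau[m][u]) > tol:
                    return False
            for v in N:
                if abs(sum(c[m][n][u][v] for u in M) - theta[n][v]) > tol:
                    return False
            if any(c[m][n][u][v] < -tol for u in M for v in N):
                return False
            if labels_M[m] != labels_N[n]:
                if abs(d[m][n] - 1.0) > tol:       # d_{m,n} = 1 on label mismatch
                    return False
            else:
                cost = lam * sum(c[m][n][u][v] * d[u][v] for u in M for v in N)
                if cost > d[m][n] + tol:           # discounted coupling cost bound
                    return False
    return True

# Toy instance: two 2-state chains, one shared label everywhere, so the
# product couplings make d = 0 feasible (and indeed the distance is 0).
tau = [[0.5, 0.5], [1.0, 0.0]]
theta = [[0.5, 0.5], [1.0, 0.0]]
c = [[[[tau[m][u] * theta[n][v] for v in range(2)] for u in range(2)]
      for n in range(2)] for m in range(2)]
d = [[0.0, 0.0], [0.0, 0.0]]
assert feasible(d, c, tau, theta, ['a', 'a'], ['a', 'a'], 0.8)
```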
Lemma (Meaningful labels) For any N∈MC(k), there exists N’∈MC(k) with labels taken from M, such that dλ(M,N) ≥ dλ(M,N’)

minimize  d_{m0,n0}
such that λ ∑_{(u,v)∈M×N} c^{m,n}_{u,v} · d_{u,v} ≤ d_{m,n}    for m ∈ M, n ∈ N
          1 − α_{n,l} ≤ d_{m,n} ≤ 1                            for n ∈ N, l ∈ L(M), ℓ(m) ≠ l
          α_{n,l} · α_{n,l′} = 0                               for n ∈ N, l, l′ ∈ L(M), l ≠ l′
          ∑_{l∈L(M)} α_{n,l} = 1                               for n ∈ N
          ∑_{v∈N} c^{m,n}_{u,v} = τ(m)(u)                      for m, u ∈ M, n ∈ N
          ∑_{u∈M} c^{m,n}_{u,v} = θ_{n,v}                      for m ∈ M, n, v ∈ N
          c^{m,n}_{u,v} ≥ 0                                    for m, u ∈ M, n, v ∈ N

17/33
this characterization has two main consequences…

(the feasible set is a finite intersection of closed subsets, hence closed, and the infimum is attained)

18/33
“To study the complexity of an optimization problem …” (C. Papadimitriou)

Bounded Approximant threshold wrt dλ
Instance: An MC M, a positive integer k, and a rational ε>0
Output: yes iff there exists N with at most k states such that dλ(m0,n0) ≤ ε

19/33
Theorem BA-λ is in PSPACE

Proof sketch: we reduce the question ⟨M,k,ε⟩ ∈ BA-λ to checking the feasibility of a set of bilinear inequalities. This can be encoded as a decision problem in the existential theory of the reals, and thus solved in PSPACE [Canny, STOC’88].

20/33
Theorem BA-λ is NP-hard

(so CBA is unlikely to be solvable as a simple linear program)

Proof idea: we provide a reduction from VERTEX COVER. (see the appendix for a sketch of the reduction)

21/33
The Minimum Significant Approximant Bound wrt dλ
Instance: An MC M
Output: The smallest k such that dλ(m0,n0) < 1, for some N ∈ MC(k)

For λ < 1, the MSAB-λ problem is trivial, because the solution is always k = 1

For λ = 1, the same problem is surprisingly difficult…

22/33
…as before, we look at its decision variant

Significant Bounded Approximant wrt d1
Instance: An MC M and a positive integer k
Output: yes iff there exists N with at most k states such that d1(m0,n0) < 1

Theorem SBA-1 is NP-complete

23/33
Lemma Assume M is maximally collapsed. Then, ⟨M,k⟩ ∈ SBA-1 iff G(M) has a path m0…mn ending in a BSCC C such that h + |C| ≤ k, where h is the number of labels occurring in m0…mn−1.

Proof sketch (membership): compute with Tarjan’s algorithm all the SCCs of G(M). Then nondeterministically choose a BSCC and a path to it. In poly-time we can count the number of labels in the path and the size of the BSCC.

24/33
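The SCC computation in the proof sketch can be mimicked in a few lines. This is a Kosaraju-style pass (helper names are ours; the paper uses Tarjan’s algorithm, which is interchangeable here) that identifies the bottom SCCs of a digraph:

```python
def sccs(adj):
    """Kosaraju: list of SCCs (as sets of nodes) of a digraph given as a dict."""
    order, seen = [], set()

    def dfs1(u):                       # first pass: record finishing order
        seen.add(u)
        for v in adj.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)

    for u in adj:
        if u not in seen:
            dfs1(u)
    radj = {u: [] for u in adj}        # reversed graph
    for u in adj:
        for v in adj[u]:
            radj.setdefault(v, []).append(u)
    comps, seen = [], set()
    for u in reversed(order):          # second pass on the reversed graph
        if u in seen:
            continue
        comp, stack = set(), [u]
        seen.add(u)
        while stack:
            x = stack.pop()
            comp.add(x)
            for y in radj.get(x, []):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        comps.append(comp)
    return comps

def bsccs(adj):
    """Bottom SCCs: components with no edge leaving them."""
    return [c for c in sccs(adj)
            if all(v in c for u in c for v in adj.get(u, []))]

# Chain 0 -> 1 -> 2 with a self-loop at 2: the only BSCC is {2}.
assert bsccs({0: [1], 1: [2], 2: [2]}) == [{2}]
```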
Proof sketch (hardness): by reduction from VERTEX COVER: ⟨G,h⟩ ∈ VERTEX COVER iff ⟨MG, h+m+1⟩ ∈ SBA-1

[Figure: for a graph G with vertices 1, 2, 3, 4 and edges e1, e2, e3, the MC MG chains edge states e3, e2, e1 down to e0 and a sink; each edge state branches with probability 1/2 to the two endpoints of its edge.]

paths from e3 to e0 describe all vertex covers of G

25/33
Theoretically nice, but practically infeasible! (our implementation in PENBMI can handle MCs with at most 5 states…)

Sub-optimal approximants, however, can be obtained by a practical algorithm.

26/33
Idea: construct a sequence of approximants N0, N1, …, Nh ∈ MC(k) having strictly decreasing distance from M:

d0 > d1 > … > dh

27/33
Intuitive Idea: UpdateTransition assigns greater probability to transitions that are most representative of the behavior of M

Algorithm 1 Expectation Maximization
Input: M = (M, τ, ℓ), N0 = (N, θ0, α), and h ∈ N.
…
3. i ← i + 1
4. compute a coupling C ∈ Ω(M, Ni−1) attaining dλ(M, Ni−1)
5. θi ← UpdateTransition(θi−1, C)
6. Ni ← (N, θi, α)

28/33
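The slides do not spell out UpdateTransition here; the next slide describes it as averaging the marginals of coupling variables. The following is our own simplified, hypothetical rendering of such a marginal-averaging step, shown only to illustrate the shape of the update (not the paper’s actual heuristic): it renormalizes the averaged v-marginals of the couplings c[m][n][u][v]:

```python
import random

def update_transition_am(c):
    """Hypothetical marginal-averaging update: the new distribution of target
    state n is the renormalized sum, over source states m and u, of the
    v-marginals of the couplings c[m][n][u][v]."""
    nM, nN = len(c), len(c[0])
    theta = []
    for n in range(nN):
        row = [sum(c[m][n][u][v] for m in range(nM) for u in range(nM))
               for v in range(nN)]
        total = sum(row)
        theta.append([x / total for x in row])
    return theta

# Dummy coupling tensor just to exercise the shape: |M| = 3, |N| = 2.
random.seed(0)
c = [[[[random.random() for _ in range(2)] for _ in range(3)]
      for _ in range(2)] for _ in range(3)]
theta1 = update_transition_am(c)
# Each updated row is a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in theta1)
```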
Nk+1 is obtained by averaging the marginals of certain “coupling variables”, obtained by optimizing the number of occurrences of the edges that are most likely to be seen in M.

…but now Nk+1 looks only at the expected number of edge occurrences likely to be found in M.

29/33
UpdateTransition in polynomial time for both heuristics!
Case        |M|  k    λ=1: δλ-init δλ-final   #    time    λ=0.8: δλ-init δλ-final   #    time
IPv4 (AM)    23  5          0.775   0.054     3     4.8           0.576   0.025     3     4.8
             53  5          0.856   0.062     3    25.7           0.667   0.029     3    25.9
            103  5          0.923   0.067     3   116.3           0.734   0.035     3   116.5
             53  6          0.757   0.030     3    39.4           0.544   0.011     3    39.4
            103  6          0.837   0.032     3   183.7           0.624   0.017     3   182.7
            203  6            –       –       –     TO              –       –       –     TO
IPv4 (AE)    23  5          0.775   0.109     2     2.7           0.576   0.049     3     4.2
             53  5          0.856   0.110     2    14.2           0.667   0.049     3    21.8
            103  5          0.923   0.110     2    67.1           0.734   0.049     3   100.4
             53  6          0.757   0.072     2    21.8           0.544   0.019     3    33.0
            103  6          0.837   0.072     2   105.9           0.624   0.019     3   159.5
            203  6            –       –       –     TO              –       –       –     TO
DrkW (AM)    39  7          0.565   0.466    14   259.3           0.432   0.323    14   252.8
             49  7          0.568   0.460    14   453.7           0.433   0.322    14   420.5
             59  8          0.646     –       –     TO            0.423     –       –     TO
DrkW (AE)    39  7          0.565   0.435    11   156.6           0.432   0.321     2    28.6
             49  7          0.568   0.434    10   247.7           0.433   0.316     2    46.2
             59  8          0.646   0.435    10   588.9           0.423   0.309     2   115.7

Table 1. Comparison of the performance of the EM algorithm on the IPv4 zeroconf protocol and the classic Drunkard’s Walk w.r.t. the heuristics AM and AE.
30/33
Theoretical
— Metric-based state space reduction for MCs, encoded as a bilinear program
— BA-λ: PSPACE & NP-hard for all λ ∈ (0,1]
— SBA-1: NP-complete for λ = 1

Practical
— We proposed an EM-like method to compute sub-optimal approximants

31/33
(conjecture: for λ<1, BA-λ is in NP)
—beyond minimization!
32/33
Appendix: reduction from VERTEX COVER for BA-λ

⟨G,h⟩ ∈ VERTEX COVER iff ⟨MG, m+h+2, λ²/2m²⟩ ∈ BA-λ

[Figure: the MC MG built from G, with vertex states v1…v4, edge states e1…e4, a root r reaching each edge state with probability 1/m², a sink s, and residual probabilities 1/2m and 1 − (1/m).]
Averaged Marginal (AM)
[Figure: input model and successive approximants N0, N1, N2]
d0.9(M,N0) ≈ 0.67    d0.9(M,N1) ≈ 0.043    d0.9(M,N2) ≈ 0.041

Averaged Expectations (AE)
[Figure: the same input model and successive approximants N0, N1, N2]
d0.9(M,N0) ≈ 0.67    d0.9(M,N1) ≈ 0.08    d0.9(M,N2) ≈ 0.11
Averaged Marginal (AM), second input model
[Figure: input model and successive approximants N0, N1, N2]
d0.9(M,N0) ≈ 0.64    d0.9(M,N1) ≈ 0.56    d0.9(M,N2) ≈ 0.567

Averaged Expectations (AE), second input model
[Figure: input model and successive approximants N0, N1, N2, N3]
𝜀0.9(M,N0) ≈ 0.64    𝜀0.9(M,N1) ≈ 0.56    𝜀0.9(M,N2) ≈ 0.543    𝜀0.9(M,N3) ≈ 0.540