

  1. Cooperating stochastic automata: approximate lumping and reversed process
Simonetta Balsamo, Gian-Luca Dei Rossi, Andrea Marin
Dipartimento di Scienze Ambientali, Informatica e Statistica, Università Ca' Foscari, Venezia
ISCIS '12, Paris, 3-4 October 2012

  2. Context: Cooperating stochastic models
• Models with underlying Continuous Time Markov Chain (CTMC)
• Exploitation of compositionality in model definition
• Each component is specified in isolation
• The semantics of cooperation is defined so that the joint model can be algorithmically derived
• The stochastic automata considered here synchronise on the active/passive semantics:
  • Performance Evaluation Process Algebra (PEPA) active/passive synchronisation
  • Buchholz's Communicating Markov Processes
  • Plateau's stochastic automata networks (SAN) with master/slave cooperation
  • ...

  3. Motivation
• In general, the cardinality of the joint model's state space grows exponentially with the number of components
• Steady-state analysis quickly becomes unfeasible:
  • Space cost
  • Time cost
  • Numerical stability issues
• Workarounds:
  • Approximate analysis (e.g. fluid)
  • Exploitation of the geometry of the state space
  • Product-form decomposition
  • Lumping
  • Approximate lumping

  4. Previous work: Lumping on cooperating automata
Definition (Lumping condition). Given an active automaton $M_1$, a set of labels $T$, and a partition of the states of $M_1$ into $N_1$ clusters $\mathcal{C} = \{C_1, C_2, \ldots, C_{N_1}\}$, we say that $\mathcal{C}$ is an exact lumping for $M_1$ if:

$$\forall C_i, C_j,\ C_i \neq C_j,\ \forall s_1 \in C_i:\quad \sum_{s'_1 \in C_j} q_1(s_1 \to s'_1) = \tilde{q}_1(C_i \to C_j) \qquad \text{(non-synchronising transitions)}$$

$$\forall t \in T,\ \forall C_i, C_j,\ \forall s_1 \in C_i:\quad \sum_{s'_1 \in C_j} q^t_1(s_1 \to s'_1) = \tilde{q}^t_1(C_i \to C_j)$$

where $\varphi^t_1(s_1, \tilde{s}'_1) = \sum_{s'_1 \in \tilde{s}'_1} q^t_1(s_1, s'_1)$.
• Reduce the complexity of the GBEs' solution through component-wise lumping
• If both automata have a state-space of cardinality M, the time cost reduces from O((M·M)^3) to O((N·M)^3), where N is the number of clusters in the lumping
• Intuition: for each synchronising label, the original and lumped automata must behave (in steady-state) equivalently
• We treat non-synchronising transitions as a special case
• The conditions are stronger than the ones for regular lumpability and weaker than those for PEPA strong equivalence (a checking sketch follows below)
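The condition above lends itself to a mechanical check. The sketch below is a minimal illustration, assuming the rates of $M_1$ are given as dense numpy matrices (one matrix for the non-synchronising rates, one per synchronising label); the function names and data layout are choices of this write-up, not of the paper.

```python
import numpy as np

def aggregate_flux(Q, s, cluster):
    """phi(s, C): total rate from state s into the states of cluster C."""
    return sum(Q[s, s2] for s2 in cluster)

def is_exact_lumping(Q_local, Q_sync, clusters, tol=1e-9):
    """Check the lumping condition on a candidate partition.

    Q_local : rate matrix of the non-synchronising transitions
    Q_sync  : dict mapping each synchronising label t to its rate matrix q^t
    clusters: list of lists of state indices
    """
    for i, Ci in enumerate(clusters):
        for j, Cj in enumerate(clusters):
            # non-synchronising rates: only distinct clusters are constrained
            if i != j:
                fluxes = [aggregate_flux(Q_local, s, Cj) for s in Ci]
                if max(fluxes) - min(fluxes) > tol:
                    return False
            # synchronising labels: every pair of clusters is constrained
            for Qt in Q_sync.values():
                fluxes = [aggregate_flux(Qt, s, Cj) for s in Ci]
                if max(fluxes) - min(fluxes) > tol:
                    return False
    return True
```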

  5. Example
[Diagram: a three-state active automaton (states s_0, s_1, s_2) with synchronising transitions labelled a at rates λ_1, λ_2, λ_3 and local rates µ_1, µ_2, µ_3, shown together with its lumped version, in which the states are grouped into two clusters C_0 and C_1 (the lumped transitions include a, 3/4·λ_2 and a, 1/4·λ_2).]

  6. Marginal distribution
Theorem. Let $M_1$ and $M_2$ be two cooperating automata, where $M_2$ is passive and $M_1$ active. If:
• $M_2$ never blocks $M_1$
• $\tilde{M}_1$ is a lumped automaton of $M_1$
then the marginal steady-state distributions of $M_2$ in the cooperations $M_1 \otimes M_2$ and $\tilde{M}_1 \otimes M_2$ are the same.
Note that ergodicity is assumed and that the state-space of the joint process is the Cartesian product of the single automata's state-spaces.
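To make the statement concrete: in a finite cooperation, the marginal of $M_2$ is obtained by summing the joint steady-state probabilities over the states of the first component. The sketch below is a hedged illustration with hypothetical helper names, assuming the joint generator is a dense numpy matrix and that joint states are ordered as pairs (s1, s2); by the theorem, replacing $M_1$ with $\tilde{M}_1$ changes the joint vector but not the marginal returned for $M_2$.

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for an ergodic CTMC generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])       # balance equations plus normalisation
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def marginal_of_second(pi_joint, n1, n2):
    """Marginal of M2 when the joint states are ordered so that
    joint index = s1 * n2 + s2 (Cartesian-product state space)."""
    return pi_joint.reshape(n1, n2).sum(axis=0)
```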

  7. A trivial example
[Diagram: two queue-like automata. $P_1$ has states 0, 1, 2, ... with synchronising transitions a, λ and local transitions at rate µ_1; $P_2$ has the same structure with passive transitions a, ⊤ and local rate µ_2. The lumped automaton $\tilde{P}_1$ is a single cluster C_0 with a self-loop a, λ.]
The marginal distributions are
$$\pi_1(n) = \left(1 - \frac{\lambda}{\mu_1}\right)\left(\frac{\lambda}{\mu_1}\right)^n \qquad \pi_2(n) = \left(1 - \frac{\lambda}{\mu_2}\right)\left(\frac{\lambda}{\mu_2}\right)^n$$
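As a quick numerical sanity check of these formulas (an illustration of this write-up, assuming each $P_i$ behaves as a birth-death chain with arrival rate λ and service rate µ_i, which is exactly what the geometric expressions describe; the truncation level is arbitrary):

```python
import numpy as np

def birth_death_marginal(lam, mu, levels=200):
    """Steady-state of a birth-death chain truncated at `levels` states:
    births at rate lam, deaths at rate mu (the truncation error is
    negligible far in the tail when lam < mu)."""
    Q = np.zeros((levels, levels))
    for n in range(levels - 1):
        Q[n, n + 1] = lam      # birth / synchronising arrival
        Q[n + 1, n] = mu       # death / local service
    np.fill_diagonal(Q, -Q.sum(axis=1))
    A = np.vstack([Q.T, np.ones(levels)])
    b = np.zeros(levels + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu1 = 1.0, 3.0
pi = birth_death_marginal(lam, mu1)
rho = lam / mu1
print(pi[:4])                                    # numerical marginal
print([(1 - rho) * rho ** n for n in range(4)])  # (1 - λ/µ1)(λ/µ1)^n
```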

  8. Another trivial one
[Diagram: automaton $Q_1$ with states 0, 1, 2, arrivals at rate λ and state-dependent synchronising services a, µ_1(1), a, µ_1(2), a, µ_1(3); its reversed automaton $\tilde{Q}_1^R$, which lumps into a single cluster C_1 with a self-loop a, λ; and automaton $Q_2$ with states 0, 1, 2, passive transitions a, ⊤ and local rates µ_2.]

  9. Reversed lumping and product-forms
• Both previous examples allowed for a lumping into a single cluster
• The first is derived from the forward automaton
• The second is derived from the reversed automaton
• In both cases we obtain the marginal distribution, but in the latter we also have product-form!
• Product-form ⇒ the joint distribution is the product of the marginal ones
Corollary (Product-forms). A synchronisation is in product-form if the reversed active automaton can be lumped into a single state.
Note that, in general, the marginal steady-state distribution of $M_2$ in $\tilde{M}_1^R \otimes M_2$ is approximately equal to the one in $\tilde{M}_1 \otimes M_2$, and the two coincide in product-form models.
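The corollary asks for a lumping of the reversed active automaton. As a hedged sketch (the standard CTMC reversal construction, not code from the paper), the reversed generator can be built from the forward generator and its steady-state vector via $q^R(j, i) = \pi(i)\, q(i, j) / \pi(j)$, and the lumping check of slide 4 can then be applied to it:

```python
import numpy as np

def reversed_generator(Q, pi):
    """Generator of the time-reversed CTMC: q_R(j, i) = pi(i) q(i, j) / pi(j),
    equivalently diag(pi)^-1 @ Q.T @ diag(pi)."""
    return np.diag(1.0 / pi) @ Q.T @ np.diag(pi)

# e.g. is_exact_lumping(...) from the earlier sketch can be run on
# reversed_generator(Q1, pi1) to look for a single-cluster lumping.
```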

  10. Approximation of the marginal SSD through aggregation
• With our theorem we can reduce the cost of computing the marginal steady-state distribution of a cooperating automaton, provided we are able to find an exact lumping of the other one
• What if this is not feasible, or not even possible?
• We could try to find an approximate lumping
• This can also be applied to the reversed process
• How do we evaluate the quality of an approximation?
• How can we adapt clustering algorithms to use our definition of (approximate) exact lumping?

  11. Evaluating the quality of an approximate lumping
How close is an arbitrary state partition W to an exact lumping?
• We measure the coefficient of variation of the outgoing fluxes $\phi^t_1(s_1)$ of the states in $\tilde{s}_1$
• We further refine that measurement

  12. ε-error
Definition (ε-error). Given model $M_1$ and a partition of states $W = \{\tilde{s}_1, \ldots, \tilde{s}_{\tilde{N}_1}\}$, for all $\tilde{s}_1 \in W$ and $t > 2$, we define:

$$\tilde{\phi}^t_1(\tilde{s}_1) = \frac{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)\, \phi^t_1(s_1)}{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)}$$

$$\epsilon^t(\tilde{s}_1) = 1 - \exp\left(-\sqrt{\frac{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)\,\bigl(\phi^t_1(s_1) - \tilde{\phi}^t_1(\tilde{s}_1)\bigr)^2}{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)}}\right)$$

where $\phi^t_1(s_1) = \sum_{s'_1 = 1}^{N_1} q^t_1(s_1, s'_1)$. (A small computational sketch follows.)
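One possible direct transcription of the ε-error into code (illustrative names and a dense numpy layout are assumptions of this write-up):

```python
import numpy as np

def epsilon_error(pi, Q_t, cluster):
    """epsilon-error of one cluster for one synchronising label t.

    pi      : steady-state probabilities of M1
    Q_t     : rate matrix q^t of label t
    cluster : list of state indices forming the cluster s~_1
    """
    phi = Q_t[cluster, :].sum(axis=1)            # outgoing t-flux of each state
    w = pi[cluster] / pi[cluster].sum()          # conditional weights inside the cluster
    phi_bar = np.dot(w, phi)                     # weighted mean flux of the cluster
    sigma = np.sqrt(np.dot(w, (phi - phi_bar) ** 2))  # weighted standard deviation
    return 1.0 - np.exp(-sigma)
```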

  13. δ-error
Definition (δ-error). Given model $M_1$ and a partition of states $W = \{\tilde{s}_1, \ldots, \tilde{s}_{\tilde{N}_1}\}$, for all $\tilde{s}_1, \tilde{s}'_1 \in W$, we define:

$$\tilde{\varphi}^t_1(\tilde{s}_1, \tilde{s}'_1) = \begin{cases} 0 & \text{if } \tilde{s}_1 = \tilde{s}'_1 \wedge t = 1 \\[4pt] \dfrac{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)\, \varphi^t_1(s_1, \tilde{s}'_1)}{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)} & \text{otherwise} \end{cases}$$

$$\sigma^t(\tilde{s}_1, \tilde{s}'_1)^2 = \frac{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)\,\bigl(\varphi^t_1(s_1, \tilde{s}'_1) - \tilde{\varphi}^t_1(\tilde{s}_1, \tilde{s}'_1)\bigr)^2}{\sum_{s_1 \in \tilde{s}_1} \pi_1(s_1)}$$

$$\delta^t(\tilde{s}_1, \tilde{s}'_1) = 1 - e^{-\sigma^t(\tilde{s}_1, \tilde{s}'_1)}$$

where the function $\varphi^t_1$ has been defined in the lumping conditions. (A computational sketch follows.)
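Similarly, a sketch of the δ-error between two clusters (again with illustrative names; the within-cluster case for the local type t = 1 is simply skipped here, which is one reading of the zero entry in the definition):

```python
import numpy as np

def delta_error(pi, Q_t, cluster_i, cluster_j, t_is_local=False):
    """delta-error from cluster_i to cluster_j for one transition type t.

    pi         : steady-state probabilities of M1
    Q_t        : rate matrix of transition type t
    t_is_local : True for the non-synchronising type (t = 1 on the slides)
    """
    if t_is_local and cluster_i == cluster_j:
        return 0.0                                       # excluded case of the definition
    phi = Q_t[np.ix_(cluster_i, cluster_j)].sum(axis=1)  # flux of each state of i into j
    w = pi[cluster_i] / pi[cluster_i].sum()
    phi_bar = np.dot(w, phi)                             # pi-weighted aggregated flux
    sigma = np.sqrt(np.dot(w, (phi - phi_bar) ** 2))
    return 1.0 - np.exp(-sigma)
```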

  14. An ideal algorithm
Definition (Ideal algorithm).
• Input: automata $M_1$, $M_2$, labels $T$, tolerances $\epsilon \geq 0$, $\delta \geq 0$
• Output: marginal distribution $\pi_1$ of $M_1$; approximated marginal distribution of $M_2$
1. Find the minimum $\tilde{N}'_1$ such that there exists a partition $W = \{\tilde{s}_1, \ldots, \tilde{s}_{\tilde{N}'_1}\}$ of the states of $M_1$ with $\epsilon^t(\tilde{s}_1) \leq \epsilon$ for all $t \in T$, $t > 2$, and all $\tilde{s}_1 \in W$
2. Let $W' \leftarrow W$
3. Check whether the partition $W'$ is such that $\delta^t(\tilde{s}_1, \tilde{s}'_1) \leq \delta$ for all $t \in T$ and all $\tilde{s}_1, \tilde{s}'_1 \in W'$ with $\tilde{s}_1 \neq \tilde{s}'_1$. If so, return the marginal distribution of $M_1$ and the approximated one of $M_2$, obtained by computing the marginal distribution of $\tilde{M}_1 \otimes M_2$, and terminate
4. Otherwise, refine the partition $W'$ to obtain $W_{\mathrm{new}}$ such that the number of clusters of $W_{\mathrm{new}}$ is greater than the number of clusters in $W'$, set $W' \leftarrow W_{\mathrm{new}}$, and repeat from Step 3 (a simplified executable sketch follows)
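Below is a simplified, executable reading of this algorithm; it is a sketch of this write-up, not the authors' implementation. Rather than first minimising over ε and then refining for δ, it grows the number of clusters of a single hierarchical clustering until both tolerances hold, reusing the epsilon_error and delta_error helpers sketched above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def ideal_algorithm(pi, Q_sync, eps_tol, delta_tol):
    """Return a partition of the states of M1 meeting both tolerances.

    pi     : steady-state probabilities of M1
    Q_sync : dict mapping each synchronising label t to its rate matrix q^t
    """
    n = len(pi)
    # feature vector of a state: its outgoing flux for every synchronising label
    features = np.column_stack([Qt.sum(axis=1) for Qt in Q_sync.values()])
    Z = linkage(features, method='ward')
    for k in range(1, n + 1):                    # Steps 1 and 4: ever finer partitions
        labels = fcluster(Z, t=k, criterion='maxclust')
        clusters = [list(np.where(labels == c)[0]) for c in range(1, k + 1)]
        clusters = [C for C in clusters if C]    # drop empty clusters, if any
        ok_eps = all(epsilon_error(pi, Qt, C) <= eps_tol
                     for Qt in Q_sync.values() for C in clusters)
        ok_delta = all(delta_error(pi, Qt, Ci, Cj) <= delta_tol
                       for Qt in Q_sync.values()
                       for Ci in clusters for Cj in clusters if Ci != Cj)
        if ok_eps and ok_delta:                  # Step 3: both tolerances satisfied
            return clusters                      # the lumped M1 can be built from these
    return [[s] for s in range(n)]               # trivial partition always satisfies both
```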

  15. Constructing the approximate lumped automaton
Definition (Approx. lumped automaton). Given an active automaton $M_1$, a set of transition types $T$, and a partition of the states of $M_1$ into $\tilde{N}_1$ clusters $W = \{\tilde{s}_1, \tilde{s}_2, \ldots, \tilde{s}_{\tilde{N}_1}\}$, we define the automaton $M_1^{\simeq}$ as follows:

$$\tilde{E}_{11}(\tilde{s}_1, \tilde{s}'_1) = \begin{cases} \tilde{\lambda}_1^{-1}\, \varphi^1_1(\tilde{s}_1, \tilde{s}'_1) & \text{if } \tilde{s}_1 \neq \tilde{s}'_1 \\ 0 & \text{otherwise} \end{cases}$$

$$\tilde{E}_{12} = I, \qquad \tilde{E}_{1t}(\tilde{s}_1, \tilde{s}'_1) = \tilde{\lambda}_t^{-1}\, \varphi^t_1(\tilde{s}_1, \tilde{s}'_1), \quad t > 2$$

where

$$\tilde{\lambda}_t = \max_{\tilde{s}_1 = 1, \ldots, \tilde{N}_1} \sum_{\tilde{s}'_1 = 1}^{\tilde{N}_1} \varphi^t_1(\tilde{s}_1, \tilde{s}'_1)$$

are the rates associated with the transition types in the cooperation between $M_1^{\simeq}$ and $M_2$.
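A sketch of how the scaled cluster-to-cluster matrices could be assembled, under the assumption (of this write-up) that the aggregated flux $\varphi^t_1(\tilde{s}_1, \tilde{s}'_1)$ is the π-weighted conditional average used for the δ-error; only the synchronising types are shown, with $\tilde{E}_{12} = I$ and the zero-diagonal local matrix following the same pattern.

```python
import numpy as np

def lumped_sync_matrices(pi, Q_sync, clusters):
    """Aggregated, scaled matrices E~_1t of the approximate lumped automaton:
    each entry is the pi-weighted average flux from a source cluster into a
    destination cluster, and each matrix is divided by lambda~_t (its largest
    row sum before scaling)."""
    k = len(clusters)
    E = {}
    for t, Qt in Q_sync.items():
        phi = np.zeros((k, k))
        for i, Ci in enumerate(clusters):
            w = pi[Ci] / pi[Ci].sum()                    # conditional weights in Ci
            for j, Cj in enumerate(clusters):
                phi[i, j] = np.dot(w, Qt[np.ix_(Ci, Cj)].sum(axis=1))
        lam_t = phi.sum(axis=1).max()                    # lambda~_t
        E[t] = phi / lam_t if lam_t > 0 else phi
    return E
```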

  16. Initial clustering and refinement phase
Initial clustering:
• the similarity measure can be the Euclidean distance between $(\phi^3_1(s_1), \ldots, \phi^T_1(s_1))$ and $(\phi^3_1(s'_1), \ldots, \phi^T_1(s'_1))$
• it can be implemented using various algorithms (a K-means sketch follows below):
  • hierarchical clustering
  • K-means (but the number of clusters must be decided a priori...)
  • ...
Refinement phase:
• uses the tolerance constant δ
• distances between clusters depend on the clusters themselves ⇒ K-means cannot be used
• spectral analysis or iterative algorithms
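As one concrete realisation of the initial clustering (an assumption of this write-up, not the authors' tool), the flux vectors can be fed to an off-the-shelf K-means, which indeed requires the number of clusters up front:

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_clustering(Q_sync, k):
    """Group the states of M1 by the Euclidean distance between their
    outgoing-flux vectors (phi^3_1(s), ..., phi^T_1(s)), one component per
    synchronising label; k must be chosen a priori."""
    features = np.column_stack([Qt.sum(axis=1) for Qt in Q_sync.values()])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    return [list(np.where(labels == c)[0]) for c in range(k)]
```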
