
On Model Checking Techniques for Randomized Distributed Systems
Christel Baier, Technische Universität Dresden
joint work with Nathalie Bertrand, Frank Ciesinski, and Marcus Größer


  1. Randomized mutual exclusion protocol
     • interleaving of the request operations
     • competition if both processes are waiting
     • a randomized arbiter tosses a coin if both processes are waiting
     [MDP diagram: states are pairs of local phases of the two processes
     (n_i = noncritical, w_i = waiting, c_i = critical), connected by
     actions request_i, enter_i, release_i; in state w_1 w_2 the arbiter
     tosses a coin to decide which process may enter the critical section.]
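
    The protocol itself is not spelled out on the slide, but the bullet
    points determine the shape of the MDP. A minimal Python sketch,
    assuming an illustrative two-process state encoding (all names mine):

        # Sketch of the protocol's MDP: each process is in phase
        # "n" (noncritical), "w" (waiting), or "c" (critical).
        def transitions(state):
            """state = (phase1, phase2);
            returns {action: [(successor, probability), ...]}."""
            p1, p2 = state
            acts = {}
            if p1 == "n": acts["request1"] = [(("w", p2), 1.0)]
            if p2 == "n": acts["request2"] = [((p1, "w"), 1.0)]
            if p1 == "c": acts["release1"] = [(("n", p2), 1.0)]
            if p2 == "c": acts["release2"] = [((p1, "n"), 1.0)]
            if p1 == "w" and p2 == "w":
                # randomized arbiter: fair coin picks who enters
                acts["toss"] = [(("c", "w"), 0.5), (("w", "c"), 0.5)]
            elif p1 == "w" and p2 != "c":
                acts["enter1"] = [(("c", p2), 1.0)]
            elif p2 == "w" and p1 != "c":
                acts["enter2"] = [((p1, "c"), 1.0)]
            return acts

        print(transitions(("w", "w")))   # only the coin toss is enabled

    The nondeterminism between request1 and request2 (the interleaving)
    is left to the scheduler, which is exactly what the next slides make
    precise.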

  2. Reasoning about probabilities in MDP
     • requires resolving the nondeterminism by schedulers
     • a scheduler is a function D : S* → Act such that the action
       D(s_0 ... s_n) is enabled in state s_n
     • each scheduler induces an infinite Markov chain
       [diagram: an MDP with actions α, β, γ, δ, σ and branching
       probabilities 1/3, 2/3 is unfolded into the infinite tree-shaped
       Markov chain induced by a scheduler]
       ⇒ yields a notion of probability measure Pr^D on measurable
         sets of infinite paths
     typical tasks: given a measurable path event E,
     * check whether E holds almost surely, i.e., whether
       Pr^D(E) = 1 for all schedulers D
     * compute the worst-case probability for E, i.e.,
       sup_D Pr^D(E) or inf_D Pr^D(E)
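
    To make the scheduler definition concrete, here is a minimal Python
    sketch: a toy MDP (states and probabilities are mine, not from the
    talk), a scheduler D as a function of the path history, and a Monte
    Carlo estimate of Pr^D for a simple path event:

        import random

        # state -> {action -> [(successor, probability), ...]}
        mdp = {
            "s": {"alpha": [("t", 1/3), ("u", 2/3)],
                  "beta":  [("u", 1.0)]},
            "t": {"gamma": [("s", 1.0)]},
            "u": {"delta": [("u", 1.0)]},
        }

        def scheduler(history):
            """A scheduler D : S* -> Act; here a simple memoryless
            example picking the alphabetically first enabled action."""
            return sorted(mdp[history[-1]])[0]

        def sample_path(start, steps):
            """Sample the Markov chain induced by the scheduler."""
            history = [start]
            for _ in range(steps):
                action = scheduler(history)  # enabled in history[-1]
                succs, probs = zip(*mdp[history[-1]][action])
                history.append(random.choices(succs, weights=probs)[0])
            return history

        # Estimate Pr^D("t is reached within 10 steps"):
        runs = 10_000
        hits = sum("t" in sample_path("s", 10) for _ in range(runs))
        print(hits / runs)   # about 1/3 for this scheduler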

  3. Quantitative analysis of MDP
     given:  MDP M = (S, Act, P, ...) with initial state s_0, and an
             ω-regular path event E, e.g., given by an LTL formula
     task:   compute Pr^M_max(s_0, E) = sup_D Pr^D(s_0, E)
     method: compute x_s = Pr^M_max(s, E) for all s ∈ S via graph
             analysis and a linear program
             [Vardi/Wolper'86] [Courcoubetis/Yannakakis'88]
             [Bianco/de Alfaro'95] [Baier/Kwiatkowska'98]
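
    The slides solve a linear program; the same least fixed point
    x_s = max_a Σ_t P(s,a,t)·x_t can also be approximated by value
    iteration. A sketch on a hypothetical toy MDP (in general, a
    graph-based precomputation of the states with value exactly 0 or 1
    precedes this numeric step):

        mdp = {
            "s": {"alpha": [("goal", 0.5), ("fail", 0.5)],
                  "beta":  [("s", 0.9), ("goal", 0.1)]},
            "goal": {}, "fail": {},
        }

        def pr_max_reach(mdp, goal, eps=1e-10):
            """Approximate x_s = Pr_max(s, eventually goal)."""
            x = {s: (1.0 if s == goal else 0.0) for s in mdp}
            while True:
                delta = 0.0
                for s, actions in mdp.items():
                    if s == goal or not actions:
                        continue   # goal and absorbing states stay fixed
                    new = max(sum(p * x[t] for t, p in succ)
                              for succ in actions.values())
                    delta = max(delta, abs(new - x[s]))
                    x[s] = new
                if delta < eps:
                    return x

        # Repeating beta eventually reaches goal with probability 1:
        print(pr_max_reach(mdp, "goal")["s"])   # converges to 1.0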

  4. Probabilistic "bad behaviors"
     system ─→ MDP M
     requirement ─→ LTL formula ϕ ─(2exp)─→ deterministic automaton A
     quantitative analysis in the product-MDP M × A:
        Pr^M_max(s, ϕ) = Pr^{M×A}_max(⟨s, init_s⟩, acceptance cond. of A)
                       = Pr^{M×A}_max(⟨s, init_s⟩, ♦ accEC)
     i.e., the maximal probability for reaching an accepting end
     component of the product. This probabilistic reachability analysis
     (a linear program) is polynomial in |M| · |A|; the 2exp blow-up of
     A and the state explosion problem remain the bottleneck.
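
    A sketch of the product construction M × A, assuming A is given by
    a deterministic transition function delta over state labels (all
    identifiers are illustrative):

        def product(mdp, init, label, delta, aut_init):
            """States of the product are pairs (s, q): the automaton
            component q deterministically tracks the labels of the path
            through the MDP; probabilities are inherited from M."""
            start = (init, delta(aut_init, label(init)))
            prod, todo = {}, [start]
            while todo:
                (s, q) = todo.pop()
                if (s, q) in prod:
                    continue
                prod[(s, q)] = {}
                for action, succs in mdp[s].items():
                    lifted = []
                    for t, p in succs:
                        tq = (t, delta(q, label(t)))  # A moves deterministically
                        lifted.append((tq, p))
                        if tq not in prod:
                            todo.append(tq)
                    prod[(s, q)][action] = lifted
            return prod, start

        # tiny usage: a 2-state automaton remembering whether "goal" was seen
        mdp = {"s": {"a": [("s", 0.5), ("goal", 0.5)]},
               "goal": {"b": [("goal", 1.0)]}}
        delta = lambda q, l: "acc" if (q == "acc" or l == "goal") else "q0"
        prod, start = product(mdp, "s", lambda s: s, delta, "q0")

    Pr^M_max(s, ϕ) then equals the maximal probability, in the product,
    of reaching an accepting end component, computable as on the
    previous slide.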

  5. Advanced techniques for PMC
     applications: randomized distributed algorithms, communication and
     multimedia protocols, power management, security, ...
     • symbolic model checking with variants of BDDs
       e.g., in PRISM [Kwiatkowska/Norman/Parker],
       ProbVerus [Hartonas-Garmhausen/Campos/Clarke]
     • state aggregation with bisimulation
       e.g., in MRMC [Katoen et al.]
     • abstraction-refinement
       e.g., in RAPTURE [d'Argenio/Jeannet/Jensen/Larsen],
       PASS [Hermanns/Wachter/Zhang]
     • partial order reduction
       e.g., in LiQuor [Baier/Ciesinski/Größer]

  6. Partial order reduction
     technique for reducing the state space of concurrent systems
     [Godefroid, Peled, Valmari, ca. 1990]
     • attempts to analyze a sub-system by identifying "redundant
       interleavings"
     • explores representatives of paths that agree up to the order of
       independent actions
       e.g., for x := x+y (action α) running concurrently with
       z := z+3 (action β), the order is irrelevant: α;β has the same
       effect as β;α
     DFS-based on-the-fly generation of a reduced system (see the
     sketch below): for each expanded state s
     • choose an appropriate subset Ample(s) of Act(s)
     • expand only the α-successors of s for α ∈ Ample(s)
       (but ignore the actions in Act(s) \ Ample(s))
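
    A minimal sketch of this DFS scheme; the ample-set choice is
    deliberately left trivial here, since the soundness conditions
    (A1)-(A3) are only introduced below:

        def ample(state, enabled):
            """Placeholder heuristic: all enabled actions, i.e., no
            reduction. Conditions (A1)-(A3) (and (A4) for MDP) restrict
            which proper subsets would be sound."""
            return enabled

        def reduced_system(init, enabled, step):
            """enabled(s) -> set of actions, step(s, a) -> successor."""
            visited, stack, transitions = {init}, [init], []
            while stack:
                s = stack.pop()
                for a in ample(s, enabled(s)):  # expand only ample actions
                    t = step(s, a)
                    transitions.append((s, a, t))
                    if t not in visited:
                        visited.add(t)
                        stack.append(t)
            return visited, transitions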

  7. Partial order reduction: example
     concurrent execution of processes P_1, P_2 with
     • no communication
     • no competition
     transition system for P_1 ∥ P_2 where P_1 = α;β;γ and P_2 = λ;µ;ν
     [diagrams: the full system contains all interleavings of α β γ
     with λ µ ν; the reduced system keeps a single sequential path]
     idea: explore just 1 path as representative for all interleavings

  8. Ample-set method [Peled 1993]
     given: processes P_i of a parallel system P_1 ∥ ... ∥ P_n
            with transition system T = (S, Act, →, ...)
     task:  on-the-fly generation of a sub-system T_r s.t.
            (A1) stutter condition
            (A2) dependency condition
            (A3) cycle condition
     Each path π in T is represented by an "equivalent" path π_r in T_r:
     π ≡ π_r up to permutations of independent actions
     ⇒ T and T_r satisfy the same stutter-invariant events,
       e.g., next-free LTL formulas

  9. Ample-set method for MDP
     given: processes P_i of a probabilistic system P_1 ∥ ... ∥ P_n
            with MDP-semantics M = (S, Act, P, ...)
     task:  on-the-fly generation of a sub-MDP M_r s.t. for all
            schedulers D for M there is a scheduler D_r for M_r with
               Pr^D_M(E) = Pr^{D_r}_{M_r}(E)
            for all measurable, stutter-invariant events E
     ⇒ M and M_r have the same extremal probabilities for
       stutter-invariant events

  10. Independence of actions
     non-probabilistic case: actions α and β are called independent in
     a transition system T iff whenever s ─α→ t and s ─β→ u, then
     (1) α is enabled in u
     (2) β is enabled in t
     (3) if u ─α→ v and t ─β→ w then v = w
     probabilistic case: let M = (S, Act, P, ...) be an MDP and
     α, β ∈ Act. Then α and β are independent in M if for each state s
     with α, β ∈ Act(s):
     (1) if P(s, α, t) > 0 then β ∈ Act(t)
     (2) if P(s, β, u) > 0 then α ∈ Act(u)
     (3) for all states w: P(s, αβ, w) = P(s, βα, w), where
         P(s, αβ, w) = Σ_{t∈S} P(s, α, t) · P(t, β, w)  and
         P(s, βα, w) = Σ_{u∈S} P(s, β, u) · P(u, α, w)
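
    Condition (3) is easy to check on an explicit representation. A
    sketch, assuming conditions (1) and (2) already hold so that the
    second action is enabled in every successor (representation
    P[s][a] = {successor: probability} and all names are mine):

        def two_step(P, s, a, b):
            """Distribution after executing a then b from s:
            P(s, ab, w) = sum_t P(s,a,t) * P(t,b,w)."""
            dist = {}
            for t, p in P[s][a].items():
                for w, q in P[t][b].items():   # assumes b in Act(t)
                    dist[w] = dist.get(w, 0.0) + p * q
            return dist

        def commute(P, s, a, b, eps=1e-12):
            """Condition (3): distributions after a;b and b;a coincide."""
            d1, d2 = two_step(P, s, a, b), two_step(P, s, b, a)
            return all(abs(d1.get(w, 0) - d2.get(w, 0)) < eps
                       for w in set(d1) | set(d2))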

  11. Example: the ample-set method fails for MDP
     non-probabilistic case: for an original system T in which α is
     independent from β and γ, a reduced system T_r that reorders α
     relative to the β/γ branching in state s fulfills (A1)-(A3) and
     is sound.
     probabilistic case: make α probabilistic (a fair coin, branching
     probabilities 1/2, 1/2). The analogously reduced MDP M_r still
     fulfills (A1)-(A3), but (♦ = "eventually"):
        Pr^M_max(s, ♦ green) = 1  >  1/2 = Pr^{M_r}_max(s, ♦ green)
     Intuitively, in M the scheduler can pick between β and γ after
     seeing the outcome of the coin toss, while in M_r this choice is
     forced before the coin is tossed.
     [diagrams: original system/MDP vs. reduced system/MDP]

  12. Partial order reduction for MDP
     extend Peled's conditions (A1)-(A3) for the ample sets by
     (A4) probabilistic condition:
          if there is a path s ─β_1→ ─β_2→ ... ─β_n→ ─α→ in M such
          that β_1, ..., β_n, α ∉ Ample(s) and α is probabilistic,
          then |Ample(s)| = 1
     (a conservative local check is sketched below)
     If (A1)-(A4) hold, then M and M_r have the same extremal
     probabilities for all stutter-invariant properties.
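
    Condition (A4) quantifies over all paths from s, which is expensive
    to verify exactly; implementations typically enforce a stronger,
    locally checkable condition. One crude but sound over-approximation
    (my sketch, not necessarily the check used in LiQuor):

        def a4_ok(ample_s, act_s, system_has_probabilistic_actions):
            """Sound over-approximation of (A4)."""
            if set(ample_s) == set(act_s):
                return True   # no reduction at s: every path from s
                              # starts with an action in Ample(s),
                              # so (A4) is vacuous
            if not system_has_probabilistic_actions:
                return True   # no probabilistic alpha can exist
            return len(ample_s) == 1   # conservative: force a singleton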

  13. Probabilistic model checking, e.g., LiQuor
     system side: modeling language for P_1 ∥ ... ∥ P_n
        ─(partial order reduction)─→ reduced MDP M_r
     requirement side: quantitative requirements as an LTL\◯ formula ϕ
        (a next-free path event)
     quantitative analysis of M_r against ϕ
        ⇒ maximal/minimal probability for ϕ (worst-case analysis)

  14. Outline
     • Markov decision processes (MDP) and quantitative analysis
       against path events
     • partial order reduction for MDP
     • partially-observable MDP  ←─ (next)
     • conclusions

  15. Monty-Hall problem
     3 doors, initially closed; behind one door a prize of 100.000
     Euro, behind the other two no prize; participants: the candidate
     and the show master
     1. the candidate chooses one of the doors
     2. the show master opens a non-chosen, non-winning door
     3. the candidate has the choice:
        • keep the initial choice, or
        • switch to the other (still closed) door
     4. the show master opens all doors
     optimal strategy for the candidate: the initial choice of the
     door is arbitrary; then revise the initial choice (switch);
     probability for getting the prize: 2/3
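
    The claimed probabilities are easy to reproduce by simulation; a
    short sketch of the game exactly as described in steps 1-4:

        import random

        def play(switch):
            prize = random.randrange(3)
            choice = random.randrange(3)   # step 1: arbitrary choice
            # step 2: open a non-chosen, non-winning door
            opened = next(d for d in range(3)
                          if d != choice and d != prize)
            if switch:                     # step 3: keep or switch
                choice = next(d for d in range(3)
                              if d != choice and d != opened)
            return choice == prize         # step 4: all doors opened

        runs = 100_000
        print(sum(play(False) for _ in range(runs)) / runs)  # ~ 1/3
        print(sum(play(True)  for _ in range(runs)) / runs)  # ~ 2/3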

  16. MDP for the Monty-Hall problem
     show master's actions: 2. opens a non-chosen, non-winning door;
        4. opens all doors
     candidate's actions: 1. choose one door; 3. keep or switch?
     [MDP diagram: from start, each of the states door 1, door 2,
     door 3 is reached with probability 1/3; from door i the candidate
     chooses keep or switch, leading to won or lost]
     • Pr_max(start, ♦ won) = 1, but the optimal scheduler requires
       complete information on the states
     • the states door 1, door 2, door 3 cannot be distinguished by
       the candidate
     • observation-based strategy: choose action switch in every
       state door i; probability for ♦ won: 2/3
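
    The gap between the full-information optimum and the best
    observation-based strategy can be computed directly from this MDP;
    a sketch with an illustrative encoding of the three
    indistinguishable door states (which action wins depends on where
    the prize is relative to the chosen door):

        states = {"door1": {"keep": 1, "switch": 0},  # prize behind chosen door
                  "door2": {"keep": 0, "switch": 1},
                  "door3": {"keep": 0, "switch": 1}}
        p = {s: 1/3 for s in states}      # each reached with prob. 1/3

        # Full-information scheduler: a different action per state.
        pr_max = sum(p[s] * max(acts.values())
                     for s, acts in states.items())

        # Observation-based scheduler: door1..door3 form one
        # observation class, so the same action must be taken everywhere.
        pr_obs = max(sum(p[s] * states[s][a] for s in states)
                     for a in ("keep", "switch"))

        print(pr_max)   # 1.0 (up to rounding): Pr_max(start, ♦ won)
        print(pr_obs)   # 2/3: best observation-based strategy, switch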

  17. Partially-observable Markov decision processes
     A partially-observable MDP (POMDP for short) is an MDP
     M = (S, Act, P, ...) together with an equivalence relation ∼ on S
     ⇒ if s_1 ∼ s_2, then s_1 and s_2 cannot be distinguished from
       outside (or by the scheduler)
     observables: equivalence classes of states
     observation-based scheduler: a scheduler D : S* → Act such that
     for all π_1, π_2 ∈ S*:
        if obs(π_1) = obs(π_2) then D(π_1) = D(π_2)
     where obs(s_0 s_1 ... s_n) = [s_0] [s_1] ... [s_n]
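
    The defining constraint — equal observation sequences force equal
    decisions — can be enforced by construction: let the scheduler be a
    function of obs(π) only. A sketch (obs_class and the decision rule
    are illustrative):

        def make_obs_scheduler(obs_class, decide):
            """decide: tuple of observation classes -> action."""
            def D(history):                 # history = (s_0, ..., s_n)
                obs = tuple(obs_class(s) for s in history)
                return decide(obs)          # same obs sequence => same action
            return D

        # Monty-Hall style usage: door1..door3 share one observation
        # class, so any such D must treat them identically.
        obs_class = lambda s: "door?" if s.startswith("door") else s
        D = make_obs_scheduler(obs_class, lambda obs: "switch")
        print(D(("start", "door1")), D(("start", "door2")))  # switch switch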

  18. Extreme cases of POMDP
     • s_1 ∼ s_2 iff s_1 = s_2  ←─ standard MDP (full observability)
     • s_1 ∼ s_2 for all s_1, s_2  ←─ probabilistic finite automata
       (total non-observability)
     note that for totally non-observable POMDP:
        observation-based scheduler = function D : ℕ → Act
                                    = infinite word over Act
     ⇒ undecidability results for PFA carry over to POMDP: the maximum
       reachability probability problem "does Pr^obs_max(♦ F) > p
       hold?" corresponds to the probabilistic non-emptiness problem
       for PFA

  19. Undecidability results for POMDP
     • The model checking problem for POMDP and quantitative
       properties is undecidable, e.g., probabilistic reachability
       properties.
     • There is not even an approximation algorithm for reachability
       objectives. [Paz'71], [Madani/Hanks/Condon'99],
       [Giro/d'Argenio'07]
     • The model checking problem for POMDP and several qualitative
       properties is undecidable, e.g., repeated reachability with
       positive probability:
          "does Pr^obs_max(□♦ F) > 0 hold?"   (□♦ = "infinitely often")
       ... and this already holds for totally non-observable POMDP,
       i.e., probabilistic Büchi automata.
     ⇒ Many interesting verification problems for distributed
       probabilistic multi-agent systems are undecidable.

  20. PA rather than DA?
     recall LTL model checking for MDP: build a deterministic
     automaton A for the LTL formula ϕ (2exp blow-up) and perform
     probabilistic reachability analysis in the product-MDP M × A.
     Could the deterministic automaton be replaced by a (potentially
     much smaller) probabilistic automaton A? Impossible, due to the
     undecidability results above.

  21. Decidability results for POMDP
     The model checking problem for POMDP and several qualitative
     properties is decidable, e.g.,
     • invariance with positive probability:
       "does Pr^obs_max(□ F) > 0 hold?"
     • almost-sure reachability:
       "does Pr^obs_max(♦ F) = 1 hold?"
     • almost-sure repeated reachability:
       "does Pr^obs_max(□♦ F) = 1 hold?"
     • persistence with positive probability:
       "does Pr^obs_max(♦□ F) > 0 hold?"
     The algorithms use a certain powerset construction.
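
    The slides do not detail the construction, but a powerset
    (belief-support) construction of this kind tracks the set of states
    the POMDP could currently be in, given the observations seen so
    far. A sketch under that assumption (post(s, a) yields the
    successors t with P(s, a, t) > 0; all names mine):

        def powerset_successor(belief, action, post, obs_class):
            """From a belief support (frozenset of states) and one
            action, compute the successor belief per observation."""
            succs = {t for s in belief for t in post(s, action)}
            by_obs = {}
            for t in succs:
                by_obs.setdefault(obs_class(t), set()).add(t)
            # one successor belief support per possible observation
            return {o: frozenset(ts) for o, ts in by_obs.items()}

    Qualitative questions such as almost-sure reachability can then be
    decided on the finite graph of belief supports, since they do not
    depend on the exact numeric probabilities.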

  22. Probabilistic Automata and Verification: conclusions
     • Markov decision processes (MDP) and quantitative analysis
       against path events
     • partial order reduction for MDP
     • partially-observable MDP
     • conclusions  ←─ (now)
