Learning Markov Models for Stationary System Behaviors


  1. Learning Markov Models for Stationary System Behaviors. Yingke Chen, Hua Mao, Manfred Jaeger, Thomas D. Nielsen, Kim G. Larsen, Brian Nielsen. Department of Computer Science, Aalborg University, Denmark. NFM 2012, April 4, 2012.

  2. Motivation
  ◮ Constructing formal models manually can be time consuming
  ◮ Formal system models may not exist:
    ◮ legacy software
    ◮ 3rd-party components
    ◮ black-box embedded system components
  ◮ Our proposal: learn models from observed system behaviors

  3. Overview of Our Approach
  System → (observe) → Data: "idle, idle, coffee_request, idle, idle, cup, idle, idle, coffee, coffee, idle, idle, ..." → (learn) → Probabilistic Automata → (together with a Specification) → Model Checker → yes/no

  4. Related Work
  ◮ Learning probabilistic finite automata
    ◮ Alergia — R. Carrasco and J. Oncina (1994)
    ◮ Probabilistic Suffix Automata — D. Ron et al. (1996)
  ◮ Learning models for model checking
    ◮ Learning CTMCs — K. Sen et al. (2004)
    ◮ Learning DLMCs — H. Mao et al. (2011)
  Limitations
  ◮ It is hard to restart the system an arbitrary number of times.
  ◮ The system cannot be reset to a well-defined unique initial state.
  Proposal
  ◮ Learn a model from a single observation sequence

  5. Labeled Markov Chain (LMC)
  An LMC is a tuple M = ⟨Q, Σ, π, τ, L⟩, where
  ◮ Q: a finite set of states
  ◮ Σ: a finite alphabet
  ◮ π : Q → [0, 1] is an initial probability distribution
  ◮ τ : Q × Q → [0, 1] is the transition probability function
  ◮ L : Q → Σ is a labeling function
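The tuple ⟨Q, Σ, π, τ, L⟩ above can be sketched as a small data structure. This is a minimal illustrative sketch, not code from the paper; all class, field, and method names are assumptions:

```python
import random

class LMC:
    """Minimal sketch of a Labeled Markov Chain M = (Q, Sigma, pi, tau, L)."""

    def __init__(self, states, alphabet, init_dist, trans, label):
        self.states = states        # Q: finite set of states
        self.alphabet = alphabet    # Sigma: finite alphabet
        self.init_dist = init_dist  # pi: state -> initial probability
        self.trans = trans          # tau: (state, state) -> transition probability
        self.label = label          # L: state -> symbol in Sigma

    def sample(self, length):
        """Generate one labeled run of the given length."""
        state = random.choices(self.states,
                               weights=[self.init_dist[q] for q in self.states])[0]
        out = [self.label[state]]
        for _ in range(length - 1):
            weights = [self.trans.get((state, q), 0.0) for q in self.states]
            state = random.choices(self.states, weights=weights)[0]
            out.append(self.label[state])
        return out
```

Sampling such runs is exactly what produces the observation sequences the learning procedure later consumes.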

  6. Probabilistic Suffix Automata (PSA)
  A PSA is an LMC in which
  ◮ H : Q → Σ≤N is an extended labeling function, representing the history of the most recently visited states.
  ◮ Each state qi is associated with the string si = H(qi)L(qi). If τ(q1, q2) > 0, then H(q2) ∈ suffix*(s1).
  ◮ Let S be the set of strings associated with states in the PSA; then for all s ∈ S, suffix*(s) ∩ S = {s}.
  Figure: A PSA over Σ = {idle, cup, milk, coff}
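The third condition above says the state strings form a suffix-free set. A minimal sketch of that check, under the assumption that suffix*(s) denotes all suffixes of s including s itself and the empty string (strings are represented as tuples of symbols; names are illustrative):

```python
def suffixes(s):
    """suffix*(s): every suffix of s, including s itself and the empty string."""
    return {s[i:] for i in range(len(s) + 1)}

def is_suffix_free(S):
    """For every s in S, suffix*(s) must intersect S only in {s} itself."""
    return all(suffixes(s) & S == {s} for s in S)
```

For example, a state set containing both "milk" and "milk, milk" would violate the condition, since "milk" is a proper suffix of "milk, milk".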

  7. Prediction Suffix Tree (PST)
  ◮ A tree over the alphabet Σ = {idle, cup, milk, coff}
  ◮ Each node is labeled by a pair (s, γs), and each edge is labeled by a symbol σ ∈ Σ
  ◮ A parent's string is a suffix of its children's strings
  Figure: A PSA and a PST that define the same distribution of strings over Σ; the PST's root e carries the next-symbol distribution (0.57, 0.16, 0.1, 0.16), and each other node carries its own, e.g. (0.7, 0.3, 0, 0) at idle and (0, 0, 0, 1) at "milk, milk".
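A PST predicts the next symbol by following the observed history backwards to the deepest matching node and using the distribution γs stored there. The sketch below encodes the tree from the slide's figure as a dictionary; the assignment of probabilities to nodes follows the figure as far as it can be read, and all names are illustrative:

```python
# PST over Sigma = {idle, cup, milk, coff}; each node s maps to gamma_s.
pst = {
    (): {"idle": 0.57, "cup": 0.16, "milk": 0.1, "coff": 0.16},
    ("idle",): {"idle": 0.7, "cup": 0.3, "milk": 0.0, "coff": 0.0},
    ("cup",): {"idle": 0.0, "cup": 0.0, "milk": 0.5, "coff": 0.5},
    ("milk",): {"idle": 0.0, "cup": 0.0, "milk": 0.3, "coff": 0.7},
    ("coff",): {"idle": 1.0, "cup": 0.0, "milk": 0.0, "coff": 0.0},
    ("cup", "milk"): {"idle": 0.0, "cup": 0.0, "milk": 0.3, "coff": 0.7},
    ("milk", "milk"): {"idle": 0.0, "cup": 0.0, "milk": 0.0, "coff": 1.0},
}

def next_symbol_dist(pst, history):
    """Return gamma_s for the longest suffix of history present in the tree."""
    for i in range(len(history) + 1):
        suffix = tuple(history[i:])
        if suffix in pst:
            # history[i:] shrinks as i grows, so the first hit is the longest match
            return pst[suffix]
    return pst[()]
```

After observing "..., cup, milk" the deepest matching node is "cup, milk", so the PST predicts coff with probability 0.7 rather than using the shallower milk node.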

  8. Stationary Probabilistic LTL (SPLTL)
  Syntax
  The syntax of stationary probabilistic LTL is:
    φ ::= S⋈r(ϕ)    (⋈ ∈ {≥, ≤, =}; r ∈ [0, 1]; ϕ ∈ LTL)
  Semantics
  For a model M, the stationary probability of an LTL property ϕ is defined by:
    M ⊨ S⋈r(ϕ)  iff  P^πs_M({s ∈ Σω | s ⊨ ϕ}) ⋈ r  for all stationary distributions πs.
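The syntax and semantics above, reconstructed from the slide, can be typeset as:

```latex
\[
  \varphi ::= S_{\bowtie r}(\phi)
  \qquad (\bowtie \in \{\ge, \le, =\},\; r \in [0,1],\; \phi \in \mathrm{LTL})
\]
\[
  M \models S_{\bowtie r}(\phi)
  \quad\text{iff}\quad
  P^{\pi_s}_{M}\bigl(\{\, s \in \Sigma^{\omega} \mid s \models \phi \,\}\bigr) \bowtie r
  \quad\text{for all stationary distributions } \pi_s .
\]
```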

  9. Outline
  ◮ Introduction: Motivation, Overview
  ◮ Related Work
  ◮ Preliminaries: LMC, PSA & PST, SPLTL
  ◮ PSA Learning: Construct PST, PST to PSA and PSA to LMC, Parameter Tuning
  ◮ Experiment: PSA-equivalent, Non-PSA-equivalent
  ◮ Conclusion

  10. Overview
  Figure: From a single observation sequence ("coff, idle, idle, cup, milk, milk, coff, idle, cup, milk, coff, ...") a PST is constructed and then converted into a PSA.

  11. Construct PST
  ◮ Start with a tree T consisting only of the root node (e), and S = {σ | σ ∈ Σ and P̃(σ) ≥ ε}.
  ◮ Each s ∈ S is included in T if
      Σσ∈Σ P̃(s) · P̃(σ|s) · log( P̃(σ|s) / P̃(σ|suffix(s)) ) ≥ ε
  ◮ For each s with P̃(s) ≥ ε, add σ′s to S for all σ′ ∈ Σ.
  ◮ Loop until S is empty.
  ◮ Calculate the next-symbol distribution for each node in T.
  Figure: the PST under construction (root e with children idle, cup, milk, coff, and the deeper node "milk, milk")
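The loop above can be sketched in Python, with P̃ read off a single observation sequence as empirical frequencies. This is a simplified illustrative sketch, not the paper's actual procedure (details such as smoothing and adding intermediate ancestor nodes are omitted); epsilon, max_depth, and all names are assumptions:

```python
import math
from collections import Counter

def learn_pst(seq, alphabet, epsilon, max_depth):
    """Grow a PST (dict: context tuple -> next-symbol distribution) from seq."""
    n = len(seq)

    def p_hat(s):
        """Empirical probability of the substring s occurring in seq."""
        if not s:
            return 1.0
        count = sum(1 for i in range(n - len(s) + 1)
                    if tuple(seq[i:i + len(s)]) == s)
        return count / max(1, n - len(s) + 1)

    def p_next(sigma, s):
        """Empirical P(sigma | s): how often an occurrence of s is followed by sigma."""
        follows = Counter(seq[i + len(s)] for i in range(n - len(s))
                          if tuple(seq[i:i + len(s)]) == s)
        total = sum(follows.values())
        return follows[sigma] / total if total else 0.0

    tree = {(): {a: p_next(a, ()) for a in alphabet}}        # root node e
    S = [(a,) for a in alphabet if p_hat((a,)) >= epsilon]
    while S:
        s = S.pop()
        parent = s[1:]  # suffix(s): drop the first symbol
        # KL-style gain of keeping s over falling back to suffix(s)
        gain = sum(p_hat(s) * p_next(a, s) *
                   math.log(p_next(a, s) / max(p_next(a, parent), 1e-12))
                   for a in alphabet if p_next(a, s) > 0)
        if gain >= epsilon:
            tree[s] = {a: p_next(a, s) for a in alphabet}
        if p_hat(s) >= epsilon and len(s) < max_depth:
            S.extend((a,) + s for a in alphabet)             # longer candidates
    return tree
```

On a periodic sequence such as "idle, idle, cup, milk, coff, ..." the context "cup" is kept, because its next-symbol distribution (always milk) diverges sharply from the root's.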

  12. Construct PST (continued)
  Figure: the PST after adding the deeper candidate nodes "cup, milk" and "milk, milk"

  13. Construct PST (continued)
  Figure: the completed PST with a next-symbol distribution attached to each node, e.g. (0.57, 0.16, 0.1, 0.16) at the root e, (0.7, 0.3, 0, 0) at idle, (0, 0, 0.5, 0.5) at cup, (1, 0, 0, 0) at coff, and (0, 0, 0, 1) at "milk, milk"
