SLIDE 1

Learning Markov Models for Stationary System Behaviors

Yingke Chen, Hua Mao, Manfred Jaeger, Thomas D. Nielsen, Kim G. Larsen, Brian Nielsen

Department of Computer Science, Aalborg University, Denmark

NFM 2012, April 4, 2012

SLIDE 2

Motivation

◮ Constructing formal models manually can be time-consuming
◮ Formal system models may not exist
  ◮ legacy software
  ◮ third-party components
  ◮ black-box embedded system components
◮ Our proposal: learn models from observed system behaviors

SLIDE 3

Overview of Our Approach

[Figure: the system is observed, producing a data sequence such as "idle, idle, coffee_request, idle, idle, cup, idle, idle, coffee, coffee, idle, idle, ..."; a probabilistic automaton is learned from the data and, together with a specification, passed to a model checker, which answers yes/no]

SLIDE 4

Related Work

◮ Learning probabilistic finite automata
  ◮ Alergia — R. Carrasco and J. Oncina (1994)
  ◮ Probabilistic Suffix Automata — D. Ron et al. (1996)
◮ Learning models for model checking
  ◮ Learning CTMCs — K. Sen et al. (2004)
  ◮ Learning DLMCs — H. Mao et al. (2011)

Limitation

◮ The system may be hard to restart an arbitrary number of times.
◮ The system cannot always be reset to a well-defined unique initial state.

Proposal

◮ Learn a model from a single observation sequence

SLIDE 5

Labeled Markov Chain (LMC)

An LMC is a tuple M = (Q, Σ, π, τ, L), where

◮ Q: a finite set of states
◮ Σ: a finite alphabet
◮ π : Q → [0, 1]: an initial probability distribution
◮ τ : Q × Q → [0, 1]: the transition probability function
◮ L : Q → Σ: a labeling function
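As an illustrative sketch (not from the talk), the tuple can be encoded directly and used to sample observation sequences; the two states, labels, and probabilities below are hypothetical:

```python
import random

# A hypothetical two-state LMC M = (Q, Sigma, pi, tau, L); the states,
# labels and probabilities are illustrative, not taken from the talk.
Q = [0, 1]
Sigma = ["idle", "coff"]
pi = {0: 1.0, 1: 0.0}                     # initial distribution pi
tau = {(0, 0): 0.7, (0, 1): 0.3,          # transition probabilities tau
       (1, 0): 0.4, (1, 1): 0.6}
L = {0: "idle", 1: "coff"}                # labeling function L

def sample_trace(length, seed=0):
    """Sample an observation sequence of state labels from the LMC."""
    rng = random.Random(seed)
    q = rng.choices(Q, weights=[pi[s] for s in Q])[0]
    trace = [L[q]]
    for _ in range(length - 1):
        q = rng.choices(Q, weights=[tau[(q, s)] for s in Q])[0]
        trace.append(L[q])
    return trace

trace = sample_trace(10)
```

The observed behavior of such a chain is exactly the kind of label sequence the learning problem starts from.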

SLIDE 6

Probabilistic Suffix Automata - PSA

A PSA is an LMC such that

◮ H : Q → Σ^≤N is an extended labeling function that represents the history of the most recently visited states
◮ Each state q_i is associated with the string s_i = H(q_i)L(q_i); if τ(q_1, q_2) > 0, then H(q_2) ∈ suffix*(s_1)
◮ Let S be the set of strings associated with states in the PSA; then ∀ s ∈ S, suffix*(s) ∩ S = {s}

[Figure: a PSA with states idle, cup, coff, "cup, milk", "milk, milk" and their transition probabilities]

Figure: A PSA over Σ = {idle, cup, milk, coff}
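The last condition (the state strings form a suffix-free set) can be checked mechanically; a small sketch, using the state strings of the PSA in the figure encoded as tuples of symbols:

```python
def suffixes(s):
    """All suffixes of the tuple s, including s itself and the empty tuple."""
    return [s[i:] for i in range(len(s) + 1)]

def is_suffix_free(S):
    """PSA condition from the slide: for every s in S,
    suffix*(s) intersected with S must be exactly {s}."""
    S = [tuple(s) for s in S]
    return all(set(suffixes(s)) & set(S) == {s} for s in S)

# State strings of the PSA in the figure, one per state.
states = [("idle",), ("cup",), ("coff",), ("cup", "milk"), ("milk", "milk")]
```

Adding ("milk",) to this set would violate the condition, since ("milk",) is a proper suffix of ("milk", "milk").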

SLIDE 7

Prediction Suffix Tree - PST

◮ A tree over the alphabet Σ = {idle, cup, milk, coff}
◮ Each node is labeled by a pair (s, γ_s), and each edge is labeled by a symbol σ ∈ Σ
◮ The parent's string is a suffix of its children's strings

[Figure: the PSA and the corresponding PST with nodes e, idle, cup, coff, milk, "cup, milk", "milk, milk"; each node carries its next-symbol distribution γ_s]

Figure: PSA and PST define the same distribution of strings over Σ
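To see how a PST assigns probabilities to strings, here is a sketch with a hand-built two-symbol tree (illustrative, not the coffee-machine tree from the figure): at every step the next symbol is predicted from the longest suffix of the history that is a node of the tree.

```python
# A hand-built PST over alphabet {a, b}. Keys are suffix nodes (tuples);
# values are next-symbol distributions gamma_s. Illustrative example.
tree = {
    ():      {"a": 0.5, "b": 0.5},   # root node e
    ("a",):  {"a": 0.0, "b": 1.0},
    ("b",):  {"a": 1.0, "b": 0.0},
}

def pst_prob(tree, string):
    """Probability of a string under the PST: at every step, condition on
    the longest suffix of the history that is a node of the tree."""
    p, hist = 1.0, ()
    for sym in string:
        node = hist
        while node not in tree:          # fall back to shorter suffixes
            node = node[1:]
        p *= tree[node].get(sym, 0.0)
        hist += (sym,)
    return p
```

For this tree, "ab" has probability 0.5 (choose "a" at the root, then "b" with certainty), while "aa" has probability 0.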

SLIDE 8

Stationary Probabilistic LTL - SPLTL

Syntax

The syntax of stationary probabilistic LTL is:

φ ::= S_⊲⊳r(ϕ), where ⊲⊳ ∈ {≥, ≤, =}, r ∈ [0, 1], and ϕ is an LTL formula

Semantics

For a model M: M ⊨ S_⊲⊳r(ϕ) iff P_M^πs({s ∈ Σ^ω | s ⊨ ϕ}) ⊲⊳ r for all stationary distributions πs.

SLIDE 9

Outline

Introduction: Motivation, Overview, Related Work
Preliminaries: LMC, PSA & PST, SPLTL
PSA Learning: Construct PST, PST to PSA and PSA to LMC, Parameter Tuning
Experiment: PSA-equivalent, Non PSA-equivalent
Conclusion

SLIDE 10

Overview

[Figure: from the observation sequence "coff, idle, idle, cup, milk, milk, coff, idle, cup, milk, coff, ...", a PST is constructed and then transformed into a PSA/LMC]

SLIDE 11

Construct PST

◮ Start with T consisting only of the root node (e), and S = {σ | σ ∈ Σ and P~(σ) ≥ ε}
◮ For each s ∈ S, s is included in T if P~(s) · Σ_{σ∈Σ} P~(σ|s) · log( P~(σ|s) / P~(σ|suffix(s)) ) ≥ ε
◮ For each s with P~(s) ≥ ε and every σ′ ∈ Σ, σ′s is added to S
◮ Loop until S is empty
◮ Calculate the next-symbol distribution for each node in T

[Figure: the growing PST with nodes e, idle, cup, coff, milk, "cup, milk", "milk, milk"]
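The loop above can be sketched in Python. This is a simplified, illustrative implementation: the empirical estimator P~, the depth bound max_depth, and the handling of zero counts are assumptions, and unlike Ron et al.'s full algorithm it does not insert intermediate tree nodes.

```python
import math

def learn_pst(seq, eps, max_depth=3):
    """Sketch of the PST-construction loop above (simplified)."""
    alphabet = sorted(set(seq))
    n = len(seq)

    def p(s):
        # empirical probability P~(s) of observing substring s
        hits = sum(1 for i in range(n - len(s) + 1)
                   if tuple(seq[i:i + len(s)]) == s)
        return hits / max(1, n - len(s) + 1)

    def p_next(sigma, s):
        # empirical next-symbol probability P~(sigma | s)
        num = den = 0
        for i in range(n - len(s)):
            if tuple(seq[i:i + len(s)]) == s:
                den += 1
                if seq[i + len(s)] == sigma:
                    num += 1
        return num / den if den else 0.0

    # Start with T containing only the root node (e) ...
    tree = {(): {a: p_next(a, ()) for a in alphabet}}
    # ... and S = {sigma | P~(sigma) >= eps}.
    S = [(a,) for a in alphabet if p((a,)) >= eps]
    while S:
        s = S.pop()
        # Include s in T if the weighted log-ratio gain over suffix(s) >= eps.
        gain = p(s) * sum(
            p_next(a, s) * math.log(p_next(a, s) / max(p_next(a, s[1:]), 1e-12))
            for a in alphabet if p_next(a, s) > 0)
        if gain >= eps:
            tree[s] = {a: p_next(a, s) for a in alphabet}
        # Grow candidate suffixes sigma's while P~(s) >= eps.
        if p(s) >= eps and len(s) < max_depth:
            S.extend((a,) + s for a in alphabet)
    return tree

pst = learn_pst(list("ab" * 50), eps=0.1)
```

On the strictly alternating input the learned tree contains the one-symbol suffix nodes, each predicting the other symbol with near certainty.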

SLIDE 12

SLIDE 13

SLIDE 14

Transform the PST to the LMC

[Figure: the PST is transformed into a PSA (Ron96) and relabeled to obtain the LMC]

SLIDE 15

Parameter Tuning

Smaller ε induces a bigger model

◮ P~(s) · Σ_{σ∈Σ} P~(σ|s) · log( P~(σ|s) / P~(σ|suffix(s)) ) ≥ ε
◮ P~(s) ≥ ε
◮ Risk of overfitting

SLIDE 16

Parameter Tuning

cont.

Bayesian Information Criterion (BIC)

◮ BIC(A | Seq) := log L(A | Seq) − (1/2) · |A| · log(|Seq|), where |A| = |Q_A| · (|Σ| − 1)
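The slide's score is a one-liner; a sketch, with the number of free parameters |A| computed from the state and alphabet counts as defined above:

```python
import math

def bic_score(log_likelihood, n_states, alphabet_size, seq_len):
    """BIC(A | Seq) = log L(A | Seq) - 1/2 * |A| * log(|Seq|),
    with |A| = |Q_A| * (|Sigma| - 1) free parameters, as on the slide."""
    n_params = n_states * (alphabet_size - 1)
    return log_likelihood - 0.5 * n_params * math.log(seq_len)
```

Among models learned with different ε values, one would keep the model with the highest BIC score; the penalty term counteracts the overfitting that small ε invites.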

SLIDE 17

Outline

Introduction: Motivation, Overview, Related Work
Preliminaries: LMC, PSA & PST, SPLTL
PSA Learning: Construct PST, PST to PSA and PSA to LMC, Parameter Tuning
Experiment: PSA-equivalent, Non PSA-equivalent
Conclusion

SLIDE 18

Experimental Setting

◮ A single sequence is generated by a given LMC model
◮ The difference between the generating model Mg and the learned model Ml is measured as the mean absolute difference D in stationary probability over a set Φ of randomly generated LTL formulas (computed by PRISM):

  D = (1/|Φ|) · Σ_{φ∈Φ} |P^s_Mg(φ) − P^s_Ml(φ)|

◮ Two cases: PSA-equivalent and non PSA-equivalent generating models
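The evaluation metric itself is straightforward; a sketch, taking the per-formula stationary probabilities of the two models as two equal-length lists (the formula probabilities themselves would come from a model checker such as PRISM):

```python
def mean_abs_diff(p_gen, p_learned):
    """D = (1/|Phi|) * sum over phi of |Ps_Mg(phi) - Ps_Ml(phi)|;
    inputs are lists of stationary probabilities, one entry per formula."""
    assert len(p_gen) == len(p_learned)
    return sum(abs(g, ) if False else abs(g - l)
               for g, l in zip(p_gen, p_learned)) / len(p_gen)
```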

SLIDE 19

PSA-equivalent

An LMC M is called PSA-equivalent if there exists a PSA M′ such that for every string s, P_M(s) = P_M′(s).

[Figure: (a) a PSA-equivalent LMC and (b) the corresponding PSA with states s, a, sa, aa, b]

SLIDE 20

Phone Model


Figure: Σ = {(r)ing, (i)dle, (t)alk, (p)ick-up, (h)ang-up}

SLIDE 21

Phone Model

cont.

Table: D is based on 507 random LTL formulas. For reference: Ddummy = 0.1569

|S|     |Ql|   D         t       rp|r    irp|ir   iirp|iir ♦i
320     5      0.03200   0.344   0.310   0.309    0.309
1280    5      0.04900   0.385   0.446   0.446    0.446
5120    10     0.00590   0.379   0.490   0.490    0.490
10240   14     0.00160   0.381   0.506   0.477    0.409
20480   14     0.00049   0.378   0.515   0.489    0.414
Mg      14               0.378   0.512   0.488    0.424

SLIDE 22

Self-stabilizing Protocol

[Figure: a ring of processes P1, P2, P3, ..., Pn with bits x1, x2, x3, ..., xn-1, xn]

The 3-process system generates the observation sequence "000,110,000,000,011,000,010,000,011,000,101,000,001,000,011,000,000,001,000,001,000,101,000,101,000,...", from which the model is learned.

SLIDE 23

Self-stabilizing Protocol

[Figure: a ring of processes P1, P2, P3, ..., Pn with bits x1, x2, x3, ..., xn-1, xn]

The generated sequence "000,110,000,000,011,000,010,000,011,000,101,000,001,000,011,000,000,001,000,001,000,101,000,101,000,..." is abstracted before learning:

000, 111 → 3tokens
010, 110, 011, 101, 001, 100 → stable

yielding "tokens3,stable,tokens3,stable,tokens3,stable,tokens3,tokens3,tokens3,tokens3,stable,tokens3,stable,tokens3,...".

SLIDE 24

Self-stabilizing Protocol

cont.

Table: Self-stabilizing protocol with 7 processes. D is based on 503 random LTL formulas. For reference: Dd = 0.1669.

        Full model                          Abstract model
|Seq|   time(sec)  order  |Ql|  D           time(sec)  order  |Ql|  D
80      73.0       1            0.0192      1.6        1      4     0.0172
160     49.4       1            0.0325      2.1        1      4     0.0079
320     162.9      1            0.0292      3.3        1      4     0.0369
640     34.3       1            0.0234      2.3        1      4     0.0114
1280    37.2       1            0.0193      4.1        1      4     0.0093
2560    42.0       1            0.0204      5.0        1      4     0.0054
5120    47.9       1            0.0182      8.9        1      4     0.0018
10240   59.3       1            0.0390      16.3       1      4     0.0013
20480   80.7       1            0.0390      31.4       1      4     0.0016
50000   1904.4     1      128   0.00034     152.42     1      4     0.0011
100k    3435.5     1      128   0.00071     308.9      1      4     0.0007

SLIDE 25

Self-stabilizing Protocol

cont.

[Figure: stationary probability vs. L; left: real and abstract models with 11 and 19 processes; right: real, full, and abstract models with 3 and 7 processes]

Figure: P^s_M(true U≤L stable | token = N)

SLIDE 26

Self-stabilizing Protocol

cont.

[Figure: computation time vs. L for the real and abstract models with 19 and 21 processes]

Figure: The time for calculating P^s_M(true U≤L stable | token = N) in the generating model and the abstract model.

SLIDE 27

Non PSA-equivalent

Dice Model

[Figure: two dice models with coin-flip states H, T and outcome states t1, h2, t3, h4, t5, h6]

Figure: Left: The generating model. Right: A model learned from a sequence with 1440 symbols.

SLIDE 28

Dice Model

cont.

Table: D is based on 501 random LTL formulas. For reference: Ddummy = 0.1014

|S|     |Ql|   D         Ps(1)   Ps(2)   Ps(3)   Ps(4)   Ps(5)   Ps(6)
360     13     0.0124    0.137   0.17    0.182   0.103   0.205   0.203
720     13     0.0043    0.188   0.174   0.174   0.149   0.168   0.147
1440    13     0.0023    0.184   0.166   0.169   0.143   0.153   0.185
2880    17     0.0023    0.173   0.166   0.159   0.142   0.176   0.184
5760    17     0.0016    0.173   0.165   0.153   0.161   0.174   0.174
11520   19     0.00094   0.162   0.17    0.176   0.157   0.168   0.167
20000   21     0.00092   0.164   0.173   0.171   0.166   0.164   0.162
Mg      13               0.167   0.167   0.167   0.167   0.167   0.167

For a non PSA-equivalent system, the learned model still provides a good approximation of SPLTL properties.

SLIDE 29

20000 symbols!

[Figure: the model learned from 20000 symbols, with states start, H, T, t1, h2, t3, h4, t5, h6]

SLIDE 30

Outline

Introduction: Motivation, Overview, Related Work
Preliminaries: LMC, PSA & PST, SPLTL
PSA Learning: Construct PST, PST to PSA and PSA to LMC, Parameter Tuning
Experiment: PSA-equivalent, Non PSA-equivalent
Conclusion

SLIDE 31

Conclusion

◮ Single observation sequence
◮ Learning algorithms
◮ SPLTL for stationary behavior
◮ Experimental validation