On Automata Learning and Conformance Testing

On Automata Learning and Conformance Testing - PowerPoint PPT Presentation

On Automata Learning and Conformance Testing. Bengt Jonsson, Uppsala University. Acknowledgments: Fides Aarts, Therese Berg, Johan Blom, Olga ...


  1. Discussion • Problem with Biermann's algorithm: exponential. • Q: Is there a setting in which automata can be learned polynomially in some way? • By Gold's result, we cannot hope to learn a minimal DFA from an arbitrary sample.

  2. Identification in the Limit • Teacher enumerates classified words from Σ*: λ-, a+, b+, aa-, aaa+, aab-, aabb+, ... • Learner builds a hypothesis automaton from the words seen so far. (figure: partial hypothesis DFA after the first few words)

  3. Identification in the Limit • Teacher enumerates classified words from Σ*: λ-, a+, b+, aa-, aaa+, aab-, aabb+, ... • Learner refines its hypothesis after each word. (figure: updated hypothesis DFA with a third state)

  4. Identification in the Limit • Assume the Teacher incrementally enumerates all words (classified) in Σ*. • After each word, the Learner can use the previous words to form a hypothesis H. • The Learner identifies L in the limit if H converges to the correct hypothesis after finitely many words. • Still, (exponentially) much data may be needed.

  5. Efficient Identification in the Limit • A concept class is efficiently identifiable in the limit if there exist polynomials p, q such that, for any concept C in the class: • the Learner can produce H in time O(p(|seen sample|)); • there exists a sample S of size O(q(|C|)) such that the Learner produces the correct H whenever the seen sample contains S. • S is called a "characteristic sample" for C. • S can depend on the Learner.

  6. Observations • If a concept class is efficiently identifiable in the limit, then: • the Learner needs polynomial time to produce a hypothesis; • concepts are characterized by polynomial-size characteristic sets. • With a "helpful" Teacher, the Learner needs only polynomially much data to infer C. • With an "unhelpful" Teacher, the Learner may need a lot of data to infer C. • The Learner should work well for characteristic sets, and should make "reasonable" hypotheses otherwise.

  7. Characteristic Samples • A characteristic sample S for C should uniquely characterize C in the following sense: the Learner should produce hypothesis C from any sample that contains S and is consistent with C. • This implies that if S is a characteristic sample for C and S' is a characteristic sample for C', then either C is inconsistent with S' or C' is inconsistent with S (otherwise, what to do with S ∪ S'?).

  8. Characteristic Samples for DFAs • A characteristic sample for L should identify its DFA. This can be done by: • demonstrating that there are n states; • representing each state by an access string u, where u represents δ(q0, u); • for each state q and symbol a, uniquely identifying δ(q, a).

  9. Separating Sequences • A separating sequence for q and q' is a suffix v such that δ(q, v) is accepting and δ(q', v) is rejecting (or vice versa). (figure: 3-state DFA with separating suffixes: 1 vs 2: λ; 1 vs 3: b (not a); 2 vs 3: λ)

  10. Separating Sequences • A separating sequence for q and q' is a suffix v such that δ(q, v) is accepting and δ(q', v) is rejecting (or vice versa). • A separating family of a DFA is a family of sets { Z_q | q is a state of the DFA } such that Z_q ∩ Z_q' contains a separating sequence for q and q'. (figure: the 3-state DFA annotated with sets Z_1, Z_2, Z_3 of separating suffixes drawn from {λ, b})

  11. Separating Sequences • A separating family of a DFA is a family of sets { Z_q | q is a state of the DFA } such that Z_q ∩ Z_q' contains a separating sequence for q and q'. • If all Z_q are equal (to W), then W is a characterizing set. (figure: the same DFA, with each Z_q drawn from {λ, b})

  12. Separating Sequences • A separating family of a DFA is a family of sets { Z_q | q is a state of the DFA } such that Z_q ∩ Z_q' contains a separating sequence for q and q'. • If all Z_q are equal (to W), then W is a characterizing set. (figure: the same DFA with W = {λ, b})
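
To make the notion of a characterizing set concrete, here is a minimal Python sketch (my own, not from the slides): a breadth-first search over pairs of states finds a separating suffix for each pair of inequivalent states, and the union of these suffixes is a characterizing set W. The DFA encoding and the 3-state example are hypothetical.

```python
from itertools import product

def separating_suffix(delta, accepting, q1, q2, alphabet, max_len=10):
    """BFS for a suffix v such that delta(q1, v) is accepting xor delta(q2, v) is accepting."""
    frontier = [(q1, q2, "")]
    seen = {(q1, q2)}
    while frontier:
        p1, p2, v = frontier.pop(0)
        if (p1 in accepting) != (p2 in accepting):
            return v                               # v separates q1 and q2
        for a in alphabet:
            n1, n2 = delta[p1][a], delta[p2][a]
            if (n1, n2) not in seen and len(v) < max_len:
                seen.add((n1, n2))
                frontier.append((n1, n2, v + a))
    return None                                    # no separating suffix found

def characterizing_set(delta, accepting, states, alphabet):
    """W = set of suffixes separating every pair of inequivalent states ('' stands for lambda)."""
    W = {""}
    for q1, q2 in product(states, repeat=2):
        if q1 < q2:
            v = separating_suffix(delta, accepting, q1, q2, alphabet)
            if v is not None:
                W.add(v)
    return W

# Hypothetical 3-state DFA; states 1 and 3 accepting, so lambda separates 1/2 and 2/3,
# while the suffix b separates 1 and 3.
delta = {1: {"a": 3, "b": 2}, 2: {"a": 3, "b": 1}, 3: {"a": 3, "b": 3}}
print(characterizing_set(delta, accepting={1, 3}, states=[1, 2, 3], alphabet="ab"))  # {'', 'b'}
```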

  13. Characteristic Sample • Let Sp(L) be the prefixes in a minimal spanning tree of DFA(L). • Let K(L) be { ua | u ∈ Sp(L), a ∈ Σ }. • Let the characteristic sample be Sp(L) ∪ { uv | u ∈ Sp(L) ∪ K(L), v ∈ Z_{q_u} }. (figure: prefix tree over λ, a, aa, ab, aaa, aab, abb, aabb, with nodes labelled by the corresponding DFA states)

  14. Why characteristic sample? • When forming a DFA from the prefix tree: • the states { q_u | u ∈ Sp(L) } cannot be merged, since they are separated by suffixes; • each state in { q_u | u ∈ K(L) } can be merged with at most one state in { q_u | u ∈ Sp(L) }. • It is easy to construct the minimal DFA from the sample if Sp(L) is known. (figure: the same prefix tree)

  15. State Merging Algorithms • Traverse the prefix tree from the root. • For each new state: if possible, merge it with some seen state; otherwise, promote it to a new state in the resulting DFA. • Red states are determined to become DFA states. • Blue states (the frontier) are the successors of red states, waiting to be candidates for merging with red states. • Repeatedly: merge a blue state with a red state if no inconsistency results; an "unmergeable" blue state becomes red.
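
A compact red-blue state-merging sketch in the spirit of this slide (simplified, my own encoding; the labelled sample below is hypothetical): build a prefix-tree acceptor from the sample, then test whether a "blue" node can be folded into a "red" node without giving a positive and a negative word the same state label.

```python
def build_pta(sample):
    """sample: list of (word, accepted?). Nodes are named by their prefix strings."""
    children, label = {"": {}}, {"": None}
    for word, accept in sample:
        for i in range(len(word)):
            u, ua = word[:i], word[:i + 1]
            children.setdefault(ua, {})
            label.setdefault(ua, None)
            children[u][word[i]] = ua
        label[word] = accept
    return children, label

def try_merge(children, label, red, blue):
    """Fold blue into red; return the merged (children, label), or None on a conflict."""
    ch = {q: dict(t) for q, t in children.items()}
    lb = dict(label)
    def fold(r, b):
        if lb[b] is not None:
            if lb[r] is not None and lb[r] != lb[b]:
                return False                     # accept/reject conflict
            lb[r] = lb[b]
        for a, bt in ch[b].items():
            if a in ch[r]:
                if not fold(ch[r][a], bt):       # fold common subtrees recursively
                    return False
            else:
                ch[r][a] = bt                    # attach the remaining subtree
        return True
    # Redirecting the edge that enters blue so that it points to red is omitted here.
    return (ch, lb) if fold(red, blue) else None

sample = [("", False), ("a", True), ("b", True), ("aa", False), ("ab", False),
          ("aaa", True), ("aab", False), ("abb", False), ("aabb", True)]
children, label = build_pta(sample)
print(try_merge(children, label, "", "a") is not None)   # False: labels conflict
print(try_merge(children, label, "a", "b") is not None)  # True: 'b' folds into 'a'
```

In the full red-blue loop, blue states are processed in a fixed order, merged when `try_merge` succeeds, and promoted to red otherwise; as the slides note later, the order matters.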

  16. State Merging: Example (figure: prefix tree acceptor with states labelled + / -, before any merges)

  17. State Merging: Example (figure: the same prefix tree acceptor, with a first candidate merge considered)

  18. State Merging: Example (figure: intermediate automaton after merging some states)

  19. State Merging: Example (figure: intermediate automaton after further merges)

  20. State Merging: Example (figure: the resulting minimal 3-state DFA)

  21. What if we change order? (figure: the original prefix tree acceptor, to be merged in a different order)

  22. About State Merging • The order in which blue states are considered matters. • If the considered states stay within { q_u | u ∈ K(L) }, a minimal DFA will be constructed. • Otherwise, "suboptimal" merges may result. • Remedy: Teacher and Learner agree on a fixed technique to construct Sp(L), e.g., considering strings in lexicographic order (the RPNI algorithm [Oncina, Garcia]). • Otherwise: use heuristics for choosing the "best merge", e.g., selecting states with the "largest" subtrees.

  23. About State Merging • Time complexity (in the size of the sample): • at most a quadratic number of candidate merges is considered; • each merge takes linear time to check. • I.e., the time complexity is polynomial.

  24. Active Learning • The Learner actively constructs the characteristic sample. • Membership query (to the Teacher): is w accepted or rejected? Answer: w is accepted/rejected. • Equivalence query (to the Oracle): is H equivalent to A? Answer: yes / counterexample v.

  25. Ideas • Maintain candidates for Sp(L), K(L), and W, where W is a distinguishing set. • Ask membership queries for { uv | u ∈ Sp(L) ∪ K(L), v ∈ W }. • If u in K(L) is separated from all prefixes in Sp(L) by a separating suffix, move u to Sp(L) and extend K(L). • For a new u' in K(L), let W be large enough to separate u' from all but (at most) one prefix in Sp(L).

  26. L* Algorithm (figure: target 3-state DFA and the initial observation table, with Sp(L) = {λ}, K(L) = {a, b}, and W = {λ}; the rows record λ: -, a: +, b: +)

  27. L* Algorithm (figure: the same target DFA and observation table, with membership results being filled in)

  28. L* Algorithm (figure: the observation table extended with the rows aa: - and ab: - in K(L))

  29. Closed - Form Hypothesis (figure: the table is closed; the corresponding two-state hypothesis DFA is formed)

  30. Ask Equivalence Query (figure: the hypothesis is submitted as an equivalence query; the Oracle returns a counterexample, aab)

  31. Decompose Counterexample (figure: the counterexample aab is decomposed to find a suffix that exposes a new state)

  32. Add New Suffix to W (figure: the suffix b is added to W and the observation table is refilled)

  33. Not Closed - Add New Prefix to Sp(L) (figure: the table is no longer closed; the prefix aa is moved from K(L) to Sp(L))

  34. Add New Extensions to K(L) (figure: the extensions aaa and aab are added to K(L) and the table is refilled)

  35. About L* [Angluin] • A DFA with n states can be learned using: • at most n equivalence queries; • O(|Σ| n² + n log m) membership queries, where m is the size of the longest counterexample. • The produced hypothesis is always the minimal DFA consistent with the seen membership queries; these form a characteristic set for the hypothesis. • The equivalence query idealizes a (possibly) exponential search for deviations from the model. • The setup with membership and equivalence queries makes it possible to formulate a polynomial-complexity algorithm.
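
To make the observation-table mechanics of the preceding slides concrete, here is a minimal Python sketch of the table and its closedness check (my own simplification: only the membership oracle is modelled, consistency handling and equivalence queries are left out, and the target language is a hypothetical example).

```python
class ObservationTable:
    def __init__(self, alphabet, member):
        self.A, self.member = alphabet, member
        self.S = {""}                 # candidate access strings, Sp(L) on the slides
        self.E = {""}                 # distinguishing suffixes, W on the slides
        self.T = {}                   # cached membership results

    def row(self, u):
        return tuple(self.T[u + e] for e in sorted(self.E))

    def fill(self):
        for u in set(self.S) | {s + a for s in self.S for a in self.A}:
            for e in self.E:
                if u + e not in self.T:
                    self.T[u + e] = self.member(u + e)

    def close(self):
        """Move one-letter extensions with unseen rows into S until the table is closed."""
        while True:
            self.fill()
            rows_S = {self.row(s) for s in self.S}
            unclosed = [s + a for s in self.S for a in self.A
                        if self.row(s + a) not in rows_S]
            if not unclosed:
                return
            self.S.add(unclosed[0])

    def hypothesis(self):
        """Rows of S become states; transitions follow the rows of one-letter extensions."""
        states = {self.row(s): s for s in self.S}
        delta = {(self.row(s), a): self.row(s + a) for s in self.S for a in self.A}
        accepting = {r for r, s in states.items() if self.T[s]}
        return states, delta, accepting

# Hypothetical target language: words over {a, b} containing at least one 'a'.
member = lambda w: "a" in w
ot = ObservationTable("ab", member)
ot.close()
states, delta, accepting = ot.hypothesis()
print(len(states), "states in the hypothesis")
```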

  36. Mealy Machines • Finite state machines with input and output: • I: input symbols; • O: output symbols; • Q: states, with initial state q0; • δ : Q × I → Q, the transition function; • λ : Q × I → O, the output function. • Often used for protocol modeling and protocol testing techniques. • Assumptions: deterministic, completely specified. (figure: 3-state Mealy machine over inputs a, b with outputs 0, 1)
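
A small sketch of a deterministic, completely specified Mealy machine in Python (encoding and the 3-state example are my own, not the exact machine from the slide):

```python
class Mealy:
    def __init__(self, states, inputs, delta, lam, q0):
        self.states, self.inputs = states, inputs
        self.delta, self.lam, self.q0 = delta, lam, q0

    def run(self, word):
        """Apply an input word, returning the produced output word."""
        q, out = self.q0, []
        for a in word:
            out.append(self.lam[(q, a)])   # output function lambda
            q = self.delta[(q, a)]         # transition function delta
        return "".join(out)

# Hypothetical 3-state example.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q2",
         ("q2", "a"): "q2", ("q2", "b"): "q2"}
lam   = {("q0", "a"): "1", ("q0", "b"): "1",
         ("q1", "a"): "0", ("q1", "b"): "0",
         ("q2", "a"): "0", ("q2", "b"): "0"}
m = Mealy({"q0", "q1", "q2"}, "ab", delta, lam, "q0")
print(m.run("abba"))   # -> '1000'
```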

  37. Conformance Testing • Given an MM A, construct a sample (i.e., a test suite) S such that A is the "best fit" to explain S. • Typically: A is the only MM with at most |A| states which is consistent with S.

  38. W-method • Let Sp(L) be the prefixes in a minimal spanning tree of the MM. • Let K(L) be { ua | u ∈ Sp(L), a ∈ I }. (figure: the 3-state Mealy machine with its spanning-tree transitions)

  39. W-method • Let Sp(L) be the prefixes in a minimal spanning tree of the MM. • Let K(L) be { ua | u ∈ Sp(L), a ∈ I }. • Let the sample be { uv | u ∈ Sp(L) ∪ K(L), v ∈ W }, where W is a distinguishing set. (figure: the test words traced on the Mealy machine)
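
A sketch of W-method test-suite generation along the lines of this slide (names follow the slide; the transition encoding, the toy machine, and the choice of W are hypothetical, and W is assumed to be given):

```python
def spanning_tree_prefixes(delta, inputs, q0):
    """Sp(L): one shortest access string per state, via BFS over the transition graph."""
    access, frontier = {q0: ""}, [q0]
    while frontier:
        q = frontier.pop(0)
        for a in inputs:
            nxt = delta[(q, a)]
            if nxt not in access:
                access[nxt] = access[q] + a
                frontier.append(nxt)
    return set(access.values())

def w_method_suite(delta, inputs, q0, W):
    Sp = spanning_tree_prefixes(delta, inputs, q0)
    K = {u + a for u in Sp for a in inputs}          # one-step extensions, K(L)
    return {u + v for u in Sp | K for v in W}        # { uv | u in Sp ∪ K, v in W }

# Hypothetical 3-state Mealy transition structure and distinguishing set.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q2",
         ("q2", "a"): "q2", ("q2", "b"): "q2"}
print(sorted(w_method_suite(delta, "ab", "q0", {"a", "b"})))
```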

  40. Z-method • Let Sp(L) be the prefixes in a minimal spanning tree of the MM. • Let K(L) be { ua | u ∈ Sp(L), a ∈ I }. • Let the sample be { uv | u ∈ Sp(L) ∪ K(L), v ∈ Z_{q_u} }, where { Z_q | q ∈ Sp(L) } is a separating family of the MM. (figure: the test words traced on the Mealy machine)

  41. Learning vs. Conformance Testing • Learning: find a concept A which is the "best fit" to explain a given sample S. • Conformance testing: given a concept A, construct a sample S such that A is the "best fit" to explain S. • For automata learning: a characteristic sample for A is also a conformance test suite for A.

  42. L* vs. W-method • A sample generated by L* is also a conformance test suite generated by the W-method. • A conformance test suite generated by the W-method is a characteristic sample: A is the only MM of size at most |A| which is consistent with S. • Q: Can we check whether A is the only automaton of size at most |A| + k which is consistent with S?

  43. Vasilevski-Chow Test Suite • Let k = 2. • The test suite should allow a non-minimized MM. (figure: the Mealy machine augmented with two redundant states r1 and r2)

  44. Vasilevski-Chow Test Suite • Let k = 2. • The test suite should allow a non-minimized MM. • It must cope with an anomaly (an ERROR transition hidden in the added states). (figure: the augmented Mealy machine with an ERROR transition)

  45. Resulting Test Suite • Let W be a characterizing set for A. • The VC test suite has the form S = { uxv | u ∈ Sp(L) ∪ K(L), x ∈ I^{≤k}, v ∈ W }. • A is the only MM of size at most |A| + k which is consistent with S. • Size of the sample: O(|Σ|^{k+1} n²).
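
A sketch of assembling the VC-style suite S = { uxv | u ∈ Sp(L) ∪ K(L), x ∈ I^{≤k}, v ∈ W }, assuming Sp(L), K(L), and W have already been computed (the toy sets below are hypothetical):

```python
from itertools import product

def vc_suite(Sp, K, inputs, W, k):
    """All words u + x + v with u in Sp ∪ K, |x| <= k over the inputs, and v in W."""
    middles = [""] + ["".join(p) for n in range(1, k + 1)
                      for p in product(inputs, repeat=n)]
    return {u + x + v for u in Sp | K for x in middles for v in W}

# Toy sets; the suite size grows as O(|I|^{k+1} n^2) in the worst case.
Sp, K, W = {"", "a"}, {"a", "b", "aa", "ab"}, {"a", "b"}
print(len(vc_suite(Sp, K, "ab", W, k=2)))
```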

  46. Adaptive Model Checking [Peled, Yannakakis 02] • L* produces a hypothesis H of the SUT. • H is model checked against φ. • If model checking reports OK, conformance testing is performed against the SUT.

  47. Adaptive Model Checking [Peled, Yannakakis 02] • If model checking of H against φ yields a counterexample w, check the behavior of the SUT on w.

  48. Adaptive Model Checking [Peled, Yannakakis 02] • If the SUT reproduces the counterexample w, it is a true counterexample: report ERROR.

  49. Adaptive Model Checking [Peled, Yannakakis 02] • If the SUT does not reproduce w, it is a false counterexample: w becomes a new counterexample for L*.

  50. LearnLib: a Tool for Inferring Models • Developed at Dortmund Univ. [Steffen, Raffelt, Howar, Merten]. • Central idea: use domain-specific knowledge to reduce the number of queries: • prefix-closure; • independence between symbols (e.g., in parallel components); • symmetries. • These properties correspond to "filters" between the observation table and the SUT.

  51. Overview of the LearnLib (figure: tool architecture) • Algorithms: Angluin (automatic), Angluin (interactive), observation packs, discrimination tree, ..., for DFA and Mealy machines. • Approximative equivalence queries: state cover, transition cover, W-method, Wp-method, UIO-method, UIOv-method (DFA and Mealy). • Filters and chains of filters: prefix closure, independence, symmetry, I/O determinism. • Further features: query strategies, access to internal constraints, conversion of Mealy machines to DFA, insertion of examples and distinguishing strings, model checking.

  52. What about Extensions of Automata? • Input and output symbols parameterized by data values. • State variables remember parameters in received input. • Types of parameters could be, e.g.: • identifiers of connections, sessions, users; • sequence numbers; • time values.

  53. Timed Automata • Based on standard automata. • Clocks give upper and lower bounds on the distance in time between occurrences of symbols. • Temporal properties of timed automata (reachability, LTL, ...) can be model-checked. • Implemented in tools (UPPAAL, IF/Kronos). (figure: two-location timed automaton with edges get, x ≥ 10 / x := 0 and put, x ≤ 2 / x := 0) • Timed words: (get, 14.4) (put, 16.4) (get, 29.34) (put, 30.3) ...

  54. Event-Recording Automata • Timed automata cannot be determinized in general. • Event-recording automata (ERA): one clock for each symbol, reset on that symbol. • ERA can be determinized. • Assumption: the inference algorithm can precisely control and record the timing of symbols. (figure: two-location ERA with edges get, x_put ≥ 10 and put, x_get ≤ 2) • Timed words: (get, 14.4) (put, 16.4) (get, 29.34) (put, 30.3) ... • Clocked words: (get, [14.4, 14.4]) (put, [2.0, 14.4]) (get, [14.94, 12.94]) (get, [0.96, 13.9]) ...

  55. Event-Recording Automata • Σ (symbols): {put, get} • L (locations): {l0, l1} • l0: initial location • E (edges) ⊆ L × Σ × Guards × L • F (accepting locations) ⊆ L (figure: the ERA with edges get, x_put ≥ 10 and put, x_get ≤ 2)

  56. Event-Recording Automata • Syntax as before; guards are conjunctions of interval constraints. • Semantics: • Q (states) = L × R≥0 × R≥0 • q0 (initial state) = (l0, [0, 0]) • I = Σ × R≥0 × R≥0 • δ : Q × I → Q • Example: δ(<l0, [0, 0]>, <get, [14.4, 14.4]>) = <l1, [0, 14.4]>; δ(<l1, [0, 14.4]>, <put, [2.0, 14.4]>) = <l0, [2.0, 0]>.
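
A small sketch that turns a timed word into a clocked word for an event-recording automaton, under the convention that clock x_a records the time since the last occurrence of symbol a (or since the start, if a has not yet occurred); this is my own illustration of the semantics, not code from the tutorial.

```python
def clocked_word(timed_word, symbols=("get", "put")):
    last = {a: 0.0 for a in symbols}        # time of the last occurrence (start = 0)
    out = []
    for sym, t in timed_word:
        valuation = {a: round(t - last[a], 2) for a in symbols}
        out.append((sym, valuation))
        last[sym] = t                        # x_sym is reset on sym
    return out

timed = [("get", 14.4), ("put", 16.4), ("get", 29.34), ("put", 30.3)]
for step in clocked_word(timed):
    print(step)
# -> ('get', {'get': 14.4, 'put': 14.4}), ('put', {'get': 2.0, 'put': 16.4}), ...
```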

  57. Non-Unique Representation • Deterministic ERAs do not have unique representations. (figure: a three-location ERA l0, l1, l2 with an a-edge guarded by x_a = 1, whose subsequent b-edge can equivalently be guarded by x_a ≥ 1 or by x_b ≥ 2)

  58. Learning DERAs by Quotienting [Grinchtein, Leucker, et al.] • Find an equivalence relation ≈ on symbols and states such that: • ≈ respects accepting/non-accepting states; • q ≈ q' and a ≈ a' implies δ(q, a) ≈ δ(q', a'). • Learn the quotient DFA: Σ/≈, Q/≈, δ (with δ([q]≈, [a]≈) = [δ(q, a)]≈), F/≈. • For DERAs: • equivalence on states is based on region equivalence; • assume K_a is the largest constant in constraints on x_a; • <l, [x_a, x_b]> ≈ <l, [y_a, y_b]> iff, for each clock, either both values exceed its largest constant, or they have the same integer part and are integral at the same time; and, if x_a ≤ K_a and x_b ≤ K_b, then x_a ≤ x_b iff y_a ≤ y_b; • <a, [x_a, x_b]> ≈ <a, [y_a, y_b]> iff for all k ≤ K_a: x_a ≤ k iff y_a ≤ k, and x_a ≥ k iff y_a ≥ k.
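
A sketch of the region-equivalence test on clock valuations underlying this quotient (my own encoding; the ordering condition is stated here on fractional parts, as in the standard region construction):

```python
import math

def frac(x):
    return x - math.floor(x)

def region_equiv(v, w, K):
    """v, w: dicts clock -> value; K: dict clock -> largest constant for that clock."""
    for c in K:
        if v[c] > K[c] and w[c] > K[c]:
            continue                                     # both beyond the largest constant
        if math.floor(v[c]) != math.floor(w[c]):
            return False                                 # same integer part required
        if (frac(v[c]) == 0) != (frac(w[c]) == 0):
            return False                                 # integral at the same time
    clocks = [c for c in K if v[c] <= K[c] and w[c] <= K[c]]
    for c in clocks:
        for d in clocks:
            if (frac(v[c]) <= frac(v[d])) != (frac(w[c]) <= frac(w[d])):
                return False                             # same ordering of fractional parts
    return True

K = {"a": 3, "b": 2}
print(region_equiv({"a": 2.2, "b": 1.5}, {"a": 2.7, "b": 1.9}, K))  # True: same region
print(region_equiv({"a": 2.2, "b": 1.5}, {"a": 2.7, "b": 1.3}, K))  # False: ordering differs
```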

  59. Regions: From Infinite to Finite • A concrete state, e.g. (l, [2.2, 1.5]), is abstracted to a symbolic state (a region). • A region is an equivalence class of clock valuations; there are only finitely many of them. (figure: the region partition of the clock space over x_a and x_b)

  60. Abstraction of Symbols • A concrete symbol, e.g. (a, [2.2, 1.5]), is abstracted to an abstract symbol. (figure: the same region partition, now used to abstract symbols)

  61. We Need Only Initial Regions • Concrete states reached right after a symbol, e.g. (l, [0.7, 0]), lie in regions where one clock is 0; only these initial regions are needed. (figure: the region partition with the initial regions highlighted)

  62. Regions Preserved by Transitions • The region of a concrete state such as (l, [0.7, 0]) is mapped to a region by each transition. (figure: the region partition and the image of a region under a transition)

  63. Simple DERAs • DERAs with "small guards". (figure: a DERA decomposed into simple DERAs whose edges carry guards such as 0 < x_put < 1, 0 < x_get < 1, x_put = 10, x_get = 2, x_get ≤ 2, x_put ≥ 10, x_put > 10)

  64. Modifying the Setup • The following setup does not work directly: • Membership query (to the Teacher): is w accepted or rejected? Answer: w is accepted/rejected. • Equivalence query (to the Oracle): is H equivalent to A? Answer: yes / counterexample v.

  65. Adding an Assistant • The Learner actively constructs the characteristic sample. • The Learner asks membership queries for abstract words; the Assistant translates them into membership queries for timed words to the Teacher, and passes the accepted/rejected answers back. • The Learner asks equivalence queries for quotient automata; the Assistant translates them into equivalence queries for timed automata to the Oracle, which answers yes or with a counterexample v.

  66. Query Complexity • The size of the region graph is roughly O(|L| · K^{|Σ|}). • The number of membership queries is about cubic in this number.

  67. Single-Clock Automata [Verwer et al. 09] • Consider deterministic timed automata with one clock. • Still, no unique minimal representation. • But there is a variant of the Nerode congruence, if we know where resets occur. • Timed word: (get, 14.4) (put, 16.4) (get, 29.34) (put, 30.3) ... • Clocked word: (get, 14.4) (put, 2.0) (get, 12.96) ... • (get, 14.4) reset (put, 2.0) reset (get, 12.96) reset is equivalent to (get, 12.4) reset, but not to (get, 12.4). (figure: one-clock timed automaton with edges get, x ≥ 10 / x := 0 and put, x ≤ 2 / x := 0)

  68. Single-Clock Automata [Verwer et al. 09] • The timed language can be formed from a finite number of congruence classes. • Only, it must be determined when to reset. • Define a canonical form by prioritizing conflicts. (figure: the same one-clock timed automaton)

  69. Refining Guards [Verwer et al. 09] • Guards can be refined from counterexamples: • get @0 put @2 accepted; • get @3 put @7 rejected. • Determine the reason for the difference by investigating other traces, using a (binary) search procedure. • It finds an "explaining pair", e.g.: • get @2.2 put @4.2 accepted; • get @2.2 put @4.7 rejected. • This suggests a reset at get and the guard x ≤ 2 on the put transition. (figure: the one-clock timed automaton being refined)
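
A sketch of the binary-search idea for locating such a guard boundary (my own reconstruction, not the authors' code): given a delay for which the trace is accepted and one for which it is rejected, narrow the interval until a guard constant such as the 2 in x ≤ 2 can be read off.

```python
def find_boundary(accepts, d_lo, d_hi, eps=0.01):
    """accepts(d) -> bool; assumes accepts(d_lo), not accepts(d_hi), and d_lo < d_hi."""
    while d_hi - d_lo > eps:
        mid = (d_lo + d_hi) / 2
        if accepts(mid):
            d_lo = mid
        else:
            d_hi = mid
    return d_lo, d_hi          # the guard constant lies in this small interval

# Hypothetical SUT behaviour for 'get @t1 put @t2': accepted iff the delay between
# get and put is at most 2 (i.e., a guard x <= 2 with x reset on get).
accepts = lambda delay: delay <= 2.0
print(find_boundary(accepts, 0.0, 5.0))   # -> an interval around 2.0, suggesting x <= 2
```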

  70. Single-Clock Automata [Verwer et al. 09] • Have "reasonable" canonical forms. • There exist characteristic samples which are polynomial in the size of the canonical form (and do not depend on the largest constant). • Learning can be polynomial in the (membership, equivalence)-query model. • A version for multiple clocks exists [Grinchtein, Jonsson], with higher complexity. (figure: the one-clock timed automaton)

  71. Applications to Realistic Protocols

  72. SIP Protocol [Aarts, Jonsson, Uijen] • From RFC 3261: SIP is an application-layer control protocol that can: • establish, modify, and terminate multimedia sessions (conferences), such as Internet telephony calls; • invite participants to already existing sessions, such as multicast conferences.

  73. Structure of SIP Packets • Method(From; To; Contact; CallId; CSeq; Via), where: • Method: type of request, either INVITE, PRACK, or ACK; • From and To: addresses of the originator and receiver; • CallId: unique session identifier; • CSeq: sequence number that orders transactions in a session. • Ignored below: • Contact: address where the Client wants to receive input; • Via: transport path for the transaction.

  74. Part of the SIP Server • Variables: From, CurId, CurSeq. Constants: Me. • s0 → s1: INVITE(from, to, cid, cseq) [to == Me] / From = from; CurId = cid; CurSeq = cseq; output 100(From, to, CurId, CurSeq). • s1 → s2: PRACK(from, to, cid, cseq) [from == From ∧ to == Me ∧ cid == CurId ∧ cseq == CurSeq+1] / 200(From, to, CurId, CurSeq+1). • s2 → s3: ACK(from, to, cid, cseq) [from == From ∧ to == Me ∧ cid == CurId ∧ cseq == CurSeq] / ε.

  75. Finding an Abstraction • The abstraction of a concrete message such as PRACK(558, 1) depends on the internal state of the SUT, i.e., on the previous history. • The Assistant must maintain relevant parts of the history: e.g., local copies of CurId, CurSeq.
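
A hypothetical sketch of such an Assistant-side abstraction: it keeps local copies of CurId and CurSeq and maps concrete SIP parameters to abstract values such as CURRENT / NEXT / OTHER (the class, names, and abstract alphabet are illustrative only, not the mapper used in the case study).

```python
class SipAbstraction:
    def __init__(self):
        self.cur_id = None
        self.cur_seq = None

    def abstract(self, method, cid, cseq):
        """Map a concrete message to an abstract symbol, updating the local history."""
        if method == "INVITE":
            self.cur_id, self.cur_seq = cid, cseq        # remember CurId, CurSeq
            return "INVITE(FRESH)"
        id_part = "CURRENT" if cid == self.cur_id else "OTHER"
        if cseq == self.cur_seq:
            seq_part = "CURRENT"
        elif cseq == self.cur_seq + 1:
            seq_part = "NEXT"
        else:
            seq_part = "OTHER"
        return f"{method}({id_part},{seq_part})"

mapper = SipAbstraction()
print(mapper.abstract("INVITE", 558, 0))   # INVITE(FRESH)
print(mapper.abstract("PRACK", 558, 1))    # PRACK(CURRENT,NEXT)
print(mapper.abstract("ACK", 558, 0))      # ACK(CURRENT,CURRENT)
```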

  76. Adapting to Automata Learning (figure: the Learner communicates with the SIP implementation (SUT) through the Assistant)
