

  1. Semi-formal Validation of Cyber-Physical Systems. Thao Dang (2). Collaborators: Arvind Adimoolam (1), Alexandre Donzé (3), James Kapinski (4), Xiaoqing Jin (5). (1, 2) VERIMAG / (2) CNRS, Grenoble, France; (3) Decyphir, Inc., France; (4, 5) Toyota Motors North America R&D, USA.

  2. Semi-formal Validation of CPS - Testing with Quantitative Guarantees. ◮ Falsification: find an input signal such that the output violates the requirement. ◮ Coverage: a measure to evaluate testing quality; when no bug is found, it quantifies the "correctness degree" of the system.

  3. Validation of CPS. ◮ CPS models: the specification of the input-output function f can be highly complex, e.g. differential equations + automata + look-up tables + delays + control programs. ◮ Black-box systems: testing without knowing a model f of the system under test, i.e. only by sampling input signals.

  4. Robustness - Quantitative Guarantee. ◮ Quantitative semantics: a function ρ measures the extent of satisfaction of a formal specification φ by an output y: y → ρ_φ(y). ◮ Robustness of STL formulas, e.g. given φ: □(y ≤ 0.04), ρ_φ(y) = min_{t ≥ 0} (0.04 − y(t)). ◮ (Robustness < 0) ⇒ falsified.
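
As a concrete anchor for the quantitative semantics, here is a minimal sketch of this robustness computation on a discretely sampled trace. It assumes y is a NumPy array of output samples; full STL robustness (nested temporal operators, time intervals) is what tools like Breach and S-TaLiRo implement.

```python
import numpy as np

def robustness_globally_le(y, c=0.04):
    """Robustness of the STL formula G(y <= c) on a sampled trace y.

    The formula holds iff y(t) <= c at every sample, so the robustness
    is the worst-case margin min_t (c - y(t)); a negative value means
    the trace falsifies the requirement.
    """
    return np.min(c - np.asarray(y))

# Hypothetical trace: stays below 0.04 except for a brief excursion.
y = np.array([0.01, 0.02, 0.05, 0.03])
print(robustness_globally_le(y))  # -0.01 < 0, so the trace is a falsifier
```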

  6. Coverage - Star Discrepancy. ◮ Let P be a set of k points inside B = [l_1, L_1] × ... × [l_n, L_n]. ◮ Local discrepancy: D(P, J) = |#(P, J)/k − vol(J)/vol(B)|, where #(P, J) is the number of points of P inside the sub-box J. Example: D(P, J) = |2/7 − 1/4|. ◮ Star discrepancy: the supremum of the local discrepancy values over all sub-boxes J of B.
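
The star discrepancy is a supremum over infinitely many sub-boxes, so exact computation is expensive. A rough sketch that lower-bounds it by sampling random sub-boxes, assuming B and each J are given as (lower, upper) corner arrays:

```python
import numpy as np

def local_discrepancy(P, box, J):
    """D(P, J) = |#(P, J)/k - vol(J)/vol(B)| for one sub-box J of B."""
    P = np.asarray(P)
    k = len(P)
    inside = np.all((P >= J[0]) & (P <= J[1]), axis=1)
    vol_J = np.prod(J[1] - J[0])
    vol_B = np.prod(box[1] - box[0])
    return abs(inside.sum() / k - vol_J / vol_B)

def star_discrepancy_estimate(P, box, n_boxes=10000, rng=None):
    """Lower-bound estimate: max of D(P, J) over random sub-boxes J."""
    rng = np.random.default_rng(rng)
    lo, hi = box
    best = 0.0
    for _ in range(n_boxes):
        a = rng.uniform(lo, hi)     # random lower corner
        b = rng.uniform(a, hi)      # random upper corner above it
        best = max(best, local_discrepancy(P, box, (a, b)))
    return best

B = (np.zeros(2), np.ones(2))
pts = np.random.default_rng(0).random((100, 2))
print(star_discrepancy_estimate(pts, B, n_boxes=2000, rng=1))
```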

  7. Coverage - Star Discrepancy. Faure sequence of 100 points; its star discrepancy value is 0.048.

  8. Coverage - Star Discrepancy. Halton sequence of 100 points; its star discrepancy value is 0.05.

  9. Coverage - Star Discrepancy. Sequence of 100 points generated by a pseudo-random function in the C library; its star discrepancy value is 0.1.

  10. From Points to Signals. ◮ The actual input signal space is INFINITE-DIMENSIONAL, but we may search over a finite-dimensional space. ◮ For example, a uniform step signal over a bounded time horizon can be represented by a finite set of parameters: u ∈ R^m → ũ. ◮ Extension to signals satisfying some temporal properties (STL).
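
A minimal sketch of this parametrization, mapping u ∈ R^m to a uniform step signal (the function names are illustrative, not from the talk):

```python
import numpy as np

def step_signal(u, horizon):
    """Map a parameter vector u in R^m to a uniform step (piecewise-
    constant) input signal over [0, horizon]: the i-th control point
    holds on the i-th of m equal sub-intervals."""
    m = len(u)
    def signal(t):
        i = min(int(t / horizon * m), m - 1)  # index of the active step
        return u[i]
    return signal

u = np.array([0.0, 1.0, 0.5])          # 3 control points
sig = step_signal(u, horizon=30.0)
print(sig(5.0), sig(15.0), sig(29.9))  # 0.0 1.0 0.5
```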

  11. Testing as Optimization. 1. Define a new robustness function ρ̃_φ on the parametrized input space. 2. Falsification: find ũ achieving min_{ũ ∈ S ⊂ R^m} ρ̃_φ(ũ) < 0. 3. Achieve good coverage over the input signal space or state space.
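
To make the optimization view concrete, here is a sketch of the objective: the assumed simulate function stands for the black-box system under test, and the uniform random search is only a baseline for the guided strategies developed on the following slides.

```python
import numpy as np

def make_objective(simulate, rho_phi):
    """Turn falsification into optimization: rho_tilde(u) = rho_phi(y),
    where y is the output trace produced by simulating input u."""
    def rho_tilde(u):
        y = simulate(u)          # black-box system under test (assumed)
        return rho_phi(y)
    return rho_tilde

def random_falsify(rho_tilde, bounds, n_samples=1000, rng=0):
    """Baseline: uniform random search for a parameter with rho < 0."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    best_u, best_rho = None, np.inf
    for _ in range(n_samples):
        u = rng.uniform(lo, hi)
        r = rho_tilde(u)
        if r < best_rho:
            best_u, best_rho = u, r
        if best_rho < 0:         # falsifier found, stop early
            break
    return best_u, best_rho
```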

  13. Testing as Optimization. ◮ Randomized exploration, inspired by the probabilistic motion planning technique RRT (Rapidly-exploring Random Trees) from robotics, guided by coverage criteria. ◮ Classification + black-box search.

  14. Sensitivity to Initial Search Conditions. ◮ Common black-box search approaches bias sampling towards a local optimum; they are generally called stochastic local search techniques, e.g. Simulated Annealing, CMA-ES, Nelder-Mead. ◮ The effectiveness of local search is sensitive to initial conditions.

  15. Problem: Find Good Initialization Conditions. 1. Global search: find well-separated regions of the search space that are likely to contain a falsifier. 2. Initialize local search with promising initialization conditions based on the above analysis.

  16. Overview of Global Search. ◮ STATISTICAL CLASSIFICATION + BIASED SAMPLING.

  20. Classification. 1. Use an axis-aligned hyperplane for the best possible separation of points BELOW and ABOVE the average robustness µ. 2. Criterion for separation: minimize the misclassification error, as in soft-margin Support Vector Machines (SVM): error(d, r) = min_{p ∈ {0,1}} Σ_{x ∈ S} 1[(−1)^p (ρ(x) − µ)(x_d − r) < 0], where d ∈ {1, ..., m} is the axis along which the classifier is aligned, r ∈ [a_d, b_d] is the position of the classifier, S is the set of points, µ is the average robustness, and 1[·] is the indicator function.
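
One plausible reading of this criterion in code: enumerate the axes and candidate thresholds, and count misclassified points for both orientations p. This is an interpretation of the slide's error measure, not the authors' implementation.

```python
import numpy as np

def best_axis_aligned_classifier(X, rho):
    """Find axis d and threshold r minimizing the misclassification
    error when separating samples with rho below the mean from those
    above by the hyperplane x_d = r (a simplified sketch)."""
    X, rho = np.asarray(X), np.asarray(rho)
    mu = rho.mean()
    labels = np.sign(rho - mu)            # +1 above average, -1 below
    best = (None, None, np.inf)           # (d, r, error)
    for d in range(X.shape[1]):
        xs = np.sort(np.unique(X[:, d]))
        for r in (xs[:-1] + xs[1:]) / 2:  # candidate thresholds between points
            side = np.sign(X[:, d] - r)
            # misclassification count for both orientations p in {0, 1}
            err = min(np.sum(labels * side < 0), np.sum(labels * side > 0))
            if err < best[2]:
                best = (d, r, err)
    return best
```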

  21. Biased Sampling.

  25. Coverage-based Probability Distribution. ◮ Let h_i denote the coverage in rectangle R_i. ◮ Coverage-based probability: P_c^i = (1 − h_i) / Σ_{j=1}^K (1 − h_j).
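
A one-line sketch of this distribution; rectangles with low coverage receive more probability mass.

```python
import numpy as np

def coverage_probabilities(h):
    """P_c^i = (1 - h_i) / sum_j (1 - h_j): rectangles with low
    coverage h_i get proportionally more sampling probability."""
    w = 1.0 - np.asarray(h, dtype=float)
    return w / w.sum()

print(coverage_probabilities([0.9, 0.5, 0.1]))  # ~ [0.067, 0.333, 0.6]
```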

  26. Robustness-based Probability Distribution. ◮ Given the set of samples S_i in rectangle R_i, the expected reduction below the average robustness is λ_i = (1/|S_i|) Σ_{x ∈ S_i} max(µ_i − ρ(x), 0). ◮ Expected reduced robustness below average: θ_i = µ_i − λ_i. ◮ So we heuristically determine a robustness-based probability distribution as P_r^i = (1/θ_i) / Σ_{j=1}^K (1/θ_j).
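
A sketch of the same computation, assuming all θ_i are positive (true when robustness values are positive, as in these heuristics):

```python
import numpy as np

def robustness_probabilities(samples_rho):
    """For each rectangle i with sampled robustness values S_i:
    lambda_i = mean shortfall below the average mu_i,
    theta_i  = mu_i - lambda_i (expected reduced robustness),
    P_r^i    = (1/theta_i) / sum_j (1/theta_j)."""
    thetas = []
    for rho in samples_rho:
        rho = np.asarray(rho, dtype=float)
        mu = rho.mean()
        lam = np.mean(np.maximum(mu - rho, 0.0))
        thetas.append(mu - lam)
    inv = 1.0 / np.asarray(thetas)   # assumes theta_i > 0
    return inv / inv.sum()
```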

  27. Weighted Probabilistic Sampling. ◮ User-defined weight w ∈ [0, 1]. ◮ Combine the coverage-based and robustness-based probabilities, P_i = w P_c^i + (1 − w) P_r^i, and distribute N samples accordingly.
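
A sketch of the weighted combination and of distributing N samples over the K rectangles; the multinomial draw is one plausible way to "distribute N samples accordingly".

```python
import numpy as np

def allocate_samples(P_c, P_r, w, N, rng=0):
    """Combine both distributions, P_i = w*P_c^i + (1-w)*P_r^i, and
    distribute N new samples over the K rectangles accordingly."""
    P = w * np.asarray(P_c) + (1 - w) * np.asarray(P_r)
    rng = np.random.default_rng(rng)
    return rng.multinomial(N, P)   # number of new samples per rectangle

print(allocate_samples([0.07, 0.33, 0.6], [0.2, 0.3, 0.5], w=0.5, N=100))
```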

  28. Singular Samples. Samples with very low robustness are called singular samples. ◮ Given γ: the vector of the lowest robustness values in the different rectangles. ◮ µ_γ: the average of the elements of γ; λ_γ: the average deviation below µ_γ. Definition: a point x ∈ ∪_{i=1}^k S_i for which ρ(x) ≤ max(µ_γ − 3λ_γ, λ_γ) is called a singular sample. Reason: for a normal distribution, fewer than 15% of samples are singular.
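
A sketch of this test, interpreting λ_γ as the mean shortfall below µ_γ, mirroring the λ_i defined two slides earlier (an assumption):

```python
import numpy as np

def singular_samples(all_rho, lowest_per_rect):
    """Flag samples whose robustness falls below the singularity
    threshold max(mu_gamma - 3*lambda_gamma, lambda_gamma), where
    gamma is the vector of lowest robustness values per rectangle."""
    gamma = np.asarray(lowest_per_rect, dtype=float)
    mu_g = gamma.mean()
    lam_g = np.mean(np.maximum(mu_g - gamma, 0.0))  # assumed reading of lambda_gamma
    threshold = max(mu_g - 3 * lam_g, lam_g)
    return np.asarray(all_rho, dtype=float) <= threshold  # boolean mask
```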

  29. Singularity-based Sampling. Given N, a user-defined threshold number of samples for classification: ◮ if R_i contains a singular sample and X_i samples in total, then add max(0, N − X_i) samples.
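
The top-up rule in code form, trivial but stated for completeness:

```python
def topup_count(N, X_i):
    """Extra samples for a rectangle that holds a singular sample:
    bring its total up to the classification threshold N."""
    return max(0, N - X_i)

print(topup_count(N=50, X_i=35))  # 15 more samples needed
```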

  30. One Iteration of Global Search. Given N: the user-defined threshold number of samples for classification.

  34. Illustration of Final Subdivision.

  35. CMA-ES Local Search. CMA-ES: Covariance Matrix Adaptation Evolution Strategy. ◮ Procedure: in each iteration, update the mean and covariance matrix of the normally distributed samples based on the least robust samples.
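
A sketch of such a loop using the open-source cma package as a stand-in for the CMA-ES inside Breach (which is what the experiments actually use); rho_tilde is the robustness objective from the optimization slides.

```python
import cma  # pip install cma

def cma_es_search(rho_tilde, x0, sigma0=0.5, budget=2000):
    """Minimize the robustness objective with CMA-ES: each iteration
    samples from a multivariate normal, then updates its mean and
    covariance matrix from the lowest-robustness (best) samples."""
    es = cma.CMAEvolutionStrategy(x0, sigma0, {'maxfevals': budget})
    while not es.stop():
        xs = es.ask()                            # sample candidate inputs
        es.tell(xs, [rho_tilde(x) for x in xs])  # rank and adapt
        if es.best.f < 0:                        # falsifier found
            break
    return es.best.x, es.best.f
```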

  39. Combine Global and CMA-ES Local Search. ◮ Use global search to find a good initial mean and covariance matrix for the CMA-ES search. 1. Initialize the mean with each of the lowest-robustness points in promising regions. 2. Initialize the mean and covariance matrix as the mean and covariance of the lowest-robustness points in promising regions.
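
A sketch of the second initialization scheme; how the computed covariance is handed to a particular CMA-ES implementation is library-specific and omitted here.

```python
import numpy as np

def init_from_promising(points, rho, n_best=10):
    """Set the initial mean and covariance for CMA-ES to the mean and
    covariance of the lowest-robustness points found by global search
    in a promising region (requires n_best >= 2)."""
    idx = np.argsort(rho)[:n_best]        # the n_best least robust points
    best = np.asarray(points)[idx]
    mean0 = best.mean(axis=0)
    cov0 = np.cov(best, rowvar=False)
    return mean0, cov0
```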

  42. Example: Automatic Powertrain Control System. ◮ Requirement: □_{[5,10]}(η < 0.5). ◮ Parametrization: pedal angle signal with 10 control points. ◮ Dimension of the search space: 10.

  43. Experimental Results: PTC Benchmark (T.O. = timed out).

  Solver                                     | Seed  | Computation time (secs) | Falsified
  Hyperplane classification + CMA-ES-Breach | 0     | 2891        | yes
                                             | 5000  | 2364        | yes
                                             | 10000 | 2101        | yes
                                             | 15000 | 2271        | yes
  CMA-ES-Breach                              | 0     | T.O. (5000) | no
                                             | 5000  | T.O. (5000) | no
                                             | 10000 | T.O. (5000) | no
                                             | 15000 | T.O. (5000) | no
  Grid-based random sampling                 | 0     | T.O. (5000) | no
                                             | 5000  | T.O. (5000) | no
                                             | 10000 | 3766        | yes
                                             | 15000 | 268         | yes
  Global Nelder-Mead-Breach                  |       | T.O. (5000) | no
  S-TaLiRo (Simulated Annealing)             |       | 4481        | yes

  44. Example: Automatic Transmission. ◮ Requirement: φ = ¬((◇_{[0,10]} v > 50) ∧ (□ w ≤ 2520)). ◮ Parametrization: throttle with 7 control points, brake with 3 control points. ◮ Dimension of the search space: 7 + 3 = 10.
