Discontinuous Observers with Strong Convergence Properties and Some Applications

Discontinuous Observers with Strong Convergence Properties and Some Applications. Jaime A. Moreno (JMorenoP@ii.unam.mx), Instituto de Ingeniería, Universidad Nacional Autónoma de México, Mexico City, Mexico. Seminars in Systems and


1. Recapitulation: Sliding Mode Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ converges in finite time, and $e_2$ converges exponentially fast. If there are unknown inputs/uncertainties: no convergence; at best a bounded error. Only $e_1$ converges in finite time!

2. Recapitulation: Sliding Mode Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ converges in finite time, and $e_2$ converges exponentially fast. If there are unknown inputs/uncertainties: no convergence; at best a bounded error. Only $e_1$ converges in finite time! The convergence time depends on the initial conditions of the observer.

3. Recapitulation: Sliding Mode Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ converges in finite time, and $e_2$ converges exponentially fast. If there are unknown inputs/uncertainties: no convergence; at best a bounded error. Only $e_1$ converges in finite time! The convergence time depends on the initial conditions of the observer. It is not the solution we expected! None of the objectives has been achieved!

4. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

5. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

6. Super-Twisting Algorithm (STA). Plant: $\dot{x}_1 = x_2$, $\dot{x}_2 = w(t)$. Observer: $\dot{\hat{x}}_1 = -l_1 |e_1|^{1/2} \operatorname{sign}(e_1) + \hat{x}_2$, $\dot{\hat{x}}_2 = -l_2 \operatorname{sign}(e_1)$. Estimation error: $e_1 = \hat{x}_1 - x_1$, $e_2 = \hat{x}_2 - x_2$, so that $\dot{e}_1 = -l_1 |e_1|^{1/2} \operatorname{sign}(e_1) + e_2$, $\dot{e}_2 = -l_2 \operatorname{sign}(e_1) - w(t)$. Solutions are understood in the sense of Filippov.
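
To make the mechanics concrete, here is a minimal numerical sketch of the super-twisting error dynamics above, integrated with an explicit Euler step. The gains, the step size and the bounded disturbance $w(t)$ are illustrative choices, not values taken from the slides, and a crude Euler discretization only approximates the Filippov solutions.

```python
import numpy as np

# Minimal Euler simulation of the super-twisting error dynamics:
#   e1_dot = -l1*|e1|^(1/2)*sign(e1) + e2
#   e2_dot = -l2*sign(e1) - w(t)
# Gains, step size and the bounded disturbance w(t) are illustrative.

def sta_error_dynamics(e1, e2, w, l1=1.5, l2=1.1):
    de1 = -l1 * np.sqrt(abs(e1)) * np.sign(e1) + e2
    de2 = -l2 * np.sign(e1) - w
    return de1, de2

def simulate(e1=1.0, e2=0.5, dt=1e-4, t_end=10.0):
    t = 0.0
    while t < t_end:
        w = 0.5 * np.sin(t)          # unknown but bounded input, |w| <= 0.5 < l2
        de1, de2 = sta_error_dynamics(e1, e2, w)
        e1 += dt * de1               # explicit Euler step (crude for Filippov solutions)
        e2 += dt * de2
        t += dt
    return e1, e2

print(simulate())  # both errors should be numerically near zero at the end
```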

7. Figure: Linear Plant with an unknown input and a SOSM Observer.

8. Figure: Behavior of the plant and the nonlinear observer without unknown input (panels: states $x_1$ and $x_2$ versus time, and estimation errors $e_1$ and $e_2$ for the linear and nonlinear observers).

9. Figure: Behavior of the plant and the nonlinear observer with unknown input (states $x_1$, $x_2$ and estimation errors $e_1$, $e_2$ for the linear and nonlinear observers).

10. Figure: Behavior of the plant and the nonlinear observer without unknown input and with large initial conditions (states $x_1$, $x_2$ and estimation errors $e_1$, $e_2$ for the linear and nonlinear observers).

11. Recapitulation: Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time!

12. Recapitulation: Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty!

13. Recapitulation: Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty! However, the convergence time depends on the initial conditions of the observer. This objective is not achieved!

14. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

15. Generalized Super-Twisting Algorithm (GSTA). Plant: $\dot{x}_1 = x_2$, $\dot{x}_2 = w(t)$. Observer: $\dot{\hat{x}}_1 = -l_1 \phi_1(e_1) + \hat{x}_2$, $\dot{\hat{x}}_2 = -l_2 \phi_2(e_1)$. Estimation error: $e_1 = \hat{x}_1 - x_1$, $e_2 = \hat{x}_2 - x_2$, so that $\dot{e}_1 = -l_1 \phi_1(e_1) + e_2$, $\dot{e}_2 = -l_2 \phi_2(e_1) - w(t)$. Solutions are understood in the sense of Filippov. The injection nonlinearities are $\phi_1(e_1) = \mu_1 |e_1|^{1/2} \operatorname{sign}(e_1) + \mu_2 |e_1|^{3/2} \operatorname{sign}(e_1)$ and $\phi_2(e_1) = \frac{\mu_1^2}{2} \operatorname{sign}(e_1) + 2 \mu_1 \mu_2 e_1 + \frac{3}{2} \mu_2^2 |e_1|^2 \operatorname{sign}(e_1)$, with $\mu_1, \mu_2 \ge 0$.

16. Figure: Linear Plant with an unknown input and a Non Linear Observer.

17. Figure: Behavior of the plant and the nonlinear observer without unknown input and with large initial conditions (states $x_1$, $x_2$ and estimation errors $e_1$, $e_2$ for the linear and nonlinear observers).

18. Figure: Behavior of the plant and the nonlinear observer without unknown input and with very large initial conditions (states $x_1$, $x_2$ and estimation errors $e_1$, $e_2$ for the linear and nonlinear observers).

19. Figure: Behavior of the plant and the nonlinear observer with unknown input and with large initial conditions (states $x_1$, $x_2$ and estimation errors $e_1$, $e_2$ for the linear and nonlinear observers).

20. Effect: convergence time independent of the initial conditions. Figure: Convergence time $T$ as the norm of the initial condition $\|x(0)\|$ grows (logarithmic scale), for the NSOSMO, the STO, and the GSTA with linear term.

21. Recapitulation: Generalized Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time!

22. Recapitulation: Generalized Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty!

23. Recapitulation: Generalized Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty! The convergence time is independent of the initial conditions of the observer!

24. Recapitulation: Generalized Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty! The convergence time is independent of the initial conditions of the observer! All objectives were achieved!

25. Recapitulation: Generalized Super-Twisting Observer for a Linear Plant. If there are no unknown inputs/uncertainties: $e_1$ and $e_2$ converge in finite time! If there are unknown inputs/uncertainties: $e_1$ and $e_2$ still converge in finite time! The observer is insensitive to the perturbation/uncertainty! The convergence time is independent of the initial conditions of the observer! All objectives were achieved! How can these properties be proved?

26. What have we achieved? An algorithm that is: Robust: it converges despite unknown inputs/uncertainties. Exact: it converges in finite time. Moreover, the convergence time can be preassigned for any arbitrary initial condition. But there is no free lunch! It is useful for observation, estimation of perturbations/uncertainties, and control (a nonlinear PI control in practice?). Some generalizations are available, but a lot is still missing.

27. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

28. Lyapunov functions: 1. We propose a family of strong, quadratic-like Lyapunov functions. 2. This family allows the convergence time to be estimated. 3. It allows one to study the robustness of the algorithm against different kinds of perturbations. 4. All results are obtained in a linear-like framework, known from classical control. 5. The analysis can be carried out in the same manner for a linear algorithm, the classical ST algorithm, and a combination of both (the GSTA), which is non-homogeneous.

29. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

30. Generalized STA: $\dot{x}_1 = -k_1 \phi_1(x_1) + x_2$, $\dot{x}_2 = -k_2 \phi_2(x_1)$ (1), with solutions in the sense of Filippov, where $\phi_1(x_1) = \mu_1 |x_1|^{1/2} \operatorname{sign}(x_1) + \mu_2 |x_1|^{q} \operatorname{sign}(x_1)$, $\mu_1, \mu_2 \ge 0$, $q \ge 1$, and $\phi_2(x_1) = \frac{\mu_1^2}{2} \operatorname{sign}(x_1) + \left(q + \frac{1}{2}\right) \mu_1 \mu_2 |x_1|^{q - 1/2} \operatorname{sign}(x_1) + q \mu_2^2 |x_1|^{2q-1} \operatorname{sign}(x_1)$. Standard STA: $\mu_1 = 1$, $\mu_2 = 0$. Linear algorithm: $\mu_1 = 0$, $\mu_2 > 0$, $q = 1$. GSTA: $\mu_1 = 1$, $\mu_2 > 0$, $q > 1$.
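
A small helper may help fix the notation for this injection family; it evaluates $\phi_1$ and $\phi_2$ (with $\phi_2 = \phi_1' \phi_1$) for the three parameter settings listed on the slide. The sample point and the concrete value of $q$ for the GSTA case are illustrative; the $q = 3/2$ choice reproduces the observer nonlinearities of slide 15.

```python
import numpy as np

# Injection family of slide 30 (a sketch; parameter values below only reproduce
# the three special cases listed there):
#   phi1(x) = mu1*|x|^(1/2)*sign(x) + mu2*|x|^q*sign(x)
#   phi2(x) = phi1'(x)*phi1(x)
#           = (mu1^2/2)*sign(x) + (q + 1/2)*mu1*mu2*|x|^(q - 1/2)*sign(x)
#             + q*mu2^2*|x|^(2q - 1)*sign(x)

def phi1(x, mu1, mu2, q):
    s, a = np.sign(x), abs(x)
    return mu1 * a**0.5 * s + mu2 * a**q * s

def phi2(x, mu1, mu2, q):
    s, a = np.sign(x), abs(x)
    return (0.5 * mu1**2 * s
            + (q + 0.5) * mu1 * mu2 * a**(q - 0.5) * s
            + q * mu2**2 * a**(2*q - 1) * s)

cases = {
    "standard STA": dict(mu1=1.0, mu2=0.0, q=1.0),   # q irrelevant when mu2 = 0
    "linear":       dict(mu1=0.0, mu2=1.0, q=1.0),
    "GSTA":         dict(mu1=1.0, mu2=1.0, q=1.5),
}
for name, p in cases.items():
    print(name, phi1(0.5, **p), phi2(0.5, **p))
```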

31. Quadratic-like Lyapunov functions. The system can be written as $\dot{\zeta} = \phi_1'(x_1) A \zeta$, with $\zeta = \begin{bmatrix} \phi_1(x_1) \\ x_2 \end{bmatrix}$ and $A = \begin{bmatrix} -k_1 & 1 \\ -k_2 & 0 \end{bmatrix}$. Family of strong Lyapunov functions: $V(x) = \zeta^T P \zeta$, $P = P^T > 0$. Time derivative of the Lyapunov function: $\dot{V}(x) = \phi_1'(x_1)\, \zeta^T \left( A^T P + P A \right) \zeta = -\phi_1'(x_1)\, \zeta^T Q \zeta$. Algebraic Lyapunov Equation (ALE): $A^T P + P A = -Q$.
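
As a sketch of how this quadratic-like Lyapunov function can be computed numerically, the ALE can be solved with SciPy. The gains $k_1$, $k_2$ and the choice $Q = I$ are illustrative; the slide only requires $A$ Hurwitz and $Q = Q^T > 0$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Build P for V(x) = zeta^T P zeta by solving the ALE  A^T P + P A = -Q.
k1, k2 = 1.5, 1.1                       # illustrative gains
A = np.array([[-k1, 1.0],
              [-k2, 0.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so pass a = A^T and q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvals(A))             # both eigenvalues in the open left half-plane
print(np.linalg.eigvalsh(P))            # P is positive definite when A is Hurwitz
print(A.T @ P + P @ A + Q)              # residual ~ 0
```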

32. Figure: The Lyapunov function.

33. Lyapunov Function Proposition. If $A$ is Hurwitz, then $x = 0$ is finite-time stable (if $\mu_1 = 1$) and, for every $Q = Q^T > 0$, $V(x) = \zeta^T P \zeta$ is a global, strong Lyapunov function, with $P = P^T > 0$ the solution of the ALE, and $\dot{V} \le -\gamma_1(Q, \mu_1)\, V^{1/2}(x) - \gamma_2(Q, \mu_2)\, V(x)$, where $\gamma_1(Q, \mu_1) \triangleq \mu_1 \dfrac{\lambda_{\min}\{Q\}\, \lambda_{\min}^{1/2}\{P\}}{2\, \lambda_{\max}\{P\}}$ and $\gamma_2(Q, \mu_2) \triangleq \mu_2 \dfrac{\lambda_{\min}\{Q\}}{\lambda_{\max}\{P\}}$. If $A$ is not Hurwitz, then $x = 0$ is unstable.

34. Convergence Time Proposition. If $k_1 > 0$, $k_2 > 0$, and $\mu_2 \ge 0$, a trajectory of the GSTA starting at $x_0 \in \mathbb{R}^2$ converges to the origin in finite time if $\mu_1 = 1$, and it reaches that point at most after a time $T = \dfrac{2}{\gamma_1(Q,\mu_1)} V^{1/2}(x_0)$ if $\mu_2 = 0$, or $T = \dfrac{2}{\gamma_2(Q,\mu_2)} \ln\!\left( \dfrac{\gamma_2(Q,\mu_2)}{\gamma_1(Q,\mu_1)} V^{1/2}(x_0) + 1 \right)$ if $\mu_2 > 0$. When $\mu_1 = 0$ the convergence is exponential. For design: $T$ depends on the gains!
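
A hedged numerical illustration of this bound: the sketch below solves the ALE, evaluates $\gamma_1$ and $\gamma_2$ from the eigenvalues of $P$ and $Q$, and plugs $V(x_0)$ into the formula for $T$. The gains, $Q$, the $\mu$'s, the $q = 1$ member of the family, and the initial state are assumptions made only for this example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Evaluate the convergence-time bound of slide 34 using the gamma's of slide 33.
k1, k2, mu1, mu2 = 1.5, 1.1, 1.0, 1.0            # illustrative gains and weights
A = np.array([[-k1, 1.0], [-k2, 0.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)           # A^T P + P A = -Q

eP = np.linalg.eigvalsh(P)                       # ascending eigenvalues
eQ = np.linalg.eigvalsh(Q)
gamma1 = mu1 * eQ[0] * np.sqrt(eP[0]) / (2.0 * eP[-1])
gamma2 = mu2 * eQ[0] / eP[-1]

def phi1(x1):                                    # q = 1 member of the family
    return mu1 * np.sqrt(abs(x1)) * np.sign(x1) + mu2 * x1

x0 = np.array([2.0, -1.0])                       # illustrative initial state
zeta0 = np.array([phi1(x0[0]), x0[1]])
V0 = zeta0 @ P @ zeta0

T = (2.0 / gamma1) * np.sqrt(V0) if mu2 == 0 else \
    (2.0 / gamma2) * np.log((gamma2 / gamma1) * np.sqrt(V0) + 1.0)
print(gamma1, gamma2, T)
```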

35. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

36. GSTA with perturbations: ARI. GSTA with time-varying and/or nonlinear perturbations: $\dot{x}_1 = -k_1 \phi_1(x_1) + x_2$, $\dot{x}_2 = -k_2 \phi_2(x_1) + \rho(t, x)$. Assume $|\rho(t, x)| \le \delta$. Analysis: the construction of robust Lyapunov functions can be done with the classical method of solving an Algebraic Riccati Inequality (ARI) or, equivalently, the LMI $\begin{bmatrix} A^T P + P A + \epsilon P + \delta^2 C^T C & P B \\ B^T P & -1 \end{bmatrix} \le 0$, where $A = \begin{bmatrix} -k_1 & 1 \\ -k_2 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$.
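
One way to check this matrix inequality numerically, without committing to a particular SDP toolbox, is to assemble the block matrix for a candidate $P$ (here the ALE solution of the unperturbed case) and inspect its eigenvalues. The gains, $\epsilon$, $\delta$ and $Q$ below are illustrative assumptions, and a dedicated SDP solver could instead search for a feasible $P$ directly.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Numerical check of the LMI on slide 36 for a candidate P.
k1, k2, eps, delta = 6.0, 8.0, 0.1, 0.5          # illustrative gains and bounds
A = np.array([[-k1, 1.0], [-k2, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_lyapunov(A.T, -np.eye(2))   # candidate Lyapunov matrix (P > 0)

M = np.block([[A.T @ P + P @ A + eps * P + delta**2 * (C.T @ C), P @ B],
              [B.T @ P,                                          -np.eye(1)]])
eigs = np.linalg.eigvalsh(M)                     # M is symmetric by construction
print("LMI satisfied:", np.all(eigs <= 1e-9), eigs)
```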

37. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

38. Differentiation of a signal under noise. Signal model: with $x_1 := \sigma$ and $x_2 := \dot{\sigma}$, $\dot{x}_1(t) = x_2(t)$, $\dot{x}_2(t) = -\rho(t)$, $y(t) = x_1(t) + \eta(t)$, where $\rho(t) := -\ddot{\sigma}(t)$, $|\rho(t)| \le L$ and $|\eta(t)| \le \delta$ for all $t \ge 0$.

39. Differentiation of a signal under noise. Signal model: with $x_1 := \sigma$ and $x_2 := \dot{\sigma}$, $\dot{x}_1(t) = x_2(t)$, $\dot{x}_2(t) = -\rho(t)$, $y(t) = x_1(t) + \eta(t)$, where $\rho(t) := -\ddot{\sigma}(t)$, $|\rho(t)| \le L$ and $|\eta(t)| \le \delta$ for all $t \ge 0$. The proposed estimator (differentiator/observer) is $\dot{\hat{x}}_1 = -\frac{\alpha_1}{\varepsilon} \phi_1(\hat{x}_1 - y) + \hat{x}_2$, $\dot{\hat{x}}_2 = -\frac{\alpha_2}{\varepsilon^2} \phi_2(\hat{x}_1 - y)$, where $\phi_1(x) = \mu_1 |x|^{1/2} \operatorname{sign}(x) + \mu_2 x$ and $\phi_2(x) = \frac{\mu_1^2}{2} \operatorname{sign}(x) + \frac{3}{2} \mu_1 \mu_2 |x|^{1/2} \operatorname{sign}(x) + \mu_2^2 x$, with $\mu_1, \mu_2 \ge 0$.
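
A minimal Euler sketch of this differentiator, assuming $\sigma(t) = \sin t$ (so $|\ddot{\sigma}| \le L = 1$) and a bounded high-frequency noise of amplitude $\delta = 0.01$. The values $\alpha_1 = 1.5$ and $\alpha_2 = 1.1$ follow the remark on slide 44, while $\varepsilon$, the $\mu$'s and the noise waveform are illustrative choices.

```python
import numpy as np

# Euler simulation of the differentiator of slide 39 applied to sigma(t) = sin(t).
alpha1, alpha2, eps = 1.5, 1.1, 1.0
mu1, mu2 = 0.5, 0.5          # GST case; (1, 0) gives pure ST, (0, 1) the linear observer

def phi1(x):
    return mu1 * np.sqrt(abs(x)) * np.sign(x) + mu2 * x

def phi2(x):
    return 0.5*mu1**2*np.sign(x) + 1.5*mu1*mu2*np.sqrt(abs(x))*np.sign(x) + mu2**2*x

dt, t_end = 1e-4, 20.0
x1h, x2h, t = 0.0, 0.0, 0.0
while t < t_end:
    y = np.sin(t) + 0.01 * np.sin(1000.0 * t)    # measurement with bounded noise
    e = x1h - y
    x1h += dt * (-(alpha1 / eps) * phi1(e) + x2h)
    x2h += dt * (-(alpha2 / eps**2) * phi2(e))
    t += dt
print(x2h - np.cos(t))   # differentiation error x2_hat - sigma_dot at the final time
```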

40. $\mu_1 = 0$: High-Gain Observer.

41. $\mu_1 = 0$: High-Gain Observer. $\mu_2 = 0$: Super-Twisting differentiator.

42. $\mu_1 = 0$: High-Gain Observer. $\mu_2 = 0$: Super-Twisting differentiator. $\mu_1 > 0$ and $\mu_2 > 0$: combined actions.

43. $\mu_1 = 0$: High-Gain Observer. $\mu_2 = 0$: Super-Twisting differentiator. $\mu_1 > 0$ and $\mu_2 > 0$: combined actions. Performance of the differentiator: the "global uniform ultimate bound" of the differentiation error $\hat{x}_2 - x_2$.

44. $\mu_1 = 0$: High-Gain Observer. $\mu_2 = 0$: Super-Twisting differentiator. $\mu_1 > 0$ and $\mu_2 > 0$: combined actions. Performance of the differentiator: the "global uniform ultimate bound" of the differentiation error $\hat{x}_2 - x_2$. Remark about the figures: in all figures and examples that follow, the parameters are set to $\alpha_1 = 1.5$, $\alpha_2 = 1.1$, $L = 1$, $\delta = 0.01$, unless otherwise stated.

45. $\mu_1 = 0$: High-Gain Observer. $\mu_2 = 0$: Super-Twisting differentiator. $\mu_1 > 0$ and $\mu_2 > 0$: combined actions. Performance of the differentiator: the "global uniform ultimate bound" of the differentiation error $\hat{x}_2 - x_2$. Remark about the figures: in all figures and examples that follow, the parameters are set to $\alpha_1 = 1.5$, $\alpha_2 = 1.1$, $L = 1$, $\delta = 0.01$, unless otherwise stated. Differentiation error $\tilde{x} = \hat{x} - x$: $\dot{\tilde{x}}_1 = -\frac{\alpha_1}{\varepsilon} \phi_1(\tilde{x}_1 - \eta) + \tilde{x}_2$, $\dot{\tilde{x}}_2 = -\frac{\alpha_2}{\varepsilon^2} \phi_2(\tilde{x}_1 - \eta) + \rho$.

46. The performance of linear and ST differentiators. Figure: Ultimate bound of the differentiation error as a function of $\varepsilon$. Solid: linear case ($\mu_1 = 0$, $\mu_2 = 1$); dashed: pure ST case ($\mu_1 = 1$, $\mu_2 = 0$). Left: $L = 1$, $\delta = 0.01$; middle: $L = 1$, $\delta = 1$; right: $L = 0.1$, $\delta = 1$.

47. Figure: Steady-state differentiation error versus the gain $\varepsilon$, as the parameters range from $L = 1$, $\delta = 0.01$ to $L = 0.001$, $\delta = 0.01$. The right-hand plot is a zoom of the left-hand one. Solid: linear case ($\mu_1 = 0$, $\mu_2 = 1$); dashed: pure ST case ($\mu_1 = 1$, $\mu_2 = 0$).

48. Advantages of the GST. Figure: Ultimate bound of the differentiation error as a function of $\varepsilon$, for $L = 1$ and $\delta = 0.01$. Solid: linear case ($\mu_1 = 0$, $\mu_2 = 1$); dashed: pure ST case ($\mu_1 = 1$, $\mu_2 = 0$); dotted: two experiments with $\mu_1 = 0.8$, $\mu_2 = 0.2$ and with $\mu_1 = 0.5$, $\mu_2 = 0.5$ (marked with circles).

49. Sensitivity to variations in the noise amplitude. Figure: Differentiation error as a function of the noise amplitude $\delta$. Solid: linear case ($\mu_1 = 0$, $\mu_2 = 1$); dashed: pure ST case ($\mu_1 = 1$, $\mu_2 = 0$); dotted: GST case with $\mu_1 = \mu_2 = 0.5$; dash-dot: the nominal noise level $0.01$ for which the optimal gains are selected. Left: $L = 1$; right: $L = 0$.

50. Sensitivity to variations in the perturbation amplitude. Figure: Differentiation error as a function of the perturbation amplitude $L$, for $\delta = 0.01$. Solid: linear ($\mu_1 = 0$, $\mu_2 = 1$); dashed: ST ($\mu_1 = 1$, $\mu_2 = 0$); dotted: GST ($\mu_1 = \mu_2 = 0.5$); dash-dotted: the nominal perturbation $L = 1$ for which the optimal gains are selected.

51. Figure: Behavior of the linear and ST observers under noise and perturbation (state $x_2$ and estimation error $e_2$).

52. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

53. Background. Super-Twisting Algorithm applications: differentiator [Levant, 1998], [Levant, 2003]; controller [Levant, 2003]; observer [Davila et al., 2005]; observer and parameter estimator [Davila et al., 2006]; [Moreno, 2009] (Generalized Super-Twisting Algorithm). Adaptive systems: texts that summarize several techniques include [Narendra and Annaswamy, 1989], [Sastry and Bodson, 1989], [Ioannou and Sun, 1996] and [Ioannou and Fidan, 2006], among others. Finite-time parameter estimation: a finite-time parameter estimator is given in [Adetola and Guay, 2008].

54. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

55. [Morgan and Narendra, 1977]: $\dot{x}_1 = A(t) x_1 + B(t) x_2$, $\dot{x}_2 = -B^T(t) P(t) x_1$ (2), where $x_i \in \mathbb{R}^{n_i}$, $i = 1, 2$, and $A(t)$, $B(t)$ are matrices of bounded piecewise continuous functions. There exists a symmetric, positive definite matrix $P(t)$ which satisfies $\dot{P}(t) + A^T(t) P(t) + P(t) A(t) = -Q(t)$. Persistence of excitation condition: $B(t)$ is smooth, $\dot{B}(t)$ is uniformly bounded, and there exist $T > 0$, $\epsilon > 0$ such that for any unit vector $w \in \mathbb{R}^{n_2}$, $\int_t^{t+T} \| B(\tau) w \|\, d\tau \ge \epsilon$ (3). Then $x(t) \to 0$ exponentially.
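
Condition (3) can be spot-checked numerically for a given regressor by sampling unit vectors and approximating the integral. The regressor $B(t)$, the window length and the sampling grids below are illustrative, and a full verification would have to cover all starting times $t$, not just one window.

```python
import numpy as np

# Spot-check of the persistence-of-excitation condition (3):
#   int_t^{t+T} ||B(tau) w|| dtau >= eps  for every unit vector w.
# Unit vectors in R^2 are sampled on a half circle (the norm is even in w).

def B(t):                                        # example 1 x 2 regressor
    return np.array([[np.sin(t), np.cos(2.0 * t)]])

def pe_margin(t0=0.0, T=2.0 * np.pi, n_tau=2000, n_w=180):
    taus = np.linspace(t0, t0 + T, n_tau)
    dtau = taus[1] - taus[0]
    worst = np.inf
    for a in np.linspace(0.0, np.pi, n_w, endpoint=False):
        w = np.array([np.cos(a), np.sin(a)])
        integral = sum(np.linalg.norm(B(tau) @ w) for tau in taus) * dtau
        worst = min(worst, integral)
    return worst                                 # should stay bounded away from zero

print(pe_margin())
```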

56. Application: parameter estimation. Linearly parametrized system: $\dot{y} = \alpha(x, t) + \Gamma(t) \theta$. Parameter estimation algorithm: $\dot{\hat{y}} = \alpha(x, t) + \Gamma(t) \hat{\theta} - k_y e_y$, $\dot{\hat{\theta}} = -k_\theta \Gamma^T(t) e_y$. Defining the errors as $e_y = \hat{y} - y$ and $e_\theta = \hat{\theta} - \theta$, the error dynamics are $\dot{e}_y = -k_y e_y + \Gamma(t) e_\theta$, $\dot{e}_\theta = -k_\theta \Gamma^T(t) e_y$.
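
A minimal simulation sketch of this classical (exponentially convergent) estimator for a scalar output. The regressor, the known term $\alpha$, the true parameters and the gains are illustrative choices, not values from the slides.

```python
import numpy as np

# Classical gradient estimator of slide 56 for y_dot = alpha(x,t) + Gamma(t) theta.
theta_true = np.array([1.0, -2.0])
k_y, k_theta, dt, t_end = 2.0, 5.0, 1e-3, 50.0

def Gamma(t):                          # 1 x 2 regressor row, persistently exciting
    return np.array([np.sin(t), np.cos(0.5 * t)])

def alpha(t):
    return 0.1 * np.sin(3.0 * t)       # known part of the dynamics

y, y_hat, theta_hat, t = 0.0, 0.0, np.zeros(2), 0.0
while t < t_end:
    g = Gamma(t)
    y     += dt * (alpha(t) + g @ theta_true)          # simulated plant output
    e_y    = y_hat - y
    y_hat += dt * (alpha(t) + g @ theta_hat - k_y * e_y)
    theta_hat += dt * (-k_theta * g * e_y)             # gradient update, Gamma^T e_y
    t += dt
print(theta_hat)   # approaches theta_true (exponentially) under the PE condition
```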

57. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

58. The proposed algorithm: $\dot{x}_1 = A(t) \phi_1(x_1) + B(t) x_2$, $\dot{x}_2 = -B^T(t) P(t) \phi_2(x_1) + \delta(t)$ (4), where $\phi_1(x_1) = \mu_1 |x_1|^{1/2} \operatorname{sign}(x_1) + \mu_2 x_1$ and $\phi_2(x_1) = \frac{\mu_1^2}{2} \operatorname{sign}(x_1) + \frac{3}{2} \mu_1 \mu_2 |x_1|^{1/2} \operatorname{sign}(x_1) + \mu_2^2 x_1$, with $x_1 \in \mathbb{R}^{n_1}$, $x_2 \in \mathbb{R}^{n_2}$, $n_1 = 1$, $n_2 \ge 1$, $\mu_1 > 0$ and $\mu_2 \ge 0$. There exists a symmetric, positive definite matrix $P(t)$ which satisfies $\dot{P}(t) + A^T(t) P(t) + P(t) A(t) = -Q(t)$. Persistence of Excitation condition (PE): there exist $T_0$, $\epsilon_0$, $\delta_0$, with $t_2 \in [t, t + T]$, such that for any unit vector $w \in \mathbb{R}^{n_2}$, $\frac{1}{T_0} \left\| \int_{t_2}^{t_2 + \delta_0} B(\tau)\, w\, d\tau \right\| \ge \epsilon_0$ (5).

59. Finite-time convergence: under the PE conditions, if $\delta(t) = 0$ then $x(t) \to 0$ in finite time. Robustness: under the PE conditions, if $\delta(t)$ is bounded then $x(t)$ is bounded (ISS); under some extra conditions it converges in finite time.

60. Cases of the parameter tuple $(\mu_1, \mu_2, p, q)$: Linear: $(0, 1, -, 1)$; STA: $(1, 0, 1/2, -)$; GSTA: $(1, 1, 1/2, q)$.

61. Idea of the proof. Under the PE conditions it is possible to construct a strict Lyapunov function [Moreno, 2009]: $V(t, x) = \zeta^T \Pi(t) \zeta$, with $\zeta^T = \begin{bmatrix} \phi_1^T(x_1) & x_2^T \end{bmatrix}$, such that $\dot{V} \le -\gamma_1 V^{1/2} - \gamma_2 V$.

62. Application: parameter estimation. System representation: $\dot{y} = \Gamma(t) \theta$ (6). Finite-time parameter estimator: $\dot{\hat{y}} = -k_1 \phi_1(e_y) + \Gamma(t) \hat{\theta}$, $\dot{\hat{\theta}} = -k_2 \phi_2(e_y) \Gamma^T(t)$ (7), with $\phi_1(e_y) = \mu_1 |e_y|^{1/2} \operatorname{sign}(e_y) + \mu_2 e_y$, $\mu_1, \mu_2 > 0$ (8), and $\phi_2(e_y) = \frac{\mu_1^2}{2} \operatorname{sign}(e_y) + \frac{3}{2} \mu_1 \mu_2 |e_y|^{1/2} \operatorname{sign}(e_y) + \mu_2^2 e_y$ (9). With $e_y = \hat{y} - y$ and $e_\theta = \hat{\theta} - \theta$, the estimation error dynamics are $\dot{e}_y = -k_1 \phi_1(e_y) + \Gamma(t) e_\theta$ (10a), $\dot{e}_\theta = -k_2 \phi_2(e_y) \Gamma^T(t)$ (10b).
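
The same simulation loop as before, with the output-error injection replaced by the GSTA nonlinearities, gives a sketch of the finite-time estimator (7)-(9). The gains, the $\mu$'s, the regressor and the true parameters are again illustrative assumptions.

```python
import numpy as np

# Sketch of the finite-time estimator (7)-(9) for y_dot = Gamma(t) theta.
mu1, mu2, k1, k2 = 1.0, 1.0, 2.0, 5.0
theta_true = np.array([1.0, -2.0])

def phi1(e):
    return mu1*np.sqrt(abs(e))*np.sign(e) + mu2*e

def phi2(e):
    return 0.5*mu1**2*np.sign(e) + 1.5*mu1*mu2*np.sqrt(abs(e))*np.sign(e) + mu2**2*e

def Gamma(t):
    return np.array([np.sin(t), np.cos(0.5 * t)])

dt, t_end = 1e-3, 30.0
y, y_hat, theta_hat, t = 0.0, 0.0, np.zeros(2), 0.0
while t < t_end:
    g = Gamma(t)
    y     += dt * (g @ theta_true)                # plant (6)
    e_y    = y_hat - y
    y_hat += dt * (-k1 * phi1(e_y) + g @ theta_hat)
    theta_hat += dt * (-k2 * phi2(e_y) * g)       # (7): theta_hat_dot = -k2 phi2(e_y) Gamma^T
    t += dt
print(theta_hat)   # approaches theta_true in finite time under the PE condition (5)
```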

63. Example: a simple pendulum. $\dot{x}_1 = x_2$ (11a), $\dot{x}_2 = \frac{1}{J} u - \frac{M g L}{2 J} \sin x_1 - \frac{V_s}{J} x_2$ (11b), where $x_1$ is the angular position, $x_2$ the angular velocity, $M$ the mass, $g$ the gravity constant, $L$ the rope length, and $J = M L^2$ the moment of inertia. Parameter estimator: $\dot{\hat{x}}_2 = -k_1 \phi_1(e_{x_2}) + \Gamma \hat{\theta}$ (12a), $\dot{\hat{\theta}} = -k_2 \phi_2(e_{x_2}) \Gamma^T$ (12b), where $e_{x_2} = \hat{x}_2 - x_2$ is the velocity error, $\Gamma = \begin{bmatrix} x_2 & \sin x_1 & u \end{bmatrix}$ is the regressor, $\hat{\theta}$ is the vector of unknown parameter estimates built from the ratios $\hat{V}_s/\hat{J}$, $\hat{M}\hat{g}\hat{L}/(2\hat{J})$ and $1/\hat{J}$ (with signs and ordering matching the regressor), and $\phi_1(\cdot)$, $\phi_2(\cdot)$ are given as in the GSTA.
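
For the pendulum, the regressor and the corresponding parameter vector can be assembled as below and plugged into the estimator sketch above. The ordering of the parameter vector and the numerical values of the physical constants are assumptions made only for illustration.

```python
import numpy as np

# Pendulum regressor for the finite-time estimator (12). The parameter ordering is
# an assumption chosen so that x2_dot = Gamma(x, u) @ theta with
# theta = [-Vs/J, -M*g*L/(2*J), 1/J].

def pendulum_regressor(x1, x2, u):
    return np.array([x2, np.sin(x1), u])

# Illustrative physical constants (M = 1 kg, L = 1 m, J = M*L**2, Vs = 0.2, g = 9.81):
M, L, g, Vs = 1.0, 1.0, 9.81, 0.2
J = M * L**2
theta_true = np.array([-Vs / J, -M * g * L / (2 * J), 1.0 / J])
print(theta_true)
```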

64. Parameter estimation. Figure: Estimates of $\theta_1$, $\theta_2$ and $\theta_3$ with the GSTA and with the linear algorithm.

65. Parameter estimation with a perturbation in $\theta_1 = -0.2 + 0.2 \sin(t)$. Figure: Estimates of $\theta_1$, $\theta_2$ and $\theta_3$ with the GSTA and with the linear algorithm.

66. Overview 1 Introduction 2 Observers a la Second Order Sliding Modes (SOSM) Super-Twisting Observer Generalized Super-Twisting Observers 3 Lyapunov Approach for Second-Order Sliding Modes GSTA without perturbations: ALE GSTA with perturbations: ARI 4 Optimality of the ST with noise 5 A Recursive Finite-Time Convergent Parameter Estimation Algorithm The Classical Algorithm The Proposed Algorithm 6 Conclusions

67. Conclusions. 1. The GSTO is an observer that is robust (it converges despite unknown inputs/uncertainties), exact (it converges in finite time), and has a convergence time that can be preassigned for any initial condition. But there is no free lunch! 2. It can be extended to estimate parameters in finite time, with applications in adaptive control. 3. Applications to bioreactors seem attractive because of the robustness against uncertainties and the possibility of reconstructing uncertainties, e.g. reaction rates. 4. Lyapunov functions for higher-order algorithms are ongoing work. 5. Useful for control, for estimating reaction-rate parameters (functional form), ...

68. Thank you!

69. Overview 7 Some Applications Finite-Time, robust MRAC

70. Overview 7 Some Applications Finite-Time, robust MRAC

71. Introduction. Direct Model Reference Adaptive Control (MRAC) is a well-known approach for adaptive control of linear and some nonlinear systems.

72. Introduction. Direct Model Reference Adaptive Control (MRAC) is a well-known approach for adaptive control of linear and some nonlinear systems. If the plant has relative degree $n^* = 1$ and the reference model is Strictly Positive Real (SPR), the controller is particularly simple to implement and to design.

73. Introduction. Direct Model Reference Adaptive Control (MRAC) is a well-known approach for adaptive control of linear and some nonlinear systems. If the plant has relative degree $n^* = 1$ and the reference model is Strictly Positive Real (SPR), the controller is particularly simple to implement and to design. The adjustment mechanism of the MRAC is basically a parameter estimation algorithm.

74. Introduction. Direct Model Reference Adaptive Control (MRAC) is a well-known approach for adaptive control of linear and some nonlinear systems. If the plant has relative degree $n^* = 1$ and the reference model is Strictly Positive Real (SPR), the controller is particularly simple to implement and to design. The adjustment mechanism of the MRAC is basically a parameter estimation algorithm. Our objective: modify the adjustment mechanism of the classical direct MRAC by adding the super-twisting-like nonlinearities, so as to achieve finite-time convergence and robustness.

75. MRAC with relative degree $n^* = 1$. Figure: General structure of the MRAC scheme.

76. SISO plant: $y_p = G_p(s) u_p = k_p \frac{Z_p(s)}{R_p(s)} u_p$, relative degree $n^* = 1$. Reference model: $y_m = W_m(s) r = k_m \frac{Z_m(s)}{R_m(s)} r$. Hypotheses: A1: an upper bound $n$ on the degree $n_p$ of $R_p(s)$ is known. A2: the relative degree $n^* = n_p - m_p$ of $G_p(s)$ is one, i.e. $n^* = 1$. A3: $Z_p(s)$ is a monic Hurwitz polynomial of degree $m_p = n_p - 1$. A4: the sign of the high-frequency gain $k_p$ is known. B1: $Z_m(s)$, $R_m(s)$ are monic Hurwitz polynomials of degree $q_m$, $p_m$, respectively, with $p_m \le n$. B2: the relative degree $n_m^* = p_m - q_m$ of $W_m(s)$ is the same as that of $G_p(s)$, i.e., $n_m^* = n^* = 1$. B3: $W_m(s)$ is designed to be Strictly Positive Real (SPR).

77. Solution of the MRC problem for known parameters: $\dot{w}_1 = F w_1 + g u_p$, $w_1(0) = 0$ (13a); $\dot{w}_2 = F w_2 + g y_p$, $w_2(0) = 0$ (13b); $u_p = \theta^{*T} w$ (13c), with $w = \begin{bmatrix} w_1^T & w_2^T & y_p & r \end{bmatrix}^T$ and $\theta^* = \begin{bmatrix} \theta_1^{*T} & \theta_2^{*T} & \theta_3^* & c_0^* \end{bmatrix}^T$ (14). For unknown parameters the control law is $u_p = \theta^T(t) w$ (15), $\dot{\theta}(t) = -\Gamma e_1 w \operatorname{sign}(\rho^*)$ (16), with $e_1 = y_p - y_m$, $\operatorname{sign}(\rho^*) = \operatorname{sign}\!\left( \frac{k_p}{k_m} \right)$, $\Gamma = \Gamma^T > 0$ (17).
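
A toy closed-loop sketch of the adaptive law (15)-(17) for the simplest relative-degree-one case, a first-order plant, where the filters $w_1$, $w_2$ in (13) are empty and $w = [y_p \;\; r]^T$. The plant, the reference model and the adaptation gain are illustrative values, not taken from the slides.

```python
import numpy as np

# First-order MRAC loop with the adaptive law (15)-(17).
a_p, k_p = 1.0, 2.0            # unstable plant:  y_p_dot = a_p*y_p + k_p*u_p
a_m, k_m = 3.0, 3.0            # SPR reference model:  y_m_dot = -a_m*y_m + k_m*r
Gam = np.diag([2.0, 2.0])      # adaptation gain, Gamma = Gamma^T > 0
sgn_rho = np.sign(k_p / k_m)

dt, t_end = 1e-3, 50.0
y_p, y_m, theta, t = 0.0, 0.0, np.zeros(2), 0.0
while t < t_end:
    r = np.sign(np.sin(0.5 * t))              # reference input (square wave)
    w = np.array([y_p, r])
    u_p = theta @ w                           # (15)
    e1 = y_p - y_m                            # (17)
    y_p   += dt * (a_p * y_p + k_p * u_p)
    y_m   += dt * (-a_m * y_m + k_m * r)
    theta += dt * (-Gam @ w * e1 * sgn_rho)   # (16)
    t += dt
print(e1, theta)   # tracking error goes to zero; parameter convergence also needs a
                   # sufficiently rich reference
```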
