  1. Nonlinear Control Lecture # 25: State Feedback Stabilization

  2. Backstepping

     η̇ = f_a(η) + g_a(η)ξ,   η ∈ Rⁿ
     ξ̇ = f_b(η, ξ) + g_b(η, ξ)u,   ξ, u ∈ R,   g_b ≠ 0

     Goal: stabilize the origin using state feedback.

     View ξ as a "virtual" control input to the system η̇ = f_a(η) + g_a(η)ξ.
     Suppose there is ξ = φ(η) that stabilizes the origin of
     η̇ = f_a(η) + g_a(η)φ(η), with

     (∂V_a/∂η)[f_a(η) + g_a(η)φ(η)] ≤ −W(η)

  3. Change of variables: z = ξ − φ(η)

     η̇ = [f_a(η) + g_a(η)φ(η)] + g_a(η)z
     ż = F(η, ξ) + g_b(η, ξ)u,   where F(η, ξ) = f_b(η, ξ) − (∂φ/∂η)[f_a(η) + g_a(η)ξ]

     V(η, ξ) = V_a(η) + ½z² = V_a(η) + ½[ξ − φ(η)]²

     V̇ = (∂V_a/∂η)[f_a(η) + g_a(η)φ(η)] + (∂V_a/∂η)g_a(η)z + zF(η, ξ) + zg_b(η, ξ)u
        ≤ −W(η) + z[(∂V_a/∂η)g_a(η) + F(η, ξ) + g_b(η, ξ)u]

  4. V̇ ≤ −W(η) + z[(∂V_a/∂η)g_a(η) + F(η, ξ) + g_b(η, ξ)u]

     u = −(1/g_b(η, ξ))[(∂V_a/∂η)g_a(η) + F(η, ξ) + kz],   k > 0

     ⇒ V̇ ≤ −W(η) − kz²
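As a sketch, the control law above can be coded as a generic routine that takes the problem data as callables. The function name, the restriction to the scalar case η ∈ R, and the calling convention are all illustrative assumptions, not part of the lecture:

```python
def backstepping_u(eta, xi, fa, ga, fb, gb, phi, dphi, dVa, k=1.0):
    """Scalar backstepping law u = -(1/g_b) [ (dV_a/deta) g_a + F + k z ],
    with z = xi - phi(eta) and F = f_b - phi'(eta) (f_a + g_a xi)."""
    z = xi - phi(eta)
    F = fb(eta, xi) - dphi(eta) * (fa(eta) + ga(eta) * xi)
    return -(dVa(eta) * ga(eta) + F + k * z) / gb(eta, xi)
```

For Example 9.9 below, one would pass fa(x1) = x1² − x1³, ga ≡ 1, fb ≡ 0, gb ≡ 1, phi(x1) = −x1 − x1², and dVa(x1) = x1.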

  5. Example 9.9

     ẋ₁ = x₁² − x₁³ + x₂,   ẋ₂ = u

     Treat x₂ as the virtual control for ẋ₁ = x₁² − x₁³ + x₂:

     x₂ = φ(x₁) = −x₁² − x₁  ⇒  ẋ₁ = −x₁ − x₁³

     V_a(x₁) = ½x₁²  ⇒  V̇_a = −x₁² − x₁⁴,   ∀ x₁ ∈ R

     z₂ = x₂ − φ(x₁) = x₂ + x₁ + x₁²

     ẋ₁ = −x₁ − x₁³ + z₂
     ż₂ = u + (1 + 2x₁)(−x₁ − x₁³ + z₂)

  6. V(x) = ½x₁² + ½z₂²

     V̇ = x₁(−x₁ − x₁³ + z₂) + z₂[u + (1 + 2x₁)(−x₁ − x₁³ + z₂)]
        = −x₁² − x₁⁴ + z₂[x₁ + (1 + 2x₁)(−x₁ − x₁³ + z₂) + u]

     u = −x₁ − (1 + 2x₁)(−x₁ − x₁³ + z₂) − z₂

     ⇒ V̇ = −x₁² − x₁⁴ − z₂²

     The origin is globally asymptotically stable.
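A quick numerical check of this design (my addition, not from the lecture): a forward-Euler simulation of the closed loop of Example 9.9. The step size and initial condition are arbitrary choices.

```python
def simulate_ex99(x1, x2, dt=1e-3, steps=20000):
    # Closed-loop Euler integration of Example 9.9 under
    # u = -x1 - (1 + 2 x1)(-x1 - x1^3 + z2) - z2,
    # for which Vdot = -x1^2 - x1^4 - z2^2.
    for _ in range(steps):
        z2 = x2 + x1 + x1**2
        u = -x1 - (1 + 2*x1) * (-x1 - x1**3 + z2) - z2
        x1, x2 = x1 + dt * (x1**2 - x1**3 + x2), x2 + dt * u
    return x1, x2
```

Since V̇ ≤ −2V here, the state decays exponentially, and after 20 simulated seconds the state is essentially at the origin.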

  7. Example 9.10

     ẋ₁ = x₁² − x₁³ + x₂,   ẋ₂ = x₃,   ẋ₃ = u

     Treat x₃ as the virtual control for the (x₁, x₂)-subsystem and reuse the
     control law from Example 9.9:

     x₃ = −x₁ − (1 + 2x₁)(−x₁ − x₁³ + z₂) − z₂ ≝ φ(x₁, x₂)

     V_a(x) = ½x₁² + ½z₂²,   V̇_a = −x₁² − x₁⁴ − z₂²

     z₃ = x₃ − φ(x₁, x₂)

     ẋ₁ = x₁² − x₁³ + x₂,   ẋ₂ = φ(x₁, x₂) + z₃
     ż₃ = u − (∂φ/∂x₁)(x₁² − x₁³ + x₂) − (∂φ/∂x₂)(φ + z₃)

  8. V = V_a + ½z₃²

     V̇ = (∂V_a/∂x₁)(x₁² − x₁³ + x₂) + (∂V_a/∂x₂)(z₃ + φ)
          + z₃[u − (∂φ/∂x₁)(x₁² − x₁³ + x₂) − (∂φ/∂x₂)(z₃ + φ)]

        = −x₁² − x₁⁴ − (x₂ + x₁ + x₁²)²
          + z₃[∂V_a/∂x₂ − (∂φ/∂x₁)(x₁² − x₁³ + x₂) − (∂φ/∂x₂)(z₃ + φ) + u]

     u = −∂V_a/∂x₂ + (∂φ/∂x₁)(x₁² − x₁³ + x₂) + (∂φ/∂x₂)(z₃ + φ) − z₃

     ⇒ V̇ = −x₁² − x₁⁴ − (x₂ + x₁ + x₁²)² − z₃²

     The origin is globally asymptotically stable.
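The algebra in this design is easy to get wrong, so here is a numerical spot check (my addition, not from the lecture): evaluate V̇ along the closed loop of Example 9.10 at arbitrary points and compare it with −x₁² − x₁⁴ − z₂² − z₃², which is what the design should deliver. The partial derivatives of φ are coded by hand.

```python
def vdot_check(x1, x2, x3):
    """Evaluate Vdot along the closed loop of Example 9.10 and the
    expected value -x1^2 - x1^4 - z2^2 - z3^2 (a spot check of the algebra)."""
    z2 = x2 + x1 + x1**2
    f1 = x1**2 - x1**3 + x2                              # x1-dot
    phi = -x1 - (1 + 2*x1) * f1 - z2                     # virtual control for x3
    dphi_dx1 = -1 - 2*f1 - (1 + 2*x1)*(2*x1 - 3*x1**2) - (1 + 2*x1)
    dphi_dx2 = -(1 + 2*x1) - 1
    z3 = x3 - phi
    # u = -dVa/dx2 + (dphi/dx1) x1dot + (dphi/dx2) x3 - z3, with dVa/dx2 = z2
    u = -z2 + dphi_dx1 * f1 + dphi_dx2 * x3 - z3
    z2dot = x3 + (1 + 2*x1) * f1                         # z2dot = x2dot + (1+2x1) x1dot
    z3dot = u - dphi_dx1 * f1 - dphi_dx2 * x3
    vdot = x1 * f1 + z2 * z2dot + z3 * z3dot
    expected = -x1**2 - x1**4 - z2**2 - z3**2
    return vdot, expected
```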

  9. Strict-Feedback Form

     ẋ = f₀(x) + g₀(x)z₁
     ż₁ = f₁(x, z₁) + g₁(x, z₁)z₂
     ż₂ = f₂(x, z₁, z₂) + g₂(x, z₁, z₂)z₃
       ⋮
     ż_{k−1} = f_{k−1}(x, z₁, …, z_{k−1}) + g_{k−1}(x, z₁, …, z_{k−1})z_k
     ż_k = f_k(x, z₁, …, z_k) + g_k(x, z₁, …, z_k)u

     g_i(x, z₁, …, z_i) ≠ 0 for 1 ≤ i ≤ k

  10. Example 9.12

      ẋ = −x + x²z,   ż = u

      With z = 0: ẋ = −x;   V_a = ½x² ⇒ V̇_a = −x²

      V = ½(x² + z²)
      V̇ = x(−x + x²z) + zu = −x² + z(x³ + u)

      u = −x³ − kz,   k > 0  ⇒  V̇ = −x² − kz²

      Global stabilization. Compare with semiglobal stabilization in Example 9.7.
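A minimal simulation sketch of Example 9.12 (step size, gain, and initial state are arbitrary choices of mine):

```python
def simulate_ex912(x, z, k=1.0, dt=1e-3, steps=20000):
    # Euler integration of xdot = -x + x^2 z, zdot = u with u = -x^3 - k z;
    # the Lyapunov analysis gives Vdot = -x^2 - k z^2, so the origin is GAS.
    for _ in range(steps):
        u = -x**3 - k * z
        x, z = x + dt * (-x + x**2 * z), z + dt * u
    return x, z
```

With k = 1, V̇ = −2V, so 20 simulated seconds more than suffice for convergence.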

  11. Example 9.13

      ẋ = x² − xz,   ż = u

      With z = x + x²: ẋ = −x³;   V₀(x) = ½x² ⇒ V̇₀ = −x⁴

      V = V₀ + ½(z − x − x²)²
      V̇ = −x⁴ + (z − x − x²)[−x² + u − (1 + 2x)(x² − xz)]

      u = (1 + 2x)(x² − xz) + x² − k(z − x − x²),   k > 0

      ⇒ V̇ = −x⁴ − k(z − x − x²)²

      Global stabilization.
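The same kind of sanity check for Example 9.13 (my simulation, arbitrary constants). Note that V̇ = −x⁴ − k(z − x − x²)² gives only polynomial, not exponential, decay in x near the origin, so the simulation needs a long horizon:

```python
def simulate_ex913(x, z, k=1.0, dt=1e-3, steps=300000):
    # Euler integration of xdot = x^2 - x z, zdot = u under
    # u = (1 + 2x)(x^2 - x z) + x^2 - k (z - x - x^2);
    # near the origin x decays roughly like 1/sqrt(t).
    for _ in range(steps):
        u = (1 + 2*x) * (x**2 - x*z) + x**2 - k * (z - x - x**2)
        x, z = x + dt * (x**2 - x*z), z + dt * u
    return x, z
```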

  12. Passivity-Based Control

      ẋ = f(x, u),   y = h(x),   f(0, 0) = 0

      Passivity: V̇ = (∂V/∂x)f(x, u) ≤ uᵀy

      Theorem 9.1: If the system is (1) passive with a radially unbounded
      positive definite storage function and (2) zero-state observable, then
      the origin can be globally stabilized by

      u = −φ(y),   φ(0) = 0,   yᵀφ(y) > 0 ∀ y ≠ 0

  13. Proof

      V̇ = (∂V/∂x)f(x, −φ(y)) ≤ −yᵀφ(y) ≤ 0

      V̇(x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ u(t) ≡ 0 ⇒ x(t) ≡ 0

      Apply the invariance principle.

      A given system may be made passive by (1) choice of output, (2) feedback,
      or both.

  14. Choice of Output

      ẋ = f(x) + G(x)u,   (∂V/∂x)f(x) ≤ 0,   ∀ x

      No output is defined. Choose the output as

      y = h(x) ≝ [(∂V/∂x)G(x)]ᵀ

      V̇ = (∂V/∂x)f(x) + (∂V/∂x)G(x)u ≤ yᵀu

      Check zero-state observability.

  15. Example 9.14

      ẋ₁ = x₂,   ẋ₂ = −x₁³ + u

      V(x) = ¼x₁⁴ + ½x₂²

      With u = 0: V̇ = x₁³x₂ − x₂x₁³ = 0

      Take y = (∂V/∂x)G = ∂V/∂x₂ = x₂

      Is it zero-state observable? With u = 0, y(t) ≡ 0 ⇒ x(t) ≡ 0

      u = −kx₂  or  u = −(2k/π) tan⁻¹(x₂)   (k > 0)
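A numerical illustration of this passivity-based design (my addition; gain and horizon are arbitrary). With the cubic spring, convergence near the origin is only polynomial, hence the long horizon:

```python
def simulate_ex914(x1, x2, k=1.0, dt=1e-3, steps=300000):
    # Euler integration of x1dot = x2, x2dot = -x1^3 + u with the
    # damping-injection control u = -k x2 (feedback through y = x2).
    for _ in range(steps):
        u = -k * x2
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1**3 + u)
    return x1, x2
```

The bounded alternative u = −(2k/π) tan⁻¹(x₂) would drop into the same loop with one changed line.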

  16. Feedback Passivation

      Definition: The system

      ẋ = f(x) + G(x)u,   y = h(x)   (∗)

      is equivalent to a passive system if there exists u = α(x) + β(x)v such
      that

      ẋ = f(x) + G(x)α(x) + G(x)β(x)v,   y = h(x)

      is passive.

      Theorem [20]: The system (∗) is locally equivalent to a passive system
      (with a positive definite storage function) if it has relative degree one
      at x = 0 and the zero dynamics have a stable equilibrium point at the
      origin with a positive definite Lyapunov function.

  17. Example 9.15 (m-link Robot Manipulator)

      M(q)q̈ + C(q, q̇)q̇ + Dq̇ + g(q) = u

      M = Mᵀ > 0,   (Ṁ − 2C)ᵀ = −(Ṁ − 2C),   D = Dᵀ ≥ 0

      Stabilize the system at q = q_r:   e = q − q_r,   ė = q̇

      M(q)ë + C(q, q̇)ė + Dė + g(q) = u

      (e = 0, ė = 0) is not an open-loop equilibrium point.

      u = g(q) − K_p e + v   (K_p = K_pᵀ > 0)

      ⇒ M(q)ë + C(q, q̇)ė + Dė + K_p e = v

  18. M(q)ë + C(q, q̇)ė + Dė + K_p e = v

      V = ½ėᵀM(q)ė + ½eᵀK_p e

      V̇ = ½ėᵀ(Ṁ − 2C)ė − ėᵀDė − ėᵀK_p e + ėᵀv + eᵀK_p ė ≤ ėᵀv

      Take y = ė. Is it zero-state observable? Set v = 0:

      ė(t) ≡ 0 ⇒ ë(t) ≡ 0 ⇒ K_p e(t) ≡ 0 ⇒ e(t) ≡ 0

      v = −φ(ė)   [φ(0) = 0,   ėᵀφ(ė) > 0 ∀ ė ≠ 0]

      u = g(q) − K_p e − φ(ė)

      Special case: u = g(q) − K_p e − K_d ė,   K_d = K_dᵀ > 0
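To make the special case concrete, here is a single-link (pendulum) instance of u = g(q) − K_p e − K_d ė, with M = ml², C = 0, D = d, and g(q) = mgl sin q. All numerical values are illustrative choices, not from the lecture:

```python
import math

def simulate_arm(qr, q=0.0, qd=0.0, kp=25.0, kd=10.0,
                 m=1.0, l=1.0, d=0.1, grav=9.81, dt=1e-3, steps=20000):
    # Single-link arm: m l^2 qddot + d qdot + m g l sin(q) = u, driven by
    # gravity compensation plus PD feedback (special case of Example 9.15).
    M = m * l * l
    for _ in range(steps):
        e = q - qr
        u = m * grav * l * math.sin(q) - kp * e - kd * qd
        qdd = (u - d * qd - m * grav * l * math.sin(q)) / M
        q, qd = q + dt * qd, qd + dt * qdd
    return q, qd
```

Because the gravity term is cancelled exactly, the closed loop is the linear system M ë + (K_d + D) ė + K_p e = 0, and q converges to q_r.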
