Discontinuous Feedback in Nonlinear Control: Stabilization Under Disturbances and Optimization
Yuri S. Ledyaev, Western Michigan University, ledyaev@wmich.edu, +1-(269)-387-4557
DIMACS Workshop on Perspectives and Future Directions in Systems and Control Theory, Rutgers University, May 23-25, 2011


  1. Nonlinear Control Systems under Persistent Disturbances
  "Output Regulation" program: definition of asymptotic controllability.
  Control system: ẋ = f(x, u, d), u ∈ U, d ∈ D, where u(t) is the control and d(t) is the disturbance; d_t denotes the restriction of the function d(·) to the interval [0, t].
  Non-anticipating strategy: an operator F defining the control u(t) = F(t, d_t).
  Asymptotic Controllability (AC): for every initial point x_0 there exists a strategy F(t, d_t) such that x(t; x_0, u(·), d(·)) → 0 as t → +∞ in some uniform manner (with respect to d(·) and x_0).

  2. Nonlinear Control Systems under Persistent Disturbances
  Feedback stabilizing controller k : R^n → U:
  ẋ = f(x, k(x), d(t)), x(0) = x_0,
  and for any d(·): x(t; x_0, d(·)) → 0 as t → +∞, uniformly with respect to d(·) (and x_0 in some sense).

  3. Nonlinear Control Systems under Persistent Disturbances
  ẋ = f(x, k(x), d(t)), x(0) = x_0, and for any d(·): x(t; x_0, d(·)) → 0 as t → +∞, uniformly with respect to d(·) (and x_0 in some sense).
  Why do we need feedback?

  4. Nonlinear Control Systems under Persistent Disturbances
  ẋ = f(x, k(x), d(t)), x(0) = x_0, and for any d(·): x(t; x_0, d(·)) → 0 as t → +∞, uniformly with respect to d(·) (and x_0 in some sense).
  Robustness with respect to errors and perturbations!

  5. Nonlinear Control Systems under Persistent Disturbances
  Original system: ẋ = f(x, k(x), d(t)).
  Perturbed system: ẋ(t) = f(x(t), k(x(t) + e(t)) + a(t), d(t)) + w(t),
  where e(t) is the measurement error, a(t) the actuator error, and w(t) an external disturbance.
  If k(x) is CONTINUOUS, then robustness follows from classical results on structural robustness of the AS property (Krasovskii, mid-1950s): ẋ = f(x, k(x)) + w(t), ‖w(t)‖ ≤ Δ(x(t)).
  What happens when k(x) is DISCONTINUOUS?

  6. Main Results
  Control system under persistent disturbances: ẋ = f(x, u(t), d(t)).
  Closed-loop system for feedback k(x): ẋ = f(x, k(x), d(t)).
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Asymptotic Controllability IFF ∃ Feedback Stabilizer.
  THEOREM: The Discontinuous Feedback Stabilizer is Robust w.r.t. Small Errors:
  ẋ = f(x, k(x + e(t)) + a(t), d(t)) + w(t).

  7. Main Results
  Meaning of these results.

  8. Main Results
  Meaning of these results.
  THEOREM: Asymptotic Controllability IFF ∃ Feedback Stabilizer.
  Asymptotic Controllability: for any x_0 there exists F such that, using the complete, perfect, INFINITE-MEMORY information d_t at each moment t, u(t) = F(t, d(·)_t), we can drive the state to the origin as t → +∞.
  The theorem claims: there is NO NEED to use infinite-memory information (NO infinite-dimensional information states) to drive the state to the origin. Only updated values of the FINITE-DIMENSIONAL state vector x(t) are used.

  9. Precise Definitions and Statements
  Main Assumptions:
  A1. The sets U, D are compact, and the function f : R^n × U × D → R^n is continuous and locally Lipschitz in x on compact subsets of R^n × U × D.
  A2. (Isaacs 1965 condition) For any (x, p) ∈ R^n × R^n:
  max_{d ∈ D} min_{u ∈ U} ⟨p, f(x, u, d)⟩ = min_{u ∈ U} max_{d ∈ D} ⟨p, f(x, u, d)⟩.
  REMARK: NO growth condition on f.

  10. Precise Definitions and Statements
  Main Assumptions:
  A1. The sets U, D are compact, and the function f : R^n × U × D → R^n is continuous and locally Lipschitz in x on compact subsets of R^n × U × D.
  A2. For any (x, p) ∈ R^n × R^n: max_{d ∈ D} min_{u ∈ U} ⟨p, f(x, u, d)⟩ = min_{u ∈ U} max_{d ∈ D} ⟨p, f(x, u, d)⟩.
  𝒟 is the set of all measurable functions d : R_+ → D (called disturbances).
  M_U is the set of all relaxed controls (weakly measurable functions) μ : R_+ → prm(U), where prm(U) is the set of all probability Radon measures on U.
  N : 𝒟 → M_U is a non-anticipating strategy if for all d^1, d^2 ∈ 𝒟 such that d^1_t = d^2_t for some t ∈ R_+ we have N(d^1)_t = N(d^2)_t.

  11. Precise Definitions and Statements
  Main Assumptions:
  A1. The sets U, D are compact, and the function f : R^n × U × D → R^n is continuous and locally Lipschitz in x on compact subsets of R^n × U × D.
  A2. For any (x, p) ∈ R^n × R^n: max_{d ∈ D} min_{u ∈ U} ⟨p, f(x, u, d)⟩ = min_{u ∈ U} max_{d ∈ D} ⟨p, f(x, u, d)⟩.
  𝒟 is the set of all measurable functions d : R_+ → D (called disturbances).
  M_U is the set of all relaxed controls (weakly measurable functions) μ : R_+ → prm(U), where prm(U) is the set of all probability Radon measures on U.
  N : 𝒟 → M_U is a non-anticipating strategy if for all d^1, d^2 ∈ 𝒟 such that d^1_t = d^2_t for some t ∈ R_+ we have N(d^1)_t = N(d^2)_t.
  Varaiya-Lin, Kalton-Elliott 1970s, Chentsov 1980s, Gusyatnikov ...

  12. Precise Definitions and Statements
  For any d(·) ∈ 𝒟 and a strategy N consider the relaxed control ν := N(d(·)).
  x(t; x_0, N, d) is the solution (which exists locally) of ẋ(t) = f̂(x(t), ν(t), d(t)), x(t_0) = x_0,
  where f̂(x, ν, d) := ∫_U f(x, u, d) ν(du).

  13. Precise Definitions and Statements
  x(t; x_0, N, d) is the solution (which exists locally) of ẋ(t) = f̂(x(t), ν(t), d(t)), x(t_0) = x_0,
  where f̂(x, ν, d) := ∫_U f(x, u, d) ν(du).
  REMEMBER the notation x(t; x_0, N, d).

  14. Precise Definitions and Statements
  DISCONTINUOUS feedback k : R^n → U and the differential equation with discontinuous right-hand side: ẋ = f(x, k(x), d(t)), x(0) = x_0.
  Concept of solution: π-trajectory (from positional differential games theory, Krasovskii & Subbotin 1970s).
  Partition π = {t_i}_{i ≥ 0} of [0, +∞), lim_{i → ∞} t_i = +∞. Diameter of the partition: d(π) := sup_i (t_{i+1} − t_i).
  π-trajectory x_π(t) := x(t), where ẋ(t) = f(x(t), k(x(t_i)), d(t)) for t ∈ [t_i, t_{i+1}].
  A natural model of digital computer control ("sampling").
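
A minimal numerical sketch (mine, not from the slides) of how a π-trajectory is computed: the feedback is evaluated only at the sampling instants t_i and held constant on [t_i, t_{i+1}], while the state is integrated against the disturbance (forward Euler is used here purely for integration). The dynamics f, feedback k and disturbance d below are illustrative placeholders.

    import numpy as np

    def pi_trajectory(f, k, d, x0, t_grid, n_sub=20):
        """Sampled-data ("pi-trajectory") simulation: the control u = k(x(t_i)) is
        frozen on each sampling interval [t_i, t_{i+1}] while the state follows
        dx/dt = f(x, u, d(t)); forward-Euler sub-steps approximate the integration."""
        x = np.asarray(x0, dtype=float)
        xs = [x.copy()]
        for ti, tnext in zip(t_grid[:-1], t_grid[1:]):
            u = k(x)                       # feedback evaluated only at the sampling instant
            h = (tnext - ti) / n_sub
            t = ti
            for _ in range(n_sub):
                x = x + h * np.asarray(f(x, u, d(t)))
                t += h
            xs.append(x.copy())
        return np.array(xs)

    # Illustrative scalar example: dx/dt = u + d with a bang-bang (discontinuous) feedback.
    if __name__ == "__main__":
        f = lambda x, u, d: u + d
        k = lambda x: -np.sign(x)               # discontinuous feedback
        d = lambda t: 0.4 * np.sin(3.0 * t)     # persistent disturbance, |d(t)| <= 0.4
        traj = pi_trajectory(f, k, d, x0=[2.0], t_grid=np.linspace(0.0, 6.0, 61))
        print(traj[-1])                         # the state is driven into a small band around 0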

  15. Precise Definitions and Statements
  DEFINITION: ASYMPTOTIC CONTROLLABILITY of ẋ = f(x, u, d).
  For every x_0 ∈ R^n there exists a non-anticipating strategy N such that:
  (ATTRACTIVENESS) for any disturbance d ∈ 𝒟 the trajectory x(t; x_0, N, d) is defined on the entire interval R_+ and x(t; x_0, N, d) → 0 as t → +∞, uniformly with respect to disturbances d ∈ 𝒟;
  (UNIFORM BOUNDEDNESS) sup_{t ≥ 0} sup_{d ∈ 𝒟} ‖x(t; x_0, N, d)‖ < +∞;
  (LYAPUNOV STABILITY) ∀ ε > 0 ∃ δ > 0 such that for every x_0 with ‖x_0‖ < δ there exists a non-anticipating strategy N such that ∀ d ∈ 𝒟: ‖x(t; x_0, N, d)‖ < ε for all t ≥ 0.

  16. Precise Definitions and Statements
  DEFINITION: STABILIZING FEEDBACK for ẋ = f(x, k(x), d).
  For any 0 < r < R there exist M = M(R) > 0, δ = δ(r, R) > 0 and T = T(r, R) > 0 such that for every partition π with d(π) < δ, every x_0 with ‖x_0‖ ≤ R and every disturbance d ∈ 𝒟, the π-trajectory x(·) with x(0) = x_0 is defined on [0, +∞) and:
  (UNIFORM ATTRACTIVENESS) ‖x(t)‖ ≤ r for all t ≥ T;
  (OVERSHOOT BOUNDEDNESS) ‖x(t)‖ ≤ M(R) for all t ≥ 0;
  (LYAPUNOV STABILITY) lim_{R ↓ 0} M(R) = 0.

  17. Precise Definitions and Statements
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Under Assumptions A1 and A2: Asymptotic Controllability IFF ∃ Feedback Stabilizer.

  18. Precise Definitions and Statements
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Under Assumptions A1 and A2: Asymptotic Controllability IFF ∃ Feedback Stabilizer.
  Even more, we can prove the existence of continuous functions δ : R^n \ {0} → (0, +∞) and β : [0, +∞) × [0, +∞) → (0, +∞) of class KL (β(t, r) monotonically decreasing in t and increasing in r, lim_{t → +∞} β(t, r) = 0, lim_{r → 0} β(t, r) = 0) such that for the discontinuous stabilizing feedback k(x) and any partition π = {t_i}_{i ≥ 0} with 0 < t_{i+1} − t_i ≤ δ(x(t_i)) we have the decay estimate
  ‖x(t)‖ ≤ β(t, ‖x(0)‖) for all t ≥ 0.

  19. Proof: Control Lyapunov Functions
  Control Lyapunov function (CLF) pair (V(x), W(x)):
  (POSITIVENESS) V(x) ≥ 0, V(x) = 0 ⇔ x = 0, W(x) > 0 for all x ≠ 0;
  (PROPERNESS) V(x) → +∞ as ‖x‖ → +∞;
  (INFINITESIMAL DECREASE) ∀ x ∈ R^n \ {0}: min_{u ∈ U} max_{d ∈ D} ⟨∇V(x), f(x, u, d)⟩ ≤ −W(x).
  Kokotovic & Freeman 1990s: robust control Lyapunov function.

  20. Proof: Control Lyapunov Functions
  Control Lyapunov function (CLF) pair (V(x), W(x)):
  (POSITIVENESS) V(x) ≥ 0, V(x) = 0 ⇔ x = 0, W(x) > 0 for all x ≠ 0;
  (PROPERNESS) V(x) → +∞ as ‖x‖ → +∞;
  (INFINITESIMAL DECREASE) ∀ x ∈ R^n \ {0}: min_{u ∈ U} max_{d ∈ D} ⟨∇V(x), f(x, u, d)⟩ ≤ −W(x).
  We assumed that V is C^1 and that there exists a continuous k : R^n → U such that for all x ∈ R^n \ {0}: max_{d ∈ D} ⟨∇V(x), f(x, k(x), d)⟩ ≤ −W(x).

  21. Proof: Control Lyapunov Functions
  We assumed that V is C^1 and that there exists a continuous k : R^n → U such that for all x ∈ R^n \ {0}:
  (✱) max_{d ∈ D} ⟨∇V(x), f(x, k(x), d)⟩ ≤ −W(x).
  Then solutions x(t) of the closed-loop system ẋ = f(x, k(x), d(t)), x(0) = x_0, are well-defined and we have the decay estimate ‖x(t)‖ ≤ β(t, ‖x(0)‖) for all t ≥ 0.
  Thus, the existence of a C^1 CLF V and a continuous (or DISCONTINUOUS) k(x) satisfying (✱) implies AC (asymptotic controllability).
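
A small worked example (mine, not from the slides) of a CLF pair and a selection k satisfying (✱): for the scalar system ẋ = u + d with U = [−2, 2], D = [−1, 1], take V(x) = x^2/2 and W(x) = |x|. Then

    \min_{u \in U} \max_{d \in D} \langle V'(x),\, u + d \rangle
        = \min_{|u| \le 2} \bigl( x u + |x| \bigr) = -2|x| + |x| = -|x| = -W(x),

and the minimizing selection k(x) = −2 sign(x), which is discontinuous at 0 (exactly the situation considered in this talk), satisfies max_{d ∈ D} ⟨V'(x), k(x) + d⟩ = −|x| ≤ −W(x) for all x ≠ 0.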

  22. Proof: Control Lyapunov Functions
  We assumed that V is C^1 and that there exists a continuous k : R^n → U such that for all x ∈ R^n \ {0}:
  (✱) max_{d ∈ D} ⟨∇V(x), f(x, k(x), d)⟩ ≤ −W(x).
  Then solutions x(t) of the closed-loop system ẋ = f(x, k(x), d(t)), x(0) = x_0, are well-defined and we have the decay estimate ‖x(t)‖ ≤ β(t, ‖x(0)‖) for all t ≥ 0.
  Thus, the existence of a C^1 CLF V and a continuous (or DISCONTINUOUS) k(x) satisfying (✱) implies AC (asymptotic controllability).
  Is the converse valid: does AC (asymptotic controllability) imply the existence of a C^1 CLF V?

  23. Proof: Control Lyapunov Functions
  In general, NO C^1 control Lyapunov function V exists, but ...

  24. Proof: Control Lyapunov Functions
  In general, NO C^1 control Lyapunov function V exists, but:
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Under Assumptions A1 and A2: Asymptotic Controllability IFF ∃ lower semicontinuous CLF V.
  CLF pair (V, W): V is lower semicontinuous (lim inf_{x → x_0} V(x) ≥ V(x_0)), W is continuous;
  (POSITIVENESS) V(x) ≥ 0, V(x) = 0 ⇔ x = 0, W(x) > 0 for all x ≠ 0;
  (PROPERNESS) V(x) → +∞ as ‖x‖ → +∞;
  (INFINITESIMAL DECREASE) ∀ ζ ∈ ∂_P V(x), x ∈ R^n \ {0}: min_{u ∈ U} max_{d ∈ D} ⟨ζ, f(x, u, d)⟩ ≤ −W(x).

  25. Proof: Control Lyapunov Functions
  NONSMOOTH ANALYSIS: proximal subgradients. ζ ∈ ∂_P f(x) if there exists σ > 0 such that ⟨ζ, z − x⟩ − σ‖z − x‖^2 ≤ f(z) − f(x) for all z near x.
  [Figure: the epigraph of f with normal vectors, e.g. (f′(x), −1) at a point of differentiability.]

  26. Proof: Control Lyapunov Functions
  Reference on Nonsmooth Analysis (proximal calculus) and its applications.

  27. Proof: Control Lyapunov Functions
  Proof of the existence of an l.s.c. CLF V for an AC system:
  V(x) := inf_N sup_{d ∈ 𝒟} ∫_0^{+∞} W(x(t; x, N, d)) dt.
  It is analogous to the proofs of inverse Lyapunov function theorems for differential equations: asymptotic stability implies the existence of a Lyapunov function.

  28. Proof: Control Lyapunov Functions
  Proof of the existence of an l.s.c. CLF V for an AC system:
  V(x) := inf_N sup_{d ∈ 𝒟} ∫_0^{+∞} W(x(t; x, N, d)) dt.
  It is analogous to the proofs of inverse Lyapunov function theorems for differential equations: asymptotic stability implies the existence of smooth Lyapunov functions (Massera 1949, Krasovskii 1950s, Kurzweil 1955, ...).
  For control systems (AC implies a continuous CLF): Sontag 1983.
  OPEN QUESTION: Does a CONTINUOUS CLF exist for an AC control system under persistent disturbances?

  29. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Let (V, W) be a Control Lyapunov Function (CLF) pair: V(x) is lower semicontinuous, V(x) > 0 iff x ≠ 0, V(x) → +∞ as ‖x‖ → +∞, and the infinitesimal decrease condition holds:
  H(x, ζ) := min_{u ∈ U} max_{d ∈ D} ⟨ζ, f(x, u, d)⟩ ≤ −W(x) for all ζ ∈ ∂_P V(x), x ∈ R^n \ {0}.
  Note: if V ∈ C^1 then ∂_P V(x) ⊂ {∇V(x)}.
  In the case of continuous V, the stabilizing feedback construction is contained in Clarke, Ledyaev, Sontag & Subbotin 1996, "Asymptotic Controllability Implies Feedback Stabilization".
  How to handle a lower semicontinuous CLF V?

  30. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Let (V, W) be a Control Lyapunov Function (CLF) pair: V(x) is lower semicontinuous, V(x) > 0 iff x ≠ 0, V(x) → +∞ as ‖x‖ → +∞, and the infinitesimal decrease condition holds:
  H(x, ζ) := min_{u ∈ U} max_{d ∈ D} ⟨ζ, f(x, u, d)⟩ ≤ −W(x) for all ζ ∈ ∂_P V(x), x ∈ R^n \ {0}.
  Note: if V ∈ C^1 then ∂_P V(x) ⊂ {∇V(x)}.
  In the case of continuous V, the stabilizing feedback construction is contained in Clarke, Ledyaev, Sontag & Subbotin 1996, "Asymptotic Controllability Implies Feedback Stabilization".
  How to handle a lower semicontinuous CLF V?
  Use the method of Clarke, Ledyaev and Subbotin 1997, "The synthesis of universal feedback pursuit strategies in differential games", SIAM J. Control and Optimization.

  31. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Method.
  Kruzhkov transform (κ is some constant): v(x) := 1 − exp(−κV(x)), v(x) > 0 for x ≠ 0, v(x) = 0 ⇔ x = 0.
  For any x ∈ R^n and ζ ∈ ∂_P v(x): H(x, ζ) ≤ κW(x)(v(x) − 1), since ζ ∈ ∂_P v(x) ⇔ ζ ∈ κ exp(−κV(x)) ∂_P V(x).

  32. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Method.
  Kruzhkov transform (κ is some constant): v(x) := 1 − exp(−κV(x)), v(x) > 0 for x ≠ 0, v(x) = 0 ⇔ x = 0.
  For any x ∈ R^n and ζ ∈ ∂_P v(x): H(x, ζ) ≤ κW(x)(v(x) − 1), since ζ ∈ ∂_P v(x) ⇔ ζ ∈ κ exp(−κV(x)) ∂_P V(x).
  Iosida-Moreau regularization (from monotone operator theory); v_α is locally Lipschitz:
  v_α(x) := min_y [ v(y) + ‖y − x‖^2 / (2α^2) ].

  33. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  For any x ∈ R^n: H(x, ζ) ≤ κW(x)(v(x) − 1) for all ζ ∈ ∂_P v(x).
  Iosida-Moreau regularization (from monotone operator theory); v_α is locally Lipschitz:
  v_α(x) := min_y [ v(y) + ‖y − x‖^2 / (2α^2) ].
  "Taylor expansion" formula: for all f ∈ R^n,
  v_α(x + τf) ≤ v_α(x) + τ⟨ζ_α(x), f⟩ + τ^2 ‖f‖^2 / (2α^2),
  where ζ_α(x) := (x − y_α(x)) / α^2 ∈ ∂_P v(y_α(x)) and y_α(x) is an arbitrary minimizer of y ↦ v(y) + ‖y − x‖^2 / (2α^2).

  34. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  "Taylor expansion" formula (✱): for all f ∈ R^n,
  v_α(x + τf) ≤ v_α(x) + τ⟨ζ_α(x), f⟩ + τ^2 ‖f‖^2 / (2α^2),
  where ζ_α(x) := (x − y_α(x)) / α^2 ∈ ∂_P v(y_α(x)) and y_α(x) is an arbitrary minimizer of y ↦ v(y) + ‖y − x‖^2 / (2α^2).
  Compare with the traditional one-sided Taylor expansion formula for φ ∈ C^2: φ(x + τf) ≤ φ(x) + τ⟨φ′(x), f⟩ + Cτ^2 ‖f‖^2.
  We have an analogue of it for v_α (with v only l.s.c.): (✱) is the magic of proximal calculus!
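
The inequality (✱) can be checked numerically. The sketch below (my own illustration, not from the slides) builds the Iosida-Moreau regularization v_α of a merely lower semicontinuous scalar function v by brute-force minimization on a grid, extracts a minimizer y_α(x) and ζ_α(x) = (x − y_α(x))/α^2, and verifies the one-sided "Taylor expansion" bound at random points.

    import numpy as np

    alpha = 0.3
    ys = np.linspace(-5.0, 5.0, 20001)          # grid of candidate minimizers y

    def v(y):
        # l.s.c. test function: a unit step that jumps *up* at 0
        return np.where(y > 0.0, 1.0, 0.0)

    def moreau(x):
        """Return v_alpha(x), a minimizer y_alpha(x) and zeta_alpha(x) = (x - y_alpha)/alpha^2."""
        vals = v(ys) + (ys - x) ** 2 / (2.0 * alpha ** 2)
        i = int(np.argmin(vals))
        return vals[i], ys[i], (x - ys[i]) / alpha ** 2

    rng = np.random.default_rng(0)
    for _ in range(1000):
        x, f, tau = rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(0.0, 0.5)
        va_x, y_a, zeta = moreau(x)
        va_shift, _, _ = moreau(x + tau * f)
        rhs = va_x + tau * zeta * f + tau ** 2 * f ** 2 / (2.0 * alpha ** 2)
        assert va_shift <= rhs + 1e-12            # the proximal "Taylor expansion" inequality
    print("inequality verified on 1000 random samples")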

  35. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Definition of the stabilizing feedback k(x):
  max_{d ∈ D} ⟨ζ_α(x), f(x, k(x), d)⟩ = min_{u ∈ U} max_{d ∈ D} ⟨ζ_α(x), f(x, u, d)⟩ = H(x, ζ_α(x)).
  Then max_{d ∈ D} ⟨ζ_α(x), f(x, k(x), d)⟩ ≤ H(x, ζ_α(x)) ≤ −κ W(y_α(x)) (1 − v(y_α(x))).

  36. Design of (Dis-)Continuous Feedback Stabilizer via CLF
  Definition of the stabilizing feedback k(x):
  max_{d ∈ D} ⟨ζ_α(x), f(x, k(x), d)⟩ = min_{u ∈ U} max_{d ∈ D} ⟨ζ_α(x), f(x, u, d)⟩ = H(x, ζ_α(x)).
  Then max_{d ∈ D} ⟨ζ_α(x), f(x, k(x), d)⟩ ≤ H(x, ζ_α(x)) ≤ −κ W(y_α(x)) (1 − v(y_α(x))).
  Consequently v_α(x(t)) ≤ v_α(x(t_i)) (invariance of level sets), and moreover v_α(x(t)) is monotonically decreasing.
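
In discrete form, the feedback definition above is just a min-max selection. The sketch below (an illustration under my own assumptions, with the compact sets U and D replaced by finite samplings) picks k(x) ∈ argmin_{u ∈ U} max_{d ∈ D} ⟨ζ_α(x), f(x, u, d)⟩ and returns the corresponding approximation of H(x, ζ_α(x)).

    import numpy as np

    def minmax_feedback(x, zeta, f, U_grid, D_grid):
        """Choose u minimizing the worst-case (over d) inner product <zeta, f(x, u, d)>."""
        best_u, best_val = None, np.inf
        for u in U_grid:
            worst = max(np.dot(zeta, f(x, u, d)) for d in D_grid)   # adversarial disturbance
            if worst < best_val:
                best_u, best_val = u, worst
        return best_u, best_val        # best_val approximates H(x, zeta)

    # Toy usage: dx/dt = u + d in R^2, |u_i| <= 1, |d_i| <= 0.3, zeta = grad V for V = |x|^2 / 2.
    if __name__ == "__main__":
        f = lambda x, u, d: np.asarray(u) + np.asarray(d)
        U_grid = [np.array([a, b]) for a in (-1, 0, 1) for b in (-1, 0, 1)]
        D_grid = [np.array([a, b]) for a in (-0.3, 0.3) for b in (-0.3, 0.3)]
        x = np.array([1.0, -2.0])
        u, H = minmax_feedback(x, zeta=x, f=f, U_grid=U_grid, D_grid=D_grid)
        print(u, H)                    # u points "against" zeta and H < 0 here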

  37. Robustness of Discontinuous Feedback I
  Original closed-loop system: ẋ = f(x, k(x), d(t)).
  Perturbed system: ẋ = f(x, k(x + e(t)) + a(t), d(t)) + w(t),
  where e(t) is the measurement error, a(t) the actuator error, and w(t) an external disturbance.

  38. Robustness of Discontinuous Feedback I
  Perturbed system: ẋ = f(x, k(x + e(t)) + a(t), d(t)) + w(t), where e(t) is the measurement error, a(t) the actuator error, and w(t) an external disturbance.
  Structural assumption: a(t) = a_1(t) + a_2(t), w(t) = w_1(t) + w_2(t).
  "Small errors" means: small magnitude but unbounded impulse, ‖e(·)‖_∞ < ε, ‖a_1(·)‖_∞ < ε, ‖w_1(·)‖_∞ < ε; and small impulse but unbounded magnitude, ‖a_2(·)‖_1 < ε, ‖w_2(·)‖_1 < ε.

  39. Robustness of Discontinuous Feedback I
  It follows from the design of the discontinuous feedback k(x) that it is robust with respect to small actuator errors and external disturbances...
  What about measurement errors? Instead of x(t_i) we use the corrupted data x′(t_i) := x(t_i) + e(t_i), i.e., the control k(x′(t_i)).

  40. Robustness of Discontinuous Feedback I
  What about measurement errors? Instead of x(t_i) we use the corrupted data x′(t_i) := x(t_i) + e(t_i), i.e., the control k(x′(t_i)).
  Control Problem: drive x(t) to S := (−∞, −1] ∪ [1, +∞), where ẋ = u, x ∈ R, u ∈ U := {−1, 1}.
  Feedback: k(x) = +1 for x ≥ 0, k(x) = −1 for x < 0.
  [Figure: the real line with the target set S on both sides, k(x) = −1 to the left of 0 and k(x) = +1 to the right.]
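
A quick simulation (mine, not from the slides) of why measurement errors are dangerous for this discontinuous feedback, and of the sampling-rate remedy discussed on the following slides: with a sampling step much smaller than the error bound, an adversarial error can keep flipping the measured sign and trap the state near 0; once the sampling step satisfies t_{i+1} − t_i ≥ 2‖e‖_∞, the corrupted sign agrees with the true sign after the first step and the state marches into S.

    import numpy as np

    def run(dt, eps, T=10.0, x0=0.0):
        """Sampled bang-bang feedback k(x') = +1 if x' >= 0 else -1 for dx/dt = u,
        with an adversarial measurement error |e| <= eps; returns the final state."""
        x, t = x0, 0.0
        while t < T and abs(x) < 1.0:                        # stop once x reaches S
            e = -np.sign(x) * min(eps, abs(x) + 0.5 * eps)   # adversarial error, |e| <= eps
            u = 1.0 if x + e >= 0.0 else -1.0                # feedback sees corrupted data x' = x + e
            x += u * dt
            t += dt
        return x

    eps = 0.05
    print(run(dt=0.01, eps=eps))   # dt << eps: the state chatters near 0 and never reaches S
    print(run(dt=0.20, eps=eps))   # dt >= 2*eps: after one step |x| > eps, so the error can no
                                   # longer flip the measured sign and the state reaches S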

  41. Robustness of Discontinuous Feedback I
  What about measurement errors? Instead of x(t_i) we use the corrupted data x′(t_i) := x(t_i) + e(t_i), i.e., the control k(x′(t_i)).
  Control Problem: drive x(t) to S := (−∞, −1] ∪ [1, +∞), where ẋ = u, x ∈ R, u ∈ U := {−1, 1}.
  Feedback: k(x) = +1 for x ≥ 0, k(x) = −1 for x < 0.
  [Figure (slides 41-44): the corrupted measurement x′ = x + e lies on the opposite side of 0 from the true state x, so the sampled control keeps reversing and the trajectory chatters near the origin instead of reaching S.]

  45. Robustness of Discontinuous Feedback II
  FIRST REMEDY: the Control with Guide procedure (Krasovskii & Subbotin, beginning of the 1970s): use a computational model of the closed-loop system. In the context of stabilization problems: Ledyaev & Sontag 1997.
  SECOND REMEDY: restrict the sampling rate ν := sup_i 1/(t_{i+1} − t_i) from above, so that t_{i+1} − t_i ≥ 1/ν, and assume a small measurement error: ‖e(t)‖ < 1/(2ν) ≤ (t_{i+1} − t_i)/2.
  [Figure: the same bang-bang example with the corrupted measurement x′ = x + e.]

  47. Robustness of Discontinuous Feedback II
  PRESCRIPTION IN THE GENERAL CASE: keep the sampling intervals t_{i+1} − t_i bounded from below; then k is also robust with respect to small measurement errors.
  In the case of stabilization of a control system: Clarke, Ledyaev, Rifford and Stern 2000, "Feedback stabilization and Lyapunov functions", SIAM J. Control Optim.
  In the case of stabilization of a control system under persistent disturbances: Ledyaev and Vinter 2005, 2010.

  48. Robustness of Discontinuous Feedback II
  DEFINITION: A feedback k : R^n → U is robustly stabilizing if for all 0 < r < R there exist M = M(R) > 0, δ = δ(r, R) > 0, T = T(r, R) > 0 and b_j = b_j(r, R), j = 1, 2, 3, such that for every partition π with δ/2 < t_{i+1} − t_i < δ, every initial state x_0 with ‖x_0‖ ≤ R, any disturbance d ∈ 𝒟, and any external disturbance w(t), actuator errors a(t) and measurement errors e(t) satisfying ‖w(t)‖ < b_1, ‖a(t)‖ < b_2, ‖e(t)‖ < b_3 for all t ≥ 0, the π-trajectory x(·) starting from x_0 is well-defined and:
  (UNIFORM ATTRACTIVENESS) ‖x(t)‖ ≤ r for all t ≥ T;
  (OVERSHOOT BOUNDEDNESS) ‖x(t)‖ ≤ M(R) for all t ≥ 0;
  (LYAPUNOV STABILITY) lim_{R → 0} M(R) = 0.

  49. Robustness of Discontinuous Feedback II
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Under Assumptions A1 and A2, the stabilizing feedback k(x) is robustly stabilizing.

  50. Robustness of Discontinuous Feedback II
  Ledyaev and Vinter 2005, 2010.
  THEOREM: Under Assumptions A1 and A2, the stabilizing feedback k(x) is robustly stabilizing.
  APPLICATION: quantization of the values of x: find a net {y_j} with ‖y_i − y_j‖ < sup ‖e(t)‖ / 2 < b_3(r, R) / 2; then we can use only the control values k(y_j), applied whenever ‖x′ − y_j‖ < b_3 / 2.
  ANOTHER APPLICATION: existence of a piecewise constant robust stabilizing feedback.

  51. Robustness of Discontinuous Feedback II
  General Principle for Robust Feedback (Ledyaev 1999, in Ledyaev & Rifford 1999).
  THEOREM (Integral Decrease Principle): Let V(x) be continuous or locally Lipschitz and suppose there exist k : R^n → U and δ(x) > 0 such that
  V(x + τf) − V(x) ≤ −τW(x) for all f ∈ co f(x, k(x), D), 0 ≤ τ ≤ δ(x).
  Then k(x) is robustly stabilizing.
  In our case v_α(x) can be chosen as V(x). An analogous principle for differential games: Ledyaev 2002.

  52. Robustness of Stabilizing Feedback for Any Sampling Rate
  A (dis)continuous k(x) is robustly sampling-stabilizing (permitting an arbitrarily large sampling rate) if for all 0 < r < R there exist T = T(r, R), δ = δ(r, R), η = η(r, R) and M(R) such that for any disturbance d ∈ 𝒟, measurement errors e(t) and external disturbances w(t) with ‖e(t)‖ ≤ η for all t ≥ 0 and ‖w(·)‖_∞ ≤ η, and any partition π with d(π) ≤ δ (0 < t_{i+1} − t_i < δ), every π-trajectory with ‖x(0)‖ ≤ R does not blow up and satisfies:
  (UNIFORM ATTRACTIVITY) ‖x(t)‖ ≤ r for all t ≥ T;
  (BOUNDED OVERSHOOT) ‖x(t)‖ ≤ M(R) for all t ≥ 0;
  (LYAPUNOV STABILITY) lim_{R ↓ 0} M(R) = 0.

  53. Robustness of Stabilizing Feedback for Any Sampling Rate
  Ledyaev & Sontag 1998.
  THEOREM: ∃ C^∞ CLF V(x) IFF ∃ robustly sampling-stabilizing feedback k(x).

  54. Robustness of Stabilizing Feedback for Any Sampling Rate
  Ledyaev & Sontag 1998.
  THEOREM: ∃ C^∞ CLF V(x) IFF ∃ robustly sampling-stabilizing feedback k(x).
  Let V(x) be a C^∞ control Lyapunov function. Then ANY k(x) such that max_{d ∈ D} ⟨∇V(x), f(x, k(x), d)⟩ ≤ −W(x) is ROBUSTLY STABILIZING for any high enough sampling rate.

  55. Robustness of Stabilizing Feedback for Any Sampling Rate
  Ledyaev & Sontag 1998.
  THEOREM: ∃ C^∞ CLF V(x) IFF ∃ robustly sampling-stabilizing feedback k(x).
  Let V(x) be a C^∞ control Lyapunov function. Then ANY k(x) such that max_{d ∈ D} ⟨∇V(x), f(x, k(x), d)⟩ ≤ −W(x) is ROBUSTLY STABILIZING for any high enough sampling rate.
  Artstein 1983, for control-affine systems: ∃ SMOOTH control Lyapunov function IFF ∃ continuous stabilizing feedback.

  56. Robustness of Stabilizing Feedback for Any Sampling Rate
  The PROOF is based on the inverse Lyapunov function theorem for the differential inclusion ẋ ∈ F(x), with F(x) an upper semicontinuous multifunction.
  Clarke, Ledyaev & Stern 1999.
  THEOREM: The differential inclusion ẋ ∈ F(x) is strongly AS IFF ∃ a C^∞ Lyapunov function V(x).
  The proof is based on the structural robustness of AS for differential inclusions: ẋ ∈ co F(x + Δ(x)B) + Δ(x)B.

  57. Robustness of Stabilizing Feedback for Any Sampling Rate
  The PROOF is based on the inverse Lyapunov function theorem for the differential inclusion ẋ ∈ F(x), with F(x) an upper semicontinuous multifunction.
  Clarke, Ledyaev & Stern 1999.
  THEOREM: The differential inclusion ẋ ∈ F(x) is strongly AS IFF ∃ a C^∞ Lyapunov function V(x).
  APPLICATION: criteria for AS of Filippov or Krasovskii solutions in terms of a C^∞ Lyapunov function V: ẋ ∈ ∩_{ε > 0} co f(x, k(x + εB), D).
  Limits of trajectories of the perturbed system are solutions of this differential inclusion.

  58. Underwater Vehicle Example
  Lyapunov function V(x) = x_1^2 + x_2^2 + x_3^2.
  Dynamics: ẋ_1 = u_2 u_3, ẋ_2 = u_1 u_3, ẋ_3 = u_1 u_2, with U := {(u_1, u_2, u_3) : |u_i| ≤ 1, i = 1, 2, 3}.

  59. Underwater Vehicle Example
  Lyapunov function V(x) = x_1^2 + x_2^2 + x_3^2.
  Dynamics: ẋ_1 = u_2 u_3, ẋ_2 = u_1 u_3, ẋ_3 = u_1 u_2, with U := {(u_1, u_2, u_3) : |u_i| ≤ 1, i = 1, 2, 3}.
  Discontinuous ROBUST stabilizer: with i(x) := max{i : |x_i| = max_l |x_l|}, j(x) := i(x) + 1, l(x) := i(x) + 2 (indices taken modulo 3), set
  u_{j(x)} := −sign(x_{i(x)}), u_{l(x)} := 1, u_{i(x)} := −sign(x_{j(x)} u_{l(x)} + x_{l(x)} u_{j(x)}).
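
A sampled-data simulation (my sketch; the feedback is the one on this slide, with ties and the measure-zero set where the sign argument vanishes resolved by an arbitrary admissible choice) illustrating the decrease of V: along the exact closed loop, dV/dt = 2(−|x_i| − |x_j u_l + x_l u_j|) ≤ 0, so the state is driven to a small neighbourhood of the origin whose size is of the order of the sampling step.

    import numpy as np

    def k_vehicle(x):
        """Discontinuous robust stabilizer from the slide (indices modulo 3)."""
        absx = np.abs(x)
        i = int(np.flatnonzero(absx == absx.max())[-1])   # largest index attaining max |x_i|
        j, l = (i + 1) % 3, (i + 2) % 3
        u = np.zeros(3)
        u[j] = -np.sign(x[i]) if x[i] != 0.0 else 1.0     # arbitrary admissible value at x = 0
        u[l] = 1.0
        s = x[j] * u[l] + x[l] * u[j]
        u[i] = -np.sign(s) if s != 0.0 else 1.0           # arbitrary admissible value when s = 0
        return u

    def f_vehicle(x, u):
        return np.array([u[1] * u[2], u[0] * u[2], u[0] * u[1]])

    x = np.array([1.0, -0.7, 0.4])
    dt = 1e-3
    for _ in range(4000):                 # sampled ("pi-trajectory") closed loop up to t = 4
        u = k_vehicle(x)                  # control frozen over each sampling interval
        x = x + dt * f_vehicle(x, u)
    print(np.linalg.norm(x))              # small: the state has been driven near the origin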

  60. Robust Stabilization of Nonholonomic Integrator
  Brockett's example (nonholonomic integrator), 1982:
  ẋ_1 = u_1, ẋ_2 = u_2, ẋ_3 = x_1 u_2 − x_2 u_1, with U := {(u_1, u_2) : |u_i| ≤ 1, i = 1, 2}.
  Ledyaev & Rifford 1999: design of a robust discontinuous stabilizing feedback based on the nonsmooth control Lyapunov function
  V(x) = max{ √(x_1^2 + x_2^2), |x_3| − √(x_1^2 + x_2^2) }.
  Known results: Bloch & Drakunov 1994, Astolfi 1995 (no robustness results).
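
For reference, a direct transcription (under my reading of the garbled slide formula) of this nonsmooth CLF; it is positive definite and proper, but not differentiable on the x_3-axis and on the set where the two terms coincide, which is why the proximal-subgradient form of the infinitesimal decrease condition is needed.

    import numpy as np

    def V(x):
        """V(x) = max( sqrt(x1^2 + x2^2), |x3| - sqrt(x1^2 + x2^2) )."""
        r = np.hypot(x[0], x[1])
        return max(r, abs(x[2]) - r)

    for x in ([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.3, -0.4, 2.0]):
        print(x, V(x))                 # V vanishes only at the origin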

  61. Robust Stabilization of Nonholonomic Integrator
  Stabilization of the nonholonomic integrator: pictures.
  Cylindrical coordinates: r = √(x_1^2 + x_2^2), z = x_3; then ṙ = v_1, ż = r v_2.

  62. Output Regulation Problem: Conjecture
  Open Problem:

  63. Output Regulation Problem: Conjecture
  Open Problem: Consider ẋ(t) = f(x(t), u(t), d(t)), y(t) = h(x(t)).

  64. Output Regulation Problem: Conjecture
  Open Problem: Consider ẋ(t) = f(x(t), u(t), d(t)), y(t) = h(x(t)).
  Assume that for arbitrary y_0, z_0 there exists a non-anticipating strategy u(t, y_t, d_t) such that for the system ẋ(t) = f(x(t), u(t, y_t, d_t), d(t)), y(t) = h(x(t)), we have x(t) → S as t → +∞.

  65. Output Regulation Problem: Conjecture
  Consider ẋ(t) = f(x(t), u(t), d(t)), y(t) = h(x(t)).
  Assume that for arbitrary y_0, z_0 there exists a non-anticipating strategy u(t, y_t, d_t) such that for the system ẋ(t) = f(x(t), u(t, y_t, d_t), d(t)), y(t) = h(x(t)), we have x(t) → S as t → +∞.
  CONJECTURE: there exists a dynamic stabilizing feedback k(z, y), g(z, y) such that
  ẋ(t) = f(x(t), k(z(t), y(t)), d(t)), ż(t) = g(z(t), y(t)), y(t) = h(x(t))
  is robustly stabilizing: x(t) → S as t → +∞.

  66. Discontinuous Feedback in Control OPTIMIZATION

  67. Discontinuous Feedback and Team Optimal Control
  We discuss mathematical techniques for deriving the optimal solution of a coordinated control problem:
  the Differential Game of Team Pursuit; examples of Team Pursuit.

  68. Discontinuous Feedback and Team Optimal Control
  Differential Game of Team Pursuit.
  Consider objects x_0, x_1, ..., x_m in R^n with "simple" dynamics ẋ_0 = u_0, ẋ_1 = u_1, ..., ẋ_m = u_m.
  The controls u_0(t), u_1(t), ..., u_m(t) are subject to the constraints ‖u_0‖ ≤ σ_0, ‖u_1‖ ≤ σ_1, ..., ‖u_m‖ ≤ σ_m.

  69. Discontinuous Feedback and Team Optimal Control
  Consider objects x_0, x_1, ..., x_m in R^n with "simple" dynamics ẋ_0 = u_0, ẋ_1 = u_1, ..., ẋ_m = u_m.
  The controls u_0(t), u_1(t), ..., u_m(t) are subject to the constraints ‖u_0‖ ≤ σ_0, ‖u_1‖ ≤ σ_1, ..., ‖u_m‖ ≤ σ_m.
  The object x_0 is an EVADER (it tries to avoid capture by one of the objects x_1, ..., x_m); the objects x_1, ..., x_m are PURSUERS (they try to capture the object x_0).
  The pursuit is over at some moment T if ‖x_0(T) − x_i(T)‖ ≤ l_i for some i ∈ I := {1, 2, ..., m}.

  70. Discontinuous Feedback and Team Optimal Control
  IMPORTANT POINT: the PURSUERS and the EVADER can use only closed-loop (feedback) control u_i(t) = k_i(x(t)), i ∈ I, where x := [x_0, x_1, ..., x_m].
  The optimal pursuit time w(x) for the initial point x is the value function of the differential game of pursuit.

  71. Discontinuous Feedback and Team Optimal Control
  IMPORTANT POINT: the PURSUERS and the EVADER can use only closed-loop (feedback) control u_i(t) = k_i(x(t)), i ∈ I, where x := [x_0, x_1, ..., x_m].
  The optimal pursuit time w(x) for the initial point x is the value function of the differential game of pursuit.
  If w(x) is smooth (differentiable) then it satisfies the eikonal equation H(x, ∇w(x)) = −1, w(x)|_M = 0, where the Hamiltonian H is defined by
  H(x, ∇w(x)) = min_{p ∈ P} max_{q ∈ Q} ⟨∇w(x), f(x, p, q)⟩
  for the differential game of pursuit with terminal set M and dynamics ẋ = f(x, p, q), p ∈ P, q ∈ Q.

  72. Discontinuous Feedback and Team Optimal Control
  In general, w(x) is a nonsmooth (lower semicontinuous) function and the optimal feedback controls k_p(x), k_q(x) are discontinuous.
  For a lower semicontinuous value function w(x), the relation H(x, ∇w(x)) = −1, w(x)|_M = 0, is replaced by two inequalities in terms of subgradients of w(x). One of them:
  H(x, ζ) ≤ −1 for all ζ ∈ ∂_P w(x), x ∉ M.

  73. Discontinuous Feedback and Team Optimal Control
  For a lower semicontinuous value function w(x), the relation H(x, ∇w(x)) = −1, w(x)|_M = 0, is replaced by two inequalities in terms of subgradients of w(x). One of them:
  H(x, ζ) ≤ −1 for all ζ ∈ ∂_P w(x), x ∉ M.
  THEOREM (Clarke, Ledyaev, Subbotin 1997, "The synthesis of universal feedback pursuit strategies in differential games"): Let D ⊂ Ḡ be a compact set such that w is bounded on D. Then for any ε > 0 there exist δ > 0 and a feedback control k_p such that for any x_0 ∈ D and any partition Δ with diam(Δ) < δ we have θ_ε(x_0, k_p, Δ) < w(x_0) + ε, where θ_ε(x_0, k_p, Δ) is the guaranteed pursuit time for the feedback k_p and sampling partition Δ to drive x into the set M_ε (the ε-neighbourhood of M).

  74. Team Optimal Pursuit
  Dynamics of the EVADER x_0 and the PURSUERS x_1, ..., x_m in R^n: ẋ_0 = u_0, ẋ_1 = u_1, ..., ẋ_m = u_m.
  The controls u_0(t), u_1(t), ..., u_m(t) are subject to the constraints ‖u_0‖ ≤ σ_0, ‖u_1‖ ≤ σ_1, ..., ‖u_m‖ ≤ σ_m.
  Terminal set M := { x = [x_0, x_1, ..., x_m] : min_{1 ≤ i ≤ m} (‖x_0 − x_i‖ − l_i) ≤ 0 }.
  ASSUMPTION: m ≤ n, σ_i ≥ σ_0 and σ_i + l_i > σ_0, i = 1, ..., m.

  75. Team Optimal Pursuit
  ASSUMPTION: m ≤ n and σ_i ≥ σ_0, σ_i + l_i > σ_0, i = 1, ..., m.
  Consider, for i ∈ I := {1, ..., m}, the sets Y_i(x) := { y ∈ R^n : Φ_i(y, x_i) ≤ 0 }, where Φ_i(y, x_i) := ‖y − x_0‖/σ_0 − (‖y − x_i‖ − l_i)/σ_i.
  The nonsmooth function (the value (marginal) function of a mathematical programming problem):
  w(x) := sup{ ‖y − x_0‖/σ_0 : y ∈ Y(x) }, Y(x) := ∩_{i ∈ I} Y_i(x).

  76. Team Optimal Pursuit
  The nonsmooth function (the value (marginal) function of a mathematical programming problem):
  w(x) := sup{ ‖y − x_0‖/σ_0 : y ∈ Y(x) }, Y(x) := { y : ‖y − x_0‖/σ_0 − (‖y − x_i‖ − l_i)/σ_i ≤ 0 ∀ i ∈ I }.
  If w(x) < +∞ then define Y_opt(x) := { y ∈ Y(x) : ‖y − x_0‖/σ_0 = w(x) }.

  77. Team Optimal Pursuit
  w(x) := sup{ ‖y − x_0‖/σ_0 : y ∈ Y(x) }.
  PURSUERS' feedback controls: k_i(x) := σ_i (y − x_i)/‖y − x_i‖, where y ∈ Y_opt(x), i ∈ I.
  EVADER's feedback control: k_0(x) := σ_0 (y − x_0)/‖y − x_0‖, where y ∈ Y_opt(x).
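
A brute-force numerical sketch (mine, not the construction used in the papers cited on the following slides) of w(x), Y_opt(x) and the resulting feedbacks for a planar example with two pursuers: Y(x) is discretized by a grid of candidate points y, w(x) is the largest evader travel time over the feasible points, and the controls simply aim at a maximizer y ∈ Y_opt(x). The problem data are illustrative and satisfy σ_i ≥ σ_0 and σ_i + l_i > σ_0.

    import numpy as np

    sigma0, sigmas, ls = 1.0, [1.2, 1.2], [0.1, 0.1]
    x0 = np.array([0.0, 0.0])
    pursuers = [np.array([2.0, 0.5]), np.array([-1.5, 1.5])]

    # Grid of candidate points y (the box is chosen large enough to contain Y(x) here).
    g = np.linspace(-12.0, 12.0, 601)
    G1, G2 = np.meshgrid(g, g)
    pts = np.stack([G1.ravel(), G2.ravel()], axis=1)

    evader_time = np.linalg.norm(pts - x0, axis=1) / sigma0
    feasible = np.ones(len(pts), dtype=bool)
    for xi, si, li in zip(pursuers, sigmas, ls):
        capture_time = (np.linalg.norm(pts - xi, axis=1) - li) / si
        feasible &= evader_time <= capture_time          # evader reaches y no later than pursuer i

    w = evader_time[feasible].max()                      # approximate value of the pursuit game
    y_opt = pts[feasible][np.argmax(evader_time[feasible])]

    k0 = sigma0 * (y_opt - x0) / np.linalg.norm(y_opt - x0)          # evader's control
    ks = [si * (y_opt - xi) / np.linalg.norm(y_opt - xi)             # pursuers' controls
          for xi, si in zip(pursuers, sigmas)]
    print("w(x) ~", w, " y_opt ~", y_opt)
    print("evader control:", k0, " pursuer controls:", ks)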

  78. Team Optimal Pursuit
  w(x) := sup{ ‖y − x_0‖/σ_0 : y ∈ Y(x) }.
  PURSUERS' feedback controls: k_i(x) := σ_i (y − x_i)/‖y − x_i‖, where y ∈ Y_opt(x), i ∈ I.
  EVADER's feedback control: k_0(x) := σ_0 (y − x_0)/‖y − x_0‖, where y ∈ Y_opt(x).
  THEOREM (Ivanov & Ledyaev 1980): Under Assumptions A, the nonsmooth function w(x) is the value function of the team pursuit problem.

  79. Team Optimal Pursuit
  w(x) := sup{ ‖y − x_0‖/σ_0 : y ∈ Y(x) }.
  PURSUERS' feedback controls: k_i(x) := σ_i (y − x_i)/‖y − x_i‖, where y ∈ Y_opt(x), i ∈ I.
  EVADER's feedback control: k_0(x) := σ_0 (y − x_0)/‖y − x_0‖, where y ∈ Y_opt(x).
  THEOREM (Ledyaev 2007): Under Assumptions A, the discontinuous feedbacks k_1, ..., k_m are optimal universal robust pursuit feedback controls and k_0 is the optimal universal robust evader feedback for the team pursuit problem.

  80. Team Optimal Pursuit
  Meaning of the set Y(x): Y(x) := { y : ‖y − x_0‖/σ_0 − (‖y − x_i‖ − l_i)/σ_i ≤ 0 ∀ i ∈ I }.
  At any point y ∈ Y(x) the EVADER arrives before interception by EACH PURSUER, so the EVADER can avoid interception on the time interval [0, w(x)).
  EXAMPLES (pictures): the set Y_1(x) ∪ Y_2(x) ∪ Y_3(x); the set Y(x); the set Y_opt(x).
