The Problem of Output Measurement Feedback Control Under Set-Valued Uncertainty: from Theory to Computation


1. The Problem of Output Measurement Feedback Control Under Set-Valued Uncertainty: from Theory to Computation. A. B. Kurzhanski (Moscow State University and the University of California at Berkeley). Presentation at the 48th IEEE CDC and 28th Chinese Control Conference, Shanghai, China, December 17, 2009.

2. OUTLINE
1. Motivations
2. The Basic Problem. The Separation Property.
3. The GSE Problem of Guaranteed (Set-Membership) State Estimation
4. The GCS Problem of Guaranteed Control Synthesis
5. Combination of GSE and GCS: the Solution Strategy
6. Systems with Linear Structure: the System and its Reconfiguration
7. Linear Systems: the Solution Scheme, Reduction to Finite Dimensions
8. Calculation: the Ellipsoidal and Polyhedral Techniques
9. Conclusion

3. MOTIVATIONS

4. [Figure: flow, $0 \le v \le \bar{v}$; labels: starting point, end point, measurements, reachability set.]

5. [Figure: Team Control Synthesis, complete measurements. Labels: container $M$, tube $W[\cdot]$ with $W[t_0] = W(t_0, t_1, M)$, safety set, trajectories $x^{(i)}(t)$, safety zone $B_r$.]


7. The System Equations and the Uncertainties
The uncertain system:
$$\frac{dx}{dt} = f_1(t, x, u) + f_2(t, x, v), \quad x \in \mathbb{R}^n, \ t \in [t_0, \vartheta], \qquad (1)$$
with continuous right-hand sides satisfying conditions of uniqueness and extendibility of solutions.
Hard bounds on the control $u$ and the unknown disturbance $v(t)$:
$$u \in P(t), \quad v(t) \in Q(t), \qquad (2)$$
where $P(t)$, $Q(t)$ are compact sets in $\mathbb{R}^p$, $\mathbb{R}^q$, Hausdorff-continuous.

8. Measurement equation:
$$y(t) = h(t, x) + \xi(t), \quad y \in \mathbb{R}^m. \qquad (3)$$
The measurements $y(t)$, $t \in T$, may be continuous or discrete. The disturbance in the measurement, $\xi(t)$, is unknown but bounded:
$$\xi(t) \in R(t), \quad t \in [t_0, \vartheta], \qquad (4)$$
with $R(t)$ similar to $P(t)$ and $h(t, x)$ continuous.
Initial condition:
$$x(t_0) \in X^0, \qquad (5)$$
with $X^0$ compact. Starting position: $\{t_0, X^0\}$.
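
To make the setup concrete, here is a minimal Python sketch that simulates one admissible trajectory of a system of the form (1)-(5) together with its measurements (3). The specific choices of $f_1$, $f_2$, $h$ and the interval bounds are illustrative assumptions, not data from the slides.

```python
import numpy as np

# Minimal sketch of system (1)-(5): a scalar example with
# f1(t,x,u) = u, f2(t,x,v) = -x + v, h(t,x) = x and interval-valued bounds.
# All concrete choices here are illustrative assumptions.

def f1(t, x, u): return u
def f2(t, x, v): return -x + v
def h(t, x): return x

P  = (-1.0, 1.0)    # u(t)  in P(t):  hard bound on the control, eq. (2)
Q  = (-0.2, 0.2)    # v(t)  in Q(t):  hard bound on the disturbance, eq. (2)
R  = (-0.1, 0.1)    # xi(t) in R(t):  bound on the measurement noise, eq. (4)
X0 = (-0.5, 0.5)    # x(t0) in X^0:   compact initial set, eq. (5)

def simulate(u_of_t, t0=0.0, theta=2.0, dt=0.01, seed=0):
    """Euler simulation of one admissible trajectory and its measurements."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*X0)                  # any x(t0) in X^0 is admissible
    ts = np.arange(t0, theta, dt)
    xs, ys = [], []
    for t in ts:
        v = rng.uniform(*Q)               # unknown-but-bounded disturbance
        xi = rng.uniform(*R)              # unknown-but-bounded noise
        ys.append(h(t, x) + xi)           # measurement equation (3)
        u = np.clip(u_of_t(t, x), *P)     # enforce the hard bound (2)
        x = x + dt * (f1(t, x, u) + f2(t, x, v))   # dynamics (1)
        xs.append(x)
    return ts, np.array(xs), np.array(ys)

ts, xs, ys = simulate(lambda t, x: -x)    # e.g. a simple feedback u = -x
print(xs[-1], ys[-1])
```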


10. BASIC PROBLEM
Steer the system
$$\frac{dx}{dt} = f_1(t, x, u) + f_2(t, x, v), \quad x \in \mathbb{R}^n, \ t \in [t_0, \vartheta], \qquad (1)$$
$$y(t) = h(t, x) + \xi(t), \quad y \in \mathbb{R}^m, \qquad (3)$$
from the starting position $\{t_0, X^0\}$ to the terminal position $\{\vartheta, M\}$ by a feedback control strategy $U(t, \cdot)$, on the basis of the available information:
- the system model: equations (1), (3),
- the starting position $\{t_0, X^0\}$,
- the available measurement $y(t)$,
- the given constraints on the control $u$ and the uncertain disturbance inputs $v(t)$, $\xi(t)$.

11. What should the NEW STATE of the SYSTEM be?
*** Classical case under complete information:
Position (state): $\{t, x\}$, single-valued. Closed-loop control: $\{u(t, x)\}$. Trajectories, single-valued: $x[t] = x(t, t_0, x_0)$.
*** Output feedback control under incomplete information, with set-valued bounds (no statistical data available):
Position (state), set-valued: $X[t]$.

12. The on-line set-valued position (NEW STATE) of the system may be taken as:
* $\{t, y_t(\cdot)\}$: memorize the measurements (in stochastic control this is done through observers and filters (Kalman));
** $\{t, X[t]\}$: find the set-valued information set consistent with the measurements and the constraints on the uncertain items, that is, find set-valued information tubes;
*** $\{t, V(t, \cdot)\}$: find the information state, a function $V(t, x)$ such that $X[t] = \{x : V(t, x) \le \alpha\}$ is a level set of $V(t, x)$ (found through Hamilton-Jacobi-Bellman (HJB) PDE equations).

13. [Figure: Guaranteed State Estimation under set-membership noise. Measurements 1, 2, 3; open-loop reachability tube from $t = t_0$ to $t = \tau$; information set $X(\tau)$.]

14. Problem I of Measurement Output Feedback Control:
Specify a feedback strategy (closed-loop controls) $U(t, X[t])$ or $U(t, V(t, \cdot))$ which steers the overall system FROM any starting position $\{\tau, X[\tau]\}$, $\tau \in [t_0, \vartheta]$, TO a given neighborhood $M_\mu$ of the target set $M$ at time $\vartheta$:
$$\{\tau, X[\tau]\} \to \{\vartheta, X[\vartheta]\}, \qquad X[\vartheta] \subseteq M_\mu,$$
despite unknown disturbances and incomplete measurements.
ATTENTION for MATHEMATICIANS: $U = \{U(t, X[t])\}$ must ensure the existence and extendibility of solutions to the differential inclusion
$$\dot{x} \in f_1(t, x, U(t, X[t])) + f_2(t, x, v)$$
within the interval $t \in [t_0, \vartheta]$, whatever the disturbance $v(t)$ may be.

15. (Measurement) Output Feedback Control
Closed-loop (feedback) control strategies: $U(t, X)$, $U(t, V(t, \cdot))$, with state $\{t, X\}$ or $\{t, V(t, \cdot)\}$ and set-valued trajectories $X[t] = X(t, t_0, X^0)$;
or single-valued trajectories $x[t]$ with a set-valued error bound $R[t]$ and state $\{t, x[t], \Omega[t]\}$ (external estimate $E[t] \supseteq R[t]$): trajectories $x[t] = x(t, t_0, x_0)$, error bounds $\Omega[t] = \Omega(t, t_0, X^0 - x_0)$.

16. REMARK: Problem I may be separated into
Problem GSE of guaranteed state estimation (finite-dimensional) and
Problem GCS of guaranteed control synthesis (infinite-dimensional).
OUR AIM: (a) Find the possibility of solutions while avoiding infinite-dimensional schemes. (b) Design feasible computational methods.


19. SOLUTION METHODS
(a) GENERAL METHOD: the HAMILTON-JACOBI-BELLMAN (HJB) EQUATIONS
(b) USING INVARIANT SETS and AIMING METHODS: SET-VALUED CALCULUS + NONLINEAR ANALYSIS; FOR LINEAR SYSTEMS: CONVEX ANALYSIS
(c) THE H-INFINITY APPROACH
(d) APPROXIMATE METHODS: THE COMPARISON PRINCIPLE, DISCRETIZATION METHODS
(e) COMPUTATION METHODS FOR LINEAR SYSTEMS: ELLIPSOIDAL CALCULUS, POLYHEDRAL CALCULUS, or BOTH
(f) INTERTWINING THE ABOVE METHODS
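
As a small illustration of the ellipsoidal calculus mentioned in item (e), here is a sketch of one of its basic operations: a trace-tight external ellipsoid for the Minkowski sum of two ellipsoids, using the standard parametric family of outer approximations. The function name and the numerical example are my own; this is a sketch, not the deck's computational scheme.

```python
import numpy as np

def outer_sum_ellipsoid(q1, Q1, q2, Q2):
    """Trace-tight external ellipsoid E(q, Q) containing the Minkowski sum
    E(q1, Q1) + E(q2, Q2), where E(q, Q) = {x : (x-q)' Q^{-1} (x-q) <= 1}.
    Uses the classical parametric family Q = (1 + 1/p) Q1 + (1 + p) Q2 with
    the trace-optimal parameter p = sqrt(tr Q1 / tr Q2)."""
    p = np.sqrt(np.trace(Q1) / np.trace(Q2))
    return q1 + q2, (1.0 + 1.0 / p) * Q1 + (1.0 + p) * Q2

# Example: outer approximation of the sum of two planar ellipsoids
q, Q = outer_sum_ellipsoid(np.zeros(2), np.diag([4.0, 1.0]),
                           np.zeros(2), np.diag([1.0, 9.0]))
print(q, Q)
```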

20. Problem GSE of Guaranteed State Estimation: The One-Stage Problem
NOTE THAT THERE IS WORST-CASE NOISE and BEST-CASE NOISE.

21. [Figure: the scalar one-stage example $y = x + \xi$, $\xi \in R$. Labels: noise bound $R$, worst case, routine, best case, points $c_1^*$, $c_2^*$.]
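
A minimal numerical sketch of this one-stage picture, with illustrative numbers of my own choosing: the information set is the intersection of the prior set with the measurement-consistent interval, and its size depends on the realized noise.

```python
# One-stage scalar picture: y = x + xi, |xi| <= r, prior x in X0.
# The information set is X0 ∩ [y - r, y + r]; its size depends on the
# realized measurement.  The numbers below are illustrative assumptions.

X0 = (-1.0, 1.0)   # prior set for x
r = 0.25           # noise bound: xi in R = [-r, r]

def info_set(y):
    lo, hi = max(X0[0], y - r), min(X0[1], y + r)
    return (lo, hi) if lo <= hi else None   # None: inconsistent measurement

# A "routine" measurement keeps the estimate as wide as the strip, 2r:
print(info_set(0.0))        # -> (-0.25, 0.25)

# A best-case measurement (x = 1 and xi = +r, so y = 1.25) collapses the
# information set to the single point {1}:
print(info_set(1.25))       # -> (1.0, 1.0)
```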

22. Examples: nonlinear maps
$$x(k+1) = f(x(k)), \qquad y(k+1) = G x(k+1) + \xi.$$
Take $x \in \mathbb{R}^2$,
$$f(x) = \begin{pmatrix} a x_1 x_2^2 \\ a x_2 x_1^2 \end{pmatrix}, \qquad X(k) = \{x \in \mathbb{R}^2 : |x_i| \le 1,\ i = 1, 2\},$$
$$y(k+1) = x_2(k+1) + \xi, \quad |\xi| \le \mu,$$
$$X_Y(k+1) = \{x : x_2 \in [y(k+1) - \mu,\ y(k+1) + \mu]\}, \qquad X(k+1) = f(X(k)) \cap X_Y(k+1).$$
[Figure: the box $X(k)$, its image $f(X(k))$, and the strip $X_Y$.]
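
A grid-based Python sketch of this one-step update. The map $f$, the measurement value, and the constants follow the reconstruction above and should be treated as illustrative assumptions.

```python
import numpy as np

# Grid sketch of the one-step information-set update
#   X(k+1) = f(X(k)) ∩ X_Y(k+1).
# The map f and all constants are illustrative assumptions.

a, mu = 1.0, 0.1

def f(x):                      # x is an array of shape (N, 2)
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([a * x1 * x2**2, a * x2 * x1**2], axis=1)

# X(k): the unit box |x_i| <= 1, sampled on a grid
g = np.linspace(-1.0, 1.0, 201)
Xk = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

fXk = f(Xk)                    # image f(X(k))

y_next = 0.4                   # an illustrative measurement y(k+1)

# X_Y(k+1) = {x : x_2 in [y - mu, y + mu]}; intersect with the image
mask = np.abs(fXk[:, 1] - y_next) <= mu
Xk1 = fXk[mask]                # grid approximation of X(k+1)
print(Xk1.shape, Xk1[:, 1].min(), Xk1[:, 1].max())
```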

23. Nonlinear Examples
$$x(k+1) = f(x(k)), \qquad y(k+1) = a^2 x_1^2 + b^2 x_2^2 + \xi, \quad |\xi| \le \varepsilon,$$
$$X(k) = \{x \in \mathbb{R}^2 : |x_i| \le 1,\ i = 1, 2\},$$
$$X_Y(k+1) = \{x : a^2 x_1^2 + b^2 x_2^2 \in [y(k+1) - \varepsilon,\ y(k+1) + \varepsilon]\},$$
$$X(k+1) = f(X(k)) \cap X_Y(k+1).$$
Here $X(k+1)$ may be disconnected.
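
A sketch of how such a disconnected intersection can arise and be detected on a grid. The propagated set used below (a thin vertical strip standing in for $f(X(k))$) and all constants are illustrative assumptions, not the slide's data.

```python
import numpy as np
from scipy import ndimage

# Why X(k+1) = f(X(k)) ∩ X_Y(k+1) can be disconnected for a quadratic output.
# The propagated set (a thin vertical strip standing in for f(X(k))) and the
# constants are illustrative assumptions.

a, b, eps, y_next = 1.0, 1.0, 0.05, 0.5

g = np.linspace(-1.0, 1.0, 401)
X1, X2 = np.meshgrid(g, g)

propagated = np.abs(X1) <= 0.2                                    # f(X(k)) stand-in
consistent = np.abs(a**2 * X1**2 + b**2 * X2**2 - y_next) <= eps  # X_Y(k+1)

X_next = propagated & consistent                  # grid mask of X(k+1)
_, n_components = ndimage.label(X_next)
print("connected components of X(k+1):", n_components)   # prints 2 here
```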

24. [Figure: convex hull.]

25. Unknown but bounded noise
(i) Measurements arrive at given times (continuous or discrete). The noise is unknown, with given bounds. There is a worst case, when $W[t]$ is the largest possible, and a best case, when $W[t]$ may even reduce to a point.
(ii) Measurements arrive at random instants of time, according to a Poisson distribution. The noise has given bounds and a given probability density. With stochastic noise the worst and best cases occur with probability zero. The statistical estimates of $x$ are consistent.

26. [Figure: The Dynamics of the Information Set. $t_*$ and $t^*$ are the instants of discrete observations.]

27. Problem GSE of Guaranteed ("Minmax") State Estimation
Problem GSE may be formulated in two versions, $E_1$ and $E_2$.
Problem $E_1$: Given are the equations
$$\frac{dx}{dt} = f_1(t, x, u) + f_2(t, x, v), \qquad y(t) = h(t, x) + \xi(t), \qquad (i)$$
the position $\{t_0, X^0\}$, the control $u[s]$ used for $s \in [t_0, \tau)$, the measurement $y = y^*(t)$, $t \in [t_0, \tau]$, and the constraints
$$u \in P, \quad v \in Q, \quad \xi \in R, \qquad (ii)$$
with $P$, $Q$, $R$ given.
Specify the information set $X[\tau]$ of solutions $x(\tau)$ to system (i) consistent with the system equations, the measurement $y^*(t)$, $t \in [t_0, \tau]$, and the constraints (ii).
The information set $X[\tau]$ is the guaranteed estimate of $x(\tau)$.
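
A brute-force Python sketch of Problem E1 for an illustrative scalar linear case: candidate initial states and disturbance realizations are sampled, and the endpoints of those trajectories whose simulated output stays within the noise bound of the recorded measurement approximate $X[\tau]$. All concrete data (dynamics, bounds, horizon) are assumptions.

```python
import numpy as np

# Brute-force sketch of Problem E1 for an illustrative scalar linear case:
#   dx/dt = u*(t) + v(t),   y(t) = x(t) + xi(t),
#   |v| <= q, |xi| <= r, x(t0) in X^0.
# X[tau] is approximated by the endpoints x(tau) of the sampled trajectories
# whose output stays within the noise bound of the recorded measurement y*(t).
# All concrete data are assumptions.

rng = np.random.default_rng(1)
t0, tau, dt = 0.0, 0.5, 0.05
q, r = 0.2, 0.1
X0 = (-1.0, 1.0)
u_star = lambda t: 0.0                 # control actually used on [t0, tau)
ts = np.arange(t0, tau, dt)

# A recorded measurement y*(t), generated here from one admissible trajectory.
x_true, y_star = 0.3, []
for t in ts:
    y_star.append(x_true + rng.uniform(-r, r))
    x_true += dt * (u_star(t) + rng.uniform(-q, q))

# Sample candidate (x(t0), v(.)) pairs and keep only the consistent ones.
endpoints = []
for _ in range(20000):
    x, ok = rng.uniform(*X0), True
    for k, t in enumerate(ts):
        if abs(y_star[k] - x) > r:     # y*(t) - h(t, x(t)) must lie in R(t)
            ok = False
            break
        x += dt * (u_star(t) + rng.uniform(-q, q))
    if ok:
        endpoints.append(x)

if endpoints:
    print("approx. X[tau] = [%.3f, %.3f]" % (min(endpoints), max(endpoints)))
else:
    print("no consistent samples; refine the sampling")
```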

28. [Figure: the value function $V(\tau, x) = \min_v d(x(t_0), X^0)$; $V(\tau, x) > 0$ corresponds to $x(\tau) \notin X[\tau]$, and $V(\tau, x) = 0$ to $x(\tau) \in X[\tau]$.]

29. It is necessary not only to calculate the set $X[\tau]$, but to arrange on-line calculations following the evolution of $X[t]$ in time!
This leads to a problem of DYNAMIC OPTIMIZATION:
Problem $E_2$: Given the starting position $\{t_0, X^0\}$ and the realization $y^*(s)$, $s \in [t_0, \tau]$, find the value function
$$V(\tau, x) = \min_v \{\, d(x(t_0), X^0) \mid v(t) \in Q(t),\ t \in [t_0, \tau] \,\}$$
subject to equation (1), under the additional conditions
$$x(\tau) = x; \qquad y^*(s) - h(s, x(s)) \in R(s), \quad s \in [t_0, \tau].$$
The last condition is actually an on-line state constraint.

30. The following relation is true: $X[t] = \{x : V(t, x) \le 0\}$.
The value function $V(t, x)$ may be found by solving an HJB equation!
Introduce the notation $V(\tau, x) = V(\tau, x \mid V(t_0, \cdot))$. Then the principle of optimality for Problem GSE reads
$$V(\tau, x \mid V(t_0, \cdot)) = V(\tau, x \mid V(t, \cdot \mid V(t_0, \cdot))), \qquad t_0 \le t \le \tau. \qquad (!)$$
This allows one to derive an HJB (Dynamic Programming) equation to calculate $V(t, x)$.

31. The HJB equation:
$$\frac{\partial V}{\partial t} + \max_v \left\{ -\left\langle \frac{\partial V}{\partial x},\, f_1(t, x, u^*(t)) + f_2(t, x, v) \right\rangle - d^2\big(y^*(t) - h(t, x),\, R(t)\big) \;\Big|\; v(t) \in Q(t) \right\} = 0,$$
under the boundary condition $V(t_0, x) = d^2(x, X^0)$.
Discretized scheme:
$$X[t + \sigma] \approx X[t + \sigma - 0] \cap Y(t + \sigma).$$
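
A minimal Python sketch of the discretized scheme above for an illustrative scalar case with $dx/dt = u^*(t) + v$, $y = x + \xi$, $|v| \le q$, $|\xi| \le r$: an interval is propagated over the step $\sigma$ (prediction) and then intersected with the measurement-consistent set $Y$ (correction). The function names, numbers, and recorded signals are assumptions, not data from the slides.

```python
# Interval sketch of the discretized scheme  X[t+s] ≈ X[t+s-0] ∩ Y(t+s)
# for an illustrative scalar case  dx/dt = u*(t) + v,  y = x + xi,
# |v| <= q, |xi| <= r.  For this scalar dynamics the interval [lo, hi]
# propagates exactly over a small step s to [lo + s*(u - q), hi + s*(u + q)];
# the correction intersects it with Y = [y - r, y + r].  All data below are
# illustrative assumptions.

q, r, s = 0.2, 0.1, 0.05

def predict(interval, u):
    lo, hi = interval
    return lo + s * (u - q), hi + s * (u + q)      # X[t+s-0]

def correct(interval, y):
    lo, hi = interval
    lo, hi = max(lo, y - r), min(hi, y + r)        # intersection with Y(t+s)
    if lo > hi:
        raise ValueError("measurement inconsistent with the model/bounds")
    return lo, hi

# Run the scheme along a recorded control u*(t) and measurements y*(t).
X = (-1.0, 1.0)                                    # X^0
u_star = [0.0] * 10                                # control actually used
y_star = [0.32, 0.30, 0.35, 0.31, 0.29, 0.33, 0.30, 0.31, 0.34, 0.30]

for u, y in zip(u_star, y_star):
    X = correct(predict(X, u), y)
print("information interval X[t]:", X)
```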

32. [Figure: The Dynamics of the Information Set. $t_*$ and $t^*$ are the instants of discrete observations.]
