  1. Duality Between Constrained Estimation and Control. José De Doná, September 2004. Centre of Complex Dynamic Systems and Control.

  2. Outline:
1. Recap on Lagrangian Duality: Primal and Dual Problems; Strong Duality Theorem.
2. Primal Estimation Problem: System Definition; Constrained Estimation Problem.
3. Lagrangian Dual Problem: Lagrangian Dual Function; Formulation of the Lagrangian Dual Problem.
4. Equivalent Formulation of the Primal Problem: Preliminary Results; Formulation of the Equivalent Primal Problem.
5. Symmetry of Constrained Estimation and Control.
6. More General Constraints.

  3. Duality Between Constrained Estimation and Control. We have seen that the problem of constrained estimation can be formulated as a constrained optimisation problem. Indeed, this problem is remarkably similar to the constrained control problem, differing only with respect to the boundary conditions. (In control, the initial condition is fixed, whereas in estimation, the initial condition can also be adjusted.) We will now derive the Lagrangian dual of a constrained estimation problem and show that it leads to a particular unconstrained nonlinear optimal control problem. We will then show that the original (primal) constrained estimation problem has an equivalent formulation as an unconstrained nonlinear optimisation problem, exposing a clear symmetry with its dual.

  4. Recap on Lagrangian Duality. Consider the following nonlinear programming problem:
Primal Problem P:
minimise f(x),   (1)
subject to:
g_i(x) ≤ 0 for i = 1, ..., m,
h_i(x) = 0 for i = 1, ..., ℓ,
x ∈ X.
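To make the abstract problem P concrete, here is a minimal numerical sketch (not from the slides): a toy instance with a quadratic objective, one inequality, one equality and X = R^2, handed to a generic NLP solver. The particular f, g and h are invented for illustration.

```python
# Toy instance of the primal problem P (illustrative data only):
#   f(x) = ||x||^2,  g(x) = 1 - x1 - x2 <= 0,  h(x) = x1 - 2*x2 = 0,  X = R^2.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x @ x                       # objective f(x)
g = lambda x: 1.0 - x[0] - x[1]           # inequality constraint, written as g(x) <= 0
h = lambda x: x[0] - 2.0 * x[1]           # equality constraint h(x) = 0

res = minimize(
    f, x0=np.zeros(2), method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: -g(x)},   # SLSQP expects fun(x) >= 0
                 {"type": "eq",   "fun": h}],
)
print(res.x, res.fun)                     # primal minimiser and optimal value
```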

  5. Recap on Lagrangian Duality. Then the Lagrangian dual problem is defined as the following nonlinear programming problem:
Lagrangian Dual Problem D:
maximise θ(u, v),   (2)
subject to: u ≥ 0,
where
θ(u, v) = inf { f(x) + Σ_{i=1}^{m} u_i g_i(x) + Σ_{i=1}^{ℓ} v_i h_i(x) : x ∈ X }   (3)
is the Lagrangian dual function.
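For fixed multipliers, the dual function (3) is just the value of an inner minimisation over x ∈ X, so it can be evaluated numerically. The sketch below does this for the same toy f, g and h as above (all data illustrative); the outer dual problem then maximises this quantity over u ≥ 0 with v free.

```python
# Evaluate the Lagrangian dual function theta(u, v) = inf{ f(x) + u*g(x) + v*h(x) : x in X }
# for the illustrative toy problem (f(x) = ||x||^2, g(x) = 1 - x1 - x2, h(x) = x1 - 2*x2, X = R^2).
import numpy as np
from scipy.optimize import minimize

def theta(u, v):
    lagrangian = lambda x: x @ x + u * (1.0 - x[0] - x[1]) + v * (x[0] - 2.0 * x[1])
    return minimize(lagrangian, x0=np.zeros(2)).fun   # unconstrained inner problem (X = R^2)

print(theta(1.0, 0.0))                    # one evaluation of the dual function
```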

  6. Recap on Lagrangian Duality. In the dual problem (2)–(3), the vectors u and v have as their components the Lagrange multipliers u_i for i = 1, ..., m, and v_i for i = 1, ..., ℓ. Note that the Lagrange multipliers u_i, corresponding to the inequality constraints g_i(x) ≤ 0, are restricted to be nonnegative, whereas the Lagrange multipliers v_i, corresponding to the equality constraints h_i(x) = 0, are unrestricted in sign. Given the primal problem P in (1), several Lagrangian dual problems D of the form (2)–(3) can be devised, depending on which constraints are handled as g_i(x) ≤ 0 and h_i(x) = 0, and which constraints are handled by the set X. (An appropriate selection of the set X must be made, depending on the nature of the problem.)

  7. Recap on Lagrangian Duality. The following result, known as the strong duality theorem, shows that, under suitable convexity assumptions and under a constraint qualification, there is no duality gap between the primal and dual optimal objective function values.
Theorem (Strong Duality Theorem). Let X be a nonempty convex set in R^n. Let f : R^n → R and g : R^n → R^m be convex, and let h : R^n → R^ℓ be affine. Suppose that the following constraint qualification is satisfied: there exists an x̂ ∈ X such that g(x̂) < 0 and h(x̂) = 0, and 0 ∈ int h(X), where h(X) = { h(x) : x ∈ X }. Then
inf { f(x) : x ∈ X, g(x) ≤ 0, h(x) = 0 } = sup { θ(u, v) : u ≥ 0 },   (4)
where θ(u, v) = inf { f(x) + u'g(x) + v'h(x) : x ∈ X }. Furthermore, if the inf is finite, then sup { θ(u, v) : u ≥ 0 } is achieved at (ū, v̄) with ū ≥ 0. If the inf is achieved at x̄, then ū'g(x̄) = 0.
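One way to see the theorem at work is to solve a small convex instance both ways and compare the optimal values. The sketch below (same illustrative toy problem; nested numerical solvers, so only approximate) checks that the primal value equals sup{θ(u, v) : u ≥ 0} and that ū'g(x̄) ≈ 0.

```python
# Numerical check of strong duality on the convex toy problem (illustrative data only).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x @ x
g = lambda x: 1.0 - x[0] - x[1]
h = lambda x: x[0] - 2.0 * x[1]

# Primal optimal value.
primal = minimize(f, np.zeros(2), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": lambda x: -g(x)},
                               {"type": "eq",   "fun": h}])

# Dual function (inner inf) and dual optimal value (maximise over u >= 0, v free).
def theta(uv):
    u, v = uv
    return minimize(lambda x: f(x) + u * g(x) + v * h(x), np.zeros(2)).fun

dual = minimize(lambda uv: -theta(uv), x0=np.array([1.0, 0.0]),
                bounds=[(0.0, None), (None, None)])

print(primal.fun, -dual.fun)              # equal up to solver tolerance: no duality gap
print(dual.x[0] * g(primal.x))            # complementary slackness: approximately 0
```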

  8. System Definition. Consider the following system:
x_{k+1} = A x_k + B w_k for k = 0, ..., N−1,   (5)
y_k = C x_k + v_k for k = 1, ..., N,
where x_k ∈ R^n, w_k ∈ R^m and y_k ∈ R^p. For clarity of exposition, we will consider the case where only the process noise sequence {w_k} is constrained. (The case of general constraints on w_k, v_k and x_0 can be treated as well.)
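For reference, simulating the model (5) forward in time is straightforward. The matrices, horizon and noise scales in the sketch below are placeholders chosen for illustration, not values from the slides.

```python
# Forward simulation of x_{k+1} = A x_k + B w_k, y_k = C x_k + v_k (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # n = 2
B = np.array([[0.0], [0.1]])              # m = 1
C = np.array([[1.0, 0.0]])                # p = 1
N = 20

x = np.zeros((N + 1, 2))
y = np.zeros((N + 1, 1))                  # y_1, ..., y_N (index 0 unused)
for k in range(N):
    w_k = rng.normal(size=1)                                  # process noise w_k
    x[k + 1] = A @ x[k] + B @ w_k                             # state update (5)
    y[k + 1] = C @ x[k + 1] + rng.normal(scale=0.1, size=1)   # measurement with noise v_{k+1}
```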

  9. System Definition. We thus assume that {v_k} is an i.i.d. sequence having a Gaussian distribution N(0, R) with covariance R > 0; x_0 has a Gaussian distribution N(µ_0, P_0) with covariance P_0 > 0; and {w_k} is an i.i.d. sequence having a truncated Gaussian distribution of the form:
p_w(w_k) = β_w exp(−½ w_k' Q^{-1} w_k) / ( β_w ∫_Ω exp(−½ ν' Q^{-1} ν) dν )   for w_k ∈ Ω,   (6)
p_w(w_k) = 0   otherwise,
where Q > 0 and β_w = (2π)^{−m/2} (det Q)^{−1/2}.
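Because β_w appears in both the numerator and the denominator of (6), the truncated density is simply the N(0, Q) density restricted to Ω and renormalised by its probability mass on Ω. The sketch below illustrates this in one dimension, with Ω taken (purely for illustration) to be an interval; samples can be drawn by rejection.

```python
# One-dimensional illustration of the truncated Gaussian density (6) (all data illustrative).
import numpy as np
from scipy import integrate, stats

Q = 0.5                                    # scalar "covariance" (m = 1)
omega = (-1.0, 1.0)                        # Omega = [-1, 1]

gauss = stats.norm(loc=0.0, scale=np.sqrt(Q))
mass, _ = integrate.quad(gauss.pdf, *omega)        # probability mass of N(0, Q) on Omega

def p_w(w):
    """Density (6): N(0, Q) restricted to Omega and renormalised."""
    return gauss.pdf(w) / mass if omega[0] <= w <= omega[1] else 0.0

# Rejection sampling: draw from N(0, Q) and keep only the samples that land in Omega.
samples = gauss.rvs(size=5000, random_state=np.random.default_rng(0))
samples = samples[(samples >= omega[0]) & (samples <= omega[1])]
```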

  10. Constrained Estimation Problem. Given the observations {y^d_1, ..., y^d_N} and the knowledge of µ_0 (the mean value of x_0), we consider the following optimisation problem, which can be shown to yield the joint a posteriori most probable state estimates:
Problem P_e:
V_N^opt(µ_0, {y^d_k}) = min_{ {x̂_k}, {v̂_k}, {ŵ_k} } V_N({x̂_k}, {v̂_k}, {ŵ_k}),   (7)
subject to:
x̂_{k+1} = A x̂_k + B ŵ_k for k = 0, ..., N−1,   (8)
v̂_k = y^d_k − C x̂_k for k = 1, ..., N,   (9)
{ x̂_0, ..., x̂_N, v̂_1, ..., v̂_N, ŵ_0, ..., ŵ_{N−1} } ∈ X.   (10)

  11. Constrained Estimation Problem. In problem P_e, the constraint set X is given by:
X = R^n × ··· × R^n (N+1 factors) × R^p × ··· × R^p (N factors) × Ω × ··· × Ω (N factors),   (11)
and the objective function is given by:
V_N({x̂_k}, {v̂_k}, {ŵ_k}) = ½ (x̂_0 − µ_0)' P_0^{-1} (x̂_0 − µ_0) + ½ Σ_{k=0}^{N−1} ŵ_k' Q^{-1} ŵ_k + ½ Σ_{k=1}^{N} v̂_k' R^{-1} v̂_k.   (12)
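A direct (if not the most efficient) way to compute the optimal value in (7) is to optimise over the initial state and the process-noise sequence, eliminating the states and output errors through (8)–(9). The sketch below does this with a generic bound-constrained solver; the system matrices, the interval Ω and the data record {y^d_k} are all stand-ins invented for illustration.

```python
# Solve an illustrative instance of the constrained estimation problem (7)-(10)
# by optimising over (x0_hat, w_hat_0, ..., w_hat_{N-1}) with each w_hat_k constrained to Omega.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]]); C = np.array([[1.0, 0.0]])
Q = np.array([[1.0]]); R = np.array([[0.01]]); P0 = np.eye(2)
mu0 = np.zeros(2); N = 10
yd = np.linspace(0.0, 1.0, N)              # stand-in data record y^d_1, ..., y^d_N
w_bound = 0.5                              # Omega = {w : |w| <= 0.5}, illustrative

def cost(z):
    x0, w = z[:2], z[2:]
    VN = 0.5 * (x0 - mu0) @ np.linalg.inv(P0) @ (x0 - mu0)
    x = x0
    for k in range(N):
        VN += 0.5 * w[k] ** 2 / Q[0, 0]                    # process-noise term of (12)
        x = A @ x + B[:, 0] * w[k]                         # dynamics constraint (8)
        v = yd[k] - C[0] @ x                               # output error (9), v_hat_{k+1}
        VN += 0.5 * v ** 2 / R[0, 0]                       # output-error term of (12)
    return VN

bounds = [(None, None)] * 2 + [(-w_bound, w_bound)] * N    # only {w_hat_k} is constrained
sol = minimize(cost, np.zeros(2 + N), bounds=bounds)
print(sol.fun)                                             # V_N^opt(mu_0, {y^d_k}) for this instance
```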

  12. Lagrangian Dual Problem. The Lagrangian dual function θ is given by:
θ({λ_k}, {u_k}) = inf_{ ŵ_k ∈ Ω, x̂_k, v̂_k } L({x̂_k}, {v̂_k}, {ŵ_k}, {λ_k}, {u_k}),   (13)
where the function L is defined as
L({x̂_k}, {v̂_k}, {ŵ_k}, {λ_k}, {u_k}) = V_N({x̂_k}, {v̂_k}, {ŵ_k}) + Σ_{k=0}^{N−1} λ_k' (x̂_{k+1} − A x̂_k − B ŵ_k) + Σ_{k=1}^{N} u_k' (y^d_k − C x̂_k − v̂_k).   (14)
In (14), V_N is the primal objective function and {λ_k} and {u_k} are the Lagrange multipliers corresponding to the linear equality constraints (8) and (9).

  13. Lagrangian Dual Problem. Substituting the expression for the primal objective function V_N given in (12) and rearranging terms, we obtain:
L({x̂_k}, {v̂_k}, {ŵ_k}, {λ_k}, {u_k}) = ½ (x̂_0 − µ_0)' P_0^{-1} (x̂_0 − µ_0) − λ_0' A x̂_0
+ Σ_{k=1}^{N} [ ½ v̂_k' R^{-1} v̂_k − u_k' v̂_k + u_k' y^d_k ]
+ Σ_{k=0}^{N−1} [ ½ ŵ_k' Q^{-1} ŵ_k − λ_k' B ŵ_k ]
+ Σ_{k=1}^{N−1} (λ_{k−1} − A' λ_k − C' u_k)' x̂_k
+ (λ_{N−1} − C' u_N)' x̂_N.   (15)
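The rearrangement can be sanity-checked numerically: evaluating the Lagrangian as in (14) (V_N plus the multiplier terms) and as in (15) for arbitrary data must give the same number. The sketch below does this with random matrices and vectors (all illustrative).

```python
# Check that the rearranged Lagrangian (15) agrees with the definition (14) on random data.
import numpy as np

rng = np.random.default_rng(1)
n, m, p, N = 3, 2, 2, 5
A, B, C = rng.normal(size=(n, n)), rng.normal(size=(n, m)), rng.normal(size=(p, n))
P0inv, Qinv, Rinv = np.eye(n), np.eye(m), np.eye(p)
mu0 = rng.normal(size=n)
x = rng.normal(size=(N + 1, n))            # arbitrary x_hat_0, ..., x_hat_N
w = rng.normal(size=(N, m))                # arbitrary w_hat_0, ..., w_hat_{N-1}
v = rng.normal(size=(N + 1, p))            # arbitrary v_hat_1, ..., v_hat_N (index 0 unused)
yd = rng.normal(size=(N + 1, p))           # y^d_1, ..., y^d_N (index 0 unused)
lam = rng.normal(size=(N, n))              # lambda_0, ..., lambda_{N-1}
u = rng.normal(size=(N + 1, p))            # u_1, ..., u_N (index 0 unused)

VN = 0.5 * (x[0] - mu0) @ P0inv @ (x[0] - mu0) \
     + 0.5 * sum(w[k] @ Qinv @ w[k] for k in range(N)) \
     + 0.5 * sum(v[k] @ Rinv @ v[k] for k in range(1, N + 1))

L_eq14 = VN \
    + sum(lam[k] @ (x[k + 1] - A @ x[k] - B @ w[k]) for k in range(N)) \
    + sum(u[k] @ (yd[k] - C @ x[k] - v[k]) for k in range(1, N + 1))

L_eq15 = 0.5 * (x[0] - mu0) @ P0inv @ (x[0] - mu0) - lam[0] @ A @ x[0] \
    + sum(0.5 * v[k] @ Rinv @ v[k] - u[k] @ v[k] + u[k] @ yd[k] for k in range(1, N + 1)) \
    + sum(0.5 * w[k] @ Qinv @ w[k] - lam[k] @ B @ w[k] for k in range(N)) \
    + sum((lam[k - 1] - A.T @ lam[k] - C.T @ u[k]) @ x[k] for k in range(1, N)) \
    + (lam[N - 1] - C.T @ u[N]) @ x[N]

print(np.isclose(L_eq14, L_eq15))          # True: (14) and (15) are the same function
```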

  14. Lagrangian Dual Problem. Recall that the Lagrangian dual function θ is given by:
θ({λ_k}, {u_k}) = inf_{ ŵ_k ∈ Ω, x̂_k, v̂_k } L({x̂_k}, {v̂_k}, {ŵ_k}, {λ_k}, {u_k}).
Looking at the terms of (15) that are linear in the unconstrained variables x̂_1, ..., x̂_N,
Σ_{k=1}^{N−1} (λ_{k−1} − A' λ_k − C' u_k)' x̂_k + (λ_{N−1} − C' u_N)' x̂_N,
we can see that the infimum is −∞ whenever {λ_k} and {u_k} are such that
λ_{k−1} − A' λ_k − C' u_k ≠ 0 for some k ∈ {1, ..., N−1}, or λ_{N−1} − C' u_N ≠ 0,
since a linear function with a nonzero coefficient is unbounded below.

  15. Lagrangian Dual Problem. However, since we will subsequently choose {λ_k} and {u_k} so as to maximise θ({λ_k}, {u_k}) (recall that the Lagrangian Dual Problem D consists of maximising θ({λ_k}, {u_k})), we are only interested in those values of {λ_k} and {u_k} satisfying:
λ_{k−1} − A' λ_k − C' u_k = 0 for k = 1, ..., N−1,
λ_{N−1} − C' u_N = 0.
Introducing an extra variable, λ_N = 0, for ease of notation, we obtain:
λ_N = 0,   (16)
λ_{k−1} = A' λ_k + C' u_k for k = 1, ..., N.   (17)
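Equations (16)–(17) form a backward recursion: given the multipliers u_1, ..., u_N, the co-state-like sequence λ_N, ..., λ_0 is generated backwards in time, which is the structure one expects from an optimal control problem (as anticipated on slide 3). A minimal sketch with illustrative A, C and u:

```python
# Backward co-state recursion (16)-(17) for illustrative A, C and multipliers u.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
N = 10
u = np.ones((N + 1, 1))                    # u_1, ..., u_N (index 0 unused), illustrative

lam = np.zeros((N + 1, 2))                 # lambda_0, ..., lambda_N
lam[N] = 0.0                               # boundary condition (16): lambda_N = 0
for k in range(N, 0, -1):                  # (17): lambda_{k-1} = A' lambda_k + C' u_k
    lam[k - 1] = A.T @ lam[k] + C.T @ u[k]
```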
