Part 23 Optimal Control: Examples


1. The shooting method: Computing derivatives

By the chain rule, we have

$$ \frac{d F(S(q,x_0,t_i,t),q)}{d q_i} = \nabla_S F(S(q,x_0,t_i,t),q)\,\frac{d S(q,x_0,t_i,t)}{d q_i} + \frac{\partial F(S(q,x_0,t_i,t),q)}{\partial q_i}. $$

That is, to compute derivatives of F, we need derivatives of S. To compute these, remember that $S(q,x_0,t_i,t) = x(t)$ where $x(t) = x_q(t)$ solves the ODE for the given q:

$$ \dot x(t) - f(x(t),q) = 0 \quad \forall t \in [t_i,t_f], \qquad x(t_i) = g(x_0,q). $$

2. The shooting method: Computing derivatives

By definition:

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} = \lim_{\epsilon\to 0} \frac{S(q+\epsilon e_i,x_0,t_i,t) - S(q,x_0,t_i,t)}{\epsilon}. $$

Consequently, we can approximate derivatives using the formula

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} \approx \frac{S(q+\delta e_i,x_0,t_i,t) - S(q,x_0,t_i,t)}{\delta} = \frac{x_{q+\delta e_i}(t) - x_q(t)}{\delta} $$

for a finite δ>0. Note that $x_q(t)$ and $x_{q+\delta e_i}(t)$ solve the ODEs

$$ \dot x_q(t) - f(x_q(t),q) = 0, \qquad x_q(t_i) = g(x_0,q), $$
$$ \dot x_{q+\delta e_i}(t) - f(x_{q+\delta e_i}(t),\,q+\delta e_i) = 0, \qquad x_{q+\delta e_i}(t_i) = g(x_0,\,q+\delta e_i). $$
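As a concrete illustration, the forward solve and the one-sided difference quotient can be sketched as follows. The decaying model x' = −qx with x(t_i) = x_0 is a made-up toy example (not from the slides), chosen because S(q, x_0, 0, T) = x_0 e^{−qT} has a known derivative −T x_0 e^{−qT} to check against.

```python
import numpy as np

def solve_ode(q, x0=1.0, ti=0.0, tf=1.0, n_steps=1000):
    """Approximate S(q, x0, ti, tf) = x_q(tf) for the toy model
    x'(t) = -q x(t), x(ti) = x0, using the explicit Euler method."""
    dt = (tf - ti) / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + dt * (-q * x)
    return x

def dS_dq(q, delta=1e-6, **kw):
    """One-sided finite difference approximation of dS/dq."""
    return (solve_ode(q + delta, **kw) - solve_ode(q, **kw)) / delta
```

For q = 0.5 the result should be close to the exact value −e^{−0.5}, up to the Euler discretization error and the O(δ) truncation error of the quotient.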

3. The shooting method: Computing derivatives

Corollary: To compute $\nabla_q F(S(q,x_0,t_i,t),q)$ we need to compute $\nabla_q S(q,x_0,t_i,t)$. For $q \in \mathbb{R}^n$, this requires the solution of n+1 ordinary differential equations:

● For the given q:
$$ \dot x_q(t) - f(x_q(t),q) = 0, \qquad x_q(t_i) = g(x_0,q). $$

● Perturbed in directions i=1...n:
$$ \dot x_{q+\delta e_i}(t) - f(x_{q+\delta e_i}(t),\,q+\delta e_i) = 0, \qquad x_{q+\delta e_i}(t_i) = g(x_0,\,q+\delta e_i). $$

4. The shooting method: Computing derivatives

Practical considerations 1: When computing finite difference approximations

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} \approx \frac{S(q+\delta e_i,x_0,t_i,t) - S(q,x_0,t_i,t)}{\delta} = \frac{x_{q+\delta e_i}(t) - x_q(t)}{\delta}, $$

how should we choose the step length δ? δ must be small enough to yield a good approximation to the exact derivative, but large enough so that floating point roundoff does not affect the accuracy!

Rule of thumb: If
● $\epsilon$ is the precision of floating point numbers,
● $\bar q_i$ is a typical size of the i-th control variable $q_i$,
then choose $\delta_i = \sqrt{\epsilon}\,\bar q_i$.
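In double precision this rule of thumb can be written down directly; the value of `typical_q` below is just an assumed magnitude for illustration.

```python
import numpy as np

eps = np.finfo(float).eps         # machine precision, about 2.2e-16 for doubles
typical_q = 3.0                   # assumed typical magnitude of the control q_i
delta = np.sqrt(eps) * typical_q  # rule of thumb: delta_i = sqrt(eps) * |q_i|
```

This yields a step on the order of 1.5e-8 times the size of the control variable, balancing truncation error against roundoff.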

5. The shooting method: Computing derivatives

Practical considerations 2: The one-sided finite difference quotient

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} \approx \frac{S(q+\delta e_i,x_0,t_i,t) - S(q,x_0,t_i,t)}{\delta} = \frac{x_{q+\delta e_i}(t) - x_q(t)}{\delta} $$

is only first order accurate in δ, i.e.

$$ \left| \frac{d S(q,x_0,t_i,t)}{d q_i} - \frac{S(q+\delta e_i,x_0,t_i,t) - S(q,x_0,t_i,t)}{\delta} \right| = O(\delta). $$

6. The shooting method: Computing derivatives

Practical considerations 2: Improvement: Use two-sided finite difference quotients

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} \approx \frac{S(q+\delta e_i,x_0,t_i,t) - S(q-\delta e_i,x_0,t_i,t)}{2\delta} = \frac{x_{q+\delta e_i}(t) - x_{q-\delta e_i}(t)}{2\delta}, $$

which is second order accurate in δ, i.e.

$$ \left| \frac{d S(q,x_0,t_i,t)}{d q_i} - \frac{S(q+\delta e_i,x_0,t_i,t) - S(q-\delta e_i,x_0,t_i,t)}{2\delta} \right| = O(\delta^2). $$

Note: The cost for this higher accuracy is 2n+1 ODE solves!
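The order of accuracy is easy to observe numerically. The sketch below uses a plain scalar function instead of the solution operator S; the behavior of the two quotients is the same.

```python
import math

def deriv_errors(delta, x=1.0):
    """Return (one-sided error, two-sided error) for f(x) = exp(x),
    whose exact derivative at x is exp(x)."""
    exact = math.exp(x)
    one_sided = (math.exp(x + delta) - math.exp(x)) / delta
    two_sided = (math.exp(x + delta) - math.exp(x - delta)) / (2 * delta)
    return abs(one_sided - exact), abs(two_sided - exact)
```

Shrinking δ by a factor of 10 shrinks the one-sided error roughly 10-fold (first order) but the two-sided error roughly 100-fold (second order), as long as roundoff has not yet taken over.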

7. The shooting method: Computing derivatives

Practical considerations 3: Approximating derivatives requires solving the ODEs

$$ \dot x_q(t) - f(x_q(t),q) = 0, \qquad x_q(t_i) = g(x_0,q), $$
$$ \dot x_{q+\delta e_i}(t) - f(x_{q+\delta e_i}(t),\,q+\delta e_i) = 0, \qquad x_{q+\delta e_i}(t_i) = g(x_0,\,q+\delta e_i), \qquad i = 1...n. $$

If we can do that analytically, then good. If we do this numerically, then numerical approximation introduces systematic errors related to
● the numerical method used,
● the time mesh (i.e. the collection of time step sizes) chosen.

8. The shooting method: Computing derivatives

Practical considerations 3: We gain the highest accuracy in the numerical solution of equations like

$$ \dot x_q(t) - f(x_q(t),q) = 0, \qquad x_q(t_i) = g(x_0,q), $$

by choosing sophisticated adaptive time step, extrapolating multistep ODE integrators (e.g. RK45). On the other hand, to get the best accuracy in evaluating

$$ \frac{d S(q,x_0,t_i,t)}{d q_i} \approx \frac{x_{q+\delta e_i}(t) - x_q(t)}{\delta}, $$

experience shows that we should use predictable integrators for all variables $x_q(t)$, $x_{q+\delta e_i}(t)$:
● the same numerical method,
● the same time steps,
● no extrapolation.

9. The shooting method: Computing derivatives

Practical considerations 3: Thus, to solve the ODEs

$$ \dot x_q(t) - f(x_q(t),q) = 0, \qquad x_q(t_i) = g(x_0,q), $$
$$ \dot x_{q+\delta e_i}(t) - f(x_{q+\delta e_i}(t),\,q+\delta e_i) = 0, \qquad x_{q+\delta e_i}(t_i) = g(x_0,\,q+\delta e_i), $$

it is useful to solve them all at once as

$$ \frac{d}{dt}\begin{pmatrix} x_q(t) \\ x_{q+\delta e_1}(t) \\ \vdots \\ x_{q+\delta e_n}(t) \end{pmatrix} - \begin{pmatrix} f(x_q(t),q) \\ f(x_{q+\delta e_1}(t),\,q+\delta e_1) \\ \vdots \\ f(x_{q+\delta e_n}(t),\,q+\delta e_n) \end{pmatrix} = 0, \qquad \begin{pmatrix} x_q(t_i) \\ x_{q+\delta e_1}(t_i) \\ \vdots \\ x_{q+\delta e_n}(t_i) \end{pmatrix} = \begin{pmatrix} g(x_0,q) \\ g(x_0,\,q+\delta e_1) \\ \vdots \\ g(x_0,\,q+\delta e_n) \end{pmatrix}. $$

10. The shooting method: Computing derivatives

Practical considerations 4: For BFGS, we only need 1st derivatives of F(S(q),q). For a full Newton method we also need

$$ \frac{d^2}{d q_i^2} S(q,x_0,t_i,t), \qquad \frac{d^2}{d q_i\, d q_j} S(q,x_0,t_i,t). $$

Again use finite difference methods:

$$ \frac{d^2}{d q_i^2} S(q,x_0,t_i,t) \approx \frac{\dfrac{x_{q+\delta e_i}(t) - x_q(t)}{\delta} - \dfrac{x_q(t) - x_{q-\delta e_i}(t)}{\delta}}{\delta} = \frac{x_{q+\delta e_i}(t) - 2 x_q(t) + x_{q-\delta e_i}(t)}{\delta^2}, $$

$$ \frac{d^2}{d q_i\, d q_j} S(q,x_0,t_i,t) \approx \frac{\dfrac{x_{q+\delta e_i+\delta e_j}(t) - x_{q-\delta e_i+\delta e_j}(t)}{2\delta} - \dfrac{x_{q+\delta e_i-\delta e_j}(t) - x_{q-\delta e_i-\delta e_j}(t)}{2\delta}}{2\delta}. $$

Note: Assembling the full Hessian this way costs O(n²) ODE solves.
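A sketch of these quotients for a generic objective. The toy polynomial used in the check is made up; in the shooting setting each call to `fun` would be one ODE solve, which is where the O(n²) cost comes from.

```python
import numpy as np

def fd_hessian(fun, q, delta=1e-4):
    """Finite difference Hessian of fun at q, using the central
    second-difference quotients from above."""
    n = len(q)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        # Diagonal: (f(q+d e_i) - 2 f(q) + f(q-d e_i)) / d^2
        H[i, i] = (fun(q + delta*I[i]) - 2*fun(q) + fun(q - delta*I[i])) / delta**2
        for j in range(i):
            # Mixed: nested two-sided quotients in directions e_i and e_j
            H[i, j] = H[j, i] = (
                fun(q + delta*I[i] + delta*I[j]) - fun(q - delta*I[i] + delta*I[j])
                - fun(q + delta*I[i] - delta*I[j]) + fun(q - delta*I[i] - delta*I[j])
            ) / (4 * delta**2)
    return H
```

A quick check: fun(q) = q₀² q₁ has the exact Hessian [[2q₁, 2q₀], [2q₀, 0]].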

11. The shooting method: Practical implementation

Algorithm: To solve

$$ \min_{x(t),q} F(x(t),q) \quad \text{such that} \quad \dot x(t) - f(x(t),q) = 0 \;\; \forall t \in [t_i,t_f], \qquad x(t_i) = g(x_0,q), \qquad h(q) \ge 0, $$

reformulate it as

$$ \min_q F(S(q,x_0,t_i,t),q) \quad \text{such that} \quad h(q) \ge 0. $$

Solve it using a known technique, where
● by the chain rule
$$ \nabla_q F(S,q) = F_S(S,q)\,\nabla_q S(q,x_0,t_i,t) + F_q(S,q), $$
and similarly for second derivatives;
● the quantities $\nabla_q S(q,x_0,t_i,t)$, $\nabla_q^2 S(q,x_0,t_i,t)$ are approximated by finite difference quotients, by solving multiple ODEs for different values of the control variable q.

12. The shooting method: Practical implementation

Implementation (Newton method without line search; no attempt to compute the ODE and its derivatives in sync):

function f(double[N] q) → double;
function grad_f(double[N] q) → double[N];
function grad_grad_f(double[N] q) → double[N][N];

function newton(double[N] q) → double[N] {
  do {
    double[N] dq = - invert(grad_grad_f(q)) * grad_f(q);
    q = q + dq;
  } while (norm(grad_f(q)) > 1e-12);  // tolerance chosen for example
  return q;
}

13. The shooting method: Practical implementation

Implementation (objective function only depends on x(t_f)):

function S(double[N] q, double t) → double[M] {
  double[M] x = x0;
  double time = ti;
  while (time < t) {  // explicit Euler method with fixed dt
    x = x + dt * rhs(x, q);
    time = time + dt;
  }
  return x;
}

function f(double[N] q) → double {
  return objective_function(S(q, tf), q);
}

14. The shooting method: Practical implementation

Implementation (one-sided finite difference quotient):

function grad_f(double[N] q) → double[N] {
  double[N] df = 0;
  for (i = 1...N) {
    delta = 1e-8 * typical_q[i];
    double[N] q_plus = q;
    q_plus[i] = q[i] + delta;
    df[i] = (f(q_plus) - f(q)) / delta;
  }
  return df;
}
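Putting the last three slides together, here is a minimal runnable end-to-end sketch in Python. All model choices are made up for illustration: the ODE is x' = −qx with x(0) = 1, and the objective asks that x(1) equal 0.5, so the minimizer is close to q = ln 2. The Newton iteration uses finite differences for both derivatives, as in the pseudocode.

```python
import math

def S(q, x0=1.0, ti=0.0, tf=1.0, n=2000):
    """Explicit Euler solve of x' = -q x, x(ti) = x0 (assumed toy model)."""
    dt = (tf - ti) / n
    x = x0
    for _ in range(n):
        x = x + dt * (-q * x)
    return x

def f(q):
    """Objective: hit x(tf) = 0.5."""
    return (S(q) - 0.5) ** 2

def newton(q, delta=1e-5, tol=1e-8, max_iter=50):
    """Scalar Newton iteration with finite-difference derivatives."""
    for _ in range(max_iter):
        g = (f(q + delta) - f(q - delta)) / (2 * delta)       # gradient
        h = (f(q + delta) - 2*f(q) + f(q - delta)) / delta**2  # second derivative
        if abs(g) < tol:
            break
        q = q - g / h
    return q
```

Starting from q = 1, the iteration settles near ln 2 ≈ 0.693, up to the Euler discretization error of S.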

15. Part 25 Optimal control: The multiple shooting method

16. Motivation

In the shooting method, we need to evaluate and differentiate the function

$$ S(q,x_0,t_i,t) = x_q(t), $$

where $x_q(t)$ solves the ODE

$$ \dot x(t) - f(x(t),q) = 0 \quad \forall t \in [t_i,t_f], \qquad x(t_i) = g(x_0,q). $$

Observation: If the time interval $[t_i,t_f]$ is "long", then S is often a strongly nonlinear function of q.

Consequence: It is difficult to approximate S and its derivatives numerically, since errors grow like $e^{LT}$, where L is a Lipschitz constant of the right hand side f and $T = t_f - t_i$.

17. Idea

Observation: If the time interval $[t_i,t_f]$ is "long", then S is often a strongly nonlinear function of q. But then S should be less nonlinear on smaller intervals!

Idea: While $S(q,x_0,t_i,t_f)$ is a strongly nonlinear function of q, we could introduce

$$ t_i = t_0 < t_1 < \dots < t_k < \dots < t_K = t_f, $$

and the functions $S(q,x_k,t_k,t_{k+1})$ should be less nonlinear and therefore simpler to approximate or differentiate numerically!

18. Multiple shooting

Outline: To solve

$$ \min_{x(t),q} F(x(t),q) \quad \text{such that} \quad \dot x(t) - f(x(t),q) = 0 \;\; \forall t \in [t_i,t_f], \qquad x(t_i) = g(x_0,q), \qquad h(q) \ge 0, $$

replace this problem by the following:

$$ \min_{x^1(t),x^2(t),\dots,x^K(t),q} F(x(t),q), \qquad \text{where } x(t) := x^k(t) \;\; \forall t \in [t_{k-1},t_k], $$

such that

$$ \dot x^1(t) - f(x^1(t),q) = 0 \quad \forall t \in [t_0,t_1], \qquad x^1(t_i) = g(x_0,q), $$
$$ \dot x^k(t) - f(x^k(t),q) = 0 \quad \forall t \in [t_{k-1},t_k],\; k = 2...K, \qquad x^k(t_{k-1}) = x^{k-1}(t_{k-1}), $$
$$ h(q) \ge 0. $$

19. Multiple shooting

Outline: In this formulation, every $x^k$ depends explicitly on $x^{k-1}$. We can decouple this:

$$ \min_{x^1(t),\dots,x^K(t),\,\hat x_0^1,\dots,\hat x_0^K,\,q} F(x(t),q), \qquad \text{where } x(t) := x^k(t) \;\; \forall t \in [t_{k-1},t_k], $$

such that

$$ \dot x^k(t) - f(x^k(t),q) = 0 \quad \forall t \in [t_{k-1},t_k],\; k = 1...K, \qquad x^k(t_{k-1}) = \hat x_0^k, $$
$$ \hat x_0^1 - g(x_0,q) = 0, $$
$$ \hat x_0^k - x^{k-1}(t_{k-1}) = 0 \quad \forall k = 2...K, $$
$$ h(q) \ge 0. $$

Note: The "defect constraints" $\hat x_0^k - x^{k-1}(t_{k-1}) = 0$ need not be satisfied in intermediate iterations of Newton's method. They will only be satisfied at the solution, forcing x(t) to be continuous.

20. Multiple shooting

Outline with the solution operator: By introducing the solution operator as before, the problem can be written as

$$ \min_{\hat x_0^1,\dots,\hat x_0^K,\,q} F(S(q,x_0,t_i,t),q), \qquad \text{where } S(q,x_0,t_i,t) := S(q,\hat x_0^k,t_{k-1},t) \;\; \forall t \in [t_{k-1},t_k], $$

such that

$$ \hat x_0^1 - g(x_0,q) = 0, $$
$$ \hat x_0^k - S(q,\hat x_0^{k-1},t_{k-2},t_{k-1}) = 0 \quad \forall k = 2...K, $$
$$ h(q) \ge 0. $$

Note: We now only ever have to differentiate $S(q,\hat x_0^k,t_{k-1},t)$, which integrates the ODE on the much shorter time intervals $[t_{k-1},t_k]$ and consequently is much less nonlinear.
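A sketch of the defect constraints as a residual vector. Each segment is integrated independently from its own initial guess, and the residuals measure the mismatch at the segment boundaries. The model x' = −qx is an assumed toy example, and g is taken to be the identity (x̂₀¹ = x₀) for simplicity.

```python
import math

def segment_end(q, x_start, t0, t1, n=200):
    """Explicit Euler solve of x' = -q x on [t0, t1] (assumed toy model)."""
    dt = (t1 - t0) / n
    x = x_start
    for _ in range(n):
        x = x + dt * (-q * x)
    return x

def defects(q, x0, xhat, t):
    """Residuals of the multiple shooting constraints: xhat[0] - x0 and
    xhat[k] - S(q, xhat[k-1], t[k-1], t[k]) for the interior nodes."""
    r = [xhat[0] - x0]
    for k in range(1, len(xhat)):
        r.append(xhat[k] - segment_end(q, xhat[k - 1], t[k - 1], t[k]))
    return r
```

At the exact solution x(t) = e^{−qt} the defects nearly vanish, up to the Euler discretization error of each segment; a Newton iteration would drive them to zero while also optimizing q.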

21. Part 26 Optimal control: Introduction to the Theory

22. Preliminaries

Definition: A vector space is a set X of objects so that the following holds:

$$ \forall x,y \in X: \; x + y \in X, \qquad \forall x \in X,\, \alpha \in \mathbb{R}: \; \alpha x \in X. $$

In addition, associativity, distributivity and commutativity of addition have to hold. There also need to be identity and null elements of addition and scalar multiplication.

Examples:
● $X = \mathbb{R}^N$
● $X = C^0(0,T) = \{ x(t) : x(t) \text{ is continuous on } (0,T) \}$
● $X = C^1(0,T) = \{ x(t) \in C^0(0,T) : x(t) \text{ is continuously differentiable on } (0,T) \}$
● $X = L^2(0,T) = \{ x(t) : \int_0^T |x(t)|^2\, dt < \infty \}$

23. Preliminaries

Definition: A scalar product is a mapping $\langle \cdot,\cdot \rangle : X \times Y \to \mathbb{R}$ of a pair of vectors from (real) vector spaces X, Y into the real numbers. It needs to be linear. If X=Y and x=y, then it also needs to be positive or zero.

Examples:
● $X = Y = \mathbb{R}^N$: $\langle x,y \rangle = \sum_{i=1}^N x_i y_i$
● $X = Y = \mathbb{R}^N$: $\langle x,y \rangle = \sum_{i=1}^N \omega_i x_i y_i$ with weights $0 < \omega_i < \infty$
● $X = Y = l^2$: $\langle x,y \rangle = \sum_{i=1}^\infty x_i y_i$
● $X = Y = L^2(0,T)$: $\langle x,y \rangle = \int_0^T x(t)\,y(t)\, dt$

24. Preliminaries

Definition: Given a space X and a scalar product $\langle \cdot,\cdot \rangle : X \times Y \to \mathbb{R}$, we call Y = X' the dual space of X if Y is the largest space for which the scalar product above "makes sense".

Examples:
● $X = \mathbb{R}^N$: $Y = \mathbb{R}^N$, $\langle x,y \rangle = \sum_{i=1}^N x_i y_i$
● $X = C^0(0,T)$: $Y = S(0,T)$, $\langle x,y \rangle = \int_0^T x(t)\,y(t)\, dt$
● $X = L^2(0,T)$: $Y = L^2(0,T)$, $\langle x,y \rangle = \int_0^T x(t)\,y(t)\, dt$
● $X = L^p(0,T)$, $1 < p < \infty$: $Y = L^q(0,T)$ with $\frac{1}{p} + \frac{1}{q} = 1$, $\langle x,y \rangle = \int_0^T x(t)\,y(t)\, dt$

25. Lagrange multipliers for finite dimensional problems

Consider the following finite dimensional problem:

$$ \min_{x \in \mathbb{R}^N} f(x) \quad \text{such that} \quad g_1(x) = 0,\; g_2(x) = 0,\; \dots,\; g_K(x) = 0. $$

Definition: Let the Lagrangian be

$$ L(x,\lambda) = f(x) - \sum_{i=1}^K \lambda_i g_i(x). $$

Theorem: Under certain conditions on f, g, the solution of the above problem satisfies

$$ \frac{\partial L}{\partial x_i}(x^*,\lambda^*) = 0, \quad i = 1,\dots,N, $$
$$ \frac{\partial L}{\partial \lambda_i}(x^*,\lambda^*) = 0, \quad i = 1,\dots,K. $$
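For a concrete finite dimensional instance (made up for illustration), take f(x) = x₁² + x₂² with the single constraint g₁(x) = x₁ + x₂ − 1 = 0. The stationarity conditions of the Lagrangian are then a linear system that can be solved directly:

```python
import numpy as np

# L(x, lam) = x1^2 + x2^2 - lam * (x1 + x2 - 1)
# dL/dx1   =  2 x1 - lam       = 0
# dL/dx2   =  2 x2 - lam       = 0
# dL/dlam  = -(x1 + x2 - 1)    = 0
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)  # x1 = x2 = 0.5, lam = 1
```

Note that differentiating with respect to λ returns exactly the constraint, a pattern that carries over to the optimal control setting below.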

26. Lagrange multipliers for optimal control problems

Consider the following optimal control problem:

$$ \min_{x(t)} f(x(t),t) \quad \text{such that} \quad g(x(t),t) = 0 \;\; \forall t \in [0,T]. $$

Questions:
● What would be the corresponding Lagrange multiplier for such a problem?
● What would be the corresponding Lagrangian function?
● What are optimality conditions in this case?

27. Lagrange multipliers for optimal control problems

Formal approach: Take the problem

$$ \min_{x(t)} f(x(t),t) \quad \text{such that} \quad g(x(t),t) = 0 \;\; \forall t \in [0,T]. $$

There are infinitely many constraints, one constraint for each time instant. Following this idea, we would then have to replace

$$ L(x,\lambda) = f(x) - \sum_{i=1}^K \lambda_i g_i(x) $$

by

$$ L(x(t),\lambda(t)) = f(x(t),t) - \int_0^T \lambda(t)\, g(x(t),t)\, dt, $$

where we have one Lagrange multiplier $\lambda(t)$ for every time t.

28. Lagrange multipliers for optimal control problems

The "correct" approach: If we have a set of equations like

$$ g_1(x) = 0,\; g_2(x) = 0,\; \dots,\; g_K(x) = 0, $$

then we can write this as $\vec g(x) = 0$, which we can interpret as saying

$$ \langle \vec g(x), h \rangle = 0 \quad \forall h \in \mathbb{R}^K. $$

29. Lagrange multipliers for optimal control problems

The "correct" approach: Likewise, if we have $g(x(t),t) = 0$, then we can interpret this in different ways:
● At every possible time t we want that g(x(t),t) equals zero
● The measure of the set $\{t : g(x(t),t) \ne 0\}$ is zero ("almost all t")
● The integral $\int_0^T |g(x(t),t)|^2\, dt$ is zero
● If $g : X \times [0,T] \to V$, then g(x(t),t) is zero in V, i.e.
$$ \langle g(x(t),t), h \rangle = \int_0^T g(x(t),t)\, h(t)\, dt = 0 \quad \forall h \in V'. $$

Notes:
● The first and fourth statement are the same if $V = C^0[0,T]$
● The second and fourth statement are the same if $V = L^1[0,T]$
● The third and fourth statement are the same if $V = L^2[0,T]$

30. Lagrange multipliers for optimal control problems

In either case: Given

$$ \min_{x(t) \in X} f(x(t),t) \quad \text{such that} \quad g(x(t),t) = 0, $$

the Lagrangian is now

$$ L(x(t),\lambda(t)) = f(x(t),t) - \langle \lambda, g(x(t),t) \rangle = f(x(t),t) - \int_0^T \lambda(t)\, g(x(t),t)\, dt, $$

and $L : X \times V' \to \mathbb{R}$.

31. Optimality conditions for finite dimensional problems

Corollary: In view of the definition

$$ \langle \nabla_x f(x), \xi \rangle = \lim_{\epsilon\to 0} \frac{f(x+\epsilon\xi) - f(x)}{\epsilon}, $$

we can say that the gradient of a function $f : \mathbb{R}^K \to \mathbb{R}$ is a functional

$$ \nabla_x f : \mathbb{R}^K \to (\mathbb{R}^K)'. $$

In other words: The gradient of a function is an element in the dual space of its argument.

Note: For finite dimensional spaces, we can identify space and dual space. Alternatively, we can consider $\mathbb{R}^K$ as the space of column vectors with K elements and $(\mathbb{R}^K)'$ as the space of row vectors with K elements. In either case, the dual product is well defined.

32. Optimality conditions for finite dimensional problems

Corollary: From the above considerations it follows that for

$$ \min_{x \in \mathbb{R}^N} f(x) \quad \text{such that} \quad g_1(x) = 0,\; \dots,\; g_K(x) = 0, $$

we define

$$ L(x,\lambda) = f(x) - \sum_{i=1}^K \lambda_i g_i(x), $$

where

$$ L : \mathbb{R}^N \times \mathbb{R}^K \to \mathbb{R}, \qquad \nabla_x L : \mathbb{R}^N \times \mathbb{R}^K \to (\mathbb{R}^N)', \qquad \nabla_\lambda L : \mathbb{R}^N \times \mathbb{R}^K \to (\mathbb{R}^K)'. $$

33. Optimality conditions for finite dimensional problems

Summary: For the problem

$$ \min_{x \in \mathbb{R}^N} f(x) \quad \text{such that} \quad g_1(x) = 0,\; \dots,\; g_K(x) = 0, $$

we define

$$ L(x,\lambda) = f(x) - \sum_{i=1}^K \lambda_i g_i(x). $$

The optimality conditions are then

$$ \nabla_x L(x^*,\lambda^*) = 0 \;\text{ in } \mathbb{R}^N, \qquad \nabla_\lambda L(x^*,\lambda^*) = 0 \;\text{ in } \mathbb{R}^K, $$

or equivalently:

$$ \langle \nabla_x L(x^*,\lambda^*), \xi \rangle = 0 \quad \forall \xi \in \mathbb{R}^N, \qquad \langle \nabla_\lambda L(x^*,\lambda^*), \eta \rangle = 0 \quad \forall \eta \in \mathbb{R}^K. $$

34. Optimality conditions for finite dimensional problems

Theorem: Under certain conditions on f, g, the solution satisfies

$$ \frac{\partial L}{\partial x_i}(x^*,\lambda^*) = 0, \quad i = 1,\dots,N, \qquad \frac{\partial L}{\partial \lambda_i}(x^*,\lambda^*) = 0, \quad i = 1,\dots,K. $$

Note 1: These conditions can also be written as

$$ \langle \nabla_x L(x^*,\lambda^*), \xi \rangle = 0 \quad \forall \xi \in \mathbb{R}^N, \qquad \langle \nabla_\lambda L(x^*,\lambda^*), \eta \rangle = 0 \quad \forall \eta \in \mathbb{R}^K. $$

Note 2: This, in turn, can be written as follows:

$$ \langle \nabla_x L(x^*,\lambda^*), \xi \rangle = \lim_{\epsilon\to 0} \frac{L(x^*+\epsilon\xi,\lambda^*) - L(x^*,\lambda^*)}{\epsilon} = 0 \quad \forall \xi \in \mathbb{R}^N, $$
$$ \langle \nabla_\lambda L(x^*,\lambda^*), \eta \rangle = \lim_{\epsilon\to 0} \frac{L(x^*,\lambda^*+\epsilon\eta) - L(x^*,\lambda^*)}{\epsilon} = 0 \quad \forall \eta \in \mathbb{R}^K. $$

35. Optimality conditions for optimal control problems

Recall: For an optimal control problem

$$ \min_{x(t) \in X} f(x(t),t) \quad \text{such that} \quad g(x(t),t) = 0, \qquad g : X \times \mathbb{R} \to V, $$

we have defined the Lagrangian as

$$ L(x(t),\lambda(t)) = f(x(t),t) - \langle \lambda, g(x(t),t) \rangle, \qquad L : X \times V' \to \mathbb{R}. $$

36. Optimality conditions for optimal control problems

Theorem: Under certain conditions on f, g, the solution satisfies

$$ \langle \nabla_x L(x^*,\lambda^*), \xi \rangle = 0 \quad \forall \xi \in X, \qquad \langle \nabla_\lambda L(x^*,\lambda^*), \eta \rangle = 0 \quad \forall \eta \in V, $$

or equivalently

$$ \int_0^T \nabla_x L(x^*(t),\lambda^*(t))\, \xi(t)\, dt = 0 \quad \forall \xi \in X, \qquad \int_0^T \nabla_\lambda L(x^*(t),\lambda^*(t))\, \eta(t)\, dt = 0 \quad \forall \eta \in V. $$

Note: The derivative of the Lagrangian is defined as usual:

$$ \langle \nabla_x L(x^*(t),\lambda^*(t)), \xi(t) \rangle = \lim_{\epsilon\to 0} \frac{L(x^*(t)+\epsilon\xi(t),\lambda^*(t)) - L(x^*(t),\lambda^*(t))}{\epsilon}, $$
$$ \langle \nabla_\lambda L(x^*(t),\lambda^*(t)), \eta(t) \rangle = \lim_{\epsilon\to 0} \frac{L(x^*(t),\lambda^*(t)+\epsilon\eta(t)) - L(x^*(t),\lambda^*(t))}{\epsilon}. $$

37. Optimality conditions: Example 1

Example: Consider the rather boring problem

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T x(t)\, dt \quad \text{such that} \quad g(x(t),t) = x(t) - \psi(t) = 0 $$

for a given function $\psi(t)$. The solution is obviously $x(t) = \psi(t)$.

Then the Lagrangian is defined as

$$ L(x(t),\lambda(t)) = \int_0^T x(t)\, dt - \langle \lambda(t), x(t)-\psi(t) \rangle = \int_0^T x(t) - \lambda(t)[x(t)-\psi(t)]\, dt, $$

and we can compute optimality conditions in the next step.

38. Optimality conditions: Example 1

Given

$$ L(x(t),\lambda(t)) = \int_0^T x(t) - \lambda(t)[x(t)-\psi(t)]\, dt, $$

we can compute derivatives of the Lagrangian:

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi(t) \rangle = \lim_{\epsilon\to 0} \frac{1}{\epsilon}\left\{ \int_0^T (x(t)+\epsilon\xi(t)) - \lambda(t)[(x(t)+\epsilon\xi(t))-\psi(t)]\, dt - \int_0^T x(t) - \lambda(t)[x(t)-\psi(t)]\, dt \right\} $$
$$ = \lim_{\epsilon\to 0} \int_0^T \frac{\epsilon\xi(t) - \lambda(t)\epsilon\xi(t)}{\epsilon}\, dt = \int_0^T \xi(t) - \lambda(t)\xi(t)\, dt = \int_0^T [1-\lambda(t)]\,\xi(t)\, dt, $$

$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta(t) \rangle = \int_0^T -[x(t)-\psi(t)]\,\eta(t)\, dt. $$

39. Optimality conditions: Example 1

Example: Consider the rather boring problem

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T x(t)\, dt \quad \text{such that} \quad g(x(t),t) = x(t) - \psi(t) = 0. $$

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [1-\lambda(t)]\,\xi(t)\, dt = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[x(t)-\psi(t)]\,\eta(t)\, dt = 0 \quad \forall \eta(t). $$

These can only be satisfied for

$$ 1 - \lambda(t) = 0, \qquad x(t) - \psi(t) = 0, \qquad \forall\, 0 \le t \le T. $$

40. Optimality conditions: Example 2

Example: Consider the slightly more interesting problem

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T x(t)^2\, dt \quad \text{such that} \quad g(x(t),t) = \dot x(t) - t = 0. $$

The constraint allows all functions of the form $x(t) = a + \tfrac12 t^2$ for all constants a. Then the Lagrangian is defined as

$$ L(x(t),\lambda(t)) = \int_0^T x(t)^2\, dt - \langle \lambda(t), \dot x(t) - t \rangle = \int_0^T x(t)^2 - \lambda(t)[\dot x(t) - t]\, dt. $$

Note: For $x(t) = a + \tfrac12 t^2$ the objective function has the value

$$ \int_0^T x(t)^2\, dt = \int_0^T \left(a + \tfrac12 t^2\right)^2 dt = a^2 T + \tfrac{a}{3} T^3 + \tfrac{1}{20} T^5, $$

which takes on its minimal value for $a = -\tfrac16 T^2$.

41. Optimality conditions: Example 2

Given

$$ L(x(t),\lambda(t)) = \int_0^T x(t)^2 - \lambda(t)[\dot x(t) - t]\, dt, $$

we can compute derivatives of the Lagrangian:

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi(t) \rangle = \lim_{\epsilon\to 0} \frac{1}{\epsilon}\left\{ \int_0^T (x(t)+\epsilon\xi(t))^2 - \lambda(t)[(\dot x(t)+\epsilon\dot\xi(t)) - t]\, dt - \int_0^T x(t)^2 - \lambda(t)[\dot x(t) - t]\, dt \right\} $$
$$ = \lim_{\epsilon\to 0} \int_0^T \frac{2\epsilon x(t)\xi(t) + \epsilon^2\xi(t)^2 - \lambda(t)\epsilon\dot\xi(t)}{\epsilon}\, dt = \int_0^T 2x(t)\xi(t) - \lambda(t)\dot\xi(t)\, dt $$
$$ = \int_0^T [2x(t) + \dot\lambda(t)]\,\xi(t)\, dt - \big[\lambda(t)\xi(t)\big]_{t=0}^T, $$

$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta(t) \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt. $$

42. Optimality conditions: Example 2

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [2x(t) + \dot\lambda(t)]\,\xi(t)\, dt - \big[\lambda(t)\xi(t)\big]_{t=0}^T = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt = 0 \quad \forall \eta(t). $$

From the second equation we can conclude that

$$ \dot x(t) - t = 0 \;\Rightarrow\; x(t) = a + \tfrac12 t^2. $$

On the other hand, the first equation yields

$$ 2x(t) + \dot\lambda(t) = 0, \qquad \lambda(0) = 0, \qquad \lambda(T) = 0. $$

Given the form of x(t), the first of these three conditions can be integrated:

$$ \lambda(t) = -2at - \tfrac13 t^3 + b. $$

Enforcing boundary conditions then yields $b = 0$, $a = -\tfrac16 T^2$.

43. Optimality conditions: Example 3 – initial conditions

Theorem: Let $x \in C^1$, $f \in C^0$. If x(t) satisfies the initial value problem

$$ \dot x(t) = f(x(t),t), \qquad x(0) = x_0, $$

then it also satisfies the "variational" equality

$$ \int_0^T [\dot x(t) - f(x(t),t)]\,\eta(t)\, dt + [x(0) - x_0]\,\eta(0) = 0 \quad \forall \eta \in C^0[0,T], $$

and vice versa.

44. Optimality conditions: Example 3 – initial conditions

Example: Consider the (again slightly boring) problem

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T x(t)\, dt \quad \text{such that} \quad \dot x(t) - t = 0, \qquad x(0) = 1. $$

The constraint allows for only a single feasible point, $x(t) = 1 + \tfrac12 t^2$. The Lagrangian is now defined as

$$ L(x(t),\lambda(t)) = \int_0^T x(t)\, dt - \langle \lambda(t), \dot x(t) - t \rangle - \lambda(0)[x(0) - 1] $$
$$ = \int_0^T x(t) - \lambda(t)[\dot x(t) - t]\, dt - \lambda(0)[x(0) - 1]. $$

45. Optimality conditions: Example 3 – initial conditions

Given

$$ L(x(t),\lambda(t)) = \int_0^T x(t) - \lambda(t)[\dot x(t) - t]\, dt - \lambda(0)[x(0) - 1], $$

we can compute derivatives of the Lagrangian:

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi(t) \rangle = \lim_{\epsilon\to 0} \frac{1}{\epsilon}\left\{ \int_0^T (x(t)+\epsilon\xi(t)) - \lambda(t)[(\dot x(t)+\epsilon\dot\xi(t)) - t]\, dt - \lambda(0)[x(0)+\epsilon\xi(0) - 1] - \int_0^T x(t) - \lambda(t)[\dot x(t) - t]\, dt + \lambda(0)[x(0) - 1] \right\} $$
$$ = \int_0^T \xi(t) - \lambda(t)\dot\xi(t)\, dt - \lambda(0)\xi(0) = \int_0^T [1 + \dot\lambda(t)]\,\xi(t)\, dt - \big[\lambda(t)\xi(t)\big]_{t=0}^T - \lambda(0)\xi(0) $$
$$ = \int_0^T [1 + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T), $$

$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta(t) \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - \eta(0)[x(0) - 1]. $$

46. Optimality conditions: Example 3 – initial conditions

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [1 + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T) = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - [x(0) - 1]\,\eta(0) = 0 \quad \forall \eta(t). $$

From the second equation we can conclude that

$$ \dot x(t) - t = 0, \qquad x(0) = 1. $$

In other words: Taking the derivative of the Lagrangian with respect to the Lagrange multiplier gives us back the (initial value problem) constraint, just like in the finite dimensional case.

Note: The only feasible point of this constraint is of course $x(t) = 1 + \tfrac12 t^2$.

47. Optimality conditions: Example 3 – initial conditions

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [1 + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T) = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - [x(0) - 1]\,\eta(0) = 0 \quad \forall \eta(t). $$

From the first equation we can conclude that

$$ 1 + \dot\lambda(t) = 0, \qquad \lambda(T) = 0, $$

in much the same way as we could obtain the initial value problem for x(t).

Note: This is a final value problem for the Lagrange multiplier! Its solution is $\lambda(t) = T - t$.

48. Optimality conditions: Example 4 – initial conditions

Note: If the objective function had been nonlinear, then the equation for λ(t) would contain x(t) but still be linear in λ(t).

Example: Consider the (again slightly boring) variant of the same problem

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T \tfrac12 x(t)^2\, dt \quad \text{such that} \quad \dot x(t) - t = 0, \qquad x(0) = 1. $$

The constraint allows for only a single feasible point, $x(t) = 1 + \tfrac12 t^2$. The Lagrangian is now defined as

$$ L(x(t),\lambda(t)) = \int_0^T \tfrac12 x(t)^2 - \lambda(t)[\dot x(t) - t]\, dt - \lambda(0)[x(0) - 1]. $$

49. Optimality conditions: Example 4 – initial conditions

Given

$$ L(x(t),\lambda(t)) = \int_0^T \tfrac12 x(t)^2 - \lambda(t)[\dot x(t) - t]\, dt - \lambda(0)[x(0) - 1], $$

the derivatives of the Lagrangian are now:

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [x(t) + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - \eta(0)[x(0) - 1]. $$

50. Optimality conditions: Example 4 – initial conditions

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [x(t) + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T) = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - [x(0) - 1]\,\eta(0) = 0 \quad \forall \eta(t). $$

From the second equation we can again conclude that

$$ \dot x(t) - t = 0, \qquad x(0) = 1, $$

with solution $x(t) = 1 + \tfrac12 t^2$.

51. Optimality conditions: Example 4 – initial conditions

The optimality conditions are now

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [x(t) + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T) = 0 \quad \forall \xi(t), $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - t]\,\eta(t)\, dt - [x(0) - 1]\,\eta(0) = 0 \quad \forall \eta(t). $$

From the first equation we can now conclude that

$$ x(t) + \dot\lambda(t) = 0, \qquad \lambda(T) = 0. $$

Note: This is a linear final value problem for the Lagrange multiplier. Given the form of x(t), we can integrate the first equation:

$$ \lambda(t) = -t - \tfrac16 t^3 + a. $$

Together with the final condition, we obtain

$$ \lambda(t) = -t - \tfrac16 t^3 + T + \tfrac16 T^3. $$

52. Optimality conditions: Preliminary summary

Summary so far: Consider the (not very interesting) case where the constraints completely determine the solution, i.e. without any control variables:

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T F(x(t),t)\, dt \quad \text{such that} \quad \dot x(t) - g(x(t),t) = 0, \qquad x(0) = x_0. $$

Then the optimality conditions read in "variational form":

$$ \langle \nabla_x L(x(t),\lambda(t)), \xi \rangle = \int_0^T [F_x(x(t),t) + g_x(x(t),t)\lambda(t) + \dot\lambda(t)]\,\xi(t)\, dt - \lambda(T)\xi(T) = 0, $$
$$ \langle \nabla_\lambda L(x(t),\lambda(t)), \eta \rangle = \int_0^T -[\dot x(t) - g(x(t),t)]\,\eta(t)\, dt - [x(0) - x_0]\,\eta(0) = 0, $$

for all $\xi(t)$, $\eta(t)$.

53. Optimality conditions: Preliminary summary

Summary so far: Consider the (not very interesting) case where the constraints completely determine the solution, i.e. without any control variables:

$$ \min_{x(t) \in X} f(x(t),t) = \int_0^T F(x(t),t)\, dt \quad \text{such that} \quad \dot x(t) - g(x(t),t) = 0, \qquad x(0) = x_0. $$

Then the optimality conditions read in "strong" form:

$$ \dot x(t) - g(x(t),t) = 0, \qquad x(0) = x_0, $$
$$ \dot\lambda(t) = -F_x(x(t),t) - g_x(x(t),t)\,\lambda(t), \qquad \lambda(T) = 0. $$

Note: Because x(t) does not depend on the Lagrange multiplier, the optimality conditions can be solved by first solving for x(t) as an initial value problem from 0 to T and in a second step solving the final value problem for λ(t) backward from T to 0.
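This forward/backward structure is easy to sketch. Using Example 4 from above (F(x,t) = ½x², g(x,t) = t, x(0) = 1, T = 1), the dual equation becomes λ' = −x with λ(T) = 0, whose exact solution is λ(t) = −t − t³/6 + T + T³/6, so λ(0) = 7/6:

```python
import numpy as np

T, n = 1.0, 2000
dt = T / n
ts = np.linspace(0.0, T, n + 1)

# Primal, forward in time: x' = g(x,t) = t, x(0) = 1 (explicit Euler)
x = np.empty(n + 1)
x[0] = 1.0
for i in range(n):
    x[i + 1] = x[i] + dt * ts[i]

# Dual, backward in time: lam' = -F_x(x,t) - g_x(x,t)*lam = -x, lam(T) = 0
lam = np.empty(n + 1)
lam[n] = 0.0
for i in range(n, 0, -1):
    lam[i - 1] = lam[i] + dt * x[i]
```

The forward sweep never needs λ, and the backward sweep only needs the stored primal trajectory, which is exactly why the decoupled solve works here.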

54. Part 27 Optimal control: Theory

55. Optimality conditions for optimal control problems

Recap: Let
● X a space of time-dependent functions,
● Q a space of control parameters, time dependent or not,
● $f : X \times Q \to \mathbb{R}$ a continuous functional on X and Q,
● $L : X \times Q \to Y$ a continuous operator mapping into a space Y,
● $g : X \to Z_x$ a continuous operator on X mapping into a space $Z_x$,
● $h : Q \to Z_q$ a continuous operator on Q mapping into a space $Z_q$.

Then the problem

$$ \min_{x = x(t) \in X,\, q \in Q} f(x(t),q) \quad \text{such that} \quad L(x(t),q) = 0 \;\forall t \in [t_i,t_f], \qquad g(x(t)) \ge 0 \;\forall t \in [t_i,t_f], \qquad h(q) \ge 0, $$

is called an optimal control problem.

56. Optimality conditions for optimal control problems

There are two important cases:

● The space of control parameters, Q, is a finite dimensional set:

$$ \min_{x = x(t) \in X,\, q \in Q = \mathbb{R}^n} f(x(t),q) \quad \text{such that} \quad L(x(t),q) = 0 \;\forall t \in [t_i,t_f], \qquad g(x(t)) \ge 0 \;\forall t \in [t_i,t_f], \qquad h(q) \ge 0. $$

● The space of control parameters, Q, consists of time dependent functions:

$$ \min_{x = x(t) \in X,\, q \in Q} f(x(t),q(t)) \quad \text{such that} \quad L(x(t),q(t)) = 0 \;\forall t \in [t_i,t_f], \qquad g(x(t),q(t)) \ge 0 \;\forall t \in [t_i,t_f], \qquad h(q(t)) \ge 0. $$

57. The finite dimensional case

Consider the case of finite dimensional control variables q:

$$ \min_{x(t) \in X,\, q \in \mathbb{R}^n} f(x(t),t,q) = \int_0^T F(x(t),t,q)\, dt \quad \text{such that} \quad \dot x(t) - g(x(t),t,q) = 0, \qquad x(0) = x_0(q), $$

with $g : X \times \mathbb{R} \times \mathbb{R}^n \to V$.

Because the differential equation now depends on q, the feasible set is no longer just a single point. Rather, for every q there is a feasible x(t) if the ODE is solvable. In this case, we have (all products are understood to be dot products):

$$ L(x(t),q,\lambda(t)) = \int_0^T F(x(t),t,q)\, dt - \langle \lambda, \dot x(t) - g(x(t),t,q) \rangle - \lambda(0)[x(0) - x_0(q)], $$

$$ L : X \times \mathbb{R}^n \times V' \to \mathbb{R}. $$

58. The finite dimensional case

Theorem: Under certain conditions on f, g, the solution satisfies

$$ \langle \nabla_x L(x^*,q^*,\lambda^*), \xi \rangle = 0 \quad \forall \xi \in X, $$
$$ \langle \nabla_\lambda L(x^*,q^*,\lambda^*), \eta \rangle = 0 \quad \forall \eta \in V, $$
$$ \langle \nabla_q L(x^*,q^*,\lambda^*), \chi \rangle = 0 \quad \forall \chi \in (\mathbb{R}^n)' = \mathbb{R}^n. $$

The first two conditions can equivalently be written as

$$ \int_0^T \nabla_x L(x^*(t),q,\lambda^*(t))\,\xi(t)\, dt = 0 \quad \forall \xi \in X, \qquad \int_0^T \nabla_\lambda L(x^*(t),q,\lambda^*(t))\,\eta(t)\, dt = 0 \quad \forall \eta \in V. $$

Note: Since q is finite dimensional, the following conditions are equivalent:

$$ \langle \nabla_q L(x^*,q,\lambda^*), \chi \rangle = 0 \quad \forall \chi \in (\mathbb{R}^n)' = \mathbb{R}^n \qquad \Leftrightarrow \qquad \nabla_q L(x^*,q,\lambda^*) = 0. $$

59. The finite dimensional case

Corollary: Given the form of the Lagrangian,

$$ L(x(t),q,\lambda(t)) = \int_0^T F(x(t),t,q) - \lambda(t)[\dot x(t) - g(x(t),t,q)]\, dt - \lambda(0)[x(0) - x_0(q)], $$

the optimality conditions are equivalent to the following three sets of equations:

$$ \dot x(t) = g(x(t),t,q), \qquad x(0) = x_0(q), $$
$$ \dot\lambda(t) = -F_x(x(t),t,q) - g_x(x(t),t,q)\,\lambda(t), \qquad \lambda(T) = 0, $$
$$ \int_0^T F_q(x(t),t,q) + \lambda(t)\, g_q(x(t),t,q)\, dt + \lambda(0)\,\frac{\partial x_0(q)}{\partial q} = 0. $$

Remark: These are called the primal, dual and control equations, respectively.

60. The finite dimensional case

The optimality conditions for the finite dimensional case are

$$ \dot x(t) = g(x(t),t,q), \qquad x(0) = x_0(q), $$
$$ \dot\lambda(t) = -F_x(x(t),t,q) - g_x(x(t),t,q)\,\lambda(t), \qquad \lambda(T) = 0, $$
$$ \int_0^T F_q(x(t),t,q) + \lambda(t)\, g_q(x(t),t,q)\, dt + \lambda(0)\,\frac{\partial x_0(q)}{\partial q} = 0. $$

Note: The primal and dual equations are differential equations, whereas the control equation is a (in general nonlinear) algebraic equation. This should be enough to identify the two time-dependent functions and the finite dimensional parameter.

However: Since the control equation determines q for given primal and dual variables, we can no longer integrate the first equation forward and the second backward to solve the problem. Everything is coupled now!

61. The finite dimensional case: An example

Example: Throw a ball from height h with horizontal velocity $v_x$ so that it lands as close as possible to x = (1,0) after one time unit:

$$ \min_{\{x(t),v(t)\} \in X,\; q = \{h,v_x\} \in \mathbb{R}^2} \frac12 \int_0^T \left| x(t) - \binom{1}{0} \right|^2 \delta(t-1)\, dt $$

such that

$$ \dot x(t) = v(t), \qquad x(0) = \binom{0}{h}, $$
$$ \dot v(t) = \binom{0}{-1}, \qquad v(0) = \binom{v_x}{0}. $$

Then:

$$ L(\{x(t),v(t)\},q,\{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \left| x(t) - \binom{1}{0} \right|^2 \delta(t-1)\, dt - \left\langle \lambda_x, \dot x(t) - v(t) \right\rangle - \left\langle \lambda_v, \dot v(t) - \binom{0}{-1} \right\rangle $$
$$ - \lambda_x(0)\left[ x(0) - \binom{0}{h} \right] - \lambda_v(0)\left[ v(0) - \binom{v_x}{0} \right]. $$

62. The finite dimensional case: An example

From the Lagrangian above we get the optimality conditions:

● Derivative with respect to x(t):

$$ \int_0^T \left[ x(t) - \binom{1}{0} \right] \delta(t-1)\,\xi_x(t)\, dt - \int_0^T \lambda_x(t)\,\dot\xi_x(t)\, dt - \lambda_x(0)\,\xi_x(0) = 0 \quad \forall \xi_x(t). $$

After integration by parts, we see that this is equivalent to

$$ \left[ x(t) - \binom{1}{0} \right] \delta(t-1) + \dot\lambda_x(t) = 0, \qquad \lambda_x(T) = 0. $$

63. The finite dimensional case: An example

From the Lagrangian above we get the optimality conditions:

● Derivative with respect to v(t):

$$ \int_0^T \lambda_x(t)\,\xi_v(t)\, dt - \int_0^T \lambda_v(t)\,\dot\xi_v(t)\, dt - \lambda_v(0)\,\xi_v(0) = 0 \quad \forall \xi_v(t). $$

After integration by parts, we see that this is equivalent to

$$ \lambda_x(t) + \dot\lambda_v(t) = 0, \qquad \lambda_v(T) = 0. $$

64. The finite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\},q,\{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \Bigl\| x(t) - \binom{1}{0} \Bigr\|^2 \delta(t-1)\,dt - \Bigl\langle \lambda_x, \dot x(t)-v(t) \Bigr\rangle - \Bigl\langle \lambda_v, \dot v(t)-\binom{0}{-1} \Bigr\rangle - \lambda_x(0)^T \Bigl[ x(0) - \binom{0}{h} \Bigr] - \lambda_v(0)^T \Bigl[ v(0) - \binom{v_x}{0} \Bigr]$

we get the optimality conditions:
● Derivative with respect to λ_x(t):

$-\int_0^T \psi_x(t)^T \bigl[ \dot x(t) - v(t) \bigr]\,dt - \psi_x(0)^T \Bigl[ x(0) - \binom{0}{h} \Bigr] = 0 \quad \forall\,\psi_x(t)$

This is equivalent to

$\dot x(t) - v(t) = 0, \qquad x(0) - \binom{0}{h} = 0$ 226

65. The finite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\},q,\{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \Bigl\| x(t) - \binom{1}{0} \Bigr\|^2 \delta(t-1)\,dt - \Bigl\langle \lambda_x, \dot x(t)-v(t) \Bigr\rangle - \Bigl\langle \lambda_v, \dot v(t)-\binom{0}{-1} \Bigr\rangle - \lambda_x(0)^T \Bigl[ x(0) - \binom{0}{h} \Bigr] - \lambda_v(0)^T \Bigl[ v(0) - \binom{v_x}{0} \Bigr]$

we get the optimality conditions:
● Derivative with respect to λ_v(t):

$-\int_0^T \psi_v(t)^T \Bigl[ \dot v(t) - \binom{0}{-1} \Bigr]\,dt - \psi_v(0)^T \Bigl[ v(0) - \binom{v_x}{0} \Bigr] = 0 \quad \forall\,\psi_v(t)$

This is equivalent to

$\dot v(t) - \binom{0}{-1} = 0, \qquad v(0) - \binom{v_x}{0} = 0$ 227

66. The finite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\},q,\{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \Bigl\| x(t) - \binom{1}{0} \Bigr\|^2 \delta(t-1)\,dt - \Bigl\langle \lambda_x, \dot x(t)-v(t) \Bigr\rangle - \Bigl\langle \lambda_v, \dot v(t)-\binom{0}{-1} \Bigr\rangle - \lambda_x(0)^T \Bigl[ x(0) - \binom{0}{h} \Bigr] - \lambda_v(0)^T \Bigl[ v(0) - \binom{v_x}{0} \Bigr]$

we get the optimality conditions:
● Derivative with respect to the first control parameter h:

$\lambda_{x,2}(0) = 0$

● Derivative with respect to the second control parameter v_x:

$\lambda_{v,1}(0) = 0$ 228

67. The finite dimensional case: An example The complete set of optimality conditions is now as follows:

State equations (initial value problem):
$\dot x(t) - v(t) = 0, \qquad x(0) - \binom{0}{h} = 0$
$\dot v(t) - \binom{0}{-1} = 0, \qquad v(0) - \binom{v_x}{0} = 0$

Adjoint equations (final value problem):
$\Bigl( x(t) - \binom{1}{0} \Bigr)\delta(t-1) + \dot\lambda_x(t) = 0, \qquad \lambda_x(T) = 0$
$\lambda_x(t) + \dot\lambda_v(t) = 0, \qquad \lambda_v(T) = 0$

Control equations (algebraic):
$\lambda_{x,2}(0) = 0$
$\lambda_{v,1}(0) = 0$ 229

68. The finite dimensional case: An example In this simple example, we can integrate the optimality conditions in time:

State equations (initial value problem):
$\dot x(t) - v(t) = 0, \qquad x(0) - \binom{0}{h} = 0$
$\dot v(t) - \binom{0}{-1} = 0, \qquad v(0) - \binom{v_x}{0} = 0$

Solution:
$v(t) = \binom{v_x}{-t}, \qquad x(t) = \binom{v_x\,t}{h - t^2/2}$ 230
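This closed form is easy to confirm numerically. A small sketch integrating the state equations with the explicit Euler method (the test values h = 0.3, v_x = 0.8 are my own arbitrary choices, not the optimal ones):

```python
import numpy as np

# Forward-Euler check of the closed form v(t) = (v_x, -t), x(t) = (v_x t, h - t^2/2)
# on [0, 1]; h and v_x are arbitrary test values.
h, vx = 0.3, 0.8
n = 10_000
dt = 1.0 / n
x = np.array([0.0, h])                    # x(0) = (0, h)
v = np.array([vx, 0.0])                   # v(0) = (v_x, 0)
for _ in range(n):
    x = x + dt * v                        # x' = v
    v = v + dt * np.array([0.0, -1.0])    # v' = (0, -1)

x_exact = np.array([vx, h - 0.5])         # closed form at t = 1
v_exact = np.array([vx, -1.0])
print(x, x_exact)
```

The Euler iterate matches the closed form up to the expected $O(\Delta t)$ discretization error.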

69. The finite dimensional case: An example In this simple example, we can integrate the optimality conditions in time:

Adjoint equations (final value problem):
$\Bigl( x(t) - \binom{1}{0} \Bigr)\delta(t-1) + \dot\lambda_x(t) = 0, \qquad \lambda_x(T) = 0$
$\lambda_x(t) + \dot\lambda_v(t) = 0, \qquad \lambda_v(T) = 0$

Solution (integrating backward from t = T; the delta function makes λ_x jump at t = 1):
$\lambda_x(t) = \Bigl[ x(1) - \binom{1}{0} \Bigr]$ for $t < 1$, $\qquad \lambda_x(t) = 0$ for $t > 1$
$\lambda_v(t) = \Bigl[ x(1) - \binom{1}{0} \Bigr](1-t)$ for $t < 1$, $\qquad \lambda_v(t) = 0$ for $t > 1$

Using what we found for x(1) previously, $x(1) = \binom{v_x}{h - 1/2}$:
$\lambda_x(t) = \binom{v_x - 1}{h - 1/2}$ for $t < 1$, $\qquad \lambda_v(t) = \binom{v_x - 1}{h - 1/2}(1-t)$ for $t < 1$,
and both vanish for $t > 1$. 231

70. The finite dimensional case: An example In the final step, we use the control equations:

$\lambda_{x,2}(0) = 0, \qquad \lambda_{v,1}(0) = 0$

But we know that

$\lambda_x(0) = \binom{v_x - 1}{h - 1/2}, \qquad \lambda_v(0) = \binom{v_x - 1}{h - 1/2}$

so the control equations read $h - \tfrac12 = 0$ and $v_x - 1 = 0$. Consequently, the solution is given by

$h = \tfrac12, \qquad v_x = 1$ 232
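As a sanity check (mine, not on the slide): plugging the closed-form trajectory into the objective reduces the problem to an explicit function of (h, v_x), whose minimum is exactly the pair found above, with the ball landing precisely at (1, 0) at t = 1:

```python
# From the closed form x(t) = (v_x t, h - t^2/2), the landing point is
# x(1) = (v_x, h - 1/2), so the cost 1/2 ||x(1) - (1,0)||^2 becomes:
def cost(h, vx):
    return 0.5 * ((vx - 1.0) ** 2 + (h - 0.5) ** 2)

print(cost(0.5, 1.0))   # the optimal pair h = 1/2, v_x = 1 gives zero cost
print(cost(0.6, 0.9))   # any perturbation is strictly worse
```

This confirms that the stationary point of the optimality system is indeed the global minimizer of the reduced cost.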

71. The infinite dimensional case Consider the case of a control variable q(t) that is a function (here, for example, a function in $L^2$):

$\min_{x(t)\in X,\; q(t)\in L^2[0,T]} \; f(x,q) = \int_0^T F(x(t), t, q(t))\,dt$

such that

$\dot x(t) - g(x(t), t, q(t)) = 0, \qquad x(0) = x_0(q(0))$

with $g : X \times \mathbb R \times \mathbb R^n \to V$. In this case, we have

$L(x(t), q(t), \lambda(t)) = \int_0^T F(x(t),t,q(t))\,dt - \bigl\langle \lambda, \dot x(t) - g(x(t),t,q(t)) \bigr\rangle - \lambda(0)\bigl[ x(0) - x_0(q(0)) \bigr]$

with $L : X \times L^2[0,T] \times V' \to \mathbb R$. 233

72. The infinite dimensional case Theorem: Under certain conditions on f, g the solution $(x^*, q^*, \lambda^*)$ satisfies

$\langle \nabla_x L(x^*, q^*, \lambda^*), \varphi \rangle = 0 \quad \forall\,\varphi \in X$
$\langle \nabla_\lambda L(x^*, q^*, \lambda^*), \psi \rangle = 0 \quad \forall\,\psi \in V'$
$\langle \nabla_q L(x^*, q^*, \lambda^*), \chi \rangle = 0 \quad \forall\,\chi \in L^2[0,T]$

where we use $(L^2[0,T])' = L^2[0,T]$. The first two conditions can equivalently be written as

$\int_0^T \nabla_x L(x^*(t), q^*, \lambda^*(t))\,\varphi(t)\,dt = 0 \quad \forall\,\varphi \in X$
$\int_0^T \nabla_\lambda L(x^*(t), q^*, \lambda^*(t))\,\psi(t)\,dt = 0 \quad \forall\,\psi \in V'$

Note: Since q is now a function, the third optimality condition is:

$\int_0^T \nabla_q L(x^*(t), q^*, \lambda^*(t))\,\chi(t)\,dt = 0 \quad \forall\,\chi \in L^2[0,T]$ 234

73. The infinite dimensional case Corollary: Given the form of the Lagrangian,

$L(x(t), q(t), \lambda(t)) = \int_0^T F(x(t),t,q(t))\,dt - \bigl\langle \lambda, \dot x(t) - g(x(t),t,q(t)) \bigr\rangle - \lambda(0)\bigl[ x(0) - x_0(q(0)) \bigr]$

the optimality conditions are equivalent to the following three sets of equations:

$\dot x(t) = g(x(t),t,q(t)), \qquad x(0) = x_0(q(0))$
$\dot\lambda(t) = -F_x(x(t),t,q(t)) - g_x(x(t),t,q(t))\,\lambda(t), \qquad \lambda(T) = 0$
$F_q(x(t),t,q(t)) + g_q(x(t),t,q(t))\,\lambda(t) = 0, \qquad \lambda(0)\,\frac{\partial x_0(q)}{\partial q} = 0$

Remark: These are again called the primal, dual and control equations, respectively. 235

74. The infinite dimensional case The optimality conditions for the infinite dimensional case are

$\dot x(t) = g(x(t),t,q(t)), \qquad x(0) = x_0(q(0))$
$\dot\lambda(t) = -F_x(x(t),t,q(t)) - g_x(x(t),t,q(t))\,\lambda(t), \qquad \lambda(T) = 0$
$F_q(x(t),t,q(t)) + g_q(x(t),t,q(t))\,\lambda(t) = 0, \qquad \lambda(0)\,\frac{\partial x_0(q)}{\partial q} = 0$

Note 1: The primal and dual equations are differential equations, whereas the control equation is an (in general nonlinear) algebraic equation that has to hold for all times between 0 and T. This should be enough to identify the three time-dependent functions.

Note 2: As in the finite dimensional case, all three equations are coupled and cannot be solved one after the other. 236
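Because of this coupling, one common strategy is to iterate: solve the primal equation forward for the current q(t), solve the dual equation backward, and use the control-equation residual $F_q + g_q\lambda$ as a descent direction for q. A minimal sketch on a toy problem of my own choosing (not from the slides): minimize $\frac12\int_0^1 (x(t)-1)^2\,dt$ subject to $\dot x = q(t)$, $x(0)=0$, where $F_q = 0$ and $g_q = 1$, so the residual is simply $\lambda(t)$:

```python
import numpy as np

# Steepest descent for an infinite-dimensional control q(t) (toy problem, mine):
#   minimize 1/2 * integral_0^1 (x(t) - 1)^2 dt,  x'(t) = q(t), x(0) = 0.
n, alpha = 200, 0.5
dt = 1.0 / n
q = np.zeros(n)                            # control, piecewise constant

def primal(q):
    """Forward Euler for x'(t) = q(t), x(0) = 0."""
    return np.concatenate(([0.0], np.cumsum(q) * dt))

def dual(x):
    """Backward integration of lam'(t) = -(x(t) - 1), lam(1) = 0 (suffix sums)."""
    incr = (x[1:] - 1.0) * dt
    return np.concatenate((np.cumsum(incr[::-1])[::-1], [0.0]))

def cost(x):
    return 0.5 * float(np.sum((x[1:] - 1.0) ** 2)) * dt

J_start = cost(primal(q))
for _ in range(1000):
    x = primal(q)                          # primal forward
    lam = dual(x)                          # dual backward
    q = q - alpha * lam[:-1]               # step along the control-equation residual
J_end = cost(primal(q))
print(J_start, J_end)                      # the cost drops substantially
```

Each sweep leaves the primal and dual equations satisfied exactly (up to discretization) and only the control equation is relaxed iteratively, which mirrors the structure of the three conditions above.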

75. The infinite dimensional case: An example Example: Throw a ball from height 1. Use vertical thrusters so that the altitude follows the path $1+t^2$:

$\min_{\{x(t),v(t)\}\in X,\; q(t)\in L^2[0,T]} \; \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt$

such that

$\dot x(t) = v(t), \qquad x(0) = 1$
$\dot v(t) = -1 + q(t), \qquad v(0) = 0$

Then:

$L(\{x(t),v(t)\}, q(t), \{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt - \bigl\langle \lambda_x, \dot x(t) - v(t) \bigr\rangle - \bigl\langle \lambda_v, \dot v(t) - [-1 + q(t)] \bigr\rangle - \lambda_x(0)\bigl[ x(0) - 1 \bigr] - \lambda_v(0)\bigl[ v(0) - 0 \bigr]$ 237

76. The infinite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\}, q(t), \{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt - \bigl\langle \lambda_x, \dot x(t) - v(t) \bigr\rangle - \bigl\langle \lambda_v, \dot v(t) - [-1 + q(t)] \bigr\rangle - \lambda_x(0)\bigl[ x(0) - 1 \bigr] - \lambda_v(0)\bigl[ v(0) - 0 \bigr]$

we get the optimality conditions:
● Derivative with respect to x(t):

$\int_0^T \bigl( x(t) - (1+t^2) \bigr)\varphi_x(t)\,dt - \int_0^T \lambda_x(t)\,\dot\varphi_x(t)\,dt - \lambda_x(0)\,\varphi_x(0) = 0 \quad \forall\,\varphi_x(t)$

After integration by parts, we see that this is equivalent to

$\bigl( x(t) - (1+t^2) \bigr) + \dot\lambda_x(t) = 0, \qquad \lambda_x(T) = 0$ 238

77. The infinite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\}, q(t), \{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt - \bigl\langle \lambda_x, \dot x(t) - v(t) \bigr\rangle - \bigl\langle \lambda_v, \dot v(t) - [-1 + q(t)] \bigr\rangle - \lambda_x(0)\bigl[ x(0) - 1 \bigr] - \lambda_v(0)\bigl[ v(0) - 0 \bigr]$

we get the optimality conditions:
● Derivative with respect to v(t):

$\int_0^T \lambda_x(t)\,\varphi_v(t)\,dt - \int_0^T \lambda_v(t)\,\dot\varphi_v(t)\,dt - \lambda_v(0)\,\varphi_v(0) = 0 \quad \forall\,\varphi_v(t)$

After integration by parts, we see that this is equivalent to

$\lambda_x(t) + \dot\lambda_v(t) = 0, \qquad \lambda_v(T) = 0$ 239

78. The infinite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\}, q(t), \{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt - \bigl\langle \lambda_x, \dot x(t) - v(t) \bigr\rangle - \bigl\langle \lambda_v, \dot v(t) - [-1 + q(t)] \bigr\rangle - \lambda_x(0)\bigl[ x(0) - 1 \bigr] - \lambda_v(0)\bigl[ v(0) - 0 \bigr]$

we get the optimality conditions:
● Derivative with respect to λ_x(t):

$-\int_0^T \psi_x(t)\bigl[ \dot x(t) - v(t) \bigr]\,dt - \psi_x(0)\bigl[ x(0) - 1 \bigr] = 0 \quad \forall\,\psi_x(t)$

This is equivalent to

$\dot x(t) - v(t) = 0, \qquad x(0) - 1 = 0$ 240

79. The infinite dimensional case: An example From the Lagrangian

$L(\{x(t),v(t)\}, q(t), \{\lambda_x(t),\lambda_v(t)\}) = \frac12 \int_0^T \bigl( x(t) - (1+t^2) \bigr)^2\,dt - \bigl\langle \lambda_x, \dot x(t) - v(t) \bigr\rangle - \bigl\langle \lambda_v, \dot v(t) - [-1 + q(t)] \bigr\rangle - \lambda_x(0)\bigl[ x(0) - 1 \bigr] - \lambda_v(0)\bigl[ v(0) - 0 \bigr]$

we get the optimality conditions:
● Derivative with respect to λ_v(t):

$-\int_0^T \psi_v(t)\bigl[ \dot v(t) - (-1 + q(t)) \bigr]\,dt - \psi_v(0)\bigl[ v(0) - 0 \bigr] = 0 \quad \forall\,\psi_v(t)$

This is equivalent to

$\dot v(t) - (-1 + q(t)) = 0, \qquad v(0) = 0$ 241
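The excerpt stops here, but a quick plausibility check (my own observation, not part of the deck): since the target path satisfies $\ddot x = 2$ and the dynamics give $\ddot x = -1 + q(t)$, the constant control $q(t) = 3$ tracks $1 + t^2$ exactly, with $x(0)=1$ and $v(0)=0$ already matching. A forward Euler integration confirms this:

```python
# Check that q(t) = 3 reproduces the target altitude 1 + t^2 under
# x' = v, v' = -1 + q, x(0) = 1, v(0) = 0 (my own check, not from the slides).
n = 10_000
dt = 1.0 / n
x, v = 1.0, 0.0
max_err = 0.0
for k in range(n):
    x += dt * v
    v += dt * (-1.0 + 3.0)       # v' = -1 + q with q = 3
    t = (k + 1) * dt
    max_err = max(max_err, abs(x - (1.0 + t * t)))
print(max_err)                   # stays at the O(dt) Euler error level
```

With zero tracking error achievable, this candidate also makes the adjoint equations above consistent with $\lambda_x \equiv \lambda_v \equiv 0$, so it is a natural guess for the optimum of this example.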
