

  1. Optimal Control Theory

  2. The theory
  • Optimal control theory is a mature mathematical discipline which provides algorithms to solve various control problems
  • The elaborate mathematical machinery behind optimal control models is rarely exposed to the computer animation community
  • Most controllers designed in practice are theoretically suboptimal
  • There is an excellent tutorial by Dr. Emo Todorov (http://www.cs.washington.edu/homes/todorov/papers/optimality_chapter.pdf)

  3. Standard problem
  • Find an action sequence (u_0, u_1, ..., u_{n-1}) and corresponding state sequence (x_0, x_1, ..., x_{n-1}) minimizing the total cost
  • The initial state (x_0) and the destination state (x_n) are given

  4. Discrete control
  [Figure: a directed graph of states whose edges carry transition costs ($120, $150, $200, $250, $300, $350, $450, $500), defining the functions next(x, u) and cost(x, u)]

  5. Dynamic programming
  • Bellman optimality principle: if a given state-action sequence is optimal and we remove the first state and action, the remaining sequence is also optimal
  • The choice of optimal actions in the future is independent of the past actions which led to the present state
  • Optimal state-action sequences can be constructed by starting at the final state and extending backwards

  6. Optimal value function
  • v(x) = "minimal total cost for completing the task starting from state x"
  • Find optimal actions:
    1. Consider every action available at the current state
    2. Add its immediate cost to the optimal value of the resulting next state
    3. Choose an action for which the sum is minimal (a concrete sketch follows below)
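To make the backward construction concrete, here is a minimal sketch on a made-up graph: the states, actions, and costs are illustrative stand-ins for the slide-4 figure, with next_state and cost playing the roles of next(x, u) and cost(x, u).

    # Backward construction of the optimal value function on a small
    # acyclic graph (illustrative data, not from the slides).
    from functools import lru_cache

    next_state = {"A": {"up": "B", "down": "C"},
                  "B": {"stay": "D"},
                  "C": {"stay": "D"}}
    cost = {"A": {"up": 120, "down": 150},
            "B": {"stay": 200},
            "C": {"stay": 350}}
    GOAL = "D"

    @lru_cache(maxsize=None)
    def v(x):
        """Minimal total cost for completing the task starting from x."""
        if x == GOAL:
            return 0
        return min(cost[x][u] + v(next_state[x][u]) for u in next_state[x])

    def best_action(x):
        """Pick the action minimizing immediate cost plus optimal value of the next state."""
        return min(next_state[x], key=lambda u: cost[x][u] + v(next_state[x][u]))

    print(v("A"), best_action("A"))   # -> 320 up  (120 + 200 via B)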

  7. Optimal value function
  • Mathematically, the value function (or cost-to-go function) can be defined as
    v(x) = \min_{u \in U(x)} [ \text{cost}(x, u) + v(\text{next}(x, u)) ]

  8. Optimal control policy
  • A mapping from states to actions is called a control policy or control law
  • Once we have a control policy, we can start at any state and reach the destination state by following the control policy
  • The optimal control policy satisfies
    \pi(x) = \arg\min_{u \in U(x)} [ \text{cost}(x, u) + v(\text{next}(x, u)) ]
  • Its corresponding optimal value function satisfies the Bellman equation
    v(x) = \min_{u \in U(x)} [ \text{cost}(x, u) + v(\text{next}(x, u)) ]

  9. Value iteration
  • Bellman equations cannot be solved in a single pass if the state transitions are cyclic
  • Value iteration starts with a guess v^{(0)} of the optimal value function and constructs a sequence of improved guesses (a sketch follows):
    v^{(i+1)}(x) = \min_{u \in U(x)} [ \text{cost}(x, u) + v^{(i)}(\text{next}(x, u)) ]
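A minimal value-iteration sketch, assuming the same hypothetical next_state/cost dictionaries as the earlier example (the graph may now be cyclic): repeatedly sweep the states, replacing each value with the best one-step lookahead, until the values stop changing.

    # Value iteration: start from a guess v(0) and repeatedly apply the
    # Bellman backup until the values converge.
    def value_iteration(next_state, cost, goal, tol=1e-9, max_iters=10_000):
        states = set(next_state) | {goal}
        v = {x: 0.0 for x in states}              # initial guess v(0)
        for _ in range(max_iters):
            delta = 0.0
            for x in states:
                if x == goal:
                    continue
                new_v = min(cost[x][u] + v[next_state[x][u]]
                            for u in next_state[x])
                delta = max(delta, abs(new_v - v[x]))
                v[x] = new_v
            if delta < tol:                       # converged
                break
        return v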

  10. Outline
  • Discrete control: Bellman equations
  • Continuous control: HJB equations
  • Maximum principle
  • Linear quadratic regulator (LQR)
  • Differential dynamic programming (DDP)

  11. Continuous control
  • State space and control space are continuous
  • Dynamics of the system: continuous time, discrete time (standard forms are sketched below)
  • Objective function:
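The dynamics and objective formulas on this slide were images and did not survive extraction; standard forms consistent with the HJB slide that follows (h denotes a final cost and Δ a time step, both names being my addition) would be:

    \dot{x} = f(x, u)                                  % continuous time
    x_{k+1} = x_k + \Delta \, f(x_k, u_k)              % discrete time (Euler step)
    J = h(x(T)) + \int_0^T \ell(x(t), u(t), t) \, dt   % objective function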

  12. HJB equation
  • The HJB equation is a nonlinear PDE with respect to the unknown function v:
    -v_t(x, t) = \min_{u \in U(x)} [ \ell(x, u, t) + f(x, u)^T v_x(x, t) ]
  • An optimal control \pi(x, t) is a value of u which achieves the minimum in the HJB equation:
    \pi(x, t) = \arg\min_{u \in U(x)} [ \ell(x, u, t) + f(x, u)^T v_x(x, t) ]
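Not on the slide, but a useful worked instance: when the dynamics are affine in the control and the control cost is quadratic, say f(x, u) = a(x) + B(x)u and \ell = q(x, t) + ½uᵀRu (my notation), the HJB minimization has a closed form:

    \pi(x, t) = \arg\min_u \left[ q(x, t) + \tfrac{1}{2} u^T R u
                 + (a(x) + B(x) u)^T v_x(x, t) \right]
              = -R^{-1} B(x)^T v_x(x, t)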

  13. Numerical solution
  • Non-linear differential equations do not always have classical solutions which satisfy them everywhere
  • Numerical methods guarantee convergence, but they rely on a discretization of the state space, which grows exponentially with the state space dimension
  • Nevertheless, the HJB equations have motivated a number of methods for approximate solution

  14. Parametric value function
  • Consider an approximation \tilde{v}(x; \theta) to the optimal value function
  • Take its derivative with respect to x
  • Choose a large enough set of states and evaluate the right-hand side of the HJB equation using the approximated value function
  • Adjust \theta so that the approximated values get closer to the target values (a rough sketch follows)
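A rough sketch of this fitting procedure; every name here (phi, phi_x, l_fn, f_fn) is a problem-specific stand-in, the control set is sampled, and the time dependence of v is dropped for simplicity.

    # Fit a linear-in-parameters value function v(x; theta) = phi(x) @ theta
    # by pulling it toward the right-hand side of the (stationary) HJB
    # equation at a sampled set of states.
    import numpy as np

    def hjb_fit_step(states, controls, phi, phi_x, l_fn, f_fn, theta, lr=1e-3):
        # phi(x): feature vector, shape (d,)
        # phi_x(x): Jacobian of phi at x, shape (d, state_dim),
        #           so that v_x = phi_x(x).T @ theta
        Phi = np.stack([phi(x) for x in states])       # features, shape (N, d)
        v = Phi @ theta                                # current value estimates
        targets = np.array([
            min(l_fn(x, u) + f_fn(x, u) @ (phi_x(x).T @ theta)
                for u in controls)                     # RHS of HJB at state x
            for x in states
        ])
        grad = Phi.T @ (v - targets) / len(states)     # least-squares gradient
        return theta - lr * grad                       # adjusted theta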

  15. Outline
  • Discrete control: Bellman equations
  • Continuous control: HJB equations
  • Maximum principle
  • Linear quadratic regulator (LQR)
  • Differential dynamic programming (DDP)

  16. Maximum principle
  • Optimal control theory is based on two fundamental ideas: dynamic programming and the maximum principle
  • The maximum principle solves the optimal control problem for a deterministic dynamic system with boundary conditions
  • The maximum principle casts trajectory optimization as a set of ODEs, consisting of optimal-control conditions and boundary conditions
  • It escapes the "curse of dimensionality" because it solves only for the optimal trajectory and not for the entire policy. However, for specific problem classes, the control policy can be obtained as well.

  17. Derivation from Lagrange multipliers
  • Minimize the total cost subject to the dynamics constraints
    f(x_k, u_k) - x_{k+1} = 0, \quad 0 \le k \le n - 1

  18. The Lagrangian
  • minimize f(x) subject to Ax = b
  • The Lagrangian associated with this problem is
    L(x, \nu) = f(x) + \sum_{i=1}^{p} \nu_i (a_i^T x - b_i)
  • Optimality conditions: x^* is optimal iff there exists a \nu^* such that
    \nabla f(x^*) + A^T \nu^* = 0, \quad Ax^* = b
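A small numerical illustration of these optimality conditions (the data is made up): for a quadratic objective f(x) = ½xᵀPx + qᵀx, the conditions ∇f(x*) + Aᵀν* = 0 and Ax* = b form one linear (KKT) system.

    # Solve: minimize 0.5 x^T P x + q^T x  subject to  A x = b.
    # The optimality conditions  P x + q + A^T nu = 0,  A x = b  are
    # linear, so x* and nu* come from a single solve.
    import numpy as np

    P = np.array([[2.0, 0.0], [0.0, 4.0]])   # objective Hessian (pos. def.)
    q = np.array([-1.0, -2.0])
    A = np.array([[1.0, 1.0]])               # single constraint x1 + x2 = 1
    b = np.array([1.0])

    n, p = P.shape[0], A.shape[0]
    KKT = np.block([[P, A.T],
                    [A, np.zeros((p, p))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(KKT, rhs)
    x_star, nu_star = sol[:n], sol[n:]

    # Check the optimality conditions from the slide
    assert np.allclose(P @ x_star + q + A.T @ nu_star, 0)
    assert np.allclose(A @ x_star, b)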

  19. Geometric interpretation
    \nabla f(x^*) + A^T \nu^* = 0, \quad Ax^* = b
  [Figure: level sets of f with \nabla f(x), \nabla f(x^*), and the constraint normals a_i]
  • At the optimal point, the gradient of the objective function is a linear combination of the gradients of the constraints
  • The projection of the gradient of the objective function onto the constraint hyperplane is zero at the optimal point

  20. [Figure: constraint gradients \nabla C_1 and \nabla C_2, and the objective F(x)]

  21. Derivation from Lagrange multipliers
  • Minimize the total cost subject to the dynamics constraints
    f(x_k, u_k) - x_{k+1} = 0, \quad 0 \le k \le n - 1
  • The maximum principle can be expressed via the Hamiltonian function

  22. Hamiltonian expression
  • Plugging the Hamiltonian back into the Lagrangian yields (a standard form is sketched below):
    - the state equation
    - the costate equation
    - the optimality condition
    - the boundary conditions
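The four equations were images on the original slide; a standard discrete-time form, assuming the Hamiltonian H_k = \ell(x_k, u_k) + \lambda_{k+1}^T f(x_k, u_k) with costates \lambda (the Lagrange multipliers), is:

    H_k(x_k, u_k, \lambda_{k+1}) = \ell(x_k, u_k) + \lambda_{k+1}^T f(x_k, u_k)
    x_{k+1} = f(x_k, u_k)                                  % state equation
    \lambda_k = \partial H_k / \partial x_k                % costate equation
    0 = \partial H_k / \partial u_k                        % optimality condition
    x_0 \text{ given; } x_n \text{ fixed, or } \lambda_n = \partial h / \partial x_n \text{ with a final cost } h
                                                           % boundary conditions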

  23. Solving for the optimal trajectory
  • Given a control sequence, use the state equation to get the corresponding state sequence
  • Then iterate the costate equation backward in time to get the Lagrange multiplier (costate) sequence
  • Evaluate the gradient of H with respect to u at each time step, and improve the control sequence with any gradient descent algorithm. Go back to step 1, or exit if converged (a sketch follows)
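A minimal sketch of one iteration of this procedure; all callables (f, l_x, l_u, f_x, f_u, h_x) are problem-specific stand-ins for the dynamics, the cost-rate derivatives, and the final-cost gradient.

    import numpy as np

    def improve_controls(us, x0, f, l_x, l_u, f_x, f_u, h_x, lr=1e-2):
        n = len(us)
        # 1. Forward pass: roll out the state sequence under the controls
        xs = [x0]
        for k in range(n):
            xs.append(f(xs[k], us[k]))
        # 2. Backward pass: costate equation lambda_k = l_x + f_x^T lambda_{k+1},
        #    collecting dH/du = l_u + f_u^T lambda_{k+1} along the way
        lam = h_x(xs[n])
        grads = [None] * n
        for k in reversed(range(n)):
            grads[k] = l_u(xs[k], us[k]) + f_u(xs[k], us[k]).T @ lam
            lam = l_x(xs[k], us[k]) + f_x(xs[k], us[k]).T @ lam
        # 3. Gradient step on the controls; the caller repeats until converged
        return [u - lr * g for u, g in zip(us, grads)]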

  24. Special case
  • Optimal control laws can rarely be obtained in closed form. One notable exception is the LQR case, where the dynamics are linear and the costs are quadratic
  • LQR is the class of problems whose dynamics are linear and whose cost is quadratic
  • dynamics, cost rate, and final cost (see slide 27)

  25. Optimal value function
  • We derive the optimal value function from the Bellman equation
  • Again, the optimal value function is quadratic in x and changes over time
  • Plugging it into the Bellman equation, we obtain a recursive relation for V_k
  • The optimal control law is linear in x

  26. Outline
  • Discrete control: Bellman equations
  • Continuous control: HJB equations
  • Maximum principle
  • Linear quadratic regulator (LQR)
  • Differential dynamic programming (DDP)

  27. Linear quadratic regulator
  • Most optimal control problems do not have closed-form solutions. One exception is the LQR case
  • LQR is the class of problems whose dynamics are linear and whose cost is quadratic
  • dynamics, cost rate, and final cost (standard forms are sketched below)
  • R is symmetric positive definite, and Q and Q_f are symmetric
  • A, B, R, Q can be made time-varying
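The three formulas were images on the original slide; the standard continuous-time LQR forms consistent with the symbols listed here (A, B, Q, R, Q_f) are:

    \dot{x} = A x + B u                                        % dynamics
    \ell(x, u) = \tfrac{1}{2} u^T R u + \tfrac{1}{2} x^T Q x   % cost rate
    h(x(t_f)) = \tfrac{1}{2} x(t_f)^T Q_f x(t_f)               % final cost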

  28. Optimal value function
  • For an LQR problem, the optimal value function is quadratic in x and can be expressed as
    v(x, t) = \tfrac{1}{2} x^T V(t) x
  where V(t) is a symmetric matrix
  • We can obtain the ODE for V(t) via the HJB equation (the result is sketched below)
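For reference (the slide's equations were images): substituting v(x, t) = ½xᵀV(t)x into the HJB equation with the LQR forms above yields the continuous-time Riccati ODE and a linear control law:

    -\dot{V}(t) = Q + A^T V(t) + V(t) A - V(t) B R^{-1} B^T V(t), \qquad V(t_f) = Q_f
    \pi(x, t) = -R^{-1} B^T V(t) \, x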

  29. Discrete LQR
  • LQR is defined as follows when time is discretized: dynamics, cost rate, and final cost
  • Let n = t_f / \Delta; the correspondence to the continuous-time problem is sketched below
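The correspondence formulas were also images; assuming a plain Euler discretization with time step \Delta, they would presumably read:

    x_{k+1} = (I + \Delta A) x_k + \Delta B \, u_k             % dynamics
    \ell_k(x, u) = \tfrac{\Delta}{2} \, (u^T R u + x^T Q x)    % cost rate
    h(x_n) = \tfrac{1}{2} x_n^T Q_f x_n                        % final cost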

  30. Optimal value function
  • We derive the optimal value function from the Bellman equation
  • Again, the optimal value function is quadratic in x and changes over time
  • Plugging it into the Bellman equation, we obtain a recursive relation for V_k (implemented in the sketch below)
  • The optimal control law is linear in x
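A minimal sketch of the resulting backward Riccati recursion, assuming time-invariant A, B, Q, R, dynamics x_{k+1} = A x_k + B u_k, per-step cost ½(xᵀQx + uᵀRu), and final cost ½xᵀQ_f x (the double-integrator data in the usage example is made up):

    # Backward Riccati recursion: V_k defines the quadratic value
    # function v_k(x) = 0.5 x^T V_k x, and the optimal control law is
    # linear in x: u_k = -K_k x_k.
    import numpy as np

    def discrete_lqr(A, B, Q, R, Qf, n):
        V = Qf                                    # value at the final step
        Ks = [None] * n
        for k in reversed(range(n)):
            # Minimize 0.5 u^T R u + 0.5 (Ax + Bu)^T V (Ax + Bu) over u
            K = np.linalg.solve(R + B.T @ V @ B, B.T @ V @ A)
            Ks[k] = K
            V = Q + A.T @ V @ (A - B @ K)         # recursive relation for V_k
        return Ks

    # Example usage with a double integrator (illustrative data)
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q = np.eye(2); R = np.eye(1); Qf = 10 * np.eye(2)
    Ks = discrete_lqr(A, B, Q, R, Qf, n=50)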
