  1. Visual Servoing, Intro Optimal Control Lecture 12

  2. What will you take home today? Visual Servoing Interaction Matrix Control Law Case-Study: Learning-based approach Introduction Optimal Control Principle of Optimality Bellman Equation Deriving LQR

  3. Camera-Robot Configurations Image from: Chang, W., Wu, C. Hand-Eye Coordination for Robotic Assembly Tasks. International Journal of Automation and Smart Technology.

  4. Image-based visual servoing Current Image Goal Image

  5. Camera Motion to Image Motion (diagram: camera translational velocities v_x, v_y, v_z and angular velocities ω_x, ω_y, ω_z) Slides adapted from Peter Corke

  6. The Image Jacobian. For a point feature at pixel coordinates $(u, v)$ (measured from the principal point), world point $(X, Y, Z)^T$, and focal length $\hat f = f/\rho$ in pixels, the pixel velocity is related to the camera's spatial velocity by

$$
\begin{pmatrix} \dot u \\ \dot v \end{pmatrix}
=
\begin{pmatrix}
-\hat f/Z & 0 & u/Z & uv/\hat f & -(\hat f + u^2/\hat f) & v \\
0 & -\hat f/Z & v/Z & \hat f + v^2/\hat f & -uv/\hat f & -u
\end{pmatrix}
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
$$

Slides adapted from Peter Corke
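The point-feature interaction matrix on this slide can be sketched in code. This is a minimal illustration, assuming the standard Chaumette/Corke sign convention (camera velocity expressed in the camera frame, pixel coordinates relative to the principal point); the function name is my own, not from the slides:

```python
import numpy as np

def image_jacobian(u, v, Z, f):
    """Interaction matrix J(u, v, Z) for a single point feature.

    Maps camera velocity (vx, vy, vz, wx, wy, wz), expressed in the
    camera frame, to pixel velocity (u_dot, v_dot). u, v are pixel
    coordinates relative to the principal point, Z is the point's
    depth, and f is the focal length in pixels.
    """
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f + u**2 / f), v],
        [0.0, -f / Z, v / Z, f + v**2 / f, -u * v / f, -u],
    ])

# Sanity checks at the image centre: translation along the optical
# axis produces no flow there, while sideways translation produces
# uniform flow scaled by f/Z.
J = image_jacobian(0.0, 0.0, 1.0, 500.0)
assert np.allclose(J @ np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]), [0.0, 0.0])
assert np.allclose(J @ np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]), [-500.0, 0.0])
```

The depth dependence in the translational columns is why IBVS needs a depth estimate Z per feature, as the optical-flow patterns on the next slide illustrate.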

  7. Optical flow Patterns Slides adapted from Peter Corke

  8. Image-based visual servoing Compute a camera velocity that minimizes the error between the current and goal image Current Image Goal Image

  9. Image-based visual servoing Current Image Goal Image. The point Jacobian written compactly: $(\dot u, \dot v)^T = J(u, v, Z)\,(v_x, v_y, v_z, \omega_x, \omega_y, \omega_z)^T$ Slides adapted from Peter Corke

  10. Image-based visual servoing Current Image Goal Image. Stacking the Jacobians of three points:

$$
\begin{pmatrix} \dot u_1 \\ \dot v_1 \\ \dot u_2 \\ \dot v_2 \\ \dot u_3 \\ \dot v_3 \end{pmatrix}
=
\begin{pmatrix} J(u_1, v_1, Z_1) \\ J(u_2, v_2, Z_2) \\ J(u_3, v_3, Z_3) \end{pmatrix}
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
$$

  11. Image-based visual servoing. With three points the stacked matrix is 6×6, so inverting it gives the camera velocity from the desired pixel velocities:

$$
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
=
\begin{pmatrix} J(u_1, v_1, Z_1) \\ J(u_2, v_2, Z_2) \\ J(u_3, v_3, Z_3) \end{pmatrix}^{-1}
\begin{pmatrix} \dot u_1 \\ \dot v_1 \\ \dot u_2 \\ \dot v_2 \\ \dot u_3 \\ \dot v_3 \end{pmatrix}
$$
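The stacking-and-inversion step can be sketched as a small control law. This is an illustrative sketch, not the lecture's code: it assumes the Chaumette/Corke interaction matrix, uses the Moore-Penrose pseudo-inverse (so any number of points, not only three, is accepted), and the gain is a hypothetical tuning parameter:

```python
import numpy as np

def point_jacobian(u, v, Z, f):
    # Interaction matrix for one point feature (Chaumette/Corke convention).
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f + u**2 / f), v],
        [0.0, -f / Z, v / Z, f + v**2 / f, -u * v / f, -u],
    ])

def ibvs_control(points, goal_points, depths, f, gain=0.5):
    """Camera velocity driving the current pixel features toward the goal.

    Stacks one 2x6 Jacobian per feature point and applies the classic
    IBVS law  v = gain * J^+ (p* - p), where J^+ is the pseudo-inverse
    of the stacked Jacobian.
    """
    J = np.vstack([point_jacobian(u, v, Z, f)
                   for (u, v), Z in zip(points, depths)])
    error = (np.asarray(goal_points) - np.asarray(points)).ravel()
    return gain * np.linalg.pinv(J) @ error

# When current and goal features coincide, the commanded velocity is zero.
pts = [(10.0, 0.0), (-10.0, 0.0), (0.0, 10.0)]
assert np.allclose(ibvs_control(pts, pts, [1.0, 1.0, 1.0], 500.0), 0.0)
```

Iterating this law in closed loop (apply the velocity, re-detect the features, recompute) moves the camera until the current image matches the goal image.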

  12. Desired Pixel Velocity Slides adapted from Peter Corke

  13. Simulation Slides adapted from Peter Corke

  14. Point Correspondences How to find them? Features, Markers

  15. What will you take home today? Visual Servoing Interaction Matrix Control Law Case-Study: Learning-based approach Introduction Optimal Control Principle of Optimality Bellman Equation Deriving LQR

  16. Training Deep Neural Networks for Visual Servoing Bateux et al. ICRA 2018 1. Instead of using features, compare the whole image to a given goal image a. Challenge: small convergence region due to the non-linear cost function

  17. Training Deep Neural Networks for Visual Servoing Bateux et al. ICRA 2018

  18. Training Deep Neural Networks for Visual Servoing Bateux et al. ICRA 2018

  19. What will you take home today? Visual Servoing Interaction Matrix Control Law Case-Study: Learning-based approach Introduction Optimal Control Principle of Optimality Bellman Equation Deriving LQR

  20. So far on Control

  21. Optimal Control and Reinforcement Learning from a unified point of view Optimal Control Problem

  22. Trajectory Optimization and Reinforcement Learning 1. Trajectory Optimization: find an optimal trajectory given known non-linear dynamics and a cost function 2. Reinforcement Learning: find an optimal policy under unknown dynamics, given a reward (= -cost)

  23. Principle of Optimality – Example: Graph Search problem

  24. Forward Search

  25. Forward Search

  26. Backward Search

  27. Backward Search

  28. Principle of Optimality

  29. Bellman Equation
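The backward-search example on the preceding slides can be sketched as backward induction on a graph: the cost-to-go of a node satisfies the Bellman equation V(n) = min over successors of (edge cost + V(successor)). The graph below is a made-up example, not the one from the lecture:

```python
def cost_to_go(edges, goal):
    """Backward induction on a directed acyclic graph.

    edges maps each node to a list of (successor, edge_cost) pairs.
    Returns V with V[goal] = 0 and, for every other node,
    V[n] = min over (m, c) of (c + V[m])  -- the Bellman equation.
    """
    V = {goal: 0.0}

    def value(n):
        if n not in V:
            V[n] = min(c + value(m) for m, c in edges[n])
        return V[n]

    for n in edges:
        value(n)
    return V

# Hypothetical graph: A->B (1), A->C (4), B->C (1), B->G (5), C->G (1)
edges = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 1), ("G", 5)],
         "C": [("G", 1)]}
V = cost_to_go(edges, "G")
assert V["A"] == 3.0  # optimal path A->B->C->G reuses V[B] and V[C]
```

This is the principle of optimality at work: each node's value is computed once and reused by every path passing through it, instead of re-enumerating whole paths as a forward search would.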

  30. Problem setup System Dynamics Cost function

  31. Goal

  32. Formalize Cost-to-Go / Value function
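The slide titles above elide the formulas. In standard notation (symbols are a reconstruction, not taken from the slides), the discrete-time, deterministic, finite-horizon setup reads:

```latex
% system dynamics and total cost over horizon T
x_{t+1} = f(x_t, u_t), \qquad
J = \sum_{t=0}^{T-1} c(x_t, u_t) + c_T(x_T)

% cost-to-go (value function) of a policy \pi from state x_t at time t
V^{\pi}(x_t) = \sum_{k=t}^{T-1} c\big(x_k, \pi(x_k)\big) + c_T(x_T),
\qquad x_{k+1} = f\big(x_k, \pi(x_k)\big)
```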

  33. Optimal Value function = V with lowest cost

  34. Deriving the Bellman Equation

  35. Optimal Bellman Equation Optimal Value function Optimal Policy
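In the same reconstructed notation, the optimal value function, the optimal Bellman equation, and the greedy optimal policy are:

```latex
% optimal value function: the lowest-cost value over all policies
V^*(x) = \min_{\pi} V^{\pi}(x)

% optimal Bellman equation and the policy that attains it
V^*(x_t) = \min_{u}\Big[ c(x_t, u) + V^*\big(f(x_t, u)\big) \Big], \qquad
\pi^*(x_t) = \arg\min_{u}\Big[ c(x_t, u) + V^*\big(f(x_t, u)\big) \Big]
```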

  36. Comparing Optimal Bellman and Value Function

  37. Infinite time horizon, deterministic system

  38. Infinite time horizon, deterministic system

  39. Infinite time horizon, deterministic system

  40. Finite Horizon, Stochastic system Cost function Stochastic System Dynamics

  41. Finite Horizon, Stochastic system Value function Optimal Value function Optimal Policy

  42. Finite Horizon, Stochastic system Bellman Equation Optimal Bellman Equation
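Written out in the same notation (a standard reconstruction), the finite-horizon stochastic case replaces the deterministic successor state with an expectation over the transition distribution $p(x' \mid x, u)$:

```latex
% value function of a policy \pi under stochastic dynamics
V^{\pi}_t(x) = \mathbb{E}\Big[ \sum_{k=t}^{T-1} c\big(x_k, \pi(x_k)\big) + c_T(x_T) \,\Big|\, x_t = x \Big]

% optimal Bellman equation with stochastic dynamics
V^*_t(x) = \min_{u}\Big[ c(x, u)
  + \mathbb{E}_{x' \sim p(\cdot \mid x, u)}\big[ V^*_{t+1}(x') \big] \Big]
```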

  43. Infinite Horizon, Stochastic system Combining the infinite-horizon, discrete-time formulation with the stochastic-system derivation

  44. Continuous time systems Hamilton-Jacobi-Bellman Equation
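For continuous-time dynamics $\dot x = f(x, u)$, the discrete Bellman recursion becomes a partial differential equation, the Hamilton-Jacobi-Bellman equation (standard form, notation reconstructed):

```latex
-\frac{\partial V}{\partial t}(x, t)
= \min_{u}\Big[ c(x, u) + \nabla_x V(x, t)^{\top} f(x, u) \Big],
\qquad V(x, T) = c_T(x)
```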

  45. How do you solve these equations?

  46. Linear Dynamical Systems, Quadratic cost – Linear Quadratic Regulator (LQR)

  47. Linear Dynamical Systems, Quadratic cost – Linear Quadratic Regulator (LQR)
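The LQR derivation on these slides can be sketched as the textbook backward Riccati recursion for $x_{t+1} = A x_t + B u_t$ with cost $\sum_t x_t^\top Q x_t + u_t^\top R u_t$ plus terminal cost $x_T^\top Q_f x_T$. This is a standard sketch, not the slides' exact notation:

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, T):
    """Finite-horizon discrete-time LQR via the backward Riccati recursion.

    Each P_t encodes the quadratic value function V_t(x) = x^T P_t x.
    Returns feedback gains K_0, ..., K_{T-1}; the optimal control is
    u_t = -K_t x_t.
    """
    P = Qf
    gains = []
    for _ in range(T):  # sweep backward from the terminal time
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # gains[t] is the gain applied at time t

# Scalar sanity check: for A=B=Q=R=1 the steady-state Riccati solution is
# P = (1+sqrt(5))/2, giving the gain K = (sqrt(5)-1)/2.
I = np.eye(1)
Ks = lqr_gains(I, I, I, I, I, 50)
assert abs(Ks[0][0, 0] - (np.sqrt(5) - 1) / 2) < 1e-6
```

For a long horizon the early gains converge to the steady-state (infinite-horizon) gain, which is why the scalar check above matches the closed-form fixed point of the Riccati equation.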
