Visual Servoing, Intro Optimal Control (Lecture 12)
What will you take home today?
- Visual Servoing: Interaction Matrix; Control Law; Case Study: Learning-based approach
- Introduction to Optimal Control: Principle of Optimality; Bellman Equation; Deriving LQR
Camera-Robot Configurations
Image from: Chang, W. and Wu, C., "Hand-Eye Coordination for Robotic Assembly Tasks," International Journal of Automation and Smart Technology.
Image-based visual servoing
(Figure: current image and goal image)
Camera Motion to Image Motion
(Figure: optical-flow patterns induced by each camera velocity component $v_x, v_y, v_z, \omega_x, \omega_y, \omega_z$)
Slides adapted from Peter Corke
The Image Jacobian
Notation: camera twist $(v, \omega)$; $\hat f = f/\rho$ (focal length in pixel units); pixel velocity $(\dot u, \dot v)^T$; 3-D point $(X, Y, Z)^T$ in the camera frame.

$$
\begin{pmatrix} \dot u \\ \dot v \end{pmatrix} =
\begin{pmatrix}
-\hat f/Z & 0 & u/Z & uv/\hat f & -(\hat f + u^2/\hat f) & v \\
0 & -\hat f/Z & v/Z & \hat f + v^2/\hat f & -uv/\hat f & -u
\end{pmatrix}
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
$$

Slides adapted from Peter Corke
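The image Jacobian above can be written down directly. A minimal NumPy sketch (the function name `image_jacobian` and the numeric values are illustrative, not from the slides):

```python
import numpy as np

def image_jacobian(u, v, Z, f):
    """2x6 interaction matrix for one image point.

    u, v : pixel coordinates relative to the principal point
    Z    : depth of the 3-D point in the camera frame
    f    : focal length in pixel units (f-hat in the slides)
    Maps the camera twist (vx, vy, vz, wx, wy, wz) to the
    pixel velocity (u_dot, v_dot).
    """
    return np.array([
        [-f / Z, 0.0,    u / Z, u * v / f,    -(f + u**2 / f),  v],
        [0.0,   -f / Z,  v / Z, f + v**2 / f, -u * v / f,      -u],
    ])

# Example: pure translation along the optical axis (vz > 0) makes
# points flow radially outward, proportional to (u, v)/Z.
L = image_jacobian(u=100.0, v=50.0, Z=2.0, f=800.0)
twist = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])
print(L @ twist)
```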
Optical flow Patterns
Slides adapted from Peter Corke
Image-based visual servoing
Find a camera velocity that minimizes the error between the current and goal images
(Figure: current image and goal image)
Image-based visual servoing
(Figure: current image and goal image)
$$
\begin{pmatrix} \dot u \\ \dot v \end{pmatrix} =
\underbrace{\begin{pmatrix}
-\hat f/Z & 0 & u/Z & uv/\hat f & -(\hat f + u^2/\hat f) & v \\
0 & -\hat f/Z & v/Z & \hat f + v^2/\hat f & -uv/\hat f & -u
\end{pmatrix}}_{J(u, v, Z)}
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
$$
Slides adapted from Peter Corke
Image-based visual servoing
(Figure: current image and goal image)
$$
\begin{pmatrix} \dot u_1 \\ \dot v_1 \\ \dot u_2 \\ \dot v_2 \\ \dot u_3 \\ \dot v_3 \end{pmatrix} =
\begin{pmatrix} J(u_1, v_1, Z_1) \\ J(u_2, v_2, Z_2) \\ J(u_3, v_3, Z_3) \end{pmatrix}
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
$$
Image-based visual servoing
$$
\begin{pmatrix} v_x \\ v_y \\ v_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix} =
\begin{pmatrix} J(u_1, v_1, Z_1) \\ J(u_2, v_2, Z_2) \\ J(u_3, v_3, Z_3) \end{pmatrix}^{-1}
\begin{pmatrix} \dot u_1 \\ \dot v_1 \\ \dot u_2 \\ \dot v_2 \\ \dot u_3 \\ \dot v_3 \end{pmatrix}
$$
Desired Pixel Velocity
Slides adapted from Peter Corke
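The control law above (stack the per-point Jacobians, invert, map the desired pixel motion to a camera twist) can be sketched in a few lines of NumPy. The function names, point coordinates, and gain below are illustrative; using the pseudoinverse also covers more than three points in a least-squares sense:

```python
import numpy as np

def image_jacobian(u, v, Z, f):
    # 2x6 interaction matrix for one point (same formula as above)
    return np.array([
        [-f / Z, 0.0,   u / Z, u * v / f,    -(f + u**2 / f),  v],
        [0.0,   -f / Z, v / Z, f + v**2 / f, -u * v / f,      -u],
    ])

def ibvs_twist(points, goal_points, depths, f, gain=1.0):
    """One IBVS step: camera twist driving current pixels toward the goal.

    points, goal_points : (N, 2) arrays of pixel coordinates
    depths              : (N,) estimated depths Z_i
    With N = 3 points the stacked Jacobian is 6x6 and generically
    invertible; pinv handles N > 3 as a least-squares solution.
    """
    J = np.vstack([image_jacobian(u, v, Z, f)
                   for (u, v), Z in zip(points, depths)])
    error = (points - goal_points).ravel()       # current minus desired pixels
    return -gain * np.linalg.pinv(J) @ error     # classic law: v = -lambda J^+ e

p      = np.array([[100.0, 50.0], [-80.0, 40.0], [10.0, -120.0]])
p_star = np.array([[90.0, 45.0], [-85.0, 42.0], [12.0, -110.0]])
v = ibvs_twist(p, p_star, depths=np.array([2.0, 2.0, 2.5]), f=800.0)
print(v)   # 6-vector (vx, vy, vz, wx, wy, wz)
```

To first order, applying this twist moves every tracked pixel straight toward its goal position, which is why the image-space error decays exponentially under this law.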
Simulation
Slides adapted from Peter Corke
Point Correspondences
How to find them? With image feature detectors or fiducial markers
What will you take home today?
- Visual Servoing: Interaction Matrix; Control Law; Case Study: Learning-based approach
- Introduction to Optimal Control: Principle of Optimality; Bellman Equation; Deriving LQR
Training Deep Neural Networks for Visual Servoing
Bateux et al. ICRA 2018
1. Instead of using features, use the whole image to compare to the given goal image
   a. Challenge: small convergence region due to the non-linear cost function
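The direct (featureless) cost the slide refers to compares the two images pixel by pixel. A minimal sketch of such a photometric cost (function name and toy images are illustrative, not from Bateux et al.):

```python
import numpy as np

def photometric_cost(current, goal):
    """Sum-of-squared-differences between whole images, no feature extraction.

    This direct cost is highly non-convex in the camera pose, which is
    why plain descent on it only converges from nearby poses; the
    learning-based approach instead trains a network to predict the
    relative pose from the image pair.
    """
    diff = current.astype(np.float64) - goal.astype(np.float64)
    return 0.5 * np.sum(diff ** 2)

goal = np.zeros((4, 4))
near = goal.copy()
near[1, 1] = 1.0               # small appearance change
far = np.ones((4, 4))          # large appearance change
print(photometric_cost(near, goal), photometric_cost(far, goal))
```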
Training Deep Neural Networks for Visual Servoing
Bateux et al. ICRA 2018
Training Deep Neural Networks for Visual Servoing
Bateux et al. ICRA 2018
What will you take home today?
- Visual Servoing: Interaction Matrix; Control Law; Case Study: Learning-based approach
- Introduction to Optimal Control: Principle of Optimality; Bellman Equation; Deriving LQR
So far on Control
Optimal Control and Reinforcement Learning from a unified point of view
Optimal Control Problem
Trajectory Optimization and Reinforcement Learning
1. Trajectory Optimization: find an optimal trajectory given non-linear dynamics and a cost function
2. Reinforcement Learning: find an optimal policy under unknown dynamics, given a reward (= negative cost)
Principle of Optimality – Example: Graph Search problem
Forward Search
Backward Search
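The graph-search example above illustrates the principle of optimality: the cost-to-go of a node is the cheapest edge cost plus the already-optimal cost-to-go of its successor. A small sketch on a hypothetical layered graph (node names and edge costs are made up for illustration):

```python
# Edge list of a small directed acyclic graph; "G" is the goal node.
edges = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("D", 2.0), ("E", 6.0)],
    "C": [("E", 1.0)],
    "D": [("G", 3.0)],
    "E": [("G", 1.0)],
    "G": [],
}

def cost_to_go(node, goal="G", memo=None):
    """Backward search: memoized Bellman recursion on the graph."""
    if memo is None:
        memo = {}
    if node == goal:
        return 0.0
    if node not in memo:
        memo[node] = min(c + cost_to_go(succ, goal, memo)
                         for succ, c in edges[node])
    return memo[node]

print(cost_to_go("A"))  # optimal cost A -> B -> D... vs A -> C -> E -> G
```

Once a node's cost-to-go is computed it is never revisited, which is exactly the saving that backward search obtains over enumerating all forward paths.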
Principle of Optimality
Bellman Equation
Problem setup
System Dynamics Cost function
Goal
Formalize Cost-to-Go / Value function
Optimal value function: the value function $V^*$ with the lowest cost-to-go in every state
Deriving the Bellman Equation
Optimal Bellman Equation
Optimal Value function Optimal Policy
Comparing Optimal Bellman and Value Function
Infinite time horizon, deterministic system
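For the infinite-horizon deterministic case, the optimal Bellman equation $V(s) = \min_a [\, c(s,a) + \gamma V(f(s,a)) \,]$ can be solved by fixed-point (value) iteration. A toy sketch on a 5-state chain (states, costs, and discount are made up for illustration):

```python
import numpy as np

# States 0..4 on a line, actions {-1, +1}, unit cost per step,
# state 4 is the goal and is absorbing with zero cost.
gamma = 0.9
n = 5

def step(s, a):
    # deterministic dynamics, clipped to the state space
    return min(max(s + a, 0), n - 1)

V = np.zeros(n)
for _ in range(100):                      # repeat the Bellman backup
    V_new = np.zeros(n)
    for s in range(n - 1):                # V(goal) stays 0
        V_new[s] = min(1.0 + gamma * V[step(s, a)] for a in (-1, 1))
    V = V_new

print(np.round(V, 3))  # cost-to-go shrinks as we approach the goal state
```

The backup is a contraction with factor $\gamma$, so the iterates converge to the unique fixed point $V^*$ from any initialization.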
Finite Horizon, Stochastic system
Stochastic System Dynamics Cost function
Finite Horizon, Stochastic system
Value function Optimal Value function Optimal Policy
Finite Horizon, Stochastic system
Bellman Equation Optimal Bellman Equation
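For the finite-horizon stochastic case, the optimal Bellman equation is solved by backward induction: $V_T(s)$ equals the terminal cost, and $V_t(s) = \min_a [\, c(s,a) + \mathbb{E}_{s'}[V_{t+1}(s')] \,]$. A sketch on a tiny tabular MDP (all numbers are illustrative):

```python
import numpy as np

n_s, n_a, T = 2, 2, 3
# P[a, s, s'] : transition probabilities; c[s, a] : stage cost
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.5, 0.5]]])    # action 1
c = np.array([[1.0, 2.0],                   # costs in state 0
              [0.0, 0.5]])                  # costs in state 1

V = np.zeros(n_s)                           # zero terminal cost V_T
for t in reversed(range(T)):
    # Q[s, a] = c(s, a) + E_{s'}[ V_{t+1}(s') ]
    Q = c + np.stack([P[a] @ V for a in range(n_a)], axis=1)
    V = Q.min(axis=1)                       # optimal Bellman backup

print(V)  # optimal cost-to-go at t = 0 for each start state
```

The only change from the deterministic backup is replacing the next-state value with its expectation under the transition distribution.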
Infinite Horizon, Stochastic system
Combining the infinite-horizon, discrete-time formulation with the stochastic-system derivation
Continuous time systems
Hamilton-Jacobi-Bellman Equation
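In continuous time the Bellman recursion becomes a partial differential equation. For dynamics $\dot x = f(x, u)$, running cost $c(x, u)$, and horizon $T$, the standard finite-horizon form is:

```latex
% Hamilton-Jacobi-Bellman equation
-\frac{\partial V(x, t)}{\partial t}
  = \min_{u} \Bigl[ c(x, u) + \nabla_x V(x, t)^{\top} f(x, u) \Bigr],
\qquad V(x, T) = c_T(x)
```

The bracketed term is the continuous-time analogue of the one-step Bellman backup: instantaneous cost plus the rate of change of the value along the chosen dynamics.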