Visual Servo Control, Seth Hutchinson, Georgia Institute of Technology (PowerPoint presentation)




SLIDE 1

Visual Servo Control

With contributions from

  • Nicholas Gans -- University of Texas at Dallas
  • Peter Corke -- Queensland University of Technology, Australia
  • Sourabh Bhattacharya -- Iowa State University

*includes material from the articles “Visual Servo Control, Part I: Basic Approaches,” and “Visual Servo Control, Part II: Advanced Approaches,” in the IEEE Robotics & Automation Magazine, by François Chaumette and Seth Hutchinson.

Seth Hutchinson Georgia Institute of Technology

SLIDE 2

Visual Servo: The Basic Problem

  • A camera views the scene from an initial pose, yielding the current image.
  • The desired image corresponds to the scene as viewed from the desired camera pose.
  • Determine a camera motion to move from the initial to the desired camera pose, using the time-varying image as input.

There are many variations on the problem:

  • Eye-in-hand vs. fixed camera
  • Which image features to use
  • How to specify desired images for specified tasks
  • Etc.
SLIDE 3

An Example

In this example, the coordinates of image points are the features. Blue points are current features, red points are desired features, and error vectors are shown in pink.

SLIDE 4

Coarsely Calibrated Visual Servoing of a Nonholonomic Mobile Robot Using a Central Catadioptric Vision System

Romeo Tatsambon and Andrea Cherubini --- IRISA, Rennes; Han Ul Yoon --- University of Illinois

SLIDE 5

Basic architecture for Visual Servo Control

[Block diagram: image feature extraction produces the measurements m(t), from which the features s(m(t), a) are computed; these are compared with the desired features s* to form the error e(t) = s(t) - s*, which the visual servo controller converts into a camera velocity ξ sent to the robot controller.]

At this level of abstraction, it’s remarkably similar to the architecture for any garden-variety feedback control system.

SLIDE 6

[The same block diagram, with the quantities defined:]

  • m(t) is a set of image measurements (e.g., the image coordinates of interest points, or the parameters of a set of image segments).
  • s(m(t), a) is a vector of k visual features, computed from the image.
  • a is a set of parameters that represent additional knowledge about the system (e.g., coarse camera intrinsic parameters or a 3D model of objects).
  • s* contains the desired values of the features.
  • The error is defined by e(t) = s(t) - s*.
  • ξ is the velocity command sent to the camera.

SLIDE 7

Visual Servo Control --- The Basic Idea

The aim of vision-based control schemes is to minimize an error e(t), which is typically defined by e(t) = s(m(t), a) − s*

  • m(t) is a set of image measurements (e.g., the image coordinates of interest points, or the parameters of a set of image segments).
  • s(m(t), a) is a vector of k visual features, computed from the image.
  • a is a set of parameters that represent additional knowledge about the system (e.g., camera intrinsic parameters or a 3D object model).
  • s* contains the desired values of the features.

Typically, one merely writes: e(t) = s(t) - s*

SLIDE 8

Some Basic Assumptions

There are numerous considerations when designing a visual servo system. For now, we will consider only systems that satisfy the following basic assumptions:

  • Eye-in-hand systems — the camera is mounted on the end effector of a robot and treated as a free-flying object with configuration space Q = SE(3).
  • Static (i.e., motionless) targets.
  • Purely kinematic systems — we do not consider the dynamics of camera motion, but assume that the camera can accurately execute the applied velocity control.
  • Perspective projection — the imaging geometry can be modeled as a pinhole camera.

Some or all of these may be relaxed as we progress to more advanced topics.

SLIDE 9

Designing the Control Law --- The Basic Idea

Given s, control design can be quite simple. A typical approach is to design a velocity controller, which requires the relationship between the time variation of s and the camera velocity.

  • Let the spatial velocity of the camera be denoted by ξ = (v, ω)
    ▪ v is the instantaneous linear velocity of the origin of the camera frame
    ▪ ω is the instantaneous angular velocity of the camera frame
  • The relationship between ṡ and ξ is given by ṡ = L ξ, where L ∈ ℝ^(k×6) is named the interaction matrix [Espiau, et al., 1992], or the image Jacobian [Hutchinson, Hager & Corke, 1996].

⟹ The key to visual servo --- choosing s and the control law.

SLIDE 10

Designing the Control Law (cont)

Let’s derive the relationship between ė and ξ, i.e., how does the error evolve as a function of the camera body velocity?

  • Using the previous equations, e(t) = s(t) − s* and ṡ = L ξ, we can easily obtain the relationship between the camera velocity and the rate of change of the error: ė(t) = ṡ(t) = L ξ, assuming that s* is constant.
  • The relationship between ξ and ṡ is the same as between ξ and ė. Now our problem is merely to find the control input ξ = u(t) that gives the desired error performance.

SLIDE 11

An Example

In this example, the coordinates of image points are the features. Blue points are current features, red points are desired features, and error vectors are shown in pink.

SLIDE 12
Designing the Control Law (cont)

  • In many cases, we would like to achieve an exponential decoupled decrease of the error, e(t) = e(t₀) exp(−λt)
  • This is achieved if the error obeys the ordinary differential equation ė(t) = −λe
  • Combining ė(t) = −λe and ė(t) = L ξ, we obtain L ξ = −λe
  • If we assume velocity control, i.e., u(t) = ξ, we simply solve the above to obtain u(t) = ξ = −λ L⁺ e, where L⁺ ∈ ℝ^(6×k) is chosen as the Moore-Penrose pseudo-inverse of L:

L⁺ = (LᵀL)⁻¹Lᵀ
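The control law above can be sketched in a few lines of NumPy (a sketch under assumed names; `control_law` is hypothetical, and the toy interaction matrix is random):

```python
import numpy as np

def control_law(L, e, lam=0.5):
    """Velocity command xi = -lambda * L^+ e, with L^+ the
    Moore-Penrose pseudo-inverse of the interaction matrix L."""
    return -lam * np.linalg.pinv(L) @ e

# Toy check: for a full-rank square L, the closed-loop error rate
# e_dot = L xi equals -lambda * e, the desired exponential decrease.
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 6))
e = rng.standard_normal(6)
xi = control_law(L, e, lam=0.5)
print(np.allclose(L @ xi, -0.5 * e))
```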

SLIDE 13

Practical Issues

In practice, it is impossible to know exactly the value of L or of L⁺, since these depend on measured data. The value actually used is thus an approximation, and the actual control law is given as ξ = −λ L̂⁺ e

There are several choices for L̂⁺:

  • Compute an estimate L̂ and use the pseudo-inverse of the estimate: L̂⁺ = (L̂ᵀL̂)⁻¹L̂ᵀ
  • Directly estimate L̂⁺
  • Let L̂⁺ be approximated by a constant matrix (e.g., L⁺ for the goal camera configuration)

SLIDE 14

Context – Visual Servo in the Bigger Picture

  • Learning, planning, perception and action are often tightly coupled activities.
  • Visual servo control is the coupling of perception and action — hand-eye coordination.
  • Basic visual servo controllers can serve as primitives for planning algorithms.
  • Switching between control laws is equivalent to executing a plan.
  • There are a number of analogies between human hand-eye coordination and visual servo control.

A rigorous understanding of the performance of visual servo control systems provides a foundation for sensor-based robotics.

SLIDE 15

Visual Servo Control --- Some History

Visual servo control is merely the use of computer vision data to control the motion of a robot. The first real-time visual servo systems were reported in
  ▪ Agin, 1979
  ▪ Weiss et al., 1984, 1987
  ▪ Feddema et al., 1989
In some sense, Shakey [SRI, 1966-1972 or so] was an example of a visual servo system, but with a very, very slow servo rate. In each of these, simple image features (e.g., centroids of binary objects) were used, primarily due to limitations in computation power.
SLIDE 16

Overview

This talk will focus on the control and performance issues, leaving aside the computer vision issues (e.g., feature tracking). The main issues --- how to choose s(t) and the corresponding control law:
  – Using 3D reconstruction to define s(t)
  – Using image data to directly define s(t)
  – Partitioning degrees of freedom
  – Switching between controllers

SLIDE 17

Position-Based Visual Servo Control

SLIDE 18

Position-Based Visual Servo Control

  • Computer vision data are used to compute the pose of the camera (d and R) relative to the world frame.
  • The error e(t) is defined in the pose space: d ∈ ℝ³, R ∈ SO(3).
  • The control signal ξ = (v, ω) is a camera body velocity.
  • The camera velocity ξ is specified w.r.t. the camera frame.

If the goal pose is given by d = 0, R = I, the role of the computer vision system is to provide, in real time, a measurement of the pose error.

SLIDE 19

PBVS (cont.)

If θu is the axis/angle parameterization of R, the error is given by e(t) = (d, θu), and its derivative is given by

ė = ( R  0 ; 0  L_ω(θu) ) ξ = L_pbvs(θu) ξ

in which [Malis 98]

L_ω(θu) = I − (θ/2)[u]× + (1 − sinc θ / sinc²(θ/2)) [u]×²

SLIDE 20

PBVS (cont.)

Since L_ω is nonsingular when θ ≠ 2kπ [Malis, Chaumette, Boudet 99], to achieve the error dynamics ė = −λe we can use

−λe = ė = L_pbvs(θu) ξ  →  ξ = −λ L_pbvs(θu)⁻¹ e

The motivation: the solution of the differential equation ė = −λe is a decaying exponential. That’s nice --- but how do we know that it really works? After all, e(t) is a vector, and L is not a constant matrix... This isn’t really a nice, scalar, first-order linear differential equation.
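As a sanity check on the formula for L_ω(θu), here is a small NumPy sketch (helper names are made up; `sinc` here is the unnormalized sin t / t). One known property it should satisfy: since [u]× u = 0, L_ω(θu) maps the rotation axis u to itself.

```python
import numpy as np

def skew(u):
    """Cross-product matrix [u]_x such that skew(u) @ p = u x p."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def sinc(t):
    """Unnormalized sinc: sin(t)/t (np.sinc is the normalized version)."""
    return np.sinc(t / np.pi)

def L_omega(u, theta):
    """L_w(theta u) = I - (theta/2)[u]_x + (1 - sinc(theta)/sinc^2(theta/2)) [u]_x^2"""
    ux = skew(u)
    return (np.eye(3) - (theta / 2.0) * ux
            + (1.0 - sinc(theta) / sinc(theta / 2.0) ** 2) * ux @ ux)

u = np.array([0.0, 0.0, 1.0])   # unit rotation axis
theta = 0.3                     # not a multiple of 2*pi
Lw = L_omega(u, theta)
print(abs(np.linalg.det(Lw)) > 1e-9)  # nonsingular away from theta = 2k*pi
print(np.allclose(Lw @ u, u))         # [u]_x u = 0, so L_w u = u
```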

SLIDE 21

Lyapunov Theory

Lyapunov theory provides a powerful tool for analyzing the stability of nonlinear systems.

  • Consider a nonlinear system on ℝⁿ, ẋ = f(x), where f(x) is a vector field on ℝⁿ, and suppose that f(0) = 0.
  • The origin in ℝⁿ is said to be an equilibrium point for the system.

What does this have to do with our visual servo problem?!?!

  • If we use the control law u(t) = ξ = −λ L⁺ e, and if ė = L ξ,
  • then e(t) = 0 is an equilibrium point for our visual servo system, since e = 0 → −λ L⁺ e = ξ = 0 → L ξ = ė = 0. When the error is zero, the control input is zero, thus ė is zero.

SLIDE 22

Lyapunov Theory

Lyapunov theory provides a powerful tool for analyzing the stability of nonlinear systems.

  • Consider a nonlinear system on ℝⁿ, ẋ = f(x), where f(x) is a vector field on ℝⁿ, and suppose that f(0) = 0.
  • The origin in ℝⁿ is said to be an equilibrium point for the system.

Lyapunov Functions:

  • Let V(x): ℝⁿ → ℝ be a function with continuous first partial derivatives in a neighborhood of the origin.
  • Let V be positive definite: V(0) = 0, V(x) > 0 for all x ≠ 0.
  • V is called a Lyapunov function candidate for the system.
SLIDE 23

Lyapunov Theory (cont)

THEOREM: The origin is a stable equilibrium for the system if there exists a Lyapunov function candidate V such that V̇ is negative semi-definite along solution trajectories of the system, i.e.,

V̇ = (∂V/∂x) ẋ = (∂V/∂x) f(x) ≤ 0

THEOREM: The origin is asymptotically stable if there exists a Lyapunov function candidate V such that V̇ is negative definite along solution trajectories of the system:

V̇ = (∂V/∂x) f(x) < 0

SLIDE 24

Lyapunov Theory and Visual Servo Control

The two versions of stability provide different sorts of performance guarantees:

  • Stability guarantees that the system will remain within a neighborhood of the equilibrium point, provided the initial state is sufficiently close to the equilibrium point.
  • Asymptotic stability guarantees that the system will converge to the equilibrium point, provided the initial state is sufficiently close to the equilibrium point.

In some cases, the squared norm of the system error is the simplest Lyapunov function candidate --- this is the case for many visual servo systems:

V = (1/2)‖e(t)‖²  →  V̇ = eᵀ(t) ė(t)

SLIDE 25

Lyapunov stability of PBVS

Recall our PBVS controller: −λe = ė = L_pbvs(θu) ξ → ξ = −λ L_pbvs(θu)⁻¹ e

Using the Lyapunov function V = (1/2)‖e(t)‖², we obtain

V̇ = eᵀ(t) ė(t) = eᵀ(t) L_pbvs(θu) ξ

SLIDE 26

Lyapunov stability of PBVS

Recall our PBVS controller: −λe = ė = L_pbvs(θu) ξ → ξ = −λ L_pbvs(θu)⁻¹ e

Using the Lyapunov function V = (1/2)‖e(t)‖², we obtain

V̇ = eᵀ(t) ė(t) = eᵀ(t) L_pbvs(θu) ξ = −λ eᵀ L_pbvs(θu) L_pbvs(θu)⁻¹ e = −λ ‖e(t)‖²

and we have, not surprisingly, asymptotic stability.

SLIDE 27

PBVS Example

SLIDE 28

Why not just use PBVS?

  • Feedback is computed using estimated quantities that are a function of the system calibration parameters. Thus,

V̇ = −λ eᵀ L_pbvs(θu) L̂_pbvs(θu)⁻¹ e

and we need L_pbvs(θu) L̂_pbvs(θu)⁻¹ to be positive definite.

  • Even small errors in computing the orientation of the cameras can lead to reconstruction errors that significantly impact system accuracy.
  • Position-based control requires an accurate model of the target --- a form of calibration.
  • In task space, the robot will move a minimal distance, but in the image, features may move a non-minimal distance during execution. Features may leave the field of view.

SLIDE 29

PBVS Task Example: large translation and rotation about all axes

SLIDE 30

Image-Based Visual Servo Control

SLIDE 31

Image-Based Visual Servo Control

For Image-Based Visual Servo (IBVS):

  • Features s(t) are extracted from computer vision data.
  • Camera pose is not explicitly computed.
  • The error is defined in the image feature space: e(t) = s(t) − s*
  • The control signal ξ = (v, ω) is again a camera body velocity specified w.r.t. the camera frame, but for IBVS it is computed directly using s(t).

For example, if the feature is a single image point with image plane coordinates x and y, we have s(t) = (x(t), y(t)). Since ė(t) = ṡ(t), we’ll need to know the relationship between ṡ and ξ to design a controller that achieves the error dynamics ė = −λe.

SLIDE 32

Imaging Geometry

Consider a point P with coordinates (X, Y, Z) w.r.t. the camera frame. Using perspective projection, P’s image plane coordinates are given by

x = λX/Z  and  y = λY/Z

in which λ is the camera focal length.

[Figure: the camera frame with axes x, y, z; the optical axis; a point P = (X, Y, Z) projecting to s = (x, y) on the image plane at focal length λ.]
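The projection equations are one line of code; a minimal sketch (the function name is made up for illustration):

```python
import numpy as np

def project(P, lam=1.0):
    """Perspective projection of P = (X, Y, Z) in the camera frame:
    x = lam*X/Z, y = lam*Y/Z, with lam the focal length."""
    X, Y, Z = P
    return np.array([lam * X / Z, lam * Y / Z])

s = project(np.array([0.2, -0.1, 2.0]))
print(s)  # (x, y) = (0.1, -0.05)
```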

SLIDE 33

The Interaction Matrix (for a point feature)

As an example, consider the interaction matrix for a single point with coordinates (X, Y, Z). To determine the interaction matrix for the point:

  1. Compute the time derivatives of x, y
  2. Express these time derivatives in terms of x, y, Ẋ, Ẏ, Ż and Z
  3. Find expressions for Ẋ, Ẏ, Ż in terms of ξ and X, Y, Z (i.e., eliminate X, Y)
  4. Combine equations and grind through the algebra
SLIDE 34

The Interaction Matrix (for a point feature)

Step 1: Compute the time derivatives of x, y. Recall

x = λX/Z  and  y = λY/Z

Using the quotient rule,

ẋ = λ(ZẊ − XŻ)/Z²  and  ẏ = λ(ZẎ − YŻ)/Z²

SLIDE 35

The Interaction Matrix (for a point feature)

Step 2: Express the time derivatives in terms of x, y, Ẋ, Ẏ, Ż, Z.

  • The perspective projection equations can be rewritten to give expressions for X and Y as

X = xZ/λ  and  Y = yZ/λ

  • Substitute these into the equations for ẋ, ẏ to obtain

ẋ = λẊ/Z − xŻ/Z  and  ẏ = λẎ/Z − yŻ/Z

SLIDE 36

The Interaction Matrix (for a point feature)

Step 3: Find expressions for Ẋ, Ẏ, Ż in terms of ξ and X, Y, Z. The velocity of (the fixed point) P relative to the camera frame is given by

Ṗ = −ω × P − v

which gives equations for each of Ẋ, Ẏ and Ż. Expanding Ṗ = −ω × P − v, we obtain

Ẋ = −v_x − ω_y Z + ω_z Y
Ẏ = −v_y − ω_z X + ω_x Z
Ż = −v_z − ω_x Y + ω_y X

Now it’s just algebra…
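Step 3 can be checked numerically; a sketch assuming NumPy (`point_velocity` is a hypothetical name):

```python
import numpy as np

def point_velocity(P, v, w):
    """Velocity of a fixed point P seen from the moving camera frame:
    P_dot = -w x P - v, for camera body velocity xi = (v, w)."""
    return -np.cross(w, P) - v

P = np.array([1.0, 2.0, 3.0])
v = np.array([0.1, 0.0, 0.0])  # linear velocity
w = np.array([0.0, 0.0, 0.2])  # angular velocity (about the optical axis)
P_dot = point_velocity(P, v, w)
print(P_dot)  # X_dot = -v_x - w_y*Z + w_z*Y = -0.1 + 0.4 = 0.3, etc.
```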

SLIDE 37

The Interaction Matrix (for a point feature)

Step 4: Combine equations and grind through the algebra. Combining equations, we obtain

ẋ = −(λ/Z) v_x + (x/Z) v_z + (xy/λ) ω_x − ((λ² + x²)/λ) ω_y + y ω_z
ẏ = −(λ/Z) v_y + (y/Z) v_z + ((λ² + y²)/λ) ω_x − (xy/λ) ω_y − x ω_z

These equations can be written nicely in matrix form.

SLIDE 38

The Interaction Matrix (for a point feature)

In matrix form, we obtain:

( ẋ )   ( −λ/Z     0      x/Z     xy/λ      −(λ²+x²)/λ    y  )
( ẏ ) = (  0     −λ/Z     y/Z   (λ²+y²)/λ     −xy/λ      −x ) ξ

SLIDE 39

The Interaction Matrix (for a point feature)

In matrix form, we obtain:

( ẋ )   ( −λ/Z     0      x/Z     xy/λ      −(λ²+x²)/λ    y  )
( ẏ ) = (  0     −λ/Z     y/Z   (λ²+y²)/λ     −xy/λ      −x ) ξ

This can be written more compactly as ṡ = L(s, Z) ξ. The matrix L is known as the interaction matrix [Espiau, et al., 1992] or the image Jacobian. Weiss et al. [1987] used feature sensitivity matrix, while Feddema et al. [1989] merely used Jacobian to describe this matrix.
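The 2×6 matrix above translates directly into code; a sketch with a hypothetical `interaction_matrix` helper:

```python
import numpy as np

def interaction_matrix(x, y, Z, lam=1.0):
    """2x6 interaction matrix L(s, Z) for a point feature s = (x, y)
    at depth Z, so that s_dot = L @ xi with xi = (v, w)."""
    return np.array([
        [-lam / Z, 0.0, x / Z, x * y / lam, -(lam**2 + x**2) / lam, y],
        [0.0, -lam / Z, y / Z, (lam**2 + y**2) / lam, -x * y / lam, -x],
    ])

lam = 1.0
P = np.array([0.2, -0.1, 2.0])               # point in the camera frame
x, y = lam * P[0] / P[2], lam * P[1] / P[2]  # its projection
L = interaction_matrix(x, y, P[2], lam)
print(L.shape)  # (2, 6)
```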

SLIDE 40

The Null Space of the Interaction Matrix

The null space of this interaction matrix is spanned by the four vectors:

(x, y, λ, 0, 0, 0)ᵀ
(0, 0, 0, x, y, λ)ᵀ
(xyZ, −(x²+λ²)Z, λyZ, −λ², 0, xλ)ᵀ
((y²+λ²)Z, −xyZ, −λxZ, 0, −λ², yλ)ᵀ

SLIDE 41

The Null Space of the Interaction Matrix

The null space of this interaction matrix is spanned by the four vectors:

(x, y, λ, 0, 0, 0)ᵀ
(0, 0, 0, x, y, λ)ᵀ
(xyZ, −(x²+λ²)Z, λyZ, −λ², 0, xλ)ᵀ
((y²+λ²)Z, −xyZ, −λxZ, 0, −λ², yλ)ᵀ

Intuitively, this basis of the null space corresponds to

  • Translation along a projection ray
  • Rotation about a projection ray
  • Translation along the camera y-axis, keeping the camera pointed in the correct direction using rotational motions
  • Rotation about the camera y-axis, keeping the camera pointed in the correct direction using linear motion

These are the point motions that cannot be “seen” by the camera.
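These claims are easy to verify numerically; a sketch that restates the interaction-matrix rows and checks that all four basis vectors are annihilated:

```python
import numpy as np

x, y, Z, lam = 0.3, -0.2, 2.0, 1.0

# Interaction matrix for a point feature (from the previous slides).
L = np.array([
    [-lam / Z, 0.0, x / Z, x * y / lam, -(lam**2 + x**2) / lam, y],
    [0.0, -lam / Z, y / Z, (lam**2 + y**2) / lam, -x * y / lam, -x],
])

# The four null-space basis vectors, as columns of N.
N = np.column_stack([
    [x, y, lam, 0.0, 0.0, 0.0],                               # translate along ray
    [0.0, 0.0, 0.0, x, y, lam],                               # rotate about ray
    [x * y * Z, -(x**2 + lam**2) * Z, lam * y * Z, -lam**2, 0.0, x * lam],
    [(y**2 + lam**2) * Z, -x * y * Z, -lam * x * Z, 0.0, -lam**2, y * lam],
])

print(np.allclose(L @ N, 0.0))       # all four motions are unseen: L v = 0
print(np.linalg.matrix_rank(N) == 4)  # and they are linearly independent
```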

SLIDE 42

The Interaction Matrix for Multiple Image Points

  • Since L(s, Z) has a nonzero null space, we cannot control all six degrees of freedom of the camera motion using a single image point.
  • One solution is to simply use multiple image points.
  • In this case, we merely stack the interaction matrices to obtain

ṡ = ( ṡ₁(t) ; ⋮ ; ṡₙ(t) ) = ( L₁(s₁, Z₁) ; ⋮ ; Lₙ(sₙ, Zₙ) ) ξ

  • Using this approach, three points provide sufficient information to control the camera’s six degrees of freedom.
  • It is required to know the depth Zᵢ of each point (or at least an estimate).
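Stacking is a one-liner with `np.vstack`; a sketch (helper names are made up):

```python
import numpy as np

def interaction_matrix(x, y, Z, lam=1.0):
    """2x6 interaction matrix for one image point at depth Z."""
    return np.array([
        [-lam / Z, 0.0, x / Z, x * y / lam, -(lam**2 + x**2) / lam, y],
        [0.0, -lam / Z, y / Z, (lam**2 + y**2) / lam, -x * y / lam, -x],
    ])

def stacked_interaction_matrix(points, depths, lam=1.0):
    """Stack one 2x6 block per point: s_dot = L @ xi, L in R^(2n x 6)."""
    return np.vstack([interaction_matrix(x, y, Z, lam)
                      for (x, y), Z in zip(points, depths)])

pts = [(0.1, 0.1), (-0.2, 0.1), (0.1, -0.3)]  # three image points
depths = [1.0, 1.5, 2.0]                      # assumed known depths
L = stacked_interaction_matrix(pts, depths)
print(L.shape, np.linalg.matrix_rank(L))      # (6, 6) and, generically, rank 6
```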
SLIDE 43

Proportional Image-Based Control

As before, to achieve the error dynamics ė = −λe:

−λe = ė = ṡ = L(s, Z) ξ  →  ξ = −λ L⁺(s, Z) e

in which L⁺ = (LᵀL)⁻¹Lᵀ. Using the Lyapunov function V = (1/2)‖e‖², we obtain

V̇ = eᵀė = eᵀLξ = −λ eᵀ L L⁺ e

We have asymptotic stability when the matrix L L⁺ is positive definite. Unfortunately, this condition is rarely achieved, e.g., when dim s > 6. More on this a bit later…
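Putting the pieces together, a toy simulated IBVS iteration (a sketch, not the authors' implementation; depths are frozen during integration, which is an approximation, and all names are made up):

```python
import numpy as np

def interaction_matrix(x, y, Z, lam=1.0):
    """2x6 interaction matrix for one image point at depth Z."""
    return np.array([
        [-lam / Z, 0.0, x / Z, x * y / lam, -(lam**2 + x**2) / lam, y],
        [0.0, -lam / Z, y / Z, (lam**2 + y**2) / lam, -x * y / lam, -x],
    ])

def stack_L(s, depths):
    return np.vstack([interaction_matrix(s[2 * i], s[2 * i + 1], depths[i])
                      for i in range(len(depths))])

gain, dt = 0.5, 0.05
depths = np.array([1.0, 1.5, 2.0])                   # assumed known depths
s_star = np.array([0.1, 0.1, -0.2, 0.1, 0.1, -0.3])  # desired features
s = s_star + 0.05 * np.array([1.0, -1.0, 2.0, 1.0, -1.0, 2.0])

err0 = np.linalg.norm(s - s_star)
for _ in range(200):
    e = s - s_star
    L = stack_L(s, depths)
    xi = -gain * np.linalg.pinv(L) @ e  # xi = -lambda L^+ e
    s = s + dt * (L @ xi)               # Euler step of s_dot = L xi
err = np.linalg.norm(s - s_star)
print(err < err0)  # V = 0.5*||e||^2 decreases along the trajectory
```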

SLIDE 44

IBVS Task Example

Large Translation and Rotation About All Axes