

SLIDE 1

Humanoid Robotics Least Squares

Maren Bennewitz

SLIDE 2

Goal of This Lecture

§ Introduction to least squares
§ Apply it yourself for odometry calibration; later in the lecture: camera and whole-body self-calibration of humanoids
§ Odometry: use data from motion sensors to estimate the change in position over time
§ Robots typically execute motion commands inaccurately; systematic errors might occur, e.g., due to wear
SLIDE 3

Motion Drift

Use odometry calibration with least squares to reduce such systematic errors

SLIDE 4

Least Squares in General

§ Approach for computing a solution for an overdetermined system
§ Linear system
§ More independent equations than unknowns, i.e., no exact solution exists

SLIDE 5

Least Squares in General

§ Approach for computing a solution for an overdetermined system
§ Linear system
§ More independent equations than unknowns, i.e., no exact solution exists
§ Minimizes the sum of the squared errors in the equations
§ Developed by Carl Friedrich Gauss in 1795 (he was 18 years old)
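A minimal numerical sketch of this idea (the system below is made up for illustration): three independent equations in two unknowns have no exact solution, but least squares finds the x minimizing the sum of squared residuals.

```python
import numpy as np

# Overdetermined linear system A x = y: three equations, two unknowns
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.1, 1.9, 3.1])   # slightly inconsistent "measurements"

# lstsq minimizes ||A x - y||^2 over x
x, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
```

No x satisfies all three equations exactly, yet the squared-error minimizer is unique here because the columns of A are independent.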

SLIDE 6

Problem Definition

§ Given a system described by a set of n observation functions f_i(x), i = 1, ..., n
§ Let
§ x be the state vector (unknown)
§ z_i be a measurement of the state x
§ f_i(x) be a function that maps x to a predicted measurement
§ Given n noisy measurements z_1, ..., z_n about the state x
§ Goal: Estimate the state x* that best explains the measurements

SLIDE 7

Graphical Explanation

[Figure: the unknown state x, the predicted measurements f_i(x), and the real measurements z_i]

SLIDE 8

Example

§ x: position of a 3D feature in the world
§ f_i(x): coordinate of the 3D feature projected into the camera images (prediction)
§ Estimate the most likely 3D position of the feature based on the predicted image projections and the actual measurements

SLIDE 9

Error Function

§ The error is typically the difference between the predicted and the actual measurement: e_i(x) = z_i − f_i(x)
§ Assumption: The error has zero mean and is normally distributed
§ Gaussian error with information matrix Ω_i
§ The squared error of a measurement, e_i(x)^T Ω_i e_i(x), depends on the state and is a scalar

SLIDE 10

Goal: Find the Minimum

§ Find the state x* that minimizes the error over all measurements:
x* = argmin_x F(x), with F(x) = Σ_i e_i(x)^T Ω_i e_i(x)
(F(x): global error, a scalar; e_i^T Ω_i e_i: squared error terms, scalars; e_i(x): error terms, vectors)

SLIDE 11

Goal: Find the Minimum

§ Find the state x* that minimizes the error over all measurements
§ Possible solution: compute the Jacobian of the global error function and find its zeros
§ Typically non-linear functions, no closed-form solution
§ Use a numerical approach

SLIDE 12

Assumption

§ A “good” initial guess is available
§ The error functions are “smooth” in the neighborhood of the (hopefully global) minima
§ Then: Solve the problem by iterative local linearizations

SLIDE 13

Solve via Iterative Local Linearizations

§ Linearize the error terms around the current solution/initial guess
§ Compute the first derivative of the squared error function
§ Set it to zero and solve the linear system
§ Obtain the new state (that is hopefully closer to the minimum)
§ Iterate

SLIDE 14

Linearizing the Error Function

Approximate the error functions around an initial guess x via Taylor expansion:
e_i(x + Δx) ≈ e_i(x) + J_i Δx, with the Jacobian J_i = ∂e_i(x)/∂x

SLIDE 15

Reminder: Jacobian Matrix

§ Given a vector-valued function f(x) mapping ℝ^n to ℝ^m
§ The Jacobian matrix is defined as the m×n matrix of first partial derivatives J = [∂f_i/∂x_j], i.e., row i contains the partial derivatives of f_i wrt. x_1, ..., x_n

SLIDE 16

Reminder: Jacobian Matrix

§ Orientation of the tangent plane wrt the vector-valued function at a given point
§ Generalizes the gradient of a scalar-valued function
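As a concrete sketch (the function f below is made up for illustration), a Jacobian can always be approximated numerically with central differences and checked against the analytic one:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    # Finite-difference Jacobian: column j approximates the partial
    # derivatives of f with respect to x_j via central differences.
    cols = [(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(len(x))]
    return np.stack(cols, axis=1)

f = lambda v: np.array([v[0] * v[1], np.sin(v[0]), v[1] ** 2])
x = np.array([0.5, 2.0])

J = jacobian_fd(f, x)               # 3x2 matrix: m outputs, n inputs
J_true = np.array([[x[1], x[0]],    # analytic Jacobian for comparison
                   [np.cos(x[0]), 0.0],
                   [0.0, 2 * x[1]]])
```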

SLIDE 17

Squared Error

§ With the previous linearization, we can fix x and carry out the minimization in the increments Δx
§ We replace the Taylor expansion in the squared error terms:
e_i(x + Δx)^T Ω_i e_i(x + Δx) ≈ (e_i(x) + J_i Δx)^T Ω_i (e_i(x) + J_i Δx)

SLIDE 18

Squared Error

§ With the previous linearization, we can fix x and carry out the minimization in the increments Δx
§ We use the Taylor expansion in the squared error terms:
e_i(x + Δx)^T Ω_i e_i(x + Δx) ≈ (e_i(x) + J_i Δx)^T Ω_i (e_i(x) + J_i Δx)
= e_i^T Ω_i e_i + e_i^T Ω_i J_i Δx + Δx^T J_i^T Ω_i e_i + Δx^T J_i^T Ω_i J_i Δx

SLIDE 19

Squared Error (cont.)

§ All summands are scalar, so the transposition of a summand has no effect
§ By grouping similar terms, we obtain:
e_i^T Ω_i e_i + 2 e_i^T Ω_i J_i Δx + Δx^T J_i^T Ω_i J_i Δx

SLIDE 20

Global Error

§ The global error is the sum of the squared error terms corresponding to the individual measurements: F(x) = Σ_i e_i(x)^T Ω_i e_i(x)
§ Form a new expression that approximates the global error in the neighborhood of the current solution x:
F(x + Δx) ≈ Σ_i (e_i^T Ω_i e_i + 2 e_i^T Ω_i J_i Δx + Δx^T J_i^T Ω_i J_i Δx)

SLIDE 21

Global Error (cont.)

F(x + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx

with

c = Σ_i e_i^T Ω_i e_i,  b^T = Σ_i e_i^T Ω_i J_i,  H = Σ_i J_i^T Ω_i J_i

SLIDE 22

Quadratic Form

§ Thus, we can write the global error as a quadratic form in Δx:
F(x + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx
§ We need to compute the derivative of F(x + Δx) wrt. Δx (given x)
SLIDE 23

Deriving a Quadratic Form

§ Assume a quadratic form f(x) = x^T H x + b^T x
§ The first derivative is ∂f/∂x = (H + H^T) x + b

See: The Matrix Cookbook, Section 2.4.2
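This derivative rule can be sanity-checked numerically; the matrices below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(3, 3))        # not necessarily symmetric
b = rng.normal(size=3)
x = rng.normal(size=3)

f = lambda v: v @ H @ v + b @ v    # quadratic form f(x) = x^T H x + b^T x

# central-difference gradient of f at x
eps = 1e-6
grad_num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
grad_ana = (H + H.T) @ x + b       # analytic rule (Matrix Cookbook, Sec. 2.4.2)
```

Note that for a symmetric H, as in the global error above, (H + H^T) x reduces to 2 H x.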

SLIDE 24

Quadratic Form

§ Global error as quadratic form in Δx: F(x + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx
§ The derivative of the approximated global error wrt. Δx is then (H is symmetric):
∂F(x + Δx)/∂Δx ≈ 2 b + 2 H Δx

SLIDE 25

Minimizing the Quadratic Form

§ Derivative of the approximated global error: ∂F(x + Δx)/∂Δx ≈ 2 b + 2 H Δx
§ Setting it to zero leads to 0 = 2 b + 2 H Δx*
§ Which leads to the linear system H Δx* = −b
§ The solution for the increment is Δx* = −H⁻¹ b

SLIDE 26

Gauss-Newton Solution

Iterate the following steps:

§ Linearize around x and compute the error e_i(x) and the Jacobian J_i for each measurement
§ Compute the terms for the linear system: b = Σ_i J_i^T Ω_i e_i and H = Σ_i J_i^T Ω_i J_i
§ Solve the linear system H Δx* = −b
§ Update the state: x ← x + Δx*
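The iteration above can be sketched in a few lines. The setup below (estimating a 2D position from range measurements to known beacons, with an identity information matrix) is entirely made up for illustration:

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # known landmarks
z = np.array([2.24, 2.23, 2.83])   # noisy range measurements to the beacons
Omega = np.eye(3)                  # information matrix (identity here)

def f(x):
    # predicted measurements: distances from state x to each beacon
    return np.linalg.norm(beacons - x, axis=1)

def jacobian(x):
    # Jacobian of e(x) = f(x) - z: row i is (x - beacon_i)^T / ||x - beacon_i||
    diff = x - beacons
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

x = np.array([1.0, 1.0])           # initial guess
for _ in range(20):                # Gauss-Newton iterations
    e = f(x) - z                   # error terms (sign convention: f - z)
    J = jacobian(x)
    H = J.T @ Omega @ J            # H = sum_i J_i^T Omega_i J_i
    b = J.T @ Omega @ e            # b = sum_i J_i^T Omega_i e_i
    dx = np.linalg.solve(H, -b)    # solve H dx = -b
    x = x + dx                     # state update
    if np.linalg.norm(dx) < 1e-10:
        break
```

With the "good" initial guess near the true position, the loop converges in a handful of iterations.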

SLIDE 27

How to Efficiently Solve the Linear System?

§ Linear system H Δx* = −b
§ Can be solved by matrix inversion (in theory)
§ In practice:
§ Cholesky factorization
§ QR decomposition
§ Iterative methods such as conjugate gradients (for large systems)

SLIDE 28

Cholesky Decomposition for Solving a Linear System

§ H is symmetric and positive definite
§ Solve H Δx* = −b
§ Cholesky leads to H = L L^T, with L being a lower triangular matrix
§ Solve L y = −b first
§ and then L^T Δx* = y
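A small sketch of the two triangular solves (H and b below are made up; any symmetric positive definite H works):

```python
import numpy as np

H = np.array([[4.0, 2.0],
              [2.0, 3.0]])         # symmetric and positive definite
b = np.array([1.0, 1.0])

L = np.linalg.cholesky(H)          # H = L L^T, L lower triangular
# Step 1: solve L y = -b (forward substitution)
y = np.linalg.solve(L, -b)
# Step 2: solve L^T dx = y (back substitution)
dx = np.linalg.solve(L.T, y)
```

np.linalg.solve is a generic solver; a dedicated triangular solver (e.g., scipy.linalg.solve_triangular) would additionally exploit the structure of L.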

SLIDE 29

Gauss-Newton Summary

Method to minimize a squared error:

§ Start with an initial guess
§ Linearize the individual error functions
§ This leads to a quadratic form
§ Setting its derivative to zero leads to a linear system
§ Solving the linear system leads to a state update

§ Iterate

SLIDE 30

Example: Odometry Calibration

§ Odometry measurements u_i
§ Eliminate systematic errors through calibration
§ Assumption: Ground truth odometry u_i* is available
§ Ground truth by motion capture, scan-matching, or a SLAM system

SLIDE 31

Example: Odometry Calibration

§ There is a function f_i(x) that, given some parameters x, returns a corrected odometry as follows:
f_i(x) = [ x_11 x_12 x_13 ; x_21 x_22 x_23 ; x_31 x_32 x_33 ] u_i
§ We need to find the parameters x

SLIDE 32

Odometry Calibration (cont.)

§ The state vector is x = (x_11, x_12, x_13, x_21, ..., x_33)^T
§ The error function is e_i(x) = u_i* − [ x_11 x_12 x_13 ; x_21 x_22 x_23 ; x_31 x_32 x_33 ] u_i
§ Accordingly, its Jacobian is the 3×9 matrix J_i = − [ u_i^T 0 0 ; 0 u_i^T 0 ; 0 0 u_i^T ]

The Jacobian does not depend on x. Why? What are the consequences? e is linear, no need to iterate!
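Because the error is linear in the parameters, calibration reduces to a single linear least squares solve. A synthetic sketch (the distortion matrix, noise level, and sample count are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
X_true = np.array([[1.10, 0.02, 0.00],      # unknown systematic distortion
                   [-0.01, 0.90, 0.00],
                   [0.00, 0.00, 1.05]])

U = rng.normal(size=(50, 3))                            # raw odometry u_i
U_gt = U @ X_true.T + 0.001 * rng.normal(size=(50, 3))  # ground truth u_i*

# Each measurement gives u_i*^T ≈ u_i^T X^T, so the nine parameters
# follow from one linear least squares solve -- no iteration needed.
X_est, *_ = np.linalg.lstsq(U, U_gt, rcond=None)
X_est = X_est.T
```

Applying X_est to raw odometry then removes the systematic error, which also answers the first question on the next slide: for perfect odometry, X_est is the identity.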

SLIDE 33

Questions

§ What do the parameters look like if the odometry is perfect?
§ How many measurements are at least needed to find a solution for the calibration problem?

SLIDE 34

Reminder: Rotation Matrix

§ 3D rotations along the main axes, e.g., about the z-axis:
R_z(θ) = [ cos θ  −sin θ  0 ; sin θ  cos θ  0 ; 0  0  1 ]
§ IMPORTANT: Rotations are not commutative!
§ The inverse is the transpose (can be computed efficiently): R⁻¹ = R^T
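A quick numerical check of the inverse-is-transpose property (the angle is arbitrary):

```python
import numpy as np

theta = 0.7                         # arbitrary angle in radians
c, s = np.cos(theta), np.sin(theta)
Rx = np.array([[1.0, 0.0, 0.0],     # rotation about the x-axis
               [0.0, c, -s],
               [0.0, s, c]])

orthogonal = np.allclose(Rx.T @ Rx, np.eye(3))              # R^T R = I
inverse_is_transpose = np.allclose(np.linalg.inv(Rx), Rx.T)
```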

SLIDE 35

Matrices to Represent Affine Transformations

§ Describe a 3D transformation via a 4×4 matrix:
T = [ R  t ; 0  1 ], with rotation matrix R and translation vector t
§ Such transformation matrices are used to describe transforms between poses in the world

SLIDE 36

Example: Chaining Transformations

§ Matrix A represents the pose of a robot in the world frame
§ Matrix B represents the position of a sensor on the robot, in the robot frame
§ The sensor perceives an object at a given location p, in its own frame
§ Where is the object in the global frame?

SLIDE 37

Example: Chaining Transformations

§ Matrix A represents the pose of a robot in the world frame
§ Matrix B represents the position of a sensor on the robot, in the robot frame
§ The sensor perceives an object at a given location p, in its own frame
§ Where is the object in the global frame?
§ Bp gives the pose of the object wrt the robot

SLIDE 38

Example: Chaining Transformations

§ Matrix A represents the pose of a robot in the world frame
§ Matrix B represents the position of a sensor on the robot, in the robot frame
§ The sensor perceives an object at a given location p, in its own frame
§ Where is the object in the global frame?
§ Bp gives the pose of the object wrt the robot
§ ABp gives the pose of the object wrt the world

SLIDE 39

Summary

§ Technique to minimize squared error functions
§ Gauss-Newton is an iterative approach for solving non-linear problems
§ Uses linearization (approximation!)
§ Popular method in many disciplines
§ Exercise: Apply least squares for odometry calibration
§ Next lectures: Application of least squares to camera and whole-body self-calibration

SLIDE 40

Literature

Least Squares and Gauss-Newton

§ Basically every textbook on numerical calculus or optimization
§ Wikipedia (for a brief summary)
§ Part of the slides are based on the course on Robot Mapping by Cyrill Stachniss