SLIDE 1

Humanoid Robotics Compact Course on Linear Algebra

Padmaja Kulkarni, Maren Bennewitz

SLIDE 2

Vectors

§ Arrays of numbers § Vectors represent a point in an n-dimensional space
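As a minimal illustration (a NumPy sketch, not part of the slides; the values are arbitrary):

```python
import numpy as np

# A vector is just an array of numbers; here, a point in 3-dimensional space
p = np.array([1.0, 2.0, 3.0])
print(p.shape)  # (3,)
```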

SLIDE 3

Vectors: Scalar Product

§ Scalar-vector product § Changes the length of the vector, but not its direction
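A quick check of this claim (a NumPy sketch with an arbitrary example vector, not from the slides):

```python
import numpy as np

v = np.array([3.0, 4.0])
w = 2.0 * v  # scalar-vector product

# The length is scaled by the scalar (norm 5 -> 10) ...
print(np.linalg.norm(w))
# ... but the direction (unit vector) is unchanged
print(np.allclose(w / np.linalg.norm(w), v / np.linalg.norm(v)))
```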

SLIDE 4

Vectors: Sum

§ Sum of vectors (is commutative) § Can be visualized as “chaining” the vectors
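Commutativity can be verified directly (a NumPy sketch with arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])

# Chaining a then b ends at the same point as chaining b then a
print(np.array_equal(a + b, b + a))  # True
```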

SLIDE 5

Vectors: Dot Product

§ Inner product of vectors (yields a scalar) § If x · y = 0, the two vectors are orthogonal

SLIDE 6

Vectors: Dot Product

§ Inner product of vectors (yields a scalar) § If one of the vectors, e.g., y, has unit length, the product x · y returns the length of the projection of x along the direction of y § If x · y = 0, the two vectors are orthogonal
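Both properties in a small NumPy sketch (example vectors chosen for illustration, not from the slides):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])  # unit length

# Since ||y|| = 1, x . y is the length of the projection of x onto y
print(np.dot(x, y))  # 3.0

# Orthogonal vectors have dot product 0
print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 5.0])))  # 0.0
```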
SLIDE 7

Vectors: Linear (In)Dependence

§ A vector y is linearly dependent on x1, …, xk if y = a1 x1 + … + ak xk for some coefficients a1, …, ak

SLIDE 8

Vectors: Linear (In)Dependence

§ A vector y is linearly dependent on x1, …, xk if y = a1 x1 + … + ak xk for some coefficients a1, …, ak

SLIDE 9

Vectors: Linear (In)Dependence

§ A vector y is linearly dependent on x1, …, xk if y = a1 x1 + … + ak xk § If there exist no coefficients a1, …, ak such that y = a1 x1 + … + ak xk, then y is independent of x1, …, xk
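Dependence can be tested numerically via the matrix rank: a vector is dependent on a set exactly when adding it does not increase the rank. A NumPy sketch with arbitrary example vectors (not from the slides):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
y_dep = 2.0 * x1 - 3.0 * x2          # a linear combination -> dependent
y_indep = np.array([0.0, 0.0, 1.0])  # cannot be built from x1, x2

rank_base = np.linalg.matrix_rank(np.column_stack([x1, x2]))
# Dependent vector: rank stays the same
print(np.linalg.matrix_rank(np.column_stack([x1, x2, y_dep])) == rank_base)    # True
# Independent vector: rank grows
print(np.linalg.matrix_rank(np.column_stack([x1, x2, y_indep])) == rank_base)  # False
```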

SLIDE 10

Matrices

§ A matrix is written as a table of values § 1st index refers to the row § 2nd index refers to the column
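The indexing convention in a NumPy sketch (note that NumPy is 0-based, while the slides use 1-based indices):

```python
import numpy as np

A = np.array([[11, 12, 13],
              [21, 22, 23]])
# 1st index = row, 2nd index = column
print(A[0, 2])  # 13: row 0, column 2
print(A[1, 0])  # 21: row 1, column 0
```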


SLIDE 11

Matrices as Collections of Vectors

§ Column vectors

SLIDE 12

Matrices as Collections of Vectors

§ Row vectors

SLIDE 13

Important Matrix Operations

§ Multiplication by a scalar § Sum (commutative, associative) § Multiplication by a vector § Product (not commutative) § Inversion (square, full rank) § Transposition

SLIDE 14

Scalar Multiplication & Sum

§ In the scalar multiplication, every element of the vector or matrix is multiplied with the scalar § The sum of two matrices is a matrix consisting of the pair-wise sums of the individual entries
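Both operations, element by element, in a NumPy sketch (arbitrary example matrices):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[10.0, 20.0], [30.0, 40.0]])

print(3.0 * A)  # every element multiplied with the scalar
print(A + B)    # pair-wise sums of the individual entries
```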

SLIDE 15

Matrix Vector Product

§ The ith component of c = Ax is the dot product of the ith row vector of A with x § The vector c = Ax is a linear combination of the column vectors of A, with the entries of x as coefficients
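Both interpretations checked numerically (a NumPy sketch with an arbitrary 2×2 example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])
c = A @ x

# View 1: ith component = dot product of the ith row of A with x
print(np.allclose(c, [A[0] @ x, A[1] @ x]))             # True
# View 2: c = linear combination of the columns of A, coefficients from x
print(np.allclose(c, x[0] * A[:, 0] + x[1] * A[:, 1]))  # True
```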

SLIDE 16

Matrix Matrix Product

§ The product C = AB can be defined through

§ the dot product of the row vectors of A with the column vectors of B § the linear combination of the columns of A scaled by the coefficients of the columns of B
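Both definitions verified on random matrices (a NumPy sketch, not from the slides):

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(3, 2))
B = np.random.default_rng(1).normal(size=(2, 4))
C = A @ B

# Definition 1: entry (i, j) = dot product of row i of A and column j of B
print(np.allclose(C[1, 2], A[1] @ B[:, 2]))  # True
# Definition 2: column j of C = linear combination of the columns of A,
# scaled by the coefficients of column j of B
print(np.allclose(C[:, 3], B[0, 3] * A[:, 0] + B[1, 3] * A[:, 1]))  # True
```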

SLIDE 17

Matrix Matrix Product

§ If we consider the second interpretation, we see that the columns of C = AB are the “global transformations” of the columns of B through A

§ All the interpretations made for the matrix vector product hold

SLIDE 18

Inverse

§ If A is a square matrix of full rank, then there is a unique matrix A⁻¹ such that A A⁻¹ = A⁻¹ A = I holds § The ith row of A and the jth column of A⁻¹ are

§ orthogonal (if i ≠ j) § or their dot product is 1 (if i = j)
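This row/column property is just A A⁻¹ = I read entry by entry; a NumPy sketch with an arbitrary full-rank example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # square, full rank
Ainv = np.linalg.inv(A)

# Row i of A and column j of A^-1: dot product 1 if i == j, else 0
print(np.allclose(A[0] @ Ainv[:, 0], 1.0))  # True (i == j)
print(np.allclose(A[0] @ Ainv[:, 1], 0.0))  # True (orthogonal, i != j)
```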

SLIDE 19

Matrix Inversion

§ The ith column of A⁻¹ can be found by solving the following linear system: A xᵢ = eᵢ, where eᵢ is the ith column of the identity matrix
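The column-by-column procedure as a NumPy sketch (arbitrary example matrix, result compared against the library inverse):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
n = A.shape[0]
I = np.eye(n)

# Solve A x_i = e_i for each column e_i of the identity matrix,
# then stack the solutions as the columns of A^-1
Ainv = np.column_stack([np.linalg.solve(A, I[:, i]) for i in range(n)])
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```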
SLIDE 20

Linear Systems (1)

§ A set of linear equations A x = b § Solvable by Gaussian elimination (as taught in school) § Many efficient solvers exist, e.g., conjugate gradients, sparse Cholesky decomposition
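Solving a small system with a direct solver (a NumPy sketch; the system is an arbitrary example):

```python
import numpy as np

# 3x + y = 9
#  x + 2y = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)     # direct dense solver
print(x)                      # [2. 3.]
print(np.allclose(A @ x, b))  # True
```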

SLIDE 21

Linear Systems (2)

Notes:

§ Many efficient solvers exist, e.g., conjugate gradients, sparse Cholesky decomposition § One can obtain a reduced system A′x = b′ by considering the matrix [A b] and suppressing all the rows that are linearly dependent § Let A′x = b′ be the reduced system, with A′ of size n′×m and b′ of size n′×1 § The system might be either overdetermined (n′ > m) or underconstrained (n′ < m)
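The size n′ of the reduced system is the rank of the augmented matrix [A b]; a NumPy sketch with one deliberately redundant row (arbitrary example, not from the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],   # = 2 * row 0, linearly dependent
              [0.0, 1.0]])
b = np.array([3.0, 6.0, 1.0])

# n' = rank of [A | b]; dependent rows do not contribute
n_prime = np.linalg.matrix_rank(np.column_stack([A, b]))
print(n_prime)  # 2: one of the three rows was redundant
```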

SLIDE 22

Overdetermined Systems

§ More independent equations than variables § An overdetermined system does not admit an exact solution § “Least-squares” problem: find the x that minimizes ‖Ax − b‖²
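A least-squares solve in NumPy (arbitrary example: 3 equations, 2 unknowns, no exact solution):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# x minimizes ||A x - b||^2 (here: x = [1/3, 1/3])
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```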

SLIDE 23

Determinant (det)

§ Only defined for square matrices § The inverse of A exists if and only if det(A) ≠ 0 § For 2×2 matrices: let A = [[a11, a12], [a21, a22]], then det(A) = a11 a22 − a12 a21 § For 3×3 matrices, the Sarrus rule holds: det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a11 a23 a32 − a12 a21 a33
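The 2×2 formula checked against the library routine (a NumPy sketch, arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 2x2 formula: a11*a22 - a12*a21
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
print(det_formula)          # -2.0
print(np.linalg.det(A))     # -2.0 (up to floating-point rounding)
```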

SLIDE 24

Determinant

§ For general n×n matrices? § Let A_ij be the submatrix obtained from A by deleting the ith row and the jth column § Rewrite the determinant for 3×3 matrices: det(A) = a11 det(A_11) − a12 det(A_12) + a13 det(A_13)

SLIDE 25

Determinant

§ For general n×n matrices? § Let C_ij = (−1)^(i+j) det(A_ij) be the (i,j)-cofactor, then det(A) = a11 C_11 + a12 C_12 + … + a1n C_1n § This is called the cofactor expansion across the first row
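The cofactor expansion as a recursive NumPy sketch (0-based indices, so the sign is (−1)^j for the first row; example matrix is arbitrary):

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion across the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A_1j: delete the first row and the jth column
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
print(det_cofactor(A))  # matches np.linalg.det(A)
```

Note that this expansion costs O(n!) and is only practical for tiny matrices; Gaussian elimination (next slide) is the usual route.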

SLIDE 26

Determinant

§ Gaussian elimination brings the matrix into triangular form § For triangular matrices, the determinant is the product of the diagonal elements § Can be used to compute eigenvalues: solve the characteristic polynomial det(A − λI) = 0
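Both facts in a NumPy sketch (arbitrary example matrices):

```python
import numpy as np

# Triangular matrix: determinant = product of the diagonal
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])
print(np.linalg.det(T))  # 24.0 (= 2 * 3 * 4), up to rounding

# Eigenvalues are the roots of det(A - lambda * I) = 0
A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
lam = np.linalg.eigvals(A)
print(np.allclose(np.linalg.det(A - lam[0] * np.eye(2)), 0.0))  # True
```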

SLIDE 27

Determinant: Applications

§ Find the inverse using Cramer’s rule: A⁻¹ = (1 / det(A)) adj(A), with adj(A) being the adjugate of A, i.e., adj(A)_ij = C_ji, with C_ij being the cofactors of A
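For the 2×2 case the adjugate has a closed form (swap the diagonal, negate the off-diagonal); a NumPy sketch with an arbitrary invertible example:

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# Adjugate of a 2x2 matrix
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])

# Cramer's rule: A^-1 = adj(A) / det(A)
Ainv = adj / np.linalg.det(A)
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```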

SLIDE 28

Orthonormal Matrix

§ A matrix is orthonormal iff its column (row) vectors represent an orthonormal basis § Some properties:

§ The transpose is the inverse § Determinant has unity norm (± 1)
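Both properties on a small example (a NumPy sketch; the 90° rotation below is an arbitrary orthonormal matrix):

```python
import numpy as np

# Columns form an orthonormal basis -> orthonormal matrix
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: the transpose is the inverse
print(abs(np.linalg.det(Q)))            # 1.0: determinant has unity norm
```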

SLIDE 29

Rotation Matrix (Orthonormal)

§ 2D rotations: R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]] § 3D rotations along the main axes: Rx(θ), Ry(θ), Rz(θ) § The inverse is the transpose (efficient) § IMPORTANT: Rotations are not commutative!
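A NumPy sketch of two of the axis rotations, checking both claims (90° angles chosen for illustration):

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

Rz, Rx = rot_z(np.pi / 2), rot_x(np.pi / 2)

# The inverse is simply the transpose (no costly inversion needed)
print(np.allclose(np.linalg.inv(Rz), Rz.T))  # True
# Rotations are not commutative
print(np.allclose(Rz @ Rx, Rx @ Rz))         # False
```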

SLIDE 30

Jacobian Matrix

§ It is a non-square m×n matrix in general § Given a vector-valued function f(x) = (f1(x), …, fm(x))ᵀ with x ∈ Rⁿ § Then, the Jacobian matrix is defined by the entries J_ij = ∂fi/∂xj

SLIDE 31

Jacobian Matrix

§ Orientation of the tangent plane to the vector-valued function at a given point § Generalizes the gradient of a scalar-valued function
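The definition J_ij = ∂fi/∂xj can be approximated by finite differences; a NumPy sketch for a hypothetical function f(x) = (x₁², x₁x₂)ᵀ, checked against its analytic Jacobian:

```python
import numpy as np

def f(x):
    """Hypothetical example function f: R^2 -> R^2."""
    return np.array([x[0] ** 2, x[0] * x[1]])

def numerical_jacobian(f, x, eps=1e-6):
    """J[i, j] = d f_i / d x_j, via central differences."""
    m, n = len(f(x)), len(x)
    J = np.zeros((m, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x0 = np.array([2.0, 3.0])
# Analytic Jacobian at (2, 3): [[2*x1, 0], [x2, x1]] = [[4, 0], [3, 2]]
print(numerical_jacobian(f, x0))
```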

SLIDE 32

Quadratic Forms

§ Many functions can be locally approximated with a quadratic form § Often, one is interested in finding the minimum (or maximum) of a quadratic form, i.e., x* = argminₓ (xᵀAx + xᵀb)

SLIDE 33

Quadratic Forms

§ Question: how to efficiently compute a solution to this minimization problem? § At the minimum, we have a zero derivative: ∂(xᵀAx + xᵀb)/∂x = 0 § By using the definition of the matrix product, we can compute this derivative: ∂(xᵀAx + xᵀb)/∂x = xᵀ(A + Aᵀ) + bᵀ

SLIDE 34

Quadratic Forms

§ The minimum of xᵀAx + xᵀb is where its derivative is 0 § Thus, we can solve the system (A + Aᵀ)x = −b § If the matrix A is symmetric, the system becomes 2Ax = −b § Solving that leads to the minimum
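The whole derivation as a NumPy sketch, assuming a symmetric positive definite A (the example values are arbitrary):

```python
import numpy as np

# Minimize f(x) = x^T A x + x^T b, A symmetric positive definite
A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
b = np.array([-4.0, -8.0])

# Zero derivative: 2 A x + b = 0  ->  solve 2 A x = -b
x_star = np.linalg.solve(2.0 * A, -b)
print(x_star)  # [1. 1.]

def f(x):
    return x @ A @ x + x @ b

# A nearby point has a larger value, consistent with a minimum
print(f(x_star) < f(x_star + np.array([0.1, 0.0])))  # True
```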