Linear Algebra Primer
Note: the slides are based on CS131 (Juan Carlos et al) and EE263 (by Stephen Boyd et al) at Stanford. Reorganized, revised, and typed by Hao Su
◮ Vectors and Matrices
  ◮ Basic matrix operations
  ◮ Determinants, norms, trace
  ◮ Special matrices
◮ Transformation Matrices
  ◮ Homogeneous matrices
  ◮ Translation
◮ Matrix inverse
◮ Matrix rank
◮ A column vector v ∈ R^(n×1), with entries v₁, …, vₙ stacked vertically
◮ A row vector vᵀ ∈ R^(1×n), with entries v₁, …, vₙ laid out horizontally
◮ We’ll default to column vectors in this class
◮ You’ll want to keep track of the orientation of your vectors when programming: a row vector and a column vector behave very differently under multiplication
◮ Vectors can represent an offset in 2D or 3D space
◮ Points are just vectors from the origin
◮ Data (pixels, gradients at an image keypoint, etc.) can also be treated as a vector
◮ Such vectors do not have a geometric interpretation, but calculations like “distance” can still be meaningful
◮ A matrix A ∈ R^(m×n) is an array of numbers with size m by n, i.e., m rows and n columns
◮ If m = n, we say that A is square
◮ Python represents an image as a matrix of pixel brightness values
◮ Note that the upper-left corner is (y, x) = (0, 0): the first index is the row (y), the second is the column (x)
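A minimal NumPy sketch of this convention (NumPy is assumed here; the slides themselves show a figure):

```python
import numpy as np

# A tiny "image": a 4x5 matrix of pixel brightness values (0 = black, 255 = white).
img = np.zeros((4, 5), dtype=np.uint8)
img[0, 0] = 255          # upper-left corner is (y, x) = (0, 0)
img[3, 4] = 128          # bottom-right pixel

print(img.shape)         # (4, 5): 4 rows (height), 5 columns (width)
print(img[0, 0])         # 255
```

Note the row-first indexing: `img[y, x]`, not `img[x, y]`.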
◮ Grayscale images have one number per pixel, and are stored as an m × n matrix
◮ Color images have 3 numbers per pixel – red, green, and blue
  ◮ stored as an m × n × 3 matrix
◮ Addition
◮ Scaling
◮ Dot product
◮ Multiplication
◮ Transpose
◮ Inverse/pseudo-inverse
◮ Determinant/trace
◮ Addition is element-wise: (A + B)ᵢⱼ = Aᵢⱼ + Bᵢⱼ (both matrices must have the same size); scaling multiplies every entry by the same scalar: (cA)ᵢⱼ = c·Aᵢⱼ
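These two operations can be sketched in NumPy (an illustrative aside; the original slide shows worked matrices instead):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(a + b)      # element-wise addition: [5. 7. 9.]
print(2.5 * a)    # scaling every entry:   [2.5 5.  7.5]
```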
◮ Norm: ‖x‖₂ = (x₁² + x₂² + ⋯ + xₙ²)^(1/2)
◮ More formally, a norm is any function f : Rⁿ → R that satisfies 4 properties:
  ◮ Non-negativity: For all x ∈ Rⁿ, f(x) ≥ 0
  ◮ Definiteness: f(x) = 0 if and only if x = 0
  ◮ Homogeneity: For all x ∈ Rⁿ, t ∈ R, f(tx) = |t| f(x)
  ◮ Triangle inequality: For all x, y ∈ Rⁿ, f(x + y) ≤ f(x) + f(y)
◮ Example norms:
  ◮ ℓ₁ norm: ‖x‖₁ = Σᵢ₌₁ⁿ |xᵢ|
  ◮ ℓ∞ norm: ‖x‖∞ = maxᵢ |xᵢ|
◮ General ℓp norms: ‖x‖ₚ = (Σᵢ₌₁ⁿ |xᵢ|ᵖ)^(1/p)
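These norms are available directly in NumPy; a short sketch (NumPy assumed, not part of the original slides):

```python
import numpy as np

x = np.array([3.0, -4.0])

l1 = np.linalg.norm(x, 1)         # |3| + |-4| = 7
l2 = np.linalg.norm(x)            # sqrt(3^2 + 4^2) = 5 (the default, p = 2)
linf = np.linalg.norm(x, np.inf)  # max(|3|, |4|) = 4

print(l1, l2, linf)
```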
◮ Inner product (dot product) of vectors: x · y = xᵀy = Σᵢ₌₁ⁿ xᵢyᵢ
  ◮ Multiply corresponding entries of two vectors and add up the result
  ◮ x · y is also |x||y| cos(the angle between x and y)
◮ Inner product (dot product) of vectors
  ◮ If B is a unit vector, then A · B gives the length of the projection of A, i.e., the component of A which lies in the direction of B
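A quick numeric check of both interpretations of the dot product (an illustrative NumPy sketch; vector names `a`, `b` are mine):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])    # a unit vector along the x-axis

dot = a @ b                 # sum of element-wise products: 3*1 + 4*0 = 3
# dot also equals |a||b|cos(theta):
cos_angle = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# Since b is a unit vector, the dot product is the length of a's
# projection onto b (here: the x-component of a).
print(dot, cos_angle)       # 3.0, 0.6
```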
◮ The product of two matrices A ∈ R^(m×n) and B ∈ R^(n×p) is C = AB ∈ R^(m×p). Writing the rows of A as a₁ᵀ, …, aₘᵀ and the columns of B as b₁, …, bₚ, each entry of C is an inner product of a row of A with a column of B:

  AB = [ a₁ᵀb₁  a₁ᵀb₂  ⋯  a₁ᵀbₚ ]
       [ a₂ᵀb₁  a₂ᵀb₂  ⋯  a₂ᵀbₚ ]
       [   ⋮      ⋮          ⋮   ]
       [ aₘᵀb₁  aₘᵀb₂  ⋯  aₘᵀbₚ ]
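The row-times-column structure can be verified entry by entry (a NumPy sketch with made-up matrices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # m x n = 3 x 2
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])   # n x p = 2 x 3

C = A @ B                         # m x p = 3 x 3

# Each entry C[i, j] is row i of A dotted with column j of B:
assert np.isclose(C[1, 2], A[1, :] @ B[:, 2])   # 3*2 + 4*3 = 18
print(C.shape)                    # (3, 3)
```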
◮ The product of two matrices is defined only when the inner dimensions agree: the number of columns of A must equal the number of rows of B
◮ Powers
  ◮ By convention, we can refer to the matrix product AA as A², and AAA as A³, etc.
  ◮ Obviously only square matrices can be multiplied that way
◮ Transpose – flip matrix, so row 1 becomes column 1: (Aᵀ)ᵢⱼ = Aⱼᵢ
◮ A useful identity: (ABC)ᵀ = CᵀBᵀAᵀ
◮ Determinant
  ◮ det(A) returns a scalar
  ◮ Represents the area (or volume) of the parallelogram (parallelepiped) described by the rows of the matrix
  ◮ For A = [a b; c d], det(A) = ad − bc
  ◮ Properties: det(AB) = det(A) det(B), det(Aᵀ) = det(A), det(A⁻¹) = 1/det(A), and det(A) = 0 exactly when A is singular
◮ Trace
  ◮ trace(A) = sum of diagonal elements: Σᵢ Aᵢᵢ
  ◮ Invariant to a lot of transformations, so it’s used sometimes in proofs
  ◮ Properties: trace(A + B) = trace(A) + trace(B), trace(cA) = c·trace(A), trace(AB) = trace(BA)
◮ Vector norms (recap): ‖x‖₁ = Σᵢ |xᵢ|, ‖x‖₂ = (Σᵢ xᵢ²)^(1/2)
◮ Matrix norms: Norms can also be defined for matrices, such as the Frobenius norm: ‖A‖F = (Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ Aᵢⱼ²)^(1/2) = (trace(AᵀA))^(1/2)
◮ Identity matrix I: a square matrix with 1s on the diagonal and 0s elsewhere; IA = A and AI = A for any A of compatible size
◮ Diagonal matrix D = diag(d₁, …, dₙ): a square matrix with arbitrary numbers on the diagonal and 0s elsewhere
◮ Symmetric matrix: AT = A
◮ Skew-symmetric matrix: AT = −A
◮ Matrices can be used to transform vectors in useful ways, through multiplication: x′ = Ax
◮ Simplest is scaling: [sₓ 0; 0 s_y] [x; y] = [sₓ·x; s_y·y]
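A scaling transform as code (a NumPy sketch; the scale factors 2 and 0.5 are my example values):

```python
import numpy as np

# Scale x by 2 and y by 0.5 via a diagonal transformation matrix.
S = np.array([[2.0, 0.0],
              [0.0, 0.5]])
p = np.array([3.0, 4.0])

p_scaled = S @ p        # [2*3, 0.5*4] = [6., 2.]
print(p_scaled)
```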
◮ Multiple transformation matrices can be used to transform a point: p′ = R₂R₁Sp
◮ The effect of this is to apply their transformations one after the other, from right to left
◮ In the example above, the result is (R₂(R₁(Sp)))
◮ The result is exactly the same if we multiply the matrices first, to form a single transformation matrix: p′ = (R₂R₁S)p
◮ In general, a matrix multiplication lets us linearly combine the components of a vector – sufficient for scaling, rotation, and skew
◮ But notice, we cannot add a constant! :(
◮ The (somewhat hacky) solution? Stick a “1” at the end of every vector
◮ Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out: the extra column gets added on)
◮ This is called “homogeneous coordinates”
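The trick in code (a NumPy sketch; the translation (5, −2) is my example):

```python
import numpy as np

# Translate (x, y) by (tx, ty) = (5, -2) using a 3x3 homogeneous matrix.
tx, ty = 5.0, -2.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

p = np.array([3.0, 4.0, 1.0])   # the point (3, 4) with a "1" stuck on the end
p_translated = T @ p            # [3+5, 4-2, 1] = [8., 2., 1.]
print(p_translated)
```

The appended 1 is what lets the rightmost column act as an additive constant.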
◮ In homogeneous coordinates, the multiplication works out so the rightmost column of the matrix is a vector that gets added to the result
◮ Generally, a homogeneous transformation matrix will have a bottom row of [0 0 1], so that the result has a “1” at the bottom too
◮ One more thing we might want: to divide the result by something
  ◮ Matrix multiplication cannot actually divide
  ◮ So, by convention, in homogeneous coordinates, we’ll divide the result by its last coordinate after doing a matrix multiplication
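The divide-by-last-coordinate convention in code (a NumPy sketch with made-up numbers):

```python
import numpy as np

# A homogeneous multiplication may leave a last coordinate != 1;
# dividing by it is the "division" step (as used in perspective projection).
p = np.array([8.0, 4.0, 2.0])   # homogeneous result with last coordinate 2
p_cartesian = p[:2] / p[2]      # [8/2, 4/2] = [4., 2.]
print(p_cartesian)
```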
◮ Transpose of a rotation matrix produces a rotation in the opposite direction: RᵀR = RRᵀ = I, i.e., R⁻¹ = Rᵀ
◮ The rows of a rotation matrix are always mutually perpendicular unit vectors
  ◮ (and so are its columns)
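Both properties can be checked numerically for a 2D rotation (a NumPy sketch; the 30° angle is my example):

```python
import numpy as np

theta = np.pi / 6   # a 30-degree 2D rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rows (and columns) are orthonormal, so R^T R = I ...
assert np.allclose(R.T @ R, np.eye(2))
# ... and the transpose is the inverse, i.e., the opposite rotation:
assert np.allclose(R.T, np.linalg.inv(R))
```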
◮ Given a matrix A, its inverse A⁻¹ is a matrix such that AA⁻¹ = A⁻¹A = I
◮ e.g., a diagonal matrix is inverted entry-wise: [2 0; 0 3]⁻¹ = [1/2 0; 0 1/3]
◮ Useful identities, for matrices that are invertible: (A⁻¹)⁻¹ = A, (AB)⁻¹ = B⁻¹A⁻¹, (A⁻¹)ᵀ = (Aᵀ)⁻¹
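The definition and the (AB)⁻¹ = B⁻¹A⁻¹ identity, checked numerically (a NumPy sketch; A and B are my example matrices):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
A_inv = np.linalg.inv(A)   # diagonal matrix: inverts entry-wise

# Definition of the inverse:
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))

# Identity: (AB)^-1 = B^-1 A^-1 (note the reversed order)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```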
◮ Suppose we have a set of vectors v₁, …, vₙ
◮ If we can express v₁ as a linear combination of the other vectors v₂, …, vₙ, then v₁ is linearly dependent on the other vectors
  ◮ The direction v₁ can be expressed as a combination of the directions v₂, …, vₙ
◮ If no vector is linearly dependent on the rest of the set, the set is linearly independent
◮ Common case: a set of vectors v₁, …, vₙ is always linearly independent if each vector is perpendicular to every other vector (and non-zero)
◮ Column/row rank
  ◮ col-rank(A) = the maximum number of linearly independent column vectors of A
  ◮ row-rank(A) = the maximum number of linearly independent row vectors of A
◮ Column rank always equals row rank
◮ Matrix rank: rank(A) = col-rank(A) = row-rank(A)
◮ For transformation matrices, the rank tells you the dimensions of the output
◮ e.g., if the rank of A is 1, then the transformation p′ = Ap maps points onto a line
◮ An example matrix with rank 1: [1 2; 2 4] – the second row is twice the first, so its columns span only a line
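The collapse onto a line can be seen numerically (a NumPy sketch using the illustrative rank-1 matrix above):

```python
import numpy as np

# A rank-1 matrix: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(A) == 1

# Transforming the two basis vectors gives parallel outputs:
p1 = A @ np.array([1.0, 0.0])   # [1., 2.]
p2 = A @ np.array([0.0, 1.0])   # [2., 4.]  -- a multiple of p1
# Parallel means the 2D "cross product" (determinant) is zero:
assert abs(p1[0] * p2[1] - p1[1] * p2[0]) < 1e-12
```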
◮ If an m × m matrix is rank m, we say it is “full rank”
  ◮ Maps an m × 1 vector uniquely to another m × 1 vector
  ◮ An inverse matrix can be found
◮ If rank < m, we say it is “singular”
  ◮ At least one dimension is getting collapsed. No way to look at the result and tell which input vector produced it
  ◮ Inverse does not exist
◮ Inverse also does not exist for non-square matrices (though a pseudo-inverse can be defined)
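A sketch of what singularity means in practice (NumPy assumed; the rank-1 matrix is my example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rank 1, hence singular

assert np.isclose(np.linalg.det(A), 0.0)
try:
    np.linalg.inv(A)          # no inverse exists
except np.linalg.LinAlgError:
    print("singular: no inverse")

# The Moore-Penrose pseudo-inverse still exists (also for non-square A)
# and satisfies A A+ A = A:
A_pinv = np.linalg.pinv(A)
assert np.allclose(A @ A_pinv @ A, A)
```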