Lecture 6: Math Review II
Justin Johnson, EECS 442 WI 2020 (PowerPoint PPT presentation)

SLIDE 1

Justin Johnson, January 28, 2020. EECS 442 WI 2020: Lecture 6

Lecture 6: Math Review II


SLIDE 2

Administrative

  • HW0 due tomorrow, 1/29 11:59pm
  • HW1 due 1 week from tomorrow, 2/5 11:59pm


SLIDE 3

Last Time: Floating Point Math

IEEE 754 Single Precision / Single / float32:
  Sign: 1 bit | Exponent: 8 bits (2^127 ≈ 10^38) | Fraction: 23 bits (≈ 7 decimal digits)

IEEE 754 Double Precision / Double / float64:
  Sign: 1 bit | Exponent: 11 bits (2^1023 ≈ 10^308) | Fraction: 52 bits (≈ 15 decimal digits)
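The precision limits above are easy to see in numpy (a quick check, not from the slides; np.float32 and np.float64 are the two IEEE 754 types in the table):

```python
import numpy as np

# float32 keeps ~7 decimal digits, so adding 1e-8 to 1 rounds away entirely
x32 = np.float32(1.0) + np.float32(1e-8)
print(x32 == np.float32(1.0))  # True: the increment is lost

# float64 keeps ~15 decimal digits, so the same increment survives
x64 = np.float64(1.0) + np.float64(1e-8)
print(x64 == np.float64(1.0))  # False
```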

SLIDE 4

Last Time: Vectors

  • Scale (vector, scalar → vector)
  • Add (vector, vector → vector)
  • Magnitude (vector → scalar)
  • Dot product (vector, vector → scalar)
    • Dot products are projection / angles
  • Cross product (vector, vector → vector)
    • Vectors facing the same direction have cross product 0
  • You can never mix vectors of different sizes
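These operations map directly onto numpy (a minimal sketch with made-up vectors v and w, where w = 2v so the two vectors face the same direction):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 4.0, 4.0])        # w = 2v: same direction as v

print(3 * v)                 # scale: vector, scalar -> vector
print(v + w)                 # add: vector, vector -> vector
print(np.linalg.norm(v))     # magnitude: vector -> scalar (here 3.0)
print(v.dot(w))              # dot product: vector, vector -> scalar (here 18.0)
print(np.cross(v, w))        # cross product: vector, vector -> vector
# v and w face the same direction, so their cross product is the zero vector
```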
SLIDE 5

Matrices

SLIDE 6

Matrices

Horizontally concatenate n m-dimensional column vectors and you get an m×n matrix A (here 2×3):

A = [ v1 ⋯ vn ] = [ v11  v21  v31 ]
                  [ v12  v22  v32 ]

Notation:
a (scalar): lowercase, undecorated
a (vector): lowercase bold, or with an arrow
A (matrix): uppercase bold

SLIDE 7

Matrices


Watch out: in math it's common to treat a D-dimensional vector as a D×1 matrix (a column vector); in numpy these are different things.
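A quick numpy illustration of this pitfall (the shapes are my own example):

```python
import numpy as np

# A "vector" of shape (3,) and a column matrix of shape (3, 1) are
# distinct objects in numpy, even though math often identifies them.
v = np.array([1.0, 2.0, 3.0])        # shape (3,)
col = v.reshape(3, 1)                # shape (3, 1): a 3x1 matrix
print(v.shape, col.shape)            # (3,) (3, 1)
print((v + col).shape)               # (3, 3): broadcasting, probably not what you wanted!
```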

SLIDE 8

Matrices

Vertically concatenate m n-dimensional row vectors and you get an m×n matrix A (here 2×3):

A = [ − v1ᵀ − ] = [ v11  v12  v13 ]
    [ − v2ᵀ − ]   [ v21  v22  v23 ]

Transpose: flip rows / columns:

[ a ]ᵀ
[ b ]  = [ a  b  c ]        (3×1)ᵀ = 1×3
[ c ]

SLIDE 9

Matrix-vector Product

z(2×1) = A(2×3) x(3×1)

z = x1·w1 + x2·w2 + x3·w3: a linear combination of the columns of A

[ z1 ]                  [ x1 ]
[ z2 ] = [ w1  w2  w3 ] [ x2 ]
                        [ x3 ]
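A small check (example values mine) that A @ x really is the column combination:

```python
import numpy as np

# z = A x is the linear combination of A's columns weighted by x's entries.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # 2x3: columns w1, w2, w3
x = np.array([1.0, 0.5, -1.0])

z = A @ x
z_by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(z, z_by_columns))  # True
```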

SLIDE 10

Matrix-vector Product

z(2×1) = A(2×3) x(3×1)

z1 = v1ᵀx, z2 = v2ᵀx: dot products between the rows of A and x

[ z1 ]   [ − v1ᵀ − ]
[ z2 ] = [ − v2ᵀ − ] x

SLIDE 11

Matrix Multiplication

AB = [ − a1ᵀ − ]   [ |        | ]
     [    ⋮    ] × [ b1  ⋯  bp ]
     [ − amᵀ − ]   [ |        | ]

Generally: A(m×n) and B(n×p) yield product (AB)(m×p)

Yes – in A, I'm referring to the rows, and in B, I'm referring to the columns.

SLIDE 12

Matrix Multiplication

AB = [ − a1ᵀ − ]   [ |        | ]   [ a1ᵀb1  ⋯  a1ᵀbp ]
     [    ⋮    ] × [ b1  ⋯  bp ] = [   ⋮    ⋱    ⋮    ]
     [ − amᵀ − ]   [ |        | ]   [ amᵀb1  ⋯  amᵀbp ]

(AB)ij = aiᵀ bj

Generally: A(m×n) and B(n×p) yield product (AB)(m×p)
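A quick sketch of both facts in numpy (the shapes are chosen arbitrarily for illustration):

```python
import numpy as np

# (AB)[i, j] is the dot product of row i of A with column j of B.
rng = np.random.default_rng(0)
A = rng.random((4, 3))    # m x n
B = rng.random((3, 5))    # n x p
C = A @ B                 # m x p

print(C.shape)                                    # (4, 5)
print(np.isclose(C[1, 2], A[1, :].dot(B[:, 2])))  # True
```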

SLIDE 13

Matrix Multiplication

  • Dimensions must match
  • Dimensions must match
  • Dimensions must match
  • (Associative): ABx = (A)(Bx) = (AB)x
  • (Not Commutative): ABx ≠ (BA)x ≠ (BxA)
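A concrete check of associativity and non-commutativity with small hand-picked matrices (mine, not the slide's):

```python
import numpy as np

# Matrix multiplication is associative but not commutative.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = np.array([1.0, 1.0])

print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True: associative
print(np.allclose(A @ B, B @ A))              # False: order matters
```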
SLIDE 14

Two uses for Matrices

  • 1. Storing things in a rectangular array (e.g. images)
    • Typical operations: element-wise operations, convolution (which we'll cover later)
    • Atypical operations: almost anything you learned in a math linear algebra class
  • 2. A linear operator that maps vectors to another space (Ax)
    • Typical/atypical: reverse of above
SLIDE 15

Images as Matrices

Suppose someone hands you this matrix. What’s wrong with it?

No contrast!

SLIDE 16

Contrast: Gamma Curve

Typical way to change the contrast is to apply a nonlinear correction: new pixel value = (pixel value)^γ. The exponent γ controls how much contrast gets added.

SLIDE 17

Contrast: Gamma Curve

Now the darkest regions (10th percentile) are much darker than the moderately dark regions (50th percentile). (The plot marks the old and new 10%, 50%, and 90% pixel values.)

SLIDE 18

Contrast: Gamma Correction

SLIDE 19

Contrast: Gamma Correction

Phew! Much Better.

SLIDE 20

Implementation

Python+Numpy (right way):

    imNew = im ** expFactor

Python+Numpy (slow way – why?):

    imNew = np.zeros(im.shape)
    for y in range(im.shape[0]):
        for x in range(im.shape[1]):
            imNew[y, x] = im[y, x] ** expFactor
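A runnable sanity check that the two versions agree (assuming expFactor is the gamma exponent; the random "image" is mine). The loop is slow because every element access and power goes through the Python interpreter instead of one optimized C loop:

```python
import numpy as np

rng = np.random.default_rng(0)
im = rng.random((64, 48))    # stand-in image with values in [0, 1)
expFactor = 0.5

# Vectorized: one call into numpy's C implementation
fast = im ** expFactor

# Looped: one interpreter round-trip per pixel
slow = np.zeros(im.shape)
for y in range(im.shape[0]):
    for x in range(im.shape[1]):
        slow[y, x] = im[y, x] ** expFactor

print(np.allclose(fast, slow))  # True
```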

SLIDE 21

Elementwise Operations

(A ⊙ B)ij = Aij · Bij    "Hadamard product" / element-wise multiplication

(A / B)ij = Aij / Bij    element-wise division

(A^d)ij = (Aij)^d        element-wise power – beware notation
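In numpy these are exactly the array operators (example matrices mine):

```python
import numpy as np

# On arrays, *, /, ** are all element-wise; matrix multiply is @.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 2.0], [2.0, 2.0]])

print(A * B)    # Hadamard product, NOT matrix multiplication (that's A @ B)
print(A / B)    # element-wise division
print(A ** 2)   # element-wise power
```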

SLIDE 22

Sums Across Axes

Suppose we have an N×2 matrix A:

A = [ x1  y1 ]
    [  ⋮   ⋮ ]
    [ xN  yN ]

Σ(A, 1) = [ x1 + y1 ]     an N-dim column vector
          [    ⋮    ]
          [ xN + yN ]

Σ(A, 0) = [ Σi xi ,  Σi yi ]     a 2-dim row vector

Note – libraries distinguish between an N-D column vector and an N×1 matrix.
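The same sums in numpy (example matrix mine): axis=1 sums across columns (one value per row), axis=0 sums down rows (one value per column):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # N x 2, N = 3

print(np.sum(A, axis=1))             # [ 3.  7. 11.]  shape (3,)
print(np.sum(A, axis=0))             # [ 9. 12.]      shape (2,)
print(np.sum(A, axis=1, keepdims=True).shape)  # (3, 1): an Nx1 matrix, not an N-vector
```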

SLIDE 23

Operations they don't teach

You probably saw matrix addition:

[ a  b ]   [ e  f ]   [ a+e  b+f ]
[ c  d ] + [ g  h ] = [ c+g  d+h ]

What is this? (FYI: e is a scalar)

[ a  b ]
[ c  d ] + e = ?

SLIDE 24

Broadcasting

[ a  b ]       [ a  b ]   [ e  e ]   [ a  b ]
[ c  d ] + e = [ c  d ] + [ e  e ] = [ c  d ] + 1(2×2)·e

If you want to be pedantic and proper, you expand e by multiplying by a matrix of 1s (denoted 1). Many smart matrix libraries do this automatically. This is the source of many bugs.

SLIDE 25

Broadcasting Example

Given: an n×2 matrix P and a 2D column vector v. Want: the n×2 difference matrix D.

P = [ x1  y1 ]    v = [ a ]    D = [ x1−a  y1−b ]
    [  ⋮   ⋮ ]        [ b ]        [   ⋮     ⋮  ]
    [ xn  yn ]                     [ xn−a  yn−b ]

P − vᵀ = [ x1  y1 ]   [ a  b ]
         [  ⋮   ⋮ ] − [ a  b ]
         [ xn  yn ]   [  ⋮  ⋮ ]

The repeated rows of vᵀ (blue on the slide) are assumed / broadcast.
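In numpy the broadcast happens automatically (example values mine): v of shape (2,) is treated as a row and repeated across P, which is exactly the P − vᵀ picture:

```python
import numpy as np

P = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # n x 2
v = np.array([0.5, 1.0])          # shape (2,), broadcast as a row

D = P - v                         # v is broadcast to shape (3, 2)
print(D)
```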

SLIDE 26

Broadcasting Rules

Suppose we have numpy arrays x and y. How will they broadcast?

  • 1. Write down the shape of each array as a tuple of integers.
      For example: x: (10,)  y: (20, 10)
  • 2. If they have different numbers of dimensions, prepend with ones until they have the same number of dimensions.
      For example: x: (10,)  y: (20, 10)  →  x: (1, 10)  y: (20, 10)
  • 3. Compare each dimension. There are 3 cases:
      (a) Dimensions match. Everything is good.
      (b) Dimensions don't match, but one is 1. "Duplicate" the smaller array along that axis to match.
      (c) Dimensions don't match, and neither is 1. Error!
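Tracing the rules on the running example x: (10,) and y: (20, 10): step 2 pads x to (1, 10), and step 3 duplicates it along axis 0 to (20, 10):

```python
import numpy as np

x = np.ones(10)          # shape (10,)  -> padded to (1, 10)
y = np.ones((20, 10))    # shape (20, 10)
z = x + y                # x duplicated along axis 0
print(z.shape)           # (20, 10)
```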

SLIDE 27

Broadcasting Examples

x = np.ones((10, 20))
y = np.ones(20)
z = x + y
print(z.shape)   # (10, 20)

x = np.ones((10, 20))
y = np.ones((10, 1))
z = x + y
print(z.shape)   # (10, 20)

x = np.ones((10, 20))
y = np.ones(10)
z = x + y        # ERROR: (10, 20) vs (1, 10) – the trailing dimensions don't match

x = np.ones((1, 20))
y = np.ones((10, 1))
z = x + y
print(z.shape)   # (10, 20)

SLIDE 28

Tensors

Scalar: just one number
Vector: 1D list of numbers
Matrix: 2D grid of numbers
Tensor: N-dimensional grid of numbers
(Lots of other meanings in math, physics)

SLIDE 29

Broadcasting with Tensors

x = np.ones(30)
y = np.ones((20, 1))
z = np.ones((10, 1, 1))
w = x + y + z
print(w.shape)   # (10, 20, 30)

The same broadcasting rules apply to tensors with any number of dimensions!

SLIDE 30

Vectorization

Writing code without explicit loops: use broadcasting, matrix multiply, and other (optimized) numpy primitives instead

SLIDE 31

Vectorization Example

  • Suppose I represent each image as a 128-dimensional vector
  • I want to compute all the pairwise distances between {x1, …, xN} and {y1, …, yM} so I can find, for every xi, the nearest yj
  • Identity: ‖x − y‖² = ‖x‖² + ‖y‖² − 2xᵀy
  • Or: ‖x − y‖ = (‖x‖² + ‖y‖² − 2xᵀy)^(1/2)
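A quick numerical check of the identity (random vectors mine):

```python
import numpy as np

# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.T y
rng = np.random.default_rng(0)
x = rng.random(128)
y = rng.random(128)

lhs = np.sum((x - y) ** 2)
rhs = np.sum(x ** 2) + np.sum(y ** 2) - 2 * x.dot(y)
print(np.allclose(lhs, rhs))  # True
```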

SLIDE 32

Vectorization Example

X = [ − x1ᵀ − ]    Y = [ − y1ᵀ − ]
    [    ⋮    ]        [    ⋮    ]
    [ − xNᵀ − ]        [ − yMᵀ − ]

(XYᵀ)ij = xiᵀ yj,  where  Yᵀ = [ |        | ]
                               [ y1  ⋯  yM ]
                               [ |        | ]

Compute an N×1 vector of squared norms [ ‖x1‖², …, ‖xN‖² ]ᵀ (can also do M×1 for Y).
Compute an N×M matrix of dot products XYᵀ.
slide-33
SLIDE 33

Justin Johnson January 28, 2020 EECS 442 WI 2020: Lecture 6 -

Vectorization Example

33

D = (Σ(X², 1) + Σ(Y², 1)ᵀ − 2XYᵀ)^(1/2)

(Σ(X², 1) + Σ(Y², 1)ᵀ)ij = ‖xi‖² + ‖yj‖²

[ ‖x1‖² ]                          [ ‖x1‖²+‖y1‖²  ⋯  ‖x1‖²+‖yM‖² ]
[   ⋮   ] + [ ‖y1‖² ⋯ ‖yM‖² ]  =  [      ⋮       ⋱       ⋮      ]
[ ‖xN‖² ]                          [ ‖xN‖²+‖y1‖²  ⋯  ‖xN‖²+‖yM‖² ]

Why?

SLIDE 34

Vectorization Example

Dij = (‖xi‖² + ‖yj‖² − 2xiᵀyj)^(1/2)

D = (Σ(X², 1) + Σ(Y², 1)ᵀ − 2XYᵀ)^(1/2)

Numpy code:

    XNorm = np.sum(X**2, axis=1, keepdims=True)      # (N, 1)
    YNorm = np.sum(Y**2, axis=1, keepdims=True)      # (M, 1)
    D = (XNorm + YNorm.T - 2*np.dot(X, Y.T))**0.5    # (N, M)

Get in the habit of thinking about shapes as tuples. Suppose X is (N, D), Y is (M, D).


SLIDE 41

Vectorization Example

Dij = (‖xi‖² + ‖yj‖² − 2xiᵀyj)^(1/2)

Numpy code:

    XNorm = np.sum(X**2, axis=1, keepdims=True)
    YNorm = np.sum(Y**2, axis=1, keepdims=True)
    D = (XNorm + YNorm.T - 2*np.dot(X, Y.T))**0.5

*May have to make sure the quantity under the square root is at least 0 (sometimes roundoff issues happen)
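Putting it together as a runnable function (the clamping via np.maximum is the fix the footnote suggests; the function name and random data are mine):

```python
import numpy as np

def pairwise_dist(X, Y):
    """All Euclidean distances between rows of X (N, D) and rows of Y (M, D)."""
    XNorm = np.sum(X**2, axis=1, keepdims=True)      # (N, 1)
    YNorm = np.sum(Y**2, axis=1, keepdims=True)      # (M, 1)
    sq = XNorm + YNorm.T - 2 * np.dot(X, Y.T)        # (N, M), may dip below 0 from roundoff
    return np.maximum(sq, 0) ** 0.5

rng = np.random.default_rng(0)
X = rng.random((300, 128))
Y = rng.random((400, 128))
D = pairwise_dist(X, Y)
print(D.shape)                                            # (300, 400)
print(np.allclose(D[0, 0], np.linalg.norm(X[0] - Y[0])))  # True
```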

SLIDE 42

Does Vectorization Matter?

Computing pairwise distances between 300 and 400 128-dimensional vectors:

  • 1. for x in X, for y in Y, using native python: 9s
  • 2. for x in X, for y in Y, using numpy to compute distance: 0.8s
  • 3. vectorized: 0.0045s (~2000x faster than 1, ~175x faster than 2)

Expressing things in primitives that are optimized is usually faster. Even more important with special hardware like GPUs or TPUs!

SLIDE 43

Linear Algebra

SLIDE 44

Linear Independence

A set of vectors is linearly independent if you can't write one as a linear combination of the others.

Suppose: (the slide draws concrete example vectors a, b, c in 3D, plus vectors x and y built as linear combinations of them)

  • Is the set {a, b, c} linearly independent?
  • Is the set {a, b, x} linearly independent?
  • Max # of independent 3D vectors?

SLIDE 45

Span

Span: all linear combinations of a set of vectors

Span({[0, 2]}) = ? All vertical lines through the origin = {μ[0, 1] : μ ∈ ℝ}

Is blue in {red}'s span? (referring to the vectors drawn on the slide)

SLIDE 46

Span

Span: all linear combinations of a set of vectors

Span({ , }) = ? (for the two vectors drawn on the slide)

SLIDE 47

Span

Span({ , }) = ? (for the two vectors drawn on the slide)

SLIDE 48

Matrix-Vector Product

Ax = [ |        | ]
     [ c1  ⋯  cn ] x
     [ |        | ]

Right-multiplying A by x mixes the columns of A according to the entries of x.

  • The output space of f(x) = Ax is constrained to be the span of the columns of A.
  • You can't output things you can't construct out of your columns.
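A small check (matrix mine) that A @ x is exactly a combination of A's columns, and so lies in their span:

```python
import numpy as np

# A has two columns in R^3, so every output A @ x lies in the plane they span.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # columns c1, c2
x = np.array([2.0, -3.0])

out = A @ x
print(np.allclose(out, 2.0 * A[:, 0] - 3.0 * A[:, 1]))  # True
```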

SLIDE 49

An Intuition

(Diagram: knobs x1, x2, x3 feed a machine A, producing outputs y1, y2, y3.)

y = Ax = [ |   |   |  ] [ x1 ]
         [ c1  c2  c3 ] [ x2 ]
         [ |   |   |  ] [ x3 ]

x – knobs on machine (e.g., fuel, brakes)
y – state of the world (e.g., where you are)
A – machine (e.g., your car)

SLIDE 50

Linear Independence

Suppose the columns of a 3×3 matrix A are not linearly independent (c1, αc1, c2, for instance):

y = Ax = [ |    |   |  ] [ x1 ]
         [ c1  αc1  c2 ] [ x2 ]
         [ |    |   |  ] [ x3 ]

y = x1·c1 + αx2·c1 + x3·c2
y = (x1 + αx2)·c1 + x3·c2

SLIDE 51

Linear Independence Intuition

y = (x1 + αx2)·c1 + x3·c2

The knobs of x are redundant. Even if y has 3 outputs, you can only control it in two directions.

(Diagram: knobs x1, x2, x3 feed the machine A, producing outputs y1, y2, y3.)

SLIDE 52

Linear Independence

Recall: Ax = (x1 + αx2)·c1 + x3·c2

  • Given a vector y, there is not a unique vector x s.t. y = Ax.
  • You can write y an infinite number of ways by adding γ to x1 and subtracting γ/α from x2:

    y = A [ x1 + γ, x2 − γ/α, x3 ]ᵀ = (x1 + γ + αx2 − α(γ/α))·c1 + x3·c2

  • Not all y have a corresponding x s.t. y = Ax (assuming c1 and c2 have dimension ≥ 3).

SLIDE 53

Linear Independence

Ax = (x1 + αx2)·c1 + x3·c2

  • An infinite number of non-zero vectors x can map to the zero vector y.
  • This set is called the right null-space of A:

    y = A [ γ, −γ/α, 0 ]ᵀ = (γ − α(γ/α))·c1 + 0·c2 = 0

  • What else can we cancel out?
SLIDE 54

Rank

  • Rank of an n×n matrix A: the number of linearly independent columns (or rows) of A / the dimension of the span of the columns
  • Matrices with full rank (n×n, rank n) behave nicely: they can be inverted, span the full output space, and are one-to-one.
  • Matrices with full rank are machines where every knob is useful and every output state can be made by the machine.

SLIDE 55

Matrix Inverses

  • Given y = Ax, y is a linear combination of columns of A proportional to x. If A is full-rank, we should be able to invert this mapping.
  • Given some y (output) and A, what x (inputs) produced it?
  • x = A⁻¹y
  • Note: if you don't need to compute A⁻¹ itself, never ever compute it. Solving for x is much faster and more stable than obtaining A⁻¹.

    Bad:  x = np.linalg.inv(A).dot(y)
    Good: x = np.linalg.solve(A, y)
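A small comparison (matrix mine) showing that solve and inv agree on well-conditioned input, while solve never forms A⁻¹ explicitly:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
y = np.array([9.0, 8.0])

x_solve = np.linalg.solve(A, y)       # preferred: factorize and back-substitute
x_inv = np.linalg.inv(A).dot(y)       # works, but slower and less stable

print(np.allclose(x_solve, x_inv))    # True
print(np.allclose(A @ x_solve, y))    # True: x really solves Ax = y
```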

SLIDE 56

Symmetric Matrices

  • Symmetric: Aᵀ = A, or Aij = Aji
  • Symmetric matrices have lots of special properties

    [ a11  a12  a13 ]
    [ a12  a22  a23 ]
    [ a13  a23  a33 ]

Any matrix of the form A = XᵀX is symmetric. Quick check:

    Aᵀ = (XᵀX)ᵀ
    Aᵀ = Xᵀ(Xᵀ)ᵀ
    Aᵀ = XᵀX = A
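A quick check with a random X (mine): X.T @ X is symmetric whatever X is:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 3))
A = X.T @ X            # 3x3
print(np.allclose(A, A.T))  # True
```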

SLIDE 57

Special Matrices: Rotations

    [ r11  r12  r13 ]
R = [ r21  r22  r23 ]
    [ r31  r32  r33 ]

  • Rotation matrices R rotate vectors and do not change vector L2 norms (‖Rx‖₂ = ‖x‖₂)
  • Every row/column is unit norm
  • Every row is linearly independent
  • Transpose is inverse: RRᵀ = RᵀR = I
  • Determinant is 1 (otherwise it's also a coordinate flip/reflection); eigenvalues have magnitude 1
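A 2D rotation matrix makes these properties easy to verify (the angle is chosen arbitrarily):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x)))  # True: norm preserved
print(np.allclose(R @ R.T, np.eye(2)))                       # True: transpose is inverse
print(np.isclose(np.linalg.det(R), 1.0))                     # True: determinant 1
```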

SLIDE 58

Next Time: More Linear Algebra + Image Filtering