Lecture 5: Math Review I
Justin Johnson, EECS 442 WI 2020, January 23, 2020

Administrative:
HW0 due Wednesday 1/29 (1 week from yesterday)
HW1 out yesterday, due Wednesday 2/5 (3 weeks from yesterday)
Suppose the 𝑥ⱼ are i.i.d. with mean 𝜇 and standard deviation 𝜎 for 1 ≤ 𝑗 ≤ 𝑁, and define the sample mean 𝜇̂ = (1/𝑁) ∑ⱼ₌₁ᴺ 𝑥ⱼ. How is 𝜇̂ distributed (qualitatively), in terms of variance? 𝜇̂ has mean 𝜇 and standard deviation 𝜎/√𝑁.
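The 𝜎/√𝑁 behavior is easy to check empirically. A minimal sketch (variable names and the choice of a normal distribution are mine, not from the slides) that runs many experiments of N draws each and compares the spread of the sample means against the prediction:

```python
import numpy as np

# Check that the sample mean of N i.i.d. draws has std sigma / sqrt(N).
rng = np.random.default_rng(0)
mu, sigma, N, trials = 0.0, 1.0, 100, 10_000

# Each row is one experiment of N draws; average across columns.
samples = rng.normal(mu, sigma, size=(trials, N))
means = samples.mean(axis=1)

predicted_std = sigma / np.sqrt(N)   # theory: 1 / sqrt(100) = 0.1
empirical_std = means.std()
```

The empirical spread of the 10,000 sample means lands very close to the predicted 0.1.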
Each game/variable has mean $0.10 and std $2. 100 games is uncertain and fun! 100K games is essentially guaranteed profit: with 99.999999% probability the per-game average is at least $0.064: $0.01 for drinks, $0.054 for profits.
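A sketch of the casino argument (the slide does not specify a payout distribution; a normal with the stated mean and std is assumed here purely for illustration):

```python
import numpy as np

# Simulate 100K games, each with mean $0.10 and std $2.
rng = np.random.default_rng(42)
n_games = 100_000
payouts = rng.normal(0.10, 2.0, size=n_games)

avg = payouts.mean()                    # per-game average across all games
std_of_avg = 2.0 / np.sqrt(n_games)    # ~ $0.0063: the average barely moves
```

Even though a single game has std $2, the average over 100K games has std of only about $0.0063, which is why the house can budget its profit so confidently.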
mean 𝜇 and standard deviation 𝜎/√𝑁 ≈ 10⁻⁶
“Integers” on a computer are integers modulo 2ᵏ; when an operation overflows, the CPU records it in the carry flag.
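A sketch of this wraparound with numpy's uint8 type (k = 8, so arithmetic is modulo 256; arrays are used because numpy wraps array arithmetic silently):

```python
import numpy as np

# uint8 arithmetic is modulo 2^8 = 256: 250 + 10 wraps around to 4.
a = np.array([250], dtype=np.uint8)
b = np.array([10], dtype=np.uint8)
wrapped = int((a + b)[0])      # (250 + 10) mod 256 = 4
expected = (250 + 10) % 256    # same thing computed in plain Python
```

The same wraparound applies to all fixed-width integer types, just with a different modulus 2ᵏ.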
Ok – you have to multiply before dividing
With uint8 integer math, the answer comes out wrong; it should be 9 × 4 = 36.
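A plain-Python sketch of why order matters: integer division truncates, so dividing before multiplying throws away the fractional part.

```python
# Integer division truncates: 9 // 2 = 4, not 4.5.
divide_first = (9 // 2) * 8    # 4 * 8 = 32 (wrong)
multiply_first = (9 * 8) // 2  # 72 // 2 = 36 (right)
```

Multiplying first keeps the intermediate value exact, so the final truncating division lands on the correct answer.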
What’s the largest number we can represent? 63.75 – why? (8 stored bits at a step of 0.25: 255 × 0.25 = 63.75.) How precisely can we measure at 63? 0.25. How precisely can we measure at 0? 0.25. Fine for many purposes, but for science it seems silly.
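A sketch of this assumed fixed-point format (8 stored bits, 2 of them fractional, so every stored byte r represents the value r × 0.25):

```python
# 8-bit fixed point with 2 fraction bits: value = raw_byte * 0.25.
STEP = 0.25                            # resolution, same everywhere
largest = 255 * STEP                   # 63.75: largest representable value
near_zero_gap = 1 * STEP               # spacing near 0 is 0.25
near_63_gap = 255 * STEP - 254 * STEP  # spacing near 63 is also 0.25
```

The key property (and the "silly" part for science): the absolute precision is a constant 0.25 whether the value is near 0 or near 63, which wastes relative precision on large values and cannot represent small ones finely.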
Sign (S) Exponent (E) Fraction (F)
Bias: allows the exponent to be negative (bias = 127 for float32, so the stored field is E + 127 and the true exponent is the field minus 127). Note: fraction = significand = mantissa; exponents of all ones or all zeros are special numbers.
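These fields can be pulled apart directly. A sketch using Python's struct module to decode the sign, biased exponent, and fraction bits of a float32 (the helper name is mine):

```python
import struct

def float32_fields(x):
    """Return (sign, biased_exponent, fraction) bit fields of a float32."""
    bits = int.from_bytes(struct.pack('>f', x), 'big')
    sign = bits >> 31                  # 1 bit
    exp_field = (bits >> 23) & 0xFF    # 8 bits, stores true exponent + 127
    frac = bits & 0x7FFFFF             # 23 bits
    return sign, exp_field, frac

# 1.0 = (+1) * (1 + 0) * 2^(127 - 127): exponent field is exactly the bias.
fields_one = float32_fields(1.0)
# -2.0 = (-1) * (1 + 0) * 2^(128 - 127)
fields_neg_two = float32_fields(-2.0)
```

Decoding a few values by hand like this is the quickest way to internalize the bias convention.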
[Worked example: a toy floating-point format with sign, exponent, and fraction fields. The fraction takes values 0/8, 1/8, 2/8, …, 6/8, 7/8; with a bias of 7, an exponent field of 7 gives a true exponent of 7 − 7 = 0.]
[Same toy format: fraction values 0/8, 1/8, 2/8, …, 6/8, 7/8; an exponent field of 9 gives a true exponent of 9 − 7 = 2.]
IEEE 754 Single Precision / Single / float32: sign 1 bit; exponent 8 bits (2¹²⁷ ≈ 10³⁸); fraction 23 bits (≈ 7 decimal digits).
IEEE 754 Double Precision / Double / float64: sign 1 bit; exponent 11 bits (2¹⁰²³ ≈ 10³⁰⁸); fraction 52 bits (≈ 15 decimal digits).
IEEE 754 Half Precision / Half / float16: sign 1 bit; exponent 5 bits (2¹⁵ ≈ 10⁴); fraction 10 bits (≈ 3 decimal digits).
Brain Floating Point / bfloat16: sign 1 bit; exponent 8 bits (2¹²⁷ ≈ 10³⁸); fraction 7 bits (≈ 2 decimal digits). Same range as FP32, but reduced precision.
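The precision gap between these formats is easy to see numerically. A sketch comparing how coarsely float16 and float32 round the same value (numpy has no bfloat16, so only the IEEE types are shown):

```python
import numpy as np

# float16 keeps far fewer significand bits than float32, so the same
# value rounds much more coarsely.
x = 1.0 / 3.0
err16 = abs(float(np.float16(x)) - x)   # ~1e-4: only ~3 decimal digits
err32 = abs(float(np.float32(x)) - x)   # ~1e-8: ~7 decimal digits

half_max = float(np.finfo(np.float16).max)   # largest finite float16: 65504
```

The small float16 exponent also caps its range at 65504, whereas bfloat16 trades fraction bits for float32's full ≈10³⁸ range.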
Floating-point math will bite you: basic equalities are almost certainly incorrect for some values. Compare with tolerance-based functions in numpy (not necessarily others!), e.g. np.isclose / np.allclose.
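A sketch of the classic failure and the tolerant fix:

```python
import numpy as np

# Both sides are rounded binary fractions, so exact equality fails.
lhs = 0.1 + 0.2
naive_equal = (lhs == 0.3)           # False!
tolerant_equal = np.isclose(lhs, 0.3)  # True: compares within a tolerance
```

np.isclose (elementwise) and np.allclose (whole-array) take rtol/atol arguments if the defaults are too loose or too tight for your data.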
" I
G 5/G
There are other norms; assume L2 unless told otherwise
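A sketch computing a few common norms with np.linalg.norm (ord=2 is the default, matching the "assume L2" convention):

```python
import numpy as np

v = np.array([3.0, 4.0])
l2 = np.linalg.norm(v)                # sqrt(3^2 + 4^2) = 5 (default: L2)
l1 = np.linalg.norm(v, ord=1)        # |3| + |4| = 7
linf = np.linalg.norm(v, ord=np.inf)  # max(|3|, |4|) = 4
```

The same function handles matrix norms too, but with different ord semantics, so check which one you are getting when the input is 2-D.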
The cross product resolves the ambiguity in sign and magnitude: the result (1) is orthogonal to 𝒙, 𝒚, (2) has sign given by the right-hand rule, and (3) has magnitude given by the area of the parallelogram of 𝒙 and 𝒚. If 𝒙 and 𝒚 point in the same direction or either is 𝟎, then 𝒙×𝒚 = 𝟎.
Image credit: Wikipedia.org
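A sketch checking these three properties with np.cross:

```python
import numpy as np

# Right-hand rule: x-axis cross y-axis gives the z-axis.
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = np.cross(x, y)

# Parallel vectors span no area, so the cross product is the zero vector.
parallel = np.cross(x, 2.0 * x)
```

For unit axes the parallelogram has area 1, so the result is exactly the unit z vector, orthogonal to both inputs.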
Horizontally concatenate n m-dim column vectors and you get an m×n matrix A (here 2×3).
Notation: a scalar is lowercase and undecorated (a); a vector is lowercase bold or arrowed (𝐚); a matrix is uppercase bold (𝐀).
Watch out: in math it’s common to treat a D-dim vector as a D×1 matrix (a column vector); in numpy these are different things.
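A sketch of that distinction, which is a frequent source of shape bugs:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # shape (3,): a 1-D array, no orientation
col = v.reshape(3, 1)           # shape (3, 1): a 2-D column vector
row = v.reshape(1, 3)           # shape (1, 3): a 2-D row vector
```

A (3,) array has no row/column orientation at all: transposing it is a no-op, while transposing the (3, 1) column gives the (1, 3) row.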
Vertically concatenate m n-dim row vectors and you get an m×n matrix A (here 2×3).
Transpose: flip rows and columns.
𝑨𝒙 is a linear combination of the columns of 𝑨.
𝑨𝒙 is also a dot product between the rows of 𝑨 and 𝒙.
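A sketch verifying that both views of the matrix-vector product give the same answer:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, 0.0, -1.0])

# View 1: each output entry is a dot product of a row of A with x.
by_rows = np.array([A[0] @ x, A[1] @ x])

# View 2: the output is a linear combination of the columns of A.
by_cols = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
```

Both equal A @ x; which view is more useful depends on whether you are reasoning about individual outputs (rows) or about the column space of A.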
Yes – in 𝑨 I’m referring to the rows, and in 𝑩 I’m referring to the columns: each entry of 𝑨𝑩 is a dot product of a row of 𝑨 with a column of 𝑩.
𝑨𝑩 = 𝑨[𝒄₁ 𝒄₂ ⋯ 𝒄ₖ] = [𝑨𝒄₁ 𝑨𝒄₂ ⋯ 𝑨𝒄ₖ]: each column of 𝑨𝑩 is 𝑨 times the corresponding column of 𝑩.
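A sketch confirming the column view of matrix multiplication:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

AB = A @ B
# Each column of A @ B is A applied to the matching column of B.
col0 = A @ B[:, 0]
col1 = A @ B[:, 1]
```

This view is handy in practice: multiplying a matrix by a batch of vectors stacked as columns transforms every vector at once.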
If you want to be pedantic and proper, you expand e by multiplying by a matrix of 1s (denoted 𝟏). Many smart matrix libraries do this automatically; this is the source of many bugs.
Given: an n×2 matrix P and a 2D column vector v. Want: an n×2 difference matrix D.
Blue entries on the slide are assumed / broadcast.
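A sketch of this subtraction using numpy broadcasting (sample values are mine):

```python
import numpy as np

# P is n x 2 (here n = 3); v has shape (2,), so it broadcasts across rows.
P = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 4.0]])
v = np.array([1.0, 2.0])

D = P - v   # v is subtracted from every row of P; no copies materialized
```

Note that if v is stored as a true (2, 1) column, P - v raises a shape error; flatten it first with v.ravel() (or transpose it to a (1, 2) row) before subtracting.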
Matrices give you: storage for images and feature maps; convolution (which we’ll cover next); everything from your math linear algebra class; transformations of space (𝑨𝒙).
Suppose someone hands you this matrix. What’s wrong with it?
The typical way to change the contrast is to apply a nonlinear correction: pixelvalue^𝛿. The exponent 𝛿 controls how much contrast gets added.
Now the darkest regions (10th percentile) are much darker than the moderately dark regions (50th percentile).
Phew! Much Better.
Python+Numpy (right way):

    imNew = im ** expFactor

Python+Numpy (slow way – why?):

    imNew = np.zeros(im.shape)
    for y in range(im.shape[0]):
        for x in range(im.shape[1]):
            imNew[y, x] = im[y, x] ** expFactor

The loop is slow because every pixel goes through the Python interpreter one at a time; the vectorized version processes the whole array in a single optimized operation.