Lecture 7: More Math + Image Filtering
Justin Johnson, EECS 442 WI 2020, January 30, 2020


SLIDE 1

Lecture 7: More Math + Image Filtering

SLIDE 2

Administrative

HW0 was due yesterday! HW1 due a week from yesterday.

SLIDE 3

Cool Talk Today:

https://cse.engin.umich.edu/event/numpy-a-look-at-the-past-present-and-future-of-array-computation

SLIDE 4

Last Time: Matrices, Vectorization, Linear Algebra

SLIDE 5

Eigensystems

  • An eigenvector v_i and eigenvalue λ_i of a matrix A satisfy A v_i = λ_i v_i (A v_i is scaled by λ_i)
  • Vectors and values are always paired, and typically you assume ||v_i|| = 1
  • The biggest eigenvalue of A gives bounds on how much f(x) = Ax stretches a vector x
  • Hints of what people really mean:
  • "Largest eigenvector" = the eigenvector with the largest eigenvalue
  • "Spectral" just means there's eigenvectors somewhere
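These definitions are easy to check numerically; a minimal NumPy sketch (the example matrix here is my own, not from the slides):

```python
import numpy as np

# A small symmetric example matrix (chosen for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)  # columns of vecs are the eigenvectors

# Each pair satisfies A v_i = lambda_i v_i
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# numpy returns eigenvectors with unit norm, matching the usual convention
assert np.allclose(np.linalg.norm(vecs, axis=0), 1.0)
```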
SLIDE 6

Suppose I have points in a grid

SLIDE 7

Now I apply f(x) = Ax to these points. Pointy end: Ax. Non-pointy end: x.

SLIDE 8

Red box – unit square. Blue box – after f(x) = Ax. What are the yellow lines and why?

A =
1.1 1.1

SLIDE 9

Now I apply f(x) = Ax to these points. Pointy end: Ax. Non-pointy end: x.

A =
0.8 1.25

SLIDE 10

Red box – unit square. Blue box – after f(x) = Ax. What are the yellow lines and why?

A =
0.8 1.25

SLIDE 11

Red box – unit square. Blue box – after f(x) = Ax. Can we draw any yellow lines?

A =
cos(θ) −sin(θ)
sin(θ)  cos(θ)

SLIDE 12

Eigenvectors of Symmetric Matrices

  • Always n mutually orthogonal eigenvectors with n (not necessarily distinct) eigenvalues
  • For symmetric A, the eigenvector with the largest eigenvalue maximizes x^T A x / x^T x (and the one with the smallest eigenvalue minimizes it)
  • So for unit vectors (where x^T x = 1), that eigenvector maximizes x^T A x
  • A surprisingly large number of optimization problems rely on (max/min)imizing this
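A quick numerical spot-check of the Rayleigh-quotient claim (a sketch; the matrix and the number of random samples are my own choices):

```python
import numpy as np

# A symmetric example matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

vals, vecs = np.linalg.eigh(A)  # eigh: ascending eigenvalues, orthonormal eigenvectors
v_max = vecs[:, -1]             # eigenvector of the largest eigenvalue

# At the top eigenvector, x^T A x equals the largest eigenvalue
assert np.isclose(v_max @ A @ v_max, vals[-1])

# Among many random unit vectors, none beats x^T A x at v_max
rng = np.random.default_rng(0)
xs = rng.normal(size=(1000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
quad = np.einsum('ni,ij,nj->n', xs, A, xs)  # x^T A x for each row
assert quad.max() <= v_max @ A @ v_max + 1e-9
```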

SLIDE 13

Singular Value Decomposition

Can always write an m×n matrix A as: A = UΣV^T

U: rotation – eigenvectors of AA^T
Σ: scale – square roots of the eigenvalues of A^T A (the singular values σ1, σ2, σ3)

SLIDE 14

Singular Value Decomposition

Can always write an m×n matrix A as: A = UΣV^T

U: rotation – eigenvectors of AA^T
Σ: scale – square roots of the eigenvalues of A^T A (the singular values σ1, σ2, σ3)
V^T: rotation – eigenvectors of A^T A
SLIDE 15

Singular Value Decomposition

  • Every matrix is a rotation, scaling, and rotation
  • Number of non-zero singular values = rank / number of linearly independent vectors
  • "Closest" matrix to A with a lower rank

A = U diag(σ1, σ2, σ3) V^T

SLIDE 16

Singular Value Decomposition

  • Every matrix is a rotation, scaling, and rotation
  • Number of non-zero singular values = rank / number of linearly independent vectors
  • "Closest" matrix to A with a lower rank: drop the smallest singular values

Â = U diag(σ1, σ2, 0) V^T

SLIDE 17

Singular Value Decomposition

  • Every matrix is a rotation, scaling, and rotation
  • Number of non-zero singular values = rank / number of linearly independent vectors
  • "Closest" matrix to A with a lower rank
  • Secretly behind basically anything you do with matrices
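All three bullets can be seen directly with `np.linalg.svd` (a sketch; the random matrix and the target rank k = 2 are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s is sorted descending

# Number of non-zero singular values = rank
assert np.sum(s > 1e-10) == np.linalg.matrix_rank(A)

# Best rank-2 approximation: keep only the two largest singular values
k = 2
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.linalg.matrix_rank(A_hat) == k

# Eckart-Young: the spectral-norm error equals the first dropped singular value
assert np.isclose(np.linalg.norm(A - A_hat, 2), s[k])
```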

SLIDE 18

Solving Least-Squares

Start with two points (x_i, y_i):

[y_1]   [x_1 1] [m]
[y_2] = [x_2 1] [b]        y = Av

[y_1]   [m x_1 + b]
[y_2] = [m x_2 + b]

We know how to solve this – invert A and find v (i.e., the (m, b) that fits the points (x_1, y_1), (x_2, y_2))

SLIDE 19

Solving Least-Squares

Start with two points (x_i, y_i):

[y_1]   [x_1 1] [m]
[y_2] = [x_2 1] [b]        y = Av

||y − Av||^2 = (y_1 − (m x_1 + b))^2 + (y_2 − (m x_2 + b))^2

The sum of squared differences between the actual value of y and what the model says y should be.

SLIDE 20

Solving Least-Squares

Suppose there are n > 2 points:

[y_1]   [x_1 1]
[ ⋮ ] = [ ⋮  ⋮] [m]        y = Av
[y_n]   [x_n 1] [b]

Compute ||y − Av||^2 again:

||y − Av||^2 = Σ_{i=1}^{n} (y_i − (m x_i + b))^2

SLIDE 21

Solving Least-Squares

Given y, A, and v with y = Av overdetermined (A tall / more equations than unknowns), we want to minimize ||y − Av||^2, or find:

arg min_v ||y − Av||^2

(The value of v that makes the expression smallest)

The solution satisfies the normal equations: (A^T A) v* = A^T y

or v* = (A^T A)^(−1) A^T y

(Don't actually compute the inverse!)
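In practice you solve this with a factorization rather than an explicit inverse; a sketch with made-up points (`np.linalg.lstsq` is the standard route):

```python
import numpy as np

# Fit y = m*x + b to some noisy points (data invented for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.stack([x, np.ones_like(x)], axis=1)  # tall matrix: one row [x_i, 1] per point

# Preferred: lstsq (SVD-based), not an explicit inverse
v, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, b = v

# Same answer as the normal equations (A^T A) v = A^T y, solved without inv()
v2 = np.linalg.solve(A.T @ A, A.T @ y)
assert np.allclose(v, v2)
```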

SLIDE 22

When is Least-Squares Possible?

Given y, A, and v. Want y = Av.

Want n outputs, have n knobs to fiddle with; every knob is useful if A is full rank.

A: rows (outputs) > columns (knobs). Thus can't get precisely the output you want (not enough knobs). So settle for the "closest" knob setting.

SLIDE 23

When is Least-Squares Possible?

Given y, A, and v. Want y = Av.

Want n outputs, have n knobs to fiddle with; every knob is useful if A is full rank.

A: columns (knobs) > rows (outputs). Thus, any output can be expressed in infinitely many ways.

SLIDE 24

Homogeneous Least-Squares

Given a set of unit vectors (aka directions) x_1, …, x_n, I want the vector v that is as orthogonal to all the x_i as possible (for some definition of orthogonal).

Stack the x_i^T as the rows of A, compute Av:

Av = [− x_1^T −; ⋮; − x_n^T −] v = [x_1^T v; ⋮; x_n^T v]

Compute ||Av||^2 = Σ_i (x_i^T v)^2    (each term is 0 if v is orthogonal to x_i)

This is the sum of how orthogonal v is to each x_i.

SLIDE 25

Homogeneous Least-Squares

  • A lot of times, given a matrix A, we want to find the v that minimizes ||Av||^2
  • I.e., want v* = arg min_v ||Av||^2
  • What's a trivial solution? Set v = 0 → Av = 0
  • Exclude this by forcing v to have unit norm
SLIDE 26

Homogeneous Least-Squares

Let's look at ||Av||^2:

Rewrite as a dot product:    ||Av||^2 = (Av)^T (Av)

Distribute the transpose:    ||Av||^2 = v^T A^T A v = v^T (A^T A) v

We want the vector minimizing this quadratic form. Where have we seen this?

SLIDE 27

Homogeneous Least-Squares

arg min_{||v||=1} ||Av||^2 =

(1) "Smallest"* eigenvector of A^T A
(2) "Smallest" right singular vector of A

*Note: A^T A is positive semi-definite, so all its eigenvalues are non-negative.

Ubiquitous tool in vision. For min → max, switch smallest → largest.
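The recipe in code (a sketch; the random A and sample count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))  # rows play the role of the stacked x_i^T

# v* = argmin ||Av||^2 subject to ||v|| = 1 -> last right singular vector
U, s, Vt = np.linalg.svd(A)
v_star = Vt[-1]

assert np.isclose(np.linalg.norm(v_star), 1.0)

# No random unit vector does better than v_star
cands = rng.normal(size=(1000, 3))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)
best_rand = np.min(np.sum((cands @ A.T) ** 2, axis=1))  # ||Av||^2 per candidate
assert np.sum((A @ v_star) ** 2) <= best_rand + 1e-9
```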

SLIDE 28

Derivatives

SLIDE 29

Derivatives

Remember derivatives? The derivative is the rate at which a function f(x) changes at a point, as well as the direction that increases the function.

SLIDE 30

Given quadratic function f(x): f′(x) is the function f′(x) = (d/dx) f(x)

f(x) = (x − 2)^2 + 5

SLIDE 31

Given quadratic function f(x) = (x − 2)^2 + 5

What's special about x = 2? f(x) is minimized at 2, and f′(x) = 0 at 2.
a = minimum of f → f′(a) = 0. The reverse is not true.

SLIDE 32

Rates of change

Suppose I want to increase f(x) by changing x. Blue area: move left. Red area: move right. The derivative tells you the direction of ascent and the rate.

f(x) = (x − 2)^2 + 5

SLIDE 33

Calculus to Know

  • Really need intuition
  • Need chain rule
  • Rest you should look up / use a computer algebra system / use a cookbook
  • Partial derivatives (and that's it from multivariable calculus)

SLIDE 34

Partial Derivatives

  • Pretend other variables are constant, take a derivative. That's it.
  • Make our function a function of two variables:

f(x) = (x − 2)^2 + 5
(∂/∂x) f(x) = 2(x − 2) · 1 = 2(x − 2)

f_2(x, y) = (x − 2)^2 + 5 + (y + 1)^2
(∂/∂x) f_2(x, y) = 2(x − 2)    (pretend (y + 1)^2 is constant → its derivative is 0)

SLIDE 35

Zooming Out

f_2(x, y) = (x − 2)^2 + 5 + (y + 1)^2

Dark = f(x, y) low; Bright = f(x, y) high

SLIDE 36

Taking a slice of f_2(x, y) = (x − 2)^2 + 5 + (y + 1)^2

The slice at y = 0 is the function from before: f(x) = (x − 2)^2 + 5, f′(x) = 2(x − 2)

SLIDE 37

Taking a slice of f_2(x, y) = (x − 2)^2 + 5 + (y + 1)^2

(∂/∂x) f_2(x, y) is the rate of change & direction in the x dimension

SLIDE 38

Zooming Out

f_2(x, y) = (x − 2)^2 + 5 + (y + 1)^2

Gradient/Jacobian: making a vector of the partials, ∇f = (∂f/∂x, ∂f/∂y), gives the rate and direction of change.

Arrows point OUT of the minimum / basin.

SLIDE 39

What Should I Know?

  • Gradients are simply per-dimension partial derivatives: if x in f(x) has n dimensions, ∇f(x) has n dimensions
  • Gradients point in the direction of ascent and tell the rate of ascent
  • If a is a minimum of f(x), then ∇f(a) = 0
  • The reverse is not true, especially in high-dimensional spaces
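One way to build intuition (and catch mistakes) is to compare an analytic gradient against central differences, using the lecture's example function (the test point is my own choice):

```python
import numpy as np

def f(p):
    # f_2(x, y) = (x - 2)^2 + 5 + (y + 1)^2, the lecture's example
    x, y = p
    return (x - 2) ** 2 + 5 + (y + 1) ** 2

def grad_f(p):
    # Analytic gradient: per-dimension partial derivatives
    x, y = p
    return np.array([2 * (x - 2), 2 * (y + 1)])

def numeric_grad(fn, p, h=1e-6):
    # Central differences, one dimension at a time
    g = np.zeros_like(p, dtype=float)
    for i in range(len(p)):
        e = np.zeros_like(p, dtype=float)
        e[i] = h
        g[i] = (fn(p + e) - fn(p - e)) / (2 * h)
    return g

p = np.array([0.5, 3.0])
assert np.allclose(grad_f(p), numeric_grad(f, p), atol=1e-4)

# The gradient is 0 at the minimum (2, -1); the reverse implication would not hold in general
assert np.allclose(grad_f(np.array([2.0, -1.0])), 0.0)
```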

SLIDE 40

Image Filtering

SLIDE 41

A Noisy Image

SLIDE 42

Cleaning it up

  • We have noise in our image
  • Let's replace each pixel with a weighted average of its neighborhood
  • The weights are the filter kernel:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Slide Credit: D. Lowe

SLIDE 43

1D Case

Filter: 1/3 1/3 1/3
Signal: 10 12 9 11 10 11 12
Output: 10.33

SLIDE 44

1D Case

Filter: 1/3 1/3 1/3
Signal: 10 12 9 11 10 11 12
Output: 10.33 10.66

SLIDE 45

1D Case

Filter: 1/3 1/3 1/3
Signal: 10 12 9 11 10 11 12
Output: 10.33 10.66 10

SLIDE 46

1D Case

Filter: 1/3 1/3 1/3
Signal: 10 12 9 11 10 11 12
Output: 10.33 10.66 10 10.66

SLIDE 47

1D Case

Filter: 1/3 1/3 1/3
Signal: 10 12 9 11 10 11 12
Output: 10.33 10.66 10 10.66 11
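The whole walkthrough is one call to `np.convolve` (the box kernel is symmetric, so convolution and sliding-average agree; note the slides truncate 32/3 to 10.66 where rounding would give 10.67):

```python
import numpy as np

signal = np.array([10, 12, 9, 11, 10, 11, 12], dtype=float)
kernel = np.ones(3) / 3.0  # the 1/3 1/3 1/3 filter

# 'valid' keeps only the positions where the filter fully overlaps the signal
out = np.convolve(signal, kernel, mode='valid')
# values: 10.33, 10.67, 10, 10.67, 11 (to two decimals)
```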

SLIDE 48

Applying a 2D Filter

Input (5×6):
I11 I12 I13 I14 I15 I16
I21 I22 I23 I24 I25 I26
I31 I32 I33 I34 I35 I36
I41 I42 I43 I44 I45 I46
I51 I52 I53 I54 I55 I56

Filter (3×3):
F11 F12 F13
F21 F22 F23
F31 F32 F33

Output (3×4):
O11 O12 O13 O14
O21 O22 O23 O24
O31 O32 O33 O34

SLIDE 49

Applying a 2D Filter

Input & Filter → Output:

O11 = I11*F11 + I12*F12 + … + I33*F33

SLIDE 50

Applying a 2D Filter

Input & Filter → Output:

O12 = I12*F11 + I13*F12 + … + I34*F33

SLIDE 51

Applying a 2D Filter

How many times can we apply a 3×3 filter to a 5×6 image?

SLIDE 52

Applying a 2D Filter

Oij = Iij*F11 + Ii(j+1)*F12 + … + I(i+2)(j+2)*F33
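Written as a direct (if slow) loop, this "valid" filtering of a 5×6 image with a 3×3 kernel yields the 3×4 output from the slides; a sketch (the ramp image is my own test input):

```python
import numpy as np

def filter2d(image, kernel):
    """'Valid' cross-correlation: slide the kernel, no flipping, no padding."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # O_ij = sum of the elementwise product of the window and the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(30, dtype=float).reshape(5, 6)  # the 5x6 size from the slides
box = np.ones((3, 3)) / 9.0

out = filter2d(image, box)
# out.shape == (3, 4): a 3x3 filter fits in 3 * 4 = 12 positions of a 5x6 image
```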

SLIDE 53

Edge Cases

Convolution doesn't keep the whole image. Suppose f is the image and g the filter.

Full: any part of g touches f.
Same: output is the same size as f.
Valid: the filter doesn't fall off the edge.

f/g Diagram Credit: D. Lowe

SLIDE 54

Edge Cases

What to do about the "?" region?

Symm: fold the sides over.
Pad/fill: add a value, often 0.
Circular/wrap: wrap around.

f/g Diagram Credit: D. Lowe
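`np.pad` implements all three strategies; a 1D sketch on a made-up row:

```python
import numpy as np

row = np.array([1, 2, 3, 4])

zero = np.pad(row, 2, mode='constant')   # pad/fill with 0:  [0 0 1 2 3 4 0 0]
symm = np.pad(row, 2, mode='symmetric')  # fold sides over:  [2 1 1 2 3 4 4 3]
wrap = np.pad(row, 2, mode='wrap')       # circular/wrap:    [3 4 1 2 3 4 1 2]
```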

SLIDE 55

Edge Cases: Does It Matter?

Input Image | Box Filtered ??? | Box Filtered ??? (I've applied the filter per color channel.) Which padding did I use and why?

Note – this is a zoom of the filtered image, not a filter of the zoomed image.

SLIDE 56

Edge Cases: Does It Matter?

Input Image | Box Filtered, Symm Pad | Box Filtered, Zero Pad (I've applied the filter per color channel.)

Note – this is a zoom of the filtered image, not a filter of the zoomed image.

SLIDE 57

Practice with Linear Filters

Original → filter containing a single 1 → ?

Slide Credit: D. Lowe

SLIDE 58

Practice with Linear Filters

Original → filter containing a single 1 → The Same!

Slide Credit: D. Lowe

SLIDE 59

Practice with Linear Filters

Original → filter containing a single 1 → ?

Slide Credit: D. Lowe

SLIDE 60

Practice with Linear Filters

Original → filter containing a single 1 → Shifted LEFT 1 pixel

Slide Credit: D. Lowe

SLIDE 61

Practice with Linear Filters

Original → filter containing a single 1 → ?

Slide Credit: D. Lowe

SLIDE 62

Practice with Linear Filters

Original → filter containing a single 1 → Shifted DOWN 1 pixel

Slide Credit: D. Lowe

SLIDE 63

Practice with Linear Filters

Original → filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
→ ?

Slide Credit: D. Lowe

SLIDE 64

Practice with Linear Filters

Original → filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
→ Blur (Box Filter)

Slide Credit: D. Lowe

SLIDE 65

Practice with Linear Filters

Original → filter: a 2 at the center minus the 1/9 box filter → ?

Slide Credit: D. Lowe

SLIDE 66

Practice with Linear Filters

Original → filter: a 2 at the center minus the 1/9 box filter → Sharpened

(Accentuates difference from local average)

Slide Credit: D. Lowe
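One way to realize "accentuate the difference from the local average" as a single kernel, following the 2-at-center-minus-box reading of the slide (a sketch, not the slides' exact code):

```python
import numpy as np

impulse = np.zeros((3, 3))
impulse[1, 1] = 1.0            # the identity filter: a single 1, all zeros
box = np.ones((3, 3)) / 9.0    # the blur filter

# out = 2*I - blur(I) is the same as filtering once with (2*impulse - box)
sharpen = 2 * impulse - box

# The weights sum to 1, so flat regions pass through unchanged;
# anything differing from its local average gets exaggerated.
assert np.isclose(sharpen.sum(), 1.0)
```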

SLIDE 67

Sharpening

Slide Credit: D. Lowe

SLIDE 68

Properties: Linear

Assume: I is an image and f1, f2 are filters. Linear: apply(I, f1 + f2) = apply(I, f1) + apply(I, f2). Here I is a white box on black, and f1, f2 are rectangles.

Note: I am showing the filters un-normalized and blown up. They're smaller box filters (i.e., each entry is 1/(size^2)).

SLIDE 69

Properties: Shift-Invariant

Assume: I is an image and f a filter. Shift-invariant: shift(apply(I, f)) = apply(shift(I), f). Intuitively: the output only depends on the filter neighborhood.

Note: "Shift-Invariant" is standard terminology, but I think "Shift-Equivariant" is more correct.

SLIDE 70

Annoying Terminology

Often called "convolution". Actually cross-correlation. Cross-correlation: slide the filter over the image. Convolution: flip the filter, then slide it.
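For a symmetric filter the two coincide; an asymmetric kernel shows the difference (a sketch with a 1D impulse of my own):

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # an impulse
kernel = np.array([1.0, 2.0, 3.0])            # deliberately asymmetric

conv = np.convolve(signal, kernel, mode='same')        # convolution flips the kernel
corr = np.convolve(signal, kernel[::-1], mode='same')  # correlation == convolve with the flip

# Convolution stamps a copy of the kernel at the impulse; correlation stamps it reversed.
```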

SLIDE 71

Properties of Convolution

  • Any shift-invariant, linear operation is a convolution (∗)
  • Commutative: f ∗ g = g ∗ f
  • Associative: (f ∗ g) ∗ h = f ∗ (g ∗ h)
  • Distributes over +: f ∗ (g + h) = f ∗ g + f ∗ h
  • Scalars factor out: kf ∗ g = f ∗ kg = k (f ∗ g)
  • Identity: a filter with a single one and all zeros

Property List: K. Grauman
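The algebraic properties are easy to spot-check with `np.convolve` on random 1D signals (a sketch; the signals and scalar are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=7)
g = rng.normal(size=3)
h = rng.normal(size=3)
k = 2.5

conv = lambda a, b: np.convolve(a, b)  # 'full' convolution

assert np.allclose(conv(f, g), conv(g, f))                    # commutative
assert np.allclose(conv(conv(f, g), h), conv(f, conv(g, h)))  # associative
assert np.allclose(conv(f, g + h), conv(f, g) + conv(f, h))   # distributes over +
assert np.allclose(conv(k * f, g), k * conv(f, g))            # scalars factor out
assert np.allclose(conv(f, np.array([1.0])), f)               # identity filter
```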

SLIDE 72

Next Time: More Image Filtering, Edge Detection