Lecture 23: Principal Component Analysis (Aykut Erdem, January 2019, Hacettepe University)


slide-1
SLIDE 1

Lecture 23: Principal Component Analysis

Aykut Erdem

January 2019, Hacettepe University

slide-2
SLIDE 2

Administrative

Project Presentations

January 17-18, 2019 


  • Each project group will have ~10 mins to present their work in class.

The suggested outline for the presentations is as follows:

  • High-level overview of the paper (main contributions)
  • Problem statement and motivation (clear definition of the problem, why it is interesting and important)
  • Key technical ideas (overview of the approach)
  • Experimental set-up (datasets, evaluation metrics, applications)
  • Strengths and weaknesses (discussion of the results obtained)

  • In addition to the classroom presentations, each group should also prepare an engaging video presentation of their work using online tools such as PowToon, moovly or GoAnimate (due January 11, 2019).

2

slide-3
SLIDE 3

Final Reports (Due January 18, 2019)


  • The report should be prepared using LaTeX and be 6-8 pages long. A typical organization of a report might follow:
  • Title, Author(s).
  • Abstract. This section introduces the problem that you investigated by providing a general motivation and briefly discusses the approach(es) that you explored.
  • Introduction.
  • Related Work. This section discusses relevant literature for your project topic.
  • The Approach. This section gives the technical details about your project work. You should describe the representation(s) and the algorithm(s) that you employed or proposed as detailed and specific as possible.
  • Experimental Results. This section presents some experiments in which you analyze the performance of the approach(es) you proposed or explored. You should provide a qualitative and/or quantitative analysis, and comment on your findings. You may also demonstrate the limitations of the approach(es).
  • Conclusions. This section summarizes all your project work, focusing on the key results you obtained. You may also suggest possible directions for future work.
  • References. This section gives a list of all related work you reviewed or used.

3
slide-4
SLIDE 4

Last time… Graph-Theoretic Clustering

Goal: Given data points X1, ..., Xn and similarities W(Xi ,Xj), partition the data into groups so that points in a group are similar and points in different groups are dissimilar.

4

Similarity Graph: G(V,E,W)
  • V – vertices (data points)
  • E – edge if similarity > 0
  • W – edge weights (similarities)

Partition the graph so that edges within a group have large weights and edges across groups have small weights.

Similarity graph

slide by Aarti Singh

slide-5
SLIDE 5

Last time… K-Means vs. Spectral Clustering

  • Applying k-means to the Laplacian eigenvectors allows us to find clusters with non-convex boundaries.

5

[Figure: k-means output vs. spectral clustering output]

slide by Aarti Singh

slide-6
SLIDE 6

6

Last time…

Bottom-Up (agglomerative): Start with each item in its own cluster, find the best pair to merge into a new cluster. Repeat until all clusters are fused together.

slide by Andrew Moore

slide-7
SLIDE 7

Today

  • Dimensionality Reduction
  • PCA algorithms
  • Applications

7

slide-8
SLIDE 8

Dimensionality 
 Reduction

8

slide-9
SLIDE 9

Motivation I: Data Visualization

9

Instances (rows) x Features (columns):

        H-WBC   H-RBC   H-Hgb    H-Hct    H-MCV     H-MCH    H-MCHC
  A1    8.0000  4.8200  14.1000  41.0000  85.0000   29.0000  34.0000
  A2    7.3000  5.0200  14.7000  43.0000  86.0000   29.0000  34.0000
  A3    4.3000  4.4800  14.1000  41.0000  91.0000   32.0000  35.0000
  A4    7.5000  4.4700  14.9000  45.0000  101.0000  33.0000  33.0000
  A5    7.3000  5.5200  15.4000  46.0000  84.0000   28.0000  33.0000
  A6    6.9000  4.8600  16.0000  47.0000  97.0000   33.0000  34.0000
  A7    7.8000  4.6800  14.7000  43.0000  92.0000   31.0000  34.0000
  A8    8.6000  4.8200  15.8000  42.0000  88.0000   33.0000  37.0000
  A9    5.1000  4.7100  14.0000  43.0000  92.0000   30.0000  32.0000

  • 53 blood and urine measurements (features) from 65 people (instances)
  • Difficult to see the correlations between features

slide by Alex Smola

slide-10
SLIDE 10

Motivation I: Data Visualization

  • Spectral format (65 curves, one for each person)
  • Difficult to compare different patients

10

[Plot: value vs. measurement index, one curve per person]

slide by Alex Smola

slide-11
SLIDE 11

Motivation I: Data Visualization

  • Spectral format (53 pictures, one for each feature)

11

[Plot: H-Bands value vs. person]

  • Difficult to see the correlations between features

slide by Alex Smola

slide-12
SLIDE 12

Motivation I: Data Visualization

12

[Bi-variate plot: C-Triglycerides vs. C-LDH. Tri-variate plot: C-Triglycerides, C-LDH, M-EPI]

… difficult to see in 4 or higher dimensional spaces...

slide by Alex Smola

Even 3 dimensions are already difficult. How to extend this?

slide-13
SLIDE 13

Motivation I: Data Visualization

  • Is there a representation better than the coordinate axes?

  • Is it really necessary to show all the 53 dimensions?
  • ... what if there are strong correlations between the features?

  • How could we find the smallest subspace of the 53-D space that keeps the most information about the original data?


13

slide by Barnabás Póczos and Aarti Singh

slide-14
SLIDE 14

Reduce data from 2D to 1D

Motivation II: Data Compression

slide by Andrew Ng

(inches) (cm)

slide-15
SLIDE 15

(inches) (cm)

Motivation II: Data Compression

slide by Andrew Ng

Reduce data from 2D to 1D

slide-16
SLIDE 16

Motivation II: Data Compression

slide by Andrew Ng

Reduce data from 3D to 2D

slide-17
SLIDE 17

Dimensionality Reduction

  • Clustering
    • One way to summarize a complex real-valued data point with a single categorical variable

  • Dimensionality reduction
    • Another way to simplify complex high-dimensional data
    • Summarize data with a lower-dimensional real-valued vector

17

slide by Fereshteh Sadeghi

  • Given data points in d dimensions
  • Convert them to data points in r<d dims
  • With minimal loss of information
slide-18
SLIDE 18

Principal Component 
 Analysis

18

slide-19
SLIDE 19

Principal Component Analysis

PCA: Orthogonal projection of the data onto a lower-dimensional linear space that...

  • maximizes the variance of the projected data (purple line)
  • minimizes the mean squared distance between each data point and its projection (sum of blue lines)

19

  • slide by Barnabás Póczos and Aarti Singh
slide-20
SLIDE 20

Principal Component Analysis

  • PCA vectors originate from the center of mass.

  • Principal component #1: points in the direction of the largest variance.

  • Each subsequent principal component is orthogonal to the previous ones, and points in the direction of the largest variance of the residual subspace.

20

slide by Barnabás Póczos and Aarti Singh

slide-21
SLIDE 21

2D Gaussian dataset

21

slide by Barnabás Póczos and Aarti Singh

slide-22
SLIDE 22

1st PCA axis

22

slide by Barnabás Póczos and Aarti Singh

slide-23
SLIDE 23

2nd PCA axis

23

slide by Barnabás Póczos and Aarti Singh
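To make slides 21-23 concrete, here is a minimal numpy sketch (my own illustration, not from the lecture): it draws a 2D Gaussian dataset and recovers the two PCA axes from the sample covariance matrix; the covariance values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# A correlated 2D Gaussian dataset, one point per row (covariance chosen arbitrarily).
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[3.0, 1.5], [1.5, 1.0]], size=500)

Xc = X - X.mean(axis=0)                    # center the data
cov = Xc.T @ Xc / len(Xc)                  # 2 x 2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
for rank, idx in enumerate(order, start=1):
    print(f"PCA axis {rank}: direction {eigvecs[:, idx]}, variance {eigvals[idx]:.3f}")
```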

slide-24
SLIDE 24

PCA algorithm I (sequential)

24

Given the centered data {x_1, …, x_m}, compute the principal vectors:

1st PCA vector (maximize the variance of the projection of x):

    w_1 = \arg\max_{\|w\|=1} \frac{1}{m} \sum_{i=1}^{m} (w^T x_i)^2

2nd PCA vector (maximize the variance of the projection in the residual subspace):

    w_2 = \arg\max_{\|w\|=1} \frac{1}{m} \sum_{i=1}^{m} \big[ w^T (x_i - w_1 w_1^T x_i) \big]^2

x' = w_1 (w_1^T x) is the PCA reconstruction of x, and x - x' is the residual.
slide by Barnabás Póczos and Aarti Singh

slide-25
SLIDE 25

PCA algorithm I (sequential)

25

Given w_1, …, w_{k-1}, we calculate the w_k principal vector as before (maximize the variance of the projection in the residual subspace):

    w_k = \arg\max_{\|w\|=1} \frac{1}{m} \sum_{i=1}^{m} \Big[ w^T \Big( x_i - \sum_{j=1}^{k-1} w_j w_j^T x_i \Big) \Big]^2

PCA reconstruction with two components: x' = w_1 (w_1^T x) + w_2 (w_2^T x).

slide by Barnabás Póczos and Aarti Singh
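A minimal numpy sketch of this sequential procedure (my own illustration, not the lecture's code): each principal vector is found by power iteration on the covariance of the residual data, then the explained component is projected out (deflation).

```python
import numpy as np

def sequential_pca(X, k, n_iter=200):
    """X: centered data, one sample per row (m x d). Returns the top-k principal vectors (d x k)."""
    m, d = X.shape
    residual = X.astype(float).copy()
    W = np.zeros((d, k))
    for j in range(k):
        cov = residual.T @ residual / m          # covariance of the residual data
        w = np.random.default_rng(j).normal(size=d)
        for _ in range(n_iter):                  # power iteration -> direction of largest variance
            w = cov @ w
            w /= np.linalg.norm(w)
        W[:, j] = w
        residual -= (residual @ w)[:, None] * w[None, :]   # project out the j-th component
    return W

# Toy usage: the recovered directions match the top eigenvectors of the covariance matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))
X -= X.mean(axis=0)
print(sequential_pca(X, k=2).shape)   # (5, 2)
```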

slide-26
SLIDE 26

26

PCA algorithm II (sample covariance matrix)

  • Given data {x1, …, xm}, compute the covariance matrix \Sigma:

    \Sigma = \frac{1}{m} \sum_{i=1}^{m} (x_i - \bar{x})(x_i - \bar{x})^T, \qquad \text{where} \quad \bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i

  • PCA basis vectors = the eigenvectors of \Sigma
  • Larger eigenvalue => more important eigenvector

slide by Barnabás Póczos and Aarti Singh
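A minimal numpy sketch of algorithm II (my own illustration; the slides give no code): center the data, form the sample covariance matrix, and keep the eigenvectors with the largest eigenvalues.

```python
import numpy as np

def pca_cov(X, k):
    """X: data with one sample per row (m x d). Returns the top-k eigenvectors and eigenvalues."""
    x_bar = X.mean(axis=0)                       # sample mean
    Xc = X - x_bar                               # centered data
    Sigma = Xc.T @ Xc / len(Xc)                  # (1/m) * sum_i (x_i - x_bar)(x_i - x_bar)^T
    eigvals, eigvecs = np.linalg.eigh(Sigma)     # ascending order for symmetric matrices
    order = np.argsort(eigvals)[::-1][:k]        # larger eigenvalue = more important eigenvector
    return eigvecs[:, order], eigvals[order]
```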

slide-27
SLIDE 27

Reminder: Eigenvector and Eigenvalue

27

Ax = λx

A: Square matrix   x: Eigenvector or characteristic vector   λ: Eigenvalue or characteristic value

slide-28
SLIDE 28

Reminder: Eigenvector and Eigenvalue

28

Ax = λx
Ax − λx = 0
(A − λI)x = 0

If we define a new matrix B = A − λI, then Bx = 0. If B has an inverse, then x = B⁻¹0 = 0. BUT an eigenvector cannot be zero! So x will be an eigenvector of A if and only if B does not have an inverse, or equivalently det(B) = 0:

det(A − λI) = 0

slide-29
SLIDE 29

Reminder: Eigenvector and Eigenvalue

29

Example 1: Find the eigenvalues of

    A = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}

    \det(\lambda I - A) = (\lambda - 2)(\lambda + 5) + 12 = \lambda^2 + 3\lambda + 2 = (\lambda + 1)(\lambda + 2) = 0

so the two eigenvalues are −1 and −2.

Note: The roots of the characteristic equation can be repeated. That is, λ1 = λ2 = … = λk. If that happens, the eigenvalue is said to be of multiplicity k.

Example 2: Find the eigenvalues of the 3×3 matrix A shown on the slide; its characteristic equation is det(λI − A) = (λ − 2)³ = 0, so λ = 2 is an eigenvalue of multiplicity 3.
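A quick numeric check of the examples (the matrices below are my reading of the garbled slide and an illustrative stand-in, so treat them as assumptions):

```python
import numpy as np

A1 = np.array([[2.0, -12.0],      # Example 1 matrix as reconstructed above (assumption)
               [1.0,  -5.0]])
print(np.linalg.eigvals(A1))      # approx [-1., -2.]

A2 = 2.0 * np.eye(3)              # illustrative stand-in for Example 2: any matrix whose
A2[0, 1] = 1.0                    # characteristic polynomial is (lambda - 2)^3
print(np.linalg.eigvals(A2))      # [2., 2., 2.] -> eigenvalue 2 with multiplicity 3
```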

slide-30
SLIDE 30

PCA algorithm II 
 (sample covariance matrix)

30

slide-31
SLIDE 31

PCA algorithm III 
 (SVD of the data matrix)

31

Singular Value Decomposition of the centered data matrix X (features × samples):

    X = U S V^T

[Diagram: X factored into U, S, and V^T; the leading singular values/vectors capture the significant structure, the trailing ones mostly noise.]

slide by Barnabás Póczos and Aarti Singh

slide-32
SLIDE 32

PCA algorithm III

32

  • Columns of U
    • the principal vectors, { u(1), …, u(k) }
    • orthogonal and have unit norm, so UᵀU = I
    • can reconstruct the data using linear combinations of { u(1), …, u(k) }
  • Matrix S
    • diagonal
    • shows the importance of each eigenvector
  • Columns of Vᵀ
    • the coefficients for reconstructing the samples

slide by Barnabás Póczos and Aarti Singh
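A minimal numpy sketch of algorithm III (my own illustration): take the SVD of the centered features-by-samples matrix; the columns of U are the principal vectors, S weights their importance, and S Vᵀ gives the reconstruction coefficients.

```python
import numpy as np

def pca_svd(X, k):
    """X: d x m centered data matrix (features x samples). Returns principal vectors and coefficients."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U_k = U[:, :k]                        # orthonormal principal vectors (columns of U)
    coeffs = S[:k, None] * Vt[:k, :]      # S V^T: coefficients for reconstructing each sample
    X_hat = U_k @ coeffs                  # rank-k reconstruction of the data
    return U_k, coeffs, X_hat
```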

slide-33
SLIDE 33

Applications

33

slide-34
SLIDE 34

Face Recognition

34

slide-35
SLIDE 35

Face Recognition

  • Want to identify a specific person, based on a facial image
  • Robust to glasses, lighting, …
  • Can’t just use the given 256 x 256 pixels

35


slide by Barnabás Póczos and Aarti Singh

slide-36
SLIDE 36

Applying PCA: Eigenfaces

36

Example data set: Images of faces

  • Famous Eigenface approach [Turk & Pentland], [Sirovich & Kirby]

Each face x is …
  • 256 × 256 values (luminance at each location)
  • x in R^(256×256) (view as a 64K-dim vector)

Form X = [ x1, …, xm ], the centered data matrix (256 × 256 real values per column, m faces). Compute Σ = XXᵀ. Problem: Σ is 64K × 64K … HUGE!!!

Method A: Build a PCA subspace for each person and check which subspace can reconstruct the test image the best.
Method B: Build one PCA database for the whole dataset and then classify based on the weights.

slide by Barnabás Póczos and Aarti Singh

slide-37
SLIDE 37

A Clever Workaround

37

  • Note that m << 64K
  • Use L = XᵀX instead of Σ = XXᵀ
  • If v is an eigenvector of L, then Xv is an eigenvector of Σ

Proof:  L v = λ v
        XᵀX v = λ v
        X (XᵀX v) = X (λ v) = λ Xv
        (XXᵀ)(X v) = λ (Xv)
        Σ (Xv) = λ (Xv)

(X = [ x1, …, xm ] is the centered data matrix: 256 × 256 real values per face, m faces.)

slide by Barnabás Póczos and Aarti Singh
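A minimal numpy sketch of the workaround (my own illustration): eigendecompose the small m × m matrix L = XᵀX and map its eigenvectors back through X.

```python
import numpy as np

def top_eigenfaces(X, k):
    """X: D x m centered data matrix (D = 64K pixels, m faces, m << D). Returns k eigenfaces (D x k)."""
    L = X.T @ X                                     # m x m instead of D x D
    eigvals, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]
    U = X @ V[:, order]                             # if L v = lambda v, then (X X^T)(X v) = lambda (X v)
    U /= np.linalg.norm(U, axis=0, keepdims=True)   # normalize each eigenface to unit norm
    return U
```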

slide-38
SLIDE 38

Eigenfaces Example

38

slide by Derek Hoiem

slide-39
SLIDE 39

Representation and Reconstruction

39

slide by Derek Hoiem

slide-40
SLIDE 40

Principal Components (Method B)

40

slide by Barnabás Póczos and Aarti Singh

slide-41
SLIDE 41

Principal Components (Method B)

Reconstructing… (Method B)

  • … faster if trained with …
    • only people without glasses
    • same lighting conditions

41

slide by Barnabás Póczos and Aarti Singh

slide-42
SLIDE 42

Shortcomings

  • Requires carefully controlled data:
    • All faces centered in frame
    • Same size
    • Some sensitivity to angle
  • Method is completely knowledge free
    • (sometimes this is good!)
    • Doesn’t know that faces are wrapped around 3D objects (heads)
    • Makes no effort to preserve class distinctions

42

slide by Barnabás Póczos and Aarti Singh

slide-43
SLIDE 43

Happiness subspace (method A)

43

slide by Barnabás Póczos and Aarti Singh

slide-44
SLIDE 44

Disgust subspace (method A)

44

slide by Barnabás Póczos and Aarti Singh

slide-45
SLIDE 45

Facial Expression Recognition 
 Movies

45

slide by Barnabás Póczos and Aarti Singh

slide-46
SLIDE 46

Facial Expression Recognition 
 Movies

46

slide by Barnabás Póczos and Aarti Singh

slide-47
SLIDE 47

Facial Expression Recognition 
 Movies

47

slide by Barnabás Póczos and Aarti Singh

slide-48
SLIDE 48

Image Compression

48

slide-49
SLIDE 49

Original Image

  • Divide the original 372x492 image into patches:
  • Each patch is an instance
  • View each as a 144-D vector

49

  • slide by Barnabás Póczos and Aarti Singh
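A minimal numpy sketch of the patch-based setup (my own illustration; the 12 × 12 patch size is an inference from the 144-D vectors, and `image` is a hypothetical grayscale array):

```python
import numpy as np

def compress_patches(image, n_components, p=12):
    """Cut a grayscale image into p x p patches, PCA-compress them, and reconstruct."""
    h, w = image.shape
    H, W = h - h % p, w - w % p                      # crop so both sides are multiples of p
    patches = (image[:H, :W]
               .reshape(H // p, p, W // p, p)
               .swapaxes(1, 2)
               .reshape(-1, p * p))                  # one 144-D row per patch
    mean = patches.mean(axis=0)
    Xc = patches - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]                        # top principal directions
    codes = Xc @ basis.T                             # e.g. 144-D -> 60-D representation
    recon = codes @ basis + mean                     # reconstructed patches
    return codes, recon
```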
slide-50
SLIDE 50

L2 error and PCA dim

50

slide by Barnabás Póczos and Aarti Singh
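A minimal sketch (my own illustration) of how an L2-error-versus-dimension curve like this can be computed from the patch matrix produced above:

```python
import numpy as np

def reconstruction_errors(patches, dims):
    """patches: one 144-D patch per row; dims: iterable of PCA dimensions to evaluate."""
    mean = patches.mean(axis=0)
    Xc = patches - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    errors = {}
    for k in dims:
        basis = Vt[:k]
        recon = (Xc @ basis.T) @ basis               # rank-k reconstruction (centered)
        errors[k] = float(np.mean(np.sum((Xc - recon) ** 2, axis=1)))  # mean squared L2 error
    return errors

# e.g. reconstruction_errors(patches, dims=[1, 3, 6, 16, 60, 144])
```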

slide-51
SLIDE 51

PCA compression: 144D => 60D

51

slide by Barnabás Póczos and Aarti Singh

slide-52
SLIDE 52

PCA compression: 144D => 16D

52

slide by Barnabás Póczos and Aarti Singh

slide-53
SLIDE 53

16 most important eigenvectors

53

[Figure: the 16 most important eigenvectors, each displayed as a 12 × 12 patch]

slide by Barnabás Póczos and Aarti Singh

slide-54
SLIDE 54

PCA compression: 144D => 6D

54

slide by Barnabás Póczos and Aarti Singh

slide-55
SLIDE 55

6 most important eigenvectors

55

[Figure: the 6 most important eigenvectors, each displayed as a 12 × 12 patch]

slide by Barnabás Póczos and Aarti Singh

slide-56
SLIDE 56

PCA compression: 144D => 3D

56

slide by Barnabás Póczos and Aarti Singh

slide-57
SLIDE 57

3 most important eigenvectors

57

[Figure: the 3 most important eigenvectors, each displayed as a 12 × 12 patch]

slide by Barnabás Póczos and Aarti Singh

slide-58
SLIDE 58

PCA compression: 144D => 1D

58

slide by Barnabás Póczos and Aarti Singh

slide-59
SLIDE 59

60 most important eigenvectors

  • Looks like the discrete cosine basis of JPEG!…

59

slide by Barnabás Póczos and Aarti Singh

slide-60
SLIDE 60

2D Discrete Cosine Basis

60

http://en.wikipedia.org/wiki/Discrete_cosine_transform

slide by Barnabás Póczos and Aarti Singh

slide-61
SLIDE 61

Noise Filtering

61

slide-62
SLIDE 62

Noise Filtering

62

x → x' = U Uᵀ x  (reconstruct x from its projection onto the top principal components)

slide by Barnabás Póczos and Aarti Singh
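A minimal numpy sketch of this filtering step (my own illustration): project each noisy sample onto the top principal components learned from the data and reconstruct, i.e. x' = U Uᵀ (x − mean) + mean.

```python
import numpy as np

def pca_denoise(X_noisy, n_components):
    """X_noisy: one noisy sample per row. Returns the PCA-filtered samples."""
    mean = X_noisy.mean(axis=0)
    Xc = X_noisy - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U = Vt[:n_components].T               # top principal directions as columns
    return (Xc @ U) @ U.T + mean          # keep only the significant components, drop the noise
```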

slide-63
SLIDE 63

Noisy image

63

slide by Barnabás Póczos and Aarti Singh

slide-64
SLIDE 64

Denoised image 
 using 15 PCA components

64

slide by Barnabás Póczos and Aarti Singh