

Reducing Dimensionality

Steven J Zeil

Old Dominion Univ.

Fall 2010


Outline

1. Feature Selection
2. Feature Extraction
   • Principal Components Analysis (PCA)
   • Factor Analysis (FA)
   • Multidimensional Scaling (MDS)
   • Linear Discriminants Analysis (LDA)

Motivation

• Reduction in complexity of prediction and training
• Reduction in cost of data extraction
• Simpler models – reduced variance
• Easier to visualize & analyze results, identify outliers, etc.

Basic Approaches

Given an input population characterized by d attributes:

• Feature Selection: find k < d dimensions that give the most information; discard the other d − k.
  • Also called subset selection.
• Feature Extraction: find k ≤ d dimensions that are linear combinations of the original d.
  • Principal Components Analysis (unsupervised); related: Factor Analysis and Multidimensional Scaling.
  • Linear Discriminants Analysis (supervised).
• The text also mentions nonlinear methods, Isometric Feature Mapping and Locally Linear Embedding, but gives too little information to justify covering them here.


Subset Selection

• Assume we have a suitable error function and can evaluate it for a variety of models (cross-validation):
  • misclassification error for classification problems
  • mean-squared error for regression
• We can't evaluate all 2^d subsets of d features.
• Forward selection: start with an empty feature set. Repeatedly add the feature that reduces the error the most. Stop when the decrease is insignificant. (A sketch follows below.)
• Backward selection: start with all features. Repeatedly remove the feature that decreases the error the most (or increases it the least). Stop when any further removal increases the error significantly.
• Both directions are O(d²).
• Hill-climbing: not guaranteed to find the global optimum.
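The following Python sketch (not from the slides) illustrates the forward-selection loop. It assumes scikit-learn, 5-fold cross-validated accuracy as the error estimate, and a placeholder estimator and stopping threshold; the dataset `X, y` is hypothetical.

```python
# Minimal sketch of greedy forward selection driven by cross-validated error.
# The estimator, data (X, y), and min_improvement threshold are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

def forward_selection(X, y, estimator=None, min_improvement=1e-3):
    """Greedily add the feature that lowers CV error the most; stop when the gain is small."""
    if estimator is None:
        estimator = LogisticRegression(max_iter=1000)
    remaining = list(range(X.shape[1]))
    selected = []
    best_err = np.inf
    while remaining:
        # Try adding each remaining feature to the current subset.
        errs = []
        for f in remaining:
            cols = selected + [f]
            acc = cross_val_score(estimator, X[:, cols], y, cv=5).mean()
            errs.append((1.0 - acc, f))          # misclassification error
        err, f = min(errs)
        if best_err - err < min_improvement:     # insignificant decrease: stop
            break
        best_err = err
        selected.append(f)
        remaining.remove(f)
    return selected, best_err
```

Backward selection is the mirror image: start from all features and repeatedly drop the one whose removal hurts the cross-validated error the least.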


Notes

• A variant, floating search, adds multiple features at once and then backtracks to see which features can be removed.
• Selection is less useful in very high-dimension problems where individual features are of limited use but clusters of features are significant.

Outline

1. Feature Selection
2. Feature Extraction
   • Principal Components Analysis (PCA)
   • Factor Analysis (FA)
   • Multidimensional Scaling (MDS)
   • Linear Discriminants Analysis (LDA)

Principal Components Analysis (PCA)

• Find a mapping z = A x onto a lower-dimension space.
• Unsupervised method: seeks to maximize the variance of the projected data.
• Intuitively: try to spread the points apart as far as possible.


1st Principal Component

• Assume $x \sim N(\mu, \Sigma)$. Then $w^T x \sim N(w^T \mu, w^T \Sigma w)$.
• Find $z_1 = w_1^T x$, with $w_1^T w_1 = 1$, that maximizes $\mathrm{Var}(z_1) = w_1^T \Sigma w_1$.
• Find $\max_{w_1} \; w_1^T \Sigma w_1 - \alpha (w_1^T w_1 - 1)$, with $\alpha \ge 0$.
• Solution: $\Sigma w_1 = \alpha w_1$.
• This is an eigenvalue problem on $\Sigma$. We want the solution (eigenvector) corresponding to the largest eigenvalue $\alpha$.
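As a concrete illustration, here is a minimal numpy sketch of this eigenvalue computation: form the sample covariance, take its eigendecomposition, and order the eigenvectors by eigenvalue. The data matrix `X` (one row per sample) is an assumed placeholder.

```python
# Sketch: PCA as an eigenvalue problem on the sample covariance matrix.
import numpy as np

def pca_components(X):
    """Return eigenvalues (descending), matching eigenvectors, and the sample mean."""
    m = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)           # sample covariance Σ
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigh: Σ is symmetric
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
    return eigvals[order], eigvecs[:, order], m

# eigvecs[:, 0] plays the role of w1: it maximizes Var(w^T x) subject to ||w|| = 1.
```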


2nd Principal Component

• Next find $z_2 = w_2^T x$, with $w_2^T w_2 = 1$ and $w_2^T w_1 = 0$, that maximizes $\mathrm{Var}(z_2) = w_2^T \Sigma w_2$.
• Solution: $\Sigma w_2 = \alpha_2 w_2$.
• Choose the solution (eigenvector) corresponding to the 2nd largest eigenvalue $\alpha_2$.
• Because $\Sigma$ is symmetric, its eigenvectors are mutually orthogonal.

Visualizing PCA

• Project each point as $z = W^T (x - m)$, where the columns of $W$ are the leading eigenvectors and $m$ is the sample mean.

Is Spreading the Space Enough?

Although we can argue that spreading the points out leads to a better-conditioned problem, what does this have to do with reducing dimensionality?


Detecting Linear Dependencies

• Suppose that some subset of the inputs is linearly correlated: $\exists q$ such that $q^T x = 0$.
• Then $\Sigma$ is singular: $E[q^T x - q^T \mu] = 0$ and $\mathrm{Var}(q^T x) = q^T \Sigma q = 0$, so $\Sigma q = 0$.
• Hence $q$ is an eigenvector of the problem $\Sigma w = \alpha w$ with $\alpha = 0$: the last eigenvector(s) we would consider using.
• Flip side: PCA can be overly sensitive to scaling issues (normalize the inputs) and to outliers.
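A small illustrative check of the zero-eigenvalue point above (not from the slides): add a feature that is a linear combination of two others and confirm that the covariance matrix acquires a near-zero eigenvalue. The synthetic data is purely hypothetical.

```python
# Sketch: a linearly dependent column makes Σ (nearly) singular.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X = np.column_stack([X, X[:, 0] + 2 * X[:, 1]])   # 4th feature is a linear combination

Sigma = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(Sigma)               # ascending order
print(eigvals)                                    # smallest eigenvalue is ~0: the redundant direction
```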


When to Stop?

• Proportion of Variance (PoV) explained by the first $k$ eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$:

  $\mathrm{PoV}(k) = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{d} \lambda_i}$

• Plot PoV against $k$ and look for an elbow.
• Typically stop around PoV = 0.9.
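A short sketch of this stopping rule, assuming the eigenvalues are already sorted in descending order (as returned by the `pca_components` sketch above); the helper name `choose_k` and the 0.9 default are illustrative.

```python
# Sketch: pick the smallest k whose cumulative proportion of variance reaches the threshold.
import numpy as np

def choose_k(eigvals, threshold=0.9):
    """eigvals: eigenvalues sorted in descending order."""
    pov = np.cumsum(eigvals) / np.sum(eigvals)      # PoV(k) for k = 1..d
    k = int(np.searchsorted(pov, threshold)) + 1    # first k with PoV >= threshold
    return k, pov
```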


PoV


PCA & Visualization

• If the first two eigenvectors account for the majority of the variance, plot the data in those two dimensions, using symbols for classes or other features (a plotting sketch follows below).
• Visually search for patterns.
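A hedged matplotlib sketch of such a plot, reusing the `pca_components` helper sketched earlier; `X` and the class labels `y` are assumed placeholders.

```python
# Sketch: scatter-plot the data in the plane of the first two principal components.
import numpy as np
import matplotlib.pyplot as plt

def plot_first_two_pcs(X, y):
    eigvals, W, m = pca_components(X)   # helper sketched earlier
    Z = (X - m) @ W[:, :2]              # z = W^T (x - m), first two components
    for c in np.unique(y):
        plt.scatter(Z[y == c, 0], Z[y == c, 1], label=str(c), s=10)
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.legend()
    plt.show()
```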



PCA Visualization


Factor Analysis (FA)

A kind of “inverted” PCA: find a set of factors z that can be combined to generate x:

  $x_i - \mu_i = \sum_{j=1}^{k} v_{ij} z_j + \varepsilon_i$

• $z_j$ are latent factors: $E[z_j] = 0$, $\mathrm{Var}(z_j) = 1$, and $i \ne j \Rightarrow \mathrm{Cov}(z_i, z_j) = 0$.
• $\varepsilon_i$ are noise sources: $E[\varepsilon_i] = 0$, $\mathrm{Var}(\varepsilon_i) = \phi_i$, $i \ne j \Rightarrow \mathrm{Cov}(\varepsilon_i, \varepsilon_j) = 0$, and $\mathrm{Cov}(\varepsilon_i, z_j) = 0$.
• $v_{ij}$ are the factor loadings.
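For reference, one way to fit such a latent-factor model in practice is scikit-learn's `FactorAnalysis`; the choice of k = 2 factors and the random placeholder data are illustrative assumptions, not part of the slides.

```python
# Sketch: fitting the factor model x - mu = V z + eps with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

X = np.random.default_rng(0).normal(size=(100, 5))   # placeholder data
fa = FactorAnalysis(n_components=2)                   # k = 2 latent factors z_j
Z = fa.fit_transform(X)                               # estimated factor scores
V = fa.components_.T                                  # factor loadings v_ij (d x k)
psi = fa.noise_variance_                              # per-feature noise variances φ_i
```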


PCA vs FA


Multidimensional Scaling (MDS)

• Given the pairwise distances $d_{ij}$ between $N$ points, place those points on a low-dimension map, preserving the distances.
• Map each point as $z = g(x \mid \theta)$.
• Choose $\theta$ to minimize the Sammon stress:

  $E(\theta \mid X) = \sum_{r,s} \frac{(\|z^r - z^s\| - \|x^r - x^s\|)^2}{\|x^r - x^s\|} = \sum_{r,s} \frac{(\|g(x^r \mid \theta) - g(x^s \mid \theta)\| - \|x^r - x^s\|)^2}{\|x^r - x^s\|}$

• Use regression methods for $g$, using the above as the error function to be minimized.
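A minimal numpy sketch of evaluating this stress for a given mapping; the function name `sammon_stress` is mine, `Z` (mapped points) and `X` (original points) are assumed coordinate matrices with one row per point, and the small `eps` guards against division by zero for coincident points.

```python
# Sketch: Sammon-style stress between mapped distances and original distances.
import numpy as np

def sammon_stress(Z, X, eps=1e-12):
    def pdist(A):
        diff = A[:, None, :] - A[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))          # full pairwise distance matrix
    dz, dx = pdist(Z), pdist(X)
    iu = np.triu_indices_from(dx, k=1)               # count each pair (r, s) once
    return np.sum((dz[iu] - dx[iu]) ** 2 / (dx[iu] + eps))
```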



Linear Discriminants Analysis (LDA)

• Supervised method: find a projection of x onto a low-dimension space where the classes are well-separated.
• Find $w$ maximizing

  $J(w) = \frac{(m_1 - m_2)^2}{s_1^2 + s_2^2}$

  where $m_i = w^T \mathbf{m}_i$ (the projected class means) and $s_i^2 = \sum_t (w^T x^t - m_i)^2 r^t$.


Scatter

$J(w) = \frac{(m_1 - m_2)^2}{s_1^2 + s_2^2}$

• $(m_1 - m_2)^2 = (w^T \mathbf{m}_1 - w^T \mathbf{m}_2)^2 = w^T S_B w$, where $S_B = (\mathbf{m}_1 - \mathbf{m}_2)(\mathbf{m}_1 - \mathbf{m}_2)^T$ is the between-class scatter.
• Similarly, $s_1^2 + s_2^2 = w^T S_W w$, where $S_W = S_1 + S_2$ is the within-class scatter.
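Putting the two scatter matrices to work: for the two-class case the standard closed-form Fisher solution is $w \propto S_W^{-1}(\mathbf{m}_1 - \mathbf{m}_2)$ (not spelled out on these slides). A minimal numpy sketch, assuming binary labels `y` in {0, 1} and a placeholder data matrix `X`:

```python
# Sketch: Fisher LDA direction for two classes, w ∝ S_W^{-1} (m1 - m2).
import numpy as np

def fisher_lda_direction(X, y):
    X1, X2 = X[y == 0], X[y == 1]
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter S_W = S_1 + S_2
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(S_W, m1 - m2)    # direction maximizing J(w)
    return w / np.linalg.norm(w)
```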
