

SLIDE 1

Outline: Motivation · Problem Formulation · ℓ1-Minimization · Classification · Experiments · Future Directions

Random Feature Selection for Robust Face Recognition

Allen Y. Yang <yang@eecs.berkeley.edu> Department of EECS, UC Berkeley

with Shankar Sastry, Yi Ma, & John Wright

HSN MIT Review, Sep 24, 2007



SLIDE 3

Next Generation Sensor Networks

1. Transition from dedicated sensor networks to general-purpose sensor networks.

2. Similar revolutions in the IT industry. (Figure panels: Computer; Services.)



SLIDE 5

Advances in Sensor Networks

1. Proliferation: (a) ubiquitous, (b) mobile, (c) personal.

2. More powerful processing units:
- Intel XScale 600 MHz CPU
- Memory: 16 MB ROM, 64 MB RAM
- Resolution: 1280 × 1024, up to 15 fps
- Integrated microphone, infrared, and motion sensors
- IEEE 802.15.4 protocol, 250 kbps

Figure: Next-generation Berkeley wireless camera mote.



SLIDE 8

On-Demand Surveillance

1. Distributed recognition system: multi-tasking; adaptive to environments.

2. Adaptive feature selection is critical for on-demand surveillance:
- Thermometer, infrared: simple thresholding.
- Face IDs:
- Action: spatial-temporal features in video sequences.

3. Sensor-server network: on-demand surveillance and band-limited channels present a conundrum.



SLIDE 10

Contributions

Qualifications for feature selection: data independent; application independent; fast to generate and compute; accurate in preserving true data structures.

Contributions:

1. A new framework for object/face recognition via compressed sensing.

2. Classification is encoded in a (global) sparse representation.

3. Random projection as a universal dimensionality reduction.

4. An efficient solution via ℓ1-minimization outperforms classical algorithms.



SLIDE 13

Representation using Linear Models

1. Represent samples in vector form y ∈ R^D. (Figure: stacking a 2-D image into a vector.)

2. Recognition (supervised learning):
- Training: for K classes, collect samples {v_{1,1}, ..., v_{1,n_1}}, ..., {v_{K,1}, ..., v_{K,n_K}}.
- Test: present a new y; solve for label(y) ∈ {1, 2, ..., K}.

3. Subspace model for face recognition [Belhumeur et al. 1997, Basri & Jacobs 2003]:

y = α_{i,1} v_{i,1} + α_{i,2} v_{i,2} + ... + α_{i,n_i} v_{i,n_i} = A_i α_i,

where A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}].
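The stacking and the subspace model above can be sketched in NumPy; the image sizes and coefficients below are hypothetical stand-ins, not data from the talk.

```python
import numpy as np

# Hypothetical setup: n_i = 4 training images of subject i, each 12 x 10 pixels.
ni, h, w = 4, 12, 10
D = h * w
rng = np.random.default_rng(0)
faces = rng.random((ni, h, w))          # stand-ins for the images v_{i,1..n_i}

# Stack each 2-D image into a vector in R^D; collect them as the columns of A_i.
A_i = np.column_stack([img.reshape(D) for img in faces])   # shape (D, n_i)

# The subspace model: a new image of subject i is y = A_i @ alpha_i.
alpha_i = np.array([0.4, 0.3, 0.2, 0.1])
y = A_i @ alpha_i
print(A_i.shape, y.shape)               # (120, 4) (120,)
```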



SLIDE 16

Recognition via Sparse Representation

1. The label of y is unknown:

y = [A_1, A_2, ..., A_K] [α_1; α_2; ...; α_K] = A x_0.

Over-determined system: A ∈ R^(D×n), where D ≫ n = n_1 + ... + n_K.

2. x_0 encodes the membership of y: if y belongs to Subject 1, then x_0 = [α_1; 0; ...; 0] ∈ R^n. That is, y should be represented using only samples of the same subject! If we recover the sparse x_0, recognition is solved. Not so fast!! Directly solving with A is expensive: D > 7 × 10^4 for a 320 × 240 grayscale image. But x_0 is sparse: only 1/K of its terms are nonzero.
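A minimal sketch of the global dictionary A = [A_1, ..., A_K] and its block-sparse coefficient vector x_0, with made-up sizes:

```python
import numpy as np

K, ni, D = 5, 4, 120                 # hypothetical: 5 subjects, 4 samples each
n = K * ni
rng = np.random.default_rng(1)
A = rng.random((D, n))               # A = [A_1, ..., A_K], columns grouped by subject

# If y belongs to Subject 1, x0 is nonzero only on Subject 1's block,
# i.e. only 1/K of its entries are nonzero.
x0 = np.zeros(n)
x0[:ni] = rng.random(ni)
y = A @ x0
print(np.count_nonzero(x0), n)       # 4 20
```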



SLIDE 20

Dimensionality Reduction

1. Dimensionality reduction: construct a linear projection R ∈ R^(d×D), where d is the feature dimension:

ỹ ≐ Ry = RAx_0 = Ãx_0.

Ã ∈ R^(d×n), but x_0 is unchanged.

2. Holistic features: Eigenfaces [Turk & Pentland 1991], Fisherfaces [Belhumeur et al. 1997], Laplacianfaces [He et al. 2005].

3. Partial features.

4. Unconventional features: downsampled faces, Randomfaces.



SLIDE 23

Randomfaces

Definition: Consider a projection matrix R ∈ R^(d×D) whose entries are independent Gaussian samples and whose rows are each normalized to unit length. These row vectors are called d Randomfaces in R^D.

Properties of Randomfaces (a universal projection?)

1. Domain independent!

2. Data independent!

3. Fast to generate and compute!

High accuracy? Not necessarily: Randomfaces may underperform unless two conditions hold. Condition 1: the signal is sparse. Condition 2: ℓ1-minimization is used.
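Generating Randomfaces is a one-liner in NumPy (the dimensions below are only examples):

```python
import numpy as np

def randomfaces(d, D, seed=None):
    """d Randomfaces: a d x D matrix with i.i.d. Gaussian entries,
    each row normalized to unit length."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((d, D))
    return R / np.linalg.norm(R, axis=1, keepdims=True)

# e.g. project 320 x 240 images (D = 76800) down to d = 30 features.
R = randomfaces(30, 76800, seed=0)
print(R.shape)                                    # (30, 76800)
print(np.allclose(np.linalg.norm(R, axis=1), 1))  # True
```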



SLIDE 26

ℓ0-Minimization

1

Underdetermined system ˜ y = ˜ Ax0 ∈ Rd: ˜ A ∈ Rd×n. Ask for the sparsest solution.

2

ℓ0-Minimization x0 = arg min

x

x0 s.t. ˜ y = ˜ Ax. · 0 simply counts the number of nonzero terms.

3

ℓ0-Ball Optimization over ℓ0-ball is combinatorial.



SLIDE 29

ℓ1/ℓ0 Equivalence

1

If x0 is sparse enough, program (P0) is equivalent to (P1) min x1 s.t. ˜ y = ˜ Ax. x1 = |x1| + |x2| + · · · + |xn|.

2

ℓ1-Ball

ℓ1-Minimization is linear (matching pursuit, basis pursuit) Solution equal to ℓ0-minimization.

3

Equivalence condition

Spark condition (sufficient): [Donoho 2002] Equivalence breakdown point (asymptotic): [Donoho 2004]. k-neighborlyness (iff): [Donoho preprint].

Given ˜ y = ˜ Ax0, there exists ρ(˜ A), if x00 < ρ, (1) ℓ1-solution is unique, (2) x1 = x0.
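Program (P1) can be solved as a linear program by splitting x = u − v with u, v ≥ 0. A toy sketch, assuming SciPy's `linprog` is available (not part of the talk):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """(P1): min ||x||_1 s.t. y = A x, via the standard LP reformulation
    x = u - v with u, v >= 0, minimizing sum(u) + sum(v)."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

# Toy check: a 1-sparse x0 in R^20 recovered from d = 8 random measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 20))
x0 = np.zeros(20)
x0[3] = 1.0
x_hat = l1_min(A, A @ x0)
print(np.allclose(x_hat, x0, atol=1e-4))  # prints True when recovery succeeds (typical here)
```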



SLIDE 32

ℓ1-Minimization for Noisy Case

1. Introduce noise in the data: ỹ = Ãx_0 + z ∈ R^d, where ‖z‖_2 < ε.

2. ℓ1-Norm near solution:

(P1')  min ‖x‖_1  s.t.  ‖ỹ − Ãx‖_2 ≤ ε.

3. ℓ1-Ball: this ℓ1-minimization is a quadratic program. Stability: bounded data noise gives a bounded estimation error ‖x̂_1 − x_0‖_2. With large probability, there exist ρ(Ã) and ζ > 0 such that if ‖x_0‖_0 < ρ, then ‖x̂_1 − x_0‖_2 ≤ ζε.



SLIDE 35

Classification via Sparse Representation

1. Given an estimated ε, solve (P1') ⇒ x̂_1.

2. Project x̂_1 onto the face subspaces:

δ_1(x̂_1) = [α_1; 0; ...; 0], δ_2(x̂_1) = [0; α_2; ...; 0], ..., δ_K(x̂_1) = [0; ...; 0; α_K].  (1)

3. Define the residual r_i = ‖ỹ − Ã δ_i(x̂_1)‖_2 for Subject i:

id(y) = arg min_{i=1,...,K} r_i.
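The residual-based rule can be sketched as follows; the block sizes and test vector are hypothetical:

```python
import numpy as np

def classify(y, A, x_hat, class_sizes):
    """Keep only subject i's coefficients (delta_i), compute the residual
    r_i = ||y - A delta_i(x_hat)||_2, and return arg min_i r_i."""
    residuals, start = [], 0
    for ni in class_sizes:
        delta = np.zeros_like(x_hat)
        delta[start:start + ni] = x_hat[start:start + ni]
        residuals.append(np.linalg.norm(y - A @ delta))
        start += ni
    return int(np.argmin(residuals)), residuals

# Toy check: coefficients concentrated on Subject 0's block give r_0 = 0.
rng = np.random.default_rng(3)
A = rng.standard_normal((10, 6))
x = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
label, r = classify(A @ x, A, x, class_sizes=[2, 2, 2])
print(label)    # 0
```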



SLIDE 37

Outlier Rejection

ℓ1-coefficients for invalid images.

Outlier rejection: when the ℓ1-solution is not sparse, or is not concentrated on one subspace, the test sample is invalid. Sparsity Concentration Index:

SCI(x) ≐ (K · max_i ‖δ_i(x)‖_1 / ‖x‖_1 − 1) / (K − 1) ∈ [0, 1].
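The SCI formula translates directly into code:

```python
import numpy as np

def sci(x, class_sizes):
    """Sparsity Concentration Index:
    SCI(x) = (K * max_i ||delta_i(x)||_1 / ||x||_1 - 1) / (K - 1)."""
    K = len(class_sizes)
    blocks = np.split(x, np.cumsum(class_sizes)[:-1])
    concentration = max(np.abs(b).sum() for b in blocks) / np.abs(x).sum()
    return (K * concentration - 1) / (K - 1)

# Concentrated on one subject -> 1; spread evenly over all subjects -> 0.
print(sci(np.array([1.0, 1.0, 0, 0, 0, 0]), [2, 2, 2]))  # 1.0
print(sci(np.ones(6), [2, 2, 2]))                        # 0.0
```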



SLIDE 39

Randomface Ensembles

1. Blessing of dimensionality: when D is large, with overwhelming probability the row vectors of R ∈ R^(d×D) are independent, and subspace structures and sample membership are preserved.

2. Randomface ensemble (for a particular random matrix R, the choice could be bad):
- Generate multiple projections R_1, R_2, ..., R_l.
- For each R_j, solve min ‖x‖_1 s.t. ‖R_j y − R_j A x‖_2 ≤ ε, giving residuals r_i^j.
- Compute the averaged residuals r̄_i = mean{r_i^1, r_i^2, ..., r_i^l}.
- id(y) = arg min_{i=1,...,K} r̄_i.
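The ensemble rule can be sketched end-to-end; for simplicity this toy version uses noise-free equality constraints (ε = 0) instead of the ε-ball constraint, assumes SciPy's `linprog`, and all sizes are made up:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """min ||x||_1 s.t. y = A x, via the x = u - v split."""
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]

def ensemble_classify(y, A, projections, class_sizes):
    """For each projection R_j, solve the l1 program and record per-subject
    residuals; average over j and pick the subject with the smallest mean."""
    K = len(class_sizes)
    starts = np.concatenate(([0], np.cumsum(class_sizes)))
    res = np.zeros((len(projections), K))
    for j, R in enumerate(projections):
        x = l1_min(R @ A, R @ y)
        for i in range(K):
            delta = np.zeros_like(x)
            delta[starts[i]:starts[i + 1]] = x[starts[i]:starts[i + 1]]
            res[j, i] = np.linalg.norm(R @ y - (R @ A) @ delta)
    return int(np.argmin(res.mean(axis=0)))

# Toy check: y built from Subject 0's columns, two random 10 x 60 projections.
rng = np.random.default_rng(4)
K, ni, D = 3, 4, 60
A = rng.standard_normal((D, K * ni))
y = A[:, :ni] @ rng.random(ni)
Rs = [rng.standard_normal((10, D)) for _ in range(2)]
print(ensemble_classify(y, A, Rs, [ni] * K))
```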


SLIDE 40

Extended Yale B, 38 Subjects (Illumination Variation)

Table: Nearest Neighbor and Nearest Subspace.

Nearest Neighbor
Dimension      30    56    120   504
Eigen [%]      74.3  81.4  85.5  88.4
Laplacian [%]  77.1  83.5  87.2  90.7
Random [%]     70.3  75.6  78.8  79.0
Down [%]       51.7  62.6  71.6  78.0
Fisher [%]     87.6  N/A   N/A   N/A

Nearest Subspace
Dimension      30    56    120   504
Eigen [%]      89.9  91.1  92.5  93.2
Laplacian [%]  89.0  90.4  91.9  93.4
Random [%]     87.3  91.5  93.9  94.1
Down [%]       80.8  88.2  91.1  93.4
Fisher [%]     81.9  N/A   N/A   N/A

Table: ℓ1-Minimization.
Dimension      30    56    120   504
Eigen [%]      86.5  91.6  94.0  96.8
Laplacian [%]  87.5  91.7  94.0  96.5
Random [%]     82.6  91.5  95.5  98.1
Down [%]       74.6  86.2  92.1  97.1
Fisher [%]     86.9  N/A   N/A   N/A
Ensemble [%]   90.7  94.1  96.4  98.3


SLIDE 41

AR Database, 100 Subjects (Illumination and Expression Variation)

Table: Nearest Neighbor and Nearest Subspace.

Nearest Neighbor
Dimension      30    54    130   540
Eigen [%]      68.1  74.8  79.3  80.5
Laplacian [%]  73.1  77.1  83.8  89.7
Random [%]     56.7  63.7  71.4  75.0
Down [%]       51.7  60.9  69.2  73.7
Fisher [%]     83.4  86.8  N/A   N/A

Nearest Subspace
Dimension      30    54    130   540
Eigen [%]      64.1  77.1  82.0  85.1
Laplacian [%]  66.0  77.5  84.3  90.3
Random [%]     59.2  68.2  80.0  83.3
Down [%]       56.2  67.7  77.0  82.1
Fisher [%]     80.3  85.8  N/A   N/A

Table: ℓ1-Minimization.
Dimension      30    54    130   540
Eigen [%]      71.1  80.0  85.7  92.0
Laplacian [%]  73.7  84.7  91.0  94.3
Random [%]     57.8  75.5  87.6  94.7
Down [%]       46.8  67.0  84.6  93.9
Fisher [%]     87.0  92.3  N/A   N/A
Ensemble [%]   78.5  85.8  91.2  95.0


SLIDE 42

Partial Features

Features              Nose   Right Eye  Mouth & Chin
Dimension             4,270  5,040      12,936
ℓ1-Minimization [%]   87.3   93.7       98.3
NS [%]                83.7   78.6       94.4
NN [%]                49.2   68.8       72.7


SLIDE 43

ROC Curves on AR database

Figure: ROC curves for Eigenfaces on the AR database.


SLIDE 44

Future Directions

1. Facial occlusion.

2. Pose variations.

3. Action segmentation and recognition.

References

1. Face recognition:
http://www.eecs.berkeley.edu/~yang/software/face_recognition/
Feature selection in face recognition: A sparse representation perspective. UC Berkeley Tech Report UCB/EECS-2007-99.

2. Compressed sensing:
Candès, Compressive sampling, 2006.
Donoho, For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution, 2004.
Donoho, Neighborly polytopes and sparse solution of underdetermined linear equations, 2004.

3. Random projections of smooth manifolds:
Baraniuk & Wakin, Random projections of smooth manifolds, 2006.
Baraniuk et al., The Johnson-Lindenstrauss lemma meets compressed sensing, 2007.
