Robust Face Recognition via Sparse Representation

Allen Y. Yang <yang@eecs.berkeley.edu>
April 18, 2008, NIST

Transcript of a PowerPoint presentation. Outline: Introduction, Sparse Representation, Experiments, Discussion.


  1. Title slide.

  2. Face Recognition: “Where amazing happens”

  3. Face Recognition: “Where amazing happens”. Figure: Steve Nash, Kevin Garnett, Jason Kidd.

  4. Sparse Representation. Sparsity: a signal is sparse if most of its coefficients are (approximately) zero.

  5. Sparse Representation. Sparsity: a signal is sparse if most of its coefficients are (approximately) zero.
     1. Sparsity in the frequency domain. Figure: 2-D DCT transform.
     2. Sparsity in the spatial domain. Figure: gene microarray data.
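The frequency-domain sparsity above can be illustrated with a small sketch: a smooth image has most of its 2-D DCT coefficients near zero, so a handful of coefficients carry almost all of the magnitude. This is an illustrative toy (a synthetic smooth "image", not a real photograph), not part of the presentation's experiments.

```python
# Sketch: sparsity of a smooth image in the 2-D DCT domain (toy example).
import numpy as np
from scipy.fft import dctn

# Synthetic smooth image: low-frequency content only.
x = np.linspace(0, 1, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

coeffs = dctn(img, norm="ortho")                 # 2-D DCT transform
mags = np.sort(np.abs(coeffs).ravel())[::-1]     # coefficient magnitudes, descending

# Fraction of total magnitude captured by the 20 largest coefficients.
top20 = mags[:20].sum() / mags.sum()
print(f"top-20 coefficients carry {top20:.1%} of the total magnitude")
```

Out of 64 × 64 = 4096 coefficients, the top 20 dominate; that concentration is exactly what "most coefficients are approximately zero" means.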

  6. Sparsity in the human visual cortex [Olshausen & Field 1997, Serre & Poggio 2006]

  7. Sparsity in the human visual cortex [Olshausen & Field 1997, Serre & Poggio 2006]
     1. Feed-forward: no iterative feedback loop.
     2. Redundancy: on average 80-200 neurons for each feature representation.
     3. Recognition: information exchange between stages is not about individual neurons, but rather about how many neurons fire together as a group.

  8. Problem Formulation. Notation:
     Training: for K classes, collect training samples {v_{1,1}, ..., v_{1,n_1}}, ..., {v_{K,1}, ..., v_{K,n_K}} ⊂ R^D.
     Test: present a new y ∈ R^D; solve for label(y) ∈ {1, 2, ..., K}.

  9. Problem Formulation (cont.)
     Data representation in (long) vector form via stacking.
     Figure: assume a 3-channel 640 × 480 image, so D = 3 · 640 · 480 = 921,600.

  10. Problem Formulation (cont.)
      Mixture subspace model for face recognition [Belhumeur et al. 1997, Basri & Jacobs 2003].

  11. Classification of the Mixture Subspace Model
      Assume y belongs to class i:
        y = α_{i,1} v_{i,1} + α_{i,2} v_{i,2} + ... + α_{i,n_i} v_{i,n_i} = A_i α_i,
      where A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}].

  12. Classification of the Mixture Subspace Model (cont.)
      Nevertheless, class i is the unknown variable we need to solve for. Sparse representation:
        y = [A_1, A_2, ..., A_K] · [α_1^T, α_2^T, ..., α_K^T]^T = A x ∈ R^D, with D = 3 · 640 · 480.

  13. Classification of the Mixture Subspace Model (cont.)
      Sparse representation y = A x, with the ideal solution
        x_0 = [0, ..., 0, α_i^T, 0, ..., 0]^T ∈ R^n.
      Sparse representation encodes membership!
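The construction on slides 11-13 can be sketched directly: stack the per-class training matrices A_i into one dictionary A, and observe that a test sample from class i is explained by a coefficient vector x_0 that is nonzero only on class i's block. Dimensions and data here are toy values chosen for illustration.

```python
# Sketch: dictionary A = [A_1, ..., A_K] and the membership-encoding
# block-sparse vector x_0 (toy dimensions, random "training samples").
import numpy as np

rng = np.random.default_rng(0)
K, n_i, D = 4, 5, 30           # K classes, n_i samples per class, ambient dim
A_blocks = [rng.standard_normal((D, n_i)) for _ in range(K)]
A = np.hstack(A_blocks)         # A in R^{D x n}, n = K * n_i

# Test sample from class i = 2: a linear combination of that class's samples.
i = 2
alpha_i = rng.standard_normal(n_i)
y = A_blocks[i] @ alpha_i

# The ideal solution x_0 = [0, ..., 0, alpha_i^T, 0, ..., 0]^T is block-sparse.
x0 = np.zeros(K * n_i)
x0[i * n_i:(i + 1) * n_i] = alpha_i

assert np.allclose(A @ x0, y)   # y = A x_0: the sparsity pattern encodes class i
print("nonzeros in x0:", np.count_nonzero(x0), "of", x0.size)
```

Only 5 of the 20 entries of x_0 are nonzero, and their location identifies the class: that is the sense in which sparse representation encodes membership.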

  14. Dimensionality Reduction
      Construct a linear projection R ∈ R^{d×D}, where d is the feature dimension:
        ỹ := R y = R A x_0 = Ã x_0 ∈ R^d.
      Now Ã ∈ R^{d×n}, but x_0 is unchanged.

  15. Dimensionality Reduction (cont.)
      Construct a linear projection R ∈ R^{d×D}, where d is the feature dimension:
        ỹ := R y = R A x_0 = Ã x_0 ∈ R^d.
      Now Ã ∈ R^{d×n}, but x_0 is unchanged.
      1. Holistic features: Eigenfaces [Turk 1991], Fisherfaces [Belhumeur 1997], Laplacianfaces [He 2005].
      2. Partial features.
      3. Unconventional features: downsampled faces, random projections.
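The key observation on slides 14-15, that projecting both sides of y = A x_0 by the same R leaves the sparse coefficients untouched, is easy to verify numerically. This sketch uses a Gaussian random projection (one of the "unconventional features" above); sizes are toy values.

```python
# Sketch: dimensionality reduction by a random projection R in R^{d x D}.
# ỹ = R y = (R A) x_0, so the same sparse x_0 explains the reduced data.
import numpy as np

rng = np.random.default_rng(1)
D, n, d = 900, 40, 60           # toy sizes: ambient dim, dictionary size, feature dim
A = rng.standard_normal((D, n))
x0 = np.zeros(n)
x0[[3, 7]] = [1.5, -2.0]        # a sparse coefficient vector
y = A @ x0

R = rng.standard_normal((d, D)) / np.sqrt(d)   # Gaussian random projection
A_tilde = R @ A                 # Ã in R^{d x n}
y_tilde = R @ y                 # ỹ in R^d

assert np.allclose(A_tilde @ x0, y_tilde)      # x_0 is unchanged by the projection
```

This is why the method is largely agnostic to the choice of features: any linear R (Eigenfaces, downsampling, random projection) preserves the sparse representation, only the dictionary changes.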

  16. ℓ1-Minimization
      Ideal solution: ℓ0-minimization
        (P0): x* = arg min_x ‖x‖_0   s.t.   ỹ = Ã x.
      ‖·‖_0 simply counts the number of nonzero terms. However, ℓ0-minimization is in general NP-hard.

  17. ℓ1-Minimization (cont.)
      Compressed sensing: under mild conditions, ℓ0-minimization is equivalent to
        (P1): x* = arg min_x ‖x‖_1   s.t.   ỹ = Ã x,
      where ‖x‖_1 = |x_1| + |x_2| + ... + |x_n|.

  18. ℓ1-Minimization (cont.)
      The ℓ1-ball: ℓ1-minimization is convex, and its solution equals that of ℓ0-minimization.
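Because ‖x‖_1 is piecewise linear, (P1) can be rewritten as a linear program by splitting x = u − v with u, v ≥ 0, so that ‖x‖_1 = Σ(u + v). The sketch below solves this LP with scipy's generic solver on toy data; it is a minimal illustration, not one of the dedicated solvers listed on the next slide, which are far faster in practice.

```python
# Sketch: (P1)  min ||x||_1  s.t.  A x = y  as a linear program,
# via the split x = u - v, u >= 0, v >= 0 (toy data).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
d, n = 25, 60
A = rng.standard_normal((d, n))
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]   # a 3-sparse ground truth
y = A @ x_true

c = np.ones(2 * n)                        # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("l1 norm found:", np.abs(x_hat).sum(), " vs true:", np.abs(x_true).sum())
```

Since x_true is feasible, the LP optimum can be no larger than ‖x_true‖_1; with a random Gaussian A and a sufficiently sparse x_true, the ℓ1 minimizer typically coincides with the sparse ground truth, which is the equivalence the slide refers to.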

  19. ℓ1-Minimization Routines
      Matching pursuit [Mallat 1993]:
        1. Find the vector v_i in A most correlated with y: i = arg max_j ⟨y, v_j⟩.
        2. Set x_i ← ⟨y, v_i⟩ and update the residual: y ← y − x_i v_i.
        3. Repeat until ‖y‖ < ε.
      Basis pursuit [Chen 1998]:
        1. Start with the number of sparse coefficients m = 1.
        2. Select m linearly independent vectors B_m in A as a basis: x_m = B_m† y.
        3. Repeat swapping one basis vector in B_m with another vector not in B_m if the swap improves ‖y − B_m x_m‖.
        4. If ‖y − B_m x_m‖_2 < ε, stop; otherwise set m ← m + 1 and repeat Step 2.
      Quadratic solvers, for y = A x_0 + z ∈ R^d with ‖z‖_2 < ε:
        x* = arg min_x { ‖x‖_1 + λ ‖y − A x‖_2 }
      [LASSO, second-order cone programming]: much more expensive.
      Matlab toolboxes for ℓ1-minimization: ℓ1-Magic by Candès, SparseLab by Donoho, cvx by Boyd.
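The matching pursuit steps above translate almost line for line into code. This is a hypothetical helper written to mirror the slide's three steps (greedy correlate, subtract, repeat), not the authors' implementation; the dictionary columns are normalized so that inner products act as correlations.

```python
# Sketch of matching pursuit [Mallat 1993], following the steps on slide 19.
import numpy as np

def matching_pursuit(A, y, eps=1e-6, max_iter=100):
    """Greedy sparse approximation: y ≈ A x with few nonzero coefficients."""
    A = A / np.linalg.norm(A, axis=0)    # unit-norm columns (local copy)
    x = np.zeros(A.shape[1])
    r = y.astype(float).copy()           # residual, initialized to y
    for _ in range(max_iter):
        corr = A.T @ r                   # step 1: correlate residual with atoms
        i = np.argmax(np.abs(corr))      #         most correlated atom
        x[i] += corr[i]                  # step 2: record coefficient ...
        r -= corr[i] * A[:, i]           #         ... and subtract contribution
        if np.linalg.norm(r) < eps:      # step 3: stop when ||residual|| < eps
            break
    return x, r

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 8))
y = 2.0 * A[:, 1] / np.linalg.norm(A[:, 1])   # y lies on a single normalized atom
x, r = matching_pursuit(A, y)
print("residual norm:", np.linalg.norm(r))
```

When y lies exactly on one atom, a single greedy step recovers it; on general inputs matching pursuit only approximates the ℓ1 solution, which is why the slide also lists basis pursuit and the quadratic solvers.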

  20. Sparse Representation Classification
      Solve (P1) ⇒ x_1. Project x_1 onto the face subspaces:
        δ_1(x_1) = [α_1^T, 0, ..., 0]^T,  δ_2(x_1) = [0, α_2^T, 0, ..., 0]^T,  ...,  δ_K(x_1) = [0, ..., 0, α_K^T]^T.   (1)
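The slide text breaks off after the projection step; in the SRC algorithm the projections δ_i are then used to assign the label by smallest reconstruction residual, label(y) = arg min_i ‖y − A δ_i(x_1)‖_2. The sketch below illustrates that rule on toy data, using the exact block-sparse coefficients in place of an ℓ1 solver's output for simplicity.

```python
# Sketch of the SRC classification rule: keep only class i's coefficients
# (delta_i) and pick the class with the smallest reconstruction residual.
# Toy data; in practice x comes from solving (P1).
import numpy as np

rng = np.random.default_rng(4)
K, n_i, D = 3, 4, 50
A_blocks = [rng.standard_normal((D, n_i)) for _ in range(K)]
A = np.hstack(A_blocks)

true_class = 1
x = np.zeros(K * n_i)
x[true_class * n_i:(true_class + 1) * n_i] = rng.standard_normal(n_i)
y = A @ x                                   # test sample from class 1

def delta(x, i, n_i):
    """Zero out all coefficients except those belonging to class i."""
    out = np.zeros_like(x)
    out[i * n_i:(i + 1) * n_i] = x[i * n_i:(i + 1) * n_i]
    return out

residuals = [np.linalg.norm(y - A @ delta(x, i, n_i)) for i in range(K)]
label = int(np.argmin(residuals))
print("per-class residuals:", [f"{r:.3f}" for r in residuals])
print("predicted class:", label)
```

The correct class reconstructs y almost exactly (residual near zero), while the other classes' coefficient blocks are zero and leave the full ‖y‖ as residual.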

