  1. Sparse-coded Net Model and Applications
  Y. Gwon, M. Cha, W. Campbell, H.T. Kung, C. Dagli
  IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2016)
  September 16, 2016
  This work is sponsored by the Defense Advanced Research Projects Agency under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.

  2. Outline
  • Background – Sparse Coding
  • Semi-supervised Learning with Sparse Coding
  • Sparse-coded Net
  • Experimental Evaluation
  • Conclusions and Future Work

  3. Background: Sparse Coding
  • Unsupervised method to learn representation of data
    – Decompose data into a sparse linear combination of learned basis vectors (a numerical sketch follows below)
    – Domain transform: raw data ⟶ feature vectors
  [Figure: sparse coding factors the data X into a feature dictionary D and sparse codes Y, X ≈ DY; e.g., one data vector is reconstructed as x ≈ 1.2 × d_263 + 0.9 × d_101 + 0.5 × d_208, with sparse code entries y_263, y_101, y_208.]
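The decomposition on this slide can be made concrete with a small numerical sketch. The dictionary size, indices, and coefficients below simply mirror the slide's example (x ≈ 1.2·d_263 + 0.9·d_101 + 0.5·d_208) and are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: a data vector x as a sparse linear combination of
# dictionary atoms (only 3 of the atoms are active).
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_atoms = 64, 512                  # hypothetical sizes
D = rng.standard_normal((n_dims, n_atoms))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms (columns of D)

y = np.zeros(n_atoms)                      # sparse code
y[[263, 101, 208]] = [1.2, 0.9, 0.5]       # the slide's example coefficients

x = D @ y                                  # data vector reconstructed from D and y
print(np.count_nonzero(y), "active atoms out of", n_atoms)
```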

  4. Background: Sparse Coding (cont.)
  • Popularly solved as L1-regularized optimization (LASSO/LARS)
    – Optimizing on the L0 pseudo-norm is intractable ⟹ a greedy L0 algorithm (OMP) can be used instead (both solvers are sketched below)
  • Objective: min_{D,y} ‖x − Dy‖₂² + λ‖y‖₁, the convex relaxation of min_{D,y} ‖x − Dy‖₂² + λ‖y‖₀
  [Figure: sparse coding X ≈ DY with feature dictionary D, data X, and sparse codes Y.]
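As a hedged illustration of the two solvers named on the slide, the sketch below codes a single vector x against a fixed dictionary D with scikit-learn: Lasso for the L1-relaxed objective and orthogonal matching pursuit for the greedy-L0 alternative. The dictionary, data, regularization weight, and sparsity budget are placeholder assumptions, not the paper's settings:

```python
# Solving the sparse coding problem for a fixed dictionary D.
import numpy as np
from sklearn.linear_model import Lasso, orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(64)

# L1 relaxation (LASSO); alpha plays the role of lambda on the slide.
y_l1 = Lasso(alpha=0.1, fit_intercept=False, max_iter=5000).fit(D, x).coef_

# Greedy L0 (Orthogonal Matching Pursuit) with a fixed sparsity budget.
y_l0 = orthogonal_mp(D, x, n_nonzero_coefs=10)

print("nonzeros (LASSO, OMP):", np.count_nonzero(y_l1), np.count_nonzero(y_l0))
```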

  5. Outline
  • Background – Sparse Coding
  • Semi-supervised Learning with Sparse Coding
  • Sparse-coded Net
  • Experimental Evaluation
  • Conclusions and Future Work

  6. Semi-supervised Learning with Sparse Coding
  • Semi-supervised learning
    – Unsupervised stage: learn feature representation using unlabeled data
    – Supervised stage: optimize task objective using learned feature representations of labeled data
  • Semi-supervised learning with sparse coding (see the pipeline sketch below)
    – Unsupervised stage: sparse coding and dictionary learning with unlabeled data
    – Supervised stage: train classifier/regression using sparse codes of labeled data
  [Figure: Unsupervised stage — raw unlabeled data → preprocessing (optional) → sparse coding & dictionary learning → learned dictionary D. Supervised stage — raw labeled data → preprocessing (optional) → sparse coding with D → feature pooling → classifier/regression.]
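A minimal sketch of this two-stage pipeline, assuming scikit-learn's dictionary learning for the unsupervised stage and a logistic-regression classifier for the supervised stage; the data shapes, the absolute-value rectification standing in for feature pooling, and all hyperparameters are illustrative, not the authors' configuration:

```python
# Two-stage semi-supervised pipeline: dictionary from unlabeled data,
# classifier on sparse codes of labeled data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.standard_normal((5000, 64))   # unlabeled examples/patches
X_labeled = rng.standard_normal((200, 64))      # labeled examples
labels = rng.integers(0, 10, size=200)

# Unsupervised stage: sparse coding + dictionary learning (LARS-based fit).
dl = MiniBatchDictionaryLearning(n_components=256, alpha=0.5,
                                 transform_algorithm="lasso_lars",
                                 random_state=0).fit(X_unlabeled)

# Supervised stage: sparse codes of labeled data, rectified, as features.
codes = dl.transform(X_labeled)                 # shape (200, 256)
features = np.abs(codes)                        # simple stand-in for feature pooling
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```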

  7. Outline
  • Background – Sparse Coding
  • Semi-supervised Learning with Sparse Coding
  • Sparse-coded Net
  • Experimental Evaluation
  • Conclusions and Future Work

  8. Sparse-coded Net Motivations
  • Semi-supervised learning with sparse coding cannot jointly optimize feature representation learning and task objective
  • Sparse codes used as feature vectors for the task cannot be modified to induce correct data labels
    – No supervised dictionary learning ⟹ sparse coding dictionary is learned using only unlabeled data

  9. Sparse-coded Net
  • Feedforward model with sparse coding, pooling, and softmax layers (see the forward-pass sketch below)
    – Pretrain: semi-supervised learning with sparse coding
    – Finetune: SCN backpropagation
  [Figure: architecture — inputs x^(1), x^(2), …, x^(M) are each sparse-coded with a shared dictionary D into codes y^(1), …, y^(M); a pooling layer with nonlinear rectification produces z; a softmax layer outputs p(l | z).]
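The figure's forward pass can be sketched roughly as follows, assuming OMP as the per-patch sparse coder and max-|·| pooling; the dictionary size, sparsity level, and softmax parameters W, b are hypothetical placeholders:

```python
# Hedged sketch of the SCN feedforward path: sparse coding -> pooling -> softmax.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def scn_forward(X_patches, D, W, b, k=10):
    """X_patches: (M, n_dims) inputs for one example; D: (n_dims, n_atoms)."""
    # Sparse coding layer: one code y^(m) per input (greedy-L0 / OMP here).
    Y = np.stack([orthogonal_mp(D, x, n_nonzero_coefs=k) for x in X_patches])
    # Pooling layer with nonlinear rectification: max over inputs of |y|.
    z = np.max(np.abs(Y), axis=0)
    # Softmax layer: class posteriors p(l | z).
    logits = W @ z + b
    p = np.exp(logits - logits.max())
    return Y, z, p / p.sum()

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256)); D /= np.linalg.norm(D, axis=0)
W, b = rng.standard_normal((10, 256)) * 0.01, np.zeros(10)
Y, z, p = scn_forward(rng.standard_normal((8, 64)), D, W, b)
```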

  10. SCN Backpropagation
  • When the predicted output does not match the ground truth, rewrite the softmax loss as a function of z, hold the softmax weights constant, and adjust the pooled sparse code by gradient descent (illustrated in the sketch below)
    – z ⟶ z*
  • Adjust the sparse codes from the adjusted pooled sparse code by putback
    – z* ⟶ Y*
  • Adjust the sparse coding dictionary by rank-1 updates or gradient descent
    – D ⟶ D*
  • Redo the feedforward path with the adjusted dictionary and retrain the softmax
  • Repeat until convergence
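The sketch below walks through one such backpropagation step in the same spirit, under the assumptions of a cross-entropy softmax loss, max-|·| pooling, and a plain gradient-descent dictionary update (one of the two options on the slide); the putback rule, variable names, and step sizes are illustrative guesses, not the authors' exact update rules. Per the slide, the feedforward pass is then redone with D* and the softmax retrained, repeating until convergence.

```python
# Illustrative single SCN backprop step; Y, z, p come from a forward pass
# like the one sketched for the previous slide.
import numpy as np

def scn_backprop_step(Y, z, p, label, D, X_patches, W, eta_z=0.1, eta_D=0.01):
    # 1) Hold softmax weights fixed; move the pooled code toward z*
    #    by one gradient-descent step on the softmax (cross-entropy) loss.
    grad_z = W.T @ (p - np.eye(len(p))[label])
    z_star = z - eta_z * grad_z

    # 2) Putback (z* -> Y*): push the adjustment of each pooled unit back
    #    to the sparse code entry that won the max-pooling.
    Y_star = Y.copy()
    winners = np.argmax(np.abs(Y), axis=0)         # which input fed each pooled unit
    for j, m in enumerate(winners):
        if Y[m, j] != 0:
            Y_star[m, j] = np.sign(Y[m, j]) * z_star[j]

    # 3) Dictionary adjustment (gradient-descent variant of D -> D*):
    #    make D reconstruct the inputs from the adjusted codes better.
    residual = X_patches - Y_star @ D.T            # (M, n_dims)
    D_star = D + eta_D * residual.T @ Y_star
    D_star /= np.linalg.norm(D_star, axis=0)       # keep atoms unit-norm
    return D_star
```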

  11. Outline
  • Background – Sparse Coding
  • Semi-supervised Learning with Sparse Coding
  • Sparse-coded Net
  • Experimental Evaluation
  • Conclusions and Future Work

  12. Experimental Evaluation
  • Audio and Acoustic Signal Processing (AASP)
    – 30-second WAV files recorded in 44.1 kHz 16-bit stereo
    – 10 classes such as bus, busy street, office, and open-air market
    – For each class, 10 labeled examples
  • CIFAR-10
    – 60,000 32x32 color images
    – 10 classes such as airplane, automobile, cat, and dog
    – We sample 2,000 images to form train and test datasets
  • Wikipedia
    – 2,866 documents
    – Annotated with 10 categorical labels
    – Each text document is represented by 128 LDA features

  13. Results: AASP Sound Classification
  Sound classification performance on the AASP dataset:
  Method                                      Accuracy
  Semi-supervised via sparse coding (LARS)    73.0%
  Semi-supervised via sparse coding (OMP)     69.0%
  GMM-SVM                                     61.0%
  Deep SAE NN (4 layers)                      71.0%
  Sparse-coded net (LARS)                     78.0%
  Sparse-coded net (OMP)                      75.0%
  • The sparse-coded net model with LARS achieves the best accuracy, 78%
    – Comparable to the best AASP scheme (79%)
    – Significantly better than the AASP baseline† (57%)
  † D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange, and M. D. Plumbley, "Detection and Classification of Acoustic Scenes and Events," IEEE Trans. on Multimedia, vol. 17, no. 10, 2015.

  14. Results: CIFAR Image Classification
  Image classification performance on CIFAR-10:
  Method                                      Accuracy
  Semi-supervised via sparse coding (LARS)    84.0%
  Semi-supervised via sparse coding (OMP)     81.3%
  GMM-SVM                                     76.8%
  Deep SAE NN (4 layers)                      81.9%
  Sparse-coded net (LARS)                     87.9%
  Sparse-coded net (OMP)                      85.5%
  • Again, the sparse-coded net model with LARS achieves the best accuracy, 87.9%
    – Superior to the RBM and CNN pipelines evaluated by Coates et al.†
  † A. Coates, A. Ng, and H. Lee, "An Analysis of Single-layer Networks in Unsupervised Feature Learning," in AISTATS, 2011.

  15. Results: Wikipedia Category Classification
  Text classification performance on the Wikipedia dataset:
  Method                                      Accuracy
  Semi-supervised via sparse coding (LARS)    69.4%
  Semi-supervised via sparse coding (OMP)     61.1%
  Deep SAE NN (4 layers)                      67.1%
  Sparse-coded net (LARS)                     70.2%
  Sparse-coded net (OMP)                      62.1%
  • We achieve the best accuracy of 70.2% with the sparse-coded net on LARS
    – Superior to the 60.5–68.2% reported by existing approaches†1,†2
  †1 K. Duan, H. Zhang, and J. Wang, "Joint Learning of Cross-modal Classifier and Factor Analysis for Multimedia Data Classification," Neural Computing and Applications, vol. 27, no. 2, 2016.
  †2 L. Zhang, Q. Zhang, L. Zhang, D. Tao, X. Huang, and B. Du, "Ensemble Manifold Regularized Sparse Low-rank Approximation for Multi-view Feature Embedding," Pattern Recognition, vol. 48, no. 10, 2015.

  16. Outline
  • Background – Sparse Coding
  • Semi-supervised Learning with Sparse Coding
  • Sparse-coded Net
  • Experimental Evaluation
  • Conclusions and Future Work

  17. Conclusions and Future Work
  Conclusions
  • Introduced the sparse-coded net model, which jointly optimizes sparse coding and dictionary learning with a supervised task at the output layer
  • Proposed the SCN backpropagation algorithm, which handles the mix-up of feature vectors caused by the pooling nonlinearity
  • Demonstrated superior classification performance on sound (AASP), image (CIFAR-10), and text (Wikipedia) data
  Future Work
  • More realistic, larger-scale experiments are necessary
  • Generalize hyperparameter optimization techniques across datasets (e.g., audio, video, text)
