Representation Learning - UCA Deep Learning School, Deep in France (Nice, 2017)

  1. Representation Learning. UCA Deep Learning School - Deep in France, Nice 2017. Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam. soufiane.belharbi@insa-rouen.fr. LITIS lab., Apprentissage team - INSA de Rouen, France. 13 June, 2017.

  2. My PhD work. 3rd-year PhD student at LITIS lab. Topics: deep learning, structured output prediction, learning representations.
  1. S. Belharbi, C. Chatelain, R. Hérault, S. Adam. Learning Structured Output Dependencies Using Deep Neural Networks. In: Deep Learning Workshop at the 32nd International Conference on Machine Learning (ICML), 2015.
  2. S. Belharbi, R. Hérault, C. Chatelain, S. Adam. Deep multi-task learning with evolving weights. In: European Symposium on Artificial Neural Networks (ESANN), 2016.
  3. S. Belharbi, C. Chatelain, R. Hérault, S. Adam. Multi-task Learning for Structured Output Prediction. 2017. Submitted to Neurocomputing. ArXiv: arxiv.org/abs/1504.07550.
  4. S. Belharbi, R. Hérault, C. Chatelain, R. Modzelewski, S. Adam, M. Chastan, S. Thureau. Spotting L3 slice in CT scans using deep convolutional network and transfer learning. In: Computers in Biology and Medicine, 2017.
  Current work: learning invariance within neural networks. S. Belharbi, C. Chatelain, R. Hérault, S. Adam. Class-invariance hint: a regularization framework for training neural networks. Coming up soon.

  3. Roadmap: 1. Representation Learning; 2. Sparse Coding; 3. Auto-encoders; 4. Restricted Boltzmann Machines (RBMs); 5. Conclusion.

  4. Representation Learning. Representation learning is fundamental in machine learning. How to represent the class “dog”? (input variations). Conference: ICLR, www.iclr.cc (since 2013).

  5. Representation Learning (figure slide). Stanford, CS331B.

  6. Representation Learning (figure slide).

  7. Representation Learning (figure slide).

  8. Feature representation: handcrafting. Let us build a cat detector... Stanford, A. Zamir.

  9. Feature representation: handcrafting. Let us build a cat detector... (figure slide)

  10. Feature representation: handcrafting. Let us build a cat detector... (figure slide)

  11. Feature representation: handcrafting. Let us build a cat detector... (figure slide)

  12. Feature representation: handcrafting. Handcrafted features. Pros: Were the only way for a long time. Work quite well. Sometimes you need to combine many features. Generic.

  13. Feature representation: handcrafting. Handcrafted features. Cons: Generic. Time-consuming. What do you do if nothing works? In many cases, it is difficult to build discriminative features. Figure 2: Classifier: Happy vs Sad. Ideal: application-dependent features ⇒ Deep Learning.

  14. Representation Learning Approaches. Two main approaches. Unsupervised: representation constrained on reconstruction. Supervised. Stanford, A. Zamir.

  15. Representation Learning Approaches. Inverting a representation.

  16. Representation Learning Approaches. Inverting a representation. Dosovitskiy and Brox, 2015.

  17. Representation Learning Approaches. Two main approaches. Unsupervised: representation constrained on reconstruction. Supervised. LeCun et al., 1998; Hinton et al., 2006.

  18. Representation Learning Approaches. Unsupervised: representation constrained on reconstruction. Pros: exploit millions of unlabeled examples from the Internet: images; text (Wikipedia, ...); audio recordings and videos. Hinton et al., 2006.

  19. Representation Learning Approaches. Unsupervised: representation constrained on reconstruction. The reconstruction loss: how to reconstruct? L2 pixel loss. Applications: data compression; dimensionality reduction; pre-training neural networks (initialization). Hinton et al., 2006.
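A tiny sketch of the L2 pixel loss mentioned above, assuming flattened images stored as numpy arrays; the data here is a stand-in, and the batch-averaging convention is an illustrative choice.

```python
import numpy as np

def l2_pixel_loss(x, x_hat):
    # Squared pixel error ||x - x_hat||^2 per example, averaged over the batch.
    return np.mean(np.sum((x - x_hat) ** 2, axis=1))

x = np.random.rand(32, 784)                                   # batch of flattened images (stand-in data)
x_hat = np.clip(x + 0.05 * np.random.randn(*x.shape), 0, 1)   # fake reconstructions
print(l2_pixel_loss(x, x_hat))
```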

  20. Unsupervised Representation Learning Methods: sparse coding; auto-encoders (AEs); Restricted Boltzmann Machines (RBMs).

  21. Sparse Coding. Objective: $x = \sum_{i=1}^{k} a_i \phi_i$, where $\{\phi_i\}$ is a set of basis vectors (a dictionary). Cost function over a set of $m$ input vectors: $\min_{a_i^{(j)}, \phi_i} \sum_{j=1}^{m} \big\| x^{(j)} - \sum_{i=1}^{k} a_i^{(j)} \phi_i \big\|^2 + \lambda \sum_{i=1}^{k} S(a_i^{(j)})$, where the first term is the reconstruction term and the second is the sparsity term. Similar to: $\min \sum_{j=1}^{m} \| x^{(j)} - H^{(j)} W \|^2 + \lambda \| H^{(j)} \|_1$, with an L1 sparsity term.
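A minimal sketch of the sparse inference step for the L1-penalized objective above, assuming a fixed random dictionary Phi and the standard soft-thresholding (ISTA) update; in full sparse coding the dictionary itself is learned by alternating with this step, which is omitted here.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_codes(x, Phi, lam=0.1, n_iter=200):
    """Infer codes a minimizing 0.5 * ||x - Phi a||^2 + lam * ||a||_1 via ISTA."""
    a = np.zeros(Phi.shape[1])
    # Step size 1/L, where L = ||Phi||_2^2 is the Lipschitz constant of the gradient.
    L = np.linalg.norm(Phi, ord=2) ** 2
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - x)              # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy usage: 64-dim signal, overcomplete dictionary with k = 128 atoms (assumed sizes).
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm atoms
x = rng.normal(size=64)
a = sparse_codes(x, Phi)
print("non-zero coefficients:", np.count_nonzero(a))
```

Larger values of `lam` push more coefficients exactly to zero, which is the sparsity the objective's second term rewards.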

  22. Sparse Coding (figure slide). Andrew Ng.

  23. Sparse Coding (figure slide). Andrew Ng.

  24. Auto-encoders (figure slide). Q.V. Le.

  25. Auto-encoders. Encoder: $f(x) = s(Wx + b) = z$. Decoder: $g(z) = s(W'z + b') = \hat{x}$, with $W' = W^T$ (tied weights). Objective over a set of $n$ examples: $J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \| x - \hat{x} \|^2$. Similar to PCA.
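A minimal sketch of this basic auto-encoder in Keras (the library the next slide references), assuming TensorFlow 2.x, flattened 784-dimensional inputs, a 64-dim code, and a sigmoid for the activation s; for simplicity the encoder and decoder weights are left untied here rather than enforcing W' = W^T.

```python
import numpy as np
from tensorflow.keras import layers, Model

dim_in, dim_hidden = 784, 64                # input size and code size (illustrative choices)

x_in = layers.Input(shape=(dim_in,))
z = layers.Dense(dim_hidden, activation="sigmoid", name="encoder")(x_in)  # f(x) = s(Wx + b)
x_hat = layers.Dense(dim_in, activation="sigmoid", name="decoder")(z)     # g(z) = s(W'z + b')

autoencoder = Model(x_in, x_hat)
autoencoder.compile(optimizer="adam", loss="mse")  # J = (1/n) sum ||x - x_hat||^2

# Training reconstructs the input itself: the targets are the inputs.
x_train = np.random.rand(1000, dim_in).astype("float32")  # stand-in for real data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128, verbose=0)

encoder = Model(x_in, z)                    # reuse f(x) alone to extract representations
codes = encoder.predict(x_train[:10], verbose=0)
```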

  26. Auto-encoders (figure slide). Keras blog.

  27. Auto-encoders. Example: Keras blog (figure slide).

  28. Deep Auto-encoders (figure slide). wikidocs.
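The figure shows stacked encoding and decoding layers; a plausible Keras sketch of such a deep auto-encoder follows, assuming the same 784-dimensional inputs and an illustrative 784-128-32-128-784 layout.

```python
from tensorflow.keras import layers, Model

x_in = layers.Input(shape=(784,))
# Encoder: progressively compress the input down to a 32-dim code.
h = layers.Dense(128, activation="relu")(x_in)
code = layers.Dense(32, activation="relu")(h)
# Decoder: mirror the encoder back up to the input dimensionality.
h = layers.Dense(128, activation="relu")(code)
x_hat = layers.Dense(784, activation="sigmoid")(h)

deep_ae = Model(x_in, x_hat)
deep_ae.compile(optimizer="adam", loss="mse")
```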

  29. Denoising Auto-encoders. Basic auto-encoder: $J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \| x - s(W^T s(Wx + b) + b') \|^2$, where the inner expression is $\hat{x}$. Denoising auto-encoder: build good representations by recovering a corrupted input: $J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \| x - s(W^T s(W \phi(x) + b) + b') \|^2$, where $\phi(x)$ is the corrupted input and the inner expression is again $\hat{x}$.
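A minimal sketch of the corruption step and denoising training, assuming zero-masking noise (as in the MNIST filters shown later) with an illustrative 30% masking rate; the network is the basic auto-encoder from before, but it is fed the corrupted input phi(x) and asked to reconstruct the clean x.

```python
import numpy as np
from tensorflow.keras import layers, Model

def corrupt(x, mask_rate=0.3, seed=0):
    # Zero-masking noise phi(x): randomly set a fraction of input values to zero.
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) > mask_rate
    return x * mask

x_in = layers.Input(shape=(784,))
z = layers.Dense(64, activation="sigmoid")(x_in)
x_hat = layers.Dense(784, activation="sigmoid")(z)
dae = Model(x_in, x_hat)
dae.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(1000, 784).astype("float32")  # stand-in for real data
# Key difference from the basic AE: the input is phi(x), the target is the clean x.
dae.fit(corrupt(x_train), x_train, epochs=5, batch_size=128, verbose=0)
```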

  30. Denoising Auto-encoders (figure slide). P. Vincent, 2010.

  31. Denoising Auto-encoders. Unsupervised. Manifold hypothesis: data in high-dimensional spaces concentrate on sub-manifolds of much lower dimensionality.

  32. Denoising Auto-encoders. Manifolds (G. Mesnil).

  33. Denoising Auto-encoders. Manifold learning perspective (P. Vincent, 2010).

  34. Denoising Auto-encoders. Left: filters of a basic AE. Right: filters of a DAE (Gaussian noise). Trained on natural images. (P. Vincent, 2010.)

  35. Denoising Auto-encoders. Filters of a DAE (zero-masking noise). Trained on MNIST. (P. Vincent, 2010.)
