

  1. DEEP LEARNING FFR135, Artificial Neural Networks Olof Mogren Chalmers University of Technology October 2016

  2. DEEP LEARNING • Artificial neural networks • Many layers of abstractions • Outperforms traditional methods in: • Image classification • Natural language processing • Machine translation • Sentiment analysis • Speech recognition • Reinforcement learning

  3. SEMI-RECENT PROGRESS • 2006: Depth breakthrough: layerwise pretrained Restricted Boltzmann Machines • GPUs • Practical use: real applications from Google, Facebook, Tesla, Microsoft, Apple, and others! A fast learning algorithm for deep belief nets; Hinton, Osindero, Teh; Neural Computation; 2006

  4. PERCEPTRON • 1943, McCulloch & Pitts (neuron model) • 1958, Rosenblatt (perceptron) • Linear (binary) classification of inputs • Cannot learn any non-linear function (e.g. XOR) [Figure: perceptron with inputs x0...x4, weights w0...w4, output y]

  5. MODELLING XOR [Figure: the XOR truth values (0s and 1s) plotted over inputs x0 and x1]

  6. MODELLING XOR [Figure: XOR split into two linearly separable parts, x0 ∧ ¬x1 and ¬x0 ∧ x1]
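The decomposition on these two slides can be written out directly: one linear threshold unit computes x0 ∧ ¬x1, another computes ¬x0 ∧ x1, and a third unit ORs them together. A minimal sketch in Python, with hand-picked weights (my own illustrative choice, not values from the slides):

```python
import numpy as np

def unit(x, w, b):
    # Linear threshold unit, as on the perceptron slide: fires iff w.x + b > 0.
    return int(np.dot(w, x) + b > 0)

def xor(x0, x1):
    h0 = unit(np.array([x0, x1]), np.array([1.0, -1.0]), -0.5)   # x0 AND (NOT x1)
    h1 = unit(np.array([x0, x1]), np.array([-1.0, 1.0]), -0.5)   # (NOT x0) AND x1
    return unit(np.array([h0, h1]), np.array([1.0, 1.0]), -0.5)  # h0 OR h1

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), xor(a, b))   # prints 0, 1, 1, 0
```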

  7. MULTI-LAYER PERCEPTRON • Combining layers lets us represent non-linear functions • Each layer: • Linear transformation: a = Wx + b • Non-linear (element-wise) activation: h = g(a) [Figure: network with inputs, hidden layer, outputs]
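A single such layer in a few lines of numpy, as a sketch of the two bullet points above (the choice of g = tanh and the layer sizes are illustrative assumptions):

```python
import numpy as np

def layer(x, W, b, g=np.tanh):
    """One MLP layer: linear transformation a = Wx + b,
    followed by an element-wise non-linearity h = g(a)."""
    a = W @ x + b
    return g(a)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                        # 4 inputs
W, b = rng.normal(size=(3, 4)), np.zeros(3)   # hidden layer with 3 units
h = layer(x, W, b)
print(h.shape)   # (3,)
```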

  8. MODELLING FUNCTIONS • Universal function approximation • Stacking layers: function composition • Apply error/loss function to output • Continuously differentiable; chain rule • Propagating errors (backpropagation) • (Mini-batch) stochastic gradient descent (SGD) [Figure: network with inputs, hidden layer, outputs]
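As a concrete (if toy) version of this recipe, the sketch below trains a two-layer network with mini-batch SGD, writing out the chain rule by hand for the backward pass. The task (fitting sin(x)), the layer sizes, and the learning rate are all illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(256, 1))
Y = np.sin(X)

# Two-layer network: h = tanh(x W1 + b1), y_hat = h W2 + b2
W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.05

for step in range(2000):
    idx = rng.choice(len(X), size=32)          # mini-batch
    x, y = X[idx], Y[idx]

    # Forward pass
    a1 = x @ W1 + b1
    h = np.tanh(a1)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)           # squared-error loss

    # Backward pass: propagate errors with the chain rule
    d_y = 2 * (y_hat - y) / len(x)             # dL/dy_hat
    dW2 = h.T @ d_y
    db2 = d_y.sum(axis=0)
    d_h = d_y @ W2.T
    d_a1 = d_h * (1 - h ** 2)                  # tanh derivative
    dW1 = x.T @ d_a1
    db1 = d_a1.sum(axis=0)

    # SGD update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mini-batch loss: {loss:.4f}")
```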

  9. MOTIVATION OF DEPTH • More compact representation (exponentially) • There are boolean functions that require • a polynomial number of units (deep architecture) • an exponential number of units (shallow architecture) • E.g., the parity function (for n input bits): • efficiently represented with depth O(log n) • but O(2^n) gates if represented by a depth-two circuit (Yao, 1985) Exploring Strategies for Training Deep Neural Networks; Larochelle, Bengio, Louradour, Lamblin; JMLR 2009
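To make the parity example concrete: a balanced tree of pairwise XOR gates computes parity with roughly n gates and depth about log2(n), while a depth-two (OR-of-ANDs) form has to enumerate every odd-parity input pattern, i.e. 2^(n-1) terms. A small sketch of both (illustrative code, not from the slides):

```python
from itertools import product

def parity_tree(bits):
    """Balanced tree of XOR gates: about n-1 gates, depth ~ log2(n)."""
    while len(bits) > 1:
        bits = [a ^ b for a, b in zip(bits[0::2], bits[1::2])] + \
               (bits[-1:] if len(bits) % 2 else [])
    return bits[0]

def parity_depth_two(bits):
    """Depth-two 'OR of ANDs' form: one AND term per odd-parity pattern,
    i.e. 2**(n-1) terms, exponential in n."""
    n = len(bits)
    odd_patterns = [p for p in product([0, 1], repeat=n) if sum(p) % 2 == 1]
    return int(any(all(b == q for b, q in zip(bits, p)) for p in odd_patterns))

x = [1, 0, 1, 1, 0, 1, 1, 0]
print(parity_tree(x), parity_depth_two(x))   # both print 1 (odd number of ones)
```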

  10. LEARNING LEVELS OF REPRESENTATION • Each layer: non-linear transformation of inputs: h = sigmoid(Wx + b) • Learning representations; abstractions • No feature engineering!

  11. DISTRIBUTED REPRESENTATIONS • E.g.: big, yellow, Volkswagen • Non-distributed representations: n binary parameters → n values • E.g.: clustering, n-grams, decision trees, etc. • NNs learn distributed representations • Distributed representations: n binary parameters → 2^n possible values
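A tiny illustration of the counting argument above, using the slide's big/yellow/Volkswagen example (the code itself is an assumption of mine):

```python
from itertools import product

# Distributed code: three binary attributes (big?, yellow?, Volkswagen?)
# jointly describe 2**3 = 8 distinct objects.
attrs = ["big", "yellow", "volkswagen"]
codes = list(product([0, 1], repeat=len(attrs)))
print(len(attrs), "parameters ->", len(codes), "representable values")

# A non-distributed code (e.g. a cluster id, one symbol per object)
# with 3 symbols distinguishes only 3 values.
```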

  12. EXAMPLE: WORD EMBEDDINGS • Distributed representations for words • word2vec, GloVe, etc.
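For instance, with word vectors in hand, relatedness can be read off as cosine similarity. A sketch with made-up 3-dimensional vectors (real word2vec or GloVe embeddings are trained and typically have hundreds of dimensions):

```python
import numpy as np

# Toy, hand-made embeddings purely for illustration (not trained vectors).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))   # close to 1: related words
print(cosine(emb["king"], emb["apple"]))   # much smaller
```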

  13. DEEP LEARNING IN JAVASCRIPT cs231n.stanford.edu playground.tensorflow.org

  14. LEVELS OF ABSTRACTIONS

  15. Convolution Layer • 32x32x3 image: height 32, width 32, depth 3 (Slides 15-36 adapted from Fei-Fei Li, Andrej Karpathy & Justin Johnson; CS231n Lecture 7, 27 Jan 2016)

  16. Convolution Layer • 32x32x3 image, 5x5x3 filter • Convolve the filter with the image, i.e. “slide over the image spatially, computing dot products”

  17. Convolution Layer • Filters always extend the full depth of the input volume • 32x32x3 image, 5x5x3 filter • Convolve the filter with the image, i.e. “slide over the image spatially, computing dot products”

  18. Convolution Layer • 32x32x3 image, 5x5x3 filter • Each output is 1 number: the result of taking a dot product between the filter and a small 5x5x3 chunk of the image (i.e. a 5*5*3 = 75-dimensional dot product + bias)

  19. Convolution Layer • 32x32x3 image, 5x5x3 filter • Convolve (slide) over all spatial locations • Result: a 28x28x1 activation map

  20. Convolution Layer • Consider a second (green) filter • Convolving it over all spatial locations gives a second 28x28x1 activation map

  21. Convolution Layer • For example, if we had 6 5x5 filters, we get 6 separate 28x28 activation maps • We stack these up to get a “new image” of size 28x28x6!
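The arithmetic on these slides can be checked directly: convolving a 32x32x3 image with six 5x5x3 filters at stride 1 and no padding yields a 28x28x6 output volume. A naive numpy sketch (explicit loops for clarity, not an efficient implementation):

```python
import numpy as np

def conv_layer(image, filters, biases, stride=1):
    """Naive convolution: each output value is a dot product between
    one filter and an equally sized chunk of the image, plus a bias."""
    H, W, _ = image.shape
    K, F, _, _ = filters.shape              # K filters of size FxFxdepth
    out_h = (H - F) // stride + 1
    out_w = (W - F) // stride + 1
    out = np.zeros((out_h, out_w, K))
    for k in range(K):
        for i in range(out_h):
            for j in range(out_w):
                chunk = image[i*stride:i*stride+F, j*stride:j*stride+F, :]
                out[i, j, k] = np.sum(chunk * filters[k]) + biases[k]
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))
filters = rng.normal(size=(6, 5, 5, 3))     # 6 filters, each 5x5x3
biases = np.zeros(6)
print(conv_layer(image, filters, biases).shape)   # (28, 28, 6)
```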

  22. Preview: a ConvNet is a sequence of convolution layers, interspersed with activation functions • 32x32x3 input => CONV, ReLU (e.g. 6 5x5x3 filters) => 28x28x6

  23. Preview: a ConvNet is a sequence of convolutional layers, interspersed with activation functions • 32x32x3 input => CONV, ReLU (e.g. 6 5x5x3 filters) => 28x28x6 => CONV, ReLU (e.g. 10 5x5x6 filters) => 24x24x10 => …
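The shape bookkeeping on this slide (32x32x3 => 28x28x6 => 24x24x10) can also be reproduced with an off-the-shelf framework; below is a sketch in PyTorch, which is my choice here and not something the slides prescribe:

```python
import torch
import torch.nn as nn

# Two convolution layers interspersed with ReLU, matching the slide:
# 6 filters of size 5x5x3, then 10 filters of size 5x5x6 (stride 1, no padding).
net = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5), nn.ReLU(),
    nn.Conv2d(in_channels=6, out_channels=10, kernel_size=5), nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)      # one 32x32x3 image (channels first)
print(net(x).shape)                # torch.Size([1, 10, 24, 24])
```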

  24. • One filter => one activation map (example: 5x5 filters, 32 total) • We call the layer convolutional because it is related to convolution of two signals: elementwise multiplication and sum of a filter and the signal (image)

  25. preview: [figure]

  26. A closer look at spatial dimensions: • 32x32x3 image, 5x5x3 filter • Convolve (slide) over all spatial locations => 28x28x1 activation map

  27. A closer look at spatial dimensions: • 7x7 input (spatially), assume 3x3 filter (slides 28-30 repeat this as the filter slides across the input)

  31. A closer look at spatial dimensions: • 7x7 input (spatially), assume 3x3 filter => 5x5 output

  32. A closer look at spatial dimensions: • 7x7 input (spatially), assume 3x3 filter applied with stride 2 (slide 33 repeats this as the filter slides across the input)

  34. A closer look at spatial dimensions: • 7x7 input (spatially), assume 3x3 filter applied with stride 2 => 3x3 output!

  35. A closer look at spatial dimensions: • 7x7 input (spatially), assume 3x3 filter applied with stride 3?

  36. A closer look at spatial dimensions: • 7x7 input (spatially), 3x3 filter applied with stride 3? Doesn’t fit! Cannot apply a 3x3 filter on a 7x7 input with stride 3.
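The pattern in slides 27-36 is summarised by the usual output-size formula for convolutions without padding: output = (N - F) / stride + 1, which must come out as an integer for the filter to fit. A quick check:

```python
def conv_output_size(n, f, stride):
    """Spatial output size for an n x n input and f x f filter (no padding).
    Returns None when the filter does not tile the input evenly."""
    if (n - f) % stride != 0:
        return None   # doesn't fit
    return (n - f) // stride + 1

for s in (1, 2, 3):
    print(f"7x7 input, 3x3 filter, stride {s} ->", conv_output_size(7, 3, s))
# stride 1 -> 5, stride 2 -> 3, stride 3 -> None (cannot apply)
```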
