

Deep Learning - EuroPython 2016, Bilbao
G. French, University of East Anglia
Image montages from http://www.image-net.org

Focus: mainly image processing
This talk is more about the principles and the maths than code
Got to fit this into 1


  1. After 300 iterations over the training set: 99.21% validation accuracy

     Model                          Error
     FC64                           2.85%
     FC256--FC256                   1.83%
     20C5--MP2--50C5--MP2--FC256    0.79%
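
A minimal sketch of the strongest model in the table (20C5--MP2--50C5--MP2--FC256), written with the Lasagne library introduced later in this talk; the 1 x 28 x 28 MNIST input shape and the default ReLU nonlinearities are assumptions, with comments mapping layers to the slide's notation:

    import lasagne
    from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                                DenseLayer)
    from lasagne.nonlinearities import softmax

    net = InputLayer((None, 1, 28, 28))                        # greyscale MNIST digits
    net = Conv2DLayer(net, num_filters=20, filter_size=5)      # 20C5
    net = MaxPool2DLayer(net, pool_size=2)                     # MP2
    net = Conv2DLayer(net, num_filters=50, filter_size=5)      # 50C5
    net = MaxPool2DLayer(net, pool_size=2)                     # MP2
    net = DenseLayer(net, num_units=256)                       # FC256
    net = DenseLayer(net, num_units=10, nonlinearity=softmax)  # 10-class output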

  2. What about the learned kernels? They resemble Gabor filters (ImageNet dataset, not MNIST). Image taken from [Krizhevsky12].

  3. Image taken from [Zeiler14]

  4. Image taken from [Zeiler14]

  5. Lasagne

  6. Specifying your network as mathematical expressions is powerful but low-level

  7. Lasagne is a neural network library built on Theano. It makes building networks with Theano much easier.

  8. Provides an API for:
     - constructing the layers of a network
     - getting Theano expressions representing output, loss, etc.
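
A short sketch of both halves of that API; the tiny two-layer model, the Adam optimiser, and the variable names are arbitrary illustrative choices:

    import theano
    import theano.tensor as T
    import lasagne

    x = T.tensor4('x')   # batch of input images
    y = T.ivector('y')   # integer class labels

    # constructing layers of a network
    net = lasagne.layers.InputLayer((None, 1, 28, 28), input_var=x)
    net = lasagne.layers.DenseLayer(net, num_units=64)
    net = lasagne.layers.DenseLayer(net, num_units=10,
                                    nonlinearity=lasagne.nonlinearities.softmax)

    # getting Theano expressions representing output and loss
    pred = lasagne.layers.get_output(net)
    loss = lasagne.objectives.categorical_crossentropy(pred, y).mean()

    # from there, plain Theano compiles the training function
    params = lasagne.layers.get_all_params(net, trainable=True)
    updates = lasagne.updates.adam(loss, params)
    train_fn = theano.function([x, y], loss, updates=updates)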

  9. Lasagne is quite a thin layer on top of Theano, so understanding Theano is helpful. On the plus side, implementing custom layers, loss functions, etc. is quite doable.
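
To illustrate how doable a custom layer is: subclass lasagne.layers.Layer and implement get_output_for() using ordinary Theano ops. The clipped-ReLU layer below is a hypothetical example, not one from the talk:

    import lasagne
    import theano.tensor as T

    class ClippedReLULayer(lasagne.layers.Layer):
        """Hypothetical custom layer: ReLU clipped at a maximum value."""
        def __init__(self, incoming, max_value=6.0, **kwargs):
            super(ClippedReLULayer, self).__init__(incoming, **kwargs)
            self.max_value = max_value

        def get_output_for(self, input, **kwargs):
            # Plain Theano expression; the output shape equals the input
            # shape, so no get_output_shape_for() override is needed.
            return T.clip(input, 0.0, self.max_value)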

  10. Intro to Theano and Lasagne slides: https://speakerdeck.com/britefury/intro-to-theano-and-lasagne-for-deep-learning (more decks at https://speakerdeck.com/britefury)

  11. Notes for building and training neural networks

  12. Neural network architecture (OxfordNet / VGG style)

  13. Early part
      Input: 3 x 224 x 224 (RGB image, zero-mean)

      #  Layer
      1  64C3
      2  64C3
         MP2
      3  128C3
      4  128C3
         MP2

      Blocks consisting of a few convolutional layers (often 3x3 kernels), followed by down-sampling: max-pooling or striding.
      64C3 = 3x3 conv, 64 filters; MP2 = max-pooling, 2x2

  14. Notation: 64C3 = convolutional layer with 64 3x3 filters; MP2 = max-pooling, 2x2.
      (Layer table as on the previous slide.)

  15. Note: after down-sampling, double the number of convolutional filters (64C3 before the first MP2, 128C3 after).
      (Layer table as before.)

  16. Later part
      After the blocks of convolutional and down-sampling layers come fully-connected (a.k.a. dense) layers:

      #  Layer
      1  64C3
      2  64C3
         MP2
      3  128C3
      4  128C3
         MP2
         FC256
         FC10

  17. Notation: FC256 = fully-connected layer with 256 channels.
      (Layer table as before.)

  18. Overall: the convolutional layers detect features in various positions throughout the image.

  19. Overall: the fully-connected / dense layers use the features detected by the convolutional layers to produce the output (full architecture sketched in code below).
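
A sketch of the full layer table as a Lasagne model; the notation comes from the slides, while the 'same' padding and the 10-class output size are assumptions:

    import lasagne
    from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                                DenseLayer)
    from lasagne.nonlinearities import softmax

    # Input: 3 x 224 x 224 (RGB image, zero-mean)
    net = InputLayer((None, 3, 224, 224))
    net = Conv2DLayer(net, 64, 3, pad='same')        # 1: 64C3
    net = Conv2DLayer(net, 64, 3, pad='same')        # 2: 64C3
    net = MaxPool2DLayer(net, 2)                     # MP2
    net = Conv2DLayer(net, 128, 3, pad='same')       # 3: 128C3
    net = Conv2DLayer(net, 128, 3, pad='same')       # 4: 128C3
    net = MaxPool2DLayer(net, 2)                     # MP2
    net = DenseLayer(net, 256)                       # FC256
    net = DenseLayer(net, 10, nonlinearity=softmax)  # FC10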

  20. Could also look at architectures developed by others for inspiration, e.g. Inception by Google or ResNets by Microsoft.
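
As one concrete point of inspiration, the basic ResNet building block adds a skip connection around a pair of 3x3 convolutions. A rough Lasagne sketch (the channel count is assumed to match the input, so no projection shortcut is needed):

    import lasagne
    from lasagne.layers import (InputLayer, Conv2DLayer, ElemwiseSumLayer,
                                NonlinearityLayer)
    from lasagne.nonlinearities import rectify, identity

    def residual_block(incoming, num_filters):
        conv = Conv2DLayer(incoming, num_filters, 3, pad='same',
                           nonlinearity=rectify)
        conv = Conv2DLayer(conv, num_filters, 3, pad='same',
                           nonlinearity=identity)    # linear; ReLU comes after the sum
        summed = ElemwiseSumLayer([incoming, conv])  # the skip connection
        return NonlinearityLayer(summed, rectify)

    net = InputLayer((None, 64, 56, 56))
    net = residual_block(net, 64)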

  21. Batch normalization

  22. Batch normalization [Ioffe15] is recommended in most cases. It is necessary for deeper networks (more than 8 layers).
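
In Lasagne, batch normalization is applied by wrapping a layer with lasagne.layers.batch_norm(), which inserts a BatchNormLayer after it and removes the wrapped layer's now-redundant bias. A minimal sketch:

    import lasagne
    from lasagne.layers import InputLayer, Conv2DLayer, DenseLayer, batch_norm

    net = InputLayer((None, 3, 224, 224))
    net = batch_norm(Conv2DLayer(net, 64, 3, pad='same'))  # conv + BN + ReLU
    net = batch_norm(DenseLayer(net, 256))                 # dense + BN + ReLU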

