

  1. Pooling and Invariance in Convolutional Neural Networks

  2. General Neural Networks: Compositions of linear maps and component-wise non-linearities
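
As a concrete illustration of that composition, here is a minimal NumPy sketch of a fully connected network; the layer sizes and the choice of ReLU as the non-linearity are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def relu(x):
    # Component-wise non-linearity
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    """Alternate linear maps (W @ x + b) with a component-wise non-linearity."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    # The final layer is left linear (e.g., fed into a softmax or loss)
    return weights[-1] @ x + biases[-1]

# Example: a 2-hidden-layer network on a 4-dimensional input
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
y = forward(rng.standard_normal(4), weights, biases)
```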

  3. Neural Networks

  4. Common Non-Linearities: Rectified Linear Unit, sigmoid, hyperbolic tangent
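
For reference, the three non-linearities named on this slide, written out in NumPy:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: max(0, x), applied component-wise
    return np.maximum(x, 0.0)

def sigmoid(x):
    # Logistic sigmoid: squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes inputs into (-1, 1)
    return np.tanh(x)
```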

  5. Biological Inspiration: Neuron diagram; Rectified Linear Unit

  6. Sparse Connections: Layers are not required to be fully connected

  7. Backpropagation

  8. Representations Learnt

  9. Representation Learning: Images of the same class can be far apart in Euclidean space; we need a representation that maps members of the same class to similar values

  10. Convolutional Neural Network: Compositions of convolutions by linear filters, thresholding non-linearities, and spatial pooling

  11. Convolution by Linear Filter

  12. Convolution by Linear Filter
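
A direct, unoptimized sketch of convolving an image with a small linear filter; the "valid" output size and the cross-correlation convention used here are simplifying assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and take inner products ('valid' output)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# Example: a 3x3 horizontal edge filter applied to a random image
rng = np.random.default_rng(0)
image = rng.random((8, 8))
edge_filter = np.array([[ 1.,  1.,  1.],
                        [ 0.,  0.,  0.],
                        [-1., -1., -1.]])
response = conv2d(image, edge_filter)   # shape (6, 6)
```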

  13. Example

  14. Convolutional Neural Networks: 1) Convolution by linear filter, 2) Apply non-linearity, 3) Pooling
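
Putting the three steps together as a single stage; this is a minimal sketch assuming a ReLU non-linearity and non-overlapping 2x2 max pooling.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(image, kernel, pool_size=2):
    """One CNN stage: 1) convolution by a linear filter, 2) ReLU, 3) max pooling."""
    fmap = correlate2d(image, kernel, mode="valid")      # step 1: convolution
    fmap = np.maximum(fmap, 0.0)                         # step 2: non-linearity
    H, W = fmap.shape
    H, W = H - H % pool_size, W - W % pool_size          # crop so windows tile evenly
    pooled = fmap[:H, :W].reshape(H // pool_size, pool_size,
                                  W // pool_size, pool_size).max(axis=(1, 3))
    return pooled                                        # step 3: spatial pooling
```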

  15. Convolutional Neural Network

  16. Convolutional Neural Networks

  17. ImageNet Network

  18. Successes: Computer vision, speech, chemistry

  19. Object Classification

  20. Segmentation

  21. Object Detection

  22. Speech

  23. Physical Chemistry: Successfully predicts atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity, and excitation energies from molecular structure.

  24. Visualization of First Layer

  25. Standard Pooling Mechanisms: Average pooling, max pooling
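
A sketch of both mechanisms over non-overlapping k x k windows; non-overlapping windows are assumed here for simplicity (strided and overlapping variants also exist).

```python
import numpy as np

def pool(fmap, k=2, mode="max"):
    """Average or max pooling over non-overlapping k x k windows."""
    H, W = fmap.shape
    H, W = H - H % k, W - W % k                      # crop so windows tile evenly
    windows = fmap[:H, :W].reshape(H // k, k, W // k, k)
    if mode == "max":
        return windows.max(axis=(1, 3))
    return windows.mean(axis=(1, 3))
```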

  26. Example

  27. Heterogeneous Pooling: Some filters are passed to average pooling, other filters to max pooling

  28. Pooling Continuum: L_p pooling interpolates between average and max pooling; accordingly, LeCun et al. (2012) ran experiments with a variety of p values.

  29. Results along the spectrum: The optimal value for the SVHN dataset was p = 4.

  30. Learnt L_p pooling: Why not learn the optimal p for each filter map?
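
A sketch of L_p pooling over one window, using the mean-based form (whether a mean or a sum sits inside the root varies by formulation); it recovers average pooling of magnitudes at p = 1 and approaches max pooling as p grows.

```python
import numpy as np

def lp_pool(window, p):
    """L_p pooling over one window: (mean of |x|^p)^(1/p)."""
    return np.mean(np.abs(window) ** p) ** (1.0 / p)

window = np.array([0.1, 0.4, 0.9, 0.2])
print(lp_pool(window, 1))    # average of magnitudes
print(lp_pool(window, 4))    # p = 4, the optimum reported for SVHN on the slide
print(lp_pool(window, 100))  # close to max pooling
```

Learning p per filter map, as the slide suggests, amounts to treating p as one more trainable parameter of each feature map's pooling stage.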

  31. Stochastic Pooling

  32. Stochastic Pooling: Expectation at test time
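
The slides only name the mechanism; the sketch below follows the usual stochastic-pooling formulation: sample an activation with probability proportional to its value during training, and take the probability-weighted expectation at test time.

```python
import numpy as np

def stochastic_pool(window, rng, train=True):
    """Stochastic pooling over one window of non-negative activations."""
    a = np.asarray(window, dtype=float)
    total = a.sum()
    if total == 0:
        return 0.0                       # all activations are zero
    probs = a / total
    if train:
        return rng.choice(a, p=probs)    # training: sample one activation
    return np.dot(probs, a)              # test time: expectation under probs

rng = np.random.default_rng(0)
w = np.array([0.0, 1.0, 3.0])
print(stochastic_pool(w, rng, train=True))   # 1.0 or 3.0, chosen at random
print(stochastic_pool(w, rng, train=False))  # 2.5 = 0.25*1 + 0.75*3
```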

  33. Entropy Pooling: Extend to a variable p; an alternative formulation is also given (formulas on the slide)

  34. Max-Out Pooling: Pooling across filters; substantial improvement in performance and allows greater depth
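
A sketch of max-out pooling; grouping consecutive feature maps is an assumption made here for illustration.

```python
import numpy as np

def maxout(feature_maps, group_size):
    """Max-out: pool across filters rather than across space.
    feature_maps has shape (num_filters, H, W); each output channel is the
    element-wise max over a group of `group_size` consecutive filter maps."""
    F, H, W = feature_maps.shape
    assert F % group_size == 0
    grouped = feature_maps.reshape(F // group_size, group_size, H, W)
    return grouped.max(axis=1)           # shape (F // group_size, H, W)
```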

  35. Example

  36. Competing Neurons: Neurons can suppress other neurons' responses

  37. Visualizations of Filters: Early layers

  38. Visualizations

  39. Visualizations

  40. Invariance under Rigid Motion: Goodfellow et al. (2009) demonstrated that CNNs are invariant under rigid motions; indeed, the depth of the network is critical to establishing such invariance

  41. Unstable under Deformation: Szegedy et al.

  42. Lipschitz Bounds for Layers: Max and ReLU are contractive; FC layers: the usual linear operator norm; Conv layers: Parseval's theorem and the DFT yield an explicit formula
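
For the convolutional case, a sketch of that explicit formula for a single-channel filter, assuming circular (periodic) boundary conditions so the convolution is diagonalized by the 2-D DFT; for other boundary conditions this serves as an estimate of the bound.

```python
import numpy as np

def conv_operator_norm(kernel, shape):
    """Spectral (operator) norm of a single-channel circular convolution.
    Zero-pad the kernel to the image shape; by Parseval and the convolution
    theorem, the operator's singular values are the magnitudes of the 2-D DFT
    of the padded kernel, so the norm is their maximum."""
    padded = np.zeros(shape)
    kH, kW = kernel.shape
    padded[:kH, :kW] = kernel
    return np.abs(np.fft.fft2(padded)).max()

kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])        # Sobel filter, for illustration
print(conv_operator_norm(kernel, (32, 32)))
```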

  43. Solutions? Regularize the Lipschitz constant of each operator

  44. Coding Symmetry: Convolutional Wavelet Networks

  45. Architecture: Wavelet convolutions composed with the modulus operator
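
A minimal sketch of one such stage: convolve with a bank of wavelets, take the complex modulus, then locally average (the local average stands in for the low-pass filter); the pooling size and the use of FFT convolution are assumptions made here.

```python
import numpy as np
from scipy.signal import fftconvolve

def scattering_layer(image, wavelets, pool_size):
    """One scattering stage: wavelet convolution, complex modulus, local averaging.
    The averaged modulus coefficients are locally translation invariant."""
    coeffs = []
    for psi in wavelets:
        u = np.abs(fftconvolve(image, psi, mode="same"))   # wavelet conv + modulus
        H, W = u.shape
        H, W = H - H % pool_size, W - W % pool_size
        s = u[:H, :W].reshape(H // pool_size, pool_size,
                              W // pool_size, pool_size).mean(axis=(1, 3))
        coeffs.append(s)
    return np.stack(coeffs)
```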

  46. Gabor Wavelets: A trigonometric function in a Gaussian envelope
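
A sketch of a complex 2-D Gabor filter, a plane wave modulated by a Gaussian envelope; the parameterization used here (orientation theta, frequency in cycles per pixel) is one common convention. Filters built this way can serve as the wavelet bank in the scattering sketch above.

```python
import numpy as np

def gabor(size, sigma, theta, frequency):
    """Complex 2-D Gabor filter: a plane wave (the trigonometric part) inside
    a Gaussian envelope of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * frequency *
                     (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

g = gabor(size=15, sigma=3.0, theta=np.pi / 4, frequency=0.25)
# np.real(g) and np.imag(g) are the even and odd (cosine / sine) Gabor filters
```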

  47. Group Convolution

  48. Group Invariant Scattering: Main result: convolutional wavelet networks are translation-invariant functions on L_2(R^2); furthermore, CWNs can be made invariant to the action of any compact Lie group.

  49. Textures: Sifre and Mallat

  50. Basis for Images: Learns a representation similar to the ImageNet CNN

  51. Learnt Invariances

  52. Optimal Network: 1) Encode symmetry, 2) Regularize Lipschitz coefficients, 3) Competing neurons in the final layers

  53. Final words
