1. AMMI – Introduction to Deep Learning / 1.3. What is really happening? François Fleuret, https://fleuret.org/ammi-2018/, Wed Aug 29 16:56:56 CAT 2018, École Polytechnique Fédérale de Lausanne

2. (Zeiler and Fergus, 2014)

3. (Zeiler and Fergus, 2014)
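One diagnostic popularized by Zeiler and Fergus is the occlusion sensitivity map: slide a gray patch across the input and record how the class score changes, revealing which region the network relies on. A minimal numpy sketch; the "classifier" here is a fabricated linear template used purely for illustration, not a trained network:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, stride=4, fill=0.0):
    """Slide a square occluder over the image; record the score at each position."""
    H, W = image.shape
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            heat[i, j] = score_fn(occluded)
    return heat

# Stand-in "classifier": a linear template sensitive only to the top-left corner.
rng = np.random.default_rng(0)
template = np.zeros((16, 16))
template[:4, :4] = 1.0
score = lambda x: float((template * x).sum())

img = rng.uniform(0.5, 1.0, (16, 16))
heat = occlusion_map(img, score)
# The score collapses when the occluder covers the region the scorer uses,
# so the minimum of the map sits at the top-left cell.
print(heat.argmin())  # → 0
```

The low-score cells of the map mark the evidence the model actually uses; with a real convnet, `score_fn` would be the class logit of the network.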

4. (Google’s Deep Dreams)

5. (Google’s Deep Dreams)
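Deep Dreams images are produced by gradient ascent on the input: pick a unit (or a whole layer) in a trained network and repeatedly nudge the pixels to increase its activation. A toy numpy sketch of that loop, with a single ReLU "feature detector" standing in for a network layer — the weights `w` and step size are assumptions for illustration, whereas a real Deep Dream backpropagates through a trained convnet:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)                       # stand-in "feature" weights
x = 0.05 * w + rng.normal(scale=0.01, size=64)  # start from a near-noise "image"

def activation(x):
    return max(float(np.dot(w, x)), 0.0)      # one ReLU unit

a0 = activation(x)
for _ in range(100):
    # Gradient of the ReLU unit w.r.t. the input: w where the unit is active.
    grad = w if np.dot(w, x) > 0 else np.zeros_like(w)
    x = x + 0.1 * grad                        # gradient ascent on the *input*
# The "dream" drives the unit's activation far above its starting value.
```

The hallucinated textures in Deep Dream images are exactly this effect at scale: pixels drift toward whatever pattern the chosen layer responds to most strongly.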

6. (Thorne Brandt)

7. (Duncan Nicoll)

8. (Szegedy et al., 2014) (Nguyen et al., 2015)
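Szegedy et al. showed that small, targeted perturbations can flip a network's prediction with high confidence; the later fast gradient sign method reduces this to one step, x_adv = x + ε·sign(∇ₓ loss). A numpy sketch on a toy logistic classifier — the weights are fabricated and ε is exaggerated so the flip is visible on this linear stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # toy linear classifier: predict sign(w·x)
x = 0.05 * w                      # an input the classifier labels positive

def predict(x):
    return 1 if np.dot(w, x) > 0 else -1

# FGSM step against the true label y = +1. For the logistic loss
# log(1 + exp(-y w·x)), sign(grad_x loss) = -sign(w), so:
eps = 0.2
x_adv = x - eps * np.sign(w)

# Each coordinate moved by at most eps, yet the prediction flips.
print(predict(x), predict(x_adv))
```

The per-coordinate change is bounded by ε, which is why such perturbations can be visually imperceptible on images while still crossing the decision boundary.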

9. Relations with biology

10. [Figure: HCNNs as models of sensory cortex (Yamins and DiCarlo, 2016, Fig. 1). (a) The basic framework is one of encoding — the process by which stimuli are transformed into patterns of neural activity — and decoding, the process by which neural activity generates behavior; HCNNs model the encoding step along the ventral stream (RGC → LGN → V1 → V2 → V4 → PIT/CIT/AIT, ~100-ms visual presentation, from pixels to behavior). (b, c) Each HCNN layer is a linear–nonlinear (LN) stage — filter, threshold, pool, normalize — applied as a spatial convolution over the image input.]
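The HCNN building block named in the figure is a linear–nonlinear (LN) stage: filter, threshold, pool, normalize. A minimal numpy sketch of one such stage on a single-channel image; the kernel values, pooling size, and normalization constant are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def ln_stage(img, kernel, pool=2, eps=1e-6):
    """One LN stage: filter -> threshold (ReLU) -> max-pool -> divisive normalize."""
    kh, kw = kernel.shape
    H, W = img.shape
    # Filter: valid cross-correlation of the kernel with the image.
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kernel).sum()
    out = np.maximum(out, 0.0)                              # threshold
    H2, W2 = (out.shape[0] // pool) * pool, (out.shape[1] // pool) * pool
    out = out[:H2, :W2].reshape(H2 // pool, pool, W2 // pool, pool).max(axis=(1, 3))  # pool
    return out / (eps + np.sqrt((out ** 2).sum()))          # divisive normalization

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
edge = np.array([[1., -1.], [1., -1.]])  # a crude vertical-edge filter
feat = ln_stage(img, edge)
print(feat.shape)  # (3, 3)
```

Stacking such stages, with learned filters, is what makes an HCNN "hierarchical": each stage's output becomes the next stage's input, mirroring the V1 → V2 → V4 → IT cascade in the figure.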

11. [Figure: HCNN layers predict neural responses (Yamins and DiCarlo, 2016). (a, b) The top hidden layer of an HMO HCNN predicts IT single-site neural responses on test images sorted by category. (c) Categorization performance (balanced accuracy) vs. single-site neural predictivity (% explained variance) in monkey V4 (n = 128) and monkey IT (n = 168), for ideal observers, control models (pixels, V1-like, V2-like, SIFT, HMAX, PLOS09), and successive HCNN layers. (d, e) Human V1–V3 and human IT (fMRI): RDM voxel correlation (Kendall's τ_A) with HCNN layers 1–7 on animate/inanimate, body/face, natural/artificial categories; the geometry-supervised model reaches τ_A = 0.38 in human IT.]

12. Number of neurons and synapses per species (Wikipedia “List of animals by number of neurons”):

    Species      Nb. neurons    Nb. synapses
    Roundworm    302            7.5 × 10^3
    Jellyfish    800
    Sea slug     1.8 × 10^4
    Fruit fly    1.0 × 10^5     1.0 × 10^7
    Ant          2.5 × 10^5
    Cockroach    1.0 × 10^6
    Frog         1.6 × 10^7
    Mouse        7.1 × 10^7     1.0 × 10^11
    Rat          2.0 × 10^8     4.5 × 10^11
    Octopus      3.0 × 10^8
    Human        8.6 × 10^10    1.0 × 10^15

13. Transistor counts per device (Wikipedia “Transistor count”):

    Device                                Nb. transistors
    Intel i7 Haswell-E (8 cores)          2.6 × 10^9
    Intel Xeon Broadwell-E5 (22 cores)    7.2 × 10^9
    AMD Epyc (32 cores)                   19.2 × 10^9
    Nvidia GeForce GTX 1080               7.2 × 10^9
    AMD Vega 10                           12.5 × 10^9
    Nvidia GV100                          21.1 × 10^9

14. [Plot: number of transistors per CPU/GPU, log scale, 1960–2020, with horizontal reference lines at the fruit-fly (~10^7), mouse (~10^11), and human (~10^15) synapse counts. (Wikipedia “Transistor count”)]
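The point of these last slides is a back-of-the-envelope comparison: current chips sit between insect and mammal synapse counts. A quick check of the gaps in orders of magnitude, using the counts from the tables above:

```python
import math

gv100 = 21.1e9          # transistors, Nvidia GV100
fruit_fly = 1.0e7       # synapses
mouse = 1.0e11          # synapses
human = 1.0e15          # synapses

for name, n in [("fruit fly", fruit_fly), ("mouse", mouse), ("human", human)]:
    gap = math.log10(gv100 / n)
    print(f"GV100 vs {name} synapses: {gap:+.1f} orders of magnitude")
```

A GV100 has roughly 2,000× more transistors than a fruit fly has synapses, but it is still about 5× short of a mouse and nearly five orders of magnitude short of a human — which is what the horizontal lines in the plot make visible.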

  15. The end

16. References

A. M. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.

D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19:356–365, Feb 2016.

M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision (ECCV), 2014.
