
Adapting DL to New Data: An Evolutionary Algorithm for Optimizing Deep Networks (PowerPoint PPT Presentation)


  1. Adapting DL to New Data: An Evolutionary Algorithm for Optimizing Deep Networks Steven R. Young Research Scientist Oak Ridge National Laboratory ORNL is managed by UT-Battelle for the US Department of Energy

  2. Overview
     • Deep Learning in Science
     • Challenges
     • Tools
     • Next Steps
     2 Adapting DL to New Data

  3. Deep Learning for Science Applications
     Commercial applications: state-of-the-art results
     • Object recognition, face recognition
     • Characteristics: data is easy to collect; inexpensive labels
     Science applications: challenging new domains
     • Material science, high-energy physics
     • Characteristics: data is difficult to collect; few labels available

  4. Problem: Adaptability Challenge
     • Premise: for every data set, there exists a corresponding neural network that performs ideally on that data.
     • What is the ideal neural network architecture (i.e., hyper-parameters) for a particular data set?
     • Widely used approach: intuition
       1. Pick some deep learning software (Caffe, Torch, Theano, etc.).
       2. Design a set of parameters that defines your deep learning network.
       3. Try it on your data.
       4. If it doesn't work as well as you want, go back to step 2 and try again.
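The guess-and-check loop on this slide can be sketched as a short program. This is a minimal sketch: the `train_and_evaluate` callable is a hypothetical stand-in for training a network with a chosen framework and reporting validation accuracy, and the halving of the learning rate stands in for the manual "adjust and retry" step.

```python
# Sketch of the intuition-driven tuning loop from the slide.
# train_and_evaluate() is a hypothetical stand-in for a full training run.
def tune_by_intuition(train_and_evaluate, initial_params, target_accuracy, max_tries=10):
    params = dict(initial_params)
    for attempt in range(max_tries):
        accuracy = train_and_evaluate(params)   # step 3: try it on your data
        if accuracy >= target_accuracy:
            return params, accuracy
        # Step 4: go back to step 2 and adjust by hand (here: a stub tweak).
        params["learning_rate"] *= 0.5
    return params, accuracy
```

The point of the sketch is that the "adjust" step has no principled rule; each iteration costs a full training run, which is what motivates automating the search.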

  5. The Challenge
     [Diagram: a "Deep Learning Toolbox" of building blocks (convolutional, pooling, and fully connected layers from input to output) plus training hyper-parameters: learning rate, batch size, weight decay, momentum.]

  6. The Challenge
     [Diagram: one candidate network assembled from the toolbox (input, two convolutional layers, output) with the same training hyper-parameters: learning rate, batch size, weight decay, momentum.]

  7. Hyper-parameter Selection
     • Manual search (guess and check): requires domain knowledge.
     • Grid search: cost grows exponentially with a high-dimensional hyper-parameter space, and it does not exploit the low effective dimension for discovery.
     • Random search: by itself, not adaptive (makes no use of information from previous experiments).
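The contrast between grid and random search can be made concrete in a few lines. This is an illustrative sketch, not MENNDL code: `evaluate` is a hypothetical stand-in for training and scoring a network, and the two hyper-parameters (learning rate, batch size) and their ranges are assumptions.

```python
import random

def grid_search(evaluate, lrs, batch_sizes):
    # Exhaustive: cost is the product of all grid sizes.
    return max(((evaluate(lr, bs), (lr, bs)) for lr in lrs for bs in batch_sizes))

def random_search(evaluate, n_trials, seed=0):
    # Fixed budget of trials, but each trial ignores all previous results.
    rng = random.Random(seed)
    best = (float("-inf"), None)
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)           # log-uniform learning rate
        bs = rng.choice([16, 32, 64, 128, 256])  # batch size
        best = max(best, (evaluate(lr, bs), (lr, bs)))
    return best
```

Neither approach feeds information from earlier trials into later ones, which is the gap an evolutionary search is meant to close.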

  8. What can we do with Titan? 18,688 GPUs

  9. MENNDL: Multi-node Evolutionary Neural Networks for Deep Learning
     • An evolutionary algorithm as a solution for searching the hyper-parameter space of deep learning
       - Focus on convolutional neural networks
       - Evolve only the topology with the EA; use the typical SGD training process
       - Generally: provide scalability and adaptability across many data sets and compute platforms
     • Leverage more GPUs; ORNL's Titan has 18,688 GPUs
       - The next-generation machine, Summit, will have increased GPU capability
     • Provide the ability to apply DL to new datasets quickly
       - Climate science, material science, physics, etc.

  10. Designing the Genetic Code
      • Goal: facilitate exploration of complete network definitions.
      • Each population member is a network whose genome consists of sets of genes.
        - A fixed-width set of genes corresponds to one layer.
      • Layers contain multiple distinct parameters.
        - Layer types are restricted based on the section of the network: feature extraction vs. classification.
      • The design is only minimally guided; otherwise the encoding attempts to cover all layer types.
      [Diagram: an individual is one network, its genome a sequence of parameter sets for feature layers followed by classification layers; a population is a group of networks.]
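The genome structure described here (a fixed-width group of genes per layer, with allowed layer types restricted by section) can be sketched as follows. The gene fields and value ranges are illustrative assumptions, not the exact MENNDL encoding.

```python
import random

# Allowed layer types per section, per the slide's feature-extraction /
# classification split. The specific type names are assumptions.
FEATURE_TYPES = ["conv", "pool"]
CLASSIFIER_TYPES = ["fully_connected"]

def random_layer_gene(section, rng):
    # One fixed-width set of genes describing a single layer.
    layer_type = rng.choice(FEATURE_TYPES if section == "feature" else CLASSIFIER_TYPES)
    return {"type": layer_type,
            "kernel_size": rng.choice([3, 5, 7]),
            "num_units": rng.choice([32, 64, 128])}

def random_genome(n_feature_layers, n_classifier_layers, seed=0):
    # A genome is feature-layer genes followed by classification-layer genes.
    rng = random.Random(seed)
    return ([random_layer_gene("feature", rng) for _ in range(n_feature_layers)]
            + [random_layer_gene("classifier", rng) for _ in range(n_classifier_layers)])
```

Because every gene set has the same width, mutation and crossover can operate position-wise without breaking the network definition.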

  11. MENNDL: Communication
      [Diagram: a master node runs the genetic algorithm over the population, using network accuracy as each gene's fitness. The master sends network parameters over MPI to workers (one per node); each worker trains a model and returns its predictions and performance metrics.]
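The master/worker exchange in the diagram can be sketched serially. This is a simplified, single-process sketch of the protocol only: real MENNDL dispatches one worker per node over MPI, and `train_network` here is a hypothetical stub for a full training run.

```python
def worker(network_params, train_network):
    # A worker receives one network's hyper-parameters, trains the model,
    # and returns performance metrics for the master to use as fitness.
    accuracy = train_network(network_params)
    return {"params": network_params, "accuracy": accuracy}

def master(population, train_network):
    # The master dispatches one genome per worker and gathers the metrics,
    # ranked by accuracy, for the genetic algorithm's selection step.
    results = [worker(genome, train_network) for genome in population]
    return sorted(results, key=lambda r: r["accuracy"], reverse=True)
```

In the distributed version the list comprehension becomes MPI sends and receives, but the data flow (parameters out, metrics back) is the same.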

  12. Hyper-parameter Values vs. Performance
      • Currently testing and evaluating the latest code, which varies all possible parameters (e.g., number of layers, layer types, etc.)
      • Using just 4 nodes
      • Evolved accuracy improved from 27% to 65%

  13. MINERvA

  14. MINERvA Vertex Segment Classification
      • Goal: classify which segment the vertex is located in.
      • Challenge: events can have very different characteristics.

  15. Benefit of Parallelization
      [Chart: time to result on the MINERvA dataset falls from 12 hours to 6 hours to 2 hours as parallelization increases.]

  16. Unusual Layers
      • The second convolutional layer has a kernel size of 29.
      • It is followed by a max-pooling layer with a kernel size of 19.
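A back-of-envelope check shows how sharply such large kernels shrink the feature map. Stride 1, no padding, and the 127-wide input are assumptions for illustration; the slide does not state them.

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    # Standard output-size formula for a convolution or pooling window.
    return (input_size + 2 * padding - kernel_size) // stride + 1

# Illustrative input width of 127 (an assumption, not from the slide):
after_conv = conv_output_size(127, 29)         # 29-wide conv kernel -> 99
after_pool = conv_output_size(after_conv, 19)  # 19-wide max pool    -> 81
```

Kernels this large are rare in hand-designed networks, which is part of what makes the evolved architecture notable.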

  17. Unusual Layers (limited training examples)

  18. Current Status
      • Scaled to 15,000 nodes of Titan
      • 460,000 networks evaluated in 24 hours
      • Expanding to more complex topologies
      • Evaluating on a wide range of science datasets

  19. Acknowledgements
      • Gabriel Perdue (FNAL) and Sohini Upadhyay (University of Chicago)
      • Adam Terwilliger (Grand Valley State University) and David Isele (University of Pennsylvania)
      • Robert Patton, Seung-Hwan Lim, Thomas Karnowski, and Derek Rose (ORNL)

  20. Questions
