  1. BAYESIAN GLOBAL OPTIMIZATION: Using Optimal Learning to Tune Deep Learning Pipelines. Scott Clark, scott@sigopt.com

  2. OUTLINE 1. Why is Tuning AI Models Hard? 2. Comparison of Tuning Methods 3. Bayesian Global Optimization 4. Deep Learning Examples 5. Evaluating Optimization Strategies

  3. Deep Learning / AI is extremely powerful. Tuning these systems is extremely non-intuitive.

  4. What is the most important unresolved problem in machine learning? “...we still don't really know why some configurations of deep neural networks work in some cases and not others, let alone having a more or less automatic approach to determining the architectures and the hyperparameters.” Xavier Amatriain, VP Engineering at Quora (former Director of Research at Netflix) https://www.quora.com/What-is-the-most-important-unresolved-problem-in-machine-learning-3

  5. Photo: Joe Ross

  6. TUNABLE PARAMETERS IN DEEP LEARNING

  7. TUNABLE PARAMETERS IN DEEP LEARNING

  8. Photo: Tammy Strobel

  9. STANDARD METHODS FOR HYPERPARAMETER SEARCH

  10. STANDARD TUNING METHODS (diagram: Manual Search, Grid Search, and Random Search each propose parameter configurations, covering weights, thresholds, window sizes, and transformations; each configuration trains the ML/AI model on the data and is scored by cross validation against testing data)

  11. OPTIMIZATION FEEDBACK LOOP (diagram: new configurations arrive over the REST API, the ML/AI model is trained on the data, and the objective metric from cross validation on testing data is fed back to drive better results)

  12. BAYESIAN GLOBAL OPTIMIZATION

  13. OPTIMAL LEARNING “...the challenge of how to collect information as efficiently as possible, primarily for settings where collecting information is time consuming and expensive.” (Prof. Warren Powell, Princeton) What is the most efficient way to collect information? (Prof. Peter Frazier, Cornell) How do we make the most money, as fast as possible? (Scott Clark, CEO, SigOpt)

  14. BAYESIAN GLOBAL OPTIMIZATION
  ● Optimize an objective function
    ○ Loss, accuracy, likelihood
  ● Given parameters
    ○ Hyperparameters, feature/architecture parameters
  ● Find the best hyperparameters
    ○ Sample the function as few times as possible
    ○ Training on big data is expensive

  15. HOW DOES IT WORK? SMBO: Sequential Model-Based Optimization

  16. GP/EI SMBO
  1. Build a Gaussian Process (GP) with the points sampled so far
  2. Optimize the fit of the GP (covariance hyperparameters)
  3. Find the point(s) of highest Expected Improvement within the parameter domain
  4. Return the optimal next point(s) to sample
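  A minimal sketch of one GP/EI iteration following these four steps, assuming scikit-learn's GaussianProcessRegressor as the GP and a random candidate sweep as the acquisition maximizer; the function name and the candidate-sweep strategy are illustrative, not SigOpt's implementation:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def suggest_next_point(X, y, bounds, n_candidates=10000, rng=None):
        """One GP/EI iteration: fit a GP to (X, y), then return the
        candidate in `bounds` (rows of [min, max]) with the highest
        Expected Improvement, for maximization of y."""
        if rng is None:
            rng = np.random.default_rng()
        # Steps 1-2: build the GP; fit() tunes the covariance
        # hyperparameters by maximizing the log marginal likelihood.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        # Step 3: evaluate EI on random candidates in the parameter domain.
        lo, hi = bounds[:, 0], bounds[:, 1]
        cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
        mu, sigma = gp.predict(cand, return_std=True)
        best = y.max()
        imp = mu - best
        z = np.divide(imp, sigma, out=np.zeros_like(imp), where=sigma > 0)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0] = 0.0
        # Step 4: return the next best point to sample.
        return cand[np.argmax(ei)]

  Looping suggest, evaluate, and append to (X, y) reproduces the optimization feedback loop of slide 11.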

  17-25. GAUSSIAN PROCESSES (figure sequence: a GP posterior over the objective is built and progressively refined as points are sampled; the final frame contrasts an overfit, a good fit, and an underfit of the covariance hyperparameters)
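  The posterior pictured in these figures has the standard textbook form (Rasmussen and Williams), nothing SigOpt-specific: given observations (X, y), kernel k, and observation-noise variance $\sigma_n^2$, the GP posterior mean and variance at a test point $x_*$ are

  $$\mu(x_*) = k(x_*, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} y$$

  $$\sigma^2(x_*) = k(x_*, x_*) - k(x_*, X)\left[K(X, X) + \sigma_n^2 I\right]^{-1} k(X, x_*)$$

  Optimizing the fit (step 2 of the GP/EI loop) typically means maximizing the log marginal likelihood of y over the kernel hyperparameters, which is exactly what separates the overfit, good-fit, and underfit panels above.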

  26-31. EXPECTED IMPROVEMENT (figure sequence: the Expected Improvement acquisition function is computed from the GP posterior and maximized to choose the next point to sample)
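  Expected Improvement has a standard closed form under a GP posterior (again a textbook result, not SigOpt-specific). For maximization, with posterior mean $\mu(x)$, posterior standard deviation $\sigma(x)$, and incumbent best value $f^*$:

  $$\mathrm{EI}(x) = \mathbb{E}\left[\max(f(x) - f^*, 0)\right] = (\mu(x) - f^*)\,\Phi(z) + \sigma(x)\,\phi(z), \qquad z = \frac{\mu(x) - f^*}{\sigma(x)}$$

  where $\Phi$ and $\phi$ are the standard normal CDF and PDF, and EI is taken as 0 where $\sigma(x) = 0$. The first term rewards points whose predicted mean already beats $f^*$ (exploitation); the second rewards predictive uncertainty (exploration).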

  32. DEEP LEARNING EXAMPLES

  33. SIGOPT + MXNET ● Classify movie reviews using a CNN in MXNet
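  To make the tunable surface concrete, here is a sketch of a small Kim-style text CNN in MXNet's symbolic API; the architecture and defaults are illustrative assumptions rather than the network from the talk, but each keyword argument is exactly the kind of parameter the next slides hand to the optimizer:

    import mxnet as mx

    def text_cnn(vocab_size, seq_len, num_classes,
                 embed_dim=300, filter_width=3, num_filter=100, dropout=0.5):
        """A movie-review CNN; every keyword argument is a tunable
        hyperparameter (hypothetical values, for illustration)."""
        data = mx.sym.Variable('data')  # (batch, seq_len) word ids
        embed = mx.sym.Embedding(data, input_dim=vocab_size, output_dim=embed_dim)
        # Treat the embedded sentence as a 1-channel image: (batch, 1, seq_len, embed_dim)
        x = mx.sym.Reshape(embed, shape=(-1, 1, seq_len, embed_dim))
        conv = mx.sym.Convolution(x, kernel=(filter_width, embed_dim), num_filter=num_filter)
        act = mx.sym.Activation(conv, act_type='relu')
        # Max-pool over every n-gram position
        pool = mx.sym.Pooling(act, pool_type='max', kernel=(seq_len - filter_width + 1, 1))
        drop = mx.sym.Dropout(pool, p=dropout)
        fc = mx.sym.FullyConnected(drop, num_hidden=num_classes)
        return mx.sym.SoftmaxOutput(fc, name='softmax')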

  34. TEXT CLASSIFICATION PIPELINE (diagram: hyperparameter configurations and feature transformations arrive over the REST API, the text model is trained in MXNet, and accuracy on validation/testing text is fed back to drive better results)

  35. TUNABLE PARAMETERS IN DEEP LEARNING

  36. STOCHASTIC GRADIENT DESCENT ● Comparison of several RMSProp SGD parametrizations
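  For reference, a plain-numpy sketch of the RMSProp update being parametrized; the defaults below are common values, not the ones compared on the slide:

    import numpy as np

    def rmsprop_update(w, grad, cache, lr=0.001, gamma=0.9, eps=1e-8):
        """One RMSProp step; lr, gamma, and eps are the kind of SGD
        hyperparameters whose settings this slide compares."""
        # Exponential moving average of squared gradients
        cache = gamma * cache + (1.0 - gamma) * grad ** 2
        # Per-weight step size scaled by recent gradient magnitude
        w = w - lr * grad / (np.sqrt(cache) + eps)
        return w, cache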

  37. ARCHITECTURE PARAMETERS

  38. TUNING METHODS (looping animation: Grid Search, Random Search, ?)

  39. MULTIPLICATIVE TUNING SPEED UP

  40. SPEED UP #1: CPU -> GPU

  41. SPEED UP #2: RANDOM/GRID -> SIGOPT

  42. CONSISTENTLY BETTER AND FASTER

  43. SIGOPT + TENSORFLOW ● Classify house numbers in an image dataset (SVHN)

  44. COMPUTER VISION PIPELINE (diagram: hyperparameter configurations and feature transformations arrive over the REST API, the model is trained on images in TensorFlow, and accuracy from cross validation on testing images is fed back to drive better results)

  45. METRIC OPTIMIZATION

  46. SIGOPT + NEON
  ● All-convolutional neural network
  ● Multiple convolutional and dropout layers
  ● Baseline hyperparameters tuned with a mixture of domain expertise and grid search (brute force)
  http://arxiv.org/pdf/1412.6806.pdf

  47. COMPARATIVE PERFORMANCE
  ● Expert baseline: 0.8995 accuracy (using neon)
  ● SigOpt best: 0.9011 accuracy
    ○ A 1.6% relative reduction in error rate (from 0.1005 to 0.0989)
    ○ No expert time wasted in tuning

  48. SIGOPT + NEON
  ● Explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions
  ● Variable depth
  ● Baseline hyperparameters tuned with a mixture of domain expertise and grid search (brute force)
  http://arxiv.org/pdf/1512.03385v1.pdf

  49. COMPARATIVE PERFORMANCE
  ● Expert baseline: 0.9339 accuracy (from the paper)
  ● SigOpt best: 0.9436 accuracy
    ○ A 15% relative reduction in error rate (from 0.0661 to 0.0564)
    ○ No expert time wasted in tuning

  50. EVALUATING THE OPTIMIZER

  51. OUTLINE ● Metric Definitions ● Benchmark Suite ● Eval Infrastructure ● Visualization Tool ● Baseline Comparisons

  52. METRIC: BEST_FOUND What is the best value found after optimization completes?
    BEST_FOUND: BLUE 0.7225, RED 0.8949

  53. METRIC: AUC How quickly is the optimum found? (area under curve)
    BEST_FOUND: BLUE 0.9439, RED 0.9435
    AUC: BLUE 0.8299, RED 0.9358
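  A small sketch of how both metrics can be computed from one optimization trace; normalizing AUC as the mean of the best-seen-so-far curve is my assumption, since the slide does not spell out the normalization:

    import numpy as np

    def best_found(values):
        """BEST_FOUND: best objective value over the whole run."""
        return float(np.max(values))

    def auc(values):
        """AUC: area under the best-seen-so-far curve, normalized by
        the number of evaluations (assumed normalization)."""
        return float(np.mean(np.maximum.accumulate(values)))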

  54. STOCHASTIC OPTIMIZATION

  55. BENCHMARK SUITE
  ● Optimization functions from the literature
  ● ML datasets: LIBSVM, Deep Learning, etc.
    TEST FUNCTION TYPE: COUNT
    Continuous Params: 184
    Noisy Observations: 188
    Parallel Observations: 45
    Integer Params: 34
    Categorical Params / ML: 47
    Failure Observations: 30
    TOTAL: 489

  56. INFRASTRUCTURE
  ● On-demand cluster in AWS for parallel evaluation-function optimization
  ● A full evaluation consists of ~20,000 optimizations and takes ~30 min

  57. RANKING OPTIMIZERS
  1. Mann-Whitney U tests using BEST_FOUND
  2. Tied results are then partially ranked using AUC
  3. Any remaining ties stay as ties in the final ranking
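  A sketch of step 1, assuming scipy's mannwhitneyu; the significance threshold is an illustrative assumption:

    from scipy.stats import mannwhitneyu

    def a_beats_b(a_best_founds, b_best_founds, alpha=0.01):
        """One-sided Mann-Whitney U test on BEST_FOUND samples from
        repeated runs of optimizers A and B (alpha is assumed)."""
        _, p = mannwhitneyu(a_best_founds, b_best_founds, alternative='greater')
        return p < alpha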

  58. RANKING AGGREGATION ● Partial rankings are aggregated across all evaluation functions using a Borda count (each method's score is the number of methods ranked below it), as sketched below
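  A sketch of the Borda aggregation as described (sum of methods ranked lower); the dict-based ranking format is an assumption for illustration:

    def borda_scores(partial_rankings):
        """partial_rankings: one dict per evaluation function, mapping
        method name -> rank (1 is best; ties share a rank).  A method
        earns one point per method ranked strictly below it."""
        scores = {}
        for ranking in partial_rankings:
            for method, rank in ranking.items():
                beaten = sum(1 for r in ranking.values() if r > rank)
                scores[method] = scores.get(method, 0) + beaten
        return scores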

  59. SHORT RESULTS SUMMARY

  60. BASELINE COMPARISONS

  61. SIGOPT SERVICE

  62. OPTIMIZATION FEEDBACK LOOP (diagram, repeated from slide 11: new configurations arrive over the REST API, the ML/AI model is trained on the data, and the objective metric from cross validation on testing data is fed back to drive better results)

  63. SIMPLIFIED OPTIMIZATION
  ● Client Libraries: Python, Java, R, Matlab, and more...
  ● Framework Integrations: TensorFlow, scikit-learn, xgboost, Keras, Neon, and more...
  ● Live Demo
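  To make the simplified loop concrete, a sketch in the spirit of SigOpt's classic Python client (create an experiment, then alternate suggestions and observations); treat the exact field names as an approximation of that era's API, and objective() as a stand-in for your own train-plus-validate step:

    from sigopt import Connection

    conn = Connection(client_token='YOUR_API_TOKEN')

    # Declare the tunable parameters once; the service does the modeling.
    experiment = conn.experiments().create(
        name='CNN text classifier',
        parameters=[
            dict(name='log_learning_rate', type='double',
                 bounds=dict(min=-5.0, max=-1.0)),
            dict(name='num_filter', type='int',
                 bounds=dict(min=50, max=300)),
        ],
    )

    for _ in range(60):  # evaluation budget
        suggestion = conn.experiments(experiment.id).suggestions().create()
        value = objective(**suggestion.assignments)   # train + cross-validate
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id, value=value)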

  64. DISTRIBUTED TRAINING
  ● SigOpt serves as a distributed scheduler for training models across workers
  ● Workers access the SigOpt API for the latest parameters to try for each model
  ● Enables easy distributed training of non-distributed algorithms across any number of models

  65. COMPARATIVE PERFORMANCE
  ● Better Results, Faster and Cheaper: quickly get the most out of your models with our proven, peer-reviewed ensemble of Bayesian and global optimization methods
    ○ A Stratified Analysis of Bayesian Optimization Methods (ICML 2016)
    ○ Evaluation System for a Bayesian Optimization Service (ICML 2016)
    ○ Interactive Preference Learning of Utility Functions for Multi-Objective Optimization (NIPS 2016)
    ○ And more...
  ● Fully Featured: tune any model in any pipeline
    ○ Scales to 100 continuous, integer, and categorical parameters and many thousands of evaluations
    ○ Parallel tuning support across any number of models
    ○ Simple integrations with many languages and libraries
    ○ Powerful dashboards for introspecting your models and optimization
    ○ Advanced features like multi-objective optimization, failure region support, and more
  ● Secure Black Box Optimization: your data and models never leave your system

  66. Try it yourself! https://sigopt.com/getstarted

  67. Questions? contact@sigopt.com https://sigopt.com @SigOpt
