
MIT 9.520/6.860, Fall 2018
Class 11: Neural networks tips, tricks & software
Andrzej Banburski

Last time: Convolutional neural networks (source: github.com/vdumoulin/conv_arithmetic). Large-scale datasets, general-purpose GPUs, AlexNet.


Batch-size & learning rate
An interesting linear scaling relationship seems to exist between the learning rate η and the mini-batch size b:
◮ In the SGD update they appear as the ratio η/b, with an additional implicit dependence on b through the sum of gradients.
◮ If b ≪ N, we can approximate SGD by a stochastic differential equation with noise scale g ≈ ηN/b [Smith & Le, 2017].
◮ This means that instead of decaying η, we can increase the batch size dynamically. (source: [Smith et al., 2018])
◮ As b approaches N the dynamics become more and more deterministic, and we would expect this relationship to vanish.

Batch-size & learning rate (figure; source: [Goyal et al., 2017])
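The linear-scaling rule can be made concrete with a small numerical sketch. The dataset size, base learning rate, and base batch size below are made-up values, not from the slides; the point is that keeping the noise scale g ≈ ηN/b fixed means scaling η linearly with b.

```python
# Keep the SGD noise scale g ~ eta * N / b roughly constant as the
# mini-batch size changes (all numbers here are illustrative).
N = 50_000            # dataset size
base_lr, base_bs = 0.1, 128

def scaled_lr(batch_size, base_lr=base_lr, base_bs=base_bs):
    """Scale the learning rate linearly with the batch size."""
    return base_lr * batch_size / base_bs

for b in (128, 256, 512, 1024):
    eta = scaled_lr(b)
    print(f"b={b:5d}  eta={eta:.3f}  g ~ {eta * N / b:.1f}")
```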

Overview
◮ Initialization & hyper-parameter tuning
◮ Optimization algorithms
◮ Batchnorm & Dropout
◮ Finite dataset woes
◮ Software

SGD is kinda slow...
◮ GD: use all points at each iteration to compute the gradient.
◮ SGD: use one point at each iteration to compute the gradient.
◮ Faster: mini-batch, use a mini-batch of points at each iteration to compute the gradient.
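A toy least-squares sketch of the three variants (the data and hyper-parameters are made up); setting b = N recovers full GD and b = 1 recovers plain SGD.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

def grad(w, idx):
    """Mean-squared-error gradient computed on the subset of points `idx`."""
    Xb, yb = X[idx], y[idx]
    return 2 * Xb.T @ (Xb @ w - yb) / len(idx)

w, eta, b = np.zeros(d), 0.1, 32          # b = N -> GD, b = 1 -> SGD
for step in range(500):
    idx = rng.choice(N, size=b, replace=False)
    w -= eta * grad(w, idx)
```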

Alternatives to SGD
Are there reasonable alternatives outside of Newton's method? Accelerations:
◮ Momentum
◮ Nesterov's method
◮ Adagrad
◮ RMSprop
◮ Adam
◮ ...

SGD with Momentum
We can try accelerating SGD, θ_{t+1} = θ_t − η∇f(θ_t), by adding a momentum/velocity term:

    v_{t+1} = µ v_t − η ∇f(θ_t),    θ_{t+1} = θ_t + v_{t+1}        (4)

where µ is a new "momentum" hyper-parameter. (source: cs231n.github.io)
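A minimal numpy sketch of update (4); grad is assumed to be a function returning the (mini-batch) gradient at the current parameters, and the hyper-parameters are illustrative.

```python
import numpy as np

def sgd_momentum(theta, grad, eta=0.01, mu=0.9, steps=100):
    """Heavy-ball update (4): the velocity accumulates past gradients."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = mu * v - eta * grad(theta)
        theta = theta + v
    return theta
```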

Nesterov Momentum
◮ Sometimes the momentum update can overshoot.
◮ We can instead evaluate the gradient at the point where momentum takes us:

    v_{t+1} = µ v_t − η ∇f(θ_t + µ v_t),    θ_{t+1} = θ_t + v_{t+1}        (5)

(source: Geoff Hinton's lecture)
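The same sketch for the Nesterov update (5): the gradient is evaluated at the look-ahead point θ_t + µ v_t rather than at θ_t.

```python
import numpy as np

def sgd_nesterov(theta, grad, eta=0.01, mu=0.9, steps=100):
    """Nesterov update (5): gradient taken at the look-ahead point."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = mu * v - eta * grad(theta + mu * v)
        theta = theta + v
    return theta
```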

AdaGrad
◮ An alternative is to automate the decay of the learning rate.
◮ The Adaptive Gradient algorithm does this by accumulating the squared magnitudes of the gradients.
◮ AdaGrad accelerates in flat directions of the optimization landscape and slows down in steep ones.
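A numpy sketch of the AdaGrad update: per-parameter step sizes shrink as squared gradients accumulate (grad and the hyper-parameters are placeholders).

```python
import numpy as np

def adagrad(theta, grad, eta=0.01, eps=1e-8, steps=100):
    cache = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        cache += g ** 2                              # running sum of squared gradients
        theta = theta - eta * g / (np.sqrt(cache) + eps)
    return theta
```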

RMSProp
Problem: the updates in AdaGrad always decrease the learning rate, so some of the parameters can become un-learnable.
◮ Fix by Hinton: use an exponentially weighted sum of the squared gradient magnitudes instead.
◮ This assigns more weight to recent iterations, which is useful if the directions of steeper or shallower descent suddenly change.
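The same sketch with AdaGrad's running sum replaced by an exponentially decaying average of squared gradients (decay rate rho; the values below are commonly used defaults, not prescriptions).

```python
import numpy as np

def rmsprop(theta, grad, eta=1e-3, rho=0.9, eps=1e-8, steps=100):
    cache = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        cache = rho * cache + (1 - rho) * g ** 2     # decaying average, not a sum
        theta = theta - eta * g / (np.sqrt(cache) + eps)
    return theta
```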

Adam
Adaptive Moment estimation – a combination of the previous approaches [Kingma and Ba, 2014].
◮ Ridiculously popular – more than 13K citations!
◮ Probably because it comes with recommended parameters and came with a proof of convergence (which was later shown to be wrong).
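A numpy sketch of Adam with the defaults recommended in the paper (β1 = 0.9, β2 = 0.999, ε = 1e-8), including the bias-correction step.

```python
import numpy as np

def adam(theta, grad, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    m = np.zeros_like(theta)        # first moment (mean of gradients)
    v = np.zeros_like(theta)        # second moment (mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta
```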

So what should I use in practice?
◮ Adam is a good default in many cases.
◮ There exist datasets on which Adam and other adaptive methods do not generalize to unseen data at all! [The Marginal Value of Adaptive Gradient Methods in Machine Learning]
◮ SGD with momentum and a learning-rate decay schedule often outperforms Adam (but requires tuning).
(comparison figure; source: github.com/YingzhenLi)
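In PyTorch the two setups above look roughly like this; the toy model, learning rates, and schedule are illustrative, not recommendations.

```python
import torch
from torch import nn

net = nn.Linear(10, 2)    # stand-in for a real model

# Option 1: Adam with its usual defaults -- a reasonable first choice.
opt_adam = torch.optim.Adam(net.parameters(), lr=1e-3)

# Option 2: SGD with momentum plus a step-wise learning-rate decay;
# often generalizes better, but the schedule needs tuning.
opt_sgd = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9,
                          weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(opt_sgd, step_size=30, gamma=0.1)
```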

Overview
◮ Initialization & hyper-parameter tuning
◮ Optimization algorithms
◮ Batchnorm & Dropout
◮ Finite dataset woes
◮ Software

Data pre-processing
Since our non-linearities change their behavior around the origin, it makes sense to pre-process the data to zero mean and unit variance:

    x̂_i = (x_i − E[x_i]) / √Var[x_i]        (6)

(source: cs231n.github.io)
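A numpy sketch of standardization (6): the statistics are estimated on the training set (made-up data here) and reused unchanged at test time.

```python
import numpy as np

X_train = np.random.randn(1000, 32) * 3.0 + 1.5    # made-up training data
mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8                   # avoid division by zero

X_train_hat = (X_train - mean) / std
# X_test_hat = (X_test - mean) / std               # same statistics at test time
```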

Batch Normalization
A common technique is to repeat this throughout the deep network in a differentiable way. [Ioffe and Szegedy, 2015]

Batch Normalization
In practice, a batchnorm layer is added after a conv or fully-connected layer, but before the activation.
◮ In the original paper the authors claimed that this is meant to reduce internal covariate shift.
◮ More obviously, it reduces 2nd-order correlations between layers. It was recently shown that batchnorm does not actually change covariate shift; instead it smooths out the optimization landscape [Santurkar, Tsipras, Ilyas, Madry, 2018].
◮ In practice this reduces the dependence on initialization and seems to stabilize the flow of gradient descent.
◮ Using BN usually nets you a gain of a few % in test accuracy.
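In PyTorch this ordering is typically written as conv, then BatchNorm, then the activation; the channel counts below are illustrative.

```python
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),  # bias is redundant before BN
    nn.BatchNorm2d(64),    # normalize over the batch, then rescale with learnable gamma/beta
    nn.ReLU(inplace=True),
)
```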

Dropout
Another common technique: during the forward pass, set some of the units to 0 at random with probability p. A typical choice is p = 50%.
◮ The idea is to prevent co-adaptation of neurons.
◮ At test time we want to remove the randomness. A good approximation is to multiply the activations by p.
◮ Dropout is more commonly applied to fully-connected layers, though its use is waning.
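A PyTorch sketch: nn.Dropout zeroes activations with probability p in training mode and is the identity in eval mode (PyTorch uses "inverted" dropout, rescaling by 1/(1-p) at training time, so no multiplication by p is needed at test time).

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zeroes half of the activations while training
    nn.Linear(128, 10),
)

x = torch.randn(4, 256)
net.train(); y_train = net(x)   # stochastic forward pass
net.eval();  y_test = net(x)    # deterministic forward pass
```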

Overview
◮ Initialization & hyper-parameter tuning
◮ Optimization algorithms
◮ Batchnorm & Dropout
◮ Finite dataset woes
◮ Software

Finite dataset woes
While we are entering the Big Data age, in practice we often find ourselves with too little data to properly train our deep neural networks.
◮ What if collecting more data is slow or difficult?
◮ Can we squeeze more out of what we already have?

Invariance problem
An often-repeated claim about CNNs is that they are invariant to small translations. Independently of whether this is true, they are not invariant to most other types of transformations. (figure; source: cs231n.github.io)

Data augmentation
◮ Can greatly increase the amount of data by performing:
  – Translations
  – Rotations
  – Reflections
  – Scaling
  – Cropping
  – Adding Gaussian noise
  – Adding occlusion
  – Interpolation
  – etc.
◮ Crucial for achieving state-of-the-art performance!
◮ For example, ResNet improves from 11.66% to 6.41% error on CIFAR-10 and from 44.74% to 27.22% on CIFAR-100.

Data augmentation (examples figure; source: github.com/aleju/imgaug)
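A typical torchvision augmentation pipeline; the specific transforms and the normalization statistics are illustrative CIFAR-style choices, not taken from the slides.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),      # random translation via padded crop
    transforms.RandomHorizontalFlip(),         # reflection
    transforms.RandomRotation(10),             # small rotations
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),   # commonly quoted CIFAR-10 stats
                         std=(0.2470, 0.2435, 0.2616)),
])
```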

Transfer Learning
What if you truly have too little data?
◮ If your data is sufficiently similar to a bigger dataset, then you're in luck!
◮ Idea: take a model trained, for example, on ImageNet.
◮ Freeze all but the last few layers and retrain on your small dataset. The bigger your dataset, the more layers you have to retrain.
(figure; source: [Haase et al., 2014])
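A PyTorch sketch of the freeze-and-retrain recipe; the pretrained backbone, number of classes, and optimizer settings are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10                            # hypothetical number of target classes
model = models.resnet18(pretrained=True)    # backbone trained on ImageNet

for param in model.parameters():
    param.requires_grad = False             # freeze all pretrained layers

# Replace the final layer with a fresh, trainable classification head.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the new head; unfreeze more layers if you have more data.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
```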

Overview
◮ Initialization & hyper-parameter tuning
◮ Optimization algorithms
◮ Batchnorm & Dropout
◮ Finite dataset woes
◮ Software

Software overview (figures)

Why use frameworks?
◮ You don't have to implement everything yourself.
◮ Many built-in modules allow quick iteration of ideas – building a neural network becomes putting simple blocks together, and computing backprop is a breeze.
◮ Someone else already wrote the CUDA code to run training efficiently on GPUs (or TPUs).

Main design difference (figure; source: Introduction to Chainer)

PyTorch concepts
Similar in code to numpy.
◮ Tensor: nearly identical to np.array; can run on the GPU just by moving it to a CUDA device (e.g. with .to("cuda")).
◮ Autograd: package for automatic computation of backprop and construction of computational graphs.
◮ Module: neural network layer storing weights.
◮ Dataloader: class for simplifying efficient data loading.
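A compact sketch touching all four concepts (shapes, data, and the toy model are made up).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Tensor: numpy-like, optionally on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 10, device=device)
y = torch.randint(0, 2, (64,), device=device)

# Module: a layer (or whole network) holding its own weights.
model = nn.Linear(10, 2).to(device)

# Autograd: backward() fills in .grad for every parameter of the model.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# DataLoader: batching and shuffling of a dataset.
loader = DataLoader(TensorDataset(x.cpu(), y.cpu()), batch_size=16, shuffle=True)
```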

PyTorch - optimization
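A minimal torch.optim training loop in the spirit of this slide; the model, data, and hyper-parameters are placeholders, not the slide's original code.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 10)                   # made-up inputs
y = torch.randint(0, 2, (256,))            # made-up labels

for epoch in range(10):
    optimizer.zero_grad()                  # clear gradients from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()                        # autograd computes parameter gradients
    optimizer.step()                       # apply the SGD-with-momentum update
```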

PyTorch - ResNet in one page (code; source: @jeremyphoward)
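A hedged sketch of a small residual network; this is not the slide's one-page ResNet (attributed to @jeremyphoward), it only illustrates the residual-block pattern.

```python
import torch
from torch import nn

def conv_bn(c_in, c_out, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
    )

class BasicBlock(nn.Module):
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv1 = conv_bn(c_in, c_out, stride)
        self.conv2 = conv_bn(c_out, c_out)
        self.shortcut = (nn.Identity() if stride == 1 and c_in == c_out
                         else conv_bn(c_in, c_out, stride))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + self.shortcut(x))       # residual connection

class TinyResNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(conv_bn(3, 16), nn.ReLU(inplace=True))
        self.layers = nn.Sequential(
            BasicBlock(16, 16),
            BasicBlock(16, 32, stride=2),
            BasicBlock(32, 64, stride=2),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.layers(self.stem(x))
        x = nn.functional.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

logits = TinyResNet()(torch.randn(2, 3, 32, 32))       # -> shape (2, 10)
```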
