  1. Lecture 6: Optimization. CS109B Data Science 2. Pavlos Protopapas and Mark Glickman

  2. Outline: Optimization • Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  3. Learning vs. Optimization. Goal of learning: minimize generalization error. In practice, we perform empirical risk minimization: $J(\theta) = \mathbb{E}_{(x,y)\sim p_{\text{data}}}\left[ L(f(x;\theta), y) \right]$, approximated by $\hat{J}(\theta) = \frac{1}{m} \sum_{i=1}^{m} L(f(x^{(i)};\theta), y^{(i)})$. The quantity optimized is different from the quantity we care about.

  4. Batch vs. Stochastic Algorithms. Batch algorithms: optimize the empirical risk using exact gradients. Stochastic algorithms: estimate the gradient $\nabla J(\theta) = \mathbb{E}_{(x,y)\sim p_{\text{data}}}\left[ \nabla L(f(x;\theta), y) \right]$ from a small random sample. Large mini-batch: gradient computation is expensive. Small mini-batch: greater variance in the estimate, so more steps are needed to converge.

  5. Critical Points. Points with zero gradient. The second derivative (Hessian) determines the curvature. (figure: Goodfellow et al., 2016)

  6. Stochastic Gradient Descent. Take small steps in the direction of the negative gradient. Sample $m$ examples from the training set and compute $g = \frac{1}{m} \sum_i \nabla_\theta L(f(x^{(i)};\theta), y^{(i)})$, then update the parameters: $\theta \leftarrow \theta - \epsilon_k\, g$. In practice: shuffle the training set once and pass through it multiple times.
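
The update above can be sketched in a few lines of NumPy. This is a minimal illustration, not the course code; the `loss_grad(params, X_batch, y_batch)` helper, which is assumed to return the average gradient over a mini-batch, and the learning-rate and batch-size values are placeholders.

```python
import numpy as np

def sgd(params, loss_grad, X, y, lr=0.01, batch_size=32, epochs=10):
    """Mini-batch SGD: theta <- theta - lr * g, with g averaged over the batch."""
    n = X.shape[0]
    for _ in range(epochs):                       # multiple passes over the data
        idx = np.random.permutation(n)            # shuffle the training set
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            g = loss_grad(params, X[batch], y[batch])  # mini-batch gradient estimate
            params = params - lr * g                   # step in the negative gradient direction
    return params
```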

  7. Outline: Optimization • Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  8. Local Minima. (figure: Goodfellow et al., 2016)

  9. Local Minima. Old view: local minima are a major problem in neural network training. Recent view: for sufficiently large neural networks, most local minima incur low cost, so it is not important to find the true global minimum.

  10. Saddle Points. At a saddle point the surface curves up along some directions (like a local minimum) and down along others (like a local maximum). Recent studies indicate that in high dimensions, saddle points are more likely than local minima. The gradient can be very small near saddle points. (figure: Goodfellow et al., 2016)

  11. No Critical Points. The gradient norm can increase during training while the validation error decreases (example: convolutional nets for object detection). (figure: Goodfellow et al., 2016)

  12. Saddle Points. SGD is seen to escape saddle points: it moves downhill and uses noisy gradients. Second-order methods can get stuck: they solve for a point with zero gradient. (figure: Goodfellow et al., 2016)

  13. Poor Conditioning. A poorly conditioned Hessian matrix means high curvature: even small steps can lead to a large increase in the cost. Learning is slow despite strong gradients, and oscillations slow down progress.

  14. No Critical Points. Some cost functions do not have critical points; in particular, classification losses can keep decreasing without ever reaching a minimum.

  15. Exploding and Vanishing Gradients. Consider a deep network with linear activations: $h_1 = W x$ and $h_j = W h_{j-1}$ for $j = 2, \dots, n$. (deeplearning.ai)

  16. Exploding and Vanishing Gradients. Suppose $W = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$. Then $h_1 = W \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ and $h_n = W^n \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a^n & 0 \\ 0 & b^n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$.

  17. Exploding and Vanishing Gradients. Suppose $x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Case 1: $a = 1$, $b = 2$: $\nabla y \to \begin{pmatrix} n \\ n\, 2^{n-1} \end{pmatrix}$: explodes! Case 2: $a = 0.5$, $b = 0.9$: $y \to 0$, $\nabla y \to \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: vanishes!
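
A quick numerical check of this example, as a minimal sketch (the depth n = 50 is an arbitrary choice for illustration):

```python
import numpy as np

def forward(a, b, n, x=np.array([1.0, 1.0])):
    """Apply the diagonal linear layer W = diag(a, b) n times: h_n = W^n x."""
    W = np.diag([a, b])
    h = x
    for _ in range(n):
        h = W @ h
    return h

print(forward(1.0, 2.0, n=50))    # second component grows like 2**n: explodes
print(forward(0.5, 0.9, n=50))    # both components shrink toward 0: vanishes
```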

  18. Exploding and Vanishing Gradients. Exploding gradients lead to cliffs in the cost surface. They can be mitigated using gradient clipping: if $\| g \| > u$, then $g \leftarrow \frac{g\, u}{\| g \|}$. (figure: Goodfellow et al., 2016)
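
A minimal sketch of the norm-clipping rule above (the argument name `max_norm` plays the role of the threshold $u$ and is illustrative):

```python
import numpy as np

def clip_gradient(g, max_norm):
    """If ||g|| > max_norm, rescale: g <- g * max_norm / ||g||."""
    norm = np.linalg.norm(g)
    if norm > max_norm:
        g = g * (max_norm / norm)
    return g
```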

  19. Outline: Optimization • Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  20. Stochastic Gradient Descent. Oscillations in the cost $J(\theta)$ occur because the updates do not exploit curvature information. (figure: Goodfellow et al., 2016)

  21. Momentum. SGD is slow when there is high curvature in $J(\theta)$. The averaged gradient presents a faster path to the optimum: the vertical components cancel out.

  22. Momentum. Uses past gradients for the update. Maintains a new quantity, the 'velocity', an exponentially decaying average of gradients: with the current gradient $g = \frac{1}{m} \sum_i \nabla_\theta L(f(x^{(i)};\theta), y^{(i)})$, the velocity is updated as $v \leftarrow \alpha v - \epsilon g$, where $\alpha \in [0,1)$ controls how quickly the effect of past gradients decays.

  23. Momentum. Compute the gradient estimate: $g = \frac{1}{m} \sum_i \nabla_\theta L(f(x^{(i)};\theta), y^{(i)})$. Update the velocity: $v \leftarrow \alpha v - \epsilon g$. Update the parameters: $\theta \leftarrow \theta + v$.
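
A minimal sketch of one momentum update following these three equations, reusing the illustrative `loss_grad` helper assumed in the SGD sketch above:

```python
def momentum_step(params, v, loss_grad, X_batch, y_batch, lr=0.01, alpha=0.9):
    """One momentum update: g from the mini-batch, v <- alpha*v - lr*g, theta <- theta + v."""
    g = loss_grad(params, X_batch, y_batch)   # gradient estimate
    v = alpha * v - lr * g                    # exponentially decaying velocity
    params = params + v
    return params, v
```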

  24. Momentum. Damped oscillations: gradients pointing in opposite directions cancel out. (figure: Goodfellow et al., 2016)

  25. Nesterov Momentum. Apply an interim update: $\tilde{\theta} = \theta + v$. Perform a correction based on the gradient at the interim point: $g = \frac{1}{m} \sum_i \nabla_\theta L(f(x^{(i)};\tilde{\theta}), y^{(i)})$, then $v \leftarrow \alpha v - \epsilon g$ and $\theta \leftarrow \theta + v$. Momentum based on the look-ahead slope.
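
A minimal sketch of one Nesterov step as written on the slide, with the gradient evaluated at the interim (look-ahead) point; `loss_grad` is again the illustrative helper assumed above:

```python
def nesterov_step(params, v, loss_grad, X_batch, y_batch, lr=0.01, alpha=0.9):
    """Interim update theta_tilde = theta + v, gradient at theta_tilde,
    then v <- alpha*v - lr*g and theta <- theta + v."""
    interim = params + v                          # look-ahead point
    g = loss_grad(interim, X_batch, y_batch)      # correction based on the look-ahead slope
    v = alpha * v - lr * g
    params = params + v
    return params, v
```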

  26. (image-only slide)

  27. Outline: Optimization • Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  28. Adaptive Learning Rates. Contours of $J(\theta)$ over $(\theta_1, \theta_2)$: oscillations along the vertical direction mean learning must be slower along parameter $\theta_2$. Use a different learning rate for each parameter?

  29. AdaGrad • Accumulate squared gradients: $r_i \leftarrow r_i + g_i^2$ • Update each parameter with a step inversely proportional to its cumulative squared gradient: $\theta_i \leftarrow \theta_i - \frac{\epsilon}{\delta + \sqrt{r_i}}\, g_i$ • Greater progress along gently sloped directions.
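
A minimal sketch of one AdaGrad update, with `r` the per-parameter accumulator of squared gradients (element-wise operations on NumPy arrays are assumed):

```python
import numpy as np

def adagrad_step(params, r, g, lr=0.01, delta=1e-7):
    """Accumulate squared gradients, then scale each step by 1 / (delta + sqrt(r))."""
    r = r + g ** 2                                    # cumulative squared gradient
    params = params - lr / (delta + np.sqrt(r)) * g   # smaller steps where gradients have been large
    return params, r
```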

  30. RMSProp • For non-convex problems, AdaGrad can prematurely decrease the learning rate • Use an exponentially weighted average for the gradient accumulation: $r_i \leftarrow \rho r_i + (1 - \rho) g_i^2$, then $\theta_i \leftarrow \theta_i - \frac{\epsilon}{\delta + \sqrt{r_i}}\, g_i$.
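
The same sketch with AdaGrad's running sum replaced by the exponentially weighted average (the decay value rho = 0.9 is a common default, not from the slides):

```python
import numpy as np

def rmsprop_step(params, r, g, lr=0.001, rho=0.9, delta=1e-6):
    """Exponentially weighted average of squared gradients instead of a running sum."""
    r = rho * r + (1 - rho) * g ** 2
    params = params - lr / (delta + np.sqrt(r)) * g
    return params, r
```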

  31. Adam • RMSProp + Momentum • Estimate the first moment: $v_i \leftarrow \rho_1 v_i + (1 - \rho_1) g_i$ • Estimate the second moment: $r_i \leftarrow \rho_2 r_i + (1 - \rho_2) g_i^2$ • Also applies bias correction to $v$ and $r$ • Update the parameters: $\theta_i \leftarrow \theta_i - \frac{\epsilon\, v_i}{\delta + \sqrt{r_i}}$ • Works well in practice and is fairly robust to hyper-parameters.
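
A minimal sketch of one Adam step, including the bias correction mentioned on the slide; the default hyper-parameter values are the commonly used ones, not taken from the slides:

```python
import numpy as np

def adam_step(params, v, r, g, t, lr=0.001, rho1=0.9, rho2=0.999, delta=1e-8):
    """First moment (momentum-like), second moment (RMSProp-like), both bias-corrected."""
    v = rho1 * v + (1 - rho1) * g           # first-moment estimate
    r = rho2 * r + (1 - rho2) * g ** 2      # second-moment estimate
    v_hat = v / (1 - rho1 ** t)             # bias correction; t is the step count, starting at 1
    r_hat = r / (1 - rho2 ** t)
    params = params - lr * v_hat / (delta + np.sqrt(r_hat))
    return params, v, r
```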

  32. Outline: Optimization • Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  33. Parameter Initialization • Goal: break symmetry between units, so that each unit computes a different function • Initialize all weights (not biases) randomly, from a Gaussian or uniform distribution • Scale of initialization? Too large leads to gradient explosion, too small to gradient vanishing.

  34. Xavier Initialization • Heuristic for all outputs to have unit variance • For a fully-connected layer with $m$ inputs: $W_{ij} \sim N\!\left(0, \frac{1}{m}\right)$ • For ReLU units, it is recommended: $W_{ij} \sim N\!\left(0, \frac{2}{m}\right)$.
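
A minimal sketch of these two Gaussian initializations; the (m, n_out) shape convention for the weight matrix is an assumption, the slides only specify the variances:

```python
import numpy as np

def xavier_normal(m, n_out):
    """W_ij ~ N(0, 1/m) for a fully-connected layer with m inputs."""
    return np.random.normal(0.0, np.sqrt(1.0 / m), size=(m, n_out))

def he_normal(m, n_out):
    """W_ij ~ N(0, 2/m), the variant recommended for ReLU units."""
    return np.random.normal(0.0, np.sqrt(2.0 / m), size=(m, n_out))
```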

  35. Normalized Initialization • For a fully-connected layer with $m$ inputs and $n$ outputs: $W_{ij} \sim U\!\left(-\sqrt{\frac{6}{m+n}}, +\sqrt{\frac{6}{m+n}}\right)$ • This heuristic is a compromise between initializing all layers to have the same activation variance and the same gradient variance • Sparse variant when $m$ is large: initialize $k$ nonzero weights in each unit.
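
A minimal sketch of the normalized (uniform) initialization above:

```python
import numpy as np

def normalized_uniform(m, n):
    """W_ij ~ U(-sqrt(6/(m+n)), +sqrt(6/(m+n))) for a layer with m inputs and n outputs."""
    limit = np.sqrt(6.0 / (m + n))
    return np.random.uniform(-limit, limit, size=(m, n))
```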

  36. Bias Initialization • Output unit bias: set from the marginal statistics of the output in the training set • Hidden unit bias: avoid saturation at initialization, e.g. for ReLU initialize the bias to 0.1 instead of 0 • Units controlling the participation of other units: set the bias so those units can participate at initialization.

  37. (image-only slide)

  38. Outline: Challenges in Optimization • Momentum • Adaptive Learning Rate • Parameter Initialization • Batch Normalization

  39. Feature Normalization. It is good practice to normalize features before applying a learning algorithm: $\tilde{x} = \frac{x - \mu}{\sigma}$, where $x$ is the feature vector, $\mu$ the vector of mean feature values, and $\sigma$ the vector of feature standard deviations. All features are then on the same scale (mean 0, variance 1), which speeds up learning.
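
A minimal sketch of this standardization for a NumPy data matrix `X` with one feature per column:

```python
def normalize_features(X):
    """Standardize each feature: x_tilde = (x - mu) / sigma."""
    mu = X.mean(axis=0)       # vector of mean feature values
    sigma = X.std(axis=0)     # vector of feature standard deviations
    return (X - mu) / sigma, mu, sigma   # mu, sigma returned so they can be reused on new data
```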

  40. Feature Normalization. (figure: contours of $J(\theta)$ before and after normalization)

  41. Internal Covariate Shift. Each hidden layer changes the distribution of the inputs to the next layer, which slows down learning. The remedy is to normalize the inputs to layer 2, ..., and the inputs to layer n, which is the idea behind batch normalization.
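
A minimal sketch of that per-layer idea, standardizing a layer's inputs over the current mini-batch; the learnable scale and shift used by full batch normalization are not shown here:

```python
import numpy as np

def normalize_layer_inputs(H, eps=1e-5):
    """Standardize a mini-batch of layer inputs H (shape: batch x units)
    to zero mean and unit variance per unit."""
    mu = H.mean(axis=0)
    var = H.var(axis=0)
    return (H - mu) / np.sqrt(var + eps)
```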
