Stochastic Gradient Methods for Neural Networks, Chih-Jen Lin - PowerPoint PPT Presentation

  1. Stochastic Gradient Methods for Neural Networks. Chih-Jen Lin, National Taiwan University. Last updated: May 25, 2020.

  2. Outline: 1. Gradient descent, 2. Mini-batch SG, 3. Adaptive learning rate, 4. Discussion.

  3. Outline (current section: Gradient descent): Gradient descent, Mini-batch SG, Adaptive learning rate, Discussion.

  4. Gradient descent / NN Optimization Problem I. Recall that the NN optimization problem is $\min_\theta f(\theta)$, where $f(\theta) = \frac{1}{2C}\theta^T\theta + \frac{1}{l}\sum_{i=1}^{l}\xi(z_{L+1,i}(\theta); y_i, Z_{1,i})$. Let's simplify the loss part a bit: $f(\theta) = \frac{1}{2C}\theta^T\theta + \frac{1}{l}\sum_{i=1}^{l}\xi(\theta; y_i, Z_{1,i})$. The issue now is how to do the minimization.
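
As a concrete illustration of this simplified objective (not part of the slides), here is a minimal NumPy sketch; the `loss` callable, the toy data, and the squared loss are illustrative stand-ins for $\xi$ and the training pairs $(y_i, Z_{1,i})$.

    import numpy as np

    def objective(theta, data, loss, C=1.0):
        # f(theta) = (1/(2C)) * theta^T theta + (1/l) * sum_i loss(theta, y_i, Z_i)
        reg = theta @ theta / (2.0 * C)
        avg_loss = np.mean([loss(theta, y_i, Z_i) for y_i, Z_i in data])
        return reg + avg_loss

    # Example: a squared loss on a toy two-instance dataset
    data = [(1.0, np.array([0.5, -0.2])), (-1.0, np.array([1.0, 0.3]))]
    sq_loss = lambda th, y, z: 0.5 * (th @ z - y) ** 2   # stand-in for xi
    print(objective(np.zeros(2), data, sq_loss))          # prints 0.5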

  5. Gradient descent / Gradient Descent I. This is one of the most commonly used optimization methods. First-order approximation: $f(\theta + \Delta\theta) \approx f(\theta) + \nabla f(\theta)^T \Delta\theta$. Solve $\min_{\Delta\theta} \nabla f(\theta)^T \Delta\theta$ subject to $\|\Delta\theta\| = 1$ (1). If there were no constraint, the above sub-problem would go to $-\infty$.

  6. Gradient descent / Gradient Descent II. The solution of (1) is $\Delta\theta = -\nabla f(\theta)/\|\nabla f(\theta)\|$. This is called the steepest descent method. In general, all we need is a descent direction: $\nabla f(\theta)^T \Delta\theta < 0$.

  7. Gradient descent / Gradient Descent III. From $f(\theta + \alpha\Delta\theta) = f(\theta) + \alpha\nabla f(\theta)^T\Delta\theta + \frac{1}{2}\alpha^2 \Delta\theta^T \nabla^2 f(\theta)\Delta\theta + \cdots$, if $\nabla f(\theta)^T\Delta\theta < 0$, then with a small enough $\alpha$, $f(\theta + \alpha\Delta\theta) < f(\theta)$.
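
To make the descent argument concrete, here is a small NumPy sketch (my own, not from the slides) that takes one step along the normalized negative gradient of an illustrative quadratic; the matrix A, the starting point, and the step size alpha are arbitrary example choices.

    import numpy as np

    # Illustrative objective: f(theta) = 0.5 * theta^T A theta with A positive definite
    A = np.array([[3.0, 0.5],
                  [0.5, 1.0]])

    def f(theta):
        return 0.5 * theta @ A @ theta

    def grad_f(theta):
        return A @ theta

    theta = np.array([2.0, -1.0])
    g = grad_f(theta)
    delta = -g / np.linalg.norm(g)    # steepest descent direction, ||delta|| = 1

    alpha = 0.1                       # a small enough step size
    theta_new = theta + alpha * delta
    assert f(theta_new) < f(theta)    # holds because grad_f(theta)^T delta < 0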

  8. Gradient descent / Line Search I. Because we only consider an approximation $f(\theta + \Delta\theta) \approx f(\theta) + \nabla f(\theta)^T \Delta\theta$, we may not have a strict decrease of the function value. That is, $f(\theta) < f(\theta + \Delta\theta)$ may occur. In optimization we then need a step-size selection procedure.

  9. Gradient descent / Line Search II. Exact line search: $\min_\alpha f(\theta + \alpha\Delta\theta)$. This is a one-dimensional optimization problem. In practice, people use backtracking line search: we check $\alpha = 1, \beta, \beta^2, \ldots$ with $\beta \in (0, 1)$ until $f(\theta + \alpha\Delta\theta) < f(\theta) + \nu\nabla f(\theta)^T(\alpha\Delta\theta)$.

  10. Gradient descent / Line Search III. Here $\nu \in (0, \frac{1}{2})$. The convergence is well established. For example, under some conditions, Theorem 3.2 of Nocedal and Wright (1999) gives $\lim_{k\to\infty} \nabla f(\theta_k) = 0$, where $k$ is the iteration index. This means we can reach a stationary point of a non-convex problem.
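
As a concrete version of the backtracking rule above, here is a Python sketch (not from the slides); the test objective and the values beta = 0.5 and nu = 1e-4 are just example choices.

    import numpy as np

    def backtracking_step(f, grad_f, theta, delta, beta=0.5, nu=1e-4, max_trials=50):
        # Return alpha in {1, beta, beta^2, ...} satisfying the sufficient decrease
        # condition f(theta + alpha*delta) < f(theta) + nu * alpha * grad^T delta.
        fx = f(theta)
        g_dot_d = grad_f(theta) @ delta    # must be negative for a descent direction
        alpha = 1.0
        for _ in range(max_trials):
            if f(theta + alpha * delta) < fx + nu * alpha * g_dot_d:
                return alpha
            alpha *= beta                  # shrink the step and try again
        raise RuntimeError("no acceptable step size found")

    # Example on f(theta) = theta^T theta with the steepest descent direction
    f = lambda th: th @ th
    grad = lambda th: 2.0 * th
    theta = np.array([1.0, -2.0])
    delta = -grad(theta) / np.linalg.norm(grad(theta))
    alpha = backtracking_step(f, grad, theta, delta)
    print(alpha, f(theta + alpha * delta), f(theta))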

  11. Gradient descent / Practical Use of Gradient Descent I. The standard backtracking line search is simple and useful. However, the convergence is slow for difficult problems. Thus in many optimization applications, methods using second-order information (e.g., quasi-Newton or Newton methods), based on $f(\theta + \Delta\theta) \approx f(\theta) + \nabla f(\theta)^T\Delta\theta + \frac{1}{2}\Delta\theta^T \nabla^2 f(\theta)\Delta\theta$, are preferred. These methods have fast final convergence.

  12. Gradient descent / Practical Use of Gradient Descent II. An illustration (modified from Tsai et al. (2014)): two plots of distance to optimum versus time, one showing slow final convergence and the other showing fast final convergence.

  13. Gradient descent / Practical Use of Gradient Descent III. But fast final convergence may not be needed in machine learning. The reason is that an optimal solution $\theta^*$ may not lead to the best model. We will discuss such issues again later.

  14. Outline (current section: Mini-batch SG): Gradient descent, Mini-batch SG, Adaptive learning rate, Discussion.

  15. Mini-batch SG / Estimation of the Gradient I. Recall the function is $f(\theta) = \frac{1}{2C}\theta^T\theta + \frac{1}{l}\sum_{i=1}^{l}\xi(\theta; y_i, Z_{1,i})$. The gradient is $\frac{\theta}{C} + \frac{1}{l}\sum_{i=1}^{l}\nabla_\theta\xi(\theta; y_i, Z_{1,i})$. Going over all data is time consuming.

  16. Mini-batch SG / Estimation of the Gradient II. What if we use a subset of the data? Note that $E(\nabla_\theta\xi(\theta; y, Z_1)) = \frac{1}{l}\sum_{i=1}^{l}\nabla_\theta\xi(\theta; y_i, Z_{1,i})$, so we may just use a subset $S$: $\frac{\theta}{C} + \frac{1}{|S|}\sum_{i \in S}\nabla_\theta\xi(\theta; y_i, Z_{1,i})$.

  17. Mini-batch SG / Algorithm I.
      1: Given an initial learning rate $\eta$.
      2: while do
      3:    Choose $S \subset \{1, \ldots, l\}$.
      4:    Calculate $\theta \leftarrow \theta - \eta\left(\frac{\theta}{C} + \frac{1}{|S|}\sum_{i \in S}\nabla_\theta\xi(\theta; y_i, Z_{1,i})\right)$.
      5:    May adjust the learning rate $\eta$.
      6: end while
      It's known that deciding a suitable learning rate is difficult.
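
A minimal NumPy sketch of this loop, under purely illustrative assumptions: a linear model with logistic loss stands in for the network and $\xi$, a random synthetic dataset stands in for the pairs $(y_i, Z_{1,i})$, and the batch size $|S|$, $C$, $\eta$, and iteration count are arbitrary example values.

    import numpy as np

    rng = np.random.default_rng(0)
    l, d = 1000, 20                        # number of instances and features
    X = rng.normal(size=(l, d))            # stand-in for the inputs Z_{1,i}
    y = np.sign(X @ rng.normal(size=d))    # labels in {-1, +1}

    C, eta, batch_size = 1.0, 0.1, 32
    theta = np.zeros(d)                    # a linear model stands in for the network

    def minibatch_gradient(theta, idx):
        # theta/C + (1/|S|) * sum of per-instance logistic-loss gradients
        Xs, ys = X[idx], y[idx]
        coeff = -ys / (1.0 + np.exp(ys * (Xs @ theta)))   # d/dm log(1 + exp(-m))
        return theta / C + Xs.T @ coeff / len(idx)

    for step in range(200):
        S = rng.choice(l, size=batch_size, replace=False)  # choose S in {1, ..., l}
        theta -= eta * minibatch_gradient(theta, S)        # stochastic gradient update
        # the learning rate eta could be adjusted here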

  18. Mini-batch SG / Algorithm II. Too small a learning rate: very slow convergence. Too large a learning rate: the procedure may diverge.

  19. Mini-batch SG / Stochastic Gradient "Descent" I. In comparison with gradient descent, you see that we don't do a line search. Indeed we cannot: without the full gradient, the sufficient decrease condition $f(\theta + \alpha\Delta\theta) < f(\theta) + \nu\nabla f(\theta)^T(\alpha\Delta\theta)$ may never hold. Therefore, we don't have a "descent" algorithm here; it's possible that $f(\theta_{\text{next}}) > f(\theta)$. Though people frequently use "SGD", it's unclear whether the "D" is suitable in the name of this method.

  20. Mini-batch SG / Momentum I. This is a method to improve the convergence speed. A new vector $v$ and a parameter $\alpha \in [0, 1)$ are introduced:
      $v \leftarrow \alpha v - \eta\left(\frac{\theta}{C} + \frac{1}{|S|}\sum_{i \in S}\nabla_\theta\xi(\theta; y_i, Z_{1,i})\right)$,
      $\theta \leftarrow \theta + v$.

  21. Mini-batch SG / Momentum II. Essentially what we do is $\theta \leftarrow \theta - \eta(\text{current sub-gradient}) - \alpha\eta(\text{previous sub-gradient}) - \alpha^2\eta(\text{second-previous sub-gradient}) - \cdots$. There are some reasons why doing so can improve the convergence speed, though details are not discussed here.
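
A sketch of the momentum update in NumPy (illustrative, not from the slides); here `g` stands for the mini-batch sub-gradient $\frac{\theta}{C} + \frac{1}{|S|}\sum_{i \in S}\nabla_\theta\xi$ computed as before, and the values of $\eta$ and $\alpha$ are example choices.

    import numpy as np

    def momentum_update(theta, v, g, eta=0.01, alpha=0.9):
        # v accumulates a geometrically decayed sum of past sub-gradients, so the
        # overall move is -eta*(g_t + alpha*g_{t-1} + alpha^2*g_{t-2} + ...)
        v = alpha * v - eta * g
        theta = theta + v
        return theta, v

    # Example usage with a dummy sub-gradient
    theta, v = np.zeros(5), np.zeros(5)
    g = np.ones(5)                     # stand-in for the current mini-batch gradient
    theta, v = momentum_update(theta, v, g)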

  22. Outline (current section: Adaptive learning rate): Gradient descent, Mini-batch SG, Adaptive learning rate, Discussion.

  23. Adaptive learning rate / AdaGrad I. Scale learning rates inversely proportional to the square root of the sum of past gradient squares (Duchi et al., 2011). Update rule:
      $g \leftarrow \frac{\theta}{C} + \frac{1}{|S|}\sum_{i \in S}\nabla_\theta\xi(\theta; y_i, Z_{1,i})$,
      $r \leftarrow r + g \odot g$,
      $\theta \leftarrow \theta - \frac{\epsilon}{\sqrt{r} + \delta} \odot g$,
      where $r$ is the sum of past gradient squares.

  24. Adaptive learning rate / AdaGrad II. $\epsilon$ and $\delta$ are given constants. $\odot$: Hadamard product (element-wise product of two vectors/matrices). A large $g$ component ⇒ a larger $r$ component ⇒ fast decrease of the learning rate. Conceptual explanation from Duchi et al. (2011): frequently occurring features ⇒ low learning rates; infrequent features ⇒ high learning rates.
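
A per-step AdaGrad sketch in NumPy following the update rule above (illustrative; the values of epsilon and delta and the dummy gradient are arbitrary).

    import numpy as np

    def adagrad_update(theta, r, g, eps=0.01, delta=1e-7):
        # r accumulates the element-wise squares of all past gradients, so
        # coordinates with large past gradients get small learning rates
        r = r + g * g                                  # r <- r + g (Hadamard) g
        theta = theta - eps / (np.sqrt(r) + delta) * g
        return theta, r

    theta, r = np.zeros(5), np.zeros(5)
    g = np.array([1.0, 0.1, 0.0, -0.5, 2.0])           # stand-in mini-batch gradient
    theta, r = adagrad_update(theta, r, g)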

  25. Adaptive learning rate / AdaGrad III. "The intuition is that each time an infrequent feature is seen, the learner should take notice." But how is this explanation related to the $g$ components? Let's consider linear classification. Recall our optimization problem is $\frac{w^T w}{2} + C\sum_{i=1}^{l}\xi(w; y_i, x_i)$.

  26. Adaptive learning rate / AdaGrad IV. For methods such as SVM or logistic regression, the loss function can be written as a function of $w^T x$: $\xi(w; y, x) = \hat{\xi}(w^T x)$. Then the gradient is $w + C\sum_{i=1}^{l}\hat{\xi}'(w^T x_i)\, x_i$. Thus the gradient is related to the density of features: an instance whose $j$th feature is zero contributes nothing to the $j$th gradient component, so infrequent features tend to have small accumulated gradients.

  27. Adaptive learning rate / AdaGrad V. The above analysis is for linear classification, but now we have a non-convex neural network! Empirically, people find that summing the squared gradients since the beginning causes the learning rate to decrease too quickly.

  28. Adaptive learning rate / RMSProp I. The original reference seems to be the lecture slides at https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf. Idea: they think AdaGrad's learning rate may become too small before reaching a locally convex region. That is, it is OK to sum all past gradient squares in the convex case, but not in the non-convex case. Thus they use an "exponentially weighted moving average" instead.

  29. Adaptive learning rate / RMSProp II. Update rule:
      $r \leftarrow \rho r + (1 - \rho)\, g \odot g$,
      $\theta \leftarrow \theta - \frac{\epsilon}{\sqrt{\delta + r}} \odot g$.
      Compare with AdaGrad:
      $r \leftarrow r + g \odot g$,
      $\theta \leftarrow \theta - \frac{\epsilon}{\sqrt{r} + \delta} \odot g$.
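
The corresponding RMSProp sketch differs from the AdaGrad one only in the exponentially weighted moving average of squared gradients (the values of rho, epsilon, and delta are example choices).

    import numpy as np

    def rmsprop_update(theta, r, g, rho=0.9, eps=0.001, delta=1e-6):
        # old squared gradients are decayed by rho, so the effective learning
        # rate does not shrink toward zero as quickly as in AdaGrad
        r = rho * r + (1.0 - rho) * g * g              # r <- rho*r + (1-rho) g (Hadamard) g
        theta = theta - eps / np.sqrt(delta + r) * g
        return theta, r

    theta, r = np.zeros(5), np.zeros(5)
    g = np.array([1.0, 0.1, 0.0, -0.5, 2.0])
    theta, r = rmsprop_update(theta, r, g)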

  30. Adaptive learning rate / RMSProp III. Somehow the setting is a bit heuristic, and the reason behind the change from AdaGrad to RMSProp is not really that strong.
