
Neural Networks: Optimization & Regularization
Shan-Hung Wu, shwu@cs.nthu.edu.tw
Department of Computer Science, National Tsing Hua University, Taiwan
Machine Learning


Nesterov Momentum
Make the same movement v(t) as in the last iteration, corrected by a lookahead negative gradient:
  Θ̃(t) ← Θ(t) + η v(t)
  v(t+1) ← λ v(t) − (1 − λ) ∇Θ C(Θ̃(t))
  Θ(t+1) ← Θ(t) + η v(t+1)
Faster convergence to a minimum
Not helpful for NNs that lack minima
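As a concrete reading of the update rule above, here is a minimal NumPy sketch of one Nesterov-momentum step in the slide's sign convention (the velocity accumulates the negative gradient). The toy quadratic cost, the `grad` function, and the hyperparameter values are illustrative assumptions, not from the slides.

```python
import numpy as np

def nesterov_step(theta, v, grad_fn, eta=0.1, lam=0.9):
    """One Nesterov-momentum step following the slide's convention:
    v accumulates the *negative* gradient, so theta moves by +eta * v."""
    theta_lookahead = theta + eta * v                        # peek ahead along the previous velocity
    v_new = lam * v - (1 - lam) * grad_fn(theta_lookahead)   # correct with the lookahead gradient
    theta_new = theta + eta * v_new
    return theta_new, v_new

# Illustrative use on a toy quadratic cost C(theta) = 0.5 * theta^T A theta
A = np.diag([2.0, 1.0])
grad = lambda th: A @ th
theta, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(200):
    theta, v = nesterov_step(theta, v, grad)
print(theta)   # approaches the minimum at the origin
```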

Outline
1 Optimization: Momentum & Nesterov Momentum; AdaGrad & RMSProp; Batch Normalization; Continuation Methods & Curriculum Learning; NTK-based Initialization
2 Regularization: Cyclic Learning Rates; Weight Decay; Data Augmentation; Dropout; Manifold Regularization; Domain-Specific Model Design

Where Does SGD Spend Its Training Time?
1 Detouring a saddle point of high cost (addressed by better initialization)
2 Traversing the relatively flat valley (addressed by adaptive learning rates)

SGD with Adaptive Learning Rates
Smaller learning rate η along a steep direction: prevents overshooting
Larger learning rate η along a flat direction: speeds up convergence
How?

AdaGrad
Update rule:
  r(t+1) ← r(t) + g(t) ⊙ g(t)
  Θ(t+1) ← Θ(t) − (η / √r(t+1)) ⊙ g(t)
r(t+1) accumulates the squared gradients along each axis
The division and square root are applied to r(t+1) element-wise
We have
  η / √r(t+1) = (η / √(t+1)) ⊙ 1 / √((1/(t+1)) r(t+1)) = (η / √(t+1)) ⊙ 1 / √((1/(t+1)) ∑_{i=0}^{t} g(i) ⊙ g(i))
so:
1 Smaller learning rate along all directions as t grows
2 Larger learning rate along more gently sloped directions
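A minimal NumPy sketch of the AdaGrad update above. The small constant `eps` is the usual numerical-stability term and an added assumption; it does not appear on the slide.

```python
import numpy as np

def adagrad_step(theta, r, grad, eta=0.1, eps=1e-8):
    """One AdaGrad step: accumulate squared gradients per axis, then scale the
    learning rate element-wise by 1/sqrt(r)."""
    r_new = r + grad * grad
    theta_new = theta - (eta / (np.sqrt(r_new) + eps)) * grad
    return theta_new, r_new

# Axes with large gradients get small effective steps, and vice versa
theta, r = np.array([1.0, 1.0]), np.zeros(2)
theta, r = adagrad_step(theta, r, np.array([10.0, 0.1]))
print(0.1 / (np.sqrt(r) + 1e-8))   # effective per-axis learning rates: [~0.01, ~1.0]
```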

Limitations
The optimal learning rate along a direction may change over time
In AdaGrad, r(t+1) accumulates squared gradients from the beginning of training, which results in a premature decrease of the effective learning rate

RMSProp
RMSProp changes the gradient accumulation in r(t+1) into a moving average:
  r(t+1) ← λ r(t) + (1 − λ) g(t) ⊙ g(t)
  Θ(t+1) ← Θ(t) − (η / √r(t+1)) ⊙ g(t)
A popular algorithm, Adam (short for adaptive moments) [7], is a combination of RMSProp and Momentum:
  v(t+1) ← λ1 v(t) − (1 − λ1) g(t)
  r(t+1) ← λ2 r(t) + (1 − λ2) g(t) ⊙ g(t)
  Θ(t+1) ← Θ(t) + (η / √r(t+1)) ⊙ v(t+1)
with some bias corrections for v(t+1) and r(t+1)
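Below is a hedged NumPy sketch of both updates as written above: RMSProp with a moving average of squared gradients, and Adam in the slide's sign convention (v accumulates the negative gradient) with the usual bias corrections. The hyperparameter defaults are common choices, not values prescribed by the slides.

```python
import numpy as np

def rmsprop_step(theta, r, grad, eta=0.01, lam=0.9, eps=1e-8):
    """RMSProp: AdaGrad's running sum becomes an exponential moving average."""
    r_new = lam * r + (1 - lam) * grad * grad
    theta_new = theta - (eta / (np.sqrt(r_new) + eps)) * grad
    return theta_new, r_new

def adam_step(theta, v, r, grad, t, eta=1e-3, lam1=0.9, lam2=0.999, eps=1e-8):
    """Adam = RMSProp + momentum, with bias corrections for v and r.
    t is the 0-based iteration index."""
    v_new = lam1 * v - (1 - lam1) * grad           # first moment (negative-gradient convention)
    r_new = lam2 * r + (1 - lam2) * grad * grad    # second moment
    v_hat = v_new / (1 - lam1 ** (t + 1))          # bias-corrected moments
    r_hat = r_new / (1 - lam2 ** (t + 1))
    theta_new = theta + (eta / (np.sqrt(r_hat) + eps)) * v_hat
    return theta_new, v_new, r_new
```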


Training Deep NNs I
So far, we have modified the optimization algorithm to better train the model
Can we modify the model to ease the optimization task?
What are the difficulties in training a deep NN?

Training Deep NNs II
The cost C(Θ) of a deep NN is usually ill-conditioned due to the dependency between the W(k)'s at different layers
As a simple example, consider a deep NN for x, y ∈ R:
  ŷ = f(x) = x w(1) w(2) ··· w(L)
with a single unit at each layer, a linear activation function, and no bias in each unit
The output ŷ is a linear function of x, but not of the weights
The curvature of f with respect to any two weights w(i) and w(j) is
  ∂²f / ∂w(i) ∂w(j) = x ∏_{k ≠ i,j} w(k)
which is very small if L is large and w(k) < 1 for k ≠ i, j, and very large if L is large and w(k) > 1 for k ≠ i, j
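To see how quickly the cross-curvature term ∏_{k≠i,j} w(k) vanishes or explodes with depth, here is a tiny numerical illustration; the depths and the (equal) weight values below are arbitrary assumptions for demonstration.

```python
# f(x) = x * w1 * w2 * ... * wL, so the curvature w.r.t. any two weights is
# proportional to the product of the remaining L-2 weights.
for L in (5, 20, 50):
    for w in (0.9, 1.1):
        cross_curvature = w ** (L - 2)   # all remaining weights set equal to w
        print(f"L={L:2d}, w={w}: curvature factor = {cross_curvature:.3e}")
```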

Training Deep NNs III
The ill-conditioned C(Θ) makes a gradient-based optimization algorithm (e.g., SGD) inefficient
Let Θ = [w(1), w(2), ···, w(L)]ᵀ and g(t) = ∇Θ C(Θ(t))
In gradient descent, we get Θ(t+1) by Θ(t+1) ← Θ(t) − η g(t), based on the first-order Taylor approximation of C
Each gradient component g_i(t) = (∂C/∂w(i))(Θ(t)) is calculated individually, holding C(Θ(t)) fixed in the other dimensions (the w(j)'s, j ≠ i)
However, g(t) updates Θ(t) in all dimensions simultaneously in the same iteration
C(Θ(t+1)) is guaranteed to decrease only if C is linear at Θ(t)
Wrong assumption: updating Θ_i(t+1) will decrease C even if the other Θ_j(t+1)'s are updated simultaneously
Second-order methods? Time-consuming, and they still do not take higher-order effects into account
Can we change the model to make this assumption not-so-wrong?

Batch Normalization I
  ŷ = f(x) = x w(1) w(2) ··· w(L)
Why not standardize each hidden activation a(k), k = 1, ···, L−1 (as we standardized x)?
We have ŷ = a(L−1) w(L)
When a(L−1) is standardized, g_L(t) = (∂C/∂w(L))(Θ(t)) is more likely to decrease C
If x ∼ N(0, 1), then the standardized a(L−1) ∼ N(0, 1), no matter how w(1), ···, w(L−1) change
  Changes in the other dimensions proposed by the g_i(t)'s, i ≠ L, can be zeroed out
Similarly, if a(k−1) is standardized, g_k(t) = (∂C/∂w(k))(Θ(t)) is more likely to decrease C

Batch Normalization II
How do we standardize a(k) at training and test time?
We can standardize the input x because we see multiple examples
At training time, we see a minibatch of activations a(k) ∈ R^M (M the batch size)
Batch normalization [6]:
  ã_i(k) = (a_i(k) − µ(k)) / σ(k), ∀i
where µ(k) and σ(k) are the mean and standard deviation of the activations across the examples in the minibatch
At test time, µ(k) and σ(k) can be replaced by running averages collected during training
Can be readily extended to NNs having multiple neurons at each layer
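The following is a minimal NumPy sketch of batch normalization for a single unit, matching the slide: standardize across the minibatch at training time and fall back to running averages at test time. The momentum constant for the running averages is an implementation assumption; the γ and β of the later slides are omitted here.

```python
import numpy as np

class BatchNormUnit:
    """Batch normalization for one hidden unit's activations."""
    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.running_mean, self.running_var = 0.0, 1.0

    def forward(self, a, training=True):
        # a: shape (M,), the unit's activations over a minibatch of size M
        if training:
            mu, var = a.mean(), a.var()
            # collect running averages for use at test time
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            mu, var = self.running_mean, self.running_var
        return (a - mu) / np.sqrt(var + self.eps)
```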

Standardizing Nonlinear Units
How do we standardize a nonlinear unit a(k) = act(z(k))?
We can still zero out the effects from other layers by normalizing z(k)
Given a minibatch of z(k) ∈ R^M:
  z̃_i(k) = (z_i(k) − µ(k)) / σ(k), ∀i
A hidden unit now computes a(k) = act(z̃(k))

Expressiveness I
The weights W(k) at each layer are easier to train now
  The "wrong assumption" of gradient-based optimization is made valid
But at the cost of expressiveness: normalizing a(k) or z(k) limits the output range of a unit
Observe that there is no need to insist that z̃(k) have zero mean and unit variance
  We only care about whether it is "fixed" when calculating the gradients for other layers

Expressiveness II
During training, we can introduce two parameters γ and β and back-propagate through γ z̃(k) + β to learn their best values
Question: γ and β can be learned to invert z̃(k) and recover z(k), so what's the point?
  Since z̃(k) = (z(k) − µ(k)) / σ(k), setting γ = σ(k) and β = µ(k) gives γ z̃(k) + β = z(k)
The weights W(k), γ, and β are now easier to learn with SGD: the unit's mean and scale are governed directly by β and γ, rather than by a complicated interaction among all the weights in the layers below
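A hedged sketch of the training-time forward pass with the learnable γ and β, plus the two gradients back-propagation needs to learn their best values. This is a single-unit illustration, not the slides' code.

```python
import numpy as np

def bn_affine_forward(z, gamma, beta, eps=1e-5):
    """Standardize z over the minibatch, then apply gamma * z_tilde + beta."""
    mu, var = z.mean(), z.var()
    z_tilde = (z - mu) / np.sqrt(var + eps)
    return gamma * z_tilde + beta, z_tilde

def bn_affine_param_grads(dout, z_tilde):
    """Gradients of the cost w.r.t. gamma and beta, given dC/d(output)."""
    dgamma = np.sum(dout * z_tilde)   # since output_i = gamma * z_tilde_i + beta
    dbeta = np.sum(dout)
    return dgamma, dbeta
```

Note that the unit's mean and scale are now set directly by β and γ, so SGD can adjust them without coordinating all the weights below.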


Parameter Initialization
Initialization is important
How to better initialize Θ(0)?
1 Train an NN multiple times with random initial points, and then pick the best
2 Design a series of cost functions such that a solution to one is a good initial point of the next
  Solve the "easy" problem first, and then a "harder" one, and so on

Continuation Methods I
Continuation methods construct easier cost functions by smoothing the original cost function:
  C̃(Θ) = E_{Θ̃ ∼ N(Θ, σ²)} [ C(Θ̃) ]
In practice, we sample several Θ̃'s to approximate the expectation
Assumption: some non-convex functions become approximately convex when smoothed
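A small Monte-Carlo sketch of the smoothed cost defined above, approximating the expectation by sampling several Θ̃'s. The example cost, σ, and sample count are illustrative assumptions.

```python
import numpy as np

def smoothed_cost(cost_fn, theta, sigma=0.3, n_samples=64, rng=None):
    """Estimate C_tilde(theta) = E_{theta_tilde ~ N(theta, sigma^2 I)}[ C(theta_tilde) ]."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = sigma * rng.standard_normal((n_samples,) + np.shape(theta))
    return np.mean([cost_fn(theta + n) for n in noise])

# A wiggly 1-D cost looks much closer to convex after smoothing
cost = lambda th: 0.3 * np.sin(20 * th) + th ** 2
print(cost(1.0), smoothed_cost(cost, 1.0))
```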

Continuation Methods II
Problems?
The cost function might not become convex, no matter how much it is smoothed
Designed to deal with local minima; not very helpful for NNs without minima

Curriculum Learning
Curriculum learning (or shaping) [1]: make the cost function easier by increasing the influence of simpler examples
  E.g., by assigning them larger weights in the new cost function
  Or by sampling them more frequently
How to define "simple" examples?
  Face image recognition: front view (easy) vs. side view (hard)
  Sentiment analysis for movie reviews: 0-/5-star reviews (easy) vs. 1-/2-/3-/4-star reviews (hard)
Learn simple concepts first, then learn more complex concepts that depend on these simpler concepts
  Just like how humans learn: knowing the principles, we are less likely to explain an observation using special (but wrong) rules
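One simple way to realize "sampling simpler examples more frequently" is weighted minibatch sampling with a schedule that flattens toward uniform as training progresses. The difficulty scores and the exponential weighting below are illustrative assumptions, not from the slides.

```python
import numpy as np

def curriculum_batch(X, y, difficulty, progress, batch_size=32, rng=None):
    """Sample a minibatch favoring easy examples early in training.
    difficulty: per-example scores (higher = harder); progress in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.exp(-(1.0 - progress) * difficulty)   # easy examples dominate when progress ~ 0
    probs = weights / weights.sum()
    idx = rng.choice(len(X), size=batch_size, p=probs)
    return X[idx], y[idx]
```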


Prior Predictions of NTK-GP
Prior (unconditioned) mean predictions for the training set:
  ŷ_N = (I − exp(−η T_{N,N} t)) y_N
Prior mean predictions for the test set:
  ŷ_M = T_{M,N} T_{N,N}⁻¹ (I − exp(−η T_{N,N} t)) y_N
Given a training set, T_{N,N} and T_{M,N} depend only on the network structure and the hyperparameters of the initial weights

Trainability
Prior (unconditioned) mean predictions for the training set:
  ŷ_N = (I − exp(−η T_{N,N} t)) y_N, where η < 2/(λ_max + λ_min) ≈ 2/λ_max
Goal: ŷ_N → y_N as t → ∞
Let T_{N,N} = Uᵀ diag(λ_max, ..., λ_min) U; then
  (U ŷ_N)_i ≈ (1 − e^{−2 λ_i t / λ_max}) (U y_N)_i
It follows that if the condition number κ = λ_max / λ_min diverges, the NN becomes untrainable: the modes with small λ_i converge arbitrarily slowly
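The prior mean prediction ŷ_N = (I − exp(−η T_{N,N} t)) y_N can be computed directly from the eigendecomposition used on the slide. The sketch below assumes a symmetric PSD kernel matrix T_{N,N} is already given; computing the NTK itself is not shown.

```python
import numpy as np

def ntk_prior_train_predictions(T, y, t, eta=None):
    """Prior mean predictions on the training set after training time t:
    y_hat = (I - exp(-eta * T * t)) y, computed via T = U diag(lam) U^T."""
    lam, U = np.linalg.eigh(T)                 # columns of U are eigenvectors
    if eta is None:
        eta = 2.0 / (lam.max() + lam.min())    # roughly the slide's learning-rate bound
    decay = np.exp(-eta * lam * t)             # per-mode factor exp(-eta * lam_i * t)
    return U @ ((1.0 - decay) * (U.T @ y))

# Modes with lam_i << lam_max keep decay ~ 1 for any practical t, so a diverging
# condition number kappa = lam_max / lam_min makes the network untrainable.
```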
