Training DNNs: Tricks
Ju Sun, Computer Science & Engineering, University of Minnesota, Twin Cities
March 5, 2020 1 / 33

Recap: last lecture
Training DNNs: min_W (1/m) Σ_{i=1}^m ℓ(y_i, DNN_W(x_i)) + Ω(W) 2 / 33


Fix the scaling: second idea (PCA and whitening)
PCA: zero-center and rotate the data to align the principal directions with the coordinate directions
    X -= X.mean(axis=1, keepdims=True)                # centering (columns of X are samples)
    U, S, VT = np.linalg.svd(X, full_matrices=False)
    Xrot = U.T @ X                                    # rotate/decorrelate the data (math: X = U S V⊺, so U⊺X = S V⊺)
Whitening: PCA + normalize the coordinates by the singular values
    Xwhite = Xrot / (S[:, None] + eps)                # (math: X_white ≈ V⊺)
Credit: Stanford CS231N 7 / 33
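A fuller, self-contained NumPy sketch of the same pipeline (the synthetic data, eps value, and sanity check are illustrative, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    # 5 features with very different scales, 1000 samples as columns
    X = rng.standard_normal((5, 1000)) * np.array([[10.0], [3.0], [1.0], [0.3], [0.1]])

    eps = 1e-5                                        # avoid division by zero
    X = X - X.mean(axis=1, keepdims=True)             # zero-center each feature
    U, S, VT = np.linalg.svd(X, full_matrices=False)  # X = U S V⊺
    Xrot = U.T @ X                                    # PCA coordinates (= S V⊺)
    Xwhite = Xrot / (S[:, None] + eps)                # whitening: unit scale in every direction

    # sanity check: the whitened rows are orthonormal, so Xwhite @ Xwhite.T ≈ I
    assert np.allclose(Xwhite @ Xwhite.T, np.eye(5), atol=1e-3)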

Fix the scaling: second idea
For LS, whitening works well when the features are approximately independent (figure: contours before vs. after whitening)
For LS, whitening also works well when the features are highly dependent (figure: contours before vs. after whitening) 8 / 33

In DNNs practice
Fixing the feature scaling makes the landscape "nicer": derivatives and curvatures in all directions are roughly even in magnitude. So for DNNs,
– Preprocess the input data
   * zero-center
   * normalization
   * PCA or whitening (less common)
– But recall our model objective min_w f(w) := (1/m) Σ_{i=1}^m ℓ(w⊺x_i; y_i) vs. the DL objective
      min_W (1/m) Σ_{i=1}^m ℓ(y_i, σ(W_k σ(W_{k−1} … σ(W_1 x_i)))) + Ω(W)
   * the DL objective is much more complex
   * but σ(W_k σ(W_{k−1} … σ(W_1 x_i))) is a composite version of w⊺x_i: W_1 x_i, W_2 σ(W_1 x_i), W_3 σ(W_2 σ(W_1 x_i)), …
– Idea: also process the input data to some/all hidden layers 9 / 33

Batch normalization
Apply normalization to the input data to some/all hidden layers
– σ(W_k σ(W_{k−1} … σ(W_1 x_i))) is a composite version of w⊺x_i: W_1 x_i, W_2 σ(W_1 x_i), W_3 σ(W_2 σ(W_1 x_i)), …
– Apply normalization to the outputs of these inner sub-networks, based on the statistics of a mini-batch of x_i's, e.g.,
      W_2 σ(W_1 x_i) → W_2 BN(σ(W_1 x_i)),   with z_i := σ(W_1 x_i)
– Let the z_i's be generated from a mini-batch of x_i's and Z = [z_1 … z_|B|]; then
      BN(z^j) = (z^j − µ_{z^j}) / σ_{z^j}   for each row z^j of Z
– Flexibility is restored by optional scaling γ_j's and shifting β_j's:
      BN_{γ_j, β_j}(z^j) = γ_j (z^j − µ_{z^j}) / σ_{z^j} + β_j   for each row z^j of Z
– Here, the γ_j's and β_j's are trainable (optimization) variables! 10 / 33
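A minimal NumPy sketch of the BN mapping just defined, applied to a mini-batch matrix Z whose columns are the z_i's (the eps constant and the array shapes are illustrative assumptions):

    import numpy as np

    def batch_norm(Z, gamma, beta, eps=1e-5):
        """Z: (d, |B|) matrix whose columns are the z_i's of one mini-batch;
        gamma, beta: (d,) trainable scale and shift, one pair per row."""
        mu = Z.mean(axis=1, keepdims=True)              # per-row mean over the mini-batch
        sigma = Z.std(axis=1, keepdims=True)            # per-row std over the mini-batch
        Z_hat = (Z - mu) / (sigma + eps)                # each row: zero mean, unit std
        return gamma[:, None] * Z_hat + beta[:, None]   # restore flexibility via gamma, beta

    Z = np.random.randn(4, 32) * 5 + 3                  # 4 features, mini-batch of 32
    out = batch_norm(Z, gamma=np.ones(4), beta=np.zeros(4))
    print(out.mean(axis=1), out.std(axis=1))            # ≈ 0 and ≈ 1 per row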

Batch normalization: implementation details
W_2 σ(W_1 x_i) → W_2 BN(σ(W_1 x_i)),   BN_{γ_j, β_j}(z^j) = γ_j (z^j − µ_{z^j}) / σ_{z^j} + β_j   ∀ j
Question: how to perform training after plugging in the BN operations?
      min_W (1/m) Σ_{i=1}^m ℓ(y_i, σ(W_k BN(σ(W_{k−1} … BN(σ(W_1 x_i)))))) + Ω(W)
Answer: for each j, BN_{γ_j, β_j}(z^j) is nothing but a differentiable function of z^j, γ_j, and β_j, so the chain rule applies!
– µ_{z^j} and σ_{z^j} are differentiable functions of z^j, and (z^j, γ_j, β_j) ↦ BN_{γ_j, β_j}(z^j) is a vector-to-vector mapping
– Any row z^j depends on all x_k's in the current mini-batch B, since Z = [z_1 … z_|B|] ← [x_1 … x_|B|]
– Without BN: ∇_W (1/|B|) Σ_{k=1}^{|B|} ℓ(W; x_k, y_k) = (1/|B|) Σ_{k=1}^{|B|} ∇_W ℓ(W; x_k, y_k); the summands can be computed in parallel and then aggregated
– With BN: ∇_W (1/|B|) Σ_{k=1}^{|B|} ℓ(W; x_k, y_k) has to be computed altogether, due to the inter-dependency across the summands 11 / 33

Batch normalization: implementation details
BN_{γ_j, β_j}(z^j) = γ_j (z^j − µ_{z^j}) / σ_{z^j} + β_j   ∀ j
What about validation/test, where only a single sample is seen each time?
Idea: use the average µ_{z^j}'s and σ_{z^j}'s over the training data (the γ_j's and β_j's are learned)
In practice, collect momentum-weighted running averages, e.g., for each hidden node with BN:
      µ = (1 − η) µ_old + η µ_new,   σ = (1 − η) σ_old + η σ_new,   with e.g. η = 0.1
In PyTorch: torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d, depending on the input shapes 12 / 33
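A short PyTorch sketch of this train/test distinction (the layer width and batch size are arbitrary): in training mode the layer normalizes with batch statistics and updates its running averages; in eval mode it uses the running averages instead, so a single sample is fine.

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm1d(num_features=8, momentum=0.1)   # momentum plays the role of η above

    bn.train()                              # training mode: batch stats + running-average updates
    for _ in range(100):
        x = torch.randn(32, 8) * 4.0 + 2.0  # mini-batches of 32 samples, 8 features
        _ = bn(x)
    print(bn.running_mean, bn.running_var)  # ≈ 2 and ≈ 16, accumulated over training batches

    bn.eval()                               # test mode: use running stats, no batch needed
    _ = bn(torch.randn(1, 8))               # a single sample works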

Batch normalization: implementation details
Question: BN before or after the activation?
      W_2 σ(W_1 x_i) → W_2 BN(σ(W_1 x_i))      (after)
      W_2 σ(W_1 x_i) → W_2 σ(BN(W_1 x_i))      (before)
– The original paper [Ioffe and Szegedy, 2015] proposed the "before" version (most of the original intuition has since proved wrong)
– But the "after" version is more intuitive, as we have seen
– Both are used in practice, and it is debatable which one is more effective
   * https://www.reddit.com/r/MachineLearning/comments/67gonq/d_batch_normalization_before_or_after_relu/
   * https://blog.paperspace.com/busting-the-myths-about-batch-normalization/
   * https://github.com/gcr/torch-residual-networks/issues/5
   * [Chen et al., 2019] 13 / 33
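The two orderings written out as PyTorch modules (a minimal sketch; the layer widths are placeholders):

    import torch.nn as nn

    # "before": BN applied to the pre-activation W_1 x, as in the original paper
    bn_before = nn.Sequential(
        nn.Linear(100, 64, bias=False),  # bias is redundant here: BN's β already shifts the output
        nn.BatchNorm1d(64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )

    # "after": BN applied to the activation σ(W_1 x), as motivated on the previous slides
    bn_after = nn.Sequential(
        nn.Linear(100, 64),
        nn.ReLU(),
        nn.BatchNorm1d(64),
        nn.Linear(64, 10),
    )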

Why does BN work?
Short answer: we don't know yet
Long answer:
– Originally proposed to deal with internal covariate shift [Ioffe and Szegedy, 2015]
– The original intuition later proved wrong; BN is instead shown to make the optimization problem "nicer" (or "smoother") [Santurkar et al., 2018, Lipton and Steinhardt, 2019]
– Yet another explanation from the optimization perspective [Kohler et al., 2019]
– A good research topic 14 / 33

Batch PCA/whitening?
Fixing the feature scaling makes the landscape "nicer": derivatives and curvatures in all directions are roughly even in magnitude. So for DNNs,
– Add (pre-)processing to the input data
   * zero-center
   * normalization
   * PCA or whitening (less common)
– Add batch-processing steps to some/all hidden layers
   * batch normalization
   * batch PCA or whitening? Doable, but it requires substantial extra work [Huang et al., 2018, Huang et al., 2019, Wang et al., 2019]
Normalization is the most popular choice, due to its simplicity 15 / 33

Zoo of normalization
(figure credit: [Wu and He, 2018]) Normalization over different directions/groups of the data tensors
Weight normalization: decompose the weight into magnitude and direction, w = g · v/‖v‖_2, and perform the optimization in (g, v) space
An Overview of Normalization Methods in Deep Learning:
https://mlexplained.com/2018/11/30/an-overview-of-normalization-methods-in-deep-learning/
Check out the PyTorch normalization layers:
https://pytorch.org/docs/stable/nn.html#normalization-layers 16 / 33
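A small PyTorch sketch of two entries in this zoo (the layer sizes are placeholders): LayerNorm as a drop-in normalization layer, and weight normalization via the reparametrization w = g · v/‖v‖_2.

    import torch
    import torch.nn as nn

    x = torch.randn(32, 64)                   # batch of 32 samples, 64 features

    layer_norm = nn.LayerNorm(64)             # normalizes each sample over its own features
    print(layer_norm(x).mean(dim=1))          # ≈ 0 for every sample

    linear = nn.utils.weight_norm(nn.Linear(64, 10))          # weight stored as g and v
    print([name for name, _ in linear.named_parameters()])    # includes weight_g and weight_v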

Outline
– Data Normalization
– Regularization
– Hyperparameter search, data augmentation
– Suggested reading 17 / 33

Regularization to avoid overfitting
Training DNNs: min_W (1/m) Σ_{i=1}^m ℓ(y_i, DNN_W(x_i)) + λ Ω(W), with an explicit regularizer Ω. But which Ω?
– Ω(W) = Σ_k ‖W_k‖_F², where k indexes the layers: penalizes large values in W and hence avoids steep changes (set weight_decay to λ in torch.optim.xxxx)
– Ω(W) = Σ_k ‖W_k‖_1: promotes sparse W_k's (i.e., many entries of the W_k's near zero; good for feature selection)
      l1_reg = torch.zeros(1)
      for W in model.parameters():
          l1_reg += W.norm(1)
– Ω(W) = ‖J_{DNN_W}(x)‖_F²: promotes smoothness of the function represented by DNN_W [Varga et al., 2017, Hoffman et al., 2019, Chan et al., 2019]
– Constraints: δ_C(W) := 0 if W ∈ C, ∞ if W ∉ C, e.g., binary weights, norm bounds
– many others! 18 / 33
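A hedged sketch of how the first two penalties typically enter a PyTorch training step (model, criterion, loader, and the λ values below are placeholders, not from the slides):

    import torch

    # squared-Frobenius (ℓ2) penalty handled by the optimizer as weight decay
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

    lambda_l1 = 1e-5
    for x, y in loader:                      # placeholder data loader
        optimizer.zero_grad()
        loss = criterion(model(x), y)        # data-fitting term
        l1_reg = sum(W.abs().sum() for W in model.parameters())   # Σ_k ||W_k||_1
        (loss + lambda_l1 * l1_reg).backward()
        optimizer.step()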

Implicit regularization
Training DNNs: min_W (1/m) Σ_{i=1}^m ℓ(y_i, DNN_W(x_i)) + λ Ω(W), with implicit regularization: operations not built into the objective that nonetheless avoid overfitting
– early stopping
– batch normalization
– dropout
– ... 19 / 33

Early stopping
A practical/pragmatic stopping strategy: early stopping ... periodically check the validation error and stop when it doesn't improve
Intuition: avoid making the model too specialized/perfect for the training data
More concrete math examples: [Bishop, 1995, Sjöberg and Ljung, 1995] 20 / 33
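A minimal sketch of early stopping with a patience counter (train_one_epoch, evaluate, model, the loaders, and max_epochs are all placeholders):

    best_val, best_state, patience, bad_epochs = float("inf"), None, 10, 0

    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader, optimizer)   # placeholder training step
        val_err = evaluate(model, val_loader)             # placeholder validation error

        if val_err < best_val:
            best_val, bad_epochs = val_err, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}  # snapshot best weights
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                    # no improvement for `patience` checks
                break

    model.load_state_dict(best_state)                     # roll back to the best validation point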

Batch/general normalization
(figure credit: [Wu and He, 2018]) Normalization over different directions/groups of the data tensors
Weight normalization: decompose the weight into magnitude and direction, w = g · v/‖v‖_2, and perform the optimization in (g, v) space
An Overview of Normalization Methods in Deep Learning:
https://mlexplained.com/2018/11/30/an-overview-of-normalization-methods-in-deep-learning/ 21 / 33

Dropout
Credit: [Srivastava et al., 2014]
Idea: kill each non-output neuron with probability 1 − p (hence "Dropout")
– perform Dropout independently for each training sample and each iteration
– for each neuron, if the original output is x, the expected output with Dropout is px; so rescale the actual output by 1/p
– no Dropout at test time! 22 / 33
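A minimal NumPy sketch of (inverted) Dropout for one hidden layer; the layer size and keep probability are illustrative:

    import numpy as np

    p = 0.5                                          # keep probability

    def dropout_forward(h, train=True):
        """h: activations of one hidden layer, shape (batch, width)."""
        if not train:
            return h                                 # no Dropout at test time
        mask = (np.random.rand(*h.shape) < p) / p    # kill with prob. 1 - p, rescale survivors by 1/p
        return h * mask                              # expected value matches the test-time output

    h = np.maximum(0, np.random.randn(32, 64))       # e.g., ReLU activations
    h_train = dropout_forward(h, train=True)
    h_test = dropout_forward(h, train=False)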

Dropout: implementation details
Credit: Stanford CS231N
What about derivatives? Back-propagation for each sample, then aggregate
PyTorch: torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d 23 / 33
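A short PyTorch usage sketch (the widths are placeholders); note that model.train() / model.eval() is what switches Dropout on and off, and that PyTorch's p is the drop probability (1 − p in the slide's notation):

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(100, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),     # probability of zeroing a unit
        nn.Linear(64, 10),
    )

    x = torch.randn(32, 100)
    model.train()              # Dropout active: random units zeroed, survivors rescaled
    out_train = model(x)
    model.eval()               # Dropout inactive at test time: identity mapping
    out_test = model(x)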

Why Dropout?
Credit: Wikipedia (bagging can avoid overfitting); credit: [Srivastava et al., 2014]
For an n-node network, there are 2^n possible sub-networks.
Consider the average/ensemble prediction E_SN[SN(x)] over the 2^n sub-networks and the new objective
      F(W) := (1/m) Σ_{i=1}^m ℓ(y_i, E_SN[SN_W(x_i)])
Mini-batch SGD with Dropout samples data points and models simultaneously (stochastic composite optimization [Wang et al., 2016, Wang et al., 2017]) 24 / 33

Outline
– Data Normalization
– Regularization
– Hyperparameter search, data augmentation
– Suggested reading 25 / 33

Hyperparameter search
...tunable parameters (vs. learnable parameters, i.e., the optimization variables)
– Network architecture (depth, width, activation, loss, etc.)
– Optimization methods
– Initialization schemes
– Initial LR and LR schedule/parameters
– Regularization methods and parameters
– etc.
https://cs231n.github.io/neural-networks-3/#hyper
Credit: [Bergstra and Bengio, 2012] 26 / 33
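A minimal sketch of random search over two hyperparameters, sampling the learning rate log-uniformly (the train_and_validate helper and the search ranges are placeholder assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    results = []
    for trial in range(20):
        lr = 10 ** rng.uniform(-5, -1)       # learning rate, log-uniform in [1e-5, 1e-1]
        p_drop = rng.uniform(0.1, 0.7)       # dropout probability, uniform
        val_err = train_and_validate(lr=lr, p_drop=p_drop)   # placeholder: train briefly, return validation error
        results.append((val_err, lr, p_drop))

    best_err, best_lr, best_p = min(results) # keep the best configuration found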

Data augmentation
– More relevant data always help!
– Fetch more external data
– Generate more internal data: generate based on whatever you want to be robust to
   * vision: translation, rotation, background, noise, deformation, flipping, blurring, occlusion, etc.
Credit: https://github.com/aleju/imgaug
See one example here: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html 27 / 33
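In PyTorch this is commonly done on the fly with torchvision transforms; a small sketch (the particular transforms, parameters, and the data path are illustrative):

    from torchvision import datasets, transforms

    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224),                      # random crop + rescale
        transforms.RandomHorizontalFlip(),                      # random flipping
        transforms.ColorJitter(brightness=0.2, contrast=0.2),   # mild photometric jitter
        transforms.RandomRotation(15),                          # small random rotations
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # each epoch sees a freshly augmented version of every training image
    train_set = datasets.ImageFolder("data/train", transform=train_transform)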

Outline
– Data Normalization
– Regularization
– Hyperparameter search, data augmentation
– Suggested reading 28 / 33

Suggested reading
– Chap. 7, Deep Learning (Goodfellow et al.)
– Stanford CS231n course notes, Neural Networks Part 2: Setting up the Data and the Loss: https://cs231n.github.io/neural-networks-2/
– Stanford CS231n course notes, Neural Networks Part 3: Learning and Evaluation: https://cs231n.github.io/neural-networks-3/
– http://neuralnetworksanddeeplearning.com/chap3.html 29 / 33

References i
[Bergstra and Bengio, 2012] Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305.
[Bishop, 1995] Bishop, C. M. (1995). Regularization and complexity control in feed-forward networks. In International Conference on Artificial Neural Networks (ICANN).
[Chan et al., 2019] Chan, A., Tay, Y., Ong, Y. S., and Fu, J. (2019). Jacobian adversarially regularized networks for robustness. arXiv:1912.10185.
[Chen et al., 2019] Chen, G., Chen, P., Shi, Y., Hsieh, C.-Y., Liao, B., and Zhang, S. (2019). Rethinking the usage of batch normalization and dropout in the training of deep neural networks. arXiv:1905.05928.
[Hoffman et al., 2019] Hoffman, J., Roberts, D. A., and Yaida, S. (2019). Robust learning with Jacobian regularization. arXiv:1908.02729.
[Huang et al., 2019] Huang, L., Zhou, Y., Zhu, F., Liu, L., and Shao, L. (2019). Iterative normalization: Beyond standardization towards efficient whitening. Pages 4869–4878, IEEE. 30 / 33

References ii
[Huang et al., 2018] Huang, L., Yang, D., Lang, B., and Deng, J. (2018). Decorrelated batch normalization. Pages 791–800, IEEE.
[Ioffe and Szegedy, 2015] Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In The 32nd International Conference on Machine Learning.
[Kohler et al., 2019] Kohler, J. M., Daneshmand, H., Lucchi, A., Hofmann, T., Zhou, M., and Neymeyr, K. (2019). Exponential convergence rates for batch normalization: The power of length-direction decoupling in non-convex optimization. In The 22nd International Conference on Artificial Intelligence and Statistics.
[Lipton and Steinhardt, 2019] Lipton, Z. C. and Steinhardt, J. (2019). Troubling trends in machine learning scholarship. ACM Queue, 17(1):80.
[Santurkar et al., 2018] Santurkar, S., Tsipras, D., Ilyas, A., and Madry, A. (2018). How does batch normalization help optimization? In Advances in Neural Information Processing Systems, pages 2483–2493. 31 / 33
