Backpropagation: Why backpropagation - PowerPoint PPT Presentation


  1. Backpropagation

  2. Why backpropagation • Neural networks are sequences of parametrized functions $h(x; \theta)$ [diagram: x → conv → subsample → conv → subsample → linear; the filters and weights are the parameters $\theta$]

  3. Why backpropagation • Neural networks are sequences of parametrized functions • Parameters need to be set by minimizing some loss function: $\min_\theta \frac{1}{N} \sum_{i=1}^{N} L(h(x_i; \theta), y_i)$ [diagram: convolutional network]

  4. Why backpropagation • Neural networks are sequences of parametrized functions • Parameters need to be set by minimizing some loss function • Minimization through gradient descent requires computing the gradient: $\theta^{(t+1)} = \theta^{(t)} - \lambda \frac{1}{N} \sum_{i=1}^{N} \nabla L(h(x_i; \theta), y_i)$

  5. Why backpropagation • Neural networks are sequences of parametrized functions • Parameters need to be set by minimizing some loss function • Minimization through gradient descent requires computing the gradient: $\theta^{(t+1)} = \theta^{(t)} - \lambda \frac{1}{N} \sum_{i=1}^{N} \nabla L(h(x_i; \theta), y_i)$, where $\nabla_\theta L(z, y) = \frac{\partial L(z, y)}{\partial z} \frac{\partial z}{\partial \theta}$ with $z = h(x; \theta)$

  6. Why backpropagation • Neural networks are sequences of parametrized functions • Parameters need to be set by minimizing some loss function • Minimization through gradient descent requires computing the gradient • Backpropagation: way to compute the gradient $\frac{\partial z}{\partial \theta}$

  7. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → f_2(w_2) → z_2 → f_3(w_3) → z_3 → f_4(w_4) → z_4 → f_5(w_5) → z_5 = z]

  8. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] $\frac{\partial z}{\partial w_3} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial w_3}$

  9. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] $\frac{\partial z}{\partial w_3} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial w_3}$

  10. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] $\frac{\partial z}{\partial w_3} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial w_3}$

  11. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] $\frac{\partial z}{\partial z_3} = \frac{\partial z}{\partial z_4} \frac{\partial z_4}{\partial z_3}$, $\frac{\partial z}{\partial w_3} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial w_3}$

  12. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] $\frac{\partial z}{\partial z_3} = \frac{\partial z}{\partial z_4} \frac{\partial z_4}{\partial z_3}$, $\frac{\partial z}{\partial w_3} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial w_3}$

  13. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → ... → f_5(w_5) → z_5 = z] Recurrence going backward!! $\frac{\partial z}{\partial z_2} = \frac{\partial z}{\partial z_3} \frac{\partial z_3}{\partial z_2}$, $\frac{\partial z}{\partial w_2} = \frac{\partial z}{\partial z_2} \frac{\partial z_2}{\partial w_2}$

  14. The gradient of convnets [diagram: chain x → f_1(w_1) → z_1 → f_2(w_2) → z_2 → f_3(w_3) → z_3 → f_4(w_4) → z_4 → f_5(w_5) → z_5 = z]

  15. Backpropagation for a sequence of functions • $z_i = f_i(z_{i-1}, w_i)$, with $z_0 = x$ and $z = z_n$ • Assume we can compute the partial derivatives of each function: $\frac{\partial z_i}{\partial z_{i-1}} = \frac{\partial f_i(z_{i-1}, w_i)}{\partial z_{i-1}}$ and $\frac{\partial z_i}{\partial w_i} = \frac{\partial f_i(z_{i-1}, w_i)}{\partial w_i}$ • Use $g(z_i)$ to store the gradient of $z$ w.r.t. $z_i$, and $g(w_i)$ for $w_i$ • Calculate $g(z_i)$ by iterating backwards: $g(z_n) = \frac{\partial z}{\partial z_n} = 1$ and $g(z_{i-1}) = \frac{\partial z}{\partial z_{i-1}} = g(z_i) \frac{\partial z_i}{\partial z_{i-1}}$ • Use $g(z_i)$ to compute the gradient of the parameters: $g(w_i) = \frac{\partial z}{\partial w_i} = g(z_i) \frac{\partial z_i}{\partial w_i}$
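
A minimal sketch of this backward recurrence for a scalar chain, in plain Python with NumPy. The choice of layer function (tanh of a scaled input) and the names forward / backward are illustrative assumptions, not from the slides.

```python
import numpy as np

# Each layer computes z_i = f_i(z_{i-1}, w_i); here f_i(z, w) = tanh(w * z), a scalar
# example chosen only so the partial derivatives are easy to write down.
def f(z_prev, w):
    return np.tanh(w * z_prev)

def df_dz(z_prev, w):   # partial of z_i w.r.t. z_{i-1}
    return w * (1.0 - np.tanh(w * z_prev) ** 2)

def df_dw(z_prev, w):   # partial of z_i w.r.t. w_i
    return z_prev * (1.0 - np.tanh(w * z_prev) ** 2)

def forward(x, ws):
    """Compute z_1 ... z_n, keeping every intermediate for the backward pass."""
    zs = [x]
    for w in ws:
        zs.append(f(zs[-1], w))
    return zs                                # zs[0] = x = z_0, zs[-1] = z_n = z

def backward(zs, ws):
    """Backward recurrence: g(z_n) = 1, g(z_{i-1}) = g(z_i) * dz_i/dz_{i-1}."""
    g_z = 1.0                                # g(z_n)
    g_ws = [None] * len(ws)
    for i in reversed(range(len(ws))):
        z_prev, w = zs[i], ws[i]
        g_ws[i] = g_z * df_dw(z_prev, w)     # g(w_i) = g(z_i) * dz_i/dw_i
        g_z = g_z * df_dz(z_prev, w)         # g(z_{i-1}) = g(z_i) * dz_i/dz_{i-1}
    return g_ws

ws = [0.5, -1.2, 0.8, 1.1, 0.3]              # w_1 ... w_5, as in the diagram
zs = forward(2.0, ws)
print(backward(zs, ws))                      # gradient of z w.r.t. each w_i
```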

  16. Loss as a function [diagram: conv → subsample → conv → subsample → linear → loss, with the label as a second input to the loss; filters and weights are the parameters]

  17. Putting it all together: SGD training of ConvNets 1. Sample image and label [diagram: image and label fed into conv → subsample → conv → subsample → linear → loss]

  18. Putting it all together: SGD training of ConvNets 1. Sample image and label 2. Pass image through network to get loss (forward) [diagram: image and label fed into conv → subsample → conv → subsample → linear → loss]

  19. Putting it all together: SGD training of ConvNets 1. Sample image and label 2. Pass image through network to get loss (forward) 3. Backpropagate to get gradients (backward) [diagram: image and label fed into conv → subsample → conv → subsample → linear → loss]

  20. Putting it all together: SGD training of ConvNets 1. Sample image and label 2. Pass image through network to get loss (forward) 3. Backpropagate to get gradients (backward) 4. Take step along negative gradients to update weights [diagram: image and label fed into conv → subsample → conv → subsample → linear → loss]

  21. Putting it all together: SGD training of ConvNets 1. Sample image and label 2. Pass image through network to get loss (forward) 3. Backpropagate to get gradients (backward) 4. Take step along negative gradients to update weights 5. Repeat! [diagram: image and label fed into conv → subsample → conv → subsample → linear → loss]
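
A minimal sketch of this loop. The slides do not name a framework, so the use of PyTorch, the tiny network, and the random stand-in images and labels are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Tiny stand-in network: conv + subsample twice, then a linear classifier over 10 classes.
model = nn.Sequential(
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
    nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
    nn.Flatten(), nn.Linear(16 * 7 * 7, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    # 1. Sample image and label (random stand-ins for a real minibatch).
    images = torch.randn(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))
    # 2. Pass images through the network to get the loss (forward).
    loss = loss_fn(model(images), labels)
    # 3. Backpropagate to get gradients (backward).
    optimizer.zero_grad()
    loss.backward()
    # 4. Take a step along the negative gradients to update the weights.
    optimizer.step()
    # 5. Repeat!
```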

  22. Beyond sequences: computation graphs • Arbitrary graphs of functions • No distinction between intermediate outputs and parameters [diagram: an arbitrary graph of functions f, g, h, k, l connecting variables x, y, w, u, z]

  23. Computation graph - Functions • Each node implements two functions • A “forward”: computes the output given the input • A “backward”: computes the derivative of z w.r.t. the input, given the derivative of z w.r.t. the output
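
A minimal sketch of such a node: a multiply node with a forward and a backward method. The class and method names are illustrative assumptions.

```python
class Multiply:
    """One node in a computation graph, computing d = a * b."""

    def forward(self, a, b):
        # Cache the inputs: they are needed to compute the backward pass.
        self.a, self.b = a, b
        return a * b

    def backward(self, dz_dd):
        # Given the derivative of z w.r.t. the output d, return the derivative
        # of z w.r.t. each input (chain rule).
        dz_da = dz_dd * self.b
        dz_db = dz_dd * self.a
        return dz_da, dz_db

node = Multiply()
d = node.forward(3.0, 4.0)           # forward: 12.0
dz_da, dz_db = node.backward(1.0)    # backward: (4.0, 3.0)
```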

  24. Computation graphs [diagram: a node $f_i$ with inputs a, b, c and output d]

  25. Computation graphs [diagram: in the backward pass, node $f_i$ receives $\frac{\partial z}{\partial d}$ and produces $\frac{\partial z}{\partial a}$, $\frac{\partial z}{\partial b}$, $\frac{\partial z}{\partial c}$]

  26. Computation graphs [diagram: a node $f_i$ with inputs a, b, c and output d]

  27. Computation graphs [diagram: in the backward pass, node $f_i$ receives $\frac{\partial z}{\partial d}$ and produces $\frac{\partial z}{\partial a}$, $\frac{\partial z}{\partial b}$, $\frac{\partial z}{\partial c}$]

  28. Neural network frameworks

  29. Stochastic gradient descent $\theta^{(t+1)} = \theta^{(t)} - \lambda \frac{1}{K} \sum_{k=1}^{K} \nabla L(h(x_{i_k}; \theta^{(t)}), y_{i_k})$ Noisy!

  30. Momentum • Average multiple gradient steps • Use exponential averaging: $g^{(t)} = \frac{1}{K} \sum_{k=1}^{K} \nabla L(h(x_{i_k}; \theta^{(t)}), y_{i_k})$, $p^{(t)} = \mu g^{(t)} + (1 - \mu) p^{(t-1)}$, $\theta^{(t+1)} = \theta^{(t)} - \lambda p^{(t)}$
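
A minimal NumPy sketch of this exponentially averaged update, following the slide's formulas; the variable names and coefficient values are illustrative.

```python
import numpy as np

def momentum_step(theta, p, grad, lr=0.01, mu=0.9):
    """One update following the slide: p <- mu * g + (1 - mu) * p, then theta <- theta - lr * p."""
    p = mu * grad + (1 - mu) * p
    theta = theta - lr * p
    return theta, p

theta = np.zeros(10)              # parameters theta
p = np.zeros_like(theta)          # exponentially averaged gradient p
grad = np.random.randn(10)        # stand-in for the minibatch gradient g^(t)
theta, p = momentum_step(theta, p, grad)
```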

  31. Weight decay • Add $-\alpha \theta$ to the gradient step • Prevents $\theta$ from growing to infinity • Equivalent to L2 regularization of weights

  32. Learning rate decay • Large step size / learning rate • Faster convergence initially • Bouncing around at the end because of noisy gradients • Learning rate must be decreased over time • Usually done in steps

  33. Convolutional network training • Initialize network • Sample minibatch of images • Forward pass to compute loss • Backpropagate loss to compute gradient • Combine gradient with momentum and weight decay • Take step according to current learning rate
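
A minimal NumPy sketch of the last two items: combining the gradient with weight decay and momentum, then stepping with a step-wise learning-rate schedule. The coefficient values, the schedule, and the function names are illustrative assumptions.

```python
import numpy as np

def sgd_update(theta, p, grad, lr, mu=0.9, wd=5e-4):
    """Combine the minibatch gradient with weight decay and momentum, then take a step."""
    grad = grad + wd * theta          # weight decay: pulls theta toward zero (L2 penalty)
    p = mu * grad + (1 - mu) * p      # exponential averaging, as on the momentum slide
    theta = theta - lr * p            # step according to the current learning rate
    return theta, p

def step_lr(base_lr, epoch, drop_every=30, factor=0.1):
    """Step-wise learning rate decay: divide by 10 every `drop_every` epochs."""
    return base_lr * factor ** (epoch // drop_every)

theta, p = np.zeros(10), np.zeros(10)
for epoch in range(90):
    lr = step_lr(0.01, epoch)
    grad = np.random.randn(10)        # stand-in for the backpropagated minibatch gradient
    theta, p = sgd_update(theta, p, grad, lr)
```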

  34. Setting hyperparameters • How do we find a hyperparameter setting that works? • Try it! • Train on the training set • Test on the validation set (not the test set) • Picking hyperparameters that work for the test set = overfitting on the test set

  35. Setting hyperparameters [diagram: loop over training iterations: train on Train, test on Validation, pick new hyperparameters, repeat; test on Test only at the end, ideally only once]

  36. Vagaries of optimization • Non-convex • Local optima • Sensitivity to initialization • Vanishing / exploding gradients: $\frac{\partial z}{\partial z_i} = \frac{\partial z}{\partial z_{n-1}} \frac{\partial z_{n-1}}{\partial z_{n-2}} \cdots \frac{\partial z_{i+1}}{\partial z_i}$ • If each term is (much) greater than 1 → explosion of gradients • If each term is (much) less than 1 → vanishing gradients
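
A tiny numeric illustration of this product of per-layer terms, in NumPy; the constant factors and layer count are illustrative.

```python
import numpy as np

n_layers = 50
for factor in (1.1, 0.9):
    # dz/dz_i is a product of one term per layer; here every term is the same constant.
    grad = np.prod(np.full(n_layers, factor))
    print(f"per-layer term {factor}: gradient after {n_layers} layers is about {grad:.1e}")
# term 1.1 -> about 1.2e+02 (explodes); term 0.9 -> about 5.2e-03 (vanishes)
```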

  37. Image Classification

  38. How to do machine learning • Create training / validation sets • Identify loss functions • Choose hypothesis class • Find best hypothesis by minimizing training loss

  39. How to do machine learning • Create training / validation sets • Identify loss functions (multiclass classification!!) • Choose hypothesis class • Find best hypothesis by minimizing training loss: scores $s = h(x)$, $\hat{p}(y = k \mid x) \propto e^{s_k}$, i.e. $\hat{p}(y = k \mid x) = \frac{e^{s_k}}{\sum_j e^{s_j}}$, and $L(h(x), y) = -\log \hat{p}(y \mid x)$
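
A minimal NumPy sketch of this softmax probability and negative log-likelihood loss; the function names and the example scores are illustrative.

```python
import numpy as np

def softmax(s):
    """p_hat(y = k | x) = exp(s_k) / sum_j exp(s_j), with the usual max-shift for stability."""
    s = s - s.max()
    e = np.exp(s)
    return e / e.sum()

def cross_entropy(s, y):
    """L(h(x), y) = -log p_hat(y | x)."""
    return -np.log(softmax(s)[y])

scores = np.array([2.0, 0.5, -1.0])        # s = h(x), one score per class
print(softmax(scores), cross_entropy(scores, y=0))
```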

  40. Building a convolutional network [diagram: conv + relu + subsample → conv + relu + subsample → conv + relu + subsample → average pool → linear → 10 classes]
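
A minimal sketch of this architecture as a PyTorch module; the framework choice, channel widths, kernel sizes, and the assumed 32x32 RGB input are all illustrative assumptions.

```python
import torch.nn as nn

# Channel widths, kernel sizes, and the assumed 32x32 RGB input are illustrative.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),   # conv + relu + subsample
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),  # conv + relu + subsample
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),  # conv + relu + subsample
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),                        # average pool
    nn.Linear(64, 10),                                            # linear, 10 classes
)
```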

  41. Building a convolutional network

  42. MNIST Classification (error rate, %) • Linear classifier over pixels: 12 • Kernel SVM over HOG: 0.56 • Convolutional Network: 0.8

  43. ImageNet • 1000 categories • ~1000 instances per category • Olga Russakovsky*, Jia Deng*, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei (* = equal contribution). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 2015.

  44. ImageNet • Top-5 error: algorithm makes 5 predictions, true label must be in top 5 • Useful for incomplete labelings
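
A minimal NumPy sketch of computing top-5 error under this definition; array shapes and names are illustrative.

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is not among the 5 highest-scoring classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]       # indices of the 5 best classes per example
    hit = (top5 == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

scores = np.random.randn(8, 1000)                    # 8 examples, 1000 ImageNet classes
labels = np.random.randint(0, 1000, size=8)
print(top5_error(scores, labels))
```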

  45. Challenge winner's accuracy [bar chart: challenge winner's result (%), y-axis 0 to 30, for 2010, 2011 and 2012, with the 2012 entry labeled Convolutional Networks]
