Optimization for Training Deep Models

Optimization for Training Deep Models Xiaogang Wang - PowerPoint PPT Presentation

Optimization for Training Deep Models. Xiaogang Wang (xgwang@ee.cuhk.edu.hk), CUHK. February 12, 2019. Topics: Optimization Basics; Optimization of training deep neural networks; Multi-GPU Training.


  1. Optimization for Training Deep Models. Xiaogang Wang (xgwang@ee.cuhk.edu.hk), CUHK. February 12, 2019.

  2. Outline: 1. Optimization Basics; 2. Optimization of training deep neural networks; 3. Multi-GPU Training.

  3. Training neural networks. Minimize the cost function on the training set: $\theta^* = \arg\min_\theta J(X^{(\text{train})}, \theta)$. Gradient descent: $\theta \leftarrow \theta - \eta \nabla J(\theta)$.
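As a rough illustration (not part of the slides), here is a minimal NumPy sketch of the gradient descent update on a toy quadratic cost; the matrix, learning rate, and iteration count are assumed values.

```python
import numpy as np

# Toy quadratic cost J(theta) = 0.5 * theta^T A theta - b^T theta
# (an illustrative example, not the cost used in the slides).
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 2.0])

def grad_J(theta):
    return A @ theta - b          # gradient of the quadratic cost

theta = np.zeros(2)               # initial parameters
eta = 0.1                         # learning rate (assumed value)
for _ in range(200):
    theta = theta - eta * grad_J(theta)

print(theta)                      # converges toward A^{-1} b = [1/3, 2]
```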

  4. Local minimum, local maximum, and saddle points. When $\nabla J(\theta) = 0$, the gradient provides no information about which direction to move. Points where $\nabla J(\theta) = 0$ are known as critical points or stationary points. A local minimum is a point where $J(\theta)$ is lower than at all neighboring points, so it is no longer possible to decrease $J(\theta)$ by making infinitesimal steps. A local maximum is a point where $J(\theta)$ is higher than at all neighboring points, so it is no longer possible to increase $J(\theta)$ by making infinitesimal steps. Some critical points are neither maxima nor minima; these are known as saddle points.

  5. Local minimum, local maximum, and saddle points. In the context of deep learning, we optimize functions that may have many local minima that are not optimal, and many saddle points surrounded by very flat regions. All of this makes optimization very difficult, especially when the input to the function is multidimensional. We therefore usually settle for finding a value of $J$ that is very low, but not necessarily minimal in any formal sense.

  6. Jacobian matrix and Hessian matrix. The Jacobian matrix contains all of the partial derivatives of a vector-valued function: for $f: \mathbb{R}^m \to \mathbb{R}^n$, the Jacobian $J \in \mathbb{R}^{n \times m}$ of $f$ is defined such that $J_{i,j} = \frac{\partial}{\partial x_j} f(x)_i$. The second derivative $\frac{\partial^2}{\partial x_i \partial x_j} f$ tells us how the first derivative will change as we vary the input. It is useful for determining whether a critical point is a local maximum, local minimum, or saddle point: $f'(x) = 0$ and $f''(x) > 0$: local minimum; $f'(x) = 0$ and $f''(x) < 0$: local maximum; $f'(x) = 0$ and $f''(x) = 0$: saddle point or part of a flat region. The Hessian matrix contains all of the second derivatives of a scalar-valued function: $H(f)(x)_{i,j} = \frac{\partial^2}{\partial x_i \partial x_j} f(x)$.
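A minimal sketch of estimating a Hessian by finite differences, assuming a small test function and step size of my own choosing; it recovers the second derivatives that the test above would inspect.

```python
import numpy as np

# Finite-difference Hessian of a scalar-valued function (illustrative f and eps).
def f(x):                       # scalar-valued function R^2 -> R
    return x[0] ** 2 - x[1] ** 2

def hessian(f, x, eps=1e-5):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            # forward-difference estimate of d^2 f / (dx_i dx_j)
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i) - f(x + e_j) + f(x)) / eps ** 2
    return H

x0 = np.array([0.0, 0.0])
print(hessian(f, x0))           # approximately [[2, 0], [0, -2]]: x0 is a saddle point
```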

  7. Jacobian matrix and Hessian matrix. At a critical point, $\nabla f(x) = 0$, we can examine the eigenvalues of the Hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. When the Hessian is positive definite (all its eigenvalues are positive), the point is a local minimum: the directional second derivative in any direction must be positive. When the Hessian is negative definite (all its eigenvalues are negative), the point is a local maximum. Saddle point: at least one eigenvalue is positive and at least one eigenvalue is negative; $x$ is a local maximum on one cross section of $f$ but a local minimum on another cross section.
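A small sketch of this eigenvalue test, with a numerical tolerance added as my own assumption for robustness; the helper name is hypothetical.

```python
import numpy as np

# Classify a critical point from the eigenvalues of its (symmetric) Hessian.
def classify_critical_point(H, tol=1e-8):
    eigvals = np.linalg.eigvalsh(H)          # eigvalsh: eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "local minimum"               # positive definite
    if np.all(eigvals < -tol):
        return "local maximum"               # negative definite
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point"                # mixed signs
    return "inconclusive (some eigenvalues are ~0)"

print(classify_critical_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point
```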

  8. Saddle point (illustration).

  9. Hessian matrix. Condition number: consider the function $f(x) = A^{-1}x$. When $A \in \mathbb{R}^{n \times n}$ has an eigenvalue decomposition, its condition number is $\max_{i,j} \left| \frac{\lambda_i}{\lambda_j} \right|$, i.e. the ratio of the magnitudes of the largest and smallest eigenvalues. When this number is large, matrix inversion is particularly sensitive to error in the input. The Hessian can also be useful for understanding the performance of gradient descent. When the Hessian has a poor condition number, gradient descent performs poorly. This is because in one direction the derivative increases rapidly, while in another direction it increases slowly. Gradient descent is unaware of this change in the derivative, so it does not know that it needs to explore preferentially in the direction where the derivative remains negative for longer.
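A quick sketch of the condition number as the eigenvalue ratio above; the example matrix is an assumed illustration, not one from the slides.

```python
import numpy as np

# Condition number of a symmetric matrix: ratio of the magnitudes of the
# largest and smallest eigenvalues.
A = np.array([[5.0, 0.0], [0.0, 1.0]])
eigvals = np.linalg.eigvalsh(A)
cond = np.max(np.abs(eigvals)) / np.min(np.abs(eigvals))
print(cond)                      # 5.0; np.linalg.cond(A) agrees for this symmetric A
```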

  10. Hessian matrix. Gradient descent fails to exploit the curvature information contained in the Hessian. Here we use gradient descent on a quadratic function whose Hessian matrix has condition number 5. The red lines indicate the path followed by gradient descent. This very elongated quadratic function resembles a long canyon. Gradient descent wastes time repeatedly descending canyon walls, because they are the steepest feature. Because the step size is somewhat too large, it has a tendency to overshoot the bottom of the function and thus needs to descend the opposite canyon wall on the next iteration. The large positive eigenvalue of the Hessian corresponding to the eigenvector pointed in this direction indicates that this directional derivative is rapidly increasing, so an optimization algorithm based on the Hessian could predict that the steepest direction is not actually a promising search direction in this context.
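A minimal sketch reproducing this effect on a quadratic whose Hessian has condition number 5; the step size, starting point, and iteration count are assumed for illustration.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T H x with an ill-conditioned Hessian.
H = np.diag([5.0, 1.0])             # eigenvalues 5 and 1 -> condition number 5

def grad(x):
    return H @ x

x = np.array([1.0, 5.0])
eta = 0.3                           # somewhat too large step size (assumed)
for t in range(10):
    x = x - eta * grad(x)
    print(t, x)                     # the first coordinate overshoots and oscillates
                                    # (canyon walls), the second decays slowly toward 0
```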

  11. Second-order optimization methods. Gradient descent uses only the gradient and is called first-order optimization. Optimization algorithms such as Newton's method that also use the Hessian matrix are called second-order optimization algorithms. Newton's method on a 1D function $f(x)$: the second-order Taylor expansion $f_T(x)$ of $f$ around $x_n$ is $f_T(x) = f_T(x_n + \Delta x) \approx f(x_n) + f'(x_n)\Delta x + \frac{1}{2} f''(x_n)\Delta x^2$. Ideally, we want to pick a $\Delta x$ such that $x_n + \Delta x$ is a stationary point of $f$. Solve for the $\Delta x$ corresponding to the root of the expansion's derivative: $0 = \frac{d}{d\Delta x}\left[ f(x_n) + f'(x_n)\Delta x + \frac{1}{2} f''(x_n)\Delta x^2 \right] = f'(x_n) + f''(x_n)\Delta x$, which gives $\Delta x = -[f''(x_n)]^{-1} f'(x_n)$. The update rule therefore is $x_{n+1} = x_n + \Delta x$.
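A short sketch of the 1D Newton update above, applied to an illustrative function of my own choosing (the slides do not specify one).

```python
# Newton's method in 1D: x_{n+1} = x_n - f'(x_n) / f''(x_n),
# here for f(x) = x^4 - 3x (an assumed example function).
def f_prime(x):
    return 4 * x ** 3 - 3

def f_double_prime(x):
    return 12 * x ** 2

x = 1.0                             # starting point (assumed)
for _ in range(20):
    x = x - f_prime(x) / f_double_prime(x)

print(x)                            # stationary point of f: (3/4)**(1/3) ~ 0.908
```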

  12. Second-order optimization methods. The 1D update can be illustrated graphically (figure). Extending from a 1D function to a multi-dimensional function, the update rule of Newton's method becomes $x_{n+1} = x_n - H(f)(x_n)^{-1} \nabla_x f(x_n)$. When the function can be locally approximated as quadratic, iteratively updating the approximation and jumping to the minimum of the approximation can reach the critical point much faster than gradient descent would. In many other fields, the dominant approach to optimization is to design optimization algorithms for a limited family of functions; the family of functions used in deep learning is considerably more complex.
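A minimal sketch of the multi-dimensional Newton update, using an assumed quadratic objective (for which a single Newton step reaches the minimum exactly).

```python
import numpy as np

# Multi-dimensional Newton step: x_{n+1} = x_n - H(f)(x_n)^{-1} grad f(x_n),
# on f(x) = 0.5 * x^T A x - b^T x (an illustrative quadratic).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite Hessian
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

def hess(x):
    return A                              # constant Hessian for a quadratic

x = np.array([5.0, -5.0])                 # arbitrary starting point (assumed)
x = x - np.linalg.solve(hess(x), grad(x)) # one Newton step
print(x, grad(x))                          # gradient is ~0: the minimum A^{-1} b is reached
```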

  13. Data augmentation. If the training set is small, one can synthesize additional training samples by adding Gaussian noise to real training samples. Domain knowledge can also be used to synthesize training samples: for example, in image classification, more training images can be synthesized by translation, scaling, and rotation.
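A small sketch of the Gaussian-noise variant; the noise scale, array shapes, and toy data are assumptions for illustration.

```python
import numpy as np

# Synthesize extra training samples by adding Gaussian noise to real ones.
rng = np.random.default_rng(0)
X_train = rng.random((100, 32))             # 100 real samples, 32 features (toy data)

sigma = 0.05                                # noise scale (assumed value)
X_noisy = X_train + rng.normal(0.0, sigma, size=X_train.shape)
X_augmented = np.concatenate([X_train, X_noisy], axis=0)   # doubled training set
print(X_augmented.shape)                    # (200, 32)
```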

  14. Data augmentation. Change the pixels without changing the label. Train on the transformed data. Very widely used in practice.

  15. Data augmentation. Horizontal flipping.
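As a tiny sketch, horizontal flipping of an image stored as an H x W x C NumPy array (the array layout is an assumption):

```python
import numpy as np

image = np.zeros((224, 224, 3), dtype=np.uint8)   # placeholder H x W x C image
flipped = image[:, ::-1, :]                       # reverse the width axis
```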

  16. Data augmentation. Random crops/scales. Training (for image classification networks such as AlexNet/VGG/ResNet): pick a random $L$ in the range $[256, 480]$; resize the training image so that its short side equals $L$; sample a random $224 \times 224$ patch. Testing: average predictions over a fixed set of crops. Resize the image at 5 scales, $\{224, 256, 384, 480, 640\}$; for each size, use 10 crops of $224 \times 224$: the 4 corners and the center, each with its horizontal flip.
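A minimal sketch of the training-time random scale-and-crop step, assuming a Pillow image as input; the helper name, use of Pillow, and placeholder image are my own choices, not from the slides.

```python
import numpy as np
from PIL import Image

# Random scale-and-crop for training: resize so the short side is a random L
# in [256, 480], then sample a random 224x224 patch.
def random_resized_crop(img, crop=224, scale_range=(256, 480)):
    L = np.random.randint(scale_range[0], scale_range[1] + 1)   # random short side
    w, h = img.size
    if w < h:
        new_w, new_h = L, int(round(h * L / w))
    else:
        new_w, new_h = int(round(w * L / h)), L
    img = img.resize((new_w, new_h))
    left = np.random.randint(0, new_w - crop + 1)                # random crop origin
    top = np.random.randint(0, new_h - crop + 1)
    return img.crop((left, top, left + crop, top + crop))

patch = random_resized_crop(Image.new("RGB", (640, 480)))        # placeholder image
print(patch.size)                                                # (224, 224)
```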
