
Nonlinear Optimization: Algorithms 2: Equality Constrained Optimization
INSEAD, Spring 2006
Jean-Philippe Vert, Ecole des Mines de Paris
Jean-Philippe.Vert@mines.org


  1. Nonlinear Optimization: Algorithms 2: Equality Constrained Optimization. INSEAD, Spring 2006. Jean-Philippe Vert, Ecole des Mines de Paris, Jean-Philippe.Vert@mines.org

  2. Outline
     - Equality constrained minimization
     - Newton's method with equality constraints
     - Infeasible start Newton method

  3. Equality constrained minimization problems

  4. Equality constrained minimization
     We consider the problem:
     $$\text{minimize } f(x) \quad \text{subject to } Ax = b,$$
     where $f$ is assumed to be convex and twice continuously differentiable, and $A$ is a $p \times n$ matrix of rank $p < n$ (i.e., fewer equality constraints than variables, and independent equality constraints). We assume the optimal value $f^*$ is finite and attained at $x^*$.

  5. Optimality conditions
     Remember that a point $x^* \in \mathbb{R}^n$ is optimal if and only if there exists a dual variable $\lambda^* \in \mathbb{R}^p$ such that:
     $$Ax^* = b, \qquad \nabla f(x^*) + A^\top \lambda^* = 0.$$
     This is a set of $n + p$ equations in the $n + p$ variables $(x, \lambda)$, called the KKT system.

  6. How to solve such problems?
     - Analytically solve the KKT system (usually not possible).
     - Eliminate the equality constraints to reduce the constrained problem to an unconstrained problem with fewer variables, then solve it using unconstrained minimization algorithms.
     - Solve the dual problem using an unconstrained optimization algorithm.
     - Adapt Newton's method to the constrained minimization setting (keep the Newton step in the set of feasible directions, etc.): often preferable to the other methods.

  7. Quadratic minimization
     Consider the equality constrained convex quadratic minimization problem:
     $$\text{minimize } \tfrac{1}{2} x^\top P x + q^\top x + r \quad \text{subject to } Ax = b,$$
     where $P \in \mathbb{R}^{n \times n}$ with $P \succeq 0$, and $A \in \mathbb{R}^{p \times n}$. The optimality conditions are:
     $$Ax^* = b, \qquad \nabla f(x^*) + A^\top \lambda^* = 0.$$

  8. Quadratic minimization (cont.)
     The optimality conditions can be rewritten as the KKT system:
     $$\begin{bmatrix} P & A^\top \\ A & 0 \end{bmatrix} \begin{bmatrix} x^* \\ \lambda^* \end{bmatrix} = \begin{bmatrix} -q \\ b \end{bmatrix}.$$
     The coefficient matrix in this system is called the KKT matrix.
     - If the KKT matrix is nonsingular (e.g., if $P \succ 0$), there is a unique optimal primal-dual pair $(x^*, \lambda^*)$.
     - If the KKT matrix is singular but the KKT system is solvable, any solution yields an optimal pair $(x^*, \lambda^*)$.
     - If the KKT system is not solvable, the minimization problem is unbounded below.
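
     For concreteness, here is a minimal numpy sketch (not from the slides; the data below is illustrative) that assembles and solves this KKT system directly:

```python
import numpy as np

# Illustrative data: minimize (1/2) x'Px + q'x + r  subject to Ax = b,
# with P chosen positive definite so the KKT matrix is nonsingular.
n, p = 4, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)           # P > 0
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))       # rank p with probability 1
b = rng.standard_normal(p)

# Assemble and solve the KKT system:
# [ P  A^T ] [ x* ]   [ -q ]
# [ A   0  ] [ l* ] = [  b ]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x_star, lam_star = sol[:n], sol[n:]

# Check both optimality conditions.
assert np.allclose(A @ x_star, b)
assert np.allclose(P @ x_star + q + A.T @ lam_star, 0)
```

     Since $P \succ 0$ here, the KKT matrix is nonsingular and the solve returns the unique primal-dual pair.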

  9. Eliminating equality constraints
     One general approach to solving the equality constrained minimization problem is to eliminate the constraints, then solve the resulting problem with algorithms for unconstrained minimization. The elimination is obtained by a reparametrization of the affine subset:
     $$\{ x \mid Ax = b \} = \{ \hat{x} + F z \mid z \in \mathbb{R}^{n-p} \},$$
     where $\hat{x}$ is any particular solution and the range of $F \in \mathbb{R}^{n \times (n-p)}$ is the nullspace of $A$ (so $\operatorname{rank}(F) = n - p$ and $AF = 0$).
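
     One concrete way to build such a pair $(\hat{x}, F)$, as a sketch and not part of the slides, is via least squares and the SVD:

```python
import numpy as np

def affine_parametrization(A, b):
    """Return (x_hat, F) with {x : Ax = b} = {x_hat + F @ z}.

    x_hat is the least-norm particular solution; the columns of F
    form an orthonormal basis of the nullspace of A, taken from the
    SVD (so rank(F) = n - p and A @ F = 0 up to roundoff).
    """
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    _, s, Vt = np.linalg.svd(A)
    r = int((s > 1e-10).sum())     # numerical rank (= p here)
    F = Vt[r:].T                   # last n - r right singular vectors
    return x_hat, F
```

     Minimizing $f(\hat{x} + Fz)$ over $z$ with any unconstrained method then solves the original constrained problem.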

  10. Example
     Optimal allocation with resource constraint: we want to allocate a single resource, with a fixed total amount $b$ (the budget), to $n$ otherwise independent activities:
     $$\text{minimize } f_1(x_1) + f_2(x_2) + \cdots + f_n(x_n) \quad \text{subject to } x_1 + x_2 + \cdots + x_n = b.$$
     Eliminating $x_n = b - x_1 - \cdots - x_{n-1}$, i.e., choosing
     $$\hat{x} = b e_n, \qquad F = \begin{bmatrix} I \\ -\mathbf{1}^\top \end{bmatrix} \in \mathbb{R}^{n \times (n-1)},$$
     leads to the reduced problem:
     $$\min_{x_1, \dots, x_{n-1}} f_1(x_1) + \cdots + f_{n-1}(x_{n-1}) + f_n(b - x_1 - \cdots - x_{n-1}).$$
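
     As an illustration (the quadratic activity costs $f_i$ below are hypothetical, not from the slides), the reduced problem can be handed to any unconstrained solver:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical costs f_i(x_i) = 0.5 * (x_i - a_i)**2 with targets a,
# and a total budget b shared by the n activities.
a = np.array([1.0, 2.0, 3.0])
b = 4.0

def reduced_objective(z):
    # z holds x_1, ..., x_{n-1}; x_n is eliminated via the budget.
    x = np.append(z, b - z.sum())
    return 0.5 * np.sum((x - a) ** 2)

res = minimize(reduced_objective, np.zeros(len(a) - 1))
x_opt = np.append(res.x, b - res.x.sum())
print(x_opt, x_opt.sum())   # the allocations sum to the budget b
```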

  11. Solving the dual
     Another approach to solving the equality constrained minimization problem is to solve the dual:
     $$\max_{\lambda \in \mathbb{R}^p} \left( -b^\top \lambda + \inf_x \left( f(x) + \lambda^\top A x \right) \right).$$
     By hypothesis there is an optimal point, so Slater's condition holds: strong duality holds and the dual optimum is attained. If the dual function is twice differentiable, the methods for unconstrained optimization can be used to maximize it.

  12. Example
     The equality constrained analytic center is given (for $A \in \mathbb{R}^{p \times n}$) by:
     $$\text{minimize } f(x) = -\sum_{i=1}^n \log x_i \quad \text{subject to } Ax = b.$$
     The Lagrangian is
     $$L(x, \lambda) = -\sum_{i=1}^n \log x_i + \lambda^\top (Ax - b).$$

  13. Example (cont.)
     We minimize this convex function of $x$ by setting the derivative to $0$:
     $$x_i = \frac{1}{(A^\top \lambda)_i},$$
     therefore the dual function for $\lambda \in \mathbb{R}^p$ is:
     $$q(\lambda) = -b^\top \lambda + n + \sum_{i=1}^n \log \left( A^\top \lambda \right)_i.$$
     We can maximize it using Newton's method for unconstrained problems, and recover a solution of the primal problem via the simple equation:
     $$x_i^* = \frac{1}{(A^\top \lambda^*)_i}.$$
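
     A minimal sketch of this dual approach (our construction, not from the slides; it assumes a strictly dual-feasible start with $A^\top \lambda > 0$, and the damping and tolerances are illustrative choices):

```python
import numpy as np

def analytic_center_dual(A, b, lam, tol=1e-10, max_iter=50):
    """Maximize q(lam) = -b'lam + n + sum(log(A'lam)) by Newton's method.

    lam must satisfy A.T @ lam > 0; a damped step keeps it that way.
    Returns (x_star, lam) with x*_i = 1 / (A'lam*)_i.
    """
    for _ in range(max_iter):
        y = A.T @ lam                      # must stay positive
        grad = -b + A @ (1.0 / y)
        hess = -(A * (1.0 / y**2)) @ A.T   # = -A diag(1/y^2) A^T, negative definite
        step = np.linalg.solve(hess, -grad)
        # Damped update (sketch: positivity check only, no Armijo condition).
        t = 1.0
        while np.any(A.T @ (lam + t * step) <= 0):
            t *= 0.5
        lam = lam + t * step
        if np.linalg.norm(grad) < tol:
            break
    return 1.0 / (A.T @ lam), lam
```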

  14. Newton's method with equality constraints

  15. Motivation
     Here we describe an extension of Newton's method that handles linear equality constraints. The method is almost the same as in the unconstrained case, except for two differences:
     - the initial point must be feasible ($Ax = b$),
     - the Newton step must be a feasible direction ($A \Delta x_{\rm nt} = 0$).

  16. The Newton step
     The Newton step of $f$ at a feasible point $x$ for the linear equality constrained problem is given by (the first block of) the solution of:
     $$\begin{bmatrix} \nabla^2 f(x) & A^\top \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x_{\rm nt} \\ w \end{bmatrix} = \begin{bmatrix} -\nabla f(x) \\ 0 \end{bmatrix}.$$
     Interpretation: $\Delta x_{\rm nt}$ solves the second-order approximation of $f$ at $x$ (with variable $v$):
     $$\text{minimize } f(x) + \nabla f(x)^\top v + \tfrac{1}{2} v^\top \nabla^2 f(x) v \quad \text{subject to } A(x + v) = b.$$
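
     In code, computing the Newton step is a single linear solve; a numpy sketch (the helper name is ours):

```python
import numpy as np

def newton_step(grad, hess, A):
    """Solve the KKT system for the equality constrained Newton step:

    [ hess  A^T ] [ dx ]   [ -grad ]
    [  A     0  ] [ w  ] = [   0   ]

    assuming x is feasible (Ax = b), so the constraint block is zero.
    Returns (Delta x_nt, w).
    """
    n, p = hess.shape[0], A.shape[0]
    KKT = np.block([[hess, A.T], [A, np.zeros((p, p))]])
    rhs = np.concatenate([-grad, np.zeros(p)])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]
```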

  17. The Newton step (cont.)
     When $f$ is exactly quadratic, the Newton update $x + \Delta x_{\rm nt}$ exactly solves the problem and $w$ is the optimal dual variable. When $f$ is nearly quadratic, $x + \Delta x_{\rm nt}$ is a very good approximation of $x^*$, and $w$ is a good estimate of $\lambda^*$.
     Solution of linearized optimality condition: $\Delta x_{\rm nt}$ and $w$ solve the linearized approximation of the optimality conditions:
     $$\nabla f(x + \Delta x_{\rm nt}) + A^\top w = 0, \qquad A(x + \Delta x_{\rm nt}) = b.$$

  18. Newton decrement
     $$\lambda(x) = \left( \Delta x_{\rm nt}^\top \nabla^2 f(x) \Delta x_{\rm nt} \right)^{1/2}.$$
     It gives an estimate of $f(x) - f^*$ via the quadratic approximation $\hat{f}$:
     $$f(x) - \inf_{Ay = b} \hat{f}(y) = \tfrac{1}{2} \lambda(x)^2.$$
     Directional derivative in the Newton direction:
     $$\frac{d}{dt} f(x + t \Delta x_{\rm nt}) \Big|_{t=0} = -\lambda(x)^2.$$

  19. Newton's method
     Given a starting point $x \in \mathbb{R}^n$ with $Ax = b$ and a tolerance $\epsilon > 0$, repeat:
     1. Compute the Newton step and decrement $\Delta x_{\rm nt}$, $\lambda(x)$.
     2. Stopping criterion: quit if $\lambda(x)^2 / 2 < \epsilon$.
     3. Line search: choose step size $t$ by backtracking line search.
     4. Update: $x := x + t \Delta x_{\rm nt}$.
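
     Putting the previous slides together, here is a minimal runnable sketch of this loop (the callables f, grad_f, hess_f and the backtracking parameters are our assumptions; f is assumed finite along the trajectory):

```python
import numpy as np

def constrained_newton(f, grad_f, hess_f, A, x, eps=1e-8,
                       alpha=0.25, beta=0.5, max_iter=50):
    """Newton's method for min f(x) s.t. Ax = b, from a feasible x.

    Every iterate stays feasible because A @ dx = 0 for each step dx.
    """
    p = A.shape[0]
    for _ in range(max_iter):
        g, H = grad_f(x), hess_f(x)
        # 1. Newton step and decrement from the KKT system.
        KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
        sol = np.linalg.solve(KKT, np.concatenate([-g, np.zeros(p)]))
        dx = sol[:len(x)]
        lam2 = dx @ H @ dx             # squared Newton decrement
        # 2. Stopping criterion.
        if lam2 / 2 < eps:
            break
        # 3. Backtracking line search (note grad_f(x)' dx = -lam2).
        t = 1.0
        while f(x + t * dx) > f(x) - alpha * t * lam2:
            t *= beta
        # 4. Update.
        x = x + t * dx
    return x
```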

  20. Newton's method and elimination
     Newton's method for the reduced problem
     $$\text{minimize } \tilde{f}(z) = f(Fz + \hat{x}),$$
     starting at $z^{(0)}$, generates iterates $z^{(k)}$. Newton's method with equality constraints, when started at $x^{(0)} = F z^{(0)} + \hat{x}$, generates the iterates
     $$x^{(k)} = F z^{(k)} + \hat{x}.$$
     Hence the iterates of Newton's method for the equality constrained problem coincide with those of Newton's method applied to the unconstrained reduced problem, and all the convergence analysis remains valid.
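
     To check this equivalence numerically, here is a pure-Newton sketch of the reduced iteration (our construction, line search omitted): by the chain rule the reduced gradient and Hessian are $F^\top \nabla f$ and $F^\top \nabla^2 f \, F$, so each iterate $\hat{x} + F z^{(k)}$ matches the constrained iterate above.

```python
import numpy as np

def reduced_newton(grad_f, hess_f, x_hat, F, z, eps=1e-8, max_iter=50):
    """Pure Newton on the reduced problem min_z f(x_hat + F z) (sketch)."""
    for _ in range(max_iter):
        x = x_hat + F @ z
        g = F.T @ grad_f(x)              # reduced gradient
        H = F.T @ hess_f(x) @ F          # reduced Hessian
        dz = np.linalg.solve(H, -g)
        if -g @ dz / 2 < eps:            # squared Newton decrement / 2
            break
        z = z + dz                       # full step; line search omitted
    return x_hat + F @ z
```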

  21. Summary
     Newton's method for equality constrained optimization problems is the most natural extension of Newton's method for unconstrained problems: it solves the problem on the affine subset defined by the constraints. All results valid for Newton's method on unconstrained problems remain valid; in particular, it is a good method.
     Drawback: we need a feasible initial point.

  22. Infeasible start Newton method
