

1. Convex Optimization — Boyd & Vandenberghe

11. Equality constrained minimization

• equality constrained minimization
• eliminating equality constraints
• Newton’s method with equality constraints
• infeasible start Newton method
• implementation

2. Equality constrained minimization

   minimize    f(x)
   subject to  Ax = b

• f convex, twice continuously differentiable
• A ∈ R^{p×n} with rank A = p
• we assume p⋆ is finite and attained

optimality conditions: x⋆ is optimal iff there exists a ν⋆ such that

   ∇f(x⋆) + A^T ν⋆ = 0,   Ax⋆ = b

3. Equality constrained quadratic minimization (with P ∈ S^n_+)

   minimize    (1/2) x^T P x + q^T x + r
   subject to  Ax = b

optimality condition:

   [ P   A^T ] [ x⋆ ]   [ −q ]
   [ A   0   ] [ ν⋆ ] = [  b ]

• coefficient matrix is called KKT matrix
• KKT matrix is nonsingular if and only if

   Ax = 0, x ≠ 0  ⟹  x^T P x > 0

• equivalent condition for nonsingularity: P + A^T A ≻ 0
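The optimality condition above is a single linear system, so the quadratic problem can be solved with one factorization of the KKT matrix. A minimal NumPy sketch (the helper name `solve_eq_qp` and the random instance are mine, not from the slides):

```python
import numpy as np

def solve_eq_qp(P, q, A, b):
    """Solve  minimize (1/2) x^T P x + q^T x + r  s.t.  Ax = b
    by solving the KKT system  [P A^T; A 0][x; nu] = [-q; b]."""
    n, p = P.shape[0], A.shape[0]
    K = np.block([[P, A.T], [A, np.zeros((p, p))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    return sol[:n], sol[n:]              # x_star, nu_star

# random instance with P positive definite (illustrative data)
rng = np.random.default_rng(0)
n, p = 5, 2
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)
q, A, b = rng.standard_normal(n), rng.standard_normal((p, n)), rng.standard_normal(p)

x, nu = solve_eq_qp(P, q, A, b)
# optimality conditions: P x + q + A^T nu = 0 and A x = b
```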

4. Eliminating equality constraints

represent solution of {x | Ax = b} as

   {x | Ax = b} = {Fz + x̂ | z ∈ R^{n−p}}

• x̂ is (any) particular solution
• range of F ∈ R^{n×(n−p)} is nullspace of A (rank F = n − p and AF = 0)

reduced or eliminated problem

   minimize  f(Fz + x̂)

• an unconstrained problem with variable z ∈ R^{n−p}
• from solution z⋆, obtain x⋆ and ν⋆ as

   x⋆ = Fz⋆ + x̂,   ν⋆ = −(AA^T)^{−1} A ∇f(x⋆)
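In practice one way to build F and x̂ is an SVD: the last n − p right singular vectors of A span its nullspace. A sketch, applied to the simple objective f(x) = (1/2)‖x‖^2 so the result can be checked against the optimality conditions (the helper name and data are mine):

```python
import numpy as np

def eliminate(A, b):
    """Return F whose columns span nullspace(A), and a particular solution xhat."""
    p = A.shape[0]
    _, _, Vt = np.linalg.svd(A)
    F = Vt[p:].T                                    # A @ F = 0, rank F = n - p
    xhat = np.linalg.lstsq(A, b, rcond=None)[0]     # any solution of A x = b
    return F, xhat

rng = np.random.default_rng(1)
A, b = rng.standard_normal((3, 7)), rng.standard_normal(3)
F, xhat = eliminate(A, b)

# reduced problem for f(x) = (1/2)||x||^2 is an unconstrained QP in z
z = np.linalg.solve(F.T @ F, -F.T @ xhat)
x_star = F @ z + xhat
nu_star = -np.linalg.solve(A @ A.T, A @ x_star)     # since grad f(x) = x here
```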

5. Example: optimal allocation with resource constraint

   minimize    f_1(x_1) + f_2(x_2) + ··· + f_n(x_n)
   subject to  x_1 + x_2 + ··· + x_n = b

eliminate x_n = b − x_1 − ··· − x_{n−1}, i.e., choose

   x̂ = b e_n,   F = [ I    ] ∈ R^{n×(n−1)}
                    [ −1^T ]

reduced problem:

   minimize  f_1(x_1) + ··· + f_{n−1}(x_{n−1}) + f_n(b − x_1 − ··· − x_{n−1})

(variables x_1, …, x_{n−1})
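For quadratic costs the reduced problem has a closed-form solution, which makes the elimination easy to check. A sketch with hypothetical costs f_i(x_i) = (1/2) c_i x_i^2, for which the optimal allocation is proportional to 1/c_i:

```python
import numpy as np

n, btot = 4, 10.0
c = np.array([1.0, 2.0, 4.0, 8.0])      # hypothetical costs f_i(x_i) = (1/2) c_i x_i^2

# elimination as on the slide: xhat = b e_n, F = [I; -1^T]
F = np.vstack([np.eye(n - 1), -np.ones((1, n - 1))])
xhat = btot * np.eye(n)[:, -1]

# reduced problem: minimize (1/2)(Fz + xhat)^T C (Fz + xhat), C = diag(c)
C = np.diag(c)
z = np.linalg.solve(F.T @ C @ F, -F.T @ C @ xhat)
x = F @ z + xhat                         # optimal allocation
```

The Lagrange condition c_i x_i = ν forces x_i = ν/c_i, so x_i = (b / Σ_j 1/c_j)(1/c_i); the code recovers exactly that.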

6. Newton step

Newton step Δx_nt of f at feasible x is given by solution v of

   [ ∇^2 f(x)  A^T ] [ v ]   [ −∇f(x) ]
   [ A         0   ] [ w ] = [  0     ]

interpretations

• Δx_nt solves second order approximation (with variable v)

   minimize    f̂(x+v) = f(x) + ∇f(x)^T v + (1/2) v^T ∇^2 f(x) v
   subject to  A(x+v) = b

• Δx_nt equations follow from linearizing optimality conditions

   ∇f(x+v) + A^T w ≈ ∇f(x) + ∇^2 f(x) v + A^T w = 0,   A(x+v) = b

7. Newton decrement

   λ(x) = (Δx_nt^T ∇^2 f(x) Δx_nt)^{1/2} = (−∇f(x)^T Δx_nt)^{1/2}

properties

• gives an estimate of f(x) − p⋆ using quadratic approximation f̂:

   f(x) − inf_{Ay=b} f̂(y) = (1/2) λ(x)^2

• directional derivative in Newton direction:

   d/dt f(x + t Δx_nt) |_{t=0} = −λ(x)^2

• in general, λ(x) ≠ (∇f(x)^T ∇^2 f(x)^{−1} ∇f(x))^{1/2}
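The equality of the two expressions for λ(x) follows from the KKT system, since Δx_nt^T A^T w = 0 when A Δx_nt = 0; the last bullet says the unconstrained formula does not apply. A quick numerical check on f(x) = −Σ log x_i with arbitrary data (the instance is mine):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
A = rng.standard_normal((p, n))
x = rng.uniform(0.5, 2.0, n)             # a point in dom f for f(x) = -sum(log x)
g, H = -1.0 / x, np.diag(1.0 / x**2)     # gradient and Hessian

# Newton step from the KKT system
K = np.block([[H, A.T], [A, np.zeros((p, p))]])
dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(p)]))[:n]

lam_a = np.sqrt(dx @ H @ dx)             # (dx^T H dx)^{1/2}
lam_b = np.sqrt(-g @ dx)                 # (-g^T dx)^{1/2}
lam_unconstrained = np.sqrt(g @ np.linalg.solve(H, g))
# lam_a == lam_b, and both differ from the unconstrained expression in general
```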

8. Newton’s method with equality constraints

given starting point x ∈ dom f with Ax = b, tolerance ε > 0.
repeat
  1. Compute the Newton step and decrement Δx_nt, λ(x).
  2. Stopping criterion. Quit if λ^2/2 ≤ ε.
  3. Line search. Choose step size t by backtracking line search.
  4. Update. x := x + t Δx_nt.

• a feasible descent method: x^{(k)} feasible and f(x^{(k+1)}) < f(x^{(k)})
• affine invariant
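The algorithm above can be sketched directly in NumPy. This is my own minimal illustration (the helper name, the parameter defaults, and the analytic-centering test problem are assumptions, not code from the slides); the objective returns +∞ outside its domain so the backtracking search also enforces x ∈ dom f:

```python
import numpy as np

def newton_eq(f, grad, hess, A, b, x0, eps=1e-10, alpha=0.1, beta=0.5):
    """Feasible-start Newton method for  minimize f(x) s.t. Ax = b.
    Assumes A @ x0 = b and x0 in dom f."""
    x, (p, n) = x0.copy(), A.shape
    for _ in range(100):
        g, H = grad(x), hess(x)
        # Newton step dx_nt from the KKT system
        K = np.block([[H, A.T], [A, np.zeros((p, p))]])
        v = np.linalg.solve(K, np.concatenate([-g, np.zeros(p)]))[:n]
        lam2 = -g @ v                    # Newton decrement squared
        if lam2 / 2 <= eps:              # stopping criterion
            break
        t = 1.0                          # backtracking line search
        while f(x + t * v) > f(x) - alpha * t * lam2:
            t *= beta
        x = x + t * v
    return x

# analytic centering: minimize -sum(log x) s.t. 1^T x = n; the optimum is x = 1
n = 5
f = lambda x: -np.sum(np.log(x)) if np.all(x > 0) else np.inf
grad = lambda x: -1.0 / x
hess = lambda x: np.diag(1.0 / x**2)
A, b = np.ones((1, n)), np.array([float(n)])
x0 = np.array([1.0, 0.5, 1.5, 0.5, 1.5])   # feasible starting point (sums to 5)
x = newton_eq(f, grad, hess, A, b, x0)
```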

9. Newton’s method and elimination

Newton’s method for reduced problem

   minimize  f̃(z) = f(Fz + x̂)

• variables z ∈ R^{n−p}
• x̂ satisfies Ax̂ = b; rank F = n − p and AF = 0
• Newton’s method for f̃, started at z^{(0)}, generates iterates z^{(k)}

Newton’s method with equality constraints: when started at x^{(0)} = Fz^{(0)} + x̂, iterates are

   x^{(k+1)} = Fz^{(k+1)} + x̂

hence, don’t need separate convergence analysis
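The correspondence can be checked numerically for a single step: at a point x (with b = Ax, so x = Fz + x̂ for some z), the constrained Newton step equals F times the Newton step for f̃ at z. A sketch using f(x) = −Σ log x_i (the instance is mine):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 2, 6
A = rng.standard_normal((p, n))
F = np.linalg.svd(A)[2][p:].T              # columns span nullspace(A), so A F = 0

x = rng.uniform(0.5, 2.0, n)               # current iterate, in dom f
g, H = -1.0 / x, np.diag(1.0 / x**2)       # gradient and Hessian of -sum(log x)

# constrained Newton step from the KKT system
K = np.block([[H, A.T], [A, np.zeros((p, p))]])
dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(p)]))[:n]

# Newton step for the reduced function f~(z) = f(Fz + xhat): by the chain rule
# its gradient is F^T g and Hessian F^T H F, so  (F^T H F) dz = -F^T g
dz = np.linalg.solve(F.T @ H @ F, -F.T @ g)
# the two steps coincide: dx == F dz
```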

10. Newton step at infeasible points

the second interpretation of the Newton step (slide 6) extends to infeasible x (i.e., Ax ≠ b)

linearizing optimality conditions at infeasible x (with x ∈ dom f) gives

   [ ∇^2 f(x)  A^T ] [ Δx_nt ]     [ ∇f(x)  ]
   [ A         0   ] [ w     ] = − [ Ax − b ]    (1)

primal-dual interpretation

• write optimality condition as r(y) = 0, where

   y = (x, ν),   r(y) = (∇f(x) + A^T ν, Ax − b)

• linearizing r(y) = 0 gives r(y + Δy) ≈ r(y) + Dr(y) Δy = 0:

   [ ∇^2 f(x)  A^T ] [ Δx_nt ]     [ ∇f(x) + A^T ν ]
   [ A         0   ] [ Δν_nt ] = − [ Ax − b        ]

same as (1) with w = ν + Δν_nt

11. Infeasible start Newton method

given starting point x ∈ dom f, ν, tolerance ε > 0, α ∈ (0, 1/2), β ∈ (0, 1).
repeat
  1. Compute primal and dual Newton steps Δx_nt, Δν_nt.
  2. Backtracking line search on ‖r‖_2:
     t := 1;
     while ‖r(x + t Δx_nt, ν + t Δν_nt)‖_2 > (1 − αt) ‖r(x, ν)‖_2, t := βt.
  3. Update. x := x + t Δx_nt, ν := ν + t Δν_nt.
until Ax = b and ‖r(x, ν)‖_2 ≤ ε.

• not a descent method: f(x^{(k+1)}) > f(x^{(k)}) is possible
• directional derivative of ‖r(y)‖_2 in direction Δy = (Δx_nt, Δν_nt) is

   d/dt ‖r(y + t Δy)‖_2 |_{t=0} = −‖r(y)‖_2
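A NumPy sketch of this algorithm, again my own illustration rather than code from the slides: the helper name, defaults, and test problem are assumptions, an explicit `in_dom` guard is added to the line search (the slides implicitly require x + t Δx_nt ∈ dom f), and the exit test is simplified to ‖r‖_2 ≤ ε (which already implies ‖Ax − b‖_2 ≤ ε):

```python
import numpy as np

def newton_infeasible(grad, hess, A, b, x0, nu0, in_dom=lambda x: True,
                      eps=1e-8, alpha=0.1, beta=0.5):
    """Infeasible-start Newton: drive r(x, nu) = (grad f(x) + A^T nu, Ax - b) to 0."""
    x, nu, (p, n) = x0.copy(), nu0.copy(), A.shape
    r = lambda x, nu: np.concatenate([grad(x) + A.T @ nu, A @ x - b])
    for _ in range(100):
        res = r(x, nu)
        if np.linalg.norm(res) <= eps:
            break
        K = np.block([[hess(x), A.T], [A, np.zeros((p, p))]])
        step = np.linalg.solve(K, -res)            # primal-dual Newton step
        dx, dnu = step[:n], step[n:]
        t = 1.0                                    # backtracking on ||r||_2
        while (not in_dom(x + t * dx) or
               np.linalg.norm(r(x + t * dx, nu + t * dnu))
                   > (1 - alpha * t) * np.linalg.norm(res)):
            t *= beta
        x, nu = x + t * dx, nu + t * dnu
    return x, nu

# analytic centering: minimize -sum(log x) s.t. 1^T x = 5, infeasible start
n = 5
A, b = np.ones((1, n)), np.array([5.0])
x, nu = newton_infeasible(lambda x: -1.0 / x, lambda x: np.diag(1.0 / x**2),
                          A, b, 2.0 * np.ones(n), np.zeros(1),
                          in_dom=lambda x: bool(np.all(x > 0)))
# converges to the analytic center x = 1, with nu = 1
```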

12. Solving KKT systems

   [ H   A^T ] [ v ]     [ g ]
   [ A   0   ] [ w ] = − [ h ]

solution methods

• LDL^T factorization
• elimination (if H nonsingular):

   AH^{−1}A^T w = h − AH^{−1}g,   Hv = −(g + A^T w)

• elimination with singular H: write as

   [ H + A^T QA   A^T ] [ v ]     [ g + A^T Qh ]
   [ A            0   ] [ w ] = − [ h          ]

with Q ⪰ 0 for which H + A^T QA ≻ 0, and apply elimination
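The elimination variant for nonsingular H, sketched in NumPy and checked against a direct solve of the full KKT system (the helper name and random data are mine):

```python
import numpy as np

def kkt_elim(H, A, g, h):
    """Solve [H A^T; A 0][v; w] = -[g; h] assuming H is nonsingular:
       (A H^{-1} A^T) w = h - A H^{-1} g,  then  H v = -(g + A^T w)."""
    Hinv_g = np.linalg.solve(H, g)
    Hinv_At = np.linalg.solve(H, A.T)
    w = np.linalg.solve(A @ Hinv_At, h - A @ Hinv_g)
    v = -np.linalg.solve(H, g + A.T @ w)
    return v, w

rng = np.random.default_rng(5)
n, p = 6, 2
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)                    # positive definite, hence nonsingular
A, g, h = rng.standard_normal((p, n)), rng.standard_normal(n), rng.standard_normal(p)

v, w = kkt_elim(H, A, g, h)
```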

13. Equality constrained analytic centering

primal problem:  minimize −∑_{i=1}^n log x_i subject to Ax = b

dual problem:  maximize −b^T ν + ∑_{i=1}^n log(A^T ν)_i + n

three methods for an example with A ∈ R^{100×500}, different starting points

1. Newton method with equality constraints (requires x^{(0)} ≻ 0, Ax^{(0)} = b)

[figure: f(x^{(k)}) − p⋆ versus k, on a log scale from 10^5 down to 10^{−10}, over about 20 iterations]

14. 2. Newton method applied to dual problem (requires A^T ν^{(0)} ≻ 0)

[figure: p⋆ − g(ν^{(k)}) versus k, on a log scale from 10^5 down to 10^{−10}, over about 10 iterations]

3. Infeasible start Newton method (requires x^{(0)} ≻ 0)

[figure: ‖r(x^{(k)}, ν^{(k)})‖_2 versus k, on a log scale from 10^{10} down to 10^{−15}, over about 25 iterations]

15. Complexity per iteration of three methods is identical

1. use block elimination to solve KKT system

   [ diag(x)^{−2}  A^T ] [ Δx ]   [ diag(x)^{−1} 1 ]
   [ A             0   ] [ w  ] = [ 0              ]

reduces to solving A diag(x)^2 A^T w = b

2. solve Newton system

   A diag(A^T ν)^{−2} A^T Δν = −b + A diag(A^T ν)^{−1} 1

3. use block elimination to solve KKT system

   [ diag(x)^{−2}  A^T ] [ Δx ]   [ diag(x)^{−1} 1 − A^T ν ]
   [ A             0   ] [ Δν ] = [ b − Ax                 ]

reduces to solving A diag(x)^2 A^T w = 2Ax − b

conclusion: in each case, solve ADA^T w = h with D positive diagonal
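Method 1's reduction can be coded directly: with H = diag(x)^{−2}, the Schur complement is A diag(x)^2 A^T, which is positive definite and can be factored by Cholesky. A sketch (the function name is mine; the instance is random illustrative data):

```python
import numpy as np

def centering_newton_step(A, b, x):
    """Newton step for  minimize -sum(log x) s.t. Ax = b  at feasible x > 0,
    via block elimination: solve A diag(x)^2 A^T w = b, then recover dx."""
    d = x**2                               # diagonal of diag(x)^2
    S = (A * d) @ A.T                      # Schur complement A diag(x)^2 A^T
    L = np.linalg.cholesky(S)              # S is positive definite
    w = np.linalg.solve(L.T, np.linalg.solve(L, b))
    dx = x - d * (A.T @ w)                 # dx = diag(x)^2 (diag(x)^{-1} 1 - A^T w)
    return dx, w

rng = np.random.default_rng(6)
p, n = 3, 8
A = rng.standard_normal((p, n))
x = rng.uniform(0.5, 2.0, n)               # any positive point
b = A @ x                                  # choose b so that x is feasible
dx, w = centering_newton_step(A, b, x)
# dx satisfies the KKT rows: diag(x)^{-2} dx + A^T w = diag(x)^{-1} 1, A dx = 0
```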

16. Network flow optimization

   minimize    ∑_{i=1}^n φ_i(x_i)
   subject to  Ax = b

• directed graph with n arcs, p + 1 nodes
• x_i: flow through arc i; φ_i: cost flow function for arc i (with φ_i″(x) > 0)
• node-incidence matrix Ã ∈ R^{(p+1)×n} defined as

   Ã_ij = 1 if arc j leaves node i; −1 if arc j enters node i; 0 otherwise

• reduced node-incidence matrix A ∈ R^{p×n} is Ã with last row removed
• b ∈ R^p is (reduced) source vector
• rank A = p if graph is connected

17. KKT system

   [ H   A^T ] [ v ]     [ g ]
   [ A   0   ] [ w ] = − [ h ]

• H = diag(φ_1″(x_1), …, φ_n″(x_n)), positive diagonal
• solve via elimination:

   AH^{−1}A^T w = h − AH^{−1}g,   Hv = −(g + A^T w)

sparsity pattern of coefficient matrix is given by graph connectivity:

   (AH^{−1}A^T)_ij ≠ 0  ⟺  (AA^T)_ij ≠ 0  ⟺  nodes i and j are connected by an arc
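The sparsity claim is easy to verify on a small graph. A sketch building the reduced node-incidence matrix of a hypothetical 4-node directed cycle (the graph is my example, chosen so that one node pair shares no arc):

```python
import numpy as np

# directed cycle on 4 nodes: arcs as (tail, head) pairs; p + 1 = 4 nodes, n = 4 arcs
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
num_nodes, n = 4, len(arcs)
Atilde = np.zeros((num_nodes, n))
for j, (tail, head) in enumerate(arcs):
    Atilde[tail, j] = 1.0                  # arc j leaves node tail
    Atilde[head, j] = -1.0                 # arc j enters node head
A = Atilde[:-1]                            # reduced incidence matrix (drop last row)

G = A @ A.T
# off-diagonal (A A^T)_ij is nonzero exactly when nodes i and j share an arc:
# nodes 0-1 and 1-2 are adjacent; nodes 0-2 are not
```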

18. Analytic center of linear matrix inequality

   minimize    −log det X
   subject to  tr(A_i X) = b_i,  i = 1, …, p

variable X ∈ S^n

optimality conditions:

   X⋆ ≻ 0,   −(X⋆)^{−1} + ∑_{j=1}^p ν⋆_j A_j = 0,   tr(A_i X⋆) = b_i,  i = 1, …, p

Newton equation at feasible X:

   X^{−1} ΔX X^{−1} + ∑_{j=1}^p w_j A_j = X^{−1},   tr(A_i ΔX) = 0,  i = 1, …, p

• follows from linear approximation (X + ΔX)^{−1} ≈ X^{−1} − X^{−1} ΔX X^{−1}
• n(n + 1)/2 + p variables ΔX, w

19. Solution by block elimination

• eliminate ΔX from first equation: ΔX = X − ∑_{j=1}^p w_j X A_j X
• substitute ΔX in second equation:

   ∑_{j=1}^p tr(A_i X A_j X) w_j = b_i,  i = 1, …, p    (2)

a dense positive definite set of linear equations with variable w ∈ R^p

flop count (dominant terms) using Cholesky factorization X = LL^T:
• form p products L^T A_j L: (3/2) p n^3
• form p(p + 1)/2 inner products tr((L^T A_i L)(L^T A_j L)): (1/2) p^2 n^2
• solve (2) via Cholesky factorization: (1/3) p^3
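The procedure can be sketched end to end on random data (the instance is hypothetical; the Cholesky trick works because tr(A_i X A_j X) = tr((L^T A_i L)(L^T A_j L)) when X = LL^T):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 4, 2
sym = lambda M: (M + M.T) / 2
As = [sym(rng.standard_normal((n, n))) for _ in range(p)]   # data matrices A_i
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)                 # a point X > 0
b = np.array([np.trace(Ai @ X) for Ai in As])   # choose b so that X is feasible

L = np.linalg.cholesky(X)                   # X = L L^T
T = [L.T @ Ai @ L for Ai in As]             # p products L^T A_i L
C = np.array([[np.trace(T[i] @ T[j]) for j in range(p)] for i in range(p)])
w = np.linalg.solve(C, b)                   # equation (2); C is positive definite
dX = X - sum(w[j] * X @ As[j] @ X for j in range(p))
# dX and w solve the Newton equation: X^{-1} dX X^{-1} + sum_j w_j A_j = X^{-1},
# with tr(A_i dX) = 0
```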
