

  1. Computational Optimization Augmented Lagrangian NW 17.3

  2. Upcoming Schedule No class April 18. Friday, April 25: in-class presentations. Projects due unless you present April 25 (free extension until Monday for 4/25 presenters). Monday, April 28: evening class presentations, pizza provided. Tuesday, April 29: in-class presentations. Exam: Tuesday, May 6, open notes/book.

  3. General Equality Problem (NLP): min f(x) s.t. h_i(x) = 0, i ∈ E.

  4. Augmented Lagrangian Consider min f(x) s.t. h(x) = 0. Start with the Lagrangian L(x, λ) = f(x) − λ'h(x). Add a penalty: L(x, λ, μ) = f(x) − λ'h(x) + μ/2 ||h(x)||². The penalty helps ensure that the point is feasible.

  5. Lagrangian Multiplier Estimate L(x, λ, μ) = f(x) − λ'h(x) + μ/2 ||h(x)||². If ∇_x L(x, λ, μ) = ∇f(x) − λ'∇h(x) + μ·h(x)'∇h(x) = 0, then ∇f(x) − [λ − μ·h(x)]'∇h(x) = 0. This looks like the Lagrangian multiplier condition, which suggests the update λ_i^{k+1} = λ_i^k − μ_k h_i(x^k) (17.39).
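A quick symbolic check of this identity (a minimal sketch assuming SymPy; f and h are generic placeholder functions, not ones from the slides): the gradient of the augmented Lagrangian equals the gradient of the plain Lagrangian evaluated at the shifted multiplier λ − μ·h(x), which is what motivates update (17.39).

    import sympy as sp

    x, lam, mu = sp.symbols('x lambda mu', real=True)
    f = sp.Function('f')(x)     # generic objective
    h = sp.Function('h')(x)     # generic equality constraint, h(x) = 0

    # Augmented Lagrangian and its x-gradient
    L = f - lam*h + mu/2*h**2
    grad_AL = sp.diff(L, x)

    # Gradient of the plain Lagrangian with the shifted multiplier lambda - mu*h(x)
    grad_shifted = sp.diff(f, x) - (lam - mu*h)*sp.diff(h, x)

    print(sp.simplify(grad_AL - grad_shifted))   # prints 0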

  6. In Class Exercise Consider min x³ s.t. x + 1 = 0. Find x*, λ* satisfying the KKT conditions. Write out the augmented Lagrangian L(x, λ*, μ). Plot f(x), L(x, λ*), L(x, λ*, 4), L(x, λ*, 16), and L(x, λ*, 40), and compare these functions near x*.
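A minimal NumPy/Matplotlib sketch of the comparison the exercise asks for, assuming the problem reads min x³ s.t. x + 1 = 0 (so the KKT conditions give x* = −1 and λ* = 3):

    import numpy as np
    import matplotlib.pyplot as plt

    f = lambda x: x**3
    h = lambda x: x + 1
    lam_star = 3.0                     # from 3*x**2 - lam = 0 at x* = -1

    def L(x, lam, mu):                 # augmented Lagrangian
        return f(x) - lam*h(x) + 0.5*mu*h(x)**2

    xs = np.linspace(-2.0, 0.0, 400)
    plt.plot(xs, f(xs), label='f(x)')
    plt.plot(xs, f(xs) - lam_star*h(xs), label='L(x, lam*)')
    for mu in (4, 16, 40):
        plt.plot(xs, L(xs, lam_star, mu), label=f'L(x, lam*, {mu})')
    plt.axvline(-1.0, linestyle=':')   # x* = -1
    plt.legend()
    plt.show()

Near x* = −1 the plain Lagrangian still has negative curvature, while the penalized versions become locally convex around x* once μ is large enough (here μ > 6).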

  7. Augmented Lagrangian Algorithm for equality constraints (17.3): Given λ^0, μ_0 > 0, x^0, and tol > 0. For k = 0, 1, 2, ...: find an approximate minimizer x^k of L(x, λ^k, μ_k) such that ||∇_x L(x^k, λ^k, μ_k)|| ≤ tol; if optimal, stop; update the Lagrangian multipliers by (17.39), λ_i^{k+1} = λ_i^k − μ_k h_i(x^k); choose a new penalty μ_{k+1} ≥ μ_k.
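A rough Python sketch of this loop (not the book's exact algorithm; it assumes SciPy's BFGS for the inner minimization and simply multiplies the penalty by 10 each round):

    import numpy as np
    from scipy.optimize import minimize

    def auglag_equality(f, h, x0, lam0, mu=10.0, tol=1e-6, max_outer=20):
        # f: objective, h: vector of equality constraints (both callables of x)
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        for k in range(max_outer):
            # approximately minimize the augmented Lagrangian in x
            L = lambda x: f(x) - lam @ h(x) + 0.5*mu*np.sum(h(x)**2)
            x = minimize(L, x, method='BFGS', options={'gtol': tol}).x
            lam = lam - mu*h(x)              # multiplier update (17.39)
            if np.linalg.norm(h(x)) < tol:   # (near) feasible: stop
                break
            mu = 10.0*mu                     # choose a new, larger penalty
        return x, lam

    # Example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0   (solution x = (0.5, 0.5))
    f = lambda x: x[0]**2 + x[1]**2
    h = lambda x: np.array([x[0] + x[1] - 1.0])
    print(auglag_equality(f, h, x0=[0.0, 0.0], lam0=[0.0]))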

  8. AL Has Nice Properties The penalty function can improve conditioning and convexity. It automatically gives estimates of the Lagrangian multipliers. It needs only a finite penalty term.

  9. Theorem 17.5 Let x* be a local solution of the equality-constrained NLP at which LICQ and the second-order sufficient conditions are satisfied. Then for all μ sufficiently large, x* is a strict local minimizer of L(x, λ*, μ). Only a finite penalty term is needed!
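A quick check on the slide-6 exercise (min x³ s.t. x + 1 = 0, with x* = −1 and λ* = 3): a short SymPy sketch showing that x* is stationary for L(x, λ*, μ) for every μ and becomes a strict local minimizer as soon as μ > 6, so a finite penalty really is enough.

    import sympy as sp

    x, mu = sp.symbols('x mu', real=True)
    L = x**3 - 3*(x + 1) + mu/2*(x + 1)**2    # L(x, lambda*, mu) with lambda* = 3

    print(sp.diff(L, x).subs(x, -1))          # 0: x* = -1 is stationary for every mu
    print(sp.diff(L, x, 2).subs(x, -1))       # mu - 6: strict local minimum once mu > 6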

  10. Why AL Works The AL solution is close to the true solution if the penalty μ is large enough or if the multiplier λ is close enough to the true multiplier. The subproblems have a strict local minimizer, so unconstrained minimization methods should work well.

  11. Add Bound Constraints Original problem: min f(x) s.t. h_i(x) = 0, i ∈ E, and l ≤ x ≤ u. Add only the equalities (not the bounds) to the Lagrangian: min_x L(x, λ^k, μ_k) = f(x) − (λ^k)'h(x) + μ_k/2 ||h(x)||² s.t. l ≤ x ≤ u.

  12. Algorithm 17.4 For the bound-constrained case: just put the nonlinear equalities into the augmented Lagrangian subproblem and keep the bounds as they are. If near feasible, update the multipliers and penalties; else just update the penalties.
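A rough sketch of that idea (not the book's Algorithm 17.4; it reuses SciPy's bound-constrained L-BFGS-B for the subproblem and a simple feasibility test to decide between updating the multipliers and raising the penalty):

    import numpy as np
    from scipy.optimize import minimize

    def auglag_bounds(f, h, x0, lam0, bounds, mu=10.0, eta=1e-4, tol=1e-8, max_outer=30):
        # Keep the bounds l <= x <= u in the subproblem; only the nonlinear
        # equalities h(x) = 0 go into the augmented Lagrangian.
        # bounds: sequence of (low, high) pairs, e.g. [(0.0, None)] * n
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        for k in range(max_outer):
            L = lambda x: f(x) - lam @ h(x) + 0.5*mu*np.sum(h(x)**2)
            x = minimize(L, x, method='L-BFGS-B', bounds=bounds).x
            if np.linalg.norm(h(x)) <= eta:      # near feasible: update multipliers
                if np.linalg.norm(h(x)) <= tol:
                    break
                lam = lam - mu*h(x)
            else:                                # not feasible enough: just raise the penalty
                mu = 10.0*mu
        return x, lam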

  13. Inequality Problems The method of multipliers can be extended to this case using a penalty parameter t: L(x, u, t) = f(x) + 1/(2t) Σ_{j=1..m} ( [u_j − t·g_j(x)]_+² − u_j² ), where [y]_+ = max(y, 0). If strict complementarity holds, this function is twice differentiable.

  14. Inequality Problems ∇_x L(x, u, t) = ∇f(x) − Σ_{j=1..m} [u_j − t·g_j(x)]_+ ∇g_j(x) = 0 and ∇_{u_j} L(x, u, t) = (1/t)( [u_j − t·g_j(x)]_+ − u_j ) = 0. A KKT point of the augmented Lagrangian is a KKT point of the original problem. The estimate of the Lagrangian multiplier is û_j = [u_j − t·g_j(x)]_+.
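A small NumPy sketch of these two formulas, assuming the inequality constraints are written g_j(x) ≥ 0 with multipliers u_j ≥ 0 (the sign convention implied by the next slide's case analysis); auglag_ineq and multiplier_estimate are hypothetical helper names:

    import numpy as np

    def auglag_ineq(f, g, x, u, t):
        # L(x, u, t) = f(x) + (1/(2t)) * sum_j ( [u_j - t*g_j(x)]_+^2 - u_j^2 )
        r = np.maximum(u - t*g(x), 0.0)          # [u_j - t*g_j(x)]_+
        return f(x) + (r @ r - u @ u) / (2.0*t)

    def multiplier_estimate(g, x, u, t):
        # u_hat_j = [u_j - t*g_j(x)]_+  is the Lagrange multiplier estimate
        return np.maximum(u - t*g(x), 0.0)

    # Example: g(x) = x >= 0 at x = 0.5 with u = 1, t = 10:
    # the estimate is [1 - 10*0.5]_+ = 0, matching the case g_j(x) > 0, t large.
    print(multiplier_estimate(lambda x: np.array([x]), 0.5, np.array([1.0]), 10.0))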

  15. Inequality Problems ∇_x L(x, u, t) = ∇f(x) − Σ_{j=1..m} [u_j − t·g_j(x)]_+ ∇g_j(x) = 0. Setting û_j = [u_j − t·g_j(x)]_+, this reads ∇f(x) − Σ_j û_j ∇g_j(x) = 0 with û_j ≥ 0, and ∇_{u_j} L(x, u, t) = 0 gives [u_j − t·g_j(x)]_+ = u_j. Case by case: if g_j(x) > 0, then for t sufficiently large û_j = 0; if g_j(x) = 0, then û_j ≥ 0 and û_j g_j(x) = 0; if g_j(x) < 0, then for t sufficiently large we get a contradiction. So (x, û) satisfies the KKT conditions of the original problem.

  16. NLP Family of Algorithms
      Basic Method: Sequential Linear Programming, Sequential Quadratic Programming, Augmented Lagrangian, Projection or Reduced Gradient Method
      Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
      Space: Direct, Null, Range
      Constraints: Active Set, Barrier, Penalty
      Step Size: Line Search, Trust Region

  17. Hybrid Approaches A method can combine any of these building blocks. MINOS: for linear programs it uses the simplex method; its generalization to nonlinear programs with linear constraints is the reduced gradient method; nonlinear constraints are handled through the augmented Lagrangian; a BFGS estimate of the Hessian is used.
