

  1. 10.1 TYPES OF CONSTRAINED OPTIMIZATION ALGORITHMS

  2. Quadratic Programming Problems • Algorithms for such problems are interesting to explore because – 1. Their structure can be efficiently exploited. – 2. They form the basis for other algorithms, such as augmented Lagrangian and sequential quadratic programming methods.

  3. Penalty Methods Solve with unconstrained optimization • Idea: Replace the constraints by a penalty term. • Inexact penalties: the penalty parameter is driven to infinity to recover the solution. Example (quadratic penalty): $x^* = \arg\min f(x)$ subject to $c(x) = 0$ becomes $x_\mu = \arg\min_x f(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2$, with $x^* = \lim_{\mu \to \infty} x_\mu$. • Exact but nonsmooth penalty – the penalty parameter can stay finite: $x^* = \arg\min_x f(x) + \mu \sum_{i \in E} \lvert c_i(x) \rvert$ for all $\mu \ge \mu_0$.
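To make the inexact quadratic-penalty idea concrete, here is a minimal sketch of the outer loop, using scipy.optimize.minimize for the inner unconstrained solves. The toy problem (minimize x1 + x2 subject to x1^2 + x2^2 = 2) and the update factor of 10 are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative toy problem: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0
f = lambda x: x[0] + x[1]
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])

def quadratic_penalty(x0, mu=1.0, tol=1e-8, max_outer=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        # Inner unconstrained problem: min_x f(x) + (mu/2) * sum_i c_i(x)^2
        phi = lambda x, mu=mu: f(x) + 0.5 * mu * np.sum(c(x)**2)
        x = minimize(phi, x, method="BFGS").x
        if np.linalg.norm(c(x)) < tol:
            break
        mu *= 10.0  # drive the penalty parameter toward infinity
    return x

print(quadratic_penalty([1.0, 0.0]))  # approaches (-1, -1), the constrained minimizer
```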

  4. Augmented Lagrangian Methods • Mix the Lagrangian point of view with a penalty point of view: $x^* = \arg\min f(x)$ subject to $c(x) = 0$ becomes $x_{\mu,\lambda} = \arg\min_x f(x) - \sum_{i \in E} \lambda_i c_i(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2$, with $x^* = \lim_{\lambda \to \lambda^*} x_{\mu,\lambda}$ for some $\mu \ge \mu_0 > 0$.
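A matching sketch of the augmented-Lagrangian loop (method of multipliers), with the standard first-order multiplier update λ ← λ − μ c(x); the same illustrative toy problem is assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Same illustrative toy problem: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0
f = lambda x: x[0] + x[1]
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])

def augmented_lagrangian(x0, mu=10.0, tol=1e-8, max_outer=30):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(1)  # one equality constraint
    for _ in range(max_outer):
        # Inner problem: min_x f(x) - lam^T c(x) + (mu/2) ||c(x)||^2
        L_A = lambda x, lam=lam, mu=mu: f(x) - lam @ c(x) + 0.5 * mu * np.sum(c(x)**2)
        x = minimize(L_A, x, method="BFGS").x
        if np.linalg.norm(c(x)) < tol:
            break
        lam = lam - mu * c(x)  # first-order multiplier update
    return x, lam

x_star, lam_star = augmented_lagrangian([1.0, 0.0])
print(x_star, lam_star)  # x -> (-1, -1); lam converges without mu -> infinity
```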

  5. Sequential Quadratic Programming Algorithms • Solve successively Quadratic Programs: $$\min_p \tfrac{1}{2} p^T B_k p + \nabla f(x_k)^T p$$ subject to $\nabla c_i(x_k)^T p + c_i(x_k) = 0,\ i \in E$ and $\nabla c_i(x_k)^T p + c_i(x_k) \ge 0,\ i \in I$. • It is the analogue of Newton's method for the constrained case if $B_k = \nabla^2_{xx} L(x_k, \lambda_k)$. • But how do you solve the subproblem? It is possible with extensions of the simplex method, which I do not cover. • An option is to maintain $B_k$ with BFGS, which makes the subproblem convex.
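For equality constraints only, the QP subproblem can be solved directly from its KKT conditions as one linear system; a small sketch below, where the identity matrix standing in for B_k and the toy data are illustrative assumptions.

```python
import numpy as np

# One SQP step, equality constraints only. The QP subproblem's KKT system:
#   [ B_k  -A^T ] [ p   ]   [ -grad_f ]
#   [ A     0   ] [ lam ] = [ -c      ]
# where A is the constraint Jacobian at x_k.
def sqp_step(grad_f, c_val, A, B):
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, -A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-grad_f, -c_val]))
    return sol[:n], sol[n:]  # step p and new multiplier estimate

# Toy data: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0, evaluated at x = (-1.2, -0.8)
x = np.array([-1.2, -0.8])
grad_f = np.array([1.0, 1.0])
c_val = np.array([x[0]**2 + x[1]**2 - 2.0])
A = np.array([[2 * x[0], 2 * x[1]]])  # Jacobian of c
B = np.eye(2)                         # stand-in for the Lagrangian Hessian
p, lam = sqp_step(grad_f, c_val, A, B)
print(x + p, lam)
```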

  6. Interior Point Methods • Reduce the inequality constraints with a barrier: $$\min_{x,s} f(x) - \mu \sum_{i=1}^m \log s_i$$ subject to $c_i(x) = 0,\ i \in E$ and $c_i(x) - s_i = 0,\ i \in I$. • An alternative is to use a penalty as well: $$\min_{x,s} f(x) - \mu \sum_{i \in I} \log s_i + \frac{1}{2\mu} \sum_{i \in E} c_i(x)^2 + \frac{1}{2\mu} \sum_{i \in I} \bigl( c_i(x) - s_i \bigr)^2$$ • And I can solve it as a sequence of unconstrained problems!
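A minimal barrier-method sketch on an inequality-only toy problem, applying the log barrier directly to the constraint rather than through slacks; the problem, the shrink factor, and the derivative-free inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative toy problem: min (x1-2)^2 + (x2-2)^2  s.t.  2 - x1 - x2 >= 0
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2
c = lambda x: 2.0 - x[0] - x[1]

def barrier_method(x0, mu=1.0, shrink=0.2, tol=1e-8, max_outer=25):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        # Inner problem: min_x f(x) - mu * log c(x); +inf outside the feasible set
        phi = lambda x, mu=mu: f(x) - mu * np.log(c(x)) if c(x) > 0 else np.inf
        x = minimize(phi, x, method="Nelder-Mead").x
        if mu < tol:
            break
        mu *= shrink  # drive the barrier parameter to zero
    return x

print(barrier_method([0.0, 0.0]))  # approaches (1, 1) on the constraint boundary
```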

  7. 10.2 MERIT FUNCTIONS AND FILTERS

  8. Feasible algorithms • If I can afford to maintain feasibility at all steps, then I just monitor the decrease in the objective function. • I accept a point if I have enough descent. • But this works only for very particular constraints, such as linear constraints or bound constraints (and we will use it). • Algorithms that do this are called feasible algorithms.

  9. Infeasible algorithms • But sometimes it is VERY HARD to enforce feasibility at all steps (e.g. nonlinear equality constraints). • And I need feasibility only in the limit, so there is benefit in allowing algorithms to move outside the feasible set. • But then, how do I measure progress, since I have two apparently contradictory requirements? – Reduce the infeasibility, e.g. $\sum_{i \in E} \lvert c_i(x) \rvert + \sum_{i \in I} \max\{-c_i(x), 0\}$. – Reduce the objective function. – It has a multiobjective optimization nature!
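The infeasibility measure above, as a small helper; this is a direct transcription of the formula, with the sign convention c_i(x) ≥ 0 for inequalities following the slides.

```python
import numpy as np

def infeasibility(c_E, c_I):
    """h(x) = sum_{i in E} |c_i(x)| + sum_{i in I} max(-c_i(x), 0):
    equalities should be zero, inequalities should be nonnegative."""
    return np.sum(np.abs(c_E)) + np.sum(np.maximum(-np.asarray(c_I), 0.0))

print(infeasibility(c_E=[0.3], c_I=[-0.2, 1.5]))  # 0.3 + 0.2 = 0.5
```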

  10. 10.2.1 MERIT FUNCTIONS

  11. Merit function • One idea, also from multiobjective optimization: minimize a weighted combination of the two criteria: $$\phi(x) = w_1 f(x) + w_2 \Bigl( \sum_{i \in E} \lvert c_i(x) \rvert + \sum_{i \in I} \max\{-c_i(x), 0\} \Bigr), \quad w_1, w_2 > 0.$$ • But I can scale it so that the weight of the objective is 1. • In that case, the weight of the infeasibility measure is called the "penalty parameter". • I can monitor progress by ensuring that $\phi(x)$ decreases, as in unconstrained optimization.

  12. Nonsmooth Penalty Merit Functions • With that scaling, $$\phi_1(x; \mu) = f(x) + \mu \Bigl( \sum_{i \in E} \lvert c_i(x) \rvert + \sum_{i \in I} \max\{-c_i(x), 0\} \Bigr),$$ where $\mu$ is the penalty parameter. • It is called the $\ell_1$ merit function. • Sometimes, such functions can even be EXACT.
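A direct transcription of the ℓ1 merit function as code, reusing the infeasibility measure from earlier.

```python
import numpy as np

def phi_l1(f_val, c_E, c_I, mu):
    """l1 merit function: phi_1(x; mu) = f(x) + mu * h(x)."""
    h = np.sum(np.abs(c_E)) + np.sum(np.maximum(-np.asarray(c_I), 0.0))
    return f_val + mu * h

print(phi_l1(f_val=1.0, c_E=[0.3], c_I=[-0.2], mu=10.0))  # 1.0 + 10 * 0.5 = 6.0
```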

  13. Smooth and Exact Penalty Functions • Excellent convergence properties, but very expensive to compute. • Fletcher's augmented Lagrangian (equality case): $$\phi_F(x; \mu) = f(x) - \lambda(x)^T c(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2, \qquad \lambda(x) = \bigl( A(x) A(x)^T \bigr)^{-1} A(x) \nabla f(x),$$ where $A(x)$ is the constraint Jacobian. • It is both smooth and exact, but perhaps impractical due to the linear solve needed for $\lambda(x)$.
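The "linear solve" the slide refers to is the least-squares multiplier estimate inside Fletcher's function; a sketch for the equality-constrained case, assuming A has full row rank and with toy data as an assumption.

```python
import numpy as np

# Least-squares multiplier estimate: lambda(x) = (A A^T)^{-1} A grad_f(x).
# This linear solve is needed at EVERY evaluation of phi_F, which is what
# makes Fletcher's function expensive.
def fletcher_phi(f_val, c_val, A, grad_f, mu):
    lam = np.linalg.solve(A @ A.T, A @ grad_f)  # the linear solve
    return f_val - lam @ c_val + 0.5 * mu * np.sum(c_val**2)

A = np.array([[2.0, 2.0]])  # toy Jacobian (one constraint, two variables)
print(fletcher_phi(1.0, np.array([0.1]), A, np.array([1.0, 1.0]), mu=10.0))
```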

  14. Augmented Lagrangian • Smooth, but inexact: $$\phi(x) = f(x) - \sum_{i \in E} \lambda_i c_i(x) + \frac{\mu}{2} \sum_{i \in E} c_i(x)^2.$$ • An update of the Lagrange multiplier is needed. • We will not use it, except with augmented Lagrangian methods themselves.

  15. Line-search (Armijo) for Nonsmooth Merit Functions • How do we carry out the "progress search", that is, the line search or the sufficient-reduction test in a trust region? • In the unconstrained case, we had $$f(x_k + \alpha^m d_k) \le f(x_k) + \sigma \alpha^m \nabla f(x_k)^T d_k; \quad 0 < \alpha < 1,\ 0 < \sigma < 0.5.$$ • But we cannot use this anymore, since the merit function is not differentiable.

  16. Directional Derivatives of Nonsmooth Merit Function • Nevertheless, the merit function has a directional derivative (this follows from properties of the max function; EXPAND): $$D\bigl(\phi(x, \mu); p\bigr) = \lim_{t \to 0^+} \frac{\phi(x + tp, \mu) - \phi(x, \mu)}{t}, \qquad D\bigl(\max\{f_1, f_2\}; p\bigr) = \max\{ D f_1(p), D f_2(p) \} \text{ where } f_1 = f_2.$$ • Line Search: accept $\alpha^m$ when $\phi(x_k + \alpha^m p_k, \mu) \le \phi(x_k, \mu) + \eta\, \alpha^m D\bigl(\phi(x_k, \mu); p_k\bigr)$. • Trust Region: accept $p_k$ when $\phi(x_k, \mu) - \phi(x_k + p_k, \mu) \ge \eta_1 \bigl( m(0) - m(p_k) \bigr)$, with $0 < \eta_1 < 0.5$.
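A backtracking search implementing the nonsmooth Armijo test above; the merit function phi and the directional-derivative value are supplied by the caller, and the constants are illustrative assumptions.

```python
import numpy as np

def armijo_nonsmooth(phi, D_phi_p, x, p, alpha=0.5, eta=1e-4, max_backtracks=50):
    """Find a step t with phi(x + t p) <= phi(x) + eta * t * D(phi(x); p).
    D_phi_p is the directional derivative (negative for a descent direction)."""
    phi0 = phi(x)
    t = 1.0
    for _ in range(max_backtracks):
        if phi(x + t * p) <= phi0 + eta * t * D_phi_p:  # sufficient decrease
            return t
        t *= alpha  # backtrack
    return t

# Usage on a nonsmooth toy merit: phi(x) = |x|, descent direction p = -1 at x = 1
phi = lambda x: np.abs(x).sum()
t = armijo_nonsmooth(phi, D_phi_p=-1.0, x=np.array([1.0]), p=np.array([-1.0]))
print(t)  # 1.0 is accepted: phi drops from 1 to 0
```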

  17. And… how do I choose the penalty parameter? • A VERY tricky issue, highly dependent on the penalty function used. • For the $\ell_1$ function, the guideline is $\mu > \lVert \lambda^* \rVert_\infty$, the largest multiplier in absolute value. • But it is almost always chosen adaptively. Criterion: if optimality gets ahead of feasibility, make the penalty parameter more stringent. • E.g. for the $\ell_1$ function: take the max of the current values of the multipliers plus a safety factor (EXPAND).
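A sketch of that adaptive guideline; the safety factor of 1.5 and the monotone (increase-only) policy are illustrative assumptions.

```python
import numpy as np

def update_penalty(mu, lam, safety=1.5):
    """Keep mu above ||lambda||_inf (the l1 exactness guideline), never decreasing it."""
    return max(mu, safety * np.max(np.abs(lam)))

print(update_penalty(1.0, [-0.5, 2.0]))  # 3.0
```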

  18. 10.2.2 FILTER APPROACHES

  19. Principles of filters • Originates in the multiobjective optimization philosophy: the objective and the infeasibility are treated as two separate criteria. • The problem becomes: reduce both $f(x)$ and the infeasibility measure $h(x) = \sum_{i \in E} \lvert c_i(x) \rvert + \sum_{i \in I} \max\{-c_i(x), 0\}$.

  20. The Filter approach • A pair $(f_k, h_k)$ dominates $(f_l, h_l)$ if $f_k \le f_l$ and $h_k \le h_l$. • The filter is the set of pairs generated so far, none of which dominates another. • A trial point is accepted only if its pair $(f, h)$ is not dominated by any pair in the filter.

  21. Some Refinements • As in the line search approach, I cannot accept EVERY decrease, since then I may never converge. • Modification: require a sufficient-decrease margin against every filter entry, e.g. accept only if $h(x) \le (1 - \gamma) h_j$ or $f(x) \le f_j - \gamma h_j$ for all $(f_j, h_j)$ in the filter, with a small $\gamma \approx 10^{-5}$.
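A sketch of the filter acceptance test with the margin above; the exact margin form varies across references, so this variant and gamma = 1e-5 are assumptions consistent with the slide.

```python
def filter_acceptable(f_new, h_new, filter_pairs, gamma=1e-5):
    """Accept (f_new, h_new) only if, against EVERY stored pair (f_j, h_j),
    it improves the objective or the infeasibility by a small margin."""
    return all(f_new <= f_j - gamma * h_new or h_new <= (1.0 - gamma) * h_j
               for f_j, h_j in filter_pairs)

filt = [(1.0, 0.5), (0.8, 0.9)]
print(filter_acceptable(0.7, 0.6, filt))  # True: better f than both entries
```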

  22. 10.3 MARATOS EFFECT AND CURVILINEAR SEARCH

  23. Unfortunately, the Newton step may not be compatible with the penalty • This is called the Maratos effect. • Problem (the standard illustration): $\min f(x_1, x_2) = 2(x_1^2 + x_2^2 - 1) - x_1$ subject to $x_1^2 + x_2^2 - 1 = 0$, with solution $x^* = (1, 0)$. • Note: even the point closest to the solution along the (Newton) search direction will be rejected! • So fast convergence does not occur.

  24. Solutions? • Use Fletcher's function, which does not suffer from this problem. • Or, following a step $p_k$, use a correction $\hat p_k$ that satisfies $c(x_k + p_k + \hat p_k) = O\bigl( \lVert x_k - x^* \rVert^3 \bigr)$. • Followed by the update $x_k + p_k + \hat p_k$, or the curvilinear line search $x_k + \alpha p_k + \alpha^2 \hat p_k$. • Since $c(x_k + p_k + \hat p_k) = O(\lVert x_k - x^* \rVert^3)$, compared to $c(x_k + p_k) = O(\lVert x_k - x^* \rVert^2)$, the corrected Newton step is likelier to be accepted.
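A sketch of one common form of the second-order correction for the equality-constrained case: the minimum-norm step that cancels the constraint value re-evaluated at x_k + p_k. It assumes the Jacobian A has full row rank; the toy data is illustrative.

```python
import numpy as np

# Second-order correction: p_hat = -A^T (A A^T)^{-1} c(x_k + p_k),
# i.e. the minimum-norm solution of A p_hat = -c(x_k + p_k).
def second_order_correction(A, c_at_step):
    return -A.T @ np.linalg.solve(A @ A.T, c_at_step)

# Toy usage: Jacobian of x1^2 + x2^2 - 1 at (1, 0), constraint value at x_k + p_k
A = np.array([[2.0, 0.0]])
p_hat = second_order_correction(A, np.array([0.04]))
print(p_hat)  # [-0.02, 0]: pulls the trial point back toward the constraint
```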
