SLIDE 1

MATH529 – Fundamentals of Optimization
Fundamentals of Constrained Optimization VIII: Algorithms

Marco A. Montes de Oca

Mathematical Sciences, University of Delaware, USA

1 / 24

SLIDE 2

Algorithms for Nonlinear Constrained Optimization

One basic idea: Use Lagrangian-like functions as proxies (or analytical tools) for dealing with a constrained problem.

SLIDE 3

Algorithms for Nonlinear Constrained Optimization

In this course:

- Penalty Methods
- Interior-Point Methods

SLIDE 4

Penalty Methods for Nonlinear Constrained Optimization

Idea: Have a mechanism that generates solutions using information about their quality. (Favoring better quality solutions.)

SLIDE 5

Penalty Methods for Nonlinear Constrained Optimization

Idea: Have a mechanism that generates solutions using information about their quality. (Favoring better quality solutions.) In the process of determining the quality of those solutions, penalize the infeasible ones by reducing their quality in proportion to the degree to which they violate the constraints.

SLIDE 6

Penalty Methods for Nonlinear Constrained Optimization

Example: Suppose the constraint is c_1(x) = x_1 − x_2 = 3. For u = (2, −0.5)^T we get c_1(u) = 2.5, and for v = (−1, 0)^T we get c_1(v) = −1. Since 2.5 is closer to 3 than −1 is, u should receive a better score than v.

SLIDE 7

Penalty Methods for Nonlinear Constrained Optimization

A common way to implement these ideas in order to deal with equality constraints is to define a proxy for the objective function as follows:

Q(x, µ) = f(x) + (µ/2) Σ_{i∈E} c_i(x)²

where µ > 0 is called the penalty parameter.

SLIDE 8

Penalty Methods for Nonlinear Constrained Optimization

Effects of the penalty parameter:

SLIDE 9

Penalty Methods for Nonlinear Constrained Optimization

Minimize xy subject to x² + y² = 1:

SLIDE 10

Penalty Methods for Nonlinear Constrained Optimization

Q(x, y, 1) = xy + (1/2)(x² + y² − 1)²:

(x⋆, y⋆) = (−0.8660, 0.8660) or (x⋆, y⋆) = (0.8660, −0.8660).
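As a sanity check, the penalized problem can be minimized numerically. A minimal Python sketch (not part of the original slides; plain gradient descent with an ad-hoc step size and iteration budget):

```python
def grad_Q(x, y, mu):
    """Gradient of Q(x, y, mu) = x*y + (mu/2) * (x^2 + y^2 - 1)^2."""
    p = x * x + y * y - 1.0
    return y + 2.0 * mu * x * p, x + 2.0 * mu * y * p

def minimize_Q(mu, x=-1.0, y=1.0, step=1e-3, iters=200000):
    """Plain gradient descent; adequate for this small 2-D example."""
    for _ in range(iters):
        gx, gy = grad_Q(x, y, mu)
        x, y = x - step * gx, y - step * gy
    return x, y

x1, y1 = minimize_Q(1.0)     # -> approx (-0.8660, 0.8660)
x40, y40 = minimize_Q(40.0)  # -> approx (-0.7115, 0.7115)
```

Starting from (−1, 1), gradient descent lands on the minimizer in the second quadrant; starting from (1, −1) would give its mirror image.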

SLIDE 11

Penalty Methods for Nonlinear Constrained Optimization

Q(x, y, 40) = xy + 20(x² + y² − 1)²: (x⋆, y⋆) = (−0.7115, 0.7115) or (x⋆, y⋆) = (0.7115, −0.7115). Note that as µ grows, the penalized minimizers approach the true constrained minimizers (∓1/√2, ±1/√2) ≈ (∓0.7071, ±0.7071).

SLIDE 12

Penalty Methods for Nonlinear Constrained Optimization

Penalty Method:

Initialize µ_0 > 0 and x_0.
for k = 0, 1, ..., K
    Use your favorite unconstrained algorithm to find an approximate minimizer of Q(x, µ_k), starting from x_k; call it m_k.
    If m_k is good enough, break and return m_k as the solution.
    Else choose a new µ_{k+1} > µ_k and set x_{k+1} = m_k.
endfor
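The loop above can be sketched in Python (an illustrative implementation, not the course's MATLAB demo; gradient descent as the inner solver and the schedule µ_{k+1} = 10 µ_k are arbitrary choices), reusing the earlier example minimize xy subject to x² + y² = 1:

```python
def penalty_loop(grad_f, c, grad_c, x0, mu0=1.0, rho=10.0, n_outer=4):
    """Quadratic-penalty loop: repeatedly minimize Q(x, mu_k) and grow mu_k."""
    x, mu = list(x0), mu0
    for _ in range(n_outer):
        # Inner solver: gradient descent on Q(x, mu) = f(x) + (mu/2) c(x)^2,
        # warm-started at the previous minimizer m_k.
        for _ in range(50000):
            cx = c(x)
            g = [gf + mu * cx * gc for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - 1e-4 * gi for xi, gi in zip(x, g)]
        if abs(c(x)) < 1e-6:   # "good enough" here means nearly feasible
            break
        mu *= rho              # choose mu_{k+1} > mu_k
    return x

# Example: minimize xy subject to x^2 + y^2 = 1.
grad_f = lambda x: [x[1], x[0]]
c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_c = lambda x: [2.0 * x[0], 2.0 * x[1]]

sol = penalty_loop(grad_f, c, grad_c, [-1.0, 1.0])
# sol approaches (-1/sqrt(2), 1/sqrt(2)) ~ (-0.7071, 0.7071) as mu grows
```

Each outer iteration tightens feasibility: the constraint violation at the inner minimizer shrinks roughly like 1/µ_k.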

SLIDE 13

Penalty Methods for Nonlinear Constrained Optimization

MATLAB Example

SLIDE 14

Penalty Methods for Nonlinear Constrained Optimization

Issues:

- Divergence.
- The Hessian of Q becomes ill-conditioned for large values of µ_k.
- It is harder to deal with inequality constraints, because in that case

  Q(x, µ) = f(x) + (µ/2) Σ_{i∈E} c_i(x)² + (µ/2) Σ_{i∈I} (min{c_i(x), 0})²

  and therefore Q is no longer twice continuously differentiable.
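The loss of smoothness is easy to see numerically on a single inequality term h(c) = (min{c, 0})²: its first derivative is continuous at c = 0, but its second derivative jumps from 2 to 0 there. A small finite-difference check (illustrative, not from the slides):

```python
def h(c):
    """One inequality term of the penalty: (min{c, 0})^2."""
    return min(c, 0.0) ** 2

eps = 1e-4
# One-sided first derivatives at c = 0 agree (both tend to 0), so h is C^1:
d1_left = (h(0.0) - h(-eps)) / eps    # -> -eps, i.e. ~0
d1_right = (h(eps) - h(0.0)) / eps    # -> 0.0
# One-sided second derivatives disagree (2 vs 0), so h is not C^2:
d2_left = (h(-2 * eps) - 2 * h(-eps) + h(0.0)) / eps ** 2   # -> 2.0
d2_right = (h(2 * eps) - 2 * h(eps) + h(0.0)) / eps ** 2    # -> 0.0
```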

SLIDE 15

Penalty Methods for Nonlinear Constrained Optimization

Augmented Lagrangian Method

SLIDE 16

Penalty Methods for Nonlinear Constrained Optimization

We saw that:

∇_x Q(x, µ_k) = ∇f(x) + µ_k Σ_{i∈E} c_i(x) ∇c_i(x)

Now, compare this equation with the gradient of the Lagrangian:

∇_x L(x, λ) = ∇f(x) − Σ_{i∈E} λ_i ∇c_i(x)

Matching the two term by term at an approximate minimizer x_k of Q, we can say that c_i(x_k) ≈ −λ_i⋆ / µ_k for all i ∈ E. This means that c_i(x_k) → 0 as µ_k → ∞, but for any finite µ_k the minimizer of Q is in general biased (slightly infeasible).

SLIDE 17

Penalty Methods for Nonlinear Constrained Optimization

A way to reduce this bias is to use the so-called augmented Lagrangian, defined as:

L_A(x, λ, µ) = f(x) − Σ_{i∈E} λ_i c_i(x) + (µ/2) Σ_{i∈E} c_i(x)²

The idea is then to use L_A(x, λ^k, µ_k) instead of Q(x, µ_k) as the proxy for the constrained problem.

SLIDE 18

Penalty Methods for Nonlinear Constrained Optimization

This works because the first-order optimality condition for L_A(x, λ^k, µ_k) at an (approximate) minimizer x_k says that ∇_x L_A(x_k, λ^k, µ_k) = 0, and therefore

∇_x L_A(x_k, λ^k, µ_k) = ∇f(x_k) − Σ_{i∈E} (λ_i^k − µ_k c_i(x_k)) ∇c_i(x_k) = 0

and so

λ_i⋆ ≈ λ_i^k − µ_k c_i(x_k) for all i ∈ E.

We can see now that c_i(x_k) ≈ −(1/µ_k)(λ_i⋆ − λ_i^k). So c_i(x_k) will be much smaller than before, provided that λ_i^k is close to λ_i⋆.

SLIDE 19

Penalty Methods for Nonlinear Constrained Optimization

A method that implements the augmented Lagrangian idea uses the update formula

λ_i^{k+1} = λ_i^k − µ_k c_i(x_k)

to obtain a better-behaved algorithm that does not require µ_k → ∞ (at least not as fast) to produce accurate solutions.
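The full scheme can be sketched in Python (an illustrative implementation, not the course's code; the fixed µ = 10 and the gradient-descent inner solver are arbitrary choices), again on the example minimize xy subject to x² + y² = 1, whose exact multiplier is λ⋆ = −1/2:

```python
def aug_lagrangian(grad_f, c, grad_c, x0, mu=10.0, n_outer=5):
    """Minimize L_A(x, lam, mu) = f(x) - lam*c(x) + (mu/2)*c(x)^2, then apply
    the multiplier update lam <- lam - mu*c(x); mu stays fixed throughout."""
    x, lam = list(x0), 0.0
    for _ in range(n_outer):
        # Inner solver: gradient descent on L_A; note
        # grad L_A = grad f + (mu*c(x) - lam) * grad c.
        for _ in range(50000):
            cx = c(x)
            g = [gf + (mu * cx - lam) * gc
                 for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - 1e-3 * gi for xi, gi in zip(x, g)]
        lam = lam - mu * c(x)   # the update formula from this slide
    return x, lam

# Example: minimize xy subject to x^2 + y^2 = 1.
grad_f = lambda x: [x[1], x[0]]
c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_c = lambda x: [2.0 * x[0], 2.0 * x[1]]

sol, lam = aug_lagrangian(grad_f, c, grad_c, [-1.0, 1.0])
# sol ~ (-0.7071, 0.7071) and lam ~ -0.5, with mu never exceeding 10
```

Unlike the pure penalty loop, the iterates become (nearly) feasible while µ stays moderate, because the multiplier estimate absorbs the bias.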

SLIDE 20

Penalty Methods for Nonlinear Constrained Optimization

It is possible to modify a problem with inequality constraints so that the augmented Lagrangian method can be used without modification: the idea is to transform c_i(x) ≥ 0 into c_i(x) − s_i = 0 together with the bound constraint s_i ≥ 0 (bound constraints are easier to deal with). Another approach is to use barrier methods:

SLIDE 21

Barrier Methods

SLIDE 22

Barrier Methods for Nonlinear Constrained Optimization

Similar to the penalty method, but now the penalty is smooth:

SLIDE 23

Barrier Methods for Nonlinear Constrained Optimization

Common barrier functions (applied to each constraint value c_i(x), which must stay strictly positive):

- 1/x (the inverse barrier)
- −ln(x) (the logarithmic barrier)
SLIDE 24

Barrier Methods for Nonlinear Constrained Optimization

Example: Minimize 2x² + 9y subject to x + y ≥ 4.
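For this example the KKT conditions give (4x, 9) = λ(1, 1) with the constraint active, so λ = 9, x⋆ = 9/4, y⋆ = 7/4. A log-barrier sketch in Python (illustrative; the µ schedule, step size, and iteration budget are ad-hoc choices) minimizes B(x, y, µ) = 2x² + 9y − µ ln(x + y − 4) for decreasing µ:

```python
def barrier_solve(x, y, mus=(1.0, 0.1, 0.01), step=5e-5, inner=120000):
    """Log-barrier method for: minimize 2x^2 + 9y subject to x + y >= 4.
    Gradient descent on B(x, y, mu) = 2x^2 + 9y - mu*ln(x + y - 4),
    driving mu toward 0; iterates must stay strictly feasible."""
    for mu in mus:
        for _ in range(inner):
            slack = x + y - 4.0           # must remain strictly positive
            gx = 4.0 * x - mu / slack     # d/dx of B
            gy = 9.0 - mu / slack         # d/dy of B
            nx, ny = x - step * gx, y - step * gy
            if nx + ny - 4.0 > 0.0:       # reject steps that leave the interior
                x, y = nx, ny
    return x, y

x, y = barrier_solve(0.0, 5.0)  # -> approx (2.25, 1.75) as mu -> 0
```

Each µ gives a strictly feasible minimizer on the "central path"; here x = 9/4 for every µ, while y = 7/4 + µ/9 approaches the boundary only as µ → 0.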
