Computational Optimization: Augmented Lagrangian (NW 17.3)
Upcoming Schedule
No class April 18. Friday, April 25: in-class presentations. Projects due unless you present April 25 (free extension until Monday for 4/25 presenters). Monday, April 28: evening class presentations, pizza provided. Tuesday, April 29: in-class presentations. Exam: Tuesday, May 6, open notes/book.
General Equality Problem

(NLP)   min f(x)   s.t.   hᵢ(x) = 0,  i ∈ E
Augmented Lagrangian
Consider min f(x) s.t. h(x) = 0. Start with the Lagrangian L(x, λ) = f(x) − λ'h(x). Add a penalty: L(x, λ, μ) = f(x) − λ'h(x) + μ/2 ‖h(x)‖². The penalty helps ensure that the point is feasible.
Lagrangian Multiplier Estimate
L(x, λ, μ) = f(x) − λ'h(x) + μ/2 ‖h(x)‖²

∇ₓL(x, λ, μ) = ∇f(x) − λ'∇h(x) + μ h(x)'∇h(x)
             = ∇f(x) − (λ − μ h(x))'∇h(x) = 0

If ∇ₓL = 0, the quantity λ − μ h(x) looks like the Lagrange multiplier! This motivates the update

λᵢ^(k+1) = λᵢ^k − μ^k hᵢ(x^k)    (17.39)
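As a sanity check, update (17.39) can be run on a small example. The quadratic problem below is my own illustration, not from the slides, and its augmented Lagrangian subproblem is solved in closed form for simplicity:

```python
import numpy as np

# Sketch of update (17.39) on a toy problem (my own example, not from the
# slides): min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier lambda* = 1.

def h(x):
    return x[0] + x[1] - 1.0

def minimize_AL(lam, mu):
    # The AL subproblem min_x f(x) - lam*h(x) + (mu/2)*h(x)^2 is a strictly
    # convex quadratic here; by symmetry its minimizer is x1 = x2 = s with
    # 2s - lam + mu*(2s - 1) = 0.
    s = (lam + mu) / (2.0 + 2.0 * mu)
    return np.array([s, s])

lam, mu = 0.0, 10.0
for k in range(5):
    x = minimize_AL(lam, mu)
    lam = lam - mu * h(x)   # multiplier update (17.39)
print(x, lam)               # approaches (0.5, 0.5) and 1
```

Even with μ fixed, each outer iteration shrinks the multiplier error by a factor of roughly 1/(1 + μ), so λ converges to λ* = 1 quickly.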
In Class Exercise
Consider

min x³   s.t.   x + 1 = 0

Find x*, λ* satisfying the KKT conditions. Write down the augmented Lagrangian L(x, λ*, μ). Plot f(x), L(x, λ*), L(x, λ*, 4), L(x, λ*, 16), L(x, λ*, 40) and compare these functions near x*.
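A hedged numeric check, assuming the exercise problem is min x³ s.t. x + 1 = 0; the values x* = −1 and λ* = 3 used below follow from the KKT conditions of that assumed problem, so verify them against your own answer:

```python
# Assumed problem: min x^3  s.t.  x + 1 = 0, so x* = -1 and, from the
# stationarity condition 3x^2 - lam = 0 at x*, lam* = 3.

def L_aug(x, lam, mu):
    return x**3 - lam * (x + 1.0) + 0.5 * mu * (x + 1.0)**2

x_star, lam_star = -1.0, 3.0

# The second derivative of L_aug in x is 6x + mu, i.e. mu - 6 at x*, so x*
# becomes a strict local minimizer of L_aug only once mu > 6: mu = 4 gives
# negative curvature, mu = 16 and mu = 40 give positive curvature.
for mu in (4.0, 16.0, 40.0):
    curvature = 6.0 * x_star + mu
    is_lower_nearby = L_aug(x_star - 0.1, lam_star, mu) < L_aug(x_star, lam_star, mu)
    print(mu, curvature, is_lower_nearby)
```

This is exactly the comparison the plots should show: for small μ the augmented Lagrangian still bends downward at x*, while for μ large enough x* is a strict local minimizer.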
Augmented Lagrangian Algorithm for equality constraints (17.3)
Given x⁰, λ⁰, μ⁰ > 0, tol > 0.
For k = 0, 1, 2, ...:
  find an approximate minimizer x^k of L(x, λ^k, μ^k) such that ‖∇ₓL(x^k, λ^k, μ^k)‖ ≤ tol
  if optimal, stop
  update the Lagrange multipliers by (17.39): λᵢ^(k+1) = λᵢ^k − μ^k hᵢ(x^k)
  choose a new penalty μ^(k+1) ≥ μ^k
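A minimal sketch of this loop, assuming a toy equality-constrained quadratic of my own; a fixed-step gradient descent stands in for the inner unconstrained solver (a real implementation would use a proper quasi-Newton method):

```python
import numpy as np

# Toy problem (my own, not from the slides):
#   min (x1-1)^2 + (x2-2)^2  s.t.  h(x) = x1 + x2 - 2 = 0
# with solution x* = (0.5, 1.5) and multiplier lambda* = -1.

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def h(x):
    return x[0] + x[1] - 2.0

grad_h = np.array([1.0, 1.0])   # constant Jacobian of the linear constraint

def grad_AL(x, lam, mu):
    # gradient of L(x, lam, mu) = f(x) - lam*h(x) + (mu/2)*h(x)^2
    return grad_f(x) - (lam - mu * h(x)) * grad_h

def solve_subproblem(x, lam, mu, tol, iters=2000):
    step = 1.0 / (2.0 + 2.0 * mu)   # safe step: largest curvature is 2 + 2*mu
    for _ in range(iters):
        g = grad_AL(x, lam, mu)
        if np.linalg.norm(g) <= tol:
            break
        x = x - step * g
    return x

x, lam, mu = np.zeros(2), 0.0, 10.0
for k in range(10):
    x = solve_subproblem(x, lam, mu, tol=1e-8)
    if abs(h(x)) < 1e-6:        # near feasible and stationary: stop
        break
    lam = lam - mu * h(x)       # multiplier update (17.39)
    mu = min(10.0 * mu, 1e6)    # choose a larger penalty
print(x, lam)
```

Note the warm start: each subproblem begins from the previous x^k, which is what makes the crude inner solver adequate even as μ grows.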
AL has Nice Properties
The penalty term can improve conditioning and convexity. The method automatically gives estimates of the Lagrange multipliers. Only a finite penalty term is needed.
Theorem 17.5
Let x* be a local solution of the equality-constrained NLP at which LICQ and the second-order sufficient conditions (SOSC) are satisfied. Then for all μ sufficiently large, x* is a strict local minimizer of L(x, λ*, μ).
Only need a finite penalty term!
Why AL works
The AL solution is close to the true solution if the penalty μ is large enough or if the multiplier λ is close enough to the true multiplier. The subproblems have a strict local minimum, so unconstrained minimization methods should work well.
Add bounds constraints
Original problem:

min f(x)   s.t.   hᵢ(x) = 0, i ∈ E,   l ≤ x ≤ u

Put only the equality constraints into the augmented Lagrangian; the bounds stay as explicit constraints of the subproblem:

min_x L(x, λ^k, μ^k) = f(x) − (λ^k)'h(x) + μ^k/2 ‖h(x)‖²   s.t.   l ≤ x ≤ u
Algorithm 17.4
For the bound-constrained case: just put the nonlinear equalities into the augmented Lagrangian subproblem and keep the bounds as is. If near feasible, update the multipliers and the penalty; else just increase the penalty.
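A sketch of this idea on a toy problem of my own, with projected gradient descent (clipping to the box) as the bound-constrained inner solver; Algorithm 17.4's actual near-feasibility test and tolerance schedules are simplified away, and this sketch updates multipliers and penalty together:

```python
import numpy as np

# Toy problem (mine): min (x1-2)^2 + x2^2  s.t.  x1 + x2 = 1,  0 <= x <= 1.
# The solution x* = (1, 0) has both bounds active.

l, u = np.zeros(2), np.ones(2)

def h(x):
    return x[0] + x[1] - 1.0

def grad_AL(x, lam, mu):
    v = lam - mu * h(x)       # current multiplier estimate
    return np.array([2.0 * (x[0] - 2.0) - v, 2.0 * x[1] - v])

def solve_subproblem(x, lam, mu, iters=2000):
    step = 1.0 / (2.0 + 2.0 * mu)   # safe step for this quadratic
    for _ in range(iters):
        # gradient step, then project onto the box by clipping
        x = np.clip(x - step * grad_AL(x, lam, mu), l, u)
    return x

x, lam, mu = np.array([0.5, 0.5]), 0.0, 10.0
for k in range(8):
    x = solve_subproblem(x, lam, mu)
    if abs(h(x)) < 1e-6:
        break                    # near feasible: done
    lam = lam - mu * h(x)        # multiplier update (17.39)
    mu *= 10.0                   # raise the penalty
print(x)                         # close to (1, 0)
```

The key point matches the slide: the nonlinear equality is absorbed into the objective, so the inner solver only ever sees simple bounds, which projection handles cheaply.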
Inequality Problems
The method of multipliers can be extended to inequality constraints gⱼ(x) ≥ 0 using a penalty parameter t:

L(x, u, t) = f(x) + 1/(2t) Σⱼ₌₁^m ( [uⱼ − t gⱼ(x)]₊² − uⱼ² )

where [y]₊ = max(y, 0). If strict complementarity holds, this function is twice differentiable.
Inequality Problems
A KKT point of the augmented Lagrangian is a KKT point of the original problem. The gradients are

∇ₓL(x, u, t) = ∇f(x) − Σⱼ₌₁^m [uⱼ − t gⱼ(x)]₊ ∇gⱼ(x)
∇_{uⱼ} L(x, u, t) = (1/t)([uⱼ − t gⱼ(x)]₊ − uⱼ),   j = 1, ..., m

so the estimate of the Lagrange multiplier is

uⱼ⁺ = [uⱼ − t gⱼ(x)]₊
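A small sketch of this inequality version of the method of multipliers, on an assumed toy problem of my own (min x² s.t. x ≥ 1, so x* = 1, u* = 2):

```python
# Assumed toy problem: min x^2  s.t.  g(x) = x - 1 >= 0.

def grad_x(x, u, t):
    # d/dx of L(x, u, t) = x^2 + (1/(2t)) * ([u - t*(x-1)]_+ ** 2 - u**2),
    # using the chain rule through the positive part [.]_+:
    return 2.0 * x - max(u - t * (x - 1.0), 0.0)

def solve_subproblem(x, u, t, iters=500):
    step = 1.0 / (2.0 + t)           # safe step: curvature is at most 2 + t
    for _ in range(iters):
        x -= step * grad_x(x, u, t)
    return x

x, u, t = 0.0, 0.0, 10.0
for k in range(12):
    x = solve_subproblem(x, u, t)
    u = max(u - t * (x - 1.0), 0.0)  # multiplier estimate u+ = [u - t*g(x)]_+
print(x, u)                          # approaches 1 and 2
```

Here the constraint is active at the solution, so the positive part stays "on" and the update drives u toward the true multiplier u* = 2 without t having to go to infinity.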
Inequality Problems
At a minimizer x̂ of the subproblem,

∇ₓL(x̂, u, t) = ∇f(x̂) − Σⱼ₌₁^m [uⱼ − t gⱼ(x̂)]₊ ∇gⱼ(x̂) = ∇f(x̂) − Σⱼ₌₁^m ûⱼ ∇gⱼ(x̂) = 0

with ûⱼ = [uⱼ − t gⱼ(x̂)]₊ ≥ 0. Checking the KKT conditions case by case:

if gⱼ(x̂) > 0, then ûⱼ = 0 for t sufficiently large;
if gⱼ(x̂) = 0, then ûⱼ ≥ 0, and ûⱼ gⱼ(x̂) = 0;
if gⱼ(x̂) < 0, then for t sufficiently large we get a contradiction.

So x̂ is feasible and (x̂, û) with ûⱼ = [uⱼ − t gⱼ(x̂)]₊ satisfies the KKT conditions of the original problem.
NLP Family of Algorithms
Basic Method: Sequential Quadratic Programming, Sequential Linear Programming, Augmented Lagrangian, Projection, Reduced Gradient
Directions: Steepest Descent, Newton, Quasi-Newton, Conjugate Gradient
Space: Direct, Null, Range
Constraints: Active Set, Barrier, Penalty
Step Size: Line Search, Trust Region
Hybrid Approaches
A method can be any combination of these algorithms. MINOS: for linear programs it utilizes a simplex method. The generalization of this to