  1. Augmented Lagrangians and Decomposition in Convex and Nonconvex Programming
     Terry Rockafellar, University of Washington, Seattle
     DIMACS Workshop on ADMM and Proximal Splitting Methods in Optimization
     Rutgers University, New Brunswick, NJ, June 11-13, 2018

  2. Extended Optimization Model for Decomposition

Ingredients:
- mappings $F_j : \mathbb{R}^{n_j} \to \mathbb{R}^m$, just $\mathcal{C}^1$ or $\mathcal{C}^2$, for $j = 1, \dots, q$
- functions $f_j : \mathbb{R}^{n_j} \to (-\infty, \infty]$, just lsc, for $j = 1, \dots, q$
- a function $g : \mathbb{R}^m \to (-\infty, \infty]$, lsc, convex, positively homogeneous
- a subspace $S \subset \mathbb{R}^n = \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_q}$, with complement $S^\perp$

Problem:
$$\text{minimize } \sum_{j=1}^q f_j(x_j) + g\Big(\sum_{j=1}^q F_j(x_j)\Big) \text{ over } (x_1, \dots, x_q) \in S$$

Specializations of the coupling term: $g(u) = \delta_K(u)$ for a closed convex cone $K$, or $g(u) = \|u\|_p$.

Specializations of the coupling space: $S = \{(x_1, \dots, x_q) \mid x_1 = \cdots = x_q\}$ for the splitting version, or $S$ taken to be all of $\mathbb{R}^n$ (thereby "dropping out"), $S^\perp = \{0\}$.

Convex case: $f_j$ convex and $F_j = A_j$ affine is just one possibility.
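For orientation (my addition, combining the specializations already listed on the slide): with $q = 1$, $S = \mathbb{R}^n$ (so the subspace drops out), and $g = \delta_K$ for the cone $K = \mathbb{R}^s_- \times \{0\}^{m-s}$, the model collapses to classical nonlinear programming:

$$\min_x\, f(x) + \delta_K(F(x)) \quad\Longleftrightarrow\quad \min_x\, f(x) \ \text{ s.t. } F_i(x) \le 0 \text{ for } i \le s, \quad F_i(x) = 0 \text{ for } i > s.$$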

  3. Reformulation With Enhanced Separability

Expansion Lemma:
$$g\Big(\sum_{j=1}^q F_j(x_j)\Big) \le \alpha \iff \exists\, u_j \in \mathbb{R}^m \text{ for } j = 1, \dots, q \text{ such that } \sum_{j=1}^q u_j = 0 \text{ and } \sum_{j=1}^q g\big(F_j(x_j) + u_j\big) \le \alpha$$

Extended coupling space: now in $\mathbb{R}^n \times [\mathbb{R}^m]^q$:
$$\bar S = \Big\{ (x_1, \dots, x_q, u_1, \dots, u_q) \,\Big|\, (x_1, \dots, x_q) \in S,\ \sum_{j=1}^q u_j = 0 \Big\}$$
$$\bar S^\perp = \big\{ (v_1, \dots, v_q, y_1, \dots, y_q) \,\big|\, (v_1, \dots, v_q) \in S^\perp,\ y_1 = \cdots = y_q \big\}$$

Expanded problem (equivalent):
$$\min \sum_{j=1}^q \big[ f_j(x_j) + g\big(F_j(x_j) + u_j\big) \big] \text{ over } (x_1, \dots, x_q, u_1, \dots, u_q) \in \bar S$$

→ separability achieved in the objective: $\varphi(x_1, \dots, x_q, u_1, \dots, u_q) = \varphi_1(x_1, u_1) + \cdots + \varphi_q(x_q, u_q)$
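A minimal numerical sanity check of the Expansion Lemma (my construction, not on the slide), for $g$ = Euclidean norm. Convexity plus positive homogeneity give $\sum_j g(v_j + u_j) \ge g(\sum_j v_j)$ for every admissible $u$, and the particular choice $u_j = \frac{1}{q}\sum_i v_i - v_j$ attains equality:

```python
import numpy as np

# Sanity check of the Expansion Lemma for g = Euclidean norm (convex,
# positively homogeneous).  With v_j standing in for F_j(x_j), the choice
# u_j = (1/q) * sum_i v_i - v_j satisfies sum_j u_j = 0 and attains
#   sum_j g(v_j + u_j) = q * g(mean(v)) = g(sum_j v_j),
# the smallest value any admissible u can give.
rng = np.random.default_rng(0)
q, m = 5, 3
g = np.linalg.norm

v = rng.normal(size=(q, m))          # v_j = F_j(x_j), j = 1..q
u = v.mean(axis=0) - v               # rows of u sum to zero
assert np.allclose(u.sum(axis=0), 0.0)

lhs = g(v.sum(axis=0))               # g(sum_j v_j)
rhs = sum(g(v[j] + u[j]) for j in range(q))
print(lhs, rhs)                      # equal up to rounding

# Random admissible u's never beat the left-hand side (triangle inequality):
for _ in range(1000):
    w = rng.normal(size=(q, m))
    w -= w.mean(axis=0)              # project onto { sum_j u_j = 0 }
    assert sum(g(v[j] + w[j]) for j in range(q)) >= lhs - 1e-12
```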

  4. Linkage Problems and Subgradients

Optimization goal: minimize some lsc function $\varphi$ on a subspace $S$.

First-order condition for local optimality:
$$\exists\, \bar w \in S \text{ and } \bar z \in S^\perp \text{ such that } \bar z \in \partial\varphi(\bar w)$$
smooth case: $\bar z = \nabla\varphi(\bar w) \perp S$

Linkage problem — in terms of the subgradient mapping $\partial\varphi$:
given $\varphi$ and $S$, find such a pair $(\bar w, \bar z) \in [\operatorname{gph} \partial\varphi] \cap [S \times S^\perp]$

Regular subgradients, notation $\bar z \in \hat\partial\varphi(\bar w)$:
$$\varphi(w) \ge \varphi(\bar w) + \bar z \cdot (w - \bar w) + o(\|w - \bar w\|)$$

General subgradients, notation $\bar z \in \partial\varphi(\bar w)$:
$$\exists\, z^\nu \to \bar z \text{ with } z^\nu \in \hat\partial\varphi(w^\nu),\ w^\nu \to \bar w,\ \varphi(w^\nu) \to \varphi(\bar w)$$
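A standard one-dimensional example (my addition) separating the two notions: for $\varphi(w) = -|w|$ at $\bar w = 0$,

$$\hat\partial\varphi(0) = \emptyset, \qquad \partial\varphi(0) = \{-1, +1\},$$

since no $z$ satisfies $-|w| \ge z\,w + o(|w|)$, while points $w^\nu \downarrow 0$ carry $z^\nu = -1$ and points $w^\nu \uparrow 0$ carry $z^\nu = +1$. By contrast, for the convex function $\varphi(w) = |w|$ the two notions coincide: $\hat\partial\varphi(0) = \partial\varphi(0) = [-1, 1]$.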

  5. Local Elicitation of Convexity/Monotonicity

Key observation:
minimizing $\varphi$ over $S$ ⟷ minimizing $\varphi_e = \varphi + \frac{e}{2} d_S^2$ over $S$,
where $d_S(w)$ = distance of $w$ from $S$, and $e$ = elicitation parameter.

Second-order variational sufficient condition, for $\bar z \in \partial\varphi(\bar w)$:
$\exists\, e \ge 0$, $\varepsilon > 0$, an open convex nbhd $W \times Z$ of $(\bar w, \bar z)$, and an lsc convex $\psi$ on $W$ such that $\operatorname{gph} \partial\psi$ coincides in $W \times Z$ with
$$\operatorname{gph} T_{e,\varepsilon} = \big\{ (w, z) \in \operatorname{gph} \partial\varphi_e \,\big|\, \varphi_e(w) \le \varphi_e(\bar w) + \varepsilon \big\}$$
and, on that common set, furthermore $\psi(w) = \varphi_e(w)$.

Example: $\varphi \in \mathcal{C}^2$, $\bar z = \nabla\varphi(\bar w)$, $\nabla^2\varphi(\bar w)$ positive definite relative to $S$.

Criterion for local max monotonicity:
This condition ⟹ $T_{e,\varepsilon}$ is max monotone around $(\bar w, \bar z)$, and it is ⟺ when $\bar z$ is a regular subgradient of $\varphi$ at $\bar w$.
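A concrete instance of elicitation (my addition, in the pattern of the $\mathcal{C}^2$ example on the slide). Since $d_S$ vanishes on $S$, $\varphi_e$ agrees with $\varphi$ there, but the added term can create convexity off $S$. Take

$$\varphi(w_1, w_2) = w_1^2 - w_2^2, \qquad S = \{(w_1, w_2) \mid w_2 = 0\}, \qquad \bar w = (0,0),\ \bar z = (0,0).$$

Here $\varphi$ is nonconvex, yet $\nabla^2\varphi(\bar w) = \operatorname{diag}(2, -2)$ is positive definite relative to $S$. Since $d_S(w) = |w_2|$,

$$\varphi_e(w) = w_1^2 + \Big(\frac{e}{2} - 1\Big) w_2^2,$$

which is convex once $e \ge 2$, while the minimization over $S$ is unchanged: convexity has been elicited.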

  6. Progressive Decoupling of Linkages

For finding a local minimizer $\bar w$ of $\varphi$ on the subspace $S \subset \mathbb{R}^N$.

Algorithm with parameters $r > e \ge 0$:
In iteration $\nu$, having $w^\nu \in S$ and $z^\nu \in S^\perp$, find
$$\hat w^\nu = (\text{local}) \operatorname*{argmin}_{w \in \mathbb{R}^N} \Big\{ \varphi(w) - z^\nu \cdot w + \frac{r}{2}\|w - w^\nu\|^2 \Big\}$$
and update by
$$w^{\nu+1} = P_S(\hat w^\nu), \qquad z^{\nu+1} = z^\nu - (r - e)\,[\hat w^\nu - w^{\nu+1}]$$
→ builds on Spingarn's method [1] and Pennanen's PPA localization [2]

Convergence Theorem (for $e$ high enough):
Let $(\tilde w, \tilde z)$ satisfy the first-order condition and the second-order variational condition. Then $\exists$ a nbhd $W \times Z$ of $(\tilde w, \tilde z)$ such that if $(w^0, z^0) \in W \times Z$ then $(w^\nu, z^\nu) \in W \times Z$ with $\hat w^\nu$ = unique local minimizer on $W$, and $(w^\nu, z^\nu) \to$ some $(\bar w, \bar z)$ satisfying the first-order condition, such that $\bar w$ = local argmin of $\varphi$ on $W \cap S$.
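A minimal sketch of the iteration (my construction): for a smooth, strongly convex quadratic $\varphi$, the proximal subproblem has a closed-form solution, so the whole scheme reduces to linear algebra. The quadratic data and the subspace below are invented for illustration, and $e = 0$ suffices since $\varphi$ is already convex:

```python
import numpy as np

# Progressive decoupling for phi(w) = 0.5 w·Qw - c·w minimized over a
# subspace S.  The prox step solves (Q + rI) w = c + z + r w_nu exactly.
rng = np.random.default_rng(1)
N = 6
A = rng.normal(size=(N, N))
Q = A @ A.T + np.eye(N)                        # symmetric positive definite
c = rng.normal(size=N)

B = np.linalg.qr(rng.normal(size=(N, 3)))[0]   # orthonormal basis of S
P_S = B @ B.T                                  # projection onto S

r, e = 5.0, 0.0                                # parameters with r > e >= 0
w = P_S @ rng.normal(size=N)                   # w^0 in S
z = np.zeros(N)                                # z^0 in S-perp (trivially)

for nu in range(200):
    # w_hat = argmin_w { phi(w) - z·w + (r/2)||w - w^nu||^2 }
    w_hat = np.linalg.solve(Q + r * np.eye(N), c + z + r * w)
    w_next = P_S @ w_hat
    z = z - (r - e) * (w_hat - w_next)         # stays in S-perp
    w = w_next

# Compare with the exact minimizer of phi on S: w = B t, (B'QB) t = B'c.
t = np.linalg.solve(B.T @ Q @ B, B.T @ c)
print(np.linalg.norm(w - B @ t))               # ~ 0
```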

  7. Application to the Expanded Decomposition Model

$$\min \sum_{j=1}^q \big[ f_j(x_j) + g\big(F_j(x_j) + u_j\big) \big] \text{ over } (x_1, \dots, x_q, u_1, \dots, u_q) \in \bar S$$
$$\bar S = \Big\{ (x_1, \dots, x_q, u_1, \dots, u_q) \,\Big|\, (x_1, \dots, x_q) \in S,\ \sum_{j=1}^q u_j = 0 \Big\}$$
$$\bar S^\perp = \big\{ (v_1, \dots, v_q, y_1, \dots, y_q) \,\big|\, (v_1, \dots, v_q) \in S^\perp,\ y_1 = \cdots = y_q \big\}$$

Algorithm (with parameters $r > e \ge 0$):
Having $(x_1^\nu, \dots, x_q^\nu, u_1^\nu, \dots, u_q^\nu) \in \bar S$ and $(v_1^\nu, \dots, v_q^\nu, y^\nu, \dots, y^\nu) \in \bar S^\perp$, determine $(\hat x_j^\nu, \hat u_j^\nu)$ as the local minimizer of
$$\varphi_j^\nu(x_j, u_j) = f_j(x_j) + g\big(F_j(x_j) + u_j\big) - v_j^\nu \cdot x_j - y^\nu \cdot u_j + \frac{r}{2}\|x_j - x_j^\nu\|^2 + \frac{r}{2}\|u_j - u_j^\nu\|^2$$
Then let $\bar u^\nu = \frac{1}{q} \sum_{j=1}^q \hat u_j^\nu$ and update by
$$(x_1^{\nu+1}, \dots, x_q^{\nu+1}) = P_S(\hat x_1^\nu, \dots, \hat x_q^\nu), \qquad u_j^{\nu+1} = \hat u_j^\nu - \bar u^\nu,$$
$$v_j^{\nu+1} = v_j^\nu - (r - e)\,[\hat x_j^\nu - x_j^{\nu+1}], \qquad y^{\nu+1} = y^\nu - (r - e)\,\bar u^\nu$$

Convergence: local, under local optimality conditions as above.
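A toy run of this decomposition iteration (my construction; all problem data invented): $q$ agents coupled through the consensus subspace $S = \{x_1 = \cdots = x_q\}$, with quadratic $f_j$, affine $F_j$, and $g = \|\cdot\|_2$. The subproblems are genuinely decoupled across $j$; here each is solved numerically with SciPy's Nelder-Mead, so the result is only a rough approximation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
q, n, m = 3, 2, 2
A = rng.normal(size=(q, m, n)); b = rng.normal(size=(q, m))
a = rng.normal(size=(q, n))
r, e = 4.0, 0.0                                   # convex case: e = 0

f = lambda j, xj: 0.5 * np.sum((xj - a[j])**2)    # f_j quadratic
F = lambda j, xj: A[j] @ xj - b[j]                # F_j affine
g = np.linalg.norm                                # g = ||.||_2

x = np.zeros((q, n)); u = np.zeros((q, m))        # (x, u) in S-bar
v = np.zeros((q, n)); y = np.zeros(m)             # (v, y) in S-bar-perp

for nu in range(200):
    x_hat, u_hat = np.empty_like(x), np.empty_like(u)
    for j in range(q):                            # decoupled subproblems
        def phi_j(p, j=j):
            xj, uj = p[:n], p[n:]
            return (f(j, xj) + g(F(j, xj) + uj) - v[j] @ xj - y @ uj
                    + 0.5 * r * np.sum((xj - x[j])**2)
                    + 0.5 * r * np.sum((uj - u[j])**2))
        p = minimize(phi_j, np.concatenate([x[j], u[j]]),
                     method='Nelder-Mead').x      # warm-started
        x_hat[j], u_hat[j] = p[:n], p[n:]
    u_bar = u_hat.mean(axis=0)
    x_new = np.tile(x_hat.mean(axis=0), (q, 1))   # P_S = consensus average
    v -= (r - e) * (x_hat - x_new)                # rows keep summing to 0
    y -= (r - e) * u_bar
    x, u = x_new, u_hat - u_bar                   # u rows sum to 0 again

# x[0] approximates argmin_x  sum_j f_j(x) + g(sum_j F_j(x))
print(x[0])
```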

  8. Implementation with Augmented Lagrangians

Auxiliary problems: $\min\, f_j(x_j) + g(F_j^\nu(x_j))$, where $F_j^\nu(x_j) = F_j(x_j) + u_j^\nu$.

$g$ is lsc, convex, positively homogeneous, so $g^* = \delta_Y$ for some $Y \subset \mathbb{R}^m$.
Note: $g = \delta_K$ for a cone $K$ yields $Y$ = the polar cone $K^*$.
Examples: $g = \|\cdot\|_p$ yields $Y$ = the unit ball for the dual norm $\|\cdot\|_q$ ($\frac{1}{p} + \frac{1}{q} = 1$).

Associated Lagrangians:
$$L_j^\nu(x_j, y_j) = f_j(x_j) + y_j \cdot F_j^\nu(x_j) - \delta_Y(y_j)$$

Augmented Lagrangians (with parameter $r > 0$):
$$L_{j,r}^\nu(x_j, y_j) = f_j(x_j) + y_j \cdot F_j^\nu(x_j) + \frac{r}{2}\|F_j^\nu(x_j)\|^2 - \frac{1}{2r}\, d_Y^2\big(y_j + r F_j^\nu(x_j)\big)$$

Algorithm in condensed form:
From $(x_1^\nu, \dots, x_q^\nu) \in S$, $(v_1^\nu, \dots, v_q^\nu) \in S^\perp$, $u_1^\nu + \cdots + u_q^\nu = 0$, and $y^\nu$, get
$$\hat x_j^\nu = (\text{local}) \operatorname*{argmin}_{x_j} \Big\{ L_{j,r}^\nu(x_j, y^\nu) - v_j^\nu \cdot x_j + \frac{r}{2}\|x_j - x_j^\nu\|^2 \Big\}$$
then take $\hat u_j^\nu = \nabla_y L_{j,r}^\nu(x_j^{\nu+1}, y^\nu)$ and update just as before.

Augmented Lagrangians can furthermore help with elicitation.
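A small numeric check of the augmented-Lagrangian formula (my construction), for $g = \|\cdot\|_2$, so that $Y$ is the Euclidean unit ball and $d_Y(w) = \max(\|w\| - 1, 0)$. The closed form above should agree with the dual description $f(x) + \sup_{z \in Y}\{ z \cdot F(x) - \frac{1}{2r}\|z - y\|^2 \}$, whose maximizer is the projection $P_Y(y + rF(x))$; the values of $f$ and $F$ below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
m, r = 3, 2.0
Fx = rng.normal(size=m)                    # F(x) at some fixed x
y = rng.normal(size=m)
fx = 1.7                                   # f(x), an arbitrary value

# Closed form: L_r = f + y·F + (r/2)||F||^2 - (1/2r) d_Y(y + rF)^2
d_Y = lambda w: max(np.linalg.norm(w) - 1.0, 0.0)
L = fx + y @ Fx + 0.5 * r * Fx @ Fx - d_Y(y + r * Fx)**2 / (2 * r)

# Dual form: sup over z in Y of z·F - (1/2r)||z - y||^2, attained at
# z = P_Y(y + rF), the projection of y + rF onto the unit ball.
w = y + r * Fx
z = w if np.linalg.norm(w) <= 1 else w / np.linalg.norm(w)
L_dual = fx + z @ Fx - (z - y) @ (z - y) / (2 * r)

print(L, L_dual)                           # agree up to rounding
```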

  9. References

[1] J.E. Spingarn (1985), "Applications of the method of partial inverses to convex programming: decomposition," Mathematical Programming 32, 199–221.
[2] T. Pennanen (2002), "Local convergence of the proximal point algorithm and multiplier methods without monotonicity," Mathematics of Operations Research 27, 170–191.
[3] R.T. Rockafellar (2017), "Progressive decoupling of linkages in monotone variational inequalities and convex optimization."
[4] R.T. Rockafellar (2018), "Variational convexity and local monotonicity of subgradient mappings."
[5] R.T. Rockafellar (2018), "Progressive decoupling of linkages in optimization and variational inequalities with elicitable convexity or monotonicity." (nearly finished)

Website: www.math.washington.edu/~rtr/mypage.html
