
Projected primal-dual splitting for solving constrained convex optimization (PowerPoint PPT presentation)



  1. Projected primal-dual splitting for solving constrained convex optimization [1]
     L. M. Briceño-Arias, Universidad Técnica Federico Santa María
     LAWOC 2018, Quito, 5 September 2018
     [1] Joint work with Sergio López Rivera, USM

  2. Outline
     1. Motivation: Stationary Mean Field Games
     2. Algorithm and convergence
     3. Numerical experiences: Stationary Mean Field Games; numerical simulation
     4. Extensions and acceleration

  3. Motivation: Mean Field Games (MFG)
     Introduced by Lasry-Lions (2006): given T > 0 and ν ≥ 0,
       −∂_t u_T − ν Δu_T + (1/2)|∇u_T|^2 = V(x, m_T)      in T^2 × [0, T],
       ∂_t m_T − ν Δm_T − div(m_T ∇u_T) = 0               in T^2 × [0, T],
       m_T(x, 0) = m_0(x),   u_T(x, T) = Φ(x, m_T(x, T))   in T^2.
     The first is a Hamilton-Jacobi equation, backward in time. It characterizes the value of an optimal control problem solved by "a typical small" player, whose running cost V depends on the density of the other players at each t ∈ [0, T] and whose final cost Φ depends on the density at time T.
     The second is a Fokker-Planck equation, forward in time. It models the evolution of the initial density m_0 at the Nash equilibrium.

  4. Motivation: Stationary MFG
     It is interpreted as the limit behavior of the rescaled MFG as T → ∞ (see Cardaliaguet et al. 2012).
     (SMFG)
       −ν Δu(x) + (1/2)|∇u(x)|^2 + λ = V(x, m(x))   in T^2,
       −ν Δm(x) − div(m(x) ∇u(x)) = 0               in T^2,
       m ≥ 0,   ∫_{T^2} m(x) dx = 1,   ∫_{T^2} u(x) dx = 0.
     Well-posedness is studied in the case of smooth and weak solutions: Lasry, Lions (2006-07), Cirant (2015-16), Gomes, Patrizi, Voskanyan (2014), Gomes, Mitaki (2015), ...

  5. Motivation: Discretized SMFG
     If ν > 0, V(x, ·) is increasing, and the stationary system admits a unique classical solution, then Achdou, Camilli & Capuzzo Dolcetta (2013) prove that a discretized SMFG converges, as h → 0, to the unique solution of the stationary system (in L^p for some p < 2).
     To solve the discretized system, Newton's method can be used (see Achdou, Camilli & Capuzzo Dolcetta, 2013 and Cacace & Camilli, 2016), provided the initial guess is close enough to the solution.
     The performance of Newton's method depends heavily on the value of ν: for small values of ν the convergence is much slower and cannot be guaranteed in general, since m_h can become negative.

  6. Motivation: the discretized SMFG is the first-order optimality condition of (P_h) [2]
     Discrete optimization problem (P_h):
       inf_{(m,w) ∈ M_h × W_h}  h^2 ∑_{i,j=0}^{N_h − 1} [ b̂(m_{i,j}, w_{i,j}) + F(x_{i,j}, m_{i,j}) ]
       s.t.  −ν (Δ_h m)_{i,j} + (div_h w)_{i,j} = 0   for all 0 ≤ i, j ≤ N_h − 1,
             h^2 ∑_{i,j} m_{i,j} = 1.
     Here h > 0, N_h = 1/h, M_h = R^{N_h × N_h}, W_h = R^{4(N_h × N_h)}, and Δ_h : M_h → M_h, div_h : W_h → M_h are linear (denoted A and B). Define
       b̂ : (m, w) ↦ |w|^2 / (2m)  if m > 0 and w ∈ K;   0  if (m, w) = (0, 0);   +∞ otherwise,
       F : (x, m) ↦ ∫_0^m V(x, m′) dm′,
       K := R_+ × R_− × R_+ × R_−   (w = −m D_h u).
     [2] Joint work with F. J. Silva (U. Limoges) and D. Kalise (Imperial College)
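The piecewise definition of b̂ above translates directly into code. A minimal sketch (the test points are arbitrary; K-membership is checked componentwise by sign):

```python
import numpy as np

def b_hat(m, w):
    """Sketch of b-hat from (P_h): |w|^2/(2m) when m > 0 and w lies in
    K = R_+ x R_- x R_+ x R_-; 0 at (0, 0); +infinity otherwise."""
    w = np.asarray(w, dtype=float)
    in_K = w[0] >= 0 and w[1] <= 0 and w[2] >= 0 and w[3] <= 0
    if m > 0 and in_K:
        return float(w @ w) / (2.0 * m)
    if m == 0 and not np.any(w):
        return 0.0
    return np.inf

print(b_hat(2.0, [1.0, -1.0, 1.0, -1.0]))  # |w|^2/(2m) = 4/4 = 1.0
print(b_hat(0.0, [0.0, 0.0, 0.0, 0.0]))    # 0.0
print(b_hat(1.0, [-1.0, 0.0, 0.0, 0.0]))   # inf (w not in K)
```

This is the perspective-type function that makes the objective jointly convex in (m, w) while penalizing negative densities.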

  7. Motivation: the structure of (P_h)
     Assume V(x, ·) is increasing (so F(x, ·) is convex). Then
       f : m ↦ ∑_{i,j} F(x_{i,j}, m_{i,j})   is convex and smooth;
       g : (m, w) ↦ ∑_{i,j} b̂(m_{i,j}, w_{i,j})   is convex, l.s.c., and nonsmooth.
     Recall −Δ_h = A and div_h = B.
     Reformulation of (P_h):
       (P)  inf_{(m,w) ∈ M_h × W_h}  f(m) + g(m, w)
            s.t.  L(m, w) = (0, 1),   where  L := [ A       B ]
                                                  [ h^2 1^T  0 ].
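As a quick shape check, the block operator L above can be assembled from its four blocks; here A and B are identity/zero placeholders of the right dimensions (assumptions for illustration, not the actual −Δ_h and div_h stencils):

```python
import numpy as np

# Shape-check sketch of L = [[A, B], [h^2 * 1^T, 0]].
Nh = 4
h = 1.0 / Nh
n = Nh * Nh                       # dimension of M_h; W_h has dimension 4n
A = np.eye(n)                     # placeholder for A = -Delta_h (assumption)
B = np.zeros((n, 4 * n))          # placeholder for B = div_h (assumption)
top = np.hstack([A, B])
bottom = np.hstack([h**2 * np.ones((1, n)), np.zeros((1, 4 * n))])
L_block = np.vstack([top, bottom])

print(L_block.shape)  # (n + 1, 5n) = (17, 80)
```

The extra bottom row encodes the mass constraint h^2 ∑ m_{i,j} = 1, which is why the right-hand side of L(m, w) = (0, 1) has the scalar 1 in its last component.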

  8. Convex non-differentiable optimization problem
     Problem (P):
       minimize_{x ∈ R^N}  f(x) + g(x) + h(Lx),
     where
       - f : R^N → R is differentiable and ∇f is β^{-1}-Lipschitz;
       - g ∈ Γ_0(R^N) and h ∈ Γ_0(R^M) (proper, l.s.c., convex);
       - L is an M × N real matrix;
       - the set of solutions is nonempty;
       - ri(dom h) ∩ L(ri(dom g)) ≠ ∅.
     Important case: h = ι_{{b}} : x ↦ 0 if x = b, +∞ otherwise, for b ∈ R^M, which gives
       minimize_{Lx = b}  f(x) + g(x).

  9. Classic approach: ADMM
     An equivalent formulation is
       min_{Lx = y}  f(x) + g(x) + h(y),
     from which we define the augmented Lagrangian (γ > 0):
       L_γ(x, y, u) = f(x) + g(x) + h(y) + u · (Lx − y) + (γ/2) ‖Lx − y‖^2.
     Under qualification conditions, x solves (P) iff (x, Lx, u) is a saddle point of L_γ. An alternating minimization-maximization method yields the classical Alternating Direction Method of Multipliers (ADMM) (Gabay-Mercier, 1980s):

  10. Classic approach: ADMM
      x^{k+1} = argmin_x L_γ(x, y^k, u^k)
      y^{k+1} = argmin_y L_γ(x^{k+1}, y, u^k)
      u^{k+1} = u^k + γ(Lx^{k+1} − y^{k+1}).

  11. Classic approach: ADMM
      x^{k+1} = argmin_x [ f(x) + g(x) + u^k · Lx + (γ/2) ‖Lx − y^k‖^2 ]
      y^{k+1} = argmin_y [ h(y) − u^k · y + (γ/2) ‖Lx^{k+1} − y‖^2 ]
      u^{k+1} = u^k + γ(Lx^{k+1} − y^{k+1}).

  12. Classic approach: ADMM
      x^{k+1} = argmin_x [ f(x) + g(x) + u^k · Lx + (γ/2) ‖Lx − y^k‖^2 ]
      y^{k+1} = argmin_y [ h(y) − u^k · y + (γ/2) ‖Lx^{k+1} − y‖^2 ]
      u^{k+1} = u^k + γ(Lx^{k+1} − y^{k+1}).
      In the case h = ι_{{b}} for some b ∈ R^M, we have
      x^{k+1} = argmin_x [ f(x) + g(x) + u^k · Lx + (γ/2) ‖Lx − b‖^2 ]
      u^{k+1} = u^k + γ(Lx^{k+1} − b).
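The h = ι_{{b}} iteration above can be sketched on a toy instance. Here f(x) = ½‖x‖², g = 0, and L, b are hand-picked small data (all assumptions for illustration), so the x-step reduces to the linear solve (I + γL^T L) x = γL^T b − L^T u:

```python
import numpy as np

# Toy ADMM sketch for h = iota_{b}: min 0.5*||x||^2 s.t. Lx = b, with g = 0.
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])
gamma = 1.0

x, u = np.zeros(3), np.zeros(2)
for _ in range(100):
    # x-step: argmin_x 0.5*||x||^2 + u.(Lx) + (gamma/2)*||Lx - b||^2
    x = np.linalg.solve(np.eye(3) + gamma * (L.T @ L), gamma * L.T @ b - L.T @ u)
    # multiplier update
    u = u + gamma * (L @ x - b)

print(x)  # approaches (1, 2, 0) = argmin 0.5*||x||^2 s.t. Lx = b
```

Even on this toy problem the intermediate iterates are infeasible (Lx^k ≠ b); feasibility only holds in the limit.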

  13. Drawbacks of ADMM
      - The primal iterates (x^k)_{k ∈ N} do not satisfy the constraints (Lx^k ≠ b).
      - Moreover, the first step is not easy in general (it involves both L and f + g); it can be solved efficiently only in specific instances, e.g. f + g quadratic, L^T L = α Id.
      Idea: try to split the influence of L, f, and g in the first step.

  14. Drawbacks of ADMM
      - The primal iterates (x^k)_{k ∈ N} do not satisfy the constraints (Lx^k ≠ b).
      - Moreover, the first step is not easy in general (it involves both L and f + g); it can be solved efficiently only in specific instances, e.g. f + g quadratic, L^T L = α Id.
      Idea: try to split the influence of L, f, and g in the first step.
      Given x ∈ R^N, prox_f x is the unique solution to
        minimize_{y ∈ R^N}  f(y) + (1/2) ‖x − y‖^2.
      Several functions f have an explicit or efficiently computable prox_f. Examples: ‖·‖_1, ι_C (prox_{ι_C} = P_C), d_C, etc.
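Two of the standard prox examples above, as a quick sketch (the test vector is arbitrary):

```python
import numpy as np

def prox_l1(x, gamma):
    """prox of gamma*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def proj_ball(x, radius=1.0):
    """prox of iota_C for C the Euclidean ball of given radius,
    i.e. the projection P_C."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([2.0, -0.3, 0.5])
print(prox_l1(x, 1.0))   # entries with |x_i| <= 1 are set to 0
print(proj_ball(x))      # rescaled to unit norm since ||x|| > 1
```

Both maps are explicit, which is exactly what makes prox-based splitting methods cheap per iteration.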

  15. Other approaches
      Problem (P): min_{x ∈ R^N} f(x) + g(x) + h(Lx).
      Combettes-Pesquet (2012): let 0 < γ < (‖L‖ + β)^{-1}, x^0 ∈ R^N, u^0 ∈ R^M, and iterate
        p_1^k = prox_{γg}(x^k − γ(∇f(x^k) + L^T u^k))
        p_2^k = prox_{γh*}(u^k + γ L x^k)
        x^{k+1} = p_1^k − γ(L^T p_2^k + ∇f(p_1^k) − L^T u^k − ∇f(x^k))
        u^{k+1} = p_2^k + γ(L p_1^k − L x^k).
      In the case f = 0, this is the method proposed in BA-Combettes (2011).

  16. Other approaches
      Problem (P), case h = ι_{{b}}: min_{Lx = b} f(x) + g(x).
      Combettes-Pesquet (2012): let 0 < γ < (‖L‖ + β)^{-1}, x^0 ∈ R^N, u^0 ∈ R^M, and iterate
        p_1^k = prox_{γg}(x^k − γ(∇f(x^k) + L^T u^k))
        p_2^k = u^k + γ(L x^k − b)
        x^{k+1} = p_1^k − γ(L^T p_2^k + ∇f(p_1^k) − L^T u^k − ∇f(x^k))
        u^{k+1} = p_2^k + γ(L p_1^k − L x^k),
      using prox_{γh*} = Id − γ prox_{h/γ} ∘ (Id/γ) = Id − γb.
      The influences of L, f, and g have been split, but the primal iterates still do not satisfy the constraints, and the method does not exploit the cocoercivity of ∇f.
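The h = ι_{{b}} iteration above can be sketched on the same kind of toy instance: f(x) = ½‖x‖² (so ∇f = Id and β = 1) and g = 0 (so prox_{γg} is the identity); L and b are hand-picked. All of these choices are assumptions for illustration:

```python
import numpy as np

# Toy sketch of the primal-dual iteration for h = iota_{b}.
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])

def grad_f(x):
    return x                      # gradient of 0.5*||x||^2

gamma = 0.45                      # 0 < gamma < (||L|| + beta)^{-1} = 0.5 here

x, u = np.zeros(3), np.zeros(2)
for _ in range(300):
    p1 = x - gamma * (grad_f(x) + L.T @ u)   # prox_{gamma*g} = Id since g = 0
    p2 = u + gamma * (L @ x - b)             # prox_{gamma*h*} = Id - gamma*b
    x, u = (p1 - gamma * (L.T @ p2 + grad_f(p1) - L.T @ u - grad_f(x)),
            p2 + gamma * (L @ p1 - L @ x))

print(x)  # approaches (1, 2, 0) = argmin 0.5*||x||^2 s.t. Lx = b
```

As the slide notes, the iterates x^k here are not feasible along the way; the constraint Lx = b only holds at the limit point.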
