1. Polynomial Optimization with Sums-of-Squares Interpolants
   Sercan Yıldız (syildiz@samsi.info), in collaboration with Dávid Papp (NCSU)
   OPT Transition Workshop, May 02, 2017

2. Outline
   • Polynomial optimization and sums of squares
     – Sums-of-squares hierarchy
     – Semidefinite representation of sums-of-squares constraints
   • Interior-point methods and conic optimization
   • Sums-of-squares optimization with interior-point methods
     – Computational complexity
     – Preliminary results

3. Polynomial Optimization
   • Polynomial optimization problem:
     $\min_{z \in \mathbb{R}^n} f(z)$ s.t. $g_i(z) \ge 0$ for $i = 1, \dots, m$,
     where $f, g_1, \dots, g_m$ are $n$-variate polynomials.
   • Example: $\min_{z \in \mathbb{R}^2} z_1^3 + 3 z_1^2 z_2 - 6 z_1 z_2^2 + 2 z_2^3$ s.t. $z_1^2 + z_2^2 \le 1$.
   • Some applications:
     – Shape-constrained estimation
     – Design of experiments
     – Control theory
     – Combinatorial optimization
     – Computational geometry
     – Optimal power flow
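To make the example concrete, here is a quick numeric sanity check (my addition, not from the deck), brute-forcing the minimum over the unit disk with NumPy:

```python
import numpy as np

# Approximate min of z1^3 + 3 z1^2 z2 - 6 z1 z2^2 + 2 z2^3 over z1^2 + z2^2 <= 1
# by dense sampling of the disk in polar coordinates.
r = np.linspace(0.0, 1.0, 400)
t = np.linspace(0.0, 2.0 * np.pi, 400)
R, T = np.meshgrid(r, t)
z1, z2 = R * np.cos(T), R * np.sin(T)
f = z1**3 + 3 * z1**2 * z2 - 6 * z1 * z2**2 + 2 * z2**3
print("approximate minimum over the disk:", f.min())
```

An SOS relaxation (next slides) would instead produce a certified lower bound on this value.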

4. Sums-of-Squares Relaxations
   • Let $f$ be an $n$-variate degree-$2d$ polynomial.
   • Unconstrained polynomial optimization: $\min_{z \in \mathbb{R}^n} f(z)$ (NP-hard already for $d = 2$!).
   • Equivalent "dual" formulation: $\max_{y \in \mathbb{R}} y$ s.t. $f(z) - y \in P$, where $P = \{ f : f(z) \ge 0 \ \forall z \in \mathbb{R}^n \}$.
   • Sums-of-squares cone: $SOS = \{ f : f = \sum_{j=1}^N f_j^2 \text{ for some degree-}d \text{ polynomials } f_j \}$; note that $f \in SOS \Rightarrow f \in P$.
   • SOS relaxation: $\max_{y \in \mathbb{R}} y$ s.t. $f(z) - y \in SOS$.
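A tiny worked instance of the relaxation (my example, not from the talk): $f(z) = z^2 - 2z + 3$ has minimum 2, and the relaxation is tight because $f(z) - 2 = (z-1)^2$ is itself a square. The snippet certifies this with a PSD Gram matrix on the basis $[1, z]$:

```python
import numpy as np

# f(z) = z^2 - 2z + 3.  Claim: y = 2 is a valid SOS bound because
# f(z) - 2 = [1, z] S [1, z]^T for the PSD Gram matrix S below.
S = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])                    # Gram matrix of (z - 1)^2
print(np.linalg.eigvalsh(S))                    # [0, 2]: S is PSD
# Coefficients of f - y are the antidiagonal sums of S:
print([S[0, 0], S[0, 1] + S[1, 0], S[1, 1]])    # [1, -2, 1] = 1 - 2z + z^2
```

The antidiagonal-sum map reappears on slide 7 as $\Lambda^*$.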

5. Sums-of-Squares Relaxations
   • Let $f, g_1, \dots, g_m$ be $n$-variate polynomials.
   • Constrained polynomial optimization: $\min_{z \in \mathbb{R}^n} f(z)$ s.t. $g_i(z) \ge 0$ for $i = 1, \dots, m$.
   • Feasible set: $G = \{ z \in \mathbb{R}^n : g_i(z) \ge 0 \text{ for } i = 1, \dots, m \}$.
   • Dual formulation: $\max_{y \in \mathbb{R}} y$ s.t. $f(z) - y \in P_G$, where $P_G = \{ f : f(z) \ge 0 \ \forall z \in G \}$.
   • "Weighted" SOS cone of order $r$: $SOS_{G,r} = \{ f : f = \sum_{i=0}^m g_i s_i,\ s_i \in SOS,\ \deg(g_i s_i) \le r \}$, where $g_0 \equiv 1$; note that $f \in SOS_{G,r} \Rightarrow f \in P_G$.
   • SOS relaxation of order $r$: $\max_{y \in \mathbb{R}} y$ s.t. $f(z) - y \in SOS_{G,r}$.
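A concrete weighted certificate (my example): on $G = [-1, 1]$ with $g_1(z) = 1 - z^2$, the polynomial $f(z) = z$ has minimum $-1$, certified by $z + 1 = \tfrac{1}{2}(1+z)^2 + \tfrac{1}{2}(1 - z^2)$, i.e. $s_0 = \tfrac{1}{2}(1+z)^2$ and $s_1 = \tfrac{1}{2}$. The snippet checks the coefficient identity:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Verify z + 1 = 0.5*(1 + z)^2 + (1 - z^2)*0.5  (coefficients low -> high).
s0 = 0.5 * P.polymul([1, 1], [1, 1])     # s0 = 0.5*(1 + z)^2
g1_s1 = 0.5 * np.array([1, 0, -1])       # g1 * s1 with g1 = 1 - z^2, s1 = 0.5
print(P.polyadd(s0, g1_s1))              # [1. 1. 0.]  i.e. 1 + z
```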

6. Why Do We Like SOS?
   • Polynomial optimization is NP-hard.
   • Increasing $r$ produces a hierarchy of SOS relaxations for polynomial optimization problems.
   • Under mild assumptions, the lower bounds from SOS relaxations converge to the true optimal value as $r \uparrow \infty$ (follows from Putinar's Positivstellensatz).
   • SOS relaxations can be represented as semidefinite programs of size $O(mL^2)$, where $L = \binom{n + r/2}{n}$ (follows from the results of Shor, Nesterov, Parrilo, Lasserre).
   • There are efficient and stable numerical methods for solving SDPs.
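To get a feel for how quickly the block size $L = \binom{n + r/2}{n}$ grows (illustrative snippet, my addition):

```python
from math import comb

# L = C(n + r/2, n): side length of the PSD blocks in the order-r SOS relaxation.
for n in (2, 5, 10):
    for r in (4, 8, 12):
        print(f"n={n:2d}  r={r:2d}  L={comb(n + r // 2, n):6d}")
```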

7. SOS Constraints Are Semidefinite Representable
   • Consider the cone of degree-$2d$ SOS polynomials: $SOS_{2d} = \{ f \in \mathbb{R}[z]_{2d} : f = \sum_{j=1}^N f_j^2 \text{ for some } f_j \in \mathbb{R}[z]_d \}$.
   • Theorem (Nesterov, 2000). The univariate polynomial $f(z) = \sum_{u=0}^{2d} \bar{f}_u z^u$ is SOS iff there exists a $(d+1) \times (d+1)$ PSD matrix $S$ such that $\bar{f}_u = \sum_{k+\ell=u} S_{k\ell}$ for all $u = 0, \dots, 2d$.
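The map sending $S$ to the coefficient vector is the adjoint $\Lambda^*$ defined on slide 8; in the monomial basis it simply sums antidiagonals. A small check (my code), using $f(z) = (z^2 + 1)^2$ with the rank-1 Gram matrix of $z^2 + 1$:

```python
import numpy as np

def lambda_star(S):
    """Antidiagonal sums of S: coefficients of the represented polynomial."""
    d = S.shape[0] - 1
    return np.array([sum(S[k, u - k] for k in range(max(0, u - d), min(u, d) + 1))
                     for u in range(2 * d + 1)])

v = np.array([1.0, 0.0, 1.0])      # z^2 + 1 in the basis (1, z, z^2)
S = np.outer(v, v)                 # rank-1 PSD Gram matrix of (z^2 + 1)^2
print(lambda_star(S))              # [1. 0. 2. 0. 1.] = 1 + 2 z^2 + z^4
```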

8. SOS Constraints Are Semidefinite Representable
   • More generally, in the $n$-variate case:
     – Let $L := \dim(\mathbb{R}[z]_d) = \binom{n+d}{n}$ and $U := \dim(\mathbb{R}[z]_{2d}) = \binom{n+2d}{n}$.
     – Fix bases $\{p_\ell\}_{\ell=1}^L$ and $\{q_u\}_{u=1}^U$ for the linear spaces $\mathbb{R}[z]_d$ and $\mathbb{R}[z]_{2d}$.
   • Theorem (Nesterov, 2000). The polynomial $f(z) = \sum_{u=1}^U \bar{f}_u q_u(z)$ is SOS iff there exists an $L \times L$ PSD matrix $S$ such that $\bar{f} = \Lambda^*(S)$, where $\Lambda : \mathbb{R}^U \to \mathbb{S}^L$ is the linear map satisfying $\Lambda([q_u]_{u=1}^U) = [p_\ell]_{\ell=1}^L \, ([p_\ell]_{\ell=1}^L)^\top$.
   • If univariate polynomials are represented in the monomial basis, $\Lambda(x)$ is the Hankel matrix
     $\Lambda(x) = \begin{pmatrix} x_0 & x_1 & \cdots & x_d \\ x_1 & x_2 & \cdots & x_{d+1} \\ \vdots & \vdots & & \vdots \\ x_d & x_{d+1} & \cdots & x_{2d} \end{pmatrix}$, and $\Lambda^*(S) = \big[ \sum_{k+\ell=u} S_{k\ell} \big]_{u=0}^{2d}$.
   • These results easily extend to the weighted case.
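In code, the monomial-basis $\Lambda(x)$ is a Hankel matrix, and $x$ lies in the dual cone exactly when it is PSD; true moment vectors always pass this test. A sketch (my addition, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import hankel

def moment_matrix(x):
    """Lambda(x): (d+1) x (d+1) Hankel matrix of x_0, ..., x_{2d}."""
    d = (len(x) - 1) // 2
    return hankel(x[:d + 1], x[d:])

# Moments x_u = (1/2) * integral of z^u over [-1, 1]: PSD by construction.
x = np.array([1.0, 0.0, 1/3, 0.0, 1/5])
print(np.linalg.eigvalsh(moment_matrix(x)))   # all nonnegative
```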

9. SOS Constraints Are Semidefinite Representable
   • To keep things simple, we focus on optimization over a single SOS cone.
   • SOS problem: $\max_{y \in \mathbb{R}^k,\, S \in \mathbb{S}^L} b^\top y$ s.t. $A^\top y + \Lambda^*(S) = c$, $S \succeq 0$; equivalently, $A^\top y + s = c$ with $s \in SOS := \{ s \in \mathbb{R}^U : s = \Lambda^*(S),\ S \succeq 0 \}$.
   • Moment problem: $\min_{x \in \mathbb{R}^U} c^\top x$ s.t. $Ax = b$, $x \in SOS^* := \{ x \in \mathbb{R}^U : \Lambda(x) \succeq 0 \}$.
   • Disadvantages of existing approaches:
     – Problem: The SDP representation roughly squares the number of variables. Solution: We solve the SOS and moment problems in their original space.
     – Problem: Standard basis choices lead to ill-conditioning. Solution: We use orthogonal polynomials and interpolation bases.

10. Interior-Point Methods for Conic Programming
   • Primal-dual pair of conic programs:
     $\min c^\top x$ s.t. $Ax = b$, $x \in K$, and $\max b^\top y$ s.t. $A^\top y + s = c$, $s \in K^*$.
   • $K$: closed, convex, pointed cone with nonempty interior. Examples: $K = \mathbb{R}^n_+$, $\mathbb{S}^n_+$, $SOS$, $P$.
   • Interior-point methods make use of self-concordant barriers (SCBs). Examples:
     – $K = \mathbb{R}^n_+$: $F(x) = -\sum_{i=1}^n \log x_i$
     – $K = \mathbb{S}^n_+$: $F(X) = -\log \det X$
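These barriers come with closed-form derivatives, which is what IPMs exploit. A quick numeric confirmation (my snippet) that the gradient of $-\sum_i \log x_i$ is $-1/x$ coordinatewise, and that directional derivatives of $-\log\det$ are traces against $-X^{-1}$:

```python
import numpy as np

eps = 1e-6
x = np.array([0.5, 2.0])
num_grad = [(-np.log(x + eps * e).sum() + np.log(x).sum()) / eps for e in np.eye(2)]
print(num_grad, -1 / x)                    # finite differences vs. -1/x

X = np.array([[2.0, 0.5], [0.5, 1.0]])
E = np.zeros((2, 2)); E[0, 0] = 1.0        # perturbation direction
num_dir = (-np.log(np.linalg.det(X + eps * E)) + np.log(np.linalg.det(X))) / eps
print(num_dir, -np.linalg.inv(X)[0, 0])    # = trace(-X^{-1} E)
```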

11. Interior-Point Methods for Conic Programming
   • Given SCBs $F$ and $F_*$, IPMs converge to the optimal solution by solving a sequence of equality-constrained barrier problems for $\mu \downarrow 0$:
     $\min c^\top x + \mu F(x)$ s.t. $Ax = b$, and $\max b^\top y - \mu F_*(s)$ s.t. $A^\top y + s = c$.
   • Primal IPMs solve only the primal barrier problem and are not considered to be practical.
   • Primal-dual IPMs solve both the primal and dual barrier problems simultaneously and are preferred in practice. (A minimal primal-only sketch follows below.)
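Below is a minimal primal-only barrier sketch on an LP, just to make the $\mu \downarrow 0$ scheme concrete (my code; the talk's actual method is a primal-dual IPM over SOS cones, not this):

```python
import numpy as np

def lp_barrier_solve(A, b, c, x0, mu0=1.0, theta=0.2, tol=1e-8):
    """Sketch: min c^T x s.t. Ax = b, x >= 0 with F(x) = -sum(log x).
    x0 must satisfy A x0 = b with x0 > 0."""
    x, mu, n, m = x0.astype(float), mu0, len(x0), A.shape[0]
    while mu * n > tol:                        # mu * n bounds the duality gap
        for _ in range(50):                    # Newton centering for fixed mu
            g = c - mu / x                     # gradient of c^T x + mu F(x)
            H = np.diag(mu / x**2)             # Hessian of the same
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
            t = 1.0
            while np.any(x + t * dx <= 0):     # stay strictly feasible
                t *= 0.5
            x = x + t * dx
            if np.linalg.norm(dx) < 1e-10:
                break
        mu *= theta                            # shrink the barrier parameter
    return x

A, b, c = np.array([[1.0, 1.0]]), np.array([1.0]), np.array([1.0, 2.0])
print(lp_barrier_solve(A, b, c, x0=np.array([0.5, 0.5])))   # ~ [1, 0]
```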

12. Interior-Point Methods for Conic Programming
   • In principle, any closed convex cone admits an SCB (Nesterov and Nemirovski, 1994).
   • However, the success of the IPM approach depends on the availability of an SCB whose gradient and Hessian can be computed efficiently.
     – For the cone $P$, there are complexity-based reasons for suspecting that no computationally tractable SCBs exist.
     – For the cone $SOS$, there is evidence suggesting that there may not exist any tractable SCBs.
     – On the other hand, the cone $SOS^*$ inherits a tractable SCB from the PSD cone.

13. Interior-Point Method of Skajaa and Ye
   • Until recently:
     – The practical success of primal-dual IPMs had been limited to optimization over symmetric cones: LP, SOCP, SDP.
     – Existing primal-dual IPMs for non-symmetric conic programs required both the primal and dual SCBs (e.g., Nesterov, Todd, and Ye, 1999; Nesterov, 2012).
   • Skajaa and Ye (2015) proposed a primal-dual IPM for non-symmetric conic programs which
     – requires an SCB for only the primal cone,
     – achieves the best-known iteration complexity.

14. Solving SOS Programs with Interior-Point Methods
   • Using Skajaa and Ye's IPM with the SCB for $SOS^*$, the SOS and moment problems can be solved without recourse to SDP.
   • For any $\epsilon \in (0, 1)$, the algorithm finds a primal-dual solution that has $\epsilon$ times the duality gap of an initial solution in $O(\sqrt{L} \log(1/\epsilon))$ iterations, where $L = \dim(\mathbb{R}[z]_d)$.
   • Each iteration of the IPM requires
     – the computation of the Hessian of the SCB for $SOS^*$,
     – the solution of a Newton system.
   • Solving the Newton system requires $O(U^3)$ operations, where $U = \dim(\mathbb{R}[z]_{2d})$.
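Concretely, the SCB in question is $F(x) = -\log\det \Lambda(x)$, with gradient entries $-\operatorname{tr}(\Lambda(x)^{-1} A_u)$ where $\Lambda(x) = \sum_u x_u A_u$. A univariate, monomial-basis sketch (my reconstruction, assuming SciPy; the talk's implementation uses different bases):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_basis(d, u):
    """A_u: ones on the u-th antidiagonal, so Lambda(x) = sum_u x_u A_u."""
    A = np.zeros((d + 1, d + 1))
    for k in range(max(0, u - d), min(u, d) + 1):
        A[k, u - k] = 1.0
    return A

d = 2
x = np.array([1.0, 0.0, 1/3, 0.0, 1/5])        # interior point of SOS*
Lam = hankel(x[:d + 1], x[d:])
Lam_inv = np.linalg.inv(Lam)
F = -np.log(np.linalg.det(Lam))
grad = np.array([-np.trace(Lam_inv @ hankel_basis(d, u)) for u in range(2 * d + 1)])
print(F, grad)
```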

15. Solving SOS Programs with Interior-Point Methods
   • The choice of bases $\{p_\ell\}_{\ell=1}^L$ and $\{q_u\}_{u=1}^U$ for $\mathbb{R}[z]_d$ and $\mathbb{R}[z]_{2d}$ has a significant effect on how efficiently the Newton system can be assembled.
     – In general, computing the Hessian requires $O(L^2 U^2)$ operations.
     – If both bases are chosen to be monomial bases, the Hessian can be computed faster, but this requires specialized methods such as the FFT and the "inversion" of Hankel-like matrices.
     – Following Löfberg and Parrilo (2004), we choose $\{q_u\}_{u=1}^U$ to be Lagrange interpolating polynomials and $\{p_\ell\}_{\ell=1}^L$ to be orthogonal polynomials.
     – With this choice, the Hessian can be computed in $O(LU^2)$ operations (see the sketch below).
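Here is why the interpolant basis helps (my reconstruction of the structure, univariate for simplicity): with Lagrange polynomials $q_u$ at nodes $t_u$ and an orthogonal basis $p$, linearity gives $\Lambda(x) = \sum_u x_u\, p(t_u) p(t_u)^\top$, so the Hessian of $-\log\det\Lambda(x)$ has entries $(p(t_u)^\top \Lambda(x)^{-1} p(t_v))^2$, computable as an elementwise square in $O(LU^2)$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

d = 3
L, U = d + 1, 2 * d + 1
t = C.chebpts1(U)                       # one Chebyshev node per Lagrange basis q_u
A = C.chebvander(t, d)                  # A[u, l] = p_l(t_u), Chebyshev basis p

x = np.ones(U)                          # an interior point of SOS* in this basis
Lam = A.T @ (x[:, None] * A)            # Lambda(x) = sum_u x_u p(t_u) p(t_u)^T
B = A @ np.linalg.solve(Lam, A.T)       # B[u, v] = p(t_u)^T Lambda^{-1} p(t_v)
H = B**2                                # Hessian of -log det Lambda(x), O(L U^2)
print(np.linalg.eigvalsh(H).min())      # strictly positive: Hessian is PD
```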

16. Solving SOS Programs with Interior-Point Methods
   • Putting everything together:
     – The algorithm runs in $O(\sqrt{L} \log(1/\epsilon))$ iterations.
     – At each iteration, computing the Hessian requires $O(LU^2)$ operations and solving the Newton system requires $O(U^3)$ operations.
   • Overall complexity: $O(U^3 \sqrt{L} \log(1/\epsilon))$ operations.
   • This matches the best-known complexity bounds for LP!
   • In contrast:
     – Solving the SDP formulation with a primal-dual IPM requires $O(L^{6.5} \log(1/\epsilon))$ operations.
     – For fixed $n$: $L^2 / U = \Theta(d^n)$.
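Plugging in numbers makes the gap visible (my snippet; constants ignored, univariate case):

```python
from math import comb, log, sqrt

n, eps = 1, 1e-8
for d in (10, 50, 100):
    L, U = comb(n + d, n), comb(n + 2 * d, n)
    print(f"d={d:4d}  U^3*sqrt(L)*log(1/eps) ~ {U**3 * sqrt(L) * log(1/eps):.2e}"
          f"   L^6.5*log(1/eps) ~ {L**6.5 * log(1/eps):.2e}")
```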

17. Solving SOS Programs with Interior-Point Methods
   • The conditioning of the moment problem is directly related to the conditioning of the interpolation problem with $\{p_k p_\ell\}_{k,\ell=1}^L$.
   • Good interpolation nodes are well understood only in a few low-dimensional domains:
     – Chebyshev points in $[-1, 1]$,
     – Padua points in $[-1, 1]^2$ (Caliari et al., 2005).
   • For problems in higher dimensions, we follow a heuristic approach to choosing interpolation nodes. (The small experiment below illustrates why node choice matters.)
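As an illustration (my experiment, not from the deck): the Lebesgue constant, which controls the conditioning of interpolation, grows slowly for Chebyshev points but explodes for equispaced points:

```python
import numpy as np

def lebesgue_constant(nodes, grid=np.linspace(-1.0, 1.0, 2001)):
    """Max over the grid of sum_j |l_j(x)| for the Lagrange basis at the nodes."""
    s = np.zeros_like(grid)
    for j, tj in enumerate(nodes):
        lj = np.ones_like(grid)
        for k, tk in enumerate(nodes):
            if k != j:
                lj *= (grid - tk) / (tj - tk)
        s += np.abs(lj)
    return s.max()

for m in (10, 20, 30):
    cheb = np.cos((2 * np.arange(m + 1) + 1) * np.pi / (2 * m + 2))
    equi = np.linspace(-1.0, 1.0, m + 1)
    print(f"{m + 1} nodes: Chebyshev {lebesgue_constant(cheb):8.2f}"
          f"  equispaced {lebesgue_constant(equi):12.2f}")
```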
