
Quadratic programs / Cone programs (10-725 Optimization, Geoff Gordon)



  1. Quadratic programs / Cone programs · 10-725 Optimization · Geoff Gordon, Ryan Tibshirani

  2. Administrivia • HW3 back at end of class • Last day for feedback survey • All lectures now up on YouTube (and continue to be downloadable from course website) • Reminder: midterm next Tuesday 11/6! ‣ in class, 1 hr 20 min, one sheet (both sides) of notes

  3. Quadratic programs • m constraints, n vars ‣ A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, x ∈ R^n, H ∈ R^{n×n} ‣ [min or max] x^T H x / 2 + c^T x ‣ s.t. Ax ≤ b or Ax = b [or some mixture] ‣ may have (some elements of) x ≥ 0 • Convex problem if: H is positive semidefinite (for min; negative semidefinite for max) • Example: max 2x + x² + y² s.t. x + y ≤ 4 ‣ 2x + 5y ≤ 12, x + 2y ≤ 5, x, y ≥ 0
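
As a concrete illustration (not from the lecture), here is a minimal sketch of a convex QP in exactly this standard form, using the CVXPY modeling library; H, c, A, b are made-up data, with H chosen positive semidefinite so the minimization is convex.

    # Minimal convex QP sketch in the slide's standard form (made-up data).
    import cvxpy as cp
    import numpy as np

    H = np.array([[2.0, 0.0],
                  [0.0, 2.0]])            # positive semidefinite -> convex minimization
    c = np.array([2.0, 0.0])
    A = np.array([[1.0, 1.0],
                  [2.0, 5.0],
                  [1.0, 2.0]])
    b = np.array([4.0, 12.0, 5.0])

    x = cp.Variable(2)
    objective = cp.Minimize(0.5 * cp.quad_form(x, H) + c @ x)
    constraints = [A @ x <= b, x >= 0]    # Ax <= b and x >= 0
    cp.Problem(objective, constraints).solve()
    print(x.value)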

  4. For example

  5. Cone programs • m constraints, n vars ‣ A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, x ∈ R^n ‣ Cones K ⊆ R^m, L ⊆ R^n ‣ [min or max] c^T x s.t. Ax + b ∈ K, x ∈ L ‣ convex if K and L are convex cones • E.g., K = the nonnegative orthant R^m_+ (recovers an LP) • E.g., L = R^n (x unrestricted) or R^n_+

  6. For example: SOCP • min c^T x s.t. A_i x + b_i ∈ K_i, i = 1, 2, …, where each K_i is a second-order (Lorentz) cone { (t, u) : ||u||_2 ≤ t }
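
A tiny runnable sketch of this form (made-up data, using CVXPY's SOC constraint): with the A and b below, "Ax + b in the second-order cone" reads ||x||_2 ≤ 1, so minimizing x1 + x2 gives about −√2.

    # Cone-program sketch: A x + b must lie in the second-order cone
    # { (t, u) : ||u||_2 <= t }.  Data chosen so the constraint is ||x||_2 <= 1.
    import cvxpy as cp
    import numpy as np

    A = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([1.0, 1.0])

    x = cp.Variable(2)
    z = A @ x + b
    constraints = [cp.SOC(z[0], z[1:])]    # ||z[1:]||_2 <= z[0]
    prob = cp.Problem(cp.Minimize(c @ x), constraints)
    prob.solve()
    print(prob.value)                      # approximately -sqrt(2)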

  7. Conic sections

  8. QPs are reducible to SOCPs • min x^T H x / 2 + c^T x s.t. …
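
The slide leaves the construction to the board; the standard argument (assuming H is positive semidefinite, so H = L L^T for some L) goes roughly as follows:

    \min_x \tfrac12 x^T H x + c^T x
      \;=\; \min_{x,t}\; t + c^T x \quad\text{s.t.}\quad \|L^T x\|_2^2 \le 2t,
    \qquad\text{and}\qquad
    \|u\|_2^2 \le v \;\Longleftrightarrow\; \bigl\|(2u,\; v-1)\bigr\|_2 \le v+1,

which is a second-order cone constraint (take u = L^T x, v = 2t); any linear constraints carry over unchanged.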

  9. SOCPs that aren’t QPs? • QCQP: convex quadratic objective & constraints • minimize a² + b² s.t. ‣ a ≥ x², b ≥ y² ‣ 2x + y = 4 • Not a QP (nonlinear constraints) ‣ but can rewrite as SOCP
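
A sketch of this small problem in CVXPY (reading the objective as a² + b²), with each quadratic constraint written explicitly as a second-order cone constraint via the identity above:

    # QCQP rewritten with explicit second-order cone constraints:
    # z^2 <= a  <=>  ||(2z, a-1)||_2 <= a+1.
    import cvxpy as cp

    x, y = cp.Variable(), cp.Variable()
    a, b = cp.Variable(), cp.Variable()
    constraints = [
        cp.SOC(a + 1, cp.hstack([2 * x, a - 1])),   # x^2 <= a
        cp.SOC(b + 1, cp.hstack([2 * y, b - 1])),   # y^2 <= b
        2 * x + y == 4,
    ]
    prob = cp.Problem(cp.Minimize(cp.square(a) + cp.square(b)), constraints)
    prob.solve()
    print(x.value, y.value)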

  10. More cone programs: SDP • Semidefinite constraint: ‣ variable x ∈ R^n ‣ constant matrices A_1, A_2, … ∈ R^{m×m} ‣ constrain x_1 A_1 + x_2 A_2 + … to be positive semidefinite (a linear matrix inequality; a constant matrix term can be included as well) • Semidefinite program: min c^T x s.t. ‣ semidefinite constraints ‣ linear equalities ‣ linear inequalities
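
A minimal runnable SDP sketch in CVXPY. CVXPY states SDPs most naturally with a symmetric matrix variable; the slide's form (a vector variable x entering a semidefinite constraint through constant matrices) is interconvertible with this one. C is made-up symmetric data, and the trace constraint matches the tr = 1 slice visualized on the next slides.

    # Standard-form SDP sketch: minimize <C, X> over symmetric PSD X with tr(X) = 1.
    # The optimal value is the smallest eigenvalue of C.
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    n = 3
    C = np.random.randn(n, n)
    C = (C + C.T) / 2                      # symmetric made-up data

    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,                 # semidefinite constraint
                   cp.trace(X) == 1]       # linear equality
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
    prob.solve()
    print(prob.value, np.linalg.eigvalsh(C).min())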

  11. Visualizing S_+ • 2 × 2 symmetric matrices w/ tr(A) = 1

  12. What about 3 × 3? • Try setting entire diagonal to 1/3 ‣ plot off-diagonal elements (3 of them)

  13. 3 × 3 symmetric psd matrices

  14. S_+ is self-dual • S_+ = { A | A = A^T, x^T A x ≥ 0 for all x } • [x^T A x ≥ 0 for all x] ⟺ [tr(B^T A) ≥ 0 for all psd B]
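
The equivalence on the slide follows from a one-line argument in each direction (standard, not spelled out on the slide):

    (\Rightarrow)\ B = \textstyle\sum_i \lambda_i v_i v_i^T,\ \lambda_i \ge 0
      \;\Longrightarrow\; \operatorname{tr}(B^T A) = \sum_i \lambda_i\, v_i^T A v_i \ge 0;
    \qquad
    (\Leftarrow)\ B = x x^T \text{ is psd} \;\Longrightarrow\; \operatorname{tr}(B^T A) = x^T A x \ge 0.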

  15. How hard are QPs and CPs? • Convex QP or CP: not much harder than LP! ‣ as long as we have an efficient rep’n of the cone ‣ poly(L, 1/ε) (L = bit length, ε = accuracy) ‣ can we get strongly polynomial (no 1/ε)? ‣ famous open question, even for LP • General QP or CP: NP-complete ‣ e.g., reduce max cut to QP

  16. QP examples • Euclidean projection • LASSO ‣ Mahalanobis projection • Huber regression • Support vector machine

  17. LASSO example • fit y = ax + b w/ (a, b) sparse [scatter plot: y vs. x]

  18. LASSO example
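
To see why LASSO belongs on the QP list, here is a sketch (made-up multivariate data, not the slide's one-dimensional example) that writes the ℓ1 penalty with the split w = w⁺ − w⁻, w⁺, w⁻ ≥ 0, which turns the problem into a QP; CVXPY is used for brevity.

    # LASSO as an explicit QP: split w = wp - wm with wp, wm >= 0,
    # so ||w||_1 becomes sum(wp) + sum(wm).
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    X = np.random.randn(20, 5)
    y = X @ np.array([1.0, 0.0, 0.0, -2.0, 0.0]) + 0.1 * np.random.randn(20)
    lam = 1.0

    wp = cp.Variable(5, nonneg=True)
    wm = cp.Variable(5, nonneg=True)
    w = wp - wm
    objective = cp.Minimize(cp.sum_squares(X @ w - y) + lam * (cp.sum(wp) + cp.sum(wm)))
    cp.Problem(objective).solve()
    print(w.value)                         # sparse-ish coefficient vector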

  19. Robust (Huber) regression • Given points (x_i, y_i) ‣ L2 regression: min_w Σ_i (y_i − x_i^T w)² • Problem: overfitting! • Solution: Huber loss ‣ min_w Σ_i Hu(y_i − x_i^T w), where Hu(z) = z² for |z| ≤ 1 and 2|z| − 1 for |z| > 1 (quadratic near zero, linear in the tails; matches the QP form on the next slide)

  20. Huber loss as QP • Hu(z) = min_{a,b} (z + a − b)² + 2a + 2b ‣ s.t. a, b ≥ 0
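
A short sketch of Huber regression with made-up data; CVXPY's built-in huber(z, 1) equals z² for |z| ≤ 1 and 2|z| − 1 otherwise, which is exactly the value of the QP above.

    # Huber regression sketch; cp.huber(., 1) matches the QP form of Hu on this slide.
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    X = np.random.randn(30, 2)
    y = X @ np.array([1.0, -1.0]) + 0.1 * np.random.randn(30)
    y[:3] += 10.0                          # a few gross outliers

    w = cp.Variable(2)
    loss = cp.sum(cp.huber(X @ w - y, 1))
    cp.Problem(cp.Minimize(loss)).solve()
    print(w.value)                         # close to (1, -1) despite the outliers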

  21. Cone program examples • SOCP ‣ (sparse) group lasso ‣ discrete MRF relaxation [Kumar, Kolmogorov, Torr, JMLR 2008] ‣ min volume covering ellipsoid (nonlinear objective)

  22. Cone program examples • SDP ‣ graphical lasso (nonlinear objective) ‣ Markowitz portfolio optimization (see B&V) ‣ max-cut relaxation [Goemans, Williamson] ‣ matrix completion ‣ manifold learning: max variance unfolding

  23. Matrix completion • Observe A_ij for ij ∈ E; write P_ij = 1 if ij ∈ E, 0 otherwise • min ||(X − A) ∘ P||_F² + λ ||X||_* (∘ = elementwise product, ||·||_* = nuclear/trace norm)
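
A sketch of this objective in CVXPY (made-up low-rank data; the elementwise product ∘ is implemented with multiply, and the nuclear norm with normNuc):

    # Matrix completion sketch: penalize misfit on observed entries (mask P)
    # plus a nuclear-norm term that encourages low rank.
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.outer(np.random.randn(6), np.random.randn(6))    # rank-1 "truth"
    P = (np.random.rand(6, 6) < 0.5).astype(float)          # P_ij = 1 if ij observed
    lam = 0.1

    X = cp.Variable((6, 6))
    objective = cp.Minimize(cp.sum_squares(cp.multiply(P, X - A)) + lam * cp.normNuc(X))
    cp.Problem(objective).solve()
    print(np.round(np.linalg.svd(X.value, compute_uv=False), 3))   # roughly rank 1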
