
Constraint Reduction for Linear and Convex Optimization. Presenter: André Tits, University of Maryland, College Park. NIST, March 11, 2014.


  1. Constraint Reduction for Linear and Convex Optimization
     Meiyun He, Jin Jung, Paul Laiu, Sungwoo Park, Luke Winternitz, P.-A. Absil, Dianne O'Leary, André Tits
     Presenter: André Tits, University of Maryland, College Park
     March 11, 2014

  2. Outline
     1. Constraint Reduction for LP: Basic Ideas
     2. Constraint Reduction for LP: An Aggressive Approach
        - Selection of the Working (Q) Set, and Convergence Properties
        - Addressing "Rank Degeneracy"
        - Allowing Infeasible Starting Points
        - Extension to Convex Quadratic Optimization (CQP)
        - Numerical Results and Applications
     3. Constraint Reduction for SDP: A More Robust, Polynomial-Time Approach
        - Block-Structured SDP
        - Constraint-Reduction Scheme
        - Special Case: LP
        - Polynomial Convergence
     4. Discussion

  3. This talk is an overview of work carried out in our research group over the past few years. For more details, see:
     - Tits, Absil, Woessner, "Constraint Reduction for Linear Programs with Many Inequality Constraints", SIOPT, 2006.
     - Jung, O'Leary, Tits, "Adaptive Constraint Reduction for Training Support Vector Machines", ETNA, 2008.
     - Jung, O'Leary, Tits, "Adaptive Constraint Reduction for Convex Quadratic Programming", COAP, 2012.
     - Winternitz, Nicholls, Tits, O'Leary, "A Constraint-Reduced Variant of Mehrotra's Predictor-Corrector Algorithm", COAP, 2012.
     - He, Tits, "Infeasible Constraint-Reduced Interior-Point Methods for Linear Optimization", GOMS, 2012.
     - Winternitz, Tits, Absil, "Addressing Rank Degeneracy in Constraint-Reduced Interior-Point Methods for Linear Optimization", JOTA, 2014.
     - Park, O'Leary, "A Polynomial Time Constraint Reduced Algorithm for Semidefinite Optimization Problems", submitted for publication, 2013.

  4. Outline (repeated; same as slide 2).

  5. Background: Primal-Dual Interior-Point (PDIP) Methods
     Consider the standard-form primal and dual linear programs (LP)

       (P)  min c^T x   s.t.  Ax = b,  x ≥ 0
       (D)  max b^T y   s.t.  A^T y ≤ c   (or:  A^T y + s = c,  s ≥ 0),

     where A ∈ R^{m×n}.

     PDIP search direction: the Newton direction for a perturbed version of the equalities in the Karush-Kuhn-Tucker (KKT) conditions

       A^T y + s = c,   Ax = b,   Xs = τe,   (x, s) ≥ 0,

     namely

       \begin{bmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S & 0 & X \end{bmatrix}
       \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta s \end{bmatrix}
       =
       \begin{bmatrix} c - A^T y - s \\ b - Ax \\ \sigma\mu e - Xs \end{bmatrix},

     where X := diag(x) > 0, S := diag(s) > 0, τ = σμ, μ = x^T s / n > 0, and σ ∈ [0, 1].
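For concreteness, here is a minimal numpy sketch of the Newton step above, assembled exactly as displayed with variables ordered (Δx, Δy, Δs). The function name `pdip_newton_step` and the explicit dense solve are illustrative assumptions for this transcript, not the authors' implementation; a practical code would instead reduce the system as described on the next slide.

```python
import numpy as np

def pdip_newton_step(A, b, c, x, y, s, sigma=0.1):
    """Sketch of one PDIP Newton step on the perturbed KKT equalities.

    Solves the 3x3 block system from the slide for (dx, dy, ds).
    Assumes x > 0, s > 0 and A of full row rank.
    """
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    mu = x @ s / n                                  # duality measure mu = x^T s / n
    e = np.ones(n)

    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],         # dual feasibility row
        [A,                np.zeros((m, m)), np.zeros((m, n))],  # primal feasibility row
        [S,                np.zeros((n, m)), X],                 # perturbed complementarity row
    ])
    rhs = np.concatenate([c - A.T @ y - s,          # dual residual
                          b - A @ x,                # primal residual
                          sigma * mu * e - x * s])  # sigma*mu*e - Xs
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]             # dx, dy, ds
```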

  6. Background: Cost of a PDIP Iteration
     Commonly, the Newton-KKT system is reduced (by block Gaussian elimination) to the symmetric indefinite "augmented" system

       \begin{bmatrix} -X^{-1} S & A^T \\ A & 0 \end{bmatrix}
       \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
       =
       \begin{bmatrix} \star \\ \star \end{bmatrix},

     an (n + m) × (n + m) linear system; or further reduced to the positive definite "normal equations"

       M Δy = [⋆],   where   M := A S^{-1} X A^T.

     The dominant cost is that of forming the "normal matrix"

       M = A S^{-1} X A^T = \sum_{i=1}^{n} (x_i / s_i) a_i a_i^T.

     When A is dense, the work per iteration is approximately n m^2 flops.
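As a small illustration of where the n m^2 cost comes from, the following numpy sketch forms the normal matrix and solves the normal equations by Cholesky factorization. The helper names and the dense data layout are assumptions made for this sketch, not code from the cited papers.

```python
import numpy as np

def normal_matrix(A, x, s):
    """Form M = A S^{-1} X A^T = sum_i (x_i / s_i) a_i a_i^T.

    For dense A of size m x n this touches every column of A, so the work
    is roughly n*m^2 flops, the dominant per-iteration cost on the slide.
    """
    d = x / s                    # diagonal of S^{-1} X
    return (A * d) @ A.T         # same as A @ diag(d) @ A.T, without forming diag(d)

def solve_normal_equations(A, x, s, rhs):
    """Solve M dy = rhs; M is positive definite when A has full row rank and x, s > 0."""
    M = normal_matrix(A, x, s)
    L = np.linalg.cholesky(M)
    return np.linalg.solve(L.T, np.linalg.solve(L, rhs))
```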

  7. Constraint Reduction for LP: Basic Intuition
     We expect many constraints to be redundant or otherwise not very relevant. We could try to guess, at each iteration, a good set Q of constraints to "pay attention to" and ignore the rest.

     [Figure: a two-dimensional dual example with m = 2 and n = 13, showing b and the step Δy; constraints are marked as redundant, irrelevant, or active. Ignoring many constraints leaves a working set of size |Q| = 6, and the figure shows the step Δy in both the full and reduced settings.]

       Full problem:     max b^T y   s.t.  A^T y ≤ c
       Reduced problem:  max b^T y   s.t.  A_Q^T y ≤ c_Q,   where A_Q := [a_{i_1}, a_{i_2}, ...], i_j ∈ Q.

     There was some prior work in the 1990s (Dantzig and Ye [1991], Tone [1993], Den Hertog et al. [1994]) for basic classes of dual interior-point algorithms. Our work focuses on primal-dual interior-point (PDIP) methods.

  8. Constraint Reduction: Basic Scheme
     Given a small set Q of constraints deemed critical at the current iteration, compute a PDIP search direction for the reduced pair

       min c_Q^T x_Q   s.t.  A_Q x_Q = b,  x_Q ≥ 0,
       max b^T y       s.t.  A_Q^T y ≤ c_Q,

     i.e., solve

       \begin{bmatrix} 0 & A_Q^T & I \\ A_Q & 0 & 0 \\ S_Q & 0 & X_Q \end{bmatrix}
       \begin{bmatrix} \Delta x_Q \\ \Delta y \\ \Delta s_Q \end{bmatrix}
       =
       \begin{bmatrix} * \\ * \\ * \end{bmatrix}.

     This system can be reduced (by block Gaussian elimination) to the "normal equations"

       M(Q) Δy = [*],   where   M(Q) := A_Q S_Q^{-1} X_Q A_Q^T.

     The dominant cost is that of forming the reduced "normal matrix"

       M(Q) = A_Q S_Q^{-1} X_Q A_Q^T = \sum_{i \in Q} (x_i / s_i) a_i a_i^T.

     When A is dense, the cost is reduced from n m^2 to |Q| m^2 flops.
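The only change relative to the full normal matrix is that the sum runs over i ∈ Q. A minimal numpy sketch, assuming Q is given as a list of column indices (the function name `reduced_normal_matrix` is ours, not from the papers):

```python
import numpy as np

def reduced_normal_matrix(A, x, s, Q):
    """Form M(Q) = A_Q S_Q^{-1} X_Q A_Q^T = sum_{i in Q} (x_i / s_i) a_i a_i^T.

    Only the |Q| columns of A indexed by Q are touched, so for dense A the
    cost of forming the matrix drops from n*m^2 to |Q|*m^2 flops.
    """
    AQ = A[:, Q]                 # the columns a_i, i in Q
    d = x[Q] / s[Q]              # diagonal of S_Q^{-1} X_Q
    return (AQ * d) @ AQ.T
```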

  9. Outline (repeated; same as slide 2).

  10. Aggressive Approach: Selection of the Working (Q) Set
      [Given a dual-feasible initial point, a dual-feasible sequence is generated.]
      Key requirements on the working set Q_k at iteration k:
      - A_Q must have full row rank, in order for Δy to be well defined.
      - If the sequence {y^k} converges to some limit y', then, for k large enough, all constraints that are active at y' must be contained in Q.
      A sufficient rule for satisfying these requirements: let M be an upper bound on the number of active constraints at any feasible y, and let ε > 0. Among the M smallest slacks s_i^k, include all those with s_i^k < ε, subject to A_Q having full row rank.
      Q may also be augmented with heuristics that address the class of problems or the application under consideration.
      Note that the reduced "normal" matrix M(Q) need not be close to the unreduced matrix M.
      (Ongoing investigation: sort the constraints by s_i^k / s_i^{k-1} instead of s_i^k.)
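A minimal sketch of such a selection rule, assuming the slacks s^k are available as a vector and that Q is completed to full row rank simply by appending next-smallest-slack constraints; the function name, the `M_bound` parameter, and that particular rank-completion strategy are illustrative assumptions, and the cited papers use their own safeguards.

```python
import numpy as np

def select_working_set(A, s, M_bound, eps):
    """Pick a working set Q from the M_bound smallest slacks with s_i < eps,
    then grow Q until A[:, Q] has full row rank (when possible)."""
    m, n = A.shape
    order = np.argsort(s)                       # constraint indices by increasing slack
    Q = [int(i) for i in order[:M_bound] if s[i] < eps]
    k = 0
    # Rank completion: keep appending the next-smallest-slack constraints.
    # matrix_rank is used here for clarity, not efficiency.
    while k < n and np.linalg.matrix_rank(A[:, Q]) < m:
        i = int(order[k])
        if i not in Q:
            Q.append(i)
        k += 1
    return sorted(Q)
```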

  11. Aggressive Approach: Convergence Properties
      If the problem is primal-dual strictly feasible and A has full row rank, then y^k converges to y*, a stationary point.
      If, in addition, a linear-independence condition holds [conjecture: this condition is not needed], then y^k converges to y*, a dual solution.
      If, further, the dual solution set is a singleton, then (x^k, y^k) converges q-quadratically to the unique primal-dual solution.

  12. Aggressive Approach: Addressing "Rank Degeneracy"
      If A_Q is rank deficient, the reduced primal-dual pair

        min c_Q^T x_Q   s.t.  A_Q x_Q = b,  x_Q ≥ 0,
        max b^T y       s.t.  A_Q^T y ≤ c_Q,

      is degenerate, and the reduced PDIP search direction is not well defined.

      [Figure: a two-dimensional illustration with b, a_1, a_2, and A_Q = [a_1, a_2].]

      Enforcing rank(A_Q) = m may require significant effort or make |Q| larger than desired:
      - add constraints until the condition holds, OR
      - use more systematic linear-algebra methods to ensure a good basis is obtained.
      Instead, we propose dealing with the degeneracy via the regularization

        max b^T y − (δ_k / 2) ‖y − y^k‖_2^2   s.t.  A_Q^T y ≤ c_Q.
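For intuition about what the proximal term buys, here is a hedged numpy sketch that assumes the regularization shows up in the reduced normal equations simply as an added δ_k I term, which keeps the system solvable even when A_Q is rank deficient. This is a simplification for illustration only; the precise treatment is in the Winternitz, Tits, Absil JOTA 2014 paper.

```python
import numpy as np

def regularized_reduced_normal_matrix(A, x, s, Q, delta):
    """Sketch: M(Q) + delta * I, assuming (as a simplification) that the
    -(delta/2)*||y - y^k||^2 term contributes delta * I to the reduced
    normal matrix.

    With delta > 0 this matrix is positive definite even if A[:, Q] is
    rank deficient, so the regularized step for Delta y stays well defined.
    """
    AQ = A[:, Q]
    d = x[Q] / s[Q]
    m = A.shape[0]
    return (AQ * d) @ AQ.T + delta * np.eye(m)
```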
