

  1. Optimization Seminar
     Solving Linear Equations in Interior-Point Methods
     Tongryeol Seol
     Computing and Software Department, McMaster University
     March 1, 2004

  2. Outline
     • Linear Equations in IPMs
     • How to Solve Sparse Linear Equations
     • Introduction to McSML
     • Conclusion
     • Reference

  3. Linear Equations in IPMs for LO and QO
     • The key implementation issue of IPMs is the solution of the linear system of equations arising from the Newton system:

       augmented system in LO:  $\begin{bmatrix} -H^{-1} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f_L \\ h_L \end{bmatrix}$

       augmented system in QO:  $\begin{bmatrix} -Q - H^{-1} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f_Q \\ h_Q \end{bmatrix}$

       normal equation in LO:   $A H A^T \, \Delta y = A H f_L + h_L$

       normal equation in QO:   $A (Q + H^{-1})^{-1} A^T \, \Delta y = A (Q + H^{-1})^{-1} f_Q + h_Q$

     • At every iteration, we solve the above system with a different H.
     • Solving these equations takes on average 60–90% of the total time spent solving problems by IPMs.
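     To make the LO normal equation concrete, here is a minimal MATLAB sketch of one solve; A, h, fL, and hL are random placeholders standing in for IPM iteration data, not quantities from any particular implementation.

       % Minimal sketch of one LO normal-equation solve with placeholder data.
       m = 50; n = 200;
       A  = [speye(m), sprandn(m, n - m, 0.1)];   % sparse A with full row rank
       h  = rand(n, 1) + 0.1;                     % diagonal of H, changes each iteration
       fL = randn(n, 1);  hL = randn(m, 1);

       M   = A * spdiags(h, 0, n, n) * A';        % normal equation matrix A*H*A'
       rhs = A * (h .* fL) + hL;                  % right-hand side A*H*fL + hL
       L   = chol(M, 'lower');                    % M is symmetric positive definite
       dy  = L' \ (L \ rhs);                      % Delta y via two triangular solves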

  4. How to Solve Linear Equations in IPMs
     Direct Methods
     • Cholesky factorization (LL^T) is the most popular choice for the normal equation approach.
       – Cholesky factorization is a symmetric variant of LU factorization.
       – The data structures for L can be determined in advance and fixed.
     • LDL^T factorization is used for the augmented system approach.
       – D is a block diagonal matrix if 2×2 pivots are applied, otherwise just a diagonal matrix.
     Iterative Methods
     • The conjugate gradient method is considered an alternative in IPMs for network flow optimization.
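     As a hedged illustration of the two direct factorizations, using MATLAB's built-in chol and ldl on small random placeholder matrices:

       n = 100;
       B = sprandn(n, n, 0.05);
       Mspd = B*B' + speye(n);                    % symmetric positive definite
       Kaug = [-speye(n), B; B', sparse(n, n)];   % symmetric indefinite (augmented-system shape)

       Lc = chol(Mspd, 'lower');                  % LL' succeeds only for SPD matrices
       [Lf, Df, Pf] = ldl(Kaug);                  % LDL' handles the indefinite case
       % off-diagonal nonzeros in D signal that 2x2 pivots were chosen
       fprintf('2x2 pivot off-diagonals in D: %d\n', nnz(tril(Df, -1)));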

  5. Normal Equation Approach
     • In IPMs for LO, the normal equation is popular because AHA^T is positive definite and H is a diagonal matrix.

       normal equation in LO:   $A H A^T \, \Delta y = A H f_L + h_L$
       normal equation in QO:   $A (Q + H^{-1})^{-1} A^T \, \Delta y = A (Q + H^{-1})^{-1} f_Q + h_Q$

     • If there are one or more dense columns in A, AHA^T loses sparsity.

       [Figure: nonzero pattern of A of fit1p (nz = 9,868) vs. nonzero pattern of AA^T of fit1p (nz = 393,129)]

     • What about the normal equation in QO?
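     A small sketch of the dense-column effect, with random placeholder data in the spirit of the fit1p example above:

       m = 300; n = 1000;
       A = sprandn(m, n, 0.01);
       fprintf('nnz(A*A'') without dense column: %d\n', nnz(A*A'));
       A(:, 1) = 1;                               % make one column fully dense
       % a single dense column makes A*A' completely dense (m^2 nonzeros)
       fprintf('nnz(A*A'') with one dense column: %d\n', nnz(A*A'));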

  6. Augmented System Approach
     • The augmented system is not troubled by dense columns or by the Q matrix, but it loses the positive definiteness of the normal equation.
     • There are two approaches to solving the augmented system:
       Symmetric Block Factorization with 2×2 Pivoting
       – Theory: Bunch-Parlett (or Bunch-Kaufman) pivoting strategy
       – Implementation: LINPACK, Harwell Library (MA27, MA47), Fourer and Mehrotra (fo1aug)
       LDL^T Factorization with Regularization
       – Theory: quasidefinite matrices, proximal point algorithm
       – Implementation: Mészáros, Gondzio

  7. Block Factorization with 2 × 2 Pivoting
     • We cannot apply the Cholesky factorization to indefinite matrices,
       e.g. $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$.
     • The Bunch-Kaufman pivoting strategy uses a 2 × 2 pivot when no 1 × 1 pivot is acceptable.
       – First, it tries a partial pivoting search (cases 1–3).
       – If no 1 × 1 pivot is acceptable, it performs a 2 × 2 pivot.

       With α ≈ 0.6404 (= (1+√17)/8); let d be the diagonal entry of the current column, λ the entry of largest absolute value in that column, lying in row r, c the diagonal entry of column r, and σ the entry of largest absolute value in column r:

       case 1. |d| ≥ α|λ|:    use d as a 1 × 1 pivot.
       case 2. |d|σ ≥ α|λ|²:  use d as a 1 × 1 pivot.
       case 3. |c| ≥ α|σ|:    use c as a 1 × 1 pivot.
       case 4. otherwise:     use $\begin{bmatrix} d & \lambda \\ \lambda & c \end{bmatrix}$ as a 2 × 2 pivot.
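     A hedged sketch of the four-case test for the first column of a dense symmetric matrix (illustrative only; it reports the pivot choice and performs no elimination):

       function [ptype, r] = bk_pivot(A, alpha)
       % Bunch-Kaufman pivot test for column 1 of a dense symmetric A.
       if nargin < 2, alpha = (1 + sqrt(17)) / 8; end   % ~ 0.6404
       d = A(1, 1);
       [lambda, r] = max(abs(A(2:end, 1)));   % largest off-diagonal in column 1
       r = r + 1;
       if lambda == 0 || abs(d) >= alpha * lambda
           ptype = 1;                         % case 1: 1x1 pivot on d
           return
       end
       off = abs(A(:, r));  off(r) = 0;
       sigma = max(off);                      % largest off-diagonal in column r
       if abs(d) * sigma >= alpha * lambda^2
           ptype = 1;                         % case 2: 1x1 pivot on d
       elseif abs(A(r, r)) >= alpha * sigma
           ptype = 2;                         % case 3: 1x1 pivot on A(r,r), after a symmetric swap
       else
           ptype = 3;                         % case 4: 2x2 pivot on [d lambda; lambda A(r,r)]
       end
       end

     On the indefinite example above, bk_pivot([0 1; 1 0]) returns ptype = 3: no 1 × 1 pivot is acceptable, so the whole matrix becomes a 2 × 2 pivot.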

  8. Solving Linear Equations by Regularization
     • Quasidefinite matrices are strongly factorizable.
       – A symmetric matrix K is called quasidefinite if it has the form
         $K = \begin{bmatrix} -H & A^T \\ A & F \end{bmatrix}$
         where H and F are positive definite and A has full row rank.
       – Factorizable: there exist D and L such that K = LDL^T.
       – Strongly factorizable: for any permutation matrix P, the factorization PKP^T = LDL^T exists.
     • Regularization can be applied to achieve stability in the linear algebra kernel.
       – Primal-dual regularization:
         $M = \begin{bmatrix} -Q - H^{-1} & A^T \\ A & 0 \end{bmatrix} + \begin{bmatrix} -R_p & 0 \\ 0 & R_d \end{bmatrix}$
         where $-R_p$ makes the (1,1) block more negative definite and $R_d$ makes the (2,2) block more positive definite.
       – However, we keep the regularizations small, since we do not want to change the behaviour of the IPM greatly.
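     A sketch of a regularized augmented-system solve for the LO case (Q = 0), with placeholder data; the weights rp and rd are illustrative, not tuned values:

       m = 40; n = 120;
       A = [speye(m), sprandn(m, n - m, 0.1)];      % full row rank by construction
       h = rand(n, 1) + 0.1;                        % diagonal of H > 0
       rp = 1e-8; rd = 1e-8;                        % small primal/dual regularization

       K = [ -spdiags(1 ./ h + rp, 0, n, n), A';
              A,                 rd * speye(m) ];   % quasidefinite after regularization
       [L, D, P] = ldl(K);                          % strongly factorizable: LDL' exists
       rhs = randn(n + m, 1);
       sol = P * (L' \ (D \ (L \ (P' * rhs))));     % solve K*sol = rhs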

  9. Sparse Gaussian Elimination
     • First step of Gaussian elimination:
       $M = \begin{bmatrix} \alpha & w^T \\ v & C \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ v/\alpha & I \end{bmatrix} \begin{bmatrix} \alpha & w^T \\ 0 & C - \frac{v w^T}{\alpha} \end{bmatrix}$
       (a unit lower triangular matrix times an upper triangular matrix)
       – Repeating Gaussian elimination on the Schur complement $C - \frac{v w^T}{\alpha}$ results in a factorization M = LU.
       – In the symmetric case, U = DL^T with D a diagonal matrix, so M = LDL^T.
     • For sparse M: if v and w are not zero, $C - \frac{v w^T}{\alpha}$ is generally nonzero where C is zero; these new nonzeros are called fill-ins.

       [Figure: sparsity pattern of an 8 × 8 example before and after one elimination step, showing the fill-in produced.]

     • A different order of pivoting can reduce the number of fill-ins.
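     The classic arrowhead example makes the pivot-order effect concrete (placeholder size; eliminating the dense corner first fills L completely, while reversing the order produces no fill at all):

       n = 100;
       Arrow = speye(n);
       Arrow(1, :) = 1;  Arrow(:, 1) = 1;  Arrow(1, 1) = n;  % SPD arrowhead matrix
       L1 = chol(Arrow, 'lower');              % dense row/column eliminated first
       p  = n:-1:1;                            % reversed ordering
       L2 = chol(Arrow(p, p), 'lower');        % dense row/column eliminated last
       fprintf('nnz(L): natural %d, reversed %d\n', nnz(L1), nnz(L2));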

 10. Control of Nonzeros (Fill-ins)
     • When numerical stability is not an issue, ordering, symbolic factorization, and numerical factorization can be performed as separate phases.
     • Ordering permutes equations and variables to reduce fill in L.
       – Finding an optimal permutation is an NP-complete problem.
       – Local heuristics (e.g. minimum degree ordering)
       – Global heuristics (e.g. nested dissection)
     • Symbolic factorization determines the locations of the nonzeros in L, so we can set up the data structures for storing the nonzeros of L in advance. (A sketch of the three phases follows.)
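     A sketch of the three phases on an SPD placeholder matrix, using the MATLAB built-ins symamd (a minimum degree variant) and symbfact (symbolic factorization):

       B = sprandn(400, 400, 0.01);
       M = B*B' + speye(400);                  % SPD test matrix

       p     = symamd(M);                      % 1. ordering
       count = symbfact(M(p, p));              % 2. symbolic: column counts of the factor
       fprintf('predicted nnz of factor = %d\n', sum(count));
       L     = chol(M(p, p), 'lower');         % 3. numerical factorization
       fprintf('actual    nnz of factor = %d\n', nnz(L));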

 11. Local Ordering Heuristics
     • The Markowitz criterion in Gaussian elimination: for a candidate pivot in row i and column j, the Markowitz count is (r_i − 1)(c_j − 1), where r_i and c_j are the numbers of nonzeros in row i and column j. This is the estimated number of elements that become nonzero if that pivot is chosen.
     • Minimum degree ordering is the symmetric variant of Markowitz ordering: the Markowitz count becomes (r_i − 1)², so it suffices to pick the pivot of minimum degree r_i − 1.
     • Graph representation of the sparsity pattern of a matrix:
       – Nodes: on-diagonal elements
       – Edges: off-diagonal elements
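     A tiny sketch computing the Markowitz count of every nonzero of a sparse placeholder matrix (numerical stability of the pivot value is ignored here):

       A = sprandn(8, 8, 0.3) + speye(8);
       r = full(sum(A ~= 0, 2));               % nonzeros per row,    r_i
       c = full(sum(A ~= 0, 1))';              % nonzeros per column, c_j
       [i, j] = find(A);
       mc = (r(i) - 1) .* (c(j) - 1);          % Markowitz count of each candidate pivot
       [~, k] = min(mc);
       fprintf('best pivot at (%d,%d), count %d\n', i(k), j(k), mc(k));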

 12. Global Ordering Heuristics
     • Nested dissection ordering
       – Its roots are in finite-element substructuring.
       – It brings the matrix into doubly-bordered block diagonal form.
       – It finds a small separator that disconnects the given graph into components of approximately equal size, and then recurses on the components.

       [Figure: a 9 × 9 grid and the corresponding 81 × 81 matrix, recursively partitioned into blocks M1, M2 and sub-blocks M11, M12, M21, M22 by nested dissection.]
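     The slide's grid example can be reproduced with MATLAB built-ins; numgrid/delsq build the 81 × 81 grid Laplacian, and dissect (available in newer MATLAB releases) computes a nested dissection ordering:

       G = numgrid('S', 11);                   % square grid with 9x9 interior points
       A = delsq(G);                           % 81x81 sparse SPD Laplacian
       p = dissect(A);                         % nested dissection ordering
       fprintf('nnz(L): natural %d, nested dissection %d\n', ...
               nnz(chol(A, 'lower')), nnz(chol(A(p, p), 'lower')));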

 13. Notion of Supernode
     • Consecutive columns with identical sparsity structure can often be found in L.
     • A supernode is a group of consecutive columns { j, j+1, …, j+t } such that
       – columns j to j+t have a dense diagonal block, and
       – columns j to j+t have an identical sparsity pattern below row j+t.
     • Benefits of supernodes (a detection sketch follows this list):
       – They reduce inefficient indirect addressing,
       – take advantage of cache memory, and
       – allow a compact representation of the sparsity structure of L.
       – We can expect about a 30% speed-up by using supernode structures.
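     A hedged sketch that partitions the columns of a sparse lower-triangular factor into supernodes by a pairwise pattern check (illustrative; production supernodal codes detect them from the elimination tree instead):

       function starts = find_supernodes(L)
       % Return the first column of each supernode of the factor L.
       n = size(L, 1);
       starts = 1;
       for j = 1:n-1
           samePattern = isequal(find(L(j+2:n, j)), find(L(j+2:n, j+1)));
           denseBlock  = L(j+1, j) ~= 0;       % diagonal block must be dense
           if ~(samePattern && denseBlock)
               starts(end+1) = j + 1;          %#ok<AGROW> start a new supernode
           end
       end
       end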

 14. Introduction to McSML
     • McSML consists of 11 C-MEX files for the solution of linear equations in IPMs.
     • ANALYSIS
       – Minimum (external) degree ordering with multiple elimination
     • FACTORIZE and SOLVE
       – Left-looking Cholesky / LDL^T factorization
       – Supernodal techniques: compact row indices, loop unrolling, etc.
       – Iterative refinement: conjugate gradient method
     • MISC.
       – We maintain A and A^T together.
       – Normal equation builder and augmented system builder.
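     A hedged sketch, with MATLAB built-ins and placeholder data, of factor-preconditioned conjugate gradient refinement in the spirit of McSML's refinement step (this is not McSML's actual code):

       m = 60; n = 180;
       A = [speye(m), sprandn(m, n - m, 0.1)];
       h = rand(n, 1) + 0.1;
       M = A * spdiags(h, 0, n, n) * A';
       rhs = randn(m, 1);

       L  = chol(M, 'lower');
       x0 = L' \ (L \ rhs);                    % direct solve (may carry rounding error)
       % a few CG steps, reusing the Cholesky factor as the preconditioner
       x  = pcg(M, rhs, 1e-14, 5, L, L', x0);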

 15. Normal Equation Solver of McSML

       AT = sm_trans(A);                      % compute A' (A and A' are maintained together)
       P  = sf_mmd_ne(A);                     % minimum degree ordering; P holds the ordering info
       [L,SINFO] = sf_scc_ne(A,AT,P);         % build data structures for L and create the supernode structures (SINFO)
       D  = nf_aat(A,AT,H,P,L,SINFO);         % compute A*H*A'
       nf_scc_ne(L,SINFO,D);                  % numerical factorization
       x  = nf_sub_ne(A,H,L,SINFO,D,P,rhs);   % solve A*H*A'*x = rhs and refine the solution

     We assume that every matrix is sparse and every vector is full.

 16. Augmented System Solver of McSML

       AT = sm_trans(A);                          % compute A'
       P  = sf_mmd_as(A,AT,H,F);                  % minimum degree ordering for the augmented system
       [L,SINFO] = sf_scc_as(A,AT,H,F,P);         % build data structures for L and the supernodes
       D  = nf_aug(A,AT,H,F,P,L,SINFO,Rp,Rd);     % build [H A'; A F] + [-Rp 0; 0 Rd]
       nf_scc_as(L,SINFO,D);                      % numerical factorization
       x  = nf_sub_as(A,AT,H,F,L,SINFO,D,P,rhs);  % solve the equation and refine the solution
