

  1. LU Factorization with pivoting

  2. What can go wrong with the previous algorithm for LU factorization?

   $$A = \begin{bmatrix} 2 & 8 & 4 & 1 \\ 1 & 4 & 3 & 3 \\ 1 & 2 & 6 & 2 \\ 1 & 3 & 4 & 2 \end{bmatrix}, \qquad M = \begin{bmatrix} 1 & & & \\ 0.5 & & & \\ 0.5 & & & \\ 0.5 & & & \end{bmatrix}, \qquad U = \begin{bmatrix} 2 & 8 & 4 & 1 \\ & & & \\ & & & \\ & & & \end{bmatrix}$$

   After the first elimination step, $\boldsymbol{m}_{21} = (0.5,\, 0.5,\, 0.5)^T$ and $\boldsymbol{u}_{12} = (8,\, 4,\, 1)$, so

   $$\boldsymbol{m}_{21}\boldsymbol{u}_{12} = \begin{bmatrix} 4 & 2 & 0.5 \\ 4 & 2 & 0.5 \\ 4 & 2 & 0.5 \end{bmatrix}, \qquad A_{22} - \boldsymbol{m}_{21}\boldsymbol{u}_{12} = \begin{bmatrix} 0 & 1 & 2.5 \\ -2 & 4 & 1.5 \\ -1 & 2 & 1.5 \end{bmatrix}$$

   The (1,1) entry of the updated block is zero, so the next update for the lower triangular matrix will result in a division by zero! LU factorization fails. What can we do to get something like an LU factorization?
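To make the failure concrete, here is a minimal NumPy sketch (not part of the original slides) of that first elimination step; it reproduces the updated block and exposes the zero pivot:

```python
import numpy as np

# The matrix from the slide; LU without pivoting breaks on it.
A = np.array([[2., 8., 4., 1.],
              [1., 4., 3., 3.],
              [1., 2., 6., 2.],
              [1., 3., 4., 2.]])

m21 = A[1:, 0] / A[0, 0]              # multipliers: [0.5, 0.5, 0.5]
u12 = A[0, 1:]                        # first row of U: [8, 4, 1]
A22 = A[1:, 1:] - np.outer(m21, u12)  # updated trailing block

print(A22)
# [[ 0.   1.   2.5]
#  [-2.   4.   1.5]
#  [-1.   2.   1.5]]
# A22[0, 0] is exactly zero, so the next elimination step would divide by zero.
```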

  3. Pivoting

   Approach:
   1. Swap rows if there is a zero entry on the diagonal.
   2. Even better idea: find the largest entry (by absolute value) in the column and swap its row to the top.

   The entry we divide by is called the pivot. Swapping rows to get a bigger pivot is called (partial) pivoting. Writing one step of the factorization in block form,

   $$\begin{bmatrix} b_{11} & \boldsymbol{b}_{12} \\ \boldsymbol{b}_{21} & \boldsymbol{B}_{22} \end{bmatrix} = \begin{bmatrix} u_{11} & \boldsymbol{u}_{12} \\ u_{11}\,\boldsymbol{m}_{21} & \boldsymbol{m}_{21}\boldsymbol{u}_{12} + \boldsymbol{M}_{22}\boldsymbol{U}_{22} \end{bmatrix}$$

   so before dividing by the pivot $u_{11} = b_{11}$ we find the largest entry (in magnitude) in the first column and swap its row to the top.
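Below is a hedged sketch of LU with partial pivoting in NumPy; it illustrates the idea on the slide and is not the course's reference implementation (the helper name lu_partial_pivot is my own). SciPy's scipy.linalg.lu implements the same factorization, with the convention A = P L U rather than P A = L U.

```python
import numpy as np

def lu_partial_pivot(A):
    """Return P, L, U with P @ A = L @ U, using partial pivoting.
    Illustrative sketch only; no special handling of singular matrices."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Pick the largest-magnitude entry in column k (rows k..n-1) as the pivot.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]      # swap rows of the working matrix
            P[[k, p], :] = P[[p, k], :]        # record the swap in P
            L[[k, p], :k] = L[[p, k], :k]      # swap already-computed multipliers
        # Standard elimination step with the (now largest) pivot.
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return P, L, U

A = np.array([[2., 8., 4., 1.],
              [1., 4., 3., 3.],
              [1., 2., 6., 2.],
              [1., 3., 4., 2.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))   # True: pivoting rescues the factorization
```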

  4. Sparse Systems

  5. Sparse Matrices

   Some types of matrices contain many zeros. Storing all those zero entries is wasteful! How can we efficiently store large matrices without storing tons of zeros?

   • Sparse matrices (vague definition): matrices with few non-zero entries.
   • For practical purposes: an $m \times n$ matrix is sparse if it has $O(\min(m, n))$ non-zero entries.
   • This means roughly a constant number of non-zero entries per row and column.
   • Another definition: “matrices that allow special techniques to take advantage of the large number of zero elements” (J. Wilkinson)
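As a quick illustration of that definition (my own example, assuming SciPy is available), a matrix with roughly a constant number of non-zeros per row stores only a vanishing fraction of its entries:

```python
import scipy.sparse as sp

n = 10_000
A = sp.random(n, n, density=3 / n, format="csr")  # ~3 non-zeros per row on average

print(A.nnz)               # roughly 3 * n stored entries
print(A.nnz / (n * n))     # tiny fraction of the n*n entries a dense array would hold
```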

  6. Sparse Matrices: Goals

   • Perform standard matrix computations economically, i.e., without storing the zeros of the matrix.
   • For typical Finite Element and Finite Difference matrices, the number of non-zero entries is $O(n)$.

  7. Sparse Matrices: MP example

  8. Sparse Matrices

   EXAMPLE: Number of operations required to add two square dense $n \times n$ matrices: $O(n^2)$.
   Number of operations required to add two sparse matrices $\mathbf{B}$ and $\mathbf{C}$: $O(\mathrm{nnz}(\mathbf{B}) + \mathrm{nnz}(\mathbf{C}))$, where $\mathrm{nnz}(\mathbf{Y})$ = number of non-zero elements of a matrix $\mathbf{Y}$.
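A small sketch of that contrast (my own example, using SciPy's sparse types): adding the dense versions touches every one of the n² entries, while adding the sparse versions only walks the stored non-zeros, so the cost scales with nnz(B) + nnz(C):

```python
import scipy.sparse as sp

n = 2_000
B = sp.random(n, n, density=2 / n, format="csr")   # O(n) non-zeros
C = sp.random(n, n, density=2 / n, format="csr")

S = B + C                        # sparse add: work ~ O(nnz(B) + nnz(C))
D = B.toarray() + C.toarray()    # dense add: O(n^2) work and O(n^2) storage

print(B.nnz + C.nnz, "stored non-zeros vs", n * n, "dense entries")
```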

  9. Popular Storage Structures

  10. Dense (DNS)

   shape = (nrow, ncol)

   [Figure: the entries are stored row-wise in one contiguous array — Row 0, Row 1, Row 2, Row 3.]

   • Simple
   • Row-wise storage
   • Easy blocked formats
   • Stores all the zeros

  11. Coordinate Form (COO)

   • Simple
   • Does not store the zero elements
   • Not sorted
   • row and col: arrays of integers
   • data: array of doubles
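A short COO example using scipy.sparse (the entry values here are arbitrary, chosen for illustration): the matrix is described by three parallel arrays row, col, and data, one triple per non-zero entry, in no particular order.

```python
import numpy as np
import scipy.sparse as sp

# Non-zero entries of a 3x4 matrix, given as (row, col, value) triples.
row  = np.array([0, 2, 1, 0])           # integer row indices
col  = np.array([0, 3, 1, 2])           # integer column indices
data = np.array([4.0, 5.0, 7.0, 9.0])   # the non-zero values (doubles)

A = sp.coo_matrix((data, (row, col)), shape=(3, 4))
print(A.toarray())
# [[4. 0. 9. 0.]
#  [0. 7. 0. 0.]
#  [0. 0. 0. 5.]]
```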

  12. Compressed Sparse Row (CSR) format

  13. Compressed Sparse Row (CSR)

   • Does not store the zero elements
   • Fast arithmetic operations between sparse matrices, and fast matrix-vector product
   • col: contains the column indices (array of nnz integers)
   • data: contains the non-zero elements (array of nnz doubles)
   • rowptr: contains the row offsets (array of nrow + 1 integers, one offset per row plus one)
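The same matrix as in the COO example, now in CSR form with scipy.sparse (SciPy names the three arrays data, indices, and indptr, corresponding to the slide's data, col, and rowptr), together with the fast matrix-vector product:

```python
import numpy as np
import scipy.sparse as sp

# Same 3x4 matrix as in the COO example, stored row by row.
data   = np.array([4.0, 9.0, 7.0, 5.0])  # the nnz non-zero values
col    = np.array([0, 2, 1, 3])          # column index of each value (nnz integers)
rowptr = np.array([0, 2, 3, 4])          # row i lives in data[rowptr[i]:rowptr[i+1]]

A = sp.csr_matrix((data, col, rowptr), shape=(3, 4))

x = np.array([1.0, 2.0, 3.0, 4.0])
print(A @ x)    # [31. 14. 20.]  -- matrix-vector product in O(nnz) time
```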
