Reduced order modeling and numerical linear algebra



  1. Reduced order modeling and numerical linear algebra

Akil Narayan, Department of Mathematics and Scientific Computing and Imaging (SCI) Institute, University of Utah. February 7, 2020, ICERM.

  2. Continuous ↔ discrete analogies

Most standard techniques for reduced basis methods can be understood from numerical linear algebra:

- Kolmogorov n-widths ↔ singular value decompositions
- Reduced basis methods ↔ QR decompositions
- Empirical interpolation methods ↔ LU decompositions

  3. Kolmogorov n-widths are (essentially) singular values

  4. Singular value decompositions

Let $A \in \mathbb{R}^{M \times N}$, with $M \gg N$. We will think of the columns of $A$ as snapshots:

$$A := \begin{pmatrix} a_1 & a_2 & \cdots & a_N \end{pmatrix}.$$

The SVD of $A$ is $A = U \Sigma V^T$, where $U$ and $V$ are orthogonal $M \times M$ and $N \times N$ matrices, respectively, and $\Sigma$ is a diagonal matrix with non-negative entries. We'll use the following non-standard (zero-based) notation for the entries of $\Sigma$: $\sigma_0 \ge \sigma_1 \ge \cdots \ge \sigma_{N-1}$.
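As a concrete illustration (not part of the slides), here is a minimal numpy sketch of the SVD of a tall snapshot matrix; the random test matrix and library calls are assumptions for demonstration only.

```python
# Minimal sketch, assuming numpy: the (thin) SVD of a tall matrix A with M >> N.
# In a ROM setting the columns would be solution snapshots; here they are random.
import numpy as np

M, N = 2000, 50
A = np.random.randn(M, N)

# full_matrices=False returns the "economy" SVD: U is M x N, Vt is N x N.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back sorted, matching the zero-based notation above:
# s[0] >= s[1] >= ... >= s[N-1].
assert np.all(np.diff(s) <= 0)
assert np.allclose(A, U @ np.diag(s) @ Vt)
```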

  5. Low-rank approximations

Among the nice properties of the SVD is its ability to form low-rank approximations,

$$A_k := U_k \Sigma_k V_k^T, \qquad 1 \le k \le N,$$

where $U_k$ and $V_k$ are $k$-column truncations and $\Sigma_k$ is a $k \times k$ principal submatrix truncation. With $\operatorname{rank}(A_k) = k$, then

$$A_k = \operatorname*{arg\,min}_{\operatorname{rank}(B) \le k} \| A - B \|_\ast$$

for $\ast = 2, F$.

  6. Low-rank approximations (continued)

Equivalently, $A_k$ is the projection of the columns of $A$ onto $\mathcal{R}(U_k)$:

$$A_k = \begin{pmatrix} P_{\mathcal{R}(U_k)} a_1 & P_{\mathcal{R}(U_k)} a_2 & \cdots & P_{\mathcal{R}(U_k)} a_N \end{pmatrix}.$$
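A short numpy sketch of these two facts: the rank-$k$ truncation, its equivalence to projecting the columns onto $\mathcal{R}(U_k)$, and the Eckart–Young optimality in the spectral norm. The test matrix is an arbitrary example with decaying column scales.

```python
# Sketch, assuming numpy: form A_k = U_k Sigma_k V_k^T and check it equals the
# projection of A's columns onto range(U_k); its spectral-norm error is sigma_k.
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 500, 40, 5
A = rng.standard_normal((M, N)) * (0.5 ** np.arange(N))   # columns with decaying scale

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]                # truncated SVD
P_A = U[:, :k] @ (U[:, :k].T @ A)                          # project columns onto range(U_k)
assert np.allclose(A_k, P_A)

# Eckart-Young: no rank-k matrix B does better, and ||A - A_k||_2 = sigma_k
# (zero-based indexing, i.e. s[k]).
print(np.linalg.norm(A - A_k, 2), s[k])
```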

  7. Projections onto arbitrary spaces

What if we project $A$ onto other spaces? If $V \subset \mathbb{R}^M$ is any subspace, we could consider

$$P_V A := \begin{pmatrix} P_V a_1 & P_V a_2 & \cdots & P_V a_N \end{pmatrix}.$$

  8. Projections onto arbitrary spaces (continued)

And we could ask about a certain type of error committed by this approximation:

$$E(V) := \max_{\|x\|_2 = 1} \| A x - P_V A x \|_2.$$

We know $V = \mathcal{R}(U_k)$ does a pretty good job. What about other spaces?
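A small sketch of this comparison (assuming numpy; the random subspace and test matrix are illustrative). Since $E(V)$ is just the spectral norm of $A - P_V A$, it is easy to evaluate for any candidate subspace.

```python
# Sketch: evaluate E(V) = ||A - P_V A||_2 for (i) V = range(U_k) and
# (ii) a random k-dimensional subspace of R^M.
import numpy as np

rng = np.random.default_rng(1)
M, N, k = 500, 40, 5
A = rng.standard_normal((M, N)) * (0.5 ** np.arange(N))

U, s, _ = np.linalg.svd(A, full_matrices=False)

def proj_error(V):
    """E(V) for a subspace given by an orthonormal basis matrix V."""
    return np.linalg.norm(A - V @ (V.T @ A), 2)

V_svd = U[:, :k]                                        # SVD subspace
V_rand, _ = np.linalg.qr(rng.standard_normal((M, k)))   # random orthonormal basis

print("E(range(U_k)) =", proj_error(V_svd), " (equals sigma_k =", s[k], ")")
print("E(random V)   =", proj_error(V_rand))
```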

  9. Optimal projections

For a given rank $k$, an "optimal" projection commits the smallest error:

$$E_k := \min_{V \subset \mathbb{R}^M, \ \dim V \le k} E(V).$$

  10. Optimal projections (continued)

So an extremal characterization of an SVD-based low-rank approximation is

$$\mathcal{R}(U_k) = \operatorname*{arg\,min}_{V \subset \mathbb{R}^M, \ \dim V \le k} \ \max_{\|x\|_2 = 1} \| A x - P_V A x \|_2.$$

  11. Optimal projections (continued)

Or, an (unnecessarily?) pedantic alternative:

$$E_k = \sigma_k(A) = \min_{V \subset \mathbb{R}^M, \ \dim V \le k} \ \max_{\|x\|_2 = 1} \ \min_{v \in V} \| A x - v \|_2.$$

  12. SVD projections

Given $A \in \mathbb{R}^{M \times N}$, the success of a low-rank projection is dictated by the approximation numbers

$$\sigma_k(A) = \min_{V \subset \mathbb{R}^M, \ \dim V \le k} \ \max_{\|x\|_2 = 1} \ \min_{v \in V} \| A x - v \|_2.$$

More precisely, it is dictated by fast decay of these numbers as $k$ increases.

  13. SVD projections (continued)

These numbers are defined by our choice of metric on the "output" space $\mathbb{R}^M$ and our choice of metric on the "measurement" space $\mathbb{R}^N$. I.e., a generalization might look like

$$\sigma_k\left(A;\ \ell^p(\mathbb{R}^M),\ \ell^q(\mathbb{R}^N)\right) = \min_{\dim V \le k} \ \max_{\|x\|_q = 1} \ \min_{v \in V} \| A x - v \|_p.$$

  14. Kolmogorov n-widths

$$\sigma_n\left(A;\ \ell^p(\mathbb{R}^M),\ \ell^q(\mathbb{R}^N)\right) = \min_{\dim V \le n} \ \max_{\|x\|_q = 1} \ \min_{v \in V} \| A x - v \|_p.$$

These numbers tell us how well the columns of $A$ are $\ell^p$-approximated by a linear space using $\ell^q$ measurements. Another definition might be the maximum column-norm error:

$$\sigma_n\left(A;\ \ell^p(\mathbb{R}^M)\right) = \min_{\dim V \le n} \ \max_{i \in [N]} \ \min_{v \in V} \| A e_i - v \|_p.$$

Great. How do we do all this with functions?

  15. Kolmogorov n-widths (continued)

Let $\mathcal{A}$ be a collection of functions in a Hilbert space $H$. Then one way to talk about concepts similar to ($\ell^2$) singular values is

$$\sigma_n(\mathcal{A}; H) = \inf_{\dim V \le n} \ \sup_{a \in \mathcal{A}} \ \inf_{v \in V} \| a - v \|.$$

This is called the Kolmogorov n-width of $\mathcal{A}$ (with respect to $H$).
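In practice one often estimates such widths from data: sample the family at many parameter values, stack the samples as columns of a snapshot matrix, and read off its singular values. The sketch below (assuming numpy; the family $u(x;\mu) = e^{-\mu x}$ is a made-up example) illustrates the rapid decay that makes reduced bases effective.

```python
# Sketch: estimate the decay of the n-widths of a parametric family by sampling
# snapshots on a grid and computing the singular values of the snapshot matrix.
import numpy as np

x = np.linspace(0.0, 1.0, 400)            # spatial grid ("output" space)
mus = np.linspace(1.0, 3.0, 200)          # parameter samples

# Hypothetical smooth parametric family u(x; mu) = exp(-mu * x).
snapshots = np.column_stack([np.exp(-mu * x) for mu in mus])

sigma = np.linalg.svd(snapshots, compute_uv=False)
print(sigma[:10] / sigma[0])              # rapid decay => a small reduced basis suffices
```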

  16. Reduced basis methods (essentially) perform QR decompositions

  17. Interpolative decompositions

One disadvantage of SVD-based low-rank approximations,

$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_N \end{pmatrix} = U \Sigma V^T,$$

is that we need information from all columns of $A$ to define $U$.

  18. Interpolative decompositions (continued)

One alternative: interpolative decompositions, or matrix skeletonizations. Basic idea: project all columns of $A$ onto a subspace spanned by a few columns. A rank-$n$ column skeletonization of $A$ is

$$B = \underbrace{A_S \left( A_S^T A_S \right)^{\dagger} A_S^T}_{P_{\mathcal{R}(A_S)}} A, \qquad A_S := \begin{pmatrix} A e_{s_1} & A e_{s_2} & \cdots & A e_{s_n} \end{pmatrix},$$

with $S = \{ s_1, \ldots, s_n \} \subset [N]$.
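A sketch of a rank-$n$ column skeletonization (assuming numpy; the columns in $S$ are chosen at random purely for illustration, and the projection is computed by a least-squares solve rather than forming $A_S^T A_S$ explicitly):

```python
# Sketch: project every column of A onto the span of n selected columns A_S,
# i.e. B = P_{range(A_S)} A. Here S is random; choosing S well is the next topic.
import numpy as np

rng = np.random.default_rng(2)
M, N, n = 300, 60, 8
A = rng.standard_normal((M, N)) * (0.7 ** np.arange(N))

S = rng.choice(N, size=n, replace=False)             # column indices {s_1, ..., s_n}
A_S = A[:, S]

coeffs, *_ = np.linalg.lstsq(A_S, A, rcond=None)     # solves min ||A_S X - A|| column-wise
B = A_S @ coeffs                                     # = A_S (A_S^T A_S)^+ A_S^T A

print("skeletonization error ||A - B||_2 =", np.linalg.norm(A - B, 2))
```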

  19. Choosing the columns S

The problem of choosing $S$ that is optimal in some metric is the column subset selection problem. For metrics of interest, it's NP-hard.

  20. Choosing the columns S (continued)

So let's do something else: let's pick columns greedily. Given $S \subset [N]$ of size $n$, we add a column index via the procedure

$$s_{n+1} = \operatorname*{arg\,max}_{j \in [N]} \left\| a_j - P_{\mathcal{R}(A_S)} a_j \right\|_2.$$

This is much cheaper, since I only need to evaluate $N$ vector norms at each step.

  21. Choosing the columns S (continued)

There's already a well-polished algorithm that does this: the QR decomposition.
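A sketch of the greedy selection itself (assuming numpy; `greedy_columns` is a hypothetical helper name), before handing the job to a library QR routine:

```python
# Sketch of the greedy column selection: at each step, add the column whose distance
# to the span of the already-chosen columns is largest, then deflate the residuals.
import numpy as np

def greedy_columns(A, n):
    M, N = A.shape
    S = []
    Q = np.zeros((M, 0))                    # orthonormal basis for span{a_s : s in S}
    R = A.copy()                            # residuals a_j - P_{range(Q)} a_j
    for _ in range(n):
        norms = np.linalg.norm(R, axis=0)   # the N vector norms evaluated per step
        s = int(np.argmax(norms))
        S.append(s)
        q = R[:, s] / norms[s]              # new orthonormal direction
        Q = np.column_stack([Q, q])
        R = R - np.outer(q, q @ R)          # remove the new direction from all residuals
    return S, Q

# Example usage on a random test matrix:
rng = np.random.default_rng(4)
A = rng.standard_normal((300, 60)) * (0.7 ** np.arange(60))
S, Q = greedy_columns(A, 8)
print("selected columns:", S)
```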

  22. The QR decomposition (1/2)

The column-pivoted QR decomposition iteratively computes orthonormal vectors in the range of $A$. At step $j$, the next column is identified as the one whose projected residual is largest:

$$P_{j-1} := Q_{j-1} Q_{j-1}^T, \qquad s_j = \operatorname*{arg\,max}_{i \in [N]} \| a_i - P_{j-1} a_i \|_2,$$

$$q_j := \frac{a_{s_j} - P_{j-1} a_{s_j}}{\left\| a_{s_j} - P_{j-1} a_{s_j} \right\|_2}, \qquad Q_j = \begin{pmatrix} Q_{j-1} & q_j \end{pmatrix}.$$

  23. The QR decomposition (1/2, continued)

The residual

$$r_{j-1} := \left\| a_{s_j} - P_{j-1} a_{s_j} \right\|_2$$

is the largest ($\ell^2$-norm) column mistake we make by choosing $S = \{ s_1, \ldots, s_{j-1} \}$, i.e., by replacing $A \leftarrow P_V A$ with $V := \operatorname{span}\{ a_{s_1}, \ldots, a_{s_{j-1}} \}$.
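This is the pivoting rule used by library column-pivoted QR routines. A sketch (assuming numpy and scipy; the test matrix is illustrative): the pivot order returned by `scipy.linalg.qr` gives $S$, and the magnitudes of the diagonal of $R$ correspond to the residuals $r_{j-1}$ above.

```python
# Sketch: greedy column selection via an off-the-shelf column-pivoted QR.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
A = rng.standard_normal((300, 60)) * (0.7 ** np.arange(60))   # decaying column scales

Q, R, piv = qr(A, mode='economic', pivoting=True)

n = 8
print("selected columns S:", piv[:n])                  # greedy pivot order
print("residuals |R_jj|  :", np.abs(np.diag(R))[:n])   # nonincreasing residual norms
```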

  24. The QR decomposition (2/2)

This algorithm is a greedy algorithm: instead of all-at-once optimization, we optimize one column at a time. Clearly, we don't expect this to perform as well as the optimal SVD-based subspace. But how well does this greedy procedure work in practice?
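One way to get a feel for the answer (a sketch, not from the slides, assuming numpy and scipy): on a snapshot matrix with rapidly decaying singular values, compare the projection error of the greedy (column-pivoted QR) subspace with the optimal value $\sigma_k$ at each rank.

```python
# Sketch: greedy (pivoted-QR) subspace error vs. the optimal SVD error sigma_k.
import numpy as np
from scipy.linalg import qr

x = np.linspace(0.0, 1.0, 400)
mus = np.linspace(1.0, 3.0, 200)
A = np.column_stack([np.exp(-mu * x) for mu in mus])   # hypothetical smooth snapshots

Q, _, piv = qr(A, mode='economic', pivoting=True)
sigma = np.linalg.svd(A, compute_uv=False)

for k in range(1, 9):
    Qk = Q[:, :k]                                      # basis for the first k pivot columns
    greedy_err = np.linalg.norm(A - Qk @ (Qk.T @ A), 2)
    print(f"k={k}:  greedy {greedy_err:.3e}   optimal sigma_k {sigma[k]:.3e}")
```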
