  1. Accurate Eigenvalues and SVDs of Totally Nonnegative Matrices Plamen Koev San Jose State University HMICGA, Dublin, Ireland, June 18, 2018 Supported by SJSU Woodward Fund for Applied Mathematics

  2. Motivation
  ◮ Def: A matrix is Totally Nonnegative (TN) if all minors ≥ 0
  ◮ This talk: All matrix computations with TN matrices possible:
    ◮ to high relative accuracy
    ◮ in floating point arithmetic
    ◮ at no extra cost
  ◮ Connection to computer-aided geometric design, e.g., "When converting a curve expressed in a B-spline expansion into its Bézier form, corner cutting of the B-spline control polygon leads to the Bézier points exactly when the Bézier matrix is totally positive." Ref: Corner cutting algorithms for the Bézier representation of free form curves, Goodman, Micchelli, Linear Algebra Appl. 1998.

  3. Examples of TN matrices
  ◮ Vandermonde, Hilbert, Pascal:

        1  1  1   1        1    1/2  1/3  1/4        1  1  1   1
    V = 1  2  4   8    H = 1/2  1/3  1/4  1/5    P = 1  2  3   4
        1  3  9  27        1/3  1/4  1/5  1/6        1  3  6  10
        1  5 25 125        1/4  1/5  1/6  1/7        1  4 10  20

  ◮ Also Cauchy, Said–Ball, etc.
  ◮ Ubiquitous in practice: occupy an orthant of n²-dimensional space when properly parameterized (as we will see)
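The defining property (all minors ≥ 0) can be checked by brute force for a small matrix. Below is a short Python sketch, not from the talk (the helper `det` is ours), that enumerates every minor of the 4 × 4 Pascal matrix above and confirms none is negative:

```python
from itertools import combinations, permutations

def det(M):
    # Exact determinant by the Leibniz formula -- fine for tiny submatrices
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the permutation's sign
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        p = sign
        for i in range(n):
            p *= M[i][perm[i]]
        total += p
    return total

P = [[1, 1, 1, 1], [1, 2, 3, 4], [1, 3, 6, 10], [1, 4, 10, 20]]
n = 4
minors = [det([[P[i][j] for j in cols] for i in rows])
          for k in range(1, n + 1)
          for rows in combinations(range(n), k)
          for cols in combinations(range(n), k)]
print(min(minors))  # 1 -- all 69 minors are positive, so P is TN
```

Enumerating minors is exponential in n, of course; the point of the talk is that the bidiagonal decomposition certifies total nonnegativity without any such check.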

  4. The matrix eigenvalue problem
  ◮ Goal: compute all eigenvalues of a TN matrix in floating point arithmetic
  ◮ Including the zero Jordan structure!
  ◮ Very hard in general: "The Jordan form is useful theoretically but is very hard to compute in a numerically stable fashion..." James Demmel, Applied Numerical Linear Algebra.
  ◮ Why is that?

  5. Floating point arithmetic
  ◮ Finite, countably many floating point numbers representing the infinite, uncountable R
  ◮ Roundoff errors could make equal eigenvalues different and destroy the Jordan structure!
  ◮ Even accurate eigenvalues alone are problematic
  ◮ The determinant and eigenvalues of [1 3; 3 9] (exactly 0 and {0, 10}):

    >> det([1 3; 3 9])
    ans = -4.9960e-16
    >> eig([1 3; 3 9])
    ans =
       1.1102e-16
       1.0000e+01

  6. Eigenvalues of Pascal Matrix (which is TN)

          1  1  1  1 ···
          1  2  3  4 ···
    P_n = 1  3  6 10 ···
          1  4 10 20 ···
          :  :  :  :

    cond(P_40) = 6 × 10^44

  7. Eigenvalues of 40 × 40 Pascal matrix
  [Figure: |λ_i| on a log scale (10^−30 to 10^30) vs. eigenvalue index (0 to 40), comparing eig with the accurate algorithm; a reference line marks λ_max × macheps.]

  8. Absolute vs. Relative Accuracy
  ◮ Absolute accuracy: |x − x̂| ≤ ε
    ◮ Depends on the magnitude of x
    ◮ ε = 10^−4 means what?
    ◮ Does x equal distance between planets or between molecules?
  ◮ Relative accuracy: |x − x̂| ≤ ε|x|
    ◮ Works fine regardless of magnitude of x
    ◮ ε = 10^−4 means x̂ has 4 correct decimal digits!
    ◮ What if x = 0?

  9. Reason accuracy is lost in floating point arithmetic
  ◮ fl(a ⊙ b) = (a ⊙ b)(1 + δ), ⊙ ∈ {+, −, ×, /}
  ◮ Relative accuracy preserved in ×, +, /. Proof: the (1 + δ) factors accumulate multiplicatively
  ◮ Subtractions of approximate quantities are dangerous:

      .123456789xxx
    − .123456789yyy
      .000000000zzz

  ◮ Subtraction of exact initial data is OK!
  ◮ If all other subtractions are avoided, we get accuracy
  ◮ Exactly what we do with TN matrices
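The danger of subtracting nearby approximate quantities can be seen in a short experiment (a Python sketch, not from the talk): computing 1 − cos(x) for small x directly loses every correct digit to cancellation, while the algebraically equivalent, subtraction-free form 2·sin(x/2)² stays fully accurate.

```python
import math

x = 1e-8

# Direct formula: cos(1e-8) = 1 - 5e-17 + ... rounds to exactly 1.0 in
# double precision, so the subtraction cancels all correct digits.
direct = 1.0 - math.cos(x)

# Subtraction-free rewrite of the same quantity: accurate to full precision.
stable = 2.0 * math.sin(x / 2.0) ** 2

print(direct)  # 0.0 -- no correct digits
print(stable)  # ~5e-17, close to the true value x**2/2
```

Here the inputs are exact; the cancellation happens between a rounded intermediate (cos x) and the exact 1. This is precisely the kind of subtraction the TN algorithms avoid.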

  10. Question:
  ◮ How do we know if a matrix is TN?

     1   2   6
     4  13  69
    28 131 852

  11. Question:
  ◮ How do we know if a matrix is TN?

     1   2   6     1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
     4  13  69  =  0 1 0  ·  4 1 0  ·  0 5 0  ·  0 1 6  ·  0 1 3
    28 131 852     0 7 1     0 8 1     0 0 9     0 0 1     0 0 1

  ◮ Product of TN bidiagonals

  12. Question:
  ◮ How do we know if a matrix is TN?

     1   2   6     1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
     4  13  69  =  0 1 0  ·  4 1 0  ·  0 5 0  ·  0 1 6  ·  0 1 3
    28 131 852     0 7 1     0 8 1     0 0 9     0 0 1     0 0 1

  ◮ Product of TN bidiagonals
  ◮ In general: A = L_1 ··· L_{n−1} · D · U_{n−1} ··· U_1, where the L_i are lower bidiagonal, D is diagonal, and the U_i are upper bidiagonal
  ◮ Cauchy–Binet: TN × TN = TN
  ◮ Each entry of the bidiagonal factors is a ratio of products of minors of A: minor_1(A)·minor_3(A) / (minor_2(A)·minor_4(A))

  13. Question:
  ◮ How do we know if a matrix is TN?

     1   2   6     1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
     4  13  69  =  0 1 0  ·  4 1 0  ·  0 5 0  ·  0 1 6  ·  0 1 3
    28 131 852     0 7 1     0 8 1     0 0 9     0 0 1     0 0 1

    ↓

            1 2 3
    BD(A) = 4 5 6
            7 8 9

  The n² nontrivial entries of BD(A):
  ◮ parameterize the class of nonsingular TN matrices
  ◮ allow for highly accurate computations: if A → B, then BD(A) → BD(B) accurately
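That BD(A) really encodes A can be verified by multiplying out the five bidiagonal factors read off BD(A) = [[1,2,3],[4,5,6],[7,8,9]] (a Python sketch, not from the talk; the `matmul` helper is ours):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The five bidiagonal factors from the slide: the diagonal of BD(A) fills D,
# the subdiagonal entries 7; 4, 8 fill L1, L2, and the superdiagonal
# entries 2, 6; 3 fill U2, U1.
L1 = [[1, 0, 0], [0, 1, 0], [0, 7, 1]]
L2 = [[1, 0, 0], [4, 1, 0], [0, 8, 1]]
D  = [[1, 0, 0], [0, 5, 0], [0, 0, 9]]
U2 = [[1, 2, 0], [0, 1, 6], [0, 0, 1]]
U1 = [[1, 0, 0], [0, 1, 3], [0, 0, 1]]

A = matmul(matmul(matmul(matmul(L1, L2), D), U2), U1)
print(A)  # [[1, 2, 6], [4, 13, 69], [28, 131, 852]]
```

All factors are nonnegative bidiagonals, so by Cauchy–Binet the product is TN; the decomposition is the certificate.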

  14. Starting point is the bidiagonal decomposition

        1  0  0     1   0  0     d1  0  0     1 u12  0      1  0  0
    A = 0  1  0  ·  l21 1  0  ·  0  d2  0  ·  0  1  u23  ·  0  1 u13
        0 l31 1     0 l32  1     0   0 d3     0  0   1      0  0  1

  ◮ These are the parameters the user must deliver
  ◮ Readily available for the Vandermonde matrix A = [x_i^(j−1)]_{i,j=1}^n:

    d_i = ∏_{j=1}^{i−1} (x_i − x_j),    l_ik = ∏_{j=n−k}^{i−1} (x_{i+1} − x_{j+1}) / (x_i − x_j),    u_ik = x_{i+n−k}
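The pivot formula d_i = ∏_{j<i} (x_i − x_j) can be sanity-checked against the defining property that d_i is the ratio of consecutive leading principal minors of V. A Python sketch in exact rational arithmetic (not from the talk; the `det` helper is ours), using the nodes 1, 2, 3, 5 from the earlier example:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for tiny matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

x = [Fraction(v) for v in (1, 2, 3, 5)]               # Vandermonde nodes
n = len(x)
V = [[x[i] ** j for j in range(n)] for i in range(n)]

for i in range(1, n + 1):
    d_i = Fraction(1)
    for j in range(i - 1):
        d_i *= x[i - 1] - x[j]                        # d_i = prod_{j<i}(x_i - x_j)
    minor_i = det([row[:i] for row in V[:i]])         # leading principal minor
    minor_prev = det([row[:i - 1] for row in V[:i - 1]]) if i > 1 else Fraction(1)
    assert d_i == minor_i / minor_prev
print("d_i formula matches minor ratios")
```

Every d_i is a product of differences of exact input nodes, so it is computable to high relative accuracy with no dangerous subtractions.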

  15. Starting point is the bidiagonal decomposition

        1  0  0     1   0  0     d1  0  0     1 u12  0      1  0  0
    A = 0  1  0  ·  l21 1  0  ·  0  d2  0  ·  0  1  u23  ·  0  1 u13
        0 l31 1     0 l32  1     0   0 d3     0  0   1      0  0  1

  ◮ Cauchy A = [1/(x_i + y_j)]_{i,j=1}^n:

    d_i = 1/(x_i + y_i) · ∏_{k=1}^{i−1} (x_i − x_k)(y_i − y_k) / ((x_i + y_k)(y_i + x_k))

    l_ik = (x_{n−k} + y_{i−n+k+1})/(x_i + y_{i−n+k+1}) · ∏_{l=n−k}^{i−1} (x_{i+1} − x_{l+1})/(x_i − x_l) · ∏_{l=1}^{i−n+k−1} (x_i + y_l)/(x_{i+1} + y_l)

    u_ik = (y_{n−k} + x_{i−n+k+1})/(y_i + x_{i−n+k+1}) · ∏_{l=n−k}^{i−1} (y_{i+1} − y_{l+1})/(y_i − y_l) · ∏_{l=1}^{i−n+k−1} (y_i + x_l)/(y_{i+1} + x_l)

  16. Starting point is the bidiagonal decomposition

        1  0  0     1   0  0     d1  0  0     1 u12  0      1  0  0
    A = 0  1  0  ·  l21 1  0  ·  0  d2  0  ·  0  1  u23  ·  0  1 u13
        0 l31 1     0 l32  1     0   0 d3     0  0   1      0  0  1

  ◮ Pascal: d_i = l_ij = u_ij = 1
  ◮ Reasonable requirement: show me that your matrix is TN!
  ◮ Critical observation: BD(A) determines all eigenvalues accurately! (The matrix entries do not!)
  ◮ I.e., small relative perturbations in BD(A) cause small relative perturbations in the eigenvalues
  ◮ The eigenvalues "deserve" to be computed accurately
  ◮ The logic: the computed eigenvalues are rational functions of the entries of BD(A)
  ◮ So those rational functions must have only positive coefficients (intuition, not proof)
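The Pascal case is easy to test end-to-end: build the bidiagonal factors from an all-ones BD and multiply them out. A Python sketch (not from the talk; `bd_to_matrix` is our name, and the factor layout generalizes the 3 × 3 pattern shown on the previous slides):

```python
def bd_to_matrix(B):
    """Multiply out A = L_1 ... L_{n-1} . D . U_{n-1} ... U_1 from BD(A) = B.

    L_k is unit lower bidiagonal carrying B[i][i-(n-k)] in position (i, i-1)
    for rows i = n-k, ..., n-1 (0-indexed); U_k is the transposed pattern
    with the entries of B above the diagonal; D = diag(B)."""
    n = len(B)
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    A = I
    for k in range(1, n):                      # L_1, ..., L_{n-1}
        L = [row[:] for row in I]
        for i in range(n - k, n):
            L[i][i - 1] = B[i][i - (n - k)]
        A = matmul(A, L)
    D = [[B[i][i] if i == j else 0 for j in range(n)] for i in range(n)]
    A = matmul(A, D)
    for k in range(n - 1, 0, -1):              # U_{n-1}, ..., U_1
        U = [row[:] for row in I]
        for i in range(n - k, n):
            U[i - 1][i] = B[i - (n - k)][i]
        A = matmul(A, U)
    return A

# An all-ones BD yields the 4 x 4 Pascal matrix:
P = bd_to_matrix([[1] * 4 for _ in range(4)])
print(P)  # [[1, 1, 1, 1], [1, 2, 3, 4], [1, 3, 6, 10], [1, 4, 10, 20]]
```

The same routine reproduces the earlier 3 × 3 example: `bd_to_matrix([[1,2,3],[4,5,6],[7,8,9]])` gives [[1, 2, 6], [4, 13, 69], [28, 131, 852]].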

  17. Eigenvalue algorithm
  ◮ Reduction to tridiagonal form (Cryer '76)
  ◮ Using only one operation: addition/subtraction of one row to the next/previous
  ◮ Zeros are chased in one column/row at a time until the matrix is tridiagonal:

    + + + +     + + + +     + + + +            + + 0 0
    + + + +     + + + +     0 + + +            + + + 0
    + + + +  →  + + + +  →  0 + + +  →  ··· →  0 + + +
    + + + +     0 + + +     0 + + +            0 0 + +

  ◮ λ_i = σ_i², σ_i = singular values of the bidiagonal Cholesky factor
  ◮ Solved accurately by Demmel and Kahan in 1989
  ◮ We implement on BD(A) and not on A!

  18. Cryer's algorithm applied to A
  ◮ To create a 0 in position (3,1) of

    1 2  4
    1 3  9
    1 4 16

  we use the similarity

    1  0 0     1 2  4     1 0 0     1  6 4
    0  1 0  ·  1 3  9  ·  0 1 0  =  1 12 9
    0 −1 1     1 4 16     0 1 1     0  8 7
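The similarity can be replayed in exact integer arithmetic (a Python sketch, not from the talk; the `matmul` helper is ours):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A     = [[1, 2, 4], [1, 3, 9], [1, 4, 16]]
E     = [[1, 0, 0], [0, 1, 0], [0, -1, 1]]  # subtract row 2 from row 3
E_inv = [[1, 0, 0], [0, 1, 0], [0,  1, 1]]  # its inverse adds it back

B = matmul(matmul(E, A), E_inv)
print(B)  # [[1, 6, 4], [1, 12, 9], [0, 8, 7]]
```

Note the subtraction E performs on the entries of A is exactly what the BD representation lets the algorithm avoid.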

  19. Now, Cryer's applied to BD(A)
  ◮ Subtracting a multiple of one row from the next to create a 0 is equivalent to setting an entry of the BD to 0:

    1 2  4     1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
    1 3  9  =  0 1 0  ·  1 1 0  ·  0 1 0  ·  0 1 3  ·  0 1 2
    1 4 16     0 1 1     0 1 1     0 0 2     0 0 1     0 0 1

    ↓

    1 2 4      1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
    1 3 9   =  0 1 0  ·  1 1 0  ·  0 1 0  ·  0 1 3  ·  0 1 2
    0 1 7      0 0 1     0 1 1     0 0 2     0 0 1     0 0 1

  ◮ No arithmetic performed!
  ◮ New matrix still TN
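The "no arithmetic" claim is easy to confirm: zero the single multiplier in the first lower factor and re-multiply (a Python sketch, not from the talk; helpers are ours):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def product(Ms):
    A = Ms[0]
    for M in Ms[1:]:
        A = matmul(A, M)
    return A

L1 = [[1, 0, 0], [0, 1, 0], [0, 1, 1]]
L2 = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
D  = [[1, 0, 0], [0, 1, 0], [0, 0, 2]]
U2 = [[1, 2, 0], [0, 1, 3], [0, 0, 1]]
U1 = [[1, 0, 0], [0, 1, 2], [0, 0, 1]]

A_before = product([L1, L2, D, U2, U1])
L1[2][1] = 0                           # set the BD entry to zero -- no flops
A_after = product([L1, L2, D, U2, U1])
print(A_before)  # [[1, 2, 4], [1, 3, 9], [1, 4, 16]]
print(A_after)   # [[1, 2, 4], [1, 3, 9], [0, 1, 7]]
```

Since the remaining factors are untouched nonnegative bidiagonals, the new matrix is still TN, with no rounding introduced.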

  20. Next: Completing the similarity
  ◮ Adding a multiple of one row/col to the next/previous is done by changing the entries of the BD only:

    1 2 4     1 0 0     1 0 0     1 0 0     1 2 0     1 0 0
    1 3 9  =  0 1 0  ·  1 1 0  ·  0 1 0  ·  0 1 3  ·  0 1 2
    0 1 7     0 0 1     0 1 1     0 0 2     0 0 1     0 0 1

    ↓

    1  6 4     1 0 0     1  0  0     1 0  0      1 6  0      1 0  0
    1 12 9  =  0 1 0  ·  1  1  0  ·  0 6  0   ·  0 1 1/6  ·  0 1 2/3
    0  8 7     0 0 1     0 4/3 1     0 0 1/3     0 0  1      0 0  1

  ◮ New entries are rational functions with positive coefficients
  ◮ No subtractions ⇒ accuracy
  ◮ New matrix is still TN (Cauchy–Binet)
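The updated factors can be checked exactly with rational arithmetic: a Python sketch (not from the talk; helper names are ours) multiplying bidiagonal factors with the updated entries 4/3, 6, 1/3, 1/6, 2/3 and confirming they reproduce the matrix after the completed similarity.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Bidiagonal factors after completing the similarity:
L1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]           # the zeroed factor stays zero
L2 = [[1, 0, 0], [1, 1, 0], [0, F(4, 3), 1]]
D  = [[1, 0, 0], [0, 6, 0], [0, 0, F(1, 3)]]
U2 = [[1, 6, 0], [0, 1, F(1, 6)], [0, 0, 1]]
U1 = [[1, 0, 0], [0, 1, F(2, 3)], [0, 0, 1]]

B = L1
for M in (L2, D, U2, U1):
    B = matmul(B, M)
print([[int(v) for v in row] for row in B])  # [[1, 6, 4], [1, 12, 9], [0, 8, 7]]
```

Every new entry is built from the old ones by multiplications, divisions, and same-sign additions only, which is why the update preserves high relative accuracy.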
