

  1. Rank-Revealing QR factorization. F. Guyomarc’h, D. Mezher and B. Philippe. CSDA 2005, Cyprus

  2. Outline
     • Introduction
     • Classical Algorithms
       ⋆ Full matrices
       ⋆ Sparse matrices
     • Rank-Revealing QR
     • Conclusion

  3. Situation of the problem
     See Björck, SIAM 1996: Numerical Methods for Least Squares Problems.
     Data: A ∈ R^{m×n}, b ∈ R^m.
     Problem P: find x ∈ S such that ‖x‖ is minimum, where
     S = { x ∈ R^n : ‖b − Ax‖ is minimum }   (‖·‖ ≡ ‖·‖₂).
     The solution is unique: x̂ = A⁺ b.
     Property: x ∈ S  iff  Aᵀ(b − Ax) = 0  iff  AᵀA x = Aᵀ b.
     The last equation is consistent since R(AᵀA) = R(Aᵀ).
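  A minimal MATLAB check of this setup (our own sketch, not from the talk; the
  test matrix and vector are arbitrary): the minimum-norm solution x = A⁺b also
  satisfies the normal equations, which are consistent.

      m = 8; n = 5;
      A = randn(m, n) * diag([1 1 1 1e-8 1e-8]);  % nearly rank-deficient
      b = randn(m, 1);
      x = pinv(A) * b;                  % x = A+ b, minimum-norm LS solution
      norm(A' * (b - A * x))            % A'(b - Ax) = 0 up to roundoff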

  4. Rank-Revealing QR factorization
     Theorem: let A ∈ R^{m×n} with rank(A) = r (r ≤ min(m, n)).
     There exist Q, E, R₁₁ and R₁₂ such that
     • Q ∈ R^{m×m} is orthogonal,
     • E ∈ R^{n×n} is a permutation,
     • R₁₁ ∈ R^{r×r} is upper triangular with positive diagonal entries,
     • R₁₂ ∈ R^{r×(n−r)},
     and  AE = Q [ R₁₁ R₁₂ ; 0 0 ].
     The factorization is not unique. Let RRQR be any of them.
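  As an illustration (our sketch, not part of the slides): MATLAB’s
  column-pivoted qr produces one such factorization, up to the signs of the
  diagonal of R.

      A = randn(8, 3) * randn(3, 6);    % rank 3 by construction
      [Q, R, E] = qr(A);                % A*E = Q*R, E a permutation matrix
      norm(A * E - Q * R)               % ~ machine precision
      nnz(abs(diag(R)) > 1e-10 * abs(R(1, 1)))   % crude numerical rank: 3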

  5. Actually, if we consider a complete orthogonal decomposition
     A = Q [ T 0 ; 0 0 ] Vᵀ,
     where Q and V are orthogonal and T is triangular with positive diagonal
     entries, we have  A⁺ = V [ T⁻¹ 0 ; 0 0 ] Qᵀ.
     Such a factorization can be obtained from the previous one by performing
     a QR factorization of [ R₁₁ᵀ ; R₁₂ᵀ ].
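  A hedged MATLAB sketch of this construction (variable names are ours, and r
  is assumed known from the RRQR step):

      A = randn(8, 3) * randn(3, 6);  r = 3;    % rank-3 example
      [Q, R, E] = qr(A);                        % RRQR: A*E = Q*R
      [V, S] = qr(R(1:r, :)');                  % QR of [R11 R12]': S = [T; 0]
      T = S(1:r, 1:r);                          % r-by-r triangular factor
      Ap = E * V(:, 1:r) * (T' \ Q(:, 1:r)');   % A+ = E*V*[T'^-1; 0]*Q'
      norm(Ap - pinv(A))                        % ~ machine precision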

  6. Column pivoting strategy
     Householder QR factorization using Householder reflections: at step k,
     take v = A(k:m, k) + ‖A(k:m, k)‖ e₁ and apply
     H_k = I − 2 v vᵀ / (vᵀ v)
     to the trailing submatrix. (The slide shows the reflections H₁, H₂
     acting on a 12-row example.)
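  One reflection step in MATLAB (our sketch; the sign factor is a standard
  stabilization not written on the slide):

      x = randn(12, 1);                             % current column
      v = x + sign(x(1)) * norm(x) * eye(12, 1);    % v = x + ||x|| e1
      H = eye(12) - 2 * (v * v') / (v' * v);        % H = I - 2 v v' / (v'v)
      H * x                                         % multiple of e1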

  7. Sparse QR factorization
     Factorizing a sparse matrix implies fill-in in the factors. The situation
     is worse with QR than with LU when updating the trailing matrix:
     • LU: the elementary transformation x → y = (I − α v e_kᵀ) x keeps x
       invariant when x_k = 0;
     • QR: the elementary transformation x → y = (I − α v vᵀ) x keeps x
       invariant only when x ⊥ v.
     Since AᵀA = RᵀR, the QR factorization of A is related to the Cholesky
     factorization of AᵀA. It is known that a symmetric permutation of a
     sparse s.p.d. matrix changes the level of fill-in.
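  A quick numerical check of the two invariance claims (our sketch, arbitrary
  data):

      n = 6; k = 3; v = randn(n, 1); alpha = 2 / (v' * v);
      ek = zeros(n, 1); ek(k) = 1;
      x = randn(n, 1); x(k) = 0;                   % x_k = 0
      norm((eye(n) - alpha * v * ek') * x - x)     % 0: LU transform skips x
      y = randn(n, 1); y = y - v * (v' * y) / (v' * v);   % force y ⊥ v
      norm((eye(n) - alpha * v * v') * y - y)      % 0: reflection skips y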

  8. Therefore, a permutation of the columns of A changes the fill-in of R
     ⇒ there is a conflict between pivoting to minimize fill-in and pivoting
     driven by numerical properties. We choose to decouple the sparse
     factorization phase from the rank-revealing phase. For a standard QR
     factorization of a sparse matrix: multifrontal QR method
     [Amestoy-Duff-Puglisi ’94], routine MA49AD in the HSL library.
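  The effect is easy to see in MATLAB (our sketch; COLAMD stands in for the
  multifrontal ordering used by MA49, and the test matrix is arbitrary):

      A = sprandn(400, 200, 0.02);
      R1 = qr(A);                       % Q-less sparse QR, natural order
      R2 = qr(A(:, colamd(A)));         % same matrix, fill-reducing order
      [nnz(R1), nnz(R2)]                % fill-in of R depends on the ordering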

  9. Column pivoting strategy
     Householder QR with column pivoting (Businger and Golub): at step k, the
     column j_k ≥ k defining the Householder transformation is chosen so that
     ‖A(k:m, j_k)‖ ≥ ‖A(k:m, j)‖ for all j ≥ k.
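  A compact MATLAB sketch of this pivoting loop (function name and details are
  ours, not the authors’ code):

      function [R, piv] = bg_qr(A)
      % Householder QR with Businger-Golub column pivoting.
      [m, n] = size(A); piv = 1:n;
      for k = 1:min(m, n)
          [~, j] = max(vecnorm(A(k:m, k:n)));   % largest trailing norm
          j = j + k - 1;
          A(:, [k j]) = A(:, [j k]); piv([k j]) = piv([j k]);
          x = A(k:m, k);
          v = x + sign(x(1) + (x(1) == 0)) * norm(x) * eye(m-k+1, 1);
          v = v / norm(v);
          A(k:m, k:n) = A(k:m, k:n) - 2 * v * (v' * A(k:m, k:n));
      end
      R = triu(A);
      end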

  10. Some properties of R₁₁
      At step k:
      A^(k) ≡ H_k ··· H₁ A E^(k) = [ R₁₁^(k)  R₁₂^(k) ; 0  A₂₂^(k) ].
      Any column a of [ R₁₂^(k)(k,:) ; A₂₂^(k) ] satisfies R₁₁^(k)(k,k) ≥ ‖a‖.
      Moreover ‖A‖ ≥ ‖R₁₁^(k)‖ ≥ R₁₁^(k)(1,1) and
      R₁₁^(k)(k,k) ≥ σ_min(R₁₁^(k)) ≥ σ_min(A).
      This implies that cond(A) ≥ cond(R₁₁^(k)) ≥ R₁₁^(k)(1,1) / R₁₁^(k)(k,k).
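  These inequalities can be checked numerically (our sketch, arbitrary test
  matrix):

      A = randn(30, 20); [Q, R, E] = qr(A); k = 10;
      R11 = R(1:k, 1:k);
      [cond(A), cond(R11), abs(R11(1, 1) / R11(k, k))]   % non-increasing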

  11. Bad news
      The quantity R₁₁^(k)(1,1) / R₁₁^(k)(k,k) cannot be used as an estimator
      of cond(R₁₁^(k)), since the two can differ by orders of magnitude.
      Example (Kahan’s matrix of order n): for θ ∈ (0, π/2), c = cos θ and
      s = sin θ:
      d = c.^[0:n-1]; M = diag(d)*(eye(n) - s*triu(ones(n),1));
      MATLAB, n = 100, θ = arcsin(0.2):
      σ_min(R₁₁) = 3.7e−9 ≪ R₁₁(100,100) = 1.3e−1.
      Therefore a better estimate of cond(R₁₁^(k)) is needed.
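  The claim is easy to reproduce (our sketch): Kahan’s matrix is already upper
  triangular, so it serves directly as R₁₁.

      n = 100; theta = asin(0.2); c = cos(theta); s = sin(theta);
      M = diag(c.^(0:n-1)) * (eye(n) - s * triu(ones(n), 1));
      [min(svd(M)), abs(M(n, n))]       % ~3.7e-9 versus ~1.3e-1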

  12. Incremental Condition Estimator (ICE) [Bischof ’90], implemented in
      LAPACK: it estimates cond(R₁₁^(k)) from an estimate of cond(R₁₁^(k−1)).
      However, the strategy that stops the factorization when
      cond(R₁₁^(k)) < 1/ε ≤ cond(R₁₁^(k+1))
      may fail in very special situations.
      Counterexample: M = Kahan’s matrix of order 30 with θ = arccos(0.2):
      cond(R₁₁^(16)) < 1/ε ≤ cond(R₁₁^(17))
      indicates a numerical rank equal to 16, although the numerical rank of M
      computed by the SVD is 22.
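  The stopping rule can be replayed with exact condition numbers in place of
  the ICE estimates (our sketch; the exact scan need not halt at the same step
  as the estimator does):

      n = 30; theta = acos(0.2); c = cos(theta); s = sin(theta);
      M = diag(c.^(0:n-1)) * (eye(n) - s * triu(ones(n), 1));
      kappa = arrayfun(@(k) cond(M(1:k, 1:k)), 1:n);
      find(kappa >= 1/eps, 1) - 1            % step where the rule stops
      nnz(svd(M) > eps * max(svd(M)))        % numerical rank by the SVD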

  13. Pivoting strategies
      Convert R to reveal the rank by pushing the singularities towards the
      right end:
      • apply an Incremental Condition Estimator to evaluate σ_max and σ_min
        of R(1:i, 1:i);
      • if σ_max/σ_min > τ, move column i towards the right end and
        re-orthogonalise R using Givens rotations, as in the sketch below.
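  A sketch of the repair step in MATLAB (function name is ours): cycle column
  i to the last position, then chase the created subdiagonal away with Givens
  rotations; the same rotations would also be accumulated into Q.

      function R = push_right(R, i)
      n = size(R, 2);
      R = R(:, [1:i-1, i+1:n, i]);        % column i goes to the end
      for k = i:n-1                       % R is now Hessenberg in rows i:n
          [G, ~] = planerot(R(k:k+1, k)); % 2x2 Givens zeroing R(k+1,k)
          R(k:k+1, k:n) = G * R(k:k+1, k:n);
      end
      end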

  14. Pivoting strategies
      • RRQR uses a set of thresholds to overcome numerical singularities.
      • A predefined set of thresholds might fail. With
        A = [ 2  0      0      0
              0  10⁻⁶   1      1
              0  0      10⁻³  −10⁻³ ],
        rank = 2 if τ = {10⁷}, but rank = 3 if τ = {10⁴, 10⁷}.
      • RRQR adapts the thresholds to avoid failure:
        ⋆ upon completion, check whether ‖R₂₂‖ < σ_max/τ ≃ σ_min;
        ⋆ on failure, insert new thresholds and restart the algorithm.
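  A plain SVD on this example (our sketch; the matrix layout above is our
  reconstruction of the slide) shows that the single threshold is the problem,
  not the matrix: the true numerical rank is 3.

      A = [2  0     0     0
           0  1e-6  1     1
           0  0     1e-3 -1e-3];
      svd(A)'                        % ~ [2, 1.41, 1.41e-3]: rank 3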

  15. Numerical results
      [Figure: condition number versus factorization step for
      kahan(50, acos(0.2)), cond = 2.4721e+49, rank = 20, size 50×50, req = 0.
      Curves: smax/smin; cond from [q,r,p] = qr(A); RRQR with thresholds
      1e3, 1e7, 1e12, 1e25; RRQR with thresholds 1e3, 1e9, 1e25.]

  16. Numerical results
      [Figure: same matrix, kahan(50, acos(0.2)), cond = 2.4721e+49,
      rank = 20, size 50×50. Curves: smax/smin; cond from [q,r,p] = qr(A);
      RRQR with threshold 1e30; RRQR with threshold 1e20; RRQR with
      threshold 1e25, req = 2.]

  17. Second strategy
      Column k+1 is usually not the worst one for the condition number. Can we
      select the column that is worst for the conditioning of R₁₁, and reject
      that worst column to the end? [Diagram: column j is moved past column
      k+1.] We then need to recompute the singular value and vector estimates:
      Reverse ICE.
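  With an exact SVD standing in for the Reverse ICE estimates, the selection
  rule looks like this (our sketch, on a Kahan test matrix):

      n = 30; theta = acos(0.2); c = cos(theta); s = sin(theta);
      R11 = diag(c.^(0:n-1)) * (eye(n) - s * triu(ones(n), 1));
      [~, ~, V] = svd(R11);
      [~, j] = max(abs(V(:, end)))    % column with most weight in the
                                      % sigma_min direction: reject it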

  18. Second strategy
      [Figure: random matrix, n = 500; exact condition number versus the
      estimates with tol = 1e−12 and tol = 1e−7, on a logarithmic scale.]

  19. Conclusion
      Work is still in progress for the case where R is sparse:
      • ICE might fail, and so does the Reverse ICE;
      • using the elimination tree, we can try to keep R as sparse as possible.
