  1. Communication Avoiding Successive Band Reduction
  Nick Knight, Grey Ballard, James Demmel (UC Berkeley), SIAM PP12
  Research supported by Microsoft (Award #024263) and Intel (Award #024894) funding and by matching funding by U.C. Discovery (Award #DIG07-10227). Additional support comes from Par Lab affiliates National Instruments, NEC, Nokia, NVIDIA, and Samsung.

  2. Talk Summary
  - For high performance, we must reformulate existing algorithms to reduce data movement (i.e., avoid communication).
  - Goal: tridiagonalize a symmetric band matrix.
    - Application: the dense symmetric eigenproblem, when only the eigenvalues (no eigenvectors) are wanted.
  - Our improved band reduction algorithm:
    - moves asymptotically less data;
    - speeds up against tuned libraries on a multicore platform, up to 2x serial and 6x parallel.
  - With our band-reduction approach, two-step tridiagonalization of a dense matrix is communication-optimal for all problem sizes.

  3. Motivation
  By communication we mean moving data:
  - within the memory hierarchy on a sequential computer;
  - between processors on a parallel computer.
  [Figure: sequential model (local FAST memory vs. SLOW memory) and parallel model (processors with local memories).]
  Communication is expensive, so our goal is to minimize it:
  - in many cases we need new algorithms;
  - in many cases we can prove lower bounds and optimality.

  4. Direct vs. Two-Step Tridiagonalization
  Application: solving the dense symmetric eigenproblem via reduction to tridiagonal form (tridiagonalization).
  - Conventional approach (e.g., LAPACK) is direct tridiagonalization: A -> T.
  - Two-step approach reduces first to band form, then band to tridiagonal: A -> B -> T.
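As a concrete illustration of the application (a SciPy sketch, not the authors' code; the matrix, bandwidth, and sizes below are made up for illustration): the eigenvalues of a symmetric band matrix can be computed directly from its banded storage, and they agree with the dense solver.

```python
import numpy as np
from scipy.linalg import eig_banded

rng = np.random.default_rng(0)
n, b = 6, 2  # small dimension and bandwidth, chosen only for illustration

# Build a random symmetric matrix with bandwidth b, in dense form.
A = np.zeros((n, n))
for k in range(b + 1):
    v = rng.standard_normal(n - k)
    A += np.diag(v, -k)
    if k:
        A += np.diag(v, k)

# LAPACK-style "lower" banded storage: row k holds the k-th subdiagonal.
ab = np.zeros((b + 1, n))
for k in range(b + 1):
    ab[k, : n - k] = np.diagonal(A, -k)

w_band = eig_banded(ab, lower=True, eigvals_only=True)  # works on the band only
w_dense = np.linalg.eigvalsh(A)                         # dense reference
print(np.allclose(w_band, w_dense))  # True
```

The banded solver never forms the dense matrix, which is exactly why reducing a dense matrix to band form first (the two-step approach) can pay off.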

  5. Direct vs. Two-Step Tridiagonalization
  Application: solving the dense symmetric eigenproblem via reduction to tridiagonal form (tridiagonalization).
  - Conventional approach (e.g., LAPACK) is direct tridiagonalization: A -> T.
  - Two-step approach reduces first to band form, then band to tridiagonal: A -> B -> T.
  [Figure: performance plot, MFLOPS (0-9000) vs. matrix dimension n (0-8000), comparing MatMul, Direct, and Two-step.]

  6. Why is direct tridiagonalization slow? Communication costs!

  Approach                        | Flops      | Words moved
  --------------------------------|------------|-------------
  Direct                          | (4/3) n^3  | O(n^3)
  Two-step (1) full -> banded     | (4/3) n^3  | O(n^3 / sqrt(M))
  Two-step (2) band -> tridiagonal| O(n^2 sqrt(M)) | O(n^2 sqrt(M))

  (M = fast memory size.)
  - The direct approach achieves only O(1) data re-use.
  - The two-step approach moves fewer words than the direct approach, using intermediate bandwidth b = Theta(sqrt(M)).
  - Full-to-banded step (1) achieves O(sqrt(M)) data re-use; this is optimal.
  - Band reduction step (2) achieves only O(1) data re-use. Can we do better?
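To see the scaling behind these counts, a back-of-the-envelope computation with all constants set to 1 (the n and M below are hypothetical, so the numbers only illustrate asymptotics, not measured traffic):

```python
import math

n = 8000       # matrix dimension (hypothetical)
M = 10**6      # fast-memory size in words (hypothetical, ~8 MB of doubles)

direct_words = n**3                      # direct: O(1) re-use on ~n^3 work
step1_words = n**3 / math.sqrt(M)        # full -> banded, b = Theta(sqrt(M))
step2_words = n**2 * math.sqrt(M)        # band -> tridiagonal, O(1) re-use
two_step_words = step1_words + step2_words

print(direct_words / two_step_words)     # roughly 8x fewer words moved
```

Note that step (2) dominates the two-step traffic in this model, which is what motivates the "can we do better?" question.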

  7. Band Reduction: previous work
  1963 Rutishauser: Givens-based (down diagonals) and Householder-based
  1968 Schwarz: Givens-based (up columns)
  1975 Murata-Horikoshi: improved Rutishauser's Householder-based algorithm
  1984 Kaufman: vectorized Schwarz's algorithm
  1993 Lang: parallelized Murata-Horikoshi's algorithm (distributed-memory)
  2000 Bischof-Lang-Sun: generalized everything but Schwarz's algorithm
  2009 Davis-Rajamanickam: Givens-based in blocks
  2011 Luszczek-Ltaief-Dongarra: parallelized Murata-Horikoshi's algorithm (shared-memory)
  2011 Haidar-Ltaief-Dongarra: combined Luszczek-Ltaief-Dongarra and Davis-Rajamanickam (see A. Haidar's talk in MS50 tomorrow)


  9. Successive Band Reduction (bulge-chasing)
  [Figure: a sweep eliminates a parallelogram of the band (Q1), creating a bulge that is chased off the band by successive orthogonal transforms Q2, ..., Q5; annotations b+1, d+1, c, c+d.]
  - b = bandwidth, c = columns eliminated per sweep, d = diagonals eliminated
  - constraint: c + d <= b
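A minimal, unoptimized sketch of Givens-based reduction to tridiagonal form, in the spirit of the sweeps above but with none of the blocking or multi-bulge optimizations, and operating on a dense copy rather than band storage (all names here are my own, not the authors'):

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Givens rotations
    on adjacent rows/columns (an orthogonal similarity, so eigenvalues
    are preserved). Each rotation touches the matrix once: O(1) re-use."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for j in range(n - 2):                 # zero column j below the subdiagonal
        for i in range(n - 1, j + 1, -1):  # bottom-up, rotating rows (i-1, i)
            a, b = T[i - 1, j], T[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(n)
            G[i - 1, i - 1] = c; G[i - 1, i] = s
            G[i, i - 1] = -s;    G[i, i] = c
            T = G @ T @ G.T                # symmetric two-sided update
    return T

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
A = A + A.T                                # random symmetric test matrix
T = tridiagonalize(A)
print(np.allclose(np.triu(T, 2), 0))       # True: result is tridiagonal
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))  # True
```

On a band matrix the fill created by each rotation is the "bulge"; the talk's contribution is in how the eliminations and bulge chases are blocked and scheduled, not in this basic kernel.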

  10. How do we get data re-use?
  1. Increase the number of columns in the parallelogram (c):
     - permits blocking Householder updates: O(c) re-use
     - constraint c + d <= b => trade-off between re-use and progress
  2. Chase multiple bulges at a time (omega):
     - apply several updates to the band while it is in cache: O(omega) re-use
     - bulges cannot overlap, and the working set must fit in cache
  [Figure: band with annotations b+1, d+1, c; update stages QR, PRE, SYM, POST.]


  19. Data access patterns
  [Figure: band access pattern, one bulge at a time vs. four bulges at a time.]
  With omega = 4: same amount of work, 4x fewer words moved.
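A toy traffic model for the claim above (the counts and sizes are assumptions, not measurements): if each pass streams the band through cache once, chasing omega bulges per pass divides the number of passes, and hence the words moved, by omega.

```python
n, b = 10_000, 100        # hypothetical band: dimension n, bandwidth b
band_words = n * b        # words in the band (roughly)
bulges = n                # roughly one bulge chased per column eliminated

traffic_one = bulges * band_words              # band streamed once per bulge
omega = 4
traffic_multi = (bulges / omega) * band_words  # omega bulges share each stream

print(traffic_one / traffic_multi)  # 4.0 -- same work, 4x fewer words moved
```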

  20. Shared-Memory Parallel Implementation
  - Lots of dependencies: use pipelining.
  - Threads maintain working sets which never overlap.
