

  1. Implementation of CG Method on GPU Cluster with Proprietary Interconnect TCA for GPU Direct Communication
     Kazuya MATSUMOTO †1, Toshihiro HANAWA †2, Yuetsu KODAMA †4, Hisafumi FUJII †5, Taisuke BOKU †1,3
     †1 Center for Computational Sciences, University of Tsukuba
     †2 Information Technology Center, The University of Tokyo
     †3 Graduate School of Systems and Information Engineering, University of Tsukuba
     †4 RIKEN Advanced Institute for Computational Science
     †5 FUJITSU Software Technologies Limited
     AsHES 2015, May 25, 2015

  2. Outline
     • Background and Motivation
     • TCA (Tightly Coupled Accelerators) Architecture
     • Collective Communication
       • Allgather and Allreduce
     • CG Method
     • Conclusion

  3. Background
     • GPU clusters are common as HPC systems
       • High peak performance / cost ratio
       • High peak performance / power ratio
     • Strong scaling on GPU clusters is difficult
       • Large gap between computation performance and communication performance
       • Communication latency between GPUs is larger than between CPUs
     • Improving communication performance between GPUs is in strong demand in HPC
     • Our goal is to develop a direct communication system between GPUs on different nodes for future accelerated computing
       ⇒ Tightly Coupled Accelerators (TCA) architecture

  4. Our Previous Work on TCA
     1. "Tightly Coupled Accelerators Architecture for Minimizing Communication Latency among Accelerators," In AsHES 2013.
        • Introduction to and description of the TCA architecture
        • Performance evaluation of TCA ping-pong communication
     2. "QCD Library for GPU Cluster with Proprietary Interconnect for GPU Direct Communication," In HeteroPar 2014.
        • Application of TCA to improve the communication performance of the QUDA QCD library

  5. Motivation
     • Further performance evaluation of TCA
     • Implementing the CG method using TCA
       • CG method: an iterative solver for systems of linear equations
     • Implementing allgather and allreduce collective communications with the TCA API
     • Evaluating the performance and assessing how effective TCA is

  6. Outline
     • Background and Motivation
     • TCA (Tightly Coupled Accelerators) Architecture
     • Collective Communication
       • Allgather and Allreduce
     • CG Method
     • Conclusion

  7. TCA (Tightly Coupled Accelerators) Architecture
     • Technology for direct connection between accelerators (GPUs) over different nodes without CPU assistance
     • Low communication latency
       • By eliminating the extra data copy to the host (CPU)
     • Improves strong scalability
     [Figure: two nodes, each with CPU + memory, a PCIe switch, GPUs with GPU memory, and a PEACH2 board; the PEACH2 boards of the two nodes are linked directly over PCIe]

  8. PEACH2
     • PCI Express Adaptive Communication Hub ver. 2
       • Implementation of TCA on an FPGA
     • Enables direct connection between GPUs with PCI Express (PCIe) technology
       • Direct data copy is realized with NVIDIA GPUDirect Support for RDMA (GDR)
       • No protocol conversion is required ⇒ lower latency than InfiniBand
     • Contains 4 PCIe ports (3 external ports)
       • Each port has PCIe Gen2 x8 bandwidth (4 GB/s peak)
     • NOTE: For convenience, we refer to this PEACH2 implementation of TCA simply as "TCA".

  9. HA-PACS/TCA
     • Proof-of-concept GPU cluster for the TCA concept in the HA-PACS project
     • 64 compute nodes in total
       • 4 sub-clusters, each consisting of 16 nodes equipped with PEACH2
     • Each sub-cluster forms a 2x8 ring (torus) network
       • By connecting the 3 neighboring nodes through the 3 external PCIe ports of PEACH2
     • MPI communication through InfiniBand is also possible
       • The system can be regarded as a normal GPU cluster
       • Full-bisection-bandwidth fat-tree network

 10. Performance Evaluation Condition
     • Evaluation on a sub-cluster of HA-PACS/TCA
       • Up to 16 nodes (processes)
       • Using 1 GPU / node
     • Hardware (per node)
       • CPU: Intel Xeon E5-2680 2.8 GHz x 2 (IvyBridge, 10 cores / CPU)
       • GPU: NVIDIA Tesla K20X x 4 (Kepler GK110, 2688 cores / GPU)
       • TCA: PEACH2 board (Altera Stratix-IV GX 530 FPGA)
       • InfiniBand: Mellanox Connect-X3 dual-port QDR
     • Software
       • CUDA 6.5
       • MPI: MVAPICH2-GDR 2.1a
       • C Compiler: Intel Compiler 14.0.3
     [Figure: node block diagram with CPU0/CPU1 linked by QPI, GPU0-GPU3 on PCIe Gen2 x16, PEACH2 on PCIe Gen2 x8, and InfiniBand on PCIe Gen3 x8]

 11. MPI (MVAPICH2-GDR)
     • We compare the performance of the TCA implementation with that of an implementation using MPI communication
     • MPI impl.: MVAPICH2-GDR 2.1a (MV2GDR)
       • An MPI implementation for InfiniBand
       • As with TCA, MV2GDR utilizes GPUDirect RDMA (GDR) to improve latency and bandwidth for small data communication
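As background for the ping-pong measurement on the next slide, the sketch below shows how a CUDA-aware MPI such as MV2GDR is typically used: a buffer allocated with cudaMalloc is passed directly to MPI_Send/MPI_Recv. This is our minimal illustration, not the benchmark code from the paper; the message size and iteration count are arbitrary.

```c
/* GPU-to-GPU ping-pong sketch with a CUDA-aware MPI such as MVAPICH2-GDR:
 * device pointers are passed directly to MPI calls (illustrative only). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nbytes = 8;              /* small message: latency-bound */
    char *dbuf;
    cudaMalloc((void **)&dbuf, nbytes); /* buffer lives in GPU memory */

    const int iters = 1000;
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(dbuf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(dbuf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(dbuf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(dbuf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("one-way latency: %.2f usec\n", (t1 - t0) / (2.0 * iters) * 1e6);

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```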

 12. Ping-pong GPU-to-GPU Communication Performance
     • TCA/PEACH2 is better for small sizes
     • For large sizes, TCA is outperformed by MPI/IB because of the difference in peak bandwidth (4 GB/s vs. 8 GB/s)
       → How about collective communications?
     [Figure: latency and bandwidth of TCA/PEACH2 (DMA) vs. MPI/IB as a function of data size; latency in μsec (lower is better), bandwidth in GB/s (higher is better)]

 13. Outline
     • Background and Motivation
     • TCA (Tightly Coupled Accelerators) Architecture
     • Collective Communication
       • Allgather and Allreduce
     • CG Method
     • Conclusion

 14. TCA Implementation of Collective Communication
     • Allgather
       • All processes gather the data of every process
       • Gathering data on the order of KB-MB
       • Communication bandwidth as well as latency is important
     • Allreduce
       • Performs a specified operation (sum, max, ...) on the data (y_j) of each process and stores the reduction result in all processes
         (e.g., for 4 processes: y_0 + y_1 + y_2 + y_3 = Σ_{j=0}^{3} y_j)
       • Targeting the CG method, we implement and tune allreduce (sum) for double-precision scalar (8-byte) data
       • Latency determines the performance
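As a concrete instance of the allreduce defined above, the small MPI program below (our illustration, with made-up values) has each process contribute one double y_j and leaves the sum y_0 + y_1 + y_2 + y_3 on every process; the TCA version implements the same semantics with PEACH2 transfers.

```c
/* Allreduce(sum) of one double per process, mirroring the slide's formula. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double y_j = 1.0 + rank;  /* this process's scalar contribution (illustrative) */
    double sum;
    MPI_Allreduce(&y_j, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* With 4 processes, every rank prints sum = 1 + 2 + 3 + 4 = 10. */
    printf("rank %d: sum = %g\n", rank, sum);

    MPI_Finalize();
    return 0;
}
```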

 15. Algorithms for Collective Communication
     • We implement and evaluate 4 algorithms: Ring, Neighbor Exchange, Recursive Doubling, and Dissemination
     • We assume that the number of processes (p) is a power of 2

 16. Allgather Implementation: Recursive Doubling (in case #processes = 16)
     • Requires 4 (= log2 p) steps
     • Node mapping optimization:
       1. Same hop counts between any nodes in every step
       2. Communicate data with the neighbor node in the last step
     [Figure: node diagrams of the 2x8 ring network showing the exchange pattern in the initial state and in steps 1-4]
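The structure of recursive doubling can be sketched with plain MPI point-to-point calls, as below. This host-side sketch is ours and only illustrates the exchange pattern (partner = rank XOR distance, with the held data doubling each step); the paper's implementation drives the same pattern with TCA DMA transfers between GPU buffers and applies the node-mapping optimization above.

```c
/* Recursive-doubling allgather expressed with MPI_Sendrecv.
 * Assumes nprocs is a power of 2 and buf holds nprocs blocks of blk
 * doubles, with the local block already placed at offset rank*blk. */
#include <mpi.h>

void rd_allgather(double *buf, int blk, MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    int have = 1;  /* number of blocks currently held */
    while (have < nprocs) {
        int partner = rank ^ have;             /* exchange distance doubles each step */
        int my_base = (rank / have) * have;    /* first block index held locally      */
        int pr_base = (partner / have) * have; /* first block index held by partner   */

        /* Exchange the blocks each side currently owns. */
        MPI_Sendrecv(buf + (size_t)my_base * blk, have * blk, MPI_DOUBLE, partner, 0,
                     buf + (size_t)pr_base * blk, have * blk, MPI_DOUBLE, partner, 0,
                     comm, MPI_STATUS_IGNORE);
        have *= 2;  /* held data doubles: log2(nprocs) steps in total */
    }
}
```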

 17. Impact of Node Mapping on Allgather Performance (#Processes = 16)
     [Figure: communication time (μsec, lower is better) vs. gathered data size (up to 256 KB) for the non-optimized and optimized node mappings, with diagrams of the two mappings on the 2x8 ring]

 18. Allgather Performance Comparison among Different Algorithms
     • Time for all-gathering 128 KB of data
       • Corresponds to the N = 16384 case in the CG method
     • Recursive Doubling shows good performance
     • However, when p = 16, TCA is slower than MPI for this size
     [Figure: communication time (μsec, lower is better) of the Ring, Neighbor Exchange, Recursive Doubling, Dissemination, and MPI implementations for 2, 4, 8, and 16 processes]

 19. Allgather Performance (#Processes = 16)
     [Figure: communication time (μsec, lower is better) of TCA/PEACH2 vs. MPI/IB as a function of gathered data size (up to 256 KB)]

 20. Allgather Performance (#Processes = 4)
     [Figure: communication time (μsec, lower is better) of TCA/PEACH2 vs. MPI/IB as a function of gathered data size (up to 256 KB)]

 21. Allreduce Performance
     • CPU-to-CPU allreduce time for 8-byte scalar data
     • The Dissemination algorithm is the fastest
     • TCA/PEACH2 is more than 2x faster than MPI/IB
       • The low latency of TCA works effectively
     [Figure: communication time (μsec, lower is better) of the Ring, Neighbor Exchange, Recursive Doubling, Dissemination, and MPI implementations for 2, 4, 8, and 16 processes]
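For reference, the dissemination allreduce named here can be sketched as follows: in round k, each process sends its partial sum to the process 2^k ahead and receives from the process 2^k behind, so every process holds the global sum after log2(p) rounds. This MPI point-to-point sketch is ours; the measured version uses low-latency PEACH2 transfers.

```c
/* Dissemination allreduce (sum) of one double.
 * Assumes the number of processes is a power of 2. */
#include <mpi.h>

double dissemination_allreduce_sum(double local, MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    double acc = local;
    for (int dist = 1; dist < nprocs; dist *= 2) {
        int to   = (rank + dist) % nprocs;          /* send partial sum ahead   */
        int from = (rank - dist + nprocs) % nprocs; /* receive partial sum back */
        double incoming;
        MPI_Sendrecv(&acc, 1, MPI_DOUBLE, to, 0,
                     &incoming, 1, MPI_DOUBLE, from, 0,
                     comm, MPI_STATUS_IGNORE);
        acc += incoming;  /* after log2(p) rounds, acc holds the global sum */
    }
    return acc;
}
```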

 22. Outline
     • Background and Motivation
     • TCA (Tightly Coupled Accelerators) Architecture
     • Collective Communication
       • Allgather and Allreduce
     • CG Method
     • Conclusion

 23. CG (Conjugate Gradient) Method
     • Iterative solver for systems of linear equations Ax = b
       • A: N-by-N symmetric positive-definite sparse matrix
         • The sparse matrix is stored in CRS (Compressed Row Storage) format
       • x, b: N-dimensional vectors
     • No preconditioning
     • Main computation parts (NVIDIA's cuSPARSE and cuBLAS are utilized):
       • SpMV x1 – Sparse Matrix-Vector Multiply (q := Ap)
       • DOT x3 – Vector Dot Product (α := p^T q)
       • AXPY x3 – Vector Multiply-Add (y := αx + y)
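Hypothetically, a single-GPU CG iteration built from exactly these three operations might look as follows, using the cuSPARSE/cuBLAS routines of the CUDA 6.5 era (legacy csrmv API). The structure and variable names are ours, not the paper's code.

```c
/* One unpreconditioned CG iteration from the three operations listed above:
 * SpMV via cuSPARSE, DOT and AXPY via cuBLAS. All vectors and CSR arrays
 * live in device memory; rho = r.r is carried over from the previous step. */
#include <cusparse_v2.h>
#include <cublas_v2.h>

void cg_iteration(cusparseHandle_t sp, cublasHandle_t bl,
                  cusparseMatDescr_t descrA, int n, int nnz,
                  const double *valA, const int *rowPtrA, const int *colIndA,
                  double *x, double *r, double *p, double *q, double *rho)
{
    const double one = 1.0, zero = 0.0;
    double pq, rho_new;

    /* SpMV: q := A p */
    cusparseDcsrmv(sp, CUSPARSE_OPERATION_NON_TRANSPOSE, n, n, nnz,
                   &one, descrA, valA, rowPtrA, colIndA, p, &zero, q);

    /* DOT: alpha := rho / (p . q) */
    cublasDdot(bl, n, p, 1, q, 1, &pq);
    double alpha = *rho / pq;
    double malpha = -alpha;

    /* AXPY: x := x + alpha*p,  r := r - alpha*q */
    cublasDaxpy(bl, n, &alpha, p, 1, x, 1);
    cublasDaxpy(bl, n, &malpha, q, 1, r, 1);

    /* DOT: rho_new := r . r,  beta := rho_new / rho */
    cublasDdot(bl, n, r, 1, r, 1, &rho_new);
    double beta = rho_new / *rho;

    /* p := r + beta*p  (scale, then AXPY) */
    cublasDscal(bl, n, &beta, p, 1);
    cublasDaxpy(bl, n, &one, r, 1, p, 1);

    *rho = rho_new;
}
```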

 24. Parallelization of CG Method
     • Parallelized by row-wise one-dimensional partitioning of matrix A
     [Figure: in the 4-process case, each rank k owns an N/4-row block A_k of A and the corresponding pieces x_k and b_k of the vectors x and b]
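For illustration, the row range owned by each rank under this partitioning (assuming N is divisible by the number of processes, as in the N/4 example) can be computed as in this small helper of ours:

```c
/* Row range owned by one rank under row-wise 1D partitioning of A. */
void owned_rows(int N, int nprocs, int rank, int *row_begin, int *row_end)
{
    int n_local = N / nprocs;           /* rows of A (and elements of x, b) per rank */
    *row_begin = rank * n_local;        /* first owned global row index */
    *row_end   = *row_begin + n_local;  /* one past the last owned row  */
}
```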

 25. Parallelization of CG Method
     • The parallelized CG method requires collective communications among all processes:
       1. Allgather: gathering the vector data required for SpMV
       2. Allreduce: reduction to obtain the sum of the local dot products
     • The collective communications implemented above are utilized
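Putting the pieces together, one iteration of the parallelized CG method might be organized as below. This is our self-contained sketch: the local work is written as plain CPU loops for brevity, whereas in the paper it runs on the GPU (cuSPARSE/cuBLAS) and the two collectives are the TCA or MPI versions evaluated earlier.

```c
/* One parallelized CG iteration showing the two communication points:
 * an allgather of the search vector p before the local SpMV, and
 * allreduce(sum) for the dot products. Each rank owns n_local rows of A
 * in CSR form (local rowptr, global column indices into p_full). */
#include <mpi.h>

static double allreduce_sum(double v, MPI_Comm comm)
{
    double s;
    MPI_Allreduce(&v, &s, 1, MPI_DOUBLE, MPI_SUM, comm);
    return s;
}

void parallel_cg_iteration(int n_local,
                           const double *val, const int *rowptr, const int *colind,
                           double *x, double *r, double *p_local, double *p_full,
                           double *q, double *rho, MPI_Comm comm)
{
    /* 1. Allgather: assemble the full N-element vector p for the SpMV. */
    MPI_Allgather(p_local, n_local, MPI_DOUBLE,
                  p_full, n_local, MPI_DOUBLE, comm);

    /* 2. Local SpMV on the owned rows: q := A_local * p_full. */
    for (int i = 0; i < n_local; i++) {
        double s = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            s += val[k] * p_full[colind[k]];
        q[i] = s;
    }

    /* 3. Allreduce: global dot product p.q, then alpha. */
    double pq_local = 0.0;
    for (int i = 0; i < n_local; i++) pq_local += p_local[i] * q[i];
    double alpha = *rho / allreduce_sum(pq_local, comm);

    /* 4. Local AXPY updates: x += alpha*p, r -= alpha*q. */
    for (int i = 0; i < n_local; i++) {
        x[i] += alpha * p_local[i];
        r[i] -= alpha * q[i];
    }

    /* 5. Allreduce: global rho_new = r.r, then beta and the p update. */
    double rr_local = 0.0;
    for (int i = 0; i < n_local; i++) rr_local += r[i] * r[i];
    double rho_new = allreduce_sum(rr_local, comm);
    double beta = rho_new / *rho;
    for (int i = 0; i < n_local; i++) p_local[i] = r[i] + beta * p_local[i];
    *rho = rho_new;
}
```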
