SLIDE 1

High Performance and Scalable MPI+X Library for Emerging HPC Clusters

Talk at Intel HPC Developer Conference (SC '16) by

Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda

Khaled Hamidouche, The Ohio State University
E-mail: hamidouc@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~hamidouc

SLIDE 2

High-End Computing (HEC): ExaFlop & ExaByte

• ExaFlop & HPC
  – 100-200 PFlops in 2016-2018
  – 1 EFlops in 2023-2024?
• ExaByte & BigData
  – 10K-20K EBytes in 2016-2018
  – 40K EBytes in 2020?
SLIDE 3

Trends for Commodity Computing Clusters in the Top 500 List (http://www.top500.org)

[Chart: percentage and number of clusters in the Top500 over time; clusters now account for roughly 85% of the list.]

SLIDE 4

Drivers of Modern HPC Cluster Architectures

• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD
• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)

[Figure: example systems Tianhe-2, Titan, Stampede, and Tianhe-1A, built from multi-core processors; high-performance interconnects such as InfiniBand (<1 usec latency, 100 Gbps bandwidth); SSD, NVMe-SSD, and NVRAM storage; and accelerators/coprocessors with high compute density and high performance/watt (>1 TFlop DP on a chip).]

SLIDE 5

Designing Communication Libraries for Multi-Petaflop and Exaflop Systems: Challenges

• Application Kernels/Applications
• Programming Models
  – MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Middleware: Communication Library or Runtime for Programming Models
  – Point-to-point Communication
  – Collective Communication
  – Energy-Awareness
  – Synchronization and Locks
  – I/O and File Systems
  – Fault Tolerance
• Networking Technologies (InfiniBand, 40/100GigE, Aries, and OmniPath)
• Multi/Many-core Architectures and Accelerators (NVIDIA and MIC)
• Co-Design Opportunities and Challenges across Various Layers: Performance, Scalability, Fault-Resilience

SLIDE 6

Exascale Programming Models

• Next-generation programming models: MPI+X, where X = ? (OpenMP, OpenACC, CUDA, PGAS, Tasks, ...)
• The community believes the exascale programming model will be MPI+X
• But what is X?
  – Can it be just OpenMP?
• Many different environments and systems are emerging
  – Highly-threaded systems (KNL), irregular communications, heterogeneous computing with accelerators
  – Different `X' will satisfy the respective needs

SLIDE 7

MPI+X Programming Model: Broad Challenges at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading (OpenMP)
• Integrated support for GPGPUs and accelerators (CUDA)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, CAF, ...)
• Virtualization
• Energy-awareness

SLIDE 8

Additional Challenges for Designing Exascale Software Libraries

• Extreme low memory footprint
  – Memory per core continues to decrease
• D-L-A Framework
  – Discover
    • Overall network topology (fat-tree, 3D, ...), network topology for the processes of a given job
    • Node architecture, health of network and node
  – Learn
    • Impact on performance and scalability
    • Potential for failure
  – Adapt
    • Internal protocols and algorithms
    • Process mapping
    • Fault-tolerance solutions
  – Low-overhead techniques while delivering performance, scalability and fault-tolerance

SLIDE 9

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for Virtualization (MVAPICH2-Virt), available since 2015
  – Support for Energy-Awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM) since 2015
  – Used by more than 2,690 organizations in 83 countries
  – More than 402,000 (> 0.4 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov '16 ranking)
    • 1st-ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
    • 13th-ranked 241,108-core cluster (Pleiades) at NASA
    • 17th-ranked 519,640-core cluster (Stampede) at TACC
    • 40th-ranked 76,032-core cluster (Tsubame 2.5) at Tokyo Institute of Technology, and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat and SuSE)
  – http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
  – System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> Sunway TaihuLight at NSC, Wuxi, China (1st in Nov '16, 10,649,640 cores, 93 PFlops)

SLIDE 10

Outline

• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators

SLIDE 11

Highly Threaded Systems

• Systems like KNL
• MPI+OpenMP is seen as the best fit
  – 1 MPI process per socket for multi-core systems
  – 4-8 MPI processes per KNL
  – Each MPI process will launch OpenMP threads
• However, current MPI runtimes do not handle the hybrid model efficiently
  – Most applications use the Funneled mode: only the MPI processes perform communication (see the sketch below)
  – Communication phases are the bottleneck
• Multi-endpoint based designs
  – Transparently use threads inside the MPI runtime
  – Increase the concurrency
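To make the Funneled usage above concrete, here is a minimal hybrid MPI+OpenMP sketch (an illustration, not code from the talk): every rank uses OpenMP threads for the compute phase, while communication is funneled through the MPI process itself, which is exactly the serialization that the multi-endpoint designs target.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Compute phase: all OpenMP threads of this rank participate */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (i + 1.0);

        /* Communication phase: funneled through the MPI process only,
         * so the threads sit idle while communication proceeds */
        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) printf("sum across %d ranks = %f\n", nranks, global);
        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and OpenMP enabled, each rank computes a partial sum in parallel; the single-threaded MPI_Allreduce is the communication bottleneck the slide refers to.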

SLIDE 12

MPI and OpenMP

• MPI-4 will enhance the thread support
  – Endpoint proposal in the MPI Forum
  – Application threads will be able to efficiently perform communication
  – An endpoint is the communication entity that maps to a thread
• Idea is to have multiple addressable communication entities within a single process
• No context switch between application and runtime => better performance
• OpenMP 4.5 is more powerful than just traditional data parallelism (see the sketch below)
  – Supports task parallelism since OpenMP 3.0
  – Supports heterogeneous computing with accelerator targets since OpenMP 4.0
  – Supports explicit SIMD and thread-affinity pragmas since OpenMP 4.0
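As an illustration of the OpenMP features listed above (tasking, explicit SIMD, thread affinity, and accelerator targets), here is a small C sketch; the array size and the work done in each region are arbitrary placeholders, not taken from the talk.

    #include <omp.h>
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        static float a[N], b[N], c[N];

        /* Task parallelism (OpenMP 3.0+) with a thread-affinity hint (4.0+) */
        #pragma omp parallel proc_bind(close)
        #pragma omp single
        {
            #pragma omp task
            { for (int i = 0; i < N; i++) a[i] = (float) i; }
            #pragma omp task
            { for (int i = 0; i < N; i++) b[i] = 2.0f * i; }
            #pragma omp taskwait
        }

        /* Explicit SIMD (OpenMP 4.0+) */
        #pragma omp simd
        for (int i = 0; i < N; i++) c[i] = a[i] + b[i];

        /* Accelerator offload via target (OpenMP 4.0+); runs on the host
         * if no device is available */
        #pragma omp target map(to: a, b) map(from: c)
        for (int i = 0; i < N; i++) c[i] = a[i] * b[i];

        printf("c[10] = %f\n", c[10]);
        return 0;
    }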

SLIDE 13

MEP-based Design: MVAPICH2 Approach

• Lock-free communication
  – Threads have their own resources
• Dynamically adapt the number of threads
  – Avoid resource contention
  – Depends on application pattern and system performance
• Both intra-node and inter-node communication
  – Threads boost both channels
• New MEP-aware collectives
• Applicable to the endpoint proposal in MPI-4

[Architecture: an MPI/OpenMP program runs over a multi-endpoint runtime with an endpoint controller, request handling, progress engine, communication-resource management, lock-free communication components, and optimized point-to-point and collective algorithms; MPI collectives (MPI_Alltoallv, MPI_Allgatherv, ...) are supported transparently, while MPI point-to-point (MPI_Isend, MPI_Irecv, MPI_Waitall, ...) requires an OpenMP pragma.]

• M. Luo, X. Lu, K. Hamidouche, K. Kandalla and D. K. Panda, Initial Study of Multi-Endpoint Runtime for MPI+OpenMP Hybrid Applications on Multi-Core Systems, International Symposium on Principles and Practice of Parallel Programming (PPoPP '14)

SLIDE 14

Performance Benefits: OSU Micro-Benchmarks (OMB) Level

[Charts: multi-pair latency, Bcast latency, and Alltoallv latency vs. message size, comparing the original multi-threaded runtime, the proposed multi-endpoint runtime, and a process-based runtime.]

• Reduces the latency from 40 us to 1.85 us (21X)
• Achieves the same latency as processes
• 40% improvement on latency for Bcast on 4,096 cores
• 30% improvement on latency for Alltoall on 4,096 cores
SLIDE 15

Performance Benefits: Application Kernel Level

[Charts: NAS benchmarks (CG, LU, MG, Class E) on 256 nodes (4,096 cores) and P3DFFT execution time at 2K and 4K cores, comparing the original and MEP runtimes.]

• 6.3% improvement for MG, 11.7% improvement for CG, and 12.6% improvement for LU on 4,096 cores
• With P3DFFT, we observe a 30% improvement in communication time and a 13.5% improvement in total execution time

SLIDE 16

Enhanced Designs for KNL: MVAPICH2 Approach

• On-load approach
  – Takes advantage of the idle cores
  – Dynamically configurable
  – Takes advantage of highly multithreaded cores
  – Takes advantage of the MCDRAM of KNL processors
• Applicable to other programming models such as PGAS, task-based, etc.
• Provides portability, performance, and applicability to runtimes as well as applications in a transparent manner

SLIDE 17

Performance Benefits of the Enhanced Designs

• New designs to exploit the high concurrency and MCDRAM of KNL
• Significant improvements for large message sizes
• Benefits seen with varying message sizes as well as varying numbers of MPI processes

[Charts comparing MVAPICH2 and MVAPICH2-Optimized: very-large-message bi-directional bandwidth, 16-process intra-node all-to-all latency, and intra-node broadcast latency with a 64 MB message; annotated improvements of 27%, 17.2%, and 52%.]

SLIDE 18

Performance Benefits of the Enhanced Designs

[Charts: multi-bandwidth with 32 MPI processes for default vs. optimized designs on DRAM and MCDRAM (up to 30% improvement), and CNTK MLP training time on MNIST (batch size 64) for several MPI-process : OpenMP-thread configurations (up to 15% improvement).]

• Benefits observed on the training time of a Multi-Layer Perceptron (MLP) model on the MNIST dataset using the CNTK Deep Learning Framework
• Enhanced designs will be available in upcoming MVAPICH2 releases

SLIDE 19

Outline

• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators

SLIDE 20

Maturity of Runtimes and Application Requirements

• MPI has been the most popular model for a long time
  – Available on every major machine
  – Portability, performance and scaling
  – Most parallel HPC code is designed using MPI
  – Simplicity - structured and iterative communication patterns
• PGAS models
  – Increasing interest in the community
  – Simple shared-memory abstractions and one-sided communication
  – Easier to express irregular communication
• Need for hybrid MPI + PGAS
  – Applications can have kernels with different communication characteristics
  – Porting only part of an application reduces programming effort
SLIDE 21

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on their communication characteristics
• Benefits:
  – Best of the distributed computing model
  – Best of the shared-memory computing model

[Figure: an HPC application composed of kernels 1..N, where each kernel can use either MPI or PGAS (e.g., Kernel 2 and Kernel N re-written in PGAS).]

SLIDE 22

Current Approaches for Hybrid Programming

• Layering one programming model over another
  – Poor performance due to semantics mismatch
  – MPI-3 RMA tries to address this
• Separate runtime for each programming model
  – Needs more network and memory resources
  – Might lead to deadlock!

[Figure: a hybrid (OpenSHMEM + MPI) application issuing OpenSHMEM calls to an OpenSHMEM runtime and MPI calls to a separate MPI runtime, both over InfiniBand, RoCE, or iWARP.]

SLIDE 23

The Need for a Unified Runtime

• Deadlock when a message is sitting in one runtime, but the application calls the other runtime
• The prescription to avoid this is to barrier in one model (either OpenSHMEM or MPI) before entering the other
• Alternatively, the runtimes require dedicated progress threads
• Bad performance!!
• Similar issues for MPI + UPC applications over individual runtimes

Example interleaving over separate runtimes (P0 issues an active-message-based atomic to P1 while P1 is busy):

  P0:  shmem_int_fadd (data at P1);   /* operate on data */
       MPI_Barrier(comm);
  P1:  /* local computation */
       MPI_Barrier(comm);
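A minimal compilable sketch of that interleaving, assuming an OpenSHMEM 1.2-style interface (shmem_init/shmem_finalize) is used alongside MPI in the same program; with two independent runtimes, the atomic's active message can sit in PE 1's OpenSHMEM runtime while PE 1 is blocked inside MPI, hence the barrier-before-switching prescription shown here.

    #include <mpi.h>
    #include <shmem.h>

    int main(int argc, char **argv)
    {
        static int data = 0;               /* symmetric variable */

        MPI_Init(&argc, &argv);
        shmem_init();

        int me = shmem_my_pe();

        if (shmem_n_pes() > 1 && me == 0)
            shmem_int_fadd(&data, 1, 1);   /* operate on data at PE 1 */
        /* PE 1 is doing local computation at this point */

        /* Prescription from the slide: drain the OpenSHMEM runtime
         * before every PE enters the MPI runtime */
        shmem_barrier_all();
        MPI_Barrier(MPI_COMM_WORLD);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }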

SLIDE 24

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Unified communication runtime for MPI, UPC, OpenSHMEM, CAF, UPC++, available with MVAPICH2-X 1.9 onwards! (since 2012)
  – http://mvapich.cse.ohio-state.edu
• Feature highlights
  – Supports MPI(+OpenMP), OpenSHMEM, UPC, CAF, UPC++, MPI(+OpenMP) + OpenSHMEM, MPI(+OpenMP) + UPC
  – MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant (with initial support for UPC 1.3), CAF 2008 standard (OpenUH), UPC++
  – Scalable inter-node and intra-node communication - point-to-point and collectives

[Figure: MPI, OpenSHMEM, UPC, CAF, UPC++ or hybrid (MPI + PGAS) applications issue MPI, OpenSHMEM, UPC, CAF, and UPC++ calls into the unified MVAPICH2-X runtime over InfiniBand, RoCE, or iWARP.]
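For illustration only, here is a minimal hybrid program of the kind such a unified runtime targets: one kernel keeps its MPI collective while another uses an OpenSHMEM one-sided put, with both sets of calls served by the same communication runtime. The kernels, sizes, and update pattern are placeholders, not code from the talk.

    #include <mpi.h>
    #include <shmem.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();

        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Kernel 1 (regular communication): MPI collective */
        long local_count = 1000 + me, total = 0;
        MPI_Allreduce(&local_count, &total, 1, MPI_LONG, MPI_SUM,
                      MPI_COMM_WORLD);

        /* Kernel 2 (irregular communication): one-sided OpenSHMEM put
         * into the symmetric table of the neighboring PE */
        long *table = (long *) shmem_malloc(npes * sizeof(long));
        table[me] = total;
        shmem_long_put(&table[me], &table[me], 1, (me + 1) % npes);
        shmem_barrier_all();

        shmem_free(table);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }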

SLIDE 25

OpenSHMEM Atomic Operations: Performance

[Chart: latency (us) of fadd, finc, add, inc, cswap, and swap atomics for UH-SHMEM, MV2X-SHMEM, Scalable-SHMEM, and OMPI-SHMEM.]

• OSU OpenSHMEM micro-benchmarks (OMB v4.1)
• MV2X-SHMEM performs up to 40% better compared to UH-SHMEM
SLIDE 26

UPC Collectives Performance

[Charts: Broadcast, Scatter, Gather, and Exchange latency vs. message size at 2,048 processes, comparing UPC-GASNet and UPC-OSU; annotated improvements of up to 25X, 2X, and 35%.]

• J. Jose, K. Hamidouche, J. Zhang, A. Venkatesh, and D. K. Panda, Optimizing Collective Communication in UPC, HiPS '14 (in association with IPDPS '14)

SLIDE 27

Performance Evaluations for the CAF Model

[Charts: Put and Get bandwidth and latency vs. message size (GASNet-IBV, GASNet-MPI, MV2X), and NAS-CAF execution time for bt.D.256, cg.D.256, ep.D.256, ft.D.256, mg.D.256, and sp.D.256.]

• Micro-benchmark improvement (MV2X vs. GASNet-IBV, UH CAF test-suite)
  – Put bandwidth: 3.5X improvement at 4KB; Put latency: reduced by 29% at 4B
• Application performance improvement (NAS-CAF one-sided implementation)
  – Reduces the execution time by 12% (SP.D.256) and 18% (BT.D.256)
• J. Lin, K. Hamidouche, X. Lu, M. Li and D. K. Panda, High-performance Co-array Fortran Support with MVAPICH2-X: Initial Experience and Evaluation, HIPS '15

SLIDE 28

UPC++ Collectives Performance

• Full and native support for hybrid MPI + UPC++ applications
• Better performance compared to the IBV and MPI conduits
• OSU Micro-Benchmarks (OMB) support for UPC++
• Available since MVAPICH2-X 2.2RC1

[Figures: with GASNet, an MPI + UPC++ application runs the UPC++ runtime over GASNet interfaces and a network conduit (MPI); with MVAPICH2-X, the UPC++ runtime and MPI interfaces share the Unified Communication Runtime (UCR). Chart: inter-node broadcast latency on 64 nodes (1 ppn) vs. message size for GASNet_MPI, GASNet_IBV, and MV2-X, with up to 14x improvement.]

• J. M. Hashmi, K. Hamidouche, and D. K. Panda, Enabling Performance Efficient Runtime Support for Hybrid MPI+UPC++ Programming Models, IEEE International Conference on High Performance Computing and Communications (HPCC 2016)

SLIDE 29

Application-Level Performance with Graph500 and Sort

• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
  – 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
  – 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
  – J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
  – 4,096 processes, 4 TB input size
  – MPI: 2408 sec (0.16 TB/min); Hybrid: 1172 sec (0.36 TB/min)
  – 51% improvement over the MPI design
  – J. Jose, S. Potluri, H. Subramoni, X. Lu, K. Hamidouche, K. Schulz, H. Sundar and D. Panda, Designing Scalable Out-of-core Sorting with Hybrid MPI+PGAS Programming Models, PGAS '14

[Charts: Graph500 execution time at 4K, 8K, and 16K processes (MPI-Simple, MPI-CSC, MPI-CSR, Hybrid MPI+OpenSHMEM) and Sort execution time for 500GB-512 through 4TB-4K configurations (MPI vs. Hybrid).]

SLIDE 30

MiniMD – Total Execution Time

• The hybrid design performs better than the MPI implementation
• 1,024 processes: 17% improvement over the MPI version
• Strong scaling with input size 128 x 128 x 128

[Charts: total execution time (ms) at 512 and 1,024 cores (performance) and at 256, 512, and 1,024 cores (strong scaling) for Hybrid-Barrier, MPI-Original, and Hybrid-Advanced.]

SLIDE 31

Accelerating MaTEx k-NN with Hybrid MPI and OpenSHMEM

• MaTEx: an MPI-based machine learning algorithm library
• k-NN: a popular supervised algorithm for classification
• Hybrid designs: overlapped data flow, one-sided data transfer, circular-buffer structure
• Benchmark: KDD Cup 2010 (8,407,752 records, 2 classes, k=5)
  – For the truncated KDD workload (30 MB) on 256 cores, execution time is reduced by 27.6%
  – For the full KDD workload (2.5 GB) on 512 cores, execution time is reduced by 9.0%
• J. Lin, K. Hamidouche, J. Zhang, X. Lu, A. Vishnu, D. Panda, Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM, OpenSHMEM 2015

SLIDE 32

Outline

• Hybrid MPI+OpenMP Models for Highly-threaded Systems
• Hybrid MPI+PGAS Models for Irregular Applications
• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing with Accelerators

SLIDE 33

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

• Standard MPI interfaces used for unified data movement

    At Sender:    MPI_Send(s_devbuf, size, ...);
    At Receiver:  MPI_Recv(r_devbuf, size, ...);
    (data movement handled inside MVAPICH2)

• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers
• High performance and high productivity
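A minimal sketch of the GPU-aware usage shown above, assuming a CUDA-aware MPI library such as MVAPICH2-GDR is in use; the buffer size and message pattern are illustrative only, and the program needs at least two ranks.

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;
        float *devbuf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Device buffer passed straight to MPI; the library moves the
         * data, overlapping GPU copies with RDMA where possible */
        cudaMalloc((void **)&devbuf, n * sizeof(float));
        cudaMemset(devbuf, 0, n * sizeof(float));

        if (rank == 0)
            MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);      /* s_devbuf */
        else if (rank == 1)
            MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                                /* r_devbuf */

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }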

SLIDE 34

CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases

• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers (see the sketch below)
• Unified memory
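A hedged sketch (mine, not from the talk) of two of the features listed above: a collective operating directly on GPU device buffers, and a derived datatype (a strided column) communicated from device memory. The matrix dimensions and communication pattern are assumptions made for illustration.

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, rows = 1024, cols = 1024;
        double *d_mat, *d_sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        cudaMalloc((void **)&d_mat, rows * cols * sizeof(double));
        cudaMalloc((void **)&d_sum, rows * cols * sizeof(double));
        cudaMemset(d_mat, 0, rows * cols * sizeof(double));

        /* Collective on device buffers: no explicit cudaMemcpy in the
         * application, the GPU-aware library handles the staging/RDMA */
        MPI_Allreduce(d_mat, d_sum, rows * cols, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        /* Non-contiguous datatype (one column of a row-major matrix)
         * sent straight from device memory */
        MPI_Datatype column;
        MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);
        if (nprocs >= 2) {
            if (rank == 0)
                MPI_Send(d_mat, 1, column, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(d_mat, 1, column, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        MPI_Type_free(&column);

        cudaFree(d_mat);
        cudaFree(d_sum);
        MPI_Finalize();
        return 0;
    }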
SLIDE 35

Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)

[Charts: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size for MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; small-message latency reaches 2.18 us, with annotated speedups of roughly 2X-3X over MV2-GDR 2.0b and 10x-11x over MV2 without GDR.]

• Platform: Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox Connect-X4 EDR HCA, CUDA 8.0, Mellanox OFED 3.0 with GPU-Direct RDMA; MVAPICH2-GDR 2.2

SLIDE 36

Application-Level Evaluation (HOOMD-blue)

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled with the following runtime settings:
  – MV2_USE_CUDA=1
  – MV2_IBA_HCA=mlx5_0
  – MV2_IBA_EAGER_THRESHOLD=32768
  – MV2_VBUF_TOTAL_SIZE=32768
  – MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768
  – MV2_USE_GPUDIRECT_GDRCOPY=1
  – MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

[Charts: average time steps per second (TPS) for 64K and 256K particles at 4, 8, 16, and 32 processes, MV2 vs. MV2+GDR, with about 2X improvement.]

SLIDE 37

Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland

[Charts: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for Default, Callback-based, and Event-based designs.]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

SLIDE 38

Need for Non-Uniform Memory Allocation in OpenSHMEM for Heterogeneous Architectures

• MIC cores have limited memory per core
• OpenSHMEM relies on symmetric memory, allocated using shmalloc()
• shmalloc() allocates the same amount of memory on all PEs
• For applications running in symmetric mode, this limits the total heap size
• Similar issues for applications (even host-only) with memory load imbalance (Graph500, out-of-core Sort, etc.)
• How to allocate different amounts of memory on host and MIC cores, and still be able to communicate?

[Figure: host cores vs. MIC cores and their respective memories, illustrating the difference in memory per core.]

SLIDE 39

OpenSHMEM Design for MIC Clusters

[Architecture: OpenSHMEM applications run over the MVAPICH2-X OpenSHMEM runtime, with a symmetric memory manager, proxy-based communication, application co-design extensions, and InfiniBand, SCIF, and shared-memory/CMA channels on InfiniBand networks.]

• Non-uniform memory allocation:
  – Team-based memory allocation (proposed extensions, listed below)
  – Address structure for non-uniform memory allocations

Proposed API extensions:

    void shmem_team_create(shmem_team_t team, int *ranks, int size, shmem_team_t *newteam);
    void shmem_team_destroy(shmem_team_t *team);
    void shmem_team_split(shmem_team_t team, int color, int key, shmem_team_t *newteam);
    int  shmem_team_rank(shmem_team_t team);
    int  shmem_team_size(shmem_team_t team);
    void *shmalloc_team(shmem_team_t team, size_t size);
    void shfree_team(shmem_team_t team, void *addr);
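A hypothetical usage sketch of these proposed extensions (they are not part of the OpenSHMEM standard, so this compiles only against a runtime providing them); the predefined SHMEM_TEAM_WORLD team and the is_mic_pe() helper below are assumptions made purely for illustration. Host and MIC PEs are split into separate teams, and each team allocates a differently sized symmetric region.

    #include <shmem.h>
    #include <stddef.h>

    extern shmem_team_t SHMEM_TEAM_WORLD;   /* assumed predefined world team */
    extern int is_mic_pe(void);             /* assumed platform query        */

    void allocate_nonuniform(void)
    {
        shmem_team_t my_team;

        /* Group host PEs and MIC PEs into separate teams */
        int color = is_mic_pe() ? 1 : 0;
        shmem_team_split(SHMEM_TEAM_WORLD, color, shmem_my_pe(), &my_team);

        /* Smaller symmetric heap on the memory-constrained MIC team,
         * larger one on the host team */
        size_t bytes = is_mic_pe() ? (4UL << 20) : (64UL << 20);
        void *buf = shmalloc_team(my_team, bytes);

        /* ... communicate within/between teams ... */

        shfree_team(my_team, buf);
        shmem_team_destroy(&my_team);
    }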

SLIDE 40

Proxy-based Designs for OpenSHMEM

[Figures: OpenSHMEM Put and Get using an active proxy on the host: an IB REQ initiates the transfer, the host proxy pipelines SCIF reads from MIC memory with IB writes, and an IB FIN completes the operation.]

• Current-generation architectures impose limitations on read bandwidth when the HCA reads from MIC memory
  – Impacts both put and get operation performance
• Solution: pipelined data transfer by a proxy running on the host using the IB and SCIF channels
• Improves latency and bandwidth!
SLIDE 41

OpenSHMEM Put/Get Performance

[Charts: OpenSHMEM Put and Get latency vs. message size (1 byte to 4 MB), MV2X-Def vs. MV2X-Opt, with about 4.5X-4.6X improvement for large messages.]

• Proxy-based designs alleviate the hardware limitations
• Put latency for a 4 MB message: default 3911 us, optimized 838 us
• Get latency for a 4 MB message: default 3889 us, optimized 837 us

SLIDE 42

Performance Evaluations using Graph500

[Charts: Graph500 execution time in native mode (8 procs/MIC, 64-512 processes) and symmetric mode (16 host + 16 MIC processes, 128-1,024 processes), MV2X-Def vs. MV2X-Opt, with up to 28% improvement.]

• Graph500 execution time (native mode):
  – 8 processes per MIC node
  – At 512 processes: default 5.17 s, optimized 4.96 s
  – Performance improvement from the MIC-aware collectives design
• Graph500 execution time (symmetric mode):
  – 16 processes on each host and MIC node
  – At 1,024 processes: default 15.91 s, optimized 12.41 s
  – Performance improvement from the MIC-aware collectives and proxy-based designs

SLIDE 43

Graph500 Evaluations with Extensions

• Redesigned Graph500 using the MIC to overlap computation/communication
  – Data is transferred to MIC memory; MIC cores pre-process the received data
  – Host processes traverse vertices and send out new vertices
• Graph500 execution time at 1,024 processes:
  – 16 processes on each host and MIC node
  – Host-only: 0.33 s; Host+MIC with extensions: 0.26 s
• Magnitudes of improvement compared to the default symmetric mode
  – Default symmetric mode: 12.1 s; Host+MIC extensions: 0.16 s

[Chart: execution time (s) at 128-1,024 processes, Host vs. Host+MIC (extensions), with up to 26% improvement annotated.]

• J. Jose, K. Hamidouche, X. Lu, S. Potluri, J. Zhang, K. Tomko and D. K. Panda, High Performance OpenSHMEM for Intel MIC Clusters: Extensions, Runtime Designs and Application Co-Design, IEEE International Conference on Cluster Computing (CLUSTER '14) (Best Paper Finalist)

SLIDE 44

Looking into the Future ....

• Architectures for Exascale systems are evolving
• Exascale systems will be constrained by
  – Power
  – Memory per core
  – Data movement cost
  – Faults
• Programming models, runtimes and middleware need to be designed for
  – Scalability
  – Performance
  – Fault-resilience
  – Energy-awareness
  – Programmability
  – Productivity
• High-performance and scalable MPI+X libraries are needed
• Highlighted some of the approaches taken by the MVAPICH2 project
• Continuous innovation is needed to have the right MPI+X libraries for Exascale systems

SLIDE 45

Funding Acknowledgments

[Logos: funding support and equipment support organizations.]

SLIDE 46

Personnel Acknowledgments

Current Students: A. Awan (Ph.D.), M. Bayatpour (Ph.D.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), S. Guganani (Ph.D.), J. Hashmi (Ph.D.), N. Islam (Ph.D.), M. Li (Ph.D.), M. Rahman (Ph.D.), D. Shankar (Ph.D.), A. Venkatesh (Ph.D.), J. Zhang (Ph.D.)

Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), K. Kandalla (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), S. Krishnamoorthy (M.S.), K. Kulkarni (M.S.), R. Kumar (M.S.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), S. Sur (Ph.D.), H. Subramoni (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)

Past Research Scientist: S. Sur

Past Post-Docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang

Current Research Scientists: K. Hamidouche, X. Lu, H. Subramoni

Past Programmers: M. Arnold, D. Bureddy, J. Perkins

Current Research Specialist: J. Smith
SLIDE 47

OSU Team Will be Participating in Multiple Events at SC '16

• Three conference tutorials (IB+HSE, IB+HSE Advanced, Big Data)
• HP-CAST
• Technical papers (SC main conference; Doctoral Showcase; Poster; PDSW-DISC, PAW, COMHPC, and ESPM2 workshops)
• Booth presentations (Mellanox, NVIDIA, NRL, PGAS)
• HPC Connection Workshop
• Will be stationed at the Ohio Supercomputer Center/OH-TECH booth (#1107)
  – Multiple presentations and demos
• More details at http://mvapich.cse.ohio-state.edu/talks/

SLIDE 48

Thank You!

panda@cse.ohio-state.edu, hamidouch@cse.ohio-state.edu

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The MVAPICH Project: http://mvapich.cse.ohio-state.edu/