Integrating GPU Support for OpenMP Offloading Directives into Clang - - PowerPoint PPT Presentation



SLIDE 1

Carlo Bertolli, Samuel F. Antao, Gheorghe-Teodor Bercea, Arpith C. Jacob, Alexandre E. Eichenberger, Tong Chen, Zehra Sura, Hyojin Sung, Georgios Rokos, David Appelhans, Kevin O’Brien
IBM T.J. Watson Research Center
The Second Workshop on the LLVM Compiler Infrastructure in HPC, 11.15.15

Integrating GPU Support for OpenMP Offloading Directives into Clang

SLIDE 2

[Toolchain diagram] A C/C++ input file is compiled along two paths: preprocessor → Clang → LLVM Power8 backend for the host, and preprocessor → Clang → LLVM NVPTX backend for the GPU. Device code is assembled with PTXAS and linked with NVLINK against libtarget-nvptx; the host link step (LD) produces a fat binary that uses libtarget and the OpenMP runtime (omp/lomp). The Power8 host and the GPU communicate over PCI Express. XLF shares the same offloading toolchain.

SLIDE 3

[Same toolchain diagram, highlighting the part this talk covers: the OpenMP implementation in Clang code generation (CG).]

SLIDE 4

Research Topics

  • Implement OpenMP on the GPU
  • Hard to do because of GPU programming constraints
  • Cannot re-use the CPU OpenMP implementation (codegen + runtime library)
  • Performance is guaranteed to be poor in many cases
  • What should be optimized?
  • Integration into an open-source compiler
  • Cannot be disruptive to the compiler's design and implementation
  • Based on an existing assumption: OpenMP is implemented in Clang codegen
  • Gather community interest for this implementation to land


SLIDE 5

OpenMP Challenges for GPUs

#pragma omp target teams
{
  if (a[0]++ > 0) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
      if (omp_get_thread_num() > 0) {
        #pragma omp simd
        for (int s = 0; s < 32; s++) { .. }
      } else {
        #pragma omp simd
        for (int s = 0; s < 4; s++) { .. }
      }
    }
  } else if (b[0]++ > 0) {
    #pragma omp parallel for
    for (int i = 0; i < n*2; i++) { .. }
  }
}

Sequential within team:

  • Only the team master executes this
SLIDE 6

#pragma omp target teams thread_limit(256)
{
  if (a[0]++ > 0) {
    #pragma omp parallel for num_threads(128)
    for (int i = 0; i < n; i++) {
      if (omp_get_thread_num() > 0) {
        #pragma omp simd
        for (int s = 0; s < 32; s++) { .. }
      } else {
        #pragma omp simd
        for (int s = 0; s < 4; s++) { .. }
      }
    }
  } else if (b[0]++ > 0) {
    #pragma omp parallel for
    for (int i = 0; i < n*2; i++) { .. }
  }
}

Parallel threads: some threads are executing this in parallel

OpenMP Challenges for GPUs

SLIDE 7

#pragma omp target teams
{
  if (a[0]++ > 0) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
      if (omp_get_thread_num() > 0) {
        #pragma omp simd
        for (int s = 0; s < 32; s++) { .. }
      } else {
        #pragma omp simd
        for (int s = 0; s < 4; s++) { .. }
      }
    }
  } else if (b[0]++ > 0) {
    #pragma omp parallel for nowait
    for (int i = 0; i < n*2; i++) { .. }
  }
}

Explicit and implicit divergence between threads

OpenMP Challenges for GPUs

SLIDE 8

No actual simd units on GPUs

OpenMP Challenges for GPUs

SLIDE 9

Control Loop Scheme

int tmp = 3;
#pragma omp target teams \
    thread_limit(5) \
    map(tofrom: tmp, a[:n])
{
  tmp += 3;
  #pragma omp parallel for \
      num_threads(4)
  for (int i = 0; i < n; i++)
    a[i] += tmp;
  tmp = -1;
}

nextState = SQ1;
while (!finished) {
  switch (nextState) {
  case SQ1:
    if (tid > 0) break;
    // sequential reg. 1
    nextState = PR1;
    break;
  case PR1:
    if (tid > 4) break;
    // parallel reg. 1
    if (tid == 0) nextState = SQ2;
    break;
  case SQ2:
    if (tid > 0) break;
    // sequential reg. 2
    finished = true;
    break;
  }
  __syncthreads();
}

Avoid dynamic parallelism and start all threads

SLIDE 13

Control Loop & Clang

  • Rules for modular integration
  • Do’s
  • Extend OpenMP-related functions
  • Add new function calls
  • Add new runtime functions only for specific targets
  • Don’ts
  • Let the OpenMP target implementation influence every C/C++ construct
  • Add metadata and process OpenMP later, when more convenient

SLIDE 14

Example: Codegen Control Loop for #target

void CGF::EmitOMPTargetDirective(..) {
  // control flow will lead to…
  if (isTargetMode)
    CGM.getOpenMPRuntime().EnterTargetLoop();
  // emit target region statements
  CGF.EmitStmt(CS->getCapturedStmt());
  if (isTargetMode)
    CGM.getOpenMPRuntime().ExitTargetLoop();
}

Control loop emitted so far:

nextState = SQ1;
while (!finished) {
  switch (nextState) {
  case SQ1:
    if (tid > 0) break;
  }
  __syncthreads();
}

codegen → setInsertPoint → codegen until #parallel


SLIDE 17

Example: Codegen Control Loop for #parallel

void CGF::EmitOMPParallelDirective(..) {
  // control flow will lead to…
  if (isTargetMode)
    CGM.getOpenMPRuntime().EnterParallel();
  // emit parallel region statements
  CGF.EmitStmt(CS->getCapturedStmt());
  if (isTargetMode)
    CGM.getOpenMPRuntime().ExitParallel();
}

Control loop emitted so far:

nextState = SQ1;
while (!finished) {
  switch (nextState) {
  case SQ1:
    if (tid > 0) break;
    // sequential reg. 1
    nextState = PR1;
    break;
  case PR1:
    if (tid > num_threads) break;
  }
  __syncthreads();
}


SLIDE 19

Control Loop Overhead vs CUDA (1/2)

Vector Add                 CUDA     Control Loop
#registers/thread          16       64
Shared Memory (bytes)      —        280
Occupancy                  95.9%    26.6%
Execution Time (usec.)     1523.5   1988.5

OpenMP:

#pragma omp target teams \
    distribute parallel for \
    schedule(static,1)
for (i = 0; i < n; i++)
  a[i] += b[i] + c[i];

CUDA:

for (int i = threadIdx.x + blockIdx.x * blockDim.x;
     i < n; i += blockDim.x * gridDim.x)
  a[i] += b[i] + c[i];

Nvidia Tesla K40m

  • maxregcount=64
SLIDE 20

Control Loop Overhead vs CUDA (2/2)

Vector-Matrix Add          CUDA      Control Loop
#registers/thread          18        64
Shared Memory (bytes)      —         280
Occupancy                  97.3%     49.5%
Execution Time (usec.)     70832.0   78333.0

OpenMP:

#pragma omp target teams \
    distribute parallel for \
    schedule(static,1)
for (i = 0; i < n; i++)
  for (j = 0; j < n_loop; j++)
    a[i] += b[i] + c[i*n_loop + j];

CUDA:

for (int i = threadIdx.x + blockIdx.x * blockDim.x;
     i < n; i += blockDim.x * gridDim.x)
  for (j = 0; j < n_loop; j++)
    a[i] += b[i] + c[i*n_loop + j];

Nvidia Tesla K40m

  • maxregcount=64
SLIDE 21

Occupancy / Register Allocation

  • Many reasons:
  • A while loop with a switch inside makes register allocation hard
  • In OpenMP 4.0, kernel parameters are passed as pointer-to-pointer
  • The kernel is allowed to do pointer arithmetic
  • This costs an additional register for each parameter
  • Fixed by OpenMP 4.5: always pass a pointer, since what matters is the pointee address
  • The CUDA and LLVM backends for NVPTX are different:
  • CUDA uses libnvvm, which is shipped as a library
  • LLVM uses the open-source code in trunk
  • Different optimization strategies

SLIDE 22

Optimizing “Good Cases”: LULESH

  • Recurrent pragma patterns to be optimized
  • Some hints:
  • No OpenMP control flow divergence
  • No nested parallelism/pragmas
  • No hard constructs: locks, tasks, etc.

#pragma omp parallel for firstprivate(numNode)
for (Index_t gnode = 0; gnode < numNode; ++gnode) {
  Index_t count = domain.nodeElemCount(gnode);
  Index_t *cornerList = domain.nodeElemCornerList(gnode);
  Real_t fx_tmp = Real_t(0.0);
  Real_t fy_tmp = Real_t(0.0);
  Real_t fz_tmp = Real_t(0.0);
  for (Index_t i = 0; i < count; ++i) {
    Index_t elem = cornerList[i];
    fx_tmp += fx_elem[elem];
    fy_tmp += fy_elem[elem];
    fz_tmp += fz_elem[elem];
  }
  domain.fx(gnode) = fx_tmp;
  domain.fy(gnode) = fy_tmp;
  domain.fz(gnode) = fz_tmp;
}

SLIDE 23

Porting LULESH to OpenMP 4

#pragma omp parallel for firstprivate(numNode)
for (Index_t gnode = 0; gnode < numNode; ++gnode) { }

OpenMP 4-ization:

#pragma omp target teams distribute parallel for \
    schedule(static,1) firstprivate(numNode)
for (Index_t gnode = 0; gnode < numNode; ++gnode) { }

SLIDE 24

Implementation of Combined Construct

#pragma omp target teams distribute parallel for \
    schedule(static,1) firstprivate(numNode)
for (Index_t gnode = 0; gnode < numNode; ++gnode) { }

CUDA-style notation:

for (int i = threadIdx.x + blockIdx.x * blockDim.x;
     i < n; i += blockDim.x * gridDim.x) {
  gnode = i;
  // codegen loop body
}

Compiler:

  • Detect pragma combination
  • Prove absence of nested pragmas
SLIDE 25

Combined Directive - Vector Add

Vector Add                 CUDA     Control Loop   Combined
#registers/thread          16       64             21
Shared Memory (bytes)      —        280            —
Occupancy                  95.9%    26.6%          96%
Execution Time (usec.)     1523.5   1988.5         1523.1

SLIDE 26

Combined Directive - Vector-Matrix Add

Vector-Matrix Add          CUDA      Control Loop   Combined
#registers/thread          18        64             30
Shared Memory (bytes)      —         280            —
Occupancy                  97.3%     49.5%          97.7%
Execution Time (usec.)     70832.0   78333.0        70456.0

SLIDE 27

LULESH Kernels

                        ApplyAccelBCForNode           CalcMonotonicQRegionForElems
                        CUDA    Ctrl Loop  Combined   CUDA     Ctrl Loop  Combined
#registers              6       64         22         32       64         64
Shared Memory (bytes)   —       280        —          —        280        —
Mesh size 12³
  Occupancy             5.5%    6.7%       43.8%      5.9%     6.6%       5.6%
  Exec. Time (μsec)     5.184   20.928     5.024      11.169   62.751     15.775
Mesh size 30³
  Occupancy             6.1%    3.3%       14.7%      70.6%    26.5%      43.3%
  Exec. Time (μsec)     5.568   22.912     4.96       29.184   178.18     67.296
Mesh size 100³
  Occupancy             27.8%   13.2%      33.8%      93.0%    26.4%      47.3%
  Exec. Time (μsec)     6.72    50.944     14.976     1287.4   4833.2     2563.4


SLIDE 32

Conclusion

  • The generality of OpenMP has a large performance cost
  • even in simpler cases (no tasks, no locks, etc.)
  • Optimized schemes are possible
  • More schemes in the near future
  • Interesting cases require control-flow analysis in Clang
  • Optimize register allocation for the control loop
  • Better low-level optimizations in NVPTX
  • Optimize the control loop scheme