

SLIDE 1
Apple LLVM GPU Compiler: Embedded Dragons

Charu Chandrasekaran, Apple
Marcello Maggioni, Apple

SLIDE 2
Agenda

  • How Apple uses LLVM to build a GPU Compiler
  • Factors that affect GPU performance
  • The Apple GPU compiler
  • Pipeline passes
  • Challenges

SLIDE 3
How Apple uses LLVM

  • Live on trunk and merge continuously
  • Benefit from the latest improvements on trunk
  • Identify any regressions immediately and report back
  • Minimize changes to open-source LLVM code
  • Reuse as much as possible

SLIDE 4
Continuous Integration

[Diagram: the GPU compiler branches off LLVM trunk and ships as the Year 1 production compiler.]

SLIDE 5
Continuous Integration

[Diagram: LLVM trunk continues to feed the GPU compiler alongside the Year 1 production compiler.]

SLIDE 6
Continuous Integration

[Diagram: LLVM trunk feeds the GPU compiler, which branches into Year 1, Year 2, and Year 3 production compilers.]

SLIDE 7
Testing

  • Regression testing involves:
    • register count
    • instruction count
    • FileCheck: correctness
    • compile time
    • compiler size
    • runtime performance

SLIDE 8
The GPU SW Stack

[Diagram: the user's .metal shader source is compiled by Metal-FE, running as an XPC service, into IR; the backend, another XPC service, turns the IR into the executable (.obj / .exec). The app's iOS / watchOS / tvOS process interacts with both through the Metal framework and GPU driver, which return the compiled result to the app.]

SLIDE 9
About GPUs

SLIDE 10
About GPUs

GPUs are massively parallel vector processors. Threads are grouped together and execute in lockstep (they share the same PC).

[Diagram: a shader core with a single PC driving LANE 0 through LANE 7.]

SLIDE 11
About GPUs

The parallelism is implicit; a single thread looks like normal CPU code:

float kernel(float a, float b) {
  float c = a + b;
  return c;
}

[Diagram: shader core, single PC, LANE 0 through LANE 7.]

SLIDE 12
About GPUs

The parallelism is implicit: a single thread looks like normal CPU code, but across the group it executes as vector code:

float8 kernel(float8 a, float8 b) {
  float8 c = add_v8(a, b);
  return c;
}

[Diagram: shader core, single PC, LANE 0 through LANE 7.]

SLIDE 13
About GPUs: Latency hiding

Multiple groups of threads are resident on the GPU at the same time for latency hiding.

float kernel(struct In_PS) {
  float4 color = texture_fetch();
  float4 c = In_PS.a * In_PS.b;
  …
  float4 d = c + color;
  …
}

[Diagram: shader core with LANE 0 through LANE 7 and one PC per resident thread group.]

SLIDES 14-17
About GPUs: Latency hiding

The GPU picks up work from the various groups of threads to hide the latency experienced by the other groups.

[Animation: the same diagram across four slides, with the active PC switching between thread groups while a long-latency texture fetch is in flight.]

SLIDE 18
About GPUs: Register file

The groups of threads share a big register file that is split between the threads.

[Diagram: shader core with LANE 0 through LANE 7 and one PC per thread group; the register file is partitioned per lane, with each lane holding registers a, b, c, d for every resident thread (0a, 0b, … 7d).]

SLIDE 19
About GPUs: Register file

The number of registers used per thread impacts the number of resident groups of threads on the machine (occupancy).

[Diagram: as the registers-per-thread slice grows, fewer thread groups fit in the register file.]

SLIDE 20
About GPUs: Register file

This in turn impacts the latency-hiding capability. VERY IMPORTANT!

[Diagram: same as the previous slide.]
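To make the occupancy trade-off concrete, here is a minimal sketch with assumed numbers; the register file size, thread-group width, and register width below are illustrative placeholders, not real hardware parameters:

#include <stdio.h>

int main(void) {
    const int reg_file_bytes    = 64 * 1024; /* assumed register file size per core */
    const int threads_per_group = 64;        /* assumed thread-group width          */
    const int bytes_per_reg     = 4;         /* 32-bit registers                    */
    for (int regs = 16; regs <= 128; regs *= 2) {
        int group_bytes = threads_per_group * regs * bytes_per_reg;
        int occupancy   = reg_file_bytes / group_bytes; /* resident thread groups */
        printf("%3d regs/thread -> %2d resident thread groups\n", regs, occupancy);
    }
    return 0;
}

With these numbers, doubling per-thread register use halves the number of resident thread groups, which directly shrinks the pool of work available for latency hiding.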

SLIDE 21
About GPUs: Spilling

The huge register file and the number of concurrent threads make spilling pretty costly.

[Diagram: the register file spilling to the L1 cache.]

SLIDE 22
About GPUs: Spilling

Example (spilling 1 register): 1024 threads × one 32-bit register = 4 KB!

Spilling is typically not an effective way of reducing register pressure to increase occupancy, and it should be avoided at all costs.

[Diagram: the register file spilling to the L1 cache.]
SLIDE 23
Pipeline

SLIDE 24
Inlining

We support function calls and we try to exploit them. Like most GPU programming models, though, we can inline everything if we want.

Pipeline: Unoptimized IR → Inlining. All functions and the main kernel are linked together in a single module.

SLIDE 25
Inlining

Not inlining showed significant speedups on some shaders where big functions were called multiple times. I-Cache savings!

SLIDE 26
Inlining

Dead Arg Elimination: get rid of dead arguments to functions (see the sketch below).
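A minimal before/after sketch of the transformation, written at the source level for illustration; the pass itself runs on LLVM IR, and the function here is hypothetical:

/* Before: 'unused' is never read in the body. */
static int scale(int v, int unused) {
    return v * 2;
}

/* After dead-argument elimination, conceptually: the dead parameter is
   dropped from the signature and from every call site. */
static int scale_dae(int v) {
    return v * 2;
}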

SLIDE 27
Inlining

Argument Promotion: convert as many objects as we can to pass-by-value (see the sketch below). Pipeline so far: Dead Arg Elimination → Argument Promotion.
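A sketch of argument promotion in the same spirit, again with a hypothetical function; the real pass rewrites pointer arguments on LLVM IR:

struct Pair { int x, y; };

/* Before: the struct is passed by pointer, so the callee reads memory. */
static int sum_ptr(const struct Pair *p) {
    return p->x + p->y;
}

/* After argument promotion, conceptually: the fields are passed by value,
   removing the memory accesses from the callee. */
static int sum_val(int x, int y) {
    return x + y;
}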

SLIDE 28
Inlining

Proceed to the actual inlining. Pipeline so far: Dead Arg Elimination → Argument Promotion → Inlining.

SLIDE 29
Inlining

Inlining decisions are based on the standard LLVM inlining policy, plus a custom threshold and additional constraints.

SLIDE 30
Inlining

The objective of our inlining policy is to be very conservative while exploiting the cases where keeping a function call can potentially benefit us a lot. Custom policies try to minimize the impact that not inlining could have on other key optimizations for performance (SROA, buffer preloading).

We force inline these cases:

int function(int addrspace(stack)* v) { … }
int function(int addrspace(constant)* v) { … }
SLIDE 31
Inlining

The new IPRA (inter-procedural register allocation) support in LLVM has been key in avoiding pointless calling-convention register stores and reloads.

int callee() {
  add r1, r2, r3
  ret
}

Without IPRA:

int caller() {
  mul r4, r1, r3
  push r4
  call callee()
  pop r4
  add r1, r1, r4
}

With IPRA (the register allocator knows callee() leaves r4 untouched, so the save/restore around the call disappears):

int caller() {
  mul r4, r1, r3
  call callee()
  add r1, r1, r4
}

SLIDE 32
SROA

Pipeline so far: Inlining → SROA.

SLIDE 33
SROA

We run it multiple times in our pipeline in order to be sure that we promote as many allocas to register values as possible (see the sketch below).
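A source-level sketch of the rewrite SROA performs; the pass actually splits LLVM IR allocas, and the function is hypothetical:

/* Before: the array forces a stack allocation. */
static float dot2(float x, float y) {
    float v[2];
    v[0] = x;
    v[1] = y;
    return v[0] * v[0] + v[1] * v[1];
}

/* After SROA, conceptually: the aggregate is replaced by scalars that can
   live entirely in registers; the alloca disappears. */
static float dot2_sroa(float x, float y) {
    float v0 = x, v1 = y;
    return v0 * v0 + v1 * v1;
}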

SLIDE 34
Alloca Opt

int function(int i) {
  int a[4] = { x, y, z, w };
  …
  … = a[i];
}

SLIDE 35
Alloca Opt

int function(int i) {
  int a[4] = { x, y, z, w };
  …
  … = i == 0 ? x : (i == 1 ? y : i == 2 ? z : w);
}

Fewer stack accesses!

SLIDE 36
Loop Unrolling

Pipeline so far: SROA → Alloca Opt → Loop Unrolling.

SLIDE 37
Loop Unrolling

Completely unrolling loops allows SROA to remove stack accesses. If we have dynamic memory accesses to stack or constant memory that we can promote to uniform memory, we want to greatly increase the unrolling thresholds.

Before:

int a[5] = { x, y, z, w, q };
int b = 0;
for (int i = 0; i < 5; ++i) {
  b += a[i];
}

After:

int a[5] = { x, y, z, w, q };
int b = x;
b += y;
b += z;
b += w;
b += q;

SLIDE 38
Loop Unrolling

We also keep track of register pressure. Our scheduler is very eager to help latency hiding by moving most memory accesses to the top of the shader (and it is difficult to teach it otherwise), so we limit unrolling when we detect that we could blow up the register pressure.

for (int i = 0; i < 5; ++i) {
  float4 a = texture_fetch();
  float4 b = texture_fetch();
  float4 c = texture_fetch();
  float4 d = texture_fetch();
  float4 e = texture_fetch();
  // Math involving the above
}

SLIDE 39
Loop Unrolling

We allow partial unrolling if we detect a static loop count and the loop would be bigger than our unrolling threshold.

Before:

for (int i = 0; i < 16; ++i) {
  float4 a = texture_fetch();
  // Math involving the above
}

After (unrolled 4 times):

for (int i = 0; i < 4; ++i) {
  float4 a1 = texture_fetch();
  float4 a2 = texture_fetch();
  float4 a3 = texture_fetch();
  float4 a4 = texture_fetch();
  …
}

SLIDE 40
Flatten CFG

Speculation helps create bigger blocks, so the scheduler can do a better job, and reduces the total overhead introduced by small blocks.

if (val == x) {
  a = v + z;
  c = q + a;
} else {
  b = v * z;
  c = q * b;
}
… = c;

SLIDE 41
Flatten CFG

After flattening, both sides execute speculatively and the result is selected:

a = v + z;
c1 = q + a;
b = v * z;
c2 = q * b;
c = (val == x) ? c1 : c2;
… = c;

SLIDE 42
Uniformity Hoisting

GPUs are massively parallel, but often some computation in a shader can be statically determined to be the same for all the threads. Some of these patterns are really inconvenient or difficult for the shader writer to extract from the program.

SLIDE 43
Uniformity Hoisting

void kernel(constant float4 *A, constant bool *b, global float *C) {
  float4 f_vec = *b ? *A : float4(1.0);
  … = f_vec * C[tid];
}

Here f_vec is computed from constant memory only, so it is the same for every thread.

SLIDE 44
Uniformity Hoisting

We can move such computation to a program that runs at a lower rate (once). Even one instruction is a lot of parallel work saved.

void uniform_kernel(constant float4 *A, constant bool *b) {
  // uni_f_vec lives in uniform memory
  uni_f_vec = *b ? *A : float4(1.0);
}
void kernel(constant float4 *A, constant bool *b, global float *C) {
  … = uni_f_vec * C[tid];
}

SLIDE 45
Uniformity Hoisting

Some stack arrays that are initialized and never stored to (and haven't been optimized away previously) can be turned into global loads instead.

void kernel(constant float4 *A, constant bool *b, global float *C) {
  const int a[5] = { 3, 2, 1, 4, 2 };  // never stored to
  … = a[i];
}

SLIDE 46
Uniformity Hoisting

File-scope constants can be initialized more efficiently before running the program. On the stack the array is also replicated for every thread, while in global memory the array is shared by all the threads.

const int a[5] = { 3, 2, 1, 4, 2 };
void kernel(constant float4 *A, constant bool *b, global float *C) {
  … = a[i];
}

SLIDE 47
CFG Structurization

When control flow is unstructured (e.g., a block is controlled by multiple predecessors), execution on GPUs requires some special handling.

[Diagram: CFG with blocks A, B, C, D, where C has multiple predecessors.]

SLIDE 48
CFG Structurization

Our backend supports full execution of unstructured control flow, handled at the MI level with little overhead, so we need only limited structurization (we do require loops to be transformed into LoopSimplify form, though).

[Diagram: same CFG as the previous slide.]

SLIDE 49
CFG Structurization

For relatively small unstructured blocks we employ structurization based on duplication (see the sketch below).

[Diagram: the CFG with block C duplicated as C', giving each predecessor its own copy.]
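A sketch of what duplication-based structurization does, using goto to stand in for the unstructured CFG; the block names follow the diagram and the code is illustrative only:

static void A(void) {} /* placeholder basic blocks */
static void B(void) {}
static void C(void) {}
static void D(void) {}

/* Unstructured: C has two predecessors (A and B) that are not governed by
   a single branch, so the CFG cannot be expressed as plain if/else nesting. */
static void unstructured(int p, int q) {
    A();
    if (p) goto c_block;  /* A -> C */
    B();
    if (!q) goto d_block; /* B -> D */
c_block:
    C();                  /* C reached from both A and B */
d_block:
    D();
}

/* After structurization by duplication: C is cloned (C') so that each
   predecessor owns a private copy and the control flow nests cleanly. */
static void structured(int p, int q) {
    A();
    if (p) {
        C();              /* original C */
    } else {
        B();
        if (q) C();       /* C': duplicated copy */
    }
    D();
}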

SLIDE 50
CFG Structurization

We thought about employing the LLVM StructurizeCFG pass, but the way it translated control flow wasn't optimal for us (higher register pressure on average, more control instructions).

SLIDE 51
Misc. optimizations

We run a bunch of optimizations (multiple times) in between passes: InstCombine, DCE, SCCP, CSE, SimplifyCFG, Reassociate.
SLIDE 52
Instruction Selection

Instruction Selection is one of the most expensive steps of our compilation pipeline. We use lots of custom combines to extract performance from our hardware. (SelectionDAG, FastISel)

SLIDE 53
Instruction Selection

It takes between 15% and 35% of our compile time!

SLIDE 54
Instruction Selection

On some devices FastISel helps keep compile time in check.

SLIDE 55
Instruction Selection

The plan is to switch to GlobalISel in the near future as our main instruction selector. The switch should give us a better infrastructure while improving compile time. (SelectionDAG, FastISel, GlobalISel)

SLIDE 56
Scheduling

Scheduling is key to exploiting ILP, improving latency hiding, and reducing power consumption by reducing register accesses. We try to achieve the above while being very careful not to cause register-pressure problems.

add r5, r0, r3
mul r7, r3, r4
sub r6, r5, r4
load r1
add r3, r1, r6
load r2
mul r4, r2, r3

SLIDE 57
Scheduling

Adding unrelated instructions after memory accesses helps with in-thread latency hiding, so that other instructions can execute while the load or texture-fetch results become ready.

load r1
load r2
add r5, r0, r3   // independent operations here
mul r7, r3, r4
sub r6, r5, r4
add r3, r1, r6   // wait here for the loads
mul r4, r2, r3

SLIDE 58
Scheduling

Interleaving independent operations improves ILP. Forwarding instruction results helps reduce register-file traffic (lower power). This is pretty standard scheduling.

(Same schedule as the previous slide.)

SLIDE 59
Scheduling

Many other target-specific policies are enforced, all aimed at improving ILP, latency hiding, and power (for example, grouping instructions by type), all while battling register pressure. We are willing to spend a lot of compile time on scheduling.

SLIDE 60
Challenges

SLIDE 61
Compile-time and being a JIT

  • Being JITs, GPU compilers care about compile time very much

SLIDE 62
Compile-time and being a JIT

  • We optimize our pipeline to obtain the best results with the least wasted compile time

SLIDE 63
Compile-time and being a JIT

Main offenders:

  • Instruction Selection: 15% - 35% of compile time
  • Scheduling: 5% - 15% of compile time
  • Instruction combining: ~10% of compile time
  • Register Allocation / Register Coalescing: ~10% of compile time

SLIDE 64
Compile-time and being a JIT

  • Having a custom pipeline oftentimes creates problems, as changing the order of the passes can unveil nasty bugs that used to be hidden …

SLIDE 65
Compile-time and being a JIT

  • We also reuse a single compiler instance for multiple compilations … this also uncovered some nasty bugs!

SLIDE 66
Register definitions

[Diagram: base registers r0 through r8 grouped into overlapping tuples of 2 and tuples of 4.]

SLIDE 67
Register definitions

Some instructions support complex input/output operands loaded in contiguous registers. GPUs typically support register tuples, with overlapping tuple elements sharing many register units (RUs).

[Diagram: same tuple layout as the previous slide.]

SLIDE 68
Register definitions

This kind of register hierarchy generates a substantial number of LLVM register definitions (one per element of each tuple). Tuples can go up to 16 wide on some architectures! (A back-of-the-envelope count follows below.)
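A rough count of how overlapping tuples multiply the number of register definitions, assuming one definition per tuple starting position; the base register count and the set of tuple widths are illustrative, not the real target's:

#include <stdio.h>

int main(void) {
    const int base_regs = 256;             /* assumed base register count */
    const int widths[]  = { 2, 4, 8, 16 }; /* tuple widths, up to 16-wide */
    int defs = base_regs;                  /* one def per base register   */
    for (int i = 0; i < 4; ++i)
        defs += base_regs - widths[i] + 1; /* one def per overlapping tuple */
    printf("register definitions: %d\n", defs); /* prints 1254 */
    return 0;
}

Roughly a 5x blow-up over the base register count, which is what makes algorithms that iterate over all registers, or that are quadratic in their number, painful.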

SLIDE 69
Register definitions

Algorithms that scale with the number of registers, or that iterate over all the registers containing a given RU, can take a hit. We had a problem with the IPRA implementation, for example, where determining the registers used by a function was O(N²) in the number of registers.

SLIDE 70
Register pressure awareness

  • LLVM has limited support for register-pressure awareness
  • IR passes largely ignore register pressure (LICM, for example; see the sketch below)
  • The machine level has some register-pressure estimation, but most passes care only if they are running out of registers
  • For us, increasing register pressure is potentially bad even if we don't end up spilling, as it reduces occupancy
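A sketch of the LICM point in the deck's shader-style pseudocode; fetch_invariant, in, out, and n here are placeholders, not a real API:

/* Before: each invariant value is re-fetched every iteration and is live
   for only a fraction of the loop body. */
for (int i = 0; i < n; ++i) {
    float4 k0 = fetch_invariant(0);
    float4 k1 = fetch_invariant(1);
    float4 k2 = fetch_invariant(2);
    out[i] = in[i] * k0 + k1 * k2;
}

/* After LICM: the invariant fetches execute once, but k0..k2 now occupy
   registers across the entire loop, raising register pressure and
   potentially lowering occupancy even though nothing spills. */
float4 k0 = fetch_invariant(0);
float4 k1 = fetch_invariant(1);
float4 k2 = fetch_invariant(2);
for (int i = 0; i < n; ++i) {
    out[i] = in[i] * k0 + k1 * k2;
}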

SLIDE 71
Q&A