SLIDE 1

Persistent RNNs

(stashing recurrent weights on-chip)

Gregory Diamos

Baidu SVAIL

April 7, 2016

SLIDE 2

SVAIL

Think hard AI. Goal: develop hard AI technologies that impact 100 million users.

SLIDE 3

Deep Learning at SVAIL

[Figure: recognition accuracy vs. data and compute. Deep learning keeps improving toward state-of-the-art and human-level accuracy as compute grows from 100 GFLOP/s (1 laptop) to 6 TFLOP/s (1 GPU), 800 TFLOP/s (128 GPUs), and 100 PFLOP/s (16K GPUs), while many previous methods level off.]

Hypothesis: deep learning scales with data and compute. Can we strong scale deep learning to the limits of technology?

SLIDE 4

Persistent RNNs

30x speedup at a mini-batch size of 4.

Why is reducing the mini-batch size important? Train bigger and deeper models. Strong scale to more GPUs. Improve the efficiency of deployed models.

SLIDE 5

Training Deep RNNs

SLIDE 6

Deep Speech

Near human-level speech recognition in Mandarin and English. Trained on over 10,000 hours (about 1 year) of speech data. 20 ExaFLOPs of work to train (7 days on 16 GPUs at 40% of peak).

SLIDE 7

Data parallel training

[Figure: a mini-batch of speech data is split into slices, one per GPU (GPU 0, GPU 1, ...).]

Data parallelism: The training data is grouped into mini-batches. Each GPU trains a copy of the model on a slice of the mini-batch. GPUs synchronize their models after a fixed number of steps.
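A minimal host-side sketch of this scheme (plain C++, compilable as CUDA host code); the replica count, batch size, learning rate, and the stand-in gradient computation are all hypothetical, and a real implementation runs each replica on its own GPU and averages gradients with an all-reduce:

#include <cstdio>
#include <vector>

// Sketch of data-parallel SGD (single-process simulation; in practice each
// replica lives on its own GPU and the averaging step is an all-reduce).
int main() {
    const int num_replicas = 2;                       // e.g. 2 GPUs (hypothetical)
    const int mini_batch   = 8;                       // algorithmic mini-batch size
    const int slice        = mini_batch / num_replicas;
    const float lr         = 0.1f;                    // hypothetical learning rate

    std::vector<float> weight(num_replicas, 1.0f);    // one copy of the model per replica

    for (int step = 0; step < 3; ++step) {
        // 1. Each replica computes a gradient on its slice of the mini-batch.
        std::vector<float> grad(num_replicas, 0.0f);
        for (int r = 0; r < num_replicas; ++r)
            for (int i = 0; i < slice; ++i)
                grad[r] += 2.0f * weight[r];          // stand-in for backprop on one example
        // 2. Synchronize: average gradients over the whole mini-batch.
        float avg = 0.0f;
        for (int r = 0; r < num_replicas; ++r) avg += grad[r];
        avg /= mini_batch;
        // 3. Every replica applies the same update, keeping the copies identical.
        for (int r = 0; r < num_replicas; ++r) weight[r] -= lr * avg;
        std::printf("step %d: weight = %f\n", step, weight[0]);
    }
    return 0;
}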

SLIDE 8

Mini-batch constraints

So how should you choose the mini-batch size?

[Figure: wall-clock time to convergence vs. mini-batch size. Below roughly 64 per GPU the hardware is inefficient; above roughly 1024 the optimization is inefficient.]

Hardware efficiency will set a lower bound. Optimization efficiency will set an upper bound. Shrinking the mini-batch per GPU enables the use of more GPUs.

SLIDE 9

Determining the batch size

The upper bound can be found empirically. In general a hyperparameter search is needed, but a useful heuristic is:

momentum = 1.0 − miniBatchSize / windowSize

learningRate = stepSize ∗ (1.0 − momentum) ∗ miniBatchSize
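A small sketch of the heuristic as code; the windowSize and stepSize values are hypothetical, chosen only to show how the momentum and learning rate move as the mini-batch size changes:

#include <cstdio>
#include <initializer_list>

int main() {
    // Hypothetical values chosen for illustration; real values come from a search.
    const float windowSize = 2048.0f;   // effective averaging window (assumed)
    const float stepSize   = 1e-3f;     // per-example step size (assumed)

    for (float miniBatchSize : {4.0f, 64.0f, 512.0f}) {
        float momentum     = 1.0f - miniBatchSize / windowSize;
        float learningRate = stepSize * (1.0f - momentum) * miniBatchSize;
        std::printf("mini-batch %4.0f -> momentum %.4f, learning rate %.6f\n",
                    miniBatchSize, momentum, learningRate);
    }
    return 0;
}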

SLIDE 10

Persistent RNN Details

SLIDE 11

RNN primer

RNNs built on GEMM calls reload the weights (U) each timestep.

However, the weights are constant, and this is wasteful.
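As a reference point, a minimal sketch of that conventional pattern, with a naive kernel standing in for the GEMM; the layer width matches the 1152-unit layers discussed later, but the timestep count, identity weights, and omitted nonlinearity are illustrative simplifications. The point is that U crosses the memory bus on every launch:

#include <cstdio>
#include <utility>
#include <cuda_runtime.h>

// One recurrent step: h_out = U * h_in. Every launch re-reads all of U from
// off-chip memory, even though U never changes between timesteps.
__global__ void recurrent_step(const float* U, const float* h_in, float* h_out, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;
    float acc = 0.0f;
    for (int col = 0; col < n; ++col)
        acc += U[row * n + col] * h_in[col];   // U reloaded from DRAM each timestep
    h_out[row] = acc;                          // (nonlinearity omitted for brevity)
}

int main() {
    const int n = 1152, timesteps = 4;         // layer size from the talk; 4 steps for demo
    float *U, *h0, *h1;
    cudaMallocManaged(&U, n * n * sizeof(float));
    cudaMallocManaged(&h0, n * sizeof(float));
    cudaMallocManaged(&h1, n * sizeof(float));
    for (int i = 0; i < n * n; ++i) U[i] = (i % n == i / n) ? 1.0f : 0.0f;  // identity
    for (int i = 0; i < n; ++i) h0[i] = 1.0f;

    for (int t = 0; t < timesteps; ++t) {      // weights travel over the bus every step
        recurrent_step<<<(n + 255) / 256, 256>>>(U, h0, h1, n);
        cudaDeviceSynchronize();
        std::swap(h0, h1);
    }
    std::printf("h[0] after %d steps: %f\n", timesteps, h0[0]);
    cudaFree(U); cudaFree(h0); cudaFree(h1);
    return 0;
}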

SLIDE 12

Caching weights in registers

                    GPU              Core (SM)        Thread
Memory bandwidth    380 GB/s         128 GB/s         16 GB/s
Latency             300 ns           30 ns            6 ns
On-chip storage     5.5 MB           230 KB           896 B
Peak compute        6.144 TFLOP/s    256 GFLOP/s      2 GFLOP/s
Count               x 24 cores       x 128 threads    x 1

Off-chip memory is much slower and less efficient than registers. GPUs have more on-chip memory in registers than anywhere else. Cache RNN weights in registers and reuse them over timesteps.
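A heavily simplified sketch of the idea, not the actual SVAIL kernel: the layer is shrunk so one thread block suffices, each thread keeps its slice of U in a local array the compiler can promote to registers, activations are staged in shared memory, and the weights are loaded from off-chip memory exactly once:

#include <cstdio>
#include <cuda_runtime.h>

#define N 64   // toy layer width: one CTA, one row of U per thread

// Toy "persistent" layer: each thread loads its row of U into a local array
// once; with full unrolling the compiler can keep it in registers, so the
// weights never travel over the memory bus again during the time loop.
__global__ void persistent_rnn(const float* U, float* h, int timesteps) {
    __shared__ float h_shared[N];            // current activations, visible to all threads
    int row = threadIdx.x;

    float u[N];                              // this thread's slice of the weights
    #pragma unroll
    for (int col = 0; col < N; ++col)
        u[col] = U[row * N + col];           // loaded from DRAM exactly once

    h_shared[row] = h[row];
    __syncthreads();

    for (int t = 0; t < timesteps; ++t) {
        float acc = 0.0f;
        #pragma unroll
        for (int col = 0; col < N; ++col)
            acc += u[col] * h_shared[col];   // weights come from registers, not DRAM
        __syncthreads();                     // all threads done reading h_shared
        h_shared[row] = acc;                 // (nonlinearity omitted)
        __syncthreads();
    }
    h[row] = h_shared[row];
}

int main() {
    float *U, *h;
    cudaMallocManaged(&U, N * N * sizeof(float));
    cudaMallocManaged(&h, N * sizeof(float));
    for (int i = 0; i < N * N; ++i) U[i] = (i % N == i / N) ? 1.0f : 0.0f;  // identity
    for (int i = 0; i < N; ++i) h[i] = 1.0f;
    persistent_rnn<<<1, N>>>(U, h, 16);      // weights stay on-chip across all 16 steps
    cudaDeviceSynchronize();
    std::printf("h[0] after 16 steps: %f\n", h[0]);
    cudaFree(U); cudaFree(h);
    return 0;
}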

SLIDE 13

Choosing the tile sizes

[Figure: tiling of the 1152 x 1152 recurrent weight matrix. The matrix is split into row blocks of 48 rows, one per SM (SM0-SM23); each SM's block is split across 8 warps (Warp0-Warp7), 6 rows per warp; within a warp, each thread (Thread0-Thread31) holds a small interleaved tile of the weights in registers.]

Block rows avoid additional inter-CTA synchronizations. Each SM loads the activations into shared memory. Threads are interleaved to avoid shared memory bank conflicts. Vector loads and broadcasts amplify shared memory bandwidth.

SLIDE 14

Global barriers on GPUs

[Figure: comparison of synchronization schemes. With an ordinary kernel launch, the grid of cooperative thread arrays is synchronized by ending one kernel and launching the next around each barrier or divergent branch. With a persistent kernel launch, the grid stays resident and synchronizes at a global (inter-CTA) barrier inside the kernel.]

An inter-CTA barrier is implemented with a counting semaphore. It uses atomic, membar, and cache-modified load/store operations, and completes in about 500 ns on a Titan X GPU. Disclaimer: global barriers violate the CUDA 7.5 programming model; CUDA does not guarantee forward progress of multiple CTAs. Our system implements cooperative threading for correctness.
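A sketch of that counting-semaphore pattern under the same caveat (all CTAs must be co-resident on the GPU); the variable and kernel names are assumptions, and current CUDA exposes the same functionality safely through cooperative groups (grid.sync()):

#include <cstdio>
#include <cuda_runtime.h>

// Counting-semaphore global barrier (sketch). CTAs atomically count arrivals;
// the last arriver resets the count and bumps a generation number that
// releases the waiters. Valid only if every CTA is resident at once.
__device__ unsigned arrived = 0;
__device__ unsigned generation = 0;

__device__ void global_barrier(unsigned num_ctas) {
    __syncthreads();                                    // whole CTA reaches the barrier
    if (threadIdx.x == 0) {
        __threadfence();                                // publish this CTA's global writes
        unsigned gen = atomicAdd(&generation, 0u);      // read the current generation
        if (atomicAdd(&arrived, 1u) + 1 == num_ctas) {
            arrived = 0;                                // last CTA resets the semaphore...
            __threadfence();
            atomicAdd(&generation, 1u);                 // ...and releases everyone else
        } else {
            while (atomicAdd(&generation, 0u) == gen) { /* spin */ }
        }
        __threadfence();                                // see the other CTAs' writes
    }
    __syncthreads();                                    // release the rest of this CTA
}

__global__ void two_phase(const int* in, int* mid, int* out, int total) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    mid[i] = in[i] + 1;                                 // phase 1
    global_barrier(gridDim.x);                          // all phase-1 writes now visible
    out[i] = mid[i] + mid[(i + 1) % total];             // phase 2 reads another CTA's result
}

int main() {
    const int ctas = 4, threads = 64, total = ctas * threads;
    int *in, *mid, *out;
    cudaMallocManaged(&in,  total * sizeof(int));
    cudaMallocManaged(&mid, total * sizeof(int));
    cudaMallocManaged(&out, total * sizeof(int));
    for (int i = 0; i < total; ++i) in[i] = 0;
    two_phase<<<ctas, threads>>>(in, mid, out, total);  // 4 CTAs are easily co-resident
    cudaDeviceSynchronize();
    std::printf("out[0] = %d (expect 2)\n", out[0]);
    cudaFree(in); cudaFree(mid); cudaFree(out);
    return 0;
}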

SLIDE 15

Software pipelining

[Figure: pipeline diagram. The load, math, reduce, and barrier stages for four in-flight mini-batch samples (mini-batch 0 through 3) are staggered across timesteps, so that all four stages are busy at once in the steady state.]

Software pipelining is used to hide latency: thread-local math (430 ns), intra-SM reduction (320 ns), global loads (315 ns), and the global barrier (500 ns). These are grouped into 4 pipeline stages, kept full with a mini-batch of 4.
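A small host-side sketch of the resulting schedule (the sample count is arbitrary): with four independent samples in flight, stage k of sample i runs in the same cycle as stage k+1 of sample i−1, so all four stages stay busy once the pipeline fills:

#include <cstdio>

// Four pipeline stages, matching the slide: global load, thread-local math,
// intra-SM reduction, global barrier. With 4 samples in flight, every cycle
// in the steady state has one sample occupying each stage.
int main() {
    const char* stage_names[4] = {"load", "math", "reduce", "barrier"};
    const int num_stages  = 4;
    const int num_samples = 8;                       // illustrative sample count

    for (int cycle = 0; cycle < num_samples + num_stages - 1; ++cycle) {
        std::printf("cycle %2d:", cycle);
        for (int stage = 0; stage < num_stages; ++stage) {
            int sample = cycle - stage;              // sample occupying this stage now
            if (sample >= 0 && sample < num_samples)
                std::printf("  %s(sample %d)", stage_names[stage], sample);
        }
        std::printf("\n");
    }
    return 0;
}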

SLIDE 16

Strong Scaling

SLIDE 17

Scaling to 128 GPUs

Scaling results for end-to-end model training: 8 GPUs per node, 7 GB/s InfiniBand between nodes. The algorithmic mini-batch size is fixed at 512.

[Figure: Deep Speech scaling with 1152-unit layers. Throughput in TeraFLOP/s vs. GPU count (up to 128) for PERSISTENT-RNN and GEMM-RNN, plotted against perfect scaling.]

A smaller mini-batch per GPU enables the use of up to 128 GPUs.

SLIDE 18

Exploring deep residual RNNs

Using a mini-batch per GPU of 4 instead of the typical 64 provides a 16x reduction in memory. Models with more parameters can now fit into GPU memory.

[Figure: deep residual network error rate reduction with depth. Word error rate (English) vs. recurrent layer count for a deep residual RNN.]

Results suggest that residual skip connections also apply to RNNs.

SLIDE 19

Pascal and future

Future GPUs will enable bigger and faster RNN layers: bigger GPUs (more threads, more registers), low-latency atomics between GPUs (NVLink), and lower precision (fp16).

SLIDE 20

Conclusions

So far, deep learning for speech recognition has scaled with compute.

[Figure: recognition accuracy vs. data and compute (repeated from Slide 3). Deep learning keeps improving with compute, from 100 GFLOP/s (1 laptop) to 6 TFLOP/s (1 GPU), 800 TFLOP/s (128 GPUs), and 100 PFLOP/s (16K GPUs), while many previous methods level off.]

Persistent kernels provide a new tool for accelerating RNN training. Let’s continue building faster computers, software, and algorithms. What other hard AI problems will scale with deep learning and compute?

SLIDE 21

Questions

Questions?

Contact me: Gregory Diamos - gregdiamos@baidu.com

Baidu USA is hiring! http://usa.baidu.com/careers/
