
The Frontier of Define-by-Run Deep Learning Frameworks - PowerPoint PPT Presentation



  1. The Frontier of Define-by-Run Deep Learning Frameworks. GTC 2019 @ San Jose, Mar. 20, 2019. Seiya Tokui, Preferred Networks, Inc. S9380

  2. Deep Learning Framework for fast iterative research/development

  3. Define-by-Run frameworks by default from 2.0

  4. Write forward prop as a plain Python script. Variables hold how they were computed; use that to compute the gradient.

     x = numpy.array(…)
     h1 = layer1(x, W1)
     h2 = layer2(h1, W2)
     loss = loss_func(h2)
     loss.backward()
     W1.array -= lr * W1.grad
     W2.array -= lr * W2.grad

  5. Deep learning framework optimized for the Define-by-Run API design

  6. ✓ Model description
     ✓ Distributed training
     ✓ Serialization, export, … (example below)
     Everything is optimized for Define-by-Run style programming.
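For instance, saving and restoring parameters goes through the serializers module; a minimal sketch (the file name and the model variable are our own example, not from the slides):

     import chainer

     # Save all parameters of a Link/Chain to a NumPy .npz archive ...
     chainer.serializers.save_npz('mlp.npz', model)
     # ... and restore them into a model of the same structure
     chainer.serializers.load_npz('mlp.npz', model)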

  7. Tie parameters to the forward code using OOP.

     class Linear(chainer.Link):
         def __init__(self, n_in, n_out):
             super().__init__()
             with self.init_scope():
                 self.W = chainer.Parameter(I.HeNormal(), (n_in, n_out))
                 self.b = chainer.Parameter(0, (n_out,))

         def forward(self, x):
             return x @ self.W + self.b

  8. Object structure = composition of NN fragments.

     class MLP(chainer.Chain):
         def __init__(self):
             super().__init__()
             with self.init_scope():
                 self.l1 = Linear(784, 200)
                 self.l2 = Linear(200, 100)
                 self.l3 = Linear(100, 10)

         def forward(self, x):
             h1 = F.relu(self.l1(x))
             h2 = F.relu(self.l2(h1))
             return self.l3(h2)

  9. Every part is plain, customizable Python code.

     for batch in iterator:          # fetch the next minibatch
         x, t = converter(batch)     # concat, transfer to the device
         loss = loss_fun(x, t)       # forward prop
         loss.backward()             # backprop
         optimizer.update()          # update parameters
         model.cleargrads()          # clear gradients (Link.cleargrads)
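The slide leaves iterator, loss_fun, and optimizer implicit; one way to wire them up, using the MLP from slide 8 (the loss_fun helper and the hyperparameters are our assumptions):

     import chainer
     import chainer.functions as F

     model = MLP()
     optimizer = chainer.optimizers.SGD(lr=0.01)
     optimizer.setup(model)

     def loss_fun(x, t):
         # classification loss on the MLP's logits
         return F.softmax_cross_entropy(model(x), t)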

  10. Fast GPU computation


  12. import numpy as np

      def logsumexp(x):
          x_max = x.max(axis=1, keepdims=True)
          x0 = x - x_max
          lse = np.log(np.exp(x0).sum(axis=1, keepdims=True))
          lse += x_max
          return lse

      x = np.array([...], dtype=np.float32)
      print(logsumexp(x))

  13. import cupy as cp

      def logsumexp(x):
          x_max = x.max(axis=1, keepdims=True)
          x0 = x - x_max
          lse = cp.log(cp.exp(x0).sum(axis=1, keepdims=True))
          lse += x_max
          return lse

      x = cp.array([...], dtype=cp.float32)
      print(logsumexp(x))

  14. import cupy as cp, numpy as np

      def logsumexp(x):
          x_max = x.max(axis=1, keepdims=True)
          x0 = x - x_max
          lse = np.log(np.exp(x0).sum(axis=1, keepdims=True))
          lse += x_max
          return lse

      x = cp.array([...], dtype=np.float32)
      print(logsumexp(x))

      Same code as before, but NumPy's ufuncs dispatch on the CuPy array, so the computation runs on the GPU.

  15. ✓ cuDNN support (conv, pooling, LSTM, …)
      ✓ Easy custom kernels compiled at runtime (see the sketch below)
      ✓ FP16 support
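A minimal example of such a runtime-compiled custom kernel, using CuPy's ElementwiseKernel; this particular kernel is our illustration, not from the slides:

      import cupy as cp

      # The element-wise CUDA code is compiled on first call and cached
      squared_diff = cp.ElementwiseKernel(
          'float32 x, float32 y',      # input arguments
          'float32 z',                 # output argument
          'z = (x - y) * (x - y)',     # per-element operation
          'squared_diff')              # kernel name

      a = cp.arange(5, dtype=cp.float32)
      b = cp.ones(5, dtype=cp.float32)
      print(squared_diff(a, b))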

  16. Training pipeline: load & make minibatch → forward → backward → parameter update, with acceleration at each stage: DALI and multiprocessing for data loading, float16 mode and TensorCore for compute, and distributed training.

  17. Mixed precision training
      > TensorCore support automatically available
      > Techniques for mixed precision training (see below): optimizer.set_loss_scale(scale), optimizer.use_fp32_update()
      > mixed16 mode (coming soon): CHAINER_DTYPE=mixed16
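In context, the two optimizer calls from the slide might be used like this; the surrounding setup (Adam, the model variable, the scale value) is our assumption:

      import chainer

      optimizer = chainer.optimizers.Adam()
      optimizer.setup(model)            # model holds fp16 parameters
      optimizer.set_loss_scale(128.0)   # scale the loss so fp16 gradients don't underflow
      optimizer.use_fp32_update()       # keep fp32 master weights for the update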

  18. Distributed training

  19. Data-parallel timeline: process 0 (node 0, GPU 0), process 1 (node 0, GPU 1), process 2 (node 1, GPU 0), and process 3 (node 1, GPU 1) each run forward → backward → optimize, with an ALL-REDUCE of gradients between backward and optimize.

  20. Data parallelism

      comm = chainermn.create_communicator()
      device = comm.intra_rank  # use this device
      optimizer = chainermn.create_multi_node_optimizer(…, comm)

      Scaled to a V100x512 environment (https://arxiv.org/abs/1809.00778).
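A fuller sketch of the same setup; the choice of Adam and the model variable are our assumptions:

      import chainer
      import chainermn

      comm = chainermn.create_communicator()    # one MPI process per GPU
      device = comm.intra_rank                  # GPU id within this node
      model.to_gpu(device)
      # Wraps a regular optimizer; gradients are all-reduced across processes
      optimizer = chainermn.create_multi_node_optimizer(
          chainer.optimizers.Adam(), comm)
      optimizer.setup(model)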

  21. Model parallelism
      > Each node computes a different part of the network (the model itself is parallelized)
      > MPI communication primitives with backprop

      # rank 0
      phi = send(x, comm, rank=1)
      h = recv(comm, rank=1, delegate_variable=phi)

      # rank 1
      x = recv(comm, rank=0)
      h = f(x)
      phi = send(h, comm, rank=0)


  23. Model parallelism
      > send returns a pseudo variable φ; it simulates the topology of the full computational graph
      > Collective communication routines, e.g. bcast, scatter, allgather, etc., are also available

      # rank 0
      phi = send(x, comm, rank=1)
      h = recv(comm, rank=1, delegate_variable=phi)
      loss(h).backward()

      # rank 1
      x = recv(comm, rank=0)
      h = f(x)
      phi = send(h, comm, rank=0)
      phi.backward()

  24. Domain specific add-on packages

  25. ✓ Support for standard computer vision tasks: classification, object detection, semantic/instance segmentation
      ✓ Simple, unified interface: easy to use and compose, optimized for computer vision workloads
      ✓ Guaranteed reproduction: every implemented method is confirmed to reproduce the same performance as the original paper
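This slide describes ChainerCV. As an illustration of the unified interface, a minimal detection sketch (the image path and pretrained-weight tag are our example choices):

      from chainercv.links import SSD300
      from chainercv.utils import read_image

      model = SSD300(pretrained_model='voc0712')   # weights trained on PASCAL VOC
      img = read_image('example.jpg')              # float32 CHW array
      # predict takes a list of images and returns per-image boxes/labels/scores
      bboxes, labels, scores = model.predict([img])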

  26. ✓ Wide range of deep RL methods covered: DQN, Categorical DQN, IQN, DDPG, A3C, ACER, NSQ, PCL, PPO, TRPO
      ✓ Clean API and abstraction: easy to combine multiple orthogonal design choices, e.g. discrete/continuous actions, recurrent models, async training, ...
      ✓ Environment support: compatible with the OpenAI Gym interface
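This slide describes ChainerRL. A minimal sketch of building a DQN agent on a Gym environment; the environment choice and all hyperparameters are our assumptions:

      import chainer
      import chainerrl
      import gym

      env = gym.make('CartPole-v0')
      # Fully-connected Q-function over a discrete action space
      q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
          env.observation_space.shape[0], env.action_space.n,
          n_hidden_channels=50, n_hidden_layers=2)
      optimizer = chainer.optimizers.Adam()
      optimizer.setup(q_func)
      agent = chainerrl.agents.DQN(
          q_func, optimizer,
          chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 6),
          gamma=0.99,
          explorer=chainerrl.explorers.ConstantEpsilonGreedy(
              0.1, env.action_space.sample))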

  27. Chainer Chemistry, Chainer UI

  28. What is needed for modern deep learning frameworks?

  29. Speed: faster trial-and-error, larger scale
      Environment support: quick adoption of new hardware/environments
      Quick deployment: quick application of research outcomes

  30. ChainerX included in Chainer v6 beta1

  31. ChainerX = NumPy-like ndarray + autograd
      • Speed: in C++ with a thin binding layer = far less host-side overhead
      • Environment support: pluggable device backends = open to quickly adding new device support
      • Quick deployment: pure C++ API = available for Python-free native apps

  32. Architecture: existing Chainer code keeps using the high-level Chainer API in Python; portable code with much less overhead targets the ChainerX Python API; both run on ChainerX in C++ (with a C++ API), alongside NumPy and CuPy, on top of pluggable backends (native backend, CUDA backend, custom backends, …).

  33. ChainerX Python API: the chainerx namespace
      > NumPy compatible API
      > NN specific functions: conv, batch_norm, …
      > Device support
      > require_grad() to make an array differentiable

      import chainerx as chx

      x = chx.ones((2, 3), dtype=chx.float32, device='cuda:0')
      y = (x + 1).require_grad()
      z = chx.exp(y).sum()
      z.backward()

  34. Chainer on ChainerX
      > Wraps chx.ndarray with Variable
      > FunctionNode falls back to NumPy/CuPy for the computation
      > Uses the ChainerX (C++) computational graph, with lower overhead in backprop

      arr = chx.ones((2, 3), dtype=chx.float32)
      x = chainer.Variable(arr)
      y = model(x)
      y.backward()

  35. ChainerX C++ API
      > Maps almost one-to-one to the Python API
      > Runs without a CPython environment

      chainerx::Array x = chainerx::ones(
          {2, 3}, chainerx::Dtype::kFloat32, chainerx::GetDevice("cuda:0"));
      chainerx::Array y = (x + 1).RequireGrad();
      chainerx::Array z = chainerx::Exp(y).Sum();
      chainerx::Backward(z);

  36. Framework/API          Time per iteration (fwd + bwd + update, msec)
      Chainer on NumPy       14.48
      Chainer on ChainerX     7.54
      ChainerX Python         1.88
      PyTorch                 2.45

      The gap between the rows is host-side logic overhead.

  37. ChainerX: Roadmap
      v6 (May 2019): basic ops; integration into Chainer
      v7 (Nov 2019): wide coverage of ops; ready for most users; C++ API made more accessible
      future (2020+): easier deployment; wider coverage of “compiled models”

  38. Chainer Compiler: https://github.com/pfnet-research/chainer-compiler
      Python code becomes ONNX+ either by tracing (ONNX-Chainer) or by translation (Chainer to ONNX); the graph is then executed with Chainer VM on ChainerX, exported to vendor-specific graph formats, or compiled to a native binary. Features: graph-based optimization, graph-based autodiff, dynamic shapes, control flows.


  40. > Pioneering Define-by-Run API design
      > Being made faster and more portable with ChainerX and the Chainer Compiler
      WE ARE HIRING! @ChainerOfficial on Twitter
      https://bit.ly/join-chainer-slack
      https://preferred-networks.jp/en/jobs
