
2020 USENIX HotCloud: Towards GPU Utilization Prediction for Cloud Deep Learning. Gingfung Yeung, Damian Borowiec, Adrian Friday, Richard Harper, Peter Garraghan. Evolving Distributed System Lab, School of Computing & Communications, Lancaster University, UK.


  1. 2020 USENIX HotCloud. Towards GPU Utilization Prediction for Cloud Deep Learning. Gingfung Yeung, Damian Borowiec, Adrian Friday, Richard Harper, Peter Garraghan. Evolving Distributed System Lab, School of Computing & Communications, Lancaster University, UK

  2. Deep Learning (DL) Systems • A growing number of machine learning engineers, researchers, and users drives more Deep Learning (DL) workloads onto expensive GPUs • These workloads require efficient resource usage and high DL performance

  3. DL System Challenges • Avg. GPU utilization ~52% in production systems [Jeon et al. '19] • Long job completion and queue times, up to hours [Jeon et al. '19; Gu et al. '19] • Addressed via understanding and exploiting workload patterns

  4. Online profiling approach • Deploy the workload onto isolated machines and GPUs to obtain its workload patterns • The resource monitor responds with per-GPU profiles, e.g. GPU-1 {Utilization = 20, Memory = 4 GiB, Bytes…}, GPU-2 {Utilization = 40, Memory = 6 GiB, Bytes…} • Per-workload profiling usually ranges from minutes to hours
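The per-GPU profiles on this slide come from polling a resource monitor; a minimal sketch of that step using `nvidia-smi`. The query flags are real `nvidia-smi` options, but the `parse_smi_line` helper and the profile dictionary shape are assumptions for illustration:

```python
# Sketch of the online-profiling step: poll nvidia-smi for per-GPU
# utilization and memory, roughly matching the slide's profile format.
import subprocess

def parse_smi_line(line):
    """Parse one CSV line from nvidia-smi: '<util %>, <memory MiB>'."""
    util, mem = (field.strip() for field in line.split(","))
    return {"utilization": int(util), "memory_mib": int(mem)}

def profile_gpus():
    """Return one profile dict per visible GPU (requires an NVIDIA driver)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [parse_smi_line(line) for line in out.strip().splitlines()]
```

In a real monitor this poll would run on an interval and feed a time series, which is what makes per-workload profiling take minutes to hours.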

  5. DL Metrics • Iteration time • Useful for scale-out workers, migration, SLA-aware inference [Peng et al. '18; Xiao et al. '18; Shen et al. '19] • Network I/O • Useful for efficient distributed training [Gu et al. '19] • GPU Utilization • For packing and calculating interference [Thinakaran et al. '19; Xu et al. '19]
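Of the three metrics, iteration time is the simplest to measure directly; a minimal sketch, where `train_step` is a hypothetical stand-in for one forward/backward pass:

```python
import time

def measure_iteration_time(train_step, warmup=3, iters=10):
    """Average wall-clock seconds per training iteration.

    Warm-up iterations are discarded, since the first steps of a DL job
    (kernel compilation, cache warm-up) are typically slower.
    """
    for _ in range(warmup):
        train_step()
    start = time.perf_counter()
    for _ in range(iters):
        train_step()
    return (time.perf_counter() - start) / iters
```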

  6. Case: Scheduling • Scheduling loop: 1. Query the resource monitor 2. Issue 3. Migrate • The scheduler makes decisions based on workload patterns from profiling, within a resource management framework
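The decision step of the loop (query utilization, then place a workload) can be sketched as a best-fit placement over per-GPU utilization; the `place` helper and the 100% summative-utilization cap are illustrative assumptions, not the paper's policy:

```python
def place(workload_util, gpu_utils, cap=100):
    """Pick the GPU where the workload's predicted utilization fits
    under `cap`, best-fit: smallest headroom left after placement.

    gpu_utils maps GPU name -> current utilization (%).
    Returns the chosen GPU name, or None if the workload must queue.
    """
    candidates = [
        (gpu, util) for gpu, util in gpu_utils.items()
        if util + workload_util <= cap
    ]
    if not candidates:
        return None  # no GPU has headroom; keep the workload queued
    return min(candidates, key=lambda c: cap - (c[1] + workload_util))[0]
```

Usage: `place(40, {"gpu-1": 20, "gpu-2": 70})` picks `gpu-1`, since co-locating on `gpu-2` would exceed the cap.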

  7. Time is Money • If the system has many heterogeneous workloads, per-workload profiling leads to head-of-line blocking • The workload queue spends N workloads × minutes in the profiling stage before reaching the scheduling stage
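The N workloads × minutes cost can be made concrete with a small sum over queue positions; the sequential single-profiling-slot model here is a simplifying assumption:

```python
def queue_wait(n_workloads, stage_seconds):
    """Cumulative head-of-line wait when every workload must pass a
    sequential stage (profiling or prediction) before being scheduled:
    workload i waits for the i earlier workloads ahead of it."""
    return sum(i * stage_seconds for i in range(n_workloads))

# e.g. 10 queued workloads, 5-minute profiling vs. 1-second prediction:
profiling_wait = queue_wait(10, 300)   # 13500 s of cumulative waiting
prediction_wait = queue_wait(10, 1)    # 45 s
```

The gap grows quadratically with queue length, which is why the profiling stage dominates scheduling latency for busy clusters.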

  8. Online Profiling • Pros • Accurate, near real-time workload patterns • Provides insights to the system • Cons • Heterogeneous workloads require different profiles • Time consuming (~minutes to ~hours) • Requires modifying underlying frameworks

  9. Online Profiling • Pros • Accurate, near real-time workload patterns • Provides insights to the system • Cons (can workload patterns be obtained without prior execution?) • Heterogeneous workloads require different profiles • Time consuming (~minutes to ~hours) • Requires actual execution on an isolated machine • Requires modifying underlying frameworks

  10. Prediction • Replacing profiling with prediction reduces blocking to N workloads × seconds • The workload queue passes through a prediction stage (sub-second to seconds) before the scheduling stage

  11. DL System Challenges (recap of slide 3)

  12. DL Metrics (recap of slide 5)

  13. Objective • A GPU utilization prediction engine for Cloud DL Systems • Benefits • Estimates GPU utilization of unseen workloads, prior to execution • No modification of existing DL frameworks (e.g. PyTorch, TensorFlow, MXNet…) • Contributions: analysis, prediction model, case study

  14. DL computation graph • [Figure: GoogLeNet computation graph, from "Going Deeper with Convolutions", Szegedy et al. 2014] • Features: number of convolutions, FLOPs, layers, etc. (see paper for the full feature list) • Leverage graph information to predict workload usage: a function g mapping graph features y to utilization z, i.e. g(y) → z
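A sketch of extracting the slide's features from a computation graph; representing the graph as a list of `(op_type, flops)` pairs is an assumption for illustration, not the paper's encoding:

```python
from collections import Counter

def graph_features(layers):
    """Build the feature vector y for the predictor g(y) -> z from a
    computation graph given as [(op_type, flops), ...] pairs.

    Covers the features named on the slide (num convs, FLOPs, layers);
    the paper's full feature list is larger.
    """
    op_counts = Counter(op for op, _ in layers)
    return {
        "num_convs": op_counts["conv"],
        "num_layers": len(layers),
        "total_gflops": sum(flops for _, flops in layers) / 1e9,
    }
```

Usage: `graph_features([("conv", 2e9), ("conv", 1e9), ("relu", 0.0), ("fc", 5e8)])` yields 2 convolutions, 4 layers, 3.5 GFLOPs.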

  15. Analysis • Profile DL workload utilization and determine important model features • Setup: Nvidia 1080, Nvidia 2080, Intel i7-6850k; 13 DNN model architectures, 81 workloads (see paper for the full list of models and permutations) • Tools: nvidia-smi, Nvidia Nsight Systems

  16. Analysis • [Figure: GPU utilization (%) vs. GFLOPs for CNN and RNN workloads]

  17. Analysis • [Figure: normalized JCT increase (1x–5x) vs. summative GPU utilization (0–200%) on Nvidia 1080 and Nvidia 2080, for batch sizes 16, 64, and 128] • 1.5x – 4x slowdown from co-location
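The trend in the figure (job completion time growing once summative utilization passes the GPU's capacity) can be sketched with a toy linear interference model; the `alpha` coefficient and the linear form are illustrative assumptions, not the paper's measured fit:

```python
def estimated_slowdown(colocated_utils, alpha=0.02):
    """Toy interference model: co-located workloads run at 1.0x until
    their summative utilization exceeds the GPU's 100%, after which
    normalized JCT grows linearly with the overshoot."""
    overshoot = max(0, sum(colocated_utils) - 100)
    return 1.0 + alpha * overshoot
```

With `alpha=0.02`, two workloads at 80% and 70% (summative 150%) give an estimated 2.0x slowdown, within the 1.5x–4x range the analysis reports.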

  18. GPU Utilization Prediction • Training loss, a mean squared logarithmic error over $o$ samples with predicted utilization $q_j$ and measured utilization $z_j$: $\frac{1}{o}\sum_{j=1}^{o}\left(\log(q_j + 1) - \log(z_j + 1)\right)^2$
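The loss reconstructed above is the mean squared logarithmic error; a direct sketch, assuming `pred` holds the predicted utilizations $q_j$ and `actual` the measured utilizations $z_j$:

```python
import math

def msle(pred, actual):
    """Mean squared logarithmic error between predicted and measured
    GPU utilization; the +1 inside each log guards against log(0)."""
    n = len(pred)
    return sum(
        (math.log(q + 1) - math.log(z + 1)) ** 2
        for q, z in zip(pred, actual)
    ) / n
```

Taking logs before squaring makes the loss penalize relative error, so a miss at 10% utilization counts similarly to a proportional miss at 80%.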

  19. Evaluation • [Figure: average cluster GPU utilization (%) over time (0–300 minutes) for slot-based, reactive, and proactive scheduling] • 33.5% makespan reduction • 61.5% utilization improvement

  20. Open Challenges • Hardware • Number of processing elements, memory bandwidth, and cache sizes • DL Compilers • Extract lower-level IR to determine optimization decisions for more accurate prediction (e.g. op fusion: Conv + BatchNorm) • Distributed Workloads • Network I/O, parallelism strategy, and system configuration (e.g. ring topology) • Co-location Scheduling • Incorporate prediction and system constraints; derive an optimization algorithm (e.g. Mixed Integer Programming)
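For the co-location scheduling challenge, the optimization the slide alludes to can be approximated at toy scale by brute force; minimizing the peak summative utilization is an assumed stand-in objective, not the formulation the authors propose:

```python
from itertools import product

def best_assignment(workload_utils, n_gpus):
    """Exhaustively search workload->GPU assignments, minimizing the
    peak summative utilization across GPUs (a balance objective that a
    MIP solver would handle at realistic scale).

    Returns (assignment tuple, peak utilization); assignment[i] is the
    GPU index for workload i. Exponential in len(workload_utils).
    """
    best, best_peak = None, float("inf")
    for assignment in product(range(n_gpus), repeat=len(workload_utils)):
        totals = [0] * n_gpus
        for util, gpu in zip(workload_utils, assignment):
            totals[gpu] += util
        peak = max(totals)
        if peak < best_peak:
            best, best_peak = assignment, peak
    return best, best_peak
```

For workloads of 60%, 50%, 40%, and 30% on two GPUs, the optimum pairs 60+30 and 50+40, keeping both GPUs at 90%; the prediction engine supplies the utilization inputs without any profiling run.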
