Large-scale GPU Deep Learning Platform Design and Case Analysis



  1. Large-scale GPU Deep Learning Platform Design and Case Analysis
  Zhang Qing, Alfie Lew
  YOUR SUCCESS, WE SUCCEED

  2. AI Age Has Arrived
  • Steam Age: 1760s, the first technological revolution
  • Electric Age: 1870s, the second technological revolution
  • Information Age: 1940s~1950s, the third technological revolution
  • AI Age: 2012, the fourth technological revolution

  3. AI Application Trend
  • More and more users: the Internet, security and surveillance, finance, health care, car manufacturers, smart city, robots, entertainment
  • More and more application scenarios: image/video analysis, speech recognition, NLP/OCR, …
  • Industry sectors: financial, medical care, automobile, household, entertainment

  4. Deep Learning Process Flow
  Data sets → Data preprocessing → Training → Model → Inference (example outputs: "Abnormal", "Thank you")

  5. Deep Learning Computing Characteristics
  • Data preprocessing: high I/O intensity
  • Training: extreme computing and communication intensity
  • Inference: high throughput and low latency

  6. Deep Learning Computing System Trend
  • Computing mode
  – From single node to clusters
  – From local to cloud
  • Data storage
  – From dedicated (training and inference) to unified storage
  • System management
  – Development platform
  – Production platform
  – Cloud platform
  • Application mode
  – From single user to multi-user
  – From single framework to multiple frameworks

  7. Deep Learning Challenges
  • Implementing distributed parallel neural network algorithms for speed, scale, and efficiency
  • Obtaining large amounts of labeled data and preprocessing it
  • Building a large-scale deep learning computing platform with expandability

  8. Architecture of a Large-scale Deep Learning System
  • App level: image/video apps, NLP apps, speech apps
  • Framework level: Caffe-MPI, TensorFlow, Caffe, CNTK, MXNet
  • Management level: scheduling, container image management, monitoring, application analysis (Inspur AIStation, Inspur Teye)
  • Platform level: GPU training platform, GPU inference platform, CPU pre-processing platform, parallel storage, 10GbE/IB network

  9. Deep Learning Challenges - Platform Level Design
  • I/O efficiency for data pre-processing
  • Computing resources required for modeling, tuning, and optimization
  • Inference speed and throughput required for processing large numbers of samples

  10. Architecture of a Large-scale Deep Learning Platform
  • Computing architecture
  – Data preprocessing platform: CPU cluster (Hadoop)
  – Training platform: CPU + P100/P40 GPUs (HPC cluster)
  – Inference platform: CPU + P4 GPUs
  • Data storage
  – Offline with Lustre
  – Online with HDFS
  • Network
  – Offline with IB
  – Online with 10GbE

  11. Deep Learning Challenges - Management Layer
  • Managing different computing platforms and configurations/devices
  • Managing different frameworks for different computing tasks
  • Managing the whole system and monitoring different computing tasks

  12. Deep Learning Management System
  AIStation is deep learning cluster and training-task management software. It rapidly deploys the deep learning training environment and comprehensively manages deep learning training tasks, providing an efficient and convenient platform for users.
  Key functions:
  • Deployment of the deep learning environment
  • Management of deep learning training tasks
  • GPU and CPU monitoring
  • GPU resource management and scheduling
  • Cluster statistics and reports

  13. AIStation - Workflow
  • User interaction - compose training jobs: 1. Resources: GPU; 2. Templates: TF1; 3. Images: TF/v1.0; 4. Parameters: ps, ws…; 5. Data: volume
  • Resource scheduling - assign GPUs: 1. Job starter; 2. TF1.yaml
  • Containers run - container installation: 1. Run containers; 2. Execute job
  • Applications start - training management: 1. Shell access; 2. VNC access; 3. Training visualization
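  The API behind these stages is internal to AIStation, but the last three steps essentially amount to starting a framework container pinned to the scheduled GPUs. The sketch below illustrates that idea with Docker and the NVIDIA container runtime; the image name, GPU IDs, script path, and data volume are hypothetical placeholders, not AIStation's actual interface.

```python
# Minimal sketch (assumption: Docker with the NVIDIA container runtime is
# installed). It shows the mechanics of "run containers / execute job" only;
# it is not AIStation's implementation.
import subprocess

def launch_training_job(image, command, gpu_ids, data_volume):
    """Run a training command inside a container pinned to specific GPUs."""
    devices = ",".join(str(g) for g in gpu_ids)
    cmd = [
        "docker", "run", "--rm",
        "--gpus", f'"device={devices}"',      # expose only the assigned GPUs
        "-v", f"{data_volume}:/data",         # mount the training data volume
        image,
        "bash", "-c", command,
    ]
    return subprocess.run(cmd, check=True)

if __name__ == "__main__":
    launch_training_job(
        image="tensorflow/tensorflow:1.15.5-gpu",      # placeholder framework image
        command="python /data/train.py --epochs 10",   # placeholder training script
        gpu_ids=[0, 1],
        data_volume="/mnt/datasets/example",
    )
```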

  14. AIStation - Integrating Deep Learning Frameworks
  • Supports multiple deep learning frameworks: Caffe, TensorFlow, CNTK, etc.
  • Supports various models: GoogLeNet, VGG, ResNet, etc.
  • One-key deployment of the deep learning environment
  • Training job submission and scheduling
  • Training process management and visualization
  Results: GPU resource utilization +20%, training job throughput +30%

  15. Teye: Application Optimization Analysis Tool
  Analyzes the bottlenecks and characteristics of applications:
  • GPU driver data: clock, ECC, power
  • GPU runtime data: memory utilization, memory copy, cache, SP/DP GFLOPS
  • CPU runtime info: AVX, SSE, SP/DP GFLOPS, CPI
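  The GPU driver metrics listed above (clock, ECC, power, memory utilization) are all exposed through NVIDIA's NVML interface. The sketch below, using the pynvml bindings, shows how such telemetry can be sampled; it illustrates the data source only and is not Teye's own code.

```python
# Sample basic GPU telemetry via NVML (pip install nvidia-ml-py / pynvml).
import pynvml

def sample_gpu_metrics():
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(h)
            sm_clock = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_SM)  # MHz
            power_w = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0               # watts
            util = pynvml.nvmlDeviceGetUtilizationRates(h)                     # percent
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)
            try:
                ecc = pynvml.nvmlDeviceGetTotalEccErrors(
                    h, pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
                    pynvml.NVML_VOLATILE_ECC)
            except pynvml.NVMLError:
                ecc = "n/a"  # ECC counters not supported on this GPU
            print(f"GPU{i} {name}: clock={sm_clock}MHz power={power_w:.1f}W "
                  f"util={util.gpu}% mem={mem.used // 2**20}/{mem.total // 2**20}MiB "
                  f"ecc_uncorrected={ecc}")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    sample_gpu_metrics()
```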

  16. Deep Learning Challenges - Framework
  • How to select from the many deep learning frameworks: Caffe, TensorFlow, MXNet, CNTK, Torch, Theano, DeepLearning4j, PaddlePaddle, …?
  • Which framework to use for a given scenario and model?
  • Use a single framework or multiple frameworks?

  17. A Framework Comparison
  • Compute platform: Inspur SR-AI Rack (16 GPUs) + AIStation + Teye (management)
  • Frameworks: Caffe, TensorFlow, MXNet
  • Models: AlexNet, GoogLeNet
  • Performance
  – AlexNet: 4675.8 images/s on 16 GPUs (14x over a single GPU) → Caffe is best
  – GoogLeNet: 2462 images/s on 16 GPUs (13x over a single GPU) → MXNet is best
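  The images/s figures and the 14x / 13x scaling factors come down to timing a fixed number of synchronous training iterations and dividing the total images processed by the elapsed time. The harness below is a hedged sketch of that measurement; `train_step`, the batch size, and the iteration counts are placeholders, not the benchmark code actually used for this slide.

```python
# Generic throughput harness: works for any framework whose training loop can
# be wrapped in a `train_step()` callable that runs one synchronous iteration.
import time

def measure_throughput(train_step, batch_size, num_gpus, iters=100, warmup=10):
    """Return training throughput in images/s."""
    for _ in range(warmup):              # exclude start-up and cache-warming cost
        train_step()
    start = time.perf_counter()
    for _ in range(iters):
        train_step()
    elapsed = time.perf_counter() - start
    return iters * batch_size * num_gpus / elapsed

def scaling_factor(multi_gpu_ips, single_gpu_ips):
    """E.g. ~4675.8 img/s on 16 GPUs vs. ~334 img/s on 1 GPU gives ~14x."""
    return multi_gpu_ips / single_gpu_ips
```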

  18. Factors to Consider when Selecting a Framework
  • Model size and complexity
  • Application scenario
  – Image
  – Speech
  – NLP
  • Data size, which drives the choice of distributed framework
  – Caffe-MPI
  – TensorFlow
  – MXNet

  19. Deep Learning Challenges - Application Layer
  • How to improve recognition accuracy?
  – Model design
  – Data pre-processing
  • How to improve training performance?
  – CUDA programming for half precision (Pascal)
  – CUDA programming for mixed precision
  • How to improve inference performance?
  – CUDA programming for INT8
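  As a framework-agnostic illustration of the two numeric techniques named above (the slide itself refers to hand-written CUDA kernels, which are not reproduced here), the numpy sketch below shows why FP16 training needs loss scaling and how symmetric per-tensor quantization maps FP32 weights to INT8 for inference. All values and the scaling factor are illustrative assumptions.

```python
# Illustrative only: numeric behaviour behind half/mixed precision and INT8.
import numpy as np

# --- Mixed precision: tiny gradients underflow in FP16 unless scaled ---
grad_fp32 = np.array([1e-8, -2e-8, 2.5e-8], dtype=np.float32)
print(grad_fp32.astype(np.float16))                 # all flush to zero in FP16
scale = 65536.0                                     # example loss-scaling factor
scaled = (grad_fp32 * scale).astype(np.float16)     # now representable in FP16
print(scaled.astype(np.float32) / scale)            # unscale in FP32: values survive

# --- INT8 inference: symmetric per-tensor quantization of weights ---
w = np.random.randn(4, 4).astype(np.float32)
q_scale = np.abs(w).max() / 127.0                   # map max |w| onto the int8 range
w_int8 = np.clip(np.round(w / q_scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * q_scale     # approximate reconstruction
print("max quantization error:", np.abs(w - w_dequant).max())
```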

  20. Deep Learning Applications on GPU
  • Speech training (image search application): CPU 256.1 s vs. GPU 115.2 s (1M samples, 180 dimensions)
  • Image training (network security application): runtime compared across CPU (C + MKL), 1-GPU, and 4-GPU versions

  21. Deep Learning Platform End-to-End
  • Model & algorithm: AlexNet / GoogLeNet / ResNet, CNN / RNN / LSTM
  • DL framework (deep learning training platform): Caffe-MPI, TensorFlow, MXNet, PaddlePaddle
  • DL management: AIStation management system, T-Eye tuning tool
  • Training: GPU cluster (16-card, 2U 8-card GPU box, 2U 4-card, 4U 4-card, NF5280M4, P8000 workstation), 10G/IB network
  • Data processing: speech/image/video and natural language processing cluster, flash storage (AS5600/13000)
  • Inference: terminal and AI cloud recognition
  • Applications and example outputs: speech recognition, face recognition ("This is Daniel Wu"), video monitoring ("Pursuit staff"), medical imaging ("Retinopathy"), personal assistant ("Have booked G6"), image search ("Big Win!")

  22. Inspur Deep Learning GPU Servers
  • NF5280M4: 2-GPU server, inference
  • NF5568M4: 4-GPU server, training
  • AGX-2: 8-GPU server, training
  • SR-AI Rack: 64-GPU server, training
  Inspur is a leading AI computing provider, supplying more than 60% of the AI hardware used by cloud service providers (CSPs) in China.

  23. Thank You Visit us in Booth #911 COMPUTING INSPIRES FUTURE
