
Kubernetes & AI with Run:AI, Red Hat & Excelero AI WEBINAR



  1. Kubernetes & AI with Run:AI, Red Hat & Excelero AI WEBINAR Date/Time: Tuesday, June 9 | 9 am PST

  2. What’s next in technology and innovation? Kubernetes & AI with Run:AI, Red Hat & Excelero AI WEBINAR. Presenters: William Benton, Engineering Manager; Omri Geller, CEO & Co-Founder; Gil Vitzinger, Software Developer. Your Host: Tom Leyden, VP Marketing.

  3. Kubernetes for AI Workloads Omri Geller, CEO and co-founder, Run:AI

  4. A Bit of History: Bare Metal → (reproducibility and portability) → Virtual Machines → (needed flexibility and better utilization) → Containers. Containers scale easily, they’re lightweight and efficient, they can run any workload, are flexible and can be isolated… but they need orchestration.

  5. Enter Kubernetes: Track, Schedule and Operationalize • Create Efficient Cluster Utilization • Execute Across Different Hardware

  6. Today, 60% of Those Who Deploy Containers Use K8s for Orchestration* (*CNCF)

  7. Now let’s talk about AI

  8. Computing Power Fuels Development of AI: Manual Engineering → Classical Machine Learning → Deep Learning

  9. Artificial Intelligence is a Completely Different Ballgame: New accelerators • Distributed computing • Experimentation R&D

  10. Data Science Workflows and Hardware Accelerators are Highly Coupled: data scientists ↔ hardware accelerators. Constant hassles • Workflow limitations • Under-utilized GPUs

  11. This Leads to Frustration on Both Sides: IT leaders are frustrated – GPU utilization is low; Data Scientists are frustrated – speed and productivity are low.

  12. AI Workloads are Also Built on Containers: NGC – Nvidia pre-trained models for AI experimentation on docker containers; the container ecosystem for Data Science is growing.

  13. How Can We Bridge The Divide?

  14. Kubernetes, the “De-facto” Standard for Container Orchestration, Lacks the Following Capabilities: • Multiple queues • Automatic queueing/de-queueing • Advanced priorities & policies • Advanced scheduling algorithms • Affinity-aware scheduling • Efficient management of distributed workloads
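For context, this is roughly how a GPU workload is expressed in plain Kubernetes: a pod asks for GPUs through the nvidia.com/gpu extended resource and is either placed immediately or stays Pending; there is no queue of waiting jobs, no automatic de-queueing, and no batch-style fairness policy out of the box. A minimal sketch (the pod name, image, and GPU count are illustrative, not from the deck):

apiVersion: v1
kind: Pod
metadata:
  name: train-pod            # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: tensorflow/tensorflow:latest-gpu   # illustrative GPU-enabled image
      command: ["python", "train.py"]           # placeholder training script
      resources:
        limits:
          nvidia.com/gpu: 2  # whole GPUs only; if none are free the pod simply stays Pending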

  15. How is Experimentation Different? Build vs. Training

  16. Distinguishing Between Build and Training Workflows.
  Build: • Development & debugging • Interactive sessions • Short cycles • Performance is less important • Low GPU utilization

  17. Distinguishing Between Build and Training Workflows.
  Build: • Development & debugging • Interactive sessions • Short cycles • Performance is less important • Low GPU utilization
  Training: • Training & HPO • Remote execution • Long workloads • Throughput is highly important • High GPU utilization
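A training workload of the kind described above is typically submitted to Kubernetes as a batch Job rather than an interactive session, so it can run remotely for a long time and simply complete. A minimal sketch, where the Job name, image, command, and GPU count are assumptions for illustration:

apiVersion: batch/v1
kind: Job
metadata:
  name: hpo-training-run                           # hypothetical name
spec:
  backoffLimit: 0                                  # do not automatically retry a failed run
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: train
          image: pytorch/pytorch:latest            # illustrative image
          command: ["python", "train.py", "--epochs", "100"]   # long, throughput-oriented run
          resources:
            limits:
              nvidia.com/gpu: 4                    # multi-GPU training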

  18. How to Solve? Guaranteed Quotas.
  Guaranteed quotas: • Fits training workflows • Users can go over quota
  Fixed quotas: • Fits build workloads • GPUs are always available

  19. Solution: Guaranteed Quotas.
  Guaranteed quotas: • Fits training workflows • Users can go over quota
  Fixed quotas: • Fits build workloads • GPUs are always available
  • More concurrent experiments • More multi-GPU training
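Plain Kubernetes can express the fixed-quota side of this picture with a namespaced ResourceQuota on GPU requests; the guaranteed-but-burstable behavior (going over quota when GPUs are idle) is what a batch scheduler such as Run:AI's adds on top. A sketch, with the namespace and the limit as illustrative values:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota                   # hypothetical name
  namespace: data-science-team      # illustrative namespace
spec:
  hard:
    requests.nvidia.com/gpu: "8"    # hard cap: pods in this namespace can request at most 8 GPUs in total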

  20. Queueing Management Mechanism

  21. Run:AI - Stitching it All Together

  22. Run:AI - Applying HPC Concepts to Kubernetes. With the advantages of K8s, plus some concepts from the world of HPC & distributed computing, we can bridge the gap: Data Science teams gain productivity and speed; IT teams gain visibility and maximal GPU utilization.

  23. Run:AI - Kubernetes-Based Abstraction Layer. INTEGRABLE: easily integrates with IT and Data Science platforms. MULTI-CLOUD: runs on any public, private and hybrid cloud environment. IT GOVERNANCE: policy-based orchestration and queueing management.

  24. Run:AI: • Utilize Kubernetes across IT to improve resource utilization • Speed up the experimentation process and time to market • Easily scale infrastructure to meet the needs of the business

  25. From 28% to 73% utilization, 2X speed, and $1M savings. Challenge: inefficient and underutilized resources – 28% average GPU utilization. After implementing Run:AI’s platform: 73% average GPU utilization • Enabled 2x more experiments to run • Saved $1M in additional GPU expenditures for 2020

  26. Run:AI at-a-Glance • Founded in 2018 • Venture funded, backed by top VCs • Offices in Tel Aviv, New York, and Boston • Fortune 500 customers • Top cloud and virtualization engineers

  27. Thank you

  28. NVMesh in Kubernetes

  29. What is the NVMesh CSI Driver?
  ● What is the NVMesh CSI Driver? ○ CSI - Container Storage Interface ○ NVMesh as a storage backend in Kubernetes
  ● Main Features ○ Static Provisioning ○ Dynamic Provisioning ○ Block and File System volumes ○ Access Modes (ReadWriteOnce, ReadWriteMany, ReadOnlyMany) ○ Extend volumes ○ Using NVMesh VPGs
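As an illustration of the filesystem-volume and access-mode features above, a PVC for a dynamically provisioned filesystem volume might look like the sketch below (the PVC name and size are made up; nvmesh-raid10 is the storage class shown in the usage examples later in this deck):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fs-pvc                      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce                 # single-node read/write, one of the listed access modes
  volumeMode: Filesystem            # the volume is formatted and mounted, unlike a raw Block volume
  resources:
    requests:
      storage: 50Gi                 # illustrative size
  storageClassName: nvmesh-raid10   # NVMesh-backed storage class (see the usage examples slide)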

  30. CSI Driver Components: the Kubernetes controller calls the NVMesh CSI Controller, which talks to NVMesh Management over its REST API; an NVMesh CSI Node Driver runs on each node alongside the NVMesh Client, and the NVMesh Clients access the NVMesh Targets.

  31. Dynamic Provisioning & Attach Flow: the user creates a Persistent Volume Claim (PVC); the Kubernetes controller passes the request to the NVMesh CSI Controller, which issues a Create Volume call to NVMesh Management, which provisions the volume on the NVMesh Targets.

  32. Dynamic Provisioning & Attach Flow (continued): the user creates a pod that uses the PVC; on that node, the NVMesh CSI Node Driver handles Attach/Detach through NVMesh Management and the NVMesh Client, which exposes the volume as /dev/nvmesh/v1; Kubernetes then performs its internal mount and the OS mount so the user application pods can access the data on the NVMesh Targets.

  33. Exposing an NVMesh volume in a Pod: the NVMesh Client attaches the volume as /dev/nvmesh/v1; CSI Stage Volume runs once for each volume on the node (for a FileSystem volume this is where mkfs runs and the volume is mounted at kubelet/volume/mount, while a Block volume stays raw); CSI Publish Volume then bind-mounts the staged volume for each pod that uses it, e.g. kubelet/pod1/volumes/v1 and kubelet/pod2/volumes/v1 for user application pods 1 and 2.

  34. Usage Examples

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 15Gi
  storageClassName: nvmesh-raid10

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvmesh-custom-vpg
provisioner: nvmesh-csi.excelero.com
parameters:
  vpg: your_custom_vpg
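A pod would then consume the block-mode PVC above through volumeDevices (rather than the usual volumeMounts), receiving the raw device inside the container; a minimal sketch with an illustrative pod name, image, and device path:

apiVersion: v1
kind: Pod
metadata:
  name: block-consumer              # hypothetical name
spec:
  containers:
    - name: app
      image: busybox                # illustrative image
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda     # where the raw block device appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc        # the PVC defined above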

  35. Summary – NVMesh Benefits for Kubernetes: ● Persistent storage that scales for stateful applications ● Predictable application performance – ensure that storage is not a bottleneck ● Scale your performance and capacity linearly ● Containers in a pod can access persistent storage presented to that pod, but with the freedom to restart the pod on an alternate physical node ● Choice of Kubernetes PVC access mode to match the storage to the application and file system requirements

  36. Machine learning discovery, workflows, and systems on Kubernetes. William Benton, Engineering Manager and Senior Principal Engineer, Red Hat, Inc.

  37. Codifying the problem and metrics → data collection and cleaning → feature engineering → model training and tuning → model validation → model deployment → monitoring and validation


  43. The infrastructure surrounding the ML code: configuration, data collection, data verification, feature extraction, machine resource management, analysis tools, process management tools, serving infrastructure, and monitoring. (Adapted from Sculley et al., “Hidden Technical Debt in Machine Learning Systems.” NIPS 2015)


  45. Data engineers build the pipelines: transform, federate, and archive steps over events, databases, and file/object storage.

  46. Data scientists train models through a developer UI on top of the same pipelines: transform and federate steps over events, databases, and file/object storage.
