
Swayam: Distributed Autoscaling for Machine Learning as a Service - PowerPoint PPT Presentation

  1. Swayam: Distributed Autoscaling for Machine Learning as a Service. Sameh Elnikety, Arpan Gujarati, Kathryn S. McKinley, Yuxiong He, Björn B. Brandenburg

  2. Machine Learning as a Service (MLaaS). Example platforms: Amazon Machine Learning, Google Cloud AI, and other data science & machine learning services.

  3. Machine Learning as a Service (MLaaS). 1. Training: untrained model + dataset = trained model. 2. Prediction: trained model + query = answer.

  4. Machine Learning as a Service (MLaaS). This work focuses on 2. Prediction: trained model + query = answer. Models are already trained and available for prediction.

  5. Swayam: distributed autoscaling of the compute resources needed for prediction serving (trained model + query = answer) inside the MLaaS infrastructure.

  6. Prediction serving (application perspective). An application / end user sends an image to the MLaaS provider's image classifier and receives the answer "cat".

  7. Prediction serving (provider perspective). The MLaaS provider has finite compute resources ("backends" for prediction) and lots of trained models!

  8. Prediction serving (provider perspective). (1) A new prediction request for the pink model arrives from an application / end user. (2) One of multiple request dispatchers ("frontends") receives the request.

  9. Prediction serving (provider perspective). (1) A new prediction request for the pink model arrives. (2) A frontend receives the request. (3) The request is dispatched to an idle backend. (4) The backend fetches the pink model.

  10. Prediction serving (provider perspective). (1) A new prediction request for the pink model arrives. (2) A frontend receives the request. (3) The request is dispatched to an idle backend. (4) The backend fetches the pink model. (5) The request outcome is predicted. (6) The response is sent back through the frontend.

  11. Prediction serving (objectives). Applications / end users, multiple request dispatchers ("frontends"), finite compute resources ("backends" for prediction), and lots of trained models.

  12. Prediction serving (objectives). For applications / end users: low latency and SLAs. For the MLaaS provider: resource efficiency.

  13. Static partitioning of trained models.

  14. Static partitioning of trained models. The trained models are partitioned among the finite backends.

  15. Static partitioning of trained models. The trained models are partitioned among the finite backends, so there is no need to fetch and install the pink model on demand.

  16. Static partitioning of trained models. Problem: not all models are used at all times.

  17. Static partitioning of trained models. Problem: not all models are used at all times. Problem: there are many more models than backends, and each model has a high memory footprint.

  18. Static partitioning of trained models. Given the objectives of low latency / SLAs and resource efficiency, and because not all models are used at all times, there are many more models than backends, and each model has a high memory footprint, static partitioning is infeasible.

  19. Classical approach: autoscaling. The number of active backends for the pink model is automatically scaled up or down based on the pink model's request load over time.

  20. Classical approach: autoscaling. With ideal autoscaling, there are always enough backends to guarantee low latency, and the number of active backends over time is minimized for resource efficiency.
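As a point of reference, here is a minimal sketch of the classical, reactive autoscaling rule described on this slide. The function name, per-backend capacity, and headroom factor are illustrative assumptions, not part of Swayam.

```python
import math

# Reactive autoscaling sketch for a single model (illustrative only):
# the number of active backends tracks the currently observed load,
# assuming each backend sustains `backend_capacity_rps` requests/second.
def target_backends(observed_rps: float, backend_capacity_rps: float,
                    headroom: float = 1.2, min_backends: int = 1) -> int:
    needed = math.ceil(observed_rps * headroom / backend_capacity_rps)
    return max(min_backends, needed)

# Example: 300 req/s with 40 req/s per backend and 20% headroom -> 9 backends.
print(target_backends(observed_rps=300, backend_capacity_rps=40))
```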

  21. Autoscaling for MLaaS is challenging [1/3].

  22. Autoscaling for MLaaS is challenging [1/3]. Recall the provider-side steps: (4) the backend fetches the pink model, and (5) the request outcome is predicted.

  23. Autoscaling for MLaaS is challenging [1/3]. Challenge: provisioning time (step 4) is much larger than execution time (step 5), roughly a few seconds versus ~10 ms to 500 ms. Requirement: predictive autoscaling to hide the provisioning latency.
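A hedged sketch of what "predictive" means here, under the simplifying assumption of linear extrapolation; Swayam's actual provisioning-aware scheme is more involved.

```python
# Predictive scaling sketch (illustrative): a new backend only becomes
# usable after `provisioning_delay_s` seconds, so the scaling decision is
# based on the load expected at now + provisioning_delay_s, extrapolated
# from recent request-rate samples.
def predicted_rps(recent_rps: list[float], interval_s: float,
                  provisioning_delay_s: float) -> float:
    if len(recent_rps) < 2:
        return recent_rps[-1]
    slope = (recent_rps[-1] - recent_rps[0]) / ((len(recent_rps) - 1) * interval_s)
    return max(0.0, recent_rps[-1] + slope * provisioning_delay_s)

# Example: rate grew 200 -> 260 req/s over three 10 s intervals; with a
# 5 s provisioning delay, provision for ~270 req/s rather than 260.
print(predicted_rps([200, 220, 240, 260], interval_s=10, provisioning_delay_s=5))
```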

  24. Autoscaling for MLaaS is challenging [2/3]. The MLaaS architecture is large-scale and multi-tiered: a hardware broker, frontends, and backends [VMs, containers].

  25. Autoscaling for MLaaS is challenging [2/3]. Challenge: multiple frontends, each with only partial information about the workload. Requirement: fast, coordination-free, globally-consistent autoscaling decisions on the frontends.
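One way to see how coordination-free yet globally-consistent decisions are possible: if the load balancer spreads requests evenly across frontends, each frontend can scale its local observation up to a global estimate and independently reach the same answer. This is an illustration of the idea, not Swayam's exact protocol.

```python
import math

# Each frontend knows only its local request rate, but under the
# even-load-balancing assumption all frontends compute the same global
# estimate and therefore the same target backend count, with no messages.
def consistent_target(local_rps: float, num_frontends: int,
                      backend_capacity_rps: float) -> int:
    global_rps = local_rps * num_frontends
    return max(1, math.ceil(global_rps / backend_capacity_rps))

# Every frontend observing ~25 req/s (8 frontends, 40 req/s per backend)
# independently arrives at the same answer: 5 backends.
print(consistent_target(local_rps=25, num_frontends=8, backend_capacity_rps=40))
```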

  26. Autoscaling for MLaaS is challenging [3/3]. Strict, model-specific SLAs on response times, e.g.: "99% of requests must complete under 500 ms"; "99.9% of requests must complete under 1 s"; "[A] 95% of requests must complete under 850 ms"; "[B] tolerate up to a 25% increase in request rates without violating [A]".

  27. Autoscaling for MLaaS is challenging [3/3]. Strict, model-specific SLAs on response times (as above). Challenge: there are no closed-form solutions for the response-time distributions needed for SLA-aware autoscaling. Requirement: accurate waiting-time and execution-time distributions.
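To make the requirement concrete, the sketch below estimates the smallest number of backends meeting a response-time SLA by simulating an n-server FCFS queue with an empirical execution-time distribution. This is a brute-force stand-in for Swayam's analytical model, which avoids simulation; all parameters are illustrative.

```python
import heapq
import random

def meets_sla(n_backends, arrival_rate, exec_times_ms, sla_ms, percentile,
              n_requests=20000):
    """Simulate an n-server FCFS queue and check the SLA percentile."""
    random.seed(0)
    free_at = [0.0] * n_backends               # next instant each backend is free
    heapq.heapify(free_at)
    t, within_sla = 0.0, 0
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)   # Poisson arrivals (rate in req/ms)
        start = max(t, heapq.heappop(free_at))  # wait for the earliest idle backend
        exec_ms = random.choice(exec_times_ms)  # sample an execution time
        heapq.heappush(free_at, start + exec_ms)
        within_sla += (start + exec_ms - t) <= sla_ms
    return within_sla / n_requests >= percentile

def min_backends(arrival_rate, exec_times_ms, sla_ms, percentile):
    n = 1
    while not meets_sla(n, arrival_rate, exec_times_ms, sla_ms, percentile):
        n += 1
    return n

# Example SLA: 99% of requests under 500 ms, arrivals at 0.05 req/ms.
print(min_backends(0.05, exec_times_ms=[80, 120, 200, 350], sla_ms=500,
                   percentile=0.99))
```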

  28. Swayam: model-driven distributed autoscaling. Challenges: provisioning time (step 4) far exceeds execution time (step 5) (~ a few seconds vs. ~10 ms to 500 ms); multiple frontends have only partial information about the workload; and there are no closed-form solutions for the response-time distributions needed for SLA-aware autoscaling. We address these challenges by leveraging specific ML workload characteristics and by designing an analytical model for resource estimation that allows distributed and predictive autoscaling.

  29. Outline: 1. System architecture and key ideas. 2. Analytical model for resource estimation. 3. Evaluation results.

  30. System architecture.

  31. System architecture. Applications / end users send requests to the frontends; a hardware broker manages a global pool of backends; and each trained model (pink, blue, green) has a dedicated set of backends.

  32. System architecture. Objective: each dedicated set of backends should dynamically scale. 1. If load decreases, extra backends go back to the global pool (for resource efficiency). 2. If load increases, new backends are set up in advance (for SLA compliance).
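A small sketch of how a dedicated set might grow from and shrink back into the global pool; the names and data structures are hypothetical, not Swayam's implementation.

```python
# Move backends between the global pool and a model's dedicated set so
# that the dedicated set matches the (predicted) target size.
def rebalance(dedicated: list[str], global_pool: list[str], target: int) -> None:
    while len(dedicated) < target and global_pool:
        dedicated.append(global_pool.pop())   # set up in advance (SLA compliance)
    while len(dedicated) > target:
        global_pool.append(dedicated.pop())   # release extras (resource efficiency)

pool = ["b4", "b5", "b6"]
pink = ["b1", "b2", "b3"]
rebalance(pink, pool, target=5)   # load increased: take two backends from the pool
print(pink, pool)                 # ['b1', 'b2', 'b3', 'b6', 'b5'] ['b4']
```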

  33. System architecture. Let's focus on the pink model: applications / end users, the frontends, the hardware broker, and the backends dedicated for the pink model, with the same dynamic-scaling objective as above.

  34. Key idea 1: assign states to each backend.

  35. Key idea 1: assign states to each backend. Cold: in the global pool. Warm: dedicated to a trained model.

  36. Key idea 1: assign states to each backend. Cold: in the global pool. Warm: dedicated to a trained model; a warm backend is either in-use (maybe executing a request) or not-in-use (has not executed a request for a while).
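The states on this slide can be summarized as a small state machine; the transition events below are illustrative names, not Swayam's exact API.

```python
from enum import Enum, auto

class BackendState(Enum):
    COLD = auto()             # in the global pool, no model loaded
    WARM_IN_USE = auto()      # dedicated to a model, maybe executing a request
    WARM_NOT_IN_USE = auto()  # dedicated to a model, idle for a while

# Allowed transitions, keyed by (current state, event).
TRANSITIONS = {
    (BackendState.COLD, "assign_model"): BackendState.WARM_IN_USE,
    (BackendState.WARM_IN_USE, "idle_timeout"): BackendState.WARM_NOT_IN_USE,
    (BackendState.WARM_NOT_IN_USE, "new_request"): BackendState.WARM_IN_USE,
    (BackendState.WARM_NOT_IN_USE, "reclaim"): BackendState.COLD,  # assumed: idle backends return to the pool
}

state = BackendState.COLD
for event in ["assign_model", "idle_timeout", "reclaim"]:
    state = TRANSITIONS[(state, event)]
print(state)  # BackendState.COLD
```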
