  1. Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider Mohammad Shahrad, Rodrigo Fonseca, Íñigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, and Ricardo Bianchini. July 15, 2020

  2. What is Serverless? • Very attractive abstraction: • Pay for use • Infinite elasticity from 0 (and back to 0) • No worrying about servers: provisioning, reserving, configuring, patching, managing • Most popular offering: Function-as-a-Service (FaaS) • Bounded-time functions with no persistent state across invocations • Upload code, get an endpoint, and go. For the rest of this talk, Serverless = Serverless FaaS

  3. What is Serverless? Comparison across Bare Metal | VMs (IaaS) | Containers | Functions (FaaS): • Unit of scale: Server | VM | Application/Pod | Function • Provisioning: Ops | DevOps | DevOps | Cloud provider • Init time: Days | ~1 min | Few seconds | Few seconds • Scaling: Buy new hardware | Allocate new VMs | 1 to many, auto | 0 to many, auto • Typical lifetime: Years | Hours | Minutes | O(100 ms) • Payment: Per allocation | Per allocation | Per allocation | Per use • State: Anywhere | Anywhere | Anywhere | Elsewhere

  4. Serverless “…more than 20 percent of global enterprises will have deployed serverless computing technologies by 2020.” Gartner, Dec 2018

  5. Serverless Source: CNCF Cloud Native Interactive Landscape https://landscape.cncf.io/format=serverless

  6. Serverless December 2019 “… we predict that (…) serverless computing will grow to dominate the future of cloud computing.”

  7. So what are people doing with FaaS? • Interesting explorations: MapReduce (pywren), linear algebra (numpywren), ExCamera, gg "burst-parallel" function apps, ML training • Many simple things: ETL workloads, IoT data collection/processing, stateless processing, image/video transcoding, translation, check processing, serving APIs, mobile/web backends • Limitations: communication, latency, (lack of) locality, state management

  8. What is Serverless? • Very attractive abstraction: • Pay for use • Infinite elasticity from 0 (and back to 0) • No worrying about servers: provisioning, reserving, configuring, patching, managing

  9. If you are a cloud provider… • A big challenge: • You do worry about servers! Provisioning, scaling, allocating, securing, isolating • Illusion of infinite scalability • Optimizing resource use • Fierce competition • A bigger opportunity: • Fine-grained resource packing • Great space for innovating, and for capturing new applications and new markets

  10. Cold Starts • Measured for Azure Functions, OpenWhisk, and AWS Lambda • Cold starts typically range from 0.2 s to a few seconds [1, 2]. Sources: [1] https://levelup.gitconnected.com/1946d32a0244 [2] https://mikhail.io/serverless/coldstarts/big3/

  11. Cold Starts and Resource Wastage • Keeping functions in memory indefinitely → wasted memory • Removing function instances from memory after each invocation → cold starts • Where is the sweet spot?

  12. Stepping Back: Characterizing the Workload • How are functions accessed? • What resources do they use? • How long do functions take? • Data: 2 weeks of all invocations to Azure Functions in July 2019 • First characterization of the workload of a large serverless provider • Subset of the traces available for research: https://github.com/Azure/AzurePublicDataset

  13. Invocations per Application (This graph is from a representative subset of the workload; see paper for details.)

  19. Invocations per Application • 18% of apps are invoked more than once per minute, accounting for 99.6% of invocations • 82% of apps are invoked less than once per minute, accounting for only 0.4% of invocations (This graph is from a representative subset of the workload; see paper for details.)

  20. Apps are highly heterogeneous

  21. What about memory? If we wanted to keep all apps warm… (Figure: cumulative fraction of total memory, allocated and physical, vs. fraction of least-invoked apps)

  22. What about memory? If we wanted to keep all apps warm… • 82% of apps → 0.4% of invocations → 40% of all physical memory and 60% of virtual memory • 90% of apps → 1.05% of invocations → 50% of all physical memory (Figure: cumulative fraction of total memory, allocated and physical, vs. fraction of least-invoked apps)

  23. Function Execution Duration (Figure: CDF of per-app minimum, average, and maximum execution times, with a log-normal fit) • Executions are short • 50% of apps run for ≤ 0.67 s on average • 75% of apps run for ≤ 10 s at maximum • These times are on the same scale as cold-start times [1, 2]. Sources: [1] https://levelup.gitconnected.com/1946d32a0244 [2] https://mikhail.io/serverless/coldstarts/big3/
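The log-normal fit mentioned on this slide is easy to reproduce: for log-normally distributed data, the maximum-likelihood parameters are simply the mean and standard deviation of the log-durations. A minimal stdlib-only sketch (the duration values below are made up for illustration; the function names are mine):

```python
import math
import statistics

def fit_lognormal(durations_s):
    """MLE for a log-normal distribution: mu and sigma are the mean
    and (population) standard deviation of log(x)."""
    logs = [math.log(d) for d in durations_s]
    return statistics.fmean(logs), statistics.pstdev(logs)

def lognormal_median(mu, sigma):
    # The median of a log-normal is exp(mu), independent of sigma.
    return math.exp(mu)

# Hypothetical per-app average execution times (seconds).
durations = [0.05, 0.1, 0.3, 0.67, 0.67, 1.2, 4.0, 10.0]
mu, sigma = fit_lognormal(durations)
print(f"mu={mu:.2f}, sigma={sigma:.2f}, "
      f"median={lognormal_median(mu, sigma):.2f}s")
```

Fitting in log space is what makes the heavy right tail of execution times tractable: a few 10-second apps barely move the estimate.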

  24. Key Takeaways • Highly concentrated accesses • 82% of the apps are accessed <1/min on average • They correspond to 0.4% of all accesses • But in aggregate they would take 40% of the service's memory if kept warm • Arrival processes are highly variable • Execution times are short, on the same order of magnitude as cold-start times

  25. Cold Starts and Resource Wastage • Keeping functions in memory indefinitely → wasted memory (recall the cumulative-memory curves) • Removing function instances from memory after each invocation → cold starts (recall the execution-duration CDF)

  26. What do serverless providers do? • AWS Lambda: fixed 10-minute keep-alive • Azure Functions: fixed 20-minute keep-alive (Figures: cold-start probability vs. time since last invocation) Source: Mikhail Shilkov, Cold Starts in Serverless Functions, https://mikhail.io/serverless/coldstarts/

  27. Fixed Keep-Alive Policy • Results from a simulation of the entire workload for a week (Figure: the trade-off as the keep-alive grows longer)

  28. Fixed Keep-Alive Won't Fit All • With a 10-minute fixed keep-alive, an app invoked every 8 minutes always gets warm starts • An app invoked every 11 minutes always gets cold starts

  29. Fixed Keep-Alive Is Wasteful • With a 10-minute fixed keep-alive, an app invoked every 8 minutes gets warm starts, but its function image is kept in memory for the entire idle interval without being used
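The two timelines on these slides can be checked with a tiny simulator of a fixed keep-alive policy (a sketch; the 8- and 11-minute intervals mirror the slides' example, and the function name is mine):

```python
def simulate_fixed_keepalive(invocation_times_min, keep_alive_min):
    """Count cold vs. warm starts under a fixed keep-alive policy.

    A start is warm if the function image is still in memory, i.e. the
    previous invocation ran at most keep_alive_min minutes earlier.
    """
    cold = warm = 0
    evict_at = None  # time at which the loaded image gets evicted
    for t in invocation_times_min:
        if evict_at is not None and t <= evict_at:
            warm += 1
        else:
            cold += 1  # first invocation, or image already evicted
        evict_at = t + keep_alive_min  # keep-alive restarts after each run
    return cold, warm

every_8 = [8 * i for i in range(10)]    # app invoked every 8 minutes
every_11 = [11 * i for i in range(10)]  # app invoked every 11 minutes

print(simulate_fixed_keepalive(every_8, 10))   # only the first start is cold
print(simulate_fixed_keepalive(every_11, 10))  # every start is cold
```

The 8-minute app sees one cold start and nine warm starts, while the 11-minute app pays a cold start every single time, despite being only slightly less frequent.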

  30. Hybrid Histogram Policy • Adapts to each application • Pre-warms in addition to keeping alive • Lightweight implementation

  31. A Histogram Policy To Learn Idle Times • Idle Time (IT): the interval between the end of one invocation and the start of the next (8 minutes in the slide's example) • Track the frequency of each app's ITs in a histogram

  32. A Histogram Policy To Learn Idle Times • The IT histogram tells us when to pre-warm the function and how long to keep it alive afterwards

  33. A Histogram Policy To Learn Idle Times • Pre-warm at the 5th percentile of the IT distribution; keep alive until the 99th percentile • Minute-long bins, limited in number (e.g., 240 bins for 4 hours)
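The per-app histogram bookkeeping can be sketched as follows (a simplified illustration: the bin width, bin count, and the 5th/99th-percentile cut-offs come from the slide; the function names are mine):

```python
BIN_MINUTES = 1
NUM_BINS = 240  # 4 hours of minute-long bins

def add_idle_time(hist, idle_min):
    """Record one observed idle time (minutes) in the app's histogram."""
    b = min(int(idle_min // BIN_MINUTES), NUM_BINS - 1)
    hist[b] += 1

def percentile_bin(hist, pct):
    """Smallest bin whose cumulative count reaches pct of observations."""
    total = sum(hist)
    cum = 0
    for b, count in enumerate(hist):
        cum += count
        if cum >= pct * total:
            return b
    return NUM_BINS - 1

def windows(hist):
    """Pre-warm at the 5th-percentile IT, keep alive until the 99th."""
    prewarm = percentile_bin(hist, 0.05)    # stay unloaded until here...
    keepalive = percentile_bin(hist, 0.99)  # ...then stay warm until here
    return prewarm, keepalive
```

For example, an app whose observed ITs are 7, 8, 8, 8, and 9 minutes gets a pre-warm window starting at minute 7 and a keep-alive ending at minute 9, instead of being kept warm for a flat 10 minutes after every invocation.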

  34. The Hybrid Histogram Policy • ITs beyond the last bin are out of bound (OOB) • For apps with many OOB ITs, a histogram might be too wasteful; given their low arrival rate, we can afford to run complex predictors: a time-series forecast

  35. The Hybrid Histogram Policy • On each new invocation, update the app's IT distribution • Pattern significant? Use the IT distribution (histogram) • Not significant? Be conservative (standard keep-alive) • Too many OOB ITs? Use a time-series forecast (ARIMA: Autoregressive Integrated Moving Average)
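The decision flow on this slide can be sketched as below. This is a hedged illustration: the thresholds and the significance test are assumptions of mine, and the real system runs an actual ARIMA model rather than returning a label.

```python
def choose_policy(hist, oob_count, cv_threshold=2.0, oob_frac=0.5):
    """Pick a keep-alive strategy for one app after updating its IT stats.

    - too many out-of-bound ITs -> time-series forecast (ARIMA)
    - pattern looks significant -> histogram-derived windows
    - otherwise                 -> conservative fixed keep-alive
    """
    total = sum(hist) + oob_count
    if total == 0:
        return "fixed-keep-alive"  # no data yet: be conservative
    if oob_count / total > oob_frac:
        return "arima-forecast"    # a histogram would be too wasteful
    if pattern_is_significant(hist, cv_threshold):
        return "histogram"
    return "fixed-keep-alive"

def pattern_is_significant(hist, cv_threshold):
    """Assumed significance test: low coefficient of variation of ITs."""
    total = sum(hist)
    if total < 2:
        return False
    mean = sum(b * c for b, c in enumerate(hist)) / total
    var = sum(c * (b - mean) ** 2 for b, c in enumerate(hist)) / total
    return mean > 0 and (var ** 0.5) / mean < cv_threshold
```

The key design point survives the simplification: the expensive predictor is only invoked for apps whose idle times overflow the histogram, which by construction are the rarely invoked ones, so its cost is amortized over very few calls.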

  36. A Better Pareto Frontier

  37. Implemented in OpenWhisk • Open-sourced, industry-grade (IBM Cloud Functions) • Functions run in Docker containers • Default policy uses a 10-minute fixed keep-alive • Built a distributed setup with 19 VMs (Diagram: REST interface → controller and load balancer → distributed messaging → invokers running containers, backed by a distributed database)

  38. Simulation vs. Experimental: 4-Hour Hybrid Histogram (Figure: CDF of per-app cold-start percentage, Hybrid vs. Fixed 10-min) • Average execution time reduction: 32.5% • 99th-percentile execution time reduction: 82.4% • Container memory reduction: 15.6% • Latency overhead: < 1 ms (835.7 µs)

  39. Closing the loop • First serverless characterization from a provider's point of view • A dynamic policy to manage serverless workloads more efficiently (first elements now running in production) • Azure Functions traces available to download: https://github.com/Azure/AzurePublicDataset/blob/master/AzureFunctionsDataset2019.md
