A Cloud Benchmark Suite Combining Micro and Application Benchmarks (PowerPoint Presentation)
Joel Scheuner, Philipp Leitner


  1. A Cloud Benchmark Suite Combining Micro and Application Benchmarks. Joel Scheuner, Philipp Leitner. Contact: Joel Scheuner, scheuner@chalmers.se, joe4dev, @joe4dev

  2. Context: Public Infrastructure-as-a-Service Clouds. The cloud stack comprises the layers Applications, Data, Runtime, Middleware, OS, Virtualization, Servers, Storage, and Networking. In Infrastructure-as-a-Service (IaaS), the user manages Applications, Data, Runtime, Middleware, and OS, while the provider manages Virtualization, Servers, Storage, and Networking. In Platform-as-a-Service (PaaS), the user manages only Applications and Data. In Software-as-a-Service (SaaS), all layers are provider-managed.

  3. Motivation: Capacity Planning in IaaS Clouds. What cloud provider should I choose? (https://www.cloudorado.com) 2018-04-10 QUDOS@ICPE'18

  4. Motivation: Capacity Planning in IaaS Clouds. What cloud service (i.e., instance type) should I choose? [Figure: number of EC2 instance types per year, 2006-2017, growing from a handful to over 100.] The extremes span t2.nano (0.05-1 vCPU, 0.5 GB RAM, $0.006/h) and x1e.32xlarge (128 vCPUs, 3904 GB RAM, $26.688/h).

  5. Topic: Performance Benchmarking in the Cloud. "The instance type itself is a very major tunable parameter" (@brendangregg, re:Invent'17, https://youtu.be/89fYOo1V2pA?t=5m4s)

  6. Background. Micro benchmarks target a single resource (Memory, CPU, I/O, Network): generic, artificial, resource-specific. Application benchmarks measure overall performance (e.g., response time): specific, real-world, resource-heterogeneous.

  7. Related Work.
  Micro Benchmarking / Application Kernels: Iosup et al., "Performance analysis of cloud computing services for many-tasks scientific computing"; Ostermann et al., "A performance analysis of EC2 cloud computing services for scientific computing".
  Application Benchmarking: Ferdman et al., "Clearing the clouds: a study of emerging scale-out workloads on modern hardware"; Cooper et al., "Benchmarking Cloud Serving Systems with YCSB".
  Repeatability of Cloud Experiments: Abedi and Brecht, "Conducting Repeatable Experiments in Highly Variable Cloud Computing Environments" (ICPE'17), compares four execution strategies: A) Single Trial (A B C), B) Multiple Consecutive Trials (MCT: A A A B B B C C C), C) Multiple Interleaved Trials (MIT: A B C A B C A B C), and D) Randomized Multiple Interleaved Trials (RMIT).

  8. Problem: Isolation and Reproducibility of Execution. [Same benchmark taxonomy as slide 6: micro benchmarks (Memory, CPU, I/O, Network; generic, artificial, resource-specific) vs. application benchmarks (overall performance, e.g., response time; specific, real-world, resource-heterogeneous).]

  9. Question: How can we systematically combine and execute micro and application benchmarks? [Same benchmark taxonomy as slide 6.]

  10. Idea: Systematically execute micro and application benchmarks together. [Same benchmark taxonomy as slide 6.]

  11. Execution Methodology. D) Randomized Multiple Interleaved Trials (RMIT), e.g., B A C | C B A | A C B. 30 benchmark scenarios, 3 trials, ~2-3h runtime.
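The RMIT schedule above can be produced by a short script; a minimal sketch (benchmark names, trial count, and seed are illustrative, not taken from the suite):

```python
import random

def rmit_order(benchmarks, trials, seed=None):
    """Randomized Multiple Interleaved Trials (RMIT):
    every trial runs each benchmark exactly once, in a fresh
    random order, so ordering effects average out across trials."""
    rng = random.Random(seed)
    order = []
    for _ in range(trials):
        trial = list(benchmarks)
        rng.shuffle(trial)  # new permutation for this trial
        order.extend(trial)
    return order

# Example: 3 benchmark scenarios, 3 trials, as on the slide
print(rmit_order(["A", "B", "C"], trials=3, seed=42))
```

Fixing the seed makes a schedule reproducible while still interleaving benchmarks in randomized order within each trial.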

  12. Benchmark Manager: Cloud WorkBench (CWB). Tool for scheduling cloud experiments (GitHub: sealuzh/cloud-workbench). CloudCom 2014: "Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking", Scheuner, Leitner, Cito, and Gall. Demo@WWW 2015: Scheuner, Cito, Leitner, and Gall.

  13. Architecture Overview. [Figure: architecture diagram.]

  14. Micro Benchmarks: broad resource coverage and specific resource testing.
  CPU: sysbench/cpu-single-thread, sysbench/cpu-multi-thread, stressng/cpu-callfunc, stressng/cpu-double, stressng/cpu-euler, stressng/cpu-ftt, stressng/cpu-fibonacci, stressng/cpu-int64, stressng/cpu-loop, stressng/cpu-matrixprod
  Memory: sysbench/memory-4k-block-size, sysbench/memory-1m-block-size
  I/O: [file I/O] sysbench/fileio-1m-seq-write, sysbench/fileio-4k-rand-read; [disk I/O] fio/4k-seq-write, fio/8k-rand-read
  Network: iperf/single-thread-bandwidth, iperf/multi-thread-bandwidth, stressng/network-epoll, stressng/network-icmp, stressng/network-sockfd, stressng/network-udp
  Software (OS): sysbench/mutex, sysbench/thread-lock-1, sysbench/thread-lock-128

  15. Micro Benchmarks: Examples. File I/O, 4k random read: 1) Prepare, 2) Run, 3) Extract Result, 4) Cleanup; result: 3.5793 MiB/sec. Network bandwidth: iperf between a client and a server instance; result: 972 Mbits/sec.
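The four-phase lifecycle (prepare, run, extract result, cleanup) can be sketched as a small generic driver. This is an illustration only: the command strings and the result-extraction pattern are assumptions, not the suite's actual implementation, and real sysbench/fio flags vary by version.

```python
import re
import subprocess

def run_micro_benchmark(prepare, run, cleanup, result_pattern):
    """Generic 4-phase micro benchmark driver:
    1) prepare the workload, 2) run it, 3) extract the
    result metric from stdout, 4) always clean up."""
    subprocess.run(prepare, shell=True, check=True)            # 1) prepare
    try:
        out = subprocess.run(run, shell=True, check=True,      # 2) run
                             capture_output=True, text=True).stdout
        match = re.search(result_pattern, out)                 # 3) extract
        return match.group(1) if match else None
    finally:
        subprocess.run(cleanup, shell=True, check=True)        # 4) cleanup

# Hypothetical invocation for the 4k random read example
# (flags shown for illustration, check your sysbench version):
# run_micro_benchmark(
#     prepare="sysbench fileio --file-test-mode=rndrd prepare",
#     run="sysbench fileio --file-test-mode=rndrd run",
#     cleanup="sysbench fileio cleanup",
#     result_pattern=r"read, MiB/s:\s*([\d.]+)")
```

The `finally` block guarantees cleanup runs even if the benchmark itself fails, which matters when many scenarios share one VM.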

  16. Application Benchmarks: overall performance (e.g., response time). Molecular Dynamics Simulation (MDSim) and WordPress Benchmark (WPBench). [Figure: WPBench load profile, number of concurrent threads (0-100) over ~8 minutes of elapsed time.] Multiple short blogging session scenarios (read, search, comment).

  17. Performance Data Set

  Instance Type | vCPU | ECU* | RAM [GiB] | Virtualization | Network Performance | Region
  m1.small      | 1    | 1    | 1.7       | PV             | Low                 | eu + us
  m1.medium     | 1    | 2    | 3.75      | PV             | Moderate            | eu + us
  m3.medium     | 1    | 3    | 3.75      | PV/HVM         | Moderate            | eu + us
  m1.large      | 2    | 4    | 7.5       | PV             | Moderate            | eu + us
  m3.large      | 2    | 6.5  | 7.5       | HVM            | Moderate            | eu + us
  m4.large      | 2    | 6.5  | 8.0       | HVM            | Moderate            | eu
  c3.large      | 2    | 7    | 3.75      | HVM            | Moderate            | eu
  c4.large      | 2    | 8    | 3.75      | HVM            | Moderate            | eu
  c3.xlarge     | 4    | 14   | 7.5       | HVM            | Moderate            | eu
  c4.xlarge    | 4    | 16   | 7.5       | HVM            | High                | eu
  c1.xlarge     | 8    | 20   | 7         | PV             | High                | eu

  * ECU := Elastic Compute Unit (i.e., Amazon's metric for CPU performance)
  >240 Virtual Machines (VMs), 3 iterations each, ~750 VM hours; >60,000 measurements (258 per instance)

  18. WPBench Response Time Cost Frontier. Cost/performance is a trade-off, but there exist unfavorable instance types. [Figure: WPBench read-scenario response time (ms, 0-2000) vs. instance cost (USD/h, 0-0.6) for all instance types, with the frontier of cost-optimal instance types highlighted. Annotations: an unfavorable choice gives -80% performance at only -35% cost; a favorable choice gives +40% performance at -40% cost.]
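The cost-optimal frontier on a chart like this is a Pareto filter: keep only instance types that no other type beats on both cost and response time. A sketch with invented numbers (not the measured WPBench data):

```python
def cost_frontier(instances):
    """Return the names of Pareto-optimal instance types:
    no other type is both at most as cheap and at most as
    slow, with a strict improvement in at least one metric."""
    frontier = []
    for name, cost, resp in instances:
        dominated = any(
            c <= cost and r <= resp and (c < cost or r < resp)
            for n, c, r in instances if n != name)
        if not dominated:
            frontier.append(name)
    return frontier

# Illustrative data: (name, cost in USD/h, read response time in ms)
data = [("slow-cheap",  0.05, 2000),
        ("balanced",    0.10, 600),
        ("unfavorable", 0.40, 1500),  # pricier AND slower than "balanced"
        ("fast-pricey", 0.50, 400)]
print(cost_frontier(data))  # -> ['slow-cheap', 'balanced', 'fast-pricey']
```

The "unfavorable" type is dominated by "balanced" (cheaper and faster), mirroring the slide's point that some instance types should never be chosen.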

  19. Intra-Cloud Network Bandwidth over Time. Almost perfect stability in comparison to previous results (2017 vs. 2014; P. Leitner, J. Cito, "Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds", TOIT 2016). [Figure: network bandwidth (Mbits/sec, ~400-600) from 2017-04-04 to 2017-04-16 for m1.small, m3.medium (hvm), and m3.large.]

  20. Disk Utilization during I/O Benchmark. The newer virtualization type hvm is more I/O-efficient than pv. [Figure: per instance type, disk utilization (%) during FIO 8k random read (y-axis, ~90-95%) vs. FIO 4k sequential write (x-axis, 98.5-99.1%), colored by virtualization type (hvm vs. pv).]

  21. Future Work: the pipeline spans Benchmark Design, Benchmark Execution, Data Pre-Processing, and Data Analysis. [Figure: boxplots of relative standard deviation (RSD) [%] per configuration (instance type and region) for m1.small (eu), m1.small (us), m3.medium (eu), m3.medium (us), and m3.large (eu); medians between 3.16 and 6.83.] Accepted at QUDOS@ICPE 2018: "A Cloud Benchmark Suite Combining Micro and Application Benchmarks", Scheuner and Leitner. Under submission: "Estimating Cloud Application Performance Based on Micro Benchmark Profiling", Scheuner and Leitner.
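The relative standard deviation (RSD) plotted above is simply the sample standard deviation expressed as a percentage of the mean; a minimal sketch (the sample values are invented, not measured data):

```python
import statistics

def rsd(samples):
    """Relative standard deviation in percent:
    sample standard deviation / mean * 100."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100

# Invented bandwidth samples (Mbits/sec) from repeated trials:
print(round(rsd([500, 510, 495, 505, 490]), 2))  # -> 1.58
```

A low RSD across repeated trials of the same configuration indicates the reproducible execution the suite aims for.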

  22. Conclusions. Selecting an optimal instance type can save up to 40% in cost while increasing performance by up to 40%. The results support the trend towards more predictable performance (AWS EC2). The newer virtualization type (hvm) improves I/O utilization rates by up to 10% (vs. pv). Contact: scheuner@chalmers.se
