  1. Panel Discussion: Performance Measuring Tools for the Cloud April 2016 Austin Openstack Summit

  2. Panelists & Presenters

  3. Panelists & Presenters
     Douglas Shakshober – Red Hat
     Das Kamhout – Intel
     XiaoFei Wang – AWCloud
     Nicholas Wakou – Dell
     Yuting Wu – AWCloud
     https://www.openstack.org/summit/austin-2016/summit-schedule/events/7989

  4. Agenda

  5. Agenda
      Measuring the Cloud Using Rally & CloudBench
      Measuring the Cloud Using Rally & Dichotomy
      Performance Analysis by HAProxy
      Industry Standard Benchmarks

  6. Measuring the Cloud Using Rally & CloudBench Douglas Shakshober, Red Hat Inc.

  7. Red Hat – OpenStack Cloud Benchmark Efforts
      Rally – CPU loads
       - Linpack script: https://review.openstack.org/#/c/115439/
       - Pros: tests cloud provisioning and auto-executes workloads within the cloud
      Cloud Bench – CBTOOL: https://github.com/jtaleric/cbtool
       - Added a Netperf client/server workload
       - Pros: helps sync the start of execution and completion
      SPEC Cloud – agreed-upon industry standard; add support for Fedora/CentOS/RHEL
      Cinder – Ceph Benchmark Tool (CBT)
      Perfkit – Google's general cloud benchmark harness

  8. Rally – OSP Provisioning / Ping
      Rally pros
       - Automates VM provisioning
       - Monitors Nova successes/failures
       - Contributed a Linpack CPU load
      Rally cons
       - Difficult to sync benchmarks
       - VM cleanup scripts needed
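
  Rally drives runs like these from a declarative task file. Below is a minimal sketch of one, written as Python that emits the JSON Rally consumes. The NovaServers.boot_and_delete_server scenario ships with Rally; the image and flavor names and the run sizes here are assumptions for illustration.

      # Emit a minimal Rally task: boot and delete servers at fixed concurrency.
      import json

      task = {
          "NovaServers.boot_and_delete_server": [{
              "args": {
                  "flavor": {"name": "m1.small"},   # assumed flavor name
                  "image": {"name": "cirros"},      # assumed image name
              },
              # 256 boots total, 32 in flight (mirrors the 256-VM example)
              "runner": {"type": "constant", "times": 256, "concurrency": 32},
              "context": {"users": {"tenants": 2, "users_per_tenant": 2}},
          }]
      }

      with open("boot_and_delete.json", "w") as f:
          json.dump(task, f, indent=2)
      # then run: rally task start boot_and_delete.json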

  9. Rally used for Linpack CPU performance (example: 256 VMs over 8 hosts, 32 VMs/host)

  10. OSP / Network Performance (example: Netperf over VxLAN/OVS using Mellanox 40 Gb networks)
      Cloud Bench harness
       - Benchmark harness
       - Syncs up benchmark start times
       - Runs client/server pairs
       - Avoids VM-to-VM traffic on the same host
      Shaker – distributed data-plane testing tool
       - Multi-stream iperf benchmark
       - https://github.com/openstack/shaker
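
  The "sync up starting time" idea can be shown outside CBTOOL: each client VM sleeps until a pre-distributed epoch timestamp and only then launches netperf, so all streams start together. A minimal sketch follows; the server address and start time are assumptions, and this is not CBTOOL's actual mechanism.

      # Wake at an agreed wall-clock instant, then drive a 60 s netperf run.
      import subprocess
      import time

      START_AT = 1700000000.0   # agreed epoch second, distributed to every client
      SERVER = "10.0.0.10"      # netperf server VM (assumed address)

      def run_when_synced():
          delay = START_AT - time.time()
          if delay > 0:
              time.sleep(delay)             # all clients wake together
          result = subprocess.run(["netperf", "-H", SERVER, "-l", "60"],
                                  capture_output=True, text=True)
          print(result.stdout)

      run_when_synced()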

  11. Red Hat FIO Scale for Ceph
      Scaling FIO 100 IOPS – RHEL 7.1, random 4k; 8 RHEL 7.1 compute hosts, 8 KVM guests/host up to 64 VMs; 4 RHEL 6.6 Ceph 1.2.2 hosts (12 core/24 CPU, 48 GB memory, 12 disks/host – 48 OSDs total)
      Cloud Bench harness
       - Benchmark harness
       - Syncs up benchmark start times
       - Adds a warm-up period to reach steady state
       - Would like an fio parallel option to aggregate performance data
      CBT – Ceph Benchmark Tool
       - https://github.com/ceph/cbt
       - Tests sequential/random I/O with FIO
       - Range of transfer sizes 4k-1m
       - Queue depth 1-64
       - Runs on bare metal or KVM VMs
      [Chart: average IOPS per guest at 8/16/32/64 guests, for ephemeral random write/read and Ceph random write/read]
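
  The CBT-style FIO runs above can be approximated with a small driver that sweeps queue depth and reads IOPS out of fio's JSON output. This is a sketch under assumptions: the target device path is illustrative and the parameters only loosely mirror the slide (random 4k, queue depth 1-64).

      # Run fio randread at a given queue depth and return the measured IOPS.
      import json
      import subprocess

      def fio_rand_iops(target, bs="4k", iodepth=8, runtime=60):
          cmd = ["fio", "--name=randtest", f"--filename={target}",
                 f"--bs={bs}", "--rw=randread", f"--iodepth={iodepth}",
                 "--ioengine=libaio", "--direct=1",
                 f"--runtime={runtime}", "--time_based",
                 "--output-format=json"]
          out = subprocess.run(cmd, capture_output=True, text=True).stdout
          return json.loads(out)["jobs"][0]["read"]["iops"]

      # Sweep queue depths as on the slide (target path is an assumption).
      for qd in (1, 2, 4, 8, 16, 32, 64):
          print(qd, fio_rand_iops("/dev/vdb", iodepth=qd))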

  12. Measuring the Cloud Using Rally & Dichotomy XiaoFei Wang, AWCloud

  13. What the Rally Tool Can Do
      Benchmarking tool for OpenStack
      Serves, alongside Tempest, as a basic tool for the OpenStack CI/CD system
      Cloud verification and profiling tool
      Deployment tool for DevStack

  14. Rally and Dichotomy
      When profiling cloud performance bottlenecks, it is hard to judge the order of magnitude quickly; for example, how many concurrent create-instance requests a single Nova API can withstand.
      How to use dichotomy (binary search): see the sketch below.
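
  A minimal sketch of the dichotomy idea: treat "does a Rally run at concurrency N still meet the SLA?" as a monotone yes/no question and bisect on N. run_rally_at() is a hypothetical hook around a real Rally task run, not part of Rally itself.

      # Binary-search the largest concurrency a single Nova API node withstands.

      def run_rally_at(concurrency: int) -> bool:
          """Run a boot-server scenario at this concurrency; return True
          if every request succeeds within the SLA (hypothetical hook)."""
          raise NotImplementedError

      def max_concurrency(lo: int = 1, hi: int = 1024) -> int:
          best = lo
          while lo <= hi:
              mid = (lo + hi) // 2
              if run_rally_at(mid):        # API keeps up: search higher
                  best, lo = mid, mid + 1
              else:                        # SLA misses: back off
                  hi = mid - 1
          return best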

  15. How to Apply Rally in an Actual Project
      Rally is used in a variety of ways, for example in CI with Jenkins and in production clouds. Keystone, Nova, Cinder, Neutron, Glance, etc. are all fronted by HAProxy. The following are actual results.

  16. Performance Analysis by HAProxy Yuting Wu, AWCloud

  17. HAProxy Load-Balancing OpenStack API Services
      HAProxy is often used to load-balance each OpenStack API service.
      HAProxy records many statistics about the traffic passing through it in its log.
      By analyzing the HAProxy log, we can locate the performance bottleneck of an OpenStack API service; a parsing sketch follows the field tables below.

  18. Performance Analysis by HAProxy Log
      Field 8 – Tq/Tw/Tc/Tr/Tt (log tags %Tq/%Tw/%Tc/%Tr/%Tt), example value 0/0/0/67/67:
       - Tq: total time in milliseconds spent waiting for the client to send a full HTTP request, not counting data
       - Tw: total time in milliseconds spent waiting in the various queues
       - Tc: total time in milliseconds spent waiting for the connection to establish to the final server, including retries
       - Tr: total time in milliseconds spent waiting for the server to send a full HTTP response, not counting data
       - Tt: total time in milliseconds elapsed between the accept and the last close; it covers all possible processing
      Field 9 – Status_code (%ST), example value 200: HTTP status code returned to the client
      Field 10 – Bytes_read (%B), example value 2591: total number of bytes transmitted to the client when the log is emitted

  19. Performance Analysis by HAProxy Log (Continued)
      Field 11 – captured_request_cookie / captured_response_cookie (%CC / %CS), example value "- -":
       - captured_request_cookie: optional "name=value" entry indicating that the client had this cookie in the request
       - captured_response_cookie: optional "name=value" entry indicating that the server has returned a cookie with its response
      Field 12 – termination_state with cookie_status (%tsc), example value "----":
       - termination_state: condition the session was in when the session ended
       - cookie_status: status of cookie persistence
      Field 13 – actconn/feconn/beconn/srv_conn/retries (%ac/%fc/%bc/%sc/%rc), example value 15/2/0/0/0:
       - actconn: total number of concurrent connections on the process when the session was logged
       - feconn: total number of concurrent connections on the frontend when the session was logged
       - beconn: total number of concurrent connections handled by the backend when the session was logged
       - srv_conn: total number of concurrent connections still active on the server when the session was logged
       - retries: number of connection retries experienced by this session when trying to connect to the server
      Field 14 – srv_queue/backend_queue (%sq/%bq), example value 0/0:
       - srv_queue: total number of requests which were processed before this one in the server queue
       - backend_queue: total number of requests which were processed before this one in the backend's global queue
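
  As a rough illustration of turning these fields into a bottleneck check, the sketch below pulls the timer block, status code, and byte count out of an HAProxy HTTP log line with a regex. The sample line and the 50 ms threshold are invented for illustration.

      # Extract Tq/Tw/Tc/Tr/Tt, status, and bytes from an HAProxy HTTP log line.
      import re

      LINE = ('haproxy[123]: 10.0.0.5:51234 [10/Apr/2016:12:00:01.123] '
              'nova-api nova-api/node1 0/0/0/67/67 200 2591 - - ---- '
              '15/2/0/0/0 0/0 "GET /v2.1/servers HTTP/1.1"')

      pattern = re.compile(r'(\d+|-)/(\d+|-)/(\d+|-)/(\d+|-)/(\d+) (\d{3}) (\d+)')
      tq, tw, tc, tr, tt, status, nbytes = pattern.search(LINE).groups()
      if int(tr) > 50:   # assumed SLA on server response time
          print(f"slow backend: Tr={tr} ms, status={status}, bytes={nbytes}")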

  20. Industry Standard Benchmarks Nicholas Wakou, Dell

  21. Performance Consortia
      Standard Performance Evaluation Corporation (www.spec.org)
      Transaction Processing Performance Council (www.tpc.org)

  22. SPEC Cloud IaaS 2016 Benchmark
      Measures performance of Infrastructure-as-a-Service (IaaS) clouds
      Measures both the control plane and the data plane
       - Control: management operations, e.g., instance provisioning time
       - Data: virtualization, network performance, runtime performance
      Uses workloads that resemble "real" customer applications; benchmarks the cloud, not the application
      Produces metrics ("elasticity", "scalability", "provisioning time") which allow comparison

  23. Benchmark and Workload Control
      The benchmark harness comprises the Cloud Bench (CBTOOL) application, baseline/elasticity drivers, and report generators.
      [Diagram: benchmark harness driving the cloud SUT; a group of boxes represents an application instance]
      For white-box clouds the benchmark harness is outside the SUT. For black-box clouds, it can be in the same location or campus.

  24. SPEC Cloud Workloads
      YCSB – framework with a common set of workloads for evaluating the performance of different key-value and cloud serving stores
      KMeans – Hadoop-based CPU-intensive workload; the Intel HiBench implementation was chosen

  25. What Is Measured
      Measures the number of application instances (AIs) that can be loaded onto a cluster before SLA violations occur (sketched below)
      Measures the scalability and elasticity of the Cloud under Test (CuT)
      Not a measure of instance density
      SPEC Cloud workloads can individually be used to stress the CuT:
       - KMeans – CPU/memory
       - YCSB – I/O
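
  A minimal sketch of that measurement loop, with launch_ai() and sla_ok() as hypothetical hooks into the harness: keep adding application instances until the first SLA violation, then report how many fit.

      # Load AIs one at a time until quality of service degrades.

      def launch_ai() -> None:
          """Provision one more YCSB or KMeans application instance (hypothetical)."""
          raise NotImplementedError

      def sla_ok() -> bool:
          """Check workload QoS (latency/throughput) against the baseline (hypothetical)."""
          raise NotImplementedError

      def find_capacity(max_ais: int = 100) -> int:
          count = 0
          while count < max_ais:
              launch_ai()
              if not sla_ok():     # first violation ends the run
                  break
              count += 1
          return count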

  26. SPEC Cloud IaaS 2016 High-Level Report Summary
      Fair usage: a tester cannot selectively report benchmark metrics, and their use in product or marketing material is governed by SPEC's fair-use rules.
