Panel Discussion: Performance Measuring Tools for the Cloud April 2016
Austin Openstack Summit
Panelists & Presenters:
- Douglas Shakshober, Red Hat
- Das Kamhout, Intel
- XiaoFei Wang, AWCloud
- Yuting Wu, AWCloud
- Nicholas Wakou, Dell
https://www.openstack.org/summit/austin-2016/summit-schedule/events/7989
Douglas Shakshober, Red Hat Inc.
https://github.com/openstack/shaker
FIO at 100 IOPS, random 4k I/O, RHEL 7.1: 8 RHEL 7.1 compute hosts running 8 KVM guests per host (up to 64 VMs), and 4 RHEL 6.6 Ceph 1.2.2 hosts (12 cores / 24 CPUs, 48 GB memory, 12 disks per host, 48 OSDs in total).
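A random-4k workload of this kind could be expressed as an fio job file along these lines (a sketch only; the exact job options used in these tests are not given, and the IOPS cap and device path are assumptions):

```ini
; Illustrative fio job file -- not the one used in the study
[global]
ioengine=libaio
direct=1            ; bypass the guest page cache
bs=4k
rw=randwrite        ; use rw=randread for the read runs
rate_iops=100       ; cap the guest at 100 IOPS, as in the test setup
time_based=1
runtime=300

[guest-disk]
filename=/dev/vdb   ; assumed guest data disk (ephemeral or Ceph-backed volume)
iodepth=8
```

Running one such job per guest and summing the reported IOPS gives the aggregate per-guest numbers compared in the chart.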
[Chart: aggregate performance data, average IOPs per guest for Ephemeral Rand Write, Ephemeral Rand Read, Ceph Rand Write, and Ceph Rand Read, at 8, 16, 32, and 64 guests]
XiaoFei Wang, AWCloud
Yuting Wu, AWCloud
HAProxy records information about the traffic passing through it in its log. These log fields can be used to determine the performance bottlenecks of the OpenStack API services.
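A minimal haproxy.cfg fragment along these lines (the syslog target is illustrative) enables the detailed HTTP log format whose fields are listed next:

```
global
    log 127.0.0.1 local0

defaults
    log global
    mode http
    option httplog   # emit the detailed HTTP log format with timing fields
```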
Key fields of the HAProxy HTTP log format (field number, example value, name, custom log tag):

8. 0/0/0/67/67 — Tq/Tw/Tc/Tr/Tt (%Tq/%Tw/%Tc/%Tr/%Tt)
   - Tq: total time in milliseconds spent waiting for the client to send a full HTTP request, not counting data
   - Tw: total time in milliseconds spent waiting in the various queues
   - Tc: total time in milliseconds spent waiting for the connection to establish to the final server, including retries
   - Tr: total time in milliseconds spent waiting for the server to send a full HTTP response, not counting data
   - Tt: total time in milliseconds elapsed between the accept and the last close; it covers all possible processing steps
9. 200 — status_code (%ST): HTTP status code returned to the client
10. 2591 — bytes_read (%B): total number of bytes transmitted to the client when the log is emitted
11. captured_request_cookie / captured_response_cookie (%CC / %CS)
   - captured_request_cookie: optional "name=value" entry indicating that the client had this cookie in the request
   - captured_response_cookie: optional "name=value" entry indicating that the server has returned a cookie with its response
12. termination_state / cookie_status (%tsc)
   - termination_state: condition the session was in when the session ended
   - cookie_status: status of cookie persistence
13. 15/2/0/0/0 — actconn/feconn/beconn/srv_conn/retries (%ac/%fc/%bc/%sc/%rc)
   - actconn: total number of concurrent connections on the process when the session was logged
   - feconn: total number of concurrent connections on the frontend when the session was logged
   - beconn: total number of concurrent connections handled by the backend when the session was logged
   - srv_conn: total number of concurrent connections still active on the server when the session was logged
   - retries: number of connection retries experienced by this session when trying to connect to the server
14. 0/0 — srv_queue/backend_queue (%sq/%bq)
   - srv_queue: total number of requests which were processed before this one in the server queue
   - backend_queue: total number of requests which were processed before this one in the backend's global queue
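The timing fields above are what locate the bottleneck: a large Tr points at a slow backend (the API service itself), a large Tw at queueing inside HAProxy. A small sketch of pulling those fields out of a log line (the sample line and regex are illustrative, based on HAProxy's default HTTP log layout):

```python
import re

# Hypothetical sample line in HAProxy's default HTTP log format.
LINE = ('haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in '
        'static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 '
        '"GET /index.html HTTP/1.1"')

# Tq/Tw/Tc/Tr/Tt, then the status code and bytes read.
TIMERS = re.compile(r'(-?\d+)/(-?\d+)/(-?\d+)/(-?\d+)/(\+?\d+)\s+(\d{3})\s+(\d+)')

def parse_timers(line):
    """Extract Tq/Tw/Tc/Tr/Tt, status code, and bytes read from a log line."""
    m = TIMERS.search(line)
    if not m:
        return None
    tq, tw, tc, tr, tt = (int(x) for x in m.groups()[:5])
    # High Tr suggests a slow API service; high Tw suggests queueing in HAProxy.
    return {'Tq': tq, 'Tw': tw, 'Tc': tc, 'Tr': tr, 'Tt': tt,
            'status': int(m.group(6)), 'bytes': int(m.group(7))}

parse_timers(LINE)
# e.g. {'Tq': 10, 'Tw': 0, 'Tc': 30, 'Tr': 69, 'Tt': 109, 'status': 200, 'bytes': 2750}
```

Aggregating these per backend (say, averaging Tr for the nova-api backend) turns the raw log into the bottleneck view described above.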
Nicholas Wakou, Dell
- Standard Performance Evaluation Corporation (www.spec.org)
- Transaction Processing Performance Council (www.tpc.org)
Cloud SUT: each group of boxes represents an application instance.
Benchmark Harness: comprises Cloud Bench (CBTOOL), the baseline/elasticity drivers, and the report generators. For white-box clouds, the benchmark harness is located in the same location or campus as the cloud under test.
Fair usage
A tester cannot selectively report benchmark metrics; their reporting in product or marketing descriptions is governed by the fair-usage rules.