

SLIDE 1

In-Test Adaptation of Workload in Enterprise Application Performance Testing Maciej Kaczmarski

April 23, 2017

SLIDE 2

Agenda

1 Motivation & Research Objective 2 Proposed Approach 3 Experimental Evaluation 4 Conclusions & Future work

Maciej Kaczmarski — LTB L’Aquila April 23, 2017 — 2 / 12

SLIDE 3

Motivation

A considerable number of the performance issues that occur in software systems depend on the input workloads. Traditional techniques are ineffective because they:

- rely on static workloads,
- commonly use time-consuming and complex iterative test methods,
- rely heavily on human expert knowledge.

These limitations can cause:

- escalating complexity,
- the risk of overlooking performance issues.


SLIDE 4

Research Objective

An automated approach to dynamically adapt the workload used by a testing tool, based on a set of diagnostic metrics evaluated in real time, to determine whether any test workload adjustments are required for the tested application.
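The adaptation step above can be sketched as a simple feedback rule. This is a minimal illustration, not the actual prototype (which is written in Java); the metric names, step size, and thresholds are assumptions chosen for the sketch.

```python
def adapt_workload(users, metrics, max_users=500, step=25,
                   err_limit=0.05, rt_limit_ms=2000):
    """Return the virtual-user count for the next test interval.

    `metrics` holds diagnostics sampled in real time from the system
    under test: an error rate in [0, 1] and a mean response time in ms.
    """
    saturated = (metrics["error_rate"] > err_limit
                 or metrics["resp_time_ms"] > rt_limit_ms)
    if saturated:
        # The application is struggling: back off to probe the boundary.
        return max(1, users - step)
    # Healthy: keep increasing load to expose workload-dependent issues.
    return min(max_users, users + step)
```

A controller would call this once per sampling interval and push the new user count to the load generator; for example, with 100 users and healthy metrics the next interval runs 125 users.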


SLIDE 5

Proposed Approach


SLIDE 6

Experimental Set-up

Testbed: two independent VMs located on a 24-core, 64 GB RAM server:

- Server (2 cores, 4 GB RAM): JPetstore, NMon, WAIT data collector
- Test Controller (2 cores, 4 GB RAM): JMeter, controlling tool (Java)

Test execution:

- Static: run a range of workloads to establish a static baseline, to be compared with our solution
- Dynamic: tests run with our solution (prototype)

Analyzed parameters: number of bugs, transaction response time, throughput, error rate, CPU and memory utilisation.
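For context, a dynamic run like the one above can drive JMeter in non-GUI mode and pass the current user count in as a property. The helper below only builds the command line; it is a sketch that assumes the test plan (the file name `petstore.jmx` is hypothetical) reads a `users` property via `${__P(users)}` for its thread-group size.

```python
def jmeter_command(plan, users, results="results.jtl"):
    """Build a JMeter CLI invocation for one test interval.

    -n: non-GUI mode, -t: test plan file,
    -J: define a JMeter property, -l: results log file.
    """
    return ["jmeter", "-n", "-t", plan, f"-Jusers={users}", "-l", results]
```

A controlling tool would spawn this command, monitor the collected metrics, and relaunch (or remotely adjust) the run with an updated user count.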


SLIDE 7

Results

Bug detection

Bugs are classified by occurrence frequency: a bug is major if it occurs in more than 5% of cases.

The dynamic approach detects a comparable number of bugs w.r.t. the best static workload.

[Chart: Perf. Bugs Found (#) vs. Bug Classification (Any, Major), comparing best-static, dynamic, avg-static, and worst-static runs]
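The frequency-based classification can be expressed directly. The 5% threshold comes from the slide; the function and the bug names in the example are hypothetical.

```python
def classify_bugs(occurrences, total_samples, major_threshold=0.05):
    """Label each bug 'major' or 'minor' by its occurrence frequency.

    `occurrences` maps a bug identifier to how many of the
    `total_samples` observations exhibited it; a bug is major when
    its frequency exceeds `major_threshold` (5% on this slide).
    """
    return {bug: ("major" if count / total_samples > major_threshold
                  else "minor")
            for bug, count in occurrences.items()}
```

For example, a bug seen in 12 of 100 samples is major, while one seen in 3 of 100 is minor.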


SLIDE 8

Results

Execution time

Reduction of 94% in the duration of the performance testing activities. The workload decision is taken out of the tester's hands.

[Chart: Time (hr) vs. Test Run Type, comparing the static runs and the dynamic run]


SLIDE 9

Results

Resource utilisation

More CPU-efficient than the static workloads. Marginally more memory-intensive, due to monitoring the workload behaviour.

[Chart: Average Utilisation (%) vs. Resource Type on the JMeter machine (CPU, Memory), comparing best-static, dynamic, avg-static, and worst-static runs]


SLIDE 10

Conclusions

An automated approach to dynamically adapt the workload so that issues (e.g. bottlenecks) can be identified more quickly, with less effort and expertise.

- Reduction of 94% in the duration of the performance testing activities.
- The approach identifies almost as many relevant bugs as the best test run from the static workloads.
- It introduces a moderate memory overhead (a 5% increment in utilisation) on the JMeter machine.


SLIDE 11

Future work

Improve the experimental validation of our approach by diversifying:

- the tested applications,
- the diagnosis tools used to identify the bugs,
- the size and composition of the test environment,
- the test duration.

Keep investigating how best to extend our technique (e.g., by exploring the use of different workloads per transaction type).


SLIDE 12

Thank you for your attention. Questions?