SLIDE 1

Application Performance Monitoring: Trade-Off between Overhead Reduction and Maintainability

Jan Waller, Florian Fittkau, and Wilhelm Hasselbring 2014-11-27

SLIDE 2
  • 1. Introduction
  • 2. Foundation
  • 3. Performance Benchmark
  • 4. Overhead Reduction and its Impact on Maintainability
  • 5. Related Work
  • 6. Future Work and Conclusions
  • 7. References

Waller, Fittkau, Hasselbring Application Performance Monitoring 2014-11-27 1 / 26

slide-3
SLIDE 3

Introduction

  • Application-level monitoring introduces monitoring overhead
  • Live trace processing approaches rely on high throughput
  • How can this be achieved?

SLIDE 4

Introduction

  • Application-level monitoring introduces monitoring overhead
  • Live trace processing approaches rely on high throughput
  • How can this be achieved?

→ A structured process for performance tuning utilizing benchmarks

SLIDE 5

Kieker Architecture

Figure 1: UML component diagram of a top-level view on the Kieker framework architecture

SLIDE 6

Causes of Monitoring Overhead

Figure 2: UML sequence diagram for method monitoring with the Kieker framework [WH13]
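The diagram distinguishes three causes of overhead for an instrumented method: the instrumentation itself (checking whether monitoring is active), collecting the monitoring data, and writing the resulting record. A minimal sketch of such a probe is shown below; all names (Controller, newRecord, monitoredMethod) are illustrative stand-ins, not Kieker's actual API.

    // Sketch of an application-level probe with the three overhead portions.
    public final class ProbeSketch {

        interface Controller {
            boolean isMonitoringEnabled();
            void newRecord(long[] record);   // hand-off to the writer thread
        }

        private final Controller controller;

        ProbeSketch(final Controller controller) { this.controller = controller; }

        long monitoredMethod() {             // the business method being monitored
            return System.nanoTime();
        }

        long probedCall(final long traceId) {
            // Instrumentation: with monitoring deactivated, only this check remains
            if (!controller.isMonitoringEnabled()) {
                return monitoredMethod();
            }
            // Collecting: gather timestamps and trace data around the call
            final long tin = System.nanoTime();
            final long result = monitoredMethod();
            final long tout = System.nanoTime();
            // Writing: pass the record to the monitoring writer
            controller.newRecord(new long[] { traceId, tin, tout });
            return result;
        }
    }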

SLIDE 7

Benchmark Engineering Phases

Figure 3: Benchmark engineering phases [WH13]

SLIDE 8

Measured Timings

Figure 4: Time series diagram of measured timings
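A time series like this can be produced by repeatedly invoking a monitored method that simulates a fixed amount of work and recording each call's response time; the warm-up prefix is discarded before computing statistics. The loop below is a minimal sketch under these assumptions, not the authors' benchmark code; all parameters are illustrative.

    // Sketch of a micro-benchmark loop producing a time series of response times.
    public final class TimingLoop {

        // Simulate a fixed amount of work (~methodTimeNs) by busy-waiting.
        static long monitoredMethod(final long methodTimeNs) {
            final long end = System.nanoTime() + methodTimeNs;
            long x = 0;
            while (System.nanoTime() < end) {
                x++;
            }
            return x;
        }

        public static void main(final String[] args) {
            final int totalCalls = 2_000_000;        // includes the warm-up phase
            final long[] timings = new long[totalCalls];
            for (int i = 0; i < totalCalls; i++) {
                final long start = System.nanoTime();
                monitoredMethod(500_000);            // ~0.5 ms of simulated work
                timings[i] = System.nanoTime() - start;
            }
            // Discard the warm-up prefix, then compute mean, CI, and quartiles.
        }
    }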

SLIDE 9

Overhead Reduction Tunings

  • Four performance tunings (PT1 to PT4)
  • Used the benchmark for structured performance optimizations
  • Goal: low monitoring overhead and high throughput
  • Every tuning is evaluated with the benchmark
  • For each tuning, we will see whether it is usable in Kieker or not

SLIDE 10

Experimental Setup

  • Modifying Kieker 1.8
  • X6270 Blade Server with
    • 2x Intel Xeon 2.53 GHz E5540 quad-core processors,
    • 24 GiB RAM, and
    • Solaris 10

SLIDE 11

Starting Point

          No instr.    Deactiv.    Collecting    Writing
Mean      1 176.5k     757.6k      63.2k         16.6k
95% CI    ± 25.9k      ± 5.5k      ± 0.1k        ± 0.02k
Q1        1 189.2k     756.6k      63.0k         16.2k
Median    1 191.2k     765.9k      63.6k         16.8k
Q3        1 194.6k     769.8k      63.9k         17.2k

Table 1: Throughput for basis (traces per second)

SLIDE 12

Analysis

  • High monitoring overhead in:
    • collection of data and
    • actually writing the gathered data
  • Expensive Reflection API calls
  • Reuse of operation signatures (see the caching sketch below)
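PT1 addresses the last two points by computing each operation's signature string once and reusing a cached copy afterwards. A minimal sketch of this idea, assuming a ConcurrentHashMap-based cache (the actual Kieker implementation may differ):

    import java.lang.reflect.Method;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of PT1: pay the Reflection cost once per method, then reuse
    // the cached signature string on every subsequent monitored call.
    public final class SignatureCache {

        private final Map<Method, String> cache = new ConcurrentHashMap<>();

        public String signatureOf(final Method method) {
            // computeIfAbsent builds the signature only on the first call
            return cache.computeIfAbsent(method, m ->
                    m.getReturnType().getName() + ' '
                            + m.getDeclaringClass().getName() + '.' + m.getName());
        }
    }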

SLIDE 13

PT1: Caching & Cloning

          No instr.    Deactiv.    Collecting    Writing
Mean      1 176.5k     757.6k      63.2k         16.6k

Table 2: Throughput for basis (traces per second)

          No instr.    Deactiv.    Collecting    Writing
Mean      1 190.5k     746.3k      78.2k         31.6k
95% CI    ± 4.1k       ± 4.1k      ± 0.1k        ± 0.1k

Table 3: Throughput for PT1 (traces per second)

SLIDE 14

Discussion

  • Will be used in Kieker, since it does not impact any interfaces

SLIDE 15

Analysis

  • From PT1: The queue is saturated and the monitoring thread waits for a free slot in the queue
  • Target: Decrease the synchronization impact of writing data
  • Optimize the communication between the monitoring thread and the writer thread
  • Disruptor instead of Java's ArrayBlockingQueue (see the sketch below)
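The baseline hand-off that PT2 replaces looks roughly like the sketch below: a bounded ArrayBlockingQueue whose lock is contended by the monitoring and writer threads, so a saturated queue blocks the application. The Disruptor replaces this with a pre-allocated ring buffer and lock-free sequencing. The sketch is illustrative, not Kieker's actual writer code.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Sketch of the lock-based hand-off between monitoring and writer threads.
    public final class QueueHandoff {

        private final BlockingQueue<long[]> queue = new ArrayBlockingQueue<>(10_000);

        // Called by the monitoring (application) thread.
        void enqueue(final long[] record) throws InterruptedException {
            queue.put(record);   // blocks while the queue is saturated
        }

        // Run by the dedicated writer thread; take() contends with put()
        // on the same internal lock.
        void writerLoop() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                final long[] record = queue.take();
                // ... serialize the record to the monitoring log ...
            }
        }
    }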

SLIDE 16

PT2: Inter-Thread Communication

          No instr.    Deactiv.    Collecting    Writing
Mean      1 190.5k     746.3k      78.2k         31.6k

Table 4: Throughput for PT1 (traces per second)

          No instr.    Deactiv.    Collecting    Writing
Mean      1 190.5k     757.6k      78.2k         56.0k
95% CI    ± 3.6k       ± 6.2k      ± 0.1k        ± 0.2k

Table 5: Throughput for PT2 (traces per second)

SLIDE 17

Discussion

  • Will be used in Kieker, since it only impacts the communication between the MonitoringController and the writers

SLIDE 18

Analysis

  • From PT2: The monitoring thread waits for the writer thread to finish
  • Target: Decrease the writing time
  • Reduce the work performed by the writer thread
  • Flat record model (ByteBuffers), sketched below
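With a flat record model, the probe serializes its fields straight into a reusable ByteBuffer instead of allocating a record object per call, leaving less work for the writer thread. The field layout below is illustrative, not Kieker's actual binary format.

    import java.nio.ByteBuffer;

    // Sketch of PT3: write record fields directly into a ByteBuffer.
    public final class FlatRecordWriter {

        private final ByteBuffer buffer = ByteBuffer.allocateDirect(8192);

        void writeRecord(final long traceId, final long tin, final long tout) {
            if (buffer.remaining() < 3 * Long.BYTES) {
                flush();                              // hand the full buffer over
            }
            buffer.putLong(traceId).putLong(tin).putLong(tout);
        }

        private void flush() {
            buffer.flip();
            // ... pass the buffer contents to the writer, then reuse it ...
            buffer.clear();
        }
    }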

SLIDE 19

PT3: Flat Record Model

          No instr.    Deactiv.    Collecting    Writing
Mean      1 190.5k     757.6k      78.2k         56.0k

Table 6: Throughput for PT2 (traces per second)

          No instr.    Deactiv.    Collecting    Writing
Mean      1 176.5k     729.9k      115.7k        113.2k
95% CI    ± 2.1k       ± 4.4k      ± 0.2k        ± 0.5k

Table 7: Throughput for PT3 (traces per second)

SLIDE 20

Discussion

  • Will not be used in Kieker, since monitoring records would then write bytes directly to buffers (less maintainable)

SLIDE 21

Analysis

  • From PT3: About 80% of the remaining time is spent in the collecting phase
  • Target: Decrease the collecting time
  • Remove interface definitions, configurability, and consistency checks
  • Five hard-coded types of MonitoringRecords (see the sketch below)
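Taken to the extreme, PT4 drops the extensible record abstraction entirely: serialization becomes a fixed switch over a handful of hard-coded record type ids. The ids and layouts below are illustrative; this inflexibility is exactly why the variant conflicts with Kieker's framework idea.

    import java.nio.ByteBuffer;

    // Sketch of PT4: no record interface, no registry, no consistency checks.
    public final class MinimalSerializer {

        static final int BEFORE_OPERATION = 0;
        static final int AFTER_OPERATION = 1;

        static void serialize(final int typeId, final long traceId,
                final long timestamp, final ByteBuffer buf) {
            switch (typeId) {
                case BEFORE_OPERATION:
                case AFTER_OPERATION:
                    buf.putInt(typeId).putLong(traceId).putLong(timestamp);
                    break;
                default:
                    throw new IllegalArgumentException("unknown record type " + typeId);
            }
        }
    }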

SLIDE 22

PT4: Minimal Monitoring Code

          No instr.    Deactiv.    Collecting    Writing
Mean      1 176.5k     729.9k      115.7k        113.2k

Table 8: Throughput for PT3 (traces per second)

          No instr.    Deactiv.    Collecting    Writing
Mean      1 190.5k     763.3k      145.1k        141.2k
95% CI    ± 2.0k       ± 4.0k      ± 0.2k        ± 0.3k

Table 9: Throughput for PT4 (traces per second)

SLIDE 23

Results and Discussion

  • Will not be used in Kieker, since it breaks the framework idea

SLIDE 24

Threats to Validity

  • At least one core was available for the monitoring
  • Common threats of micro-benchmarks (relevance and systematic errors)
  • Different memory layouts of programs or JIT compilation paths

SLIDE 25

Summarized Tuning Results

Figure 5: Overview of the tuning results in response time

SLIDE 26

Related Work

  • Dapper
  • Magpie
  • X-Trace
  • SPASS-meter

SLIDE 27

Future Work

  • Reduce the impact of deactivated probes, for instance with DiSL
  • A generator handling the monitoring record byte serialization
  • Multi-threaded versions of our monitoring benchmark
  • Comparison with other benchmarks

SLIDE 28

Conclusions

  • Proposed a micro-benchmark for monitoring frameworks
  • The tunings show an upper limit for the monitoring overhead
  • Useful for live trace processing in the context of ExplorViz¹

¹ http://www.explorviz.net

SLIDE 29

References

[WH13] Jan Waller and Wilhelm Hasselbring. A benchmark engineering methodology to measure the overhead of application-level monitoring. In Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days (KPDays), pages 59–68, 2013.
