How to Waste Time and Money Testing the Performance of a Software Product


SLIDE 1

How to Waste Time and Money Testing the Performance of a Software Product

David Daly | Lead Engineer -- Performance | @daviddaly44 | https://daviddaly.me/

SLIDE 2

  • 22 %

SLIDE 3

SLIDE 4

Understand the performance of our software and when it changes.

SLIDE 5

How NOT to:
  • Machines with Personality
  • Run tests by hand
  • Wait until release time
  • Have a dedicated team separate from dev

How to:
  • Automate everything
  • Minimize noise
  • Involve everyone
  • Always be testing

SLIDE 6

Performance Use Cases

  • Detect performance impacting commits (Waterfall)
  • Test impact of proposed code change (Patch Test)
  • Diagnose performance regressions (Diagnostics, Profiling)
  • Release support (how do we compare to previous stable?)
  • Add test coverage
  • Performance exploration

SLIDE 7

SLIDE 8

Detect performance impacting commits (Waterfall)

SLIDE 9

Performance Testing in Continuous Integration

  • Set up a system under test
  • Run a workload
  • Report the results
  • Visualize the results
  • Decide (and alert) if the performance changed
  • Automate everything / keep the noise down
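
The loop above is small enough to sketch in a few lines of Python. The snippet below is only a minimal illustration of its shape, assuming a hypothetical ./benchmark binary that prints a JSON result and a fixed 5% regression threshold; it is not the tooling described in the talk.

```python
# Minimal sketch of the CI performance-testing loop above.
# The ./benchmark binary, its JSON output, and the 5% threshold are
# illustrative assumptions, not the talk's actual tooling or policy.
import json
import statistics
import subprocess


def run_workload(n_trials: int = 5) -> float:
    """Run the workload a few times and return the median ops/sec."""
    results = []
    for _ in range(n_trials):
        out = subprocess.run(
            ["./benchmark", "--json"], capture_output=True, check=True, text=True
        )
        results.append(json.loads(out.stdout)["ops_per_sec"])
    return statistics.median(results)


def regressed(baseline: float, threshold: float = 0.05) -> bool:
    """Report the result and decide whether performance changed."""
    observed = run_workload()
    change = (observed - baseline) / baseline
    print(f"ops/sec: {observed:.1f} ({change:+.1%} vs. baseline {baseline:.1f})")
    return change < -threshold


if __name__ == "__main__":
    if regressed(baseline=12_000.0):
        raise SystemExit("possible performance regression: alert")
```

A fixed per-run threshold like this is very sensitive to noise, which is why the "decide (and alert)" step is revisited later in the deck with change point detection.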

SLIDE 10

SLIDE 11

Levels

System Level Tests (Sys-perf)
Multi-node clusters in the cloud with end-to-end tests. Expensive ($s and hours), run least frequently.

Microbenchmarks
Single-node, CPU-bound tests. Dedicated hardware.

Unit Level Performance Tests
Google Benchmark framework. Some dedicated hardware. Least expensive ($s and hours).
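
The unit-level tests above use the Google Benchmark framework, which is C++. As a rough analogue of what a unit-level performance test looks like, here is a small Python sketch using timeit; the function being measured is a hypothetical stand-in, not code from the product.

```python
# Rough Python analogue of a unit-level performance test.
# The deck's unit-level tests use Google Benchmark (C++); the function
# measured here is a hypothetical stand-in.
import timeit


def build_index(n: int = 10_000) -> dict:
    """Hypothetical operation whose performance we want to track."""
    return {i: str(i) for i in range(n)}


def per_call_seconds(repeat: int = 10, number: int = 50) -> float:
    """Best per-call time across several repeats, to dampen noise."""
    timer = timeit.Timer(lambda: build_index())
    return min(timer.repeat(repeat=repeat, number=number)) / number


if __name__ == "__main__":
    print(f"build_index: {per_call_seconds() * 1e3:.3f} ms per call")
```

Taking the minimum over several repeats is one small way to apply the "minimize noise" advice from the earlier how-to list, even at this cheapest level of testing.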

SLIDE 12

The focus for DSI (Distributed Systems Infrastructure) was serving the more complex requirements of end-to-end system performance tests on real clusters: automating every step, including provisioning of hardware, and generating consistent, repeatable results.

SLIDE 13

DSI Goals

  • Full end-to-end automation
  • Support both CI and manual testing
  • Elastic, public cloud infrastructure
  • Everything configurable
  • All configuration via YAML
  • Diagnosability
  • Repeatability

SLIDE 14

DSI Modules

  • Bootstrap
  • Infrastructure provisioning
  • System setup
  • Workload setup
  • MongoDB setup
  • Test Control
  • Analysis
  • Infrastructure teardown
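
Conceptually, these modules form a pipeline in which each stage is driven by its own YAML file, in line with the "all configuration via YAML" goal. The sketch below illustrates that shape only; the stage functions and file names are assumptions made for the example, not DSI's actual code or configuration schema.

```python
# Conceptual sketch of a DSI-style pipeline: each module is a stage that
# reads its own YAML configuration. Stage functions and file names are
# illustrative assumptions, not DSI's actual layout or API.
import yaml  # pip install pyyaml


def load_config(name: str) -> dict:
    try:
        with open(f"{name}.yml") as f:
            return yaml.safe_load(f) or {}
    except FileNotFoundError:
        return {}


def provision_infrastructure(cfg: dict) -> None:
    print("provisioning:", cfg.get("instance_type", "default"))


def setup_system(cfg: dict) -> None:
    print("system setup:", cfg.get("packages", []))


def run_tests(cfg: dict) -> None:
    print("running workloads:", [t.get("name") for t in cfg.get("tests", [])])


def analyze(cfg: dict) -> None:
    print("analysis rules:", cfg.get("rules", []))


def teardown(cfg: dict) -> None:
    print("tearing down infrastructure")


STAGES = [
    ("infrastructure_provisioning", provision_infrastructure),
    ("system_setup", setup_system),
    ("test_control", run_tests),
    ("analysis", analyze),
    ("infrastructure_teardown", teardown),
]

if __name__ == "__main__":
    for config_name, stage in STAGES:
        stage(load_config(config_name))
```

Driving every stage from plain configuration files is also what makes the earlier "support both CI and manual testing" goal practical: CI can generate the YAML, and an engineer can edit the same files by hand to reproduce a run.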

SLIDE 15

Configuration Files

<Put Henrik’s examples here>

SLIDE 16

Performance Testing in Continuous Integration

  • Set up a system under test
  • Run a workload
  • Report the results
  • Visualize the results
  • Decide (and alert) if the performance changed
  • Automate everything / keep the noise down

SLIDE 17

SLIDE 18

SLIDE 19

SLIDE 20

SLIDE 21

Performance Testing in Continuous Integration

  • Set up a system under test
  • Run a workload
  • Report the results
  • Visualize the results
  • Decide (and alert) if the performance changed
      • See ICPE paper: Change Point Detection in Software Performance Testing (video, slides)
  • Automate everything / keep the noise down
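
The ICPE paper referenced above replaces simple per-run thresholds with change point detection over the whole history of results for a test; MongoDB's implementation of the underlying E-Divisive means approach is published as the open-source signal-processing-algorithms package mentioned later in the deck. As a toy illustration of the idea only (not the paper's algorithm or that library's API), the sketch below scans a result series for the split that maximizes the shift in mean:

```python
# Toy illustration of change point detection on a series of benchmark
# results: find the index that best splits the series into two segments
# with different means. This is a sketch of the idea only; the system
# described in the talk uses the E-Divisive means approach from the
# referenced ICPE paper.
from statistics import mean


def best_change_point(series: list[float]) -> tuple[int, float]:
    """Return (index, score) of the split maximizing the mean shift."""
    best_idx, best_score = -1, 0.0
    for i in range(2, len(series) - 1):
        score = abs(mean(series[:i]) - mean(series[i:]))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score


if __name__ == "__main__":
    # Hypothetical ops/sec history with a drop after the eighth result.
    history = [100.0, 101.2, 99.5, 100.8, 99.9, 100.3, 101.0, 100.1,
               90.4, 89.8, 91.2, 90.0, 89.5, 90.7]
    idx, score = best_change_point(history)
    print(f"most likely change point at index {idx} (mean shift {score:.1f})")
```

A best split always exists, even in pure noise, so a significance test on the score (E-Divisive uses a permutation test) decides whether to actually raise an alert.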

SLIDE 22

SLIDE 23

SLIDE 24

SLIDE 25

Release support

SLIDE 26

Can we release?

  • How is the performance?
      • Compared to the last release.
  • How many open issues are there?
      • Are they getting fixed? Are they stuck?
  • Do we have coverage for new features?

SLIDE 27

SLIDE 28

SLIDE 29

SLIDE 30

Humans in the loop

Periodically review everything (weekly, monthly):

  • Is everything important ticketed?
  • Are the top issues being worked?
  • Surface trade-offs that need to be addressed (e.g., New Feature X makes everything else 3% slower).

Put people on the hard parts, then see what can be automated next.

SLIDE 31

Ongoing Work

SLIDE 32

Work with Us

We have real-world problems and would love to work with the community:

  • Noise Reduction work
  • DBTest.io: “Automated System Performance Testing at MongoDB”
  • ICPE paper: “The Use of Change Point Detection to Identify Software Performance Regressions in a Continuous Integration System” (video)

Our code is open source: signal-processing-algorithms, infrastructure code. Our regression environment is open, and the platform is open source. Our performance data is not open source, but we’re working to share it with academics.

SLIDE 33

Thank you

David Daly | Lead Engineer -- Performance | @daviddaly44 | https://daviddaly.me/