SLIDE 1

CHAIR FOR SOFTWARE DESIGN AND QUALITY


Modeling and Experimental Analysis of Virtualized Storage Performance using IBM System z as Example

Diploma Thesis Presentation, October 12, 2012
Dominik Bruhn

Reviewers: Prof. Dr. Ralf H. Reussner, Prof. Dr. Walter F. Tichy
Advisors: Qais Noorshams, Dr. Samuel Kounev

KIT – University of the State of Baden-Württemberg and National Research Center of the Helmholtz Association


SLIDE 5

Motivation

[Diagram: a system running App A, with a second application App A' added and an unknown ('?') performance impact; and two systems each running App A, again with an unknown ('?') impact]

SLIDE 6

Problem & Idea & Benefit & Action

Problem

Complex systems with many layers
Difficulty of obtaining good performance prediction models

Idea

Derivation of storage performance models from systematic measurements using regression techniques

Benefit

Ability to predict performance
Easier decisions on configurations and systems

Action

Creation and evaluation of performance models
Evaluation of techniques and optimization possibilities

SLIDE 7

Contribution

Creation and evaluation of regression models for storage performance prediction
Evaluation, analysis, and comparison of regression techniques for storage performance prediction
A repeatable process, validated in a representative real-world environment

SLIDE 8

System Under Study

[Architecture diagram]
IBM System z: processors and memory, PR/SM (hypervisor), two LPARs (LPAR1, LPAR2) each running z/Linux with an application
IBM DS8700: storage controller with volatile and non-volatile cache, hard disks (RAID)
Connection: switched Fibre Channel / Fibre Channel

Storage-performance-influencing factors:
Workload: requests (size, mix, pattern, locality)
System: operating system (file system, I/O scheduler) and hardware

Derived from Noorshams et al. (2012)

SLIDE 9

Modeling

Regression Models

1. Training: a regression model is fitted to training data.
2. Prediction: the fitted model maps the independent variables to the dependent variable.

The model can either be used as a black box or be opened up via model introspection.
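The two-phase workflow can be sketched in R, the statistics tool used in the thesis tool chain (a minimal sketch with toy data; variable names are illustrative, not the thesis measurement schema):

    # Toy training data (hypothetical values)
    train <- data.frame(x1 = c(1, 2, 4, 6, 8, 10),
                        y  = c(0.5, 1.2, 3.5, 5.9, 8.2, 11.0))
    model <- lm(y ~ x1, data = train)             # 1. Training
    predict(model, newdata = data.frame(x1 = 5))  # 2. Prediction
    coef(model)                                   # Model introspection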

SLIDE 10

Regression Techniques

Linear Regression Models

[Plot: sample data with a fitted regression line]
y = −1.884 + 1.293 · x1
Parameters: none

MARS

[Plot: same data with a fitted hinge function]
y = 1.014501 + 1.72866 · h(x1 − 3.25), where h(x) = max(0, x) is a hinge function
Parameters: nk, threshold
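A minimal MARS fit in R, assuming the earth package (the threshold parameter above corresponds to earth's thresh argument):

    library(earth)  # MARS implementation (assumed)
    train <- data.frame(x1 = seq(0, 10, by = 0.5))
    train$y <- 1 + 1.7 * pmax(0, train$x1 - 3.25) + rnorm(21, sd = 0.1)  # hinge-shaped toy data
    fit <- earth(y ~ x1, data = train, nk = 20, thresh = 0.001)
    summary(fit)    # shows hinge terms such as h(x1-3.25)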

SLIDE 11

Regression Techniques

Regression Trees (CART)

[Plot: same data with a fitted step function]
Tree: x1 < 4.5 → 1.17; 4.5 ≤ x1 < 6.75 → 5.35; x1 ≥ 6.75 → 8.93

Parameters: minsplit, cp

M5

[Plot: same data with a piecewise linear fit]
Split: x1 ≤ 3.5 → LM0, otherwise LM1

Model        LM0   LM1
(Intercept)  1     3.34
x1           –     1.53

Parameter: nsplits
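Both tree techniques are available in R; a sketch assuming the rpart package for CART and RWeka for M5 model trees (toy data):

    library(rpart)   # CART (assumed)
    set.seed(1)
    train <- data.frame(x1 = runif(200, 0, 10))
    train$y <- 1 + 1.5 * pmax(0, train$x1 - 3.5) + rnorm(200, sd = 0.3)
    cart_fit <- rpart(y ~ x1, data = train,
                      control = rpart.control(minsplit = 20, cp = 0.01))
    # M5 model trees, e.g. via RWeka's M5P (assumed; requires a Java/Weka setup):
    # library(RWeka); m5_fit <- M5P(y ~ x1, data = train)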

SLIDE 12

Cross-Validation

The samples are randomized and split into k folds. Training is repeated k times: in the i-th round, the i-th fold serves as test data and the remaining k − 1 folds as training data, so every fold is used for testing exactly once.
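A sketch of k-fold cross-validation in R, assuming hypothetical data and a simple linear model:

    set.seed(42)
    data <- data.frame(x1 = runif(100, 0, 10))
    data$y <- 2 * data$x1 + 1 + rnorm(100, sd = 0.5)
    k     <- 10
    data  <- data[sample(nrow(data)), ]                   # randomize samples
    folds <- cut(seq_len(nrow(data)), breaks = k, labels = FALSE)
    errors <- sapply(seq_len(k), function(i) {
      test <- data[folds == i, ]                          # i-th fold: test data
      fit  <- lm(y ~ x1, data = data[folds != i, ])       # other folds: training
      mean(abs(predict(fit, test) - test$y) / abs(test$y)) * 100
    })
    mean(errors)                                          # mean relative error (%)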

SLIDE 13

Experimental Setup

Workload

System parameters:
File system: ext4
I/O scheduler: CFQ, NOOP

Workload parameters:
Threads: 100
File set size: 1 GB, 25 GB, 50 GB, 75 GB, 100 GB
Request size: 4 KB, 8 KB, 12 KB, 16 KB, 20 KB, 24 KB, 28 KB, 32 KB
Access pattern: random, sequential
Read percentage: 0%, 25%, 30%, 50%, 70%, 75%, 100%

Benchmark - FFSB

Existing benchmark, operating at the application layer

System Setup

Virtual machines: z/Linux, virtualized by PR/SM in an LPAR
DS8700 System Storage with 50 GB volatile and 2 GB non-volatile cache

SLIDE 14

Approach

Goal/Question/Metric (GQM) plan:

Measurements Analysis
  Setup Analysis: Stability of the Results
  Parameter Analysis: Parameter Influence, Virtualization Influence

Performance Modeling
  Model Analysis: Interpolation (synthetic, random), Extrapolation (synthetic, random), Reduced Training Sets, Nominal Split
  Technique Analysis: Quality Comparison, Parameter Tuning, Tradeoff Analysis

SLIDE 15

Measurement Analysis - Results

[GQM overview repeated from the Approach slide; focus: Measurements Analysis]

SLIDE 16

Performance Modeling - Results

[GQM overview repeated from the Approach slide; focus: Performance Modeling]

SLIDE 17

Interpolation Using Random Samples

What interpolation ability do the regression models show when tested on newly collected samples?

Method

Creation of six regression models:

Linear regression model (lm)
Linear regression model including interaction terms (lm 5param inter)
CART tree (cart)
MARS model without interactions (mars)
MARS model including all interaction terms (mars multi)
M5 model (m5)

Training using all measurements
Validation using newly collected random samples
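The relative error used for validation can be computed along these lines (a sketch; the function and argument names are illustrative placeholders):

    # Mean relative prediction error in percent (placeholder names)
    rel_error <- function(model, newdata, actual) {
      mean(abs(predict(model, newdata) - actual) / actual) * 100
    }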

SLIDE 18

Interpolation Using Random Samples

Relative error (%):

Model            Read    Write
lm               79.34   62.28
lm 5param inter  13.87   10.01
cart             35.28   33.97
mars             79.49   64.65
mars multi       28.52   16.64
m5                9.27   10.39

Models without interactions (lm, mars) do not perform well.
With an error of ∼10%, M5 works well.
Linear regression with interactions works surprisingly well.
CART and MARS (with interactions) rank in the midfield.

SLIDE 19

Extrapolation Using Random Samples

How well do the regression models extrapolate when tested on newly collected data?

Relative error (%):

Model            Read    Write
lm               97.27   84.33
lm 5param inter  20.16   20.32
cart             61.29   82.13
mars             95.92   79.71
mars multi       41.01   26.47
m5               12.58   15.46

Again, the models without interactions do not work well.
CART models cannot be used for extrapolation.
M5 still performs well, with an error of ∼14%.

SLIDE 20

Nominal Split Model Optimization

How can the regression modeling of nominal-scale parameters be improved?

Method

[Diagram: split by the two nominal parameters, I/O scheduler and access pattern]
Model 1: CFQ, random
Model 2: CFQ, sequential
Model 3: NOOP, random
Model 4: NOOP, sequential

Create four models, one per combination of the nominal parameter values. The remaining three parameters are all on an ordinal scale.
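A sketch of this split in R, fitting one model per combination of the two nominal parameters (the measurements data frame and its column names are hypothetical):

    combos <- expand.grid(scheduler = c("CFQ", "NOOP"),
                          pattern   = c("random", "sequential"))
    models <- lapply(seq_len(nrow(combos)), function(i) {
      # 'measurements' is a placeholder for the measured data
      part <- measurements[measurements$scheduler == combos$scheduler[i] &
                           measurements$pattern   == combos$pattern[i], ]
      # remaining (ordinal) parameters only
      lm(responseTime ~ filesetSize + blockSize + readPercentage, data = part)
    })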

SLIDE 21

Nominal Split Model Optimization

Relative error (%) after the split:

Model            Read    Write
lm               20.51   12.72
lm 5param inter  13.35    9.20
cart             19.63   15.36
mars             17.89   10.76
mars multi        5.82    3.86
m5                6.21    4.75

Improvement over the unsplit models (percentage points):

Model            Read    Write
lm               66.52   53.60
lm 5param inter   0.13   −0.01
cart             18.76   22.74
mars             68.47   56.08
mars multi       28.52   12.14
m5                0.28   −0.52

Models without interactions improve the most.
The best-performing models do not benefit.

SLIDE 22

Regression Technique Parameter Tuning

Which configuration parameters of the regression techniques can improve the prediction results?

Problem

Different regression techniques have different configuration parameters. Default values might not be well-suited. It is difficult to find the right parameters.

Method

Identify the most promising configuration parameters
Analyze the influence of these configuration parameters on the prediction quality

SLIDE 23

Regression Technique Parameter Tuning

CART

Relative error (%):

Model            Read    Write
lm 5param inter  13.47    9.19
cart             38.40   38.11
cart var1        38.40   38.11
cart var2        21.25   17.68
cart var3        21.26   17.13
m5                6.49    4.24

Model      minsplit  cp
cart       20        0.01
cart var1   5        0.01
cart var2  20        0.001
cart var3   5        0.001

Decreasing minsplit does not improve the models.
Decreasing cp improves the models by about 50%.
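The four variants map directly onto rpart's control settings; a sketch (the measurements data frame and the model formula are placeholders):

    library(rpart)
    grid <- expand.grid(minsplit = c(20, 5), cp = c(0.01, 0.001))
    fits <- lapply(seq_len(nrow(grid)), function(i)
      rpart(responseTime ~ ., data = measurements,   # 'measurements' is a placeholder
            control = rpart.control(minsplit = grid$minsplit[i],
                                    cp       = grid$cp[i])))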

SLIDE 24

Regression Technique Parameter Tuning

MARS

Relative error (%):

Model            Read    Write
lm 5param inter  13.47    9.19
mars multi       34.34   16.00
mars var1        12.59    9.00
mars var2        34.34   16.00
mars var3        12.58    8.05
m5                6.49    4.24

Model       nk  threshold
mars multi  20  0.001
mars var1   40  0.001
mars var2   20  0.0001
mars var3   40  0.0001

Increasing nk improves the models by about 50%.
Decreasing threshold improves the models only by a small amount.
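The corresponding variants for MARS, again assuming the earth package (thresh is earth's name for the threshold parameter; data is a placeholder):

    library(earth)
    grid <- expand.grid(nk = c(20, 40), thresh = c(0.001, 0.0001))
    fits <- lapply(seq_len(nrow(grid)), function(i)
      earth(responseTime ~ ., data = measurements,   # 'measurements' is a placeholder
            nk = grid$nk[i], thresh = grid$thresh[i]))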

SLIDE 25

Regression Technique Tradeoff Analysis

What are the advantages and the disadvantages of the modeling techniques?

Criterion           Linear Regression  CART   MARS  M5
Prediction Quality  ⋆⋆⋆                ⋆      ⋆⋆    ⋆⋆⋆⋆
Modeling Time       ⋆⋆⋆⋆               ⋆⋆⋆    ⋆⋆    ⋆
Prediction Time     ⋆⋆⋆⋆               ⋆⋆⋆⋆   ⋆⋆    ⋆
Interpretability    ⋆⋆                 ⋆⋆⋆⋆   ⋆     ⋆

The stars indicate only a relative ranking.

Model            Creation Time (s)  Prediction Time (s)
lm               0.0050             0.0014
lm 5param inter  0.0145             0.0018
cart             0.0250             0.0010
mars             0.0440             0.0195
mars multi       0.0570             0.0203
m5               0.1635             0.0534
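Timings like these can be taken with R's system.time (a sketch with placeholder data and formula):

    t_fit  <- system.time(fit <- lm(responseTime ~ ., data = measurements))["elapsed"]
    t_pred <- system.time(predict(fit, newdata = measurements))["elapsed"]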

SLIDE 26

Lessons Learned

[GQM overview repeated from the Approach slide]

SLIDE 27

Related Work

Storage Performance Modeling

Model storage performance using various techniques:
Predict only the virtualization overhead: Ahmad et al. (2003)
Use fine-grained models: Kraft et al. (2011), Huber et al. (2010)
Omit system parameters: Wang et al. (2004), Anderson (2001), Lee and Katz (1993)

Measurement Based Regression Analysis

Use, evaluate, and compare regression techniques on other systems: Westermann et al. (2012), Courtois and Woodside (2000), Kim et al. (2007)

SLIDE 28

Conclusion

Summary
Creation and evaluation of storage performance prediction models using regression techniques
Evaluation of techniques and optimization possibilities

Analysis Results
Inter- and extrapolation of storage performance using regression models works well: errors ≤ 15% are possible
M5 and linear regression models are the best choice in this case
Optimization possibilities: nominal parameters and configuration of the regression techniques

Outlook
Further analysis of the optimization possibilities of the regression techniques
Application and validation using real applications

SLIDE 29

BACKUP

SLIDE 30

Stability of the Results

How reproducible are the experimental results?

[Plot: cumulative distribution function of the relative standard error (%), separately for read and write operations]

rError = σ_R / (√5 · R̄) · 100%

Read requests: mean standard error 3.35%, 90th percentile 8.45%
Write requests: mean standard error 2.10%, 90th percentile 5.35%

Each measurement run issues up to 2.7M requests. The measurements are sufficiently repeatable and stable.

SLIDE 31

Virtualization Influence

What is the influence of virtualization?

[Plot: cumulative distribution of the relative error (%), separately for read and write operations]

rError = ((dualVM1 + dualVM2) / 2 − singleVM) / singleVM

Mean read requests: 10.60%
Mean write requests: 16.67%

SLIDE 32

Parameter Influence

Which parameters have an influence on the response time?

[Plot: share of explained variation in read and write response time, attributed to the parameters filesetSize, blockSize, sequentialAccess, scheduler, readPercentage, the interactions filesetSize:sequentialAccess, blockSize:sequentialAccess, sequentialAccess:scheduler, and other]

98.71% (read) and 99.53% (write) of the variation can be explained.
Without interaction terms: only 54.03% (read) and 63.36% (write).
Interaction terms are necessary.
All five parameters influence the response time.

SLIDE 33

Interpolation Using Existing Data

How well do the regression models interpolate when using synthetic test sets?

Relative error (%):

Model            Read    Write
lm               87.02   66.32
lm 5param inter  13.47    9.19
cart             38.40   38.11
mars             86.36   66.84
mars multi       34.34   16.00
m5                6.49    4.24

SLIDE 34

Extrapolation Using Existing Data

How well do the regression models extrapolate when using synthetic test sets?

Relative error (%):

Model            Read     Write
lm                97.27   107.14
lm 5param inter   20.16    41.93
cart              61.29    74.89
mars              95.92   110.33
mars multi        41.01    64.59
m5                12.58    45.51

SLIDE 35

Reduced Training Sets

How many measurements are needed for an accurate model?

Parameter            red1                red2
Block size           4 kB, 32 kB         4 kB, 16 kB, 32 kB
Read percentage      25%, 75%            25%, 50%, 75%
File set size        1 GB, 100 GB        1 GB, 50 GB, 100 GB
Access               random, sequential  random, sequential
Scheduler            NOOP, CFQ           NOOP, CFQ
# of configurations  32                  108

SLIDE 36

Reduced Training Sets

red1: relative error (%):

Model            Read     Write
lm                80.76   57.87
lm 5param inter   14.64    9.78
cart             112.57   62.62
mars              76.99   56.67
mars multi        50.92   29.48
m5               104.46   24.24

red2: relative error (%):

Model            Read    Write
lm               79.79   61.95
lm 5param inter  12.29    8.51
cart             43.61   32.96
mars             82.89   59.87
mars multi       34.35   17.15
m5               16.70   14.66

SLIDE 37

Parameter Tuning – Paired t-Test

CART

Model      minsplit  cp     p (vs. cart)
cart       20        0.01   –
cart var1   5        0.01   identical
cart var2  20        0.001  0.014
cart var3   5        0.001  0.015

MARS

Model       nk  threshold  p (vs. mars multi)
mars multi  20  0.001      –
mars var1   40  0.001      0.04
mars var2   20  0.0001     identical
mars var3   40  0.0001     0.04
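Such p-values can be obtained with a paired t-test over per-configuration errors; a sketch with placeholder vectors:

    # errors_cart, errors_cart_var2: relative errors of two models on the
    # same configurations (placeholders)
    t.test(errors_cart, errors_cart_var2, paired = TRUE)$p.value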

SLIDE 38

System Layers

[Diagram: path of an I/O request through the system layers]
FFSB
Kernel level: Syscall Interface → VFS Layer → Filesystem Layer → Block Layer (Scheduler) → Device Driver
System level: Virtualization Layer → Storage Interface
Storage device level: Device Controller → Storage Medium

SLIDE 39

Benchmarking Setup (Simplified)

[Diagram: a controller machine runs the benchmark harness (benchmark controller and a benchmark driver for FFSB), an SQLite database, and R (statistics tool) with an analysis library providing analysis, optimization, and regression functions; it drives the FFSB benchmark on VM 1 and VM 2 via SSH]

SLIDE 40

Benchmarking Setup

[Diagram, full setup: the controller machine runs the benchmark harness (benchmark controller, benchmark drivers including one for FFSB, datastore interface), the SQLite database, and R (analysis software) with an analysis library (datastore interface, aggregation functions, regression functions, analysis functions); it controls the systems under test, each machine running benchmarks such as FFSB, via SSH]

SLIDE 41

Related Measurement Approaches

Differences from all three tools:

No automated analysis of the benchmark results
No automated model generation
No automated analysis of the regression models
No integration of storage benchmarks

Differences from Software Performance Cockpit:

No simultaneous execution of benchmarks on multiple hosts
Relies on RMI for transport

Differences from Ginpex:

Missing integration of external benchmarks
No regression technique integration

Differences from Faban:

No specification of multiple jobs to be run
No analysis possibilities

SLIDE 42

References I

I. Ahmad, J. M. Anderson, A. M. Holler, R. Kambo, and V. Makhija. An analysis of disk performance in VMware ESX Server virtual machines. In Proceedings of the 2003 IEEE International Workshop on Workload Characterization (WWC-6), pages 65–76. IEEE, 2003. doi: 10.1109/WWC.2003.1249058.

Eric Anderson. Simple table-based modeling of storage devices. SSP Technical Report, HP Laboratories, July 2001.

Marc Courtois and Murray Woodside. Using regression splines for software performance analysis. In Proceedings of the 2nd International Workshop on Software and Performance, WOSP ’00, pages 105–114, New York, NY, USA, 2000. ACM. ISBN 1-58113-195-X. doi: 10.1145/350391.350416.

Nikolaus Huber, Steffen Becker, Christoph Rathfelder, Jochen Schweflinghaus, and Ralf H. Reussner. Performance modeling in industry: A case study on storage virtualization. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering – Volume 2, ICSE ’10, pages 1–10, New York, NY, USA, 2010. ACM. ISBN 978-1-60558-719-6. doi: 10.1145/1810295.1810297.

SLIDE 43

References II

Hyunjoong Kim, Wei-Yin Loh, Yu-Shan Shih, and Probal Chaudhuri. Visualizable and interpretable regression models with good prediction power. IIE Transactions, 39:565–579, 2007.

Stephan Kraft, Giuliano Casale, Diwakar Krishnamurthy, Des Greer, and Peter Kilpatrick. IO performance prediction in consolidated virtualized environments. In Proceedings of the Second Joint WOSP/SIPEW International Conference on Performance Engineering, ICPE ’11, pages 295–306, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0519-8. doi: 10.1145/1958746.1958789.

Edward K. Lee and Randy H. Katz. An analytic performance model of disk arrays. SIGMETRICS Performance Evaluation Review, 21(1):98–109, June 1993. ISSN 0163-5999. doi: 10.1145/166962.166994.

Qais Noorshams, Samuel Kounev, and Ralf Reussner. Experimental evaluation of the performance-influencing factors of virtualized storage systems. In Proceedings of the 9th European Performance Engineering Workshop (EPEW ’12), Munich, Germany, July 30–31, 2012.

SLIDE 44

References III

Mengzhi Wang, Kinman Au, Anastassia Ailamaki, Anthony Brockwell, Christos Faloutsos, and Gregory R. Ganger. Storage device performance prediction with CART models. In Proceedings of the IEEE Computer Society’s 12th Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems, pages 588–595, Washington, DC, USA, 2004. IEEE Computer Society. ISBN 0-7695-2251-3.

Dennis Westermann, Jens Happe, Rouven Krebs, and Roozbeh Farahbod. Automated inference of goal-oriented performance prediction functions. In 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), September 2012.
