Cloud WorkBench: A Web-Based Framework for Benchmarking Cloud Services - PowerPoint Presentation



SLIDE 1

Cloud WorkBench

A Web-Based Framework for Benchmarking Cloud Services

Joel Scheuner

University of Zurich, Switzerland

software evolution & architecture lab

14.08.2014

SLIDE 2

Cloud Computing - Essential Characteristics

  1. On-demand Self-service
  2. Rapid Elasticity
  3. Utility-based Pricing
  4. Resource Pooling
  5. Broad Network Access

SLIDE 7

Infrastructure-as-a-Service (IaaS)

  • Processing, storage, networks
  • Virtual Machines (VMs)

> 23 instance types

SLIDE 8

Differences between IaaS Services

  • Performance
  • Hardware being served
  • Reliability
  • Costs

Even for services with the same specification!

SLIDE 9

Benchmark

  • Performance test
  • Types:
    • Micro-benchmarks
    • Application benchmarks
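A micro-benchmark targets a single resource in isolation. As an illustrative sketch (not part of CWB), a minimal sequential-write micro-benchmark in Python could look like this:

```python
import os
import tempfile
import time

def seq_write_throughput(size_mb: int = 16, block_kb: int = 64) -> float:
    """Sequentially write size_mb of data in block_kb chunks and
    return the observed throughput in KB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
        elapsed = time.perf_counter() - start
        path = f.name
    os.unlink(path)
    return (size_mb * 1024) / elapsed

print(f"{seq_write_throughput(size_mb=4):.0f} KB/s")
```

Dedicated tools such as FIO (used in the case study later) control many more parameters (I/O engine, queue depth, direct I/O) and are preferable for real measurements.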

SLIDE 10

Demo

SLIDE 11

Cloud WorkBench

Open Source: https://github.com/sealuzh/cloud-workbench

SLIDE 12

Research Questions

RQ1: How can common IaaS cloud benchmarks from literature be defined in a modular and portable manner?

RQ2: How can benchmarks from RQ1 be periodically scheduled and reproducibly executed in cloud environments without manual interaction?

SLIDE 13

Overall Architecture

[Architecture diagram: the CWB Server comprises a web interface, a REST API, a relational database, and business logic with a scheduler, a VM environment manager, and provider plugins; the experimenter accesses the server through the web interface.]

SLIDE 14

Overall Architecture

[Architecture diagram, extended: the CWB Server manages cloud VMs at the IaaS providers through their provider APIs, and provisions VMs and executes commands over SSH; each cloud VM runs the CWB client library in the benchmark execution environment, fetches its configuration, and notifies state and submits results back to the server via REST; the experimenter accesses the web interface.]
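The "Notify State" and "Submit Results" REST exchanges between a cloud VM and the CWB server can be sketched as JSON payloads; the field names below are illustrative assumptions, not CWB's actual API:

```python
import json

# Hypothetical message bodies for "Notify State" and "Submit Results"
# (field names are illustrative, not taken from CWB's actual REST API).

def state_notification(vm_id: str, state: str) -> str:
    """Serialize a VM state update, e.g. 'provisioning' -> 'running'."""
    return json.dumps({"vm": vm_id, "state": state})

def result_submission(vm_id: str, metric: str, value: float, unit: str) -> str:
    """Serialize one benchmark result destined for the relational database."""
    return json.dumps({"vm": vm_id, "metric": metric, "value": value, "unit": unit})

msg = result_submission("i-0abc123", "seq_write", 5500.0, "KB/s")
print(json.loads(msg)["metric"])  # seq_write
```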

SLIDE 15

Overall Architecture

[Architecture diagram, complete: the CWB Server (web interface, REST API, relational database, business logic with scheduler, VM environment manager, and provider plugins) manages VMs at the IaaS providers via their provider APIs and provisions them over SSH; a provisioning service stores the configurations that the experimenter uploads via REST and that the cloud VMs fetch; the VMs run the CWB client library and report state and results to the server via REST.]

SLIDE 16

Benchmark Anatomy

[UML class diagram: a Benchmark Definition is associated with a Timeout and a Schedule.]

SLIDE 17

Benchmark Anatomy

[UML class diagram, complete: a Benchmark Definition is associated with a Timeout, a Schedule, one or more Cloud VM Configurations with their Provisioning Configurations, and a Result Model with a <<enum>> Result Type.]
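The anatomy above can be sketched as plain data types; this is an illustrative model, not CWB's actual schema (CWB itself is a Rails application):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ResultType(Enum):          # the <<enum>> Result Type from the diagram
    THROUGHPUT_KBS = "KB/s"
    LATENCY_MS = "ms"

@dataclass
class CloudVMConfiguration:
    instance_type: str           # e.g. "m1.small"
    provisioning: str            # reference to a provisioning configuration (hypothetical)

@dataclass
class BenchmarkDefinition:
    name: str
    schedule: str                # e.g. a cron expression such as "0 */4 * * *"
    timeout_minutes: int
    vm_configurations: List[CloudVMConfiguration] = field(default_factory=list)
    result_types: List[ResultType] = field(default_factory=list)

b = BenchmarkDefinition("fio-seq-write", "0 */4 * * *", timeout_minutes=60)
b.vm_configurations.append(CloudVMConfiguration("m1.small", "fio-provisioning"))
print(b.name, len(b.vm_configurations))
```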

SLIDE 18

Benchmark Execution

SLIDE 19

Case Study

  • Raw sequential write speed
  • HDD vs. SSD storage
  • Different instance types

SLIDE 20

Questions

  1. When do larger instance types perform better than smaller instance types?
  2. When should larger instance types be preferred over the better block storage type?
  3. How do instance types and block storage types influence performance variability?

SLIDE 21

Experiment Setup

  • Amazon EC2 Ireland (eu-west-1)
  • Ubuntu 14.04 (trusty)
  • FIO benchmark
  • June 20-23, 2014
  • 8-12 repetitions

  Instance Type   Price per Hour
  t1.micro        $0.020
  m1.small        $0.047
  m3.medium       $0.077
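Given the price table, the instance cost of an experiment follows directly; a small sketch using the prices from the setup (one-hour runs are an assumed example, the actual run length is not stated):

```python
# Hourly on-demand prices from the experiment setup (USD, EC2 eu-west-1, 2014)
PRICE_PER_HOUR = {"t1.micro": 0.020, "m1.small": 0.047, "m3.medium": 0.077}

def experiment_cost(instance_type: str, hours_per_run: float, repetitions: int) -> float:
    """Total instance cost across all repetitions of one benchmark."""
    return round(PRICE_PER_HOUR[instance_type] * hours_per_run * repetitions, 3)

# e.g. 12 one-hour repetitions on m1.small
print(experiment_cost("m1.small", hours_per_run=1, repetitions=12))  # 0.564
```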

SLIDE 22

Question 1: When do larger instance types perform better than smaller instance types?

[Bar chart: sequential write speed in KB/s (axis 1000-7000) for t1.micro, m1.small, and m3.medium on Standard EBS vs. General Purpose EBS (SSD), where EBS is Elastic Block Storage; the plotted bars range from roughly 750 to 6000 KB/s, annotated "+4x" and "+0x".]

SLIDE 23

Question 2: When should larger instance types be preferred over the better block storage type?

[Bar chart as on slide 22: sequential write speed in KB/s for t1.micro, m1.small, and m3.medium on Standard EBS vs. General Purpose EBS (SSD), annotated with relative gains of +30%, +60%, and +100% when choosing a larger instance type vs. the better block storage type.]

SLIDE 24

Question 3: How do instance types and block storage types influence performance variability?

[Line chart: write speed in KB/s (up to ~6000) over roughly 20 minutes for four configurations: m1.small + General Purpose EBS, m1.small + Standard EBS, t1.micro + General Purpose EBS, t1.micro + Standard EBS.]

SLIDE 25

Question 3: How do instance types and block storage types influence performance variability?

[Line chart as on slide 24, summarized below by the standard deviation s in % of the mean x̄.]

                        t1.micro      m1.small      m3.medium
  Standard EBS          20% (20-50%)  20% (10-20%)  30% (15-60%)
  General Purpose EBS   10% (20-40%)  10% (5-15%)   10% (5-10%)
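The variability measure used here, s in % of x̄, is the coefficient of variation; computing it from raw throughput samples is straightforward:

```python
from statistics import mean, stdev

def cv_percent(samples: list) -> float:
    """Coefficient of variation: sample standard deviation s
    as a percentage of the sample mean x̄."""
    return 100 * stdev(samples) / mean(samples)

# Five hypothetical KB/s measurements with mean 1000
print(round(cv_percent([1000, 1200, 800, 1100, 900])))  # 16
```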

SLIDE 26

Conclusion I

RQ1: How can common IaaS cloud benchmarks from literature be defined in a modular and portable manner?

  • Entirely define benchmarks by means of code
  • Apply common software engineering techniques
  • Make components and benchmarks configurable

SLIDE 27

Conclusion II

RQ2: How can benchmarks from RQ1 be periodically scheduled and reproducibly executed in cloud environments without manual interaction?

  • Use system utilities and existing tools to build a fully automated benchmark execution environment
  • Eliminate any error-prone human interactions threatening reproducibility

SLIDE 28

Outlook

  • Large-scale evaluation for a World Wide Web Conference paper
  • Extend CWB in a master project:
    • Web-based benchmarking studio
    • Support for the entire lifecycle