

SLIDE 1

HEURISTICS MARKOV HOST OVERLOAD DETECTION IMPLEMENTATION CONCLUSIONS

Energy-Efficient Management of Virtual Machines in Data Centers for Cloud Computing

Anton Beloglazov
Supervisor: Prof. Rajkumar Buyya
The Cloud Computing and Distributed Systems (CLOUDS) Lab, CIS Department, The University of Melbourne
PhD Completion Seminar


SLIDE 3

CLOUD DATA CENTERS

◮ Delivering computing resources on-demand over the Internet
◮ Hundreds of thousands of servers worldwide
◮ Amazon EC2 (2012)
  ◮ 450,000 servers [Liu, 2012]
  ◮ 9 regions + Sydney (2012)
◮ High energy consumption and CO2 emissions [Koomey, 2011]
  ◮ 2005-2010: 56% increase in energy consumption
  ◮ 2% of global CO2 emissions [Gartner, 2007]

[Photo: Google's data center [Google, 2012]]
[Figure: Worldwide data center energy consumption 2000-2010, billion kWh/year [Koomey, 2011]]


SLIDE 5

SOURCES OF ENERGY WASTE

[Figure: Server power consumption depending on the CPU utilization [Fan, 2007]]

1. Infrastructure efficiency
  ◮ Facebook's Oregon data center: PUE = 1.08 [Open Compute, 2012]
  ◮ 91% of energy is consumed by the computing resources
2. Resource utilization
  ◮ Average CPU utilization: < 50% [Barroso, 2007]
  ◮ Low server dynamic power range: 30% [Fan, 2007]

Solution – sleep mode: 450 W → 10 W in 300 ms

SLIDE 6

A TAXONOMY OF ENERGY-EFFICIENT COMPUTING

Power Management Techniques
◮ Static Power Management (SPM)
  ◮ Hardware Level [Devadas 1995], [Venkatachalam 2005]
    ◮ Circuit Level, Logic Level, Architectural Level
  ◮ Software Level [Ong 1994], [Givargis 2001]
◮ Dynamic Power Management (DPM)
  ◮ Hardware Level [Benini 2000]
  ◮ Software Level
    ◮ Single Server
      ◮ OS Level [Pallipadi 2006]
      ◮ Virtualization Level [Wei 2009], [Stoess 2007]
    ◮ Multiple Servers, Data Centers and Clouds


SLIDE 8

DYNAMIC CONSOLIDATION OF VIRTUAL MACHINES

[Figure: Layered architecture – users submit application requests; global resource managers perform SLA negotiation and VM provisioning; consumer, scientific, and business applications run in virtual machines on a virtualization layer (VMMs, local resource managers) over physical compute nodes, which are powered on and off]

◮ Adjusts the number of active hosts according to the resource demand
◮ Improves the power proportionality
◮ 2 basic processes
  ◮ VM consolidation
  ◮ VM deconsolidation
◮ [Nathuji 2007], [Raghavendra 2008], [Verma 2008], [Kusic 2009], [Hermenier 2009]

SLIDE 9

INFRASTRUCTURE AS A SERVICE – PROPERTIES

1. Large scale
  ◮ Amazon EC2: ≈450,000 servers
  ◮ Rackspace: ≈85,000 servers
  ◮ Scalability and fault-tolerance are required
2. Multiple independent users
  ◮ On-demand VM provisioning
  ◮ Full access and permissions
  ◮ VM provisioning time is unknown
3. Unknown mixed workloads
  ◮ Web and HPC applications
  ◮ The provider is unaware of the application workloads
4. Quality of Service (QoS) guarantees
  ◮ Currently, the performance in IaaS is not guaranteed
  ◮ Existing metrics: availability, response time, deadlines
  ◮ Workload-independent QoS metrics are required

SLIDE 10

RESEARCH QUESTIONS

1. How to define workload-independent QoS requirements?
2. When to migrate VMs?
3. Which VMs to migrate?
4. Where to migrate the VMs selected for migration?
5. When and which physical nodes to switch on/off?
6. How to provide scalability and fault-tolerance?

SLIDE 11

THESIS CONTRIBUTIONS

1. A taxonomy and survey of energy-efficient computing
  ◮ Advances in Computers 2011
2. Competitive analysis of dynamic VM consolidation
  ◮ CCPE 2012
3. Novel heuristics for dynamic VM consolidation
  ◮ FGCS 2012, CCPE 2012
4. The Markov host overload detection algorithm
  ◮ TPDS 2013
5. A software framework for dynamic VM consolidation
  ◮ SPE 2013 (in preparation)


SLIDE 13

OUTLINE

HEURISTICS: Distributed Approach; Workload Independent QoS; Dynamic VM Consolidation Heuristics
MARKOV HOST OVERLOAD DETECTION: Problem Definition; The Optimal Offline Algorithm; Markov Host Overload Detection (MHOD) Algorithm
IMPLEMENTATION: Framework for Dynamic VM Consolidation; Experimental Evaluation
CONCLUSIONS: Summary and Future Directions



SLIDE 16

DISTRIBUTED APPROACH: 4 SUB-PROBLEMS

1. Host underload detection
2. Host overload detection
3. VM selection
4. VM placement

Scalability and fault-tolerance → distribution and replication


SLIDE 18

OVERLOAD TIME FRACTION (OTF)

OTF(ut) = to(ut) / ta

◮ ut – the CPU utilization threshold distinguishing the non-overload and overload states of a host
◮ to – the time during which the host has been overloaded, which is a function of ut
◮ ta – the total time during which the host has been active
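The OTF definition above can be sketched in a few lines of Python; the trace representation, the fixed sampling step, and the function name are illustrative assumptions, not part of the original work:

```python
def otf(utilization_trace, u_t, step=1.0):
    """Overload Time Fraction: the share of active time the host spent
    at or above the utilization threshold u_t. Assumes a trace sampled
    at fixed intervals of `step` seconds (a hypothetical helper)."""
    t_a = len(utilization_trace) * step                      # total active time
    t_o = sum(step for u in utilization_trace if u >= u_t)   # overloaded time
    return t_o / t_a if t_a > 0 else 0.0

# 3 of 5 samples are at or above the 0.8 threshold:
print(otf([0.5, 0.9, 0.85, 0.7, 0.8], u_t=0.8))  # -> 0.6
```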

SLIDE 19

AGGREGATE OVERLOAD TIME FRACTION (AOTF)

AOTF(ut) = Σh∈H to(h, ut) / Σh∈H ta(h)

◮ ut – the CPU utilization threshold distinguishing the non-overload and overload states of a host
◮ H – the set of compute hosts
◮ h – a compute host
◮ to(h, ut) – the overload time of the host h, which is a function of ut
◮ ta(h) – the total activity time of the host h


SLIDE 21

HOST UNDERLOAD DETECTION

A simple algorithm for simulation purposes:

Input: Hosts, VMs
Output: A decision on whether the host is underloaded

1: Place all VMs from the current host on other hosts
2: if a feasible placement exists then
3:   return true
4: return false
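The feasibility check in the pseudocode above can be sketched as follows; the first-fit, CPU-only capacity model and all field names are illustrative assumptions (the actual check delegates to the VM placement algorithm):

```python
def is_underloaded(host, other_hosts):
    """Sketch of the underload check: the host is underloaded if all of
    its VMs can be placed (here by first-fit on spare CPU capacity) on
    the other active hosts. The data layout is an assumed illustration."""
    spare = [h['cpu_capacity'] - h['cpu_used'] for h in other_hosts]
    for vm_demand in sorted(host['vm_cpu_demands'], reverse=True):
        for i, s in enumerate(spare):
            if s >= vm_demand:
                spare[i] -= vm_demand  # tentatively place the VM here
                break
        else:
            return False   # no host fits this VM -> keep the host active
    return True            # a feasible placement exists -> underloaded

hosts = [{'cpu_capacity': 100, 'cpu_used': 60},
         {'cpu_capacity': 100, 'cpu_used': 80}]
print(is_underloaded({'vm_cpu_demands': [30, 10]}, hosts))  # -> True
```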

SLIDE 22

HOST OVERLOAD DETECTION

◮ A static CPU utilization threshold (THR)
◮ Adaptive threshold-based algorithms: adjust the threshold depending on the strength of deviation of the CPU utilization
  ◮ Median Absolute Deviation (MAD): un > 1 − s × MAD
  ◮ Interquartile Range (IQR): un > 1 − s × IQR
◮ Local regression-based algorithms: s × un+1 ≥ 1
  ◮ Local Regression (LR)
  ◮ Robust Local Regression (LRR)
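The MAD- and IQR-based adaptive thresholds can be sketched with the standard library; the function names, default safety parameters, and the choice of quantile method are assumptions for illustration:

```python
import statistics

def mad_threshold(history, s=2.5):
    """Adaptive utilization threshold via the Median Absolute Deviation:
    the host is treated as overloaded when utilization exceeds
    1 - s * MAD (sketch; `s` is the safety parameter from the slides)."""
    med = statistics.median(history)
    mad = statistics.median(abs(u - med) for u in history)
    return 1.0 - s * mad

def iqr_threshold(history, s=1.5):
    """Same idea with the interquartile range: threshold = 1 - s * IQR."""
    q = statistics.quantiles(history, n=4)  # [Q1, Q2, Q3]
    return 1.0 - s * (q[2] - q[0])
```

A stable history yields a threshold close to 1 (aggressive consolidation), while a volatile history lowers the threshold, leaving headroom for utilization spikes.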

SLIDE 23

VM SELECTION

◮ Minimum Migration Time (MMT)
  ◮ Estimate the VM migration time as RAM/BW
◮ Random Selection (RS)
  ◮ Randomly select a VM
◮ Maximum Correlation (MC)
  ◮ Select the VM that maximizes the multiple correlation coefficient
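The MMT policy reduces to a single minimum over the RAM/BW estimate; the VM record layout below is an assumed illustration:

```python
def select_vm_mmt(vms):
    """Minimum Migration Time selection sketch: the migration time is
    estimated as the RAM footprint over the available network bandwidth,
    and the VM with the smallest estimate is chosen (field names assumed)."""
    return min(vms, key=lambda vm: vm['ram_mb'] / vm['bw_mbps'])

vms = [{'name': 'a', 'ram_mb': 4096, 'bw_mbps': 100},
       {'name': 'b', 'ram_mb': 1024, 'bw_mbps': 100}]
print(select_vm_mmt(vms)['name'])  # -> b
```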

SLIDE 24

ALGORITHMS: VM PLACEMENT

◮ A modification of the Best Fit Decreasing (BFD) algorithm, which uses no more than (11/9 × OPT + 1) bins [Yue, 1991]
◮ Extensions:
  ◮ A constraint on the amount of RAM required by the VMs
  ◮ An inactive host is only activated when a VM cannot be placed on one of the already active hosts
◮ The worst-case complexity: (n + m/2) × m
  ◮ n – the number of physical nodes
  ◮ m – the number of VMs to be placed
  ◮ The worst case occurs when every VM to be placed requires a new inactive host to be activated
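A sketch of the modified BFD placement under the two extensions above; the host/VM data layout, the tightest-fit tie-breaking, and the error handling are illustrative assumptions:

```python
def place_vms_bfd(vms, hosts):
    """Modified Best Fit Decreasing sketch: VMs are sorted by decreasing
    CPU demand, and each is put on the active host with the least
    remaining CPU that still fits it (and has enough RAM); an inactive
    host is activated only when no active host fits."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v['cpu'], reverse=True):
        candidates = [h for h in hosts
                      if h['active']
                      and h['cpu_free'] >= vm['cpu']
                      and h['ram_free'] >= vm['ram']]
        if not candidates:  # activate an inactive host as a last resort
            candidates = [h for h in hosts
                          if not h['active']
                          and h['cpu_free'] >= vm['cpu']
                          and h['ram_free'] >= vm['ram']]
        if not candidates:
            raise RuntimeError('no feasible placement for ' + vm['name'])
        best = min(candidates, key=lambda h: h['cpu_free'])  # tightest fit
        best['active'] = True
        best['cpu_free'] -= vm['cpu']
        best['ram_free'] -= vm['ram']
        placement[vm['name']] = best['name']
    return placement

hosts = [{'name': 'h1', 'active': True, 'cpu_free': 50, 'ram_free': 4096},
         {'name': 'h2', 'active': False, 'cpu_free': 100, 'ram_free': 8192}]
vms = [{'name': 'vm1', 'cpu': 40, 'ram': 1024},
       {'name': 'vm2', 'cpu': 30, 'ram': 1024}]
print(place_vms_bfd(vms, hosts))  # -> {'vm1': 'h1', 'vm2': 'h2'}
```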

SLIDE 25

PERFORMANCE METRICS

AOTF(ut) = Σh∈H to(h, ut) / Σh∈H ta(h)
PDM = (1/M) × Σj=1..M (Cdj / Crj)
SLAV = AOTF × PDM
ESV = E × SLAV

◮ PDM – Performance Degradation due to Migrations
◮ M – the number of VMs
◮ Cdj – the estimate of the performance degradation of the VM j caused by migrations
◮ Crj – the total CPU capacity requested by the VM j
◮ SLAV – SLA Violation
◮ ESV – the Energy-SLA Violation combined metric
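The metric definitions above combine mechanically; a sketch with assumed field names:

```python
def sla_metrics(aotf, vms, energy_kwh):
    """Combined metrics from the slide's definitions: PDM averages the
    per-VM performance degradation Cd over the requested capacity Cr,
    SLAV = AOTF * PDM, and ESV = E * SLAV (field names assumed)."""
    pdm = sum(vm['cd'] / vm['cr'] for vm in vms) / len(vms)
    slav = aotf * pdm
    esv = energy_kwh * slav
    return pdm, slav, esv

pdm, slav, esv = sla_metrics(0.1, [{'cd': 1.0, 'cr': 10.0},
                                   {'cd': 3.0, 'cr': 10.0}], 100.0)
print(pdm, slav, esv)  # approximately: PDM = 0.2, SLAV = 0.02, ESV = 2.0
```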

SLIDE 26

EXPERIMENTAL SETUP

◮ Simulation using CloudSim [Calheiros, 2011]
◮ Power consumption data from SPECpower:
  ◮ 400 × HP ProLiant ML110 G4 (2 cores × 1860 MHz, 4 GB)
  ◮ 400 × HP ProLiant ML110 G5 (2 cores × 2660 MHz, 4 GB)
◮ VM CPU utilization traces from PlanetLab [Park, 2006]
  ◮ Collected every 5 minutes during 10 days of 03-04/2011
  ◮ 898-1516 VMs per day

SLIDE 27

SIMULATION RESULTS: ESV

[Figure: ESV (×0.001) for the algorithm combinations THR/IQR/MAD/LRR/LR × MMT/MC/RS with their tuned parameter values]

SLIDE 28

SIMULATION RESULTS: SUMMARY

Simulation results of the best algorithm combinations and benchmark algorithms (median values):

Policy        ESV (×10−3)  Energy (kWh)  SLAV (×10−5)
DVFS          –            613.6         –
THR-MMT-1.0   20.12        75.36         25.78
THR-MMT-0.8   4.19         89.92         4.57
IQR-MMT-1.5   4.00         90.13         4.51
MAD-MMT-2.5   3.94         87.67         4.48
LRR-MMT-1.2   2.43         87.93         2.77
LR-MMT-1.2    1.98         88.17         2.33

SLIDE 29

CONCLUSIONS

1. Dynamic VM consolidation algorithms significantly outperform static allocation policies, such as DVFS
2. The MMT policy produces better results compared to the MC and RS policies: minimization of the VM migration time is more important than the correlation
3. Host overload detection algorithms based on local regression outperform the threshold-based algorithms due to a decreased level of SLA violations and a lower number of VM migrations


SLIDE 31

HOST OVERLOAD DETECTION

◮ Host overload detection has a direct influence on the QoS, since host overloads cause resource shortages and performance degradation of applications
◮ Current algorithms have no direct control over the QoS, only indirect control by tuning the algorithm parameters
◮ Overload detection is done by each host independently

SLIDE 32

QUALITY OF DYNAMIC VM CONSOLIDATION

H = (1/n) Σi=1..n ai → min

◮ H – the mean number of active hosts over n time steps
◮ ai – the number of active hosts at the time step i = 1, 2, . . . , n
◮ A lower value of H represents a better quality of VM consolidation

SLIDE 33

QUALITY OF DYNAMIC VM CONSOLIDATION

E[H*] ∝ (np² / 2E[T]) × (1 + n/E[T]), therefore E[T] → max

◮ E[H*] – the mean number of active hosts switched on due to VM migrations initiated by the host overload detection algorithm over n time steps
◮ p – the probability that an extra host has to be activated to migrate a VM from an overloaded host
◮ E[T] – the expected time between migrations from overloaded hosts

SLIDE 34

THE HOST OVERLOAD DETECTION PROBLEM

maximize ta(tm), subject to to(tm) / ta(tm) ≤ M

◮ The problem is limited to a single VM migration
◮ ta(tm) – the time until migration, which is a function of tm
◮ tm – the VM migration start time
◮ to(tm) – the time during which the host has been overloaded, which is a function of tm and ut
◮ M – the limit on the maximum allowed OTF value, which is a QoS goal expressed in terms of OTF


SLIDE 36

THE OPTIMAL OFFLINE ALGORITHM

Input: A system state history
Input: M, the maximum allowed OTF
Output: A VM migration time

1: while history is not empty do
2:   if OTF of history ≤ M then
3:     return the time of the last history state
4:   else
5:     drop the last state from history
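The optimal offline algorithm translates directly into code; the sketch below assumes a fixed-step utilization trace and a threshold-based overload test, which are illustrative choices, not part of the original pseudocode:

```python
def optimal_migration_time(history, m_limit, u_t=0.8, step=1.0):
    """Sketch of the optimal offline algorithm: walk the utilization
    history backwards and return the latest time at which the OTF
    constraint m_limit is still satisfied (argument names assumed).
    Returns None when no prefix satisfies the constraint."""
    states = list(history)
    while states:
        t_a = len(states) * step
        t_o = sum(step for u in states if u >= u_t)
        if t_o / t_a <= m_limit:       # OTF of this prefix is within the limit
            return len(states) * step  # time of the last kept state
        states.pop()                   # drop the last state and retry
    return None

# With a 0.35 OTF limit, the last two overloaded samples are dropped:
print(optimal_migration_time([0.5, 0.6, 0.9, 0.9, 0.9], 0.35))  # -> 3.0
```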


SLIDE 38

THE HOST MODEL

◮ Consider the time period until the first VM migration
◮ States are assigned to N CPU utilization intervals
  ◮ E.g., a host is overloaded if the CPU utilization is ≥ 80%
  ◮ The state space S of the DTMC then contains 2 states
    ◮ State 1: [0%, 80%)
    ◮ State 2: [80%, 100%]
◮ Assuming the workload is known, a matrix of transition probabilities P can be estimated for i, j ∈ S:

pij = cij / Σk∈S cik

◮ cij – the number of transitions between states i and j
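The estimate pij = cij / Σk cik can be sketched as follows; this is a simplified stand-in for the Multisize Sliding Window estimation used by MHOD, with assumed names:

```python
from collections import Counter

def transition_matrix(state_sequence, n_states):
    """Maximum-likelihood estimate of the DTMC transition probabilities:
    p_ij = c_ij / sum_k c_ik, where c_ij counts the observed i -> j
    transitions in the state sequence."""
    counts = Counter(zip(state_sequence, state_sequence[1:]))
    p = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        row_total = sum(counts[(i, j)] for j in range(n_states))
        for j in range(n_states):
            p[i][j] = counts[(i, j)] / row_total if row_total else 0.0
    return p

# Two states: 0 = non-overloaded (< 80% CPU), 1 = overloaded (>= 80%)
seq = [0, 0, 1, 0, 0, 1, 1, 0]
print(transition_matrix(seq, 2))
```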

SLIDE 39

THE HOST MODEL

◮ To model VM migrations, we add an absorbing state
◮ A state k is absorbing if no other state can be reached from it, i.e., pkk = 1
◮ The resulting extended state space is S* = S ∪ {(N + 1)}
◮ The control policy is then represented by the transition probabilities to the absorbing state (N + 1)

SLIDE 40

THE HOST MODEL

◮ The extended matrix of transition probabilities P*:

P* = | p*11 · · · p*1N  m1 |
     |  .   . .   .     .  |
     | p*N1 · · · p*NN  mN |
     | 0    · · · 0     1  |

p*ij = pij(1 − mi), ∀i, j ∈ S

◮ Closed-form equations for the expected time until absorption spent in each state, L1(∞), L2(∞), . . . , LN(∞), can be obtained, where the unknowns are the required m1, m2, . . . , mN

SLIDE 41

THE OPTIMIZATION PROBLEM

maximize Σi∈S Li(∞), subject to (Tm + LN(∞)) / (Tm + Σi∈S Li(∞)) ≤ M

◮ Li(∞) – the expected time until absorption spent in the state i
◮ LN(∞) – the expected time until absorption spent in the overload state N
◮ Tm – the VM migration time
◮ M – the limit on the maximum allowed OTF value

SLIDE 42

THE CONTROL POLICY

◮ The solution of the optimization problem is the set of probabilities of transitions to the absorbing state (N + 1): m1, m2, . . . , mN
◮ A VM is migrated with the probability mi, where i ∈ S is the current state
◮ The control policy is deterministic if:
  ∃k ∈ S : mk = 1 and ∀i ∈ S, i ≠ k : mi = 0
◮ Otherwise, the policy is randomized

SLIDE 43

THE MHOD-OPT ALGORITHM

Input: Transition probabilities
Output: A decision on whether to migrate a VM

1: Build the objective and constraint functions
2: Invoke the brute-force search to find the m vector
3: if a feasible solution exists then
4:   Extract the VM migration probability
5:   if the probability is < 1 then
6:     return false
7: return true

SLIDE 44

THE MHOD ALGORITHM

Input: A CPU utilization history
Output: A decision on whether to migrate a VM

1: if the CPU utilization history size > Tl then
2:   Convert the last CPU utilization value to a state
3:   Invoke the Multisize Sliding Window estimation [Luiz, 2010] to obtain transition probability estimates
4:   Invoke the MHOD-OPT algorithm
5:   return the decision returned by MHOD-OPT
6: return false

SLIDE 45

EXPERIMENTAL SETUP

◮ Simulated a single host: 4 cores × 3 GHz
◮ 4 VM instance types: 1.7 GHz, 2 GHz, 2.4 GHz, and 3 GHz
◮ VM CPU utilization traces from PlanetLab [Park, 2006]
  ◮ Collected every 5 minutes during 10 days of 03-04/2011
◮ 100 different sets of VMs
  ◮ The maximum OTF after the first 30 time steps is 10%
  ◮ The minimum overall OTF is 20%
◮ A simulation is run until the first VM migration

SLIDE 46

SIMULATION RESULTS: OTF

[Figure: Resulting OTF value (0-50%) for each algorithm: THR-80/90/100, MAD-2.0/3.0, IQR-1.0/2.0, LR-1.05/0.95/0.85, LRR-1.05/0.95/0.85, MHOD-10/20/30, and OPT-10/20/30]

SLIDE 47

SIMULATION RESULTS: TIME UNTIL A MIGRATION

[Figure: Time until a migration (×1000 s) for each algorithm: the THR, MAD, IQR, LR, LRR, MHOD, and OPT variants]

SLIDE 48

SIMULATION RESULTS: MHOD VS LRR

Paired T-tests for comparing the time until a migration:

Alg. 1 (×10³)   Alg. 2 (×10³)   Diff. (×10³)        p-value
MHOD (39.64)    LR (44.29)      4.65 (2.73, 6.57)   < 0.001
MHOD (39.23)    LRR (44.23)     5.00 (3.09, 6.91)   < 0.001

SLIDE 49

SIMULATION RESULTS: MHOD VS OPT

        OPT      MHOD     Difference             p-value
OTF     18.31%   18.25%   0.06% (−0.03, 0.15)    0.226
Time    45,767   41,128   4,639 (3617, 5661)     < 0.001

◮ Relative to OPT, the time until a migration produced by the MHOD algorithm amounts to 88.02%, with 95% CI: (86.07%, 89.97%)

SLIDE 50

CONCLUSIONS

1. MHOD on average provides approximately 88% of the time until a VM migration produced by OPT
2. MHOD leads to an approximately 11% shorter time until a migration than the LRR algorithm, while satisfying the QoS constraints
3. The MHOD algorithm enables explicit specification of a desired QoS goal through the OTF parameter, which is successfully met by the resulting value of the OTF metric


SLIDE 52

DYNAMIC VM CONSOLIDATION FRAMEWORK

◮ An extension for the OpenStack Cloud platform
  ◮ Supported by the industry: Rackspace, NASA, IBM, etc.
◮ Scalable and fault-tolerant: loose coupling + replication
◮ The framework transparently attaches to existing OpenStack deployments with no configuration changes
  ◮ Interaction with OpenStack through public APIs
◮ Configuration-based substitution of algorithms
◮ Open source, released under the Apache 2.0 license: http://openstack-neat.org/
  ◮ Neat – (adjective) arranged in an orderly, tidy way

SLIDE 53

FRAMEWORK COMPONENTS

[Figure: OpenStack Neat framework components]

SLIDE 54

THE GLOBAL MANAGER: UNDERLOAD

[Figure: Global manager handling of a host underload]

SLIDE 55

THE GLOBAL MANAGER: OVERLOAD

[Figure: Global manager handling of a host overload]

SLIDE 56

THE LOCAL MANAGER

[Figure: Local manager workflow]

SLIDE 57

IDLE TIME FRACTION (ITF)

ITF = ti / ta
AITF = Σh∈H ti(h) / Σh∈H ta(h)

◮ ti – the time during which the host has been idle
◮ ta – the total time during which the host has been active
◮ H – the set of compute hosts
◮ h – a compute host

SLIDE 58

DYNAMIC VM CONSOLIDATION ALGORITHMS

◮ Host underload detection
  ◮ The averaging threshold-based underload detection algorithm
◮ Host overload detection
  ◮ MAX-ITF – a baseline algorithm, which never detects host overloads, leading to the maximum ITF
  ◮ THR – the averaging threshold-based algorithm
  ◮ LRR – the robust local regression algorithm
  ◮ MHOD – pending
◮ VM selection
  ◮ The minimum migration time, maximum CPU utilization algorithm
◮ VM placement
  ◮ The BFD-based algorithm with CPU utilization averaging


SLIDE 60

EXPERIMENTAL SETUP

◮ 4 compute nodes, 1 controller node
  ◮ 4 × IBM System x3200 M3 (8 threads × 2800 MHz, 4 GB)
  ◮ 1 × Dell Optiplex 745 (2 threads × 2400 MHz, 2 GB)
◮ 28 VMs
  ◮ 1 virtual CPU, 128 MB RAM
  ◮ Ubuntu 12.04 Cloud Image
◮ VM CPU utilization traces from PlanetLab [Park, 2006]
  ◮ Collected every 5 minutes during 10 days of 03-04/2011
  ◮ At least 10% of the time, the CPU utilization is lower than 20%
  ◮ At least 10% of the time, the CPU utilization is higher than 80%
◮ Each experiment is 24 hours long and is repeated 3 times
◮ No sleep mode – energy savings are estimated from the AITF

SLIDE 61

EXPERIMENTAL RESULTS: AOTF

[Figure: AOTF (10-60%) for THR-0.8, LRR-1.0, LRR-0.9, and MAX-ITF]

SLIDE 62

EXPERIMENTAL RESULTS: AITF

[Figure: AITF (40-75%) for THR-0.8, LRR-1.0, LRR-0.9, and MAX-ITF]

SLIDE 63

EXPERIMENTAL RESULTS: SUMMARY

◮ Server power consumption [Meisner, 2009]:
  ◮ 450 W – the fully utilized state
  ◮ 270 W – the idle state
  ◮ 10.4 W – the sleep mode

Algorithm   AOTF                AITF                Energy savings
THR-0.8     12.4% (10.3, 14.5)  38.7% (38.1, 39.3)  26.80%
LRR-1.0     15.7% (14.6, 16.9)  43.3% (32.0, 54.5)  30.36%
LRR-0.9     26.8% (20.6, 33.1)  47.6% (45.7, 49.5)  32.98%
MAX-ITF     38.7% (0.0, 77.5)   57.6% (21.9, 93.3)  40.96%

SLIDE 64

CONCLUSIONS

◮ OpenStack Neat is transparent to the base OpenStack installation, interacting with it through the public APIs
◮ The framework can be customized to use various implementations of VM consolidation algorithms
◮ On a 4-node testbed, the energy consumption has been reduced by up to 30% with a limited application performance impact of 15% OTF
◮ Iterations of the components take a fraction of a second
◮ The request processing of the global manager takes on average 20 to 40 seconds, required for VM migration
◮ OpenStack Neat is released under the Apache 2.0 license: http://openstack-neat.org/


SLIDE 66

CONCLUSIONS

◮ Dynamic VM consolidation significantly reduces energy consumption by adjusting the number of active servers
◮ Scalability and fault-tolerance are crucial in large-scale IaaS
◮ The proposed approach is distributed, scalable, and efficient in managing the energy-performance trade-off
◮ The proposed approach allows the system administrator to explicitly specify workload-independent QoS constraints
◮ On a 4-node testbed, the estimated energy savings are up to 30% with a limited performance impact
◮ The implemented OpenStack Neat framework is open source: http://openstack-neat.org/

SLIDE 67

FUTURE DIRECTIONS

◮ Replicated Global Managers
  ◮ Achieving the complete distribution
◮ VM Network Topologies
  ◮ Taking into account network communication between VMs
◮ Thermal-Aware Dynamic VM Consolidation
  ◮ Taking into account the server temperature and cooling
◮ Dynamic and Heterogeneous SLAs
  ◮ Handling per-user SLAs, which may vary over time
◮ Power Capping
  ◮ Constraining the overall power consumption by servers

SLIDE 68

SELECTED PUBLICATIONS

1. Anton Beloglazov, Rajkumar Buyya, Young Choon Lee, and Albert Zomaya, "A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems," Advances in Computers, Marvin V. Zelkowitz (editor), 82:47-111, 2011
2. Anton Beloglazov, Jemal Abawajy, and Rajkumar Buyya, "Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing," Future Generation Computer Systems (FGCS), 28(5):755-768, 2012
3. Anton Beloglazov and Rajkumar Buyya, "Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers," Concurrency and Computation: Practice and Experience (CCPE), 24(13):1397-1420, 2012
4. Anton Beloglazov and Rajkumar Buyya, "Managing Overloaded Hosts for Dynamic Consolidation of Virtual Machines in Cloud Data Centers Under Quality of Service Constraints," IEEE Transactions on Parallel and Distributed Systems (TPDS), 2013 (in press, accepted on August 2, 2012)
5. Anton Beloglazov and Rajkumar Buyya, "OpenStack Neat: A Framework for Dynamic Consolidation of Virtual Machines in OpenStack Clouds," Software: Practice and Experience (SPE), 2013 (in preparation)

slide-69
SLIDE 69


ACKNOWLEDGEMENTS

◮ Supervisor: Prof. Rajkumar Buyya
◮ PhD committee:
  ◮ Prof. Chris Leckie
  ◮ Dr. Rodrigo Calheiros
  ◮ Dr. Saurabh Garg
◮ Past and current members of the CLOUDS Laboratory
◮ Friends and colleagues from the CSSE/CIS department

slide-70
SLIDE 70


MORE INFORMATION

◮ Thesis: http://beloglazov.info/thesis.pdf
◮ Slides: http://beloglazov.info/thesis-slides.pdf
◮ More information and publications: http://beloglazov.info

Thank you all for coming! Any questions?

slide-71
SLIDE 71

References Wishlist Heuristics Markov Host Overload Detection Implementation

APPENDIX


slide-72
SLIDE 72


REFERENCES I

◮ Google. Google data centers. http://www.google.com/datacenters/
◮ H. Liu. Amazon data center size. http://huanliu.wordpress.com/2012/03/13/amazon-data-center-size/
◮ J. G. Koomey. Growth in data center electricity use 2005 to 2010. Analytics Press, 2011

slide-73
SLIDE 73


REFERENCES II

◮ Gartner, Inc. Gartner estimates ICT industry accounts for 2 percent of global CO2 emissions. http://www.gartner.com/it/page.jsp?id=503867
◮ The Open Compute Project. Energy efficiency. http://opencompute.org/about/energy-efficiency/
◮ X. Fan, W. D. Weber, and L. A. Barroso. Power provisioning for a warehouse-sized computer. 34th Annual International Symposium on Computer Architecture (ISCA), 2007

slide-74
SLIDE 74


REFERENCES III

◮ L. A. Barroso and U. Hölzle. The case for energy-proportional computing. Computer, 40(12):33–37, 2007
◮ K. S. Park and V. S. Pai. CoMon: a mostly-scalable monitoring system for PlanetLab. ACM SIGOPS Operating Systems Review, 40(1):65–74, 2006
◮ R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, and R. Buyya. CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms. Software: Practice and Experience, 41(1):23–50, 2011

slide-75
SLIDE 75


REFERENCES IV

◮ D. Meisner, B. T. Gold, and T. F. Wenisch. PowerNap: Eliminating server idle power. ACM SIGPLAN Notices, 44(3):205–216, 2009
◮ M. Yue. A simple proof of the inequality FFD(L) ≤ 11/9 OPT(L) + 1, for all L for the FFD bin-packing algorithm. Acta Mathematicae Applicatae Sinica, 7(4):321–331, 1991
◮ S. O. D. Luiz, A. Perkusich, and A. M. N. Lima. Multisize Sliding Window in Workload Estimation for Dynamic Power Management. IEEE Transactions on Computers, 59(12):1625–1639, 2010

slide-76
SLIDE 76


I WISH I USED * FROM THE BEGINNING OF MY PHD

◮ Linux / Xmonad
◮ Emacs
◮ LaTeX
◮ R
◮ Python (more productive than Java)
◮ Git
◮ Haskell – still have not started using...

slide-77
SLIDE 77


SIMULATION RESULTS: ENERGY

[Bar chart: energy consumption (kWh) for the fifteen algorithm combinations LR-MMT-1.2, LR-MC-1.3, LR-RS-1.3, LRR-MMT-1.2, LRR-MC-1.2, LRR-RS-1.3, MAD-MMT-2.5, MAD-MC-2.5, MAD-RS-2.5, IQR-MMT-1.5, IQR-MC-1.5, IQR-RS-1.5, THR-MMT-0.8, THR-MC-0.8, THR-RS-0.8]

slide-78
SLIDE 78


SIMULATION RESULTS: SLAV

[Bar chart: SLA violations (SLAV, ×0.00001) for the fifteen combinations of overload detection (LR, LRR, MAD, IQR, THR) and VM selection (MMT, MC, RS) policies]

slide-79
SLIDE 79


SIMULATION RESULTS: ESV

[Bar chart: the combined energy and SLA violations metric (ESV, ×0.001) for the fifteen combinations of overload detection (LR, LRR, MAD, IQR, THR) and VM selection (MMT, MC, RS) policies]

slide-80
SLIDE 80


SIMULATION RESULTS: VM MIGRATIONS

[Bar chart: the number of VM migrations (×1000) for the fifteen combinations of overload detection (LR, LRR, MAD, IQR, THR) and VM selection (MMT, MC, RS) policies]

slide-81
SLIDE 81


SIMULATION RESULTS: MHOD VS LRR

[Bar charts comparing MHOD and LRR for target mean OTF values of 9.0%, 17.4%, and 26.3%: the resulting OTF value (%) and the time until a migration (×1000 s)]

slide-82
SLIDE 82


ALGORITHMS: HOST UNDERLOAD DETECTION

The averaging threshold-based underload detection algorithm:
Input: threshold, n, utilization
Output: Whether the host is underloaded

1: if utilization is not empty then
2:     utilization ← last n values of utilization
3:     meanUtilization ← sum(utilization) / len(utilization)
4:     return meanUtilization ≤ threshold
5: return false
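The listing above translates almost line for line into Python. The sketch below is illustrative (the function name and argument order are mine, not taken from the OpenStack Neat sources):

```python
def is_underloaded(threshold, n, utilization):
    """Averaging threshold-based host underload detection.

    utilization: list of historical CPU utilization values in [0, 1].
    The mean of the last n values is compared against the threshold.
    """
    if not utilization:           # no history yet: not underloaded
        return False
    window = utilization[-n:]     # last n measurements
    mean = sum(window) / len(window)
    return mean <= threshold
```

For example, with threshold 0.3, n = 3, and history [0.9, 0.2, 0.25, 0.2], the mean of the last three values is about 0.22, so the host is reported as underloaded.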


slide-83
SLIDE 83


ALGORITHMS: VM SELECTION

The minimum migration time, maximum CPU utilization VM selection algorithm:
Input: n, vmsCpuMap, vmsRamMap
Output: A VM to migrate

1: minRam ← min(values of vmsRamMap)
2: maxCpu ← 0
3: selectedVm ← None
4: for vm, cpu in vmsCpuMap do
5:     if vmsRamMap[vm] > minRam then
6:         continue
7:     vals ← last n values of cpu
8:     mean ← sum(vals) / len(vals)
9:     if maxCpu < mean then
10:        maxCpu ← mean
11:        selectedVm ← vm
12: return selectedVm
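In Python the same selection logic can be sketched as follows (identifiers follow the pseudocode; the function name is mine). Among the VMs with the minimum RAM footprint, i.e. the shortest expected migration time, it picks the one with the highest mean CPU utilization over the last n measurements:

```python
def select_vm(n, vms_cpu_map, vms_ram_map):
    """Minimum migration time / maximum CPU utilization VM selection.

    vms_cpu_map: {vm: [CPU utilization history]}
    vms_ram_map: {vm: allocated RAM}
    Returns the VM to migrate, or None if vms_cpu_map is empty.
    """
    min_ram = min(vms_ram_map.values())   # smallest RAM -> fastest migration
    max_cpu = 0.0
    selected_vm = None
    for vm, cpu in vms_cpu_map.items():
        if vms_ram_map[vm] > min_ram:     # only consider minimum-RAM VMs
            continue
        window = cpu[-n:]                 # last n CPU measurements
        mean = sum(window) / len(window)
        if max_cpu < mean:                # keep the most CPU-hungry candidate
            max_cpu = mean
            selected_vm = vm
    return selected_vm
```

For instance, given two 512 MB VMs with mean utilizations 0.3 and 0.85 and one 1024 MB VM, the 1024 MB VM is skipped and the busier 512 MB VM is selected.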


slide-84
SLIDE 84


EXPERIMENTAL RESULTS: AOTF

[Bar chart: the resulting OTF metric (10%–60%) for the algorithms MAX-ITF, LRR-0.9, LRR-1.0, and THR-0.8]

slide-85
SLIDE 85


EXPERIMENTAL RESULTS: AITF

[Bar chart: the resulting ITF metric (40%–75%) for the algorithms MAX-ITF, LRR-0.9, LRR-1.0, and THR-0.8]

slide-86
SLIDE 86


EXPERIMENTAL RESULTS: VM MIGRATIONS

[Bar chart: the number of VM migrations (20–180) for the algorithms MAX-ITF, LRR-0.9, LRR-1.0, and THR-0.8]

slide-87
SLIDE 87


EXPERIMENTAL RESULTS: ENERGY CONSUMPTION

◮ Server power consumption [Meisner, 2009]:
  ◮ 450 W – the fully utilized state
  ◮ 270 W – the idle state
  ◮ 10.4 W – the sleep mode
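These figures suggest a simple linear interpolation between the idle and fully utilized power, in the spirit of Fan et al. (cited in the references). The sketch below, including the sleep-mode handling, default parameters, and function name, is illustrative rather than taken from the thesis:

```python
def server_power(utilization, p_idle=270.0, p_max=450.0, p_sleep=10.4):
    """Estimate server power draw in watts.

    utilization: CPU utilization in [0, 1], or None for a host
    switched to the sleep mode. Power grows linearly from the
    idle draw to the fully utilized draw.
    """
    if utilization is None:       # host in the sleep mode
        return p_sleep
    return p_idle + (p_max - p_idle) * utilization
```

With these defaults, an idle host draws 270 W, a fully loaded one 450 W, and a host at 50% utilization 360 W, which shows why consolidating VMs and putting idle hosts to sleep (10.4 W) dominates the savings.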

Algorithm   Energy, kWh   Base energy, kWh   Energy savings
THR-0.8        25.18           34.39             26.80%
LRR-1.0        23.51           33.76             30.36%
LRR-0.9        22.23           33.16             32.98%
MAX-ITF        18.76           31.78             40.96%
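The savings column follows from savings = (base − actual) / base; recomputing it from the two energy columns reproduces the reported percentages to within rounding (the helper name below is mine):

```python
def energy_savings(actual_kwh, base_kwh):
    """Fractional energy savings relative to the baseline energy."""
    return (base_kwh - actual_kwh) / base_kwh

# Values from the table above, e.g.:
# energy_savings(25.18, 34.39) ≈ 0.268 -> 26.8% for THR-0.8
# energy_savings(18.76, 31.78) ≈ 0.410 -> 41.0% for MAX-ITF
```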