Computing Load Aware and Long-View Load Balancing for Cluster Storage Systems (PowerPoint PPT Presentation)



SLIDE 1

Computing Load Aware and Long-View Load Balancing for Cluster Storage Systems

Guoxin Liu, Haiying Shen, and Haoyu Wang
Holcombe Department of Electrical and Computer Engineering, Clemson University
Presented by Haoyu Wang

SLIDE 2

Outline

1. Introduction
2. System design
3. Performance evaluation
4. Conclusion

SLIDE 3

Introduction

Background (Clemson Palmetto Clusters)

Load Balancing Problem
– I/O load
– Data storage
– ...

Why not consider the computing workload?

SLIDE 4

Introduction

Previous work

  • Challenges for load balancing
    – Data locality
    – Task delay
    – Long-term load balance
    – Cost-efficient & scalable

  • Related work
    – Random data allocation
    – Balancing the number of data blocks
    – Balancing the I/O load

SLIDE 5

System Design

Main contribution

1. Trace analysis on computing workloads
2. Computing load aware long-view load balancing method
3. Trace-driven experiments

SLIDE 6

System Design

Trace Data Analysis

[Figure: CDF of task running time (s)]

SLIDE 7

System Design

Trace Data Analysis

[Figures: CDF of task running time (s); CDF of the number of currently submitted tasks]

SLIDE 8

System Design

Trace Data Analysis

[Figures: CDF of task running time (s); CDF of the number of currently submitted tasks; CDF of the number of currently submitted tasks from different jobs]

SLIDE 9

System Design

Trace Data Analysis

[Figures: CDF of task running time (s); CDF of the number of currently submitted tasks; CDF of the number of currently submitted tasks from different jobs; CDF of the number of data transmissions of a server]
SLIDE 10

System Design

Trace Data Analysis

[Figures: CDF of task running time (s); CDF of the number of currently submitted tasks; CDF of the number of currently submitted tasks from different jobs; CDF of the number of data transmissions of a server; CDF of the waiting time of a task (s)]

SLIDE 11

System Design

CALV System Overview

Coefficient-based data reallocation

Principle 1: The data blocks contributing more computing workload at more overloaded epochs, in both the spatial and temporal dimensions, have a higher priority to be selected for reallocation.

Principle 2: Among all data blocks contributing workload at an overloaded epoch, those that contribute less workload at more underloaded epochs have a higher priority to be selected for reallocation.
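The two principles can be sketched as a per-block priority score. The slides do not give CALV's actual coefficient formula, so this is a hypothetical illustration: `block_load`, the epoch sets, and the over-minus-under scoring are all illustrative assumptions.

```python
def reallocation_priority(block_load, overloaded, underloaded):
    """Rank a server's data blocks for reallocation.

    block_load  -- dict: epoch -> computing workload the block contributes
    overloaded  -- set of epochs where the server is overloaded
    underloaded -- set of epochs where the server is underloaded

    Principle 1: more workload at more overloaded epochs -> higher priority.
    Principle 2: less workload at more underloaded epochs -> higher priority
                 (moving such a block barely affects the underloaded epochs).
    """
    over = sum(block_load.get(e, 0) for e in overloaded)
    under = sum(block_load.get(e, 0) for e in underloaded)
    # Contribution to overloaded epochs raises priority; contribution to
    # underloaded epochs lowers it.
    return over - under

# Example: block A drives the overload, block B mostly serves underloaded epochs.
overloaded, underloaded = {1, 2}, {3}
blocks = {
    "A": {1: 5, 2: 4, 3: 0},
    "B": {1: 1, 3: 6},
}
ranked = sorted(blocks,
                key=lambda b: reallocation_priority(blocks[b], overloaded, underloaded),
                reverse=True)
print(ranked)  # ['A', 'B'] -- A is selected for reallocation first
```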

SLIDE 12

System Design

CALV System Overview

Coefficient-based data reallocation

Selection of data blocks to reallocate

[Diagram: servers Si, Sj, and Sk holding data blocks d1–d7 across epochs e1–e3; a bar marks each server's computing capacity]
(a) Reduce the number of reported data blocks in the spatial space
(b) Reduce the number of reported data blocks in the temporal space
(c) Avoid server underload

SLIDE 13

System Design

CALV System Overview

Lazy Data Block Transmission

[Diagram: servers Si and Sj holding data blocks across epochs e1–e4; a bar marks each server's computing capacity]
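The lazy-transmission idea can be sketched as deferring each reallocated block's transfer until just before its first use at the destination server. This is a sketch under assumptions: the `needed_at` schedule and the one-epoch-ahead rule are illustrative, not the slide's exact mechanism.

```python
def lazy_transmission_schedule(reallocated, needed_at):
    """Defer each block transfer to the epoch just before its first use.

    reallocated -- iterable of blocks chosen for reallocation
    needed_at   -- dict: block -> sorted list of epochs in which the
                   destination server will run tasks on the block

    Blocks never needed at the destination are dropped from the plan, and
    the remaining transfers are spread over time instead of bursting at once.
    """
    plan = {}  # epoch -> blocks to transmit during that epoch
    for block in reallocated:
        epochs = needed_at.get(block)
        if not epochs:
            continue  # never needed at the destination: skip the transfer
        send_epoch = max(epochs[0] - 1, 0)  # transmit one epoch ahead of first use
        plan.setdefault(send_epoch, []).append(block)
    return plan

plan = lazy_transmission_schedule(
    ["d1", "d4", "d5"],
    {"d1": [2, 3], "d5": [1, 4]},  # d4 is never needed -> not transmitted
)
print(plan)  # {1: ['d1'], 0: ['d5']}
```

Skipping the never-used block is what saves network load compared with transmitting every reallocated block eagerly.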

SLIDE 14

Performance Evaluation

Trace-driven experiments

Simulated environment:

– 3000 servers with a typical fat-tree topology
– 8 computing slots per server
– Epoch length set to 1 second

Comparison methods: Random, Sierra, Ursa, CA
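Under this setup, whether a server is overloaded in a given epoch reduces to comparing its demanded computing slots against its capacity. A minimal sketch, assuming slot demand per 1-second epoch is known (the function and parameter names are illustrative):

```python
def classify_epochs(demand, capacity=8):
    """Classify each epoch of a server as overloaded, balanced, or underloaded.

    demand   -- list of computing-slot demand per 1-second epoch
    capacity -- computing slots per server (8 in the simulated setup)
    """
    labels = []
    for slots in demand:
        if slots > capacity:
            labels.append("overloaded")
        elif slots < capacity:
            labels.append("underloaded")
        else:
            labels.append("balanced")
    return labels

print(classify_epochs([10, 8, 3]))  # ['overloaded', 'balanced', 'underloaded']
```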

SLIDE 15

Performance Evaluation

Trace-driven experiments Performance of Data locality

[Figure: % of network load compared to Random vs. 0.5x–1.5x the number of jobs, for Random, Sierra, Ursa, CA, and CALV]

SLIDE 16

Performance Evaluation

Trace-driven experiments Performance of Task Latency

[Figure: reduced average latency per task (s) vs. 0.5x–1.5x the number of jobs, for Sierra, Ursa, CA, and CALV, with Random as the zero baseline]

SLIDE 17

Performance Evaluation

Trace-driven experiments Performance of Cost-Efficiency

[Figure: number of reported blocks vs. 0.5x–1.5x the number of jobs, for CALV, CALV-MAX, CALV-Random, and CALV-All]

Performance of Lazy Data Transmission

[Figure: saved % of network load, saved % of peak number of reallocated blocks, and reduced number of overloads (x20) vs. 0.5x–1.5x the number of jobs]

SLIDE 18

Conclusion

The importance of considering computing workloads in load balancing.
CALV is cost-efficient and achieves long-term load balance.

SLIDE 19

The End

Thanks! Questions?