Power-Saving in Large-Scale Storage Systems with Data Migration
Koji Hasebe, Tatsuya Niwa, Akiyoshi Sugiki, and Kazuhiko Kato
University of Tsukuba, Japan
Background

Power-saving in storage systems is a central issue.
IT systems consume 1-2% of the total energy in the world.
Green IT: A New Industry Shock Wave, Gartner Symp/ITxpo, 2007
In large data centers, storage systems consume up to 40% of the total power.
[Figure: daily workload curve with peak time and off-peak time; nodes enter a low-power mode during off-peak hours]

In the literature:
- MAID [Colarelli-Grunwald '02], PDC [Pinheiro-Bianchini '04]
- DIV [Pinheiro et al. '06], Pergamum [Storer et al. '08]
- RIMAC [Yao-Wang '06], eRAID [Wang-Zhu-Li '08]
- Hibernator [Zhu et al. '05], PARAID [Weddle et al. '07], etc.
Commonly observed technique: concentrate the workload on a subset of nodes and put the remaining nodes into a low-power mode during off-peak time.
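A common instance of this technique (e.g., in MAID) keeps a small set of busy disks spinning and spins the rest down after an idle timeout. The sketch below is illustrative only; the `Disk` class, the timeout value, and the `tick` interface are assumptions, not the design of any of the cited systems.

```python
IDLE_TIMEOUT = 300  # seconds of inactivity before spin-down (assumed value)

class Disk:
    """Toy model of a disk that spins down when idle (illustrative only)."""

    def __init__(self, disk_id):
        self.disk_id = disk_id
        self.spinning = True
        self.idle_time = 0

    def access(self):
        """Serving a request spins the disk up and resets its idle clock."""
        self.spinning = True
        self.idle_time = 0

    def tick(self, seconds):
        """Advance time; spin down once the idle timeout is exceeded."""
        if self.spinning:
            self.idle_time += seconds
            if self.idle_time >= IDLE_TIMEOUT:
                self.spinning = False

disks = [Disk(i) for i in range(4)]
for d in disks:
    d.tick(200)
disks[0].access()          # only disk 0 keeps receiving requests
for d in disks:
    d.tick(200)
print([d.spinning for d in disks])  # → [True, False, False, False]
```

Skewing requests toward disk 0 is what lets the other three disks reach the idle timeout and power down.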
Proposal: an efficient allocation of replicated data (d1, d2, d3) that lets idle nodes enter a low-power mode at off-peak time.

[Figure: data replicas distributed across nodes, with some nodes in low-power mode]
[Figure: 12 physical nodes P1–P12 arranged in Blocks 1–4, with parent and child nodes]

Only 3 physical nodes are required at off-peak time; at peak time the number of active nodes may increase up to four-fold.
[Figure sequence: virtual nodes V1–V9 mapped onto physical nodes P1–P12 over Blocks 1–4; as the load drops, virtual nodes are split into halves (e.g., V1 1/2 and V1 2/2) and migrated onto the remaining active nodes of Blocks 1–2]
[Figure: physical nodes P1–P12 in Blocks 1–4 with a parent/child structure, each node holding data]

The number of active nodes is adjusted by two operations: extension (powering on child nodes as the workload grows) and reduction (consolidating data back onto parent nodes as it falls).
Reusing the data stored on the previous day reduces the amount of migration required. The mapping of virtual nodes effectively skews the workload.

[Figure: virtual nodes, including halves such as V1 2/2 and V7 1/2, mapped onto parents P1–P3 and children P4–P6]
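One way to realize this skew (a sketch, not the paper's algorithm): treat each virtual node's load as an item and pack the virtual nodes onto as few physical nodes as possible, so the unused physical nodes can be powered down. The first-fit packing and the capacity and load numbers below are assumptions for illustration.

```python
def skew(virtual_loads, capacity):
    """Pack virtual-node loads onto physical nodes, first-fit decreasing.
    Returns one entry per physical node: [used_capacity, [vnode ids]]."""
    nodes = []
    for vid, load in sorted(enumerate(virtual_loads),
                            key=lambda x: -x[1]):  # heaviest first
        for n in nodes:
            if n[0] + load <= capacity:   # fits on an existing node
                n[0] += load
                n[1].append(vid)
                break
        else:                             # no node has room: power one on
            nodes.append([load, [vid]])
    return nodes

# Nine virtual nodes at off-peak load; capacity 1.0 per physical node.
loads = [0.3, 0.2, 0.1, 0.3, 0.2, 0.1, 0.3, 0.2, 0.1]
placement = skew(loads, capacity=1.0)
print(len(placement))  # physical nodes needed at off-peak → 2
```

With the load concentrated on two physical nodes, the others can enter the low-power mode; at peak time the same routine naturally spreads the larger loads over more nodes.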
[Figure sequence: step-by-step reduction and extension. Virtual nodes V1–V9 are split into halves (e.g., V1 1/2 and V1 2/2), migrated between parents P1–P3 and children P4–P6, and merged back as the workload varies]
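The halves written V1 1/2 and V1 2/2 in the figures indicate that a virtual node's data can be divided between a parent and a child physical node and later reunited. A minimal sketch of that bookkeeping, with all names (`split`, `merge`, the placement dict) assumed rather than taken from the paper:

```python
def split(placement, vnode, parent, child):
    """Move half of vnode's data from the parent to the child node."""
    data = placement[parent].pop(vnode)
    half = len(data) // 2
    placement[parent][vnode + ":1/2"] = data[:half]
    placement[child][vnode + ":2/2"] = data[half:]

def merge(placement, vnode, parent, child):
    """Reunite both halves of vnode on the parent (child can power down)."""
    first = placement[parent].pop(vnode + ":1/2")
    second = placement[child].pop(vnode + ":2/2")
    placement[parent][vnode] = first + second

# P1 is a parent holding virtual node V1; P4 is an empty child.
placement = {"P1": {"V1": list(range(8))}, "P4": {}}
split(placement, "V1", "P1", "P4")
print(sorted(placement["P1"]))    # → ['V1:1/2']
merge(placement, "V1", "P1", "P4")
print(placement["P1"]["V1"])      # → [0, 1, 2, 3, 4, 5, 6, 7]
```

Splitting offloads half of a hot virtual node during extension; merging pulls the half back so the child node can be switched off during reduction.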
Simulation goals: evaluate the efficiency of skewing the workload, and evaluate the validity of the long-term optimization.
Simulation settings:
- Number of physical nodes: 800
- Number of virtual nodes: 10,000
- Term of simulation: 1 day
- Migration condition: split when load exceeds 90%; merge when load falls below 70%
- Workload of all virtual nodes: initially at its lowest, increasing until the middle of the day; the gap was sixfold
- Virtual node groups: twofold gap between the loads of the groups
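The migration condition above can be read as a simple threshold controller. The 90%/70% values come from the table; the control function itself is an assumption for illustration.

```python
SPLIT_THRESHOLD = 0.90  # from the settings: split when load > 90%
MERGE_THRESHOLD = 0.70  # from the settings: merge when load < 70%

def migration_action(load):
    """Decide what to do with a physical node given its load fraction."""
    if load > SPLIT_THRESHOLD:
        return "split"   # offload half of a virtual node to a child
    if load < MERGE_THRESHOLD:
        return "merge"   # pull a half back so the child can power down
    return "stay"        # load is in the acceptable band

print(migration_action(0.95),
      migration_action(0.80),
      migration_action(0.60))  # → split stay merge
```

The gap between the two thresholds provides hysteresis, so a node whose load hovers near a single cutoff does not trigger constant back-and-forth migration.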
The long-term optimization algorithm improves the average load as expected. Physical nodes run effectively, coping with the daily variation in workload.
Optimization reduces power consumption consistently and continually.
Experiment goals: verify the efficiency of workload skewing on real machines, and verify whether the response time stays below the desired bound.

Response time: measured from sending a request until the data is loaded into memory on the server.
Experiment settings:
- Number of physical nodes: 40
- Number of files: 60,000 × 1 MB (60 GB total)
- Term of experiment: 1 day
- Migration condition: split when load exceeds 90%; merge when load falls below 70%
- Workload of all virtual nodes: initially at its lowest, increasing until the middle of the day
- Virtual node groups: twofold gap between the two groups
- Amount of each migration: 10% of all the data
Our algorithms keep the response time almost always below the desired bound.
The workload is also skewed effectively on the real machines, matching the simulation results.
Conclusion

- Our system adjusts the number of active physical nodes to the variation of the workload and reduces power consumption effectively.
- Short-term and long-term optimization algorithms reduce power consumption.
- Simulation results showed that our method kept the workload of the active nodes high (average: 67–74%).
- The prototype implementation achieved an overall average load of 67% while maintaining the preferred response time.