SLIDE 1

Pattern-based Write Scheduling and Read Balance-oriented Wear-leveling for Solid State Drives

Jun Li¹, Xiaofei Xu¹, Xiaoning Peng², and Jianwei Liao¹,²

¹College of Computer and Information Science, Southwest University
²College of Computer Science and Engineering, Huaihua University

SLIDE 2

Outline
  • Introduction and motivation
  • Design
  • Evaluation
  • Conclusion

SLIDE 3

Outline
  • Introduction and motivation
  • Design
  • Evaluation
  • Conclusion

SLIDE 4

Introduction

Performance & Lifetime
  • SSDs are widely used in smartphones and PCs
  • Endurance and performance degrade
  • Bit density develops from SLC to 3D TLC and QLC

Purpose: improve performance and endurance

SLIDE 5

Introduction

[Figure: SSD architecture overview, highlighting internal parallelism across channels, chips, and planes]

SLIDE 6

Motivation

[Figures: access distribution of logical sector addresses; percentage of frequently accessed addresses covered by patterns; read distribution of logical sector addresses]

Data can be split into frequently and infrequently accessed, and frequent requests tend to appear as patterns.
  • Pattern: a group of frequently accessed addresses
SLIDE 7

Outline
  • Introduction and motivation
  • Design
  • Evaluation
  • Conclusion

SLIDE 8

Our work

Basic idea:
  • Schedule write requests belonging to the same pattern to the same block, to cut down garbage collection overhead
  • Migrate hot read data to different parallelism units during wear-leveling (WL), to boost read performance

SLIDE 9

Pattern-based Write Scheduling: background

Garbage Collection Overhead

[Figure: a block without hot/cold splitting mixes valid and invalid pages, so GC must move the valid pages before erasing (high overhead); a block with hot/cold splitting becomes entirely invalid and can be erased directly (low overhead)]
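The GC saving that hot/cold splitting buys can be sketched with a toy cost model (a minimal illustration, not from the slides): reclaiming a block costs one copy per still-valid page plus one erase, so a block whose pages were invalidated together erases almost for free.

```python
def gc_cost(block, move_cost=1, erase_cost=1):
    """GC cost of reclaiming a block: every still-valid page must be
    copied elsewhere before the erase; a fully invalid block erases directly."""
    valid = sum(1 for page in block if page == "valid")
    return valid * move_cost + erase_cost

# Without hot/cold splitting, hot (soon-invalid) and cold (long-valid)
# pages are interleaved; with splitting, hot pages share a block and
# are all invalidated together by later updates.
mixed = ["valid", "invalid"] * 4   # 4 valid pages left behind
split = ["invalid"] * 8            # nothing left to copy
```

Here `gc_cost(mixed)` is 5 while `gc_cost(split)` is 1, which is the "page moves vs. direct erase" gap the figure depicts.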

SLIDE 10

Pattern-based Write Scheduling: workflow
  • Pattern Mining
    – Get the write patterns based on part of the requests in each time window
    – Make use of the FP-growth algorithm, a mature data mining scheme
  • Pattern Matching
    – Match the requests in the I/O queue against the patterns
    – Introduce a matching matrix
  • Scheduling
    – Schedule the requests in the same pattern to the same block
    – We argue that these requests are more likely to be invalidated together than requests grouped only by hotness.
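The three steps above can be sketched as follows. This is a simplified illustration, not the paper's implementation: it replaces FP-growth with plain itemset counting (adequate for small windows), and stands in for the matching matrix with direct set membership tests; `min_support` and `max_len` are hypothetical parameters.

```python
from collections import Counter
from itertools import combinations

def mine_patterns(windows, min_support=2, max_len=4):
    """Step 1: mine frequent address groups from per-window write requests.
    (A stand-in for FP-growth: exhaustive itemset counting.)"""
    counts = Counter()
    for window in windows:
        addrs = sorted(set(window))
        for k in range(2, max_len + 1):
            for combo in combinations(addrs, k):
                counts[combo] += 1
    return [set(c) for c, n in counts.items() if n >= min_support]

def schedule(queue, patterns):
    """Steps 2-3: match queued write addresses against the patterns and
    group pattern members into the same (hot) block; the rest stay cold."""
    hot_blocks, remaining = [], list(queue)
    for pat in patterns:
        matched = [a for a in remaining if a in pat]
        if len(matched) >= 2:          # at least two co-scheduled members
            hot_blocks.append(matched)
            remaining = [a for a in remaining if a not in pat]
    return hot_blocks, remaining
```

For example, if `{A, B, D, F}` recurs across windows, a queue `A B C D E F` is split into hot groups drawn from that pattern plus a cold remainder `C, E`, so the grouped pages can later be invalidated together.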

SLIDE 11

Pattern-based Write Scheduling: illustration

[Figure: an incoming request stream A B C D E F A' G D' F' H ... is matched against the mined patterns {A, B, D, F}, {I, O, K, Y}, ...; matching and scheduling instruct the scheduler to place pattern members in the same hot block and the other requests in cold blocks, so the updated pages (A', B', D', F') can be invalidated together]

SLIDE 12

Read Balance-oriented Wear-leveling: background
  • Why do we need wear leveling?
    – P/E (Program/Erase) cycles wear out the cells of an SSD, and the basic erase unit is the block.
    – If blocks wear unevenly and some reach their erase limit, the available capacity shrinks.
  • Static Wear Leveling
    – A BET (Block Erase Table) selects the WL target block

[Figure: data migrated between a hot block and a cold block]

SLIDE 13

Read Balance-oriented Wear-leveling: workflow
  • A PRT (Page Read Table) defines the block type (hot/cold)
    – 1 represents read, 0 represents not read.
  • Three steps
    – Identify wear-leveling target blocks (hot/cold/...)
    – Regroup (hot/cold) pages into two blocks
    – Migrate these blocks to different chip units

[Flowchart: identify target block → if it is hot/cold, regroup its pages into block1/block2 and migrate them separately; otherwise perform a normal migration]
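The first two steps can be sketched as below, assuming the PRT is stored as one bit list per block. This is a minimal illustration only; `hot_threshold` is a hypothetical parameter, and real firmware would work on physical pages rather than Python lists.

```python
def classify_blocks(prt, hot_threshold=0.5):
    """Step 1: classify each block as read-hot or read-cold from its PRT row,
    where prt[b][p] is 1 if page p of block b was read in the window."""
    hot, cold = [], []
    for b, pages in enumerate(prt):
        (hot if sum(pages) / len(pages) >= hot_threshold else cold).append(b)
    return hot, cold

def regroup(pages_by_block, prt, b1, b2):
    """Step 2: regroup the pages of two WL target blocks so hot-read pages
    land in one block and cold-read pages in the other."""
    hot_pages, cold_pages = [], []
    for b in (b1, b2):
        for p, data in enumerate(pages_by_block[b]):
            (hot_pages if prt[b][p] else cold_pages).append(data)
    return hot_pages, cold_pages
```

Step 3 then places the regrouped hot-read block and cold-read block on different chip units, so hot reads are served in parallel.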

SLIDE 14

Read Balance-oriented Wear-leveling: illustration

[Figure: along channel i, across chips 0 to n: ① identify target blocks (heavily erased vs. less erased, with hot-read, cold-read, and other pages); ② re-group pages at block-unit granularity; ③ move data across chips so hot-read data spreads over different parallel units]

SLIDE 15

Outline
  • Introduction and motivation
  • Design
  • Evaluation
  • Conclusion

SLIDE 16

Implementation
  • Simulator: SSDSim [1]
  • One workload from MSRC [2] and three workloads from daily collection
  • 2 channels, 4 chips per channel, and 4 planes per chip
  • Four schemes are compared:
    – Baseline: normal flash, without scheduling or SWL
    – SWL [3]: static wear leveling
    – PGIS [4] + SWL: PGIS with native SWL
    – Pattern: the proposal

[1] Y. Hu, H. Jiang, D. Feng et al. Exploring and exploiting the multi-level parallelism inside SSDs for improved performance and endurance. IEEE Transactions on Computers, 62(6):1141–1155, 2013.
[2] http://iotta.snia.org/traces/388
[3] Y. Chang, J. Hsieh, and T. Kuo. Improving flash wear leveling by proactively moving static data. IEEE Transactions on Computers, 59(1):53–65, 2010.
[4] J. Guo, Y. Hu, B. Mao, and S. Wu. Parallelism and garbage collection aware I/O scheduler with improved SSD performance. In Proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017), pp. 1184–1193, 2017.

SLIDE 17

Experiments
  • Evaluate GC overhead and WL overhead

[Figure: GC time and WL time in seconds for each scheme]
  • Pattern has the least GC time and WL time of all schemes except the baseline.
SLIDE 18

Experiments
  • Evaluate read performance

[Figure: normalized read latency for each scheme]
  • Compared with the other schemes, Pattern improves read response time by 12.8%.
SLIDE 19

Experiments
  • Evaluate endurance

[Figure: distribution of block erases]
  • Pattern has the fewest block erases of all schemes except the baseline.
  • PGIS+SWL and Pattern have almost the same standard deviation, but note that Pattern has better read performance than PGIS+SWL.

SLIDE 20

Experiments
  • Overhead

[Table: memory overhead of the PRT and BET in KB; mapping overhead in seconds]
  • Pattern needs slightly more mapping time than the other schemes, by less than 138 ms.

SLIDE 21

Outline
  • Introduction and motivation
  • Design
  • Evaluation
  • Conclusion

SLIDE 22

Conclusion
  • We study how to lower garbage collection overhead and improve read performance together with wear-leveling.
  • We propose pattern-based write scheduling and read balance-oriented wear-leveling.
  • Results show that the proposed approach reduces garbage collection overhead by 11.3% and improves read performance by 12.8%, on average.

SLIDE 23

Thank you for your attention! Any Questions?

SLIDE 24

Appendix 1: Matching Matrix