Block Device Scheduling (Don Porter, CSE 506)



SLIDE 1

Block Device Scheduling

Don Porter CSE 506

SLIDE 2

Logical Diagram

[Figure: logical diagram of kernel components — user/kernel/hardware layers with binary formats, system calls, memory management, CPU scheduler, file system, device drivers, networking, sync, RCU; today's lecture covers the disk I/O path.]

SLIDE 3

Quick Recap

• CPU scheduling
  • Balance competing concerns with heuristics
  • What were some goals?
  • No perfect solution
• Today: block device scheduling
  • How is it different from the CPU?
  • Focus primarily on a traditional hard drive
  • Extend to new storage media

SLIDE 4

Block device goals

• Throughput
• Latency
• Safety – the file system can be recovered after a crash
• Fairness – surprisingly, very little attention is given to storage access fairness
  • Hard problem – solutions usually just prevent starvation
  • Disk quotas handle space fairness

SLIDE 5

Big Picture

VFS
Low-level FS (ext4, BTRFS, etc.)
Page Cache
Block Device

IO Scheduler

Driver

Disk

SLIDE 6

OS Model of a Block Dev.

• Simple array of blocks
• Blocks are usually 512 bytes or 4 KB

SLIDE 7

Recall: Page Cache

[Figure: a page (blue) in the page cache with 3 buffer heads (green); buffer heads map the page's contents to disk blocks.]

SLIDE 8

Caching

• Obviously, the number 1 trick in the OS designer’s toolbox is caching disk contents in RAM
  • Remember the page cache?
• Latency can be hidden by pre-reading data into RAM
  • And by keeping any free RAM full of disk contents
  • Doesn’t help synchronous reads (that miss in the RAM cache) or synchronous writes
SLIDE 9

Caching + throughput

• Assume that most reads and writes to disk are asynchronous
  • Dirty data can be buffered and written at the OS’s leisure
  • Most reads hit in the RAM cache – most disk reads are read-ahead optimizations
• Key problem: how to optimally order pending disk I/O requests?
  • Hint: it isn’t first-come, first-served

SLIDE 10

Another view of the problem

• Between the page cache and the disk, you have a queue of pending requests
• Requests are tuples of (block #, read/write, buffer addr)
• You can reorder these as you like to improve throughput
• What reordering heuristic to use, if any?
• The heuristic is called the I/O scheduler
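As a toy illustration (plain Python, not kernel code; the field names and the block-sorting heuristic are invented for this sketch), the pending-request queue and one possible reordering look like:

```python
from collections import namedtuple

# A pending request, per the slide: (block #, read/write, buffer addr).
# Field names are illustrative, not from any real kernel API.
Request = namedtuple("Request", ["block", "op", "buf"])

def fcfs_order(queue):
    """Baseline: first-come, first-served -- no reordering at all."""
    return list(queue)

def sorted_by_block(queue):
    """A trivial heuristic: service requests in ascending block order,
    so the disk arm sweeps in one direction instead of jumping around."""
    return sorted(queue, key=lambda r: r.block)

pending = [Request(90, "R", None), Request(5, "W", None), Request(40, "R", None)]
print([r.block for r in sorted_by_block(pending)])  # [5, 40, 90]
```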

SLIDE 11

A simple disk model

• Disks are slow. Why?
  • Moving parts << circuits
• Programming interface: simple array of sectors (blocks)
• Physical layout:
  • Concentric circular “tracks” of blocks on a platter
  • E.g., sectors 0-9 on the innermost track, 10-19 on the next track, etc.
  • The disk arm moves between tracks
  • The platter rotates under the disk head to align with the requested sector
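Under this idealized layout, mapping a sector number to its track and rotational position is simple arithmetic (10 sectors per track is assumed from the slide's 0-9/10-19 example):

```python
SECTORS_PER_TRACK = 10  # assumed, matching the slide (sectors 0-9, 10-19, ...)

def track_of(sector):
    """Which concentric track a sector lives on."""
    return sector // SECTORS_PER_TRACK

def position_on_track(sector):
    """Rotational position of a sector within its track."""
    return sector % SECTORS_PER_TRACK

print(track_of(13), position_on_track(13))  # sector 13 is on track 1, position 3
```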

SLIDE 12

Disk Model

[Figure: one track with sectors 1-7; each block lives on a sector. The disk head reads at the granularity of an entire sector. The disk spins at a constant speed; sectors rotate underneath the head.]

SLIDE 13

Disk Model

[Figure: two concentric tracks holding sectors 1-21. The disk head seeks between tracks; the gap between sectors 7 and 8 accounts for seek time.]

SLIDE 14

Many Tracks

[Figure: many concentric tracks on one platter, with the disk head positioned over one of them.]

SLIDE 15

Several (~4) Platters

• Platters spin together at the same speed
• Each platter has its own head; all heads seek together

SLIDE 16

Implications of multiple platters

• Blocks are actually striped across platters
• Example:
  • Sector 0 on platter 0
  • Sector 1 on platter 1, at the same position
  • Sector 2 on platter 2, sector 3 on platter 3, also at the same position
  • 4 heads can read all 4 sectors simultaneously
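The striping in this example is again just modular arithmetic (4 platters assumed, per the slide):

```python
NUM_PLATTERS = 4  # assumed, per the slide's example

def platter_of(sector):
    """Consecutive sectors land on consecutive platters."""
    return sector % NUM_PLATTERS

def stripe_position(sector):
    """All sectors in one stripe share this arm/rotational position,
    so all heads can read them simultaneously."""
    return sector // NUM_PLATTERS
```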

SLIDE 17

3 key latencies

• I/O delay: the time it takes to read/write a sector
• Rotational delay: the time the disk head waits for the platter to rotate the desired sector under it
  • Note: the disk rotates continuously at constant speed
• Seek delay: the time the disk arm takes to move to a different track

SLIDE 18

Observations

• The latency of a given operation is a function of the current disk arm and platter position
• Each request changes these values
• Idea: build a model of the disk
  • Maybe use delay values from measurement or manuals
  • Use simple math to evaluate the latency of each pending request
  • Greedy algorithm: always select the lowest-latency request

SLIDE 19

Example formula

• s = seek latency, in time/track
• r = rotational latency, in time/sector
• i = I/O latency, in seconds
• Time = (Δtracks × s) + (Δsectors × r) + i
• Note: Δsectors must factor in the head’s position after the seek is finished. Why?
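A toy version of this formula in Python (the constants and the 10-sectors-per-track geometry are made up; the modular arithmetic on dsectors is one simplified answer to the slide's "Why?"):

```python
SECTORS_PER_TRACK = 10
S = 1.0   # s: seek latency per track (assumed units)
R = 0.1   # r: rotational latency per sector
I = 0.05  # i: fixed I/O latency per request

def latency(cur_track, cur_sector, target_sector):
    """Time = (dtracks * s) + (dsectors * r) + i.

    dsectors counts from the head's angular position after the seek:
    rotation is one-directional, so we wait for the target to come around."""
    tgt_track = target_sector // SECTORS_PER_TRACK
    tgt_pos = target_sector % SECTORS_PER_TRACK
    dtracks = abs(tgt_track - cur_track)
    dsectors = (tgt_pos - cur_sector) % SECTORS_PER_TRACK
    return dtracks * S + dsectors * R + I

def greedy_pick(cur_track, cur_sector, pending):
    """The greedy policy from the previous slide: lowest predicted latency wins."""
    return min(pending, key=lambda sec: latency(cur_track, cur_sector, sec))
```

With the head at track 0, sector 0, a request for sector 3 (same track, short rotation) beats one for sector 55 (five tracks away).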
SLIDE 20

Problem with greedy?

• “Far” requests will starve
• The disk head may just hover around the “middle” tracks

SLIDE 21

Elevator Algorithm

• Require the disk arm to move in continuous “sweeps” in and out
• Reorder requests within a sweep
  • Ex: if the disk arm is moving “out,” reorder requests between the current track and the outside of the disk in ascending order (by block number)
  • A request for a sector the arm has already passed must be ordered after the outermost request, in descending order
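The sweep ordering can be sketched as a pure function over block numbers (a hypothetical helper; real implementations work on track positions and maintain the queue incrementally):

```python
def elevator_order(current, direction, pending):
    """Order one sweep's worth of requests, as the slide describes.

    Moving "out" (toward higher block numbers): first the blocks at or
    beyond the current position, ascending; then the blocks already
    passed, descending, to be serviced on the return sweep."""
    if direction == "out":
        ahead = sorted(b for b in pending if b >= current)
        behind = sorted((b for b in pending if b < current), reverse=True)
    else:  # moving "in": the mirror image
        ahead = sorted((b for b in pending if b <= current), reverse=True)
        behind = sorted(b for b in pending if b > current)
    return ahead + behind

print(elevator_order(50, "out", [10, 95, 60, 40, 70]))  # [60, 70, 95, 40, 10]
```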
SLIDE 22

Elevator Algo, pt. 2

• This approach prevents starvation
  • Sectors at the “inside” or “outside” get service after a bounded time
• Reasonably good throughput
  • Sorts requests to minimize seek latency
  • Can get hit with rotational-latency pathologies (How?)
• Simple to code up!
  • The programming model hides low-level details; fine-grained optimizations are difficult in practice

SLIDE 23

Pluggable Schedulers

• Linux allows the disk scheduler to be replaced
  • Just like the CPU scheduler
• Can choose a different heuristic that favors:
  • Fairness
  • Real-time constraints
  • Performance

SLIDE 24

Completely Fair Queuing (CFQ)

• Idea: add a second layer of queues (one per process)
  • Round-robin promote requests from them to the “real” queue
• Goal: fairly distribute disk bandwidth among tasks
• Problems?
  • Overall throughput is likely reduced
  • Ping-pongs the disk head around
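A toy two-level queue captures the idea (this is a sketch of the concept, not the real Linux CFQ implementation):

```python
from collections import deque

class CFQSketch:
    """One queue per process, promoted round-robin into the dispatch queue."""

    def __init__(self):
        self.per_process = {}   # pid -> deque of block numbers
        self.dispatch = deque()

    def submit(self, pid, block):
        self.per_process.setdefault(pid, deque()).append(block)

    def promote_round_robin(self):
        """Move at most one request per process into the dispatch queue."""
        for pid in sorted(self.per_process):
            q = self.per_process[pid]
            if q:
                self.dispatch.append((pid, q.popleft()))

cfq = CFQSketch()
for blk in (10, 11, 12):
    cfq.submit(1, blk)     # process 1: a burst of nearby requests
cfq.submit(2, 500)         # process 2: one far-away request
cfq.promote_round_robin()
print(list(cfq.dispatch))  # [(1, 10), (2, 500)] -- fair, but the head ping-pongs
```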

SLIDE 25

Deadline Scheduler

• Associate expiration times with requests
• As requests get close to expiration, make sure they are deployed
  • Constrains reordering to ensure some forward progress
• Good for real-time applications
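One possible reading of this policy as code (the tuple format and the expiry window are invented for the sketch):

```python
def deadline_dispatch(now, pending, expiry_window=0.0):
    """pending: list of (deadline, block) pairs.

    Requests due within the window jump the queue (forward progress);
    everything else is free to be sorted by block for throughput."""
    due = sorted(p for p in pending if p[0] <= now + expiry_window)
    rest = sorted((p for p in pending if p[0] > now + expiry_window),
                  key=lambda p: p[1])  # seek-friendly block order
    return due + rest

reqs = [(5.0, 900), (1.0, 10), (9.0, 20)]
# (1.0, 10) and (5.0, 900) are due within the window, so they go first.
print(deadline_dispatch(now=4.5, pending=reqs, expiry_window=1.0))
```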

SLIDE 26

Anticipatory Scheduler

• Idea: try to anticipate locality of requests
• If process P tends to issue bursts of requests for close disk blocks:
  • When you see a request from P, hold it in the disk queue for a while
  • See if more “nearby” requests come in
  • Then schedule all the requests at once
  • And coalesce adjacent requests
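The final coalescing step might look like this (illustrative only): merge runs of adjacent block numbers into (start, length) extents.

```python
def coalesce(blocks):
    """Merge runs of adjacent block numbers into (start, length) extents."""
    extents = []
    for b in sorted(blocks):
        if extents and b == extents[-1][0] + extents[-1][1]:
            start, length = extents[-1]
            extents[-1] = (start, length + 1)
        else:
            extents.append((b, 1))
    return extents

print(coalesce([7, 5, 6, 20, 21, 100]))  # [(5, 3), (20, 2), (100, 1)]
```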

SLIDE 27

Optimizations at Cross-purposes

• The disk itself does some optimizations:
  • Caching
    • Write requests can sit in a volatile cache for longer than expected
  • Reordering requests internally
    • Can’t assume that requests are serviced in order
    • Dependent operations must wait until the first finishes
  • Bad sectors can be remapped to “spares”
    • Problem: disk arm flailing on an old disk

SLIDE 28

A note on safety

• In Linux and other OSes, the I/O scheduler can reorder requests arbitrarily
• It is the file system’s job to keep unsafe I/O requests out of the scheduling queues
SLIDE 29

Dangerous I/Os

• What can make an I/O request unsafe?
• File system bookkeeping has invariants on disk
  • Example: inodes point to file data blocks; data blocks are also marked as free in a bitmap
• Updates must uphold these invariants
  • Ex: write an update to the inode, then the bitmap
  • What if the system crashes between the writes?
  • The block can end up in two files!

SLIDE 30

3 Simple Rules

(Courtesy of Ganger and McKusick, “Soft Updates” paper)

• Never write a pointer to a structure until it has been initialized
  • Ex: don’t write a directory entry to disk until the inode has been written to disk
• Never reuse a resource before nullifying all pointers to it
  • Ex: before re-allocating a block to a file, write an update to the inode that references it
• Never reset the last pointer to a live resource before a new pointer has been set
  • Ex: renaming a file – write the new directory entry before the old one (better 2 links than none)
SLIDE 31

A note on safety

• It is the file system’s job to keep unsafe I/O requests out of the scheduling queues
• While these constraints are simple, enforcing them in the average file system is surprisingly difficult
• Journaling helps by creating a log of what you are in the middle of doing, which can be replayed
  • (Simpler) constraint: journal updates must go to disk before FS updates
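A minimal user-space sketch of the ordering constraint (the journal record format and file names are made up; `os.fsync` is the real call that forces data to stable storage before the next write is issued):

```python
import json, os, tempfile

def journaled_update(journal_path, data_path, new_contents):
    """Write-ahead: the journal record must reach disk before the FS update."""
    # 1. Append an intent record to the journal and force it out first.
    with open(journal_path, "a") as j:
        j.write(json.dumps({"path": data_path, "data": new_contents}) + "\n")
        j.flush()
        os.fsync(j.fileno())  # journal is durable before we touch real state
    # 2. Only now perform the in-place update (and make it durable too).
    with open(data_path, "w") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())

d = tempfile.mkdtemp()
journaled_update(os.path.join(d, "journal"), os.path.join(d, "file"), "hello")
```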

SLIDE 32

Disks aren’t everything

• Flash is increasing in popularity
  • Different types with slight variations (NAND, NOR, etc.)
• No moving parts – who cares about block ordering anymore?
• Can only write to a block of flash ~100k times
  • Can read as much as you want

SLIDE 33

More in a Flash

• Flash reads are generally fast; writes are more expensive
  • Prefetching has little benefit
  • Queuing optimizations can take longer than a read
• New issue: wear leveling – need to distribute writes evenly
• Flash devices usually have a custom, log-structured FS
  • Groups random writes
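A naive illustration of wear leveling (real flash translation layers are far more elaborate): route each write to the least-worn erase block.

```python
def pick_erase_block(wear_counts):
    """Choose the least-worn erase block for the next write and record
    the wear, so no block hits its ~100k-cycle limit early."""
    block = min(range(len(wear_counts)), key=lambda i: wear_counts[i])
    wear_counts[block] += 1
    return block

wear = [3, 1, 2]
print(pick_erase_block(wear), wear)  # picks block 1; wear becomes [3, 2, 2]
```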

SLIDE 34

Even newer hotness

• Byte-addressable, persistent RAMs (BPRAM)
  • Phase-Change Memory (PCM), memristors, etc.
• Splits the difference between RAM and flash:
  • Byte-granularity writes (vs. blocks)
  • Fast reads; slower, high-energy writes
  • Doesn’t need energy to hold state (unlike DRAM, which needs refresh)
  • Wear is an issue (bytes get stuck at their last value)
• Still in the lab, but getting close

SLIDE 35

Important research topic

• Most work on optimizing storage access is tailored to hard drives
• These heuristics are not easily adapted to new media
• Future systems will have a mix of disks, flash, PRAM, and DRAM
• Does it even make sense to treat them all the same?

SLIDE 36

Summary

• Performance characteristics of disks, flash, and BPRAM
• Disk scheduling heuristics
• Safety constraints for file systems