hard drives / filesystems 2

  1. hard drives / filesystems 2

  2. last time
     - direct memory access: write directly to device driver buffers; OS supplies physical address; maybe avoid more copies if really clever?
     - disk interface: sectors
     - FAT filesystem: dividing disk into clusters
     - files as linked list of cluster numbers
     - file allocation table: linked-list next pointers + free cluster info
     - directory entries: file info + first cluster number

  3. on extension requests
     - there was already a paging assignment extension… and I know several students started the assignment with enough time… don’t want students to play “guess what the real due date is” when making plans
     - I wish we had more effective OH help, but our general assumption is that you should be able to complete the assignment without it …and that you won’t start working in the last day or so, to give time for getting answers to questions
     - for particular difficulty working on the assignment: case-by-case extensions (email or submit on kytos)
     - computer/Internet availability issues, sudden moves, illness, …
     - late policy still applies (3, 5 days)

  4. on office hours
     - hopefully we’re learning to be more efficient in virtual OH, e.g. switching between students to avoid spending too much time at once
     - please help us make them efficient:
     - good “task” descriptions may let us group students together for help
     - simplify your question: narrow down/simplify test cases
     - simplify your question: figure out what part of your code is running/doing (via debug prints, GDB, …)
     - use OH time other than in the last 24 hours before the due time

  5. note on FAT assignment
     - read from disk image (file with contents of hard drive/SSD)
     - use real specs from Microsoft; implement FAT32 version (specs describe several variants)
     - mapping from cluster numbers to location on disk (sketch below)
     - different: end-of-file marker in FAT could be values other than -1
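
To make those last two points concrete, here is a minimal sketch in C of the standard FAT32 rules from the Microsoft spec (the BPB_* names in the comments are the spec’s field names; the function names are mine, not the assignment’s required interface):

    #include <stdbool.h>
    #include <stdint.h>

    /* FAT32 end-of-chain check: the top 4 bits of each 32-bit FAT entry
       are reserved, so mask them off; any remaining value >= 0x0FFFFFF8
       marks end-of-file -- not just -1 (0xFFFFFFFF). */
    bool fat32_is_end_of_chain(uint32_t fat_entry) {
        return (fat_entry & 0x0FFFFFFF) >= 0x0FFFFFF8;
    }

    /* Cluster numbering starts at 2; on FAT32 the data region begins
       right after the reserved sectors and the FATs (no fixed root
       directory region, unlike FAT12/FAT16). */
    uint32_t fat32_cluster_to_sector(uint32_t cluster,
                                     uint32_t reserved_sectors,    /* BPB_RsvdSecCnt */
                                     uint32_t num_fats,            /* BPB_NumFATs    */
                                     uint32_t sectors_per_fat,     /* BPB_FATSz32    */
                                     uint32_t sectors_per_cluster) /* BPB_SecPerClus */
    {
        uint32_t first_data_sector = reserved_sectors + num_fats * sectors_per_fat;
        return first_data_sector + (cluster - 2) * sectors_per_cluster;
    }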

  6. why hard drives?
     - what filesystems were designed for
     - currently most cost-effective way to have a lot of online storage
     - solid state drives (SSDs) imitate hard drive interfaces

  7. hard drives
     - spins when operating
     - platters: stack of flat discs (only top visible)
     - heads: read/write magnetic signals on platter surfaces
     - arm: rotates to position heads over spinning platters
     [labeled photograph of an opened hard drive; image: Wikimedia Commons / Evan-Amos]

  8–12. sectors/cylinders/etc. (one diagram, built up over several slides)
     [diagram: platter divided into cylinders, tracks, and sectors]
     - seek time (5–10ms): move heads to cylinder; faster for adjacent accesses
     - rotational latency (2–8ms): rotate platter to sector; depends on rotation speed; faster for adjacent reads
     - transfer time (50–100+MB/s): actually read/write data

  13. disk latency components
     - queue time: how long does a read wait in line? depends on number of reads at a time, scheduling strategy
     - disk controller/etc. processing time
     - seek time: head to cylinder
     - rotational latency: platter rotates to sector
     - transfer time
     (back-of-the-envelope numbers below)
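
A back-of-the-envelope sketch of where the time goes for one random 4KB read; the specific numbers are assumptions picked from the ranges on the previous slide, not measurements:

    #include <stdio.h>

    int main(void) {
        double seek_ms       = 8.0;                 /* within the 5-10ms range    */
        double rotational_ms = 4.17;                /* half a rotation at 7200RPM */
        double transfer_ms   = 4.0 / 100000 * 1000; /* 4KB at 100MB/s = 0.04ms    */

        printf("total: %.2f ms\n", seek_ms + rotational_ms + transfer_ms);
        /* ~12ms, almost all of it seek + rotational latency, which is
           why adjacent accesses are so much cheaper */
        return 0;
    }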

  14. cylinders and latency
     - cylinders closer to edge of disk are faster (maybe): outer tracks can hold more sectors, so less rotation time per byte transferred

  15. sector numbers
     - historically: OS knew cylinder/head/sector location (sketch below)
     - now: opaque sector numbers; actual mapping decided by disk controller
     - more flexible for hard drive makers; same interface for SSDs, etc.
     - typical pattern: low sector numbers = probably closer to edge (faster?)
     - typical pattern: adjacent sector numbers = adjacent on disk
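
For reference, old-style addressing implied the classic cylinder/head/sector-to-LBA conversion sketched below, assuming a fixed geometry (which real drives no longer expose):

    #include <stdint.h>

    uint64_t chs_to_lba(uint32_t cylinder, uint32_t head, uint32_t sector,
                        uint32_t heads_per_cylinder, uint32_t sectors_per_track) {
        /* sectors are traditionally numbered from 1, hence the -1 */
        return ((uint64_t)cylinder * heads_per_cylinder + head) * sectors_per_track
               + (sector - 1);
    }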

  16. OS to disk interface
     - disk takes read/write requests: sector number(s) + location of data for sector
     - modern disk controllers: typically direct memory access
     - can have queue of pending requests
     - disk processes them in some order; OS can say “write X before Y” (sketch below)
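
A hypothetical sketch of what one entry in such a request queue might carry; the names are illustrative, not from any real driver:

    #include <stdint.h>

    enum disk_op { DISK_READ, DISK_WRITE };

    struct disk_request {
        enum disk_op op;
        uint64_t first_sector;            /* opaque sector number               */
        uint32_t sector_count;
        void *buffer;                     /* memory the controller DMAs to/from */
        struct disk_request *must_follow; /* ordering: "write X before Y"       */
    };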

  17. hard disks are unreliable
     - Google study (2007), heavily utilized cheap disks
     - 1.7% to 8.6% annualized failure rate (≈ chance a disk fails each year); varies with age
     - disk fails = needs to be replaced
     - 9% of working disks had reallocated sectors

  18. bad sectors
     - modern disk controllers do sector remapping
     - part of physical disk becomes bad: use a different one
     - this is expected behavior
     - disk uses error detecting code to tell data is bad; similar idea to storing + checking hash of data (toy version below)
     - maintain mapping (special part of disk, probably)
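
A toy illustration of the “store + check” idea; real disks use much stronger error-detecting/correcting codes than this additive checksum:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct stored_sector {
        uint8_t  data[512];
        uint32_t checksum;   /* written alongside the data */
    };

    static uint32_t checksum(const uint8_t *data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + data[i];   /* toy hash, for illustration only */
        return sum;
    }

    /* on read: recompute and compare; a mismatch means the sector has
       gone bad and its contents should be remapped elsewhere */
    bool sector_ok(const struct stored_sector *s) {
        return checksum(s->data, sizeof s->data) == s->checksum;
    }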

  19. queuing requests
     - recall: multiple active requests; queue of reads/writes in disk controller and/or OS
     - disk is faster for adjacent/close-by reads/writes: less seek time/rotational latency
     - disk controller and/or OS may need to schedule requests: group nearby requests together (sketch below)
     - as user of disk: better to request multiple things at a time
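
A minimal sketch of one classic way to group nearby requests: an elevator-style (SCAN-like) pass that serves queued sector numbers in sorted order so the head sweeps in one direction. The slide names the goal, not this specific algorithm:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_u64(const void *a, const void *b) {
        unsigned long long x = *(const unsigned long long *)a;
        unsigned long long y = *(const unsigned long long *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        unsigned long long pending[] = {950, 12, 400, 13, 951, 401};
        size_t n = sizeof pending / sizeof pending[0];

        qsort(pending, n, sizeof pending[0], cmp_u64);

        /* adjacent requests (12/13, 400/401, 950/951) are now served
           back-to-back, minimizing seeks between them */
        for (size_t i = 0; i < n; i++)
            printf("service sector %llu\n", pending[i]);
        return 0;
    }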

  20. disk performance and filesystems
     - filesystem can…
     - do contiguous or nearby reads/writes: bunch of consecutive sectors much faster to read; nearby sectors have lower seek/rotational delay
     - start a lot of reads/writes at once
     - avoid reading something to find out what to read next: array of sectors better than linked list (contrast sketched below)
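
A sketch contrasting those last two points; read_sector() and the on-disk layouts here are hypothetical stand-ins, not a real filesystem’s API:

    #include <stdint.h>
    #include <string.h>

    static uint8_t fake_disk[1024][512];    /* stand-in for the device */

    static void read_sector(uint64_t sector, void *buf) {
        memcpy(buf, fake_disk[sector], 512);
    }

    /* linked list (FAT-style): each read tells us where the next one
       is, so requests must be issued one at a time, in order */
    void read_chain(uint64_t first, uint8_t buf[][512], int n) {
        uint64_t cur = first;
        for (int i = 0; i < n; i++) {
            read_sector(cur, buf[i]);
            memcpy(&cur, buf[i], sizeof cur);  /* next pointer is in the data */
        }
    }

    /* array of sectors (extent/inode-style): all sector numbers are
       known up front, so all n requests can be queued at once */
    void read_array(const uint64_t sectors[], uint8_t buf[][512], int n) {
        for (int i = 0; i < n; i++)
            read_sector(sectors[i], buf[i]);   /* could all be issued together */
    }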

  21. solid state disk architecture
     [diagram: a controller (includes CPU) with RAM, connected to many NAND flash chips]

  22. flash
     - no moving parts: no seek time, no rotational latency
     - can read in sector-like sizes (“pages”), e.g. 4KB or 16KB
     - write once between erasures
     - erasure only in large erasure blocks (often 256KB to megabytes!) (arithmetic below)
     - can only rewrite blocks on the order of tens of thousands of times; after that, flash starts failing
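
Quick arithmetic on those example sizes (4KB pages and a 256KB erasure block, both values from the slide’s ranges):

    #include <stdio.h>

    int main(void) {
        int page_kb = 4, block_kb = 256;
        printf("pages per erasure block: %d\n", block_kb / page_kb);   /* 64 */
        /* so naively rewriting one 4KB page in place would mean copying
           and re-erasing 63 other pages; hence the remapping tricks on
           the next slides */
        return 0;
    }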

  23. SSDs: flash as disk
     - SSDs: implement hard disk interface for NAND flash
     - read/write sectors at a time; sectors much smaller than erasure blocks; sectors sometimes smaller than flash ‘pages’
     - reads/writes use sector numbers, not addresses; queue of reads/writes
     - need to hide erasure blocks; trick: block remapping (move where sectors are in flash)
     - need to hide limit on number of erases; trick: wear leveling (spread writes out)

  24–26. block remapping (one diagram, built up over several slides)
     [diagram: the Flash Translation Layer keeps a remapping table from OS sector numbers (logical) to flash locations (physical), e.g. 0→93, 1→260, …, 31→74, 32→75, …]
     - read sector: look up its current physical location in the table (e.g. reading sector 31 goes to page 74)
     - write sector: goes to an erased, ready-to-write page; the table is updated and the old copy becomes unused (rewritten elsewhere)
     - can only erase a whole “erasure block” (e.g. pages 0–63, 64–127, 128–191, …)
     - “garbage collection” (free up new space): still-active data is copied out of a block so the whole block can be erased
     (minimal sketch below)
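
A much-simplified sketch of the flash-translation-layer idea in the diagram: every write of a logical sector goes to a fresh physical page and updates the remapping table. The sizes, names, and “next free page” policy are assumptions; a real FTL also does garbage collection and wear leveling:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SECTORS 1024   /* logical sectors the OS sees */
    #define NUM_PAGES   2048   /* physical flash pages (extra room for remapping) */

    static int32_t remap[NUM_SECTORS];    /* logical -> physical (-1 = never written) */
    static uint32_t next_free_page = 0;   /* pretend all pages start erased           */

    void ftl_init(void) {
        for (int i = 0; i < NUM_SECTORS; i++)
            remap[i] = -1;
    }

    /* write: never overwrite in place; use a fresh erased page instead.
       The old page (if any) becomes "unused (rewritten elsewhere)" and
       garbage collection would eventually reclaim it. */
    void ftl_write(uint32_t sector) {
        remap[sector] = (int32_t)next_free_page++;
    }

    /* read: follow the remapping table to the current physical page */
    int32_t ftl_read(uint32_t sector) {
        return remap[sector];
    }

    int main(void) {
        ftl_init();
        ftl_write(31);
        ftl_write(31);   /* rewrite lands on a different physical page */
        printf("sector 31 is now at physical page %d\n", ftl_read(31));
        return 0;
    }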
