Memory Hierarchy and Direct Map Caches (Lecture 11, CDA 3103)


  1. Memory Hierarchy and Direct Map Caches Lecture 11 CDA 3103 06-25-2014

  2. §5.1 Introduction — Principle of Locality
     - Programs access a small proportion of their address space at any time
     - Temporal locality: items accessed recently are likely to be accessed again soon (e.g., instructions in a loop, induction variables)
     - Spatial locality: items near those accessed recently are likely to be accessed soon (e.g., sequential instruction access, array data)
     Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2
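Both kinds of locality show up in even the simplest loop. A minimal sketch (names are my own, not from the slides):

```python
# Hypothetical example of the two kinds of locality described above.
def sum_rows(matrix):
    total = 0                      # 'total' is touched every iteration: temporal locality
    for row in matrix:
        for x in row:              # elements of 'row' are adjacent in memory: spatial locality
            total += x
    return total

print(sum_rows([[1, 2], [3, 4]]))  # 10
```

The loop body itself is also an example of temporal locality: the same instructions execute on every iteration.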

  3. Taking Advantage of Locality
     - Memory hierarchy: store everything on disk
     - Copy recently accessed (and nearby) items from disk to smaller DRAM memory (main memory)
     - Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory (cache memory attached to the CPU)

  4. Memory Hierarchy Levels
     - Block (aka line): the unit of copying; may be multiple words
     - If accessed data is present in the upper level
       - Hit: access satisfied by the upper level
       - Hit ratio: hits/accesses
     - If accessed data is absent
       - Miss: block copied from the lower level
       - Time taken: the miss penalty
       - Miss ratio: misses/accesses = 1 - hit ratio
       - Then the accessed data is supplied from the upper level
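The hit/miss bookkeeping above is just two ratios; a quick sketch (function names are my own):

```python
# Hit ratio and miss ratio as defined on the slide.
def hit_ratio(hits, accesses):
    return hits / accesses

def miss_ratio(hits, accesses):
    # miss ratio = misses/accesses = 1 - hit ratio
    return 1 - hit_ratio(hits, accesses)

# e.g., 940 hits out of 1000 accesses
print(round(miss_ratio(940, 1000), 2))  # 0.06
```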

  5. §5.2 Memory Technologies
     - Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB
     - Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB
     - Magnetic disk: 5ms – 20ms, $0.20 – $2 per GB
     - Ideal memory: access time of SRAM with the capacity and cost/GB of disk

  6. SRAM Cell (6 Transistors)

  7. Square array of MOSFET cells (read operation)

  8. DRAM Technology
     - Data stored as charge in a capacitor
     - A single transistor is used to access the charge
     - Must be periodically refreshed: read the contents and write them back, performed on a DRAM "row"

  9. Advanced DRAM Organization
     - Bits in a DRAM are organized as a rectangular array; DRAM accesses an entire row
     - Burst mode: supply successive words from a row with reduced latency
     - Double data rate (DDR) DRAM: transfer on both rising and falling clock edges
     - Quad data rate (QDR) DRAM: separate DDR inputs and outputs

  10. DRAM Generations
      Year  Capacity  $/GB
      1980  64Kbit    $1,500,000
      1983  256Kbit   $500,000
      1985  1Mbit     $200,000
      1989  4Mbit     $50,000
      1992  16Mbit    $15,000
      1996  64Mbit    $10,000
      1998  128Mbit   $4,000
      2000  256Mbit   $1,000
      2004  512Mbit   $250
      2007  1Gbit     $50
      (The accompanying chart plots the access times Trac and Tcac falling over the same period, 1980–2007.)

  11. DRAM Performance Factors
      - Row buffer: allows several words to be read and refreshed in parallel
      - Synchronous DRAM: allows consecutive accesses in bursts without needing to send each address; improves bandwidth
      - DRAM banking (DDR3 etc.): allows simultaneous access to multiple DRAMs; improves bandwidth
      - DIMM (dual inline memory module, 4–16 DRAMs): a DIMM using DDR4-3200 SDRAM can transfer 8 × 3200 = 25600 megabytes per second
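The DIMM figure is just width times transfer rate; spelled out (variable names are my own):

```python
# The slide's DIMM bandwidth arithmetic: an 8-byte-wide DDR4-3200 DIMM
# performing 3200 million transfers per second.
bus_width_bytes = 8
transfers_per_us = 3200            # DDR4-3200: 3200 mega-transfers/s
bandwidth_mb_s = bus_width_bytes * transfers_per_us
print(bandwidth_mb_s)              # 25600 megabytes per second
```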

  12. Increasing Memory Bandwidth
      - 4-word wide memory: miss penalty = 1 + 15 + 1 = 17 bus cycles; bandwidth = 16 bytes / 17 cycles ≈ 0.94 B/cycle
      - 4-bank interleaved memory: miss penalty = 1 + 15 + 4×1 = 20 bus cycles; bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
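The two miss penalties above come from the same three costs, combined differently. A sketch of the arithmetic, assuming the usual cost model for this example (1 cycle to send the address, 15 cycles per DRAM access, 1 cycle per word on the bus):

```python
# Miss-penalty arithmetic for fetching a 4-word (16-byte) block.
ADDR, DRAM, XFER = 1, 15, 1        # assumed bus-cycle costs

wide = ADDR + DRAM + XFER          # 4-word-wide memory: one access, one wide transfer
interleaved = ADDR + DRAM + 4 * XFER  # 4 banks: accesses overlap, word transfers serialize

print(wide, 16 / wide)             # 17 cycles, ~0.94 B/cycle
print(interleaved, 16 / interleaved)  # 20 cycles, 0.8 B/cycle
```

Interleaving pays one extra transfer cycle per word but needs only a one-word-wide bus, which is why it was the common compromise.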

  13. §6.4 Flash Storage
      - Nonvolatile semiconductor storage
      - 100× – 1000× faster than disk
      - Smaller, lower power, more robust
      - But more $/GB (between disk and DRAM)
      Chapter 6 — Storage and Other I/O Topics — 13

  14. Flash Types
      - NOR flash: bit cell like a NOR gate; random read/write access; used for instruction memory in embedded systems
      - NAND flash: bit cell like a NAND gate; denser (bits/area) but block-at-a-time access; cheaper per GB; used for USB keys, media storage, …
      - Flash bits wear out after 1000s of accesses, so flash is not suitable as a direct RAM or disk replacement
      - Wear leveling: remap data to less-used blocks
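Wear leveling as described above can be sketched as a tiny logical-to-physical remapper. This is a toy model under my own assumptions, not any real flash translation layer: every write of a logical block is redirected to the least-worn available physical block.

```python
# Toy wear-leveling sketch (hypothetical): spread writes across physical blocks.
class WearLeveler:
    def __init__(self, n_physical):
        self.erase_counts = [0] * n_physical   # wear per physical block
        self.mapping = {}                      # logical block -> physical block

    def write(self, logical):
        # Physical blocks used by *other* logical blocks are off-limits.
        used = set(self.mapping.values()) - {self.mapping.get(logical)}
        free = [p for p in range(len(self.erase_counts)) if p not in used]
        target = min(free, key=lambda p: self.erase_counts[p])  # least worn
        self.erase_counts[target] += 1
        self.mapping[logical] = target
        return target

wl = WearLeveler(4)
print(wl.write(0), wl.write(0))   # repeated writes land on different blocks
```

Even with one hot logical block, the erase counts stay balanced instead of burning out a single physical block.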

  15. §6.3 Disk Storage
      - Nonvolatile, rotating magnetic storage

  16. Disk Sectors and Access
      - Each sector records:
        - Sector ID
        - Data (512 bytes; 4096 bytes proposed)
        - Error correcting code (ECC), used to hide defects and recording errors
        - Synchronization fields and gaps
      - Access to a sector involves:
        - Queuing delay if other accesses are pending
        - Seek: move the heads
        - Rotational latency
        - Data transfer
        - Controller overhead

  17. Disk Access Example
      - Given: 512B sector, 15,000 rpm, 4ms average seek time, 100MB/s transfer rate, 0.2ms controller overhead, idle disk
      - Average read time:
          4ms seek time
        + 0.5 rotation / (15,000/60 rotations per second) = 2ms rotational latency
        + 512B / 100MB/s = 0.005ms transfer time
        + 0.2ms controller delay
        = 6.2ms
      - If the actual average seek time is 1ms, the average read time is 3.2ms
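The same calculation as a reusable sketch (function and parameter names are my own):

```python
# The slide's average-disk-read model, in milliseconds:
# seek + average rotational latency + transfer + controller overhead.
def avg_read_ms(seek_ms, rpm, sector_bytes, rate_mb_s, ctrl_ms):
    rotational = 0.5 / (rpm / 60) * 1000           # half a rotation, on average
    transfer = sector_bytes / (rate_mb_s * 1e6) * 1000
    return seek_ms + rotational + transfer + ctrl_ms

print(round(avg_read_ms(4, 15000, 512, 100, 0.2), 1))  # 6.2
print(round(avg_read_ms(1, 15000, 512, 100, 0.2), 1))  # 3.2
```

Note that seek and rotation dominate: the 512-byte transfer itself costs only 0.005ms.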

  18. Disk Performance Issues
      - Manufacturers quote average seek time based on all possible seeks; locality and OS scheduling lead to smaller actual average seek times
      - Smart disk controllers allocate physical sectors on disk and present a logical sector interface to the host (SCSI, ATA, SATA)
      - Disk drives include caches: prefetch sectors in anticipation of access, avoiding seek and rotational delay

  19. §5.3 The Basics of Caches
      - Cache memory: the level of the memory hierarchy closest to the CPU
      - Given accesses X1, …, Xn-1, Xn:
        - How do we know if the data is present?
        - Where do we look?
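To make "where do we look" and "how do we know" concrete, here is a hypothetical direct-mapped lookup, anticipating the direct-mapped caches in the lecture title (block size, cache size, and all names are my own assumptions): the address's index picks the one possible slot, and the stored tag confirms whether the right block is there.

```python
# Toy direct-mapped cache lookup: index says where to look, tag says
# whether the data found there is the block we wanted.
BLOCK_BYTES = 16          # assumed 4-word blocks
NUM_BLOCKS = 64           # assumed 64-entry cache

def split_address(addr):
    block_addr = addr // BLOCK_BYTES
    index = block_addr % NUM_BLOCKS      # where to look
    tag = block_addr // NUM_BLOCKS       # how we know it is the right block
    return index, tag

tags = [None] * NUM_BLOCKS               # tag store, initially empty

def access(addr):
    index, tag = split_address(addr)
    hit = tags[index] == tag
    tags[index] = tag                    # on a miss, the block is brought in
    return hit

print(access(0x1234))   # False: compulsory miss
print(access(0x1234))   # True: now a hit
```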

  20. 6 Great Ideas in Computer Architecture
      1. Layers of Representation/Interpretation
      2. Moore's Law
      3. Principle of Locality/Memory Hierarchy
      4. Parallelism
      5. Performance Measurement & Improvement
      6. Dependability via Redundancy
      Dr Dan Garcia

  21. The Big Picture
      - Computer = Processor + Memory + Devices
      - Processor ("brain"): Control + Datapath ("brawn")
      - Memory (active): where programs and data live when running
      - Devices: Input (keyboard, mouse), Output (display, printer), plus disk and network

  22. Memory Hierarchy (i.e., storage in computer systems)
      - Processor: holds data in the register file (~100 bytes); registers accessed on a nanosecond timescale
      - Memory (we'll call it "main memory"): more capacity than registers (~GBytes); access time ~50-100 ns; hundreds of clock cycles per memory access?!
      - Disk: HUGE capacity (virtually limitless); VERY slow: runs in ~milliseconds

  23. Motivation: Processor-Memory Gap
      - 1989: first Intel CPU with cache on chip; 1998: Pentium III has two cache levels on chip
      - Processor performance ("Moore's Law") grows ~55%/year (2× every 1.5 years); DRAM performance grows ~7%/year (2× every 10 years)
      - The processor-memory performance gap grows ~50%/year

  24. Memory Caching
      - The mismatch between processor and memory speeds leads us to add a new level: a memory cache
      - Implemented with the same IC processing technology as the CPU (usually integrated on the same chip): faster but more expensive than DRAM memory
      - A cache is a copy of a subset of main memory
      - Most processors have separate caches for instructions and data

  25. Characteristics of the Memory Hierarchy
      - Inclusive: what is in L1$ is a subset of what is in L2$, which is a subset of what is in main memory, which in turn is a subset of what is in secondary memory
      - Moving away from the processor: increasing distance (access time) and increasing (relative) size at each level
      - Typical transfer units: 4-8 bytes (word) into the processor, 8-32 bytes (block) between L1$ and L2$, 1 to 4 blocks between L2$ and main memory, 1,024+ bytes (disk sector = page) between main memory and secondary memory

  26. Typical Memory Hierarchy
      - The trick: present the processor with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology
      - On-chip components: control, datapath, register file, ITLB/DTLB, instruction and data caches (SRAM); then a second-level cache, main memory (DRAM), and secondary memory (disk or flash)
      - Speed (#cycles): ½'s for registers, 1's for L1 caches, 10's for L2, 100's for main memory, 10,000's for secondary memory
      - Size (bytes): 100's, 10K's, M's, G's, T's; cost: highest near the processor, lowest at the bottom

  27. Memory Hierarchy
      - A level closer to the processor is smaller, faster, more expensive, and a subset of the lower levels (it contains the most recently used data)
      - The lowest level (usually disk) contains all available data (does it go beyond the disk?)
      - The memory hierarchy presents the processor with the illusion of a very large and fast memory
