CSE378 - Cache Performance
Performance metrics for caches

• Basic performance metric: hit ratio h
  – h = (number of memory references that hit in the cache) / (total number of memory references)
  – Typically h = 0.90 to 0.97
• Goal: have h as high as possible without paying too much for T_cache
• Equivalent metric: miss rate m = 1 - h
• Other important metric: average memory access time
  – Av. Mem. Access time = h * T_cache + (1 - h) * T_mem
  – where T_cache is the time to access the cache (e.g., 1 cycle) and T_mem is the time to access main memory (e.g., 50 cycles)
  – (Of course this formula has to be modified in the obvious way if you have a hierarchy of caches; a small numeric sketch follows the table below)

Parameters for cache design

• The bigger the cache size (or capacity), the higher h
  – True, but too big a cache increases T_cache
  – There is a limit on the amount of "real estate" on the chip (although this limit is not present for 1st-level caches)
• The larger the cache associativity, the higher h
  – True, but too much associativity is costly because of the number of comparators required, and it might also slow down T_cache (extra logic is needed to select the "winner")
• Block (or line) size
  – For a given application there is an optimal block size, but that optimal block size varies from application to application

Parameters for cache design (cont'd)

• Write policy (see later)
  – There are several policies; as expected, the most complex give the best results
• Replacement algorithm (for set-associative caches)
  – Not very important for caches with small associativity (it will be very important for paging systems)
• Split I- and D-caches vs. unified caches
  – First-level caches need to be split because pipelining requests an instruction every cycle; splitting also allows different design parameters for I-caches and D-caches
  – Second- and higher-level caches are unified (mostly used for data)

Example of cache hierarchies (don't quote me on these numbers)

• Alpha 21064: L1 = 8K(I) + 8K(D), WT, 1-way, 32B; L2 = 128K to 8MB, WB, 1-way, 32B
• Alpha 21164: L1 = 8K(I) + 8K(D), WT, 1-way, 32B, l-u fr.; L2 = 96K, WB, on-chip, 3-way, 32B, l-u free
• Alpha 21264: L1 = 64K(I) + 64K(D), 2-way; L2 = up to 16MB
• Pentium: L1 = 8K(I) + 8K(D), both, 2-way, 32B; L2 = depends
• Pentium II, III: L1 = 16K(I) + 16K(D), 4-way(I), 2-way(D), 32B, l-u free; L2 = 512K, WB, 32B, 4-way, tightly coupled
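As a quick numeric illustration of the access-time formula above, here is a minimal sketch (my own, not from the slides; the function names are made up):

```python
def amat(h, t_cache, t_mem):
    """Average memory access time: h * T_cache + (1 - h) * T_mem."""
    return h * t_cache + (1 - h) * t_mem

def amat_two_level(h1, t_l1, h2, t_l2, t_mem):
    """One obvious modification for a two-level hierarchy:
    an L1 miss pays the average access time of the L2/memory pair."""
    return h1 * t_l1 + (1 - h1) * amat(h2, t_l2, t_mem)

# Slide numbers: 1-cycle cache, 50-cycle memory, typical h between 0.90 and 0.97.
print(amat(0.90, 1, 50))   # 5.9 cycles
print(amat(0.97, 1, 50))   # 2.47 cycles
```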

Examples (cont'd)

• PowerPC 620: L1 = 32K(I) + 32K(D), WB, 8-way, 64B; L2 = 1MB to 128MB, WB, 1-way
• MIPS R10000: L1 = 32K(I) + 32K(D), l-u, 2-way, 32B; L2 = 512K to 16MB, 2-way, 32B
• SUN UltraSparc III: L1 = 32K(I) + 64K(D), l-u, 4-way; L2 = 4-8MB, 1-way
• AMD K7: L1 = 64K(I) + 64K(D)

Back to associativity

• Advantages
  – Reduces conflict misses
• Disadvantages
  – Needs more comparators
  – Access time is longer (need to choose among the comparisons, i.e., a multiplexor is needed)
  – A replacement algorithm is needed, and it can get more complex as associativity grows

Replacement algorithm

• None for direct-mapped caches
• Random, LRU, or pseudo-LRU for set-associative caches
  – LRU = "least recently used": the entry in the set that has not been used for the longest time is replaced (think of a stack; see the sketch after this slide)

Impact of associativity on performance

[Figure: miss ratio curves for direct-mapped, 2-way, 4-way, and 8-way caches. Typical curve: the biggest improvement comes from going direct-mapped to 2-way, then 2-way to 4-way, then the gains are incremental.]
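A minimal sketch of LRU bookkeeping for one set of a set-associative cache (my own illustration; think of the list as the "stack" the slide mentions):

```python
class LRUSet:
    """One set of an n-way set-associative cache with LRU replacement.
    Tags are kept in recency order: least recently used at the front
    (the replacement victim), most recently used at the back."""

    def __init__(self, ways):
        self.ways = ways
        self.tags = []

    def access(self, tag):
        """Return True on a hit, False on a miss (filling the block)."""
        if tag in self.tags:
            self.tags.remove(tag)      # hit: move to most-recently-used spot
            self.tags.append(tag)
            return True
        if len(self.tags) == self.ways:
            self.tags.pop(0)           # full set: evict the LRU entry
        self.tags.append(tag)
        return False

# 2-way set, accesses A B A C: C evicts B (A was used more recently).
s = LRUSet(2)
print([(t, s.access(t)) for t in "ABAC"])
```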

Impact of block size

• Recall: block size = number of bytes stored in a cache entry
• On a cache miss, the whole block is brought into the cache
• For a given cache capacity, advantages of a large block size:
  – Fewer blocks, which requires less real estate for tags
  – A lower miss rate IF the program exhibits good spatial locality
  – Better transfer efficiency between cache and main memory
• For a given cache capacity, drawbacks of a large block size:
  – Higher transfer latency
  – Unused data might be brought in IF the program exhibits poor spatial locality
  – The number of conflict/capacity misses might increase

Classifying the cache misses: the 3 C's

• Compulsory misses (cold start)
  – The first time you touch a block. Reduced (for a given cache capacity and associativity) by having large block sizes
• Capacity misses
  – The working set is too big for the ideal cache of the same capacity and block size (i.e., fully associative with an optimal replacement algorithm). The only remedy: a bigger cache!
• Conflict misses (interference)
  – Two blocks map to the same location. Increasing associativity decreases this type of miss
• There is a fourth C: coherence misses (cf. multiprocessors)

Impact of block size on performance

[Figure: miss ratio as block size varies from 8 to 128 bytes. Typical form of the curve; the knee might appear at different block sizes depending on the application and the cache capacity.]

Performance revisited

• Recall: Av. Mem. Access time = h * T_cache + (1 - h) * T_mem
• We can expand T_mem as T_mem = T_acc + b * T_tra
  – where T_acc is the time to send the address of the block to main memory and have the DRAM read the block into its own buffer,
  – T_tra is the time to transfer one word (4 bytes) on the memory bus from the DRAM to the cache (this might also depend on the width of the bus), and
  – b is the block size (in words)
• For example, if T_acc = 5 and T_tra = 1, which cache is better: C1 (b1 = 1) or C2 (b2 = 4), for a program with h1 = 0.85 and h2 = 0.92, assuming T_cache = 1 in both cases? (Worked out below.)
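Working the slide's example through (my own arithmetic, combining the two formulas above):

```python
def amat(h, t_cache, t_acc, t_tra, b):
    """Average access time with T_mem expanded to T_acc + b * T_tra."""
    return h * t_cache + (1 - h) * (t_acc + b * t_tra)

# C1: 1-word blocks, h1 = 0.85; C2: 4-word blocks, h2 = 0.92; T_cache = 1.
c1 = amat(h=0.85, t_cache=1, t_acc=5, t_tra=1, b=1)  # 0.85 + 0.15 * 6 = 1.75
c2 = amat(h=0.92, t_cache=1, t_acc=5, t_tra=1, b=4)  # 0.92 + 0.08 * 9 = 1.64
print(c1, c2)  # C2 is better: the higher hit ratio outweighs the longer transfer
```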

Writing in a cache

• On a write hit, should we write:
  – In the cache only (write-back policy)?
  – In the cache and in main memory (or the next-level cache) (write-through policy)?
• On a write miss, should we:
  – Allocate a block as on a read (write-allocate)?
  – Write only in memory (write-around)?

Write-through policy

• Write-through (aka store-through)
  – On a write hit, write both in the cache and in memory
  – On a write miss, the most frequent option is write-around, i.e., write only in memory
• Pros:
  – A consistent view of memory; memory is always coherent (better for I/O)
  – More reliable: memory units typically store extra bits with each word to detect/correct errors ("ECC" = error-correcting code), and ECC is not required for the cache if write-through is used
• Con:
  – More memory traffic (can be alleviated with write buffers)

Write-back policy

• Write-back (aka copy-back)
  – On a write hit, write only in the cache (requires a dirty bit to say that the value has changed)
  – On a write miss, most often write-allocate (fetch on miss), but variations are possible
  – We write to memory when a dirty block is replaced
• Pros and cons are the reverse of write-through's

Cutting back on write-backs

• In write-through, you write only the word (or byte) you modify
• In write-back (when finally writing to memory), you write the entire block
  – But you could keep one dirty bit per word, so that on replacement you need to write back only the words that are dirty (see the sketch below)
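A minimal sketch contrasting the two policies on a write hit, plus the per-word dirty-bit refinement (my own illustration; the "cache" here is reduced to a single 4-word block):

```python
WORDS = 4
memory = [0] * WORDS                     # backing store for one block

class Block:
    def __init__(self):
        self.data = [0] * WORDS
        self.dirty = [False] * WORDS     # one dirty bit per word (refinement)

def write_through(block, i, value):
    """Write hit: update the cache AND memory; no dirty bits needed."""
    block.data[i] = value
    memory[i] = value

def write_back(block, i, value):
    """Write hit: update the cache only, remembering what changed."""
    block.data[i] = value
    block.dirty[i] = True

def replace(block):
    """On replacement, write back only the dirty words (with a single
    dirty bit per block you would write the whole block instead)."""
    for i in range(WORDS):
        if block.dirty[i]:
            memory[i] = block.data[i]
            block.dirty[i] = False
```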

Hiding memory latency

• With write-through, the processor has to wait until memory has stored the data
• This is inefficient, since the store should not prevent the processor from continuing to work
• To speed things up, place write buffers between the cache and main memory
  – A write buffer is a (set of) temporary register(s) that holds the contents and the address of what to store in main memory
  – The store from the write buffer to main memory can proceed while the processor continues processing
• The same concept can be applied to dirty blocks in a write-back policy

Coherency: caches and I/O

• In general, I/O transfers occur directly between memory and disk
• The problem: what if the processor and the I/O are accessing the same words of memory?
  – We want the processor and the I/O to have a "coherent" view of memory
• A similar coherence problem arises with multiple CPUs
  – Each CPU accesses the same memory but keeps its own cache

Preserving coherence with I/O

• What happens on a transfer from memory to disk?
  – With write-through, memory is up to date: no problem
  – With write-back, memory is not up to date: before the I/O is done, we need to "purge" the cache entries that are dirty and will be sent to the disk
• What happens on a transfer from disk to memory?
  – The I/O may change a memory location that is currently in the cache
  – The cache entries corresponding to memory locations read from disk must be invalidated
  – This requires a valid bit in the cache (or other techniques)

Reducing cache misses with more "associativity": victim caches

• An example of a "hardware assist"
• Victim cache: a small fully-associative buffer "behind" the cache and "before" main memory
  – It can of course also exist within a cache hierarchy (behind L1 and before L2, or behind L2 and before main memory)
• Main goal: remove some of the conflict misses in direct-mapped caches (or any cache with low associativity); see the sketch below
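A minimal sketch of the victim-cache idea (my own illustration of the mechanism, not a description of any specific hardware): a direct-mapped cache whose evicted blocks land in a tiny fully-associative buffer that is probed on a miss.

```python
class DirectMappedWithVictim:
    def __init__(self, num_sets, victim_entries=4):
        self.num_sets = num_sets
        self.lines = [None] * num_sets   # one tag per set (direct-mapped)
        self.victims = []                # small fully-associative buffer (FIFO here)
        self.victim_entries = victim_entries

    def _evict_to_victim(self, index):
        """An evicted block (the "victim") goes into the victim cache."""
        block_addr = self.lines[index] * self.num_sets + index
        if len(self.victims) == self.victim_entries:
            self.victims.pop(0)
        self.victims.append(block_addr)

    def access(self, block_addr):
        index = block_addr % self.num_sets
        tag = block_addr // self.num_sets
        if self.lines[index] == tag:
            return "hit"
        if block_addr in self.victims:   # conflict miss caught by the victim cache:
            self.victims.remove(block_addr)
            if self.lines[index] is not None:
                self._evict_to_victim(index)   # swap the two blocks
            self.lines[index] = tag
            return "victim hit"
        if self.lines[index] is not None:
            self._evict_to_victim(index)
        self.lines[index] = tag
        return "miss"

# Blocks 0 and 8 conflict in an 8-set direct-mapped cache; with the victim
# buffer they ping-pong instead of missing every time:
c = DirectMappedWithVictim(num_sets=8)
print([c.access(a) for a in (0, 8, 0, 8)])  # miss, miss, victim hit, victim hit
```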
