RESISTIVE MEMORY TECHNOLOGY, Mahdi Nazm Bojnordi, Assistant Professor (PowerPoint presentation)



SLIDE 1

RESISTIVE MEMORY TECHNOLOGY

CS/ECE 7810: Advanced Computer Architecture

Mahdi Nazm Bojnordi

Assistant Professor, School of Computing, University of Utah

SLIDE 2

Overview

- Upcoming deadlines
  - March 29th: Sign up for your student paper presentation
- This lecture
  - Resistive memory technology
  - Write optimization techniques
  - Wear leveling
  - MLC technologies

SLIDE 3

Resistive Memory Technology

- Main benefits
  - Non-volatile memory
  - Multi-level storage
  - Denser cells
  - Better scalability
- Shortcomings
  - Limited endurance
  - High switching delay and energy

What can we do?

SLIDE 4

Comparison of Technologies

- Compared to NAND Flash, PCM is byte-addressable and has orders of magnitude lower latency and higher endurance

                     DRAM             PCM                   NAND Flash
Page size            64B              64B                   4KB
Page read latency    20-50ns          ~50ns                 ~25µs
Page write latency   20-50ns          ~1µs                  ~500µs
Write bandwidth      ~GB/s per die    50-100 MB/s per die   5-40 MB/s per die
Erase latency        N/A              N/A                   ~2ms
Endurance            N/A              10^6-10^8             10^4-10^5
Read energy          0.8 J/GB         1 J/GB                1.5 J/GB [28]
Write energy         1.2 J/GB         6 J/GB                17.5 J/GB [28]
Idle power           ~100 mW/GB       ~1 mW/GB              1-10 mW/GB
Density              1x               2-4x                  4x

Sources: [Doller ’09] [Lee et al. ’09] [Qureshi et al. ‘09]

SLIDE 5

Comparison of Technologies

- Compared to DRAM, PCM has better density and scalability, with similar read latency but longer write latency

Sources: [Doller ’09] [Lee et al. ’09] [Qureshi et al. ‘09]

(The comparison table from Slide 4 is repeated here.)

SLIDE 6

Latency Comparison

[Figure: read and write latencies of DRAM, PCM, NAND Flash, and hard disk on a log scale from 10ns to 10ms.]

[Qureshi’09]

SLIDE 7

[Figure: read-compare-write of a cache line into PCM; only the modified bits are programmed.]

- A cache line is written in several cycles
- Read-compare-write (differential write): write only the modified bits rather than the entire cache line
- Parts of the line with no modified bits can be skipped
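The read-compare-write flow above can be sketched in a few lines of Python. The function name and the byte-granularity loop are illustrative, not part of the original scheme; it only shows how comparing old and new contents isolates the bits that actually need programming:

```python
def differential_write(old_line: bytes, new_line: bytes) -> int:
    """Sketch of read-compare-write (differential write).

    Compares the old cache-line contents (the "read" step) against the
    incoming data and counts only the bits that differ, i.e. the bits a
    PCM array would actually have to program.  Returns that bit count.
    """
    flips = 0
    for old_byte, new_byte in zip(old_line, new_line):
        changed = old_byte ^ new_byte      # bit positions that differ
        if changed == 0:
            continue                       # no modified bits: skip this part
        flips += bin(changed).count("1")   # program only the modified bits
    return flips

# Rewriting a 4-byte line where a single bit changed programs 1 bit,
# not all 32.
programmed = differential_write(b"\x0f\x0f\x0f\x0f", b"\x0f\x0f\x0f\x0e")
```

Since PCM writes dominate both delay and energy, reducing the programmed bit count in this way directly attacks the shortcomings listed on Slide 3.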

SLIDE 8

Reducing Bit Flips

- Flip-N-Write [MICRO’09]: encode the write data in either its regular or its inverted form, then pick the encoding that yields fewer flips when compared against the old data. [Figure: example where the inverted form saves 4 bit flips.]
- FlipMin [HPCA’13]: encode the write data into a set of data vectors, then pick the vector that yields fewer flips when compared against the old data. [Figure: example where the best vector saves 5 bit flips.]
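Flip-N-Write's encoding choice can be sketched as follows; this is a minimal model of the selection step only (the word width and function name are illustrative), assuming one extra flag bit per word records whether the stored form is inverted:

```python
def flip_n_write(old: int, new: int, width: int = 16):
    """Sketch of Flip-N-Write's encoding choice.

    Stores the new word either as-is or bit-inverted, whichever differs
    from the old stored contents in fewer positions.  Returns the stored
    word, the flag bit (1 = inverted), and the number of bit flips.
    """
    mask = (1 << width) - 1
    inverted = new ^ mask                           # bitwise complement
    flips_regular = bin(old ^ new).count("1")       # Hamming distance, as-is
    flips_inverted = bin(old ^ inverted).count("1") # Hamming distance, inverted
    if flips_inverted < flips_regular:
        return inverted, 1, flips_inverted          # flag = 1: stored inverted
    return new, 0, flips_regular                    # flag = 0: stored as-is

# Old word 0xF0F0, new word 0x0F0D: writing as-is flips 15 bits, but the
# inverted form differs from the old contents in only 1 bit.
stored, flag, flips = flip_n_write(0b1111_0000_1111_0000, 0b0000_1111_0000_1101)
```

Because one of the two encodings always differs from the old data in at most half the positions, this bounds the flips per word to width/2 plus the flag bit.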

SLIDE 9

Limited Lifetime

- Challenge: each cell can endure 10-100 million writes
- With uniform write traffic, system lifetime ranges from 4 to 20 years

[Figure: lifetime per workload, ranging from about 4 to 16 years.]

[Qureshi’09]
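The 4-20 year range follows from simple arithmetic: with perfectly uniform wear, lifetime is the total writable volume (capacity times per-cell endurance) divided by the sustained write rate. The numbers below are illustrative assumptions, not figures from the slide:

```python
def uniform_lifetime_years(capacity_bytes: float,
                           endurance_writes: float,
                           write_bytes_per_s: float) -> float:
    """Upper-bound lifetime under ideal (perfectly uniform) wear leveling:
    total writable bytes divided by the sustained write bandwidth."""
    seconds = capacity_bytes * endurance_writes / write_bytes_per_s
    return seconds / (365 * 24 * 3600)

# Assumed example: a 16 GB PCM module, 10^7 writes per cell, written
# continuously at 1 GB/s, lasts roughly 5 years.
years = uniform_lifetime_years(16e9, 1e7, 1e9)
```

Non-uniform traffic, covered on the next slides, wears out the hottest lines far sooner than this bound suggests.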

SLIDE 10

Non-Uniform Writes

- Even with 64K spare lines, the baseline achieves only 5% of the ideal lifetime

[Figure: distribution of writes across lines, with the average marked.]

[Qureshi’09]

SLIDE 11

Impact of Non-Uniformity

- Even with 64K spare lines, the baseline achieves only 5% of the ideal lifetime (20x lower)

Normalized Endurance = (number of writes before system failure / number of writes before failure with uniform writes) x 100%

[Figure: normalized endurance (%) for oltp, db1, db2, fft, stride, stress, and their geometric mean; baseline without spares vs. baseline with 64K spare lines.]

[Qureshi’09]

SLIDE 12

Making Writes Uniform

- Wear leveling: make writes uniform by remapping frequently written lines

Line Addr.   Lifetime Count   Period Count
A            99K (Low)        1K (Low)
B            100K (Med)       3K (High)
C            101K (High)      2K (Med)

=>

Line   Remap Addr.
A      C
B      A
C      B

[Figure: an indirection table translating each physical address to a PCM address.]

[Qureshi’09]
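One simple remapping policy can be sketched as follows. This is an illustrative sketch, not the slide's exact mapping: it pairs the hottest line of the period with the least-worn line, whereas the slide's example rotates several lines at once. The function name and dictionary representation are hypothetical:

```python
def remap_hot_lines(lifetime_count: dict, period_count: dict) -> dict:
    """Sketch of table-based wear leveling.

    Swaps the line written most in the current period (highest period
    count) with the least-worn line overall (lowest lifetime count),
    recording the swap in an indirection table that maps each line's
    physical address to its PCM address.
    """
    hot = max(period_count, key=period_count.get)      # most writes this period
    cold = min(lifetime_count, key=lifetime_count.get) # least worn overall
    table = {line: line for line in lifetime_count}    # start from identity
    table[hot], table[cold] = table[cold], table[hot]  # swap the pair
    return table

# With the slide's counts, B is hot (3K writes this period) and A is the
# least worn (99K lifetime writes), so B and A trade places.
table = remap_hot_lines({"A": 99_000, "B": 100_000, "C": 101_000},
                        {"A": 1_000, "B": 3_000, "C": 2_000})
```

The next slide discusses why maintaining such a table at line granularity is expensive, motivating Start-Gap.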

SLIDE 13

How to Remap

- Tables
  - Area of several (tens of) megabytes
  - Indirection latency (table in eDRAM/DRAM)
- Area overhead can be reduced with more lines per region
  - Reduced effectiveness (e.g., Line0 always written)
  - Requires support for swapping large memory regions (complex)

[Qureshi’09]

SLIDE 14

Start-Gap Wear Leveling

- Two registers (Start and Gap) plus one line (GapLine) to support movement
- Move the GapLine every 100 writes to memory

[Figure: lines A, B, C, D with the START pointer at the top and the GAP moving through them.]

PCMAddr = (Start + Addr) mod N; if (PCMAddr >= Gap) PCMAddr++;

- Storage overhead: less than 8 bytes (the GapLine is taken from spares)
- Latency: two additions (no table lookup)
- Write overhead: one extra write every 100 writes, i.e., 1%

[Qureshi’09]
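The translation rule and gap movement can be sketched as a small Python model. The class name and in-memory bookkeeping are illustrative; real hardware keeps Start and Gap in registers and physically copies a line's data each time the gap moves:

```python
class StartGap:
    """Sketch of Start-Gap wear leveling: N logical lines map into N + 1
    physical lines, with one spare GapLine that rotates through memory,
    shifting the whole mapping by one line per full rotation."""

    def __init__(self, num_lines: int, gap_interval: int = 100):
        self.n = num_lines        # logical lines (physical lines = n + 1)
        self.start = 0            # Start register
        self.gap = num_lines      # Gap register: physical index of the GapLine
        self.interval = gap_interval
        self.writes = 0

    def translate(self, addr: int) -> int:
        """PCMAddr = (Start + Addr) mod N; skip over the GapLine."""
        pcm_addr = (self.start + addr) % self.n
        if pcm_addr >= self.gap:
            pcm_addr += 1
        return pcm_addr

    def on_write(self) -> None:
        """Every `interval` writes, move the GapLine up by one physical
        line (hardware would copy the displaced line's data into the old
        gap).  When the gap wraps around, advance Start: the mapping has
        rotated by one line."""
        self.writes += 1
        if self.writes % self.interval == 0:
            self.gap -= 1
            if self.gap < 0:
                self.gap = self.n
                self.start = (self.start + 1) % self.n

sg = StartGap(num_lines=8)
before = sg.translate(7)          # gap still below all lines: identity
for _ in range(100):
    sg.on_write()                 # 100 writes: gap moves one line
after = sg.translate(7)           # the line at the gap is skipped over
```

Translation costs two additions and a comparison, with no table lookup, matching the overhead summary above.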

SLIDE 15

Start-Gap Results

- On average, Start-Gap achieves 53% normalized endurance

[Figure: normalized endurance (%) for oltp, db1, db2, fft, stride, stress, and their geometric mean; Baseline vs. Start-Gap vs. Perfect.]

[Qureshi’09]

SLIDE 16

Multi-Level Cells

[Figure: cell voltage over time, with the four MLC states 11, 00, 01, 10 as distinct levels.]

[Yoon’14]

SLIDE 17

Sensing Multi-level Cells

[Yoon’14]

SLIDE 18

Multi-Level Cells

[Figure: cell voltage over time, with the four MLC states 11, 00, 01, 10 as distinct levels.]

[Yoon’14]

SLIDE 19

Multi-Level Cells

[Figure: cell voltage over time for states 11, 00, 01, 10, marking the time needed to determine Bit 1's value.]

[Yoon’14]

SLIDE 20

Multi-Level Cells

[Figure: cell voltage over time for states 11, 00, 01, 10, marking the longer time needed to determine Bit 0's value.]

[Yoon’14]

SLIDE 21

Decoupled Bit Mapping

- MLC-PCM cell: Bit 1 (fast read), Bit 0 (fast write)

Coupled (baseline): contiguous logical bits alternate between fast-read (FR) and fast-write (FW) positions
[Figure: bits 0-15 spread across eight 2-bit cells, with FR and FW bits interleaved.]

Decoupled: contiguous regions alternate between FR and FW
[Figure: bits 0-7 mapped to the FR positions and bits 8-15 to the FW positions of the same cells.]

[Yoon’14]
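The two mappings can be sketched for eight 2-bit cells. The function names are hypothetical, and the sketch assumes each cell's first position is its fast-read (FR) bit and its second position its fast-write (FW) bit, as labeled on the slide:

```python
def coupled_map(logical_bit: int, cells: int):
    """Baseline (coupled) mapping: logical bits 0 and 1 live in cell 0,
    bits 2 and 3 in cell 1, and so on, so contiguous logical bits
    alternate between the FR and FW positions of each 2-bit cell."""
    cell = logical_bit // 2
    position = "FR" if logical_bit % 2 == 0 else "FW"
    return cell, position

def decoupled_map(logical_bit: int, cells: int):
    """Decoupled mapping: the first half of the logical bits all occupy
    FR positions and the second half all occupy FW positions, forming
    contiguous fast-read and fast-write regions."""
    if logical_bit < cells:
        return logical_bit, "FR"
    return logical_bit - cells, "FW"

# Coupled: neighbouring logical bits alternate FR/FW within a cell.
# Decoupled: bits 0-7 form an FR region, bits 8-15 an FW region.
pairs_coupled = [coupled_map(b, 8) for b in range(16)]
pairs_decoupled = [decoupled_map(b, 8) for b in range(16)]
```

Applied at 4KB granularity, this yields whole pages that are uniformly fast-read or fast-write, which the next slide's OS page allocator exploits.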

SLIDE 22

- By decoupling, we've created regions with distinct characteristics
  - We examine the use of 4KB regions (e.g., the OS page size)
- We want to match frequently read data to fast-read pages and frequently written data to fast-write pages
- Toward this end, we propose a new OS page allocation scheme

[Figure: the physical address space partitioned into fast read pages and fast write pages.]

[Yoon’14]

SLIDE 23

Performance Results

[Figure: normalized speedup of the Conventional, All fast write, All fast read, DBM, DBM+APM+SRB, and Ideal configurations, with labeled gains of +19%, +10%, +16%, +13%, and +31%.]

[Yoon’14]