Managing Non-Volatile Memory in Database Systems -- PowerPoint PPT Presentation

SLIDE 1

Managing Non-Volatile Memory in Database Systems

Alexander van Renen, Viktor Leis, Alfons Kemper, Thomas Neumann; Takushi Hashida, Kazuichi Oe, Yoshiyasu Doi, Lilian Harada, Mitsuru Sato

  • Originally presented at SIGMOD 2018
SLIDE 2


Terminology, Assumptions, and Background

  • For this talk: NVM (also called PMem, NVRAM, SCM, or NVMM)
  • NVM assumptions:
    • NVM is byte-addressable
    • NVM has a higher access latency than DRAM
    • NVM has a lower cost/GB than DRAM
    • NVM has a higher capacity than DRAM
  • Sources:
    • Paper: https://db.in.tum.de/~leis/papers/nvm.pdf
    • Video: https://youtu.be/6RRe_cmDl0U
SLIDE 3


Database Architectures

Main Memory DBs

  • Primary data location in DRAM
  • Snapshots written to SSD
  • Logging to SSD

Disk-based DBs

  • Primary data location on disk
  • Loaded to DRAM for processing
  • Logging to SSD
SLIDE 4


Database Architectures

Main Memory DBs

  • Primary data location in DRAM
  • Snapshots written to SSD
  • Logging to SSD

How do we change the database architecture for NVM?

Disk-based DBs

  • Primary data location on disk
  • Loaded to DRAM for processing
  • Logging to SSD
SLIDE 5

NVM-direct Approach


In-place updates

  • Requires failure atomicity
  • High NVM latency
  • No DRAM
  • No SSD

CDDS-Tree [VLDB 2015], NV-Tree [FAST 2015], wB-Tree [VLDB 2015], FP-Tree [SIGMOD 2016], WO[A]RT/ART+CoW [FAST 2017], HiKV [USENIX ATC 2017], Bz-Tree [VLDB 2018], BDCC+NVM [ICDE 2018], SAP Hana for NVM [VLDB 2017]

Root pointer
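The "requires failure atomicity" bullet is the crux of the NVM-direct approach: stores become durable only when their cache lines are flushed, so a structure must persist new data out of place before publishing it via a single atomic root-pointer store. A minimal Python sketch of that ordering (our own model, not code from any of the cited systems; `NVM`, `store`, and `flush` are invented names standing in for cached stores plus clwb/sfence on real hardware):

```python
# Sketch: why NVM-direct updates need failure atomicity. We model NVM
# as persisted state plus a volatile cache buffer; a write reaches NVM
# only after an explicit flush, and a crash drops unflushed writes.

class NVM:
    def __init__(self):
        self.persisted = {}   # survives a crash
        self.cached = {}      # dirty cache lines, lost on crash

    def store(self, addr, value):
        self.cached[addr] = value

    def flush(self, addr):    # clwb + sfence equivalent
        self.persisted[addr] = self.cached.pop(addr)

    def crash(self):
        self.cached.clear()

def update_root(nvm, node_addr, node, root_addr):
    # 1. Write the new node out of place and persist it first.
    nvm.store(node_addr, node)
    nvm.flush(node_addr)
    # 2. Only then publish it via one atomic root-pointer store.
    nvm.store(root_addr, node_addr)
    nvm.flush(root_addr)

nvm = NVM()
update_root(nvm, "node1", {"key": 42}, "root")
nvm.crash()
# After recovery, the root points either to the old tree or to the
# fully persisted new node -- never to a torn, half-written node.
assert nvm.persisted["root"] == "node1"
assert nvm.persisted["node1"] == {"key": 42}
```

Flipping the two steps would be the classic bug: a crash between publishing the root and flushing the node leaves the root pointing at garbage.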

SLIDE 6

NVM-direct Data Structures

[Data Management on Non-Volatile Memory: A Perspective, Datenbank-Spektrum 2018]

SLIDE 8

Buffered Approach


Out-of-place updates

  • No byte-addressability
  • No SSD

FOEDUS [SIGMOD 2015], SAP Hana for NVM [VLDB 2017]
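The buffered approach treats NVM like a faster disk: whole pages are copied into a DRAM buffer pool before access and written back out of place on eviction, which is why byte-addressability goes unused. A small Python sketch of the idea (our own illustration with invented names, not the FOEDUS or Hana implementation):

```python
# Sketch of the buffered (out-of-place) approach: NVM holds full
# pages; access copies an entire page into DRAM, and eviction writes
# the entire page back, never individual bytes.

PAGE_SIZE = 16 * 1024  # 16 KB pages

class BufferedPool:
    def __init__(self, nvm_pages, capacity):
        self.nvm = nvm_pages   # page_id -> bytearray(PAGE_SIZE)
        self.dram = {}         # cached full-page copies
        self.capacity = capacity

    def fix(self, pid):
        if pid not in self.dram:
            if len(self.dram) >= self.capacity:
                self.evict()
            self.dram[pid] = bytearray(self.nvm[pid])  # full-page copy in
        return self.dram[pid]

    def evict(self):
        pid, page = next(iter(self.dram.items()))
        self.nvm[pid] = bytearray(page)  # full-page write back
        del self.dram[pid]

nvm = {0: bytearray(PAGE_SIZE), 1: bytearray(PAGE_SIZE)}
pool = BufferedPool(nvm, capacity=1)
pool.fix(0)[0] = 7   # modify page 0 in its DRAM copy
pool.fix(1)          # capacity 1: fixing page 1 evicts page 0 to NVM
assert nvm[0][0] == 7
```

Every miss moves 16 KB even if one byte is needed, which motivates the cache-line-grained techniques later in the talk.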

SLIDE 9

State of the Art


SLIDE 10

The Ideal System: “Dream Chart”


SLIDE 11
1. Cache-Line-Grained Loading


We transfer individual cache lines (64 bytes) instead of entire pages (16 KB) between DRAM and NVM.
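The idea above can be sketched with a residency bitmap per DRAM page frame: a 16 KB page has 256 cache lines, and a miss copies only the touched 64-byte line from NVM. This is our own illustration with invented names, not the paper's implementation:

```python
# Sketch of cache-line-grained loading: a DRAM frame records which
# 64-byte lines of its 16 KB NVM page are resident and faults in
# only the lines an access actually touches.

LINE = 64
PAGE = 16 * 1024
LINES_PER_PAGE = PAGE // LINE  # 256 lines per page

class CacheLinePage:
    def __init__(self, nvm_page):
        self.nvm = nvm_page
        self.frame = bytearray(PAGE)
        self.resident = [False] * LINES_PER_PAGE
        self.lines_loaded = 0

    def read(self, offset, length):
        first = offset // LINE
        last = (offset + length - 1) // LINE
        for line in range(first, last + 1):
            if not self.resident[line]:
                lo = line * LINE
                self.frame[lo:lo + LINE] = self.nvm[lo:lo + LINE]
                self.resident[line] = True
                self.lines_loaded += 1
        return self.frame[offset:offset + length]

page = CacheLinePage(bytearray(range(256)) * (PAGE // 256))
page.read(100, 8)   # touches only line 1 (bytes 64..127)
assert page.lines_loaded == 1   # 64 bytes transferred, not 16 KB
```

A point lookup that reads a handful of lines thus costs a few hundred bytes of NVM traffic instead of a full 16 KB page transfer.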

SLIDE 17
2. Mini Pages


We implement mini pages, which store only 16 cache lines (~1 KB instead of 16 KB).
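A mini page pairs a small slot array with storage for just 16 cache lines; if a 17th distinct line is needed, the mini page is promoted to a full page. A Python sketch of that mechanism (our own illustration with invented names; promotion is only flagged here, not performed):

```python
# Sketch of a mini page: up to 16 resident 64-byte lines plus a
# mapping from line number to slot, so a sparsely accessed page
# costs ~1 KB of DRAM instead of a full 16 KB frame.

LINE = 64
SLOTS = 16

class MiniPage:
    def __init__(self, nvm_page):
        self.nvm = nvm_page
        self.slot_of = {}                    # line number -> slot index
        self.data = bytearray(SLOTS * LINE)  # ~1 KB of DRAM
        self.promoted = False

    def load_line(self, line):
        if line in self.slot_of or self.promoted:
            return
        if len(self.slot_of) == SLOTS:
            self.promoted = True  # overflow: fall back to a full page
            return
        slot = len(self.slot_of)
        self.slot_of[line] = slot
        self.data[slot * LINE:(slot + 1) * LINE] = \
            self.nvm[line * LINE:(line + 1) * LINE]

page = MiniPage(bytearray(16 * 1024))
for line in range(16):
    page.load_line(line)   # 16 lines fit in the mini page
assert not page.promoted
page.load_line(16)         # the 17th distinct line overflows the slots
assert page.promoted
```

The bet is that most buffered pages are accessed sparsely, so the common case fits in the 16 slots and full promotion stays rare.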

SLIDE 20
3. Pointer Swizzling


We use pointer swizzling and low-overhead replacement strategies to reduce the buffer manager cost.
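With pointer swizzling, a reference inside a hot page holds either a direct in-memory reference (no buffer-pool lookup) or a tagged page id, much as a real implementation tags the low bit of a pointer-sized word. A Python sketch of the idea (our own illustration; `Swip` and `resolve` are invented names):

```python
# Sketch of pointer swizzling: a child reference starts as a page id
# with its low bit set; the first resolve faults the page in and
# replaces the id with a direct reference, so later accesses skip
# the buffer-pool lookup entirely.

TAG = 1  # low bit marks an unswizzled page id

class Swip:
    def __init__(self, page_id):
        self.value = (page_id << 1) | TAG  # start unswizzled

    def resolve(self, buffer_pool):
        if isinstance(self.value, int) and self.value & TAG:
            pid = self.value >> 1
            page = buffer_pool.setdefault(pid, {"id": pid})  # fault in
            self.value = page  # swizzle: store the direct reference
        return self.value

pool = {}
swip = Swip(page_id=7)
first = swip.resolve(pool)    # one buffer-pool lookup, then swizzled
second = swip.resolve(pool)   # direct reference, no lookup
assert first is second and first["id"] == 7
```

Eviction must do the reverse (unswizzle back to a tagged id), which is why swizzling is paired with the low-overhead replacement strategies mentioned above.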

SLIDE 21

Performance Impact of Techniques


Buffer management needs to be tuned for NVM.

SLIDE 22

The Ideal System: “Dream Chart”


SLIDE 23
4. Utilize SSDs


By using fixed-size pages, we can extend the maximum possible workload size with SSDs.
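Because pages have a fixed size, the SSD can serve as a third tier below NVM: the SSD holds the full dataset, NVM caches hot pages, and DRAM caches the hottest. A Python sketch of the eviction cascade (our own illustration with invented names and tiny capacities; no replacement policy beyond LIFO eviction is modeled):

```python
# Sketch of a three-tier hierarchy: a DRAM miss probes NVM, then SSD;
# DRAM eviction pushes pages to NVM, and NVM overflow spills to SSD,
# so the workload can exceed both DRAM and NVM capacity.

class ThreeTier:
    def __init__(self, ssd, dram_cap, nvm_cap):
        self.ssd, self.dram, self.nvm = ssd, {}, {}
        self.dram_cap, self.nvm_cap = dram_cap, nvm_cap

    def get(self, pid):
        if pid in self.dram:
            return self.dram[pid]
        page = self.nvm.pop(pid, None) or self.ssd[pid]
        if len(self.dram) >= self.dram_cap:       # DRAM evicts to NVM
            victim, vpage = self.dram.popitem()
            if len(self.nvm) >= self.nvm_cap:     # NVM spills to SSD
                nvictim, npage = self.nvm.popitem()
                self.ssd[nvictim] = npage
            self.nvm[victim] = vpage
        self.dram[pid] = page
        return page

ssd = {pid: f"page{pid}" for pid in range(100)}  # dataset > NVM size
tiers = ThreeTier(ssd, dram_cap=2, nvm_cap=4)
for pid in range(10):
    assert tiers.get(pid) == f"page{pid}"
assert len(tiers.dram) <= 2 and len(tiers.nvm) <= 4
```

Fixed-size pages make each tier crossing a simple block copy, which is what lets SSDs extend the maximum workload size without changing the upper tiers.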

SLIDE 24

Conclusion
