SLIDE 1

High Performance Transactions in Deuteronomy

Justin Levandoski, David Lomet, Sudipta Sengupta, Ryan Stutsman, and Rui Wang

Microsoft Research

SLIDE 2

Overview

Deuteronomy: componentized DB stack

Separates transaction, record, and storage management

Deployment flexibility: reusable in many systems and applications

Conventional wisdom: layering is incompatible with performance

Answer: build from the ground up for modern hardware

Lock/latch-freedom, multiversion concurrency control, cache-coherence-friendly techniques

Result: 1.5M TPS

Performance rivals in-memory database systems, yet with clean separation, and it works even when the data is not all in memory

SLIDE 3

The Deuteronomy Database Architecture

[Figure: the Transactional Component (TC) sits above the Data Component (DC); record operations (~CRUD) flow from TC to DC, alongside control operations (exactly once, WAL, checkpointing)]

TC guarantees ACID
Logical concurrency control
Logical recovery
No knowledge of physical data storage

DC provides record storage
Physical data storage
Atomic record modifications
No knowledge of transactions or multiversioning
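To make the split concrete, here is a minimal sketch of what the TC/DC contract could look like; the interface names and signatures are illustrative assumptions, not the actual Deuteronomy API.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Hypothetical sketch of the TC/DC contract; names are illustrative,
// not the actual Deuteronomy API.
using Key = std::string;
using Value = std::string;
using Lsn = uint64_t;  // log sequence number assigned by the TC's log

// Data Component: atomic record operations, no knowledge of transactions.
struct DataComponent {
    virtual ~DataComponent() = default;
    // Record operations (~CRUD), each atomic on its own.
    virtual void Upsert(const Key& k, const Value& v, Lsn lsn) = 0;
    virtual void Erase(const Key& k, Lsn lsn) = 0;
    virtual std::optional<Value> Read(const Key& k) = 0;
    // Control operations: exactly-once replay, WAL ordering, checkpointing.
    virtual Lsn HighestAppliedLsn() const = 0;
    virtual void Checkpoint(Lsn upTo) = 0;
};
```

The key design point is that the DC's interface mentions LSNs and checkpoints but never transactions: all transactional logic stays in the TC.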

SLIDE 4

Deployment Flexibility

[Figure: five configurations composing TCs and DCs, from a single embedded DC up to quorum-replicated DCs behind a TC]

Embeddable Key-Value Store
Embeddable Transactional Store
Networked Transactional Store
Scale-out Transactional Store
Fault-tolerant Scale-out Transactional Store

SLIDE 5

The First Implementation

[Figure: the original TC (lock manager, log manager, record manager) over the DC, connected by record and control operations]

Bottlenecked on remote operations performed under locks

[Chart: operations per second (log scale, 1 to 10,000,000) for the TC versus the Bw-tree DC; the original TC trails the DC by roughly 250×]

SLIDE 6

The New Transactional Component

SLIDE 7

Key Mechanisms for Millions of TPS

Multiversion concurrency control (MVCC)
Transactions never block one another
Multiversioning limited to the TC only

Lock and latch freedom throughout
Buffer management, concurrency control, caches, allocators, …

In-memory recovery log buffers double as the version cache
Redo-only recovery doubles in-memory cache density
Only committed versions are sent to the DC, shipped in log-buffer units

TC and DC run on separate sockets (or machines)
Task parallelism/pipelining to gain performance
Data-parallel when possible, but not at the expense of the user

Goals: eliminate blocking, mitigate latency, maximize concurrency

SLIDE 8

TC Overview

MVCC enforces serializability
Recovery log acts as the version cache
Log buffers batch updates to the DC
Parallel log-replay engine at the DC

[Figure: the MVCC table and version manager sit over volatile and stable in-memory recovery-log buffers plus a read cache; DC reads and DC updates flow to the Data Component (DC)]

SLIDE 9

Latch-free Multiversion Concurrency Control

SLIDE 10

Timestamp MVCC

Each transaction has a timestamp assigned on begin

Transactions read, write, and commit at that timestamp

Each version is marked with a create timestamp and a last-read timestamp
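As a concrete illustration of these rules, here is a minimal sketch in the style of classic multiversion timestamp ordering; the struct fields and helper names are assumptions for exposition, not Deuteronomy's actual data structures.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative version record for timestamp-order MVCC (field names are
// assumptions, not the paper's structs).
struct Version {
    uint64_t createTs;              // timestamp of the creating transaction
    std::atomic<uint64_t> readTs;   // highest timestamp that has read it
    bool committed;
    Version* next;                  // next-older version
};

// A reader at timestamp ts sees the newest committed version with
// createTs <= ts.
const Version* Visible(const Version* v, uint64_t ts) {
    for (; v != nullptr; v = v->next)
        if (v->committed && v->createTs <= ts) return v;
    return nullptr;
}

// A writer at timestamp ts must abort if a later transaction already read
// the version it would overwrite (classic multiversion timestamp order).
bool WriteAllowed(const Version& newest, uint64_t ts) {
    return newest.readTs.load() <= ts;
}
```

Reads never block: visibility is a pure list traversal, and the only shared-memory write a reader performs is advancing the read timestamp, as shown on a later slide.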

SLIDE 11

Latch-free MVCC Table

[Figure: the version manager's hash table; bucket chains hold record entries (key, version list, read time) whose version metadata (log offset, create TxID) points into the in-memory recovery log buffers, read cache, and DC]

Records chained in hash table buckets

SLIDE 12

Latch-free MVCC Table

[Figure: the same table; each record entry chains an ordered, newest-first version list]

Ordered version lists chained off each record
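A possible in-memory layout matching this picture (illustrative, not the paper's exact structs): entries hold only compact metadata and latch-free pointers, with payloads left in the recovery-log buffers.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative layout: the table keeps small, cacheline-friendly metadata,
// while version payloads stay in the recovery-log buffers and read cache.
struct VersionEntry {                    // one per version, newest first
    uint64_t logOffset;                  // payload location in the log buffers
    uint64_t createTxId;                 // creating transaction (status lookup)
    std::atomic<VersionEntry*> next;     // next-older version
};

struct RecordEntry {                     // one per key, chained in a bucket
    uint64_t keyHash;
    std::atomic<uint64_t> readTs;        // last read time for the record
    std::atomic<VersionEntry*> versions; // ordered version list
    std::atomic<RecordEntry*> nextInBucket;
};
```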

SLIDE 13

Latch-free MVCC Table

[Figure: each version's metadata records the creating transaction's TxID]

The TxID gives the version's status and create timestamp

SLIDE 14

Latch-free MVCC Table: Reads

[Figure: a reader walks Key A's version list]

Read: find a visible, committed version; compare-and-swap the read timestamp forward
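A hedged sketch of this read path, under the simplified types used earlier (names are assumptions): the reader traverses the version list and advances the record's read timestamp with a CAS loop, never taking a latch.

```cpp
#include <atomic>
#include <cstdint>

// Simplified types (see the earlier sketches); enough for the read path.
struct Version {
    uint64_t createTs;
    bool committed;
    Version* next;
};
struct RecordEntry {
    std::atomic<uint64_t> readTs;      // last read time, advanced by CAS
    std::atomic<Version*> versions;    // newest-first version list
};

// Latch-free read: find a visible committed version, then CAS the record's
// read timestamp forward so conflicting later writers can be detected.
const Version* MvccRead(RecordEntry& rec, uint64_t ts) {
    const Version* v = rec.versions.load();
    while (v && !(v->committed && v->createTs <= ts)) v = v->next;
    if (!v) return nullptr;            // not cached here: fetch from the DC
    uint64_t cur = rec.readTs.load();
    while (cur < ts && !rec.readTs.compare_exchange_weak(cur, ts)) {
        // cur is reloaded on failure; loop exits once readTs >= ts
    }
    return v;
}
```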

SLIDE 15

Latch-free MVCC Table: Reads

[Figure: version metadata points straight into the in-memory recovery log buffers]

Data is pointed to directly in the in-memory recovery log buffers

SLIDE 16

Latch-free MVCC Table: Reads

[Figure: each hop on the lookup path, bucket, record entry, version entries, and log payload, is a potential cache miss]

All metadata entries are cacheline sized
6 cache misses in the common case
The work of indexing is done by concurrency control

SLIDE 17

Latch-free MVCC Table: Writes

[Figure: a writer targets Key A]

Append the new version to the in-memory log

SLIDE 18

Latch-free MVCC Table: Writes

[Figure: a new version-metadata entry (log offset, create TxID) is built, pointing at the appended log record]

Create new version metadata that points to it

SLIDE 19

Latch-free MVCC Table: Writes

[Figure: the new entry is spliced at the head of Key A's version list via compare-and-swap]

Install the version atomically with compare-and-swap
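Putting the three write steps together, a minimal sketch of the final installation (types simplified; conflict checks are assumed to have passed beforehand):

```cpp
#include <atomic>
#include <cstdint>

// Simplified types; the payload was already appended to the in-memory log.
struct VersionEntry {
    uint64_t logOffset;
    uint64_t createTxId;
    std::atomic<VersionEntry*> next{nullptr};
};
struct RecordEntry { std::atomic<VersionEntry*> versions{nullptr}; };

// Splice the new version at the head of the list with a single
// compare-and-swap; on a race, re-link against the new head and retry.
void Install(RecordEntry& rec, VersionEntry* v) {
    VersionEntry* head = rec.versions.load();
    do {
        v->next.store(head, std::memory_order_relaxed);
    } while (!rec.versions.compare_exchange_weak(head, v));
}
```

Because the splice is a single CAS on the list head, writers never latch the record; a lost race simply retries against the new head.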

SLIDE 20

MVCC Garbage Collection

Track:
Oldest active transaction (OAT)
Version-application progress at the DC

Remove versions that are older than the OAT and already applied at the DC
Later requests for the most recent version of the record go to the DC
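The reclamation rule can be stated as a simple predicate; this sketch assumes surrounding machinery (OAT tracking, DC progress reporting) that the slide only names.

```cpp
#include <cstdint>

// Sketch of the reclamation rule; the machinery that tracks the OAT and
// the DC's apply progress is assumed to exist elsewhere.
struct GcState {
    uint64_t oldestActiveTs;   // timestamp of the oldest active transaction
    uint64_t dcAppliedLsn;     // version-application progress at the DC
};

// A version is reclaimable once no active transaction can still need it
// AND the DC already holds it, so later requests fall through to the DC.
bool CanReclaim(uint64_t createTs, uint64_t versionLsn, const GcState& gc) {
    return createTs < gc.oldestActiveTs && versionLsn <= gc.dcAppliedLsn;
}
```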

SLIDE 21

Latch-free Log Buffer Allocation

SLIDE 22

Serialized Log Allocation, Parallel Filling

Only allocation is serialized, not data copying

[Figure: a log buffer with tail = 80; regions marked filled, allocated & filling, and unallocated]

SLIDE 23

Fast Atomic Operations for Log Allocation

[Figure: the same log buffer, tail = 80]

With compare-and-swap:
Thread 1: CompareAndSwap(&tail, 80, 90) → ok
Thread 2: CompareAndSwap(&tail, 80, 85) → fail
Wasted shared-mode load for the ‘pre-image’
Dilated conflict window creates retries

With atomic add:
Thread 1: AtomicAdd(&tail, 10) → 90
Thread 2: AtomicAdd(&tail, 5) → 95
No need to load a ‘pre-image’
Order is non-deterministic, but both succeed
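A minimal sketch of atomic-add reservation in this spirit (buffer sealing and rollover are elided; names are illustrative): one fetch-and-add claims a disjoint byte range per thread, and the copies then proceed fully in parallel.

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

// Minimal sketch of atomic-add space reservation; buffer sealing and
// rollover on overflow are elided.
struct LogBuffer {
    std::atomic<uint64_t> tail{0};   // next free byte offset
    uint64_t capacity = 0;
    char* bytes = nullptr;
};

// One atomic add claims a byte range; only this reservation is serialized.
char* Reserve(LogBuffer& buf, uint64_t len) {
    uint64_t off = buf.tail.fetch_add(len);
    if (off + len > buf.capacity) return nullptr;  // full: seal & roll over
    return buf.bytes + off;
}

// Threads copy their records into disjoint ranges fully in parallel.
void Append(LogBuffer& buf, const void* record, uint64_t len) {
    if (char* dst = Reserve(buf, len)) std::memcpy(dst, record, len);
}
```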

SLIDE 24

TC Proxy

DC-side multicore parallel redo-replay

SLIDE 25

Multicore Replay at the DC

Each received log buffer is replayed by a dedicated hardware thread
Fixed-size thread pool, with backpressure if the entire socket is busy

“Blind writes” of versions to the DC
“Delta chains” avoid the read cost of writes

Out-of-order and redo-only replay is safe
LSNs, replaying only committed entries, and a shadow transaction table

[Figure: TC Proxy: incoming log buffers from the TC fan out to hardware threads that replay into the Data Component (Bw-tree)]
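A hedged sketch of one replay worker consistent with these bullets; the container types and the per-key LSN check are stand-ins for the real mechanisms (Bw-tree deltas, the shadow transaction table shipped from the TC).

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Illustrative types; real buffers and records are binary, not vectors.
struct LogRecord { uint64_t lsn, txId, key; /* payload elided */ };
struct LogBuffer { std::vector<LogRecord> records; };

struct ShadowTxTable {                  // commit outcomes from the TC
    std::unordered_set<uint64_t> committed;
    bool IsCommitted(uint64_t tx) const { return committed.count(tx) != 0; }
};

struct Dc {                             // stand-in for the Bw-tree DC
    std::unordered_map<uint64_t, uint64_t> appliedLsn;  // per-key progress
    void BlindUpsert(const LogRecord& r) { appliedLsn[r.key] = r.lsn; }
};

// Replay one buffer: redo-only, blind writes, skip uncommitted entries.
// Per-key LSN checks make out-of-order replay across buffers safe.
void ReplayBuffer(Dc& dc, const ShadowTxTable& tx, const LogBuffer& buf) {
    for (const LogRecord& r : buf.records) {
        if (!tx.IsCommitted(r.txId)) continue;        // redo committed only
        auto it = dc.appliedLsn.find(r.key);
        if (it != dc.appliedLsn.end() && it->second >= r.lsn) continue;
        dc.BlindUpsert(r);                            // no read of old value
    }
}
```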

SLIDE 26

Evaluation

SLIDE 27

Hardware for Experiments

4x Intel Xeon @ 2.8 GHz

64 total hardware threads

Commodity SSD ~450 MB/s

[Figure: four-socket layout; the TC runs on its own socket, with the TC Proxy + DC (Bw-tree) on another]

SLIDE 28

Experimental Workload

YCSB-like: 50 million 100-byte values
4 operations per transaction
~“80-20” Zipfian access skew
More than half of all records are accessed every 20 seconds
Heavily stresses concurrency control and logging overheads
DC on a separate NUMA socket; periodic checkpoints also running

SLIDE 29

Evaluation: Transaction Throughput

[Chart: transaction throughput for a workload with 84% reads and 50% read-only transactions]

1.5M TPS
Competitive with in-memory systems

SLIDE 30

Evaluation: Impact of Writes

~350,000 TPS with 100% writes
Disk close to saturation: 90% disk-bandwidth utilization

DRAM latency limits write-heavy loads
More cache misses for a DC update than for an “at TC” read

SLIDE 31

For lack of time: fun stuff in the paper

Unapologetically racy log-structured read cache

Fast async pattern
Eliminates context-switch and memory-allocation overhead

Lightweight pointer stability
Epoch protection for latch-free data structures, free of atomic operations on the fast path

Fast commit with a read-only transaction optimization
Recovery log as a queue for durable commit notification
Thread management & NUMA details

SLIDE 32

Related Work

Modern in-memory database engines
Hekaton [Diaconu et al.]
HANA
HyPer [Kemper and Neumann]
Silo [Tu et al.]

Multiversion Timestamp Order [Bernstein, Hadzilacos, and Goodman]

Strict Timestamp Order CC
HyPer [Wolf et al.]

SLIDE 33

Future Directions

Dealing with ranges
Timestamp concurrency control may be fragile
More performance work
More functionality
Evaluating scale-out

SLIDE 34

Conclusions

Deuteronomy: clean DB kernel separation needn’t be costly

Separated transaction, record, and storage management

Flexible deployment allows reuse in many scenarios
Embedded, classic stateless apps, large-scale fault-tolerant systems

Integrates the lessons of in-memory databases
Eliminates all blocking, locking, and latching
MVCC and cache-coherence-friendly techniques

1.5M TPS rivals in-memory database systems, yet with clean separation, and it works even when the data is not all in memory
