Demystifying Distributed Transactions with the Fairness-Isolation-Throughput Tradeoff - PowerPoint PPT Presentation



slide-1
SLIDE 1

Demystifying Distributed Transactions with the Fairness-Isolation-Throughput Tradeoff

Jose Faleiro Yale University

slide-2
SLIDE 2

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns
slide-3
SLIDE 3

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns
slide-4
SLIDE 4

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns
slide-5
SLIDE 5

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns
slide-6
SLIDE 6

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns
slide-7
SLIDE 7

Distributed txns: Too expensive?

  • Several popular databases eschew distributed txns

… and many more

slide-8
SLIDE 8

This talk – Tradeoffs in distributed txns

  • Do distributed txns always imply terrible performance?
  • No
  • For databases which support distributed txns
  • What does the space of tradeoffs look like?
slide-9
SLIDE 9

Fairness-Isolation-Throughput tradeoff

  • Any system implementing distributed transactions can get at most two of these three properties

slide-10
SLIDE 10

But first, some context

slide-11
SLIDE 11

Early databases

“A transaction ... has the properties of atomicity (all or nothing), durability (effects survive failures) and consistency (a correct transformation). … The [transaction] concept may have applicability to programming systems in general.”

  – Jim Gray, DB pioneer
slide-12
SLIDE 12

ACID abstraction

  • “Holy grail” of database system correctness
  • Atomicity, Consistency, Isolation, Durability
slide-14
SLIDE 14

Atomicity

  • “All-or-nothing” guarantee
  • Either all of a txn’s updates succeed or all fail
  • Txns don’t need to think about partial writes
  • No dangling references
  • E.g., If secondary index points to a record, then the record must exist
slide-15
SLIDE 15

Isolation

  • Guarantees that if a pair of txns conflicts, then one “sees” the other’s writes

  • Each txn executes as if it has the system to itself
  • Relieves developers from reasoning about txn interleavings
slide-16
SLIDE 16

How to build the abstraction?

  • Mutexes
  • Protect shared data-structures from modification
  • Concurrency control
  • Logical locking, multi-versioning, etc.
  • Logging

On a single node
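The single-node recipe above can be condensed into a few lines. A minimal, illustrative Python toy (not from the talk): one mutex provides isolation by running txns one at a time, and an undo snapshot plays the role of the log for all-or-nothing atomicity.

```python
import threading

class SingleNodeDB:
    """Toy single-node store: a mutex for isolation, an undo snapshot for atomicity."""
    def __init__(self):
        self.data = {}
        self.lock = threading.Lock()  # coarse-grained mutex from the slide

    def run_txn(self, txn_fn):
        with self.lock:                  # isolation: txns run one at a time
            undo = dict(self.data)       # "logging": snapshot for all-or-nothing
            try:
                return txn_fn(self.data) # commit: keep the new state
            except Exception:
                self.data = undo         # abort: roll back every write
                raise

db = SingleNodeDB()
db.run_txn(lambda d: d.update(savings=100, checking=50))

def overdraw(d):
    d["savings"] -= 200                  # partial write...
    if d["savings"] + d["checking"] < 0:
        raise ValueError("constraint violated")

try:
    db.run_txn(overdraw)
except ValueError:
    pass
print(db.data["savings"])  # 100: the failed txn's partial write was rolled back
```

On multiple nodes, as the next slides show, no single mutex or local log spans the partitions, which is where the trouble starts.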

slide-17
SLIDE 17

How to build the abstraction?

On multiple nodes Things are more complicated …

slide-18
SLIDE 18

Atomicity by example

T: Read R Read B . . . Write R Write B


slide-23
SLIDE 23

Atomicity by example

T: Write Wr Write Wb

Red must learn of Blue’s willingness to commit/abort
Blue must learn of Red’s willingness to commit/abort


slide-27
SLIDE 27

Atomicity by example

T: Write Wr Write Wb

In general… a partition must learn of every other partition’s willingness to commit/abort

Distributed Transactions entail unavoidable coordination
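The coordination described above is classically realized with two-phase commit. Below is a minimal sketch under stated assumptions: the `Partition` class, its method names, and the in-memory "votes" are all illustrative stand-ins, not the talk's protocol.

```python
class Partition:
    """Illustrative stand-in for one database partition."""
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: each partition votes on its willingness to commit.
        if self.will_commit:
            self.state = "prepared"
        return self.will_commit

    def finish(self, decision):
        # Phase 2: the coordinator's decision is applied everywhere.
        self.state = decision

def two_phase_commit(partitions):
    votes = [p.prepare() for p in partitions]       # round trip 1
    decision = "commit" if all(votes) else "abort"  # any "no" aborts everyone
    for p in partitions:
        p.finish(decision)                          # round trip 2
    return decision

red, blue = Partition("red"), Partition("blue", will_commit=False)
print(two_phase_commit([red, blue]))  # abort: Red learned Blue was unwilling
```

The two round trips are the unavoidable coordination: no partition can decide unilaterally, because the decision must be unanimous.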

slide-28
SLIDE 28

Isolation

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes


slide-30
SLIDE 30

Isolation

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes

T1 must wait for commit protocol to finish

slide-31
SLIDE 31

In order to support distributed transactions…

  • Atomicity necessitates distributed coordination
  • Isolation requires waiting
  • Conflicts induce waiting
  • The result?
  • Conflicting transactions must wait for distributed coordination to finish
  • Penalizes even non-distributed transactions!
slide-32
SLIDE 32

Get rid of distributed transactions!

Txnr Txnb

  • Only single partition txns

slide-44
SLIDE 44

Get rid of distributed transactions!

Txnr Txnb

  • Only single partition txns
  • No cross-talk between partitions
  • Easy scale-out

Txny Txng Txnr

… and many more

slide-45
SLIDE 45

Get rid of distributed transactions?

No atomicity and isolation. More complexity.


slide-47
SLIDE 47

Get rid of distributed transactions?

  • Square pegs and round holes
  • Explicitly deal with inconsistencies

Txnr Txnb
Index on documents with the word “Cats”; document(s) not yet inserted

slide-48
SLIDE 48

Get rid of distributed transactions?

  • Square pegs and round holes
  • Explicitly deal with inconsistencies
  • Sometimes no correctness guarantees possible

Txnr Txnb
Savings + Checking >= 0
Deduct from Savings | Deduct from Checking
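To see why the Savings + Checking >= 0 constraint on the slide cannot be enforced without isolation, consider two concurrent withdrawals that each validate the constraint against the same stale snapshot. This is a classic write-skew sketch; the amounts are made up for illustration.

```python
# Account state initially satisfies savings + checking >= 0.
account = {"savings": 60, "checking": 60}

def withdraw(snapshot, key, amount):
    # Each txn checks the invariant against its own (possibly stale) snapshot.
    if snapshot["savings"] + snapshot["checking"] - amount >= 0:
        return (key, snapshot[key] - amount)
    return None

# Txnr and Txnb both read the same snapshot before either commits.
snap_r = dict(account)
snap_b = dict(account)
w_r = withdraw(snap_r, "savings", 100)   # sees 120 total: allowed
w_b = withdraw(snap_b, "checking", 100)  # also sees 120 total: allowed

# Both commit: each check passed in isolation, but together...
for key, value in (w_r, w_b):
    account[key] = value
print(account["savings"] + account["checking"])  # -80: invariant violated
```

Neither txn did anything wrong by itself; only an isolation mechanism that makes one observe the other (or forces one to wait) can preserve the constraint.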

slide-49
SLIDE 49

Get rid of distributed transactions?

“In retrospect I think that [not supporting distributed transactions] was a mistake. … a lot of people did want distributed transactions, and so they hand-rolled their own protocols, sometimes incorrectly, and it would have been better to build it into the infrastructure.”

  – Jeff Dean, Google

… and many more

slide-59
SLIDE 59

Why are distributed txns expensive?


slide-62
SLIDE 62

Why are distributed txns expensive?

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes

T1 must wait for commit protocol to finish

slide-63
SLIDE 63

Why are distributed txns expensive?

  • Mechanisms for atomicity and isolation overlap in time
  • Atomicity: Distributed coordination
  • Isolation: Wait during distributed coordination
slide-64
SLIDE 64

Fairness-Isolation-Throughput tradeoff

  • Any system implementing distributed transactions can get at most two of these three properties

  • Three classes of systems
  • Fairness-Isolation
  • Isolation-Throughput
  • Throughput-Fairness
slide-65
SLIDE 65

FIT Tradeoff

  • Poor performance of distributed transactions is attributable to two fundamental issues
  • Expensive commit protocol (required for atomicity)
  • Waiting (required for isolation)
  • Commit protocol and waiting overlap in time
  • Space characterized by how to separate the commit protocol from waiting

slide-66
SLIDE 66

Intuition

  • Badness results from overlapping commit with isolation
  • To avoid impact of coordination, separate the two
slide-67
SLIDE 67

Option 1: Weaken isolation

  • Allow conflicting txns to execute without observing each other’s writes
  • Implementable without making txns wait for each other
  • Susceptible to concurrency bugs
  • Transactions execute against potentially stale state
  • E.g., RAMP transactions
slide-68
SLIDE 68

Option 2: Re-order coordination

  • Move coordination outside of transaction boundaries
  • Amortize coordination across several transactions
  • Compromises fairness because we penalize certain txns to benefit overall throughput

  • E.g., Calvin, G-Store
slide-70
SLIDE 70

FIT Tradeoff

  • Fairness-Isolation
  • Give up throughput
  • Fairness-Throughput
  • Give up isolation
  • Isolation-Throughput
  • Give up fairness

Atomicity and isolation mechanisms overlap

slide-71
SLIDE 71

FIT Tradeoff

  • Fairness-Isolation
  • Give up throughput
  • Fairness-Throughput
  • Give up isolation
  • Isolation-Throughput
  • Give up fairness

Atomicity and isolation mechanisms are decoupled

slide-72
SLIDE 72

Weak Isolation Example

  • Read Atomic Multi-Partition (RAMP) transactions
  • Decouples concurrent transactions
  • Research system
  • Appeared in SIGMOD 2014
  • Peter Bailis et al. from UC Berkeley
slide-73
SLIDE 73

RAMP transactions

R B G

slide-74
SLIDE 74

RAMP transactions

R B G

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1


slide-79
SLIDE 79

RAMP transactions

R B G

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

Run a commit protocol. The commit protocol represents the coordination required for atomicity

slide-80
SLIDE 80

RAMP transactions

R, RT1 B, BT1 G, GT1

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

slide-81
SLIDE 81

RAMP transactions

R B G

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1


slide-84
SLIDE 84

RAMP transactions

R B G

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

T1 and T2 read the same snapshot

slide-86
SLIDE 86

RAMP transactions

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

Commit T1

R, RT1 B, BT1 G, GT1


slide-88
SLIDE 88

RAMP transactions

R, RT1, RT2 B, BT1, BT2 G, GT1, GT2

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

Commit T2

slide-89
SLIDE 89

RAMP transactions

R, RT1, RT2 B, BT1, BT2 G, GT1, GT2

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

T1 and T2 don’t see each other’s writes
slide-90
SLIDE 90

RAMP transactions

R, RT1, RT2 B, BT1, BT2 G, GT1, GT2

T1: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1
T2: R0, G0, B0 = Read R G B
    R1, G1, B1 = f(R0, G0, B0)
    Write R1 G1 B1

State must be merged in some app-dependent manner
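The build-up above can be condensed into a toy version-store. This is only a sketch of the flavor of RAMP-style decoupling, not the actual protocol from Bailis et al. (which uses write metadata and a second read round); the point is that writers never block each other, versions accumulate, and the application must later merge diverged state.

```python
# Each partition keeps every committed version of its key; names are illustrative.
store = {"R": [("base", 0)], "B": [("base", 0)], "G": [("base", 0)]}

def ramp_write(txn_id, updates):
    # Install a new version per key. No locks, no waiting: concurrent
    # writers simply append versions side by side.
    for key, value in updates.items():
        store[key].append((txn_id, value))

ramp_write("T1", {"R": 1, "B": 1, "G": 1})
ramp_write("T2", {"R": 2, "B": 2, "G": 2})

# T1 and T2 never saw each other's writes, so the store now holds diverged
# versions. The app must reconcile them; here, take the largest value as a
# stand-in for an application-specific merge function.
merged = {key: max(v for _, v in versions) for key, versions in store.items()}
print(merged)  # {'R': 2, 'B': 2, 'G': 2}
```

The merge step is exactly the "extra development effort" the next slides call out: the system hands the application diverged state instead of serializing the writers.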

slide-92
SLIDE 92

RAMP transactions

  • Decouples execution of concurrent txns
  • “Synchronization independence”
  • Great for scalability
  • More resources means more throughput
  • Weak isolation
  • Diverged state must be reconciled and merged
  • Cannot enforce important class of constraints

Txns never block others

slide-93
SLIDE 93

RAMP transactions

  • Decouples execution of concurrent txns
  • “Synchronization independence”
  • Great for scalability
  • More resources means more throughput
  • Weak isolation
  • Diverged state must be reconciled and merged
  • Cannot enforce important class of constraints

Extra development effort

slide-94
SLIDE 94

RAMP transactions

  • Decouples execution of concurrent txns
  • “Synchronization independence”
  • Great for scalability
  • More resources means more throughput
  • Weak isolation
  • Diverged state must be reconciled and merged
  • Cannot enforce important class of constraints

Irrelevant for many applications

slide-95
SLIDE 95

Fairness-Isolation-Throughput tradeoff

  • Any system implementing distributed transactions can get at most two of these three properties

  • Three classes of systems
  • Fairness-Isolation
  • Isolation-Throughput
  • Throughput-Fairness
slide-96
SLIDE 96

Why are distributed txns expensive?

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes

T1 must wait for commit protocol to finish

slide-97
SLIDE 97

Re-ordered coordination example

  • Move distributed coordination outside txn boundaries
  • Amortize its cost across several txns
  • Guarantee isolation
  • Conflicts still induce waiting
  • But txns don’t wait for distributed coordination
  • By re-ordering coordination, some txns are penalized
  • Unfairly delay txns to benefit overall throughput
slide-98
SLIDE 98

G-Store

  • Built for workloads with temporal locality
  • E.g., multi-player games
  • Research system
  • Appeared in SoCC 2010
  • Sudipto Das et al. from UC Santa Barbara
  • Built as a transaction layer on top of HBase
slide-99
SLIDE 99

G-Store

R B G Y

Supports txns on “KeyGroups”


slide-101
SLIDE 101

G-Store

R B G R’ B’ G’ Y

KeyGroup

slide-102
SLIDE 102

G-Store

R B G R’ B’ G’ Y

Txns on R, B, G, and Y execute on the Yellow node

slide-103
SLIDE 103

G-Store

R B G R’ B’ G’ Y Update B

slide-104
SLIDE 104

G-Store

R’ B’ G’ Y

Txns on R, B, G, and Y execute on the Yellow node

No coordination on txns against a KeyGroup

slide-106
SLIDE 106

G-Store

  • Txns on a KeyGroup do not require coordination
  • Keys are on a single shard
  • Supports arbitrary distributed txns
  • KeyGroups can be created on demand

Don’t distributed transactions necessitate coordination?

slide-107
SLIDE 107

KeyGrouping protocol

R B G Y

Coordination happens here

slide-108
SLIDE 108

KeyGrouping protocol

R B G Y

And here
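The KeyGrouping idea can be sketched as a one-time ownership transfer. Everything below (class names, message counting) is an illustrative toy, not G-Store's actual protocol: coordination is paid once when the group forms, then any number of txns on the group run on a single node with no further cross-node messages.

```python
coordination_rounds = 0  # counts cross-node messages

class Node:
    """Illustrative stand-in for one storage node."""
    def __init__(self, keys):
        self.keys = dict(keys)

def form_keygroup(leader, followers, wanted):
    """One-time KeyGrouping step: ownership of every wanted key moves to the
    leader node. This is where the distributed coordination happens."""
    global coordination_rounds
    for node in followers:
        for key in list(node.keys):
            if key in wanted:
                coordination_rounds += 1          # one cross-node transfer
                leader.keys[key] = node.keys.pop(key)

def run_group_txn(leader, txn_fn):
    # After grouping, txns touch only the leader: no coordination at all.
    txn_fn(leader.keys)

yellow = Node({"Y": 0})
others = [Node({"R": 0}), Node({"B": 0}), Node({"G": 0})]
form_keygroup(yellow, others, {"R", "B", "G"})

for _ in range(1000):                             # many txns, zero extra messages
    run_group_txn(yellow, lambda keys: keys.update(B=keys["B"] + 1))

print(coordination_rounds)  # 3: paid once, amortized over 1000 txns
print(yellow.keys["B"])     # 1000
```

The fairness cost is visible in the sketch: the first txn that needs an unformed group must wait for the whole transfer, while later txns ride for free.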

slide-109
SLIDE 109

G-Store’s position in FIT?

  • Guarantees isolation
  • Txns are serializable
  • Distributed coordination does not occur on txn boundaries
  • Coordination is amortized across txns on a KeyGroup
  • Sacrifices fairness
  • Txns on as-yet-unformed KeyGroups pay high latency
slide-110
SLIDE 110

Fairness-Isolation example

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes

T1 must wait for commit protocol to finish

slide-111
SLIDE 111

Fairness-Isolation example

T0: Read R Read B . . . Write R Write B
T1: Read B . . . Write B

Suppose T1 observes T0’s writes

Atomicity and isolation mechanisms overlap in time

slide-115
SLIDE 115

How to use the FIT tradeoff?

  • If designing a distributed DB
  • Use FIT to make design decisions that meet your requirements
  • If choosing a distributed DB
  • Try to assign it to a point in the FIT tradeoff space
  • Where do your requirements lie within FIT?
  • Sanity check that they’re plausible
  • Good rule of thumb: What happens under contention?
slide-117
SLIDE 117

Conclusions

  • Distributed txns do not always entail terrible performance
  • Systems implementing distributed txns are subject to FIT
  • Fairness-Isolation-Throughput tradeoff
  • Can get at most two of these three properties
  • Uses of FIT
  • When building new systems
  • Yardstick to compare distributed databases
  • Sanity check requirements

jose.faleiro@yale.edu @jmfaleiro