

15-721 DATABASE SYSTEMS

Lecture #06 – Index Locking & Latching

@Andy_Pavlo // Carnegie Mellon University // Spring 2017

TODAY’S AGENDA

Index Locks vs. Latches
Latch Implementations
Latch Crabbing
Index Locking

DATABASE INDEX

A data structure that improves the speed of data retrieval operations on a table at the cost of additional writes and storage space. Indexes are used to quickly locate data without having to search every row of the table each time it is accessed.

DATA STRUCTURES

Order-Preserving Indexes
→ A tree-like structure that maintains keys in some sorted order.
→ Supports all possible predicates with O(log n) searches.

Hashing Indexes
→ An associative array that maps a hash of the key to a particular record.
→ Only supports equality predicates with O(1) searches.

B-TREE VS. B+TREE

The original B-tree from 1972 stored keys + values in all nodes in the tree.
→ More memory efficient since each key only appears once in the tree.

A B+tree only stores values in leaf nodes. Inner nodes only guide the search process.
→ Easier to manage concurrent index access when the values are only in the leaf nodes.

OBSERVATION

We already know how to use locks to protect objects in the database.

But we have to treat indexes differently because the physical structure can change as long as the logical contents are consistent.

SIMPLE EXAMPLE

[Figure: a B+tree leaf node A holding keys 20 and 22.]

Txn #1: Read ‘22’, locating the key in leaf A.
Txn #2: Insert ‘21’, which splits A into leaves B and C and moves key 22 into the new leaf.
Txn #3: Read ‘22’, finding the key in its new location.

If Txn #1 still holds a pointer into the old leaf while Txn #2 restructures it, Txn #1 can read stale or invalid data even though the index's logical contents never changed.

LOCKS VS. LATCHES

Locks
→ Protects the index’s logical contents from other txns.
→ Held for txn duration.
→ Need to be able to rollback changes.

Latches
→ Protects the critical sections of the index’s internal data structure from other threads.
→ Held for operation duration.
→ Do not need to be able to rollback changes.

A SURVEY OF B-TREE LOCKING TECHNIQUES, TODS 2010

LOCKS VS. LATCHES

              Locks                                   Latches
Separate...   User transactions                       Threads
Protect...    Database Contents                       In-Memory Data Structures
During...     Entire Transactions                     Critical Sections
Modes...      Shared, Exclusive, Update, Intention    Read, Write
Deadlock...   Detection & Resolution                  Avoidance
...by...      Waits-for, Timeout, Aborts              Coding Discipline
Kept in...    Lock Manager                            Protected Data Structure

Source: Goetz Graefe

LOCK-FREE INDEXES

Possibility #1: No Locks
→ Txns don’t acquire locks to access/modify database.
→ Still have to use latches to install updates.

Possibility #2: No Latches
→ Swap pointers using atomic updates to install changes.
→ Still have to use locks to validate txns.

LATCH IMPLEMENTATIONS

Blocking OS Mutex
Test-and-Set Spinlock
Queue-based Spinlock
Reader-Writer Locks

Source: Anastasia Ailamaki

COMPARE-AND-SWAP

Atomic instruction that compares the contents of a memory location M to a given value V.
→ If the values are equal, installs the new given value V’ in M.
→ Otherwise the operation fails.

__sync_bool_compare_and_swap(&M, 20, 30)
(arguments: address, compare value, new value)

[Figure: M holds 20 before the call and 30 after the CAS succeeds.]
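
The same operation is available portably through std::atomic; a minimal sketch (standard library only, nothing system-specific) showing both the success and failure cases:

#include <atomic>
#include <cassert>

int main() {
  std::atomic<int> M{20};

  // compare_exchange_strong(expected, desired): if M == expected, install
  // desired and return true; otherwise copy M's current value into expected
  // and return false.
  int expected = 20;
  bool ok = M.compare_exchange_strong(expected, 30);
  assert(ok && M.load() == 30);     // CAS succeeded: 20 -> 30

  expected = 20;                    // stale compare value
  ok = M.compare_exchange_strong(expected, 40);
  assert(!ok && expected == 30);    // CAS failed; expected now holds 30
  return 0;
}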

LATCH IMPLEMENTATIONS

Choice #1: Blocking OS Mutex
→ Simple to use.
→ Non-scalable (about 25ns per lock/unlock invocation).
→ Example: std::mutex (typically built on pthread_mutex_t).

std::mutex m;
⋮
m.lock();
// Do something special...
m.unlock();
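
In practice the lock/unlock pair is usually wrapped in an RAII guard so the latch is released even if the critical section exits early; a minimal usage sketch:

#include <mutex>

std::mutex m;

void critical_section() {
  std::lock_guard<std::mutex> guard(m);  // locks m here
  // Do something special...
}                                        // m unlocks when guard is destroyed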

LATCH IMPLEMENTATIONS

Choice #2: Test-and-Set Spinlock (TAS)
→ Very efficient (single instruction to lock/unlock).
→ Non-scalable, not cache friendly.
→ Example: std::atomic<T> (std::atomic_flag, std::atomic<bool>).

std::atomic_flag latch;
⋮
while (latch.test_and_set(…)) {
  // Yield? Abort? Retry?
}
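
Filling in the elided pieces, a minimal complete spin latch along these lines (my sketch, assuming acquire/release ordering is sufficient for the critical section):

#include <atomic>

class TASLatch {
  std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
  void lock() {
    // test_and_set returns the previous value: true means another thread
    // holds the latch, so keep spinning (a real DBMS might yield or abort).
    while (flag.test_and_set(std::memory_order_acquire)) {
      /* spin */
    }
  }
  void unlock() {
    flag.clear(std::memory_order_release);
  }
};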

LATCH IMPLEMENTATIONS

Choice #3: Queue-based Spinlock (MCS)
→ More efficient than mutex, better cache locality.
→ Non-trivial memory management.
→ Example: std::atomic<Latch*>

[Figure: arriving CPUs append themselves to a queue hanging off the base latch; CPU1's node links to CPU2's node via its next pointer, CPU3 appends after CPU2, and each CPU spins on its own queue node rather than on the shared base latch.]
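
A hedged sketch of an MCS-style queue latch (assumptions: each thread supplies its own QNode, which is exactly where the non-trivial memory management comes from, and plain spinning is acceptable):

#include <atomic>

struct QNode {
  std::atomic<QNode*> next{nullptr};
  std::atomic<bool>   waiting{false};
};

class MCSLatch {
  std::atomic<QNode*> tail{nullptr};   // the "base latch" pointer
public:
  void lock(QNode& me) {
    me.next.store(nullptr, std::memory_order_relaxed);
    me.waiting.store(true, std::memory_order_relaxed);
    // Append ourselves to the queue by swapping in our node.
    QNode* prev = tail.exchange(&me, std::memory_order_acq_rel);
    if (prev != nullptr) {
      prev->next.store(&me, std::memory_order_release);
      // Spin on our own cache line, not on the shared tail pointer.
      while (me.waiting.load(std::memory_order_acquire)) { /* spin */ }
    }
  }
  void unlock(QNode& me) {
    QNode* succ = me.next.load(std::memory_order_acquire);
    if (succ == nullptr) {
      // Nobody queued behind us (yet): try to swing tail back to empty.
      QNode* expected = &me;
      if (tail.compare_exchange_strong(expected, nullptr,
                                       std::memory_order_acq_rel))
        return;
      // A successor is mid-enqueue; wait for it to link itself in.
      while ((succ = me.next.load(std::memory_order_acquire)) == nullptr) { }
    }
    succ->waiting.store(false, std::memory_order_release);  // hand off
  }
};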

LATCH IMPLEMENTATIONS

Choice #4: Reader-Writer Locks
→ Allows for concurrent readers.
→ Have to manage read/write queues to avoid starvation.
→ Can be implemented on top of spinlocks.

[Figure: a latch tracking, for each of the read and write sides, a count of current holders and a count of waiters (all initially 0). Two readers arrive and the read count goes to 1, then 2; a writer then arrives and must wait (write waiters = 1); a later reader queues behind it (read waiters = 1) so the writer is not starved.]
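
A minimal sketch of a reader-writer latch built from a single atomic word (assumption: no waiter queues, so this version can starve writers; the queue management in the bullet above is precisely what fixes that):

#include <atomic>
#include <cstdint>

class RWLatch {
  // Bit 31 flags an active writer; bits 0-30 count active readers.
  std::atomic<uint32_t> state{0};
  static constexpr uint32_t WRITER = 1u << 31;
public:
  void lock_read() {
    uint32_t s = state.load(std::memory_order_relaxed);
    for (;;) {
      if (s & WRITER) {                       // writer active: reload, retry
        s = state.load(std::memory_order_relaxed);
      } else if (state.compare_exchange_weak(s, s + 1,
                                             std::memory_order_acquire)) {
        return;                               // registered as a reader
      }
    }
  }
  void unlock_read()  { state.fetch_sub(1, std::memory_order_release); }

  void lock_write() {
    uint32_t expected = 0;                    // need no readers, no writer
    while (!state.compare_exchange_weak(expected, WRITER,
                                        std::memory_order_acquire))
      expected = 0;
  }
  void unlock_write() { state.store(0, std::memory_order_release); }
};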

LATCH CRABBING

Acquire and release latches on B+tree nodes when traversing the data structure. A thread can release the latch on a parent node if its child node is considered safe.
→ A safe node is any node that won’t split or merge when updated:
→ Not full (on insertion).
→ More than half-full (on deletion).

LATCH CRABBING

Search: Start at root and go down; repeatedly,
→ Acquire read (R) latch on child.
→ Then unlock parent if the child is safe.

Insert/Delete: Start at root and go down, obtaining write (W) latches as needed. Once the child is latched, check if it is safe:
→ If the child is safe, release all latches on ancestors. (A sketch of the search protocol follows below.)
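
A minimal sketch of the search protocol, assuming a hypothetical Node type that holds the RWLatch from the earlier sketch plus is_leaf() and child_for(key); for reads the child is always safe, so the parent is released as soon as the child latch is held:

// Hypothetical B+tree node: only the pieces that crabbing needs.
struct Node {
  RWLatch latch;                 // reader-writer latch from the sketch above
  bool    is_leaf() const;       // hypothetical
  Node*   child_for(int key);    // hypothetical: route key to a child
};

// Descend with read latches, holding at most parent + child at any time.
Node* crab_search(Node* root, int key) {
  Node* node = root;
  node->latch.lock_read();
  while (!node->is_leaf()) {
    Node* child = node->child_for(key);
    child->latch.lock_read();    // latch the child first...
    node->latch.unlock_read();   // ...then release the parent
    node = child;
  }
  return node;                   // caller reads the leaf, then unlock_read()s
}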

EXAMPLE #1: SEARCH 23

[Figure: B+tree with root A [20], inner nodes B [10] and C [35], and leaves D, E, F, G holding keys 6, 12, 23, 38, 44.]

Take an R latch on A, then an R latch on its child C (since 23 > 20). We can release the latch on A as soon as we acquire the latch for C. Repeat the same hand-over-hand pattern down to the leaf that contains 23.

EXAMPLE #2: DELETE 44

[Figure: the same tree as Example #1.]

Take a W latch on A, then a W latch on C. We may need to coalesce C, so we can’t release the latch on A. Next take a W latch on G. G will not merge with F, so we can release the latches on A and C, then delete 44 from G.

EXAMPLE #3: INSERT 40

[Figure: the same tree as Example #1.]

Take a W latch on A, then a W latch on C. C has room if its child has to split, so we can release the latch on A. Next take a W latch on G. G has to split, so we can’t release the latch on C: G splits, a new leaf H is created to hold the overflow, and the new separator key is installed in C.

OBSERVATION

What was the first step that the DBMS took in the two examples that updated the index?

[Figure: in both Delete 44 and Insert 40, the first step was taking a W latch on the root node A.]

BETTER LATCH CRABBING

Optimistically assume that the leaf is safe.
→ Take R latches as you traverse the tree to reach it and verify.
→ If the leaf is not safe, then fall back to the previous algorithm. (A sketch of the optimistic descent follows below.)

CONCURRENCY OF OPERATIONS ON B-TREES, Acta Informatica 9: 1-21, 1977
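
A hedged sketch of the optimistic variant, reusing the hypothetical Node API from the crabbing sketch plus an is_safe_for(op) check (not full on insert, more than half-full on delete); only the leaf is write-latched on the happy path:

enum class Op { Insert, Delete };

// Hypothetical leaf extension of the earlier Node sketch.
struct LeafNode : Node {
  bool is_safe_for(Op op) const;  // won't split (insert) / merge (delete)
};

// Hypothetical pessimistic fallback: the W-latch protocol of Examples #2/#3.
LeafNode* pessimistic_descend(Node* root, int key, Op op);

// Optimistic descent (sketch assumes the root is not itself a leaf).
LeafNode* optimistic_descend(Node* root, int key, Op op) {
  Node* node = root;
  node->latch.lock_read();
  while (!node->is_leaf()) {
    Node* child = node->child_for(key);
    if (child->is_leaf()) child->latch.lock_write();  // W only at the leaf
    else                  child->latch.lock_read();
    node->latch.unlock_read();
    node = child;
  }
  LeafNode* leaf = static_cast<LeafNode*>(node);
  if (!leaf->is_safe_for(op)) {
    // Guessed wrong: the leaf may split or merge, so release everything
    // and redo the traversal with W latches.
    leaf->latch.unlock_write();
    return pessimistic_descend(root, key, op);
  }
  return leaf;
}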

EXAMPLE #4: DELETE 44

[Figure: the same tree as Example #1.]

Take an R latch on A, then an R latch on C. We assume that C is safe, so we can release the latch on A. Then acquire an exclusive (W) latch on G, verify that it is still safe, and delete 44.

OBSERVATION

Crabbing ensures that txns do not corrupt the internal data structure during modifications. But because txns release latches on each node as soon as their operations on it finish, we cannot guarantee that phantoms do not occur…

PROBLEM SCENARIO #1

[Figure: the same tree as Example #1.]

Txn #1: Check if 25 exists. Crab down to the leaf with R latches, find nothing, and release all latches.
Txn #2: Insert 25. Crab down with W latches and install the key.
Txn #1: Insert 25. Acting on its earlier check, Txn #1 now inserts 25 as well, even though Txn #2 already did. The latches never conflicted because neither txn held them past its own operation.

PROBLEM SCENARIO #2

[Figure: the same tree as Example #1.]

Txn #1: Scan [12, 23]. Crab down with R latches, read the matching keys, and release all latches.
Txn #2: Insert 21. Crab down with W latches and install the key inside the scanned range.
Txn #1: Scan [12, 23] again. The second scan now sees 21, a phantom that was not there the first time.

INDEX LOCKS

Need a way to protect the index’s logical contents from other txns to avoid phantoms. Differences with index latches:
→ Locks are held for the entire duration of a txn.
→ Only acquired at the leaf nodes.
→ Not physically stored in the index data structure.

INDEX LOCKS

[Figure: locks live in a separate Lock Table keyed by index entry. Each entry holds a list of (txn, mode) requests, e.g. txn1: X, txn2: S, txn3: S, …; txn3: S, txn2: S, txn4: S, …; txn4: IX, txn6: X, txn5: S, …]

INDEX LOCKING SCHEMES

Predicate Locks
Key-Value Locks
Gap Locks
Key-Range Locks
Hierarchical Locking

PREDICATE LOCKS

Proposed locking scheme from System R.
→ Shared lock on the predicate in a WHERE clause of a SELECT query.
→ Exclusive lock on the predicate in a WHERE clause of any UPDATE, INSERT, or DELETE query.

Never implemented in any system.

THE NOTIONS OF CONSISTENCY AND PREDICATE LOCKS IN A DATABASE SYSTEM, CACM 1976

PREDICATE LOCKS

SELECT SUM(balance) FROM account
 WHERE name = 'Biggie';

INSERT INTO account (name, balance)
VALUES ('Biggie', 100);

[Figure: over the records in table 'account', the SELECT holds a lock on the predicate name='Biggie'; the INSERT's predicate name='Biggie' ∧ balance=100 overlaps it, so the two conflict.]

KEY-VALUE LOCKS

Locks that cover a single key value. Need “virtual keys” for non-existent values.

[Figure: a B+tree leaf node with keys 10, 12, 14, 16; a key-value lock on 14 covers exactly the point [14, 14].]

GAP LOCKS

Each txn acquires a key-value lock on the single key that it wants to access. Then get a gap lock on the next key gap.

[Figure: the same leaf node with keys 10, 12, 14, 16 and a {Gap} between each adjacent pair; a gap lock after 14 covers the open interval (14, 16).]

KEY-RANGE LOCKS

A txn takes locks on ranges in the key space.
→ Each range runs from one key that appears in the relation to the next key that appears.
→ Define the lock modes so that the conflict table captures the commutativity of the available operations. (A toy conflict table follows below.)
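
As a toy illustration of such a conflict table (my sketch using the classic multi-granularity modes, not the key-range-specific modes from the literature):

#include <array>

enum Mode { IS, IX, S, X, NUM_MODES };

// compatible[held][requested]: can a new request be granted alongside a
// held lock? Intention modes commute with each other; X commutes with nothing.
constexpr std::array<std::array<bool, NUM_MODES>, NUM_MODES> compatible{{
  //           IS     IX     S      X
  /* IS */ {{ true,  true,  true,  false }},
  /* IX */ {{ true,  true,  false, false }},
  /* S  */ {{ true,  false, true,  false }},
  /* X  */ {{ false, false, false, false }},
}};

bool can_grant(Mode held, Mode requested) {
  return compatible[held][requested];
}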

KEY-RANGE LOCKS

Locks that cover a key value and the gap to the next key value in a single index.
→ Need “virtual keys” for artificial values (infinity).

[Figure: the same leaf node with keys 10, 12, 14, 16. A next-key lock on 14 covers [14, 16); a prior-key lock on 14 covers (12, 14].]

HIERARCHICAL LOCKING

Allow for a txn to hold wider key-range locks with different locking modes.
→ Reduces the number of visits to the lock manager.

[Figure: over the leaf keys 10, 12, 14, 16, one txn takes IX on the wide range [10, 16) and X on the sub-range [14, 16); another txn concurrently takes IX on [10, 16) and X on the point [12, 12]. The intention locks on the wide range do not conflict with each other.]

PARTING THOUGHTS

Hierarchical locking essentially provides predicate locking without the complications.
→ Index locking occurs only in the leaf nodes.
→ Latching is to ensure a consistent data structure.

Peloton currently does not support serializable isolation with range scans.

NEXT CLASS

Index Key Representation
Memory Allocation & Garbage Collection
T-Trees (1980s / TimesTen)
Concurrent Skip Lists (MemSQL)