SLIDE 1

Lecture #09: Index Concurrency Control
Intro to Database Systems, 15-445/15-645, Fall 2019
Andy Pavlo, Computer Science, Carnegie Mellon University

SLIDE 2

ADMINISTRIVIA

→ Project #1 is due Fri Sept 27th @ 11:59pm
→ Homework #2 is due Mon Sept 30th @ 11:59pm
→ Project #2 will be released Mon Sept 30th

SLIDE 3

OBSERVATION

So far we have assumed that all the data structures we discussed are single-threaded. But we need to allow multiple threads to safely access our data structures in order to take advantage of additional CPU cores and to hide disk I/O stalls.

(Slide callout: "They Don't Do This!" — i.e., some systems choose to remain single-threaded.)

SLIDE 4

CONCURRENCY CONTROL

A concurrency control protocol is the method that the DBMS uses to ensure "correct" results for concurrent operations on a shared object. A protocol's correctness criteria can vary:

→ Logical Correctness: Can I see the data that I am supposed to see?
→ Physical Correctness: Is the internal representation of the object sound?

SLIDE 5

TODAY'S AGENDA

Latches Overview
Hash Table Latching
B+Tree Latching
Leaf Node Scans
Delayed Parent Updates

SLIDE 6

LOCKS VS. LATCHES

Locks

→ Protect the database's logical contents from other txns.
→ Held for txn duration.
→ Need to be able to rollback changes.

Latches

→ Protect the critical sections of the DBMS's internal data structures from other threads.
→ Held for operation duration.
→ Do not need to be able to rollback changes.

SLIDE 7

LOCKS VS. LATCHES

             Locks                                  Latches
Separate…    User transactions                      Threads
Protect…     Database contents                      In-memory data structures
During…      Entire transactions                    Critical sections
Modes…       Shared, Exclusive, Update, Intention   Read, Write
Deadlock     Detection & resolution                 Avoidance
…by…         Waits-for, timeout, aborts             Coding discipline
Kept in…     Lock manager                           Protected data structure

Source: Goetz Graefe
SLIDE 8

LATCH MODES

Read Mode

→ Multiple threads can read the same object at the same time.
→ A thread can acquire the read latch if another thread has it in read mode.

Write Mode

→ Only one thread can access the object.
→ A thread cannot acquire a write latch if another thread holds the latch in any mode.

Compatibility Matrix:

         Read   Write
Read      ✔      X
Write     X      X
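In C++ these two modes map naturally onto std::shared_mutex; a minimal sketch (the function names are illustrative, not from the lecture):

#include <mutex>
#include <shared_mutex>

std::shared_mutex latch;

void reader() {
  std::shared_lock lock(latch);  // read mode: many readers may hold this
  // ... read the protected object ...
}

void writer() {
  std::unique_lock lock(latch);  // write mode: excludes all other threads
  // ... modify the protected object ...
}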

SLIDE 9

LATCH IMPLEMENTATIONS

Approach #1: Blocking OS Mutex

→ Simple to use
→ Non-scalable (about 25ns per lock/unlock invocation)
→ Example: std::mutex

#include <mutex>

std::mutex m;
⋮
m.lock();
// Do something special...
m.unlock();
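In modern C++ the lock/unlock pair is usually wrapped in an RAII guard so the mutex is released even if the critical section throws:

#include <mutex>

std::mutex m;

void critical_section() {
  std::lock_guard<std::mutex> guard(m);  // unlocks automatically on scope exit
  // Do something special...
}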

SLIDE 10

LATCH IMPLEMENTATIONS

Approach #2: Test-and-Set Spin Latch (TAS)

→ Very efficient (single instruction to latch/unlatch)
→ Non-scalable, not cache friendly
→ Example: std::atomic<T>

std::atomic_flag latch;
⋮
while (latch.test_and_set(…)) {
  // Retry? Yield? Abort?
}
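For reference, a compilable version of this spin latch; the memory orders shown are a common choice rather than something the slide prescribes:

#include <atomic>

std::atomic_flag latch = ATOMIC_FLAG_INIT;

void acquire() {
  while (latch.test_and_set(std::memory_order_acquire)) {
    // Flag was already set: spin here (retry, yield, or abort).
  }
}

void release() {
  latch.clear(std::memory_order_release);
}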

SLIDES 11-15

LATCH IMPLEMENTATIONS

Approach #3: Reader-Writer Latch

→ Allows for concurrent readers
→ Must manage read/write queues to avoid starvation
→ Can be implemented on top of spinlocks

[Figure (animation, slides 11-15): a latch tracking separate read and write counts/queues; the counters change as successive readers acquire the latch and a writer queues up behind them.]
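A toy reader-writer latch built on a single atomic, sketching the "on top of spinlocks" idea; note it deliberately omits the read/write queues needed to avoid starvation:

#include <atomic>
#include <thread>

class RWLatch {
  // state < 0: a writer holds the latch; state > 0: number of readers.
  std::atomic<int> state{0};
 public:
  void read_lock() {
    for (;;) {
      int s = state.load(std::memory_order_relaxed);
      if (s >= 0 &&
          state.compare_exchange_weak(s, s + 1, std::memory_order_acquire))
        return;
      std::this_thread::yield();  // spin politely
    }
  }
  void read_unlock() { state.fetch_sub(1, std::memory_order_release); }

  void write_lock() {
    for (;;) {
      int expected = 0;
      if (state.compare_exchange_weak(expected, -1,
                                      std::memory_order_acquire))
        return;
      std::this_thread::yield();
    }
  }
  void write_unlock() { state.store(0, std::memory_order_release); }
};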

SLIDE 16

HASH TABLE LATCHING

Easy to support concurrent access due to the limited ways threads access the data structure.

→ All threads move in the same direction and only access a single page/slot at a time.
→ Deadlocks are not possible.

To resize the table, take a global latch on the entire table (i.e., in the header page).

SLIDE 17

HASH TABLE LATCHING

Approach #1: Page Latches

→ Each page has its own reader-writer latch that protects its entire contents.
→ Threads acquire either a read or write latch before they access a page.

Approach #2: Slot Latches

→ Each slot has its own latch.
→ Can use a single-mode latch to reduce meta-data and computational overhead.

A sketch of a page-latched probe follows.
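Roughly what Approach #1 looks like in code; Page, HashTable, Key, Value, hash, and their members are all hypothetical, since the lecture does not define an API:

#include <optional>

// Page-latched probe: hold at most one page latch at a time, latching
// the next page before releasing the current one.
std::optional<Value> find(HashTable &table, const Key &key) {
  Page *page = table.page_for(hash(key));
  page->latch.lock_shared();            // read latch on the first page
  for (;;) {
    if (auto v = page->lookup(key)) {   // probe the slots in this page
      page->latch.unlock_shared();
      return v;
    }
    Page *next = page->next();          // keep probing on the next page
    if (next == nullptr) break;
    next->latch.lock_shared();          // latch next page first...
    page->latch.unlock_shared();        // ...then release the current one
    page = next;
  }
  page->latch.unlock_shared();
  return std::nullopt;
}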

SLIDES 18-27

HASH TABLE PAGE LATCHES

[Figure (animation, slides 18-27): a hash table whose key|value slots span two pages (Page #1 and Page #2). T1 executes Find D: it computes hash(D), takes a read latch on Page #1, and probes. T2 executes Insert E: it computes hash(E) and must wait for a write latch on Page #1. Once T1 has latched Page #2, it's safe to release the latch on Page #1; T2 then write-latches Page #1, follows the probe to Page #2, and inserts E.]
SLIDES 28-37

HASH TABLE SLOT LATCHES

[Figure (animation, slides 28-37): the same Find D / Insert E workload, now with a latch per slot. T1 (Find D) read-latches one slot at a time as it probes; as soon as it latches the next slot, it's safe to release the latch on the previous one (e.g., the latch on A). T2 (Insert E) write-latches slots behind T1 and inserts E into the first free slot it reaches.]
SLIDE 38

B+TREE CONCURRENCY CONTROL

We want to allow multiple threads to read and update a B+Tree at the same time. We need to protect against two types of problems:

→ Threads trying to modify the contents of a node at the same time.
→ One thread traversing the tree while another thread splits/merges nodes.

SLIDES 39-47

B+TREE MULTI-THREADED EXAMPLE

[Figure (animation, slides 39-47): a B+Tree whose nodes are labeled A-I, rooted at a node with separator key 20. T1 executes Delete 44: the deletion leaves a leaf underfull and triggers "Rebalance!", which moves key 41 to a sibling. T2 concurrently executes Find 41: it descends using the old separator keys and lands on a node where 41 is no longer present — "???". Without a protocol, the traversal and the rebalance interleave incorrectly.]
SLIDE 48

LATCH CRABBING/COUPLING

Protocol to allow multiple threads to access/modify a B+Tree at the same time. Basic idea:

→ Get latch for parent.
→ Get latch for child.
→ Release latch for parent if "safe".

A safe node is one that will not split or merge when updated:

→ Not full (on insertion)
→ More than half-full (on deletion)

slide-49
SLIDE 49 CMU 15-445/645 (Fall 2019)

LATCH CRABBIN G/ CO UPLIN G

Find: Start at root and go down; repeatedly,

→ Acquire R latch on child → Then unlatch parent

Insert/Delete: Start at root and go down,

  • btaining W latches as needed. Once child is

latched, check if it is safe:

→ If child is safe, release all latches on ancestors.

19
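A sketch of the two descent protocols, assuming a hypothetical Node type with one reader-writer latch per node (child_for and safe_for are illustrative helpers, not lecture API):

#include <shared_mutex>
#include <vector>

struct Node {
  std::shared_mutex latch;
  bool is_leaf;
  // ... keys, children, values ...
  Node *child_for(int key);       // hypothetical: child to descend into
  bool safe_for(bool is_insert);  // not full (insert) / >half-full (delete)
};

// Read crabbing (Find): latch the child, then release the parent.
Node *find_leaf(Node *root, int key) {
  root->latch.lock_shared();
  Node *cur = root;
  while (!cur->is_leaf) {
    Node *child = cur->child_for(key);
    child->latch.lock_shared();   // acquire R latch on child first
    cur->latch.unlock_shared();   // then unlatch parent
    cur = child;
  }
  return cur;  // caller reads the leaf, then calls unlock_shared()
}

// Write crabbing (Insert/Delete): hold W latches on ancestors until the
// current child is known to be safe (will not split or merge).
Node *find_leaf_write(Node *root, int key, bool is_insert,
                      std::vector<Node*> &held) {
  root->latch.lock();
  held.push_back(root);
  Node *cur = root;
  while (!cur->is_leaf) {
    Node *child = cur->child_for(key);
    child->latch.lock();
    if (child->safe_for(is_insert)) {
      for (Node *n : held) n->latch.unlock();  // release all ancestor latches
      held.clear();
    }
    held.push_back(child);
    cur = child;
  }
  return cur;  // caller updates the leaf, then unlocks everything in `held`
}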

SLIDES 50-55

EXAMPLE #1: FIND 38

[Figure (animation, slides 50-55): T1 descends with read latches. It latches root A, then the child on the path to 38; once the child is latched, it's safe to release the latch on A. It repeats this latch-child / unlatch-parent step down to the leaf containing 38, holding at most two latches at any time.]
SLIDES 56-61

EXAMPLE #2: DELETE 38

[Figure (animation, slides 56-61): T1 descends with write latches. It latches root A, then B; we may need to coalesce B, so we can't release the latch on A yet. It then latches D; we know that D will not need to merge with C, so it's safe to release the latches on A and B. T1 continues down to the leaf holding 38 and deletes the key.]
SLIDES 62-66

EXAMPLE #3: INSERT 45

[Figure (animation, slides 62-66): T1 descends with write latches. After latching A and B, we know that even if D needs to split, B has room, so it's safe to release the latch on A. After latching leaf I, we see that I won't split, so we can release B and D, and T1 inserts 45 into I.]
SLIDES 67-73

EXAMPLE #4: INSERT 25

[Figure (animation, slides 67-73): T1 descends with write latches toward leaf F. F is full, so inserting 25 forces a split; we need to split F, so we must keep holding the write latch on its parent node. F splits, a new leaf (shown holding 25 and 31) is created, and the parent is updated while still latched.]
SLIDE 74

OBSERVATION

What was the first step that all the update examples did on the B+Tree?

[Figure: in each example (Delete 38, Insert 45, Insert 25), the first step was taking a write latch on root A.]
SLIDE 75

OBSERVATION

What was the first step that all the update examples did on the B+Tree? Taking a write latch on the root every time becomes a bottleneck with higher concurrency. Can we do better?
SLIDE 76

BETTER LATCHING ALGORITHM

Assume that the leaf node is safe. Use read latches and crabbing to reach it, and then verify that it is safe. If the leaf is not safe, then fall back to the previous algorithm using write latches.
SLIDES 77-82

EXAMPLE #2: DELETE 38 (OPTIMISTIC)

[Figure (animation, slides 77-82): T1 descends with read latches exactly as if this were a search, then takes a write latch only on leaf H. H will not need to coalesce, so we're safe! T1 deletes 38 without ever write-latching an inner node.]
SLIDE 83

EXAMPLE #4: INSERT 25 (OPTIMISTIC)

[Figure: T1 reaches leaf F holding only a write latch on F, but we need to split F, so we have to restart and re-execute like before, using the write-latch protocol.]
SLIDE 84

BETTER LATCHING ALGORITHM

Search: Same as before.

Insert/Delete:

→ Set latches as if for search, get to the leaf, and set a W latch on the leaf.
→ If the leaf is not safe, release all latches and restart the thread using the previous insert/delete protocol with write latches.

This approach optimistically assumes that only the leaf node will be modified; if not, the R latches set on the first pass to the leaf are wasted. A sketch follows.
SLIDE 85

OBSERVATION

The threads in all the examples so far have acquired latches in a "top-down" manner.

→ A thread can only acquire a latch from a node that is below its current node.
→ If the desired latch is unavailable, the thread must wait until it becomes available.

But what if we want to move from one leaf node to another leaf node?
SLIDES 86-90

LEAF NODE SCAN EXAMPLE #1

[Figure (animation, slides 86-90): a small tree with root A (separator 3) over two sibling leaves holding [1, 2] and [3, 4]. T1 executes Find Keys < 4: it descends with read latches to one leaf (C), then crosses to its sibling (B). It does not release the latch on C until it has the latch on B, then finishes the scan and releases both.]
SLIDES 91-96

LEAF NODE SCAN EXAMPLE #2

[Figure (animation, slides 91-96): T1 (Find Keys < 4) and T2 (Find Keys > 1) scan across the sibling leaves in opposite directions. Read latches are compatible, so at one point both T1 and T2 hold the read latch on the same leaf; they then pass each other, after which only T1 holds one leaf's read latch and only T2 holds the other's. Concurrent readers pose no problem.]
SLIDES 97-101

LEAF NODE SCAN EXAMPLE #3

[Figure (animation, slides 97-101): T1 (Delete 4) holds a write latch on leaf C while T2 (Find Keys > 1) scans across the leaves with read latches. T2 cannot acquire the read latch on C, and T2 does not know what T1 is doing…]
SLIDE 102

LEAF NODE SCANS

Latches do not support deadlock detection or avoidance. The only way we can deal with this problem is through coding discipline. The leaf node sibling latch acquisition protocol must support a "no-wait" mode. The DBMS's data structures must cope with failed latch acquisitions. A sketch follows.
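The no-wait mode maps onto try_lock; a minimal sketch reusing the hypothetical Node type from before:

// Step a scan from the current leaf to its sibling without waiting:
// if the sibling's latch is unavailable, give up so the caller can
// abort the scan and restart it later.
bool step_to_sibling(Node *&cur, Node *sib) {
  if (!sib->latch.try_lock_shared()) {  // no-wait acquisition
    cur->latch.unlock_shared();
    return false;                       // failed: caller restarts the scan
  }
  cur->latch.unlock_shared();           // release only after sibling is held
  cur = sib;
  return true;
}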

SLIDE 103

DELAYED PARENT UPDATES

Every time a leaf node overflows, we must update at least three nodes:

→ The leaf node being split.
→ The new leaf node being created.
→ The parent node.

Blink-Tree Optimization: When a leaf node overflows, delay updating its parent node.
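One way to represent the delayed update, purely as a sketch — the lecture does not prescribe a representation, and pending_adds / install_separator are invented names:

#include <mutex>
#include <shared_mutex>
#include <vector>

struct Inner {
  std::shared_mutex latch;
  std::mutex pending_latch;        // tiny latch guarding the pending list
  std::vector<int> pending_adds;   // separator keys waiting to be installed

  // A splitting child records the new separator instead of updating now.
  void defer_add(int key) {
    std::lock_guard<std::mutex> g(pending_latch);
    pending_adds.push_back(key);
  }

  // The next thread to take the write latch first applies pending updates.
  void write_lock_and_catch_up() {
    latch.lock();
    std::lock_guard<std::mutex> g(pending_latch);
    for (int k : pending_adds) install_separator(k);
    pending_adds.clear();
  }

  void install_separator(int key);  // hypothetical: insert key into node
};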

SLIDES 104-117

EXAMPLE #4: INSERT 25 (DELAYED PARENT UPDATES)

[Figure (animation, slides 104-117): T1 (Insert 25) descends with read latches, write-latches leaf F, and splits it. It adds the new leaf node (holding 25 and 31) as a sibling of F but does not update parent C; the pending change "C: Add 31" is applied the next time a thread takes a write latch on C. T2 (Find 31) still succeeds by following the sibling pointer from F. Later, T3 (Insert 33) descends, takes the write latch on C, and applies the delayed update along with its own insert.]
SLIDE 118

CONCLUSION

Making a data structure thread-safe is notoriously difficult in practice. We focused on B+Trees, but the same high-level techniques are applicable to other data structures.

SLIDE 119

NEXT CLASS

We are finally going to discuss how to execute some queries…