DATABASE SYSTEM IMPLEMENTATION GT 4420/6422 // SPRING 2019 // PowerPoint PPT Presentation
slide-1
SLIDE 1

DATABASE SYSTEM IMPLEMENTATION

GT 4420/6422 // SPRING 2019 // @JOY_ARULRAJ LECTURE #17: QUERY EXECUTION & SCHEDULING

slide-2
SLIDE 2

TODAY’S AGENDA

Process Models
Query Parallelization
Data Placement
Scheduling

2

slide-3
SLIDE 3

QUERY EXECUTION

A query plan is composed of operators. An operator instance is an invocation of an operator on some segment of data.

A task is the execution of a sequence of one or more operator instances.

3
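The three terms above can be sketched in a few lines of Python (a hypothetical illustration, not code from the course: the operators, segments, and table contents are all made up):

```python
# Operator: a function over tuples. Operator instance: that function bound
# to one segment of data. Task: running a sequence of instances.

def scan(segment):
    return list(segment)

def filter_lt_99(rows):
    return [r for r in rows if r < 99]

def make_task(segment):
    # A task executes a sequence of operator instances (scan, then filter)
    # over its assigned segment.
    def task():
        return filter_lt_99(scan(segment))
    return task

data = list(range(200))
segments = [data[0:100], data[100:200]]   # two segments of the "table"
tasks = [make_task(s) for s in segments]  # one task per segment
results = [t() for t in tasks]
```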

slide-4
SLIDE 4

PROCESS MODEL

A DBMS’s process model defines how the system is architected to support concurrent requests from a multi-user application. A worker is the DBMS component that is responsible for executing tasks on behalf of the client and returning the results.

4

ARCHITECTURE OF A DATABASE SYSTEM Foundations and Trends in Databases 2007

slide-5
SLIDE 5

PROCESS MODELS

Approach #1: Process per DBMS Worker
Approach #2: Process Pool
Approach #3: Thread per DBMS Worker

5

slide-6
SLIDE 6

PROCESS PER WORKER

Each worker is a separate OS process.

→ Relies on OS scheduler. → Uses shared memory for global data structures. → A process crash doesn’t take down the entire system. → Examples: IBM DB2, Postgres, Oracle

6

Dispatcher Worker
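A minimal sketch of this model (an assumption on my part, not the slide's own code): each request is handed to a freshly spawned OS process, here a child Python interpreter standing in for a DBMS worker.

```python
import subprocess
import sys

def run_in_worker_process(query):
    # One OS process per request: a child interpreter plays the worker.
    # The OS scheduler decides when it runs, and a crash kills only this
    # worker process, not the dispatcher.
    worker_code = "import sys; sys.stdout.write('result of ' + sys.argv[1])"
    proc = subprocess.run(
        [sys.executable, "-c", worker_code, query],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout
```

A real DBMS would additionally share buffer-pool state between these processes via shared memory, which this sketch omits.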


slide-12
SLIDE 12

PROCESS POOL

A worker uses any process that is free in the pool.

→ Still relies on OS scheduler and shared memory. → Bad for CPU cache locality. → Examples: IBM DB2, Postgres (2015)

12

Worker Pool Dispatcher
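Python's `ProcessPoolExecutor` can stand in for this model (a sketch under that assumption, not the DBMSs' actual mechanism): a fixed pool of processes is created once, and each task runs on whichever pooled process is free.

```python
from concurrent.futures import ProcessPoolExecutor

def execute_task(query):
    # Any free process in the pool picks this up; successive queries from
    # the same client may land on different processes, which is what hurts
    # CPU cache locality.
    return f"result of {query}"

def run(queries, pool_size=2):
    # The dispatcher hands tasks to the pool instead of forking per request.
    with ProcessPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(execute_task, queries))
```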


slide-15
SLIDE 15

THREAD PER WORKER

Single process with multiple worker threads.

→ DBMS has to manage its own scheduling. → May or may not use a dispatcher thread. → Thread crash (may) kill the entire system. → Examples: IBM DB2, MSSQL, MySQL, Oracle (2014)

15

Worker Threads
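A sketch of the thread-per-worker model (hypothetical, with a shared task queue playing the dispatcher): all workers live in one process and share its address space directly.

```python
import queue
import threading

results = queue.Queue()

def worker(tasks):
    # Worker thread: pulls tasks until it sees the poison pill. All threads
    # share one address space, so no shared-memory machinery is needed, but
    # an unhandled crash here can take down the whole process.
    while True:
        q = tasks.get()
        if q is None:
            break
        results.put(f"result of {q}")

tasks = queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks,)) for _ in range(2)]
for t in threads:
    t.start()
for q in ["SELECT 1", "SELECT 2"]:
    tasks.put(q)
for _ in threads:
    tasks.put(None)          # poison pill: tell each worker to exit
for t in threads:
    t.join()
```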


slide-17
SLIDE 17

PROCESS MODELS

Using a multi-threaded architecture has several advantages:

→ Less overhead per context switch. → Don’t have to manage shared memory.

The thread per worker model does not mean that you have intra-query parallelism. I am not aware of any new DBMS built in the last 7-8 years that doesn’t use threads.

17

slide-18
SLIDE 18

SCHEDULING

For each query plan, the DBMS has to decide where, when, and how to execute it.

→ How many tasks should it use? → How many CPU cores should it use? → What CPU core should the tasks execute on? → Where should a task store its output?

The DBMS always knows more than the OS.

18

slide-19
SLIDE 19

INTER-QUERY PARALLELISM

Improve overall performance by allowing multiple queries to execute simultaneously.

→ Provide the illusion of isolation through concurrency control scheme.

The difficulty of implementing a concurrency control scheme is not significantly affected by the DBMS’s process model.

19

slide-20
SLIDE 20

INTRA-QUERY PARALLELISM

Improve the performance of a single query by executing its operators in parallel.

Approach #1: Intra-Operator (Horizontal)

→ Operators are decomposed into independent instances that perform the same function on different subsets of data.

Approach #2: Inter-Operator (Vertical)

→ Operations are overlapped in order to pipeline data from one stage to the next without materialization.

20

slide-21
SLIDE 21-36

INTRA-OPERATOR PARALLELISM

SELECT A.id, B.value FROM A, B WHERE A.id = B.id AND A.value < 99 AND B.value > 100

[Diagram, built up over slides 21-36: the scan of A is split into fragments A1, A2, A3, each assigned to a worker (1, 2, 3); parallel filters (σ) feed per-fragment hash-table builds ("Build HT"), combined through an Exchange operator. The scan of B is likewise split into fragments B1, B2, B3, filtered in parallel, and probed against the hash tables ("Probe HT"), with a final Exchange merging the outputs under the projection (π).]
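The plan built up on the slides above (partition A, filter, build hash tables, exchange on the join key, then filter and probe with B) can be sketched sequentially in Python. This is a hypothetical single-process illustration: the partition count, routing function, and table contents are all made up, and duplicate join keys on the build side are assumed away.

```python
N_PART = 3

def part(key):
    # Exchange operator: route a tuple to a partition by its join key.
    return hash(key) % N_PART

A = [(1, 50), (2, 120), (3, 70)]    # (id, value)
B = [(1, 150), (3, 90), (3, 200)]   # (id, value)

# Build side: sigma(A.value < 99), then per-partition hash tables on A.id.
tables = [dict() for _ in range(N_PART)]
for a_id, a_val in A:
    if a_val < 99:
        tables[part(a_id)][a_id] = a_val

# Probe side: sigma(B.value > 100), probe the matching partition only.
out = []
for b_id, b_val in B:
    if b_val > 100 and b_id in tables[part(b_id)]:
        out.append((b_id, b_val))   # project A.id, B.value
```

In the real plan each partition's build and probe would run as independent operator instances on separate workers; here they run in sequence for clarity.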

slide-37
SLIDE 37-41

INTER-OPERATOR PARALLELISM

SELECT A.id, B.value FROM A, B WHERE A.id = B.id AND A.value < 99 AND B.value > 100

[Diagram, built up over slides 37-41: two plan stages run concurrently as a pipeline.]

Stage 1 (⨝): for r1 ∊ outer: for r2 ∊ inner: emit(r1 ⨝ r2)

Stage 2 (π): for r ∊ incoming: emit(π(r))
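The two pipelined stages in the pseudocode above can be expressed with chained generators (a Python stand-in I am using for illustration, not the engine's actual mechanism): the projection consumes each joined row as it is emitted, so no intermediate result is materialized.

```python
def join(outer, inner):
    # Stage 1: for r1 in outer: for r2 in inner: emit(r1 join r2)
    for r1 in outer:
        for r2 in inner:
            yield (r1, r2)

def project(incoming):
    # Stage 2: for r in incoming: emit(pi(r)) -- keep A.id, B.value
    for r in incoming:
        yield (r[0][0], r[1][1])

A = [(1, 50), (2, 70)]      # (id, value)
B = [(1, 150), (2, 200)]    # (id, value)
pipeline = project(join(A, B))
first = next(pipeline)      # only one joined row has been computed so far
rest = list(pipeline)
```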

slide-42
SLIDE 42

OBSERVATION

Coming up with the right number of workers to use for a query plan depends on the number of CPU cores, the size of the data, and the functionality of the operators.

42

slide-43
SLIDE 43

WORKER ALLOCATION

Approach #1: One Worker per Core

→ Each core is assigned one thread that is pinned to that core in the OS. → See sched_setaffinity

Approach #2: Multiple Workers per Core

→ Use a pool of workers per core (or per socket). → Allows CPU cores to be fully utilized in case one worker at a core blocks.

43
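Approach #1 can be sketched via Python's wrapper for the `sched_setaffinity` call named on the slide (the worker and task shapes here are hypothetical; the call itself only exists on Linux, so it is guarded):

```python
import os

def pin_to_core(core_id):
    # os.sched_setaffinity wraps sched_setaffinity(2); the first argument
    # of 0 means "the calling process". Guarded for non-Linux platforms.
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_id})

def worker_main(core_id, tasks):
    pin_to_core(core_id)        # one worker, pinned to one core
    return [t() for t in tasks]

results = worker_main(0, [lambda: 1 + 1, lambda: 2 + 2])
```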

slide-44
SLIDE 44

TASK ASSIGNMENT

Approach #1: Push

→ A centralized dispatcher assigns tasks to workers and monitors their progress. → When the worker notifies the dispatcher that it is finished, it is given a new task.

Approach #2: Pull

→ Workers pull the next task from a queue, process it, and then return to get the next task.

44
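A sketch of the push approach (my own illustration; the dispatcher, inbox queues, and notification scheme are assumptions): the dispatcher seeds each worker with one task, then hands out the next task to whichever worker reports completion.

```python
import queue
import threading

def worker(inbox, completions, wid):
    # Worker: run assigned tasks, notify the dispatcher after each one.
    while True:
        task = inbox.get()
        if task is None:
            break
        completions.put((wid, task()))

def dispatch(tasks, n_workers=2):
    inboxes = [queue.Queue() for _ in range(n_workers)]
    completions = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(inboxes[i], completions, i))
        for i in range(n_workers)
    ]
    for t in threads:
        t.start()
    pending = list(tasks)
    results = []
    in_flight = 0
    for i in range(min(n_workers, len(pending))):
        inboxes[i].put(pending.pop(0))   # seed each worker with one task
        in_flight += 1
    while in_flight:
        wid, result = completions.get()  # a worker finished: it is now free
        results.append(result)
        if pending:
            inboxes[wid].put(pending.pop(0))
        else:
            in_flight -= 1
    for box in inboxes:
        box.put(None)                    # shut all workers down
    for t in threads:
        t.join()
    return results
```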

slide-45
SLIDE 45

OBSERVATION

Regardless of what worker allocation or task assignment policy the DBMS uses, it’s important that workers operate on local data. The DBMS’s scheduler has to be aware of its underlying hardware’s memory layout.

→ Uniform vs. Non-Uniform Memory Access

45

slide-46
SLIDE 46

UNIFORM MEMORY ACCESS

46

[Diagram: four CPUs, each with its own cache, sharing a single bus to memory]

slide-47
SLIDE 47-48

NON-UNIFORM MEMORY ACCESS

47

[Diagram: four CPUs, each with its own cache and local memory, connected by a point-to-point interconnect]

Intel (2008): QuickPath Interconnect
Intel (2017): UltraPath Interconnect
AMD (??): HyperTransport
AMD (2017): Infinity Fabric

slide-49
SLIDE 49

DATA PLACEMENT

The DBMS can partition memory for a database and assign each partition to a CPU. By controlling and tracking the location of partitions, it can schedule operators to execute on workers at the closest CPU core. See Linux’s move_pages

49

slide-50
SLIDE 50

MEMORY ALLOCATION

50

slide-51
SLIDE 51

MEMORY ALLOCATION

What happens when the DBMS calls malloc?

→ Assume that the allocator doesn’t already have a chunk of memory that it can give out.

Actually, almost nothing:

→ The allocator will extend the process’s data segment. → But this new virtual memory is not immediately backed by physical memory. → The OS only allocates physical memory when there is a page fault.

51
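The same lazy behavior can be observed with an anonymous memory mapping (a sketch, using Python's `mmap` as a stand-in for the allocator extending the data segment; the lazy-commit claim in the comments describes Linux):

```python
import mmap

SIZE = 64 * 1024 * 1024        # 64 MiB of *virtual* address space

# Creating the anonymous mapping returns immediately; on Linux none of it
# is backed by physical memory yet.
buf = mmap.mmap(-1, SIZE)

# The first write to a page triggers a page fault, and only then does the
# OS allocate a physical frame for that page.
buf[0] = 1
buf[SIZE - 1] = 2
touched = (buf[0], buf[SIZE - 1])
buf.close()
```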


slide-53
SLIDE 53

MEMORY ALLOCATION LOCATION

Now, after a page fault, where does the OS allocate physical memory in a NUMA system?

Approach #1: Interleaving

→ Distribute allocated memory uniformly across CPUs.

Approach #2: First-Touch

→ At the CPU of the thread that accessed the memory location that caused the page fault.

53

slide-54
SLIDE 54

DATA PLACEMENT – OLTP

54

Source: Danica Porobic

[Bar chart: Throughput (txn/sec), scale 4000-12000, for four data placement policies: Spread, Group, Mix, OS]

Workload: TPC-C Payment using 4 Workers
Processor: NUMA with 4 sockets (6 cores each)


slide-56
SLIDE 56

DATA PLACEMENT – OLAP

56

[Line chart: Tuples Read Per Second (M), scale 10000-30000, vs. # Threads (8-152), comparing Random Partition against Local Partition Only]

Source: Haibin Lin

Database: 10 million tuples
Workload: Sequential Scan
Processor: 8 sockets, 10 cores per node (2x HT)

slide-57
SLIDE 57

PARTITIONING VS. PLACEMENT

A partitioning scheme is used to split the database based on some policy.

→ Round-robin → Attribute Ranges → Hashing → Partial/Full Replication

A placement scheme then tells the DBMS where to put those partitions.

→ Round-robin → Interleave across cores

57
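The partitioning policies in the first list above can each be written as a small routing function from a tuple to a partition number (a hypothetical sketch; the keys, bounds, and partition counts are made up):

```python
def round_robin(i, n):
    # Round-robin: assign by tuple position, ignoring the key.
    return i % n

def by_range(key, bounds):
    # Attribute ranges: bounds=[20, 40] means partitions (-inf,20), [20,40),
    # [40,+inf).
    for p, upper in enumerate(bounds):
        if key < upper:
            return p
    return len(bounds)

def by_hash(key, n):
    # Hashing on the partitioning attribute.
    return hash(key) % n

rows = [(i, i * 10) for i in range(6)]            # (id, value)
rr  = [round_robin(i, 3) for i, _ in enumerate(rows)]
rng = [by_range(value, [20, 40]) for _, value in rows]
hsh = [by_hash(row_id, 3) for row_id, _ in rows]
```

A placement scheme would then map each resulting partition number to a core or NUMA node, e.g. round-robin over sockets.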

slide-58
SLIDE 58

OBSERVATION

We have the following so far:

→ Process Model → Worker Allocation Model → Task Assignment Model → Data Placement Policy

But how do we decide how to create a set of tasks from a logical query plan?

→ This is relatively easy for OLTP queries. → Much harder for OLAP queries…

58

slide-59
SLIDE 59

STATIC SCHEDULING

The DBMS decides how many threads to use to execute the query when it generates the plan. It does not change while the query executes.

→ The easiest approach is to just use the same # of tasks as the # of cores.

59

slide-60
SLIDE 60

MORSEL-DRIVEN SCHEDULING

Dynamic scheduling of tasks that operate over horizontal partitions called “morsels” that are distributed across cores.

→ One worker per core → Pull-based task assignment → Round-robin data placement

Supports parallel, NUMA-aware operator implementations.

60

MORSEL-DRIVEN PARALLELISM: A NUMA-AWARE QUERY EVALUATION FRAMEWORK FOR THE MANY-CORE AGE SIGMOD 2014
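A stripped-down sketch of the idea (my own illustration, not HyPer's code: fixed-size list slices stand in for morsels, and thread-local NUMA preferences are omitted): tasks over morsels go into a global queue, and one worker per "core" pulls until the queue is drained.

```python
import queue
import threading

MORSEL_SIZE = 4
data = list(range(10))
# Horizontal partitions ("morsels") of the input.
morsels = [data[i:i + MORSEL_SIZE] for i in range(0, len(data), MORSEL_SIZE)]

tasks = queue.Queue()
for m in morsels:
    tasks.put(m)

partials = []

def worker():
    while True:
        try:
            m = tasks.get_nowait()   # pull-based task assignment
        except queue.Empty:
            return                   # queue drained: worker exits
        partials.append(sum(m))      # operate on one morsel at a time

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = sum(partials)
```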

slide-61
SLIDE 61

HYPER – ARCHITECTURE

No separate dispatcher thread. The threads perform cooperative scheduling for each query plan using a single task queue.

→ Each worker tries to select tasks that will execute on morsels that are local to it. → If there are no local tasks, then the worker just pulls the next task from the global work queue.

61

slide-62
SLIDE 62-64

HYPER – DATA PARTITIONING

SELECT A.id, B.value FROM A, B WHERE A.id = B.id AND A.value < 99 AND B.value > 100

[Diagram, built up over slides 62-64: the data table for A (columns id, a1, a2, a3) is split into horizontal fragments A1, A2, A3 (the "morsels"), and each morsel is assigned to a core (1, 2, 3).]

slide-65
SLIDE 65

HYPER – EXECUTION EXAMPLE

65

SELECT A.id, B.value FROM A, B WHERE A.id = B.id AND A.value < 99 AND B.value > 100

[Diagram, animated over slides 65-79: a Global Task Queue feeds three cores (1, 2, 3), each with its own local morsels and an output buffer; each worker prefers tasks whose morsels are local to it.]


slide-80
SLIDE 80

MORSEL-DRIVEN SCHEDULING

Because there is only one worker per core, the workers have to use work stealing; otherwise threads could sit idle waiting for stragglers. Uses a lock-free hash table to maintain the global work queues.

→ We will discuss hash tables next class…

80
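A deliberately sequential sketch of work stealing (an illustration under stated assumptions: plain deques stand in for the per-worker queues, and only the idle worker runs; a real implementation would use concurrent threads and, as the slide notes, a lock-free structure):

```python
from collections import deque

# Worker 0 owns all the tasks; worker 1 starts with nothing to do.
local_queues = [deque([1, 2, 3, 4, 5]), deque()]

def run_worker(wid, results):
    while True:
        if local_queues[wid]:
            # Prefer local work: take from the head of our own queue.
            results.append(local_queues[wid].popleft())
        else:
            # No local work: try to steal from the *tail* of a victim's
            # queue, which reduces contention with the victim's own head.
            for victim in range(len(local_queues)):
                if victim != wid and local_queues[victim]:
                    results.append(local_queues[victim].pop())
                    break
            else:
                return      # nothing left anywhere: worker exits

done = []
run_worker(1, done)         # the idle worker steals everything, tail first
```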

slide-81
SLIDE 81

SAP HANA – NUMA-AWARE SCHEDULER

Pull-based scheduling with multiple worker threads that are organized into groups (pools).

→ Each CPU can have multiple groups. → Each group has a soft and hard priority queue.

Uses a separate “watchdog” thread to check whether groups are saturated and can reassign tasks dynamically.

81

SCALING UP CONCURRENT MAIN-MEMORY COLUMN-STORE SCANS: TOWARDS ADAPTIVE NUMA-AWARE DATA AND TASK PLACEMENT VLDB 2015

slide-82
SLIDE 82

SAP HANA – THREAD GROUPS

Each thread group has a soft and a hard priority task queue.

→ Threads are allowed to steal tasks from other groups’ soft queues.

Four different pools of threads per group:

→ Working: Actively executing a task. → Inactive: Blocked inside the kernel due to a latch. → Free: Sleeps for a little while, wakes up to see whether there is a new task to execute. → Parked: Like free, but doesn’t wake up on its own.

82

slide-83
SLIDE 83

SAP HANA – NUMA-AWARE SCHEDULER

Can dynamically adjust thread pinning based on whether a task is CPU or memory bound. Found that work stealing was not as beneficial for systems with a larger number of sockets. Using thread groups allows cores to execute other tasks instead of only queries.

83

slide-84
SLIDE 84

PARTING THOUGHTS

A DBMS is a beautiful, strong-willed independent piece of software. But it has to make sure that it uses its underlying hardware correctly.

→ Data location is an important aspect of this. → Tracking memory location in a single-node DBMS is the same as tracking shards in a distributed DBMS.

Don’t let the OS ruin your life.

84

slide-85
SLIDE 85

NEXT CLASS

Concurrency Control

Reminder: Project updates due after Spring break (Mar 26).

85