
Parallel DBMS, Chapter 21, Part A - PDF document



Slide 1: Parallel DBMS

Chapter 21, Part A. Slides by Joe Hellerstein, UCB, with some material from Jim Gray, Microsoft Research. See also: http://www.research.microsoft.com/research/BARC/Gray/PDB95.ppt
From Database Management Systems, 2nd Edition, by Raghu Ramakrishnan and Johannes Gehrke.

Slide 2: Why Parallel Access To Data?

At 10 MB/s, a single reader takes 1.2 days to scan 1 terabyte; 1,000 readers working in parallel at 10 MB/s each scan the same terabyte in about 1.5 minutes. That is the promise of parallelism: divide a big problem into many smaller ones to be solved in parallel. (Figure: total bandwidth grows with the number of parallel readers; the arithmetic is checked in the sketch after this slide group.)

Slide 3: Parallel DBMS: Intro

❖ Parallelism is natural to DBMS processing:
– Pipeline parallelism: many machines, each doing one step in a multi-step process.
– Partition parallelism: many machines doing the same thing to different pieces of data.
– Both are natural in DBMS!
(Figure: in a pipeline, any sequential program feeds the next; in a partition, outputs split N ways and inputs merge M ways around unchanged sequential programs.)

Slide 4: DBMS: The || Success Story

❖ DBMSs are the most (only?) successful application of parallelism.
– Teradata and Tandem vs. Thinking Machines, KSR...
– Every major DBMS vendor has some || server.
– Workstation manufacturers now depend on || DB server sales.
❖ Reasons for success:
– Bulk processing (= partition ||-ism).
– Natural pipelining.
– Inexpensive hardware can do the trick!
– Users/app-programmers don't need to think in ||.

Slide 5: Some || Terminology

❖ Speed-up: more resources means proportionally less time for a given amount of data. (Ideal: throughput in Xact/sec grows linearly with the degree of ||-ism.)
❖ Scale-up: if resources are increased in proportion to the increase in data size, time is constant. (Ideal: response time in sec/Xact stays flat as the degree of ||-ism grows with the data.)

Slide 6: Architecture Issue: Shared What?

Three architectures, from most shared to least:
– Shared Memory (SMP): easy to program, expensive to build, difficult to scale up. Examples: Sequent, SGI, Sun.
– Shared Disk: a middle ground. Examples: VMScluster, Sysplex.
– Shared Nothing (network): hard to program, cheap to build, easy to scale up. Examples: Tandem, Teradata, SP2.
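The scan arithmetic on Slide 2 is easy to verify. A minimal sketch in plain Python, taking 1 terabyte as 10^12 bytes (the slide rounds the parallel time to 1.5 minutes):

```python
TB = 10**12              # 1 terabyte, in bytes
RATE = 10 * 10**6        # one disk scans at 10 MB/s

serial_secs = TB / RATE                # a single reader
parallel_secs = serial_secs / 1000     # 1,000-way partition parallelism

print(f"serial:   {serial_secs / 86400:.1f} days")    # ~1.2 days
print(f"parallel: {parallel_secs / 60:.1f} minutes")  # ~1.7 min (slide: ~1.5)
```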

Slide 7: What Systems Work This Way (as of 9/1995)

❖ Shared Nothing: Teradata (400 nodes), Tandem (110 nodes), IBM SP2/DB2 (128 nodes), Informix/SP2 (48 nodes), AT&T & Sybase (? nodes).
❖ Shared Disk: Oracle (170 nodes), DEC Rdb (24 nodes).
❖ Shared Memory: Informix (9 nodes), RedBrick (? nodes).
(See Jim Gray & Gordon Bell, VLDB 95, Parallel Database Systems survey.)

Slide 8: Different Types of DBMS ||-ism

❖ Intra-operator parallelism: get all machines working to compute a given operation (scan, sort, join).
❖ Inter-operator parallelism: each operator may run concurrently on a different site (exploits pipelining).
❖ Inter-query parallelism: different queries run on different sites.
❖ We'll focus on intra-operator ||-ism.

Slide 9: Automatic Data Partitioning

Partitioning a table across sites (in each scheme below, picture five sites holding A...E, F...J, K...N, O...S, and T...Z):
– Range: good for equijoins, range queries, and group-by.
– Hash: good for equijoins.
– Round Robin: good to spread load.
Shared disk and shared memory are less sensitive to partitioning; shared nothing benefits from "good" partitioning. (A sketch of the three routing rules appears after this slide group.)

Slide 10: Parallel Scans

❖ Scan in parallel, and merge.
❖ Selection may not require all sites for range or hash partitioning.
❖ Indexes can be built at each partition.
❖ Question: how do indexes differ in the different schemes?
– Think about both lookups and inserts!
– What about unique indexes?

Slide 11: Parallel Sorting

❖ Current records: 8.5 Gb/minute, shared-nothing; Datamation benchmark in 2.41 secs (UCB students! http://now.cs.berkeley.edu/NowSort/).
❖ Idea:
– Scan in parallel, and range-partition as you go.
– As tuples come in, begin "local" sorting on each site.
– Resulting data is sorted, and range-partitioned.
– Problem: skew!
– Solution: "sample" the data at the start to determine partition points.

Slide 12: Parallel Aggregates

❖ For each aggregate function, we need a decomposition:
– count(S) = Σ count(s(i)); ditto for sum().
– avg(S) = (Σ sum(s(i))) / (Σ count(s(i))).
– and so on...
❖ For groups:
– Sub-aggregate groups close to the source.
– Pass each sub-aggregate to its group's site, chosen via a hash function.
(Figure: per-partition Count operators over a table partitioned A...E through T...Z feed a global Count.)
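To make Slide 9's three schemes concrete, here is a minimal sketch of the routing rules, assuming tuples keyed on a single string attribute; the function names and the five-site layout are illustrative, not from the book:

```python
from itertools import count

# Range: consecutive key ranges map to consecutive sites.
RANGE_BOUNDS = ["E", "J", "N", "S"]   # sites hold A..E, F..J, K..N, O..S, T..Z

def route_range(key: str) -> int:
    for site, bound in enumerate(RANGE_BOUNDS):
        if key[:1].upper() <= bound:
            return site
    return len(RANGE_BOUNDS)          # last site: T..Z

# Hash: a hash of the key picks the site (note: Python randomizes
# string hashes per process; a real system would use a fixed hash).
def route_hash(key: str, n_sites: int = 5) -> int:
    return hash(key) % n_sites

# Round robin: ignore the key and deal tuples out like cards.
_next = count()
def route_round_robin(key: str, n_sites: int = 5) -> int:
    return next(_next) % n_sites

for name in ["Archer", "Gray", "Zhou"]:
    print(name, route_range(name), route_hash(name), route_round_robin(name))
```

Range placement preserves key order across sites (hence range queries and group-by), hash co-locates equal keys (hence equijoins), and round robin guarantees only even load, matching the slide's table.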
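A single-process sketch of Slide 11's idea, with plain lists standing in for sites; the sample size of 100 and the helper name are our choices:

```python
import bisect
import random

def parallel_sort(tuples, n_sites):
    """Sample for partition points, range-partition, then sort locally."""
    # "Sample" the data at the start to pick splitters (the skew fix).
    sample = sorted(random.sample(tuples, min(100, len(tuples))))
    step = len(sample) // n_sites
    splitters = [sample[i * step] for i in range(1, n_sites)]

    # Scan and range-partition as you go (simulated serially here).
    sites = [[] for _ in range(n_sites)]
    for t in tuples:
        sites[bisect.bisect_right(splitters, t)].append(t)

    # "Local" sort at each site; because the partitions are key ranges,
    # concatenating the sites' outputs yields a globally sorted result.
    return [sorted(s) for s in sites]

runs = parallel_sort([random.randint(0, 10**6) for _ in range(10_000)], 5)
flat = [x for run in runs for x in run]
assert flat == sorted(flat)
print([len(r) for r in runs])   # roughly even sites, thanks to sampling
```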
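Slide 12's decompositions translate almost directly into code. A sketch with lists standing in for the per-site partitions s(i); all names are illustrative:

```python
from collections import defaultdict

# Scalar aggregates: each site reports (sum, count) for its partition;
# the coordinator combines them exactly as the slide's formulas say.
partitions = [[3, 5, 8], [1, 9], [4, 4, 4, 2]]      # s(1), s(2), s(3)
subs = [(sum(p), len(p)) for p in partitions]       # computed at the source
total_sum = sum(s for s, _ in subs)                 # sum(S)   = Σ sum(s(i))
total_cnt = sum(c for _, c in subs)                 # count(S) = Σ count(s(i))
print("count:", total_cnt, "sum:", total_sum, "avg:", total_sum / total_cnt)

# Grouped aggregates: sub-aggregate close to the source, then ship each
# group's partial result to the site chosen by hashing the group key.
def local_count(rows):                              # rows: (group, value)
    acc = defaultdict(int)
    for g, _ in rows:
        acc[g] += 1
    return acc

def ship(partials, n_sites):
    sites = [defaultdict(int) for _ in range(n_sites)]
    for partial in partials:
        for g, c in partial.items():
            sites[hash(g) % n_sites][g] += c        # the group's site
    return sites

rows = [[("a", 1), ("b", 2)], [("a", 3)], [("b", 4), ("b", 5)]]
print([dict(s) for s in ship([local_count(r) for r in rows], n_sites=2)])
```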

Slide 13: Parallel Joins

❖ Nested loop:
– Each outer tuple must be compared with each inner tuple that might join.
– Easy for range partitioning on join cols, hard otherwise!
❖ Sort-Merge (or plain Merge-Join):
– Sorting gives range-partitioning.
– Merging partitioned tables is local.

Slide 14: Parallel Hash Join

(Figure: Phase 1. The original relations, R then S, are read through B main-memory buffers; a hash function h routes each tuple to one of B-1 output partitions on disk.)
❖ In the first phase, partitions get distributed to different sites:
– A good hash function automatically distributes work evenly!
– But what about handling skew?
❖ Do the second phase at each site.
❖ Almost always the winner for equi-join. (A sketch of both phases follows this slide group.)

Slide 15: Dataflow Network for || Join

❖ Good use of split/merge makes it easier to build parallel versions of sequential join code.

Slide 16: Complex Parallel Query Plans

❖ Complex queries: inter-operator parallelism.
– Pipelining between operators: note that sort and phase 1 of hash-join block the pipeline!
– Bushy trees. (Figure: A joins B at sites 1-4, R joins S at sites 5-8, and the two results join at sites 1-8.)

Slide 17: N × M-way Parallelism

(Figure: five range partitions, A...E through T...Z, each feed a Join and a Sort whose outputs are merged M ways.)
N inputs, M outputs, no bottlenecks: partitioned data, and partitioned and pipelined data flows.

Slide 18: Observations

❖ It is relatively easy to build a fast parallel query executor.
– S.M.O.P. (a simple matter of programming).
❖ It is hard to write a robust and world-class parallel query optimizer.
– There are many tricks.
– One quickly hits the complexity barrier.
– Still open research!
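A minimal sketch of Slide 14's two phases, run in one process with lists standing in for sites; the helper names and toy relations are ours:

```python
def phase1_distribute(relation, key, n_sites):
    """Phase 1: route every tuple to site hash(join key) % n_sites."""
    sites = [[] for _ in range(n_sites)]
    for row in relation:
        sites[hash(row[key]) % n_sites].append(row)
    return sites

def phase2_local_join(r_part, s_part, key):
    """Phase 2, at one site: build a hash table on R's partition, probe with S's."""
    table = {}
    for r in r_part:
        table.setdefault(r[key], []).append(r)
    return [(r, s) for s in s_part for r in table.get(s[key], [])]

R = [{"sid": i, "name": f"s{i}"} for i in range(6)]
S = [{"sid": i % 3, "bid": 100 + i} for i in range(6)]

N = 4
r_sites = phase1_distribute(R, "sid", N)
s_sites = phase1_distribute(S, "sid", N)
out = [p for i in range(N)
         for p in phase2_local_join(r_sites[i], s_sites[i], "sid")]
print(len(out))   # 6: equal keys landed on the same site, so no match is lost
```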
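Slide 15's point is that split and merge are the only new operators needed: the code between them can remain an ordinary sequential program. A toy sketch under that reading:

```python
def split(stream, n, route):
    """Fan one stream out into n streams, one per site."""
    outs = [[] for _ in range(n)]
    for item in stream:
        outs[route(item)].append(item)
    return outs

def merge(streams):
    """Fan m streams back into one (no order guarantee)."""
    for s in streams:
        yield from s

def seq_evens(stream):               # any sequential program, unchanged
    return (x for x in stream if x % 2 == 0)

parts = split(range(20), 3, route=lambda x: x % 3)      # 3-way partition
print(sorted(merge(seq_evens(p) for p in parts)))       # same answer as serial
```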

Slide 19: Parallel Query Optimization

❖ Common approach: 2 phases.
– Pick the best sequential plan (System R algorithm).
– Pick the degree of parallelism based on current system parameters.
❖ "Bind" operators to processors.
– Take the query tree, and "decorate" it as in the previous picture.

Slide 20: What's Wrong With That?

❖ Best serial plan != best || plan! Why?
❖ Trivial counter-example:
– A table partitioned across two nodes (A..M and N..Z), with a local secondary index at each.
– Range query: all of node 1 and 1% of node 2.
– Node 1 should do a table scan of its partition.
– Node 2 should use its secondary index.
❖ SELECT * FROM telephone_book WHERE name < 'NoGood';
(A toy cost comparison follows this slide group.)

Slide 21: Parallel DBMS Summary

❖ ||-ism natural to query processing:
– Both pipeline and partition ||-ism!
❖ Shared-Nothing vs. Shared-Mem:
– Shared-disk too, but less standard.
– Shared-mem easy, costly; doesn't scale up.
– Shared-nothing cheap, scales well, harder to implement.
❖ Intra-op, inter-op, & inter-query ||-ism all possible.

Slide 22: || DBMS Summary, cont.

❖ Data layout choices important!
❖ Most DB operations can be done partition-||:
– Sort.
– Sort-merge join, hash-join.
❖ Complex plans:
– Allow for pipeline-||ism, but sorts and hashes block the pipeline.
– Partition ||-ism achieved via bushy trees.

Slide 23: || DBMS Summary, cont.

❖ Hardest part of the equation: optimization.
– 2-phase optimization is simplest, but can be ineffective.
– More complex schemes are still at the research stage.
❖ We haven't said anything about Xacts and logging.
– Easy in a shared-memory architecture.
– Takes some care in shared-nothing.
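Slide 20's counter-example can be reproduced with a toy cost model in which each node compares a full scan of its partition against its local secondary index; the page and row counts below are invented for illustration:

```python
def best_access_path(pages, rows, selectivity):
    scan_cost = pages                      # read every page once
    index_cost = 2 * rows * selectivity    # ~2 I/Os per matching row (unclustered)
    return "scan" if scan_cost <= index_cost else "index"

# SELECT * FROM telephone_book WHERE name < 'NoGood';
# matches ~100% of node 1 (A..M) but only ~1% of node 2 (N..Z)
for node, sel in [("node 1 (A..M)", 1.00), ("node 2 (N..Z)", 0.01)]:
    print(node, "->", best_access_path(pages=5_000, rows=50_000, selectivity=sel))
# node 1 -> scan   (5,000 vs 100,000)
# node 2 -> index  (5,000 vs 1,000)
```

A single serial plan would pick one access path for the whole table; letting each node decide independently is exactly why the best serial plan need not be the best parallel plan.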
