Database System Implementation


DATABASE SYSTEM IMPLEMENTATION
GT 4420/6422 // SPRING 2019 // @JOY_ARULRAJ
LECTURE #12: OLTP INDEXES (PART II)

2 TODAY'S AGENDA
B+Tree Overview
Index Implementation Issues
ART Index

3 LOGISTICS
Reminder: Problem set due on Feb 21st.


43-47 BULK INSERT
The fastest/best way to build a B+Tree is to first sort the keys and then build the index from the bottom up.
Keys: 3, 7, 9, 13, 6, 1
Sorted keys: 1, 3, 6, 7, 9, 13
[Figure: the sorted keys fill the leaf nodes (1 3 | 6 7 | 9 13) and the inner node (6 9) is built on top of them.]
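The slides build this example by hand; the following is a minimal C++ sketch of the same bottom-up idea under simplifying assumptions (unique integer keys, a fixed leaf capacity, and a single inner level built over the leaves). The LeafNode/InnerNode types and the BulkLoad function are hypothetical, not from the lecture.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical node types; real B+Tree nodes also carry values, parent
// pointers, latches, etc. (ownership/cleanup is omitted in this sketch).
struct LeafNode {
    std::vector<int> keys;
    LeafNode* next = nullptr;          // sibling pointer for range scans
};

struct InnerNode {
    std::vector<int> separators;       // first key of every child after the first
    std::vector<LeafNode*> children;
};

// Bottom-up bulk load: sort once, fill the leaves left to right, then build
// the parent level from the first key of each leaf.
InnerNode BulkLoad(std::vector<int> keys, std::size_t leaf_capacity) {
    std::sort(keys.begin(), keys.end());

    std::vector<LeafNode*> leaves;
    for (std::size_t i = 0; i < keys.size(); i += leaf_capacity) {
        auto* leaf = new LeafNode();
        std::size_t end = std::min(i + leaf_capacity, keys.size());
        leaf->keys.assign(keys.begin() + i, keys.begin() + end);
        if (!leaves.empty()) leaves.back()->next = leaf;   // chain the siblings
        leaves.push_back(leaf);
    }

    InnerNode root;
    for (std::size_t i = 0; i < leaves.size(); ++i) {
        root.children.push_back(leaves[i]);
        if (i > 0) root.separators.push_back(leaves[i]->keys.front());
    }
    return root;   // a real loader would recurse until a single root remains
}
```

With the slide's keys and a leaf capacity of 2, this produces the leaves (1 3), (6 7), (9 13) and the separators 6, 9 shown in the figure.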

48-56 POINTER SWIZZLING
Nodes use page ids to reference other nodes in the index. The DBMS has to get the memory location from the page table during traversal.
If a page is pinned in the buffer pool, then we can store raw pointers instead of page ids, thereby removing the need to get the address from the page table.
[Figure: while evaluating Find Key > 3, the traversal first resolves Page #2 and Page #3 through the buffer pool's page table (Page #2 → <Page*>, Page #3 → <Page*>); once those pages are pinned, the inner node stores the <Page*> raw pointers directly.]
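The slides describe swizzling at the level of the diagram; one plausible way to implement it (an assumption, not the lecture's code) is to tag each child reference so it can hold either a page id or a raw pointer to a pinned frame. The Page type, the FetchAndPin call, and the low-bit tagging scheme below are all hypothetical.

```cpp
#include <cstdint>

struct Page;                              // in-memory frame contents (hypothetical)
Page* FetchAndPin(uint64_t page_id);      // assumed buffer-pool call that pins the page

// A child slot in an inner node: either an unswizzled page id or a swizzled
// raw pointer. The lowest bit is used as a tag, assuming Page* values are at
// least 2-byte aligned.
struct ChildRef {
    uint64_t raw;   // (page_id << 1) | 1 when unswizzled, the pointer bits when swizzled

    static ChildRef FromPageId(uint64_t pid) { return {(pid << 1) | 1}; }
    bool IsSwizzled() const { return (raw & 1) == 0; }

    Page* Resolve() {
        if (IsSwizzled()) {
            return reinterpret_cast<Page*>(raw);    // no page-table lookup needed
        }
        Page* page = FetchAndPin(raw >> 1);         // page-table lookup + pin
        raw = reinterpret_cast<uint64_t>(page);     // swizzle for future traversals
        return page;
    }
};
```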

57 MEMORY POOLS
We don't want to call malloc and free every time we need to add or delete a node, because this could lead to a system call.
→ If you call malloc to request 10 bytes of memory, the allocator may invoke the sbrk (or mmap) system call to request 4KB from the OS.
→ Then, when you call malloc the next time to request another 10 bytes, it may not have to issue a system call; instead, it may return a pointer within the already-allocated memory.

58 MEMORY POOLS
If all the nodes are the same size, then the index can maintain a pool of available nodes (see the sketch below).
→ Insert: Grab a free node, otherwise create a new one.
→ Delete: Add the node back to the free pool.
Need some policy to decide when to shrink the pool (garbage collection & de-fragmentation).
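A minimal sketch of such a node pool is shown below, assuming fixed-size nodes and a single thread; the Node type, the chunk size, and the class interface are made up for illustration, and a real pool would also need latching plus the shrink/defragmentation policy mentioned above.

```cpp
#include <cstddef>
#include <vector>

struct Node { /* fixed-size B+Tree node payload */ };

class NodePool {
public:
    // Insert path: reuse a freed node if possible, otherwise allocate a whole
    // chunk at once so we don't hit the allocator once per node.
    Node* Allocate() {
        if (free_list_.empty()) {
            auto& chunk = chunks_.emplace_back(kChunkSize);
            for (Node& n : chunk) free_list_.push_back(&n);
        }
        Node* n = free_list_.back();
        free_list_.pop_back();
        return n;
    }

    // Delete path: return the node to the pool instead of calling free().
    void Release(Node* n) { free_list_.push_back(n); }

private:
    static constexpr std::size_t kChunkSize = 1024;
    std::vector<std::vector<Node>> chunks_;   // owns the memory
    std::vector<Node*> free_list_;            // nodes available for reuse
};
```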

59-64 GARBAGE COLLECTION
We need to know when it is safe to reclaim memory for deleted nodes in a latch-free index.
→ Reference Counting
→ Epoch-based Reclamation
→ Hazard Pointers
→ Many others …
[Figure: a node holding (K2,V2), (K3,V3), (K4,V4) has K3/V3 deleted, leaving (K2,V2), (K4,V4); other threads may still be reading the old version, so its memory cannot be reclaimed immediately.]

65 REFERENCE COUNTING
Maintain a counter for each node to keep track of the number of threads that are accessing it.
→ Increment the counter before accessing.
→ Decrement it when finished.
→ A node is only safe to delete when the count is zero.
This has bad performance on multi-core CPUs:
→ Incrementing/decrementing the counters causes a lot of cache-coherence traffic.
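As a rough illustration (not the lecture's code), reference counting on a node might look like the sketch below using std::atomic. Even this simple version shows why the approach is tricky: a reader must somehow be prevented from reaching a node whose count has already hit zero, and every Acquire/Release bounces the counter's cache line between cores.

```cpp
#include <atomic>

struct Node {
    std::atomic<int> ref_count{0};
    bool deleted = false;    // node has been logically unlinked from the index
    // ... keys / values / child pointers ...
};

// Pin the node before reading it so that nobody reclaims it underneath us.
void Acquire(Node* n) { n->ref_count.fetch_add(1, std::memory_order_acquire); }

// Unpin; the last reader of a logically deleted node reclaims it.
void Release(Node* n) {
    if (n->ref_count.fetch_sub(1, std::memory_order_release) == 1 && n->deleted) {
        delete n;
    }
}
// Caveat: a complete scheme must also stop a thread from acquiring a pointer
// to a node whose count has already dropped to zero; the point of the slide is
// that the atomic updates alone generate heavy cache-coherence traffic.
```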

66 OBSERVATION
We don't care about the exact value of the reference counter; we only need to know when it reaches zero.
We don't have to perform garbage collection immediately when the counter reaches zero.
Source: Stephen Tu

67 EPOCH GARBAGE COLLECTION
Maintain a global epoch counter that is periodically updated (e.g., every 10 ms).
→ Keep track of what threads enter the index during an epoch and when they leave.
Mark the current epoch of a node when it is marked for deletion.
→ The node can be reclaimed once all threads have left that epoch (and all preceding epochs).
Also known as Read-Copy-Update (RCU) in Linux.
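A bare-bones sketch of epoch-based reclamation, under strong simplifying assumptions (a fixed number of worker threads, a single retired list with external synchronization, and some background thread that bumps g_epoch), might look like this; all names and the overall structure are hypothetical.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <limits>
#include <vector>

constexpr int kMaxThreads = 64;                         // assumed upper bound on worker threads
constexpr uint64_t kNotInIndex = std::numeric_limits<uint64_t>::max();

std::atomic<uint64_t> g_epoch{0};                       // bumped periodically (e.g., every 10 ms)
std::atomic<uint64_t> g_thread_epoch[kMaxThreads];      // epoch each thread entered the index in

struct Retired { void* node; uint64_t epoch; };
std::vector<Retired> g_retired;                         // assume external synchronization

void InitEpochs() {
    for (auto& e : g_thread_epoch) e.store(kNotInIndex);
}
void EnterIndex(int tid) { g_thread_epoch[tid].store(g_epoch.load()); }
void LeaveIndex(int tid) { g_thread_epoch[tid].store(kNotInIndex); }

// A deleted node is only tagged with the epoch in which it was retired.
void Retire(void* node) { g_retired.push_back({node, g_epoch.load()}); }

// Reclaim nodes retired in epochs that no active thread can still observe.
void Collect() {
    uint64_t min_active = kNotInIndex;
    for (auto& e : g_thread_epoch) min_active = std::min(min_active, e.load());
    for (auto it = g_retired.begin(); it != g_retired.end();) {
        if (it->epoch < min_active) {
            // safe to hand it->node back to the allocator / node pool here
            it = g_retired.erase(it);
        } else {
            ++it;
        }
    }
}
```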

68 NON-UNIQUE INDEXES
Approach #1: Duplicate Keys
→ Use the same node layout but store duplicate keys multiple times.
Approach #2: Value Lists
→ Store each key only once and maintain a linked list of unique values.
Reference: MODERN B-TREE TECHNIQUES, NOW Publishers, 2010

69-71 NON-UNIQUE: DUPLICATE KEYS
[Figure: a B+Tree leaf node (level, slot count, prev/next pointers) whose sorted key array stores K1 in three slots, one slot per matching value.]

72-73 NON-UNIQUE: VALUE LISTS
[Figure: a B+Tree leaf node whose sorted key array stores each of K1…Kn once; every key slot points to a list of all of its values.]
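The two leaf layouts can be contrasted with a small sketch; the Key/Rid aliases and the use of std::vector are placeholders for the fixed-size slot arrays a real node would use.

```cpp
#include <cstdint>
#include <vector>

using Key = int32_t;     // placeholder key type
using Rid = uint64_t;    // placeholder record id / tuple pointer

// Approach #1: duplicate keys. The node layout is unchanged, but the same key
// occupies one slot per matching tuple (K1, K1, K1, K2, ...).
struct DuplicateKeyLeaf {
    std::vector<Key> keys;     // may contain repeated keys
    std::vector<Rid> values;   // one value per key slot
};

// Approach #2: value lists. Each distinct key is stored once and points to a
// list of all of its values.
struct ValueListLeaf {
    std::vector<Key> keys;                   // unique, sorted keys
    std::vector<std::vector<Rid>> values;    // values[i] holds every RID for keys[i]
};
```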

74 VARIABLE LENGTH KEYS
Approach #1: Pointers
→ Store the keys as pointers to the tuple's attribute.
Approach #2: Variable Length Nodes
→ The size of each node in the index can vary.
→ Requires careful memory management.
Approach #3: Padding
→ Always pad the key to be the max length of the key type.
Approach #4: Key Map / Indirection
→ Embed an array of pointers that map to the key + value list within the node.

75-79 KEY MAP / INDIRECTION
[Figure: the leaf's key map is a sorted array of entries (A·¤, O·¤, P·¤) pairing each key's first byte with a pointer to the variable-length key+value entry (Andy/V1, Obama/V2, Prashanth/V3) stored inside the node.]
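A possible in-memory shape for the key map / indirection layout is sketched below; the Slot fields (a one-byte key prefix plus an offset into the node's heap area) mirror the A·¤ / O·¤ / P·¤ entries in the figure, but the exact layout is an assumption.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical leaf layout for Approach #4: the key map is a small sorted
// array of (prefix, offset) entries, while the variable-length key+value
// bytes live in a heap area of the same node.
struct IndirectionLeaf {
    struct Slot {
        char prefix;        // first byte of the key (the "A·" in "A·¤") for cheap compares
        uint16_t offset;    // where the full entry starts in `heap`
    };
    std::vector<Slot> key_map;   // kept sorted by key
    std::vector<char> heap;      // packed [key_len][key bytes][value] entries

    std::string KeyAt(std::size_t i) const {
        const char* p = heap.data() + key_map[i].offset;
        auto len = static_cast<uint8_t>(*p);
        return std::string(p + 1, len);
    }
};
```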

80-84 PREFIX COMPRESSION
Store a minimum prefix that is needed to correctly route probes into the index.
Since keys are sorted in lexicographical order, there will be a lot of duplicated prefixes.
[Figure: the inner-node separators abcdefghijk and lmnopqrstuv are truncated to abc and lmn; a leaf holding Andre, Andy, Annie stores the shared prefix An once plus the suffixes dre, dy, nie.]
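One way a prefix-compressed leaf could be represented (an illustrative assumption, not the lecture's layout) is to store the shared prefix once and keep only the suffixes:

```cpp
#include <string>
#include <string_view>
#include <vector>

// Hypothetical prefix-compressed leaf: keys Andre, Andy, Annie share the
// prefix "An", so the node stores the prefix once and only the suffixes.
struct PrefixCompressedLeaf {
    std::string prefix;                  // e.g. "An"
    std::vector<std::string> suffixes;   // e.g. {"dre", "dy", "nie"}, sorted

    bool Contains(std::string_view key) const {
        if (key.substr(0, prefix.size()) != prefix) return false;
        std::string_view rest = key.substr(prefix.size());
        for (const auto& s : suffixes) {
            if (s == rest) return true;    // a real node would binary-search
        }
        return false;
    }
};
```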

85 ADAPTIVE RADIX TREE (ART)
Uses a digital representation of keys to examine prefixes 1-by-1 instead of comparing the entire key.
Radix tree properties:
→ The height of the tree depends on the length of the keys (unlike a B+Tree, where the height depends on the number of keys).
→ Does not require rebalancing.
→ The path to a leaf node represents the key of the leaf.
→ Keys are stored implicitly and can be reconstructed from paths.
→ The structure does not depend on the order of key insertion.
Reference: THE ADAPTIVE RADIX TREE: ARTFUL INDEXING FOR MAIN-MEMORY DATABASES, ICDE 2013

86-92 TRIE VS. RADIX TREE
Trie (re`trie'val, 1959)
Keys: HELLO, HAT, HAVE
[Figure: the trie stores one character per node (H → E → L → L → O, H → A → T, H → A → V → E, with ¤ marking the end of a key); the radix tree collapses single-child chains into labeled edges: H → ELLO, H → A → T, H → A → VE.]
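To make the trie-vs-radix-tree distinction concrete, here is a minimal radix-tree sketch with string-labeled edges; this is plain path compression, not ART's adaptive nodes, and the RadixNode type and Lookup function are hypothetical.

```cpp
#include <map>
#include <memory>
#include <string>
#include <string_view>

// Minimal radix-tree sketch (not ART's adaptive nodes): each edge carries a
// whole substring, so the trie chain H-E-L-L-O collapses into one edge "ELLO".
struct RadixNode {
    bool is_key = false;                                    // the ¤ marker in the slides
    std::map<std::string, std::unique_ptr<RadixNode>> edges;
};

// Lookup walks the edge whose label is a prefix of the remaining key.
bool Lookup(const RadixNode& node, std::string_view key) {
    if (key.empty()) return node.is_key;
    for (const auto& [label, child] : node.edges) {
        if (key.substr(0, label.size()) == label) {
            return Lookup(*child, key.substr(label.size()));
        }
    }
    return false;
}
```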

93 ART: ADAPTIVELY SIZED NODES
The index supports four different internal node types with different capacities.
Pack multiple digits into a single node to improve cache locality.
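The four internal node types are called Node4, Node16, Node48, and Node256 in the ART paper; the simplified struct layouts below follow the paper's description, though the exact field layout here is an assumption.

```cpp
#include <cstdint>

struct ArtNode;    // common header/base of all node types in a real implementation

// The four internal node types described in the ART paper, heavily simplified.
struct Node4 {                     // up to 4 children
    uint8_t  keys[4];              // sorted one-byte partial keys
    ArtNode* children[4];
};
struct Node16 {                    // up to 16 children; the key array can be searched with SIMD
    uint8_t  keys[16];
    ArtNode* children[16];
};
struct Node48 {                    // up to 48 children
    uint8_t  child_index[256];     // key byte -> slot in children[] (or an invalid marker)
    ArtNode* children[48];
};
struct Node256 {                   // up to 256 children, direct lookup
    ArtNode* children[256];        // indexed directly by the key byte
};
```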

94-100 ART: MODIFICATIONS
Operation: Insert HAIR
Operation: Delete HAT, HAVE
[Figure: starting from H → {ELLO, A → {VE, T}}, inserting HAIR adds an IR edge under the A node; after deleting HAT and HAVE, the tree holds HELLO and HAIR as H → ELLO and H → A → IR.]
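A sketch of insertion on the radix-tree structure from the earlier trie/radix example shows both situations on these slides: inserting HAIR simply adds a new IR edge under the A node, while a key that diverges in the middle of an existing edge forces that edge to be split. The RadixNode shape is repeated here for completeness; the code is illustrative only, not ART's actual insert path.

```cpp
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <string_view>

// Same node shape as the earlier radix-tree sketch.
struct RadixNode {
    bool is_key = false;
    std::map<std::string, std::unique_ptr<RadixNode>> edges;
};

void Insert(RadixNode& node, std::string_view key) {
    if (key.empty()) { node.is_key = true; return; }
    for (auto& [label, child] : node.edges) {
        // Length of the common prefix between the edge label and the key.
        std::size_t n = 0;
        while (n < label.size() && n < key.size() && label[n] == key[n]) ++n;
        if (n == 0) continue;
        if (n == label.size()) {                 // whole edge matches: descend
            Insert(*child, key.substr(n));
            return;
        }
        // Partial match: split the edge at the divergence point.
        std::string old_label = label;           // copy before mutating the map
        auto mid = std::make_unique<RadixNode>();
        mid->edges[old_label.substr(n)] = std::move(child);
        node.edges.erase(old_label);
        Insert(*mid, key.substr(n));
        node.edges[std::string(key.substr(0, n))] = std::move(mid);
        return;
    }
    // No edge shares a prefix with the key: add a new leaf edge
    // (e.g. the "IR" edge under the A node when inserting HAIR).
    auto leaf = std::make_unique<RadixNode>();
    leaf->is_key = true;
    node.edges[std::string(key)] = std::move(leaf);
}
```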
