RCUArray: An RCU-like Parallel-Safe Distributed Resizable Array
PowerPoint PPT presentation by Louis Jenkins



  1. RCUArray: An RCU-like Parallel-Safe Distributed Resizable Array By Louis Jenkins

  2–9. The Problem: Parallel-Safe Resizing (shown as an incremental build over slides 2–9)
  • It is not inherently thread-safe to access memory while it is being resized
    • Memory has to be 'moved' from the smaller storage into larger storage
  • Concurrent loads and stores can result in undefined behavior
    • Stores issued after the memory is moved can be lost entirely
    • Loads and stores issued after the smaller storage is reclaimed can produce undefined behavior
  • Why not just synchronize access? Not scalable
  • What do we need?
    1. Allow concurrent access to both the smaller and the larger storage
    2. Ensure safe memory management of the smaller storage
    3. Ensure that stores to the old memory are visible in the larger storage
  [Figure: a load and a store racing with the move from smaller to larger storage]

  10–17. Read-Copy-Update (RCU) (shown as an incremental build over slides 10–17)
  • Synchronization strategy that favors the performance of readers over writers
    • Read the current snapshot t
    • Copy t to create t′
    • The update is applied to t′, and t′ becomes the new current snapshot
  • Not applicable in all situations
    • It must be safe to access at least two different snapshots of the same data
  [Figure: readers accessing the old snapshot T = {c₁} and the new snapshot T′ = {c₁, c₂}]

  Read-Copy-Update                            Reader-Writer Locks
  • Readers concurrent with readers           • Readers concurrent with readers
  • Writers mutually exclusive with writers   • Writers mutually exclusive with writers
  • Readers concurrent with writers           • Readers mutually exclusive with writers

  18–22. Distributed RCU (shown as an incremental build over slides 18–22)
  • Privatization and snapshots
    • Each node (locale) in the cluster has its own local snapshot
    • All local snapshots point to the same block c₁
  • Reader concurrency
    • Readers read from their local snapshot only
    • All readers, regardless of node, see the same block
    • All stores to c₁ are seen by any snapshot on any node
  • Writer mutual exclusion
    • Use a distributed lock
    • Perform each update local to each node
  [Figure: Locales #0–#3, each with a privatized snapshot; T = {c₁} before the update and T′ = {c₁, c₂} after]
