1. Depot: Cloud storage with minimal trust. Prince Mahajan, Srinath Setty, Sangmin Lee, Allen Clement, Lorenzo Alvisi, Mike Dahlin, Michael Walfish. The University of Texas at Austin. Monday, October 11, 2010.

2-3. Cloud storage is appealing. In the CloudPic example, Prince clicks "add to album", which issues PUT(k, photo) to the storage provider; Mike clicks "show album", which issues GET(k) and retrieves the photo.

4-8. Risks of cloud storage: failures cause undesired behavior. In CloudPic, Prince issues Op1, "revoke Mike's access to album", and then Op2, "add to album" (a PUT(k, photo)). Mike then clicks "show album" (a GET(k)). A faulty storage provider that omits or reorders Op1 can show Mike the new photo even though his access was revoked first.

9. We have a conflict. Much to like: geographic replication, professional management, low cost. Much to give pause: the provider is a black box, complex, and error-prone. Our approach: a radical fault-tolerance stance.

10-13. Cloud storage with minimal trust. Depot eliminates trust in the storage provider for eventual consistency, staleness detection, and dependency preservation, and minimizes trust for GET availability, PUT availability, and durability.

14. Rest of the talk. I. How does Depot work? II. What properties does it provide? III. How much does it cost?

15. Depot in a nutshell: ensuring high availability. Use multiple servers at the storage provider; don't enforce sequential consistency (the CAP tradeoff); and fall back on client-to-client communication.

16-24. Depot in a nutshell: preventing omission and reordering of PUT(k, value) and GET(k) through the storage provider. Add metadata to PUTs, add local state to nodes, and add checks on received metadata; the next slides spell these out.

25. Protecting consistency. (1) Update metadata: each PUT carries {nodeID, key, H(value), LocalClock, History}, signed by nodeID. (2) Nodes store update metadata: logically, every node stores all previous updates [see the paper for garbage collection].
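This metadata maps naturally onto a record type. A minimal sketch in Python, assuming SHA-256 as the hash and leaving the signature scheme abstract; the uid() helper is illustrative, not from the talk:

```python
import hashlib
from dataclasses import dataclass

def h(value: bytes) -> str:
    """Collision-resistant hash binding the metadata to the PUT's value."""
    return hashlib.sha256(value).hexdigest()

@dataclass(frozen=True)
class Update:
    node_id: str        # creator of the update
    key: str            # object being written
    value_hash: str     # H(value)
    local_clock: int    # creator's logical clock, bumped on every PUT
    history: frozenset  # uids of all updates the creator had accepted
    signature: bytes    # creator's signature over the fields above

    def uid(self) -> tuple:
        """(node_id, local_clock) uniquely names a correct node's update."""
        return (self.node_id, self.local_clock)
```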

26-27. Protecting consistency. (3) Local checks: accept an update u created by N only if (no omissions) all updates in u's History are also in the local state, and (don't modify history) u is newer than any prior update by N.
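A minimal sketch of these two checks, assuming the Update type above and omitting signature verification; `state` maps uids to accepted updates:

```python
def check_and_accept(state: dict, u: Update) -> bool:
    """Apply the slide's local checks to an update u from node N."""
    # No omissions: everything in u's history must already be local,
    # otherwise the sender omitted or reordered earlier updates.
    if any(dep not in state for dep in u.history):
        return False  # buffer u until the missing updates arrive

    # Don't modify history: u must be newer than any update already
    # accepted from its creator; a node that reuses a clock value
    # (equivocates) fails here.
    newest = max((c for (n, c) in state if n == u.node_id), default=-1)
    if u.local_clock <= newest:
        return False

    state[u.uid()] = u
    return True
```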

28. Faults can cause forks. A fork exposes inconsistent views to different nodes: faulty node F shows A and B different histories, yet each node's view is locally consistent.

29-30. Forks partition correct nodes: each correct node's future updates are tainted by the branch it saw, so the receiver's update checks fail. Forks thus prevent eventual consistency, because inconsistently tainted nodes cannot communicate.

31. Join forks for eventual consistency: convert faults into concurrency. A faulty node F is split into two (correct) virtual nodes F' and F''. Correct nodes can then accept subsequent updates from either branch, and can evict the faulty node.
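A minimal sketch of the relabeling idea, assuming the Update type above; Depot's actual mechanism propagates a signed proof of misbehavior in the metadata, which this sketch merely asserts, and the relabeling here ignores that the original signatures cover the old node id:

```python
import dataclasses

def join_fork(state: dict, u1: Update, u2: Update) -> None:
    """Split a proven-faulty node into two virtual nodes.

    u1 and u2 carry the same node_id and local_clock but differ in
    content: proof that their creator equivocated (forked).
    """
    assert u1.node_id == u2.node_id
    assert u1.local_clock == u2.local_clock
    assert u1 != u2

    faulty = u1.node_id
    for suffix, u in (("'", u1), ("''", u2)):
        # Re-attribute each branch to a fresh virtual node so both
        # branches pass the per-creator check as concurrent writes.
        virtual = dataclasses.replace(u, node_id=faulty + suffix)
        state[virtual.uid()] = virtual
    # Later updates extending either branch are attributed to the
    # matching virtual node; the faulty physical node can be evicted.
```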

32. Faults vs. concurrency. Converting faults into concurrency allows correct nodes to converge, but concurrency can introduce conflicts: concurrent updates to the same object. This problem is not introduced by Depot; it is already possible with a decentralized server, and applications built for high availability (such as those on Amazon S3) already allow concurrent writes. Depot exposes conflicts to applications: GET returns the set of most recent concurrent updates.
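A minimal sketch of that GET semantics over local state, assuming each update's history is transitively closed (it lists every update its creator had seen):

```python
def dominates(a: Update, b: Update) -> bool:
    """True if a's creator had already seen b when creating a."""
    return b.uid() in a.history

def get(state: dict, key: str) -> set:
    """Return the most recent concurrent updates to key (its 'heads')."""
    writes = [u for u in state.values() if u.key == key]
    return {u for u in writes
            if not any(dominates(v, u) for v in writes if v is not u)}
```

A single element comes back in the common case; multiple elements signal a conflict for the application to resolve.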

33. Summary: basic protocol. Protect safety with local checks; protect liveness by joining forks, which reduces failures to concurrency. The result is fork-join-causal consistency, a novel consistency semantics suitable for environments with minimal trust.

34. Rest of the talk. I. How does Depot work? II. What properties does Depot provide? III. How much does it cost?

35. Depot properties (property, safety/liveness, correct nodes required):
Consistency: fork-join-causal consistency (safety, any subset); bounded staleness (safety, any subset); eventual consistency (safety, any subset); eventual consistency (liveness, any subset).
Availability: always write (liveness, any subset); always exchange (liveness, any subset); read availability/durability (liveness, requires that a correct node has the data).
Integrity: only authorized PUTs (safety, any subset).
Eviction: valid eviction (safety, any subset).

36. GET availability and durability. The ideal, "trust only yourself", is unreachable. Depot instead (1) minimizes the required number of correct nodes: data can safely flow via any path, so if any correct node has the data, a GET eventually succeeds; and (2) makes it likely that a correct node has the data: the SSP replicates to multiple servers, and additional replication protects against total SSP failure.

37. Contingency plan: protect against correlated SSP failure, whether an availability event or a permanent failure. The key is that storage servers are untrusted, so any node with low correlation to the SSP will do. In the prototype, the client that issues a PUT keeps a copy of the data, and gossiped update metadata is sufficient to route GET requests when the SSP is unavailable. Alternatives: a private cloud storage node (e.g., Eucalyptus/Walrus) or another external SSP.
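Because update metadata carries H(value), any node can serve the bytes and the reader can verify them, so rerouting a GET is just trying replicas in order. A minimal sketch, assuming a hypothetical fetch() interface on each node and the h() helper from the metadata sketch; the client list would come from the gossiped metadata:

```python
def read(u: Update, ssp, clients) -> bytes:
    """Fetch and verify the value named by update u."""
    for node in [ssp, *clients]:        # prefer the SSP, fall back to peers
        try:
            value = node.fetch(u.key)   # hypothetical transport call
        except ConnectionError:
            continue                    # unreachable: try the next replica
        if h(value) == u.value_hash:    # metadata lets us verify any source
            return value
    raise RuntimeError("no correct, reachable node has the data")
```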

38. Depot tolerates SSP failure. [Figure: staleness (sec) vs. time (sec) for Depot and the SSP, 0-600s.] After a complete cloud failure at 300s, Depot's GETs and PUTs continue; only Depot's staleness increases.

39. Rest of the talk. I. How does Depot work? II. What properties does Depot provide? III. How much does Depot cost? Latency, resources, dollars.
