

  1. Depot. Maciej Smolenski, 19 January 2011

  2. Introduction

  3. Depot
  Cloud storage system.
  • Cloud storage: in the spirit of S3, Azure, Google Storage.
  • Depot clients do not have to trust that servers operate correctly.

  4. Untrusted Storage Service Providers
  • Software bugs.
  • Misconfigured servers or operator errors.
  • Malicious insiders.
  • Acts of God or Man (e.g. fires).
  Providing trust guarantees will help both clients and Storage Service Providers (SSPs).

  5. Depot Consistency
  Depot ensures that the updates observed by correct nodes are consistently ordered under Fork-Join-Causal consistency (FJC).
  FJC is a slight weakening of causal consistency that can be both safe and live despite faulty nodes.

  6. Depot Guarantees
  Depot implements protocols based on the FJC update ordering.
  Depot provides guarantees for:
  • Consistency
  • Availability
  • Durability
  • Staleness
  • Latency
  Depot provides these guarantees with low overhead.

  7. Consistency

  8. Consistency Types
  • Sequential Consistency
  • Causal Consistency
  • Fork Consistency
  • Fork Join Consistency
  • Fork Join Causal Consistency

  9. Sequential Consistency
  updating (events):
    n1: ------------------------<n1_u1>--------
    n2: ------------------------<n2_u1>--------
    n3: ---<n3_u1>-----------------------------
  ordering:
    n1,n2,n3: (n3_u1) (n2_u1) (n1_u1)
    or
    n1,n2,n3: (n3_u1) (n1_u1) (n2_u1)
  • Events are ordered linearly (totally).
  • Conflict resolution: no conflicts.

  10. Causal Consistency
  updating (events):
    n1: ------------------------<n1_u1>--------
    n2: ------------------------<n2_u1>--------
    n3: ---<n3_u1>-----------------------------
  ordering:
    n1: (n3_u1) (n1_u1) (n2_u1)
    n2: (n3_u1) (n2_u1) (n1_u1)
    n3: (n3_u1) (n1_u1) (n2_u1) or (n3_u1) (n2_u1) (n1_u1)
  • Events are ordered causally.
  • Conflict resolution: merge (application-specific).
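
The causal orderings above can be checked mechanically with version vectors. A minimal sketch, assuming a dict-of-counters vector layout; the function names are illustrative, not Depot's:

    def leq(a, b):
        # True if vector a <= b componentwise: a happens-before or equals b.
        return all(a.get(n, 0) <= b.get(n, 0) for n in set(a) | set(b))

    def order(a, b):
        if a == b:
            return "equal"
        if leq(a, b):
            return "a -> b"        # a causally precedes b
        if leq(b, a):
            return "b -> a"
        return "concurrent"        # neither precedes: merge needed

    # n3_u1 is seen by n1 before n1 issues n1_u1, so n1's vector dominates:
    print(order({"n3": 1}, {"n1": 1, "n3": 1}))             # a -> b
    # n1_u1 and n2_u1 are issued without seeing each other:
    print(order({"n1": 1, "n3": 1}, {"n2": 1, "n3": 1}))    # concurrent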

  11. Fork Consistency
  The server orders events sequentially (correct server).
  updating (events):
    c1: -<c1_u1>----------------------------------<c1_u2>-----------------------------------
    c2: ---------------------<c2_u1>--------------------------------------<c2_u2>-----------
  ordering:
    s:  (c1_u1) (c2_u1) (c1_u2) (c2_u2)
    c1: (c1_u1) (c2_u1) (c1_u2) (c2_u2)
    c2: (c1_u1) (c2_u1) (c1_u2) (c2_u2)
  • Events are ordered totally.
  • Conflict resolution: no conflicts.

  12. Fork Consistency (Continued)
  Faulty server (it can't modify data, only fork).
  updating:
    c1: -<c1_u1>----------------------------------<c1_u2>-----------------------------------
    c2: --------------------<c2_u1>---------------------------------------<c2_u2>-----------
  ordering:
                                      /--------(c1_u2)------------------------- version for c1
    s:  -(c1_u1)-------------(c2_u1)-<
                                      \--------------------------------(c2_u2)- version for c2
    c1: -(c1_u1)-------------(c2_u1)---------------(c1_u2)-----------------------------------
    c2: -(c1_u1)-------------(c2_u1)---------------------------------------(c2_u2)-----------
  • The server may show different versions to different clients.
  • Clients can be sure that they will be partitioned forever after a fork (server misbehaviour detection).
  • Conflict resolution: detection is enough.
  • Untrusted server (a fork is the only way it can lie).

  13. Fork Join Consistency
  Same as Fork Consistency when the server is correct.
  Fork resolution (join) when server misbehaviour is detected.
  updating:
    c1: -<c1_u1>----------------------------------<c1_u2>----------------------------------
    c2: ---------------------<c2_u1>--------------------------------------<c2_u2>----------
  ordering:
                                      /--------(c1_u2)----------------------------------
    s:  -(c1_u1)-------------(c2_u1)-<
                                      \--------------------------------(c2_u2)----------
  fork seen as concurrent updates to different servers:
    s1: -(c1_u1)-------------(c2_u1)-------------------------(c1_u2)-------------------------
    s2: -(c1_u1)-------------(c2_u1)-------------------------(c2_u2)-------------------------
  • Events after the fork can be seen as concurrent (as if issued to different servers).
  • Conflict resolution: merge (application-specific), as sketched below.
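
Once a fork is recast as concurrent updates, resolving it is the same job as merging any concurrent writes. A toy sketch of one possible application-specific merge policy (keep all conflicting values, as in a multi-value register; the policy is illustrative, not prescribed by Depot):

    def merge_concurrent(values):
        # One possible policy: expose every conflicting value and let the
        # application decide (union, last-writer-wins, manual repair, ...).
        return sorted(set(values))

    branch_c1 = {"k": "c1_u2"}    # the history the server showed to c1
    branch_c2 = {"k": "c2_u2"}    # the history the server showed to c2
    joined = {key: merge_concurrent([branch_c1[key], branch_c2[key]])
              for key in branch_c1}
    print(joined)                 # {'k': ['c1_u2', 'c2_u2']}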

  14. Fork Join Causal Consistency (FJC)
  • Fork Join Consistency with causal event ordering.
  • Forks are seen as concurrent operations (conflict resolution when needed).
  • Conflict resolution: merge (application-specific).
  Weaker than Sequential Consistency, but still useful for many applications.
  Provides session guarantees for each client:
  • Monotonic reads
  • Monotonic writes
  • Read-your-writes
  • Writes-follow-reads
  Because consistency is weaker, more can be achieved in other areas:
  • High availability.

  15. Fork Join Causal Consistency (FJC) (Continued)
  • Untrusted servers.

  16. Depot

  17. Plan
  A protocol to achieve FJC.
  An extended protocol to create a storage system with the best guarantees possible under FJC for:
  • Trust.
  • Availability.
  • Staleness.
  • Durability.
  • Integrity and authorization.
  • Fault tolerance.
  • Data recovery.
  Depot: the resulting storage system.

  18. Architecture
  Nodes:
  • Clients.
  • Servers.
  All nodes run the same protocol; they are just configured differently.
  Servers can't issue valid updates (standard cryptographic techniques).
  Key-value store (GET/PUT interface).
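
The client-facing surface is just GET/PUT over keys and values. A toy in-memory stand-in for that interface (no replication, signatures, or gossip; those belong to the protocol sketched on the following slides):

    class KVStore:
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = value

        def get(self, key):
            return self._data.get(key)

    store = KVStore()
    store.put("photos/1", b"bytes")
    print(store.get("photos/1"))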

  19. Node
  Each node maintains a Log and a Checkpoint:
  • Log: current updates.
  • Checkpoint: stable system state.
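
A minimal sketch of that split, with illustrative field names: the current state is the checkpoint with the log replayed on top, and a stable log can be folded back into the checkpoint.

    class Node:
        def __init__(self):
            self.checkpoint = {}    # stable state: key -> value
            self.log = []           # current updates: (key, value) pairs

        def apply_update(self, key, value):
            self.log.append((key, value))

        def current_state(self):
            # Replay the log over the checkpoint to get the live view.
            state = dict(self.checkpoint)
            for key, value in self.log:
                state[key] = value
            return state

        def truncate_log(self):
            # Fold stable updates into the checkpoint and empty the log.
            self.checkpoint = self.current_state()
            self.log.clear()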

  20. Updates Propagation
  Gossiping (log exchange):
  • A server gossips with other servers regularly (configuration parameter).
  • A client gossips with its primary server regularly (configuration parameter).
  A client switches to client-client mode when all servers are unavailable.
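
A sketch of one gossip round under those rules. The anti-entropy step is simplified to a union of logs, and the names are illustrative; the real protocol exchanges only missing, signed updates:

    class GossipNode:
        def __init__(self, name):
            self.name = name
            self.log = []                       # updates seen so far

        def exchange_logs(self, other):
            # Symmetric anti-entropy: both sides end up with the union,
            # preserving first-seen order.
            merged = list(dict.fromkeys(self.log + other.log))
            self.log = other.log = merged

    def gossip_round(client, servers, peers):
        for server in servers:
            try:
                client.exchange_logs(server)
                return "server mode"
            except ConnectionError:
                continue                        # try the next server
        for peer in peers:                      # all servers unreachable:
            client.exchange_logs(peer)          # client-client mode
        return "client-client mode"

    c1, s1 = GossipNode("c1"), GossipNode("s1")
    c1.log, s1.log = ["u1"], ["u2"]
    print(gossip_round(c1, [s1], []), c1.log)   # server mode ['u1', 'u2']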

  21. Update Message
  Extra data:
  • Dependency Version Vector (causal ordering).
  • History hash (ensures the only possible server misbehaviour is a fork).
  • Signature.
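
A sketch of the shape of such an update; field names and the toy signer are illustrative, not Depot's wire format. The history hash chains each update to everything its writer has already seen, which is what reduces a lying server to, at worst, forking:

    import hashlib, json

    def make_update(key, value_hash, writer, dvv, prev_history_hash, sign):
        body = {
            "key": key,
            "valueHash": value_hash,     # hash of the stored value
            "writer": writer,
            "dVV": dvv,                  # dependency version vector
        }
        # Chain this update onto the writer's entire observed history.
        body["historyHash"] = hashlib.sha256(
            (prev_history_hash + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        body["signature"] = sign(json.dumps(body, sort_keys=True))
        return body

    # Toy signer for illustration; Depot uses real public-key signatures.
    sign = lambda msg: hashlib.sha256(("c1-key" + msg).encode()).hexdigest()
    u = make_update("k", "h(v)", "c1", {"c1": 4, "c2": 7}, "0" * 64, sign)
    print(u["historyHash"][:8], u["signature"][:8])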

  22. Conflicts Resolution
  Concurrent updates: application-specific merge.
  Forks: handled the same way as concurrent updates.
  • Application-specific merge.
  • Proof Of Misbehaviour (POM).
  • I-vouch-for-this certificates.
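
A sketch of how a fork becomes a Proof Of Misbehaviour: two correctly signed updates from the same node that claim the same position in its history but differ are, taken together, transferable evidence of misbehaviour. Field names are illustrative and signature checking is omitted:

    def find_pom(updates):
        seen = {}
        for u in updates:
            pos = (u["writer"], u["seq"])
            if pos in seen and seen[pos]["historyHash"] != u["historyHash"]:
                return (seen[pos], u)   # the conflicting pair is the POM
            seen[pos] = u
        return None

    pom = find_pom([
        {"writer": "s1", "seq": 9, "historyHash": "aa"},
        {"writer": "s1", "seq": 9, "historyHash": "bb"},   # fork!
    ])
    print(pom is not None)              # True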

  23. Replication
  Storing values:
  • The owner (client) keeps its value.
  • Each value should be replicated on K servers.
  • Receipt (proof that the value is replicated on K servers).
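
A sketch of checking a receipt: K distinct server signatures over the same update hash let any client prove the value is replicated. K, the pair layout, and the toy verifier are all illustrative assumptions:

    K = 3

    def receipt_valid(receipt, update_hash, verify):
        # receipt: list of (server_id, signature) pairs.
        vouching = {srv for srv, sig in receipt
                    if verify(srv, sig, update_hash)}
        return len(vouching) >= K       # enough distinct servers vouch

    verify = lambda srv, sig, h: sig == srv + ":" + h      # toy verifier
    rec = [("s1", "s1:h"), ("s2", "s2:h"), ("s3", "s3:h")]
    print(receipt_valid(rec, "h", verify))                 # True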

  24. High Availability
  Writes: always available.
  Reads: available when a copy is reachable:
  • Owner's copy (client).
  • Replicated on K servers.
  • A client accepts an update only when it comes with a receipt or the value itself.

  25. Staleness
  Depot guarantees bounded staleness.
  Each client regularly generates a special update (logical time).
  When client A sees such a special update from client B, A can be sure it has seen all of B's preceding updates (Fork Consistency).
  In FC (and FJC) staleness is separated from consistency (flexibility).
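
A sketch of the bound this gives, with illustrative interval and slack values: because a beacon from B carries B's whole causal past, the age of B's newest beacon bounds how stale A's view of B can be, and a missing beacon is itself a warning sign.

    import time

    BEACON_INTERVAL = 10.0      # seconds between special updates (config)

    def max_staleness(now, newest_beacon_time):
        # A has seen everything B issued before this beacon, so A's view
        # of B is at most this many seconds old.
        return now - newest_beacon_time

    def looks_stale(now, newest_beacon_time, slack=2.0):
        # No beacon for a while: B is slow, partitioned, or being hidden.
        return max_staleness(now, newest_beacon_time) > slack * BEACON_INTERVAL

    print(looks_stale(time.time(), time.time() - 7.0))     # False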

  26. Eventual Consistency
  Depot guarantees eventual consistency.
  • Safety: successful reads of an object at correct nodes that observe the same set of updates return the same values.
  • Liveness: any update issued or observed by a correct node is eventually observable by all correct nodes.

  27. Evaluation

  28. Method
  • 8 clients, 4 servers.
  • Server/server gossiping every second.
  • Client/primary-server gossiping every 5 seconds.
  • Key size: 32 B.
  • Value sizes: 3 B, 10 KB, 1 MB.
  • Read/write mixes: 0/100, 10/90, 50/50, 90/10, 100/0.
  • Non-overlapping key ranges for clients (no conflicts).

  29. Method (Continued)
  Compared versions (all Depot, in different configurations):
  • B (Baseline)
  • B+H (Hash)
  • B+H+S (Sign)
  • B+H+S+St (Store)
  • B+H+S+St+FJC = Depot
  +FJC adds fork handling:
  • Extra data (history: storing, checking, hashing, signing, verifying, transferring).

  30. Latency Overhead
  • GET: none.
  • PUT: about 40% extra.

  31. Cost
  Factors:
  • CPU
  • Network
  • Storage
  Overhead:
  • GET: none.
  • PUT: about 40% extra.

  32. Cost (Continued)

  33. Fault: Cloud Disaster
  300 seconds into the experiment all servers were stopped.
  Clients switched to client-client mode.
  Factors:
  • Latency
  • Staleness

  34. Fault: Cloud Disaster (Continued)
  Latency: better than before (in client-client mode a GET does not require gossiping with another client the way it previously required contacting a server).

  35. Fault: Cloud Disaster (Continued)
  Staleness: worse than before (a GET no longer pulls fresh updates from a server; they arrive only through periodic client-client gossip).

  36. Fault: Fork
  Depot handles forks the same way as concurrency.
  • No communication overhead (just POMs and I-vouch-for-this certificates).
  • No CPU overhead (the system contains only correct data at all times, so there is no need to hurry).
