
  1. RAFT Consensus Slide content borrowed from Diego Ongaro, John Ousterhout, and Alberto Montresor

  2. Log Consensus • Bit consensus: agree on a single bit, based on inputs • (0,1,0,0,1,0,0) -> 1 • Log consensus: agree on contents and order of events in a log • {A, B, Q, R, W, Z} -> [A, Q, R, B, Z]

  3. Banks / cryptocurrencies • State: account balances • Alice: $100 • Bob: $200 • Charlie: $50 • Events: transactions • Alice pays Bob $20 • Charlie pays Alice $50 • Charlie pays Bob $50
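As an illustration of the state-plus-events model on this slide, here is a minimal Go sketch (not part of the original slides; the Payment type and the starting balances are made up for the example): replaying the same ordered log of transactions from the same initial balances always produces the same final balances.

    package main

    import "fmt"

    // Payment is a hypothetical event type: "from" pays "to" the given amount.
    type Payment struct {
        From, To string
        Amount   int
    }

    func main() {
        // Initial state: account balances.
        balances := map[string]int{"Alice": 100, "Bob": 200, "Charlie": 50}

        // Ordered log of events (transactions).
        log := []Payment{
            {"Alice", "Bob", 20},
            {"Charlie", "Alice", 50},
            {"Charlie", "Bob", 50},
        }

        // Applying the events in log order is deterministic: every replica
        // that starts from the same balances and applies the same log ends
        // with the same balances.
        for _, p := range log {
            balances[p.From] -= p.Amount
            balances[p.To] += p.Amount
        }

        fmt.Println(balances) // final balances after replaying the log
    }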

  4. Databases (e.g., enrollment) • State: database tables • Classes: • Alice: CS425, CS438 • Bob: CS425, CS411 • Charlie: ECE428, ECE445 • Rooms: • CS425: DCL1320 • ECE445: ECEB3013 • Events: transactions • Alice drops CS425 • Bob switches to 3 credits • Charlie signs up for CS438 • ECE445 moves to ECEB1013

  5. Filesystems • State: all files on the system • Midterm.tex • HW2-solutions.tex • Assignments.html • Events: updates • Save midterm solutions to midterm-solutions.tex • Append MP2 to Assignments.html • Delete exam-draft.tex

  6. State machines • State: complete state of a program • Events: messages received • Assumption: all state machines are deterministic

  7. Replicated State Machines • A state machine can fail, taking the state with it • Replicate for • Availability — can continue operation even if one SM fails • Durability — data is not lost • Must ensure: • Consistency!

  8. Log-based • Each replica maintains a log of events (from client(s)) • Replicas apply events in the log to update their state • Same initial state + same order of events in the log => consistent final state
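A small Go sketch of the log-based approach described above (the Command type and the toy key-value state machine are illustrative assumptions, not from the slides): two replicas that start from the same state and apply the same log in the same order end up identical, while a reordered log diverges.

    package main

    import (
        "fmt"
        "reflect"
    )

    // Command is a hypothetical log entry for a toy key-value state machine.
    type Command struct {
        Key, Value string
    }

    // apply replays a log of commands, in order, against an empty initial state.
    func apply(log []Command) map[string]string {
        state := make(map[string]string)
        for _, c := range log {
            state[c.Key] = c.Value
        }
        return state
    }

    func main() {
        log := []Command{{"x", "1"}, {"y", "2"}, {"x", "3"}}

        // Two replicas, same initial state, same log order:
        // deterministic application yields the same final state.
        replicaA := apply(log)
        replicaB := apply(log)
        fmt.Println(reflect.DeepEqual(replicaA, replicaB)) // true

        // A different order would not: x would end up as "1" instead of "3".
        reordered := []Command{{"y", "2"}, {"x", "3"}, {"x", "1"}}
        fmt.Println(reflect.DeepEqual(apply(log), apply(reordered))) // false
    }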

  9. Log Consensus • All replicas must agree on the order of events in the log • Is this possible in asynchronous systems?

  10. Log Consensus • All replicas must agree on the order of events in the log • Is this possible in asynchronous systems? • Totally correct implementation impossible (FLP)! • Safety • Replicas always add events in a consistent order • Liveness • If a majority of nodes is available, they will eventually establish a consistent log order • Available = not failed, and not delayed beyond a bound

  11. The distributed log (I) • Each server stores a log containing commands • Consensus algorithm ensures that all logs contain the same commands in the same order • State machines always execute commands in the log order • They will remain consistent as long as command executions have deterministic results

  12. The distributed log (II)

  13. The distributed log (III) • Client sends a command to one of the servers • Server adds the command to its log • Server forwards the new log entry to the other servers • Once a consensus has been reached, each server's state machine processes the command and sends its reply to the client

  14. Paxos Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament’s protocol provides a new way of implementing the state-machine approach to the design of distributed systems — an approach that has received limited attention because it leads to designs of insufficient complexity.

  15. Paxos Timeline • 1989: Lamport wrote a 42-page (!) DEC technical report • 1990: Submitted to and rejected from ACM Transactions on Computer Systems • 1998: The original paper is resubmitted and accepted by TOCS • 2001: Lamport publishes “Paxos made simple” in ACM SIGACT News • 2007: T. D. Chandra, R. Griesemer, J. Redstone. Paxos made live: an engineering perspective. PODC 2007, Portland, Oregon

  16. Paxos • Google uses the Paxos algorithm in their Chubby distributed lock service. Chubby is used by BigTable, which is now in production in Google Analytics and other products • Amazon Web Services uses the Paxos algorithm extensively to power its platform • Windows Fabric, used by many of the Azure services, makes use of the Paxos algorithm for replication between nodes in a cluster • Neo4j HA graph database implements Paxos, replacing the Apache ZooKeeper used in previous versions • Apache Mesos uses the Paxos algorithm for its replicated log coordination

  17. Paxos limitations (I) • Exceptionally difficult to understand • “The dirty little secret of the NSDI* community is that at most five people really, truly understand every part of Paxos ;-).” – Anonymous NSDI reviewer • *The USENIX Symposium on Networked Systems Design and Implementation

  18. Paxos limitations (II) • Very difficult to implement • “There are significant gaps between the description of the Paxos algorithm and the needs of a real-world system… the final system will be based on an unproven protocol.” – Chubby authors

  19. Designing for understandability • Main objective of RAFT • Whenever possible, select the alternative that is the easiest to understand • Techniques that were used include • Dividing problems into smaller problems • Reducing the number of system states to consider • Could logs have holes in them? No

  20. Raft consensus algorithm (I) • Servers start by electing a leader • Sole server authorized to accept commands from clients • Will enter them in its log and forward them to other servers • Will tell them when it is safe to apply these log entries to their state machines

  21. Raft consensus algorithm (II) • Decomposes the problem into three fairly independent subproblems • Leader election: How servers will pick a single leader • Log replication: How the leader will accept log entries from clients, propagate them to the other servers, and ensure their logs remain in a consistent state • Safety

  22. Avoiding split elections • Raft uses randomized election timeouts • Chosen randomly from a fixed interval • Increases the chances that a single follower will detect the loss of the leader before the others

  23. Example • Follower with the shortest timeout becomes the new leader • [Figure: timelines for Leader X, Follower A, and Follower B, showing the last heartbeat and each follower's election timeout]
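A minimal Go sketch of the randomized election timeout described on the previous two slides (the 150-300 ms interval follows the example range in the Raft paper; the exact bounds and the electionTimeout helper are assumptions, not something these slides specify):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // electionTimeout picks a fresh, randomized timeout from a fixed interval.
    // The 150-300 ms range follows the example in the Raft paper; the exact
    // bounds are a tuning choice, not required by the algorithm.
    func electionTimeout() time.Duration {
        const min, max = 150 * time.Millisecond, 300 * time.Millisecond
        return min + time.Duration(rand.Int63n(int64(max-min)))
    }

    func main() {
        // Each follower draws its own timeout, so with high probability one
        // follower times out first, becomes a candidate, and wins the election
        // before the others even notice the leader is gone.
        for i := 0; i < 3; i++ {
            fmt.Println("follower", i, "timeout:", electionTimeout())
        }
    }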

  24. Log replication • Leaders • Accept client commands • Append them to their log (new entry) • Issue AppendEntries RPCs in parallel to all followers • Apply the entry to their state machine once it has been safely replicated • Entry is then committed
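The sketch below illustrates the leader's side of this flow in Go. It is a simplification, not the full protocol: replicate stands in for a real AppendEntries RPC, and this leader waits for all responses before checking the count, whereas a real leader commits as soon as a majority has replied.

    package main

    import (
        "fmt"
        "sync"
    )

    // replicate is a stand-in for issuing an AppendEntries RPC to one follower;
    // a real implementation would go over the network and handle retries.
    func replicate(follower int, entry string) bool {
        return true // assume the follower accepted and appended the entry
    }

    // leaderAppend sketches the leader side of log replication: append locally,
    // fan out AppendEntries in parallel, and report once a majority (leader
    // included) has stored the entry, i.e. once it is committed.
    func leaderAppend(entry string, followers []int) bool {
        acks := 1 // the leader's own log counts toward the majority
        var mu sync.Mutex
        var wg sync.WaitGroup

        for _, f := range followers {
            wg.Add(1)
            go func(f int) {
                defer wg.Done()
                if replicate(f, entry) {
                    mu.Lock()
                    acks++
                    mu.Unlock()
                }
            }(f)
        }
        wg.Wait()

        total := len(followers) + 1
        return acks > total/2 // committed once a majority has the entry
    }

    func main() {
        fmt.Println("committed:", leaderAppend("x=1", []int{1, 2, 3, 4}))
    }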

  25. A client sends a request • [Diagram: client, leader, and two followers, each server with its log and state machine] • Leader stores the request in its log and forwards it to its followers

  26. The followers receive the request • [Diagram: client, leader, and two followers, each server with its log and state machine] • Followers store the request in their logs and acknowledge its receipt

  27. The leader tallies followers' ACKs • [Diagram: client, leader, and two followers, each server with its log and state machine] • Once it ascertains the request has been processed by a majority of the servers, it updates its state machine

  28. The leader tallies followers' ACKs • [Diagram: client, leader, and two followers, each server with its log and state machine] • Leader's heartbeats convey the news to its followers: they update their state machines

  29. Log organization • [Figure: server logs with entries arranged by index; colors identify terms]

  30. Handling slow followers, … • Leader reissues the AppendEntries RPC • AppendEntries RPCs are idempotent

  31. Committed entries • Guaranteed to be both • Durable • Eventually executed by all the available state machines • Committing an entry also commits all previous entries • All AppendEntries RPCs (including heartbeats) include the index of the leader's most recently committed entry
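A Go sketch of how the commit index travels with AppendEntries (field names follow the Raft paper; the advanceCommit helper and the surrounding simplifications are assumptions): the follower may advance its commit index up to the leader's, but never past the end of its own log.

    package main

    import "fmt"

    // AppendEntriesArgs carries, among other fields from the Raft paper,
    // the index of the leader's most recently committed entry (LeaderCommit).
    // Heartbeats are simply AppendEntries with no Entries.
    type AppendEntriesArgs struct {
        Term         int
        PrevLogIndex int
        PrevLogTerm  int
        Entries      []string
        LeaderCommit int
    }

    // advanceCommit updates a follower's commit index from a heartbeat or
    // AppendEntries: it may commit up to the leader's commit index, but never
    // past the end of its own log. Committing an entry commits all earlier ones.
    func advanceCommit(commitIndex, lastLogIndex int, args AppendEntriesArgs) int {
        if args.LeaderCommit > commitIndex {
            if args.LeaderCommit < lastLogIndex {
                return args.LeaderCommit
            }
            return lastLogIndex
        }
        return commitIndex
    }

    func main() {
        // Follower has entries up to index 7 and has committed up to 4;
        // the leader reports a commit index of 6.
        fmt.Println(advanceCommit(4, 7, AppendEntriesArgs{LeaderCommit: 6})) // 6
    }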

  32. Why? • Raft commits entries in strictly sequential order • Requires followers to accept log entry appends in the same sequential order • Cannot "skip" entries • Greatly simplifies the protocol

  33. Raft log matching property • If two entries in different logs have the same index and term • These entries store the same command • All previous entries in the two logs are identical
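The following Go sketch spells out what the property buys us (the Entry type and the matchesUpTo helper are illustrative, not from the slides): once two logs agree on the term at a given index, their entire prefixes up to that index are identical.

    package main

    import "fmt"

    // Entry is a log entry: the term in which it was created plus a command.
    type Entry struct {
        Term    int
        Command string
    }

    // matchesUpTo reports whether two logs agree on every entry up to and
    // including index i (0-based). The Log Matching property says that if
    // a[i] and b[i] have the same term, this is guaranteed to be true.
    func matchesUpTo(a, b []Entry, i int) bool {
        for j := 0; j <= i; j++ {
            if a[j] != b[j] {
                return false
            }
        }
        return true
    }

    func main() {
        a := []Entry{{1, "x=1"}, {1, "y=2"}, {2, "x=3"}}
        b := []Entry{{1, "x=1"}, {1, "y=2"}, {2, "x=3"}, {3, "y=9"}}

        // Same index, same term at index 2, so the prefixes must be identical.
        if a[2].Term == b[2].Term {
            fmt.Println("prefixes equal:", matchesUpTo(a, b, 2)) // true
        }
    }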

  34. Handling leader crashes (I) • Can leave the cluster in an inconsistent state if the old leader had not fully replicated a previous entry • Some followers may have in their logs entries that the new leader does not have • Other followers may miss entries that the new leader has

  35. Handling leader crashes (II) • [Figure: leader and follower logs at the start of a new term]

  36. An election starts • [Diagram: two servers, each with its log and state machine] • Candidate for the leader position requests the votes of the other former followers • Includes a summary of the state of its log

  37. Former followers reply • [Diagram: two servers, each with its log and state machine; one deciding whether to grant its vote] • Former followers compare the state of their logs with the credentials of the candidate • Vote for the candidate unless • Their own log is more "up to date" • They have already voted for another server
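A Go sketch of this voting rule (function names are made up; the comparison itself follows the Raft paper's definition of "at least as up to date"): a former follower grants its vote only if it has not already voted for someone else and the candidate's log is at least as up to date as its own.

    package main

    import "fmt"

    // candidateUpToDate implements Raft's "at least as up-to-date" comparison:
    // the log whose last entry has the higher term wins; if the last terms are
    // equal, the longer log wins.
    func candidateUpToDate(candLastTerm, candLastIndex, myLastTerm, myLastIndex int) bool {
        if candLastTerm != myLastTerm {
            return candLastTerm > myLastTerm
        }
        return candLastIndex >= myLastIndex
    }

    // grantVote sketches the decision a former follower makes when a candidate
    // requests its vote. votedFor is the server already voted for in this term,
    // or -1 if it has not voted yet.
    func grantVote(votedFor, candidateID, candLastTerm, candLastIndex, myLastTerm, myLastIndex int) bool {
        alreadyVotedForAnother := votedFor != -1 && votedFor != candidateID
        return !alreadyVotedForAnother &&
            candidateUpToDate(candLastTerm, candLastIndex, myLastTerm, myLastIndex)
    }

    func main() {
        // Candidate 2's log ends at term 3, index 5; ours ends at term 3, index 4
        // and we have not voted yet, so we grant the vote.
        fmt.Println(grantVote(-1, 2, 3, 5, 3, 4)) // true

        // Our log is more up to date (last term 4 > 3), so we refuse.
        fmt.Println(grantVote(-1, 2, 3, 5, 4, 4)) // false
    }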

  38. Handling leader crashes (III) • Raft's solution is to let the new leader force the followers' logs to duplicate its own • Conflicting entries in followers' logs will be overwritten
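A Go sketch of the follower-side overwrite (the overwriteFrom helper is an illustrative simplification, not the paper's full AppendEntries handler): entries that conflict with the leader's, i.e. same index but a different term, are deleted along with everything after them, and the leader's entries are appended in their place.

    package main

    import "fmt"

    // Entry is a log entry tagged with the term in which the leader created it.
    type Entry struct {
        Term    int
        Command string
    }

    // overwriteFrom sketches how a follower adopts the leader's entries starting
    // at position "from": any existing entry that conflicts (same index, different
    // term) is deleted along with everything after it, then the leader's entries
    // are appended. Matching entries are left in place, which keeps the RPC idempotent.
    func overwriteFrom(log []Entry, from int, leaderEntries []Entry) []Entry {
        for i, e := range leaderEntries {
            idx := from + i
            if idx < len(log) && log[idx].Term != e.Term {
                log = log[:idx] // drop the conflicting suffix
            }
            if idx >= len(log) {
                log = append(log, e)
            }
        }
        return log
    }

    func main() {
        follower := []Entry{{1, "x=1"}, {2, "y=2"}, {2, "y=7"}} // diverged at index 2
        fromLeader := []Entry{{3, "x=4"}, {3, "z=0"}}

        // The leader's entries start at index 2 and win: the follower's
        // conflicting suffix is overwritten.
        fmt.Println(overwriteFrom(follower, 2, fromLeader))
    }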

  39. The new leader is in charge • [Diagram: two servers, each with its log and state machine] • Newly elected candidate forces all its followers to duplicate in their logs the contents of its own log
