  1. CS6410 – Byzantine Agreement Kai Sun *Some slides are borrowed from Ken Birman, Andrea C. Arpaci- Dusseau, Eleanor Birrell, Zhiyuan Teo, and Indranil Gupta

  2. So Far We’ve Talked About • State machine replication • Paxos

  3. So Far We’ve Talked About • Assumption • Processors do not collude, lie, or otherwise attempt to subvert the protocol • But what if the assumption does not hold?

  4. The Byzantine Generals Problem • Leslie Lamport • PhD Brandeis 1972 • LaTeX, Clocks, Paxos, … • Robert Shostak • PhD Harvard 1974 • Staff scientist for SRI International • Founder and vice president of software for Ansa Software • Founder and CTO for Portera • Founder and CTO for Vocera • Marshall Pease

  5. The Byzantine Generals Problem “I have long felt that, because it was posed as a cute problem about philosophers seated around a table, Dijkstra's dining philosopher's problem received much more attention than it deserves. …” * Leslie Lamport *http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html

  6. Byzantine Agreement • The general commands his soldiers • If all loyal soldiers attack, victory is certain • If none attack, the Empire survives • If some attack, the Empire is lost • A gong keeps time, but the soldiers don't all need to attack at once

  7. Byzantine Soldiers • The enemy works by corrupting the soldiers • Orders are distributed by exchange of messages • Corrupt soldiers violate protocol at will • Corrupt soldiers can’t intercept and modify messages between loyal troops • The gong sounds slowly • There is ample time for loyal soldiers to exchange messages (all to all)

  8. More Formal • A commander must send an order to his n − 1 lieutenants such that • IC1. All loyal lieutenants obey the same order • IC2. If the commander is loyal, then every loyal lieutenant obeys the order he sends • IC1 and IC2 are called the interactive consistency conditions.
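
A tiny Python sketch to make IC1 and IC2 concrete (illustrative only; the names `decisions`, `loyal`, and `commander_value` are assumptions of this sketch, not from the slides):

```python
def satisfies_ic(decisions, loyal, commander_loyal, commander_value):
    """Check IC1 and IC2 for one run.

    decisions       -- dict: lieutenant -> order it obeys
    loyal           -- set of loyal lieutenants
    commander_loyal -- whether the commander is loyal
    commander_value -- the order the commander actually sent
    """
    loyal_orders = {decisions[l] for l in loyal}
    ic1 = len(loyal_orders) <= 1                                       # IC1: loyal lieutenants agree
    ic2 = (not commander_loyal) or loyal_orders <= {commander_value}   # IC2: they follow a loyal commander
    return ic1 and ic2


# Example: a loyal commander ordered "attack" and both loyal lieutenants obey it.
print(satisfies_ic({1: "attack", 2: "attack", 3: "retreat"},
                   loyal={1, 2}, commander_loyal=True, commander_value="attack"))
```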

  9. Impossibility Results • Let m be the maximum number of faulty processes that our protocol is supposed to tolerate • Byzantine agreement is not possible with fewer than 3m + 1 processes

  10. Impossibility Result • With only 3 generals, no solution can work with even 1 traitor (given oral messages) • The commander says "attack"; L2 reports "retreat" • What should lieutenant 1 (L1) do? Is the commander or lieutenant 2 (L2) the traitor?

  11. Option 1: Loyal Commander • The loyal commander sends "attack" to both lieutenants; the traitorous L2 tells L1 "retreat" • What must L1 do? • By IC2: L1 must obey the commander and attack

  12. Option 2: Loyal L2 • The traitorous commander sends "attack" to L1 and "retreat" to L2; the loyal L2 relays "retreat" to L1 • What must L1 do? • By IC1: L1 and L2 must obey the same order --> L1 must retreat

  13. Two Options • In both scenarios, L1 receives "attack" from the commander and "retreat" from L2 • Problem: L1 can't distinguish between the 2 scenarios, yet they require different actions (attack in the first, retreat in the second)

  14. General Impossibility Result • No solution with fewer than 3m+1 generals can cope with m traitors • < see paper for details >

  15. Oral Messages • Assumptions • A1) Every message sent is delivered correctly • No message loss • A2) Receiver knows who sent message • Completely connected network with reliable links • A3) Absence of message can be detected • Synchronous system only

  16. Oral Message Algorithm • OM(0) • Commander sends his value to every lieutenant • Each lieutenant uses the value received from the commander, or uses the value RETREAT if he receives no value

  17. Oral Message Algorithm • OM(m), m > 0 • The commander sends his value to every lieutenant • For each j, let w_j be the value Lieutenant j receives from the commander (or RETREAT if he receives no value) • Lieutenant j acts as commander for OM(m−1) and sends w_j to the n−2 other lieutenants • For each j and each k ≠ j, let w_k be the value Lieutenant j received from Lieutenant k in the step above (or RETREAT if he received no such value) • Lieutenant j computes majority(w_1, ..., w_{n−1})
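
A minimal Python sketch of this recursion (not the slides' code; the `is_traitor` and `lie` callbacks are one illustrative way to inject faulty behaviour, and ties in `majority` default to RETREAT as one possible convention):

```python
RETREAT = "R"

def majority(values):
    """Majority order among the values heard; RETREAT on a tie (one possible convention)."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    top = max(counts.values())
    winners = [v for v, c in counts.items() if c == top]
    return winners[0] if len(winners) == 1 else RETREAT

def om(m, commander, lieutenants, value, is_traitor, lie):
    """Simulate OM(m): returns the value each lieutenant settles on at this level.

    is_traitor(i)    -- True if general i is a traitor
    lie(src, dst, v) -- what a traitorous src sends to dst instead of v
    """
    # Step 1: the commander sends his value to every lieutenant (a traitor may lie).
    received = {j: (lie(commander, j, value) if is_traitor(commander) else value)
                for j in lieutenants}
    if m == 0:
        return received
    # Step 2: each lieutenant j acts as commander in OM(m-1) for the n-2 others.
    relayed = {j: om(m - 1, j, [k for k in lieutenants if k != j],
                     received[j], is_traitor, lie)
               for j in lieutenants}
    # Step 3: lieutenant j takes the majority of its own value and the relayed ones.
    return {j: majority([received[j]] +
                        [relayed[k][j] for k in lieutenants if k != j])
            for j in lieutenants}
```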

  18. Example: Bad Lieutenant • Scenario: m=1, n=4, traitor = L3 • Round 0: C sends A to L1, L2, and L3 • Round 1: the lieutenants exchange what they received; the traitor L3 sends R to L1 and L2 • Decision: L1 = majority(A, A, R); L2 = majority(A, A, R); both attack!

  19. Example: Bad Commander • Scenario: m=1, n=4, traitor = C • Round 0: the traitorous commander sends A to two lieutenants and R to the third • Round 1: the lieutenants exchange what they received from C • Decision: L1 = majority(A, R, A); L2 = majority(A, R, A); L3 = majority(A, R, A); all attack!
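
Assuming the `om` sketch above (with 0 as the commander and 1–3 as the lieutenants), the two n = 4 scenarios replay like this; the particular `lie` functions are just illustrative choices:

```python
# Scenario: m=1, n=4, traitor = L3, which flips every order it relays.
flip = lambda src, dst, v: ("R" if v == "A" else "A")
print(om(1, 0, [1, 2, 3], "A", is_traitor=lambda i: i == 3, lie=flip))
# -> loyal L1 and L2 both settle on "A" (attack)

# Scenario: m=1, n=4, traitor = C, which sends mixed orders.
mixed = lambda src, dst, v: ("A" if dst != 2 else "R")
print(om(1, 0, [1, 2, 3], "A", is_traitor=lambda i: i == 0, lie=mixed))
# -> all three loyal lieutenants reach the same decision
```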

  20. Bigger Example: Bad Lieutenants • Scenario: m=2, n=3m+1=7, traitors = L5, L6 • Messages: the loyal commander sends A to all six lieutenants; L5 and L6 relay R, while the loyal lieutenants relay A • Decision: each loyal lieutenant computes majority(A, A, A, A, R, R) ==> all loyal lieutenants attack!

  21. Bigger Example: Bad Commander+Lieutenant • Scenario: m=2, n=7, traitors = C, L6 • Messages: the traitorous commander sends A to L1, L3, and L5, R to L2 and L4, and an arbitrary value (x) to L6 • The lieutenants then exchange what they received • Decision?

  22. Decision with Bad Commander+Lieutenant • L1: majority(A,R,A,R,A,A) ==> Attack • L2: majority(A,R,A,R,A,R) ==> Retreat • L3: majority(A,R,A,R,A,A) ==> Attack • L4: majority(A,R,A,R,A,R) ==> Retreat • L5: majority(A,R,A,R,A,A) ==> Attack • Problem: All loyal lieutenants do NOT choose the same action

  23. Next Step of Algorithm • Verify that the lieutenants tell each other the same thing • Requires m + 1 rounds • What messages does L1 receive in this example? • Round 0: A • Round 1: 2R, 3A, 4R, 5A, 6A (L1 doesn't know L6 is a traitor) • Round 2: from L2 {3A, 4R, 5A, 6R}; from L3 {2R, 4R, 5A, 6A}; from L4 {2R, 3A, 5A, 6R}; from L5 {2R, 3A, 4R, 6A}; from L6 {?, ?, ?, ?} • All loyal lieutenants see the same round-2 messages from L1, L2, L3, L4, and L5 • majority(A, R, A, R, A, -) ==> all attack
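
The same `om` sketch from above handles the m = 2 case; because n = 7 > 3m, the theorem guarantees that the loyal lieutenants agree no matter how C and L6 misbehave (the `lie` below is just one arbitrary choice):

```python
# m=2, n=7, traitors = C (0) and L6; this lie depends only on the destination.
traitors = {0, 6}
lie = lambda src, dst, v: ("A" if dst % 2 else "R")
result = om(2, 0, [1, 2, 3, 4, 5, 6], "A",
            is_traitor=lambda i: i in traitors, lie=lie)
print({j: result[j] for j in (1, 2, 3, 4, 5)})  # the loyal lieutenants all agree
```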

  24. Algorithm Complexity • What's the cost? • OM(m) invokes (n-1) instances of OM(m-1) • OM(m-1) invokes (n-2) instances of OM(m-2) • … • OM(m-k) will be called (n-1)(n-2)…(n-k) times • Algorithm complexity is O(n^m) (note: m = number of failures)
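
A quick way to see the blow-up (a sketch; `om_invocations` is a name introduced here for the count of OM sub-instances implied by the (n-1)(n-2)…(n-k) expansion):

```python
from math import prod

def om_invocations(n, m):
    """Total OM(m-k) calls for k = 1..m, i.e. the sum of (n-1)(n-2)...(n-k)."""
    return sum(prod(range(n - k, n)) for k in range(1, m + 1))

for n, m in [(4, 1), (7, 2), (10, 3)]:
    print(n, m, om_invocations(n, m))   # grows roughly like n**m
```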

  25. Signed Messages • Problem • Traitors can lie about what others said • How can we remove that ability?

  26. Signed Messages • New assumption (A4): signed messages (cryptography) • A loyal general's signature cannot be forged and its contents cannot be altered undetected • Anyone can verify the authenticity of a signature • This simplifies the problem: • When Lieutenant j passes on a signed message from k, the receiver knows that j didn't lie about what k said • Lieutenants cannot do any harm alone (they cannot forge a loyal general's orders) • Only have to check for a traitorous commander • With cryptographic primitives, Byzantine Agreement can be implemented with m+2 nodes, using SM(m)

  27. Signed Messages Algorithm: SM(m) • 1. Initially W_j = ∅ • 2. The commander signs w and sends it to every lieutenant as (w:0) • 3. Each Lieutenant j: • A) If he receives (w:0) and no other order: 1) W_j = {w}; 2) sends (w:0:j) to all others • B) If he receives (w:0:k_1:...:k_l) and w is not in W_j: 1) adds w to W_j; 2) if l < m, sends (w:0:k_1:...:k_l:j) to every lieutenant not in {k_1, …, k_l} • 4. When no more messages will arrive, obeys the order choice(W_j)

  28. Signed Messages Algorithm: SM(m) • choice(W) • If the set W consists of the single element w, then choice(W) = w • choice(∅) = RETREAT • One possible definition is to let choice(W) be the median element of W
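
A minimal round-based sketch of SM(m) in Python (not the slides' code, and simplified): signatures are modelled only as the path of signers carried with each message, a traitorous commander is expressed through the per-lieutenant `commander_orders`, and traitorous lieutenants here simply stay silent, which is one of the few behaviours A4 leaves them.

```python
RETREAT = "R"

def choice(orders):
    """choice(W): the order itself for a singleton set, RETREAT for the empty set,
    and any fixed deterministic rule otherwise (here: the minimum)."""
    return min(orders) if orders else RETREAT

def sm(m, lieutenants, commander_orders, traitors):
    """Simulate SM(m). Returns choice(W_j) for every loyal lieutenant j.

    commander_orders -- dict: lieutenant -> value the commander signs and sends to it
    traitors         -- set of traitorous lieutenants (modelled as staying silent)
    """
    W = {j: set() for j in lieutenants}
    # A message is (value, path); path = (0, k1, ..., kl) records the signatures so far.
    inbox = {j: [(commander_orders[j], (0,))] for j in lieutenants}
    while any(inbox.values()):
        outbox = {j: [] for j in lieutenants}
        for j in lieutenants:
            if j in traitors:
                continue                      # this traitor model: drop everything
            for value, path in inbox[j]:
                if value not in W[j]:
                    W[j].add(value)
                    if len(path) - 1 < m:     # l < m: relay with one more signature
                        for k in lieutenants:
                            if k != j and k not in path:
                                outbox[k].append((value, path + (j,)))
        inbox = outbox
    return {j: choice(W[j]) for j in lieutenants if j not in traitors}
```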

  29. SM(1) Example: Bad Commander • Scenario: m=1, n=m+2=3, bad commander • The commander sends A:0 to L1 and R:0 to L2 • What next? L1 relays A:0:L1 to L2; L2 relays R:0:L2 to L1 • W_1 = {A, R}, W_2 = {R, A} • Both apply the same decision to {A, R}

  30. SM(2): Bad Commander+Lieutenant • Scenario: m=2, n=m+2=4, bad commander and L3 • Goal: L1 and L2 must make the same decision • The commander sends A:0 to L1 and L2 (and x to L3); the relayed messages include A:0:L1, A:0:L2, A:0:L3, R:0:L3, and R:0:L3:L1 • W_1 = W_2 = {A, R} ==> same decision
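
Assuming the `sm` sketch above (with its simplified "silent traitor" model), the two scenarios can be replayed as:

```python
# SM(1), n = 3: the traitorous commander signs conflicting orders.
print(sm(1, [1, 2], commander_orders={1: "A", 2: "R"}, traitors=set()))
# -> both lieutenants end with W = {A, R} and apply the same choice()

# SM(2), n = 4: traitorous commander and traitorous L3.
print(sm(2, [1, 2, 3], commander_orders={1: "A", 2: "A", 3: "R"}, traitors={3}))
# -> loyal L1 and L2 build the same W, hence reach the same decision
```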

  31. Other Variations • How to handle missing communication paths • < see paper for details >

  32. Compared with Asynchronous Scenarios (m = traitors, n = total processes) • Oral messages: works in a synchronous system if n >= 3m+1, fails if n <= 3m; no guarantee in an asynchronous system if m >= 1 * • Signed messages: works in a synchronous system if n >= 1 (won't fail unless there are no correct processes); no guarantee in an asynchronous system if m >= 1 * • *Fischer, Michael J., Nancy A. Lynch, and Michael S. Paterson. "Impossibility of distributed consensus with one faulty process." Journal of the ACM (JACM) 32.2 (1985): 374-382.

  33. Thought?

  34. Easy Impossibility Proofs for Distributed Consensus Problems • Michael J. Fischer • PhD from Harvard (applied mathematics) • Professor at Yale • ACM Fellow • Nancy A. Lynch • PhD from MIT • Professor at MIT • ACM Fellow, Dijkstra Prize, Knuth Prize, … • Michael Merritt • PhD from Georgia Tech • President, Brookside Engine Company No. 1 • Vice-Chair, Advancement Committee, Patriots' Path Council

  35. Easy Impossibility Proofs for Distributed Consensus Problems • A process is regarded as a machine processing a tape of inputs • Called an agreement device • They build a communications graph. The messages that pass over an edge from a source to a destination node are a behavior of the device on that edge • Behavior of the system is a set of node and edge behaviors • In their proofs, faulty devices often exhibit different and inconsistent behaviors with respect to different participants
