  1. Bimodal Multicast And Cache Invalidation

  2. Who/What/Where • Bruce Spang • Software Engineer/Student • Fastly

  4. Powderhorn

  5. Outline • Frame the problem • The papers we looked at • Bimodal Multicast • What we built

  6. Content Delivery Networks

  7. Cache Invalidation We would like to be able to update a piece of content globally

  8. Cache Invalidation

  9. Cache Invalidation

  10. Cache Invalidation

  11. Approach Notify all servers to remove a piece of content

  12. Naïve Approach Central Service.

  13. Central Service (diagram: Message)

  14. Central Service (diagram: Ack)

  15. Central Service

  16. Central Service (diagram: Message)

  17. Central Service (diagram: Ack)

  18. Central Service Works

  19. Problems • Unreliable • High Latency

  20. Another Idea Cache servers can send purges themselves

  21. Fixes • More Reliable • Lower Latency

  22. Another Idea (diagram: Message)

  23. Another Idea (diagram: Ack)

  24. Problem Every server sends an ack to the sender

  25. Conclusion This is hard.

  26. Reliable Broadcast

  27. Basic Problem Send a message to a set of servers

  28. Applications • Stock Exchanges • Control Systems • Configuration • Cache Invalidation

  29. Atomic Broadcast • Paxos • ZooKeeper Atomic Broadcast • etc…

  30. Strong Guarantees • Guaranteed Delivery • Total Order

  31. Too Many Guarantees • Don’t need Ordering • Don’t need Atomicity

  32. “Best-Effort” Broadcast “Try very hard” to deliver a message

  33. Algorithms • Scalable Reliable Multicast • Bimodal Multicast • Plumtree • Sprinkler • etc…

  35. Goals • “Best-Effort” Delivery • Predictable Performance

  36. Algorithm • Dissemination • Anti-Entropy (Gossip)

  37. Dissemination • Unreliably broadcast a message to all other servers • IP multicast, UDP in a for loop, etc…
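
A minimal sketch of the "UDP in a for loop" flavor of dissemination, in Go; the peer addresses, message encoding, and function names are illustrative assumptions, not Powderhorn's actual implementation.

```go
// disseminate.go: best-effort broadcast, "UDP in a for loop" style.
// Peer addresses and the message encoding are illustrative assumptions.
package main

import (
	"log"
	"net"
)

// disseminate sends msg to every peer once, best-effort: errors are
// logged and ignored, and no acknowledgements are expected.
func disseminate(msg []byte, peers []string) {
	for _, addr := range peers {
		conn, err := net.Dial("udp", addr)
		if err != nil {
			log.Printf("dial %s: %v", addr, err)
			continue
		}
		if _, err := conn.Write(msg); err != nil {
			log.Printf("send to %s: %v", addr, err)
		}
		conn.Close()
	}
}

func main() {
	peers := []string{"10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000"}
	disseminate([]byte("purge /index.html"), peers)
}
```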

  38. Anti-Entropy • Each server sends a random server a digest of the messages it knows about • If the other server hasn’t received some of those messages, it requests that they be retransmitted
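
A sketch of one anti-entropy round, assuming messages carry integer IDs; to keep it self-contained, the network exchange is modeled as direct calls between two in-process "servers" rather than real gossip traffic.

```go
// antientropy.go: one anti-entropy round between two in-process servers.
// Message IDs, payloads, and the in-memory exchange are assumptions made
// for illustration.
package main

import "fmt"

type server struct {
	name     string
	messages map[int][]byte // message ID -> payload
}

// digest returns the IDs of all messages this server knows about.
func (s *server) digest() []int {
	ids := make([]int, 0, len(s.messages))
	for id := range s.messages {
		ids = append(ids, id)
	}
	return ids
}

// missing returns the IDs in a digest that this server has not received.
func (s *server) missing(digest []int) []int {
	var need []int
	for _, id := range digest {
		if _, ok := s.messages[id]; !ok {
			need = append(need, id)
		}
	}
	return need
}

// gossipWith runs one round: send our digest to the peer, the peer asks
// for what it is missing, and we retransmit those messages.
func (s *server) gossipWith(peer *server) {
	for _, id := range peer.missing(s.digest()) {
		peer.messages[id] = s.messages[id]
		fmt.Printf("%s retransmitted message %d to %s\n", s.name, id, peer.name)
	}
}

func main() {
	a := &server{"A", map[int][]byte{1: []byte("purge /a"), 2: []byte("purge /b")}}
	b := &server{"B", map[int][]byte{1: []byte("purge /a")}} // B missed message 2
	a.gossipWith(b)
}
```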

  39. Example

  40. Example

  41. Example

  42. Gossip

  43. Convergence

  44. Goals • “Best-Effort” Delivery • Predictable Performance

  45. Stable Throughput (diagram: Message)

  46. Stable Throughput (diagram: Ack)

  47. Stable Throughput (diagram: Message)

  48. Stable Throughput (diagram: Resend)

  49. Independence A server that’s behind will recover from many servers in the cluster.
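
A small sketch of why recovery load spreads out, assuming the anti-entropy partner is chosen uniformly at random each round: a lagging server ends up pulling retransmissions from many peers rather than hammering just one.

```go
// independence.go: with a uniformly random partner each round, no single
// peer serves all of a lagging server's recovery. Peer names and the
// round count are illustrative.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	peers := []string{"A", "B", "C", "D", "E"}
	counts := make(map[string]int)
	// A server that is behind gossips for many rounds; each round picks
	// a random partner, so retransmissions are spread across the cluster.
	for round := 0; round < 1000; round++ {
		partner := peers[rand.Intn(len(peers))]
		counts[partner]++
	}
	for _, p := range peers {
		fmt.Printf("recovered from %s in %d rounds\n", p, counts[p])
	}
}
```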

  50. Retransmission Limits Don’t DDoS servers that are trying to recover.
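
One way this could look, sketched with an assumed per-round cap (maxPerRound is illustrative, not a value from the talk): truncate what any single round retransmits to one peer and let later rounds pick up the rest.

```go
// retransmit_limit.go: cap retransmissions per gossip round so a
// recovering server is not flooded. maxPerRound is an assumed parameter.
package main

import "fmt"

const maxPerRound = 100

// limitRetransmissions truncates the requested message IDs so a single
// round never sends more than maxPerRound messages to one peer; whatever
// is left over will be requested again in a later round.
func limitRetransmissions(requested []int) []int {
	if len(requested) > maxPerRound {
		return requested[:maxPerRound]
	}
	return requested
}

func main() {
	// A peer that was partitioned away might ask for thousands of
	// messages at once; only the first maxPerRound go out this round.
	var requested []int
	for id := 1; id <= 1000; id++ {
		requested = append(requested, id)
	}
	fmt.Printf("sending %d of %d requested messages this round\n",
		len(limitRetransmissions(requested)), len(requested))
}
```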

  51. Soft Failure Detection Ignore servers that are running behind.
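
A sketch of one possible policy, assuming "running behind" is defined by a threshold on how many messages a peer is missing; the threshold and the skip-this-round decision are made up for illustration.

```go
// soft_failure.go: ignore peers that are too far behind this round,
// using an assumed maxBehind threshold.
package main

import "fmt"

const maxBehind = 10000

// shouldIgnore reports whether a peer is so far behind that gossiping
// with it this round would do more harm than good.
func shouldIgnore(missing int) bool {
	return missing > maxBehind
}

func main() {
	for _, missing := range []int{12, 50000} {
		if shouldIgnore(missing) {
			fmt.Printf("peer missing %d messages: ignored this round\n", missing)
		} else {
			fmt.Printf("peer missing %d messages: gossiping normally\n", missing)
		}
	}
}
```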

  52. “Best effort” is ok! • Messages are delivered • Predictable Performance

  53. Powderhorn Bimodal Multicast in the Wild

  54. Development All the logic is in failure handling.

  55. Jepsen • Five nodes • Simulated partitions, packet loss • http://github.com/aphyr/jepsen

  56. “Scale” Testing • EC2 • 200 m1.mediums

  57. Avalanche • Random, repeatable network fault injection • https://github.com/fastly/avalanche

  58. Garbage Collection “I have messages {1,2,3,4,5,6,7,8,9,…}”

  59. Garbage Collection

  60. Computers are Horrible We see high packet loss and network partitions all the time.

  61. Convergence < 40%

  62. The Digest List of Message IDs

  63. The Digest Doesn’t Have to be a List

  64. The Digest • send ranges of IDs of known messages • “messages 1 to 1,000,000 from server 1” • can represent large numbers of messages
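
A sketch of what a range-based digest might look like, assuming each server numbers its own messages consecutively so a (server, first, last) triple can stand in for millions of individual IDs; the struct and field names are illustrative.

```go
// digest_ranges.go: a digest as ranges of message IDs per origin server,
// instead of a flat list of IDs. Shapes and values are assumptions.
package main

import "fmt"

// idRange summarizes "messages first..last from server".
type idRange struct {
	server      string
	first, last uint64
}

// count returns how many message IDs the range covers.
func (r idRange) count() uint64 { return r.last - r.first + 1 }

func main() {
	// One small struct stands in for a million individual message IDs.
	digest := []idRange{
		{server: "server-1", first: 1, last: 1000000},
		{server: "server-2", first: 1, last: 500000},
	}
	var total uint64
	for _, r := range digest {
		fmt.Printf("%s: messages %d to %d (%d IDs)\n", r.server, r.first, r.last, r.count())
		total += r.count()
	}
	fmt.Printf("digest covers %d messages with %d ranges\n", total, len(digest))
}
```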

  65. The Digest

  66. Behavior

  67. End-to-End Latency: density plot and 95th percentile of purge latency by server location (x-axis: latency in ms). New York 42ms, London 74ms, San Jose 83ms, Tokyo 133ms.

  68. Firewall Partition: purge performance under network partition (plots of throughput in messages/s and 95th percentile latency in s over time, for cache servers A, B, C, and D).

  69. NYC to London Partition: purge performance (plots of throughput and recovered purges in messages/s over time, for NYC and London cache servers).

  70. APAC Packet Loss: purge performance (plots of throughput and recovered purges in messages/s over time, for affected and unaffected cache servers).

  71. DDoS: purge performance under a denial-of-service attack (plots of throughput in messages/s and 95th percentile latency in s over time, for victim and unaffected cache servers).

  72. Bimodal Multicast is Great We generally don’t have to worry about purging failing, even when the network does.

  73. Fin. brucespang.com/bimodal

  74. Questions? brucespang.com/bimodal

  75. We're Hiring www.fastly.com/about/jobs

  76. Decent Hash Table: purge performance with a linear-probing hash table (plot of 95th percentile latency in ms by date, May 07 to May 13).
