1. Tagger: Practical PFC Deadlock Prevention in Data Center Networks
Shuihai Hu* (HKUST), Yibo Zhu, Peng Cheng, Chuanxiong Guo* (Toutiao), Kun Tan* (Huawei), Jitendra Padhye, Kai Chen (HKUST), Microsoft
CoNEXT 2017, Incheon, South Korea
* Work done while at Microsoft

2. RDMA is Being Widely Deployed
• RDMA: Remote Direct Memory Access
• High throughput and low latency with low CPU overhead
• Microsoft, Google, etc. are deploying RDMA
[Figure: RDMA applications bypass the kernel; RDMA NICs communicate over a lossless network (with PFC)]

3. Priority Flow Control (PFC)
• A switch PAUSEs its upstream switch when the ingress queue reaches the PFC threshold
• This avoids packet drops due to buffer overflow
[Figure: congestion at a switch; a PFC threshold of 3 packets triggers a PAUSE frame to the upstream switch]
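
To make the PAUSE behaviour concrete, here is a minimal Python sketch of a PFC-style ingress queue, assuming a simple per-queue packet counter; the 3-packet threshold matches the figure, and the print calls stand in for real PAUSE/RESUME frames.

```python
# A minimal sketch of the PFC behaviour described above; the queue model
# and threshold are illustrative, not real switch logic.
from collections import deque

PFC_THRESHOLD = 3  # pause upstream once this many packets are buffered

class IngressQueue:
    def __init__(self):
        self.buffer = deque()
        self.paused_upstream = False

    def enqueue(self, pkt):
        self.buffer.append(pkt)
        # Instead of dropping on overflow, PFC pauses the upstream sender.
        if len(self.buffer) >= PFC_THRESHOLD and not self.paused_upstream:
            self.paused_upstream = True
            print("send PAUSE to upstream switch")

    def dequeue(self):
        pkt = self.buffer.popleft()
        # Resume the upstream sender once the queue drains below the threshold.
        if len(self.buffer) < PFC_THRESHOLD and self.paused_upstream:
            self.paused_upstream = False
            print("send RESUME to upstream switch")
        return pkt

q = IngressQueue()
for i in range(3):
    q.enqueue(f"pkt{i}")  # the third enqueue triggers the PAUSE
```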

4. A Simple Illustration of PFC Deadlock
• Switches A, B, and C each reach the PFC threshold and PAUSE one another in a cycle
• The cause is a Cyclic Buffer Dependency (CBD): A -> B -> C -> A
• Not just a theoretical problem: we have seen it in our data centers too!
[Figure: switches A, B, C at their PFC thresholds, each sending PAUSE to the next]

5. CBD in the Clos Network
[Figure: Clos topology with spine switches S1, S2, leaf switches L1-L4, and ToR switches T1-T4]

6. CBD in the Clos Network
• Consider two flows that initially follow shortest UP-DOWN paths
[Figure: flows 1 and 2 taking shortest UP-DOWN paths through the Clos topology]

7. CBD in the Clos Network
• Due to link failures, both flows are locally rerouted onto non-shortest paths
[Figure: flows 1 and 2 rerouted around the failed links]

8. CBD in the Clos Network
• These two DOWN-UP bounced flows create a CBD: L2 -> S1 -> L3 -> S2 -> L2
[Figure: buffer dependency graph over L2, S1, L3, S2 showing the cycle]
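
The CBD on this slide is just a cycle in the buffer dependency graph, so it can be checked mechanically. Below is a small sketch (not from the paper) that runs a depth-first search over the dependency edges shown above.

```python
# Detect a cyclic buffer dependency (CBD) by cycle-finding in a directed
# graph. The edge list encodes the slide's cycle L2 -> S1 -> L3 -> S2 -> L2.
def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:  # back edge: cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

buffer_deps = [("L2", "S1"), ("S1", "L3"), ("L3", "S2"), ("S2", "L2")]
print(has_cycle(buffer_deps))  # True: the CBD from the slide
```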

9. Real in Production Data Centers?
• Packet reroute measurements in more than 20 data centers: ~100,000 DOWN-UP reroutes!

10. Handling Deadlock is Important
• #1: A transient problem can turn into a PERMANENT deadlock
– Transient loops due to link failures
– Packet flooding
– ...
• #2: A small deadlock can cause a large deadlock
[Figure: PAUSE frames propagating outward from a small deadlock, spreading it through the network]

11. Three Key Challenges
What are the challenges in designing a practical deadlock prevention solution?
• No change to existing routing protocols or hardware
• Link failures and routing errors are unavoidable at scale
• Switches support at most 8 lossless priorities (and typically only two can be used)

12. The Existing Deadlock Prevention Solutions
• #1: Deadlock-free routing protocols
– Not supported by commodity switches (fail challenge #1)
– Do not work with link failures or routing errors (fail challenge #2)
• #2: Buffer management schemes
– Require a lot of lossless priorities (fail challenge #3)
• Our answer: Tagger

13. TAGGER DESIGN

14. Important Observation
• Fat-tree [SIGCOMM'08] and VL2 [SIGCOMM'09]: desired path set is all shortest paths
• BCube [SIGCOMM'09] and HyperX [SC'09]: desired path set is dimension-order paths
• Takeaway: in a data center, we can ask the operator to supply a set of expected lossless paths (ELP)!
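
As a hedged illustration of how an operator might produce such an ELP, the sketch below uses networkx to enumerate all shortest ToR-to-ToR paths in the small Clos from the earlier slides; the switch names follow those figures, and the path-set choice is the Fat-tree/VL2 one named above.

```python
# A sketch of ELP generation for the slides' Clos topology: the ELP is
# taken to be all shortest paths between every ToR pair (an assumption
# matching the Fat-tree/VL2 row above).
import itertools
import networkx as nx

g = nx.Graph()
tors = ["T1", "T2", "T3", "T4"]
g.add_edges_from([("T1", "L1"), ("T1", "L2"), ("T2", "L1"), ("T2", "L2"),
                  ("T3", "L3"), ("T3", "L4"), ("T4", "L3"), ("T4", "L4")])
g.add_edges_from((leaf, spine) for leaf in ["L1", "L2", "L3", "L4"]
                 for spine in ["S1", "S2"])

elp = [path
       for src, dst in itertools.permutations(tors, 2)
       for path in nx.all_shortest_paths(g, src, dst)]
# Every ELP path here is UP-DOWN (e.g. T1 -> L1 -> S1 -> L3 -> T3), so the
# lossless classes it induces are CBD-free.
print(len(elp), elp[0])
```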

15. Basic Idea of Tagger
1. Ask operators to provide the topology and the expected lossless paths (ELP)
2. Packets carry tags while in the network
3. Pre-install match-action rules at switches for tag manipulation and packet queueing
• Packets travelling within the ELP use lossless queues, and a CBD never forms
• Packets that deviate from the ELP go to a lossy queue, so PFC is never triggered for them

16. Illustrating Tagger for the Clos Topology
• ELP = all shortest paths (CBD-free)
• Root cause of CBD: packets deviating from UP-DOWN routing!
[Figure: Clos topology with flows 1 and 2]

17. Illustrating Tagger for the Clos Topology
• Under Tagger, packets carry tags while travelling in the network
• Initially, tag value = NoBounce
• Tagger pre-installs match-action rules at the switches for tag manipulation

Match-action rules installed at the switches:

Tag      | InPort | OutPort | NewTag
NoBounce | S1     | S2      | Bounced
...      | ...    | ...     | ...

[Figure: flow 1 entering the network with tag = NoBounce]

18. Illustrating Tagger for the Clos Topology
• A packet with tag = NoBounce is received by switch L3
[Figure: flow 1's packet arriving at L3 from S1]

19. Illustrating Tagger for the Clos Topology
• A DOWN-UP bounce is observed at L3!
• The matching rule rewrites the tag from NoBounce to Bounced once the DOWN-UP bounce is detected
[Figure: the packet's tag rewritten as it bounces from S1 through L3 toward S2]
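
A minimal sketch of how such a rule lookup might behave at switch L3, assuming a (tag, ingress port, egress port) match key as in the table on slide 17; the rule set and default action here are illustrative, not switch firmware.

```python
# Sketch of the tag-rewriting match-action rule at L3. A packet that
# arrives from spine S1 (DOWN) and leaves toward spine S2 (UP) has taken
# a DOWN-UP bounce, so its tag is rewritten.
NO_BOUNCE, BOUNCED = "NoBounce", "Bounced"

rules_at_L3 = {
    (NO_BOUNCE, "S1", "S2"): BOUNCED,
    (NO_BOUNCE, "S2", "S1"): BOUNCED,
}

def apply_tag_rules(rules, tag, in_port, out_port):
    # Default action: leave the tag unchanged if no rule matches.
    return rules.get((tag, in_port, out_port), tag)

# Flow 1's packet enters L3 from S1 and is rerouted back up toward S2:
print(apply_tag_rules(rules_at_L3, NO_BOUNCE, "S1", "S2"))  # Bounced
```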

20. Illustrating Tagger for the Clos Topology
• With tag = Bounced, S2 knows this is a bounced packet that deviates from the ELP, so it is placed in the lossy queue
• No PFC PAUSE is sent from S2 to L3, so the buffer dependency from L3 to S2 is removed

21. Illustrating Tagger for the Clos Topology
• Tagger does the same for packets of flow 2
• Two buffer dependency edges are removed, so the CBD L2 -> S1 -> L3 -> S2 -> L2 is eliminated
[Figure: buffer dependency graph with both edges removed, breaking the cycle]

22. What If the ELP Has a CBD?
• ELP = shortest paths + 1-bounce paths (the ELP itself now has a CBD!)
[Figure: Clos topology with 1-bounce paths included in the ELP]

23. Segmenting the ELP into CBD-free Subsets
• Two bounced paths are now in the ELP
• Split every path at its bounce point:
– Path segments before the bounce: only UP-DOWN paths, no CBD
– Path segments after the bounce: only UP-DOWN paths, no CBD
[Figure: flows 1 and 2 split into their segments before and after the bounce]

24. Isolating Path Segments with Tags
• Tag 1: path segments before the bounce
• Tag 2: path segments after the bounce
[Figure: the two segment sets, one per tag]

25. Isolating Path Segments with Tags
• Add a rule at switch L3: (Tag = 1, InPort = S1, OutPort = S2) -> NewTag = 2
[Figure: the packet's tag rewritten from 1 to 2 as it bounces at L3]
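
In the dict-based sketch after slide 19, this rule would be just one more entry, with integer tags in place of NoBounce/Bounced:

```python
# Hypothetical extension of the earlier rules_at_L3 sketch: a tag-1 packet
# bouncing from S1 to S2 at L3 is rewritten to tag 2.
rules_at_L3[(1, "S1", "S2")] = 2
```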

26. No CBD after Segmentation
• Packets with tag i go to the i-th lossless queue
• In the per-tag buffer dependency graphs, the old CBD L2 -> S1 -> L3 -> S2 -> L2 no longer exists
[Figure: buffer dependency graphs for tag 1 and tag 2, both acyclic]

27. What If k-bounce Paths Are All in the ELP?
• ELP = shortest UP-DOWN paths + 1-bounce paths + ... + k-bounce paths
• Solution: just segment the ELP into CBD-free subsets based on the number of bounces so far!
[Figure: Clos topology]

28. Summary: Tagger Design for the Clos Topology
1. Initially, packets carry tag = 1
2. Pre-install match-action rules at the switches:
• On a DOWN-UP bounce, increase the tag by 1
• Enqueue packets with tag i (i <= k+1) into the i-th lossless queue
• Enqueue packets with tag i (i > k+1) into the lossy queue
For the Clos topology, Tagger is optimal in the number of lossless priorities.
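
Putting the rules together, here is a sketch of the per-packet logic for the Clos design; K, the port-name test, and the queue labels are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Tagger's per-packet logic for a Clos whose ELP allows up to
# K bounces: bump the tag on each DOWN-UP bounce, then map tags to queues.
K = 1  # ELP allows up to 1 bounce, so tags 1..K+1 are lossless

def is_down_up_bounce(in_port, out_port):
    # In this sketch, spine-facing ports are named "S*": entering from a
    # spine and leaving toward a spine means a DOWN-UP bounce.
    return in_port.startswith("S") and out_port.startswith("S")

def forward(tag, in_port, out_port):
    if is_down_up_bounce(in_port, out_port):
        tag += 1  # rule: DOWN-UP bounce -> increase tag by 1
    if tag <= K + 1:
        queue = f"lossless queue {tag}"  # tag i -> i-th lossless queue
    else:
        queue = "lossy queue"            # deviated from ELP: no PFC
    return tag, queue

print(forward(1, "S1", "S2"))  # (2, 'lossless queue 2')
print(forward(2, "S2", "S1"))  # (3, 'lossy queue')
```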

29. How to Implement Tagger?
• Use the DSCP field in the IP header as the tag carried in packets
• Build a 3-step match-action pipeline with the basic ACL rules available in commodity switches
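
Since DSCP is the upper 6 bits of the IP TOS/traffic-class byte, a tag can be encoded and decoded with simple shifts; the identity tag-to-DSCP mapping below is an assumption for illustration, as the slide only says DSCP carries the tag.

```python
# Hedged sketch of carrying the tag in the DSCP field. DSCP occupies
# bits 7..2 of the TOS byte; the tag value maps directly to DSCP here.
def tag_to_tos(tag):
    assert 0 <= tag < 64, "DSCP is only 6 bits"
    return tag << 2

def tos_to_tag(tos):
    return tos >> 2

tos = tag_to_tos(2)
print(hex(tos), tos_to_tag(tos))  # 0x8 2
```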

30. Tagger Meets All the Three Challenges
1. Works with existing routing protocols and hardware
2. Works with link failures and routing errors
3. Works with a limited number of lossless queues

31. More Details in the Paper
• Proof of deadlock freedom
• Analysis and discussion
– Algorithm complexity
– Optimality
– Compression of match-action rules
– ...

32. Evaluation-1: Tagger Prevents Deadlock
• Scenario: two bounced flows form a CBD and deadlock the network
• Tagger avoids the CBD caused by the bounced flows and prevents the deadlock!
[Figure: Clos topology showing the deadlock scenario without Tagger]

33. Evaluation-2: Scalability of Tagger
• Match-action rules and priorities required for the Jellyfish topology (* the last entry includes an additional 20,000 random paths)
• Tagger is scalable in terms of the number of lossless priorities and ACL rules
[Table: rule and priority counts, not preserved in this transcript]

34. Evaluation-3: Overhead of Tagger
• Tagger rules have no impact on throughput or latency

35. Conclusion
• Tagger: a tagging system that guarantees deadlock freedom
• Practical:
– Requires no change to existing routing protocols
– Implementable with existing commodity switching ASICs
– Works with a limited number of lossless priorities
• General:
– Works with any topology
– Works with any ELP

36. Thanks!
