  1. On the Fairness-Efficiency Tradeoff for Packet Processing with Multiple Resources Wei Wang, Chen Feng, Baochun Li, Ben Liang Department of Electrical and Computer Engineering University of Toronto December 4th, 2014

  2. Middleboxes and Deep Packet Inspection ‣ Process packets based on payload ‣ IPsec, monitoring, firewalls, WAN optimization, etc. [Figure: users reach servers through middleboxes (packet filter, NAT) at the boundary between the public and private networks] 2

  3. Consumption of Multiple Resources ‣ Packet processing requires multiple types of resources (e.g., CPU, memory b/w, link b/w) ‣ Different middlebox (MB) modules consume different amounts of resources Ghodsi et al. SIGCOMM’12 3

  4. Resources should be shared fairly and efficiently among flows 4

  5. Fairness ‣ Predictable service isolation ‣ The service a flow receives in an n-flow system is at least 1/n of what it achieves when the flow monopolizes all resources ‣ Dominant Resource Fairness (DRF) ‣ Flows receive approximately the same processing time on the dominant resources of their packets 5
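To make the DRF notion concrete, here is a minimal sketch (my own illustration, not from the talk; the per-packet demands are invented) of how a packet's dominant share is computed:

```python
def dominant_share(demand, capacity):
    """Fraction of the most-contended resource this demand uses,
    i.e. the max over resources of demand/capacity."""
    return max(d / c for d, c in zip(demand, capacity))

# Hypothetical per-packet demands over (CPU time, link time):
capacity = (1.0, 1.0)        # normalized resource capacities
ipsec_pkt = (0.8, 0.2)       # CPU-bound: dominant resource is CPU
forward_pkt = (0.1, 0.6)     # bandwidth-bound: dominant resource is link

cpu_share = dominant_share(ipsec_pkt, capacity)      # 0.8, on CPU
link_share = dominant_share(forward_pkt, capacity)   # 0.6, on link
# DRF equalizes flows' shares on their respective dominant resources.
```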

  6. Efficiency ‣ High resource utilization given a non-empty system, with high traffic throughput ‣ Important in today’s enterprise networks, as a surging volume of traffic now passes through MBs 6

  7. However, fairness and efficiency are conflicting objectives in the presence of multiple resources 7

  8. Fair but Inefficient [Figure (a): a packet schedule that is fair but inefficient — packets p1–p6 of flow 1 and q1, q2 of flow 2 served one at a time on the CPU and the link over time 0–26] ‣ Fair: both flows receive 9 time units of processing on their dominant resources in each round ‣ Inefficient: the link is idle 1/3 of the time 8

  9. Efficient but Unfair [Figure (b): a packet schedule that is efficient but unfair — flow 1's packets p1–p10 run back to back on both resources, with flow 2's q1 squeezed in] ‣ Unfair: flow 1 receives 96% of the link bandwidth; flow 2 receives 36% of the CPU time ‣ Efficient: 100% CPU and link utilization given a non-empty system 9

  10. Ideally… ‣ Allow the network operator to flexibly specify the tradeoff preference ‣ Many applications may have loose fairness requirements ‣ Implement the specified tradeoff via a queueing algorithm 10

  11. However… ‣ Existing multi-resource queueing algorithms focus only on fairness, without considering efficiency ‣ The tradeoff problem has not been studied before, and is unique to multi-resource scheduling ‣ Even the efficiency measure is unclear! 11

  12. The Efficiency Measure 12

  13. Schedule Makespan ‣ Time elapsed from the arrival of the first packet until all packets finish processing on all resources ‣ Equivalently, the completion time of the last flow ‣ Max efficiency = min makespan 13
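As a toy illustration (my own sketch; the processing times below are invented, not the talk's example), the makespan of a fixed packet order through a CPU-then-link pipeline can be computed as:

```python
def makespan(order, taus):
    """Makespan of serving packets in `order` through a two-stage
    CPU-then-link pipeline: the link takes a packet only after its
    CPU stage finishes and the previous packet has left the link."""
    cpu_free = link_free = 0.0
    for flow in order:
        cpu, link = taus[flow]
        cpu_free += cpu                       # CPU serves packets back to back
        link_free = max(link_free, cpu_free) + link
    return link_free

taus = {1: (1.0, 3.0), 2: (3.0, 1.0)}         # hypothetical (CPU, link) times
short = makespan([1, 2], taus)                # 5.0
long_ = makespan([2, 1], taus)                # 7.0 — order alone changes makespan
```

Even this two-packet example shows why the scheduler's choice of order matters for efficiency.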

  14. Quantifying the Efficiency Loss ‣ Theoretical result: with m resource types, the makespan of fair queueing can be up to m times the optimal makespan ‣ Experiments confirm a 20% throughput loss for existing multi-resource fair queueing 14

  15. Makespan minimization is notoriously hard: it is NP-hard when more than two types of resources are involved 15

  16. We limit our discussion to the two resource types of most concern for packet processing: CPU and link bandwidth 16

  17. Our Approach ‣ Relax the scheduling problem to an idealized fluid model ‣ Discuss the tradeoff between fairness and efficiency in the fluid model ‣ Implement the fluid model in the real world via a packet- by-packet tracking algorithm 17

  18. The Fluid Relaxation: packets are assumed to receive service in arbitrarily small increments on all resources 18

  19. Fluid Relaxation ‣ Discrete schedule [Figure: packets p1–p6, q1, q2 occupy the CPU and the link one at a time over time 0–26] ‣ Fluid relaxation [Figure: the same packets served in infinitesimal increments, with both flows sharing the CPU and the link simultaneously] 19

  20. Fluid w/ the Perfect Fairness ‣ Implement the strict DRF allocation at all times: max-min the flows' dominant shares, max min_{i∈B} d_i, subject to the resource constraints Σ_{i∈B} τ̄_{i,r} d_i ≤ 1, r = 1, 2 (where τ̄_{i,r} is flow i's normalized packet processing time on resource r) ‣ All flows receive the same fair dominant share d̄ = 1 / max(Σ_{i∈B} τ̄_{i,1}, Σ_{i∈B} τ̄_{i,2}) 20
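Setting every d_i = d̄ in the resource constraints makes the binding resource tight, which gives the closed form above directly. A small sketch (my own illustration; the τ̄ values are hypothetical, normalized so each flow's dominant entry is 1):

```python
def drf_fair_share(taus):
    """Common dominant share d_bar under strict DRF: setting
    d_i = d_bar for all flows in the resource constraints
    sum_i tau[i][r] * d_bar <= 1 yields
    d_bar = 1 / max_r sum_i tau[i][r]."""
    return 1.0 / max(sum(t[r] for t in taus) for r in (0, 1))

# Hypothetical normalized processing times (CPU, link) per flow:
taus = [(0.5, 1.0), (1.0, 0.4)]
# Binding resource: CPU load is 1.5 vs. link load 1.4, so d_bar = 1/1.5.
```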

  21. Fluid w/ the Optimal Efficiency ‣ Greedily maximize the system dominant share at all times: max_{d_i ≥ 0} Σ_{i∈B_t} d_i, subject to the resource constraints Σ_{i∈B_t} τ̄_{i,r} d_i ≤ 1, r = 1, 2 21

  22. Fairness-Efficiency Tradeoff 22

  23. Specifying Fairness Requirement ‣ Let d̄ be the fair dominant share under DRF ‣ Let α ∈ [0, 1] be a fairness knob specified by the operator ‣ Fairness constraint: every flow receives at least an α-portion of the fair dominant share, i.e., d_i ≥ α d̄, ∀ i ∈ B 23

  24. Fairness-Efficiency Tradeoffs ‣ Maximize the system dominant share under the specified tradeoff level (quantified by the fairness knob α ∈ [0, 1]): max Σ_{i∈B_t} d_i, s.t. Σ_{i∈B_t} τ̄_{i,r} d_i ≤ 1, r = 1, 2 (resource constraints) and d_i ≥ α d̄, ∀ i ∈ B_t (fairness constraint) 24
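Because this LP has only two resource constraints plus the per-flow fairness floors, an optimum sits at a vertex where at most two flows exceed their floor, so a tiny enumeration solves it. The sketch below is my own illustration, not the paper's algorithm; the τ̄ values are hypothetical, normalized so each flow's dominant entry is 1. Note α = 0 recovers the optimal-efficiency fluid and α = 1 recovers strict DRF:

```python
from itertools import combinations

def tradeoff_total_share(taus, alpha):
    """Total system dominant share of the tradeoff fluid:
        max sum_i d_i
        s.t. sum_i tau[i][r] * d_i <= 1   (r = 1, 2)
             d_i >= alpha * d_bar         (fairness floor)
    Substituting e_i = d_i - alpha*d_bar leaves an LP of the same
    shape with shrunken capacities; with two resource constraints an
    optimal vertex has at most two e_i > 0, so we enumerate them."""
    n = len(taus)
    load = [sum(t[r] for t in taus) for r in (0, 1)]
    d_bar = 1.0 / max(load)                      # DRF fair share
    floor = alpha * d_bar
    cap = [1.0 - floor * load[r] for r in (0, 1)]
    if min(cap) < 1e-12:                         # alpha = 1: strict DRF
        return n * floor
    scaled = [(t[0] / cap[0], t[1] / cap[1]) for t in taus]
    best = 0.0
    for i in range(n):                           # one flow above its floor
        best = max(best, 1.0 / max(scaled[i]))
    for i, j in combinations(range(n), 2):       # two flows above their floors
        a, b = scaled[i], scaled[j]
        det = a[0] * b[1] - a[1] * b[0]
        if abs(det) < 1e-12:
            continue
        ei = (b[1] - b[0]) / det                 # Cramer's rule on the two
        ej = (a[0] - a[1]) / det                 # tight resource constraints
        if ei >= 0 and ej >= 0:
            best = max(best, ei + ej)
    return n * floor + best
```

With two CPU-bound flows and one link-bound flow, e.g. `taus = [(1.0, 0.1), (1.0, 0.1), (0.1, 1.0)]`, strict DRF (α = 1) yields total share 3/2.1 ≈ 1.43 while α = 0 reaches 20/11 ≈ 1.82, mirroring the slides' point that relaxing fairness buys throughput.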

  25. Implement the fluid model via packet-by-packet tracking 25

  26. Start-Time Tracking ‣ Maintain the Tradeoff Fluid as a reference system in the background ‣ In the real world, whenever there is a packet scheduling opportunity, the one that starts the earliest in the Tradeoff Fluid is scheduled first ‣ An O(log n) implementation based on a special structure of the Tradeoff Fluid ‣ Asymptotically close to the fluid model in terms of both makespan and fairness guarantee 26
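The tracking rule can be sketched in a few lines: given each packet's start time in the reference fluid, serve packets in increasing fluid start time. A heap gives O(log n) per-packet selection; the paper's actual algorithm exploits additional structure of the Tradeoff Fluid, so treat this as a simplified illustration with made-up packet names:

```python
import heapq

def track_schedule(fluid_starts):
    """Start-time tracking sketch: order real-world packet service by
    each packet's start time in the reference (fluid) system."""
    heap = [(start, pkt) for pkt, start in fluid_starts.items()]
    heapq.heapify(heap)                 # O(n) build, O(log n) per pop
    order = []
    while heap:
        _, pkt = heapq.heappop(heap)    # earliest fluid start goes first
        order.append(pkt)
    return order

# Hypothetical fluid start times for three packets:
order = track_schedule({"p1": 0.0, "q1": 2.5, "p2": 1.0})
# order == ["p1", "p2", "q1"]
```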

  27. Evaluation 27

  28. Experiment Setup ‣ Prototype implementation in the Click modular router ‣ 60 UDP flows, each sending 2,000 800-byte pkts/s ‣ Three middlebox processing modules ‣ Packet checking (bandwidth-bound): Flows 1~20 ‣ Statistical monitoring (bandwidth-bound): Flows 21~40 ‣ IPsec (CPU-bound): Flows 41~60 28

  29. Scenario 1: No packet drop 29

  30. Makespan ‣ Each flow sends 10 s of traffic

      α      Makespan (s)   Normalized Makespan (%)
      1.00   55.68          100.00
      0.95   52.50          94.28
      0.90   48.97          87.95
      0.85   47.17          84.72
      0.70   47.13          84.64
      0.60   47.07          84.54
      0.50   47.07          84.54

  ‣ Trading off 15% of fairness is sufficient to achieve the shortest makespan (a 20% throughput improvement) 30

  31. Fairness [Figure: dominant share (%) of each flow vs. fairness knob α (%), for α from 50 to 100] 31

  32. Scenario 2: buffer size=200 32

  33. Resource Utilization [Figure (a): average CPU and bandwidth utilization (%) vs. α (%), for α from 80 to 100] 33

  34. Dominant Share [Figure (b): mean dominant share (%) of each of the 60 flows, for α = 0.85, 0.90, 0.95, and 1.00] 34

  35. Per-Packet Latency [Figure (c): CDF of per-packet latency (s), for α = 0.85, 0.90, 0.95, and 1.00] 35

  36. Conclusions ‣ We have identified the problem of fairness-efficiency tradeoffs for multi-resource packet scheduling ‣ We have designed a scheduling algorithm that achieves a flexible tradeoff between fairness and efficiency for packet processing requiring both CPU and link bandwidth ‣ We have prototyped the tradeoff algorithm in Click; experimental results show that a slight fairness tradeoff is sufficient to achieve the highest efficiency 36

  37. Thank you! weiwang@ece.utoronto.ca http://iqua.ece.toronto.edu/~weiwang/ 37
