competitive analysis in buffer management
  1. competitive analysis in buffer management . Sergey I. Nikolenko 1,2,3 Summer School on Operational Research and Applications Nizhny Novgorod, May 25, 2016 1 NRU Higher School of Economics, St. Petersburg 2 Steklov Institute of Mathematics at St. Petersburg 3 Deloitte Analytics Institute, Moscow

  2. intro and problem setting .

  3. problem setting . • A buffer B that handles a sequence of arriving packets. • Discrete time, each time slot contains: (1) arrival : new packets arrive, and the buffer management unit performs admission control and, possibly, push-out; (2) assignment and processing : a single packet is selected for processing by the scheduling module; (3) transmission : packets with zero required processing left are transmitted and leave the queue. • The goal is to transmit as many packets as possible (i.e., drop as little as possible). 3
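The three phases of a time slot can be sketched in code. This is a minimal illustration, not the talk's model in full: the class and method names (`Buffer`, `arrival`, etc.) are made up, admission is greedy, and there is no push-out.

```python
# Sketch of one discrete time slot: (1) arrival/admission,
# (2) processing of a single selected packet, (3) transmission.
from collections import deque

class Buffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()          # each entry = remaining processing cycles
        self.transmitted = 0

    def arrival(self, packets):
        """Admission control: greedily accept while there is free space."""
        for r in packets:
            if len(self.queue) < self.capacity:
                self.queue.append(r)  # no push-out in this simple sketch

    def process_and_transmit(self):
        """Process the head-of-line packet; transmit it when it is done."""
        if self.queue:
            self.queue[0] -= 1
            if self.queue[0] == 0:    # zero required processing left
                self.queue.popleft()
                self.transmitted += 1

buf = Buffer(capacity=2)
for arrivals in [[1, 1, 1], [], []]:  # three time slots of unit packets
    buf.arrival(arrivals)
    buf.process_and_transmit()
print(buf.transmitted)  # 2: one of the three packets was dropped on arrival
```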

  4. packets, buffers, processing orders . • The structure of the buffer can be different: • single queue : all packets go to a single output port; • multiple queues : clearly separated queues leading to different output ports; • shared memory : different output ports but the memory is shared (and has to be balanced); • CIOQ (combined input-output queued) switches: several inputs, several outputs, one queue per output at every input; • crossbar switches: buffers at intersections. 4

  5. packets, buffers, processing orders . • Packets can differ in various characteristics : • value v(p) ∈ {1, … , V} : how much the packet contributes to the objective function; • required processing cycles r(p) ∈ {1, … , k} : how long must a CPU work on the packet before transmission; • output port : where a packet is headed; • size in bytes (buffer slots). 4

  6. packets, buffers, processing orders . • Finally, processing and transmission orders can be different too: • in FIFO order, packets should be processed and transmitted in the order they arrived; • in semi-FIFO order, processing is free but transmission should follow arrivals; • or the order does not matter, so we are free to construct priority queues. 4

  7. competitiveness . • The goal is to transmit as many packets as possible (i.e., drop as little as possible). Definition An online algorithm A is said to be α -competitive (for some α ≥ 1 ) if for any arrival sequence σ the number of packets successfully transmitted by A is at least 1/α times the number of packets successfully transmitted by an optimal solution (denoted OPT ) obtained by an offline clairvoyant algorithm. • This is a worst-case definition, with guarantees over all traffic distributions. • Lower bounds are counterexamples, upper bounds are uniform guarantees (often interesting theorems). 5
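The definition compares ALG to OPT in the worst case over all arrival sequences. A toy helper makes this concrete; the per-sequence transmission counts below are made up for illustration.

```python
# Empirical competitive ratio: worst-case OPT/ALG over tested sequences.
# A true competitiveness proof must cover *all* sequences, not a sample.
def competitive_ratio(alg_counts, opt_counts):
    """Worst observed ratio OPT/ALG; ALG is alpha-competitive if
    this never exceeds alpha on any arrival sequence."""
    return max(opt / alg for alg, opt in zip(alg_counts, opt_counts))

# hypothetical transmission counts on three arrival sequences:
alg = [10, 8, 6]
opt = [10, 12, 8]
print(competitive_ratio(alg, opt))  # 1.5 on these three sequences
```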

  8. plan . • Our plan: • try to study buffers, packets, processing orders in different combinations; • a lot of different combinations and works, we only look at some of the simplest and most interesting (best interest/technicality ratio); • start with uniform packets (shared memory); • then look at packets with heterogeneous processing; • and finally at packets with multiple characteristics. 6

  9. uniform packets .

  10. simplest setting . • Simplest setting: single queue, all packets are identical. • What is the competitive ratio? 8

  11. simplest setting . • Simplest setting: single queue, all packets are identical. • What is the competitive ratio? • Naturally, 1 : the greedy algorithm is optimal. • Hasn’t been too hard, has it? Well, there’s more... • Based on this paper: • W. Aiello, A. Kesselman, Y. Mansour. Competitive buffer management for shared-memory switches. ACM Transactions on Algorithms, vol. 5, no. 1, 2008. 8

  12. shared memory switch . • Let us now consider a shared memory switch. • N × N switch: each of N output ports has a queue, and the total number of packets in all queues is bounded by B . • N input ports send in at most N packets per time slot (not necessarily one per input). • Non-preemptive (non-push-out) algorithms decide what to accept and cannot drop accepted packets. • Preemptive (push-out) algorithms can push out already accepted packets. 9

  13. push-out algorithms . • Push-out algorithms: there are N queues of total size B , each packet is labeled with its output port. • The queues can be assumed to be FIFO (doesn’t matter since packets are uniform). • A new packet comes in; if there is free space, we lose nothing by accepting. • If the buffer is congested (full), we have to decide: where do we push out from? • What policies can you propose? 10

  14. push-out algorithms . • Push-out algorithms: there are N queues of total size B , each packet is labeled with its output port. • The queues can be assumed to be FIFO (doesn’t matter since packets are uniform). • A new packet comes in; if there is free space, we lose nothing by accepting. • If the buffer is congested (full), we have to decide: where do we push out from? • What policies can you propose? • We’ll study LQD ( longest queue drop ): drop packet from the longest queue. • What is its competitive ratio? • We will demonstrate: • lower bound for LQD by a specific counterexample; • upper bound for LQD by matching (common technique); • but first... 10
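The LQD admission rule can be sketched in a few lines. This is a minimal sketch assuming uniform unit-size packets; the function name and the tie-handling choice (reject the new packet when its own queue would become the longest) are mine, not from the talk.

```python
# LQD (longest queue drop) admission for a shared-memory switch:
# on congestion, push out a packet from the currently longest queue.
def lqd_admit(queues, port, B):
    """queues: list of queue lengths; port: destination of the new packet;
    B: total shared buffer size. Returns True iff the packet is accepted."""
    if sum(queues) < B:               # free space: always accept
        queues[port] += 1
        return True
    longest = max(range(len(queues)), key=lambda i: queues[i])
    if queues[longest] > queues[port] + 1:
        queues[longest] -= 1          # push out from the longest queue
        queues[port] += 1             # and accept the new packet
        return True
    return False                      # the new packet's queue is (near-)longest

queues = [5, 1, 0]
lqd_admit(queues, port=1, B=6)        # buffer full: pushes out from queue 0
print(queues)  # [4, 2, 0]
```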

  15. general lower bound . • There is one more flavor of results: general lower bounds . • How can we prove a lower bound for all online algorithms? 11

  16. general lower bound . • There is one more flavor of results: general lower bounds . • How can we prove a lower bound for all online algorithms? • We have to show an adversarial bound, constructing a hard example for an online algorithm on the fly. 11

  17. general lower bound . • In this case: consider 2 active output ports with queues Q 1 and Q 2 (and N input ports). • For 2B/(N−2) time slots, (N−2)/2 packets arrive for each port. • At time 2B/(N−2), either Q 1 or Q 2 has ≤ B/2 packets ( B in total). • Next, for B time slots we send 1 packet per time slot to the other queue. • Now ALG can transmit at most 4B/(N−2) + 3B/2 packets, and OPT can do the full 4B/(N−2) + 2B . • The sequence can be repeated, getting the ratio of 4B/(3B+2) → 4/3. • Hence, we have shown a general lower bound of 4/3 for any online algorithm. • This is how many lower bounds for many settings work. • But there are more interesting cases too... 12
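A quick numeric check that the per-repetition ratio 4B/(3B+2) appearing on this slide indeed tends to 4/3 as the buffer size B grows:

```python
# The adversarial sequence above yields ratio 4B/(3B+2) per repetition;
# verify numerically that it approaches 4/3 from below as B grows.
def ratio(B):
    return 4 * B / (3 * B + 2)

for B in (10, 100, 10_000):
    print(B, ratio(B))
# 10    1.25
# 100   1.3245...
# 10000 1.3332...  -> approaches 4/3
```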

  18. lqd: lower bound . • Lower bound for LQD : √2 . • Proof by construction of a specific counterexample: • consider a 3A × 3A switch with B = L(A−1)/2 + A , L > A ; • output ports: • A idle: needed only to get enough inputs; • A overloaded: each receives 2 packets per time slot; • A active: L packets arrive over L/A time slots, then for L − L/A slots nothing happens, then again; A input ports can keep A active ports busy in this way. • OPT accepts one of two packets for each overloaded port and keeps all that comes for active ports (which amounts to exactly B ), processing 2A packets per time slot. • LQD accepts more for overloaded ports... 13

  19. lqd: lower bound . • LQD accepts more for overloaded ports: • the max queue length is ≈ constant, say xL ; • when a burst arrives to an active port, it stabilizes at xL ; • the total number of packets in active queues is xL + (xL − L/A) + (xL − 2L/A) + … ≈ x²AL/2 ; • in overloaded ports, ≈ xAL ; • and in total they should add up to B ≈ LA/2 , so we get x² + 2x ≈ 1 , x ≈ √2 − 1 ; • each active port over L time slots transmits ≈ L/A + xL ≈ xL , so the total throughput rate is ≈ (xL/L)·A + A = (x + 1)A ≈ √2·A , hence the bound. 14
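The algebra behind the bound can be checked numerically: x = √2 − 1 is the positive root of x² + 2x = 1 (which follows from x²AL/2 + xAL ≈ B ≈ LA/2 after dividing by AL/2), and the resulting throughput ratio OPT/LQD = 2A/((x + 1)A) equals √2.

```python
# Sanity-check the slide's algebra: x = sqrt(2) - 1 solves x^2 + 2x = 1,
# and OPT's rate 2A over LQD's rate (x + 1)A gives the sqrt(2) bound.
import math

x = math.sqrt(2) - 1
print(abs(x * x + 2 * x - 1) < 1e-12)            # True: root of x^2 + 2x = 1
print(abs(2 / (x + 1) - math.sqrt(2)) < 1e-12)   # True: ratio is sqrt(2)
```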

  20. lqd: upper bound . • Upper bound: 2 . • Proof: we construct a matching between the extra packets processed by OPT and LQD packets. • An extra packet is a packet transmitted by OPT when the corresponding LQD queue is idle. • A potential extra packet is a packet further in OPT’s queue than the entire corresponding LQD queue, pos_t^OPT(p) > L_t^LQD(q) . • Matching idea: 15

  21. lqd: upper bound . • So the idea is to match all potential extra packets as soon as they appear. • Matching routine: on each time slot t , (1) if during arrival a matched LQD packet p is preempted by p′ , replace p by p′ in the matching; (2) at end of arrivals, match each unmatched OPT packet p in queue q for which pos_t^OPT(p) > L_t^LQD(q) as follows: • if p arrived at this time slot and was accepted by both OPT and LQD , match p to itself; • else match p to an arbitrary unmatched packet in LQD buffer. • We have to prove that: • all extra packets are matched before they are transmitted; • the matching is feasible (there are always enough unmatched packets). • If so, we get a competitive ratio of 2 . 16

  22. lqd: upper bound . • Sequence of lemmas: (1) all extra packets are matched (if matching works at all); (2) if p ∈ OPT from queue q is matched to p′ ∈ LQD and p′ ≠ p , then LQD is congested and some packets have been dropped in queue q ; (3) if p ∈ OPT is matched to p′ ∈ LQD at time t , then pos_t^OPT(p) ≥ pos_t^LQD(p′) (check all cases); (4) matching works (if there are unmatched packets in OPT then there are at least as many unmatched packets in LQD). 17

  23. single queue with heterogeneous processing .
