1. Empirical Evaluation of Upstream Throughput in a DOCSIS Access Network
Swapnil Bhatia (with Radim Bartoš and Chaitanya Godsay)
Computer Networks Research Group, Department of Computer Science,
and The InterOperability Laboratory / Research Computing Center,
University of New Hampshire, Durham, NH 03824
MSAN 2005

2. Objectives of this Talk
◮ Report measurement results from our DOCSIS testbed
◮ Describe our approach to interpreting results
◮ Promote discussion of practical aspects of access networks
◮ Solicit feedback and ideas from the audience about each of the above
◮ Promote further collaborative study of access networks

3. Outline
◮ Introduction
  ⋄ DOCSIS architecture, protocol, enhancers (piggybacking, concatenation, fragmentation, etc.)
◮ Background of this study
  ⋄ InterOperability Lab., vendors and providers, complexity of the standard
◮ Overview of experiments
  ⋄ Testbed, variables, and data interpretation
◮ Results
  ⋄ Subset of conclusions
◮ Summary and discussion

4. DOCSIS Introduction (Source: DOCSIS 1.1 RFI Specification)
◮ DOCSIS — Data Over Cable Service Interface Specification
  ⋄ MAC protocol utilizing the existing CATV network
  ⋄ Developed by CableLabs (Louisville, CO)
  ⋄ Versions 1.0 (pre-1999), 1.1 (1999–), 2.0 (2004–05)

5. DOCSIS Introduction (contd.)
◮ Tree topology
  [Figure: CATV plant with a splitter/combiner connecting cable modems (CM) for users 1..n upstream to the CMTS (CM Termination System) and the WAN]
◮ Downstream vs. upstream
  ⋄ Separate frequencies
  ⋄ Broadcast, unicast (respectively)
  ⋄ TDMA upstream
◮ MAP: periodic downstream control message
  ⋄ Describes the upstream transmission schedule
  ⋄ Who: which CM transmits?
  ⋄ When: starting when and for how long?
  ⋄ What: what can it transmit?
◮ Different types of transmission windows
  ⋄ BW Request (BWR), BWR or Data, Short Data, Long Data, Maintenance
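To make the "who / when / what" role of the MAP concrete, here is a minimal sketch of the scheduling information a MAP carries. The field and type names are hypothetical illustrations, not the real DOCSIS encoding, which uses information elements and minislot arithmetic that are omitted here.

```python
# Sketch of a MAP's contents; names are illustrative, not the DOCSIS wire format.
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum

class WindowType(Enum):
    REQUEST = 1          # contention bandwidth-request slots
    REQUEST_OR_DATA = 2  # contention request-or-data slots
    SHORT_DATA = 3       # unicast short data grant
    LONG_DATA = 4        # unicast long data grant
    MAINTENANCE = 5      # station maintenance

@dataclass
class Grant:
    sid: int              # "who": service ID of the cable modem
    start_minislot: int   # "when": offset into the mapped interval
    num_minislots: int    # "when": how long the window lasts
    window: WindowType    # "what": kind of transmission allowed

@dataclass
class Map:
    upstream_channel_id: int
    alloc_start_time: int   # start of the upstream interval being described
    grants: list[Grant]     # the upstream transmission schedule
```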

6. DOCSIS Introduction (contd.) (Source: DOCSIS 1.1 RFI Specification)
[Figure reproduced from the DOCSIS 1.1 RFI Specification]

7. DOCSIS Introduction (contd.) (Source: DOCSIS 1.1 RFI Specification)
◮ Basic data transmission cycle
  ⋄ Wait for a contention-based BW Request window
  ⋄ Send a request (with retries)
  ⋄ Retry until a MAP is received
  ⋄ Wait for the start of the MAPped window
  ⋄ Send data
◮ Alternatives
  ⋄ Unicast data or request windows
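As a rough aid to reading the cycle above, here is a schematic, runnable sketch of the request-grant loop a cable modem goes through. All channel behaviour is faked with toy stubs (random contention losses, no real timing); none of these functions is a real DOCSIS API.

```python
import random

MAX_RETRIES = 16

def send_bw_request(nbytes: int) -> bool:
    # Stub: a contention bandwidth request collides and is lost 30% of the time.
    return random.random() > 0.3

def map_grants_our_request() -> bool:
    # Stub: assume the CMTS grants every request that survived contention.
    return True

def send_pdu(pdu: bytes, retries: int = MAX_RETRIES) -> bool:
    """One pass through the cycle: request, wait for MAP, transmit."""
    for _ in range(retries):
        # Wait for a contention-based BW Request window (elided), then request.
        if not send_bw_request(len(pdu)):
            continue                      # request lost: retry in a later window
        if map_grants_our_request():      # MAP received with a grant for us
            # Wait for the start of the MAPped window (elided), then send data.
            return True
    return False

print(send_pdu(b"x" * 512))   # True unless 16 consecutive requests collide
```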

8. DOCSIS Introduction (contd.): Performance Enhancers
◮ Piggybacking
  ⋄ Use part of a data transmission window to make new requests
◮ Concatenation
  ⋄ Transmit more than one data PDU in a single transmission window
◮ Fragmentation
  ⋄ Divide a large data PDU to fit into the current transmission window
◮ Header Suppression
  ⋄ Header of the data PDU suppressed at the CM, regenerated at the CMTS
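A toy illustration of what concatenation and fragmentation buy, assuming a hypothetical fixed per-window payload capacity in bytes; real DOCSIS sizes windows in minislots and adds per-PDU MAC overhead, both ignored in this sketch.

```python
def concatenate(pdus: list[bytes], capacity: int) -> list[list[bytes]]:
    """Pack as many queued PDUs as fit into each granted window (concatenation)."""
    windows, current, used = [], [], 0
    for pdu in pdus:
        if current and used + len(pdu) > capacity:
            windows.append(current)       # current window is full: start a new one
            current, used = [], 0
        current.append(pdu)
        used += len(pdu)
    if current:
        windows.append(current)
    return windows

def fragment(pdu: bytes, capacity: int) -> list[bytes]:
    """Split one oversized PDU across successive windows (fragmentation)."""
    return [pdu[i:i + capacity] for i in range(0, len(pdu), capacity)]

# Ten 64-byte PDUs share one window instead of needing ten request/grant cycles.
print(len(concatenate([b"x" * 64] * 10, capacity=1024)))   # -> 1
# A 3000-byte PDU is carried in three pieces when a window holds 1024 bytes.
print(len(fragment(b"y" * 3000, capacity=1024)))           # -> 3
```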


10. Outline
◮ Introduction
  ⋄ DOCSIS architecture, protocol, enhancers (piggybacking, concatenation, fragmentation, etc.)
◮ Background of this study
  ⋄ InterOperability Lab., CableLabs, complexity of the standard
◮ Overview of experiments
  ⋄ Testbed, variables, and data interpretation
◮ Results
  ⋄ Subset of conclusions
◮ Summary and discussion

11. Background of this Study
◮ Supported by the UNH InterOperability Laboratory
  ⋄ Largest standards compliance testing facility in the country
  ⋄ 19 consortia (including iSCSI, SATA, IPv6, WiMax, EFM, ...)
  ⋄ Industry-supported, industry-driven testing and applied research
◮ Conformance, interoperability, and performance
  ⋄ Previously verified, but in isolation
◮ Bottom line for vendors and service providers
  ⋄ Configuration design
  ⋄ Measurements with real devices
◮ Benefits to
  ⋄ Protocol designers
  ⋄ Equipment manufacturers
  ⋄ Service providers

12. Outline
◮ Introduction
  ⋄ DOCSIS architecture, protocol, enhancers (piggybacking, concatenation, fragmentation, etc.)
◮ Background of this study
  ⋄ InterOperability Lab., CableLabs, complexity of the standard
◮ Overview of experiments
  ⋄ Testbed, variables, and data interpretation
◮ Results
  ⋄ Subset of conclusions
◮ Summary and discussion

13. Overview of Experiments
◮ Goal
  ⋄ Characterize upstream performance to answer deployment design questions of the type:
  ⋄ When is it better to piggyback than to concatenate?
  ⋄ How much is the improvement from using concatenation?
  ⋄ Is it dependent on or independent of the CMTS scheduling algorithm?
◮ Independent variables
  ⋄ Upstream channel rate
  ⋄ Input packet length
  ⋄ Performance enhancer
  ⋄ CMTS
◮ Dependent variables
  ⋄ Throughput
  ⋄ Latency
[Figure: testbed — a traffic generator and analyzer connected over Ethernet to several CMs, which connect over coaxial cable to the CMTS carrying the upstream data; an RF analyzer monitors the channel]

14. Overview of Experiments
◮ Independent variables
◮ Upstream channel rate
  ⋄ {0.64, 1.28, 2.56, 5.12, 10.24} Mbps
◮ Packet length
  ⋄ {64, 128, 256, 512, 768, 1262, 1500} bytes
◮ Performance enhancer
  ⋄ {Concatenation, Piggybacking, Both, Neither} allowed
◮ CMTS
  ⋄ {Vendor-A, Vendor-B}
◮ Load
  ⋄ Constant load of 8 Mbps (saturation)
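To make the resulting configuration space concrete, a small sketch that enumerates it with the values listed on this slide; load is held constant at 8 Mbps and so is not varied.

```python
from itertools import product

rates     = [0.64, 1.28, 2.56, 5.12, 10.24]        # Mbps
lengths   = [64, 128, 256, 512, 768, 1262, 1500]   # bytes
enhancers = ["concatenation", "piggybacking", "both", "neither"]
cmtses    = ["Vendor-A", "Vendor-B"]

# Each tuple is one <rate, length, enhancer, cmts> configuration.
configs = list(product(rates, lengths, enhancers, cmtses))
print(len(configs))   # 5 * 7 * 4 * 2 = 280 configurations
```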

15. Overview of Experiments
◮ Define a configuration as a tuple
  ⋄ ⟨rate, length, enhancer, cmts⟩
◮ Define a transition as a doubleton of configurations
  ⋄ {⟨v_1, v_2, v_3, v_4⟩, ⟨u_1, u_2, u_3, u_4⟩} such that ∃! i (1 ≤ i ≤ 4): v_i ≠ u_i
◮ Consider a k-tuple of attributes taking n_1, ..., n_k values respectively
◮ The total number of transitions is
  N = \sum_{i=1}^{k} \frac{n_i (n_i - 1)}{2} \prod_{j \neq i} n_j = \left( \prod_{i=1}^{k} n_i \right) \cdot \sum_{i=1}^{k} \frac{n_i - 1}{2}
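A quick sanity check of the transition-count formula, using small hypothetical cardinalities rather than the study's: a transition is an unordered pair of configurations that differ in exactly one attribute, and the brute-force count should match the closed form.

```python
from itertools import combinations, product
from math import prod

n = [3, 4, 2]   # hypothetical numbers of values per attribute (k = 3)

# Closed form from the slide: N = (prod_i n_i) * sum_i (n_i - 1) / 2
closed_form = prod(n) * sum(ni - 1 for ni in n) // 2

# Brute force: count unordered pairs differing in exactly one coordinate.
configs = list(product(*(range(ni) for ni in n)))
brute_force = sum(
    1
    for a, b in combinations(configs, 2)
    if sum(x != y for x, y in zip(a, b)) == 1
)

print(closed_form, brute_force)   # both are 72
```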

16. Overview of Experiments
◮ 2400 cases
  ⋄ An experiment for each transition
  ⋄ Capture the effect of a single change
  ⋄ 25 runs per experiment
◮ Decide whether a change improves or worsens performance
  ⋄ Statistically robust, unbiased data interpretation
  ⋄ Between and across CMTSs

17. Overview of Experiments
◮ Wilcoxon Signed Rank Sum Test (WSRS)
  ⋄ A popular hypothesis test independent of the distribution of the data
  ⋄ Calculates the probability of the median of the signed ranks being zero
  ⋄ Null hypothesis (NH): no change in throughput due to a transition (T_original − T_changed = 0)
  ⋄ The test provides the probability P of the NH being true
  ⋄ Fix the desired significance level α = 0.05
  ⋄ If P ≤ α, reject the NH (T_original − T_changed ≠ 0)
  ⋄ i.e., the transition affects throughput
  ⋄ Check the one-sided alternatives (T_original − T_changed > 0 or T_original − T_changed < 0?)
◮ Actual α = 0.05 / 2400 (Bonferroni correction)
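A sketch of the per-transition test described above, using SciPy's Wilcoxon signed-rank implementation. The 25 "runs" are synthetic illustrative numbers, not measurements from the study; the Bonferroni-corrected threshold matches the slide.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
t_original = 1.90 + 0.02 * rng.standard_normal(25)   # throughput, 25 runs, before the change
t_changed  = 2.40 + 0.02 * rng.standard_normal(25)   # throughput, 25 runs, after the change

alpha = 0.05 / 2400          # Bonferroni-corrected significance level
diff = t_original - t_changed

# Null hypothesis: the median of (T_original - T_changed) is zero.
stat, p = wilcoxon(diff)
if p <= alpha:
    # The transition affects throughput; check a one-sided alternative
    # to find the direction of the change.
    _, p_less = wilcoxon(diff, alternative="less")    # T_original < T_changed
    direction = "improves" if p_less <= alpha else "worsens"
    print(f"transition {direction} throughput (two-sided p = {p:.2g})")
else:
    print("no statistically significant change")
```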

18. Outline
◮ Introduction
  ⋄ DOCSIS architecture, protocol, enhancers (piggybacking, concatenation, fragmentation, etc.)
◮ Background of this study
  ⋄ InterOperability Lab., CableLabs, complexity of the standard
◮ Overview of experiments
  ⋄ Testbed, variables, and data interpretation
◮ Results
  ⋄ Subset of conclusions
◮ Summary and discussion

19. Results
◮ Per CMTS (99% confidence)
  ⋄ Maximum throughput < 3 Mbps per CM
  ⋄ Enhancers effective for smaller packets
[Figures: throughput (Mbps) vs. packet length (bytes) for channel rates of 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps; (a) no enhancers, (b) both enhancers]

20. Results
◮ Per CMTS (99% confidence)
  ⋄ Concatenation very effective for smaller packets
  ⋄ Piggybacking largely ineffective
  ⋄ Need more CMs to see an effect
[Figures: throughput (Mbps) vs. packet length (bytes) for channel rates of 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps; (c) concatenation only, (d) piggybacking only]

21. Results
◮ When is piggybacking useful?
  ⋄ At larger packet lengths on the 1.28 Mbps channel
  ⋄ Fewer request windows due to large packets
[Figures: normalized throughput vs. packet length, no enhancers vs. piggybacking, on the 1.28, 2.56, and 5.12 Mbps channels]
