  1. Measuring and Understanding IPTV Networks
     Colin Perkins – http://csperkins.org/
     Martin Ellis – http://www.dcs.gla.ac.uk/~ellis/

  2. Talk Outline
     • Research goals
     • Measuring and monitoring IPTV systems
     • Measurement architecture and initial data
     • Implications for IPTV systems
     • Future directions

  3. Research Goals
     • Measure and understand the impairments affecting IPTV network traffic
       – Packet loss/timing; media-aware if possible
       – Intra- and inter-domain flows
     • Improve techniques for on-line error repair and off-line network troubleshooting
       – Inform choice of FEC, retransmission, etc.
       – Consider network tomography for management
     [Joint with Jörg Ott’s group @ TKK]

  4. IPTV System Model – Interdomain
     [Diagram: content provider feeding content distributors A and B via transit providers A and B, with a source (S) and receivers (R)]
     • Expected future evolution; deployed IPTV systems are a restricted subset – need to understand end-to-end performance to evolve the system
     • Monitoring – end-to-end and at domain borders
     • Repair – at the edges of the content distributor network
     • Feedback aggregation – inter- and intra-domain

  5. IPTV System Model – Intradomain
     [Diagram: sources (S) behind a core network, access networks, and home networks containing receivers (R)]
     • Expect a largely tree-structured access network, more well-connected in the core
     • Much access/edge network topology is hidden below the IP layer, but will influence its performance
     • Long-term goal: infer the edge topology using network tomography, to understand and locate problems

  6. Understanding System Performance
     • Only limited IPTV measurements available
       – Most studies either between well-connected sites or using TCP for media transport
       – Little data on UDP-based IPTV performance
     • Interdomain: from well-connected servers to residential hosts, to understand the end-to-end path
     • Intradomain: to understand behaviour of edge networks, and evaluate the effectiveness of network tomography to diagnose edge problems
     • Beginning to collect data – early interdomain results today…

  7. Interdomain Measurement Architecture
     [Diagram: server (curtis.dcs.gla.ac.uk) on JANET, reaching ADSL and cable clients via ISP1 and ISP2]
     • Server well-connected on the public Internet
     • Clients on residential connections
     • Inter-domain path from server to client
       – ~15 hops to UK ISPs; choke-point at Telehouse in London
     • Simulates an interdomain IPTV scenario

  8. Measurement Architecture – Limitations
     [Diagram: server (curtis.dcs.gla.ac.uk) on JANET, reaching ADSL and cable clients via ISP1 and ISP2]
     • Server on the public network
       – Conceptually acts as a STUN server for NAT traversal
       – Will likely need to implement ICE for peer-to-peer scenarios
     • Uncontrolled interdomain path
       – Difficult to separate the effect of the edge from problems in the core
       – Will measurements to other well-connected hosts let us infer home network performance?

  9. Measurement Platform
     • Deploy into home networks
       – ADSL – generally 8 Mbps downstream
       – Cable modem
     • Expect a mix of users
       – Technical: own Linux/Unix system at home, can run the measurement tool; but an uncontrolled measurement environment, with undesirable variation
       – Non-technical: require an unobtrusive, low-maintenance measurement box
     • Soekris net5501 single-board computer with 120 GB disk, running FreeBSD 7
       – <10 W, silent, size of a book

  10. Measurement Using Test Streams
      • Aim: generate test traffic to (roughly) match IPTV flows (a sender sketch follows below)
      • Measure loss/jitter characteristics
      • Looking to move to real-world streaming IPTV over time
      • Input to simulation of repair mechanisms and topology inference
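
      A minimal sketch of such a test-stream sender, assuming a hypothetical
      1316-byte UDP payload carrying a sequence number and send timestamp so
      the receiver can measure loss, reordering and jitter (this is not the
      authors' actual tool):

      import socket
      import struct
      import time

      def send_cbr_stream(dest, rate_bps=1_000_000, pkt_size=1316, duration_s=60):
          """Send a constant-bit-rate UDP test stream to `dest`.

          Each packet starts with a 32-bit sequence number and a 64-bit send
          timestamp (nanoseconds); the rest is zero padding up to pkt_size.
          """
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          interval = (pkt_size * 8) / rate_bps      # seconds between packets
          padding = b"\x00" * (pkt_size - 12)       # 12-byte header
          start = time.time()
          seq = 0
          while time.time() - start < duration_s:
              header = struct.pack("!IQ", seq, time.time_ns())
              sock.sendto(header + padding, dest)
              seq += 1
              # simple pacing: sleep until the next nominal send time
              time.sleep(max(0.0, start + seq * interval - time.time()))

      if __name__ == "__main__":
          send_cbr_stream(("192.0.2.1", 5004))      # placeholder receiver address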

  11. Measurement Plan
      • Three phases:
        1. Initial experiment: CBR flows, manually triggered
        2. Simulated VoIP and IPTV traffic, manual control
        3. Simulated VoIP and IPTV traffic, automated tool
      • Starting phase 2

  12. Initial Measurements
      Initial trace duration: 1–7 November 2008, ~16 million packets
      ADSL:
      • IPTV CBR 1 Mbps – hourly at :50 – 1 min
      • IPTV CBR 2 Mbps – 03:15, 10:15, 15:15, 20:15 – 10 mins
      • IPTV CBR 4 Mbps – 03:35, 10:35, 15:35, 20:35 – 10 mins
      • VoIP CBR 64 kbps – hourly at :10 – 1 min
      Cable modem:
      • IPTV CBR 1 Mbps – hourly at :30 – 1 min
      • IPTV CBR 2 Mbps – 04:15, 11:15, 16:15, 21:15 – 10 mins
      • IPTV CBR 4 Mbps – not supported by the access link – 10 mins
      • VoIP CBR 64 kbps – hourly at :55 – 1 min

  13. Packet Loss – Loss Rates
      [Plots: packet loss rate (percent) over time for the ADSL 1 Mbps, ADSL 2 Mbps and Cable 1 Mbps flows, and separately for the ADSL 4 Mbps flow, which shows far higher loss]
      • Non-negligible packet loss on the ADSL network, unaffected by data rate below some threshold (loss-rate calculation sketched below)
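
      A sketch of how a per-interval loss rate can be derived from the received
      sequence numbers (assuming a sender that increments the sequence number
      by one per packet, and ignoring wrap-around):

      def loss_rate_percent(received_seqs):
          """Loss rate for one measurement interval, as a percentage."""
          if not received_seqs:
              return 100.0
          expected = max(received_seqs) - min(received_seqs) + 1
          received = len(set(received_seqs))        # discard duplicates
          return 100.0 * (expected - received) / expected

      print(loss_rate_percent([0, 1, 3, 4, 7]))     # 37.5: 3 of 8 packets lost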

  14. Packet Loss – Loss Run Lengths
      [Plot: frequency (log scale) against loss burst duration in packets, for ADSL 1/2/4 Mbps and Cable 1/2 Mbps flows]
      • High-rate flows: linear on the log plot → geometric distribution
      • Lower-rate flows show some evidence of a longer tail
      • Hypothesis: uniform loss probability dependent on data rate, with background rate-independent bursty loss?
      • No clear distinction between ADSL and cable

  15. Packet Loss – Good Run Lengths
      [Plot: frequency against good run duration in packets (both axes log scale), for ADSL 1/2/4 Mbps and Cable 1/2 Mbps flows]
      • Most packets are in long good runs, but most good runs are short (run-length extraction sketched below)
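
      A sketch of how loss-burst and good-run lengths can be extracted from a
      trace, given a per-sequence-number list of booleans marking whether each
      packet arrived (an illustration, not the authors' analysis code):

      from collections import Counter
      from itertools import groupby

      def run_lengths(arrived):
          """Split a True/False arrival sequence into good runs and loss bursts."""
          good_runs, loss_bursts = [], []
          for ok, group in groupby(arrived):
              length = sum(1 for _ in group)
              (good_runs if ok else loss_bursts).append(length)
          return good_runs, loss_bursts

      goods, losses = run_lengths([True, True, False, False, True, False, True])
      print(Counter(losses))    # Counter({2: 1, 1: 1}) – loss burst durations
      print(Counter(goods))     # Counter({1: 2, 2: 1}) – good run durations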

  16. Packet Reordering
      • Packet reordering infrequent (detection sketched below)
        – 4 packets reordered out of ~16 million sent
        – Worst was out-of-sequence (delayed) by 4 packets
        – 2 flows affected
      • Matches expectations: reordering due to route change, or misbehaving load balancing at high rates
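
      A sketch of counting reordered packets from the arrival order, in the
      spirit of RFC 4737 (a packet is reordered if its sequence number is lower
      than one already received); wrap-around is ignored for simplicity:

      def count_reordered(arrival_order_seqs):
          """Count packets that arrive after a higher-numbered packet."""
          reordered = 0
          highest_seen = -1
          for seq in arrival_order_seqs:
              if seq < highest_seen:
                  reordered += 1
              else:
                  highest_seen = seq
          return reordered

      print(count_reordered([0, 1, 2, 4, 3, 5]))    # 1: packet 3 arrived after 4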

  17. ADSL Inter-arrival Times
      [Histograms: binned inter-arrival times (milliseconds) for 1 Mbps CBR flows, 4 Nov 2008 at 06:50 and at 13:50]
      • Traffic dispersion pattern not unexpected
      • Highly dependent on time of day (binning sketched below)
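
      A sketch of the per-flow processing behind these histograms: compute the
      gaps between consecutive receive timestamps and bin them (the 1 ms bin
      width and packet size are assumptions, not taken from the slides):

      from collections import Counter

      def interarrival_histogram(arrival_times_s, bin_ms=1.0):
          """Histogram of consecutive inter-arrival gaps, binned in milliseconds."""
          gaps_ms = [(b - a) * 1000.0
                     for a, b in zip(arrival_times_s, arrival_times_s[1:])]
          return Counter(int(gap // bin_ms) * bin_ms for gap in gaps_ms)

      # For a 1 Mbps CBR flow of 1316-byte packets (as assumed earlier), the
      # nominal gap is ~10.5 ms, so an unloaded path concentrates the histogram
      # near the 10 ms bin; queueing at busy times spreads it out.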

  18. ADSL Inter-arrival Times (24 Hour Trace)
      [Plot: binned inter-arrival time (milliseconds) over the day, 4 November 2008]

  19. ADSL Inter-arrival Times (1 Week Trace)
      [Plot: binned inter-arrival time (milliseconds) over the week, 1–8 November 2008]

  20. Cable Inter-arrival Times
      [Histograms: binned inter-arrival times (milliseconds), 4 Nov 2008 at 04:30 and at 20:50]
      • Slightly worse dispersion than ADSL at busy times, much better at quiet times

  21. Cable Inter-arrival Times (24 Hour Trace)
      [Plot: binned inter-arrival time (milliseconds) over the day, 4 November 2008]
      • Temporal profile differs from ADSL: sharper distinction between unloaded and busy times; more residential users?

  22. Cable Inter-arrival Times (1 Week Trace)
      [Plot: binned inter-arrival time (milliseconds) over the week, 1–8 November 2008]

  23. Summary of Measurements
      • Despite the uncontrolled inter-domain path, we see clear distinctions between edge networks
      • Analysis just starting…
      • Very early results: planning to conduct more measurements
        – Range of different ISPs
        – Multiple users in the same ISP

  24. Implications for Error Concealment
      • If these results are typical…
        – Most loss bursts are short (2–3 packets), but many good runs are also short → small amounts of FEC, but not over adjacent packets (interleaving sketched below)
        – Longer bursts infrequent → not worth the overhead of FEC to protect against these; use reactive repair
      • Need more data, from flows reflecting real IPTV traffic, to confirm repair effectiveness
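
      A toy illustration of "small amounts of FEC, but not over adjacent
      packets": one XOR parity packet per interleaver column, so a burst of up
      to `depth` consecutive losses erases at most one packet per parity group
      and remains recoverable (a hypothetical sketch, not a deployed FEC scheme):

      def interleaved_xor_parity(packets, depth=4):
          """Compute one XOR parity packet over every `depth`-th source packet.

          Assumes equal-length packets and len(packets) a multiple of `depth`.
          """
          parities = []
          for column in range(depth):
              group = packets[column::depth]        # non-adjacent source packets
              parity = bytes(len(group[0]))
              for pkt in group:
                  parity = bytes(a ^ b for a, b in zip(parity, pkt))
              parities.append(parity)
          return parities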

  25. Implications for Network Troubleshooting
      • Eventual aim: network tomography to locate problem areas in the access network
      • Collecting more data, to understand the correlation between receivers in a single ISP (see the sketch below)
        – Expect to be able to trace 2 or 3 receivers in deployed networks without ISP cooperation
        – Not enough to confirm the use of tomography, but hopefully sufficient to direct future measurements
      • RTCP XR summary reports would be a good proxy for full packet traces, if available
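
      A toy illustration of the correlation idea: if per-interval loss rates at
      two receivers behind the same ISP move together, the loss is more likely
      to come from a shared upstream link than from either access line. This is
      a sketch of the intuition only, not a tomography method:

      from statistics import correlation   # Python 3.10+

      def shared_loss_indicator(loss_a, loss_b):
          """Pearson correlation of time-aligned per-interval loss rates."""
          return correlation(loss_a, loss_b)

      # e.g. shared_loss_indicator([0.1, 2.3, 0.0, 5.1], [0.2, 1.9, 0.1, 4.8])
      # returns a value close to 1, hinting at a shared cause upstream.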

  26. Future Work
      • Debugging and deploying the measurement tool across a range of ISPs
        – Interest and potential collaboration with other groups for wider data collection
        – Will make traces available once the infrastructure stabilises
      • Analysis and understanding of performance
      • Application to repair and tomography tasks
