Putting the "Ultrain UltraGrid: Full rate Uncompressed HDTV - - PowerPoint PPT Presentation

putting the ultra in ultragrid full rate uncompressed
SMART_READER_LITE
LIVE PREVIEW

Putting the "Ultrain UltraGrid: Full rate Uncompressed HDTV - - PowerPoint PPT Presentation

Putting the "Ultrain UltraGrid: Full rate Uncompressed HDTV Video Conferencing Ladan Gharai ....University of Southern California/ISI Colin Perkins ........................... University of Glasgow Alvaro Saurin


SLIDE 1

Putting the "Ultra”in UltraGrid: Full rate Uncompressed HDTV Video Conferencing

Ladan Gharai, University of Southern California/ISI
Colin Perkins, University of Glasgow
Alvaro Saurin, University of Glasgow

SLIDE 2

Outline

 The UltraGrid System
 Beyond 1 Gbps
 Experimentation
   Lab Experiments
   Network Experiments
 Summary

SLIDE 3

The UltraGrid System

 UltraGrid is an ultra-high-quality video conferencing tool:

   Supports uncompressed High Definition TV video formats
   Video codecs: Digital Video (DV)
   Incurs minimal latency
   Adapts to network conditions

 Not solely a video conferencing tool:

   An HDTV distribution system for editing purposes
   A general-purpose SMPTE 292M-over-IP system
   High-definition visualization and remote steering applications

SLIDE 4

Approach

 Build a system that can be replicated by other HDTV enthusiasts:

   Use hardware that is commercially available
   All audio and video codecs are open source
   Use standard protocols:
     Real-time Transport Protocol (RTP); see the header sketch below
     Custom payload formats and profiles where necessary

 Software available for download
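
As a concrete reference for the RTP building block, here is a minimal sketch of the fixed RTP header from RFC 3550 (Section 5.1). The bitfield layout shown assumes a little-endian host, as in the RFC's appendix; portable code would use explicit shifts and masks.

    /* Fixed RTP header, RFC 3550 Section 5.1 (little-endian bitfield layout). */
    #include <stdint.h>

    typedef struct {
        unsigned cc:4;        /* CSRC count */
        unsigned x:1;         /* header extension flag */
        unsigned p:1;         /* padding flag */
        unsigned version:2;   /* RTP version, always 2 */
        unsigned pt:7;        /* payload type */
        unsigned m:1;         /* marker bit, e.g. last packet of a video frame */
        uint16_t seq;         /* sequence number (network byte order) */
        uint32_t ts;          /* media timestamp, 90 kHz clock for video */
        uint32_t ssrc;        /* synchronization source identifier */
    } rtp_hdr_t;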

SLIDE 5

Outline

 The UltraGrid System
 Beyond 1 Gbps
 Experimentation
   Lab Experiments
   Network Experiments
 Summary

SLIDE 6

Beyond 1 Gbps

 We have previously demonstrated UltraGrid at ~1 Gbps:

   Supercomputing 2002
   Video is downsampled at the sender:
     Color is downsampled from 10 bits to 8 bits
     Auxiliary data removed

 Why the <1 Gbps limit?

   The bottleneck is the Gigabit Ethernet NIC

 Solutions (see the worked data rates below):

   1. Two Gigabit Ethernet NICs
   2. A 10 Gigabit Ethernet NIC
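
A back-of-the-envelope check of these rates (a sketch, assuming 1920x1080 active video at 30 full frames/s with 4:2:2 sampling; blanking and packet headers ignored): 8-bit video fits under Gigabit Ethernet, full 10-bit video does not.

    /* Sketch: approximate active-video data rates for 1080-line, 4:2:2 video. */
    #include <stdio.h>

    int main(void) {
        const double pixels = 1920.0 * 1080.0;
        const double fps = 30.0;              /* 1080i60: 30 full frames/s */
        const double samples_per_pixel = 2.0; /* 4:2:2: Y plus alternating Cb/Cr */
        double r10 = pixels * samples_per_pixel * 10.0 * fps; /* 10-bit, bits/s */
        double r8  = pixels * samples_per_pixel *  8.0 * fps; /*  8-bit, bits/s */
        printf("10-bit: %.0f Mbps\n", r10 / 1e6); /* ~1244 Mbps: needs >1 GigE */
        printf(" 8-bit: %.0f Mbps\n", r8  / 1e6); /*  ~995 Mbps: fits in 1 GigE */
        return 0;
    }
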
SLIDE 7

The (new) UltraGrid node

 10 Gigabit Ethernet NIC:

   T110 10GbE from Chelsio: http://www.chelsio.com/
   133 MHz PCI-X

SLIDE 8

The (new) UltraGrid node

 10 Gigabit Ethernet NIC:

   T110 10GbE from Chelsio: http://www.chelsio.com/
   133 MHz PCI-X

 HDTV capture card:

 Centaurus HDTV capture card from www.dvs.de

   Same SDK as HDstation
   100 MHz PCI-X

SLIDE 9

The (new) UltraGrid node

 10 Gigabit Ethernet NIC:

   T110 10GbE from Chelsio: http://www.chelsio.com/
   133 MHz PCI-X

 HDTV capture card:

 Centaurus HDTV capture card from www.dvs.de

   Same SDK as HDstation
   100 MHz PCI-X

 Dual Xeon EM64T Power Station:

   SuperMicro motherboard
   5 programmable PCI-X slots (bus bandwidth estimate below)
   32-bit Fedora Core 3, Linux 2.6 kernel
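
As a rough bus-bandwidth sanity check (assuming the usual 64-bit PCI-X slot width, which the slide does not state): 64 bits × 133 MHz ≈ 8.5 Gbit/s and 64 bits × 100 MHz ≈ 6.4 Gbit/s of theoretical bandwidth, so both the NIC's and the capture card's buses comfortably exceed the ~1.2 Gbps video rate.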

SLIDE 10

UltraGrid: Architectural Overview

An open and flexible architecture with “plug-in” support for codecs and transport protocols:

Codec support:

   DV, RFC 3189
   M-JPEG, RFC 2435
   H.261, RFC 2032

Transport protocols:

   RTP/RTCP, RFC 3550

Congestion control:

   TCP Friendly Rate Control (TFRC), RFC 3448 (rate-equation sketch after the diagram)

[Diagram: UltraGrid node — grabber → encoder → packetization → transport + congestion control on the send path; playout buffer → decoder → display on the receive path; audio handled by rat]
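
The TFRC sender computes its allowed rate from the TCP throughput equation of RFC 3448 (Section 3.1). A minimal sketch, with parameter names following the RFC; the example numbers are illustrative only, not measurements from these experiments:

    /* TFRC: X = s / (R*sqrt(2bp/3) + t_RTO*(3*sqrt(3bp/8))*p*(1+32p^2)) */
    #include <math.h>
    #include <stdio.h>

    /* s: packet size (bytes), R: RTT (s), p: loss event rate,
       t_RTO: retransmit timeout (s; RFC 3448 suggests 4R), b: packets per ACK */
    double tfrc_rate(double s, double R, double p, double t_RTO, double b) {
        return s / (R * sqrt(2.0 * b * p / 3.0)
                    + t_RTO * (3.0 * sqrt(3.0 * b * p / 8.0))
                      * p * (1.0 + 32.0 * p * p));
    }

    int main(void) {
        /* Illustrative only: 8800-byte packets, 570 us RTT, 0.01% loss */
        double X = tfrc_rate(8800.0, 570e-6, 1e-4, 4.0 * 570e-6, 1.0);
        printf("TFRC-allowed rate: %.1f Gbps\n", X * 8.0 / 1e9);
        return 0;
    }

At sub-millisecond RTTs and very low loss the equation permits rates far above the 1.2 Gbps the video needs; on a lossy best-effort path it forces the sender down to a network-friendly rate.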

SLIDE 11

UltraGrid: Architectural Overview

[Diagram: sender — HDTV/DV camera → frame grabber → grabber thread → video codec → RTP framing → congestion control → RTP sender → send buffer → network; receiver — RTP receiver + RTCP → playout buffer → RTP framing → video codec → colour conversion → display, across receive and display threads; a thread-decoupling sketch follows]
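
A minimal sketch of how the grabber and transmit threads can be decoupled by a bounded frame queue, so capture never blocks on network sends. The structure is inferred from the diagram above, not taken from the UltraGrid sources:

    /* Bounded frame queue between grabber thread and transmit thread. */
    #include <pthread.h>

    #define QLEN 4
    typedef struct {
        void *frames[QLEN];
        int head, tail, count;
        pthread_mutex_t lock;       /* init with PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t nonempty;    /* init with PTHREAD_COND_INITIALIZER  */
    } frame_q;

    void q_put(frame_q *q, void *f) {       /* called by the grabber thread */
        pthread_mutex_lock(&q->lock);
        if (q->count < QLEN) {              /* drop the frame if sender lags */
            q->frames[q->tail] = f;
            q->tail = (q->tail + 1) % QLEN;
            q->count++;
            pthread_cond_signal(&q->nonempty);
        }
        pthread_mutex_unlock(&q->lock);
    }

    void *q_get(frame_q *q) {               /* called by the transmit thread */
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->nonempty, &q->lock);
        void *f = q->frames[q->head];
        q->head = (q->head + 1) % QLEN;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        return f;
    }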

SLIDE 12

Software modifications

Both capture cards operate in 10-bit or 8-bit mode

Updated the code to operate in 10-bit mode:

   Packetization must operate in 10-bit mode (packing sketch below)
   Packetization is based on draft-ietf-avt-uncomp-video-06.txt

 Supports a range of formats, including standard and high definition video:

   Interlaced and progressive
   RGB, RGBA, BGR, BGRA, YUV
   Various color sub-samplings: 4:4:4, 4:2:2, 4:2:0, 4:1:1
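
For 10-bit 4:2:2 video this payload format packs four 10-bit samples (Cb0 Y0 Cr0 Y1, i.e. two pixels) into one 5-octet group. A sketch of that packing as I read the draft's byte layout; the draft (later published as RFC 4175) is normative:

    /* Pack four 10-bit samples (in the low 10 bits of each uint16_t)
       into a 5-octet pgroup, most significant bits first. */
    #include <stdint.h>

    void pack_pgroup_10bit(const uint16_t s[4], uint8_t out[5]) {
        out[0] =  s[0] >> 2;                        /* s0 bits 9..2      */
        out[1] = (s[0] << 6 | s[1] >> 4) & 0xff;    /* s0 1..0 | s1 9..4 */
        out[2] = (s[1] << 4 | s[2] >> 6) & 0xff;    /* s1 3..0 | s2 9..6 */
        out[3] = (s[2] << 2 | s[3] >> 8) & 0xff;    /* s2 5..0 | s3 9..8 */
        out[4] =  s[3] & 0xff;                      /* s3 7..0           */
    }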

SLIDE 13

Outline

 The UltraGrid System
 Beyond 1 Gbps
 Experimentation
   Lab Experiments
   Network Experiments
 Summary

SLIDE 14

Experimentation

1. Lab tests: back-to-back

2. Network tests: the DRAGON metropolitan area network

Measured:

 Throughput
 Packet loss and reordering
 Frame inter-display times
 Packet inter-arrival times at sender and receiver, on a subset of 50,000 packets (see the sequence-number sketch below)
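
The slides do not show the measurement code; one plausible way to obtain the loss and reordering figures is to classify each arriving packet by its RTP sequence-number delta, using wrap-safe 16-bit arithmetic:

    /* Classify each arriving RTP packet by its sequence-number delta. */
    #include <stdint.h>

    /* Wrap-safe signed delta between consecutive 16-bit sequence numbers:
       1 = in order, >1 = gap (possible loss), <=0 = reordered or duplicate. */
    int16_t seq_delta(uint16_t prev, uint16_t cur) {
        return (int16_t)(cur - prev);
    }

Counting these deltas over a run gives the loss and reordering totals; inter-arrival times come from timestamping each packet on send and on receipt.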

SLIDE 15

Lab Tests

[Diagram: LDK-6000 camera → SMPTE 292M → Centaurus capture card + 10 GigE NIC (UltraGrid sender) → 1.485 Gbps RTP/UDP/IP → 10 GigE NIC + Centaurus (UltraGrid receiver) → SMPTE 292M → PDP-502MX display]

SLIDE 16

Lab Tests

[Diagram: LDK-6000 camera → SMPTE 292M → Centaurus capture card + 10 GigE NIC (UltraGrid sender) → 1.485 Gbps RTP/UDP/IP → 10 GigE NIC + Centaurus (UltraGrid receiver) → SMPTE 292M → PDP-502MX display]

Back-to-back tests:

 Duration: 10 min
 RTT: 70 µs
 MTU: 8800 bytes

SLIDE 17

Lab Tests

[Diagram: LDK-6000 camera → SMPTE 292M → Centaurus capture card + 10 GigE NIC (UltraGrid sender) → 1.485 Gbps RTP/UDP/IP → 10 GigE NIC + Centaurus (UltraGrid receiver) → SMPTE 292M → PDP-502MX display]

Back-to-back tests:

 Duration: 10 min
 RTT: 70 µs
 MTU: 8800 bytes

Results:

 No loss or reordering
 1198.03 Mbps throughput
 Total of 10,178,098 packets sent and received (sanity check below)
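
As a quick consistency check on these figures: 1198.03 Mbps × 600 s ≈ 8.99 × 10^10 bytes, which over 10,178,098 packets is roughly 8,830 bytes per packet, in line with the 8800-byte MTU.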

SLIDE 18

Inter-packet Intervals: Sender vs. Receiver

SLIDE 19

Inter-packet Intervals: Sender vs. Receiver

SLIDE 20

Frame inter-display times

[Plot: frame inter-display times, clustered around the nominal 1/60 s]

At 60 fps, frames are displayed with a nominal inter-display time of 16,666 µs (1/60 s)

The Linux scheduler interferes with this timing in some instances:

 This is an OS scheduling issue
 One solution is to change the granularity of the scheduler to 1 ms (a complementary mitigation is sketched below)
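
Beyond rebuilding the kernel with a 1 ms (1000 Hz) tick, a common complementary mitigation (not from the slides) is to pace the display loop against absolute deadlines, so scheduling jitter does not accumulate from frame to frame:

    /* Pace a 60 fps display loop with absolute deadlines (Linux/POSIX). */
    #define _POSIX_C_SOURCE 200112L
    #include <time.h>

    void display_loop(void (*show_frame)(void)) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            show_frame();
            next.tv_nsec += 16666667;          /* ~1/60 s per frame */
            if (next.tv_nsec >= 1000000000L) { /* carry into seconds */
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* Sleep until the absolute deadline, not for a relative interval,
               so a late wakeup does not delay every subsequent frame. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }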

SLIDE 21

Network Tests

Network tests were conducted over a metropolitan network in the Washington D.C. area, known as the DRAGON network.

DRAGON is a GMPLS-based multiservice WDM network and provides transport at multiple network layers, including layer 3, layer 2 and below.

DRAGON allows the dynamic creation of “Application Specific Topologies” in direct response to application requirements.

Our UltraGrid testing was conducted over the DRAGON metropolitan Ethernet service connecting:

 the University of Southern California Information Sciences Institute (USC/ISI) East, in Arlington, Virginia; and
 the University of Maryland (UMD) Mid-Atlantic Crossroads (MAX), in College Park, Maryland.

SLIDE 22

UltraGrid over DRAGON Network

[Map: the DRAGON network — optical switching elements and optical edge devices at CLPK, ARLG, MCLN and DCNE, linking UMD MAX (College Park), USC/ISI East (Arlington), the NCSA ACCESS facility, Goddard Space Flight Center (GSFC) and MIT Haystack Observatory (HAYS), with connections to ATDNet and HOPI/NLR]


SLIDE 24

UltraGrid over DRAGON Network

[Map: the DRAGON network — optical switching elements and optical edge devices at CLPK, ARLG, MCLN and DCNE, linking UMD MAX (College Park), USC/ISI East (Arlington), the NCSA ACCESS facility, Goddard Space Flight Center (GSFC) and MIT Haystack Observatory (HAYS), with connections to ATDNet and HOPI/NLR]

Network tests:

 Duration: 10 min
 RTT: 570 µs
 MTU: 8800 bytes

Results:

 No loss or reordering
 1198.03 Mbps throughput
 Total of 10,178,119 packets sent and received

SLIDE 25

Inter-packet Intervals: Sender vs. Receiver

SLIDE 26

Inter-packet Intervals: Sender vs. Receiver

SLIDE 27

Frame inter-display times

[Plot: frame inter-display times over the network path, again clustered around the nominal 1/60 s]

In the network tests we see the same interference from the Linux scheduler in the frame inter-display times:

 This is an OS scheduling issue
 Solution: change the scheduler granularity to 1 ms (1000 Hz)

SLIDE 28

Summary

 Full rate uncompressed HDTV video conferencing is available today, with current network and end-system technologies.

 The approximate cost of an UltraGrid node is:

   Hardware: ~$18,000
   Software: open source code

 It is paramount to be able to adapt to differing network technologies and conditions:

   Full rate 1.2 Gbps flows on dedicated networks
   Network-friendly flows on IP best-effort networks

SLIDE 29

Further Information…

 UltraGrid project website: http://ultragrid.east.isi.edu/

   Latest UltraGrid release available for download
   UltraGrid-users mailing list subscription information

 Congestion control for media: http://macc.east.isi.edu/

   Version of Iperf+TFRC for UDP flows, available for download

 DRAGON network: http://dragon.east.isi.edu/