In-Network Computing to the rescue of Faulty Links
Acknowledgements: Isaac Pedisich (UPenn), Gordon Brebner (Xilinx),
DARPA Contracts No. HR0011-17-C-0047 and HR0011-16-C-0056, and NSF grant CNS-1513679.
[Diagram: a network link between Node 1 and Node 2, repeated on the following slides]
Packet loss -> application malfunction
Unstable link faults: various mitigations
Stable link faults: rerouting mitigation + hardware replacement
Traffic engineering
[Plot: TCP throughput (Gb/s) over 300 seconds for packet loss rates from 10^-7 to 10^-1; throughput collapses as the loss rate rises]
Throughput loss is disproportionate to the corruption rate
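This collapse is consistent with the classic Mathis-style TCP throughput approximation, rate ≈ (MSS/RTT)·√(3/2)/√p. A minimal sketch of that model (the MSS and RTT values are illustrative assumptions, not from the talk, and the model ignores the link-capacity cap):

```python
import math

def mathis_throughput_gbps(loss_rate, mss_bytes=1460, rtt_s=100e-6):
    """Mathis et al. steady-state TCP throughput approximation:
    rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    rate_bps = (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)
    return rate_bps / 1e9

# Every 100x increase in loss rate cuts modeled throughput by 10x:
for p in (1e-7, 1e-5, 1e-3, 1e-1):
    print(f"p={p:.0e}: {mathis_throughput_gbps(p):8.2f} Gb/s")
```

Even loss rates far below 1% leave TCP well under line rate, which is why corruption-induced loss hurts so much.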
Disable the faulty link(s)
Forward Error Correction (FEC)
In-network solution relying on computing
Builds on recent advances in programmable datacenter networks: an in-network solution relying on computing to decide whether to activate FEC (and how much redundancy).
Local control (each element sees to its own links: faster reaction time) vs. centralized control (a single element decides for other elements' links).
Retransmission (resend lost information) vs. redundancy (send extra information in the hope that more gets through).
Where to add redundancy: the physical, link, or network layer?
Physical layer: requires changing Ethernet. Network layer: imposes end-to-end overhead. Link layer: overhead only on the faulty links.
[Diagram: Client -> Encoding Switch -> faulty link -> Decoding Switch -> Server]
[Diagram: the encoding and decoding switches each maintain link statistics ("Stats")]
1 block = k data frames + h parity frames
Traffic classification: protocol+port (Configured by network controller)
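A minimal sketch of this block structure with a single XOR parity frame (h = 1); the real system supports h parity frames per block, and the protected protocol/port set shown here (UDP port 4791) is purely an illustrative assumption:

```python
def xor_frames(frames):
    """Byte-wise XOR of equal-length frames."""
    out = bytearray(len(frames[0]))
    for frame in frames:
        for i, byte in enumerate(frame):
            out[i] ^= byte
    return bytes(out)

def encode_block(data_frames):
    """One block = k data frames + h parity frames (here h = 1: XOR parity)."""
    return list(data_frames) + [xor_frames(data_frames)]

def decode_block(block_frames, lost_index):
    """Recover a single lost frame as the XOR of all surviving frames."""
    survivors = [f for i, f in enumerate(block_frames) if i != lost_index]
    return xor_frames(survivors)

def should_protect(proto, port, protected={("udp", 4791)}):
    """Traffic classification by protocol + port, as configured by the
    network controller (the protected set here is an illustrative guess)."""
    return (proto, port) in protected

# Encode a k = 3 block, "lose" frame 1 on the faulty link, recover it:
block = encode_block([b"frame-0", b"frame-1", b"frame-2"])
print(decode_block(block, 1))  # b'frame-1'
```

Losing any one frame of the block, data or parity, is survivable: XOR-ing the surviving frames reconstructs the missing one. Recovering up to h losses per block requires a stronger (Reed-Solomon-style) code.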
Implementations: FPGA (Xilinx ZCU102) and CPU (x86)
[Diagram: FEC implementation pipeline — Packet Port In -> Pre-processor -> FEC External Function (P4) / FEC UserEngine (PX, C) -> Post-processor -> Packet Port Out; interfaces carry packet streams and data words in/out]
Encoder throughput, FPGA vs. DPDK-based CPU implementation — FPGA: 9.3 Gb/s; CPU: 1.4 Gb/s (8 physical cores)
Evaluation: iperf measurements vs. model.
[Plot: TCP throughput (Gb/s) vs. error rate (percent of packets lost, 10^-5 to 10^-1) for No FEC and FEC configurations (k, h) = (25, 1), (25, 5), (25, 10), (10, 5), (5, 5)]
[Plot: TCP congestion window size (KB, 10^1 to 10^3, log scale) vs. error rate, for the same No FEC and (k, h) configurations]
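The (k, h) settings in these plots trade bandwidth overhead against recovery strength. A rough sketch of that trade-off, under two simplifying assumptions not stated in the talk — frame losses are independent, and a block of k + h frames is recoverable iff at most h of its frames are lost:

```python
from math import comb

def parity_overhead(k, h):
    """Fraction of transmitted frames that are parity."""
    return h / (k + h)

def residual_block_failure(k, h, p):
    """P(block unrecoverable): more than h of the k+h frames lost,
    with independent per-frame loss probability p."""
    n = k + h
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(h + 1, n + 1))

for k, h in [(25, 1), (25, 5), (25, 10), (10, 5), (5, 5)]:
    print(f"(k={k}, h={h}): overhead={parity_overhead(k, h):.1%}, "
          f"P(fail) at p=1e-2 = {residual_block_failure(k, h, 1e-2):.2e}")
```

At the same k, a larger h buys orders of magnitude in residual loss at the cost of proportionally more parity bandwidth.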
Components: FEC + management logic
Selective protection of traffic classes, low non-FEC overhead
Less manual work for network technicians/SREs
Future work: integrating new "externs" on heterogeneous host/network hardware