FACILITATING ICN DEPLOYMENT WITH AN EXTENDED OPENFLOW PROTOCOL



SLIDE 1

FACILITATING ICN DEPLOYMENT WITH AN EXTENDED OPENFLOW PROTOCOL

Piotr Zuraniewski, Niels van Adrichem, Wieger Ijntema, Daan Ravesteijn (TNO) Christos Papadopoulos, Chengyu Fan (CSU)

SLIDE 2

SDN4ICN: CONNECTING ICN "ISLANDS"

[Figure: three ICN "islands" separated by a non-ICN network; a data source in one island answers Interests from the others]

  • A "forklift upgrade" of the Internet to speak ICN is not realistic; migration scenarios are needed
  • One ICN deployment mode: "islands" separated by a traditional network
  • Manual tunnel set-up to enable communication is tedious, error-prone, etc.
  • SDN could help: set up a tunnel on demand, based on the ICN name (address)
  • ...but that would require parsing ICN packets in SDN, which is not trivial

Per-Interest processing at an ICN node (sketched below): a) available in cache? b) pending in PIT? c) if not, then forward
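As a reading aid (not from the slides), a hedged C sketch of that a/b/c pipeline. Every function here is a hypothetical hook; a real forwarder such as NFD keeps far richer state.

```c
#include <stdbool.h>

/* Hypothetical forwarder hooks; names are illustrative, not from any
 * NDN codebase. */
typedef struct packet packet_t;

extern packet_t *cs_lookup(const char *name);            /* a) Content Store */
extern bool pit_insert_or_aggregate(const char *name, int in_face); /* b) PIT */
extern int  fib_longest_prefix_match(const char *name);             /* c) FIB */
extern void send_data(packet_t *pkt, int face);
extern void send_interest(const char *name, int face);

/* One incoming Interest walks the a/b/c pipeline from the slide. */
void on_interest(const char *name, int in_face)
{
    packet_t *cached = cs_lookup(name);         /* a) available in cache? */
    if (cached) {
        send_data(cached, in_face);
        return;
    }
    if (pit_insert_or_aggregate(name, in_face)) /* b) pending in PIT?     */
        return;                                 /* aggregated; wait       */

    int out_face = fib_longest_prefix_match(name); /* c) forward          */
    if (out_face >= 0)
        send_interest(name, out_face);
}
```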


SLIDE 3

SDN – NO ICN SUPPORT OUT OF THE BOX

  • Contrary to common belief, it is not easy to test and deploy a new protocol in SDN
  • Reason: OpenFlow matches only on pre-defined fields (ingress port, src/dst MAC, src/dst UDP port, etc.)
  • Asking the controller to handle every single "unsupported" packet is not an option due to the performance penalty
  • Complex structure of the NDN packet:
  • Nested Type-Length-Value (TLV) format
  • Not only the "Value" but also the "Type" and "Length" fields can be of variable size
  • The interesting ICN name can be buried deep inside the packet


[Figure: nested TLV layout: an outer element (T0, L0) wrapping inner elements (T1, L1, V1) and (T2, L2, V2)]
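To make the nesting concrete, a minimal C sketch of how a parser must walk variable-size Type and Length fields to reach the Name. The type codes (0x05 for Interest, 0x07 for Name) and the variable-length number encoding follow the NDN packet format spec; the function names are ours.

```c
#include <stdint.h>
#include <stddef.h>

/* Decode one NDN variable-length number (used for both Type and Length).
 * A first octet < 253 encodes the value directly; 253/254/255 announce
 * 2-, 4- or 8-byte big-endian values. Returns bytes consumed, 0 on error. */
static size_t read_tlv_var(const uint8_t *p, const uint8_t *end, uint64_t *out)
{
    if (p >= end) return 0;
    uint8_t first = *p;
    size_t extra = first < 253 ? 0 : first == 253 ? 2 : first == 254 ? 4 : 8;
    if (p + 1 + extra > end) return 0;
    if (extra == 0) { *out = first; return 1; }
    uint64_t v = 0;
    for (size_t i = 0; i < extra; i++) v = (v << 8) | p[1 + i];
    *out = v;
    return 1 + extra;
}

/* Locate the Name TLV (type 0x07) inside an Interest TLV (type 0x05).
 * Returns a pointer to the Name's value bytes, or NULL. A sketch only:
 * an eBPF port must bound the loop to satisfy the kernel verifier. */
const uint8_t *find_ndn_name(const uint8_t *pkt, size_t len, uint64_t *name_len)
{
    const uint8_t *end = pkt + len;
    uint64_t t, l;
    size_t n;

    if (!(n = read_tlv_var(pkt, end, &t)) || t != 0x05) return NULL; /* Interest */
    pkt += n;
    if (!(n = read_tlv_var(pkt, end, &l))) return NULL; /* Interest length */
    pkt += n;

    while (pkt < end) {                       /* walk the nested TLVs */
        if (!(n = read_tlv_var(pkt, end, &t))) return NULL;
        pkt += n;
        if (!(n = read_tlv_var(pkt, end, &l))) return NULL;
        pkt += n;
        if (l > (uint64_t)(end - pkt)) return NULL;     /* truncated   */
        if (t == 0x07) { *name_len = l; return pkt; }   /* Name found  */
        pkt += l;                                       /* skip element */
    }
    return NULL;
}
```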


SLIDE 4

REQUIRED SOLUTION: FLEXIBLE, EASY TO PROGRAM MATCHING ON SWITCH

  • Avoid frequent controller communication
  • Universal and "future-proof" (solves more than ICN parsing)
  • High performance, preferably line-rate
  • Easy to deploy: no changes to the OpenFlow standard
  • Our proposition: extend the OpenFlow protocol to allow matching on the result of an extended Berkeley Packet Filter program executed locally on a switch
  • Extensibility stays within the current standard
  • An "ultimate" extension: all protocols can be handled
  • Inspired by the architecture first described by Jouet, Cziva and Pezaros


SLIDE 5

INTERMEZZO: BERKELEY PACKET FILTER

  • BPF: a way of filtering packets in the kernel (McCanne, Van Jacobson 1993)
  • You use it every day with tcpdump/libpcap/wireshark/...
  • A BPF program is compiled to bytecode and attached to the network tap interface
  • Extended BPF (eBPF): can be written in C, loops possible

https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
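Not on the slide, but for concreteness: a minimal classic-BPF snippet in C (our own sketch) that attaches tcpdump-style bytecode to a raw socket. The filter accepts only IPv4 frames; this is exactly the kind of program tcpdump compiles filter expressions into.

```c
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

/* Run as root. The four instructions below are classic BPF bytecode. */
int main(void)
{
    struct sock_filter code[] = {
        { 0x28, 0, 0, 12 },        /* ldh [12]   : load EtherType        */
        { 0x15, 0, 1, ETH_P_IP },  /* jeq 0x0800 : IPv4? else skip 1     */
        { 0x06, 0, 0, 0xffff },    /* ret 0xffff : accept packet         */
        { 0x06, 0, 0, 0 },         /* ret 0      : drop packet           */
    };
    struct sock_fprog prog = {
        .len = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt");
        return 1;
    }
    /* ... recv() on fd now only sees IPv4 frames ... */
    return 0;
}
```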


SLIDE 6

ARCHITECTURE DETAILS


SLIDE 7

ARCHITECTURE DETAILS – EBPF PROGRAM


SLIDE 8

ARCHITECTURE DETAILS – OUR IMPLEMENTED EXTENSIONS

Facilitating ICN Deployment with an Extended OpenFlow Protocol

  • Ryu controller modified to handle eBPF programs: it can send them to, and remove them from, the switch
  • Experimenter OpenFlow message: can transport programs up to 64 kB, which allows for complex code if needed
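The experimenter (vendor) message is the vehicle for this. The fixed header below follows the OpenFlow 1.3 wire format; the payload layout for carrying an eBPF program is only our guess at what such an extension could look like, not the paper's exact format. Note that the 16-bit length field is what caps a message at 64 kB.

```c
#include <stdint.h>

/* OpenFlow 1.3 common header (from the OpenFlow spec). */
struct ofp_header {
    uint8_t  version;      /* 0x04 for OpenFlow 1.3                      */
    uint8_t  type;         /* OFPT_EXPERIMENTER = 4                      */
    uint16_t length;       /* whole message; 16 bits cap it at 64 kB     */
    uint32_t xid;          /* transaction id                             */
};

/* Experimenter header (from the OpenFlow spec). */
struct ofp_experimenter_header {
    struct ofp_header header;
    uint32_t experimenter; /* assigned vendor ID                         */
    uint32_t exp_type;     /* vendor-defined, e.g. INSTALL/REMOVE eBPF   */
};

/* Hypothetical payload: a program id that flow matches can later refer
 * to, followed by the eBPF bytecode itself. */
struct ebpf_install_msg {
    struct ofp_experimenter_header exp;
    uint32_t prog_id;
    uint8_t  bytecode[];   /* flexible array: (length - headers) bytes   */
};
```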


SLIDE 9

ARCHITECTURE DETAILS – OUR IMPLEMENTED EXTENSIONS


  • OFSoftSwitch now has an experimenter flow match field:

  • Matching locally on a switch (controller not asked)
  • Many concurrent eBPF programs can be present
  • Each can be parametrized (e.g., ICN name)
  • Meta-data also handled (port ID, table ID)
  • Own vendor extension used – OpenFlow compliant


SLIDE 10

SDN-ENHANCED ICN FORWARDING

[Figure: ICN islands interconnected across a non-ICN network by SDN gateways under a common SDN controller; one island hosts the data source]

Once the switch can match on an ICN name, any OpenFlow-supported action to transport (tunnel) over the legacy network is possible:

  • Use a GRE tunnel, re-write the IP, push MPLS, ...
  • ICN routing information is leaked to the controller so it can create the correct flows

Two modes are possible (a lookup sketch follows):

  • Proactive: the controller pre-installs eBPF programs and flows with matching/tunneling actions
  • Reactive: installation happens after the switch encounters an unknown ICN name
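A toy sketch of the name-to-tunnel-endpoint decision (our own code). The prefix /tno and endpoint 1.1.1.1 appear in the testbed figure later in the deck; /csu and /ncar are made up here for illustration.

```c
#include <string.h>
#include <stdio.h>

/* Hypothetical mapping leaked from ICN routing to the controller:
 * the longest matching name prefix selects the tunnel endpoint whose
 * dstIP the gateway writes into the outgoing packet. */
struct prefix_route { const char *prefix; const char *dst_ip; };

static const struct prefix_route routes[] = {
    { "/tno",  "1.1.1.1" },
    { "/csu",  "2.2.2.2" },
    { "/ncar", "3.3.3.3" },
};

/* Longest-prefix match on whole ICN name components. */
const char *lookup_dst_ip(const char *name)
{
    const char *best = NULL;
    size_t best_len = 0;
    for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++) {
        size_t n = strlen(routes[i].prefix);
        if (strncmp(name, routes[i].prefix, n) == 0 && n > best_len &&
            (name[n] == '/' || name[n] == '\0')) {
            best = routes[i].dst_ip;
            best_len = n;
        }
    }
    return best;  /* NULL -> reactive mode: ask the controller */
}

int main(void)
{
    printf("%s\n", lookup_dst_ip("/tno/icn/demo"));  /* prints 1.1.1.1 */
    return 0;
}
```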



SLIDE 11

TNO/SCINET TESTBED

  • 3 locations: 1 in NL, 2 in the USA; connectivity via the plain IPv4 Internet
  • Each location hosted a VM with a vanilla NDN 0.4.1 stack and a Python script advertising the RIB to the controller, and a VM with the modified SDN switch and eBPF VM
  • Additionally, TNO hosted the modified Ryu SDN controller
  • Tunneling: switches re-write the dstIP based on the ICN-name-to-IP mapping


[Figure: testbed topology: three ICN islands (TNO, the Netherlands; Colorado State University; NCAR Wyoming Supercomputing Center) connected over the Internet through SDN gateways at 1.1.1.1/24, 2.2.2.2/24 and 3.3.3.3/24; an Interest for /tno triggers the rewrite dstIP := 1.1.1.1]

SLIDE 12

TNO/SCINET TESTBED

  • Test 1: general connectivity, ndnping(server) between each pair of nodes
  • Test 2: a specific application, repo-ng file transfer
  • In both cases connectivity was seamlessly established
  • A performance test was not a goal here; the data transfer speed can be improved by using an app with window control*)

*) reviewer’s remark


[Figure: file transfer experiment]

Especially for Dave: ~10 bearer channels in ISDN :)

SLIDE 13

MODIFIED SWITCH PERFORMANCE

  • How fast can the switch match on an ICN name and perform tunnelling?
  • 100,000 Interest packets sent at various speeds (PPS); 30 repetitions each time
  • Look for the breaking point, i.e., the first PPS value for which we see losses
  • Four set-ups (see the table below):
  • One "operational" set-up, using eBPF and header rewriting so the packet can be consumed by the next-hop IPv4 router
  • Three "non-operational" set-ups for baselining only, e.g. set-up (D) being pure passthrough
  • eBPF matching is cheap; rewriting is expensive and "costs" about 2000 PPS


test set-up   eBPF match   IP/MAC re-write   purpose       last loss-less [PPS]   first loss [PPS]   first loss mean [P]   first loss stdev [P]
(A)           Y            Y                 operational   2100                   2200               99997.8               9.5
(B)           N            Y                 evaluation    2100                   2200               99999.9               0.5
(C)           Y            N                 evaluation    4000                   4100               99998.1               10.6
(D)           N            N                 evaluation    4100                   4200               99998.4               8.9

SLIDE 14

GOING GIGABITS? EXPRESS DATA PATH + MASTER STUDENTS*) TO THE RESCUE

  • 2000 Interest PPS may generate lots of data in return, but on its own it means a speed of only ~2 Mbps
  • New Linux kernels (4.13+) and several card vendors offer usage of the eXpress Data Path (XDP)


*) Based on "eBPF filter acceleration for arbitrary packet matching in the Linux kernel", MSc thesis, Jeffrey Panneman, TNO/UvA, Aug 2017
Figure: https://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp

  • XDP hooks into the network-adapter device driver; no kernel bypass
  • eBPF programs can be triggered by XDP upon packet reception
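For orientation, a minimal XDP program in C (ours, written with current libbpf conventions, which are newer than the 2017-era setup described here). It only counts packets and passes them on; the real matcher would parse the NDN TLVs and rewrite MAC/IP headers before returning XDP_TX or XDP_REDIRECT.

```c
/* Compile with: clang -O2 -g -target bpf -c xdp_count.c -o xdp_count.o */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One shared counter cell, updated from the fast path. */
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_pass_count(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
    if (count)
        __sync_fetch_and_add(count, 1);
    return XDP_PASS;   /* a rewriter would return XDP_TX / XDP_REDIRECT */
}

char _license[] SEC("license") = "GPL";
```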

SLIDE 15

XDP/EBPF RESULTS: A SNEAK PREVIEW

  • Netronome Agilio CX 2x10 Gbps NIC with a driver supporting XDP ("xdpdrv")
  • Interest packets sent towards the card at a 10 Gbps rate
  • Four test flavors; "map-match-and-rewrite" is the operational one, with the eBPF program doing ICN matching and MAC/IP header manipulation
  • No losses observed up to 2 Gbps; loss rate at 4 Gbps ~1e-7
  • All of this using only 1 core
  • No OpenFlow here; control via "maps" (see the sketch below)
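"Control via maps" means user space steers the data path by writing into BPF maps instead of speaking OpenFlow. A minimal libbpf-based sketch, assuming a map pinned at the hypothetical path /sys/fs/bpf/ndn_routes with a fixed-size name-prefix key and an IPv4 value; the key/value layout is our assumption, not the project's actual one.

```c
#include <bpf/bpf.h>
#include <linux/types.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

#define MAX_PREFIX 64   /* assumed fixed-size key */

int main(void)
{
    /* The XDP program would pin its map here at load time. */
    int map_fd = bpf_obj_get("/sys/fs/bpf/ndn_routes");
    if (map_fd < 0) { perror("bpf_obj_get"); return 1; }

    char key[MAX_PREFIX] = {0};
    strncpy(key, "/tno", sizeof(key) - 1);

    __u32 dst_ip;
    inet_pton(AF_INET, "1.1.1.1", &dst_ip);

    /* Install (or overwrite) the route: /tno -> 1.1.1.1 */
    if (bpf_map_update_elem(map_fd, key, &dst_ip, BPF_ANY) < 0) {
        perror("bpf_map_update_elem");
        return 1;
    }
    printf("route /tno -> 1.1.1.1 installed\n");
    return 0;
}
```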


[Figure: RFC 2544-guided test results; error bars too small to be visible]

SLIDE 16

CONCLUSIONS

  • The proposed framework allows easy development of parametrizable, flexible eBPF programs
  • Capability to match on an arbitrary part of a datagram
  • Virtually any current or future protocol can be handled
  • Current performance of the whole stack: ~Mbps
  • Modern data-plane solutions (XDP) seem very promising, with Gbps rates
  • Next steps: control plane for XDP, hardware offload, ...


SDN – flexibility
Accelerated data plane – performance
ICN – a new architecture