PISA: Protocol Independent Switch Architecture [Sigcomm 2013]


  1. Programming the Forwarding Plane. Nick McKeown, Stanford University

  2. PISA: Protocol Independent Switch Architecture [Sigcomm 2013]. Match+Action, ALU, Memory, Programmable Parser

  3. PISA: Protocol Independent Switch Architecture. Match+Action, Programmable Parser
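The PISA model on these slides pairs a programmable parser with a sequence of match+action stages. A minimal sketch of that structure in Python (all names, field layouts, and rules here are illustrative assumptions, not from any real P4 program or chip):

```python
# Toy model of a PISA-style device: a programmable parser extracts header
# fields, then each match+action stage matches one field and runs an action.

def parse_ethernet(pkt):
    """Extract an Ethernet header dict (dst/src/type) from raw bytes."""
    return {
        "eth.dst": pkt[0:6].hex(),
        "eth.src": pkt[6:12].hex(),
        "eth.type": int.from_bytes(pkt[12:14], "big"),
    }

class MatchActionStage:
    """One pipeline stage: exact-match on a header field, then an action."""
    def __init__(self, field):
        self.field = field
        self.table = {}                 # match value -> action callable

    def add_rule(self, value, action):
        self.table[value] = action

    def apply(self, headers, metadata):
        action = self.table.get(headers.get(self.field))
        if action:
            action(headers, metadata)

def set_egress(port):
    def action(headers, metadata):
        metadata["egress_port"] = port
    return action

# A one-stage pipeline doing L2 forwarding on the destination MAC.
l2 = MatchActionStage("eth.dst")
l2.add_rule("ffffffffffff", set_egress("flood"))
l2.add_rule("aabbccddeeff", set_egress(3))

pkt = bytes.fromhex("aabbccddeeff" "112233445566" "0800") + b"payload"
headers, metadata = parse_ethernet(pkt), {}
for stage in [l2]:
    stage.apply(headers, metadata)
# metadata now carries the chosen egress port
```

The point of the model is the separation PISA enforces: the parser defines which fields exist, and stages only ever match and act on those fields.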

  4. P4 and PISA: P4 code → Compiler → Target (programmable parser)

  5. [ACM CCR 2014] "Best Paper 2014", ACM Sigcomm Computer Communication Review, July 2014.
  P4: Programming Protocol-Independent Packet Processors. Pat Bosshart†, Dan Daly*, Glen Gibb†, Martin Izzard†, Nick McKeown‡, Jennifer Rexford**, Cole Schlesinger**, Dan Talayco†, Amin Vahdat¶, George Varghese§, David Walker** (†Barefoot Networks, *Intel, ‡Stanford University, **Princeton University, ¶Google, §Microsoft Research)

  Abstract: P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. (3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label.

  From the introduction: Software-Defined Networking (SDN) gives operators programmatic control over their networks. In SDN, the control plane is physically separate from the forwarding plane, and one control plane controls multiple forwarding devices. While forwarding devices could be programmed in many ways, having a common, open, vendor-agnostic interface (like OpenFlow) enables a control plane to control forwarding devices from different hardware and software vendors. The OpenFlow interface started simple, with the abstraction of a single table of rules that could match packets on a dozen header fields (e.g., MAC addresses, IP addresses, protocol, TCP/UDP port numbers, etc.). Over the past five years, the specification has grown increasingly complicated (see Table 1), with many more header fields and multiple stages of rule tables, to allow switches to expose more of their capabilities to the controller.

  The proliferation of new header fields shows no signs of stopping. For example, data-center network operators increasingly want to apply new forms of packet encapsulation (e.g., NVGRE, VXLAN, and STT), for which they resort to deploying software switches that are easier to extend with new functionality. Rather than repeatedly extending the OpenFlow specification, we argue that future switches should support flexible mechanisms for parsing packets and matching header fields, allowing controller applications to leverage these capabilities through a common, open interface (i.e., a new "OpenFlow 2.0" API). Such a general, extensible approach would be simpler, more elegant, and more future-proof than today's OpenFlow 1.x standard.

  Recent chip designs demonstrate that such flexibility can be achieved in custom ASICs at terabit speeds [1, 2, 3]. Programming this new generation of switch chips is far from easy. Each chip has its own low-level interface, akin to microcode programming. In this paper, we sketch the design of a higher-level language for Programming Protocol-independent Packet Processors (P4).

  Figure 1: P4 is a language to configure switches.

  Table 1: Fields recognized by the OpenFlow standard
  Version | Date     | Header Fields
  OF 1.0  | Dec 2009 | 12 fields (Ethernet, TCP/IPv4)
  OF 1.1  | Feb 2011 | 15 fields (MPLS, inter-table metadata)
  OF 1.2  | Dec 2011 | 36 fields (ARP, ICMP, IPv6, etc.)
  OF 1.3  | Jun 2012 | 40 fields
  OF 1.4  | Oct 2013 | 41 fields

  6. Update on P4 Language Ecosystem

  7. P4.org – P4 Language Consortium

  8. P4.org – P4 Language Consortium. Maintains the P4 language spec; open for free to any individual or organization; Apache license. Github for open-source tools: reference P4 programs, compiler, P4 software switch, test framework. Regular P4 meetings; full-day tutorial at Sigcomm 2015; 2nd P4 Workshop at Stanford on November 18; 1st P4 Boot camp for PhD students November 19-20; 1st P4 Developers Day November 19

  9. P4 Consortium – P4.org: Operators, Systems, Targets, Academia

  10. Mapping P4 programs to a compiler target. Lavanya Jose, Lisa Yan, George Varghese, NM [NSDI 2015]

  11. Switch Pipeline Control Flow. Naïve mapping of the control flow graph onto the pipeline: Programmable Parser → L2 Table → IPv4 Table → IPv6 Table → ACL Table → Queues, each match table paired with its action macro, one table per stage

  12. Table Dependency Graph (TDG): the control flow graph over the tables (L2, v4, v6, ACL) versus the table dependency graph, which keeps only the true dependencies between them
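A TDG can be derived from the control-flow order plus each table's read and write sets: a later table depends on an earlier one only if it matches on a field the earlier table may write. A small sketch of that derivation (field names and read/write sets are illustrative assumptions chosen to mirror the slide's L2/v4/v6/ACL example):

```python
# Sketch: deriving a table dependency graph (TDG) from control-flow order
# plus per-table read/write sets. Only match dependencies are modeled here;
# the full analysis also tracks action and successor dependencies.

tables = ["L2", "IPv4", "IPv6", "ACL"]          # control-flow order
reads  = {"L2": {"eth.dst"}, "IPv4": {"ip4.dst"},
          "IPv6": {"ip6.dst"}, "ACL": {"egress_port"}}
writes = {"L2": {"egress_port"}, "IPv4": {"egress_port"},
          "IPv6": {"egress_port"}, "ACL": set()}

def table_dependencies(tables, reads, writes):
    """Edge (a, b) if b matches on a field that a may write."""
    deps = set()
    for i, a in enumerate(tables):
        for b in tables[i + 1:]:
            if writes[a] & reads[b]:
                deps.add((a, b))
    return deps

deps = table_dependencies(tables, reads, writes)
# With these sets, IPv4 and IPv6 have no edge between them, so a compiler
# may place them side by side; ACL must come after all three.
```

This is exactly why the TDG enables a tighter mapping than the control-flow graph: tables with no dependency edge are free to share a pipeline stage.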

  13. Efficient Mapping via the TDG (L2, v4, v6, ACL): using the table dependency graph rather than the control flow graph, the switch pipeline places the IPv4 and IPv6 tables in the same stage: Programmable Parser → L2 Table → IPv4 Table + IPv6 Table → ACL Table → Queues, each table with its action macro

  14. Example Use Case: Typical TDG. Nodes include IG_Phy_Meta, IG-Props, IG_Bcast_Storm, IG-Smac, IG-Dmac, IG-Agg-Inj, IG-Router-Mac, Ipv4_Urpf, Ipv4-Ecmp, Ipv4-Ucast-Host, Ipv4-Ucast-LPM, IPv4-Nexthop, IPv4-Mcast, Ipv6_Urpf, Ipv6-Ecmp, Ipv6-Ucast-LPM, Ipv6-Ucast-Host, IPv6-Nexthop, IPv6-Mcast, IG_ACL1, IG_ACL2, EG_Props, EG-Phy-Meta, EG-ACL1. Configuration for a 16-stage PISA chip with Exact and TCAM memories

  15. Mapping Techniques [NSDI 2015]. Compare a Greedy Algorithm against Integer Linear Programming (ILP): the greedy algorithm runs 100 times faster, while ILP uses 30% fewer stages. Recommendations: (1) if there is enough time, use ILP; (2) otherwise, run ILP offline to find the best parameters for the greedy algorithm. P4 code, switch models, and compilers available at: http://github.com/p4lang
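The greedy side of that comparison can be sketched as a first-fit placement: each table goes into the earliest stage that is strictly after every table it depends on and still has room. All table names, sizes, and the single capacity number below are made-up illustrations; the real NSDI 2015 compiler also models memory types (exact vs TCAM), crossbar widths, and action resources:

```python
# Greedy first-fit mapping of dependent tables onto pipeline stages.

def greedy_map(tables, deps, sizes, stage_capacity, num_stages):
    """tables: topological order; deps: table -> set of predecessor tables."""
    stage_of, used = {}, [0] * num_stages
    for t in tables:
        # Earliest legal stage: one past the latest predecessor's stage.
        earliest = max((stage_of[p] + 1 for p in deps.get(t, ())), default=0)
        for s in range(earliest, num_stages):
            if used[s] + sizes[t] <= stage_capacity:
                stage_of[t] = s
                used[s] += sizes[t]
                break
        else:
            raise ValueError(f"table {t} does not fit in {num_stages} stages")
    return stage_of

placement = greedy_map(
    tables=["L2", "IPv4", "IPv6", "ACL"],
    deps={"ACL": {"L2", "IPv4", "IPv6"}},
    sizes={"L2": 4000, "IPv4": 16000, "IPv6": 16000, "ACL": 1000},
    stage_capacity=20000,
    num_stages=16,
)
# L2 and IPv4 share stage 0, IPv6 spills to stage 1, and ACL lands in
# stage 2 because it must follow all three.
```

A greedy pass like this commits to placements one table at a time, which is why an ILP that considers all tables jointly can save stages at the cost of much longer solve times.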

  16. PISCES: Protocol Independent Software Hypervisor Switch. Mohammad Shahbaz*, Sean Choi, Jen Rexford*, Nick Feamster*, Ben Pfaff, NM. Problem: adding a new protocol feature to OVS is complicated; it requires domain expertise in kernel programming and networking, many modules are affected, and the QA and deployment cycle is long (typically 9 months). Approach: specify forwarding behavior in P4 and compile it to modify OVS. Question: how does PISCES switch performance compare to OVS?

  17. PISCES Architecture. A P4 program passes through the type checker and P4 compiler to produce C code (parse, match, action configuration) that is merged into the OVS source code and built into the OVS executable; at runtime, flow rules pass through the flow rule checker to become match-action rules installed via the slow path

  18. Native OVS expressed in P4. Pipeline stages:
  VLAN Ingress Processing – Match: ingress_port, vlan.vid; Action: add_vlan, no_op
  MAC Learning – Match: eth.src, vlan.vid; Action: learn
  Routable – Match: eth.src, eth.dst, vlan.vid; Action: no_op
  Routing – Match: ip.dst; Action: nexthop, drop
  Switching – Match: eth.dst, vlan.vid; Action: forward, bcast
  ACL – Match: ip.src, ip.dst, ip.prtcl, port.src, port.dst; Action: no_op, drop
  VLAN Egress Processing – Match: egress_port, vlan.vid; Action: remove_vlan, no_op
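The slide's pipeline is small enough to write down as plain data, which is essentially what expressing it in P4 buys you: the stages become a declarative description instead of hand-written kernel code. A sketch (stage order and field names are my reading of the slide's diagram; the actions are abbreviated):

```python
# The native-OVS pipeline from the slide as declarative stage descriptions:
# (stage name, match fields, possible actions).

PIPELINE = [
    ("VLAN Ingress Processing", ("ingress_port", "vlan.vid"), ("add_vlan", "no_op")),
    ("MAC Learning",            ("eth.src", "vlan.vid"),      ("learn",)),
    ("Routable",                ("eth.src", "eth.dst", "vlan.vid"), ("no_op",)),
    ("Routing",                 ("ip.dst",),                  ("nexthop", "drop")),
    ("Switching",               ("eth.dst", "vlan.vid"),      ("forward", "bcast")),
    ("ACL", ("ip.src", "ip.dst", "ip.prtcl", "port.src", "port.dst"),
            ("no_op", "drop")),
    ("VLAN Egress Processing",  ("egress_port", "vlan.vid"),  ("remove_vlan", "no_op")),
]

def stages_matching(field):
    """Names of stages whose match key includes `field`, in pipeline order."""
    return [name for name, match, _ in PIPELINE if field in match]
```

With the pipeline as data, questions like "which stages key on a given field?" become one-liners, the kind of analysis a P4 compiler performs when it rebuilds this pipeline for a target.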

  19. PISCES vs Native OVS. Chart: throughput (Gbps, 0–50) versus packet size (64–256 bytes) for PISCES, PISCES (Optimized), and OVS

  20. Complexity Comparison. 40x reduction in lines of code, 20x reduction in method size; code mastery is no longer needed

  21. Next Steps. (1) Make PISCES available as open source (May 2016). (2) Accumulate experience and measure the reduction in deployment time. (3) Develop a P4-to-eBPF compiler for kernel forwarding

  22. PERC: Proactive Explicit Rate Control. Lavanya Jose, Stephen Ibanez, Mohammad Alizadeh, George Varghese, Sachin Katti, NM [Hotnets 2015]. Problem: congestion control algorithms in data centers are "reactive"; they typically take 100 RTTs to converge to fair-share rates (e.g., TCP, RCP, DCTCP), because the algorithm doesn't know the answer and finds it by successive approximation. Approach: explicitly calculate the fair-share rates in the forwarding plane. Questions: does it converge much faster, and is it practical?
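The quantity PERC computes directly is the max-min fair-share rate of each flow. A minimal centralized water-filling sketch of that computation (the real PERC algorithm runs distributed, in the forwarding plane, with per-link state at switches; flow names, links, and capacities below are invented for illustration):

```python
# Water-filling computation of max-min fair rates: repeatedly find the
# bottleneck link, give its flows their equal share, and remove them.

def max_min_rates(flows, capacity):
    """flows: flow -> set of links it crosses; capacity: link -> Gbps."""
    rates, remaining = {}, dict(capacity)
    active = {f: set(links) for f, links in flows.items()}
    while active:
        # Per-link equal share among flows still unallocated on that link.
        share = {l: remaining[l] / sum(l in ls for ls in active.values())
                 for l in remaining
                 if any(l in ls for ls in active.values())}
        bottleneck = min(share, key=share.get)
        fair = share[bottleneck]
        for f in [f for f, ls in active.items() if bottleneck in ls]:
            rates[f] = fair
            for l in active[f]:
                remaining[l] -= fair     # charge the flow on every link it uses
            del active[f]
    return rates

rates = max_min_rates(
    flows={"A": {"L1"}, "B": {"L1", "L2"}, "C": {"L2"}},
    capacity={"L1": 10.0, "L2": 4.0},
)
# L2 is the bottleneck for B and C (2.0 each); A then takes what B left on L1.
```

Because the answer falls out of a direct calculation rather than repeated probing, a switch that can run logic like this per link can hand flows their fair rates without the ~100-RTT convergence of reactive schemes.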

  23. Reactive vs Proactive Algorithms

  24. Performance Results. Convergence time is determined by the dependency chain

  25. Next Steps. Convergence time: prove that convergence time equals the length of the dependency chain, and reduce the measured time to that provable minimum. Develop a practical algorithm: resilient to imperfect and lost update information, and calculable in a PISA-style forwarding plane

  26. <The End>
