  1. Software Defined Security – Reloaded
  Open Infrastructure Summit, Shanghai, Nov 4, 2019
  Ash Bhalgat (Sr. Director, Cloud Marketing, Mellanox Technologies)
  Yossef Efraim (Director, Software Development, Mellanox Technologies)
  Zhike Wang (JD Cloud IaaS Architect and Product Lead, JD.com)

  2. Agenda
  ▪ Mellanox ASAP² Security
    ▪ Futuriom Survey: Security Bottlenecks in the SDN World
    ▪ Network Virtualization Challenges
    ▪ Mellanox ASAP² Overview
    ▪ Traditional Connection Tracking: Linux and OVS
    ▪ ASAP² Security: Efficient ConnTrack Offloads
    ▪ Demo and Benchmark Comparison
  ▪ JD Cloud: ASAP² Security Deployment
    ▪ JD Cloud SmartNIC Requirements
    ▪ JD Cloud Conntrack Use Cases (Virtualized and Bare-Metal Cloud)
    ▪ JD Cloud SmartNIC CT Offload Performance in Production
  ▪ Key Takeaways

  3. Futuriom Survey: Security Bottlenecks in the SDN World
  To learn more, download the "Untold Secrets of the Efficient Data Center" Futuriom report.

  4. Software Defined Everything (SDX) Kills Performance – a SmartNIC Improves Security & Restores Server Application Performance!
  [Figure: server core diagrams comparing three models – virtualized & software-defined (most cores consumed by the virtualization, security & SDX penalty), bare metal, and software-defined hardware-accelerated, where the SmartNIC absorbs the SDX work and frees nearly all cores for applications.]

  5. Mellanox SmartNICs – an Acceleration Strategy
  From x86 cores processing packets (virtualization, security, storage) to x86 cores available for applications:
  ▪ Basic NICs (commodity 1G/10G NICs)
    ▪ Not programmable; stateless offloads only
    ▪ CPU does the heavy lifting
    ▪ Best performance for price
  ▪ ConnectX SmartNIC (ConnectX-5/6/6-Dx)
    ▪ Built-in hardware offloads; leverages hardware accelerations
    ▪ Extra efficiency and performance
  ▪ BlueField SmartNIC (BlueField 1 and 2)
    ▪ Highly customizable; full programmability
    ▪ Extra flexibility
    ▪ Priced as per the value

  6. Common Operations in Networking
  ▪ Most network functions share some data-path operations:
    ▪ Packet classification (into flows)
    ▪ An action based on the classification result
  ▪ Mellanox SmartNICs offload both the classification and the actions in hardware
  [Figure: packets in → classification A/B/…/N → action A/B/…/N → processed packets out, all inside the NIC.]

  7. SDN: Flow Tables Overview
  ▪ Each table contains match-and-action rule entries
  ▪ Multiple tables
  ▪ Programmable table size
  ▪ Programmable table cascading
  ▪ Dedicated, isolated tables per hypervisor, VM, container, and port
  ▪ Practically unlimited table size – can support millions of rules/flows

  8. SDN: Flow Tables – Classify/Match Rules and Take Action
  ▪ Match fields (5-tuple, 7-tuple)
    ▪ Ethernet Layer 2: destination MAC; 2 outer VLANs / priority; Ethertype
    ▪ IP (v4/v6): source address; destination address; protocol / next header
    ▪ TCP/UDP: source port; destination port
    ▪ Flexible field extraction by the new Mellanox "Flexparse"
  ▪ Actions (ConnectX-5)
    ▪ Steering and forwarding
    ▪ Drop / allow
    ▪ Encap/decap: VXLAN, NVGRE, Geneve, NSH, MPLSoGRE/UDP; flex encap/decap
    ▪ Report flow ID
    ▪ Header rewrite
    ▪ Hairpin mode
    ▪ Counter set
    ▪ Connection tracking (new)
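  For illustration, here is a minimal sketch of what a 5-tuple match/action rule looks like when expressed as an OpenFlow rule in OVS (the bridge name br0 and all addresses/ports below are placeholders, not from the deck):

      # Drop TCP traffic from one host to port 443; forward the rest of the subnet normally
      ovs-ofctl add-flow br0 "table=0, priority=100, tcp, nw_src=10.0.0.5, tp_dst=443, actions=drop"
      ovs-ofctl add-flow br0 "table=0, priority=10, ip, nw_src=10.0.0.0/24, actions=normal"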

  9. ASAP²: OVS Data-Path Hardware Offload
  ▪ Mellanox branding: Accelerated Switching and Packet Processing (ASAP²)
  ▪ Best of both worlds! Enables the SR-IOV data path with the OVS control plane
  ▪ Enables support for most SDN controllers with an SR-IOV data plane
  [Figure: without offload, both the first packet and subsequent packets take the slow path – VM → SR-IOV VF → OVS kernel datapath → ovs-vswitchd in user space – with the ConnectX-5 eSwitch idle under the PF.]

  10. ASAP²: OVS Data-Path Hardware Offload
  ▪ Mellanox branding: Accelerated Switching and Packet Processing (ASAP²)
  ▪ Best of both worlds! Enables the SR-IOV data path with the OVS control plane
  ▪ Enables support for most SDN controllers with an SR-IOV data plane
  [Figure: the first packet still traverses the OVS kernel slow path; subsequent packets are hardware-offloaded and switched VM-to-VM in the SmartNIC (ConnectX-5) eSwitch fast path.]
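  For reference, OVS hardware offload is typically enabled along these lines (a sketch; the PCI address and interface name are placeholders, and exact steps vary by distro and NIC firmware):

      # Put the NIC into switchdev mode so VF representors appear
      devlink dev eswitch set pci/0000:03:00.0 mode switchdev
      # Enable TC hardware offload on the uplink
      ethtool -K enp3s0f0 hw-tc-offload on
      # Tell OVS to offload datapath flows to hardware, then restart OVS
      # (the service name varies by distro, e.g. openvswitch-switch on Ubuntu)
      ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
      systemctl restart openvswitch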

  11. Basic Architecture: Linux TC_Flower API
  ▪ ovs-vswitchd in user space programs rules into the kernel over netlink
  ▪ Standard control path: rules go into the OVS kernel datapath rules table
  ▪ HW offload control path: rules go through TC_Flower to the vendor driver and into the FDB of the hardware eSwitch
  [Figure: OVS user-space vswitchd → netlink → kernel, where match/action entries are installed both in the OVS kernel rules table and, via TC_Flower and the driver, in the HW eSwitch.]
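  The TC flower rules that OVS installs through this path can also be written by hand; a sketch (device names and the MAC address are placeholders) that matches a destination MAC and redirects entirely in hardware:

      # One-time: attach a classifier qdisc to the VF representor
      tc qdisc add dev vf0_rep clsact
      # Offload a flow: match destination MAC, redirect to the uplink.
      # skip_sw requests hardware-only processing.
      tc filter add dev vf0_rep ingress protocol ip flower skip_sw \
          dst_mac 52:54:00:12:34:56 action mirred egress redirect dev enp3s0f0
      # Verify the rule landed in hardware (look for "in_hw") and read its counters
      tc -s filter show dev vf0_rep ingress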

  12. OVS Performance: DPDK vs. ASAP²
  [Chart: million packets per second, higher is better – OVS over DPDK: 7.6 MPPS using 2 cores; OVS offload (ASAP²): 66 MPPS with zero CPU load, an 8X-10X gain.]
  ASAP²: highest packet rate with zero CPU load
  ▪ Mellanox OVS offload (ASAP²)
    ▪ 20X higher performance than vanilla OVS
    ▪ 8X-10X better performance than OVS-DPDK
    ▪ Line-rate performance at 25/40/50/100Gbps
  ▪ Open source – no vendor lock-in
    ▪ Adopted broadly by the Linux community & industry
    ▪ Full community support (OVS, Linux, OpenStack)
  ▪ Industry ecosystem support: Nuage/Nokia, Red Hat, Ubuntu, Dell, F5, etc.

  13. Open Ecosystem Components
  Open source components:
  ✓ Kernel code is upstream: kernel 4.8+
  ✓ OVS code is upstream: OVS 2.8+
  ✓ OpenStack release: Queens
  Commercial products:
  ✓ Mellanox SmartNICs: ConnectX-5 and BlueField
  ✓ Red Hat: RHEL 7.5+ and RHOSP 13 (Tech Preview)
  ✓ Nuage Networks: VSP 5.4.1
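  A quick way to check a host against these open-source baselines (a sketch; tool availability varies by distro):

      # Kernel must be 4.8+ for the upstream offload support
      uname -r
      # OVS must be 2.8+
      ovs-vswitchd --version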

  14. What is Connection Tracking (conntrack)?
  • Tracks connections and stores information about the state of each connection
  • For each packet, finds the connection in the DB or creates a new entry
  • The CT state of a packet can be:
    • New – the connection is starting (SYN for TCP)
    • Established – the connection has already been established
    • Related – the connection is related to an established connection
    • Invalid – the packet does not follow the expected behavior of a connection
  • CT is also used for NAT
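  These are the same states exposed by the standard Linux tools; for example (a sketch, assuming conntrack-tools is installed and an nftables inet filter table/chain already exists):

      # List the kernel's connection tracking table
      conntrack -L
      # A typical stateful-firewall rule keyed on CT state
      nft add rule inet filter input ct state established,related accept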

  15. OVS CT (Connection Tracking)
  • OVS CT uses the same conntrack mechanism as iptables/nftables
  • Step 1: The incoming packet is classified and an action is determined
  • Step 2: If there is an OVS action for connection tracking, the packet is sent to the connection tracker (netfilter module)
  • Step 3: Connection-state information from the CT table is sent to OVS
  • Step 4: The packet, now carrying CT metadata, is recirculated and steered as per the rules
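  OVS exposes this shared CT state through its own tooling; for instance:

      # Dump the connection tracking entries seen by the OVS datapath
      ovs-appctl dpctl/dump-conntrack
      # Dump datapath flows, including their ct_state matches and ct() actions
      ovs-appctl dpctl/dump-flows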

  16. ASAP² CT Offload Concept
  • Connection establishment is done by software (conntrack)
  • HW enforces CT state, augmenting SW CT
  • Control plane: the Linux TC API is extended for CT hardware offload programming
  • The first packet in a connection follows the slow path (OVS-CT and Linux conntrack)
  • Data plane: subsequent packets are fast-switched to the VF by the SmartNIC eSwitch
  • The software CT state table is replicated in NIC hardware (a flow-based DB mapping 5-tuple → CT state) for two reasons:
    • Fast switching/forwarding (match : action)
    • Saving CPU cores through hardware offloads
  [Figure: TC configuration, the generic SW hardware-offload layer, and the vendor driver program CT state into the eSwitch (datapath); NetDev1/NetDev2/…/NetDev n, VFs, and the PF map onto eSwitch vPorts.]
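  A sketch of what the extended TC flower CT rules look like (kernel 5.6+ syntax; device names are placeholders, as shown below – not the exact rules from the deck):

      # One-time: attach an ingress qdisc to the uplink
      tc qdisc add dev enp3s0f0 ingress
      # Chain 0: send untracked IP packets through conntrack, then continue in chain 1
      tc filter add dev enp3s0f0 ingress prio 1 chain 0 protocol ip flower \
          ct_state -trk action ct action goto chain 1
      # Chain 1: forward established connections in hardware
      tc filter add dev enp3s0f0 ingress prio 1 chain 1 protocol ip flower \
          ct_state +trk+est action mirred egress redirect dev vf0_rep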

  17. ASAP² CT – OpenStack Integration
  ▪ All changes for OVS connection tracking offloads are transparent to OpenStack
  ▪ No OpenStack changes – works seamlessly with no modifications!
  ▪ Just use the OVS firewall driver to provision OpenStack security groups
  ▪ It just works!
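  Concretely, that means selecting the native OVS firewall driver in the Neutron OVS agent configuration (the file path may vary by deployment):

      # /etc/neutron/plugins/ml2/openvswitch_agent.ini
      [securitygroup]
      firewall_driver = openvswitch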

  18. ASAP² CT – Open Source Software Contributions
  ➢ Linux kernel modules
    ✓ TC flower (support for CT match/action)
    ✓ CT offload modules
    ✓ Netfilter
    ✓ Mellanox drivers
  ➢ OVS user space
  ➢ Linux iproute2

  19. Demo: Setup
  ▪ Load generator: runs T-Rex, 12 cores
  ▪ OVS host (24 cores)
    ▪ 1 core for the slow path (softirq + OVS)
    ▪ 1 VM running DPDK over 4 vCPUs
    ▪ 1 VF passthrough to the VM via SR-IOV
    ▪ VxLAN is configured
  ▪ Both hosts connected by ConnectX-5 Ex 100GbE
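  The VxLAN side of a setup like this is typically just an OVS tunnel port; a sketch (the bridge name, remote IP, and VNI key are placeholders):

      # Add a VxLAN tunnel port to the OVS bridge
      ovs-vsctl add-port br-ovs vxlan0 -- set interface vxlan0 \
          type=vxlan options:remote_ip=192.168.1.2 options:key=98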

  20. Demo: OpenFlow Rules for Connection Tracking
  ▪ Trivial connection tracking rules:
    ▪ 'table=0, arp, action=normal'
    ▪ 'table=0, ip, ct_state=-trk, action=ct(table=1)'
    ▪ 'table=1, priority=1, ip, ct_state=+trk+new, action=ct(commit),normal'
    ▪ 'table=1, priority=1, ip, ct_state=+trk+est, action=normal'
  [Figure: table 0 passes ARP straight through and sends untracked IP packets to CT; table 1 commits new connections and forwards established ones normally.]
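  Installed with ovs-ofctl, these become (the bridge name br-ovs is a placeholder):

      ovs-ofctl add-flow br-ovs "table=0, arp, action=normal"
      ovs-ofctl add-flow br-ovs "table=0, ip, ct_state=-trk, action=ct(table=1)"
      ovs-ofctl add-flow br-ovs "table=1, priority=1, ip, ct_state=+trk+new, action=ct(commit),normal"
      ovs-ofctl add-flow br-ovs "table=1, priority=1, ip, ct_state=+trk+est, action=normal"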

  21. Demo ▪ Best Case ▪ Worst Case

  22. Initial state

  23. Slow Path

  24. Slow Path to Fast Path

  25. Fast Path

  26. Demo Results (Mellanox Lab-Tested Results)
  ▪ 5K UDP streams over OVS with HW offload
    ▪ Best-case performance from the HW perspective
    ▪ 45 MPPS
  ▪ 200K UDP streams over OVS with HW offload
    ▪ Worst-case performance from the HW perspective (100% cache miss)
    ▪ 15 MPPS
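  To see which flows are actually running in hardware during a test like this, OVS can dump only the offloaded datapath flows (interface name below is a placeholder):

      # Show only the datapath flows that were offloaded to the NIC
      ovs-appctl dpctl/dump-flows type=offloaded
      # Per-rule hardware packet/byte counters are also visible through TC
      tc -s filter show dev enp3s0f0 ingress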

  27. JD Cloud SmartNIC and Conntrack Offload
  Zhike Wang, IaaS Architect, JD Cloud
  Nov 4, 2019

  28. Agenda
  ➢ Why SmartNIC?
  ➢ What are JD Cloud's SmartNIC solution requirements?
  ➢ Flavors of SmartNIC
  ➢ Security Groups & Conntrack
  ➢ Conntrack offload challenges
  ➢ Use-case scenarios in JD Cloud
  ➢ JD Cloud SmartNIC performance in production
  ➢ JD Cloud ongoing work
