
Comparison of Network Interface Controllers for Software Packet Processing



  1. Comparison of Network Interface Controllers for Software Packet Processing (Final Talk)
     Alexander Frank
     Advisors: Paul Emmerich, Sebastian Gallenmüller, Dominik Scholz
     Supervisor: Prof. Dr.-Ing. Georg Carle
     Chair of Network Architectures and Services, Department of Informatics, Technical University of Munich
     October 2, 2017

  2. Contents
     • Motivation
     • Hardware Comparison
     • Software Comparison
     • Future Work
     • Bibliography

  3. Motivation

  4. Motivation - General
     Used software:
     • MoonGen: high-speed packet generator based on Lua scripts
     • libmoon: Lua wrapper for DPDK
     • DPDK (Data Plane Development Kit): provides drivers and libraries for fast packet processing
     Hardware support:
     • DPDK supports NICs from Intel, Mellanox, Broadcom, Chelsio, ...
     • MoonGen/libmoon (technically) support all hardware supported by DPDK
     • MoonGen/libmoon have only been tested against Intel hardware
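
To give an idea of the programming model DPDK exposes to MoonGen/libmoon, here is a minimal receive loop. This is a hedged sketch rather than code from the thesis; port 0, a single queue and the pool/ring sizes are arbitrary example values, and error handling is trimmed.

    /* Minimal DPDK receive loop (illustrative sketch): initialize the EAL,
     * attach one RX queue to port 0 and poll it in bursts. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return EXIT_FAILURE;

        struct rte_mempool *pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        struct rte_eth_conf conf = {0};
        uint16_t port = 0;                                     /* example port id */

        rte_eth_dev_configure(port, 1, 1, &conf);              /* 1 RX, 1 TX queue */
        rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, pool);
        rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), NULL);
        rte_eth_dev_start(port);

        for (;;) {
            struct rte_mbuf *bufs[32];
            uint16_t n = rte_eth_rx_burst(port, 0, bufs, 32);  /* poll, no interrupts */
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(bufs[i]);                     /* just drop packets here */
        }
    }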

  5. Motivation - Introduction to MoonGen/libmoon
     Figure 1: MoonGen/libmoon software stack. User-written MoonGen and libmoon scripts sit on top of the
     config APIs; MoonGen adds rate control and timestamping, libmoon is the Lua wrapper for DPDK with
     custom drivers, and DPDK contributes the libraries and drivers that drive the NIC hardware.

  6. Motivation - Goals of the Thesis
     Programming: integrate Mellanox NICs into libmoon/MoonGen
     • Change the build system to automatically handle Mellanox drivers and dependencies
     • Enable hardware filtering
     • Uniform packet counting and statistics
     Research: compare three selected NICs in terms of features and capabilities
     • Hardware level: interfaces, offloads, timestamping
     • Software level: integration with DPDK, filter support

  7. Hardware Comparison

  8. Tested NICs
     Table 1: Investigated Intel and Mellanox NICs
     Vendor     NIC                          Driver
     Intel      Ethernet Controller X550T    ixgbe
     Intel      Ethernet Controller XL710    i40e
     Mellanox   ConnectX-4 Lx                mlx5
     All adapters shown here are compatible with Ethernet. For testing we used 10 Gb/s versions of the NICs.

  9. Hardware Interfaces
     All three NICs provide similar types of interfaces. Relevant for throughput: the network interface and the PCIe interface.
     Table 2: Investigated Intel and Mellanox NIC series. Port speed refers to usage of a single port [3], [6], [4]
     NIC Series   Ports    Port Speed   PCIe Interface
     X550         1/2      10 Gb/s      v3.0 x4  @ 31.504 Gb/s
     710          1/2/4*   40 Gb/s      v3.0 x8  @ 63.016 Gb/s
     ConnectX-4   1/2      100 Gb/s     v3.0 x16 @ 126.032 Gb/s
     * The XL710 provides 4 MAC interfaces for 2 physical ports; these can be utilized with a breakout cable.
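
The PCIe figures in Table 2 follow from PCIe 3.0's raw rate of 8 GT/s per lane and its 128b/130b line coding; a quick sanity check (my own arithmetic, which matches the table values up to rounding in the sources):

    x4  (X550):        4 × 8 GT/s × 128/130 ≈  31.5 Gb/s
    x8  (710 series):  8 × 8 GT/s × 128/130 ≈  63.0 Gb/s
    x16 (ConnectX-4): 16 × 8 GT/s × 128/130 ≈ 126.0 Gb/s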

  10. Timestamping
      Timestamping is used by MoonGen for high-precision latency measurements.
      Drawbacks of software timestamping:
      • Inaccurate, as software does not know when exactly a packet leaves the port
      • Might in itself influence the timing characteristics
      → Hardware timestamping is desirable
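
For orientation, DPDK exposes hardware timestamping through its IEEE 1588 "timesync" API; the calls below exist in DPDK, but this is only a hedged sketch of how a transmit timestamp could be read with them, not how MoonGen implements its latency measurements.

    /* Sketch: reading a hardware TX timestamp via DPDK's timesync API.
     * Assumes a PTP packet is sent with the IEEE 1588 TX offload flag set;
     * error handling omitted. */
    #include <stdio.h>
    #include <time.h>
    #include <rte_ethdev.h>

    static void read_tx_timestamp(uint16_t port)
    {
        struct timespec ts;

        rte_eth_timesync_enable(port);     /* enable IEEE 1588/802.1AS timestamping */

        /* ... transmit a PTP packet on this port ... */

        /* Poll until the NIC has latched the transmit timestamp. */
        while (rte_eth_timesync_read_tx_timestamp(port, &ts) != 0)
            ;
        printf("TX timestamp: %ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
    }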

  11. Timestamping - Intel
      • X550: 80 MHz clock with precision ± 12.5 ns
      • 710: 625 MHz clock with precision ± 0.8 ns
      (Values depend on the configured link speed, 40 Gb/s in this case)
      Figure 2: Timestamping with X550 and 710 series NICs [6]. For Ethernet, the timestamp point is the first octet following the start of frame delimiter.

  12. Timestamping - Mellanox
      Figure 3: Timestamps with ConnectX-4 based devices [4]
      1. Software posts WQEs (Work Queue Entries) to the work queues (SQ: send work queue, RQ: receive work queue)
      2. WQE ownership is passed to the hardware
      3. Hardware accesses and executes the WQEs
      4. Hardware posts a CQE (Completion Queue Entry) to the completion queue (CQ); the CQE contains the timestamp
      5. Software reads the CQE
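
With raw verbs, step 5 is where the timestamp becomes visible: it is read while polling the completion queue. The following is a hedged sketch using libibverbs' extended CQ API (context and queue pair setup are assumed to exist elsewhere; the returned value is in device clock ticks, not nanoseconds):

    /* Sketch: reading the CQE completion timestamp with libibverbs'
     * extended CQ API. The CQ must be created with timestamping requested. */
    #include <infiniband/verbs.h>

    struct ibv_cq_ex *create_ts_cq(struct ibv_context *ctx)
    {
        struct ibv_cq_init_attr_ex attr = {
            .cqe      = 256,                                   /* example depth */
            .wc_flags = IBV_WC_EX_WITH_COMPLETION_TIMESTAMP,   /* ask for timestamps */
        };
        return ibv_create_cq_ex(ctx, &attr);
    }

    uint64_t poll_one_timestamp(struct ibv_cq_ex *cq)
    {
        struct ibv_poll_cq_attr poll_attr = {0};
        uint64_t ticks = 0;

        if (ibv_start_poll(cq, &poll_attr) == 0) {             /* step 5: read a CQE */
            if (cq->status == IBV_WC_SUCCESS)
                ticks = ibv_wc_read_completion_timestamp(cq);  /* device clock ticks */
            ibv_end_poll(cq);
        }
        return ticks;
    }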

  13. Timestamping - Summary
      Point in time at which the timestamp is generated:
      • Intel's NICs generate the timestamp when a certain bit leaves the port
      • Mellanox' NICs include a timestamp of the CQE generation
      • When exactly is a CQE generated? Is it deterministic?
      The clock frequencies of the Mellanox cards must be queried from hardware and are not documented.
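
The clock frequency in question can be obtained from the extended device attributes; a small sketch follows. The unit of hca_core_clock is whatever the driver reports (commonly kHz on mlx5 devices); treat that as an assumption to verify rather than a documented guarantee.

    /* Sketch: querying the ConnectX-4 core clock frequency that is needed to
     * convert CQE timestamp ticks into nanoseconds. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    void print_core_clock(struct ibv_context *ctx)
    {
        struct ibv_device_attr_ex attr;

        if (ibv_query_device_ex(ctx, NULL, &attr) == 0)
            /* Unit is driver-defined (typically kHz on mlx5 devices). */
            printf("hca_core_clock: %llu\n",
                   (unsigned long long)attr.hca_core_clock);
    }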

  14. Software Comparison

  15. DPDK
      Figure 4: DPDK and driver relation, Mellanox left, Intel right [2], [5]
      Mellanox stack: MoonGen/libmoon and the DPDK libraries run in userspace on top of the mlx5 PMD, which uses the user verbs libraries (libibverbs + libmlx5). The control path goes through the kernel verbs layer, the mlx5 verbs provider (mlx5_ib) and mlx5_core to the ConnectX-4/5 hardware; the data path bypasses the kernel.
      Intel stack: MoonGen/libmoon and the DPDK libraries run on top of the i40e PMD/ixgbe driver in userspace. The control path goes through igb_uio or uio_pci_generic in the kernel to the XL710/X550T hardware; the data path accesses the hardware directly.

  16. DPDK with Intel
      • DPDK was originally developed by Intel
      • Usage of UIO modules → the driver runs completely in userspace
      • Hardware registers are directly mapped into userspace
      Figure 5: DPDK with ixgbe [2], [5]
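
The "registers mapped into userspace" part relies on plain Linux UIO: the module exposes the NIC's register BAR through a /dev/uioX device that can be mmap'ed. The sketch below illustrates the mechanism only; it is not DPDK's actual PCI/UIO code, and the device path and BAR size are example values.

    /* Sketch of the UIO mechanism behind igb_uio/uio_pci_generic: map the
     * device's first memory region (typically BAR 0, the register space)
     * straight into the process' address space. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    volatile uint32_t *map_bar0(const char *uio_dev, size_t bar_size)
    {
        int fd = open(uio_dev, O_RDWR);   /* e.g. "/dev/uio0" */
        if (fd < 0)
            return NULL;

        /* Per the kernel UIO API, mapping N is selected by the offset
         * N * page size; offset 0 selects mapping 0. */
        void *regs = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                        /* the mapping stays valid after close */
        return regs == MAP_FAILED ? NULL : (volatile uint32_t *)regs;
    }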

  17. DPDK with Mellanox 1
      Mellanox uses the InfiniBand software stack. Why?
      Mellanox background:
      • The portfolio is centered around InfiniBand-capable hardware
      • Adapter types: VPI (InfiniBand and Ethernet), EN (Ethernet only)
      • We use EN types; the software stack is the same for VPI and EN
      InfiniBand background:
      • InfiniBand provides very low latencies and is mainly used for HPC
      • Verbs are abstract definitions of NIC operations defined in the InfiniBand standard
      • Verbs are implemented by third-party verbs providers

  18. DPDK with Mellanox 2
      • The mlx5 PMD is the userspace driver and interfaces with the verbs layer
      • mlx5_core is the kernel-level driver
      • Kernel verbs and the mlx5 verbs provider (mlx5_ib) implement the InfiniBand verbs
      • This setup allows packets to be received and sent via the OS' network stack on ports which are not used by DPDK
      Figure 6: DPDK with mlx5 [2], [5]

  19. Benefits of InfiniBand - RDMA
      Figure 7: Comparison of data flow with and without RDMA [1]. With a traditional interconnect, data is copied between buffers in the application, the sockets layer, the transport/protocol driver and the NIC driver on its way to the NIC. With an RDMA zero-copy interconnect, the RNIC transfers data directly from and to the application buffer.

  20. DPDK support
      Timestamping:
      • The ixgbe and i40e drivers implement timestamping
      • The mlx5 driver currently does not support timestamping
      • A patch addressing this is under review on DPDK's Patchwork
      HW filtering:
      • Intel devices support DPDK's old filtering framework
      • Mellanox supports this framework only partially
      • DPDK 17 introduces a new framework with better support by Mellanox
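
The new framework in DPDK 17 is presumably the generic flow API (rte_flow). As a hedged illustration of what hardware filtering looks like through that API, the sketch below installs a rule that steers packets with one fixed IPv4 destination to RX queue 1; the address and queue index are made-up example values.

    /* Sketch: installing a flow rule with DPDK's generic flow API (rte_flow).
     * Matches a fixed IPv4 destination and steers it to RX queue 1. */
    #include <rte_byteorder.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    struct rte_flow *steer_ipv4_dst(uint16_t port)
    {
        struct rte_flow_attr attr = { .ingress = 1 };

        struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = rte_cpu_to_be_32(0x0a000001),     /* 10.0.0.1 (example) */
        };
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),     /* exact match */
        };

        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        struct rte_flow_error err;
        if (rte_flow_validate(port, &attr, pattern, actions, &err) != 0)
            return NULL;                                      /* rule not offloadable */
        return rte_flow_create(port, &attr, pattern, actions, &err);
    }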
