SLIDE 1

Linux Bridge, l2-overlays, E-VPN!

Roopa Prabhu Cumulus Networks

SLIDE 2

This tutorial is about ...

  • Linux bridge at the center of data center Layer-2 deployments
  • Deploying Layer-2 network virtualization overlays with Linux
  • Linux hardware vxlan tunnel end points
  • Ethernet VPNs: BGP as a control plane for network virtualization overlays

SLIDE 3

Tutorial Focus/Goals ..

  • Outline and document Layer-2 deployment models with Linux bridge
  • Focus is on data center deployments
    ▪ All examples are from a TOR (Top-of-the-rack) switch running Linux bridge

SLIDE 4

Tutorial flow ...

  • Data center Layer-2 networks
  • Linux bridge
  • Layer-2 overlay networks
  • Linux bridge and Vxlan
  • E-VPN: BGP control plane for overlay networks
  • Linux bridge and E-VPN

SLIDE 5

Data Center Network Basics

  • Racks of servers grouped into PODs
  • Vlans and subnets stretched across racks or PODs
  • Overview of data center network designs [1]
    ▪ Layer 2
    ▪ Hybrid layer 2-3
    ▪ Layer 3
  • Modern data center networks:
    ▪ Clos topology [2]
    ▪ Layer 3 or hybrid layer 2-3

SLIDE 6

Modern Data center network

(Diagram: spine and leaf/TOR topology)

SLIDE 7

Hybrid layer-2 - layer-3 data center network

(Diagram: spine and leaf (TOR) topology; the layer 2-3 boundary is at the leaf, which acts as the layer-2 gateway)

SLIDE 8

Layer-3 only data center network

(Diagram: spine and leaf (TOR) topology; the layer-3 boundary is at the leaf, which acts as the layer-3 gateway)

SLIDE 9

Layer-2 Gateways with Linux Bridge

SLIDE 10

Layer-2 Gateways with Linux Bridge

  • Connect layer-2 segments with bridge
  • Bridge within same vlans
  • TOR switch can be your L2 gateway, bridging between vlans on the servers in the same rack

SLIDE 11

What do you need ?

  • TOR switches running Linux bridge
  • Switch ports are bridge ports
  • Bridge in vlan filtering or non-vlan filtering mode; the Linux bridge supports two modes:
    ▪ A more modern, scalable vlan filtering mode
    ▪ The old traditional non-vlan filtering mode

(A minimal creation sketch for both modes follows below.)
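
A minimal sketch with iproute2; the names (bridge, br100, swp1) and vlan 100 are illustrative assumptions, not taken from the slides:

$ # vlan filtering bridge: a single bridge carries all vlans
$ ip link add dev bridge type bridge vlan_filtering 1
$ ip link set dev swp1 master bridge
$ bridge vlan add vid 100 dev swp1

$ # traditional non-vlan filtering bridge: one bridge per vlan,
$ # built from vlan sub-interfaces of the switch ports
$ ip link add dev br100 type bridge
$ ip link add link swp1 name swp1.100 type vlan id 100
$ ip link set dev swp1.100 master br100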
SLIDE 12

Layer-2 switching within a vlan

(Diagram) Non-vlan filtering bridge: vlan sub-interfaces swp1.100 and swp2.100 enslaved to a per-vlan bridge.
(Diagram) Vlan filtering bridge: swp1 and swp2 enslaved to a single bridge, with vlan 100 configured on both ports.

SLIDE 13

Routing between vlans

(Diagram) Non-vlan filtering bridge: per-vlan bridges bridge10 (10.0.1.20) and bridge20 (10.0.3.20), with ports swp1.10/swp2.10 and swp1.20/swp2.20; routing happens between the per-vlan bridge devices.
(Diagram) Vlan filtering bridge: a single bridge with ports swp1 and swp2 carrying vlans 10 and 20; vlan interfaces bridge.10 (10.0.1.20) and bridge.20 (10.0.3.20) on the bridge are used for routing (a configuration sketch follows below).
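
A configuration sketch of the vlan filtering variant (the addresses come from the diagram; prefix lengths and the port-to-vlan assignment are assumptions):

$ ip link add dev bridge type bridge vlan_filtering 1
$ ip link set dev swp1 master bridge
$ ip link set dev swp2 master bridge
$ bridge vlan add vid 10 dev swp1
$ bridge vlan add vid 20 dev swp2
$ # let both vlans reach the bridge device itself
$ bridge vlan add vid 10 dev bridge self
$ bridge vlan add vid 20 dev bridge self
$ # vlan interfaces on the bridge are used for inter-vlan routing
$ ip link add link bridge name bridge.10 type vlan id 10
$ ip link add link bridge name bridge.20 type vlan id 20
$ ip addr add 10.0.1.20/24 dev bridge.10
$ ip addr add 10.0.3.20/24 dev bridge.20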

SLIDE 14

Scaling with Linux bridge

A vlan filtering bridge results in far fewer netdevices overall. Example: deploying vlans 1-2000 on 32 ports:

  • Non-vlan filtering bridge:
    ▪ Ports + 2000 vlan devices per port + 2000 bridge devices
    ▪ 32 + 2000 * 32 + 2000 = 66032 netdevices
  • Vlan filtering bridge:
    ▪ 32 ports + 1 bridge device + 2000 vlan devices on the bridge for routing
    ▪ 32 + 1 + 2000 = 2033 netdevices

SLIDE 15

L2 gateway on the TOR with Linux bridge

(Diagram: spine layer above leaf1, leaf2 and leaf3; each leaf (TOR) runs a bridge with host-facing ports swp1/swp2 and vlan interfaces bridge.10, bridge.20 and bridge.30; hosts/VMs with mac1/VLAN-10, mac2/VLAN-20, mac3/VLAN-30, mac11/VLAN-10, mac22/VLAN-20 and mac33/VLAN-30 sit in racks 1-3)

  • leaf* are l2 gateways
  • Bridge within the same vlan and rack, and route between vlans
  • bridge.* vlan interfaces are used for routing

SLIDE 16

Bridge features and flags

  • Learning
  • IGMP snooping
  • Selective control of broadcast, multicast and unknown unicast traffic
  • ARP and ND proxying
  • STP

(A sketch of where these knobs live follows below.)
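
A rough sketch of the corresponding iproute2 knobs (availability of individual flags depends on the kernel version; swp1 is an illustrative port name):

$ # bridge-wide: IGMP snooping and STP
$ ip link set dev bridge type bridge mcast_snooping 1 stp_state 1
$ # per-port: mac learning, and ARP/ND proxying via neigh_suppress (kernel 4.15+)
$ bridge link set dev swp1 learning on
$ bridge link set dev swp1 neigh_suppress on
$ # per-port flood controls are shown later in the BUM section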
SLIDE 17

Note: for the rest of this tutorial we will only use the vlan filtering bridge, for simplicity.

SLIDE 18

Layer-2 - Overlay Networks

SLIDE 19

Overlay networks basics

  • Overlay networks are an approach for providing network virtualization services to a set of Tenant Systems (TSs)
  • Overlay networks achieve network virtualization by overlaying layer-2 networks over physical layer-3 networks

SLIDE 20

Network Virtualization End-points

  • Network virtualization endpoints (NVEs) provide a logical interconnect between Tenant Systems that belong to a specific Virtual Network (VN)
  • The NVE implements the overlay protocol (e.g. vxlan)
SLIDE 21

NVE Types

  • Layer-2 NVE

▪ Tenant Systems appear to be interconnected by a LAN environment over an L3 underlay

  • Layer-3 NVE

▪ An L3 NVE provides virtualized IP forwarding service, similar to IP VPN

SLIDE 22

Overlay network

(Diagram: two NVEs connected over an L3 underlay network, each attaching Tenant Systems (TS))

SLIDE 23

Why Overlay networks ?

  • Isolation between tenant systems
  • Stretch layer-2 networks across racks, PODs, and inter or intra data centers
    ▪ Layer-2 networks are stretched to allow VMs talking over the same broadcast domain to continue after VM mobility without changing network configuration
    ▪ In many cases this is also needed because software licensing is tied to mac addresses

SLIDE 24

Why Overlay networks ? (Continued)

  • Leverage benefits of L3 networks while maintaining L2 reachability
  • Cloud computing demands:
    ▪ Multi-tenancy
    ▪ Abstract physical resources to enable sharing

SLIDE 25

NVE deployment options

Overlay network end-points (NVEs) can be deployed on:

  • The host, hypervisor or container OS (the system where the Tenant Systems are located), OR
  • The Top-of-the-rack (TOR) switch
SLIDE 26

VTEP on the servers or the TOR ?

Vxlan tunnel endpoint on the servers:

  • Hypervisor or container orchestration systems can directly map tenants to a VNI
  • Works very well in a pure layer-3 datacenter: terminate the VNI on the servers

Vxlan tunnel endpoint on the TOR:

  • A TOR can act as an l2 overlay gateway mapping tenants to a VNI
  • Vxlan encap and decap at line rate in hardware
  • Tenants are mapped to vlans; vlans are mapped to a VNI at the TOR

SLIDE 27

Layer-2 Overlay network dataplane: vxlan

  • VNI - virtual network identifier (24 bits)
  • Vxlan tunnel endpoints (VTEPs) encap and decap vxlan packets
  • A VTEP has a routable ip address
  • Linux vxlan driver
  • Tenant to vni mapping (a minimal creation sketch follows below)
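
A minimal sketch of creating a vxlan tunnel endpoint with the Linux vxlan driver (the VNI, local address and UDP port are illustrative; dstport 4789 selects the IANA vxlan port rather than the older Linux default):

$ ip link add vxlan-10 type vxlan id 10 local 10.1.1.1 dstport 4789
$ ip link set dev vxlan-10 up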
SLIDE 28

Vxlan tunnel end-point on the hypervisor

(Diagram: spine and leaf (TOR) topology; the layer-3 boundary is at the leaf, which acts as the layer-3 gateway or overlay gateway; vteps are on the hypervisors)

SLIDE 29

Vxlan tunnel end-point on the TOR switch

(Diagram: spine and leaf (TOR) topology; the layer 2-3 boundary is at the leaf, which acts as the layer-2 overlay gateway with vxlan vteps; vlans run to the hypervisors)

SLIDE 30

Linux vxlan tunnel end point (layer-3)

  • Tenant systems directly mapped to VNI

(Diagram: two L3 gateways, each running the vxlan driver and terminating tenant systems directly, connected by vxlan tunnels over the L3 underlay)

SLIDE 31

Linux Layer-2 overlay gateway: vxlan

  • Tenant systems mapped to vlans
  • Linux bridge on the TOR maps vlans to vni

(Diagram: two Linux bridge gateways, each with a vxlan driver, translating vlans to vxlan and vxlan back to vlans; tenant systems attach over vlans, and the gateways connect to each other over the L3 network via vxlan)

SLIDE 32

FDB Learning options

  • Flood and learn (default)
  • Control plane learning
    ▪ Control plane protocols disseminate end point address mappings to vteps
    ▪ Typically done via a controller
  • Static mac install via orchestration tools (a sketch follows below)
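
A minimal sketch of a static mac install as an orchestration tool might do it with iproute2 (the mac, device names and vtep address are illustrative):

$ # mac entry in the bridge fdb, pointing at the vxlan bridge port
$ bridge fdb add 00:02:00:00:00:02 dev vxlan-10 vlan 10 master static
$ # remote vtep reachability in the vxlan fdb ('self' selects the vxlan driver's own table)
$ bridge fdb add 00:02:00:00:00:02 dev vxlan-10 dst 10.1.1.2 self static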
SLIDE 33

Layer-2 overlay gateway tunnel fdb tables

(Diagram: the Linux bridge driver with a local port and a remote tunnel port; the tunnel driver maps the vlan to a tunnel id; each driver keeps its own fdb table, and remote dst information lives in the tunnel fdb)

  • The Linux bridge and tunnel endpoints maintain separate fdb tables
  • The Linux bridge fdb table contains all macs in the stretched L2 segment
  • The tunnel end point fdb table contains remote dst reachability information

SLIDE 34

Bridge and vxlan driver fdb

Bridge fdb:
  <local_mac>, <vlan>, <local_port>
  <remote_mac>, <vlan>, <vxlan port>

Vxlan fdb:
  <remote_mac>, <vni>, <remote vtep dst>

  • The vlan is mapped to a vni
  • The vxlan fdb is an extension of the bridge fdb table, with additional remote dst info per fdb entry
  • A vlan entry in a bridge fdb entry maps to a vni in the vxlan fdb
SLIDE 35

Broadcast, unknown unicast and multicast traffic (BUM)

  • An l2 network by default floods unknown traffic
  • Unnecessary traffic leads to wasted bandwidth and cpu cycles
  • This is aggravated when l2 networks are stretched over larger areas: across racks, PODs or data centers
  • Various optimizations can be considered in such stretched l2 overlay networks

SLIDE 36

Bridge driver handling of BUM traffic

The bridge driver has separate per-port controls to drop broadcast, unknown unicast and multicast traffic (a sketch follows below).
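
A minimal sketch of those per-port controls (swp1 is an illustrative port; bcast_flood requires a newer kernel):

$ bridge link set dev swp1 flood off        # drop unknown unicast
$ bridge link set dev swp1 mcast_flood off  # drop unregistered multicast
$ bridge link set dev swp1 bcast_flood off  # drop broadcast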

SLIDE 37

Vxlan driver handling of BUM traffic

  • Multicast:
    ▪ Use a multicast group to forward BUM traffic to registered vteps
    ▪ The multicast group can be specified during creation of the vxlan device
  • Head end replication:
    ▪ A default remote vtep list to replicate BUM traffic to
    ▪ Specified by vxlan all-zero fdb entries pointing to the remote vtep list
  • Flood: simply flood to all remote ports
    ▪ The control plane can minimize flooding by making sure every vtep knows the remote end-points it cares about

(A sketch of the first two options follows below.)
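
A hedged sketch of the first two options (the VNI, multicast group, underlay device and vtep addresses are illustrative):

$ # option 1: send BUM traffic to a multicast group, chosen at creation time
$ ip link add vxlan-10 type vxlan id 10 group 239.1.1.10 dev swp32 dstport 4789
$ # option 2: head end replication via all-zero fdb entries, one per remote vtep
$ bridge fdb append 00:00:00:00:00:00 dev vxlan-10 dst 10.1.1.2 self permanent
$ bridge fdb append 00:00:00:00:00:00 dev vxlan-10 dst 10.1.1.3 self permanent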

SLIDE 38

Vxlan netdev types

  • A traditional vxlan netdev
    ▪ Deployed with one netdev per vni
    ▪ Each vxlan netdev maintains a forwarding database (fdb) for its vni
    ▪ Fdb entries are hashed by mac
  • Recent kernels support deploying a single vxlan netdev for all VNIs
    ▪ This mode is called collect_metadata or LWT mode
    ▪ A single forwarding database (fdb) for all VNIs
    ▪ Fdb entries are hashed by <mac, VNI>

(A creation sketch for both types follows below.)
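
A minimal creation sketch for both types (device names, VNI and local address are illustrative):

$ # traditional: one netdev per vni
$ ip link add vxlan-10 type vxlan id 10 local 10.1.1.1 dstport 4789
$ # single netdev for all VNIs (collect_metadata / external mode)
$ ip link add vxlan0 type vxlan external local 10.1.1.1 dstport 4789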

SLIDE 39

Linux L2 vxlan overlay gateway example

SLIDE 40

Building a Linux l2 overlay gateway

  • Vxlan tunnel netdevice(s) for encap and decap
  • Linux bridge device with local and vxlan ports
  • Bridge maps vlan to vni
  • Bridge switches
    ▪ Vlan traffic from local ports to remote vxlan ports
    ▪ Remote traffic from vxlan ports to local vlan ports

SLIDE 41

Recipe-1 (one vxlan netdev per vni; example shows leaf1 config)

SLIDE 42

Recipe 1: create all your netdevs

$ # create bridge device:
$ ip link add dev bridge type bridge
$ # create vxlan netdev:
$ ip link add dev vxlan-10 type vxlan id 10 local 10.1.1.1
$ # enslave local and remote ports
$ ip link set dev vxlan-10 master bridge
$ ip link set dev swp1 master bridge

SLIDE 43

Recipe 1: Configure vlan filtering and vlans

$ # configure vlan filtering on bridge
$ ip link set dev bridge type bridge vlan_filtering 1

$ # configure vlans
$ bridge vlan add vid 10 dev vxlan-10
$ bridge vlan add vid 10 untagged pvid dev vxlan-10
$ bridge vlan add vid 10 dev swp1

SLIDE 44

Recipe 1: Add default fdb entries

$ # add your default remote dst forwarding entries (one per remote vtep)
$ bridge fdb add 00:00:00:00:00:00 dev vxlan-10 dst 10.1.1.2 self permanent
$ bridge fdb append 00:00:00:00:00:00 dev vxlan-10 dst 10.1.1.3 self permanent

SLIDE 45

Recipe 1: Here's how it all looks

(Diagram: leaf1, leaf2 and leaf3 connect to the spine over the L3 underlay; each leaf runs a bridge with local port swp1 and vxlan-10, with vteps 10.1.1.1, 10.1.1.2 and 10.1.1.3; Host/VM 1 (mac1, VLAN-10), Host/VM 2 (mac2, VLAN-10) and Host/VM 3 (mac3, VLAN-10) sit in racks 1, 2 and 3)

On leaf1:
$ bridge fdb show
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan-10 vlan 10 master bridge
mac2 dev vxlan-10 dst 10.1.1.2 self
mac3 dev vxlan-10 vlan 10 master bridge
mac3 dev vxlan-10 dst 10.1.1.3 self

On leaf3:
$ bridge fdb show
mac3 dev swp1 vlan 10 master bridge
mac2 dev vxlan-10 vlan 10 master bridge
mac2 dev vxlan-10 dst 10.1.1.2 self
mac1 dev vxlan-10 vlan 10 master bridge
mac1 dev vxlan-10 dst 10.1.1.1 self

SLIDE 46

Zoom into the bridge config on the TOR switches

(Diagram: bridge with ports swp1 and vxlan-10, both carrying vlan 10)

$ bridge vlan show
port      vlan ids
swp1      1 PVID Egress Untagged
          10
vxlan-10  10 PVID Egress Untagged

  • Vlan 10 is mapped to vxlan vni 10
SLIDE 47

Zoom into bridge and vxlan driver fdb tables

Bridge fdb:
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan-10 vlan 10 master bridge
mac3 dev vxlan-10 vlan 10 master bridge

Vxlan-10 fdb:
mac2 dev vxlan-10 dst 10.1.1.2 self
mac3 dev vxlan-10 dst 10.1.1.3 self

  • The vlan is mapped to the vni
  • The vxlan fdb is an extension of the bridge fdb table, with additional remote dst info per fdb entry
  • A vlan entry in a bridge fdb entry maps to a vni in the vxlan fdb
SLIDE 48

Recipe 1: check your running kernel state

$ ip link show master bridge

$ bridge vlan show
port      vlan ids
vxlan-10  10 PVID Egress Untagged
swp1      1 PVID Egress Untagged
          10
bridge    None

$ bridge fdb show
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan-10 vlan 10 master bridge
mac2 dev vxlan-10 dst 10.1.1.2 self
mac3 dev vxlan-10 vlan 10 master bridge
mac3 dev vxlan-10 dst 10.1.1.3 self

$ # check bridge flags
$ ip -d link show dev bridge

SLIDE 49

Recipe-2 (single vxlan netdev; example shows leaf1 config)

SLIDE 50

Recipe 2: create all your netdevs

$ # create bridge device:
$ ip link add dev bridge type bridge
$ # create vxlan netdev (a single device for all VNIs):
$ ip link add dev vxlan0 type vxlan external local 10.1.1.1
$ # enslave local and remote ports
$ ip link set dev vxlan0 master bridge
$ ip link set dev swp1 master bridge

SLIDE 51

Recipe 2: Enable vlan filtering and vlan_tunnel mode

$ #configure vlan filtering on bridge

$ ip link set dev bridge type bridge vlan_filtering 1
$ # enable tunnel mode on the vxlan tunnel bridge port
$ bridge link set dev vxlan0 vlan_tunnel on

SLIDE 52

Recipe 2: configure vlans

$ # configure vlans
$ bridge vlan add vid 10 dev vxlan0
$ bridge vlan add vid 10 dev swp1
$ # set tunnel mappings on the ports per vlan
$ # map vlan 10 to tunnel id 10 (in this case vni 10)
$ bridge vlan add dev vxlan0 vid 10 tunnel_info id 10

SLIDE 53

Recipe 2: configure default fdb entries

$ # add your default remote dst forwarding entries (one per remote vtep)
$ bridge fdb add 00:00:00:00:00:00 dev vxlan0 vni 10 dst 10.1.1.2 self permanent
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 vni 10 dst 10.1.1.3 self permanent

SLIDE 54

Recipe 2: Here's how it all looks

(Diagram: leaf1, leaf2 and leaf3 connect to the spine over the L3 underlay; each leaf runs a bridge with local port swp1 and a single vxlan0 device, with vteps 10.1.1.1, 10.1.1.2 and 10.1.1.3; Host/VM 1 (mac1, VLAN-10), Host/VM 2 (mac2, VLAN-10) and Host/VM 3 (mac3, VLAN-10) sit in racks 1, 2 and 3)

On leaf1:
$ bridge fdb show
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan0 vlan 10 master bridge
mac2 dev vxlan0 vni 10 dst 10.1.1.2 self
mac3 dev vxlan0 vlan 10 master bridge
mac3 dev vxlan0 vni 10 dst 10.1.1.3 self

On leaf3:
$ bridge fdb show
mac3 dev swp1 vlan 10 master bridge
mac2 dev vxlan0 vlan 10 master bridge
mac2 dev vxlan0 vni 10 dst 10.1.1.2 self
mac1 dev vxlan0 vlan 10 master bridge
mac1 dev vxlan0 vni 10 dst 10.1.1.1 self

SLIDE 55

Zoom into the bridge config on the leaf switches

(Diagram: bridge with ports swp1 and vxlan0, both carrying vlan 10)

$ bridge vlan show
port    vlan ids
swp1    1 PVID Egress Untagged
        10
vxlan0  1 PVID Egress Untagged
        10

$ bridge vlan tunnelshow
port    vlan id  tunnel id
vxlan0  10       10

  • Vlan 10 is mapped to vxlan vni 10

SLIDE 56

Recipe 2: check your running kernel state

$ bridge vlan show
port    vlan ids
vxlan0  1 PVID Egress Untagged
        10
swp1    1 PVID Egress Untagged
        10
bridge  None

$ bridge vlan tunnelshow
port    vlan id  tunnel id
vxlan0  10       10

$ bridge fdb show
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan0 vlan 10 master bridge
mac2 dev vxlan0 vni 10 dst 10.1.1.2 self
mac3 dev vxlan0 vlan 10 master bridge
mac3 dev vxlan0 vni 10 dst 10.1.1.3 self

$ ip -d link show dev bridge

SLIDE 57

Zoom into bridge and vxlan driver fdb tables

Bridge fdb:
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan0 vlan 10 master bridge
mac3 dev vxlan0 vlan 10 master bridge

Vxlan0 fdb:
mac2 dev vxlan0 vni 10 dst 10.1.1.2 self
mac3 dev vxlan0 vni 10 dst 10.1.1.3 self

  • The vlan is mapped to the vni
  • The vxlan fdb is an extension of the bridge fdb table, with additional remote dst info per fdb entry
  • A vlan entry in a bridge fdb entry maps to a vni in the vxlan fdb
SLIDE 58

Other Network Virtualization Technologies

  • Other overlay data planes:

▪ Geneve, NVGRE, STT

  • ILA - Identifier Locator Addressing

▪ Wise Tom Herbert says ‘Move to IPv6 and use ILA for native network virtualization’ :)

SLIDE 59

Summary overlays:

  • Flood and learn by default
  • Controllers can be used to disseminate MAC addresses to avoid flooding
  • Distributed controllers win over centralized controllers
  • Many controller solutions available, some proprietary
  • Need for an open-standards-based controller: let's dive into the next section, which covers just that

SLIDE 60

Ethernet VPNs (E-VPNS)

slide-61
SLIDE 61

61

What are E-VPNs ?

  • Ethernet VPN, i.e. another form of Layer-2 VPN
    ▪ L2-VPNs are virtual private networks carrying layer-2 traffic
    ▪ Different from VPLS [5, 6]
    ▪ Used to separate tenants at Layer-2
  • Original EVPN RFC 7432 [7]
    ▪ BGP MPLS-based Ethernet VPN
    ▪ Requirements defined in RFC 7209 [8]

SLIDE 62

Why E-VPN ?

  • Overcome limitations of prior L2-VPN technologies like VPLS
  • Support for multihoming and redundancy
  • Control plane learning: no flooding
  • Supports multiple data plane encapsulations
  • Various optimizations
    ▪ Multicast optimization
    ▪ ARP-ND broadcast handling

SLIDE 63

E-VPN use-cases

  • Initially introduced to support l2 vpn provider services to customers
  • Multi-tenant hosting
  • Stretch L2 across PODs in the data center
  • Data center interconnect (DCI) technology
    ▪ Stretch l2 across data centers

SLIDE 64

In this tutorial we look at BGP based E-VPN as a distributed controller for layer-2 network virtualization

SLIDE 65

E-VPN is adopted in the data center with a vxlan overlay. This tutorial will focus on BGP-vxlan based E-VPN.

SLIDE 66

New RFCs to adopt E-VPN in the data center

  • A network virtualization overlay solution using E-VPN [3]
  • BGP based control plane for Vxlan
SLIDE 67

Border Gateway Protocol (BGP)

  • Routing protocol of the internet
  • A typical BGP implementation [10] on Linux installs routes into the kernel FIB
  • With E-VPN, we are telling BGP to also look at layer-2 forwarding entries in the kernel and distribute them to peers

SLIDE 68

BGP E-VPN

  • BGP runs on each vtep
  • Peers with BGP on other vteps
  • Exchanges local MAC and MAC/IP routes with peers
  • Exchanges the VNIs each VTEP is interested in
  • Tracks mac address moves for faster convergence
  • The type of information exchanged is tagged by ‘Route types’
    ▪ MAC or MAC-IP routes are Type 2 routes
    ▪ The BUM replication list is exchanged via Type 3 routes

SLIDE 69

In this tutorial we will only focus on E-VPN on the data center TOR switches running Linux.

SLIDE 70

E-VPN flow (distribute macs)

(Diagram: leaf1, leaf2 and leaf3, each running BGP and a bridge with local port swp1 and vxlan-10 (vteps 10.1.1.1, 10.1.1.2 and 10.1.1.3), connected over the L3 underlay; Host/VM 1 (mac1, IP1, VLAN-10), Host/VM 2 (mac2, IP2, VLAN-10) and Host/VM 3 (mac3, IP3, VLAN-10) sit in racks 1, 2 and 3)

Bridge:
(a) The bridge learns local <mac, vlan> entries in its fdb

BGP:
(a) BGP discovers the local vlan-vni mapping via netlink
(b) BGP reads local bridge <mac, vlan> entries and distributes them to BGP E-VPN peers
(c) BGP learns remote <mac, vni> entries from E-VPN peers and installs them in the kernel bridge fdb table
(d) The kernel bridge fdb table has all local and remote macs for forwarding

SLIDE 71

ARP and ND suppression

  • ARP and ND traffic is by default flooded to all nodes in the broadcast domain
  • “ARP and ND suppression” is an E-VPN function
    ▪ To reduce ARP and ND flooded traffic in such large broadcast domains
    ▪ ARP broadcast traffic problems in large data centers are described in [4]
  • The BGP E-VPN control plane knows remote MAC-IPs
    ▪ These remote MAC-IPs can be used to proxy local ARP-ND requests

SLIDE 72

Linux bridge ARP and ND suppression for E-VPN

  • BGP exchanges local MAC-IPs with E-VPN peers as Type 2 MAC-IP routes
  • BGP installs remote MAC-IPs from E-VPN peers in the kernel neigh table
  • The Linux bridge driver uses the remote MAC-IPs (neigh entries) installed by E-VPN to proxy requests for a MAC-IP from local end hosts
  • For a MAC-IP entry not present in the neigh table,
    ▪ the bridge driver floods such requests to all ports in that vlan/vni

(A rough operational sketch follows below.)
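
A rough operational sketch, assuming kernel 4.15+ and illustrative device names and addresses (a real E-VPN control plane installs the neigh entry over netlink rather than via the CLI):

$ # enable ARP/ND suppression on the vxlan bridge port
$ bridge link set dev vxlan-10 neigh_suppress on
$ # a remote Type 2 MAC-IP route effectively becomes a neigh entry on the vlan interface
$ ip neigh replace 10.2.0.5 lladdr 00:11:22:33:44:55 dev bridge.10 nud permanent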

SLIDE 73

E-VPN flow: arp nd suppression (distribute mac + ip)

(Diagram: leaf1, leaf2 and leaf3, each running BGP and a bridge with vlan interface bridge.10, local port swp1 and vxlan-10 (vteps 10.1.1.1, 10.1.1.2 and 10.1.1.3), connected over the L3 underlay; Host/VM 1 (mac1, IP1, VLAN-10), Host/VM 2 (mac2, IP2, VLAN-10) and Host/VM 3 (mac3, IP3, VLAN-10) sit in racks 1, 2 and 3)

Snooper:
(a) A local snooper process snoops <mac, ip> on local ports and adds them to the kernel neigh table

BGP:
(a) BGP discovers the local vlan-vni mapping via netlink
(b) BGP reads local <mac, ip, vlan> entries and distributes them to BGP E-VPN peers
(c) BGP learns remote <mac, ip, vni> entries from E-VPN peers and installs them in the kernel neigh table
(d) The kernel neigh table has all local and remote <mac + ip> entries for proxying neigh discovery messages

SLIDE 74

Deploying E-VPN with Linux Bridge

SLIDE 75

Deploy Linux bridge with Tunnel vxlan ports

  • Deploy a Linux bridge with tunnel vxlan ports as described previously in the tutorial
  • Run BGP on each VTEP
  • Configure BGP for E-VPN: example FRR config [13]
  • Run a local snooper process to snoop local end-point macs and add them to the bridge fdb table
  • BGP listens to neigh notifications and distributes local macs
  • BGP adds remote macs from peers with NTF_EXT_LEARNED

(A rough sketch of the resulting fdb entries follows below.)
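
As a sketch, the equivalent iproute2 commands for what the snooper and BGP install (a BGP implementation does this over netlink; the macs, devices and vtep address are illustrative, and the extern_learn keyword needs a recent kernel and iproute2):

$ # what the local snooper effectively adds for a local host
$ bridge fdb add 00:01:00:00:00:01 dev swp1 vlan 10 master dynamic
$ # what BGP effectively installs for a mac learnt from an E-VPN peer
$ bridge fdb add 00:02:00:00:00:02 dev vxlan-10 vlan 10 master extern_learn
$ bridge fdb add 00:02:00:00:00:02 dev vxlan-10 dst 10.1.1.2 self extern_learn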

SLIDE 76

The following example only covers one vxlan device per VNI.

SLIDE 77

Create all your netdevs (iproute2)

$ # create bridge device:
$ ip link add dev bridge type bridge
$ # create vxlan netdev:
$ ip link add dev vxlan-10 type vxlan id 10 local 10.1.1.1
$ # enslave local and remote ports
$ ip link set dev vxlan-10 master bridge
$ ip link set dev swp1 master bridge

(See the ifupdown2 [12] example in the References section [14].)

SLIDE 78

Create additional netdevs for neigh entries (E-VPN MAC-IP routes)

$ # E-VPN MAC-IP entries (neigh entries) are installed per VNI and
$ # hence per vlan, so create per-vlan devices for the MAC-IP entries,
$ # i.e. create vlan devices on the bridge
$ ip link add link bridge name bridge.10 type vlan id 10

SLIDE 79

Configure vlans

$ ip link set dev bridge type bridge vlan_filtering 1
$ # configure vlans
$ bridge vlan add vid 10 dev vxlan-10
$ bridge vlan add vid 10 untagged pvid dev vxlan-10
$ bridge vlan add vid 10 dev swp1
$ # default fdb entries for BUM replication are installed by BGP

SLIDE 80

E-VPN specific config

$ # turn off learning on tunnel ports (MACs are learnt by BGP)
$ bridge link set dev vxlan-10 learning off
$ # turn on neigh suppression on tunnel ports
$ bridge link set dev vxlan-10 neigh_suppress on
$ # you can further turn off flooding completely on tunnel ports
$ # set unknown unicast flood off
$ bridge link set dev vxlan-10 flood off
$ # set multicast flood off
$ bridge link set dev vxlan-10 mcast_flood off

SLIDE 81

Check Config

$ # check bridge port flags to make sure all required flags are set
$ bridge -d link show dev vxlan-10

SLIDE 82

Check your kernel vlan, fdb and neigh state

$ bridge vlan show
port      vlan ids
vxlan-10  1 PVID Egress Untagged
          10
swp1      1 PVID Egress Untagged
          10
bridge    None

$ bridge fdb show
mac1 dev swp1 vlan 10 master bridge
mac2 dev vxlan-10 vlan 10 master bridge extern_learn
mac2 dev vxlan-10 dst 10.1.1.2 self extern_learn
mac3 dev vxlan-10 vlan 10 master bridge extern_learn
mac3 dev vxlan-10 dst 10.1.1.3 self extern_learn

$ ip neigh show
IP1 dev swp1 lladdr mac1
IP2 dev vxlan-10 lladdr mac2

SLIDE 83

Troubleshooting and Debugging ..

SLIDE 84

Most common problems

  • Fdb entries missing from the kernel due to control plane netlink errors
  • Fdb entries overwritten by learning from hardware, or by dynamic learning by the bridge or vxlan driver in the kernel
  • End-point mobility problems:
    ▪ A remote end-point or tenant system reachable via vxlan may move to a locally connected node
    ▪ The bridge fdb and vxlan fdb must be kept in sync to avoid black holes or incorrect forwarding behavior

SLIDE 85

Debugging using iproute2 and perf probes

  • Dumping bridge and tunnel fdb tables:
    ▪ $ bridge fdb show
    ▪ Both the bridge and vxlan fdb tables are dumped
    ▪ Vxlan fdb entries are qualified by dev = <vxlan_dev> and the flag ‘self’
  • Monitoring bridge link and fdb events:
    ▪ $ bridge monitor [link | fdb]
  • In recent kernels, use the bridge perf tracepoints (a usage sketch follows below):
    ▪ $ perf list 'bridge:*'
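
A usage sketch, assuming a recent kernel that exposes the bridge trace events (event names such as bridge:br_fdb_update vary with kernel version):

$ # list the available bridge trace events
$ perf list 'bridge:*'
$ # record bridge fdb updates system-wide for 10 seconds, then dump them
$ perf record -e 'bridge:br_fdb_update' -a -- sleep 10
$ perf script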
SLIDE 86

References

[1] Data center networks: https://tools.ietf.org/html/rfc7938#section-4
[2] Data center Clos topology: https://tools.ietf.org/html/rfc7938#section-3.2
[3] A Network Virtualization Overlay Solution using EVPN: https://tools.ietf.org/html/draft-ietf-bess-evpn-overlay-08
[4] Address resolution problems in large data centers: https://tools.ietf.org/html/rfc6820
[5] Framework for Layer 2 Virtual Private Networks (L2VPNs): https://tools.ietf.org/html/rfc4664
[6] VPLS RFC: https://tools.ietf.org/html/rfc4762

SLIDE 87

References (Continued)

[7] BGP MPLS-based E-VPN: https://www.rfc-editor.org/rfc/rfc7432.txt
[8] Requirements for E-VPN: https://tools.ietf.org/html/rfc7209
[9] E-VPN ARP and ND proxy: https://tools.ietf.org/html/draft-ietf-bess-evpn-proxy-arp-nd-03
[10] Free Range Routing (FRR): https://frrouting.org/
[11] E-VPN webinar by Dinesh Dutt: http://go.cumulusnetworks.com/l/32472/2017-09-22/95t27t
[12] Ifupdown2: https://github.com/CumulusNetworks/ifupdown2

SLIDE 88

[13] BGP Config for switches (FRR implementation)

LEAF switch config:

router bgp 65456
  bgp router-id 27.0.0.21
  neighbor fabric peer-group
  neighbor fabric remote-as external
  neighbor uplink-1 interface peer-group fabric
  neighbor uplink-2 interface peer-group fabric
  address-family ipv4 unicast
    neighbor fabric activate
    redistribute connected
  address-family l2vpn evpn
    neighbor fabric activate
    advertise-all-vni

SPINE switch config:

router bgp 65535
  bgp router-id 27.0.0.21
  neighbor fabric peer-group
  neighbor fabric remote-as external
  neighbor swp1 interface peer-group fabric
  neighbor swp2 interface peer-group fabric
  address-family ipv4 unicast
    neighbor fabric activate
    redistribute connected
  address-family l2vpn evpn
    neighbor fabric activate

SLIDE 89

[14] Ifupdown2 config for E-VPN on LEAF switches

# /etc/network/interfaces
# example shows one vxlan device per vni

auto vxlan-10
iface vxlan-10
    vxlan-id 10
    bridge-access 10
    vxlan-local-tunnelip 10.1.1.1
    bridge-learning off
    bridge-arp-nd-suppress on
    mstpctl-portbpdufilter yes
    mstpctl-bpduguard yes
    mtu 9152

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports vxlan-10 swp1
    bridge-stp on
    bridge-vids 10
    bridge-pvid 1

auto bridge.10
iface bridge.10

SLIDE 90

Thank you!