Dataplane Broker (DPB) Steven Simpson, Arsham Farshad, Paul McCherry, Abubakr "Ali" Magzoub - PowerPoint PPT Presentation



SLIDE 1

Dataplane Broker (DPB)

Steven Simpson, Arsham Farshad, Paul McCherry, Abubakr “Ali” Magzoub

SLIDE 2

Problem statement

  • Multi-site (multi-VIM)

    – Each VNF assigned to a site
    – Some VLs split across sites
    – WIM responsible for inter-site connectivity

  • Dataplane Broker (DPB)

    – Can act as WIM

[Diagram: VNFs assigned across Site 1 and Site 2]

SLIDE 3

Wide-area L2 connections

  • VLAN endpoints

    – Functional isolation of VLs

  • Multipoint

    – NSes can be split over 2+ sites

  • Bandwidth guarantees

    – Non-functional isolation
    – Traffic from one NS shouldn’t be able to drown out another
    – Asymmetric

  • Multiswitch

    – Plugin framework for base ‘fabric’ layer
    – Heterogeneous physical network

      • Corsa DP2000 series
      • Generic OpenFlow

  • Scalability

    – Hierarchical abstraction
    – Not looking for optimal solution

  • Open source
SLIDE 4

Network abstraction

  • Named terminals

    – Associated with sliced resources at specific locations, e.g., lancaster-openstack, paris-vpngw, berlin-ofx

  • Numerically labeled circuits

    – Distinguishes services occupying the same terminal
    – Maps to encapsulation technology (e.g., VLAN ids)

  • Services

    – Connect 2+ circuits
    – Bandwidth guarantees

[Diagram: a logical network with terminals site1-fx, site1-pst, site2-fx, site2-pst, site3-fx, site3-pst; services distinguished by circuit labels such as 2010, 961, 91, 435]

  • Logical switch

    – Logical network subtype
    – Maps directly to physical switch
    – Uses adaptor to map to fabric technology
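The terminal/circuit/service abstraction above can be sketched as plain data. This is an illustrative model only, not DPB's actual API; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Circuit:
    """A numerically labelled circuit on a named terminal.

    The label distinguishes services sharing a terminal and maps to an
    encapsulation id (e.g., a VLAN id)."""
    terminal: str   # e.g., "lancaster-openstack"
    label: int      # e.g., VLAN id 961

@dataclass
class Service:
    """A service connects 2+ circuits with bandwidth guarantees.

    Guarantees may be asymmetric, hence separate ingress/egress rates."""
    circuits: tuple
    ingress_mbps: float
    egress_mbps: float

    def __post_init__(self):
        if len(self.circuits) < 2:
            raise ValueError("a service must connect 2+ circuits")

# A multipoint service splitting an NS over three sites:
svc = Service(
    circuits=(Circuit("site1-fx", 961),
              Circuit("site2-fx", 961),
              Circuit("site3-fx", 91)),
    ingress_mbps=10.0,
    egress_mbps=10.0,
)
```

A multipoint service is simply one with more than two circuits, matching the "NSes can be split over 2+ sites" requirement.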

SLIDE 5

Aggregator

  • Control of inferior networks

    – ‘Trunks’ connect ‘internal’ terminals of inferiors
    – Own terminals map to ‘external’ terminals of inferiors
    – Aggregator manages capacity of its own trunks
    – Aggregator service maps to a set of inferior services

  • Same northbound interface

    – Hierarchies could be built
    – Inferiors are either more aggregators, or ‘logical switches’
    – Leaves are always switches

[Diagram: an aggregator over two inferior networks (site1, site2); a trunk joins internal terminals of the inferiors; the aggregator’s own terminals are synonymous with the external terminals site1-pst and site2-pst; circuit labels 91, 961 and trunk label 73 shown]
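The aggregator's mapping can be sketched as follows: its own endpoints are translated through the terminal synonymies into endpoints on the inferiors, and the two halves are joined across a trunk with a freshly allocated label. All names here are illustrative assumptions, not DPB code, and the sketch handles only the two-inferior, single-trunk case.

```python
# Hypothetical sketch of mapping one aggregator service onto services in
# two inferior networks joined by a single trunk.

def map_aggregate_service(endpoints, synonyms, trunk, trunk_label):
    """endpoints: [(agg_terminal, label), ...] on the aggregator.
    synonyms: agg_terminal -> (inferior_network, external_terminal).
    trunk: (inferior_a, internal_term_a, inferior_b, internal_term_b).
    Returns {inferior_network: [(terminal, label), ...]}, one endpoint
    list per inferior service, joined across the trunk."""
    per_inferior = {}
    # Translate aggregator terminals into inferior external terminals.
    for term, label in endpoints:
        inf, ext_term = synonyms[term]
        per_inferior.setdefault(inf, []).append((ext_term, label))
    # Join the inferior services across the trunk with one fresh label.
    inf_a, term_a, inf_b, term_b = trunk
    per_inferior.setdefault(inf_a, []).append((term_a, trunk_label))
    per_inferior.setdefault(inf_b, []).append((term_b, trunk_label))
    return per_inferior
```

Because the result is again a set of (terminal, label) endpoint lists, each inferior sees an ordinary service request, which is what lets the same northbound interface recurse through a hierarchy.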

SLIDE 6

Fabric adaptation

  • OpenFlow adaptor

    – Uses VLAN OF operations for VLAN switching
    – Some metering applied to implement QoS

  • OF1.5

    – Custom Ryu controller app implements multiple isolated learning switches in one physical switch

  • Fabric adaptors are plugins for specific technologies

    – Different adaptor usable by each logical switch
    – Network heterogeneity
    – No persistent state

[Diagram: VLANCircuitFabric.java talks REST to a Ryu app (tupleslicer.py), which controls the OpenFlow switch via OpenFlow]
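The core idea behind the Ryu app, several isolated learning switches sharing one physical switch, can be sketched in a few lines: MAC learning is keyed per slice, so frames never leak between the circuits sharing the hardware. This is an illustrative sketch of the concept, not the actual tupleslicer.py logic.

```python
# Hypothetical sketch: per-slice MAC learning, one table per circuit.

class SlicedLearningSwitch:
    def __init__(self):
        self.tables = {}  # slice_id -> {mac: port}

    def packet_in(self, slice_id, src_mac, dst_mac, in_port, slice_ports):
        """Learn src_mac on in_port; return ports to output on.

        Known destination: unicast to its learned port.
        Unknown destination: flood, but only within this slice's ports."""
        table = self.tables.setdefault(slice_id, {})
        table[src_mac] = in_port
        if dst_mac in table:
            return [table[dst_mac]]
        return [p for p in slice_ports if p != in_port]
```

In the real app the learned entries would be pushed to the switch as OpenFlow flow rules matching on VLAN and destination MAC; here the dictionary stands in for the flow table.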

SLIDE 7

Fabric adaptation

  • Corsa adaptor

    – Uses custom Ryu app to switch between internal ports of VFC
    – Uses switch management REST API to attach VFC ports to physical ports and VLANs

      • (De-)tagging handled by attachments, not by OpenFlow
      • Shaping applied to attachments

    – QoS not implemented by OpenFlow

[Diagram: PortSlicedVFCFabric.java talks REST to a Ryu app (portslicer.py) and to the Corsa DP2000 management API; tunnel attachments bind the virtual forwarding context’s ports to physical ports and VLAN ids (e.g., 73, 91)]

SLIDE 8

Aggregate bandwidths

SLIDE 9

Aggregate bandwidths

SLIDE 10

Aggregate bandwidths

SLIDE 11

Sub-optimal results

SLIDE 12

Sub-optimal results

SLIDE 13

Future of DPB

  • Service modification

    – Pretend that resources consumed by current configuration are available for new

  • Bandwidth matrix

    – For better expression of (say) E-TREE

  • OVSDB as fabric

    – Similar to Corsa architecture

  • Multi-segment

    – Establish all disjoint segments or fail

  • Alternative metrics for path computation

    – Latency, reliability, …

  • Multitenancy

    – In the control plane
    – Better isolation of one user’s services from other users’ control

SLIDE 14

Acknowledgements

SLIDE 15

OSM multi-VIM issues

  • IP pool splitting

    – OSM must co-ordinate IP configuration as it splits a VL, not after
    – Same subnet; disjoint IP pools
    – Our work-around: block DHCP
    – Watch out for connected internal and external VLDs
    – What about switch-like and router-like behaviour across interfaces?
    – Holistic solution to related issues?

  • Pre-existing networks

    – (including management)
    – Don’t connect them during ns-create!
    – Assume they are already connected
    – Or deal with:

      • Modification of existing services
      • Merging of two services into one
      • Surprise unrelated subnets

    – Detection:

      • vim-network-name expressed or implied; and
      • profile unspecified
SLIDE 16

Multi-tenant multi-VIM management networks

  • Per-tenant VIM configurations

    – Distinct VIM tenants and default management network names
    – Per-tenant isolation of management networks
    – Overlapping subnets
    – Juju client needs distinct netns context to access multiple simultaneously

      • VPN in?

  • Tool to set up multi-VIM management network?

    – Admin credentials of OSM and all requested VIMs
    – Create VIM projects at each site

      • Create VIM network

    – Create VPN gateway(s)

      • vpnmgr

    – Gather endpoints and connect with broker
    – Create OSM tenant

      • Populate with VIMs’ project credentials and local network names
      • Provide Juju with VPN credentials

  • Or do it through OSM?

    – Need VPN gateways as VNFs
    – Need VLD pinning (or dummy VNFs)

SLIDE 17

Multi-VIM IP pool split

  • A VNF could consist of multiple and variable VDUs (scaling)

  • VL(D) profiles:

    – Subnet (e.g., 192.168.10/24)
    – DHCP range (e.g., 30-40)
    – Some defined by VNFD/NSD providers
    – Rest defined at deployment

[Diagram: three VNFs joined by VLs]
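A VL profile as described above pairs a subnet with a DHCP range given as host-number bounds (e.g., 30-40 within 192.168.10/24). A minimal sketch of expanding such a profile into concrete addresses, using only the standard library; this is an illustrative helper, not OSM's data model:

```python
import ipaddress

def dhcp_pool(subnet, lo, hi):
    """Expand a host-number DHCP range within a subnet into addresses.

    Host numbers are 1-based: host 1 of 192.168.10.0/24 is 192.168.10.1."""
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())
    if not (1 <= lo <= hi <= len(hosts)):
        raise ValueError("DHCP range outside subnet")
    return [str(h) for h in hosts[lo - 1:hi]]
```

For the deck's example profile, `dhcp_pool("192.168.10.0/24", 30, 40)` yields the eleven addresses 192.168.10.30 through 192.168.10.40.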

SLIDE 18

Multi-VIM IP pool split

  • Express as NSD
  • Deploy it

    – Assign VNFs to different VIMs

  • OSM 5/6 implementation

    – Leads to WIM interaction
    – No IP address co-ordination

[Diagram: VNFs split across Site 1 and Site 2]

SLIDE 19

Multi-VIM IP pool split

  • No VNF spans two or more sites
  • No internal VL spans sites
  • Some external VLs span sites

    – Some may span more than two
    – A split VL will need representation at each site

  • VL profiles must be defined before splitting

    – Representations of the same VL at different sites must be compatible
    – Representations of different VLs at different sites must be distinct
    – To permit L2 inter-site connectivity

[Diagram: VNFs across Site 1 and Site 2; VL subnets 192.168.10/24, 192.168.20/24 (20-39), 10.30.67/24 (20-39)]

SLIDE 20

Multi-VIM IP pool split

  • Each OpenStack site provides a DHCP agent for each VL it represents

    – One address is used as the default gateway, DNS server and DHCP server
    – Agent only responds to DHCP requests of MACs known locally to use that network
    – No awareness of DHCP at other site
    – DHCP ranges for the same VL at each site must not overlap!
    – DHCP ranges must anticipate scaling

[Diagram: VNFs across Site 1 and Site 2; split VL 192.168.20/24 has disjoint ranges (30-39) and (20-29) at the two sites, and 10.30.67/24 has (20-29) and (30-39); 192.168.10/24 is unsplit]
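The non-overlap requirement above amounts to carving one VL's DHCP range into disjoint per-site sub-ranges before deployment. A minimal sketch, assuming host-number bounds as in the earlier profile examples (illustrative, not OSM code):

```python
def split_dhcp_range(lo, hi, n_sites):
    """Split host-number range [lo, hi] into n_sites disjoint sub-ranges.

    Sub-ranges are contiguous and cover the whole range, so the same VL
    gets non-overlapping DHCP pools at every site."""
    total = hi - lo + 1
    if total < n_sites:
        raise ValueError("not enough addresses for every site")
    size, extra = divmod(total, n_sites)
    ranges, start = [], lo
    for i in range(n_sites):
        # Earlier sites absorb the remainder, one address each.
        end = start + size - 1 + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

# The deck's range 20-39 over two sites:
# split_dhcp_range(20, 39, 2) -> [(20, 29), (30, 39)]
```

Scaling headroom is handled by choosing `hi - lo + 1` large enough for the maximum VDU count before splitting, since the split itself is fixed at deployment.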

SLIDE 21

Multi-VIM IP pool split

  • Inter-site connectivity

    – Get VLAN tags of VIM representations of multi-site VLs

      • 42 & 57
      • 69 & 60

    – Add site identification as context

      • Site 1.42 & Site 2.57
      • Site 1.69 & Site 2.60

    – Estimate bandwidth at each end point

      • Site 1.42 (10M) & Site 2.57 (10M)
      • Site 1.69 (10M) & Site 2.60 (10M)

    – Supply to WIM

[Diagram: split VLs across Site 1 and Site 2, tagged with VLANs 42/57 and 69/60]
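The steps above can be sketched as assembling, per split VL, a list of site-qualified VLAN endpoints with bandwidth estimates and handing it to the WIM. The field names and request shape are assumptions for illustration, not the actual WIM interface:

```python
def wim_request(vl_name, site_vlans, bandwidth_mbps):
    """Build one WIM connectivity request for a split VL.

    site_vlans: {site_name: vlan_tag} - the VLAN tag each VIM assigned
    to its local representation of the VL."""
    return {
        "vl": vl_name,
        "endpoints": [
            {"site": site, "vlan": tag, "mbps": bandwidth_mbps}
            for site, tag in sorted(site_vlans.items())
        ],
    }

# e.g. the deck's first split VL: Site 1 tagged it 42, Site 2 tagged it 57.
req = wim_request("vl-a", {"Site 1": 42, "Site 2": 57}, 10)
```

With DPB acting as WIM, each endpoint here corresponds to a labelled circuit on a terminal, and the request maps onto one DPB service.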

SLIDE 22

Multi-VIM IP pool split

  • Site 1.42 (10M) & Site 2.57 (10M)
  • Site 1.69 (10M) & Site 2.60 (10M)

  • Broadcasts are visible across both sites

    – ARPs work
    – DHCP requests seen by both agents, but only one responds

[Diagram: connected split VLs across Site 1 and Site 2, VLANs 42/57 and 69/60]

SLIDE 23

New management networks through OSM

  • Define a VLD

    – Include a VPN gateway as a VNF

  • Deploy across sites

    – But only VNFs can be assigned to VIMs

  • Create tenant-specific VIMs using new network as default management

[Diagram: VPN gateways at Site 1 and Site 2 joining mgmt and public networks; Sites 3 and 4 marked with question marks]