

SLIDE 1

THE HEBREW UNIVERSITY OF JERUSALEM

Reusing Network Services Logic to Improve Network Performance

Yotam Harchol

Research

(This work was done while at the Hebrew University)

Joint work with Anat Bremler-Barr and David Hay Appeared in ACM SIGCOMM 2016

This research was supported by the European Research Council ERC Grant agreement no 259085, the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11), and the Neptune Consortium.

SLIDE 2

Network Functions (Middleboxes)


Examples: Firewall, Load Balancer, Intrusion Prevention System

  • Monolithic closed black-boxes

✘ High cost
✘ Limited provisioning and scalability

Network Function Virtualization (NFV):
✔ Reduce cost (by moving to software)
✔ Improve provisioning and scalability (by virtualizing software NFs)

At the cost of:
✘ Reduced performance (mainly latency)

SLIDE 3

Network Functions (Middleboxes)

✘ High cost
✘ Limited provisioning and scalability
✘ Limited and separate management

  • Different vendors
  • No standards
  • Separate control plane


SLIDE 4

Network Functions (Middleboxes)

  • Actually, many of these black-boxes are very modular


✘ High cost
✘ Limited provisioning and scalability
✘ Limited and separate management
✘ Limited functionality and limited innovation (high entry barriers)
✘ Similar complex processing steps, no re-use

SLIDE 5

OpenBox

[Diagram: an OpenBox Controller managing multiple OpenBox Instances (OBIs)]

  • OpenBox: A new software-defined framework for network functions
  • Decouples network function control from their data plane
  • Unifies data plane of multiple network functions

Benefits: Easier, unified control Better performance (improved latency) Scalability Flexible deployment Inter-tenant isolation Innovation

github.com/OpenBoxProject www.openboxproject.org

SLIDE 6
  • High cost of middleboxes
  • Limited provisioning and scalability of middleboxes
  • Limited management of middleboxes
  • Limited functionality and limited innovation
  • Complex processing steps

Software Defined Networking


[Diagram: an OpenFlow Controller managing switches, alongside an OpenBox Controller managing OBIs]

40%-60% of the appliances in large-scale networks are middleboxes!

[Sherry & Ratnasamy, ‘12]

SLIDE 7

The OpenBox Framework


[Diagram: OpenBox Applications (network functions) in the control plane use the Northbound API to reach the logically-centralized OpenBox Controller, which drives OpenBox Service Instances in the data plane via the OpenBox Protocol]

Additionally:
  • Isolation between NFs / multiple tenants
  • Support for hardware accelerators
  • Dynamically extend the protocol

SLIDE 8

Observation:

Most network functions do very similar processing steps, but there is no re-use…

The design of the OpenBox framework is based on this observation.

SLIDE 9

Network Function Decomposition


Firewall:

Read Packets → Header Classifier → Drop / Alert / Output

Load Balancer:

Read Packets → Header Classifier → Rewrite Header → Output

Intrusion Prevention System:

Read Packets → Header Classifier → Drop / Alert / DPI (×3, parallel) → Output
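The decomposition above can be sketched as small directed graphs; the observation OpenBox builds on is their shared prefix. A minimal Python sketch (the dict-of-adjacency representation is an illustrative assumption, not the OpenBox data model):

```python
# Each NF as a processing graph: block -> list of successor blocks.
# Block names follow the slides; the representation is for illustration only.

firewall = {
    "ReadPackets": ["HeaderClassifier"],
    "HeaderClassifier": ["Drop", "Alert", "Output"],
}

load_balancer = {
    "ReadPackets": ["HeaderClassifier"],
    "HeaderClassifier": ["RewriteHeader"],
    "RewriteHeader": ["Output"],
}

ips = {
    "ReadPackets": ["HeaderClassifier"],
    "HeaderClassifier": ["Drop", "Alert", "DPI"],
    "DPI": ["Output"],
}

def shared_blocks(*graphs):
    """Non-terminal blocks that appear in every graph: candidates for reuse."""
    common = set(graphs[0])
    for g in graphs[1:]:
        common &= set(g)
    return common

print(shared_blocks(firewall, load_balancer, ips))
# ReadPackets and HeaderClassifier appear in all three NFs
```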

SLIDE 10

Northbound API


[Diagram: OpenBox Applications (Firewall, Load Balancer, Intrusion Prevention System) in the control plane specify their processing graphs to the OpenBox Controller via the Northbound API; the controller configures OpenBox Service Instances in the data plane via the OpenBox Protocol]

  • Applications specify the processing graph and block configuration
  • Service instances report events and load information
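A graph-and-configuration message flowing down from the controller might look roughly like the following; the field names and JSON shape here are illustrative guesses, not the actual OpenBox protocol schema:

```python
import json

# Hypothetical shape of a message that sets a processing graph on a service
# instance: blocks with per-block configuration, plus connectors between
# block output ports. Field names are assumptions for illustration.
set_graph_request = {
    "type": "SetProcessingGraphRequest",
    "blocks": [
        {"id": "classify", "type": "HeaderClassifier",
         "config": {"rules": [{"src": "10.0.0.0/8", "out_port": 0}]}},
        {"id": "drop", "type": "Drop"},
        {"id": "out", "type": "Output"},
    ],
    "connectors": [
        {"from": "classify", "out_port": 0, "to": "drop"},
        {"from": "classify", "out_port": 1, "to": "out"},
    ],
}

print(json.dumps(set_graph_request, indent=2))
```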

SLIDE 11

Logically-Centralized Controller


[Diagram: OpenBox Applications → NB API → OpenBox Controller → OpenBox Protocol → OpenBox Service Instances, analogous to an SDN Controller managing SDN Switches via an SDN protocol]

  • Multiple tenants run multiple applications for multiple policies in the same network
  • Isolation between applications and tenants is enforced by the NB API
  • Network-wide view
  • Automatic scaling, provisioning, placement, and steering

SLIDE 12

Naïve Graph Merge


Firewall:

Read Packets → Header Classifier → Drop / Alert / Output

Intrusion Prevention System:

Read Packets → Header Classifier → Drop / Alert / DPI (×3, parallel) → Output

Concatenated Processing Graph:

Read Packets → Header Classifier → Drop / Alert (Firewall) → Header Classifier → Drop / Alert (IPS) → DPI (×3, parallel) → Output

Performance ≈ Diameter of Graph (# of classifiers)

Total: 134μs (30μs + 10μs + 50μs + 10μs + 2μs + 2μs + 30μs)

SLIDE 13

Graph Merge Algorithm


Input Graphs:

Firewall:

Read Packets → Header Classifier → Drop / Alert / Output

Intrusion Prevention System:

Read Packets → Header Classifier → Drop / Alert / DPI (×3, parallel) → Output

SLIDE 14

Graph Merge Algorithm


Firewall:

Read Packets → Header Classifier → Drop / Alert / Output

Intrusion Prevention System:

Read Packets → Header Classifier → Drop / Alert / DPI (×3, parallel) → Output

Step 1: Normalize graphs to trees

[Diagram: each graph unrolled into a tree by duplicating shared terminal blocks (Output, Alert, Drop)]

SLIDE 15

Graph Merge Algorithm


Step 2: Concatenate graphs

Read Packets → Header Classifier → Drop / Alert (Firewall) → Header Classifier → Drop / Alert (IPS) → DPI (×3, parallel) → Output

[Diagram: the normalized firewall tree concatenated in front of the normalized IPS tree]

SLIDE 16

Graph Merge Algorithm


Step 3: Merge classifiers

Read Packets → Header Classifier → Drop / Alert (Firewall) → Header Classifier → Drop / Alert (IPS) → DPI (×3, parallel) → Output

[Diagram: the two header classifiers identified as candidates for merging]

SLIDE 17

Graph Merge Algorithm


Step 3: Merge classifiers (continued)

Read Packets → Header Classifier → Drop / Alert (Firewall) → Header Classifier → Drop / Alert (IPS) → DPI (×3, parallel) → Output

Can we change block order?

SLIDE 18

Graph Merge Algorithm


Step 3: Merge classifiers (result)

Read Packets → Header Classifier (merged) → Drop / Alert (Firewall) → Drop / Alert (IPS) → DPI (×3, parallel) → Output

[Diagram: firewall alerts duplicated along the affected tree paths]

SLIDE 19

Graph Merge Algorithm


Step 4: Remove redundant block copies (and rewire connectors accordingly)

Read Packets → Header Classifier (merged) → Drop / Alert (Firewall) → Drop / Alert (IPS) → DPI (×3, parallel) → Output

[Diagram: duplicated terminal, alert, and DPI blocks collapsed back into single shared blocks]
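Step 3 is the heart of the algorithm. Below is a much-simplified sketch of classifier merging via a rule cross product; the rule format and intersection logic are assumptions for illustration, and the real algorithm also handles terminal actions such as Drop that cut the second stage short.

```python
from itertools import product

def intersect(match_a, match_b):
    """Intersect two match dicts; None if they conflict on some field."""
    merged = dict(match_a)
    for field, value in match_b.items():
        if field in merged and merged[field] != value:
            return None  # disjoint matches: no combined rule is needed
        merged[field] = value
    return merged

def merge_classifiers(rules_a, rules_b):
    """Cross product of two rule sets -> one classifier running both stages."""
    merged = []
    for (ma, acts_a), (mb, acts_b) in product(rules_a, rules_b):
        m = intersect(ma, mb)
        if m is not None:
            merged.append((m, acts_a + acts_b))  # run both action lists
    return merged

# Toy rule sets: (match dict, list of non-terminal actions); empty match = wildcard.
fw_rules  = [({"src": "10.0.0.1"}, ["Alert(FW)"]), ({}, [])]
ips_rules = [({"dport": 80}, ["DPI"]), ({}, [])]

merged = merge_classifiers(fw_rules, ips_rules)
print(len(merged))  # worst case: len(fw_rules) * len(ips_rules) = 4 rules
```

The packet is now classified once instead of twice, at the cost of a potentially larger rule set; slide 21 quantifies when that trade-off stops paying off.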

SLIDE 20

Graph Merge Algorithm


Merged Processing Graph:

Read Packets → Header Classifier (merged) → Drop / Alert (Firewall) / Alert (IPS) → DPI (×3, parallel) → Output

Shorter diameter (fewer classifiers)

Total: 104μs (30μs + 10μs + 50μs + 10μs + 2μs + 2μs), a 22% improvement
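The slide's arithmetic can be checked directly:

```python
# The merge removes one 30μs header-classification pass from the
# concatenated graph; per-block latencies are taken from the slides.
naive  = [30, 10, 50, 10, 2, 2, 30]  # two classifiers traversed serially
merged = [30, 10, 50, 10, 2, 2]      # one shared (merged) classifier

total_naive, total_merged = sum(naive), sum(merged)
improvement = round(100 * (total_naive - total_merged) / total_naive)
print(total_naive, total_merged, improvement)  # 134 104 22
```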

SLIDE 21

When NOT to Merge?

When cross product is too large:

  • Two d-dimensional classifiers: A – n rules, B – m rules
  • Classification time is logarithmic in the number of rules but exponential in the dimension
  • Serial classification time: (log n)^(d-1) + (log m)^(d-1)
  • Cross product: n · m rules (worst case)
  • Single merged classifier, worst-case time:

(log(n · m))^(d-1) = (log n + log m)^(d-1)
                   = (log n)^(d-1) + (log m)^(d-1) + Σ_{j=1}^{d-2} C(d-1, j) · (log n)^j · (log m)^(d-1-j)
                   > (log n)^(d-1) + (log m)^(d-1)

When most packets won't go through both classifiers:
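A quick numeric instance of this bound, for d = 3 and n = m = 16 rules, shows the merged classifier's worst case is indeed strictly larger:

```python
import math

# For d = 3 the exponent d-1 = 2; with n = m = 16, log2(n) = log2(m) = 4.
d, n, m = 3, 16, 16
serial = math.log2(n) ** (d - 1) + math.log2(m) ** (d - 1)  # 4^2 + 4^2 = 32
single = math.log2(n * m) ** (d - 1)                        # (4 + 4)^2 = 64
print(serial, single)  # 32.0 64.0
```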


[Diagram: Classifier A sends most packets directly to Output; only a fraction continues to Classifier B, which may Drop]

SLIDE 22

OpenBox Data Plane Processing


Classification: Read Packets, Header Classifier, DPI
Header Modification: VLAN Pop, VLAN Push, Rewrite Header
Transactions: Begin Transaction, Rollback Transaction, Commit Transaction
De/compression: Gzip Decompress, Gzip Compress
Normalization: HTML Normalizer, JavaScript Normalizer, XML Normalizer
Caching: Store Packet, Restore Packet
Reporting: Alert, Log
Terminals: Output, Drop
Queue Management: FIFO Queue, Front Drop Queue, RED Queue, Leaky Bucket

SLIDE 23

OpenBox Data Plane Processing



OpenBox Service Instance (virtual or physical):

  • Provides data plane services to realize the logic of network functions
  • Controlled by the logically-centralized OpenBox controller
SLIDE 24

Distributed Data Plane

[Diagram: a processing graph split between a hardware (TCAM) OpenBox Service Instance, e.g. an OpenFlow switch with encapsulation features (NSH, Geneve, FlowTags), and a software OpenBox Service Instance, with metadata carried between them]

Header Classifier → Alert / DPI / Rewrite Header

SLIDE 25

Split Processing Graph


HW Instance:

Read Packets → Header Classifier → Drop / Output / Write Metadata → Encapsulate Metadata

SW Instance:

Read Packets → Decapsulate Metadata → Read Metadata → Drop / Alert / DPI (×3, parallel) → Output
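The split can be sketched as two cooperating stages; the packet and metadata layout here are illustrative assumptions (a real deployment would carry the metadata in NSH, Geneve, or FlowTags headers):

```python
# The hardware instance classifies once and encapsulates its verdict as
# metadata; the software instance resumes from that metadata instead of
# reclassifying the packet.

def hw_instance(packet):
    """HW OBI: Header Classifier -> Write Metadata -> Encapsulate Metadata."""
    verdict = "dpi" if packet.get("dport") == 80 else "output"
    return {**packet, "metadata": {"classifier_out": verdict}}

def sw_instance(packet):
    """SW OBI: Decapsulate Metadata -> Read Metadata -> resume processing."""
    verdict = packet["metadata"]["classifier_out"]
    return "DPI" if verdict == "dpi" else "Output"

pkt = hw_instance({"dport": 80})
print(sw_instance(pkt))  # DPI
```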

SLIDE 26

Distributed Data Plane


[Diagram: packets of flows A and B steered through a chain of hardware and software OBIs (steps 1-6), all managed by the OpenBox Controller and its applications]

SLIDE 27

Extensible Data Plane


[Diagram: a new application adds a custom block type (e.g. a Media Encoder) to the framework]

Option 1: New hardware implementation that supports encapsulation

Option 2: Software module injection
  • Custom software module (signed)
  • Applied on the fly: no need to recompile, no need to redeploy

SLIDE 28

Scalable & Reliable Data Plane


Scalability, Provisioning, Reliability

[Diagram: the OpenBox Controller scales out OBIs across hypervisors to meet load and to survive failures]

SLIDE 29

OpenBox Protocol: Block Hierarchy


[Diagram: HeaderClassifier is an abstract processing block with two concrete implementations, TCAMClassifier and TrieClassifier]

Service Instance → Controller: Hello … supported implementations: HeaderClassifier: [TCAMClassifier, TrieClassifier]
Controller → Service Instance: SetProcessingGraphRequest … use TCAMClassifier in graph
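The exchange above could be sketched as follows; the message shapes and function names are illustrative, not the exact OpenBox protocol:

```python
# The instance's Hello advertises which concrete implementations it supports
# for each abstract block; the controller then places a concrete block
# (e.g. TCAMClassifier) in the processing graph it sends back.

BLOCK_HIERARCHY = {"HeaderClassifier": ["TCAMClassifier", "TrieClassifier"]}

def choose_implementation(abstract_block, hello):
    supported = hello["supported_implementations"].get(abstract_block, [])
    for candidate in BLOCK_HIERARCHY.get(abstract_block, []):
        if candidate in supported:
            return candidate       # first match in hierarchy order wins
    return abstract_block          # fall back to the abstract block itself

hello = {"supported_implementations":
         {"HeaderClassifier": ["TCAMClassifier", "TrieClassifier"]}}
print(choose_implementation("HeaderClassifier", hello))  # TCAMClassifier
```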

SLIDE 30

Future Work: Infrastructure Support

  • Infrastructure can help VNFs:
    – Provide high performance (e.g., hardware accelerators)
    – Reuse processing (e.g., packet switching, "outsourced" services)
  • Challenge: design a system and define a protocol to offload processing from VNFs to the infrastructure
  • Gradual solution, easier to adopt for existing VNFs


[Diagram: an Offloading Controller coordinates VNFs, hypervisors, and other VMs across the network]

SLIDE 31

Implementation


Control Plane: Java-based OpenBox Controller (7500 LoCs, Java)
  • Northbound API: REST client/server
  • Graph Aggregator, Management API, Network Manager, Translation Engine
  • Sample applications: Firewall, IPS, Load Balancer, …

Data Plane: Software OpenBox Service Instance
  • Generic wrapper for execution engines (5500 LoCs, Python), exposing a REST API
  • Click-based execution engine plugin (2400 LoCs, C++)
  • Other execution engines can be plugged in, e.g., ClickNP [SIGCOMM '16]

github.com/OpenBoxProject

SLIDE 32

Performance Improvement


Without OpenBox: VM1 runs a Firewall, VM2 runs an IPS
With OpenBox: OBI1 and OBI2 each run the merged FW+IPS processing graph

[Plot: Latency [µs] vs. Throughput [Mbps] for a standalone VM (Firewall, IPS)]

[Plot: Latency [µs] vs. Throughput [Mbps] for the NF pipeline, without and with OpenBox]

SLIDE 33

Related Work

  • Orthogonal to OpenBox:
    – NF traffic steering (e.g., SIMPLE [SIGCOMM '14])
    – NF orchestration (e.g., Stratos, OpenMano, OpenStack)
    – Runtime platforms (e.g., xOMB [ANCS '12], ClickNP [SIGCOMM '16])
  • Similar motivation:
    – CoMb [NSDI '12]: focuses on resource sharing and placement
    – E2 [SOSP '15]: composition framework for virtual NFs
    – Slick [SOSR '15]: focuses on the placement of data plane units
  • Only OpenBox provides:
    – Core processing decomposition and re-use
    – Standardization and full decoupling of NF control and data planes


SLIDE 34

Conclusions

  • Network functions are currently a real challenge in large-scale networks
  • By decoupling the data plane processing of NFs from their control logic, we:
    – Reduce costs
    – Enhance performance
    – Improve scalability
    – Increase reliability
    – Provide inter-tenant isolation
    – Allow easier innovation
  • There is still work to do…


[Diagram: OpenBox Applications → NB API → OpenBox Controller → OpenBox Protocol → OpenBox Service Instances]

SLIDE 35

THANK YOU!

Questions?


Play with OpenBox on a Mininet VM:

github.com/OpenBoxProject/openbox-mininet