Docker Overlay Networks: Performance analysis - PowerPoint PPT Presentation



SLIDE 1

Docker Overlay Networks

Performance analysis in high-latency environments

Students: Siem Hermans, Patrick de Niet
Supervisor:

  • Dr. Paola Grosso

Research Project 1

System and Network Engineering

SLIDE 2

Research question

“What is the performance of various Docker overlay solutions when implemented in high latency environments and more specifically in the GÉANT Testbeds Services (GTS)?”

SLIDE 3

Internal

  • Claassen, J. (2015, July). Container Network Solutions. Retrieved January 31, 2016, from http://rp.delaat.net/2014-2015/p45/report.pdf.
  • Rohprimardho, A. (2015, August). Measuring The Impact of Docker on Network I/O Performance. Retrieved January 31, 2016, from http://rp.delaat.net/2014-2015/p92/report.pdf.

External

  • Kratzke, N. (2015). About Microservices, Containers and their Underestimated Impact on Network Performance. CLOUD COMPUTING 2015, 180.
  • Barker, S. K., & Shenoy, P. (2010, February). Empirical evaluation of latency-sensitive application performance in the cloud. In Proceedings of the first annual ACM SIGMM conference on Multimedia systems (pp. 35-46). ACM.

Related Work

SLIDE 4

Virtual Machine vs. Container

  • Containerization
  • Gaining traction
  • Performance increases
  • Role of Docker

Basics

Docker - Concepts

SLIDE 5

  • Virtual networks that span underlying hosts
  • Powered by libnetwork

Multi-host networking

SLIDE 6

Libnetwork (native overlay driver)

  • Based on SocketPlane
  • Integrating OVS APIs in Docker
  • VXLAN based forwarding

Weave Net

  • Previously routing based on pcap; now uses OVS
  • Libnetwork plugin
  • VXLAN based forwarding

Project Calico

  • Technically not an overlay
  • Routing via BGP
  • Segmentation via iptables
  • State distribution via BGP route reflectors
  • No tunneling

Flannel

  • Flanneld agent
  • No integration with libnetwork
  • Subnet per host
  • UDP or VXLAN forwarding

Overlay solutions

Kratzke, N. (2015).
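Three of the four solutions (the native libnetwork driver, Weave Net, and Flannel) forward traffic over VXLAN, which encapsulates each frame and so shrinks the MTU available inside containers. A minimal sketch of the overhead arithmetic; the header sizes are the standard VXLAN-over-IPv4 values, and the 1500-byte underlay MTU is just an example:

```python
# VXLAN overhead relative to the underlay IP MTU:
# outer IPv4 (20) + outer UDP (8) + VXLAN header (8) + inner Ethernet (14)
VXLAN_OVERHEAD = 20 + 8 + 8 + 14  # 50 bytes

def effective_mtu(underlay_mtu: int) -> int:
    """MTU left for a container behind a VXLAN overlay."""
    return underlay_mtu - VXLAN_OVERHEAD

def payload_efficiency(underlay_mtu: int) -> float:
    """Fraction of each full-sized packet available as tenant payload."""
    return effective_mtu(underlay_mtu) / underlay_mtu

print(effective_mtu(1500))                 # 1450
print(round(payload_efficiency(1500), 3))  # 0.967
```

This fixed per-packet cost is one reason the VXLAN-based solutions behave so similarly in the throughput results later in the deck.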

SLIDE 7

  • European research community
  • Amsterdam
  • Bratislava
  • Ljubljana
  • Milan
  • Prague
  • GÉANT Testbeds Service (GTS)
  • OpenStack platform, interconnected by MPLS
  • KVM for compute nodes
  • Resembles IaaS providers; Shared infrastructure

GÉANT - Introduction

SLIDE 8

Topologies (1)

  • Four full mesh instances
  • DSL 2.0 grammar (JSON)
  • Local site; Feasibility evaluation

FullMesh {
  id="FullMesh_Dispersed"
  host {
    id="h1" location="AMS"
    port { id="port11" }
    port { id="port12" }
  }
  link {
    id="l1"
    port { id="src" }
    port { id="dst" }
  }
  adjacency h1.port14, l1.src
  adjacency h2.port24, l1.dst
} {...}

DSL
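A full-mesh definition like the one above grows quadratically with the number of sites: every unordered pair of hosts needs its own link and adjacency block. A short sketch of that combinatorics, using the four sites that appear in the later point-to-point measurements (the `A-B` output format is illustrative, not actual GTS DSL):

```python
from itertools import combinations

def full_mesh_links(hosts):
    """Every unordered pair of hosts gets one dedicated circuit."""
    return [f"{a}-{b}" for a, b in combinations(hosts, 2)]

# The four GTS sites used in the PtP experiments
links = full_mesh_links(["AMS", "BRA", "LJU", "MIL"])
print(len(links))  # n*(n-1)/2 = 6 circuits for 4 sites
print(links)
```

Six circuits for four sites is exactly the set of rows in the Docker-to-Docker latency table later in the deck.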

SLIDE 9

Topologies (2)

  • Flannel VXLAN tunneling
  • Key-value store placement
  • Storing network state
  • Separate distributed system

Setup

  • Scaling up from single-site feasibility check
  • Calico dropped
  • Full mesh divided in:
    1. Point-to-point, synthetic benchmarks
    2. Star topology, real-world scenario

SLIDE 10

Synthetic benchmark (PtP)

  • Placement of nodes

Netperf

  • Latency
  • Jitter

Iperf

  • TCP/UDP throughput
  • Jitter

Latency sensitive application (Media streaming)

  • Darwin Streaming Server, Faban RTSP clients
  • Jitter (with netperf)
  • Bitrate

Methodology - Performance

Barker, S. K., & Shenoy, P. (2010, February).
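The deck reports jitter from both netperf and iperf but does not show how it is computed; for UDP streams, iperf uses the interarrival-jitter definition from RFC 3550, an exponentially smoothed mean of transit-time differences between consecutive packets. A minimal sketch, with sample values invented purely for illustration:

```python
def rfc3550_jitter(transit_times_ms):
    """Interarrival jitter per RFC 3550: exponentially smoothed mean of
    |D(i-1, i)|, the transit-time difference between consecutive packets
    (this is the figure iperf reports for UDP streams)."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0  # gain of 1/16 as specified in RFC 3550
    return jitter

# Hypothetical one-way transit times (ms) over a high-latency circuit
samples = [36.3, 36.5, 36.4, 37.0, 36.5]
print(round(rfc3550_jitter(samples), 3))  # 0.082
```

The 1/16 gain makes the estimate robust against single outliers, which matters on shared circuits like the MPLS-interconnected GTS sites.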

SLIDE 11

Setup

  • Documentation
  • VPN
  • Provisioning
  • Resources
  • Access
  • Support

Results - GÉANT

SLIDE 12

Results - PtP VM to VM Latency

SLIDE 13

Results - PtP Docker to Docker Latency

In milliseconds (ms):

Circuit     Topology   Min. Latency   Mean Latency   99th % Latency
AMS – MIL   LIBNET     36.3           36.5           37.0
            WEAVE      36.2           36.5           37.0
            FLANNEL    42.5           42.9           43.0
AMS – LJU   LIBNET     30.1           30.3           31.0
            WEAVE      29.8           30.3           31.0
            FLANNEL    29.8           30.3           31.0
AMS – BRA   LIBNET     17.6           17.7           18.0
            WEAVE      17.4           17.7           18.0
            FLANNEL    17.4           17.7           18.0
MIL – LJU   LIBNET     61.8           62.1           62.4
            WEAVE      59.6           59.8           60.0
            FLANNEL    55.6           55.8           56.0
MIL – BRA   LIBNET     12.7           13.0           14.0
            WEAVE      12.9           13.1           14.0
            FLANNEL    12.9           13.1           14.0
BRA – LJU   LIBNET     47.1           47.4           48.0
            WEAVE      43.1           59.5           130.0
            FLANNEL    43.1           43.4           44.0
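The three columns in the table above can be reproduced from raw per-packet latency samples; a small sketch of the aggregation, using a nearest-rank 99th percentile (the sample values below are invented for illustration, not taken from the measurements):

```python
import statistics

def summarize_latency(samples_ms):
    """Return (min, mean, 99th percentile) latency, as in the PtP tables."""
    ordered = sorted(samples_ms)
    # nearest-rank percentile: smallest value covering 99% of the samples
    rank = max(0, -(-99 * len(ordered) // 100) - 1)  # ceil(0.99 * n) - 1
    return (min(ordered), statistics.fmean(ordered), ordered[rank])

# Hypothetical RTT samples for a single circuit
lo, mean, p99 = summarize_latency([36.3, 36.5, 37.0, 36.5, 36.2, 36.5])
print(lo, round(mean, 1), p99)  # 36.2 36.5 37.0
```

With only a handful of samples the 99th percentile collapses to the maximum, which is why outliers like the Weave BRA – LJU run (130 ms) dominate that column.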

SLIDE 14

Results - PtP Throughput

[Chart: AMS to BRA TCP throughput per solution (Libnet, Weave, Flannel, VM), in Mbps]

[Chart: AMS to BRA UDP throughput per solution (Libnet, Weave, Flannel, VM), in Mbps]

SLIDE 15

Results - Streaming Experiment

[Chart: BRA – AMS mean and maximum bitrate per stream in Mbps, per instance (VM, LIBNET, WEAVE, FLANNEL) at 1, 3 and 9 concurrent workers]

[Chart: BRA – AMS mean jitter in ms, per instance (VM, LIBNET, WEAVE, FLANNEL) at 1, 3 and 9 concurrent workers]

SLIDE 16

  • Measurements currently only valid within the GTS environment:
    – Repeat the performance analysis in a heavily shared environment (e.g. Amazon EC2)
    – Perform experiments with more compute resources (CPU capping)
  • Anomalies in throughput performance (UDP, TCP) remain unexplained:
    – Similar behavior discovered in the work of J. Claassen
  • Ideally, more measurements to increase accuracy
  • No significant performance degradation from implementing Docker overlays within GTS
  • Ideally, use Weave within the GTS environment

Conclusion & Future Work

SLIDE 17

A: Science Park 904, Amsterdam NH
W: rp.delaat.net | github.com/siemhermans/gtsperf

Thank you

Questions?

siem.hermans@os3.nl | patrick.deniet@os3.nl

Research Project 1

System and Network Engineering

SLIDE 18