Research – Zurich Research Laboratory
Got Loss? Get zOVN!
Daniel Crisan, Robert Birke, Gilles Cressier, Cyriel Minkenberg, and Mitch Gusat
ACM SIGCOMM 2013, 12-16 August, Hong Kong, China
Diagram: End-users access datacenter services over the global Internet (long-and-fat links). Inside the datacenter, a physical network of short-and-fat links connects routers and switches to N virtualized servers. Each virtualized server runs a virtual switch behind its NIC, hosting VMs (VM 1 … VM K_i), each attached through a vNIC.
Physical Networks vs. Virtual Networks
Diagram: a vSwitch with two source VMs (VM 1 and VM 2) transmitting through their vNIC Tx into vSwitch ports A and B (Tx), and one sink VM (VM 3) receiving through its vNIC Rx from port C (Rx).
Diagram: the same vSwitch setup, annotated with the potential loss points (1)–(6) along the path from the source VMs' vNIC Tx queues, through the vSwitch ports A/B (Tx) and port C (Rx), to the sink VM's vNIC Rx.
Configuration  Hypervisor  vNIC    vSwitch
C1             Qemu/KVM    Virtio  Linux Bridge
C2             Qemu/KVM    Virtio  Open vSwitch
C3             Qemu/KVM    Virtio  VALE
C4             H2          N2      S4
C5             H2          E1000   S4
C6             Qemu/KVM    E1000   Linux Bridge
C7             Qemu/KVM    E1000   Open vSwitch
Plot: injected traffic [MBps] (up to 200) for configurations C1–C7, broken down into stack loss, vSwitch loss, and received traffic.
Transmit path (VM → hypervisor → NIC): the application's write enters the guest kernel at socket Tx, passes through the Qdisc, and start_xmit enqueues the skb at the vNIC Tx (the skb is freed after enqueue). The hypervisor's zOVN bridge receives it on Port A Rx, encapsulates it, and forwards it through the vSwitch to Port B Tx and the NIC Tx, which sends the frame on the physical link and can receive PAUSE. Backpressure propagates upstream via start/stop queue, wake-ups, and ultimately the write return value.
Receive path (NIC → hypervisor → VM): a frame received on the physical link at NIC Rx (which can send PAUSE) is handed to the zOVN bridge on Port A Tx, decapsulated, and forwarded through the vSwitch to Port B Rx and the guest's vNIC Rx. The NET_RX softirq (netif_receive_skb) delivers it to socket Rx, where the application's read returns. Backpressure propagates via pause/resume queue, wake-ups, and the send return value.
A setsockopt() option lets each socket select lossy or lossless service.
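The slide names only the mechanism (setsockopt), not the zOVN-specific option. As a minimal sketch of the per-socket opt-in pattern, the example below uses the standard SO_REUSEADDR option purely as a stand-in; the actual lossless-selection option name is not given on the slide and would be zOVN-specific:

```python
import socket

# Sketch of the per-socket selection pattern suggested above:
# an application opts in via setsockopt(). SO_REUSEADDR stands in
# for the (hypothetical, zOVN-specific) "lossless" option.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # opt-in flag = 1
flag = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(flag != 0)  # True: the option is now set on this socket
s.close()
```

A zOVN-aware stack would read such a flag per socket and map the flow onto a lossless or lossy class accordingly.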
Lossless vSwitch operation: senders enqueue packets into the per-port Tx queues (Port 1 … N Tx); the forwarder moves packets from Tx to Rx queues (Port 1 … N Rx); receivers drain the Rx queues. When an Rx port is full, the forwarder waits instead of dropping, and resumes when something is consumed from that port.
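A minimal sketch of this forwarding loop, assuming bounded per-port Rx queues and a forwarder that blocks rather than drops (illustrative structure only, not the actual zOVN code):

```python
import queue
import threading

# Each Rx port is a bounded queue; the forwarder moves packets from
# Tx to Rx and, instead of dropping when the Rx port is full, it
# blocks and is woken when the receiver consumes a packet.
RX_DEPTH = 4
rx_port = queue.Queue(maxsize=RX_DEPTH)   # Rx port: finite, never overflows
tx_port = queue.Queue()                   # Tx side: packets from the sender VM

def forwarder():
    while True:
        pkt = tx_port.get()
        if pkt is None:                   # shutdown marker
            break
        rx_port.put(pkt)                  # blocks while the Rx port is full
                                          # -> backpressure, no loss

# The sender injects far more packets than the Rx port can hold at once.
for i in range(20):
    tx_port.put(i)
tx_port.put(None)

t = threading.Thread(target=forwarder)
t.start()

received = []
while len(received) < 20:                 # receiver consumes, waking the forwarder
    received.append(rx_port.get())
t.join()

print(received == list(range(20)))        # True: every packet delivered, in order
```

The key property matches the slide: with a blocking forwarder and finite Rx queues, overload turns into waiting instead of packet loss.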
Losses occur in the vSwitch, between points (3) and (4), and in the receive stack, between points (5) and (6).
Testbed: four IBM x3550 M4 servers, each hosting 16 VMs (VM 1 … VM 16) behind a vSwitch. Each server connects at 1G to the control network (HP 1810-8G 1G switch) and at 10G to the data network (IBM G8264 10G switch).
* As in "DCTCP: Efficient Packet Transport for the Commoditized Data Center", SIGCOMM 2010.
Plot: mean completion time [ms] versus response size [packets] (log–log, 1–10000 packets) for the four configurations:

Config  Virtual network flow control  Physical network flow control
LL      No                            No
LZ      No                            Yes
ZL      Yes                           No
ZZ      Yes                           Yes

The vSwitch is the main congestion point; congestion in the physical switch is negligible, and flows remain on lossy priorities.
Plot: mean completion time [ms] (5–45) for NewReno, Vegas, and Cubic under the four configurations LL, LZ, ZL, ZZ (virtual/physical network flow control: No/No, No/Yes, Yes/No, Yes/Yes).
Lossless operation is achieved without changing hardware.
Workflow
1. Source VM sends a packet to its attached vSwitch.
2. The vSwitch queries the Controller to find the address of the destination.
3. The Controller answers; the vSwitch caches the information.
4. The packet is sent over the physical network, encapsulated with new headers.
5. The packet is decapsulated at the destination vSwitch.
Diagram: the inner frame (Payload|TCP|IP|Eth) is wrapped in outer headers (Encap|UDP|IP|Eth) for transit over the physical network. The source server's vSwitch consults its cache, querying the fabric controller on a miss (steps 1–3); the destination server's vSwitch decapsulates (steps 4–5).
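The workflow above can be sketched in a few lines. All names here (controller_lookup, the text-encoded "headers", the cache dict) are illustrative stand-ins, not the actual overlay implementation:

```python
# Sketch of steps (2)-(5): the vSwitch caches controller lookups and
# encapsulates the inner frame with new outer headers before sending it
# over the physical network; the destination vSwitch strips them again.
CONTROLLER_MAP = {"vm-b": "10.0.0.2"}      # fabric controller's directory
cache = {}                                 # per-vSwitch cache (step 3)

def controller_lookup(dst_vm):
    """Step 2: ask the controller where the destination VM lives."""
    return CONTROLLER_MAP[dst_vm]

def resolve(dst_vm):
    if dst_vm not in cache:                # cache miss -> query controller
        cache[dst_vm] = controller_lookup(dst_vm)
    return cache[dst_vm]

def encapsulate(inner_frame, dst_vm):
    """Step 4: prepend outer Eth|IP|UDP|Encap headers (text-encoded here)."""
    outer_ip = resolve(dst_vm)
    return f"Eth|IP(dst={outer_ip})|UDP|Encap|".encode() + inner_frame

def decapsulate(wire_frame):
    """Step 5: strip the outer headers at the destination vSwitch."""
    return wire_frame.split(b"|Encap|", 1)[1]

frame = b"Eth|IP|TCP|payload"              # inner frame from the source VM
wire = encapsulate(frame, "vm-b")
print(decapsulate(wire) == frame)          # True: inner frame survives transit
print("vm-b" in cache)                     # True: address cached after lookup
```

A second packet to the same destination hits the cache and skips the controller, which is the point of step (3).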