SLIDE 1

Flowgrind

A TCP Traffic Generator for Developers

Arnd Hannemann <arnd.hannemann@credativ.de>

05.10.2016

Arnd Hannemann credativ GmbH 1 / 29

SLIDE 2

Overview

Introduction
Flowgrind Architecture
Example measurements
Summary

SLIDE 3

Overview

Introduction
  Motivation
Flowgrind Architecture
Example measurements
Summary

SLIDE 4

Measuring network performance

Tool requirements

◮ Background: wireless mesh networks
◮ Creating load anywhere in the network
◮ Measuring TCP performance between any two nodes
◮ Testing TCP variants
◮ Extensive list of TCP metrics
◮ Separation of control and test traffic

[Diagram: wireless mesh network with mesh gateways, backbone mesh routers, routing and non-routing mesh clients, and wireless access-point, backbone wired and wireless mesh connections to the backbone wired Internet]

SLIDE 5

Related work

Feature comparison of Iperf, Iperf3, Netperf, Thrulay, TTCP and NUTTCP across:

◮ TCP, UDP, SCTP and other protocols
◮ Kernel statistics
◮ Interval reports
◮ Concurrent tests against the same and against different hosts
◮ Distributed tests
◮ Bidirectional test connections
◮ Test scheduling
◮ Traffic generation
◮ Control/test data separation

SLIDE 6

Motivation for a new tool

Shortcomings of existing tools

◮ Client-server architecture
  ⇒ hard to generate cross-traffic
◮ No separation of data and control traffic

SLIDE 7

Overview

Introduction
Flowgrind Architecture
  Architecture
  Client-server architecture
  RPC
Example measurements
Summary

SLIDE 8

Flowgrind


◮ Is a distributed network performance measurement tool
◮ Focuses on TCP testing/debugging
◮ Knobs to test TCP variants against each other
◮ Dumps packet headers with libpcap
◮ Gathers TCP statistics from the kernel (Linux/FreeBSD)
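On Linux, the kernel TCP metrics mentioned in the last bullet are exposed through the TCP_INFO socket option. A minimal Python sketch of reading a few of them follows; it mirrors the mechanism, not Flowgrind's actual C implementation, and the decoded offsets assume the long-stable beginning of struct tcp_info:

```python
import socket
import struct

def tcp_info(sock):
    """Read a few struct tcp_info fields via getsockopt(TCP_INFO) (Linux only).

    Layout assumption: 8 one-byte fields at the start of struct tcp_info,
    followed by a run of 32-bit counters (rto, ato, snd_mss, ...).
    """
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    u8 = struct.unpack_from("8B", raw, 0)
    u32 = struct.unpack_from("<21I", raw, 8)
    return {
        "state": u8[0],        # 1 == TCP_ESTABLISHED
        "ca_state": u8[1],     # congestion-avoidance state (open, recovery, ...)
        "snd_mss": u32[2],     # sender maximum segment size
        "rtt_us": u32[15],     # smoothed RTT in microseconds
        "rttvar_us": u32[16],
        "snd_ssthresh": u32[17],
        "snd_cwnd": u32[18],   # congestion window in segments
    }

# Usage: any connected TCP socket works, e.g. one end of a loopback connection.
```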

SLIDE 9

Terminology in Flowgrind

Flows

◮ One data connection for each flow
◮ Flows have a source and a destination endpoint
◮ Test data can be sent in either direction
◮ Scheduling: flows can run sequentially, in parallel, or overlap
◮ Individual parameters for each flow

[Diagram: wireless mesh network with the Flowgrind controller, Flowgrind daemons, test connections and RPC connections]

SLIDE 10

Problems with client-server architecture

[Diagram: wireless multi-hop network]

SLIDE 11

Problems with client-server architecture

[Diagram: client and server connected across a wireless multi-hop network]

SLIDE 13

Client-server architecture

Overview

◮ Tools like iperf are split into a client and a server
◮ Flows can only be established between a client and a server, not between servers
◮ Architecture implemented in older versions of Flowgrind

Problems with client-server architecture

◮ For multiple clients, external synchronization of the test start is needed
◮ Potentially different data handling in client and server (e.g. Thrulay)

SLIDE 14

Distributed architecture

Controller (flowgrind)

◮ Parses the test parameters
◮ Configures all involved daemons
◮ Presents the results

Daemon (flowgrindd)

◮ Started on every test node
◮ Performs the actual tests
◮ Measures performance metrics

[Diagram: wireless mesh network with the Flowgrind controller, Flowgrind daemons, test connections and RPC connections]

SLIDE 15

Remote Procedure Calls (RPC)

RPC in Flowgrind

◮ Uses XML-RPC
◮ All calls are initiated by the controller; no RPC between daemons
◮ Can use a different IP address/interface to separate control and test traffic

During a test

◮ The controller periodically queries all daemons for interval results
◮ Results are formatted and printed as they arrive
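The control path described above can be mimicked with Python's standard-library XML-RPC modules. This is a toy sketch of the controller/daemon split, not flowgrindd's real RPC interface; the method names get_version and get_interval_result are invented here:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# "Daemon" side: register callable methods and serve them over HTTP/XML-RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(lambda: "0.8-sketch", "get_version")
server.register_function(lambda: {"flow": 0, "bytes_written": 12345},
                         "get_interval_result")
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Controller" side: all calls go controller -> daemon, never daemon -> daemon.
port = server.server_address[1]
daemon = ServerProxy(f"http://127.0.0.1:{port}")
print(daemon.get_version())
report = daemon.get_interval_result()   # what periodic interval polling returns
```

In the real tool the control connection can sit on a wired management interface while the test connection uses the wireless mesh, which is what keeps RPC traffic from perturbing the measurement.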

SLIDE 16

Overview

Introduction
Flowgrind Architecture
Example measurements
  Wireless Multi-Hop Network with Cross Traffic
  AWS example: congestion control algorithms
Summary

SLIDE 17

Cross-Traffic in a Wireless Multihop Network

Test scenario

◮ Measurement performed on a testbed
◮ Two flows between two distinct pairs of nodes
◮ Routes overlap; one bottleneck link
◮ Second flow started after a delay and stopped earlier

SLIDE 18

Topology

Bottleneck link

[Topology diagram: nodes A to F, sharing the bottleneck link]

◮ Flow 1 between nodes A and E
◮ Flow 2 between nodes B and F

SLIDE 19

Topology

SLIDE 20

WMN example: Flowgrind arguments

flowgrind -n 2 -i 5 -O b=TCP_CONG_MODULE=reno \
    -F 0 -H s=wlan0.mrouter16/mrouter16,d=wlan0.mrouter8/mrouter8 \
    -T b=900 \
    -F 1 -H s=wlan0.mrouter17/mrouter17,d=wlan0.mrouter9/mrouter9 \
    -T b=300 -Y b=300
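In the -H arguments, the part before the slash (e.g. wlan0.mrouter16) names the test endpoint and the part after it (mrouter16) the host for the RPC control connection, which is how test and control traffic end up on different interfaces. A tiny Python sketch of that split; split_endpoint is our own illustrative helper, not part of flowgrind:

```python
def split_endpoint(spec):
    """Split a 'test[/control]' endpoint as used in flowgrind's -H option.

    Before the slash: address or interface carrying test traffic;
    after the slash: host used for the XML-RPC control connection.
    Without a slash, control traffic shares the test address.
    """
    test, sep, control = spec.partition("/")
    return test, (control if sep else test)

print(split_endpoint("wlan0.mrouter16/mrouter16"))
print(split_endpoint("172.30.0.122"))
```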

SLIDE 21

WMN example: Output

# ID  begin    end      through    RTT min    RTT avg    RTT max    IAT min   IAT avg   IAT max
#     [s]      [s]      [Mbit]
S 0   375.011  380.004  0.288782   12916.913  14135.647  15035.946    30.069   183.367   969.321
R 0   375.008  380.001  0.446299    5378.736   7304.811   8322.028    12.080   138.115  1206.780
S 1   375.008  380.009  0.157245    1284.537   2348.903   3978.513    70.058   418.893  2341.099
R 1   375.009  380.010  0.026211   11766.836  11766.836  11766.836  2919.213  2919.213  2919.213
S 0   380.004  385.000  0.288551   13335.203  14015.217  15029.046    63.087   269.419  1427.218
R 0   380.001  385.003  0.406170    7380.097   8201.946   9628.294    16.043   191.917   987.361

# cwnd ssth uack sack lost retr fack reor rtt rttvar rto castate mss mtu status
  83.000  59  83  3  3276.500   50.000  4940.000  open  1448  1500  (n/n)
 128.000 107 128  3  2879.000    6.000  4252.000  open  1448  1500  (n/n)
  44.000   7  44  3  2880.500  256.000  4208.000  open  1448  1500  (n/n)
   8.000   5   8  3  2832.500  149.000  3848.000  open  1448  1500  (n/n)
  86.000  59  86  3  3654.500  190.000  5072.000  open  1448  1500  (n/n)
 142.000 107 142  3  3388.500   65.000  4520.000  open  1448  1500  (n/n)
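Interval reports like these are easy to post-process. A small Python sketch that parses one line following the column layout shown above; the field names are our own choice, not a flowgrind API:

```python
def parse_interval(line):
    """Parse one flowgrind interval line: endpoint, flow id, begin/end [s],
    throughput [Mbit], then RTT and IAT as min/avg/max triples."""
    kind, flow_id, *rest = line.split()
    (begin, end, through,
     rtt_min, rtt_avg, rtt_max,
     iat_min, iat_avg, iat_max) = map(float, rest)
    return {
        "endpoint": kind,            # S = sender report, R = receiver report
        "flow": int(flow_id),
        "begin": begin, "end": end,
        "throughput_mbit": through,
        "rtt": (rtt_min, rtt_avg, rtt_max),
        "iat": (iat_min, iat_avg, iat_max),
    }

rec = parse_interval("S 0 375.011 380.004 0.288782 "
                     "12916.913 14135.647 15035.946 30.069 183.367 969.321")
print(rec["flow"], rec["throughput_mbit"], rec["rtt"][1])
```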

SLIDE 22

WMN example: Goodput

[Figure: goodput [Mb/s] over time [s] for Flow 0 (node 16 to 8) and Flow 1 (node 17 to 9)]

SLIDE 23

WMN example: Congestion Window

[Figure: congestion window and slow-start threshold [segments] over time [s]]

SLIDE 24

Test of congestion control algorithms in AWS

Test scenario

◮ Measurement performed in a VPC
◮ Four flows between a pair of nodes
◮ Four different congestion control algorithms

SLIDE 25

AWS: Flowgrind arguments

flowgrind -n 4 -H s=172.30.0.122,d=172.30.0.123 -T s=900 \
    -F 0 -O s=TCP_CONGESTION=yeah \
    -F 1 -O s=TCP_CONGESTION=cubic \
    -F 2 -O s=TCP_CONGESTION=highspeed \
    -F 3 -O s=TCP_CONGESTION=htcp
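The TCP_CONGESTION values passed with -O are ordinary Linux per-socket options; each daemon sets the algorithm on its test socket. A Linux-specific Python sketch of the same mechanism, assuming the chosen algorithm (cubic here) is available on the host:

```python
import socket
from pathlib import Path

# Algorithms currently selectable on this host (Linux sysctl).
available = Path(
    "/proc/sys/net/ipv4/tcp_available_congestion_control"
).read_text().split()
print(available)

# Select an algorithm on a single socket, as flowgrind does per flow.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
name = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(name.split(b"\x00", 1)[0].decode())
sock.close()
```

Non-default algorithms such as yeah or highspeed may first need their kernel modules loaded (e.g. tcp_yeah) before they appear in the available list.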

SLIDE 26

AWS example: Output

# ID 0 S: 172.30.0.138 (Linux 4.6.0-1-amd64), random seed: 1611955119,
#   sbuf = 12582912/0 [B] (real/req), rbuf = 12582912/0 [B] (real/req),
#   SMSS = 8949 [B], PMTU = 9001 [B], Interface MTU = 9001 (unknown) [B],
#   CC = yeah, duration = 900.003/900.000 [s] (real/req),
#   through = 5.758049/0.000000 [Mbit/s] (out/in),
#   request blocks = 79075/0 [#] (out/in)
# ID 0 D: 172.30.0.139 (Linux 4.6.0-1-amd64), random seed: 1611955119,
#   sbuf = 12582912/0 [B] (real/req), rbuf = 12582912/0 [B] (real/req),
#   SMSS = 1448 [B], PMTU = 9001 [B], Interface MTU = 9001 (unknown) [B],
#   through = 0.000000/5.684553 [Mbit/s] (out/in),
#   request blocks = 0/78065 [#] (out/in),
#   IAT = 0.004/11.529/281.197 [ms] (min/avg/max),
#   delay = 18.708/11481.539/27539.894 [ms] (min/avg/max)
...

SLIDE 27

AWS example: Goodput

[Figure: goodput in Mb/s over time in s for YeAH-TCP, CUBIC TCP, Highspeed TCP and H-TCP]

SLIDE 28

Overview

Introduction
Flowgrind Architecture
Example measurements
Summary

SLIDE 29

Summary

Feature comparison of Iperf, Iperf3, Netperf, Thrulay, TTCP, NUTTCP and Flowgrind. Flowgrind covers:

◮ TCP (UDP, SCTP and other protocols are not yet supported)
◮ Kernel statistics
◮ Interval reports
◮ Concurrent tests against the same and against different hosts
◮ Distributed tests
◮ Bidirectional traffic
◮ Test scheduling
◮ Traffic generation
◮ Control/test data separation

SLIDE 30

Summary

Flowgrind

◮ Distributed architecture well suited for complex test scenarios
◮ Extensive TCP metrics
◮ Advanced traffic generation features
◮ https://github.com/flowgrind/flowgrind

Possible future improvements

◮ Easier multi-core support and better performance
◮ Support for TCP Fast Open
◮ Support for other protocols: UDP/DCCP/SCTP

SLIDE 31

Thanks for listening.

Questions?
