1
Servers in Action: Towards Distributed Traffic Measurement in Data Centers
Praveen Tammana & Myungjin Lee
School of Informatics University of Edinburgh MSN’14 - Cosener's House, Abingdon
2
Network Measurement in Data Centers
– Data center networks: high bandwidth, latency sensitive
– Network management needs control and measurement
– Measurement must be accurate and scalable
– And, for data centers: programmable, responsive, evolvable
3
[Figure: multi-rooted data center topology with core, aggregate, and edge switch layers]
4
Flow Monitoring
Network management tasks: Accounting, Traffic Engineering, Fault diagnosis, SLA monitoring, Anomaly detection (worms, portscans, botnets), Forensic analysis
[Figure: switches export flow records to a central Flow Collector that feeds these management tasks]
5
– NetFlow, sFlow: must cope with high traffic rates using limited
switch resources (SRAM, CPU)
– Not accurate (basic requirement)
➔ Flow coverage and accuracy are
compromised.
➔ Not suitable for management tasks
that require fine-grained flow details.
[Figure: sampled packet stream → per-flow counters (# bytes/pkts) → management tasks (Accounting, Traffic Engineering, Fault diagnosis, SLA monitoring, Anomaly detection, Forensic analysis)]
6
– Task 1: Anomaly Detection; Task 2: Traffic Engineering; … Task N
– Not evolvable (DC requirement): each new task needs its own
counters in scarce switch SRAM
[Figure: packet stream → per-task counters in SRAM (# bytes/pkts) → Task 1, Task 2, … Task N]
7
[Figure source: Net Optics 2013]
8
– Distribute flow monitoring to the servers
[Figure: (1) servers monitor flows f1 … f5, (2) statistic packets (s-pkts) are aggregated, (3) results are reported to the collector]
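The monitoring step above can be sketched as a small server-side routine that folds a (possibly sampled) packet stream into per-flow records, the "statistic packets" (s-pkts) of the later slides. This is a minimal illustration, not the prototype's code; all field and function names are assumptions:

```python
from collections import defaultdict

def build_flow_records(packets):
    # Hypothetical server-side measurement module: aggregate packets
    # into per-flow byte/packet counters, keyed by the usual 5-tuple.
    records = defaultdict(lambda: {"pkts": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        records[key]["pkts"] += 1
        records[key]["bytes"] += pkt["len"]
    return dict(records)

# A monitored server would periodically export these records as s-pkts
# toward an aggregation module instead of a central collector.
```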
9
[Figure: core/aggregate/edge data center topology, with monitoring at the servers]
10
Measurement module: emits statistic packets (s-pkts), e.g., one per flow record
Aggregation module: aggregates s-pkts using high-density server DRAM
11
Measurement module NIC
Regular packets
12
Measurement module NIC
Regular packets
Statistic packets (s-pkts)
13
Aggregation Module
Statistic packets (s-pkts)
Measurement module NIC
Regular packets
Ingress port / Egress port
14
– Packet path encoding and the IP source route option
– Use the switch forwarding table
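One way to picture this path encoding: switches append their IDs to the packet as it travels, and the reversed list doubles as a source route for sending an s-pkt back along the same path. A toy model of the idea; the `record_hop` and `s_pkt_route` helpers are hypothetical, not the prototype's packet format:

```python
def record_hop(pkt, switch_id):
    # Each switch on the forward path appends its ID to the packet,
    # analogous to the IP record-route / source-route options.
    pkt.setdefault("path", []).append(switch_id)
    return pkt

def s_pkt_route(pkt):
    # Reverse the recorded forward path to source-route the s-pkt
    # back toward the sender's side of the network.
    return list(reversed(pkt.get("path", [])))
```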
15
Threshold: T = 10
HHH: the longest IP prefix that occupies more than fraction T of link bandwidth after excluding any descendant HHH
[Figure: binary IP prefix trie from **** down to the 4-bit prefixes (0000 … 0111), annotated with the traffic volume of each prefix]
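The HHH definition above can be computed bottom-up over the prefix trie: walk from the longest prefixes toward the root, and report a prefix whenever its volume, minus the volume already claimed by descendant HHHs, exceeds the threshold. A compact sketch of that definition (an illustrative reimplementation, not the paper's algorithm):

```python
def find_hhh(leaf_counts, threshold):
    """Bottom-up HHH detection over a binary prefix trie.

    leaf_counts maps full-length bit strings (e.g. "0110") to traffic
    volume. A prefix is an HHH if its volume, after subtracting the
    volume of descendant HHHs, exceeds the threshold.
    """
    width = len(next(iter(leaf_counts)))
    # volume[prefix] = total traffic under that prefix
    volume = dict(leaf_counts)
    for length in range(width - 1, -1, -1):
        for prefix in {k[:length] for k in leaf_counts}:
            volume[prefix] = (volume.get(prefix + "0", 0)
                              + volume.get(prefix + "1", 0))
    hhh, covered = [], {}
    # covered[prefix] = volume already attributed to descendant HHHs
    for length in range(width, -1, -1):           # longest prefixes first
        for prefix in sorted(p for p in volume if len(p) == length):
            residual = volume[prefix] - covered.get(prefix, 0)
            if residual > threshold:
                hhh.append(prefix)
                covered[prefix] = volume[prefix]  # now fully covered
            if length > 0:
                # Propagate covered volume up to the parent prefix.
                parent = prefix[:-1]
                covered[parent] = covered.get(parent, 0) + covered.get(prefix, 0)
    return hhh
```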
16
[Figure: HHH case study. Measurement modules pre-filter flows (f1 … f5) against an IP prefix trie on source IP and emit statistic packets (s-pkts); the HHH aggregation module combines them and reports HHHs to the collector]
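The pre-filtering step in this pipeline is not spelled out on the slide; one plausible policy is shown below purely as an illustration: drop per-flow records whose local volume is too small to matter for HHH detection, so fewer s-pkts leave the server. The `local_cutoff` parameter is an assumption:

```python
def prefilter(flow_records, local_cutoff):
    # Keep only flow records large enough to potentially contribute
    # to an HHH; everything else is dropped before s-pkts are emitted.
    return {flow: rec for flow, rec in flow_records.items()
            if rec["bytes"] >= local_cutoff}
```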
17
– Measurement module: customized YAF
– Aggregation module: IP prefix trie
– Packet trace: T. Benson's university data center trace
– Metrics: HHH accuracy; computation overhead on servers and switches; compared with NetFlow
18
Aggregation module overhead (AMO)
HHH accuracy at varying sampling rates
FPR: False Positive Rate; FNR: False Negative Rate
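FPR and FNR here can be read as set comparisons between the HHHs reported under sampling and the ground-truth HHHs. The denominators used in the plot are not stated on the slide, so the formulas below (false positives over reported, false negatives over true) are one common convention, not necessarily the paper's:

```python
def fpr_fnr(detected, true_hhh):
    # Compare the HHH prefixes reported under sampling against the
    # ground-truth HHH set.
    detected, true_hhh = set(detected), set(true_hhh)
    fp = len(detected - true_hhh)  # reported, but not a true HHH
    fn = len(true_hhh - detected)  # true HHH that was missed
    fpr = fp / len(detected) if detected else 0.0
    fnr = fn / len(true_hhh) if true_hhh else 0.0
    return fpr, fnr
```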
19
Correctness: at a 100% sampling rate, 100% accuracy
Overhead: AMO is less than 2% of the NetFlow overhead (NFO)
20
Conclusion:
– Our framework offloads measurement overhead from the switches
– Evolves along with data center traffic volume
– Provides more flexibility to data center operators
Future work:
– Prototyping the proposed framework
– Exploring performance across different measurement tasks
– End-host based network troubleshooting (e.g., packet loss, delay)
– Impact of packet loss on accuracy
– Distributing measurement task overhead across the network
21
22
– Handling multiple paths between end hosts
– Consistency with forwarding rule updates
23
24
Flow path: S → T1 → A1 → T2 → R
Path information is encoded into the packet
[Figure: sender S, receiver R, switches T1, A1, T2]
25
Flow path: S → T1 → A1 → T2 → R
s-pkt path: R → T2 → A1 → T1