Towards Knowledge-Defined Networking using In-band Network Telemetry

Jonghwan Hyun, Nguyen Van Tu and James Won-Ki Hong

Department of Computer Science and Engineering, POSTECH, Pohang, Korea
Email: {noraki, tunguyen, jwkhong}@postech.ac.kr

Abstract—As the number of connected devices in the network grows rapidly, network management is becoming more complex. Closed-loop network management can be a solution to this problem, and the self-driving network concept is one of the most promising candidates. To realize the self-driving network, Knowledge-Defined Networking (KDN), network telemetry, and Software-Defined Networking (SDN) are essential parts. As a first step toward realizing the self-driving network, in this paper we propose an architecture for the self-driving network and suggest its use cases. We also present a network monitoring system implementation on an SDN controller using INT, and discuss its limitations.

Index Terms—SDN, KDN, P4, Network monitoring, In-band Network Telemetry, Closed-loop network management, ONOS

I. INTRODUCTION

As the number of connected devices in the network grows rapidly, network management is becoming more complex. According to Gartner's report [1], the number of devices connected to the network is expected to reach 20.4 billion by 2020, mainly because of IoT devices. Moreover, these devices support various protocols and services, which makes network management much more complex. Besides, dynamic changes in the network (e.g., hosts moving around, content dynamically moving in the network, dynamic bandwidth allocation driven by SLAs and QoS) make it impossible for human operators to manage the network in real time. Therefore, the need for a closed-loop network management solution is arising.

The concept of the self-driving network has been emerging in recent years, and it is a good candidate for a closed-loop network management solution. Like a self-driving car, a self-driving network can operate and manage the network without operators' intervention, with the help of Artificial Intelligence (AI) and Machine Learning (ML) technologies. Telemetry is an important part of the self-driving network: it provides detailed information about the network, which allows AI and ML to learn the network deeply and build meaningful knowledge. By analyzing the collected telemetry data, a decision can be made and applied to the network, and all of these procedures run automatically. Adopting the self-driving network can realize OPEX savings by reducing the complexity of network management, since most operations and decisions are made by the network management system. The performance of the network can also be improved, since issues caused by the dynamicity of the network are handled by the system automatically, and adopting network telemetry can overcome the limitations of current network monitoring techniques. For example, ECMP (Equal-Cost Multi-Path) may split packets unevenly, misconfiguration of a routing path can lead to inefficient packet forwarding, and errors on network equipment can cause packet loss [2].

Knowledge-Defined Networking (KDN), network telemetry, and Software-Defined Networking (SDN) are the fundamental building blocks for realizing the self-driving network. SDN decouples traditional switches into a data plane and a control plane. This separation gives SDN flexibility in controlling network behavior, a centralized network view, and programmability for network management. KDN was originally defined at the early stage of the Internet, suggesting the concept of a Knowledge Plane that adopts AI and cognitive systems to build a model of the network [3]. KDN takes network information as input and generates policies to improve network performance. Network telemetry collects information generated in the network, including SNMP, sFlow [4], and NetFlow [5] data as well as syslog. In addition, In-band Network Telemetry (INT) [6] enables the collection of packet-level network state, such as hop latency and queue congestion state, with the help of P4 [7].

As a first step toward the self-driving network, in this paper we propose an architecture for the self-driving network and suggest its use cases. We also present an implementation of a network monitoring system on the ONOS controller using INT.

This paper is organized as follows. Section II provides a brief overview of KDN and INT and presents related work on network monitoring. In Section III, we propose the architecture for the self-driving network, and we suggest its use cases in Section IV. Section V presents the design and implementation of our network monitoring system in ONOS. Section VI presents an evaluation and discusses open issues and possible solutions. Section VII concludes with future work.

II. BACKGROUND AND RELATED WORK

A. Background

1) P4 and INT: P4 [7], a high-level language for programming protocol-independent packet processors, is used to program how packets are processed in the data path. In traditional switches, the specification already dictates what a switch can and cannot do. Even an OpenFlow switch still has a fixed set of functions defined in the OpenFlow specification. This makes the switch design complex as the OpenFlow specification grows and supports more network protocols. P4 solves this problem by directly programming switch data planes, allowing programmers to decide how a switch parses, modifies, and forwards packets with their own parsers and actions.

INT [6] is a network monitoring framework designed to collect and report network state directly from the data plane. The idea is to add network information to packets at each switch they are forwarded through [8]. The advantage of this approach is that it is performed at full line rate without delaying packet forwarding. While it has some issues (for example, the number of hops can be limited so that packets do not exceed the MTU), INT can provide end-to-end, packet-level network information in real time. Consequently, INT enables many network applications, including network troubleshooting, advanced congestion control, advanced routing, and network data plane verification [9].

2) Knowledge-Defined Networking: KDN [3] was originally defined at the early stage of the Internet, suggesting the concept of a Knowledge Plane that adopts AI and cognitive systems to build a model of the network. But networks were inherently distributed systems, so each node had only a local view and limited computing power, which made it difficult to learn on those nodes. With the advent of SDN, however, it became possible to obtain a centralized view of the network in the control plane. The computing power of each node has also improved enough to maintain a richer view of the network. Consequently, KDN has been re-defined as the combination of a knowledge plane, network telemetry, SDN, and network analytics [10]. When KDN adopts a programmable data plane, a richer set of actions can be provided, compared with relying only on SDN and management protocols.

B. Network Monitoring Methods

1) Traditional: Two traditional methods, NetFlow [5] and sFlow [4], have been widely used for many years. NetFlow performs per-flow monitoring, capturing information about IP flows as they pass through switches and then exporting the aggregated data. NetFlow thus requires additional memory and CPU for extracting and processing flow data. In addition, a monitoring center needs to poll the switches to obtain data. Since the shortest period for exporting NetFlow data is 15 seconds, NetFlow is inappropriate for real-time monitoring.

sFlow samples packet flows passing through the network interface. sFlow takes a sample once every n packets, with a configurable sampling rate n. Sampled packets can then be sent immediately to a target instance for further analysis. sFlow requires little additional CPU and memory on switches, but has a disadvantage with regard to accuracy: the sampling method may miss microbursts and anomalies and be unable to detect small flows.

2) OpenFlow-based: With the separation of the control and data planes, new methods for monitoring in an SDN environment have been developed. Many of these are based on OpenFlow, the de facto standard SDN protocol. FlowSense [11] proposed a push-based monitoring approach rather than a polling-based one. FlowSense leverages PacketIn and FlowRemoved messages sent from switches to the controller to report network state. It has very low overhead, at the cost of not being able to perform real-time and accurate monitoring. OpenNetMon [12] used a polling technique instead. The polling rate is adaptive: it can be increased when flow rates change rapidly and decreased when they stabilize, to minimize the number of queries. Adaptive polling provides reasonable accuracy while maintaining low CPU overhead. However, OpenNetMon polls edge switches only, so it does not provide information about intermediate switches and lacks a complete view of the network state. OpenSample [13] is another monitoring framework for the OpenFlow protocol that leverages sFlow's packet sampling approach. The focus of OpenSample is traffic engineering, since it can quickly detect elephant flows and estimate link utilization.

3) Programmable Data Plane-based: Recently, programmable data planes have been proposed, changing switches from fixed-function to programmable devices. New methods that utilize this ability have been proposed, such as OpenSketch [14], UnivMon [15], In-situ OAM (iOAM) [16], and INT. OpenSketch and UnivMon run sketch-based streaming algorithms inside switches, allowing fine-grained monitoring with low overhead. OpenSketch is deployed on FPGA switches, while UnivMon is deployed using P4. iOAM and INT are specifications for embedding network telemetry data within user traffic.

III. PROPOSED ARCHITECTURE

In this section, we propose an architecture for the self-driving network based on KDN, INT, and SDN. The architecture, depicted in Fig. 1, is composed of four planes: the control plane, data plane, management plane, and knowledge plane.

A. Control Plane

The SDN controller works as the control plane and deploys INT functionality to the programmable switches. It also controls network monitoring using INT and converts intent-driven requirements from the knowledge plane into specific network policies. First, the INT functionality is implemented in P4 and compiled into a device context that runs on programmable switches. The controller then deploys the device context onto the switches. If every packet sent its metadata to the management plane, the amount of metadata would be excessive; therefore, we choose to collect metadata only for specific flows. The control plane provides the functionality to describe the flows to monitor as a five-tuple (source/destination IP, source/destination port, protocol). The control plane also takes analysis results from the knowledge plane in the form of an intent, provides an interface for other applications, and visualizes the collected metadata.

Fig. 1: Proposed architecture

B. Data Plane

In the data plane, INT metadata for each packet is generated, extracted, and transmitted to the management plane. Switches play three different roles depending on their position in the path. A source switch matches every flow from hosts to determine whether it has been specified by the control plane or not. If a flow matches, an INT header is added, and the metadata indicated by the INT header is added to each packet of the flow. In transit switches, INT packets are identified and INT metadata is inserted. In a sink switch, INT metadata is inserted, and then the entire INT data (INT header and metadata stack) is extracted. The extracted data is then transmitted to the management plane.
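To make the three roles concrete, the following is a minimal Python sketch of this per-hop behavior, not the P4 implementation presented in Section V; the record layout, the monitored-flow set, and the reporting stub are hypothetical.

```python
# Illustrative simulation of the INT source/transit/sink roles (not P4 code).

MONITORED_FLOWS = {("10.0.0.1", "10.0.0.8", 5001, 5001, "UDP")}  # hypothetical

def send_to_management_plane(report):
    """Stand-in for the INT metadata collector in the management plane."""
    print("report:", report)

def process_packet(packet, switch_id, role, local_metadata):
    """Apply the INT behavior of one switch to a packet (a plain dict)."""
    if role == "source" and packet["five_tuple"] in MONITORED_FLOWS:
        # Source switch: flow is selected for monitoring -> add INT header.
        packet["int_header"] = {"metadata_stack": []}
    if "int_header" in packet:
        # Source, transit and sink switches all push their own metadata.
        packet["int_header"]["metadata_stack"].append(
            {"switch_id": switch_id, **local_metadata})
    if role == "sink" and "int_header" in packet:
        # Sink switch: extract the whole INT data, report it, restore packet.
        send_to_management_plane(packet.pop("int_header"))
    return packet

pkt = {"five_tuple": ("10.0.0.1", "10.0.0.8", 5001, 5001, "UDP")}
for sw, role in [("sw1", "source"), ("sw4", "transit"), ("sw7", "sink")]:
    pkt = process_packet(pkt, sw, role, {"hop_latency_us": 10})
```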

C. Management Plane

In the management plane, INT metadata is collected, stored, and aggregated for further analysis in the knowledge plane. The INT metadata collector converts the metadata into JSON format and sends it to a distributed messaging system.

The distributed messaging system provides scalability for the plane. It distributes the metadata to one of the data stream processors. By using the last digit of the source IP address of the metadata as a hash key, metadata from the same flow is sent to the same processor. The system also stores the metadata in databases.

The data stream processor produces basic analysis results and statistics per flow and per switch. The results contain the trajectory of packets, the latency at each hop, and the queue occupancy and congestion status for each queue the packets passed through. The results are then sent to the control plane and knowledge plane. Events that require immediate action (e.g., link failure, black-hole or loop detection) are handled by the control plane. Actions requiring knowledge, such as resource planning, optimization, performance management, and verification, are handled by the knowledge plane.
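As a minimal sketch of the distribution rule above, the snippet below routes each metadata record to a stream processor by the last digit of its source IP address, so records of one flow always land on the same processor. The pool size and record layout are assumptions for illustration.

```python
# Route INT metadata records to stream processors using the last digit of
# the source IP address as the hash key.

NUM_PROCESSORS = 4  # assumed size of the stream-processor pool

def processor_for(record):
    """Pick a stream processor index for a metadata record (a dict)."""
    last_digit = int(record["src_ip"].split(".")[-1][-1])
    return last_digit % NUM_PROCESSORS

records = [
    {"src_ip": "10.0.0.1", "hop_latency_us": 12},
    {"src_ip": "10.0.0.1", "hop_latency_us": 15},  # same flow -> same processor
    {"src_ip": "10.0.0.8", "hop_latency_us": 9},
]
for r in records:
    print(processor_for(r), r)
```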

D. Knowledge Plane

In the knowledge plane, the data gathered in the management plane is fed to ML algorithms, and the output of the algorithms is knowledge, a model of the network. The knowledge is then converted into the form of an intent that describes the required changes to the network configuration. The knowledge plane is separated from the control plane because ML algorithms are generally compute-intensive and could affect the performance of the control plane.

IV. USE CASES

In this section, we discuss two use cases for the proposed architecture: traffic engineering and network anomaly detection.

A. Traffic Engineering

Current traffic engineering (TE) algorithms in SDN (e.g., Hedera [17], MicroTE [18]) suffer from large overhead and huge computation cost: they need additional hardware or can be applied only to large flows. Also, the SDN controller is overloaded while collecting flow information from the switches in the network.

Using the proposed architecture, fine-grained traffic information is collected and analyzed in the management plane, which reduces the overhead on the controller. Moreover, the traffic information is refined and clustered in the management plane, which can reduce the number of input values to the TE algorithm. By merging in the centralized network view from the SDN controller, various traffic characteristics can be analyzed, such as traffic size, interval, duration, and end-points. This information is useful for forecasting short-term and long-term traffic status, which enables effective traffic engineering.

B. Network Anomaly Detection

Network attacks are prevalent and advancing continuously. Using the proposed architecture, we can detect various types of network attacks and anomalies, including DDoS, port scans, and APTs (Advanced Persistent Threats). First, a network model for the normal condition of the network is built. When an unexpected traffic pattern is detected, the traffic is blocked and an alarm is raised. DDoS and port scan attacks can be easily detected in this way. To detect APTs, the knowledge plane continuously records and analyzes the traffic generated by each host. Moreover, a security level is assigned to each host, and the traffic patterns accessing the hosts are also analyzed. Since such an attack can be divided into three phases (attacking from the outside; searching for hosts and vulnerabilities inside the network; and leaking information), the abnormal traffic pattern of each phase can be detected and blocked by the system.

V. NETWORK MONITORING SYSTEM DESIGN

In this section, we present the design and implementation of our network monitoring system on the ONOS controller using INT, including an INT switch that follows an INT specification modified for UDP.

A. INT Header Format

Fig. 2: INT header format for UDP protocol

The INT header format is described in the INT specification [9]. The INT header and metadata must be added as an option or payload of an encapsulation protocol. Our work is based on UDP (Fig. 2), with the following changes to the INT header fields (a sketch of the port-overwriting mechanism follows the list):

  • INT Port and Original Dest Port: The INT port is a predefined port number indicating the presence of INT data after the UDP header. The UDP destination port field is overwritten, and the original UDP destination port is stored in the Original Dest Port field, to be restored at a sink switch.
  • O: This additional field informs the ONOS controller whether the INT packet is sent to the controller for analysis (O field is one) or is a normal packet with INT data that must be forwarded (O field is zero).
  • INT len: Total length of the INT header and metadata stack.
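The Python sketch below illustrates the port-overwriting idea behind these fields: the original UDP destination port is saved in a shim so the sink switch can restore it. The field widths and ordering here are simplified assumptions, not the exact layout of Fig. 2, and the INT port value is hypothetical.

```python
import struct

INT_PORT = 54321  # assumed predefined port signalling "INT data follows UDP"

def add_int_shim(udp_dst_port, o_flag, int_len):
    """Pack a simplified INT shim: Original Dest Port, O flag, INT len.

    Widths are illustrative (16-bit port, 8-bit flag, 8-bit length); the
    real header format is the one shown in Fig. 2.
    """
    return struct.pack("!HBB", udp_dst_port, o_flag, int_len)

def restore_at_sink(shim):
    """Sink-switch side: recover the original UDP destination port."""
    orig_port, o_flag, int_len = struct.unpack("!HBB", shim)
    return orig_port, o_flag, int_len

shim = add_int_shim(udp_dst_port=5001, o_flag=0, int_len=24)
print(restore_at_sink(shim))  # -> (5001, 0, 24)
```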
B. INT Switch

The processing flow of an INT switch is shown in Fig. 3. Following the general P4 design, the processing flow of an INT switch consists of three parts: Parser, Ingress, and Egress, of which Ingress performs forwarding and determines the necessary flags, and Egress performs most of the INT processing.

1) Parser: The proposed switch design supports INT with the basic TCP/UDP protocols. The parser is thus required to parse only Ethernet, IPv4, TCP, and UDP/INT headers. At each parsing state, packets are passed to the next stage if the condition is satisfied; otherwise, parsing finishes.

2) Ingress: After parsing, packets are fed to the Ingress match/action stage for egress port selection. A basic forwarding match+action table is implemented in Ingress, which supports basic ONOS services and applications such as link layer discovery and reactive forwarding. These enable the transmission of UDP packets from sources to destinations through the network.

Fig. 3: INT Switch processing flow

Ingress is also responsible for extracting metadata for packets. Firstly, it determines whether this is the first hop of the packet (applied to every packet, regardless of the existence of an INT header). After that, Ingress determines whether the switch is a source switch (first switch in the path) or a sink switch (last switch in the path), and sets the corresponding flags. In addition, if the switch is a sink switch, the packet is cloned. After completing Ingress, the packet is sent to the queues/buffers.

3) Egress: After exiting the queues, packets pass through the Egress match+action stage for INT header processing. A series of match/action tables is applied. Firstly, Egress checks the source flag. If it is set, an INT header is inserted into the UDP packet using telemetry instructions sent from the controller. Next, the INT metadata for this switch is pushed onto the top of the INT metadata stack. The type of metadata is determined by the telemetry instructions. Finally, the outer UDP header is updated. Packets exiting Egress are sent to the Deparser to be serialized, and then sent out of the switch.

In case a switch is the last switch before the destination host, packet mirroring is enabled. The entire packet, with its INT header and metadata, is cloned. For the original packet, Egress attaches the last INT data of the sink switch and sends the packet to the controller. The cloned packet is sent to the destination host after the INT header and metadata are removed. In this way, the monitoring system is transparent to end hosts.

Although P4 supports branching (e.g., if-else) at the code level, to simplify the switch design we choose to push almost all decision logic into table rules; these rules are installed by the controller right after initialization.

C. ONOS INT Monitoring System
Fig. 4 shows the architecture of the INT monitoring system on ONOS. The ONOS controller communicates with BMv2 switches [19] using Thrift, a lightweight, general, and flexible protocol that suits the flexibility of P4.

Fig. 4: ONOS INT monitoring architecture

The ONOS INT monitoring system works through three phases: Initial Configuration, Control, and Monitoring.

1) Initial Configuration: The BMv2 software switch is a software simulator of P4 hardware, and it requires a P4 configuration file to be fully functional. In this work, the configuration file is the INT switch implementation, sw.json, the output of compiling the INT P4 source code. At the beginning, the INT Monitoring (IntMon) controller reads sw.json and pushes the configuration data to all BMv2 switches.

2) Control: The IntMon controller is in charge of populating all flow rules into the tables of the INT switches. It builds custom flow rules and sends them to the designated match/action tables in the switches. This is performed after the initial phase and every time a rule is changed, added, or removed through the GUI.

The IntMon controller also updates flow rules when network events occur, for example, when hosts are unplugged from, or new hosts are plugged into, the network.

The rules specifying which flows and fields to monitor are entered through a GUI in the Web interface. Multiple rules can be stored at the same time, and flows that match one of the rules are monitored. Flow selection follows a five-tuple: protocol, source/destination port, and source/destination IP address. The source and destination IP address fields support both exact matching and prefix matching. Because of prefix matching, a flow may match more than one rule; thus, we also add a priority field, with each rule requiring a unique priority value.
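Below is a minimal sketch of this selection logic under two assumptions not fixed by the text: None denotes a wildcard field, and the numerically highest priority wins when rules overlap. In our system these rules are pushed as match/action table entries rather than evaluated in software.

```python
import ipaddress

# Each rule: a five-tuple match (None = wildcard) plus a unique priority.
rules = [
    {"priority": 10, "proto": "UDP", "src": "10.0.0.0/24", "dst": None,
     "sport": None, "dport": 5001},
    {"priority": 20, "proto": "UDP", "src": "10.0.0.1/32", "dst": None,
     "sport": None, "dport": None},
]

def matches(rule, flow):
    """Check a flow (five-tuple dict) against one rule."""
    for key in ("proto", "sport", "dport"):
        if rule[key] is not None and rule[key] != flow[key]:
            return False
    for key in ("src", "dst"):  # exact or prefix matching on IP addresses
        if rule[key] is not None and ipaddress.ip_address(flow[key]) \
                not in ipaddress.ip_network(rule[key]):
            return False
    return True

def select_rule(flow):
    """Return the matching rule with the highest priority, if any."""
    candidates = [r for r in rules if matches(r, flow)]
    return max(candidates, key=lambda r: r["priority"], default=None)

flow = {"proto": "UDP", "src": "10.0.0.1", "dst": "10.0.0.8",
        "sport": 40000, "dport": 5001}
print(select_rule(flow)["priority"])  # both rules match; priority 20 wins
```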

Monitoring fields are also selected from the same GUI interface. There are eight parameters that can be collected: switch ID, ingress port, hop latency, queue occupancy, ingress timestamp, egress port, queue congestion status, and egress port utilization. All fields have packet-level granularity. The switch firmware in BMv2 needs to be modified to provide this information.

3) Monitoring: INT packets sent to ONOS are processed at a low-level layer of ONOS, the Int data processor in the BMv2 provider. Typically, a packet sent to ONOS is passed to all upper applications that have packet requests. Because we expect an enormous number of INT packets to be sent to the controller, we decided to process and then discard INT packets as soon as possible. This prevents them from being processed by other applications, thereby reducing the CPU and memory cost of the controller.

INT data are analyzed in the IntMon service and displayed in the Web interface. We provide a time-series graph to show collected data in real time, updated every 100 ms; a tabular view showing all monitoring parameters, updated once per second; and a topology view that updates link bandwidth. The update interval is limited by the CPU cost of refreshing the GUI view. The IntMon service also provides interfaces for other applications to obtain packet-level information in real time.

Since we have a custom P4 switch with custom rules, the original ONOS services cannot understand our rules. A Java interpreter provides rule-interpreting interfaces that enable basic ONOS services and applications to work without modification.

VI. EVALUATION AND DISCUSSION

We evaluated the performance and overhead of our monitoring system by measuring IntMon processing time and CPU usage and by calculating the network bandwidth overhead. A simple tree topology, as shown in Fig. 5, was built for the evaluation, using Mininet [20] and the BMv2 software switch. IntMon is in charge of controlling the whole monitoring procedure. The ONOS controller ran on a virtual machine with a one-core CPU and 2 GB of RAM.

Fig. 5: Evaluation topology

Traffic was generated using iPerf [21], sending UDP flows from host h1 (client) to host h8 (server). We gradually increased the number of concurrent flows from 4 to 14; each flow had a throughput of 100 kbps and a data packet size of 80 bytes (the throughput was limited due to the low performance of the BMv2 switches). INT monitoring was enabled for all of these flows. After each packet reached sw7, the INT data were sent to the ONOS controller, and the original packet was restored and forwarded to the destination (h8).

A. Average Processing Time and CPU Usage

The average CPU usage of the ONOS controller depends on the number of INT packets sent to the controller, as shown in Fig. 6. In addition, Table I shows the average processing delay of some common ONOS services, including the IntMon service.

TABLE I: Average processing delay time

Service                        Average delay (ms)
Link Layer Discovery (LLDP)    0.13303
Reactive Forwarding (FWD)      0.23605
INT Monitoring (IntMon)        0.00773

Fig. 6: CPU usage against number of INT packets

From Table I, we can see that the IntMon service is much faster than the other ONOS services. However, from Fig. 6, we notice that CPU usage increases linearly as the number of INT packets sent to the ONOS controller increases. Although INT frees the ONOS controller from polling, a large computing overhead is incurred to analyze the massive amount of INT data. Therefore, this scalability issue needs to be addressed in the future.

B. Bandwidth Overhead

The bandwidth overhead for carrying INT data can be calculated as follows:

B = P × (12 + N × F × 4) × 8 (bps)

where B is the additional bandwidth on a link, P is the number of INT-enabled packets per second, N is the number of switches that a packet has passed through, and F is the total number of monitoring fields (the constant 12 is the fixed INT header size in bytes, and each metadata field occupies 4 bytes per hop).

For example, suppose we have a link with two UDP flows sharing the bandwidth equally, and the link is the TX link of the third switch. All flows are UDP flows with an 80-byte data size, i.e., a 126-byte packet size (including Ethernet, IPv4, and UDP headers). One of the two flows is monitored, and the total number of monitoring fields is three. Then the application throughput is reduced by 16% (note that the application throughput decreases, but the maximum link bandwidth remains the same).
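The sketch below reproduces this example with the formula above. The 16% figure follows under our reading that the two flows share the link at equal packet rates; that interpretation of "sharing bandwidth equally" is an assumption.

```python
def int_overhead_bytes(n_switches, n_fields):
    """Per-packet INT overhead: 12-byte header + 4 bytes per field per hop."""
    return 12 + n_switches * n_fields * 4

def extra_bandwidth_bps(pkts_per_sec, n_switches, n_fields):
    """B = P x (12 + N x F x 4) x 8, as in the text."""
    return pkts_per_sec * int_overhead_bytes(n_switches, n_fields) * 8

packet = 126                         # bytes: 80 B data + Ethernet/IPv4/UDP headers
overhead = int_overhead_bytes(3, 3)  # third switch, three fields -> 48 bytes
# Two flows at equal packet rates, one monitored: per packet pair the link
# carries 126 + (126 + 48) bytes, of which only 2 * 126 are application bytes.
reduction = 1 - (2 * packet) / (2 * packet + overhead)
print(f"{overhead} bytes/packet, throughput reduction {reduction:.0%}")  # 16%
```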

C. Discussion

The evaluation results reveal two limitations: the data analysis cost in terms of CPU and memory, and the additional bandwidth for carrying INT data. There are methods that can be applied to solve or mitigate these problems. One direction is to offload a portion of the INT analysis to P4 logic instead of sending all information to the ONOS controller. This can reduce the computation overhead for processing INT data and is particularly useful for detecting significant network events such as path changes or spikes [22]. Another direction is to move the INT analysis block to an external INT collector system.

Regarding the bandwidth overhead issue, we can reduce the bandwidth cost by monitoring only a certain number of flows, instead of monitoring all flows. In the event that we must monitor many flows or elephant flows, we can adopt a sampling technique, introducing a trade-off between granularity and bandwidth consumption. Sampling may be suitable for some specific monitoring requirements, such as link utilization.

VII. CONCLUSION

As the scale of the network grows rapidly and the network becomes heterogeneous, network management by human operators becomes harder, and therefore the need for a closed-loop network management solution arises.

In this paper, we presented and discussed the concept of the self-driving network and its building blocks, P4 INT, SDN, and KDN, along with its use cases. P4 INT makes it possible to collect packet-level network telemetry, and KDN brings intelligence to network management using the telemetry data. SDN enables managing and controlling the network according to the decisions made by the knowledge plane.

We also presented our design and implementation of an INT monitoring framework on the ONOS controller that enables packet-level monitoring, along with an INT architecture using UDP. The evaluation results revealed some limitations of our design, and consequently we proposed possible solutions for future investigation.

For future work, we plan to implement the whole proposed architecture and find a way to solve the scalability and performance issues.

ACKNOWLEDGMENT

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00195, Development of Core Technologies for Programmable Switch in Multi-Service Networks).


REFERENCES

[1] Gartner, "Forecast: Internet of things - endpoints and associated services, worldwide, 2016," Gartner, Tech. Rep., January 2017.
[2] Y. Zhu, N. Kang, J. Cao, A. Greenberg, G. Lu, R. Mahajan, D. Maltz, L. Yuan, M. Zhang, B. Y. Zhao, and H. Zheng, "Packet-Level Telemetry in Large Datacenter Networks," in Proceedings of ACM SIGCOMM 2015. ACM, 2015, pp. 479–491.
[3] D. D. Clark, C. Partridge, J. C. Ramming, and J. T. Wroclawski, "A knowledge plane for the internet," in Proceedings of ACM SIGCOMM 2003. ACM, 2003, pp. 3–10.
[4] M. Wang, B. Li, and Z. Li, "sFlow: Towards resource-efficient and agile service federation in service overlay networks," in Proceedings of the 24th International Conference on Distributed Computing Systems, 2004, pp. 628–635.
[5] B. Claise, "Cisco Systems NetFlow services export version 9," RFC 3954 (Informational), Internet Engineering Task Force, 2004.
[6] C. Kim, A. Sivaraman, N. Katta, A. Bas, A. Dixit, L. J. Wobker, and B. Networks, "In-band Network Telemetry via Programmable Dataplanes," in SOSR, 2015, pp. 2–3.
[7] P. Bosshart, G. Varghese, D. Walker, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, and A. Vahdat, "P4: Programming Protocol-Independent Packet Processors," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 87–95, 2014.
[8] V. Jeyakumar, M. Alizadeh, Y. Geng, C. Kim, and D. Mazières, "Millions of little minions: Using packets for low latency network programming and visibility," ACM SIGCOMM Computer Communication Review, vol. 44, no. 4, pp. 3–14, 2015.
[9] C. Kim, P. Bhide, E. Doe, H. Holbrook, A. Ghanwani, D. Daly, M. Hira, and B. Davie, "Inband Network Telemetry," June 2016.
[10] A. Mestres, A. Rodriguez-Natal, J. Carner, P. Barlet-Ros, E. Alarcón, M. Solé, V. Muntés, D. Meyer, S. Barkai, M. J. Hibbett, G. Estrada, K. Ma'ruf, F. Coras, V. Ermagan, H. Latapie, C. Cassar, J. Evans, F. Maino, J. Walrand, and A. Cabellos, "Knowledge-Defined Networking," pp. 1–8, 2016. [Online]. Available: http://arxiv.org/abs/1606.06222
[11] C. Yu, C. Lumezanu, Y. Zhang, V. Singh, G. Jiang, and H. V. Madhyastha, "FlowSense: Monitoring network utilization with zero measurement cost," in International Conference on Passive and Active Network Measurement, 2013, pp. 31–41.
[12] N. L. M. van Adrichem, C. Doerr, and F. A. Kuipers, "OpenNetMon: Network monitoring in OpenFlow Software-Defined Networks," in 2014 IEEE Network Operations and Management Symposium (NOMS), 2014, pp. 1–8.
[13] J. Suh, T. T. Kwon, C. Dixon, W. Felter, and J. Carter, "OpenSample: A low-latency, sampling-based measurement platform for commodity SDN," in 2014 IEEE 34th International Conference on Distributed Computing Systems (ICDCS), 2014, pp. 228–237.
[14] M. Yu, L. Jose, and R. Miao, "Software Defined Traffic Measurement with OpenSketch," in 10th USENIX Conference on Networked Systems Design and Implementation (NSDI), 2013, pp. 29–42.
[15] Z. Liu, A. Manousis, G. Vorsanger, V. Sekar, and V. Braverman, "One sketch to rule them all: Rethinking network flow monitoring with UnivMon," in Proceedings of the 2016 ACM SIGCOMM Conference, 2016, pp. 101–114.
[16] F. Brockners, S. Bhandari, S. Dara, C. Pignataro, H. Gredler, J. Leddy, S. Youell, D. Mozes, T. Mizrahi, P. Lapukhov, and R. Chang, "Requirements for in-situ OAM," Working Draft, Internet-Draft draft-brockners-inband-oam-requirements-03, March 2017.
[17] M. Al-Fares, S. Radhakrishnan, and B. Raghavan, "Hedera: Dynamic Flow Scheduling for Data Center Networks," in NSDI, 2010, p. 19.
[18] T. Benson, A. Anand, A. Akella, and M. Zhang, "MicroTE: Fine grained traffic engineering for data centers," in Proceedings of the Seventh Conference on Emerging Networking Experiments and Technologies (CoNEXT), 2011, p. 8.
[19] BMv2, https://github.com/p4lang/behavioral-model.
[20] Mininet, https://github.com/mininet/mininet.
[21] iPerf, https://github.com/esnet/iperf.
[22] T. Jithin and L. Petr, "Tracking packets' paths and latency via INT (In-band Network Telemetry)," P4 Workshop, May 2016.