deTector: a Topology-aware Monitoring System for Data Center Networks

Yanghua Peng, The University of Hong Kong; Ji Yang, Xi’an Jiaotong University; Chuan Wu, The University of Hong Kong; Chuanxiong Guo, Microsoft Research; Chengchen Hu, Xi’an Jiaotong University; Zongpeng Li, University of Calgary

https://www.usenix.org/conference/atc17/technical-sessions/presentation/peng

This paper is included in the Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC ’17), July 12–14, 2017, Santa Clara, CA, USA. ISBN 978-1-931971-38-6. Open access to the Proceedings of the 2017 USENIX Annual Technical Conference is sponsored by USENIX.

Abstract

Troubleshooting network performance issues is a challenging task, especially in large-scale data center networks. This paper presents deTector, a network monitoring system that is able to detect and localize network failures (manifested mainly by packet losses) accurately in near real time while minimizing the monitoring overhead. deTector achieves this goal by tightly coupling detection and localization and by carefully selecting probe paths, so that packet losses can be localized from end-to-end observations alone, without the help of additional tools (e.g., tracert). In particular, we quantify the desirable properties of the matrix of probe paths, i.e., coverage and identifiability, and leverage an efficient greedy algorithm with a good approximation ratio and fast speed to select probe paths. We also propose a loss localization method based on loss patterns in a data center network. Our algorithm analysis, experimental evaluation on a Fattree testbed, and supplementary large-scale simulation validate the scalability, feasibility and effectiveness of deTector.

1 Introduction

A variety of services are hosted in large-scale data centers today, e.g., search engines, social networks and file sharing. To support these services with high quality, data center networks (DCNs) are carefully designed to efficiently connect thousands of network devices together; e.g., a 64-ary Fattree [9] DCN has more than 60,000 servers and 5,000 switches.
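For reference, the counts quoted for a 64-ary Fattree follow from the standard k-ary construction in [9]: k pods, each with k/2 edge and k/2 aggregation switches, (k/2)^2 core switches, and k/2 servers attached to every edge switch. A two-line check of the arithmetic (a sketch under that assumption, not code from the paper):

    # Sizing of a k-ary Fattree, assuming the standard construction from [9].
    def fattree_size(k):
        servers = (k ** 3) // 4        # k pods * (k/2 edge switches) * (k/2 servers each)
        switches = 5 * (k ** 2) // 4   # k*k pod switches + (k/2)^2 core switches
        return servers, switches

    print(fattree_size(64))            # (65536, 5120), i.e. "more than 60,000 servers and 5,000 switches"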
However, due to the large network scale, frequent upgrades and management complexity, failures in DCNs are the norm rather than the exception [21], such as routing misconfigurations, link flaps, etc. Among these failures, those leading to user-perceived performance issues (e.g., packet losses, latency spikes) are the first priority to be detected and eliminated promptly [27, 26, 21], in order to maintain high quality of service (QoS) for users (e.g., no more than a few minutes of downtime per month [21]) and to increase revenue for operators.

Rapid failure recovery is not possible without a good network monitoring system. A number of monitoring systems have been proposed in the past few years [36, 26, 37, 48], but several limitations in these systems still prohibit fast failure detection and localization.

First, existing monitoring systems may fail to detect one type of failure or another. Traditional passive monitoring approaches, such as querying device counters via SNMP or retrieving information via the device CLI after users have perceived an issue, can detect clean failures such as link down or line card malfunctions. However, gray failures may also occur, i.e., faults not detected or ignored by the device, or malfunctioning not properly reported by the device due to bugs [37]. Active monitoring systems (e.g., Pingmesh [26], NetNORAD [37]) can detect such failures by sending end-to-end probes, but they may fail to capture failures that cause low-rate losses, due to ECMP in the data center (§ 2).

Second, probe systems such as Pingmesh and NetNORAD inject probes between each pair of servers without selection, which may introduce too much bandwidth overhead. In addition, they typically treat the whole DCN as a black box, and hence require many probes to cover all parallel paths between any server pair with high probability.

Third, failures in the network can be reported by these active monitoring systems, but the exact failure locations cannot be pinpointed automatically. The network operator typically learns a suspected source-destination server pair once packet loss happens, and then needs to resort to additional tools such as tracert to verify the issue and locate the faulty spot. However, it may be difficult to play back the issue in the case of transient failures. Hence this diagnosis approach (i.e., the separation of detection and localization) may take several hours or even days to pinpoint the fault spot [21], yet ideally the failures should
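The abstract above mentions selecting probe paths so that the resulting path matrix has good coverage and identifiability. As a rough illustration of the coverage half of that idea only, the following is a generic greedy set-cover sketch, not deTector's actual selection algorithm, and all path and link identifiers are hypothetical: it repeatedly picks the candidate path that traverses the most links not yet covered by previously chosen paths.

    # Generic greedy set cover over candidate probe paths: repeatedly pick the
    # path covering the most still-uncovered links. deTector's real selection
    # also accounts for identifiability and probing overhead, which this
    # sketch ignores.
    def greedy_path_selection(candidate_paths):
        """candidate_paths: dict mapping a path id to the set of link ids it traverses."""
        uncovered = set().union(*candidate_paths.values())
        selected = []
        while uncovered:
            best = max(candidate_paths, key=lambda p: len(candidate_paths[p] & uncovered))
            gain = candidate_paths[best] & uncovered
            if not gain:          # remaining links lie on no candidate path
                break
            selected.append(best)
            uncovered -= gain
        return selected

    # Toy example with hypothetical paths and links:
    paths = {
        "p1": {"l1", "l2"},
        "p2": {"l2", "l3"},
        "p3": {"l1", "l3", "l4"},
    }
    print(greedy_path_selection(paths))   # ['p3', 'p1'] -- two paths cover all four links

Classic greedy set cover is known to achieve a (ln n + 1)-approximation; deTector's actual objective and its approximation guarantee are developed later in the paper.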
