Observatory system: John Hicks, GlobalNOC, Indiana University




SLIDE 1

Observatory system

John Hicks

GlobalNOC, Indiana University

Takatoshi Ikeda

APAN-JP, KDDI Labs

APAN 19 - Bangkok, Thailand 27-January-2005

SLIDE 2

Outline

  • Brief description of an Observatory system
  • Why do we need a persistent system
  • Deployment of the Observatory
SLIDE 3

Brief description of Observatory system

  • The word “Observatory” is borrowed from the Internet2 “Abilene Observatory” system.
  • The goal of this effort is to provide valuable data and tools to network engineers and researchers in order to debug network problems and improve application performance.
  • Providing an Observatory framework will help determine the performance characteristics of the complete path by aggregating information about the segments that make up the network path.
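The partial-path aggregation idea above can be sketched in code. This is a minimal illustration, not Observatory code: the segment names and latency values are made up, and real measurements would come from owamp tests between Observatory nodes.

```python
# Sketch of partial path analysis: estimate end-to-end one-way latency
# by aggregating per-segment measurements, and identify the segment
# that contributes the most. Values below are illustrative only.

segments = [
    ("TokyoXP -> Los Angeles", 0.055),   # one-way latency in seconds
    ("Los Angeles -> Chicago", 0.028),
    ("Chicago -> Indianapolis", 0.012),
]

def end_to_end_latency(segments):
    """Sum per-segment one-way latencies along the complete path."""
    return sum(latency for _, latency in segments)

def worst_segment(segments):
    """Return the segment contributing the most latency."""
    return max(segments, key=lambda s: s[1])

total = end_to_end_latency(segments)
name, lat = worst_segment(segments)
print(f"estimated end-to-end latency: {total * 1000:.0f} ms")
print(f"largest contributor: {name} ({lat * 1000:.0f} ms)")
```

With per-segment data like this, a degraded link shows up directly instead of being hidden inside an end-to-end number.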

SLIDE 4

Brief description of Observatory system (cont.)

  • The Observatory system consists of PCs deployed at key locations along the network backbone, collecting and analysing measurement data.
  • This work makes use of existing tools such as Internet2’s bwctl and owamp.
  • Other tools from the Abilene NOC, APAN NOC JP, and NLANR may also be deployed on this hardware.
  • The collected data is made publicly available.
  • Basic authentication is also provided.
SLIDE 5

Why do we need an Observatory

  • An Observatory system is needed at key locations along the network backbone in order to perform partial path analysis of network segments.
  • Partial path data analysis provides a finer-grained view of performance issues to network administrators and application engineers.
  • Providing a persistent measurement infrastructure gives a consistent view of network performance.
  • Regularly scheduled and on-demand testing are accomplished using this infrastructure.
  • Making the data and tools publicly available opens the door for easy data access.

SLIDE 6

Deploying APAN Observatory

  • Takatoshi Ikeda visited Indiana University in December 2004 to work on Observatory code, including bwctl, owamp, and netflow flow-tools.
  • Measurement machines were installed at Indiana University (IUPUI in Indianapolis).
  • Currently measuring from Indiana to Tokyo over the JGN2 link in Chicago.
  • SNAPP and other Global NOC tools are being deployed to support data collection and analysis.
  • APAN SNAPP implementation: http://nms2.jp.apan.net/cgi-bin/snapp/index.cgi

SLIDE 7

TransPAC2 Measurement goals for the first year

  • Measurement machines will be deployed in the TransPAC2 U.S. co-location space to collect data (some resources already located in Tokyo).
  • Full code implementation of existing measurement and analysis tools.
  • Schedule persistent tests between APAN/TransPAC2 and Abilene Observatory nodes.
  • Make measurement data available to the network and research communities.
  • Foster collaboration between the APAN measurement community and other global network measurement projects.

SLIDE 8

Current deployment plan

[Deployment map: Tokyo XP, Indianapolis, Los Angeles]

SLIDE 9

Takatoshi Ikeda

KDDI Labs, Japan

SLIDE 10

Deployment of the Observatory

SLIDE 11

Current deployment

SLIDE 12

Location

  – Japan: Tokyo XP
  – U.S.: Indianapolis (IUPUI)

[Path diagram: Tokyo XP to Chicago/Indianapolis via Los Angeles; average RTT 190 ms]

SLIDE 13

Network Diagram between NMS servers

[Diagram: nms1–nms4 servers on the Japan side and nms1–nms4 on the U.S. side, connected through DELL, TPR4, pro8801, STARLIGHT Force10, CHINng, HP4108, Cisco6509, iplsng, and Cisco4000 devices across the JGN2 international link, with Abilene and TransPAC also shown (via TPR2, MS3, iplsng over SONET 2.4G). Link types: 1GbE fibre, 10GbE fibre, 1GbE copper, SONET 10G; MTUs range from 1500B to 9188B.]

SLIDE 14

Implementation

This table shows the data sets and their implementation status.

  • Throughput: Iperf and BWCTL on nms1–nms4; throughput tests available on demand.
  • One-way latency: OWAMP; one-way delay measurement available on demand.
  • Netflow: flow-tools and rsync; netflow data collected at Tokyo XP, available to the research community with authentication.
  • Usage: net-snmp and SNAPP; high-resolution usage statistics of the Tokyo XP routers available.
  • Router, routing, and syslog data.
SLIDE 15

Example of available data

  • Netflow (hourly data): traffic graphs of the major security-relevant ports are generated from this netflow data. Daily traffic data of www (tcp:80): http://vabo1.jp.apan.net/flow/
  • Usage data (collected every 10s): the data files can be downloaded and graphs are available:
    http://www.jp.apan.net/noc/Observatory/usage-data.html
    http://nms2.jp.apan.net/cgi-bin/snapp/index.cgi
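As a rough illustration of the kind of per-port analysis this netflow data supports, the sketch below aggregates byte counts by destination port. The record format and all numbers here are hypothetical; real records would come from flow-tools exports.

```python
# Sketch of per-port traffic summarization over netflow-style records.
# Each record is a made-up (protocol, dst_port, bytes) tuple standing in
# for a parsed flow-tools export; values are illustrative only.

from collections import defaultdict

flows = [
    ("tcp", 80, 120_000),
    ("tcp", 443, 80_000),
    ("tcp", 80, 30_000),
    ("udp", 53, 5_000),
]

def bytes_per_port(flows, proto="tcp"):
    """Aggregate byte counts by destination port for one protocol."""
    totals = defaultdict(int)
    for p, port, nbytes in flows:
        if p == proto:
            totals[port] += nbytes
    return dict(totals)

# Traffic of www (tcp:80), the kind of series behind the daily graph:
www = bytes_per_port(flows)[80]
print(f"tcp:80 traffic: {www} bytes")
```

The same aggregation keyed on protocol, source/destination IP, or AS gives the detailed breakdowns mentioned on the utilization slide.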

SLIDE 16

Utilization of Observatory

  – Throughput
    • Connected with the Abilene Observatory; it was used to find the network degradation at SC2004.
    • It can be used to measure throughput performance with users; we used it when some users connected to Tokyo XP.
  – Netflow
    • The data can be used to analyze traffic in detail (per protocol, port, IP address, AS, etc.).
    • The netflow data is available via rsync for the research community.
  – Usage
    • Graphs of the high-resolution data are helpful for grasping burst traffic during demonstrations and experiments.
    • High-resolution usage data for the major links are available.
SLIDE 17

Deployment plan across the TransPAC2

SLIDE 18

Network Diagram

[Network diagram: nms1–nms4 servers (DELL, TPR4, MTU 9000B) behind L2 switches at Tokyo XP and Los Angeles, joined over the trans-Pacific OC192c circuit of TransPAC2. Link types: 1GbE fibre and 1GbE copper; L2 devices, L3 devices, and servers marked; new machines plus machines moved from IUPUI. Japan / U.S. sides shown.]

SLIDE 19

Collected data

We will also collect the same data as the current implementation:

  • Across the TransPAC2 circuit:
    – Throughput
    – One-way latency
  • At each node:
    – Netflow
    – Usage statistics
    – Router
    – Routing
    – Syslog

SLIDE 20

Scheduled Observatory for SC2005 (plan)

  1. Users register the schedule with the Scheduler: start time, end time, required bandwidth, source IP, destination IP, etc.
  2. Users generate the large experimental traffic; the Monitoring tool gets the network data and checks it against the schedule.

  • Scheduler: manages the schedule.
  • Monitoring tool: monitors the traffic and alerts on unexpected users; when congestion is detected, a notification alert is sent to the operator.

SLIDE 21

Scheduled Observatory (cont.)

  – Scheduler
    • The Scheduler should manage this information: who generates the traffic, start time, end time, required bandwidth, source IP, destination IP, port number, contact point, etc.
  – Monitoring tool
    • The Monitoring tool should monitor these data: flow data (netflow), interface usage data, etc.
  – Other Observatory functions for SC2005
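The Scheduler's bookkeeping could be sketched as follows. This is a hypothetical design sketch, not SC2005 code: the Registration fields mirror the list above, while the link capacity and the admission rule are assumptions made for illustration.

```python
# Sketch of a Scheduler for the scheduled Observatory: registrations
# carry who/when/how-much, and a new request is admitted only if the
# bandwidth of all time-overlapping registrations stays under an
# assumed link capacity. All names, IPs, and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Registration:
    who: str
    start: int          # epoch seconds
    end: int
    bandwidth_mbps: int
    src_ip: str
    dst_ip: str

LINK_CAPACITY_MBPS = 10_000  # assumed OC192c-class capacity

def overlaps(a, b):
    """Two registrations overlap if their time windows intersect."""
    return a.start < b.end and b.start < a.end

def can_register(schedule, new):
    """Admit a request only if concurrent demand stays under capacity."""
    demand = new.bandwidth_mbps + sum(
        r.bandwidth_mbps for r in schedule if overlaps(r, new)
    )
    return demand <= LINK_CAPACITY_MBPS

schedule = [Registration("expA", 0, 3600, 8000, "203.0.113.1", "198.51.100.1")]
small = Registration("expB", 1800, 5400, 1000, "203.0.113.2", "198.51.100.2")
big = Registration("expC", 1800, 5400, 4000, "203.0.113.3", "198.51.100.3")
print(can_register(schedule, small))  # fits alongside expA
print(can_register(schedule, big))    # would exceed capacity
```

A monitoring tool would use the same schedule to decide whether observed traffic belongs to a registered experiment or to an unexpected user.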

SLIDE 22

Reference

– Abilene Observatory

http://abilene.internet2.edu/observatory/

– Observatory at APAN Tokyo XP

http://www.jp.apan.net/NOC/Observatory/

– TransPAC2

http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0441096

– Iperf

http://dast.nlanr.net/Projects/Iperf/

– BWCTL

http://e2epi.internet2.edu/bwctl/

– OWAMP

http://e2epi.internet2.edu/owamp/

– SNAPP

http://tools.globalnoc.iu.edu/snapp.html

– Flow-tools

http://www.splintered.net/sw/flow-tools/

SLIDE 23

Thank you