SLIDE 1

Circuits provisioning in PIONIER with AutoBAHN system

Radek Krzywania radek.krzywania@man.poznan.pl

SLIDE 2

ToC

  • Introduction to PIONIER
  • AutoBAHN deployment in PIONIER network
    – Topology
    – Network abstraction
    – Reservation process
    – Potential usage

SLIDE 3

Introduction to PIONIER

SLIDE 4

PIONIER Infrastructure

  • PIONIER is the Polish NREN, dedicated to interconnecting all research and academic institutions in Poland
  • The infrastructure interconnects 22 MANs and HPC centers
  • PSNC is the operator of PIONIER

SLIDE 5

PIONIER Infrastructure

  • 5,300 km of own fiber optic cables
  • ADVA DWDM equipment for L1
  • Foundry Networks NetIron XMR 8000 series switches for L2/L3 in the 22 MAN centers
  • Juniper M5 router for L3
SLIDE 6

PIONIER Infrastructure

[Map of the PIONIER network. Link legend: MAN, Ethernet 1 Gb/s, SDH 2.5 Gb/s, 2x10 Gb/s (2 λ), CBDF 10 Gb/s (2 λ)]

SLIDE 7

PIONIER's place in Europe

  • PIONIER in Europe after 5 years of operation:
    – 4th place among EU and EFTA countries in core network size (Mb/s x km)
    – 1st place among EU and EFTA countries in core network capacity (Mb/s) (equal to SURFnet)
    – The highest number of CBDF links among EU countries (4 operating + 4 more planned in a short time scale)
    – 5th place in outgoing traffic and 6th in incoming traffic to the NREN backbone
  • Source: TERENA Compendium 2008
SLIDE 8

AutoBAHN deployment in PIONIER network

Topology

SLIDE 9

AutoBAHN in PIONIER

  • PSNC has been an active partner in the GEANT2 JRA3 activity (AutoBAHN) since its very beginning
  • The PIONIER testbed infrastructure was one of the first to deploy an AutoBAHN instance for dynamic circuit management
  • The testbed equipment is exactly the same as in the parallel operational infrastructure
SLIDE 10

PIONIER topology for AutoBAHN

  • The NetIron XMR 8000 switches are interconnected with 10 Gb/s interfaces
  • The AutoBAHN Technology Proxy has access to each of the boxes through a CLI interface
  • The resources are seen as a single MPLS cloud

SLIDE 11

PIONIER topology for AutoBAHN

  • The Technology Proxy is aware of each piece of equipment in the network (all XMR boxes)
  • The DM is provided with limited topology information, in which only edge switches are present (those connected to end-points or having external connections to neighbour domains)
  • The IDM topology is similar to the one at the DM level; however, the network equipment details are hidden and information about the neighbour and global topology is included (a sketch of the three views follows below)
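
A minimal sketch of these three views, assuming invented node names and a much simplified data model (illustrative only, not the actual AutoBAHN representation):

```python
# Illustrative only: invented node names, simplified data model.

# Technology Proxy view: every XMR box and every physical link.
full_view = {
    "nodes": ["xmr-a", "xmr-b", "xmr-c", "xmr-d"],
    "links": [("xmr-a", "xmr-b"), ("xmr-b", "xmr-c"), ("xmr-c", "xmr-d")],
}

# Edge switches: connected to end-points or to neighbour domains.
edges = ["xmr-a", "xmr-d"]

# DM view: only edge switches remain; the core is an opaque MPLS cloud,
# so every pair of edge switches is represented as directly reachable.
dm_view = {
    "nodes": edges,
    "links": [(a, b) for i, a in enumerate(edges) for b in edges[i + 1:]],
}

# IDM view: like the DM view, but equipment details are hidden and
# neighbour/global topology information is added.
idm_view = {
    "domain": "PIONIER",
    "edge_points": edges,
    "neighbours": {"GEANT2": "xmr-a"},  # assumed inter-domain attachment
}
```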

SLIDE 12

AutoBAHN deployment in PIONIER network

Network abstraction

SLIDE 13

Topology Abstraction Process

  • The MPLS cloud abstraction decreases the amount of information about the physical network topology
  • Only reachability information between domain edge points is provided, together with additional link metrics
  • Pathfinding is limited to the definition of the ingress and egress network node and port (see the sketch below)
  • The intermediate nodes are selected automatically by MPLS, with limited influence from the administrator or the AutoBAHN system
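
As a sketch of what this abstraction leaves the pathfinder to work with (node names, capacities and metrics are assumptions, not PIONIER data):

```python
# Assumed names and values; the point is the shape of the data.

# Edge-to-edge reachability through the MPLS cloud, with link metrics.
reachability = {
    frozenset({"xmr-a", "xmr-d"}): {"capacity_mbps": 10_000, "metric": 10},
}

def find_path(ingress_node, ingress_port, egress_node, egress_port):
    """Pathfinding reduces to checking edge-to-edge reachability; the
    intermediate hops are picked by MPLS, outside AutoBAHN's control."""
    if frozenset({ingress_node, egress_node}) not in reachability:
        raise ValueError("edge points not reachable in abstracted topology")
    return [(ingress_node, ingress_port), (egress_node, egress_port)]

print(find_path("xmr-a", "ge-1/1", "xmr-d", "ge-2/2"))
```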

SLIDE 14

MPLS cloud issues for AutoBAHN

  • At the beginning of the work, AutoBAHN was assumed to have full control over physical network resources
  • The MPLS cloud abstraction prevents AutoBAHN from seeing all the particular links in the network
  • The overall capacity of network links must be abstracted, which causes some loss of information
  • Control of the booked and used network capacity is limited to heuristic accuracy
  • Pathfinding is limited to defining just the source and destination end ports and nodes in the topology

SLIDE 15

Alternative link capacity constraints

  • Ingress/egress links limit the capacity allowed to be reserved in the network
    – Core network bandwidth is considered to be infinite
    – The network may refuse a reservation in case of insufficient available bandwidth
  • The capacity allowed to be reserved is limited by a policy rule (see the sketch below)
    – All domain ingress/egress links are associated with one node
    – All end points are associated with a single node
    – The Technology Proxy must be able to translate DM port names to physical ones
    – An accurate capacity value in the policy may prevent reservation denials
    – Allows improved capacity control by network administrators
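
A minimal sketch of such a policy check, assuming invented port names and capacity values:

```python
# Invented port names and capacity values, for illustration only.

# Policy: capacity allowed to be reserved on each ingress/egress link.
policy_mbps = {"xmr-a:ge-1/1": 1_000, "xmr-d:xe-2/1": 10_000}
booked_mbps = {port: 0 for port in policy_mbps}

def admit(ingress_port, egress_port, mbps):
    """Core bandwidth is treated as infinite; only the ingress and egress
    links are checked, so the network may still refuse a reservation."""
    ports = (ingress_port, egress_port)
    if any(booked_mbps[p] + mbps > policy_mbps[p] for p in ports):
        return False  # denial: insufficient policed capacity
    for p in ports:
        booked_mbps[p] += mbps
    return True

print(admit("xmr-a:ge-1/1", "xmr-d:xe-2/1", 800))  # True
print(admit("xmr-a:ge-1/1", "xmr-d:xe-2/1", 800))  # False (1,600 > 1,000)
```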

SLIDE 16

AutoBAHN deployment in PIONIER network

Reservation process

SLIDE 17

How Circuits Are Created

  • A user wants to have a circuit from some end point in GARR, terminated at a file server in the PIONIER network
  • The GARR and GEANT2 domains provide their constraint sets to the PIONIER network, with a request to schedule a reservation to the end point from the selected ingress point

SLIDE 18

How Circuits Are Created

  • The request is forwarded to the PIONIER DM, where the pathfinder process is executed and constraints for the local domain are produced
  • The IDM analyzes the constraints and defines global path attributes, which are sent to the DM in order to schedule the reservation
  • Pathfinding is performed again, and a path of two nodes and four links is given as a result
  • The links are validated in the Calendar module to confirm resource availability
  • Then the resources are booked and the reservation is scheduled
  • The IDM is informed about the successfully created reservation (a sketch of this sequence follows below)
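
The DM-side sequence might be summarised as follows; all function and attribute names are invented for the sketch and do not mirror the AutoBAHN code base:

```python
# All names below are invented; this only mirrors the sequence of steps.

def schedule(dm, request):
    # 1. Local pathfinding on the abstracted topology; here the result is
    #    two nodes and four links (end-point links plus abstracted ones).
    path = dm.pathfinder.find(request.ingress, request.egress)

    # 2. Validate every link in the Calendar module for the requested window.
    for link in path.links:
        if not dm.calendar.is_free(link, request.start, request.end,
                                   request.capacity):
            return dm.reject(request, "resources unavailable")

    # 3. Book the resources and schedule the reservation.
    for link in path.links:
        dm.calendar.book(link, request.start, request.end, request.capacity)

    # 4. Report the successfully created reservation back to the IDM.
    dm.idm.notify_success(request)
```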
SLIDE 19

How Circuits Are Created

  • At the reservation start time, the DM sends a request for circuit implementation to the TP
  • The TP transforms the DM topology into the physical one and contacts the proper edge nodes to configure the end ports of the circuit (see the sketch below)
  • The VLL is routed according to internal MPLS procedures
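
A sketch of this TP step, with a hypothetical DM-to-physical port mapping and a made-up CLI action (not actual NetIron syntax):

```python
# Hypothetical DM-to-physical port mapping and made-up CLI output.

port_map = {
    "PIONIER.EP1": ("xmr-a", "ethernet 1/1"),
    "PIONIER.EP2": ("xmr-d", "ethernet 2/3"),
}

def implement_circuit(dm_port_a, dm_port_z, vlan_id):
    """Configure only the two edge ports; the VLL between them is routed
    by the internal MPLS procedures, not by the TP."""
    for dm_port in (dm_port_a, dm_port_z):
        node, interface = port_map[dm_port]
        print(f"[{node}] configure {interface}: vlan {vlan_id}, attach VLL")

implement_circuit("PIONIER.EP1", "PIONIER.EP2", vlan_id=312)
```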
SLIDE 20

AutoBAHN deployment in PIONIER network

Potential usage

SLIDE 21

Potential AutoBAHN users in PIONIER

  • SCARIe project
    – AutoBAHN provides connectivity for SCARIe research activities, interconnecting radio telescopes at a global scale
    – One of the radio telescopes is located physically near the city of Toruń (PL) and is connected directly to the PIONIER infrastructure

SLIDE 22

Potential AutoBAHN users in PIONIER

  • iTVP – Interactive Television may require dedicated circuits between data repositories

SLIDE 23

Potential AutoBAHN users in PIONIER

  • Data storage infrastructures – multiple data storage infrastructures distributed in Poland may be connected on demand with dedicated links

SLIDE 24

Potential AutoBAHN users in PIONIER

  • Telemedicine – dedicated circuits for high-quality video streaming

  • HPC centers interconnectivity in Poland
  • VLAB – Virtual Laboratories
  • Interconnectivity dedicated for distributed Projects
SLIDE 25

Q&A

Thank you

SLIDE 26

GMPLS/G2MPLS in PIONIER network

Bartosz Belter bartosz.belter@man.poznan.pl Presented by: Radek Krzywania radek.krzywania@man.poznan.pl Poznan Supercomputing and Networking Center

SLIDE 27

BRIEF INTRODUCTION TO G2MPLS

SLIDE 28

What is G2MPLS?

  • G2MPLS is …
    – a Network Control Plane architecture that implements the concept of Grid Network Services (GNS)
      • GNS is a service that allows the provisioning of network and Grid resources in a single step, through a set of seamlessly integrated procedures (see the sketch after this list)
    – expected to expose interfaces specific to Grid services
    – made of a set of extensions to the standard GMPLS
    – meant to provide enhanced network and Grid services for "power" users / apps (the Grids)
  • G2MPLS is not …
    – an application-specific architecture; it aims to
      • support any kind of end-user applications by providing network transport services and procedures that can fall back to the standard GMPLS ones
      • provide automatic setup and resiliency of network connections for "standard" users
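
As a toy illustration of the single-step idea (all field names are invented, not a G2MPLS message format), a GNS request carries Grid and network requirements together, instead of one call to a Grid broker and another to an NRPS:

```python
# Toy illustration; field names are invented, not a G2MPLS message format.

gns_request = {
    "grid": {"cpus": 64, "storage_gb": 500, "vsite": "Vsite-A"},
    "network": {"bandwidth_mbps": 1_000,
                "endpoints": ("Vsite-A", "Vsite-B")},
    "window": ("2009-06-01T10:00Z", "2009-06-01T14:00Z"),
}

# One request provisions both kinds of resources through integrated
# procedures, rather than: grid_broker.reserve(...) then nrps.reserve(...).
```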
SLIDE 29

Why G2MPLS?

  • uniform interface for the Grid user to trigger Grid & network resource actions
  • single-step provisioning of Grid and network resources (w.r.t. the dual approach of Grid brokers + NRPSes)
  • adoption of well-established procedures for traffic engineering, resiliency and crankback
  • possible integration of Grids in operational/commercial networks, by overcoming the limitation of Grids operating on dedicated, stand-alone network infrastructures
  • Grid nodes can be modelled as network nodes with node-level Grid resources to be advertised and configured (this is a native task for the GMPLS CP)

[Figure: G2MPLS control plane spanning Vsites A, B and C, with G.O-UNI, G.I-NNI and G.E-NNI reference points and an NRPS domain]

SLIDE 30

G2MPLS goals

  • G2MPLS will provide part of the functionalities related to the selection and co-allocation of both Grid and network resources
  • Co-allocation functionalities:
    – Discovery and advertisement of Grid + network capabilities and resources of the participating virtual sites (Vsites)
    – Service setup / teardown
      • coordination with the local job scheduler in the middleware
      • configuration of the involved network connections among the participating Vsites (the network end-point – TNA – might not be specified, if Grid resources are specified)
      • resiliency management for the installed network connections, and possible recovery escalation to the Grid MW for job recovery
      • advanced reservations of Grid and network resources
    – Service monitoring
      • retrieving the status of a job (Grid transaction) and of the related network connections

SLIDE 31

GMPLS/G2MPLS DEPLOYMENT IN PIONIER NETWORK

SLIDE 32

G2MPLS test-bed – Transport Plane [1]

  • ADVA FSP 3000RE-II (Lambda Switch)
    – 15 pass-through ports
    – 6 local ports
    – 3 physical units
  • Calient DiamondWave (Fibre Switch)
    – 60 ports
    – 1 physical unit / 4 logical units (switch virtualization)
  • Foundry NetIron XMR 8000 (Ethernet Switch)
    – 2 x 4-port 10GE modules (XFP)
    – 1 x 24-port 1GE module (SFP)
    – 3 physical units

SLIDE 33

G2MPLS test-bed – Transport Plane [2]

  • Three technology domains:
    – LSC
    – FSC
    – Ethernet
  • Interconnections to other testbeds via GÉANT2
  • Grid sites are emulated with PCs connected to the testbed
  • Successful demonstration of G2MPLS features with the Distributed Data Storage System (DDSS)

SLIDE 34

G2MPLS test-bed – Control Plane [1]

  • The Control Plane is implemented by a set of G2MPLS node controllers
    – Each of them operates exclusively on one Transport Network element (real or derived from partitioning)
    – Each controller is interfaced to the Transport Network equipment (Southbound Interface) through TL1 (ADVA, Calient) and SNMP (Foundry XMR)
    – Node controllers run on an i386 32-bit platform with the Gentoo Linux distribution
  • Signaling Control Network (SCN)
    – transports signaling messages between the CP components
    – Each G2MPLS controller exposes at least one interface on the SCN, over which the G2MPLS protocol messages flow
    – The SCN is IP-based, with addresses from the private scope; IP tunnelling is used for out-of-band connectivity between controllers (a data sketch of this layout follows below)
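
A sketch of this layout in data form (assumed addresses and labels, not the PHOSPHORUS configuration format):

```python
# Assumed addresses and labels; not the real PHOSPHORUS configuration files.

controllers = [
    {"element": "ADVA FSP 3000RE-II", "southbound": "TL1",  "scn": "10.0.0.1"},
    {"element": "Calient DiamondWave", "southbound": "TL1",  "scn": "10.0.0.2"},
    {"element": "Foundry NetIron XMR", "southbound": "SNMP", "scn": "10.0.0.3"},
]

# G2MPLS protocol messages flow between the private SCN addresses;
# IP tunnelling provides the out-of-band connectivity between controllers.
for c in controllers:
    print(f'{c["element"]}: southbound={c["southbound"]}, SCN={c["scn"]}')
```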

SLIDE 35

G2MPLS test-bed – Control Plane [2]

  • The configuration of the G2MPLS CP requires mapping the actual physical topology into the configuration files associated with each of the G2MPLS processes
  • Due to the complexity of the whole CP design, the slide's accompanying figure covers only a part of the CP configuration (the FSC technology domain)

SLIDE 36

G2MPLS in PIONIER – functional tests

  • 25 test-cards divided into three main areas:
    – LSP signalling: validates the components of the stack involved in LSP signalling (G2.RSVP-TE, LRM, TNRC, SCNGW)
    – G2MPLS call signalling: validates the components of the stack involved in call signalling
      • Intra-domain scope: G2.NCC, RC and G2.RSVP-TE
      • Inter-domain scope: G2.NCC and G.ENNI-RSVP
    – G2MPLS routing: validates the components of the stack involved in routing
      • Intra-domain scope: G2.OSPF-INNI, G2.OSPF-UNI, LRM, SCNGW
      • Inter-domain scope: G2.OSPF-INNI, G2.OSPF-ENNI, G2.OSPF-UNI, LRM, SCNGW
  • A multi-domain test-bed was required to validate some of the features; it was achieved by interconnecting the local test-beds of PIONIER (Poland) and the University of Essex (UK) via the GÉANT2 network

SLIDE 37

LSP signalling tests

No  Test Card       Test name                                                                    Status
1   G2MPLS-TC-1.1   Network node initialization                                                  Passed
2   G2MPLS-TC-1.2   Transport Plane notifications from the network node                          Passed
3   G2MPLS-TC-1.3   Setup of one bidirectional LSP                                               Passed
4   G2MPLS-TC-1.4   Tear down of one bidirectional LSP from HEAD node                            Passed
5   G2MPLS-TC-1.5   Tear down of one bidirectional LSP from TAIL node                            Passed
6   G2MPLS-TC-1.6   Unsuccessful bidirectional LSP setup (failure in HEAD node)                  Passed
7   G2MPLS-TC-1.7   Unsuccessful bidirectional LSP setup (failure in intermediate node)          Passed
8   G2MPLS-TC-1.8   Unsuccessful bidirectional LSP setup (failure in TAIL node)                  Passed
9   G2MPLS-TC-1.9   Setup of one bidirectional LSP with advance reservation                      Passed
10  G2MPLS-TC-1.10  Tear down of one bidirectional LSP with advance reservation from HEAD node   Passed

  • All tests have been done on the LSC/FSC/Ethernet nodes
SLIDE 38

G2MPLS call signalling tests

Intra-domain G2MPLS call signalling tests
No  Test Card       Test name                                                                             Status
11  G2MPLS-TC-2.1   Setup of one bidirectional single-domain LSP by G2.NCC module                         Passed
12  G2MPLS-TC-2.2   Teardown of the one bidirectional single-domain LSP by G2.NCC module                  Passed
13  G2MPLS-TC-2.3   Setup of one bidirectional single-domain LSP by G2.CCC module                         Passed
14  G2MPLS-TC-2.4   Teardown of the one bidirectional single-domain LSP by G2.CCC module                  Passed
15  G2MPLS-TC-2.5   Setup of one bidirectional single-domain LSP by G.UNI-GW module                       Passed
16  G2MPLS-TC-2.6   Teardown of the one bidirectional single-domain LSP by G.UNI-GW module                Passed
17  G2MPLS-TC-2.7   Setup of one bidirectional single-domain LSP by Middleware WS-Agreement client        Passed
18  G2MPLS-TC-2.8   Teardown of the one bidirectional single-domain LSP by Middleware WS-Agreement client Passed

Inter-domain G2MPLS call signalling tests
No  Test Card       Test name                                                                             Status
19  G2MPLS-TC-2.9   Setup of one bidirectional inter-domain LSP by G2.CCC                                 Passed
20  G2MPLS-TC-2.10  Teardown of the one bidirectional single-domain LSP by G2.CCC                         Passed

  • All tests have been done on the LSC/FSC/Ethernet nodes
SLIDE 39

G2MPLS routing tests

Intra-domain G2MPLS routing tests
No  Test Card       Test name                                                                    Status
21  G2MPLS-TC-3.1   I-NNI G2.OSPF-TE instance initialization                                     Passed
22  G2MPLS-TC-3.2   Distribution of TE information through the G.I-NNI interfaces                Passed
23  G2MPLS-TC-3.3   Distribution of Grid information through the G.UNI and G.I-NNI interfaces    Passed

Inter-domain G2MPLS routing tests
No  Test Card       Test name                                                                    Status
24  G2MPLS-TC-3.4   Routing information exchange between adjacent RAs                            Passed
25  G2MPLS-TC-3.5   Grid information exchange between adjacent RAs                               Passed

  • All tests have been done on the LSC/FSC/Ethernet nodes
SLIDE 40

Summary

  • Currently, the open-source G2MPLS protocol stack supports representatives of three main technology areas: LSC, FSC and Ethernet
  • The stack is extendable: quick and simple development of extensions in support of different vendors and equipment
  • Extensions for cheap Ethernet switches are expected soon
  • G2MPLS is developed to support UNICORE, but GLOBUS extensions are expected soon

SLIDE 41

Summary

  • G2MPLS allows running any kind of application, even ones not bridged by the Grid Middleware: it is possible to connect an application directly to the network through G.OUNI, bypassing the existing gateways developed for UNICORE
  • CORBA interfaces allow easy plug & play of external applications in the G2MPLS framework
  • PHOSPHORUS G2MPLS is backward compatible with ASON/GMPLS
    – it provides "legacy" ASON/GMPLS transport services and procedures
    – this compliance fosters the possible integration of Grids in operational and/or commercial networks

SLIDE 42

Q&A

Thank you