SLIDE 1

7th NREN and Grids workshop, Dublin (Ireland), September 2008

FP6 Phosphorus and FP7 Federica projects

Network resources for grid applications and Europe-wide experimental open infrastructure

Joan Antoni García Espín

Network Engineer

i2CAT Foundation, Barcelona (Catalonia, Spain)


SLIDE 2

Outline

  • The Phosphorus project (FP6): network resource abstraction and allocation for Grid applications
  • The Federica project (FP7): open network infrastructure virtualisation
  • The IaaS framework and Federica: beyond the Articulated Private Networks

SLIDE 3

PHOSPHORUS project overview

What: 6th FP project in the area “Research networking test-beds”; 5.1 M€ EC contribution, 6.9 M€ budget; 20 partners, 814 person-months
When: 1st October 2006 – 30th March 2009 (30 months)
More: http://www.ist-phosphorus.eu

Project Vision and Mission

  • The project addresses some of the key technical challenges in enabling on-demand e2e network services across multiple, heterogeneous domains
  • Phosphorus has demonstrated solutions and functionalities across a test-bed involving European NRENs, GÉANT2, Cross Border Dark Fibre and GLIF

SLIDE 4

Members of the PHOSPHORUS consortium

NRENs & RON: CESNET, Poznan Supercomputing and Networking Center, SURFnet, MCNC

Manufacturers: ADVA Optical Networking, Hitachi Europe Ltd., NORTEL

SMEs: NextWorks

HPC centres, universities and research institutions: Communication Research Centre, Fraunhofer-Gesellschaft, Fundació i2CAT (with UPC as third party), Forschungszentrum Jülich, Interdisciplinair instituut voor BreedBand Technologies, Research Academic Computer Technology Institute, Research and Education Society in Information Technology, SARA Computing and Networking Services, University of Bonn, University of Amsterdam, University of Essex, University of Wales Swansea / University of Leeds

SLIDE 5

PHOSPHORUS Key Features

Integration between application middleware and transport networks, based on three planes:

– Service plane:

  • Middleware extensions and APIs to expose network and Grid resources and make reservations with those resources
  • Security mechanisms (AAA) for network domains participating in a global network infrastructure, allowing both network resource owners and applications to have a stake in the decision to allocate specific network resources

– Network Resource Provisioning plane:

  • Adaptation of existing Network Resource Provisioning Systems (NRPS)
  • Implementation of interfaces between different NRPSs to allow multi-domain interoperability with Phosphorus’ resource reservation system

– Control plane:

  • Enhancements of the GMPLS control plane (Grid-GMPLS or G²MPLS) to provide optical network resources as first-class Grid resources
  • Interworking of GMPLS-controlled network domains with NRPS-based domains, i.e. interoperability between G²MPLS and Argia/UCLP, DRAC and ARGON

SLIDE 6

PHOSPHORUS Architecture

SLIDE 7

Harmony system: NRPS and NSP interfaces (WP1)

NBI: receives the reservation requests from the Grid middleware and tells the NSP which resources are under its control (NRPSs, endpoints, links). EWI: in charge of the communication between NRPSs. SBI: handles the communication between the NRPSs and the lower layers (GMPLS or the transport layer). EI: provides interoperability between the NSP and the G²MPLS control plane or other projects.

[Diagram: Harmony architecture. Grid applications & middleware and other projects call the Network Service Plane (request handler, reservation handler, scheduler, path computer, NRPS broker, Phase 2 DB) over the NBI and EI; the NSP drives the ARGON, UCLP and DRAC NRPSs and a G²MPLS/GMPLS control plane over the SBI, down to the transport networks.]

NBI: North-Bound Interface; SBI: South-Bound Interface; EWI: East-West Interface; EI: External Interface

Key points:

  • Ability to create point-to-point connections using resources from several domains
  • AAA mechanism for global authentication
  • Advance reservations: users and Grid applications can program fixed, deferrable or malleable resource reservations with one or more connections

[Diagram: bandwidth-versus-time profiles of the types of advance reservations: (a) fixed reservation, (b) deferrable reservation, (c) malleable reservation]
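The three reservation types lend themselves to a simple feasibility model. A minimal sketch (hypothetical names, not Phosphorus code): a fixed reservation pins its start time, a deferrable one may slide inside a booking window, and a malleable one may also trade duration against bandwidth as long as the requested volume is transferred.

```python
from dataclasses import dataclass

@dataclass
class Request:
    window_start: int   # earliest start (abstract time units)
    window_end: int     # latest finish
    duration: int       # requested duration (fixed/deferrable)
    volume: int = 0     # bandwidth x time product (malleable only)

def fits_fixed(req, start):
    # Fixed: the start is pinned; only check it lies inside the window.
    return start == req.window_start and start + req.duration <= req.window_end

def fits_deferrable(req, start):
    # Deferrable: same duration, but the start may slide within the window.
    return req.window_start <= start and start + req.duration <= req.window_end

def fits_malleable(req, start, bandwidth):
    # Malleable: duration shrinks as bandwidth grows; the volume must fit.
    duration = -(-req.volume // bandwidth)   # ceiling division
    return req.window_start <= start and start + duration <= req.window_end
```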

SLIDE 8

The NRPS systems

Network Resource Provisioning Systems (NRPSs)

ARGON

The Allocation and Reservation in Grid-enabled Optic Networks system was developed to manage resources of advanced network equipment as present in the German VIOLA test-bed. The advance reservation service of ARGON is able to operate on the GMPLS as well as on the MPLS level. It guarantees a certain QoS for applications for the requested time interval. This feature enables a Meta-Scheduling Service to seamlessly integrate the network resources into a Grid environment.

DRAC

The Dynamic Resource Allocation Controller system, developed by NORTEL, is a commercial-grade network abstraction and mediation middleware platform. It acts as an agent for network clients (users, applications, compute resource managers), negotiating and reserving appropriate network resources on their behalf. DRAC uses the client's QoS requirements and pre-defined policies to negotiate end-to-end connectivity across heterogeneous networks in support of just-in-time or scheduled computing workflows.

UCLP (ARGIA)

The User Controlled LightPaths system is provided by CRC, Inocybe and i2CAT. It provides a network virtualisation framework upon which communities of users can build their own services or applications. Articulated Private Networks (APNs) are presented as the first services. An APN can be considered a next-generation Virtual Private Network in which a user creates a complex, multi-domain topology by binding together network resources, time slices, switching nodes and virtual/real routing services.

SLIDE 9

Current prototype: the Harmony architecture

Key points:

  • Distributed (P2P) or hierarchical architecture for the Network Service Plane
  • Harmony Service Interface common to the adaptation layer and the Network Service Plane
  • The Network Service Plane is composed of independent entities (Inter-Domain Brokers)
  • The distinct IDBs flood the information of each domain they control
  • The new P2P architecture is being tested over the new virtual testbed

SLIDE 10

Harmony system overview


SLIDE 11

Harmony global architecture and NSP detail

[Diagram: Harmony global architecture. The Network Service Plane (implemented in Phosphorus, used by the WP3 middleware and MSS) exposes Topology-WS, Reservation-WS and Notification-WS modules (Java, MySQL DB) with request handling, authentication, topology, validation, path computation and NRPS-manager logic. NRPS Adapters wrap the partner-provided ARGON, DRAC and UCLP NRPSs and a thin NRPS with a GMPLS driver for the GMPLS domain, with external interfaces towards G²MPLS, G-Lambda and GÉANT2 (JRA3).]

Key points:

  • Ability to create point-to-point connections using resources from several domains
  • AAA mechanism for global authentication
  • Advance reservations: users and Grid applications can program fixed or malleable resource reservations with one or more connections

SLIDE 12


The NRPS Manager (as part of the NSP)

  • Module inside the NSP in charge of direct communications with the NRPSs, by means of their endpoints
  • It coordinates the calls from the NSP to the NRPS adapters and returns the replies
  • The requests to the NRPS adapters are launched at the same time, letting the NRPSs work in parallel and shortening the request processing time

[Diagram: the NRPS Manager dispatching a request in parallel to the NRPS1, NRPS2, …, NRPSn controllers]
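The parallel dispatch described above can be sketched with a thread pool; the adapter endpoints here are stand-in callables, not the project's web-service clients.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(adapters, request):
    """Fan the same request out to every NRPS adapter at once and
    collect the replies; total latency is roughly the slowest adapter's."""
    with ThreadPoolExecutor(max_workers=len(adapters)) as pool:
        futures = {name: pool.submit(call, request)
                   for name, call in adapters.items()}
        return {name: f.result() for name, f in futures.items()}

# Stand-ins for the ARGON/UCLP/DRAC adapter endpoints:
adapters = {name: (lambda r, n=name: n + " accepted " + r)
            for name in ("ARGON", "UCLP", "DRAC")}
```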

SLIDE 13

Existing NRPS comparison

[Table: feature comparison of ARGON, DRAC and UCLP across network virtualisation, advance reservations, AAA, restoration/protection, point-to-point and point-to-multipoint connections; some entries are supported only partly (under further development); all three expose a WS interface type]

SLIDE 14

NRPS Layer system architecture

  • The project does not cover developments on the internal architectures of the NRPS systems, but development of interfaces for NRPS interoperability through the NSP
  • Interfaces define common data types and common operations for NRPSs
  • Specification of suitable NRPS Adapters for the interoperability layer; they act as wrapper/translators
  • NRPS adapters provide translation of common operations to specific NRPS operations
  • The NSP is the controller of the underlying NRPSs for interoperability, since each NRPS uses its own specific interface operations and data types:

– Reservation interface: reservation of resources in e2e path provisioning
– Topology interface: updating the NSP with topology information

  • The NRPS adapter enables communication between the NSP and the NRPS
  • Interoperability is achieved by web-service implementation of abstract service interface descriptions, defined in WSDL
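The adapter's wrapper/translator role can be sketched as a common abstract interface that each NRPS-specific adapter implements; method names are illustrative, not taken from the project's WSDL.

```python
from abc import ABC, abstractmethod

class NRPSAdapter(ABC):
    """Common operations that every adapter translates to its NRPS."""

    @abstractmethod
    def is_available(self, src, dst, start, duration): ...

    @abstractmethod
    def create_reservation(self, src, dst, start, duration): ...

    @abstractmethod
    def cancel_reservation(self, reservation_id): ...


class ArgonAdapter(NRPSAdapter):
    """Illustrative translation onto an imaginary ARGON-style backend."""

    def __init__(self):
        self._next_id = 1
        self._reservations = {}

    def is_available(self, src, dst, start, duration):
        return True  # a real adapter would query its NRPS here

    def create_reservation(self, src, dst, start, duration):
        rid = "argon-%d" % self._next_id
        self._next_id += 1
        self._reservations[rid] = (src, dst, start, duration)
        return rid

    def cancel_reservation(self, reservation_id):
        return self._reservations.pop(reservation_id, None) is not None
```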

SLIDE 15

NRPS & NSP interoperability (high level view)

[Diagram: the interoperability layer. The Grid middleware requests end-to-end resource provisioning from the Network Service Plane (topology service, path computer, database, NRPS manager), which talks over reservation-service and topology-service interfaces to adapters with ARGON, UCLP and DRAC drivers in their respective domains; east/west interfaces exchange topology information, resource reservations and NRPS address information.]

Reservation WS:

  • Availability Request
  • Reservation Request
  • Cancel Reservation
  • Status Request
  • Retrieve Features
  • Retrieve Endpoints

Topology WS:

  • Add domain
  • Delete domain
  • Edit domain
  • Retrieve domain
  • Add Endpoints
  • Delete Endpoint
  • Edit Endpoints
  • Retrieve Endpoints
  • Add Link
  • Delete Link
  • Edit Link
  • Retrieve Link

Current NRPS adapter functionalities
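The Topology WS operations above amount to create/read/update/delete over domains, endpoints and links; a minimal in-memory sketch with hypothetical names:

```python
class TopologyWS:
    """In-memory stand-in for the Topology web service."""

    def __init__(self):
        self.domains = {}  # name -> {"endpoints": set, "links": set}

    def add_domain(self, name):
        self.domains[name] = {"endpoints": set(), "links": set()}

    def add_endpoint(self, domain, endpoint):
        self.domains[domain]["endpoints"].add(endpoint)

    def add_link(self, domain, link):
        self.domains[domain]["links"].add(link)

    def retrieve_domain(self, name):
        return self.domains.get(name)

    def delete_domain(self, name):
        self.domains.pop(name, None)
```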

SLIDE 16

The NRPS Adapters

[Diagram: the NRPS Adapters. Between the middleware/NSP and each controlled network (UCLP, ARGON, DRAC, GMPLS) sits an NRPS Adapter composed of a common part (common NRPS adapter and driver, WS interface) and an NRPS-specific part (specific adapter and driver); the GMPLS-controlled networks are reached through a thin NRPS and a GMPLS driver.]

SLIDE 17

Thin NRPS for GMPLS CP

Thin NRPS: a network resource provisioning system for domains with a GMPLS control plane; it is an NRPS with restricted functionality.

  • Provides a reservation web service to reserve, create and delete network connections via the GMPLS driver
  • Provides advance reservation services (checking endpoint availability and possible overlapping reservations)
  • Provides a notifications-receiver interface
  • Acts as a client of the Topology Manager WS of the NSP
  • Acts as a client of the GMPLS driver
  • Registers its domain
  • Handles reservation requests from the NSP
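The overlapping-reservations check reduces to an interval-intersection test per endpoint; a minimal sketch (illustrative, not Thin NRPS code):

```python
def overlaps(existing, start, end):
    """True if [start, end) intersects any booked [s, e) interval."""
    return any(s < end and start < e for s, e in existing)

def can_reserve(bookings, endpoint, start, end):
    # bookings: endpoint -> list of (start, end) intervals already granted
    return not overlaps(bookings.get(endpoint, []), start, end)
```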
SLIDE 18

The GMPLS Driver

GMPLS driver: an interface between the NRPS and the GMPLS control plane. It is a general WS to create, delete and monitor paths for different GMPLS implementations; it provides a web interface for testing the web service, an internal database containing topology, path and status information, modules for accessing vendor-specific GMPLS control planes (e.g. Alcatel-Lucent, Nortel and G²MPLS), and a dummy interface to GMPLS.

GMPLS driver services:

  • Path creation
  • Path termination
  • Path monitoring
  • Path discovery
  • Endpoint discovery
  • Registration service
  • Path delete notification
  • Endpoint update notification

SLIDE 19

The Network Service Plane (NSP)

Phase 1: centralised architecture (flat or hierarchical). Phase 2: distributed-hierarchical architecture (multi-level peering).

[Diagram: in Phase 1 a single top IDB controls all NRPSs over the transport networks; in Phase 2 peered IDBs each control their own NRPSs and interconnect at multiple levels]

SLIDE 20

Enhancements to the GMPLS Control Plane for Grid Network Services (GNS)

Extensions to the GMPLS control plane for automatic and single-step setup of Grid & network resources.

[Diagram: G²MPLS interconnecting Grid sites A, B and C and an NRPS domain through the G.O-UNI, G.E-NNI and G.I-NNI reference points]

  • Grid-GMPLS (G²MPLS) main research tracks:

– seamless coexistence with NRPSs (UCLP, DRAC and ARGON) & Grid middleware
– Grid-aware network reference points (G.O-UNI, G.E-NNI, G.I-NNI)
– CBR algorithms for recovery and TE
– integration with the AAA system

  • Expected innovation in the field of co-allocation of Grid and network resources, because of:

– faster dynamics for service setup
– adoption of well-established procedures for traffic engineering, resiliency and crankback
– a uniform interface for the Grid user to trigger Grid & network transactions

SLIDE 21

Demonstration scenario (real test-bed)

SLIDE 22

Demonstrator Client (create reservation)


SLIDE 23

Administrative Web (create reservation)

SLIDE 24

Administrative Web (reservation list and status)

SLIDE 25

G2MPLS roadmap

  • In the short term

– Open source GMPLS control plane prototype for:

  • Optical LSP setup/teardown via a UNI interface compatible with OIF UNI 2.0
  • LSP protection/restoration and crankback
  • Flexible adaptation to the management interfaces exported by the underlying transport plane (e.g. TL1, SNMP)
  • Multi-domain operations through an OIF E-NNI compliant interface

– Basic modules for routing and signalling in a single control domain delivered in Q3 2007
– Recovery strategies and inter-domain operations expected by Q1 2008

  • In the longer term (but within the project lifetime)

– Grid-GMPLS enhancements to the control plane prototype, in terms of:

  • Discovery and advertisement of Grid capabilities and resources of the participating Grid sites
  • Service setup and maintenance: coordination, co-allocation and configuration of Grid and network resources associated with a Grid job; recovery of the installed network services and possible escalation of procedures; advance reservations of Grid and network resources
  • Service monitoring: retrieving the status of a Grid job and the related network connections

SLIDE 26

Strategy to integrate WP1-WP2 developments

[Diagram: the middleware or demonstrator client connects Grid resource A to Grid resource B through the Network Service Plane; adapters drive the ARGON, DRAC and UCLP NRPSs and an advance-reservation-capable thin NRPS, which reaches the G²MPLS control plane through an OUNI wrapper (OUNI-C/OUNI-N) and E-NNI; OIF signalling propagates network info only, no Grid info.]

SLIDE 27

AAA Authorisation Service Architecture

  • Operates as an integral part of the general Network Resource Provisioning Service (NRPS):

– can be called from the NRPS domain controller, or
– can drive the reservation/provisioning process

  • Incorporates 3 basic generic AAA AuthZ sequences: push, pull and agent
  • Implements 3 basic AAA AuthZ operational models for complex multidomain network resource provisioning: polling, relaying and agent
  • Supports different and multiple policy decision mechanisms:

– multiple policy combination and/or mapping
– creates and supports dynamic user and resource associations
– supports interoperation of different AuthZ frameworks

  • Supports different policy enforcement models:

– using AuthZ tickets and tokens at the Service and Control planes
– integration of a token-based enforcement mechanism inside the GMPLS control plane using the RFC 2750 policy data object
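Token-based enforcement can be sketched as an HMAC over the reservation id, issued at the service plane and checked at each policy enforcement point (illustrative only; in the project the token travels in an RFC 2750 policy data object inside GMPLS signalling, and names here are hypothetical):

```python
import hashlib
import hmac

DOMAIN_KEY = b"shared-domain-secret"  # illustrative per-domain key

def issue_token(reservation_id):
    """Service plane: bind a token to a granted reservation."""
    return hmac.new(DOMAIN_KEY, reservation_id.encode(),
                    hashlib.sha256).hexdigest()

def validate_token(reservation_id, token):
    """Enforcement point: accept signalling only with a matching token."""
    return hmac.compare_digest(issue_token(reservation_id), token)
```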

SLIDE 28

AAA AuthZ components and basic operational models

  • Polling sequence (P): the user client polls all individual network domains to make a reservation.
  • Relay (R), or hop-by-hop reservation: the user client contacts only the local network domain/provider, and each consecutive domain provides a path to the next domain.
  • Agent (A) sequence: the user delegates network provisioning negotiation to an Agent that carries out all necessary negotiations with all involved domains.

[Diagram: the user client and application reach the destination host through per-domain PDP/AAA/PEP chains over NRPSs and network elements, across the service (AAA), control and network planes, for the P, R and A models]

TVS: Token Validation Service; DRAM: Dynamic Resource Allocation and Management; PDP: Policy Decision Point; PEP: Policy Enforcement Point

SLIDE 29

Middleware and Applications goals

  • Main goals

– Adapt and extend Grid middleware to present PHOSPHORUS services to applications:

  • Coordinated reservation and allocation of compute, storage and network resources
  • Integration of the MetaScheduling Service (MSS) into UNICORE 6
  • Allow semantic annotation of resources to improve resource selection
  • Interface with the NRPS, Control Plane and AAA activities

– Adapt and extend network-based applications to evaluate and demonstrate the PHOSPHORUS developments in the test-bed:

  • Make use of network/Grid resource reservation services
  • Prepare for deployment in the test-bed, support tests
SLIDE 30

Middleware and Applications

  • Integration of network reservation services into existing Grid middleware

– services for user-driven or application-driven set-up of execution environments with dedicated capabilities & performance:

  • Compute nodes, storage systems, visualisation devices
  • Network resources with defined QoS

  • Integration of applications

– WISDOM: wide in silico docking on malaria
– KoDaVis: collaborative, distributed visualisation of huge data sets
– TOPS: streaming of ultra-high-resolution data sets over lambda networks
– DDSS: Distributed Data Storage System
– INCA: Intelligent Network Caching Architecture

Provide application access to PHOSPHORUS services and showcase their benefit via applications.

SLIDE 31

Supporting Studies

Study areas: job routing & scheduling algorithms; network & resource management; simulation environment; control plane design

  • Job demand models
  • QoS resource scheduling
  • Grid job routing algorithms
  • Physical layer constraints
  • Advance reservations
  • Optical network
  • Advanced control plane
  • Network service plane
  • Architectural issues
  • Integration strategies
  • Recommendations

The studies feed the NRPS, Control Plane, and Middleware and Applications activities.

SLIDE 32

Supporting Studies results

  • Research-oriented results

– Job demand models
– Grid job routing algorithms:

  • include physical impairments in the routing decision
  • multi-domain job routing

– QoS-aware resource scheduling
– Support for advance reservations

  • Simulation environment

– Java, no dependencies, discrete event
– Modelling network and Grid resources
– Dynamic OCS path set-up and tear-down
– Flexible job models (based on Markov states)
– GUI to define network topology and traffic models
– Code available on svn: http://phosphorus.atlantis.ugent.be/phosphorus (mail marc.deleenheer@intec.ugent.be for login details)
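The project simulator itself is written in Java; purely to illustrate the discrete-event idea it mentions, here is a minimal event loop (all names hypothetical), where handlers may schedule follow-up events such as a path tear-down after a set-up:

```python
import heapq

def run(events, horizon):
    """Tiny discrete-event loop. `events` holds (time, seq, handler)
    tuples; a handler may return new events to schedule."""
    queue = list(events)
    heapq.heapify(queue)           # earliest event first
    fired = []
    while queue and queue[0][0] <= horizon:
        time, _, handler = heapq.heappop(queue)
        for new_event in handler(time) or ():
            heapq.heappush(queue, new_event)
        fired.append(time)
    return fired
```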
SLIDE 33

Test–bed & Demonstration activities

  • Objectives:

– Requirements analysis and design of the test-bed
– Construction of the test-bed and configuration of all related software components, middleware and applications
– Tests of the project's developments
– Demonstration of the project's results
– Real scientific applications and real-life scenarios used to verify project developments

  • Architecture:

– Several local test-beds supporting different technologies (optical, lambdas, Ethernet, SDH switching), Grid & HPC resources and applications
– Lightpaths to interconnect the local test-beds

  • Current status:

– Most resources in local test-beds available for applications; almost all interconnections in place and NRPSs installed
– Verification of the test-bed
– Installation of middleware and applications

SLIDE 34

PHOSPHORUS Current test–bed data links

SLIDE 35

PHOSPHORUS Multi-domain testbed

SLIDE 36

PHOSPHORUS Dissemination and Contribution to standards

  • Disseminate information concerning the technical developments to:

– NRENs
– Related projects: GÉANT2 JRA3 AutoBAHN, NOBEL, e-Photon/ONe+, RINGRID, BELIEF, Argia/UCLP, G-Lambda, DRAGON/OSCARS, FEDERICA, …

  • Coordinate direct contributions to standards (mainly OGF NSI and GLIF GNI)
  • Build a collaborative framework for participation in test-bed activities from within and external to the EU

SLIDE 37

FEDERICA project e-Infrastructure

What: 7th FP project in the area “Capacities - Research Infrastructures”; 3.7 M€ EC contribution, 5.2 M€ budget; 20 partners, 461 person-months
When: 1st January 2008 – 30th June 2010 (30 months)

A virtualisation infrastructure, a “network factory” providing “slices” to researchers in the Future Internet, where a slice is a mix of network circuits and computing elements. Built using resources (Gb Ethernet circuits) from GÉANT2 and the NRENs as contributions to the project. Open to interconnecting other infrastructures. Connected to the Internet (through NRENs).

SLIDE 38

FEDERICA - Goals Summary

In scope

— Act as a forum and support for researchers/projects on the “Future Internet”: support of experimental activities to validate theoretical concepts, scenarios, architectures, control and management solutions. Users have full control of their slice
— Provide a European-scale, network- and system-agnostic e-infrastructure, to be deployed in phases; provide its operation, maintenance and on-demand configuration
— Validate and gather experimental information for the next generation of research networking, also through basic tool validation
— Dissemination and cooperation between NRENs and the researchers' community
— Contribution to standards in the form of requirements and experience

Out of scope

— Internal extended research, e.g. advanced optical technology
— Development and support of Grid applications
— Offering raw computing power
— Offering transit capacity

SLIDE 39

FEDERICA – What and how

Scope

  • Create an e-Infrastructure for (future) Internet research made of network and computing systems, providing virtualised networks/facilities for end-users and allowing disruptive emulations
  • Research in multi-(virtual)-domain control, management and monitoring, including user-oriented control (IaaS philosophy)
  • Pave the way / create experience for GN3

How

  • Employ a mesh of initially up to 1 Gbps MPLS & GigE circuits from the NRENs and GÉANT2 (using the GN2+ service)
  • Install virtualisation computing nodes (capable of hosting e.g. open-source routers) and open-API routers and switches in selected FEDERICA PoPs
  • Develop a tool-bench for managing virtual e2e facilities and the infrastructure itself

SLIDE 40

FEDERICA e-Infrastructure

SLIDE 41

FEDERICA – Main characteristics

  • Suited to core network research, where a network is a (stack of) virtual networks as a mix of circuits and computing elements. A slice is as transparent and agnostic of specific technologies as possible
  • Interdomain communication developments (monitoring, services, topology), also between “real” and “virtual” networks
  • Full user control of each “slice”, including disruptive experiments; open to host any application/service
  • Open to host researchers' hardware and to external connections
  • Internal research activities on control, management and monitoring of virtual infrastructures
  • Basic monitoring data is provided to the user

Phase 1

  • Start with manual configuration of “slices” (e.g. L2 networks, IP routed networks) and proceed to developing automated tools

SLIDE 42

The way to network virtualization. What is IaaS?

  • Virtualisation consists of representing a physical device/substrate as a software entity (P2V)

– Initially started with PC virtualisation (VMware, Virtual Iron, VirtualPC, VirtualBox, and others)
– Amazon and BlueLock pioneered the IaaS service by renting hardware using proprietary solutions

  • IaaS is the equivalent of SaaS for hardware devices

– Users pay to use shared infrastructures
– Monthly fees or pay-per-use
– Long-term exchanges compared to on-demand services
– Users control/own the (virtual) infrastructure
– Don't buy an optical switch, just get some of its ports for a certain period of time!

SLIDE 43

UCLP, Argia and the IaaS Framework

  • Two UCLP research programs were put in place by CANARIE to provide a virtualisation solution for optical networks, starting in 2001

– UCLP's initial goal was to provide end-to-end paths across domains (Cisco was a sponsor of the program)
– UCLPv2's goal was to create reusable and configurable network blocks

  • UCLPv2 concepts are evolving into many different Physical-to-Virtual (P2V) products and R&D projects built on the IaaS Framework:

– Argia -> product for optical networks
– Ether -> R&D for Ethernet and MPLS networks
– MANTICORE -> R&D for (logical) IP networks
– GRIM -> R&D for instruments and sensors

[Diagram: IaaS Framework components: RMC, MANTICORE, ETHER, GRIM, CHRONOS]

SLIDE 44

IaaS framework: Open Source for building IaaS solutions

  • A generalised architecture derived from the outcome of years of research under the UCLP research programs funded by CANARIE
  • A software resource architecture to expose physical devices as infrastructure resources
  • A set of capabilities that can be used to quickly provide functionalities like permissions/security, reservation and lifetime management
  • Libraries and tools to manage persistence or communication with the hardware devices (IaaS Engine -> driver architecture)
  • A basic RMC GUI, extensible with plugins
  • The enabling technology for upcoming products and R&D initiatives (Phosphorus, FEDERICA, …)
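The idea of attaching capabilities (security, reservation, lifetime management) to a resource representation can be sketched as simple composition; class and method names are hypothetical, not the real IaaS Framework API.

```python
class Resource:
    """A physical device exposed as a software resource (P2V)."""

    def __init__(self, name):
        self.name = name
        self.capabilities = {}

    def attach(self, capability):
        # Capabilities are pluggable units of behaviour keyed by kind.
        self.capabilities[capability.kind] = capability

    def invoke(self, kind, *args):
        return self.capabilities[kind].apply(self, *args)


class LifetimeCapability:
    """Illustrative capability: lease a resource for a period of time."""
    kind = "lifetime"

    def __init__(self):
        self.expires_at = None

    def apply(self, resource, expires_at):
        self.expires_at = expires_at
        return resource.name + " leased until " + str(expires_at)
```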

SLIDE 45

IaaS Framework Resource Architecture

[Diagram: layered architecture. Presentation layers (SOAP WS, REST WS, web GUI) sit over an application container with a security framework and business-logic capabilities; resource representations / service interfaces (Java) hold transient information; a persistence layer with data sources (DB, LDAP, file system) holds persistent information; the IaaS Engine (driver architecture) talks to the physical devices.]

SLIDE 46

User Roles

  • Physical Network Administrator (Infrastructure Provider):

– Owner of the physical infrastructure (e.g. optical switches and links); decides which parts of the infrastructure can be accessed by which user (e.g. user A can access device X, ports X.1, X.2, …, or ALL users can access device Y)
– Makes sure the infrastructure is in good shape (e.g. if a card stops working, it must be replaced)

  • APN (or Virtual Network) Administrator (Infrastructure Integrator):

– Requests resources from physical network administrators or other APN administrators (e.g. get device X from network A, port X.1, and also device Y from network B, …)
– Integrates all the resources under a single control/management domain (its own domain)
– Provides services to the end user (e.g. deploys a circuit reservation service)
– Can hand off the control of its resources to other APN administrators

  • End User:

– Typical end user; requests a network service (such as bandwidth on demand, circuit reservations, …)

SLIDE 47

Resource Trading (I): Direct Export

[Diagram: providers 1 and 2 export their resource lists directly to users A, B and C]

SLIDE 48

Resource Trading (II): Brokering sites


SLIDE 49

IaaS Framework, Products and Research Projects Architecture

[Diagram: the open-source IaaS Framework (WS) underpins Argia™ (product, optical networks), Ether™ (product, Ethernet networks), MANTICORE (IP research project) and GRIM (virtual instruments research), each exposing resource types: device controllers and partitions, optical and Ethernet switches, routers, resource brokers, IP networks, TDM timeslots, Ethernet ports, WDM, connections, VLANs, physical networks, resource lists and instruments; plus end-user services, the Resource Management Centre and user web portal (RCP and web GUIs, in development), support services and user workspace resources.]

Unless specified otherwise, the development is being performed in partnership by i2CAT, CRC and Inocybe Technologies.

SLIDE 50

Thank you for your attention

Joan Antoni García Espín

joan.antoni.garcia@i2cat.net

i2CAT Foundation, Internet and Digital Innovation in Catalonia

Phone: (+34) 93 553 2518 · Fax: (+34) 93 553 2520

Thanks to Sergi Figuerola and Eduard Grasa from i2CAT, and to both the Phosphorus and Federica consortia.