Research activities in Grid middleware in the PARIS* research group
Thierry Priol IRISA/INRIA
* Common project with CNRS, ENS-Cachan, INRIA, INSA, University of Rennes 1
PARIS Identity card
Created in December 1999
Research
and Grids (C. Morin)
(J-L. Pazat, C. Pérez, T. Priol)
(G. Antoniu, L. Bougé, Y. Jégou)
Banâtre)
including salaries of researchers and some of the PhD, ~1670 K€ otherwise)
Small Britain
Towards a software component platform for the Grid
Extension/adaptation of existing component models
Communication framework for software components
Deployment of software components
Coordination of components
Towards a shared global address space for the Grid
Distributed shared memory
Hierarchical cache coherence protocols
A P2P approach for large scale data sharing
complex… thanks to the increase in performance of
simulation of complex phenomena
acoustic-vibration
study for optimal design
Figure: a plane wave illuminating Object 1 and Object 2, with scattering of 2 on 1 and of 1 on 2 across a virtual plane.
Single-physics multiple-object: Thermal Dynamics, Structural Mechanics, Optics
Multiple-physics single-object: Ocean-atmosphere coupling, Electromagnetic coupling
SAN cluster
Figure: MPI codes #1, #2, #3, …, #n coupled through a code coupler (MpCCI, OASIS, …) on top of a single MPI implementation.
CALCIUM, ISAS, …
tools
machines, with some ongoing work to target Grid infrastructures
“plug and play”
sockets, shared memory segments, …)
Figure: Code A and Code B, each with a code interface exposing In/Out ports, coupled through the MpCCI coupling library on top of MPI.
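To make the coupling pattern concrete, here is a minimal sketch of two codes exchanging interface data through MPI at every coupling step. It is a generic illustration only, not the MpCCI API; the two-process layout, buffer size and tags are assumptions made for the example.

  // Generic sketch of two coupled codes exchanging interface data over MPI.
  // This is not the MpCCI API: the process layout, tags and buffer size are
  // assumptions. Run with two processes, e.g. mpirun -np 2 ./coupled
  #include <mpi.h>
  #include <vector>

  int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);      // rank 0 plays Code A, rank 1 plays Code B
      int peer = 1 - rank;

      std::vector<double> boundary(1024, rank);  // values on the coupling interface

      for (int step = 0; step < 10; ++step) {
          // ... each code advances its own solver here ...
          // Exchange the interface values with the other code.
          MPI_Sendrecv_replace(boundary.data(), static_cast<int>(boundary.size()),
                               MPI_DOUBLE, peer, 0, peer, 0,
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
  }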
deployment”
code coupling
a framework (code coupler)
frameworks)
(but component models are often complex to master!)
Figure: a component-based framework used as a code coupler, with components Code A, Code B, Code C and Code D, each with a code interface, assembled by connecting their In and Out ports.
and their interaction
input/output interfaces
communication
binary multi-lingual executables
notion of containers (lifecycle, security, transactions, persistence, events)
multi-vendors, …
distributed nodes simultaneously
Figure: a CCM component and its ports. Facets and event sources are the offered interfaces, receptacles and event sinks are the required interfaces, and attributes are used for configuration.
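To illustrate the offered/required distinction without any CCM machinery (this is plain C++ with invented names, not CCM code), a facet is an interface a component implements, while a receptacle is a reference to such an interface that is supplied when the application is assembled:

  // Illustration only: the offered/required port idea in plain C++, not CCM.
  #include <iostream>
  #include <memory>

  // Interface type of the port.
  struct ComputePort {
      virtual void run_step() = 0;
      virtual ~ComputePort() = default;
  };

  // Component A offers the port (a facet): it implements the interface.
  struct ComponentA : ComputePort {
      void run_step() override { std::cout << "A computes one step\n"; }
  };

  // Component B requires the port (a receptacle): it holds a reference
  // that must be connected before use.
  struct ComponentB {
      void connect_compute(std::shared_ptr<ComputePort> p) { compute_ = std::move(p); }
      void work() { compute_->run_step(); }
  private:
      std::shared_ptr<ComputePort> compute_;
  };

  int main() {
      auto a = std::make_shared<ComponentA>();
      ComponentB b;
      b.connect_compute(a);   // done at assembly time, not hard-wired in the codes
      b.work();
  }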
Component-based application
possible when assembling components
How much does my parallel code have to be modified to fit the CCM model? What has to be exposed outside the component?
components to communicate efficiently
How to combine multiple communication middleware/runtimes and make them run seamlessly and efficiently?
How to manage different networking technologies transparently? Can my two components communicate using Myrinet for one run and Ethernet for another run without any modification or recompilation?
What the application designer should see…
components:
method invocation (to a CORBA
SPMD process = a standard component
components should use the ORB to ensure interoperability
to the CCM specification
components by having several data flows between components
HPC Component A HPC Component B
… and how it must be implemented!
Figure: parallel component A and parallel component B, each implemented as a set of SPMD processes, connected by multiple data flows.
IDL:
interface Interfaces1 {
  void example(in Matrix mat);
};
…
component A {
  provides Interfaces1 to_client;
  uses Interfaces2 to_server;
};
XML description of the parallel port: Component: A, Port: to_client, Name: Interfaces1.example, Type: Parallel, Argument1: *, bloc, …
Figure: a parallel component of type A is a collection of SPMD processes exposing the to_client and to_server ports.
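In practice, each SPMD process of such a parallel component hosts both the MPI runtime and a CORBA ORB. The sketch below shows this double initialization, assuming an installed C++ ORB and MPI implementation; the CORBA header name and the servant activation details vary across ORBs and are assumptions here.

  // Sketch: one SPMD process of a parallel component initializes both the MPI
  // runtime (intra-component communication) and the CORBA ORB (inter-component
  // communication through the component's ports). The <CORBA.h> header name is
  // ORB-specific and assumed here.
  #include <mpi.h>
  #include <CORBA.h>

  int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);                              // SPMD side
      CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);    // component side
      // ... create and activate the servant implementing the to_client facet,
      //     then serve requests between computation steps ...
      // Making both runtimes share the network and the threads efficiently is
      // exactly the problem illustrated by the message below and addressed by
      // PadicoTM.
      orb->destroy();
      MPI_Finalize();
      return 0;
  }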
From: ------- -------- < -.-----@n... >
Date: Mon May 27, 2002 10:04 pm
Subject: [mico-devel] mico with mpich
I am trying to run a Mico program in parallel using mpich. When calling CORBA::ORB_init(argc, argv) it seems to core dump. Does anyone have experience in running mico and mpich at the same time?
OmniORB MICO Orbacus Orbix/E Kaffe Mome Mpich CERTI
ease the porting of existing middleware
Message, Madeleine, …
access to the network and scheduling of threads
selection, …
based on Madeleine
networking technologies
(multithreading)
Figure: the PadicoTM architecture. Personalities (DSM, JVM, MPI, CORBA, HLA) are built on the PadicoTM services and the PadicoTM core (personality layer and internal engine), which rely on Madeleine for portability across networks (Myrinet, SCI, TCP) and on Marcel for I/O-aware multithreading.
This work is supported by the Incentive Concerted Action "GRID" (ACI GRID) of the French Ministry of Research.
Experimental protocols
MPICH, CORBA OmniORB3, Kaffe JVM
GHz with Myrinet 2000, Dolphin SCI
Performance results
Myrinet-2000: MPI 240 MB/s, 11 µs; CORBA 240 MB/s, 20 µs, i.e. 96% of the maximum achievable bandwidth of Madeleine (about 250 MB/s on this network)
SCI: MPI 75 MB/s, 23 µs; CORBA 89 MB/s, 55 µs
Figure: bandwidth (MB/s) versus message size (bytes) for CORBA/Myrinet-2000, MPI/Myrinet-2000, Java/Myrinet-2000, CORBA/SCI, MPI/SCI and TCP/Ethernet-100; an 18 µs latency is annotated on the plot.
Figure: a Grid made of Cluster A, Cluster B, Cluster C and Cluster D.
within a set of peers
data among a set of peers
replication
implement P2P services
This work is supported by the Incentive Concerted Action "Masse de Données" (ACI MD) of the French Ministry of Research.
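To make the goal concrete, here is a purely hypothetical sketch of what such a data-sharing service could look like to the programmer: data carries a global identifier and is accessed in an acquire/release style, with localization and replication among peers handled underneath. None of the names below come from the source; the function bodies are local stand-ins so that the sketch compiles.

  // Hypothetical sketch of a grid data-sharing API (acquire/release style).
  // All names are invented for illustration; a real service would locate,
  // replicate and keep the data consistent among the peers.
  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  static std::map<std::string, std::vector<double>> local_store;  // stand-in for the P2P service

  std::vector<double>& grid_attach(const std::string& id, std::size_t n) {
      auto& v = local_store[id];
      v.resize(n);                  // real version: join the peer group sharing this data
      return v;
  }
  void grid_acquire(const std::string& id) { (void)id; /* real version: fetch an up-to-date replica */ }
  void grid_release(const std::string& id) { (void)id; /* real version: publish updates to the peers */ }

  int main() {
      auto& v = grid_attach("simulation/velocity", 1024);
      grid_acquire("simulation/velocity");     // get a consistent view of the shared data
      for (auto& x : v) x = 1.0;               // local computation on the shared data
      grid_release("simulation/velocity");     // make the update visible to the other peers
      std::cout << "updated " << v.size() << " shared values\n";
  }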
National projects
ACI GRID GRID2: scientific animation of the French Grid scientific community
ACI GRID HYDROGRID: Grid applications
ACI GRID ANIMAGRID: scientific animation related to storage Grid
ACI GRID GRID’5000: set up a National Grid infrastructure
ACI MD GDS: Large Scale Data Management
ACI MD GdX: Grid Data Explorer
collaboration (with NSU)
(USA)
Coordinator)
computing
platform
bandwidth wide area network experiments
distributed objects platform
Cluster roadmap (figure): a 48-port Fast-Iron 1 Gbit switch and a Catalyst 6500 router, 100 Mbit/s links, 32 + 32 nodes in November/December 2003; VTHD at 2.5 Gbit/s with 3 x 1 Gbit/s links and 96 + 96 nodes from Q2 2004 to Q2 2005.
Around 50 nodes (PC, Apple) in 2003… Around 256 nodes (512 processors) in 2005…
PARIS research activities
Not in the Grid mainstream (no Grid services, or only a few)… with some technology risks (CORBA…) but we are civil servants…
We are more interested in working on concepts (including their validation through research prototypes) than in developing the whole Grid software infrastructure
We are looking for collaborations on
The infrastructure level for component deployments
Applications, applications and applications…