Research activities in Grid middleware in the PARIS* research group - PowerPoint PPT Presentation


  1. Research activities in Grid middleware in the PARIS* research group
     Thierry Priol, IRISA/INRIA
     * Common project with CNRS, ENS-Cachan, INRIA, INSA, University of Rennes 1

  2. PARIS Identity card [map: Brittany, labeled "Small Britain"]
     Created in December 1999
     Head of the project: T. Priol (DR INRIA)
     Researchers-Lecturers: J-P. Banâtre (Prof IFSIC), L. Bougé (Prof ENS), J-L. Pazat (Ass. Prof INSA)
     Full-time researchers: G. Antoniu (CR INRIA), Y. Jégou (CR INRIA), A-M. Kermarrec (DR INRIA), C. Morin (DR INRIA), C. Pérez (CR INRIA)
     3 Engineers, 11 PhD Students
     Research activities:
     - Operating Systems for Clusters and Grids (C. Morin)
     - Software component platform (J-L. Pazat, C. Pérez, T. Priol)
     - Large Scale Data Management (G. Antoniu, L. Bougé, Y. Jégou)
     - P2P systems (A-M. Kermarrec)
     - Coordination models (J-P. Banâtre)
     Budget in 2003: 620 K€ (not including salaries of researchers and some of the PhD students; ~1670 K€ otherwise)

  3. Research activities in Grid Computing
     - Towards a software component platform for the Grid
       - Extension/adaptation of existing component models
       - Communication framework for software components
       - Deployment of software components
       - Coordination of components
     - Towards a shared global address space for the Grid
       - Distributed shared memory
       - Hierarchical cache coherence protocols
       - A P2P approach for large scale data sharing

  4. High Performance applications
     Not anymore a single parallel application, but several of them:
     - High performance applications are more and more complex... thanks to the increase in performance of off-the-shelf hardware
     - Several codes coupled together are involved in the simulation of complex phenomena: fluid-fluid, fluid-structure, structure-thermo, fluid-acoustic-vibration
     - Even more complex if you consider a parameterized study for optimal design
     Some examples:
     - e-Science: weather forecast (Sea-Ice-Ocean-Atmosphere-Biosphere)
     - Industry: aircraft (CFD-Structural Mechanics, Electromagnetism); satellites (Optics-Thermal-Dynamics-Structural Mechanics)
     [Figures: electromagnetic coupling between Object 1 and Object 2 (virtual plane, plane wave, scattering of 1 on 2 and of 2 on 1): single-physic, multiple-object; satellite coupling (Optics, Structural Mechanics, Thermal, Dynamics): multiple-physics, single-object; ocean-atmosphere coupling]

  5. The current practice…
     - Coupling is achieved through the use of specific code coupling tools
     - Not just a matter of communication! Interpolation, time management, …
     - Examples: MpCCI, OASIS, PAWS, CALCIUM, ISAS, …
     - Limitations of existing code coupling tools:
       - Originally targeted at parallel machines, with some on-going work to target Grid infrastructures
       - Static coupling (at compile time): not "plug and play" (a minimal sketch of such hard-wired coupling follows this slide)
       - Ad-hoc communication layers (MPI, sockets, shared memory segments, …)
       - Lack of explicit coupling interfaces
       - Lack of standardization
       - Lack of interoperability
     [Figures: MPI codes #1…#n connected through a code coupler (MpCCI, OASIS, …) on top of an MPI implementation over a SAN; the MpCCI coupling library linking Code A and Code B via MPI on a cluster]
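A minimal sketch (not from the slides) of the hard-wired coupling criticized above: two codes share one MPI world and exchange boundary data with a partner rank fixed at compile time, so replacing either code or changing the layout means recompiling everything. All names and sizes are illustrative; an even number of ranks is assumed.

    // Two coupled codes hard-wired over MPI: the partner rank and the
    // data layout are decided at compile time ("static coupling").
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // First half of the ranks run code A (e.g. fluid), second half
        // run code B (e.g. structure).
        bool in_code_a = rank < size / 2;
        MPI_Comm code_comm;  // communicator for each code's internal steps
        MPI_Comm_split(MPI_COMM_WORLD, in_code_a ? 0 : 1, rank, &code_comm);

        // Ad-hoc coupling: swap boundary values with a mirror rank in the
        // other code, hard-coded here.
        std::vector<double> boundary(1024, 0.0);
        int partner = in_code_a ? rank + size / 2 : rank - size / 2;
        MPI_Sendrecv_replace(boundary.data(), (int)boundary.size(), MPI_DOUBLE,
                             partner, 0, partner, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Comm_free(&code_comm);
        MPI_Finalize();
        return 0;
    }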

  6. Another approach for code coupling
     We need a programming model suitable for the Grid!
     - Component definition by C. Szyperski: "A component is a unit of independent deployment", "well separated from its environment and from other components"
     - Component programming is well suited for code coupling:
       - Codes are encapsulated into components (sketched after this slide)
       - Components have public interfaces (In/Out)
       - Components can be coupled together directly or through a framework (code coupler)
       - Components are reusable (with other frameworks)
     - Application design is simpler through composition (but component models are often complex to master!)
     - Some examples of component models:
       - HPC component models: CCA, ICENI
       - Standard component models: EJB, DCOM/.NET, OMG CCM
     [Figures: Code A and Code B coupled directly through In/Out code interfaces; Codes A, B, C, D coupled through a component-based code coupler framework]
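A hedged sketch of the encapsulation idea referenced above, in plain C++ rather than any particular component model; the port interfaces and the FluidCode wrapper are invented for illustration.

    #include <vector>

    // Public interface of an output port: data a component produces.
    struct OutPort {
        virtual ~OutPort() = default;
        virtual std::vector<double> pull() = 0;
    };

    // Public interface of an input port: data a component consumes.
    struct InPort {
        virtual ~InPort() = default;
        virtual void push(const std::vector<double>& data) = 0;
    };

    // A legacy simulation code encapsulated behind explicit ports.
    class FluidCode : public OutPort, public InPort {
        std::vector<double> boundary_ = std::vector<double>(1024, 0.0);
    public:
        std::vector<double> pull() override { return boundary_; }           // Out
        void push(const std::vector<double>& d) override { boundary_ = d; } // In
        void step() { /* advance the legacy solver by one time step */ }
    };

    // Coupling becomes composition over public interfaces: any OutPort
    // can feed any InPort, directly or through a coupler framework.
    void couple(OutPort& from, InPort& to) { to.push(from.pull()); }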

  7. Distributed components: OMG CCM
     We like CORBA, but we seem to be alone in the Grid community…
     - A distributed component-oriented model:
       - An architecture for defining components and their interaction
       - Interaction implemented through input/output interfaces: OFFERED facets and event sinks, REQUIRED receptacles and event sources, plus attributes (transliterated into C++ after this slide)
       - Synchronous and asynchronous communication
     - A packaging technology for deploying binary multi-lingual executables
     - A runtime environment based on the notion of containers (lifecycle, security, transactions, persistence, events)
     - Multi-language, multi-OS, multi-ORB, multi-vendor, …
     - Includes a deployment model: a component-based application can be deployed and run on several distributed nodes simultaneously
     [Figure: a CCM component interface, with offered facets and event sinks on one side, required receptacles and event sources on the other, and attributes]
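The slide's port vocabulary, transliterated into plain C++ as promised above. This illustrates the concepts only; it is not CCM IDL nor its standard C++ mapping, and every class and method name is invented.

    #include <functional>
    #include <string>
    #include <vector>

    // One synchronous interface, which can be offered (facet) or
    // required (receptacle).
    struct Compute {
        virtual ~Compute() = default;
        virtual void run() = 0;
    };

    class MyComponent : public Compute {
    public:
        // Facet: MyComponent offers the Compute interface itself.
        void run() override { /* business code of the component */ }

        // Receptacle: MyComponent requires a Compute provided elsewhere.
        void connect_solver(Compute* c) { solver_ = c; }

        // Event sink: asynchronous input pushed to us by other components.
        void push_progress(double v) { last_progress_ = v; }

        // Event source: we publish asynchronous events to subscribed sinks.
        void subscribe_done(std::function<void()> sink) {
            done_sinks_.push_back(std::move(sink));
        }
        void emit_done() { for (auto& s : done_sinks_) s(); }

        // Attribute: a configurable property of the component.
        void set_name(std::string n) { name_ = std::move(n); }

    private:
        Compute* solver_ = nullptr;
        double last_progress_ = 0.0;
        std::vector<std::function<void()>> done_sinks_;
        std::string name_;
    };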

  8. CCM in the context of HPC
     - Encapsulation of parallel codes into software components
       - Parallelism should be hidden from the application designers as much as possible when assembling components
       - Issues:
         - How much does my parallel code have to be modified to fit the CCM model?
         - What has to be exposed outside the component?
     - Communication between components
       - Components should use the available networking technologies to communicate efficiently
       - Issues:
         - How to combine multiple communication middleware/runtimes and make them run seamlessly and efficiently?
         - How to manage different networking technologies transparently?
         - Can my two components communicate using Myrinet for one run and Ethernet for another run without any modification or recompilation?

  9. Making CORBA Components parallel-aware
     What the application designer should see… and how it must be implemented!
     - Communication between components is implemented through a remote method invocation (to a CORBA object)
     - Constraints:
       - A parallel component with one SPMD process = a standard component
       - Communication between components should use the ORB to ensure interoperability
       - Little, but preferably no, modification to the CCM specification
       - Scalable communication between components by having several data flows between them (sketched after this slide)
     [Figure: designer's view, HPC Component A invoking HPC Component B; implementation view, parallel components A and B each made of several SPMD processes]
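A small, self-contained sketch of the "several data flows" constraint: if component A runs on m SPMD processes and component B on n, each sender can compute which receivers its block overlaps and open a direct flow to each, instead of serializing the whole argument through one proxy. The block layout and the printout are illustrative; this is not the GridCCM runtime itself.

    #include <algorithm>
    #include <cstdio>

    // Rows [lo, hi) owned by process `rank` out of `procs`, block layout.
    static void block_range(int rows, int procs, int rank, int* lo, int* hi) {
        int base = rows / procs, extra = rows % procs;
        *lo = rank * base + std::min(rank, extra);
        *hi = *lo + base + (rank < extra ? 1 : 0);
    }

    int main() {
        const int rows = 1000, m = 4, n = 3;  // A has 4 processes, B has 3
        for (int src = 0; src < m; ++src) {
            int slo, shi; block_range(rows, m, src, &slo, &shi);
            for (int dst = 0; dst < n; ++dst) {
                int dlo, dhi; block_range(rows, n, dst, &dlo, &dhi);
                int lo = std::max(slo, dlo), hi = std::min(shi, dhi);
                if (lo < hi)  // this pair exchanges rows [lo, hi) directly
                    std::printf("A[%d] -> B[%d]: rows [%d, %d)\n", src, dst, lo, hi);
            }
        }
        return 0;
    }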

  10. GridCCM
      [Figure: a parallel component of type A, made of SPMD processes Comp. A-0 … Comp. A-4, exposing a to_client port and a to_server port]
      IDL:
        interface Interfaces1 {
          void example(in Matrix mat);
        };
        …
        component A {
          provides Interfaces1 to_client;
          uses Interfaces2 to_server;
        };
      XML (parallelism description):
        Component: A
        Port: to_client
        Name: Interfaces1.example
        Type: Parallel
        Argument1: *, bloc
        …
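What the XML descriptor above could mean at run time, as a hypothetical client-side view: the caller still invokes example() once, and a parallel stub cuts the Matrix argument into row blocs ("Argument1: *, bloc"), one per SPMD process of A. The stub class and Matrix type below are stand-ins, not generated GridCCM code.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;  // stand-in type

    // Stand-in for the stub generated from interface Interfaces1.
    struct Interfaces1 {
        virtual ~Interfaces1() = default;
        virtual void example(const Matrix& mat) = 0;
    };

    // Hypothetical parallel stub: forwards one row bloc per SPMD process.
    class ParallelInterfaces1Stub : public Interfaces1 {
        std::size_t server_procs_;
    public:
        explicit ParallelInterfaces1Stub(std::size_t n) : server_procs_(n) {}
        void example(const Matrix& mat) override {
            std::size_t rows = mat.size();
            std::size_t per = (rows + server_procs_ - 1) / server_procs_;
            for (std::size_t p = 0; p < server_procs_; ++p) {
                std::size_t lo = std::min(rows, p * per);
                std::size_t hi = std::min(rows, lo + per);
                Matrix bloc(mat.begin() + lo, mat.begin() + hi);
                // send `bloc` to SPMD process p of component A
                // (transport through the ORB elided)
                (void)bloc;
            }
        }
    };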

  11. Runtime support for a grid-aware component model
      - Main goals for such a runtime:
        - Support several communication runtimes/middleware at the same time, e.g. a parallel runtime (MPI) and a distributed middleware (CORBA), as in GridCCM
        - Underlying networking technologies are not exposed to the applications
        - Independence from the networking interfaces
      - Let's take a simple example (reproduced in the sketch after this slide):
        - MPI and CORBA using the same protocol/network (TCP/IP, Ethernet)
        - MPI within a GridCCM component, CORBA between GridCCM components
        - The two libraries are linked together with the component code: does it work?
      - Extracted from a mailing list:
        From: ------- -------- <-.-----@n...>
        Date: Mon May 27, 2002 10:04 pm
        Subject: [mico-devel] mico with mpich
        "I am trying to run a Mico program in parallel using mpich. When calling CORBA::ORB_init(argc, argv) it seems to core dump. Does anyone have experience in running mico and mpich at the same time?"
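A minimal reproduction of the scenario in that message, assuming the standard CORBA C++ mapping and the MPI C API (the exact Mico header name may differ): both libraries are initialized in the same process, each wanting to own sockets, signals and threads. That resource conflict is what PadicoTM (next slide) is designed to arbitrate.

    #include <mpi.h>
    #include <CORBA.h>  // Mico-style header; the name differs across ORBs

    int main(int argc, char** argv) {
        // mpich initializes and claims its communication resources...
        MPI_Init(&argc, &argv);
        // ...then Mico does the same; with both libraries competing for
        // the network, this call is where the quoted message reports a
        // core dump.
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

        // MPI inside the component, CORBA between components (elided).

        orb->destroy();
        MPI_Finalize();
        return 0;
    }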

  12. PadicoTM architecture overview
      - Provides a set of personalities (BSD Socket, AIO, Fast Message, Madeleine, …) to ease the porting of existing middleware: MPICH, CORBA ORBs (OmniORB, MICO, Orbacus, Orbix/E), the Mome DSM, the Kaffe JVM, CERTI (HLA); a hypothetical personality sketch follows this slide
      - The internal engine controls access to the network and the scheduling of threads: arbitration, multiplexing, selection, …
      - Low-level communication layer based on Madeleine: portability across networks (TCP, Myrinet, SCI), available on a large number of networking technologies
      - I/O-aware multi-threading associated with Marcel (multithreading)
      [Figure: PadicoTM stack: HLA, DSM, MPI, CORBA and JVM services on a personality layer, above the PadicoTM core (internal engine), Madeleine and Marcel]
      This work is supported by the Incentive Concerted Action "GRID" (ACI GRID) of the French Ministry of Research.
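A loudly hypothetical sketch of the personality idea (every name below is invented, not the PadicoTM API): middleware is ported by mapping the API it expects, here a BSD-socket-like read/write, onto a single arbitrated channel owned by the internal engine, instead of letting each middleware open the network directly.

    #include <cstddef>

    // The single multiplexed channel arbitrated by the internal engine.
    struct CoreChannel {
        virtual ~CoreChannel() = default;
        virtual void send(const void* buf, std::size_t len) = 0;
        virtual std::size_t recv(void* buf, std::size_t len) = 0;
    };

    // A BSD-socket-like personality: socket-based middleware (an ORB,
    // for instance) keeps calling read/write-shaped functions, which
    // now go through the arbitrated core instead of a private socket.
    class SocketPersonality {
        CoreChannel& ch_;
    public:
        explicit SocketPersonality(CoreChannel& ch) : ch_(ch) {}
        long write(const void* buf, std::size_t n) {
            ch_.send(buf, n);
            return static_cast<long>(n);
        }
        long read(void* buf, std::size_t n) {
            return static_cast<long>(ch_.recv(buf, n));
        }
    };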
