1. The INFN activities in the frame of the national strategy
Tommaso Boccali – INFN Pisa

2. A bit of History …
• INFN is the Italian National Research Institute coordinating (and funding) activities on Particle, Astro-particle, Nuclear, Theoretical and Applied Physics
• While not a primary INFN goal, (scientific) computing has grown into a necessary tool for the research field
• INFN has participated in or seeded many Italian computing activities over the last decades:
  - The harmonization of an Italian Research Network
  - The growth of an organic set of High Throughput Computing (HTC) sites
  - R&D activities in High Performance Computing (HPC)
  - The harmonization of access to computing resources via the GRID middleware, and now INDIGO-DataCloud
• What next for us?

3. INFN facilities on the territory
• INFN is nearly unique among Italian research institutes (and not only) for its strong decentralization over the territory: it has a presence wherever there is a (sizeable) Physics Department
  - INFN and academic personnel work door to door in the same building
• 3 national laboratories
• Specialized centers, among which one you should already know well: CNAF

4. The Network – GARR (Italy's NREN …)
• GARR was born in the '80s, in an attempt to harmonize scientific networks; most of its personnel was formerly INFN, and still is …
• Various technological steps since then; now, with GARR-X, dedicated fibers between INFN centers (and not only), at multiple 10 Gbps, multiple 100 Gbps soon
• More than 15,000 km of GARR-owned fibers
  - ~9,000 km of backbone
  - ~6,000 km of access links
• About 1,000 user sites interconnected, among which also schools, hospitals, …
• > 1 Tbps aggregated access capacity
• > 2 Tbps total backbone capacity
• 2x100 Gbps IP capacity to GÉANT
• Cross-border fibers with ARNES (Slovenia) and SWITCH (Switzerland)
• > 100 Gbps to the general Internet and to Internet Exchanges in Italy
• NOC and engineering are in-house, in Rome

5. INFN Scientific computing facilities
• (~) 1990-2000: each INFN facility had a small-sized computing center, handling everything from mail servers to the first scientific computing farms
• 2000+: consolidation on the WLCG hierarchy, with one Tier-1 center and 9 Tier-2 centers – we are still there, with evolutions
  - In WLCG, Italy provides around 10% of the total installation
• 2020+: transition to new models?

Per-site resources (as reported on the slide map):
  CNAF/Bologna (Tier-1): 21250 cores (221 kHS06), 22765 TB disk, 42000 TB tape, 80 Gb/s
  Bari (INFN and UNIBA): 13000 cores (130 kHS06), 5000 TB disk, 20 Gb/s
  Pisa: 12000 cores (125 kHS06), 2000 TB disk, 20 Gb/s
  Napoli (INFN and UNINA): 8440 cores (69 kHS06), 2805 TB disk, 20 Gb/s
  Padova/Legnaro: 5200 cores (55 kHS06), 3000 TB disk, 20 Gb/s
  Cosenza: 3500 cores (35 kHS06), 900 TB disk, 10 Gb/s
  Roma: 3172 cores (32 kHS06), 2160 TB disk, 10 Gb/s
  Catania: 3000 cores (30 kHS06), 1500 TB disk, 20 Gb/s
  Torino: 2500 cores (27 kHS06), 2500 TB disk, 10 Gb/s
  Milano: 2448 cores (23 kHS06), 1850 TB disk, 10 Gb/s
  Frascati: 2000 cores (20 kHS06), 1350 TB disk, 10 Gb/s
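To put the slide's numbers together, here is a minimal sketch (not part of the original presentation) that sums the per-site figures listed above. The site names and values are copied from the slide; the aggregation itself, and the use of Python, are illustrative only.

  # Aggregate the per-site resources reported on slide 5.
  # Figures are the slide's snapshot values; totals are computed here
  # only to give a feel for the overall scale of the INFN HTC infrastructure.
  SITES = {
      # site: (cores, kHS06, disk_TB, network_Gbps)
      "CNAF/Bologna (Tier-1)": (21250, 221, 22765, 80),
      "Bari (INFN+UNIBA)":     (13000, 130,  5000, 20),
      "Pisa":                  (12000, 125,  2000, 20),
      "Napoli (INFN+UNINA)":   ( 8440,  69,  2805, 20),
      "Padova/Legnaro":        ( 5200,  55,  3000, 20),
      "Cosenza":               ( 3500,  35,   900, 10),
      "Roma":                  ( 3172,  32,  2160, 10),
      "Catania":               ( 3000,  30,  1500, 20),
      "Torino":                ( 2500,  27,  2500, 10),
      "Milano":                ( 2448,  23,  1850, 10),
      "Frascati":              ( 2000,  20,  1350, 10),
  }

  total_cores = sum(v[0] for v in SITES.values())
  total_khs06 = sum(v[1] for v in SITES.values())
  total_disk_pb = sum(v[2] for v in SITES.values()) / 1000.0

  print(f"Total cores: {total_cores}")
  print(f"Total power: {total_khs06} kHS06")
  print(f"Total disk:  {total_disk_pb:.1f} PB (plus 42 PB of tape at CNAF)")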

6. INFN is not only HTC!
• While INFN does not deploy large-scale HPC centers (like PRACE), it has a long history of HPC R&D and operation
• APE project: from the late '80s (APE) to ~2005 (ApeNext), in-house developed machines (mostly) for Lattice QCD
  - At the time, APE was competitive with the most powerful machines for LQCD
• ExaNeSt: solutions for fast interconnects for HPC
• Human Brain Project: Wavescales (the brain during sleep / anesthesia)
• Imaging in Medical Physics with Big Data algorithms

7. EU Projects (past / present / future)
• Long history of participation in GRID projects: EDG, EGEE, EGI, EMI, WLCG, …
• Evolution from the GRID: EGI_Engage, INDIGO-DataCloud, IPCEI-HPC-BDA, EOSCpilot
• Ongoing projects: HNSciCloud, West-Life, EGI_Engage, INDIGO-DataCloud, EOSCpilot (INFRADEV-04), ExaNeSt, …
• Under evaluation: EOSC-HUB (EINFRA-12), DEEP-HybridDataCloud (EINFRA-21), XDC (EINFRA-21), ICARUS (INFRAIA-02), SCALE-UP Open DataCloud (ICT-16)

8. Collaboration with other Italian Research Institutes
• CINECA (PRACE Tier-0, a consortium of 6 research institutes + 70 universities)
  - MoU signed for an extensive collaboration
  - Realize a common infrastructure, with resource sharing and co-location
  - INFN is acquiring a sizeable share of the upgrade of the CINECA HPC Marconi system (already now at #12 on the Top500), for its Theoretical Physics use cases
  - INFN is planning to acquire a fraction of 2018+ CPU resources @ CINECA, while maintaining the storage in house
  - In general, CNAF+CINECA, given also their physical vicinity, constitute an example of HPC/HTC integration with beneficial consequences for the whole Italian research system
• INAF (Astrophysics)
  - Many projects in common (CTA is a clear example of a computing-demanding experiment)
  - Attempt to form a common infrastructure, sharing computing centers
• ASI (Italian Space Agency)
  - Mirror Copernicus Project + realization of a national HTC infrastructure for satellite data analysis
  - Bari and CNAF Cloud Infrastructures, evaluating INDIGO tools
• CNR, …

9. What is happening around us?
• INFN is part of a large scientific ecosystem, and participates in the never-ending process of defining the future modalities of access to, and provisioning of, scientific computing
• In Italy, our Tier-1 is close (~10 km) to an HPC PRACE Tier-0 (CINECA), and collaboration and integration of facilities is becoming more and more important
  - Planning for co-located resources and a Tb-level direct connection
• Other Italian HTC realities (although smaller) have resource deployments; attempts at national-level integration are ongoing
  - GARR-X Progress, ENEA, ReCaS, INAF, …
  - Formerly integrated via the Italian Grid Initiative (IGI) – a new attempt at concertation
• WLCG (++?)
  - CERN started a process via the Scientific Computing Forum for the evolution of LHC computing for 2025+
  - Evolution towards fewer big sites, O(10 MW); reduction of operation costs
  - Open to HPC by initial design
  - Open to commercial procurement by initial design
• HEP Software Foundation
  - Preparing a Community (driven) White Paper to serve as a baseline for High Energy Physics scientific computing in the next decade(s)
• At EU level, incentive towards a common research / industry / administration infrastructure

10. EOSC and IPCEI
• European Open Science Cloud (EOSC) – Goals:
  - Give Europe a global lead in scientific data infrastructures, to ensure that European scientists reap the full benefits of data-driven science
  - Develop a trusted, open environment for the scientific community for storing, sharing and re-using scientific data and results
  - Start by federating existing scientific data infrastructures, today scattered across disciplines and Member States
• European Data Infrastructure – Goals:
  - Deploy the underpinning super-computing capacity, the fast connectivity and the high-capacity cloud solutions they need
• IPCEI: an Important Project of Common European Interest on HPC and Big Data Enabled Applications
  - Signed by Luxembourg, France, Italy and Spain (2016)
  - Italy as a "system" participates with "The Italian Data Infrastructure", led by INFN
• EOSCpilot is the first related project (INFRADEV-04), aiming to:
  - Design and trial a stakeholder-driven governance framework
  - Contribute to the development of European open science policy and best practice
  - Develop demonstrators of integrated services and infrastructures in a number of scientific domains, showcasing interoperability and its benefits
  - Engage with a broad range of stakeholders, crossing borders and communities, to build trust and skills

11. An example of current/future directions
[Diagram, logical view: remote compute-node farms run tasks against the GPFS/TSM storage at the Tier-1, connected via a ~700 km, 20 Gbit/s dedicated L2 link (in production) and a ~10 km, O(Tbit/s) dedicated physical link (planned).]

12. An example of current/future directions
[Diagram, deployment status of the same setup: compute farms marked "in prod", "in (small) prod", "under test", "for 2018", plus a possible EGI Federated Cloud extension, connected to the GPFS/TSM storage via a ~1000 km, 20 Gbit/s dedicated L2 link (in production) and a ~10 km, O(Tbit/s) dedicated physical link (planned).]
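The two diagrams above boil down to a bandwidth budget: remote compute farms read data from the GPFS/TSM storage at the Tier-1 over a dedicated WAN link, while a planned O(Tbit/s) local link would serve co-located resources. The back-of-the-envelope sketch below (not from the presentation) uses the link speeds quoted in the diagrams; the dataset size and the 70% usable-bandwidth factor are illustrative assumptions, not measured values.

  # Rough transfer-time estimate for the links shown in slides 11-12.
  def transfer_time_hours(dataset_tb: float, link_gbps: float,
                          efficiency: float = 0.7) -> float:
      """Hours to move dataset_tb terabytes over a link_gbps Gbit/s link,
      assuming only a fraction `efficiency` of the bandwidth is usable."""
      bits = dataset_tb * 8e12               # TB -> bits
      usable_bps = link_gbps * 1e9 * efficiency
      return bits / usable_bps / 3600.0

  dataset_tb = 100  # hypothetical dataset staged to a remote farm
  for label, gbps in [("20 Gbit/s dedicated L2 link (remote farm)", 20),
                      ("~1 Tbit/s physical link (planned, co-located)", 1000)]:
      print(f"{label}: {transfer_time_hours(dataset_tb, gbps):.1f} h "
            f"for {dataset_tb} TB")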
