SLIDE 1 ENES – PRACE Board of Directors March 20th, 2013 Brussels
Outline
- 1. ENES and European climate models
- 2. ENES infrastructure strategy 2012-2022 & HPC needs
- 3. Collaboration between ENES and PRACE
SLIDE 2
ENES
European Network for Earth System modelling
http://enes.org
A network of European groups in climate/Earth system modelling
- Launched in 2001 (MoU); ca. 50 groups from the academic, public and industrial worlds
- Main focus: discuss strategy to accelerate progress in climate/Earth system modelling and understanding
- Several EU projects; collaboration with PRACE
IS-ENES: Infrastructure for ENES
- European projects: 2009-2013, 2013-2017
- Infrastructure: models & their environment, model data (ESGF), interface with the HPC ecosystem
- Users: climate modelling community, impact studies
SLIDE 3
State of the art in climate modelling: CMIP5 experiments
27 modelling groups, 58 models: 7 in Europe, 1 in Canada, 6 in the USA, 1 in Russia, 5 in China, 1 in Korea, 4 in Japan, 2 in Australia
SLIDE 4
Modelling the Earth’s climate system
each ESM represents > 1000 person-years of development: a strong legacy
Basic physical laws: based on the Navier-Stokes equations; conservation of energy and mass (air, water, carbon); parameterisations of clouds, radiation and subgrid-scale processes
[Figure: growing model complexity from IPCC (1990), IPCC (1995), IPCC (2001) and IPCC (2007) to today's ESMs]
SLIDE 5 Key science questions
- Q1. How predictable is climate on a range of timescales?
- Q2. What is the sensitivity of climate and how can we reduce uncertainties?
- Q3. What is needed to provide reliable predictions of regional climate changes?
- Q4. Can we model and understand glacial-interglacial cycles?
- Q5. Can we attribute observed signals to understand processes?
Drivers: science & society, from understanding to the development of “Climate Services”
Infrastructure Strategy for the European Earth System Modelling Community 2012-2022
Writing team:
- J. Mitchell, R. Budich, S. Joussaume, B. Lawrence & J. Marotzke
52 contributors from BE, CZ, DE, DK, FI, FR, IT, NO, SE, SP, UK
SLIDE 6
- Q1. How predictable is climate at different time scales?
Multi-model decadal predictions
5 European models; 10-year simulations started every 5 years, with ocean initial conditions
Courtesy of the COMBINE EU Project
HPC needs: data assimilation, large ensemble runs, resolution
[Figure: surface air temperature, 1960-2015, observations vs. multi-model ensemble]
SLIDE 7
- Q2. What is the sensitivity of climate and can we reduce uncertainties?
Feedbacks (e.g. clouds, carbon cycle), nonlinear behaviours
[Figure: temperature change for 2 × CO2 and its uncertainty due to cloud feedbacks (Dufresne & Bony); CMIP3 (AR4) range 2 to 4.5 °C, multi-model mean 3 °C; inter-model spread dominated by clouds]
HPC needs: ensemble experiments (e.g. process studies)
SLIDE 8
- Q3. What is needed to provide reliable predictions/projections of regional climate changes?
Summer precipitation 2005: simulations with the global climate model HadGEM3 at resolutions from 135 km to 12 km (PRACE UPSCALE project)
Courtesy of P.L. Vidale (NCAS) & M. Roberts (MO/HC)
HPC needs: spatial resolution; ensemble runs (internal variability, parameterisations)
[Figure: simulated summer precipitation at 135 km, 60 km, 40 km, 25 km, 17 km and 12 km resolution, compared with observations]
SLIDE 9
- Q4. Can we model and understand glacial-interglacial cycles and better constrain model sensitivity using the past?
[Figure: N2O, CO2, CH4 and temperature from ice cores over the observed past 600,000 years, timescale in thousands of years. Source: EPICA community members, Nature 2004]
Last Glacial Maximum (21,000 years BP): simulations (PMIP2), IPCC (2007), Braconnot et al. (Clim. Past, 2007)
HPC needs: duration (1,000 to 100,000 simulated years), complexity
SLIDE 10
- Q5. Can we attribute observed signals to understand processes? Globally? Extreme events?
[Figure: simulations with natural forcings only vs. simulations with natural & anthropogenic forcings, IPCC (2007)]
HPC needs: large ensemble runs; for extreme events: spatial resolution, fast availability of computing power
SLIDE 11 Infrastructure Strategy Roadmap
[Figure: roadmap of computing resources vs. resolution, complexity, Earth observations and data assimilation]
Needs for HPC and data storage (from Jim Kinter, the World Modelling Summit, 2008): ensemble ×10, plus factors of ×5-10, ×3, ≈ ×27 and ×100 for complexity, data assimilation and resolution, combining to ≈ ×10^6; duration ×10-100
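Read as a product of roughly independent factors (assuming the multipliers pair with ensemble size, complexity, data assimilation and resolution as listed above, which the slide does not state explicitly), the arithmetic is:

\[
10 \times (5\text{--}10) \times 3 \times 27 \times 100 \;\approx\; 4\times 10^{5} \ \text{to}\ 8\times 10^{5} \;\approx\; 10^{6}
\]

i.e. about a million times the 2008 baseline in computing and data storage, before the additional ×10-100 in simulated duration.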
SLIDE 12
Infrastructure Strategy Roadmap. Challenge: towards 1 km-scale global climate models
[Figure: NASA visualisation, with « CMIP5 » and « CORDEX » resolutions shown for comparison]
SLIDE 13
Infrastructure strategy for ENES for the next 10 years
Recommendations:
1) Access to world-class HPC for climate: at least « tailored » for climate, up to « dedicated »
2) Develop the next generation of climate models
3) Set up a data infrastructure (global and regional models) for a large range of users, including the impact community
4) Improve the physical network (e.g. link national archives)
5) Strengthen European expertise and networking
Input to IS-ENES2. ENES: towards a European Climate Infrastructure Initiative, a sustainable virtual laboratory
SLIDE 14 ENES & PRACE
Collaboration in PRACE IPs:
- PRACE 1IP, WP7: SARA (John Donners): collaboration on EC-EARTH high-resolution benchmarks; very limited interaction with the scientific community
- PRACE 2IP, WP8: ENES priorities (coupler, I/O, dynamical cores, fault tolerance) only partially followed & limited interactions
- PRACE 3IP: none yet
Projects on Tier-0 machines: UPSCALE & PULSATION, HIRESCLIM & SPRUCE
Involvement in PRACE organisation:
- Members of SSC: Sylvie Joussaume, Jose Baldasano/Antonio Navarra
- Member of PRACE User Forum: Pier Luigi Vidale
SLIDE 15 Projects on Tier-0 machines:
- UPSCALE: Pier Luigi Vidale (UK), Hermit
- PULSATION: Sébastien Masson (FR), Curie
- HIRESCLIM: Colin Jones (SE), MareNostrum3
- SPRUCE: Eric Maisonnave (FR), Curie
Feedback from PRACE projects
- Preparatory access phase too short: several components, tests and workflow need 0.5-1 yr
- Even if the code is ready on a similar machine, time is needed to adapt it to the Tier-0 system (e.g. different environment, availability of tools, test experiments); the rule that “CPU hours must be used regularly during the one-year-long project” is impossible to meet
- Need I/O, data storage and some data analysis easily accessible from Tier-0, with petascale data storage in the network neighbourhood
- Trained manpower is key
- Very difficult and limited with 1-year access; some long-term planning of machines and their use is needed
SLIDE 16 Key requirements (ENES Strategy and EOI for programme access)
- Data-intensive science: high-performance data I/O and storage (e.g. UPSCALE: 500 TB)
- Need multi-year access to scientifically develop and validate the model configuration used (not just porting)
- Recognise our specificities:
- Need both capability and capacity: several runs in parallel need to be recognised as massively parallel
- Several coupled codes
- Environment: support for multi-executables and mixed MPI/OpenMP (see the sketch below)
- Workflow: queue-system match, long initialisations, large number of jobs
World-class HPC for climate: « tailored for climate »
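To make the mixed MPI/OpenMP requirement concrete, a minimal hybrid sketch in C (illustrative only, not taken from any ENES model; a real climate code would do halo exchanges and physics inside the threaded region):

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Climate codes typically run several MPI ranks per node, each
       spawning OpenMP threads, so the Tier-0 environment must support
       (at least) funneled threading in its MPI library. */
    int provided, rank, nranks;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        /* Each thread would work on a slice of this rank's subdomain. */
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

The queue system then has to place, say, a coupled multi-executable job with the right ranks-per-node and threads-per-rank layout, which is exactly the environment support asked for above.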
SLIDE 17 Scalability issue
Scalability tests at resolution 25-30 km for the atmosphere
[E-W processors] x [N-S processors] x [openMP threads] (a decomposition sketch follows after the figure below)
P.L. Vidale (NCAS)
[Figure: scaling curves (x-axis: CPUs/1000) for HadGEM3 on the Cray XE6 HECToR (Joint Weather and Climate Research Programme), the IPSL atmosphere, EC-Earth AOGCM, and ARPEGE + simplified ocean, reaching ≈ 12K cores]
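A minimal sketch of the [E-W] x [N-S] processor decomposition behind these tests, using standard MPI Cartesian topology calls (illustrative, not the actual HadGEM3 or EC-Earth code; the periodicity choice assumes a lon-lat grid):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Factor the ranks into a balanced E-W x N-S processor grid. */
    int dims[2] = {0, 0};
    MPI_Dims_create(nranks, 2, dims);

    /* Assumed lon-lat grid: periodic E-W (longitude), bounded N-S. */
    int periods[2] = {1, 0};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* Each rank owns one lon-lat tile; halo exchange and OpenMP
       threading over the tile would follow in a real model. */
    int rank, coords[2];
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 2, coords);

    if (rank == 0)
        printf("decomposition: %d (E-W) x %d (N-S)\n", dims[0], dims[1]);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```

Scalability then depends on how small the tiles can get before halo-exchange and coupling costs dominate, which is why the curves above flatten near ≈ 12K cores at 25-30 km resolution.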
SLIDE 18
Scalability issue
- Revisit dynamical cores and numerics
- Parallel coupling
- Parallel I/O (see the sketch below)
Issue: develop computing/climate collaboration
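On parallel I/O: the idea is that every rank writes its subdomain directly into a single shared file instead of funnelling everything through rank 0. A minimal sketch with standard MPI-IO collective writes (file name and slice size are illustrative assumptions):

```c
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 1000   /* grid points owned by this rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank fills its own slice of a global field. */
    double *field = malloc(NLOCAL * sizeof *field);
    for (int i = 0; i < NLOCAL; i++)
        field[i] = (double)rank;

    /* All ranks write their slices at disjoint offsets, collectively,
       so the file system sees one large coordinated write. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "field.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * NLOCAL * sizeof(double);
    MPI_File_write_at_all(fh, offset, field, NLOCAL, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(field);
    MPI_Finalize();
    return 0;
}
```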
SLIDE 19 Contribution of PRACE to international experiments?
- Next international experiments: CMIP6/CORDEX6, still under definition
- Most experiments to be performed at national level
- Some « high-end » experiments to be performed on PRACE? High international visibility, coordinated European experiments
Additional requirements:
- Commitment known in advance
- Simulations prepared in advance, then no change of machine; typically a validation phase (2 years) & a production phase (2 years)
- Strict deadlines for submission of results
SLIDE 20
« The mission of PRACE is to enable high impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world class computing and data management resources and services through a peer review process. »
Conclusions
From the climate community:
- Increasing demand from society: more reliable predictions of climate change for adaptation
- Strong needs for HPC: scalability (increased resolution, number of experiments, complexity); duration of experiments remains a problem
- HPC facilities tailored to our needs: I/O, data storage, physical network; stability of the computing environment across development, evaluation and production runs
- Strong expectations that PRACE can serve our needs