Exploring Extreme Scalability in Scientific Applications

Mike Ashworth, Ian Bush, Charles Moulinec, Ilian Todorov
Computational Science & Engineering, STFC Daresbury Laboratory
m.ashworth@dl.ac.uk


  1. Exploring Extreme Scalability in Scientific Applications. Mike Ashworth, Ian Bush, Charles Moulinec, Ilian Todorov, Computational Science & Engineering, STFC Daresbury Laboratory. m.ashworth@dl.ac.uk, http://www.cse.scitech.ac.uk/. CUG 2008, Helsinki, 6th May 2008.

  2. Outline • Why? • How? • What? • Where?

  3. Outline • Why explore extreme scalability? • How are we doing this? • What have we found so far? • Where are we going next?

  4. UK National Services: timeline of the national HPC services, 1998 to 2013. EPCC: Cray T3D, T3E (with a technology upgrade). CSAR: T3E, Origin, Altix. HPCx: IBM p690, p690+, p5-575, p5+. HECToR: Cray XT4, XT4 QC, then ?. A future "Child of HECToR" service follows.

  5. HPC Strategy in the UK. HPC Strategy Committee: "… the UK should aim to achieve sustained Petascale performance as early as possible across a broad field of scientific applications, permitting the UK to remain internationally competitive in an increasingly broad set of high-end computing grand challenge problems." (from A Strategic Framework for High-End Computing)

  6. What will a Petascale system look like? Current indicators: • TOP500 #1: LLNL BlueGene/L, 0.478 Pflop/s, 212,992 processors, dual-core nodes • TACC Ranger, Sun Constellation Cluster, 0.504 Pflop/s peak, 62,976 processors, 4 x quad-core nodes • ORNL's current upgrade to Cray XT4, 0.250 Pflop/s, 45,016 processors, quad-core nodes • Japanese Petascale project: a smaller number of O(100) Gflop/s vector processors. The most likely solution is O(100,000) processors using multi-core components.

  7. Challenges at the Petascale. Scientific: • What new science can you do with 1000 Tflop/s? • Larger problems, multi-scale, multi-disciplinary. Technical: • How will existing codes scale to 10,000 or 100,000 processors? (scaling of time with processor count, time with problem size, memory with problem size) • Data management, incl. pre- and post-processing • Visualisation • Fault tolerance.
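
A minimal sketch of the first technical question above, assuming a simple time model with a fixed serial part, a perfectly parallel part and a slowly growing halo-exchange term. All three constants are invented for illustration and are not measurements from the project; the point is only how such a model extrapolates towards 100,000 processors.

    /* Toy strong-scaling model, illustrative only (not the Daresbury analysis):
     * time = serial part + parallel work / P + a slowly growing exchange term.
     * All constants below are invented for the sake of the example.           */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double t_serial = 1.0;    /* assumed non-parallelisable time (s) */
        const double t_par    = 1.0e4;  /* assumed perfectly parallel work (s) */
        const double t_halo   = 0.05;   /* assumed per-step exchange cost (s)  */

        for (int p = 1024; p <= 131072; p *= 2) {
            double t = t_serial + t_par / p + t_halo * log2((double)p);
            printf("P = %6d  T = %8.3f s  speed-up vs 1 proc = %8.1f\n",
                   p, t, (t_serial + t_par) / t);
        }
        return 0;
    }

Under any model of this shape the serial fraction and the communication term set the ceiling on strong scaling, which is why the slides that follow concentrate on measured communication fractions.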

  8. Daresbury Petascale project: • Scaling analysis of current codes • Performance analysis on O(10,000) procs • Forward-look prediction to O(100,000) procs • Optimisation of current algorithms • Development of new algorithms • Evaluation of alternative programming models.

  9. Machines

  10. Machines: • Cray XT4 HECToR: dual-core 2.8 GHz Opteron, 11,328 cores • IBM p5-575 HPCx: dual-core 1.7 GHz POWER5, HPS interconnect, 2,560 cores • Cray XT3 palu (CSCS): dual-core 2.6 GHz Opteron, 3,328 cores • IBM BlueGene/L jubl: dual-core 700 MHz PowerPC, 16,384 cores. See also "Application Performance on the UK's New HECToR Service", Fiona Reid et al., CUG 2008, Wednesday pm.

  11. CCLRC Daresbury Laboratory, home of HPCx, the 2560-CPU IBM POWER5 system.

  12. Applications

  13. Applications: • PDNS3D/SBLI: direct numerical simulation of turbulent flow • Code_Saturne: unstructured finite element CFD code • POLCOMS: coastal-ocean finite difference code • DL_POLY3: molecular dynamics code • CRYSTAL: first-principles periodic quantum chemistry code.

  14. What is a processor? A processor by any other name … From an application's view, a processor is what it has always been: • a short name for Central Processing Unit • something that runs a single instruction stream • something that runs an MPI task • something that runs a bunch of threads (OpenMP). A minimal hybrid example of the last two views is sketched below.
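
The distinction drawn above between "an MPI task" and "a bunch of OpenMP threads" is the standard hybrid programming picture. A minimal hybrid hello-world (not taken from any of the application codes discussed here) makes the two levels explicit.

    /* Minimal hybrid MPI + OpenMP hello: one "processor" in the MPI sense is a
     * task with its own rank; inside it, OpenMP spawns a bunch of threads.
     * Illustrative sketch only.                                               */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, ntasks;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    #pragma omp parallel
        printf("MPI task %d of %d, OpenMP thread %d of %d\n",
               rank, ntasks, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }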

  15. PDNS3D / SBLI

  16. DNS results of near-wall turbulent flow.

  17. 3D grid partitioning with halo cells. For a cubic subdomain of edge n, the calculation cost scales as n³ while the communication cost scales as n². Under strong scaling, increasing the processor count P decreases n, so communications will come to dominate (a worked estimate follows below).
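
The n³ versus n² argument can be made concrete with a few lines of arithmetic. The sketch below uses illustrative numbers only (the 600-point global grid simply mirrors the largest benchmark case on the next slide) to show that the halo-to-compute ratio of a cubic subdomain grows roughly as P^(1/3) as the processor count increases at fixed problem size.

    /* Surface-to-volume estimate for a cubic subdomain of edge n grid points:
     * compute work ~ n^3, one-cell halo traffic ~ 6 n^2.  For a fixed global
     * N^3 grid on P processes, n = N / P^(1/3), so the ratio grows ~ P^(1/3). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double N = 600.0;                  /* global points per dimension */

        for (int p = 512; p <= 262144; p *= 8) { /* perfect cubes: 8^3, 16^3 .. */
            double n     = N / cbrt((double)p);  /* local subdomain edge        */
            double ratio = 6.0 * n * n / (n * n * n);   /* halo / compute       */
            printf("P = %7d  n = %6.1f  halo/compute = %.3f\n", p, n, ratio);
        }
        return 0;
    }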

  18. SBLI on Cray XT4: turbulent channel flow benchmark. Plot of performance (Mgrid-points*iterations/sec) against number of processors (up to 8192) for 360x360x360, 480x480x480 and 600x600x600 grids. Larger problems scale better.

  19. Percentage communications time from CrayPat. Plot of communications time (%) against number of processors (up to 6144) for the 360x360x360, 480x480x480 and 600x600x600 grids.

  20. Code_Saturne

  21. Code_Saturne performance. Plot of performance (arbitrary units) against number of processors (up to 8192) for 78 million and 120 million cell meshes.

  22. Code_Saturne: • Unstructured CFD code from EDF • Run here with a structured mesh for an LES simulation of turbulent channel flow • METIS or Scotch used to partition the grid • Linear scaling performance to 8192 processors (no I/O) • Efficient parallel I/O is essential for this code • Memory for partitioning is an issue with very large meshes • Need to move to a parallel partitioner, but will the mesh quality then be maintained? (A partitioning sketch follows below.)
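
The slides do not show the partitioning call itself. As an illustration of the serial-partitioner step that becomes a memory bottleneck, the sketch below uses the METIS 5.x C API (a newer interface than the METIS release available at the time of the talk) on a toy four-cell graph; a real case would pass the mesh's full dual graph, and it is that whole-graph storage which costs the memory mentioned above.

    /* Sketch of a serial graph partition with METIS 5.x: the connectivity is
     * given in CSR form (xadj/adjncy) and METIS_PartGraphKway assigns each
     * cell to one of nparts subdomains.  Toy 4-cell chain for illustration.  */
    #include <metis.h>
    #include <stdio.h>

    int main(void)
    {
        idx_t nvtxs = 4, ncon = 1, nparts = 2, objval;
        idx_t xadj[]   = {0, 1, 3, 5, 6};       /* CSR row pointers            */
        idx_t adjncy[] = {1, 0, 2, 1, 3, 2};    /* neighbours of cells 0..3    */
        idx_t part[4];

        if (METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                NULL, NULL, NULL,          /* no weights       */
                                &nparts, NULL, NULL, NULL, /* default options  */
                                &objval, part) != METIS_OK)
            return 1;

        for (idx_t i = 0; i < nvtxs; i++)
            printf("cell %d -> subdomain %d\n", (int)i, (int)part[i]);
        return 0;
    }

A parallel partitioner of the kind the slide calls for (for example ParMETIS or PT-Scotch) distributes both the graph and the work of partitioning it, at the possible cost of partition quality.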

  23. POLCOMS

  24. High-Resolution Coastal Ocean Modelling. POLCOMS is the finest-resolution model to date to simulate the circulation, temperature and salinity of the Northwest European continental shelf, which is important for understanding the transport of nutrients, pollutants and dissolved carbon around shelf seas. We have worked with POL on coupling with ERSEM, WAM and CICE, on data assimilation, and on optimisation for HPC platforms. Figure: volume transport, Jul-Sep mean. Reference: "Advective controls on primary production in the stratified western Irish Sea: An eddy-resolving model study", J.T. Holt, R. Proctor, J.C. Blackford, J.I. Allen, M. Ashworth, Journal of Geophysical Research, 109, C05024, 2004.

  25. Coupled Marine Ecosystem Model. Schematic of the coupled system: a physical model, a pelagic ecosystem model (C, N, P, Si) and a benthic model, linked by temperature (°C) and nutrient exchanges, with forcing from irradiation, heat flux, cloud cover, wind stress, river inputs, sediments and the open boundary.

  26. POLCOMS HRCS performance (physics-only). Plot of performance (model days/day) against number of processors (up to 1536) on the Cray XT4 HECToR, the Cray XT3 palu and the IBM p5-575 HPCx.

  27. POLCOMS: • Structured-grid finite difference code from POL • Sophisticated advection scheme to represent fronts, eddies, etc. in the shelf seas • Halo-based partitioning, complicated by the land/sea issue (see the sketch below) • Performance is dependent on the partitioning • Known issue with communications imbalance; a new version is under test • Efficient parallel I/O is essential for this code.
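
To illustrate why the land/sea distribution complicates a halo-based partitioning, the sketch below (a synthetic diagonal coastline, not the POLCOMS partitioner or its real bathymetry) counts the wet points that fall into each block of a regular 2D decomposition and reports the resulting load imbalance; blocks that are mostly land do little useful work, which is why performance depends so strongly on the partitioning.

    /* Illustrative only: count wet points per block of a regular PX x PY
     * decomposition of a domain with a synthetic coastline, and report the
     * load imbalance (max work / mean work).                                */
    #include <stdio.h>

    #define NX 96
    #define NY 96
    #define PX 4
    #define PY 4

    int main(void)
    {
        int wet[PX][PY] = {{0}};

        for (int i = 0; i < NX; i++)
            for (int j = 0; j < NY; j++)
                if (i + j > NX / 2)                       /* synthetic sea mask */
                    wet[i / (NX / PX)][j / (NY / PY)]++;  /* owning subdomain   */

        int max = 0, total = 0;
        for (int p = 0; p < PX; p++)
            for (int q = 0; q < PY; q++) {
                total += wet[p][q];
                if (wet[p][q] > max) max = wet[p][q];
            }

        double mean = (double)total / (PX * PY);
        printf("wet points per subdomain: max %d, mean %.1f, imbalance %.2f\n",
               max, mean, max / mean);
        return 0;
    }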

  28. DL_POLY

  29. Migration from Replicated to Distributed Data: DL_POLY3 Coulomb energy evaluation. Conventional routines (e.g. FFTW) assume plane or column data distributions, so a global transpose of the data is required to complete the 3D FFT and additional costs are incurred re-organising the data from the natural block domain decomposition (planes versus blocks). An alternative FFT algorithm has been designed to reduce communication costs: • the 3D FFT is done as a series of 1D FFTs, each involving communications only between blocks in a given column • the data distribution matches that used for the rest of the DL_POLY energy routines • more data is transferred, but in far fewer messages • rather than all-to-all, the communications are column-wise only, giving a sparse communications structure.
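
A minimal sketch of the column-wise communication pattern described above, using MPI Cartesian topologies: MPI_Cart_sub carves a 3D process grid into one communicator per column of blocks, so the 1D FFTs along that direction only communicate within their own column. The process-grid shape is left to MPI_Dims_create, and the FFT itself and DL_POLY's actual data layout are omitted; this shows only the communicator construction the scheme relies on.

    /* Build per-column communicators from a 3D Cartesian process grid, so that
     * 1D FFTs along z communicate only between the blocks in one (x, y) column. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int nprocs, col_rank, col_size;
        int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
        int keep_z[3] = {0, 0, 1};               /* keep only the z direction   */
        MPI_Comm cart, col_z;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        MPI_Dims_create(nprocs, 3, dims);        /* e.g. 4 x 4 x 4 for 64 tasks */
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
        MPI_Cart_sub(cart, keep_z, &col_z);      /* one communicator per column */

        MPI_Comm_rank(col_z, &col_rank);
        MPI_Comm_size(col_z, &col_size);
        printf("this task is block %d of %d in its z column\n",
               col_rank, col_size);

        MPI_Comm_free(&col_z);
        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }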

  30. BlueGene/L times for a 14.6 million particle Gd2Zr2O7 system. Plot of seconds per evaluation against number of processors (up to 16384), broken down into MD total, Ewald k-space, link, other, van der Waals and Ewald real-space contributions.

  31. Cray XT4 and BlueGene/L performance. Plot of performance (arbitrary units) against number of processors (up to 16384) on the Cray XT4 HECToR and the IBM BlueGene/L jubl.
