

1. The Titan Tools Experience
Michael J. Brim, Ph.D.
Computer Science Research, CSMD/NCCS
Petascale Tools Workshop 2013
Madison, WI, July 15, 2013

2. Overview of Titan
• Cray XK7
  – 18,688+ compute nodes, each with:
    • 16-core AMD Opteron 6274 @ 2.2 GHz
    • 32 GB DDR3 RAM
    • NVIDIA Kepler K20X GPU: 14 SMX units with 6 GB RAM
  – Gemini interconnect: 3-D torus
• http://www.olcf.ornl.gov/titan/

3. My Roles at ORNL
• "Tools Developer" is my official job title
  – HPC debugging, performance, and system administration tools
  – Matrixed in CSMD and NCCS
    • CSMD: tools research
    • NCCS: evaluate/improve production tool offerings
• Titan (OLCF-3) acceptance: responsible for testing "Programming Environment and Tools"
• OLCF-4 Tools Lead

4. Tools for Titan
• In production use:
  – Debugging: Allinea DDT, gdb, cuda-gdb, STAT, Cray ATP
  – Performance: Cray PAT, TAU, VampirTrace, NVIDIA nvvp, CAPS HMPP Wizard
• In testing/evaluation: HPCToolkit, OpenSpeedShop, Score-P, Allinea MAP
• Allinea, CAPS, and TU Dresden
  – prior/ongoing funding for feature improvements, mostly GPU-related
  – on-site personnel to assist users and scientific computing liaisons

5. Performance Tools Study
• Three goals:
  1. develop familiarity with the tools (as a user, not a developer)
  2. evaluate scalability and usability on Titan
  3. identify areas for improvement
• Strategy:
  – follow tool use recommendations (per the Titan user guide)
  – test functionality on hybrid MPI+OpenMP and MPI+GPU apps
  – evaluate usability/scalability using production science apps
• My science app friends let me down ☹
  – settled for: dummy MPI (master-worker; a sketch of the pattern appears below) and Sequoia IRS v1.0
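The slides describe the dummy MPI app only as "master-worker," so the following is a minimal sketch of that pattern with a hypothetical task count and stand-in work; it is an illustration of the benchmark style, not the actual dummy_mpi source used in the study.

```c
/*
 * Minimal MPI master-worker sketch (illustrative only; NOT the actual
 * dummy_mpi code). Rank 0 deals out work units; workers "compute" and
 * return results until the master runs out of tasks.
 */
#include <mpi.h>
#include <stdio.h>

#define NUM_TASKS 1000   /* hypothetical number of work units */
#define TAG_WORK  1      /* message carries a task or a result */
#define TAG_DONE  2      /* master tells a worker to shut down */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {   /* master: deal tasks out, collect results */
        int next = 0, outstanding = 0, result;
        MPI_Status st;

        /* prime every worker with one task (or tell it to quit) */
        for (int w = 1; w < size; w++) {
            if (next < NUM_TASKS) {
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                next++; outstanding++;
            } else {
                MPI_Send(&next, 1, MPI_INT, w, TAG_DONE, MPI_COMM_WORLD);
            }
        }

        /* each returned result frees a worker for the next task */
        while (outstanding > 0) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD, &st);
            outstanding--;
            if (next < NUM_TASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next++; outstanding++;
            } else {
                MPI_Send(&result, 1, MPI_INT, st.MPI_SOURCE, TAG_DONE,
                         MPI_COMM_WORLD);
            }
        }
        printf("master: %d tasks completed\n", next);
    } else {           /* worker: compute until told to stop */
        int task, result;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_DONE)
                break;
            result = task * task;   /* stand-in for real work */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```

The appeal of such a benchmark for tool evaluation is presumably that its communication pattern is trivial and fully understood, so any surprising cost a tool reports can be attributed to the tool itself.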

6. Tool Configurations
• HPCToolkit 5.3.2 (svn head from June 20)
  – Profile: PAPI_L1_TCM:PAPI_TLB_TL, PAPI_TOT_CYC@50,000,000 (the @ value is the sampling period)
  – Trace: PAPI_L1_TCM:PAPI_TLB_TL; process fraction 10%
• OpenSpeedShop 2.0.2-u11
  – Profile: pcsamp
  – Trace: hwctime
• Cray PAT 6.0.1 (perftools)
  – automatic program analysis in two phases (pat, apa)
• TAU 2.22.2-openmp
  – Profile/Trace: PAPI_L1_TCM:PAPI_TLB_TL; MPI communication tracking
• VampirTrace 5.14.2-nogpu
  – compiler instrumentation (the default on Titan); tauinst currently broken
  – 512 MB trace limit per process, 128 MB trace buffer
  – Profile/Trace: PAPI_L1_TCM:PAPI_TLB_TL
(An illustrative snippet for these PAPI counter events follows.)
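Several of these configurations name the same two PAPI preset events, passed to the tools as the colon-separated list PAPI_L1_TCM:PAPI_TLB_TL. For readers unfamiliar with PAPI, here is a minimal self-contained sketch of reading those presets directly with the PAPI C API, assuming a platform where both presets are available; the tools sample these counters internally, so this only shows what the events measure, not how any of the tools work.

```c
/*
 * Illustrative PAPI usage for the preset events named in the tool
 * configurations (PAPI_L1_TCM, PAPI_TLB_TL). Error checks are mostly
 * elided for brevity.
 */
#include <papi.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int evset = PAPI_NULL;
    long long counts[2];

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI_library_init failed\n");
        return EXIT_FAILURE;
    }
    PAPI_create_eventset(&evset);
    PAPI_add_event(evset, PAPI_L1_TCM);  /* total L1 cache misses */
    PAPI_add_event(evset, PAPI_TLB_TL);  /* total TLB misses      */

    PAPI_start(evset);

    /* region of interest: a toy loop standing in for application code */
    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; i++)
        x += (double)i;

    PAPI_stop(evset, counts);
    printf("L1 total cache misses: %lld\n", counts[0]);
    printf("TLB total misses:      %lld\n", counts[1]);
    return 0;
}
```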

7. Tool Evaluations – Functionality
• dummy_mpi: simple master-worker MPI (C and C++ versions)
• CUDA SDK: various GPU apps

Tool                     GNU    Intel   PGI    Cray
HPCToolkit               ✔ ✔    ✔ ✔     ✔ ✔    ✔ ✔
OpenSpeedShop            ✔ ☐    ✔ ☐     ✔ ☐    ✔ ☐
Cray PAT                 ✔ ✔    ✔ ✔     ✔ ✔    ✔ ✔
TAU                      ✔ ☐    ✔ ☐     ✔ ☐    ✗
VampirTrace (compinst)   ✔ ✔    ✔ ✔     ✔ ✔    ✔ ✔

(Each cell shows two marks, presumably one per test app; ✔ = worked, ☐ = not working or untested, ✗ = unsupported.)

8. IRS Results – HPCToolkit
Storage Requirements (MiB):

PGI build    512   1728   4096    8000   (processes)
Profile       23    117    359     969
Trace          4     10     32     107

Cray build   512   1728   4096    8000
Profile      205  1,232  4,083  11,175
Trace         13     69    240     511

[Charts: execution overhead (Baseline / Profile / Trace) at each scale, PGI and Cray builds]

9. IRS Results – OpenSpeedShop
Storage Requirements (MiB): Profile and Trace rows at 512/1728/4096/8000 processes for PGI and Cray builds (values not recoverable from this transcript).
[Charts: execution overhead (Baseline / Profile / Trace) at each scale, PGI and Cray builds]

10. IRS Results – TAU
Storage Requirements (MiB): Profile and Trace rows at 512/1728/4096/8000 processes for PGI and Cray builds (values not recoverable from this transcript).
[Charts: execution overhead (Baseline / Profile / Trace) at each scale, PGI and Cray builds]

11. IRS Results – Cray PAT
Storage Requirements (MiB):

PGI build    512   1728   4096   8000
pat            ?      ?      ?     49
apa            8     27     68    TBD

Cray build   512   1728   4096   8000
pat            ?      ?      ?    206
apa           46    159    377    TBD

[Charts: execution overhead (Baseline / pat / apa) at each scale; some runs are annotated "script error"]

12. IRS Results – VampirTrace
Storage Requirements (MiB):

PGI build     512    1728    4096    8000
Profile       0.3     1.0     2.3     4.3
Trace       1,400   4,400  11,000  20,000

Cray build    512    1728    4096    8000
Profile       2.3     7.5      18      35
Trace       1,200     TBD     TBD     TBD   (runs exceeded 3, 4, and 5 hours, respectively)

[Charts: execution overhead (Baseline / Profile / Trace) at each scale, PGI and Cray builds]

13. IRS Results – Comparing Tools
Normalized execution overhead (run time relative to the uninstrumented baseline) for hpctk-prof, hpctk-trace, pat-pat, pat-apa, vt-prof, and vt-trace, PGI and Cray builds.
[Charts: log-scale bars; most configurations fall between 1.1x and 4.5x, with tracing outliers of 33.1x, 45.8x, and 49.5x]

14. IRS Results – Comparing Tools (continued)
A second comparison slide with the same legend (hpctk-prof, hpctk-trace, pat-pat, pat-apa, vt-prof, vt-trace), PGI and Cray builds.
[Charts: log-scale bars; recoverable values range from 1.1x to 22.1x in the top chart and 1.1x to 33.1x in the bottom chart]

15. dummy_mpi Results – Some Tools
• Baseline time: 2,812 seconds (~3/4 hour)
• Normalized execution overhead, Intel build: 1.38, 1.36, 1.34, 1.25, and 1.12 (apparently in legend order: hpctk-prof, hpctk-trace, oss-pcsamp, vt-prof, vt-trace)
• Much less tracing overhead than at small scale: the 512 MB per-process trace limit caps trace volume

16. Next Steps
• Work with production codes
  – LAMMPS: C++, MPI + CUDA
  – NUCCOR-J, CESM: Fortran, MPI + OpenMP
  – S3D: Fortran, MPI + OpenACC
• Compare information collected across tools
  – user/developer feedback on new insights gleaned (if any)
  – tool expert feedback
• Large-scale tests
  – at least half of Titan's nodes
  – up until things break or I run out of allocation

17. Next Steps – Part 2
• Identify areas for tool improvements
• Work with tool developers
  – user guidance
  – scalability, new feature development

18. Questions & Feedback
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
www.ornl.gov
