
(Toward) Radiative transfer on AMR with GPUs Dominique Aubert - PowerPoint PPT Presentation



  1. (Toward) Radiative transfer on AMR with GPUs. Dominique Aubert, Université de Strasbourg. Austin, TX, 14.12.12

  2. A few words about GPUs
  • Cache and control replaced by calculation units
  • Large number of multiprocessors + scheduler
  • High load + independent + non-random memory access
  • x10 to x100 compared to CPU
  • High-level interfaces with CUDA (C, Nvidia), OpenCL (Khronos)
  • High-end GPUs ~1.5-2 kEuros, 4-6 GB RAM
  • Tianhe (Changsha, 7168 GPUs), Titan (Oak Ridge, 7000-18,000 GPUs by 2012), MareNostrum (???), in France: Titane (198 GPUs), Curie (268 GPUs)

  3. Principle of GPU programming with CUDA
  [Diagram: data transfer between host RAM and GPU RAM; blocks of threads on the GPU using shared memory]
  • If possible: independent & identical threads
  • Calculations + high arithmetic intensity = acceleration
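  A minimal, self-contained sketch of this host/GPU workflow (illustrative only, nothing from ATON): allocate buffers in host RAM and GPU RAM, transfer the data, launch blocks of identical, independent threads, transfer the result back.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* One independent, identical thread per array element. */
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);                 /* host RAM */
    for (int i = 0; i < n; i++) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);                             /* GPU RAM  */
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   /* data transfer host -> GPU */

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);       /* blocks of 256 threads     */

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   /* data transfer GPU -> host */
    printf("h[0] = %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}
```

  Acceleration only follows when the arithmetic done per transferred byte is high enough to hide the cost of the host/GPU data transfers.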

  4. 1. Cosmological Radiative Transfer

  5. Radiative transfer equations: explicit solver
  First 2 moments of the RT equations + variable Eddington tensor with M1 closure relation (Gonzalez et al. 2008, Aubert & Teyssier 2008, Rosdahl & Blaizot 2012):
  $$\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = S \quad\longrightarrow\quad \frac{U^{p+1}-U^{p}}{\Delta t} + \frac{\partial F(U^{p})}{\partial x} = S$$
  Explicit scheme: the CFL condition $\Delta t < \Delta x / c$ with c = 300 000 km/s means ~100 000 timesteps are required to cover reionization (down to z~5); with GPUs it's OK. (Aubert & Teyssier 2008, 2010)
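  As a rough illustration of why such an explicit scheme maps well onto a GPU, here is a 1D sketch of the update above with a global Lax-Friedrichs flux standing in for the actual M1 solver; the kernel name, flux choice and constants are placeholders, not ATON's code.

```cuda
#define CLIGHT 2.99792458e10f   /* speed of light in cm/s (the CFL wave speed) */

__device__ float flux(float u) { return CLIGHT * u; }   /* toy physical flux   */

/* One thread per cell: U_i^{p+1} = U_i^p - dt/dx (F_{i+1/2} - F_{i-1/2}) + dt S_i */
__global__ void explicit_step(const float *u, float *unew, const float *src,
                              float dt, float dx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;                   /* boundaries handled elsewhere */

    /* Global Lax-Friedrichs interface fluxes; stability requires dt < dx / CLIGHT. */
    float fR = 0.5f * (flux(u[i]) + flux(u[i + 1])) - 0.5f * CLIGHT * (u[i + 1] - u[i]);
    float fL = 0.5f * (flux(u[i - 1]) + flux(u[i])) - 0.5f * CLIGHT * (u[i] - u[i - 1]);

    unew[i] = u[i] - dt / dx * (fR - fL) + dt * src[i];
}
```

  Each cell only touches its direct neighbours and writes to a separate output array, so the ~100 000 steps consist of fully independent, identical threads.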

  6. Post-processed radiative transfer with ATON
  Inputs: gas density + sources; evolved fields: radiative energy, ionisation state, temperature.
  • UV+X radiative transport: conservative transport on a regular 3D grid, fixed & predictable number of operations, independent and contiguous calculations
  • H chemistry + heating: subcycled physics, (almost) fixed number of operations, independent and high load
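  A schematic host-side sketch of this splitting (kernel names and signatures are placeholders, not ATON's actual API; the kernels are left as stubs):

```cuda
#include <cuda_runtime.h>

/* Stubs standing in for the real solvers. */
__global__ void transport_kernel(float *egy, float *flux, const float *src,
                                 float dt, float dx, int n) { /* conservative transport */ }
__global__ void chemistry_kernel(float *egy, float *xion, float *temp,
                                 const float *nH, float dt, int n) { /* subcycled chemistry + heating */ }

/* One radiative step on the regular 3D grid: transport at the CFL time step,
   then per-cell subcycled hydrogen chemistry and photo-heating. */
void radiative_step(float *d_egy, float *d_flux, float *d_xion, float *d_temp,
                    const float *d_nH, const float *d_src,
                    float dt_cfl, float dx, int ncells)
{
    dim3 block(256), grid((ncells + 255) / 256);

    /* Fixed & predictable work per cell, contiguous memory accesses. */
    transport_kernel<<<grid, block>>>(d_egy, d_flux, d_src, dt_cfl, dx, ncells);

    /* Independent, compute-heavy work per cell, subcycled until dt_cfl is covered. */
    chemistry_kernel<<<grid, block>>>(d_egy, d_xion, d_temp, d_nH, dt_cfl, ncells);
}
```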

  7. Performances: GPUs vs CPUs
  [Plot: timings for grid sizes 64³, 128³, 192³, 256³ on a CPU (Opteron 2.7 GHz) and a GPU (8800 GTX); the GPU is ~x80 faster] (Aubert & Teyssier 2008, 2010)

  8. Multi-GPU with boundary layers (Aubert & Teyssier 2008, 2010)
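  A minimal sketch of such a boundary-layer exchange along one axis (buffer names and the layer layout are assumptions, not cudATON's code): each GPU copies its boundary layers to the host, they are swapped with the two neighbours over MPI, and the received ghost layers are uploaded back to GPU RAM.

```c
#include <mpi.h>

/* Swap boundary layers with the two neighbours along x. */
void exchange_x(const float *send_lo, const float *send_hi,  /* our boundary layers */
                float *recv_lo, float *recv_hi,              /* their ghost layers  */
                int layer_size, int left, int right, MPI_Comm comm)
{
    MPI_Request req[4];

    /* Post receives for the ghost layers coming from the two x-neighbours. */
    MPI_Irecv(recv_lo, layer_size, MPI_FLOAT, left,  0, comm, &req[0]);
    MPI_Irecv(recv_hi, layer_size, MPI_FLOAT, right, 1, comm, &req[1]);

    /* Send our own boundary layers to those neighbours (matching tags). */
    MPI_Isend((void *)send_lo, layer_size, MPI_FLOAT, left,  1, comm, &req[2]);
    MPI_Isend((void *)send_hi, layer_size, MPI_FLOAT, right, 0, comm, &req[3]);

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    /* The received layers are then copied back to GPU RAM before the next step. */
}
```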

  9. Applications: TRASH project (Transfert RAdiatif Sur Hydrodynamique)
  • Gas and source distribution from the Mare Nostrum Hydro simulation: 1024x1024x1024 cells + 2 refinement levels; self-consistent stellar particles used as sources
  • cudATON on TITANE-CCRT: 1024³ grid, Cartesian 8x8x2 domain decomposition (128 GPUs, S1070 servers, Infiniband DDR), ~60 000-180 000 time steps, dt ~10 000 yrs over 1 Gyr

  10. cudATON on TITANE-CCRT: 1024³ grid, Cartesian 8x8x2 domain decomposition (128 GPUs, S1070 servers, Infiniband DDR), ~60 000-180 000 time steps, dt ~10 000 yrs over 1 Gyr (Aubert & Teyssier 2010)
  Structure of the UV background at different resolutions and sub-grid models (Aubert & Teyssier, ApJ, 2010)

  11. Timings on Titane: communication ~10-15% of global time
  [Plots: timings for 512³ on 8 GPUs, 512³ on 64 GPUs, 1024³ on 64 GPUs and 1024³ on 128 GPUs] (Aubert & Teyssier, ApJ, 2010)

  12. Small-scale effects: 100 Mpc/h, 1024³ box; clumping C(δ) extracted from a 12.5 Mpc/h, 1024³ box
  [Maps: with subgrid clumping vs without subgrid clumping] (Aubert & Teyssier, ApJ, 2010)
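  For reference, the sub-grid clumping factor is usually defined by the standard expression below (the exact form used in the paper may differ in detail):

  $$ C = \frac{\langle n_{\rm H}^{2} \rangle}{\langle n_{\rm H} \rangle^{2}}, $$

  evaluated in the small high-resolution box as a function of overdensity δ and used to boost the recombination rate in the large-box cells, which cannot resolve the small-scale density fluctuations.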

  13. [Plots: J21 vs nH and ionised fraction x vs nH] (Aubert & Teyssier, ApJ, 2010)

  14. Residual neutral fraction and J21: ~100 runs at 1024³ resolution (Aubert & Teyssier, ApJ, 2010)

  15. Residual neutral fraction and J21: 100 Mpc/h, 1024³ (Aubert & Teyssier, ApJ, 2010)

  16. Application: Local Group reionisation (with P. Ocvirk). CLUES zoom on the Local Group; timing of the local reionisation? Ocvirk et al. 2012a,b (submitted + in prep.)

  17. Application: merger trees of HII regions during overlap (with J. Chardin). Chardin, Aubert & Ocvirk, A&A, 2012

  18. Large volumes for 21cm forecasts (with B. Semelin): Grand Challenge on Curie (CCRT-CEA), 256 GPUs, 2048x2048x2048 grid, 60 000 time steps in ~15 h. [Map: ionized fraction at z~10]

  19. RAMSES-RT (with T. Stranex & R. Teyssier, Zurich): RAMSES (dynamics) and ATON (RT) are coupled. The unigrid version will be used on Titan for the INCITE project. [Figure courtesy of T. Stranex]

  20. 3. Towards Multi-Fluid AMR

  21. N-Body, Radiative Transfer, Hydro: status
                 N-Body               Radiative Transfer             Hydro
  Status:        AMR+GPU+Multi ok     GPU+Multi ok                   AMR+GPU ok
  Speedup:       ~x10 w.r.t. CPU      x30-40? w.r.t. CPU             ~x15 w.r.t. CPU
  Reference:     Aubert et al. 2009   Aubert & Teyssier 2008, 2010
  EMMA project: 3 fluids coupled on an AMR structure with hardware acceleration, with e.g. GPUs

  22. Multi-GPU PM: 1.2 billion particles (1024³ real particles + 2x10⁸ ghosts), 8 s/timestep on 64 Teslas, with 25% spent in communications. With sort optimisation we may expect 6 s/timestep (communication ~40%); asynchronous comms?
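  To illustrate the kind of per-particle work involved, here is a toy 1D cloud-in-cell deposit kernel (not the actual multi-GPU PM code; names and layout are placeholders). Sorting particles by cell index makes neighbouring threads hit neighbouring cells, which is why a sort optimisation can cut the compute time.

```cuda
/* Toy 1D CIC mass deposit; assumes positions lie in [0, ncell*dx). */
__global__ void cic_deposit_1d(const float *xpart, float *rho,
                               float mass, float dx, int npart, int ncell)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= npart) return;

    float xc = xpart[p] / dx;              /* position in cell units        */
    int   i  = (int)floorf(xc);            /* left cell of the CIC stencil  */
    float w  = xc - (float)i;              /* weight of the right-hand cell */

    /* Deposit on the two nearest cells (periodic wrap). Atomic adds from
       particles sorted by cell are less contended and more cache-friendly. */
    atomicAdd(&rho[ i      % ncell], mass * (1.0f - w));
    atomicAdd(&rho[(i + 1) % ncell], mass * w);
}
```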

  23. Under heavy development

  24. Quartz
  • Written in C+CUDA+MPI
  • Parallel (space-filling curve + essential-tree domain decomposition)
  • AMR, with FTT data structure
  • N-Body + Hydro only (for the moment)
  • MG Poisson solver on GPU + MUSCL-Hancock Godunov hydro solver on GPU + data logistics on CPU
  • Hopefully will become EMMA (ElectroMagnetism and Mechanics on AMR) for gravity + hydro + radiation

  25. Fully Threaded Tree (Khokhlov 1997) (aka «Pointer Party»)
  [Diagram: cells linked by pointers, with attached particle lists; cf. ART (Kravtsov et al. 1997) and RAMSES (Teyssier 2001)]
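  A minimal C sketch of what the "pointer party" means in practice (field names are illustrative, not those of Quartz, ART or RAMSES): every cell carries explicit links to its parent, children, neighbours and particles, so the tree can be walked in any direction without recursion.

```c
struct particle;                       /* particle type, details omitted here   */

struct cell {
    struct cell     *parent;           /* coarser cell this one refines         */
    struct cell     *child;            /* first of the 8 children, NULL if leaf */
    struct cell     *neighbour[6];     /* same-level neighbours in +/- x, y, z  */
    struct cell     *next;             /* next cell of the same level           */
    struct particle *phead;            /* head of the linked particle list      */
    float            density;          /* CIC density                           */
    float            pot;              /* gravitational potential               */
    int              level;            /* refinement level                      */
};
```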

  26. Fully THREADED tree: in a lot of cases, the tree is explored horizontally, level by level (with some +/-1 level interactions at boundaries). Even CIC can be considered level by level.
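  Assuming each level keeps its cells chained through the next pointer of the sketch above, a horizontal, level-by-level sweep is just a double loop; this is schematic, not Quartz's actual traversal:

```c
/* Apply op() to every cell, one level at a time (coarse to fine). */
void sweep_levels(struct cell *levelhead[], int levelmax,
                  void (*op)(struct cell *))
{
    for (int l = 0; l <= levelmax; l++) {
        for (struct cell *c = levelhead[l]; c != NULL; c = c->next)
            op(c);                      /* per-cell work at level l */
        /* +/-1 level interactions at refinement boundaries would go here. */
    }
}
```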

  27. [Figures: multi-level grid, multi-level CIC density, potential (via relaxation)]

  28. «Vectorization»: the AMR tree is walked on the CPU to gather cell data into a flat vector, the flat vector is processed on the GPU, and the results are scattered back into the tree. This gather/scatter leads to a bottleneck; patch-based AMR may be more appropriate (see e.g. Schive et al. 2009).
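  A schematic of this gather / GPU compute / scatter cycle for one level, reusing the level lists of the FTT sketch above (names and the placeholder kernel are assumptions, not Quartz's API):

```cuda
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void solver_kernel(float *buf, int n) { /* per-cell GPU work (stub) */ }

void gpu_level_update(struct cell *levelhead, int ncell)
{
    /* 1. Gather: walk the tree on the CPU, pack one level into a flat vector. */
    float *h_buf = (float *)malloc(ncell * sizeof(float));
    int n = 0;
    for (struct cell *c = levelhead; c != NULL; c = c->next)
        h_buf[n++] = c->density;

    /* 2. Ship the flat vector to the GPU, run the kernel, bring it back. */
    float *d_buf;
    cudaMalloc(&d_buf, ncell * sizeof(float));
    cudaMemcpy(d_buf, h_buf, ncell * sizeof(float), cudaMemcpyHostToDevice);
    solver_kernel<<<(ncell + 255) / 256, 256>>>(d_buf, ncell);
    cudaMemcpy(h_buf, d_buf, ncell * sizeof(float), cudaMemcpyDeviceToHost);

    /* 3. Scatter: walk the tree again, write the results back into the cells. */
    n = 0;
    for (struct cell *c = levelhead; c != NULL; c = c->next)
        c->density = h_buf[n++];

    cudaFree(d_buf);
    free(h_buf);
    /* The two CPU-side tree walks are the bottleneck mentioned above. */
}
```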

  29. How do we vectorize?
  Two options for giving each GPU thread access to its neighbouring cells:
  • Storing neighbour values: coalesced accesses, but a large gather
  • Storing neighbour addresses: no gather, but ~non-coalesced accesses
  [Diagram: example cell values and indices illustrating the two layouts]
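  Two toy kernels contrasting the options (a simple three-point stencil, not the actual solver; all names are placeholders):

```cuda
/* (a) Neighbour VALUES pre-gathered on the CPU into extra arrays: reads are
   coalesced (thread i reads slot i of each array), but the gather is large. */
__global__ void update_from_values(const float *self, const float *left,
                                   const float *right, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 0.5f * self[i] + 0.25f * (left[i] + right[i]);
}

/* (b) Neighbour ADDRESSES (indices into the flat vector): little preparation,
   but the indirect loads are in general non-coalesced. */
__global__ void update_from_indices(const float *cells, const int *ileft,
                                    const int *iright, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 0.5f * cells[i] + 0.25f * (cells[ileft[i]] + cells[iright[i]]);
}
```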

  30. AMR issues with the explicit formulation
  [Diagram: hydro and radiation time steps at level L (coarse) and level L+1 (fine)]
  • Subcycling induces problematic inter-level interactions
  • It forces the hydro to be synchronized with the radiation
  • E.g. Rosdahl & Blaizot reduce the speed of light by 10-100 and synchronize the hydro on a small radiation timestep
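  A back-of-the-envelope reminder of why the two time steps clash (standard CFL estimates, not figures from the talk), with $\tilde c$ the adopted speed of light, $v$ the gas velocity and $c_s$ the sound speed:

  $$ \Delta t_{\rm rad} \lesssim \frac{\Delta x}{\tilde c}, \qquad \Delta t_{\rm hydro} \lesssim \frac{\Delta x}{|v| + c_s} \quad\Longrightarrow\quad \frac{\Delta t_{\rm hydro}}{\Delta t_{\rm rad}} \sim \frac{\tilde c}{|v| + c_s}. $$

  With the full speed of light this ratio is roughly 10³-10⁴ for typical gas velocities, so synchronizing the hydro on the radiation step is very expensive; reducing $\tilde c$ by 10-100 shrinks the ratio by the same factor and makes the synchronization affordable.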

  31. Current status: without optimizations, ~x10-15 (DP) compared to CPU for hydro. RT might kill it or increase it...
