SpECTRE: Toward simulations of binary black hole mergers using Charm++


1. SpECTRE: Toward simulations of binary black hole mergers using Charm++
   François Hébert @ Caltech
   for the Simulating eXtreme Spacetimes (SXS) Collaboration
   Charm++ Workshop, Oct 20 2020

2. Outline
   1. Role of binary merger simulations
   2. Current methods and challenges
   3. SpECTRE: towards improved algorithms and scalability
   4. Preliminary binary BH results
   5. Load-balancing with Charm++

3. Gravitational waves
   LIGO/Virgo detect gravitational waves from merging binary BHs (and NSs)
   Simulation waveforms enable
   ⊲ detection of weak signals
   ⊲ characterization of the sources
   Future detectors will need significantly more accurate waveforms
   (Credit: LIGO/Caltech/MIT)

4. Modeling relativistic matter
   Recent observations
   ⊲ merging binary NSs
   ⊲ accretion around a supermassive BH
   Simulations provide models for
   ⊲ matter dynamics
   ⊲ heavy-element creation
   ⊲ electromagnetic spectra
   Simulations are expensive and struggle to reach the desired accuracy
   (Credit: Event Horizon Telescope Collaboration)

5. A binary BH simulation
   (Credit: N. Fischer/SXS/AEI)

6. A binary NS simulation shortly after merger
   (Credit: NASA)

7. Equations to solve
   Many coupled PDEs
   ⊲ hyperbolic equations: ∂_t U + ∂_i F^i(U) + B^i · ∂_i U = S(U)
   Complicating features
   ⊲ Einstein's equations:
     — choice of coordinates
     — singularity inside BH
   ⊲ GRMHD:
     — turbulence & shocks
     — neutrinos, nuclear reactions, ...

8. Solving the PDEs — current methods
   Finite volume/difference methods
   ⊲ represent the solution with values at points
   ⊲ overlapping Cartesian grids
   ⊲ shock-capturing schemes
   ⊲ polynomial convergence
   ⊲ "ghost zone" data from neighbors
   Used in most binary BH simulations and in all matter simulations
   (Figure: ghost zones)
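
A minimal sketch of how ghost-zone data enters such an update (illustrative only, not SpECTRE or SpEC code; the function and its arguments are hypothetical), for the 1D advection equation ∂_t u + a ∂_x u = 0 with a first-order upwind flux:

    #include <cstddef>
    #include <vector>

    // Advance the cell averages `u` by one forward-Euler step of
    // du/dt + a du/dx = 0 (advection speed a > 0) with an upwind flux.
    // `left_ghost` is the single ghost-zone value received from the
    // neighboring subdomain.
    std::vector<double> fv_step(const std::vector<double>& u, double left_ghost,
                                double a, double dt, double dx) {
      std::vector<double> u_new(u.size());
      for (std::size_t i = 0; i < u.size(); ++i) {
        // For the first cell, the upwind neighbor is the ghost zone.
        const double u_left = (i == 0) ? left_ghost : u[i - 1];
        u_new[i] = u[i] - a * dt / dx * (u[i] - u_left);
      }
      return u_new;
    }

Higher-order shock-capturing schemes widen the stencil and therefore need several ghost zones per side, but the communication pattern with neighboring subdomains is the same.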

9. Solving the PDEs — current methods
   Spectral methods
   ⊲ represent the solution with basis functions
   ⊲ geometrically-adapted grids
   ⊲ smooth solutions only
   ⊲ exponential convergence
   ⊲ boundary data (fluxes) from neighbors
   State of the art for binary BH simulations
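
As a schematic reminder of what "exponential convergence" means here (standard spectral-methods material, not specific to SpEC), the solution on a grid is expanded in basis functions, e.g. Chebyshev polynomials T_k:

    u(x) ≈ Σ_{k=0}^{N} c_k T_k(x)

For smooth u the coefficients c_k decay exponentially with k, so the truncation error falls off exponentially as the number of basis functions N grows; for non-smooth solutions this decay is lost, which is why shocks require different methods.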

10. Parallelism — current methods
    MPI + some threading
    ⊲ finite volume/difference codes scale to ∼10,000 cores
    ⊲ Spectral Einstein Code (SpEC)
      — ∼1 spectral element per core
      — ∼100,000 FV cells per core
      — scales to ∼50 cores
    Simulations take time
    ⊲ binary BH: ∼1 week
    ⊲ binary NS: ∼1 month

11. SpECTRE
    SpECTRE: a next-generation code for relativistic astrophysics
    ⊲ discontinuous Galerkin
    ⊲ task-based parallelism
    ⊲ github.com/sxs-collaboration/spectre
    This talk
    ⊲ methods for binary BHs
    ⊲ preliminary binary BH results
    ⊲ load balancing with Charm++
    Not in this talk: improving hydrodynamics algorithms

12. Discontinuous Galerkin
    ⊲ generalized spectral method
      — exponential convergence for smooth solutions
      — fall back to shock-capturing schemes where needed
    ⊲ geometric flexibility
    ⊲ nearest-neighbor boundary communication
    ⊲ AMR and local timestepping
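
For orientation, this is the textbook semi-discrete DG weak form for the flux-conservative part of the system on slide 7 (dropping the non-conservative B^i · ∂_i U term for brevity; this is generic DG, not SpECTRE's exact discretization). On an element Ω_k, for each test function φ:

    ∫_{Ω_k} φ ∂_t U dV = ∫_{Ω_k} (∂_i φ) F^i(U) dV − ∮_{∂Ω_k} φ n_i F^{i,*} dA + ∫_{Ω_k} φ S(U) dV

The numerical flux F^{i,*} on a face is built only from the two elements sharing that face, which is why DG needs only nearest-neighbor boundary communication.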

13. Code test: single BH
    (Credit: G. Lovelace)

14. Code test: code scaling
    Scaling on Blue Waters (NCSA, UIUC)
    ⊲ green = strong scaling, fixed problem size
    ⊲ blue = weak scaling, proportional problem size
    (*) Measurements were made with a hydrodynamics evolution and predate an infrastructure rewrite in SpECTRE

15. Towards a binary BH evolution
    ⊲ initial data
      — initial guess + solve elliptic constraint equations
      — in development
      — for now, use SpEC initial data
    ⊲ PDE solver (discontinuous Galerkin + time stepper)
    ⊲ strategy to keep the singularities off the grid

16. Keeping the singularities off the grid
    Excision
    ⊲ cut out BH interior
    ⊲ move excised regions with BH orbit
    Control system
    ⊲ measures BH positions and shapes
    ⊲ updates time-dependent mappings to keep excised regions inside the BH
    Time derivatives gain moving-mesh terms
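
Schematically, the moving-mesh terms arise as follows (a generic statement about time-dependent maps, not SpECTRE's exact formulation): if the grid coordinates x̄ are related to the inertial coordinates x by a time-dependent map with grid velocity v^i = ∂_t x^i |_x̄, then by the chain rule

    ∂_t U |_x̄ = ∂_t U |_x + v^i ∂_i U

so the equations evolved on the moving grid gain advection-like terms proportional to the grid velocity.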

17. Towards a binary BH evolution
    ⊲ initial data
      — initial guess + solve elliptic constraint equations
      — in development
      — for now, use SpEC initial data
    ⊲ PDE solver (discontinuous Galerkin + time stepper)
    ⊲ strategy to keep the singularities off the grid
      — in development
      — for now, use moving-mesh data from SpEC

18. Binary black hole evolution
    Movie shows an equatorial cut
    ⊲ colored by the lapse: the spacetime metric component associated with the flow of time
    ⊲ manually excised regions
    ⊲ BHs follow the excision regions for many orbits
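
For reference, in the standard 3+1 decomposition of the spacetime metric (textbook material, not specific to this simulation) the lapse α appears as

    ds² = −α² dt² + γ_ij (dx^i + β^i dt)(dx^j + β^j dt)

so α measures how much proper time elapses per unit coordinate time for an observer moving normal to the spatial slices; with the gauge choices commonly used in BH evolutions the lapse becomes small near the BHs, which is typically what such a coloring highlights.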

19. SpECTRE's use of Charm++
    SpECTRE components
    ⊲ DG elements = array chares
    ⊲ data processing (IO, interpolations) = group and nodegroup chares
    ⊲ measuring a BH position and shape = singleton chare
    ⊲ computing gravitational waves = singleton chare
    The evolution remains roughly in sync
    ⊲ the PDE structure imposes causality
    ⊲ efficiency requires load balance
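
A minimal sketch of how such a component layout looks on the C++ side (illustrative names only, not SpECTRE's actual classes; the chare types are assumed to be declared in a .ci interface file that is not shown):

    #include "charm++.h"
    #include "components.decl.h"   // hypothetical generated header

    // Proxies for the component types listed above.
    CProxy_DgElement dg_elements;        // chare array: one chare per DG element
    CProxy_DataWriter data_writers;      // group: one per PE, handles IO/interpolation
    CProxy_VolumeInterpolator interps;   // nodegroup: one per node
    CProxy_HorizonFinder horizon_a;      // singleton chares: one per BH
    CProxy_HorizonFinder horizon_b;
    CProxy_WaveExtractor wave_extractor; // singleton: computes gravitational waves

    // Would be called once, e.g. from the mainchare, at program startup.
    void create_components(const int number_of_dg_elements) {
      dg_elements = CProxy_DgElement::ckNew(number_of_dg_elements);
      data_writers = CProxy_DataWriter::ckNew();
      interps = CProxy_VolumeInterpolator::ckNew();
      horizon_a = CProxy_HorizonFinder::ckNew();
      horizon_b = CProxy_HorizonFinder::ckNew();
      wave_extractor = CProxy_WaveExtractor::ckNew();
    }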

20. Load balancing in SpECTRE
    Initial questions
    ⊲ given a bad distribution of chares to nodes, can the LB improve it?
    ⊲ given a good distribution (e.g., a space-filling curve), will the LB preserve it?
    Future work: balancing load and communications
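
To make "space-filling curve" concrete, here is one common way (a hypothetical sketch, not SpECTRE's actual distribution code) to lay out elements along a Morton (Z-order) curve and hand contiguous chunks of the curve to processors, so spatially nearby elements tend to share a processor:

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Interleave the low 21 bits of x, y, z into a 63-bit Morton index.
    std::uint64_t morton_index(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
      const auto spread = [](std::uint64_t v) {
        v &= 0x1fffff;  // keep 21 bits
        v = (v | v << 32) & 0x001f00000000ffff;
        v = (v | v << 16) & 0x001f0000ff0000ff;
        v = (v | v << 8) & 0x100f00f00f00f00f;
        v = (v | v << 4) & 0x10c30c30c30c30c3;
        v = (v | v << 2) & 0x1249249249249249;
        return v;
      };
      return spread(x) | (spread(y) << 1) | (spread(z) << 2);
    }

    // Sort elements (given by integer block coordinates) along the curve and
    // assign equal-size contiguous chunks to successive processors.
    std::vector<int> assign_to_procs(
        const std::vector<std::array<std::uint32_t, 3>>& coords, int num_procs) {
      std::vector<std::size_t> order(coords.size());
      std::iota(order.begin(), order.end(), std::size_t{0});
      std::sort(order.begin(), order.end(), [&coords](std::size_t a, std::size_t b) {
        return morton_index(coords[a][0], coords[a][1], coords[a][2]) <
               morton_index(coords[b][0], coords[b][1], coords[b][2]);
      });
      std::vector<int> proc_of(coords.size());
      const std::size_t chunk = (coords.size() + num_procs - 1) / num_procs;
      for (std::size_t rank = 0; rank < order.size(); ++rank) {
        proc_of[order[rank]] = static_cast<int>(rank / chunk);
      }
      return proc_of;
    }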

21. Load balancing implementation
    Initial implementation:
    ⊲ add global synchronizations every N timesteps
    ⊲ call AtSync()
    ⊲ resume timestepping from ResumeFromSync()
    ⊲ update registration in pup::er calls
      — array de-registers with group when packing
      — re-registers when unpacking
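
A minimal sketch of those hooks for a 1D chare-array element (illustrative names, not SpECTRE's implementation; a matching .ci file declaring DgElement and its entry methods is assumed but not shown):

    #include <vector>
    #include "dg_element.decl.h"   // hypothetical generated header

    class DgElement : public CBase_DgElement {
     public:
      DgElement() { usesAtSync = true; }    // opt in to AtSync-based load balancing
      explicit DgElement(CkMigrateMessage* /*msg*/) {}

      void take_time_step() {  // entry method
        // ... advance this element's data by one step ...
        ++step_;
        if (step_ % steps_between_lb_ == 0) {
          AtSync();  // global synchronization point: hand control to the LB
        } else {
          thisProxy[thisIndex].take_time_step();
        }
      }

      // Called by the runtime, possibly on a new processor, once load
      // balancing finishes; timestepping resumes from here.
      void ResumeFromSync() override { thisProxy[thisIndex].take_time_step(); }

      void pup(PUP::er& p) override {
        CBase_DgElement::pup(p);
        if (p.isPacking()) {
          // de-register this element with its local group chare before migrating
        }
        p | step_;
        p | evolved_data_;
        if (p.isUnpacking()) {
          // re-register with the group chare on the destination processor
        }
      }

     private:
      int step_ = 0;
      int steps_between_lb_ = 100;           // illustrative value of "N"
      std::vector<double> evolved_data_;
    };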

22. Load balancing results
    A small test evolution
    ⊲ 1024 array chares on 2 nodes
    ⊲ ∼25 chares per proc
    Best LB is within 20% of optimal
    (Figure: wallclock time [s] vs. the fraction of communications that are inter-node (1, 1/2, 1/4, 1/8), for the Dummy, Comm, GreedyComm, GraphBFT, RecBipart, and RecBipart + Dummy load balancers)

23. Load balancing results
    Slowdown with larger problem size
    ⊲ increase problem size and procs
    Ongoing investigation
    ⊲ normal scaling with graph size?
    ⊲ is this Charm++ issue #2060?
    ⊲ SpECTRE performance
    (Figure: "Cost" vs "Time"; time per timestep [seconds] vs. LB count, for RecBipart on 2, 4, and 8 nodes and for RecBipart + Dummy on 8 nodes)

24. Checkpoint-restart results
    Initial implementation: full run restarted from a checkpoint
    ⊲ call CkStartCheckpoint() from 1 global synchronization point
    Works on same number of nodes
    ⊲ future work: generalize
    (Figure: error vs. time, and change after restart vs. time)
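
A minimal sketch of the checkpoint call (illustrative names, not SpECTRE's implementation; `mainProxy` and the entry method `Main::resume_evolution()` are assumed to be declared in a .ci file that is not shown):

    #include "main.decl.h"   // hypothetical generated header for the mainchare

    extern /* readonly */ CProxy_Main mainProxy;

    void write_checkpoint() {
      // Reached via the single global synchronization point mentioned above,
      // so all chares are quiescent when the checkpoint is written.
      const CkCallback after_checkpoint(CkIndex_Main::resume_evolution(), mainProxy);
      // Dumps the program state to "spectre_checkpoint/"; a later run launched
      // with "+restart spectre_checkpoint" on the same number of nodes resumes
      // from this point.
      CkStartCheckpoint("spectre_checkpoint", after_checkpoint);
    }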

25. Wishlist after initial experiments
    LB clarifications
    ⊲ when to use which LB?
    ⊲ how does each LB make its decisions? how does it scale with graph complexity?
    Checkpoint-restart clarifications
    ⊲ what is the order of initialization on restart?
    ⊲ can group-chare dependencies from program startup be enforced on restart?
    Feature wishlist
    ⊲ LB based on a space-filling curve?
    ⊲ a checkpoint- vs. migration-aware pup::er would help optimize packing
      — avoid checkpointing caches to disk
      — tailor registration updates

26. Summary
    ⊲ Future observations motivate improved simulations of binary mergers
    ⊲ SpECTRE: improving algorithms and scalability
    ⊲ Binary BH simulations
    ⊲ Load balancing and checkpointing
