slide-1
SLIDE 1

SpECTRE: Toward simulations of binary black hole mergers using Charm++

François Hébert @ Caltech, for the Simulating eXtreme Spacetimes (SXS) Collaboration. Charm++ Workshop, Oct 20, 2020

1 / 26

slide-2
SLIDE 2

Outline

  1. Role of binary merger simulations
  2. Current methods and challenges
  3. SpECTRE: towards improved algorithms and scalability
  4. Preliminary binary BH results
  5. Load-balancing with Charm++

2 / 26

slide-3
SLIDE 3

Gravitational waves

LIGO/Caltech/MIT

LIGO/Virgo detect gravitational waves from merging binary BHs (and NSs)

Simulation waveforms enable
⊲ detection of weak signals
⊲ characterization of sources

Future detectors will need significantly more accurate waveforms

3 / 26

slide-4
SLIDE 4

Modeling relativistic matter

Recent observations
⊲ merging binary NSs
⊲ accretion around a supermassive BH

Simulations provide models for
⊲ matter dynamics
⊲ heavy-element creation
⊲ electromagnetic spectra

Simulations are expensive and struggle to reach desired accuracy

Event Horizon Telescope Collaboration

4 / 26

slide-5
SLIDE 5

A binary BH simulation

  • N. Fischer/SXS/AEI

5 / 26

slide-6
SLIDE 6

A binary NS simulation shortly after merger

NASA

6 / 26

slide-7
SLIDE 7

Equations to solve

Many coupled PDEs
⊲ hyperbolic equations: ∂tU + ∂iFi(U) + Bi · ∂iU = S(U)

Complicating features
⊲ Einstein’s equations:
  — choice of coordinates
  — singularity inside BH
⊲ GRMHD:
  — turbulence & shocks
  — neutrinos, nuclear reactions, ...
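As a concrete instance of this first-order hyperbolic form (an illustrative example, not from the slides), scalar advection at constant velocity fits the template with trivial B and S:

```latex
% Scalar advection as an instance of
% \partial_t U + \partial_i F^i(U) + B^i \cdot \partial_i U = S(U):
U = u, \qquad F^i(U) = v^i\,u \quad (v^i\ \text{constant}), \qquad B^i = 0, \qquad S(U) = 0
\;\Longrightarrow\; \partial_t u + v^i \partial_i u = 0 .
```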

7 / 26

slide-8
SLIDE 8

Solving the PDEs — current methods

Finite volume/difference methods
⊲ represent solution with values at points
⊲ overlapping Cartesian grids
⊲ shock-capturing schemes
⊲ polynomial convergence
⊲ “ghost zone” data from neighbors

Used by most binary BH simulations and all matter simulations

[Figure: ghost zones]
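The ghost-zone pattern these methods rely on can be sketched in a few lines. This is an illustrative 1D stand-in, not SpECTRE or SpEC code: each subdomain carries one ghost cell per side, and neighbor data is copied directly rather than communicated over MPI.

```cpp
// Illustrative sketch (not SpECTRE code): a 1D finite-volume update in
// which each subdomain holds interior cells plus ghost zones that are
// refilled from its neighbors before every timestep.
#include <cassert>
#include <vector>

// One subdomain: interior cells at indices 1..n, ghosts at 0 and n+1.
struct Subdomain {
  std::vector<double> u;  // size = n_interior + 2
};

// Fill ghost zones from periodic neighbors (stand-in for an MPI exchange).
void exchange_ghosts(std::vector<Subdomain>& doms) {
  const int n = static_cast<int>(doms.size());
  for (int r = 0; r < n; ++r) {
    Subdomain& left = doms[(r + n - 1) % n];
    Subdomain& right = doms[(r + 1) % n];
    doms[r].u.front() = left.u[left.u.size() - 2];  // last interior of left
    doms[r].u.back() = right.u[1];                  // first interior of right
  }
}

// First-order upwind step for du/dt + v du/dx = 0 with v > 0.
void upwind_step(Subdomain& d, double v, double dt, double dx) {
  std::vector<double> next = d.u;
  for (std::size_t i = 1; i + 1 < d.u.size(); ++i) {
    next[i] = d.u[i] - v * dt / dx * (d.u[i] - d.u[i - 1]);
  }
  d.u = next;
}
```

On a periodic domain the flux differences telescope across subdomain boundaries, so the total integral of u is conserved, which makes a convenient correctness check.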

8 / 26

slide-9
SLIDE 9

Solving the PDEs — current methods

Spectral methods
⊲ represent solution with basis functions
⊲ geometrically-adapted grids
⊲ smooth solutions only
⊲ exponential convergence
⊲ boundary data from neighbors

State of the art for binary BH simulations

[Figure: fluxes]

9 / 26

slide-10
SLIDE 10

Parallelism – current methods

MPI + some threading
⊲ finite volume/difference codes scale to ∼ 10,000 cores
⊲ Spectral Einstein Code (SpEC)
  — ∼ 1 spectral element per core
  — ∼ 100,000 FV cells per core
  — scales to ∼ 50 cores

Simulations take time
⊲ binary BH: ∼ a week
⊲ binary NS: ∼ a month

10 / 26

slide-11
SLIDE 11

SpECTRE

SpECTRE: a next-generation code for relativistic astrophysics
⊲ discontinuous Galerkin
⊲ task-based parallelism
⊲ github.com/sxs-collaboration/spectre

This talk
⊲ methods for binary BHs
⊲ preliminary binary BH results
⊲ load balancing with Charm++

Not in this talk: improving hydrodynamics algorithms

11 / 26

slide-12
SLIDE 12

Discontinuous Galerkin

⊲ generalized spectral method
  — exponential convergence for smooth solutions
  — fall back to shock-capturing schemes where needed
⊲ geometric flexibility
⊲ nearest-neighbor boundary communication
⊲ AMR (adaptive mesh refinement) and local timestepping

12 / 26

slide-13
SLIDE 13

Code test: single BH

  • G. Lovelace

13 / 26

slide-14
SLIDE 14

Code test: code scaling

Scaling on Blue Waters (NCSA, UIUC)
⊲ green = strong scaling, fixed problem size
⊲ blue = weak scaling, problem size proportional to core count

(*) Measurements made with a hydrodynamics evolution; they predate an infrastructure rewrite in SpECTRE.

14 / 26

slide-15
SLIDE 15

Towards a binary BH evolution

⊲ initial data
  — initial guess + solve elliptic constraint equations
  — in development
  — for now, use SpEC initial data
⊲ PDE solver (discontinuous Galerkin + time stepper)
⊲ strategy to keep the singularities off the grid

15 / 26

slide-16
SLIDE 16

Keeping the singularities off the grid

Excision
⊲ cut out BH interior
⊲ move excised regions with the BH orbit

Control system
⊲ measures BH positions and shapes
⊲ updates time-dependent mappings to keep excised regions inside the BHs

Time derivatives gain moving-mesh terms
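The kind of time-dependent mapping a control system updates can be sketched as follows. This is illustrative only, not SpECTRE's map classes: a simple grid-to-inertial translation map with controller-chosen coefficients, whose time derivative is the mesh velocity that enters the moving-mesh terms.

```cpp
// Illustrative sketch (not SpECTRE's map classes): a time-dependent
// translation map x(t) = x_grid + c(t), with c(t) a polynomial whose
// coefficients a control system would update. Its time derivative
// dc/dt is the mesh velocity appearing in the moving-mesh terms.
#include <array>
#include <cassert>
#include <cmath>

struct TranslationMap {
  // c(t) = c0 + c1*t + 0.5*c2*t^2 (coefficients set by a controller).
  std::array<double, 3> c0, c1, c2;

  // Map a grid-frame point to the inertial frame at time t.
  std::array<double, 3> operator()(const std::array<double, 3>& x_grid,
                                   double t) const {
    std::array<double, 3> x;
    for (int i = 0; i < 3; ++i) {
      x[i] = x_grid[i] + c0[i] + c1[i] * t + 0.5 * c2[i] * t * t;
    }
    return x;
  }

  // Mesh velocity dc/dt: the source of the moving-mesh correction to dU/dt.
  std::array<double, 3> frame_velocity(double t) const {
    std::array<double, 3> v;
    for (int i = 0; i < 3; ++i) v[i] = c1[i] + c2[i] * t;
    return v;
  }
};
```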

16 / 26

slide-17
SLIDE 17

Towards a binary BH evolution

⊲ initial data
  — initial guess + solve elliptic constraint equations
  — in development
  — for now, use SpEC initial data
⊲ PDE solver (discontinuous Galerkin + time stepper)
⊲ strategy to keep the singularities off the grid
  — in development
  — for now, use moving-mesh data from SpEC

17 / 26

slide-18
SLIDE 18

Binary black hole evolution

Movie shows an equatorial cut
⊲ colored by the lapse: the spacetime curvature component associated with the flow of time
⊲ manually excised regions
⊲ BHs follow the excision regions for many orbits

18 / 26

slide-19
SLIDE 19

SpECTRE use of Charm++

SpECTRE components
⊲ DG elements = array chares
⊲ data processing (IO, interpolations) = group and nodegroup chares
⊲ measuring a BH position and shape = singleton chare
⊲ computing gravitational waves = singleton chare

Evolution remains roughly in sync
⊲ PDE structure imposes causality
⊲ efficiency requires load balance

19 / 26

slide-20
SLIDE 20

Load balancing in SpECTRE

Initial questions
⊲ given a bad distribution of chares to nodes, can the LB improve it?
⊲ given a good distribution (e.g., from a space-filling curve), will the LB preserve it?

Future work: balancing both load and communications
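The space-filling-curve placement mentioned above can be sketched concretely. This is an illustrative stand-in, not SpECTRE's distribution code: 2D element indices are interleaved into a Morton (Z-order) key, sorted, and cut into one contiguous block per node, which keeps spatially nearby elements together.

```cpp
// Illustrative sketch (not SpECTRE code): assign 2D grid elements to
// nodes by sorting them along a Morton (Z-order) space-filling curve
// and giving each node a contiguous segment of the curve.
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Interleave the bits of (x, y) into a Morton key.
std::uint64_t morton2d(std::uint32_t x, std::uint32_t y) {
  std::uint64_t key = 0;
  for (int b = 0; b < 32; ++b) {
    key |= (static_cast<std::uint64_t>(x >> b & 1u) << (2 * b)) |
           (static_cast<std::uint64_t>(y >> b & 1u) << (2 * b + 1));
  }
  return key;
}

struct Element {
  std::uint32_t x, y;  // position in the element grid
  int node;            // assigned node
};

// Sort elements along the curve, then give each node an equal chunk.
void distribute(std::vector<Element>& elems, int n_nodes) {
  std::sort(elems.begin(), elems.end(),
            [](const Element& a, const Element& b) {
              return morton2d(a.x, a.y) < morton2d(b.x, b.y);
            });
  const std::size_t per_node = (elems.size() + n_nodes - 1) / n_nodes;
  for (std::size_t i = 0; i < elems.size(); ++i) {
    elems[i].node = static_cast<int>(i / per_node);
  }
}
```

For a 4×4 element grid split over 4 nodes, each node ends up with one spatial quadrant, so most face neighbors are on-node.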

20 / 26

slide-21
SLIDE 21

Load balancing implementation

Initial implementation:
⊲ add global synchronizations every N timesteps
⊲ call AtSync()
⊲ resume timestepping from ResumeFromSync()
⊲ update registration in pup::er calls
  — array element de-registers with group when packing
  — re-registers when unpacking
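The pack/unpack registration dance above can be sketched with plain C++. The `Registry` and `Puper` types here are hypothetical stand-ins for a SpECTRE group chare and Charm++'s PUP::er; this is not real Charm++ code, only the shape of the logic.

```cpp
// Illustrative sketch: a migratable element de-registers from a local
// group when packed for migration and re-registers when unpacked on
// its destination. Registry and Puper are hypothetical stand-ins, not
// Charm++ or SpECTRE APIs.
#include <cassert>
#include <set>

// Stand-in for a group chare tracking which elements live on this node.
struct Registry {
  std::set<int> registered;
  void register_element(int id) { registered.insert(id); }
  void deregister_element(int id) { registered.erase(id); }
};

// Minimal stand-in for PUP::er: knows only whether it is packing.
struct Puper {
  bool packing;
  void operator|(int& /*value*/) { /* a real PUP::er serializes here */ }
};

struct Element {
  int id;
  void pup(Puper& p, Registry& local_registry) {
    p | id;  // serialize member data
    if (p.packing) {
      local_registry.deregister_element(id);  // element is leaving this node
    } else {
      local_registry.register_element(id);    // element arrived on this node
    }
  }
};
```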

21 / 26

slide-22
SLIDE 22

Load balancing results

A small test evolution
⊲ 1024 array chares on 2 nodes
⊲ ∼ 25 chares per proc

The best LB gets within 20% of optimal

[Figure: wallclock time [s] vs. inter-node communications / total communications, for the Dummy, Comm, GreedyComm, GraphBFT, RecBipart, and RecBipart + Dummy load balancers]

22 / 26

slide-23
SLIDE 23

Load balancing results

Slowdown with larger problem size
⊲ increase problem size and proc count together

Ongoing investigation
⊲ normal scaling with graph size?
⊲ is this Charm++ issue #2060?
⊲ SpECTRE performance

[Figure: "Cost" vs "Time" — time per timestep [s] vs. LB count, for RecBipart on 2, 4, and 8 nodes, and RecBipart + Dummy on 8 nodes]

23 / 26

slide-24
SLIDE 24

Checkpoint-restart results

Initial implementation:
⊲ call CkStartCheckpoint() from a global synchronization point

Works on the same number of nodes
⊲ future work: generalize

[Figure: two panels vs. time — error (∼ 1e-9 scale) for the full run and the run from checkpoint, and the change after restart (∼ 1e-25 scale)]

24 / 26

slide-25
SLIDE 25

Wishlist after initial experiments

LB clarifications
⊲ when to use which LB?
⊲ how does each LB make its decisions? how do those decisions scale with graph complexity?

Checkpoint-restart clarifications
⊲ what is the order of initialization on restart?
⊲ can group-chare dependencies from program startup be enforced on restart?

Feature wishlist
⊲ LB based on a space-filling curve?
⊲ a pup::er that distinguishes checkpointing from migration would help optimize packing
  — avoid checkpointing caches to disk
  — tailor registration updates

25 / 26

slide-26
SLIDE 26

Summary

⊲ Future observations motivate improved simulations of binary mergers
⊲ SpECTRE: improving algorithms and scalability
⊲ Preliminary binary BH simulations
⊲ Load balancing and checkpointing with Charm++

26 / 26