SLIDE 1

SpECTRE: Towards improved simulations of relativistic astrophysical systems

Nils Deppe
May 1, 2019

github.com/sxs-collaboration/spectre

SLIDE 2

Table of Contents

1. Background and motivation
2. Numerical methods
3. SpECTRE implementation

SLIDE 3

Simulations of GRMHD coupled to Einstein’s equations are complicated, difficult, and interesting

SLIDE 4

Simulation Goals

  • Accretion disks
  • Binary neutron star mergers
  • Core-collapse supernova explosions

[Image credit: Event Horizon Telescope Collaboration]

SLIDE 5

Need For High Accuracy

  • Gravitational waveforms for LIGO/Virgo and space-based detectors
  • LIGO/Virgo follow-up waveforms
  • Accretion for the Event Horizon Telescope
  • Improved understanding of heavy element generation

Abbott et al. 2017

SLIDE 6

General Equations to Solve

  • Hyperbolic equations in general form:

    ∂_t U + ∂_i F^i(U) + B^i · ∂_i U = S(U)

  • Elliptic equations of the form:

    ∂²U = S(U, ∂U)
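
As a concrete example (a standard first-order reduction; the sign conventions are illustrative, not taken from the talk), the flat-space scalar wave equation ∂_t²ψ = ∂_i∂^iψ fits the hyperbolic form with U = (ψ, Π, Φ_i), where Π = −∂_tψ and Φ_i = ∂_iψ:

    \partial_t \psi = -\Pi, \qquad
    \partial_t \Pi + \partial_i \Phi^i = 0, \qquad
    \partial_t \Phi_i + \partial_i \Pi = 0

Here B^i = 0, the fluxes are F^i(Π) = Φ^i and F^i(Φ_j) = δ^i_j Π, and the only source is S = (−Π, 0, 0).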

SLIDE 7

Table of Contents

1. Background and motivation
2. Numerical methods
3. SpECTRE implementation

SLIDE 8

Vacuum Evolutions: Spectral Methods

  • Smooth solutions
  • Exponential convergence
  • Non-overlapping grids
  • General grids
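
For context, a standard spectral-methods sketch (not a formula from the talk) of why smoothness gives exponential convergence: expand the solution in basis polynomials, e.g. Chebyshev polynomials T_k,

    u(x) \approx \sum_{k=0}^{N} c_k \, T_k(x),
    \qquad |c_k| \lesssim e^{-\alpha k} \ \text{for analytic } u,

so the truncation error decays exponentially with the number of modes N.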

SLIDE 9

Hydrodynamics: Finite Volume Methods

  • Work on shocks
  • Polynomial convergence
  • Typically Cartesian grids
  • Overlapping grids
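
For contrast, the textbook finite-volume update (a sketch, not a formula from the talk) evolves cell averages using fluxes through cell faces,

    \bar{u}_j^{n+1} = \bar{u}_j^{n}
      - \frac{\Delta t}{\Delta x} \left( F_{j+1/2} - F_{j-1/2} \right),

and the accuracy is set by the polynomial order of the face reconstruction, hence the polynomial convergence noted above.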


SLIDE 10

Parallelism

Current codes:

  • Message passing (MPI) + some threading
  • Spectral Einstein Code (SpEC):
      • Spectral methods: one element per core
      • Finite volume: ~100,000–150,000 cells per core
  • Pseudospectral methods: ~50 cores
  • Finite volume methods: ~20,000 cores

SLIDE 11

Discontinuous Galerkin Method

  • Exponential convergence for smooth solutions
  • Shock capturing
  • Non-overlapping deformed grids
  • hp-adaptivity
  • Local time stepping
  • Nearest-neighbor communication
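
For context, the standard DG weak form these properties follow from (a sketch; the non-conservative B^i · ∂_i U term from slide 6 is dropped for brevity): multiply the conservation law by a test function φ on each element Ω_k and integrate by parts,

    \int_{\Omega_k} \phi \, \partial_t U \, dV
      - \int_{\Omega_k} \partial_i \phi \, F^i(U) \, dV
      + \oint_{\partial \Omega_k} \phi \, G \, dA
      = \int_{\Omega_k} \phi \, S(U) \, dV.

The single-valued boundary flux G (next two slides) is the only coupling between elements, which is why communication is nearest-neighbor only.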

SLIDE 12

Boundary Data

  • Boundary fluxes communicated between elements
  • Nearest-neighbor only, good for parallelization

[Figure: neighboring elements Ω_{k−1} and Ω_k exchanging fluxes]

SLIDE 13

Boundary Correction

  • Consider element Ω_{k−1}:

    G_{k−1} = (1/2) (F^{i,+} n^+_i + F^{i,−} n^−_i) − (C/2) (u^+ − u^−)

[Figure: elements Ω_{k−1} and Ω_k with boundary data F^{i,+}, n^+_i, u^+ and F^{i,−}, n^−_i, u^− on either side of the shared face]
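
A minimal sketch of this correction in code (hypothetical names, not SpECTRE's actual interface). With the penalty C chosen as the largest characteristic speed, this reduces to the Rusanov (local Lax-Friedrichs) flux listed on slide 18:

    #include <cstddef>
    #include <vector>

    // Boundary correction on a shared face, following the formula above:
    //   G = (1/2)(F^{i,+} n^+_i + F^{i,-} n^-_i) - (C/2)(u^+ - u^-)
    // Each vector holds one value per grid point on the face.
    std::vector<double> boundary_correction(
        const std::vector<double>& n_dot_flux_ext,  // F^{i,+} n^+_i (neighbor)
        const std::vector<double>& n_dot_flux_int,  // F^{i,-} n^-_i (this element)
        const std::vector<double>& u_ext,           // u^+ (neighbor)
        const std::vector<double>& u_int,           // u^- (this element)
        const double penalty_c) {                   // C, e.g. max signal speed
      std::vector<double> g(u_int.size());
      for (std::size_t s = 0; s < g.size(); ++s) {
        g[s] = 0.5 * (n_dot_flux_ext[s] + n_dot_flux_int[s]) -
               0.5 * penalty_c * (u_ext[s] - u_int[s]);
      }
      return g;
    }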

SLIDE 14

The DG Algorithm Summary

1. Compute time derivatives
2. Send boundary data
3. Integrate in time
4. Send data for limiting
5. Apply limiter
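
In code, one step might look like the following sketch (illustrative stubs, not SpECTRE's API). In a task-based runtime each element executes this independently, and the "send" steps become asynchronous messages to neighbors:

    // One DG time step for a single element; the bodies are stubbed out
    // because only the ordering of the five steps matters here.
    struct Element {};  // would hold U, fluxes, and neighbor buffers

    void compute_time_derivatives(Element&) {}   // volume terms of slide 6
    void send_boundary_data(Element&) {}         // fluxes to neighbors (async)
    void integrate_in_time(Element&, double) {}  // add boundary corrections G,
                                                 // then e.g. a Runge-Kutta substep
    void send_limiter_data(Element&) {}          // cell data to neighbors (async)
    void apply_limiter(Element&) {}              // e.g. minmod or WENO (slide 18)

    void dg_step(Element& element, const double dt) {
      compute_time_derivatives(element);
      send_boundary_data(element);
      integrate_in_time(element, dt);
      send_limiter_data(element);
      apply_limiter(element);
    }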

SLIDE 15

Table of Contents

1. Background and motivation
2. Numerical methods
3. SpECTRE implementation

SLIDE 16

SpECTRE Design Goals

  • Modular and extensible
  • Correctness: unit tests, integration tests, physics tests, etc.
  • Maintainability: GitHub, documentation, tools, etc.
  • Scalability: task-based parallelism (Charm++)
  • Efficiency: vectorization, hardware-specific code (Blaze, LIBXSMM)
  • General framework for hyperbolic (Cornell, Caltech, CalState Fullerton, UNH) and elliptic (AEI) PDEs
SLIDE 17

Available Physical Systems

  • Scalar wave
  • Curved scalar wave (mostly)
  • Newtonian Euler (in code review)
  • Relativistic Euler (mostly)
  • GRMHD
  • Generalized harmonic (in code review)

SLIDE 18

Numerical Schemes

Numerical fluxes:

  • Rusanov (local Lax-Friedrichs)
  • HLL
  • Upwind

Planned numerical fluxes:

  • HLLC
  • Roe
  • Marquina

Limiters:

  • Minmod (MUSCL, ΛΠ₁, ΛΠN; sketched below)
  • Krivodonova
  • SimpleWENO (in code review)
  • HWENO (in code review)
  • Multipatch FV/FD subcell (in progress)

Planned limiters:

  • Moe-Rossmanith-Seal (MRS)
  • Hierarchical Barth-Jespersen and vertex-based
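
As an example of the simplest entry above, a textbook minmod kernel (not SpECTRE's implementation) of the kind the MUSCL and ΛΠ limiters apply to neighboring slope estimates:

    #include <algorithm>

    // Returns zero when the candidate slopes disagree in sign, otherwise
    // the one of smallest magnitude; zeroing the slope at sign changes is
    // what suppresses oscillations near discontinuities.
    double minmod(const double a, const double b, const double c) {
      if (a > 0.0 && b > 0.0 && c > 0.0) {
        return std::min({a, b, c});
      }
      if (a < 0.0 && b < 0.0 && c < 0.0) {
        return std::max({a, b, c});
      }
      return 0.0;
    }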

SLIDE 19

Convergence for Smooth Problems: Alfvén Wave

[Plots: L1 error of v_z vs. number of elements Nx for polynomial orders P1–P11 and Nx = 4, 8, 16, 32, with reference slopes Δx⁴, Δx⁶, Δx⁸, Δx¹⁰ for P3–P9; errors reach ~10⁻¹¹]

SLIDE 20

Single Black Hole Evolutions

  • Generalized harmonic system
  • Excised cube in center

[Plot: L2 norm of the gauge constraint H_a + Γ_a, and errors in g_ab, Φ_iab, and Π_ab, vs. time/mass up to 2000, at levels ~10⁻⁷–10⁻⁵]

SLIDE 21

Komissarov Slow Shock

256 × 1 × 1 elements, 3³ points per element

[Plots: solution profiles over x ∈ [0, 1.4] at t = 0.00, 0.48, 0.96, 1.44, 1.92 for the Krivodonova, SimpleWENO, and HWENO limiters]

SLIDE 22

Cylindrical Blast Wave

128² × 1 elements, 2³ points per element

[Panels: Krivodonova, SimpleWENO, HWENO]

SLIDE 23

Cylindrical Blast Wave

128² × 1 elements, 3³ points per element

[Panels: Krivodonova, SimpleWENO, HWENO]

SLIDE 24

Fishbone-Moncrief Disk

  • Torus around a black hole
  • Code comparison project
  • χ = 0.9375, ρ_max ≈ 77
  • Orbital period T_orb ≈ 247
  • Hexahedron: [−40, 40] × [2, 40] × [−8, 8]

SLIDE 25

Fishbone-Moncrief Disk

Rest mass density ρ at t = 600

SLIDE 26

Fishbone-Moncrief Disk

Error in rest mass density ρ at t = 600

SLIDE 27

Scaling: GRMHD Bondi Accretion

  • Run on the Blue Waters supercomputer (NCSA, University of Illinois at Urbana-Champaign)
  • Green: perfect speedup for fixed problem size (strong scaling)
  • Blue: actual weak scaling (flat is ideal)

[Plot: runtime (s) vs. number of threads (10³–10⁵) for 1,646,592 and 245,760 elements]

SLIDE 28

Summary

  • Improved vacuum and GRMHD simulations necessary for experiment
  • Current methods difficult to scale to new machines
  • Discontinuous Galerkin as an alternative method
  • SpECTRE as a general hyperbolic and elliptic PDE solver (not just DG)
  • Successful scaling to the largest machines available
  • Limiting and primitive recovery remain open problems