Leveraging time integration to increase efficiency and robustness of nonlinear implicit solvers – PowerPoint PPT Presentation


SLIDE 1

Introduction Numerical Methods Test Problems Results Conclusions

Leveraging time integration to increase efficiency and robustness of nonlinear implicit solvers

Daniel R. Reynolds

reynolds@smu.edu

Department of Mathematics Southern Methodist University

ICERM Workshop on Numerical Methods for Large-Scale Nonlinear Problems and Their Applications

September 1, 2015

SLIDE 2

Generic Nonlinear Solver Algorithm

Nonlinear solver algorithms typically fit within a standard format:
(a) Given an initial guess, x(0)
(b) Loop:
    (i) Compute x(k+1) = g(x(k)) (or x(k+1) = g(x(k), …, x(k−m)))
    (ii) Check for convergence

Typically (b.i) receives the greatest attention, since it is what we analyze to determine the cost per iteration, rate of convergence, etc. This talk instead focuses on the two other (often critically important) components, (a) and (b.ii), and what we may do for them in the context of embedded diagonally-implicit Runge-Kutta time integrators.

SLIDE 3

Outline

1. Introduction
2. Numerical Methods
3. Test Problems
4. Results
5. Conclusions


SLIDE 5

DIRK Time Integration

Consider the ODE system ẏ = f(t, y), t ∈ [t0, tf], y(t0) = y0 ∈ ℝᴺ. An s-stage DIRK method for evolving the time step tn−1 → tn approximates two solutions to the ODE system (yn and ỹn) as:

    zi = yn−1 + hn Σ_{j=1..i} Ai,j f(tn,j, zj),   i = 1, …, s,
    yn = yn−1 + hn Σ_{j=1..s} bj f(tn,j, zj),     [solution]
    ỹn = yn−1 + hn Σ_{j=1..s} b̃j f(tn,j, zj),    [embedding],

where tn,j ≡ tn−1 + cj hn and hn = tn − tn−1. This defines a sequence of s nonlinear systems to solve per time step:

    Gi(zi) ≡ zi − yn−1 − hn Σ_{j=1..i} Ai,j f(tn,j, zj) = 0,   i = 1, …, s.
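To make the stage solves concrete, here is a minimal sketch (not the ARKode implementation; function name and tolerances are illustrative) of one step of a 2-stage SDIRK method for a scalar ODE, solving each stage system Gi(zi) = 0 with a scalar Newton iteration and a finite-difference Jacobian:

```python
import numpy as np

def sdirk2_step(f, t0, y0, h):
    """One step of a 2-stage, 2nd-order SDIRK method (gamma = 1 - 1/sqrt(2)),
    solving each stage system G_i(z_i) = 0 by a scalar Newton iteration."""
    g = 1.0 - 1.0 / np.sqrt(2.0)
    A = np.array([[g, 0.0], [1.0 - g, g]])   # DIRK coefficients A_{i,j}
    b = np.array([1.0 - g, g])               # solution weights b_j
    c = np.array([g, 1.0])                   # abscissae c_j
    z = np.zeros(2)                          # stage solutions z_i
    for i in range(2):
        ti = t0 + c[i] * h
        # explicit part of G_i: y_{n-1} + h * sum_{j<i} A_{i,j} f(t_{n,j}, z_j)
        known = y0 + h * sum(A[i, j] * f(t0 + c[j] * h, z[j]) for j in range(i))
        x = y0                               # trivial predictor z_i^(0) = y_{n-1}
        for _ in range(20):                  # Newton iteration on G_i(x) = 0
            G = x - known - h * A[i, i] * f(ti, x)
            eps = 1e-8 * max(1.0, abs(x))
            Gp = ((x + eps) - known - h * A[i, i] * f(ti, x + eps) - G) / eps
            dx = -G / Gp
            x += dx
            if abs(dx) <= 1e-12 * max(1.0, abs(x)):
                break
        z[i] = x
    return y0 + h * sum(b[j] * f(t0 + c[j] * h, z[j]) for j in range(2))
```

For ẏ = −y with y0 = 1, one step of size h = 0.1 lands within a few times 10⁻⁵ of e^(−0.1), consistent with a 2nd-order method.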

SLIDE 6

Newton Nonlinear Solver

We denote the WRMS norm of a vector v ∈ ℝᴺ as

    ‖v‖ = [ (1/N) Σ_{i=1..N} ( vi / (atol,i + rtol |yn−1,i|) )² ]^{1/2},

where atol,i and rtol are our target absolute and relative errors on yn,i.

Given a guess x(0) for a stage solution zi, until “convergence”, update x(k+1) = x(k) + s(k), where s(k) satisfies:

    J(x(k)) s(k) = −G(x(k)),                     [direct: dense/banded]
    ‖J(x(k)) s(k) + G(x(k))‖ ≤ η ‖G(x(k))‖,      [iterative: GMRES]

where Jij(x) = ∂Gi(x)/∂xj is the Jacobian, and η is the relative linear residual tolerance. We allow up to 5 Newton iterations per stage in the following experiments.
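The WRMS norm above translates directly into code; a minimal sketch (function name assumed):

```python
import numpy as np

def wrms_norm(v, y_prev, rtol, atol):
    """WRMS norm: ||v|| = sqrt((1/N) * sum_i (v_i / w_i)^2),
    with weights w_i = atol_i + rtol * |y_prev,i| (atol scalar or vector)."""
    w = np.asarray(atol) + rtol * np.abs(np.asarray(y_prev))
    return float(np.sqrt(np.mean((np.asarray(v) / w) ** 2)))
```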

SLIDE 7

Accelerated Fixed Point Nonlinear Solver

We also consider a fixed point nonlinear solver with Anderson acceleration. Denote g(x) = x − G(x). Given a guess x(0) for a stage solution zi and some m > 0, set x(1) = g(x(0)). Until “convergence”, for k = 1, 2, …:

    mk = min{m, k},
    F(k) = [ f(k−mk), …, f(k) ],   where f(i) = g(x(i)) − x(i),
    α(k) = argmin_{α = (α0, …, αmk)ᵀ, Σ_{i=0..mk} αi = 1} ‖F(k) α‖2,
    x(k+1) = Σ_{i=0..mk} αi(k) g(x(k−mk+i)).

We allow up to 10 iterations per stage, with m = 5 in the following experiments.
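Anderson acceleration can be sketched in a few lines using the standard difference-matrix formulation of the least-squares problem, which is equivalent to the constrained Σαi = 1 form above. This is an illustrative implementation, not the one used in the experiments:

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, maxit=100):
    """Anderson-accelerated fixed-point iteration for x = g(x) (a sketch).
    Solves the constrained least-squares problem in its equivalent
    unconstrained 'difference' form: min_gamma ||f(k) - dF gamma||_2."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, Gm = [x], [g(x)]                      # histories of iterates x(i) and g(x(i))
    x = Gm[-1]                               # x(1) = g(x(0))
    for _ in range(maxit):
        gx = g(x)
        X.append(x)
        Gm.append(gx)
        if np.linalg.norm(gx - x) <= tol:    # residual f(k) = g(x(k)) - x(k)
            return gx
        X, Gm = X[-(m + 1):], Gm[-(m + 1):]  # keep at most m_k = min(m, k) differences
        F = np.array([Gm[i] - X[i] for i in range(len(X))])
        dF = (F[1:] - F[:-1]).T              # residual differences
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
        dG = np.array(Gm[1:]).T - np.array(Gm[:-1]).T
        x = gx - dG @ gamma                  # accelerated update
    return x
```

For example, the fixed point of x = cos(x) (≈ 0.7390851) is reached in a handful of iterations.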

SLIDE 8

Reliance on x(0)

Numerical Analysis 101: Newton’s method converges quadratically to a simple root x∗, assuming ‖x(0) − x∗‖ < ε for ε “small enough”.

Tim Kelley’s talk (Monday PM) and Roger Pawlowski’s results (Monday AM): the Anderson-accelerated fixed-point method converges under similar assumptions on the quality of x(0).

For general nonlinear systems, initial guess selection is more “art” than “science”, but in many applications (including ours) we may leverage the problem itself to construct x(0).

Balancing act: explicit predictors extrapolate solution values, but extrapolation error can increase with polynomial order.

SLIDE 9

Constructing the Initial Guess – Trivial Predictor (0)

Runge-Kutta methods approximate yn = yn−1 + ∫_{tn−1}^{tn} f(t, y) dt using solution values/derivatives at the stage times tn,i. Typically zi − y(tn,i) ∝ h, even when yn is much higher-order.

[Figure: stage values z1, …, z5 and stage derivatives fn,1, …, fn,5 across the step from tn−1 to tn]

Basic assumption: if hn is “small” then z1 ≈ ⋯ ≈ zs ≈ yn−1, leading to the typical initial guess

    zi(0) = yn−1,   i = 1, …, s.

We’ll call this the trivial predictor. Hypotheses: (+) robust, (−) inaccurate

SLIDE 10

Constructing the Initial Guess – Maximum-Order Predictor (1)

Idea: use “accurate” data from the previous step, {yn−2, yn−1, fn−2, fn−1}, to construct a cubic Hermite interpolant p3(t):

    p3(tn−2) = yn−2,  p3(tn−1) = yn−1,  p3′(tn−2) = fn−2,  p3′(tn−1) = fn−1,

and extrapolate zi(0) = p3(tn,i). We’ll call this the maximum-order predictor. Hypotheses:
(+) very accurate for smooth, non-stiff dynamics
(−) less robust for stiff dynamics, especially for later stages
(−) inaccurate for hn ≫ hn−1

[Figure: cubic Hermite extrapolant p3(t) through the data at tn−2 and tn−1, with predicted stage values z1(0), …, z5(0) on [tn−1, tn]]
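The maximum-order predictor reduces to evaluating the standard cubic Hermite basis; a sketch (helper name assumed, not from the slides):

```python
def hermite_cubic_predictor(t2, y2, f2, t1, y1, f1, t):
    """Cubic Hermite interpolant p3 matching (y, f) data at t_{n-2} (t2) and
    t_{n-1} (t1); evaluating at a stage time t > t1 gives z_i^(0) = p3(t_{n,i})."""
    hp = t1 - t2
    s = (t - t2) / hp                 # normalized coordinate; s > 1 extrapolates
    h00 = 2*s**3 - 3*s**2 + 1         # standard Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y2 + hp*h10*f2 + h01*y1 + hp*h11*f1
```

Since p3 is exact for cubics, extrapolating data sampled from y(t) = t³ reproduces the cubic exactly.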

SLIDE 11

Constructing the Initial Guess – Variable-Order Predictor (2)

Idea: to avoid increased extrapolation error when used away from data, reduce the order of the predicting polynomial for later stages:

    zi(0) = p3(tn,i),  if (tn,i − tn−1)/hn ≤ 0.5,
            p2(tn,i),  if 0.5 < (tn,i − tn−1)/hn ≤ 0.75,
            p1(tn,i),  if 0.75 < (tn,i − tn−1)/hn,

where p2(t) is the quadratic Hermite polynomial through {yn−2, yn−1, fn−1}, and p1(t) interpolates {yn−2, yn−1}. We’ll call this the variable-order predictor. Hypotheses:
(+) very accurate for smooth dynamics and early stages
(+) robust for late stages
(−) inaccurate for late stages

[Figure: predictors p3(t), p2(t), p1(t) extrapolated over [tn−1, tn], with predicted stages z1(0), …, z5(0) switching to lower-order polynomials later in the step]

SLIDE 12

Constructing the Initial Guess – Cutoff-Order Predictor (3)

Idea: for increased robustness, only use p3(t) and p1(t), with a threshold again based on (tn,i − tn−1)/hn:

    zi(0) = p3(tn,i),  if (tn,i − tn−1)/hn ≤ 0.5,
            p1(tn,i),  if 0.5 < (tn,i − tn−1)/hn.

We’ll call this the cutoff-order predictor. Hypotheses:
(+) very accurate for smooth dynamics and early stages
(+) robust for intermediate & late stages
(−) inaccurate for intermediate & late stages

[Figure: p3(t) and p1(t) extrapolated over [tn−1, tn], with predicted stages z1(0), …, z5(0) switching from cubic to linear at mid-step]
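Combining the pieces, the variable-order predictor can be sketched as a single dispatch on c = (tn,i − tn−1)/hn (hypothetical helper; the cutoff-order variant simply drops the quadratic branch):

```python
def predict_stage(c, y2, y1, f2, f1, hp, h):
    """Variable-order stage predictor z_i^(0) (a sketch; names are illustrative).
    c = (t_{n,i} - t_{n-1})/h_n; (y2, f2) is data at t_{n-2}, (y1, f1) at t_{n-1};
    hp is the previous step size, h the current one."""
    s = c * h                            # time offset of the stage past t_{n-1}
    if c <= 0.5:                         # p3: cubic Hermite (both values, both slopes)
        u = (s + hp) / hp
        return ((2*u**3 - 3*u**2 + 1) * y2 + hp * (u**3 - 2*u**2 + u) * f2
                + (-2*u**3 + 3*u**2) * y1 + hp * (u**3 - u**2) * f1)
    elif c <= 0.75:                      # p2: quadratic through y2, y1 and slope f1
        a = (y2 - y1 + f1 * hp) / hp**2
        return y1 + f1 * s + a * s**2
    else:                                # p1: linear through y2, y1
        return y1 + s * (y1 - y2) / hp
```

All three branches reproduce linear data y(t) = t exactly, since each polynomial interpolates {yn−2, yn−1}.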

SLIDE 13

Stopping the Iteration

The step sizes hn are adapted to ensure that solution and embedding agree sufficiently:

    ‖yn − ỹn‖ = hn ‖ Σ_{j=1..s} (bj − b̃j) f(tn,j, zj) ‖ ≤ 1.

Recall: ‖·‖ encodes rtol and atol,i (accuracy and units) for the solution.

[Figure: solution yn, embedding ỹn, and stage data across the step from tn−1 to tn]

Each inner nonlinear solve computes the stage solutions zj, so errors therein propagate into yn and ỹn through f, but get scaled by an extra factor of hn. We may leverage this information in an attempt to ensure that we do not “over-solve” each nonlinear system for the stage solutions zj.
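A sketch of the resulting embedded error test (names assumed; scalar atol for brevity):

```python
import numpy as np

def step_accepted(y_new, y_emb, y_prev, rtol, atol):
    """Embedded error test (a sketch): accept the step when the WRMS-weighted
    solution/embedding difference satisfies ||y_n - y~_n|| <= 1."""
    w = np.asarray(atol) + rtol * np.abs(np.asarray(y_prev))
    err = np.sqrt(np.mean(((np.asarray(y_new) - np.asarray(y_emb)) / w) ** 2))
    return bool(err <= 1.0)
```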

SLIDE 14

Stopping the Iteration – Fixed Residual Tolerance (C)

A standard choice for a nonlinear solver is to enforce a tolerance on the nonlinear residual:

    ‖G(x(k))‖2 < ε.

We’ll use values of ε = {10⁻⁷, 10⁻⁹, 10⁻¹¹} in our experiments.
(+) May be performed at the beginning of the nonlinear iteration, allowing it to stop immediately upon ‘success’
(−) The residual only indirectly measures accuracy in the nonlinear solution
(−) The 2-norm knows nothing about the desired accuracy or ‘typical’ units of the solution vector x(k), so both must be included in ε
(−) Hence, ε is typically highly problem-dependent
(−) Stopping criteria cannot adapt to a solution with changing magnitude

SLIDE 15

Stopping the Iteration – Fixed Relative Solution Tolerance (B)

Another standard choice controls the anticipated relative solution error:

    ‖x(k) − x(k−1)‖2 < (ε/R) ‖x(k−1)‖2 ≈ (ε/R) ‖yn−1‖2,

where

    R = 1 for k = 1,   R = ‖x(k) − x(k−1)‖2 / ‖x(k−1) − x(k−2)‖2 for k > 1.

We’ll use values of ε = {10⁻⁷, 10⁻⁹, 10⁻¹¹} in our experiments.
(+) R measures the [linear] convergence rate, allowing the error estimate ‖x∗ − x(k)‖2 ≈ R ‖x(k) − x(k−1)‖2
(+) Directly measures accuracy in the nonlinear solution
(+) Relative error adapts to a solution with changing magnitude ((−) still doesn’t include units)
(−) Requires multiple nonlinear iterations to test convergence
(−) ε is still typically problem-dependent

SLIDE 16

Stopping the Iteration – IVP-aware Relative Solution Tolerance (A)

Perhaps the most complex choice uses our WRMS norm for IVP solution error within the nonlinear solver:

    ‖x(k) − x(k−1)‖ < ε/R,

where

    R = 1 for k = 1,   R = ‖x(k) − x(k−1)‖ / ‖x(k−1) − x(k−2)‖ for k > 1.

We’ll use ε = {10⁻¹, 10⁻², 10⁻³} (since ‖·‖ already includes rtol and atol,i).
(+) R measures the [linear] convergence rate, allowing the error estimate ‖x∗ − x(k)‖ ≈ R ‖x(k) − x(k−1)‖
(+) Directly relates nonlinear solver error to IVP solver accuracy
(−) Requires multiple nonlinear iterations to test convergence
(−) ε may still be problem-dependent
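Criteria (A) and (B) share the same rate-based structure and differ only in the norm used to measure the updates; a sketch (function name assumed):

```python
def solver_converged(dx_hist, eps):
    """Rate-based stopping test (a sketch): dx_hist holds the update norms
    ||x(j) - x(j-1)|| for j = 1..k (WRMS norm for criterion A, 2-norm for B);
    stop when ||dx_k|| < eps / R, with R the estimated linear convergence
    rate (R = 1 on the first iteration)."""
    R = 1.0 if len(dx_hist) < 2 else dx_hist[-1] / dx_hist[-2]
    return dx_hist[-1] < eps / R
```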


SLIDE 18

Brusselator

Nonlinear ODE system for t ∈ [0, 10]:

    u̇ = 1.2 − (w + 1)u + vu²,     u(0) = 3.9,
    v̇ = wu − vu²,                 v(0) = 1.1,
    ẇ = 10⁵(2.5 − w) − wu,        w(0) = 2.8.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁶, atol = 10⁻¹¹
  Nonlinear solvers: Newton-GMRES and Accelerated Fixed Point
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.1), B (ε = 10⁻⁷), C (ε = 10⁻⁷)
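For reference, the Brusselator right-hand side in code (a direct transcription of the equations above):

```python
import numpy as np

def brusselator_rhs(t, y):
    """Right-hand side of the stiff Brusselator ODE test problem."""
    u, v, w = y
    return np.array([1.2 - (w + 1.0) * u + v * u**2,
                     w * u - v * u**2,
                     1.0e5 * (2.5 - w) - w * u])
```

The 10⁵ coefficient in the ẇ equation is what makes the system stiff.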

SLIDE 19

Brusselator1D FEM

Nonlinear PDE system for t ∈ [0, 10], x ∈ [0, 1]:

    ∂t u = (1/40) ∂xx u + 3/5 − (w + 1)u + vu²,   u(0, x) = 3/5 + (1/10) sin(πx),
    ∂t v = (1/40) ∂xx v + wu − vu²,               v(0, x) = 10/3 + (1/10) sin(πx),
    ∂t w = (1/40) ∂xx w + 10⁵(2 − w) − wu,        w(0, x) = 2 + (1/10) sin(πx),

with stationary boundary conditions, i.e.

    ∂t u(t, 0) = ∂t u(t, 1) = ∂t v(t, 0) = ∂t v(t, 1) = ∂t w(t, 0) = ∂t w(t, 1) = 0.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁶, atol = 10⁻¹¹
  Newton-GMRES solver (Accelerated Fixed Point fails for all attempts)
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.1), B (ε = 10⁻⁷) (C fails for all attempts)

SLIDE 20

HIRES Problem

ODE system modeling chemical reactions governing ‘High Irradiance Responses’ of photomorphogenesis on the basis of phytochrome; for t ∈ [0, 321.8122]:

    ẏ1 = −1.71 y1 + 0.43 y2 + 8.32 y3 + 0.0007
    ẏ2 = 1.71 y1 − 8.75 y2
    ẏ3 = −10.03 y3 + 0.43 y4 + 0.035 y5
    ẏ4 = 8.32 y2 + 1.71 y3 − 1.12 y4
    ẏ5 = −1.745 y5 + 0.43 (y6 + y7)
    ẏ6 = −280 y6 y8 + 0.69 y4 + 1.71 y5 − 0.43 y6 + 0.69 y7
    ẏ7 = 280 y6 y8 − 1.81 y7
    ẏ8 = 1.81 y7 − 280 y6 y8

    y(0) = 0, except y1(0) = 1, y8(0) = 0.0057.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁶, atol = 10⁻¹⁰
  Nonlinear solvers: Newton-Dense and Accelerated Fixed Point
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.01), B (ε = 10⁻⁹), C (ε = 10⁻¹¹)

SLIDE 21

Medical Akzo Nobel Problem

Nonlinear PDE system modeling penetration of radio-labeled antibodies into tumor-infected tissue; for t ∈ [0, 20], x ∈ [0, 1]:

    u̇ = ((x − 1)⁴/16) ∂xx u + ((x − 1)³/8) ∂x u − kuv,   u(0, x) = 0,
    v̇ = −kuv,                                            v(0, x) = 1.

Boundary conditions on u:

    u(t, 0) = 2 for t ≤ 5,  u(t, 0) = 0 for t > 5,  ∂x u(t, 1) = 0.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁶, atol = 10⁻¹⁰
  Nonlinear solvers: Newton-Band and Accelerated Fixed Point
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.1), B (ε = 10⁻⁷), C (ε = 10⁻⁹)

SLIDE 22

Ring Modulator Problem

Stiff, highly nonlinear system of 15 ODEs modeling a ring modulator in electrical circuit analysis; for t ∈ [0, 10⁻³], with initial conditions y(0) = 0.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁴, atol = 10⁻⁴
  Nonlinear solvers: Newton-Dense and Accelerated Fixed Point
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.001), B (ε = 10⁻⁹), C (ε = 10⁻⁷)

SLIDE 23

van der Pol Oscillator Problem

Stiff, nonlinear system of ODEs describing the behavior of nonlinear vacuum tube circuits; for t ∈ [0, 2]:

    ẏ1 = y2,                          y1(0) = 2,
    ẏ2 = 10⁶((1 − y1²) y2 − y1),      y2(0) = 0.

Test parameters:
  O(h⁴) DIRK integrator (s = 5); rtol = 10⁻⁶, atol = 10⁻¹⁰
  Nonlinear solvers: Newton-Dense and Accelerated Fixed Point
  Predictor algorithms 0, 1, 2, 3
  Convergence tests: A (ε = 0.001), B (ε = 10⁻¹¹), C (ε = 10⁻⁷)
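And the van der Pol right-hand side, a direct transcription of the system above (stiffness parameter μ = 10⁶):

```python
import numpy as np

def vdp_rhs(t, y, mu=1.0e6):
    """Right-hand side of the stiff van der Pol oscillator test problem."""
    return np.array([y[1], mu * ((1.0 - y[0]**2) * y[1] - y[0])])
```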


SLIDE 25

Initial Guess Results – Newton

Predictors: 0 Trivial, 1 Maximum order, 2 Variable order, 3 Cutoff order. All results are normalized against the trivial predictor (0). Red highlights the best result(s) for each problem/category.

Problem        Predictor   Runtime   NIters   FEvals
Brusselator    1           0.89      0.58     0.81
               2           0.77      0.63     0.85
               3           0.81      0.65     0.83
Brusselator1D  1           0.69      0.78     0.86
               2           0.69      0.80     0.86
               3           0.74      0.85     0.90
HIRES          1           0.73      0.66     0.71
               2           0.72      0.67     0.76
               3           0.81      0.76     0.81

SLIDE 26

Initial Guess Results – Newton (continued)

Predictors: 0 Trivial, 1 Maximum order, 2 Variable order, 3 Cutoff order. All results are normalized against the trivial predictor (0). Red highlights the best result(s) for each problem/category.

Problem      Predictor   Runtime   NIters   FEvals
Medakzo      1           0.60      0.58     0.61
             2           0.67      0.69     0.69
             3           0.74      0.80     0.77
Ringmod      1           0.85      0.72     0.87
             2           0.86      0.73     0.89
             3           0.90      0.72     0.90
van der Pol  1           0.48      0.40     0.52
             2           0.51      0.49     0.54
             3           0.50      0.53     0.52

SLIDE 27

Initial Guess Results – Fixed Point

Predictors: 0 Trivial, 1 Maximum order, 2 Variable order, 3 Cutoff order. All results are normalized against the trivial predictor (0). Red highlights the best result(s) for each problem/category.

Problem      Predictor   Runtime   NIters   FEvals
Brusselator  1           0.07      0.07     0.07
             2           0.25      0.26     0.27
             3           0.20      0.21     0.22
HIRES        1           1.05      1.09     1.12
             2           1.92      1.97     2.11
             3           2.00      2.08     2.20

Notes: Fixed point solver fails all tests for Brusselator1D. Trivial predictor is most efficient for HIRES.

SLIDE 28

Initial Guess Results – Fixed Point (continued)

Predictors: 0 Trivial, 1 Maximum order, 2 Variable order, 3 Cutoff order. All results are normalized against the trivial predictor (0). Red highlights the best result(s) for each problem/category.

Problem       Predictor   Runtime   NIters   FEvals
Medakzo       1           1.00      1.13     1.25
              2           1.29      1.34     1.35
              3           1.37      1.38     1.38
Ringmod       1           0.86      0.87     0.88
              2           0.76      0.81     0.81
              3           0.73      0.76     0.76
van der Pol∗  1           1.00      1.00     1.00
              2           1.41      1.48     1.49
              3           4.90      5.73     5.76

Notes: predictor 0 fails van der Pol; results normalized against 1 instead. Trivial predictor is most efficient for Medakzo.

SLIDE 29

Nonlinear Solver Tolerance Results – Newton

Solver convergence test methods: A IVP-aware Relative Solution Tolerance; B Fixed Relative Solution Tolerance; C Fixed Residual Tolerance.

Problem        Method (ε)   Runtime   NIters    FEvals
Brusselator    A (1e-1)     0.0152    2220      3344
               B (1e-7)     0.0139    2220      3344
               C (1e-7)     0.0120    1121      3395
Brusselator1D  A (1e-1)     40.4      1157      1751
               B (1e-7)     40.5      1157      1751
               C            –         –         –
HIRES          A (1e-2)     0.0847    18207     27886
               B (1e-9)     0.0855    20660     28749
               C (1e-11)    0.0773    16842     31467
Medakzo        A (1e-1)     0.484     14052     24163
               B (1e-7)     0.482     14052     24163
               C (1e-9)     0.673     18211     38712
Ringmod        A (1e-3)     8.66      1721970   2534501
               B (1e-9)     3.54      821131    1075163
               C (1e-7)     3.57      696212    1196796
van der Pol    A (1e-3)     0.366     97428     141454
               B (1e-11)    0.264     76966     103677
               C (1e-7)     0.259     58695     117220

SLIDE 30

Nonlinear Solver Tolerance Results – Fixed Point

Solver convergence test methods: A IVP-aware Relative Solution Tolerance; B Fixed Relative Solution Tolerance; C Fixed Residual Tolerance.

Problem      Method (ε)   Runtime   NIters     FEvals
Brusselator  A (1e-1)     0.0246    6597       8709
             B (1e-7)     0.0244    6597       8709
             C (1e-7)     5.71      1298719    3209707
HIRES        A (1e-2)     0.164     43206      53538
             B (1e-9)     0.174     47294      58999
             C            –         –          –
Medakzo      A (1e-2)     8.83      236496     279000
             B (1e-7)     8.87      267053     351912
             C (1e-9)     13.3      359933     732192
Ringmod      A (1e-3)     39.3      10154719   12365025
             B (1e-9)     32.4      8717267    10221743
             C            –         –          –
van der Pol  A (1e-1)     0.364     92873      124177
             B (1e-7)     0.362     92873      124177
             C            –         –          –

Note: Fixed point solver fails all tests for Brusselator1D


SLIDE 32

Conclusions – Predictor

Implicit RK methods provide accurate additional data to the nonlinear solver, enabling high-order Hermite extrapolation for the initial nonlinear iterate. Solvers may utilize any subset of this information in constructing x(0), attempting to balance high-order accuracy against the danger of extrapolating far from the data.

The “maximum-order” predictor shows the best overall results:
  More uniform benefit for Newton than for AAFP, but always competitive.
  Accurate data matter more to the predictor than the danger of extrapolation.
  Surprisingly, it is more robust than the trivial predictor.

SLIDE 33

Conclusions – Nonlinear Convergence Criteria

IVP solvers with embeddings enable an accurate estimate of the solution error, allowing a balance between nonlinear solver and time integration errors:
  While the fixed residual tolerance had the best efficiency, it required significant tuning for each problem and showed serious robustness issues.
  Fixed point: the IVP-aware tolerance wins or is competitive for all tests.
  Newton: the IVP-aware tolerance is typically competitive with the fixed relative solution tolerance, with less dramatic tuning required.
  This Newton/fixed-point difference may result from their differing convergence rates.

SLIDE 34

Advertisement

All of these experiments were performed within the ARKode library

[see talk by David Gardner, Monday PM]:

Newest component in SUNDIALS
Flexible solver infrastructure, with choice over:
  Nonlinear/linear solvers
  Internal methods (e.g. initial guess, adaptivity algorithm)
  Internal parameters (e.g. ε, adaptivity parameters)
Support for non-identity mass matrices (FEM) and ARK IMEX methods: M ẏ = fE(t, y) + fI(t, y)
  Non-identity M used in the 1D Brusselator problem
Native support for vector resizing (e.g. spatial adaptivity)
User control over degree of solver “implicitness”
Supports user-supplied preconditioning, separately for Newton and mass-matrix systems

SLIDE 35

Thanks & Acknowledgements

Collaborators/Students: Carol S. Woodward [LLNL], Alan C. Hindmarsh [LLNL], David J. Gardner [LLNL], John Loffeld [LLNL], Jean M. Sexton [SMU, PhD]

Current grant/computing support: DOE SciDAC Program, NSF XRAC Program, SMU Center for Scientific Computation

Software:
  ARKode – http://faculty.smu.edu/reynolds/arkode
  SUNDIALS – https://computation.llnl.gov/casc/sundials

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC.