Leveraging time integration to increase efficiency and robustness of nonlinear implicit solvers


1. Leveraging time integration to increase efficiency and robustness of nonlinear implicit solvers. Daniel R. Reynolds (reynolds@smu.edu), Department of Mathematics, Southern Methodist University. ICERM Workshop on Numerical Methods for Large-Scale Nonlinear Problems and Their Applications, September 1, 2015.

2. Generic Nonlinear Solver Algorithm. Nonlinear solver algorithms typically fit within a standard format:
(a) Given an initial guess, $x^{(0)}$.
(b) Loop: (i) compute $x^{(k+1)} = g(x^{(k)})$ or $x^{(k+1)} = g(x^{(k)}, \ldots, x^{(k-m)})$; (ii) check for convergence.
Typically (b.i) receives the greatest attention, since it is what we analyze to determine the cost per iteration, rate of convergence, etc. This talk instead focuses on the two other (often critically important) components, (a) and (b.ii), and what we may do for them in the context of embedded diagonally-implicit Runge-Kutta (DIRK) time integrators.
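A minimal sketch of this generic skeleton, assuming an iteration map `g`, an initial guess `x0` from step (a), and a user-supplied convergence test for step (b.ii); all names are hypothetical:

```python
# Minimal sketch of the generic solver skeleton (a)-(b) above.
def nonlinear_solve(g, x0, converged, max_iters=50):
    x = x0                       # (a) initial guess
    for k in range(max_iters):   # (b) iteration loop
        x_new = g(x)             # (b.i) apply the iteration map
        if converged(x_new, x):  # (b.ii) check for convergence
            return x_new, k + 1
        x = x_new
    raise RuntimeError("nonlinear solver failed to converge")
```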

3. Outline: 1. Introduction; 2. Numerical Methods; 3. Test Problems; 4. Results; 5. Conclusions.


5. DIRK Time Integration. Consider the ODE system $\dot{y} = f(t, y)$, $t \in [t_0, t_f]$, $y(t_0) = y_0 \in \mathbb{R}^N$. An $s$-stage DIRK method for evolving the time step $t_{n-1} \to t_n$ approximates two solutions to the ODE system ($y_n$ and $\tilde{y}_n$) as:
$z_i = y_{n-1} + h_n \sum_{j=1}^{i} A_{i,j} f(t_{n,j}, z_j)$, $i = 1, \ldots, s$,
$y_n = y_{n-1} + h_n \sum_{j=1}^{s} b_j f(t_{n,j}, z_j)$ [solution],
$\tilde{y}_n = y_{n-1} + h_n \sum_{j=1}^{s} \tilde{b}_j f(t_{n,j}, z_j)$ [embedding],
where $t_{n,j} \equiv t_{n-1} + c_j h_n$ and $h_n = t_n - t_{n-1}$. This defines a sequence of $s$ nonlinear systems to solve per time step:
$G_i(z_i) \equiv z_i - y_{n-1} - h_n \sum_{j=1}^{i} A_{i,j} f(t_{n,j}, z_j) = 0$, $i = 1, \ldots, s$.
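For concreteness, a hedged numpy sketch of the stage residual $G_i$; the Butcher entries A and c, the step size h, and the already-solved earlier stages are assumed available, and all names are hypothetical:

```python
import numpy as np

# Residual G_i(z_i) for stage i (0-based) of a DIRK step: only the
# diagonal term A[i, i] * f(t_{n,i}, z_i) depends implicitly on z_i.
def stage_residual(z_i, i, f, t_prev, h, y_prev, A, c, z_earlier):
    acc = A[i, i] * f(t_prev + c[i] * h, z_i)
    for j, z_j in enumerate(z_earlier):          # j < i, already solved
        acc = acc + A[i, j] * f(t_prev + c[j] * h, z_j)
    return z_i - y_prev - h * acc
```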

6. Newton Nonlinear Solver. We denote the WRMS norm of a vector $v \in \mathbb{R}^N$ as
$\|v\| = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( \frac{v_i}{a_{tol,i} + r_{tol} |y_{n-1,i}|} \right)^2 \right]^{1/2}$,
where $a_{tol,i}$ and $r_{tol}$ are our target absolute and relative errors on $y_{n,i}$. Given a guess $x^{(0)}$ for a stage solution $z_i$, until "convergence" update $x^{(k+1)} = x^{(k)} + s^{(k)}$, where $s^{(k)}$ satisfies
$J(x^{(k)})\, s^{(k)} = -G(x^{(k)})$ [direct: dense/banded], or
$\|J(x^{(k)})\, s^{(k)} + G(x^{(k)})\| \le \eta\, \|G(x^{(k)})\|$ [iterative: GMRES],
where $J_{ij}(x) = \partial G_i(x)/\partial x_j$ is the Jacobian and $\eta$ is the relative linear residual tolerance. We allow up to 5 Newton iterations per stage in the following experiments.
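A sketch of the WRMS norm and a Newton loop built on it, assuming a vector `atol` and scalar `rtol`; `solve_linear` stands in for either a direct solve or GMRES at tolerance $\eta$, and the stopping test on the update is a hypothetical placeholder, since stopping criteria are exactly the topic treated later:

```python
import numpy as np

# Weighted root-mean-square norm defined above.
def wrms_norm(v, y_prev, atol, rtol):
    w = 1.0 / (atol + rtol * np.abs(y_prev))  # per-component weights
    return np.sqrt(np.mean((w * v) ** 2))

# Newton iteration for a stage residual G with Jacobian J.
def newton(G, J, x0, y_prev, atol, rtol, solve_linear, max_iters=5, tol=0.1):
    x = x0
    for _ in range(max_iters):
        s = solve_linear(J(x), -G(x))   # J(x) s = -G(x), exact or inexact
        x = x + s
        if wrms_norm(s, y_prev, atol, rtol) < tol:  # placeholder update test
            return x
    raise RuntimeError("Newton failed to converge within max_iters")
```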

7. Accelerated Fixed-Point Nonlinear Solver. We also consider a fixed-point nonlinear solver with Anderson acceleration. Denote $g(x) = x - G(x)$. Given a guess $x^{(0)}$ for a stage solution $z_i$ and some $m > 0$, set $x^{(1)} = g(x^{(0)})$. Until "convergence", for $k = 1, 2, \ldots$:
$m_k = \min\{m, k\}$,
$F^{(k)} = [f^{(k-m_k)}, \ldots, f^{(k)}]$, where $f^{(i)} = g(x^{(i)}) - x^{(i)}$,
$\alpha^{(k)} = \arg\min_{\alpha = (\alpha_0, \ldots, \alpha_{m_k})^T,\; \sum_{i=0}^{m_k} \alpha_i = 1} \|F^{(k)} \alpha\|_2$,
$x^{(k+1)} = \sum_{i=0}^{m_k} \alpha_i^{(k)}\, g(x^{(k-m_k+i)})$.
We allow up to 10 iterations per stage, with $m = 5$, in the following experiments.
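A numpy sketch of the Anderson-accelerated iteration, written in the equivalent unconstrained least-squares form (the constrained $\alpha$ above is recoverable from $\gamma$ by differencing); the convergence test is again a hypothetical placeholder:

```python
import numpy as np

# Anderson-accelerated fixed-point iteration for x = g(x), depth m.
def anderson_fixed_point(g, x0, converged, m=5, max_iters=10):
    x = [x0, g(x0)]                  # x^(0), x^(1) = g(x^(0))
    gx = [x[1]]                      # history of g(x^(i))
    f = [x[1] - x[0]]                # residuals f^(i) = g(x^(i)) - x^(i)
    for k in range(1, max_iters + 1):
        gx.append(g(x[k]))
        f.append(gx[k] - x[k])
        mk = min(m, k)
        dF = np.column_stack([f[i + 1] - f[i] for i in range(k - mk, k)])
        dG = np.column_stack([gx[i + 1] - gx[i] for i in range(k - mk, k)])
        gamma, *_ = np.linalg.lstsq(dF, f[k], rcond=None)
        x_next = gx[k] - dG @ gamma  # = sum_i alpha_i g(x^(k-mk+i))
        if converged(x_next, x[k]):
            return x_next
        x.append(x_next)
    raise RuntimeError("accelerated fixed point failed to converge")
```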

8. Reliance on $x^{(0)}$. Numerical Analysis 101: Newton's method converges quadratically to a simple root $x^*$, assuming $\|x^{(0)} - x^*\| < \varepsilon$ for $\varepsilon$ "small enough". Tim Kelley's talk (Monday PM) and Roger Pawlowski's results (Monday AM): the Anderson-accelerated fixed-point method converges under similar assumptions on the quality of $x^{(0)}$. For general nonlinear systems, initial guess selection is more "art" than "science," but in many applications (including ours) we may leverage the problem itself to construct $x^{(0)}$. Balancing act: explicit predictors extrapolate solution values, but extrapolation error can increase with polynomial order.

9. Constructing the Initial Guess (0): Trivial Predictor. Runge-Kutta methods approximate
$y_n = y_{n-1} + \int_{t_{n-1}}^{t_n} f(t, y)\, dt$
with solution values/derivatives at the stage times $t_{n,i}$. [Figure: stage solutions $z_1, \ldots, z_5$ and slopes $f_{n,1}, \ldots, f_{n,5}$ across the step from $t_{n-1}$ to $t_n$.] Typically $\|z_i - y(t_{n,i})\| \propto h$, even when $y_n$ is much higher-order. Basic assumption: if $h_n$ is "small" then $z_1 \approx \cdots \approx z_s \approx y_{n-1}$, leading to the typical initial guess
$z_i^{(0)} = y_{n-1}$, $i = 1, \ldots, s$.
We'll call this the trivial predictor. Hypotheses: (+) robust, (−) inaccurate.
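The trivial predictor is a one-liner; a sketch with hypothetical names:

```python
# Trivial predictor: every stage guess is the previous step's solution.
def trivial_predict(y_prev, s):
    return [y_prev.copy() for _ in range(s)]   # z_i^(0) = y_{n-1}
```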

10. Constructing the Initial Guess (1): Maximum-Order Predictor. Idea: use "accurate" data from the previous step, $\{y_{n-2}, y_{n-1}, f_{n-2}, f_{n-1}\}$, to construct a cubic Hermite interpolant $p_3(t)$:
$p_3(t_{n-2}) = y_{n-2}$, $p_3(t_{n-1}) = y_{n-1}$, $p_3'(t_{n-2}) = f_{n-2}$, $p_3'(t_{n-1}) = f_{n-1}$,
and extrapolate $z_i^{(0)} = p_3(t_{n,i})$. We'll call this the maximum-order predictor. Hypotheses: (+) very accurate for smooth, non-stiff dynamics; (−) less robust for stiff dynamics, especially for later stages; (−) inaccurate for $h_n \gg h_{n-1}$. [Figure: $p_3(t)$ through the previous-step data, extrapolated to the stage guesses $z_1^{(0)}, \ldots, z_5^{(0)}$ over $[t_{n-1}, t_n]$.]
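A sketch of the maximum-order predictor: evaluate the cubic Hermite interpolant through the previous-step data at a stage time $t > t_{n-1}$ (extrapolation). The standard cubic Hermite basis is used; all names are hypothetical:

```python
# Cubic Hermite interpolant of (t2, y2, f2) and (t1, y1, f1), evaluated
# at t; normalized coordinate s > 1 extrapolates beyond the data interval.
def hermite_cubic_predict(t, t2, y2, f2, t1, y1, f1):
    h = t1 - t2
    s = (t - t2) / h
    h00 = 2*s**3 - 3*s**2 + 1      # standard cubic Hermite basis
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y2 + h*h10*f2 + h01*y1 + h*h11*f1
```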

11. Constructing the Initial Guess (2): Variable-Order Predictor. Idea: to avoid increased extrapolation error when used away from the data, reduce the order of the predicting polynomial for later stages:
$z_i^{(0)} = \begin{cases} p_3(t_{n,i}), & \tau_i \le 0.5, \\ p_2(t_{n,i}), & 0.5 < \tau_i \le 0.75, \\ p_1(t_{n,i}), & 0.75 < \tau_i, \end{cases}$ where $\tau_i = (t_{n,i} - t_{n-1})/h_n$,
$p_2(t)$ is the quadratic Hermite polynomial through $\{y_{n-2}, y_{n-1}, f_{n-1}\}$, and $p_1(t)$ interpolates $\{y_{n-2}, y_{n-1}\}$. We'll call this the variable-order predictor. Hypotheses: (+) very accurate for smooth dynamics and early stages; (+) robust for late stages; (−) inaccurate for late stages. [Figure: $p_3(t)$, $p_2(t)$, and $p_1(t)$ through the previous-step data, with later stage guesses taken from the lower-order polynomials.]

12. Constructing the Initial Guess (3): Cutoff-Order Predictor. Idea: for increased robustness, only use $p_3(t)$ and $p_1(t)$, with a threshold again based on $\tau_i = (t_{n,i} - t_{n-1})/h_n$:
$z_i^{(0)} = \begin{cases} p_3(t_{n,i}), & \tau_i \le 0.5, \\ p_1(t_{n,i}), & 0.5 < \tau_i. \end{cases}$
We'll call this the cutoff-order predictor. Hypotheses: (+) very accurate for smooth dynamics and early stages; (+) robust for intermediate and late stages; (−) inaccurate for intermediate and late stages. [Figure: $p_3(t)$ and $p_1(t)$, with early stage guesses from $p_3$ and later stage guesses from $p_1$.] A combined sketch of these predictors follows.
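A combined sketch of the extrapolating predictors' order selection, assuming interpolants `p3`, `p2`, `p1` have already been built from the previous-step data (e.g. via `hermite_cubic_predict` above); the dispatch interface and names are hypothetical:

```python
# Select the predicting polynomial by the stage fraction tau = (t - t_prev)/h.
def predict_stage(t_stage, t_prev, h, p3, p2, p1, kind="variable"):
    tau = (t_stage - t_prev) / h
    if kind == "maximum":                    # always extrapolate the cubic
        return p3(t_stage)
    if kind == "variable":                   # reduce order for later stages
        if tau <= 0.5:
            return p3(t_stage)
        return p2(t_stage) if tau <= 0.75 else p1(t_stage)
    if kind == "cutoff":                     # cubic early, linear late
        return p3(t_stage) if tau <= 0.5 else p1(t_stage)
    raise ValueError("unknown predictor kind: %s" % kind)
```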

13. Stopping the Iteration. Time steps $h_n$ are adapted to ensure that the solution and embedding agree sufficiently:
$\|y_n - \tilde{y}_n\| = h_n \left\| \sum_{j=1}^{s} (b_j - \tilde{b}_j) f(t_{n,j}, z_j) \right\| \le 1$.
Recall: $\|\cdot\|$ encodes $r_{tol}$ and $a_{tol,i}$ (accuracy and units) for the solution. [Figure: solution $y_n$ and embedding $\tilde{y}_n$ at the end of the step, with the stage values $z_j$ and slopes $f_{n,j}$.] Each inner nonlinear solve computes the stage solutions $z_j$, so errors therein propagate into $y_n$ and $\tilde{y}_n$ through $f$, but get scaled by an extra factor of $h_n$. We may leverage this information in an attempt to ensure that we do not "over-solve" each nonlinear system for the stage solutions $z_j$.
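A sketch of the temporal error test above, reusing the `wrms_norm` sketched earlier; `b`, `b_tilde` are the solution and embedding weights and `f_stages` the stage right-hand sides (names hypothetical):

```python
# Embedded temporal error estimate; the step is accepted when this is <= 1.
def step_error(h, b, b_tilde, f_stages, y_prev, atol, rtol):
    diff = sum((bj - btj) * fj
               for bj, btj, fj in zip(b, b_tilde, f_stages))
    return wrms_norm(h * diff, y_prev, atol, rtol)
```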

14. Stopping the Iteration: Fixed Residual Tolerance (C). A standard choice for a nonlinear solver is to enforce a tolerance on the nonlinear residual:
$\|G(x^{(k)})\|_2 < \varepsilon$.
We'll use values of $\varepsilon \in \{10^{-7}, 10^{-9}, 10^{-11}\}$ in our experiments.
(+) May be checked at the beginning of each nonlinear iteration, allowing the solver to stop immediately upon "success".
(−) The residual only indirectly measures accuracy in the nonlinear solution.
(−) The 2-norm knows nothing about the desired accuracy or "typical" units of the solution vector $x^{(k)}$, so both must be folded into $\varepsilon$.
(−) Hence, $\varepsilon$ is typically highly problem-dependent.
(−) The stopping criterion cannot adapt to a solution with changing magnitude.
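A sketch of stopping test (C): an unweighted 2-norm check on the residual against a fixed, problem-dependent $\varepsilon$:

```python
import numpy as np

# Fixed residual-tolerance test: stop once ||G(x)||_2 < eps.
def residual_converged(G, x, eps=1e-9):
    return np.linalg.norm(G(x)) < eps
```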
