Design of optimal Runge-Kutta methods
David I. Ketcheson, King Abdullah University of Science & Technology (KAUST)


  1. Design of optimal Runge-Kutta methods. David I. Ketcheson, King Abdullah University of Science & Technology (KAUST).

  2. Acknowledgments. Some parts of this are joint work with: Aron Ahmadia and Matteo Parsani.

  3. Outline: 1. High order Runge-Kutta methods; 2. Linear properties of Runge-Kutta methods; 3. Nonlinear properties of Runge-Kutta methods; 4. Putting it all together: some optimal methods and applications.

  4. Solution of hyperbolic PDEs. The fundamental algorithmic barrier is the CFL condition: Δt ≤ a Δx. Implicit methods don't usually help (due to reduced accuracy). Strong scaling limits the effectiveness of spatial parallelism alone. Strategy: keep Δx as large as possible by using high order methods. But high order methods cost more and require more memory. Can we develop high order methods that are as efficient as lower order methods?

  5. Time Integration. Using a better time integrator is usually simple but can be highly beneficial. Using a different time integrator can: reduce the number of RHS evaluations required; alleviate timestep restrictions due to linear stability or nonlinear stability; improve accuracy (truncation error, dispersion, dissipation); and reduce storage requirements.

  6. Runge-Kutta Methods. To solve the initial value problem u'(t) = F(u(t)), u(0) = u_0, a Runge-Kutta method computes approximations u^n ≈ u(nΔt):
$$y_i = u^n + \Delta t \sum_{j=1}^{i-1} a_{ij} F(y_j), \qquad u^{n+1} = u^n + \Delta t \sum_{j=1}^{s} b_j F(y_j).$$
The accuracy and stability of the method depend on the coefficient matrix A and the vector b.
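
As a concrete illustration of the stage recursion above, here is a minimal sketch of one explicit Runge-Kutta step in Python (the function name and NumPy-based setup are illustrative choices, not code from the talk):

```python
import numpy as np

def explicit_rk_step(F, u, dt, A, b):
    """Take one step of the explicit RK method defined by (A, b).

    A must be strictly lower triangular (explicit method); len(b) = s.
    """
    s = len(b)
    k = []  # stage derivatives F(y_i)
    for i in range(s):
        y_i = u + dt * sum(A[i][j] * k[j] for j in range(i))
        k.append(F(y_i))
    return u + dt * sum(b[j] * k[j] for j in range(s))

# Classical RK4 applied to u' = -u:
A = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
u, dt = np.array([1.0]), 0.1
print(explicit_rk_step(lambda v: -v, u, dt, A, b))  # approx exp(-0.1)
```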

  7. Runge-Kutta Methods: a philosophical aside. An RK method builds up information about the solution derivatives through the computation of intermediate stages. At the end of a step, all of this information is thrown away! Use more stages ⇒ keep information around longer.

  8. Outline: 1. High order Runge-Kutta methods; 2. Linear properties of Runge-Kutta methods; 3. Nonlinear properties of Runge-Kutta methods; 4. Putting it all together: some optimal methods and applications.

  9. The Stability Function. For the linear equation u' = λu, a Runge-Kutta method yields a solution u^{n+1} = φ(λΔt) u^n, where φ is called the stability function of the method:
$$\phi(z) = \frac{\det\left(I - z(A - e b^T)\right)}{\det(I - zA)}.$$
Example: Euler's method, u^{n+1} = u^n + Δt F(u^n), has φ(z) = 1 + z. For explicit methods of order p:
$$\phi(z) = \sum_{j=0}^{p} \frac{z^j}{j!} + \sum_{j=p+1}^{s} \alpha_j z^j.$$
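
The determinant formula lends itself to a direct symbolic check. Below is a small sketch using SymPy (the helper is my own, not from the talk) that computes φ(z) from (A, b) and confirms that forward Euler gives φ(z) = 1 + z, while classical RK4 recovers the degree-4 Taylor polynomial of exp(z):

```python
import sympy as sp

def stability_function(A, b):
    """phi(z) = det(I - z*(A - e*b^T)) / det(I - z*A)."""
    s = len(b)
    z = sp.symbols('z')
    A = sp.Matrix(A)
    e = sp.ones(s, 1)           # column vector of ones
    bT = sp.Matrix([list(b)])   # row vector b^T
    I = sp.eye(s)
    num = (I - z * (A - e * bT)).det()
    den = (I - z * A).det()
    return sp.expand(sp.cancel(num / den)), z

# Forward Euler: A = [[0]], b = [1]
phi, z = stability_function([[0]], [1])
print(phi)  # z + 1

# Classical RK4:
half = sp.Rational(1, 2)
A4 = [[0, 0, 0, 0], [half, 0, 0, 0], [0, half, 0, 0], [0, 0, 1, 0]]
b4 = [sp.Rational(1, 6), sp.Rational(1, 3), sp.Rational(1, 3), sp.Rational(1, 6)]
phi4, _ = stability_function(A4, b4)
print(phi4)  # z**4/24 + z**3/6 + z**2/2 + z + 1
```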

  10. Absolute Stability. For the linear equation u'(t) = Lu, we say the solution is absolutely stable if |φ(λΔt)| ≤ 1 for all λ ∈ σ(L). Example: Euler's method, φ(z) = 1 + z, is absolutely stable for λΔt in the unit disk centered at z = −1. [Figure: absolute stability region of Euler's method]
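
In code, the absolute-stability check is a one-liner over the spectrum. A minimal sketch (function names are mine; it assumes L is available as a dense matrix):

```python
import numpy as np

def absolutely_stable(phi, L, dt):
    """Check |phi(lambda * dt)| <= 1 for every eigenvalue lambda of L."""
    lam = np.linalg.eigvals(L)
    return bool(np.all(np.abs(phi(lam * dt)) <= 1.0))

# Euler's method on u' = -u (so sigma(L) = {-1}): stable iff dt <= 2
phi_euler = lambda z: 1 + z
print(absolutely_stable(phi_euler, np.array([[-1.0]]), 1.9))  # True
print(absolutely_stable(phi_euler, np.array([[-1.0]]), 2.1))  # False
```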

  11. Stability optimization. This leads naturally to the following problem. Given L, p, and s, maximize Δt subject to |φ(Δtλ)| − 1 ≤ 0 for all λ ∈ σ(L), where
$$\phi(z) = \sum_{j=0}^{p} \frac{z^j}{j!} + \sum_{j=p+1}^{s} \alpha_j z^j.$$
Here the decision variables are Δt and the coefficients α_j, j = p+1, ..., s. This problem is quite difficult; we approximate its solution by solving a sequence of convex problems (DK & A. Ahmadia, arXiv preprint).
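
The key observation is that for fixed Δt, φ is affine in the free coefficients α, so minimizing the worst-case |φ(Δtλ)| over α is a convex problem; one can then bisect on Δt. Here is a sketch in that spirit (cvxpy as the solver and all names are my own choices; see the Ketcheson-Ahmadia preprint for the actual algorithm):

```python
import numpy as np
import cvxpy as cp
from math import factorial

def max_modulus_after_optimizing(lam, dt, p, s):
    """For fixed dt, minimize max_lambda |phi(dt*lam)| over alpha_{p+1..s}.

    phi(z) = sum_{j<=p} z^j/j! + sum_{j>p} alpha_j z^j is affine in alpha,
    so this subproblem is convex.
    """
    z = dt * np.asarray(lam, dtype=complex)
    fixed = sum(z**j / factorial(j) for j in range(p + 1))
    V = np.column_stack([z**j for j in range(p + 1, s + 1)])
    alpha = cp.Variable(s - p)
    phi = fixed + V @ alpha                 # complex affine expression
    prob = cp.Problem(cp.Minimize(cp.max(cp.abs(phi))))
    prob.solve()
    return prob.value

def optimal_dt(lam, p, s, dt_hi=10.0, tol=1e-3):
    """Bisect for the largest dt whose optimized worst-case modulus is <= 1."""
    lo, hi = 0.0, dt_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if max_modulus_after_optimizing(lam, mid, p, s) <= 1 + 1e-9:
            lo = mid                         # stable: try a larger step
        else:
            hi = mid
    return lo
```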

  12. Accuracy optimization. We could instead optimize accuracy over some region in the complex plane. Given L, p, and s, maximize Δt subject to |φ(Δtλ) − exp(Δtλ)| ≤ ε for all λ ∈ σ(L), with φ(z) as above. In the PDE case, we can replace exp(Δtλ) with the exact dispersion relation for each Fourier mode.

  13. Stability Optimization: a toy example. As an example, consider the advection equation u_t + u_x = 0, discretized in space by first-order upwind differencing with unit spatial mesh size:
$$U_i'(t) = -(U_i(t) - U_{i-1}(t)),$$
with periodic boundary condition U_0(t) = U_N(t). [Figure: spectrum of the upwind operator]
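
The spectrum that feeds the optimization is easy to generate. A small sketch (my own setup, not code from the talk) builds the periodic upwind matrix and confirms that its eigenvalues lie on the circle of radius 1 centered at −1:

```python
import numpy as np

N = 100
L = -np.eye(N) + np.diag(np.ones(N - 1), -1)  # dU_i/dt = -(U_i - U_{i-1})
L[0, -1] = 1.0                                 # periodic wrap-around
lam = np.linalg.eigvals(L)
print(np.allclose(np.abs(lam + 1), 1.0))       # True: circle centered at -1
```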

  14. Stability Optimization: a toy example. [Figure: stability regions and scaled spectra. (a) RK(4,4). (b) Optimized 10-stage method.]

  15. Stability Optimization: a toy example. What is the relative efficiency (stable step size divided by cost per step)?
      RK(4,4):  1.4 / 4  ≈ 0.35
      RK(10,4): 6 / 10   = 0.6
By allowing even more stages, one can asymptotically approach the efficiency of Euler's method.
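
The RK(4,4) entry can be reproduced numerically by bisecting for the largest Δt that keeps the scaled upwind spectrum inside the stability region (a sketch reusing the matrix from the toy example above; the value it prints should be close to the ≈ 1.4 in the table):

```python
import numpy as np

N = 100
L = -np.eye(N) + np.diag(np.ones(N - 1), -1)
L[0, -1] = 1.0
lam = np.linalg.eigvals(L)

phi_rk4 = lambda z: 1 + z + z**2/2 + z**3/6 + z**4/24

lo, hi = 0.0, 4.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if np.all(np.abs(phi_rk4(mid * lam)) <= 1 + 1e-12):
        lo = mid
    else:
        hi = mid
print(lo, lo / 4)  # approx 1.39 and 0.35, matching the table
```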

  16. Stability Optimization: a more interesting example. Second-order discontinuous Galerkin discretization of advection. [Figure: spectrum and optimized stability region]

  17. Stability Optimization: one more example. [Figure: spectrum and optimized stability region for s = 20]

  18. Outline: 1. High order Runge-Kutta methods; 2. Linear properties of Runge-Kutta methods; 3. Nonlinear properties of Runge-Kutta methods; 4. Putting it all together: some optimal methods and applications.
