

  1. AM 205: Lecture 12
     ◮ Last time: numerical differentiation, numerical solution of ordinary differential equations
     ◮ Today: convergence and stability of ODE numerical methods

  2. Convergence
     We now consider whether one-step methods converge to the exact solution as h → 0.
     Convergence is a crucial property: we want to be able to satisfy an accuracy tolerance by taking h sufficiently small.
     In general, a method that isn't convergent will give misleading results and is useless in practice!

  3. Convergence
     We define the global error, e_k, as the total accumulated error at t = t_k:

         e_k ≡ y(t_k) − y_k

     We define the truncation error, T_k, as the amount "left over" at step k when we apply our method to the exact solution and divide by h, e.g. for an explicit one-step ODE approximation:

         T_k ≡ (y(t_{k+1}) − y(t_k))/h − Φ(t_k, y(t_k); h)
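As a concrete illustration (not from the slides), here is a minimal Python sketch that computes both quantities for forward Euler, where Φ(t, y; h) = f(t, y), applied to y′ = y with known exact solution y(t) = eᵗ; the variable names are my own.

```python
import numpy as np

# Model problem: y' = f(t, y) with known exact solution y(t) = e^t
f = lambda t, y: y
y_exact = lambda t: np.exp(t)

h, M = 0.01, 100
t = np.linspace(0.0, M * h, M + 1)

# Forward Euler: y_{k+1} = y_k + h f(t_k, y_k)
y = np.empty(M + 1)
y[0] = 1.0
for k in range(M):
    y[k + 1] = y[k] + h * f(t[k], y[k])

# Global error e_k = y(t_k) - y_k
e = y_exact(t) - y

# Truncation error T_k = (y(t_{k+1}) - y(t_k))/h - f(t_k, y(t_k))
T = (y_exact(t[1:]) - y_exact(t[:-1])) / h - f(t[:-1], y_exact(t[:-1]))

print("max |e_k| =", np.abs(e).max())  # accumulated error
print("max |T_k| =", np.abs(T).max())  # local error per unit step
```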

  4. Convergence
     The truncation error defined above determines the local error introduced by the ODE approximation.
     For example, suppose y_k = y(t_k); then for the case above we have

         hT_k ≡ y(t_{k+1}) − y_k − hΦ(t_k, y_k; h) = y(t_{k+1}) − y_{k+1}

     Hence hT_k is the error introduced in one step of our ODE approximation.¹
     Therefore the global error e_k is determined by the accumulation of the T_j for j = 0, 1, ..., k − 1.
     Now let's consider the global error of the Euler method in detail.
     ¹ Because of this fact, the truncation error is defined as hT_k in some texts.

  5. Convergence
     Theorem: Suppose we apply Euler's method for steps 1, 2, ..., M to y′ = f(t, y), where f satisfies a Lipschitz condition:

         |f(t, u) − f(t, v)| ≤ L_f |u − v|,

     where L_f ∈ ℝ_{>0} is called a Lipschitz constant. Then

         |e_k| ≤ ((e^{L_f t_k} − 1)/L_f) max_{0≤j≤k−1} |T_j|,   k = 0, 1, ..., M,

     where T_j is the Euler method truncation error.²
     ² Notation used here supposes that y ∈ ℝ, but the result generalizes naturally to y ∈ ℝⁿ for n > 1.

  6. Convergence
     Proof: From the definition of truncation error for Euler's method we have

         y(t_{k+1}) = y(t_k) + hf(t_k, y(t_k)) + hT_k

     Subtracting y_{k+1} = y_k + hf(t_k, y_k) gives

         e_{k+1} = e_k + h[f(t_k, y(t_k)) − f(t_k, y_k)] + hT_k,

     hence

         |e_{k+1}| ≤ |e_k| + hL_f|e_k| + h|T_k| = (1 + hL_f)|e_k| + h|T_k|

  7. Convergence
     Proof (continued...): This gives a geometric progression, e.g. for k = 2 we have

         |e_3| ≤ (1 + hL_f)|e_2| + h|T_2|
              ≤ (1 + hL_f)[(1 + hL_f)|e_1| + h|T_1|] + h|T_2|
              ≤ (1 + hL_f)² h|T_0| + (1 + hL_f) h|T_1| + h|T_2|   (since e_0 = 0, so |e_1| ≤ h|T_0|)
              ≤ h (max_{0≤j≤2} |T_j|) ∑_{j=0}^{2} (1 + hL_f)^j

     Or, in general,

         |e_k| ≤ h (max_{0≤j≤k−1} |T_j|) ∑_{j=0}^{k−1} (1 + hL_f)^j

  8. Convergence
     Proof (continued...): Hence use the formula

         ∑_{j=0}^{k−1} r^j = (1 − r^k)/(1 − r)

     with r ≡ 1 + hL_f, to get

         |e_k| ≤ (1/L_f)((1 + hL_f)^k − 1) max_{0≤j≤k−1} |T_j|

     Finally, we use the bound³ 1 + hL_f ≤ exp(hL_f), so that (1 + hL_f)^k ≤ exp(khL_f) = exp(L_f t_k), to get the desired result. □
     ³ For x ≥ 0, 1 + x ≤ exp(x) by the power series expansion exp(x) = 1 + x + x²/2 + ···
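The bound can be checked numerically. The sketch below (my own illustration, reusing the forward Euler setup from earlier) compares |e_k| against (e^{L_f t_k} − 1)/L_f · max_j |T_j| for y′ = y, for which L_f = 1.

```python
import numpy as np

f = lambda t, y: y              # L_f = 1 for f(t, y) = y
y_exact = lambda t: np.exp(t)
Lf, h, M = 1.0, 0.01, 100

t = np.linspace(0.0, M * h, M + 1)
y = np.empty(M + 1); y[0] = 1.0
for k in range(M):
    y[k + 1] = y[k] + h * f(t[k], y[k])

e = np.abs(y_exact(t) - y)
T = np.abs((y_exact(t[1:]) - y_exact(t[:-1])) / h - y_exact(t[:-1]))

# Theorem bound: |e_k| <= (exp(Lf*t_k) - 1)/Lf * max_{j<k} |T_j|;
# using the global max of |T_j| only makes the bound larger
bound = (np.exp(Lf * t) - 1.0) / Lf * T.max()
assert np.all(e <= bound + 1e-14)
print("bound holds; e.g. |e_M| =", e[-1], "<=", bound[-1])
```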

  9. Convergence: Lipschitz Condition
     A simple case where we can calculate a Lipschitz constant is when y ∈ ℝ and f is continuously differentiable.
     Then from the mean value theorem we have

         |f(t, u) − f(t, v)| = |f_y(t, θ)||u − v|   for some θ ∈ (u, v)

     Hence we can set

         L_f = max_{t ∈ [0, t_M], θ ∈ (u, v)} |f_y(t, θ)|
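For instance (my own sketch, with a hypothetical right-hand side f(t, y) = sin(y), not from the slides), one can estimate such a Lipschitz constant by maximizing |f_y| over a grid covering the region of interest:

```python
import numpy as np

# Hypothetical example: f(t, y) = sin(y), so f_y(t, y) = cos(y)
fy = lambda t, y: np.cos(y)

# Crude grid-based estimate of L_f = max |f_y| over the region of interest
ts = np.linspace(0.0, 1.0, 201)    # t in [0, t_M] with t_M = 1
ys = np.linspace(-2.0, 2.0, 401)   # y-range the solution is assumed to stay in
T, Y = np.meshgrid(ts, ys)
Lf = np.abs(fy(T, Y)).max()
print("estimated L_f =", Lf)       # -> 1.0, since |cos| <= 1
```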

  10. Convergence: Lipschitz Condition
      However, f doesn't have to be continuously differentiable to satisfy a Lipschitz condition!
      e.g. let f(x) = |x|; then

          |f(x) − f(y)| = ||x| − |y|| ≤ |x − y|,⁴

      hence L_f = 1 in this case.
      ⁴ This is the reverse triangle inequality.

  11. Convergence
      For a fixed t (i.e. t = kh, as h → 0 and k → ∞), the factor (e^{L_f t} − 1)/L_f in the bound is a constant.
      Hence the global convergence rate for each fixed t is given by the dependence of T_k on h.
      Our proof was for Euler's method, but the same dependence of global error on local error holds in general.
      We say that a method has order of accuracy p if |T_k| = O(h^p) (where p is an integer).
      Hence ODE methods with order ≥ 1 are convergent.

  12. Order of Accuracy
      Forward Euler is first-order accurate:

          T_k ≡ (y(t_{k+1}) − y(t_k))/h − f(t_k, y(t_k))
              = (y(t_{k+1}) − y(t_k))/h − y′(t_k)
              = (y(t_k) + hy′(t_k) + h²y′′(θ)/2 − y(t_k))/h − y′(t_k)
              = (h/2) y′′(θ) = O(h)
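A quick empirical check of this rate (my own sketch, again for y′ = y): halving h should roughly halve the worst-case |T_k|.

```python
import numpy as np

y_exact = lambda t: np.exp(t)

for h in [0.02, 0.01, 0.005]:
    t = np.arange(0.0, 1.0, h)
    # Forward Euler truncation error: T_k = (y(t_k + h) - y(t_k))/h - y'(t_k)
    T = (y_exact(t + h) - y_exact(t)) / h - y_exact(t)
    print(f"h = {h:6.3f}   max |T_k| = {np.abs(T).max():.3e}")
# max |T_k| scales like (h/2) * max|y''|, confirming first-order accuracy
```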

  13. Order of Accuracy
      Backward Euler is first-order accurate:

          T_k ≡ (y(t_{k+1}) − y(t_k))/h − f(t_{k+1}, y(t_{k+1}))
              = (y(t_{k+1}) − y(t_k))/h − y′(t_{k+1})
              = (y(t_{k+1}) − [y(t_{k+1}) − hy′(t_{k+1}) + h²y′′(θ)/2])/h − y′(t_{k+1})
              = −(h/2) y′′(θ) = O(h)
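Note that backward Euler is implicit: each step requires solving y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}) for y_{k+1}. A minimal sketch (my own) for the linear case y′ = λy, where the implicit equation can be solved in closed form:

```python
import numpy as np

# Backward Euler for y' = lam*y: y_{k+1} = y_k + h*lam*y_{k+1}
# => (1 - h*lam) y_{k+1} = y_k, solvable in closed form for this linear f
lam, h, M = -1.0, 0.1, 50
y = np.empty(M + 1); y[0] = 1.0
for k in range(M):
    y[k + 1] = y[k] / (1.0 - h * lam)

t = h * np.arange(M + 1)
print("error at t = 5:", abs(np.exp(lam * t[-1]) - y[-1]))
# For a nonlinear f, each step would instead need a root-finder (e.g. Newton)
```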

  14. Order of Accuracy
      The trapezoid method is second-order accurate. Let's prove this using a quadrature error bound; recall that

          y(t_{k+1}) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds

      and hence

          (y(t_{k+1}) − y(t_k))/h = (1/h) ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds

      So

          T_k = (1/h) ∫_{t_k}^{t_{k+1}} f(s, y(s)) ds − (1/2)[f(t_k, y(t_k)) + f(t_{k+1}, y(t_{k+1}))]
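For reference, the trapezoid method itself takes steps y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, y_{k+1})], and like backward Euler it is implicit. A minimal sketch (my own) for y′ = λy, where the implicit step again has a closed form:

```python
import numpy as np

# Trapezoid method for y' = lam*y:
# y_{k+1} = y_k + (h/2)(lam*y_k + lam*y_{k+1})
# => y_{k+1} = y_k * (1 + h*lam/2) / (1 - h*lam/2)
lam, h, M = -1.0, 0.1, 50
y = np.empty(M + 1); y[0] = 1.0
for k in range(M):
    y[k + 1] = y[k] * (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)

t = h * np.arange(M + 1)
print("error at t = 5:", abs(np.exp(lam * t[-1]) - y[-1]))
```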

  15. Order of Accuracy
      Hence

          T_k = (1/h)[∫_{t_k}^{t_{k+1}} f(s, y(s)) ds − (h/2)(f(t_k, y(t_k)) + f(t_{k+1}, y(t_{k+1})))]
              = (1/h)[∫_{t_k}^{t_{k+1}} y′(s) ds − (h/2)(y′(t_k) + y′(t_{k+1}))]

      Therefore T_k is determined by the trapezoid rule error for the integrand y′ on t ∈ [t_k, t_{k+1}].
      Recall that the trapezoid quadrature rule error bound depends on (b − a)³ = (t_{k+1} − t_k)³ = h³; after the factor of 1/h above, we get T_k = O(h²).

  16. Order of Accuracy
      The table below shows the global error at t = 1 for y′ = y, y(0) = 1, for (forward) Euler and trapezoid:

          h        E_Euler    E_trap
          2.0e-2   2.67e-2    9.06e-05
          1.0e-2   1.35e-2    2.26e-05
          5.0e-3   6.76e-3    5.66e-06
          2.5e-3   3.39e-3    1.41e-06

          h → h/2  ⇒  E_Euler → E_Euler/2
          h → h/2  ⇒  E_trap → E_trap/4
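This table can be reproduced with a short script (my own sketch); for y′ = y both methods have closed-form steps, so M steps just raise the per-step factor to the Mth power.

```python
import numpy as np

print(f"{'h':>8} {'E_Euler':>12} {'E_trap':>12}")
for h in [2.0e-2, 1.0e-2, 5.0e-3, 2.5e-3]:
    M = round(1.0 / h)                         # integrate y' = y to t = 1
    y_euler = (1.0 + h) ** M                   # forward Euler: y_{k+1} = (1+h) y_k
    y_trap = ((1 + h / 2) / (1 - h / 2)) ** M  # trapezoid closed form
    print(f"{h:8.1e} {abs(np.e - y_euler):12.3e} {abs(np.e - y_trap):12.3e}")
```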

  17. Stability
      So far we have discussed convergence of numerical methods for ODE IVPs, i.e. asymptotic behavior as h → 0.
      It is also crucial to consider the stability of numerical methods: for what (finite and practical) values of h is our method stable?
      We want our method to be well-behaved for as large a step size as possible.
      All else being equal, larger step sizes ⇒ fewer time steps ⇒ more efficient!
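To preview why this matters (my own illustration, not from the slides): forward Euler applied to y′ = −10y is only well-behaved for sufficiently small h; beyond a threshold the numerical solution oscillates and blows up even though the exact solution decays.

```python
import numpy as np

lam = -10.0
for h in [0.05, 0.15, 0.25]:         # exact solution e^{lam*t} decays for all h
    y, M = 1.0, round(5.0 / h)
    for _ in range(M):
        y += h * lam * y             # forward Euler step: y <- (1 + h*lam) y
    print(f"h = {h:4.2f}   y(5) approx = {y: .3e}")
# small h: decays as expected; h = 0.25: |1 + h*lam| = 1.5 > 1, iterates explode
```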

  18. Stability
      In this context, the key idea is that we want our methods to inherit the stability properties of the ODE.
      If an ODE is unstable, then we can't expect our discretization to be stable.
      But if an ODE is stable, we want our discretization to be stable as well.
      Hence we first discuss ODE stability, independent of numerical discretization.

  19. ODE Stability
      Consider an ODE y′ = f(t, y), and
      ◮ let y(t) be the solution for initial condition y(0) = y_0
      ◮ let ŷ(t) be the solution for initial condition ŷ(0) = ŷ_0
      The ODE is stable if: for every ε > 0, ∃ δ > 0 such that

          ‖ŷ_0 − y_0‖ ≤ δ  ⇒  ‖ŷ(t) − y(t)‖ ≤ ε for all t ≥ 0

      "Small input perturbation leads to small perturbation in the solution"

  20. ODE Stability
      A stronger form of stability, asymptotic stability: ‖ŷ(t) − y(t)‖ → 0 as t → ∞, i.e. perturbations decay over time.
      These two definitions of stability are properties of the ODE, independent of any numerical algorithm.
      This nomenclature is a bit confusing compared to previous units:
      ◮ We previously referred to this type of property as the conditioning of the problem
      ◮ Stability previously referred only to properties of a numerical approximation
      In ODEs (and PDEs), it is standard to use "stability" to refer to the sensitivity of both the mathematical problem and the numerical approximation.

  21. ODE Stability
      Consider the stability of y′ = λy (assuming y(t) ∈ ℝ) for different values of λ:

          y(t) − ŷ(t) = (y_0 − ŷ_0) e^{λt}

      [Plot: solutions for y_0 = 1 and ŷ_0 = 2 on t ∈ [0, 5]; λ = −1, asymptotically stable]

  22. ODE Stability

          y(t) − ŷ(t) = (y_0 − ŷ_0) e^{λt}

      [Plot: solutions for y_0 = 1 and ŷ_0 = 2 on t ∈ [0, 5]; λ = 0, stable]

  23. ODE Stability

          y(t) − ŷ(t) = (y_0 − ŷ_0) e^{λt}

      [Plot: solutions for y_0 = 1 and ŷ_0 = 2 on t ∈ [0, 5]; λ = 1, unstable]
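The three plots above come from the same experiment; a sketch of my own that reproduces the underlying perturbation curves with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 5.0, 200)
y0, y0_hat = 1.0, 2.0
for lam, label in [(-1.0, "asymptotically stable"),
                   (0.0, "stable"),
                   (1.0, "unstable")]:
    # the perturbation evolves as (y0_hat - y0) * e^{lam t}
    plt.plot(t, (y0_hat - y0) * np.exp(lam * t), label=f"lam = {lam}: {label}")
plt.xlabel("t"); plt.ylabel("y_hat(t) - y(t)"); plt.legend(); plt.show()
```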

  24. ODE Stability
      More generally, we can allow λ to be a complex number: λ = a + ib. Then

          y(t) = y_0 e^{(a+ib)t} = y_0 e^{at} e^{ibt} = y_0 e^{at}(cos(bt) + i sin(bt))

      The key issue for stability is now the sign of a = Re(λ):
      ◮ Re(λ) < 0 ⇒ asymptotically stable
      ◮ Re(λ) = 0 ⇒ stable
      ◮ Re(λ) > 0 ⇒ unstable
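A quick numerical confirmation (my own sketch) that only Re(λ) controls the magnitude, since |e^{ibt}| = 1:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 6)
for lam in [-0.5 + 3j, 0.0 + 3j, 0.5 + 3j]:   # same Im(lam), different Re(lam)
    y = np.exp(lam * t)                       # solution with y0 = 1
    print(f"Re(lam) = {lam.real:+.1f}   |y(5)| = {abs(y[-1]):.3e}")
# |y(t)| = e^{Re(lam) t}: decays, stays constant, or grows with the sign of Re(lam)
```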

  25. ODE Stability
      Our understanding of the stability of y′ = λy extends directly to the case y′ = Ay, where y ∈ ℝⁿ, A ∈ ℝⁿˣⁿ.
      Suppose that A is diagonalizable, so that we have the eigenvalue decomposition A = VΛV^{−1}, where
      ◮ Λ = diag(λ_1, λ_2, ..., λ_n), where the λ_j are eigenvalues
      ◮ V is the matrix with eigenvectors v_1, v_2, ..., v_n as columns
      Then

          y′ = Ay = VΛV^{−1}y  ⇒  V^{−1}y′ = ΛV^{−1}y  ⇒  z′ = Λz

      where z ≡ V^{−1}y and z_0 ≡ V^{−1}y_0
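A sketch of my own checking this decoupling numerically, with a hypothetical diagonalizable A (both eigenvalues have negative real part, so the system is asymptotically stable):

```python
import numpy as np

# Hypothetical diagonalizable A; its eigenvalues govern stability of y' = Ay
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
lam, V = np.linalg.eig(A)          # A = V diag(lam) V^{-1}

y0 = np.array([1.0, 1.0])
t = 2.0
# Solve in decoupled coordinates z = V^{-1} y, where z(t) = e^{lam t} z0
z0 = np.linalg.solve(V, y0)
y_t = V @ (np.exp(lam * t) * z0)   # map back to y coordinates
print("y(2) =", y_t.real)          # Re(lam) < 0 for both modes: solution decays
```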
