Finite Difference Method
Motivation
For a given smooth function f(x), we want to calculate the derivative f′(x) at a given value of x. Suppose we don't know how to compute the analytical expression for f′(x), or it is computationally very expensive. However, we do know how to evaluate the function values f(x). We know that:

f′(x) = lim_{h→0} (f(x + h) − f(x)) / h

Can we just use

f′(x) ≈ (f(x + h) − f(x)) / h

as an approximation? How do we choose h? Can we estimate the error of our approximation?
For a differentiable function f: ℝ → ℝ, the derivative is defined as:

f′(x) = lim_{h→0} (f(x + h) − f(x)) / h

Taylor series centered at x, evaluated at x̄ = x + h:

f(x + h) = f(x) + f′(x) h + f″(x) h²/2 + f‴(x) h³/6 + ⋯

f(x + h) = f(x) + f′(x) h + O(h²)
We define the Forward Finite Difference as:

df(x) = (f(x + h) − f(x)) / h

Therefore, the truncation error of the forward finite difference approximation is bounded by:

|f′(x) − df(x)| ≤ M h / 2, where M bounds |f″| near x, i.e. the truncation error is O(h).
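As a quick sketch (not from the slides; the function name fwd_diff and the choice of sin as a test function are my own), the forward finite difference can be implemented and its O(h) error observed directly:

```python
import math

def fwd_diff(f, x, h):
    # Forward finite difference approximation of f'(x); O(h) truncation error.
    return (f(x + h) - f(x)) / h

# Test on f(x) = sin(x), whose exact derivative is cos(x).
x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    err = abs(fwd_diff(math.sin, x, h) - math.cos(x))
    print(f"h = {h:.0e}, error = {err:.3e}")
```

Each tenfold reduction in h reduces the error by roughly a factor of ten, consistent with the O(h) bound above.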
Finite difference method
In a similar way, we can write:

f(x − h) = f(x) − f′(x) h + O(h²)  →  f′(x) = (f(x) − f(x − h)) / h + O(h)

And define the Backward Finite Difference as:

df(x) = (f(x) − f(x − h)) / h  →  f′(x) = df(x) + O(h)

And subtracting the two Taylor approximations:

f(x + h) = f(x) + f′(x) h + f″(x) h²/2 + f‴(x) h³/6 + ⋯
f(x − h) = f(x) − f′(x) h + f″(x) h²/2 − f‴(x) h³/6 + ⋯

f(x + h) − f(x − h) = 2 f′(x) h + f‴(x) h³/3 + O(h⁵)

f′(x) = (f(x + h) − f(x − h)) / (2h) + O(h²)

And define the Central Finite Difference as:

df(x) = (f(x + h) − f(x − h)) / (2h)  →  f′(x) = df(x) + O(h²)
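The O(h²) claim can be checked numerically: halving h should roughly quarter the central-difference error. A minimal sketch (the function name central and the sin test function are my choices, not from the slides):

```python
import math

def central(f, x, h):
    # Central finite difference approximation of f'(x); O(h^2) truncation error.
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)  # exact derivative of sin at x
e1 = abs(central(math.sin, x, 1e-2) - exact)
e2 = abs(central(math.sin, x, 5e-3) - exact)
print(e1 / e2)  # close to 4: halving h quarters the error
```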
Forward Finite Difference:  df(x) = (f(x + h) − f(x)) / h  →  f′(x) = df(x) + O(h)

Backward Finite Difference: df(x) = (f(x) − f(x − h)) / h  →  f′(x) = df(x) + O(h)

Central Finite Difference:  df(x) = (f(x + h) − f(x − h)) / (2h)  →  f′(x) = df(x) + O(h²)

How accurate is the finite difference approximation? How many function evaluations does it take (in addition to f(x))? Our typical trade-off issue! We can get better accuracy with the Central Finite Difference, at a (possibly) increased computational cost.
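Putting the three schemes side by side makes the accuracy/cost trade-off concrete. A sketch (the function names and the exp test function are my own choices):

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # O(h), 1 extra evaluation

def backward(f, x, h):
    return (f(x) - f(x - h)) / h            # O(h), 1 extra evaluation

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2), 2 evaluations

x, h = 1.0, 1e-3
exact = math.exp(x)  # derivative of exp is exp
for scheme in (forward, backward, central):
    print(scheme.__name__, abs(scheme(math.exp, x, h) - exact))
```

For the same h, the central difference is orders of magnitude more accurate, at the cost of one extra function evaluation.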
How small should the value of h be?

Forward:  truncation error O(h),  cost: 1 function evaluation
Backward: truncation error O(h),  cost: 1 function evaluation
Central:  truncation error O(h²), cost: 2 function evaluations
Example
f(x) = e^x − 2,  f′(x) = e^x

approx(x) = ((e^(x+h) − 2) − (e^x − 2)) / h

error(h) = |f′(x) − approx(x)|

We want to obtain an approximation for f′(1).

[Figure: log-log plot of error(h) vs. h, showing the O(h) truncation error.]
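This example can be reproduced numerically. A sketch assuming f(x) = e^x − 2 as on the slide (the helper names f and approx mirror the slide's notation):

```python
import math

def f(x):
    return math.exp(x) - 2.0

def approx(x, h):
    # Forward finite difference for f'(x).
    return (f(x + h) - f(x)) / h

exact = math.exp(1.0)  # f'(x) = e^x, so f'(1) = e
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"h = {h:.0e}, error = {abs(exact - approx(1.0, h)):.3e}")
```

Over this range of h the error decreases linearly with h, matching the O(h) truncation error.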
Example
Should we just keep decreasing the perturbation h, in order to approach the limit h → 0 and obtain a better approximation for the derivative?
Uh-Oh!
What happened here? f(x) = e^x − 2, f′(x) = e^x → f′(1) ≈ 2.7

Forward Finite Difference:

df(1) = (f(1 + h) − f(1)) / h

For very small h, f(1 + h) and f(1) are nearly equal, so evaluating the difference quotient in floating-point arithmetic loses significant digits (cancellation). The resulting rounding error in df(x) is bounded by roughly:

|rounding error| ≤ ε_m |f(x)| / h

where ε_m is machine epsilon.
When computing the finite difference approximation, we have two competing sources of error: truncation error and rounding error.
[Figure: log-log plot of total error vs. h; the optimal h sits at the minimum, with loss of accuracy due to rounding for smaller h.]

Truncation error: error ~ M h
Rounding error:   error ~ ε_m |f(x)| / h

Minimize the total error:

error ~ ε_m |f(x)| / h + M h

Setting the derivative with respect to h to zero gives h = sqrt(ε_m |f(x)| / M).
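The trade-off can be verified by sweeping h over many decades for the slide's example f(x) = e^x − 2 at x = 1 (a sketch; the variable names are mine). In double precision ε_m ≈ 2.2 × 10⁻¹⁶, and since |f(1)| and M here are both of order one, the predicted optimum is near sqrt(ε_m) ≈ 10⁻⁸:

```python
import math

eps_m = 2.0**-52  # double-precision machine epsilon

def f(x):
    return math.exp(x) - 2.0

exact = math.exp(1.0)  # f'(1) = e
errors = {}
for k in range(1, 16):
    h = 10.0**(-k)
    errors[h] = abs(exact - (f(1.0 + h) - f(1.0)) / h)

best_h = min(errors, key=errors.get)
print("best h:", best_h, "error:", errors[best_h])
print("sqrt(eps_m):", math.sqrt(eps_m))
```

The error first falls as h shrinks (truncation dominated), then rises again for very small h (rounding dominated), with the minimum near sqrt(ε_m |f(1)| / M).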