Nonlinear Equations
How can we solve these equations?
- Spring force: F = k x. What is the displacement x when F = 2 N?
- Drag force: F = 0.5 C_D ρ A v² = c_d v². What is the velocity v when F = 20 N?
  (with k = 40 N/m and c_d = 0.5 kg/m)

Written as root-finding problems:
- Spring force: f(x) = k x − F = 0
- Drag force: f(v) = c_d v² − F = 0
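For these two model problems the root can actually be written in closed form, which makes them good sanity checks for the iterative methods that follow. A quick check in Python, using the slide's values for k and c_d:

```python
# Closed-form solutions of the two model problems (slide values assumed).
k = 40.0    # spring constant, N/m
cd = 0.5    # drag coefficient, kg/m

x = 2.0 / k               # spring: F = k*x       ->  x = F/k
v = (20.0 / cd) ** 0.5    # drag:   F = cd*v**2   ->  v = sqrt(F/cd)

print(x)   # 0.05 m
print(v)   # ~6.325 m/s
```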
Nonlinear Equations in 1D
Goal: Solve f(x) = 0 for f: ℝ → ℝ. Find the root (zero) of the nonlinear equation f(x). Often called root finding.
Bisection method
Algorithm:
1. Take two points, a and b, on each side of the root such that f(a) and f(b) have opposite signs.
2. Calculate the midpoint c = (a + b)/2.
3. Evaluate f(c) and use c to replace either a or b, keeping the signs of the endpoints opposite.
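The three steps above can be sketched in Python as follows (a minimal implementation; the function name, tolerance, and stopping rule are choices of this sketch, not part of the slide):

```python
def bisection(f, a, b, tol=1e-8):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        c = (a + b) / 2
        fc = f(c)            # one new function evaluation per iteration
        if fa * fc <= 0:     # root lies in [a, c]
            b, fb = c, fc
        else:                # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

# The slide's example f(x) = 0.5 x^2 - 2 has a root at x = 2:
root = bisection(lambda x: 0.5 * x**2 - 2, 0.0, 10.0)
```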
Convergence
- The bisection method does not estimate x_k, the approximation of the desired root x. It instead finds an interval smaller than a given tolerance that contains the root.
- The length of the interval at iteration k is (b − a)/2^k. We can define this interval as the error at iteration k:

  lim_{k→∞} |e_{k+1}| / |e_k| = lim_{k→∞} [(b − a)/2^{k+1}] / [(b − a)/2^k] = 0.5

- Linear convergence
Convergence
An iterative method converges with rate r if:

lim_{k→∞} ||e_{k+1}|| / ||e_k||^r = C, with 0 < C < ∞

- r = 1: linear convergence
- r > 1: superlinear convergence
- r = 2: quadratic convergence

Linear convergence gains a constant number of accurate digits each step (and C < 1 matters!). Quadratic convergence doubles the number of accurate digits in each step (however, it only starts making sense once ||e_k|| is small, and C does not matter much).
Example:
Consider the nonlinear equation f(x) = 0.5x² − 2 and solving f(x) = 0 using the bisection method. For each of the initial intervals below, how many iterations are required to ensure the root is accurate within 2⁻⁶? A) [−10, −1.8] B) [−3, −2.1] C) [−4, 1.9]
Bisection Method - summary
- The function must be continuous with a root in the interval [a, b]
- Requires only one function evaluation per iteration!
  (The first iteration requires two function evaluations.)
- Given the initial interval [a, b], the length of the interval after n iterations is (b − a)/2^n
- Has linear convergence
Newtonโs method
- Recall we want to solve f(x) = 0 for f: ℝ → ℝ
- The Taylor expansion

  f(x_k + h) ≈ f(x_k) + f′(x_k) h

  gives a linear approximation for the nonlinear function f near x_k. Setting f(x_k + h) = 0 gives h = −f(x_k)/f′(x_k).
- Algorithm:
  x_0 = initial guess
  x_{k+1} = x_k − f(x_k)/f′(x_k)
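The update rule above translates directly into code. A minimal sketch (the stopping criterion on |f(x)| and the iteration cap are choices of this sketch):

```python
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:      # converged: f(x) is essentially zero
            break
        x = x - fx / fprime(x) # Newton update
    return x

# Same example as for bisection: f(x) = 0.5 x^2 - 2, f'(x) = x, root at x = 2
root = newton(lambda x: 0.5 * x**2 - 2, lambda x: x, 3.0)
```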
Newtonโs method
Equation of the tangent line: f′(x_k) = (f(x_k) − 0) / (x_k − x_{k+1})

(figure: tangent line at x_k crossing the axis at x_{k+1})
iClicker question
Consider solving the nonlinear equation 5 = 2.0 e^x + x². What is the result of applying one iteration of Newton's method for solving nonlinear equations with initial starting guess x_0 = 0, i.e. what is x_1? A) −2 B) 0.75 C) −1.5 D) 1.5 E) 3.0
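One Newton step is easy to verify numerically. This sketch assumes the equation is first rewritten in root-finding form as f(x) = 2e^x + x² − 5 = 0:

```python
import math

f  = lambda x: 2.0 * math.exp(x) + x**2 - 5   # 5 = 2e^x + x^2 rewritten as f(x) = 0
fp = lambda x: 2.0 * math.exp(x) + 2.0 * x    # f'(x)

x0 = 0.0
x1 = x0 - f(x0) / fp(x0)   # f(0) = -3, f'(0) = 2  ->  x1 = 0 - (-3)/2 = 1.5
print(x1)                  # 1.5
```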
Newtonโs Method - summary
- Must be started with an initial guess close enough to the root (convergence is only local); otherwise it may not converge at all.
- Requires function and first-derivative evaluation at each iteration (think of it as two function evaluations)
- What can we do when the derivative is too costly (or difficult) to evaluate?
- Typically has quadratic convergence:

  lim_{k→∞} ||e_{k+1}|| / ||e_k||² = C, with 0 < C < ∞
Secant method
Also derived from the Taylor expansion, but instead of using f′(x_k), it approximates the tangent with the secant line:

f′(x_k) ≈ (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1})

(figure: secant line through x_{k−1} and x_k crossing the axis at x_{k+1})

Algorithm:
x_0, x_1 = initial guesses
s_k = (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1})
x_{k+1} = x_k − f(x_k)/s_k
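The algorithm above, sketched in Python. Note how the previous function value is reused, so after the two initial evaluations only one new evaluation is needed per iteration:

```python
def secant(f, x0, x1, tol=1e-12, maxiter=50):
    """Secant method: Newton's update with f' replaced by the secant slope."""
    f0, f1 = f(x0), f(x1)       # two evaluations for the initial guesses
    for _ in range(maxiter):
        if abs(f1) < tol:
            break
        slope = (f1 - f0) / (x1 - x0)   # secant approximation of f'(x_k)
        x0, f0 = x1, f1
        x1 = x1 - f1 / slope            # secant update
        f1 = f(x1)                      # one new evaluation per iteration
    return x1

# Same example again: f(x) = 0.5 x^2 - 2, root at x = 2
root = secant(lambda x: 0.5 * x**2 - 2, 1.0, 3.0)
```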
Secant Method - summary
- Still local convergence
- Requires only one function evaluation per iteration (only the first iteration requires two function evaluations)
- Needs two starting guesses
- Has slower convergence than Newton's method (superlinear convergence):

  lim_{k→∞} ||e_{k+1}|| / ||e_k||^r = C, with 1 < r < 2
1D methods for root finding:
Bisection
- Update: check the signs of f(a) and f(c); interval length h_k = |b − a| / 2^k
- Convergence: linear (r = 1, c = 0.5)
- Cost: one function evaluation per iteration, no need to compute derivatives

Secant
- Update: x_{k+1} = x_k + h, with h = −f(x_k)/slope and slope = (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1})
- Convergence: superlinear (r ≈ 1.618), local convergence properties, convergence depends on the initial guess
- Cost: one function evaluation per iteration (two evaluations for the initial guesses only), no need to compute derivatives

Newton
- Update: x_{k+1} = x_k + h, with h = −f(x_k)/f′(x_k)
- Convergence: quadratic (r = 2), local convergence properties, convergence depends on the initial guess
- Cost: two function evaluations per iteration, requires first-order derivatives
Nonlinear system of equations
https://www.youtube.com/watch?v=NRgNDlVtmz0 (Robotic arm 1) https://www.youtube.com/watch?v=9DqRkLQ5Sv8 (Robotic arm 2) https://www.youtube.com/watch?v=DZ_ocmY8xEI (Blender)
Robotic arms
Nonlinear system of equations
Goal: Solve f(x) = 0 for f: ℝⁿ → ℝⁿ
In other words, f(x) is a vector-valued function:

f(x) = [f_1(x), …, f_n(x)]ᵀ = [f_1(x_1, x_2, x_3, …, x_n), …, f_n(x_1, x_2, x_3, …, x_n)]ᵀ

If looking for a solution to g(x) = y, then instead solve f(x) = g(x) − y = 0
Newtonโs method
Approximate the nonlinear function f(x) by a linear function using the Taylor expansion:

f(x + s) ≈ f(x) + J(x) s

where J(x) is the Jacobian matrix of the function f:

J(x) = [∂f_1/∂x_1 … ∂f_1/∂x_n; ⋮ ⋱ ⋮; ∂f_n/∂x_1 … ∂f_n/∂x_n], i.e. J(x)_{ij} = ∂f_i(x)/∂x_j

Set f(x + s) = 0 ⟹ J(x) s = −f(x). This is a linear system of equations (solve for s)!
Newtonโs method
Algorithm:
x_0 = initial guess
Solve J(x_k) s_k = −f(x_k)
Update x_{k+1} = x_k + s_k

Convergence:
- Typically has quadratic convergence
- Drawback: still only locally convergent

Cost:
- Main cost is associated with computing the Jacobian matrix and solving the Newton step.
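The algorithm above maps directly onto a linear solve per iteration. A minimal sketch using NumPy (the example system and tolerances are choices of this sketch):

```python
import numpy as np

def newton_system(f, J, x0, tol=1e-12, maxiter=50):
    """Newton's method for f(x) = 0 with f: R^n -> R^n and Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(J(x), -fx)   # Newton step: solve J(x) s = -f(x)
        x = x + s                        # update x_{k+1} = x_k + s_k
    return x

# Illustrative system: x^2 + y^2 = 2 and x = y, with a root at (1, 1)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton_system(f, J, [2.0, 0.5])
```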
Newtonโs method - summary
- Typically quadratic convergence (local convergence)
- Computing the Jacobian matrix requires the equivalent of n² function evaluations for a dense problem (where every component of f(x) depends on every component of x).
- Computation of the Jacobian may be cheaper if the matrix is sparse.
- The cost of calculating the step s is O(n³) for a dense Jacobian matrix (factorization + solve).
- If the same Jacobian matrix J(x_k) is reused for several consecutive iterations, the convergence rate will suffer accordingly (trade-off between cost per iteration and number of iterations needed for convergence).
Example
Consider solving the nonlinear system of equations
2 = 2y + x
4 = x² + 4y²
What is the result of applying one iteration of Newton's method with the following initial guess? x_0 = [1, …]
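One iteration can be worked through numerically. Only the first component of the initial guess survives on the slide; this sketch assumes the full guess is x_0 = (1, 1):

```python
import numpy as np

# The system 2 = 2y + x, 4 = x^2 + 4y^2, rewritten as f(x, y) = 0
f = lambda v: np.array([v[0] + 2*v[1] - 2, v[0]**2 + 4*v[1]**2 - 4])
J = lambda v: np.array([[1.0, 2.0], [2*v[0], 8*v[1]]])   # Jacobian

x0 = np.array([1.0, 1.0])           # assumed initial guess
s = np.linalg.solve(J(x0), -f(x0))  # one Newton step: J(x0) s = -f(x0)
x1 = x0 + s
print(x1)                           # [-0.5, 1.25]
```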
Finite Difference
Find an approximation for the Jacobian matrix:

J(x)_{ij} = ∂f_i(x)/∂x_j

In 1D:  df(x)/dx ≈ (f(x + h) − f(x)) / h

In ND:  J(x)_{ij} = ∂f_i(x)/∂x_j ≈ (f_i(x + h e_j) − f_i(x)) / h
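The ND formula above builds the Jacobian one column at a time, perturbing a single component of x per column. A minimal sketch (the step size h and the example function are choices of this sketch):

```python
import numpy as np

def fd_jacobian(f, x, h=1e-6):
    """Forward-difference approximation of the Jacobian J_ij = df_i/dx_j."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h                     # perturb one component: x + h*e_j
        J[:, j] = (f(xp) - fx) / h     # column j of the Jacobian
    return J

# Example: f(x, y) = (x^2 + y, y*sin(x)); exact J at (1, 2) is
# [[2, 1], [2*cos(1), sin(1)]]
f = lambda v: np.array([v[0]**2 + v[1], v[1] * np.sin(v[0])])
J = fd_jacobian(f, [1.0, 2.0])
```

Each column costs one extra evaluation of f, so the full Jacobian costs n additional function evaluations per Newton iteration.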