Nonlinear Equations: How can we solve these equations? (PowerPoint PPT Presentation)



SLIDE 1

Nonlinear Equations

SLIDE 2

How can we solve these equations?

  • Spring force:

F = k x. What is the displacement when F = 2 N?

k = 40 N/m

SLIDE 3

How can we solve these equations?

  • Drag force:

F = 0.5 c_d ρ A v^2 = d(v). What is the velocity when F = 20 N?

0.5 c_d ρ A = 0.5 kg/m

SLIDE 4

f(v) = d(v) - F = 0

0.5 c_d ρ A = 0.5 kg/m

Nonlinear Equations in 1D

Goal: Solve f(x) = 0 for f: ℝ → ℝ. Find the root (zero) of the nonlinear function f(x). This is often called root finding.
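For instance, the drag problem above fits this form directly. A minimal Python sketch (the lumped constant 0.5 c_d ρ A = 0.5 kg/m and the target F = 20 N are read off the slides' numbers and should be treated as assumptions):

```python
import math

F = 20.0   # target drag force in newtons (from the slide)
C = 0.5    # assumed lumped constant 0.5 * c_d * rho * A, in kg/m

def residual(v):
    """f(v) = drag(v) - F; a root of this residual is the sought velocity."""
    return C * v**2 - F

# For this simple model the root is also known in closed form,
# which lets us sanity-check any iterative solver later on.
v_exact = math.sqrt(F / C)
print(v_exact)
print(residual(v_exact))
```

Real drag laws are rarely this simple, which is exactly why general root-finding methods are needed.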

SLIDE 5

Bisection method

SLIDE 6

Bisection method

SLIDE 7

Convergence

An iterative method converges with rate r if:

lim_{k→∞} ||e_{k+1}|| / ||e_k||^r = C,  0 < C < ∞

r = 1: linear convergence

Linear convergence gains a constant number of accurate digits each step (and C < 1 matters!). For example: Power Iteration.

SLIDE 8

Convergence

An iterative method converges with rate r if:

lim_{k→∞} ||e_{k+1}|| / ||e_k||^r = C,  0 < C < ∞

r = 1: linear convergence
r > 1: superlinear convergence
r = 2: quadratic convergence

Linear convergence gains a constant number of accurate digits each step (and C < 1 matters!). Quadratic convergence doubles the number of accurate digits in each step; however, this only starts to matter once ||e_k|| is small (and then C does not matter much).
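These rates can be made concrete with synthetic error sequences (the starting error and constant below are illustrative choices, not from the slides): e_{k+1} = C e_k for the linear case and e_{k+1} = C e_k^2 for the quadratic case.

```python
import math

C = 0.5
e_lin = e_quad = 0.1   # same illustrative starting error for both sequences
for k in range(5):
    e_lin = C * e_lin        # linear: gains log10(1/C) ~ 0.3 digits per step
    e_quad = C * e_quad**2   # quadratic: roughly doubles the digit count per step
    # print the number of accurate digits, -log10(error), for each sequence
    print(k, -math.log10(e_lin), -math.log10(e_quad))
```

After five steps the linear sequence has gained about 1.5 digits in total, while the quadratic one is already accurate to dozens of digits.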

SLIDE 9

Convergence

  • The bisection method does not estimate x_k, the approximation of the desired root x. It instead finds an interval, smaller than a given tolerance, that contains the root.

SLIDE 10

Example:

Consider the nonlinear equation f(x) = 0.5 x^2 - 2 and solving f(x) = 0 using the bisection method. For each of the initial intervals below, how many iterations are required to ensure the root is accurate within 2^-6?

A) [-10, -1.8]
B) [-3, -2.1]
C) [-4, 1.9]
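The counts follow from interval halving: after n bisection steps the bracketing interval has length (b - a)/2^n, so n must satisfy n ≥ log2((b - a)/tol). A sketch of the count (the tolerance 2^-6 and the "interval length below tolerance" accuracy convention are assumptions; a different convention shifts each answer by one):

```python
import math

def bisection_iterations(a, b, tol):
    # Smallest integer n with (b - a) / 2**n <= tol,
    # i.e. n >= log2((b - a) / tol).
    return math.ceil(math.log2((b - a) / tol))

tol = 2.0 ** -6   # assumed tolerance from the question
for a, b in [(-10.0, -1.8), (-3.0, -2.1), (-4.0, 1.9)]:
    print(f"[{a}, {b}]: {bisection_iterations(a, b, tol)} iterations")
```

Note the count depends only on the interval length, not on where the root sits inside it.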

SLIDE 11

Bisection method

Algorithm:

1. Take two points, a and b, on each side of the root such that f(a) and f(b) have opposite signs.
2. Calculate the midpoint m = (a + b)/2.
3. Evaluate f(m) and use m to replace either a or b, keeping the signs of the endpoints opposite.

SLIDE 12

Bisection Method - summary

  • The function must be continuous, with a root in the interval [a, b]
  • Requires only one function evaluation per iteration! (Only the first iteration requires two function evaluations.)
  • Given the initial interval [a, b], the length of the interval after n iterations is (b - a) / 2^n
  • Has linear convergence
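A minimal implementation of the summary above (stopping when the interval length drops below a tolerance, as the slides describe; the example function is the one from the slide-10 question):

```python
def bisect(f, a, b, tol=1e-8):
    """Find a root of continuous f in [a, b], where f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)        # two evaluations, on the first iteration only
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)              # one function evaluation per iteration
        if fa * fm <= 0:       # sign change in [a, m]: keep the left half
            b, fb = m, fm
        else:                  # sign change in [m, b]: keep the right half
            a, fa = m, fm
    return 0.5 * (a + b)

# Slide-10 example: f(x) = 0.5*x**2 - 2 has a root at x = -2
print(bisect(lambda x: 0.5 * x**2 - 2, -3.0, -1.8))
```

Storing fa alongside a is what keeps the cost at one evaluation per iteration.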

SLIDE 13

Newton’s method

  • Recall we want to solve f(x) = 0 for f: ℝ → ℝ
  • The Taylor expansion

f(x_k + h) ≈ f(x_k) + f'(x_k) h

gives a linear approximation of the nonlinear function f near x_k.
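Setting this linear model to zero, f(x_k) + f'(x_k) h = 0, gives the Newton step h = -f(x_k)/f'(x_k). A quick numerical check of the linear approximation, reusing the slide-10 function as a stand-in:

```python
# Compare f(x0 + h) with the linear model f(x0) + f'(x0)*h for shrinking h.
f  = lambda x: 0.5 * x**2 - 2.0
fp = lambda x: x                  # derivative of 0.5*x**2 - 2
x0 = 3.0
for h in [0.1, 0.01, 0.001]:
    exact  = f(x0 + h)
    linear = f(x0) + fp(x0) * h
    print(h, abs(exact - linear))  # error is exactly 0.5*h**2 for this f
```

The error shrinks like h^2, which is why the linear model is trustworthy close to x_k but not far from it.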

SLIDE 14

Newton’s method

(Figure: one Newton iteration on the graph of f, starting from x_0)

SLIDE 15

Example

Consider solving the nonlinear equation 5 = 2.0 e^x + x^3. What is the result of applying one iteration of Newton's method with the initial guess x_0 = 0, i.e., what is x_1?

A) -2
B) 0.75
C) -1.5
D) 1.5
E) 3.0

SLIDE 16

Newton’s Method - summary

  • Must be started with an initial guess close enough to the root (convergence is only local); otherwise it may not converge at all.
  • Requires a function evaluation and a first-derivative evaluation at each iteration (think of it as two function evaluations).
  • Typically has quadratic convergence:

lim_{k→∞} ||e_{k+1}|| / ||e_k||^2 = C,  0 < C < ∞

  • What can we do when the derivative evaluation is too costly (or too difficult)?
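The method summarized above can be sketched as follows (a minimal implementation; the test function f(x) = 0.5 x^2 - 2 and the starting guess are illustrative choices reused from the slide-10 question):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k). Convergence is only local."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is small enough
            break
        x = x - fx / fprime(x)     # one f and one f' evaluation per step
    return x

# f(x) = 0.5*x**2 - 2 with f'(x) = x, starting near the root x = -2
print(newton(lambda x: 0.5 * x**2 - 2, lambda x: x, x0=-3.0))
```

With quadratic convergence this reaches full double precision in a handful of iterations, but a guess far from any root can diverge.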

SLIDE 17

Secant method

Also derived from the Taylor expansion, but instead of using f'(x_k), it approximates the tangent with the secant line:

x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))

SLIDE 18

Secant Method - summary

  • Still only local convergence
  • Requires only one function evaluation per iteration (only the first iteration requires two function evaluations)
  • Needs two starting guesses
  • Has slower convergence than Newton's method: superlinear convergence

lim_{k→∞} ||e_{k+1}|| / ||e_k||^r = C,  1 < r < 2
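A minimal sketch of the secant iteration, keeping the one-evaluation-per-iteration bookkeeping from the summary explicit (the test function is again the slide-10 example):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: the tangent slope f'(x_k) is replaced by the secant slope."""
    f0, f1 = f(x0), f(x1)      # two evaluations, for the starting guesses only
    for _ in range(maxit):
        if abs(f1) < tol:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1        # shift the history: old (x1, f1) becomes (x0, f0)
        x1, f1 = x2, f(x2)     # one new function evaluation per iteration
    return x1

# Same example: f(x) = 0.5*x**2 - 2, two guesses near the root x = -2
print(secant(lambda x: 0.5 * x**2 - 2, -3.0, -2.5))
```

Reusing the previous function value is what makes secant cheaper than Newton per step, at the cost of the slightly slower (golden-ratio) convergence rate.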

SLIDE 19

1D methods for root finding:

| Method | Update | Convergence | Cost |
| --- | --- | --- | --- |
| Bisection | Check signs of f(a) and f(b); interval length e_n = (b - a) / 2^n | Linear (r = 1 and C = 0.5) | One function evaluation per iteration; no need to compute derivatives |
| Secant | x_{k+1} = x_k + h, with h = -f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})) | Superlinear (r = 1.618); local convergence, depends on the initial guesses | One function evaluation per iteration (two only for the initial guesses); no need to compute derivatives |
| Newton | x_{k+1} = x_k + h, with h = -f(x_k) / f'(x_k) | Quadratic (r = 2); local convergence, depends on the initial guess | Two function evaluations per iteration (f and f'); requires first-order derivatives |
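The three updates in the table can be compared side by side on the slide-10 function, counting function evaluations (a rough sketch; the tolerances and starting points are arbitrary choices):

```python
calls = 0
def f(x):
    global calls
    calls += 1
    return 0.5 * x**2 - 2.0          # slide-10 example, roots at x = +/-2

def run_bisection(a, b, tol):
    fa, fb = f(a), f(b)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def run_secant(x0, x1, tol):
    f0, f1 = f(x0), f(x1)
    while abs(f1) > tol:
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

def run_newton(x, tol):
    fx = f(x)
    while abs(fx) > tol:
        x = x - fx / x               # f'(x) = x for this particular f
        fx = f(x)
    return x

for name, solve in [("bisection", lambda: run_bisection(1.0, 3.0, 1e-10)),
                    ("secant",    lambda: run_secant(1.0, 3.0, 1e-10)),
                    ("newton",    lambda: run_newton(3.0, 1e-10))]:
    calls = 0
    root = solve()
    print(f"{name:9s} root = {root:.10f}  f-evals = {calls}")
```

Bisection needs dozens of evaluations to reach the tolerance, while secant and Newton finish in a handful, matching the convergence rates in the table.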