SLIDE 1

A Secant Method for Nonsmooth Optimization

Asef Nazari

CSIRO, Melbourne

CARMA Workshop on Optimization, Nonlinear Analysis, Randomness and Risk, Newcastle, Australia, 12 July 2014

SLIDE 2

Outline

1 Background
    The Problem
    Optimization Algorithms and Components
    Subgradient and Subdifferential

2 Secant Method
    Definitions
    Optimality Condition and Descent Direction
    The Secant Algorithm
    Numerical Results

SLIDES 3-4

The problem

Unconstrained Optimization Problem

min_{x ∈ R^n} f(x)

f : R^n → R locally Lipschitz.

Why just unconstrained?

SLIDES 5-7

Transforming Constrained into Unconstrained

Constrained Optimization Problem

min_{x ∈ Y} f(x),  where Y ⊂ R^n.

Distance function

dist(x, Y) = min_{y ∈ Y} ‖y − x‖

Theory of penalty functions (under some conditions):

min_{x ∈ R^n} f(x) + σ dist(x, Y)

f(x) + σ dist(x, Y) is a nonsmooth function.
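A minimal sketch (not from the talk) of this exact-penalty idea, assuming Y is a box so that dist(x, Y) has a closed form via projection:

```python
import numpy as np

def dist_to_box(x, lower, upper):
    """Euclidean distance from x to the box Y = {y : lower <= y <= upper}."""
    projection = np.clip(x, lower, upper)          # closest point of Y to x
    return np.linalg.norm(x - projection)

def penalized(f, x, lower, upper, sigma=10.0):
    """Exact-penalty objective f(x) + sigma*dist(x, Y); nonsmooth on the boundary of Y."""
    return f(x) + sigma * dist_to_box(x, lower, upper)

# usage: a smooth f becomes a nonsmooth unconstrained objective
f = lambda x: float(np.sum(x**2))
x = np.array([1.5, -0.2])
print(penalized(f, x, lower=np.zeros(2), upper=np.ones(2)))
```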

SLIDES 8-10

Sources of Nonsmooth Problems

Minimax Problem

min_{x ∈ R^n} f(x),  where f(x) = max_{1 ≤ i ≤ m} f_i(x)

System of Nonlinear Equations

f_i(x) = 0, i = 1, . . . , m; we often solve instead

min_{x ∈ R^n} ‖f̄(x)‖,  where f̄(x) = (f_1(x), . . . , f_m(x))
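For illustration (the f_i below are made up, not from the talk), a toy minimax objective showing where the max of smooth pieces is nonsmooth:

```python
import numpy as np

# A toy minimax objective f(x) = max_i f_i(x); the max of smooth pieces
# is nonsmooth wherever two pieces attain the maximum simultaneously.
def f(x):
    pieces = np.array([x[0]**2 + x[1]**2,             # f_1
                       (x[0] - 1.0)**2 + x[1]**2])    # f_2
    return pieces.max()

# Along the line x_1 = 0.5 both pieces are equal, so f has a kink there.
print(f(np.array([0.5, 0.0])), f(np.array([0.6, 0.0])))
```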

SLIDE 11

Structure of Optimization Algorithms

Step 1 Initial step: x_0 ∈ R^n
Step 2 Termination criteria
Step 3 Find a descent direction d_k at x_k
Step 4 Find a step size α_k with f(x_k + α_k d_k) < f(x_k)
Step 5 Loop: set x_{k+1} = x_k + α_k d_k and go to Step 2.
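A minimal sketch of this generic template in Python; direction_rule and step_rule are illustrative plug-in callables, not names from the talk:

```python
import numpy as np

def descent_loop(f, x0, direction_rule, step_rule, tol=1e-8, max_iter=1000):
    """Generic template: Steps 1-5 of the slide, with plug-in components."""
    x = np.asarray(x0, dtype=float)                  # Step 1: initial point
    for _ in range(max_iter):
        d = direction_rule(x)                        # Step 3: descent direction
        if np.linalg.norm(d) <= tol:                 # Step 2: termination test
            break
        alpha = step_rule(f, x, d)                   # Step 4: step size
        x = x + alpha * d                            # Step 5: update and loop
    return x
```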

SLIDES 12-13

Classification of Algorithms Based on dk and αk

Directions

d_k = −∇f(x_k)                Steepest Descent Method
d_k = −H^{-1}(x_k) ∇f(x_k)    Newton Method
d_k = −B^{-1} ∇f(x_k)         Quasi-Newton Method

Step sizes

h(α) = f(x_k + α d_k)

1. Solve h′(α) = 0 exactly: exact line search
2. Solve it loosely: inexact line search

Trust region methods
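A minimal sketch of an inexact (Armijo/backtracking) line search on h(α) = f(x_k + α d_k); the constants are illustrative defaults and a gradient oracle grad is assumed:

```python
import numpy as np

def backtracking_step(f, grad, x, d, alpha0=1.0, beta=0.5, c=1e-4):
    """Inexact (Armijo) line search on h(alpha) = f(x + alpha*d):
    shrink alpha until a sufficient-decrease condition holds.
    d must be a descent direction, i.e. h'(0) < 0."""
    alpha = alpha0
    slope = float(np.dot(grad(x), d))    # h'(0)
    while f(x + alpha * d) > f(x) + c * alpha * slope:
        alpha *= beta
    return alpha
```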

SLIDE 14

When the objective function is smooth

Descent direction: f′(x_k, d) < 0

f′(x_k, d) = ⟨∇f(x_k), d⟩ < 0  ⇒  d is a descent direction

−∇f(x_k) is the steepest descent direction

‖∇f(x_k)‖ ≤ ε is a good stopping criterion (First Order Necessary Condition)

SLIDES 15-21

Subgradient

Basic inequality for a convex differentiable function:

f(x) ≥ f(y) + ∇f(y)ᵀ(x − y)   ∀ x ∈ Dom(f)

g ∈ R^n is a subgradient of a convex function f at y if

f(x) ≥ f(y) + gᵀ(x − y)   ∀ x ∈ Dom(f)

Example: for f(x) = |x| on R,
    if x < 0, g = −1
    if x > 0, g = 1
    if x = 0, g ∈ [−1, 1]
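A tiny worked example (not from the slides): a subgradient oracle for f(x) = |x| and a numerical check of the subgradient inequality at y = 0:

```python
import numpy as np

# A subgradient oracle for f(x) = |x| on R (any g in [-1, 1] is valid at 0).
def subgrad_abs(x):
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0          # one admissible choice from [-1, 1]

# Check the subgradient inequality f(x) >= f(y) + g*(x - y) at y = 0.
y, g = 0.0, subgrad_abs(0.0)
xs = np.linspace(-2, 2, 9)
print(all(abs(x) >= abs(y) + g * (x - y) for x in xs))   # True
```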

SLIDES 22-25

Subdifferential

The set of all subgradients of f at x, ∂f(x), is called the subdifferential.

For f(x) = |x|:
    ∂f(0) = [−1, 1]
    ∂f(x) = {1} for x > 0
    ∂f(x) = {−1} for x < 0

f is differentiable at x  ⟺  ∂f(x) = {∇f(x)}

For Lipschitz functions (Clarke):

∂f(x_0) = conv{ lim ∇f(x_i) : x_i → x_0, ∇f(x_i) exists }
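A rough sketch, in the spirit of gradient sampling (mentioned later in the talk), of approximating the Clarke construction by collecting gradients at nearby points where f is differentiable; the helper name and the example are assumptions, not the talk's method:

```python
import numpy as np

def sampled_gradients(grad, x, radius=1e-4, num_samples=20, rng=None):
    """Approximate the Clarke subdifferential at x by collecting gradients
    at random points near x where f is (almost surely) differentiable.
    The convex hull of these samples approximates conv{lim grad f(x_i)}."""
    rng = np.random.default_rng(rng)
    points = x + radius * rng.standard_normal((num_samples, x.size))
    return np.array([grad(p) for p in points])

# usage with f(x) = |x_1| + |x_2| (gradient defined off the axes)
grad = lambda p: np.sign(p)
samples = sampled_gradients(grad, np.zeros(2), rng=0)
print(samples.min(axis=0), samples.max(axis=0))   # components span roughly [-1, 1]
```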

SLIDES 26-27

Optimality Condition

For a smooth convex function:

f(x*) = inf_{x ∈ Dom(f)} f(x)  ⟺  0 = ∇f(x*)

For a nonsmooth convex function:

f(x*) = inf_{x ∈ Dom(f)} f(x)  ⟺  0 ∈ ∂f(x*)

SLIDES 28-29

Directional derivative

In the smooth case:

f′(x_k, d) = lim_{λ→0+} [f(x_k + λd) − f(x_k)] / λ = ⟨∇f(x_k), d⟩

In the nonsmooth case:

f′(x_k, d) = lim_{λ→0+} [f(x_k + λd) − f(x_k)] / λ = sup_{g ∈ ∂f(x_k)} ⟨g, d⟩

SLIDES 30-31

Descent Direction

In the smooth case: d is a descent direction if f′(x_k, d) = ⟨∇f(x_k), d⟩ < 0 (−∇f(x_k) is the steepest descent direction).

In the nonsmooth case: d is a descent direction if f′(x_k, d) < 0. It is proved that

d = −argmin{ ‖g‖ : g ∈ ∂f(x_k) }

SLIDE 32

In Summary

Optimality condition

f(x*) = inf_{x ∈ Dom(f)} f(x)  ⟺  0 ∈ ∂f(x*)

Directional derivative

f′(x_k, d) = lim_{λ→0+} [f(x_k + λd) − f(x_k)] / λ = sup_{g ∈ ∂f(x_k)} ⟨g, d⟩

Steepest descent direction

d = −argmin{ ‖g‖ : g ∈ ∂f(x_k) }
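The steepest-descent formula above leads to a min-norm problem. A minimal sketch, assuming ∂f(x_k) is replaced by the convex hull of finitely many known subgradients, solving the resulting small quadratic program with SciPy (an illustration, not the talk's algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(subgradients):
    """Minimum-norm element of conv{g_1, ..., g_m}:
    minimize ||sum_i lam_i g_i||^2 over the unit simplex (a small QP)."""
    G = np.asarray(subgradients, dtype=float)        # shape (m, n)
    m = G.shape[0]
    objective = lambda lam: float(np.dot(lam @ G, lam @ G))
    result = minimize(objective, np.full(m, 1.0 / m),
                      method="SLSQP", bounds=[(0.0, 1.0)] * m,
                      constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
    return result.x @ G

# usage: subgradients of f(x) = |x_1| + |x_2| near the origin
v = min_norm_element([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
print(v)            # approximately the zero vector, so 0 is in the hull (optimality)
d = -v              # otherwise -v would give a descent direction
```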

SLIDE 33

Methods for nonsmooth problems

The way we treat ∂f(x) leads to different types of algorithms in nonsmooth optimization:
    Subgradient method (one subgradient at each iteration)
    Bundle method (a bundle of subgradients at each iteration)
    Gradient sampling method
    Smoothing techniques
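A minimal sketch of the first item on the list, the subgradient method with diminishing step sizes; the example function and step rule are illustrative:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=500):
    """Move along minus one subgradient per iteration with diminishing steps;
    keep the best point seen (subgradient steps need not decrease f monotonically)."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        g = subgrad(x)
        x = x - (1.0 / k) * g          # diminishing step size 1/k
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# usage on f(x) = |x_1| + 2|x_2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])
sg = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
print(subgradient_method(f, sg, [3.0, -2.0]))
```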

SLIDES 34-37

Definition of r-secant

S_1 = {g ∈ R^n : ‖g‖ = 1}, the unit sphere in R^n.

For g ∈ S_1 define g_max = max{ |g_i|, i = 1, . . . , n }.

Take g ∈ S_1 with |g_j| = g_max, and v ∈ ∂f(x + rg).

The vector s = s(x, g, r) ∈ R^n, s = (s_1, . . . , s_n), with

s_i = v_i,  i = 1, . . . , n,  i ≠ j,  and

s_j = [ f(x + rg) − f(x) − r Σ_{i=1, i≠j}^{n} s_i g_i ] / (r g_j)

is called an r-secant of the function f at the point x in the direction g.
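A minimal sketch of this definition as code, assuming a subgradient oracle subgrad for f; the names and the example function are illustrative:

```python
import numpy as np

def r_secant(f, subgrad, x, g, r):
    """Compute s(x, g, r): copy a subgradient v of f at x + r*g in all coordinates
    except j = argmax |g_i|, then choose s_j so that
    f(x + r*g) - f(x) = r * <s, g> holds exactly."""
    v = subgrad(x + r * g)
    j = int(np.argmax(np.abs(g)))              # index with |g_j| = g_max
    s = v.astype(float).copy()
    rest = r * (s @ g - s[j] * g[j])           # r * sum_{i != j} s_i g_i
    s[j] = (f(x + r * g) - f(x) - rest) / (r * g[j])
    return s

# usage on f(x) = |x_1| + |x_2| with a unit direction g
f = lambda x: abs(x[0]) + abs(x[1])
sg = lambda x: np.sign(x)
x, g, r = np.array([1.0, -1.0]), np.array([0.6, 0.8]), 0.5
s = r_secant(f, sg, x, g, r)
print(np.isclose(f(x + r * g) - f(x), r * np.dot(s, g)))   # True by construction
```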

SLIDES 38-42

Facts about r-secant

If n = 1,  s = [f(x + rg) − f(x)] / (rg)

Mean Value Theorem for r-secants:

f(x + rg) − f(x) = r ⟨s(x, g, r), g⟩

Set of all possible r-secants of the function f at the point x:

S_r f(x) = {s ∈ R^n : ∃ g ∈ S_1 : s = s(x, g, r)}

1. S_r f(x) is compact for r > 0
2. The mapping (x, r) → S_r f(x), r > 0, is closed and upper semicontinuous

SLIDES 43-44

A new set

S_0^c f(x) = conv{ v ∈ R^n : ∃ (g ∈ S_1, r_k → +0, k → +∞) : v = lim_{k→+∞} s(x, g, r_k) }

For a regular and semismooth function f at a point x ∈ R^n:  ∂f(x) = S_0^c f(x).

SLIDE 45

Optimality condition

Let x ∈ R^n be a local minimizer of the function f, and let f be directionally differentiable at x. Then 0 ∈ S_0^c f(x).

x ∈ R^n is an r-stationary point of f on R^n if 0 ∈ S_r^c f(x).

x ∈ R^n is an (r, δ)-stationary point of f on R^n if 0 ∈ S_r^c f(x) + B_δ, where B_δ = {v ∈ R^n : ‖v‖ ≤ δ}.

SLIDES 46-48

Descent Direction

If x ∈ R^n is not an r-stationary point of f on R^n, then 0 ∉ S_r^c f(x), and we can compute a descent direction using the set S_r^c f(x).

Let x ∈ R^n and, for a given r > 0,

min{ ‖v‖ : v ∈ S_r^c f(x) } = ‖v_0‖ > 0.

Then for g_0 = −‖v_0‖^{-1} v_0,

f(x + r g_0) − f(x) ≤ −r ‖v_0‖.

This leads to the subproblem:  minimize ‖v‖²  subject to  v ∈ S_r^c f(x).

SLIDES 49-53

An algorithm for descent direction (Alg1)

Step 1. Compute an r-secant s^1 = s(x, g^1, r). Set W^1(x) = {s^1} and k = 1.
Step 2. Compute ‖w^k‖² = min{ ‖w‖² : w ∈ co W^k(x) }. If ‖w^k‖ ≤ δ, then stop. Otherwise go to Step 3.
Step 3. Set g^{k+1} = −‖w^k‖^{-1} w^k.
Step 4. If f(x + r g^{k+1}) − f(x) ≤ −c r ‖w^k‖, then stop. Otherwise go to Step 5.
Step 5. Compute the r-secant s^{k+1} = s(x, g^{k+1}, r) in the direction g^{k+1}, construct the set W^{k+1}(x) = co( W^k(x) ∪ {s^{k+1}} ), set k = k + 1 and go to Step 2.
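A rough sketch of Alg 1, reusing the r_secant and min_norm_element helpers from the earlier sketches; the initial direction and the iteration cap are illustrative choices, not taken from the talk:

```python
import numpy as np

def descent_direction(f, subgrad, x, r, delta=1e-6, c=0.2, max_iter=50):
    """Alg 1 sketch: returns (g, w) with g a descent direction, or (None, w)
    when the min-norm element of the bundle is already small (stationarity)."""
    g = np.zeros_like(x); g[0] = 1.0                      # Step 1: some initial g^1 in S_1
    W = [r_secant(f, subgrad, x, g, r)]
    for _ in range(max_iter):
        w = min_norm_element(W)                           # Step 2: min-norm over co W^k(x)
        if np.linalg.norm(w) <= delta:
            return None, w                                # (r, delta)-stationarity detected
        g = -w / np.linalg.norm(w)                        # Step 3
        if f(x + r * g) - f(x) <= -c * r * np.linalg.norm(w):
            return g, w                                   # Step 4: sufficient decrease found
        W.append(r_secant(f, subgrad, x, g, r))           # Step 5: enrich the bundle
    return g, w                                           # give up after max_iter enrichments
```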

SLIDES 54-57

The secant method for an (r, δ)-stationary point (Alg 2)

Step 1. Choose x_0 ∈ R^n and set k = 0.
Step 2. Apply Alg 1 at x = x_k. It returns either ‖v_k‖ ≤ δ or a search direction g_k = −‖v_k‖^{-1} v_k.
Step 3. If ‖v_k‖ ≤ δ, then stop. Otherwise go to Step 4.
Step 4. Set x_{k+1} = x_k + σ_k g_k, where σ_k is defined as

σ_k = argmax{ σ ≥ 0 : f(x_k + σ g_k) − f(x_k) ≤ −c_2 σ ‖v_k‖ }.

Set k = k + 1 and go to Step 2.
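The slide does not specify how σ_k is computed in practice; one simple hedged implementation is to grow σ while the sufficient-decrease test still holds:

```python
import numpy as np

# Approximate the step-size rule of Step 4 (an illustrative choice, not from the
# talk): start from sigma = r, keep doubling while the sufficient-decrease test
# holds, and return the last accepted value.
def secant_step_size(f, x, g, v_norm, r, c2=0.1, max_doublings=30):
    sigma, accepted = r, 0.0
    for _ in range(max_doublings):
        if f(x + sigma * g) - f(x) <= -c2 * sigma * v_norm:
            accepted = sigma
            sigma *= 2.0            # try a longer step
        else:
            break
    return accepted
```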

SLIDES 58-60

The secant method (Alg 3)

Step 1. Choose x_0 ∈ R^n and set k = 0.
Step 2. Apply Alg 2 to x_k with r = r_k and δ = δ_k. Alg 2 terminates after a finite number of iterations p > 0 and finds an (r_k, δ_k)-stationary point x_{k+1}.
Step 3. Set k = k + 1 and go to Step 2.
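A rough end-to-end sketch of this outer loop, assuming the descent_direction and secant_step_size helpers above; the halving of r_k and δ_k is an illustrative choice, not taken from the talk:

```python
import numpy as np

def secant_method(f, subgrad, x0, r0=1.0, delta0=1e-2, outer_iters=20):
    """Alg 3 sketch: run Alg 2 at the current (r, delta), then tighten both."""
    x, r, delta = np.asarray(x0, dtype=float), r0, delta0
    for _ in range(outer_iters):
        while True:                                       # Alg 2 at the current (r, delta)
            g, w = descent_direction(f, subgrad, x, r, delta)
            if g is None:                                 # (r, delta)-stationary point found
                break
            sigma = secant_step_size(f, x, g, np.linalg.norm(w), r)
            if sigma == 0.0:
                break
            x = x + sigma * g
        r, delta = 0.5 * r, 0.5 * delta                   # tighten and repeat
    return x
```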

SLIDE 61

Convergence Theorem

Theorem. Assume that the function f is locally Lipschitz and that the level set L(x_0) = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded for the starting point x_0 ∈ R^n. Then every accumulation point of the sequence {x_k} belongs to the set X^0 = {x ∈ R^n : 0 ∈ ∂f(x)}.

SLIDES 62-63

Results