Unconstrained Optimization
◮ Optimization problem
  Given f : Rⁿ → R, find x* ∈ Rⁿ such that x* = argmin_x f(x)
◮ Global minimum and local minimum
◮ Optimality
  ◮ Necessary condition: ∇f(x*) = 0
  ◮ Sufficient condition: ∇f(x*) = 0 and the Hessian ∇²f(x*) is positive definite
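As a small sketch, the two optimality conditions can be checked numerically for a simple quadratic (the function below is an illustrative assumption, not from the slides): the gradient should vanish at the candidate point, and the Hessian eigenvalues should all be positive.

```python
import numpy as np

# Illustrative example: f(x) = x1^2 + 3*x2^2, candidate minimizer x* = (0, 0).
grad = lambda x: np.array([2 * x[0], 6 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 6.0]])

x_star = np.array([0.0, 0.0])
necessary = np.allclose(grad(x_star), 0.0)                  # grad f(x*) = 0
sufficient = bool(np.all(np.linalg.eigvalsh(hess(x_star)) > 0))  # Hessian PD
```

For this f both checks pass, so x* = (0, 0) is a (strict) local minimum; since f is convex it is also the global minimum.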
◮ Taylor series approximation of f at the k-th iterate x_k:
  f(x) ≈ f(x_k) + ∇f(x_k)ᵀ(x − x_k) + ½ (x − x_k)ᵀ∇²f(x_k)(x − x_k)
◮ Differentiating with respect to x and setting the result equal to zero gives the Newton update:
  x_{k+1} = x_k − [∇²f(x_k)]⁻¹ ∇f(x_k)
◮ Newton’s method converges quadratically when x₀ is near a minimum.
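The Newton update above can be sketched in a few lines; the test function, its gradient, and its Hessian below are illustrative assumptions, not from the slides.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - [Hess f(x_k)]^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve the linear system rather than forming the inverse explicitly.
        x = x - np.linalg.solve(hess(x), g)
    return x

# Example: f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2, minimized at (1, -2).
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])
x_star = newton(grad, hess, x0=[5.0, 5.0])
```

Since this example is quadratic, its Taylor expansion is exact and Newton’s method lands on the minimizer in a single step.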
◮ Directional derivative of f at x in the direction u:
  D_u f(x) = lim_{h→0} [f(x + hu) − f(x)] / h = uᵀ∇f(x)
◮ To minimize f(x), we would like to find the direction u in which f decreases the fastest.
◮ Using the directional derivative,
  min_u uᵀ∇f(x) = min_u ‖u‖₂ ‖∇f(x)‖₂ cos θ,
  where θ is the angle between u and ∇f(x); the minimum is attained at cos θ = −1, i.e., when u points opposite to the gradient.
◮ u = −∇f(x) is called the steepest descent direction.
◮ The steepest descent algorithm: x_{k+1} = x_k − τ∇f(x_k)
◮ How to pick the step size τ?
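One common answer to the step-size question (not shown on these slides) is a backtracking line search: start with a trial τ and halve it until the step gives sufficient decrease. A sketch, with an illustrative quadratic test function:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=10_000):
    """Steepest descent x_{k+1} = x_k - tau*grad f(x_k), with backtracking
    (Armijo) line search to choose tau at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        tau = 1.0
        # Halve tau until the Armijo sufficient-decrease condition holds.
        while f(x - tau * g) > f(x) - 1e-4 * tau * (g @ g):
            tau *= 0.5
        x = x - tau * g
    return x

# Example: quadratic bowl f(x) = x1^2 + 4*x2^2, minimum at the origin.
f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
x_star = steepest_descent(f, grad, x0=[3.0, -2.0])
```

A fixed small τ also works (and is what the gradient-descent pseudocode below uses), but backtracking adapts the step to the local scale of f.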
◮ Let A ∈ R^{m×n} and b = (b_i) ∈ R^m
◮ The least squares problem, also known as linear regression:
  min_x f(x) = min_x ½ ‖Ax − b‖₂² = min_x ½ Σ_{i=1}^m r_i(x)²,
  where r_i(x) = a_iᵀx − b_i is the i-th residual (a_iᵀ is the i-th row of A)
◮ Gradient: ∇f(x) = AᵀAx − Aᵀb
◮ The method of gradient descent:
  ◮ set the step size τ and tolerance δ to small positive numbers
  ◮ while ‖AᵀAx − Aᵀb‖₂ > δ do
      x ← x − τ(AᵀAx − Aᵀb)
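The gradient-descent loop for least squares can be sketched directly from the pseudocode; the small data matrix below is an illustrative assumption.

```python
import numpy as np

def least_squares_gd(A, b, tau=1e-3, delta=1e-8, max_iter=100_000):
    """Gradient descent for min_x 0.5*||Ax - b||_2^2.
    Iterates x <- x - tau*(A^T A x - A^T b) until ||grad||_2 <= delta."""
    x = np.zeros(A.shape[1])
    AtA, Atb = A.T @ A, A.T @ b  # precompute; the gradient is AtA @ x - Atb
    for _ in range(max_iter):
        g = AtA @ x - Atb
        if np.linalg.norm(g) <= delta:
            break
        x = x - tau * g
    return x

# Illustrative overdetermined system: fit y = c0 + c1*t to four points
# lying exactly on y = 1 + t, so the least squares solution is (1, 1).
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([2.0, 3.0, 4.0, 5.0])
x_gd = least_squares_gd(A, b)
```

Note that convergence requires τ < 2/λ_max(AᵀA); here the fixed τ = 10⁻³ is comfortably below that bound for this small A.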
◮ min_x g(x) = g(x₁, x₂, …, xₙ)