
Nonlinear Optimization I, Dr. Thomas M. Surowiec, Humboldt University of Berlin



  1. Nonlinear Optimization I
Dr. Thomas M. Surowiec
Humboldt University of Berlin, Department of Mathematics
Summer 2013

  2. Course Information
1. Lectures: Tuesdays 15-17, RUD 26 1.304; Thursdays 11-13, RUD 1.013. Recitations: Thursdays 15-17, RUD 25 3.006.
2. Office hours: by appointment.
3. Office: RUD 2.425. Phone: x5497. Email: surowiec@math.hu-berlin.de. Website: http://www.mathematik.hu-berlin.de/~surowiec/

  3. Course Information & Grading
1. Lecture notes, slides: complete lecture notes (should be!) available at some point during the semester. Slides available following each lecture on my website. Slides = Lecture Notes − Proofs − Prose.
2. Homeworks: one homework series per week with 4 problems, to be solved on the board during the following recitation.
3. Final exam: oral.

  4. Definition of an Optimization Problem
Definition (Minimization Problem). Let X ⊂ R^n be an arbitrary set and f : X → R a continuous function. The problem is to find an x* ∈ X such that
    f(x*) ≤ f(x)  for all x ∈ X.
Alternate formulations:
    min f(x) subject to (s.t.) x ∈ X,   or   min_{x ∈ X} f(x).
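
As a first orientation, here is a minimal numerical sketch of min_{x ∈ X} f(x) in the unconstrained case X = R^n. The test function, starting point, and use of SciPy are my own illustrative choices, not part of the lecture.

```python
# Minimal sketch: min_{x in R^2} f(x) with SciPy; f is an illustrative choice.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock-type test function; any continuous f: R^n -> R would do.
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.zeros(2)            # arbitrary starting point
res = minimize(f, x0)       # unconstrained: X = R^2
print(res.x, res.fun)       # approximate minimizer x* and value f(x*)
```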

  5. Examples of Optimization Problems
Basic idea: given observations or measurements of a system of interest, how can we determine certain intrinsic properties?
Example. Let M be a point mass with mass m affixed to the end of a vertical spring. At equilibrium, M is located at the origin. K is the restoring force applied by the spring to M upon displacement. For small (vertical) displacements y, the spring applies K = −k̂ y (Hooke's law).

  6. Examples of Optimization Problems
Example (cont.). k̂ is an as yet unknown positive spring constant. y(t) := position of M at time t. Ignoring damping and friction, Newton's law states:
    m ÿ = −k̂ y,   (1)
i.e. mass m times acceleration ÿ equals the opposing force of the spring −k̂ y. (1) is called "the undamped harmonic oscillator equation." Usually, friction and damping forces behave proportionally to the velocity ẏ of M. We model this by −r ẏ with fixed r > 0.

  7. Examples of Optimization Problems
Example (cont.). Together with (1), we obtain
    m ÿ + r ẏ + k̂ y = 0.   (2)
Setting c := r/m, k := k̂/m, (2) becomes:
    ÿ + c ẏ + k y = 0.   (3)
Assume at time t = 0:
    ẏ(0) = y₀ ∈ R,  y(0) = 0.   (4)
Given end time T > 0, we consider this initial value problem (IVP) on the interval [0, T].

  8. Examples of Optimization Problems
Example (cont.). Suppose we do not know x := (c, k)^T. For j := 1, ..., N, we are given measurements {ȳ_j}_{j=1}^N of the spring deviation at time instances t_j = (j − 1) T / (N − 1). Let y(x; t) be the solution of our IVP for a given x. By solving the unconstrained optimization problem
    min_{x ∈ R²} f(x) := (1/2) Σ_{j=1}^N | y(x; t_j) − ȳ_j |²,   (5)
we seek to determine the spring constant k and damping factor c. Note y(·; t) is differentiable with respect to x provided c² − 4k ≠ 0. (5) is a "nonlinear least squares problem."
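
The sketch below illustrates how (5) might be solved numerically. The "true" parameters (c, k) = (0.3, 2.0), the noise level, and the SciPy routines are assumptions made for illustration; the slides do not prescribe a solver.

```python
# Hedged sketch of the nonlinear least-squares problem (5).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

T, N, y0 = 10.0, 50, 1.0                  # end time, #measurements, y'(0)
t = np.array([(j - 1) * T / (N - 1) for j in range(1, N + 1)])   # t_j

def solve_oscillator(c, k):
    # y'' + c y' + k y = 0 as a first-order system, y(0) = 0, y'(0) = y0.
    rhs = lambda _, z: [z[1], -c * z[1] - k * z[0]]
    sol = solve_ivp(rhs, (0.0, T), [0.0, y0], t_eval=t, rtol=1e-8)
    return sol.y[0]                       # positions y(x; t_j)

# Synthetic "measurements" from assumed true parameters (c, k) = (0.3, 2.0).
rng = np.random.default_rng(0)
y_meas = solve_oscillator(0.3, 2.0) + 1e-3 * rng.standard_normal(N)

# Residuals of (5); least_squares minimizes (1/2) * sum of squares.
residual = lambda x: solve_oscillator(x[0], x[1]) - y_meas
fit = least_squares(residual, x0=[1.0, 1.0])
print(fit.x)                              # recovered (c, k) ~ (0.3, 2.0)
```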

  9. Examples of Optimization Problems
Basic idea: deciding product capacity based on fixed and variable costs.
Example. Let x be the output quantity, K_v(x) the variable costs, and K_f(x) = c > 0 the fixed costs; K(x) := K_v(x) + K_f(x), x ∈ R, gives the total costs. Normally one looks for an x* which minimizes the total costs K(x), i.e.
    x* = argmin { K_f(x) + K_v(x) s.t. x ∈ R } = argmin { K_v(x) s.t. x ∈ R }.   (6)
In general, x* is not unique. We therefore write:
    x* ∈ argmin { K_v(x) s.t. x ∈ R }.   (7)

  10. Constrained Optimization
Particularly in the previous example, one often has constraints on x. When X ⊊ R^n, we often have X = X₁ ∩ X₂ ∩ X₃, where
    X₁ = { x ∈ R^n | c_i(x) = 0, i ∈ I₁ },
    X₂ = { x ∈ R^n | c_i(x) ≤ 0, i ∈ I₂ },
    X₃ = { x ∈ R^n | x_i ∈ Z, i ∈ I₃ }.
The I_i, i = 1, 2, 3, are called index sets. X₁, X₂, X₃ are called equality, inequality, and integer constraints, respectively.
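
As an illustration of how such constraint sets can be passed to a solver, here is a hedged sketch; the constraint functions c_i and SciPy's trust-constr method are my own choices, and the integer constraints X₃ are omitted since they require dedicated mixed-integer solvers.

```python
# Sketch: X = X1 (equality) intersected with X2 (inequality) in SciPy.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

f    = lambda x: x[0]**2 + x[1]**2                               # objective
eq   = NonlinearConstraint(lambda x: x[0] + x[1] - 1, 0, 0)      # c(x) = 0  (X1)
ineq = NonlinearConstraint(lambda x: x[0] - x[1], -np.inf, 0)    # c(x) <= 0 (X2)

res = minimize(f, x0=np.zeros(2), method="trust-constr", constraints=[eq, ineq])
print(res.x)   # ~ [0.5, 0.5]
```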

  11. Categories of Optimization Problems
X a set of discrete points → discrete or combinatorial optimization. Otherwise, continuous optimization.
f or any of the c_i nondifferentiable → nonsmooth optimization.
Example (Nonsmooth Problems). f(x) := ||x||₂ (f nondifferentiable at 0); the constraint c(x) = |x| − x² = 0.
Optimizing over optimization problems (game theory, optimal control, etc.).

  12. Notions of Solutions
Definition (Solution Types). Let f : X → R with X ⊂ R^n. The point x* ∈ X is called a
1. (strict) global minimizer of f (on X) if and only if f(x*) ≤ f(x) (f(x*) < f(x)) for all x ∈ X \ {x*};
2. (strict) local minimizer of f (on X) if and only if there exists a neighborhood U of x* such that f(x*) ≤ f(x) (f(x*) < f(x)) for all x ∈ (X ∩ U) \ {x*}.
The optimal objective value f(x*) is called a (strict) local minimum.
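
A one-dimensional illustration of the distinction (my own example, not from the slides): the function below has both a strict local and a strict global minimizer, which can be isolated by restricting the search interval.

```python
# f has a strict local minimizer near x = 1 and a strict global one near x = -1.
from scipy.optimize import minimize_scalar

f = lambda x: (x**2 - 1)**2 + 0.3 * x

loc  = minimize_scalar(f, bounds=(0.0, 2.0), method="bounded")   # local
glob = minimize_scalar(f, bounds=(-2.0, 0.0), method="bounded")  # global
print(loc.x, f(loc.x))      # ~ 0.96, ~ 0.29  (strict local minimum)
print(glob.x, f(glob.x))    # ~ -1.04, ~ -0.31 (strict global minimum)
```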

  13. Stationary Points
For f ∈ C¹, we denote its gradient by
    ∇f(x) = ( ∂f/∂x₁(x), ..., ∂f/∂xₙ(x) )^T.
If f : X → R is directionally differentiable, then its directional derivative at x in direction d is denoted by
    f′(x; d) := lim_{α ↓ 0} ( f(x + α d) − f(x) ) / α.
Definition (Stationarity). Let X ⊂ R^n be an open set and f : X → R directionally differentiable. The point x* ∈ X is called a stationary point of f if
    ∇f(x*) = 0
holds true.
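
A small numerical sanity check (my own sketch, with an illustrative smooth f): the difference quotient approximates f′(x; d), which for f ∈ C¹ equals ⟨∇f(x), d⟩, and the gradient vanishes at the stationary point.

```python
import numpy as np

f = lambda x: x[0]**2 + 3 * x[1]**2                 # illustrative smooth f
grad_f = lambda x: np.array([2 * x[0], 6 * x[1]])   # its gradient

x = np.array([1.0, -1.0]); d = np.array([1.0, 2.0]); a = 1e-6
fd = (f(x + a * d) - f(x)) / a      # difference quotient for f'(x; d)
print(fd, grad_f(x) @ d)            # both ~ -10: f'(x; d) = <grad f(x), d>

x_star = np.zeros(2)
print(grad_f(x_star))               # [0. 0.]: x* = 0 is a stationary point
```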

  14. Fermat's Theorem
Theorem. Let X ⊂ R^n be an open set and f : X → R a continuously differentiable function. If x* ∈ X is a local minimizer of f (on X), then
    ∇f(x*) = 0,
i.e. x* is a stationary point.
Proof. On the board.

  15. A Fair Warning
Fermat's Theorem, regardless of the variant, is a first-order necessary condition.
Example. Let f(x) = −x² with x* = 0. Here, ∇f(x*) = −2x* = 0, but x* is in fact a global maximizer of f.
In this example, ∇²f(x*) = −2 indicates that f is concave (down). Hence, x* cannot even be a local minimizer. This leads to a general result.

  16. Continuity of Eigenvalues
Lemma. Let S^n be the vector space of symmetric n × n matrices. For A ∈ S^n, let λ(A) ∈ R be the smallest eigenvalue of A. Then the following estimate holds true:
    |λ(A) − λ(B)| ≤ ||A − B||  for all A, B ∈ S^n.
In other words, the mapping that takes a real symmetric matrix to its smallest eigenvalue is Lipschitz continuous with modulus 1.
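
The lemma can be checked numerically; in the sketch below the random symmetric matrices are my choice, and ||·|| is taken to be the spectral norm.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # symmetrize
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2

lam = lambda M: np.linalg.eigvalsh(M)[0]             # smallest eigenvalue
lhs = abs(lam(A) - lam(B))
rhs = np.linalg.norm(A - B, ord=2)                   # spectral norm
print(lhs, rhs, lhs <= rhs + 1e-12)                  # bound holds: True
```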

  17. An Important Implication
The preceding lemma allows us to make the following observations:
f ∈ C² ⇒ ∇²f : R^n → R^(n×n) is continuous with respect to some matrix norm ||·||.
∇²f is symmetric.
∇²f(x*) positive definite implies ∇²f(x) positive definite for ||x − x*|| sufficiently small.
In general: small perturbations (in S^n!) of symmetric positive definite matrices remain positive definite.

  18. Second-Order Necessary Optimality Conditions
Theorem. Let X ⊂ R^n be open and f : X → R twice continuously differentiable. If x* ∈ X is a local minimizer of f (on X), then ∇f(x*) = 0 and the Hessian ∇²f(x*) is positive semi-definite.
Proof. On the board.
Question: what happens if we try to apply this as a sufficient condition?
Example. f(x) = x₁² − x₂⁴, x* = (0, 0)^T.
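
Checking the example numerically (a sketch; the step size is arbitrary): at x* = (0, 0) the gradient vanishes and the Hessian diag(2, 0) is positive semi-definite, yet x* is not a local minimizer, since f(0, t) = −t⁴ < 0 for every t ≠ 0.

```python
import numpy as np

f = lambda x: x[0]**2 - x[1]**4
H = np.diag([2.0, 0.0])               # Hessian of f at (0, 0)
print(np.linalg.eigvalsh(H))          # [0, 2] >= 0: positive semi-definite
print(f(np.array([0.0, 1e-2])))       # negative: f decreases along the x2-axis
```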

  19. Second-Order Sufficient Optimality Conditions
Theorem. Let X ⊂ R^n be open and f : X → R twice continuously differentiable. If
1. ∇f(x*) = 0 and
2. ∇²f(x*) is positive definite,
then x* is a strict local minimizer of f (on X).
Proof. On the board.
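
A brief sketch of how the two conditions can be verified at a candidate point; the quadratic example f(x) = x₁² + x₁x₂ + 2x₂² and the use of NumPy are my own choices.

```python
import numpy as np

grad_f = lambda x: np.array([2 * x[0] + x[1], x[0] + 4 * x[1]])
hess_f = lambda x: np.array([[2.0, 1.0], [1.0, 4.0]])    # constant Hessian

x_star = np.zeros(2)
print(np.allclose(grad_f(x_star), 0))                    # condition 1: stationary
print(np.all(np.linalg.eigvalsh(hess_f(x_star)) > 0))    # condition 2: positive definite
```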
