

  1. Convex Optimization: 1. Introduction
     Prof. Ying Cui, Department of Electrical Engineering, Shanghai Jiao Tong University, 2020

  2. Outline
     Mathematical optimization
     Least-squares problems
     Linear programming
     Convex optimization
     Nonlinear optimization
     Outline of textbook

  3. Mathematical optimization
     Optimization problem
        min_x f_0(x)
        s.t.  f_i(x) ≤ b_i, i = 1, · · · , m
     ◮ optimization variable: x = (x_1, · · · , x_n) ∈ R^n
     ◮ objective function: f_0 : R^n → R
     ◮ (inequality) constraint functions: f_i : R^n → R, i = 1, · · · , m
     ◮ limits or bounds for the constraints: b_i ∈ R, i = 1, · · · , m
     ◮ x⋆ is optimal, or a solution, if it has the smallest objective value among all vectors that satisfy the constraints
       ◮ ∀ z ∈ R^n with f_i(z) ≤ b_i, i = 1, · · · , m, we have f_0(z) ≥ f_0(x⋆)
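A minimal numerical sketch of this general problem form (not from the slides): the specific f_0, f_1, and b_1 below are made up for illustration, and scipy.optimize.minimize is used as one possible generic solver.

```python
# Sketch of  min_x f0(x)  s.t.  f1(x) <= b1  with a generic solver.
import numpy as np
from scipy.optimize import minimize

def f0(x):                      # illustrative smooth objective
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def f1(x):                      # illustrative constraint function
    return x[0] + x[1]

b1 = 0.0

# scipy expects inequality constraints as g(x) >= 0, so f1(x) <= b1
# is rewritten as b1 - f1(x) >= 0.
cons = [{"type": "ineq", "fun": lambda x: b1 - f1(x)}]

res = minimize(f0, x0=np.zeros(2), constraints=cons)
print(res.x, res.fun)           # approximate solution x* and f0(x*)
```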

  4. Mathematical optimization
     Classes of optimization problems
     ◮ linear optimization problems: f_0, f_1, · · · , f_m are linear
       ◮ f_i(αx + βy) = αf_i(x) + βf_i(y) for all x, y ∈ R^n and all α, β ∈ R
     ◮ nonlinear optimization problems
     ◮ convex optimization problems: f_0, f_1, · · · , f_m are convex
       ◮ f_i(αx + βy) ≤ αf_i(x) + βf_i(y) for all x, y ∈ R^n and all α, β ∈ R with α + β = 1, α, β ≥ 0
     ◮ nonconvex optimization problems
     ◮ convex optimization is a generalization of linear optimization
       ◮ convexity is more general than linearity: an inequality replaces the more restrictive equality, and the inequality must hold only for certain values of α and β
     ◮ “the greatest watershed in optimization isn’t between linearity and nonlinearity, but convexity and nonconvexity”
       ◮ stated by Ralph Tyrrell Rockafellar in his 1993 SIAM survey paper [Roc93]
     ◮ first formal argument that convex optimization problems are easier to solve than general nonlinear optimization problems
       ◮ made by Nemirovski and Yudin in their 1983 book [NY83]
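A small numerical illustration of the convexity inequality above (not from the slides): for a convex function the inequality never fails on random samples, while a nonconvex function produces many violations. The sample counts and tolerance are arbitrary choices.

```python
# Check f(a*x + b*y) <= a*f(x) + b*f(y) for random x, y and a + b = 1, a, b >= 0.
import numpy as np

def count_violations(f, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(trials):
        x, y = rng.normal(size=2) * 5.0
        a = rng.uniform()           # a in [0, 1], b = 1 - a
        b = 1.0 - a
        if f(a * x + b * y) > a * f(x) + b * f(y) + 1e-12:
            violations += 1
    return violations

print(count_violations(lambda t: t ** 2))   # convex: 0 violations
print(count_violations(np.sin))             # nonconvex: many violations
```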

  5. Mathematical optimization
     Applications
     ◮ optimization is an abstraction of the problem of making the best possible choice from a set of candidate choices
       ◮ variable x represents the choice made
       ◮ constraints f_i(x) ≤ b_i, i = 1, · · · , m represent firm requirements or specifications limiting the possible choices
       ◮ objective value f_0(x) represents the cost of choosing x (−f_0(x) represents the value or utility of choosing x)
       ◮ a solution x⋆ corresponds to a choice with minimum cost (or maximum utility) among all possible choices
     ◮ many practical problems involving decision making, system design, analysis, and operation can be cast in the form of an optimization problem or some variation of it
     ◮ optimization has become an important tool in many areas
       ◮ e.g., electronic design automation, automatic control systems, and optimal design problems arising in civil, chemical, mechanical, and aerospace engineering

  6. Mathematical optimization
     Examples
     ◮ portfolio optimization: seek the best way to invest some capital in a set of assets
       ◮ variables: amounts invested in different assets
       ◮ constraints: budget, max./min. investment per asset, minimum return
       ◮ objective: overall risk or return variance
     ◮ device sizing in electronic circuits: choose the width and length of each device in an electronic circuit
       ◮ variables: device widths and lengths
       ◮ constraints: manufacturing limits, timing requirements, maximum area
       ◮ objective: power consumption
     ◮ data fitting: find a model that best fits some observed data
       ◮ variables: model parameters
       ◮ constraints: prior information, parameter limits
       ◮ objective: measure of misfit or prediction error

  7. Mathematical optimization
     Solving optimization problems
     ◮ effectiveness of solution methods varies considerably and depends on factors such as
       ◮ forms of the objective and constraint functions, numbers of variables and constraints, and special structure (e.g., sparsity)
       ◮ a problem is sparse if each constraint function depends on only a small number of the variables
     ◮ the general optimization problem is very difficult to solve, and approaches to it involve some compromise, e.g.,
       ◮ very long computation time, or the possibility of not finding the solution
     ◮ a few problem classes can be solved reliably and efficiently
       ◮ least-squares problems, linear programming problems, convex optimization problems

  8. Least-squares problems
     A least-squares problem is an optimization problem with no constraints and an objective which is a sum of squares of terms of the form a_i^T x − b_i:
        min_x ||Ax − b||_2^2 = Σ_{i=1}^k (a_i^T x − b_i)^2
                             = (Ax − b)^T (Ax − b)
                             = x^T A^T A x − 2 x^T A^T b + b^T b
     where A ∈ R^{k×n} (k ≥ n), a_i^T, i = 1, · · · , k are the rows of A, rank A = n, b ∈ R^k, and x ∈ R^n
     Solving least-squares problems
     ◮ analytical solution: ∇f_0(x) = 2A^T Ax − 2A^T b = 0 ⇒ x⋆ = (A^T A)^{−1} A^T b
     ◮ reliable and efficient algorithms and software
     ◮ computation time proportional to n^2 k, less if structured (e.g., A is sparse, i.e., has far fewer than kn nonzero entries)
     ◮ a mature technology
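A small numpy sketch of the analytical solution above (not from the slides), on made-up random data, compared against the library least-squares routine:

```python
# x* = (A^T A)^{-1} A^T b, computed via the normal equations and via lstsq.
import numpy as np

rng = np.random.default_rng(0)
k, n = 100, 5                        # k >= n; random A is full rank with probability 1
A = rng.normal(size=(k, n))
b = rng.normal(size=k)

# Normal equations: solve (A^T A) x = A^T b (better than forming the inverse).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Library routine based on a more numerically stable factorization.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))      # True
print(np.sum((A @ x_normal - b) ** 2))     # optimal objective value
```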

  9. Least-squares problems
     Using least-squares
     ◮ basis for regression analysis, optimal control, and many parameter estimation and data fitting methods
     ◮ least-squares problems are easy to recognize
       ◮ verify that the objective is a quadratic function whose quadratic form is positive semidefinite, i.e., x^T P x + q^T x + r with P ∈ S^n_+
     ◮ several standard techniques are used to increase flexibility
       ◮ e.g., including weights, adding regularization terms
       ◮ weighted least-squares: Σ_{i=1}^k w_i (a_i^T x − b_i)^2
         ◮ weights w_i chosen to reflect differing levels of concern about the sizes of the terms a_i^T x − b_i
       ◮ regularization: Σ_{i=1}^k (a_i^T x − b_i)^2 + ρ Σ_{i=1}^n x_i^2
         ◮ parameter ρ chosen to give the right trade-off between making Σ_{i=1}^k (a_i^T x − b_i)^2 small while keeping Σ_{i=1}^n x_i^2 not too big
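Closed-form sketches of the two variations above (not from the slides), in numpy; the data, weights, and ρ are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 100, 5
A = rng.normal(size=(k, n))
b = rng.normal(size=k)

# Weighted least-squares: minimize sum_i w_i (a_i^T x - b_i)^2,
# solved via the weighted normal equations (A^T W A) x = A^T W b.
w = rng.uniform(0.5, 2.0, size=k)        # illustrative positive weights
W = np.diag(w)
x_weighted = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Regularized least-squares: minimize ||Ax - b||^2 + rho * ||x||^2,
# solved via (A^T A + rho I) x = A^T b.
rho = 0.1                                # illustrative trade-off parameter
x_reg = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)

print(x_weighted, x_reg)
```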

  10. Linear programming
     Linear programming is a class of optimization problems where the objective and all constraint functions are linear:
        min_x c^T x
        s.t.  a_i^T x ≤ b_i, i = 1, · · · , m
     where c, a_1, · · · , a_m ∈ R^n are vectors, b_1, · · · , b_m ∈ R are scalars, and x ∈ R^n
     Solving linear programs
     ◮ no analytical formula for the solution
     ◮ reliable and efficient algorithms (e.g., simplex method and interior-point methods) and software
     ◮ computation time proportional to n^2 m (assuming m ≥ n), less with structure (e.g., the problem is sparse, i.e., each constraint function depends on only a small number of the variables)
     ◮ a mature technology
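A minimal sketch of an LP in exactly this form (not from the slides), solved with scipy.optimize.linprog; the data c, A, b are made up:

```python
# min_x c^T x  s.t.  a_i^T x <= b_i
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[-1.0,  0.0],     # rows a_i^T of the inequality constraints
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([0.0, 0.0, 4.0])   # right-hand sides b_i

# linprog assumes x >= 0 by default; bounds=(None, None) removes that,
# leaving only A x <= b, to match the form on the slide.
res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
print(res.x, res.fun)
```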

  11. Linear programming
     Using linear programming
     ◮ not as easy to recognize as least-squares problems
     ◮ a few standard tricks are used to convert problems into linear programs
       ◮ e.g., problems involving the l_1-norm or l_∞-norm, or piecewise-linear functions
     ◮ Chebyshev approximation problem:
          min_x max_{i=1, · · · , k} |a_i^T x − b_i|
       where a_1, · · · , a_k ∈ R^n are vectors and b_1, · · · , b_k ∈ R are scalars
     ◮ equivalent problem (an LP in the variables x and t):
          min_{x, t} t
          s.t.  a_i^T x − t ≤ b_i, i = 1, · · · , k
                −a_i^T x − t ≤ −b_i, i = 1, · · · , k
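A sketch of this reformulation in code (not from the slides): the Chebyshev approximation problem is rewritten as the LP above in the stacked variable (x, t) and solved with scipy.optimize.linprog on made-up data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
k, n = 30, 3
A = rng.normal(size=(k, n))
b = rng.normal(size=k)

# Variables z = (x, t); the objective is just t.
c = np.zeros(n + 1)
c[-1] = 1.0

# Constraints  a_i^T x - t <= b_i  and  -a_i^T x - t <= -b_i.
A_ub = np.vstack([np.hstack([ A, -np.ones((k, 1))]),
                  np.hstack([-A, -np.ones((k, 1))])])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x_cheb, t_opt = res.x[:n], res.x[-1]
print(t_opt, np.max(np.abs(A @ x_cheb - b)))   # the two values should agree
```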

  12. Convex optimization
     Convex optimization is a class of optimization problems where the objective and all constraint functions are convex:
        min_x f_0(x)
        s.t.  f_i(x) ≤ b_i, i = 1, · · · , m
     ◮ the functions f_0, · · · , f_m : R^n → R are convex, i.e., satisfy f_i(αx + βy) ≤ αf_i(x) + βf_i(y) for all x, y ∈ R^n and all α, β ∈ R with α + β = 1, α ≥ 0, β ≥ 0
     ◮ the least-squares problem and the linear programming problem are both special cases
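A minimal sketch of posing and solving a convex problem with the CVXPY modeling package (CVXPY is not mentioned in the slides; it is used here only as one convenient way to express a convex objective with a convex constraint, on made-up data):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 5
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective f_0
constraints = [cp.norm(x, 1) <= 1]                   # convex constraint f_1(x) <= b_1

prob = cp.Problem(objective, constraints)
prob.solve()                     # hands the problem to a conic / interior-point solver
print(prob.status, prob.value, x.value)
```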

  13. Convex optimization
     Solving convex optimization problems
     ◮ in general no analytical formula for the solution
     ◮ reliable and efficient algorithms
       ◮ interior-point methods: almost always 10-100 iterations, each with computation time proportional to max{n^3, n^2 m, F}, where F is the cost of evaluating the first and second derivatives of f_0, · · · , f_m
     ◮ almost a technology
     Using convex optimization
     ◮ often difficult to recognize
     ◮ many tricks for transforming problems into convex form
     ◮ surprisingly many problems can be solved via convex optimization

  14. Nonlinear optimization
     Nonlinear optimization (or nonlinear programming) is the term used to describe an optimization problem that is not linear, but not known to be convex.
     ◮ no effective methods for solving the general nonconvex problem
     Local optimization
     ◮ seek a locally optimal solution, which minimizes the objective function among feasible points near it
     ◮ widely used in applications where there is value in finding a good point, if not the very best
     ◮ local optimization methods
       ◮ advantages: can be fast, can handle large problems, and are widely applicable (only require differentiability of the objective and constraint functions)
       ◮ disadvantages: require an initial guess (which can greatly affect the objective value of the local solution obtained), provide no information about the distance to the (global) optimum, and are sensitive to algorithm parameter values
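A small illustration of the initial-guess sensitivity mentioned above (not from the slides): a made-up nonconvex one-dimensional function is minimized with scipy.optimize.minimize from two different starting points, which land in different local minima.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Nonconvex objective with two local minima of different depth.
    return x[0] ** 4 - 2.0 * x[0] ** 2 + 0.3 * x[0]

res_a = minimize(f, x0=np.array([-2.0]))
res_b = minimize(f, x0=np.array([ 2.0]))

# Different starting points can yield different local solutions and
# objective values; neither run certifies global optimality.
print(res_a.x, res_a.fun)
print(res_b.x, res_b.fun)
```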
