  1. Constrained optimization

  2. Problem in standard form
minimize f(x)
subject to aᵢ(x) = 0, for i = 1, 2, ⋯, p
cⱼ(x) ≥ 0, for j = 1, 2, ⋯, q
where aᵢ: Rⁿ → R, cⱼ: Rⁿ → R, f: Rⁿ → R
• f(x*) = ∞ if the problem is infeasible
• f(x*) = −∞ if the problem is unbounded below

  3. Equality constraints
• An equality constraint aᵢ(x) = 0 defines a hypersurface
• A regular point is a point in the feasible region at which the Jacobian of the constraints has full rank
• The tangent plane of the hypersurface determined by the constraints at a regular point x is well defined
• The number of constraints, p, must be less than the dimension of the domain, n

  4. Linear equality constraints
• What is the Jacobian of a linear equality constraint, Ax = b?
• If rank(A) = p, every feasible x is a regular point
• If rank(A) < p, we can test whether a contradiction or a redundancy exists by comparing rank([A b]) with rank(A):
• if rank([A b]) > rank(A), contradiction (the system has no solution)
• if rank([A b]) = rank(A), redundancy (dependent rows can be removed); a sketch of this test follows below
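
A minimal sketch of this rank test in NumPy; the matrix A and the two right-hand sides are made-up illustrations, not from the slides.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # rank(A) = 1 < p = 2 (rows are dependent)
b_contra = np.array([1.0, 3.0])    # inconsistent right-hand side
b_redund = np.array([1.0, 2.0])    # consistent: second equation is redundant

def classify(A, b):
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rAb > rA:
        return "contradiction"     # rank([A b]) > rank(A): no solution
    return "redundancy"            # rank([A b]) = rank(A): drop dependent rows

print(classify(A, b_contra))       # contradiction
print(classify(A, b_redund))       # redundancy
```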

  5. Inequality constraints
• What is the largest number of inequality constraints in an optimization problem in Rⁿ?
• Two general approaches to dealing with inequality constraints:
• Divide them into active and inactive constraints
• Convert them into equality constraints

  6. Linear programming
minimize cᵀx
subject to Ax = b, x ≥ 0
• The alternative form, minimize cᵀx subject to Ax ≥ b, can be converted to the standard form
• The feasible set is a polyhedron (see the sketch below)
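
A small LP in the standard form above, solved with SciPy's linprog; the data c, A, b are toy assumptions.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])          # minimize c^T x
A_eq = np.array([[1.0, 1.0]])     # subject to Ax = b
b_eq = np.array([1.0])

# bounds of (0, None) per variable encode x >= 0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # optimizer (1, 0) with value 1
```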

  7. Constraint transformations
• We can convert each inequality into an equality constraint by introducing a slack variable: y = Ax − b
• The inequalities become Ax − y = b and y ≥ 0
• We can impose nonnegative bounds on a free x by splitting it into two nonnegative vectors x⁺ and x⁻: x = x⁺ − x⁻
• With the new variable x̂ = [x⁺; x⁻; y], the problem becomes:
minimize ĉᵀx̂ subject to Âx̂ = b̂, x̂ ≥ 0
(a sketch of this construction follows below)
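
A sketch of this transformation as a function: given the inequality-form data (c, A, b), it assembles the standard-form data (ĉ, Â, b̂) for x̂ = [x⁺; x⁻; y] ≥ 0. The function name and data layout are my own assumptions.

```python
import numpy as np

def to_standard_form(c, A, b):
    """Convert  min c^T x  s.t.  Ax >= b  (x free)  into standard form."""
    m, n = A.shape
    # x = x+ - x-  and  y = Ax - b >= 0,  so  A x+ - A x- - y = b
    A_hat = np.hstack([A, -A, -np.eye(m)])
    c_hat = np.concatenate([c, -c, np.zeros(m)])  # slack y has zero cost
    b_hat = b.copy()
    return c_hat, A_hat, b_hat
```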

  8. Convex quadratic programming
minimize f(x) = ½xᵀHx + xᵀp + c
subject to Ax = b, Cx ≥ d
• If the Hessian H is positive semidefinite, QP can be regarded as a special class of convex programming (see the sketch below)
• If the Hessian is indefinite, the problem becomes NP-hard
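
A minimal convex-QP sketch using CVXPY as the modeling layer; H, p, and the constraint data below are toy assumptions, and H must be positive semidefinite for the problem to be convex.

```python
import numpy as np
import cvxpy as cp

n = 3
H = np.eye(n)                             # toy PSD Hessian
p = np.ones(n)
A, b = np.ones((1, n)), np.array([1.0])
C, d = np.eye(n), np.zeros(n)

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, H) + p @ x)
cp.Problem(objective, [A @ x == b, C @ x >= d]).solve()
print(x.value)
```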

  9. Quadratically constrained QP
minimize f(x) = ½xᵀHx + xᵀp + c
subject to Ax = b
½xᵀHᵢx + xᵀpᵢ + cᵢ ≤ 0, i = 1, ⋯, m
• The objective function and the constraints are convex
• If the Hᵢ are positive definite, the feasible region is the intersection of m ellipsoids and an affine set (see the sketch below)
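
The same CVXPY pattern extends to a QCQP; here one convex quadratic constraint (H₁ positive definite) is added to the affine set, with all data invented for illustration.

```python
import numpy as np
import cvxpy as cp

n = 2
H, p = np.eye(n), np.zeros(n)
H1, p1, c1 = 2 * np.eye(n), np.zeros(n), -1.0   # 0.5 x^T H1 x + p1^T x + c1 <= 0
A, b = np.ones((1, n)), np.array([0.5])

x = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(0.5 * cp.quad_form(x, H) + p @ x),
    [A @ x == b,
     0.5 * cp.quad_form(x, H1) + p1 @ x + c1 <= 0])  # the ellipsoid ||x||^2 <= 1
prob.solve()
print(x.value)
```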

  10. Second-order cone programming
minimize bᵀx
subject to ‖Aᵢx + cᵢ‖₂ ≤ bᵢᵀx + dᵢ, i = 1, ⋯, q
• The inequalities are called second-order cone constraints:
(Aᵢx + cᵢ, bᵢᵀx + dᵢ) ∈ second-order cone in Rⁿ⁺¹
• More general than LP and QCQP
• For Aᵢ = 0 and cᵢ = 0, it reduces to an LP; for bᵢ = 0, it reduces to a QCQP (see the sketch below)
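
A sketch of one second-order cone constraint in CVXPY, whose cp.SOC(t, u) atom encodes ‖u‖₂ ≤ t; the single cone (the unit ball) and the objective are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

n = 2
b_obj = np.array([1.0, 1.0])
A1, c1 = np.eye(n), np.zeros(n)     # ||A1 x + c1||_2 <= b1^T x + d1
b1, d1 = np.zeros(n), 1.0           # here this is simply ||x||_2 <= 1

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(b_obj @ x),
                  [cp.SOC(b1 @ x + d1, A1 @ x + c1)])
prob.solve()
print(x.value)                      # approx (-0.707, -0.707)
```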

  11. Semidefinite programming
minimize cᵀx
subject to x₁F₁ + x₂F₂ + ⋯ + xₙFₙ + G ⪯ 0, with Fᵢ, G ∈ Sⁿ
Ax = b
• The matrix inequality constraint is called a linear matrix inequality (LMI)
• Multiple LMIs, such as
x₁F̂₁ + x₂F̂₂ + ⋯ + xₙF̂ₙ + Ĝ ⪯ 0 and x₁F̃₁ + x₂F̃₂ + ⋯ + xₙF̃ₙ + G̃ ⪯ 0,
can be represented by a single LMI with block-diagonal matrices:
x₁ diag(F̂₁, F̃₁) + ⋯ + xₙ diag(F̂ₙ, F̃ₙ) + diag(Ĝ, G̃) ⪯ 0
(see the sketch below)
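
A sketch of a small SDP in CVXPY, where the LMI is written with the semidefinite-ordering operator <<; the 2×2 matrices below are invented, and the feasible set they define happens to be the unit disk.

```python
import numpy as np
import cvxpy as cp

F1 = np.diag([1.0, -1.0])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])
G = -np.eye(2)
c = np.array([1.0, 2.0])

x = cp.Variable(2)
lmi = x[0] * F1 + x[1] * F2 + G     # affine, symmetric matrix expression
prob = cp.Problem(cp.Minimize(c @ x), [lmi << 0])
prob.solve()                        # needs an SDP-capable solver, e.g. SCS
print(x.value)
```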

  12. Nonconvex problems
• A problem is nonconvex if any constraint or the objective function is nonconvex
• Use SQP or penalty methods (barrier function methods)

  13. Simple transformation methods
• Introduce equality constraints
• Eliminate equality constraints
• Eliminate nonnegativity bounds
• Eliminate interval-type constraints

  14. Eliminate equality constraints
minimize f(x)
subject to Ax = b
cᵢ(x) ≥ 0 for 1 ≤ i ≤ q
• Use the Moore–Penrose pseudoinverse of A: A⁺ = Aᵀ(AAᵀ)⁻¹
• A⁺b is a point on the hyperplane defined by Ax = b
• Introduce a new variable ϕ, a vector that moves x along the hyperplane defined by the constraint
• For x to stay feasible, ϕ must lie in the null space of A

  15. Eliminate equality constraints
• Reduce the dimension of the variables from n to the dimension of the null space of A
• Apply the SVD, A = UΣVᵀ, to compute the null space of A
• Null space of A: spanned by Vᵣ, the last n − m columns of V
• The old variable x can be represented as x = Vᵣϕ + A⁺b, where ϕ is an arbitrary vector in Rⁿ⁻ᵐ (see the sketch below)
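
A NumPy sketch of the whole substitution x = Vᵣϕ + A⁺b; the single constraint A and right-hand side b are toy assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0]])    # one equality constraint in R^3 (m = 1, n = 3)
b = np.array([2.0])

U, S, Vt = np.linalg.svd(A)
m = np.linalg.matrix_rank(A)
Vr = Vt[m:].T                      # last n - m right singular vectors span null(A)
x0 = np.linalg.pinv(A) @ b         # particular solution A+ b on the hyperplane

phi = np.random.randn(A.shape[1] - m)
x = Vr @ phi + x0
print(np.allclose(A @ x, b))       # True: every phi gives a feasible x
```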

  16. Eliminate nonnegativity bounds
• The nonnegativity bound xᵢ ≥ 0 can be eliminated using the variable transformation xᵢ = yᵢ²
• The constraint xᵢ ≥ d can be eliminated using the variable transformation xᵢ = d + yᵢ²
• What about xᵢ ≤ d? (a sketch of the substitution follows below)
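
A sketch of the substitution in action: the bound x ≥ 0 disappears by optimizing over y with x = y², here on a toy one-dimensional objective using SciPy.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x - 2.0) ** 2 + 1.0   # toy objective, unconstrained minimizer x = 2

def g(y):                            # unconstrained in y after x = y^2
    return f(y[0] ** 2)

res = minimize(g, x0=[1.0])
print(res.x[0] ** 2)                 # recovered x is approx 2, and x >= 0 by construction
```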

  17. Eliminate interval-type constraints
• The interval constraint a ≤ x ≤ b can be eliminated by the variable transformation
x = ((b − a)/2)·tanh(z) + (b + a)/2, where tanh(z) = (eᶻ − e⁻ᶻ)/(eᶻ + e⁻ᶻ)
(plot of the tanh curve for z ∈ [−5, 5])
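
A NumPy sketch of this interval transformation; the endpoints a and b are arbitrary assumptions, and the check confirms that every z maps strictly inside (a, b).

```python
import numpy as np

a, b = -1.0, 3.0
to_interval = lambda z: (b - a) / 2 * np.tanh(z) + (b + a) / 2

z = np.linspace(-5.0, 5.0, 11)
x = to_interval(z)
print(np.all((a < x) & (x < b)))   # True: x never leaves the open interval (a, b)
```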

  18. References
• A. Antoniou and W.-S. Lu, Practical Optimization
• S. Boyd, Convex Optimization, lecture notes
