CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 9: Convex Optimization Problems

Instructor: Shaddin Dughmi


Announcements

Homework: Due beginning of next class

Must submit a hard copy, unless you have a good excuse. If using late days, due by Monday in Shaddin’s mailbox.

Today: Convex Optimization Problems

Read all of B&V Chapter 4.


Outline

1. Convex Optimization Basics
2. Common Classes
3. Interlude: Positive Semi-Definite Matrices
4. More Convex Optimization Problems


Recall: Convex Optimization Problem

A problem of minimizing a convex function (or maximizing a concave function) over a convex set:

minimize f(x)
subject to x ∈ X

where X ⊆ R^n is convex and f : R^n → R is convex.

Terminology: decision variable(s), objective function, feasible set, optimal solution/value, ε-optimal solution/value.



Standard Form

Instances are typically formulated in the following standard form:

minimize f(x)
subject to gi(x) ≤ 0, for i ∈ C1
ai⊺x = bi, for i ∈ C2

where each gi is convex.

Terminology: equality constraints, inequality constraints, active/inactive at x, feasible/infeasible, unbounded.

In principle, every convex optimization problem can be formulated in this form (possibly implicitly). Recall: every convex set is the intersection of halfspaces.

When f(x) is immaterial (say f(x) = 0), we call this a convex feasibility problem.
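To make the standard form concrete, here is a minimal sketch of a tiny instance in Python using the cvxpy modeling library (the library choice and the toy data are ours, not from the lecture):

```python
import numpy as np
import cvxpy as cp

# Toy standard-form instance: convex objective f, one convex inequality
# g(x) <= 0, and one affine equality a^T x = b.
x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - np.array([3.0, 2.0])))  # f(x) = ||x - (3,2)||^2
constraints = [
    cp.sum_squares(x) - 4 <= 0,   # g(x) = ||x||^2 - 4, convex
    x[0] + 2 * x[1] == 1,         # a = (1, 2), b = 1
]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, prob.value, x.value)
```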


Local and Global Optimality

Fact

For a convex optimization problem, every locally optimal feasible solution is globally optimal.

Proof

Let x be locally optimal and y be any other feasible point. The line segment from x to y is contained in the feasible set. By local optimality, f(x) ≤ f(θx + (1 − θ)y) for θ sufficiently close to 1. Jensen’s inequality then implies that y is suboptimal:

f(x) ≤ f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y),

and subtracting θf(x) from both sides and dividing by 1 − θ gives f(x) ≤ f(y).


Representation

Typically, by problem we mean a family of instances, each of which is described either explicitly via problem parameters, or given implicitly via an oracle, or something in between.

Explicit Representation

A family of linear programs of the form

maximize c⊺x
subject to Ax ≤ b
x ≥ 0

may be described by c ∈ R^n, A ∈ R^{m×n}, and b ∈ R^m.

Oracle Representation

At their most abstract, convex optimization problems of the form

minimize f(x)
subject to x ∈ X

are described via a separation oracle for X and epi f. Given additional data about instances of the problem, namely a range [L, H] for the optimal value and a ball of volume V containing X, the ellipsoid method returns an ε-optimal solution using only poly(n, log((H − L)/ε), log V) oracle calls.
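The slides treat the oracle abstractly; as a concrete illustration (ours, with the Euclidean ball standing in for X), a separation oracle either certifies membership or returns a violated halfspace:

```python
import numpy as np

def ball_separation_oracle(x, radius=1.0):
    """Separation oracle for X = {y : ||y||_2 <= radius}.

    Returns None if x is in X; otherwise (g, c) describing a halfspace
    {y : g . y <= c} that contains X but excludes x.
    """
    norm = np.linalg.norm(x)
    if norm <= radius:
        return None               # feasible: no separating hyperplane needed
    g = x / norm                  # unit normal pointing toward x
    # For all y in X: g . y <= ||y||_2 <= radius, while g . x = norm > radius.
    return g, radius

print(ball_separation_oracle(np.array([3.0, 4.0])))   # separating halfspace
print(ball_separation_oracle(np.array([0.3, 0.4])))   # None (feasible)
```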

In Between

Consider the following fractional relaxation of the Traveling Salesman Problem, described by a network (V, E) and distances de on e ∈ E:

minimize Σ_{e∈E} de·xe
subject to Σ_{e∈δ(S)} xe ≥ 2, for all S ⊂ V, S ≠ ∅
x ≥ 0

The representation of this LP is implicit, in the form of a network. Using this representation, separation oracles can be implemented efficiently, and used as subroutines in the ellipsoid method.
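The slides assert the oracle exists without spelling it out; the standard implementation (our addition, using networkx's Stoer–Wagner algorithm) observes that some constraint Σ_{e∈δ(S)} xe ≥ 2 is violated iff the global minimum cut of the graph with edge capacities xe has value less than 2:

```python
import networkx as nx

def subtour_separation(nodes, x):
    """Separation oracle for the constraints sum_{e in delta(S)} x_e >= 2.

    `x` maps edges (u, v) to their current fractional value x_e. Some
    constraint is violated iff the global min cut under capacities x_e is < 2.
    """
    G = nx.Graph()
    G.add_nodes_from(nodes)
    for (u, v), val in x.items():
        if val > 0:
            G.add_edge(u, v, weight=val)
    cut_value, (S, _) = nx.stoer_wagner(G)   # global minimum cut
    return S if cut_value < 2 else None      # S indexes a violated constraint

# Toy instance: two unit triangles joined by a single edge of value 0.5.
x = {(0, 1): 1, (1, 2): 1, (0, 2): 1, (3, 4): 1, (4, 5): 1, (3, 5): 1, (2, 3): 0.5}
print(subtour_separation(range(6), x))   # one side of the cut of value 0.5 < 2
```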

Equivalence

Next up: we look at some common classes of convex optimization problems. Technically, not all of them will be convex in their natural representation; however, we will show that they are “equivalent” to a convex optimization problem.

Equivalence

Loosely speaking, two optimization problems are equivalent if an optimal solution to one can easily be “translated” into an optimal solution for the other.

Note

Deciding whether an optimization problem is equivalent to a tractable convex optimization problem is, in general, a black art honed by experience. There is no silver bullet.


Linear Programming

We have already seen linear programming:

minimize c⊺x
subject to Ax ≤ b


Linear Fractional Programming


Generalizes linear programming:

minimize (c⊺x + d) / (e⊺x + f)
subject to Ax ≤ b
e⊺x + f > 0

The objective is quasiconvex (in fact, quasilinear) over the halfspace where the denominator is positive. The problem can be reformulated as an equivalent linear program:

1. Change variables to y = x / (e⊺x + f) and z = 1 / (e⊺x + f).
2. (y, z) arises from a feasible x in this way iff e⊺y + fz = 1, so the defining equations for y and z can be dropped in favor of this linear constraint.

minimize c⊺y + dz
subject to Ay ≤ bz
z ≥ 0
e⊺y + fz = 1
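A sketch of this transformation in code, using scipy's LP solver (the helper and the toy instance are illustrative, not from the slides; we assume the optimal z is strictly positive so that x = y/z is well defined):

```python
import numpy as np
from scipy.optimize import linprog

def solve_linear_fractional(c, d, e, f, A, b):
    """Minimize (c.x + d)/(e.x + f) s.t. Ax <= b, via the LP over (y, z)."""
    n = len(c)
    obj = np.concatenate([c, [d]])                  # c.y + d*z
    A_ub = np.hstack([A, -b.reshape(-1, 1)])        # Ay - bz <= 0, i.e. Ay <= bz
    b_ub = np.zeros(A.shape[0])
    A_eq = np.concatenate([e, [f]]).reshape(1, -1)  # e.y + f*z == 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] * n + [(0, None)]       # y free, z >= 0
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    y, z = res.x[:n], res.x[n]
    return y / z                                    # recover x = y / z

# Toy instance: minimize (x + 1)/(x + 2) over 0 <= x <= 1; optimum is x = 0.
A = np.array([[1.0], [-1.0]]); b = np.array([1.0, 0.0])
print(solve_linear_fractional(np.array([1.0]), 1.0, np.array([1.0]), 2.0, A, b))
```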


Example: Optimal Production Variant

n products, m raw materials. Every unit of product j uses aij units of raw material i, and there are bi units of material i available. Product j yields profit cj dollars per unit, and requires an investment of ej dollars per unit to produce, with f as a fixed cost. The facility wants to maximize the “return rate on investment”:

maximize (c⊺x) / (e⊺x + f)
subject to ai⊺x ≤ bi, for i = 1, . . . , m
xj ≥ 0, for j = 1, . . . , n


Geometric Programming


Definition

A monomial is a function f : R^n_+ → R_+ of the form

f(x) = c·x1^a1·x2^a2 · · · xn^an,

where c ≥ 0 and ai ∈ R. A posynomial is a sum of monomials.

A Geometric Program is an optimization problem of the following form:

minimize f0(x)
subject to fi(x) ≤ bi, for i ∈ C1
hi(x) = bi, for i ∈ C2
x ≻ 0

where the fi’s are posynomials, the hi’s are monomials, and bi > 0 (wlog 1).

Interpretation

GPs model volume/area minimization problems, subject to constraints.


Example: Designing a Suitcase


A suitcase manufacturer is designing a suitcase. Variables: h, w, d. We want to minimize the surface area 2(hw + hd + wd) (i.e. the amount of material used), subject to a target volume hwd ≥ 5, practical/aesthetic constraints limiting the aspect ratios to h/w ≤ 2 and h/d ≤ 3, and an airline constraint h + w + d ≤ 7.

minimize 2hw + 2hd + 2wd
subject to h⁻¹w⁻¹d⁻¹ ≤ 1/5
hw⁻¹ ≤ 2
hd⁻¹ ≤ 3
h + w + d ≤ 7
h, w, d ≥ 0

More interesting applications involve optimal component layout in chip design.


Designing a Suitcase in Convex Form


minimize 2hw + 2hd + 2wd
subject to h⁻¹w⁻¹d⁻¹ ≤ 1/5
hw⁻¹ ≤ 2
hd⁻¹ ≤ 3
h + w + d ≤ 7
h, w, d ≥ 0

Change of variables to h̃ = log h, w̃ = log w, d̃ = log d:

minimize 2e^(h̃+w̃) + 2e^(h̃+d̃) + 2e^(w̃+d̃)
subject to e^(−h̃−w̃−d̃) ≤ 1/5
e^(h̃−w̃) ≤ 2
e^(h̃−d̃) ≤ 3
e^h̃ + e^w̃ + e^d̃ ≤ 7
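As a sanity check (ours, not from the slides), the convex form can be handed to a generic smooth solver such as scipy's SLSQP; since the transformed problem is convex, the local optimum found is global:

```python
import numpy as np
from scipy.optimize import minimize

# v = (log h, log w, log d); objective and constraints in the convex form above.
surface = lambda v: 2*np.exp(v[0]+v[1]) + 2*np.exp(v[0]+v[2]) + 2*np.exp(v[1]+v[2])

constraints = [
    {'type': 'ineq', 'fun': lambda v: v[0] + v[1] + v[2] - np.log(5)},  # hwd >= 5
    {'type': 'ineq', 'fun': lambda v: np.log(2) - (v[0] - v[1])},       # h/w <= 2
    {'type': 'ineq', 'fun': lambda v: np.log(3) - (v[0] - v[2])},       # h/d <= 3
    {'type': 'ineq', 'fun': lambda v: 7 - np.exp(v).sum()},             # h+w+d <= 7
]
res = minimize(surface, x0=np.full(3, np.log(2.0)), method='SLSQP',
               constraints=constraints)
h, w, d = np.exp(res.x)            # map back to the original variables
print(h, w, d, surface(res.x))     # optimal dimensions and surface area
```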


Geometric Programs in Convex Form

minimize f0(x)
subject to fi(x) ≤ bi, for i ∈ C1
hi(x) = bi, for i ∈ C2
x ≻ 0

where the fi’s are posynomials, the hi’s are monomials, and bi > 0 (wlog 1).

In their natural parametrization by x1, . . . , xn ∈ R+, geometric programs are not convex optimization problems. However, the feasible set and objective function are convex in the variables y1, . . . , yn ∈ R, where yi = log xi.

Each monomial c·x1^a1 · · · xk^ak can be rewritten as the convex function c·e^(a1y1 + a2y2 + ... + akyk). Therefore each posynomial becomes a sum of these convex exponential functions, so the inequality constraints and objective become convex. The equality constraint c·x1^a1 · · · xk^ak = b reduces to the affine constraint a1y1 + a2y2 + · · · + akyk = log(b/c).


Symmetric Matrices

A matrix A ∈ R^{n×n} is symmetric if and only if it is square and Aij = Aji for all i, j. We denote the cone of n × n symmetric matrices by S^n.

Fact

A matrix A ∈ R^{n×n} is symmetric if and only if it is orthogonally diagonalizable, i.e. A = QDQ⊺ where Q is an orthogonal matrix and D = diag(λ1, . . . , λn). The columns of Q are the (normalized) eigenvectors of A, with corresponding eigenvalues λ1, . . . , λn. Equivalently: as a linear operator, A scales the space along an orthonormal basis Q. The scaling factor λi along direction qi may be negative, positive, or 0.


Positive Semi-Definite Matrices


A matrix A ∈ R^{n×n} is positive semi-definite if it is symmetric and moreover all its eigenvalues are nonnegative. We denote the cone of n × n positive semi-definite matrices by S^n_+, and write A ⪰ 0 as shorthand for A ∈ S^n_+.

Here A = QDQ⊺ where Q is an orthogonal matrix and D = diag(λ1, . . . , λn) with each λi ≥ 0. As a linear operator, A performs nonnegative scaling along an orthonormal basis Q.

Note

Positive definite, negative semi-definite, and negative definite matrices are defined similarly.


Geometric Intuition for PSD Matrices

For A ⪰ 0, let q1, . . . , qn be the orthonormal eigenbasis for A, and let λ1, . . . , λn ≥ 0 be the corresponding eigenvalues. The linear operator x → Ax scales the qi component of x by λi. When applied to every x in the unit ball, the image of A is an ellipsoid with principal directions q1, . . . , qn and corresponding diameters 2λ1, . . . , 2λn. When A is positive definite (i.e. λi > 0 for all i), and therefore invertible, this ellipsoid is the set {x : x⊺A⁻²x ≤ 1}.
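A quick numerical check of this picture (our own illustration in numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 0.5 * np.eye(3)        # positive definite by construction

lam, Q = np.linalg.eigh(A)           # eigenvalues (ascending) and eigenbasis
print(np.allclose(A @ Q, Q @ np.diag(lam)))   # A scales q_i by lambda_i

# The image of the unit sphere under x -> Ax is the boundary of the ellipsoid
# {y : y^T A^{-2} y <= 1}, whose semi-axis along q_i has length lambda_i.
X = rng.standard_normal((3, 1000))
X /= np.linalg.norm(X, axis=0)       # 1000 points on the unit sphere
Y = A @ X
vals = np.einsum('ik,ij,jk->k', Y, np.linalg.inv(A @ A), Y)
print(np.allclose(vals, 1.0))        # every image point satisfies y^T A^{-2} y = 1
```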


Useful Properties of PSD Matrices


If A ⪰ 0, then:

x⊺Ax ≥ 0 for all x, and the quadratic function x⊺Ax is convex.

A = B⊺B for some matrix B. Interpretation: PSD matrices encode the “pairwise similarity” relationships of a family of vectors. Interpretation: the quadratic form x⊺Ax is the squared length of a linear transformation of x, namely ||Bx||₂².

A has a positive semi-definite square root A^(1/2) = Q diag(√λ1, . . . , √λn) Q⊺.

A can be expressed as a sum of vector outer products xx⊺.

As it turns out, each of the above is also sufficient for A ⪰ 0 (assuming A is symmetric).
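These equivalences are easy to verify numerically; a small illustration (ours) in numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B.T @ B                          # Gram form, so A is guaranteed PSD

lam, Q = np.linalg.eigh(A)
print(np.all(lam >= -1e-12))         # eigenvalues are nonnegative

# PSD square root: A^{1/2} = Q diag(sqrt(lambda)) Q^T squares back to A.
sqrtA = Q @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ Q.T
print(np.allclose(sqrtA @ sqrtA, A))

# Sum of vector outer products: A = sum_i lambda_i q_i q_i^T.
outer_sum = sum(l * np.outer(q, q) for l, q in zip(lam, Q.T))
print(np.allclose(outer_sum, A))

# Nonnegative quadratic form: x^T A x = ||Bx||_2^2 >= 0 for any x.
x = rng.standard_normal(4)
print(np.isclose(x @ A @ x, np.linalg.norm(B @ x) ** 2))
```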


Quadratic Programming

Minimizing a convex quadratic function over a polyhedron:

minimize x⊺Px + c⊺x + d
subject to Ax ≤ b

where P ⪰ 0. The objective can be rewritten (up to an additive constant) as (x − x0)⊺P(x − x0) for some center x0, so its sublevel sets are scaled copies of an ellipsoid centered at x0.


Examples

Constrained Least Squares

Given a set of measurements (a1, b1), . . . , (am, bm), where ai ∈ R^n is the i’th input and bi ∈ R is the i’th output, fit a linear function minimizing mean square error, subject to known bounds on the linear coefficients:

minimize ||Ax − b||₂² = x⊺A⊺Ax − 2b⊺Ax + b⊺b
subject to li ≤ xi ≤ ui, for i = 1, . . . , n
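scipy ships a solver for exactly this box-constrained least-squares problem; a sketch on synthetic data (ours, for illustration):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))                  # rows are the inputs a_i
true_x = np.array([0.5, -2.0, 3.0])               # deliberately outside the box
b = A @ true_x + 0.1 * rng.standard_normal(50)    # noisy outputs b_i

res = lsq_linear(A, b, bounds=(0.0, 1.0))         # l_i = 0, u_i = 1 for all i
print(res.x)     # best coefficients subject to the bounds
print(res.cost)  # 0.5 * ||Ax - b||_2^2 at the optimum
```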


Distance Between Polyhedra

Given two polyhedra Ax ≤ b and Cx ≤ d, find the distance between them:

minimize ||z||₂² = z⊺Iz
subject to z = y − x
Ax ≤ b
Cy ≤ d


Conic Optimization Problems


This is an umbrella term for problems of the following form:

minimize c⊺x
subject to Ax + b ∈ K

where K is a convex cone (e.g. R^n_+, the positive semi-definite matrices, etc.). Evidently, such optimization problems are convex. As shorthand, the cone containment constraint is often written using generalized inequalities:

Ax + b ⪰_K 0, or equivalently −Ax ⪯_K b, . . .


Example: Second Order Cone Programming

We will exhibit an example of a conic optimization problem with K as the second-order cone K = {(x, t) : ||x||₂ ≤ t}.


Linear Program with Random Constraints

Consider the following optimization problem, where each ai is a Gaussian random variable with mean āi and covariance matrix Σi:

minimize c⊺x
subject to ai⊺x ≤ bi w.p. at least 0.9, for i = 1, . . . , m

ui := ai⊺x is a univariate normal random variable with mean ūi := āi⊺x and standard deviation σi := √(x⊺Σix) = ||Σi^(1/2)x||₂, so ui ≤ bi with probability Φ((bi − ūi)/σi), where Φ is the CDF of the standard normal random variable. Since we want this probability to exceed 0.9, we require

(bi − ūi)/σi ≥ Φ⁻¹(0.9) ≈ 1.3 ≈ 1/0.77,

i.e.

||Σi^(1/2)x||₂ ≤ 0.77(bi − āi⊺x),

a second-order cone constraint in x.
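In code (our illustration; the mean, covariance, and objective are made up), Φ⁻¹ is scipy's norm.ppf, and the resulting second-order cone constraint can be written directly in cvxpy:

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

print(norm.ppf(0.9))            # ~1.2816, the slides' "approx 1.3 approx 1/0.77"

# One chance constraint a^T x <= 4 (w.p. >= 0.9), with made-up data:
a_bar = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
lam, Q = np.linalg.eigh(Sigma)
Sigma_half = Q @ np.diag(np.sqrt(lam)) @ Q.T         # PSD square root of Sigma

x = cp.Variable(2)
chance = cp.norm(Sigma_half @ x, 2) <= (4.0 - a_bar @ x) / norm.ppf(0.9)
prob = cp.Problem(cp.Maximize(a_bar @ x), [chance])  # push as far as possible along a_bar
prob.solve()
print(x.value, prob.value)
```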


Semi-Definite Programming


These are conic optimization problems where the cone in question is the set of positive semi-definite matrices:

minimize c⊺x
subject to x1F1 + x2F2 + · · · + xnFn + G ⪰ 0

where F1, . . . , Fn, G are symmetric matrices, and ⪰ refers to the positive semi-definite cone S^n_+.

Examples

Fitting a distribution, say a Gaussian, to observed data: the variable is a positive semi-definite covariance matrix. As a relaxation of combinatorial problems that encode pairwise relationships: e.g. finding the maximum cut of a graph.
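For the max-cut example, a minimal sketch of the standard SDP relaxation in cvxpy (the 4-cycle instance is ours, and an SDP-capable solver such as SCS is assumed): maximize Σ_{(i,j)∈E} (1 − Xij)/2 subject to X ⪰ 0 and Xii = 1.

```python
import cvxpy as cp

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # unweighted 4-cycle

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]      # '>>' imposes the PSD constraint
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)   # approx 4, matching the true max cut of the 4-cycle
```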


Quasiconvex Optimization Problems


Example


A Note on Tractability
