SLIDE 1

On Second-Order Optimality Conditions for Conic Programming

Héctor Ramírez C.¹

¹Departamento Ingeniería Matemática & Centro Modelamiento Matemático,
Universidad de Chile, Santiago de Chile

6èmes Journées Franco-Chiliennes d'Optimisation
May 19th, 2008, Université du Sud Toulon-Var
Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 1 / 24

SLIDE 2

Outline

1. Introduction and Motivation
• Formulation of Our Problem
• Applications of SDP and SOCP
• Optimality Conditions
• Constraint Qualification Conditions
• Reduction Approach

2. Main results
• Duality Results for Conic Programming
• Strong Regularity Condition

SLIDE 4

Conic Programming

Consider the optimization problem over a closed convex cone K:

min_{x∈X} f(x) ;  g(x) ∈ K ⊆ Y   (P)

where X and Y are finite-dimensional Hilbert spaces. For instance:

• K = {0} × R^m_- ⊂ R^p × R^m   (NLP)
• K = S^m_-   (SDP)
• K = Q^{m_1+1} × Q^{m_2+1} × ... × Q^{m_J+1}   (SOCP),
  where Q^{m_j+1} = {y = (y_0, ȳ) ∈ R × R^{m_j} : y_0 ≥ ‖ȳ‖}
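As a concrete illustration of the (SOCP) cone above, membership in Q^{m+1} reduces to a single norm comparison. A minimal sketch in pure Python (the function name and tolerance are illustrative, not from the talk):

```python
import math

def in_soc(y, tol=1e-12):
    """Membership test for the second-order cone
    Q^{m+1} = {(y0, ybar) in R x R^m : y0 >= ||ybar||}."""
    y0, ybar = y[0], y[1:]
    # math.hypot(*ybar) is the Euclidean norm of the vector part
    return y0 >= math.hypot(*ybar) - tol
```

For instance (5, 3, 4) lies on the boundary of Q³ since 5 = ‖(3, 4)‖, while (1, 1, 1) is outside because 1 < √2.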

slide-5
SLIDE 5

Conic Programming

Consider the optimization problem over a closed convex cone K min

x∈X f(x) ; g(x) ∈ K ⊆ Y

(P) where X and Y are finite dimensional Hilbert spaces. For instance: K = {0} × Rm

− ⊂ Rp × Rm,

(NLP) K = Sm

(SDP)

  • r

K = Qm1+1 × Qm2+1 × ... × QmJ+1, (SOCP) where Qmj+1 = {y = (y0, ¯ y⊤) ∈ R × Rmj : y0 ≥ ¯ y}

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 4 / 24

slide-6
SLIDE 6

Conic Programming

Consider the optimization problem over a closed convex cone K min

x∈X f(x) ; g(x) ∈ K ⊆ Y

(P) where X and Y are finite dimensional Hilbert spaces. For instance: K = {0} × Rm

− ⊂ Rp × Rm,

(NLP) K = Sm

(SDP)

  • r

K = Qm1+1 × Qm2+1 × ... × QmJ+1, (SOCP) where Qmj+1 = {y = (y0, ¯ y⊤) ∈ R × Rmj : y0 ≥ ¯ y}

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 4 / 24

slide-7
SLIDE 7

Conic Programming

Consider the optimization problem over a closed convex cone K min

x∈X f(x) ; g(x) ∈ K ⊆ Y

(P) where X and Y are finite dimensional Hilbert spaces. For instance: K = {0} × Rm

− ⊂ Rp × Rm,

(NLP) K = Sm

(SDP)

  • r

K = Qm1+1 × Qm2+1 × ... × QmJ+1, (SOCP) where Qmj+1 = {y = (y0, ¯ y⊤) ∈ R × Rmj : y0 ≥ ¯ y}

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 4 / 24

SLIDE 8

Applications

Minimization of the maximum eigenvalue of G(x):

min_{t∈R, x∈R^n} t ;  G(x) − tI ⪯ 0

Robust Linear Programming:

min_{x∈R^n} { f(x) = c⊤x ; a_i⊤x ≤ b_i ∀ a_i ∈ E_i, i = 1, ..., m }   (RLP)

where P_i = P_i⊤ ⪰ 0 and E_i := { ā_i + P_i u : ‖u‖ ≤ 1 }. The robust linear constraint can be reformulated as

max{ a_i⊤x : a_i ∈ E_i } = ā_i⊤x + ‖P_i x‖ ≤ b_i,

which is of the form

( b_i − ā_i⊤x , P_i x ) ∈ Q^{n+1}.

Then (RLP) can be cast as an (SOCP) problem.
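The identity max{a⊤x : a ∈ E} = ā⊤x + ‖Px‖ behind the SOCP recast can be checked numerically. A small sketch with illustrative data (a 2-D ellipsoid with P symmetric positive semidefinite, as required above); since P is symmetric, a⊤x = ā⊤x + u⊤(Px), whose maximum over ‖u‖ ≤ 1 is attained on the unit circle:

```python
import math

abar = [1.0, 2.0]                 # nominal a (illustrative data)
P = [[2.0, 0.0], [0.0, 1.0]]      # symmetric PSD ellipsoid shape
x = [0.5, -1.0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

Px = matvec(P, x)
closed_form = dot(abar, x) + math.hypot(*Px)   # abar^T x + ||P x||

# brute force the worst case a = abar + P u over the unit circle ||u|| = 1
worst = max(dot(abar, x) + dot([math.cos(t), math.sin(t)], Px)
            for t in (2 * math.pi * k / 20000 for k in range(20000)))
```

`worst` agrees with `closed_form` up to sampling error, so the single second-order-cone constraint stands in for infinitely many linear ones.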

SLIDE 13

First-Order Optimality Conditions

Consider the optimization problem over a closed convex cone K:

min_x f(x) ;  g(x) ∈ K   (P)

Karush-Kuhn-Tucker Conditions. We say that (x∗, y∗) is a KKT point (y∗ ∈ Λ(x∗)) if it satisfies

∇_x L(x∗, y∗) = ∇f(x∗) + Dg(x∗)⊤y∗ = 0,  y∗ ∈ N_K(g(x∗)),   (KKT)

where N_K(z) is the normal cone to K at z ∈ K.

SLIDE 14

First-Order Optimality Conditions

Equivalently, the condition y∗ ∈ N_K(g(x∗)) can be written out explicitly, so that (x∗, y∗) is a KKT point if

∇_x L(x∗, y∗) = ∇f(x∗) + Dg(x∗)⊤y∗ = 0,
⟨g(x∗), y∗⟩ = 0,  g(x∗) ∈ K,  y∗ ∈ K⁻ (= −K),   (KKT)

where K⁻ is the negative polar cone of K.
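For a scalar instance of (P) with K = R₋, the KKT system above can be verified by hand. A hedged sketch (the problem min (x+1)² subject to g(x) = −x ∈ R₋, i.e. x ≥ 0, is chosen purely for illustration):

```python
# min (x+1)^2  s.t.  g(x) = -x in K = R_-   (i.e. x >= 0)
def df(x):
    return 2.0 * (x + 1.0)   # gradient of f(x) = (x+1)^2

x_star, y_star = 0.0, 2.0    # candidate KKT pair
dg = -1.0                    # Dg(x) = -1 (constant)

stationarity = df(x_star) + dg * y_star   # grad_x L(x*, y*), should be 0
complementarity = (-x_star) * y_star      # <g(x*), y*>, should be 0
primal_feasible = (-x_star) <= 0          # g(x*) in K = R_-
dual_feasible = y_star >= 0               # y* in K^- = R_+
```

All four conditions hold at (x∗, y∗) = (0, 2), matching the (KKT) system with K⁻ = R₊.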

SLIDE 15

Constraint Qualification Conditions

Robinson's Constraint Qualification Condition

Let x∗ be a feasible point of (P).

Definition. We say that x∗ satisfies Robinson's constraint qualification condition if

Dg(x∗)X + T_K(g(x∗)) = Y,   (Rob)

where T_K(g(x∗)) is the tangent (or Bouligand) cone of K at g(x∗).

NLP case (Mangasarian-Fromovitz condition): ∇g_i(x∗), i ∈ {1, ..., p}, are linearly independent and ∃ h ∈ R^n such that
∇g_i(x∗)⊤h = 0, ∀ i ∈ {1, ..., p},
∇g_i(x∗)⊤h < 0, ∀ i ∈ {p+1, ..., p+m} s.t. g_i(x∗) = 0.

SDP case: ∃ h ∈ R^n such that E⊤[Dg(x∗)h]E ≺ 0, where the columns of E are an orthonormal basis of Ker g(x∗).

SLIDE 16

Constraint Qualification Conditions

An equivalent formulation of Robinson's constraint qualification at a feasible point x∗ of (P):

0 ∈ int{ g(x∗) + Dg(x∗)X − K }.   (Rob)

The NLP (Mangasarian-Fromovitz) and SDP characterizations are as on the previous slide.

SLIDE 19

Constraint Qualification Conditions

Nondegeneracy Condition

Definition. We say that x∗ is nondegenerate if

Dg(x∗)X + lin(T_K(g(x∗))) = Y,   (NDG)

where lin(C) is the largest linear space contained in C.

NLP case: ∇g_i(x∗), for all i ∈ {1, ..., p+m} s.t. g_i(x∗) = 0, are linearly independent.

SDP case: either g(x∗) ∈ int K, or the map h ∈ R^n ↦ ψ(h) = E⊤[Dg(x∗)h]E is onto, where the columns of E are an orthonormal basis of Ker g(x∗).

Also called the transversality condition in Shapiro '96.

SLIDE 22

Remarks about the Nondegeneracy Condition

• (NDG) implies (Rob) (hence a (KKT) multiplier y∗ exists)
• (NDG) implies that y∗ is unique
• (NDG) ⇔ ∃! y∗, when the strict complementarity condition holds: y∗ ∈ ri N_K(g(x∗))

Bonnans & Shapiro 2000

SLIDE 26

Reduction Approach

Let K ⊆ X and K̂ ⊆ Y be closed, convex cones.

Definition. K is said to be (pointed, C² and cone) reducible at s∗ ∈ K to K̂ if there exist a neighborhood N of s∗ and a C² mapping Ξ : N → Y such that
• Ξ(s∗) = 0 and T_K̂(Ξ(s∗)) is a pointed cone;
• for all s ∈ N, s ∈ K iff Ξ(s) ∈ K̂;
• DΞ(s∗) : X → Y is onto.

When K is reducible at every s∗ ∈ K to some K̂ (possibly depending on s∗), we simply say that K is reducible. In this talk K is assumed to be reducible.

SLIDE 29

Reduction Approach

Examples. SDP case: S^m_+ is reducible. Indeed, for every matrix Ẑ ∈ S^m_+:
(i) if Ẑ ≻ 0, take K̂ = {0} and Ξ(Z) = 0;
(ii) else, take K̂ = S^k_+ and Ξ(Z) = E⊤ZE, where the columns of E ∈ R^{m×k} are an orthonormal basis of Ker Ẑ.

Proposition. Let x∗ be a feasible point of problem (P). Then nondegeneracy of x∗ is equivalent to saying that the mapping h ↦ DA(x∗)h is onto, where A := Ξ ∘ g.

Bonnans & Shapiro 2000

SLIDE 30

Reduction Approach

Examples. SOCP case: Q^{m+1} is reducible. Indeed, for every vector ŝ ∈ Q^{m+1}:
(i) if ŝ = 0, take K̂ = Q^{m+1} and Ξ(s) = s;
(ii) if ŝ ∈ int Q^{m+1}, take K̂ = {0} and Ξ(s) = 0;
(iii) if ŝ ∈ ∂Q^{m+1} \ {0}, take K̂ = R₋ and Ξ(s) = ‖s̄‖ − s_0.
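In case (iii) the reduction map is Ξ(s) = ‖s̄‖ − s₀ with K̂ = R₋, and for this particular cone the equivalence s ∈ Q^{m+1} ⇔ Ξ(s) ∈ R₋ even holds globally, since s₀ ≥ ‖s̄‖ is exactly ‖s̄‖ − s₀ ≤ 0. A minimal sketch:

```python
import math

def xi(s):
    """Reduction map at a nonzero boundary point of Q^{m+1}:
    Xi(s) = ||s_bar|| - s0, with K_hat = R_-."""
    return math.hypot(*s[1:]) - s[0]

# s in Q^{m+1} (i.e. s0 >= ||s_bar||)  iff  xi(s) <= 0
```

For example xi((5, 3, 4)) = 0 (boundary), xi((2, 1, 0)) < 0 (interior), xi((1, 1, 1)) > 0 (outside the cone).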

SLIDE 32

Second-Order Optimality Conditions

For problem (P), the second-order necessary condition (SONC) at x∗ is given by

sup_{y∈Λ(x∗)} { D²_{xx}L(x∗, y)(h, h) − σ(y, T(h)) } ≥ 0,  ∀ h ∈ C(x∗),

where
• C(x∗) = { h ∈ X : Dg(x∗)h ∈ T_K(g(x∗)), ∇f(x∗)⊤h = 0 } is the critical cone,
• σ(y, K) := sup_{w∈K} ⟨w, y⟩ (σ(y, T(h)) is known as the sigma term), and
• T(h) := T²_K(z, d) = { w ∈ Y : dist(z + td + ½ t²w, K) = o(t²), t ≥ 0 } is the second-order tangent set to K at z = g(x∗) in the direction d = Dg(x∗)h.

Bonnans, Cominetti & Shapiro '99
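For polyhedral K (the NLP case) the sigma term vanishes, which is why it is invisible in classical second-order conditions. A short derivation sketch, under the assumption that y ∈ Λ(x∗) and h ∈ C(x∗) (so ⟨y, d⟩ = 0 for d = Dg(x∗)h):

```latex
% For polyhedral K:  T^2_K(z,d) = T_{T_K(z)}(d), a cone containing 0.
% For y \in N_K(z) with \langle y, d \rangle = 0, one has
% y \in N_{T_K(z)}(d), hence \langle w, y \rangle \le 0 for all
% w \in T_{T_K(z)}(d), and since 0 belongs to this set,
%
%   \sigma\bigl(y, T^2_K(z,d)\bigr)
%     = \sup_{w \in T_{T_K(z)}(d)} \langle w, y \rangle = 0 .
%
% The SONC then reduces to the classical NLP condition
%   \sup_{y \in \Lambda(x^*)} D^2_{xx} L(x^*, y)(h,h) \ge 0,
%   \quad \forall\, h \in C(x^*).
```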

SLIDE 36

Second-Order Optimality Conditions

For problem (P), the second-order sufficient condition (SOSC) at x∗ is given by

sup_{y∈Λ(x∗)} { D²_{xx}L(x∗, y)(h, h) − σ(y, T(h)) } > 0,  ∀ h ∈ C(x∗) \ {0},

with the critical cone C(x∗), the support function σ, and the second-order tangent set T(h) = T²_K(g(x∗), Dg(x∗)h) defined as in the necessary condition.

Bonnans, Cominetti & Shapiro '99

SLIDE 38

Strong Second-Order Sufficient Optimality Conditions

For problem (P), the SSOSC at x∗ and y∗ ∈ Λ(x∗) is given by

D²_{xx}L(x∗, y∗)(h, h) − ξ∗(h) > 0,  ∀ h ∈ Sp(C(x∗)) \ {0},

where Sp(C) := R₊(C − C) is the linear space generated by C, and ξ∗(h) is a quadratic function of h that depends on y∗, g(x∗) and on the reduction Ξ at g(x∗).

Remark: when Dg(x∗)h ∈ T_K(g(x∗)), it holds that T(h) ≠ ∅, and one obtains ξ∗(h) = σ(y∗, T(h)).

SLIDE 40

Strong Second-Order Sufficient Optimality Conditions

SDP case

For the problem (SDP), this sufficient condition reads:

h⊤∇²_{xx}L(x∗, y∗)h + h⊤H(x∗, y∗)h > 0,  ∀ h ∈ Sp(C(x∗)) \ {0},

where H(x∗, Y∗)_{ij} := −2 Y∗ · [D_{x_i}g(x∗) g(x∗)† D_{x_j}g(x∗)], with g(x∗)† the Moore-Penrose pseudoinverse of g(x∗).

SLIDE 41

Strong Second-Order Sufficient Optimality Conditions

SOCP case

For the problem (SOCP), this sufficient condition reads:

h⊤∇²_{xx}L(x∗, y∗)h + h⊤H(x∗, y∗)h > 0,  ∀ h ∈ Sp(C(x∗)) \ {0},

where H = Σ_{j=1}^J H_j(x∗, y^j), with

H_j(x∗, y^j) := − (y^j_0 / g^j_0(x∗)) Dg^j(x∗)⊤ diag(1, −I_{m_j}) Dg^j(x∗)

if g^j(x∗) ∈ ∂Q^{m_j+1} \ {0}, and H_j(x∗, y^j) := 0 otherwise.

In both cases ξ∗ coincides with the expression for the sigma term.
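The block H_j above can be assembled with elementary matrix arithmetic. A sketch on illustrative data (the affine map g(x) = (x₁, x₂, 0), the point x∗ = (1, 1) with g(x∗) = (1, 1, 0) ∈ ∂Q³ \ {0}, and y^j₀ = 2 are all assumptions chosen for the example):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Dg^j(x*) for g(x) = (x1, x2, 0); rows are components of g
Dg = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
# diag(1, -I_2)
R = [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]
y0, g0 = 2.0, 1.0            # y^j_0 and g^j_0(x*) (illustrative values)

Hj = matmul(transpose(Dg), matmul(R, Dg))
Hj = [[-(y0 / g0) * e for e in row] for row in Hj]
# here Hj works out to [[-2, 0], [0, 2]], so the extra curvature term is
# h^T Hj h = 2 (h2^2 - h1^2)
```

Note that H_j can be indefinite: it adds negative curvature along some directions, which is precisely the contribution of the cone's boundary curvature to the sufficient condition.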

SLIDE 43

Second-Order Optimality Conditions

Why should we study second-order optimality conditions involving an additional term (e.g. the σ-term)?

• They seem to be the "right" conditions for studying convergence of algorithms for solving nonlinear conic problems. For instance, S-SDP methods: Fares et al. '02, Correa & Ramírez '04, Garcés, Gómez & Jarre '08.
• They allow us to understand some subtle differences between NLP and other conic programs, such as SDP and SOCP. For instance: differences from the sensitivity viewpoint (strongly regular solutions, Aubin property of critical points, etc.) and from the duality viewpoint, among others.

SLIDE 46

Outline

2. Main results
• Duality Results for Conic Programming
• Strong Regularity Condition

SLIDE 47

Extended Wolfe's Dual

Definition. We define the extended Wolfe dual as follows:

max_{x∈X, y∈Y} L(x, y) = f(x) + ⟨y, g(x)⟩
subject to ∇_x L(x, y) = 0,  y ∈ K⁻ (= −K),   (EWD)

where K⁻ is the negative polar cone of K.
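On the scalar example min (x+1)² with g(x) = −x and K = R₋ (illustrative data, not from the talk), the (EWD) objective evaluated at a KKT pair reproduces the primal optimal value, since the complementarity term ⟨y, g(x)⟩ vanishes there. A quick self-contained check:

```python
def f(x):
    return (x + 1.0) ** 2

def L(x, y):
    return f(x) + y * (-x)    # L(x, y) = f(x) + <y, g(x)> with g(x) = -x

x_star, y_star = 0.0, 2.0     # KKT pair: 2(x+1) - y = 0, y >= 0 = K^-
grad_x_L = 2.0 * (x_star + 1.0) - y_star   # (EWD) constraint, must vanish
```

The pair (x∗, y∗) is (EWD)-feasible (grad_x_L = 0, y∗ ∈ K⁻ = R₊) and L(x∗, y∗) = f(x∗), so there is no duality gap at this point.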

SLIDE 48

Dual Strong Second-Order Sufficient Condition

For simplicity, from now on we suppose that K is self-polar (K⁻ = −K). Let (x∗, y∗) be a KKT point of problem (P).

Definition. We say that the Dual Strong Second-Order Sufficient Condition (DSSOSC) holds at (x∗, y∗) if ∇²_{xx}L(x∗, y∗) is nonsingular and

w⊤[∇²_{xx}L(x∗, y∗)]⁻¹w − ξ^D_∗(u) > 0,  ∀ (w, u) ∈ X × Y \ {0} with w = Dg(x∗)⊤u, u ∈ Sp(T_{K⁻}(y∗) ∩ g(x∗)⊥),

where ξ^D_∗(u) is a quadratic function of u that depends on y∗, g(x∗) and on the reduction Ξ at y∗.

SLIDE 51

DSSOSC and NDG

Proposition (Ramírez '08). Let (x∗, y∗) be a KKT point of problem (P). Then:
• (x∗, y∗, 0, g(x∗)) is a KKT point for problem (EWD); that is, (0, g(x∗)) ∈ R^n × K is a KKT multiplier of (x∗, y∗) for problem (EWD).
• DSSOSC holds at (x∗, y∗) iff SSOSC holds at (x∗, y∗, 0, g(x∗)) for problem (EWD).

Theorem (Ramírez '08). Let (x∗, y∗) be a KKT point of (P). If DSSOSC holds at (x∗, y∗), then x∗ is nondegenerate for problem (P).

These results extend similar ones obtained for NLP (Mangasarian '84).

SLIDE 57

Strongly Regular Solutions

(KKT) can be written as a generalized equation in z = (x, y):

0 ∈ F(z) + N(z)   (GE)

We linearize (GE) at z∗ = (x∗, y∗) and parameterize by δ:

δ ∈ F(z∗) + DF(z∗)(z − z∗) + N(z)   (LEδ)

Definition (Robinson '80). z∗ is called a strongly regular (SR) solution of (GE) if there exists a neighborhood N of z∗ such that, for all δ small enough, (LEδ) has a unique solution z∗(δ) in N, which is Lipschitz in δ.

This definition can be applied to K = S^m_- (SDP) and K = Q^{m+1} (SOCP).
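One standard choice of F and N casting (KKT) as (GE) — stated here as an assumption about the intended formulation, consistent with Robinson '80 and the complementarity form of (KKT) above:

```latex
% z = (x, y),
% F(z) = \begin{pmatrix} \nabla f(x) + Dg(x)^{\top} y \\ -g(x) \end{pmatrix},
% \qquad
% N(z) = \{0\} \times N_{K^-}(y).
%
% Then 0 \in F(z) + N(z) gives \nabla_x L(x,y) = 0 and
% g(x) \in N_{K^-}(y), i.e. y \in K^-, g(x) \in K,
% \langle g(x), y \rangle = 0 -- exactly the (KKT) system.
```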

slide-60
SLIDE 60

Remarks about Strong Regularity

Existence of an SR solution implies (Rob). If K is reducible, existence of an SR solution also implies (NDG). SR “extends” the Implicit Function Theorem (see Robinson ’80) and is stable under small perturbations. A characterization of SR is known in nonlinear programming (Bonnans & Sulem ’95 / Dontchev & Rockafellar ’98); for a general conic optimization framework it is still an open problem.

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 22 / 24


slide-65
SLIDE 65

Characterization of the Strong regularity

SDP case:

Theorem (Bonnans & Ramírez ’05, Sun ’06)

Let x∗ be a solution of (SDP) and Y∗ its corresponding KKT-multiplier. Then (x∗, Y∗) is a strongly regular solution iff x∗ is nondegenerate and the following strong SOSC holds:

h⊤∇²xxL(x∗, Y∗)h + h⊤H(x∗, Y∗)h > 0, ∀h ∈ Sp(C(x∗)) \ {0},

where Sp(C) := R+(C − C) and H(x∗, Y∗)ij := −2 Y∗ · [Dxig(x∗) g(x∗)† Dxjg(x∗)].

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 23 / 24
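As a sanity check on the curvature term, here is a small NumPy sketch with toy data of my own (assuming the inner product Y · M = trace(YM) and the Moore–Penrose pseudoinverse for g(x∗)†): at a rank-deficient g(x∗) = diag(0, −1) on the boundary of S²_−, with hypothetical partial derivatives Dx1g, Dx2g and multiplier Y∗ = diag(1, 0), the matrix H is symmetric positive semidefinite and contributes extra curvature only in the x2 direction.

```python
import numpy as np

# Toy boundary point of the cone S^2_- (negative semidefinite 2x2 matrices).
g_star = np.diag([0.0, -1.0])            # rank-deficient: on the boundary of S^2_-
Y_star = np.diag([1.0, 0.0])             # multiplier, complementary to g_star

# Hypothetical partial derivatives D_{x_i} g(x*) of a 2-variable map g.
Dg = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[0.0, 1.0], [1.0, 0.0]])]

g_pinv = np.linalg.pinv(g_star)          # Moore-Penrose pseudoinverse g(x*)^†

# H_ij := -2 Y* . [D_{x_i} g(x*) g(x*)^† D_{x_j} g(x*)],  with A . B = trace(AB)
n = len(Dg)
H = np.array([[-2.0 * np.trace(Y_star @ (Dg[i] @ g_pinv @ Dg[j]))
               for j in range(n)] for i in range(n)])

print(H)   # extra curvature appears only in the x2 direction here
```

In this example H = diag(0, 2): the sigma-term strengthens the second-order condition precisely in the direction that moves g(x∗) along the boundary of the cone.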

slide-66
SLIDE 66

Characterization of the Strong regularity

SOCP case:

Theorem (Bonnans & Ramírez ’05)

Let x∗ be a solution of (SOCP) and y∗ its corresponding KKT-multiplier. Then (x∗, y∗) is a strongly regular solution iff x∗ is nondegenerate and the following strong SOSC holds:

h⊤∇²xxL(x∗, y∗)h + h⊤H(x∗, y∗)h > 0, ∀h ∈ Sp(C(x∗)) \ {0},

where H = ∑_{j=1}^{J} Hj(x∗, yj), with

Hj(x∗, yj) := −(yj0 / gj0(x∗)) Dgj(x∗)⊤ [ 1  0⊤ ; 0  −Imj ] Dgj(x∗)

if gj(x∗) ∈ ∂Qmj+1 \ {0}, and Hj(x∗, yj) := 0 otherwise.

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 23 / 24
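A corresponding numerical sketch for a single cone block (again toy data of my own, with R := diag(1, −Im) and the coefficient −yj0/gj0(x∗) as reconstructed above — treat both as assumptions of this sketch): at gj(x∗) = (1, 1, 0)⊤ on the boundary of Q³, with boundary multiplier yj = (1, −1, 0)⊤ and Dgj(x∗) = I for simplicity, Hj is an indefinite matrix that nevertheless adds positive curvature on the relevant critical directions.

```python
import numpy as np

m = 2                                     # cone Q^{m+1} = Q^3
g_j = np.array([1.0, 1.0, 0.0])           # boundary point: g0 = ||g_bar||, g != 0
y_j = np.array([1.0, -1.0, 0.0])          # boundary multiplier (y0 = ||y_bar||)
Dg_j = np.eye(m + 1)                      # Jacobian of g_j at x*, identity for simplicity

R = np.diag([1.0] + [-1.0] * m)           # R = diag(1, -I_m)

on_boundary = abs(g_j[0] - np.linalg.norm(g_j[1:])) < 1e-12 and g_j.any()
if on_boundary:
    # H_j := -(y_{j0} / g_{j0}(x*)) Dg_j(x*)^T R Dg_j(x*)
    H_j = -(y_j[0] / g_j[0]) * Dg_j.T @ R @ Dg_j
else:
    H_j = np.zeros((m + 1, m + 1))        # H_j = 0 away from the boundary

print(H_j)
```

Note the case split mirrors the theorem: the sigma-term is nonzero only when gj(x∗) lies on the boundary of the cone away from the origin.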

slide-67
SLIDE 67

Bibliography

  • S. M. Robinson.

Strongly regular generalized equations. Mathematics of Operations Research 5: pp. 43–62, 1980.

  • O. L. Mangasarian.

Local duality of nonlinear programming. SIAM J. Control and Optimization 22(1): pp. 162–169, 1984.

  • J. F. Bonnans and H. Ramírez C.

Strong regularity of semidefinite programming problems. Technical report CMM-DIM 137, 2005.

  • J. F. Bonnans and H. Ramírez C.

Perturbation analysis of second order cone programming problems. Mathematical Programming Ser. B, 104: pp. 205–227, 2005.

  • D. Sun.

The strong second order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their applications. Mathematics of Operations Research 31: pp. 761–776, 2006.

Héctor Ramírez C. (DIM & CMM, U. Chile) SOOC for Conic Programming Toulon 2008 24 / 24