

SLIDE 1

Wolfe’s Combinatorial Method is Exponential

Jamie Haddock STOC June 26, 2018

UC Davis/UCLA

Poster in Bradbury/Rose and Hershey/Crocker at 8 pm

joint with Jesús A. De Loera and Luis Rademacher https://arxiv.org/abs/1710.02608

SLIDE 2

Minimum Norm Point in Polytope

We are interested in solving the problem MNP(P):

    min_{x ∈ P} ||x||_2

where P is a polytope, and in determining the minimum-dimension face F which achieves the distance ||x||_2.

[Figure: polytope P = conv(p1, ..., p5) with its minimum norm point x and the origin O]

⊲ can be solved in polynomial time via interior-point methods
⊲ no strongly-polynomial time algorithm known (even for MNP over a simplex)
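The simplest nontrivial instance of MNP is a segment. A minimal sketch (function name and tolerance choices are ours, not from the talk): the minimum norm point of conv({p, q}) is the projection of the origin onto the line through p and q, with the line parameter clamped to [0, 1] so the result stays inside the segment.

```python
def mnp_segment(p, q):
    """Minimum-norm point of the segment conv({p, q}):
    project the origin onto the line through p and q, then
    clamp the parameter to [0, 1] to stay inside the segment."""
    d = [qi - pi for pi, qi in zip(p, q)]           # direction q - p
    dd = sum(di * di for di in d)                   # ||d||^2
    if dd == 0:                                     # degenerate: p == q
        return list(p)
    t = -sum(pi * di for pi, di in zip(p, d)) / dd  # unconstrained minimizer
    t = max(0.0, min(1.0, t))                       # clamp into the segment
    return [pi + t * di for pi, di in zip(p, d)]

# Segment from (0, 2) to (3, 0): the nearest point to the origin
x = mnp_segment((0.0, 2.0), (3.0, 0.0))  # -> [12/13, 18/13]
```

When the unconstrained minimizer falls outside [0, 1], the clamp returns the nearer endpoint, i.e. the minimum norm point lies on a lower-dimensional face.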

SLIDE 6

Applications

  • nearest point problem for transportation polytopes
  • colorful linear programming
  • submodular function minimization
    ⊲ subroutine in the Fujishige-Wolfe method
    ⊲ machine learning: vision, large-scale learning
  • linear programming

SLIDE 12

Our Results I

Theorem (De Loera, H., Rademacher ’17) Linear programming reduces to distance to a V-simplex in strongly-polynomial time.

  • Step 1: LP reduces to “membership in a V-polytope.”
  • Step 2: “Membership in a V-polytope” reduces to “distance to a V-simplex.”

If a strongly-polynomial method for projection onto a polytope exists, then this gives a strongly-polynomial method for LP. It was previously known that linear programming reduces to MNP on a polytope in weakly-polynomial time [Fujishige, Hayashi, Isotani ’06].

SLIDE 17

Background

Lemma (Wolfe ’74) Let P = conv(p1, p2, ..., pm). Then x ∈ P is MNP(P) if and only if x^T p_j ≥ ||x||_2^2 for all j = 1, 2, ..., m.

[Figure: polytope P with MNP x, origin O, and the supporting hyperplane {y : x^T y = ||x||_2^2}]

Def: An affinely independent set of input points Q = {q1, q2, ..., qk} is a corral if MNP(aff(Q)) ∈ relint(conv(Q)).

[Figure: examples of point sets that are corrals, and one set that is not a corral]

Note: Singletons are corrals.
Note: There is a corral in P whose convex hull contains MNP(P).
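Wolfe's lemma gives a finite optimality test: x is the minimum norm point exactly when no input point lies strictly on the origin side of the hyperplane {y : x^T y = ||x||_2^2}. A small sketch of that check (helper names and the tolerance are ours); any point it returns is an "improving" candidate for insertion.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def violating_points(x, points):
    """Points p with x.p < ||x||^2; by Wolfe's lemma, x is MNP of
    conv(points) exactly when this list is empty."""
    xx = dot(x, x)
    return [p for p in points if dot(x, p) < xx - 1e-12]

P = [(0.0, 2.0), (3.0, 0.0), (-2.0, 1.0)]
x = (12 / 13, 18 / 13)   # MNP of the segment conv({p1, p2})
bad = violating_points(x, P)
# p3 = (-2, 1) violates the criterion, so x is not MNP of conv(P)
```

The criterion only requires m inner products, which is what makes it usable as the stopping/insertion test inside a combinatorial method.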

SLIDE 25

Idea of Wolfe’s Method

  • combinatorial method for computing MNP on a vertex-representation polytope
  • searches through a sequence of corrals whose MNPs have strictly decreasing norm until finding the optimum
  • maintains a set C defining the current corral, and x, the MNP of C

Sketch:
⊲ Start with any point in P as C and x.
⊲ If C is not the optimal corral (checked via the optimality criterion):
  • Find an improving point p (given via the optimality criterion) to add to C.
  • Let x “follow gravity” within conv(C), decreasing the dimension of the current face, until reaching a face which is a corral; update C.

SLIDE 32

Related Methods

⊲ related to von Neumann’s algorithm for linear programming
⊲ related to the Frank-Wolfe method for convex programming
⊲ related to Gilbert’s procedure for quadratic programming
⊲ generalized by the Lawson-Hanson procedure for non-negative least squares

SLIDE 34

Our Results II

Theorem (De Loera, H., Rademacher ’17) There exists a set of points P(d) ⊂ R^d, for d = 2k − 1, on which Wolfe’s method with the minnorm insertion rule visits a sequence of corrals C(d) of length 5 · 2^(k−1) − 4.

Note: P(d) is of size 2d − 1.
Note: the binary encoding lengths of the coordinates of the points of P(d) grow polynomially in d.

SLIDE 37

Previous Results

  • # iterations ≤ Σ_{i=1}^{d+1} i · C(m, i), with any insertion rule (Wolfe ’74)
  • convergence rates for the linopt insertion rule (Chakrabarty, Jain, Kothari ’14) (Lacoste-Julien, Jaggi ’15)
    ⊲ pseudo-polynomial complexity
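Reading the scattered tokens of the displayed bound as Σ_{i=1}^{d+1} i · C(m, i) (our reconstruction of the formula from this slide), it is easy to evaluate; since it sums binomial coefficients of m points over all corral sizes up to d + 1, it is exponential in the dimension when m grows with d.

```python
from math import comb

def wolfe_iteration_bound(m, d):
    """Upper bound on the number of iterations, reading this slide's
    bound as sum_{i=1}^{d+1} i * C(m, i) (Wolfe '74); the exact form
    is our reconstruction of the garbled formula."""
    return sum(i * comb(m, i) for i in range(1, d + 2))

# Example with m = 2d - 1 points (the size of P(d) later in the talk)
b = wolfe_iteration_bound(2 * 3 - 1, 3)  # m = 5, d = 3 -> 75
```

This is only an upper bound; the talk's contribution is a matching exponential lower bound for a specific insertion rule.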

SLIDE 41

An Inefficient Example

Sets C visited: {a}, {a,p}, {a,p,q}, {p,q}, {p,q,r}, {q,r}, {q,r,s}, {r,s}, {r,s,a}

⊲ a < p < q < r < s

SLIDE 45

Exponential Lower Bound

⊲ replace a and x1 by a subspace and a set of points in it, constructed recursively
⊲ P(d) = P(d − 2) ∪ {pd, qd, rd, sd}

Sets C visited: C(d − 2), O(d − 2)p, pq, qr, rs, rs·C(d − 2)
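Under the reading that the d-dimensional run replays the (d − 2)-dimensional corral sequence twice around 4 new corrals (as the recursive construction above suggests), the sequence length satisfies L(d) = 2·L(d − 2) + 4. A small consistency check (function names are ours) that this recursion reproduces the stated count 5 · 2^(k−1) − 4 with d = 2k − 1:

```python
def seq_len(d):
    """Length of the corral sequence, following the recursion suggested
    by P(d) = P(d-2) ∪ {p_d, q_d, r_d, s_d}: the d-dimensional run
    replays the (d-2)-dimensional sequence twice, plus 4 new corrals."""
    if d == 1:
        return 1                    # base case: a single point
    return 2 * seq_len(d - 2) + 4

def closed_form(d):
    """Stated count 5 * 2^(k-1) - 4 with d = 2k - 1, i.e. k = (d+1)//2."""
    k = (d + 1) // 2
    return 5 * 2 ** (k - 1) - 4

# The recursion reproduces the closed form for every odd d checked
check = all(seq_len(d) == closed_form(d) for d in range(1, 30, 2))
```

Since |P(d)| = 2d − 1 grows only linearly while the corral count doubles every two dimensions, the number of iterations is exponential in the input size.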

SLIDE 49

Future Directions

  1. Find an exponential example for Wolfe’s method with the linopt insertion rule.
  2. Search for classes of polytopes on which Wolfe’s method is polynomial (e.g. polytopes arising in submodular function minimization).
  3. Give an average-case (or smoothed) analysis of Wolfe’s method.

SLIDE 53

Thanks for attending!

Questions?

[1] I. Bárány and S. Onn. Colourful linear programming and its relatives. Mathematics of Operations Research, 22(3):550–567, 1997.
[2] D. Chakrabarty, P. Jain, and P. Kothari. Provable submodular minimization using Wolfe’s algorithm. In Proc. Advances Neural Info. Proc. Systems (NIPS), pages 802–809, 2014.
[3] G. B. Dantzig. An ε-precise feasible solution to a linear program with a convexity constraint in 1/ε² iterations independent of problem size. Technical report, Stanford University, 1992.
[4] J. A. De Loera, J. Haddock, and L. Rademacher. The minimum Euclidean-norm point on a convex polytope: Wolfe’s combinatorial algorithm is exponential. arXiv:1710.02608, 2017.
[5] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics (NRL), 3(1-2):95–110, 1956.
[6] S. Fujishige, T. Hayashi, and S. Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Citeseer, 2006.
[7] E. G. Gilbert. An iterative procedure for computing the minimum of a quadratic form on a convex set. SIAM J. Control, 4:61–80, 1966.
[8] S. Lacoste-Julien and M. Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. In Proc. Advances Neural Info. Proc. Systems (NIPS), pages 496–504, 2015.
[9] C. L. Lawson and R. J. Hanson. Solving least squares problems, volume 15 of Classics in Applied Mathematics. SIAM, Philadelphia, PA, 1995. Revised reprint of the 1974 original.
[10] P. Wolfe. Finding the nearest point in a polytope. Math. Programming, 11(2):128–149, 1976.

SLIDE 54

Sketch of Method

x ∈ P = {p1, p2, ..., pm}; C = {x}
while x is not MNP(P):
    choose pj ∈ {p ∈ P : x^T p < ||x||_2^2}
    C = C ∪ {pj}
    y = MNP(aff(C))
    while y ∉ relint(conv(C)):
        z = argmin_{z ∈ conv(C) ∩ [x, y]} ||z − y||_2
        C = C − {pi}, where pi and z are on different faces of conv(C)
        x = z
        y = MNP(aff(C))
    x = y
return x

[Figure: the method run on P = conv(p1, p2, p3) with p1 = (0, 2), p2 = (3, 0), p3 = (−2, 1); the sets C visited are {p1}, {p1, p2}, {p1, p2, p3}, and {p2, p3}]
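The pseudocode above can be turned into a small self-contained implementation. This is a sketch under assumptions of ours: x is represented by its convex coefficients over C, MNP(aff(C)) is computed from the KKT system of min ||Σ λᵢqᵢ||² subject to Σ λᵢ = 1, and the insertion rule picks the violating point minimizing x^T p (one possible rule; all names are ours).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a square system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mnp_aff_coeffs(C):
    # Coefficients alpha of MNP(aff(C)): minimize ||sum alpha_i q_i||^2
    # subject to sum alpha_i = 1 (bordered KKT system with multiplier mu)
    k = len(C)
    A = [[dot(C[i], C[j]) for j in range(k)] + [1.0] for i in range(k)]
    A.append([1.0] * k + [0.0])
    return solve(A, [0.0] * k + [1.0])[:k]

def combo(C, lam):
    return tuple(sum(l * q[j] for l, q in zip(lam, C))
                 for j in range(len(C[0])))

def wolfe(points, tol=1e-9):
    C, lam = [points[0]], [1.0]
    corrals = [tuple(C)]
    while True:
        x = combo(C, lam)
        xx = dot(x, x)
        # optimality criterion: x is MNP iff x.p >= ||x||^2 for all p
        cand = [p for p in points if dot(x, p) < xx - tol]
        if not cand:
            return x, corrals
        # insertion rule (one possible choice): most violating point
        C.append(min(cand, key=lambda p: dot(x, p)))
        lam.append(0.0)
        while True:
            alpha = mnp_aff_coeffs(C)
            if all(a >= tol for a in alpha):   # y in relint(conv(C))
                lam = alpha                    # C is a corral; set x = y
                corrals.append(tuple(C))
                break
            # move from x toward y until hitting the boundary of conv(C)
            theta = min(lam[i] / (lam[i] - alpha[i])
                        for i in range(len(C)) if alpha[i] < tol)
            lam = [(1 - theta) * li + theta * ai
                   for li, ai in zip(lam, alpha)]
            drop = min(range(len(C)), key=lambda i: lam[i])
            C.pop(drop); lam.pop(drop)         # drop a zeroed-out point
            s = sum(lam)
            lam = [li / s for li in lam]

p1, p2, p3 = (0.0, 2.0), (3.0, 0.0), (-2.0, 1.0)
x, corrals = wolfe([p1, p2, p3])
# x is the minimum-norm point of conv({p1, p2, p3})
```

On the worked example the corrals recorded are {p1}, {p1, p2}, {p2, p3}; the set C = {p1, p2, p3} appears only transiently before p1 is dropped in the "follow gravity" step, matching the slides.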