SLIDE 1

Maximizing Volume Subject to Combinatorial Constraints

Sasho Nikolov

University of Toronto

Nikolov (UofT) Max Volume 1 / 17

SLIDE 2

The Problem(s)

Outline

1. The Problem(s)
2. Algorithm for the Maximum d-Subdeterminant
3. Extending to j < d and Beyond

SLIDE 3–5

The Problem(s)

Maximum j-Simplex

Given points v_1, …, v_n ∈ R^d, find the largest j-dimensional simplex ∆ ⊆ conv{v_1, …, v_n}. A j-dimensional simplex is the convex hull of j + 1 affinely independent points. We can assume ∆ = conv{v_{i_1}, …, v_{i_{j+1}}} for some input points: moving a vertex of ∆ to an extreme point of conv{v_1, …, v_n} can only increase the volume.

SLIDE 6

The Problem(s)

Maximum j-Subdeterminant

Given a positive semidefinite n × n matrix M with rank M = d, find the j × j principal submatrix M_{S,S} with the largest determinant.
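On small instances the problem in this definition can be solved by exhaustive search over index sets; a minimal sketch (the helper `max_subdeterminant` is illustrative, not from the talk, and is exponential in n):

```python
import itertools
import numpy as np

def max_subdeterminant(M, j):
    """Exhaustively find the j x j principal submatrix of a PSD matrix M
    with the largest determinant. Exponential in n: illustration only."""
    n = M.shape[0]
    best_val, best_S = -np.inf, None
    for S in itertools.combinations(range(n), j):
        val = np.linalg.det(M[np.ix_(S, S)])
        if val > best_val:
            best_val, best_S = val, S
    return best_val, best_S

# Example instance: M = V^T V for a random d x n matrix V, so M is PSD
# with rank d.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 6))   # d = 3, n = 6
M = V.T @ V
val, S = max_subdeterminant(M, 2)
```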

SLIDE 7

The Problem(s)

Motivation

Maximum Volume Simplex: a natural problem in computational geometry [GK94, GKL95]. It belongs to a general class of problems: approximate a complicated body with a "simple" one contained inside it, e.g. the John ellipsoid, the largest volume inscribed ellipsoid.

Maximum Subdeterminant: applications in
- modeling diversity in machine learning [KT12]
- maximum entropy sampling [Lee06]
- low-rank matrix approximation [GT01]
- discrepancy theory [LSV86, Mat11]

Maximizing volume ↔ maximizing diversity.

SLIDE 8

The Problem(s)

What was known

Approximation equivalence:
- an α(j)-approximation for j-MSD gives an α(j)-approximation for j-MVS;
- an α(j)-approximation for j-MVS gives a (j + 1) α(j)^2-approximation for j-MSD.

Hardness [Kou06, ÇM13, DEFM14]: NP-hard to approximate better than c^j for some constant c > 1 and any j = d^{Ω(1)}.

Approximation algorithms:
- [Kha95, Pac04, ÇM09]: j^{j/2}-approximation for all j;
- [DEFM14]: (log d)^{d/2}-approximation for j = d.

SLIDE 9

The Problem(s)

Main Result

Theorem. There exists a deterministic polynomial time algorithm that approximates
- Maximum j-Subdeterminant up to a factor of j^j / j! ≈ e^j,
- Maximum Volume j-Simplex up to a factor of √(j^j / j!) ≈ e^{j/2}.

Since j-dimensional volume is degree-j homogeneous, this is morally a constant factor approximation.
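The asymptotics j^j / j! ≈ e^j follow from Stirling's formula, j! ≈ √(2πj)(j/e)^j, so the factor is e^j up to a lower-order √j correction; a quick numerical sanity check:

```python
import math

def approx_factor(j):
    """The approximation factor j^j / j! from the theorem."""
    return j ** j / math.factorial(j)

# Stirling: j! >= sqrt(2*pi*j) * (j/e)^j, hence
# j^j / j! <= e^j / sqrt(2*pi*j), with the ratio tending to 1.
ratios = []
for j in range(2, 21):
    ratios.append(approx_factor(j) / (math.exp(j) / math.sqrt(2 * math.pi * j)))
```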

SLIDE 10

Algorithm for the Maximum d-Subdeterminant

Outline

1. The Problem(s)
2. Algorithm for the Maximum d-Subdeterminant
3. Extending to j < d and Beyond

SLIDE 11–13

Algorithm for the Maximum d-Subdeterminant

An Upper Bound from Ellipsoids

Cholesky decomposition: M = V^T V, where V is d × n with columns v_1, …, v_n. Then det(M_{S,S}) = det(V_S)^2.

[Figure: a centered ellipsoid E containing the points, with major axes a_1, a_2 marked.]

Let E be a centered ellipsoid with v_1, …, v_n ∈ E, and let a_1, …, a_d be the lengths of its major semi-axes. By Hadamard's inequality,

max_{S : |S| = d} det(M_{S,S}) = max_{S : |S| = d} det(V_S)^2 ≤ a_1^2 ⋯ a_d^2.
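The identity det(M_{S,S}) = det(V_S)^2 behind this bound is immediate from M_{S,S} = V_S^T V_S, and easy to verify numerically (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 10
V = rng.standard_normal((d, n))  # columns v_1, ..., v_n
M = V.T @ V                      # PSD, rank d: the Gram matrix of the columns

S = [0, 3, 5, 8]                 # any index set with |S| = d
M_SS = M[np.ix_(S, S)]           # d x d principal submatrix
V_S = V[:, S]                    # the corresponding d columns of V

# M_{S,S} = V_S^T V_S, and V_S is square, so det(M_{S,S}) = det(V_S)^2.
lhs = np.linalg.det(M_SS)
rhs = np.linalg.det(V_S) ** 2
```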

SLIDE 14–16

Algorithm for the Maximum d-Subdeterminant

The Löwner Ellipsoid

Löwner ellipsoid E_L: the minimum volume ellipsoid containing ±v_1, …, ±v_n.

E_L minimizes the upper bound a_1^2 ⋯ a_d^2. Intuition: the body touches E_L in all directions.

Theorem ([Joh48]). E_L = F · B^d if and only if there exist weights c_1, …, c_n ≥ 0 such that

∑_{i=1}^n c_i v_i v_i^T = F F^T and ∑_{i=1}^n c_i = d.

The c_i give a distribution over the v_i that approximates E_L. Idea: treat the c_i as probabilities and sample.
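Weights satisfying John's conditions can be computed approximately by Khachiyan-style coordinate ascent for the minimum volume enclosing ellipsoid of ±v_1, …, ±v_n. The talk does not specify an algorithm for this step; the sketch below uses the standard multiplicative update for D-optimal design and assumes the v_i span R^d:

```python
import numpy as np

def john_weights(V, iters=5000):
    """Approximate weights c_i >= 0 with sum_i c_i = d and
    sum_i c_i v_i v_i^T = F F^T, where E_L = F * B^d is the Loewner
    ellipsoid of the symmetric point set {+-v_i}.
    Khachiyan's coordinate-ascent; assumes the columns of V span R^d."""
    d, n = V.shape
    u = np.full(n, 1.0 / n)                 # start from the uniform design
    for _ in range(iters):
        Minv = np.linalg.inv(V @ np.diag(u) @ V.T)
        g = np.einsum("ij,jk,ki->i", V.T, Minv, V)  # g_i = v_i^T M(u)^{-1} v_i
        k = int(np.argmax(g))
        kappa = g[k]                        # = d at the optimum
        step = (kappa - d) / (d * (kappa - 1.0))    # optimal line-search step
        if step <= 0:
            break
        u *= 1.0 - step                     # shift mass toward the worst point
        u[k] += step
    return d * u                            # rescale so the weights sum to d

rng = np.random.default_rng(2)
d, n = 3, 20
V = rng.standard_normal((d, n))
c = john_weights(V)
```

At (near-)optimality all points satisfy v_i^T (∑_j c_j v_j v_j^T)^{-1} v_i ≤ 1, i.e. they lie in the ellipsoid defined by the weighted second-moment matrix.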

SLIDE 17–22

Algorithm for the Maximum d-Subdeterminant

The Randomized Rounding Algorithm

Algorithm: sample S := {i_1, …, i_d} independently with replacement, where i is sampled with probability p_i := c_i / d.

Analysis, by the Binet-Cauchy formula and John's theorem:

E[det(V_S)^2] = ∑_{S ⊆ [n] : |S| = d} d! (∏_{i ∈ S} p_i) det(V_S)^2
             = (d!/d^d) ∑_{S ⊆ [n] : |S| = d} (∏_{i ∈ S} c_i) det(V_S)^2
             = (d!/d^d) det(∑_{i=1}^n c_i v_i v_i^T)
             = (d!/d^d) det(F F^T) = (d!/d^d) a_1^2 ⋯ a_d^2.

Derandomize using conditional expectations.
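Both key steps of this chain — the expectation over d i.i.d. draws (repeated indices contribute determinant 0, and each fixed set S appears with probability d! ∏_{i∈S} p_i) and the Binet-Cauchy identity — can be checked exactly by enumerating all subsets on a small instance (illustration, not from the talk):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 6
V = rng.standard_normal((d, n))
c = rng.random(n)
c *= d / c.sum()                 # weights with sum c_i = d, as in John's theorem
p = c / d                        # sampling probabilities p_i = c_i / d

# E det(V_S)^2 over d i.i.d. draws: only draws with d distinct indices
# contribute, and a fixed set S is drawn with probability d! * prod p_i.
expectation = sum(
    math.factorial(d) * np.prod(p[list(S)]) * np.linalg.det(V[:, list(S)]) ** 2
    for S in itertools.combinations(range(n), d)
)

# Binet-Cauchy collapses the sum: it equals (d!/d^d) det(sum_i c_i v_i v_i^T).
closed_form = math.factorial(d) / d ** d * np.linalg.det(V @ np.diag(c) @ V.T)
```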

SLIDE 23

Extending to j < d and Beyond

Outline

1. The Problem(s)
2. Algorithm for the Maximum d-Subdeterminant
3. Extending to j < d and Beyond

SLIDE 24–26

Extending to j < d and Beyond

Relaxation Suggested by the Rounding

Cue from the rounding algorithm: solve the optimization problem

Maximize ∑_{S ⊆ [n] : |S| = j} (∏_{i ∈ S} c_i) det(V_S)^2 subject to ∑_{i=1}^n c_i = j, c ≥ 0.

By the same analysis, randomized rounding gives a j^j / j!-approximation.

The objective equals E_j(∑_{i=1}^n c_i v_i v_i^T) = e_j(λ_1, …, λ_d), where e_j is the degree-j elementary symmetric polynomial and λ_1, …, λ_d are the eigenvalues of ∑_{i=1}^n c_i v_i v_i^T.

By the Aleksandrov-Fenchel inequalities, log E_j(∑_{i=1}^n c_i v_i v_i^T) is concave in c, so we can solve the optimization problem.
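The value E_j(M) = e_j(λ_1, …, λ_d) can be read off the characteristic polynomial of M, whose coefficients are the elementary symmetric polynomials of the eigenvalues up to sign; a small check (the helper name `E_j` is illustrative):

```python
import itertools
import numpy as np

def E_j(M, j):
    """e_j of the eigenvalues of M, via the characteristic polynomial:
    det(tI - M) = t^d - e_1 t^{d-1} + e_2 t^{d-2} - ...,
    so e_j = (-1)^j * (coefficient of t^{d-j})."""
    coeffs = np.poly(M)          # [1, -e_1, e_2, -e_3, ...]
    return (-1) ** j * coeffs[j]

rng = np.random.default_rng(4)
d = 4
A = rng.standard_normal((d, d))
M = A @ A.T                      # PSD test matrix

# Cross-check against the definition: e_j = sum over j-subsets of eigenvalues.
lam = np.linalg.eigvalsh(M)
e2_direct = sum(lam[i] * lam[k] for i, k in itertools.combinations(range(d), 2))
```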

SLIDE 27–28

Extending to j < d and Beyond

A General Problem

B: a collection of feasible subsets of [n] of size j, specified by an oracle, e.g. the bases of a matroid M. We want to solve max{det(M_{S,S}) : S ∈ B} for a given PSD matrix M. There is also an equivalent volume version.

Partition constraints: [n] is partitioned into color classes P_1, …, P_k, and B contains the sets with exactly j_ℓ elements from P_ℓ for each ℓ, 1 ≤ ℓ ≤ k.
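Under partition constraints the feasible sets factor across color classes, so a small instance can still be solved exactly by enumerating one combination per class; a brute-force sketch (the helper `max_det_partitioned` is illustrative and exponential in general):

```python
import itertools
import numpy as np

def max_det_partitioned(M, classes, quotas):
    """max det(M_{S,S}) over sets S taking exactly quotas[l] indices
    from classes[l] for each color class l (brute force)."""
    best = -np.inf
    per_class = [itertools.combinations(P, q) for P, q in zip(classes, quotas)]
    for parts in itertools.product(*per_class):
        S = [i for part in parts for i in part]
        best = max(best, np.linalg.det(M[np.ix_(S, S)]))
    return best

rng = np.random.default_rng(5)
V = rng.standard_normal((3, 8))
M = V.T @ V                               # PSD, rank 3
classes = [[0, 1, 2, 3], [4, 5, 6, 7]]    # color classes P_1, P_2
best = max_det_partitioned(M, classes, quotas=[1, 2])   # j_1 = 1, j_2 = 2
```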

SLIDE 29

Extending to j < d and Beyond

Colorful j-Simplex

Pick the largest colorful j-dimensional simplex.

SLIDE 30–31

Extending to j < d and Beyond

Optimization problem, using the same strategy:

Maximize ∑_{S ∈ B} (∏_{i ∈ S} c_i) det(V_S)^2 subject to c ∈ conv{1_S : S ∈ B}.

Issue: for partition constraints the objective is a mixed discriminant, which is #P-hard to compute. We need to further relax the optimization problem.

Using the theory of real stable polynomials:
- [NS16]: j^j / j!-approximation under partition constraints;
- [SV16, AG17]: e^{2j}-approximation when B is the set of bases of a regular matroid.

SLIDE 32

Extending to j < d and Beyond

Open Problems

- Tight approximation factors?
- Most general constraints?
- The algorithms for partition and matroid constraints do not find an approximately optimal solution with high probability.

Thank you!

SLIDES 33–37

References

[AG17] Nima Anari and Shayan Oveis Gharan. A generalization of permanent inequalities and applications in counting and optimization. CoRR, abs/1702.02937, 2017.

[ÇM09] Ali Çivril and Malik Magdon-Ismail. On selecting a maximum volume sub-matrix of a matrix and related problems. Theor. Comput. Sci., 410(47-49):4801–4811, 2009.

[ÇM13] Ali Çivril and Malik Magdon-Ismail. Exponential inapproximability of selecting a maximum volume sub-matrix. Algorithmica, 65(1):159–176, 2013.

[DEFM14] Marco Di Summa, Friedrich Eisenbrand, Yuri Faenza, and Carsten Moldenhauer. On largest volume simplices and sub-determinants. CoRR, abs/1406.3512, 2014. To appear in SODA 2015.

[GK94] Peter Gritzmann and Victor Klee. On the complexity of some basic problems in computational convexity: I. Containment problems. Discrete Mathematics, 136(1-3):129–174, 1994.

[GKL95] Peter Gritzmann, Victor Klee, and D. G. Larman. Largest j-simplices in n-polytopes. Discrete & Computational Geometry, 13:477–515, 1995.

[GT01] S. A. Goreinov and E. E. Tyrtyshnikov. The maximal-volume concept in approximation by low-rank matrices. In Structured matrices in mathematics, computer science, and engineering, I (Boulder, CO, 1999), volume 280 of Contemp. Math., pages 47–51. Amer. Math. Soc., Providence, RI, 2001.

[Joh48] Fritz John. Extremum problems with inequalities as subsidiary conditions. In Studies and Essays Presented to R. Courant on his 60th Birthday, January 8, 1948, pages 187–204. Interscience Publishers, Inc., New York, N.Y., 1948.

[Kha95] Leonid Khachiyan. On the complexity of approximating extremal determinants in matrices. J. Complexity, 11(1):138–153, 1995.

[Kou06] Ioannis Koutis. Parameterized complexity and improved inapproximability for computing the largest j-simplex in a V-polytope. Inf. Process. Lett., 100(1):8–13, 2006.

[KT12] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. arXiv preprint arXiv:1207.6083, 2012.

[Lee06] Jon Lee. Maximum Entropy Sampling. John Wiley and Sons, Ltd, 2006.

[LSV86] L. Lovász, J. Spencer, and K. Vesztergombi. Discrepancy of set-systems and matrices. European Journal of Combinatorics, 7(2):151–160, 1986.

[Mat11] Jiří Matoušek. The determinant bound for discrepancy is almost tight. http://arxiv.org/abs/1101.0767, 2011.

[NS16] Aleksandar Nikolov and Mohit Singh. Maximizing determinants under partition constraints. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, pages 192–201. ACM, 2016.

[Pac04] Asa Packer. Polynomial-time approximation of largest simplices in V-polytopes. Discrete Applied Mathematics, 134(1-3):213–237, 2004.

[SV16] Damian Straszak and Nisheeth K. Vishnoi. Real stable polynomials and matroids: optimization and counting. CoRR, abs/1611.04548, 2016.