KLS conjecture and volume computation


1. KLS conjecture and volume computation
Alexander Tarasov, Saint Petersburg State University, May 19, 2019

2. Table of contents
◮ KLS conjecture
◮ Origins from TCS
◮ Computation of volume is complex
◮ Probabilistic approach to the volume computation
◮ Gaussian cooling and $O^*(n^3)$ algorithm

3. Isoperimetric inequality
Let us begin with the classical isoperimetric inequality in $\mathbb{R}^n$, which states that for every bounded Borel set $A \subseteq \mathbb{R}^n$
$$m^+(A) \ge C\, m(A)^{\frac{n-1}{n}},$$
where $m$ is the $n$-dimensional Lebesgue measure in $\mathbb{R}^n$ and $m^+$ is the outer Minkowski content, defined by
$$m^+(A) = \liminf_{\epsilon \to 0} \frac{m(A_\epsilon) - m(A)}{\epsilon}.$$
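As a sanity check (an added aside, not on the slides), the Euclidean ball shows that the exponent $\frac{n-1}{n}$ is the natural one: for the ball $B_r$ of radius $r$, with $\omega_n = m(B_1)$,
$$m(B_r) = \omega_n r^n, \qquad m^+(B_r) = n\,\omega_n r^{n-1} = n\,\omega_n^{1/n}\, m(B_r)^{\frac{n-1}{n}},$$
so the inequality holds for balls with constant $C = n\,\omega_n^{1/n}$.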

4. Log-concave probability distributions
We say that $\mu$ is log-concave if its density with respect to the Lebesgue measure is $e^{-V}$ for some convex function $V: \mathbb{R}^n \to (-\infty, \infty]$. In particular, the uniform probability measure on a convex body in $\mathbb{R}^n$ is log-concave.
Definition. We say that $\mu$ satisfies Cheeger's isoperimetric inequality with constant $C$ if for every Borel set $A \subseteq \mathbb{R}^n$
$$\mu^+(A) \ge C \min\{\mu(A), \mu(A^c)\},$$
where
$$\mu^+(A) = \liminf_{\epsilon \to 0} \frac{\mu(A_\epsilon) - \mu(A)}{\epsilon}.$$

5. Cheeger's isoperimetric inequality
We can define $C_\mu$ as
$$C_\mu = \min_A \frac{\mu^+(A)}{\min\{\mu(A), \mu(A^c)\}}.$$
We will meet expressions of this form again below, in the application of KLS-type statements.

6. Isotropic measures
Definition. We say that a log-concave probability measure $\mu$ is isotropic if:
◮ the barycenter is the origin, i.e., $\mathbb{E}_\mu x = 0$, and
◮ the covariance matrix $M$ is the identity, i.e., $\mathbb{E}_\mu x_i x_j = \delta_{i,j}$, $1 \le i, j \le n$.
The KLS conjecture (restricted to isotropic measures) can be formulated as
Conjecture. There exists an absolute constant $C$, independent of $\mu$ and $n$, such that $\mu^+(A) \ge C\,\mu(A)$ for any Borel set $A$ with $\mu(A) \le 1/2$.
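As a computational aside (added, not on the slides; the function name and test data are made up), any full-dimensional sample can be brought into approximately isotropic position by centering and whitening:

```python
import numpy as np

def make_isotropic(X):
    """Center and whiten samples X (shape N x n) so that the empirical
    barycenter is 0 and the empirical covariance is the identity."""
    X = X - X.mean(axis=0)                    # barycenter -> origin
    cov = np.cov(X, rowvar=False)             # empirical covariance, n x n
    L = np.linalg.cholesky(cov)               # cov = L @ L.T
    return X @ np.linalg.inv(L).T             # new covariance ~ identity

# Example: a stretched box is far from isotropic; whitening fixes that.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100_000, 3)) * np.array([1.0, 2.0, 5.0])
Y = make_isotropic(X)
print(np.round(Y.mean(axis=0), 3))
print(np.round(np.cov(Y, rowvar=False), 3))   # ~ identity matrix
```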

7. Variance conjecture
A particular case of KLS, but one that is more easily formulated, is the following Variance conjecture.
Conjecture. For isotropic log-concave probability measures,
$$\mathrm{Var}_\mu |x|^2 \le Cn$$
for some absolute constant $C > 0$.
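A quick consistency check (added aside): the standard Gaussian measure $\gamma_n$ is isotropic and log-concave, and under it $|x|^2$ has the $\chi^2_n$ distribution, so
$$\mathrm{Var}_{\gamma_n} |x|^2 = 2n,$$
which meets the conjectured bound with $C = 2$.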

8. Slicing conjecture
The Variance conjecture implies one of the main problems of convex geometry, the so-called Slicing (or Hyperplane) conjecture.
Conjecture. For any convex set $A \subset \mathbb{R}^n$ of $n$-dimensional volume 1 there exists a hyperplane $H$ such that $\mathrm{Vol}_{n-1}(A \cap H) \ge c$ for some absolute constant $c$.

9. Origins of the KLS conjecture
One of the problems treated in theoretical computer science is the design of an algorithm to compute the volume of an $n$-dimensional convex body, i.e., an algorithm that receives as input an $n$-dimensional convex body $K$, a point $x_0 \in K$ and an error parameter $\epsilon$, and returns as output a real number $A$ such that
$$(1 - \epsilon)|K| \le A \le (1 + \epsilon)|K|.$$

10. Body as an oracle
The convex body is given as an oracle. Typically this is the membership oracle which, given a point $x \in \mathbb{R}^n$, tells whether $x$ belongs to $K$ or not. The complexity of such an algorithm is measured by both the number of calls to the oracle and the number of arithmetic operations.
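To make the oracle model concrete, here is a minimal Python sketch (an added illustration; the helper names are made up) of membership oracles for two simple bodies:

```python
import numpy as np

def ball_membership(center, radius):
    """Membership oracle for the Euclidean ball B(center, radius)."""
    def oracle(x):
        return bool(np.linalg.norm(np.asarray(x) - center) <= radius)
    return oracle

def box_membership(lo, hi):
    """Membership oracle for the axis-aligned box [lo, hi]^n."""
    def oracle(x):
        x = np.asarray(x)
        return bool(np.all(lo <= x) and np.all(x <= hi))
    return oracle

in_K = ball_membership(center=np.zeros(3), radius=1.0)
print(in_K([0.5, 0.5, 0.5]), in_K([1.0, 1.0, 1.0]))   # True False
```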

11. Complexity of deterministic algorithms
Any deterministic algorithm for computing the volume of a convex body has been proved to have exponential complexity:
◮ G. Elekes, A geometric inequality and the complexity of computing volume, Discrete and Computational Geometry, 1, 289–292, (1986)
◮ I. Bárány and Z. Füredi, Computing the volume is difficult, Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, 442–447, (1986)

12. Exponential complexity
Let us have a look at one of these arguments. Let $S$ be a ball in $\mathbb{R}^n$ with volume 1. Choose any $m$ points $P_1, P_2, \ldots, P_m \in S$. Denote by $C_m$ the convex hull of $\{P_i : 1 \le i \le m\}$ and by $v(n, m)$ the maximum volume of $C_m$ over all possible sets of points.
Theorem. $v(n, m) \le \dfrac{m}{2^n}$.
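To see the strength of this bound (an added numerical aside): even with $m = 2^{n/2}$ points, $v(n, m) \le 2^{n/2}/2^n = 2^{-n/2}$; for $n = 100$, the convex hull of $2^{50}$ points still captures at most $2^{-50}$ of the ball's volume.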

13. Proof of the theorem
Let $O$ be the center of $S$. Denote by $S_i$ ($1 \le i \le m$) the ball with diameter $OP_i$. Since the radius of $S_i$ is at most half the radius of $S$, naturally $\mathrm{vol}(S_i) \le \frac{1}{2^n}$ for all $i$.
Statement. $C_m \subseteq \bigcup \{S_i : 1 \le i \le m\}$.
This clearly implies the Theorem.

14. Proof of the statement
Suppose that the claim is false, i.e., there is a point $Q$ of $C_m$ which is not contained in any $S_i$. By Thales' theorem ($Q$ lies in the ball with diameter $OP_i$ iff $\angle OQP_i \ge \pi/2$), this is equivalent to $\angle OQP_i < \frac{\pi}{2}$ for all $1 \le i \le m$. Consider the hyperplane $H$ orthogonal to $OQ$ and passing through $Q$. The condition $\angle OQP_i < \frac{\pi}{2}$ implies that for all $1 \le i \le m$, $P_i$ lies in the same open halfspace determined by $H$ as $O$. Hence $Q$, which is on $H$, cannot be in the convex hull, a contradiction. □

15. Updated oracle
Consider a well-guaranteed separation oracle that in addition gives us:
◮ a ball $S$ containing $K$,
◮ a ball $S'$ contained in $K$,
◮ in the case of a "NO" answer, a hyperplane that separates the query point from the body $K$.
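For a concrete picture (an added sketch, not from the slides; the names are made up), a separation oracle for a Euclidean ball can return the supporting halfspace at the nearest boundary point:

```python
import numpy as np

def ball_separation(center, radius):
    """Separation oracle for B(center, radius): returns (True, None) if x is
    inside, else (False, (a, b)) with a halfspace a.x <= b that contains the
    ball but not x."""
    def oracle(x):
        d = np.asarray(x, dtype=float) - center
        if np.linalg.norm(d) <= radius:
            return True, None
        a = d / np.linalg.norm(d)             # unit outward normal
        b = a @ center + radius               # supporting halfspace a.x <= b
        return False, (a, b)
    return oracle

sep = ball_separation(np.zeros(2), 1.0)
print(sep([0.5, 0.0])[0])     # True: inside
ok, (a, b) = sep([2.0, 0.0])
print(ok, a, b)               # False [1. 0.] 1.0 -> hyperplane x_1 = 1 separates
```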

16. Theorem
Theorem. Suppose that an algorithm has access to a well-guaranteed separation oracle encoding $K$. If for some $c = c(n) < 1$ the algorithm can give an estimate $v_0$ for $\mathrm{Vol}(K)$ up to a factor $c$, i.e., $c \cdot v_0 \le \mathrm{Vol}(K) \le v_0$, then its running time is at least $c \cdot 2^n - (n + 1)$.

17. Proof of the theorem
Suppose the oracle answers "yes" iff $P \in S$ and exhibits a hyperplane separating $P$ from $S$ otherwise. Suppose, moreover, that it declares the vertices of a regular simplex inscribed in $S$ to be in $K$ (without being asked for these points). Note that this yields an inscribed ball $S'$ of $K$. If the algorithm asks fewer than $c \cdot 2^n - (n + 1)$ other points, then it "knows" only $m \le c \cdot 2^n$ points $P_1, \ldots, P_m$ to be in $K$. Their convex hull $C_m$ will have $\mathrm{vol}(C_m) < \mathrm{Vol}(S) \cdot \frac{m}{2^n} \le \mathrm{Vol}(S) \cdot c$ by the theorem of slide 12. So the algorithm cannot conclude that $K$, which may still be as large as $S$ itself or as small as $C_m$, has volume either at least $c \cdot \mathrm{Vol}(S)$ or less than $\mathrm{Vol}(S)$.

18. Corollary
Corollary. If an algorithm has access to a well-guaranteed separation oracle and can compute the volume of $K$ up to a factor $(2 - \epsilon)^n$, then its running time is exponential.
Proof: From the last theorem, taking $c = (2 - \epsilon)^{-n}$, the running time is at least $\left(\frac{2}{2 - \epsilon}\right)^n - (n + 1)$.

19. Randomized approach to volume computation
Against the backdrop of these complexity estimates for deterministic algorithms (computing the volume of an explicit polytope is a #P-hard problem), the breakthrough result of Dyer, Frieze and Kannan established a randomized polynomial-time algorithm for estimating the volume to within any desired accuracy.

20. Randomized approach to volume computation
The DFK algorithm for computing the volume of a convex body $K$ in $\mathbb{R}^n$ given by a membership oracle uses a sequence of convex bodies $K_0, K_1, \ldots, K_m = K$, starting with the unit ball fully contained in $K$ and ending with $K$. Each successive body $K_i = 2^{i/n} B^n \cap K$ is a slightly larger ball intersected with $K$. Using random sampling, the algorithm estimates the ratios of volumes of consecutive bodies. The product of these ratios times the volume of the unit ball is the estimate of the volume of $K$; a toy version of this telescoping scheme is sketched below.
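A minimal Python sketch of the telescoping scheme (added, under strong simplifying assumptions: rejection sampling from the bounding ball stands in for the random walk, which is only feasible in very low dimension, and we assume $B^n \subseteq K \subseteq 2B^n$ so that $m = n$ phases suffice):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def uniform_in_ball(radius, n):
    """One point uniformly distributed in radius * B^n."""
    x = rng.normal(size=n)
    return x * (radius * rng.uniform() ** (1 / n) / np.linalg.norm(x))

def sample_K_i(membership, radius, n, n_samples):
    """Uniform samples from K_i = radius*B^n intersected with K, by rejection
    from the ball. Only feasible in low dimension; the real algorithm uses a
    rapidly mixing random walk instead."""
    pts = []
    while len(pts) < n_samples:
        x = uniform_in_ball(radius, n)
        if membership(x):
            pts.append(x)
    return np.array(pts)

def dfk_volume(membership, n, n_samples=20_000):
    """Telescoping estimate Vol(K) ~ Vol(B^n) * prod_i Vol(K_i)/Vol(K_{i-1}),
    assuming B^n <= K <= 2 B^n, so m = n phases reach K_m = K."""
    vol = math.pi ** (n / 2) / math.gamma(n / 2 + 1)   # Vol(K_0) = Vol(B^n)
    for i in range(1, n + 1):
        r_i, r_prev = 2 ** (i / n), 2 ** ((i - 1) / n)
        pts = sample_K_i(membership, r_i, n, n_samples)
        frac = np.mean(np.linalg.norm(pts, axis=1) <= r_prev)  # Vol(K_{i-1})/Vol(K_i)
        vol /= frac
    return vol

cube = lambda x: bool(np.all(np.abs(x) <= 1))   # K = [-1,1]^3: B^3 in K in 2 B^3
print(dfk_volume(cube, n=3))                    # ~ 8.0
```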

21. Random sampling
Sampling is achieved by a random walk in the convex body. There were many technical issues to be addressed, but the central challenge was to exhibit a random walk that "mixed" rapidly, i.e., converged to its stationary distribution in a polynomial number of steps. The overall complexity of the algorithm was $O^*(n^{23})$ oracle calls.

22. Markov chain of the body K
Consider a regular grid of size $\delta$ in $\mathbb{R}^n$ and the corresponding cubes. Each cube that intersects the body $K$ is a state of the Markov chain, with probability $\frac{1}{4n}$ of jumping to each neighboring cube and probability $\frac{1}{2}$ of not jumping at all; one step of this walk is sketched below.
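Here is a minimal Python sketch of a single step (an added illustration; testing the cube's center is a simplification of "the cube intersects $K$"):

```python
import numpy as np

rng = np.random.default_rng(2)

def grid_walk_step(cube, intersects_K, n):
    """One step of the lazy grid walk: with probability 1/2 stay put;
    otherwise propose one of the 2n axis neighbours (each chosen with
    overall probability 1/(4n)) and move only if the proposed cube is
    still a state, i.e. it intersects K."""
    if rng.uniform() < 0.5:
        return cube                              # lazy self-loop
    axis = int(rng.integers(n))
    proposal = list(cube)
    proposal[axis] += int(rng.choice([-1, 1]))
    proposal = tuple(proposal)
    return proposal if intersects_K(proposal) else cube

# Toy run: integer-indexed cubes of side delta, K = unit ball; we test the
# cube's centre instead of true intersection (an approximation).
delta = 0.1
in_K = lambda c: bool(np.linalg.norm((np.array(c) + 0.5) * delta) <= 1.0)
cube = (0, 0, 0)
for _ in range(1000):
    cube = grid_walk_step(cube, in_K, n=3)
print(cube)
```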

23. Ergodic Markov chain
It is easy to see that our Markov chain is "irreducible", i.e., for each pair of states $i, j$ there is a natural number $s$ such that $p^{(s)}_{ij}$ is nonzero. This follows since the graph of the natural random walk is connected. The Markov chain can also be seen to be aperiodic, i.e., $\gcd\{s : p^{(s)}_{ij} > 0\} = 1$ for all $i, j$. This follows from the facts that the graph is connected and each cube has a self-loop. Hence the chain is "ergodic" and there exist "stationary" probabilities $\pi_1, \pi_2, \ldots, \pi_N > 0$ such that
$$\lim_{s \to \infty} p^{(s)}_{ij} = \pi_j \qquad \forall i, j.$$
This means that if we sample a point after a large number of steps of the random walk, its distribution will be close to the uniform measure.

24. Speed of convergence and conductance
Definition. The conductance $\varphi$ of a Markov chain with state space $K$, next-step distribution $P_x$ and stationary distribution $Q$ is defined as:
$$\varphi = \min_{S \subset K} \frac{\int_S P_x(K \setminus S)\, dQ(x)}{\min\{Q(S), Q(K \setminus S)\}}.$$
On the next slide we will see how the speed of convergence to the stationary distribution depends on $\varphi$. For now, note the similarity of this expression to $C_\mu$:
$$C_\mu = \min_A \frac{\mu^+(A)}{\min\{\mu(A), \mu(A^c)\}}.$$

25. Speed of convergence and conductance
Here is just one example, showing how strongly the speed of convergence of a Markov chain depends on its conductance; a numerical check follows below.
Theorem. For a time-reversible ergodic Markov chain with all $\pi_j$'s equal and $p_{i,i} \ge 1/2$ for all $i$,
$$\left|p^{(t)}_{i,j} - \pi_j\right| \le \left(1 - \frac{\varphi^2}{2}\right)^t.$$
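A small numerical illustration (added, not from the slides): for the lazy random walk on a cycle of $N = 8$ states, which satisfies the theorem's hypotheses (uniform $\pi_j$, $p_{i,i} = 1/2$, time-reversible), we can compute the conductance by brute force and verify the bound:

```python
import itertools
import numpy as np

# Lazy random walk on a cycle of N states: stay with probability 1/2,
# else move to one of the two neighbours. pi is uniform and p_ii = 1/2.
N = 8
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i - 1) % N] = 0.25
    P[i, (i + 1) % N] = 0.25
pi = np.full(N, 1 / N)

def conductance(P, pi):
    """Brute-force conductance over all nonempty proper subsets S."""
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            Sc = [j for j in range(n) if j not in S]
            flow = sum(pi[i] * P[i, j] for i in S for j in Sc)  # ergodic flow out of S
            best = min(best, flow / min(pi[S].sum(), pi[Sc].sum()))
    return best

phi = conductance(P, pi)                       # = 1/8 for this chain
for t in (10, 50, 200):
    Pt = np.linalg.matrix_power(P, t)
    print(t, np.abs(Pt - pi).max(), (1 - phi ** 2 / 2) ** t)   # bound holds
```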
