  1. The measurable chromatic number of Euclidean space. Christine Bachoc, Université Bordeaux I, IMB. Codes, lattices and modular forms, Aachen, September 26-29, 2011.

  2. $\chi(\mathbb{R}^n)$
◮ The chromatic number of Euclidean space $\chi(\mathbb{R}^n)$ is the smallest number of colors needed to color every point of $\mathbb{R}^n$ so that two points at distance 1 apart receive different colors.
◮ E. Nelson introduced $\chi(\mathbb{R}^2)$ in 1950.
◮ Dimension 1: $\chi(\mathbb{R}) = 2$.
◮ No other value is known!

  3.-5. $\chi(\mathbb{R}^2) \leq 7$ (figure, built up over three slides: the hexagonal 7-coloring of the plane)

  6. $\chi(\mathbb{R}^2) \geq 4$. Figure: the Moser spindle.
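A quick computational check of this lower bound (our own addition, not from the slides): the Moser spindle is a 7-vertex, 11-edge unit-distance graph, and brute force over all colorings confirms that it admits no proper 3-coloring but does admit a proper 4-coloring, hence $\chi(\mathbb{R}^2) \geq 4$. The vertex labeling below is ours.

```python
from itertools import product

# Moser spindle: two unit rhombi (each a pair of equilateral triangles)
# sharing vertex 0, with apexes 3 and 6 at distance 1 from each other.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

def is_proper(coloring):
    return all(coloring[u] != coloring[v] for u, v in edges)

# No proper 3-coloring exists (3^7 = 2187 cases) ...
print(any(is_proper(c) for c in product(range(3), repeat=7)))  # False
# ... but a proper 4-coloring does.
print(any(is_proper(c) for c in product(range(4), repeat=7)))  # True
```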

  7. The two inequalities $4 \leq \chi(\mathbb{R}^2) \leq 7$ were proved by Nelson and Isbell in 1950. No improvements since then...

  8. $\chi(\mathbb{R}^n)$
◮ Other dimensions: lower bounds based on $\chi(\mathbb{R}^n) \geq \chi(G)$ for every finite graph $G = (V, E)$ embedded in $\mathbb{R}^n$ (written $G \hookrightarrow \mathbb{R}^n$), i.e. such that $V \subset \mathbb{R}^n$ and $E = \{(x, y) \in V^2 : \|x - y\| = 1\}$.
◮ De Bruijn and Erdős (1951): $\chi(\mathbb{R}^n) = \max \{\chi(G) : G \text{ finite}, \ G \hookrightarrow \mathbb{R}^n\}$.
◮ Good sequences of graphs: Raiski (1970), Larman and Rogers (1972), Frankl and Wilson (1981), Székely and Wormald (1989).

  9. $\chi(\mathbb{R}^n)$ for large $n$
$(1.2 + o(1))^n \leq \chi(\mathbb{R}^n) \leq (3 + o(1))^n$
◮ Lower bound: Frankl and Wilson (1981). Use graphs with vertices in $\{0, 1\}^n$ and the "linear algebra method" to estimate $\chi(G)$.
◮ The Frankl-Wilson constant $1.207^n$ was improved to $1.239^n$ by Raigorodskii (2000).
◮ Upper bound: Larman and Rogers (1972). Use the Voronoi decomposition of lattice packings.

  10. $\chi_m(\mathbb{R}^n)$
◮ The measurable chromatic number $\chi_m(\mathbb{R}^n)$: the color classes are required to be measurable.
◮ Obviously $\chi_m(\mathbb{R}^n) \geq \chi(\mathbb{R}^n)$.
◮ Falconer (1981): $\chi_m(\mathbb{R}^n) \geq n + 3$. In particular $\chi_m(\mathbb{R}^2) \geq 5$.
◮ The color classes are measurable 1-avoiding sets, i.e. they contain no pair of points at distance 1 apart.

  11. $m_1(\mathbb{R}^n)$
$m_1(\mathbb{R}^n) = \sup \{\delta(S) : S \subset \mathbb{R}^n, \ S \text{ measurable, avoids } 1\}$
where $\delta(S)$ is the density of $S$:
$\delta(S) = \limsup_{r \to +\infty} \frac{\mathrm{vol}(S \cap B_n(r))}{\mathrm{vol}(B_n(r))}.$
Figure: a 1-avoiding set of density $\delta = 1/7$.

  12. $m_1(\mathbb{R}^n)$
◮ Obviously $\chi_m(\mathbb{R}^n) \geq \frac{1}{m_1(\mathbb{R}^n)}$.
◮ Problem: to upper bound $m_1(\mathbb{R}^n)$.
◮ Larman and Rogers (1972): $m_1(\mathbb{R}^n) \leq \frac{\alpha(G)}{|V|}$ for all $G \hookrightarrow \mathbb{R}^n$, where $\alpha(G)$ is the independence number of the graph $G$, i.e. the maximum number of vertices pairwise not connected by an edge.

  13. Finite graphs
◮ An independent set of a graph $G = (V, E)$ is a set of vertices pairwise not connected by an edge.
◮ The independence number $\alpha(G)$ of the graph is the number of elements of a maximum independent set.
◮ A 1-avoiding set in $\mathbb{R}^n$ is an independent set of the unit-distance graph $V = \mathbb{R}^n$, $E = \{(x, y) : \|x - y\| = 1\}$.
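For intuition (again our own addition), the independence number of a small unit-distance graph can be computed by brute force. For the Moser spindle from above this gives $\alpha(G) = 2$, so the Larman-Rogers bound of the previous slide yields $m_1(\mathbb{R}^2) \leq 2/7$.

```python
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

def alpha(num_vertices, edges):
    """Independence number by brute force, trying larger subsets first."""
    for k in range(num_vertices, 0, -1):
        for subset in combinations(range(num_vertices), k):
            s = set(subset)
            if all(u not in s or v not in s for u, v in edges):
                return k
    return 0

print(alpha(7, edges))  # 2, hence m_1(R^2) <= 2/7 by Larman-Rogers
```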

  14. 1-avoiding sets versus packings
◮ $S \subset \mathbb{R}^n$ avoids $d = 1$: $\delta(S) = \lim \frac{\mathrm{vol}(S \cap B_n(r))}{\mathrm{vol}(B_n(r))}$, and $m_1(\mathbb{R}^n) = \sup_S \delta(S)$.
◮ $S \subset \mathbb{R}^n$ avoids every $d \in \,]0, 2[$ (centers of a packing): $\delta(S) = \lim \frac{|S \cap B_n(r)|}{\mathrm{vol}(B_n(r))}$, and $\delta_n = \sup_S \delta(S)$.
◮ $S \subset V$ avoids $d = 1$ in a finite graph: $\delta(S) = \frac{|S|}{|V|}$, and $\frac{\alpha(G)}{|V|} = \sup_S \delta(S)$.

  15. The linear programming method
◮ A general method to obtain upper bounds for the densities of distance-avoiding sets.
◮ For packing problems: initiated by Delsarte, Goethals, Seidel on $S^{n-1}$ (1977); Kabatianskii and Levenshtein on compact 2-point homogeneous spaces (1978); Cohn and Elkies on $\mathbb{R}^n$ (2003).
◮ For finite graphs: the Lovász theta number $\vartheta(G)$ (1979).
◮ For sets avoiding one distance: Bachoc, G. Nebe, F. Oliveira, F. Vallentin for $m(S^{n-1}, \theta)$ (2009); F. Oliveira and F. Vallentin for $m_1(\mathbb{R}^n)$ (2010).

  16. Lovász theta number
◮ The theta number $\vartheta(G)$ (L. Lovász, 1979) satisfies the Sandwich Theorem: $\alpha(G) \leq \vartheta(G) \leq \chi(\bar{G})$, where $\bar{G}$ is the complement graph.
◮ It is the optimal value of a semidefinite program.
◮ Idea: if $S$ is an independent set of $G$, consider the matrix $B_S(x, y) := \mathbb{1}_S(x)\mathbb{1}_S(y)/|S|$. Then $B_S \succeq 0$, $B_S(x, y) = 0$ if $xy \in E$, and $|S| = \sum_{(x,y) \in V^2} B_S(x, y)$.

  17. $\vartheta(G)$
◮ Defined by:
$\vartheta(G) = \max \big\{ \sum_{(x,y) \in V^2} B(x, y) : B \in \mathbb{R}^{V \times V}, \ B \succeq 0, \ \sum_{x \in V} B(x, x) = 1, \ B(x, y) = 0 \text{ for } xy \in E \big\}$
◮ Proof of $\alpha(G) \leq \vartheta(G)$: let $S$ be an independent set. $B_S(x, y) = \mathbb{1}_S(x)\mathbb{1}_S(y)/|S|$ satisfies the constraints of the above SDP. Thus $\sum_{(x,y) \in V^2} B_S(x, y) = |S| \leq \vartheta(G)$.
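This SDP is small enough to hand to an off-the-shelf solver. A minimal sketch (ours, not from the talk) using cvxpy on the 5-cycle $C_5$, for which $\vartheta(C_5) = \sqrt{5} \approx 2.236$ sits between $\alpha(C_5) = 2$ and $\chi(\bar{C_5}) = 3$:

```python
import cvxpy as cp

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # the 5-cycle C_5

B = cp.Variable((n, n), symmetric=True)
constraints = [B >> 0, cp.trace(B) == 1]         # B PSD, diagonal sums to 1
constraints += [B[i, j] == 0 for i, j in edges]  # B vanishes on edges
prob = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
prob.solve()
print(prob.value)  # ~2.236 = sqrt(5)
```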

  18. $\vartheta(\mathbb{R}^n)$
◮ Over $\mathbb{R}^n$: take $B(x, y)$ continuous and positive definite, i.e. for all $k$ and all $x_1, \ldots, x_k \in \mathbb{R}^n$, $\big(B(x_i, x_j)\big)_{1 \leq i,j \leq k} \succeq 0$.
◮ Assume $B$ is translation invariant: $B(x, y) = f(x - y)$ (the graph itself is invariant under translations).
◮ Replace $\sum_{(x,y) \in V^2} B(x, y)$ by
$\delta(f) := \limsup_{r \to +\infty} \frac{1}{\mathrm{vol}(B_n(r))} \int_{B_n(r)} f(z)\, dz.$

  19. $\vartheta(\mathbb{R}^n)$
◮ Leads to:
$\vartheta(\mathbb{R}^n) := \sup \{\delta(f) : f \in C_b(\mathbb{R}^n), \ f \succeq 0, \ f(0) = 1, \ f(x) = 0 \text{ for } \|x\| = 1\}$
Theorem (Oliveira, Vallentin 2010): $m_1(\mathbb{R}^n) \leq \vartheta(\mathbb{R}^n)$.

  20. The computation of $\vartheta(\mathbb{R}^n)$
◮ Bochner characterization of positive definite functions: for $f \in C(\mathbb{R}^n)$, $f \succeq 0 \iff f(x) = \int_{\mathbb{R}^n} e^{ix \cdot y}\, d\mu(y)$ with $\mu \geq 0$.
◮ $f$ can be assumed to be radial, i.e. invariant under $O(\mathbb{R}^n)$:
$f(x) = \int_0^{+\infty} \Omega_n(t \|x\|)\, d\alpha(t), \quad \alpha \geq 0,$
where $\Omega_n(t) = \Gamma(n/2)\, (2/t)^{n/2 - 1} J_{n/2-1}(t)$.
◮ Then take the dual program.
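As a numerical sanity check (ours), $\Omega_n$ is straightforward to evaluate with scipy; it is normalized so that $\Omega_n(t) \to 1$ as $t \to 0$, and it oscillates with decaying amplitude.

```python
import numpy as np
from scipy.special import gamma, jv

def omega(n, t):
    """Omega_n(t) = Gamma(n/2) * (2/t)^(n/2 - 1) * J_{n/2-1}(t)."""
    t = np.asarray(t, dtype=float)
    return gamma(n / 2) * (2 / t) ** (n / 2 - 1) * jv(n / 2 - 1, t)

print(omega(4, 1e-8))                         # ~1.0: Omega_n(0+) = 1
print(omega(4, np.array([2.0, 5.0, 10.0])))   # oscillation, decay
```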

  21. The computation of $\vartheta(\mathbb{R}^n)$
◮ Leads to:
$\vartheta(\mathbb{R}^n) = \inf \{z_0 : z_0 + z_1 \geq 1, \ z_0 + z_1 \Omega_n(t) \geq 0 \text{ for all } t > 0\}$
◮ Explicitly solvable. Figure for $n = 4$: the graphs of $\Omega_4(t)$ and of the optimal function $f^*_4(t) = z_0^* + z_1^* \Omega_4(t)$.
◮ The minimum of $\Omega_n(t)$ is attained at $j_{n/2,1}$, the first zero of $J_{n/2}$.

  22. The computation of $\vartheta(\mathbb{R}^n)$
◮ We obtain
$f^*_n(t) = \frac{\Omega_n(t) - \Omega_n(j_{n/2,1})}{1 - \Omega_n(j_{n/2,1})}, \qquad \vartheta(\mathbb{R}^n) = \frac{-\Omega_n(j_{n/2,1})}{1 - \Omega_n(j_{n/2,1})}.$
◮ Resulting upper bound for $m_1(\mathbb{R}^n)$ (OV 2010):
$m_1(\mathbb{R}^n) \leq \vartheta(\mathbb{R}^n) = \frac{-\Omega_n(j_{n/2,1})}{1 - \Omega_n(j_{n/2,1})}$
◮ This bound decreases exponentially, but not as fast as the Frankl-Wilson-Raigorodskii bound ($1.165^{-n}$ instead of $1.239^{-n}$). A weaker bound with the same asymptotics was obtained in BNOV 2009 via $m(S^{n-1}, \theta)$.
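This closed form is easy to evaluate numerically (a sketch of ours; it restricts to even $n$ so that scipy's jn_zeros can be called with the integer order $n/2$):

```python
import numpy as np
from scipy.special import gamma, jv, jn_zeros

def omega(n, t):
    return gamma(n / 2) * (2 / t) ** (n / 2 - 1) * jv(n / 2 - 1, t)

def theta_bound(n):
    """m_1(R^n) <= -Omega_n(j)/(1 - Omega_n(j)), where j = j_{n/2,1}
    is the first zero of J_{n/2} (n even here)."""
    j = jn_zeros(n // 2, 1)[0]
    w = omega(n, j)
    return -w / (1 - w)

for n in (2, 4, 8, 16, 24):
    print(n, theta_bound(n))   # n = 2 gives approximately 0.287
```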

  23. $\vartheta_G(\mathbb{R}^n)$
◮ To summarize, we have seen two essentially different bounds:
$m_1(\mathbb{R}^n) \leq \frac{\alpha(G)}{|V|}$ with Frankl-Wilson graphs and the linear algebra bound;
$m_1(\mathbb{R}^n) \leq \vartheta(\mathbb{R}^n)$, which morally encodes $\vartheta(G)$ for every $G \hookrightarrow \mathbb{R}^n$.
◮ The former has the better asymptotics, while the latter improves the previously known bounds in the range $3 \leq n \leq 24$.
◮ It is possible to combine the two methods, i.e. to insert the constraint relative to a finite graph $G$ into $\vartheta(\mathbb{R}^n)$. Joint work (in progress) with F. Oliveira and F. Vallentin.

  24. $\vartheta_G(\mathbb{R}^n)$
Let $G \hookrightarrow \mathbb{R}^n$ and, for $x_i \in V$, let $r_i := \|x_i\|$.
$\vartheta_G(\mathbb{R}^n) := \inf \big\{ z_0 + z_2 \tfrac{\alpha(G)}{|V|} : z_2 \geq 0, \ z_0 + z_1 + z_2 \geq 1, \ z_0 + z_1 \Omega_n(t) + z_2 \big( \tfrac{1}{|V|} \sum_{i=1}^{|V|} \Omega_n(r_i t) \big) \geq 0 \text{ for all } t > 0 \big\}.$
Theorem: $m_1(\mathbb{R}^n) \leq \vartheta_G(\mathbb{R}^n) \leq \vartheta(\mathbb{R}^n)$.

  25. Sketch of proof
◮ $\vartheta_G(\mathbb{R}^n) \leq \vartheta(\mathbb{R}^n)$ is obvious: take $z_2 = 0$.
◮ Sketch of the proof of $m_1(\mathbb{R}^n) \leq \vartheta_G(\mathbb{R}^n)$: let $S$ be a measurable set avoiding 1. Let
$f_S(x) := \frac{\delta(\mathbb{1}_{S-x}\, \mathbb{1}_S)}{\delta(S)}.$
Then $f_S$ is continuous and bounded, $f_S \succeq 0$, $f_S(0) = 1$, and $f_S(x) = 0$ if $\|x\| = 1$. Moreover $\delta(f_S) = \delta(S)$.
◮ Thus $f_S$ is feasible for $\vartheta(\mathbb{R}^n)$, which proves that $\delta(S) \leq \vartheta(\mathbb{R}^n)$.
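To make $f_S$ concrete, here is a toy example of ours (not from the talk): the periodic set $S = \bigcup_k [2k, 2k + 0.9)$ in $\mathbb{R}$ avoids distance 1 and has density $0.45$. Discretizing one period and computing the circular autocorrelation with an FFT lets one check numerically that $f_S(0) = 1$, $f_S(1) = 0$, and the mean of $f_S$ equals $\delta(S)$.

```python
import numpy as np

N, period = 20000, 2.0
x = np.arange(N) * period / N
ind = (x % period < 0.9).astype(float)   # indicator of S on one period

density = ind.mean()                     # delta(S) = 0.45

# Circular autocorrelation: corr[k] ~ density of S intersected with S - x_k.
corr = np.fft.ifft(np.abs(np.fft.fft(ind)) ** 2).real / N
f_S = corr / density                     # normalized so that f_S(0) = 1

k1 = round(1.0 * N / period)             # grid index of x = 1
print(f_S[0], f_S[k1], f_S.mean(), density)
# ~1.0, 0.0 (S avoids distance 1), and mean(f_S) ~ delta(S) = 0.45
```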

  26. Sketch of proof
◮ If $V = \{x_1, \ldots, x_M\}$, then for all $y \in \mathbb{R}^n$,
$\sum_{i=1}^M \mathbb{1}_{S - x_i}(y) \leq \alpha(G)$
(the points $y + x_1, \ldots, y + x_M$ form a translated copy of $G$, so at most $\alpha(G)$ of them lie in $S$).
◮ Leads to the extra condition:
$\sum_{i=1}^M f_S(x_i) \leq \alpha(G).$
◮ Design a linear program, apply the Bochner theorem, symmetrize by $O(\mathbb{R}^n)$, take the dual.

  27. $\vartheta_G(\mathbb{R}^n)$
◮ Bad news: this program cannot be solved explicitly (we don't know how to).
◮ Challenge: to compute good feasible functions.
◮ First method: sample an interval $[0, M]$, solve a finite LP, then adjust the optimal solution (OV, $G$ = simplex); see the sketch below.
Figure: $f^*_4(t)$ (blue) and $f^*_{4,G}(t)$ (red) for $G$ = simplex.
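A rough sketch of this sampling method (our own reconstruction, with assumptions): discretize $t \in (0, M]$, impose the positivity constraint only at the sample points, and solve the resulting finite LP with scipy. For the graph data we guess the regular simplex with side 1 and one vertex at the origin, so $r_1 = 0$, $r_i = 1$ for $i \geq 2$, $\alpha(G) = 1$, $|V| = n + 1$; the talk does not spell out this embedding. A rigorous bound would still require adjusting the solution so that positivity holds for all $t$, as the slide says.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import gamma, jv

def omega(n, t):
    return gamma(n / 2) * (2 / t) ** (n / 2 - 1) * jv(n / 2 - 1, t)

n = 4
V = n + 1                           # simplex on n+1 vertices, alpha(G) = 1
alpha_G = 1.0
ts = np.linspace(0.01, 60.0, 6000)  # sample points of (0, M]

# (1/|V|) * sum_i Omega_n(r_i t): one vertex at the origin (Omega = 1),
# n vertices at norm 1.
g = (1.0 + n * omega(n, ts)) / V

# Variables z = (z0, z1, z2); minimize z0 + z2 * alpha(G)/|V| subject to
#   z0 + z1 + z2 >= 1,  z0 + z1*Omega_n(t_k) + z2*g_k >= 0,  z2 >= 0.
c = [1.0, 0.0, alpha_G / V]
A_ub = np.vstack([[-1.0, -1.0, -1.0],
                  np.column_stack([-np.ones_like(ts), -omega(n, ts), -g])])
b_ub = np.concatenate([[-1.0], np.zeros_like(ts)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)],
              method="highs")
print(res.fun)  # to be compared with theta_bound(4) from the earlier sketch
```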

  28. $\vartheta_G(\mathbb{R}^n)$
◮ Observation: the optimal function has a zero at some $y > j_{n/2,1}$.
◮ Idea: parametrize $f = z_0 + z_1 \Omega_n(t) + z_2 \Omega_n(rt)$ by $y$: the conditions $f(y) = f'(y) = 0$, $f(0) = 1$ determine $f$.
◮ We solve:
$z_0 + z_1 + z_2 = 1$
$z_0 + z_1 \Omega_n(y) + z_2 \Omega_n(ry) = 0$
$z_1 \Omega'_n(y) + r z_2 \Omega'_n(ry) = 0$
◮ Then, starting with $y = j_{n/2,1}$, we move $y$ to the right until $f_y(t) := z_0(y) + z_1(y) \Omega_n(t) + z_2(y) \Omega_n(rt)$ takes negative values.
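A sketch of this parametrized construction (ours, under unstated graph data: the slide uses a single scale $r$, which we set arbitrarily to $r = 1.8$ for illustration). The derivative has the closed form $\Omega'_n(t) = -\Gamma(n/2)(2/t)^{n/2-1} J_{n/2}(t)$, from the Bessel identity $\frac{d}{dt}\big[t^{-\nu} J_\nu(t)\big] = -t^{-\nu} J_{\nu+1}(t)$. Note that at $y = j_{n/2,1}$ the third equation forces $z_2 = 0$, recovering $f^*_n$; moving $y$ to the right makes $z_2$ nonzero.

```python
import numpy as np
from scipy.special import gamma, jv, jn_zeros

def omega(n, t):
    return gamma(n / 2) * (2 / t) ** (n / 2 - 1) * jv(n / 2 - 1, t)

def omega_prime(n, t):
    # d/dt [t^{-nu} J_nu(t)] = -t^{-nu} J_{nu+1}(t), with nu = n/2 - 1
    return -gamma(n / 2) * (2 / t) ** (n / 2 - 1) * jv(n / 2, t)

def solve_coeffs(n, r, y):
    """Solve f(0) = 1, f(y) = 0, f'(y) = 0 for (z0, z1, z2)."""
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, omega(n, y), omega(n, r * y)],
                  [0.0, omega_prime(n, y), r * omega_prime(n, r * y)]])
    return np.linalg.solve(A, [1.0, 0.0, 0.0])

n, r = 4, 1.8                      # r is an arbitrary illustrative choice
ts = np.linspace(0.01, 80.0, 8000)
y = jn_zeros(n // 2, 1)[0]         # start at j_{n/2,1}
step = 0.01
while True:                        # move y right while f_y stays >= 0
    z0, z1, z2 = solve_coeffs(n, r, y + step)
    if np.min(z0 + z1 * omega(n, ts) + z2 * omega(n, r * ts)) < -1e-9:
        break
    y += step
print(y, solve_coeffs(n, r, y))
```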
