Rödl Nibble
David Brandfonbrener
Math 345, November 30, 2016

Definition 1. For $2 \le l < k < n$, define the covering number $M(n,k,l)$ to be the minimal size of a family $\mathcal{K}$ of $k$-element subsets of $\{1,\dots,n\}$ such that every $l$-element subset of $\{1,\dots,n\}$ is contained in some $A \in \mathcal{K}$.

Note 1. We always have
\[ M(n,k,l) \ge \binom{n}{l} \Big/ \binom{k}{l}. \]
To see this, note that there are $\binom{n}{l}$ subsets of $l$ elements, and we want to put each into a $k$-element subset, which can contain at most $\binom{k}{l}$ such $l$-element subsets.

Example 1.
• $M(4,3,2)$. By Note 1, we have $M(4,3,2) \ge 6/3 = 2$. However, any two triples of $\{1,2,3,4\}$ overlap in two elements and so cover only five of the six pairs, so we must have $|\mathcal{K}| = 3$.
• $M(7,3,2)$. This is the smallest Steiner triple system. It can be illustrated with the Fano plane: each line is an element of $\mathcal{K}$.

Theorem 1. For fixed $2 \le l < k$, where $o(1) \to 0$ as $n \to \infty$:
\[ M(n,k,l) \le (1 + o(1)) \binom{n}{l} \Big/ \binom{k}{l}. \]
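As a quick sanity check on Definition 1 and Example 1, here is a minimal brute-force sketch in Python (the helper names `is_cover` and `covering_number` and the chosen parameters are just illustrative). It confirms that $M(4,3,2) = 3$ even though the counting bound only gives $2$, and that the seven lines of the Fano plane cover every pair in $\{1,\dots,7\}$.

```python
from itertools import combinations

def is_cover(family, n, l):
    """Check that every l-subset of {1,...,n} lies in some member of the family."""
    return all(any(set(s) <= set(a) for a in family)
               for s in combinations(range(1, n + 1), l))

def covering_number(n, k, l):
    """Brute-force M(n,k,l): smallest family of k-subsets covering all l-subsets.
    Only feasible for very small n."""
    k_subsets = list(combinations(range(1, n + 1), k))
    size = 1
    while True:
        for family in combinations(k_subsets, size):
            if is_cover(family, n, l):
                return size, family
        size += 1

# M(4,3,2) = 3, even though the counting bound of Note 1 only gives >= 2.
print(covering_number(4, 3, 2)[0])   # -> 3

# The Fano plane: 7 lines, each a 3-subset of {1,...,7}, covering every pair exactly once.
fano = [(1,2,3), (1,4,5), (1,6,7), (2,4,6), (2,5,7), (3,4,7), (3,5,6)]
print(is_cover(fano, 7, 2))          # -> True
```

Since the Fano plane meets the counting bound with equality ($7 = \binom{7}{2}/\binom{3}{2}$), it is an example where the lower bound of Note 1 is attained exactly; Theorem 1 says that for fixed $l < k$ the bound is always attainable up to a $(1+o(1))$ factor.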

Definition 2. Let $H = (V,E)$ be an $r$-uniform hypergraph and let $x \in V$. Define the degree of $x$ in $H$, $d(x)$, to be the number of edges in $E$ containing $x$, and for $x, y \in V$ define $d(x,y)$ to be the number of edges in $E$ containing both $x$ and $y$. A cover of $H$ is a set of edges $C \subseteq E$ such that every vertex in $V$ is in some edge in $C$.

Note 2. We will let $x = \pm y$ denote $-y \le x \le y$.

Note 3. The idea is to prove something more general about $r$-uniform hypergraphs with certain properties, and then to construct an $r$-uniform hypergraph with $r = \binom{k}{l}$ that allows us to use the lemma to prove the theorem.

Lemma 1. For every integer $r \ge 2$ and reals $k \ge 1$, $a > 0$, there exist $\gamma = \gamma(r,k,a) > 0$ and $d_0 = d_0(r,k,a)$ such that for every $n \ge D \ge d_0$ the following holds. Let $H = (V,E)$ be an $r$-uniform hypergraph with $n$ vertices, all with positive degree, satisfying:

1. For all except at most $\gamma n$ vertices $x \in V$, $d(x) = (1 \pm \gamma)D$
2. For all $x \in V$, $d(x) < kD$
3. For any distinct $x, y \in V$, $d(x,y) < \gamma D$.

Then there exists a cover of $H$ with at most $(1+a)\frac{n}{r}$ edges.

Proof (of Theorem 1). Let $r = \binom{k}{l}$, and let $H$ be the $r$-uniform hypergraph with a vertex for each $l$-element subset of $\{1,\dots,n\}$. Each edge is the collection of the $\binom{k}{l}$ $l$-element subsets that lie within one $k$-element subset. So we have $|V| = \binom{n}{l}$, and each vertex has degree $D = \binom{n-l}{k-l}$, the number of ways to choose a $k$-subset that contains a specific $l$-subset. For two distinct vertices ($l$-subsets), together they take up at least $l+1$ points, so they have at most $\binom{n-l-1}{k-l-1}$ common edges, which is $o(D)$ and hence at most $\gamma D$ for large $n$. Thus we can apply Lemma 1 to get that $H$ has a cover of size $(1+o(1))\binom{n}{l}/\binom{k}{l}$, and we are done.

Note 4. The idea is to fix a small $\epsilon$ and randomly choose a set $E'$ of edges of expected size $\epsilon \frac{n}{r}$. With high probability only $O(\epsilon^2 n)$ vertices are covered more than once, so $E'$ covers at least $\epsilon n - O(\epsilon^2 n)$ vertices. Moreover, after removing the vertices covered by $E'$, the induced hypergraph still satisfies the three desirable properties. So we can repeatedly bite off an $\epsilon$ fraction of the remaining vertices in this way until only $\epsilon n$ vertices are left, and then take one edge for each of the last $\epsilon n$ vertices, which is inefficient but good enough to prove Lemma 1. This is the motivation for Lemma 2 (and also the idea of the "nibble"); a toy sketch of the whole process is given at the end of these notes.

Lemma 2. For every integer $r \ge 2$ and reals $K > 0$, $\epsilon > 0$, and every $\delta' > 0$, there exist $\delta = \delta(r,K,\epsilon,\delta')$ and $D_0 = D_0(r,K,\epsilon,\delta')$ such that for every $n \ge D \ge D_0$ the following holds. Let $H = (V,E)$ be an $r$-uniform hypergraph with $n$ vertices, satisfying:

1. For all except at most $\delta n$ vertices $x \in V$, $d(x) = (1 \pm \delta)D$
2. For all $x \in V$, $d(x) < KD$
3. For any distinct $x, y \in V$, $d(x,y) < \delta D$.

Then there exists a set of edges $E' \subseteq E$ with the following properties:

4. $|E'| = \frac{\epsilon n}{r}(1 \pm \delta')$
5. Define $V' = V \setminus \bigcup_{e \in E'} e$. Then $|V'| = n e^{-\epsilon}(1 \pm \delta')$
6. For all except at most $\delta' |V'|$ vertices $x \in V'$, the degree $d'(x)$ in the hypergraph induced by $H$ on $V'$ satisfies $d'(x) = D e^{-\epsilon(r-1)}(1 \pm \delta')$.

Proof (of Lemma 2). Notes: $D$ and $n$ can be assumed to be as large as needed, and each $\delta_i$ below is a constant that goes to zero as $\delta \to 0$ and $D \to \infty$ (i.e., it can be made smaller than $\delta'$). Choose $E'$ by selecting each edge in $E$ independently and uniformly with probability $p = \frac{\epsilon}{D}$.
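The hypergraph built in the proof of Theorem 1 is easy to write down explicitly for small parameters. The following sketch (the function name `theorem1_hypergraph` and the choice $n=7$, $k=3$, $l=2$ are mine, for illustration only) constructs it and checks the degree and codegree facts used to verify the hypotheses of Lemma 1.

```python
from itertools import combinations
from math import comb

def theorem1_hypergraph(n, k, l):
    """Vertices are the l-subsets of {1,...,n}; each k-subset contributes one edge, namely
    the set of the C(k,l) l-subsets it contains, so the hypergraph is C(k,l)-uniform."""
    vertices = list(combinations(range(1, n + 1), l))
    edges = [frozenset(combinations(ksub, l)) for ksub in combinations(range(1, n + 1), k)]
    return vertices, edges

# Small sanity check with n = 7, k = 3, l = 2 (so r = C(3,2) = 3).
n, k, l = 7, 3, 2
vertices, edges = theorem1_hypergraph(n, k, l)
D = comb(n - l, k - l)                      # degree of every vertex: ways to extend an l-set to a k-set
assert len(vertices) == comb(n, l)
assert all(len(e) == comb(k, l) for e in edges)
assert all(sum(v in e for e in edges) == D for v in vertices)
# Two distinct l-subsets span at least l+1 points, so their codegree is at most C(n-l-1, k-l-1).
assert all(sum(u in e and v in e for e in edges) <= comb(n - l - 1, k - l - 1)
           for u, v in combinations(vertices, 2))
print(len(vertices), len(edges), D)         # 21 35 5
```

For $l=2$, $k=3$ the codegree check is just the statement that two distinct pairs lie in at most one common triple.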

Proof of 4). By assumption 1, $V$ contains at least $(1-\delta)n$ vertices of degree at least $(1-\delta)D$, so $|E| \ge (1-\delta)^2 \frac{nD}{r}$. By assumptions 1 and 2, we have $|E| \le \frac{(1+\delta)nD + \delta n K D}{r}$. So for some $\delta_1$ we have $|E| = (1 \pm \delta_1)\frac{nD}{r}$. Now we can easily calculate the expectation and variance of $|E'|$. This gives us $\mathbb{E}|E'| = p|E| = (1 \pm \delta_1)\frac{\epsilon n}{r}$, and $\mathrm{Var}(|E'|) = |E| p (1-p) \le |E| p = (1 \pm \delta_1)\frac{\epsilon n}{r}$. Then, by Chebyshev's inequality, there exists $\delta_2 > 0$ such that
\[ \Pr\Big( \big| |E'| - \tfrac{\epsilon n}{r} \big| \ge \delta_2 \tfrac{\epsilon n}{r} \Big) \le \frac{\mathrm{Var}(|E'|)}{\big(\delta_2 \tfrac{\epsilon n}{r}\big)^2} \le \frac{1 + \delta_1}{\delta_2^2 \tfrac{\epsilon n}{r}} \le .01. \]
So we have $\Pr\big( |E'| = (1 \pm \delta_2)\tfrac{\epsilon n}{r} \big) \ge .99$. This gives us the desired concentration of the size of $E'$ for property 4.

Proof of 5). For each vertex $x \in V$, define the Bernoulli indicator random variable $I_x$ of the event that $x \notin \bigcup_{e \in E'} e$, so that $|V'| = \sum_{x \in V} I_x$. For $n - \delta n$ of the vertices we have $d(x) = (1 \pm \delta)D$, so for those vertices
\[ \mathbb{E}(I_x) = \Pr(I_x = 1) = (1-p)^{d(x)} = \Big(1 - \frac{\epsilon}{D}\Big)^{(1 \pm \delta)D} = e^{-\epsilon}(1 \pm \delta_3). \]
Since for the other $\delta n$ vertices we have $0 \le \mathbb{E}(I_x) \le 1$, we get $\mathbb{E}(|V'|) = (n - \delta n)e^{-\epsilon}(1 \pm \delta_3) \pm \delta n = n e^{-\epsilon}(1 \pm \delta_4)$. Now we compute the variance:
\[ \mathrm{Var}(|V'|) = \sum_{x \in V} \mathrm{Var}(I_x) + \sum_{x \ne y \in V} \mathrm{Cov}(I_x, I_y) \le \mathbb{E}(|V'|) + \sum_{x \ne y \in V} \mathrm{Cov}(I_x, I_y). \]
But we also have, using assumption 3:
\[ \mathrm{Cov}(I_x, I_y) = \mathbb{E}(I_x I_y) - \mathbb{E}(I_x)\mathbb{E}(I_y) = (1-p)^{d(x)+d(y)-d(x,y)} - (1-p)^{d(x)+d(y)} = (1-p)^{d(x)+d(y)}\big((1-p)^{-d(x,y)} - 1\big) \le (1-p)^{-d(x,y)} - 1 \le \Big(1 - \frac{\epsilon}{D}\Big)^{-\delta D} - 1 \le e^{\epsilon\delta} - 1 = \delta_5. \]
So, putting it together, since $\delta_6$ just needs to go to $0$ as $n \to \infty$ and can be larger than $1/n$,
\[ \mathrm{Var}(|V'|) \le \mathbb{E}(|V'|) + n^2 \delta_5 \le \delta_6 \big(\mathbb{E}(|V'|)\big)^2. \]
Then, by Chebyshev's inequality, we can choose some $\delta_7 > 0$ so that
\[ \Pr\big( \big| |V'| - \mathbb{E}(|V'|) \big| \ge \delta_7 \mathbb{E}(|V'|) \big) \le \frac{\mathrm{Var}(|V'|)}{\big(\delta_7 \mathbb{E}(|V'|)\big)^2} \le \frac{\delta_6}{\delta_7^2} \le .01. \]
So with probability $.99$ we have $|V'| = \mathbb{E}(|V'|)(1 \pm \delta_7) = n e^{-\epsilon}(1 \pm \delta_8)$. This gives us the desired concentration of the size of $V'$ for property 5.
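These two concentration statements are easy to watch numerically. The Monte Carlo sketch below (the parameters, trial count, and variable names are arbitrary choices, not from the notes) performs the edge selection with $p = \epsilon/D$ on the small hypergraph from the proof of Theorem 1 and compares the empirical means of $|E'|$ and $|V'|$ with $\frac{\epsilon n}{r}$ and $n e^{-\epsilon}$.

```python
import random
from itertools import combinations
from math import comb, exp

# Vertices are the 2-subsets of {1,...,12}; each 3-subset gives an edge (its three pairs), so r = 3.
base, k, l = range(1, 13), 3, 2
vertices = list(combinations(base, l))
edges = [frozenset(combinations(ksub, l)) for ksub in combinations(base, k)]
n, r = len(vertices), comb(k, l)      # n = 66, r = 3
D = comb(len(base) - l, k - l)        # every vertex has degree D = 10 here
eps = 0.1
p = eps / D                           # select each edge independently with probability eps/D

trials, e_sizes, v_sizes = 2000, [], []
for _ in range(trials):
    E_prime = [e for e in edges if random.random() < p]
    covered = set().union(set(), *E_prime)
    V_prime = [v for v in vertices if v not in covered]
    e_sizes.append(len(E_prime))
    v_sizes.append(len(V_prime))

# Property 4: E|E'| = p|E| = eps*n/r (= 2.2 here); property 5: E|V'| ~ n*e^{-eps} (~ 59.7 here).
print(sum(e_sizes) / trials, eps * n / r)
print(sum(v_sizes) / trials, n * exp(-eps))
```

Each printed pair should agree to within a few percent; the $\pm\delta'$ windows of Lemma 2 only become tight as $D$ and $n$ grow.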

Proof of 6). First we claim that for all vertices $x \in V$ except at most $\delta_9 n$ of them:

(A) $d(x) = (1 \pm \delta)D$

(B) All edges $e \in E$ with $x \in e$, except at most $\delta_{10} D$ of them, satisfy
\[ \big|\{ f \in E : x \notin f,\ f \cap e \ne \emptyset \}\big| = (1 \pm \delta_{11})(r-1)D. \]

(A) holds immediately by assumption 1. For (B), note that the number of edges containing vertices with degrees outside the $\delta$ band around $D$ is at most $\delta n K D$. So the number of vertices contained in more than $\delta_{10} D$ such edges is at most $\delta n K D / (\delta_{10} D) \le \delta_9 n / 2$, which is negligible for appropriate choices. Then, if $e$ contains only good vertices, the number of $f$ that satisfy the condition is at most $(r-1)(1 \pm \delta)D$ and at least $(r-1)(1 \pm \delta)D - \binom{r-1}{2}\delta D$, so we have the claim.

Now we just need to show that most vertices (up to a delta window) that satisfy (A) and (B) also satisfy 6. Let $x$ be such a vertex, and call an edge $e$ with $x \in e$ good if $e$ satisfies (B). Conditioning on $x \in V'$, the chance a good edge remains in the induced hypergraph is $(1-p)^{(1 \pm \delta_{11})(r-1)D}$. So the expectation of $d'(x)$ becomes
\[ \mathbb{E}(d'(x)) = (1 \pm \delta \pm \delta_{10}) D\, (1-p)^{(1 \pm \delta_{11})(r-1)D} \pm \delta_{10} D = e^{-\epsilon(r-1)} D (1 \pm \delta_{12}). \]
Now for each $e$ containing $x$, let $I_e$ be the indicator random variable of the event $e \subseteq V'$. Then $d'(x)$ is the sum of the $I_e$ conditioned on $x \in V'$, so
\[ \mathrm{Var}(d'(x)) \le \mathbb{E}(d'(x)) + \sum_{\substack{x \in e,\ x \in f \\ e \ne f}} \mathrm{Cov}(I_e, I_f) \le \mathbb{E}(d'(x)) + 2\delta_{10} D^2 (1 \pm \delta) + \sum_{\substack{x \in e,\ x \in f,\ e \ne f \\ f \text{ satisfies (B)}}} \mathrm{Cov}(I_e, I_f). \]
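Finally, the plan of Note 4 can be acted out end to end on the same toy hypergraph. The sketch below is only a rough illustration of the iterated nibble (the parameters, the per-round recomputation of the typical degree, and the final one-edge-per-leftover-vertex step are my choices, not anything prescribed by the notes): it repeatedly takes an $\epsilon$-bite, then covers the leftovers greedily, and compares the resulting cover to the lower bound $n/r$.

```python
import random
from itertools import combinations
from math import comb

def nibble_cover(vertices, edges, eps=0.1, seed=0):
    """Toy iterated nibble (Note 4): repeatedly select each edge of the hypergraph induced on
    the still-uncovered vertices with probability eps / (current average degree), remove the
    vertices those edges cover, and finally cover each leftover vertex with one arbitrary edge."""
    rng = random.Random(seed)
    edges_at = {v: [e for e in edges if v in e] for v in vertices}
    uncovered, cover = set(vertices), []
    while len(uncovered) > eps * len(vertices):
        live = [e for e in edges if e <= uncovered]   # edges of the induced hypergraph
        if not live:
            break
        avg_deg = sum(len(e) for e in live) / len(uncovered)
        p = min(1.0, eps / avg_deg)
        bite = [e for e in live if rng.random() < p]
        cover.extend(bite)
        uncovered -= set().union(set(), *bite)
    cover.extend(edges_at[v][0] for v in uncovered)   # the inefficient final step from Note 4
    return cover

# Same style of toy hypergraph, on {1,...,30}: pairs as vertices, triples as edges (r = 3).
base, k, l = range(1, 31), 3, 2
vertices = list(combinations(base, l))
edges = [frozenset(combinations(t, l)) for t in combinations(base, k)]
cover = nibble_cover(vertices, edges)
assert all(any(v in e for e in cover) for v in vertices)      # it really is a cover
print(len(cover), len(vertices) / comb(k, l))                 # cover size vs. the lower bound n/r = 145
```

With parameters this small the printed cover is noticeably larger than $n/r$; the $(1+a)\frac{n}{r}$ guarantee of Lemma 1 only emerges as $D$ and $n$ grow, which is exactly what the $\delta_i$ bookkeeping in the proof of Lemma 2 is tracking.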
