
Application of Information Theory, Lecture 6: Counting
Iftach Haitner, Tel Aviv University
December 2, 2014

Section 1: Graph Homomorphisms


Proving the claim

◮ (X1, X2, X3) ← Hom(H, T)
◮ log |Hom(H, T)| = H(X1, X2, X3)
    = H(X1) + H(X2 | X1) + H(X3 | X1, X2)
    ≤ H(X1) + H(X2 | X1) + H(X3 | X2)
    = H(X1) + 2·H(X2 | X1)   (by the symmetry of H)
◮ Let D2(x) be the distribution of X2 | X1 = x, and let X′2 ∼ D2(X1)
◮ H(X1, X2, X′2) = H(X1) + H(X2 | X1) + H(X′2 | X1, X2)
    = H(X1) + H(X2 | X1) + H(X′2 | X1)
    = H(X1) + 2·H(X2 | X1)
◮ (X1, X2) ∈ E_T and (X1, X′2) ∈ E_T
    ⟹ (X1, X2, X′2) ∈ Hom(G, T)
    ⟹ H(X1, X2, X′2) ≤ log |Hom(G, T)|
    ⟹ log |Hom(H, T)| ≤ log |Hom(G, T)|.
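Inequalities of this form are easy to sanity-check numerically: for small graphs, |Hom(·, T)| can be computed by brute force. A minimal Python sketch; the path and triangle below are illustrative stand-ins, not the particular H, G, T of the claim:

```python
from itertools import product

def hom_count(edges_h, n_h, edges_t, n_t):
    """Count graph homomorphisms from H (vertices 0..n_h-1, edge list edges_h)
    into T (vertices 0..n_t-1, edge list edges_t) by brute force."""
    adj_t = {(u, v) for u, v in edges_t} | {(v, u) for u, v in edges_t}
    return sum(
        all((f[u], f[v]) in adj_t for u, v in edges_h)
        for f in product(range(n_t), repeat=n_h)  # every vertex map H -> T
    )

# Illustrative instances (not the H, G, T of the claim):
path = [(0, 1), (1, 2)]              # the path on 3 vertices
triangle = [(0, 1), (1, 2), (2, 0)]
print(hom_count(path, 3, triangle, 3))  # 12: 3 choices for the middle, 2 x 2 for the ends
```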

Section 2: Perfect Matchings

Bregman's theorem

For a bipartite graph G = (A, B, E), let d(v) = |N(v)|, where N(v) = {u ∈ B : (v, u) ∈ E}.

Theorem 1
Let G = (A, B, E) be a bipartite graph with |A| = |B|. Then P(G) — the number of perfect matchings in G — is at most ∏_{v∈A} (d(v)!)^{1/d(v)}.

◮ Let A = B = [n] = {1, ..., n}
◮ It is clear that P(G) ≤ ∏_{i∈[n]} d(i):
  ◮ Let 𝓜 be the set of perfect matchings in G.
  ◮ For m ∈ 𝓜, let m(i) be the node in B matched with i by m.
  ◮ Let M ← 𝓜 be a uniformly chosen matching. Hence,
      log |𝓜| = H(M) = H(M(1)) + H(M(2) | M(1)) + ... + H(M(n) | M(1), ..., M(n−1))
      ≤ H(M(1)) + H(M(2)) + ... + H(M(n))
      ≤ log d(1) + log d(2) + ... + log d(n)
      = ∑_{i∈[n]} log d(i)
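Both the easy bound ∏ d(i) and Bregman's bound ∏ (d(i)!)^{1/d(i)} are easy to compare against an exact brute-force count on small graphs. A sketch; the 4×4 graph below is an illustrative choice (Bregman's bound is tight on disjoint unions of complete bipartite graphs):

```python
from itertools import permutations
from math import factorial, prod

def perfect_matchings(adj):
    """Number of perfect matchings of a bipartite graph on [n] + [n], where
    adj[i] is the set of B-vertices adjacent to A-vertex i (brute force)."""
    n = len(adj)
    return sum(all(p[i] in adj[i] for i in range(n))
               for p in permutations(range(n)))

def bregman_bound(adj):
    # prod over v in A of (d(v)!)^(1/d(v))
    return prod(factorial(len(nbrs)) ** (1 / len(nbrs)) for nbrs in adj)

# Illustrative: two disjoint copies of K_{2,2}; the bound is tight here.
adj = [{0, 1}, {0, 1}, {2, 3}, {2, 3}]
print(perfect_matchings(adj), bregman_bound(adj))  # 4 and ≈ 4.0
```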

Proving Bregman's theorem

◮ Key observation: H(M(i) | M(1), ..., M(i−1)) ≤ log |N(i) \ {M(1), ..., M(i−1)}|
◮ Let 𝓟 be the set of all permutations over [n]. For p ∈ 𝓟:
    H(M) = H(M(p(1))) + ... + H(M(p(n)) | M(p(1)), ..., M(p(n−1)))
◮ S_p(i) = {p(1), ..., p(p⁻¹(i) − 1)} — the vertices matched, in the above order, before i
◮ H(M) = ∑_{i=1}^n H(M(i) | M(S_p(i)))
◮ For m ∈ 𝓜 and P ← 𝓟: |N(i) \ m(S_P(i))| is uniform over {1, ..., d(i)}
    ⟹ E_P[H(M(i) | M(S_P(i)))] ≤ (1/d(i)) · ∑_{k=1}^{d(i)} log k = log((d(i)!)^{1/d(i)})
    ⟹ H(M) = E_P[∑_{i=1}^n H(M(i) | M(S_P(i)))]
            = ∑_{i=1}^n E_P[H(M(i) | M(S_P(i)))]
            ≤ ∑_{i∈[n]} log((d(i)!)^{1/d(i)}).
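The distributional fact driving the proof — that |N(i) \ m(S_P(i))| is uniform over {1, ..., d(i)} for a fixed matching m and a uniform ordering P — can be checked by simulation. A sketch on an illustrative instance (K_{3,3} with the identity matching):

```python
import random
from collections import Counter

def remaining_choices(adj, m, i, trials=100_000):
    """Empirical distribution of |N(i) \\ m(S_P(i))| over uniform random
    orderings P of the A-side, for a fixed matching m (m[j] = B-partner of j)."""
    n = len(adj)
    counts = Counter()
    for _ in range(trials):
        order = random.sample(range(n), n)    # a uniform permutation of A
        before = order[:order.index(i)]       # S_P(i): A-vertices placed before i
        taken = {m[j] for j in before}        # B-vertices they occupy under m
        counts[len(adj[i] - taken)] += 1
    return counts

# Illustrative: K_{3,3} with the identity matching; d(0) = 3, so the counts
# should split roughly evenly over the values {1, 2, 3}.
adj = [set(range(3)) for _ in range(3)]
m = [0, 1, 2]
print(remaining_choices(adj, m, i=0))
```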

Section 3: Shearer's Lemma

H(X1, X2, X3) vs. H(X1, X2) + H(X2, X3) + H(X3, X1)

◮ How does H(X1, X2, X3) compare to H(X1, X2) + H(X2, X3) + H(X3, X1)?
◮ If X1, X2, X3 are independent, then
    H(X1, X2, X3) = ½ (H(X1, X2) + H(X2, X3) + H(X3, X1))
◮ In general: H(X1, X2, X3) ≤ ½ (H(X1, X2) + H(X2, X3) + H(X3, X1))
◮ A tighter bound than H(X1) + H(X2) + H(X3)
◮ Proof: 2·H(X1, X2, X3) = 2·H(X1) + 2·H(X2 | X1) + 2·H(X3 | X1, X2), while
    H(X1, X2) = H(X1) + H(X2 | X1)
    H(X2, X3) = H(X2) + H(X3 | X2)
    H(X1, X3) = H(X1) + H(X3 | X1)
◮ but H(X2 | X1) ≤ H(X2), H(X3 | X1, X2) ≤ H(X3 | X1), and H(X3 | X1, X2) ≤ H(X3 | X2),
  so the sum of the three pairwise entropies dominates 2·H(X1, X2, X3) term by term.
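The inequality is easy to verify numerically for any concrete joint distribution. A small check on a random distribution over {0,1}^3 (an illustrative instance):

```python
import itertools
import math
import random

def H(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, coords):
    out = {}
    for x, p in joint.items():
        key = tuple(x[c] for c in coords)
        out[key] = out.get(key, 0.0) + p
    return out

# A random joint distribution on {0,1}^3 (illustrative)
support = list(itertools.product([0, 1], repeat=3))
weights = [random.random() for _ in support]
total = sum(weights)
joint = {x: w / total for x, w in zip(support, weights)}

lhs = H(joint)
rhs = (H(marginal(joint, (0, 1))) + H(marginal(joint, (1, 2)))
       + H(marginal(joint, (0, 2)))) / 2
print(f"{lhs:.4f} <= {rhs:.4f}: {lhs <= rhs + 1e-9}")
```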

Shearer's lemma

◮ Let X = (X1, ..., Xn)
◮ For S = {i1, ..., ik} ⊆ [n], let X_S = (X_{i1}, ..., X_{ik})
◮ Example: X_{1,3} = (X1, X3)

Lemma 2 (Shearer's lemma)
Let X = (X1, ..., Xn) be a rv and let 𝓕 be a family of subsets of [n] s.t. each i ∈ [n] appears in at least m subsets of 𝓕. Then H(X) ≤ (1/m) · ∑_{F∈𝓕} H(X_F).

Proof:
◮ H(X) = ∑_{i=1}^n H(X_i | {X_ℓ : ℓ < i})
◮ H(X_F) = ∑_{i∈F} H(X_i | {X_ℓ : ℓ < i ∧ ℓ ∈ F})
◮ Hence, letting F_{i,1}, ..., F_{i,m} ∈ 𝓕 be m of the subsets containing i,
    ∑_{F∈𝓕} H(X_F) ≥ ∑_{i=1}^n ∑_{j=1}^m H(X_i | {X_ℓ : ℓ < i ∧ ℓ ∈ F_{i,j}})
                    ≥ ∑_{i=1}^n m · H(X_i | {X_ℓ : ℓ < i})   (conditioning reduces entropy)
                    = m · H(X)
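A generic numerical check of the lemma; the set Q and the family of all 2-element subsets below are illustrative choices (each coordinate of [4] lies in exactly m = 3 of the six 2-subsets):

```python
import itertools
import math

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def project(joint, S):
    out = {}
    for x, p in joint.items():
        key = tuple(x[i] for i in S)
        out[key] = out.get(key, 0.0) + p
    return out

def shearer_check(joint, family, m):
    """Return (H(X), (1/m) * sum_F H(X_F)); the lemma says lhs <= rhs."""
    lhs = entropy(joint)
    rhs = sum(entropy(project(joint, F)) for F in family) / m
    return lhs, rhs

# Illustrative: X uniform over an arbitrary Q ⊆ {0,1}^4
Q = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1), (0, 0, 1, 1)]
joint = {x: 1 / len(Q) for x in Q}
family = list(itertools.combinations(range(4), 2))
print(shearer_check(joint, family, m=3))
```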

Corollary

Corollary 3
Let 𝓕 = {F ⊆ [n] : |F| = k}. Then
    H(X) ≤ (1 / ((k/n)·C(n,k))) · ∑_{F∈𝓕} H(X_F) = (n/k) · E_{F←𝓕}[H(X_F)].

Proof: (k/n)·C(n,k) = C(n−1, k−1) is the # of times each i appears in 𝓕.

Implications:
◮ Let Q ⊆ {0,1}^n and X = (X1, ..., Xn) ← Q
◮ |Q| ≤ 2^{(n/k)·E_{F←𝓕}[H(X_F)]}
◮ E_F[H(X_F)] is small ⟹ Q is small
◮ Q is large ⟹ E_F[H(X_F)] is large
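The counting implication |Q| ≤ 2^{(n/k)·E_F[H(X_F)]} can be checked directly on a concrete Q; the set below (strings in {0,1}^6 whose weight is divisible by 3) is an arbitrary illustrative choice:

```python
import itertools
import math

def projection_entropy(Q, S):
    """H(X_S) in bits for X uniform over Q, projected to coordinates S."""
    counts = {}
    for x in Q:
        key = tuple(x[i] for i in S)
        counts[key] = counts.get(key, 0) + 1
    N = len(Q)
    return -sum(c / N * math.log2(c / N) for c in counts.values())

n, k = 6, 2
Q = [x for x in itertools.product([0, 1], repeat=n) if sum(x) % 3 == 0]
avg = (sum(projection_entropy(Q, S)
           for S in itertools.combinations(range(n), k))
       / math.comb(n, k))
print(len(Q), 2 ** (n / k * avg))  # |Q| is at most the second number
```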

Example

◮ Q ⊆ {0,1}^n with |Q| = 2^n / 2 = 2^{n−1}; X ← Q.
◮ 𝓕 = {F ⊆ [n] : |F| = k}
◮ By Corollary 3, log |Q| = n − 1 ≤ (n/k) · E_{F←𝓕}[H(X_F)]
    ⟹ E_F[H(X_F)] ≥ k·(1 − 1/n) = k − k/n
    ⟹ ∃ F ∈ 𝓕 s.t. H(X_F) ≥ k − k/n
◮ Assume n = 1000 and k = 5; hence H(X_F) ≥ 5 − 1/200
◮ X_F can take at least 2^{5 − 1/200} = 2^{−1/200} · 2^5 > 31 (and hence at least 32) values
◮ Stronger conclusion: X_F is close to the uniform distribution.
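The arithmetic of the last two bullets, spelled out:

```python
n, k = 1000, 5
bound = k * (1 - 1 / n)   # guaranteed H(X_F) for some F, in bits: 5 - 1/200 = 4.995
print(bound, 2 ** bound)  # 4.995 and ≈ 31.89 > 31, so X_F takes at least 32 values
```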

More generally

◮ |Q| ≥ (1/2^d) · 2^n; X ← Q
◮ 𝓕 = {F ⊆ [n] : |F| = k}
◮ n − d ≤ H(X) ≤ (n/k) · (1/|𝓕|) · ∑_{F∈𝓕} H(X_F)
    ⟹ (1/|𝓕|) · ∑_{F∈𝓕} H(X_F) ≥ (k/n)·(n − d) = k − dk/n
