Proving the claim

◮ Let (X1, X2, X3) ← Hom(H, T) be a uniformly random homomorphism.

◮ log |Hom(H, T)| = H(X1, X2, X3)
  = H(X1) + H(X2 | X1) + H(X3 | X1, X2)
  ≤ H(X1) + H(X2 | X1) + H(X3 | X2)
  = H(X1) + 2 · H(X2 | X1)   (by symmetry of H)

◮ Let D2(x) be the distribution of X2 | X1 = x, and let X'2 ∼ D2(X1).

◮ H(X1, X2, X'2) = H(X1) + H(X2 | X1) + H(X'2 | X1, X2)
  = H(X1) + H(X2 | X1) + H(X'2 | X1)
  = H(X1) + 2 · H(X2 | X1)

◮ (X1, X2) ∈ E_T and (X1, X'2) ∈ E_T
  ⟹ (X1, X2, X'2) ∈ Hom(G, T)
  ⟹ H(X1, X2, X'2) ≤ log |Hom(G, T)|
  ⟹ log |Hom(H, T)| ≤ log |Hom(G, T)|.

Iftach Haitner (TAU), Application of Information Theory, Lecture 6, December 2, 2014.
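The inequality log |Hom(H, T)| ≤ log |Hom(G, T)| can be sanity-checked by brute-force enumeration on a small target graph. A minimal sketch, assuming (as the edge conditions above suggest) that H is the triangle and G the two-edge path; the particular target T here is an illustrative choice, not taken from the lecture:

```python
from itertools import product

def homs(pattern_edges, n_pattern, target_edges, n_target):
    """Count graph homomorphisms: maps f from [n_pattern] to [n_target]
    sending every pattern edge to a target edge (undirected)."""
    E = set()
    for u, v in target_edges:
        E.add((u, v)); E.add((v, u))
    count = 0
    for f in product(range(n_target), repeat=n_pattern):
        if all((f[u], f[v]) in E for u, v in pattern_edges):
            count += 1
    return count

# Target T: a 4-cycle with one chord (an arbitrary small example)
T = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
triangle = [(0, 1), (1, 2), (2, 0)]   # assumed H: the triangle
path2    = [(0, 1), (1, 2)]           # assumed G: the two-edge path

h = homs(triangle, 3, T, 4)
g = homs(path2, 3, T, 4)
assert h <= g   # |Hom(H, T)| <= |Hom(G, T)|
```

Here Hom(H, T) is realized concretely as the set of tuples (f(1), ..., f(k)) respecting the edges of the pattern, which is exactly the sample space of (X1, X2, X3) above.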
Section 2
Perfect Matchings
Bregman's theorem

For a bipartite graph G = (A, B, E), let d(v) = |N(v)|, where N(v) = {u ∈ B : (v, u) ∈ E}.

Theorem 1
Let G = (A, B, E) be a bipartite graph with |A| = |B|. Then P(G) — the number of perfect matchings in G — is at most ∏_{v∈A} (d(v)!)^{1/d(v)}.

◮ Let A = B = [n] = {1, ..., n}.
◮ It is clear that P(G) ≤ ∏_{i∈[n]} d(i):
  ◮ Let ℳ be the set of perfect matchings in G.
  ◮ For m ∈ ℳ, let m(i) be the node in B matched with i by m.
  ◮ Let M ← ℳ be a uniformly random perfect matching. Hence,
    log |ℳ| = H(M) = H(M(1)) + H(M(2) | M(1)) + ... + H(M(n) | M(1), ..., M(n−1))
    ≤ H(M(1)) + H(M(2)) + ... + H(M(n))
    ≤ log d(1) + log d(2) + ... + log d(n)
    = Σ_{i∈[n]} log d(i).
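Both the naive degree-product bound and Bregman's bound can be checked by brute force on a small instance. A sketch (the particular graph is an arbitrary illustrative choice):

```python
from itertools import permutations
from math import factorial, prod

def perfect_matchings(adj):
    """Count perfect matchings of a bipartite graph with A = B = [n];
    adj[i] is the set of B-vertices adjacent to i (brute force over
    all permutations, so only suitable for small n)."""
    n = len(adj)
    return sum(all(p[i] in adj[i] for i in range(n))
               for p in permutations(range(n)))

# Small bipartite graph on [3] x [3]
adj = [{0, 1}, {0, 1, 2}, {1, 2}]
d = [len(a) for a in adj]

pm = perfect_matchings(adj)
naive = prod(d)                                     # prod of degrees
bregman = prod(factorial(k) ** (1 / k) for k in d)  # Bregman's bound
assert pm <= bregman <= naive   # 3 <= 2 * 6^(1/3) ~ 3.63 <= 12
```

For this graph Bregman's bound (about 3.63) is already much sharper than the degree product (12), and the true count is 3.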
Proving Bregman's theorem

◮ Key observation: H(M(i) | M(1), ..., M(i−1)) ≤ log |N(i) \ {M(1), ..., M(i−1)}|.
◮ Let 𝒫 be the set of all permutations over [n]. For p ∈ 𝒫:
  H(M) = H(M(p(1))) + ... + H(M(p(n)) | M(p(1)), ..., M(p(n−1))).
◮ Let S_p(i) = {p(1), ..., p(p⁻¹(i) − 1)} — the vertices of A that appear before i in the order p.
◮ H(M) = Σ_{i=1}^n H(M(i) | M(S_p(i))).
◮ For any fixed m ∈ ℳ and a uniform P ← 𝒫: |N(i) \ m(S_P(i))| is uniform over {1, ..., d(i)}
  ⟹ E_P[H(M(i) | M(S_P(i)))] ≤ (1/d(i)) Σ_{k=1}^{d(i)} log k = log((d(i)!)^{1/d(i)})
  ⟹ H(M) = E_P[Σ_{i=1}^n H(M(i) | M(S_P(i)))]
       = Σ_{i=1}^n E_P[H(M(i) | M(S_P(i)))]
       ≤ Σ_{i∈[n]} log((d(i)!)^{1/d(i)}).
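The uniformity of |N(i) \ m(S_P(i))| can be verified by exhaustive enumeration in a small case. A sketch assuming the complete bipartite graph K_{n,n} with the identity matching — an illustrative choice (in K_{n,n} every B-vertex is a neighbour of i, which makes the count easy to read off):

```python
from itertools import permutations
from collections import Counter

# Fix a matching m (the identity on K_{n,n}) and a vertex i; for every
# order p of [n], count the neighbours of i not yet used up by the
# vertices preceding i in p.
n, i = 4, 2
N_i = set(range(n))            # in K_{n,n} every vertex neighbours i
m = {v: v for v in range(n)}   # identity matching

counts = Counter()
for p in permutations(range(n)):
    before = p[:p.index(i)]    # S_p(i): vertices preceding i in p
    used = {m[v] for v in before}
    counts[len(N_i - used)] += 1

# |N(i) \ m(S_P(i))| takes each value in {1, ..., d(i)} equally often
assert set(counts) == set(range(1, n + 1))
assert len(set(counts.values())) == 1
```

Each of the n possible counts occurs for exactly (n−1)! of the n! permutations, i.e. the count is uniform over {1, ..., d(i)} as the proof asserts.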
Section 3
Shearer's Lemma
H(X1, X2, X3) vs. H(X1, X2) + H(X2, X3) + H(X3, X1)

◮ How does H(X1, X2, X3) compare to H(X1, X2) + H(X2, X3) + H(X3, X1)?
◮ If X1, X2, X3 are independent, then
  H(X1, X2, X3) = ½ (H(X1, X2) + H(X2, X3) + H(X3, X1)).
◮ In general: H(X1, X2, X3) ≤ ½ (H(X1, X2) + H(X2, X3) + H(X3, X1)).
◮ A tighter bound than H(X1) + H(X2) + H(X3).
◮ Proof: 2 H(X1, X2, X3) = 2 H(X1) + 2 H(X2 | X1) + 2 H(X3 | X1, X2), while
  H(X1, X2) = H(X1) + H(X2 | X1)
  H(X2, X3) = H(X2) + H(X3 | X2)
  H(X1, X3) = H(X1) + H(X3 | X1)
◮ but H(X2 | X1) ≤ H(X2), H(X3 | X1, X2) ≤ H(X3 | X1), and H(X3 | X1, X2) ≤ H(X3 | X2), so comparing the left-hand side term by term against the sum of the three identities gives the bound.
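The inequality can be checked numerically on an arbitrary joint distribution. A small sketch on a random pmf over {0,1}³:

```python
import random
from math import log2
from itertools import product

def H(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: prob}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginal pmf of the coordinates in idxs."""
    out = {}
    for x, p in pmf.items():
        key = tuple(x[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

random.seed(0)
w = [random.random() for _ in range(8)]
total = sum(w)
joint = {x: wi / total                    # random joint pmf on {0,1}^3
         for x, wi in zip(product([0, 1], repeat=3), w)}

lhs = H(joint)
rhs = 0.5 * (H(marginal(joint, (0, 1))) + H(marginal(joint, (1, 2)))
             + H(marginal(joint, (0, 2))))
assert lhs <= rhs + 1e-12   # H(X1,X2,X3) <= (H12 + H23 + H13) / 2
```

Changing the seed, or replacing the random weights with any other pmf, should never make the assertion fail.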
Shearer's lemma

◮ Let X = (X1, ..., Xn).
◮ For S = {i1, ..., ik} ⊆ [n], let X_S = (X_{i1}, ..., X_{ik}).
◮ Example: X_{1,3} = (X1, X3).

Lemma 2 (Shearer's lemma)
Let X = (X1, ..., Xn) be a rv and let ℱ be a family of subsets of [n] s.t. each i ∈ [n] appears in at least m subsets of ℱ. Then H(X) ≤ (1/m) Σ_{F∈ℱ} H(X_F).

Proof:
◮ H(X) = Σ_{i=1}^n H(Xi | {Xℓ : ℓ < i}).
◮ H(X_F) = Σ_{i∈F} H(Xi | {Xℓ : ℓ < i ∧ ℓ ∈ F}).
◮ For each i, fix m subsets F_{i,1}, ..., F_{i,m} ∈ ℱ containing i. Since conditioning on fewer variables can only increase entropy,
  Σ_{F∈ℱ} H(X_F) ≥ Σ_{i=1}^n Σ_{j=1}^m H(Xi | {Xℓ : ℓ < i ∧ ℓ ∈ F_{i,j}})
  ≥ Σ_{i=1}^n m · H(Xi | {Xℓ : ℓ < i})
  = m · H(X).
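Shearer's lemma can likewise be checked numerically for a concrete cover. A sketch with n = 4 and the cyclic family {{1,2},{2,3},{3,4},{4,1}} (0-indexed below), in which every coordinate appears in exactly m = 2 subsets:

```python
import random
from math import log2
from itertools import product

def H(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: prob}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginal pmf of the coordinates in idxs."""
    out = {}
    for x, p in pmf.items():
        key = tuple(x[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

random.seed(1)
n = 4
w = [random.random() for _ in range(2 ** n)]
Z = sum(w)
joint = {x: wi / Z for x, wi in zip(product([0, 1], repeat=n), w)}

# A cover in which every index of [n] appears in exactly m = 2 subsets
family = [(0, 1), (1, 2), (2, 3), (3, 0)]
m = 2
shearer = sum(H(marginal(joint, F)) for F in family) / m
assert H(joint) <= shearer + 1e-12   # H(X) <= (1/m) sum_F H(X_F)
```

The three-variable bound on the previous slide is the special case n = 3, ℱ = all 2-subsets, m = 2.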
Corollary

Corollary 3
Let ℱ = {F ⊆ [n] : |F| = k}. Then
H(X) ≤ (n/k) · (1 / (n choose k)) · Σ_{F∈ℱ} H(X_F) = (n/k) · E_{F←ℱ}[H(X_F)].

Proof: (k/n) · (n choose k) is the # of times each i ∈ [n] appears in ℱ.

Implications:
◮ Let Q ⊆ {0,1}^n and X = (X1, ..., Xn) ← Q.
◮ |Q| ≤ 2^{(n/k) · E_{F←ℱ}[H(X_F)]}.
◮ E_F[H(X_F)] is small ⟹ Q is small.
◮ Q is large ⟹ E_F[H(X_F)] is large.
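The bound |Q| ≤ 2^{(n/k)·E[H(X_F)]} can be evaluated directly for a concrete set Q, taking X uniform over Q so that each X_F is the empirical marginal of Q on the coordinates of F. A sketch (the set Q is an arbitrary illustrative choice):

```python
from math import log2
from itertools import combinations

def H_counts(counts):
    """Entropy (bits) of the empirical distribution given outcome counts."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

n, k = 4, 2
Q = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]

avg = 0.0
fams = list(combinations(range(n), k))   # all k-subsets of [n]
for F in fams:
    counts = {}
    for x in Q:
        key = tuple(x[i] for i in F)
        counts[key] = counts.get(key, 0) + 1
    avg += H_counts(counts)
avg /= len(fams)                         # E_{F <- fams}[H(X_F)]

assert log2(len(Q)) <= (n / k) * avg + 1e-12   # |Q| <= 2^{(n/k) E[H(X_F)]}
```

Here log2 |Q| ≈ 2.32 while (n/k)·E[H(X_F)] ≈ 3.71, so the corollary holds with room to spare for this Q.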
Example

◮ Q ⊆ {0,1}^n with |Q| = 2^n / 2 = 2^{n−1}; X ← Q.
◮ ℱ = {F ⊆ [n] : |F| = k}.
◮ By Corollary 3, log |Q| = n − 1 ≤ (n/k) · E_{F←ℱ}[H(X_F)]
  ⟹ E_F[H(X_F)] ≥ k(1 − 1/n) = k − k/n
  ⟹ ∃ F ∈ ℱ s.t. H(X_F) ≥ k − k/n.
◮ Assume n = 1000 and k = 5; hence H(X_F) ≥ 5 − 1/200.
◮ X_F can take at least 2^{5 − 1/200} = 2^{−1/200} · 2^5 > 31 (and hence 32) values.
◮ Stronger conclusion: X_F is close to the uniform distribution.
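The arithmetic of the last two steps is easy to verify: a variable taking at most 31 values has entropy at most log2 31 < 5 − 1/200, so X_F must take all 2^5 = 32 values.

```python
from math import log2

# Entropy lower bound from the example: k(1 - 1/n) with n = 1000, k = 5
n, k = 1000, 5
bound = k - k / n
assert bound == 5 - 1 / 200
assert 2 ** bound > 31          # 2^(5 - 1/200) ~ 31.89
assert log2(31) < bound         # 31 values cannot reach this entropy
```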
More generally

◮ |Q| ≥ (1/2^d) · 2^n = 2^{n−d}; X ← Q.
◮ ℱ = {F ⊆ [n] : |F| = k}.
◮ n − d ≤ H(X) ≤ (n/k) · (1/|ℱ|) · Σ_{F∈ℱ} H(X_F)
  ⟹ (1/|ℱ|) · Σ_{F∈ℱ} H(X_F) ≥ (k/n)(n − d) = k − dk/n.