Concave Programming Upper Bounds on the Capacity of 2-D Constraints


1. Concave Programming Upper Bounds on the Capacity of 2-D Constraints

Ido Tal and Ron M. Roth
Work done while at the Computer Science Department, Technion, Haifa 32000, Israel

2. 2-D constraints

Example: the square constraint. A binary M × N array satisfies the square constraint iff no two '1' symbols are adjacent on a row, column, or diagonal.

Example (a 5 × 8 array satisfying the constraint):

1 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0

If a '0' adjacent to a '1' is changed to '1', then the square constraint no longer holds.

Notation for the general case: denote by S_M the set of all M × M arrays satisfying the constraint S.
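As a concrete companion to the definition, here is a minimal Python sketch (not part of the talk; the function name is mine) that tests the square constraint on a binary array:

```python
import numpy as np

def satisfies_square_constraint(a):
    """True iff no two '1' entries of the binary array `a` are
    adjacent on a row, column, or diagonal."""
    a = np.asarray(a)
    rows, cols = a.shape
    for i in range(rows):
        for j in range(cols):
            if a[i, j] != 1:
                continue
            # It suffices to look "forward": right, and the three
            # cells in the next row (each adjacent pair is seen once).
            for di, dj in ((0, 1), (1, -1), (1, 0), (1, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and a[ni, nj] == 1:
                    return False
    return True
```

Applied to the 5 × 8 example above it returns True; flipping any 0 next to a 1 makes it return False.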

3. Capacity

Definition:
$$\mathrm{cap}(S) = \lim_{M\to\infty} \frac{1}{M^2}\,\log_2 |S_M|.$$

Intuitively: an M × M array that must satisfy S can encode "about" cap(S) · M² bits.

Our goal: derive an upper bound on cap(S).
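To make the limit tangible, here is a brute-force sketch (my illustration, feasible only for tiny M) that counts |S_M| for the square constraint and prints the normalized quantity log₂|S_M| / M²:

```python
import math
from itertools import product

def is_valid(grid, M):
    # Square constraint: no two 1s adjacent in any of the 8 directions.
    for i in range(M):
        for j in range(M):
            if grid[i][j] == 1:
                for di, dj in ((0, 1), (1, -1), (1, 0), (1, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < M and 0 <= nj < M and grid[ni][nj] == 1:
                        return False
    return True

for M in range(1, 5):
    size = sum(
        is_valid([bits[r * M:(r + 1) * M] for r in range(M)], M)
        for bits in product((0, 1), repeat=M * M)
    )
    print(M, size, math.log2(size) / M ** 2)  # tends to cap(S) as M grows
```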

4. Behind the scenes

(If you don't understand this slide, disregard it.)

In the 1-D case, the capacity of a constraint equals the entropy of a corresponding maxentropic (and stationary) Markov chain. Namely, we calculate the entropy of a random variable, maximized over a set of probabilities. Essentially, we try to find a (partial) 2-D analog.

5. Burton and Steif

Theorem [Burton and Steif]: For all M > 0 there exists a random variable W^(M) taking values in S_M such that:
- The normalized entropy of W^(M) approaches capacity:
$$\lim_{M\to\infty} \frac{1}{M^2}\, H\bigl(W^{(M)}\bigr) = \mathrm{cap}(S).$$
- The probability distribution of W^(M) is stationary.

Notice that the theorem promises the existence of such a distribution, but does not give a way to calculate it.

6. Bounding H(W)

Recall that
$$\mathrm{cap}(S) = \lim_{M\to\infty} \frac{1}{M^2}\, H\bigl(W^{(M)}\bigr).$$
Focus on finding an upper bound on H(W^(M)). Fix M and denote W = W^(M).

7. Lexicographic order

Define the standard lexicographic order ≺ in 2-D. Namely, (i₁, j₁) ≺ (i₂, j₂) iff i₁ < i₂, or (i₁ = i₂ and j₁ < j₂).

Example (a 5 × 5 array with entries labeled in lexicographic order):

 1  2  3  4  5
 6  7  8  9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25

An entry labeled p precedes an entry labeled q iff p < q.
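In code, this order is just tuple comparison (a trivial sketch; the function name is mine):

```python
def lex_precedes(p, q):
    """(i1, j1) precedes (i2, j2) iff i1 < i2, or i1 == i2 and j1 < j2."""
    return p < q  # Python compares tuples lexicographically

assert lex_precedes((0, 3), (1, 0))  # earlier row always precedes
assert lex_precedes((2, 1), (2, 4))  # same row: smaller column precedes
```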

8. The chain rule

Define the index set T_{i,j} as all the indices preceding (i, j) according to ≺. Let B be the index set of W. By the chain rule,
$$H(W) = \sum_{(i,j)\in B} H\bigl(W_{i,j} \,\big|\, W[T_{i,j}\cap B]\bigr).$$
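For instance, on a 2 × 2 array scanned in this order (a worked instance of the chain rule, not a slide from the talk):
$$H(W) = H(W_{1,1}) + H(W_{1,2} \mid W_{1,1}) + H(W_{2,1} \mid W_{1,1}, W_{1,2}) + H(W_{2,2} \mid W_{1,1}, W_{1,2}, W_{2,1}).$$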


9. Truncating the chain

Let Λ be a relatively small "patch" contained in B. Let (a, b) be an index contained in Λ. Denote by Λ_{i,j} the shift of Λ that moves (a, b) to (i, j).
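A hypothetical helper makes the shift precise (my notation, treating Λ as a set of indices):

```python
def shift_patch(patch, a, b, i, j):
    """Return the index set Lambda_{i,j}: shift every index of
    `patch` by the vector that moves (a, b) to (i, j)."""
    di, dj = i - a, j - b
    return {(r + di, c + dj) for (r, c) in patch}
```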

10. Truncating the chain

Recall that
$$H(W) = \sum_{(i,j)\in B} H\bigl(W_{i,j} \,\big|\, W[T_{i,j}\cap B]\bigr).$$
Previously: condition on all of the preceding entries, W[T_{i,j} ∩ B].
Now: condition only on the preceding entries contained in the patch, W[T_{i,j} ∩ B ∩ Λ_{i,j}].


11. Truncating the chain, illustrated

Conditioning on only a subset of the preceding entries gives an upper bound:
$$H(W) \le \sum_{(i,j)\in B} H\bigl(W_{i,j} \,\big|\, W[T_{i,j}\cap B\cap \Lambda_{i,j}]\bigr).$$
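The inequality rests on the standard fact that conditioning cannot increase entropy: writing U for the kept entries W[T_{i,j} ∩ B ∩ Λ_{i,j}] and V for the discarded ones, each term satisfies
$$H(W_{i,j} \mid U, V) \le H(W_{i,j} \mid U),$$
so dropping V from the conditioning can only enlarge each summand.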


12. Truncating the chain rule

Recall that W is stationary. Thus, for all (i, j) such that the patch Λ_{i,j} is contained inside the array,
$$H\bigl(W_{i,j} \,\big|\, W[T_{i,j}\cap B\cap \Lambda_{i,j}]\bigr) = H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr).$$

13. Truncating the chain rule (continued)

$$H(W) \le \sum_{(i,j)\in B} H\bigl(W_{i,j} \,\big|\, W[T_{i,j}\cap B\cap \Lambda_{i,j}]\bigr) \approx M^2 \cdot H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr).$$
As long as we are not near the border, the same term is summed over and over.


14. Truncating the chain rule

Thus, a simple derivation gives
$$\frac{H(W)}{M^2} \le H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr) + O(1/M).$$
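Where the O(1/M) comes from (a sketch, assuming a binary alphabet so each conditional entropy term is at most one bit): only the O(M) indices near the border fail to have Λ_{i,j} fully inside the array, and bounding each of those terms by 1 gives
$$\frac{H(W)}{M^2} \le \frac{(M^2 - O(M)) \cdot H\bigl(W_{a,b} \mid W[T_{a,b}\cap B\cap\Lambda]\bigr) + O(M)}{M^2} \le H\bigl(W_{a,b} \mid W[T_{a,b}\cap B\cap\Lambda]\bigr) + O(1/M).$$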

15. Unknown probability distribution

$$\underbrace{H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr)}_{\clubsuit}$$
In order to calculate ♣, we must know the probability distribution of W[Λ]. We don't know the probability distribution of W[Λ], but we do know some of its properties.

16. Known properties of the probability distribution (1)

Trivial knowledge: let x be a realization of W[Λ], with positive probability p_x.
- We know that x satisfies the constraint S.
- We know that $\sum_x p_x = 1$.

17. Known properties of the probability distribution (2)

Vertical stationarity: since W is stationary, W[Λ] is stationary as well. Thus, for example,
$$P\left(W[\Lambda] = \begin{matrix} 1\;0\;0\;1 \\ *\;*\;*\;* \end{matrix}\right) = P\left(W[\Lambda] = \begin{matrix} *\;*\;*\;* \\ 1\;0\;0\;1 \end{matrix}\right).$$
The above can be written as
$$\sum_{x\in A} p_x = \sum_{x\in B} p_x,$$
where x is in A (resp. B) iff its first (resp. second) row is 1 0 0 1.

18. Known properties of the probability distribution (3)

Horizontal stationarity: another example:
$$P\left(W[\Lambda] = \begin{matrix} 1\;0\;0\;* \\ 0\;0\;0\;* \end{matrix}\right) = P\left(W[\Lambda] = \begin{matrix} *\;1\;0\;0 \\ *\;0\;0\;0 \end{matrix}\right).$$
Again, both sides are marginalizations of $(p_x)_x$. To sum up: the probabilities $(p_x)_x$ satisfy a collection of linear equalities and inequalities.
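To illustrate, a Python sketch (my construction, not the authors' code) that enumerates the valid configurations of a 2 × 4 patch and records the vertical-stationarity equalities as pairs of index sets whose probability sums must agree:

```python
from itertools import product

ROWS, COLS = 2, 4  # patch dimensions, as in the examples above

def valid_patch(patch):
    # Square constraint inside the patch: no two adjacent 1s.
    for i in range(ROWS):
        for j in range(COLS):
            if patch[i][j] == 1:
                for di, dj in ((0, 1), (1, -1), (1, 0), (1, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ROWS and 0 <= nj < COLS and patch[ni][nj] == 1:
                        return False
    return True

patches = [x for x in (tuple(bits[r * COLS:(r + 1) * COLS] for r in range(ROWS))
                       for bits in product((0, 1), repeat=ROWS * COLS))
           if valid_patch(x)]

# Vertical stationarity: for every row pattern r, the p-sum over
# {x : first row of x is r} equals the p-sum over
# {x : second row of x is r}.
equalities = []
for r in product((0, 1), repeat=COLS):
    top = [i for i, x in enumerate(patches) if x[0] == r]
    bottom = [i for i, x in enumerate(patches) if x[1] == r]
    equalities.append((top, bottom))
print(len(patches), "valid patches,", len(equalities), "equality constraints")
```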

19. An upper bound

$$\underbrace{H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr)}_{\clubsuit}$$
We don't know the probability distribution of W[Λ], but we do know some of its properties. So, let us choose the probability distribution that maximizes ♣ subject to these properties. Since conditional entropy is a concave function of the distribution and the known properties are linear, this is an instance of convex (concave) programming.
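A minimal end-to-end sketch of such a program, using cvxpy on a toy 2 × 2 patch (my illustration: the patch size, the choice of (a, b) as the last cell, and the cvxpy modeling are assumptions, not the authors' setup). Conditional entropy is concave in the joint distribution, and cvxpy expresses it through rel_entr:

```python
import numpy as np
import cvxpy as cp
from itertools import product

# Valid 2x2 patches under the square constraint: every pair of cells
# in a 2x2 patch is adjacent, so at most one '1' may appear.
patches = [x for x in product((0, 1), repeat=4) if sum(x) <= 1]
n = len(patches)                       # here n = 5
p = cp.Variable(n, nonneg=True)        # p_x for each valid patch x

# Take (a, b) to be the last cell in raster order; its "context" is
# the three preceding cells. A[i, j] = 1 iff patches i and j share a
# context, so (A @ p)[i] is the probability of patch i's context.
ctx = [x[:3] for x in patches]
A = np.array([[1.0 if ctx[j] == ctx[i] else 0.0 for j in range(n)]
              for i in range(n)])

# Conditional entropy H(W_{a,b} | preceding patch entries) in bits:
# -sum_x p_x log(p_x / q_ctx(x)); rel_entr uses the natural log.
club = cp.sum(-cp.rel_entr(p, A @ p)) / np.log(2)

def psum(idx):
    return cp.sum(p[idx]) if idx else 0

constraints = [cp.sum(p) == 1]
for r in product((0, 1), repeat=2):
    rows_top = [i for i, x in enumerate(patches) if x[0:2] == r]
    rows_bot = [i for i, x in enumerate(patches) if x[2:4] == r]
    cols_left = [i for i, x in enumerate(patches) if (x[0], x[2]) == r]
    cols_right = [i for i, x in enumerate(patches) if (x[1], x[3]) == r]
    if rows_top or rows_bot:      # vertical stationarity
        constraints.append(psum(rows_top) == psum(rows_bot))
    if cols_left or cols_right:   # horizontal stationarity
        constraints.append(psum(cols_left) == psum(cols_right))

prob = cp.Problem(cp.Maximize(club), constraints)
prob.solve()
print("upper bound on the conditional entropy term (bits):", prob.value)
```

In this toy instance the stationarity equalities already force the four single-'1' patterns to share equal probability; real instances of the method use far larger patches, where the same structure (linear constraints, concave objective) persists.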

20. Conclusion

$$\frac{H(W)}{M^2} \le \underbrace{H\bigl(W_{a,b} \,\big|\, W[T_{a,b}\cap B\cap \Lambda]\bigr)}_{\clubsuit} + O(1/M).$$
Using convex programming, we can find an upper bound on ♣. Since $\mathrm{cap}(S) = \lim_{M\to\infty} H(W)/M^2$, this leads to an upper bound on cap(S).

Improvements to the basic bound:
- Combine different choices of (a, b).
- Combine different choices of the precedence relation.
- Use inherent symmetries of the constraint.

More than two dimensions: all of the above generalizes to 3-D, 4-D, … constraints.
