Stability and Sensitivity of the Capacity in Continuous Channels




  1. Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan, Univ. Lyon, INSA Lyon, INRIA. 2019 European School of Information Theory, April 18, 2019.

  2. Capacity of Additive Noise Models. Consider the (memoryless, stationary, scalar) additive noise channel Y = X + N, where the noise N is a random variable on (ℝ, B(ℝ)) with probability density function p_N. The capacity is defined by
     C = sup_{μ_X ∈ 𝒫} I(μ_X, P_{Y|X}) subject to μ_X ∈ Λ.
     Key Question: What is the capacity for general constraints and non-Gaussian noise distributions?

  3. Non-Gaussian Noise Models. In many applications, the noise is non-Gaussian. Example 1: Poisson Spatial Fields of Interferers. The noise in this model is the interference
     Z = Σ_{i ∈ Φ} r_i^{−η/2} h_i X_i.

  4. Non-Gaussian Noise Models. Suppose that (i) Φ is a homogeneous Poisson point process; (ii) (h_i) and (X_i) are processes with independent elements; (iii) E[|h_i X_i|^{4/η}] < ∞. Then the interference Z converges almost surely to a symmetric α-stable random variable.
     [Figure: symmetric α-stable densities for α = 1.1, 1.5, and 2.]
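
A minimal Monte Carlo sketch (not from the slides) can make this limit tangible. The interferer density, disc radius, fading law, and symbol alphabet below are all illustrative assumptions; heavy empirical tails are the signature of the α-stable limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def interference_sample(lam=1.0, radius=20.0, eta=4.0):
    """One realization of Z = sum_{i in Phi} r_i^(-eta/2) h_i X_i over a PPP on a disc."""
    n = rng.poisson(lam * np.pi * radius**2)       # Poisson number of interferers
    r = radius * np.sqrt(rng.uniform(size=n))      # radii of points uniform on the disc
    h = rng.normal(size=n)                         # fading coefficients (assumed Gaussian)
    x = rng.choice([-1.0, 1.0], size=n)            # transmitted symbols (assumed BPSK)
    return np.sum(r ** (-eta / 2) * h * x)

z = np.array([interference_sample() for _ in range(5000)])
# Slowly decaying tail probabilities indicate the heavy-tailed stable behavior:
for t in [1.0, 2.0, 5.0, 10.0]:
    print(f"P(|Z| > {t}) ~ {np.mean(np.abs(z) > t):.4f}")
```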

  5. Non-Gaussian Noise Models. Example 2: Molecular Timing Channel. In the channel Y = X + N, the input X corresponds to the time of release.

  6. Non-Gaussian Noise Models. In the channel Y = X + N, the noise N corresponds to the diffusion time from the transmitter to the receiver. Under Brownian motion models of diffusion, the noise distribution is inverse Gaussian or Lévy stable:
     p_N(x) = √(λ / (2π x³)) exp( −λ(x − μ)² / (2 μ² x) ).
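
A quick numerical sanity check of this density, with illustrative (assumed) values for the mean μ and shape λ; the helper below is sketch code, cross-checked against scipy's parameterization of the inverse Gaussian.

```python
import numpy as np
from scipy import stats

def invgauss_pdf(x, mu, lam):
    """Inverse Gaussian density: sqrt(lam/(2*pi*x^3)) * exp(-lam*(x-mu)^2 / (2*mu^2*x))."""
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

mu, lam = 1.5, 2.0                      # illustrative mean / shape parameters (assumed)
x = np.linspace(0.1, 6.0, 5)
print(invgauss_pdf(x, mu, lam))
# scipy encodes IG(mean mu, shape lam) as invgauss(mu/lam, scale=lam):
print(stats.invgauss.pdf(x, mu / lam, scale=lam))
```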

  7. Capacity of Non-Gaussian Noise Models. The capacity is defined by
     C = sup_{μ_X ∈ 𝒫} I(μ_X, P_{Y|X}) subject to μ_X ∈ Λ.
     The noise is in general non-Gaussian. Question: What is the constraint set Λ?

  8. Constraint Sets. A familiar constraint in wireless communications is
     Λ_P = {μ_X ∈ 𝒫 : E_{μ_X}[X²] ≤ P},
     corresponding to an average power constraint. Other constraints appear in applications. For example,
     Λ_c = {μ_X ∈ 𝒫 : E_{μ_X}[|X|^r] ≤ c}, where 0 < r < 2,
     corresponds to a fractional moment constraint (useful in the study of α-stable noise channels). In the molecular timing channel, the relevant constraint is
     Λ_T = {μ_X ∈ 𝒫 : E_{μ_X}[X] ≤ T, P_{μ_X}(X < 0) = 0}.

  9. Capacity of Non-Gaussian Noise Channels. The capacity is defined by
     C = sup_{μ_X ∈ 𝒫} I(μ_X, P_{Y|X}) subject to μ_X ∈ Λ.
     Since the channel is additive,
     I(μ_X, P_{Y|X}) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} p_N(y − x) log[ p_N(y − x) / p_Y(y) ] dy dμ_X(x).
     There are two basic questions: (i) What is the value of the capacity C? (ii) What is the optimal solution μ_X^*?
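
For a discrete input μ_X = Σ_k p_k δ_{x_k}, the outer integral collapses to a sum and p_Y(y) = Σ_k p_k p_N(y − x_k), so the mutual information reduces to one-dimensional quadrature. A sketch under assumed parameters (two-point input, Gaussian noise with σ = 0.8):

```python
import numpy as np
from scipy import integrate, stats

xs = np.array([-1.0, 1.0])              # input mass points (assumed)
ps = np.array([0.5, 0.5])               # input probabilities (assumed)
p_noise = stats.norm(scale=0.8).pdf     # noise density p_N (assumed sigma = 0.8)

def p_out(y):
    """Output density p_Y(y) = sum_k p_k * p_N(y - x_k)."""
    return sum(p * p_noise(y - x) for p, x in zip(ps, xs))

def integrand(y, x):
    pn = p_noise(y - x)
    return pn * np.log(pn / p_out(y))

# I(mu_X, P_{Y|X}) = sum_k p_k * int p_N(y - x_k) log(p_N(y - x_k)/p_Y(y)) dy  (in nats)
mi = sum(p * integrate.quad(integrand, -10.0, 10.0, args=(x,))[0] for p, x in zip(ps, xs))
print(f"I(mu_X, P_Y|X) ~ {mi:.4f} nats")
```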

  10. Topologies on Sets of Probability Measures. Point-set topology plays an important role in optimization theory. For example, it allows us to determine whether or not the optimum can be achieved (i.e., whether the sup becomes a max).

  11. Topologies on Sets of Probability Measures. In applications, we usually optimize over ℝ^n, which carries the standard topology induced by Euclidean metric balls. In the capacity problem, we instead optimize over subsets of the set of probability measures 𝒫. Question: What is a useful topology on the set of probability measures?

  12. Topologies on Sets of Probability Measures. A useful choice is the topology of weak convergence. Here a set S is closed if, for every sequence of probability measures (μ_i) ⊂ S converging to a limit μ, the limit satisfies μ ∈ S, where convergence means
     lim_{i→∞} ∫_{−∞}^{∞} f(x) dμ_i(x) = ∫_{−∞}^{∞} f(x) dμ(x)
     for all bounded and continuous functions f. It turns out that the topology of weak convergence on probability measures is metrizable: there exists a metric d on 𝒫 (the Lévy-Prokhorov metric) whose metric balls generate the topology of weak convergence.
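
A small numerical illustration of this definition, using the assumed sequence μ_i = N(1/i, 1 + 1/i) → N(0, 1) and the bounded continuous test function f(x) = cos(x):

```python
import numpy as np
from scipy import integrate, stats

# mu_i = N(1/i, 1 + 1/i) converges weakly to mu = N(0, 1); the integral of the
# bounded continuous f against mu_i must converge to the integral against mu.
f = np.cos
limit = integrate.quad(lambda t: f(t) * stats.norm.pdf(t), -np.inf, np.inf)[0]
for i in [1, 10, 100, 1000]:
    mu_i = stats.norm(loc=1.0 / i, scale=np.sqrt(1.0 + 1.0 / i))
    val = integrate.quad(lambda t: f(t) * mu_i.pdf(t), -np.inf, np.inf)[0]
    print(f"i = {i:4d}: integral = {val:.6f}   (limit = {limit:.6f})")
```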

  13. Topologies on Sets of Probability Measures. In addition, Prokhorov's theorem gives a characterization of compactness. Prokhorov's Theorem: If a subset Λ ⊂ 𝒫 of probability measures is tight and closed, then Λ is compact in the topology of weak convergence. A set of probability measures Λ is tight if for all ε > 0 there exists a compact set K_ε ⊂ ℝ such that
     μ(K_ε) ≥ 1 − ε, ∀ μ ∈ Λ.

  14. Existence of the Optimal Input. The capacity is defined by
     C = sup_{μ_X ∈ 𝒫} I(μ_X, P_{Y|X}) subject to μ_X ∈ Λ.
     Question: Does the capacity-achieving input exist? This is answered by the extreme value theorem. Extreme Value Theorem: If Λ is weakly compact and I(μ_X, P_{Y|X}) is weakly continuous on Λ, then μ_X^* exists.

  15. Support of the Optimal Input. Question: When is the optimal input discrete and compactly supported? The initial results on this question are due to Smith [Smith1971]. Theorem: Under amplitude and average power constraints, the optimal input for the Gaussian noise channel is discrete and compactly supported.

  16. Support of the Optimal Input. More generally, the support of the optimal input can be studied via the KKT conditions. Let M be a convex and compact set of channel input distributions. Then μ_X^* ∈ M achieves the capacity if and only if, for all μ_X ∈ M,
     E_{μ_X}[ log( dP_{Y|X}(Y|X) / dP_Y(Y) ) ] ≤ I(μ_X^*, P_{Y|X}).
     Equality holds at points of increase ⇒ constraints on optimal inputs. There has been significant recent progress; see, e.g., [Fahs2018, Dytso2019].
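
The left-hand side, i(x) = ∫ p_N(y − x) log(p_N(y − x)/p_Y(y)) dy, can be evaluated numerically. In the sketch below the mass points, probabilities, and noise scale are assumptions; at this low amplitude-to-noise ratio, Smith's result suggests the equiprobable two-point input is optimal under the amplitude constraint |X| ≤ 1, so i(x) should stay below I on [−1, 1] with equality at the mass points ±1.

```python
import numpy as np
from scipy import integrate, stats

# KKT sketch: compare i(x) with I(mu_X, P_{Y|X}) for an assumed two-point input.
xs, ps = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
p_noise = stats.norm(scale=0.8).pdf     # assumed Gaussian noise, sigma = 0.8

def p_out(y):
    return sum(p * p_noise(y - x) for p, x in zip(ps, xs))

def i_of(x):
    g = lambda y: p_noise(y - x) * np.log(p_noise(y - x) / p_out(y))
    return integrate.quad(g, -12.0, 12.0)[0]

mi = sum(p * i_of(x) for p, x in zip(ps, xs))
for x in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(f"i({x:+.1f}) = {i_of(x):.4f}   vs   I = {mi:.4f}")
```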

  17. Characterizing the Capacity. In general, it is hard to compute the capacity in closed form. Exceptions are Gaussian and Cauchy noise channels under various constraints. Theorem [Lapidoth and Moser]: Let the input alphabet 𝒳 and the output alphabet 𝒴 of a channel W(·|·) be separable metric spaces, and assume that for any Borel subset B ⊂ 𝒴 the mapping x ↦ W(B|x) from 𝒳 to [0, 1] is Borel measurable. Let Q(·) be any probability measure on 𝒳 and R(·) any probability measure on 𝒴. Then the mutual information I(Q; W) can be bounded by
     I(Q; W) ≤ ∫ D( W(·|x) ‖ R(·) ) dQ(x).

  18. A Change in Perspective. New Perspective: the capacity is a map (p_N, Λ) ↦ C. Definition: Let K = (p_N, Λ) and K̂ = (p̂_N, Λ̂) be two tuples of channel parameters. The capacity sensitivity due to a perturbation from channel K to channel K̂ is defined as
     Δ_{K→K̂} = |C(K) − C(K̂)|.
     Egan, M., Perlaza, S. M., and Kungurtsev, V., "Capacity sensitivity in additive non-Gaussian noise channels," Proc. IEEE International Symposium on Information Theory, Aachen, Germany, Jun. 2017.
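
The AWGN channel gives a rare closed form against which the definition can be exercised. The power and variance values below are assumptions; here K̂ perturbs only the noise variance.

```python
import numpy as np

# AWGN with noise N(0, s2) and average power constraint P: C = 0.5*log(1 + P/s2) nats.
def awgn_capacity(P, s2):
    return 0.5 * np.log(1.0 + P / s2)

P, s2 = 1.0, 1.0                        # assumed power constraint and noise variance
for eps in [0.1, 0.01, 0.001]:
    delta = abs(awgn_capacity(P, s2) - awgn_capacity(P, s2 + eps))
    print(f"eps = {eps:6.3f}: capacity sensitivity = {delta:.6f} nats")
```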

  19. A Strategy. ◮ Consider a differentiable function f: ℝ^n → ℝ, which admits the first-order Taylor expansion
     f(x + ‖e‖ẽ) = f(x) + ‖e‖ D_ẽ f(x) + o(‖e‖),
     where ẽ is unit norm and D_ẽ f(x) = ∇f(x)^T ẽ is the directional derivative. ◮ This yields the sensitivity bound
     |f(x + ‖e‖ẽ) − f(x)| ≤ |D_ẽ f(x)| ‖e‖ + o(‖e‖).
     Question: What is the directional derivative of the optimal value function of an optimization problem (e.g., the capacity)? (See the finite-difference sketch below.)
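
A finite-difference sanity check of this first-order bound on an assumed toy function; nothing here is specific to the capacity problem.

```python
import numpy as np

# Check |f(x + t*e) - f(x)| ~ |D_e f(x)| * t for small t and a unit-norm direction e.
f = lambda x: np.sin(x[0]) + x[1] ** 2                 # assumed toy function
grad = lambda x: np.array([np.cos(x[0]), 2.0 * x[1]])  # its gradient

x = np.array([0.3, -0.7])
e = np.array([1.0, 1.0]) / np.sqrt(2.0)                # unit-norm direction
d = grad(x) @ e                                        # directional derivative D_e f(x)
for t in [1e-1, 1e-2, 1e-3]:
    lhs = abs(f(x + t * e) - f(x))
    print(f"t = {t:.0e}: |f(x+te) - f(x)| = {lhs:.6f}, |D_e f(x)|*t = {abs(d) * t:.6f}")
```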

  20. A Strategy. ◮ For smooth optimization problems over ℝ^n there is a good theory. ◮ E.g., envelope theorems. Proposition: Let the real-valued function f(x, y): ℝ^n × ℝ → ℝ be twice differentiable on a compact convex subset X of ℝ^{n+1} and strictly concave in x. Let x^* be the maximizer of f(·, y) on X and denote ψ(y) = f(x^*, y). Then the derivative of ψ(y) exists and is given by ψ'(y) = f_y(x^*, y).

  21. A Strategy. A sketch of the proof: 1. Use the implicit function theorem to write ψ(y) = f(x^*(y), y). 2. Observe that
     ψ'(y) = f_y(x^*(y), y) + (∇_x f(x^*(y), y))^T dx^*(y)/dy = f_y(x^*(y), y),
     since ∇_x f(x^*(y), y) = 0 at the maximizer. Generalizations of this result are due to Danskin and Gol'shtein.
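
A numerical check of the proposition on an assumed toy objective f(x, y) = −x² + xy, which is strictly concave in x: here x*(y) = y/2, ψ(y) = y²/4, and the envelope formula predicts ψ'(y) = f_y(x*, y) = x*(y).

```python
import numpy as np
from scipy import optimize

def psi(y):
    """Optimal value and maximizer of f(x, y) = -x^2 + x*y over x."""
    res = optimize.minimize_scalar(lambda x: x**2 - x * y)   # minimize -f(x, y)
    return -res.fun, res.x

y, h = 1.3, 1e-5
(v_plus, _), (v_minus, _), (_, x_star) = psi(y + h), psi(y - h), psi(y)
print(f"finite-difference psi'(y)  = {(v_plus - v_minus) / (2 * h):.6f}")
print(f"envelope formula f_y(x*,y) = x* = {x_star:.6f}")
```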

  22. A Strategy. Recall:
     C(Λ, p_N) = sup_{μ_X ∈ Λ} I(μ_X, p_N).
     Question: What is the effect of ◮ constraint perturbations: C(Λ) (fix p_N)? ◮ noise distribution perturbations: C(p_N) (fix Λ)?

  23. Constraint Perturbations. Common Question: What is the effect of power on the capacity? Another Formulation: What is the effect of changing the set of probability measures Λ_2 = {μ_X : E_{μ_X}[X²] ≤ P}? Natural Generalization: What is the effect of changing Λ on
     C(Λ) = sup_{μ_X ∈ 𝒫} I(μ_X, P_{Y|X}) subject to μ_X ∈ Λ?

  24. Constraint Perturbations. Question: Do small changes in the constraint set lead to small changes in the capacity? To answer this question, we need to formalize what a small change means. Key Idea: View the constraint set as a point-to-set map. Example: The power constraint Λ_2(P) = {μ_X : E_{μ_X}[X²] ≤ P} is a map from ℝ to compact sets of probability measures,
     Λ_2 : ℝ ⇉ 𝒫.
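
Continuing the AWGN example (unit noise variance assumed), the effect of perturbing the argument P of the point-to-set map Λ_2(P) can be quantified directly: here C(P) = 0.5·log(1 + P) nats, so small constraint changes give proportionally small capacity changes with slope dC/dP = 1/(2(1 + P)).

```python
import numpy as np

C = lambda P: 0.5 * np.log(1.0 + P)     # AWGN capacity, unit noise variance (assumed)
P = 2.0                                 # assumed operating point
for dP in [0.5, 0.05, 0.005]:
    actual = abs(C(P + dP) - C(P))
    first_order = dP / (2.0 * (1.0 + P))
    print(f"dP = {dP:5.3f}: |C(P+dP) - C(P)| = {actual:.6f}, first-order = {first_order:.6f}")
```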
