Compound channel with feedback: Opportunistic capacity and error exponents
Aditya Mahajan and Sekhar Tatikonda Yale University
March 17, 2010 CISS
Compound channel
Channel model
Q_θ(x | y), θ ∈ Θ
The family Θ is known to the encoder and the decoder; the realized channel θ is not.

Capacity
C = max_{P ∈ 𝒫(𝒴)} inf_{θ ∈ Θ} I(P, Q_θ)

Capacity with feedback
C_F = inf_{θ ∈ Θ} max_{P ∈ 𝒫(𝒴)} I(P, Q_θ)
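The two expressions are easy to evaluate numerically. A small sketch using a hypothetical family of two binary symmetric channels (these crossover probabilities are illustrative, not the ones used later in the talk); for this degraded family the two quantities happen to coincide, and in general max-min ≤ min-max:

```python
from math import log2

def h2(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_info(pi, eps):
    """I(P; Q) for a BSC(eps) with P(input = 1) = pi."""
    out1 = pi * (1 - eps) + (1 - pi) * eps   # P(output = 1)
    return h2(out1) - h2(eps)

# Hypothetical compound family of two BSCs.
family = [0.1, 0.3]
grid = [i / 1000 for i in range(1001)]       # candidate input distributions

# Without feedback: best single input distribution against the worst channel.
C = max(min(mutual_info(pi, e) for e in family) for pi in grid)
# With feedback: the worst of the individual channel capacities.
C_F = min(max(mutual_info(pi, e) for pi in grid) for e in family)

print(round(C, 4), round(C_F, 4))   # here both equal 1 - h2(0.3)
```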
Outline
1. Variable length coding scheme: achievable rate and opportunistic capacity; probability of error and error exponents
2. Literature overview: variable length communication over a DMC and over a compound channel
3. Main result: lower bound on the error exponent region; achievable coding scheme
4. Example
Variable length coding scheme
Assume Θ = {1, …, M}. A variable length coding scheme is a tuple (𝐍, c, g, τ):
Compound message sizes: 𝐍 = (N_1, …, N_M). Let 𝒲 = ∏_{ℓ=1}^{M} 𝒲_ℓ, where 𝒲_ℓ = {1, …, N_ℓ}.
Encoding strategy: c = (c_1, c_2, …), where c_t maps the compound message and the past channel outputs (available through feedback) to a channel input.
Decoding strategy: g = (g_1, g_2, …), where g_t : X^t ↦ ⋃_{ℓ=1}^{M} {(ℓ, 1), (ℓ, 2), …, (ℓ, N_ℓ)}.
Stopping time τ with respect to the channel output process.

Compound message: 𝐖 = (W_1, …, W_M).
Encoder: Y_1 = c_1(𝐖), Y_2 = c_2(𝐖, X_1), …
Decoder: (θ̂, Ŵ) = g_τ(X_1, …, X_τ).
Probability of error: 𝐩 = (p_1, …, p_M), where p_ℓ = Pr((θ̂, Ŵ) ≠ (ℓ, W_ℓ) | θ = ℓ).
Rate: 𝐒 = (R_1, …, R_M), where R_ℓ = 𝔼_ℓ[log N_{θ̂}] / 𝔼_ℓ[τ].
Variable length communication using (𝐍, c, g, τ)
[Figure: Encoder → Channel → Decoder, with feedback from the channel output to the encoder.]
A higher level application generates an infinite bit-stream. Let N_max = max{N_1, …, N_M} and N_min = min{N_1, …, N_M}.
Encoding: the transmitter picks log N_max bits from the bit-stream.
Decoding: at the stopping time τ, the receiver passes (θ̂, Ŵ) to the higher layer application. The transmitter removes log N_{θ̂} bits from the log N_max initially chosen bits and returns the remaining bits to the bit-stream.
Advantage of being opportunistic: log N_{θ̂} − log N_min additional bits are delivered compared with a scheme designed for the worst channel.
Achievable Rate
A rate vector 𝐒 = (R_1, …, R_M) is achievable if there exists a sequence of coding schemes, indexed by n, such that for every ζ > 0, all sufficiently large n, and all ℓ = 1, …, M:
1. p_ℓ⁽ⁿ⁾ < ζ, and
2. R_ℓ⁽ⁿ⁾ ≥ R_ℓ − ζ.
The union of all achievable rates is called the opportunistic capacity region 𝒞_O.
Error exponent
Given a sequence of coding schemes that achieves a rate vector 𝐒, the error exponent vector 𝐄 = (E_1, …, E_M) is given by
E_ℓ = lim_{n→∞} (−log p_ℓ⁽ⁿ⁾) / 𝔼_ℓ[τ⁽ⁿ⁾]
For a particular rate 𝐒, the union of all possible error exponents is called the error exponent region ℰ(𝐒).
Literature Overview
Variable length communication over a DMC
A DMC is the special case of a compound channel with |Θ| = 1.
Burnashev-76, "Data transmission over a discrete channel with feedback: Random transmission time".
Burnashev exponent: E_B(R) = B(1 − R/C), where B = max_{y, y′ ∈ 𝒴} D(Q(·|y) ∥ Q(·|y′)).

Achievability scheme
Yamamoto-Itoh-79, "Asymptotic performance of a modified Schalkwijk-Barron scheme with noiseless feedback":
Message mode: fixed length code at rate C − ζ and length γn.
Control mode: send ACCEPT or REJECT for length (1 − γ)n.
Repeat until ACCEPT is received.

[Figure: error exponent vs. rate for a BSC with C = 0.53 bits; Burnashev's exponent is 2.536 at zero rate.]
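The figure's numbers can be reproduced; a sketch assuming the BSC in the figure has crossover probability 0.1 (that value is an inference, but it matches both C ≈ 0.53 bits and B ≈ 2.536):

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def kl_bern(a, b):
    """D(Bern(a) || Bern(b)) in bits."""
    return a * log2(a / b) + (1 - a) * log2((1 - a) / (1 - b))

p = 0.1                   # assumed crossover probability
C = 1 - h2(p)             # capacity of the BSC
B = kl_bern(p, 1 - p)     # Burnashev coefficient: divergence between the two rows

print(round(C, 2), round(B, 3))   # 0.53 2.536
```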
Variable length communication over a compound channel
Tchamkerten-Telatar-06, "Variable length coding over unknown channel".
Can we achieve the Burnashev exponent even if we do not know the channel?
Negative result: restricting attention to rate vectors with R_ℓ/C_ℓ constant, the answer is yes under some restricted conditions, but no in general.
[Figure: error exponent vs. rate, showing the Burnashev exponent, the zero-rate error exponent, the combined upper bound, and the exponent of the proposed scheme.]
Open questions:
What are the error exponents when the conditions of Tchamkerten-Telatar-06 are not met?
Which coding schemes achieve the best exponent?
What about rates where R_ℓ/C_ℓ is not a constant?
Main Result
Opportunistic Capacity
𝒞_O = {(R_1, …, R_M) : R_ℓ < C_ℓ, ℓ = 1, …, M}, where C_ℓ is the capacity of DMC Q_ℓ.

Error Exponent Region
Let U_ℓ be the exponent of the channel estimation error when the channel is Q_ℓ. For any channel estimation scheme, (U_1, …, U_M) ∈ 𝕌*.
At rate 𝐒 = (R_1, …, R_M), the error exponent satisfies
E_ℓ ≥ (T_ℓ · U_ℓ)/(U_ℓ + T_ℓ) · (1 − R_ℓ/C_ℓ)
where T_ℓ = max_{y_B, y_S ∈ 𝒴} D(Q_ℓ(·|y_B) ∥ Q_ℓ(·|y_S)).
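The bound is easy to evaluate; a sketch with a hypothetical helper (the numerical values are illustrative, borrowed from the BSC(0.1) figure earlier, with an arbitrary estimation exponent U):

```python
def exponent_lower_bound(T, U, R, C):
    """E >= T*U/(U+T) * (1 - R/C): the harmonic mean of the Burnashev
    coefficient T and the estimation exponent U, scaled by 1 - R/C."""
    return (T * U) / (U + T) * (1 - R / C)

# Illustrative numbers: T = 2.536 bits, C = 0.53 bits, rate at half capacity.
print(exponent_lower_bound(T=2.536, U=1.0, R=0.265, C=0.53))   # about 0.359
```

Note the shape of the bound: it increases with U and, as U → ∞ (a perfect channel estimator), it approaches the Burnashev exponent T(1 − R/C) from below.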
Achievable coding scheme
[Figure: epoch timeline with training, message, re-training, and control phases.]
Communicate in a variable number of epochs. Each epoch is variable length and consists of four phases:
1. Training phase: generate channel estimate θ̂.
2. Message phase of length γ(θ̂)n: transmit with a fixed length code, assuming that the channel is θ̂.
3. Re-training phase: generate channel estimate θ̂_d.
4. Control phase: transmit ACCEPT or REJECT assuming that the channel is θ̂_d.

Number of epochs
The number of epochs K is geometric: Pr_ℓ(K = k) = β_ℓ(1 − β_ℓ)^{k−1}, where β_ℓ is the probability that an epoch ends in ACCEPT. Since lim_{n→∞} β_ℓ = 1, the expected number of epochs ≈ 1.
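The geometric epoch count can be checked with a quick simulation; each epoch independently ends in ACCEPT with some probability β (the value below is hypothetical), so the expected number of epochs is 1/β, which tends to 1 as β → 1:

```python
import random

def expected_epochs(beta, trials=200_000, seed=0):
    """Monte-Carlo estimate of E[K] when each epoch independently
    ends in ACCEPT with probability beta (K is then geometric)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        k = 1
        while rng.random() >= beta:   # REJECT: start another epoch
            k += 1
        total += k
    return total / trials

print(expected_epochs(0.95))   # close to 1/0.95 ≈ 1.053
```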
Rate of transmission
R_ℓ = lim_{n→∞} 𝔼_ℓ[log(# messages)] / (𝔼_ℓ[# epochs] · 𝔼_ℓ[epoch length])
𝔼_ℓ[log(# messages)]: the channel estimate is wrong only with exponentially small probability, so 𝔼_ℓ[log N_{θ̂}] ≈ log N_ℓ ≈ γ_ℓ n R_ℓ.
𝔼_ℓ[# epochs] ≈ 1.
𝔼_ℓ[epoch length] = 𝔼_ℓ[training + γ(θ̂)n + re-training + control] ≈ γ_ℓ n.
Hence the transmission rate is ≈ R_ℓ when the channel is Q_ℓ.
Error exponent
E_ℓ = lim_{n→∞} (−log p_ℓ) / (𝔼_ℓ[# epochs] · 𝔼_ℓ[epoch length])
𝔼_ℓ[# epochs] ≈ 1 and 𝔼_ℓ[epoch length] ≈ γ_ℓ n.
An error requires either a channel estimation error (exponent U_ℓ over the training length n_t) or a control phase error (exponent T_ℓ over the control length n_c):
p_ℓ ≤ (e^{−n_t U_ℓ} + e^{−n_c T_ℓ}) · 𝔼_ℓ[# epochs]
Since −log(e^{−a} + e^{−b}) ≈ min(a, b), choosing n_t and n_c to balance the two exponents yields
E_ℓ ≥ (T_ℓ · U_ℓ)/(U_ℓ + T_ℓ) · (1 − R_ℓ/C_ℓ)
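The harmonic-mean factor is exactly what balancing the two error events gives. A sketch, with α denoting the fraction of the overhead time spent on training (and 1 − α on the control phase):

```latex
\max_{\alpha \in [0,1]} \; \min\bigl\{ \alpha U_\ell,\; (1-\alpha) T_\ell \bigr\}
   = \frac{U_\ell T_\ell}{U_\ell + T_\ell},
   \qquad \text{attained at } \alpha^* = \frac{T_\ell}{U_\ell + T_\ell}.
```

The first term increases and the second decreases in α, so the max-min sits at the crossing point α U_ℓ = (1 − α) T_ℓ.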
Example
Θ = {BSC(q), BSC(1−q)}; the family is known at the encoder and the decoder.
Capacity: C_q = C_{1−q} = 1 − h(q).
Slope of Burnashev exponent: T_q = T_{1−q} = D(q ∥ 1−q).
Channel estimation rule: transmit the all zero sequence as the training sequence; if the fraction of ones in the received sequence exceeds a threshold t, estimate BSC(1−q), otherwise estimate BSC(q).
Exponent of training error: U_q = D(t ∥ q), U_{1−q} = D(t ∥ 1−q).
Communication at rate 𝐒 = (R_q, R_{1−q}). Let s_ℓ = R_ℓ/C_ℓ.

Error exponents
E_q ≥ (D(t ∥ q) · D(q ∥ 1−q)) / (D(t ∥ q) + D(q ∥ 1−q)) · (1 − s_q)
E_{1−q} ≥ (D(t ∥ 1−q) · D(1−q ∥ q)) / (D(t ∥ 1−q) + D(1−q ∥ q)) · (1 − s_{1−q})

Optimal threshold
Choose t such that E_q = E_{1−q}: solve for t in φ(t, q) = (1 − s_q)/(1 − s_{1−q}), where φ(t, q) is appropriately defined.
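The equalizing threshold can be found by bisection, since E_q grows and E_{1−q} shrinks as t increases. A sketch using the harmonic-mean bound from the previous slides (the rate fractions below reproduce the first tabulated row, 0.5861 and 0.3666):

```python
from math import log2

def kl(a, b):
    """D(Bern(a) || Bern(b)) in bits."""
    return a * log2(a / b) + (1 - a) * log2((1 - a) / (1 - b))

def exponent(t, true_p, other_p, s):
    """Harmonic-mean bound with U = D(t || true_p), T = D(true_p || other_p)."""
    U, T = kl(t, true_p), kl(true_p, other_p)
    return (U * T) / (U + T) * (1 - s)

q, s_q, s_1q = 0.1, 0.5, 0.1
lo, hi = q + 1e-9, 1 - q - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    # E_q is increasing in t and E_{1-q} decreasing, so bisect on their difference.
    if exponent(mid, q, 1 - q, s_q) < exponent(mid, 1 - q, q, s_1q):
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2
print(round(t_star, 4), round(exponent(t_star, q, 1 - q, s_q), 4))
```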
[Figures: the exponent E and the fractions γ_q, γ_{1−q} as functions of q; φ(q, 0.1) as a function of q on a logarithmic scale.]
For q = 0.1, the optimal threshold t and the resulting common exponent E_q = E_{1−q}:

s_q    s_{1−q}    t        E
0.5    0.1        0.5861   0.3666
0.5    0.2        0.5695   0.3511
0.5    0.3        0.5501   0.3329
0.5    0.4        0.5273   0.3114
0.5    0.5        0.5000   0.2855
0.5    0.6        0.4666   0.2537
0.5    0.7        0.4247   0.2139
0.5    0.8        0.3698   0.1628
0.5    0.9        0.2918   0.0952
Contributions
Defined the opportunistic capacity and corresponding error exponent regions for compound channels with feedback.
A simple and easy to implement coding scheme whose error exponents are within a multiplicative factor of the best possible error exponents.
In the presence of feedback, training based schemes can lead to reasonable performance.
Future directions
Channels defined over continuous families and continuous alphabets.
Upper bounds on the error exponents.