Coded Modulation: An Information-Theoretic Perspective

Young-Han Kim
http://young-han.kim
Department of ECE, UC San Diego

Annual ACC Workshop, Tel Aviv University, January
Information theory of point-to-point communication

M ∈ [2^k] → Encoder → X^n → p(y|x) → Y^n → Decoder → M̂,  with error probability Pe

∙ "Baseband" picture of communication
∙ Tradeoff between R = k/n, Pe = P(M̂ ≠ M), and n
∙ Capacity C: maximum R such that Pe → 0 as n → ∞

[Figure: Pe versus R; Pe can be driven to 0 for R < C but not for R > C]

Channel coding theorem (Shannon 1948)

C = max_{p(x)} I(X; Y)
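As a worked example of the maximization in the theorem (my example, not from the talk), the sketch below sweeps input distributions for a binary symmetric channel with crossover probability 0.1; the maximum is attained at the uniform input, giving C = 1 − H(0.1) ≈ 0.531.

```python
import numpy as np

# C = max_{p(x)} I(X; Y) for a BSC(p): sweep the input distribution p(x).
p = 0.1                                        # crossover probability (assumed)
H2 = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)  # binary entropy

def mutual_information(px):                    # I(X; Y) = H(Y) - H(Y|X)
    py = px * (1 - p) + (1 - px) * p           # P(Y = 1)
    return H2(py) - H2(p)

grid = np.linspace(0.01, 0.99, 99)
rates = [mutual_information(px) for px in grid]
print(grid[int(np.argmax(rates))], max(rates))  # ~0.5 and ~0.531 = 1 - H2(0.1)
```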
Gaussian channel

M ∈ [2^k] → Encoder → X^n → [Y_i = √g X_i + Z_i, Z_i ~ N(0, 1)] → Y^n → Decoder → M̂

∙ Simple model for wireless, wired, and optical communication
∙ Average power constraint ∑_{i=1}^n x_i²(m) ≤ nP
∙ Channel quality measured by SNR = gP

Channel coding theorem (Shannon 1949)

C = (1/2) log(1 + SNR)
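A one-line evaluation of the formula (the 15 dB operating point is an arbitrary example):

```python
import numpy as np

snr_db = 15.0                        # example operating point
snr = 10 ** (snr_db / 10)            # SNR = gP on a linear scale
C = 0.5 * np.log2(1 + snr)           # capacity in bits per (real) channel use
print(f"C = {C:.2f} bits/use")       # ~2.51 bits/use at 15 dB
```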
Capacity of the Gaussian channel (Forney–Ungerboeck 1998)

[Figure: capacity versus SNR]
How to achieve the capacity?

∙ Random coding and joint typicality decoding (Shannon, Forney, Cover)
∙ Find a unique m such that (x^n(m), y^n) is jointly typical w.r.t. p(x, y)
∙ Successful w.h.p. if R < I(X; Y)

[Figure: randomly generated codewords x^n(m) in the space X^n; received sequence y^n in Y^n]
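A minimal random-coding experiment on the Gaussian channel, assuming a toy block length and rate; nearest-neighbor decoding stands in for joint typicality decoding (the two agree for Gaussian codebooks). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, R = 20, 0.4                 # block length and rate (bits/symbol), R < C
P, g = 3.0, 1.0                # power and gain, so C = 0.5*log2(1 + gP) = 1
M = 2 ** round(n * R)          # 256 messages

# Random codebook: x^n(m) i.i.d. N(0, P) (meets the power constraint on average)
codebook = rng.normal(0.0, np.sqrt(P), size=(M, n))

errors, trials = 0, 500
for _ in range(trials):
    m = rng.integers(M)
    y = np.sqrt(g) * codebook[m] + rng.normal(0.0, 1.0, size=n)
    # Nearest-neighbor decoding: pick the codeword closest to y^n
    m_hat = int(np.argmin(np.sum((y - np.sqrt(g) * codebook) ** 2, axis=1)))
    errors += (m_hat != m)
print(f"empirical Pe ≈ {errors / trials:.3f}; it vanishes as n grows with R < C")
```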
The other half of the story

∙ Average performance of randomly generated codes is good
  ▸ Probabilistic method: there exists a good code
  ▸ Most codes are good
  ▸ Works for any alphabet
  ▸ "All codes are good, except those that we know of" (Wozencraft–Reiffen, Forney)
∙ Coding theory
  ▸ Algebraic: Hamming, Reed–Solomon, BCH, Reed–Muller, polar codes
  ▸ Probabilistic: LDPC, turbo, raptor, spatially coupled codes
∙ Coding practice
  ▸ 4G LTE: turbo and convolutional codes
  ▸ 5G NR: LDPC and polar codes
  ▸ Everything is binary
Coded modulation

M ∈ [2^k] → Binary ECC → C^N → Coded modulation → X^n

∙ Communication rate: R = (code rate) × (modulation rate) = (k/N) · (N/n) = k/n
∙ BPSK: X = {−√P, +√P} and N = n
  0 ↦ +√P,  1 ↦ −√P
∙ Sometimes multiple, independent (binary) codewords are modulated together
∙ We decompose coded modulation into two operations (see the sketch below)
  ▸ Symbol-level mapping: X = ϕ(U₁, U₂, ..., U_L), U_l ∈ {±1}
  ▸ Block-level mapping: U_l^n = ψ_l(C^N), l = 1, ..., L
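A toy sketch of the two-stage decomposition (the helper names psi and phi_bpsk mirror ψ and ϕ from the slides; their implementations are my assumptions). For BPSK there is a single layer, L = 1 and N = n:

```python
import numpy as np

P = 1.0

def psi(c, L):                       # block-level map: split C^N into L layers
    return np.reshape(np.asarray(c), (L, -1))

def phi_bpsk(u):                     # symbol-level map: 0 -> +sqrt(P), 1 -> -sqrt(P)
    return np.sqrt(P) * (1 - 2 * u)

c = [0, 1, 1, 0, 1, 0]               # a length-N codeword from the binary ECC
(u1,) = psi(c, L=1)                  # one layer for BPSK
x = phi_bpsk(u1)                     # n = N channel symbols; R = (k/N)(N/n) = k/n
print(x)
```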
Multiple layers and symbol-level mapping

[Figure: 4-PAM constellation with bit labels and the map ϕ: (U₁, U₂) ↦ X]

∙ Natural mapping: X = α(2U₁ + U₂)
∙ Gray mapping: X = α(2U₁ + U₁U₂)
∙ Similar mappings ϕ exist for higher-order PAM, QPSK, QAM, PSK, MIMO, ...
  X_QPSK = √P exp(iπ(2U₁ + U₁U₂)/4)
∙ Can be many-to-one (still information-lossless)
∙ Can induce nonuniform X (Gallager 1968)
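The sketch below tabulates the two 4-PAM maps above, assuming the natural and Gray labelings as written (α normalized so that E[X²] = P for the levels {±α, ±3α}):

```python
import numpy as np
from itertools import product

P = 5.0
alpha = np.sqrt(P / 5.0)             # E[X^2] = 5*alpha^2 for levels {±1, ±3}*alpha

phi_natural = lambda u1, u2: alpha * (2 * u1 + u2)
phi_gray = lambda u1, u2: alpha * (2 * u1 + u1 * u2)

for u1, u2 in product([-1, +1], repeat=2):
    print(f"(U1,U2)=({u1:+d},{u2:+d})  natural {phi_natural(u1,u2):+.0f}  gray {phi_gray(u1,u2):+.0f}")
# Under the Gray map, labels of adjacent constellation points differ in one layer.
```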
Horizontal superposition coding

[Diagram: M₁ ↦ U₁^n and M₂ ↦ U₂^n, combined by ϕ into X^n]

∙ Broadcast channels (Cover 1972), fading channels (Shamai–Steiner)
∙ Successive cancellation decoding (layer rates evaluated numerically below):
  ▸ Find a unique m₁ such that (u₁^n(m₁), y^n) is jointly typical: R₁ < I(U₁; Y)
  ▸ Find a unique m₂ such that (u₁^n(m₁), u₂^n(m₂), y^n) is jointly typical: R₂ < I(U₂; U₁, Y)
  ▸ Combined rate: R₁ + R₂ < I(U₂; Y, U₁) + I(U₁; Y) = I(U₁, U₂; Y) = I(X; Y)
  ▸ Regardless of ϕ or the decoding order
∙ Multi-level coding (MLC): Wachsmann–Fischer–Huber (1999)
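The layer rates can be computed numerically. The sketch below does so for the Gray-mapped 4-PAM of the previous slide over Y = X + Z with Z ~ N(0, 1) (α = 1 and the integration grid are assumptions), and checks that the two layer rates add up to I(X; Y):

```python
import numpy as np

alpha = 1.0
phi = lambda u1, u2: alpha * (2 * u1 + u1 * u2)          # Gray 4-PAM map
y = np.linspace(-12, 12, 6001)
dy = y[1] - y[0]
pdf = lambda x: np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)   # p(y|X=x)
h = lambda p: -np.sum(p * np.log2(np.maximum(p, 1e-300))) * dy     # diff. entropy
h_Z = 0.5 * np.log2(2 * np.pi * np.e)                    # h(Y|U1,U2) = h(Z)

p_y = sum(pdf(phi(u1, u2)) for u1 in (-1, 1) for u2 in (-1, 1)) / 4
h_y_u1 = sum(h((pdf(phi(u1, -1)) + pdf(phi(u1, 1))) / 2) for u1 in (-1, 1)) / 2

I_u1 = h(p_y) - h_y_u1          # R1 < I(U1; Y)
I_u2 = h_y_u1 - h_Z             # R2 < I(U2; U1, Y) = I(U2; Y|U1), since U1 ⊥ U2
I_x = h(p_y) - h_Z              # I(X; Y)
print(f"I(U1;Y)={I_u1:.3f}  I(U2;Y|U1)={I_u2:.3f}  sum={I_u1+I_u2:.3f}  I(X;Y)={I_x:.3f}")
```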
Vertical superposition coding

[Diagram: M ↦ (U₁^n, U₂^n), combined by ϕ into X^n]

∙ Single codeword of length 2n: C^{2n} = (C^n, C^{2n}_{n+1})
  C^n ↦ U₁^n,  C^{2n}_{n+1} ↦ U₂^n
∙ Treating the other layer as noise:
  ▸ Find a unique m such that (u₁^n(m), y^n) is jointly typical and (u₂^n(m), y^n) is jointly typical
  ▸ Successful w.h.p. if R < I(U₁; Y) + I(U₂; Y) < I(U₁, U₂; Y) = I(X; Y)
∙ Bit-interleaved coded modulation (BICM): Caire–Taricco–Biglieri (1998)
Diagonal superposition coding

[Diagram: M(j − 1) and M(j) ↦ (U₁^n, U₂^n), combined by ϕ into X^n]

∙ Think outside the block: a sequence of messages M(j), each mapped to a codeword C^{2n}(j) = (C^n(j), C^{2n}_{n+1}(j))

Staircase schedule (see the sketch below):

Block:   1         2                  3                  4                  ...
U₁:      C^n(1)    C^n(2)             C^n(3)             C^n(4)             ...
U₂:      —         C^{2n}_{n+1}(1)    C^{2n}_{n+1}(2)    C^{2n}_{n+1}(3)    ...

∙ Sliding-window decoding: R < I(U₁; U₂, Y) + I(U₂; Y) = I(X; Y)
∙ Block Markov coding: used extensively in relay and feedback communication
∙ Sliding-window coded modulation (SWCM): Kim et al., Wang et al.
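A sketch of the staircase schedule (the function name and the zero filler in the first and last blocks are my assumptions): codeword j = (c₁, c₂) of length 2n places c₁ on layer U₁ in block j and c₂ on layer U₂ in block j + 1.

```python
def diagonal_schedule(codewords, n):
    """codewords[j] has length 2n; returns per-block (u1, u2) layer contents."""
    blocks = []
    carry = [0] * n                       # block 1 carries a known filler on U2
    for c in codewords:
        c1, c2 = c[:n], c[n:]             # C^n(j) and C^{2n}_{n+1}(j)
        blocks.append((c1, carry))        # U1 <- first half of codeword j
        carry = c2                        # U2 of block j+1 <- second half
    blocks.append(([0] * n, carry))       # extra block flushes the last half
    return blocks

for j, (u1, u2) in enumerate(diagonal_schedule([[1] * 6, [2] * 6, [3] * 6], 3), 1):
    print(f"block {j}: U1={u1}  U2={u2}")
```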
Multiple-antenna transmission

∙ Consider the signal layers U₁ and U₂ as antenna ports: X = (U₁, U₂)
∙ Bell Laboratories Layered Space-Time (BLAST) architectures:
  ▸ Horizontal: H-BLAST (Foschini et al.), also known as V-BLAST
  ▸ Diagonal: D-BLAST (Foschini 1996)
  ▸ Vertical: single outer code (Foschini et al.), but shouldn't this be "V-BLAST"?
∙ Signal layers can be far more general than antenna ports
∙ Coded modulation can encompass MIMO transmission

[Diagram: two transmit and two receive antennas carrying layers U₁, U₂]
Comparison

Horizontal (M₁ ↦ U₁, M₂ ↦ U₂): multi-level coding (MLC)
  R₁ < I(U₁; Y), R₂ < I(U₂; U₁, Y)
  Short, nonuniversal

Vertical (M ↦ (U₁, U₂)): bit-interleaved coded modulation (BICM)
  R < I(U₁; Y) + I(U₂; Y)
  Other layers as noise

Diagonal (M(j) staggered across blocks): sliding-window coded modulation (SWCM)
  R < I(U₁; U₂, Y) + I(U₂; Y) = I(X; Y)
  Error propagation, rate loss
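The three achievable rates can be compared numerically for the same Gray-mapped 4-PAM AWGN setup used earlier (again a sketch; α = 1 and the grid are assumptions). MLC and SWCM both achieve I(X; Y), while the BICM rate is smaller:

```python
import numpy as np

alpha = 1.0
phi = lambda u1, u2: alpha * (2 * u1 + u1 * u2)
y = np.linspace(-12, 12, 6001)
dy = y[1] - y[0]
pdf = lambda x: np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)
h = lambda p: -np.sum(p * np.log2(np.maximum(p, 1e-300))) * dy
h_Z = 0.5 * np.log2(2 * np.pi * np.e)

p_y = sum(pdf(phi(u1, u2)) for u1 in (-1, 1) for u2 in (-1, 1)) / 4
h_y_u1 = sum(h((pdf(phi(u1, -1)) + pdf(phi(u1, 1))) / 2) for u1 in (-1, 1)) / 2
h_y_u2 = sum(h((pdf(phi(-1, u2)) + pdf(phi(1, u2))) / 2) for u2 in (-1, 1)) / 2

I_x = h(p_y) - h_Z                                   # MLC and SWCM: I(X; Y)
bicm = (h(p_y) - h_y_u1) + (h(p_y) - h_y_u2)         # I(U1; Y) + I(U2; Y)
print(f"MLC/SWCM: {I_x:.3f}  BICM: {bicm:.3f}")      # BICM < I(X; Y)
```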
BICM vs. SWCM

[Figure: symmetric rate versus SNR (dB) for 4-PAM, 8-PAM, and 16-PAM; curves for SWCM and BICM]

LTE turbo code, LOG-MAP decoding at b = , n = , BLER = .
Application: Interference channels

[Figure: network of links, with desired signals and interference]
Optimal rate region (Bandemer–El Gamal–Kim)

[Diagram: X₁ → p(y₁|x₁, x₂) → Y₁,  X₂ → p(y₂|x₁, x₂) → Y₂]

At receiver 1 (and symmetrically at receiver 2):

R₁ < I(X₁; Y₁|X₂)
R₁ + R₂ < I(X₁, X₂; Y₁)
or
R₁ < I(X₁; Y₁)

[Figure: the resulting region in the (R₁, R₂) plane]
Low-complexity (implementable) alternatives

[Diagram: interference channel with rate-splitting layers U₁, U₂ at sender 1]

∙ P2P decoding (a Gaussian example follows)
  ▸ Treating interference as (Gaussian) noise: R₁ < I(X₁; Y₁)
  ▸ Successive cancellation decoding: R₂ < I(X₂; Y₁), R₁ < I(X₁; Y₁|X₂)
∙ + rate splitting (Zhao et al., Wang et al.)
∙ Novel codes
  ▸ Spatially coupled codes (Yedla, Nguyen, Pfister, and Narayanan)
  ▸ Polar codes (Wang and Şaşoğlu)
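For the Gaussian case, both P2P strategies reduce to closed-form rates with Gaussian inputs (a sketch; the 20 dB / 10 dB operating point is an arbitrary example):

```python
import numpy as np

C = lambda s: 0.5 * np.log2(1 + s)             # Gaussian point-to-point capacity
snr = 10 ** (20 / 10)                          # desired-signal SNR, 20 dB
inr = 10 ** (10 / 10)                          # interference-to-noise ratio, 10 dB

R1_ian = C(snr / (1 + inr))                    # interference as noise: R1 < I(X1; Y1)
R2_sc = C(inr / (1 + snr))                     # decode X2 first: R2 < I(X2; Y1)
R1_sc = C(snr)                                 # then cancel it: R1 < I(X1; Y1|X2)
print(f"IAN: R1 < {R1_ian:.2f}; SC: R2 < {R2_sc:.2f}, R1 < {R1_sc:.2f}")
```

Note that successive cancellation buys a larger R₁ at the cost of constraining the interferer's rate R₂.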
Sliding-window superposition coding (Wang et al.)

[Diagram: sender 1 staggers M₁(j − 1), M₁(j) over layers U₁^n, U₂^n of X₁^n; sender 2 sends M₂(j) in X₂^n; receiver 1 recovers M₁(j) from Y₁^n and receiver 2 recovers M₂(j) from Y₂^n, over p(y₁|x₁, x₂) and p(y₂|x₁, x₂)]

∙ Sliding-window coded modulation for sender 1 (without alphabet constraints)
∙ Sliding-window decoding
∙ Successive cancellation decoding at each receiver j:
  R₂ < I(X₂; Y_j|U₁)
  R₁ < I(U₁; Y_j) + I(U₂; Y_j|U₁, X₂)
∙ Every corner point: different decoding orders
∙ Every point: time sharing or more superposition layers
∙ Extension to Han–Kobayashi (Wang et al.)
Gaussian channel performance (Park–Kim–Wang)

[Figure: symmetric rate versus INR (dB) for MLD, SWCM, SWCM (turbo), IAN, and IAN (turbo), with SWCM gains of 25%, 96%, and 154% over IAN at various interference levels]

LTE turbo code with b = , n = , BLER = ., SNR = dB
System-level performance (Kim et al.)
Cooper's Law

Source: Arraycomm, Zander–Mähönen

∙ Gain over the past years ∝ η · Wsys · NBS
  ▸ Spectral efficiency η: ×
  ▸ System bandwidth Wsys: ×
  ▸ # of base stations NBS: × (spatial reuse of frequency)
Concluding remarks

∙ Coded modulation as superposition coding
  ▸ Simple and unifying picture
  ▸ Framework for new coded modulation schemes
∙ Open problems
  ▸ Finer analysis: single-shot method (Verdú)
  ▸ Shaping and dependence (à la Marton): CCDM (Böcherer et al.)
∙ To learn more
  ▸ Kramer and Kim, "Network information theory for cellular wireless," in Information Theoretic Perspectives on 5G Systems and Beyond, eds. Shamai, Simeone, and Maric
  ▸ Wang et al., "Sliding-window superposition coding: Two-user interference channels," arXiv
  ▸ Kim et al., "Interference management via sliding-window coded modulation for 5G cellular networks," IEEE Commun. Mag.