SLIDE 1

Modulated Sparse Regression Codes

Kuan Hsieh and Ramji Venkataramanan

University of Cambridge, UK
ISIT, June 2020

SLIDES 2-4

Complex AWGN channel communication

m data bits → Encoder → x_1, ..., x_n → (+) → y_1, ..., y_n → Decoder → estimated data bits

The channel adds noise w_1, ..., w_n, i.i.d. ∼ CN(0, σ²):
$$y = x + w$$

Rate: $R = m/n$.
Power constraint: $\frac{1}{n}\sum_{i=1}^{n} |x_i|^2 \le P$.
Channel capacity: $C = \log\left(1 + \frac{P}{\sigma^2}\right)$.
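As a quick numerical companion, a minimal numpy sketch of the channel model and capacity formula (the values P = 1 and σ² = 0.25 are illustrative, not from the talk):

```python
import numpy as np

def awgn_capacity(P, sigma2):
    """Capacity of the complex AWGN channel, C = log2(1 + P / sigma2) bits."""
    return np.log2(1 + P / sigma2)

# One block of the channel y = x + w with w_i i.i.d. CN(0, sigma2).
rng = np.random.default_rng(0)
n, P, sigma2 = 1000, 1.0, 0.25
# i.i.d. Gaussian inputs meeting the power constraint on average.
x = np.sqrt(P / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = x + w                                 # received symbols
print(awgn_capacity(P, sigma2))           # ~2.32 bits per channel use
```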

SLIDES 5-6

Sparse regression codes (SPARCs) [Joseph and Barron '12]

Codeword: $x = A\beta$, with $x = [x_1, ..., x_n]^\top$.

  • Design matrix A: independent Gaussian entries.
  • Message vector β: sparse; encodes the data bits.

Encoding: map the data bits to the sparse message vector β and transmit x = Aβ.
Decoding: estimate β given $y = A\beta + w$.

SLIDES 7-8

SPARC encoding

$x = A\beta$

The message vector β is divided into L sections (Section 1, ..., Section L), each with M entries, and A has n rows (so A is n × LM). Each section of β contains exactly one nonzero entry:

β = [0, 1, 0, ... | 1, 0, 0, ... | ... | 0, 1, 0, ...]^⊤

The log₂ M bits assigned to a section determine the location of its nonzero entry.

Rate: $R = \frac{L \log M}{n}$.
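A minimal sketch of this encoding (the sizes and the bit-to-location convention are illustrative choices, not the talk's):

```python
import numpy as np

def sparc_encode(bits, L, M, A):
    """SPARC encoding: each section of log2(M) bits picks the location of
    the single nonzero (value 1) entry in its section of beta."""
    logM = int(np.log2(M))
    beta = np.zeros(L * M)
    for l in range(L):
        section = bits[l * logM:(l + 1) * logM]
        idx = int("".join(map(str, section)), 2)   # bits -> location in section
        beta[l * M + idx] = 1.0
    return A @ beta, beta

rng = np.random.default_rng(0)
L, M, n = 4, 8, 16                                  # illustrative sizes
A = rng.standard_normal((n, L * M)) / np.sqrt(n)    # independent Gaussian entries
bits = rng.integers(0, 2, size=L * int(np.log2(M)))
x, beta = sparc_encode(bits, L, M, A)
```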

SLIDE 9

SPARC decoding

Estimate β given $y = A\beta + w$, where β has L sections of M entries each and A has n rows.

Section error rate (SER):
$$\mathrm{SER} = \frac{1}{L} \sum_{\ell=1}^{L} \mathbb{1}\{\hat{\beta}_\ell \ne \beta_\ell\}$$
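The SER can be computed directly from the estimated and true message vectors; a small helper sketch:

```python
import numpy as np

def section_error_rate(beta_hat, beta, L, M):
    """SER = (1/L) * sum over sections of 1{estimated section != true section}.
    A section counts as an error if the estimated location of its
    nonzero entry differs from the true one."""
    errors = 0
    for l in range(L):
        sec = slice(l * M, (l + 1) * M)
        if np.argmax(np.abs(beta_hat[sec])) != np.argmax(np.abs(beta[sec])):
            errors += 1
    return errors / L
```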

SLIDES 10-11

Previous results on (unmodulated) SPARCs

Maximum likelihood decoding [Joseph and Barron '12]

Matrix designs + efficient decoding:

  • Power allocation:
    - Adaptive successive hard-thresholding [Joseph and Barron '14]
    - Adaptive successive soft-thresholding [Cho and Barron '13]
    - Approximate message passing [Barbier and Krzakala '17] [Rush, Greig and Venkataramanan '17]
  • Spatial coupling:
    - Approximate message passing [Barbier et al. '14-'19] [Rush, Hsieh and Venkataramanan '18, '19, '20]

SLIDES 12-13

Spatial coupling
[Felstrom and Zigangirov '99] [Kudekar and Pfister '10] [Barbier, Schülke and Krzakala '13, '15] ...

The n × LM design matrix A is divided into blocks, and β into column blocks β_c, c = 1, ..., $\mathsf{C}$. The block variances are specified by a base matrix W with $\mathsf{R}$ rows and $\mathsf{C}$ columns (sans-serif, to distinguish them from the rate R and capacity C):

$$A_{ij} \sim \mathcal{CN}\left(0, \frac{1}{L} W_{\mathsf{r}(i), \mathsf{c}(j)}\right)$$

where $\mathsf{r}(i)$ and $\mathsf{c}(j)$ are the row block and column block containing entry (i, j).
[Thorpe '03] [Mitchell, Lentmaier and Costello '15] [Liang, Ma and Ping '17] ...
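A sketch of sampling such a spatially coupled design matrix (assuming n is divisible by $\mathsf{R}$ and LM by $\mathsf{C}$):

```python
import numpy as np

def sc_design_matrix(W, n, L, M, rng):
    """Sample A with A_ij ~ CN(0, W[r(i), c(j)] / L), where r(i), c(j)
    are the row/column blocks of entry (i, j)."""
    Rw, C = W.shape
    r_idx = np.repeat(np.arange(Rw), n // Rw)          # row block of each row
    c_idx = np.repeat(np.arange(C), (L * M) // C)      # column block of each column
    std = np.sqrt(W / L)[np.ix_(r_idx, c_idx)]         # entrywise std deviations
    g = rng.standard_normal((n, L * M)) + 1j * rng.standard_normal((n, L * M))
    return std * g / np.sqrt(2)                        # CN(0, W_rc / L) entries
```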

SLIDE 14

Modulated SPARC encoding

$x = A\beta$

As before, β has L sections of M entries, each with a single nonzero entry, but the nonzero values a_1, ..., a_L are now modulated:

β = [0, a_1, 0, ... | a_2, 0, 0, ... | ... | 0, a_L, 0, ...]^⊤

Per section, log₂ M bits determine the location of the nonzero entry and log₂ K bits determine its value, drawn from a K-ary phase shift keying (PSK) constellation, e.g. 8-PSK with points c_1, ..., c_8 equally spaced on the unit circle of the complex (Re/Im) plane.

Rate: $R = \frac{L \log(KM)}{n}$.
slide-15
SLIDE 15

AMP decoding

9/17

y = Aβ + w

Initialise b β0 to all-zero vector. For t = 0, 1, 2 . . . zt = y A b βt + υt zt1 b βt+1 = η ⇣ b βt + (St A)⇤zt , τ t⌘

slide-16
SLIDE 16

AMP decoding

9/17

}

Effective noise variance

≈ β + Gaussian noise

y = Aβ + w

Initialise b β0 to all-zero vector. For t = 0, 1, 2 . . . zt = y A b βt + υt zt1 b βt+1 = η ⇣ b βt + (St A)⇤zt , τ t⌘

slide-17
SLIDE 17

AMP decoding

Bayes-optimal estimator

9/17

}

Effective noise variance

: standard normal random vector

u

≈ β + Gaussian noise

y = Aβ + w

Initialise b β0 to all-zero vector. For t = 0, 1, 2 . . . zt = y A b βt + υt zt1 b βt+1 = η ⇣ b βt + (St A)⇤zt , τ t⌘

ηj(s, τ) = E h βj

  • s = β + pτ u

i

slide-18
SLIDE 18

AMP decoding

Bayes-optimal estimator

9/17

}

Effective noise variance

: standard normal random vector

State evolution predicts

u

≈ β + Gaussian noise

y = Aβ + w k b βt βk2

Initialise b β0 to all-zero vector. For t = 0, 1, 2 . . . zt = y A b βt + υt zt1 b βt+1 = η ⇣ b βt + (St A)⇤zt , τ t⌘

ηj(s, τ) = E h βj

  • s = β + pτ u

i
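To make the iteration concrete, here is a heavily simplified sketch for the real, uncoupled, unmodulated (K = 1) case, with the Onsager coefficient computed from the denoiser's divergence and $\tau^t$ estimated as $\|z^t\|^2/n$; the talk's decoder is the spatially coupled complex version with the matrices $\upsilon^t$ and $S^t$, for which see the paper.

```python
import numpy as np

def eta_softmax(s, tau, L, M):
    """Bayes-optimal denoiser for an unmodulated SPARC prior (nonzero
    entries equal to 1): per section, a softmax of s / tau."""
    u = s.reshape(L, M) / tau
    u -= u.max(axis=1, keepdims=True)                  # numerical stability
    p = np.exp(u)
    return (p / p.sum(axis=1, keepdims=True)).reshape(-1)

def amp_decode(y, A, L, M, n_iter=25):
    """Schematic AMP decoder: z holds the corrected residual and
    beta_hat + A^T z is the effective observation ~ beta + noise."""
    n = len(y)
    beta_hat = np.zeros(A.shape[1])
    z = y.copy()
    tau = np.linalg.norm(z) ** 2 / n                   # effective noise variance
    for _ in range(n_iter):
        s = beta_hat + A.T @ z                         # ~ beta + Gaussian noise
        beta_new = eta_softmax(s, tau, L, M)
        # Onsager term: (z / n) * divergence of the softmax denoiser.
        div = np.sum(beta_new * (1.0 - beta_new)) / tau
        z = y - A @ beta_new + (z / n) * div
        tau = np.linalg.norm(z) ** 2 / n
        beta_hat = beta_new
    return beta_hat
```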

SLIDES 19-20

State evolution for K-PSK modulated SPARCs

With base matrix W ($\mathsf{R}$ rows, $\mathsf{C}$ columns) and β divided into column blocks β_c, c = 1, ..., $\mathsf{C}$: for large n and L, state evolution predicts the per-block error
$$\psi_c^t \approx \frac{\|\hat{\beta}_c^t - \beta_c\|^2}{L/\mathsf{C}}.$$

Initialise $\psi_c^0 = 1$ for c = 1, ..., $\mathsf{C}$. For t = 0, 1, 2, ...:
$$\phi_r^t = \sigma^2 + \frac{1}{\mathsf{C}} \sum_{c=1}^{\mathsf{C}} W_{rc}\, \psi_c^t, \qquad
\tau_c^t = \frac{R}{2 \log(KM)} \left[ \frac{1}{\mathsf{R}} \sum_{r=1}^{\mathsf{R}} \frac{W_{rc}}{\phi_r^t} \right]^{-1}, \qquad
\psi_c^{t+1} = \mathrm{mmse}_\beta(\tau_c^t),$$
where R is the rate and $\mathrm{mmse}_\beta(\tau)$ is the MMSE of estimating a section of β from an observation with noise variance τ.
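A direct transcription of this recursion (base-2 logs, matching a rate in bits, are an assumption; mmse_beta must be supplied, since it has no simple closed form for general K):

```python
import numpy as np

def state_evolution(W, rate, K, M, sigma2, mmse_beta, n_iter=100):
    """K-PSK modulated SPARC state evolution:
       phi_r = sigma2 + (1/C) * sum_c W[r, c] * psi_c
       tau_c = (rate / (2 log(KM))) / ((1/Rw) * sum_r W[r, c] / phi_r)
       psi_c = mmse_beta(tau_c)"""
    Rw, C = W.shape
    psi = np.ones(C)                                   # psi^0_c = 1
    logKM = np.log2(K * M)
    for _ in range(n_iter):
        phi = sigma2 + (W @ psi) / C                   # length Rw
        tau = (rate / (2 * logKM)) / (W / phi[:, None]).mean(axis=0)
        psi = np.array([mmse_beta(t) for t in tau])
    return psi
```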

SLIDES 21-22

Main result

With the state evolution above, for $\delta \in (0, \tfrac{1}{2})$ and $\nu_c^t = \dfrac{1}{\tau_c^t \log(KM)}$:

$$\psi_c^{t+1} \le \begin{cases}
\dfrac{(KM)^{-\alpha_1 K \delta^2}}{\delta \sqrt{\log(KM)}} & \text{if } \nu_c^t > 2 + \delta, \\[6pt]
1 + \dfrac{(KM)^{-\alpha_2 K \nu_c^t}}{\nu_c^t \log(KM)} & \text{otherwise.}
\end{cases}$$

For fixed K, as M → ∞, the bound tends to 0 when $\nu_c^t > 2$ and to 1 otherwise.

SLIDE 23

Asymptotic SE for K-PSK modulated SPARCs

For fixed K, as M → ∞, the state evolution simplifies to: initialise $\psi_c^0 = 1$ for c = 1, ..., $\mathsf{C}$. For t = 0, 1, 2, ...:
$$\phi_r^t = \sigma^2 + \frac{1}{\mathsf{C}} \sum_{c=1}^{\mathsf{C}} W_{rc}\, \psi_c^t, \qquad
\psi_c^{t+1} = \mathbb{1}\left\{ \frac{1}{\mathsf{R}} \sum_{r=1}^{\mathsf{R}} \frac{W_{rc}}{\phi_r^t} \le R \right\}.$$

Each block is therefore either undecoded ($\psi_c = 1$) or decoded ($\psi_c = 0$), and the limit does not depend on K.
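The M → ∞ limit is easy to iterate exactly, since each $\psi_c$ is just 0 or 1; a sketch:

```python
import numpy as np

def asymptotic_state_evolution(W, rate, sigma2, n_iter=200):
    """Fixed-K, M -> infinity state evolution:
    psi_c = 1{(1/Rw) * sum_r W[r, c] / phi_r <= rate}; independent of K."""
    Rw, C = W.shape
    psi = np.ones(C)
    for _ in range(n_iter):
        phi = sigma2 + (W @ psi) / C
        psi = ((W / phi[:, None]).mean(axis=0) <= rate).astype(float)
    return psi     # all-zero output <=> SE predicts successful decoding
```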

SLIDES 24-25

Theorem for K-PSK modulated SPARCs

Construction: an (ω, Λ) base matrix W with $\mathsf{R} = \Lambda + \omega - 1$ rows and $\mathsf{C} = \Lambda$ columns; each column has ω nonzero entries, in a band that shifts down by one row per column.

Theorem. Consider a K-PSK modulated complex SPARC constructed with an (ω, Λ) base matrix W with ω > ω* and rate satisfying
$$R < \tilde{C} := \frac{C}{1 + \frac{\omega - 1}{\Lambda}}.$$
As n → ∞, the SER of the AMP decoder after T iterations is 0 almost surely, where
$$T \propto \frac{\Lambda}{2\omega(\tilde{C} - R)}.$$
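A sketch of the (ω, Λ) base matrix and the rate threshold $\tilde{C}$ (the nonzero value $\mathsf{R}/\omega$ is a normalisation assumed here, chosen so the entries average to 1; P and σ² are illustrative):

```python
import numpy as np

def omega_lambda_base_matrix(omega, Lam):
    """(omega, Lambda) base matrix: Rw = Lambda + omega - 1 rows, C = Lambda
    columns; column c has omega equal nonzero entries in rows c..c+omega-1."""
    Rw = Lam + omega - 1
    W = np.zeros((Rw, Lam))
    for c in range(Lam):
        W[c:c + omega, c] = Rw / omega     # assumed normalisation: mean entry 1
    return W

P, sigma2 = 1.0, 0.25                      # illustrative channel parameters
omega, Lam = 6, 32                         # values used on the simulation slide
cap = np.log2(1 + P / sigma2)              # capacity C
cap_tilde = cap / (1 + (omega - 1) / Lam)  # threshold C~ from the theorem
print(cap, cap_tilde)                      # the rate must satisfy R < C~
```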

SLIDES 26-28

Steps of proof

1. The error rate of AMP is accurately predicted by state evolution for large code lengths.
   By extending results in [Rush, Hsieh and Venkataramanan '20].

2. For any R < C, state evolution predicts vanishing error probability in the large system limit.
   A. The asymptotic state evolution is the same for any K. Shown in this work.
   B. Use the asymptotic state evolution analysis from unmodulated (K = 1) SPARCs. Shown in [Rush, Hsieh and Venkataramanan '18, '20].

SLIDE 29

Simulation results

[Figure: bit error rate and codeword error rate curves.]

Both schemes operate at R = 1.6 bits/dim.:
  • Modulated SPARC: R = L log(KM)/n with n ≈ 2000, L = 960, (ω, Λ) = (6, 32).
  • Coded modulation: (6480, 16200) LDPC code from the DVB-S2 standard + 256-QAM.

SLIDE 30

Computational benefits of modulation

Per-iteration complexity of the (FFT-based) AMP decoder: $O(LM(\log(LM) + K))$.

At a fixed rate, let $M_{\mathrm{unmod}} = K M_{\mathrm{mod}}$. Then
$$\frac{\text{complexity for unmodulated SPARC}}{\text{complexity for modulated SPARC}} = \frac{K\,(\log(L M_{\mathrm{unmod}}) + 1)}{\log(L M_{\mathrm{unmod}}) + K - \log K}.$$
If $K \ll \log(L M_{\mathrm{unmod}})$, this is approximately a K× reduction (approx. 3.8× in the simulation example using K = 4).
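Plugging in numbers (a sketch; base-2 logs and the values L = 960, M_unmod = 256 are assumptions chosen here because they reproduce the quoted 3.8×):

```python
import numpy as np

def complexity_ratio(L, M_unmod, K):
    """Unmodulated-over-modulated per-iteration AMP complexity at equal rate:
    K (log(L*M_unmod) + 1) / (log(L*M_unmod) + K - log K), with M_unmod = K * M_mod."""
    x = np.log2(L * M_unmod)
    return K * (x + 1) / (x + K - np.log2(K))

print(complexity_ratio(L=960, M_unmod=256, K=4))   # ~3.8x reduction
```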

SLIDE 31

Summary

Background: sparse regression codes (SPARCs), x = Aβ, for the AWGN channel.

This work:
1. SPARCs for the complex AWGN channel.
2. Introduce (PSK) modulation to SPARC encoding.

Theoretical result: complex SPARCs with K-PSK modulation are asymptotically capacity-achieving for any fixed K.

Numerical result: modulation can significantly reduce decoding complexity without sacrificing error performance.

SLIDES 32-34

References

  • A. Joseph and A. R. Barron, "Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity," IEEE Trans. Inf. Theory, vol. 58, pp. 2541-2557, May 2012.
  • A. Joseph and A. R. Barron, "Fast sparse superposition codes have near exponential error probability for R < C," IEEE Trans. Inf. Theory, vol. 60, no. 2, pp. 919-942, 2014.
  • S. Cho and A. R. Barron, "Approximate iterative Bayes optimal estimates for high-rate sparse superposition codes," in The Sixth Workshop on Inf. Theoretic Methods in Sci. and Eng., pp. 35-42, 2013.
  • C. Rush, A. Greig, and R. Venkataramanan, "Capacity-achieving sparse superposition codes via approximate message passing decoding," IEEE Trans. Inf. Theory, vol. 63, pp. 1476-1500, March 2017.
  • J. Barbier, M. Dia, and N. Macris, "Proof of threshold saturation for spatially coupled sparse superposition codes," in Proc. IEEE Int. Symp. Inf. Theory, 2016.
  • J. Barbier, M. Dia, and N. Macris, "Threshold saturation of spatially coupled sparse superposition codes for all memoryless channels," in Proc. IEEE Inf. Theory Workshop, 2016.
  • J. Barbier, M. Dia, and N. Macris, "Universal sparse superposition codes with spatial coupling and GAMP decoding," arXiv:1707.04203, July 2017.
  • J. Barbier and F. Krzakala, "Approximate message-passing decoder and capacity achieving sparse superposition codes," IEEE Trans. Inf. Theory, vol. 63, pp. 4894-4927, Aug. 2017.
  • D. L. Donoho, A. Javanmard, and A. Montanari, "Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing," IEEE Trans. Inf. Theory, vol. 59, pp. 7434-7464, Nov. 2013.
  • D. G. M. Mitchell, M. Lentmaier, and D. J. Costello, "Spatially coupled LDPC codes constructed from protographs," IEEE Trans. Inf. Theory, vol. 61, pp. 4866-4889, Sept. 2015.
  • S. Liang, J. Ma, and L. Ping, "Clipping can improve the performance of spatially coupled sparse superposition codes," IEEE Commun. Letters, vol. 21, pp. 2578-2581, Dec. 2017.
  • D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914-18919, 2009.
  • S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," in Proc. IEEE Int. Symp. Inf. Theory, 2011.
  • C. Liang, J. Ma, and L. Ping, "Towards Gaussian capacity, universality and short block length," in Proc. 9th Int. Symp. on Turbo Codes and Iterative Inf. Processing (ISTC), 2016, pp. 412-416.
  • S. Liang, C. Liang, J. Ma, and L. Ping, "Compressed coding, AMP based decoding and analog spatial coupling," 2020, https://arxiv.org/abs/2002.04808.
  • A. Maleki, L. Anitori, Z. Yang, and R. G. Baraniuk, "Asymptotic analysis of complex LASSO via complex approximate message passing (CAMP)," IEEE Trans. Inf. Theory, vol. 59, no. 7, pp. 4290-4308, July 2013.
  • K. Hsieh, C. Rush, and R. Venkataramanan, "Spatially coupled sparse regression codes: Design and state evolution analysis," in Proc. IEEE Int. Symp. Inf. Theory, June 2018, pp. 1016-1020.
  • K. Hsieh and R. Venkataramanan, "Modulated sparse superposition codes for the complex AWGN channel," 2020, online: https://arxiv.org/abs/2004.09549.
  • Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
  • A. Cassagne, O. Hartmann, M. Léonardon, T. Tonnellier, G. Delbergue, C. Leroux, R. Tajan, B. Le Gal, C. Jégo, O. Aumage, and D. Barthou, "Fast simulation and prototyping with AFF3CT," in IEEE Int. Workshop on Signal Processing Systems (SiPS), Oct. 2017.

SLIDE 35

State evolution (backup)

[Figure: normalised $\|\hat{\beta}_c^t - \beta_c\|^2$ vs. column block index c. Solid: state evolution; dotted: AMP (average over 100 runs).]

SLIDE 36

Data bits to message vector (backup)

Example with M = 8, i.e. log₂ M = 3 bits per section:

Step 1: split the data bits into sections of log₂ M = 3 bits (e.g. 000, 001, 111, ...).
Step 2: one-hot encode each section: its 3-bit value selects one of the M = 8 positions, giving a length-8 vector with a single 1.
Step 3: concatenate the one-hot vectors to form the message vector β = [ ... ]^⊤.
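These three steps in a few lines of numpy (standard binary order for the bit-to-position mapping is a convention chosen for illustration):

```python
import numpy as np

M = 8                                               # log2(M) = 3 bits per section
bits = np.array([0, 0, 0,  0, 0, 1,  1, 1, 1])      # Step 1: three 3-bit sections
idx = bits.reshape(-1, 3) @ np.array([4, 2, 1])     # binary value of each section
beta = np.zeros((len(idx), M))
beta[np.arange(len(idx)), idx] = 1                  # Step 2: one-hot encode each section
beta = beta.reshape(-1)                             # Step 3: concatenate into beta
```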