

Slide 1/21

Approximate Message Passing for Unsourced Access with Coded Compressed Sensing

Vamsi K. Amalladinne, Asit Kumar Pradhan, Cynthia Rush†, J.-F. Chamberland, Krishna R. Narayanan

Electrical and Computer Engineering @ Texas A&M University

†Statistics @ Columbia University

ISIT 2020, June 6, 2020

This material is based upon work supported, in part, by NSF under Grants CCF-1619085 & CCF-1849883. This material is also based upon work supported, in part, by Qualcomm Technologies, Inc., through their University Relations Program.

Slide 2/21

Uncoordinated and Unsourced MAC

[Diagram: six devices, each applying the same encoder to its message (Message 1–6), transmit over a MAC channel to a joint decoder]

Without Personalized Feedback

◮ All devices employ same encoder
◮ No explicit knowledge of identities
◮ Need only return unordered list

Model

y = Σ_{i∈Sa} si + z

where si = f(wi) is the codeword, which only depends on the message.
  • Y. Polyanskiy. A Perspective on Massive Random-Access. ISIT, 2017
Slide 3/21

UMAC – Compressed Sensing Interpretation

[Diagram: information bits (101010000) map to a message index (21) selecting one column of the sensing matrix; columns are possible signals over time]

◮ Bit sequence wi ∈ {0, 1}^B converted to an index in [0, 2^B − 1]
◮ Stack codewords into N × 2^B sensing matrix, with B ≈ 128
◮ Message index determines transmitted codeword

Slide 4/21

UMAC – Compressed Sensing with Multiple Messages

Collection of Message Indices

Conceptual MAC Framework

◮ Devices share same codebook (sensing matrix)
◮ Received signal is sum of K columns plus noise

Slide 5/21

UMAC – Exact CS Analogy

[Diagram: received signal y equals sampling matrix (n × 2^B) times K-sparse message vector with non-negative integer entries (message indices), plus noise]

◮ y = As + z with ‖s‖₀ = K
◮ Dimensionality of CS problem is huge
◮ Computational complexity of conventional CS solvers: O(poly(2^B))
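For intuition, the exact CS analogy can be simulated directly at toy scale; a minimal NumPy sketch (all sizes are illustrative stand-ins, not the slide's B ≈ 128, for which the 2^B-column matrix is intractable):

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, K = 256, 10, 4                      # illustrative sizes only
A = rng.standard_normal((n, 2**B)) / np.sqrt(n)  # sampling matrix, n x 2^B

# K devices each map B message bits to a column index of A
messages = rng.integers(0, 2**B, size=K)
s = np.zeros(2**B)
np.add.at(s, messages, 1.0)               # K-sparse, non-negative integer entries

y = A @ s + 0.05 * rng.standard_normal(n)  # received signal y = A s + z
```

Conventional CS solvers must reason over all 2^B columns of A, which is exactly the scaling problem CCS sidesteps by dividing the message into slots.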

Slide 6/21

CCS: Divide and Conquer

[Diagram: information bits of each message pass through FEC (w1, ..., wK) and CS encoding (v1, ..., vK); the noisy sum enters a CS algorithm followed by a tree decoder, producing the message list W]

CCS Components

◮ Split problem into CS pieces
◮ Get lists of fragments, one list for every slot
◮ Stitch fragments together using tree decoder

[Diagram: encoding partitions the message into distinct CS instances]

Slide 7/21

Coded Compressive Sensing – Device Perspective

[Diagram: B information bits plus P parity bits, with parity allocated across coupled messages; one fragment transmitted per slot (Slot 1, Slot 2, Slot 3, ..., Slot L)]

◮ Collection of L CS matrices and 1-sparse vectors
◮ Each CS-generated signal is sent in specific time slot

• V. K. Amalladinne, A. Vem, D. Soma, K. R. Narayanan, J.-F. Chamberland. Coupled Compressive Sensing Scheme for Unsourced Multiple Access. ICASSP 2018

Slide 8/21

Coded Compressive Sensing – Multiple Access

[Diagram: slot-wise transmissions (Slot 1, ..., Slot L) each producing a decoded list (List 1, ..., List L)]

◮ L instances of CS problem, each solved with non-negative LS
◮ Produces L lists of K decoded sub-packets (with parity)
◮ Must piece sub-packets together using tree decoder
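Each slot-level subproblem is small enough for an off-the-shelf non-negative least-squares routine; a toy sketch of one noiseless slot using scipy.optimize.nnls (sizes illustrative, single slot, fragments taken as binary indicators):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, m, K = 128, 256, 3                  # one slot: n samples, m = 2^v candidate fragments
A = rng.standard_normal((n, m)) / np.sqrt(n)

s_true = np.zeros(m)
s_true[rng.choice(m, size=K, replace=False)] = 1.0  # K active fragments
y = A @ s_true                          # noiseless slot observation

s_hat, rnorm = nnls(A, y)               # non-negative least squares
top = np.argsort(s_hat)[-K:]            # report the K largest entries as this slot's list
```

The non-negativity constraint is what makes this tractable despite the underdetermined system; in CCS one such solve runs per slot, yielding the L lists fed to the tree decoder.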

Slide 9/21

Coded Compressive Sensing – Stitching Process

[Diagram: tree decoder stitching fragments across List 1, List 2, List 3, ..., List L]

Tree Decoding Principles

◮ Every parity is linear combination of bits in preceding blocks
◮ Late parity bits offer better performance
◮ Early parity bits decrease decoding complexity

Slide 10/21

Extending CCS Framework

◮ Alexander Fengler, Peter Jung, Giuseppe Caire on arXiv
◮ Connection between CCS indexing and sparse regression codes
◮ Circumvent slotting under CCS and dispersion effects

Slide 11/21

UMAC – CCS Unified CS Analogy

[Diagram: received signal equals sampling matrix (n × 2^((B+P)/L) per section) times L-sparse message vector]

◮ Initial non-linear indexing step
◮ Index vector is block sparse
◮ Connection to sparse regression codes

• C. Rush, A. Greig, R. Venkataramanan. Capacity-Achieving Sparse Superposition Codes via Approximate Message Passing Decoding. IEEE Trans. Inform. Theory, 2017
Slide 12/21

CCS-AMP

[Diagram: received signal equals sampling matrix (n × 2^((B+P)/L) per section) times L-sparse message vector]

◮ Complexity management comes from dimensionality reduction
◮ Use full sensing matrix on sparse regression codes
◮ Decode inner code with low-complexity AMP
◮ Decode outer code with tree decoding

  • A. Fengler, P. Jung, and G. Caire. SPARCs and AMP for Unsourced Random Access. ISIT 2019
Slide 13/21

Approximate Message Passing Algorithm

Governing Equations

◮ AMP algorithm iterates through

z(t) = y − A D ηt(r(t)) + (z(t−1)/n) div D ηt(r(t))    (Onsager correction)

r(t+1) = Aᵀ z(t) + D ηt(r(t))    (denoiser)

Initial conditions: z(0) = 0 and η0(r(0)) = 0

◮ Application falls within framework for (non-separable) functions

Tasks

◮ Define denoiser
◮ Derive correction term

• R. Berthier, A. Montanari, and P.-M. Nguyen. State Evolution for Approximate Message Passing with Non-Separable Functions. arXiv 2017

Slide 14/21

Marginal Posterior Mean Estimate (PME)

Proposed Denoiser (Fengler, Jung, and Caire)

◮ r(t) converges in distribution to D s + τt ζ as n → ∞
◮ State estimate based on Gaussian model

ŝOR(q, r, τ) = E[ s | √Pℓ s + τζ = r ]
             = q exp(−(r − √Pℓ)²/2τ²) / [ (1 − q) exp(−r²/2τ²) + q exp(−(r − √Pℓ)²/2τ²) ]

with prior q = 1 − (1 − 1/m)^K fixed

◮ ηt(r(t)) = ŝOR(q, r(t), τt) is aggregate of PME values
◮ τt is obtained from state evolution, or τt² ≈ ‖z(t)‖²/n

Performance is quite good!
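The PME above reduces to a few lines of NumPy; this is a sketch of the slide's formula (the function name and the log-domain shift for numerical stability are my own), with argument a playing the role of √Pℓ:

```python
import numpy as np

def pme(q, r, a, tau):
    """Posterior mean estimate E[s | a*s + tau*zeta = r] for s in {0, 1}
    with prior Pr(s = 1) = q, i.e. the s_OR of the slide.
    Shifting by the max exponent keeps both exponentials <= 1."""
    t0 = -r**2 / (2 * tau**2)            # log-likelihood under s = 0
    t1 = -(r - a)**2 / (2 * tau**2)      # log-likelihood under s = 1
    mx = np.maximum(t0, t1)
    num = q * np.exp(t1 - mx)
    den = (1 - q) * np.exp(t0 - mx) + num
    return num / den
```

Because it acts component-wise on r, the same routine serves as the aggregate denoiser ηt; the static prior on the slide corresponds to q = 1 − (1 − 1/m)^K.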

Slide 15/21

Marginal PME Revisited

Enhanced CCS-AMP

◮ Can tree code inform decoding of inner code (AMP denoiser)?
◮ Idea: Propagate beliefs through q within the existing PME framework

ŝOR(q, r, τ) = E[ s | √Pℓ s + τζ = r ]
             = q exp(−(r − √Pℓ)²/2τ²) / [ (1 − q) exp(−r²/2τ²) + q exp(−(r − √Pℓ)²/2τ²) ]

but leverage extrinsic information to compute q = Pr(s = 1)

◮ Proposed denoiser becomes ηt(r(t)) = ŝOR(q(t), r(t), τt), applied component-wise
Slide 16/21

Updated CCS-AMP Equations

◮ Onsager correction from divergence of ηt(r):

(z(t−1)/n) div D ηt(r) = (z(t−1)/(n τt²)) ( ⟨D² ηt(r), 1⟩ − ‖D ηt(r)‖² )

◮ Robust to tree dynamics
◮ Simplified AMP equations:

z(t) = y − A D s(t) + (z(t−1)/(n τt²)) ( ⟨D² s(t), 1⟩ − ‖D s(t)‖² )

s(t+1) = ηt+1( Aᵀ z(t) + D s(t) )

with ηt(r(t)) = ŝOR(q(t), r(t), τt)

Tasks
1. Devise a suitable tree code
2. Compute q(t) from tree code
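A toy end-to-end pass of these simplified equations, under simplifying assumptions not on the slide (single section, single user, uniform amplitude D = d·I, static prior q, τt² estimated as ‖z‖²/n, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K, sigma = 200, 64, 1, 0.1       # toy problem: one section, one user
A = rng.standard_normal((n, m)) / np.sqrt(n)
d = np.sqrt(4.0 * n)                   # uniform amplitude, D = d I
q = 1 - (1 - 1 / m) ** K               # static prior from the PME slide

idx = 7                                # transmitted index
s_true = np.zeros(m); s_true[idx] = 1.0
y = A @ (d * s_true) + sigma * rng.standard_normal(n)

def pme(q, r, a, tau2):
    """Component-wise posterior mean estimate (log-domain shift for stability)."""
    t0, t1 = -r**2 / (2 * tau2), -(r - a)**2 / (2 * tau2)
    mx = np.maximum(t0, t1)
    num = q * np.exp(t1 - mx)
    return num / ((1 - q) * np.exp(t0 - mx) + num)

s, z = np.zeros(m), y                  # all-zero estimate gives residual z = y
for _ in range(10):
    tau2 = np.dot(z, z) / n            # tau_t^2 ~ ||z||^2 / n
    r = A.T @ z + d * s                # effective observation
    s = pme(q, r, d, tau2)             # denoise: s(t+1)
    # Onsager term: (z/(n tau^2)) ( <D^2 s, 1> - ||D s||^2 )
    onsager = (z / (n * tau2)) * (np.sum(d**2 * s) - np.sum((d * s) ** 2))
    z = y - A @ (d * s) + onsager      # corrected residual
```

On this easy instance the state estimate concentrates on the transmitted index within a few iterations; swapping the static q for the tree-informed q(t) is precisely the enhancement the next slides develop.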
Slide 17/21

Redesigning Outer Code

Properties of Original Tree Code

◮ Aimed at stitching message fragments together
◮ Works on short lists of K fragments
◮ Parities allocated to control growth and complexity

Challenges to Integrate into AMP

1. Must compute beliefs for all possible 2^v fragments
2. Must provide pertinent information to AMP
3. Should maintain ability to stitch outer code
Slide 18/21

Redesigning Outer Code

Solutions to Integrate into AMP

◮ Parity bits are generated over Abelian group amenable to Hadamard transform (original) or FFT (modified)
◮ Discrimination power proportional to # parities

New Design Strategy

1. Information sections with parity bits interspersed in-between
2. Parity over two blocks (triadic dependencies)
3. Multiplicative effect across concentrated sections
Slide 19/21

Redesigning Outer Code

◮ For a parity section ℓ:

q(t+1)(ℓ, k) ∝ Σ_{(gj)j∈Wℓ : Σj gj ≡ k} ∏_{j∈Wℓ} s(t)(j, G⁻¹j,ℓ(gj))    (circular convolution structure)

where Wℓ = {j ∈ [1 : L] : there is an edge between sections j and ℓ}.

◮ The vector of priors can be computed at once as

q(t+1)(ℓ) ∝ FFT⁻¹( ∏_{j∈Wℓ} FFT(S(t)j,ℓ) )

where S(t)j,ℓ is the stacked vector with entries s(t)(j, G⁻¹j,ℓ(gj)).

[Diagram: factor graph over sections v(1)–v(16), with parity sections connecting pairs of information sections]
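The FFT identity above is the circular-convolution theorem applied to section beliefs; a small self-check, assuming (for illustration only) an identity mapping in place of G⁻¹j,ℓ and a parity section fed by two info sections of size m:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 16                                  # fragments per section (2^v)
# Beliefs over the two info sections feeding one parity section
p1 = rng.random(m); p1 /= p1.sum()
p2 = rng.random(m); p2 /= p2.sum()

# Direct evaluation: q[k] = sum over g1 + g2 = k (mod m) of p1[g1] * p2[g2]
q_direct = np.zeros(m)
for g1 in range(m):
    for g2 in range(m):
        q_direct[(g1 + g2) % m] += p1[g1] * p2[g2]

# FFT route from the slide: q = FFT^{-1}( FFT(p1) * FFT(p2) )
q_fft = np.real(np.fft.ifft(np.fft.fft(p1) * np.fft.fft(p2)))
```

The direct sum costs O(m²) per parity section while the FFT route costs O(m log m), which is what makes belief propagation over all 2^v fragments affordable.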

Slide 20/21

Preliminary Performance of Enhanced CCS

[Plots: required Eb/N0 (dB) and average run-time (sec) vs. number of active users K (50–300), comparing CCS-AMP original, CCS-AMP enhanced, and Sparse IDMA]

◮ Performance improves with enhanced CCS-AMP decoding
◮ Computational complexity is approximately maintained
◮ Reparametrization may offer additional gains in performance

Slide 21/21

Discussion – Unsourced Multiple Access Channel

Summary

◮ Introduced new framework for CCS-AMP and Unsourced MAC
◮ There are close connections between compressive sensing, graph-based codes, and UMAC
◮ Many theoretical and practical challenges/opportunities exist

Thank You!

This material is based upon work supported, in part, by NSF under Grants CCF-1619085 & CCF-1849883. This material is also based upon work supported, in part, by Qualcomm Technologies, Inc., through their University Relations Program.