SLIDE 1

Tensor-Based Models for Blind DS-CDMA Receivers

by Dimitri Nion and Lieven De Lathauwer
ETIS Lab., CNRS UMR 8051, 6 avenue du Ponceau, 95014 Cergy, France

ASILOMAR 2007, November 4-7, 2007, Pacific Grove, USA

SLIDE 2

Context

Research area: Blind Source Separation (BSS)
Application: wireless communications (here, a DS-CDMA system)
System: multiuser DS-CDMA, uplink, antenna-array receiver
Propagation:
  • P1: instantaneous channel (single path)
  • P2: multipath channel with Inter-Symbol Interference (ISI) and far-field reflections only (from the receiver's point of view)
  • P3: multipath channel (ISI) with reflections not only in the far field (specular channel model)
Assumptions: no knowledge of the channel, the CDMA codes, the noise level or the antenna-array response (BLIND approach)
Objective: estimate each user's symbol sequence
Method:
  • Deterministic: relies on multilinear algebra
  • How? Store the observations in a third-order tensor and decompose it into a sum of users' contributions
Idea:
  • The tensor model is « richer » than the matrix model
SLIDE 3

DS-CDMA system: cooperative vs. blind

Introduction

[Figure: R users transmit through R channels to an array of K antennas; the received signals $y_k(t)$ feed an equalization-and-separation stage.]

Cooperative case: the spreading codes and channel functions are known.
Blind case: they are unknown.

SLIDE 4

Blind Approach: Why?

Introduction

Several motivations, among others:
  • Elimination or reduction of the learning frames: more than 40% of the transmission rate is devoted to training in UMTS
  • Training is not efficient in case of severe multipath fading or fast time-varying channels
  • Applications: eavesdropping, source localization, …
  • If the learning sequence is unavailable or only partially received

SLIDE 5

Blind Approach: How? (1)

Introduction

Three diversities:
  • Code diversity: chip-rate sampling, I = spreading factor
  • Temporal diversity: observation during $J \cdot T_s$, where $T_s$ = symbol period
  • Spatial diversity: K receive antennas

Build the 3rd-order tensor of observations $\mathcal{Y}$ ($I \times J \times K$).
Numerical processing: blind equalization and separation performed by decomposition of $\mathcal{Y}$.
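To make the construction concrete, here is a minimal NumPy sketch of stacking chip-rate samples into the $I \times J \times K$ observation tensor; the function name and the `samples` layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hypothetical sketch: stack chip-rate samples into the I x J x K
# observation tensor (I = spreading factor, J = symbol periods,
# K = receive antennas). samples[k] holds the I*J chip-rate samples
# received at antenna k during the J symbol periods.
def build_observation_tensor(samples, I, J, K):
    Y = np.empty((I, J, K), dtype=complex)
    for k in range(K):
        # each symbol period contributes one column of I chips
        Y[:, :, k] = np.asarray(samples[k]).reshape(J, I).T
    return Y
```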
SLIDE 6

Introduction

Blind Approach: How? (2)

[Figure: the $I \times J \times K$ tensor $\mathcal{Y}$ written as $\mathcal{Y} = \mathcal{Y}_1 + \dots + \mathcal{Y}_R$, one $I \times J \times K$ term per user.]

Decomposition of $\mathcal{Y}$: sum of the R users' contributions.

  • Algebraic structure of $\mathcal{Y}_r$? Different according to the propagation scenario → build different tensor decompositions (Part I)
  • Estimation of $\mathcal{Y}_r$? Goal: blind separation and equalization → build algorithms to compute the tensor decompositions (Part II)
  • Identifiability of $\mathcal{Y}_r$? Uniqueness of the tensor decompositions → constraints on the number of users (not in this talk)

SLIDE 7

Introduction

I. Tensor Decompositions
  1. Single path only (instantaneous channel): PARAFAC decomposition
  2. Multipath channel with ISI and far-field reflections only: Block-Component-Decomposition in rank-(L,L,1) terms: BCD-(L,L,1)
  3. Multipath channel with ISI and reflections not only in the far field: Block-Component-Decomposition in rank-(L,P,.) terms: BCD-(L,P,.)
II. Algorithms to compute tensor decompositions
III. Simulation Results
Conclusion and Perspectives

SLIDE 8

Part I: Tensor Decompositions

PARAFAC decomposition

If single path only (instantaneous mixture), $\mathcal{Y}$ follows a PARAFAC decomposition [Sidiropoulos, Giannakis & Bro, 2000].

Analytic model: $\mathcal{Y} = \sum_{r=1}^{R} h_r \, (c_r \circ s_r \circ a_r)$, a sum of R rank-1 terms, one per user.

[Figure: $\mathcal{Y}$ ($I \times J \times K$) = $h_1 \, (c_1 \circ s_1 \circ a_1)$ (User 1) + … + $h_R \, (c_R \circ s_R \circ a_R)$ (User R).]

  • $c_r$ holds the I chips of the rth user's spreading code
  • $a_r$ holds the response of the K antennas
  • $s_r$ holds the J consecutive symbols transmitted by user r
  • $h_r$ is the fading factor of the instantaneous channel

Algebraic model:

$y_{ijk} = \sum_{r=1}^{R} h_r \, c_{ir} \, s_{jr} \, a_{kr}$
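As an illustration of the model, a short NumPy sketch that synthesizes a noiseless PARAFAC tensor from the four quantities above; the random factors and dimensions are assumptions for demonstration only.

```python
import numpy as np

# Sketch of the PARAFAC model: y_ijk = sum_r h_r * c_ir * s_jr * a_kr.
# C (I x R): spreading codes, S (J x R): symbols, A (K x R): antenna
# responses, h (length R): fading factors.
def parafac_tensor(C, S, A, h):
    # einsum spells out the trilinear sum over the R users directly
    return np.einsum('r,ir,jr,kr->ijk', h, C, S, A)

rng = np.random.default_rng(0)
I, J, K, R = 8, 50, 4, 3
C = rng.choice([-1.0, 1.0], size=(I, R))     # e.g. binary spreading codes
S = rng.choice([-1.0, 1.0], size=(J, R))     # e.g. BPSK symbols
A = rng.standard_normal((K, R)) + 1j * rng.standard_normal((K, R))
h = rng.standard_normal(R) + 1j * rng.standard_normal(R)
Y = parafac_tensor(C, S, A, h)               # shape (I, J, K)
```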

SLIDE 9

BCD-(L,L,1)

If there are multiple paths in the far field + ISI, $\mathcal{Y}$ follows a « Block Component Decomposition in rank-(L,L,1) terms », BCD-(L,L,1) [De Lathauwer & De Baynast, 2003], [Nion & De Lathauwer, SPAWC 2007].

Analytic model:

$\mathcal{Y} = \sum_{r=1}^{R} (H_r S_r^T) \circ a_r$

with $H_r \in \mathbb{C}^{I \times L}$ and $S_r \in \mathbb{C}^{J \times L}$ Toeplitz because of the ISI (L interfering symbols): column l of $S_r$ holds the symbol stream delayed by $l-1$, e.g. columns $(s_0, s_1, \dots, s_{J-1})^T$ and $(s_{-1}, s_0, \dots, s_{J-2})^T$ for $L = 2$.

Algebraic model:

$y_{ijk} = \sum_{r=1}^{R} \sum_{l=1}^{L} a_{kr} \, h_r\big(i + (l-1)I\big) \, s^{(r)}_{j-l+1}$

Part I: Tensor Decompositions
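A sketch of how one user's rank-(L,L,1) term could be synthesized, with the Toeplitz symbol matrix built explicitly; the shapes follow the slide, while names, SciPy usage and the array layout are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch of the BCD-(L,L,1) synthesis: Y = sum_r (H_r @ S_r.T) outer a_r.
def toeplitz_symbols(s, J, L):
    # s: symbol stream of length J+L-1 (includes L-1 "past" symbols);
    # column l of the J x L result is the stream delayed by l symbols
    return toeplitz(s[L-1:L-1+J], s[L-1::-1])

def bcd_ll1_tensor(H_list, S_list, A):
    # H_list[r]: I x L channel, S_list[r]: J x L Toeplitz, A[:, r]: K antennas
    I = H_list[0].shape[0]; J = S_list[0].shape[0]; K = A.shape[0]
    Y = np.zeros((I, J, K), dtype=complex)
    for r in range(len(H_list)):
        Yr = H_list[r] @ S_list[r].T               # I x J slice of user r
        Y += Yr[:, :, None] * A[:, r][None, None, :]
    return Y
```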

SLIDE 10

BCD-(L,P,.)

If there are multiple paths not only in the far field + ISI, $\mathcal{Y}$ follows a BCD-(L,P,.) [Nion & De Lathauwer, ICASSP 2005].

Analytic model: 1 path = 1 delay, 1 angle of arrival and 1 fading coefficient.

[Figure: $\mathcal{Y} = \sum_{r=1}^{R}$ of terms combining a channel part $\mathcal{H}_r$ ($I \times L \times P$), the Toeplitz symbol matrix $S_r$ ($J \times L$, Toeplitz structure because of ISI) and $A_r$ ($K \times P$: P paths).]

Algebraic model:

$y_{ijk} = \sum_{r=1}^{R} \sum_{p=1}^{P} \sum_{l=1}^{L} a_k(\theta_{rp}) \, h_{rp}\big(i + (l-1)I\big) \, s^{(r)}_{j-l+1}$

Part I: Tensor Decompositions
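The per-user contribution of this model collapses into a single einsum once the channel is arranged per path and per interfering symbol; a sketch under that assumed layout:

```python
import numpy as np

# Sketch of one user's BCD-(L,P,.) contribution:
# y_ijk = sum_p sum_l a_k(theta_rp) * h_rp(i + (l-1)I) * s_(j-l+1).
# H_r: (I, P, L) channel per path p and interfering symbol l,
# S_r: (J, L) Toeplitz symbol matrix, A_r: (K, P) steering vectors.
def bcd_lp_term(H_r, S_r, A_r):
    return np.einsum('ipl,jl,kp->ijk', H_r, S_r, A_r)

# full tensor: Y = sum of bcd_lp_term(H_r, S_r, A_r) over the R users
```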

SLIDE 11

Unknowns for each decomposition

  • PARAFAC: $\mathcal{Y} = \sum_{r=1}^{R} h_r \circ s_r \circ a_r$; unknowns $H \in \mathbb{C}^{I \times R}$, $S \in \mathbb{C}^{J \times R}$, $A \in \mathbb{C}^{K \times R}$

  • BCD-(L,L,1): $\mathcal{Y} = \sum_{r=1}^{R} (H_r S_r^T) \circ a_r$ with $H_r$ ($I \times L$) and $S_r$ ($J \times L$, Toeplitz); unknowns $H \in \mathbb{C}^{I \times RL}$, $S \in \mathbb{C}^{J \times RL}$ (block-Toeplitz), $A \in \mathbb{C}^{K \times R}$

  • BCD-(L,P,.): each term combines $\mathcal{H}_r$ ($I \times L \times P$), $S_r$ ($J \times L$, Toeplitz) and $A_r$ ($K \times P$); unknowns $H \in \mathbb{C}^{I \times RPL}$, $S \in \mathbb{C}^{J \times RL}$ (block-Toeplitz), $A \in \mathbb{C}^{K \times RP}$

Part I: Tensor Decompositions

SLIDE 12

Introduction
I. Tensor Decompositions
II. Algorithms to compute Tensor Decompositions
  1. Algorithm 1: ALS (“Alternating Least Squares”)
  2. Algorithm 2: ALS + LS (“Line Search”)
  3. Algorithm 3: LM (“Levenberg-Marquardt”)
III. Simulation Results
Conclusion and Perspectives

SLIDE 13

Objective of the proposed algorithms

Part II: Algorithms

Objective: decomposition of $\mathcal{Y}$, i.e. estimation of the components A, S and H, by minimizing the Frobenius norm of the residuals. Cost function:

$\Phi = \big\| \mathcal{Y} - \mathrm{Tens}(\hat{A}, \hat{S}, \hat{H}) \big\|_F^2$

where Tens = PARAFAC or BCD-(L,L,1) or BCD-(L,P,.).

Useful tool: « matricize » the tensor of observations. Three matrix representations of the same tensor:

  • $Y_{I \times KJ} = \mathrm{cat}_k[Y_k]$ (slices along the spatial diversity K)
  • $Y_{J \times IK} = \mathrm{cat}_i[Y_i]$ (slices along the code diversity I)
  • $Y_{K \times JI} = \mathrm{cat}_j[Y_j]$ (slices along the temporal diversity J)
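A NumPy sketch of the three matricizations; the column ordering inside each unfolding is a convention, and any fixed ordering works as long as the structure matrices $Z_i$ below use the same one.

```python
import numpy as np

# Sketch: the 3 matrix representations of Y (I x J x K), obtained by
# concatenating the slices along one mode.
def unfoldings(Y):
    I, J, K = Y.shape
    Y_I_KJ = np.hstack([Y[:, :, k] for k in range(K)])    # I x KJ, cat over k
    Y_J_IK = np.hstack([Y[i, :, :] for i in range(I)])    # J x IK, cat over i
    Y_K_JI = np.hstack([Y[:, j, :].T for j in range(J)])  # K x JI, cat over j
    return Y_I_KJ, Y_J_IK, Y_K_JI
```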

SLIDE 14

Algorithm 1: ALS « Alternating Least Squares »

Principle: alternate between least-squares updates of the 3 matrices $A = [A_1, \dots, A_R]$, $S = [S_1, \dots, S_R]$ and $H = [H_1, \dots, H_R]$.

Initialization: $\hat{A}^{(0)}$, $\hat{H}^{(0)}$, $k = 1$. While $|\Phi^{(k)} - \Phi^{(k-1)}| > \epsilon$ (e.g. $\epsilon = 10^{-6}$):

  (1) $\hat{S}^{(k)} = Y_{J \times IK} \cdot Z_1(\hat{A}^{(k-1)}, \hat{H}^{(k-1)})^{\dagger}$
  (2) $\hat{H}^{(k)} = Y_{I \times KJ} \cdot Z_2(\hat{S}^{(k)}, \hat{A}^{(k-1)})^{\dagger}$
  (3) $\hat{A}^{(k)} = Y_{K \times JI} \cdot Z_3(\hat{H}^{(k)}, \hat{S}^{(k)})^{\dagger}$
  then $k \leftarrow k + 1$,

where $Z_1$, $Z_2$, $Z_3$ are the structure matrices built from the two fixed factors (Khatri-Rao products in the PARAFAC case) and $\dagger$ denotes the pseudo-inverse.

Part II: Algorithms
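A compact NumPy sketch of this loop for the PARAFAC case, where the $Z_i$ are Khatri-Rao products (the BCD variants use block-structured $Z_i$ and re-impose the Toeplitz structure of S at each pass); the random initialization and stopping rule follow the slide, the rest is illustrative.

```python
import numpy as np

def khatri_rao(B, C):
    # column-wise Kronecker product: row (b, c) holds B[b, r] * C[c, r]
    return np.einsum('br,cr->bcr', B, C).reshape(-1, B.shape[1])

def als_parafac(Y, R, max_iter=500, eps=1e-6):
    # factor shapes: H (I x R), S (J x R), A (K x R)
    I, J, K = Y.shape
    Y1 = np.hstack([Y[:, :, k] for k in range(K)])    # I x KJ
    Y2 = np.hstack([Y[i, :, :] for i in range(I)])    # J x IK
    Y3 = np.hstack([Y[:, j, :].T for j in range(J)])  # K x JI
    rng = np.random.default_rng(0)
    H = rng.standard_normal((I, R)) + 1j * rng.standard_normal((I, R))
    A = rng.standard_normal((K, R)) + 1j * rng.standard_normal((K, R))
    prev = np.inf
    for _ in range(max_iter):
        S = Y2 @ np.linalg.pinv(khatri_rao(H, A).T)   # update (1)
        H = Y1 @ np.linalg.pinv(khatri_rao(A, S).T)   # update (2)
        A = Y3 @ np.linalg.pinv(khatri_rao(S, H).T)   # update (3)
        cost = np.linalg.norm(Y1 - H @ khatri_rao(A, S).T)  # residual norm
        if abs(prev - cost) < eps:                    # stop on stagnation
            break
        prev = cost
    return A, S, H
```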

SLIDE 15

Convergence of ALS

[Figure: ALS convergence on an « easy » problem vs. a « difficult » problem; BCD-(L,P,.) with I=8, J=50, K=6, L=2, P=2, R=3. The difficult case exhibits a long swamp.]

Because of the long swamps that might occur, we propose 2 algorithms that improve the convergence speed.

Part II: Algorithms

SLIDE 16

Algorithm 2: Insert a Line Search step in ALS

At each iteration, perform a linear interpolation of the 3 components A, H and S from their values at the 2 previous iterations.

Iteration k:

1. Line Search (extrapolate along the search directions):

  $\hat{S}_{new} = \hat{S}^{(k-2)} + \rho \, (\hat{S}^{(k-1)} - \hat{S}^{(k-2)})$
  $\hat{A}_{new} = \hat{A}^{(k-2)} + \rho \, (\hat{A}^{(k-1)} - \hat{A}^{(k-2)})$
  $\hat{H}_{new} = \hat{H}^{(k-2)} + \rho \, (\hat{H}^{(k-1)} - \hat{H}^{(k-2)})$

2. ALS update, as in Algorithm 1 but starting from the interpolated matrices:

  $\hat{S}^{(k)} = Y_{J \times IK} \cdot Z_1(\hat{A}_{new}, \hat{H}_{new})^{\dagger}$, etc., then $k \leftarrow k + 1$.

The choice of the step $\rho$ is important: it can be optimally calculated with « Enhanced Line Search with Complex Step » (ELSCS).

Part II: Algorithms
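A minimal sketch of the extrapolation step; the step $\rho$ is a placeholder here, since in practice it is either kept fixed or computed optimally by ELSCS (not shown).

```python
# Sketch of the line-search step: extrapolate each factor along its
# search direction (difference of the two previous iterates) before
# the ALS pass. rho is a placeholder for the ELSCS-optimized step.
def extrapolate(F_km2, F_km1, rho):
    return F_km2 + rho * (F_km1 - F_km2)

# inside iteration k, before the three least-squares updates:
# S_new = extrapolate(S_prev2, S_prev1, rho)
# A_new = extrapolate(A_prev2, A_prev1, rho)
# H_new = extrapolate(H_prev2, H_prev1, rho)
```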

SLIDE 17

Algorithm 3: LM « Levenberg-Marquardt »

Concatenate the vectorized unknowns vec(A), vec(H) and s in a long vector p.

Update of p:

  (1) $p^{(k+1)} = p^{(k)} + \Delta p^{(k)}$
  (2) Gauss-Newton: $(J^H J) \, \Delta p^{(k)} = -g$
  (3) Levenberg-Marquardt: $(J^H J + \lambda I) \, \Delta p^{(k)} = -g$

The matrix $(J^H J + \lambda I)$ is positive definite: solve (3) by Cholesky decomposition and Gaussian elimination.

According to the condition number of $J^H J + \lambda I$, update $\lambda$ at each iteration:
  • If ill-conditioned, increase $\lambda$: get closer to a gradient-descent update, $\Delta p^{(k)} \approx -\frac{1}{\lambda} g$
  • If well-conditioned, decrease $\lambda$: get closer to a Gauss-Newton update, $\Delta p^{(k)} \approx -(J^H J)^{-1} g$

Part II: Algorithms
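A sketch of one damped update, solving (3) by Cholesky as stated on the slide; building $J^H J$ and $g$ from the model Jacobian is not shown, and the conditioning threshold and the factor 10 are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch of one LM update: solve (J^H J + lambda*I) dp = -g by Cholesky
# (the matrix is positive definite), then adapt lambda.
def lm_step(JhJ, g, lam):
    M = JhJ + lam * np.eye(JhJ.shape[0])
    dp = cho_solve(cho_factor(M), -g)   # Cholesky solve of system (3)
    if np.linalg.cond(M) > 1e8:         # ill-conditioned:
        lam *= 10.0                     # damp more, toward gradient descent
    else:                               # well-conditioned:
        lam /= 10.0                     # damp less, toward Gauss-Newton
    return dp, lam                      # then p <- p + dp
```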

SLIDE 18

Convergence of the algorithms ALS, ALS+LS and LM

[Figure: convergence curves on an « easy » problem and a « difficult » problem, with gradient-descent and Gauss-Newton (quadratic convergence) behavior indicated.]

LM and ALS+ELSCS converge much faster than standard ALS, especially for difficult problems: the length of the swamps is considerably reduced.

Part II: Algorithms

SLIDE 19

Introduction
I. Tensor Decompositions
II. Algorithms to compute Tensor Decompositions
III. Simulation Results
Conclusion and Perspectives

SLIDE 20

Impact of number of antennas

BCD-(L,P,.) with: spreading factor I=12, J=100 symbols, L=2 interfering symbols, P=2 paths per user, 10 random initializations, + AWGN. Two configurations compared: K=4 antennas with R=5 users, and K=6 antennas with R=3 users.

Part III: Simulation Results

SLIDE 21

Impact of Near-Far effect

The received tensor is built as $\mathcal{Y} = \sum_{r=1}^{R} \alpha_r \, \mathcal{Y}_r + \text{noise}$, with near-far factor

$\kappa(\alpha) = \frac{\max_r(\alpha_r)}{\min_r(\alpha_r)}$

compared for $\kappa(\alpha) = 1$ and $\kappa(\alpha) = 5$.

BCD-(L,L,1) with spreading factor I=4, J=100 symbols, K=4 antennas, L=2 interfering symbols, R=5 users and 10 random initializations, + AWGN.
Note: more users than antennas (R > K) and overloaded system (R > I).

Part III: Simulation Results
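A small sketch of how such a near-far profile could be imposed in simulation; drawing the amplitudes uniformly and pinning the extremes are assumptions.

```python
import numpy as np

# Sketch: scale the R user contributions Y_r so that the near-far factor
# kappa = max(alpha) / min(alpha) takes a prescribed value.
def apply_near_far(Y_users, kappa, rng):
    alpha = rng.uniform(1.0, kappa, size=len(Y_users))
    i_min, i_max = np.argmin(alpha), np.argmax(alpha)
    alpha[i_min], alpha[i_max] = 1.0, kappa   # pin so max/min == kappa
    return sum(a * Yr for a, Yr in zip(alpha, Y_users))
```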

SLIDE 22

Over-estimation of the number of paths P

The tensor $\mathcal{Y}$ is built with P=3 paths for each user. The decomposition is calculated with over-estimation of P (P=4 and P=5) and with under-estimation of P (P=2). [Figure: MSE of the symbol matrix vs. SNR.]

SLIDE 23

Conclusion

Tensor models: the PARAFAC receiver is adequate if there is a single path (instantaneous mixture); the BCD receivers handle multipath + ISI (blind separation and equalization).

Approach: deterministic; exploits the multilinearity of the received signal, i.e. the algebraic structure of the tensor of observations. 1 diversity = 1 dimension of this tensor.

Algorithms: standard ALS is sensitive to the swamps that appear with ill-conditioned data or a severe near-far effect; ALS+ELSCS and LM offer much better performance.

Performance: blind BCD receivers come potentially very close to MMSE, provided that enough diversity is exploitable.

Uniqueness (not in this talk): the maximum number of users admissible in the system depends on the dimensions of the problem.