

  1. Tensor-Based Models for Blind DS-CDMA Receivers, by Dimitri Nion and Lieven De Lathauwer, ETIS Lab., CNRS UMR 8051, 6 avenue du Ponceau, 95014 Cergy, France. ASILOMAR 2007, November 4-7, 2007, Pacific Grove, USA.

  2. Context
- Research area: Blind Source Separation (BSS)
- Application: wireless communications (DS-CDMA system here)
- System: multiuser DS-CDMA, uplink, antenna-array receiver
- Propagation scenarios:
  P1: instantaneous channel (single path)
  P2: multipath channel with Inter-Symbol Interference (ISI) and far-field reflections only (from the receiver's point of view)
  P3: multipath channel (ISI) with reflections not only in the far field (specular channel model)
- Assumptions: no knowledge of the channel, the CDMA codes, the noise level, or the antenna-array response (BLIND approach)
- Objective: estimate each user's symbol sequence
- Method: deterministic, relying on multilinear algebra. How? Store the observations in a third-order tensor and decompose it into a sum of the users' contributions.
- Idea: the tensor model is "richer" than a matrix model.

  3. Introduction: DS-CDMA System, Cooperative vs. Blind
Users 1 to R transmit through channels 1 to R; the receiver observes the outputs y_k(t) of its antennas.
Cooperative case: the spreading codes and the channels are known, and the receiver performs equalization and separation.
Blind case: they are unknown.

  4. Introduction: Blind Approach, Why?
Several motivations among others:
- Elimination or reduction of the learning frames: more than 40% of the transmission rate is devoted to training in UMTS.
- Training is not efficient in case of severe multipath fading or fast time-varying channels.
- Applications: eavesdropping, source localization, …
- Useful if the learning sequence is unavailable or only partially received.

  5. Introduction: Blind Approach, How? (1)
K receive antennas, chip-rate sampling, observation over J·T_s, where T_s = symbol period and I = spreading factor.
Build the third-order tensor of observations Y, of size I × J × K, which combines three diversities: code diversity (dimension I), temporal diversity (dimension J) and spatial diversity (dimension K).
Numerical processing: blind equalization and separation are performed by decomposition of Y.

  6. Introduction: Blind Approach, How? (2)
Decomposition of Y: a sum of the R users' contributions, Y = Y_1 + … + Y_R, each of size I × J × K.
Three questions:
- Algebraic structure of Y_r? It differs according to the propagation scenario: build different tensor decompositions (Part I).
- Estimation of Y_r? Build algorithms to compute the tensor decompositions (Part II).
- Identifiability of Y_r? Uniqueness of the tensor decompositions, constraints on the number of users (not in this talk).
Goal: blind separation and equalization.

  7. Introduction: Outline
I. Tensor Decompositions
  1. Single path only (instantaneous channel): PARAFAC decomposition
  2. Multipath channel with ISI and far-field reflections only: Block Component Decomposition in rank-(L,L,1) terms, BCD-(L,L,1)
  3. Multipath channel with ISI and reflections not only in the far field: Block Component Decomposition in rank-(L,P,.) terms, BCD-(L,P,.)
II. Algorithms to Compute Tensor Decompositions
III. Simulation Results
Conclusion and Perspectives

  8. Part I: Tensor Decompositions, PARAFAC Decomposition
If single path only (instantaneous mixture), Y follows a PARAFAC decomposition [Sidiropoulos, Giannakis & Bro, 2000].
Analytic model:
  y_ijk = Σ_{r=1}^{R} h_r c_ir s_jr a_kr
Algebraic model: Y = Y_1 + … + Y_R, where Y_r (I × J × K) is the rank-1 contribution of user r, built from:
- c_r: the I chips of the r-th user's spreading code
- s_r: the J consecutive symbols transmitted by user r
- a_r: the response of the K antennas
- h_r: the fading factor of the instantaneous channel
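As a concrete illustration, the analytic model above can be simulated in a few lines of NumPy. All dimensions and the random factors below are illustrative assumptions, not values from the talk.

```python
# Build a noiseless PARAFAC observation tensor
#   y_ijk = sum_r h_r * c_ir * s_jr * a_kr
# from randomly drawn factors (sizes are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 8, 50, 4, 3                  # spreading factor, symbols, antennas, users

C = rng.standard_normal((I, R))           # spreading codes (column r = user r)
S = np.sign(rng.standard_normal((J, R)))  # BPSK symbol sequences
A = rng.standard_normal((K, R))           # antenna-array responses
h = rng.standard_normal(R)                # per-user fading factors

Y = np.einsum('r,ir,jr,kr->ijk', h, C, S, A)  # the PARAFAC model
```

Each term of the sum is a rank-1 tensor, which is what makes blind recovery of the factors possible under mild conditions.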

  9. Part I: Tensor Decompositions, BCD-(L,L,1)
If multipath in the far field + ISI, Y follows a "Block Component Decomposition in rank-(L,L,1) terms", BCD-(L,L,1) [De Lathauwer & De Baynast, 2003], [Nion & De Lathauwer, SPAWC 2007].
Analytic model:
  y_ijk = Σ_{r=1}^{R} Σ_{l=1}^{L} a_kr h_r(i + (l-1)I) s^(r)_{j-l+1}
with L interfering symbols.
Algebraic model: Y = Σ_{r=1}^{R} (H_r S_r^T) ∘ a_r, where H_r (I × L) holds the channel, S_r (J × L) holds the symbols with a Toeplitz structure because of ISI (shifted copies s_0 s_1 … s_{J-1}; s_{-1} s_0 … s_{J-2}; …), and a_r holds the K antenna responses.
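The Toeplitz structure of S_r can be made concrete with a short sketch. Zero-padding for the symbols preceding the frame (s_{-1}, …) is an assumption made here for simplicity.

```python
# Build the J x L Toeplitz symbol matrix of the BCD-(L,L,1) model:
# column l holds the symbol sequence delayed by l symbols (the ISI).
import numpy as np

def toeplitz_symbols(s, L):
    """s: length-J symbol sequence s_0 ... s_{J-1}. Unknown preceding
    symbols s_{-1}, s_{-2}, ... are set to 0 here (an assumption)."""
    J = len(s)
    S = np.zeros((J, L), dtype=float)
    for l in range(L):
        S[l:, l] = s[:J - l]   # delayed copy of the sequence
    return S

s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
S2 = toeplitz_symbols(s, L=2)  # column 0: s ; column 1: s shifted down by 1
```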

  10. Part I: Tensor Decompositions, BCD-(L,P,.)
If multipath not only in the far field + ISI, Y follows a BCD-(L,P,.) [Nion & De Lathauwer, ICASSP 2005].
Analytic model:
  y_ijk = Σ_{r=1}^{R} Σ_{p=1}^{P} Σ_{l=1}^{L} a_k(θ_rp) h_rp(i + (l-1)I) s^(r)_{j-l+1}
One path = one delay, one angle of arrival and one fading coefficient; each user has P paths.
Algebraic model: Y = Σ_{r=1}^{R} Y_r, where each user's contribution is built from A_r (K × P, the array responses of the P paths), H_r (the channel) and S_r (J × L, Toeplitz structure because of ISI).

  11. Part I: Tensor Decompositions, Unknowns for Each Decomposition
PARAFAC:      H ∈ C^(I×R),   S ∈ C^(J×R),                   A ∈ C^(K×R)
BCD-(L,L,1):  H ∈ C^(I×RL),  S ∈ C^(J×RL) (block-Toeplitz), A ∈ C^(K×R)
BCD-(L,P,.):  H ∈ C^(I×RPL), S ∈ C^(J×RL) (block-Toeplitz), A ∈ C^(K×RP)

  12. Outline
Introduction
I. Tensor Decompositions
II. Algorithms to Compute Tensor Decompositions
  1. Algorithm 1: ALS ("Alternating Least Squares")
  2. Algorithm 2: ALS + LS ("Line Search")
  3. Algorithm 3: LM ("Levenberg-Marquardt")
III. Simulation Results
Conclusion and Perspectives

  13. Part II: Algorithms, Objective of the Proposed Algorithms
Decomposition of Y: estimate the components A, S and H by minimizing the Frobenius norm of the residuals. Cost function:
  Φ = || Y − Tens(Ĥ, Ŝ, Â) ||_F^2,  with Tens = PARAFAC or BCD-(L,L,1) or BCD-(L,P,.)
Useful tool: "matricize" the tensor of observations. Three matrix representations of the same tensor:
  Y_{I×KJ} = cat_k [Y_k]  (code diversity along the rows)
  Y_{J×IK} = cat_i [Y_i]  (temporal diversity along the rows)
  Y_{K×JI} = cat_j [Y_j]  (spatial diversity along the rows)
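The three matricizations can be sketched directly with NumPy reshapes. The ordering of the concatenated indices inside each unfolding is an assumption; the slides only fix which diversity runs along the rows.

```python
# The three matricizations ("unfoldings") of an I x J x K tensor.
import numpy as np

Y = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # I=2, J=3, K=4

Y1 = Y.transpose(0, 2, 1).reshape(2, 4 * 3)  # I x (KJ): code diversity
Y2 = Y.transpose(1, 0, 2).reshape(3, 2 * 4)  # J x (IK): temporal diversity
Y3 = Y.transpose(2, 1, 0).reshape(4, 3 * 2)  # K x (JI): spatial diversity
```

All three matrices contain exactly the same entries as Y; only the arrangement changes, which is what lets each ALS sub-problem become an ordinary linear least-squares problem.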

  14. Part II: Algorithms, Algorithm 1: ALS ("Alternating Least Squares")
Principle: alternate between least-squares updates of the 3 matrices A = [A_1, …, A_R], S = [S_1, …, S_R] and H = [H_1, …, H_R].
  Initialization: Â^(0), Ĥ^(0), k = 1
  while |Φ^(k−1) − Φ^(k)| > ε (e.g. ε = 10^−6):
    Ŝ^(k) = Y_{J×IK} · Z_1(Â^(k−1), Ĥ^(k−1))
    Ĥ^(k) = Y_{I×KJ} · Z_2(Ŝ^(k), Â^(k−1))
    Â^(k) = Y_{K×JI} · Z_3(Ĥ^(k), Ŝ^(k))
    k ← k + 1
where each Z_i is the pseudo-inverse of the structured matrix built from the two factors that are held fixed.
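A minimal sketch of ALS for the PARAFAC case is below (the updates for the block decompositions additionally enforce the Toeplitz structure, which is omitted here). The initialization, iteration count and fixed stopping rule are illustrative assumptions, not the authors' implementation.

```python
# ALS for the PARAFAC model Y ~= sum_r c_r o s_r o a_r: each factor is
# updated by linear least squares via a Khatri-Rao product and a
# pseudo-inverse, with the other two factors held fixed.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: (I*K) x R from I x R and K x R."""
    I, R = U.shape
    K = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(I * K, R)

def parafac_als(Y, R, n_iter=200):
    I, J, K = Y.shape
    rng = np.random.default_rng(0)
    C = rng.standard_normal((I, R))              # random initialization
    S = rng.standard_normal((J, R))
    A = rng.standard_normal((K, R))
    Y1 = Y.reshape(I, J * K)                     # mode-1 unfolding
    Y2 = Y.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    Y3 = Y.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        C = Y1 @ np.linalg.pinv(khatri_rao(S, A)).T
        S = Y2 @ np.linalg.pinv(khatri_rao(C, A)).T
        A = Y3 @ np.linalg.pinv(khatri_rao(C, S)).T
    return C, S, A
```

On noiseless data of exact rank R, these alternating updates drive the Frobenius-norm cost monotonically toward zero, which is the behavior the convergence slide discusses.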

  15. Part II: Algorithms, Convergence of ALS
"Easy" problem vs. "difficult" problem: for a BCD-(L,P,.) with I=8, J=50, K=6, L=2, P=2, R=3, ALS can go through a long swamp (many iterations with almost no decrease of the cost function).
Because of the long swamps that might occur, we propose 2 algorithms that improve the convergence speed.

  16. Part II: Algorithms, Algorithm 2: Insert a Line Search Step in ALS
At each iteration, perform a linear interpolation of the 3 components A, H and S from their values at the 2 previous iterations.
1. Line Search (along the search directions):
  S_new = Ŝ^(k−2) + ρ (Ŝ^(k−1) − Ŝ^(k−2))
  A_new = Â^(k−2) + ρ (Â^(k−1) − Â^(k−2))
  H_new = Ĥ^(k−2) + ρ (Ĥ^(k−1) − Ĥ^(k−2))
The choice of the step ρ is important; it can be optimally calculated with the "Enhanced Line Search with Complex Step" (ELSCS).
2. ALS update, starting from the interpolated matrices:
  Ŝ^(k) = Y_{J×IK} · Z_1(A_new, H_new)
  Ĥ^(k) = Y_{I×KJ} · Z_2(Ŝ^(k), A_new)
  Â^(k) = Y_{K×JI} · Z_3(Ĥ^(k), Ŝ^(k))
  k ← k + 1
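The extrapolation step itself is a one-liner per factor. The sketch below uses a fixed step ρ for simplicity; the talk computes it optimally via ELSCS, which is not reproduced here.

```python
# Line-search extrapolation preceding each ALS update:
#   X_new = X^(k-2) + rho * (X^(k-1) - X^(k-2))
# applied to every factor matrix (A, H, S).
import numpy as np

def line_search_step(prev2, prev1, rho=1.25):
    """prev2, prev1: lists of factor matrices at iterations k-2 and k-1.
    rho is a fixed step here (an assumption; ELSCS would optimize it)."""
    return [X2 + rho * (X1 - X2) for X2, X1 in zip(prev2, prev1)]
```

With ρ = 1 the step reduces to plain ALS restarted from the previous iterate; larger (or complex) ρ lets the iterates jump across a swamp instead of crawling through it.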
