SLIDE 1

Short Overview on Blind Equalization

Philippe Ciblat

Télécom ParisTech

SLIDE 2

Introduction Statistics HOS SOS Other Simulations Ccl and Refs

Outline

  • 1. Introduction
    − General problem
    − Problem classification
    − Considered problem
  • 2. Statistical framework
  • 3. High-Order Statistics (HOS) algorithms
    − Constant Modulus Algorithm (CMA)
    − Adaptive versions
  • 4. Second-Order Statistics (SOS) algorithms
    − Covariance matching algorithm
    − (Deterministic) Maximum-Likelihood algorithm
    − Some sub-optimal algorithms
  • 5. Other types of algorithms
  • 6. Numerical illustrations: optical-fiber communications use-case

Philippe Ciblat (Télécom ParisTech) Short overview on Blind Equalization 2 / 45

SLIDE 3

Part 1: Introduction

SLIDE 4

General model

Unknown signal mixture with additive noise:

y(n) = fct(s(n)) + w(n)   (1)

with
  • y(n): observation vector at time-index n
  • w(n): zero-mean white Gaussian noise

Find the multivariate input s(n) given
  − only a set of observations y(n)
  − a statistical model for the noise

Blind techniques: fct is unknown, and no deterministic knowledge of s(n) is available to help estimate it

SLIDE 5

Problem classification

s(n) belongs to a discrete set: equalization

− Military applications: passive listening
− Civilian applications: no training sequence
  • Goal 1: remove the header and increase the data rate (be careful: the raw data rate stays the same)
  • Goal 2: follow very fast variations of the wireless channel (be careful: the set of observations is small)

s(n) belongs to an uncountable set: source separation

− Audio (cocktail-party problem)
− Hyperspectral imaging
− Cosmology (Cosmic Microwave Background map with Planck data)

SLIDE 6

Problem classification (cont’d)

In the context of Blind Source Separation (BSS):
  • Instantaneous mixture: y(n) = H s(n) + w(n) with an unknown matrix H
  • Convolutive mixture: y(n) = Σ_{ℓ=0}^{L} H(ℓ) s(n − ℓ) + w(n) with an unknown set of matrices H(ℓ)
  • Nonlinear mixture: fct is not linear

BSS field
  • Vast community, mainly working on the instantaneous case
  • Goal: find s(n) up to scale and permutation operators

SLIDE 7

Considered Problem

Go back to equalization (done in a blind manner)
Unlike BSS, the sources are strongly structured:
  • discrete set (often a lattice, i.e., a Z-module)
  • discrete set with specific properties: constant modulus for PSK
  • man-made source (which can even be modified to help the blind equalization step)
⇒ a classification problem rather than a regression problem

First questions
  • Do we have an input/output model given by Eq. (1)?
  • If yes, what is the shape of the mixture given by fct?

SLIDE 8

Signal model

  • Single-user context
  • Single-antenna context
  • Multipath propagation channel

Equivalent discrete-time channel model (by sampling the EM wave at the symbol rate):

y(n) = Σ_ℓ h(ℓ) s(n − ℓ) + w(n), ∀n = 0, …, N − 1  ⇔  y = Hs + w

where H is a band-Toeplitz matrix and N is the frame size
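The band-Toeplitz structure can be made concrete with a short NumPy sketch; the channel taps, frame size, and noise level below are illustrative choices, not values from the slides:

```python
import numpy as np

def channel_matrix(h, N):
    # Band-Toeplitz H such that y = H s + w, with s = [s(-L), ..., s(N-1)]
    # and y(n) = sum_l h(l) s(n - l); h = [h(0), ..., h(L)]
    L = len(h) - 1
    H = np.zeros((N, N + L))
    for n in range(N):
        H[n, n:n + L + 1] = h[::-1]
    return H

h = np.array([1.0, 0.5, 0.2])                      # hypothetical 3-tap channel
N = 8
H = channel_matrix(h, N)

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=N + len(h) - 1)   # BPSK symbols
y = H @ s + 0.01 * rng.standard_normal(N)          # y = H s + w

# Row n of H reproduces the convolution sum y(n) = sum_l h(l) s(n - l)
L = len(h) - 1
y3 = sum(h[l] * s[3 + L - l] for l in range(L + 1))
assert np.isclose((H @ s)[3], y3)
```

Each row of H is the reversed tap vector shifted by one column, which is exactly the band-Toeplitz pattern of the slide.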

SLIDE 9

Signal model (cont’d)

Sampling at the symbol rate leads to
  • no information loss on the symbol sequence
  • but information loss on the electromagnetic wave, and probably on the channel impulse response (our goal here)

Go back to the “true” received signal:

y(t) = Σ_n s(n) h(t − nTs) + w(t), ∀t ∈ R

with
  • s(n): symbol sequence
  • w(t): white Gaussian noise
  • h(t): filter coming from the channel and the transmitter

Occupied band: [−(1 + ρ)/(2Ts), (1 + ρ)/(2Ts)] with roll-off factor ρ ∈ (0, 1]

SLIDE 10

Signal framework

Shannon–Nyquist sampling theorem ⇒ T = Ts/2

Scalar framework: no filtering anymore
ỹ(n) = y(nT) = Σ_k s(k) h(nTs/2 − kTs) + w̃(n)

Vector framework: SIMO filtering
y1(n) = y(nTs) = h1 ⋆ s(n) + w1(n)
y2(n) = y(nTs + Ts/2) = h2 ⋆ s(n) + w2(n)
with h1(n) = h(nTs) and h2(n) = h(nTs + Ts/2)

[Figure: block diagram — sampling h(t) at rate Ts with offsets 0 and Ts/2 turns one physical channel into two subchannels h1 and h2 driven by the same s(n)]
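This polyphase/SIMO equivalence is quick to verify numerically; the half-rate taps for h̃ below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Half-symbol-rate discrete channel h̃ (samples of h(t) at t = k·Ts/2) — illustrative taps
h_half = np.array([0.2, 1.0, 0.7, 0.3, 0.1, 0.05])

# Polyphase split: h1(n) = h(nTs), h2(n) = h(nTs + Ts/2)
h1, h2 = h_half[0::2], h_half[1::2]

# BPSK symbols; build the Ts/2-rate signal by zero-stuffing (s̃ = s0, 0, s1, 0, ...)
s = rng.choice([-1.0, 1.0], size=50)
s_up = np.zeros(2 * len(s))
s_up[::2] = s
y_full = np.convolve(s_up, h_half)   # ỹ = h̃ ⋆ s̃ (noise omitted for clarity)

# The two Ts-rate subsequences equal the SIMO outputs h1 ⋆ s and h2 ⋆ s
y1 = np.convolve(s, h1)
y2 = np.convolve(s, h2)
assert np.allclose(y_full[0::2][:len(s)], y1[:len(s)])
assert np.allclose(y_full[1::2][:len(s)], y2[:len(s)])
```

Interleaving the outputs of the two subchannels reconstructs the full Ts/2-rate stream exactly.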

SLIDE 11

Problems to be solved

Goals: estimate
  1. Scalar case: h1 given y1(n) only, and h2 given y2(n) only, i.e., working with the model of Slide 7
  2. Vector case: h = [h1, h2]^T given y(n) = [y1(n), y2(n)]^T jointly

Glossary
  • without training sequence: Non-Data-Aided (NDA), or blind/unsupervised
  • with training sequence: Data-Aided (DA), or supervised
  • with decision feedback: Decision-Directed (DD)

SLIDE 12

Part 2: Statistical framework

SLIDE 13

Available data statistics

Only {y(n)}_{n=0}^{N−1} is available to estimate H

What is an algorithm here?
  • a function depending only on {y(n)}_{n=0}^{N−1} ...
  • ... i.e., a statistic Θ({y(n)}_{n=0}^{N−1}) of the random process y(n)

Choice of Θ:
  • a P-order polynomial: moments of the random process
    Question: which orders are relevant? (this talk)
  • a Deep Neural Network (DNN)
    Question: how to compute the weights? (see Slide 37)

SLIDE 14

A not-so-toy example

y(n) = H s(n) + w(n) with
  • y(n) a vector of length L
  • H an L × L full-rank square matrix
  • s(n), w(n) i.i.d. circularly-symmetric zero-mean Gaussian vectors with variances σs² and σw² respectively

Results
  • y(n) is Gaussian with zero mean and correlation matrix R(H) = σs² H H^H + σw² Id_L
  • R(H) = R(HU) for any unitary matrix U
⇒ Principal Component Analysis (PCA) is a deadlock
⇒ s(n) has to be non-Gaussian ⇒ Independent Component Analysis (ICA)
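The unitary ambiguity R(H) = R(HU) is easy to check numerically (dimension, variances, and the random H are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
H = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))  # full rank w.h.p.

# Random unitary U from a QR decomposition
U, _ = np.linalg.qr(rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))

sigma_s2, sigma_w2 = 1.0, 0.1
R1 = sigma_s2 * H @ H.conj().T + sigma_w2 * np.eye(L)
R2 = sigma_s2 * (H @ U) @ (H @ U).conj().T + sigma_w2 * np.eye(L)

# Identical correlation matrices: second-order statistics cannot tell H from HU
assert np.allclose(R1, R2)
```

Since U U^H = Id, the two mixing matrices are indistinguishable from the covariance alone, which is exactly why PCA stalls here.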

SLIDE 15

Scalar case

Go back to blind equalization: y(n) = h ⋆ s(n) + w(n)
As y(n) is stationary, the second-order information lies in

S(e^{2iπf}) = Σ_m r(m) e^{−2iπfm} = σs² |h(e^{2iπf})|² + σw²

with
  • r(m) = E[y(n + m) y(n)]
  • h(z) = Σ_ℓ h(ℓ) z^{−ℓ}, with z = e^{2iπf}

Results
Lack of information on the channel impulse response, except if
  • h(z) is minimum phase (h(z) ≠ 0 for |z| > 1), or
  • the signal is non-stationary, or
  • the signal is non-Gaussian (by resorting to higher-order statistics): OK for PAM, PSK, QAM sources

SLIDE 16

Scalar case: the pavement of the HOS road

Let X = [X1, …, XN] be a real-valued random vector of length N.

Characteristic function of the first kind (MGF):
Ψ_X : ω ↦ E[e^{iω^T X}] = ∫ p_X(x) e^{iω^T x} dx
Moments (of order s) ∝ s-th order terms of the Taylor series expansion of Ψ_X
Example: N = 2; second order means E[X1²], E[X2²], and E[X1 X2]

Characteristic function of the second kind (CGF):
Φ_X : ω ↦ log(Ψ_X(ω))
Cumulants (of order s) ∝ s-th order terms of the Taylor series expansion of Φ_X

SLIDE 17

Useful properties

Why cumulants?
  • Let X and Y be independent vectors: Ψ_{[X,Y]}(ω) = Ψ_X(ω1)·Ψ_Y(ω2), but Φ_{[X,Y]}(ω) = Φ_X(ω1) + Φ_Y(ω2)
  • Let X = [X1, …, XN] and Y = [Y1, …, YN] be independent vectors:
    cum_s(X_{i1} + Y_{i1}, …, X_{is} + Y_{is}) = cum_s(X_{i1}, …, X_{is}) + cum_s(Y_{i1}, …, Y_{is})
  • If X = [X1, …, XN] has at least two independent components: cum_N(X1, …, XN) = 0
  • If X = [X1, …, XN] is a Gaussian vector: cum_s(X_{i1}, …, X_{is}) = 0 if s ≥ 3

Remarks
  • No HOS information for a Gaussian vector
  • “Distance” to the Gaussian distribution ⇒ (normalized) kurtosis κ_x = cum4(x, x, x, x) / (E[|x|²])²
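A sketch of how the normalized kurtosis separates a constant-modulus source from a Gaussian one; the estimator below assumes a zero-mean circularly-symmetric complex signal, for which cum4 = E|x|⁴ − 2(E|x|²)² − |E[x²]|²:

```python
import numpy as np

def normalized_kurtosis(x):
    # Fourth-order cumulant of a zero-mean circular complex signal,
    # normalized by the squared power (E|x|^2)^2
    p2 = np.mean(np.abs(x) ** 2)
    c4 = np.mean(np.abs(x) ** 4) - 2 * p2 ** 2 - np.abs(np.mean(x ** 2)) ** 2
    return c4 / p2 ** 2

rng = np.random.default_rng(0)
N = 200000
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
gauss = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

print(normalized_kurtosis(qpsk))   # theoretical value: -1 (constant-modulus source)
print(normalized_kurtosis(gauss))  # theoretical value: 0 (circular Gaussian)
```

The Gaussian estimate fluctuates around zero, consistent with the vanishing of all cumulants of order ≥ 3.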

SLIDE 18

Fourth-order information: the trispectrum

S4(e^{2iπf1}, e^{2iπf2}, e^{2iπf3}) = Σ_{m1,m2,m3} cum4(m1, m2, m3) e^{−2iπ(f1 m1 + f2 m2 + f3 m3)}
 = κ_s h(e^{2iπf1}) h(e^{2iπf2}) h(e^{2iπf3}) h(e^{2iπ(−f1+f2+f3)})

with cum4(m1, m2, m3) = cum(y(n), y(n + m1), y(n − m2), y(n − m3))

Remarks
  • The trispectrum provides enough information on the channel impulse response
  • Question: how to derive algorithms using it (see Part 3)

SLIDE 19

Vector case

Go back to the signal model: y(n) = h ⋆ s(n) + w(n)
with y(n) = [y1(n), y2(n)]^T and h(n) = [h1(n), h2(n)]^T
Reminder: oversampling, or symbol-rate sampling with two RX antennas
As y(n) is stationary, the second-order information lies in

S(e^{2iπf}) = Σ_m R(m) e^{−2iπfm} = σs² h(e^{2iπf}) h(e^{2iπf})^H

with R(m) = E[y(n + m) y(n)^H] and h(e^{2iπf}) = Σ_ℓ h(ℓ) e^{−2iπfℓ}

Results
  • Unique solution (up to a scalar) if h(z) ≠ 0, ∀z
  • Unrestrictive assumption, since generically h1(z) and h2(z) have no common root, i.e., h1(z) and h2(z) are jointly coprime
  • Enough information on the channel impulse response

SLIDE 20

Vector case: a cyclostationarity point-of-view

Go back to the continuous-time signal model: y(t) = Σ_k s(k) h(t − kTs) + w(t)
Its autocorrelation is periodic in t with period Ts: t ↦ r(t, τ) = E[y(t + τ) y(t)]

Result
  • ỹ(n) is cyclostationary with period Ts/T = 2
  • Denoting s̃ = (s0, 0, s1, 0, …), we have ỹ(n) = h̃ ⋆ s̃(n)
Remark: a cyclostationary discrete-time signal with period 1 is stationary

SLIDE 21

Cyclostationary second-order information

Fourier series expansion of the correlation:

n ↦ r(n, m) = E[ỹ(n + m) ỹ(n)] = r^(0)(m) + r^(1/2)(m) e^{2iπ(1/2)n}

  • α ∈ {0, 1/2}: cyclic frequencies
  • {r^(α)(m)}_m: cyclic correlation at cyclic frequency α
  • S^(α)(e^{2iπf}) = Σ_m r^(α)(m) e^{−2iπfm}: cyclic spectrum at cyclic frequency α

Results
S^(0)(e^{2iπf}) = σs² |h̃(e^{2iπf})|²,  S^(1/2)(e^{2iπf}) = σs² h̃(e^{2iπf}) h̃(e^{2iπ(f+1/2)})
  • Cyclic spectra provide enough information on the channel impulse response
  • Question: how to derive algorithms using them (see Part 4)
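A small numerical check that the half-symbol-rate stream carries a cyclic component at α = 1/2 while a symbol-rate stream does not; the taps of h̃ and the noise level are made up, and a real-valued BPSK source keeps the correlation definition above literal:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100000
s = rng.choice([-1.0, 1.0], size=N)

# Ts/2-rate stream: zero-stuffed symbols filtered by h-tilde (illustrative taps)
h = np.array([1.0, 0.8, 0.4, 0.2])
s_up = np.zeros(2 * N)
s_up[::2] = s
y = np.convolve(s_up, h)[:2 * N] + 0.05 * rng.standard_normal(2 * N)

# Cyclic correlation at lag m = 0: time-average of y(n)^2 e^{-2i pi alpha n}
n = np.arange(2 * N)
r0 = np.mean(y ** 2)                                       # alpha = 0
r_half = np.mean(y ** 2 * np.exp(-2j * np.pi * 0.5 * n))   # alpha = 1/2

# Control: a symbol-rate (stationary) stream shows no alpha = 1/2 component
y_stat = np.convolve(s, h)[:N]
r_half_stat = np.mean(y_stat ** 2 * np.exp(-2j * np.pi * 0.5 * np.arange(N)))

print(abs(r_half), abs(r_half_stat))   # clearly nonzero vs. near zero
```

The even and odd phases of ỹ have different variances, which is exactly the period-2 cyclostationarity the slide describes.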

SLIDE 22

Take-home message

[Figure: summary diagram — sampling y = h ⋆ s at Ts gives a stationary SISO model (HOS algorithms, ante 1991); sampling at Ts/2 gives a cyclostationary SISO model ỹ = h̃ ⋆ s̃, equivalently, after separate processing of the two phases, a stationary SIMO model (SOS algorithms)]

SLIDE 23

Part 3: High-Order Statistics based Algorithms

SLIDE 24

Principle

Usually the algorithms rely on the blind deconvolution principle, i.e., retrieving the symbol sequence {s(n)}_n directly from {y(n)}_n
Talk done with the stationary SISO model:

min_p E[f(z(n))]

with
  • z(n) = p ⋆ y(n)
  • p the equalizer filter
  • f a nonlinear, nonquadratic cost function

SLIDE 25

Some algorithms

  • Sato algorithm [Sato1975]: J = E[(z(n) − sign(z(n)))²]
  • Constant Modulus Algorithm (CMA) [Godard1980]: J = E[(|z(n)|² − C)²] with C = E[|s_n|⁴]/E[|s_n|²]
  • Kurtosis maximization (KM) [ShalviWeinstein1990]: J = |κ_z| (maximized under a power constraint)

SLIDE 26

Implementation issue

How to find the minimum of J(p) = E[J_n(p)]?

Blockwise processing
  • Replace J(p) with its empirical average over a block of size N:
    Ĵ_N(p) = (1/N) Σ_{n=1}^{N} J_n(p)
  • Gradient algorithm: p_{i+1} = p_i − µ ∂Ĵ_N(p)/∂p |_{p=p_i}

Adaptive processing
  • Replace J(p) with J_n(p) at time/iteration n (LMS, stochastic Newton, stochastic gradient)
  • Stochastic gradient algorithm: p_{n+1} = p_n − µ ∂J_n(p)/∂p |_{p=p_n}

SLIDE 27

Application to CMA

Adaptive implementation:

p_{n+1} = p_n − µ y_{Lp}(n) z(n) (|z(n)|² − C) = p_n − µ y_{Lp}(n) (z(n) − F_cma(z(n)))

with
  • y_{Lp}(n) = [y(n), …, y(n − Lp)]^T
  • F_cma(z(n)) = z(n)(1 + C − |z(n)|²); for KM, F_km(z(n)) = z(n)(1 + sgn(κ_s)|z(n)|²)

Special case: training sequence (known s(n)): J = E[|z(n) − s(n)|²]
Adaptive implementation: p_{n+1} = p_n − µ y_{Lp}(n) (z(n) − s(n))
  • s(n) may be replaced by ŝ(n) after initial convergence (DD)
  • in blind mode, s(n) is replaced by F(z(n)), which plays the role of the “training” signal
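A minimal adaptive CMA sketch on QPSK under illustrative assumptions (the two-tap channel, equalizer length, and step size µ are all made up); the update uses the complex-gradient form p ← p − µ (|z|² − C) z̄ y, equivalent to the slide's update up to conjugation conventions:

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK through a mild two-tap channel (illustrative values)
N = 20000
s = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
h = np.array([1.0, 0.4])
y = np.convolve(s, h)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

Lp = 7                                                   # equalizer length (assumption)
C = np.mean(np.abs(s) ** 4) / np.mean(np.abs(s) ** 2)    # = 1 for QPSK
p = np.zeros(Lp, dtype=complex)
p[Lp // 2] = 1.0                                         # center-spike initialization
mu = 2e-3

dispersion = []
for n in range(Lp - 1, N):
    yv = y[n - Lp + 1:n + 1][::-1]     # regressor [y(n), ..., y(n - Lp + 1)]
    z = np.vdot(p, yv)                 # z(n) = p^H y_Lp(n)
    dispersion.append((np.abs(z) ** 2 - C) ** 2)
    p -= mu * (np.abs(z) ** 2 - C) * np.conj(z) * yv     # stochastic gradient step

early, late = np.mean(dispersion[:1000]), np.mean(dispersion[-1000:])
print(early, late)   # the modulus dispersion shrinks once the equalizer opens the eye
```

Note the usual blind caveat: CMA only recovers the constellation up to a phase rotation, which a later DD stage must resolve.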

SLIDE 28

Take-home message

[Figure: adaptive trained equalizer scheme — s(n) passes through the channel h, noise w(n) is added, the equalizer p outputs z(n), a threshold detector decides, and the error z(n) − s(n) drives the adaptation]

[Figure: adaptive blind equalizer scheme — same loop, with the error computed against F(z(n)) instead of a training signal]

SLIDE 29

Part 4: Second-Order Statistics based Algorithms

SLIDE 30

Principle

Usually the algorithms rely on the blind identification principle, i.e., retrieving the filter h = [h(0)^T, …, h(L)^T]^T
Talk done with the stationary SIMO model:

Y_N(n) = T(h) S_{N+L}(n)

with
  • Y_N(n) = [y(n)^T, …, y(n − N)^T]^T
  • S_{N+L}(n) = [s(n), …, s(n − N − L)]^T
  • T(h): the 2(N + 1) × (N + L + 1) band (Sylvester) matrix whose block rows contain [h(0), …, h(L)] shifted by one column per block row

Result
If h(z) ≠ 0, ∀z, and N > L, then T(h) has full column rank and is left-invertible
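A sketch of the Sylvester matrix and its rank; the channel taps are random placeholders, and a generic complex channel has no common subchannel roots, so the full-column-rank condition holds:

```python
import numpy as np

def sylvester(h, N):
    # T(h): 2(N+1) x (N+L+1) filtering matrix of a two-channel SIMO filter
    # h has shape (L+1, 2): one row per tap, one column per subchannel
    L = h.shape[0] - 1
    T = np.zeros((2 * (N + 1), N + L + 1), dtype=complex)
    for i in range(N + 1):
        for l in range(L + 1):
            T[2 * i:2 * i + 2, i + l] = h[l]
    return T

rng = np.random.default_rng(0)
L, N = 3, 5                                # N > L
h = rng.standard_normal((L + 1, 2)) + 1j * rng.standard_normal((L + 1, 2))
T = sylvester(h, N)

print(T.shape)                    # (12, 9)
print(np.linalg.matrix_rank(T))   # full column rank N + L + 1 = 9 for a generic channel
```

Left-invertibility of T(h) is what lets the later SOS algorithms recover S (or h) from second-order quantities alone.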

SLIDE 31

Covariance matrix algorithm

Question: what is the best second-order algorithm?

Let
  • R(h) = E[Y_N(n) Y_N(n)^H] and R̂_Nobs = (1/Nobs) Σ_{n=0}^{Nobs−1} Y_N(n) Y_N(n)^H
  • r(h) = [Re{vec(R(h))}^T, Im{vec(R(h))}^T]^T
  • r̂_Nobs = [Re{vec(R̂_Nobs)}^T, Im{vec(R̂_Nobs)}^T]^T

Result
√Nobs (r̂_Nobs − r(h)) →_D N(0, Γ_h), i.e., r̂_Nobs ≈ r(h) + w_Nobs with w_Nobs a zero-mean Gaussian noise with covariance matrix Γ_h/Nobs
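The CLT statement can be eyeballed numerically; the scalar toy channel, window size, and noise level below are illustrative, and the true covariance is computed analytically for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5])     # illustrative scalar (SISO) channel for brevity
Nwin = 3                     # stack y(n), ..., y(n - Nwin)

def empirical_R(Nobs):
    s = rng.choice([-1.0, 1.0], size=Nobs + 10)
    y = np.convolve(s, h)[:Nobs + 10] + 0.1 * rng.standard_normal(Nobs + 10)
    Y = np.array([y[n - Nwin:n + 1][::-1] for n in range(Nwin, Nwin + Nobs)])
    return (Y[:, :, None] * Y[:, None, :]).mean(axis=0)

# True covariance for this channel: r(0) = 1.25 + 0.01, r(1) = 0.5, r(m >= 2) = 0
r = np.array([1.26, 0.5, 0.0, 0.0])
R_true = np.array([[r[abs(i - j)] for j in range(Nwin + 1)] for i in range(Nwin + 1)])

e_small = np.linalg.norm(empirical_R(1000) - R_true)
e_large = np.linalg.norm(empirical_R(100000) - R_true)
print(e_small, e_large)   # the error shrinks roughly as 1/sqrt(Nobs)
```

This is the fluctuation w_Nobs that the covariance-matching criterion on the next slide weights by Γ_h⁻¹.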

SLIDE 32

Covariance matching algorithm (cont’d)

Maximum likelihood based on r̂_Nobs instead of the raw data Y = Y_Nobs(Nobs):

(1/Nobs) log p(r̂_Nobs | h) ≈ −(r̂_Nobs − r(h))^T Γ_h^{−1} (r̂_Nobs − r(h)) − log(det(Γ_h))/(2Nobs) + constant

Result
ĥ_cm = arg min_h ‖Γ_h^{−1/2} (r̂_Nobs − r(h))‖², with ‖W^{1/2} x‖² = x^H W x
Ping-pong procedure to update Γ_h

SLIDE 33

Maximum Likelihood algorithm

Question: maximum likelihood based on Y
Y = T(h) S + W, with W white zero-mean Gaussian noise and S unknown

  • True ML: max_h Σ_S p(Y|h, S) p(S) — almost always intractable
  • Gaussian ML: max_h ∫ p(Y|h, S) e^{−S^H Γ_s^{−1} S} dS — tractable but not optimal
  • Deterministic ML: max_{h,S} p(Y|h, S) — tractable but not optimal

Deterministic maximum likelihood:

(ĥ, Ŝ)_ML = arg min_{h,S} ‖Y − T(h) S‖²

SLIDE 34

Maximum Likelihood algorithm (cont’d)

Minimization over S (without constraint): Ŝ_ML = (T(h)^H T(h))^{−1} T(h)^H Y
Then minimization over h:

ĥ_ML = arg min_h ‖(Id − T(h)(T(h)^H T(h))^{−1} T(h)^H) Y‖² = arg min_h ‖P⊥_h Y‖²

with P⊥_h the projector onto sp(T(h))⊥, equivalently

ĥ_ML = arg max_h h^H Y^H (T(h)^H T(h))^{−1} Y h

  • Quadratic cost function in Y ⇒ second order is fine
  • Non-quadratic cost function in h ⇒ ping-pong procedure

SLIDE 35

Subspace algorithm: principle

Signal model: y(n) = A(θ) s(n)
Main required property: sp(A(θ)) = sp(A(θ′)) ⇔ θ = θ′
Algorithm main step: θ̂ = arg min_θ distance(vect{y(n)}, sp(A(θ)))

Example 1: source localization (MUSIC)
A(θ) = [a(θ1), …, a(θp)] with a(θ) = [1, e^{2iπθ}, …, e^{2iπ(M−1)θ}]^T (steering vector), M > p

SLIDE 36

Subspace algorithm: application to blind equalization

Y_N(n) = T(h) S_{N+L}(n), i.e., A ↔ T(h) and θ ↔ h

Result
Let T(h′) be the Sylvester matrix associated with h′.
If N ≥ L and h(z) ≠ 0 ∀z ∈ C, then sp(T(h′)) = sp(T(h)) ⇔ h′ = αh for some constant α
Proof: using rational spaces or the C[X]-module structure

SLIDE 37

Subspace algorithm: practical implementation

White source ⇒ R = E[Y Y^H] = T(h) T(h)^H ⇒ sp(R) = sp(T(h))
Let Π be the projector onto Ker(R); then Πx = 0 iff x ∈ sp(R)
Then h is the unique vector (up to scale) such that Π T(h) = 0
In practice, R (resp. Π) is estimated by R̂ (resp. Π̂):

ĥ_ss = arg min_{‖h‖=1} ‖Π̂ T(h)‖² = arg min_{‖h‖=1} h^H Q h
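An end-to-end sketch of the subspace estimator in the noiseless case (channel, N, and L are arbitrary placeholders): build T(h), form R = T T^H, project onto Ker(R), assemble Q from the linear dependence of T(g) on g, and recover h as the bottom eigenvector of Q:

```python
import numpy as np

def sylvester(h, N):
    # T(h): 2(N+1) x (N+L+1) filtering matrix of a two-channel SIMO filter
    L = h.shape[0] - 1
    T = np.zeros((2 * (N + 1), N + L + 1), dtype=complex)
    for i in range(N + 1):
        for l in range(L + 1):
            T[2 * i:2 * i + 2, i + l] = h[l]
    return T

rng = np.random.default_rng(1)
L, N = 2, 4
h = rng.standard_normal((L + 1, 2)) + 1j * rng.standard_normal((L + 1, 2))
T = sylvester(h, N)

# Noiseless white unit-variance source: R = T(h) T(h)^H, so sp(R) = sp(T(h))
R = T @ T.conj().T
U, sv, _ = np.linalg.svd(R)
rank = N + L + 1
Pi = U[:, rank:] @ U[:, rank:].conj().T    # projector onto Ker(R) = sp(T(h))-perp

# ||Pi T(g)||_F^2 = g^H Q g, since column c of T(g) is M_c g (linear in g)
dim = 2 * (L + 1)
Q = np.zeros((dim, dim), dtype=complex)
for c in range(N + L + 1):
    Mc = np.zeros((2 * (N + 1), dim))
    for i in range(N + 1):
        l = c - i
        if 0 <= l <= L:
            Mc[2 * i:2 * i + 2, 2 * l:2 * l + 2] = np.eye(2)
    Q += Mc.T @ Pi @ Mc                    # Pi is Hermitian and idempotent

w, V = np.linalg.eigh(Q)
g = V[:, 0].reshape(L + 1, 2)              # bottom eigenvector = channel direction

# g matches h up to one complex scalar alpha
corr = abs(np.vdot(h.ravel(), g.ravel())) / (np.linalg.norm(h) * np.linalg.norm(g))
print(w[0], corr)   # smallest eigenvalue ~ 0, collinearity ~ 1
```

With noisy data, Π̂ would come from the eigenvectors of R̂ below the signal subspace, and the same quadratic form h^H Q h would be minimized.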

SLIDE 38

Linear Prediction algorithm

If h1(z) and h2(z) have no common root, Bezout's identity holds: there exist polynomials [g1(z), g2(z)] such that g1(z) h1(z) + g2(z) h2(z) = 1

Result: finite-degree MA = finite-degree AR
y(n) is an AR process of order L with innovation i(n) = h(0) s(n), i.e.,

y(n) + Σ_{ℓ=1}^{L} A(ℓ) y(n − ℓ) = i(n)

Algorithm implementation:
  • Solve the Yule–Walker equations (to obtain the A(ℓ), then the h(ℓ)): E[i(n) [y(n − 1)^H, …, y(n − L)^H]] = 0
  • Estimate h(0) from the covariance matrix of the innovation
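The Bezout identity underpinning this result can be solved as a small linear (Sylvester-resultant) system; the two subchannels below are arbitrary coprime examples:

```python
import numpy as np

def conv_matrix(a, n_cols):
    # Toeplitz matrix: conv_matrix(a, k) @ g == np.convolve(a, g) for len(g) == k
    rows = len(a) + n_cols - 1
    M = np.zeros((rows, n_cols))
    for j in range(n_cols):
        M[j:j + len(a), j] = a
    return M

# Two illustrative coprime subchannels (coefficient vectors, constant term first)
h1 = np.array([1.0, 0.5, 0.2])
h2 = np.array([0.8, -0.3, 0.6])

# Solve g1*h1 + g2*h2 = 1 with deg(g1), deg(g2) <= 2 (Sylvester system)
k = 3
A = np.hstack([conv_matrix(h1, k), conv_matrix(h2, k)])   # 5 x 6
b = np.zeros(A.shape[0])
b[0] = 1.0                                                # target polynomial "1"
g = np.linalg.lstsq(A, b, rcond=None)[0]
g1, g2 = g[:k], g[k:]

residual = np.convolve(g1, h1) + np.convolve(g2, h2)
print(np.round(residual, 6))   # ~ [1, 0, 0, 0, 0]
```

The system is solvable exactly because the Sylvester resultant of coprime h1, h2 is nonzero — the same no-common-root condition used throughout the SOS part.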

SLIDE 39

Part 5: Other types of algorithms

SLIDE 40

Semi-blind approach

Combine both criteria:
  • DA (with training sequence)
  • blind/NDA (without training sequence)
as follows:

J(h) = α J_NDA(h) + (1 − α) J_DA(h)

Criteria selection (as an example): J_DA(h): ML; J_NDA(h): subspace algorithm

Result
Improved estimation performance, or reduced training duration

SLIDE 41

Decision directed approach

DA approach, followed by a well-initialized NDA/DD stage
  − with hard decisions
  − with soft decisions (turbo-estimation)

SLIDE 42

Another way: a clustering-based approach (or a step towards Machine Learning)

y(n) = h^T s(n) + w(n), writing c(n) = h^T s(n), with s(n) = [s(n), …, s(n − L)]^T and h = [h(0), …, h(L)]^T
  • y(n) is a point in C, and belongs to the cluster labelled by one value of c
  • K clusters to characterize (where K = card(c) is known)
  • Apply an unsupervised clustering algorithm: K-means
Now, given c, how to retrieve s(n) (with unknown h)?
Hidden Markov Model (HMM) approach:
  • s(n) is a Markov chain: Pr(s(n)|s(n − 1), …) = Pr(s(n)|s(n − 1))
  • c(n) is an observation coming from an unknown Markov-chain state
  • Forward–Backward algorithm to retrieve h
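A toy sketch of the clustering step (BPSK, a made-up two-tap channel, plain Lloyd iterations; quantile initialization avoids empty clusters in this 1-D example):

```python
import numpy as np

rng = np.random.default_rng(0)

# BPSK through a hypothetical 2-tap channel: c(n) = h^T s(n) takes K = 4 values
h = np.array([1.0, 0.45])
S = rng.choice([-1.0, 1.0], size=(5000, 2))
y = S @ h + 0.05 * rng.standard_normal(5000)

# Plain K-means (Lloyd's algorithm) on the scalar observations
K = 4
centers = np.quantile(y, [0.125, 0.375, 0.625, 0.875])   # one seed per cluster
for _ in range(20):
    labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
    centers = np.array([y[labels == k].mean() for k in range(K)])

true_centers = np.sort(np.array([-1.45, -0.55, 0.55, 1.45]))   # all values of h^T s
print(np.sort(centers))   # close to true_centers
```

Recovering the cluster values is only the first step; as the slide notes, mapping clusters back to symbol subsequences still needs the HMM/Forward–Backward machinery.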

SLIDE 43

Another way: a clustering-based approach (or a step towards Machine Learning) (cont'd)

y(n) = fct(s(n)) + w(n) ⇒ ŝ(n) = threshold(Θ(y(n)))
with threshold an activation/decision function and Θ(•) = DNN_weights(•)
Questions:
  • One DNN per channel? If yes, a training step is required (so it is not a blind approach)
  • Gain in performance, or lower complexity? Some papers on optical-fiber communications (trained for one fiber configuration)
  • One DNN valid for a large set of fct?

SLIDE 44

Part 6: Numerical illustrations

SLIDE 45

Second-order vs high-order algorithms

  • Random multipath channel
  • SIMO with oversampling factor 2
  • Observation window: 1000 Ts

[Figure: symbol MSE vs SNR (dB); curves: covariance matching (theoretical perf.), Constant Modulus Algorithm (empirical perf.), Wiener equalization with known channel (theoretical perf.), kurtosis maximization (empirical perf.)]

Figure: MSE vs SNR for 4QAM (left) and 16QAM (right) (courtesy of L. Mazet)

SLIDE 46

High-order algorithm (CMA)

y(n) = [1 β1; β2 1] s(n) + w(n)

[Figure: BER vs SNR; curves: no ISI, (β1, β2) = (0.5, 0.5) and (0.25, 0.75), each with and without CMA]

Figure: BER vs SNR with 4QAM (warmup step of 1000 samples)

SLIDE 47

Time-varying channels

Stationary SISO model, 4QAM, 6-tap equalizer filter p

[Figure: BER vs time index/iteration for the adaptive CMA — left: h = [0.3, 0.86, 0.39]^T at SNR = 10 and 15 dB; right: tracking at SNR = 10 dB with h ← h + std × N(0, 1) at time indices 500, 1000, and 1500, for std ∈ {0.1, 0.5}]

SLIDE 48

Use-case: optical-fiber (simulations)

PolMux 16QAM, 112 Gbit/s, range 1000 km, CD = 1000 ps/nm, DGD = 50 ps, OSNR = 20 dB

[Figure: BER vs number of iterations for block sizes N = 100, 500, 1000, 2000, 3000 (left); BER vs length of the observation window for A-CMA (µ = 10⁻³), AN-CMA (µ = 10⁻³, δ = 10), and BO-CMA (right)]

  • The blockwise algorithm converges with N = 1000 and few iterations
  • Adaptive algorithms need more samples to converge
  • The BER target (@10⁻³) is satisfied

SLIDE 49

Use-case: optical-fiber (experimentation)

PolMux 8PSK, 60 Gbit/s, range 800 km, SSMF fiber, OSNR = 23.7 dB

[Figure: BER vs length of the observation window for A-CMA (µ = 10⁻³), AN-CMA (µ = 10⁻³, δ = 10), and BO-CMA]

⇒ It works!

SLIDE 50

Conclusion

Blind equalization works in practice

HOS:
  − No in-depth theoretical analysis
  − Drawback: large observation window (no civilian application yet, except optical fiber)

SOS:
  − In-depth theoretical analysis (when N is large enough)
  − Easy to use, especially when the SIMO model comes from spatial diversity

DNN?

SLIDE 51

References

  • D. Godard, "Self-recovering equalization and carrier tracking in two-dimensional data communications systems", IEEE Trans. on Communications, Nov. 1980.
  • O. Shalvi and E. Weinstein, "New criteria for blind deconvolution of non-minimum phase systems", IEEE Trans. on Information Theory, Mar. 1990.
  • A. Benveniste, M. Métivier and P. Priouret, "Adaptive algorithms and stochastic approximations", Springer, 1990.
  • L. Tong, G. Xu and T. Kailath, "A new approach to blind identification and equalization of multipath channels", Asilomar, 1991.
  • D. Slock, "Blind fractionally-spaced equalization, perfect-reconstruction filter banks and multichannel linear predictor", ICASSP, 1994.
  • E. Moulines, P. Duhamel, J.-F. Cardoso and S. Mayrargue, "Subspace method for blind equalization of multichannel FIR filters", IEEE Trans. on Signal Processing, Feb. 1995.
  • Z. Ding and G. Li, "Blind equalization and identification", Dekker, 2001.
  • D. Boppana and S.S. Rao, "K-harmonic means clustering based blind equalization in hostile environments", GLOBECOM, 2003.
  • P. Loubaton, "Signal et télécoms", Hermès, 2004.
  • P. Comon and C. Jutten, "Handbook of Blind Source Separation: Independent Component Analysis and Applications", Academic Press, 2010.
