

SLIDE 1

RLS Adaptive Filtering with Sparsity Regularization

  • Asst. Prof. Ender M. EKŞİOĞLU
  • Istanbul Technical University, Electronics and Communications Engineering Department

SLIDE 6

Main Headings

  • Introduction
  • ℓ1-RLS Algorithm
  • Simulation Results
  • Conclusions

SLIDE 9

Introduction

  • Sparse adaptive filtering, where the impulse response of the system to be identified is assumed to be sparse, has attracted attention recently.

  • The sparsity prior has applications in acoustic and network echo cancellation and in communication channel identification.

  • The proportionate adaptive algorithm is a well-known approach to the problem.

SLIDE 12

Introduction

  • Recently, novel LMS-type algorithms that incorporate the sparsity condition directly into the cost function have been developed.

  • The common idea is to add a penalty term, in the form of an ℓp norm of the weight vector, to the overall cost function to be minimized.

  • Sparsity-based adaptive algorithms have so far been mostly confined to the LMS domain.

SLIDE 18

Introduction

  • Recursive least squares (RLS) adaptive filtering is another important modality in the adaptive system identification setting.

  • In this paper, we propose an RLS adaptive algorithm for sparse system identification.

  • The algorithm utilizes a modified RLS cost function with an additional sparsity-inducing ℓ1 penalty term.

  • We derive the recursive minimization procedure in a manner similar to the conventional RLS approach.

  • The difference occurs in the weight vector update equation, where a novel zero-attracting, sparsity-inducing additional term is included.

  • We call this new algorithm the ℓ1-RLS.

SLIDE 22

Introduction

  • We first give a brief outline of the adaptive system identification setting.

  • Then, we develop the novel ℓ1-RLS algorithm by outlining the similarities to the development of regular RLS.

  • We give the final form of the ℓ1-RLS algorithm.

  • We present simulation results comparing the novel ℓ1-RLS algorithm to regular RLS, regular LMS, and other adaptive algorithms.

SLIDE 25

ℓ1-RLS Algorithm

  • Consider the system identification setting given by the following input-output equation:

$$y(n) = \mathbf{h}^T \mathbf{x}(n) + \eta(n) \qquad (1)$$

  • The aim of the adaptive system identification algorithm is to estimate the system parameters h from the input and output signals in a sequential manner.

  • In conventional RLS, the cost function to be minimized by the weight estimate is given by

$$E(n) = \sum_{m=0}^{n} \lambda^{n-m} \, |e(m)|^2. \qquad (2)$$
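
As a concrete illustration of the setting in (1), here is a minimal NumPy sketch that generates data from a sparse system under white excitation; the filter length, tap support, and noise level below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 16                 # filter length (illustrative choice)
    n_samples = 1000

    # Sparse "true" system h: only a few nonzero taps (hypothetical support).
    h_true = np.zeros(N)
    h_true[[2, 7, 12]] = [1.0, -0.5, 0.25]

    x = rng.standard_normal(n_samples)            # white input excitation
    eta = 0.01 * rng.standard_normal(n_samples)   # observation noise eta(n)

    # Regressor rows x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, zero-padded at the start.
    X = np.zeros((n_samples, N))
    for n in range(n_samples):
        taps = x[max(0, n - N + 1): n + 1][::-1]  # newest sample first
        X[n, : len(taps)] = taps

    y = X @ h_true + eta                          # y(n) = h^T x(n) + eta(n), eq. (1)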

SLIDE 29

ℓ1-RLS Algorithm

  • We assume that the underlying filter coefficient vector h has a sparse form.

  • Hence, we want to modify the cost function in a manner that underlines this a priori information.

  • A tractable way to enforce sparsity is to use the ℓ1 norm of the weight vector.

  • Hence, we regularize the RLS cost function by including the weighted ℓ1 norm of the current tap estimate as a sparsifying term.

SLIDE 32

ℓ1-RLS Algorithm

$$J(n) = \frac{1}{2} E(n) + \gamma \, \|\mathbf{h}(n)\|_1 \qquad (3)$$

  • Here, γ > 0 is a parameter that governs the tradeoff between sparsity and estimation error.

  • ‖h(n)‖₁ is the ℓ1 norm of the weight vector and is given by

$$\|\mathbf{h}(n)\|_1 = \sum_{k=0}^{N-1} |h_k(n)|. \qquad (4)$$
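
For illustration, the regularized cost (3)-(4) can be evaluated directly; this is a real-valued sketch, with X and y as in the data-model snippet above (the function name and its arguments are ours, not the paper's).

    import numpy as np

    def regularized_cost(h, X, y, lam, gamma, n):
        """J(n) = (1/2) sum_{m=0}^{n} lam^(n-m) |e(m)|^2 + gamma * ||h||_1, eqs. (2)-(4)."""
        e = y[: n + 1] - X[: n + 1] @ h          # a priori errors e(m)
        w = lam ** (n - np.arange(n + 1))        # exponential forgetting weights
        return 0.5 * np.sum(w * np.abs(e) ** 2) + gamma * np.sum(np.abs(h))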

SLIDE 36

ℓ1-RLS Algorithm

  • We want to minimize this regularized cost function J(n) with respect to the filter tap weights.

  • In the standard RLS case, when the cost function is simply E(n), the minimization condition is written in terms of the gradient of E(n) with respect to h(n).

  • However, the ℓ1 norm term ‖h(n)‖₁ in J(n) in (3) is nondifferentiable at any point where h_k(n) = 0.

  • For nondifferentiable convex functions such as ‖h(n)‖₁, the definition of the subgradient offers a substitute for the gradient.

SLIDE 38

ℓ1-RLS Algorithm

  • One subgradient vector of the penalized cost function J(n) with respect to the weight vector h(n) can be written as

$$\nabla_S J(n) = \frac{1}{2} \nabla E + \gamma \, \operatorname{sgn}\big(\mathbf{h}(n)\big) \qquad (5)$$

  • The ith element of this vector is calculated as

$$\big[\nabla_S J(n)\big]_i = -\sum_{m=0}^{n} \lambda^{n-m} \, e(m) \, x^*(m-i+1) + \gamma \, \operatorname{sgn}\big(h_i(n)\big) \qquad (6)$$
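
A real-valued sketch of the subgradient (5)-(6), again with X and y as in the earlier data-model snippet; note that np.sign returns 0 at zero, which is one valid subgradient choice for the ℓ1 term.

    import numpy as np

    def cost_subgradient(h, X, y, lam, gamma, n):
        """One subgradient of J(n) with respect to h, following eqs. (5)-(6)."""
        e = y[: n + 1] - X[: n + 1] @ h
        w = lam ** (n - np.arange(n + 1))
        # The i-th entry of -X^T (w * e) is -sum_m lam^(n-m) e(m) x(m-i+1).
        return -X[: n + 1].T @ (w * e) + gamma * np.sign(h)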
SLIDE 41

ℓ1-RLS Algorithm

  • We set the subgradient equal to zero to find the optimal least squares solution, namely ĥ(n):

$$\sum_{m=0}^{n} \lambda^{n-m} \Big[ y(m) - \sum_{k=0}^{N-1} \hat{h}_k(n) \, x(m-k) \Big] x^*(m-i+1) = -\gamma \, \operatorname{sgn}\big(\hat{h}_i(n)\big) \qquad (7)$$

  • Written together for all i = 1, …, N in matrix form, this results in the modified deterministic normal equations.

SLIDE 47

ℓ1-RLS Algorithm

$$\mathbf{\Phi}(n) \, \hat{\mathbf{h}}(n) = \mathbf{r}(n) - \gamma \, \operatorname{sgn}\big(\hat{\mathbf{h}}(n)\big) \qquad (8)$$

  • Here, Φ(n) is the exponentially weighted deterministic autocorrelation matrix estimate.

  • r(n) is the deterministic cross-correlation estimate between y(n) and x(n).

  • These two quantities can be updated by rank-one recursive equations:

$$\mathbf{\Phi}(n) = \lambda \, \mathbf{\Phi}(n-1) + \mathbf{x}^*(n) \, \mathbf{x}^T(n), \qquad \mathbf{r}(n) = \lambda \, \mathbf{r}(n-1) + y(n) \, \mathbf{x}^*(n)$$
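
The two rank-one recursions translate directly into code; a real-valued sketch (for real signals the conjugations in the slides drop out).

    import numpy as np

    def correlation_updates(Phi, r, x_vec, y_n, lam):
        """Rank-one time updates for Phi(n) and r(n)."""
        Phi = lam * Phi + np.outer(x_vec, x_vec)   # Phi(n) = lam*Phi(n-1) + x(n) x(n)^T
        r = lam * r + y_n * x_vec                  # r(n)  = lam*r(n-1) + y(n) x(n)
        return Phi, r

Note that (8) cannot be solved for ĥ(n) in closed form, since sgn(ĥ(n)) depends on the unknown ĥ(n) itself; this motivates the iterative solution developed next.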

SLIDE 50

ℓ1-RLS Algorithm

  • Instead of solving the normal equations directly for the optimal least squares solution ĥ(n), we search for an iterative solution.

  • We assume that the signs of the weight values do not change significantly in a single time step.

  • The normal equation can be rewritten as

$$\hat{\mathbf{h}}(n) = \mathbf{P}(n) \, \boldsymbol{\theta}(n) \qquad (9)$$

where P(n) = Φ⁻¹(n) is the inverse of the autocorrelation matrix and, under the above sign assumption, θ(n) = r(n) − γ sgn(ĥ(n−1)) follows from (8).

SLIDE 52

ℓ1-RLS Algorithm

  • We arrive at the following result:

$$\hat{\mathbf{h}}(n) = \mathbf{P}(n-1)\boldsymbol{\theta}(n-1) - \mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1)\boldsymbol{\theta}(n-1) + y(n)\mathbf{k}(n)$$
$$\qquad + \gamma \, \frac{\lambda-1}{\lambda} \Big[ \mathbf{P}(n-1)\operatorname{sgn}\big(\hat{\mathbf{h}}(n-1)\big) - \mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1)\operatorname{sgn}\big(\hat{\mathbf{h}}(n-1)\big) \Big]$$

  • Here, k(n) is the gain vector:

$$\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\,\mathbf{x}^*(n)}{\lambda + \mathbf{x}^H(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)} \qquad (10)$$

SLIDE 55

ℓ1-RLS Algorithm

  • Using the matrix inversion lemma, it can be shown that the time update for the inverse correlation matrix can be performed by the well-known Riccati equation:

$$\mathbf{P}(n) = \lambda^{-1} \big[ \mathbf{P}(n-1) - \mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1) \big] \qquad (11)$$

  • The recursive update for the tap weight vector assumes its final form:

$$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \mathbf{k}(n)\big[ y(n) - \hat{\mathbf{h}}^T(n-1)\mathbf{x}(n) \big] + \gamma \, \frac{\lambda-1}{\lambda} \big[ \mathbf{I}_N - \mathbf{k}(n)\mathbf{x}^T(n) \big] \mathbf{P}(n-1)\operatorname{sgn}\big(\hat{\mathbf{h}}(n-1)\big) \qquad (12)$$

  • This update equation finalizes the ℓ1-RLS algorithm.

SLIDE 65

ℓ1-RLS Algorithm

The ℓ1-regularized RLS (ℓ1-RLS) algorithm:

    inputs: λ, γ, x(n), y(n)
    initial values: h(−1) = 0, P(−1) = δ⁻¹ I
    for n := 0, 1, 2, …
        k_λ(n) = P(n−1) x*(n)
        k(n) = k_λ(n) / (λ + x^T(n) k_λ(n))
        ξ(n) = y(n) − h^T(n−1) x(n)
        P(n) = (1/λ) [P(n−1) − k(n) k_λ^H(n)]
        h(n) = h(n−1) + k(n) ξ(n) + γ ((λ−1)/λ) [I_N − k(n) x^T(n)] P(n−1) sgn(h(n−1))
    endfor
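
A minimal real-valued NumPy sketch of the listing above; the parameter defaults (λ, γ, δ) are illustrative, not the paper's settings, and for complex signals the transposes would become the conjugate operations shown in the listing.

    import numpy as np

    def l1_rls(x, y, N, lam=0.99, gamma=5e-3, delta=0.01):
        """l1-RLS sketch following the listing above (real-valued signals)."""
        h = np.zeros(N)                      # h(-1) = 0
        P = np.eye(N) / delta                # P(-1) = delta^{-1} I
        H = np.zeros((len(x), N))            # history of weight estimates h(n)
        x_vec = np.zeros(N)                  # regressor [x(n), ..., x(n-N+1)]^T
        for n in range(len(x)):
            x_vec = np.roll(x_vec, 1)
            x_vec[0] = x[n]
            k_lam = P @ x_vec                          # k_lam(n) = P(n-1) x(n)
            k = k_lam / (lam + x_vec @ k_lam)          # gain vector k(n)
            xi = y[n] - h @ x_vec                      # a priori error xi(n)
            Ps = P @ np.sign(h)                        # P(n-1) sgn(h(n-1))
            P = (P - np.outer(k, k_lam)) / lam         # Riccati update, eq. (11)
            # Weight update, eq. (12): RLS term plus the zero-attracting l1 term,
            # [I - k x^T] P s expanded as Ps - k (x^T Ps).
            h = h + k * xi + gamma * (lam - 1.0) / lam * (Ps - k * (x_vec @ Ps))
            H[n] = h
        return H

Since λ < 1, the factor γ(λ−1)/λ is negative, so the extra term pulls small coefficients toward zero, which is exactly the zero-attracting behavior described in the Introduction.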
SLIDE 66

ℓ1-RLS Algorithm

  • When we compare the ℓ1-RLS weight update with the regular RLS update equation, we see that the last term, scaled by γ(λ−1)/λ, constitutes the difference from regular RLS.


SLIDE 68

Simulation results

  • We compare the performance of the novel ℓ1-RLS algorithm to regular RLS, regular LMS, and other sparsity-oriented adaptive algorithms.

  • The first experiment considers the tracking capabilities of the ℓ1-RLS, RLS, ZA-LMS (Chen 2009), and LMS algorithms under white excitation.

SLIDE 71

Simulation results

[Figure 1: Learning curves (MSD versus iteration) for ℓ1-RLS, RLS, ZA-LMS, and LMS under white excitation.]

  • ℓ1-RLS presents convergence and steady-state error improvements over the regular RLS algorithm.
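
A sketch of how a learning curve of this kind could be reproduced; it reuses h_true, x, and y from the data-generation snippet in the Introduction and l1_rls from the sketch after the algorithm listing, all of which are our illustrative constructions rather than the paper's exact setup.

    import numpy as np

    # Run the l1-RLS sketch on the synthetic sparse-system data.
    H = l1_rls(x, y, N=len(h_true), lam=0.99, gamma=5e-3)

    # Squared deviation ||h(n) - h||^2 per iteration; averaging this over
    # independent runs and plotting it on a log scale versus the iteration
    # index gives a learning curve in the style of Figure 1.
    msd = np.sum((H - h_true) ** 2, axis=1)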

SLIDE 74

Simulation results

  • In the second experiment, we compare the performance of the novel ℓ1-RLS algorithm to regular RLS under different SNR values.

[Figure 2: Learning curves (MSE versus iteration) for ℓ1-RLS and RLS at SNR = 40, 30, 20, and 10 dB.]

  • The ℓ1-RLS has better convergence and steady-state properties than the regular RLS.

SLIDE 78

Conclusions

  • This paper introduced a new RLS algorithm, namely ℓ1-RLS, applicable to the adaptive identification of systems with sparse impulse responses.

  • The novel update equations for this algorithm are developed by regularizing the cost function with an ℓ1 norm term.

  • Numerical simulations demonstrate that the algorithm indeed brings about better convergence and steady-state performance than regular RLS.

  • Future work might include a theoretical analysis of the steady-state error and simulations studying the performance of the proposed algorithm for sparse, slowly time-varying systems.

SLIDE 80

Thanks

Thanks for listening.