Adaptive Filters – Algorithms (Part 2) – Gerhard Schmidt – PowerPoint PPT Presentation





Slide 1

Gerhard Schmidt

Christian-Albrechts-Universität zu Kiel Faculty of Engineering Electrical Engineering and Information Technology Digital Signal Processing and System Theory

Adaptive Filters – Algorithms (Part 2)

Slide 2 – Digital Signal Processing and System Theory | Adaptive Filters | Algorithms – Part 2

Today:

Contents of the Lecture

Adaptive Algorithms:

- Introductory Remarks
- Recursive Least Squares (RLS) Algorithm
- Least Mean Square Algorithm (LMS Algorithm) – Part 1
- Least Mean Square Algorithm (LMS Algorithm) – Part 2
- Affine Projection Algorithm (AP Algorithm)

Slide 3

Basics – Part 1

Least Mean Square (LMS) Algorithm

Optimization criterion:

- Minimizing the mean square error

Assumptions:

- Real, stationary random processes

Structure:

[Block diagram: unknown system and adaptive filter driven by the same input]

Slide 4

Derivation – Part 2

Least Mean Square (LMS) Algorithm

Newton's method

What we have so far:
Resolving this leads to:
With the introduction of a step size, the following adaptation rule can be formulated:
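The equations themselves were only pictured on the slide. Under commonly used notation (w(n): coefficient vector, x(n): input vector, e(n): error, R: input autocorrelation matrix – assumed here, not taken from the slide), the Newton-type derivation reads:

```latex
% MSE cost function and its gradient / Hessian with respect to the filter vector:
J(n) = \mathrm{E}\{e^2(n)\}, \qquad
\nabla J = -2\,\mathrm{E}\{e(n)\,\mathbf{x}(n)\}, \qquad
\mathbf{H} = 2\,\mathbf{R}
% Newton step with an additional step size \mu:
\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{R}^{-1}\,\mathrm{E}\{e(n)\,\mathbf{x}(n)\}
```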

Slide 5

Derivation – Part 3

Least Mean Square (LMS) Algorithm

Newton's method:
Method of steepest descent:

LMS algorithm

For practical approaches the expectation value is replaced by its instantaneous value. This leads to the so-called least mean square (LMS) algorithm:
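Since the update equation is only pictured on the slide, here is a minimal Python sketch of the resulting LMS recursion, w(n+1) = w(n) + μ e(n) x(n) (function and signal names are illustrative, not from the lecture):

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """LMS adaptation: w(n+1) = w(n) + mu * e(n) * x(n)."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]  # [x(n), x(n-1), ..., x(n-N+1)]
        y = w @ x_vec                            # filter output
        e[n] = d[n] - y                          # a priori error
        w = w + mu * e[n] * x_vec                # coefficient update
    return w, e

# identify a short FIR "unknown system" driven by white noise
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h = np.array([0.5, -0.3, 0.1, 0.05])            # hypothetical unknown system
d = np.convolve(x, h)[:len(x)]
w, e = lms(x, d, num_taps=4, mu=0.01)
```

With white input and a noise-free reference, the coefficients converge toward the unknown impulse response and the error decays toward zero.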
Slide 6

Upper Bound for the Step Size

Least Mean Square (LMS) Algorithm

A priori error:
A posteriori error:
Consequently:
For large filter orders and input processes with zero mean, the following approximation is valid:
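Written out (a standard result; N is the filter length and σ_x² the input power – notation assumed), the a-posteriori-error argument gives:

```latex
% requiring |a posteriori error| < |a priori error| leads to
0 < \mu < \frac{2}{\|\mathbf{x}(n)\|^2}
% for large N and zero-mean input, \|\mathbf{x}(n)\|^2 \approx N \sigma_x^2, hence
0 < \mu < \frac{2}{N\,\sigma_x^2}
```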

Slide 7

System Distance

Least Mean Square (LMS) Algorithm

How LMS adaptation changes the system distance:

[Diagram: old and new system distance; target and current system error vector]

Slide 8

Sign Algorithm

Least Mean Square (LMS) Algorithm

Update rule:

with

An early algorithm with very low complexity (even used today in applications that operate at very high frequencies). It can be implemented without any multiplications (the step-size multiplication can be realized as a bit shift).
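The pictured update rule is the sign-error variant, w(n+1) = w(n) + μ sgn(e(n)) x(n). A small Python sketch under the same system-identification setup as before (names illustrative):

```python
import numpy as np

def sign_lms(x, d, num_taps, mu):
    """Sign-error LMS: w(n+1) = w(n) + mu * sgn(e(n)) * x(n)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ x_vec
        w = w + mu * np.sign(e) * x_vec   # only the sign of the error is used
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(30000)
h = np.array([0.5, -0.3, 0.1, 0.05])      # hypothetical unknown system
d = np.convolve(x, h)[:len(x)]
w = sign_lms(x, d, num_taps=4, mu=0.001)
```

In fixed-point hardware, choosing μ as a power of two turns the remaining step-size multiplication into a bit shift.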

Slide 9

Analysis of the Mean Value

Least Mean Square (LMS) Algorithm

Expectation of the filter coefficients:
If the procedure converges, the coefficients reach stationary end values:
So we have orthogonality:
→ Wiener solution
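In formulas (standard notation assumed, as the slide equations are missing): taking expectations of the LMS update and assuming convergence,

```latex
\mathrm{E}\{\mathbf{w}(n+1)\} = \mathrm{E}\{\mathbf{w}(n)\} + \mu\,\mathrm{E}\{e(n)\,\mathbf{x}(n)\}
% stationary end values: the expectation of the update term must vanish,
\mathrm{E}\{e(n)\,\mathbf{x}(n)\} = \mathbf{0}
% i.e. error and input are orthogonal; this is satisfied by the Wiener solution
\mathbf{R}\,\mathbf{w}_\infty = \mathbf{p}
```

where p denotes the cross-correlation vector between input and desired signal.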

Slide 10

Convergence of the Expectations – Part 1

Least Mean Square (LMS) Algorithm

We insert the equation for the error into the equation for the LMS algorithm and get the expectation of the filter coefficients:

Slide 11

Convergence of the Expectations – Part 2

Least Mean Square (LMS) Algorithm

Expectation of the filter coefficients:
Independence assumption:
Difference between means and expectations:
Convergence of the means requires:

Slide 12

Convergence of the Expectations – Part 3

Least Mean Square (LMS) Algorithm

(= 0 because of the Wiener solution)
Recursion:
Convergence requires the contraction of the matrix:
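With the mean coefficient-error vector v(n) = E{w(n)} − w_opt (notation assumed), the recursion and the contraction condition take the standard form:

```latex
% coefficient-error recursion in the mean:
\mathbf{v}(n+1) = \left(\mathbf{I} - \mu\,\mathbf{R}\right)\mathbf{v}(n)
% contraction of (I - \mu R) requires, for every eigenvalue \lambda_i of R:
|1 - \mu\,\lambda_i| < 1 \quad\Longleftrightarrow\quad 0 < \mu < \frac{2}{\lambda_{\max}}
```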

Slide 13

Convergence of the Expectations – Part 4

Least Mean Square (LMS) Algorithm

Convergence requires the contraction of the matrix (result from the last slide):
Case 1: White input signal
Condition for the convergence of the mean values:
For comparison – condition for the convergence of the filter coefficients:
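For white input the autocorrelation matrix reduces to a scaled identity, which makes both conditions explicit (standard result, notation assumed):

```latex
% white input: all eigenvalues are equal to the input power
\mathbf{R} = \sigma_x^2\,\mathbf{I} \;\Rightarrow\; \lambda_i = \sigma_x^2 \ \text{for all } i
% condition for the convergence of the mean values:
0 < \mu < \frac{2}{\sigma_x^2}
% for comparison, the (stricter) condition for the filter coefficients:
0 < \mu < \frac{2}{N\,\sigma_x^2}
```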

Slide 14

Convergence of the Expectations – Part 5

Least Mean Square (LMS) Algorithm

Case 2: Colored input – assumptions

Slide 15

Convergence of the Expectations – Part 6

Least Mean Square (LMS) Algorithm

Putting the following results together leads to the following notation for the autocorrelation matrix:

Slide 16

Convergence of the Expectations – Part 7

Least Mean Square (LMS) Algorithm

Recursion:

Slide 17

Condition for Convergence – Part 1

Least Mean Square (LMS) Algorithm

Previous result:
Condition for the convergence of the expectations of the filter coefficients:

Slide 18

Condition for Convergence – Part 2

Least Mean Square (LMS) Algorithm

A (very rough) estimate for the largest eigenvalue:
Consequently:
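The estimate uses the trace of the autocorrelation matrix (standard argument, notation assumed):

```latex
% the largest eigenvalue cannot exceed the sum of all eigenvalues
\lambda_{\max} \le \sum_{i=0}^{N-1} \lambda_i = \operatorname{tr}\{\mathbf{R}\} = N\,\sigma_x^2
% consequently, a sufficient (conservative) step-size condition:
0 < \mu < \frac{2}{N\,\sigma_x^2} \;\le\; \frac{2}{\lambda_{\max}}
```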

Slide 19

Eigenvalues and Power Spectral Density – Part 1

Least Mean Square (LMS) Algorithm

Relation between eigenvalues and power spectral density:

Signal vector:
Autocorrelation matrix:
Fourier transform:
Equation for eigenvalues:
Eigenvalue:

Slide 20

Eigenvalues and Power Spectral Density – Part 2

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 1:

… previous result …
… exchanging the order of the sums and the integral and splitting the exponential term …
… lower bound …
… upper bound …

Slide 21

Eigenvalues and Power Spectral Density – Part 3

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 2:

… exchanging again the order of the sums and the integral …
… solving the integral first …
… inserting the result and using the orthonormality properties of eigenvectors …

Slide 22

Eigenvalues and Power Spectral Density – Part 4

Least Mean Square (LMS) Algorithm

Computing lower and upper bounds for the eigenvalues – part 3:

… exchanging again the order of the sums and the integral …
… inserting the result from above to obtain the upper bound …
… inserting the result from above to obtain the lower bound …
… finally we get …
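The end result of this bounding argument is the classical sandwich between the extrema of the input power spectral density S_xx(e^{jΩ}) (notation assumed; the slide equations are missing):

```latex
\min_{\Omega} S_{xx}\!\bigl(e^{j\Omega}\bigr) \;\le\; \lambda_i \;\le\; \max_{\Omega} S_{xx}\!\bigl(e^{j\Omega}\bigr)
```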

Slide 23

Geometrical Explanation of Convergence – Part 1

Least Mean Square (LMS) Algorithm

System:
System output:
Structure:

[Block diagram: unknown system and adaptive filter]

Slide 24

Geometrical Explanation of Convergence – Part 2

Least Mean Square (LMS) Algorithm

Error signal:
Difference vector:
LMS algorithm:

Slide 25

Geometrical Explanation of Convergence – Part 3

Least Mean Square (LMS) Algorithm

The vector is split into two components:
For the parallel component, the following holds:
With:

Slide 26

Geometrical Explanation of Convergence – Part 4

Least Mean Square (LMS) Algorithm

Contraction of the system error vector:

… result obtained two slides before …
… splitting the system error vector …
… using and that is orthogonal to …
… this results in …

Slide 27

NLMS Algorithm – Part 1

Least Mean Square (LMS) Algorithm

LMS algorithm:
Normalized LMS algorithm:

[Block diagram: unknown system and adaptive filter]

Slide 28

NLMS Algorithm – Part 2

Least Mean Square (LMS) Algorithm

Adaptation (in general):
A priori error:
A posteriori error:

A successful adaptation requires:
Slide 29

NLMS Algorithm – Part 3

Least Mean Square (LMS) Algorithm

Condition:
Ansatz:
Convergence condition:
Inserting the update equation:

Slide 30

NLMS Algorithm – Part 4

Least Mean Square (LMS) Algorithm

Condition:
Ansatz:
Step-size requirement for the NLMS algorithm (after a few lines …):
For comparison with the LMS algorithm:
Slide 31

NLMS Algorithm – Part 5

Least Mean Square (LMS) Algorithm

Ansatz:
Adaptation rule for the NLMS algorithm:
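The resulting NLMS update, w(n+1) = w(n) + μ e(n) x(n) / (‖x(n)‖² + ε), can be sketched in Python as follows (names illustrative; the small ε is a regularization constant added here to avoid division by zero and is not from the slide):

```python
import numpy as np

def nlms(x, d, num_taps, mu=1.0, eps=1e-8):
    """NLMS: LMS with the step size normalized by the instantaneous input power."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ x_vec                        # a priori error
        w = w + mu * e * x_vec / (x_vec @ x_vec + eps)
    return w

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1, 0.05])                # hypothetical unknown system
d = np.convolve(x, h)[:len(x)]
w = nlms(x, d, num_taps=4, mu=1.0)
```

Note that the stability range 0 < μ < 2 now holds independently of the input power, unlike the LMS bound.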

Slide 32

Matlab-Demo: Speed of Convergence

Least Mean Square (LMS) Algorithm

Slide 33

Convergence Examples – Part 1

Least Mean Square (LMS) Algorithm

Setup:

White noise:

Slide 34

Convergence Examples – Part 2

Least Mean Square (LMS) Algorithm

Setup:

Colored noise:

Slide 35

Convergence Examples – Part 3

Least Mean Square (LMS) Algorithm

Setup:

Speech:

Slide 36

Today:

Contents of the Lecture

Adaptive Algorithms:

- Introductory Remarks
- Recursive Least Squares (RLS) Algorithm
- Least Mean Square Algorithm (LMS Algorithm) – Part 1
- Least Mean Square Algorithm (LMS Algorithm) – Part 2
- Affine Projection Algorithm (AP Algorithm)

Slide 37

Basics

Affine Projection Algorithm

Signal matrix:
Signal vector:
Filter vector:
Filter output:

M describes the order of the procedure.

[Block diagram: unknown system]

Slide 38

Signal Matrix

Affine Projection Algorithm

Definition of the signal matrix:
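The definition itself is only pictured on the slide; in commonly used notation (N: filter length, M: order of the procedure – assumed here), the signal matrix collects the last M input vectors:

```latex
\mathbf{X}(n) = \bigl[\,\mathbf{x}(n),\; \mathbf{x}(n-1),\; \dots,\; \mathbf{x}(n-M+1)\,\bigr] \;\in\; \mathbb{R}^{N \times M}
```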

Slide 39

Error Vector – Part 1

Affine Projection Algorithm

Signal matrix:
Desired signal vector:
Filter output vector:
A priori error vector:
Adaptation rule:
A posteriori error vector:

Slide 40

Error Vector – Part 2

Affine Projection Algorithm

Requirement:
Requirement:

Slide 41

Ansatz

Affine Projection Algorithm

Requirement:
Ansatz:
Step-size condition:

Slide 42

Geometrical Interpretation

Affine Projection Algorithm

[Figure: geometrical interpretation – NLMS algorithm vs. AP algorithm]

Slide 43

Regularization

Affine Projection Algorithm

Regularised version of the AP algorithm:
Non-regularised version of the AP algorithm:
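A compact Python sketch of the regularized AP update, w(n+1) = w(n) + μ X(n) (Xᵀ(n)X(n) + δI)⁻¹ e(n), under the same system-identification setup as the earlier examples (names and the small δ are illustrative, not from the lecture):

```python
import numpy as np

def affine_projection(x, d, num_taps, order=2, mu=0.5, delta=1e-4):
    """Regularized AP: w += mu * X (X^T X + delta I)^{-1} e."""
    w = np.zeros(num_taps)
    for n in range(num_taps + order - 2, len(x)):
        # signal matrix: columns are the input vectors x(n), x(n-1), ..., x(n-order+1)
        X = np.column_stack(
            [x[n - m - num_taps + 1:n - m + 1][::-1] for m in range(order)])
        e = d[n - order + 1:n + 1][::-1] - X.T @ w     # a priori error vector
        w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e)
    return w

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1, 0.05])                  # hypothetical unknown system
d = np.convolve(x, h)[:len(x)]
w = affine_projection(x, d, num_taps=4, order=2, mu=0.5)
```

For order M = 1 this reduces to the (regularized) NLMS algorithm; larger M speeds up convergence for colored input at the cost of the M×M matrix inversion.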

Slide 44

Convergence of Different Algorithms – Part 1

Affine Projection Algorithm

White noise:

Slide 45

Convergence of Different Algorithms – Part 2

Affine Projection Algorithm

White noise:

Slide 46

Convergence of Different Algorithms – Part 3

Affine Projection Algorithm

Colored noise:

Slide 47

Convergence of Different Algorithms – Part 4

Affine Projection Algorithm

Colored noise:

Slide 48

Convergence of Different Algorithms – Part 5

Affine Projection Algorithm

Speech:

Slide 49

Convergence of Different Algorithms – Part 6

Affine Projection Algorithm

Speech:

Slide 50

Summary and Outlook

Adaptive Filters – Algorithms

This week and last week:

- Introductory Remarks
- Recursive Least Squares (RLS) Algorithm
- Least Mean Square Algorithm (LMS Algorithm) – Part 1
- Least Mean Square Algorithm (LMS Algorithm) – Part 2
- Affine Projection Algorithm (AP Algorithm)

Next week:

- Control of Adaptive Filters