Adaptive Filters – Algorithms (Part 2) – Gerhard Schmidt – PowerPoint PPT Presentation
  1. Adaptive Filters – Algorithms (Part 2) – Gerhard Schmidt, Christian-Albrechts-Universität zu Kiel, Faculty of Engineering, Electrical Engineering and Information Technology, Digital Signal Processing and System Theory

  2. Contents of the Lecture – Today: Adaptive Algorithms:
     - Introductory Remarks
     - Recursive Least Squares (RLS) Algorithm
     - Least Mean Square (LMS) Algorithm – Part 1
     - Least Mean Square (LMS) Algorithm – Part 2
     - Affine Projection (AP) Algorithm

  3. Least Mean Square (LMS) Algorithm – Basics – Part 1:
     - Optimization criterion: minimizing the mean square error
     - Assumptions: real, stationary random processes
     - Structure: an unknown system and an adaptive filter driven by the same input signal; the difference between the two output signals forms the error e(n) that drives the adaptation (see the sketch below)
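
To make the structure concrete, here is a minimal sketch in Python; all names, the filter length, and the signal length are illustrative choices, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                                  # filter length (illustrative)
h_unknown = rng.standard_normal(N)     # impulse response of the unknown system
h_hat = np.zeros(N)                    # adaptive filter, initialized to zero
x = rng.standard_normal(1000)          # real, stationary (here: white) input

def output(h, x, n):
    """y(n) = h^T x(n) with the signal vector x(n) = [x(n), ..., x(n-N+1)]^T."""
    return h @ x[n - N + 1 : n + 1][::-1]

n = 100
d = output(h_unknown, x, n)            # desired signal d(n)
e = d - output(h_hat, x, n)            # error e(n) drives the adaptation
```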

  4. Least Mean Square (LMS) Algorithm – Derivation – Part 2: What we have so far: the gradient of the mean square error cost function. Resolving it for the coefficient vector leads to the Wiener solution ĥ = R⁻¹ p. With the introduction of a step size μ, the following adaptation rule can be formulated (method according to Newton): ĥ(n+1) = ĥ(n) + μ R⁻¹ E{e(n) x(n)}.

  5. Least Mean Square (LMS) Algorithm – Derivation – Part 3: Method according to Newton: ĥ(n+1) = ĥ(n) + μ R⁻¹ E{e(n) x(n)}. Method of steepest descent: ĥ(n+1) = ĥ(n) + μ E{e(n) x(n)}. For practical approaches the expectation value is replaced by its instantaneous value. This leads to the so-called least mean square (LMS) algorithm: ĥ(n+1) = ĥ(n) + μ e(n) x(n).
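
A minimal NumPy sketch of the resulting LMS update; the function name, the zero initialization, and the real-valued signal assumption are my choices:

```python
import numpy as np

def lms(x, d, N, mu):
    """LMS: h(n+1) = h(n) + mu * e(n) * x(n), for real-valued signals."""
    h = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1 : n + 1][::-1]  # signal vector x(n)
        e[n] = d[n] - h @ x_vec             # instantaneous (a priori) error
        h = h + mu * e[n] * x_vec           # expectation replaced by its instantaneous value
    return h, e
```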

  6. Least Mean Square (LMS) Algorithm – Upper Bound for the Step Size: A priori error: e(n) = d(n) − ĥ^T(n) x(n). A posteriori error: ẽ(n) = d(n) − ĥ^T(n+1) x(n) = (1 − μ x^T(n) x(n)) e(n). Consequently, |ẽ(n)| < |e(n)| requires 0 < μ < 2 / (x^T(n) x(n)). For large filter lengths N and input processes with zero mean the following approximation is valid: x^T(n) x(n) ≈ N σ_x², so that 0 < μ < 2 / (N σ_x²).
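
The bound can be checked numerically; the following sketch (all values chosen arbitrarily) compares the exact per-sample bound 2 / (x^T(n) x(n)) with the large-N approximation 2 / (N σ_x²):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma_x = 256, 1.0
x_vec = sigma_x * rng.standard_normal(N)   # one signal vector x(n)

mu_max_exact = 2.0 / (x_vec @ x_vec)       # from |1 - mu * x^T x| < 1
mu_max_approx = 2.0 / (N * sigma_x**2)     # large-N, zero-mean approximation
print(mu_max_exact, mu_max_approx)         # close to each other for large N
```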

  7. Least Mean Square (LMS) Algorithm – System Distance: How the LMS adaptation changes the system distance: with the current system error vector h_Δ(n) = h − ĥ(n), the new system distance ||h_Δ(n+1)||² is compared against the old system distance ||h_Δ(n)||²; the target of the adaptation is to decrease this distance in every step.
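
A small numeric illustration (noise-free case, made-up values): one LMS step with a suitably chosen step size reduces the squared distance between the unknown system and the adaptive filter.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
h = rng.standard_normal(N)            # unknown system
h_hat = np.zeros(N)                   # adaptive filter
x_vec = rng.standard_normal(N)        # current signal vector x(n)

e = (h - h_hat) @ x_vec               # noise-free error e(n) = h_d^T(n) x(n)
mu = 1.0 / (x_vec @ x_vec)            # step size inside the stable range
h_hat_new = h_hat + mu * e * x_vec    # one LMS step

old_dist = np.sum((h - h_hat) ** 2)   # old system distance
new_dist = np.sum((h - h_hat_new) ** 2)
print(new_dist < old_dist)            # True: the distance shrinks
```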

  8. Least Mean Square (LMS) Algorithm – Sign Algorithm: Update rule: ĥ(n+1) = ĥ(n) + μ sgn{e(n)} x(n), with sgn{e} = 1 for e > 0, 0 for e = 0, and −1 for e < 0. An early algorithm with very low complexity (even used today in applications that operate at very high frequencies). The update can be implemented without any multiplications (the step-size multiplication can be realized as a bit shift).
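
A sketch of the sign algorithm; choosing mu = 2**(-shift) mirrors the bit-shift implementation mentioned above (in floating point the shift is emulated by the power of two):

```python
import numpy as np

def sign_lms(x, d, N, shift):
    """Sign algorithm: h(n+1) = h(n) + mu * sgn(e(n)) * x(n), mu = 2**(-shift)."""
    mu = 2.0 ** (-shift)                     # power-of-two step size -> bit shift
    h = np.zeros(N)
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1 : n + 1][::-1]
        e = d[n] - h @ x_vec
        h = h + mu * np.sign(e) * x_vec      # only the sign of the error is used
    return h
```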

  9. Least Mean Square (LMS) Algorithm – Analysis of the Mean Value: Expectation of the filter coefficients: E{ĥ(n+1)} = E{ĥ(n)} + μ E{e(n) x(n)}. If the procedure converges, the coefficients reach stationary end values, i.e. E{ĥ(n+1)} = E{ĥ(n)} and thus E{e(n) x(n)} = 0. So we have orthogonality between the error signal and the signal vector; in the mean the filter reaches the Wiener solution.
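
The orthogonality can be verified numerically. This sketch estimates R and p from data, solves for the Wiener solution, and checks that the residual error is uncorrelated with the signal vector (all sizes and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
h_true = rng.standard_normal(N)
x = rng.standard_normal(50_000)
X = np.stack([x[n - N + 1 : n + 1][::-1] for n in range(N - 1, len(x))])
d = X @ h_true + 0.1 * rng.standard_normal(len(X))  # noisy desired signal

R = X.T @ X / len(X)              # estimated autocorrelation matrix
p = X.T @ d / len(X)              # estimated cross-correlation vector
h_opt = np.linalg.solve(R, p)     # Wiener solution

e = d - X @ h_opt
print(X.T @ e / len(X))           # ~ 0: E{e(n) x(n)} = 0 (orthogonality)
```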

  10. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 1: Into the equation for the LMS algorithm we insert the equation for the error, e(n) = d(n) − ĥ^T(n) x(n), and get: ĥ(n+1) = (I − μ x(n) x^T(n)) ĥ(n) + μ d(n) x(n). From this the expectation of the filter coefficients is derived.

  11. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 2: Expectation of the filter coefficients under the independence assumption (the signal vector x(n) is assumed to be statistically independent of the coefficient vector ĥ(n)): E{ĥ(n+1)} = (I − μ R) E{ĥ(n)} + μ p. Introducing the difference between the mean coefficients and the Wiener solution, v(n) = E{ĥ(n)} − ĥ_opt, convergence of the means requires that this difference vanishes as n → ∞.

  12. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 3: Recursion for the difference vector: v(n+1) = (I − μ R) v(n); the driving term μ (p − R ĥ_opt) = 0 because of the Wiener solution. Convergence therefore requires the contraction of the matrix I − μ R.
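
The contraction condition is easy to test numerically: the iteration matrix I − μR must have spectral radius below one, which holds exactly for 0 < μ < 2/λ_max. The matrix R and the two step sizes below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
A = rng.standard_normal((N, N))
R = A @ A.T / N                                # some positive definite R
lam_max = np.linalg.eigvalsh(R).max()

for mu in (0.5 * 2.0 / lam_max, 1.1 * 2.0 / lam_max):
    B = np.eye(N) - mu * R                     # iteration matrix of the recursion
    rho = np.abs(np.linalg.eigvals(B)).max()   # spectral radius
    print(mu, rho < 1)                         # True only for mu < 2/lam_max
```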

  13. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 4: Convergence requires the contraction of the matrix I − μ R (result from the last slide). Case 1: white input signal, i.e. R = σ_x² I. Condition for the convergence of the mean values: 0 < μ < 2 / σ_x². For comparison – the condition for the convergence of the filter coefficients derived earlier: 0 < μ < 2 / (N σ_x²).
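
For white input both bounds follow directly from R = σ_x² I; a short check (σ_x² and N are arbitrary values):

```python
import numpy as np

sigma_x2, N = 0.5, 8
R = sigma_x2 * np.eye(N)                  # white input: all eigenvalues equal
lam_max = np.linalg.eigvalsh(R).max()     # = sigma_x2

print(2.0 / lam_max)                      # bound for convergence of the means
print(2.0 / (N * sigma_x2))               # stricter bound for the coefficients
```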

  14. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 5: Case 2: Colored input signal – assumptions.

  15. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 6: Putting the previous results together leads to the following notation for the autocorrelation matrix: its eigenvalue decomposition R = Q Λ Q^T, with the orthonormal matrix of eigenvectors Q and the diagonal matrix of eigenvalues Λ.

  16. Least Mean Square (LMS) Algorithm – Convergence of the Expectations – Part 7: Recursion in the transformed coordinates: after the eigenvalue decomposition the recursion decouples into scalar equations, one per eigenvalue λ_i, each decaying with the factor (1 − μ λ_i).

  17. Least Mean Square (LMS) Algorithm – Condition for Convergence – Part 1: Previous result: each component of the mean error vector decays with the factor (1 − μ λ_i). Condition for the convergence of the expectations of the filter coefficients: |1 − μ λ_i| < 1 for all i, i.e. 0 < μ < 2 / λ_max.

  18. Least Mean Square (LMS) Algorithm – Condition for Convergence – Part 2: A (very rough) estimate for the largest eigenvalue: λ_max ≤ trace(R) = N r_xx(0) = N E{x²(n)}. Consequently: 0 < μ < 2 / (N E{x²(n)}) is a sufficient condition that can be computed from the input power alone.
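
Since trace(R) = N r_xx(0) can be estimated from the measured signal power, the resulting step-size bound is cheap to compute online. A sketch (signal and filter length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(10_000)   # zero-mean input signal
N = 16                            # filter length

r0 = np.mean(x * x)               # estimate of r_xx(0), the signal power
mu_bound = 2.0 / (N * r0)         # conservative bound: lam_max <= N * r_xx(0)
print(mu_bound)
```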

  19. Least Mean Square (LMS) Algorithm – Eigenvalues and Power Spectral Density – Part 1: Relation between eigenvalues and power spectral density. The ingredients: the signal vector x(n); the autocorrelation matrix R = E{x(n) x^T(n)}; the Fourier transform of the autocorrelation sequence, i.e. the power spectral density S_xx(e^{jΩ}); and the equation for the eigenvalues, R q_i = λ_i q_i with eigenvalue λ_i.

  20. Least Mean Square (LMS) Algorithm – Eigenvalues and Power Spectral Density – Part 2: Computing lower and upper bounds for the eigenvalues – part 1: starting from the previous result, the order of the sums and the integral is exchanged and the exponential term is split, which yields a lower bound and an upper bound.

  21. Least Mean Square (LMS) Algorithm – Eigenvalues and Power Spectral Density – Part 3: Computing lower and upper bounds for the eigenvalues – part 2: the order of the sums and the integral is exchanged again, the integral is solved first, and the result is inserted using the orthonormality properties of the eigenvectors.

  22. Least Mean Square (LMS) Algorithm – Eigenvalues and Power Spectral Density – Part 4: Computing lower and upper bounds for the eigenvalues – part 3: the order of the sums and the integral is exchanged again, and the result from above is inserted to obtain the upper and the lower bound. Finally we get: min_Ω S_xx(e^{jΩ}) ≤ λ_i ≤ max_Ω S_xx(e^{jΩ}).
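
The final result states that every eigenvalue of the autocorrelation matrix lies between the minimum and maximum of the power spectral density. This can be checked numerically for a colored process; the AR(1)-type autocorrelation r_xx(k) = a^|k| and its closed-form PSD used below are my choice of test case:

```python
import numpy as np

a, N = 0.8, 32
r = a ** np.arange(N)                             # r_xx(k) = a^|k| (sigma_x^2 = 1)
R = np.array([[r[abs(i - j)] for j in range(N)]   # Toeplitz autocorrelation matrix
              for i in range(N)])
lam = np.linalg.eigvalsh(R)

Omega = np.linspace(0.0, np.pi, 2048)
S = (1 - a**2) / (1 - 2 * a * np.cos(Omega) + a**2)  # PSD of the AR(1) process
print(S.min() <= lam.min(), lam.max() <= S.max())    # both True
```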

  23. Least Mean Square (LMS) Algorithm – Geometrical Explanation of Convergence – Part 1: Structure as before: an unknown system and an adaptive filter. System: the coefficient vector h of the unknown system. System output: d(n) = h^T x(n) (noise-free case).

  24. Least Mean Square (LMS) Algorithm – Geometrical Explanation of Convergence – Part 2: Error signal: e(n) = d(n) − ĥ^T(n) x(n) = h_Δ^T(n) x(n). Difference vector: h_Δ(n) = h − ĥ(n). LMS algorithm expressed in the difference vector: h_Δ(n+1) = h_Δ(n) − μ e(n) x(n).

  25. Least Mean Square (LMS) Algorithm – Geometrical Explanation of Convergence – Part 3: The vector h_Δ(n) will be split into two components: one parallel and one orthogonal to x(n), h_Δ(n) = h_Δ,∥(n) + h_Δ,⊥(n). For the parallel component: h_Δ,∥(n) = ((h_Δ^T(n) x(n)) / (x^T(n) x(n))) x(n), with e(n) = h_Δ^T(n) x(n).

  26. Least Mean Square (LMS) Algorithm – Geometrical Explanation of Convergence – Part 4: Contraction of the system error vector: starting from the result obtained two slides before, the system error vector is split as above; using e(n) = h_Δ^T(n) x(n) and the fact that h_Δ,⊥(n) is orthogonal to x(n), this results in an update that shrinks only the component parallel to x(n), so the norm of the system error vector cannot grow for admissible step sizes.
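
The split can be written down directly; this sketch (arbitrary vectors) decomposes a system error vector into its component parallel to x(n) and the orthogonal remainder, and confirms the orthogonality used in the last step:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8
h_delta = rng.standard_normal(N)   # system error vector h_d(n)
x_vec = rng.standard_normal(N)     # signal vector x(n)

# parallel component: projection of h_d(n) onto x(n)
h_par = (h_delta @ x_vec) / (x_vec @ x_vec) * x_vec
h_orth = h_delta - h_par           # orthogonal component

print(abs(h_orth @ x_vec) < 1e-12) # True: the remainder is orthogonal to x(n)
```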

  27. Least Mean Square (LMS) Algorithm – NLMS Algorithm – Part 1: LMS algorithm: ĥ(n+1) = ĥ(n) + μ e(n) x(n). Normalized LMS (NLMS) algorithm: ĥ(n+1) = ĥ(n) + μ e(n) x(n) / (x^T(n) x(n)). Structure as before: an unknown system and an adaptive filter.
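
A NumPy sketch of the NLMS update; the small regularization constant eps in the denominator is my addition (to survive signal pauses), not part of the slide's formula:

```python
import numpy as np

def nlms(x, d, N, mu, eps=1e-8):
    """NLMS: h(n+1) = h(n) + mu * e(n) * x(n) / (x(n)^T x(n))."""
    h = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1 : n + 1][::-1]
        e[n] = d[n] - h @ x_vec
        h = h + mu * e[n] * x_vec / (x_vec @ x_vec + eps)  # normalized step
    return h, e
```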

  28. Least Mean Square (LMS) Algorithm – NLMS Algorithm – Part 2: Adaptation (in general): ĥ(n+1) = ĥ(n) + Δĥ(n). A priori error: e(n) = d(n) − ĥ^T(n) x(n). A posteriori error: ẽ(n) = d(n) − ĥ^T(n+1) x(n). A successful adaptation requires |ẽ(n)| < |e(n)|, or equivalently |ẽ(n) / e(n)| < 1.

  29. Least Mean Square (LMS) Algorithm – NLMS Algorithm – Part 3: Convergence condition: |ẽ(n) / e(n)| < 1. Inserting the update equation into the a posteriori error gives a condition on the step size. Ansatz: a time-variant step size normalized to the instantaneous power of the signal vector, μ(n) = μ / (x^T(n) x(n)).

  30. Least Mean Square (LMS) Algorithm – NLMS Algorithm – Part 4: Condition and ansatz as before. Step-size requirement for the NLMS algorithm (after a few lines …): 0 < μ < 2, or equivalently 0 < μ(n) < 2 / (x^T(n) x(n)). For comparison with the LMS algorithm: the LMS bound 0 < μ < 2 / (N σ_x²) depends on the input power, whereas the NLMS bound does not.
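
With the NLMS update the a posteriori error becomes ẽ(n) = (1 − μ) e(n), so the requirement |ẽ(n)| < |e(n)| reduces to 0 < μ < 2 independently of the input power. A short numeric check with arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 8
x_vec = rng.standard_normal(N)
h = rng.standard_normal(N)
d = 1.0

for mu in (0.5, 1.0, 1.9, 2.5):
    e = d - h @ x_vec                               # a priori error
    h_new = h + mu * e * x_vec / (x_vec @ x_vec)    # one NLMS step
    e_post = d - h_new @ x_vec                      # a posteriori error
    print(mu, abs(e_post) < abs(e))                 # True only for 0 < mu < 2
```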
