
Lecture 16: Linear Prediction. Mark Hasegawa-Johnson. ECE 401: Signal and Image Analysis, Fall 2020.



1. Lecture 16: Linear Prediction. Mark Hasegawa-Johnson. ECE 401: Signal and Image Analysis, Fall 2020.

2. Contents: 1. Review: All-Pole Filters; 2. Inverse Filtering; 3. Linear Prediction; 4. Finding the Linear Predictive Coefficients; 5. Summary.

3. Outline: 1. Review: All-Pole Filters; 2. Inverse Filtering; 3. Linear Prediction; 4. Finding the Linear Predictive Coefficients; 5. Summary.

4. All-Pole Filter. An all-pole filter has the system function
$$H(z) = \frac{1}{(1 - p_1 z^{-1})(1 - p_1^* z^{-1})} = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2}},$$
so it can be implemented as $y[n] = x[n] + a_1 y[n-1] + a_2 y[n-2]$, where
$$a_1 = p_1 + p_1^* = 2 e^{-\sigma_1} \cos(\omega_1), \qquad a_2 = -|p_1|^2 = -e^{-2\sigma_1}.$$
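As a concreteness check, here is a minimal sketch of the recursion above in Python with NumPy (the function name and its arguments are illustrative, not from the lecture):

```python
import numpy as np

def all_pole_2nd_order(x, sigma1, omega1):
    """Run y[n] = x[n] + a1*y[n-1] + a2*y[n-2], with a1, a2 derived
    from the pole pair p1 = exp(-sigma1 + 1j*omega1) and its conjugate."""
    a1 = 2.0 * np.exp(-sigma1) * np.cos(omega1)   # a1 = p1 + p1*
    a2 = -np.exp(-2.0 * sigma1)                   # a2 = -|p1|^2
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y
```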

5. Frequency Response of an All-Pole Filter. We get the magnitude response by just plugging in $z = e^{j\omega}$ and taking the absolute value:
$$|H(\omega)| = |H(z)|_{z = e^{j\omega}} = \frac{1}{|e^{j\omega} - p_1| \times |e^{j\omega} - p_1^*|}$$
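A quick way to see the resonant peak is to evaluate this formula on a grid of frequencies. A hedged sketch (Python/NumPy; names and test values are illustrative):

```python
import numpy as np

def all_pole_magnitude(omega, sigma1, omega1):
    """|H(omega)| = 1 / (|e^{jw} - p1| * |e^{jw} - p1*|)."""
    p1 = np.exp(-sigma1 + 1j * omega1)
    z = np.exp(1j * np.asarray(omega))
    return 1.0 / (np.abs(z - p1) * np.abs(z - np.conj(p1)))

# The response peaks near omega = omega1, more sharply as sigma1 -> 0.
omega = np.linspace(0, np.pi, 512)
mag = all_pole_magnitude(omega, sigma1=0.05, omega1=np.pi / 4)
```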

6. Impulse Response of an All-Pole Filter. We get the impulse response using partial fraction expansion:
$$h[n] = \left( C_1 p_1^n + C_1^* (p_1^*)^n \right) u[n] = \frac{1}{\sin(\omega_1)} e^{-\sigma_1 n} \sin(\omega_1 (n+1)) \, u[n]$$
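The closed form can be checked numerically against the filter recursion. A small sketch (Python/SciPy; the values of sigma1 and omega1 are arbitrary test choices):

```python
import numpy as np
from scipy.signal import lfilter

sigma1, omega1 = 0.05, np.pi / 4
a1 = 2.0 * np.exp(-sigma1) * np.cos(omega1)
a2 = -np.exp(-2.0 * sigma1)

n = np.arange(100)
impulse = np.zeros(100)
impulse[0] = 1.0

# h[n] by running the recursion: denominator polynomial is 1 - a1 z^-1 - a2 z^-2.
h_filter = lfilter([1.0], [1.0, -a1, -a2], impulse)

# h[n] from the partial-fraction closed form.
h_closed = np.exp(-sigma1 * n) * np.sin(omega1 * (n + 1)) / np.sin(omega1)

assert np.allclose(h_filter, h_closed)
```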

7. Speech Is Made Up of Damped Sinusoids. Resonant systems, like speech, trumpets, and bells, are built from a cascade (series combination) of second-order all-pole filters.

8. Outline: 1. Review: All-Pole Filters; 2. Inverse Filtering; 3. Linear Prediction; 4. Finding the Linear Predictive Coefficients; 5. Summary.

9. Speech. Speech is made when we take a series of impulses, one every 5–10 ms, and filter them through a resonant cavity (like a bell).

10. Speech. Speech is made when we take a series of impulses, one every 5–10 ms, and filter them through a resonant cavity (like a bell):
$$S(z) = H(z) E(z) = \frac{1}{A(z)} E(z),$$
where the excitation signal is a set of impulses, maybe only one per frame: $e[n] = G \delta[n - n_0]$. The only things we don't know, really, are the amplitude of the impulse ($G$) and the time at which it occurs ($n_0$). Can we find out?
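To make the source-filter model concrete, here is a sketch that synthesizes a speech-like signal by passing a periodic impulse train through a cascade of second-order resonators (Python/SciPy; the sample rate, pitch period, and formant values are made-up illustrative numbers, not from the lecture):

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                       # sample rate in Hz (illustrative)
pitch_period = 64               # one impulse every 8 ms at 8 kHz
e = np.zeros(fs // 4)           # a quarter second of excitation
e[::pitch_period] = 1.0         # e[n]: periodic impulse train

s = e.copy()
for f_res, bw in [(500.0, 60.0), (1500.0, 90.0), (2500.0, 120.0)]:
    omega1 = 2 * np.pi * f_res / fs      # resonant frequency (rad/sample)
    sigma1 = np.pi * bw / fs             # bandwidth sets the pole radius
    a1 = 2 * np.exp(-sigma1) * np.cos(omega1)
    a2 = -np.exp(-2 * sigma1)
    s = lfilter([1.0], [1.0, -a1, -a2], s)   # cascade one resonator at a time
```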

11. Speech: The Model (figure).

12. Speech: The Real Thing (figure).

13. Inverse Filtering. If $S(z) = E(z)/A(z)$, then we can get $E(z)$ back again by doing something called an inverse filter:
$$\text{IF: } S(z) = \frac{1}{A(z)} E(z) \qquad \text{THEN: } E(z) = A(z) S(z)$$
The inverse filter, $A(z)$, has a form like this:
$$A(z) = 1 - \sum_{k=1}^{p} a_k z^{-k}$$
where $p$ is twice the number of resonant frequencies. So if speech has 4–5 resonances, then $p \approx 10$.

14. Inverse Filtering (figure).

15. Inverse Filtering. This one is an all-pole (feedback-only) filter:
$$S(z) = \frac{1}{1 - \sum_{k=1}^{p} a_k z^{-k}} E(z)$$
That means this one is an all-zero (feedforward-only) filter:
$$E(z) = \left( 1 - \sum_{k=1}^{p} a_k z^{-k} \right) S(z)$$
which we can implement just like this:
$$e[n] = s[n] - \sum_{k=1}^{p} a_k s[n-k]$$
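In code, the inverse filter is just an FIR filter whose taps are $[1, -a_1, \ldots, -a_p]$. A minimal sketch, assuming the coefficient vector `a` is already known (Python/SciPy; the function name is illustrative):

```python
import numpy as np
from scipy.signal import lfilter

def inverse_filter(s, a):
    """e[n] = s[n] - sum_k a[k] * s[n-k]: an all-zero (feedforward) filter."""
    b = np.concatenate(([1.0], -np.asarray(a, dtype=float)))
    return lfilter(b, [1.0], s)
```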

16. Outline: 1. Review: All-Pole Filters; 2. Inverse Filtering; 3. Linear Prediction; 4. Finding the Linear Predictive Coefficients; 5. Summary.

17. Linear Predictive Analysis. This particular feedforward filter is called linear predictive analysis:
$$e[n] = s[n] - \sum_{k=1}^{p} a_k s[n-k]$$
It's kind of like we're trying to predict $s[n]$ using a linear combination of its own past samples:
$$\hat{s}[n] = \sum_{k=1}^{p} a_k s[n-k],$$
and then $e[n]$, the glottal excitation, is the part that can't be predicted:
$$e[n] = s[n] - \hat{s}[n]$$

18. Linear Predictive Analysis. Actually, linear predictive analysis is used a lot more often in finance, these days, than in speech:
- In finance: detect important market movements = price changes that are not predictable from recent history.
- In health: detect EKG patterns that are not predictable from recent history.
- In geology: detect earthquakes = impulses that are not predictable from recent history.
...you get the idea.

19. Linear Predictive Analysis Filter (block diagram: $s[n]$ feeds a chain of unit delays $z^{-1}$; the delayed samples are scaled by $-a_1, -a_2, -a_3, -a_4$ and summed with $s[n]$ to produce $e[n]$).

20. Linear Predictive Synthesis. The corresponding feedback filter is called linear predictive synthesis. The idea is that, given $e[n]$, we can resynthesize $s[n]$ by adding feedback, because
$$S(z) = \frac{1}{1 - \sum_{k=1}^{p} a_k z^{-k}} E(z)$$
means that
$$s[n] = e[n] + \sum_{k=1}^{p} a_k s[n-k]$$
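The synthesis filter exactly inverts the analysis filter, so running the two back to back should reproduce the input up to floating-point error. A sketch, reusing the hypothetical `inverse_filter` above:

```python
import numpy as np
from scipy.signal import lfilter

def synthesis_filter(e, a):
    """s[n] = e[n] + sum_k a[k] * s[n-k]: the all-pole (feedback) filter."""
    a_poly = np.concatenate(([1.0], -np.asarray(a, dtype=float)))
    return lfilter([1.0], a_poly, e)

# Round trip: synthesis_filter(inverse_filter(s, a), a) returns s (approximately).
```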

21. Linear Predictive Synthesis Filter (block diagram: $e[n]$ is summed with feedback taps $a_1, a_2, a_3, a_4$ applied to delayed copies of the output, producing $s[n]$).

22. Outline: 1. Review: All-Pole Filters; 2. Inverse Filtering; 3. Linear Prediction; 4. Finding the Linear Predictive Coefficients; 5. Summary.

23. Finding the Linear Predictive Coefficients. Things we don't know: the timing of the unpredictable event ($n_0$) and its amplitude ($G$), and the coefficients $a_k$. It seems that, in order to find $n_0$ and $G$, we first need to know the predictor coefficients $a_k$. How can we find $a_k$?

24. Finding the Linear Predictive Coefficients. Let's make the following assumption: everything that can be predicted is part of $\hat{s}[n]$; only the unpredictable part is $e[n]$.

25. Finding the Linear Predictive Coefficients. Let's make the following assumption: everything that can be predicted is part of $\hat{s}[n]$; only the unpredictable part is $e[n]$. So we define $e[n]$ to be
$$e[n] = s[n] - \sum_{k=1}^{p} a_k s[n-k]$$
...and then choose the $a_k$ to make $e[n]$ as small as possible:
$$a_1, \ldots, a_p = \mathop{\mathrm{argmin}} \sum_{n=-\infty}^{\infty} e^2[n]$$

26. Finding the Linear Predictive Coefficients. So we've formulated the problem like this: we want to find the $a_k$ that minimize
$$\mathcal{E} = \sum_{n=-\infty}^{\infty} e^2[n] = \sum_{n=-\infty}^{\infty} \left( s[n] - \sum_{m=1}^{p} a_m s[n-m] \right)^2$$

27. Finding the Linear Predictive Coefficients. We want to find the coefficients $a_k$ that minimize $\mathcal{E}$. We can do that by differentiating and setting the derivative equal to zero:
$$\frac{d\mathcal{E}}{da_k} = -2 \sum_{n=-\infty}^{\infty} \left( s[n] - \sum_{m=1}^{p} a_m s[n-m] \right) s[n-k], \quad \text{for all } 1 \le k \le p$$
$$0 = \sum_{n=-\infty}^{\infty} \left( s[n] - \sum_{m=1}^{p} a_m s[n-m] \right) s[n-k], \quad \text{for all } 1 \le k \le p$$
This is a set of $p$ different equations (for $1 \le k \le p$) in $p$ different unknowns (the $a_k$), so it can be solved.

28. Autocorrelation. In order to write the solution more easily, let's define something called the "autocorrelation," $R[m]$:
$$R[m] = \sum_{n=-\infty}^{\infty} s[n] s[n-m]$$
In terms of the autocorrelation, setting the derivative of the error to zero gives
$$0 = R[k] - \sum_{m=1}^{p} a_m R[k-m] \quad \forall\, 1 \le k \le p$$
or, equivalently,
$$R[k] = \sum_{m=1}^{p} a_m R[k-m] \quad \forall\, 1 \le k \le p$$
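For a finite frame of speech, the infinite sum is truncated to the frame's samples. A quick sketch of computing $R[0], \ldots, R[p]$ (Python/NumPy; `autocorr` is an illustrative helper name, not from the lecture):

```python
import numpy as np

def autocorr(s, p):
    """R[m] = sum_n s[n] * s[n-m] for m = 0..p, summed over one finite frame."""
    s = np.asarray(s, dtype=float)
    return np.array([np.dot(s[m:], s[:len(s) - m]) for m in range(p + 1)])
```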

29. Matrices. Since we have $p$ linear equations in $p$ unknowns, let's write this as a matrix equation:
$$\begin{bmatrix} R[1] \\ R[2] \\ \vdots \\ R[p] \end{bmatrix} = \begin{bmatrix} R[0] & R[1] & \cdots & R[p-1] \\ R[1] & R[0] & \cdots & R[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R[p-1] & R[p-2] & \cdots & R[0] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix}$$
where I've taken advantage of the fact that $R[m] = R[-m]$:
$$R[m] = \sum_{n=-\infty}^{\infty} s[n] s[n-m]$$

30. Matrices. Since we have $p$ linear equations in $p$ unknowns, let's write this as a matrix equation: $\vec{\gamma} = R \vec{a}$, where
$$\vec{\gamma} = \begin{bmatrix} R[1] \\ R[2] \\ \vdots \\ R[p] \end{bmatrix}, \qquad R = \begin{bmatrix} R[0] & R[1] & \cdots & R[p-1] \\ R[1] & R[0] & \cdots & R[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R[p-1] & R[p-2] & \cdots & R[0] \end{bmatrix}$$

31. Matrices. Since we have $p$ linear equations in $p$ unknowns, we write $\vec{\gamma} = R \vec{a}$, and therefore the solution is
$$\vec{a} = R^{-1} \vec{\gamma}$$
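Putting the pieces together: $R$ is a symmetric Toeplitz matrix, so the system can be solved efficiently by Levinson recursion rather than a general matrix inverse. A sketch under that assumption, using SciPy's `solve_toeplitz` and the hypothetical `autocorr` helper above:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(s, p):
    """Solve gamma = R a for the predictor coefficients a_1, ..., a_p."""
    R = autocorr(s, p)                     # R[0], ..., R[p] for this frame
    # First column of the Toeplitz matrix is R[0..p-1]; right-hand side is R[1..p].
    return solve_toeplitz(R[:p], R[1:p + 1])
```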
