

SLIDE 1

ENEE630 Part-3
Part 3. Spectrum Estimation
3.2 Parametric Methods for Spectral Estimation

Electrical & Computer Engineering, University of Maryland, College Park

Acknowledgment: ENEE630 slides were based on class notes developed by Profs. K. J. Ray Liu and Min Wu. The slides were made by Prof. Min Wu, with updates from Mr. Wei-Hong Chuang. Contact: minwu@eng.umd.edu

SLIDE 2

Summary of Related Readings on Part-III

Overview: Haykin 1.16, 1.10
3.1 Non-parametric method: Hayes 8.1; 8.2 (8.2.3, 8.2.5); 8.3
3.2 Parametric method: Hayes 8.5, 4.7; 8.4
3.3 Frequency estimation: Hayes 8.6

Review
– On DSP and linear algebra: Hayes 2.2, 2.3
– On probability and parameter estimation: Hayes 3.1 – 3.2

UMD ENEE630 Advanced Signal Processing (ver.1111) Parametric spectral estimation

SLIDE 3

Motivation

 Implicit assumption by classical methods
– Classical methods apply the Fourier transform to either windowed data or a windowed autocorrelation function (ACF)
– They implicitly assume the unobserved data or ACF values outside the window are zero => not true in reality
– Consequence of windowing: smeared spectral estimate (leading to low resolution)

 If prior knowledge about the process is available
– We can use the prior knowledge and select a good model to approximate the process
– Usually need to estimate fewer model parameters (than non-parametric approaches) using the limited data points we have
– The model may allow us to better describe the process outside the window (instead of assuming zeros)

SLIDE 4

General Procedure of Parametric Methods

 Select a model (based on prior knowledge)
 Estimate the parameters of the assumed model
 Obtain the spectral estimate implied by the model (with the estimated parameters)

SLIDE 5

Spectral Estimation using AR, MA, ARMA Models

 Physical insight: the process is generated/approximated by filtering white noise with an LTI filter of rational transfer function H(z)

 Use observed data to estimate a few lags of r(k)
– Larger lags of r(k) can be implicitly extrapolated by the model

 Relation between r(k) and filter parameters {a_k} and {b_k}
– Parameter equations from Section 2.1.2(6)
– Solve the parameter equations to obtain the filter parameters
– Use the p.s.d. implied by the model as our spectral estimate

 Deal with nonlinear parameter equations
– Try to convert/relate them to the AR models that have linear equations

SLIDE 6

Review: Parameter Equations

Yule-Walker equations (for AR process); ARMA model; MA model


SLIDE 8

3.2.1 AR Spectral Estimation

(1) Review of AR process
– The time series {x[n], x[n-1], …, x[n-M]} is a realization of an AR process of order M if it satisfies the difference equation

  x[n] + a_1 x[n-1] + \ldots + a_M x[n-M] = v[n]

where {v[n]} is a white noise process with variance \sigma^2.
– Generating an AR process with parameters {a_i}: pass white noise through the all-pole filter

  H(z) = \frac{1}{1 + \sum_{i=1}^{M} a_i z^{-i}} \;\triangleq\; \frac{1}{A(z)}

SLIDE 9

P.S.D. of An AR Process

Recall: the p.s.d. of an AR process {x[n]} is given by

  P_{AR}(z) = \frac{\sigma^2}{A(z)\, A^*(1/z^*)}

Evaluating on the unit circle, z = e^{j 2\pi f}:

  \hat{P}_{AR}(f) = \frac{\sigma^2}{\left| 1 + \sum_{k=1}^{M} a_k e^{-j 2\pi f k} \right|^2}
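The all-pole form above is easy to check numerically. The sketch below uses my own toy AR(1) values (not from the slides) and verifies that the p.s.d. integrates over one period to r(0), as any valid p.s.d. must:

```python
import numpy as np

# Toy AR(1): x[n] - 0.9 x[n-1] = v[n], i.e. a_1 = -0.9, sigma^2 = 1 (assumed values).
a1, sigma2 = -0.9, 1.0
Nf = 4096
f = np.arange(Nf) / Nf - 0.5                     # uniform grid on [-1/2, 1/2)
P = sigma2 / np.abs(1 + a1 * np.exp(-2j * np.pi * f)) ** 2

# A p.s.d. integrates to r(0); for this AR(1), r(0) = sigma^2 / (1 - a1^2) = 1/0.19.
# For a smooth periodic integrand, the mean over a uniform grid is the integral.
r0_numeric = P.mean()
```

Evaluating |A(e^{j2πf})|² on a dense uniform frequency grid like this is also how the estimated spectrum is typically plotted in practice.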


SLIDE 11

Procedure of AR Spectral Estimation

 Observe the available data points x[0], …, x[N-1], and determine the AR process order p

 Estimate the autocorrelation function (ACF) for k = 0, …, p

  Biased (low variance):
    \hat{r}(k) = \frac{1}{N} \sum_{n=0}^{N-1-k} x[n+k]\, x^*[n]

  Unbiased (may not be non-negative definite):
    \hat{r}(k) = \frac{1}{N-k} \sum_{n=0}^{N-1-k} x[n+k]\, x^*[n]

 Solve for { a_i } from the Yule-Walker equations or the normal equation of forward linear prediction
– Recall that for an AR process, the normal equation of FLP is equivalent to the Yule-Walker equation

 Obtain the power spectrum \hat{P}_{AR}(f):

  \hat{P}_{AR}(f) = \frac{\hat{\sigma}^2}{\left| 1 + \sum_{k=1}^{p} \hat{a}_k e^{-j 2\pi f k} \right|^2}
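The whole procedure can be sketched in a few lines of NumPy (function names and the test values are mine, not from the slides); the Levinson-Durbin recursion solves the Yule-Walker equations in O(p²):

```python
import numpy as np

def biased_acf(x, p):
    """Biased ACF estimate: r_hat(k) = (1/N) sum_n x[n+k] x[n], for k = 0..p."""
    N = len(x)
    return np.array([x[k:] @ x[:N - k] / N for k in range(p + 1)])

def levinson_durbin(r):
    """Solve the Yule-Walker equations; return ([1, a_1, ..., a_p], error power)."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, p + 1):
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err    # reflection coefficient
        a_prev = a.copy()
        a[1:m] = a_prev[1:m] + k * a_prev[m - 1:0:-1]
        a[m] = k
        err *= 1.0 - k * k                            # prediction-error update
    return a, err

def ar_psd(a, sigma2, f):
    """P_AR(f) = sigma2 / |1 + sum_k a_k e^{-j 2 pi f k}|^2, with a[0] = 1."""
    A = np.exp(-2j * np.pi * np.outer(f, np.arange(len(a)))) @ a
    return sigma2 / np.abs(A) ** 2

# Sanity check with the exact ACF of AR(1): x[n] - 0.9 x[n-1] = v[n], sigma2 = 1,
# for which r(k) = 0.9^k / (1 - 0.81); Levinson-Durbin must recover a_1 = -0.9.
r = np.array([0.9 ** k / (1 - 0.81) for k in range(3)])
a, s2 = levinson_durbin(r)
```

In practice `biased_acf` is applied to the observed data; feeding exact ACF values here just makes the recovery deterministic.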


SLIDE 13

3.2.2 Maximum Entropy Spectral Estimation (MESE)

 Viewpoint: extrapolation of the ACF
– {r[0], r[1], …, r[p]} is known; there are generally an infinite number of possible extrapolations for r(k) at larger lags
– As long as {r[p+1], r[p+2], …} guarantee that the correlation matrix is non-negative definite, they all form valid ACFs for a w.s.s. process

 Maximum entropy principle
– Perform the extrapolation s.t. the time series characterized by the extrapolated ACF has maximum entropy
– i.e. the time series will be the least constrained, thus the most random, one among all series having the same first (p+1) ACF values

 Maximizing entropy leads to the estimated p.s.d. being the smoothest one
– Recall a white noise process has a flat p.s.d.


SLIDE 15

MESE for Gaussian Process: Formulation

For a Gaussian random process, the entropy per sample is proportional to

  \int_{-1/2}^{1/2} \ln P(f) \, df

The max entropy spectral estimation is

  \max_{P(f)} \int_{-1/2}^{1/2} \ln P(f) \, df

subject to

  \int_{-1/2}^{1/2} P(f)\, e^{j 2\pi f k} \, df = r(k), \quad \text{for } k = 0, 1, \ldots, p


SLIDE 17

MESE for Gaussian Process: Solution

Using the Lagrange multiplier technique, the solution can be found as

  \hat{P}_{ME}(f) = \frac{\sigma^2}{\left| 1 + \sum_{k=1}^{p} a_k e^{-j 2\pi f k} \right|^2}

where {a_k} are found by solving the Yule-Walker equations given the ACF values r(0), …, r(p)

 For Gaussian processes, the MESE is equivalent to the AR spectral estimator, and \hat{P}_{ME}(f) is an all-pole spectrum
– The difference is in the assumptions on the process: Gaussian vs. AR


SLIDE 19

3.2.3 MA Spectral Estimation

An MA(q) model

  x[n] = \sum_{k=0}^{q} b_k v[n-k], \qquad B(z) = \sum_{k=0}^{q} b_k z^{-k}

can be used to define an MA spectral estimator

  \hat{P}_{MA}(f) = \hat{\sigma}^2 \left| 1 + \sum_{k=1}^{q} \hat{b}_k e^{-j 2\pi f k} \right|^2

Recall two important results on MA processes:
(1) Solving for {b_k} given {r(k)} requires solving a set of nonlinear equations;
(2) An MA process can be approximated by an AR process of sufficiently high order.
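As with the AR form, the MA estimator is easy to evaluate numerically. This toy check (my own b₁ value, not from the slides) verifies that P_MA(f) recovers the MA(1) autocorrelation values r(0) = σ²(1 + b₁²) and r(1) = σ²b₁:

```python
import numpy as np

# Toy MA(1): x[n] = v[n] + 0.5 v[n-1], so b_1 = 0.5, sigma^2 = 1 (assumed values).
b1, sigma2 = 0.5, 1.0
Nf = 4096
f = np.arange(Nf) / Nf - 0.5                     # uniform grid on [-1/2, 1/2)
P = sigma2 * np.abs(1 + b1 * np.exp(-2j * np.pi * f)) ** 2

# Inverse transform of the p.s.d. at lags 0 and 1 (grid means = integrals here):
# expect r(0) = 1 + b1^2 = 1.25 and r(1) = b1 = 0.5.
r0_numeric = P.mean()
r1_numeric = (P * np.cos(2 * np.pi * f)).mean()
```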

SLIDE 20

Basic Idea to Avoid Solving Nonlinear Equations

Consider two processes:

 Process #1: we observed N samples, and need to perform spectral estimation
– We first model it as a high-order AR process, generated by a 1/A(z) filter

 Process #2: an MA process generated by the A(z) filter
– Since we know A(z), we can know process #2’s autocorrelation function
– We model process #2 as an AR(q) process => the filter would be 1/B(z)

SLIDE 21

Use AR Model to Help Finding MA Parameters

Note: for simplicity, we consider real coefficients for the MA model.

  P_{MA}(z) = \sigma^2 B(z) B(z^{-1})

To approximate it with an AR(L) model, i.e.,

  P_{MA}(z) = \frac{\sigma^2}{A(z) A(z^{-1})}, \qquad A(z) = 1 + \sum_{k=1}^{L} a_k z^{-k}, \quad L \gg q

we need

  A(z) A(z^{-1}) = \frac{1}{B(z) B(z^{-1})}    (A of order L, B of order q)

 The RHS represents the power spectrum of an AR(q) process
 The inverse Z-transform of the LHS is the ACF of that AR(q) process
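A quick numeric illustration of why L ≫ q works (my own toy B(z), not from the slides): truncating the power-series expansion of 1/B(z) at order L gives a valid AR(L) approximation, with A(z)B(z) ≈ 1 up to a residual of order |b₁|^{L+1}:

```python
import numpy as np

# Toy MA(1) polynomial B(z) = 1 + 0.5 z^{-1} (assumed value).
b = np.array([1.0, 0.5])
L = 12
# AR(L) approximation: A(z) = truncated expansion of 1/B(z) = sum_k (-0.5)^k z^{-k}
a = (-0.5) ** np.arange(L + 1)

# Coefficients of the product A(z) B(z); should be close to [1, 0, ..., 0],
# with the only residual the truncation term 0.5 * 0.5^L at the highest lag.
prod = np.convolve(a, b)
```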


SLIDE 23

Use AR Model to Find MA Parameters: Solutions

Note: for simplicity, we consider real coefficients for the MA model.

SLIDE 24

Recall: ACF of Output Process After LTI Filtering

(Block diagram: a w.s.s. process driving a stable LTI filter.) The ACF of the output process is the input ACF convolved with the deterministic autocorrelation of the filter’s impulse response.
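For the white-noise input used throughout this section, the recalled fact reduces to r_y(k) = σ² Σ_n h[n] h[n+k]. A Monte-Carlo check with my own toy FIR filter (not from the slides):

```python
import numpy as np

# White v[n] with variance sigma^2 through FIR h[n]; then the output ACF is
# r_y(k) = sigma^2 * sum_n h[n] h[n+k]  (sigma^2 times the deterministic
# autocorrelation of the impulse response).
rng = np.random.RandomState(0)
h = np.array([1.0, 0.5, 0.25])                  # hypothetical impulse response
sigma2 = 1.0
v = rng.standard_normal(200_000) * np.sqrt(sigma2)
y = np.convolve(v, h)[:len(v)]                  # filtered white noise

N = len(y)
r_hat = np.array([y[k:] @ y[:N - k] / N for k in range(3)])          # biased ACF
r_theory = np.array([sigma2 * np.sum(h[:len(h) - k] * h[k:]) for k in range(3)])
# r_theory = [1.3125, 0.625, 0.25]; r_hat should agree within sampling error
```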


SLIDE 26

Use AR to Help Finding MA Parameters (cont’d)

A random process with power spectrum A(z)A(z^{-1}) can be viewed as filtering a white process by a filter A(z), and has autocorrelation proportional to

  \sum_{n=0}^{L-k} a_n a_{n+k}   for lag k

 Knowing such an autocorrelation function, we can use the Levinson-Durbin recursion to find the optimal linear prediction parameters for the process (or equivalently, its AR approximation parameters)

 Thus we get {b_k} from

  B(z) B(z^{-1}) = \frac{1}{A(z) A(z^{-1})}


SLIDE 28

Durbin’s Method

1. Use the Levinson-Durbin recursion and solve for the high-order AR coefficients \{\hat{a}_1, \ldots, \hat{a}_L\}
– That is, we first approximate the observed data sequence {x[0], …, x[N]} with an AR model of high order (often pick L > 4q)
– We use the biased ACF estimate here to ensure non-negative definiteness and smaller variance than the unbiased estimate (which divides by N-k)


SLIDE 30

Durbin’s Method (cont’d)

2. Fit the data sequence \{1, \hat{a}_1, \hat{a}_2, \ldots, \hat{a}_L\} to an AR(q) model, using

  r_a(k) = \frac{1}{L+1} \sum_{n} \hat{a}_n \hat{a}_{n+k}

– The result {b_i} gives the estimated MA parameters for the original {x[n]}
– Note we add the 1/(L+1) factor to allow the interpretation of r_a(k) as an autocorrelation function estimator
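A compact sketch of the two-step method (helper names and the MA(1) test case are mine, not from the slides): step 1 fits a high-order AR(L) to the data; step 2 fits an AR(q) to the sequence {1, â₁, …, â_L}, whose coefficients are the MA estimates:

```python
import numpy as np

def biased_acf(x, p):
    N = len(x)
    return np.array([x[k:] @ x[:N - k] / N for k in range(p + 1)])

def levinson_durbin(r):
    """Solve the Yule-Walker equations; return ([1, a_1, ..., a_p], error power)."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, p + 1):
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err
        a_prev = a.copy()
        a[1:m] = a_prev[1:m] + k * a_prev[m - 1:0:-1]
        a[m] = k
        err *= 1.0 - k * k
    return a, err

def durbin_ma(x, q, L):
    """Durbin's method: estimate MA(q) parameters via a high-order AR(L) fit."""
    a_hat, _ = levinson_durbin(biased_acf(x, L))      # step 1: AR(L), often L > 4q
    # Step 2: treat {1, a_1, ..., a_L} as a data sequence; the 1/(L+1) factor
    # lets r_a(k) be read as an autocorrelation estimate.
    ra = np.array([a_hat[k:] @ a_hat[:L + 1 - k] / (L + 1) for k in range(q + 1)])
    b_hat, _ = levinson_durbin(ra)
    return b_hat[1:]                                  # estimated {b_k}

# Toy MA(1) data: x[n] = v[n] + 0.5 v[n-1]  (b_1 = 0.5, my own test value)
rng = np.random.RandomState(1)
v = rng.standard_normal(100_000)
x = np.convolve(v, [1.0, 0.5])[:len(v)]
b = durbin_ma(x, q=1, L=20)       # b[0] should come out close to 0.5
```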


SLIDE 32

3.2.4 ARMA Spectral Estimation

Recall the ARMA(p,q) model

  x[n] + \sum_{k=1}^{p} a_k x[n-k] = \sum_{k=0}^{q} b_k v[n-k]

We define an ARMA(p,q) spectral estimator

  \hat{P}_{ARMA}(f) = \hat{\sigma}^2 \, \frac{\left| 1 + \sum_{k=1}^{q} \hat{b}_k e^{-j 2\pi f k} \right|^2}{\left| 1 + \sum_{k=1}^{p} \hat{a}_k e^{-j 2\pi f k} \right|^2}


SLIDE 34

Modified Yule-Walker Equations

Recall the Yule-Walker equations for an ARMA(p,q) process. We can use the equations for k ≥ q+1, which no longer involve the MA parameters, to solve for {a_l}: the “Modified Yule-Walker Equations”

  r(k) + \sum_{l=1}^{p} a_l \, r(k-l) = 0, \qquad k \ge q+1


SLIDE 36

Estimating ARMA Parameters

1. By solving the modified Yule-Walker equations, we get

  \hat{A}(z) = 1 + \sum_{k=1}^{p} \hat{a}_k z^{-k}

2. To estimate {b_k}, we first filter {x[n]} with \hat{A}(z), and model the output with an MA(q) model using Durbin’s method.


SLIDE 38

Extension: LSMYWE Estimator

 Performance of solving p modified Yule-Walker equations followed by Durbin’s method
– May yield highly variable spectral estimates (esp. when the matrix involving the ACF is nearly singular due to poor ACF estimates)

 Improvement: use more than p equations to solve for {a_1, …, a_p} in a least-squares sense
– Use the Yule-Walker equations for k = (q+1), …, M:  min ||t − S a||^2
– Least-squares solution: a = (S^H S)^{-1} S^H t
– Then obtain {b_i} by Durbin’s method

 “Least-squares modified Yule-Walker equations” (LSMYWE)

Ref: review in Hayes’ book Sec. 2.3.6 on least-squares solutions
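The least-squares step maps directly onto `numpy.linalg.lstsq`. The ARMA(1,1) test case below is my own toy setup (φ = 0.8, θ = 0.5, σ² = 1) using the exact ACF, for which the modified Yule-Walker equations recover a₁ = −0.8 exactly:

```python
import numpy as np

# Exact ACF of the ARMA(1,1) process x[n] = 0.8 x[n-1] + v[n] + 0.5 v[n-1]:
phi, theta, sigma2 = 0.8, 0.5, 1.0
M = 8
r = np.zeros(M + 1)
r[0] = sigma2 * (1 + theta ** 2 + 2 * phi * theta) / (1 - phi ** 2)
r[1] = phi * r[0] + theta * sigma2
for k in range(2, M + 1):
    r[k] = phi * r[k - 1]             # r(k) = phi * r(k-1) for k >= 2

p, q = 1, 1
# Modified Yule-Walker rows for k = q+1, ..., M:  sum_l a_l r(k-l) = -r(k)
S = np.array([[r[abs(k - l)] for l in range(1, p + 1)]
              for k in range(q + 1, M + 1)])
t = -r[q + 1:M + 1]
a_hat, *_ = np.linalg.lstsq(S, t, rcond=None)   # least-squares solution
# a_hat[0] is a_1 in the convention x[n] + a_1 x[n-1] = ...; here -0.8
```

With estimated (noisy) ACF values the overdetermined system is where the least-squares solve genuinely helps; {b_k} would then follow from Durbin's method applied to the Â(z)-filtered data.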

SLIDE 39

Comparison of Different Methods: Revisit

 Test case: a process consisting of narrowband components (sinusoids) and a broadband component (AR)
– x[n] = 2 cos(ω_1 n) + 2 cos(ω_2 n) + 2 cos(ω_3 n) + z[n], where z[n] = a_1 z[n-1] + v[n], a_1 = −0.85, σ^2 = 0.1
– ω_1/2π = 0.05, ω_2/2π = 0.40, ω_3/2π = 0.42
– N = 32 data points are available => periodogram resolution Δf = 1/32

 Examine the typical characteristics of various non-parametric and parametric spectral estimators

(Fig. 2.17 from the Lim/Oppenheim book)


SLIDE 42

3.2.5 Model Order Selection

 The best way to determine the model order is to base it on the physics of the data generation process

 Example: speech processing
– Studies show the vocal tract can be modeled as an all-pole filter having 4 resonances in a 4 kHz band, thus at least 4 pairs of complex conjugate poles are necessary
– Typically 10-12 poles are used in AR modeling for speech

 When no such knowledge is available, we can use some statistical test to estimate the order

Ref. for in-depth exploration: “Model-order selection,” by P. Stoica and Y. Selen, IEEE Signal Processing Magazine, July 2004.

SLIDE 43

Considerations for Order Selection

 Modeling error
– Modeling error measures the (statistical) difference between the true data value and the approximation by the model
  e.g., estimating the linear prediction MSE in AR modeling
– Usually, for a given type of model (e.g. AR, ARMA), the modeling error decreases as we increase the model order

 Balance between the modeling error and the number of model parameters to be estimated
– The number of parameters that need to be estimated and represented increases as we use a higher model order => cost of overmodeling
– Can balance the modeling error and the cost of going to a higher-order model by imposing a penalty term that increases with the model order

SLIDE 44

A Few Commonly Used Criteria

 Akaike Information Criterion (AIC)
– A general estimate of the Kullback-Leibler divergence between the assumed and true p.d.f., with an order penalty term increasing linearly
– Choose the model order that minimizes

  \mathrm{AIC}(i) = N \ln \hat{\rho}_i + 2i

where N is the amount of available data, \hat{\rho}_i is the model error, and i is the size of the model order: i = p for AR(p), i = p+q for ARMA(p,q)

 Minimum Description Length (MDL) Criterion
– Imposes a bigger penalty term to overcome AIC’s overestimation
– The estimated order converges to the true order as N goes to infinity

  \mathrm{MDL}(i) = N \ln \hat{\rho}_i + i \log N
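A sketch of order selection on simulated AR(2) data (all parameter values are mine, not from the slides): fit AR(i) for i = 1…10 with Levinson-Durbin, use the prediction-error power ρ̂ᵢ as the model error, and minimize each criterion. Natural log is used for both penalty terms here.

```python
import numpy as np

def levinson_errors(r, L):
    """Prediction-error powers rho_1..rho_L from the Levinson-Durbin recursion."""
    a = np.zeros(L + 1)
    a[0] = 1.0
    err, errs = r[0], []
    for m in range(1, L + 1):
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err
        a_prev = a.copy()
        a[1:m] = a_prev[1:m] + k * a_prev[m - 1:0:-1]
        a[m] = k
        err *= 1.0 - k * k
        errs.append(err)
    return np.array(errs)

rng = np.random.RandomState(2)
N = 2000
v = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):                  # AR(2): x[n] = 0.9 x[n-1] - 0.5 x[n-2] + v[n]
    x[n] = 0.9 * x[n - 1] - 0.5 * x[n - 2] + v[n]

r = np.array([x[k:] @ x[:N - k] / N for k in range(11)])    # biased ACF
rho = levinson_errors(r, 10)
orders = np.arange(1, 11)
aic = N * np.log(rho) + 2 * orders
mdl = N * np.log(rho) + orders * np.log(N)
aic_order = orders[np.argmin(aic)]
mdl_order = orders[np.argmin(mdl)]
# Both criteria typically select the true order 2 here; since MDL's penalty
# (ln N ~ 7.6) exceeds AIC's (2), MDL never picks a larger order than AIC.
```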
