Chapter 11: Scaling and Round-off Noise (Keshab K. Parhi)




Outline

  • Introduction
  • Scaling and Round-off Noise
  • State Variable Description of Digital Filters
  • Scaling and Round-off Noise Computation
  • Round-off Noise Computation Using State Variable Description

  • Slow-Down, Retiming, and Pipelining

Introduction

  • In a fixed-point digital filter implementation, the overall input-output behavior is non-ideal. The quantization of signals and coefficients using finite word-lengths and the propagation of round-off noise to the output are the sources of noise.
  • Other undesirable behavior includes limit-cycle oscillations, where undesirable periodic components are present at the filter output even in the absence of any input. These may be caused by internal rounding or overflow.
  • Scaling is often used to constrain the dynamic range of the variables to a certain word-length.
  • State variable description of a linear filter: provides a mathematical formulation for studying various structures. It is most useful for computing quantities that depend on the internal structure of the filter. The power at each internal node and the output round-off noise of a digital FIR/IIR filter can be easily computed once the digital filter is described in state variable form.


Scaling and Round-off Noise

  • Scaling: a process of readjusting certain internal gain parameters in order to constrain internal signals to a range appropriate to the hardware, with the constraint that the transfer function from input to output should not be changed.
  • Illustration:

– The filter in Fig. 11.1(a) with unscaled node x has the transfer function

  $H(z) = D(z) + F(z)G(z)$   (11.1)

– To scale the node x, we divide F(z) by some number β and multiply G(z) by the same number, as in Fig. 11.1(b). Although the transfer function does not change under this operation, the signal level at node x has been changed.

Scaling Operation


Fig. 11.1 (a) A filter with unscaled node x: IN → F(z) → x → G(z) → OUT, with direct path D(z). (b) A filter with scaled node x': the blocks become F(z)/β and βG(z).


– The scaling parameter β can be chosen to satisfy any specific scaling rule, such as

  $l_1$ scaling: $\beta = \sum_{i=0}^{\infty} |f(i)|$   (11.2)
  $l_2$ scaling: $\beta = \delta \sqrt{\sum_{i=0}^{\infty} f^2(i)}$   (11.3)

where f(i) is the unit-sample response from the input to the node x, and the parameter δ can be interpreted as the number of standard deviations representable in the register at node x if the input is unit-variance white noise.
– If the input is bounded by $|u(n)| \le 1$, then

  $|x(n)| = \left|\sum_{i=0}^{\infty} f(i)\,u(n-i)\right| \le \sum_{i=0}^{\infty} |f(i)|$   (11.4)

  • Equation (11.4) represents the true bound on the range of x, and overflow is completely avoided by $l_1$ scaling in (11.2), which is the most stringent scaling policy.


– The input can generally be assumed to be white noise. For unit-variance white-noise input, the variance at node x is given by

  $E[x^2(n)] = \sum_{i=0}^{\infty} f^2(i)$   (11.5)

  • $l_2$ scaling is commonly used because most input signals can be assumed to be white noise.
  • (11.5) is a variance (not a strict bound), so we can increase δ in (11.3) to prevent possible overflow. But increasing δ will decrease the SNR (signal-to-noise ratio). Thus, there is a trade-off between overflow and round-off noise.
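The two scaling rules above can be sketched numerically. The first-order response f(i) = a^i used below is a hypothetical stand-in (not an example from the text), truncated so the infinite sums become finite:

```python
# Sketch: computing the l1- and l2-scaling factors of (11.2)/(11.3)
# for a node whose unit-sample response is f(i) = a**i (hypothetical).

def scaling_factors(f, delta=1.0):
    """Return (beta_l1, beta_l2) for a truncated unit-sample response f."""
    beta_l1 = sum(abs(fi) for fi in f)                  # (11.2): true overflow bound
    beta_l2 = delta * sum(fi * fi for fi in f) ** 0.5   # (11.3): white-noise rule
    return beta_l1, beta_l2

a = 0.9
f = [a ** i for i in range(200)]   # truncate the infinite sum
b1, b2 = scaling_factors(f, delta=1.0)
print(b1, b2)
```

As the text notes, the $l_1$ bound is the most stringent: for this response it is about 10, while the $l_2$ (δ = 1) value is only about 2.29.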


Scaling and Round-off Noise (cont'd)

Round-off Noise

  • Round-off noise: the product of two W-bit fixed-point fractions is a (2W−1)-bit number. This product must eventually be quantized to W bits by rounding or truncation, which results in round-off noise.
  • Example:

– Consider the 1st-order IIR filter shown in Fig. 11.2. Assume that the input wordlength is W = 8 bits and that the multiplier coefficient wordlength is also 8 bits. To maintain full precision in the output, we would need to increase the output wordlength by 8 bits per iteration. This is clearly infeasible. Thus, the result needs to be rounded or truncated to its nearest 8-bit representation. This introduces a round-off noise e(n) (see Fig. 11.3).


Fig. 11.2 A 1st-order IIR filter with coefficient a (8-bit input u(n), 15-bit product, 8-bit state x(n)).  Fig. 11.3 Model of round-off error: the quantizer is replaced by an additive error source e(n).


  • Round-off noise mathematical model: usually modeled as an infinite-precision system with an external error input (see Fig. 11.3).
  • Rounding is a nonlinear operation, but its effect at the output can be analyzed using linear system theory with the following assumptions about e(n):

– 1. e(n) is uniformly distributed white noise.
– 2. e(n) is a wide-sense stationary random process (the mean and covariance of e(n) are independent of the time index n).
– 3. e(n) is uncorrelated with all other signals, such as the input and other noise signals.

  • Let the wordlength of the output be W bits; then the round-off error e(n) is bounded by

  $-\tfrac{1}{2}\,2^{-(W-1)} \le e(n) \le \tfrac{1}{2}\,2^{-(W-1)}$   (11.6)

– The error is assumed to be uniformly distributed over the interval in (11.6); the corresponding probability distribution is shown in Fig. 11.4, where $\Delta = 2^{-(W-1)}$ is the length of the interval.


Fig. 11.4 Error probability distribution: $P_e(x) = 1/\Delta$ for $-\Delta/2 \le x \le \Delta/2$, and 0 elsewhere.

  • The mean and variance of this error function are

  $E[e(n)] = \int_{-\Delta/2}^{\Delta/2} x\,P_e(x)\,dx = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} x\,dx = 0$   (11.7)

  $E[e^2(n)] = \int_{-\Delta/2}^{\Delta/2} x^2 P_e(x)\,dx = \frac{1}{\Delta}\left[\frac{x^3}{3}\right]_{-\Delta/2}^{\Delta/2} = \frac{\Delta^2}{12} = \frac{2^{-2W}}{3}$   (11.8)

– (11.8) can be rewritten as (11.9), where $\sigma_e^2$ is the variance of the round-off error in a finite-precision W-bit wordlength system:

  $\sigma_e^2 = \frac{2^{-2W}}{3}$   (11.9)
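The variance result (11.9) can be checked with a small Monte-Carlo sketch, under the assumption of round-to-nearest quantization of a uniformly distributed signal:

```python
# Sketch: checking sigma_e^2 = 2**(-2W)/3 of (11.9) by simulation.
# Values are quantized to W fractional bits by rounding to the
# nearest level, and the error variance is measured.
import random

random.seed(0)
W = 8
delta = 2.0 ** -(W - 1)          # quantization step for W-bit fractions

errors = []
for _ in range(200_000):
    v = random.uniform(-1, 1)            # idealized full-precision product
    q = round(v / delta) * delta         # round to nearest W-bit level
    errors.append(q - v)

var_meas = sum(e * e for e in errors) / len(errors)
var_theory = 2.0 ** (-2 * W) / 3
print(var_meas, var_theory)   # agree to within a few percent
```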


– The variance is proportional to $2^{-2W}$, so increasing the wordlength by 1 bit decreases the error variance by a factor of 4.
  • Purpose of analyzing round-off noise: determine its effect at the output.

– If the noise variance at the output is not negligible in comparison to the output signal level, the wordlength should be increased or some low-noise structure should be used.
– We need to compute the SNR at the output, not just the noise gain to the output.
– In noise analysis, we use a double-length accumulator model: rounding is performed after two (2W−1)-bit products are added. Notice that the multipliers are the sources of round-off noise.


State Variable Description of Digital Filters

  • Consider the signal flow graph (SFG) of an N-th order digital filter in Fig. 11.5. We can represent it in the following recursive matrix form:

  $x(n+1) = A\,x(n) + b\,u(n)$   (11.10)
  $y(n) = c^T x(n) + d\,u(n)$   (11.11)

– where x is the state vector, u is the input, and y is the output of the filter; x, b, and c are N×1 column vectors; A is an N×N matrix; d, u, and y are scalars.
  • Let $\{f_i(n)\}$ be the unit-sample response from the input u(n) to the state $x_i(n)$, and let $\{g_i(n)\}$ be the unit-sample response from the state $x_i(n)$ to the output y(n). It is necessary to scale the inputs to the multipliers in order to avoid internal overflow.


  • The signals x(n) are the inputs to the multipliers in Fig. 11.5, so we need to compute f(n) for scaling. Conversely, to find the noise variance at the output, it is necessary to find the unit-sample response from the location of the noise source e(n) to y(n); thus g(n) represents the unit-sample response of the noise transfer function.

Fig. 11.5 Signal flow graph of the IIR filter: u(n) → b → x(n+1) → delay $z^{-1}$ → x(n), with feedback A, noise input e(n) at the state, output path $c^T$ to y(n), and direct path d.

  • From the SFG of Fig. 11.5, we can write

  $X(z) = (I - A z^{-1})^{-1} z^{-1} b\,U(z)$   (11.12)


  • Then we can write the z-transform of f(n), F(z), as

  $F(z) = \frac{X(z)}{U(z)} = (I z^{-1} + A z^{-2} + A^2 z^{-3} + \cdots)\,b$   (11.13)
  $\Rightarrow\; f(n) = A^{n-1} b, \quad n \ge 1$   (11.14)

– We can compute f(n) by substituting δ(n) for u(n) and using the recursion (11.15) with initial condition f(0) = 0:

  $f(n+1) = A\,f(n) + b\,\delta(n)$   (11.15)

– The unit-sample response g(n) from the state x(n) to the output y(n) can be computed similarly with u(n) = 0. The corresponding SFG is shown in Fig. 11.6, which represents the following transfer function G(z):

  $G(z) = c^T (I - A z^{-1})^{-1}$   (11.16)
  $\Rightarrow\; g(n) = c^T A^n, \quad n \ge 0$   (11.17)


  • State covariance matrix K:

  $K \equiv E\{x(n)\,x^T(n)\}$   (11.18)

– Because x is an N×1 vector, K is an N×N matrix.
– K is a measure of the error power at the various states (the diagonal element $K_{ii}$ is the energy of the error signal at state $x_i$ due to the input white noise).

Fig. 11.6 Signal flow graph of g(n): STATE → $(I - A z^{-1})^{-1}$ → $c^T$ → OUT.


  • Express K in a form that reflects the error properties of the filter:

– The state vector x(n) can be obtained by the convolution of u(n) and f(n); using (11.14) for f(n), we get

  $x(n) = [x_1(n), x_2(n), \cdots, x_N(n)]^T = f(n) * u(n) = \sum_{l=1}^{\infty} A^{l-1} b\,u(n-l)$   (11.19), (11.20)

– Therefore

  $K = E\Big[\sum_l \sum_m A^{l-1} b\,u(n-l)\,u(n-m)\,b^T (A^T)^{m-1}\Big] = \sum_l \sum_m A^{l-1} b\,b^T (A^T)^{m-1}\,E[u(n-l)\,u(n-m)]$   (11.21)

– Assume u(n) is zero-mean unit-variance white noise, so we have

  $E[u^2(n)] = 1$   (11.22)
  $E[u(n-l)\,u(n-k)] = 0, \quad l \ne k$   (11.23)


– Substituting (11.22) and (11.23) into (11.21), we obtain

  $K = \sum_{l=1}^{\infty} A^{l-1} b\,b^T (A^T)^{l-1} = \sum_{l=1}^{\infty} f(l)\,f^T(l)$   (11.24)

– Finally, we get the Lyapunov equation:

  $K = b\,b^T + A\,K\,A^T$   (11.25)

  • If for some state $x_i$, $E[x_i^2]$ has a higher value than for the other states, then $x_i$ needs to be assigned more bits, which leads to extra hardware and an irregular design.
– By scaling, we can ensure that all nodes have equal power, so the same word-length can be assigned to all nodes.
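The Lyapunov equation (11.25) can be solved numerically by simple fixed-point iteration; the sketch below assumes a stable filter, and the 1st-order system used to exercise it is hypothetical:

```python
# Sketch: solving the Lyapunov equation K = b b^T + A K A^T of (11.25)
# by fixed-point iteration (converges when the filter is stable).
import numpy as np

def state_covariance(A, b, tol=1e-12, max_iter=10_000):
    """Iterate K <- b b^T + A K A^T until convergence."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    K = b @ b.T
    for _ in range(max_iter):
        K_next = b @ b.T + A @ K @ A.T
        if np.max(np.abs(K_next - K)) < tol:
            return K_next
        K = K_next
    raise RuntimeError("iteration did not converge; the filter may be unstable")

# Hypothetical 1st-order example: x(n+1) = a x(n) + u(n) gives K = 1/(1 - a^2)
a = 0.5
K = state_covariance([[a]], [1.0])
print(K)
```

For a = 0.5 this returns K = 4/3, matching the closed form $1/(1-a^2)$.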


  • Orthogonal filter structure: all internal variables are uncorrelated and have unit variance, assuming a white-noise input; such a structure satisfies

  $K = I = A\,A^T + b\,b^T$   (11.26)

  • The advantages of the orthogonal filter structure:

– The scaling rule is automatically satisfied
– The round-off noise gain is low and invariant under frequency transformations
– Overflow oscillations are impossible

  • Similarly, define the output covariance matrix W as follows:

  $W = \sum_{n=0}^{\infty} g^T(n)\,g(n) = \sum_{n=0}^{\infty} (A^T)^n c\,c^T A^n$   (11.27)

  • Proceeding in a similar manner as before, we can get

  $W = A^T W\,A + c\,c^T$   (11.28)


Scaling and Round-off Noise Computation

Scaling Operation

  • The same word-length can be assigned to all the variables of the system only if all the states have equal power. This is achieved by scaling.
  • The state vector is pre-multiplied by the inverse of the scaling matrix T.

– If we denote the scaled states by $x_s$, we can write

  $x(n) = T\,x_s(n) \;\Rightarrow\; x_s(n) = T^{-1} x(n)$   (11.29)

– Substituting for x from (11.29) into (11.10) and solving for $x_s$, we get

  $T\,x_s(n+1) = A\,T\,x_s(n) + b\,u(n)$   (11.30)
  $\Rightarrow\; x_s(n+1) = T^{-1} A\,T\,x_s(n) + T^{-1} b\,u(n)$   (11.31)
  $\Rightarrow\; x_s(n+1) = A_s\,x_s(n) + b_s\,u(n)$   (11.32)


– where

  $A_s = T^{-1} A\,T, \qquad b_s = T^{-1} b$   (11.33)

  • Similarly, the output equation (11.11) can be derived as

  $y(n) = c^T T\,x_s(n) + d\,u(n) = c_s^T x_s(n) + d_s\,u(n), \qquad c_s = T^T c, \quad d_s = d$

  • The scaled K matrix is given by

  $K_s = E[x_s\,x_s^T] = E[T^{-1} x\,(T^{-1} x)^T] = T^{-1} K\,(T^{-1})^T$   (11.34)

  • It is desirable to have equal power at all states, so the transformation matrix T is chosen such that the $K_s$ matrix of the scaled system has all diagonal entries equal to 1.


  • Further assume T to be diagonal, i.e.,

  $T = \mathrm{diag}[t_{11}, t_{22}, \cdots, t_{NN}] \;\Rightarrow\; T^{-1} = \mathrm{diag}[1/t_{11}, 1/t_{22}, \cdots, 1/t_{NN}]$   (11.35), (11.36)

– From (11.34) and (11.35), letting $(K_s)_{ii} = 1$, we obtain

  $(K_s)_{ii} = \frac{K_{ii}}{t_{ii}^2} = 1 \;\Rightarrow\; t_{ii} = \sqrt{K_{ii}}$   (11.37), (11.38)

  • Conclusion: by choosing the i-th diagonal entry of T equal to the square root of the i-th diagonal element of the K matrix, all the states are guaranteed to have equal, unity power.
  • Example (Example 11.4.1, p. 387): consider the unscaled 2nd-order filter shown in Fig. 11.7; its state variable matrices are (see the next page):


– Example (cont'd):

  $A = \begin{bmatrix} 0 & 1 \\ 1/16 & 0 \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad c = \begin{bmatrix} 1/16 \\ -1/2 \end{bmatrix}, \quad d = 1$

Fig. 11.7 SFG of the unscaled 2nd-order filter: input u(n), output y(n), two delays $z^{-1}$ holding states $x_1(n)$ and $x_2(n)$, and gains 1/16, 1/16, −1/2, and 1.

– The state covariance matrix K can be computed using (11.25) as

  $\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} = A \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} A^T + b\,b^T = \begin{bmatrix} K_{22} & K_{21}/16 \\ K_{12}/16 & K_{11}/256 + 1 \end{bmatrix}$


– Thus we get:

  $K_{11} = K_{22} = \tfrac{256}{255}, \qquad K_{12} = K_{21} = 0$

– For $l_2$ scaling with δ = 1, the transformation matrix is

  $T = \mathrm{diag}\!\left[\tfrac{16}{\sqrt{255}},\ \tfrac{16}{\sqrt{255}}\right]$

– Thus the scaled filter is described as below and is shown in Fig. 11.8:

  $A_s = T^{-1} A\,T = A, \quad b_s = T^{-1} b = \begin{bmatrix} 0 \\ \sqrt{255}/16 \end{bmatrix} \approx \begin{bmatrix} 0 \\ 0.998 \end{bmatrix}, \quad c_s = T^T c = \begin{bmatrix} 1/\sqrt{255} \\ -8/\sqrt{255} \end{bmatrix} \approx \begin{bmatrix} 0.0626 \\ -0.501 \end{bmatrix}, \quad d_s = 1$

– Note: the state covariance matrix $K_s$ of the scaled filter is

  $K_s = T^{-1} K\,(T^{-1})^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
K


Fig. 11.8 SFG of the scaled 2nd-order filter: input gain 0.998 ($= \sqrt{255}/16$), output taps 0.0626 ($= 1/\sqrt{255}$) and −0.501 ($= -8/\sqrt{255}$), internal gain 1/16, states $x_1(n)$ and $x_2(n)$.


Scaling and Round-off Noise Computation (cont'd)

Round-off Noise Computation

  • Computation: let $e_i(n)$ be the error due to round-off at state $x_i$. Then the output round-off noise $y_i(n)$ due to this error can be written as the convolution of the error input with the state-to-output unit-sample response $g_i(n)$:

  $y_i(n) = e_i(n) * g_i(n) = \sum_l e_i(l)\,g_i(n-l)$   (11.39)

– Consider the mean and the variance of $y_i(n)$. Since $e_i(n)$ is white noise with zero mean, we have

  $E[y_i(n)] = 0$   (11.40)

  $E[y_i^2(n)] = E\Big[\sum_l e_i(l)\,g_i(n-l) \sum_m e_i(m)\,g_i(n-m)\Big] = \sum_l \sum_m g_i(n-l)\,g_i(n-m)\,E[e_i(l)\,e_i(m)]$

Let $\sigma_e^2 = E[e_i^2(n)]$, the variance of $e_i(n)$.


– (cont'd)

  $E[y_i^2(n)] = \sigma_e^2 \sum_l \sum_m \delta_{lm}\,g_i(n-l)\,g_i(n-m) = \sigma_e^2 \sum_l g_i^2(n-l) = \sigma_e^2 \sum_n g_i^2(n)$   (11.41)

– Expanding W in its explicit matrix form, we can observe that all its diagonal entries are of the form $\sum_n g_i^2(n)$:

  $W = \sum_n g^T(n)\,g(n) = \sum_n [g_1(n), \cdots, g_N(n)]^T\,[g_1(n), \cdots, g_N(n)]$   (11.42)

  $= \begin{bmatrix} \sum_n g_1^2(n) & \cdots & \sum_n g_1(n)\,g_N(n) \\ \vdots & \ddots & \vdots \\ \sum_n g_N(n)\,g_1(n) & \cdots & \sum_n g_N^2(n) \end{bmatrix}$   (11.43)


– Using (11.41), we can write the expression for the total output round-off noise in terms of the trace of W:

  $\text{total round-off noise} = \sum_{i=1}^{N} \sigma_e^2 \sum_n g_i^2(n) = \sigma_e^2 \sum_{i=1}^{N} W_{ii} = \sigma_e^2\,\mathrm{Trace}(W)$   (11.44)

– Note: (11.44) is valid for all cases, but when there is no round-off operation at some node, the $W_{ii}$ corresponding to that node should not be included while computing the noise power.
– (11.44) can be extended to compute the total round-off noise for the scaled system, which will simply be the trace of the scaled W matrix:

  $\text{total round-off noise (scaled system)} = \sigma_e^2\,\mathrm{Trace}(W_s)$   (11.45)

– Replacing the filter parameters with the scaled parameters in (11.27), we can show:

  $W_s = T^T\,W\,T$   (11.46)

– Also, for a diagonal T we can write:

  $\mathrm{Trace}(W_s) = \sum_{i=1}^{N} (W_s)_{ii} = \sum_{i=1}^{N} t_{ii}^2\,W_{ii}$   (11.47)


– (11.47) can be rewritten as follows, because $t_{ii}^2 = K_{ii}$:

  $\mathrm{Trace}(W_s) = \sum_{i=1}^{N} K_{ii}\,W_{ii} \;\Rightarrow\; \text{total round-off noise (scaled system)} = \sigma_e^2 \sum_{i=1}^{N} K_{ii}\,W_{ii}$   (11.48)

– Conclusion: the round-off noise of the $l_2$-scaled system can be computed using (11.48), i.e., using $\{K_{ii}, W_{ii}\}$.
  • Example (Example 11.4.2, p. 390): to find the output round-off noise for the scaled filter in Fig. 11.8, W can be calculated using (11.28) as

  $\begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} = A_s^T \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} A_s + c_s\,c_s^T = \begin{bmatrix} W_{22}/256 + 1/255 & W_{21}/16 - 8/255 \\ W_{12}/16 - 8/255 & W_{11} + 64/255 \end{bmatrix}$


– Thus

  $W_s = \begin{bmatrix} 0.0049 & -0.0332 \\ -0.0332 & 0.2559 \end{bmatrix}$

– The total output round-off noise for the scaled filter is

  $(W_{11} + W_{22})\,\sigma_e^2 = 0.2608\,\sigma_e^2$

– For the unscaled filter in Fig. 11.7, using (11.28) with $c\,c^T = \begin{bmatrix} 1/256 & -1/32 \\ -1/32 & 1/4 \end{bmatrix}$:

  $\begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} = \begin{bmatrix} W_{22}/256 + 1/256 & W_{21}/16 - 1/32 \\ W_{12}/16 - 1/32 & W_{11} + 1/4 \end{bmatrix}$

– Thus

  $W = \begin{bmatrix} 0.0049 & -0.0333 \\ -0.0333 & 0.2549 \end{bmatrix}$


– The total output round-off noise for the unscaled filter is

  $(W_{11} + W_{22})\,\sigma_e^2 = 0.2598\,\sigma_e^2$

– Notice: the scaled filter suffers from larger round-off noise, which can also be observed by comparing the unscaled and scaled filter structures:

  • In the scaled filter, the input is scaled down by multiplying the input by 0.998 to avoid overflow (see Fig. 11.8). Therefore, to keep the transfer functions of the two filters the same, the output path of the scaled filter must have a gain that is 1/0.998 times the gain of the output path of the unscaled filter. Thus the round-off noise of the scaled filter is $(1/0.998)^2$ times that of the unscaled filter:

  $\frac{\text{round-off noise (unscaled)}}{\text{round-off noise (scaled)}} = \frac{0.2598}{0.2608} < 1$

  • The above observation represents the trade-off between overflow and round-off noise: more stringent scaling reduces the possibility of overflow but increases the effect of round-off noise.
– Notice: (11.48) can be confirmed by

  $K_{11} W_{11} + K_{22} W_{22} = \tfrac{256}{255}(0.0049 + 0.2549) = 0.2608 = (W_{11} + W_{22})_{\text{scaled}}$
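The two noise figures above can be confirmed with a short numerical sketch, again using the matrices of Figs. 11.7 and 11.8 as reconstructed earlier (W is obtained by iterating (11.28)):

```python
# Sketch: confirming the 0.2598 (unscaled) vs 0.2608 (scaled) output
# round-off noise figures for the example filter.
import numpy as np

def output_covariance(A, c, iters=200):
    """Iterate W <- c c^T + A^T W A, per equation (11.28)."""
    c = c.reshape(-1, 1)
    W = c @ c.T
    for _ in range(iters):
        W = c @ c.T + A.T @ W @ A
    return W

A = np.array([[0.0, 1.0], [1.0 / 16, 0.0]])   # A_s = A since T is scalar
c = np.array([1.0 / 16, -0.5])                # unscaled output taps
t = np.sqrt(256.0 / 255)                      # t_ii = sqrt(K_ii)

noise_unscaled = np.trace(output_covariance(A, c))
noise_scaled = np.trace(output_covariance(A, t * c))   # c_s = T^T c = t*c
print(noise_unscaled, noise_scaled)   # ~0.2598 and ~0.2608
```

The ratio comes out exactly 256/255, confirming the (1/0.998)² argument above.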


Round-off Noise Computation Using State Variable Description

Algorithms for Computing K and W

  • Parseval's relation and Cauchy's residue theorem are useful for finding the signal power or round-off noise of digital filters, but they are not useful for complex structures.
  • The power at each internal node and the output round-off noise of a complex digital filter can be easily computed once the digital filter is described in state variable form.
  • Algorithm for computing K:

– Using (11.24), K can be computed efficiently by the following algorithm (see next page).


  • Algorithm for computing K (cont'd):

– 1. Initialize: $F \leftarrow A$, $K \leftarrow b\,b^T$
– 2. Loop: $K \leftarrow K + F\,K\,F^T$, $F \leftarrow F \cdot F$
– 3. Computation continues until $F \cong 0$

  • Algorithm analysis:

– After the 1st loop iteration:

  $K = b\,b^T + A\,b\,b^T A^T, \qquad F = A^2$   (11.49)

– After the 2nd loop iteration:

  $K = b\,b^T + A\,b\,b^T A^T + A^2 b\,b^T (A^T)^2 + A^3 b\,b^T (A^T)^3, \qquad F = A^4$   (11.50)
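The doubling algorithm above can be sketched as follows. Note the accumulated sum is written with the index shifted, $K = \sum_{l \ge 0} A^l (b\,b^T)(A^T)^l$, which equals (11.24):

```python
# Sketch: the doubling algorithm. Squaring F each pass doubles the
# number of accumulated terms per iteration, so convergence is fast
# for any stable A (the eigenvalues of A are the filter's poles).
import numpy as np

def lyapunov_doubling(A, B, tol=1e-14):
    """Compute sum_{l>=0} A^l B (A^T)^l via K <- K + F K F^T, F <- F F."""
    F = np.asarray(A, dtype=float)
    K = np.asarray(B, dtype=float)
    while np.max(np.abs(F)) > tol:     # F = A^(2^k) -> 0 for a stable filter
        K = K + F @ K @ F.T
        F = F @ F
    return K

# Exercise it on the 2nd-order example of Fig. 11.7:
A = np.array([[0.0, 1.0], [1.0 / 16, 0.0]])
b = np.array([[0.0], [1.0]])
K = lyapunov_doubling(A, b @ b.T)
print(K)   # diagonal ~ 256/255, off-diagonal ~ 0
```

The same routine computes W by calling it with $A^T$ and $c\,c^T$ in place of A and $b\,b^T$, exactly as the W-algorithm below prescribes.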


– Thus each iteration doubles the number of terms in the sum of (11.24). The above algorithm converges as long as the filter is stable (because the eigenvalues of the matrix A are the poles of the transfer function).
– This algorithm can be used to compute W after some changes.

  • Algorithm for computing W:

– 1. Initialize: $F \leftarrow A^T$, $W \leftarrow c\,c^T$
– 2. Loop: $W \leftarrow W + F\,W\,F^T$, $F \leftarrow F \cdot F$
– 3. Computation continues until $F \cong 0$

  • Example (Example 11.6.1, p. 404): consider the scaled-normalized lattice filter in Fig. 11.9. We need to compute the signal powers at nodes 1, 2, and 3:

– Because there are 3 states (1–3), the dimensions of the matrices A, b, c, and d are 3×3, 3×1, 3×1, and 1×1, respectively. From Fig. 11.9, the state equations can be written as (see next page).


       + + + = + = + − + = + + + − = + ) ( 0029 . ) ( 3054 . ) ( 1035 . ) ( 0184 . ) ( ) ( 9743 . ) ( 2252 . ) 1 ( ) ( 2093 . ) ( 9054 . ) ( 3695 . ) 1 ( ), ( 8467 . ) ( 0443 . ) ( 1915 . ) ( 4944 . ) 1 (

3 2 1 3 1 3 3 2 1 2 3 2 1 1

n u n x n x n x n y n x n x n x n x n x n x n x n u n x n x n x n x


Fig. 11.9 A 3rd-order scaled-normalized lattice filter; #1, #2, and #3 mark the three state nodes (also see Fig. 11.18, p. 403 of the textbook).


– From these equations, the matrices A, b, c, and d can be obtained directly. By substituting them into the K-computing algorithm, we get

  $K = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

– Since $K_{11} = K_{22} = K_{33} = 1$, no scaling is needed for nodes 1–3. In addition, the K matrix shows that the signals at nodes 1–3 are orthogonal to each other, since all off-diagonal elements are zero.
– By the W-computing algorithm, we obtain:

  $\{W_{11}, W_{22}, W_{33}\} = \{0.1455, 0.2952, 0.3096\}$

  • Conclusion:

– Using the state variable description method, we can compute the signal power or round-off noise of a digital filter easily and directly. However, it cannot be used on nodes that are not connected to unit-delay branches, because these nodes do not appear in the state variable description.
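As a sketch, the claim K = I can be checked directly from the orthogonality condition (11.26), using the lattice state matrices as reconstructed above (the sign conventions there are inferred from the structure, so treat these matrices as an assumption rather than the textbook's exact figure):

```python
# Sketch: verifying K = I for the reconstructed lattice state
# equations via (11.26): A A^T + b b^T = I for an orthogonal filter,
# so K = I follows with no Lyapunov iteration at all.
import numpy as np

A = np.array([
    [-0.4944, 0.1915, 0.0443],
    [ 0.3695, 0.9054, 0.2093],
    [ 0.0,   -0.2252, 0.9743],
])
b = np.array([[0.8467], [0.0], [0.0]])

M = A @ A.T + b @ b.T
err = np.max(np.abs(M - np.eye(3)))
print(err)   # ~1e-4: K = I to the precision of the 4-digit coefficients
```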


Slow-Down, Retiming, and Pipelining

Introduction

  • Many useful realizations contain round-off nodes that are not connected to unit-delay branches. These nodes (variables) do not appear in a state variable description, so the scaling and round-off noise computation methods cannot be applied directly.
  • The SRP (slow-down and retiming/pipelining) transformation technique can be used as a preprocessing step to overcome this difficulty:

– Slow-down: every delay element ($z^{-1}$) in the original filter is changed into M delay elements ($z^{-M}$).
– Retiming and pipelining: see Chapters 4 and 3 for details.


  • Slow-down: consider the filter in Fig. 11.10(b), which is obtained by applying the slow-down transformation (M = 3) to the filter in Fig. 11.10(a). By the 3-slow transformation, every z-variable in Fig. 11.10(a) is changed into $z^3$. Thus the transfer function of the transformed filter H'(z) is related to the original transfer function H(z) as

  $H'(z) = F'(z)\,G'(z) = F(z^3)\,G(z^3) = H(z^3)$   (11.51)

– Thus, if the unit-sample response from the input to the internal node x in Fig. 11.10(a) is defined by

  $f(n) = \{f(0), f(1), f(2), \cdots\}$   (11.52)

then the unit-sample response from the input to the internal node x' in Fig. 11.10(b) is

  $f'(n) = \{f(0), 0, 0, f(1), 0, 0, f(2), 0, 0, \cdots\}$   (11.53)

– We can get:

  $K'_{xx} = \sum_n [f'(n)]^2 = \sum_n f^2(n) = K_{xx}$   (11.54)

– Similarly, it can be shown that:

  $W'_{xx} = W_{xx}$   (11.55)
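The energy-preservation step from (11.53) to (11.54) can be sketched with a hypothetical response: the M−1 inserted samples are zeros, so they contribute nothing to the sum of squares.

```python
# Sketch: the 3-slow interleaving of (11.53) leaves the energy sum
# (11.54) unchanged, since the inserted samples are zeros.
f = [0.9 ** i for i in range(50)]     # hypothetical response f(n)

f_slow = []
for fi in f:                          # f'(n): each sample followed by M-1 = 2 zeros
    f_slow += [fi, 0.0, 0.0]

K_xx = sum(fi * fi for fi in f)
K_xx_slow = sum(fi * fi for fi in f_slow)
print(K_xx == K_xx_slow)   # True: K'_xx = K_xx
```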


Figure 11.10 (a) A filter with transfer function H(z) = F(z)G(z) and internal node x. (b) Transformed filter obtained by the 3-slow transformation, with internal node x': F'(z) = F(z³), G'(z) = G(z³), H'(z) = F(z³)G(z³).


– The foregoing analysis shows that the slow-down transformation does not change the finite word-length behavior.

  • Pipelining:

– Consider the filter in Fig. 11.11(a), which has a non-state-variable node x on the feed-forward path. Obviously, this node cannot be converted into a state variable node by the slow-down transformation.
– However, since x is on the feed-forward path, a delay can be placed at a proper cut-set location, as shown in Fig. 11.11(b). This pipelining operation converts the non-state-variable node x into a state variable node. The output sequence of the pipelined filter is equal to that of the original filter except for one clock cycle of delay.
– So the pipelined filter has the same possibility of overflow and the same effect of round-off noise as the original filter. Thus it is clear that pipelining does not change the filter's finite word-length behavior.


Fig. 11.11 (a) A filter with a non-state-variable node x on a feed-forward path (gains a, b, c; input u(n), output y(n)). (b) The non-state-variable node is converted into a state variable node by placing a delay D on a cut-set; the output becomes y(n−1).


  • Retiming:

– In a linear array, if either all the left-directed or all the right-directed edges between modules carry at least one delay each, the cut-set localization procedure can be applied to transfer some delays, or a fraction of a delay, to the opposite-directed edges (see Chapter 4). This is called retiming.

  • The SRP transformation technique is summarized as follows:

– 1. Apply the slow-down transformation by a factor of M to a linear array, i.e., replace z by $z^M$. Also apply the pipelining technique at appropriate locations.
– 2. Distribute the additional delays to proper locations such that non-state-variable nodes are converted to state variable nodes.
– 3. Apply the scaling and noise computation methods using the state variable description.

  • Example (Example 11.7.1, p. 407): consider the filter shown in Fig. 11.12, the same as the 3rd-order scaled-normalized lattice filter in Fig. 11.9 except that it has five more delays. The SFG in Fig. 11.12 is obtained by using a 2-slow transformation followed by retiming, or cut-set, transformation. (cont'd)


Fig. 11.12 A transformed version of the 3rd-order scaled-normalized lattice filter of Fig. 11.9, obtained by 2-slow transformation and retiming; nodes #1–#8 mark the eight states (also see Fig. 11.21, p. 407).


– (cont'd) Notice that the signal power or round-off noise at every internal node in this filter can be computed using the state variable description, since each node is connected to a unit-delay branch. Since there are 8 states, the dimensions of the matrices A, b, c, and d are 8×8, 8×1, 8×1, and 1×1, respectively. From Fig. 11.12, the state equations can be written as follows:

  $x_1(n+1) = 0.532\,x_7(n) + 0.8467\,u(n)$
  $x_2(n+1) = -0.3695\,x_1(n) + 0.9293\,x_8(n)$
  $x_3(n+1) = 0.2252\,x_2(n) + 0.9743\,x_4(n)$
  $x_4(n+1) = x_3(n)$
  $x_5(n+1) = 0.0569\,x_1(n) + 0.9984\,x_6(n)$
  $x_6(n+1) = 0.3209\,x_2(n) + 0.9471\,x_4(n)$
  $x_7(n+1) = 0.9293\,x_1(n) + 0.3695\,x_8(n)$
  $x_8(n+1) = -0.9743\,x_2(n) + 0.2252\,x_4(n)$
  $y(n) = 0.323\,x_5(n) + 0.0029\,u(n)$


– From the above equations, the matrices A, b, c, and d can be obtained directly. Using the K-computing algorithm, we obtain $K_{ii} = 1,\ i = 1, 2, \cdots, 8$, which means that every internal node is perfectly scaled. Similarly, we get

  $\{W_{11}, \cdots, W_{88}\} = \{0.1455, 0.2952, 0.3096, 0.3096, 0.1043, 0.1040, 0.0412, 0.1912\}$

– Thus, the total output round-off noise is

  $\sigma_e^2 \sum_{i \ne 4} K_{ii}\,W_{ii} = 1.191\,\sigma_e^2$

– Note: no round-off operation is associated with node 4 (state $x_4$, a pure delay), so $W_{44}$ is not included in Trace(W) for the round-off noise computation.

  • Example (omitted; study at home):

– For details, please see Example 11.7.2, p. 408 of the textbook.