
Digital Signal Processing
Markus Kuhn, Computer Laboratory, University of Cambridge
https://www.cl.cam.ac.uk/teaching/1819/{DSP,DSP2,L314}/
These notes are provided as an aid for following the lectures, and are not a substitute for attending.


Fechner's scale matches older subjective intensity scales that follow differentiability of stimuli, e.g. the astronomical magnitude numbers for star brightness introduced by Hipparchos (≈ 150 BC).

Stevens' power law
A sound that is 20 DL over SL is perceived as more than twice as loud as one that is 10 DL over SL, i.e. Fechner's scale does not describe perceived intensity well. A rational scale attempts to reflect subjective relations perceived between different values of stimulus intensity φ. Stanley Smith Stevens observed that such rational scales ψ follow a power law:

    ψ = k · (φ − φ_0)^a

Example coefficients a: brightness 0.33, loudness 0.6, heaviness 1.45, temperature (warmth) 1.6. (slide 15)

Units and decibel
Communications engineers often use logarithmic units:
◮ Quantities often vary over many orders of magnitude → difficult to agree on a common SI prefix (nano, micro, milli, kilo, etc.)
◮ Quotient of quantities (amplification/attenuation) usually more interesting than difference
◮ Signal strength usefully expressed as field quantity (voltage, current, pressure, etc.) or power, but the quadratic relationship between these two (P = U²/R = I²R) is rather inconvenient
◮ Perception is logarithmic (Weber/Fechner law → slide 14)
Plus: using magic special-purpose units has its own odd attractions (→ typographers, navigators)

Neper (Np) denotes the natural logarithm of the quotient of a field quantity F and a reference value F_0. (rarely used today)
Bel (B) denotes the base-10 logarithm of the quotient of a power P and a reference power P_0. Common prefix: 10 decibel (dB) = 1 bel. (slide 16)

Decibel
Where P is some power and P_0 a 0 dB reference power, or equally where F is a field quantity and F_0 the corresponding reference level:

    10 dB · log_10(P/P_0) = 20 dB · log_10(F/F_0)

Common reference values are indicated with a suffix after "dB":
0 dBW = 1 W
0 dBm = 1 mW = −30 dBW
0 dBµV = 1 µV
0 dBu = 0.775 V = sqrt(600 Ω × 1 mW)
0 dB SPL = 20 µPa (sound pressure level)
0 dB SL = perception threshold (sensation limit)
0 dBFS = full scale (clipping limit of an analog/digital converter)

Remember: 3 dB = 2× power, 6 dB = 2× voltage/pressure/etc.; 10 dB = 10× power, 20 dB = 10× voltage/pressure/etc.

W.H. Martin: Decibel – the new name for the transmission unit. Bell Syst. Tech. J., Jan. 1929.
ITU-R Recommendation V.574-4: Use of the decibel and neper in telecommunication. (slide 17)

Types of discrete systems
A discrete system T maps an input sequence {x_n} = ..., x_2, x_1, x_0, x_{−1}, ... onto an output sequence {y_n} = ..., y_2, y_1, y_0, y_{−1}, ...
A causal system cannot look into the future: y_n = f(x_n, x_{n−1}, x_{n−2}, ...)
A memory-less system depends only on the current input value: y_n = f(x_n)
A delay system shifts a sequence in time: y_n = x_{n−d}
T is a time-invariant system if for any d: {y_n} = T{x_n}  ⟺  {y_{n−d}} = T{x_{n−d}}.
T is a linear system if for any pair of sequences {x_n} and {x′_n}: T{a·x_n + b·x′_n} = a·T{x_n} + b·T{x′_n}. (slide 18)
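As a quick numeric illustration (not part of the original notes; the values are chosen arbitrarily), the dB conversions above can be checked in MATLAB:

    P = 2e-3;                  % a power of 2 mW
    L_dBm = 10*log10(P/1e-3)   % approx. +3 dBm (3 dB above 1 mW)
    L_dBW = 10*log10(P/1)      % the same power, approx. -27 dBW
    U = 7.75;                  % a voltage (field quantity)
    L_dBu = 20*log10(U/0.775)  % 20 dB above 0 dBu, i.e. 10x the reference voltage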

Example: M-point moving average system

    y_n = (1/M) · Σ_{k=0}^{M−1} x_{n−k} = (x_{n−M+1} + ··· + x_{n−1} + x_n)/M

It is causal, linear, time-invariant, with memory. (Figure: input x and output y for M = 4.) (slide 19)

Example: exponential averaging system

    y_n = α·x_n + (1−α)·y_{n−1} = α · Σ_{k=0}^{∞} (1−α)^k · x_{n−k}

It is causal, linear, time-invariant, with memory. (Figure: input x and output y for α = 1/2.) (slide 20)
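Both example systems can be run with MATLAB's filter function; a small sketch (input sequence and parameter values chosen arbitrarily for illustration):

    x = randn(1, 100) + 1;                     % noisy input sequence
    M = 4;
    y_ma  = filter(ones(1, M)/M, 1, x);        % M-point moving average
    alpha = 0.5;
    y_exp = filter(alpha, [1 -(1-alpha)], x);  % y_n = alpha*x_n + (1-alpha)*y_{n-1}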

Example: accumulator system

    y_n = Σ_{k=−∞}^{n} x_k

It is causal, linear, time-invariant, with memory. (Figure: input x and output y.) (slide 21)

Example: backward difference system

    y_n = x_n − x_{n−1}

It is causal, linear, time-invariant, with memory. (Figure: input x and output y.) (slide 22)

Other examples
Time-invariant non-linear memory-less systems:

    y_n = x_n²,    y_n = log_2 x_n,    y_n = max{min{⌊256·x_n⌋, 255}, 0}

Linear but not time-invariant systems:

    y_n = x_n for n ≥ 0, 0 for n < 0   (= x_n · u_n)
    y_n = x_{⌊n/4⌋}
    y_n = x_n · ℜ(e^{jω̇n})

Linear time-invariant non-causal systems:

    y_n = (1/2)·(x_{n−1} + x_{n+1})
    y_n = Σ_{k=−9}^{9} x_{n+k} · [sin(πkω)/(πkω)] · [0.5 + 0.5·cos(πk/10)]  (slide 23)

Constant-coefficient difference equations
Of particular practical interest are causal linear time-invariant systems of the form

    y_n = b_0·x_n − Σ_{k=1}^{N} a_k·y_{n−k}

(Figure: block diagram in which delayed outputs y_{n−1}, y_{n−2}, y_{n−3} are fed back through the constant coefficients −a_1, −a_2, −a_3.)
Block diagram representation of sequence operations: addition (x_n + x′_n), multiplication by a constant (a·x_n), and delay (z^{−1}: x_n → x_{n−1}). The a_k and b_m are constant coefficients. (slide 24)

or

    y_n = Σ_{m=0}^{M} b_m·x_{n−m}

(Figure: feed-forward block diagram with delayed inputs x_{n−1}, x_{n−2}, x_{n−3} weighted by b_0, ..., b_3.)

or the combination of both:

    Σ_{k=0}^{N} a_k·y_{n−k} = Σ_{m=0}^{M} b_m·x_{n−m}

(Figure: block diagram with feed-forward coefficients b_0, ..., b_3, feedback coefficients −a_1, ..., −a_3 and output scaling factor a_0^{−1}.)
The MATLAB function filter is an efficient implementation of the last variant. (slide 25)

Convolution
Another example of an LTI system is

    y_n = Σ_{k=−∞}^{∞} a_k·x_{n−k}

where {a_k} is a suitably chosen sequence of coefficients. This operation over sequences is called convolution and is defined as

    {p_n} ∗ {q_n} = {r_n}   ⟺   ∀n ∈ Z:  r_n = Σ_{k=−∞}^{∞} p_k·q_{n−k}.

If {y_n} = {a_n} ∗ {x_n} is a representation of an LTI system T, with {y_n} = T{x_n}, then we call the sequence {a_n} the impulse response of T, because {a_n} = T{δ_n}. (slide 26)
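For instance, the impulse response of such a system can be obtained by feeding {δ_n} into filter; a brief sketch with arbitrarily chosen coefficients (not from the notes):

    b = [1 2 3];  a = [1 -0.5];       % example coefficients b_m and a_k (a_0 = 1)
    delta = [1 zeros(1, 9)];          % impulse sequence, first 10 samples
    h = filter(b, a, delta)           % first 10 samples of the impulse response {h_n}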

Convolution examples
(Figure: graphical convolution examples over short sequences A–F: A∗B, A∗C, C∗A, A∗E, D∗E, A∗F.) (slide 27)

Properties of convolution
For arbitrary sequences {p_n}, {q_n}, {r_n} and scalars a, b:
◮ Convolution is associative: ({p_n} ∗ {q_n}) ∗ {r_n} = {p_n} ∗ ({q_n} ∗ {r_n})
◮ Convolution is commutative: {p_n} ∗ {q_n} = {q_n} ∗ {p_n}
◮ Convolution is linear: {p_n} ∗ {a·q_n + b·r_n} = a·({p_n} ∗ {q_n}) + b·({p_n} ∗ {r_n})
◮ The impulse sequence (slide 10) is neutral under convolution: {p_n} ∗ {δ_n} = {δ_n} ∗ {p_n} = {p_n}
◮ Sequence shifting is equivalent to convolving with a shifted impulse: {p_{n−d}} = {p_n} ∗ {δ_{n−d}}  (slide 28)

Proof: all LTI systems just apply convolution
Any sequence {x_n} can be decomposed into a weighted sum of shifted impulse sequences:

    {x_n} = Σ_{k=−∞}^{∞} x_k · {δ_{n−k}}

Let's see what happens if we apply a linear (∗) time-invariant (∗∗) system T to such a decomposed sequence:

    T{x_n} = T( Σ_{k=−∞}^{∞} x_k · {δ_{n−k}} )
      (∗)  = Σ_{k=−∞}^{∞} x_k · T{δ_{n−k}}
      (∗∗) = Σ_{k=−∞}^{∞} x_k · ( {δ_{n−k}} ∗ T{δ_n} )
           = ( Σ_{k=−∞}^{∞} x_k · {δ_{n−k}} ) ∗ T{δ_n}
           = {x_n} ∗ T{δ_n}                         q.e.d.

⇒ The impulse response T{δ_n} fully characterizes an LTI system. (slide 29)

Direct form I and II implementations
(Figure: two equivalent block diagrams of the constant-coefficient difference equation from slide 25.)
The block diagram representation of the constant-coefficient difference equation on slide 25 is called the direct form I implementation. The number of delay elements can be halved by using the commutativity of convolution to swap the two feedback loops, leading to the direct form II implementation of the same LTI system. These two forms are only equivalent with ideal arithmetic (no rounding errors and range limits). (slide 30)

Convolution: optics example
If a projective lens is out of focus, the blurred image is equal to the original image convolved with the aperture shape (e.g., a filled circle).
(Figure: image plane and focal plane geometry with aperture a, focal length f and image-plane offset s.)
Point-spread function h (disk of radius r = as/(2f)):

    h(x, y) = 1/(r²π)  if x² + y² ≤ r²,   0  if x² + y² > r²

Original image I, blurred image B = I ∗ h, i.e.

    B(x, y) = ∫∫ I(x − x′, y − y′) · h(x′, y′) dx′ dy′  (slide 31)

Convolution: electronics example
(Figure: RC low-pass circuit with input voltage U_in and output voltage U_out across the capacitor, and its step response.)
Any passive network (resistors, capacitors, inductors) convolves its input voltage U_in with an impulse response function h, leading to U_out = U_in ∗ h, that is

    U_out(t) = ∫_{−∞}^{∞} U_in(t − τ) · h(τ) dτ

In the above example:

    (U_in − U_out)/R = C · dU_out/dt,      h(t) = (1/RC)·e^{−t/RC} for t ≥ 0,  0 for t < 0  (slide 32)

Adding sine waves
Adding together sine waves of equal frequency, but arbitrary amplitude and phase, results in another sine wave of the same frequency:

    A_1·sin(ωt + φ_1) + A_2·sin(ωt + φ_2) = A·sin(ωt + φ)

Why? Think of A·sin(ωt + φ) as the height of an arrow of length A, rotating ω/(2π) times per second, with start angle φ (radians) at t = 0. Consider two more such arrows, of length A_1 and A_2, with start angles φ_1 and φ_2. A_1 and A_2 stuck together are as high as A, all three rotating at the same frequency.
But adding sine waves as vectors (A_1, φ_1) and (A_2, φ_2) in polar coordinates is cumbersome:

    A = sqrt(A_1² + A_2² + 2·A_1·A_2·cos(φ_2 − φ_1)),    tan φ = (A_1 sin φ_1 + A_2 sin φ_2)/(A_1 cos φ_1 + A_2 cos φ_2)  (slide 33)

Cartesian coordinates for sine waves
Sine waves of any amplitude A and phase (start angle) φ can be represented as linear combinations of sin(ωt) and cos(ωt):

    A·sin(ωt + φ) = x·sin(ωt) + y·cos(ωt),    where cos(ωt) = sin(ωt + 90°)

with

    x = A·cos(φ),   y = A·sin(φ)    and    A = sqrt(x² + y²),   tan φ = y/x.

Base: two rotating arrows with start angles 0° [height = sin(ωt)] and 90° [height = cos(ωt)].
Adding two sine waves as vectors in Cartesian coordinates is simple:

    f_1(t) = x_1·sin(ωt) + y_1·cos(ωt)
    f_2(t) = x_2·sin(ωt) + y_2·cos(ωt)
    f_1(t) + f_2(t) = (x_1 + x_2)·sin(ωt) + (y_1 + y_2)·cos(ωt)  (slide 34)

Why are sine waves useful?
1) Sine-wave sequences form a family of discrete sequences that is closed under convolution with arbitrary sequences.
Convolution of a discrete sequence {x_n} with another sequence {y_n} is nothing but adding together scaled and delayed copies of {x_n}. (Think of {y_n} decomposed into a sum of impulses.) If {x_n} is a sampled sine wave of frequency f, so is {x_n} ∗ {y_n}! The same applies for continuous sine waves and convolution.
2) Sine waves are orthogonal to each other:

    ∫_{−∞}^{∞} sin(ω_1 t + φ_1) · sin(ω_2 t + φ_2) dt  "="  0    ⟺    ω_1 ≠ ω_2  ∨  φ_1 − φ_2 = (2k+1)·π/2  (k ∈ Z)

They can be used to form an orthogonal function basis for a transform. The term "orthogonal" is used here in the context of an (infinitely dimensional) vector space, where the "vectors" are functions of the form f: R → R (or f: R → C) and the scalar product is defined as f · g = ∫_{−∞}^{∞} f(t)·g(t) dt. (slide 35)

(Figure: plots of sin(1t), sin(2t) and their product sin(1t)·sin(2t) over 0 ≤ t ≤ 2π.) (slide 36)

Why are exponential functions useful?
Adding together two exponential functions with the same base z, but different scale factor and offset, results in another exponential function with the same base:

    A_1·z^{t+φ_1} + A_2·z^{t+φ_2} = A_1·z^t·z^{φ_1} + A_2·z^t·z^{φ_2} = (A_1·z^{φ_1} + A_2·z^{φ_2})·z^t = A·z^t

Likewise, if we convolve a sequence {x_n} of values ..., z^{−3}, z^{−2}, z^{−1}, 1, z, z², z³, ..., that is x_n = z^n, with an arbitrary sequence {h_n}, we get {y_n} = {z^n} ∗ {h_n}:

    y_n = Σ_{k=−∞}^{∞} x_{n−k}·h_k = Σ_{k=−∞}^{∞} z^{n−k}·h_k = z^n · Σ_{k=−∞}^{∞} z^{−k}·h_k = z^n · H(z)

where H(z) is independent of n. Exponential sequences are closed under convolution with arbitrary sequences. The same applies in the continuous case. (slide 37)

Why are complex numbers so useful?
1) They give us all n solutions ("roots") of equations involving polynomials up to degree n (the "√−1 = j" story).
2) They give us the "great unifying theory" that combines sine and exponential functions:

    cos(θ) = (e^{jθ} + e^{−jθ})/2
    sin(θ) = (e^{jθ} − e^{−jθ})/(2j)

or

    cos(ωt + φ) = (e^{j(ωt+φ)} + e^{−j(ωt+φ)})/2

or

    cos(ω̇n + φ) = ℜ(e^{j(ω̇n+φ)}) = ℜ[(e^{jω̇})^n · e^{jφ}]
    sin(ω̇n + φ) = ℑ(e^{j(ω̇n+φ)}) = ℑ[(e^{jω̇})^n · e^{jφ}]

Notation: ℜ(a + jb) := a, ℑ(a + jb) := b and (a + jb)* := a − jb, where j² = −1 and a, b ∈ R. Then ℜ(x) = (x + x*)/2 and ℑ(x) = (x − x*)/(2j) for all x ∈ C. (slide 38)

We can now represent sine waves as projections of a rotating complex vector. This allows us to represent sine-wave sequences as exponential sequences with basis e^{jω̇}. A phase shift in such a sequence corresponds to a rotation of a complex vector.
3) Complex multiplication allows us to modify the amplitude and phase of a complex rotating vector using a single operation and value.
Rotation of a 2D vector in (x, y)-form is notationally slightly messy, but fortunately j² = −1 does exactly what is required here:

    (x_3, y_3):   [x_3; y_3] = [x_2  −y_2; y_2  x_2] · [x_1; y_1] = [x_1·x_2 − y_1·y_2;  x_1·y_2 + x_2·y_1]

With z_1 = x_1 + jy_1 and z_2 = x_2 + jy_2:

    z_1 · z_2 = x_1·x_2 − y_1·y_2 + j(x_1·y_2 + x_2·y_1)  (slide 39)

Complex phasors
Amplitude and phase are two distinct characteristics of a sine function that are inconvenient to keep separate notationally. Complex functions (and discrete sequences) of the form

    (A·e^{jφ}) · e^{jωt} = A·e^{j(ωt+φ)} = A·[cos(ωt + φ) + j·sin(ωt + φ)]

(where j² = −1) are able to represent both amplitude A ∈ R⁺ and phase φ ∈ [0, 2π) in one single algebraic object A·e^{jφ} ∈ C.
Thanks to complex multiplication, we can also incorporate in one single factor both a multiplicative change of amplitude and an additive change of phase of such a function. This makes discrete sequences of the form x_n = e^{jω̇n} eigensequences with respect to an LTI system T, because for each ω̇, there is a complex number (eigenvalue) H(ω̇) such that

    T{x_n} = H(ω̇) · {x_n}

In the notation of slide 37, where the argument of H is the base, we would write H(e^{jω̇}). (slide 40)

Recall: Fourier transform
We define the Fourier integral transform and its inverse as

    F{g(t)}(f) = G(f) = ∫_{−∞}^{∞} g(t) · e^{−2πjft} dt
    F⁻¹{G(f)}(t) = g(t) = ∫_{−∞}^{∞} G(f) · e^{2πjft} df

Many equivalent forms of the Fourier transform are used in the literature. There is no strong consensus on whether the forward transform uses e^{−2πjft} and the backwards transform e^{2πjft}, or vice versa. The above form uses the ordinary frequency f, whereas some authors prefer the angular frequency ω = 2πf:

    F{h(t)}(ω) = H(ω) = α · ∫_{−∞}^{∞} h(t) · e^{∓jωt} dt
    F⁻¹{H(ω)}(t) = h(t) = β · ∫_{−∞}^{∞} H(ω) · e^{±jωt} dω

This substitution introduces factors α and β such that αβ = 1/(2π). Some authors set α = 1 and β = 1/(2π), to keep the convolution theorem free of a constant prefactor; others prefer the unitary form α = β = 1/√(2π), in the interest of symmetry. (slide 41)

Properties of the Fourier transform
If x(t) •−◦ X(f) and y(t) •−◦ Y(f) are pairs of functions that are mapped onto each other by the Fourier transform, then so are the following pairs.
Linearity:            a·x(t) + b·y(t)  •−◦  a·X(f) + b·Y(f)
Time scaling:         x(at)  •−◦  (1/|a|)·X(f/a)
Frequency scaling:    (1/|a|)·x(t/a)  •−◦  X(af)  (slide 42)

Time shifting:        x(t − Δt)  •−◦  X(f) · e^{−2πjfΔt}
Frequency shifting:   x(t) · e^{2πjΔft}  •−◦  X(f − Δf)
Parseval's theorem (total energy):

    ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df  (slide 43)

Fourier transform example: rect and sinc
The Fourier transform of the "rectangular function"

    rect(t) = 1 if |t| < 1/2,   1/2 if |t| = 1/2,   0 otherwise

is the "(normalized) sinc function"

    F{rect(t)}(f) = ∫_{−1/2}^{1/2} e^{−2πjft} dt = sin(πf)/(πf) = sinc(f)

and vice versa: F{sinc(t)}(f) = rect(f).
Some noteworthy properties of these functions:
◮ ∫_{−∞}^{∞} sinc(t) dt = 1 = ∫_{−∞}^{∞} rect(t) dt
◮ sinc(0) = 1 = rect(0)
◮ ∀n ∈ Z \ {0}: sinc(n) = 0  (slide 44)

Convolution theorem
Convolution in the time domain is equivalent to (complex) scalar multiplication in the frequency domain:

    F{(f ∗ g)(t)} = F{f(t)} · F{g(t)}

Proof: z(r) = ∫_s x(s)·y(r−s) ds  ⟺
∫_r z(r)·e^{−jωr} dr = ∫_r ∫_s x(s)·y(r−s)·e^{−jωr} ds dr = ∫_s x(s) ∫_r y(r−s)·e^{−jωr} dr ds
= ∫_s x(s)·e^{−jωs} ∫_r y(r−s)·e^{−jω(r−s)} dr ds   [substitute t := r − s]
= ∫_s x(s)·e^{−jωs} ∫_t y(t)·e^{−jωt} dt ds = ∫_s x(s)·e^{−jωs} ds · ∫_t y(t)·e^{−jωt} dt.

Convolution in the frequency domain corresponds to scalar multiplication in the time domain:

    F{f(t) · g(t)} = F{f(t)} ∗ F{g(t)}

This second form is also called "modulation theorem", as it describes what happens in the frequency domain with amplitude modulation of a signal (see slide 52). The proof is very similar to the one above.
Both equally work for the inverse Fourier transform:

    F⁻¹{(F ∗ G)(f)} = F⁻¹{F(f)} · F⁻¹{G(f)}
    F⁻¹{F(f) · G(f)} = F⁻¹{F(f)} ∗ F⁻¹{G(f)}  (slide 45)

Dirac delta function
The continuous equivalent of the impulse sequence {δ_n} is known as the Dirac delta function δ(x). It is a generalized function, defined such that

    δ(x) = 0 for x ≠ 0,  ∞ for x = 0,     and     ∫_{−∞}^{∞} δ(x) dx = 1

and can be thought of as the limit of function sequences such as

    δ(x) = lim_{n→∞}  { 0 for |x| ≥ 1/n,   n/2 for |x| < 1/n }

or

    δ(x) = lim_{n→∞} (n/√π) · e^{−n²x²}

The delta function is mathematically speaking not a function, but a distribution, that is an expression that is only defined when integrated. (slide 46)

Some properties of the Dirac delta function:

    ∫_{−∞}^{∞} f(x)·δ(x − a) dx = f(a)
    ∫_{−∞}^{∞} e^{±2πjxa} dx = δ(a)
    Σ_{i=−∞}^{∞} e^{±2πjixa} = (1/|a|) · Σ_{i=−∞}^{∞} δ(x − i/a)
    δ(ax) = (1/|a|) · δ(x)

Fourier transform:

    F{δ(t)}(f) = ∫_{−∞}^{∞} δ(t)·e^{−2πjft} dt = e^0 = 1
    F⁻¹{1}(t) = ∫_{−∞}^{∞} 1·e^{2πjft} df = δ(t)  (slide 47)

Linking the Dirac delta with the Fourier transform
The Fourier transform of 1 follows from the Dirac delta's ability to sample inside an integral:

    g(t) = F⁻¹(F(g))(t)
         = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} g(s)·e^{−2πjfs} ds ) · e^{2πjft} df
         = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjfs}·e^{2πjft} df ) · g(s) ds
         = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjf(t−s)} df ) · g(s) ds

where the bracketed inner integral plays the role of δ(t − s). So if δ has the property

    g(t) = ∫_{−∞}^{∞} δ(t − s)·g(s) ds

then

    ∫_{−∞}^{∞} e^{−2πjf(t−s)} df = δ(t − s)  (slide 48)

(Figure: ∫_{−∞}^{∞} e^{2πjtf} df = δ(t), illustrated by Σ_{i=1}^{10} cos(2πf_i t) ≈ δ(t) with f_1, ..., f_10 ∈ [0, 3] chosen uniformly at random.) (slide 49)

Sine and cosine in the frequency domain

    cos(2πf_0 t) = (1/2)·e^{2πjf_0 t} + (1/2)·e^{−2πjf_0 t}
    sin(2πf_0 t) = (1/2j)·e^{2πjf_0 t} − (1/2j)·e^{−2πjf_0 t}

    F{cos(2πf_0 t)}(f) = (1/2)·δ(f − f_0) + (1/2)·δ(f + f_0)
    F{sin(2πf_0 t)}(f) = −(j/2)·δ(f − f_0) + (j/2)·δ(f + f_0)

(Figure: the cosine spectrum consists of two real impulses of weight 1/2 at ±f_0; the sine spectrum of two imaginary impulses of weight ∓j/2 at ±f_0.)
As any x(t) ∈ R can be decomposed into sine and cosine functions, the spectrum of any real-valued signal will show the symmetry X(−f) = [X(f)]*, where * denotes the complex conjugate (i.e., negated imaginary part). (slide 50)

Fourier transform symmetries
We call a function x(t) odd if x(−t) = −x(t), even if x(−t) = x(t), and ·* is the complex conjugate, such that (a + jb)* = (a − jb). Then

    x(t) is real                ⟺  X(−f) = [X(f)]*
    x(t) is imaginary           ⟺  X(−f) = −[X(f)]*
    x(t) is even                ⟺  X(f) is even
    x(t) is odd                 ⟺  X(f) is odd
    x(t) is real and even       ⟺  X(f) is real and even
    x(t) is real and odd        ⟺  X(f) is imaginary and odd
    x(t) is imaginary and even  ⟺  X(f) is imaginary and even
    x(t) is imaginary and odd   ⟺  X(f) is real and odd  (slide 51)

Example: amplitude modulation
Communication channels usually permit only the use of a given frequency interval, such as 300–3400 Hz for the analog phone network or 590–598 MHz for TV channel 36. Modulation with a carrier frequency f_c shifts the spectrum of a signal x(t) into the desired band.
Amplitude modulation (AM):

    y(t) = A · cos(2πt·f_c) · x(t)

(Figure: the baseband spectrum X(f), occupying −f_l < f < f_l, is convolved with impulses at ±f_c to give Y(f).)
The spectrum of the baseband signal in the interval −f_l < f < f_l is shifted by the modulation to the intervals ±f_c − f_l < f < ±f_c + f_l. How can such a signal be demodulated? (slide 52)

Sampling using a Dirac comb
The loss of information in the sampling process that converts a continuous function x(t) into a discrete sequence {x_n} defined by

    x_n = x(t_s·n) = x(n/f_s)

can be modelled through multiplying x(t) by a comb of Dirac impulses

    s(t) = t_s · Σ_{n=−∞}^{∞} δ(t − t_s·n)

to obtain the sampled function

    x̂(t) = x(t) · s(t)

The function x̂(t) now contains exactly the same information as the discrete sequence {x_n}, but is still in a form that can be analysed using the Fourier transform on continuous functions. (slide 53)

The Fourier transform of a Dirac comb

    s(t) = t_s · Σ_{n=−∞}^{∞} δ(t − t_s·n) = Σ_{n=−∞}^{∞} e^{2πjnt/t_s}

is another Dirac comb

    S(f) = F{ t_s · Σ_{n=−∞}^{∞} δ(t − t_s·n) }(f) = t_s · ∫_{−∞}^{∞} Σ_{n=−∞}^{∞} δ(t − t_s·n) · e^{−2πjft} dt = Σ_{n=−∞}^{∞} δ(f − n·f_s).

(Figure: the Dirac comb s(t) with spacing t_s and its transform S(f), a Dirac comb with spacing f_s.) (slide 54)

Sampling and aliasing
(Figure: samples taken at frequency f_s fall on both cos(2πtf) and cos(2πt(k·f_s ± f)).)
Sampled at frequency f_s, the function cos(2πtf) cannot be distinguished from cos[2πt(kf_s ± f)] for any k ∈ Z. (slide 55)

Frequency-domain view of sampling
(Figure: in the time domain, x(t) is multiplied by the Dirac comb s(t) to give x̂(t); in the frequency domain, X(f) is convolved with S(f) to give X̂(f), a periodic repetition of X(f) at multiples of f_s.)
Sampling a signal in the time domain corresponds in the frequency domain to convolving its spectrum with a Dirac comb. The resulting copies of the original signal spectrum in the spectrum of the sampled signal are called "images". (slide 56)
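This ambiguity is easy to demonstrate numerically; a small sketch (frequencies chosen arbitrarily, not from the notes):

    fs = 8;  f = 1;  n = 0:15;              % sample indices
    x1 = cos(2*pi*f*n/fs);                  % f = 1 Hz sampled at 8 Hz
    x2 = cos(2*pi*(fs - f)*n/fs);           % alias at k*fs - f = 7 Hz
    x3 = cos(2*pi*(fs + f)*n/fs);           % alias at k*fs + f = 9 Hz
    max(abs(x1 - x2)), max(abs(x1 - x3))    % both differences are zero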

Discrete-time Fourier transform (DTFT)
The Fourier transform of a sampled signal

    x̂(t) = t_s · Σ_{n=−∞}^{∞} x_n · δ(t − t_s·n)

is

    X̂(f) = F{x̂(t)}(f) = ∫_{−∞}^{∞} x̂(t)·e^{−2πjft} dt = t_s · Σ_{n=−∞}^{∞} x_n · e^{−2πj(f/f_s)n}

The inverse transform is

    x̂(t) = ∫_{−∞}^{∞} X̂(f)·e^{2πjft} df      or      x_m = ∫_{−f_s/2}^{f_s/2} X̂(f)·e^{2πj(f/f_s)m} df.

The DTFT is also commonly expressed using the normalized frequency ω̇ = 2πf/f_s (radians per sample), and the notation

    X(e^{jω̇}) = Σ_n x_n·e^{−jω̇n}

is customary, to highlight both the periodicity of the DTFT and its relationship with the z-transform of {x_n} (see slide 117). (slide 57)

(Figure: two example sequences in the time domain and the real and imaginary parts of one period of their DTFT.) (slide 58)
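The DTFT of a finite sequence can be evaluated on a dense frequency grid by summing the defining series directly; a sketch (the sequence is an arbitrary example):

    x = [1 1 1 1 1];                       % finite sequence x_0 ... x_4
    w = linspace(-pi, pi, 1001);           % normalized angular frequency grid
    X = zeros(size(w));
    for n = 0:length(x)-1
        X = X + x(n+1) * exp(-1j*w*n);     % X(e^{jw}) = sum_n x_n e^{-jwn}
    end
    plot(w/pi, real(X), w/pi, imag(X));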

(Figure: further DTFT examples — time-domain sample sequences and one period of the corresponding DTFT, real and imaginary parts.) (slides 59–60)

(Figure: further DTFT examples — time-domain sample sequences and one period of the corresponding DTFT, real and imaginary parts.) (slide 61)

Properties of the DTFT
The DTFT is periodic:

    X̂(f) = X̂(f + k·f_s)      or      X(e^{jω̇}) = X(e^{j(ω̇+2πk)})      ∀k ∈ Z

Beyond that, the DTFT is just the Fourier transform applied to a discrete sequence, and inherits the properties of the continuous Fourier transform, e.g.
◮ Linearity
◮ Symmetries
◮ Convolution and modulation theorem:

    {x_n} ∗ {y_n} = {z_n}   ⟺   X(e^{jω̇}) · Y(e^{jω̇}) = Z(e^{jω̇})

and

    x_n · y_n = z_n   ⟺   Z(e^{jω̇}) = ∫_{−π}^{π} X(e^{jω̇′}) · Y(e^{j(ω̇−ω̇′)}) dω̇′  (slide 62)

Nyquist limit and anti-aliasing filters
If the (double-sided) bandwidth of a signal to be sampled is larger than the sampling frequency f_s, the images of the signal that emerge during sampling may overlap with the original spectrum. Such an overlap will hinder reconstruction of the original continuous signal by removing the aliasing frequencies with a reconstruction filter.
Therefore, it is advisable to limit the bandwidth of the input signal to the sampling frequency f_s before sampling, using an anti-aliasing filter.
In the common case of a real-valued base-band signal (with frequency content down to 0 Hz), all frequencies f that occur in the signal with non-zero power should be limited to the interval −f_s/2 < f < f_s/2.
The upper limit f_s/2 for the single-sided bandwidth of a baseband signal is known as the "Nyquist limit". (slide 63)

Nyquist limit and anti-aliasing filters
(Figure: spectra with and without an anti-aliasing filter. Without it, the images in X̂(f) overlap the original spectrum; with it, the anti-aliasing and reconstruction filters both suppress frequencies outside |f| < f_s/2.) (slide 64)

Reconstruction of a continuous band-limited waveform
The ideal anti-aliasing filter for eliminating any frequency content above f_s/2 before sampling with a frequency of f_s has the Fourier transform

    H(f) = 1 if |f| < f_s/2,   0 if |f| > f_s/2     = rect(t_s·f).

This leads, after an inverse Fourier transform, to the impulse response

    h(t) = f_s · sin(πtf_s)/(πtf_s) = (1/t_s) · sinc(t/t_s).

The original band-limited signal can be reconstructed by convolving this with the sampled signal x̂(t), which eliminates the periodicity of the frequency domain introduced by the sampling process:

    x(t) = h(t) ∗ x̂(t)

Note that sampling h(t) gives the impulse function: h(t)·s(t) = δ(t). (slide 65)

(Figure: impulse response of the ideal low-pass filter with cut-off frequency f_s/2, plotted over −3 ≤ t·f_s ≤ 3.) (slide 66)

Reconstruction filter example
(Figure: a sampled signal, the scaled and shifted sin(x)/x pulses, and the resulting interpolation.) (slide 67)

If before being sampled with x_n = x(t_s·n) the signal x(t) satisfied the Nyquist limit

    F{x(t)}(f) = ∫_{−∞}^{∞} x(t)·e^{−2πjft} dt = 0    for all |f| ≥ f_s/2

then it can be reconstructed by interpolation with h(t) = (1/t_s)·sinc(t/t_s):

    x(t) = ∫_{−∞}^{∞} h(s)·x̂(t − s) ds
         = ∫_{−∞}^{∞} (1/t_s)·sinc(s/t_s) · t_s · Σ_{n=−∞}^{∞} x_n·δ(t − s − t_s·n) ds
         = Σ_{n=−∞}^{∞} x_n · ∫_{−∞}^{∞} sinc(s/t_s)·δ(t − s − t_s·n) ds
         = Σ_{n=−∞}^{∞} x_n · sinc((t − t_s·n)/t_s) = Σ_{n=−∞}^{∞} x_n · sinc(t/t_s − n)
         = Σ_{n=−∞}^{∞} x_n · sin(π(t/t_s − n)) / (π(t/t_s − n))  (slide 68)
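The interpolation formula can be tried directly; a sketch using sinc() as provided by the Signal Processing Toolbox (the normalized sinc used in these notes), with an arbitrarily chosen 2 Hz test tone:

    fs = 10;  ts = 1/fs;
    n  = -30:30;
    xn = cos(2*pi*2*(n*ts));                  % samples of a 2 Hz cosine (below fs/2)
    t  = -1:0.001:1;                          % dense time grid for the reconstruction
    x  = zeros(size(t));
    for k = 1:length(n)
        x = x + xn(k) * sinc(t/ts - n(k));    % sum of scaled, shifted sinc pulses
    end
    plot(t, x, n*ts, xn, 'o');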

Reconstruction filters
The mathematically ideal form of a reconstruction filter for suppressing aliasing frequencies interpolates the sampled signal x_n = x(t_s·n) back into the continuous waveform

    x(t) = Σ_{n=−∞}^{∞} x_n · sin(π(t/t_s − n)) / (π(t/t_s − n)).

Choice of sampling frequency
Due to causality and economic constraints, practical analog filters can only approximate such an ideal low-pass filter. Instead of a sharp transition between the "pass band" (< f_s/2) and the "stop band" (> f_s/2), they feature a "transition band" in which their signal attenuation gradually increases.
The sampling frequency is therefore usually chosen somewhat higher than twice the highest frequency of interest in the continuous signal (e.g., 4×). On the other hand, the higher the sampling frequency, the higher are CPU, power and memory requirements. Therefore, the choice of sampling frequency is a tradeoff between signal quality, analog filter cost and digital subsystem expenses. (slide 69)

Band-pass signal sampling
Sampled signals can also be reconstructed if their spectral components remain entirely within the interval n·f_s/2 < |f| < (n+1)·f_s/2 for some n ∈ N. (The baseband case discussed so far is just n = 0.)
(Figure: a spectrum X(f) confined to the band n·f_s/2 < |f| < (n+1)·f_s/2 for n = 2, its sampled spectrum X̂(f), and the corresponding anti-aliasing and reconstruction filters.)
In this case, the aliasing copies of the positive and the negative frequencies will interleave instead of overlap, and can therefore be removed again later by a reconstruction filter.
The ideal reconstruction filter for this sampling technique will only allow frequencies in the interval [n·f_s/2, (n+1)·f_s/2] to pass through. The impulse response of such a band-pass filter can be obtained by amplitude modulating a low-pass filter, or by subtracting two low-pass filters:

    h(t) = f_s · [sin(πtf_s/2)/(πtf_s/2)] · cos(2πtf_s·(2n+1)/4)
         = (n+1)f_s · [sin(πt(n+1)f_s)/(πt(n+1)f_s)] − n·f_s · [sin(πtnf_s)/(πtnf_s)].  (slide 70)

Spectrum of a periodic signal
A signal x(t) that is periodic with frequency f_p can be factored into a single period ẋ(t) convolved with an impulse comb p(t). This corresponds in the frequency domain to the multiplication of the spectrum of the single period with a comb of impulses spaced f_p apart.
(Figure: x(t) = ẋ(t) ∗ p(t) in the time domain; X(f) = Ẋ(f) · P(f) in the frequency domain.) (slide 71)

Spectrum of a sampled signal
A signal x(t) that is sampled with frequency f_s has a spectrum that is periodic with a period of f_s.
(Figure: x̂(t) = x(t) · s(t) in the time domain; X̂(f) = X(f) ∗ S(f) in the frequency domain.) (slide 72)

Continuous vs discrete Fourier transform
◮ Sampling a continuous signal makes its spectrum periodic
◮ A periodic signal has a sampled spectrum
We sample a signal x(t) with f_s, getting x̂(t). We take n consecutive samples of x̂(t) and repeat these periodically, getting a new signal ẍ(t) with period n/f_s. Its spectrum Ẍ(f) is sampled (i.e., has non-zero value) at frequency intervals f_s/n and repeats itself with a period f_s.
Now both ẍ(t) and its spectrum Ẍ(f) are finite vectors of length n.
(Figure: the periodically repeated sampled signal ẍ(t) and its periodic sampled spectrum Ẍ(f).) (slide 73)

If x(t) has period t_p = n·t_s, then after sampling it at rate t_s we have

    ẍ(t) = x(t)·s(t) = t_s · Σ_{i=−∞}^{∞} x_i·δ(t − t_s·i) = t_s · Σ_{l=−∞}^{∞} Σ_{i=0}^{n−1} x_i·δ(t − t_s·(i + nl))

and the Fourier transform of that is

    Ẍ(f) = F{ẍ(t)}(f) = ∫_{−∞}^{∞} ẍ(t)·e^{−2πjft} dt
         = t_s · Σ_{l=−∞}^{∞} Σ_{i=0}^{n−1} x_i·e^{−2πj(f/f_s)(i+nl)}
         = t_s · ( Σ_{l=−∞}^{∞} e^{−2πj(f/f_s)nl} ) · ( Σ_{i=0}^{n−1} x_i·e^{−2πj(f/f_s)i} )

where the first factor equals (1/(t_s·n)) · Σ_l δ(f − l·f_s/n).
Recall that Σ_{i=−∞}^{∞} e^{±2πjixa} = (1/|a|) · Σ_{i=−∞}^{∞} δ(x − i/a) and map x = f, a = n/f_s and i = l.
After substituting k := f/f_p = f·n/f_s, i.e. f/f_s = k/n and f = k·f_p:

    Ẍ(k·f_p) = (1/n) · ( Σ_{l=−∞}^{∞} δ(k·f_p − l·f_p) ) · ( Σ_{i=0}^{n−1} x_i·e^{−2πjki/n} )

where the first factor is δ(0) if k ∈ Z and 0 if k ∉ Z, and the second factor is X_k.
Show that X_k = X_{k±n} for all k ∈ Z. (slide 74)

Discrete Fourier Transform (DFT)

    X_k = Σ_{i=0}^{n−1} x_i · e^{−2πj·ik/n}            x_k = (1/n) · Σ_{i=0}^{n−1} X_i · e^{2πj·ik/n}

The n-point DFT multiplies a vector with an n×n matrix F_n whose element in row i and column k (i, k = 0, ..., n−1) is e^{−2πj·ik/n}; its first row and first column are all ones. The forward transform is the matrix-vector product

    (X_0, X_1, ..., X_{n−1})ᵀ = F_n · (x_0, x_1, ..., x_{n−1})ᵀ

and the inverse transform is

    (x_0, x_1, ..., x_{n−1})ᵀ = (1/n) · F_n* · (X_0, X_1, ..., X_{n−1})ᵀ  (slide 75)

Discrete Fourier Transform visualized
(Figure: the 8×8 DFT matrix shown as a grid of rotating phasors multiplying the vector (x_0, ..., x_7)ᵀ to give (X_0, ..., X_7)ᵀ.)
The n-point DFT of a signal {x_i} sampled at frequency f_s contains in the elements X_0 to X_{n/2} of the resulting frequency-domain vector the frequency components 0, f_s/n, 2f_s/n, 3f_s/n, ..., f_s/2, and contains in X_{n−1} down to X_{n/2} the corresponding negative frequencies. Note that for a real-valued input vector, both X_0 and X_{n/2} will be real, too.
Why is there no phase information recovered at f_s/2? (slide 76)
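The matrix form above can be checked against MATLAB's fft, which uses the same sign and scaling convention for the forward transform; a sketch:

    n = 8;
    [i, k] = meshgrid(0:n-1);           % index grids i and k
    Fn = exp(-2j*pi*i.*k/n);            % the n x n DFT matrix F_n
    x  = randn(n, 1);
    max(abs(Fn*x - fft(x)))             % forward transform: only rounding error
    max(abs((1/n)*Fn'*fft(x) - x))      % inverse: (1/n) times the conjugate transpose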

Inverse DFT visualized
(Figure: the 8-point inverse DFT as the matrix-vector product x = (1/8) · F_8* · X.) (slide 77)

Fast Fourier Transform (FFT)

    F_n{x_i}_k = Σ_{i=0}^{n−1} x_i · e^{−2πj·ik/n}
               = Σ_{i=0}^{n/2−1} x_{2i} · e^{−2πj·ik/(n/2)} + e^{−2πj·k/n} · Σ_{i=0}^{n/2−1} x_{2i+1} · e^{−2πj·ik/(n/2)}
               = F_{n/2}{x_{2i}}_k       + e^{−2πj·k/n} · F_{n/2}{x_{2i+1}}_k          for k < n/2
               = F_{n/2}{x_{2i}}_{k−n/2} + e^{−2πj·k/n} · F_{n/2}{x_{2i+1}}_{k−n/2}    for k ≥ n/2

The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log₂ n rounds and n·log₂ n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and support for non-power-of-2 sizes is available at http://www.fftw.org/. (slide 78)
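A minimal recursive sketch of this decimation-in-time recursion (assuming a row vector whose length is a power of two; the function name fft_rec is made up):

    function X = fft_rec(x)
      % Recursive radix-2 FFT following the decomposition above.
      n = length(x);
      if n == 1
        X = x;
      else
        E = fft_rec(x(1:2:n));                  % DFT of the even-indexed samples
        O = fft_rec(x(2:2:n));                  % DFT of the odd-indexed samples
        t = exp(-2j*pi*(0:n/2-1)/n) .* O;       % twiddle factors e^{-2 pi j k / n}
        X = [E + t, E - t];                     % halves for k < n/2 and k >= n/2
      end
    end

Saved as fft_rec.m, a quick check such as x = randn(1,16); max(abs(fft_rec(x) - fft(x))) should only show rounding error.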

Efficient real-valued FFT
The symmetry properties of the Fourier transform applied to the discrete Fourier transform {X_i}_{i=0}^{n−1} = F_n{x_i}_{i=0}^{n−1} have the form

    ∀i: x_i = ℜ(x_i)     ⟺   ∀i: X_{n−i} = X_i*
    ∀i: x_i = j·ℑ(x_i)   ⟺   ∀i: X_{n−i} = −X_i*

These two symmetries, combined with the linearity of the DFT, allow us to calculate two real-valued n-point DFTs

    {X′_i}_{i=0}^{n−1} = F_n{x′_i}_{i=0}^{n−1}        {X″_i}_{i=0}^{n−1} = F_n{x″_i}_{i=0}^{n−1}

simultaneously in a single complex-valued n-point DFT, by composing its input as x_i = x′_i + j·x″_i and decomposing its output as

    X′_i = (1/2)·(X_i + X*_{n−i})        X″_i = (1/2j)·(X_i − X*_{n−i})

where X_n = X_0.
To optimize the calculation of a single real-valued FFT, use this trick to calculate the two half-size real-valued FFTs that occur in the first round. (slide 79)

Fast complex multiplication
Calculating the product of two complex numbers as

    (a + jb)·(c + jd) = (ac − bd) + j(ad + bc)

involves four (real-valued) multiplications and two additions. The alternative calculation

    (a + jb)·(c + jd) = (α − β) + j(α + γ)     with   α = a(c + d),  β = d(a + b),  γ = c(b − a)

provides the same result with three multiplications and five additions. The latter may perform faster on CPUs where multiplications take three or more times longer than additions.
This "Karatsuba multiplication" is most helpful on simpler microcontrollers. Specialized signal-processing CPUs (DSPs) feature 1-clock-cycle multipliers. High-end desktop processors use pipelined multipliers that stall where operations depend on each other. (slide 80)
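A sketch of the two-real-FFTs-in-one-complex-FFT trick (the test data is arbitrary):

    x1 = randn(1, 16);  x2 = randn(1, 16);    % two real-valued sequences
    X  = fft(x1 + 1j*x2);                     % one complex n-point FFT
    Xs = conj([X(1), fliplr(X(2:end))]);      % X*_{n-i}, using X_n = X_0
    X1 = (X + Xs)/2;                          % equals fft(x1)
    X2 = (X - Xs)/(2j);                       % equals fft(x2)
    max(abs(X1 - fft(x1))), max(abs(X2 - fft(x2)))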

FFT-based convolution
Calculating the convolution of two finite sequences {x_i}_{i=0}^{m−1} and {y_i}_{i=0}^{n−1} of lengths m and n via

    z_i = Σ_{j=max{0, i−(n−1)}}^{min{m−1, i}} x_j · y_{i−j},      0 ≤ i < m + n − 1

takes mn multiplications. Can we apply the FFT and the convolution theorem to calculate the convolution faster, in just O(m·log m + n·log n) multiplications?

    {z_i} = F⁻¹( F{x_i} · F{y_i} )

There is obviously no problem if this condition is fulfilled: {x_i} and {y_i} are periodic, with equal period lengths. In this case, the fact that the DFT interprets its input as a single period of a periodic signal will do exactly what is needed, and the FFT and inverse FFT can be applied directly as above. (slide 81)

In the general case, measures have to be taken to prevent a wrap-over:
(Figure: the circular convolution F⁻¹[F(A)·F(B)] wraps around, whereas the zero-padded sequences A′ and B′ give the correct linear convolution F⁻¹[F(A′)·F(B′)].)
Both sequences are padded with zero values to a length of at least m + n − 1. This ensures that the start and end of the resulting sequence do not overlap. (slide 82)
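With zero padding to length m + n − 1, the FFT route reproduces conv; a small sketch (arbitrary test sequences):

    x = randn(1, 40);  y = randn(1, 25);      % arbitrary finite sequences
    N = length(x) + length(y) - 1;            % pad to at least m + n - 1
    z1 = conv(x, y);
    z2 = ifft(fft(x, N) .* fft(y, N));        % convolution theorem with zero padding
    max(abs(z1 - real(z2)))                   % only rounding error remains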

Zero padding is usually applied to extend both sequence lengths to the next higher power of two (2^⌈log₂(m+n−1)⌉), which facilitates the FFT.
With a causal sequence, simply append the padding zeros at the end.
With a non-causal sequence, values with a negative index number are wrapped around the DFT block boundaries and appear at the right end. In this case, zero-padding is applied in the center of the block, between the last and first element of the sequence.
Thanks to the periodic nature of the DFT, zero padding at both ends has the same effect as padding only at one end.
If both sequences can be loaded entirely into RAM, the FFT can be applied to them in one step. However, one of the sequences might be too large for that. It could also be a realtime waveform (e.g., a telephone signal) that cannot be delayed until the end of the transmission. In such cases, the sequence has to be split into shorter blocks that are separately convolved and then added together with a suitable overlap. (slide 83)

Each block is zero-padded at both ends and then convolved as before:
(Figure: three overlapping blocks, each zero-padded and convolved separately.)
The regions originally added as zero padding are, after convolution, aligned to overlap with the unpadded ends of their respective neighbour blocks. The overlapping parts of the blocks are then added together. (slide 84)
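A compact sketch of this block-wise (overlap-add) convolution for a causal impulse response; the block length L and the moving-average impulse response are arbitrary choices, not from the notes:

    h = ones(1, 31)/31;                       % a causal FIR impulse response
    x = randn(1, 1000);                       % long input signal
    L = 100;                                  % block length
    N = 2^nextpow2(L + length(h) - 1);        % FFT size including zero padding
    H = fft(h, N);
    y = zeros(1, length(x) + N);
    for s = 1:L:length(x)
        blk = x(s : min(s+L-1, length(x)));
        yb  = real(ifft(fft(blk, N) .* H));   % convolve one zero-padded block
        y(s : s+N-1) = y(s : s+N-1) + yb;     % add overlapping block results
    end
    y = y(1 : length(x) + length(h) - 1);
    max(abs(y - conv(x, h)))                  % matches direct convolution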

Deconvolution
A signal u(t) was distorted by convolution with a known impulse response h(t) (e.g., through a transmission channel or a sensor problem). The "smeared" result s(t) was recorded. Can we undo the damage and restore (or at least estimate) u(t)?
(Figure: a sharp image and a one-dimensional signal, each blurred by convolution with a known impulse response.) (slide 85)

The convolution theorem turns the problem into one of multiplication:

    s(t) = ∫ u(t − τ)·h(τ) dτ,  i.e.  s = u ∗ h
    F{s} = F{u} · F{h}
    F{u} = F{s} / F{h}
    u = F⁻¹{ F{s} / F{h} }

In practice, we also record some noise n(t) (quantization, etc.):

    c(t) = s(t) + n(t) = ∫ u(t − τ)·h(τ) dτ + n(t)

Problem – At frequencies f where F{h}(f) approaches zero, the noise will be amplified (potentially enormously) during deconvolution:

    ũ = F⁻¹{ F{c} / F{h} } = u + F⁻¹{ F{n} / F{h} }  (slide 86)

Typical workarounds:
◮ Modify the Fourier transform of the impulse response, such that |F{h}(f)| > ε for some experimentally chosen threshold ε.
◮ If estimates of the signal spectrum |F{s}(f)| and the noise spectrum |F{n}(f)| can be obtained, then we can apply the "Wiener filter" ("optimal filter")

    W(f) = |F{s}(f)|² / ( |F{s}(f)|² + |F{n}(f)|² )

before deconvolution:

    ũ = F⁻¹{ W · F{c} / F{h} }  (slide 87)

(Figure: spectrogram of the vowel "A" sung at varying pitch, 0–8 kHz over about 4.5 s, produced in MATLAB with:
[w,fs,bits] = auread('sing.au'); specgram(w,2048,fs); ylim([0 8e3]); xlim([0 4.5]); saveas(gcf, 'sing.eps', 'eps2c');) (slide 88)
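A small frequency-domain deconvolution sketch using the first workaround (the threshold ε, the impulse response and the noise level are all arbitrary choices, and circular convolution is used for simplicity):

    N = 1024;
    u = randn(1, N);                                          % original signal
    h = [0.5 0.3 0.2];                                        % known impulse response
    c = real(ifft(fft(u, N).*fft(h, N))) + 1e-3*randn(1, N);  % blurred signal plus noise
    H = fft(h, N);
    eps0 = 0.3;                                               % experimentally chosen threshold
    idx = abs(H) < eps0;
    H(idx) = eps0 * H(idx) ./ abs(H(idx));                    % raise small |H(f)|, keep phase
    u_est = real(ifft(fft(c) ./ H));                          % estimate of u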

(Figure: spectrogram of different vowels sung at constant pitch, 0–8 kHz.) (slide 89)

Spectral estimation
(Figure: the 16-point sequences cos(2π·[0:15]/16·4) and cos(2π·[0:15]/16·4.2), each with DTFT magnitude and 16-point DFT magnitude. The first tone falls exactly onto a DFT bin and produces a single sharp peak; the second falls between bins and its energy spreads over many bins.) (slide 90)

We introduced the DFT as a special case of the continuous Fourier transform, where the input is sampled and periodic.
If the input is sampled, but not periodic, the DFT can still be used to calculate an approximation of the Fourier transform of the original continuous signal. However, there are two effects to consider. They are particularly visible when analysing pure sine waves.
Sine waves whose frequency is a multiple of the base frequency (f_s/n) of the DFT are identical to their periodic extension beyond the size of the DFT. They are, therefore, represented exactly by a single sharp peak in the DFT. All their energy falls into one single frequency "bin" in the DFT result.
Sine waves with other frequencies, which do not match exactly one of the output frequency bins of the DFT, are still represented by a peak at the output bin that represents the nearest integer multiple of the DFT's base frequency. However, such a peak is distorted in two ways:
◮ Its amplitude is lower (down to 63.7%).
◮ Much signal energy has "leaked" to other frequencies. (slide 91)

(Figure: DFT magnitude as the input frequency sweeps from that of bin 15 to that of bin 16; only positive frequencies shown.)
The leakage of energy to other frequency bins not only blurs the estimated spectrum. The peak amplitude also changes significantly as the frequency of a tone changes from that associated with one output bin to the next, a phenomenon known as scalloping. In the above graphic, an input sine wave gradually changes from the frequency of bin 15 to that of bin 16 (only positive frequencies shown). (slide 92)

Windowing
(Figure: a sine wave and its DFT, and the same sine wave multiplied with a window function and its DFT; the windowed version shows far less leakage.) (slide 93)

The reason for the leakage and scalloping losses is easy to visualize with the help of the convolution theorem:
The operation of cutting a sequence of the size of the DFT input vector out of a longer original signal (the one whose continuous Fourier spectrum we try to estimate) is equivalent to multiplying this signal with a rectangular function. This destroys all information and continuity outside the "window" that is fed into the DFT.
Multiplication with a rectangular window of length T in the time domain is equivalent to convolution with sin(πfT)/(πfT) in the frequency domain.
The subsequent interpretation of this window as a periodic sequence by the DFT leads to sampling of this convolution result (sampling meaning multiplication with a Dirac comb whose impulses are spaced f_s/n apart).
Where the window length was an exact multiple of the original signal period, sampling of the sin(πfT)/(πfT) curve leads to a single Dirac pulse, and the windowing causes no distortion. In all other cases, the effects of the convolution become visible in the frequency domain as leakage and scalloping losses. (slide 94)

Some better window functions
(Figure: rectangular, triangular, Hann and Hamming windows plotted over the interval [0, 1]. All these functions are 0 outside the interval [0, 1].) (slide 95)

(Figure: the sequence cos(2π·[0:15]/16·4.2) and the same sequence multiplied with hann(16), each with DTFT and DFT magnitude; the Hann window greatly reduces the leakage into distant bins.) (slide 96)

(Figure: magnitude in dB of the DTFT of 64-point rectangular, triangular, Hann and Hamming windows over normalized frequency.) (slide 97)

Numerous alternatives to the rectangular window have been proposed that reduce leakage and scalloping in spectral estimation. These are vectors multiplied element-wise with the input vector before applying the DFT to it. They all force the signal amplitude smoothly down to zero at the edge of the window, thereby avoiding the introduction of sharp jumps in the signal when it is extended periodically by the DFT.
Three examples of such window vectors {w_i}_{i=0}^{n−1} are:
Triangular window (Bartlett window):

    w_i = 1 − |1 − i/(n/2)|

Hann window (raised-cosine window, Hanning window):

    w_i = 0.5 − 0.5 × cos(2π·i/(n−1))

Hamming window:

    w_i = 0.54 − 0.46 × cos(2π·i/(n−1))  (slide 98)
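These window vectors are simple to generate and apply before the DFT; a sketch (the block size and test tone are arbitrary choices):

    n = 64;  i = 0:n-1;
    w_hann = 0.5  - 0.5 *cos(2*pi*i/(n-1));    % Hann window
    w_hamm = 0.54 - 0.46*cos(2*pi*i/(n-1));    % Hamming window
    x = cos(2*pi*0.2*i);                       % a tone that does not hit a DFT bin exactly
    Xr = abs(fft(x));                          % rectangular window: strong leakage
    Xh = abs(fft(x .* w_hann));                % Hann window: leakage greatly reduced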

Does zero padding increase DFT resolution?
The two figures below show two spectra of the 16-element sequence

    s_i = cos(2π·3i/16) + cos(2π·4i/16),    i ∈ {0, ..., 15}.

The left plot shows the DFT of the windowed sequence

    x_i = s_i·w_i,    i ∈ {0, ..., 15}

and the right plot shows the DFT of the zero-padded windowed sequence

    x′_i = s_i·w_i for i ∈ {0, ..., 15},   0 for i ∈ {16, ..., 63}

where w_i = 0.54 − 0.46 × cos(2πi/15) is the Hamming window.
(Figure: DFT without zero padding vs. DFT with 48 zeros appended to the window.) (slide 99)

(Figure: the sequence cos(2π·[0:15]/16·3.3) + cos(2π·[0:15]/16·4) and the same sequence zero-padded to 64 samples, each with DTFT and DFT magnitude; zero padding samples the same DTFT more densely but does not separate the two tones.) (slide 100)

(Figure: the 16-sample sequence cos(2π·[0:15]/16·3.3) + cos(2π·[0:15]/16·4) compared with a version using 64 actual samples; only the longer window resolves the two tones as separate spectral lines.) (slide 101)

Applying the discrete Fourier transform (DFT) to an n-element long real-valued sequence samples the DTFT of that sequence at n/2 + 1 discrete frequencies.
The DTFT spectrum has already been distorted by multiplying the (hypothetically longer) signal with a windowing function that limits its length to n non-zero values and forces the waveform down to zero outside the window. Therefore, appending further zeros outside the window will not affect the DTFT.
The frequency resolution of the DFT is the sampling frequency divided by the block size of the DFT. Zero padding can therefore be used to increase the frequency resolution of the DFT, to sample the DTFT at more places. But that does not change the limit imposed on the frequency resolution (i.e., blurriness) of the DTFT by the length of the window.
Note that zero padding does not add any additional information to the signal. The DTFT has already been "low-pass filtered" by being convolved with the spectrum of the windowing function. Zero padding in the time domain merely causes the DFT to sample the same underlying DTFT spectrum at a higher resolution, thereby making it easier to visually distinguish spectral lines and to locate their peak more precisely. (slide 102)

Digital filters
Filter: suppresses (removes, attenuates) unwanted signal components.
◮ low-pass filter – suppress all frequencies above a cut-off frequency
◮ high-pass filter – suppress all frequencies below a cut-off frequency, including DC (direct current = 0 Hz)
◮ band-pass filter – suppress signals outside a frequency interval (= passband)
◮ band-stop filter (aka: band-reject filter) – suppress signals inside a single frequency interval (= stopband)
◮ notch filter – narrow band-stop filter, ideally suppressing only a single frequency
For digital filters, we also distinguish
◮ finite impulse response (FIR) filters
◮ infinite impulse response (IIR) filters
depending on how far their memory reaches back in time. (slide 103)

Window-based design of FIR filters
Recall that the ideal continuous low-pass filter with cut-off frequency f_c has the frequency characteristic

    H(f) = 1 if |f| < f_c,  0 if |f| > f_c     = rect(f/(2f_c))

and the impulse response

    h(t) = 2f_c · sin(2πtf_c)/(2πtf_c) = 2f_c · sinc(2f_c·t).

Sampling this impulse response with the sampling frequency f_s of the signal to be processed will lead to a periodic frequency characteristic that matches the periodic spectrum of the sampled signal. There are two problems though:
◮ the impulse response is infinitely long
◮ this filter is not causal, that is h(t) ≠ 0 for t < 0 (slide 104)

Solutions:
◮ Make the impulse response finite by multiplying the sampled h(t) with a windowing function
◮ Make the impulse response causal by adding a delay of half the window size
The impulse response of an n-th order low-pass filter is then chosen as

    h_i = 2f_c/f_s · sin[2π(i − n/2)·f_c/f_s] / [2π(i − n/2)·f_c/f_s] · w_i

where {w_i} is a windowing sequence, such as the Hamming window

    w_i = 0.54 − 0.46 × cos(2πi/n)

with w_i = 0 for i < 0 and i > n.
Note that for f_c = f_s/4, we have h_i = 0 for all even values of i. Therefore, this special case requires only half the number of multiplications during the convolution. Such "half-band" FIR filters are used, for example, as anti-aliasing filters wherever a sampling rate needs to be halved. (slide 105)

FIR low-pass filter design example
(Figure: zero locations in the z plane, impulse response, magnitude response and phase response of a windowed-sinc FIR low-pass filter; order n = 30, cutoff frequency (−6 dB) f_c = 0.25 × f_s/2, Hamming window.) (slide 106)
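A sketch of this windowed-sinc design for roughly the example of slide 106, using sinc() as provided by the Signal Processing Toolbox (fir1(30, 0.25) would give a very similar, rescaled result):

    n  = 30;                                  % filter order
    fc = 0.25 * 0.5;                          % cut-off f_c/f_s (= 0.25 x f_s/2)
    i  = 0:n;
    w  = 0.54 - 0.46*cos(2*pi*i/n);           % Hamming window
    h  = 2*fc * sinc(2*fc*(i - n/2)) .* w;    % windowed ideal low-pass impulse response
    % y = filter(h, 1, x);                    % apply to some signal x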

Filter performance
An ideal filter has a gain of 1 in the pass-band and a gain of 0 in the stop band, and nothing in between. A practical filter will have
◮ frequency-dependent gain near 1 in the passband
◮ frequency-dependent gain below a threshold in the stopband
◮ a transition band between the pass and stop bands
We truncate the ideal, infinitely-long impulse response by multiplication with a window sequence. In the frequency domain, this will convolve the rectangular frequency response of the ideal low-pass filter with the frequency characteristic of the window. The width of the main lobe determines the width of the transition band, and the side lobes cause ripples in the passband and stopband. (slide 107)

Low-pass to band-pass filter conversion (modulation)
To obtain a band-pass filter that attenuates all frequencies f outside the range f_l < f < f_h, we first design a low-pass filter with a cut-off frequency (f_h − f_l)/2. We then multiply its impulse response with a sine wave of frequency (f_h + f_l)/2, effectively amplitude modulating it, to shift its centre frequency. Finally, we apply a window function:

    h_i = (f_h − f_l)/f_s · sin[π(i − n/2)(f_h − f_l)/f_s] / [π(i − n/2)(f_h − f_l)/f_s] · cos[πi(f_h + f_l)/f_s] · w_i

(Figure: the band-pass frequency response H(f) as the convolution of a low-pass response of width f_h − f_l with impulses at ±(f_h + f_l)/2.) (slide 108)

Low-pass to high-pass filter conversion (freq. inversion)
In order to turn the spectrum X(f) of a real-valued signal x_i sampled at f_s into an inverted spectrum X′(f) = X(f_s/2 − f), we merely have to shift the periodic spectrum by f_s/2:
(Figure: the periodic spectrum X(f) convolved with impulses at ±f_s/2 gives the inverted spectrum X′(f).)
This can be accomplished by multiplying the sampled sequence x_i with y_i = cos(πf_s·t) = cos(πi), which is nothing but multiplication with the sequence

    ..., 1, −1, 1, −1, 1, −1, 1, −1, ...

So in order to design a discrete high-pass filter that attenuates all frequencies f outside the range f_c < |f| < f_s/2, we merely have to design a low-pass filter that attenuates all frequencies outside the range −f_c < f < f_c, and then multiply every second value of its impulse response with −1. (slide 109)

Linear phase filters
A filter where the Fourier transform H(f) of its impulse response h(t) is real-valued will not affect the phase of the filtered signal at any frequency. Only the amplitudes will be affected.

    ∀t ∈ R: h(t) = [h(−t)]*    ⟺    ∀f ∈ R: H(f) ∈ R

A phase-neutral filter with a real-valued frequency response will have an even impulse response, and will therefore usually be non-causal. To make such a filter causal, we have to add a delay Δt (half the length of the impulse response). This corresponds to multiplication with e^{−2πjfΔt} in the frequency domain:

    h(t − Δt)  •−◦  H(f) · e^{−2πjfΔt}

Filters that delay the phase of a signal at each frequency by the time Δt therefore add to the phase angle a value −2πfΔt, which increases linearly with f. They are therefore called linear-phase filters. This is the closest one can get to phase-neutrality with causality. (slide 110)

Finite impulse response (FIR) filter

    y_n = Σ_{m=0}^{M} b_m·x_{n−m}

(Figure: FIR filter block diagram for M = 3 with delay elements z^{−1} and coefficients b_0, ..., b_3, as on slide 25, together with a transposed implementation of the same filter.) (slide 111)

Infinite impulse response (IIR) filter

    Σ_{k=0}^{N} a_k·y_{n−k} = Σ_{m=0}^{M} b_m·x_{n−m}          Usually normalize: a_0 = 1

    y_n = ( Σ_{m=0}^{M} b_m·x_{n−m} − Σ_{k=1}^{N} a_k·y_{n−k} ) / a_0

(Figure: direct form I implementation, with feed-forward coefficients b_0, ..., b_3 and feedback coefficients −a_1, ..., −a_3.) (slide 112)

Infinite impulse response (IIR) filter – direct form II

    y_n = ( Σ_{m=0}^{M} b_m·x_{n−m} − Σ_{k=1}^{N} a_k·y_{n−k} ) / a_0

(Figure: direct form II and transposed direct form II block diagrams, which share a single chain of delay elements between the feedback and feed-forward paths.) (slide 113)

Polynomial representation of sequences
We can represent sequences {x_n} as polynomials:

    X(v) = Σ_{n=−∞}^{∞} x_n·v^n

Example of polynomial multiplication:

    (1 + 2v + 3v²) · (2 + 1v)
    = 2 + 4v + 6v²
        + 1v + 2v² + 3v³
    = 2 + 5v + 8v² + 3v³

Compare this with the convolution of two sequences (in MATLAB): conv([1 2 3], [2 1]) equals [2 5 8 3] (slide 114)

Convolution of sequences is equivalent to polynomial multiplication:

    {h_n} ∗ {x_n} = {y_n}    ⇒    y_n = Σ_{k=−∞}^{∞} h_k·x_{n−k}

    H(v)·X(v) = ( Σ_{n=−∞}^{∞} h_n·v^n ) · ( Σ_{n=−∞}^{∞} x_n·v^n ) = Σ_{n=−∞}^{∞} ( Σ_{k=−∞}^{∞} h_k·x_{n−k} ) · v^n

Note how the Fourier transform of a sequence can be accessed easily from its polynomial form:

    X(e^{−jω̇}) = Σ_{n=−∞}^{∞} x_n·e^{−jω̇n}  (slide 115)

Example of polynomial division:

    1/(1 − av) = 1 + av + a²v² + a³v³ + ··· = Σ_{n=0}^{∞} a^n·v^n

(Long division of 1 by 1 − av yields the quotient terms 1, av, a²v², ...; the corresponding block diagram is the recursive filter y_n = x_n + a·y_{n−1}.)
Rational functions (quotients of two polynomials) can provide convenient closed-form representations for infinitely-long exponential sequences, in particular the impulse responses of IIR filters. (slide 116)

The z-transform
The z-transform of a sequence {x_n} is defined as:

    X(z) = Σ_{n=−∞}^{∞} x_n·z^{−n}

Note that this differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.
Recall that the above X(z) is exactly the factor with which an exponential sequence {z^n} is multiplied, if it is convolved with {x_n}:

    {z^n} ∗ {x_n} = {y_n}    ⇒    y_n = Σ_{k=−∞}^{∞} z^{n−k}·x_k = z^n · Σ_{k=−∞}^{∞} z^{−k}·x_k = z^n · X(z)  (slide 117)

The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C. For finite sequences, its value is defined across the entire complex plane (except possibly at z = 0 or |z| = ∞). For infinite sequences, it can be shown that the z-transform converges only for the region

    lim_{n→∞} |x_{n+1}/x_n|  <  |z|  <  lim_{n→−∞} |x_{n+1}/x_n|

The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences that have the same expression as their z-transform, but that converge for different amplitudes of z.
The z-transform is a generalization of the discrete-time Fourier transform, which it contains on the complex unit circle (|z| = 1):

    t_s^{−1} · F{x̂(t)}(f) = X(e^{jω̇}) = Σ_{n=−∞}^{∞} x_n·e^{−jω̇n}

where ω̇ = 2πf/f_s. (slide 118)

Properties of the z-transform
If X(z) is the z-transform of {x_n}, we write here {x_n} •−◦ X(z).
If {x_n} •−◦ X(z) and {y_n} •−◦ Y(z), then:
Linearity:       {a·x_n + b·y_n}  •−◦  a·X(z) + b·Y(z)
Convolution:     {x_n} ∗ {y_n}  •−◦  X(z)·Y(z)
Time shift:      {x_{n+k}}  •−◦  z^k·X(z)
Remember in particular: delaying by one sample is multiplication with z^{−1}. (slide 119)

Time reversal:                   {x_{−n}}  •−◦  X(z^{−1})
Multiplication with exponential: {a^{−n}·x_n}  •−◦  X(az)
Complex conjugate:               {x_n*}  •−◦  X*(z*)
Real/imaginary value:            {ℜ{x_n}}  •−◦  (1/2)·(X(z) + X*(z*)),    {ℑ{x_n}}  •−◦  (1/2j)·(X(z) − X*(z*))
Initial value:                   x_0 = lim_{z→∞} X(z)   if x_n = 0 for all n < 0 (slide 120)

54. Some example sequences and their z-transforms:

    x_n                                       X(z)
    --------------------------------------------------------------------------
    δ_n                                       1
    u_n                                       z/(z − 1) = 1/(1 − z^{−1})
    a^n·u_n                                   z/(z − a) = 1/(1 − a·z^{−1})
    n·u_n                                     z/(z − 1)²
    n²·u_n                                    z·(z + 1)/(z − 1)³
    e^{an}·u_n                                z/(z − e^a)
    (n−1 choose k−1)·e^{a(n−k)}·u_{n−k}       1/(z − e^a)^k
    sin(ω̇n + φ)·u_n                           (z²·sin(φ) + z·sin(ω̇ − φ)) / (z² − 2z·cos(ω̇) + 1)

121

Example: What is the z-transform of the impulse response {h_n} of the discrete system y_n = x_n + a·y_{n−1}?

    y_n = x_n + a·y_{n−1}
    Y(z) = X(z) + a·z^{−1}·Y(z)
    Y(z) − a·z^{−1}·Y(z) = X(z)
    Y(z)·(1 − a·z^{−1}) = X(z)
    Y(z)/X(z) = 1/(1 − a·z^{−1}) = z/(z − a)

Since {y_n} = {h_n} ∗ {x_n}, we have Y(z) = H(z)·X(z) and therefore

    H(z) = Y(z)/X(z) = z/(z − a)

We have applied here the linearity of the z-transform, and its time-shift and convolution properties.

122
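Individual entries of the table above can be verified with impz. For example, n·u_n has the z-transform z/(z − 1)² = z^{−1}/(1 − z^{−1})², i.e. it is the impulse response of the filter with b = [0 1], a = [1 −2 1]. A minimal sketch (MATLAB/Octave), assuming the Signal Processing Toolbox:

h = impz([0 1], [1 -2 1], 8);    % first 8 samples of the impulse response
max(abs(h.' - (0:7)))            % ~0: h_n = n for n >= 0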

  55. z -transform of recursive filter structures a − 1 x n b 0 y n Consider the discrete system defined by 0 z − 1 z − 1 k m b 1 − a 1 � � a l · y n − l = b l · x n − l x n − 1 y n − 1 l =0 l =0 z − 1 z − 1 · · · · · · or equivalently · · · · · · ? z − 1 z − 1 k m b m − a k � � a 0 y n + a l · y n − l = b l · x n − l x n − m y n − k l =1 l =0 � m � k � � y n = a − 1 · b l · x n − l − a l · y n − l 0 l =0 l =1 What is the z -transform H ( z ) of its impulse response { h n } , where { y n } = { h n } ∗ { x n } ? 123 Using the linearity and time-shift property of the z -transform: k m � � a l · y n − l = b l · x n − l l =0 l =0 k m � � a l z − l · Y ( z ) = b l z − l · X ( z ) l =0 l =0 k m � � a l z − l = X ( z ) b l z − l Y ( z ) l =0 l =0 � m l =0 b l z − l H ( z ) = Y ( z ) X ( z ) = � k l =0 a l z − l H ( z ) = b 0 + b 1 z − 1 + b 2 z − 2 + · · · + b m z − m a 0 + a 1 z − 1 + a 2 z − 2 + · · · + a k z − k 124
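Since Y(z) = H(z)·X(z), applying the recursion directly with filter and convolving the input with the (truncated) impulse response must produce the same output. A minimal sketch (MATLAB/Octave) with arbitrarily chosen stable coefficients:

b = [0.5 0.3];  a = [1 -0.4 0.1];        % arbitrary stable example filter
x = randn(1, 30);
y1 = filter(b, a, x);                     % direct recursion
h  = impz(b, a, 100).';                   % truncated impulse response {h_n}
y2 = conv(x, h);
max(abs(y1 - y2(1:length(x))))            % tiny (only truncation error remains)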

  56. a − 1 The z -transform of the impulse re- x n b 0 y n 0 sponse { h n } of the causal LTI system z − 1 z − 1 defined by b 1 − a 1 x n − 1 y n − 1 k m � � a l · y n − l = b l · x n − l z − 1 z − 1 · · · · · · l =0 l =0 · · · · · · with { y n } = { h n } ∗ { x n } is the z − 1 z − 1 b m − a k rational function x n − m y n − k H ( z ) = b 0 + b 1 z − 1 + b 2 z − 2 + · · · + b m z − m a 0 + a 1 z − 1 + a 2 z − 2 + · · · + a k z − k ( b m � = 0, a k � = 0) which can also be written as H ( z ) = z k � m z m · b 0 z m + b 1 z m − 1 + b 2 z m − 2 + · · · + b m l =0 b l z m − l l =0 a l z k − l = z k . a 0 z k + a 1 z k − 1 + a 2 z k − 2 + · · · + a k z m � k H ( z ) has m zeros and k poles at non-zero locations in the z plane, plus k − m zeros (if k > m ) or m − k poles (if m > k ) at z = 0. 125 This function can be converted into the form m m � � (1 − c l · z − 1 ) ( z − c l ) H ( z ) = b 0 = b 0 · z k − m · l =1 l =1 · a 0 k a 0 k � � (1 − d l · z − 1 ) ( z − d l ) l =1 l =1 where the c l are the non-zero positions of zeros ( H ( c l ) = 0) and the d l are the non-zero positions of the poles (i.e., z → d l ⇒ | H ( z ) | → ∞ ) of H ( z ). Except for a constant factor, H ( z ) is entirely characterized by the position of these zeros and poles. On the unit circle z = e j ˙ ω , H (e j ˙ ω ) is the discrete-time Fourier transform ω = π f/ f s of { h n } ( ˙ 2 ). The DTFT amplitude can also be expressed in ω to the zeros and poles: terms of the relative position of e j ˙ � � � m ω − c l | l =1 | e j ˙ b 0 � � | H (e j ˙ ω ) | = � · � � � k ω − d l | a 0 � l =1 | e j ˙ 126
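The distance interpretation above can be checked directly: compute |H(e^{jω̇})| from the distances of e^{jω̇} to the zeros and poles and compare it with freqz. A minimal sketch (MATLAB/Octave) using the single-pole filter of the next slide, H(z) = 0.8·z/(z − 0.2), which has one zero at 0 and one pole at 0.2:

zz = 0;  pp = 0.2;  g = 0.8;                 % zero, pole, gain b0/a0
w  = linspace(0, pi, 128);
e  = exp(1j*w);
Hgeom = g * abs(e - zz) ./ abs(e - pp);      % |H| from zero/pole distances
Hfreq = freqz(0.8, [1 -0.2], w);
max(abs(Hgeom - abs(Hfreq(:).')))            % ~0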

  57. Example: a single-pole filter Consider this IIR filter: Its z -transform 0 . 8 0 . 8 z x n 0 . 8 y n H ( z ) = 1 − 0 . 2 · z − 1 = z − 0 . 2 has one pole at z = d 1 = 0 . 2 and one z − 1 0 . 2 zero at z = 0. y n − 1 Amplitude | H ( z ) | : a 0 = 1, a 1 = − 0 . 2, b 0 = 0 . 8 2 1.75 x n = δ n ⇒ y n = 1.5 1.25 Impulse Response 0.8 1 0.75 0.6 Amplitude 0.5 0.4 0.25 0 0.2 1 0.5 1 0 0.5 0 0 2 4 0 −0.5 −0.5 n (samples) −1 −1 imaginary real 127 0 . 8 0 . 8 z H ( z ) = 1 − 0 . 2 · z − 1 = z − 0 . 2 (cont’d) Magnitude Response 1 0.95 2 1.75 0.9 1.5 Magnitude 1.25 0.85 1 0.75 0.8 0.5 0.25 0.75 0 1 0.5 1 0.7 0.5 0 0 −0.5 −0.5 −1 −1 imaginary real 0 0.2 0.4 0.6 0.8 Normalized Frequency ( ×π rad/sample) Run this LTI filter at sampling frequency f s and test it with sinusoidial input (frequency f , amplitude 1): x n = cos(2 π fn/f s ) Output: y n = A ( f ) · cos(2 π fn/f s + θ ( f )) What are the gain A ( f ) and phase delay θ ( f ) at frequency f ? Answer: A ( f ) = | H (e j2 π f/f s ) | θ ( f ) = ∠ H (e j2 π f/f s ) = tan − 1 ℑ{ H (e j π f/f s ) } ℜ{ H (e j π f/f s ) } Example: f s = 8 kHz, f = 2 kHz (normalized frequency f/ f s 2 = 0 . 5) ⇒ Gain A (2 kHz) = � 0 . 82+0 . 162 � 0 . 8 j � = 0 . 8 j( − j − 0 . 2) � = � 0 . 8 − 0 . 16 j � = | H (e j π / 2 ) | = | H (j) | = � � � � � � = 0 . 784 . . . � j − 0 . 2 ( j − 0 . 2)( − j − 0 . 2) 1+0 . 04 1 . 042 128

  58. Visual verification in MATLAB: x n = 0:15; 1.5 y (time domain) fs = 8000; y (z−transform) f = 1500; x = cos(2*pi*f*n/fs); b = [0.8]; a = [1 -0.2]; 1 y1 = filter(b,a,x); z = exp(j*2*pi*f/fs); H = 0.8*z/(z-0.2); 0.5 A = abs(H); theta = atan(imag(H)/real(H)); y2 = A*cos(2*pi*f*n/fs+theta); plot(n, x, 'bx-', ... 0 n, y1, 'go-', ... n, y2, 'r+-') legend('x', ... −0.5 'y (time domain)', ... 'y (z-transform)') ylim([-1.1 1.8]) −1 0 5 10 15 129 z 1 H ( z ) = z − 0 . 7 = How do poles affect time domain? 1 − 0 . 7 · z − 1 z Plane Impulse Response 1 1 Imaginary Part Amplitude 0 0.5 −1 0 −1 0 1 0 10 20 30 Real Part n (samples) z 1 H ( z ) = z − 0 . 9 = 1 − 0 . 9 · z − 1 z Plane Impulse Response 1 1 Imaginary Part Amplitude 0 0.5 −1 0 −1 0 1 0 10 20 30 Real Part n (samples) 130

59.  H(z) = z/(z − 1) = 1/(1 − z^{−1})
[Plots: pole at z = 1; the impulse response is the unit step.]

    H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^{−1})
[Plots: pole at z = 1.1, outside the unit circle; the impulse response grows exponentially.]

131

    H(z) = z² / ((z − 0.9·e^{jπ/6})·(z − 0.9·e^{−jπ/6})) = 1/(1 − 1.8·cos(π/6)·z^{−1} + 0.9²·z^{−2})
[Plots: conjugate pole pair at radius 0.9; the impulse response is a decaying oscillation.]

    H(z) = z² / ((z − e^{jπ/6})·(z − e^{−jπ/6})) = 1/(1 − 2·cos(π/6)·z^{−1} + z^{−2})
[Plots: conjugate pole pair on the unit circle; the impulse response is a sustained oscillation.]

132
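The first pole-pair example above can be reproduced numerically: a conjugate pole pair at radius r = 0.9 and angles ±π/6 yields a decaying oscillation with a period of 2π/(π/6) = 12 samples. A minimal sketch (MATLAB/Octave):

r = 0.9;  w0 = pi/6;
a = [1 -2*r*cos(w0) r^2];        % (z - r*e^{jw0})(z - r*e^{-jw0}) in z^{-1} form
h = impz(1, a, 31);              % first 31 samples of the impulse response
stem(0:30, h)                    % decaying oscillation, 12 samples per period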

  60. z 2 1 1 H ( z ) = ( z − 0 . 9 · e j π/ 2 ) · ( z − 0 . 9 · e − j π/ 2 ) = 1 − 1 . 8 cos( π/ 2) z − 1 +0 . 9 2 · z − 2 = 1+0 . 9 2 · z − 2 z Plane Impulse Response 1 1 Imaginary Part Amplitude 2 0 0 −1 −1 −1 0 1 0 10 20 30 Real Part n (samples) z 1 H ( z ) = z +1 = 1+ z − 1 z Plane Impulse Response 1 1 Imaginary Part Amplitude 0 0 −1 −1 −1 0 1 0 10 20 30 Real Part n (samples) 133 IIR filter design goals The design of a filter starts with specifying the desired parameters: ◮ The passband is the frequency range where we want to approximate a gain of one. ◮ The stopband is the frequency range where we want to approximate a gain of zero. ◮ The order of a filter is the number of poles it uses in the z -domain, and equivalently the number of delay elements necessary to implement it. ◮ Both passband and stopband will in practice not have gains of exactly one and zero, respectively, but may show several deviations from these ideal values, and these ripples may have a specified maximum quotient between the highest and lowest gain. ◮ There will in practice not be an abrupt change of gain between passband and stopband, but a transition band where the frequency response will gradually change from its passband to its stopband value. 134

61. IIR filter design techniques

The designer can then trade off conflicting goals such as: small transition band, low order, low ripple amplitude, or absence of ripples.

Design techniques for making these tradeoffs for analog filters (involving capacitors, resistors, coils) can also be used to design digital IIR filters:

Butterworth filters: Have no ripples; the gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N·20 dB/decade).

Chebyshev type I filters: Distribute the gain error uniformly throughout the passband (equiripples) and drop off monotonically outside.

Chebyshev type II filters: Distribute the gain error uniformly throughout the stopband (equiripples) and drop off monotonically in the passband.

Elliptic filters (Cauer filters): Distribute the gain error as equiripples both in the passband and stopband. This type of filter is optimal in terms of the combination of the passband-gain tolerance, stopband-gain tolerance, and transition-band width that can be achieved at a given filter order.

135

IIR filter design in MATLAB

The aforementioned filter-design techniques are implemented in the MATLAB Signal Processing Toolbox in the functions butter, cheby1, cheby2, and ellip. They output the coefficients a_n and b_n of the difference equation that describes the filter.

These can be applied with filter to a sequence, or can be visualized with zplane as poles/zeros in the z-domain, with impz as an impulse response, and with freqz as an amplitude and phase spectrum.

The commands sptool and fdatool provide interactive GUIs to design digital filters.

[Screenshot: MATLAB fdatool]

136
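A minimal design-session sketch (MATLAB/Octave, assuming the Signal Processing Toolbox), using the parameters of the order-5 Butterworth example shown a few slides below:

[b, a] = butter(5, 0.25);          % order 5, cutoff 0.25 x fs/2 (-3 dB point)
figure(1); zplane(b, a);           % poles and zeros in the z-plane
figure(2); impz(b, a, 25);         % impulse response
figure(3); freqz(b, a);            % magnitude and phase response
y = filter(b, a, randn(1, 1000));  % apply the filter to a test sequence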

62. Cascade of filter sections

Higher-order IIR filters can be numerically unstable (quantization noise).

A commonly used trick is to split a higher-order IIR filter design into a cascade of l second-order (biquad) filter sections of the form:

    H(z) = (b_0 + b_1·z^{−1} + b_2·z^{−2}) / (1 + a_1·z^{−1} + a_2·z^{−2})

[Block diagram: direct form II biquad with coefficients b_0, b_1, b_2, −a_1, −a_2 and two z^{−1} delay elements.]

Filter sections H_1, H_2, ..., H_l are then applied sequentially to the input sequence, resulting in a filter

    H(z) = Π_{k=1}^{l} H_k(z) = Π_{k=1}^{l} (b_{k,0} + b_{k,1}·z^{−1} + b_{k,2}·z^{−2}) / (1 + a_{k,1}·z^{−1} + a_{k,2}·z^{−2})

Each section implements one pair of poles and one pair of zeros. Jackson's algorithm for pairing poles and zeros into sections: pick the pole pair closest to the unit circle, and place it into a section along with the nearest pair of zeros; repeat until no poles are left.

137

Butterworth filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 1, cutoff frequency (−3 dB): 0.25 × f_s/2

138
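MATLAB's zp2sos and sosfilt implement such a cascade of biquad sections, including a pole/zero pairing step. A minimal sketch (MATLAB/Octave, assuming the Signal Processing Toolbox); the order and cutoff are arbitrary:

[z, p, k] = butter(6, 0.3);       % zeros, poles and gain of a 6th-order design
sos = zp2sos(z, p, k);            % 3x6 matrix: one row [b0 b1 b2 a0 a1 a2] per biquad
x  = randn(1, 1000);
y1 = sosfilt(sos, x);             % cascade of second-order sections
[b, a] = butter(6, 0.3);
y2 = filter(b, a, x);             % single 6th-order recursion
max(abs(y1 - y2))                 % the two differ only by rounding noise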

63. Butterworth filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 5, cutoff frequency (−3 dB): 0.25 × f_s/2

139

Chebyshev type I filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 5, cutoff frequency: 0.5 × f_s/2, pass-band ripple: −3 dB

140

64. Chebyshev type II filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 5, cutoff frequency: 0.5 × f_s/2, stop-band ripple: −20 dB

141

Elliptic filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 5, cutoff frequency: 0.5 × f_s/2, pass-band ripple: −3 dB, stop-band ripple: −20 dB

142

65. Notch filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 2, cutoff frequency: 0.25 × f_s/2, −3 dB bandwidth: 0.05 × f_s/2

143

Peak filter design example

[Plots: poles/zeros in the z-plane, impulse response, magnitude response (dB) and phase response over normalized frequency.]

order: 2, cutoff frequency: 0.25 × f_s/2, −3 dB bandwidth: 0.05 × f_s/2

144

  66. Random variables, vectors, and processes Let S be the set of all possible outcomes (results) of some experiment. We call S the sample space of that experiment. A random variable X is a function X : S → E that assigns to each outcome ζ ∈ S a value X ( ζ ) ∈ E , where usually E ⊆ R or E ⊆ C . X ( ζ ) = ( x 1 ( ζ ) , x 2 ( ζ ) , . . . , x n ( ζ )) T is a vector of n A random vector � random variables, or equivalently a random variable that outputs vectors, e.g. � X ( ζ ) ∈ R n . A continuous-time random process X : S → E R is a function that maps each experimental outcome ζ ∈ S onto a continuous-time function x ζ ( t ), and a discrete-time random process X : S → E Z maps each outcome ζ onto a discrete sequence { x ζ,n } n . The ensemble of a random process is the set of all functions (or sequences) from which it picks its output. In the following, we will usually omit outcome parameter ζ from random variables, etc., for notational convenience, and use boldface roman to distinguish random variables from samples. 145 Random sequences A discrete-time random process or random sequence { x n } can also be thought of as a discrete sequence of random variables . . . , x − 2 , x − 1 , x 0 , x 1 , x 2 , . . . Each time we repeat an experiment, we observe one realization or sample sequence . . . , x − 2 , x − 1 , x 0 , x 1 , x 2 , . . . of that random process. (We cannot observe the outcome ζ directly.) Each individual random variable x n in a random sequence is characterized by its probability distribution function P x n ( a ) = Prob( x n ≤ a ) and the entire random process is characterized completely by all joint probability distribution functions P x n 1 ,..., x nk ( a 1 , . . . , a k ) = Prob( x n 1 ≤ a 1 ∧ . . . ∧ x n k ≤ a k ) for all possible sets { x n 1 , . . . , x n k } and all k > 0. 146

  67. Two random variables x n and x m are called independent if P x n , x m ( a, b ) = P x n ( a ) · P x m ( b ) The derivative p x n ( a ) = P ′ x n ( a ) is called the probability density function . This helps us to define quantities such as the � ◮ expected value E( x n ) = ap x n ( a ) d a � ◮ mean-square value (average power) E( | x n | 2 ) = | a | 2 p x n ( a ) d a ◮ variance Var( x n ) = E[ | x n − E( x n ) | 2 ] = E( | x n | 2 ) − | E( x n ) | 2 ◮ correlation Cor( x n , x m ) = E( x n · x ∗ m ) ◮ covariance Cov( x n , x m ) = E[( x n − E( x n )) · ( x m − E( x m )) ∗ ] = E( x n x ∗ m ) − E( x n )E( x m ) ∗ The expected value E( · ) is a linear operator: E( a x ) = a E( x ) and E( x + y ) = E( x ) + E( y ). Variance is not linear, but Var( a x ) = a 2 Var( x ) and, if x and y are independent, Var( x + y ) = Var( x ) + Var( y ). 147 Stationary processes A random sequence is called strict-sense stationary if P x n 1+ l ,..., x nk + l ( a 1 , . . . , a k ) = P x n 1 ,..., x nk ( a 1 , . . . , a k ) for any shift l and any number k , that is if all joint probability distributions are time invariant. If the above condition holds at least for k = 1, then the mean E( x n ) = m x , and variance E( | x n − m x | 2 ) = σ 2 x are constant over all n . ( σ x is also called standard deviation ). If the above condition holds in addition also for k = 2, we call the random sequence wide-sense stationary (WSS) . If a sequence is strict-sense stationary, it is always also wide-sense stationary, but not vice versa. 148

  68. A wide-sense stationary random process { x n } can not only be characterized by its mean m x = E( x n ) and variance σ 2 x = E( | x n − m x | 2 ) over all sample positions n . It can, in addition, also be characterized by its autocorrelation sequence φ xx ( k ) = E( x n + k · x ∗ n ) The autocorrelation sequence of a zero-mean version of a sequence is called the autocovariance sequence γ xx ( k ) = E[( x n + k − m x ) · ( x n − m x ) ∗ ] = φ xx ( k ) − | m x | 2 where γ xx (0) = σ 2 x . A pair of stationary random processes { x n } and { y n } can, in addition, be jointly wide-sense stationary and therefore be characterized by their crosscorrelation sequence φ xy ( k ) = E( x n + k · y ∗ n ) Their crosscovariance sequence is then γ xy ( k ) = E[( x n + k − m x ) · ( y n − m y ) ∗ ] = φ xy ( k ) − m x m ∗ y The complex conjugates ∗ are only needed with complex-valued sequences. 149 Ergodic processes If . . . , x − 2 , x − 1 , x 0 , x 1 , x 2 , . . . is a WSS random sequence, then we can estimate the mean value and auto-correlation sequence from these random variables from any location n as m x = E( x n ) φ xx ( k ) = E( x n + k x ∗ n ) What if we have just one sample sequence . . . , x − 2 , x − 1 , x 0 , x 1 , x 2 , . . . ? If we still can estimate mean and auto-correlation from that as L N 1 ≈ 1 � � m x = lim x n x n for large N 2 L + 1 N L →∞ n =1 n = − L L N 1 n ≈ 1 � � x n + k x ∗ x n + k x ∗ φ xx ( k ) = lim n 2 L + 1 N L →∞ n =1 n = − L then we call the process mean ergodic and correlation ergodic , resp. Ergodicity means that single-sample-sequence time averages are identical to averages over the entire ensemble for a random process, or, in other words, variation along the time axis looks similar to variation across the ensemble. 150
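For a correlation-ergodic process, a single long sample sequence suffices to estimate m_x and φ_xx(k) by time averaging. A minimal sketch (MATLAB/Octave): white Gaussian noise filtered by the FIR filter [1 0.5] has, analytically, m_x = 0, φ_xx(0) = 1.25, φ_xx(±1) = 0.5 and φ_xx(k) = 0 otherwise, which the time averages reproduce:

N = 1e5;
x = filter([1 0.5], 1, randn(1, N));   % one realization of the random process
m_est = mean(x)                        % ~0
phi   = xcorr(x, 3, 'unbiased');       % estimates phi_xx(k) for k = -3..3
phi(4:6)                               % ~[1.25 0.5 0]  (lags 0, 1, 2)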

  69. Deterministic crosscorrelation sequence For deterministic finite-energy sequences { x n } and { y n } , we can define their crosscorrelation sequence as ∞ ∞ � � x i + k · y ∗ x i · y ∗ c xy ( k ) = i = i − k . i = −∞ i = −∞ If { x n } is similar to { y n } , but lags l elements behind ( x n ≈ y n − l ), then c xy ( l ) will be a peak in the crosscorrelation sequence. It can therefore be used to locate shifted versions of a known sequence in another one. MATLAB’s xcorr function calculates the crosscorrelation sequence for two finite sequences (vectors) of equal length (or zero-pads the shorter one). Option unbiased divides for each lag through the length of the overlap, to estimate E( x n + k y ∗ n ) from a pair of sample sequences { x n } and { y n } from jointly correlation-ergodic WSS processes { x n } and { y n } . This crosscorrelation sequence is essentially just convolution, with the second input sequence mirrored: { c xy ( n ) } = { x n } ∗ { y ∗ − n } It can therefore be calculated equally easily via the DTFT: C xy (e j ˙ ω ) = X (e j ˙ ω ) · Y ∗ (e j ˙ ω ) Swapping the input sequences mirrors the output sequence: c xy ( k ) = c ∗ yx ( − k ). 151 Demonstration of covert spread-spectrum communication: n = randn(1,10000); pattern=2*round(rand(1,1000))-1; p1 = [zeros(1,2000), pattern, zeros(1,7000)]; p2 = [zeros(1,4000), pattern, zeros(1,5000)]; r = n + p1/3 - p2/3; figure(1) plot([n;p1/3-3;p2/3-4;r-6]'); figure(2) plot(conv(r,fliplr(pattern))); % or: plot(xcorr(r,pattern)); 152

  70. Deterministic autocorrelation sequence Equivalently, we define the deterministic autocorrelation sequence in the time domain as ∞ � x i + k x ∗ c xx ( k ) = i i = −∞ This is just the sequence convolved with a time-reversed version of itself: { c xx ( k ) } = { x i } ∗ { x ∗ − i } This corresponds in the frequency domain to C xx (e j ˙ ω ) = X (e j ˙ ω ) · X ∗ (e j ˙ ω ) = | X (e j ˙ ω ) | 2 . In other words, the DTFT C xx (e j ˙ ω ) of the autocorrelation sequence { c xx ( n ) } of a sequence { x n } is identical to the squared amplitudes of the DTFT, or power spectrum , of { x n } . This suggests, that the DTFT of the autocorrelation sequence of a random process might be a suitable way for defining the power spectrum of that random process. What can we say about the phase in the Fourier spectrum of a time-invariant random process? 153 Power spectrum of a random sequence For a zero-mean wide-sense stationary random sequence { x n } with absolutely summable autocorrelation sequence φ xx ( k ) = E( x n + k · x ∗ n ) we call the DTFT ∞ � Φ xx (e j ˙ ω ) = φ xx ( n ) · e − j ˙ ωn n = −∞ of its autocorrelation sequence the power density spectrum (PDS) or power spectrum of { x n } . The power spectrum is real, even ∗ , non-negative and periodic. ∗ for real-valued sequences 154

  71. The autocorrelation of a sequence { x n } with power spectrum Φ xx (e j ˙ ω ) is � π φ xx ( k ) = 1 Φ xx (e j ˙ ω )e j k ˙ ω d ˙ ω 2 π − π Since the variance of { x n } is � π Var( x n ) = φ xx (0) = 1 Φ xx (e j ˙ ω )d ˙ ω 2 π − π we can interpret � 2 π f h 1 f s Φ xx (e j ˙ ω )d ˙ ω π f l 2 π f s as the variance of the output of an ideal band-pass filter applied to { x n } with cut-off frequencies 0 ≤ f l < f h . 155 Filtered random sequences Let { x n } be a random sequence from a WSS random process. The output ∞ ∞ � � y n = h k · x n − k = h n − k · x k k = −∞ k = −∞ of an LTI applied to it will then be another random sequence, characterized by � � ∞ ∞ ∞ � � � m y = E( y n ) = E h k · x n − k = h k · E( x n − k ) = m x h k k = −∞ k = −∞ k = −∞ and E( x n + k · x ∗ φ xx ( k ) = n ) ∞ � φ yy ( k ) = φ xx ( k − i ) c hh ( i ) , where � ∞ i = −∞ h i + k h ∗ c hh ( k ) = i . i = −∞ In other words: { φ yy ( n ) } = { c hh ( n ) } ∗ { φ xx ( n ) } { y n } = { h n } ∗ { x n } ⇒ ω ) | 2 · Φ xx (e j ˙ Φ yy (e j ˙ ω ) | H (e j ˙ ω ) = Similarly: { φ yx ( n ) } = { h n } ∗ { φ xx ( n ) } { y n } = { h n } ∗ { x n } ⇒ Φ yx (e j ˙ ω ) H (e j ˙ ω ) · Φ xx (e j ˙ ω ) = 156
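The relation Φ_yy(e^{jω̇}) = |H(e^{jω̇})|²·Φ_xx(e^{jω̇}) can be illustrated by filtering unit-variance white noise (Φ_xx = 1) and averaging periodograms of the output. A minimal sketch (MATLAB/Octave); the filter and block length are arbitrary:

b = [1 0.9];  a = 1;                   % arbitrary example filter
n = 64;  m = 1000;                     % block length, number of blocks
x = randn(1, n*m);                     % white noise, sigma^2 = 1
y = filter(b, a, x);
yy = reshape(y, n, m);
P  = mean(abs(fft(yy)).^2 / n, 2);     % averaged periodogram, one value per DFT bin
H  = freqz(b, a, (0:n-1)/n*2*pi);      % |H|^2 at the same bin frequencies
plot(0:n-1, P, 0:n-1, abs(H).^2, 'o')  % the two curves should roughly agree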

  72. Summary: ∗{ h ∗ − n } ∗{ h n } { y n } = { h n } ∗ { x n } ⇒ { φ xx ( n ) } − → { φ yx ( n ) } − → { φ yy ( n ) } Proofs: � � ∞ � φ yx ( l ) = E( x ∗ x ∗ n · y n + l ) = E n · h k · x n + l − k = k = −∞ ∞ ∞ � � WSS h k · E( x ∗ = n · x n + l − k ) = h k · φ xx ( l − k ) k = −∞ k = −∞ � � ∞ ∞ � � φ yy ( l ) = E( y ∗ h ∗ k · x ∗ n · y n + l ) = E n − k · h m · x n + l − m = k = −∞ m = −∞ ∞ ∞ � � WSS h ∗ h m · E( x ∗ = k · n − k · x n + l − m ) = k = −∞ m = −∞ ∞ ∞ � � i := m − k h ∗ = k · h m · φ xx ( l + k − m ) = k = −∞ m = −∞ ∞ ∞ ∞ ∞ � � � � h ∗ h ∗ = k · h k + i · φ xx ( l − i ) = φ xx ( l − i ) k · h k + i k = −∞ i = −∞ i = −∞ k = −∞ � �� � 157 c hh ( i ) White noise A random sequence { x n } is a white noise signal, if m x = 0 and φ xx ( k ) = σ 2 x δ k . The power spectrum of a white noise signal is flat: Φ xx (e j ˙ ω ) = σ 2 x . A commonly used form of white noise is white Gaussian noise (WGN) , where each random variable x n is independent and identically distributed (i.i.d.) according to the normal-distribution probability density function − ( x − mx )2 1 2 σ 2 p x n ( a ) = e x � 2 π σ 2 x Application example: Where an LTI { y n } = { h n } ∗ { x n } can be observed to operate on white noise { x n } with φ xx ( k ) = σ 2 x δ k , the crosscorrelation between input and output will reveal the impulse response of the system: φ yx ( k ) = σ 2 x · h k where φ yx ( k ) = φ ∗ xy ( − k ) = E( y n + k · x ∗ n ). 158

  73. Demonstration of covert spread-spectrum radar: x = randn(1,10000) h = [0 0 0.4 0 0 0.3 0 0 0.2 0 0]; y = conv(x, h); figure(1) plot(1:length(x), x, 1:length(y), y-5); figure(2) c = conv(fliplr(x),y); stem(c(length(c)/2-20:length(c)/2+20)); 4000 2000 0 −2000 0 10 20 30 40 50 159 Spectral estimation: periodogram Estimate amplitude spectrum of the noisy discrete sequence x k = sin(2 π j k × 8 / 64) + sin(2 π j k × 14 . 32 / 64) + n i with φ nn ( i ) = 4 δ i n = 64; % block length m = 1000; % blocks averaged k = 1:(n*m); x = 2*randn(1, n*m) + ... sin(2*pi*k*8/n) + ... sin(2*pi*k*14.32/n); s1 = abs(fft(x(1:n))/n); s1 s2 s2 = abs(fft(x(1:8*n))/(8*n)); s1 Absolute values of a single 64-element DFT of { x n } 64 n =1 (rect. window). The flat spectrum of white noise is only an expected value. In a single discrete Fourier transform of such a sequence, the significant variance of the noise spectrum becomes visible. It almost drowns the two peaks from the sine waves. s2 Absolute values of a single 512-element DFT of { x n } 512 n =1 (rect. window). With an 8 × larger window, the bandwidth of each frequency bin is now reduced 8 × , so the sine functions stand out better from the noise. However, the variance in each frequency bin relative to the expected value remains the same. 160

  74. Spectral estimation: averaging Estimate amplitude spectrum of the noisy discrete sequence x k = sin(2 π j k × 8 / 64) + sin(2 π j k × 14 . 32 / 64) + n i with φ nn ( i ) = 4 δ i n = 64; % block length m = 1000; % blocks averaged k = 1:(n*m); x = 2*randn(1, n*m) + ... sin(2*pi*k*8/n) + ... sin(2*pi*k*14.32/n); xx = reshape(x, n, m)'; s3 = mean(abs(fft(xx, n, 2)/n),1); s3 s4 s4 = abs(mean(fft(xx, n, 2)/n,1)); s3 { x n } 64000 n =1 cut into 1000 consecutive 64-sample windows, showing the average of the absolute values of the DFT of each window. Non-coherent averaging : discard phase information first. This better approximates the shape of the power spectrum: with a flat noise floor. s4 Same 1000 windows, but this time the complex values of the DFTs averaged before the absolute value was taken ⇒ coherent averaging . Because DFT is linear, this is identical to first averaging all 1000 windows and then applying a single DFT and taking its absolute value. The windows start 64 samples apart. Only periodic waveforms with a period length that divides 64 are not averaged away. This periodic averaging step suppresses both the noise and the second sine wave. 161 Welch’s method for estimating PSD “Periodogram”: Single-rectanglar-window DTFT power spectrum of a ω ) | 2 with X ( ˙ ω ) = � N − 1 n =0 x n · e − 2 π j n ˙ ω . random sequence { x n } : | X ( ˙ ω ) | 2 ] Var[ | X ( ˙ Problem: ω ) | 2 ] does not drop with increasing window length N . E[ | X ( ˙ “Welch’s method” for estimating the PSD makes three improvements: ◮ Reduce leakage using a non-rectangular window sequence { w i } (“modified periodogram”) ◮ To reduce the variance, average K periodograms of length N . ◮ Triangular, Hamming, Hanning, etc. windows can be used with 50% overlap ( L = N/ 2), such that all samples contribute with equal weight. 0 ≤ k < K x k,n = x k · L + n · w n , 0 ≤ n < N N − 1 � x k,n · e − 2 π j n ˙ ω X k ( ˙ ω ) = n =0 K − 1 ω ) = 1 � ω ) | 2 P ( ˙ | X k ( ˙ K k =0 162
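MATLAB's pwelch implements Welch's method. A minimal sketch applying it to the same noisy two-sine test signal, with a length-64 Hann window and 50% overlap (as described above):

n = 64;  m = 1000;  k = 1:(n*m);
x = 2*randn(1, n*m) + sin(2*pi*k*8/n) + sin(2*pi*k*14.32/n);
[P, w] = pwelch(x, hanning(n), n/2, n);   % Hann window, 50% overlap, 64-point DFT
plot(w/pi, 10*log10(P))                   % two peaks above an almost flat noise floor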

  75. Periodic averaging If a signal x ( t ) has a periodic component with period length t p , then we can isolate this periodic component from discrete sequence x n = x ( n/f s ) by periodic averaging L N 1 x n + pi ≈ 1 � � x n = lim ¯ x n + pi , n ∈ { 0 , . . . , p − 1 } 2 L + 1 N L →∞ i =1 i = − L but only if the period length in samples p = t p · f s is an integer. Otherwise { x n } may need to be interpolated and resampled at an integer multiple of t − 1 first. p Periodic averaging of x ( t ) corresponds in the time domain to convolution with a windowed Dirac comb a ( t ) = w ( t ) · � i δ ( t − t p i ): � x ( t ) = ¯ x ( t − s ) · a ( s )d s s In the frequency domain, this means multiplication with an t − 1 spaced p Dirac comb that has been convolved with W ( f ). 163 Parametric models of the power spectrum If we understand the physical process that generates a random sequence, we may be able to model and estimate its power spectrum more accurately, with fewer parameters. If { x n } can be modeled as white noise filtered by an LTI system H (e j ˙ ω ), then Φ xx (e j ˙ ω ) = σ 2 w | H (e j ˙ ω ) | 2 . Often such an LTI can be modeled as an IIR filter with ω ) = b 0 + b 1 z − 1 + b 2 z − 2 + · · · + b m z − m H (e j ˙ a 0 + a 1 z − 1 + a 2 z − 2 + · · · + a k z − k . The auto-regressive moving-average model ARMA( k, m ) is m k � � x n = b l · w n − l − a l · x n − l l =0 l =1 where { w n } is stationary white noise with variance σ 2 w . There is also the simpler AR( k ) model x n = w n − � k l =1 a l · x n − l . 164

  76. IQ sampling / downconversion / complex baseband signal Consider signal x ( t ) ∈ R in which only frequencies f l < | f | < f h are of interest. This band has a centre frequency of f c = ( f l + f h ) / 2 and a bandwidth B = f h − f l . It can be sampled efficiently (at the lowest possible sampling frequency) by downconversion : ◮ Shift its spectrum by − f c : y ( t ) = x ( t ) · e − 2 π j f c t ◮ Low-pass filter it with a cut-off frequency of B/ 2: � ∞ z ( t ) = B y ( τ ) · sinc(( t − τ ) B ) · d τ • − ◦ Z ( f ) = Y ( f ) · rect( f/B ) −∞ ◮ Sample the result at sampling frequency f s ≥ B : z n = z ( n/f s ) 165 X ( f ) δ ( f + f c ) ∗ − f c 0 f c f − f c 0 f c f ˆ Y ( f ) Z ( f ) Z ( f ) anti-aliasing filter sample − → − 2 f c − f c 0 f c f − 2 f c − f c 0 B f c f − B B 2 2 Shifting the center frequency f c of the interval of interest to 0 Hz (DC) makes the spectrum asymmetric. This leads to a complex-valued time-domain representation ( ∃ f : Z ( f ) � = [ Z ( − f )] ∗ = ⇒ ∃ t : z ( t ) ∈ C \ R ). 166

  77. IQ upconversion / interpolation Given a discrete sequence of downconverted samples z n ∈ C recorded with sampling frequency f s at centre frequency f c (as on slide 165), how can we reconstruct a continuous waveform ˜ x ( t ) ∈ R that matches the original signal x ( t ) within the frequency interval f l to f h ? Reconstruction steps: ◮ Interpolation of complex baseband signal (remove aliases): ∞ � z ( t ) = ˜ z n · sinc( t · f s − n ) n = −∞ ◮ Upconvert by modulating a complex phasor at carrier frequency f c . Then discard the imaginary part (to reconstruct the negative frequency components of the original real-valued signal): � z ( t ) · e 2 π j f c t � x ( t ) = 2 ℜ ˜ ˜ �� �� �� � � � � = 2 ℜ ℜ z ( t ) ˜ + j ℑ z ( t ) ˜ · cos 2 π f c t + j sin 2 π f c t � � � � = 2 ℜ z ( t ) ˜ · cos 2 π f c t − 2 ℑ z ( t ) ˜ · sin 2 π f c t Recall that 2 ℜ ( c ) = c + c ∗ for all c ∈ C. 167 Example: IQ downconversion of a sine wave What happens if we downconvert the input signal x ( t ) = A · cos(2 π ft + φ ) = A 2 · e 2 π j ft +j φ + A 2 · e − 2 π j ft − j φ using centre frequency f c and bandwidth B < 2 f c with | f − f c | < B/ 2? After frequency shift: y ( t ) = x ( t ) · e − 2 π j f c t = A 2 · e 2 π j( f − f c ) t +j φ + A 2 · e − 2 π j( f + f c ) t − j φ After low-pass filter with cut-off frequency B/ 2 < f c < f + f c : z ( t ) = A 2 · e 2 π j( f − f c ) t +j φ After sampling: z n = A 2 · e 2 π j( f − f c ) n/f s +j φ 168
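The sine-wave example can be reproduced numerically. A minimal sketch (MATLAB/Octave) with illustrative values fs = 48 kHz, f = 2 kHz, fc = 1.8 kHz, and an arbitrary FIR low-pass standing in for the ideal anti-alias filter; apart from the filter's delay and start-up transient, z(t) is a phasor rotating at f − fc = 200 Hz with amplitude A/2:

fs = 48000;  f = 2000;  fc = 1800;  A = 1;  phi = 0.5;
t = (0:fs-1)/fs;                          % one second of signal
x = A*cos(2*pi*f*t + phi);                % real input signal
y = x .* exp(-2j*pi*fc*t);                % shift the spectrum down by fc
h = fir1(200, 2*500/fs);                  % low-pass FIR, cutoff 500 Hz
z = filter(h, 1, y);                      % complex baseband signal
plot(t(1:1000), real(z(1:1000)), t(1:1000), imag(z(1:1000)))   % I and Q at 200 Hz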

  78. Software-defined radio (SDR) front end IQ downconversion in SDR receiver: ⊗ sample Q − 90 ◦ x ( t ) y ( t ) z ( t ) z n ⊗ sample I The real part ℜ ( z ( t )) is also known as “in-phase” signal (I) and the imaginary part ℑ ( z ( t )) as “quadrature” signal (Q). cos(2 π f c t ) IQ upconversion in SDR transmitter: ⊗ δ Q +90 ◦ x ( t ) ˜ z ( t ) ˜ z ( t ) ˆ z n ⊗ δ I In SDR, x ( t ) is the antenna voltage and z n appears on the digital interface with the microprocessor. cos(2 π f c t ) 169 SDR front-end hardware examples Low-cost USB-dongle receivers: ≈ £ 20 SDR front ends are also Realtek RTL2832U/R820T (RTL-SDR) commonly used today in USB2, f s < 2 . 5 MHz, f c = 24–1776 MHz, 8-bit IQ samples military radios, spectrum http://osmocom.org/projects/sdr/wiki/rtl-sdr surveillance, amateur-radio stations, mobile-phone base stations, MRI machines, radars, etc. Mid range transceivers: £ 250– £ 2k High-end measurement kit: £ 3k– £ 40k HackRF One, Ettus USRP B200/N200, etc. National Instruments, Rohde&Schwarz, etc. USB3 or 1-Gbit Ethernet, f s = 10–50 MHz, 10 Gbit/PCIe, FPGA, B, f s = 60–1000 MHz, f c = 0–6 GHz, 16-bit IQ samples f c = 0–14 GHz, float32 IQ samples in volts 170

  79. Visualization of IQ representation of sine waves ⊗ Q Q − 90 ◦ x ( t ) y ( t ) z ( t ) ⊗ I cos(2 π f c t ) I Recall these products of sine and cosine functions: ◮ cos( x ) · cos( y ) = 1 2 cos( x − y ) + 1 2 cos( x + y ) ◮ sin( x ) · sin( y ) = 1 2 cos( x − y ) − 1 2 cos( x + y ) ◮ sin( x ) · cos( y ) = 1 2 sin( x − y ) + 1 2 sin( x + y ) Consider: (with x = 2 π f c t ) ◮ sin( x ) = cos( x − 1 2 π ) ◮ cos( x ) · cos( x ) = 1 2 + 1 2 cos 2 x ◮ sin( x ) · sin( x ) = 1 2 − 1 2 cos 2 x ◮ sin( x ) · cos( x ) = 0 + 1 2 sin 2 x ◮ cos( x ) · cos( x − ϕ ) = 1 2 cos( ϕ ) + 1 2 cos(2 x − ϕ ) ◮ sin( x ) · cos( x − ϕ ) = 1 2 sin( ϕ ) + 1 2 sin(2 x − ϕ ) 171 IQ representation of amplitude-modulated signals Assume voice signal s ( t ) contains only frequencies below B/ 2. Antenna signal amplitude-modulated with carrier frequency f c : x ( t ) = s ( t ) · A · cos(2 π f c t + ϕ ) After IQ downconversion with centre frequency f ′ c ≈ f c : z ( t ) = A 2 · s ( t ) · e 2 π j( f c − f ′ c ) t +j ϕ ℑ [ z ( t )] With perfect receiver tuning f ′ c = f c : z ( t ) = A 2 · s ( t ) · e j ϕ ℜ [ z ( t )] Reception techniques: ◮ Non-coherent demodulation (requires s ( t ) ≥ 0): s ( t ) = 2 A | z ( t ) | ◮ Coherent demodulation (requires knowing ϕ and f ′ c = f c ): s ( t ) = 2 A ℜ [ z ( t ) · e − j ϕ ] 172

  80. IQ representation of frequency-modulated signals In frequency modulation, the voice signal s ( t ) changes the carrier frequency f c : f c ( t ) = f c + k · s ( t ) Compared to a constant-frequency carrier signal cos(2 π f c t + ϕ ), to allow variable frequency, we � need to replace the phase-accumulating term 2 π f c t with an integral 2 π f c ( t )d t . Frequency-modulated antenna signal: � t � � x ( t ) = A · cos 2 π · [ f c + k · s ( τ )]d τ + ϕ 0 � t � � = A · cos 2 π f c t + 2 π k · s ( τ )d τ + ϕ 0 After IQ downconversion from centre frequency f c : z ( t ) = A � t 2 · e 2 π j k 0 s ( τ )d τ +j ϕ Therefore, s ( t ) is proportional to the rotational rate of z ( t ). 173 Frequency demodulating IQ samples � t Determine s ( t ) from downconverted signal z ( t ) = A 2 · e 2 π j k 0 s ( τ )d τ +j ϕ . First idea: measure the angle ∠ z ( t ), where the angle operator ∠ is defined such that ∠ a e j φ = φ ( a, φ ∈ R , a > 0). Then take its derivative: 1 d s ( t ) = d t ∠ z ( t ) 2 π k Problem: angle ambiguity, ∠ works only for − π ≤ φ < π . Ugly hack: MATLAB function unwrap removes 2 π jumps from sample sequences Better idea: first take the complex derivative d z ( t ) = A � t 2 · 2 π j k · s ( t ) · e 2 π j k 0 s ( τ )d τ +j ϕ d t ℑ [ z ( t )] d z ( t ) then divide by z ( t ): d t /z ( t ) = 2 π j k · s ( t ) Other practical approaches: � � d z ( t ) ◮ s ( t ) ∝ ℑ · z ∗ ( t ) / | z ( t ) | 2 ℜ [ z ( t )] d t z ( t ) ◮ s ( t ) ∝ ∠ z ( t − ∆ t ) / ∆ t 174
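The last approach, s(t) ∝ ∠ z(t)/z(t − ∆t) / ∆t, translates directly into a sample-by-sample demodulator: the angle between consecutive IQ samples is proportional to the instantaneous frequency and stays within ±π as long as the deviation is well below f_s/2. A minimal sketch (MATLAB/Octave) with an arbitrary 5 Hz test signal and 3 kHz deviation:

fs = 48000;  fd = 3000;                        % sampling rate, frequency deviation k
t = (0:fs-1)/fs;
s = sin(2*pi*5*t);                             % test "voice" signal
z = 0.5*exp(2j*pi*fd*cumsum(s)/fs);            % simulated downconverted FM signal z_n
s_hat = angle(z(2:end).*conj(z(1:end-1))) * fs/(2*pi*fd);   % angle between samples
plot(t(2:end), s_hat, t, s)                    % recovered and original signal overlap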

  81. Digital modulation schemes Pick z n ∈ C from a constellation of 2 n symbols to encode n bits: ASK BPSK QPSK 10 00 0 1 0 1 11 01 8PSK 16QAM FSK 101 1 111 100 00 0 01 110 000 11 10 010 001 00 01 11 10 011 175 Basic model of a modem impulses bits symbols transmit filter IQ � a i δ ( t − it s ) ✲ ✲ ✲ ✲ b j ∈ { 0 , 1 } ∗ h t ( t ) upconversion a i ∈ S ❄ noise LTI channel ✲ + n ( t ) ∗ h c ( t ) ❄ receive filter IQ data sampling ✛ ✛ ✛ ✛ bits ∗ h r ( t ) slicer downconv. IQ up/down conversion: only required for pass-band channels (e.g., radio) 176

  82. Pulse Amplitude Modulation (PAM) Baseband transmission (e.g., for wires), no IQ up/down conversion ◮ binary PAM: a i ∈ S = {− A, A } ⊂ R 1 bit/symbol ⇒ bit rate (bit/s) = symbol rate (baud) ◮ m -ary PAM: a i ∈ S = { A 1 , . . . , A m } ⊂ R k = log 2 m bit/symbol ⇒ bit rate (bit/s) > symbol rate (baud) ◮ bit sequence { b j } → symbol sequence { a i } , a i = f ( b ki , . . . , b ki + k − 1 ) Pulse generator (symbol rate f s = t − 1 s ): � x ( t ) = ˆ a i · δ ( t − it s ) i x ∗ h t , X ( f ) = ˆ Transmit filter: x = ˆ X ( f ) · H t ( f ) � x ( t ) = a i · h t ( t − it s ) i Channel: � ∞ z ( t ) = h c ( s ) x ( t − s )d s + n ( t ) 0 177 PAM reception Receive filter applied to channel output z ( t ): � ∞ y ( t ) = h r ( s ) y ( t − s )d s 0 Initial symbol pulses ˆ x ( t ) have now passed through three LTIs: y = h ∗ ˆ x h = h t ∗ h c ∗ h r H ( f ) = H t ( f ) · H c ( f ) · H r ( f ) Sample y ( t ) at times t n = nt s + t d with delay t d where pulse magnitude is largest: � y n = y ( nt s + t d ) = a i h (( n − i ) t s + t d ) + v n i where v n = v ( nt s + t d ) is the sampled noise v = n ∗ h r . Data slicer: compare y n against thresholds and convert detected nearest symbol a ′ n ∈ S back into bits b ′ kn , . . . , b ′ kn + k − 1 . 178

  83. Synchronization The receiver needs to know the times t n when to sample y ( t ). ◮ Local clock generators have temperature-dependent frequency drift. ◮ In some transmission systems, the transmitter provides the sample clock on a separate wire (or wire pair). For example: DVI and HDMI video cables contain four wire pairs: three transmitting red/green/blue pixel bytes (using an 8b/10b line encoding), and one providing a pixel clock signal, which the receiver multiplies 10 × to get a bit clock. ◮ More commonly, the receiver has to extract the sample clock from the received signal, for example by tracking the phase of transitions (phase locked loop, PLL). This works reliably only if there are regular transitions. • Some systems use a line encoding (e.g., Manchester code, 8b/10b encoding) to ensure regular transitions. Some line encodings add a spectral line at the symbol rate, which the receiver can extract with a band-pass filter, others first require a non-linear step, e.g. squaring. • Others use a scrambler : the data bits b i are XORed with the output of synchronized deterministic random-bit generators (e.g., a maximum-length linear feedback shift register), in both the sender and recipient, to make long runs of the same symbol unlikely. 179 Intersymbol interference For notational convenience: set t d = 0 and allow h ( t ) to be non-causal. � y n = a n h (0) + a i h (( n − i ) t s ) + v n i � = n Ideally, we want � 1 , i = 0 h ( it s ) = 0 , i � = 0 otherwise y n will depend on other (mainly previous) symbols, not just on a n ⇒ intersymbol interference. (See also: interpolation function) Nyquist ISI criterion � y n = a n + v n ⇔ h ( t ) · δ ( t − it s ) = δ ( t ) i � ⇔ H ( f ) ∗ f s δ ( f − if s ) = 1 i � ⇔ H ( f − if s ) = t s i 180

  84. Some possible pulse-shape choices ◮ h ( t ) = rect( t/t s ) • − ◦ H ( f ) = t s sinc( f/f s ) Rectangular pulses may be practical on fibre optics and short cables, where there are no bandwidth restriction. Not suitable for radio: bandwidth high compared to symbol rate. ◮ h ( t ) = sinc( t/t s ) • − ◦ H ( f ) = t s rect( f/f s ) Most bandwidth efficient pulse shape, but very long symbol waveform, very sensitive to clock synchronization errors. ◮ Raised-cosine filter: rectangle with half-period cosine transitions  t s , | f | ≤ t s / 2 − β   t s cos 2 π H ( f ) = 4 β ( | f | − t s / 2 + β ) , t s / 2 − β < | f | ≤ t s / 2 + β   0 , | f | > t s / 2 + β h ( t ) = sinc( t/t s ) cos 2 π βt 1 − (4 βt ) 2 Transition width (roll-off) β with 0 ≤ β ≤ t s / 2; for β = 0 this is H ( f ) = t s rect( f/f s ). ◮ Gaussian filter: both h ( t ) and H ( f ) are Gaussians (self-Fourier) Fastest transition without overshot in either time or frequency domain, but does not satisfy Nyquist ISI criterion. 181 Optimal transmit and receive filtering Nyquist ISI criterion dictates H ( f ) = H t ( f ) · H c ( f ) · H r ( f ). Bandwidth limits guide choice of H t ( f ), and channel dictates H c ( f ) and N ( f ). How should we then choose H t ( f ) H r ( f )? Select a received pulse spectrum P r ( f ), e.g. raised cosine. Then for some arbitrary gain factor k > 0: H ( f ) = H t ( f ) · H c ( f ) · H r ( f ) = k · P r ( f ) Optimal filters � N ( f ) | H r ( f ) | 2 d f at slicer relative to Minimize noise variance Var( v n ) = symbol distance. 1 � � 2 � � P r ( f ) � � | H r ( f ) | = � � � N ( f ) H c ( f ) � � 1 � � � 2 � � P r ( f ) N ( f ) � � | H t ( f ) | = k � � H c ( f ) � � If N ( f ) and H c ( f ) are flat: | H r ( f ) | = | H t ( f ) | /k ′ , e.g. root raised cosine. 182
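The raised-cosine pulse can be checked against the Nyquist ISI criterion numerically: h(t) must be 1 at t = 0 and 0 at every other symbol instant. A minimal sketch (MATLAB/Octave) with t_s = 1 and an arbitrary roll-off β = 0.2, chosen so that the 0/0 point of the formula at t = 1/(4β) does not fall on a symbol instant:

ts = 1;  beta = 0.2;
% Signal Processing Toolbox sinc(x) = sin(pi*x)/(pi*x), as used on this slide
h = @(t) sinc(t/ts) .* cos(2*pi*beta*t) ./ (1 - (4*beta*t).^2);
t = -5:5;                 % symbol instants n*ts
h(t)                      % ~[0 ... 0 1 0 ... 0] up to rounding error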

  85. Audiovisual data compression Structure of modern audiovisual communication systems: sensor + perceptual entropy channel signal ✲ ✲ ✲ ✲ sampling coding coding coding ❄ noise ✲ channel ❄ entropy perceptual channel human display ✛ ✛ ✛ ✛ senses decoding decoding decoding 183 Audio-visual lossy coding today typically consists of these steps: ◮ A transducer converts the original stimulus into a voltage. ◮ This analog signal is then sampled and quantized . The digitization parameters (sampling frequency, quantization levels) are preferably chosen generously beyond the ability of human senses or output devices. ◮ The digitized sensor-domain signal is then transformed into a perceptual domain . This step often mimics some of the first neural processing steps in humans. ◮ This signal is quantized again, based on a perceptual model of what level of quantization-noise humans can still sense. ◮ The resulting quantized levels may still be highly statistically dependent. A prediction or decorrelation transform exploits this and produces a less dependent symbol sequence of lower entropy. ◮ An entropy coder turns that into an apparently-random bit string, whose length approximates the remaining entropy. The first neural processing steps in humans are in effect often a kind of decorrelation transform; our eyes and ears were optimized like any other AV communications system. This allows us to use the same transform for decorrelating and transforming into a perceptually relevant domain. 184

  86. Outline of the remaining lectures ◮ Quick review of entropy coding ◮ Transform coding: techniques for converting sequences of highly-dependent symbols into less-dependent lower-entropy sequences. • run-length coding • decorrelation, Karhunen-Lo` eve transform (PCA) • Discrete cosine transform ◮ Introduction to some characteristics and limits of human senses • perceptual scales and sensitivity limits • colour vision ◮ Quantization techniques to remove information that is irrelevant to human senses 185 Entropy coding review – Huffman 1 � Entropy: H = p ( α ) · log 2 1.00 p ( α ) 0 1 α ∈ A = 2 . 3016 bit 0.40 0.60 0 1 0 1 v w 0.25 0.20 0.20 u 0 1 0.35 Mean codeword length: 2.35 bit x 0.10 0.15 0 1 Huffman’s algorithm constructs an optimal code-word tree for a set of symbols with known probability distribution. It iteratively picks the two elements of the set with the smallest probability and combines them into y z a tree by adding a common root. The resulting tree goes back into the 0.05 0.05 set, labeled with the sum of the probabilities of the elements it combines. The algorithm terminates when less than two elements are left. 186
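The entropy value quoted above can be recomputed from the symbol probabilities used in this and the following arithmetic-coding example (u ... z with probabilities 0.35, 0.20, 0.20, 0.15, 0.05, 0.05):

p = [0.35 0.20 0.20 0.15 0.05 0.05];
H = -sum(p .* log2(p))        % = 2.3016... bit, vs. 2.35 bit mean Huffman code length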

  87. Entropy coding review – arithmetic coding Partition [0,1] according 0.0 0.35 0.55 0.75 0.9 0.95 1.0 to symbol probabilities: u v w x y z Encode text wuvw . . . as numeric value (0.58. . . ) in nested intervals: 1.0 0.75 0.62 0.5885 0.5850 z z z z z y y y y y x x x x x w w w w w v v v v v u u u u u 0.55 0.0 0.55 0.5745 0.5822 187 Arithmetic coding Several advantages: ◮ Length of output bitstring can approximate the theoretical information content of the input to within 1 bit. ◮ Performs well with probabilities > 0.5, where the information per symbol is less than one bit. ◮ Interval arithmetic makes it easy to change symbol probabilities (no need to modify code-word tree) ⇒ convenient for adaptive coding Can be implemented efficiently with fixed-length arithmetic by rounding probabilities and shifting out leading digits as soon as leading zeros appear in interval size. Usually combined with adaptive probability estimation. Huffman coding remains popular because of its simplicity and lack of patent-licence issues. 188

  88. Coding of sources with memory and correlated symbols Run-length coding: ↓ 5 7 12 3 3 Predictive coding: encoder decoder f(t) g(t) g(t) f(t) − + predictor predictor P(f(t−1), f(t−2), ...) P(f(t−1), f(t−2), ...) Delta coding (DPCM): P ( x ) = x n � Linear predictive coding: P ( x 1 , . . . , x n ) = a i x i i =1 189 Old (Group 3 MH) fax code pixels white code black code ◮ Run-length encoding plus modified Huffman 0 00110101 0000110111 code 1 000111 010 2 0111 11 ◮ Fixed code table (from eight sample pages) 3 1000 10 ◮ separate codes for runs of white and black 4 1011 011 pixels 5 1100 0011 ◮ termination code in the range 0–63 switches 6 1110 0010 between black and white code 7 1111 00011 8 10011 000101 ◮ makeup code can extend length of a run by a 9 10100 000100 multiple of 64 10 00111 0000100 ◮ termination run length 0 needed where run 11 01000 0000101 length is a multiple of 64 12 001000 0000111 ◮ single white column added on left side before 13 000011 00000100 14 110100 00000111 transmission 15 110101 000011000 ◮ makeup codes above 1728 equal for black and 16 101010 0000010111 white . . . . . . . . . ◮ 12-bit end-of-line marker: 000000000001 (can 63 00110100 000001100111 be prefixed by up to seven zero-bits to reach 64 11011 0000001111 next byte boundary) 128 10010 000011001000 192 010111 000011001001 Example: line with 2 w, 4 b, 200 w, 3 b, EOL → . . . . . . . . . 1000 | 011 | 010111 | 10011 | 10 | 000000000001 1728 010011011 0000001100101 190

  89. Modern (JBIG) fax code Performs context-sensitive arithmetic coding of binary pixels. Both encoder and decoder maintain statistics on how the black/white probability of each pixel depends on these 10 previously transmitted neighbours: ? Based on the counted numbers n black and n white of how often each pixel value has been encountered so far in each of the 1024 contexts, the probability for the next pixel being black is estimated as n black + 1 p black = n white + n black + 2 The encoder updates its estimate only after the newly counted pixel has been encoded, such that the decoder knows the exact same statistics. Joint Bi-level Expert Group: International Standard ISO 11544, 1993. Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/ 191 Statistical dependence Random variables X , Y are dependent iff ∃ x, y : P ( X = x ∧ Y = y ) � = P ( X = x ) · P ( Y = y ) . If X , Y are dependent, then ⇒ ∃ x, y : P ( X = x | Y = y ) � = P ( X = x ) ∨ P ( Y = y | X = x ) � = P ( Y = y ) ⇒ H ( X | Y ) < H ( X ) ∨ H ( Y | X ) < H ( Y ) Application Where x is the value of the next symbol to be transmitted and y is the vector of all symbols transmitted so far, accurate knowledge of the conditional probability P ( X = x | Y = y ) will allow a transmitter to remove all redundancy. An application example of this approach is JBIG, but there y is limited to 10 past single-bit pixels and P ( X = x | Y = y ) is only an estimate. 192

  90. Practical limits of measuring conditional probabilities The practical estimation of conditional probabilities, in their most general form, based on statistical measurements of example signals, quickly reaches practical limits. JBIG needs an array of only 2 11 = 2048 counting registers to maintain estimator statistics for its 10-bit context. If we wanted to encode each 24-bit pixel of a colour image based on its statistical dependence of the full colour information from just ten previous neighbour pixels, the required number of (2 24 ) 11 ≈ 3 × 10 80 registers for storing each probability will exceed the estimated number of particles in this universe. (Neither will we encounter enough pixels to record statistically significant occurrences in all (2 24 ) 10 contexts.) This example is far from excessive. It is easy to show that in colour images, pixel values show statistical significant dependence across colour channels, and across locations more than eight pixels apart. A simpler approximation of dependence is needed: correlation. 193 Correlation Two random variables X ∈ R and Y ∈ R are correlated iff E { [ X − E( X )] · [ Y − E( Y )] } � = 0 where E( · · · ) denotes the expected value of a random-variable term. Dependent but not correlated: 1 Correlation implies dependence, but dependence does not always lead to correlation (see example to the right). However, most dependency in audio- 0 visual data is a consequence of corre- lation, which is algorithmically much easier to exploit. −1 −1 0 1 Positive correlation: higher X ⇔ higher Y , lower X ⇔ lower Y Negative correlation: lower X ⇔ higher Y , higher X ⇔ lower Y 194

  91. Correlation of neighbour pixels Values of neighbour pixels at distance 1 Values of neighbour pixels at distance 2 256 256 192 192 128 128 64 64 0 0 0 64 128 192 256 0 64 128 192 256 Values of neighbour pixels at distance 4 Values of neighbour pixels at distance 8 256 256 192 192 128 128 64 64 0 0 0 64 128 192 256 0 64 128 192 256 195 Covariance and correlation We define the covariance of two random variables X and Y as Cov( X , Y ) = E { [ X − E( X )] · [ Y − E( Y )] } = E( X · Y ) − E( X ) · E( Y ) and the variance as Var( X ) = Cov( X , X ) = E { [ X − E( X )] 2 } . The Pearson correlation coefficient Cov( X , Y ) ρ X , Y = � Var( X ) · Var( Y ) is a normalized form of the covariance. It is limited to the range [ − 1 , 1]. If the correlation coefficient has one of the values ρ X , Y = ± 1, this implies that X and Y are exactly linearly dependent, i.e. Y = a X + b , with a = Cov( X , Y ) / Var( X ) and b = E( Y ) − E( X ) . 196
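A minimal sketch (MATLAB/Octave) with two artificially correlated variables; cov and corrcoef compute the sample versions of the quantities defined above:

N = 1e4;
X = randn(1, N);
Y = 2*X + 0.5*randn(1, N);            % linearly dependent on X plus noise
C = cov(X, Y);                        % 2x2 sample covariance matrix
rho = C(1,2) / sqrt(C(1,1)*C(2,2))    % ~ 2/sqrt(1*4.25) = 0.970
corrcoef(X, Y)                        % same value in the off-diagonal entries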

  92. Covariance Matrix X = ( X 1 , X 2 , . . . , X n ) ∈ R n we define the For a random vector � covariance matrix � X )) T � Cov( � ( � X − E( � X )) · ( � X − E( � X ) = E = (Cov( X i , X j )) i,j =   Cov( X 1 , X 1 ) Cov( X 1 , X 2 ) Cov( X 1 , X 3 ) · · · Cov( X 1 , X n ) Cov( X 2 , X 1 ) Cov( X 2 , X 2 ) Cov( X 2 , X 3 ) · · · Cov( X 2 , X n )     Cov( X 3 , X 1 ) Cov( X 3 , X 2 ) Cov( X 3 , X 3 ) · · · Cov( X 3 , X n )     . . . . ...   . . . . . . . .   Cov( X n , X 1 ) Cov( X n , X 2 ) Cov( X n , X 3 ) · · · Cov( X n , X n ) The elements of a random vector � X are uncorrelated if and only if Cov( � X ) is a diagonal matrix. Cov( X , Y ) = Cov( Y , X ), so all covariance matrices are symmetric : Cov( � X ) = Cov T ( � X ). 197 Decorrelation by coordinate transform Neighbour−pixel value pairs Decorrelated neighbour−pixel value pairs 256 320 256 192 192 128 128 64 64 0 0 −64 0 64 128 192 256 −64 0 64 128 192 256 320 Probability distribution and entropy Idea: Take the values of a group of cor- correlated value pair (H = 13.90 bit) related symbols (e.g., neighbour pixels) as decorrelated value 1 (H = 7.12 bit) decorrelated value 2 (H = 4.75 bit) a random vector. Find a coordinate trans- form (multiplication with an orthonormal matrix) that leads to a new random vector whose covariance matrix is diagonal. The vector components in this transformed co- ordinate system will no longer be corre- lated. This will hopefully reduce the en- tropy of some of these components. −64 0 64 128 192 256 320 198

  93. X ∈ R n and � Y ∈ R n be random vectors that are linearly Theorem: Let � X + b , where A ∈ R n × n and b ∈ R n are dependent with � Y = A� constants. Then E( � A · E( � Y ) = X ) + b Cov( � A · Cov( � X ) · A T Y ) = Proof: The first equation follows from the linearity of the expected-value operator E( · ), as does E( A · � X · B ) = A · E( � X ) · B for matrices A, B . With that, we can transform � Y )) T � Cov( � ( � Y − E( � Y )) · ( � Y − E( � Y ) = E � X )) T � ( A� X − A E( � X )) · ( A� X − A E( � = E � X )) T A T � A ( � X − E( � X )) · ( � X − E( � = E � X )) T � ( � X − E( � X )) · ( � X − E( � · A T = A · E A · Cov( � X ) · A T = 199 Quick review: eigenvectors and eigenvalues We are given a square matrix A ∈ R n × n . The vector x ∈ R n is an eigenvector of A if there exists a scalar value λ ∈ R such that Ax = λx. The corresponding λ is the eigenvalue of A associated with x . The length of an eigenvector is irrelevant, as any multiple of it is also an eigenvector. Eigenvectors are in practice normalized to length 1. Spectral decomposition Any real, symmetric matrix A = A T ∈ R n × n can be diagonalized into the form A = U Λ U T , where Λ = diag( λ 1 , λ 2 , . . . , λ n ) is the diagonal matrix of the ordered eigenvalues of A (with λ 1 ≥ λ 2 ≥ · · · ≥ λ n ), and the columns of U are the n corresponding orthonormal eigenvectors of A . 200
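Putting the theorem and the spectral decomposition together: transforming a random vector with the transpose of the eigenvector matrix of its covariance matrix yields uncorrelated components, since Cov(Uᵀ·X) = Uᵀ·Cov(X)·U = Λ. A minimal sketch (MATLAB/Octave) with artificially correlated two-dimensional data:

N = 1e4;
X = randn(2, N);
X(2,:) = 0.8*X(1,:) + 0.6*X(2,:);     % make the two components correlated
C = cov(X.');                         % 2x2 sample covariance matrix
[U, L] = eig(C);                      % columns of U: orthonormal eigenvectors
Y = U.' * X;                          % transformed random vector
cov(Y.')                              % ~ diagonal, approximately equal to L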
