

SLIDE 1

Contents of the Lecture

  • Multiple random variables

– Covariance, correlation and higher order moments

  • Properties of Linear Systems

IIT Bombay Slide 1 GNR607 Lecture 09 B. Krishna Mohan

Aug. 11, 2014, Lecture 09: Math. Preliminaries - 4
SLIDE 2


Gaussian Distribution

Source: http://allpsych.com/researchmethods/distributions.html

SLIDE 3


Skewed Distributions

Source: http://allpsych.com/researchmethods/distributions.html

Panels: Skewness = 0, Skewness > 0, Skewness < 0

SLIDE 4


Kurtic Distributions

Source: http://allpsych.com/researchmethods/distributions.html

Panels: Kurtosis = 3, Kurtosis > 3, Kurtosis < 3
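As a quick numerical illustration of these shape measures, skewness and kurtosis can be estimated from samples (a NumPy sketch; the function names are ours, not from the lecture):

```python
import numpy as np

def skewness(x):
    # third central moment normalized by sigma^3; 0 for a symmetric pdf
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def kurtosis(x):
    # fourth central moment normalized by sigma^4; 3 for a Gaussian
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
print(skewness(x))  # near 0 for a Gaussian sample
print(kurtosis(x))  # near 3 for a Gaussian sample
```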

SLIDE 5

Multiple Random Variables

  • The previous definitions can be extended to collections of random variables
  • A vector random variable is denoted by x = [x1 x2 … xn]T
  • The cumulative distribution for multiple random variables becomes FX(x) = P(X1 ≤ x1, X2 ≤ x2, …, Xn ≤ xn)

SLIDE 6

Multivariate PDF

  • The pdf in multiple variables is denoted by p(x) = p(x1, x2, …, xn) and is given by

p(x1, x2, …, xn) = ∂^n F(x1, x2, …, xn) / (∂x1 ∂x2 … ∂xn)

  • Joint expectation of pairs of random variables is given by

E[xi^k xj^r] = ∫∫ xi^k xj^r p(xi, xj) dxi dxj  (each integral from −∞ to ∞)

SLIDE 7

Multivariate PDF

  • Joint expectation of pairs of random variables is given by

E[xi^k xj^r] = ∫∫ xi^k xj^r p(xi, xj) dxi dxj

  • This can be written with a simpler notation as

E[x^k y^r] = ∫∫ x^k y^r p(x, y) dx dy

SLIDE 8

Correlation between Random Variables

  • The correlation between two random variables x and y is given by

Rxy = E{xy} = ∫∫ x y p(x, y) dx dy  (x and y each from −∞ to ∞)

  • If Rxy = E[x]E[y] then x and y are uncorrelated
  • If x and y are independent, they are automatically uncorrelated, though the converse is not always true

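The "converse is not always true" point can be checked numerically. A standard counterexample (our choice, not from the slides) is y = x² with x standard normal: y is completely dependent on x, yet the two are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
y = x ** 2  # y is fully determined by x, so x and y are not independent

# Rxy = E[xy] = E[x^3] = 0 for x ~ N(0, 1), which equals E[x]E[y]
r_xy = np.mean(x * y)
print(r_xy, np.mean(x) * np.mean(y))  # both near 0: uncorrelated, yet dependent
```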
SLIDE 9

Joint Central Moments

  • The central moment of order (k, r) is given by

µkr = ∫∫ (x − mx)^k (y − my)^r p(x, y) dx dy

  • mx and my are the means of the random variables x and y
  • For instance, µ20 = σx^2 = variance of x
  • Likewise, µ02 = σy^2 = variance of y

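The definition above translates directly into a sample estimate. A small sketch (the function name and the test distributions are our assumptions):

```python
import numpy as np

def joint_central_moment(x, y, k, r):
    # mu_kr = E[(x - mx)^k (y - my)^r], estimated from samples
    return np.mean((x - x.mean()) ** k * (y - y.mean()) ** r)

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)
y = 0.5 * x + rng.normal(size=100_000)

print(joint_central_moment(x, y, 2, 0))  # mu_20 = variance of x, near 4
print(joint_central_moment(x, y, 0, 2))  # mu_02 = variance of y, near 2
print(joint_central_moment(x, y, 1, 1))  # mu_11 = covariance, near 2
```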
SLIDE 10

Covariance

  • The joint central moment µ11, called the covariance, is given by

µ11 = E[(x − mx)(y − my)] = ∫∫ (x − mx)(y − my) p(x, y) dx dy

  • Covariance is extremely important when dealing with remotely sensed images acquired in multiple bands: we deal with the covariance between data acquired in one wavelength band and data acquired in another band. Covariance is often represented by Cxy or by the matrix Σ.

SLIDE 11

Correlation and Covariance

  • By expanding the expression for covariance we obtain
  • Cxy = E[xy] − mxE[y] − myE[x] + E[x]E[y] = Rxy − mxmy − mxmy + mxmy = Rxy − mxmy
  • If either random variable x or y is zero mean (mx = 0 or my = 0) then Cxy = Rxy

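The identity Cxy = Rxy − mxmy also holds exactly for sample moments, which makes it easy to verify (a sketch with arbitrary test data of our choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, size=50_000)
y = x + rng.normal(loc=-1.0, size=50_000)

r_xy = np.mean(x * y)                            # Rxy = E[xy]
c_xy = np.mean((x - x.mean()) * (y - y.mean()))  # Cxy = E[(x - mx)(y - my)]
print(np.allclose(c_xy, r_xy - x.mean() * y.mean()))  # True: Cxy = Rxy - mx my
```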
SLIDE 12

Correlation Coefficient

  • γ = Cxy / (σx σy) = E[((x − mx)/σx) ((y − my)/σy)] is known as the correlation coefficient between random variables x and y
  • γ varies between −1 and +1

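A sketch of computing γ both from its definition and with NumPy's built-in `np.corrcoef` (the linear model for y is our assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)  # positively correlated with x

# gamma = Cxy / (sigma_x sigma_y), computed directly and via np.corrcoef
gamma = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())
print(gamma)                    # near 2/sqrt(5) ~ 0.894 for this model
print(np.corrcoef(x, y)[0, 1])  # same value
```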
SLIDE 13

Multivariate Gaussian Function

  • The multivariate Gaussian function is given by

p(x) = (1 / ((2π)^(n/2) |C|^(1/2))) exp(−(1/2) (x − m)^T C^(−1) (x − m))

  • m is the mean vector, n is the dimensionality of the Gaussian, |C| is the determinant of the covariance matrix C

SLIDE 14

Multivariate Gaussian Function

p(x) = (1 / ((2π)^(n/2) |C|^(1/2))) exp(−(1/2) (x − m)^T C^(−1) (x − m))

Source: www.mathworks.jp
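The density formula can be sketched directly in NumPy (the function name `mvn_pdf` is ours; at the mean the exponential is 1, so the value reduces to the normalizing constant):

```python
import numpy as np

def mvn_pdf(x, m, C):
    # p(x) = exp(-(1/2)(x - m)^T C^{-1} (x - m)) / ((2 pi)^(n/2) |C|^(1/2))
    n = len(m)
    d = x - m
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(C))
    return np.exp(-0.5 * d @ np.linalg.solve(C, d)) / norm

# sanity checks at the mean: 1/sqrt(2 pi) in 1-D, 1/(2 pi) in 2-D
print(mvn_pdf(np.zeros(1), np.zeros(1), np.eye(1)))
print(mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2)))
```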

SLIDE 15

Discrete Mean and Covariance

  • The mean vector is given by
  • m = [E(x1) E(x2) … E(xn)]T
  • The covariance matrix is given by
  • C = E[(x-m)(x-m)T]
  • A given element Cij is given by
  • Cij = E[(xi − mi)(xj − mj)]

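These discrete definitions can be sketched and cross-checked against NumPy's `np.cov` (the toy data and the induced correlation are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 10_000))  # rows = variables, columns = samples
X[1] += 0.8 * X[0]                # introduce correlation between rows 0 and 1

m = X.mean(axis=1)                # mean vector m = [E(x1) E(x2) E(x3)]^T
d = X - m[:, None]
C = d @ d.T / X.shape[1]          # C = E[(x - m)(x - m)^T]
print(np.allclose(C, np.cov(X, bias=True)))  # True: matches np.cov
```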

SLIDE 16

Sample Covariance Matrix for 4 Bands

34.89   55.62   52.87   22.71
55.62  105.95   99.58   43.33
52.87   99.58  104.02   45.80
22.71   43.33   45.80   21.35

Matrix eigenvalues: 253.44, 7.91, 3.96, 0.89

The bands under consideration are blue, green, red and near infrared, and it is evident that there is considerable correlation among the bands, based on the spectral response curves seen before. High correlation among bands implies that each band can be closely approximated by a linear combination of the remaining bands, so the covariance matrix is nearly rank deficient. That is why the smallest eigenvalue is so much smaller than the largest eigenvalue.
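The eigenvalues quoted on the slide can be reproduced from the matrix itself; as a consistency check, the sum of the eigenvalues of a symmetric matrix equals its trace.

```python
import numpy as np

# the 4-band sample covariance matrix from the slide
C = np.array([
    [34.89,  55.62,  52.87, 22.71],
    [55.62, 105.95,  99.58, 43.33],
    [52.87,  99.58, 104.02, 45.80],
    [22.71,  43.33,  45.80, 21.35],
])

vals = np.linalg.eigvalsh(C)[::-1]  # eigenvalues in descending order
print(vals.round(2))                # slide reports 253.44, 7.91, 3.96, 0.89
print(vals.sum(), np.trace(C))      # the eigenvalue sum equals the trace
```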

SLIDE 17

Recall some properties!

  • The covariance matrix is REAL and SYMMETRIC
  • It can easily be diagonalized using its eigenvectors
  • Λ = A C A^T, where A is a matrix formed by using the eigenvectors of C as its rows

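The diagonalization Λ = A C A^T can be verified numerically (a sketch using random data; `np.linalg.eigh` is the standard routine for real symmetric matrices):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(4, 5_000))
C = np.cov(X)                      # a real, symmetric covariance matrix

w, V = np.linalg.eigh(C)           # columns of V are eigenvectors of C
A = V.T                            # eigenvectors of C as rows
L = A @ C @ A.T                    # Lambda = A C A^T

print(np.allclose(L, np.diag(w)))  # True: Lambda is diagonal
```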
SLIDE 18

Eigenvectors of Covariance Matrix

  • Vector random variables can be transformed by the eigenvectors of their covariance matrix, resulting in uncorrelated random variables
  • This transform leads to an important image processing operation known as the Principal Component Transform (PCT)

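A minimal PCT sketch on synthetic correlated "bands" (the data model is our assumption): after projecting onto the eigenvectors, the covariance of the transformed variables is diagonal, i.e. the components are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(3, 20_000))
X[2] += X[0] + X[1]                # correlated "bands"

C = np.cov(X)
_, V = np.linalg.eigh(C)
Y = V.T @ (X - X.mean(axis=1, keepdims=True))  # principal component transform

# the transformed variables are uncorrelated: cov(Y) is numerically diagonal
CY = np.cov(Y)
off_diagonal = CY - np.diag(np.diag(CY))
print(np.abs(off_diagonal).max())  # essentially zero
```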
SLIDE 19

Definition of Linear System

  • A system converts an input f(x) to an output g(x). The system is denoted by H:
  • f(x) → H → g(x)
  • The operation of the system, with H being the system operator, can be written as g(x) = H[f(x)]

SLIDE 20

Scaling Property

  • Consider the system operator H: g(x) = H[f(x)]
  • Suppose H[w1 f1(x)] = w1 g1(x), where g1(x) = H[f1(x)]
  • Then H is said to satisfy the scaling property: a scaled input produces an output scaled by the same factor

SLIDE 21

Additivity Property

Given the system operator H: g(x) = H[f(x)], let w1 = 1 and w2 = 1. Suppose
H[f1(x) + f2(x)] = g1(x) + g2(x)
Then H is said to satisfy the additivity property: the response of the system to the sum of two inputs equals the sum of the two corresponding outputs.

SLIDE 22

Superposition Property

Given the system operator H: g(x) = H[f(x)], suppose
H[w1 f1(x) + w2 f2(x)] = w1 g1(x) + w2 g2(x)
Then H satisfies the superposition property, which is a combination of the additivity and scaling properties.

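Superposition is easy to test numerically. A sketch using a 3-point moving average as the example system H (our choice of system; convolution is linear):

```python
import numpy as np

def H(f):
    # a 3-point moving-average system, which is linear
    return np.convolve(f, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(8)
f1, f2 = rng.normal(size=100), rng.normal(size=100)
w1, w2 = 2.5, -1.3

# superposition: H[w1 f1 + w2 f2] equals w1 g1 + w2 g2
print(np.allclose(H(w1 * f1 + w2 * f2), w1 * H(f1) + w2 * H(f2)))  # True
```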
SLIDE 23

General Property

Given the system operator H: g(x) = H[f(x)], the superposition property can be stated in general as

H[Σ(i=1..n) wi fi(x)] = Σ(i=1..n) wi gi(x)

SLIDE 24

Shift Invariance

  • A system H is space invariant if H[f(x + x0)] = g(x + x0)
  • This means that the nature of the system does not alter with position in the image: the position of the output simply shifts in response to a shift in the position of the input
  • If the response of the system differs for different values of x0 then the system is space variant

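Shift invariance can also be checked numerically. A sketch using a 3-point moving average as the system (our choice; the comparison excludes the boundary samples, where the finite array breaks the idealization):

```python
import numpy as np

def H(f):
    # 3-point moving average: shift invariant away from the array boundaries
    return np.convolve(f, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(9)
f = rng.normal(size=200)
x0 = 5

shifted_then_filtered = H(np.roll(f, x0))
filtered_then_shifted = np.roll(H(f), x0)
# identical away from the wrap-around and boundary samples
print(np.allclose(shifted_then_filtered[10:-10], filtered_then_shifted[10:-10]))
```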
SLIDE 25

Differentiation as a Linear Operator

  • Let g(x) = df(x)/dx
  • Verify that differentiation is shift invariant!

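One way to carry out this verification is numerically, with central differences standing in for the derivative (our discretization; the comparison avoids the circular-shift seam and the array ends):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1_000)
dx = x[1] - x[0]
f = np.sin(x)

g = np.gradient(f, dx)  # H[f] = df/dx via central differences

# shift the input (circularly) by k samples and compare with shifting the output
k = 50
lhs = np.gradient(np.roll(f, k), dx)
rhs = np.roll(g, k)
print(np.allclose(lhs[60:-60], rhs[60:-60]))  # True away from the wrap seam
```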
SLIDE 26

Another Example

  • Let H[f(x)] = ln(f(x))
  • Then H[w1 f1(x) + w2 f2(x)] = ln(w1 f1(x) + w2 f2(x)) ≠ w1 ln(f1(x)) + w2 ln(f2(x))
  • Therefore the logarithm operator is not linear

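A concrete counterexample makes the non-linearity explicit (the particular values are arbitrary):

```python
import numpy as np

f1, f2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
w1, w2 = 2.0, 5.0

lhs = np.log(w1 * f1 + w2 * f2)          # H[w1 f1 + w2 f2]
rhs = w1 * np.log(f1) + w2 * np.log(f2)  # w1 H[f1] + w2 H[f2]
print(np.allclose(lhs, rhs))             # False: the logarithm is not linear
```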
SLIDE 27

Unit Impulse

  • A unit impulse function δ(x − a) is defined by δ(x − a) = 0 for x ≠ a, with ∫ δ(x − a) dx = 1
  • Property of the unit impulse function (sifting): f(x) = ∫ f(α) δ(x − α) dα
  • Then, applying a linear system H:

H[∫ f(α) δ(x − α) dα] = ∫ f(α) H[δ(x − α)] dα = ∫ f(α) h(x, α) dα

  • Note that the integration operator is linear

SLIDE 28

Impulse Response

  • g(x) = ∫ f(α) h(x, α) dα
  • h(x, α) is called the impulse response of the linear system. The integral is known as the superposition integral
  • If the system is shift invariant, then h(x, α) can be written as h(x − α). If the position x also determines the response of the system, then it is shift variant
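In discrete form, the superposition integral becomes a sum, g(x) = Σa f(a) h(x − a), i.e. a convolution. A sketch (the 3-tap filter standing in for the linear shift-invariant system is our choice): probe the system with a unit impulse to obtain h, then reconstruct the output of an arbitrary input by convolution with h.

```python
import numpy as np

def H(f):
    # an example linear shift-invariant system (a 3-tap smoothing filter)
    return np.convolve(f, np.array([0.25, 0.5, 0.25]), mode="full")[:len(f)]

n = 16
delta = np.zeros(n)
delta[0] = 1.0
h = H(delta)  # impulse response: probe the system with a unit impulse

# superposition integral in discrete form: g(x) = sum_a f(a) h(x - a)
f = np.arange(n, dtype=float)
g_direct = H(f)
g_superposition = np.convolve(f, h, mode="full")[:n]
print(np.allclose(g_direct, g_superposition))  # True
```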