


In the name of Allah, the compassionate, the merciful


Digital Image Processing

S. Kasaei
Sharif University of Technology
Room: CE 307
E-Mail: skasaei@sharif.edu
Home Page: http://ce.sharif.edu | http://ipl.ce.sharif.edu | http://sharif.edu/~skasaei


Chapter 2

Two-Dimensional Systems & Mathematical Preliminaries


Notations & Definitions


1-D continuous signal: $f(x),\ u(x),\ s(t)$
1-D sampled signal: $u(n),\ u_n$
Continuous image: $f(x, y),\ u(x, y),\ v(x, y)$
Sampled image: $u(m, n),\ u_{m,n},\ u(i, j, k)$
[a 2 (or higher)-D sequence of real numbers]

Complex conjugate: $z^{*}$
Separable functions: $f(x, y) = f_1(x)\, f_2(y)$
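To make the separability definition concrete, here is a minimal NumPy sketch; the two 1-D profiles are illustrative choices. A separable image is just an outer product, and therefore has rank 1.

```python
import numpy as np

# A separable image f(x, y) = f1(x) f2(y) built as an outer product of two
# 1-D signals; the particular profiles below are illustrative only.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
f1 = np.cos(2 * np.pi * 4 * x)   # 1-D profile along x
f2 = np.exp(-3 * y)              # 1-D profile along y

f = np.outer(f1, f2)             # f[i, j] = f1[i] * f2[j]

# A separable image is a rank-1 matrix, which gives a quick numerical check.
print(np.linalg.matrix_rank(f))  # -> 1
```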


Linear Systems & Shift Invariance


The Fourier Transform


Spatial Frequency

Spatial frequency measures how fast the image intensity changes in the image plane.

Spatial frequency can be completely characterized by the variation frequencies in two orthogonal directions (e.g., horizontal & vertical):

fx: cycles/horizontal unit distance.
fy: cycles/vertical unit distance.

It can also be specified by the magnitude and angle of change:

$f_m = \sqrt{f_x^2 + f_y^2}, \qquad \theta = \arctan(f_y / f_x)$
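A quick numerical illustration of the magnitude/angle form; the component values are made up:

```python
import numpy as np

# Magnitude and angle of a spatial frequency from its horizontal and
# vertical components; the component values are illustrative.
fx, fy = 3.0, 4.0                # cycles per unit distance
f_m = np.hypot(fx, fy)           # sqrt(fx^2 + fy^2)
theta = np.arctan2(fy, fx)       # angle of change, in radians

print(f_m)                       # 5.0
print(np.degrees(theta))         # ~53.13 degrees
```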

Illustration of Spatial Frequency

Angular Frequency

Problem with the previously defined spatial frequency: the perceived speed of change depends on the viewing distance.

For a picture of height $h$ viewed at a distance $d$, the viewing angle is

$\theta = 2\arctan\!\left(\frac{h}{2d}\right) \approx \frac{h}{d}\ \text{(radian)} = \frac{180\,h}{\pi d}\ \text{(degree)}$

so a spatial frequency of $f_s$ (cycles per picture height) corresponds to an angular frequency of $f_\theta = f_s / \theta$ (cycle/degree).
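A minimal sketch of the conversion, with illustrative height, distance, and frequency values:

```python
import numpy as np

# Convert a spatial frequency (cycles per picture height) to an angular
# frequency (cycles per degree); h, d, and f_s are illustrative values.
h = 0.3        # picture height, meters
d = 1.0        # viewing distance, meters
f_s = 100.0    # cycles per picture height

theta_deg = np.degrees(2 * np.arctan(h / (2 * d)))  # viewing angle, degrees
f_theta = f_s / theta_deg                           # cycles per degree

print(round(theta_deg, 2), round(f_theta, 2))       # 17.06, 5.86
```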

The Fourier Transform

The Z-Transform


Matrix Theory Results


(a) 2-D Cartesian coordinate representation. (b) Matrix representation.




Red line: direction of the first principal component; green line: direction of the second principal component.

Data set. Principal axes (eigenvectors). Rotated data set.
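The figure's pipeline (data set → principal axes → rotated data set) can be reproduced in a few lines; this sketch uses synthetic data and assumes nothing beyond NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated 2-D data set (illustrative only).
x = rng.normal(size=500)
data = np.column_stack([x, 0.6 * x + 0.3 * rng.normal(size=500)])

# Principal axes: eigenvectors of the sample covariance matrix.
C = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order

# Rotating the centered data onto the principal axes decorrelates it.
rotated = (data - data.mean(axis=0)) @ eigvecs
print(np.cov(rotated, rowvar=False).round(4)) # ~diagonal covariance matrix
```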


Block Matrices & Kronecker Products


Probability, Random Variables, & Random Signal Processing

A Brief Review


References

1. Review of Probability, by Rafael C. Gonzalez and Richard E. Woods, 2002.
2. Lecture Notes on Probability Theory and Random Processes, by Jean Walrand, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, 2004.
3. Probability, Random Variables, and Random Signal Principles, by Peyton Z. Peebles, Jr., McGraw-Hill, 3rd Edition, 1993, ISBN 0-07-112782-8.
4. Probability, Random Variables, and Stochastic Processes, by Athanasios Papoulis, McGraw-Hill, 1991 (QA 273 .P2 1991).


Famous Theoreticians of the Field (1654–1987)

Jacob BERNOULLI 1654-1705
Abraham DE MOIVRE 1667-1754
Thomas BAYES 1701-1761
Thomas SIMPSON 1710-1761
Pierre Simon LAPLACE 1749-1827
Adrien Marie LEGENDRE 1752-1833
Carl Friedrich GAUSS 1777-1855
Andrei Andreyevich MARKOV 1856-1922
Andrei Nikolaevich KOLMOGOROV 1903-1987

Modeling Uncertainty


In this lecture we introduce the concept of a model of an uncertain physical system and we stress the importance of concepts that justify the structure of the theory.


Modeling Uncertainty: Models and Physical Reality

General concept:

physical world → uncertain outcomes → model of uncertainty (Probability Theory)


“Probability Theory” is a mathematical model of uncertainty.

It is important to appreciate the difference between uncertainty in the physical world and the models of “Probability Theory”. That difference is similar to the one between the real world and the laws of theoretical physics.

Consider flipping a fair coin repeatedly. Designate by 0 and 1 the two possible outcomes of a coin flip (say 0 for heads and 1 for tails). This experiment takes place in the physical world, and its outcomes are uncertain. Here we try to appreciate the probability model of this experiment and to relate it to the physical reality.


Modeling Uncertainty: Function of Hidden Variable

One idea is that the uncertainty in the world is fully contained in the selection of some hidden variable. If this variable were known, then nothing would be uncertain anymore.

In other words, everything that is uncertain is a function of that hidden variable. By “function” we mean that if we know the hidden variable, then we know everything else.

Everything that is random is some function X of some hidden variable ω. If we designate the outcome of the 5th coin flip by X, then we conclude that X is a function of ω. We can denote that function by X(ω).


Some Basic Definitions

Probability Space:

Now, we describe the probability model of “choosing an object at random”.

We explain that the key idea is to associate a likelihood, which we call probability, to sets of outcomes (not to individual outcomes). These sets are events.

The description of the events and of their probabilities constitutes a probability space that completely characterizes a random experiment.


Events:

Probability events are modeled as sets. The sets of outcomes to which one assigns a probability are called events (e.g., the event of getting a tail).

It is not necessary (and often not possible) for every set of outcomes to be an event.


Probability Space:

Putting together the observations of the sections above, we have defined a probability space as follows.

A probability space is a triplet {Ω, F, P} where:

Ω is a nonempty set, called the sample space.
F is a collection of subsets of Ω, closed under countable set operations. The elements of F are called events.
P is a countably additive function from F into [0, 1] such that P(Ω) = 1, called a probability measure.

Example:
Books at the SUT library (sample space).
CE books (an event).
Probability of the existence of a specific book (probability measure).
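A minimal sketch of a finite probability space, assuming a fair six-sided die; the event below is an illustrative choice:

```python
from fractions import Fraction

# A finite probability space {Omega, F, P} for a fair six-sided die; here F
# is implicitly all subsets of Omega, and P is additive over outcomes.
omega = {1, 2, 3, 4, 5, 6}                    # sample space
weight = {w: Fraction(1, 6) for w in omega}   # uniform measure on outcomes

def P(event):
    """Probability measure: sum the weights of the outcomes in the event."""
    return sum(weight[w] for w in event)

even = {2, 4, 6}          # an event: "the outcome is even"
print(P(even))            # 1/2
print(P(omega))           # 1, as required of a probability measure
```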


Examples will clarify the probability space definition. The main point is that one defines the probability of sets of outcomes (the events).

The probability should be countably additive (to be continuous). Accordingly (to be able to write down this property), and also quite intuitively, the collection of events should be closed under countable set operations.


Sets & Set Operations

A set is a collection of objects, with each object in a set often referred to as an element or member of the set (e.g., the set of all image processing books).

The set with no elements is called the empty or null set, denoted by the symbol Ø.

The sample space is defined as the set containing all elements of interest in a given situation.


Basic Set Operations

The union of two sets A and B is denoted by A ∪ B. The intersection of two sets A and B is denoted by A ∩ B. Two sets having no elements in common are said to be disjoint or mutually exclusive: A ∩ B = Ø.



Basic Set Operations (Venn Diagram)

It is often quite useful to represent sets and set operations in a so-called Venn diagram, in which:

S is represented as a rectangle.
Sets are represented as areas (typically circles).
Points are associated with elements.

The following example shows various uses of Venn diagrams; the shaded areas are the result (sets of points).



The diagrams in the top row are self-explanatory. The diagrams in the bottom row are used to prove the validity of the expression used in the proof of some probability relationships.


Relative Frequency & Probability

A random experiment is an experiment in which it is not possible to predict the outcome (e.g., the tossing of a coin).

The probability of an event is denoted by P(event). For an event A, the relative-frequency definition is $P(A) = \lim_{n \to \infty} n_A / n$, where $n_A$ is the number of times A occurs in n trials of the experiment.


If A and B are mutually exclusive, it follows that the set AB is empty and, consequently, P(AB) = 0. The conditional probability is denoted by P(A/B), which refers to the probability of A given B.

$P\!\left(\bigcup_{n=1}^{N} A_n\right) = \sum_{n=1}^{N} P(A_n) \quad \text{if } A_m \cap A_n = \varnothing \ (m \neq n)$


Conditional Probability

Assume that we know that the outcome is in B. Given that information, what is the probability that the outcome is in A?

This probability is written as P[A|B] and is read “the conditional probability of A given B,” or “the probability of A given B” for short.
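For reference, the standard defining identity (valid when P[B] > 0):

```latex
P[A \mid B] \;=\; \frac{P[A \cap B]}{P[B]}, \qquad P[B] > 0 .
```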


Relative Frequency & Probability

A little manipulation of the preceding results yields the following important relationships: $P(A \cap B) = P(A/B)P(B)$ and $P(A \cap B) = P(B/A)P(A)$. The second expression may be written as $P(A/B) = P(B/A)P(A)/P(B)$, which is known as Bayes' theorem, so named after the 18th-century mathematician Thomas Bayes. In the Bayesian reading, P(A) is the a priori term, P(B/A) the likelihood, and P(A/B) the a posteriori term.


Bayes' Theorem

The importance of the prior distribution: Bayes' rule.

Bayes understood how to systematically include the information about the prior distribution (a priori) in the calculation of the posterior distribution (a posteriori). He discovered what we know today as Bayes' rule, a simple but very useful identity.


This formula extends to a finite number of events Bn that partition Ω. Think of the Bn as possible “causes” of some effect A. You know the prior probabilities P(Bn) of the causes and also the probability that each cause provokes the effect A. The formula tells you how to calculate the probability that a given cause has provoked the observed effect.

Applications abound, as we will see in detection theory. For instance: your alarm can sound if there is a burglar, but also if there is no burglar (a false alarm). Given that the alarm sounds, what is the probability that it is a false alarm?
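The false-alarm question works out in a few lines of Bayes' rule; all the probabilities below are made-up, illustrative numbers:

```python
# Posterior probability of a false alarm via Bayes' rule; every probability
# here is an illustrative assumption, not data from the lecture.
p_burglar = 0.001                     # prior P(B1): burglar present
p_no_burglar = 1.0 - p_burglar        # prior P(B2): no burglar
p_alarm_given_burglar = 0.95          # P(A | B1)
p_alarm_given_none = 0.01             # P(A | B2): false-alarm rate

# Total probability of the observed effect A (the alarm sounding).
p_alarm = (p_alarm_given_burglar * p_burglar
           + p_alarm_given_none * p_no_burglar)

# P(B2 | A): probability that a sounding alarm is a false alarm.
p_false_alarm = p_alarm_given_none * p_no_burglar / p_alarm
print(round(p_false_alarm, 3))        # 0.913: most alarms are false here
```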


Relative Frequency & Probability

If A and B are statistically independent, then P(B/A) = P(B), and it follows that P(A/B) = P(A) and P(AB) = P(A)P(B). For mutually exclusive sets, A ∩ B = Ø, from which it follows that P(AB) = P(A ∩ B) = 0. Two sets are statistically independent if P(AB) = P(A)P(B), which we assume to be nonzero in general. Thus, for two events to be statistically independent, they cannot be mutually exclusive.


In general, for N events to be statistically independent, it must be true that, for all combinations 1 ≤ i < j < k < … ≤ N: $P(A_i A_j) = P(A_i)P(A_j)$, $P(A_i A_j A_k) = P(A_i)P(A_j)P(A_k)$, …, $P(A_1 A_2 \cdots A_N) = P(A_1)P(A_2) \cdots P(A_N)$.


Random Variables

The definition is:

“A real random variable (RV) is a measurable real-valued function of the outcome of a random experiment.”

Physical examples:
Noise voltage at a given time and place.
Temperature at a given time and place.
Height of the next person to enter the room.


Random Variables

A real random variable, X, is a real-valued function defined on the events of a sample space, S (i.e., X(s)). In words, for each event in S, there is a real number that is the corresponding value of the RV. Viewed yet another way, an RV maps each event in S onto the real line.

A complex RV, Z, can be defined in terms of real RVs, X and Y, by Z = X + jY (the joint density of X and Y must be used).

A random variable mapping of a sample space.


Thus far, we have been concerned with discrete random variables, for which the probabilities of events are numbers between 0 and 1. When dealing with continuous quantities (which are not denumerable), we can no longer talk about the “probability of a specific value,” because that probability is zero.


Thus, instead of talking about the probability of a specific value, we talk about the probability that the value of the RV lies in a specified range.

In particular, we are interested in the probability that the RV is less than or equal to a specified constant a: $F(a) = P(x \leq a)$.

The function F is called the cumulative probability distribution function, or simply the cumulative distribution function (CDF).

If this function is given for all values of a (i.e., −∞ < a < ∞), then the values of the RV x have been defined.


Observe that $F(x^{+}) = F(x)$, where $x^{+} = x + \varepsilon$, with ε being a positive, infinitesimally small number; in words, F(x) is continuous from the right.

Due to the fact that it is a probability, the CDF has the following properties: $F(-\infty) = 0$; $F(\infty) = 1$; $0 \leq F(x) \leq 1$; $F(x_1) \leq F(x_2)$ for $x_1 < x_2$; and $P(x_1 < x \leq x_2) = F(x_2) - F(x_1)$.


The probability density function (PDF) of the RV x is defined as the derivative of the CDF: $p(x) = dF(x)/dx$.

The PDF satisfies the following properties: $p(x) \geq 0$ for all x; $\int_{-\infty}^{\infty} p(x)\,dx = 1$; $F(x) = \int_{-\infty}^{x} p(\alpha)\,d\alpha$; and $P(x_1 < x \leq x_2) = \int_{x_1}^{x_2} p(x)\,dx$.
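A quick numerical check of p(x) = dF(x)/dx, using the standard Gaussian as an illustrative case:

```python
from math import erf, exp, pi, sqrt

# Verify p(x) = dF/dx numerically for a standard Gaussian RV.
def F(x):                      # CDF, expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p(x):                      # PDF
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

x, h = 0.7, 1e-5
print((F(x + h) - F(x - h)) / (2 * h))   # ~0.3123 (finite-difference dF/dx)
print(p(x))                              # ~0.3123 (PDF at the same point)
```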

(a) CDF and (b) PDF of a discrete random variable.


Expected Values & Moments

The expected value of a function g(x) of a continuous RV is defined as $E[g(x)] = \int_{-\infty}^{\infty} g(x)\, p(x)\, dx$. If the RV is discrete, the definition becomes $E[g(x)] = \sum_{i} g(x_i) P(x_i)$.


The expected value of x is equal to its average (or mean) value, defined as $m = E[x] = \int_{-\infty}^{\infty} x\, p(x)\, dx$, and for discrete RVs as $m = E[x] = \sum_{i} x_i P(x_i)$.


Of particular importance is the variance of an RV, which is normalized by subtracting the mean: $\sigma^2 = E[(x - m)^2]$ for continuous RVs, and $\sigma^2 = \sum_{i} (x_i - m)^2 P(x_i)$ for discrete RVs. The square root of the variance is called the standard deviation and is denoted by σ.


The nth central moment of a continuous RV is $\mu_n = E[(x - m)^n] = \int_{-\infty}^{\infty} (x - m)^n p(x)\, dx$, and for discrete variables $\mu_n = \sum_{i} (x_i - m)^n P(x_i)$, where we assume that n ≥ 0. Clearly, µ0 = 1, µ1 = 0, and µ2 = σ².
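A minimal sample-based sketch of the central moments; the synthetic data set is an illustrative choice:

```python
import numpy as np

# Sample estimates of the central moments mu_n = E[(x - m)^n]; for
# n = 0, 1, 2 we expect ~1, ~0, and ~sigma^2 respectively.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)   # m = 2, sigma = 3

def central_moment(samples, n):
    return np.mean((samples - samples.mean()) ** n)

print(central_moment(x, 0))             # 1.0
print(round(central_moment(x, 1), 4))   # ~0.0
print(round(central_moment(x, 2), 2))   # ~9.0  (sigma^2 = 3^2)
```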


The central moments are computed with the mean of the RV subtracted out; the moments about the origin are computed without subtracting the mean.

In image processing, moments are used for a variety of purposes, including histogram processing, segmentation, and description. In general, moments are used to characterize the PDF of an RV.


The second, third, and fourth central moments are intimately related to the shape of the PDF of an RV. The second central moment (the variance) is a measure of the spread of the values of an RV about its mean. The third central moment is a measure of the skewness (bias to the left or right) of the values of x about the mean; it vanishes for a symmetric PDF. The fourth central moment is a relative measure of flatness. In general, knowing all the moments of a density specifies that density.


Gaussian Probability Density Function

A random variable is called Gaussian if it has a probability density of the form $p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[-\frac{(x - m)^2}{2\sigma^2}\right]$. The CDF corresponding to the Gaussian density is $F(x) = \int_{-\infty}^{x} p(\alpha)\, d\alpha$.


(a) PDF and (b) CDF of a Gaussian random variable.


Gaussian Random Variables

The Gaussian distribution is determined by its mean and variance.

The sum of independent Gaussian random variables is Gaussian.

Random variables are jointly Gaussian if every linear combination of them is Gaussian.

Uncorrelated jointly Gaussian random variables are independent.

If random variables are jointly Gaussian, then the conditional expectation is linear.


Several Random Variables

A collection of random variables is a collection of functions of the outcome of the same random experiment. Here we extend the idea to multiple numerical observations about the same random experiment. Examples include:

We pick a ball randomly from a bag and note its weight X and its diameter Y.
We observe the temperature at a few different locations.
We measure the noise voltage at different times.
A transmitter sends some signal; the receiver observes the signal it receives and tries to guess which signal the transmitter sent.


In such cases, we might need two RVs; this maps our events onto the xy-plane.

Mapping from the sample space S to the joint sample space SJ (the xy-plane).


It is convenient to use vector notation when dealing with several RVs: we represent a vector random variable as $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$. Now, the CDF introduced earlier becomes $F(a_1, \ldots, a_n) = P(x_1 \leq a_1, \ldots, x_n \leq a_n)$.


As in the single-variable case, the PDF of an RV vector is defined in terms of derivatives of the CDF: $p(\mathbf{x}) = \partial^n F(\mathbf{x}) / (\partial x_1 \cdots \partial x_n)$. The expected value of a function of $\mathbf{x}$ is defined by $E[g(\mathbf{x})] = \int \cdots \int g(\mathbf{x})\, p(\mathbf{x})\, dx_1 \cdots dx_n$.


The joint moment of order kq becomes $\eta_{kq} = E[x^k y^q]$. It is easy to see that $\eta_{k0}$ is the kth moment of x and $\eta_{0q}$ is the qth moment of y. The moment $\eta_{11} = E[xy]$ is called the correlation of x and y.


If the condition $E[xy] = E[x]E[y]$ holds, then the two RVs are said to be uncorrelated. We know that if x and y are statistically independent, then p(x, y) = p(x)p(y), in which case $E[xy] = E[x]E[y]$. Thus, we see that if two RVs are statistically independent, then they are also uncorrelated; the converse of this statement is not true in general.


The joint central moment of order kq involving RVs x and y is $\mu_{kq} = E[(x - m_x)^k (y - m_y)^q]$, where $m_x = E[x]$ and $m_y = E[y]$ are the means of x and y. Note that $\mu_{20} = \sigma_x^2$ and $\mu_{02} = \sigma_y^2$ are the variances of x and y, respectively.


The moment $\mu_{11} = E[(x - m_x)(y - m_y)]$ is called the covariance of x and y. As in the case of correlation, the covariance is an important concept, usually given a special symbol such as $C_{xy}$.


By direct expansion of the terms inside the expected-value brackets, and recalling that $m_x = E[x]$ and $m_y = E[y]$, it is straightforward to show that $C_{xy} = E[xy] - m_x m_y$. From our discussion of correlation, we see that the covariance is zero if the random variables are either uncorrelated or statistically independent.


If we divide the covariance by the square root of the product of the variances, we obtain $\gamma = \mu_{11} / \sqrt{\mu_{20}\,\mu_{02}} = C_{xy} / (\sigma_x \sigma_y)$. The quantity γ is called the correlation coefficient of RVs x and y. It can be shown that γ lies in the range −1 ≤ γ ≤ 1 (the correlation coefficient is used in image processing for matching).
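A minimal numerical sketch of γ as normalized covariance; the synthetic data (y a noisy linear function of x) are illustrative:

```python
import numpy as np

# Correlation coefficient gamma = C_xy / (sigma_x * sigma_y), estimated
# from synthetic samples where y depends linearly on x plus noise.
rng = np.random.default_rng(2)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(scale=0.5, size=10_000)

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
gamma = cov_xy / (x.std() * y.std())

print(round(gamma, 3))                    # ~0.970, inside [-1, 1]
print(round(np.corrcoef(x, y)[0, 1], 3))  # same value via NumPy's built-in
```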


The Multivariate Gaussian Density Function

The multivariate Gaussian PDF is defined as $p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |C|^{1/2}} \exp\!\left[-\frac{1}{2} (\mathbf{x} - \mathbf{m})^T C^{-1} (\mathbf{x} - \mathbf{m})\right]$, where n is the dimensionality (number of components) of the random vector x, C is the covariance matrix (to be defined below), |C| is the determinant of matrix C, m is the mean vector (also to be defined below), and T indicates transposition.


The mean vector is defined as $\mathbf{m} = E[\mathbf{x}]$, and the covariance matrix is defined as $C = E[(\mathbf{x} - \mathbf{m})(\mathbf{x} - \mathbf{m})^T]$, where the element $c_{ij} = E[(x_i - m_i)(x_j - m_j)]$ is the covariance of $x_i$ and $x_j$.


Covariance matrices are real and symmetric.

The elements along the main diagonal of C are the variances of the elements of x, such that $c_{ii} = \sigma_{x_i}^2$.

When all the elements of x are uncorrelated or statistically independent, $c_{ij} = 0$ for i ≠ j, and the covariance matrix becomes a diagonal matrix. If all the variances are equal, then the covariance matrix becomes proportional to the identity matrix, with the constant of proportionality being the variance of the elements of x.
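A small sanity check on the definitions of m and C, drawing from a bivariate Gaussian with illustrative parameters:

```python
import numpy as np

# Sample from a bivariate Gaussian N(m, C) and verify that the sample mean
# and covariance recover m and C; the parameter values are illustrative.
m = np.array([1.0, -2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])    # real and symmetric, with c12 = c21

rng = np.random.default_rng(3)
samples = rng.multivariate_normal(m, C, size=200_000)

print(samples.mean(axis=0).round(2))           # ~[ 1.0, -2.0]
print(np.cov(samples, rowvar=False).round(2))  # ~[[2.0, 0.8], [0.8, 1.0]]
```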

As an example, consider the bivariate (n = 2) Gaussian PDF with mean vector $\mathbf{m} = [m_1, m_2]^T$ and covariance matrix $C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$.


Here, because C is known to be symmetric, c12 = c21. A schematic diagram of this density is shown in Part (a) of the following figure; Part (b) is a horizontal slice of Part (a).

The main directions of data spread are along the eigenvectors of C. If the variables are uncorrelated or statistically independent, the covariance matrix is diagonal and the eigenvectors point along the coordinate axes x1 and x2 (and the ellipse shown would be oriented along the x1- and x2-axes).


(a) Sketch of the joint density function of two Gaussian RVs. (b) A horizontal slice of (a).


If the variances along the main diagonal are equal, the density is symmetric in all directions (in the form of a bell) and Part (b) becomes a circle. Note in Parts (a) and (b) that the density is centered at the mean values (m1, m2).


Random Processes

We have looked at a finite number of random variables. In many applications, one is interested in the evolution in time of random variables. For instance:

One watches on an oscilloscope the noise across two terminals.
One may observe packets that arrive at an Internet router.
One may observe cosmic rays hitting a detector.


We explained that a collection of random variables is characterized by their joint CDF. Similarly, a random process is characterized by the joint CDF of any finite collection of its random variables.

These joint CDFs are called the finite-dimensional distributions of the random process. Obviously, to correspond to a random process, the finite-dimensional distributions must be consistent.


Random Signals

The concept of a random process is based on enlarging the RV concept to include time; a random process represents a family or ensemble of time functions.

Each member time function is called a sample function, an ensemble member, or a realization of the process. A complex discrete random signal, or discrete random process, is a sequence of RVs u(n).


A continuous random process.


Ergodicity of Random Processes

Roughly, a stochastic process is ergodic if statistics that do not depend on the initial phase of the process are constant; that is, such statistics do not depend on the realization of the process.

For instance, if you simulate an ergodic process, you need only one simulation run; it is representative of all possible runs.
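A minimal sketch of the "one run suffices" idea: for an ergodic process, the time average over a single realization matches the ensemble average. The process here (i.i.d. noise around 0.5) is an illustrative choice:

```python
import numpy as np

# Time average over one long realization vs. ensemble average over many
# realizations at a fixed instant; both estimate the same mean (0.5).
rng = np.random.default_rng(4)

one_run = 0.5 + rng.normal(size=100_000)    # a single long realization
ensemble = 0.5 + rng.normal(size=10_000)    # many runs, one time instant

print(round(one_run.mean(), 3))    # ~0.5 (time average of one run)
print(round(ensemble.mean(), 3))   # ~0.5 (ensemble average)
```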

Random Signals

Markov Process

A random process X(t) is Markov if, given X(t), the past and the future are independent.

Markov chains are examples of Markov processes. A process with independent increments is Markov. Note that a function of a Markov process may not be a Markov process.

Random Signals

Uncorrelated vs. independent.


Discrete Random Fields

In the statistical representation of images, each pixel is considered as an RV; we think of a given image as a sample function of an ensemble of images.

Ensemble of images (discrete random field); sample function (random image); pixel (RV).


Such an ensemble would be adequately defined by a joint PDF of the array of RVs. For practical image sizes, the number of RVs is very large (262,144 for 512×512 images), so it is difficult to specify a realistic joint PDF. One possibility is to specify the ensemble by its first- and second-order moments only.
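A minimal sketch of the first- and second-order description of an ensemble; the 8×8 noise images are synthetic stand-ins:

```python
import numpy as np

# First- and second-order moments of a discrete random field, estimated
# over a synthetic ensemble of 8x8 images (illustrative data only).
rng = np.random.default_rng(5)
ensemble = rng.normal(size=(1000, 8, 8))   # 1000 sample images

mean_image = ensemble.mean(axis=0)         # first-order moment, per pixel

# Second-order moment: correlation between two fixed pixel sites.
r = np.mean(ensemble[:, 2, 3] * ensemble[:, 2, 4])

print(round(float(mean_image.mean()), 3))  # ~0.0
print(round(float(r), 3))                  # small, ~0 (independent pixels)
```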


The Spectral Density Function


Some Results from Information Theory

Information theory gives some important concepts that are useful in the digital representation of images. Some of these concepts will be used in image quantization, image transforms, and image data compression. Information, entropy, and the rate-distortion function are the main issues of concern in this regard; they are briefly introduced in the following.


Entropy of a binary source.
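The curve captioned here is the binary entropy function; a minimal sketch to reproduce a few points on it:

```python
from math import log2

# Entropy of a binary source with symbol probability p:
# H(p) = -p log2(p) - (1 - p) log2(1 - p), maximized (1 bit) at p = 0.5.
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0   # a certain symbol carries no information
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.5, 0.9):
    print(p, round(binary_entropy(p), 3))   # 0.469, 1.0, 0.469 bits
```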


Rate-distortion function for a Gaussian source.
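For reference, the curve captioned here is the standard rate-distortion function of a memoryless Gaussian source with variance σ² under mean-squared-error distortion:

```latex
R(D) =
\begin{cases}
\dfrac{1}{2}\log_2\dfrac{\sigma^2}{D}, & 0 \le D \le \sigma^2,\\[6pt]
0, & D > \sigma^2 .
\end{cases}
```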


Detection

The detection problem is roughly as follows: we want to guess which of finitely many possible causes produced an observed effect. For instance:

You have a fever (observed effect); do you think you have the flu, a cold, or malaria?
You observe some strange shape on an X-ray; is it a cancer or some infection of the tissues?
A receiver gets a particular waveform; did the transmitter send the bit 0 or the bit 1? (Hypothesis testing is similar.)

There are two basic formulations: either we know the prior probabilities of the possible causes (Bayesian) or we do not (non-Bayesian). When we do not, we can look for the maximum likelihood (ML) detection, or we can formulate a hypothesis-testing problem. A sketch of both decision rules follows.
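A minimal sketch of the receiver example, contrasting ML (no priors) with the Bayesian (MAP) rule; the signal levels, noise, and priors are all illustrative assumptions:

```python
import numpy as np

# Detect which bit was sent over an additive-Gaussian-noise channel.
signals = {0: -1.0, 1: +1.0}     # transmitted levels for bits 0 and 1
sigma = 1.0                      # noise standard deviation
priors = {0: 0.9, 1: 0.1}        # prior bit probabilities (Bayesian case)

def likelihood(y, bit):
    """p(y | bit): Gaussian density of the received sample about the level."""
    return np.exp(-(y - signals[bit]) ** 2 / (2 * sigma**2))

y = 0.2                          # received sample

ml = max(signals, key=lambda b: likelihood(y, b))                # ignores priors
map_ = max(signals, key=lambda b: likelihood(y, b) * priors[b])  # uses priors
print(ml, map_)                  # 1 (ML), 0 (MAP: the strong prior wins)
```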


Estimation

The estimation problem is similar to the detection problem, except that the unobserved random variable X does not take values in a finite set. That is, one observes Y and must compute an estimate of X based on Y that is close to X in some sense.

Once again, one has Bayesian and non-Bayesian formulations. The non-Bayesian case typically uses maximum likelihood estimation, MLE[X|Y], defined as in the discussion of detection.


The End