In the name of Allah
the compassionate, the merciful
Digital Image Processing
S. Kasaei
Sharif University of Technology
Room: CE 307
E-Mail: skasaei@sharif.edu
Home Page: http://ce.sharif.edu
http://ipl.ce.sharif.edu
http://sharif.edu/~skasaei
Chapter 2
Two-Dimensional Systems & Mathematical Preliminaries
Notations & Definitions
1-D continuous signal: f(x), u(x), s(t)
1-D sampled signal: u(n), u_n
Continuous image: f(x, y), u(x, y), v(x, y)
Sampled image: u(m, n), u_{m,n}, u(i, j, k)
  [a 2-D (or higher-D) sequence of real numbers]
Complex conjugate: z*
Separable functions: f(x, y) = f1(x) f2(y)
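Separability lets a 2-D operation be computed as two 1-D passes. A minimal sketch with a hypothetical separable Gaussian (all values are illustrative):

```python
import numpy as np

# Hypothetical separable function: f(x, y) = f1(x) * f2(y),
# here a 2-D Gaussian built as the outer product of two 1-D Gaussians.
x = np.linspace(-2.0, 2.0, 5)
f1 = np.exp(-x**2)            # 1-D profile along x
f2 = np.exp(-x**2)            # 1-D profile along y
f = np.outer(f1, f2)          # f[i, j] = f1[i] * f2[j]

# Separability check: every entry equals the product of its 1-D factors.
assert np.allclose(f, f1[:, None] * f2[None, :])
```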
Linear Systems & Shift Invariance
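A linear shift-invariant system is completely characterized by its impulse response h(m, n), and its output is the 2-D convolution of the input with h. A minimal pure-Python sketch (the 3×3 box kernel is a hypothetical example):

```python
def conv2d_valid(u, h):
    """2-D convolution of image u with kernel h ('valid' output size)."""
    M, N = len(u), len(u[0])
    K, L = len(h), len(h[0])
    out = [[0.0] * (N - L + 1) for _ in range(M - K + 1)]
    for m in range(M - K + 1):
        for n in range(N - L + 1):
            s = 0.0
            for k in range(K):
                for l in range(L):
                    # flip the kernel: true convolution, not correlation
                    s += h[K - 1 - k][L - 1 - l] * u[m + k][n + l]
            out[m][n] = s
    return out

# A constant image convolved with a normalized 3x3 box stays constant.
u = [[1.0] * 5 for _ in range(5)]
h = [[1.0 / 9] * 3 for _ in range(3)]
y = conv2d_valid(u, h)   # 3x3 output, all entries 1.0
```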
The Fourier Transform
Spatial Frequency
Spatial frequency measures how fast the image intensity changes in the image plane.
Spatial frequency can be completely characterized by the variation frequencies in two orthogonal directions (e.g., horizontal & vertical):
fx: cycles per horizontal unit distance.
fy: cycles per vertical unit distance.
It can also be specified by magnitude and angle of change:
f_m = √(fx² + fy²),  θ = arctan(fy / fx)
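A quick numeric check of the magnitude and angle (the fx, fy values are hypothetical):

```python
import math

fx, fy = 3.0, 4.0                  # hypothetical cycles per unit distance
f_m = math.hypot(fx, fy)           # magnitude: sqrt(fx^2 + fy^2)
theta = math.atan2(fy, fx)         # angle of change: arctan(fy / fx)
```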
Illustration of Spatial Frequency
Angular Frequency
θ = 2 arctan(h / 2d) (radian) ≈ h/d (radian) = (180 / π)(h / d) (degree)
f_θ = f_s h / θ = (π d / 180) f_s (cycle/degree)

Problem with the previously defined spatial frequency:
the perceived speed of change depends on the viewing distance.
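A sketch of the conversion implied by the formula above (h is the picture height, d the viewing distance, f_s in cycles per unit distance; all values hypothetical):

```python
import math

def angular_frequency(f_s, d):
    """Cycles/degree from cycles/unit-distance at viewing distance d,
    using the small-angle form f_theta = pi * d * f_s / 180."""
    return math.pi * d * f_s / 180.0

# Doubling the viewing distance doubles the angular frequency:
# exactly the distance dependence that motivates this definition.
assert math.isclose(2 * angular_frequency(2.0, 1.0), angular_frequency(2.0, 2.0))
```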
The Fourier Transform
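A direct (unoptimized) sketch of the 2-D DFT definition, F(k, l) = Σ_m Σ_n u(m, n) e^{−j2π(km/M + ln/N)}, on a hypothetical 2×2 image; the DC coefficient F(0, 0) is the sum of all samples:

```python
import cmath

def dft2(u):
    """Direct 2-D DFT of an M x N image (list of lists)."""
    M, N = len(u), len(u[0])
    return [[sum(u[m][n] * cmath.exp(-2j * cmath.pi * (k * m / M + l * n / N))
                 for m in range(M) for n in range(N))
             for l in range(N)] for k in range(M)]

u = [[1.0, 2.0], [3.0, 4.0]]
F = dft2(u)
# F[0][0] is the sum of all samples: 10
```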
The Z-Transform
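For a finite causal sequence, X(z) = Σ_n u(n) z^{−n}; evaluated on the unit circle at z = e^{j2πk/N} it reduces to the DFT. A sketch with a hypothetical sequence:

```python
import cmath

def z_transform(u, z):
    """X(z) = sum_n u(n) * z**(-n) for a finite causal sequence u."""
    return sum(un * z ** (-n) for n, un in enumerate(u))

u = [1.0, 2.0, 3.0, 4.0]
N = len(u)
# Evaluating X(z) on the unit circle gives the DFT of u.
dft = [z_transform(u, cmath.exp(2j * cmath.pi * k / N)) for k in range(N)]
# dft[0] equals sum(u) = 10
```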
Matrix Theory Results
(a) 2-D Cartesian coordinate representation. (b) Matrix representation.
Red line: direction of the first principal component, Green line: direction of the second principal component.
Data set. Principal axes (eigenvectors). Rotated data set.
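The principal axes in the figure are the eigenvectors of the data covariance matrix, and rotating the data onto them decorrelates the coordinates. A NumPy sketch with hypothetical correlated data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical correlated 2-D data set.
x = rng.normal(size=500)
data = np.stack([x, 0.5 * x + 0.1 * rng.normal(size=500)], axis=1)

centered = data - data.mean(axis=0)
C = np.cov(centered.T)                 # 2x2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

# The first principal component (red line in the figure) is the eigenvector
# of the largest eigenvalue; projecting onto the eigenvectors rotates the
# data so its coordinates become uncorrelated.
rotated = centered @ eigvecs
```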
Block Matrices & Kronecker Products
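A standard identity in this topic is the mixed-product property (A⊗B)(C⊗D) = (AC)⊗(BD). A NumPy sketch with hypothetical matrices:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
C = np.array([[2., 0.], [0., 2.]])
D = np.array([[1., 1.], [0., 1.]])

left = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
right = np.kron(A @ C, B @ D)          # (AC) ⊗ (BD)
assert np.allclose(left, right)        # mixed-product property
```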
Probability, Random Variables, & Random Signal Processing
A Brief Review
References

1. Review of Probability, Rafael C. Gonzalez and Richard E. Woods, 2002.
2. Lecture Notes on Probability Theory and Random Processes, Jean Walrand, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, 2004.
3. Probability, Random Variables, and Random Signal Principles, Peyton Z. Peebles, Jr., McGraw-Hill, 3rd Edition, 1993, ISBN 0-07-112782-8.
4. Probability, Random Variables, and Stochastic Processes, Athanasios Papoulis, McGraw-Hill, 1991 (QA 273 .P2 1991).
Famous Theoreticians of the Field
(1654-1987)
Jacob BERNOULLI 1654-1705
Abraham DE MOIVRE 1667-1754
Thomas BAYES 1701-1761
Thomas SIMPSON 1710-1761
Pierre Simon LAPLACE 1749-1827
Adrien Marie LEGENDRE 1752-1833
Carl Friedrich GAUSS 1777-1855
Andrei Andreyevich MARKOV 1856-1922
Andrei Nikolaevich KOLMOGOROV 1903-1987
Modeling Uncertainty
In this lecture we introduce the concept of a model of an uncertain physical system and we stress the importance of concepts that justify the structure of the theory.
Modeling Uncertainty: Models and Physical Reality
General concept:
physical world → uncertain outcomes → model of uncertainty (Probability Theory)
“Probability Theory” is a mathematical model of
uncertainty.
It is important to appreciate the difference between
uncertainty in the physical world and the models of “Probability Theory”.
That difference is similar to that between the real world
and laws of theoretical physics.
Consider flipping a fair coin repeatedly. Designate by 0
and 1 the two possible outcomes of a coin flip (say 0 for head and 1 for tail). This experiment takes place in the physical world. The outcomes are uncertain. Here we try to appreciate the probability model of this
experiment and to relate it to the physical reality.
Modeling Uncertainty: Function of Hidden Variable
One idea is that the uncertainty in the world is fully
contained in the selection of some hidden variable.
If this variable was known, then nothing would be
uncertain anymore.
In other words, everything that is uncertain is a function of that hidden variable.
By function, we mean that if we know the hidden
variable, then we know everything else.
Everything that is random is some function X of some
hidden variable. If we designate the outcome of the 5th coin flip by X,
then we conclude that X is a function of w. We can denote that function by X(w).
Some Basic Definitions
Probability Space:
Now, we describe the probability model of “choosing
an object at random" .
We explain that the key idea is to associate a likelihood, which we call probability, to sets of outcomes (not to individual outcomes).
These sets are events. The description of the events and of their probability
constitute a probability space that completely characterizes a random experiment.
Some Basic Definitions
Events:
Probability events are modeled as sets. The sets of outcomes to which one assigns a
probability are called events (the event of getting a tail).
It is not necessary (and often not possible) for every
set of outcomes to be an event.
Some Basic Definitions
Probability Space:
Putting together the observations of the sections above, we
have defined a probability space as follows:
A probability space is a triplet {Q, F, P} where:
Q is a nonempty set, called the sample space.
F is a collection of subsets of Q, closed under countable set operations. The elements of F are called events.
P is a countably additive function from F into [0, 1] such that P(Q) = 1, called a probability measure.
Example:
Books at SUT library (sample space)
CE books (events)
Probability of existence of a special book (probability measure)
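As a toy stand-in for the library example, a finite probability space {Q, F, P} with a uniform measure (a hypothetical die roll; F is implicitly all subsets of Q):

```python
from fractions import Fraction

# Q = outcomes of one die roll; P = uniform measure on all subsets.
Q = frozenset(range(1, 7))

def P(event):
    """Probability measure: P(A) = |A| / |Q| for the uniform case."""
    assert event <= Q          # events must be subsets of the sample space
    return Fraction(len(event), len(Q))

even = frozenset({2, 4, 6})
assert P(Q) == 1                                   # P(Q) = 1
assert P(even) == Fraction(1, 2)
# Additivity on disjoint events:
assert P(even | frozenset({1})) == P(even) + P(frozenset({1}))
```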
Some Basic Definitions
Examples will clarify the probability space definition:
The main point is that one defines the probability of
sets of outcomes (the events).
The probability should be countably additive (to be continuous).
Accordingly (to be able to write down this property),
and also quite intuitively, the collection of events should be closed under countable set operations.
Sets & Set Operations
A set is a collection of objects, with each object in a
set often referred to as an element or member of the set (e.g., the set of all image processing books).
The set with no elements is called the empty or null
set, denoted by the symbol Ø.
The sample space is defined as the set containing
all elements of interest in a given situation.
Basic Set Operations
The union of two sets A and B is denoted by A ∪ B.
The intersection of two sets A and B is denoted by A ∩ B.
Two sets having no elements in common are said to be disjoint or mutually exclusive: A ∩ B = Ø.
Basic Set Operations (Venn Diagram)
It is often quite useful to represent sets and set operations in a so-called Venn diagram, in which:
S is represented as a rectangle.
Sets are represented as areas (typically circles).
Points are associated with elements.
The following example shows various uses of Venn
diagrams.
The shaded areas are the result (sets of points).
Basic Set Operations (Venn Diagrams)
The top row is self-explanatory. The diagrams in the bottom row are used to prove the validity of an expression used in the proof of some probability relationships.
Relative Frequency & Probability

A random experiment is an experiment in which it is not possible to predict the outcome (e.g., the tossing of a coin).
The probability of an event is denoted by P(event). For an event A, the relative-frequency definition is P(A) = lim_{n→∞} n_A / n, where n_A is the number of occurrences of A in n trials.
Relative Frequency & Probability

If A and B are mutually exclusive, the set AB is empty and consequently P(AB) = 0. The conditional probability is denoted by P(A|B), which refers to the probability of A given B.

P(∪_{n=1}^{N} A_n) = Σ_{n=1}^{N} P(A_n),  if A_m ∩ A_n = Ø for m ≠ n
Conditional Probability

Assume that we know that the outcome is in B. Given that information, what is the probability that the outcome is in A?
This probability is written as P[A|B] and is read "the conditional probability of A given B," or "the probability of A given B" for short.
Relative Frequency & Probability

A little manipulation of the preceding results yields the important relationships P(AB) = P(A|B)P(B) and P(AB) = P(B|A)P(A). The second expression may be rewritten as
P(A|B) = P(B|A) P(A) / P(B),
which is known as Bayes' theorem, so named after the 18th-century mathematician Thomas Bayes. Here P(A) is the a priori probability, P(B|A) the likelihood, and P(A|B) the a posteriori probability.
Bayes' Theorem
The importance of the prior distribution: Bayes' rule.
Bayes understood how to include systematically the
information about the prior distribution (a priori) in the calculation of the posterior distribution (a posteriori).
He discovered what we know today as Bayes' rule, a
simple but very useful identity.
Bayes' Theorem
This formula extends to a finite number of events Bn that partition Q: P(Bn | A) = P(A | Bn) P(Bn) / Σ_k P(A | Bk) P(Bk).
Think of the Bn as possible “causes” of some effect A.
You know the prior probabilities P(Bn) of the causes and
also the probability that each cause provokes the effect A.
The formula tells you how to calculate the probability that
a given cause has provoked the observed effect.
Applications abound, as we will see in detection theory. For instance:
Your alarm can sound either when there is a burglar or when there is none (false alarm). Given that the alarm sounds, what is the probability that it is a false alarm?
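The alarm question can be worked numerically with Bayes' rule; all probabilities below are hypothetical:

```python
# Hypothetical numbers: P(burglar) = 0.001; the alarm fires with
# probability 0.95 given a burglar, and 0.01 with no burglar.
p_b = 0.001
p_alarm_given_b = 0.95
p_alarm_given_not_b = 0.01

# Total probability of the alarm sounding, then Bayes' rule.
p_alarm = p_alarm_given_b * p_b + p_alarm_given_not_b * (1 - p_b)
p_b_given_alarm = p_alarm_given_b * p_b / p_alarm

# Given that the alarm sounds, the probability it is a FALSE alarm
# is high despite the good detector, because burglars are rare.
p_false_alarm = 1 - p_b_given_alarm
```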
If A and B are statistically independent, then P(B|A) = P(B), and it follows that P(A|B) = P(A) and P(AB) = P(A)P(B). If they are mutually exclusive, A ∩ B = Ø, from which it follows that P(AB) = P(A ∩ B) = 0. So two sets are statistically independent if P(AB) = P(A)P(B), which we assume to be nonzero in general. Thus, for two events to be statistically independent, they cannot be mutually exclusive.
Relative Frequency & Probability
In general, for N events to be statistically independent, it must be true that, for all combinations 1 ≤ i < j < k < … ≤ N:
P(A_i A_j) = P(A_i) P(A_j),  P(A_i A_j A_k) = P(A_i) P(A_j) P(A_k),  …,  P(A_1 A_2 ⋯ A_N) = P(A_1) P(A_2) ⋯ P(A_N).
Random Variables
The definition is:
“A real random variable (RV) is a measurable
real-valued function of the outcome of a random experiment”.
Physical examples:
Noise voltage at a given time and place. Temperature at a given time and place. Height of the next person to enter the room.
Random Variables

A real random variable, x, is a real-valued function defined on the events of a sample space, S (i.e., X(s)).
In words, for each event in S, there is a real
number that is the corresponding value of the RV.
Viewed yet another way, a RV maps each event
in S onto the real line.
A complex RV, Z, can be defined in terms of real RVs, X and Y, by Z = X + jY (the joint density of X and Y must be used).
Random Variables
A random variable mapping of a sample space.
Random Variables
Thus far we have been concerned with discrete random variables. In the discrete case, the probabilities of events are numbers between 0 and 1. When dealing with continuous quantities (which are not denumerable) we can no longer talk about the "probability of an event" because that probability is zero.
Random Variables
Thus, instead of talking about the probability of a specific value, we talk about the probability that the value of the RV lies in a specified range.
In particular, we are interested in the probability that the RV is less than or equal to a specified constant a: F(a) = P(x ≤ a).
Random Variables
- Function F is called the cumulative probability
distribution function or simply the cumulative distribution function (CDF).
- If this function is given for all values of a (i.e., −∞ < a <
∞), then the values of RV x have been defined.
Random Variables
F(x⁺) = F(x), where x⁺ = x + ε, with ε being a positive, infinitesimally small number (in words, F(x) is continuous from the right).
- Due to the fact that it is a probability, the CDF has the
following properties:
Random Variables
- The probability density function (PDF) of the RV x is defined as the derivative of the CDF: p(x) = dF(x)/dx.
- The PDF satisfies the following properties:
Random Variables
(a) CDF and (b) PDF of a discrete random variable.
Expected Values & Moments
The expected value of a function g(x) of a continuous RV is defined as E[g(x)] = ∫ g(x) p(x) dx. If the RV is discrete, the definition becomes E[g(x)] = Σ_i g(x_i) P(x_i).
Expected Values & Moments
The expected value of x is equal to its average (or mean) value: m = E[x] = ∫ x p(x) dx, and for discrete RVs m = Σ_i x_i P(x_i).
Of particular importance is the variance of an RV, the second moment after subtracting the mean: σ² = E[(x − m)²]. The square root of the variance is called the standard deviation, denoted by σ.
Expected Values & Moments
The nth central moment of a continuous RV is µ_n = E[(x − m)^n] = ∫ (x − m)^n p(x) dx; for discrete variables, µ_n = Σ_i (x_i − m)^n P(x_i), where we assume n ≥ 0. Clearly, µ0 = 1, µ1 = 0, and µ2 = σ².
Expected Values & Moments
The central moments: the mean of the RV has been subtracted out. The moments about the origin: the mean is not subtracted out.
In image processing, moments are used for a variety of purposes, including histogram processing, segmentation, and description. In general, moments are used to characterize the PDF of an RV.
Expected Values & Moments
Expected Values & Moments
The second, third, and fourth central moments are intimately related to the shape of the PDF of an RV. The second central moment (the variance) is a measure of spread of the values of an RV about its mean value. The third central moment is a measure of skewness (bias to the left or right) of the values of x about the mean; it is zero for a symmetric PDF. The fourth central moment is a relative measure of flatness. In general, knowing all the moments of a density specifies that density.
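The central moments can be estimated directly from samples; a minimal sketch with a hypothetical data set:

```python
def central_moment(xs, n):
    """n-th central moment: E[(x - mean)^n], estimated from samples."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** n for x in xs) / len(xs)

xs = [1.0, 2.0, 2.0, 3.0]           # hypothetical samples, mean 2
mu0 = central_moment(xs, 0)          # always 1
mu1 = central_moment(xs, 1)          # always 0
var = central_moment(xs, 2)          # spread about the mean
skew = central_moment(xs, 3)         # 0 here: values symmetric about 2
```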
Gaussian Probability Density Function
A random variable is called Gaussian if it has a probability density of the form p(x) = (1 / (√(2π) σ)) exp(−(x − m)² / (2σ²)). The CDF corresponding to the Gaussian density is F(x) = ∫_{−∞}^{x} p(α) dα.
Gaussian Probability Density Function
(a) PDF and (b) CDF of a Gaussian random variable.
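The Gaussian PDF and CDF can be evaluated numerically; the CDF has no elementary closed form, so the sketch below uses Python's math.erf:

```python
import math

def gaussian_pdf(x, m=0.0, sigma=1.0):
    """p(x) = exp(-(x - m)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)."""
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def gaussian_cdf(x, m=0.0, sigma=1.0):
    """F(x) expressed through the error function."""
    return 0.5 * (1 + math.erf((x - m) / (sigma * math.sqrt(2))))

# The CDF at the mean is exactly 1/2, matching the symmetric PDF.
assert abs(gaussian_cdf(0.0) - 0.5) < 1e-12
```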
Gaussian Random Variables
The Gaussian distribution is determined by its mean
and variance.
The sum of independent Gaussian random variables
is Gaussian.
Random variables are jointly Gaussian if an arbitrary
linear combination is Gaussian.
Uncorrelated jointly Gaussian random variables are
independent.
If random variables are jointly Gaussian, then the
conditional expectation is linear.
Several Random Variables
A collection of random variables is a collection of
functions of the outcome of the same random experiment.
Here we extend the idea to multiple numerical observations about the same random experiment.
Examples include:
We pick a ball randomly from a bag and we note its
weight X and its diameter Y.
We observe the temperature at a few different
locations.
We measure the noise voltage at different times.
A transmitter sends some signal; the receiver observes the signal it receives and tries to guess which signal the transmitter sent.
Several Random Variables
In this case, we might need two RVs. This maps our events onto the xy-plane.
Mapping from the sample space S to the joint sample space S_J (the xy-plane).
It is convenient to use vector notation when dealing with several RVs. Thus, we represent a vector random variable x as x = [x1, x2, …, xn]^T. The CDF introduced earlier becomes F(a) = P(x1 ≤ a1, …, xn ≤ an).
Several Random Variables
Several Random Variables
As in the single-variable case, the PDF of an RV vector is defined in terms of derivatives of the CDF: p(x) = ∂ⁿF(x) / ∂x1 ⋯ ∂xn. The expected value of a function of x is defined by E[g(x)] = ∫ g(x) p(x) dx.
Several Random Variables
The joint moment becomes η_kq = E[x^k y^q]. It is easy to see that η_k0 is the kth moment of x and η_0q is the qth moment of y. The moment η11 = E[xy] is called the correlation of x and y.
If the condition E[xy] = E[x]E[y] holds, then the two RVs are said to be uncorrelated. We know that if x and y are statistically independent, then p(x, y) = p(x)p(y), in which case E[xy] = E[x]E[y]. Thus, we see that if two RVs are statistically independent then they are also uncorrelated. The converse of this statement is not true in general.
Several Random Variables
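A standard counterexample for the converse: X uniform on {−1, 0, 1} and Y = X² are uncorrelated yet clearly dependent. A sketch with exact arithmetic:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}; Y = X^2. Each point has probability 1/3.
points = [(-1, 1), (0, 0), (1, 1)]
p = Fraction(1, 3)

E = lambda f: sum(p * f(x, y) for x, y in points)
Exy = E(lambda x, y: x * y)
Ex, Ey = E(lambda x, y: x), E(lambda x, y: y)

assert Exy - Ex * Ey == 0        # uncorrelated: E[xy] = E[x]E[y]
# ...but NOT independent: P(X=1, Y=0) = 0, while P(X=1)*P(Y=0) = 1/9.
```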
The joint central moment of order kq involving RVs x and y is µ_kq = E[(x − m_x)^k (y − m_y)^q], where m_x = E[x] and m_y = E[y] are the means of x and y. Note that µ20 = E[(x − m_x)²] and µ02 = E[(y − m_y)²] are the variances of x and y, respectively.
Several Random Variables
The moment µ11 is called the covariance of x and y. As in the case of correlation, the covariance is an important concept, usually given a special symbol such as Cxy.
Several Random Variables
By direct expansion of the terms inside the expected-value brackets, and recalling that m_x = E[x] and m_y = E[y], it is straightforward to show that C_xy = E[xy] − m_x m_y. From our discussion on correlation, we see that the covariance is zero if the random variables are either uncorrelated or statistically independent.
Several Random Variables
If we divide the covariance by the square root of the product of the variances, we obtain γ = C_xy / (σ_x σ_y). The quantity γ is called the correlation coefficient of RVs x and y. It can be shown that γ lies in the range −1 ≤ γ ≤ 1 (the correlation coefficient is used in image processing for matching).
Several Random Variables
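The correlation coefficient underlies template matching; a minimal pure-Python sketch with hypothetical signals:

```python
import math

def corrcoef(xs, ys):
    """gamma = C_xy / (sigma_x * sigma_y); always lies in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

xs = [1.0, 2.0, 3.0, 4.0]
assert abs(corrcoef(xs, [2 * x + 1 for x in xs]) - 1.0) < 1e-12  # perfect match
assert abs(corrcoef(xs, [-x for x in xs]) + 1.0) < 1e-12         # anti-match
```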
The multivariate Gaussian PDF is defined as p(x) = (1 / ((2π)^{n/2} |C|^{1/2})) exp(−(1/2)(x − m)^T C^{−1} (x − m)), where n is the dimensionality (number of components) of the random vector x, C is the covariance matrix (to be defined below), |C| is the determinant of matrix C, m is the mean vector (also to be defined below), and T indicates transposition.
The Multivariate Gaussian Density Function
The mean vector is defined as m = E[x], and the covariance matrix is defined as C = E[(x − m)(x − m)^T], where c_ij = E[(x_i − m_i)(x_j − m_j)].

The Multivariate Gaussian Density Function
The Multivariate Gaussian Density Function
Covariance matrices are real and symmetric.
- The elements along the main diagonal of C are the variances of the elements of x, such that c_ii = σ_xi².
- When all the elements of x are uncorrelated or statistically independent, c_ij = 0 (i ≠ j), and the covariance matrix becomes diagonal. If all the variances are equal, the covariance matrix becomes proportional to the identity matrix, with the constant of proportionality being the variance of the elements of x.
As an example, consider the bivariate (n = 2) Gaussian PDF, with mean vector m = [m1, m2]^T and covariance matrix C = [[c11, c12], [c21, c22]].
The Multivariate Gaussian Density Function
Because C is known to be symmetric, c12 = c21. A schematic diagram of this density is shown in Part (a) of the following figure; Part (b) is a horizontal slice of Part (a).
The main directions of data spread are along the eigenvectors of C. If the variables are uncorrelated or statistically independent, the covariance matrix is diagonal and the eigenvectors point along the coordinate axes x1 and x2 (and the ellipse shown would be oriented along the x1- and x2-axes).
The Multivariate Gaussian Density Function
The Multivariate Gaussian Density Function
(a) Sketch of the joint density function of two Gaussian RVS. (b) A horizontal slice of (a).
If the variances along the main diagonal are equal, the density is symmetric in all directions (in the form of a bell) and Part (b) would be a circle. Note in Parts (a) and (b) that the density is centered at the mean values (m1, m2).
The Multivariate Gaussian Density Function
Random Processes
We have looked at a finite number of random
variables.
In many applications, one is interested in the
evolution in time of random variables.
For instance:
One watches on an oscilloscope the noise across two
terminals.
One may observe packets that arrive at an Internet
router.
One may observe cosmic rays hitting a detector.
Random Processes
We explained that a collection of random variables is
characterized by their joint CDF.
Similarly, a random process is characterized by the
joint CDF of any finite collection of the random variables.
These joint CDFs are called the finite-dimensional distributions of the random process.
Obviously, to correspond to a random process, the
finite dimensional distributions must be consistent.
Random Signals
The concept of a random process is based on enlarging the RV concept to include time. A random process represents a family or ensemble of time functions.
Each member time function is called a sample function, ensemble member, or a realization of the process. A complex discrete random signal, or discrete random process, is a sequence of RVs u(n).
Random Signals
A continuous random process.
Ergodicity of Random Processes
Roughly, a stochastic process is ergodic if statistics
that do not depend on the initial phase of the process are constant.
That is, such statistics do not depend on the
realization of the process.
For instance, if you simulate an ergodic process, you
need only one simulation run; it is representative of all possible runs.
Markov Process
A random process X(t) is Markov if: given X(t), the
past and the future are independent.
Markov chains are examples of Markov processes. A process with independent increments is Markov. Note that a function of a Markov process may not be a Markov process.
Discrete Random Fields
In the statistical representation of an image, each pixel is considered as an RV. We think of a given image as a sample function of an ensemble of images.
Ensemble of images or discrete random field. Sample function or random image. R.V.
Discrete Random Fields
Such an ensemble would be adequately defined by a joint PDF of the array of RVs. For practical image sizes, the number of RVs is very large (262,144 for 512×512 images). Thus, it is difficult to specify a realistic joint PDF. One possibility is to specify the ensemble by its first- and second-order moments only.
The Spectral Density Function
Some Results from Information Theory
Information theory gives some important concepts that are useful in the digital representation of images. Some of these concepts will be used in image quantization, image transforms, and image data compression. Information, entropy, and the rate-distortion function are the main issues of concern in this regard; they are briefly introduced in the following.
Entropy of a binary source.
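The curve in the figure is the binary entropy function H(p) = −p log₂ p − (1 − p) log₂(1 − p), which peaks at 1 bit for p = 1/2. A sketch:

```python
import math

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by the convention 0*log(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

assert binary_entropy(0.5) == 1.0   # maximum uncertainty: 1 bit
assert binary_entropy(0.0) == 0.0   # a deterministic source carries no information
```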
Rate-distortion function for a Gaussian source.
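For a memoryless Gaussian source with variance σ², the rate-distortion function in the figure is R(D) = ½ log₂(σ²/D) for 0 < D ≤ σ², and 0 beyond. A sketch:

```python
import math

def rate_distortion_gaussian(D, sigma2):
    """R(D) = 0.5 * log2(sigma^2 / D) bits for 0 < D < sigma^2, else 0."""
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0

assert rate_distortion_gaussian(0.25, 1.0) == 1.0  # quartering the distortion costs 1 bit
assert rate_distortion_gaussian(1.0, 1.0) == 0.0   # D = sigma^2: zero rate suffices
```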
Detection
The detection problem is roughly as follows. We want
to guess which of finitely many possible causes produced an observed effect. For instance:
You have a fever (observed effect); do you think you have the flu, a cold, or malaria?
You observe some strange shape on an X-ray; is it a
cancer or some infection of the tissues?
A receiver gets a particular waveform; did the
transmitter send the bit 0 or the bit 1? (Hypothesis testing is similar.)
There are two basic formulations: either we know the
prior probabilities of the possible causes (Bayesian) or we do not (non-Bayesian). When we do not, we can look for the maximum likelihood (ML) detection or we can formulate a hypothesis-testing problem.
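A minimal sketch of ML detection under hypothetical likelihoods: observe y and pick the cause that maximizes P(y | cause), with no prior needed:

```python
# Hypothetical channel: the transmitter sends bit 0 or bit 1, and the
# receiver observes a "low" or "high" reading with these likelihoods.
likelihood = {
    "bit0": {"low": 0.9, "high": 0.1},   # P(y | transmitter sent 0)
    "bit1": {"low": 0.2, "high": 0.8},   # P(y | transmitter sent 1)
}

def ml_detect(y):
    """Maximum-likelihood detection: argmax over causes of P(y | cause)."""
    return max(likelihood, key=lambda cause: likelihood[cause][y])

assert ml_detect("low") == "bit0"
assert ml_detect("high") == "bit1"
```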
Kasaei 196
Estimation
The estimation problem is similar to the detection
problem except that the unobserved random variable X does not take values in a finite set.
That is, one observes Y and must compute an
estimate of X based on Y that is close to X in some sense.
Once again, one has Bayesian and non-Bayesian formulations. The non-Bayesian case typically uses