EE456 Digital Communications, Professor Ha Nguyen, September 2016 (PowerPoint presentation)

SLIDE 1

Chapter 3: Probability, Random Variables, Random Processes

EE456 – Digital Communications

Professor Ha Nguyen September 2016

EE456 – Digital Communications 1

SLIDE 2

What is a Random Variable?

Figure 1: An example of a random variable: outcomes in the sample space Ω (here the six faces of a die, 1 through 6) are mapped to real numbers in R.

Sample space: the collection of all possible outcomes of a random experiment. Strictly speaking, a random variable is a mapping from the sample space to the set of real numbers. Loosely speaking, a random variable is a numerical quantity that can take on different values.

SLIDE 3

Cumulative Distribution Function (cdf)

Of course, not all random variables are the same. A random variable is completely characterized (described) by its cumulative distribution function (cdf) or its probability density function (pdf). A cdf or a pdf can be used to determine the probability (i.e., chance, level of confidence) that a random variable takes a value in a certain range (negative, positive, between 1 and 2, etc.). The cdf of random variable x is defined as: Fx(x) = P(x ≤ x). The cdf has the following properties:

  • 1. 0 ≤ Fx(x) ≤ 1.
  • 2. Fx(x) is nondecreasing: Fx(x1) ≤ Fx(x2) if x1 ≤ x2.
  • 3. Fx(−∞) = 0 and Fx(+∞) = 1.
  • 4. P (a < x ≤ b) = Fx(b) − Fx(a).

SLIDE 4

Typical Plots of a cdf

A random variable can be discrete, continuous or mixed.

Figure: typical plots of the cdf Fx(x) versus x, rising from 0 at −∞ to 1 at +∞, for (a) a discrete, (b) a continuous, and (c) a mixed random variable.

SLIDE 5

Probability Density Function (pdf)

The pdf is defined as the derivative of the cdf: fx(x) = dFx(x)/dx.

Figure: P(x ∈ Π) = ∫_Π fx(x) dx is the area under fx(x) over the set Π; the total area under fx(x) is 1.

Basic properties of a pdf:

  • 1. fx(x) ≥ 0 (a valid pdf must be nonnegative).
  • 2. ∫_{−∞}^{∞} fx(x) dx = 1 (the total area under a pdf curve must be 1).
  • 3. P(x1 ≤ x ≤ x2) = P(x ≤ x2) − P(x ≤ x1) = Fx(x2) − Fx(x1) = ∫_{x1}^{x2} fx(x) dx.
  • 4. In general, P(x ∈ Π) = ∫_Π fx(x) dx.
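The pdf properties above can be checked numerically for the N(0, 1) density. The course's examples use Matlab; the sketch below uses Python's standard library instead (the names gauss_pdf and gauss_cdf are illustrative), approximating the integrals with Riemann sums:

```python
import math

# Standard Gaussian pdf, and cdf expressed through the error function.
def gauss_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def gauss_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Property 2: total area under the pdf is 1 (Riemann sum over a wide range).
dx = 0.001
area = sum(gauss_pdf(-8 + k * dx) * dx for k in range(int(16 / dx)))
assert abs(area - 1.0) < 1e-3

# Property 3: P(x1 <= x <= x2) = F(x2) - F(x1).
x1, x2 = -1.0, 2.0
p_int = sum(gauss_pdf(x1 + k * dx) * dx for k in range(int((x2 - x1) / dx)))
assert abs(p_int - (gauss_cdf(x2) - gauss_cdf(x1))) < 1e-3
```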

SLIDE 6

Bernoulli Random Variable

Figure: pdf fx(x) (probability masses 1 − p at x = 0 and p at x = 1) and the corresponding staircase cdf Fx(x) of a Bernoulli random variable.

A discrete random variable that takes the two values 1 and 0 with probabilities p and 1 − p, respectively. A good model for a binary data source whose output is 1 or 0. It can also be used to model channel errors.
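A Bernoulli data source is easy to simulate. The slides' numerical examples use Matlab; the sketch below uses Python's random module with an illustrative value p = 0.3 (not from the slides):

```python
import random

random.seed(1)
p = 0.3  # illustrative success probability, not from the slides

# Simulate a Bernoulli(p) data source: output 1 with probability p, else 0.
bits = [1 if random.random() < p else 0 for _ in range(100000)]

# The empirical frequency of 1s (which equals the empirical mean of a
# Bernoulli random variable) should be close to p.
freq = sum(bits) / len(bits)
assert abs(freq - p) < 0.01
```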

SLIDE 7

Uniform Random Variable

Figure: pdf fx(x) = 1/(b − a) on the interval [a, b] (zero elsewhere) and the corresponding ramp-shaped cdf Fx(x) of a uniform random variable.

A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length. The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.

SLIDE 8

Gaussian (or Normal) Random Variable

Figure: pdf fx(x) of a Gaussian random variable, peaking at 1/√(2πσ²) at x = µ, and the corresponding cdf Fx(x), which passes through 1/2 at x = µ.

A continuous random variable whose pdf is:

fx(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)),

where µ and σ² are parameters. Usually denoted as N(µ, σ²). It is the most important and most frequently encountered random variable in communications.

SLIDE 9

Uniform or Gaussian?

Figure: outcome versus trial number for four sequences of 100 trials each; in two of them the outcomes are confined to [−2, 2], in the other two they spread over roughly [−5, 5].

SLIDE 10

Gaussian RVs with Different Average (Mean) Values

Notice the connection between the pdf and observed values

Figure: for two Gaussian random variables with different means, 100 observed outcomes (outcome versus trial number) are shown next to the corresponding pdfs fx(x); the observations cluster around each pdf's mean.

SLIDE 11

Gaussian RVs with Different Average (Mean) Squared Values

Notice the connection between the pdf and observed values

Figure: for two Gaussian random variables with different mean-squared values (variances), 100 observed outcomes are shown next to the corresponding pdfs fx(x); the larger the variance, the more the observations spread.

SLIDE 12

Histogram of Observed Values

Predicting the pdf based on the observed values.

Figure: for two random variables, 1000 observed outcomes are shown next to histograms of those observations (bin count versus bin); the histogram shapes suggest the underlying pdfs.

SLIDE 13

Expectations (Statistical Averages) of a Random Variable

Expectations (statistical averages, or moments) play an important role in the characterization of a random variable. The expected value (also called the mean value, or first moment) of the random variable x is defined as

mx = E{x} ≡ ∫_{−∞}^{∞} x fx(x) dx,

where E denotes the statistical expectation operator. In general, the nth moment of x is defined as

E{x^n} ≡ ∫_{−∞}^{∞} x^n fx(x) dx.

For n = 2, E{x²} is known as the mean-squared value of the random variable. The nth central moment of the random variable x is:

E{(x − mx)^n} = ∫_{−∞}^{∞} (x − mx)^n fx(x) dx.

When n = 2 the central moment is called the variance, commonly denoted as σx²:

σx² = var(x) = E{(x − mx)²} = ∫_{−∞}^{∞} (x − mx)² fx(x) dx.

SLIDE 14

The variance provides a measure of the variable's "randomness". The mean and variance of a random variable give a partial description of its pdf. Relationship between the variance and the first and second moments:

σx² = E{x²} − [E{x}]² = E{x²} − mx².

An electrical engineering interpretation: the AC power equals the total power minus the DC power. The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
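The identity σx² = E{x²} − mx² ("AC power = total power − DC power") is easy to verify on a set of observations. A Python sketch with illustrative data (the course itself uses Matlab):

```python
# Check sigma^2 = E{x^2} - (E{x})^2 on a small set of observations
# (the values are illustrative, not from the slides).
xs = [1.0, 2.0, 2.0, 3.0, 5.0, 5.0]
N = len(xs)
mean = sum(xs) / N                                  # "DC" value
mean_sq = sum(v * v for v in xs) / N                # "total power"
var_direct = sum((v - mean) ** 2 for v in xs) / N   # "AC power"
assert abs(var_direct - (mean_sq - mean ** 2)) < 1e-12
```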

SLIDE 15

Examples

Example 1: Find the mean and variance of the Bernoulli random variable defined on Page 6.

Example 2: Find the mean and variance of the uniform random variable defined on Page 7.

Example 3: Prove that the mean and variance of the Gaussian random variable defined on Page 8 are exactly µ and σ², respectively.

Example 4: Consider two random variables x and y with the pdfs given in the figure below. Then answer the following: Are the means of x and y the same or different? Are the variances of x and y the same or different? If they are different, which random variable has the larger variance? Compute the means and variances of x and y.

Figure: the pdfs fx(x) and fy(y), each supported on [−1, 1] with peak value 1.

SLIDE 16

Statistical (or Ensemble) Averages vs. Empirical (Sample) Averages

Statistical averages of a random variable are found from its pdf. For the statistical averages to be useful, the pdf has to be accurate enough. Empirical averages are obtained from actual observations (no pdf is needed). Let x1, x2, . . . , xN be actual observations (outcomes) of random variable x. The empirical averages of x and x² are calculated as follows:

x̄ = (1/N) Σ_{n=1}^{N} xn,   (1)

x²‾ (the empirical mean square) = (1/N) Σ_{n=1}^{N} xn².   (2)

If the number of observations N, i.e., the data size, is large enough, these empirical averages should be very close to the statistical averages E{x} = ∫_{−∞}^{∞} x fx(x)dx and E{x²} = ∫_{−∞}^{∞} x² fx(x)dx.

Example 1: The Matlab command randn(1,10) generates 10 values of a Gaussian random variable whose pdf is N(0, 1). Compute the "mean" and "mean-squared value" of the random variable based on the following actual observations:
0.5377 1.8339 -2.2588 0.8622 0.3188 -1.3077 -0.4336 0.3426 3.5784 2.7694

Example 2: Repeat the calculations for 10 values of a uniform random variable 2*rand(1,10):
1.3115 0.0714 1.6983 1.8680 1.3575 1.5155 1.4863 0.7845 1.3110 0.3424
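The empirical averages of Eqs. (1) and (2) can be computed directly from the ten randn observations of Example 1. The course uses Matlab; this sketch uses plain Python for illustration:

```python
# Empirical mean and mean-squared value of the ten randn(1,10)
# observations listed in Example 1.
xs = [0.5377, 1.8339, -2.2588, 0.8622, 0.3188,
      -1.3077, -0.4336, 0.3426, 3.5784, 2.7694]
N = len(xs)
mean = sum(xs) / N                    # empirical average, Eq. (1)
mean_sq = sum(v * v for v in xs) / N  # empirical mean square, Eq. (2)
assert abs(mean - 0.6243) < 1e-4
assert abs(mean_sq - 3.2089) < 1e-3
```

With only N = 10 observations these differ noticeably from the statistical averages E{x} = 0 and E{x²} = 1 of N(0, 1); the agreement improves as N grows.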

SLIDE 17

Example

The Matlab function randn generates samples of a Gaussian random variable with zero mean and unit variance. Generate a vector of L = 10^6 samples of a zero-mean Gaussian random variable x with variance σ² = 10^(−8). This can be done as follows: x=sigma*randn(1,L).

(a) Based on the sample vector x, verify the mean and variance of random variable x.

(b) Based on the sample vector x, compute the "probability" that x > A, where A = −10^(−4). Compare the result with that found in Question 2-(c).

(c) Using the Matlab command hist, plot the histogram of sample vector x with 100 bins. Next obtain and plot the pdf from the histogram. Then also plot on the same figure the theoretical Gaussian pdf (note the value of the variance for the pdf). Do the histogram pdf and theoretical pdf fit well?

SLIDE 18

Solution:

L=10^6;              % Length of the noise vector
sigma=sqrt(10^(-8)); % Standard deviation of the noise
A=-10^(-4);
x=sigma*randn(1,L);

% (a) Verify mean and variance:
mean_x=sum(x)/L;                 % the same as mean(x)
variance_x=sum((x-mean_x).^2)/L; % the same as var(x)
% mean_x     = -3.7696e-008
% variance_x =  1.0028e-008

% (b) Compute P(x>A)
P=length(find(x>A))/L;
% P = 0.8410

% (c) Histogram and Gaussian pdf fit
N_bins=100;
[y,x_center]=hist(x,N_bins);  % bins the elements of x into N_bins equally
                              % spaced containers; y holds the count in each
                              % container, x_center the bin centers
dx=(max(x)-min(x))/N_bins;    % width of each bin
hist_pdf=(y/L)/dx;            % approximates the pdf as a constant over each bin
pl(1)=plot(x_center,hist_pdf,'color','blue','linestyle','--','linewidth',1.0);
hold on;
x0=[-5:0.001:5]*sigma;        % range of random variable x
true_pdf=1/(sqrt(2*pi)*sigma)*exp(-x0.^2/(2*sigma^2));
pl(2)=plot(x0,true_pdf,'color','r','linestyle','-','linewidth',1.0);
xlabel('{\itx}','FontName','Times New Roman','FontSize',16);
ylabel('{\itf}_{\bf x}({\itx})','FontName','Times New Roman','FontSize',16);
legend(pl,'pdf from histogram','true pdf',+1);
set(gca,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':',...
    'MinorGridLineStyle','none','FontName','Times New Roman');

SLIDE 19

Figure: the pdf estimated from the histogram (dashed) plotted together with the true Gaussian pdf (solid); x spans about ±5 × 10^(−4) and fx(x) reaches roughly 4000.
SLIDE 20

Q-function

Figure: Q(x) is the area under the tail of the zero-mean, unit-variance Gaussian pdf (1/√(2π)) e^(−λ²/2) to the right of λ = x.

Q(x) ≡ (1/√(2π)) ∫_x^∞ exp(−λ²/2) dλ.

Useful property: Q(−x) = 1 − Q(x).

Figure: plot of Q(x) for 1 ≤ x ≤ 6 on a logarithmic scale, dropping from about 10^(−1) to below 10^(−10).
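There is no closed form for Q(x), but it can be expressed through the complementary error function as Q(x) = (1/2) erfc(x/√2). A Python sketch (the course itself uses Matlab) that also checks the property Q(−x) = 1 − Q(x):

```python
import math

# Q(x) expressed through the complementary error function:
# Q(x) = 0.5 * erfc(x / sqrt(2)).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

assert abs(Q(0) - 0.5) < 1e-12            # half the area lies right of the mean
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(Q(-x) - (1 - Q(x))) < 1e-12  # Q(-x) = 1 - Q(x)
assert Q(6) < 1e-8                        # the tail drops fast, cf. the plot
```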

SLIDE 21

Example

The noise voltage in an electric circuit can be modeled as a Gaussian random variable with zero mean and variance σ². Show that the probability that the noise voltage exceeds some level A can be expressed as Q(A/σ).

Solution:

fx(x) = (1/(√(2π)σ)) e^(−(x−µ)²/(2σ²)) = (with µ = 0) (1/(√(2π)σ)) e^(−x²/(2σ²)).

The Q-function is defined as: Q(t) = (1/√(2π)) ∫_t^∞ e^(−λ²/2) dλ.

The probability that the noise voltage exceeds some level A is

P(x > A) = ∫_A^∞ fx(x) dx = (1/(√(2π)σ)) ∫_A^∞ e^(−x²/(2σ²)) dx
         = (with λ = x/σ, so dλ = dx/σ) (1/√(2π)) ∫_{A/σ}^∞ e^(−λ²/2) dλ = Q(A/σ).
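The result P(x > A) = Q(A/σ) can be checked by Monte Carlo simulation. A Python sketch with illustrative values σ = 2 and A = 1 (not from the slides); the course would do the same with randn in Matlab:

```python
import math
import random

def Q(x):
    # Q-function via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(0)
sigma = 2.0   # illustrative noise standard deviation
A = 1.0       # illustrative threshold
N = 200000

# Empirical probability that a zero-mean Gaussian sample exceeds A.
p_emp = sum(1 for _ in range(N) if random.gauss(0.0, sigma) > A) / N

# Should match Q(A / sigma) up to Monte Carlo error.
assert abs(p_emp - Q(A / sigma)) < 0.01
```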

SLIDE 22

Joint cdf and Joint pdf of Multiple Random Variables

Let x and y be two random variables. Each random variable can be described/characterized separately by its own cdf, pdf, or statistical averages. When two (or more) random variables are related (even remotely) through the same physical phenomenon, there can be a relationship between them, so it is very helpful to be able to describe/characterize multiple random variables jointly.

The joint cdf is defined as Fx,y(x, y) = P(x ≤ x, y ≤ y). Similarly, the joint pdf is:

fx,y(x, y) = ∂²Fx,y(x, y)/∂x∂y.

When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable (also called the marginal pdf):

∫_{−∞}^{∞} fx,y(x, y) dx = fy(y),   ∫_{−∞}^{∞} fx,y(x, y) dy = fx(x).

SLIDE 23

A joint pdf is always nonnegative. Furthermore, the total volume under a joint pdf is unity:

∫_{−∞}^{∞} ∫_{−∞}^{∞} fx,y(x, y) dxdy = F(∞, ∞) = 1.

The conditional pdf of the random variable y, given that the value of the random variable x equals x, is defined as

fy(y|x) = fx,y(x, y)/fx(x) if fx(x) ≠ 0, and 0 otherwise.

Two random variables x and y are statistically independent if and only if fy(y|x) = fy(y), or equivalently fx,y(x, y) = fx(x)fy(y).

The joint moment is defined as

E{x^j y^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k fx,y(x, y) dxdy.

SLIDE 24

The joint central moment is

E{(x − mx)^j (y − my)^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − mx)^j (y − my)^k fx,y(x, y) dxdy,

where mx = E{x} and my = E{y}. The most important moments are

E{xy} ≡ ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fx,y(x, y) dxdy (correlation);

cov{x, y} ≡ E{(x − mx)(y − my)} = E{xy} − mx·my (covariance).

Let σx² and σy² be the variances of x and y. The covariance normalized by σxσy is called the correlation coefficient:

ρx,y = cov{x, y}/(σxσy).

ρx,y indicates the degree of linear dependence between the two random variables. It can be shown that |ρx,y| ≤ 1. ρx,y = ±1 implies an increasing/decreasing linear relationship. If ρx,y = 0, x and y are said to be uncorrelated. It is easy to verify that if x and y are independent, then ρx,y = 0: independence implies lack of correlation. However, lack of correlation (no linear relationship) does not in general imply statistical independence.
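These properties of ρx,y can be illustrated on data. A Python sketch (the course uses Matlab's corrcoef for the same purpose) computing the empirical correlation coefficient for a perfect linear relationship and for independent samples:

```python
import math
import random

def corrcoef(xs, ys):
    # Empirical correlation coefficient rho = cov / (sigma_x * sigma_y).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

random.seed(3)
xs = [random.gauss(0, 1) for _ in range(50000)]
noise = [random.gauss(0, 1) for _ in range(50000)]

# y = 2x + 3 is a perfect increasing linear relationship: rho = 1.
assert abs(corrcoef(xs, [2 * x + 3 for x in xs]) - 1.0) < 1e-9
# y = -x gives rho = -1.
assert abs(corrcoef(xs, [-x for x in xs]) + 1.0) < 1e-9
# Independent x and y are (empirically) uncorrelated: rho near 0.
assert abs(corrcoef(xs, noise)) < 0.02
```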

SLIDE 25

Examples of Uncorrelated and Correlated Random Variables

Figure: scatter plots of observed (x, y) pairs for an uncorrelated pair and a correlated pair of random variables.

SLIDE 26

Other Examples of Uncorrelated and Correlated Random Variables

Figure: two further scatter plots of observed (x, y) pairs, again one uncorrelated and one correlated (here x ranges over about [−2, 2] and y over [−5, 5]).

SLIDE 27

Interpretation of Correlation Coefficient

Figure: scatter plots of (x, y) pairs with (a) ρx,y ≈ 0.71, (b) ρx,y ≈ 0.71 (same coefficient at a different scale), (c) ρx,y ≈ 0.97, and (d) ρx,y ≈ −0.97.

SLIDE 28

Empirical (Sample) Averages of Two Random Variables

Let x1, x2, . . . , xN and y1, y2, . . . , yN be actual observations (outcomes) of a pair of random variables x and y. We know that:

x̄ = (1/N) Σ_{n=1}^{N} xn,   ȳ = (1/N) Σ_{n=1}^{N} yn,

x²‾ = (1/N) Σ_{n=1}^{N} xn²,   y²‾ = (1/N) Σ_{n=1}^{N} yn².

The empirical correlation between x and y:

x·y‾ = (1/N) Σ_{n=1}^{N} xn·yn.

The empirical covariance between x and y:

(x − x̄)(y − ȳ)‾ = (1/N) Σ_{n=1}^{N} (xn − x̄)(yn − ȳ) = x·y‾ − x̄·ȳ.

The empirical correlation coefficient between x and y:

ρ̂x,y = (x − x̄)(y − ȳ)‾ / √( (x − x̄)²‾ · (y − ȳ)²‾ ) = (x·y‾ − x̄·ȳ) / √( (x²‾ − x̄²)(y²‾ − ȳ²) ).
SLIDE 29

Jointly Gaussian Distribution (Bivariate)

fx,y(x, y) = 1/(2πσxσy√(1 − ρx,y²)) × exp{ −1/(2(1 − ρx,y²)) × [ (x − mx)²/σx² − 2ρx,y(x − mx)(y − my)/(σxσy) + (y − my)²/σy² ] },

where mx, my are the means and σx², σy² the variances; ρx,y is indeed the correlation coefficient.

The marginal densities are Gaussian: fx(x) ∼ N(mx, σx²) and fy(y) ∼ N(my, σy²).

When ρx,y = 0, fx,y(x, y) = fx(x)fy(y), so the random variables x and y are statistically independent: for jointly Gaussian random variables, uncorrelatedness means statistical independence! A weighted sum of two jointly Gaussian random variables is also Gaussian.
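A standard way to generate jointly Gaussian pairs with a prescribed ρ is x = z1, y = ρz1 + √(1 − ρ²)z2, where z1, z2 are independent N(0, 1). A Python sketch with an illustrative ρ = 0.7 (the slides do not specify this construction; it is one common technique):

```python
import math
import random

# Generate jointly Gaussian pairs with a prescribed correlation coefficient
# via x = z1, y = rho*z1 + sqrt(1 - rho^2)*z2 (z1, z2 independent N(0,1)).
random.seed(7)
rho = 0.7  # illustrative value
n = 100000
pairs = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2))

# Both marginals are N(0,1), so E{xy} equals the correlation coefficient;
# the empirical value should be close to rho.
emp_rho = sum(x * y for x, y in pairs) / n
assert abs(emp_rho - rho) < 0.02
```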

SLIDE 30

Joint pdf and Contours for σx = σy = 1 and ρx,y = 0

Figure: the joint pdf surface fx,y(x, y) (peak about 0.16) and its circular contours for σx = σy = 1 and ρx,y = 0.

SLIDE 31

Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.3

Figure: the joint pdf surface (a cross-section is indicated) and its elliptical contours for σx = σy = 1 and ρx,y = 0.3.

SLIDE 32

Joint pdf and Contours for σx = σy = 1 and ρx,y = 0.7

Figure: the joint pdf surface (peak about 0.2, a cross-section indicated) and its elongated elliptical contours for σx = σy = 1 and ρx,y = 0.7.

SLIDE 33

Joint Gaussian pdfs for ρx,y = 0 and ρx,y = 0.5

Figure: joint Gaussian pdf surfaces fx,y(x, y) for ρ = 0 and ρ = 0.5 (σx = σy = 1 in both cases).

SLIDE 34

Random Processes (Random Signals)

Loosely speaking, a random process is a random variable evolving in time. The figures below plot time functions of several different random processes.

Figure: sample functions versus t of (a) thermal noise and (b) a sinusoid with uniformly distributed random phase.

SLIDE 35

Figure: sample functions versus t of (c) a Rayleigh fading process and (d) binary random data taking values ±V with bit duration Tb.

SLIDE 36

Figure: an ensemble of realizations x1(t), x2(t), x3(t), . . . of a random process plotted against time; observing the ensemble at time instants t1, t2, . . . , tk yields real numbers.

Denote the random process by x(t). The time functions x1(t), x2(t), x3(t), . . . are specific realizations of the process. At any time instant t = tk, we have a random variable x(tk). At any two time instants, say t1 and t2, we have two different random variables x(t1) and x(t2). Any relationship between random variables x(t1) and x(t2) is completely described by the joint pdf fx(t1),x(t2)(x1, x2; t1, t2).

SLIDE 37

Statistical Averages of a Random Process

The value of a random process at any time instant, say t1, is a random variable x1 = x(t1). Such a random variable has mean and mean-squared values (i.e., first- and second-order statistics). Most random processes encountered in communications have the property that the mean and mean-squared values are constant over time:

(P1) mx = E{x(t)} = ∫_{−∞}^{∞} x fx(x) dx.

(P2) MSVx = E{x²(t)} = ∫_{−∞}^{∞} x² fx(x) dx.

The correlation between two random variables x1 = x(t1) and x2 = x(t2) is:

Rx(t1, t2) = E{x(t1)x(t2)} = ∫_{x1=−∞}^{∞} ∫_{x2=−∞}^{∞} x1x2 fx1,x2(x1, x2; t1, t2) dx1dx2.

Most random processes in communications have the property that the correlation depends only on the time difference τ = t2 − t1:

(P3) Rx(τ) = E{x(t1)x(t1 + τ)} = E{x(t)x(t + τ)}, ∀t.

Random processes that satisfy properties (P1), (P2), (P3) are called wide-sense stationary (WSS). It can be shown that the autocorrelation function Rx(τ) completely describes the power spectral density of the random process.

SLIDE 38

Properties of the Autocorrelation Function

  • 1. Rx(τ) = Rx(−τ). It is an even function of τ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.
  • 2. |Rx(τ)| ≤ Rx(0). The maximum always occurs at τ = 0, though there may be other values of τ for which it is as large. Furthermore, Rx(0) is the mean-squared value of the random process.
  • 3. If for some τ0 we have Rx(τ0) = Rx(0), then Rx(kτ0) = Rx(0) for all integers k.
  • 4. If mx ≠ 0, then Rx(τ) has a constant component equal to mx².
  • 5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., F{Rx(τ)} ≥ 0.

SLIDE 39

Example of a Random Process

A random process is generated as follows: x(t) = e−a|t|, where a is a random variable with pdf fa(a) = u(a) − u(a − 1) (1/seconds). (a) Sketch several members of the ensemble. (b) For a specific time, t, over what values of amplitude does the random variable x(t) range? (c) Find the mean and mean-squared value of x(t). Is the process x(t) wide-sense stationary (WSS)?

SLIDE 40

(a) Plot of several member functions of x(t) = e−a|t|:

Figure: member functions x(t) = e^(−a|t|) for a ≈ 0.82, 0.44, 0.62, and 0.79, each plotted for −2 ≤ t ≤ 2.

SLIDE 41

(b) Since a ranges between 0 and 1, for a specific time t the random variable x(t) ranges from e^(−|t|) (when a = 1) to 1 (when a = 0).

(c) x(t) is considered as a function of a, with t a parameter. The mean and mean-squared values of x(t) are:

E{x(t)} = ∫_{−∞}^{∞} x(t) fa(a) da = ∫_0^1 e^(−a|t|) da = [e^(−a|t|)/(−|t|)]_0^1 = (1/|t|)(1 − e^(−|t|)).

E{x²(t)} = ∫_{−∞}^{∞} x²(t) fa(a) da = ∫_0^1 e^(−2a|t|) da = [e^(−2a|t|)/(−2|t|)]_0^1 = (1/(2|t|))(1 − e^(−2|t|)).

Since both the mean and the mean-squared value are functions of time t, the process x(t) is not wide-sense stationary.
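The closed-form mean can be checked numerically by averaging e^(−a|t|) over the uniform density of a. A Python sketch at the illustrative time t = 1, using a midpoint Riemann sum:

```python
import math

# Numerically check E{x(t)} = (1 - e^{-|t|})/|t| at t = 1 by integrating
# e^{-a|t|} over a ~ uniform on [0, 1] (midpoint Riemann sum).
t = 1.0
n = 100000
da = 1.0 / n
est = sum(math.exp(-(k + 0.5) * da * abs(t)) * da for k in range(n))
assert abs(est - (1 - math.exp(-abs(t))) / abs(t)) < 1e-8
```

Repeating this at a different t gives a different mean, which is exactly why the process is not WSS.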

SLIDE 42

Power Spectral Density of a Random Process; Effects of an LTI System on Random Processes

Power Spectral Density (PSD): determines how the average power of the process is distributed over frequency. It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:

Rx(τ) = ∫_{−∞}^{∞} Sx(f) e^(j2πfτ) df  ←→  Sx(f) = ∫_{−∞}^{∞} Rx(τ) e^(−j2πfτ) dτ.

Figure: an LTI system h(t) ←→ H(f) with input x(t) (mx, Rx(τ) ←→ Sx(f)) and output y(t) (my, Ry(τ) ←→ Sy(f)).

my = E{y(t)} = E{ ∫_{−∞}^{∞} h(λ)x(t − λ) dλ } = mx·H(0),

Sy(f) = |H(f)|² Sx(f),

Ry(τ) = h(τ) ∗ h(−τ) ∗ Rx(τ).

SLIDE 43

Thermal Noise in Communication Systems

A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean. The autocorrelation and PSD are well modeled as:

Rw(τ) = (kθG/t0) e^(−|τ|/t0) (watts),   Sw(f) = 2kθG/(1 + (2πf t0)²) (watts/Hz),

where k = 1.38 × 10^(−23) joule/kelvin is Boltzmann's constant, G is the conductance of the resistor (mhos), θ is the temperature in kelvin, and t0 is the statistical average of the time intervals between collisions of free electrons in the resistor (on the order of 10^(−12) sec).

SLIDE 44

Figure: (a) the power spectral density Sw(f) of thermal noise compared with white noise (flat at N0/2), over f = ±15 GHz; (b) the autocorrelation Rw(τ) of thermal noise compared with white noise ((N0/2)δ(τ)), over τ = ±0.1 pico-sec.

SLIDE 45

The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:

Sw(f) = N0/2 (watts/Hz),

where N0 = 4kθG is a constant. Noise that has a uniform spectrum over the entire frequency range is referred to as white noise. The autocorrelation of white noise is

Rw(τ) = (N0/2)δ(τ) (watts).

Since Rw(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated. Since the samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise), then the noise samples are also independent.

SLIDE 46

Example

Suppose that a (WSS) white noise process x(t), of zero mean and power spectral density N0/2, is applied to the input of an RL filter. (a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter. (b) What are the mean and variance of the output process y(t)?

Figure: the RL lowpass filter, with input x(t) applied across the series combination of L and R, and output y(t) taken across R.

SLIDE 47

H(f) = R/(R + j2πfL) = 1/(1 + j2πfL/R).

Sy(f) = (N0/2) · 1/(1 + (2πL/R)² f²)  ←→  Ry(τ) = (N0R/(4L)) e^(−(R/L)|τ|).

Figure: Sy(f) (watts/Hz, peak N0/2 at f = 0) versus f, and Ry(τ) (watts, peak N0R/(4L) at τ = 0) versus τ.

SLIDE 48

Bandlimiting White Gaussian Noise

The input noise x(t) is modeled as a wide-sense stationary, white, Gaussian random process with zero mean and two-sided power spectral density N0/2. It is passed through an ideal lowpass filter H(f) of gain 1 and bandwidth W to produce y(t).

(a) Find and sketch the power spectral density of y(t).
(b) Find and sketch the autocorrelation function of y(t).
(c) What are the average DC level and the average power of y(t)?
(d) Suppose that the output noise is sampled every Ts seconds to obtain the noise samples y(kTs), k = 0, 1, 2, . . .. Find the smallest value of Ts so that the noise samples are statistically independent. Explain.

SLIDE 49

(a) The power spectral density (PSD) of y(t) is

Sy(f) = Sx(f)·|H(f)|² = N0/2 for |f| ≤ W, and 0 otherwise.

Figure: Sx(f) (flat at N0/2 over all f) and Sy(f) (flat at N0/2 for |f| ≤ W, zero elsewhere).

(b) The autocorrelation can be found as the inverse Fourier transform of the PSD:

Ry(τ) = F^(−1){Sy(f)} = ∫_{−∞}^{∞} Sy(f) e^(j2πfτ) df = ∫_{−W}^{W} (N0/2) e^(j2πfτ) df
      = (N0/(4πτ)) ∫_{−2πWτ}^{2πWτ} e^(jx) dx   (substituting x = 2πτf)
      = (N0/(4πτ)) · 2 sin(2πWτ) = N0 sin(2πWτ)/(2πτ) = N0·W·sinc(2Wτ).
SLIDE 50

Figure: Ry(τ)/(N0W) versus Wτ: a sinc shape, equal to 1 at τ = 0, with zeros at Wτ = ±1/2, ±1, ±3/2, . . ..

(c) The DC level of y(t) is my = E{y(t)} = mx·H(0) = 0·1 = 0. The average power of y(t) is σy² = E{y²(t)} = Ry(0) = N0W.

(d) Since the input x(t) is a Gaussian process, the output y(t) is also a Gaussian process and the samples y(kTs) are Gaussian random variables. For Gaussian random variables, statistical independence is equivalent to uncorrelatedness. Thus one needs to find the smallest value of τ (or Ts) for which the autocorrelation is zero. From the graph of Ry(τ), the answer is:

τmin = (Ts)min = 1/(2W).
SLIDE 51

Random Sequences

A random sequence can be obtained by sampling a continuous-time random process. How does one characterize a random sequence?

Let x[1], x[2], . . . be a sequence of indexed random variables. The various definitions for continuous-time random processes can be applied, with the time variable t replaced by the sequence index n.

Mean function: mx[n] = E{x[n]}.
Variance function: σx²[n] = E{(x[n] − mx[n])²}.
Correlation function: Rx(n, k) = E{x[n]x[k]}.
Covariance function: Cx(n, k) = E{(x[n] − mx[n])(x[k] − mx[k])}.

A Gaussian random sequence is a sequence of random variables for which any finite number of members are jointly Gaussian.

When the autocorrelation is a function only of the difference between n and k, i.e., Rx(n, k) = Rx[n − k], or Rx[m] = E{x[n]x[n − m]}, then the DTFT of Rx[m] gives the power spectral density (PSD) of the random sequence x[n]:

Sx(e^(jω̂)) = Σ_{m=−∞}^{∞} Rx[m] e^(−jω̂m).

An important case is the white random sequence, where Rx[k] = σ²δ[k] and Sx(e^(jω̂)) = σ²: the sequence is completely uncorrelated, and the average power of the sequence is shared equally by all frequencies in the sequence!
SLIDE 52

Effects of an LTI System on Random Sequences

Figure: an LTI system h[n] ←→ H(e^(jω̂)) with input x[n] (mx, Rx[k] ←→ Sx(e^(jω̂))) and output y[n] (my, Ry[k] ←→ Sy(e^(jω̂))).

my = E{y[n]} = E{ Σ_{k=−∞}^{∞} h[k]x[n − k] } = mx · Σ_{k=−∞}^{∞} h[k] = mx · H(e^(jω̂))|_{ω̂=0},

Sy(e^(jω̂)) = |H(e^(jω̂))|² Sx(e^(jω̂)),

Ry[k] = h[k] ∗ h[−k] ∗ Rx[k].

SLIDE 53

Example: Digital Filtering of a Random Sequence

Let x[n] be a white random sequence with mx[n] = 0 and σx²[n] = σ². Suppose x[n] is the input to a digital filter (LTI system) with impulse response h[n] = a^n u[n], where |a| < 1. Find and sketch the power spectral density and autocorrelation function of the output sequence y[n].

Solution: The frequency response of this filter is

H(e^(jω̂)) = Σ_{n=0}^{∞} a^n e^(−jnω̂) = 1/(1 − a e^(−jω̂)).

Since the input PSD is Sx(e^(jω̂)) = σ², it follows that

Sy(e^(jω̂)) = |H(e^(jω̂))|² Sx(e^(jω̂)) = σ²/|1 − a e^(−jω̂)|² = σ²/(1 + a² − 2a cos ω̂),

Ry[k] = inverse DTFT of Sy(e^(jω̂)) = (σ²/(1 − a²)) a^(|k|).

Figure: Rx[k] (a single spike of height σ² at k = 0), the flat input PSD Sx(e^(jω̂)) = σ², the output PSD Sy(e^(jω̂)) peaked at ω̂ = 0, and the two-sided geometrically decaying Ry[k].
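The closed form Ry[k] = σ²a^(|k|)/(1 − a²) can be checked against the convolution Ry[k] = h[k] ∗ h[−k] ∗ Rx[k], which for this h[n] and white input reduces to σ² Σ_{n≥0} a^n a^(n+|k|). A Python sketch with illustrative values σ² = 2, a = 0.6 and a truncated sum:

```python
# Check Ry[k] = sigma^2 * a^{|k|} / (1 - a^2) against the convolution
# Ry[k] = sigma^2 * sum_{n>=0} a^n * a^{n+|k|}, with h[n] = a^n u[n].
sigma2 = 2.0  # illustrative input variance
a = 0.6       # illustrative filter parameter, |a| < 1
M = 200       # truncation length; a^(2M) is negligible

def ry_conv(k):
    k = abs(k)
    return sigma2 * sum((a ** n) * (a ** (n + k)) for n in range(M))

for k in range(-4, 5):
    closed = sigma2 * a ** abs(k) / (1 - a ** 2)
    assert abs(ry_conv(k) - closed) < 1e-9
```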

SLIDE 54

Sampling a Continuous-Time Random Process with an ADC

Figure: white noise w(t) with PSD Sw(f) = N0/2 is first passed through an anti-aliasing lowpass filter H(f) of gain 1 and bandwidth Fs/2, then sampled at t = nTs (Ts = 1/Fs) to produce the noise sequence w[n]. The filtered PSD is N0/2 for |f| ≤ Fs/2 and zero elsewhere; the PSD of the resulting sequence is flat: Sw(e^(jω̂)) = N0Fs/2 for −π ≤ ω̂ ≤ π.

Power of the noise sequence: (1/(2π)) ∫_{−π}^{π} Sw(e^(jω̂)) dω̂ = N0Fs/2.

SLIDE 55

Sending a Random Sequence to a DAC

Figure: the noise sequence w[n], with flat PSD Sw(e^(jω̂)) = N0Fs/2 for −π ≤ ω̂ ≤ π, is sent to a DAC modeled as a filter whose impulse response h(t) is a rectangular pulse of duration Ts = 1/Fs, so that H(f) = Ts sinc(fTs). The reconstructed process w(t) is a staircase waveform whose PSD Sw(f) = (N0/2) sinc²(fTs) has nulls at nonzero multiples of Fs.

Power of the noise sequence: (1/(2π)) ∫_{−π}^{π} Sw(e^(jω̂)) dω̂ = N0Fs/2.

Total noise power at the DAC output: ∫_{−∞}^{∞} Sw(f) df = N0/(2Ts) = N0Fs/2.

SLIDE 56

Relationship Between the PSD of w[n] and the PSD of w(t)

The random process w(t) is constructed from the random sequence {w[n]} as

w(t) = Σ_{n=−∞}^{∞} w[n] δ(t − nTs).

To find the PSD of w(t), truncate the random process to the time interval from −T = −NTs to T = NTs:

wT(t) = Σ_{n=−N}^{N} w[n] δ(t − nTs).

Take the Fourier transform of the truncated process:

WT(f) = Σ_{n=−N}^{N} w[n] F{δ(t − nTs)} = Σ_{n=−N}^{N} w[n] e^(−j2πfnTs).

Apply the basic definition of the PSD:

Sw(f) = lim_{T→∞} E{|WT(f)|²}/(2T) = lim_{N→∞} (1/((2N + 1)Ts)) E{ |Σ_{n=−N}^{N} w[n] e^(−j2πfnTs)|² } = (1/Ts) Σ_{k=−∞}^{∞} Rw[k] e^(−j2πkfTs),

where Rw[k] = E{w[n]w[n − k]} is the autocorrelation function of the random sequence {w[n]}. Note also that Rw[k] = Rw[−k].

Let ω̂ = 2πfTs and recognize that Sw(e^(jω̂)) = Σ_{k=−∞}^{∞} Rw[k] e^(−jkω̂) is exactly the PSD of the random sequence w[n]. Then the PSD of w(t) is

Sw(f) = (1/Ts) Sw(e^(jω̂))|_{ω̂=2πfTs}.