Chapter 4 Further Topics on Random Variables Peng-Hua Wang - - PowerPoint PPT Presentation



SLIDE 1

Chapter 4

Further Topics on Random Variables

Peng-Hua Wang

Graduate Institute of Communication Engineering National Taipei University

SLIDE 2

Peng-Hua Wang, June 4, 2012 Probability, Chap 2 - p. 2/36

Chapter Contents

4.1 Derived Distributions
4.2 Covariance and Correlation
4.3 Conditional Expectation and Variance Revisited
4.4 Transforms
4.5 Sum of a Random Number of Independent Random Variables
4.6 Summary and Discussion

SLIDE 3

4.1 Derived Distributions

SLIDE 4

Concepts

■ Let X be an RV with PDF fX(x) and let Y = g(X). Then

FY(y) = P(Y ≤ y) = P(g(X) ≤ y) = ∫_{x : g(x) ≤ y} fX(x) dx,

fY(y) = dFY(y)/dy
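The two steps above can be sketched numerically (a hypothetical illustration; `cdf_of_g`, the grid size, and the test values are made up for this sketch): approximate FY(y) = P(g(X) ≤ y) by a Riemann sum of fX over the set {x : g(x) ≤ y}.

```python
# Numerical sketch of the CDF method:
# F_Y(y) = P(g(X) <= y) is the integral of f_X over {x : g(x) <= y},
# approximated here by a midpoint Riemann sum on a grid.

def cdf_of_g(g, f_X, y, lo, hi, n=100_000):
    """Approximate F_Y(y) for Y = g(X), with X having PDF f_X on [lo, hi]."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx           # midpoint of the i-th cell
        if g(x) <= y:                     # indicator of {x : g(x) <= y}
            total += f_X(x) * dx
    return total

# Example: X uniform on [0, 1], Y = X^2, so F_Y(y) = sqrt(y) on [0, 1]
F = cdf_of_g(lambda x: x * x, lambda x: 1.0, 0.25, 0.0, 1.0)
```

For X uniform on [0, 1] and g(x) = x², the exact answer is FY(0.25) = √0.25 = 0.5, which the sum reproduces to grid accuracy.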

SLIDE 5

Example 4.1.

Let X be a uniform rv on [0, 1] and Y = g(X). Find the CDF and PDF of Y.

SLIDE 6

Example 4.3.

Let X be an rv with PDF fX(x) and let Y = X². Find the PDF of Y in terms of the PDF of X.
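For this example the standard answer is fY(y) = (fX(√y) + fX(−√y)) / (2√y) for y > 0. A small numeric check of that formula (the choice of a standard normal X, for which Y is chi-square with one degree of freedom, is an illustrative assumption, not part of the example):

```python
import math

# For Y = X^2 with X standard normal:
#   f_Y(y) = (f_X(sqrt(y)) + f_X(-sqrt(y))) / (2 sqrt(y)),  y > 0,
# and the exact CDF is F_Y(y) = P(|X| <= sqrt(y)) = erf(sqrt(y/2)).

def f_X(x):                               # standard normal PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y):                               # derived PDF of Y = X^2
    r = math.sqrt(y)
    return (f_X(r) + f_X(-r)) / (2 * r)

def F_Y(y):                               # exact CDF of Y
    return math.erf(math.sqrt(y / 2))

# The derived PDF should match the numerical derivative of the CDF.
h = 1e-6
deriv = (F_Y(1.0 + h) - F_Y(1.0 - h)) / (2 * h)
```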

SLIDE 7

Example.

Let X be an rv with PDF fX(x) and let Y = aX + b. Find the PDF of Y in terms of the PDF of X.

  • Hint: note the sign of a.
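The standard answer is fY(y) = fX((y − b)/a) / |a| for a ≠ 0; the absolute value is exactly where the sign of a enters, since for a < 0 the CDF inequality flips direction. A minimal sketch (the exponential test density and parameter values are illustrative choices):

```python
import math

# Linear-transform formula, assuming a != 0:
#   f_Y(y) = f_X((y - b) / a) / |a|   for Y = aX + b.

def linear_pdf(f_X, a, b, y):
    return f_X((y - b) / a) / abs(a)

# Check with X exponential(lam): for a = -2, b = 0, Y = -2X has support y <= 0.
lam = 1.5
f_exp = lambda x: lam * math.exp(-lam * x) if x >= 0 else 0.0
val = linear_pdf(f_exp, -2.0, 0.0, -1.0)  # f_Y(-1) = f_X(0.5) / 2
```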
SLIDE 8

Example 4.5

Let X be a normal rv with mean µ and variance σ². Let Y = aX + b. Find the PDF of Y.

SLIDE 9

Example 4.7

Let X and Y be two independent uniform rvs on [0, 1]. Let Z = max(X, Y). Find the PDF of Z.
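For this example, independence gives FZ(z) = P(X ≤ z)P(Y ≤ z) = z² on [0, 1], hence fZ(z) = 2z. A quick Monte Carlo check of the CDF (sample size, seed, and the evaluation point z = 0.7 are arbitrary choices):

```python
import random

# Monte Carlo check: P(max(X, Y) <= 0.7) should be near 0.7^2 = 0.49
# for X, Y independent uniform on [0, 1].
random.seed(0)
n = 200_000
hits = sum(1 for _ in range(n)
           if max(random.random(), random.random()) <= 0.7)
estimate = hits / n
```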

SLIDE 10

Example 4.8

Let X and Y be two independent uniform rvs on [0, 1]. Let Z = Y/X. Find the PDF of Z.

SLIDE 11

Example 4.9

Let X and Y be two independent exponential rvs with parameter λ. Let Z = X − Y. Find the PDF of Z.

SLIDE 12

Sums of Independent RVs

■ Let X and Y be two independent discrete rvs with PMFs pX(x) and pY(y), and let Z = X + Y. Then

pZ(z) = P(X + Y = z) = ∑k pX(k) pY(z − k)

■ Let X and Y be two independent continuous rvs with PDFs fX(x) and fY(y), and let Z = X + Y. Then

P(Z ≤ z | X = x) = P(X + Y ≤ z | X = x) = P(x + Y ≤ z) = P(Y ≤ z − x)

⇒ fZ|X(z|x) = fY(z − x)

Since fZ,X(z, x) = fX(x) fZ|X(z|x) = fX(x) fY(z − x),

⇒ fZ(z) = ∫_{−∞}^{∞} fZ,X(z, x) dx = ∫_{−∞}^{∞} fX(x) fY(z − x) dx

■ These formulas are the "convolution" (convolution sum/integral) of the two distributions.
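The convolution sum can be sketched directly (the two-dice example below is a standard illustration, not from the slides; `convolve_pmf` is a made-up helper name):

```python
# Convolution sum for independent discrete rvs:
#   p_Z(z) = sum_k p_X(k) p_Y(z - k).

def convolve_pmf(p_X, p_Y):
    """Return the PMF of Z = X + Y as a dict, given PMFs as dicts."""
    p_Z = {}
    for x, px in p_X.items():
        for y, py in p_Y.items():
            p_Z[x + y] = p_Z.get(x + y, 0.0) + px * py
    return p_Z

die = {k: 1 / 6 for k in range(1, 7)}     # fair six-sided die
total = convolve_pmf(die, die)            # PMF of the sum of two dice
```

As expected, total sums to 1 and the most likely value is 7 with probability 6/36 = 1/6.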

SLIDE 13

Example 4.10

Let X and Y be independent rvs uniformly distributed on [0, 1] and let W = X + Y. Find the PDF of W.

SLIDE 14

4.2 Covariance and Correlation

SLIDE 15

Definition

■ The covariance of two rvs X and Y is defined by

cov(X, Y) = E[(X − E[X])(Y − E[Y])]

■ Uncorrelated: cov(X, Y) = 0.
■ "Independent" implies "uncorrelated", but not conversely.
■ The correlation coefficient of two rvs X and Y is defined by

ρ = cov(X, Y) / √(Var(X) Var(Y))

■ −1 ≤ ρ ≤ 1
■ ρ² = 1 if and only if X − E[X] = c(Y − E[Y]) for some constant c

SLIDE 16

Properties

■ cov(X, Y) = E[XY] − E[X]E[Y]
■ cov(X, X) = var(X)
■ cov(X, aY + b) = a × cov(X, Y)
■ cov(X, Y + Z) = cov(X, Y) + cov(X, Z)

SLIDE 17

Example 4.13

Let X and Y be random variables with joint PMF pX,Y(0, 1) = pX,Y(1, 0) = pX,Y(0, −1) = pX,Y(−1, 0) = 1/4.

■ pX(−1) = 1/4, pX(0) = 1/2, pX(1) = 1/4
■ pY(−1) = 1/4, pY(0) = 1/2, pY(1) = 1/4
■ E[X] = E[Y] = 0, E[XY] = 0
■ cov(X, Y) = E[XY] − E[X]E[Y] = 0
■ X and Y are uncorrelated but not independent.
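These bullet computations can be checked mechanically from the joint PMF of this example:

```python
# Joint PMF of Example 4.13: mass 1/4 on each of (0,1), (1,0), (0,-1), (-1,0).
pmf = {(0, 1): 0.25, (1, 0): 0.25, (0, -1): 0.25, (-1, 0): 0.25}

EX  = sum(x * p for (x, y), p in pmf.items())        # E[X] = 0
EY  = sum(y * p for (x, y), p in pmf.items())        # E[Y] = 0
EXY = sum(x * y * p for (x, y), p in pmf.items())    # E[XY] = 0
cov = EXY - EX * EY                                  # cov(X, Y) = 0

# Dependence: P(X=1, Y=1) = 0, yet P(X=1)P(Y=1) = (1/4)(1/4) != 0.
pX1 = sum(p for (x, y), p in pmf.items() if x == 1)
pY1 = sum(p for (x, y), p in pmf.items() if y == 1)
```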

SLIDE 18

Example 4.14

Consider n independent tosses of a coin with probability of a head equal to p. Let X and Y be the numbers of heads and of tails, respectively. Calculate the correlation coefficient ρ(X, Y) of X and Y. Since X + Y = n, we have E[X] + E[Y] = n, and X − E[X] = −(Y − E[Y]). Therefore,

cov(X, Y) = E[(X − E[X])(Y − E[Y])] = −E[(X − E[X])²] = −var(X)

⇒ ρ(X, Y) = cov(X, Y) / √(Var(X) Var(Y)) = −var(X) / √(Var(X) Var(X)) = −1
SLIDE 19

4.3 Conditional Expectation As A Random Variable

SLIDE 20

Law of iterated expectations

E[X] = ∫ x fX(x) dx = ∫∫ x fX,Y(x, y) dy dx
     = ∫ [ ∫ x fX|Y(x|y) dx ] fY(y) dy
     = ∫ E[X|Y = y] fY(y) dy = E[E[X|Y]]

■ E[X|Y = y]: a deterministic function of y.
■ E[X|Y]: a function of Y, i.e., a random variable derived from Y.
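The identity E[X] = E[E[X|Y]] can be verified on a small discrete example (the joint PMF below is made up purely for illustration):

```python
# Law of iterated expectations on a made-up discrete joint PMF, keyed (x, y).
pmf = {(1, 0): 0.2, (2, 0): 0.3, (1, 1): 0.1, (3, 1): 0.4}

EX = sum(x * p for (x, y), p in pmf.items())              # direct E[X]

# Compute E[X | Y = y] for each y, then average over the marginal of Y.
EE = 0.0
for y0 in {y for (x, y) in pmf}:
    pY = sum(p for (x, y), p in pmf.items() if y == y0)   # P(Y = y0)
    EXgiven = sum(x * p for (x, y), p in pmf.items() if y == y0) / pY
    EE += EXgiven * pY                                    # builds E[E[X|Y]]
```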

SLIDE 21

Example

Let X and Y be two continuous random variables uniformly distributed on the region specified by x ≥ 0, y ≥ 0, x + y ≤ 1.

■ Find their joint pdf fX,Y(x, y). ■ Find fX|Y(x|y) and E[X|Y = y]. ■ Find E[X]

SLIDE 22

Example 4.17

We start with a stick of length ℓ. We break it at a point which is chosen randomly and uniformly over its length, and keep the piece that contains the left end of the stick. We then repeat the same process on the piece that we were left with. What is the expected length of the piece that we are left with after breaking twice?

Let Y be the length of the piece after we break for the first time, and let X be the length after we break for the second time. We have

E[X|Y] = Y/2, E[Y] = ℓ/2

therefore, E[X] = E[E[X|Y]] = E[Y/2] = E[Y]/2 = ℓ/4
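A Monte Carlo sketch of this example (sample size and seed are arbitrary choices):

```python
import random

# Break a stick of length l twice, keeping the left piece each time;
# the expected final length should be l/4.
random.seed(1)
l = 1.0
n = 200_000
total = 0.0
for _ in range(n):
    y = l * random.random()               # first break: left piece has length Y
    x = y * random.random()               # second break on that piece
    total += x
avg = total / n                           # should be near l/4 = 0.25
```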

SLIDE 23

Law of Iterated Variance

X − E[X] = (X − E[X|Y]) + (E[X|Y] − E[X])

Var(X) = E[(X − E[X])²] = E[((X − E[X|Y]) + (E[X|Y] − E[X]))²]
= E[(X − E[X|Y])²] + E[(E[X|Y] − E[X])²] + 2E[(X − E[X|Y])(E[X|Y] − E[X])]
= E[E[(X − E[X|Y])² | Y]] + E[(E[X|Y] − E[E[X|Y]])²]
= E[Var(X|Y)] + Var(E[X|Y])

where we use the fact E[E[X|Y]h(Y)] = E[E[Xh(Y)|Y]] = E[Xh(Y)] to deduce that the cross term vanishes:

E[(X − E[X|Y])(E[X|Y] − E[X])] = E[XE[X|Y] − XE[X] − E[X|Y]² + E[X]E[X|Y]]
= E[XE[X|Y]] − E[X]² − E[E[X|Y]²] + E[X]E[E[X|Y]]
= E[XE[X|Y]] − E[X]² − E[XE[X|Y]] + E[X]² = 0

(the third term uses the stated fact with h(Y) = E[X|Y], and the last term uses E[E[X|Y]] = E[X]).
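The decomposition var(X) = E[var(X|Y)] + var(E[X|Y]) can be checked numerically on a small discrete joint PMF (made up for illustration):

```python
# Verify var(X) = E[var(X|Y)] + var(E[X|Y]) on a made-up joint PMF keyed (x, y).
pmf = {(1, 0): 0.2, (2, 0): 0.3, (1, 1): 0.1, (3, 1): 0.4}

EX  = sum(x * p for (x, y), p in pmf.items())
EX2 = sum(x * x * p for (x, y), p in pmf.items())
varX = EX2 - EX ** 2                                  # direct var(X)

rhs = 0.0                                             # accumulates E[var(X|Y)]
E_cond_sq = 0.0                                       # accumulates E[E[X|Y]^2]
for y0 in {y for (x, y) in pmf}:
    pY = sum(p for (x, y), p in pmf.items() if y == y0)
    m1 = sum(x * p for (x, y), p in pmf.items() if y == y0) / pY
    m2 = sum(x * x * p for (x, y), p in pmf.items() if y == y0) / pY
    rhs += (m2 - m1 ** 2) * pY                        # conditional variance term
    E_cond_sq += m1 ** 2 * pY
rhs += E_cond_sq - EX ** 2                            # + var(E[X|Y])
```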

SLIDE 24

4.4 Transforms

SLIDE 25

Moment Generating Function

■ A transform is another representation of a probability law.
■ A transform is a mathematical tool that facilitates certain manipulations.
■ The moment generating function (MGF) is one of many transforms, defined by

MX(s) = E[e^{sX}] = ∑k e^{sk} pX(k), X is discrete;
MX(s) = E[e^{sX}] = ∫_{−∞}^{∞} e^{sx} fX(x) dx, X is continuous.

SLIDE 26

Example 4.22

Let pX(2) = 1/2, pX(3) = 1/6, pX(5) = 1/3. Find MGF of X.

SLIDE 27

Example 4.23

Find MGF of Poisson random variable with parameter λ.

SLIDE 28

Example 4.24

Find MGF of exponential random variable with parameter λ.

SLIDE 29

Example 4.25

Let Y = aX + b. Express MGF of Y in terms of MGF of X.

SLIDE 30

Example 4.26

Find MGF of normal random variable with mean µ and variance σ2.

SLIDE 31

Usage of MGF

MX(s) = E[e^{sX}] = E[1 + sX + s²X²/2! + s³X³/3! + ···]

⇒ E[X] = dMX(s)/ds |_{s=0}

⇒ E[X²] = d²MX(s)/ds² |_{s=0}

⇒ E[X^n] = d^n MX(s)/ds^n |_{s=0}
SLIDE 32

Example 4.27

Let pX(2) = 1/2, pX(3) = 1/6, pX(5) = 1/3. Find MGF of X and use MGF to evaluate E[X] and E[X2].
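One way to sanity-check the moment formulas is numerical differentiation of MX at s = 0 (a sketch: central differences, with the step size h a tuning choice). The exact values here are E[X] = 19/6 and E[X²] = 71/6.

```python
import math

# MGF of Example 4.27 and its moments via numerical derivatives at s = 0.
pmf = {2: 1 / 2, 3: 1 / 6, 5: 1 / 3}

def M(s):                                 # M_X(s) = sum_k e^{sk} p_X(k)
    return sum(math.exp(s * k) * p for k, p in pmf.items())

h = 1e-4
EX  = (M(h) - M(-h)) / (2 * h)            # ~ M'(0)  = E[X]
EX2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # ~ M''(0) = E[X^2]
```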

SLIDE 33

Inversion Property

■ If MX(s) = MY(s) for all s, then X and Y have the same probability law.
■ Let X and Y be independent RVs. If Z = X + Y, then MZ(s) = MX(s)MY(s).

SLIDE 34

Example 4.28

The MGF of an RV X is given by MX(s) = (1/4)e^{−s} + 1/2 + (1/8)e^{4s} + (1/8)e^{5s}. Find its PMF.

SLIDE 35

Example 4.29

The MGF of an RV X is given by MX(s) = pe^s / (1 − (1 − p)e^s). Find its PMF.

SLIDE 36

Sums of Independent RVs

Let X and Y be independent random variables and let W = X + Y. Then

MW(s) = E[e^{sW}] = E[e^{s(X+Y)}] = E[e^{sX}]E[e^{sY}] = MX(s)MY(s)

  • Example. Let X1, X2, ..., Xn be independent Bernoulli rvs with parameter p, and let Y = X1 + X2 + · · · + Xn. Find the MGF of Y.
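For this closing example, each Bernoulli(p) term has MGF MX(s) = 1 − p + pe^s, so by the multiplication property MY(s) = (1 − p + pe^s)^n, which is the MGF of a binomial(n, p) rv. A quick cross-check against the binomial PMF directly (the parameter values below are arbitrary):

```python
import math

# The product of n Bernoulli MGFs should equal the MGF computed
# from the binomial(n, p) PMF term by term.

def binom_mgf_direct(n, p, s):
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * math.exp(s * k)
               for k in range(n + 1))

n, p, s = 5, 0.3, 0.7
via_product = (1 - p + p * math.exp(s)) ** n   # (M_X(s))^n by independence
via_pmf = binom_mgf_direct(n, p, s)            # direct E[e^{sY}]
```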