

SLIDE 1

Chapter 2: Transformations and Expectations (a recap)

STK4011/9011: Statistical Inference Theory

Johan Pensar

STK4011/9011: Statistical Inference Theory Chapter 2: Transformations and Expectations (a recap) 1 / 18

SLIDE 2

Overview

1. Distributions of Functions of a Random Variable
2. Expected Values
3. Moments and Moment Generating Functions

Covers parts of Sec 2.1–2.3 in CB.

SLIDE 3

Distributions of Functions of a Random Variable

If X is a random variable, then any function Y = g(X) is also a random variable. Formally, the function y = g(x) maps the original sample space to a new sample space, g : X → Y.

Inverse mapping from Y to X: g−1(y) = {x ∈ X : g(x) = y} (or g−1(y) = x if g is one-to-one).

The probability distribution of Y is defined by P(Y ∈ A) = P(X ∈ g−1(A)).

SLIDE 4

Distributions of Functions of a Discrete Random Variable

If X is a discrete random variable and Y = g(X), then:

the sample space X is countable.
the sample space Y = {y : y = g(x), x ∈ X} is countable, so Y is a discrete random variable.

The pmf of Y is

fY(y) = Σ_{x ∈ g−1(y)} fX(x), for y ∈ Y,

and fY(y) = 0 for y ∉ Y.
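As a sanity check, the pmf formula above can be evaluated numerically by accumulating fX over each preimage g−1(y). A minimal Python sketch (the helper name `pmf_of_transform` and the Binomial/parity example are illustrative choices, not from the slides):

```python
from math import comb

def pmf_of_transform(f_x, support, g):
    """Pmf of Y = g(X): accumulate f_X(x) over the preimage g^{-1}(y)."""
    f_y = {}
    for x in support:
        y = g(x)
        f_y[y] = f_y.get(y, 0.0) + f_x(x)
    return f_y

# Illustration: X ~ Binomial(3, 0.5), Y = X mod 2 (a many-to-one g)
n, p = 3, 0.5
f_x = lambda x: comb(n, x) * p**x * (1 - p)**(n - x)
f_y = pmf_of_transform(f_x, range(n + 1), lambda x: x % 2)
# f_y[0] = f_x(0) + f_x(2), f_y[1] = f_x(1) + f_x(3)
```

Any many-to-one g works the same way: the dictionary adds up fX(x) for every x that maps to the same y.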

SLIDE 5

Example: Binomial transformation

Ex 2.1.1: Let X follow a binomial distribution, X ∼ Binomial(n, p):

fX(x) = (n choose x) p^x (1 − p)^(n−x), x = 0, 1, . . . , n,

where n is a positive integer and 0 ≤ p ≤ 1. What is the distribution of the random variable Y = g(X) = n − X?

SLIDE 6

Example: Binomial transformation
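By the discrete transformation formula, P(Y = y) = P(X = n − y), which is the Binomial(n, 1 − p) pmf at y, so Y ∼ Binomial(n, 1 − p). A quick numerical confirmation (the parameters are arbitrary illustrative choices):

```python
from math import comb

n, p = 5, 0.3
f_x = lambda x: comb(n, x) * p**x * (1 - p)**(n - x)   # X ~ Binomial(n, p)
f_y = lambda y: comb(n, y) * (1 - p)**y * p**(n - y)   # claim: Y ~ Binomial(n, 1-p)

# P(Y = y) = P(X = n - y) should equal the Binomial(n, 1-p) pmf at y
match = all(abs(f_x(n - y) - f_y(y)) < 1e-12 for y in range(n + 1))
```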

SLIDE 7

Distributions of Functions of a Random Variable

Thm 2.1.3: Let X have cdf FX(x), let Y = g(X) with g monotone, and define X = {x : fX(x) > 0} and Y = {y : y = g(x) for some x ∈ X}.

If g is an increasing function on X, then FY(y) = FX(g−1(y)) for y ∈ Y.
If g is a decreasing function on X and X is a continuous random variable, then FY(y) = 1 − FX(g−1(y)) for y ∈ Y.
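Both branches of Thm 2.1.3 can be illustrated with X ∼ Uniform(0, 1), whose cdf is FX(x) = x on (0, 1). A small sketch (the two example transformations are illustrative choices, not from the slides):

```python
import math

# X ~ Uniform(0,1): F_X(x) = x for 0 < x < 1 (clamped outside)
F_X = lambda x: min(max(x, 0.0), 1.0)

# Increasing g(x) = x^2 on (0,1): g^{-1}(y) = sqrt(y), so F_Y(y) = F_X(sqrt(y))
F_Y_inc = lambda y: F_X(math.sqrt(y))

# Decreasing g(x) = -log(x) on (0,1): g^{-1}(y) = exp(-y),
# so F_Y(y) = 1 - F_X(exp(-y))
F_Y_dec = lambda y: 1.0 - F_X(math.exp(-y))
```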

SLIDE 8

Distributions of Functions of a Random Variable

SLIDE 9

Example: Uniform-exponential relationship

Ex 2.1.4: Let X follow a uniform distribution, X ∼ Uniform(0, 1): fX(x) = 1, 0 < x < 1. What is the cdf of the random variable Y = g(X) = − log(X)?

SLIDE 10

Example: Uniform-exponential relationship
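The answer is that Y = − log(X) ∼ Exponential(1), i.e. FY(y) = 1 − e^(−y) for y > 0. A Monte Carlo check (sample size and seed are arbitrary):

```python
import math
import random

random.seed(0)
n = 200_000
# X ~ Uniform(0,1); Y = -log(X)  (using 1 - random() keeps X in (0,1])
ys = [-math.log(1.0 - random.random()) for _ in range(n)]

y0 = 1.0
empirical = sum(y <= y0 for y in ys) / n
theoretical = 1.0 - math.exp(-y0)   # Exponential(1) cdf at y0
```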

SLIDE 11

Distributions of Functions of a Random Variable

Thm 2.1.5: Let X have pdf fX(x) and let Y = g(X), where g is a monotone function. Further, let X and Y be defined as in Thm 2.1.3. Assume that fX(x) is continuous on X and that g−1(y) has a continuous derivative on Y. Then, the pdf of Y is given by

fY(y) = fX(g−1(y)) · |d/dy g−1(y)|, for y ∈ Y,

and fY(y) = 0 otherwise.
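A small check of the formula, using the illustrative choice X ∼ Uniform(0, 1) with increasing g(x) = x^2, so g−1(y) = √y: the formula gives fY(y) = 1/(2√y) on (0, 1), which should match the numerical derivative of the cdf FY(y) = √y.

```python
import math

f_X = lambda x: 1.0 if 0.0 < x < 1.0 else 0.0    # Uniform(0,1) pdf
g_inv = math.sqrt                                 # g(x) = x^2 is increasing on (0,1)
dg_inv = lambda y: 1.0 / (2.0 * math.sqrt(y))     # d/dy g^{-1}(y)
f_Y = lambda y: f_X(g_inv(y)) * abs(dg_inv(y))    # the Thm 2.1.5 formula

# Cross-check against the numerical derivative of F_Y(y) = sqrt(y)
y0, h = 0.25, 1e-6
num_deriv = (math.sqrt(y0 + h) - math.sqrt(y0 - h)) / (2.0 * h)
```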

SLIDE 12

Distributions of Functions of a Random Variable

Thm 2.1.8: Let X have pdf fX(x), let Y = g(X), and let X be defined as in Thm 2.1.3. Assume that there exists a partition A0, A1, . . . , Ak of X such that P(X ∈ A0) = 0 and fX(x) is continuous on each Ai. Further, assume that there exist functions g1(x), . . . , gk(x), defined on A1, . . . , Ak, respectively, satisfying:

g(x) = gi(x) for x ∈ Ai,
gi(x) is monotone on Ai,
the set Y = {y : y = gi(x) for some x ∈ Ai} is the same for each i = 1, . . . , k,
gi−1(y) has a continuous derivative on Y, for each i = 1, . . . , k.

Then, the pdf of Y is given by

fY(y) = Σ_{i=1}^{k} fX(gi−1(y)) · |d/dy gi−1(y)|, for y ∈ Y,

and fY(y) = 0 otherwise.
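A classic application (illustrative here, not worked on the slides): Y = X^2 with X ∼ N(0, 1), using the partition A1 = (−∞, 0), A2 = (0, ∞) with branch inverses −√y and +√y. Summing the two branches reproduces the chi-square pdf with 1 degree of freedom:

```python
import math

phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)  # N(0,1) pdf

def f_Y(y):
    """Pdf of Y = X^2 via Thm 2.1.8: branches g_i^{-1}(y) = -sqrt(y), +sqrt(y)."""
    r = math.sqrt(y)
    jac = 1.0 / (2.0 * r)              # |d/dy (+-sqrt(y))|, same for both branches
    return phi(-r) * jac + phi(r) * jac

# Chi-square(1) pdf for comparison: y^(-1/2) e^(-y/2) / sqrt(2 pi)
chi2_1 = lambda y: y**-0.5 * math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi)
```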

SLIDE 13

Probability integral transformation

Thm 2.1.10: Let X have a continuous cdf FX(x). Then, the random variable Y = FX(X) is uniformly distributed on (0, 1).

This can be used to generate samples of a random variable X:

1. Generate a uniform random number u from (0, 1).
2. Solve for x in the equation FX(x) = u.
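The two steps above, sketched for the illustrative target X ∼ Exponential(λ), where FX(x) = 1 − e^(−λx) = u solves to x = −log(1 − u)/λ:

```python
import math
import random

random.seed(1)
lam = 2.0   # illustrative rate parameter

# Step 1: u ~ Uniform(0,1).  Step 2: solve 1 - exp(-lam*x) = u  =>  x = -log(1-u)/lam
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]

mean = sum(samples) / len(samples)   # Exponential(lam) has mean 1/lam
```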

SLIDE 14

Expected Values

Def 2.2.1: The expected value (or mean) of a random variable g(X) is

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx, if X is continuous,
E[g(X)] = Σ_{x ∈ X} g(x) fX(x), if X is discrete,

provided that the integral or sum exists. If E[|g(X)|] = ∞, we say that E[g(X)] does not exist.
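A direct numerical rendering of the discrete case (the Binomial example and the helper name `expect` are illustrative):

```python
from math import comb

# Discrete case of Def 2.2.1: E[g(X)] = sum_x g(x) f_X(x); here X ~ Binomial(4, 0.5)
n, p = 4, 0.5
f_x = lambda x: comb(n, x) * p**x * (1 - p)**(n - x)
expect = lambda g: sum(g(x) * f_x(x) for x in range(n + 1))

mean = expect(lambda x: x)           # E[X] = n p = 2.0
second = expect(lambda x: x * x)     # E[X^2] = n p (1 - p) + (n p)^2 = 5.0
```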

SLIDE 15

Expected Values

Thm 2.2.5: Let X be a random variable and let a, b, and c be constants. Then, for any functions g1(x) and g2(x) whose expectations exist:

E(ag1(X) + bg2(X) + c) = aE(g1(X)) + bE(g2(X)) + c.
If g1(x) ≥ 0 for all x, then E(g1(X)) ≥ 0.
If g1(x) ≥ g2(x) for all x, then E(g1(X)) ≥ E(g2(X)).
If a ≤ g1(x) ≤ b for all x, then a ≤ E(g1(X)) ≤ b.

SLIDE 16

Moments

Def 2.3.1: For each integer n and a random variable X:

The n:th moment of X is µ′n = E(X^n).
The n:th central moment of X is µn = E((X − µ)^n), where µ = µ′1 = E(X).

Def 2.3.2: The variance of a random variable X is its second central moment: Var(X) = E((X − µ)^2) = E(X^2) − (E(X))^2.

Thm 2.3.4: If X is a random variable with finite variance, then for any constants a and b: Var(aX + b) = a^2 Var(X).
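Thm 2.3.4 can be confirmed on simulated data: shifting by b leaves the variance unchanged, and scaling by a multiplies it by a^2. A sketch with arbitrary illustrative constants (on a sample, the identity holds exactly up to floating-point rounding):

```python
import random

random.seed(42)
xs = [random.gauss(0.0, 1.0) for _ in range(50_000)]

def var(v):
    """Sample variance (divisor len(v), matching the population formula)."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

a, b = 3.0, 7.0
lhs = var([a * x + b for x in xs])   # Var(aX + b) on the sample
rhs = a * a * var(xs)                # a^2 Var(X)
```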

SLIDE 17

Moment Generating Function

Def 2.3.6: Let X be a random variable with cdf FX. The moment generating function (mgf) of X is MX(t) = E(e^(tX)), provided that the expectation exists for t in some neighborhood of 0; that is, there is an h > 0 such that the mgf exists for −h < t < h. If not, we say that the mgf does not exist.

SLIDE 18

Moment Generating Function

Thm 2.3.7: If X has mgf MX(t), then

E(X^n) = MX^(n)(0),

where MX^(n)(0) = (d^n/dt^n) MX(t) evaluated at t = 0. That is, the n:th moment is equal to the n:th derivative of the mgf evaluated at t = 0. Although the mgf can be used to generate moments, its main use is in characterizing distributions (see Thms 2.3.11–2.3.12).
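A numerical illustration of Thm 2.3.7 (the Bernoulli choice and the step size are illustrative): for X ∼ Bernoulli(p), MX(t) = 1 − p + p e^t, and finite differences at t = 0 should recover E(X) = p and E(X^2) = p.

```python
import math

p = 0.3
M = lambda t: 1.0 - p + p * math.exp(t)   # mgf of Bernoulli(p)

h = 1e-4
m1 = (M(h) - M(-h)) / (2.0 * h)             # M'(0)  ~ E[X]   = p
m2 = (M(h) - 2.0 * M(0.0) + M(-h)) / h**2   # M''(0) ~ E[X^2] = p
```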
