Special Discrete Distributions - PowerPoint Presentation



SLIDE 1

Special Discrete Distributions

SLIDE 2

Bernoulli Distribution

◮ A Bernoulli trial is an experiment with two possible outcomes. A random variable X has a Bernoulli(p) distribution if

$$X = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1 - p \end{cases}$$

X = 1 is often termed a “success” and X = 0 is often termed a “failure”.

◮ Example 1: toss a coin, head = “success” and tail = “failure”.

◮ Example 2: incidence of a disease, not infected = “success” and infected = “failure”.
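As a quick illustration (not part of the original slides; the value p = 0.3 and the seed are arbitrary choices), a Bernoulli(p) draw can be simulated with Python's standard library:

```python
import random

def bernoulli(p, rng):
    """Return 1 ("success") with probability p, else 0 ("failure")."""
    return 1 if rng.random() < p else 0

rng = random.Random(42)  # fixed seed for reproducibility
p = 0.3
draws = [bernoulli(p, rng) for _ in range(100_000)]
print(sum(draws) / len(draws))  # sample proportion, close to p = 0.3
```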

SLIDE 3

Binomial experiments

1. The experiment consists of n repeated Bernoulli trials - each trial has only two possible outcomes, labeled as success and failure;

2. The trials are independent - the outcome of any trial has no effect on the probability of the others;

3. The probability of success in each trial is constant, which we denote by p.

SLIDE 4

Binomial distribution

Let Y be the total number of successes in n trials, i.e., Y = X1 + X2 + · · · + Xn, where Xi ∼ Bernoulli(p). Then Y is said to have a Binomial distribution with parameters n and p, denoted Y ∼ Binomial(n, p). The probability mass function of Y is

$$b(k; n, p) = \binom{n}{k} p^k (1 - p)^{n - k}, \quad k = 0, 1, \cdots, n.$$
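A minimal sketch of this PMF in Python (the values n = 10 and p = 0.4 are arbitrary choices for illustration):

```python
from math import comb

def binom_pmf(k, n, p):
    """b(k; n, p) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.4
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
print(sum(pmf))  # the probabilities sum to 1
```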
SLIDE 5

Example 1

Suppose a husband and a wife are each of genotype aA, where the gene a is recessive while A is dominant. Children of genotype aa have six toes, whereas those of genotype aA and AA are normal. If the couple has 6 children, what is the probability that two of them will have six toes?
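A worked solution sketch: since both parents are aA, each child is aa with probability 1/4, so the number of six-toed children is Binomial(6, 1/4):

```python
from math import comb

# Each child of two aA parents is aa with probability 1/4.
p_aa = 1 / 4
n = 6
# P(Y = 2) for Y ~ Binomial(6, 1/4)
prob = comb(n, 2) * p_aa**2 * (1 - p_aa)**4
print(prob)  # 1215/4096 ≈ 0.2966
```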

SLIDE 6

Expectation and variance

◮ We have shown that

E(Y) = np and Var(Y) = np(1 − p).
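These formulas can be checked numerically by summing over the PMF (n = 12 and p = 0.3 are arbitrary illustration values):

```python
from math import comb

n, p = 12, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
# E(Y) = sum k * f(k), Var(Y) = E(Y^2) - E(Y)^2
mean = sum(k * f for k, f in enumerate(pmf))
var = sum(k**2 * f for k, f in enumerate(pmf)) - mean**2
print(mean, n * p)           # both 3.6
print(var, n * p * (1 - p))  # both 2.52
```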

SLIDE 7

Discrete uniform distribution

◮ A random variable X has a discrete uniform distribution on {1, · · · , N} if

$$P(X = k) = \frac{1}{N}, \quad k = 1, 2, \cdots, N,$$

where N is a specified integer.

◮ The mean and variance of X are

$$E(X) = \frac{N + 1}{2}, \qquad Var(X) = \frac{(N + 1)(N - 1)}{12}.$$
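A direct numerical check of these two formulas (N = 10 is an arbitrary choice):

```python
N = 10
pmf = 1 / N  # each of 1, ..., N has probability 1/N
mean = sum(k * pmf for k in range(1, N + 1))
var = sum(k**2 * pmf for k in range(1, N + 1)) - mean**2
print(mean, (N + 1) / 2)            # both 5.5
print(var, (N + 1) * (N - 1) / 12)  # both 8.25
```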

SLIDE 8

Poisson distribution

The Poisson distribution is often used to model the number of occurrences in a given interval, such as:

◮ the number of customers arriving at a bank in a given time interval;

◮ the number of failures of a machine in a given period;

◮ the number of typos on a given page.

The basic assumption is that, for a small time interval, the probability of an arrival is proportional to the length of the interval.

SLIDE 9

Definition of Poisson distribution

A random variable X taking values in the non-negative integers has a Poisson(λ) distribution if

$$P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!}, \quad k = 0, 1, \cdots$$

The expectation and variance of the Poisson distribution are E(X) = λ and Var(X) = λ.

SLIDE 10

Example

Suppose that the number of typographical errors on a single page of this book has a Poisson distribution with parameter λ = 1/2. Calculate the probability that there is at least one error on this page.
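A solution sketch: P(X ≥ 1) = 1 − P(X = 0) = 1 − e^(−λ):

```python
from math import exp

lam = 0.5
# P(X >= 1) = 1 - P(X = 0) = 1 - e^(-λ)
p_at_least_one = 1 - exp(-lam)
print(p_at_least_one)  # ≈ 0.3935
```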
SLIDE 11

The approximation by Binomial distribution

If X ∼ Poisson(λ) and Y ∼ Binomial(n, p),

$$f_X(k) = \frac{e^{-\lambda} \lambda^k}{k!}, \qquad f_Y(k) = \binom{n}{k} p^k (1 - p)^{n - k}.$$

If p is small and n is large such that np → λ as n → ∞, then fY(k) → fX(k) for every k.
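This convergence can be observed numerically (λ = 3 and k = 2 are arbitrary illustration values):

```python
from math import comb, exp, factorial

lam, k = 3.0, 2
pois = exp(-lam) * lam**k / factorial(k)  # Poisson(3) mass at k = 2
for n in (10, 100, 1000, 10000):
    p = lam / n  # keep np = λ fixed while n grows
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    print(n, binom)  # approaches the Poisson value as n grows
print(pois)
```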

SLIDE 12

The Geometric and Negative Binomial Distributions

In a sequence of independent Bernoulli(p) trials, let the random variable X denote the trial at which the r-th success occurs, where r is a fixed integer. The probability mass function of X is

$$f_X(k) = \binom{k - 1}{r - 1} p^r (1 - p)^{k - r}, \quad k = r, r + 1, \cdots$$

1. If r = 1, X is said to have a Geometric distribution with parameter p.

2. For general r, we say X has a Negative Binomial distribution with parameters r and p, denoted NB(r, p).
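A quick sanity check that this PMF sums to 1 (r = 3 and p = 0.4 are arbitrary; the sum is truncated because the tail is negligible):

```python
from math import comb

def nb_pmf(k, r, p):
    """P(X = k): the r-th success occurs on trial k."""
    return comb(k - 1, r - 1) * p**r * (1 - p)**(k - r)

r, p = 3, 0.4
total = sum(nb_pmf(k, r, p) for k in range(r, 400))
print(total)  # ≈ 1
```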

SLIDE 13

Another way to define Negative Binomial

◮ Let Y = the number of failures before the r-th success. Then Y = X − r. The PMF of Y is

$$f_Y(k) = \binom{r + k - 1}{r - 1} p^r (1 - p)^k = (-1)^k \binom{-r}{k} p^r (1 - p)^k, \quad k = 0, 1, \cdots$$

where

$$\binom{-r}{k} = \frac{(-r)(-r - 1) \cdots (-r - k + 1)}{k!}.$$

◮ We have used

$$\binom{r + k - 1}{r - 1} = \binom{r + k - 1}{k} = (-1)^k \binom{-r}{k}.$$
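The binomial-coefficient identity can be spot-checked numerically; `gen_binom` below is a hypothetical helper implementing the generalized coefficient defined on the slide:

```python
from math import comb, factorial

def gen_binom(a, k):
    """Generalized binomial coefficient a(a-1)···(a-k+1)/k! (a may be negative)."""
    num = 1
    for j in range(k):
        num *= a - j
    return num // factorial(k)  # exact: k consecutive integers are divisible by k!

r, k = 3, 4
print(comb(r + k - 1, r - 1))        # C(6, 2) = 15
print(comb(r + k - 1, k))            # also 15
print((-1) ** k * gen_binom(-r, k))  # also 15
```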
SLIDE 14

The Banach match problem

A pipe-smoking mathematician carries two match-boxes, one in his left-hand pocket and the other in his right-hand pocket. Initially, each box contains N matches. Each time the mathematician requires a match, he is equally likely to take it from either box. At the moment he first discovers one of the boxes to be empty, what is the probability that there are exactly k (k = 0, 1, · · · , N) matches in the other box?
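The answer is not given on this slide, but the classical solution (obtainable via the negative binomial) is P(k matches remain) = C(2N − k, N)(1/2)^(2N−k); a numerical check that these probabilities sum to 1, with N = 5 as an arbitrary choice:

```python
from math import comb

# Classical answer: P(k matches remain) = C(2N - k, N) * (1/2)^(2N - k)
N = 5
probs = [comb(2 * N - k, N) * 0.5 ** (2 * N - k) for k in range(N + 1)]
print(sum(probs))  # 1.0
```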

SLIDE 15

Expectation and variance of NB(r, p)

Let Di be the additional number of trials needed to obtain the i-th success after the (i − 1)-th success. Then the Di are independently distributed as Geometric(p) and X = D1 + · · · + Dr. It follows that

$$E(X) = \sum_{i=1}^{r} E(D_i) = \frac{r}{p}, \qquad Var(X) = \sum_{i=1}^{r} Var(D_i) = \frac{r(1 - p)}{p^2}.$$
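Checking these two formulas against a truncated sum over the NB(r, p) PMF (r = 3 and p = 0.4 are arbitrary illustration values):

```python
from math import comb

r, p = 3, 0.4
# Truncated PMF of X (trial of the r-th success); tail beyond 500 is negligible.
pmf = {k: comb(k - 1, r - 1) * p**r * (1 - p)**(k - r) for k in range(r, 500)}
mean = sum(k * f for k, f in pmf.items())
var = sum(k**2 * f for k, f in pmf.items()) - mean**2
print(mean, r / p)              # both 7.5
print(var, r * (1 - p) / p**2)  # both 11.25
```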

SLIDE 16

Memoryless property of Geometric distribution

If X ∼ Geometric(p) and s > t are any integers, then

$$P(X > s \mid X > t) = P(X > s - t) = P(X > s - t \mid X > 0).$$

The probability of obtaining s − t further failures after having observed t failures is the same as the probability of getting s − t failures at the very beginning. The geometric distribution “forgets” what has occurred: the probability depends on the length of the run, not on its location.
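Since P(X > t) = (1 − p)^t here (the first t trials are all failures), the memoryless identity is easy to verify numerically (p = 0.3, s = 9, t = 4 are arbitrary choices):

```python
p = 0.3

def surv(t):
    """P(X > t) = (1 - p)^t: the first t trials are all failures."""
    return (1 - p) ** t

s, t = 9, 4
lhs = surv(s) / surv(t)  # P(X > s | X > t) by the definition of conditional probability
rhs = surv(s - t)        # P(X > s - t)
print(lhs, rhs)  # equal up to floating-point rounding
```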