variance: Var[X] = E[(X − μ)²], often denoted σ². The standard deviation is σ = √Var[X]. - PowerPoint PPT Presentation

variance




properties of expectation

Linearity, II: Let X and Y be two random variables derived from outcomes of a single experiment. Then

E[X+Y] = E[X] + E[Y]

Can extend by induction to say that the expectation of a sum = sum of expectations:

E(X₁ + X₂ + … + Xₙ) = E(X₁) + E(X₂) + … + E(Xₙ)

One more linearity of expectation practice problem

Given a DNA sequence of length n, e.g. AAATGAATGAATCC……, where each position is (independently) A with probability pA, T with probability pT, G with probability pG, C with probability pC.

What is the expected number of occurrences of the substring AATGAAT? E.g., AAATGAATGAATCC contains two overlapping occurrences.
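By linearity, the answer is (n − 6) times the probability that the pattern begins at any fixed position; overlaps don't matter because expectation is linear even for dependent indicators. A small Python sketch (the helper name and the uniform base frequencies are illustrative, not from the slides):

```python
# Expected occurrences of AATGAAT in a random length-n DNA string.
# Let X_i = 1 if the pattern starts at position i; by linearity,
# E[count] = (number of start positions) * P(match at a fixed position).

def expected_occurrences(n, pA, pT, pG, pC):
    # AATGAAT has 4 A's, 2 T's, and 1 G, so (assuming independent positions)
    p_match = pA**4 * pT**2 * pG
    return (n - 6) * p_match  # positions 1..n-6 can start a length-7 match

# Example with uniform base frequencies and n = 14 as on the slide:
print(expected_occurrences(14, 0.25, 0.25, 0.25, 0.25))
```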

variance

Definitions: The variance of a random variable X with mean E[X] = μ is Var[X] = E[(X−μ)²], often denoted σ².

The standard deviation of X is σ = √Var[X]
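The definition translates directly into code. A minimal sketch, representing a pmf as a {value: probability} dict (the helper names are mine, not the lecture's):

```python
# Variance straight from the definition Var[X] = E[(X - mu)^2],
# for a pmf given as a {value: probability} dict.
def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def variance(pmf):
    mu = mean(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

fair_coin = {0: 0.5, 1: 0.5}   # indicator of heads
print(mean(fair_coin), variance(fair_coin))  # 0.5 0.25
```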


properties of variance

Var[aX+b] = a²Var[X]

NOT linear; insensitive to location (b), quadratic in scale (a).

Ex: Suppose E[X] = 0, Var[X] = 1, and let Y = 1000X. Then
E[Y] = E[1000X] = 1000·E[X] = 0
Var[Y] = Var[10³X] = 10⁶·Var[X] = 10⁶
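A quick numeric check of the scaling rule, reusing a pmf-based variance helper (the shift by +7 is my addition to show the location term b really vanishes):

```python
# Checking Var[aX + b] = a^2 Var[X]: shifting by b changes nothing,
# scaling by a multiplies the variance by a^2.
def variance(pmf):
    mu = sum(x * p for x, p in pmf.items())
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

X = {-1: 0.5, 1: 0.5}                         # E[X] = 0, Var[X] = 1
Y = {1000 * x + 7: p for x, p in X.items()}   # Y = 1000 X + 7
print(variance(X), variance(Y))  # 1.0 1000000.0 -- the +7 is invisible
```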


properties of variance

Var(X) = E[(X − µ)²]
       = E[X² − 2µX + µ²]
       = E[X²] − 2µE[X] + µ²
       = E[X²] − 2µ² + µ²
       = E[X²] − µ²
       = E[X²] − (E[X])²

Example: What is Var[X] when X is the outcome of one fair die? E[X] = 7/2, so Var[X] = E[X²] − (E[X])² = 91/6 − (7/2)² = 35/12.
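The die computation can be checked in exact arithmetic using the shortcut form of the variance:

```python
from fractions import Fraction

# Var[X] for one fair die via Var[X] = E[X^2] - (E[X])^2,
# done exactly with rationals.
faces = range(1, 7)
EX  = sum(Fraction(i, 6) for i in faces)       # 7/2
EX2 = sum(Fraction(i * i, 6) for i in faces)   # 91/6
print(EX2 - EX ** 2)  # 35/12
```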



properties of variance

In general: Var[X+Y] ≠ Var[X] + Var[Y] (variance is NOT linear).

Ex 1: Let X = ±1 based on 1 coin flip. As shown above, E[X] = 0, Var[X] = 1. Let Y = −X; then Var[Y] = (−1)²Var[X] = 1. But X+Y = 0, always, so Var[X+Y] = 0.

Ex 2: As another example, is Var[X+X] = 2Var[X]? No: Var[X+X] = Var[2X] = 4Var[X].
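Both counterexamples are easy to see by simulation; a sketch (sample size and seed are arbitrary choices of mine):

```python
import random

# Two counterexamples to "variance is additive": with X = ±1 fair,
# Y = -X gives Var[X+Y] = 0, while X+X = 2X has variance 4*Var[X].
def sample_var(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

random.seed(1)
xs = [random.choice([-1, 1]) for _ in range(100_000)]

print(sample_var(xs))                      # ~1  (Var[X] = 1)
print(sample_var([x + (-x) for x in xs]))  # exactly 0.0: X+Y is constant
print(sample_var([x + x for x in xs]))     # ~4, i.e. 4*Var[X], not 2*Var[X]
```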

more variance examples

[Figure: four PMFs on the same support with increasing spread: σ² = 5.83, σ² = 10, σ² = 15, σ² = 19.7]

independence of r.v.s


r.v.s and independence

Defn: r.v. X and event E are independent if the event E is independent of the event {X=x} (for any fixed x), i.e.
∀x P({X = x} & E) = P({X=x}) • P(E)

Defn: Two r.v.s X and Y are independent if the events {X=x} and {Y=y} are independent (for any fixed x, y), i.e.
∀x, y P({X = x} & {Y=y}) = P({X=x}) • P({Y=y})

Intuition as before: knowing X doesn't help you guess Y or E and vice versa.



r.v.s and independence

Two random variables X and Y are independent if the events {X=x} and {Y=y} are independent (for any x, y), i.e.
∀x, y P({X = x} & {Y=y}) = P({X=x}) • P({Y=y})

Ex: Let X be the number of heads in the first n of 2n coin flips, Y the number in the last n flips, and Z the total. X and Y are independent, since they are determined by disjoint sets of independent flips. But X and Z are not independent, since, e.g., knowing that X = 0 precludes Z > n. E.g., P(X = 0) and P(Z = n+1) are both positive, but P(X = 0 & Z = n+1) = 0.
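The dependence of X and Z can be exhibited with one exact calculation (n = 5 here is an arbitrary small choice):

```python
from math import comb

# X = # heads in first n of 2n fair flips, Z = total # heads.
# P(X=0) and P(Z=n+1) are both positive, but X = 0 forces Z <= n,
# so P(X=0 & Z=n+1) = 0 and X, Z are not independent.
n = 5
pX0 = comb(n, 0) / 2**n                # P(X = 0) = 1/32
pZ  = comb(2 * n, n + 1) / 2**(2 * n)  # P(Z = n+1) = 210/1024
p_joint = 0.0  # Z = n+1 with X = 0 would need n+1 heads in the last n flips
print(pX0 > 0 and pZ > 0 and p_joint != pX0 * pZ)  # True
```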


products of independent r.v.s

Theorem: If X & Y are independent, then E[X•Y] = E[X]•E[Y]
Proof:
E[X•Y] = Σx Σy xy • P({X=x} & {Y=y})
       = Σx Σy xy • P({X=x}) • P({Y=y})   (independence)
       = (Σx x • P({X=x})) • (Σy y • P({Y=y}))
       = E[X]•E[Y]

Note: NOT true in general; see earlier example E[X²] ≠ (E[X])²
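The double-sum identity in the proof can be verified numerically on two small independent pmfs (the particular values are illustrative):

```python
# If X, Y are independent, E[XY] = sum_x sum_y x*y*P(X=x)*P(Y=y),
# which factors as (sum_x x*P(X=x)) * (sum_y y*P(Y=y)) = E[X]*E[Y].
X = {1: 0.3, 2: 0.7}
Y = {0: 0.5, 4: 0.5}
EX  = sum(x * p for x, p in X.items())
EY  = sum(y * q for y, q in Y.items())
EXY = sum(x * y * p * q for x, p in X.items() for y, q in Y.items())
print(abs(EXY - EX * EY) < 1e-12)  # True
```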




variance of independent r.v.s is additive (Bienaymé, 1853)

Theorem: If X & Y are independent, then Var[X+Y] = Var[X] + Var[Y]
Proof:
Var[X+Y] = E[(X+Y)²] − (E[X+Y])²
         = E[X²] + 2E[XY] + E[Y²] − (E[X] + E[Y])²
         = (E[X²] − (E[X])²) + (E[Y²] − (E[Y])²) + 2(E[XY] − E[X]E[Y])
         = Var[X] + Var[Y] + 2(E[XY] − E[X]E[Y])
         = Var[X] + Var[Y]   (the last term is 0 by the previous theorem)
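Bienaymé's theorem can be checked exactly on small pmfs by building the distribution of X+Y from the product of the marginals (die and coin chosen for illustration):

```python
from collections import defaultdict

# Bienaymé, checked on small pmfs: build the pmf of X + Y from
# independent X, Y and compare Var[X+Y] with Var[X] + Var[Y].
def variance(pmf):
    mu = sum(x * p for x, p in pmf.items())
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

X = {i: 1/6 for i in range(1, 7)}   # fair die
Y = {0: 0.5, 1: 0.5}                # fair coin indicator
S = defaultdict(float)
for x, p in X.items():
    for y, q in Y.items():
        S[x + y] += p * q           # independence: joint pmf = product
print(abs(variance(S) - (variance(X) + variance(Y))) < 1e-9)  # True
```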


a zoo of (discrete) random variables


discrete uniform random variables

A discrete random variable X equally likely to take any (integer) value between integers a and b, inclusive, is uniform.
Notation: X ~ Unif(a,b)
Probability: P(X=i) = 1/(b−a+1) for a ≤ i ≤ b
Mean, Variance: E[X] = (a+b)/2, Var[X] = ((b−a+1)² − 1)/12
Example: the value shown on one roll of a fair die is Unif(1,6):
P(X=i) = 1/6
E[X] = 7/2
Var[X] = 35/12

[Figure: PMF of Unif(1,6), flat at 1/6 over i = 1..6]
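The closed forms agree with a brute-force computation over the pmf; a sketch, using the die Unif(1,6) as the test case:

```python
# Integer Unif(a, b): P(X=i) = 1/(b-a+1), E[X] = (a+b)/2,
# Var[X] = ((b-a+1)^2 - 1)/12.  Check against the definition.
a, b = 1, 6                      # a fair die, as on the slide
n = b - a + 1
mean_formula = (a + b) / 2
var_formula  = (n * n - 1) / 12
mu  = sum(i / n for i in range(a, b + 1))
var = sum((i - mu) ** 2 / n for i in range(a, b + 1))
print(abs(mu - mean_formula) < 1e-12, abs(var - var_formula) < 1e-9)  # True True
```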

Bernoulli random variables

An experiment results in "Success" or "Failure". X is an indicator random variable (1 = success, 0 = failure) with P(X=1) = p and P(X=0) = 1−p. X is called a Bernoulli random variable: X ~ Ber(p)
E[X] = E[X²] = p
Var(X) = E[X²] − (E[X])² = p − p² = p(1−p)
Examples: coin flip; random binary digit; whether a disk drive crashed.

(Jacob, aka James or Jacques, Bernoulli, 1654–1705)
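The key trick is that X² = X for an indicator, so E[X²] = E[X] = p; a one-screen check (p = 0.3 is an arbitrary example value):

```python
# Ber(p): X^2 = X for an indicator, so E[X^2] = E[X] = p
# and Var[X] = p - p^2 = p(1-p).
p = 0.3
pmf = {1: p, 0: 1 - p}
EX  = sum(x * q for x, q in pmf.items())
EX2 = sum(x * x * q for x, q in pmf.items())
print(EX == p, abs((EX2 - EX**2) - p * (1 - p)) < 1e-12)  # True True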

binomial random variables

Consider n independent random variables Yᵢ ~ Ber(p). X = Σᵢ Yᵢ is the number of successes in n trials. X is a Binomial random variable: X ~ Bin(n,p)

P(X=k) = (n choose k) pᵏ(1−p)ⁿ⁻ᵏ for 0 ≤ k ≤ n; by the Binomial theorem, these probabilities sum to 1.

E[X] = pn
Var(X) = p(1−p)n

Examples: # of heads in n coin flips; # of 1's in a randomly generated length-n bit string; # of disk drive crashes in a 1000-computer cluster.
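The pmf, mean, and variance formulas can be cross-checked directly (n = 10, p = 0.25 matches one of the plotted examples):

```python
from math import comb

# Bin(n, p): P(X=k) = C(n,k) p^k (1-p)^(n-k).  Sanity-check that the
# pmf sums to 1 and that its mean and variance are np and np(1-p).
n, p = 10, 0.25
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
mu  = sum(k * q for k, q in pmf.items())
var = sum((k - mu) ** 2 * q for k, q in pmf.items())
print(abs(sum(pmf.values()) - 1) < 1e-12,
      abs(mu - n * p) < 1e-12,
      abs(var - n * p * (1 - p)) < 1e-9)  # True True True
```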

binomial pmfs

[Figure: PMF for X ~ Bin(10,0.5) and PMF for X ~ Bin(10,0.25); k vs P(X=k), with µ ± σ marked]

[Figure: PMF for X ~ Bin(30,0.5) and PMF for X ~ Bin(30,0.1); k vs P(X=k), with µ ± σ marked]

mean, variance of the binomial (II)
