CS70: Jean Walrand: Lecture 36.

Continuous Probability 3


  • 1. Review: CDF, PDF
  • 2. Review: Expectation
  • 3. Review: Independence
  • 4. Meeting at a Restaurant
  • 5. Breaking a Stick
  • 6. Maximum of Exponentials
  • 7. Quantization Noise
  • 8. Replacing Light Bulbs
  • 9. Expected Squared Distance
  • 10. Geometric and Exponential
Review: CDF and PDF.

Key idea: For a continuous RV, Pr[X = x] = 0 for all x ∈ ℜ. Examples: Uniform in [0,1]; throw a dart at a target. Thus, one cannot first define Pr[outcome] and then Pr[event]. Instead, one starts by defining Pr[event]. Thus, one defines

Pr[X ∈ (−∞,x]] = Pr[X ≤ x] =: FX(x), x ∈ ℜ.

Then, one defines fX(x) := (d/dx) FX(x). Hence, fX(x)ε ≈ Pr[X ∈ (x, x+ε)] for small ε.

FX(·) is the cumulative distribution function (CDF) of X. fX(·) is the probability density function (PDF) of X.
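As a concrete instance of these definitions, here is a short Python sketch; the choice of Expo(1), the point x = 1, and the step ε are illustrative assumptions, not part of the slides. It checks that fX(x)ε ≈ Pr[X ∈ (x, x+ε)]:

```python
import math

# CDF and PDF of X ~ Expo(1), a concrete instance of the definitions above.
def F(x):
    return 1 - math.exp(-x) if x > 0 else 0.0   # F_X(x) = Pr[X <= x]

def f(x):
    return math.exp(-x) if x > 0 else 0.0       # f_X(x) = d/dx F_X(x)

# For small eps, f_X(x) * eps approximates Pr[X in (x, x+eps)] = F(x+eps) - F(x).
x, eps = 1.0, 1e-4
prob_interval = F(x + eps) - F(x)
print(prob_interval, f(x) * eps)  # the two values nearly coincide
```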

Expectation

Definitions: (a) The expectation of a random variable X with pdf fX(x) is defined as

E[X] = ∫_{−∞}^{∞} x fX(x) dx.

(b) The expectation of a function of a random variable is defined as

E[h(X)] = ∫_{−∞}^{∞} h(x) fX(x) dx.

(c) The expectation of a function of multiple random variables X = (X1,...,Xn) with joint pdf fX is defined as

E[h(X)] = ∫ ··· ∫ h(x) fX(x) dx1 ··· dxn.

Justifications: Think of the discrete approximations of the continuous RVs.
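A quick Monte Carlo sanity check of definitions (a) and (b); the choice of X ~ Expo(1) (so E[X] = 1 and E[X^2] = 2) and h(x) = x^2 is ours, for illustration:

```python
import random

# Monte Carlo check of E[X] and E[h(X)] for X ~ Expo(1), h(x) = x^2.
random.seed(0)
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean_x = sum(samples) / n                   # estimates E[X] = 1
mean_x2 = sum(x * x for x in samples) / n   # estimates E[X^2] = 2
print(mean_x, mean_x2)
```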

Independent Continuous Random Variables

Definition: The continuous RVs X and Y are independent if

Pr[X ∈ A, Y ∈ B] = Pr[X ∈ A] Pr[Y ∈ B], ∀A,B.

Theorem: The continuous RVs X and Y are independent if and only if

fX,Y(x,y) = fX(x) fY(y).

Proof: As in the discrete case.

Definition: The continuous RVs X1,...,Xn are mutually independent if

Pr[X1 ∈ A1,...,Xn ∈ An] = Pr[X1 ∈ A1] ··· Pr[Xn ∈ An], ∀A1,...,An.

Theorem: The continuous RVs X1,...,Xn are mutually independent if and only if

fX(x1,...,xn) = fX1(x1) ··· fXn(xn).

Proof: As in the discrete case.
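The defining product property can be illustrated numerically; the intervals A = (0, 0.3) and B = (0.5, 1) below are arbitrary choices of ours:

```python
import random

# For independent X, Y ~ U[0,1], Pr[X in A, Y in B] should equal
# Pr[X in A] * Pr[Y in B]; here A = (0, 0.3) and B = (0.5, 1).
random.seed(1)
n = 200_000
pairs = [(random.random(), random.random()) for _ in range(n)]
p_a = sum(x < 0.3 for x, _ in pairs) / n
p_b = sum(y > 0.5 for _, y in pairs) / n
p_ab = sum(x < 0.3 and y > 0.5 for x, y in pairs) / n
print(p_ab, p_a * p_b)  # both close to 0.3 * 0.5 = 0.15
```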

Meeting at a Restaurant

Two friends go to a restaurant independently uniformly at random between noon and 1pm. They agree they will wait for 10 minutes. What is the probability they meet?

Here, (X,Y) are the times when the friends reach the restaurant. The shaded region consists of the pairs where |X − Y| < 1/6, i.e., such that they meet. The complement is the union of two triangles. When you put them together, they form a square with sides 5/6. Thus,

Pr[meet] = 1 − (5/6)^2 = 11/36.
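This can be sketched as a short simulation (sample size and seed are our choices), measuring time in hours after noon so the 10-minute window is 1/6:

```python
import random

# Monte Carlo estimate of the meeting probability: X, Y ~ U[0,1] independent
# arrival times (hours after noon); the friends meet iff |X - Y| < 1/6.
random.seed(2)
n = 300_000
meet = sum(abs(random.random() - random.random()) < 1/6 for _ in range(n))
print(meet / n)  # close to 11/36 ~ 0.3056
```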

Breaking a Stick

You break a stick at two points chosen independently uniformly at random. What is the probability you can make a triangle with the three pieces?

Let X, Y be the two break points along the [0,1] stick, and let A, B, C be the lengths of the three pieces. You can make a triangle if A < B + C, B < A + C, and C < A + B. If X < Y, this means X < 0.5, Y < X + 0.5, and Y > 0.5. This is the blue triangle in the unit square of (X,Y) pairs. If X > Y, we get the red triangle, by symmetry. Each triangle has area 1/8. Thus, Pr[make triangle] = 1/4.
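A simulation of the stick-breaking experiment (seed and sample size are our choices); note that the three triangle inequalities are equivalent to "every piece is shorter than 1/2":

```python
import random

# Monte Carlo estimate of the triangle probability: break the unit stick
# at X, Y ~ U[0,1]; the pieces form a triangle iff every piece < 1/2.
random.seed(3)
n = 300_000
count = 0
for _ in range(n):
    x, y = sorted((random.random(), random.random()))
    a, b, c = x, y - x, 1 - y      # the three piece lengths
    if max(a, b, c) < 0.5:         # equivalent to the triangle inequalities
        count += 1
print(count / n)  # close to 1/4
```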

Maximum of Two Exponentials

Let X = Expo(λ) and Y = Expo(µ) be independent. Define Z = max{X,Y}. Calculate E[Z].

We compute fZ, then integrate. One has

Pr[Z < z] = Pr[X < z, Y < z] = Pr[X < z] Pr[Y < z] = (1 − e^(−λz))(1 − e^(−µz)) = 1 − e^(−λz) − e^(−µz) + e^(−(λ+µ)z).

Thus, fZ(z) = λe^(−λz) + µe^(−µz) − (λ+µ)e^(−(λ+µ)z), ∀z > 0. Hence,

E[Z] = ∫_0^∞ z fZ(z) dz = 1/λ + 1/µ − 1/(λ+µ).
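The closed form can be checked by simulation; the test rates λ = 1 and µ = 2 (so E[Z] = 1 + 1/2 − 1/3 = 7/6) are arbitrary choices of ours:

```python
import random

# Monte Carlo check of E[max{X,Y}] = 1/lam + 1/mu - 1/(lam+mu)
# for independent X ~ Expo(lam), Y ~ Expo(mu).
random.seed(4)
lam, mu = 1.0, 2.0
n = 300_000
est = sum(max(random.expovariate(lam), random.expovariate(mu))
          for _ in range(n)) / n
exact = 1/lam + 1/mu - 1/(lam + mu)   # = 7/6
print(est, exact)
```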

Maximum of n i.i.d. Exponentials

Let X1,...,Xn be i.i.d. Expo(1). Define Z = max{X1,X2,...,Xn}. Calculate E[Z].

We use a recursion. The key idea is as follows: Z = min{X1,...,Xn} + V, where V is distributed like the maximum of n−1 i.i.d. Expo(1). This follows from the memoryless property of the exponential. Let then An = E[Z]. We see that

An = E[min{X1,...,Xn}] + An−1 = 1/n + An−1,

because the minimum of independent exponentials is exponential with rate equal to the sum of the rates, here n. Hence,

E[Z] = An = 1 + 1/2 + ··· + 1/n = H(n).
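The recursion's conclusion E[Z] = H(n) can be verified empirically; n = 10 and the sample size are our choices:

```python
import random

# Monte Carlo check that E[max of n i.i.d. Expo(1)] equals the
# harmonic number H(n) = 1 + 1/2 + ... + 1/n; here n = 10.
random.seed(5)
n, trials = 10, 200_000
est = sum(max(random.expovariate(1.0) for _ in range(n))
          for _ in range(trials)) / trials
harmonic = sum(1/k for k in range(1, n + 1))  # H(10)
print(est, harmonic)
```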

slide-98
SLIDE 98

Quantization Noise

slide-99
SLIDE 99

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits.

slide-100
SLIDE 100

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error

slide-101
SLIDE 101

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise.

slide-102
SLIDE 102

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise?

slide-103
SLIDE 103

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model:

slide-104
SLIDE 104

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model: X = U[0,1] is the continuous value.

slide-105
SLIDE 105

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model: X = U[0,1] is the continuous value. Y is the closest multiple

  • f 2−n to X.
slide-106
SLIDE 106

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model: X = U[0,1] is the continuous value. Y is the closest multiple

  • f 2−n to X. Thus, we can represent Y with n bits.
slide-107
SLIDE 107

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model: X = U[0,1] is the continuous value. Y is the closest multiple

  • f 2−n to X. Thus, we can represent Y with n bits. The error is

Z := X −Y.

slide-108
SLIDE 108

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise? Model: X = U[0,1] is the continuous value. Y is the closest multiple

  • f 2−n to X. Thus, we can represent Y with n bits. The error is

Z := X −Y. The power of the noise is E[Z 2].

slide-109
SLIDE 109

Quantization Noise

slide-114
SLIDE 114

Quantization Noise

In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise?

Model: X = U[0,1] is the continuous value. Y is the closest multiple of 2^−n to X. Thus, we can represent Y with n bits. The error is Z := X − Y. The power of the noise is E[Z^2].

Analysis: We see that |Z| is uniform in [0, a], where a = 2^−(n+1). Thus, E[Z^2] = a^2/3 = (1/3)·2^−2(n+1). The power of the signal X is E[X^2] = 1/3.
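The claim that the noise power is (1/3)·2^−2(n+1) can be checked by simulation; the sketch below (function name ours) quantizes uniform samples to the nearest multiple of 2^−n and averages the squared error:

```python
import random

def quantization_noise_power(n, trials=200_000, seed=0):
    """Estimate E[Z^2] where Z = X - Y, Y the nearest multiple of 2^-n to X ~ U[0,1]."""
    rng = random.Random(seed)
    step = 2.0 ** -n
    total = 0.0
    for _ in range(trials):
        x = rng.random()
        y = round(x / step) * step  # nearest multiple of 2^-n
        total += (x - y) ** 2
    return total / trials
```

For n = 4 the estimate should agree closely with the theoretical value (1/3)·2^−10.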

slide-123
SLIDE 123

Quantization Noise

We saw that E[Z^2] = (1/3)·2^−2(n+1) and E[X^2] = 1/3.

The signal-to-noise ratio (SNR) is the power of the signal divided by the power of the noise. Thus, SNR = 2^2(n+1). Expressed in decibels, one has SNR(dB) = 10·log10(SNR) = 20(n+1)·log10(2) ≈ 6(n+1). For instance, if n = 16, then SNR(dB) ≈ 102 dB.
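As a sanity check on the arithmetic, a minimal sketch of the dB conversion (function name ours):

```python
import math

def snr_db(n):
    """SNR in dB for an n-bit quantizer of U[0,1], per SNR = 2^(2(n+1))."""
    snr = 2.0 ** (2 * (n + 1))
    return 10 * math.log10(snr)

# Each extra bit adds about 6 dB of SNR.
```

For example, snr_db(16) evaluates to roughly 102.3, matching the 6(n+1) rule of thumb.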

slide-139
SLIDE 139

Replacing Light Bulbs

Say that light bulbs have i.i.d. Expo(1) lifetimes. We turn a light on, and replace it as soon as it burns out. How many light bulbs do we need to replace in t units of time?

Theorem: The number X_t of replaced light bulbs is P(t). That is, Pr[X_t = n] = (t^n/n!)·e^−t.

Proof: We see how X_t increases over the next ε ≪ 1 time units. Let A be the event that a bulb burns out during [t, t+ε]. Since the lifetimes are memoryless, A is essentially independent of X_t, with Pr[A] ≈ ε. Then,
Pr[X_{t+ε} = n] ≈ Pr[X_t = n, A^c] + Pr[X_t = n−1, A]
= Pr[X_t = n]·Pr[A^c] + Pr[X_t = n−1]·Pr[A]
≈ Pr[X_t = n](1−ε) + Pr[X_t = n−1]ε.
Hence, g(n,t) := Pr[X_t = n] is such that g(n,t+ε) ≈ g(n,t) − g(n,t)ε + g(n−1,t)ε.
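The theorem's pmf indeed satisfies the ε-recursion above, which one can verify numerically for a particular n, t, and small ε (helper name ours):

```python
import math

def g(n, t):
    """The claimed pmf g(n,t) = Pr[X_t = n] = t^n/n! * e^-t."""
    return t ** n / math.factorial(n) * math.exp(-t)

# Check g(n, t+eps) ≈ g(n,t) - g(n,t)*eps + g(n-1,t)*eps for small eps.
n, t, eps = 3, 2.0, 1e-4
lhs = g(n, t + eps)
rhs = g(n, t) + (g(n - 1, t) - g(n, t)) * eps
```

The two sides agree up to an O(ε^2) error, as the approximation predicts.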

slide-155
SLIDE 155

Replacing Light Bulbs

Say that light bulbs have i.i.d. Expo(1) lifetimes. We turn a light on, and replace it as soon as it burns out. How many light bulbs do we need to replace in t units of time?

Theorem: The number X_t of replaced light bulbs is P(t). That is, Pr[X_t = n] = (t^n/n!)·e^−t.

Proof: (continued) We saw that g(n,t+ε) ≈ g(n,t) − g(n,t)ε + g(n−1,t)ε. Subtracting g(n,t), dividing by ε, and letting ε → 0, one gets
g′(n,t) = −g(n,t) + g(n−1,t).
You can check that these equations are solved by g(n,t) = (t^n/n!)·e^−t. Indeed, then
g′(n,t) = (t^(n−1)/(n−1)!)·e^−t − (t^n/n!)·e^−t = g(n−1,t) − g(n,t).
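The theorem can also be seen numerically: simulate the replacement process by summing Expo(1) lifetimes until time t is exceeded, and compare the empirical distribution of the count with the Poisson pmf. A quick Monte Carlo sketch (names ours):

```python
import math
import random

def replacements_by_time(t, rng):
    """Count bulbs burned out by time t when lifetimes are i.i.d. Expo(1)."""
    count, now = 0, 0.0
    while True:
        now += rng.expovariate(1.0)  # lifetime of the current bulb
        if now > t:
            return count
        count += 1

def poisson_pmf(n, t):
    """Pr[P(t) = n] = t^n / n! * e^-t."""
    return t ** n / math.factorial(n) * math.exp(-t)

rng = random.Random(1)
t, trials = 3.0, 100_000
counts = [replacements_by_time(t, rng) for _ in range(trials)]
empirical_p2 = sum(c == 2 for c in counts) / trials
```

With enough trials, the empirical frequency of {X_t = 2} lands close to poisson_pmf(2, 3.0).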

slide-173
SLIDE 173

Expected Squared Distance

Problem 1: Pick two points X and Y independently and uniformly at random in [0,1]. What is E[(X − Y)^2]?
Analysis: One has
E[(X − Y)^2] = E[X^2 + Y^2 − 2XY] = 1/3 + 1/3 − 2·(1/2)·(1/2) = 2/3 − 1/2 = 1/6.
Problem 2: What about in a unit square?
Analysis: One has E[||X − Y||^2] = E[(X_1 − Y_1)^2] + E[(X_2 − Y_2)^2] = 2 × (1/6).
Problem 3: What about in n dimensions? Answer: n/6.
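The n/6 answer is easy to confirm by simulation; the sketch below (function name ours) estimates the expected squared distance between two uniform points in the unit d-cube:

```python
import random

def mean_sq_distance(d, trials=100_000, seed=0):
    """Monte Carlo estimate of E[||X - Y||^2], X and Y uniform in the unit d-cube."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Squared distance is the sum of per-coordinate squared differences.
        total += sum((rng.random() - rng.random()) ** 2 for _ in range(d))
    return total / trials
```

The estimates track d/6: about 0.167 for d = 1, about 0.5 for d = 3.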

slide-187
SLIDE 187

Geometric and Exponential

The geometric and exponential distributions are similar. They are both memoryless. Consider flipping a coin every 1/N second with Pr[H] = p/N, where N ≫ 1. Let X be the time until the first H.
Fact: X ≈ Expo(p).
Analysis: Note that
Pr[X > t] ≈ Pr[first Nt flips are tails] = (1 − p/N)^{Nt} ≈ exp{−pt}.
Indeed, (1 − a/N)^N ≈ exp{−a} for N ≫ 1.
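A small simulation makes the Expo(p) limit concrete: flip a p/N-coin every 1/N seconds and record when the first head appears; the empirical survival probability Pr[X > t] should approach e^{−pt}. A sketch under these assumptions (names ours):

```python
import math
import random

def time_to_first_head(p, N, rng):
    """Flip a coin with Pr[H] = p/N every 1/N seconds; return the time of the first H."""
    flips = 0
    while True:
        flips += 1
        if rng.random() < p / N:
            return flips / N

rng = random.Random(3)
p, N, t, trials = 1.5, 200, 1.0, 40_000
# Empirical Pr[X > t]; should be close to exp(-p*t) when N is large.
survival = sum(time_to_first_head(p, N, rng) > t for _ in range(trials)) / trials
```

Here 1/N·Geom(p/N) plays the role of X, and for N = 200 the survival estimate is already within about a percent of e^{−1.5}.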

slide-194
SLIDE 194

Summary

Continuous Probability 3

◮ Continuous RVs are essentially the same as discrete RVs.
◮ Think that X ≈ x with probability f_X(x)ε.
◮ Sums become integrals, ....
◮ The exponential distribution is magical: memoryless.