SLIDE 1
CS70: Jean Walrand: Lecture 36. Continuous Probability 3
SLIDE 2
CS70: Jean Walrand: Lecture 36.
Continuous Probability 3
- 1. Review: CDF, PDF
- 2. Review: Expectation
- 3. Review: Independence
- 4. Meeting at a Restaurant
- 5. Breaking a Stick
- 6. Maximum of Exponentials
- 7. Quantization Noise
- 8. Replacing Light Bulbs
- 9. Expected Squared Distance
- 10. Geometric and Exponential
SLIDE 4
Review: CDF and PDF.
Key idea: For a continuous RV, Pr[X = x] = 0 for all x ∈ ℜ.
Examples: Uniform in [0,1]; throw a dart in a target.
Thus, one cannot define Pr[outcome], then Pr[event]. Instead, one starts by defining Pr[event].
Thus, one defines Pr[X ∈ (−∞,x]] = Pr[X ≤ x] =: FX(x), x ∈ ℜ.
Then, one defines fX(x) := (d/dx)FX(x).
Hence, fX(x)ε ≈ Pr[X ∈ (x, x+ε)] for small ε.
FX(·) is the cumulative distribution function (CDF) of X.
fX(·) is the probability density function (PDF) of X.
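The approximation fX(x)ε ≈ Pr[X ∈ (x, x+ε)] can be checked numerically. A minimal sketch (not part of the slides) for the Expo(1) distribution, whose CDF F(x) = 1 − e^−x and PDF f(x) = e^−x are known in closed form:

```python
import math

# Expo(1): CDF F(x) = 1 - exp(-x), PDF f(x) = exp(-x).
def cdf(x: float) -> float:
    return 1.0 - math.exp(-x)

def pdf(x: float) -> float:
    return math.exp(-x)

x, eps = 0.7, 1e-4
# Pr[X in (x, x+eps)] = F(x+eps) - F(x); should be close to f(x)*eps.
interval_prob = cdf(x + eps) - cdf(x)
approx = pdf(x) * eps
rel_err = abs(interval_prob - approx) / interval_prob
```

The relative error shrinks as ε → 0, since the PDF is the derivative of the CDF.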
SLIDE 23
Expectation
Definitions:
(a) The expectation of a random variable X with pdf fX(x) is defined as
E[X] = ∫−∞^∞ x fX(x) dx.
(b) The expectation of a function of a random variable is defined as
E[h(X)] = ∫−∞^∞ h(x) fX(x) dx.
(c) The expectation of a function of multiple random variables X = (X1,...,Xn) is defined as
E[h(X)] = ∫···∫ h(x) fX(x) dx1 ··· dxn, where x = (x1,...,xn).
Justifications: Think of the discrete approximations of the continuous RVs.
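The discrete-approximation justification suggests a quick Monte Carlo sanity check (illustrative, not part of the slides): for X = U[0,1], E[X] = 1/2 and, taking h(x) = x², E[h(X)] = ∫0^1 x² dx = 1/3.

```python
import random

random.seed(0)
N = 200_000
samples = [random.random() for _ in range(N)]  # X ~ U[0,1]

# E[X] = integral of x*f(x) dx = 1/2 for U[0,1].
mean_x = sum(samples) / N
# E[h(X)] with h(x) = x^2: integral of x^2 dx = 1/3 on [0,1].
mean_x2 = sum(x * x for x in samples) / N
```

The empirical averages converge to the integrals by the law of large numbers.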
SLIDE 31
Independent Continuous Random Variables
Definition: The continuous RVs X and Y are independent if
Pr[X ∈ A, Y ∈ B] = Pr[X ∈ A] Pr[Y ∈ B], ∀A,B.
Theorem: The continuous RVs X and Y are independent if and only if
fX,Y(x,y) = fX(x) fY(y).
Proof: As in the discrete case.
Definition: The continuous RVs X1,...,Xn are mutually independent if
Pr[X1 ∈ A1,...,Xn ∈ An] = Pr[X1 ∈ A1]···Pr[Xn ∈ An], ∀A1,...,An.
Theorem: The continuous RVs X1,...,Xn are mutually independent if and only if
fX(x1,...,xn) = fX1(x1)···fXn(xn).
Proof: As in the discrete case.
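The defining product property can be observed empirically. A small sketch (the events A = [0, 0.3) and B = (0.6, 1] are arbitrary choices for illustration):

```python
import random

random.seed(1)
N = 200_000
count_a = count_b = count_ab = 0
for _ in range(N):
    x, y = random.random(), random.random()  # independent U[0,1]
    in_a = x < 0.3          # event {X in A}, A = [0, 0.3)
    in_b = y > 0.6          # event {Y in B}, B = (0.6, 1]
    count_a += in_a
    count_b += in_b
    count_ab += in_a and in_b

p_a, p_b, p_ab = count_a / N, count_b / N, count_ab / N
# Independence: Pr[X in A, Y in B] = Pr[X in A] * Pr[Y in B] = 0.3 * 0.4 = 0.12.
```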
SLIDE 48
Meeting at a Restaurant
Two friends go to a restaurant independently uniformly at random between noon and 1pm. They agree they will wait for 10 minutes. What is the probability they meet?
Here, (X,Y) are the times (in hours after noon) when the friends reach the restaurant. The shaded area is the set of pairs where |X − Y| < 1/6 (10 minutes = 1/6 hour), i.e., such that they meet. The complement is the union of two triangles. When you put them together, they form a square with sides 5/6.
Thus, Pr[meet] = 1 − (5/6)² = 11/36.
SLIDE 61
Breaking a Stick
You break a stick at two points chosen independently uniformly at random. What is the probability you can make a triangle with the three pieces?
Let X,Y be the two break points along the [0,1] stick, and let A,B,C be the lengths of the three pieces. You can make a triangle if A < B + C, B < A + C, and C < A + B. If X < Y, this means X < 0.5, Y < X + 0.5, Y > 0.5. This is the blue triangle. If X > Y, we get the red triangle, by symmetry. Thus, Pr[make triangle] = 1/4.
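The 1/4 answer is easy to verify by simulation. Note the three triangle inequalities are equivalent to requiring every piece to be shorter than 1/2 (a sketch, not part of the slides):

```python
import random

random.seed(3)
N = 200_000
ok = 0
for _ in range(N):
    x, y = sorted((random.random(), random.random()))
    a, b, c = x, y - x, 1 - y          # the three piece lengths
    # Triangle inequality: each piece shorter than the sum of the
    # other two, i.e. every piece < 1/2 (since a + b + c = 1).
    ok += max(a, b, c) < 0.5
p_triangle = ok / N
```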
SLIDE 71
Maximum of Two Exponentials
Let X = Expo(λ) and Y = Expo(µ) be independent. Define Z = max{X,Y}. Calculate E[Z].
We compute fZ, then integrate. One has
Pr[Z < z] = Pr[X < z, Y < z] = Pr[X < z] Pr[Y < z] = (1 − e^−λz)(1 − e^−µz) = 1 − e^−λz − e^−µz + e^−(λ+µ)z.
Thus, fZ(z) = λe^−λz + µe^−µz − (λ+µ)e^−(λ+µ)z, ∀z > 0.
Hence, E[Z] = ∫0^∞ z fZ(z) dz = 1/λ + 1/µ − 1/(λ+µ).
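The formula E[Z] = 1/λ + 1/µ − 1/(λ+µ) can be checked against simulation; the rates λ = 1, µ = 2 below are arbitrary illustrative values:

```python
import random

random.seed(4)
lam, mu = 1.0, 2.0
N = 200_000
total = 0.0
for _ in range(N):
    # Z = max of independent Expo(lam) and Expo(mu).
    total += max(random.expovariate(lam), random.expovariate(mu))
sim_mean = total / N
exact = 1 / lam + 1 / mu - 1 / (lam + mu)   # = 1 + 1/2 - 1/3
```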
SLIDE 85
Maximum of n i.i.d. Exponentials
Let X1,...,Xn be i.i.d. Expo(1). Define Z = max{X1,X2,...,Xn}. Calculate E[Z].
We use a recursion. The key idea is as follows: Z = min{X1,...,Xn} + V where V is the maximum of n−1 i.i.d. Expo(1). This follows from the memoryless property of the exponential.
Let then An = E[Z]. We see that
An = E[min{X1,...,Xn}] + An−1 = 1/n + An−1,
because the minimum of independent exponentials is exponential with the sum of the rates, so min{X1,...,Xn} = Expo(n) and E[min{X1,...,Xn}] = 1/n.
Hence, E[Z] = An = 1 + 1/2 + ··· + 1/n = H(n).
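The harmonic-number answer H(n) can be verified empirically (a sketch, with n = 10 as an arbitrary example):

```python
import random

random.seed(5)
n, N = 10, 100_000
total = 0.0
for _ in range(N):
    # Z = max of n i.i.d. Expo(1) variables.
    total += max(random.expovariate(1.0) for _ in range(n))
sim_mean = total / N
harmonic = sum(1.0 / k for k in range(1, n + 1))   # H(10)
```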
SLIDE 98
Quantization Noise
In digital video and audio, one represents a continuous value by a finite number of bits. This introduces an error perceived as noise: the quantization noise. What is the power of that noise?
Model: X = U[0,1] is the continuous value. Y is the closest multiple of 2^−n to X. Thus, we can represent Y with n bits. The error is Z := X − Y. The power of the noise is E[Z²].
Analysis: Since we round to the nearest multiple, Z is uniform in [−a, a] with a = 2^−(n+1). Thus, E[Z²] = a²/3 = (1/3)2^−2(n+1).
The power of the signal X is E[X²] = 1/3.
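The noise-power formula can be checked by simulating the quantizer directly (a sketch, with n = 4 bits as an arbitrary example):

```python
import random

random.seed(6)
n = 4                        # number of bits (illustrative choice)
step = 2.0 ** (-n)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.random()                 # X ~ U[0,1]
    y = round(x / step) * step          # closest multiple of 2^-n
    z = x - y                           # quantization error
    total += z * z
sim_power = total / N
exact = (1 / 3) * 2.0 ** (-2 * (n + 1))   # a^2/3 with a = 2^-(n+1)
```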
SLIDE 123
Quantization Noise
We saw that E[Z²] = (1/3)·2^(−2(n+1)) and E[X²] = 1/3.
The signal-to-noise ratio (SNR) is the power of the signal divided by the power of the noise. Thus, SNR = 2^(2(n+1)). Expressed in decibels, one has SNR(dB) = 10 log10(SNR) = 20(n+1) log10(2) ≈ 6(n+1). For instance, if n = 16, then SNR(dB) ≈ 102 dB.
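The 6(n+1) dB rule is easy to check by simulation. The sketch below (not part of the slides; the function name `snr_db` and the trial count are illustrative) quantizes U[0,1] samples to the nearest multiple of 2^(−n) and estimates the SNR:

```python
import math
import random

def snr_db(n_bits, trials=200_000, seed=0):
    """Estimate the quantization SNR in dB by Monte Carlo simulation."""
    rng = random.Random(seed)
    step = 2.0 ** -n_bits              # quantizer resolution 2^-n
    signal = noise = 0.0
    for _ in range(trials):
        x = rng.random()               # X = U[0,1]
        y = round(x / step) * step     # Y = closest multiple of 2^-n
        z = x - y                      # quantization error Z
        signal += x * x
        noise += z * z
    return 10.0 * math.log10(signal / noise)

print(snr_db(4))   # theory predicts about 6(4+1) = 30 dB
print(snr_db(8))   # theory predicts about 6(8+1) = 54 dB
```

Rounding to the nearest multiple makes Z uniform on [−2^(−(n+1)), 2^(−(n+1))], which has the same second moment a²/3 as the one-sided interval used on the slide.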
SLIDE 124
Replacing Light Bulbs
Say that light bulbs have i.i.d. Expo(1) lifetimes. We turn a light on, and replace the bulb as soon as it burns out. How many light bulbs do we need to replace in t units of time?
Theorem: The number Xt of replaced light bulbs is P(t). That is, Pr[Xt = n] = (t^n/n!)·e^(−t).
Proof: We see how Xt increases over the next ε ≪ 1 time units. Let A be the event that a bulb burns out during [t, t+ε]. (By memorylessness, A is independent of Xt, and Pr[A] ≈ ε since lifetimes are Expo(1).) Then,
Pr[Xt+ε = n] ≈ Pr[Xt = n, A^c] + Pr[Xt = n−1, A]
= Pr[Xt = n] Pr[A^c] + Pr[Xt = n−1] Pr[A]
≈ Pr[Xt = n](1−ε) + Pr[Xt = n−1] ε.
Hence, g(n,t) := Pr[Xt = n] is such that g(n, t+ε) ≈ g(n,t) − g(n,t)ε + g(n−1,t)ε.
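The recursion can be checked numerically before solving it: iterate g(n, t+ε) = g(n,t)(1−ε) + g(n−1,t)ε from g(n,0) = 1{n = 0} and compare with the Poisson pmf. A sketch (step size ε and range are illustrative, not from the slides):

```python
import math

# Iterate g(n, t+eps) = g(n,t)*(1-eps) + g(n-1,t)*eps, starting from
# g(n,0) = 1 if n == 0 else 0 (no bulb has been replaced at time 0).
eps, t_end, n_max = 1e-4, 2.0, 10
g = [1.0] + [0.0] * n_max
for _ in range(int(t_end / eps)):
    new = [g[0] * (1 - eps)]                       # n = 0 term
    for n in range(1, n_max + 1):
        new.append(g[n] * (1 - eps) + g[n - 1] * eps)
    g = new

# Compare with Pr[X_t = n] = t^n e^{-t} / n! at t = 2.
for n in range(4):
    print(n, g[n], t_end**n * math.exp(-t_end) / math.factorial(n))
```

The iterated values agree with the Poisson(2) probabilities up to the O(ε) discretization error.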
SLIDE 155
Replacing Light Bulbs
Theorem: The number Xt of replaced light bulbs is P(t). That is, Pr[Xt = n] = (t^n/n!)·e^(−t).
Proof (continued): We saw that g(n, t+ε) ≈ g(n,t) − g(n,t)ε + g(n−1,t)ε. Subtracting g(n,t), dividing by ε, and letting ε → 0, one gets g′(n,t) = −g(n,t) + g(n−1,t). You can check that these equations are solved by g(n,t) = (t^n/n!)·e^(−t). Indeed, then g′(n,t) = (t^(n−1)/(n−1)!)·e^(−t) − g(n,t) = g(n−1,t) − g(n,t).
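The theorem can also be sanity-checked directly: sum i.i.d. Expo(1) lifetimes until they exceed t and count how many bulbs were replaced. A sketch (the function name and trial count are illustrative):

```python
import random

def replacements(t, rng):
    """Number of i.i.d. Expo(1) bulb lifetimes that fully fit in [0, t]."""
    total, count = 0.0, 0
    while True:
        total += rng.expovariate(1.0)   # lifetime of the next bulb
        if total > t:
            return count
        count += 1

rng = random.Random(0)
t, trials = 3.0, 20_000
mean = sum(replacements(t, rng) for _ in range(trials)) / trials
print(mean)   # a Poisson(t) count has mean t = 3
```

The empirical mean is close to t, consistent with Xt = P(t).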
SLIDE 156
Expected Squared Distance
Problem 1: Pick two points X and Y independently and uniformly at random in [0,1]. What is E[(X − Y)²]?
Analysis: One has E[(X − Y)²] = E[X² + Y² − 2XY] = 1/3 + 1/3 − 2·(1/2)·(1/2) = 2/3 − 1/2 = 1/6.
Problem 2: What about in a unit square?
Analysis: One has E[||X − Y||²] = E[(X1 − Y1)²] + E[(X2 − Y2)²] = 2 × (1/6).
Problem 3: What about in n dimensions? Answer: n/6.
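A quick Monte Carlo check of the n/6 formula (a sketch, not from the slides; `mean_sq_dist` and the trial count are illustrative):

```python
import random

def mean_sq_dist(dim, trials=200_000, seed=0):
    """Estimate E[||X - Y||^2] for X, Y i.i.d. uniform in [0,1]^dim."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        # squared distance is a sum of per-coordinate squared differences
        acc += sum((rng.random() - rng.random()) ** 2 for _ in range(dim))
    return acc / trials

for d in (1, 2, 3):
    print(d, mean_sq_dist(d))   # theory: d/6
```

By linearity of expectation, the estimate grows linearly in the dimension, as the analysis predicts.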
SLIDE 174
Geometric and Exponential
The geometric and exponential distributions are similar. They are both memoryless. Consider flipping a coin every 1/N second with Pr[H] = p/N, where N ≫ 1. Let X be the time until the first H.
Fact: X ≈ Expo(p).
Analysis: Note that Pr[X > t] ≈ Pr[first Nt flips are tails] = (1 − p/N)^(Nt) ≈ exp{−pt}. Indeed, (1 − a/N)^N ≈ exp{−a}.
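The limit can be illustrated by simulation: flip a p/N-coin every 1/N second, record the time of the first head, and compare the empirical tail Pr[X > t] with exp(−pt). A sketch (N, p, and the trial count are illustrative choices):

```python
import math
import random

def time_to_first_head(N, p, rng):
    """Flip a coin every 1/N second with Pr[H] = p/N; time of first H."""
    flips = 1
    while rng.random() >= p / N:       # keep flipping until a head
        flips += 1
    return flips / N                   # Geometric(p/N) flips, 1/N s each

rng = random.Random(0)
N, p, t, trials = 200, 2.0, 1.0, 40_000
tail = sum(time_to_first_head(N, p, rng) > t for _ in range(trials)) / trials
print(tail, math.exp(-p * t))          # both close to e^-2
```

The scaled geometric tail matches the Expo(p) tail, up to the O(1/N) discretization.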
SLIDE 188
Summary
Continuous Probability 3
◮ Continuous RVs are essentially the same as discrete RVs.
◮ Think that X ≈ x with probability fX(x)ε.
◮ Sums become integrals, ....
◮ The exponential distribution is magical: it is memoryless.
SLIDE 194