Ch05. Introduction to Probability Theory
Ping Yu, Faculty of Business and Economics, The University of Hong Kong


  1. Ch05. Introduction to Probability Theory. Ping Yu, Faculty of Business and Economics, The University of Hong Kong.

  2. Outline: 1 Foundations; 2 Random Variables; 3 Expectation; 4 Multivariate Random Variables; 5 Conditional Distributions and Expectation; 6 The Normal and Related Distributions.

  3. Foundations

  4. Foundations: Founder of Modern Probability Theory. Andrey N. Kolmogorov (1903-1987), Russian. Vladimir Arnold, a student of Kolmogorov, once said: "Kolmogorov – Poincaré – Gauss – Euler – Newton are only five lives separating us from the source of our science."

  5. Foundations: Sample Space and Event
The set Ω of all possible outcomes of an experiment is called the sample space for the experiment.
- Take the simple example of tossing a coin. There are two outcomes, heads and tails, so we can write Ω = {H, T}.
- If two coins are tossed in sequence, we can write the four outcomes as Ω = {HH, HT, TH, TT}.
An event A is any collection of possible outcomes of an experiment. An event is a subset of Ω, including Ω itself and the null set ∅.
- Continuing the two-coin example, one event is A = {HH, HT}, the event that the first coin is heads.

  6. Foundations: Probability
A probability function P assigns probabilities (numbers between 0 and 1) to events A in Ω. The probability of an event is the sum of the probabilities of the outcomes in the event:
P(A) = (# of ways (or times) A can occur) / (total # of outcomes in Ω).
P satisfies P(Ω) = 1, P(Aᶜ) = 1 − P(A) [which implies P(∅) = 0], and P(A) ≤ P(B) if A ⊆ B.
A probability of 0 means that the event is almost impossible, and a probability of 1 means that the event is almost certain.
Continuing the two-coin example, P(A) = 1/2.
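A minimal Python sketch of the counting definition of probability, using the slides' two-coin example and assuming equally likely outcomes:

```python
from itertools import product

# Sample space for two coins tossed in sequence.
omega = [a + b for a, b in product("HT", repeat=2)]  # ['HH', 'HT', 'TH', 'TT']

# Event A: the first coin is heads.
A = [w for w in omega if w[0] == "H"]                # ['HH', 'HT']

# P(A) = (# outcomes in A) / (total # outcomes in Omega).
P_A = len(A) / len(omega)

# Complement rule: P(A^c) = 1 - P(A).
P_A_complement = 1 - P_A

print(P_A, P_A_complement)  # 0.5 0.5, matching the slide
```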

  7. Random Variables

  8. Random Variables: Random Variables and CDFs
A random variable (r.v.) X is a function from a sample space Ω into the real line.
- The r.v. transforms the abstract elements in Ω into analyzable real values, so it is a numerical summary of a random outcome.
- Notation: we denote r.v.'s by uppercase letters such as X, and use lowercase letters such as x for potential and realized values.
- Caution: be careful about the difference between a random variable and its realized value. The former is a function from outcomes to values, while the latter is a value associated with a specific outcome.
For a r.v. X we define its cumulative distribution function (cdf) as F(x) = P(X ≤ x).
- Notation: sometimes we write this as F_X(x) to denote that it is the cdf of X.
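A minimal sketch of an r.v. as a function on Ω: here X counts heads in the two-coin example (this particular mapping is an illustrative assumption), and F(x) = P(X ≤ x) is computed by counting outcomes:

```python
from itertools import product

omega = [a + b for a, b in product("HT", repeat=2)]

# The r.v. X maps each outcome in Omega to a real number: here, the number of heads.
def X(outcome):
    return outcome.count("H")

# cdf: F(x) = P(X <= x), with equally likely outcomes.
def F(x):
    return sum(1 for w in omega if X(w) <= x) / len(omega)

print(F(0), F(1), F(2))  # 0.25 0.75 1.0
```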

  9. Random Variables: Discrete Variables
The r.v. X is discrete if F(x) is a step function.
- A discrete r.v. can take only finitely or countably many values x_1, ..., x_J, where J can be ∞.
The probability function for X takes the form of the probability mass function (pmf)
P(X = x_j) = p_j, j = 1, ..., J, (1)
where 0 ≤ p_j ≤ 1 and ∑_{j=1}^{J} p_j = 1.
- F(x) = ∑_{j=1}^{J} p_j · 1(x_j ≤ x), where 1(·) is the indicator function, which equals one when the event in the parentheses is true and zero otherwise.
A famous discrete r.v. is the Bernoulli (or binary) r.v., where J = 2, x_1 = 0, x_2 = 1, p_2 = p and p_1 = 1 − p. [Figure here]
- The Bernoulli distribution is often used to model sex, employment status, and other dichotomies.
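A minimal sketch of the Bernoulli pmf and its step-function cdf (p = 0.5, as in the figure on slide 14, is assumed here for illustration):

```python
# Bernoulli pmf: x_1 = 0 with probability 1 - p, x_2 = 1 with probability p.
p = 0.5
pmf = {0: 1 - p, 1: p}

# cdf as a step function: F(x) = sum of p_j over all x_j <= x.
def F(x):
    return sum(prob for xj, prob in pmf.items() if xj <= x)

print(F(-0.5), F(0), F(0.7), F(1))  # 0.0 0.5 0.5 1.0
```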

  10. Random Variables: Jacob Bernoulli (1655-1705), Swiss
Jacob Bernoulli was one of the many prominent mathematicians in the Bernoulli family.

  11. Random Variables: Continuous Random Variables
The r.v. X is continuous if F(x) is continuous in x.
- In this case P(X = x) = 0 for all x ∈ ℝ, so the representation (1) is unavailable. We instead represent the relative probabilities by the probability density function (pdf) f(x) = (d/dx)F(x).
- A function f(x) is a pdf iff f(x) ≥ 0 for all x ∈ ℝ and ∫_{−∞}^{∞} f(x) dx = 1.
By the fundamental theorem of calculus,
F(x) = ∫_{−∞}^{x} f(u) du and P(a ≤ X ≤ b) = F(b) − F(a) = ∫_{a}^{b} f(u) du.
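A minimal numerical check of these identities; the standard normal density is used as the pdf (an illustrative choice, since the identities hold for any pdf):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# F(x) = integral of f(u) du from -inf to x, here with f = standard normal pdf.
x = 1.0
F_x, _ = quad(norm.pdf, -np.inf, x)
print(F_x, norm.cdf(x))                 # both ~0.8413

# P(a <= X <= b) = F(b) - F(a) = integral of f(u) du from a to b.
a, b = -1.0, 1.0
prob, _ = quad(norm.pdf, a, b)
print(prob, norm.cdf(b) - norm.cdf(a))  # both ~0.6827
```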

  12. Random Variables: Examples of Continuous R.V.s
A famous continuous r.v. is the standard normal r.v. [Figure here] The standard normal density is
φ(x) = (1/√(2π)) exp(−x²/2), −∞ < x < ∞,
which is a symmetric, bell-shaped distribution with a single peak.
- Notation: write X ∼ N(0, 1), and denote the standard normal cdf by Φ(x). Φ(x) has no closed-form expression. [Figure here]
Another famous continuous r.v. is the standard uniform r.v., whose pdf is f(x) = 1(0 ≤ x ≤ 1), i.e., X can occur only on [0, 1] and occurs uniformly. We denote X ∼ U[0, 1]. [Figure here]
- A generalization is X ∼ U[a, b], a < b.
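A minimal sketch evaluating the two densities; scipy's norm and uniform objects implement these distributions, and uniform(loc=a, scale=b−a) gives the U[a, b] generalization (the values of a and b below are illustrative):

```python
import numpy as np
from scipy.stats import norm, uniform

# Standard normal density phi(x) = exp(-x^2/2) / sqrt(2*pi), evaluated at its peak.
print(norm.pdf(0.0), 1 / np.sqrt(2 * np.pi))  # both ~0.3989

# Phi(x) has no closed form; norm.cdf evaluates it numerically.
print(norm.cdf(1.96))                         # ~0.975

# Standard uniform U[0, 1]: pdf is the indicator 1(0 <= x <= 1).
print(uniform.pdf(0.5), uniform.pdf(1.5))     # 1.0 0.0

# Generalization U[a, b] via loc/scale, here a = 2, b = 5.
a, b = 2, 5
print(uniform(loc=a, scale=b - a).pdf(3))     # 1/3
```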

  13. Random Variables: Carl F. Gauss (1777-1855), Göttingen
The normal distribution was invented by Carl F. Gauss (1777-1855), so it is also known as the Gaussian distribution.

  14. [Figure: PMF, PDF and CDF of the Bernoulli (p = 0.5), Standard Normal, and Uniform Distributions]

  15. Expectation

  16. Expectation: Expectation
For any real function g, we define the mean or expectation E[g(X)] as follows: if X is discrete,
E[g(X)] = ∑_{j=1}^{J} g(x_j) p_j,
and if X is continuous,
E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.
- The mean is a weighted average of all possible values of X, with the weights determined by its pmf or pdf.
Since E[a + bX] = a + b · E[X], we say that expectation is a linear operator.
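A minimal sketch of both formulas, with g(x) = x² as an illustrative choice: the discrete sum uses the Bernoulli pmf from above, and the continuous integral uses the standard normal pdf:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

g = lambda x: x ** 2

# Discrete case: E[g(X)] = sum_j g(x_j) * p_j, here Bernoulli with p = 0.5.
p = 0.5
E_discrete = g(0) * (1 - p) + g(1) * p
print(E_discrete)    # 0.5

# Continuous case: E[g(X)] = integral of g(x) f(x) dx, here f = N(0,1) pdf.
E_continuous, _ = quad(lambda x: g(x) * norm.pdf(x), -np.inf, np.inf)
print(E_continuous)  # ~1.0, the second moment of N(0,1)
```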

  17. Expectation: Moments
For m > 0, we define the mth moment of X as E[X^m], the mth central moment as E[(X − E[X])^m], the mth absolute moment of X as E[|X|^m], and the mth absolute central moment as E[|X − E[X]|^m].
Two special moments are the first moment, the mean µ = E[X], which is a measure of central tendency, and the second central moment, the variance
σ² = E[(X − µ)²] = E[X²] − µ²,
which is a measure of variability or dispersion.
- We call σ = √σ² the standard deviation of X.
- The definition of variance implies E[X²] = σ² + µ², i.e., the second moment is the variance plus the first moment squared.
- We also write σ² = Var(X), which allows the convenient expression Var(a + bX) = b² Var(X).
The standard normal density has all moments finite, e.g., µ = 0 and σ = 1.
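A minimal numerical check of the two variance identities, using simulated draws (the distribution, sample size, and constants a, b are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=200_000)  # illustrative X with mu = 2, sigma = 3

mu = x.mean()
var = ((x - mu) ** 2).mean()

# E[X^2] = sigma^2 + mu^2.
print((x ** 2).mean(), var + mu ** 2)   # approximately equal, ~13

# Var(a + bX) = b^2 Var(X).
a, b = 1.0, 4.0
print(np.var(a + b * x), b ** 2 * var)  # approximately equal
```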

  18. Expectation: Standardizing a Random Variable
For a r.v. X, the standardized r.v. is defined as
Z = (X − µ)/σ = X/σ − µ/σ.
Letting a = −µ/σ and b = 1/σ, we have
E[Z] = −µ/σ + (1/σ)µ = 0, and Var(Z) = Var(X)/σ² = 1.
This transformation is frequently used in statistical inference.
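A minimal sketch standardizing the simulated draws from the previous block (simulated data is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=200_000)

# Z = (X - mu) / sigma has mean 0 and variance 1.
z = (x - x.mean()) / x.std()
print(z.mean(), z.var())  # ~0.0, ~1.0
```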

  19. Expectation: Skewness and Kurtosis
Skewness: a measure of the asymmetry of a distribution, E[(X − µ)³]/σ³ = E[Z³].
- If X has a symmetric distribution about µ, then its skewness is zero, e.g., the standard normal distribution.
- If X has a long right tail, then its skewness is positive, and X is called positive- (or right-) skewed.
- If X has a long left tail, then its skewness is negative, and X is called negative- (or left-) skewed.
Kurtosis: a measure of the heavy-tailedness of a distribution, E[(X − µ)⁴]/σ⁴ = E[Z⁴].
- The normal distribution has kurtosis 3: E[Z⁴] = 3.
- If the kurtosis of a distribution is greater than 3, then it is called heavy-tailed or leptokurtic.
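A minimal sketch computing both measures from simulated draws; scipy's kurtosis defaults to the excess (Fisher) definition, so fisher=False is passed to match the slide's E[Z⁴] convention (the two distributions are illustrative assumptions):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

# Symmetric: standard normal draws => skewness ~0, kurtosis ~3.
z = rng.standard_normal(200_000)
print(skew(z), kurtosis(z, fisher=False))

# Long right tail: exponential draws => positive skewness, kurtosis > 3.
e = rng.exponential(size=200_000)
print(skew(e), kurtosis(e, fisher=False))  # ~2.0, ~9.0
```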

  20. Expectation: [Figure: Four Distributions with Different Skewness and Kurtosis and the Same Mean 0 and Variance 1]

  21. Multivariate Random Variables

  22. Multivariate Random Variables: Bivariate Random Variables
A pair of bivariate r.v.'s (X, Y) is a function from the sample space into ℝ².
The joint cdf of (X, Y) is F(x, y) = P(X ≤ x, Y ≤ y). If F is continuous, the joint pdf is
f(x, y) = ∂²F(x, y)/∂x∂y.
- For any set A ⊆ ℝ²,
P((X, Y) ∈ A) = ∬_A f(x, y) dx dy.
- The discrete case can be discussed in parallel. [Exercise]
For any function g(x, y),
E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy.
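A minimal numerical sketch of the double-integral formulas, taking X and Y to be independent standard normals so that f(x, y) = φ(x)φ(y) (this factorization is an assumption for illustration):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import norm

# Illustrative joint pdf of two independent standard normals.
# Note: dblquad expects the inner integration variable (y) as the first argument.
f = lambda y, x: norm.pdf(x) * norm.pdf(y)

# P((X, Y) in A) with A = [0, 1] x [0, 1].
prob, _ = dblquad(f, 0, 1, 0, 1)
print(prob)  # ~0.1165 = (Phi(1) - Phi(0))^2

# E[g(X, Y)] with g(x, y) = x^2 + y^2.
g = lambda y, x: (x ** 2 + y ** 2) * norm.pdf(x) * norm.pdf(y)
E, _ = dblquad(g, -np.inf, np.inf, -np.inf, np.inf)
print(E)     # ~2.0
```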

  23. Multivariate Random Variables: Marginal Distributions
The marginal distribution of X is
F_X(x) = P(X ≤ x) = lim_{y→∞} F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f(u, y) dy du,
so by the fundamental theorem of calculus, the marginal density of X is
f_X(x) = (d/dx) F_X(x) = ∫_{−∞}^{∞} f(x, y) dy.
Similarly, the marginal density of Y is
f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx.
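A minimal sketch recovering a marginal density by integrating out y, continuing the independent-normal joint pdf assumed above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Illustrative joint pdf: independent standard normals.
f = lambda x, y: norm.pdf(x) * norm.pdf(y)

# Marginal density of X: integrate the joint pdf over y.
def f_X(x):
    val, _ = quad(lambda y: f(x, y), -np.inf, np.inf)
    return val

print(f_X(0.0), norm.pdf(0.0))  # both ~0.3989
```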
