
Applied Bayesian Statistics STAT 388/488 Dr. Earvin Balderama - PowerPoint PPT Presentation



  1. Applied Bayesian Statistics, STAT 388/488. Dr. Earvin Balderama, Department of Mathematics & Statistics, Loyola University Chicago. August 29, 2017. Last edited August 21, 2017 by Earvin Balderama <ebalderama@luc.edu>

  2. Introduction: Course Info. STAT 388/488 – Applied Bayesian Statistics. http://math.luc.edu/~ebalderama/bayes

  3. Introduction: A motivating example (see http://math.luc.edu/~ebalderama/bayes_resources/handouts/Eddy_What_is_Bayesian.pdf). Alice and Bob play a casino game; the first player to 6 points wins. 1. Before the game starts, the casino rolls a ball randomly onto a pool table (which Alice and Bob cannot see) until it comes to a complete stop; its position is marked and remains fixed for the duration of the game. 2. Each point is awarded by rolling another ball randomly onto the table: if the ball stops to the left of the initial mark, Alice is awarded the point; if it stops to the right, Bob is awarded the point. Alice and Bob are told nothing except who is awarded each point.
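
A minimal simulation sketch of this setup (Python; the seed is arbitrary, and the 8 rolls shown match the score used later in the slides) makes the mechanics concrete: the hidden mark fixes θ once, and each point is then an independent Bernoulli(θ) draw.

    import numpy as np

    rng = np.random.default_rng(1)       # arbitrary seed for reproducibility
    theta = rng.uniform()                # hidden mark: chance a ball stops on Alice's side
    rolls = rng.random(8) < theta        # 8 points awarded; True means Alice gets the point
    print(f"hidden theta = {theta:.3f}; Alice {rolls.sum()} - Bob {(~rolls).sum()}")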

  4. Introduction: First, some questions. Let θ be the probability that Alice is awarded a point. Before the game starts, 1. What's your best guess about θ? 2. What's the probability that θ is greater than a half? Suppose the game is being played, and the score is now Alice 5, Bob 3. 1. What's your best guess about θ now? 2. What's the probability that θ is greater than a half now?

  5. Introduction: Frequentist approach. The Frequentist approach requires the (theoretical) notion of long-run frequency distributions: uncertainty is quantified in terms of repeating the sampling process many times. The parameters are fixed and unknown. The sample (data) is random. Probability statements are made only about the randomness in the data.

  6. Introduction: Frequentist approach. Sample statistic: a numerical summary of a sample. For example, the sample mean X̄ is a statistic and is an estimator of the population mean µ. Here, one would never say "P(µ > 0) = 0.50". Sampling distribution: the distribution of a sample statistic that arises from repeating the process that generated the data many times. Here, one would never say "the distribution of µ is Normal(5.3, 0.8)".
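
A quick simulation sketch of a sampling distribution (the population mean 5.3, SD 2, and sample size 25 are made-up values for illustration): draw many samples from the same population and collect X̄ from each.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 5.3, 2.0, 25                                    # hypothetical population and sample size
    xbars = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)   # 10,000 repeated samples
    print(xbars.mean(), xbars.std())   # close to mu and sigma / sqrt(n); a distribution for the statistic, not for mu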

  7. Introduction: Frequentist approach. 95% confidence interval: an interval constructed from the data that should contain the true parameter value 95% of the time if we repeated the process that generated the data many times and computed an interval each time. Here, one would never say "the probability that µ is in the interval (4.2, 5.6) is 0.95". p-value: the probability of observing a test statistic at least as extreme as the one observed in the sample if we repeated the process that generated the data many times. Here, one would never say "the probability that H0 is true is 0.027".
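
The 95% statement is about the procedure, which a small coverage simulation can illustrate (a sketch; same hypothetical population as above):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 5.3, 2.0, 25, 10_000    # hypothetical setup
    covered = 0
    for _ in range(reps):
        x = rng.normal(mu, sigma, n)
        lo, hi = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=stats.sem(x))
        covered += (lo <= mu <= hi)
    print(covered / reps)   # about 0.95: the long-run frequency with which such intervals cover mu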

  8. Introduction: Frequentist approach. Examples of repeatable data generation: Sometimes it's hard to imagine repeating the data generation:

  9. Introduction: Some debate about the merits of the p-value. http://www.nature.com/news/scientific-method-statistical-errors-1.14700 http://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/ http://fivethirtyeight.com/features/science-isnt-broken/ http://www.scientificamerican.com/article/scientists-perturbed-by-loss-of-stat-tools-to-sift-research-fudge-from-fact/ http://www.tandfonline.com/doi/pdf/10.1080/01973533.2015.1012991

  10. Introduction: What are Frequentist answers to these questions? Before the game starts, 1. What's your best guess about θ? 2. What's the probability that θ is greater than a half? After collecting observations, 1. What's your best guess about θ now? 2. What's the probability that θ is greater than a half now? Bonus question: What's Bob's probability of winning?

  11. Introduction: Bayesian approach. The Bayesian approach consists of finding the most credible values of a parameter, conditional on the data: uncertainty is described using probability distributions that are updated as data are observed. The true parameter values are fixed and unknown, but their uncertainty is described probabilistically, so they are treated as random variables. The sample (data) is considered fixed. Probability statements express degree of belief and uncertainty about the unknown parameters.

  12. Introduction: Bayesian learning. Prior distribution, f(θ): the uncertainty distribution of θ before observing the data. Posterior distribution, f(θ | y): the uncertainty distribution of θ after observing the data. Bayes' Rule provides the rule for updating the prior: f(θ | y) = f(y | θ) f(θ) / f(y), i.e., Posterior ∝ Likelihood × Prior.

  13. Introduction: Bayesian learning. Likelihood function, f(y | θ): the distribution of the data given θ. This function is created by choosing a reasonable probability model for the data, then writing the "probability of the data" under this model, regarded as a function of the model's parameters (remember, the data are considered fixed!). Bayes' Rule provides the rule for updating the prior: f(θ | y) = f(y | θ) f(θ) / f(y), i.e., Posterior ∝ Likelihood × Prior.
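
The proportionality Posterior ∝ Likelihood × Prior can be checked numerically with a simple grid approximation (a sketch using the running example's data, Alice 5 points out of n = 8, and the uniform prior introduced on the next slide):

    import numpy as np
    from scipy import stats

    theta = np.linspace(0, 1, 1001)              # grid of candidate values for theta
    prior = np.ones_like(theta)                  # Uniform(0, 1) prior: f(theta) = 1
    like = stats.binom.pmf(5, 8, theta)          # likelihood of y = 5 Alice points in n = 8 rolls
    post = prior * like                          # unnormalized posterior
    post /= post.sum() * (theta[1] - theta[0])   # divide by f(y) so the posterior integrates to 1
    print(np.abs(post - stats.beta.pdf(theta, 6, 4)).max())   # tiny: matches the Beta(6, 4) posterior derived next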

  14. Introduction: Back to the example. The probability of Alice being awarded a point is a random variable θ ∈ [0, 1]. Usually, we form a prior by assigning (varying levels of) probability across all possible values of θ. If we have no relevant prior information, we might use an uninformative prior such as θ ∼ Uniform(0, 1). The likelihood may be y | θ ∼ Binomial(n, θ). The posterior then turns out to be θ | y ∼ Beta(y + 1, n − y + 1).
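
With the Alice 5, Bob 3 score and the uniform prior, the posterior is Beta(6, 4), so the two questions have direct numerical answers (a sketch; the "best guess" reported here is the posterior mean):

    from scipy import stats

    post = stats.beta(5 + 1, 8 - 5 + 1)   # Beta(6, 4): y = 5 Alice points out of n = 8
    print(post.mean())                    # posterior mean of theta: 6/10 = 0.6
    print(post.sf(0.5))                   # P(theta > 1/2 | y) ≈ 0.746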

  15. Introduction: Specifying a Beta prior distribution. A more flexible prior is θ ∼ Beta(a, b), where a and b control the shape; when a = b = 1, this is the uniform prior. The posterior then turns out to be θ | y ∼ Beta(y + a, n − y + b), with E(θ | y) = (y + a) / (n + a + b) and V(θ | y) = (y + a)(n − y + b) / [(n + a + b)² (n + a + b + 1)]. A prior is conjugate with respect to the likelihood if the posterior distribution is in the same family as the prior. Thus, the Beta prior is a conjugate prior for the Binomial likelihood.
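
For instance, with the uniform prior (a = b = 1) and the Alice 5, Bob 3 data (y = 5, n = 8), these formulas give E(θ | y) = 6/10 = 0.6 and V(θ | y) = (6 · 4) / (10² · 11) = 24/1100 ≈ 0.0218, i.e., a posterior SD of about 0.148.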

  16. Introduction: Back to the example. The score is Alice 5, Bob 3. 1. What's your best guess about θ now? 2. What's the probability that θ is greater than a half now? Bonus question: What's Bob's expected probability of winning?
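
A sketch of the Bayesian answer to the bonus question (following the Eddy handout linked earlier): Bob wins only if he takes the next three points, so his chance is the posterior expectation of (1 − θ)³ under Beta(6, 4), which is 1/11 ≈ 0.091; plugging in the point estimate θ = 5/8 instead gives (3/8)³ ≈ 0.053.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    theta = stats.beta(6, 4).rvs(100_000, random_state=rng)   # posterior draws of theta
    print(np.mean((1 - theta) ** 3))   # Monte Carlo estimate of Bob's win probability, about 1/11 ≈ 0.091
    print((1 - 5 / 8) ** 3)            # plug-in answer using the point estimate, about 0.053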
