Choosing the Summary Statistics and the Acceptance Rate in Approximate Bayesian Computation (ABC)


  1. Choosing the Summary Statistics and the Acceptance Rate in Approximate Bayesian Computation (ABC)
     Michael G.B. Blum, Laboratoire TIMC-IMAG, CNRS, Grenoble. COMPSTAT 2010; Thursday, August 26.
     Outline: Introduction; Standard algorithms; Potential pitfalls with ABC; Local Bayesian linear regression; Conclusion.

  2. A typical application of ABC in population genetics
     Estimating the time T since the out-of-Africa migration.
     [Figure: (a) a single-origin, out-of-Africa model of human origins, with ancestral population size N_A and split time T between the African and non-African populations; (b) the observed data.]

  3. Flowchart of ABC
     Different values of the parameter T are fed into simulations that produce simulated DNA sequences; ABC compares these with the observed DNA sequences and returns the most probable values for T.

  4. Rejection algorithm for targeting p(φ | S)
     1. Generate a parameter φ according to the prior distribution π;
     2. Simulate data D′ according to the model p(D′ | φ);
     3. Compute the summary statistic S′ from D′ and accept the simulation if d(S, S′) < δ.
     Potential problem: the curse of dimensionality limits the number of statistics that rejection-ABC can handle.
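
     The three steps map onto a short loop. Below is a minimal sketch (not the speaker's implementation), assuming a toy model where φ has a standard normal prior standing in for π, the data are 50 draws from N(φ, 1), and the summary statistic is the sample mean; the name rejection_abc and all defaults are illustrative.

```python
# Minimal rejection-ABC sketch under a toy Gaussian model (assumption, not the talk's code).
import numpy as np

def rejection_abc(observed_stat, n_sims=10_000, delta=0.1, rng=None):
    """Keep the parameters whose simulated summary statistic lands within delta of S."""
    rng = np.random.default_rng(rng)
    accepted = []
    for _ in range(n_sims):
        phi = rng.normal()                        # 1. draw phi from the prior pi
        data = rng.normal(loc=phi, size=50)       # 2. simulate D' ~ p(D' | phi)
        stat = data.mean()                        # 3. summary statistic S' of D'
        if abs(stat - observed_stat) < delta:     #    accept if d(S, S') < delta
            accepted.append(phi)
    return np.array(accepted)                     # draws approximating p(phi | S)
```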

  5. Regression adjustment for ABC (Beaumont, Zhang and Balding; Genetics 2002)
     Local linear regression: φ_i = m(S_i) + ε_i, with a linear function for m.
     Adjustment: φ*_i = m̂(S) + ε̃_i, where m̂ is found with weighted least squares.

  6. Regression adjustment for ABC
     Weighted least squares: minimize Σ_{i=1}^n {φ_i − (β_0 + (S_i − S)^T β_1)}² W_i, where W_i ∝ K(‖S − S_i‖ / δ).
     Adjustment: φ*_i = β̂_0^LS + ε̃_i^LS, with residual ε̃_i^LS = φ_i − (β̂_0^LS + (S_i − S)^T β̂_1^LS).
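
     A compact sketch of this weighted least-squares adjustment, assuming a bank of simulated pairs is already available as arrays phis (n,) and stats (n, d); the Epanechnikov kernel for W_i matches common ABC practice, and all names are illustrative.

```python
# Sketch of the local-linear regression adjustment (hedged; names are mine).
import numpy as np

def regression_adjust(phis, stats, observed, delta):
    """Fit phi on (S_i - S) by weighted least squares, then adjust the accepted phis."""
    dist = np.linalg.norm(stats - observed, axis=1)
    w = np.where(dist < delta, 1 - (dist / delta) ** 2, 0.0)   # Epanechnikov kernel K
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), stats[keep] - observed])
    Xw = X * w[keep][:, None]                                  # rows scaled by W_i
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ phis[keep])        # (beta0_LS, beta1_LS)
    # phi*_i = phi_i - (S_i - S)^T beta1_LS, i.e. beta0_LS plus the residual
    return phis[keep] - (stats[keep] - observed) @ beta[1:]
```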

  7. Regression adjustment for ABC
     [Figure: illustration of the adjustment, mapping the accepted parameter values φ_i to their adjusted values φ*_i along the fitted regression.]
     Csilléry, Blum, Gaggiotti and François; TREE 2010.

  8. Asymptotic theorem for ABC (Blum; JASA 2010)
     If there is a local homoscedastic relationship between φ and S:
     1. the bias with regression adjustment is smaller than the bias with rejection only;
     2. but the rate of convergence of the MSE is Θ(n^{−4/(d+5)}), where d is the dimension of the summary statistics and n is the number of simulations.

  9. A Gaussian example to illustrate potential pitfalls with ABC
     Toy example 1: estimation of σ².
     σ² ~ Inv-χ²(d.f. = 1), and the data are x_1, …, x_N ~ N(0, σ²) with N = 50.
     Summary statistics: (S_1, …, S_5) = (x̄_N, s²_N, u_1, u_2, u_3), where the u_j ~ N(0, 1), j = 1, 2, 3, are pure noise.
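
     Toy example 1 is easy to reproduce. A hedged sketch of the simulation design follows; sampling Inv-χ²(1) as the reciprocal of a χ²₁ draw, and all variable names, are my own choices.

```python
# Sketch of toy example 1: sigma^2 ~ Inv-chi^2(1), x_1..x_N ~ N(0, sigma^2),
# with three pure-noise statistics u_j ~ N(0, 1) appended to the summaries.
import numpy as np

def simulate_toy1(n_sims=10_000, N=50, rng=None):
    rng = np.random.default_rng(rng)
    sigma2 = 1.0 / rng.chisquare(df=1, size=n_sims)   # inverse chi-square prior draws
    stats = np.empty((n_sims, 5))
    for i in range(n_sims):
        x = rng.normal(scale=np.sqrt(sigma2[i]), size=N)
        # (mean, sample variance, three noise statistics)
        stats[i] = [x.mean(), x.var(ddof=1), *rng.normal(size=3)]
    return sigma2, stats
```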

  10. A Gaussian example to illustrate potential pitfalls with ABC
     [Figure: two scatter plots of the empirical variance against σ² (log scale, 0.1 to 100.0), with accepted simulations marked among the rejected ones; left panel: 1 summary statistic, right panel: 5 summary statistics.]

  11. Local Bayesian linear regression (Hjort; book chapter, 2003)
     Prior for the regression coefficients: β ~ N(0, α⁻¹ I_{p+1}).
     The maximum a posteriori minimizes the regularized weighted least-squares criterion
     E(β) = (1 / (2τ²)) Σ_{i=1}^n (φ_i − (S_i − S)^T β)² W_i + (α / 2) β^T β.

  12. Local Bayesian linear regression
     Posterior distribution of the regression coefficients:
     β ~ N(β_MAP, V), with β_MAP = τ⁻² V X^T W_δ φ and V⁻¹ = α I_{p+1} + τ⁻² X^T W_δ X.
     Regression adjustment for ABC: φ*_i = φ_i − (S_i − S)^T β̂_1^MAP.
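
     The MAP estimate and posterior covariance have the closed forms above. The sketch below computes them for given hyperparameters α and τ², assuming the design matrix X stacks an intercept column with the centred statistics S_i − S; the function name is illustrative.

```python
# Sketch of the local Bayesian (ridge-like) regression from the slide.
import numpy as np

def bayes_regression(phis, X, w, alpha, tau2):
    """Return beta_MAP and V for the prior beta ~ N(0, alpha^{-1} I_{p+1})."""
    Xw = X * w[:, None]                                 # rows of X scaled by W_delta
    V_inv = alpha * np.eye(X.shape[1]) + (X.T @ Xw) / tau2
    V = np.linalg.inv(V_inv)
    beta_map = V @ (Xw.T @ phis) / tau2                 # tau^{-2} V X^T W_delta phi
    return beta_map, V
```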

  13. The evidence function as an omnibus criterion
     Empirical Bayes / evidence approximation:
     p(φ | τ², α, p_δ) = ∫ Π_{i=1}^n p(φ_i | β, τ²)^{W_i} p(β | α) dβ,
     where α is the precision hyperparameter, τ² is the variance of the residuals, and p_δ is the percentage of accepted simulations.
     Maximizing the evidence gives a criterion for 1. choosing p_δ and 2. choosing the set of summary statistics.

  14. The evidence function as an omnibus criterion
     A closed-form formula:
     log p(φ | τ², α, p_δ) = ((p+1)/2) log α − (N_W / 2) log τ² − E(β_MAP) − (1/2) log|V⁻¹| − (N_W / 2) log 2π,
     where N_W = Σ_i W_i.
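
     The closed form translates directly into code. This sketch reuses the hypothetical bayes_regression() above and rewrites −(1/2) log|V⁻¹| as +(1/2) log|V| for numerical convenience; all names are mine.

```python
# Sketch of the closed-form log evidence (hedged transcription of the slide).
import numpy as np

def log_evidence(phis, X, w, alpha, tau2):
    beta_map, V = bayes_regression(phis, X, w, alpha, tau2)
    N_w = w.sum()
    resid = phis - X @ beta_map
    # E(beta_MAP): weighted residual term plus the ridge penalty
    E = (resid ** 2 * w).sum() / (2 * tau2) + alpha * (beta_map @ beta_map) / 2
    _, logdet_V = np.linalg.slogdet(V)                  # -log|V^{-1}| = +log|V|
    return (X.shape[1] / 2 * np.log(alpha) - N_w / 2 * np.log(tau2)
            - E + logdet_V / 2 - N_w / 2 * np.log(2 * np.pi))
```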

  15. The evidence function as an omnibus criterion
     The evidence as a function of the tolerance rate:
     log p(φ | p_δ) = max_{(α, τ)} log p(φ | τ², α, p_δ).
     The evidence as a function of the set of summary statistics:
     log p(φ | S) = max_{(α, τ, p_δ)} log p(φ | τ², α, p_δ).

  16. Iterative algorithm for maximizing the evidence w.r.t. α and τ
     Updating the values of the hyperparameters:
     α = γ / (β_MAP^T β_MAP), where γ = (p + 1) − α Tr(V) is the effective number of summary statistics;
     τ² = Σ_{i=1}^n (φ_i − (S_i − S)^T β_MAP)² W_i / (N_W − γ).
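
     These fixed-point updates can be iterated until the hyperparameters stabilise. In the sketch below, the starting values, iteration cap, and tolerance are my own choices, not from the talk; it reuses the hypothetical bayes_regression() above.

```python
# Sketch of the fixed-point updates for alpha and tau^2 (hedged).
import numpy as np

def fit_hyperparams(phis, X, w, alpha=1.0, tau2=1.0, n_iter=100, tol=1e-8):
    N_w = w.sum()
    for _ in range(n_iter):
        beta_map, V = bayes_regression(phis, X, w, alpha, tau2)
        gamma = X.shape[1] - alpha * np.trace(V)        # effective number of statistics
        new_alpha = gamma / (beta_map @ beta_map)
        resid2 = (phis - X @ beta_map) ** 2
        new_tau2 = (resid2 * w).sum() / (N_w - gamma)
        if abs(new_alpha - alpha) < tol and abs(new_tau2 - tau2) < tol:
            break
        alpha, tau2 = new_alpha, new_tau2
    return alpha, tau2
```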

  17. Using the evidence for choosing p_δ
     Toy example 2: φ ~ U(−c, c), c ∈ ℝ, and S ~ N(e^φ / (1 + e^φ), σ² = (0.05)²).
     [Figure: log evidence (roughly −150 to 100) as a function of the acceptance rate (0.003 to 1.000, log scale).]
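
     Putting the pieces together, one can scan the evidence over a grid of acceptance rates for toy example 2, as in the figure. The sketch below reuses the hypothetical fit_hyperparams() and log_evidence() helpers; the simulator follows the slide's model, while the grid of rates and all names are illustrative.

```python
# Sketch: log evidence as a function of the acceptance rate p_delta (hedged).
import numpy as np

def evidence_curve(observed, n_sims=100_000, c=5.0,
                   rates=(0.003, 0.01, 0.1, 1.0), rng=None):
    rng = np.random.default_rng(rng)
    phi = rng.uniform(-c, c, size=n_sims)               # phi ~ U(-c, c)
    S = rng.normal(np.exp(phi) / (1 + np.exp(phi)), 0.05)  # S ~ N(logistic(phi), 0.05^2)
    dist = np.abs(S - observed)
    curve = {}
    for p_delta in rates:
        delta = np.quantile(dist, p_delta)              # tolerance giving this rate
        w = np.where(dist < delta, 1 - (dist / delta) ** 2, 0.0)
        keep = w > 0
        X = np.column_stack([np.ones(keep.sum()), S[keep] - observed])
        alpha, tau2 = fit_hyperparams(phi[keep], X, w[keep])
        curve[p_delta] = log_evidence(phi[keep], X, w[keep], alpha, tau2)
    return curve   # pick the p_delta with the highest evidence
```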
