

  1. Approximate Bayesian Computation
     Dr. Jarad Niemi, STAT 615, Iowa State University, December 5, 2017

  2. Outline
     Stochastic kinetic models
     Approximate Bayesian computation

  3. Stochastic kinetic models: Terminology
     Imagine a well-mixed system in thermal equilibrium with $N$ species $S_1, \ldots, S_N$, with numbers of molecules $X_1, \ldots, X_N$ where $X_j \in \mathbb{Z}^+$, which change according to $M$ reactions $R_1, \ldots, R_M$ with propensities $a_1(x), \ldots, a_M(x)$. The propensities are given by $a_j(x) = \theta_j h_j(x)$, where $h_j(x)$ is a known function of the system state. If reaction $j$ occurs, the state is updated by the stoichiometry $\nu_j$ with elements $\nu_{ij} \in \{-2, -1, 0, 1, 2\}$, i.e. reaction orders 0, 1, and 2.

  4. Stochastic kinetic models: The Michaelis-Menten system
     The Michaelis-Menten system has $N = 4$ species: Substrate (S), Enzyme (E), Substrate-Enzyme Complex (SE), and Product (P). The $M = 3$ reactions, with their propensities and stoichiometries, are

     Reaction        Propensity            S    E    SE    P
     S + E -> SE     $\theta_1 X_S X_E$   -1   -1    +1    0
     SE -> S + E     $\theta_2 X_{SE}$    +1   +1    -1    0
     SE -> P + E     $\theta_3 X_{SE}$     0   +1    -1   +1

     where $\theta = (\theta_1, \theta_2, \theta_3)$ is the parameter of interest.
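To make the later samplers concrete, here is a minimal Python sketch of the Michaelis-Menten system above. The species ordering (S, E, SE, P) follows the table; the names `stoichiometry` and `propensities` are illustrative choices, not from the slides.

```python
import numpy as np

# Rows are the three reactions, columns the species (S, E, SE, P),
# matching the stoichiometry table above.
stoichiometry = np.array([
    [-1, -1,  1, 0],   # S + E -> SE
    [ 1,  1, -1, 0],   # SE    -> S + E
    [ 0,  1, -1, 1],   # SE    -> P + E
])

def propensities(x, theta):
    """a_j(x) = theta_j * h_j(x) for the three reactions."""
    S, E, SE, P = x
    return np.array([theta[0] * S * E,   # h_1(x) = X_S X_E
                     theta[1] * SE,      # h_2(x) = X_SE
                     theta[2] * SE])     # h_3(x) = X_SE
```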

  5. Stochastic kinetic models: Michaelis-Menten snapshot
     [Figure: spatial snapshot of the system on the unit square, molecules colored by species (Substrate, Enzyme, Substrate-Enzyme Complex, Product).]

  6. Stochastic kinetic models: Gillespie algorithm
     If each reaction $j \in \{1, \ldots, M\}$ occurs with probability

     $P(\text{reaction } j \text{ within the interval } (t, t + dt) \mid X_t) = a_j(X_t)\, dt$ as $dt \to 0$,

     then this defines a continuous-time Markov jump process. A realization from this model can be obtained using the Gillespie algorithm:
     1. For $j \in \{1, \ldots, M\}$, calculate $a_j(X_t)$.
     2. Calculate $a_0(X_t) = \sum_{j=1}^M a_j(X_t)$.
     3. Simulate a reaction time $\tau \sim Exp(a_0(X_t))$.
     4. Simulate a reaction id $k \in \{1, \ldots, M\}$ with probability $a_k(X_t) / a_0(X_t)$.
     5. Update $X$ according to $\nu_k$ and time by $\tau$.
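A direct Python translation of steps 1-5, reusing the `stoichiometry` array and `propensities` function sketched after slide 4; the signature and return format are assumptions for illustration.

```python
import numpy as np

def gillespie(x0, theta, t_max, stoich, prop, rng=None):
    """Simulate one realization of the Markov jump process on [0, t_max]."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while True:
        a = prop(x, theta)                 # step 1: a_j(X_t)
        a0 = a.sum()                       # step 2: a_0(X_t)
        if a0 <= 0:                        # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)     # step 3: tau ~ Exp(a_0(X_t))
        if t > t_max:
            break
        k = rng.choice(len(a), p=a / a0)   # step 4: reaction id k
        x = x + stoich[k]                  # step 5: update X by nu_k
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

For example, `gillespie([50, 10, 0, 0], [0.01, 0.1, 0.1], 30, stoichiometry, propensities)` would produce a trajectory like the one plotted on the next slide (the initial state and rates here are made up).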

  7. Stochastic kinetic models: Michaelis-Menten Gillespie simulation
     [Figure: one Gillespie realization of the counts of S, E, SE, and P over time 0 to 30.]

  8. Stochastic kinetic models: Complete observations
     Suppose you observe all system transitions: $n$ reactions occur in the interval $[0, T]$, $t_1, \ldots, t_n$ are the reaction times, and $r_1, \ldots, r_n$ are the reaction indicators with $r_i \in \{1, \ldots, M\}$. Then inference can be performed based on the likelihood

     $L(\theta) \propto \prod_{j=1}^M \theta_j^{n_j} \exp(-\theta_j I_j)$

     where
     $n_j = \sum_{i=1}^n \mathrm{I}(r_i = j)$ is the number of type-$j$ reactions, and
     $I_j = \int_0^T h_j(X_t)\, dt = \sum_{i=1}^n h_j(X_{t_{i-1}})(t_i - t_{i-1}) + h_j(X_{t_n})[T - t_n]$.
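A sketch of the sufficient statistics $n_j$ and $I_j$ computed from a fully observed path. It assumes `times`/`states` are Gillespie output as above, that the reaction indicators are 0-based integers, and that `h(x)` returns the vector $(h_1(x), \ldots, h_M(x))$; these conventions are illustrative, not from the slides.

```python
import numpy as np

def sufficient_stats(times, states, reactions, T, h, M):
    """n_j = #{i : r_i = j};  I_j = integral of h_j(X_t) dt over [0, T]."""
    n = np.bincount(np.asarray(reactions, dtype=int), minlength=M)
    I = np.zeros(M)
    for i in range(len(times)):
        # X_t is constant between jumps, so the integral is a sum of
        # rectangles h_j(X_{t_i}) * (t_{i+1} - t_i), with T closing the last one.
        dt = (times[i + 1] if i + 1 < len(times) else T) - times[i]
        I += h(states[i]) * dt
    return n, I
```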

  9. Stochastic kinetic models: Inference
     Maximum likelihood estimation: $\hat{\theta}_j = n_j / I_j$.
     Conjugate Bayesian inference:
     $p(\theta) = \prod_{j=1}^M Ga(\theta_j; a_j, b_j)$
     $p(\theta \mid X) = \prod_{j=1}^M Ga(\theta_j; a_j + n_j, b_j + I_j)$
     $E[\theta_j \mid X] = \frac{a_j + n_j}{b_j + I_j}$
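The conjugate update is then one line. A sketch assuming `n` and `I` come from `sufficient_stats` above and `a`, `b` are the prior shape and rate vectors:

```python
import numpy as np

def mle(n, I):
    return n / I                        # theta_hat_j = n_j / I_j

def posterior_params(a, b, n, I):
    """theta_j | X ~ Ga(a_j + n_j, b_j + I_j) under independent Ga(a_j, b_j) priors."""
    return np.asarray(a) + n, np.asarray(b) + I

def posterior_mean(a, b, n, I):
    shape, rate = posterior_params(a, b, n, I)
    return shape / rate                 # E[theta_j | X] = (a_j + n_j) / (b_j + I_j)
```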

  10. Stochastic kinetic models: Michaelis-Menten complete-data inference
     [Figure: posterior densities of the rate parameters for the three reactions (S + E -> SE, SE -> S + E, SE -> P + E) under complete observations.]

  11. Stochastic kinetic models: Discrete observations
     Suppose you only observe the system at discrete times. For simplicity, observe the system at times $t = 1, 2, \ldots, T$. At these times we observe the system state $y_t = X_t$, but we do not observe the system between these times.

  12. Stochastic kinetic models: Michaelis-Menten discrete observations
     [Figure: counts of S, E, SE, and P observed only at discrete times over 0 to 30.]

  13. Stochastic kinetic models: Inference
     Inference is still performed based on the likelihood $L(\theta) = p(y \mid \theta)$, but this is the solution $p(t, y)$ of the chemical master equation

     $\frac{\partial}{\partial t} p(t, y) = \sum_{j=1}^M \left[ a_j(y - \nu_j)\, p(t, y - \nu_j) - a_j(y)\, p(t, y) \right]$

     For constitutive production, $h(X_t) = 1$ and $a(X_t) = \theta$, and we still have $L(\theta) \propto \theta^n \exp(-\theta I)$ with $n = y_T - y_0$ and $I = \int_0^T 1\, dt = T$.
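Constitutive production is the one discrete-data case on this slide where conjugacy survives, since $n = y_T - y_0$ and $I = T$ are known from the observations alone. A minimal Python sketch, assuming observations `y` at times $0, 1, \ldots, T$ and a $Ga(a, b)$ prior:

```python
def constitutive_posterior(y, a=1.0, b=1.0):
    """Posterior for the constitutive production rate from discrete data."""
    T = len(y) - 1          # observations at t = 0, 1, ..., T
    n = y[-1] - y[0]        # every count increment is one reaction
    return a + n, b + T     # theta | y ~ Ga(a + n, b + T)
```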

  14. Stochastic kinetic models: Reversible isomerization
     [Figure: species counts on the zoomed-in interval from time 30.00 to 31.00.]
     How many reactions occurred in the interval $[30, 31]$?
     What is $\int_{30}^{31} X_{SE}\, dt$?

  15. Stochastic kinetic models: Summary
     With complete observations and independent gamma priors, the posterior is

     $p(\theta \mid X) = \prod_{j=1}^M Ga(\theta_j; a_j + n_j, b_j + I_j)$

     where
     $n_j = \sum_{i=1}^n \mathrm{I}(r_i = j)$
     $I_j = \int_0^T h_j(X_t)\, dt = \sum_{i=1}^n h_j(X_{t_{i-1}})(t_i - t_{i-1}) + h_j(X_{t_n})[T - t_n]$

     For discrete observations, the likelihood is analytically intractable and therefore no closed form exists for the posterior (or MLEs).

  16. Sampling methods: The idea
     But we can simulate from the model using the Gillespie algorithm! Intuitively, if we
     1. pick a set of parameters,
     2. simulate a realization using these parameters,
     3. and it matches our data,
     4. then these parameters should be reasonable.
     Our goal is to formalize this through
     1. rejection sampling
     2. Gibbs sampling

  17. Sampling methods: Simulations from the prior
     [Figure: simulated product (P) trajectories over the time interval [0, 1] for parameter draws from the prior.]

  18. Sampling methods: Rejection sampling
     Our objective is samples from the posterior

     $p(\theta \mid y) = \int p(\theta, X \mid y)\, dX \propto \int p(y \mid X)\, p(X \mid \theta)\, p(\theta)\, dX = \int \left[ \prod_{t=1}^T \mathrm{I}(y_t = X_t) \right] p(X \mid \theta)\, p(\theta)\, dX$

     A rejection sampling procedure is
     1. Sample $\theta \sim p(\theta)$.
     2. Sample $X \sim p(X \mid \theta)$, a.k.a. Gillespie.
     3. If $y_t = X_t$ for $t = 1, 2, \ldots, T$, then
     4. $\theta$ is a sample from $p(\theta \mid y)$ and
     5. $(\theta, X)$ is a sample from $p(\theta, X \mid y)$.
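A sketch of this exact-match rejection sampler, reusing `gillespie` from earlier. The helpers `state_at` and `sample_prior` (a user-supplied draw from $p(\theta)$) are assumptions; `y` is taken to be the array of observed states at times $1, \ldots, T$.

```python
import numpy as np

def state_at(times, states, t):
    """Value of the piecewise-constant path X at time t."""
    return states[np.searchsorted(times, t, side="right") - 1]

def rejection_sample(y, x0, T, sample_prior, stoich, prop, rng):
    while True:
        theta = sample_prior(rng)                     # 1. theta ~ p(theta)
        times, states = gillespie(x0, theta, T, stoich, prop, rng)  # 2. Gillespie
        sim = np.array([state_at(times, states, t) for t in range(1, T + 1)])
        if np.array_equal(sim, np.asarray(y)):        # 3. y_t = X_t for all t
            return theta, (times, states)             # 4.-5. an exact posterior draw
```

Note that requiring an exact match makes acceptances extremely rare for all but tiny systems, which is exactly the problem ABC relaxes.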

  19. Sampling methods: Gibbs sampling
     Our objective is samples from the posterior

     $p(\theta \mid y) = \int p(\theta, X \mid y)\, dX \propto \int p(y \mid X)\, p(X \mid \theta)\, p(\theta)\, dX$

     A Gibbs sampling procedure is
     1. Start with $\theta^{(0)}, X^{(0)}$.
     2. For $k = 1, \ldots, K$,
        a. Sample $\theta^{(k)} \sim p(\theta \mid X^{(k-1)})$.
        b. Sample $X^{(k)} \sim p(X \mid \theta^{(k)}, y)$, a.k.a. rejection sampling.
     Then $(\theta^{(k)}, X^{(k)})$ converges to samples from $p(\theta, X \mid y)$.
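A sketch of this two-block Gibbs sampler, reusing the earlier helpers. Step (a) recovers the reaction indicators by matching each jump to a stoichiometry row (which assumes the rows are distinct, as they are for Michaelis-Menten) and applies the conjugate gamma update; step (b) is rejection sampling with $\theta$ held fixed. All names and signatures are illustrative.

```python
import numpy as np

def gibbs(y, x0, T, a, b, K, sample_prior, stoich, prop, h, M, rng):
    # 1. start with theta^(0), X^(0), here via one exact rejection draw
    theta, (times, states) = rejection_sample(y, x0, T, sample_prior,
                                              stoich, prop, rng)
    draws = []
    for _ in range(K):
        # 2a. theta ~ p(theta | X): match each jump to its stoichiometry row,
        # then apply the conjugate gamma update from the full path
        jumps = np.array([np.flatnonzero(
                    (stoich == states[i + 1] - states[i]).all(axis=1))[0]
                    for i in range(len(states) - 1)], dtype=int)
        n, I = sufficient_stats(times, states, jumps, T, h, M)
        theta = rng.gamma(np.asarray(a) + n, 1.0 / (np.asarray(b) + I))
        # 2b. X ~ p(X | theta, y): rerun Gillespie until the path hits y
        while True:
            times, states = gillespie(x0, theta, T, stoich, prop, rng)
            sim = np.array([state_at(times, states, t) for t in range(1, T + 1)])
            if np.array_equal(sim, np.asarray(y)):
                break
        draws.append(theta)
    return np.array(draws)
```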

  20. Approximate Bayesian computation: An approximate posterior
     Intuitively, if we
     1. pick a set of parameters,
     2. simulate a realization using these parameters,
     3. and it is similar to our data,
     4. then these parameters should be reasonable.
     We can formalize this using approximate Bayesian computation.

  21. Approximate Bayesian computation: The approximation
     Our approximate objective is samples from the posterior

     $p(\theta \mid \rho \le \epsilon) = \int p(\theta, X \mid \rho \le \epsilon)\, dX \propto \int \mathrm{I}(\rho \le \epsilon)\, p(X \mid \theta)\, p(\theta)\, dX$

     where $\rho = \rho(y, X)$ is a measure of the difference between your data $y$ and simulations $X$. The choice of $\epsilon$ reflects the tension between computability and accuracy:
     As $\epsilon \to \infty$, $p(\theta \mid \rho \le \epsilon) \stackrel{d}{\to} p(\theta)$ and the acceptance probability converges to 1.
     As $\epsilon \to 0$, $p(\theta \mid \rho \le \epsilon) \stackrel{d}{\to} p(\theta \mid y)$ and the acceptance probability decreases.

  22. Approximate Bayesian computation: ABC rejection sampling
     Let $\rho = \sum_{t=1}^n |y_t - X_t|$ and $\epsilon = n$. An ABC rejection sampling procedure is
     1. Sample $\theta \sim p(\theta)$.
     2. Sample $X \sim p(X \mid \theta)$, a.k.a. Gillespie.
     3. If $\rho(y, X) \le \epsilon$, then
     4. $\theta$ is a sample from $p(\theta \mid \rho \le \epsilon)$ and
     5. $(\theta, X)$ is a sample from $p(\theta, X \mid \rho \le \epsilon)$.
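The same sketch as the exact sampler, with the match replaced by the tolerance check. Summing $|y_t - X_t|$ over species as well as time is an assumption for the multivariate case; the slide writes $\rho$ for scalar observations.

```python
import numpy as np

def abc_rejection(y, x0, T, eps, sample_prior, stoich, prop, rng):
    while True:
        theta = sample_prior(rng)                     # 1. theta ~ p(theta)
        times, states = gillespie(x0, theta, T, stoich, prop, rng)  # 2. Gillespie
        sim = np.array([state_at(times, states, t) for t in range(1, T + 1)])
        rho = np.abs(np.asarray(y) - sim).sum()       # rho(y, X)
        if rho <= eps:                                # 3. accept if close enough
            return theta, (times, states)             # 4.-5. draw given rho <= eps
```

Unlike the exact sampler, every candidate has a reasonable chance of acceptance once `eps` is large enough, at the cost of targeting $p(\theta \mid \rho \le \epsilon)$ rather than $p(\theta \mid y)$.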
