
Stochastic Gene Expression in Systems Biology
Brian Munsky, Center for NonLinear Studies, Los Alamos National Lab
Mustafa Khammash, Center for Control and Dynamical Systems, University of California, Santa Barbara


  1. Note:
• The probability of A and B colliding was computed for a given relative velocity v_BA (a conditional probability).
• The relative velocity is a random variable, and we must average over all velocities. If we denote by f_BA(·) the probability density of the random variable V_BA, we have

P(collision in [t, t+dt]) = ∫_{R³} P(collision in [t, t+dt] | V_BA = v) f_BA(v) dv
                          = ∫_{R³} (1/Ω) π(r_A + r_B)² ‖v‖ dt f_BA(v) dv
                          = (1/Ω) π(r_A + r_B)² dt ∫_{R³} ‖v‖ f_BA(v) dv,

where the remaining integral is the mean relative speed.

  2. The probability density f_BA(·) can be computed from the Boltzmann distribution of the velocities and the independence of V_x, V_y, and V_z:

f_BA(v) = (m̂/(2π k_B T))^{3/2} e^{−m̂‖v‖²/(2 k_B T)},  where m̂ = m_A m_B/(m_A + m_B) is the reduced mass.

Hence

Mean relative speed = ∫_{R³} ‖v‖ f_BA(v) dv = ∫_{R³} ‖v‖ (m̂/(2π k_B T))^{3/2} e^{−m̂‖v‖²/(2 k_B T)} dv = √(8 k_B T/(π m̂)).

Probability of an A-B collision within [t, t+dt]:

(1/Ω) π(r_A + r_B)² √(8 k_B T/(π m̂)) dt
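The mean-relative-speed formula can be checked numerically. This Python sketch (not part of the slides; the unit choice k_B T = m̂ = 1 and the sample size are assumptions) samples Maxwell-Boltzmann velocities, whose Cartesian components are independent Gaussians with variance k_B T/m̂, and compares the empirical mean speed with √(8 k_B T/(π m̂)):

```python
import numpy as np

# Sanity check of the mean relative speed sqrt(8 kB T / (pi m_hat)).
# Units chosen so that kB*T = 1 and m_hat = 1 (an assumption for illustration).
rng = np.random.default_rng(0)
kBT, m_hat = 1.0, 1.0
n = 200_000

# Each component of V_BA is an independent Gaussian with variance kB*T/m_hat.
v = rng.normal(0.0, np.sqrt(kBT / m_hat), size=(n, 3))
empirical = np.linalg.norm(v, axis=1).mean()   # empirical mean speed
exact = np.sqrt(8 * kBT / (np.pi * m_hat))     # closed form, about 1.596

print(empirical, exact)
```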

  3. Not all collisions lead to reactions. One can factor in the "reaction energy". Assumption: an A-B collision leads to a reaction only if the kinetic energy associated with the component v̄_BA of the relative velocity along the line of contact exceeds a critical energy ε:

Reaction if (1/2) m̂ v̄²_BA > ε.

It can be shown that

P(A-B reaction | A-B collision) = e^{−ε/(k_B T)}.

Probability of an A-B reaction within [t, t+dt]:

(1/Ω) π(r_A + r_B)² √(8 k_B T/(π m̂)) e^{−ε/(k_B T)} dt

  4. Given N species S_1, ..., S_N with populations x_1, ..., x_N at time t, consider the bimolecular reaction channel (with distinct species)

R: S_i + S_j → products.

The number of distinct S_i-S_j pairs that can react is x_i · x_j. Therefore,

Probability of an R reaction within [t, t+dt]:  (1/Ω) π(r_i + r_j)² √(8 k_B T/(π m̂)) e^{−ε/(k_B T)} x_i x_j dt = w(x) dt.

w(·) is called the propensity function.

Now consider the bimolecular reaction channel (with the same species)

R′: S_i + S_i → products.

The number of distinct S_i-S_i pairs that can react is x_i(x_i − 1)/2. Therefore,

Probability of an R′ reaction within [t, t+dt]:  (1/Ω) π(2r_i)² √(8 k_B T/(π m̂)) e^{−ε/(k_B T)} x_i(x_i − 1)/2 dt = w(x) dt.
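The pair counting behind the two propensities can be verified by brute force. This small Python illustration (not from the slides; the populations 7 and 5 are arbitrary) enumerates reactant pairs for the distinct-species and same-species cases:

```python
from itertools import combinations

# Check the propensity combinatorics: x_i*x_j distinct-species pairs,
# x_i*(x_i - 1)/2 same-species pairs.
xi, xj = 7, 5

# Distinct species: every S_i molecule can pair with every S_j molecule.
distinct_pairs = [(a, b) for a in range(xi) for b in range(xj)]
assert len(distinct_pairs) == xi * xj

# Same species: unordered pairs of distinct S_i molecules.
same_pairs = list(combinations(range(xi), 2))
assert len(same_pairs) == xi * (xi - 1) // 2

print(len(distinct_pairs), len(same_pairs))
```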

  5. Reactions and Propensity Functions

Reaction                   Rate   Propensity w(x)
∅ → Products              c      c
S_i → Products            c      c · x_i
S_i + S_j → Products      c      c · x_i x_j,        with c = (1/Ω) π(r_i + r_j)² √(8 k_B T/(π m̂)) e^{−ε/(k_B T)}
S_i + S_i → Products      c      c · x_i(x_i − 1)/2, with c = (1/Ω) 4π r_i² √(8 k_B T/(π m̂)) e^{−ε/(k_B T)}

For a monomolecular reaction, c is numerically equal to the reaction rate constant k of conventional deterministic chemical kinetics. For a bimolecular reaction, c is numerically equal to k/Ω, where k is the reaction rate constant of conventional deterministic chemical kinetics.

  6. A Jump-Markov description of chemical kinetics • At any time, the state of the system is defined by its integer population vector: x ∈ Z^N. • Reactions are transitions from one state to another. (Figure: lattice of population states labeled [#species 1, #species 2], e.g. [10, 15] → [11, 15] → [11, 14] → [12, 14], showing one possible sequence of reactions.)

  7. A Jump-Markov description of chemical kinetics • At any time, the state of the system is defined by its integer population vector: x ∈ Z^N. • Reactions are transitions from one state to another. • These reactions are random; others could have occurred. (Figure: the same lattice with alternative transitions to neighboring states such as [9, 15], [12, 15], and [10, 14].)

  8. A Jump-Markov description of chemical kinetics Or others... (Figure: a larger portion of the state lattice, states [8, 14] through [12, 16].)

  9. A Jump-Markov description of chemical kinetics Or others... (Figure: a still larger portion of the state lattice, states [7, 13] through [14, 17].)

  10. A Jump-Markov description of chemical kinetics Or others...

  11. A Jump-Markov description of chemical kinetics Or others...

  12. A Jump-Markov description of chemical kinetics Or others... At each step, we ask two questions: When is the next jump? Where will that jump lead?

  13. Reaction Stoichiometry (review) • The stoichiometric vector, s, refers to the relative change in the population vector after a reaction. • There may be many different reactions for a given stoichiometry.

s_1 = [1, 0]^T:   S_1 → S_1 + S_1;   S_2 → S_2 + S_1;   ∅ → S_1
s_2 = [−1, 0]^T:  S_1 + S_1 → S_1;   S_1 + S_2 → S_2;   S_1 → ∅
s_3 = [0, 1]^T:   S_2 → S_2 + S_2;   S_1 → S_1 + S_2;   ∅ → S_2
s_4 = [1, −1]^T:  S_2 → S_1;   S_1 + S_2 → S_1 + S_1;   S_2 + S_2 → S_1 + S_2

  14. Reaction Propensities (review) • The propensity, w, of a reaction is its rate. • w_μ dt is the probability that the μ-th reaction will occur in a time step of length dt. • Typically, propensities depend only upon reactant populations.

Example: reactions with stoichiometry s_2 = [−1, 0]^T and their propensities w_2(x_1, x_2):
S_1 + S_1 → S_1:   k_1 x_1(x_1 − 1)/2
S_1 + S_2 → S_2:   k_2 x_1 x_2
S_1 → ∅:           k_3 x_1

  15. The Chemical Master Equation

Prob. that no reaction fires in [t, t+dt] = 1 − Σ_k w_k(x) dt + O(dt²)
Prob. that reaction R_k fires once in [t, t+dt] = w_k(x) dt + O(dt²)
Prob. that more than one reaction fires in [t, t+dt] = O(dt²)

p(x, t+dt) = p(x, t) [1 − Σ_k w_k(x) dt]            (no reaction fires at x)
           + Σ_k p(x − s_k, t) w_k(x − s_k) dt       (reaction R_k fires once, one s_k away from x)
           + O(dt²)                                  (more than one reaction in dt)

Rearranging and letting dt → 0 gives the Chemical Master Equation:

dp(x, t)/dt = − p(x, t) Σ_k w_k(x) + Σ_k p(x − s_k, t) w_k(x − s_k)
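For a simple birth-death channel the CME is a linear system that can be written down and solved directly. The following Python sketch (not from the slides; the rates, the truncation at 40 molecules, and the model choice of constitutive transcription ∅ → mRNA → ∅ are assumptions) builds the truncated CME dp/dt = A p and checks that its stationary distribution is Poisson with mean k_r/γ:

```python
import numpy as np
from math import exp, factorial

# CME for constitutive transcription: birth at rate kr, death at rate g*x,
# truncated to populations 0..N and written as dp/dt = A p.
kr, g, N = 5.0, 1.0, 40

A = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        A[x + 1, x] += kr      # birth out of state x fills state x+1 ...
        A[x, x] -= kr          # ... and drains state x
    if x > 0:
        A[x - 1, x] += g * x   # degradation out of state x fills state x-1 ...
        A[x, x] -= g * x       # ... and drains state x

# Stationary distribution: A p = 0 with sum(p) = 1 (replace one equation).
M = A.copy()
M[-1, :] = 1.0
b = np.zeros(N + 1)
b[-1] = 1.0
p = np.linalg.solve(M, b)

# Compare with the Poisson distribution of mean kr/g.
lam = kr / g
poisson = np.array([exp(-lam) * lam**k / factorial(k) for k in range(N + 1)])
print(np.abs(p - poisson).max())  # tiny: only truncation error remains
```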

  16. Relationship of Stochastic and Deterministic Descriptions

Given N species S_1, ..., S_N and M elementary reactions, let Φ_i := [S_i]. A deterministic description can be obtained from mass-action kinetics:

dΦ/dt = S f(Φ),

where f(·) is composed of monomials of order at most two; it depends on the types of the reactions and their rates.

Example:  A + B −k_1→ C,  A −k_2→ B

dΦ_A/dt = −k_1 Φ_A Φ_B − k_2 Φ_A
dΦ_B/dt = −k_1 Φ_A Φ_B + k_2 Φ_A
dΦ_C/dt = k_1 Φ_A Φ_B

or dΦ/dt = S f(Φ) with

S = [ −1 −1 ; −1 1 ; 1 0 ],   f(Φ) = [ k_1 Φ_A Φ_B ; k_2 Φ_A ]

  17. Relationship of Stochastic and Deterministic Descriptions

Define X_Ω(t) = X(t)/Ω. Question: how does X_Ω(t) relate to Φ(t)?

Fact: Let Φ(t) be the deterministic solution to the reaction rate equations dΦ/dt = S f(Φ), Φ(0) = Φ_0. Let X_Ω(t) be the stochastic representation of the same chemical system with X_Ω(0) = Φ_0. Then for every t ≥ 0:

lim_{Ω→∞} sup_{s≤t} ‖X_Ω(s) − Φ(s)‖ = 0  a.s.

  18. Moment Computations • Affine Propensity • Linear Noise Approximation

  19. Moment Computations

For the first moment E[X_i]: multiply the CME by x_i and sum over all (x_1, ..., x_N) ∈ N^N.
For the second moment E[X_i X_j]: multiply the CME by x_i x_j and sum over all (x_1, ..., x_N) ∈ N^N.

Let w(x) = [w_1(x), ..., w_M(x)]^T. In matrix notation:

dE[X]/dt = S E[w(X)]
dE[XX^T]/dt = S E[w(X) X^T] + E[w(X) X^T]^T S^T + S diag(E[w(X)]) S^T

  20. Affine Propensity

Suppose the propensity function is affine: w(x) = W x + w_0 (W is M × N, w_0 is M × 1). Then E[w(X)] = W E[X] + w_0 and E[w(X) X^T] = W E[XX^T] + w_0 E[X^T]. This gives us the moment equations:

First moment:   dE[X]/dt = S W E[X] + S w_0
Second moment:  dE[XX^T]/dt = S W E[XX^T] + E[XX^T] W^T S^T + S w_0 E[X^T] + E[X] w_0^T S^T + S diag(W E[X] + w_0) S^T

These are linear ordinary differential equations and can be easily solved!
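The first-moment equation really is an ordinary linear ODE that any integrator handles. This Python sketch (not from the slides; forward Euler, the parameter values, and using the mRNA/protein model that appears later in the deck are all assumptions) integrates dE[X]/dt = SW E[X] + S w_0 and checks convergence to the steady state −(SW)^{-1} S w_0:

```python
import numpy as np

# First-moment ODE for the two-species gene expression model:
# w = [kr; gr*x1; kp*x1; gp*x2], so w(x) = W x + w0 is affine.
kr, gr, kp, gp = 10.0, 1.0, 10.0, 1.0

S = np.array([[1, -1, 0, 0],
              [0, 0, 1, -1]], dtype=float)
W = np.array([[0.0, 0.0],   # w1 = kr (constant term goes in w0)
              [gr, 0.0],    # w2 = gr * x1
              [kp, 0.0],    # w3 = kp * x1
              [0.0, gp]])   # w4 = gp * x2
w0 = np.array([kr, 0.0, 0.0, 0.0])

A, b = S @ W, S @ w0
x = np.zeros(2)
dt = 1e-3
for _ in range(int(30 / dt)):     # forward Euler to t = 30 (many decay times)
    x = x + dt * (A @ x + b)

x_ss = -np.linalg.solve(A, b)     # analytic steady state [kr/gr, kr*kp/(gr*gp)]
print(x, x_ss)                    # both near [10, 100]
```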

  21. Affine Propensity (cont.)

Define the covariance matrix Σ = E[(X − E[X])(X − E[X])^T]. We can also compute covariance equations:

dΣ/dt = S W Σ + Σ W^T S^T + S diag(W E[X] + w_0) S^T

Steady-State Case: the steady-state moments and covariances can be obtained by solving linear algebraic equations. Let X̄ = lim_{t→∞} E[X(t)] and Σ̄ = lim_{t→∞} Σ(t). Then

S W X̄ = −S w_0
S W Σ̄ + Σ̄ W^T S^T + S diag(W X̄ + w_0) S^T = 0

  22. Fluctuations Arise from Noise-Driven Dynamics

Define A = S W and B = S √(diag(W X̄ + w_0)). The steady-state covariance equation

S W Σ̄ + Σ̄ W^T S^T + S diag(W X̄ + w_0) S^T = 0

becomes

A Σ̄ + Σ̄ A^T + B B^T = 0   (Lyapunov equation)

The Lyapunov equation characterizes the steady-state covariance of the output of the linear dynamical system

ẏ = A y + B ω,

where ω is unit-intensity white Gaussian noise. More precisely, it is the covariance of the solution of the vector SDE

dy = A y dt + B dW_t,

where W_t is Brownian motion. This is also called an Ornstein-Uhlenbeck process.
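The Lyapunov equation is just a linear system in the entries of Σ̄ and can be solved with the vectorization identity vec(AΣ + ΣA^T) = (I ⊗ A + A ⊗ I) vec(Σ). A Python sketch (not from the slides; the helper name `lyap` and the parameter values kr = gr = gp = 1, kp = 10 are assumptions for illustration):

```python
import numpy as np

def lyap(A: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Solve A*Sigma + Sigma*A^T + Q = 0 via Kronecker vectorization."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

# mRNA/protein model: A = SW and BB^T = S diag(W Xbar + w0) S^T evaluated
# at the steady state Xbar = [kr/gr, kp*kr/(gr*gp)].
kr, gr, kp, gp = 1.0, 1.0, 10.0, 1.0
A = np.array([[-gr, 0.0], [kp, -gp]])
BBt = np.diag([2 * kr, 2 * kp * kr / gr])

Sigma = lyap(A, BBt)
print(np.abs(A @ Sigma + Sigma @ A.T + BBt).max())  # residual ~0
```

For these parameters the solution matches the closed-form covariance derived on the gene expression slides: Σ̄_11 = k_r/γ_r = 1, Σ̄_12 = k_p k_r/(γ_r(γ_r+γ_p)) = 5, Σ̄_22 = 60.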

  23. Moment Computations • Affine Propensity • Linear Noise Approximation

  24. Linear Noise Approximation (LNA)

Let X_Ω(t) := X(t)/Ω. Linear Noise Approximation:

X_Ω(t) ≈ Φ_0(t) + (1/√Ω) V(t),

where Φ_0(t) solves the deterministic RRE dΦ/dt = S f(Φ), and V(t) solves the linear SDE

dV(t) = A(t) V(t) dt + B(t) dW_t,
A(t) = d[S f(Φ)]/dΦ |_{Φ_0(t)},   B(t) := S √(diag[f(Φ_0(t))])

  25. Multiplying X_Ω(t) ≈ Φ_0(t) + (1/√Ω) V(t) by Ω, we get

X(t) ≈ Ω Φ_0(t) + √Ω V(t)
(population ≈ deterministic concentration part + zero-mean stochastic part)

E[X(t)] ≈ Ω Φ_0(t)

Let Σ̄ be the covariance matrix of √Ω · V(t). Then

dΣ̄(t)/dt = A(t) Σ̄(t) + Σ̄(t) A^T(t) + Ω B(t) B(t)^T,
A(t) = d[S f(Φ)]/dΦ |_{Φ_0(t)},   B(t) := S √(diag[f(Φ_0(t))])

At the stationary distribution, we have the same Lyapunov equation as in the affine linear case, with

A = S W,   B = S √(diag(W X̄ + w_0))

  26. Application to Gene Expression

  27. Application to Gene Expression

Reactants: X_1(t) is the number of mRNA; X_2(t) is the number of protein.

Reactions:
R_1: ∅ −k_r→ mRNA
R_2: mRNA −γ_r→ ∅
R_3: mRNA −k_p→ mRNA + protein
R_4: protein −γ_p→ ∅

Stoichiometry and Propensity:

S = [ 1 −1 0 0 ; 0 0 1 −1 ]

w(X) = [ k_r ; γ_r X_1 ; k_p X_1 ; γ_p X_2 ] = W X + w_0,  with

W = [ 0 0 ; γ_r 0 ; k_p 0 ; 0 γ_p ],   w_0 = [ k_r ; 0 ; 0 ; 0 ]

  28. Steady-State Moments

A = S W = [ −γ_r 0 ; k_p −γ_p ],   S w_0 = [ k_r ; 0 ]

X̄ = −A^{−1} S w_0 = [ k_r/γ_r ; k_p k_r/(γ_p γ_r) ]

Steady-State Covariance

B B^T = S diag(W X̄ + w_0) S^T = [ 2 k_r , 0 ; 0 , 2 k_p k_r/γ_r ]

The steady-state covariance equation A Σ̄ + Σ̄ A^T + B B^T = 0 (Lyapunov equation) can be solved algebraically for Σ̄:

Σ̄ = [ k_r/γ_r , k_p k_r/(γ_r(γ_r + γ_p)) ; k_p k_r/(γ_r(γ_r + γ_p)) , (k_p k_r/(γ_p γ_r))(1 + k_p/(γ_r + γ_p)) ]

  29. Coefficients of Variation

C²_vr = 1/X̄_1 = γ_r/k_r
C²_vp = (1/X̄_2)(1 + k_p/(γ_r + γ_p)) = (γ_p γ_r/(k_r k_p))(1 + k_p/(γ_r + γ_p))

Question: Does a large X̄_2 imply a small C_vp?

C²_vp = (γ_p γ_r/(k_r k_p))(1 + k_p/(γ_r + γ_p)) ≥ (γ_p γ_r/(k_r k_p)) · k_p/(γ_r + γ_p) = (1/k_r) · γ_p γ_r/(γ_r + γ_p)

This lower bound does not involve k_p, while X̄_2 = k_r k_p/(γ_r γ_p) can be made arbitrarily large by increasing k_p. Hence X̄_2 can be chosen independently from C_vp: large mean does not imply small fluctuations!
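The C²_vp formula can be evaluated for the parameter sets shown on the next slide, all of which hold the mean protein level E{P} = k_r k_p = 100 fixed with γ_r = γ_p = 1. A quick Python check (the function name is just for illustration):

```python
# C^2_vp = (gp*gr/(kr*kp)) * (1 + kp/(gr + gp)) for the mRNA/protein model.
def cv2_protein(kr, kp, gr=1.0, gp=1.0):
    mean_p = kr * kp / (gr * gp)
    return (1.0 / mean_p) * (1.0 + kp / (gr + gp))

# Same mean E{P} = 100, very different noise levels:
print(cv2_protein(1, 100))     # 0.51
print(cv2_protein(10, 10))     # 0.06
print(cv2_protein(100, 1))     # 0.015
print(cv2_protein(1000, 0.1))  # 0.0105
```

Rare transcription with large protein bursts (small k_r, large k_p) is noisy; frequent transcription with small bursts is quiet, at the same mean.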

  30. (Figure: simulated protein trajectories P(t) over 500 s for six parameter pairs, all with E{P} = 100 and γ_r = γ_p = 1: (k_r, k_p) = (0.1, 1000), (0.01, 10000), (1, 100), (10, 10), (100, 1), (1000, 0.1). The measured C²_vp falls as k_r grows and k_p shrinks, e.g. C²_vp = 0.51 for (1, 100) and C²_vp = 0.06 for (10, 10); the trajectories go from large bursty excursions about E{P} to tight fluctuations around it.)

  31. Noise Suppression and Exploitation

  32. Noise Attenuation through Negative Feedback

Reactants: X_1(t) is the number of mRNA; X_2(t) is the number of protein.

Reactions, with transcription rate k_r = k_0 − k_1 · (#protein):
R_1: ∅ −k_0 − k_1 X_2→ mRNA
R_2: mRNA −γ_r→ ∅
R_3: mRNA −k_p→ mRNA + protein
R_4: protein −γ_p→ ∅

Stoichiometry and Propensity:

S = [ 1 −1 0 0 ; 0 0 1 −1 ]

w(X) = [ k_0 − k_1 X_2 ; γ_r X_1 ; k_p X_1 ; γ_p X_2 ] = W X + w_0,  with

W = [ 0 −k_1 ; γ_r 0 ; k_p 0 ; 0 γ_p ],   w_0 = [ k_0 ; 0 ; 0 ; 0 ]

  33. Steady-State Moments

A = S W = [ −γ_r −k_1 ; k_p −γ_p ],   S w_0 = [ k_0 ; 0 ]

X̄ = −A^{−1} S w_0 = [ μ_r ; μ_p ],  where

μ_r = (k_0/γ_r) / (1 + k_1 k_p/(γ_p γ_r)),   μ_p = (k_0 k_p/(γ_r γ_p)) / (1 + k_1 k_p/(γ_p γ_r))

Steady-State Covariance

B B^T = S diag(W X̄ + w_0) S^T = [ k_0 − k_1 μ_p + γ_r μ_r , 0 ; 0 , k_p μ_r + γ_p μ_p ]

The steady-state covariance equation A Σ̄ + Σ̄ A^T + B B^T = 0 (Lyapunov equation) can be solved algebraically for Σ̄. In particular,

Σ̄_22 = σ²_p = μ_p [ 1 + (b/(1 + η)) · (1 − φ)/(1 + bφ) ],  where φ = k_1/γ_p, b = k_p/γ_r, η = γ_p/γ_r
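The closed-form protein variance under feedback can be cross-checked against a direct numerical solve of the Lyapunov equation. A Python sketch (not from the slides; the parameter set k_0 = 20, k_1 = 0.1, k_p = 10, γ_r = γ_p = 1 is an arbitrary illustration):

```python
import numpy as np

# Negative-feedback model: transcription propensity k0 - k1*x2.
k0, k1, kp, gr, gp = 20.0, 0.1, 10.0, 1.0, 1.0

mu_r = (k0 / gr) / (1 + k1 * kp / (gp * gr))   # steady-state mRNA mean
mu_p = (kp / gp) * mu_r                        # steady-state protein mean

A = np.array([[-gr, -k1], [kp, -gp]])
BBt = np.diag([k0 - k1 * mu_p + gr * mu_r, kp * mu_r + gp * mu_p])

# Solve A*Sigma + Sigma*A^T + BB^T = 0 by Kronecker vectorization.
K = np.kron(np.eye(2), A) + np.kron(A, np.eye(2))
Sigma = np.linalg.solve(K, -BBt.reshape(-1)).reshape(2, 2)

# Closed form from the slide: sigma_p^2 = mu_p * (1 + b/(1+eta) * (1-phi)/(1+b*phi)).
phi, b, eta = k1 / gp, kp / gr, gp / gr
sigma2_p = mu_p * (1 + (b / (1 + eta)) * (1 - phi) / (1 + b * phi))
print(Sigma[1, 1], sigma2_p)  # both ~325
```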

  34. Feedback vs. No Feedback

In order to compare the noise in the two cases, we must ensure that both configurations have the same mean! Impose the constraint μ_p^FB = μ_p^NFB =: μ*_p. This may be achieved by choosing k_0 = k_r + k_1 μ_p^NFB.

            feedback (rate k_0 − k_1 · #protein)                 no feedback (rate k_r)
Mean:       μ*_p                                                 μ*_p
Variance:   μ*_p [ 1 + (b/(1 + η)) · (1 − φ)/(1 + bφ) ]          μ*_p [ 1 + b/(1 + η) ]

where φ = k_1/γ_p, b = k_p/γ_r, η = γ_p/γ_r. Since (1 − φ)/(1 + bφ) < 1, the protein variance is always smaller with negative feedback!

  35. Example: γ_p = γ_r = 1, k_p = 10. (Figure: steady-state protein distributions for k_1 = 0.2, 0.1, 0.05, 0, −0.05; more feedback gives a narrower distribution over protein molecule number, 0 to 200.) Note that these distributions are NOT Gaussian.

  36. Exploiting the Noise: Failure of the Linear Noise Approximation

(Figure: the stochastic mean number of molecules of P over time stays well above the deterministic trajectory.)

The production of P may be approximated by ∅ −kq→ P −1→ ∅, where

q = 1/(1 + n/(Ω K)),   K = k_p/k_a,   n is #S.

Since q is convex in n, Jensen's inequality gives

E[q] = E[ 1/(1 + n/(ΩK)) ] ≥ 1/(1 + E[n]/(ΩK)).

• Noise enhances the signal! (Johan Paulsson, Otto G. Berg, and Måns Ehrenberg, PNAS 2000)
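The Jensen's-inequality step is easy to demonstrate numerically. A Python illustration (not from the slides; treating the substrate count n as Poisson with mean 40 and taking ΩK = 50 are assumptions made only for this sketch):

```python
import numpy as np

# For a random substrate count n, the average of the convex function
# q(n) = 1/(1 + n/(Omega*K)) is at least q evaluated at the average count.
rng = np.random.default_rng(1)
omega_K = 50.0
n = rng.poisson(40.0, size=100_000)   # fluctuating substrate copy number

q = 1.0 / (1.0 + n / omega_K)
lhs = q.mean()                          # E[q(n)]
rhs = 1.0 / (1.0 + n.mean() / omega_K)  # q(E[n])
print(lhs, rhs, lhs >= rhs)             # the inequality holds strictly
```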

  37. On the menu...
• Today
  - Overview of Stochastic Gene Expression
  - Stochastic Chemical Kinetics
  - Solutions for Simple Stochastic Processes (Transcription)
  - Importance of Population Size
  - Moment Computations for Linear Propensities
  - Linear Noise Approximation
• Tomorrow
  - Monte Carlo Simulation Techniques: Gillespie (SSA), tau leaping, chemical Langevin (SDEs), slow-scale SSA
  - Density Computations with Finite State Projection Techniques
  - Switch and Trajectory Analyses

  38. Monte Carlo Methods

  39. Kinetic Monte-Carlo Simulation Methods (Brian Munsky)
• Stochastic Simulation Algorithm
  - D.T. Gillespie, J. Phys. Chem. A 81, 2340 (1977)
  - M. Gibson and J. Bruck, J. Phys. Chem. 104, 1876 (2000)
• τ leaping
  - D. Gillespie, J. Chem. Phys. 115, 1716 (2001); 119, 8229 (2003)
  - M. Rathinam et al., J. Chem. Phys. 119, 12784 (2003)
  - T. Tian and K. Burrage, J. Chem. Phys. 121, 10356 (2004)
  - A. Chatterjee et al., J. Chem. Phys. 122, 054104 (2005)
  - Y. Cao, D. Gillespie and L. Petzold, J. Chem. Phys. 123, 054104 (2005)
• Chemical Langevin Equations
  - D. Gillespie, J. Chem. Phys. 113, 1716 (2000)
• System Partitioning Methods
  - C. Rao and A. Arkin, J. Chem. Phys. 118, 4999 (2003)
  - Y. Cao et al., J. Chem. Phys. 122, 014116 (2005)
• Hybrid Methods
  - E. Haseltine and J. Rawlings, J. Chem. Phys. 117, 6959 (2002)
  - H. Salis and Y. Kaznessis, J. Chem. Phys. 122, 054103 (2005)

  40. A Jump-Markov description of chemical kinetics • At any time, the state of the system is defined by its integer population vector: x ∈ Z^N. • Reactions are transitions from one state to another. (Figure: lattice of population states labeled [#species 1, #species 2].)

  41. A Jump-Markov description of chemical kinetics • At any time, the state of the system is defined by its integer population vector: x ∈ Z^N. • Reactions are transitions from one state to another. • These reactions are random; others could have occurred. (Figure: the same lattice with alternative transitions to neighboring states.)

  42. A Jump-Markov description of chemical kinetics Or others... (Figure: a larger portion of the state lattice.)

  43. A Jump-Markov description of chemical kinetics Or others... (Figure: a still larger portion of the state lattice.)

  44. A Jump-Markov description of chemical kinetics Or others...

  45. A Jump-Markov description of chemical kinetics Or others...

  46. A Jump-Markov description of chemical kinetics Or others... At each step, we ask two questions: When is the next jump? Where will that jump lead?

  47. Reaction Stoichiometry (review) • The stoichiometric vector, s, refers to the relative change in the population vector after a reaction. • There may be many different reactions for a given stoichiometry.

s_1 = [1, 0]^T:   S_1 → S_1 + S_1;   S_2 → S_2 + S_1;   ∅ → S_1
s_2 = [−1, 0]^T:  S_1 + S_1 → S_1;   S_1 + S_2 → S_2;   S_1 → ∅
s_3 = [0, 1]^T:   S_2 → S_2 + S_2;   S_1 → S_1 + S_2;   ∅ → S_2
s_4 = [1, −1]^T:  S_2 → S_1;   S_1 + S_2 → S_1 + S_1;   S_2 + S_2 → S_1 + S_2

  48. Reaction Propensities (review) • The propensity, w, of a reaction is its rate. • w_μ dt is the probability that the μ-th reaction will occur in a time step of length dt. • Typically, propensities depend only upon reactant populations.

Example: reactions with stoichiometry s_2 = [−1, 0]^T and their propensities w_2(x_1, x_2):
S_1 + S_1 → S_1:   k_1 x_1(x_1 − 1)/2
S_1 + S_2 → S_2:   k_2 x_1 x_2
S_1 → ∅:           k_3 x_1

  49. Exponential Waiting Times

Probability a reaction will occur in [t, t+Δt):  w Δt + O(Δt²)
Probability a reaction will not occur in [t, t+Δt):  1 − w Δt + O(Δt²)
Probability a reaction will not occur in two such intervals, i.e. in [t, t+2Δt):  (1 − w Δt + O(Δt²))² = 1 − 2w Δt + O(Δt²)

Suppose that τ = K Δt. Then the probability that no reaction will occur in the interval [t, t+τ) is

(1 − wτ/K + O(K^{−2}))^K.

Taking the limit as K goes to infinity yields the probability that no reaction will occur in [t, t+τ):

lim_{K→∞} (1 − wτ/K + O(K^{−2}))^K = exp(−wτ)
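The limiting step can be seen numerically: as the interval is split into more subintervals, the survival probability approaches exp(−wτ). A small Python illustration (the values of w and τ are arbitrary):

```python
import math

# (1 - w*tau/K)^K approaches exp(-w*tau) as the number of subintervals K grows.
w, tau = 2.0, 0.7
exact = math.exp(-w * tau)

errs = []
for K in (10, 100, 1000, 100_000):
    approx = (1 - w * tau / K) ** K
    errs.append(abs(approx - exact))
    print(K, approx, errs[-1])   # error shrinks roughly like 1/K
```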

  50. Exponential Random Variables

The probability that a reaction will occur in the interval [t, t+τ) is F_T(τ) = 1 − exp(−wτ). This is a cumulative distribution. The density (derivative) of the random variable T is:

f_T(τ) = w exp(−wτ)

Such a random variable is known as an exponentially distributed random variable. Notation: T ∈ EXP(λ) means T is an exponentially distributed r.v. with parameter λ.

  51. Exponential Waiting Times
• We have assumed that the system is fully described by its population vector.
• If no reaction occurs, then nothing will have changed.
• Waiting times must therefore be memoryless random variables.

(Figure: a waiting-time probability density is cut at some time and re-scaled; the result looks identical to the original density.)

• No matter where we cut and re-scale the distribution, it must always look the same. The exponential is the only continuous r.v. with this property.

  52. Generating Waiting Times
• To generate an exponentially distributed random number, all we need is a uniform random number generator.
• Find the cumulative distribution: F(t) = 1 − exp(−λt).
• Generate a uniform random number, r ∈ U[0, 1].
• Find the intersection where F(τ) = r:

τ = (1/λ) log(1/(1 − r))

• This is the time of the next reaction.
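The inverse-transform step above can be sketched in a few lines of Python (the rate λ = 2 and sample size are arbitrary choices for this check):

```python
import math
import random

# Solving F(tau) = r for tau turns uniform random numbers into
# exponential waiting times with rate lam.
random.seed(0)
lam = 2.0
samples = [(1.0 / lam) * math.log(1.0 / (1.0 - random.random()))
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(mean)  # close to the exponential mean 1/lam = 0.5
```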

  53. Monte-Carlo Simulation Methods: The Jump Markov Process
• Stochastic Simulation Algorithm
  - D.T. Gillespie, J. Phys. Chem. A 81, 2340 (1977)
  - M. Gibson and J. Bruck, J. Phys. Chem. 104, 1876 (2000)

  54. Stochastic Simulation Algorithm

Step 1. Generate the time τ of the next reaction.
Step 2. Decide which reaction has occurred.
Step 3. Update the current time (t = t + τ) and state (x = x + s_k).

(Figure: a state at t = t_i jumps along s_2 to a new state at t = t_i + τ.)

  55. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm
  - D.T. Gillespie, J. Phys. Chem. A 81, 2340 (1977)
  - M. Gibson and J. Bruck, J. Phys. Chem. 104, 1876 (2000)
• Possible SSA methods:
  - First Reaction Method (Gillespie '77)
  - Next Reaction Method (Gibson and Bruck '00)
  - Direct Method (Gillespie '77)

  56. The First Reaction Method (FRM)

Step 1. Generate the time of the next reaction of each type. The time until the next reaction of type μ is an exponentially distributed random variable: τ_μ ∈ EXP(w_μ(x)). To generate each next-reaction time, generate r_μ from a uniform distribution on (0, 1) and use the equation:

τ_μ = (1/w_μ(x)) log(1/r_μ)

Step 2. Decide which reaction has occurred. This is simply the reaction with the smallest τ_μ:

k = arg min_{μ ∈ {1,...,M}} τ_μ

Step 3. Update the current time (t = t + τ_k) and state (x = x + s_k).

In the FRM each step requires M random variables.

  57. The First Reaction Method SSA in Matlab.

clear all
t = 0; tstop = 2000;                 % specify initial and final times
x = [0; 0];                          % specify initial conditions
S = [1 -1 0 0; 0 0 1 -1];            % specify stoichiometry
w = inline('[10, 1*x(1), 10*x(1), 1*x(2)]','x');  % specify propensity functions
while t<tstop
    tpos = 1./w(x).*log(1./rand(1,4));  % possible times until each reaction fires next
    [tau,i] = min(tpos);                % find which reaction fires first
    t = t + tau;                        % advance to the time of that reaction
    if t<=tstop
        x = x + S(:,i);                 % update the configuration
    end
end

  58. The Next Reaction Method (NRM)
• In the FRM, we generate times {τ_μ} for all M reactions and choose the reaction, k, with the smallest time, τ_k.
• Only a few species will change population as a result of this reaction; the rest will remain constant.
• For most reactions, the propensity functions will therefore remain constant.
• For these, the times can be reused in the subsequent step to find the next reaction: {τ_μ} → {τ_μ − τ_k}.
• When there are many different species and reactions, the NRM approach can be done with far fewer random numbers than the FRM.
• It is particularly useful for compartmental or reaction-diffusion processes.

  59. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm
  - D.T. Gillespie, J. Phys. Chem. A 81, 2340 (1977)
  - M. Gibson and J. Bruck, J. Phys. Chem. 104, 1876 (2000)
• Possible SSA methods:
  - First Reaction Method (Gillespie '77)
  - Next Reaction Method (Gibson and Bruck '00)
  - Direct Method (Gillespie '77)

  60. Minimum of Exponential Random Variables

Let {τ_1, τ_2, ..., τ_M} be a set of independent exponentially distributed random variables with τ_μ ∈ EXP(w_μ). The minimum of {τ_μ} is an exponentially distributed random variable:

min_{μ ∈ {1,...,M}} τ_μ ∈ EXP(|w|_1)

The argmin, k, of this minimum is also a random variable, with distribution:

P(k = μ) = w_μ / |w|_1

In the DM we only generate 2 random variables per step.

  61. The Direct Method (DM)

Step 1. Generate the time of the next reaction. The time until the next reaction is an exponentially distributed random variable: τ ∈ EXP(|w|_1). To generate the next reaction time, generate r_1 from a uniform distribution on (0, 1) and use the equation:

τ = (1/|w|_1) log(1/r_1)

Step 2. Decide which reaction has occurred. To obtain a realization of which reaction will occur, generate a second uniform random number, r_2, and find the smallest k such that:

Σ_{μ=1}^{k−1} w_μ(x) ≤ r_2 |w|_1 ≤ Σ_{μ=1}^{k} w_μ(x)

Step 3. Update the current time (t = t + τ) and state (x = x + s_k).

  62. The Direct Method SSA in Matlab.

clear all
t = 0; tstop = 2000;                 % specify initial and final times
x = [0; 0];                          % specify initial conditions
S = [1 -1 0 0; 0 0 1 -1];            % specify stoichiometry
w = inline('[10, 1*x(1), 10*x(1), 1*x(2)]','x');  % specify propensity functions
while t<tstop
    wx = w(x);                       % evaluate the propensity functions once
    w0 = sum(wx);                    % sum of the propensity functions
    t = t + 1/w0*log(1/rand);        % update time of next reaction
    if t<=tstop
        r2w0 = rand*w0;              % second uniform random number, scaled by prop. sum
        i = 1;                       % initialize reaction counter
        while sum(wx(1:i))<r2w0      % increment counter until partial sum exceeds r2w0
            i = i+1;
        end
        x = x + S(:,i);              % update the configuration
    end
end

  63. Limitations on the SSA
• The SSA is an "exact" simulation of the system.
• But...
  - Stepping through every reaction can take a lot of time.
  - A statistical representation of the system dynamics may require many realizations (10^4 to 10^6).
• Faster approximations are available for some problems.

  64. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm (SSA)
• τ-leaping
  - D. Gillespie, J. Chem. Phys. 115, 1716 (2001)
  - D. Gillespie, L. Petzold, J. Chem. Phys. 119, 8229 (2003)
  - M. Rathinam et al., J. Chem. Phys. 119, 12784 (2003)
  - T. Tian and K. Burrage, J. Chem. Phys. 121, 10356 (2004)
  - Y. Cao, D. Gillespie and L. Petzold, J. Chem. Phys. 123, 054104 (2005)

  65. τ Leaping

Step 0. Specify the length of each time step, τ. Assume that all propensity functions are constant over the time interval (t, t+τ). The number of times each reaction will fire is then a Poisson* random number k_μ with mean w_μ(x) τ.
Step 1. For each μ, generate k_μ.
Step 2. Update the time: t = t + τ. Update the state: x = x + Σ_{μ=1}^M k_μ s_μ.

* In some recent studies, binomial r.v.'s are used instead (T. Tian and K. Burrage, 2004).
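The steps above can be sketched for the one-species birth-death model. This Python illustration (not from the slides; the rates, the leap size τ = 0.01, and clamping the population at zero are assumptions made for this sketch) fires Poisson reaction counts each leap and checks the long-run mean against k_r/γ:

```python
import numpy as np

# Tau-leaping for phi --kr--> mRNA --g--> phi.
rng = np.random.default_rng(2)
kr, g, tau = 10.0, 1.0, 0.01

x = 0
traj = []
for step in range(int(50 / tau)):      # simulate to t = 50
    w = np.array([kr, g * x])          # propensities, frozen over the leap
    k = rng.poisson(w * tau)           # number of firings of each reaction
    x = max(x + k[0] - k[1], 0)        # stoichiometries +1 and -1; forbid negatives
    traj.append(x)

avg = np.mean(traj[len(traj) // 2:])   # time average over the second half
print(avg)                             # near the steady-state mean kr/g = 10
```

Clamping at zero is a crude guard against the negative-population problem discussed two slides below; the more careful fixes in the cited τ-leaping papers adapt τ or switch to binomial counts.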

  66. τ Leaping

(Figure: one leap from t = t_i to t = t_i + τ. With reaction counts k_1 = 4 for s_1 = [0, 1]^T, k_2 = 2 for s_2 = [−1, 1]^T, k_3 = 3 for s_3 = [0, −1]^T, and k_4 = 4 for s_4 = [1, −1]^T, the state jumps by Σ_μ k_μ s_μ.)

The number of times each reaction will fire is a Poisson random number with mean w_μ τ.
Step 1. For each μ, generate k_μ.
Step 2. Update the state: x = x + Σ_{μ=1}^M k_μ s_μ. Update the time: t = t + τ.

  67. Limitations of τ Leaping
• For many situations τ leaping significantly speeds up the Monte Carlo simulation, but:
  - Poisson r.v.'s are unbounded.
  - Propensity functions may change dramatically over small time intervals.
  - A leap may result in negative populations.
• Note that these concerns are most important when the populations of some species are very small, precisely the circumstance where stochastic models are most important!

  68. Chemical Langevin Equation

  69. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm (SSA)
• τ-leaping
• System Partitioning Methods
  - Fast-Slow Partitions
    - C. Rao and A. Arkin, J. Chem. Phys. 118, 4999 (2003)
    - Y. Cao et al., J. Chem. Phys. 122, 014116 (2005)
  - Continuous-Discrete Partitions
    - E. Haseltine and J. Rawlings, J. Chem. Phys. 117, 6959 (2002)
    - H. Salis and Y. Kaznessis, J. Chem. Phys. 122, 054103 (2005)

  70. Fast-Slow Partitions. Separate the system into "fast" and "slow" partitions. Assume that the "fast" partitions reach probabilistic equilibrium before a slow reaction occurs.

  71. Fast-Slow Partitions. Use the fast sets' steady-state probability distributions to average the propensity functions of the slow reactions over the fast states x_1, x_2, x_3, ...:

w̄_μ = Σ_i P^SS(x_i) w_μ(x_i),  for μ = 1, 2, ..., M

This results in a vector of average propensity functions, w̄, for the slow reactions.

  72. Fast-Slow Partitions. The projection onto the slow manifold results in a new, lower-dimensional Markov chain. This is simulated with the SSA.

  73. Continuous-Discrete Partitions
• In some systems, there are great differences in scale:
  - Large populations (continuous)
  - Small populations (discrete)
• All-discrete models take too long.
• All-continuous models are inaccurate.
• Hybrid models are necessary.
