
Kinetic Monte-Carlo Simulation Methods
Brian Munsky
Center for Control, Dynamical Systems and Computation (CCDC)


  1. The Direct Method (DM)
Step 1. Generate the time of the next reaction. The time until the next reaction is an exponentially distributed random variable: P_\tau(t) = |w(x)|_1 e^{-|w(x)|_1 t}. To generate the next reaction time, generate r_1 from a uniform distribution on (0,1) and use the equation: \tau = (1/|w|_1) \log(1/r_1).
Step 2. Decide which reaction has occurred. To obtain a realization of which reaction will occur, generate a second uniform random number, r_2, and find the smallest k such that: \sum_{\mu=1}^{k-1} w_\mu(x) \le r_2 |w|_1 \le \sum_{\mu=1}^{k} w_\mu(x).
Step 3. Update the current time (t = t + \tau) and state (x = x + s_k).
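The three steps above can be sketched directly in code. Below is a minimal Python version (the lecture's own implementation, in Matlab, follows on the next slide); the stoichiometry and propensities reuse that Matlab demo's birth/death network.

```python
import numpy as np

rng = np.random.default_rng(0)

def direct_method_step(x, t, S, w):
    """One step of Gillespie's Direct Method.

    x : current state vector, t : current time,
    S : stoichiometry matrix (one column per reaction),
    w : function returning the vector of propensities w(x).
    """
    wx = w(x)
    w0 = wx.sum()                          # |w(x)|_1
    # Step 1: exponentially distributed time to the next reaction
    tau = np.log(1.0 / rng.random()) / w0
    # Step 2: pick the reaction whose cumulative propensity
    # first exceeds r2 * |w(x)|_1
    r2w0 = rng.random() * w0
    k = int(np.searchsorted(np.cumsum(wx), r2w0))
    # Step 3: update state and time
    return x + S[:, k], t + tau

# Birth/death network from the Matlab demo on the next slide
S = np.array([[1, -1, 0, 0],
              [0, 0, 1, -1]])
w = lambda x: np.array([10.0, 1.0 * x[0], 10.0 * x[0], 1.0 * x[1]])

x, t = np.array([0, 0]), 0.0
while t < 10.0:
    x, t = direct_method_step(x, t, S, w)
```

`np.searchsorted` on the cumulative propensity vector implements the "smallest k" search in Step 2 in one call.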

  2. The Direct Method SSA in Matlab.
clear all
t = 0; tstop = 2000;                  %% specify initial and final times
x = [0; 0];                           %% specify initial conditions
S = [1 -1 0 0; 0 0 1 -1];             %% specify stoichiometry
w = inline('[10, 1*x(1), 10*x(1), 1*x(2)]','x');   %% specify propensity functions
while t < tstop
    wx = w(x);                        % evaluate the propensity functions at x
    w0 = sum(wx);                     % compute the sum of the prop. functions
    t = t + 1/w0*log(1/rand);         % update time of next reaction
    if t <= tstop
        r2w0 = rand*w0;               % generate second random number and multiply by prop. sum
        i = 1;                        % initialize reaction counter
        while sum(wx(1:i)) < r2w0     % increment counter until sum(wx(1:i)) exceeds r2w0
            i = i + 1;
        end
        x = x + S(:,i);               % update the configuration
    end
end

  3. Limitations on the SSA
• The SSA is an “exact” simulation of the system.
• But:
  – Stepping through every reaction can take a lot of time.
  – A statistical representation of the system dynamics may require many realizations (10^4 to 10^6).
• Faster approximations are available for some problems.

  4. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm (SSA)
• τ-leaping
  – D. Gillespie, J. Chem. Phys. 115, 1716 (2001)
  – D. Gillespie and L. Petzold, J. Chem. Phys. 119, 8229 (2003)
  – M. Rathinam et al., J. Chem. Phys. 119, 12784 (2003)
  – T. Tian and K. Burrage, J. Chem. Phys. 121, 10356 (2004)
  – Y. Cao, D. Gillespie and L. Petzold, J. Chem. Phys. 123, 054104 (2005)

  5. τ Leaping
Step 0. Specify the length of each time step, τ. Assume that all propensity functions are constant over the time interval (t, t+τ). The number of times each reaction will fire is then a Poisson* random number with mean w_\mu \tau: P_{k_\mu}(n) = [w_\mu(x)\tau]^n e^{-w_\mu(x)\tau} / n!
Step 1. For each μ, generate k_μ.
Step 2. Update the time: t = t + \tau. Update the state: x = x + \sum_{\mu=1}^{M} k_\mu s_\mu.
* In some recent studies, binomial RVs are used instead (T. Tian and K. Burrage, 2004).
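The τ-leap step above can be sketched in Python. The network and rates reuse the earlier birth/death example, and this naive fixed-τ version deliberately omits the safeguards discussed in the limitations slide below.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap_step(x, t, S, w, tau):
    """One tau-leap step: fire each reaction k_mu ~ Poisson(w_mu(x) * tau)
    times, assuming the propensities stay constant over (t, t + tau).
    Note: this naive version can produce negative populations."""
    k = rng.poisson(w(x) * tau)        # Step 1: Poisson firing counts
    x_new = x + S @ k                  # Step 2: x <- x + sum_mu k_mu s_mu
    return x_new, t + tau              #         and t <- t + tau

S = np.array([[1, -1, 0, 0],
              [0, 0, 1, -1]])
w = lambda x: np.array([10.0, 1.0 * x[0], 10.0 * x[0], 1.0 * x[1]])

x, t = np.array([0, 0]), 0.0
for _ in range(200):                   # 200 leaps of length tau = 0.01
    x, t = tau_leap_step(x, t, S, w, tau=0.01)
```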

  6. τ Leaping
Update the time from t = t_i to t = t_i + τ. Example firing counts: k_1 = 4, s_1 = [0, 1]^T; k_2 = 2, s_2 = [-1, 1]^T; k_3 = 3, s_3 = [0, -1]^T; k_4 = 4, s_4 = [1, -1]^T.
The number of times each reaction will fire is a Poisson random number with mean w_\mu \tau: P_{k_\mu}(n) = [w_\mu(x)\tau]^n e^{-w_\mu(x)\tau} / n!
Step 1. For each μ, generate k_μ.
Step 2. Update the state: x = x + \sum_{\mu=1}^{M} k_\mu s_\mu, and the time: t = t + \tau.

  7. Limitations of τ leaping
• For many situations τ leaping significantly speeds up the Monte Carlo simulation, but:
  – Poisson r.v.'s are unbounded.
  – Propensity functions may change dramatically over small time intervals.
  – The result may be negative populations.
• Note that these concerns are most important when the populations of some species are very small. That is precisely the circumstance where stochastic models are most important!

  8. Chemical Langevin Equation
• Comparison of Poisson and Gaussian random variables. [Figure: Poisson distributions with means μ = 1, 10, and 50, each overlaid with a Gaussian of the same mean and variance.]
• For small numbers of reaction steps, tau leaping doesn't give much help.
• For large numbers of reactions, replace the Poisson distribution with a normal distribution (same mean and variance). These are cheaper to generate.
• This is known as the chemical Langevin equation.
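The substitution behind the chemical Langevin equation can be checked numerically. A small Python sketch (sample size and means chosen here purely for illustration) compares Poisson counts with Gaussians of the same mean and variance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

for mu in (1.0, 50.0):
    pois = rng.poisson(mu, n)
    # Chemical Langevin idea: Gaussian with the same mean and variance
    gauss = rng.normal(mu, np.sqrt(mu), n)
    # For mu = 50 the two agree closely; for mu = 1 they do not
    # (the Gaussian is symmetric and can even go negative).
    print(mu, pois.mean(), gauss.mean(), (gauss < 0).mean())
```

For small means the Gaussian assigns noticeable probability to negative counts, which is why the Langevin replacement only applies in the large-number regime.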

  9. Monte-Carlo Simulation Methods
• Stochastic Simulation Algorithm (SSA)
• τ-leaping
• System Partitioning Methods
  – Fast-Slow Partitions
    • C. Rao and A. Arkin, J. Chem. Phys. 118, 4999 (2003)
    • Y. Cao et al., J. Chem. Phys. 122, 014116 (2005)
  – Continuous-Discrete Partitions
    • E. Haseltine and J. Rawlings, J. Chem. Phys. 117, 6959 (2002)
    • H. Salis and Y. Kaznessis, J. Chem. Phys. 122, 054103 (2005)

  10. Fast-Slow partitions.
Separate into “fast” and “slow” partitions. Assume that the “fast” partitions reach probabilistic equilibrium (a steady-state distribution P^ss) before a slow reaction occurs.

  11. Fast-Slow partitions.
Use the fast sets' steady-state probability distributions P^ss to weight the propensity functions of the slow reactions: \bar{w}_\mu = \sum_i P^{ss}(x_i) w_\mu(x_i), for \mu = \{1, 2, \ldots, M\}. This results in a vector of average propensity functions, \bar{w}, for the slow reactions.
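The averaging step can be made concrete with a small numeric sketch. The two-state fast partition, its rates, and the slow propensity values below are all made up for illustration:

```python
import numpy as np

# Hypothetical fast partition with two internal states x1, x2.
# Generator of the fast dynamics (each column sums to zero):
A_fast = np.array([[-2.0,  3.0],
                   [ 2.0, -3.0]])

# Steady-state distribution: null vector of A_fast, normalized.
# For this 2-state chain, P_ss = [3/5, 2/5].
vals, vecs = np.linalg.eig(A_fast)
p_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
p_ss = p_ss / p_ss.sum()

# Slow propensity evaluated at each fast state (made-up values):
w_slow = np.array([1.0, 4.0])

# Averaged slow propensity: w_bar = sum_i P_ss(x_i) * w_slow(x_i)
w_bar = p_ss @ w_slow                  # 3/5 * 1 + 2/5 * 4 = 2.2
print(w_bar)
```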

  12. Fast-Slow partitions.
The projection onto the slow manifold results in a new, lower-dimensional Markov chain. This is simulated with the SSA.

  13. Continuous-Discrete partitions.
• In some systems there are great differences in scale:
  – large populations (continuous)
  – small populations (discrete)
• An all-discrete model takes too long.
• An all-continuous model is inaccurate.
• Hybrid models are necessary.

  14. Separate into “continuous” and “discrete” partitions.
Simulate the continuous part with ordinary or stochastic differential equations. To find the time τ of the next discrete reaction, choose a uniform random variable r and numerically integrate the discrete propensity functions until: \int_{t_0}^{t_0+\tau} \sum_{\mu=1}^{M} w_\mu(x(t))\, dt = -\log r. Then choose which discrete reaction occurs.
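The stopping rule above can be sketched as follows, assuming for illustration a simple Euler integration of a made-up continuous drift dx/dt = 1 - 0.1x and a single discrete propensity w(x) = 0.5x (both placeholders, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

def next_discrete_reaction_time(x0, t0, dt=1e-3):
    """Integrate the continuous state and the accumulated discrete
    propensity until the integral reaches -log(r), r ~ U(0,1)."""
    target = -np.log(rng.random())
    x, t, integral = x0, t0, 0.0
    while integral < target:
        # Continuous part: assumed ODE dx/dt = 1 - 0.1 x (illustrative)
        x += (1.0 - 0.1 * x) * dt
        # Discrete propensity, assumed w(x) = 0.5 x (illustrative)
        integral += 0.5 * x * dt
        t += dt
    return t          # time at which the next discrete reaction fires

t_fire = next_discrete_reaction_time(x0=1.0, t0=0.0)
print(t_fire)
```

Because the propensity depends on the evolving continuous state, the firing time cannot be drawn in closed form; the running integral plays the role of the exponential clock.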

  15. Using the SSA to Find Distributions
• The SSA does an excellent job of producing possible trajectories. [Figure: many sampled trajectories over (0, T); the states at time T are collected into bins to form a histogram.]

  16. Convergence of the SSA
• To get more accurate distributions, one needs more SSA runs.
• Unfortunately, the convergence rate of any Monte Carlo algorithm is fundamentally limited: error = O(n^{-1/2}).
• If very high precision is required, then MC methods will be very inefficient.
• [Figure: convergence for a coin toss, |\hat{P}(\mathrm{Heads}) - 0.5| \approx 1/\sqrt{n}. After 10^7 tosses there is still an error of about 3 \times 10^{-4}.]
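The O(n^{-1/2}) limit can be reproduced with the slide's coin-toss experiment:

```python
import numpy as np

rng = np.random.default_rng(4)

for n in (10**3, 10**5, 10**7):
    flips = rng.random(n) < 0.5        # n fair coin tosses
    err = abs(flips.mean() - 0.5)
    # The error shrinks roughly like 1/sqrt(n):
    # on the order of 0.01, 0.001, and 0.0001 respectively.
    print(n, err)
```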

  17. Density Computations using the Finite State Projection

  18. The Chemical Master Equation
The probability that the system is in configuration x at t+dt is equal to the probability that the system is at x at t and no reaction occurs between t and t+dt, plus the probability that the system is one reaction removed from x at t and that reaction occurs between t and t+dt. The CME (McQuarrie '67):

\dot{p}(x,t) = -p(x,t) \sum_{k=1}^{M} w_k(x) + \sum_{k=1}^{M} p(x - s_k, t)\, w_k(x - s_k)

Define the probability density state vector (pdv): P(X,t) := [p(x_1,t), p(x_2,t), p(x_3,t), \ldots]^T. The pdv evolves according to the linear time-invariant ODE (the matrix CME): \dot{P}(X,t) = A \cdot P(X,t).

  19. The Chemical Master Equation
• The solution of the CME is given by a transfer operator: P(t_0 + \tau) = \exp(A\tau) P(t_0).
• The dimension of the CME can be INFINITE.
• Most CMEs cannot be solved exactly, so approximations are needed.

  20. Forming the Generator
A has one row/column for each state. Each transition x_i \to x_j contributes to A in two locations:
• -w_\mu(x_i) goes in the diagonal element A_{i,i};
• +w_\mu(x_i) goes in the off-diagonal element A_{j,i}.
For the four-state example (w_1: 1 \to 2, w_2: 2 \to 4, w_3: 4 \to 3, w_4: 3 \to 1, w_5: 3 \to 2):

A = [ -w_1     0        w_4        0
       w_1    -w_2      w_5        0
        0      0    -w_4 - w_5    w_3
        0     w_2       0        -w_3 ]
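The two-location rule can be written directly in code. This sketch assumes one consistent reading of the slide's four-state example (w_1: 1→2, w_2: 2→4, w_3: 4→3, w_4: 3→1, w_5: 3→2) with placeholder rate values:

```python
import numpy as np

def build_generator(n_states, transitions):
    """transitions: list of (i, j, rate) for i -> j (0-indexed states).
    Each transition puts -rate in A[i, i] and +rate in A[j, i]."""
    A = np.zeros((n_states, n_states))
    for i, j, rate in transitions:
        A[i, i] -= rate
        A[j, i] += rate
    return A

# Four-state example; the rate values are placeholders.
w1, w2, w3, w4, w5 = 1.0, 2.0, 3.0, 4.0, 5.0
A = build_generator(4, [(0, 1, w1),   # w1: state 1 -> 2
                        (1, 3, w2),   # w2: state 2 -> 4
                        (3, 2, w3),   # w3: state 4 -> 3
                        (2, 0, w4),   # w4: state 3 -> 1
                        (2, 1, w5)])  # w5: state 3 -> 2
# Every column of a generator sums to zero (probability is conserved).
assert np.allclose(A.sum(axis=0), 0.0)
```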

  21. The Finite State Projection
Select the states to keep (here, states 1 and 3) and find the corresponding projection matrix:

A[1,3] = [ -w_1      w_4
             0    -w_4 - w_5 ]

Collapse the remaining states into a single absorbing state. This is the generator for a new Markov chain:

A_FSP[1,3] = [ -w_1      w_4       0
                 0    -w_4 - w_5   0
                w_1      w_5       0 ]
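The projection step can be sketched as: keep a subset J of states, take the submatrix A_J, and append a single absorbing row that collects the flux leaving J. The four-state generator below reuses the previous example with placeholder rates:

```python
import numpy as np

def fsp_project(A, keep):
    """Keep the states in `keep`; collapse everything else into one
    absorbing state that receives the flux leaving the kept set."""
    A_J = A[np.ix_(keep, keep)]
    # Flux out of each kept state that does NOT go to another kept state
    # equals the (negative) column-sum deficit of the submatrix:
    leak = -A_J.sum(axis=0)
    n = len(keep)
    A_fsp = np.zeros((n + 1, n + 1))
    A_fsp[:n, :n] = A_J
    A_fsp[n, :n] = leak                # transitions into the absorbing sink
    return A_fsp

# Four-state generator from the previous slide, with placeholder rates
w1, w2, w3, w4, w5 = 1.0, 2.0, 3.0, 4.0, 5.0
A = np.array([[-w1, 0.0,       w4, 0.0],
              [ w1, -w2,       w5, 0.0],
              [0.0, 0.0, -w4 - w5,  w3],
              [0.0,  w2,      0.0, -w3]])
A_fsp = fsp_project(A, keep=[0, 2])    # 0-indexed: keep states 1 and 3
assert np.allclose(A_fsp.sum(axis=0), 0.0)   # still a valid generator
```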

  22. The Finite State Projection Method
The Full Master Equation (dimension = #(J) + #(J') = infinite):

[ \dot{P}_J ; \dot{P}_{J'} ] = [ A_J, A_{JJ'} ; A_{J'J}, A_{J'} ] [ P_J ; P_{J'} ]

The FSP Master Equation (dimension = #(J) + 1):

[ \dot{P}^{FSP}_J ; \dot{\varepsilon} ] = [ A_J, 0 ; -\mathbf{1}^T A_J, 0 ] [ P^{FSP}_J ; \varepsilon ]

The FSP Theorem (Munsky/Khammash JCP '06): P_J(t) \ge P^{FSP}_J(t), and

\| [ P_J(t) ; P_{J'}(t) ] - [ P^{FSP}_J(t) ; 0 ] \|_1 = \varepsilon(t)

  23. The Finite State Projection Algorithm
Inputs: initial conditions, system parameters, final time (t_f), allowable error (ε_max).
Step 1: Choose an initial projection space, X_{J_0}.
Step 2: Use projection X_{J_i} to find the corresponding error, ε_i(t_f).
Step 3: If ε_i(t_f) ≤ ε_max, stop: P^{FSP}_{J_i}(t_f) approximates P(t_f) to within ε_max.
Step 4: Expand the projection, X_{J_{i+1}} ⊃ X_{J_i}, increment i, and return to Step 2.
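The algorithm can be sketched on a toy four-state chain, assuming a simple expansion rule (add one state per iteration, in a fixed order) and explicit Euler integration in place of a matrix exponential; the chain and rates are placeholders, not from the slides:

```python
import numpy as np

def propagate(A, p0, tf, dt=1e-3):
    """Integrate dP/dt = A P from 0 to tf with explicit Euler steps."""
    p = p0.copy()
    for _ in range(round(tf / dt)):
        p = p + dt * (A @ p)
    return p

def fsp_solve(A, p0_full, tf, eps_max, order):
    """FSP loop: grow the kept set J (in the given order) until the
    probability lost from X_J by time tf is below eps_max.
    The sink is implicit: eps = 1 - 1^T P_J^FSP(tf)."""
    for i in range(1, len(order) + 1):
        keep = order[:i]
        A_J = A[np.ix_(keep, keep)]
        p = propagate(A_J, p0_full[keep], tf)
        eps = 1.0 - p.sum()            # mass that has leaked out of X_J
        if eps <= eps_max:
            return keep, p, eps        # Step 3: accuracy reached, stop
    return order, p, eps               # Step 4 exhausted all states

w1, w2, w3, w4, w5 = 1.0, 2.0, 3.0, 4.0, 5.0
A = np.array([[-w1, 0.0,       w4, 0.0],
              [ w1, -w2,       w5, 0.0],
              [0.0, 0.0, -w4 - w5,  w3],
              [0.0,  w2,      0.0, -w3]])
p0 = np.array([1.0, 0.0, 0.0, 0.0])
keep, p, eps = fsp_solve(A, p0, tf=1.0, eps_max=1e-3, order=[0, 1, 2, 3])
```

For this strongly connected toy chain the loop only terminates once all four states are kept; in realistic problems the kept set stays far smaller than the full lattice.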

  24. The “error” sink of the FSP to get exit times.
In the original FSP, ε(t) is the amount of the probability measure that exits the projection region X_J. Median exit time: t_50 = t such that ε(t) = 0.5. In this form, ε(t) gives information as to when the system exits X_J, but not how.

  25. Multiple FSP sinks to get exit directions.
By using multiple sinks, one can determine how the probability measure exits X_J:
• one sink per exiting reaction answers: which reaction leaves X_J?
• one sink per boundary state answers: from which state does it leave?

  26. The Finite State Projection Algorithm
Inputs: initial conditions, system parameters, final time (t_f), allowable error (ε_max).
Step 1: Choose an initial projection space, X_{J_0}.
Step 2: Use projection X_{J_i} to find the corresponding error, ε_i(t_f).
Step 3: If ε_i(t_f) ≤ ε_max, stop: P^{FSP}_{J_i}(t_f) approximates P(t_f) to within ε_max.
Step 4: Expand the projection, X_{J_{i+1}} ⊃ X_{J_i}, increment i, and return to Step 2.

  27. Advantages of the FSP.
• Deterministic.
  ★ Every run of the FSP yields the same result.
  ★ Enables easier comparisons of different systems (sensitivity analysis).
• Provides accuracy guarantees.
  ★ Can be made as precise as required.
  ★ Allows for analysis of rare events.
• Does not depend upon initial conditions.
• Is open to many subsequent model reductions.

  28. Limitations
• Numerical stiffness may lead to computational inefficiency.
• Systems may become very large as distributions cover large regions of the configuration space:
  ★ Compact distributions may drift over time.
  ★ Dilute distributions may spread over large regions.
  ★ Dimension grows exponentially with the number of species.
• For these problems the original FSP may not suffice, but with additional model reductions and systematic techniques, many of these problems may be alleviated.

  29. Outline
Finite State Projection (FSP)
Reductions to the FSP
 ★ Aggregating unobservable states (Munsky/Khammash, CDC, 2006)
 ★ Time interval discretization
 ★ Slow manifold projection
 ★ Coarse meshes for the CME

  30. Using Input & Output Relations for Model Reduction
• Often one is not interested in the entire probability distribution.
• Instead one may wish only to estimate:
  ★ a statistical summary of the distribution, e.g.
    ✦ means, variances, or higher moments
  ★ the probability of certain traits:
    ✦ switch rate, extinction, specific trajectories, etc.
• In each of these cases, one can define an output y(t):
  \dot{P}(t) = A P(t),  y(t) = C P(t)

  31. Begin with a Full Integer Lattice Description of the System States.

  32. Remove Unreachable States and Aggregate the Observable States.

  33. Project the Reachable/Observable States onto a Finite Subspace.
We now have a solvable approximation, for which the FSP gives bounds on the approximation's accuracy.

  34. Outline
Introduction
Monte Carlo Solution Schemes
Finite State Projection (FSP)
Reductions to the FSP
 ★ Minimal realizations
 ★ Time interval discretization (Munsky and Khammash, J. Comp. Phys., 2007; Burrage et al., A. A. Markov 150th Anniv. Meeting, 2006)
 ★ Slow manifold projection
 ★ Coarse meshes for the CME

  35. Time Interval Discretization for the FSP
★ For many systems, the distribution may drift over time. [Figure: distribution drifting across the lattice at times 0, τ, 2τ, ..., 5τ.]
★ At any one time, the distribution may have a limited support, but...
★ The FSP solution must include all intermediate configurations.
★ This may lead to an exorbitantly large system of ODEs.

  36. Time Interval Discretization for the FSP
★ Instead:
 ✴ Discretize the time interval into smaller steps ([0, τ], [τ, 2τ], [2τ, 3τ], [3τ, 4τ], [4τ, 5τ]) and solve a separate, smaller projection for each interval.

  41. Time Interval Discretization for the FSP
★ Solving a few smaller systems can be much easier than solving a single large system.
★ Control the error at each step to obtain a guaranteed final error.
★ Caching and reusing information from one step to the next may further reduce effort.

  42. Outline
Introduction
Monte Carlo Solution Schemes
Finite State Projection (FSP)
Reductions to the FSP
 ★ Minimal realizations
 ★ Time interval discretization
 ★ Slow manifold projection (Peles/Munsky/Khammash, JCP, 2006)
 ★ Coarse meshes for the CME

  43. Perturbation Theory and the FSP
• Some reactions occur faster and more frequently than others.
• This can result in a separation of time scales in the CME.
• Disadvantage: this often results in numerical stiffness and increased computational complexity.
• Advantage: one may be able to apply perturbation theory to reduce computational effort.

  44. Intuition (Time Scale Separation)
1. Begin with a finite state (projected) Markov process.
2. Group states connected by frequent reactions.
3. Find the invariant distribution for each group.
4. Average to find the rates of the slow reactions.
5. Solve for the solution on the slow manifold.
6. Lift the solution to the original coordinate system.
[Figure: states x_1, ..., x_12 plus sink X_{J'}. Red arrows = fast (frequent) reactions; black arrows = slow (rare) reactions; orange arrows = (rare) transitions to the sink. In the reduced Markov process, dotted black = averaged slow reactions and dashed orange = averaged transitions to the sink.]

  48. Outline
Monte Carlo Solution Schemes
Finite State Projection (FSP)
Reductions to the FSP
 ★ Minimal realizations
 ★ Time interval discretization
 ★ Slow manifold projection
 ★ Coarse meshes for the CME (Munsky/Khammash, IEEE Trans., 2008)

  49. Coarse mesh approximation of the CME
• Precision requirements may change for different regions of the configuration space:
  ★ Small populations require great precision.
  ★ High populations require far less precision.
• By choosing a good coarse approximation of the CME, we can take advantage of this.

  50. Coarse mesh approximation of the CME
Start with the full 1-dimensional Markov lattice (states 1, 2, ..., 12) and the original CME: \dot{P} = A \cdot P(t).
Choose a subset of mesh points (e.g. {1, 2, 3, 5, 8, 12}) and specify an approximate relation for the probability of the removed points: P \approx \Phi q(t).
Solve the reduced system ODE, \dot{q} = \Phi^{-L} A \Phi q(t), and lift back to the original system coordinates:
P(t) \approx \Phi \exp(\Phi^{-L} A \Phi t)\, \Phi^{-L} P(0)
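The reduction can be sketched on a 1-D birth/death lattice, assuming linear interpolation between kept mesh points for Φ and the Moore-Penrose pseudo-inverse for the left inverse Φ^{-L}; the lattice size, rates, and mesh below are made up for illustration:

```python
import numpy as np

n = 13                                   # states 0..12 on a 1-D lattice
birth, death = 2.0, 0.5

# Full generator of a birth/death chain (columns sum to zero).
A = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        A[i, i] -= birth; A[i + 1, i] += birth
    if i > 0:
        A[i, i] -= death * i; A[i - 1, i] += death * i

# Coarse mesh: keep a subset of points, interpolate the rest linearly.
mesh = np.array([0, 1, 2, 3, 4, 6, 9, 12])
Phi = np.zeros((n, len(mesh)))
for i in range(n):
    k = int(np.searchsorted(mesh, i))
    if mesh[k] == i:                     # kept point: exact
        Phi[i, k] = 1.0
    else:                                # removed point: P_i ~ linear interp
        left, right = mesh[k - 1], mesh[k]
        lam = (i - left) / (right - left)
        Phi[i, k - 1], Phi[i, k] = 1.0 - lam, lam

PhiL = np.linalg.pinv(Phi)               # left (pseudo-)inverse, Phi^{-L}
A_red = PhiL @ A @ Phi                   # reduced generator

# Integrate both systems with Euler steps and compare at t = 1.
p = np.zeros(n); p[0] = 1.0
q = PhiL @ p
dt = 1e-3
for _ in range(1000):
    p = p + dt * (A @ p)                 # full CME
    q = q + dt * (A_red @ q)             # reduced ODE
p_lift = Phi @ q                         # lift back: P(t) ~ Phi q(t)
print(np.abs(p_lift - p).max())
```

The mesh is dense near the origin, where this chain concentrates its probability, mirroring the slide's point that small populations need the most precision.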

  51. Coarse Mesh: Multiple-species problems.
For problems with many species, the method is the same:
1. Begin with the original lattice.
2. Choose interpolation points.
3. Form the interpolation (shape) function: P(t) \approx \Phi q(t).
4. Project the system to find the reduced system of ODEs: \dot{q}(t) = \Phi^{-L} A \Phi q(t).
5. Solve the reduced system.
6. Lift back to the original coordinates.

  52. Outline
Monte Carlo Solution Schemes
Finite State Projection (FSP)
Reductions to the FSP
Examples: Heat Shock, Toggle Switch

  53. The Heat Shock Mechanism
• To survive and compete in a changing environment, biology must quickly adapt to fluctuations in:
  ★ temperature, pH level, nutrient availability, etc.
• At high temperature, proteins misfold.
• Heat-shock proteins are created to help fix or remove these misfolded proteins.

  54. Toy Heat Shock Model in E. coli
Three forms for σ32 (free σ32, σ32-RNAP, and σ32-DnaK):
 Fast: S_1 ⇌ S_2 (rates k_1, k_2)
 Slow: S_2 → S_3 (rate k_3)
El Samad et al., PNAS, vol. 102, no. 8, 2005.

  55. Toy Heat Shock Model in E. coli (cont.)
Five different FSP solution schemes:
1. Full FSP (4459 ODEs)
2. Slow manifold (FSP-SM) (343 ODEs)
3. Interpolated (FSP-I) (539 ODEs)
4. Hybrid (FSP-SM/I) (49 ODEs)
5. Multiple time interval (FSP-MTI) (70 sets of 195 or fewer ODEs)
[Figures: joint distributions over the populations of free σ32 and σ32-RNAP for each scheme.]

  60. Efficiency and accuracy of the reduced FSP methods
[Figure: distributions computed with the Full FSP, FSP-MTS, FSP-SM, FSP-I, FSP-SM/I, and 10^3 SSA-SM runs.]

  61. Efficiency and accuracy of the reduced FSP methods
For final time t_f = 300 s:

Method      | Matrix Size  | ∞-norm Error    | J_solve | J_total
FSP         | 4459         | < 3.0 × 10^-5   | 750 s   | 750 s
FSP-MTS     | 195 or fewer | < 1.68 × 10^-4  | -       | 40.2 s
FSP-SM      | 343          | ≈ 5.1 × 10^-4   | 0.25 s  | 0.94 s
FSP-I       | 539          | ≈ 7.7 × 10^-4   | 5.1 s   | 6.1 s
FSP-SM/I    | 49           | ≈ 8.2 × 10^-4   | 0.04 s  | 0.78 s
10^3 SSA-SM | -            | ≈ 0.0116        | -       | 84.1 s
10^4 SSA-SM | -            | ≈ 3.4 × 10^-3   | -       | 925 s
10^5 SSA-SM | -            | ≈ 1.6 × 10^-3   | -       | 9360 s

(10^4 plain SSA results would take more than 55 hours.)
The reduced FSP approaches are much faster and more accurate than the alternative approaches!

  62. Outline
Introduction
Monte Carlo Solution Schemes
Finite State Projection (FSP)
Reductions to the FSP
Example: Genetic Toggle Switch
 1. SSA and FSP analysis
 2. Switch and trajectory analysis
 3. Sensitivity and model identification

  63. Genetic Toggle Model
Gardner, et al., Nature 403, 339-342 (2000)
Two repressors, u and v: v inhibits the production of u, u inhibits the production of v, and both u and v degrade exponentially.

a_1(u, v) = α_1 / (1 + v^β),  ν_1 = [1, 0]^T
a_2(u, v) = u,                ν_2 = [-1, 0]^T
a_3(u, v) = α_2 / (1 + u^γ),  ν_3 = [0, 1]^T
a_4(u, v) = v,                ν_4 = [0, -1]^T

Parameters: α_1 = 50, α_2 = 16, β = 2.5, γ = 1.
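The model can be simulated directly with the SSA; a Python sketch using the propensities and parameters above (with the unit degradation rates implied by a_2 = u and a_4 = v):

```python
import numpy as np

rng = np.random.default_rng(5)

alpha1, alpha2, beta, gamma = 50.0, 16.0, 2.5, 1.0

def propensities(u, v):
    return np.array([alpha1 / (1.0 + v**beta),   # a1: production of u
                     1.0 * u,                    # a2: degradation of u
                     alpha2 / (1.0 + u**gamma),  # a3: production of v
                     1.0 * v])                   # a4: degradation of v

nu = np.array([[1, -1, 0, 0],                    # stoichiometry for u
               [0, 0, 1, -1]])                   # stoichiometry for v

u, v, t = 60, 0, 0.0                             # initial condition [60; 0]
while t < 10.0:
    a = propensities(u, v)
    a0 = a.sum()
    t += np.log(1.0 / rng.random()) / a0         # time of next reaction
    k = int(np.searchsorted(np.cumsum(a), rng.random() * a0))
    u, v = u + nu[0, k], v + nu[1, k]            # fire reaction k
print(u, v)
```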

  64. A Sample Trajectory
We begin with an initial condition [u; v] = [60; 0] and consider a sample trajectory. [Figure: u(t) and v(t) versus time.]

  65. Choosing the Finite State Projection
[Figure: candidate projection region in the (Species 1, Species 2) plane; Species 1 ranges over 0-80, Species 2 over 0-40.]

  66. Choosing the Finite State Projection
Most states are very unlikely; these are aggregated. The master equation for this reduced Markov process can be solved very efficiently. [Figure: retained region of the (Species 1, Species 2) plane.]

  67. The Toggle Switch Distribution
[Figures: probability density over (Species 1, Species 2) computed with the FSP, and the corresponding SSA histogram from 1000 runs.]

Method | # Simulations | Time (s) | ||Error||_1
FSP^a  | -             | 5        | guaranteed ≤ 12 × 10^-3
SSA    | 10^3          | 108      | ≈ 0.33 (no guarantees)
^a The FSP algorithm is run only once.

  69. Switch rates of the gene toggle model.

  70. Switch Analysis
Define the switch to be OFF when v(t) > 5 and u(t) < 16, and ON when v(t) < 5 and u(t) > 16.
We begin with an initial condition [u(0); v(0)] = [0; 0] and ask a few questions:
1. What portion of trajectories will turn OFF first (before they turn ON)?
2. How long until 99% of trajectories have made this first decision?
3. How long until 99% of trajectories have turned ON?
4. How long until 50% of trajectories have turned OFF first and then ON?

  71. (1) Direction of First Switch
We define some configuration subsets:
• OFF - absorbing region corresponding to trajectories that have entered the OFF region.
• ON - absorbing region corresponding to trajectories that have entered the ON region.
• X - every other reachable state.
Aggregate OFF and ON, keep the reactions originating in X but remove the rest, and solve for OFF(t) and ON(t).
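The construction above can be sketched on a toy chain with one transient state that can jump into either absorbing region; the chain and its rates are made up for illustration, not the actual toggle model:

```python
import numpy as np

# States: [X, OFF, ON]; made-up rates X -> OFF and X -> ON.
k_off, k_on = 1.0, 3.0
A = np.array([[-(k_off + k_on), 0.0, 0.0],
              [k_off,           0.0, 0.0],   # OFF is absorbing
              [k_on,            0.0, 0.0]])  # ON is absorbing

# Integrate dP/dt = A P; the absorbing entries accumulate OFF(t), ON(t).
p = np.array([1.0, 0.0, 0.0])
dt = 1e-3
while p[0] > 1e-6:                 # until (almost) every trajectory decided
    p = p + dt * (A @ p)
off, on = p[1], p[2]
print(off, on)                     # splits as k_off : k_on, i.e. about 1 : 3
```

With competing exponential exits, the first-switch probabilities split in proportion to the rates, here roughly 0.25 OFF-first versus 0.75 ON-first.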

  72. (1) Direction of First Switch
Probability of turning ON first: 0.78. Probability of turning OFF first: 0.22. [Figure: cumulative probabilities of ON and OFF versus time.]

  73. (2) 50% and 99% Times of First Switch
Probability of turning ON first: 0.78; of turning OFF first: 0.22.
t_50 = 0.5305 s, t_99 = 5.0595 s. [Figure: cumulative probability versus time with t_50 and t_99 marked.]

  74. (3) 99% Time of first OFF switch
We define some configuration subsets:
• OFF - absorbing region corresponding to trajectories that have entered the OFF region.
• X' - unlikely states.
• X - everything else.
Aggregate OFF and X', keep the reactions originating in X but remove the rest, and solve for OFF(t).

  75. (3) 99% Time of first OFF switch
[Figure: probability of turning OFF versus time, with the FSP upper and lower bounds and SSA estimates from 100, 1000, 10,000, and 100,000 runs.]
• The FSP provides guaranteed bounds on the probability of switching.
• Monte Carlo simulations (SSA) require many, many runs to achieve comparable precision, and they provide no accuracy guarantees.
• The FSP approach also provides estimates for every other initial probability distribution supported on X_J; Monte Carlo methods only consider a single initial distribution.

  76. FSP vs. Monte Carlo Algorithms
Table 1: A comparison of the efficiency and accuracy of the FSP and SSA solutions to find the time at which 99 percent of cells will have reached the OFF state.

Method | # Simulations | Comp. Time (s)^a | t_99 | Relative Error
FSP    | N.A.          | 1.9              | 850  | < 0.12%
SSA    | 10^3          | 33               | 789  | ≈ 7.3%
SSA    | 10^4          | 330              | 806  | ≈ 5.2%
SSA    | 10^5          | 3300             | 838  | ≈ 1.5%
SSA    | 10^6          | 3.3 × 10^4       | 845  | ≈ 0.6%
^a All computations have been performed in Matlab 7.2 on a 2.0 GHz PowerPC G5.

  77. (4) Median Time of first OFF then ON trajectory
Stages:
1. Remain in the set of all not-OFF states until the switch to OFF.
2. Remain in the set of all not-ON states until the switch to ON.
[Figure: regions X, X', OFF, and ON in the (u, v) plane with a sample OFF-then-ON trajectory.]
