Towards parallelizing the Gillespie SSA
Srivastav Ranganathan and Aparna JS, Indian Institute of Technology Bombay, Mumbai, India


  1. Towards parallelizing the Gillespie SSA
     Srivastav Ranganathan and Aparna JS, Indian Institute of Technology Bombay, Mumbai, India

  2. Gillespie Algorithm
     • A stochastic simulation approach to study the time evolution of a system of reactions (processes)
     • Each reaction occurs at an average rate
     • The abundances of the various species and their rates determine the propensity of each event to occur (a small example follows below)
     • Many independent trajectories are generated to compute ensemble-averaged statistical quantities
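To make the propensity idea concrete, here is a minimal sketch in C that is not taken from the slides: for a bimolecular reaction A + B -> C with stochastic rate constant k, the propensity is the rate constant times the number of available reactant pairs. The function name and the parameters k, nA and nB are hypothetical.

/* Propensity of a hypothetical bimolecular reaction A + B -> C:
 * rate constant times the number of available reactant pairs.
 * k, nA and nB are illustrative names, not from the presentation. */
double propensity_bimolecular(double k, long nA, long nB)
{
    return k * (double)nA * (double)nB;
}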

  3. Where is it used?
     • In biological systems
     • Outcomes of cellular processes are driven by stochasticity at the molecular level
     • Deterministic approaches cannot capture the inherent randomness of biological systems

  4. The algorithm

  5. • Initialize the system (individual rates)
     • Compute the probabilities of going from one state to another (p_i)
     • Generate the firing time for the next reaction to be fired
     • Select the next reaction to be fired (the most expensive of all these steps; a serial sketch follows below)
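The following is a minimal, self-contained sketch of these steps in C for a toy birth-and-death system (0 -> X at rate k1, X -> 0 at rate k2*n). The toy reactions, rate values and variable names are assumptions made purely for illustration; the presentation does not show the authors' code.

/* Toy Gillespie SSA: birth-death process, 0 -> X at rate k1, X -> 0 at
 * rate k2*n.  All names and numbers are illustrative assumptions. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double k1 = 10.0, k2 = 0.1;      /* assumed rate constants      */
    long   n = 0;                          /* copy number of species X    */
    double t = 0.0, t_end = 100.0;

    srand(42);
    while (t < t_end) {
        /* 1. propensities for the current state */
        double a[2] = { k1, k2 * (double)n };
        double a0   = a[0] + a[1];

        /* 2. firing time: exponentially distributed with mean 1/a0 */
        double r1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* in (0,1) */
        t += -log(r1) / a0;

        /* 3. select the reaction to fire: search the cumulative
         *    propensities (the expensive step for large systems)   */
        double r2 = (rand() / ((double)RAND_MAX + 1.0)) * a0;   /* in [0,a0) */
        int mu = (r2 < a[0]) ? 0 : 1;

        /* 4. update the system configuration based on the fired reaction */
        n += (mu == 0) ? +1 : -1;
    }
    printf("final copy number of X: %ld at t = %.2f\n", n, t);
    return 0;
}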

  6. Selecting the event to be fired
     • Draw a uniform random number ran1
     • [Figure: the interval from 0 to p1 + p2 + ... + p8 split at the cumulative sums p1, p1 + p2, ...; the reaction whose sub-interval contains ran1 times the total propensity is fired, e.g. reaction 1 or reaction 4 (the search is sketched below)]
     • Update the system configuration based on the fired reaction (abundances, rates, etc.)
     • Update the time by drawing from the exponential distribution of wait times between events (tau = -ln(ran2) / sum_i p_i)
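For a general list of reactions, the selection step is a linear scan over the cumulative propensities. A hedged sketch in C follows; the function and argument names are assumptions, since the slides do not show the actual implementation.

/* Return the index of the reaction to fire.  target = ran1 * a0, where
 * a0 is the total propensity and a[] holds the individual propensities. */
int select_reaction(const double *a, int n_reactions, double target)
{
    double cumulative = 0.0;
    for (int i = 0; i < n_reactions; ++i) {
        cumulative += a[i];
        if (target < cumulative)
            return i;                /* reaction i fires */
    }
    return n_reactions - 1;          /* guard against floating-point round-off */
}

For a handful of reactions this scan is trivial, but with a large reaction space it dominates the cost of every SSA step, which is exactly the part the two MPI schemes below try to distribute.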

  7. This search for the next event to be fired is really expensive if there is a large reaction space!

  8. Our attempt (Scheme 1, one-to-one communication)
     Master:
     • Owns the transition probability matrix
     • Keeps the system configuration updated
     • Sends a block of the search space to each worker (W1, W2, W3) with MPI_SEND
     Worker nodes:
     • Receive blocks of the search space
     • Identify the event to be fired
     • Pass the event info into a buffer if a hit is found
     The hits are collected on the master with MPI_REDUCE (MPI_MAX); a sketch of this pattern follows below.
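A minimal sketch of what one Scheme 1 step could look like in C with MPI, based only on the structure named on the slide (MPI_Send of blocks, local search, MPI_Reduce with MPI_MAX over a sentinel of -1). The problem size, message tags, dummy propensities and the choice of search target are all assumptions; this is not the authors' code.

/* Scheme 1 sketch: master distributes blocks with MPI_Send, workers
 * search their block, the hit (or -1) is reduced with MPI_MAX.
 * Sizes, tags and dummy propensities are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024                          /* number of reactions (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) MPI_Abort(MPI_COMM_WORLD, 1);   /* need master + workers */

    int workers = size - 1;             /* rank 0 is the master           */
    int block   = N / workers;          /* assume workers divides N       */
    int hit = -1, fired = -1;

    if (rank == 0) {
        /* master: owns the propensities and picks the search target */
        double a[N], a0 = 0.0;
        for (int i = 0; i < N; ++i) { a[i] = 1.0; a0 += a[i]; }
        double target = 0.37 * a0;      /* stand-in for ran1 * a0         */

        /* send each worker the target, its cumulative offset and its block */
        double offset = 0.0;
        for (int w = 1; w <= workers; ++w) {
            int start = (w - 1) * block;
            MPI_Send(&target,   1,     MPI_DOUBLE, w, 0, MPI_COMM_WORLD);
            MPI_Send(&offset,   1,     MPI_DOUBLE, w, 1, MPI_COMM_WORLD);
            MPI_Send(&a[start], block, MPI_DOUBLE, w, 2, MPI_COMM_WORLD);
            for (int i = start; i < start + block; ++i) offset += a[i];
        }
    } else {
        /* worker: search its block of the cumulative propensity sum */
        double target, prev;
        double *a = malloc(block * sizeof(double));
        MPI_Recv(&target, 1,     MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&prev,   1,     MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(a,       block, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int i = 0; i < block; ++i) {
            double cum = prev + a[i];
            if (prev <= target && target < cum) {
                hit = (rank - 1) * block + i;   /* global reaction index  */
                break;
            }
            prev = cum;
        }
        free(a);
    }

    /* every rank contributes -1 or its hit; MPI_MAX keeps the one hit */
    MPI_Reduce(&hit, &fired, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("reaction to fire: %d\n", fired);

    MPI_Finalize();
    return 0;
}

Every SSA step repeats this exchange, so the three sends per worker per step add latency, which is consistent with the communication-overhead observation on slide 11.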

  9. Our attempt (Scheme 2, collective communication)
     Master:
     • Owns the transition probability matrix
     • Keeps the system configuration updated
     • Distributes blocks of the search space to the workers (W1, W2, W3) with MPI_SCATTER
     • Also performs part of the search
     Worker nodes:
     • Receive blocks of the search space
     • Identify the event to be fired
     • Pass the event info into a buffer if a hit is found
     The hits are collected on the master with MPI_REDUCE (MPI_MAX); a sketch follows below.
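A corresponding sketch of Scheme 2, in which the collective MPI_Scatter replaces the per-worker sends and the master searches a block as well. The slide names only MPI_SCATTER and MPI_REDUCE (MPI_MAX); the MPI_Bcast of the target and the MPI_Exscan used here to give each rank its cumulative offset are assumptions added to make the sketch complete.

/* Scheme 2 sketch: MPI_Scatter distributes the propensity blocks to all
 * ranks (the master included), each rank searches locally, and the hit
 * is reduced with MPI_MAX.  Bcast/Exscan are assumptions, not from the
 * slide; sizes and dummy propensities are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024                          /* number of reactions (assumed) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int block = N / size;               /* every rank searches a block   */
    double *a = NULL, target = 0.0;
    double *mine = malloc(block * sizeof(double));

    if (rank == 0) {
        /* master still owns the full propensity array and the target */
        a = malloc(N * sizeof(double));
        double a0 = 0.0;
        for (int i = 0; i < N; ++i) { a[i] = 1.0; a0 += a[i]; }
        target = 0.37 * a0;             /* stand-in for ran1 * a0        */
    }

    /* one collective call replaces the per-worker MPI_Send of Scheme 1 */
    MPI_Scatter(a, block, MPI_DOUBLE, mine, block, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(&target, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* each rank needs the cumulative sum of all blocks before its own */
    double local_sum = 0.0, offset = 0.0;
    for (int i = 0; i < block; ++i) local_sum += mine[i];
    MPI_Exscan(&local_sum, &offset, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) offset = 0.0;        /* Exscan leaves rank 0 undefined */

    /* local linear search, exactly as in Scheme 1 */
    int hit = -1, fired = -1;
    double prev = offset;
    for (int i = 0; i < block; ++i) {
        double cum = prev + mine[i];
        if (prev <= target && target < cum) { hit = rank * block + i; break; }
        prev = cum;
    }

    MPI_Reduce(&hit, &fired, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("reaction to fire: %d\n", fired);

    free(mine);
    free(a);
    MPI_Finalize();
    return 0;
}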

  10. What worked
     • Our naïve serial code was optimized to minimize cache misses (a speedup of 1.5 times)
     • The MPI code did give us correct results (compared with the serial code and analytical results!)
     • The project exposed us to a new way of thinking

  11. What didn't?
     • The MPI codes show a speedup from 1 to 3 cores but scale poorly beyond that
     • Performance degrades at 5 or more processes
     • Probably due to the large communication overhead in our code
     • We may need to revisit the whole algorithm or use a more parallel-friendly one!

  12. Thank You!
