Perfect sampling using dynamic programming


  1. Perfect sampling using dynamic programming. Séminaire Équipe CALIN. Christelle Rovetta, Équipe AMIB(io), Laboratoire LIX, Inria Saclay, LRI, RNALand. April

  2. What is Athena doing?

  3. State space

  4. State space (figure: transition probabilities 0.1, 0.2, 0.7)

  5. State space (figure: transition probabilities 0.2, 0.8)

  6. Markov chain (figure: arithmetic mean π̂ = (0, 0, 0, 1, 0))

  7. Markov chain (figure: arithmetic mean π̂ = (0, 0, 0.5, 0.5, 0))

  8. Markov chain (figure: arithmetic mean π̂ = (0, 0, 0.67, 0.33, 0))

  9. Markov chain (figure: arithmetic mean π̂ = (0.10, 0.20, 0.60, 0.05, 0.05))

  10. Markov chain (figure: the arithmetic mean π̂ = (0.10, 0.20, 0.60, 0.05, 0.05) next to the stationary distribution π = (0.12, 0.16, 0.63, 0.03, 0.04))
       - Arithmetic mean π̂
       - Stationary distribution π (solution of πP = π)

  11. Sample the stationary distribution
       - State space: |S| < ∞
       - Ergodic Markov chain (X_n)_{n ∈ Z} on S
       - Stationary distribution π (figure: π = (0.12, 0.16, 0.63, 0.03, 0.04))
       - Sample a random object according to π
       - S is very large
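
To make π concrete, here is a minimal sketch that solves πP = π together with Σ_k π_k = 1 for a small chain. The 5-state transition matrix P is an illustrative assumption (the slides only show the chain as a picture), not the chain used in the talk.

    import numpy as np

    # Hypothetical 5-state transition matrix (rows sum to 1); illustrative only.
    P = np.array([
        [0.2, 0.8, 0.0, 0.0, 0.0],
        [0.3, 0.2, 0.5, 0.0, 0.0],
        [0.0, 0.3, 0.2, 0.5, 0.0],
        [0.0, 0.0, 0.3, 0.2, 0.5],
        [0.5, 0.0, 0.0, 0.3, 0.2],
    ])

    # Stationary distribution: solve pi (P - I) = 0, replacing one redundant
    # equation by the normalization sum(pi) = 1.
    A = P.T - np.eye(5)
    A[-1, :] = 1.0
    b = np.zeros(5)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)           # stationary distribution
    print(pi @ P - pi)  # ~0 up to rounding: pi is invariant under P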

  12. Markov Chain Monte Carlo (MCMC)
       Markov chain convergence theorem: for all initial distributions, X_n ∼ π when n → ∞.
       Simulate the Markov chain:
       - (U_n)_{n ∈ Z} an i.i.d. sequence of random variables
       - X_0 = x ∈ S and X_{n+1} = update(X_n, U_{n+1})
       (figure: a trajectory X_0, X_1, X_2, X_3 driven by U_1, U_2, U_3, states versus time)

  13. Markov Chain Monte Carlo (MCMC): same setting as slide 12.
       - How to detect the stopping criterion?
       (figure: the trajectory continued up to X_n, driven by U_1, U_2, U_3, ..., U_n)
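
A minimal simulation of the scheme on slides 12 and 13, reusing the hypothetical matrix P from the sketch above; update draws the next state from row X_n of P by an inverse-CDF lookup of the uniform draw. The open question of slide 13 shows up as the hard-coded number of steps.

    import numpy as np

    rng = np.random.default_rng(0)

    def update(x, u, P):
        """Next state from state x given a uniform draw u (inverse CDF of row x of P)."""
        j = int(np.searchsorted(np.cumsum(P[x]), u))
        return min(j, P.shape[0] - 1)   # guard against rounding at the upper end

    # Forward MCMC: X_0 = x, X_{n+1} = update(X_n, U_{n+1}), with (U_n) i.i.d. uniform.
    x, counts = 0, np.zeros(5)
    for _ in range(100_000):            # no principled stopping criterion here
        x = update(x, rng.random(), P)  # P: the hypothetical matrix defined earlier
        counts[x] += 1
    print(counts / counts.sum())        # arithmetic mean, close to pi for large n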

  14. Perfect sampling algorithm
       - Perfect sampling algorithm [Propp – Wilson, 1996]
       - Produces y ∼ π
       - Stopping criterion automatically detected
       - Uses coupling from the past

  15. Perfect sampling algorithm (same bullets as slide 14; figure: coupling from the past over one step, U_0 applied from time −1 to 0)

  16. Perfect sampling algorithm (same bullets as slide 14; figure: two steps, U_{−1} and U_0 applied from time −2 to 0)

  17. Perfect sampling algorithm (same bullets as slide 14), plus:
       - Starts from all states, complexity at least in O(|S|)
       (figure: trajectories started from every state at time −n, driven by U_{−n+1}, ..., U_{−1}, U_0)

  18. Perfect sampling algorithm (same bullets as slide 17), plus:
       - Find strategies (monotone chains, envelopes, ...)
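
The coupling-from-the-past loop of slides 14 to 18 can be sketched in a few lines for the same small chain: run from every state, from time −n to 0, reuse the random draws whenever the horizon is doubled, and stop once all trajectories have coalesced. This reuses P and update from the sketches above and is the brute-force, O(|S|)-per-step version that the strategies of slide 18 are meant to avoid.

    import numpy as np

    def cftp(P, update, rng):
        """Propp-Wilson perfect sampling: the returned state is exactly pi-distributed."""
        us = []                  # us[k] plays the role of U_{-(k+1)}; draws are never redrawn
        n = 1
        while True:
            while len(us) < n:   # extend the window further into the past
                us.append(rng.random())
            xs = set(range(P.shape[0]))          # start from all states at time -n
            for k in range(n - 1, -1, -1):       # apply t_{U_{-n}} first, t_{U_{-1}} last
                xs = {update(x, us[k], P) for x in xs}
            if len(xs) == 1:                     # coalescence is detected automatically
                return xs.pop()
            n *= 2

    rng = np.random.default_rng(1)
    print(cftp(P, update, rng))   # one exact sample from pi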

  19. Queueing networks
       - Introduced by Erlang in 1917 to describe the Copenhagen telephone exchange
       - Queues are everywhere in computing systems
       - Analyze various kinds of system performance (average waiting time, expected number of customers waiting, ...)
       - Usually modeled by an ergodic Markov chain
       - Computer simulation

  20. Closed queueing network
       Customers are not allowed to leave the network.
       (figure: a network of K = 5 queues with its routing probabilities)
       - K = 5 queues, M = 4 customers
       - State: x = (2, 0, 1, 1, 0)
       - Sample π

  21. Product form
       - Gordon-Newell networks
       (figure: the same 5-queue network with its routing probabilities)
       Gordon-Newell theorem:
           π_x = (1 / G(K, M)) ∏_{k ∈ Q} ρ_k^{x_k},   with   G(K, M) = Σ_{x ∈ S} ∏_{k ∈ Q} ρ_k^{x_k}
       - G(K, M): normalization constant (partition function)
       - Compute G(K, M) in O(KM) by dynamic programming [Buzen '73]
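
Slide 21's remark that G(K, M) can be computed in O(KM) refers to Buzen's convolution algorithm. A minimal sketch for Gordon-Newell networks (single exponential servers, no capacity limits); the loads ρ_k below are made-up values, since the routing and service rates of the example are only drawn in the figure.

    def buzen_G(rho, M):
        """Buzen's convolution algorithm (1973): normalization constant G(K, M)
        of a Gordon-Newell network, in O(K * M) time and O(M) space."""
        g = [1.0] + [0.0] * M            # g[m] = G(0, m): one empty state, nothing else
        for r in rho:                    # fold the queues in one at a time
            for m in range(1, M + 1):
                g[m] += r * g[m - 1]     # G(k, m) = G(k-1, m) + rho_k * G(k, m-1)
        return g[M]

    # Hypothetical loads rho_k (visit ratio / service rate) for 5 queues, 4 customers.
    rho = [1.0, 0.7, 0.9, 0.5, 0.4]
    print(buzen_G(rho, 4))               # partition function G(5, 4)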

  22. Outline: Introduction; Perfect Sampling for Closed Queueing Networks; Generic diagrams; Application 1: A Boltzmann Sampler; Application 2: RNA folding kinetics; Conclusion

  23. Closed queueing network (monoclass)
       - K queues ·/M/1 (exponential service rate)
       - Finite capacity C_k in each queue
       - Blocking policy: repetitive service, random destination
       - Strongly connected network
       Example (figure: 5-queue network with its routing probabilities):
       - K = 5 queues, M = 3 customers, capacity C = (2, 1, 3, 1, 2)
       - x = (1, 0, 1, 1, 0)

  24. State space
       - K queues, M customers, capacity C = (C_1, ..., C_K)
       - State space: S = { x ∈ N^K | Σ_{k=1}^K x_k = M, ∀k 0 ≤ x_k ≤ C_k }
       - Number of states (M ≫ K): |S| ≤ (M+K−1 choose K−1) = (M+K−1 choose M), which is in O(M^{K−1})
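
For the running example the state space is still small enough to enumerate by brute force, which makes the bound above easy to check numerically. This sketch is exponential in K and only meant for tiny instances.

    from itertools import product

    def state_space(C, M):
        """All states x with sum(x) = M and 0 <= x_k <= C_k (brute force)."""
        return [x for x in product(*(range(c + 1) for c in C)) if sum(x) == M]

    C, M = (2, 1, 3, 1, 2), 3            # running example: K = 5 queues, M = 3 customers
    S = state_space(C, M)
    print(len(S), S[:3])                 # |S| and a few sample states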

  25. Transition
       - Transition on a state:
           t_{i,j}(x) = x − e_i + e_j   if x_i > 0 and x_j < C_j,
           t_{i,j}(x) = x               otherwise (x_i = 0 or x_j = C_j),
         where e_i ∈ {0, 1}^K with e_i(k) = 1 if i = k and 0 otherwise.
       - Transition on a set of states: t(S) := ∪_{x ∈ S} t(x)
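
The transition t_{i,j} and its extension to sets translate directly into code; the check at the end reproduces the worked example of slides 39 to 41 (queues are 0-indexed here, so t_{4,2} becomes t(3, 1, ...)).

    def t(i, j, x, C):
        """t_{i,j}(x): move one customer from queue i to queue j if x_i > 0 and
        x_j < C_j; otherwise (blocking) return x unchanged."""
        if x[i] > 0 and x[j] < C[j]:
            y = list(x)
            y[i] -= 1
            y[j] += 1
            return tuple(y)
        return x

    def t_set(i, j, states, C):
        """Transition on a set of states: t(S), the union of t_{i,j}(x) over x in S."""
        return {t(i, j, x, C) for x in states}

    C = (2, 1, 3, 1, 2)
    S = {(0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (0, 1, 0, 0, 2), (1, 0, 1, 1, 0)}
    print(t_set(3, 1, S, C))   # t_{4,2}(S): only (1,0,1,1,0) moves, to (1,1,1,0,0)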

  26. Markov chain modeling
       - (U_n)_{n ∈ Z} := (i_n, j_n)_{n ∈ Z}, an i.i.d. sequence of random variables
       - System described by an ergodic Markov chain: X_0 ∈ S, X_{n+1} = t_{U_{n+1}}(X_n)
       - Unique stationary distribution π that is unknown
       - GOAL: sample π with the perfect sampling algorithm

  27. Perfect sampling algorithm
       Perfect Sampling with States (PSS):
       1. n ← 1
       2. t ← t_{U_{−1}}
       3. While |t(S)| ≠ 1
       4.   n ← 2n
       5.   t ← t_{U_{−1}} ∘ ... ∘ t_{U_{−n}}
       6. Return t(S)
       (figure: the doubling scheme on the time axis: −n, −n/2, ..., −8, −4, −2, −1, 0)
       - PROBLEM: |S| is in O(M^{K−1})
       - Find a strategy!
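
Putting the pieces together, a brute-force PSS for the running example, reusing state_space and t from the sketches above. The routing pairs (i_n, j_n) are drawn uniformly here, an assumption standing in for the network's routing probabilities; the sweep over all of S at every step is exactly the O(|S|) problem the slide points out.

    import random

    def pss(C, M, seed=0):
        """Perfect Sampling with States (slide 27): coupling from the past over all of S."""
        rng = random.Random(seed)
        K = len(C)
        S_all = state_space(C, M)            # from the earlier sketch; O(M^(K-1)) states
        us, n = [], 1                        # us[k] plays the role of U_{-(k+1)}
        while True:
            while len(us) < n:               # double the horizon, keep the old draws
                us.append((rng.randrange(K), rng.randrange(K)))
            xs = set(S_all)
            for k in range(n - 1, -1, -1):   # apply t_{U_{-n}} first, ..., t_{U_{-1}} last
                i, j = us[k]
                xs = {t(i, j, x, C) for x in xs}
            if len(xs) == 1:
                return xs.pop()              # exact sample from pi
            n *= 2

    print(pss((2, 1, 3, 1, 2), 3))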

  28. A new strategy
       Known strategies are difficult to adapt:
       - Fixed number of customers (Σ_{k=1}^K x_k = M)
       - No lattice structure
       A more structured representation of the state space:
       - Reduce complexity: from O(M^{K−1}) to O(KM²)
       - Represent states as paths in a graph
       - Realize transitions directly on the graph

  29. Diagram
       - 5 queues, 3 customers, capacity C = (2, 1, 3, 1, 2)
       - State: x = (0, 0, 2, 0, 1), with Σ_{k=1}^5 x_k = 3 and ∀k 0 ≤ x_k ≤ C_k
       - Diagram (figure: the path encoding x is drawn arc by arc, starting from nodes (0,0) and (1,0))

  30. Diagram (as on slide 29; figure: the path extended through node (2,0))

  31. Diagram (as on slide 29; figure: the full path for x, through nodes (0,0), (1,0), (2,0), (3,2), (4,2), (5,3))

  32. Diagram (as on slide 29), with a second state y = (1, 0, 1, 1, 0)
       (figure: the two paths for x and y, through nodes (0,0), (1,0), (1,1), (2,0), (2,1), (3,2), (4,2), (4,3), (5,3))

  33. Diagram
       - A diagram is a graph that encodes a set of states
       - A diagram is complete if it encodes all the states
       - Number of arcs in a diagram: O(KM²)
       Example: K = 5 queues (5 columns of arcs), M = 3 customers (4 rows of nodes), C = (2, 1, 3, 1, 2) (arc slopes between 0 and 3)
       (figure: the complete diagram on nodes (k, m), k = 0..5, m = 0..3)
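
A sketch that builds the complete diagram for given capacities: a node (k, m) means "m customers placed in queues 1..k", an arc from (k−1, m) to (k, m+s) means "x_k = s", and only arcs lying on some path from (0, 0) to (K, M) are kept. The arc count grows like O(KM²) even though the number of encoded states grows like O(M^{K−1}).

    def complete_diagram(C, M):
        """Arcs ((k-1, m), (k, m+s)) of the complete diagram: every path from
        (0, 0) to (K, M) spells out exactly one state of S."""
        K = len(C)
        suffix = [sum(C[k:]) for k in range(K)] + [0]    # capacity left in queues k+1..K
        reach, arcs = {0}, []
        for k in range(1, K + 1):
            nxt = set()
            for m in reach:                              # (k-1, m) is reachable from (0, 0)
                for s in range(C[k - 1] + 1):
                    if m + s <= M and M - (m + s) <= suffix[k]:   # (k, m+s) can reach (K, M)
                        arcs.append(((k - 1, m), (k, m + s)))
                        nxt.add(m + s)
            reach = nxt
        return arcs

    arcs = complete_diagram((2, 1, 3, 1, 2), 3)
    print(len(arcs))    # number of arcs: O(K * M^2) in general, versus O(M^(K-1)) states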

  34. States to diagram: function φ
       Example: S = {(0, 0, 2, 0, 1), (1, 0, 1, 1, 0)}
       (figure: the diagram φ(S), the union of the two paths, on nodes (0,0), (1,0), (1,1), (2,0), (2,1), (3,2), (4,2), (4,3), (5,3))

  35. Diagram to states: function ψ
       Example: a diagram D (same nodes as on slide 34) with
       ψ(D) = {(0, 0, 2, 0, 1), (1, 0, 1, 1, 0), (0, 0, 2, 1, 0), (1, 0, 1, 0, 1)}
       (figure: the diagram D; each of its four paths is one state)
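
A small sketch of both maps: φ turns each state into a path and takes the union of the arcs, ψ walks every path of a diagram from (0, 0) to (K, M) and reads the states back off. Running it on the two states of slide 34 recovers the four states listed on this slide, illustrating that ψ(φ(S)) can be strictly larger than S.

    def phi(states):
        """phi: encode a set of states as a diagram, i.e. the union of their paths.
        An arc ((k-1, m), (k, m+s)) means 's' customers in queue k."""
        arcs = set()
        for x in states:
            m = 0
            for k, s in enumerate(x, start=1):
                arcs.add(((k - 1, m), (k, m + s)))
                m += s
        return arcs

    def psi(arcs, K, M):
        """psi: decode a diagram into the states it encodes (one state per path
        from (0, 0) to (K, M)); dead-end branches contribute nothing."""
        out = {}
        for a, b in arcs:
            out.setdefault(a, []).append(b)
        states, stack = [], [((0, 0), ())]
        while stack:
            (k, m), prefix = stack.pop()
            if k == K:
                if m == M:
                    states.append(prefix)
                continue
            for (k2, m2) in out.get((k, m), []):
                stack.append(((k2, m2), prefix + (m2 - m,)))
        return states

    S = {(0, 0, 2, 0, 1), (1, 0, 1, 1, 0)}   # the two states of slide 34
    print(sorted(psi(phi(S), K=5, M=3)))     # the four states of slide 35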

  36. Transformation functions
       (figure: φ maps a set of states S to a diagram D, and ψ maps D back to a set of states ψ(D))

  37. Transformation functions
       - Galois connection
       (figure: as on slide 36, φ maps S to D and ψ maps D to ψ(D))

  38. Transition on a diagram
       - Transition on a diagram: T_{i,j} = φ ∘ t_{i,j} ∘ ψ
       - Good properties for perfect sampling:
         - Preserves inclusion: S ⊆ ψ(D) ⟹ t_{i,j}(S) ⊆ ψ(T_{i,j}(D))
         - Preserves coupling: |ψ(D)| = 1 ⟹ |ψ(T_{i,j}(D))| = 1
       - Efficient algorithm to compute the transitions T_{i,j} in O(KM²)
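
Taken literally, the definition above is just a composition of the earlier sketches: decode the diagram with ψ, move every state with t_{i,j}, re-encode with φ. This reference version is exponential (it goes through the explicit set of states) and is only meant to pin down the semantics; the point of the next slides is an O(KM²) algorithm that produces the same diagram without ever leaving the diagram representation.

    def T(i, j, D, C, K, M):
        """Literal T_{i,j} = phi o t_{i,j} o psi (inefficient reference version),
        reusing t, phi and psi from the sketches above."""
        return phi({t(i, j, x, C) for x in psi(D, K, M)})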

  39. Transition on a set of states
       - Parameters: K = 5 queues, M = 3 customers, capacity C = (2, 1, 3, 1, 2)
       - S = {(0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (0, 1, 0, 0, 2), (1, 0, 1, 1, 0)}, a subset of the state space
       - Transition t_{4,2}(S)?

  40. Transition t_{4,2} on S

       x      | t_{4,2}(x) = x          | t_{4,2}(x) ≠ x         | t_{4,2}(x)
              | x_4 = 0    | x_2 = C_2  | x_4 > 0 and x_2 < C_2  |
       01100  |     •      |            |                        | 01100
       01110  |            |     •      |                        | 01110
       01002  |     •      |     •      |                        | 01002
       10110  |            |            |           •            | 11100

  41. Transition t_{4,2} on S (same table as slide 40), plus:
       - t_{4,2}(S) = {(0, 1, 1, 0, 0), (0, 1, 1, 1, 0), (0, 1, 0, 0, 2), (1, 1, 1, 0, 0)}

  42. Compute T_{4,2}(D) on D
       (figure: D is decomposed into sub-diagrams Stay, Full, Transit and Transit', which are shifted by +1 / −1 and recombined into T_{4,2}(D))

  43. Compute T_{4,2}(D) on D (same figure as slide 42), plus:
       - Complexity in O(KM²), compared to O(M^{K−1}) for the transition on a set of states

  44. Perfect sampling algorithm

       Perfect Sampling with States (PSS):
       1. n ← 1
       2. t ← t_{U_{−1}}
       3. While |t(S)| ≠ 1
       4.   n ← 2n
       5.   t ← t_{U_{−1}} ∘ ... ∘ t_{U_{−n}}
       6. Return t(S)

       Perfect Sampling with Diagrams (PSD):
       1. n ← 1
       2. T ← T_{U_{−1}}
       3. While |ψ(T(D))| ≠ 1
       4.   n ← 2n
       5.   T ← T_{U_{−1}} ∘ ... ∘ T_{U_{−n}}
       6. Return ψ(T(D))
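
For completeness, the PSD column above run with the naive T sketched earlier, so this is only a semantic illustration on the toy example; the real algorithm applies T in O(KM²) directly on the diagram, as on slides 42 and 43, and the routing distribution of the pairs (i_n, j_n) is again an illustrative assumption.

    import random

    def psd(C, M, seed=0):
        """Perfect Sampling with Diagrams (slide 44), built from complete_diagram,
        T and psi above; returns an exact sample from pi."""
        rng = random.Random(seed)
        K = len(C)
        D0 = complete_diagram(C, M)          # encodes the whole state space S
        us, n = [], 1                        # us[k] plays the role of U_{-(k+1)}
        while True:
            while len(us) < n:
                us.append((rng.randrange(K), rng.randrange(K)))
            D = set(D0)
            for k in range(n - 1, -1, -1):
                i, j = us[k]
                D = T(i, j, D, C, K, M)
            states = psi(D, K, M)
            if len(states) == 1:             # |psi(T(D))| = 1: coupling detected
                return states[0]
            n *= 2

    print(psd((2, 1, 3, 1, 2), 3))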
