Efficiency-Improvement Techniques


  1. Efficiency-Improvement Techniques
     Reading: Ch. 11 in Law & Ch. 10 in Handbook of Simulation
     Peter J. Haas
     CS 590M: Simulation, Spring Semester 2020

  2. Overview
     - Common Random Numbers
     - Antithetic Variates
     - Conditional Monte Carlo
     - Control Variates
     - Importance Sampling: likelihood ratios, rare-event estimation

  3. Many Different Techniques
     - Common random numbers
     - Antithetic variates
     - Conditional Monte Carlo
     - Control variates
     - Importance sampling
     - Stratified sampling
     - Latin hypercube sampling (HW #1)
     - Quasi-random numbers
     - ...

  4. Variance Reduction and Efficiency Improvement
     Typical goal is variance reduction
     - I.e., reduce the variance of the estimator α_n of α
     - Narrower CIs ⇒ less computational effort for a given precision
     - So these methods are often called "variance-reduction" methods
     Care is needed when evaluating techniques
     - The reduction in effort must outweigh the increased cost of executing the V-R method
     - Increase in programming complexity?
     - In many cases the additional effort is obviously small
     - What about more complicated cases?

  5. Comparing Efficiency-Improvement Schemes
     Trading off statistical and computational efficiency
     - Suppose α = E[X] = E[Y]
     - Should we generate i.i.d. replicates of X or of Y to estimate α?
     - Assume a large but fixed computing budget c
     - Let τ_X(i) be the (random) time to generate X_i
     - Assume that (X_1, τ_X(1)), (X_2, τ_X(2)), ... are i.i.d.
     - The number of X-observations generated within budget c is
         $N_X(c) = \max\{\, n \ge 0 : \tau_X(1) + \cdots + \tau_X(n) \le c \,\}$
     - So the estimator based on the budget is
         $\alpha_X(c) = \frac{1}{N_X(c)} \sum_{i=1}^{N_X(c)} X_i$

  6. Comparing Efficiency-Improvement Schemes, Continued
     Hammersley-Handscomb efficiency measure
     - Can show: $\lim_{c \to \infty} N_X(c)/c = \lambda_X$ a.s., where $\lambda_X = 1/E[\tau_X]$
     - Hence
         $\alpha_X(c) - \alpha = \frac{1}{N_X(c)} \sum_{i=1}^{N_X(c)} X_i - \alpha \approx \frac{1}{\lfloor \lambda_X c \rfloor} \sum_{i=1}^{\lfloor \lambda_X c \rfloor} X_i - \alpha \overset{D}{\sim} \sqrt{\frac{\mathrm{Var}[X]}{\lambda_X c}}\, N(0,1) = \frac{1}{\sqrt{c}} \sqrt{E[\tau_X] \cdot \mathrm{Var}[X]}\, N(0,1)$
     - Similarly, $\alpha_Y(c) - \alpha \overset{D}{\sim} \frac{1}{\sqrt{c}} \sqrt{E[\tau_Y] \cdot \mathrm{Var}[Y]}\, N(0,1)$
     - Efficiency measure: E[τ_Y] · Var[Y] (holds fairly generally); prefer Y over X when E[τ_Y] · Var[Y] < E[τ_X] · Var[X]
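
To make this measure concrete, here is a minimal sketch (not from the slides): it times each sampler, multiplies the mean cost per draw by the sample variance, and prefers the smaller product. The estimand E[U²] = 1/3 and both samplers are made-up illustrations.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def efficiency(sampler, n=200_000):
    """Estimate the efficiency measure E[tau] * Var[X]:
    time n draws from `sampler`, then multiply mean cost by sample variance."""
    t0 = time.perf_counter()
    xs = np.fromiter((sampler() for _ in range(n)), dtype=float, count=n)
    avg_tau = (time.perf_counter() - t0) / n   # mean cost per observation
    return avg_tau * xs.var(ddof=1)            # smaller is better

def crude():                        # plain Monte Carlo draw of U^2
    return rng.uniform() ** 2

def antithetic_pair():              # costlier per draw, lower variance
    u = rng.uniform()
    return 0.5 * (u**2 + (1.0 - u)**2)

print("crude:     ", efficiency(crude))
print("antithetic:", efficiency(antithetic_pair))
```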

  7. Overview: Common Random Numbers · Antithetic Variates · Conditional Monte Carlo · Control Variates · Importance Sampling (likelihood ratios, rare-event estimation)

  8. Common Random Numbers (CRN)
     Applies when comparing alternative systems
     - Intuition: sharper comparisons if the systems experience the same random fluctuations
     More precisely:
     - Goal: compare two performance measures distributed as X and Y
     - Estimate α = E[X] − E[Y] = E[X − Y]
     - Generate i.i.d. pairs (X_1, Y_1), ..., (X_n, Y_n)
     - Point estimate: $\alpha_n = (1/n) \sum_{i=1}^{n} (X_i - Y_i)$
     - $\mathrm{Var}[\alpha_n] = \frac{1}{n}\,\mathrm{Var}[X - Y] = \frac{1}{n}\big(\mathrm{Var}[X] + \mathrm{Var}[Y] - 2\,\mathrm{Cov}[X, Y]\big)$
     - So we want Cov[X, Y] > 0
     - Note that Cov[X, Y] = 0 if X and Y are simulated independently

  9. CRN, Continued
     Simple case: one random number per sample of X and of Y
     - Use the same random number: X_i = X_i(U_i) and Y_i = Y_i(U_i)
     - If X(u) and Y(u) are both increasing (or both decreasing) in u, then Cov[X, Y] > 0
     - True for the inversion method: $X_i = F_X^{-1}(U_i)$ and $Y_i = F_Y^{-1}(U_i)$
     In practice
     - Synchronize random numbers between the systems as much as possible
     - Use multiple random-number streams, one per event
     - The jump-ahead facility of the random-number generator is crucial
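
A toy illustration of the inversion argument, not from the slides: two exponential cdfs are inverted with the same uniforms, so both outputs are increasing in U and Cov[X, Y] > 0. The rates 1.0 and 1.2 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
U = rng.uniform(size=n)

# Inversion from the SAME uniforms: X ~ Exp(1.0), Y ~ Exp(1.2)
X = -np.log(1.0 - U) / 1.0
Y = -np.log(1.0 - U) / 1.2
Y_indep = -np.log(1.0 - rng.uniform(size=n)) / 1.2   # independent baseline

print("Var[X - Y], CRN:        ", np.var(X - Y, ddof=1))
print("Var[X - Y], independent:", np.var(X - Y_indep, ddof=1))
```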

  10. CRN, Continued
      Example: long-run waiting times in two GI/G/1 queues
      - Suppose that
        - interarrival times are i.i.d. according to cdf G for both systems
        - service times are i.i.d. according to cdf H_i for queue i (i = 1, 2)
      - Use one sequence (U_j : j ≥ 0) to generate a single stream of interarrival times for use in both systems
      - Use one sequence (V_j : j ≥ 0) to generate service times in both systems: $S_{1,j} = H_1^{-1}(V_j)$ and $S_{2,j} = H_2^{-1}(V_j)$ for j ≥ 1
      - Note: we need the two streams {U_j} and {V_j}; the systems get out of sync with only one stream
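
A sketch of this two-stream setup via the Lindley waiting-time recursion, assuming purely for illustration that G is Exp(1) and that H_i is exponential with rate mu_i; the rates, seeds, and run length are made up.

```python
import numpy as np

arr_rng = np.random.default_rng(10)   # stream {U_j} for interarrival times
svc_rng = np.random.default_rng(20)   # stream {V_j} for service times

n = 50_000
U = arr_rng.uniform(size=n)
V = svc_rng.uniform(size=n)
A = -np.log(1.0 - U)                  # interarrivals, shared by both queues

def avg_wait(mu):
    """Average waiting time over n customers (Lindley recursion),
    with service times S_j = H^{-1}(V_j) = -log(1 - V_j)/mu."""
    S = -np.log(1.0 - V) / mu
    W, total = 0.0, 0.0
    for j in range(1, n):
        W = max(0.0, W + S[j - 1] - A[j])
        total += W
    return total / n

# Both queues see the same U's and V's, which sharpens the comparison
print(avg_wait(mu=1.5) - avg_wait(mu=2.0))
```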

  11. Overview: Common Random Numbers · Antithetic Variates · Conditional Monte Carlo · Control Variates · Importance Sampling (likelihood ratios, rare-event estimation)

  12. Antithetic Variates
      Applies when analyzing a single system
      - Intuition: combat "luck of the draw" by pairing each realized scenario with its opposite
      More precisely:
      - Goal: estimate E[X]
      - Generate X_1, ..., X_{2n} and set $\alpha_n = \bar{X}_{2n}$
      - Suppose the pairs (X_1, X_2), (X_3, X_4), ..., (X_{2n−1}, X_{2n}) are i.i.d. (correlation possible within pairs)
      - $\mathrm{Var}[\alpha_n] = \frac{1}{4n^2}\big(\mathrm{Var}[X_1] + \cdots + \mathrm{Var}[X_{2n}] + 2\,\mathrm{Cov}[X_1, X_2] + \cdots + 2\,\mathrm{Cov}[X_{2n-1}, X_{2n}]\big)$
      - So we want Cov[X_{2j−1}, X_{2j}] < 0 for j ≥ 1

  13. Antithetic Variates, Continued
      Simple case: one random number per sample of X and of Y
      - Use the same random number: X_i = X_i(U_i) and Y_i = Y_i(1 − U_i)
      - If X(u) and Y(u) are both increasing (or both decreasing) in u, then Cov[X, Y] < 0
      - E.g., inversion method: $X_i = F_X^{-1}(U_i)$ and $Y_i = F_Y^{-1}(1 - U_i)$
      Ex: average waiting time of the first 100 customers in a GI/G/1 queue (sketched below)
      - Interarrival times (service times) are i.i.d. according to cdf G (H)
      - Replication 2k − 1: $(I_j, S_j) = (G^{-1}(U_j),\, H^{-1}(V_j))$
      - Replication 2k: $(I_j, S_j) = (G^{-1}(1 - U_j),\, H^{-1}(1 - V_j))$
      Ex: alternative method for the GI/G/1 queue (explain?)
      - Replication 2k − 1: $(I_j, S_j) = (G^{-1}(U_j),\, H^{-1}(V_j))$
      - Replication 2k: $(I_j, S_j) = (G^{-1}(V_j),\, H^{-1}(U_j))$
      Warning: CRN + AV together can backfire! [Law, p. 609]
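
A sketch of the first pairing scheme above, assuming for illustration that G is Exp(1) and H is Exp(1.25). Averaging within each antithetic pair yields i.i.d. pair means, to which the usual CI machinery applies.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cust, n_pairs = 100, 2_000

def avg_wait(U, V):
    """Average delay of the first n_cust customers (Lindley recursion);
    I_j = G^{-1}(U_j) with G = Exp(1), S_j = H^{-1}(V_j) with H = Exp(1.25)."""
    A = -np.log(1.0 - U)
    S = -np.log(1.0 - V) / 1.25
    W, total = 0.0, 0.0
    for j in range(1, n_cust):
        W = max(0.0, W + S[j - 1] - A[j])
        total += W
    return total / n_cust

pair_means = []
for _ in range(n_pairs):
    U = rng.uniform(size=n_cust)
    V = rng.uniform(size=n_cust)
    x_odd = avg_wait(U, V)                # replication 2k - 1
    x_even = avg_wait(1.0 - U, 1.0 - V)   # antithetic replication 2k
    pair_means.append(0.5 * (x_odd + x_even))

print(np.mean(pair_means), np.var(pair_means, ddof=1))
```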

  14. Overview: Common Random Numbers · Antithetic Variates · Conditional Monte Carlo · Control Variates · Importance Sampling (likelihood ratios, rare-event estimation)

  15. Conditional Monte Carlo
      Example: Markovian GSMP (X(t) : t ≥ 0)
      - All events are simple, with exponential clock-setting distributions
      - Simulation algorithm (up to the n-th state-transition time T_n):
        - Generate states $S_0, \ldots, S_{n-1}$ from the DTMC with transition matrix R
        - Generate the holding time in each S_k: $H_k \sim \exp(\lambda(S_k))$
      - Goal: estimate α = E[Z] with
          $Z = \int_0^{T_n} f(X(u))\, du = \sum_{k=0}^{n-1} f(S_k)\, H_k$
      - Variance-reduction trick:
        - Generate states $S_0, \ldots, S_{n-1}$ as above
        - Set the holding time in S_k equal to the mean holding time, $1/\lambda(S_k)$
      - Q: Why does this work?

  16. Conditional Monte Carlo, Continued
      Law of total expectation
      - $E\big[ E[U \mid V] \big] = E[U]$
      Variance decomposition
      - $\mathrm{Var}[U] = \mathrm{Var}\big[E[U \mid V]\big] + E\big[\mathrm{Var}[U \mid V]\big] \ge \mathrm{Var}\big[E[U \mid V]\big]$
      Key idea
      - Simulate V and compute $\tilde{U} = E[U \mid V]$
      - Then $\tilde{U}$ has the same mean as U but smaller variance
      - So generate i.i.d. replicates of $\tilde{U}$ to estimate α = E[U]
      Markovian GSMP example revisited
      - $U = Z = \sum_{k=0}^{n-1} f(S_k)\, H_k$ and $V = (S_0, \ldots, S_{n-1})$
      - So estimate $E[\tilde{Z}]$ from i.i.d. replicates $\tilde{Z}_1, \ldots, \tilde{Z}_m$, where
          $\tilde{Z} = E[Z \mid S_0, \ldots, S_{n-1}] = \sum_{k=0}^{n-1} f(S_k)\, E[H_k \mid S_k] = \sum_{k=0}^{n-1} f(S_k)\, \frac{1}{\lambda(S_k)}$
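
A sketch of both estimators for an assumed three-state example; the transition matrix R, the rates λ(s), and the reward f(s) are made-up numbers. Conditioning amounts to replacing each simulated holding time by its mean 1/λ(S_k).

```python
import numpy as np

R = np.array([[0.0, 0.7, 0.3],        # assumed DTMC transition matrix
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
lam = np.array([1.0, 2.0, 0.5])       # assumed exit rates lambda(s)
f = np.array([1.0, 3.0, 2.0])         # assumed reward f(s)
n, m = 20, 5_000
rng = np.random.default_rng(3)

def one_rep(conditional):
    s, z = 0, 0.0
    for _ in range(n):
        # Z-tilde uses the mean holding time; Z draws it from Exp(lambda(s))
        h = 1.0 / lam[s] if conditional else rng.exponential(1.0 / lam[s])
        z += f[s] * h
        s = rng.choice(3, p=R[s])
    return z

raw = [one_rep(False) for _ in range(m)]
cmc = [one_rep(True) for _ in range(m)]
print("raw:", np.mean(raw), np.var(raw, ddof=1))
print("CMC:", np.mean(cmc), np.var(cmc, ddof=1))   # same mean, lower variance
```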

  17. Overview: Common Random Numbers · Antithetic Variates · Conditional Monte Carlo · Control Variates · Importance Sampling (likelihood ratios, rare-event estimation)

  18. Control Variates
      Intuition: exploit extra system knowledge
      - Goal: estimate α = E[X]
      - Suppose there exists a random variable Y such that
        - Y is strongly correlated with X
        - E[Y] can be computed analytically
      - Control variable: C = Y − E[Y] (so that E[C] = 0)
      - Controlled estimator: $X(\lambda) = X - \lambda C$
      - $E[X(\lambda)] = E[X] - \lambda E[C] = E[X] = \alpha$, so the estimator is unbiased for any λ
      - $v(\lambda) = \mathrm{Var}[X(\lambda)] = \mathrm{Var}[X] - 2\lambda\,\mathrm{Cov}[X, C] + \lambda^2\,\mathrm{Var}[C]$
      - $v(\lambda)$ is minimized at $\lambda^* = \mathrm{Cov}[X, C] / \mathrm{Var}[C]$
      - The minimized variance is $v(\lambda^*) = (1 - \rho^2)\,\mathrm{Var}[X]$, where
          $\rho = \frac{\mathrm{Cov}[X, C]}{\sqrt{\mathrm{Var}[X] \cdot \mathrm{Var}[C]}}$ = correlation coefficient of X and C

  19. Control Variates, Continued
      The method
      1. Simulate i.i.d. pairs (X_1, C_1), ..., (X_n, C_n)
      2. Estimate λ* by
           $\lambda^*_n = \Big(\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)\,C_i\Big) \Big/ \Big(\frac{1}{n}\sum_{i=1}^{n} C_i^2\Big)$
      3. Apply the usual estimation techniques to Z_1, ..., Z_n, where $Z_i = X_i - \lambda^*_n C_i$ for 1 ≤ i ≤ n
      Ex: E[avg. delay] for the first n customers in a GI/G/1 queue
      - X_i = average delay in the i-th replication
      - V_{i,k} = k-th service time in the i-th replication, with E[V_{i,k}] = 5
      - Take $C_i = (1/n)\sum_{k=1}^{n} V_{i,k} - 5$
      - Q: Why is this a good choice?
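
A sketch of steps 1-3 on a toy problem rather than the queueing example: estimate α = E[e^U] for U ~ Uniform(0,1) with the control C = U − 1/2, since E[U] = 1/2 is known analytically.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
U = rng.uniform(size=n)
X = np.exp(U)            # X_i, with alpha = e - 1 available for checking
C = U - 0.5              # control with E[C] = 0

# Step 2: estimate lambda* = Cov[X, C] / Var[C]
lam = ((X - X.mean()) * C).sum() / (n - 1) / (C**2).mean()
# Step 3: controlled observations Z_i = X_i - lambda* C_i
Z = X - lam * C

print("crude:     ", X.mean(), X.var(ddof=1))
print("controlled:", Z.mean(), Z.var(ddof=1))   # same mean, far smaller variance
```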

  20. Control Variates, Continued
      Internal and external controls
      - C_i in the queueing example is an internal control, generated internally to the simulation
      - Example of an external control:
        - Simplify the original simulation model M to a version M′ whose performance measure α′ can be computed analytically
        - Generate replications of M and M′ using common random numbers to obtain (X_1, X′_1), ..., (X_n, X′_n)
        - Take C_i = X′_i − α′
      Multiple controls
      - $X(\lambda_1, \ldots, \lambda_m) = X - \lambda_1 C^{(1)} - \cdots - \lambda_m C^{(m)}$
      - Can compute (λ*_1, ..., λ*_m) by solving a linear system of equations
      - Essentially, we fit a linear regression model and simulate the leftover uncertainty (i.e., the residuals); see the sketch below
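
Continuing the same toy problem, a sketch of multiple controls with two made-up controls C⁽¹⁾ = U − 1/2 and C⁽²⁾ = U² − 1/3; the optimal coefficients come from a least-squares regression of X on the controls.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
U = rng.uniform(size=n)
X = np.exp(U)
Cmat = np.column_stack([U - 0.5, U**2 - 1.0 / 3.0])   # two zero-mean controls

# Regression slopes solve the linear system for (lambda_1*, lambda_2*)
lam, *_ = np.linalg.lstsq(Cmat - Cmat.mean(axis=0),
                          X - X.mean(), rcond=None)
Z = X - Cmat @ lam

print(Z.mean(), Z.var(ddof=1))   # variance shrinks further vs. one control
```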
