  1. Variance reduction
     Michel Bierlaire, michel.bierlaire@epfl.ch
     Transport and Mobility Laboratory
     Variance reduction – p. 1/16

  2. Example
     • Use simulation to compute $I = \int_0^1 e^x \, dx$
     • We know the solution: $e - 1 = 1.7183$
     • Simulation: consider draws two by two
     • Let $r_1, \ldots, r_R$ be independent draws from $U(0,1)$
     • Let $s_1, \ldots, s_R$ be independent draws from $U(0,1)$
       $$I \approx \frac{1}{R} \sum_{i=1}^{R} \frac{e^{r_i} + e^{s_i}}{2}$$
     • Use $R = 10{,}000$ (that is, a total of 20,000 draws)
     • Mean over $R$ draws of $(e^{r_i} + e^{s_i})/2$: 1.720, variance: 0.123

  3. Example
     • Now, use half the number of draws
     • Idea: if $X \sim U(0,1)$, then $1 - X \sim U(0,1)$
     • Let $r_1, \ldots, r_R$ be independent draws from $U(0,1)$
       $$I \approx \frac{1}{R} \sum_{i=1}^{R} \frac{e^{r_i} + e^{1-r_i}}{2}$$
     • Use $R = 10{,}000$
     • Mean over $R$ draws of $(e^{r_i} + e^{1-r_i})/2$: 1.7183, variance: 0.00388
     • Compared to: mean of $(e^{r_i} + e^{s_i})/2$: 1.720, variance: 0.123
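The two estimators above can be compared with a short simulation. This is a sketch, not the original course code; the variable names and the NumPy seed are mine:

```python
import numpy as np

rng = np.random.default_rng(42)
R = 10_000

# Independent: 2R draws, averaged two by two.
r = rng.uniform(size=R)
s = rng.uniform(size=R)
indep = (np.exp(r) + np.exp(s)) / 2

# Antithetic: R draws, each paired with its reflection 1 - r_i.
r2 = rng.uniform(size=R)
anti = (np.exp(r2) + np.exp(1 - r2)) / 2

# Both means are close to e - 1 = 1.7183; the antithetic
# sample variance is roughly 30 times smaller.
print(indep.mean(), indep.var(ddof=1))
print(anti.mean(), anti.var(ddof=1))
```

Note that the antithetic scheme uses half as many exponential evaluations for a far smaller variance.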

  4. Example
     [Figure: histograms of the $R$ pair averages, independent vs. antithetic draws]

  5. Antithetic draws
     • Let $X_1$ and $X_2$ be identically distributed random variables with mean $\theta$
     • Then
       $$\mathrm{Var}\left(\frac{X_1 + X_2}{2}\right) = \frac{1}{4}\left(\mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)\right)$$
     • If $X_1$ and $X_2$ are independent, then $\mathrm{Cov}(X_1, X_2) = 0$
     • If $X_1$ and $X_2$ are negatively correlated, then $\mathrm{Cov}(X_1, X_2) < 0$, and the variance is reduced

  6. Back to the example
     • Independent draws
     • $X_1 = e^{U_1}$, $X_2 = e^{U_2}$, with $U_1, U_2$ i.i.d. $U(0,1)$
       $$\mathrm{Var}(X_1) = \mathrm{Var}(X_2) = \mathrm{E}[e^{2U}] - \mathrm{E}[e^{U}]^2 = \int_0^1 e^{2x}\,dx - (e-1)^2 = \frac{e^2 - 1}{2} - (e-1)^2 = 0.2420$$
       $$\mathrm{Cov}(X_1, X_2) = 0$$
       $$\mathrm{Var}\left(\frac{X_1 + X_2}{2}\right) = \frac{1}{4}(0.2420 + 0.2420) = 0.1210$$

  7. Back to the example
     • Antithetic draws
     • $X_1 = e^{U}$, $X_2 = e^{1-U}$
       $$\mathrm{Var}(X_1) = \mathrm{Var}(X_2) = 0.2420$$
       $$\mathrm{Cov}(X_1, X_2) = \mathrm{E}[e^{U} e^{1-U}] - \mathrm{E}[e^{U}]\,\mathrm{E}[e^{1-U}] = e - (e-1)(e-1) = -0.2342$$
       $$\mathrm{Var}\left(\frac{X_1 + X_2}{2}\right) = \frac{1}{4}(0.2420 + 0.2420 - 2 \cdot 0.2342) = 0.0039$$
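The closed-form numbers above can be checked directly from the formulas (a quick numeric verification, not part of the original slides):

```python
import math

e = math.e
# Var(e^U) = E[e^{2U}] - E[e^U]^2
var_x = (e**2 - 1) / 2 - (e - 1) ** 2
# Cov(e^U, e^{1-U}) = e - (e-1)^2
cov_anti = e - (e - 1) ** 2

var_indep = (var_x + var_x) / 4                  # independent pair
var_anti = (2 * var_x + 2 * cov_anti) / 4        # antithetic pair

print(round(var_x, 4))      # 0.242
print(round(cov_anti, 4))   # -0.2342
print(round(var_indep, 4))  # 0.121
print(round(var_anti, 4))   # 0.0039
```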

  8. Antithetic draws: generalization
     • Suppose that $X_1 = h(U_1, \ldots, U_m)$, where $U_1, \ldots, U_m$ are i.i.d. $U(0,1)$
     • Define $X_2 = h(1 - U_1, \ldots, 1 - U_m)$
     • $X_2$ has the same distribution as $X_1$
     • If $h$ is monotone in each of its coordinates, then $X_1$ and $X_2$ are negatively correlated
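A minimal sketch of this generalization, using the illustrative choice $h(u_1, u_2) = e^{u_1 + u_2}$ (monotone increasing in each coordinate, not from the slides), whose integral over the unit square is $(e-1)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 50_000

def h(u):
    # Monotone increasing in each coordinate.
    return np.exp(u.sum(axis=1))

u = rng.uniform(size=(R, 2))
x1 = h(u)
x2 = h(1 - u)            # antithetic counterpart
pair = (x1 + x2) / 2

exact = (np.e - 1) ** 2  # true value of the double integral
# The pair average is close to the exact value, and the
# sample correlation between x1 and x2 is negative.
print(pair.mean(), np.corrcoef(x1, x2)[0, 1])
```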

  9. Control variates
     • We use simulation to estimate $\theta = \mathrm{E}[X]$, where $X$ is an output of the simulation
     • Let $Y$ be another output of the simulation, such that we know $\mathrm{E}[Y] = \mu$
     • We consider the quantity $Z = X + c(Y - \mu)$
     • By construction, $\mathrm{E}[Z] = \mathrm{E}[X]$
     • Its variance is
       $$\mathrm{Var}(Z) = \mathrm{Var}(X + cY) = \mathrm{Var}(X) + c^2\,\mathrm{Var}(Y) + 2c\,\mathrm{Cov}(X, Y)$$
     • Find $c$ such that $\mathrm{Var}(Z)$ is minimal

  10. Control variates
     • First derivative with respect to $c$: $2c\,\mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$
     • Zero if
       $$c^* = -\frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(Y)}$$
     • Second derivative: $2\,\mathrm{Var}(Y) > 0$, so $c^*$ is a minimum
     • We use
       $$Z^* = X - \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(Y)}(Y - \mu)$$
     • Its variance is
       $$\mathrm{Var}(Z^*) = \mathrm{Var}(X) - \frac{\mathrm{Cov}(X, Y)^2}{\mathrm{Var}(Y)} \le \mathrm{Var}(X)$$

  11. Control variates
     In practice...
     • $\mathrm{Cov}(X, Y)$ and $\mathrm{Var}(Y)$ are usually not known
     • We can use their sample estimates:
       $$\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{R-1} \sum_{r=1}^{R} (X_r - \bar{X})(Y_r - \bar{Y})$$
       and
       $$\widehat{\mathrm{Var}}(Y) = \frac{1}{R-1} \sum_{r=1}^{R} (Y_r - \bar{Y})^2$$
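These sample estimates can be computed directly. A sketch with the running example $X = e^U$, $Y = U$ (seed and names are mine; `ddof=1` gives the $1/(R-1)$ normalization above):

```python
import numpy as np

rng = np.random.default_rng(1)
R = 10_000
y = rng.uniform(size=R)   # Y = U
x = np.exp(y)             # X = e^U

# Sample covariance and variance with the 1/(R-1) normalization.
cov_xy = np.cov(x, y, ddof=1)[0, 1]
var_y = y.var(ddof=1)

c_star = -cov_xy / var_y
print(c_star)   # close to the exact value -6(3 - e), about -1.69
```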

  12. Control variates
     In practice...
     • Alternatively, use the linear regression $X = aY + b + \varepsilon$, where $\varepsilon \sim N(0, \sigma^2)$
     • The least squares estimators of $a$ and $b$ are
       $$\hat{a} = \frac{\sum_{r=1}^{R} (X_r - \bar{X})(Y_r - \bar{Y})}{\sum_{r=1}^{R} (Y_r - \bar{Y})^2}, \qquad \hat{b} = \bar{X} - \hat{a}\bar{Y}$$
     • Therefore $c^* = -\hat{a}$

  13. Control variates
     • Moreover,
       $$\hat{b} + \hat{a}\mu = \bar{X} - \hat{a}\bar{Y} + \hat{a}\mu = \bar{X} - \hat{a}(\bar{Y} - \mu) = \bar{X} + c^*(\bar{Y} - \mu) = \hat{\theta}$$
     • Therefore, the control variate estimate $\hat{\theta}$ of $\theta$ is obtained from the estimated linear model, evaluated at $\mu$

  14. Back to the example
     • Use simulation to compute $I = \int_0^1 e^x \, dx$
     • $X = e^U$
     • $Y = U$, $\mathrm{E}[Y] = 1/2$, $\mathrm{Var}(Y) = 1/12$
     • $\mathrm{Cov}(X, Y) = (3 - e)/2 \approx 0.14$
     • Therefore, the best $c$ is
       $$c^* = -\frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(Y)} = -6(3 - e) \approx -1.69$$
     • Test with $R = 10{,}000$
     • Result of the regression: $\hat{a} = 1.6893$, $\hat{b} = 0.8734$
     • Estimate: $\hat{b} + \hat{a}/2 = 1.7180$, variance: 0.003847 (compared to 0.24)
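A sketch of the full experiment, estimating $\hat{a}$ from the draws and applying the control variate correction (seed and variable names are mine, so the exact regression numbers differ slightly from the slide):

```python
import numpy as np

rng = np.random.default_rng(7)
R = 10_000
mu = 0.5                 # known mean of Y = U
y = rng.uniform(size=R)
x = np.exp(y)            # X = e^U

# Regression slope and intercept: a_hat = Cov(X,Y)/Var(Y), b_hat = mean(X) - a_hat*mean(Y).
a_hat = np.cov(x, y, ddof=1)[0, 1] / y.var(ddof=1)
b_hat = x.mean() - a_hat * y.mean()

# Control variate estimator with c* = -a_hat.
z = x - a_hat * (y - mu)
print(z.mean(), z.var(ddof=1))   # mean close to 1.7183, variance far below Var(X)
print(b_hat + a_hat * mu)        # same estimate: the regression line evaluated at mu
```

The last line illustrates the identity of slide 13: the control variate estimate equals the fitted regression evaluated at $\mu$.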

  15. Back to the example
     [Figure: histograms of the estimates without and with the control variate]

  16. Variance reduction techniques
     • Conditioning
     • Stratified sampling
     • Importance sampling
     • Draw recycling
     In general: correlation helps!

