  1. Product Importance Sampling for Light Transport Guiding Herholz et al. 2016 Presenter: Eunhyouk Shin

  2. It is all about convergence

  3. Contents - Review of importance sampling - Light transport guiding techniques - Gaussian Mixture Model & EM - Process overview - Results & Discussion

  4. Source Materials - [EUROGRAPHICS 2016] Main paper for this presentation - [SIGGRAPH 2014] Baseline technique - Useful presentation slides from the authors

  5. Importance Sampling

  6. Rendering Equation - Direct analytic integration is virtually impossible - Recursive: the incoming radiance term in the integrand is itself given by the rendering equation at another surface point
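
For reference, the standard hemispherical form of the rendering equation the slide refers to; the recursion enters through the incoming radiance $L_i$:

$$
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i
$$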

  7. Monte Carlo Ray Tracing - Sample a random direction from the hemisphere and cast a ray recursively - Unbiased even if sampling is not uniform, as long as each sample is divided by its sampling PDF
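
The Monte Carlo estimator implied by the slide; dividing by the PDF $p$ is what keeps it unbiased for any valid $p$:

$$
\langle L_o \rangle = \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\mathbf{x}, \omega_k, \omega_o)\, L_i(\mathbf{x}, \omega_k)\, \cos\theta_k}{p(\omega_k)}, \qquad \omega_k \sim p
$$

This satisfies $\mathbb{E}[\langle L_o \rangle] = \int_\Omega f_r\, L_i \cos\theta_i \,\mathrm{d}\omega_i$ whenever $p > 0$ wherever the integrand is nonzero.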

  8. Importance Sampling - Ideally the sampling PDF is proportional to the integrand: p(ω) ∝ f_r(x, ω, ω_o) L_i(x, ω) cos θ - Lower variance when the PDF is close to the integrand's distribution - i.e. spend more paths where they contribute more radiance (light transport guiding) - How can we make a good estimate of the integrand's distribution? - BRDF (given) - Illumination (unknown)
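
A one-line justification for the proportionality goal: if $p(\omega) = f(\omega)/\int f$, every sample evaluates to the same constant, so the estimator has zero variance:

$$
\frac{f(\omega)}{p(\omega)} = \frac{f(\omega)}{f(\omega) / \int_\Omega f \,\mathrm{d}\omega'} = \int_\Omega f \,\mathrm{d}\omega'
$$

In practice $\int f$ is exactly the unknown we are estimating, so the ideal PDF can only be approximated.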

  9. Light Transport Guiding Techniques (slides from Vorba et al.)

  10–15. Previous work - Jensen [1995]: photon tracing from the lights followed by path tracing from the camera; incident radiance is reconstructed from nearby photons via k-NN density estimation and used to guide sampling (figure-only build slides)

  16–24. Previous work - Peter and Pietrek [1998]: importons are traced from the camera to estimate importance, which guides the photon tracing pass; the resulting photons then guide path tracing (PT) (figure-only build slides)

  25–29. Limitations of previous work - Poor approximation of the incident radiance in complex scenes - Improving the estimate requires storing many more photons: not enough memory! (figure-only build slides)

  30. Solution: On-line Learning of a Parametric Model - Shoot a batch of photons, then summarize it into a parametric model - A GMM (Gaussian Mixture Model) is used - A parametric model uses far less memory than the photons themselves - Discard the previous photon batch and shoot a new one - Keep updating the model parameters: on-line learning

  31–41. Overcoming the memory constraint - 1st pass: shoot photons and fit GMMs where k-NN reconstruction would otherwise be used - 2nd, 3rd, … passes: discard the old photons, shoot a new batch, and keep refining the cached GMMs (figure-only build slides)

  42. Gaussian Mixture Model

  43. Gaussian Distribution (Normal Distribution) - Compact: just 6 floats per component in 2D
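
The 2D Gaussian density, for reference; the 6 floats presumably count the mean (2), the symmetric 2×2 covariance (3), and a mixture weight (1):

$$
\mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \Sigma) = \frac{1}{2\pi \sqrt{|\Sigma|}} \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\top} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)
$$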

  44. Gaussian Mixture Model (GMM) - A convex combination of Gaussians, used to approximate a PDF
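
The mixture density the slide describes; non-negative weights summing to one guarantee the mixture is itself a valid PDF:

$$
p(\mathbf{x}) = \sum_{k=1}^{K} w_k\, \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_k, \Sigma_k), \qquad \sum_{k=1}^{K} w_k = 1, \quad w_k \ge 0
$$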

  45. Expectation Maximization (EM) Algorithm - Popular algorithm for fitting a GMM to scattered data points - Consists of 2 steps, iterated until convergence: E-step (expectation) and M-step (maximization) - Converges to a local maximum of the likelihood

  46. EM: How It Works

  47. EM: Expectation Step - For each sample, compute a soft assignment weight (responsibility) for every cluster - Soft assignment via Bayes' rule
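
The Bayes'-rule responsibility of component $k$ for sample $\mathbf{x}_i$:

$$
\gamma_{ik} = \frac{w_k\, \mathcal{N}(\mathbf{x}_i; \boldsymbol{\mu}_k, \Sigma_k)}{\sum_{j=1}^{K} w_j\, \mathcal{N}(\mathbf{x}_i; \boldsymbol{\mu}_j, \Sigma_j)}
$$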

  48. EM: Maximization Step - Update each cluster's parameters (mean, covariance, weight) to fit the data softly assigned to it
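
The standard batch M-step updates, weighting each of the $N$ samples by its responsibility:

$$
N_k = \sum_{i=1}^{N} \gamma_{ik}, \qquad w_k = \frac{N_k}{N}, \qquad \boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik}\, \mathbf{x}_i, \qquad \Sigma_k = \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik}\, (\mathbf{x}_i - \boldsymbol{\mu}_k)(\mathbf{x}_i - \boldsymbol{\mu}_k)^{\top}
$$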

  49–51. EM example (figure-only slides showing successive EM iterations)

  52. On-line Learning: Weighted Stepwise EM - Original EM: fits the density of a finite set of samples, computing the sufficient statistics over all of them at once - Weighted stepwise EM (the variant used in this paper): processes one sample per step, extending EM to an infinite stream of samples - Uses weighted samples (a weight can be viewed as a repeat count)
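
A minimal sketch of weighted stepwise EM for a 2D GMM, in the spirit of what the slide describes. The class name, the step-size schedule t^(-alpha), and the capping of the step size are illustrative assumptions here, not the authors' implementation:

```python
# Sketch: on-line (stepwise) EM with weighted samples for a 2D GMM.
# Sufficient statistics are blended with an exponentially decaying step size,
# so memory stays constant no matter how many samples stream in.
import numpy as np

class StepwiseGMM:
    def __init__(self, K, dim=2, alpha=0.7, seed=0):
        rng = np.random.default_rng(seed)
        self.K, self.dim, self.alpha = K, dim, alpha
        self.mu = rng.normal(size=(K, dim))             # component means
        self.cov = np.tile(np.eye(dim), (K, 1, 1))      # component covariances
        self.w = np.full(K, 1.0 / K)                    # mixture weights
        # Running sufficient statistics: s0 ~ E[g], s1 ~ E[g x], s2 ~ E[g x x^T]
        self.s0 = np.full(K, 1.0 / K)
        self.s1 = self.mu * self.s0[:, None]
        self.s2 = (self.cov * self.s0[:, None, None]
                   + np.einsum('ki,kj->kij', self.s1, self.mu))
        self.t = 1                                      # sample counter

    def _pdf(self, x):
        d = x - self.mu                                 # (K, dim) offsets
        inv = np.linalg.inv(self.cov)
        q = np.einsum('ki,kij,kj->k', d, inv, d)        # Mahalanobis terms
        norm = (2 * np.pi) ** (self.dim / 2) * np.sqrt(np.linalg.det(self.cov))
        return np.exp(-0.5 * q) / norm

    def update(self, x, weight=1.0):
        # E-step for one sample: responsibilities via Bayes' rule
        gamma = self.w * self._pdf(x)
        gamma /= gamma.sum()
        # Stepwise blend of sufficient statistics; the sample weight scales
        # the step size (a weighted sample behaves like a repeated sample)
        eta = min(weight * self.t ** (-self.alpha), 1.0)
        self.s0 = (1 - eta) * self.s0 + eta * gamma
        self.s1 = (1 - eta) * self.s1 + eta * gamma[:, None] * x
        self.s2 = (1 - eta) * self.s2 + eta * gamma[:, None, None] * np.outer(x, x)
        # M-step: recover parameters from the running statistics
        self.w = self.s0 / self.s0.sum()
        self.mu = self.s1 / self.s0[:, None]
        self.cov = (self.s2 / self.s0[:, None, None]
                    - np.einsum('ki,kj->kij', self.mu, self.mu))
        self.cov += 1e-6 * np.eye(self.dim)             # regularize for stability
        self.t += 1
```

Feeding it a stream of (sample, weight) pairs, e.g. `model.update(np.asarray(x), w)` per photon, keeps memory constant regardless of how many photon batches are shot.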

  53. Process Overview

  54–57. Process Overview 1. Preprocessing 2. Training 3. Rendering

  58. 1. Preprocessing - The BRDF is approximated by a GMM - A GMM is cached per material and per (viewing) direction - BRDF: given

  59. 2. Training - Photons and importons guide each other in alternating fashion - On-line learning with weighted stepwise EM - Cache the learnt illumination GMMs - Illumination: not known in advance

  60. 2. Training (figure-only slide)

  61. 3. Rendering - At each intersection point, query the cached BRDF and radiance GMMs - The product distribution is calculated on the fly - Sample directions from the product distribution - How can we compute the product efficiently?

  62. Gaussian × Gaussian = Gaussian - The product of two Gaussian densities is a scaled (unnormalized) Gaussian - Extends to multi-dimensional Gaussians
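
The standard closed-form product identities (they follow from completing the square in the exponents):

$$
\mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_1, \Sigma_1)\, \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_2, \Sigma_2) = s \cdot \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \Sigma)
$$

$$
\Sigma = \left( \Sigma_1^{-1} + \Sigma_2^{-1} \right)^{-1}, \qquad \boldsymbol{\mu} = \Sigma \left( \Sigma_1^{-1} \boldsymbol{\mu}_1 + \Sigma_2^{-1} \boldsymbol{\mu}_2 \right), \qquad s = \mathcal{N}(\boldsymbol{\mu}_1; \boldsymbol{\mu}_2, \Sigma_1 + \Sigma_2)
$$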

  63. GMM × GMM = GMM - BRDF: GMM of N components; Illumination: GMM of M components - Multiplying out term by term gives a product GMM of M·N components - The product GMM's parameters can be computed directly from the original parameters
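
A minimal sketch of the pairwise expansion, using the Gaussian-product identities above (hypothetical helper names, not the authors' code). Each pair of components contributes one product component, and the scale factor s is absorbed into the new weight:

```python
# Sketch: product of two GMMs -> GMM with len(w1)*len(w2) components.
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Evaluate a multivariate normal density at x."""
    d = x - mu
    k = len(mu)
    norm = (2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm

def gmm_product(w1, mu1, cov1, w2, mu2, cov2):
    """Each (i, j) component pair yields one product component."""
    W, MU, COV = [], [], []
    for i in range(len(w1)):
        for j in range(len(w2)):
            inv = np.linalg.inv(cov1[i]) + np.linalg.inv(cov2[j])
            cov = np.linalg.inv(inv)                        # product covariance
            mu = cov @ (np.linalg.solve(cov1[i], mu1[i])
                        + np.linalg.solve(cov2[j], mu2[j]))  # product mean
            # Scale factor: how strongly the two components overlap
            s = gaussian_pdf(mu1[i], mu2[j], cov1[i] + cov2[j])
            W.append(w1[i] * w2[j] * s)
            MU.append(mu)
            COV.append(cov)
    W = np.array(W)
    return W / W.sum(), np.array(MU), np.array(COV)         # renormalized weights
```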

  64. Reduction of GMM Components - For the sake of efficiency, merge similar components (otherwise the M·N-component product is expensive to sample and evaluate)
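
One common merge rule is moment matching, which preserves the mean and covariance of the merged pair; the paper's exact similarity criterion and merge rule may differ:

$$
w = w_1 + w_2, \qquad \boldsymbol{\mu} = \frac{w_1 \boldsymbol{\mu}_1 + w_2 \boldsymbol{\mu}_2}{w}, \qquad \Sigma = \frac{w_1 \left( \Sigma_1 + (\boldsymbol{\mu}_1 - \boldsymbol{\mu})(\boldsymbol{\mu}_1 - \boldsymbol{\mu})^{\top} \right) + w_2 \left( \Sigma_2 + (\boldsymbol{\mu}_2 - \boldsymbol{\mu})(\boldsymbol{\mu}_2 - \boldsymbol{\mu})^{\top} \right)}{w}
$$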

  65. Results & Discussion

  66. Evaluation: 1 hour rendering

  67. Result - Comparison variants shown: multiple importance sampling instead of the product distribution, and no GMM reduction (figure-only comparison)

  68. Result (figure-only slide)

  69. Discussion - No memory issues in practice: < 10 MB for the GMM cache in a typical scene - Fast convergence for complex glossy-glossy reflection scenes, where product sampling matters most - Not efficient for spatially varying BRDFs, since GMMs are cached per material - Possible extension using SVBRDF parameters

  70. Summary - To perform importance sampling, we estimate the illumination from particles - Complex scenes need more particles for a better estimate - On-line learning of GMMs via weighted stepwise EM lets us generate particles indefinitely without memory issues - The BRDF is also approximated as a GMM, so the product GMM directly approximates the integrand of the rendering equation - Fast convergence for complex, glossy scenes
