  1. 02941 Physically Based Rendering
     Monte Carlo Integration
     Jeppe Revall Frisvad
     June 2020

  2. Why Monte Carlo?
  ◮ The rendering equation
      L_o(x, ω) = L_e(x, ω) + ∫_{2π} f_r(x, ω′, ω) L_i(x, ω′) cos θ dω′
    is difficult, usually impossible, to solve analytically.
  ◮ Trapezoidal integration and Gaussian quadrature only work well for smooth, low-dimensional integrals.
  ◮ The rendering equation is 5-dimensional, and it usually involves discontinuities.
  ◮ There are (roughly) only three known mathematical methods for solving this type of problem:
    ◮ Truncated series expansion
    ◮ Finite basis (discretization)
    ◮ Sampling (random selection)
  ◮ Monte Carlo is probably the simplest way to use sampling.

  3. Brush up on probability
  ◮ A random variable X ∈ A is a value drawn from the sample space A of some random process.
  ◮ Applying a function f : X → Y to a random variable X results in a new random variable Y.
  ◮ A uniform random variable takes on all values in its sample space with equal probability.
  ◮ Probability is the chance (represented by a real number in [0, 1]) that something is the case or that an event will occur.
  ◮ The cumulative distribution function (cdf) is the probability that a random variable X is smaller than or equal to a value x:
      P(x) = Pr{X ≤ x}.
  ◮ The probability density function (pdf) is the relative probability for a random variable X to take on a particular value x:
      pdf(x) = dP(x)/dx.

  4. Properties of the probability density function
  ◮ For uniform random variables, pdf(x) is constant.
  ◮ Of particular interest is the continuous, uniform random variable ξ ∈ [0, 1], which has the probability density function
      pdf(x) = 1 for x ∈ [0, 1],
      pdf(x) = 0 otherwise.
  ◮ Using the pdf, we can calculate the probability that a random variable lies inside an interval:
      Pr{x ∈ [a, b]} = ∫_a^b pdf(x) dx.
  ◮ All probability density functions have the properties
      pdf(x) ≥ 0 and ∫_{−∞}^{∞} pdf(x) dx = 1.

  5. Expected values and variance
  ◮ The expected value of a random variable X ∈ A is the average value over the distribution of values pdf(x):
      E{X} = ∫_A x pdf(x) dx.
  ◮ The expected value of an arbitrary function f(X) is then
      E{f(X)} = ∫_A f(x) pdf(x) dx.
  ◮ The variance is the expected squared deviation of the function from its expected value:
      V{f(X)} = E{(f(X) − E{f(X)})²}.
  ◮ The expected value operator E is linear. Thus:
      V{f(X)} = E{(f(X))²} − (E{f(X)})².

  6. Properties of variance
  ◮ The variance operator:
      V{f(X)} = E{(f(X))²} − (E{f(X)})².
  ◮ V{·} is not a linear operator. For a scalar a, we have
      V{a f(X)} = a² V{f(X)}.
  ◮ And, furthermore,
      V{f(X) + f(Y)} = E{(f(X) + f(Y))²} − (E{f(X) + f(Y)})²
      = E{(f(X))² + (f(Y))² + 2 f(X) f(Y)} − (E{f(X)})² − (E{f(Y)})² − 2 E{f(X)} E{f(Y)}
      = V{f(X)} + V{f(Y)} + 2 E{f(X) f(Y)} − 2 E{f(X)} E{f(Y)}
      = V{f(X)} + V{f(Y)} + 2 Cov{f(X), f(Y)}.
  ◮ Thus, if f(X) and f(Y) are uncorrelated (Cov{f(X), f(Y)} = 0), then the variance of the sum is equal to the sum of the variances.

  7. The Monte Carlo estimator
  ◮ The law of large numbers:
      Pr{ (1/N) Σ_{j=1}^{N} f(X_j) → E{f(X)} } = 1 for N → ∞.
    “It is certain that the estimator goes to the expected value as the number of samples goes to infinity.”
  ◮ Approximating an arbitrary integral using N samples:
      F = ∫_A f(x) dx = ∫_A (f(x)/pdf(x)) pdf(x) dx = E{f(X)/pdf(X)},
    and, using the law of large numbers,
      F_N = (1/N) Σ_{j=1}^{N} f(X_j)/pdf(X_j),
    where X_j are sampled on A and pdf(x) > 0 for all x ∈ A.

  8. Monte Carlo error bound
  ◮ We found the estimator:
      F_N = (1/N) Σ_{j=1}^{N} f(X_j)/pdf(X_j).
  ◮ The standard deviation is the square root of the variance, σ_{F_N} = (V{F_N})^{1/2}, and it is a probabilistic error bound for the estimator according to Chebyshev’s inequality:
      Pr{ |F_N − E{F_N}| ≥ δ σ_{F_N} } ≤ δ^{−2}.
    “The error is probably not too much larger than the standard deviation.”
    “There is a less than 1% chance that the error is larger than 10 standard deviations.”
  ◮ The rate of convergence is then the ratio between the standard deviation of the estimator σ_{F_N} and the standard deviation of a single sample σ_Y.

  9. Monte Carlo convergence
  ◮ The standard deviation of the estimator:
      σ_{F_N} = (V{F_N})^{1/2} = ( V{ (1/N) Σ_{j=1}^{N} Y_j } )^{1/2},
    where
      Y_j = f(X_j)/pdf(X_j).
  ◮ Continuing (while assuming that X_j, and thus Y_j, are uncorrelated):
      σ_{F_N} = ( (1/N²) V{ Σ_{j=1}^{N} Y_j } )^{1/2} = ( (1/N²) Σ_{j=1}^{N} V{Y_j} )^{1/2} = ( (1/N) V{Y} )^{1/2} = σ_Y/√N.
  ◮ Worst case: quadruple the number of samples to halve the error.

  10. An estimator for the rendering equation
  ◮ The rendering equation:
      L_o(x, ω) = L_e(x, ω) + ∫_{2π} f_r(x, ω′, ω) L_i(x, ω′) cos θ dω′.
  ◮ The Monte Carlo estimator:
      L_N(x, ω) = L_e(x, ω) + (1/N) Σ_{j=1}^{N} f_r(x, ω′_j, ω) L_i(x, ω′_j) cos θ / pdf(ω′_j),
    with cos θ = ω′_j · n, where n is the surface normal at x.
  ◮ The Lambertian BRDF: f_r(x, ω′, ω) = ρ_d/π.
  ◮ A good choice of pdf would be: pdf(ω′_j) = cos θ/π.

  11. Sampling a pdf (the inversion method)
  ◮ How to draw samples X_i from an arbitrary pdf:
    1. Compute the cdf: P(x) = ∫_{−∞}^{x} pdf(x′) dx′.
    2. Compute the inverse cdf: P^{−1}(x).
    3. Obtain a uniformly distributed random number ξ ∈ [0, 1].
    4. Compute a sample: X_i = P^{−1}(ξ).
  ◮ Example: the exponential distribution over the sample space [0, ∞),
      pdf(x) = a e^{−ax}.
  ◮ Compute the cdf:
      P(x) = ∫_0^x a e^{−ax′} dx′ = 1 − e^{−ax}.
  ◮ Invert the cdf:
      P^{−1}(x) = −ln(1 − x)/a.
  ◮ To draw samples:
      X = −ln(1 − ξ)/a or X = −ln(ξ)/a.

  12. Sampling a pdf (the rejection method)
  ◮ Imagine a pdf which we cannot integrate to find the cdf.
  ◮ Knowing a function g with the property pdf(x) < c g(x), where c > 1, we can use rejection sampling with g instead of sampling the pdf directly.
  ◮ Rejection sampling is the following algorithm:
    ◮ loop forever:
      ◮ sample X from g(x) and ξ from [0, 1]
      ◮ if ξ < pdf(X)/(c g(X)) then return X
  ◮ Rejection sampling is only a good idea if c g(x) is a tight bound for the pdf.

  13. Uniformly sampling a sphere
  ◮ The unit box is a (relatively) tight bound for the unit sphere.
  ◮ Rejection sampling unit directions given by points on the unit sphere:

      Vec3f direction;
      do
      {
        direction[0] = 2.0f*mt_random() - 1.0f;
        direction[1] = 2.0f*mt_random() - 1.0f;
        direction[2] = 2.0f*mt_random() - 1.0f;
      } while(dot(direction, direction) > 1.0f);
      direction = normalize(direction);

  ◮ pdf(ω′_j) = 1/(4π).

  14. Sampling a 2D joint density function
  ◮ Suppose we have a joint 2D density function pdf(x, y).
  ◮ To sample pdf(x, y) using two independent random variables X and Y, we find the marginal and the conditional density functions.
  ◮ The marginal density function is
      pdf(x) = ∫ pdf(x, y) dy.
  ◮ The conditional density function is
      pdf(y|x) = pdf(x, y)/pdf(x).
  ◮ The inversion method is then applied to each of the marginal and conditional density functions.

  15. Cosine-weighted hemisphere sampling
  ◮ Sampling directions according to the distribution:
      pdf(ω′_j) = cos θ/π,  pdf(θ, φ) = cos θ sin θ/π.
  ◮ Compute the marginal and conditional density functions:
      pdf(θ) = ∫_0^{2π} (cos θ/π) sin θ dφ = 2 cos θ sin θ,
      pdf(φ|θ) = (cos θ sin θ/π)/(2 cos θ sin θ) = 1/(2π).
  ◮ The cdf for the marginal density function:
      P(θ) = 2 ∫_0^θ cos θ′ sin θ′ dθ′ = −2 ∫_1^{cos θ} cos θ′ d(cos θ′) = 1 − cos²θ,
      P(φ|θ) = φ/(2π).
  ◮ Invert these to find the sampling strategy:
      ω′_j = (θ, φ) = (cos^{−1}√ξ₁, 2πξ₂).

  16. Ambient occlusion
  ◮ Using the Lambertian BRDF for materials, f_r = ρ_d/π; the cosine-weighted hemisphere for sampling, pdf(ω′_j) = cos θ/π; and a visibility term V for incident illumination, the Monte Carlo estimator for ambient occlusion is simply:
      L_N(x, ω) = (1/N) Σ_{j=1}^{N} f_r(x, ω′_j, ω) L_i(x, ω′_j) cos θ / pdf(ω′_j) = ρ_d(x) (1/N) Σ_{j=1}^{N} V(ω′_j).
