SLIDE 1
Hello. I am Morgan McGuire, presenting a collaboration with Julie Dorsey, Eric Haines, John Hughes, Steve Marschner, Matt Pharr, and Peter Shirley on A Taxonomy of Bidirectional Scattering Distribution Function Lobes for Rendering Engineers. With the word “engineer” we intend to include researchers, faculty, and students: people who design rendering systems. This is different from scientists and engineers in adjacent fields, people in manufacturing, and content creators, who have different needs and their own taxonomies.
SLIDE 2
We release this work under a Creative Commons license to facilitate reuse by other authors and teachers.
SLIDE 3
To introduce the problem we are addressing in this work, I’ll begin with a story. (I will intentionally misattribute my coauthors’ contributions and positions to improve the story…and to tease them, which I assure you comes from my admiration for, and gratitude at being in, such illustrious company.)
SLIDE 4
Pete gathered us online several months ago. He noted that there’s a lack of consistency in the terminology used to describe materials in rendering and proposed that we standardize for our own future book editions, courses, and papers in order to reduce confusion when moving between contexts. [click] So, we surveyed major work across several fields and five decades and sought a reasonable set of terms and concepts.
SLIDE 5
The first challenge that we noted is that “material” has many different meanings in the field of computer graphics and interactive techniques taken as a whole. [click] “Material” can describe the properties of:
- physical simulation, including density, chemical properties, friction coefficients, rigidity, etc.
- audio simulation, including the sounds it makes when struck with different objects
- modeling, including the shape of detail features (fur, tiles, bumps, etc.)
- surface rendering for the outermost surface, or multiple thin layers of the surface, including emission and light reflection
- volumetric rendering, including phase functions
- game logic, including whether it can be picked up by the player, is breakable, or can be traversed by characters
SLIDE 6
For example, if you tell me that you have a sphere with a particular material on it, I personally assume something like this, where the color and size of the highlight are the space of variation. But to an artist using the Substance Designer tool, these are all the same object with a different “material”. [click] Coming at this first from rendering, I hadn’t thought about the coverage mask in the wicker example, but that makes sense. That “material” might include a displacement map or all of the complexity of a full 3D modeled world—
SLIDE 7
—down to individual procedurally generated sailboats and houses…which took me aback! We addressed this by narrowing our goal from “materials” to “appearance under thin surface rendering” at the level of “uniform small patches” within a pixel or texel…
SLIDE 8
Yet, we found the terminology for surface appearance still inconsistent. [click] I think the most common definition problem in rendering is “what does specular mean?” To some, it is a surface that creates perfectly sharp mirror reflection. To others, it is a surface with mirror reflection or refracted transmission. To still others, it is any blurred reflection that is near the mirror reflection direction. Vendors of paint, metal, and cloth; optical physicists; astronomers; natural media artists; CGI artists; game programmers; film VFX engineers; academic researchers, faculty, and students…all have different words for describing the appearance of this statue, even when focusing solely on appearance. And we recognize that we can’t really discuss appearance as a property of a surface anyway…
SLIDE 9
The passive appearance results from:
1. the combination of the two media, one on either side of a surface where light scatters
2. the incident illumination conditions
3. the imaging system’s sensitivities (e.g., a camera’s sensor response or the human retina)
4. the context within the image (light adaptation, bloom, local contrast…all of which happen in the human visual system as well as in modern cameras!)

“Passive” just means that the object isn’t glowing; if it is, that’s another thing… Our conclusion was to abandon “materials” and “appearance”.
SLIDE 10
Because appearance is the result of shading, we are better served in rendering by describing the key actor in shading, [click] which is the bidirectional scattering distribution function (BSDF). [click] The BSDF is not a property of an object but an emergent property of an interface, although in practice it is common to assume surfaces are in air and attach BSDFs to them.
SLIDE 11
The BSDF is a function of the incoming and outgoing direction vectors at a surface. It is the ratio of the change in outgoing radiance to the change in incident irradiance over small solid angles. It has units of “per steradian”. This means that the BSDF is a distribution of light scattering in every possible direction.
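The definition above can be written out as a formula; the notation below is one common convention, not taken from the slides:

```latex
f_s(\omega_i, \omega_o)
  = \frac{\mathrm{d}L_o(\omega_o)}{\mathrm{d}E_i(\omega_i)}
  = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
  \quad \left[\,\mathrm{sr}^{-1}\right]
```

Here $L_o$ is outgoing radiance, $E_i$ is incident irradiance, and $\theta_i$ is the angle between the incoming direction and the surface normal; the $\mathrm{sr}^{-1}$ units are the “per steradian” just mentioned.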
If I choose some incoming direction of light, then I can visualize a cross section of the BSDF with a cartoon like this: The gray box is some medium, such as glass; the top of the gray box is the interface between air and the glass…let’s say that it is ground (rough, sand-blasted) glass to make this interesting. The blue arrow is the incoming light’s direction of propagation that I’ve chosen. And then here’s the distribution of scattered light…for ground glass, there’s probably a lot of reflection around the mirror direction, but a fair bit that scatters in every direction, and then some that propagates forward into the glass by transmission but
SLIDE 12
is diffused by that rough surface, which is why it provides some privacy on a shower door by “blurring” what is seen through it. In order to draw this, I had to fix both degrees of freedom in the incoming vector and one degree of freedom in the outgoing vector, leaving only the outgoing angle in the plane of the screen. If I choose a different incoming direction or a different orientation for the diagram, then we’ll see a different distribution in 2D.
SLIDE 13
SLIDE 14
In practice, a rendering system mainly uses the BSDF in two ways. I’ll illustrate these with toy renderer pseudocode. As shown on the left, it will evaluate the BSDF function for “direct illumination”. This happens in your rasterization pixel shader or ray hit shader. This is what everyone spent a lot of time on in the early ’80s, and systems using rasterization continued to spend most of their computation on until recently. As shown on the right with this mock path tracer, a Monte Carlo renderer will also sample directions with a probability distribution proportional to the BSDF (and usually some other things, like the cosine of the angle of incidence and the incoming lighting). Sampling is where the computational and research emphasis is today for offline rendering, and increasingly for game rendering. This sampling use case is also the primary motivation for today’s talk, as we’ll see in a minute.
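The two uses can be sketched in a few lines of Python. This is a stand-in for the toy pseudocode on the slide, not the slide’s actual code: the Lambertian model, the `evaluate`/`sample` names, and the local frame with normal (0, 0, 1) are all illustrative assumptions.

```python
import math
import random

# A minimal Lambertian BSDF, purely to illustrate the two operations.
# Directions are unit vectors in a local frame whose normal is (0, 0, 1).
class LambertianBSDF:
    def __init__(self, albedo):
        self.albedo = albedo  # reflectance in [0, 1]

    def evaluate(self, w_i, w_o):
        # Constant over the hemisphere: albedo / pi, in units of 1/sr.
        return self.albedo / math.pi

    def sample(self, w_i):
        # Cosine-weighted hemisphere sampling, so the pdf (cos(theta)/pi)
        # is proportional to evaluate() times the incidence cosine.
        u1, u2 = random.random(), random.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        w_o = (r * math.cos(phi), r * math.sin(phi),
               math.sqrt(max(0.0, 1.0 - u1)))
        pdf = w_o[2] / math.pi
        return w_o, pdf

bsdf = LambertianBSDF(albedo=0.8)

# Use 1 (direct illumination): evaluate for a known light direction.
f = bsdf.evaluate(w_i=(0.0, 0.0, 1.0), w_o=(0.0, 0.0, 1.0))

# Use 2 (Monte Carlo path tracing): sample an outgoing direction
# with probability proportional to the BSDF.
w_o, pdf = bsdf.sample(w_i=(0.0, 0.0, 1.0))
```

A path tracer would then trace a ray along `w_o` and divide its contribution by `pdf`, which is exactly why a pdf that matches the BSDF shape reduces variance.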
SLIDE 15
Unfortunately, the BSDF terminology in the literature isn’t any better than the appearance terminology. In fact, it is mostly the same.
SLIDE 16
SLIDE 17
When a renderer is sampling a BSDF, it has to handle light scattering differently depending on whether it is described by a discrete or continuous probability distribution function. On the right we show a schematic of a PDF for a surface that always produces a discrete set of light rays from one incident light ray, such as a perfect mirror, and photographs of interfaces between glass, metal, or water and air that could be approximated by such a BSDF. Now, the BSDF for a mirror-reflector isn’t a function in the sense we’re used to: for every input it “evaluates” to either zero or infinity. Consider that in the context of our two main use cases for BSDFs: evaluation for direct illumination and sampling for Monte Carlo ray tracing methods. It is not useful to evaluate the mirror-reflector BSDF. It is either infinity (which we can’t use for shading, and which will occur for a single direction, thus with zero probability) or zero…so there was nothing to compute either way. We’d like to classify such BSDFs to branch in the code and exclude them from evaluation.
SLIDE 18
For the sampling use case, we need to employ discrete (probability “mass”) sampling algorithms instead of continuous probability density sampling, so we again will require a code branch. This is the implication of Monte Carlo for taxonomy that I promised you. This branch point in the code becomes a branch point in the taxonomy. For the case where a single incident light ray produces a continuous distribution, we can further subdivide the scenarios, which will be driven by the Monte Carlo considerations…
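The code branch described above might look like the following sketch. The `is_impulse` flag and all class and function names are illustrative inventions, not an API from the talk:

```python
import math

# "Impulse" (discrete) lobes such as a perfect mirror can only be sampled;
# continuous lobes support both evaluation and density-based sampling.
class MirrorBSDF:
    is_impulse = True  # discrete distribution: a delta at the mirror direction

    def sample(self, w_i):
        # Deterministic reflection about the normal (0, 0, 1). The returned
        # weight is a discrete probability mass of 1, not a density.
        w_o = (-w_i[0], -w_i[1], w_i[2])
        return w_o, 1.0

class RoughBSDF:
    is_impulse = False

    def evaluate(self, w_i, w_o):
        return 1.0 / math.pi  # placeholder continuous lobe

def shade_direct(bsdf, w_i, w_light):
    # The branch: impulse lobes contribute nothing to direct-light
    # evaluation, because the BSDF is zero for every light direction
    # except a set of measure zero.
    if bsdf.is_impulse:
        return 0.0
    return bsdf.evaluate(w_i, w_light)

mirror = MirrorBSDF()
w_o, weight = mirror.sample((0.3, 0.0, 0.954))
direct = shade_direct(mirror, (0.3, 0.0, 0.954), (0.0, 0.0, 1.0))
```

Real systems express the same idea with lobe-type flags (so an integrator can skip light sampling for delta lobes entirely), but the branch structure is the point here.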
SLIDE 19
Outgoing rays sampled proportional to the continuous distribution can either be clustered together (and highly influenced by a particular direction) or spread out widely over many directions. When the rays are clustered together, that creates a more coherent ray cast, which encourages certain kinds of data structures and scheduling operations for the parallel processing that computes ray intersections. Monte Carlo renderers use stochastic samples, which yields noise in the final image. A common step is denoising by blurring together the contributions of nearby pixels in the image (or pixels from previous frames) that we expect should have had similar values. When denoising animation, knowing that the outgoing rays are clustered together tells us that we should not blur too far in screen space. Knowing whether the retroreflection, transmission, or mirror vector most strongly influences the BSDF shape also tells us how to blur contributions from the previous frame into the current one.
SLIDE 20
For the case of rays that are spread widely, we need a very large denoising kernel (since there will be a lot of noise), but it also tells us that we can reproject samples directly on the surface, as they are less dependent on the view direction. Finally, we further divide this spread out case…
SLIDE 21
When the distribution is uniform over the hemisphere, that’s a special case that allows preintegration, as the incoming direction is irrelevant. We can also perfectly reproject during denoising. The remainder of the spread out case is “everything else”…it has no special properties to exploit, but also avoids the singularity of the mirror case, so must be handled in a general way. Most BSDFs can’t be described by just one of these categories. Instead, the shape of the BSDF is expressible as a sum of these…
SLIDE 22
Measured or simulated BSDFs can be projected onto these terms as a kind of basis. Because one category is “everything else”, there’s no error or approximation in doing so. Artistic or fitted analytic BSDFs are already separated into weighted terms by construction. All that is left is to name the different shapes that appear in our functional decomposition of BSDFs. We intentionally chose names that are commonly in use. What’s new is:
- a pragmatic rationale for the branches of the taxonomy
- consistent definitions we plan to use for development, research, and teaching
- the relation of these names to physical processes
- disambiguation of common terms
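A sum-of-lobes BSDF of this kind can be sketched as follows. The two lobes, their weights, and the crude Phong-like glossy term are illustrative placeholders, not the paper’s decomposition:

```python
import math

def lambertian_lobe(w_i, w_o):
    # Uniform-over-the-hemisphere ("diffuse") term.
    return 1.0 / math.pi

def glossy_lobe(w_i, w_o, exponent=20):
    # A crude clustered term around the mirror direction (Phong-like),
    # with the normal fixed at (0, 0, 1). Purely illustrative.
    mirror = (-w_i[0], -w_i[1], w_i[2])
    c = max(0.0, sum(a * b for a, b in zip(mirror, w_o)))
    return (exponent + 2) / (2.0 * math.pi) * c ** exponent

def bsdf_sum(w_i, w_o, weights):
    # The full BSDF is a weighted sum over the named lobes; a measured
    # BSDF would be projected onto these terms, while an artistic one is
    # already authored as such weights.
    lobes = {"diffuse": lambertian_lobe, "glossy": glossy_lobe}
    return sum(weights[name] * lobes[name](w_i, w_o) for name in weights)

value = bsdf_sum((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
                 {"diffuse": 0.6, "glossy": 0.3})
```

Because the lobes are separated by name, a sampler can pick one lobe proportional to its weight and then use that lobe’s specialized sampling routine, which is exactly where the taxonomy’s branch points pay off.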
SLIDE 23
[read terms]
SLIDE 24
We also define some other key terms that describe BSDFs. For example, an isotropic BSDF is independent of the rotation of the tangent plane about the normal with respect to the incident angle. An anisotropic one has BSDF lobes that change with orientation. A spool of fine wire and the circular pattern on the bottom of an aluminum pot are examples of anisotropy.
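The isotropy definition can be tested mechanically: rotate both direction vectors by the same azimuthal angle about the normal and compare BSDF values. The toy BSDF below is an illustrative assumption; an anisotropic BSDF would fail this check.

```python
import math

def rotate_about_normal(w, phi):
    # Rotate a direction vector by angle phi about the normal (0, 0, 1).
    c, s = math.cos(phi), math.sin(phi)
    return (c * w[0] - s * w[1], s * w[0] + c * w[1], w[2])

def isotropic_bsdf(w_i, w_o):
    # Depends only on the polar angles (here, just the z components),
    # so it is invariant under a shared rotation about the normal.
    return w_i[2] * w_o[2] / math.pi

w_i, w_o = (0.5, 0.1, 0.86), (0.2, -0.4, 0.89)
a = isotropic_bsdf(w_i, w_o)
b = isotropic_bsdf(rotate_about_normal(w_i, 1.3),
                   rotate_about_normal(w_o, 1.3))
# For an isotropic BSDF, a and b agree.
```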
SLIDE 25
The paper gives these definitions and I just covered most of them, so I’ll skip them here.
SLIDE 26
Here’s a brief Rosetta stone in which you can see that our taxonomy aligns reasonably well with previous terminology but clarifies and refines it. Please see our paper for details and extensive citations.
SLIDE 27
I’ve explained why we declined to define “material” or “appearance”; those aren’t the important distinctions for researchers and engineers working on renderers. Likewise, artists, lighting engineers, those working on physical simulation, etc. have their own important distinctions. Our terminology is solely for people who design and implement rendering algorithms. Phase functions and emission functions are beyond the scope of this work. We’ve heard two different definitions of “layered BSDFs” and mention them without suggesting a standard:
- Games: alpha-compositing of BSDF parameters, mostly for masking (e.g., rust)
- Film: materials with translucent physical layers (e.g., varnished wood)

We currently use different notation for common rendering terms (e.g., clamped cosine, unit vectors, vector names). This is a minor annoyance but is easier to define concisely than the concepts that we just presented, so I did not address it here.
SLIDE 28
We hope that everyone will adopt and extend our taxonomy. At least for ourselves we can now have consistency in our libraries, textbooks, publications, and courses. The methodology we arrived at is one that I’ll now use for creating terminology and taxonomies in general: Computer graphics is a large field and affects design, manufacturing, theory and practice, entertainment and predictive simulation, physical and virtual worlds, and with augmented reality even blends physical and virtual worlds. Because it is so diverse, there are many places where ambiguous and competing jargon appears. We encourage further work on standardization to make interdisciplinary communication more effective and graphics more accessible to newcomers. Thank you.
SLIDE 29
SLIDE 30
SLIDE 31
SLIDE 32
SLIDE 33
SLIDE 34
SLIDE 35