Radiance-Predicting Neural Networks SIMON KALLWEIT, Disney Research - - PowerPoint PPT Presentation


Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks. SIMON KALLWEIT, Disney Research and ETH Zürich, et al. ACM Transactions on Graphics, publication date: November 2017. Presenter: MinKu Kang


slide-1
SLIDE 1

Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks

SIMON KALLWEIT, Disney Research and ETH Zürich et al. Presenter: MinKu Kang ACM Transactions on Graphics, Publication date: November 2017

slide-2
SLIDE 2

Ambient sound propagation

From the previous talk by Dennis

slide-3
SLIDE 3

Edge-darkening and silver-lining effects

Cloud Rendering

https://www.youtube.com/watch?v=0MJl9IF_3fI

slide-4
SLIDE 4

http://ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/opt/mch/sct.rxml

Scattering of Light

Light scatters at the microscale, not just at the macroscale

slide-5
SLIDE 5

Problem Configuration & Notation

πœ•: π‘’π‘—π‘ π‘“π‘‘π‘’π‘—π‘π‘œ 𝑦: π‘šπ‘π‘‘π‘π‘’π‘—π‘π‘œ We want to know (compute) the radiance at (𝑦, πœ•) To render a whole cloud image, We need to know the radiance at all (visible) positions and directions Problem: How to efficiently compute the radiance at a specific position and a direction ?

slide-6
SLIDE 6

Problem Configuration & Notation

πœ•: π‘’π‘—π‘ π‘“π‘‘π‘’π‘—π‘π‘œ 𝑦: π‘šπ‘π‘‘π‘π‘’π‘—π‘π‘œ We want to know (compute) the radiance at (𝑦, πœ•) To render a whole cloud image, We need to know the radiance at all (visible) positions and directions Problem: How to efficiently compute the radiance at a specific position and a direction ?

But there are too many discrete particles to consider (and they are not even polygons!). Is it possible to use the rendering equation we have learned?

slide-7
SLIDE 7

Radiative Transfer

The radiative transfer equation: integrating both sides of the differential RTE along Ο‰.

Οƒ_t: extinction coefficient
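The slide's equations did not survive extraction; as a reconstruction in standard notation (not necessarily the slide's exact typography), the differential RTE and the integral form obtained by integrating it along Ο‰ are:

```latex
% Differential RTE (sigma_t: extinction, sigma_s: scattering, p: phase function)
(\omega \cdot \nabla)\, L(x, \omega) = -\sigma_t(x)\, L(x, \omega)
  + \sigma_s(x) \int_{S^2} p(\omega, \omega')\, L(x, \omega')\, \mathrm{d}\omega'

% Integrating along omega gives the integral form with transmittance T:
L(x, \omega) = T(x, x_b)\, L(x_b, \omega)
  + \int_0^{s} T(x, x_t)\, \sigma_s(x_t)\, L_s(x_t, \omega)\, \mathrm{d}t,
\qquad T(x, y) = \exp\!\Big(-\!\int_x^{y} \sigma_t(u)\, \mathrm{d}u\Big)
```

Here L_s denotes the in-scattered radiance that the following slides discuss, and x_b the point where the ray exits the medium.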

slide-8
SLIDE 8

Radiative Transfer

πœ• 𝑦 ෝ πœ•

π‘‘π‘π‘œπ‘’π‘ π‘—π‘π‘£π‘’π‘—π‘π‘œ 𝑔𝑏𝑑𝑒𝑝𝑠: πœ• βˆ™ ෝ πœ• Neighborhood surface 𝑇2 Boundary

slide-9
SLIDE 9

RADIANCE-PREDICTING NEURAL NETWORKS

The in-scattered radiance, excluding uncollided radiance (arriving directly from the sun).

This is what the NN predicts (estimates).

slide-10
SLIDE 10

A combination of Monte Carlo integration and neural networks

The in-scattered radiance

This is what the NN predicts (estimates)

Monte-Carlo Integration
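To make the combination concrete, here is a minimal, generic Monte Carlo estimator of the kind used for the remaining pixel integral (the network supplies the expensive in-scattered radiance term). The integrand, sampler, and the sinβ€―toy integral are illustrative stand-ins, not the paper's actual quantities:

```python
import math
import random

def mc_estimate(f, sample, pdf, n=20000):
    """Monte Carlo estimator: average f(x) / pdf(x) over n random samples."""
    total = 0.0
    for _ in range(n):
        x = sample()
        total += f(x) / pdf(x)
    return total / n

# Toy stand-in for a directional integral: integrate sin over [0, pi]
# with uniform sampling (exact value: 2).
random.seed(0)
est = mc_estimate(
    f=math.sin,
    sample=lambda: random.uniform(0.0, math.pi),
    pdf=lambda x: 1.0 / math.pi,
)
```

In the paper's setting, `f` would evaluate transmittance times the network's radiance prediction along a sampled direction, and `pdf` the sampling density of that direction.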

slide-11
SLIDE 11

RADIANCE-PREDICTING NEURAL NETWORKS

We want to find (learn) a function such that, given S, it predicts the in-scattered radiance. S: shading configuration around (x, Ο‰)

slide-12
SLIDE 12

RADIANCE-PREDICTING NEURAL NETWORKS

We want to find (learn) such a function via training

slide-13
SLIDE 13

The Descriptor at a specific configuration (x, Ο‰)

  • Each descriptor consists of stencils of 5 Γ— 5 Γ— 9 points
  • The stencil at level k is scaled by 2^(kβˆ’1)
  • They use K = 10 levels (10 stencils)
  • Each stencil is formed by 5 Γ— 5 Γ— 9 = 225 points
  • The stencil is oriented toward the light source
  • Two levels of the hierarchy are shown here
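The bullets above can be sketched in code. This is a simplified reconstruction of the hierarchical descriptor: `stencil_points`, `descriptor`, and the `gaussian_cloud` density field are hypothetical names, and the rotation of each stencil toward the light source is omitted for brevity:

```python
import numpy as np

def stencil_points(k, res=(5, 5, 9)):
    """Unit 5x5x9 grid of sample offsets, scaled by 2**(k-1) for level k.

    This sketch keeps the stencil axis-aligned; the paper additionally
    orients it toward the light source."""
    axes = [np.linspace(-0.5, 0.5, n) for n in res]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)  # 225 offsets
    return pts * 2.0 ** (k - 1)

def descriptor(density, x, K=10):
    """Stack of K stencils of density samples centred at position x."""
    return np.stack([
        np.array([density(x + p) for p in stencil_points(k)])
        for k in range(1, K + 1)
    ])

# Hypothetical smooth density field, only for illustration.
gaussian_cloud = lambda p: float(np.exp(-p @ p))
d = descriptor(gaussian_cloud, np.zeros(3))  # shape (10, 225)
```

Each level doubles the spatial extent of the previous one, so coarse levels summarize distant density while the finest level resolves local detail.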
slide-14
SLIDE 14

The Descriptor at a specific configuration (x, Ο‰). The Descriptor:

x: position, Ο‰: direction, Ο‰_l: direction toward the light source

slide-15
SLIDE 15

Neural Network Architecture (progressive feeding)

The finest-scale stencil is fed first, the coarsest-scale stencil last β†’ output (L)
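The data flow of progressive feeding can be sketched with an untrained toy network. This is an assumption-laden simplification: the real architecture uses several fully connected layers per level, and the hidden width `H` and weight scale here are arbitrary; only the level-by-level concatenation pattern matches the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
K, S, H = 10, 225, 32   # levels, points per stencil, hidden width (H is a guess)

def relu(x):
    return np.maximum(x, 0.0)

# One fully connected block per level; block k sees the previous block's
# output concatenated with the level-k stencil (finest first, coarsest last).
weights = [rng.standard_normal((H, (H if k > 0 else 0) + S)) * 0.05
           for k in range(K)]
out_w = rng.standard_normal((1, H)) * 0.05

def predict(stencils):
    """Progressive feeding: descriptor of shape (K, S) -> scalar radiance."""
    h = relu(weights[0] @ stencils[0])
    for k in range(1, K):
        h = relu(weights[k] @ np.concatenate([h, stencils[k]]))
    return float((out_w @ h)[0])

radiance = predict(rng.standard_normal((K, S)))
```

Feeding coarse stencils late lets early blocks specialize on high-frequency local structure before broader context is mixed in.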

slide-16
SLIDE 16

Ground-truth data from path tracing: N β‰ˆ 15 million samples. Adam update rule with the default learning rate. Minibatches of size |B| = 1000. Training requires ∼12 h on a single GPU.

Training Configuration
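For reference, the Adam update rule mentioned above, with its default hyper-parameters, looks like this (a generic sketch, not the paper's training code; the toy quadratic loss is only a sanity check):

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with the usual default hyper-parameters."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)     # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)     # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Sanity check: minimise the toy loss (theta - 3)^2 starting from theta = 0.
theta, state = np.float64(0.0), (0.0, 0.0, 0)
for _ in range(10000):
    grad = 2.0 * (theta - 3.0)
    theta, state = adam_step(theta, grad, state)
```

In the actual setup each gradient would come from a minibatch of 1000 descriptor/radiance pairs drawn from the ~15 million path-traced samples.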

slide-17
SLIDE 17

Path Tracing vs. Radiance-Predicting Neural Networks (RPNN)

Result (Test Time)

slide-18
SLIDE 18

Result (Test Time)

They argue that RPNN (taking seconds to minutes) converges about 24 times faster than path tracing

slide-19
SLIDE 19

Experiment - Neural Network Architecture

Comparison: progressive feeding vs. inputting the entire stencil hierarchy to the first layer. This highlights the benefit of progressive feeding, which provides a means to better adapt to signals at different frequency scales. (Plot: validation error)

slide-20
SLIDE 20

Experiment – Stencil Size

The chosen stencil size strikes a good balance between accuracy, the cost of querying the density values, and the number of trainable parameters in the network. (Plot: validation error)

slide-21
SLIDE 21

Summary

  • Radiative Transfer Equation (RTE)
  • Hierarchical Stencil Descriptor
  • Progressive-Feeding Neural Network