Reconstruction II: Neural Networks in Monte Carlo Rendering (PowerPoint PPT Presentation)



SLIDE 1

Realistic Image Synthesis SS2018

Reconstruction II
Neural Networks in Monte Carlo Rendering

Philipp Slusallek, Karol Myszkowski, Gurprit Singh

SLIDE 2

Previous Lecture

SLIDE 3

Slide from Kartic Subr

SLIDE 4

Depth of field

Slide from Jaakko Lehtinen

SLIDE 5

1 scanline

Slide from Jaakko Lehtinen

SLIDE 6

Screen x, Lens u

Slide from Jaakko Lehtinen

SLIDE 7

Visibility: SameSurface

The trajectories of samples originating from a single apparent surface never intersect.

Slide from Jaakko Lehtinen

SLIDE 8

Hachisuka et al. [2008]

SLIDE 9

Hachisuka et al. [2008]

SLIDE 10

Sen and Darabi [2012]

SLIDE 11

Pixels, Random Parameters, Features

The algorithm computes the statistical dependency of (c-f) on the random parameters in (b).

Sen and Darabi [2012]

SLIDE 12

Gaussian Filtering

Paris et al. [2009]

SLIDE 13

Bilateral Filtering

Paris et al. [2009]
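The Gaussian and bilateral filters on these slides can be sketched in a few lines (a minimal brute-force version, not the authors' code; the radius and sigma values are illustrative):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2D grayscale image.

    Each output pixel is a normalized weighted average of its neighbors;
    the weight combines a spatial Gaussian (as in plain Gaussian
    filtering) with a range Gaussian on the intensity difference."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial term: distance in the image plane
                        w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # range term: difference in intensity
                        diff = img[ny, nx] - img[y, x]
                        w_r = np.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        acc += w_s * w_r * img[ny, nx]
                        norm += w_s * w_r
            out[y, x] = acc / norm
    return out
```

Dropping the range term w_r recovers plain Gaussian filtering; with it, weights fall off across intensity edges, which is why the bilateral filter preserves them.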

SLIDE 14

Bilateral vs Gaussian Filtering

Paris et al. [2009]

SLIDE 15

À la Carte

  • Introduction to Multi-Layer Perceptrons (Neural Networks)
  • Machine Learning for Filtering Monte Carlo Noise [Kalantari et al. 2015]
SLIDE 16

Motivation

Bako et al. [2017]

SLIDE 17

History of Neural Networks

  • In 1943, McCulloch and Pitts created a computational model for neural networks.
  • In 1975, Werbos's backpropagation algorithm made training of multi-layer networks practical.
  • In the 1980s, Recurrent Neural Networks were developed.

SLIDE 18

Multi-Layer Perceptrons

SLIDE 19

Classifiers

yj = f(wj xj + bj)

SLIDE 20

Classifiers

yj = f(wj xj + bj)

SLIDE 21

Complex Classifiers

yj = f(wj xj + bj)

SLIDE 22

Complex Classifiers

Complex classifier: what features can produce this decision rule?

SLIDE 23

Perceptron Classifier

(Diagram: inputs x1, x2, x3, x4, x5, ... together with a constant bias input 1 feed into a single classifier unit.)

SLIDE 24

Perceptron Classifier

Inputs x1, x2, ... with weights w1, w2, ... and bias weight w0 produce the output

y = f(w1 x1 + w2 x2 + ... + w0)
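The perceptron equation above runs directly as code (a minimal sketch; the step activation and the AND-gate weights in the comment are illustrative assumptions, not from the slides):

```python
import numpy as np

def f(z):
    """Step activation: the perceptron's decision function."""
    return 1.0 if z > 0 else 0.0

def perceptron(x, w, w0):
    """y = f(w1*x1 + w2*x2 + ... + w0): weighted inputs plus bias w0."""
    return f(np.dot(w, x) + w0)

# Example: with w = (1, 1) and w0 = -1.5 this perceptron computes AND,
# a single linear decision boundary through the input space.
```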

SLIDE 25

Multi-layer Perceptron

(Diagram: input x1 and a constant bias input 1.)

SLIDE 26

Multi-layer Perceptron

(Diagram: three hidden units, each a multiply node X followed by an activation f.)

SLIDE 27

Multi-layer Perceptron

(Diagram: first-layer weights w11, w21, w31 and bias weights w10, w20, w30 on the connections into the hidden units.)

SLIDE 28

Multi-layer Perceptron

(Diagram: each hidden unit sums its weighted input and bias, wj1 x1 + wj0, before the activation f.)

SLIDE 29

Multi-layer Perceptron

Hidden-unit outputs:

y1 = f(w11 x1 + w10)
y2 = f(w21 x1 + w20)
y3 = f(w31 x1 + w30)

SLIDE 30

Multi-layer Perceptron

(Diagram: a second layer with weights w1, w2, w3 combines y1, y2, y3 into the output.)

SLIDE 31

Multi-layer Perceptron

Input features, hidden layers, output layers. The output combines the hidden-unit outputs y1, y2, y3 with the second-layer weights w1, w2, w3.

SLIDE 32

Multi-layer Perceptron

Input features, hidden layers, output layers. "Features" are outputs of perceptrons. The first-layer weights form one matrix and the second-layer weights another.
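The weight-matrix view of the two-layer perceptron can be written out directly (a sketch with step activations; the XOR weights used below are illustrative, not from the slides):

```python
import numpy as np

def step(z):
    """Step activation: each unit has a linear decision boundary."""
    return (z > 0).astype(float)

def mlp_forward(x, W1, b1, W2, b2, f=step):
    """Two-layer perceptron. W1 is the matrix of first-layer weights,
    W2 the matrix of second-layer weights; the hidden outputs
    y = f(W1 x + b1) are the 'features' fed to the output layer."""
    y = f(W1 @ x + b1)          # hidden-layer features
    return f(W2 @ y + b2)       # output layer
```

A single perceptron cannot represent XOR, but two layers can: the hidden units carve out linear partitions (here an OR unit and an AND unit), and the output layer linearly combines those partitions into the non-linearly-separable rule.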

SLIDE 33

Features of MLPs

Input features. Perceptron: a step function with a linear decision boundary.

SLIDE 34

Features of MLPs

Layer 1. 2-layer: "features" are now decision boundaries (partitions). Linear combinations of those partitions give complex partitions. These outputs are in turn input features to the next layer.

SLIDE 35

Features of MLPs

Layer 1, Layer 2: these complex outputs become the features for the new layer.

SLIDE 36

Features of MLPs

Layer 1, Layer 2, ...: Deep Neural Networks

SLIDE 37

Computational Graph Representation of Neural Networks

SLIDE 38

Neural Networks

Fully connected layers with ReLU activations. The input x1 is an N × 1 vector; the weight matrix W1 is N × N.

SLIDE 39

Neural Networks

x2 = ReLU(W1 x1): the N × N matrix W1 maps the N × 1 input x1 to the N × 1 activation x2.

SLIDE 40

Neural Networks

The next fully connected layer applies another N × N matrix W2 to x2, again followed by a ReLU, and so on.

SLIDE 41

Neural Networks

Here N represents the number of pixels in an image (the data).
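The chain x2 = ReLU(W1 x1), x3 = ReLU(W2 x2), ... of fully connected layers is a short loop over matrix products (a sketch; the tiny N = 3 weights used below are illustrative):

```python
import numpy as np

def relu(z):
    """Rectified linear unit: elementwise max(z, 0)."""
    return np.maximum(z, 0.0)

def forward(x, weights):
    """Chain of fully connected ReLU layers.

    x is an N x 1 activation vector and each W in `weights` is an
    N x N matrix, so each layer is one matrix-vector product followed
    by a ReLU: x2 = relu(W1 @ x1), x3 = relu(W2 @ x2), ..."""
    for W in weights:
        x = relu(W @ x)
    return x
```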

SLIDE 42

Neural Networks

Computational graph view: each fully connected ReLU layer is a multiply node (*) followed by a max node (the ReLU), applied to unstructured data.

SLIDE 43

Neural Networks

Computational graph: the (*, max) pattern repeats once per fully connected ReLU layer along the chain.

SLIDE 44

Two-layer model

Fully connected layers as a computational graph: * W1, max (ReLU), * W2, max (ReLU). What can be a loss function?

SLIDE 45

Two-layer model

What can be a loss function? A reference image is shown alongside the network output.

SLIDE 46

Two-layer model

What can be a loss function? Compare the network output against the reference.

SLIDE 47

Two-layer model

What can be a loss function? The L2 loss between the network output and the reference.

SLIDE 48

Two-layer model

L2 loss: || output - reference ||^2

SLIDE 49

Two-layer model

L2 loss: || output - reference ||^2
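The two-layer model with an L2 loss against a reference, as built up over the last few slides, can be sketched as (assumed form; biases are omitted, as in the slide diagrams):

```python
import numpy as np

def relu(z):
    """The max node of the computational graph: max(z, 0)."""
    return np.maximum(z, 0.0)

def two_layer(x, W1, W2):
    """Two fully connected layers, each a multiply (*) then a ReLU (max)."""
    return relu(W2 @ relu(W1 @ x))

def l2_loss(output, reference):
    """L2 loss: sum of squared differences to the reference."""
    return np.sum((output - reference) ** 2)
```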
SLIDE 50

Two-layer model: Backpropagation

Source link

SLIDE 51

Two-layer model: Backpropagation

Gradient descent is the algorithm behind backpropagation: start from a random initialization and step downhill toward the global cost minimum.
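Gradient descent itself is only a few lines (a sketch; the 1-D quadratic cost used below is illustrative, and a convex cost is assumed so the run reaches the global cost minimum):

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Plain gradient descent: starting from an initialization w0,
    repeatedly step against the gradient of the cost with step size lr."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad(w)
    return w
```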

SLIDE 52

Backpropagation

Slides courtesy: Stanford Online Course

SLIDES 53-60

Backpropagation (continued)

Slides courtesy: Stanford Online Course
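In the style of these slides, backpropagation through the two-layer scalar model can be done by hand: cache the forward intermediates, then apply the chain rule node by node from the loss backwards (a sketch with assumed scalar weights, not the course's exact example):

```python
def forward_backward(x, w1, w2, r):
    """Backpropagation by hand through out = relu(w2 * relu(w1 * x))
    with L2 loss (out - r)**2 against a reference r."""
    # forward pass: cache every intermediate node
    a = w1 * x
    h = max(a, 0.0)          # ReLU (max node)
    b = w2 * h
    out = max(b, 0.0)        # ReLU (max node)
    loss = (out - r) ** 2
    # backward pass: chain rule, right to left through the graph
    dout = 2.0 * (out - r)                 # d loss / d out
    db = dout * (1.0 if b > 0 else 0.0)    # ReLU gate
    dw2 = db * h                           # multiply node: local grad is h
    dh = db * w2
    da = dh * (1.0 if a > 0 else 0.0)      # ReLU gate
    dw1 = da * x                           # multiply node: local grad is x
    return loss, dw1, dw2
```

Each node only needs its local derivative and the gradient flowing in from the right, which is exactly what makes backpropagation mechanical.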

SLIDE 61

Machine Learning for Filtering Monte Carlo Noise

Kalantari et al. [SIGGRAPH 2015]
SLIDE 62

Reconstruction / Denoising

The denoised pixel is a weighted average over a pixel neighborhood, using per-pixel filter weights.

SLIDE 63

Filter weights

Pixel neighborhood and filter weights. For cross-bilateral filters: (weight formula on slide)

SLIDE 64

Filter weights

Pixel neighborhood and filter weights. For cross-bilateral filters: (weight formula on slide)

SLIDE 65

For cross-bilateral filters: pixel neighborhood and filter weights. (weight formula on slide)

Sen and Darabi [2012]

SLIDE 66

For cross-bilateral filters: the filter weights combine terms on the pixel screen coordinates, the mean sample color value, and the scene features.

SLIDE 67

Filter weights, for cross-bilateral filters, with terms on pixel screen coordinates, mean sample color value, and scene features. What are the optimal parameters?
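One common form of the cross-bilateral weight multiplies Gaussians on the pixel screen coordinates, the mean sample color, and the scene features (a sketch of that assumed form; the exact terms and the parameters alpha, beta, gamma in Sen and Darabi [2012] and Kalantari et al. [2015] differ in detail):

```python
import numpy as np

def cross_bilateral_weight(p_i, p_j, c_i, c_j, f_i, f_j,
                           alpha=2.0, beta=0.1, gamma=0.1):
    """Weight between pixels i and j as a product of Gaussians on
    screen position (p), mean sample color (c), and scene features (f)
    such as normals or depth. alpha/beta/gamma are the bandwidths
    whose optimal values the slide asks about."""
    w_p = np.exp(-np.sum((p_i - p_j) ** 2) / (2 * alpha ** 2))
    w_c = np.exp(-np.sum((c_i - c_j) ** 2) / (2 * beta ** 2))
    w_f = np.exp(-np.sum((f_i - f_j) ** 2) / (2 * gamma ** 2))
    return w_p * w_c * w_f
```

The denoised pixel is then the weighted average of neighborhood colors, normalized by the sum of these weights.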

SLIDE 68

Neural Network Approach

  • Feed-forward neural network
  • Best part: we can learn the weights in a training phase
  • Backpropagation is essential for training the weights
  • For backpropagation, the loss function and all intermediate functions must be differentiable

SLIDE 69

One Hidden-layer model

Relative Mean Square Error:

SLIDE 70

One Hidden-layer model

Relative Mean Square Error: ???

SLIDE 71

One Hidden-layer model

Relative Mean Square Error:
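Relative mean square error divides the squared error by the squared reference, so errors in dark pixels are not swamped by bright ones (a sketch of the usual form; the exact normalization and epsilon in Kalantari et al. [2015] may differ):

```python
import numpy as np

def relative_mse(estimate, reference, eps=0.01):
    """Relative MSE: per-pixel squared error divided by the squared
    reference plus a small eps to avoid division by zero."""
    return np.mean((estimate - reference) ** 2 / (reference ** 2 + eps))
```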

SLIDE 72

Results

SLIDE 73

Results

SLIDE 74

Results

SLIDE 75

Recurrent Autoencoder for Interactive Reconstruction

Chaitanya et al. [2017]

SLIDE 76

Recurrent Neural Networks

Source link

SLIDE 77

Recurrent Neural Networks

Source link
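A vanilla recurrent step makes the loop in these RNN slides concrete: the hidden state carries information from previous inputs (here, previous frames) into the current one (a minimal sketch, not Chaitanya et al.'s architecture):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One step of a vanilla RNN: the new hidden state mixes the
    current input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

def rnn_forward(xs, h0, W_xh, W_hh, b):
    """Unroll the recurrence over a sequence of inputs."""
    h = h0
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b)
    return h
```

In the recurrent autoencoder, this kind of recurrence is what lets the denoiser reuse information across frames for temporally stable interactive reconstruction.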

SLIDE 78

Recurrent Autoencoder [Chaitanya et al. 2017]

SLIDE 79

Recommended Reading

  • Machine Learning for Filtering Monte Carlo Noise [Kalantari et al. 2015]
  • Recurrent Autoencoder for Interactive Reconstruction [Chaitanya et al. 2017]
  • Kernel-Predicting CNNs for Monte Carlo Denoising [Bako et al. 2017]
  • Kernel-Predicting CNNs for Monte Carlo Denoising [Bako et al. 2017]
SLIDE 80

References & Bonus

  • Ian Goodfellow: Deep Learning
  • Deep Dream Generator (Google)
  • Deep Mind (Google)

Image by Google Deep Dream