Understanding camera trade-offs through a Bayesian analysis of light field projections. Anat Levin 1, Bill Freeman 1,2, Fredo Durand 1. Computer Science and Artificial Intelligence Lab (CSAIL), 1 Massachusetts Institute of Technology and 2 Adobe Systems.


SLIDE 1

Understanding camera trade-offs through a Bayesian analysis of light field projections

Anat Levin 1, Bill Freeman 1,2, Fredo Durand 1
Computer Science and Artificial Intelligence Lab (CSAIL),
1 Massachusetts Institute of Technology and 2 Adobe Systems

SLIDE 2

Cameras, old and new

Traditional camera: Lens forms final 2D image

SLIDE 3

Cameras, old and new

Traditional camera: the lens forms the final 2D image.
Computational camera: the recorded data is not the final output.

  • The visual array is estimated from sensor measurements.
  • An extra design degree of freedom.

Beyond 2D images: acquisition of light field or depth; post-exposure re-synthesis of the image.

SLIDE 4

An explosion of cameras

SLIDE 10

An explosion of cameras

Conventional single-lens cameras, stereo and trinocular cameras, coded aperture, plenoptic cameras, wavefront coding.

  • Best way to capture image and depth: stereo? Plenoptic camera? Coded aperture? Or...?
  • What aspects of these cameras contribute to their performance?
  • Can we design new cameras with improved reconstruction performance?

SLIDE 11

Camera evaluation, old and new

Traditional optics evaluation: 2D image sharpness (e.g., the Modulation Transfer Function: contrast vs. spatial frequency).
SLIDE 12

Camera evaluation, old and new

Traditional optics evaluation: 2D image sharpness (e.g., the Modulation Transfer Function: contrast vs. spatial frequency).

Our modern camera evaluation: how well does the recorded data allow us to estimate the visual world, i.e., the lightfield?

(lightfield reconstruction)


SLIDE 14

Computational photography camera evaluation: an estimation problem

  • Characteristics of the signal to be estimated.
  • Projection functions of various cameras.
  • Bayesian lightfield analysis:
    – Reconstructing the lightfield from camera data.
    – Comparing performance tradeoffs of different cameras.

So let's talk about lightfields and cameras.

SLIDE 16

What does a camera sensor element record?

Camera optics, sensor element: some linear combination of light rays.
SLIDE 21

Sensor element data:

y_i = T_i x + n_i

where y_i is the datum, T_i is the camera's 4D->2D linear projection, x is the (4D) lightfield, and n_i is noise.

SLIDE 22

What is a camera?

y = T x + n

where y is the camera data, T is the camera's 4D->2D linear projection, x is the (4D) lightfield, and n is noise.

A camera is an all-positive linear projection of a 4D lightfield.
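The relation y = T x + n can be sketched numerically. A minimal toy example (sizes, weights, and noise level are illustrative assumptions, not from the paper), treating the camera as an all-positive linear projection of a flattened lightfield:

```python
import numpy as np

# Toy model of a camera: y = T x + n, with T all-positive.
rng = np.random.default_rng(0)

n_rays = 64       # discretized flatland lightfield, flattened
n_sensors = 8     # number of sensor elements

x = rng.uniform(0.0, 1.0, n_rays)           # unknown lightfield

# Each sensor element records a non-negative combination of rays:
T = rng.uniform(0.0, 1.0, (n_sensors, n_rays))
T /= T.sum(axis=1, keepdims=True)           # normalize ray weights

sigma = 0.01
y = T @ x + sigma * rng.standard_normal(n_sensors)  # noisy data
```

Different camera designs differ only in the structure of T; the rest of the talk is about which structures allow x to be recovered.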

SLIDE 23

A more revealing parameterization of the lightfield

Light field: a parameterization of the 4D space of light rays in the world. It provides a convenient way to model different lens and camera designs.

SLIDE 24

Lightfield tutorial

(figure: a flatworld 1D scene, plotted over depth and horizontal position, and its 2D lightfield over a and b)

Two-plane parameterization [Levoy and Hanrahan 96]: each ray is indexed by its intersections a and b with the a plane and the b plane.
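The two-plane parameterization can be made concrete with a small sketch (an assumption-level toy, not the paper's code): a Lambertian fronto-parallel surface at depth z, with the a plane at z = 0 and the b plane at z = 1, fills the 2D lightfield with lines whose slope encodes depth.

```python
import numpy as np

# A ray indexed by (a, b) passes through position a + (b - a) * z
# at depth z. For a Lambertian surface with texture t(.), every ray
# meeting the same surface point carries the same radiance, so
# L(a, b) = t(a + (b - a) * z): constant along depth-dependent lines.
def flatland_lightfield(texture, z, n=32):
    a = np.linspace(0.0, 1.0, n)          # position on the a plane
    b = np.linspace(0.0, 1.0, n)          # position on the b plane
    A, B = np.meshgrid(a, b, indexing="ij")
    p = A + (B - A) * z                   # ray-surface intersection
    return texture(p)

tex = lambda p: 0.5 + 0.5 * np.sin(8 * np.pi * p)

L_near = flatland_lightfield(tex, z=0.2)  # lines of one slope
L_far = flatland_lightfield(tex, z=0.8)   # lines of another slope
```

Changing only z reorients the lines in (a, b); this is the depth-to-slope correspondence the later slides rely on.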

SLIDE 57

Pinhole camera

(figure: flatworld 1D scene and its 2D lightfield; sensor plane and pinhole aperture; the a and b planes)

y = T x

A pinhole samples a single row of the lightfield: each sensor element records one ray.

SLIDE 66

Lens, focused at the green object

(figure: flatworld 1D scene and its 2D lightfield; sensor plane and lens aperture; the a and b planes)

y = T x

Each sensor element integrates rays across the aperture along a line whose slope matches the focus depth.
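In this flatland picture the projection matrix T of each design can be written down directly. A hedged sketch (the discretization and aperture size are illustrative assumptions): the pinhole selects single rays, while the lens averages rays across the aperture for each sensor element.

```python
import numpy as np

# Lightfield on an n x n grid over (a, b), flattened to length n*n.
n = 16

def ray_index(ia, ib, n=n):
    return ia * n + ib

# Pinhole: one ray per sensor element (fixed aperture position ib0),
# so T is a row selector over the lightfield.
def pinhole_T(ib0=n // 2):
    T = np.zeros((n, n * n))
    for ia in range(n):
        T[ia, ray_index(ia, ib0)] = 1.0
    return T

# Lens: each sensor element averages rays across the aperture; here
# slope 0 models the in-focus depth, where the integration line runs
# along the b axis at fixed a.
def lens_T(aperture=5):
    T = np.zeros((n, n * n))
    lo = n // 2 - aperture // 2
    for ia in range(n):
        for ib in range(lo, lo + aperture):
            T[ia, ray_index(ia, ib)] = 1.0 / aperture
    return T
```

Both matrices have the same shape; the trade-off between light gathering (many nonzeros per row) and ambiguity (which rays get mixed) is exactly what the Bayesian analysis later quantifies.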

SLIDE 67

Lens, focused at the blue object

(figure: flatworld 1D scene and its 2D lightfield; sensor plane and lens aperture; the a and b planes)

SLIDE 71

Stereo

(figure: flatworld 1D scene and its 2D lightfield; sensor plane and two apertures; the a and b planes)

SLIDE 72

Plenoptic camera [Adelson and Wang 92, Ng et al. 05]

(figure: flatworld 1D scene and its 2D lightfield; main lens, micro-lenses, sensor plane; the a and b planes)

SLIDE 73

Wavefront coding [Dowski and Cathey 94]

(figure: flatworld 1D scene and its 2D lightfield; cubic phase plate, aperture, sensor plane; the a and b planes)

SLIDE 74

Computational imaging

y = T x + n
(data = camera projection x lightfield + noise)

Camera: a rank-deficient projection of a 4D lightfield.
Decoding: an ill-posed inversion; we need a prior on lightfield signals.
Camera evaluation: how well can we recover the lightfield from its projection?
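The decoding step (ill-posed inversion regularized by a prior) has a closed form in the Gaussian case. A minimal sketch, assuming a generic Gaussian prior x ~ N(0, C) rather than the paper's lightfield prior:

```python
import numpy as np

# MAP estimate for y = T x + n with x ~ N(0, C), n ~ N(0, sigma^2 I):
#   x_hat = (T' T / sigma^2 + C^{-1})^{-1} T' y / sigma^2
rng = np.random.default_rng(1)
m, d = 20, 50                  # fewer measurements than unknowns:
T = rng.normal(size=(m, d))    # a rank-deficient projection
C = np.eye(d)                  # stand-in prior covariance
sigma = 0.1

x_true = rng.normal(size=d)
y = T @ x_true + sigma * rng.normal(size=m)

A = T.T @ T / sigma**2 + np.linalg.inv(C)
x_hat = np.linalg.solve(A, T.T @ y / sigma**2)
```

The prior term C^{-1} fills in the directions T does not observe; without it, A would be singular and the inversion undefined.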

SLIDE 79

Varying imaging goals by weighted lightfield reconstruction

Weight the reconstruction error differently on different light field entries:

  • Full light field reconstruction (potentially image & depth)
  • Reconstruct a bounded view range
  • Single-row light field reconstruction (a pinhole-like, all-focused image)
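The weighting idea can be sketched as follows (a toy example with made-up per-entry errors; the weight patterns are assumptions mirroring the goals above):

```python
import numpy as np

# Score a reconstruction under a diagonal weight W on lightfield
# entries: different W's express different imaging goals.
def weighted_sse(x_hat, x_true, w):
    return float(np.sum(w * (x_hat - x_true) ** 2))

n = 8                                    # n x n flatland lightfield
x_true = np.zeros((n, n))
x_hat = x_true + 0.1                     # uniform toy error

w_full = np.ones((n, n))                 # full lightfield goal
w_row = np.zeros((n, n))
w_row[0] = 1.0                           # single-row (pinhole view) goal

e_full = weighted_sse(x_hat, x_true, w_full)
e_row = weighted_sse(x_hat, x_true, w_row)
```

The same reconstruction scores differently under each goal: the single-row weight penalizes only one row of entries, so e_row < e_full here.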

SLIDE 80

Bayesian lightfield imaging - Outline

  • Specify lightfield reconstruction goals: full lightfield / single, all-focus view / …
  • Specify a lightfield prior.
  • Imaging with one computational camera: specify the camera projection matrix; camera decoding is Bayesian inference.
  • Comparing computational cameras: specify the camera projection matrices; evaluate the expected error in lightfield reconstruction.

SLIDE 83

Our light field prior: a mixture of signals at different slopes

A hidden variable S models the local slope. Conditioned on the slope, variance is small along the slope direction and high along the spatial direction.

With a piecewise-smooth prior on slopes, and the lightfield Gaussian and simple given the slope, the light field prior is a mixture of oriented Gaussians (MOG).
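One oriented component of this mixture can be sketched directly (an assumption-level construction on 2-vectors; the paper's actual covariances are over full lightfield patches):

```python
import numpy as np

# Oriented Gaussian for slope s: small variance along the slope
# direction, large variance along the orthogonal (spatial) direction.
def oriented_cov(s, var_along=1e-3, var_across=1.0):
    u = np.array([1.0, s]) / np.hypot(1.0, s)   # slope direction
    v = np.array([-u[1], u[0]])                 # orthogonal direction
    return var_along * np.outer(u, u) + var_across * np.outer(v, v)

C = oriented_cov(s=0.5)

# The full prior mixes such components over a piecewise-smooth slope
# field S: p(x) = sum_S p(S) N(x; 0, C_S).
```

Samples from N(0, C) look like lines of slope s, which is exactly the structure the flatland lightfield tutorial showed for surfaces at a fixed depth.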

SLIDE 85

Reconstruction using the light field prior

(figure: the prior's effect on reconstruction vs. a band-limited reconstruction that accounts for unknown depth)

See the paper for inference details.

SLIDE 89

Camera evaluation

(figure: posterior probability P(x | y, T) of the lightfield given data, camera, and prior, plotted schematically against the very high-dimensional lightfield x around the true lightfield x0; a good camera yields a peaked posterior, a bad camera a broad one)

Goal: evaluate the inherent ambiguity of a camera projection, independent of the inference algorithm.

SLIDE 93

Camera evaluation function: expected squared error

With our mixture-model prior, conditioned on the lightfield slopes S, everything is Gaussian and analytic, so we write the posterior conditioned on S. The expected squared error then becomes an integral over all slope fields, which we approximate by Monte Carlo sampling near the true slope field.

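The Monte Carlo approximation on this slide can be written out for a toy scalar slope (the slope-dependent covariance and the sampling width are illustrative assumptions; in the paper S is a full slope field):

```python
import numpy as np

# Conditioned on a slope s, everything is Gaussian, so the expected
# squared error has a closed form: the trace of the posterior
# covariance (T' T / sigma^2 + C_s^{-1})^{-1}. The outer integral
# over slopes is approximated by sampling near the true slope.
rng = np.random.default_rng(2)
m, d, sigma = 10, 30, 0.1
T = rng.normal(size=(m, d))          # toy camera projection

def cov_for_slope(s):
    # stand-in prior covariance, mildly slope-dependent
    return (1.0 + 0.1 * s * s) * np.eye(d)

def expected_sse(s):
    A = T.T @ T / sigma**2 + np.linalg.inv(cov_for_slope(s))
    return np.trace(np.linalg.inv(A))

true_slope = 0.5
samples = true_slope + 0.1 * rng.standard_normal(100)
score = float(np.mean([expected_sse(s) for s in samples]))
```

A lower score means the camera's projection leaves less posterior uncertainty about the lightfield; comparing scores across different T's is the camera-comparison tool of the next slide.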
SLIDE 94

Bayesian camera evaluation tool

Matlab software online: people.csail.mit.edu/alevin/papers/lightfields-Code-Levin-Freeman-Durand-08.zip

Input parameters:
  • Reconstruction goals (weights on light field entries)
  • Camera matrix
  • Noise level
  • Spatial and depth resolution

Output: expected reconstruction error.

SLIDES 95-100

1D camera evaluation: full light field reconstruction

(figure: expected lightfield estimation error per camera)

Observations:
  • As expected, a pinhole camera does not estimate the lightfield well.
  • When depth variation is limited, some depth-from-defocus information exists even in a single monocular view from a standard lens.
  • Wavefront coding, not designed to estimate the lightfield, does not.
  • Depth from defocus (DFD) outperforms the coded aperture at these settings.
  • Stereo error is lower than the plenoptic camera's: since depth variation is smaller than texture variation, there is no need to sacrifice so much spatial resolution to capture directional information.

SLIDE 104

1D camera evaluation: single-row reconstruction

(figure: expected lightfield estimation error per camera)

Observations:
  • Pinhole camera: poor estimation due to noise.
  • Wavefront coding: no depth information, but accurate reconstruction of a single view.

SLIDE 106

Application: motion-invariant photography (SIGGRAPH 2008, Levin et al.)

(figure: space-time lightfield with depth-invariant vs. motion-invariant integration; a static camera, the motion-invariant input, and the output after deblurring)

Speed-invariant blur allows non-blind deconvolution.

SLIDE 107

Summary: Bayesian lightfield imaging (y = T x)

  • Model imaging as a linear light field projection.
  • A new prior on light field signals.
  • Camera decoding expressed as a Bayesian inference problem.
  • A framework and software for comparison across camera configurations, by evaluating uncertainty in light field reconstruction.
  • Principled novel camera design.