Understanding camera trade-offs through a Bayesian analysis of light field projections


Understanding camera trade-offs through a Bayesian analysis of light field projections. Anat Levin¹, Bill Freeman¹,², Fredo Durand¹. ¹Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology; ²Adobe


  1–9. Lens, focused at the green object. [Animation build of a diagram: a flatworld 1D scene (horizontal position × depth) and its 2D light field parameterized by the a-plane and b-plane, with the aperture and sensor plane marked. The final frame summarizes image formation as y = Tx.]

  10–13. Lens, focused at the blue object. [Same diagram with the lens refocused: flatworld 1D scene, 2D light field (a-plane, b-plane), aperture and sensor plane.]

  14. Stereo. [Diagram: two apertures over the sensor plane; flatworld 1D scene and its 2D light field (a-plane, b-plane).]

  15. Plenoptic camera. [Diagram: main lens aperture, micro-lenses, sensor plane; flatworld 1D scene and 2D light field.] Adelson and Wang 1992; Ng et al. 2005.

  16. Wavefront coding. [Diagram: aperture with a cubic phase plate in front of the sensor plane; flatworld 1D scene and 2D light field.] Dowski and Cathey, 1994.

  17. Computational imaging. Camera: a rank-deficient projection of the 4D light field. Decoding: an ill-posed inversion that needs a prior on light field signals. Camera evaluation: how well can the light field be recovered from the projection? y = Tx + n (data = camera matrix × light field + noise).
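A minimal flatland sketch of this y = Tx + n view (my own illustration, not the authors' code): the 2D light field is flattened into a vector x, a camera is a matrix T, and a pinhole or a lens simply fills T differently. The discretization, the `pinhole_T` / `lens_T` helpers, and all parameter values below are assumptions made for illustration.

```python
# Toy flatland illustration of "camera = linear projection of the light field":
# the 2D light field x(u, v) is flattened into a vector, a camera is a matrix T,
# and a measurement is y = T x + n.  Discretization, helper names, and all
# parameter values are my own choices for illustration, not the paper's code.
import numpy as np

nu, nv = 32, 32     # nu aperture (view) samples x nv spatial samples
N = nu * nv         # light field dimension

def pinhole_T():
    """Pinhole: keep a single view row (one aperture sample, all spatial positions)."""
    T = np.zeros((nv, N))
    u0 = nu // 2
    for v in range(nv):
        T[v, u0 * nv + v] = 1.0
    return T

def lens_T(slope=0.0):
    """Idealized thin lens: each sensor pixel averages over the aperture along a
    sheared line; `slope` stands in for the focus depth (0 = focused plane)."""
    T = np.zeros((nv, N))
    for v in range(nv):
        for u in range(nu):
            vs = int(round(v + slope * (u - nu / 2))) % nv   # sheared sample, wrapped
            T[v, u * nv + vs] += 1.0 / nu
    return T

# Simulate one noisy measurement y = Tx + n from a random light field
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
T = lens_T(slope=0.5)
sigma = 0.01
y = T @ x + sigma * rng.standard_normal(T.shape[0])
print(T.shape, np.linalg.matrix_rank(T))   # rank <= nv << N: a rank-deficient projection
```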

  18–22. Varying imaging goals by weighted light field reconstruction: weigh the reconstruction error differently in different light field entries.
  • Full light field reconstruction (potentially image and depth)
  • Reconstruct a bounded view range
  • Single-row light field reconstruction (a pinhole-style, all-focused image)
  A toy encoding of these weights is sketched below.
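The three goals above can be encoded as diagonal weight matrices W over light field entries, so the weighted error becomes (x − x̂)ᵀ W (x − x̂). A hedged sketch, reusing nu, nv, and N from the previous snippet; the helper names are hypothetical.

```python
# Hedged sketch: the three reconstruction goals as diagonal weight matrices W,
# so the weighted error is (x - x_hat)^T W (x - x_hat).
# Reuses nu, nv, N from the previous snippet; helper names are hypothetical.
import numpy as np

def W_full():
    """Full light field reconstruction: every entry matters equally."""
    return np.eye(N)

def W_view_range(u_lo, u_hi):
    """Bounded view range: only aperture samples u_lo..u_hi-1 matter."""
    w = np.zeros((nu, nv))
    w[u_lo:u_hi, :] = 1.0
    return np.diag(w.ravel())

def W_single_view(u0=None):
    """Single-row goal: one view (a pinhole-style, all-focused image)."""
    u0 = nu // 2 if u0 is None else u0
    w = np.zeros((nu, nv))
    w[u0, :] = 1.0
    return np.diag(w.ravel())
```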

  23–24. Bayesian light field imaging - Outline
  • Specify light field reconstruction goals: full light field / single, all-focus view / …
  • Specify light field prior
  • Imaging with one computational camera: specify the camera projection matrix; camera decoding via Bayesian inference
  • Comparing computational cameras: specify camera projection matrices; evaluate the expected error in light field reconstruction

  25–26. Our light field prior: a mixture of signals at different slopes. A hidden variable S models the local slope. Conditioning on the slope: small variance along the slope direction, high variance along the spatial direction. The light field prior is therefore a mixture of oriented Gaussians (MOG): the prior on slopes is piecewise smooth, and given the slope the light field prior is Gaussian and simple. (A covariance sketch follows.)
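As a rough sketch of one mixture component (my own discretization, not the paper's exact prior): given a slope field S, build a Gaussian precision matrix that strongly penalizes variation along each local slope direction and only weakly penalizes spatial variation; the full prior is then a mixture of such Gaussians weighted by a prior over slope fields.

```python
# Rough sketch of one oriented-Gaussian mixture component (my own discretization,
# not the paper's exact prior): conditioned on a slope field S, the light field is
# Gaussian with a precision matrix that strongly penalizes variation along the
# local slope direction and only weakly penalizes variation elsewhere.
import numpy as np

def slope_precision(slope_field, sigma_along=0.05, sigma_spatial=10.0):
    """slope_field: (nu, nv) array of local slopes s(u, v).
    Returns the precision (inverse covariance) of a zero-mean Gaussian prior
    over the flattened light field."""
    nu, nv = slope_field.shape
    N = nu * nv
    idx = lambda u, v: u * nv + (v % nv)
    rows = []
    for u in range(nu - 1):
        for v in range(nv):
            s = slope_field[u, v]
            # finite difference along the slope: x(u+1, v+s) - x(u, v), small variance
            d = np.zeros(N)
            d[idx(u + 1, int(round(v + s)))] += 1.0
            d[idx(u, v)] -= 1.0
            rows.append(d / sigma_along)
    D = np.array(rows)
    # weak isotropic term: high (but finite) variance in the spatial direction
    return D.T @ D + np.eye(N) / sigma_spatial**2
```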

  27. Bayesian light field imaging - Outline (repeated as a section transition).

  28. Prior effect on reconstruction. [Comparison: a band-limited reconstruction that accounts for unknown depth vs. a reconstruction using the light field prior; see the paper for inference details.] A conditional-Gaussian reconstruction sketch follows.
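To make the decoding step concrete: once a slope field S is fixed, the prior is Gaussian, so the posterior over the light field is Gaussian and the posterior-mean (MAP) reconstruction is a single linear solve. This is only the conditional-Gaussian core; the paper's full inference also reasons about S.

```python
# Hedged sketch of the decoding core: with a slope field S fixed, prior and
# likelihood are both Gaussian, so the posterior mean (= MAP) is one linear
# solve.  The paper's full inference also estimates S; see the paper.
import numpy as np

def reconstruct(y, T, prior_precision, sigma_noise):
    """Posterior mean of x for y = Tx + n, n ~ N(0, sigma_noise^2 I),
    x ~ N(0, prior_precision^{-1})."""
    A = T.T @ T / sigma_noise**2 + prior_precision   # posterior precision
    b = T.T @ y / sigma_noise**2
    return np.linalg.solve(A, b)
```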

  29. Bayesian light field imaging - Outline (repeated as a section transition).

  30–32. Camera evaluation. Goal: evaluate the inherent ambiguity of a camera projection, independent of the inference algorithm. [Schematic: the posterior P(x | y, T), the probability of the light field given the data, camera, and prior, plotted against the (very high-dimensional) light field x around the true light field x₀. A good camera yields a posterior tightly concentrated around x₀; a bad camera yields a spread-out one.]

  33–36. Camera evaluation function: expected squared error. With our mixture-model prior, conditioned on the light field slopes S, everything is Gaussian and analytic, so the posterior can be written in closed form. The expected squared error then becomes an integral over all slope fields, which we approximate by Monte Carlo sampling near the true slope field. (A condensed computation is sketched below.)
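A condensed, hedged version of that computation: for a fixed slope field S the posterior covariance is C_S = (TᵀT/σ² + Prec_S)⁻¹, the expected weighted squared error of the posterior-mean estimate is trace(W C_S), and the mixture over slope fields is handled by Monte Carlo averaging over sampled slope fields. This reuses `slope_precision` and the W_* helpers from the sketches above and simplifies away some details of the paper's exact estimator.

```python
# Condensed sketch of the evaluation score: for a fixed slope field S the
# posterior covariance is C_S = (T^T T / sigma^2 + Prec_S)^{-1}, and the
# expected weighted squared error of the posterior-mean estimate is tr(W C_S).
# The integral over slope fields is approximated by Monte Carlo averaging over
# sampled slope fields (the paper samples near the true slope field).
import numpy as np

def expected_error(T, W, slope_field_samples, sigma_noise):
    errs = []
    for S in slope_field_samples:
        prec = slope_precision(S)
        C = np.linalg.inv(T.T @ T / sigma_noise**2 + prec)   # posterior covariance
        errs.append(np.trace(W @ C))                         # E ||x - x_hat||_W^2
    return float(np.mean(errs))
```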

  37. Bayesian camera evaluation tool. Input parameters: reconstruction goals (weights on light field entries), camera matrix, noise level, spatial and depth resolution. Output: expected reconstruction error. Matlab software online: people.csail.mit.edu/alevin/papers/lightfields-Code-Levin-Freeman-Durand-08.zip
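For illustration only, the toy pieces built up in the sketches above (not the released Matlab tool) could be combined like so:

```python
# Example use of the toy pipeline above (not the released Matlab tool):
# compare a pinhole and a lens on the full-light-field goal at one noise level.
import numpy as np

# constant slope fields as crude stand-ins for sampled scene depths
samples = [np.full((nu, nv), s, dtype=float) for s in (-0.5, 0.0, 0.5)]
W = W_full()
for name, T in [("pinhole", pinhole_T()), ("lens", lens_T(slope=0.0))]:
    print(name, expected_error(T, W, samples, sigma_noise=0.01))
```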

  38. 1D camera evaluation: full light field reconstruction. [Plot: expected light field estimation error for each camera.]

  39. Observation: as expected, a pinhole camera doesn't estimate the light field well.

  40. Observation: when depth variation is limited, some depth-from-defocus information exists even in a single monocular view from a standard lens.

  41. Observation: wavefront coding, which was not designed to estimate the light field, indeed doesn't.

  42. Observation: depth from defocus (DFD) outperforms the coded aperture at these settings.

  43. Observation: stereo error is lower than plenoptic error. Since depth variation is smaller than texture variation, there is no need to sacrifice so much spatial resolution to capture directional information.
