The SpaceFusion* project: applications to remote sensing and 3D topographic reconstruction (PowerPoint PPT presentation)

SLIDE 1

The SpaceFusion* project:

applications to remote sensing and 3D topographic reconstruction

André Jalobeanu PASEO Research Group MIV team @ LSIIT, Illkirch, France

PASEO

*Projet ANR “Jeunes Chercheurs” 2006-2008

SLIDE 2

Outline

  • Introduction
  • Objectives
    • Remote sensing: reflectance
    • Small bodies: geometry
    • Earth, planets: reflectance and topography
  • Proposed approach
    • Bayesian inference from multiple observations
    • Accurate forward modeling
    • Preliminary results: 1D and 2D in astronomy
  • Extensions
    • Deformation fields
    • Non-optical signal fusion

SLIDE 3

Why multisource data fusion?

Multisource data fusion:

  • Optimally combine all observations into a single object
  • Preserve all the information from the original data set
  • Increase resolution if needed
  • Compute the uncertainties
  • Reconstruct the 3D geometry if required

Enhance the object quality (optional):

  • Denoise or deblur, depending on the degradation

Problem: lots of data, same object! Images are recorded with various:

  • positions, orientations (pose)
  • sensors (resolution, noise, bad pixels)
  • observing conditions (transparency, haze)
  • instruments (blur, distortions)

SLIDE 4

The SpaceFusion project

  Name                          Position, lab              time%
  André Jalobeanu               CR, LSIIT/MIV Illkirch     90%
  Christophe Collet             PU, LSIIT/MIV Illkirch     40%
  Mireille Louys                MCF, LSIIT/MIV Illkirch    40%
  Fabien Salzenstein            MCF, InESS Strasbourg      40%
  Françoise Nerry               CR, LSIIT/TRIO Illkirch    20%
  Albert Bijaoui / Eric Slezak  A/AA, OCA Nice             10%
  Bernd Vollmer                 A, Obs. Strasbourg         10%
  Jorge A. Gutiérrez + ?        PhD+, Illkirch             20 mo total

Projet ANR “Jeunes Chercheurs 2005” (French Research Agency) 3-year grant, Jan 2006 - Dec 2008

SLIDE 5

Objectives

Reconstruct a reflectance function in remote sensing Recover the geometry of small bodies and planetary surfaces Reconstruct both reflectance and topography in Earth/Space Sciences

SLIDE 6

Remote sensing: 2D reflectance reconstruction (3D space)

Input:
  • Multiple images (single band, multispectral or hyperspectral)
  • Optical / calibrated or not / missing or corrupted data
Output:
  • 2D reflectance map (image-like), well-sampled
  • Uncertainties (simplified inverse covariance)
  • If applicable, spatial and spectral super-resolution

ReflectanceFusion

Multisource data fusion for flat terrain BRDF recovery Remote Sensing, Planetary Imaging

SLIDE 7

Build a 2D reflectance map

[Figure: Views 1, 2 and 3 (observations) generated from a single model: a reflectance map (possibly multispectral, bidirectional)]

Assumption: flat terrain

2D rigid data fusion is not sufficient

SLIDE 8

Small bodies: 3D surface recovery (geometry only)

3DShapeInference
  3D shape recovery via Bayesian inference
  Planetary Imaging (small bodies and planets)

SurfaceModelRender
  Accurate rendering and modeling of natural 3D surfaces

Input:
  • Multiple images (single band)
  • Optical, IR / calibrated or not / missing or corrupted data
Output:
  • 3D geometry (height field, planar/spherical topology)
  • Uncertainties (simplified inverse covariance)

SLIDE 9

Asteroid data fusion

Single model: 3D geometry (spherical height field)

Observations: radar, lidar, optical

2D data fusion is not sufficient

SLIDE 10

Comparing synthetic and real images...

(or why use forward modeling)

[Figure: a 3D surface rendered into a synthetic image, compared to the observed image]

A 2D image is a corrupted measurement of a 2D rendering of a 3D scene.

Simplest, intuitive approach: compare real and synthetic images until the best fit is found...

To do so we need to accurately render 3D surfaces and take the imaging model into account. We use a probabilistic approach, providing a formal framework for image comparison and uncertainties.

SLIDE 11

3D surface rendering

A possible rendering algorithm:
  1. Projection: project vertices onto the image plane
  2. Visible area determination for each triangle w.r.t. each pixel
  3. Shadow computation
  4. Irradiance computation (use albedo and reflectance model)
  5. Intensity formation (use visibility polygons and irradiance)
  6. Blur (convolution with the PSF)

[Figure: simulated observed image of 433 Eros, with uniform albedo, Lambert reflectance and Gaussian blur]
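The irradiance and blur steps of such a pipeline can be sketched for a simple height field under the same assumptions as the Eros simulation (uniform albedo, Lambert reflectance). This is an illustrative sketch, not project code: the toy scene, light direction and the 3x3 box blur standing in for the Gaussian PSF are all assumptions.

```python
import numpy as np

def lambert_render(z, light, albedo=0.8):
    """Render a height field z with uniform albedo and Lambert reflectance."""
    # Surface normals from the height-field gradients
    gy, gx = np.gradient(z)
    normals = np.dstack([-gx, -gy, np.ones_like(z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light, float)
    l /= np.linalg.norm(l)
    # Lambert: irradiance proportional to max(0, n . l), scaled by the albedo
    irradiance = albedo * np.clip(normals @ l, 0.0, None)
    # Crude stand-in for the Gaussian PSF: a 3x3 box blur
    return sum(np.roll(irradiance, (dy, dx), axis=(0, 1))
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

# Toy scene: a Gaussian bump lit from the upper left
y, x = np.mgrid[-1:1:64j, -1:1:64j]
img = lambert_render(5 * np.exp(-8 * (x**2 + y**2)), light=(-1, -1, 1))
```

A full renderer would add the projection, visibility and shadow steps, which dominate the cost.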

SLIDE 12

Earth & Planetary Sciences: reflectance and topography recovery

Input:
  • Multiple images (single band, multispectral or hyperspectral)
  • Optical, IR / calibrated or not / missing or corrupted data
Output: the works!
  • 3D geometry + reflectance map
  • Uncertainties (simplified inverse covariance)
  • If applicable, spatial and spectral super-resolution (reflectance)

3DSpaceFusion

Multisource data fusion, 3D surface recovery and super-resolution Planetary Imaging

3DEarthFusion

Multisource data fusion, 3D surface recovery, BRDF inference and super-resolution Remote Sensing

SLIDE 13

Topography and albedo recovery

[Figure: graphical model relating the N images to the cameras (registration), the light sources, the shading-free reflectance field and the geometry (DEM, height field). Image credits: NASA]

SLIDE 14

The proposed approach

Bayesian inference from multiple observations:
  • Uncertainty estimates, recursive data processing
  • In 2D: recover a well-sampled image, possibly super-resolved
  • In 3D: recover the geometry
  • Check the validity of this approach

[Diagram: an observed image Y depends on the resolution, camera pose parameters, internal camera parameters, global PSF, sensor sampling grid and noise, through a geometric mapping and rendering coefficients]

SLIDE 15

Bayesian Vision & inverse problems

Computer vision: model reconstruction from multiple observations, the inverse problem of rendering.

Bayesian inference applied to this inverse problem: everything is described by random variables.

Data fusion into a single model becomes a parameter estimation problem. It can be solved by existing efficient optimization techniques.
SLIDE 16

Bayesian inference

p(θ | observations) = p(observations | θ) × p(θ) / p(observations)

  • θ: parameters of interest (unknown solution)
  • p(observations | θ): likelihood (image formation model)
  • p(θ): prior model (a priori knowledge about the observed object)
  • p(observations): evidence (useful for model comparison)

OBJECTIVE: the posterior probability density function (pdf)

! All parameters are random variables
! Bayesian inference → functional optimization / approximations
! Deterministic optimization techniques for speed

SLIDE 17

Probabilistic fusion vs. averaging

[Figure: Results #1 and #2 combined by plain averaging vs. probabilistic fusion]

  • Take into account uncertainties: variance, correlations
  • Formal framework for the combination of multiple observations
  • Propagate uncertainties from the observation noise to the end result!
  • Downside: algorithms ought to account for input uncertainties
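For Gaussian errors, the difference between plain averaging and probabilistic fusion can be illustrated with two measurements of the same scalar: fusion weights each one by its inverse variance and also returns the variance of the result. A minimal sketch (the numbers are made up):

```python
def fuse(values, variances):
    """Inverse-variance weighted fusion of independent Gaussian measurements.

    Returns the fused mean and its variance; plain averaging would
    ignore the input variances entirely.
    """
    inv_vars = [1.0 / v for v in variances]
    precision = sum(inv_vars)            # inverse covariance of the result
    mean = sum(x * w for x, w in zip(values, inv_vars)) / precision
    return mean, 1.0 / precision

# Two observations of the same quantity: a noisy one and an accurate one
mean, var = fuse([10.0, 12.0], [4.0, 1.0])
print(mean)  # 11.6: pulled toward the accurate measurement
print(var)   # 0.8: smaller than either input variance
```

The output variance is what makes recursive processing possible: the fused result can be fed back in as just another measurement.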

SLIDE 18

Bayesian inference from multiple observations

Forward modeling:

  • Object modeling (image, 3D geometry, reflectance...)
  • Image formation = rendering + degradations

Bayesian inference:

  • Estimate the optimal object given all observations:

find the max/mean of the posterior distribution

  • Integrate w.r.t. all nuisance variables: “marginalization”
  • Evaluate the uncertainties:

covariance matrix (Gaussian approx. of the posterior)

  • Model selection and assessment

[Diagram: observations Y₁...Yₙ generated from the unknown object X, via image model parameters and camera parameters]

SLIDE 19

Building blocks of the joint posterior probability density function

P(surface, cameras | images) ∝ P(surface) × ∏_i P(camera_i) × ∏_i P(image_i | surface, camera_i)
(posterior density ∝ prior densities × likelihood)

  • Surface model (prior on the geometry, reflectance...): Gaussian pdf (smoothness prior)
  • Camera prior (camera pose, camera physics): Dirac pdf (calibrated camera)
  • Image formation model (likelihood): Gaussian pdf; rendering I(v, L, θ) plus additive Gaussian noise, i.e. N(I(v, L, θ), σ²), over the vertices {v_k}, light sources {L_j}, parameters {θ_i} and pixel values {X_i}
SLIDE 20

Accurate forward modeling

2D object, 3D space: resampling; account for perspective, atmosphere, deformations, blur.

3D object, 3D space: rendering in the object space; account for occlusions, shadows, perspective, atmosphere, deformations, blur.

Deterministic image formation: rendering (irradiance L(x), convolution with the PSF h).

Probabilistic image formation: sensor noise modeling, with independent Gaussian noise of spatially variable variance.

SLIDE 21

2D reflectance map model

Model of the unknown object (2D image): choose an appropriate parametrization and topology.
  • Sampling grid size, rectangular or hexagonal lattice

Understand the sampling theorem!
  • Don't try to go beyond the Nyquist rate (optical frequency cut-off)
  • Near-optimal sampling, band-limited: B-Spline-3 kernel
  • Output pixel size < input blur size / 2

Constrain and stabilize this inverse problem:
  • Use smoothness priors to avoid noise amplification (oversampled areas will undergo a deconvolution even if we just want data fusion...)
  • Use efficiently designed prior models (e.g. multiscale, wavelets) to help preserve useful information while filtering the noise

Target PSF: B-Spline 3
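The cubic B-spline target kernel mentioned above has a simple closed form; a sketch of its evaluation, following the standard cubic B-spline definition:

```python
def bspline3(t):
    """Cubic B-spline kernel: a smooth, near-optimal band-limited interpolator.

    Piecewise cubic, supported on |t| < 2, integrates to 1.
    """
    t = abs(t)
    if t < 1:
        return (4 - 6 * t**2 + 3 * t**3) / 6
    if t < 2:
        return (2 - t)**3 / 6
    return 0.0

# Partition of unity: integer-shifted kernels sum to 1 at any point
total = sum(bspline3(0.3 - k) for k in range(-2, 3))
print(total)  # 1.0
```

The partition-of-unity property is what makes it a well-behaved basis for the reconstructed image.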

SLIDE 22

3D object model example:

fractal surface geometry model

Adaptive scale-invariant Gaussian model for wavelet coefficients: a priori surface roughness.
  • geometric wavelet details = Gaussian random variables
  • adaptive parameters = local scales

Statistical self-similarity: a 3D analog of the fractional Brownian motion in 2D... It also works for the Mars DEM (MOLA).
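A scale-invariant Gaussian prior of this kind can be sampled by drawing coefficients whose variance shrinks with scale. The spectral-synthesis sketch below is a generic fractional-Brownian-surface generator, not the project's adaptive wavelet model; the roughness exponent H and grid size are illustrative:

```python
import numpy as np

def fbm_surface(n=64, H=0.5, seed=0):
    """Sample an approximately self-similar (fBm-like) surface by shaping
    white Gaussian noise with a power-law spectrum ~ f^-(H+1)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                     # avoid division by zero at DC
    amplitude = radius ** -(H + 1.0)
    amplitude[0, 0] = 0.0                  # zero-mean surface
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * amplitude).real

z = fbm_surface()
```

Larger H gives smoother surfaces, playing the role of the a priori roughness parameter.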

SLIDE 23

Samples from a fractal surface model

Take samples from the joint density:

P(image, surface | camera) = P(image | surface, camera) × P(surface)
(image formation model × surface model)

[Figure: surface samples for q = 1.1 and a roughness parameter of 0.5, 1.5 and 5.0; assumptions: uniform albedo and roughness]

SLIDE 24

Geometric transforms

  • Scaling: simple parametrization; usually known.
  • Optical distortions: radial parametrization f_k(r) = r + k₃r³ + k₅r⁵ + ...; polynomial, e.g. Hubble Space Telescope; usually calibrated at instrument commissioning.
  • Rotation & translation (viewing): rotation matrix [R], translation vector T.
  • Perspective (world to camera): rational function.
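The radial distortion model is straightforward to apply to image coordinates; a sketch with illustrative coefficient values (real k₃, k₅ come from instrument calibration):

```python
import math

def distort(x, y, k3=1e-3, k5=1e-6):
    """Apply the radial distortion f_k(r) = r + k3*r^3 + k5*r^5
    to a point (x, y), keeping its polar angle unchanged."""
    r = math.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    scale = (r + k3 * r**3 + k5 * r**5) / r
    return x * scale, y * scale

x, y = distort(3.0, 4.0)  # r = 5 maps to 5 + 0.125 + 0.003125
```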

SLIDE 25

2D image reconstruction

Related problems:
  • Image registration: external camera parameter estimation
  • Image modeling for regularization
  • Automatic parameter estimation & model selection

Goal: combine N images (different blur, resolution, FOV, noise...) into a single object: pixel values + uncertainties. Preserve the information from the original data set: photometry and astrometry in astronomy.

SLIDE 26

Related, existing methods (2D object, 2D space)

  • Frame coaddition: increase the SNR and the dynamic range
  • Mosaicing: increase the field of view
  • Super-resolution: de-aliasing, bandwidth extrapolation

These could be generalized to 2D objects in a 3D space, but they provide no uncertainties, among other drawbacks...

SLIDE 27

Existing methods for 3D reconstruction

Shape from Stereo [Zhang et al. 94]

Drawbacks:
  • Relies on finding point matches in both images
  • The density of the recovered surface points is data-dependent

Shape from Shading [Horn & Brooks 89]

Drawbacks:
  • Density of points is fixed at the image resolution
  • Assumptions: Lambertian reflection and constant albedo

Generalized stereo [Fua & Leclerc 94]

Drawbacks:
  • Not a Bayesian method: difficult to infer prior model parameters, and the data term is not natural

SLIDE 28

Resampling scheme for a 2D object

Deterministic image formation:
  1. Deformation (geometric mapping f, with its parameters)
  2. Convolution with the Point Spread Function (PSF) h
  3. Sampling on a discrete pixel grid (irregular in model space)

B-Spline interpolation coefficients L such that the target image X = L ∗ BSpline.

[Figure: target PSF (B-Spline 3) in model (world) space; warped, shifted instrument PSF at pixel p in sensor space; n instruments]
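The B-spline expansion, specialized to 1D for brevity: the model is X(t) = Σ_j L_j B3(t - j), and each sensor sample evaluates it at a warped, irregular position. The constant test signal, toy warp and grid below are illustrative assumptions:

```python
import numpy as np

def bspline3(t):
    """Cubic B-spline kernel (support |t| < 2), vectorized."""
    t = np.abs(t)
    return np.where(t < 1, (4 - 6 * t**2 + 3 * t**3) / 6,
                    np.where(t < 2, (2 - t)**3 / 6, 0.0))

def sample_model(coeffs, positions):
    """Evaluate X(t) = sum_j L_j * B3(t - j) at arbitrary (warped) positions."""
    j = np.arange(len(coeffs))
    return bspline3(positions[:, None] - j[None, :]) @ coeffs

# Irregular sensor grid: a regular grid pushed through a toy warp (a shift)
model_coeffs = np.ones(10)                      # a constant signal
sensor_positions = np.linspace(2.0, 7.0, 5) + 0.3
samples = sample_model(model_coeffs, sensor_positions)
```

Because the shifted kernels form a partition of unity, a constant signal is reproduced exactly at every warped position.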

SLIDE 29

How the inference algorithm works

Compute the marginal Maximum A Posteriori estimate, and a Gaussian approximation at this optimum:
  1. Iterative optimization of a marginal energy U' = -log P(object | observations), a quadratic form
  2. Inverse covariance matrix (uncertainties) = second derivatives of the energy at the optimum
  3. Sparse matrix (the interaction range depends on the size of the blur kernel)
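For a quadratic energy U'(x) = ½ xᵀQx - bᵀx, steps 1 and 2 come out together: the optimum solves Qx = b, and the Hessian Q is exactly the inverse covariance of the Gaussian approximation. A small dense sketch with made-up numbers (in the project, Q would be a large sparse matrix):

```python
import numpy as np

# Quadratic marginal energy U'(x) = 0.5 x^T Q x - b^T x
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian = inverse covariance
b = np.array([1.0, 2.0])

x_map = np.linalg.solve(Q, b)            # MAP estimate: gradient Qx - b = 0
covariance = np.linalg.inv(Q)            # Gaussian approximation at the optimum
```

An iterative scheme (e.g. conjugate gradients or quasi-Newton) replaces the direct solve when Q is too large to factor.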
SLIDE 30

First results: 2X super-resolution (1D signals)

De-aliasing + regularized deconvolution from 2 observations (blur, noise, 1/3-sample shift).

[Figure: reconstructed signal with its 95% confidence interval, the data points and the ideal signal; real PSF vs. target PSF in model space]

SLIDE 31

2D Super-Resolution results (astronomy)

Experimental setting: 4 noisy images, undersampled (factor 2), shifted (1/2 pixel); pointing pattern in model space.

[Figure: image fusion result (mean), and the inverse covariance of the result: diagonal terms and near-diagonal (covariance) terms]

SLIDE 32

Comparison with existing techniques

[Figure: reference image, initialization, pixel interlacing result, Drizzling result, and the image fusion result (mean)]

SLIDE 33

2D shape reconstruction experiment

[Figure: a surface with texture (irradiance L) and geometry (vertices v), observed by cameras 1 and 2 as "images" I1 and I2]

Assumptions:
  • Similar lighting conditions (stereo setting)
  • Change of variables: irradiance L = albedo × reflectance does not depend on the viewing conditions

Goal: recover the surface geometry and its uncertainty given the observed images.

SLIDE 34

2D shape reconstruction results

[Figure: images 1 and 2 from cameras 1 and 2; true geometry vs. inferred geometry (P > 0.1), starting from a flat initialization]

Diagonal quasi-Newton optimization, 10 iterations.

SLIDE 35

How our work compares to the state of the art

Existing methods, special cases of the proposed framework:
  • Spline interpolation in the presence of noise [Unser & Blu 05]: single observation; sampling resolution = model resolution; assumed blur kernel = spline kernel; Gaussian noise
  • Spline interpolation and irregular sampling [Arigovindan 05]: similar assumptions (single observation, spline kernel, Gaussian noise); irregular sampling in the sensor space

The proposed approach is a generalization to multiple observations, arbitrary noise and arbitrary geometry. Uncertainties are provided, and recursive inference is made possible.

SLIDE 36

Extensions

  • Inferring deformation fields from satellite images (Earth Sciences)
  • Non-optical signal fusion

SLIDE 37

Deformation fields in Earth Sciences

Infer the parameters of the geometric transform from 2 images: one before, one after an earthquake or deformation.

  • Deformation field = spatially variable translation
  • Challenge: subpixel accuracy (0.1 pixel to detect a 10 cm shift)
  • Use a smoothness prior allowing for discontinuities on segments (faults)

[Figure: simulated images before and after the earthquake. D. Fitzenz, J. Van der Woerd, IPG Strasbourg]
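As a point of comparison for the correlation-based methods on the following slides: a global translation between two images can be estimated from the peak of the FFT cross-correlation. The sketch below recovers a known synthetic integer shift; it is a baseline only, since the project targets 0.1-pixel accuracy over a spatially variable field, which this does not provide:

```python
import numpy as np

def correlation_shift(a, b):
    """Estimate the integer translation d such that a ~= np.roll(b, d, axis=(0, 1)),
    from the peak of the FFT-based circular cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    # Fold peak indices past the midpoint back to negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, c.shape))

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
d = correlation_shift(shifted, img)
print(d)  # (3, -5)
```

Subpixel accuracy would require refining the correlation peak (or the Bayesian inversion the project proposes).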
SLIDE 38

Existing methods - image correlation

"Optical image correlation": Klinger et al., 2006, Kunlun fault; 1 m accuracy, 320 m resolution.

SLIDE 39

Existing methods: preliminary evaluation

  • Noblet et al. (LSIIT, medical imaging): regularized multigrid dense matching (reference)
  • Cross-correlation (sliding-window Fourier transform)
  • Techniques from motion estimation: complex wavelets (not accurate enough); optical flow?

We need good accuracy and a measure of the uncertainties: deformation field → 3D inversion procedure.

SLIDE 40

Existing methods: preliminary evaluation

[Figure: reference, nonrigid registration, and CIV results with window sizes 32 and 128]

SLIDE 41

Conclusions

Accomplishments

  • Bayesian approach to data fusion in 2D (theory)
  • Validation in 1D/2D (bandlimited signal reconstruction):
    • Super-resolution from multiple undersampled observations
    • Uncertainty computation: covariance & inverse covariance matrices

To do...

  • 2D/3D: more complex imaging model, but same approach
  • Full 3D surface recovery:
    • Extension of the 2D curve reconstruction method [MaxEnt04]
    • Forward model (rendering): radial basis functions?
    • Reflectance map inference
  • Validation on real data (Ikonos / VO)