The SpaceFusion* project:
applications to remote sensing and 3D topographic reconstruction
André Jalobeanu PASEO Research Group MIV team @ LSIIT, Illkirch, France
PASEO
*Projet ANR “Jeunes Chercheurs” 2006-2008
Outline
- Introduction and objectives
  - Remote sensing: reflectance
  - Small bodies: geometry
  - Earth, planets: reflectance and topography
- Proposed approach
  - Bayesian inference from multiple observations
  - Accurate forward modeling
  - Preliminary results: 1D and 2D in astronomy
- Extensions
  - Deformation fields
  - Non-optical signal fusion
Denoise or deblur depending on the degradation
Problem: lots of data, same object!
Images are recorded with various:
- positions and orientations (pose)
- sensors (resolution, noise, bad pixels)
- instruments (blur, distortions)
Name                          Position, lab               Time
André Jalobeanu               CR, LSIIT/MIV Illkirch      90%
Christophe Collet             PU, LSIIT/MIV Illkirch      40%
Mireille Louys                MCF, LSIIT/MIV Illkirch     40%
Fabien Salzenstein            MCF, InESS Strasbourg       40%
Françoise Nerry               CR, LSIIT/TRIO Illkirch     20%
Albert Bijaoui / Eric Slezak  A/AA, OCA Nice              10%
Bernd Vollmer                 A, Obs. Strasbourg          10%
Jorge A. Gutiérrez + ?        PhD+, Illkirch              20 mo total
Projet ANR “Jeunes Chercheurs 2005” (French Research Agency) 3-year grant, Jan 2006 - Dec 2008
Objectives
- Reconstruct a reflectance function in remote sensing
- Recover the geometry of small bodies and planetary surfaces
- Reconstruct both reflectance and topography in Earth/Space Sciences
ReflectanceFusion
Multisource data fusion for flat-terrain BRDF recovery (Remote Sensing, Planetary Imaging)
[Diagram: observations (views 1-3) fused into a single model: a reflectance map, possibly multispectral and bidirectional. Assumption: flat terrain]
2D rigid data fusion is not sufficient
3DShapeInference
3D shape recovery via Bayesian inference (Planetary Imaging: small bodies and planets)
SurfaceModelRender
Accurate rendering and modeling
[Diagram: observations (optical, radar, lidar) fused into a single model: 3D geometry as a spherical height field]
2D data fusion is not sufficient
(or why use forward modeling)
[Diagram: 3D surface rendered into a synthetic image; a 2D image is a corrupted measurement of the 3D surface]
To do this we must accurately render 3D surfaces and take the imaging model into account. We use a probabilistic approach, which provides a formal framework for image comparison and uncertainty estimation.
Observed Image
Simplest, intuitive approach: compare real and synthetic images until the best fit is found...
A possible rendering algorithm:
1. Projection: project vertices onto the image plane
2. Visible area determination for each triangle w.r.t. each pixel
3. Shadow computation
4. Irradiance computation (using albedo and reflectance model)
5. Intensity formation (using visibility polygons and irradiance)
6. Blur (convolution with the PSF, e.g. Gaussian)
[Figure: simulated observed image of 433 Eros]
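The shading and blur steps of such a pipeline can be sketched in a few lines. This is a toy stand-in, not the project's renderer: projection, visibility and shadow computation are omitted, and the Lambertian model, function name and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render(height, albedo, light=(0.5, 0.5, 1.0), psf_sigma=1.0):
    """Toy forward model: Lambertian shading of a height field,
    then convolution with a Gaussian PSF (projection, visibility
    and shadows are omitted)."""
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    gy, gx = np.gradient(height)
    # Normals of the surface z = height(x, y): (-dz/dx, -dz/dy, 1), normalized
    n = np.stack([-gx, -gy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=0)
    irradiance = albedo * np.clip((n * l[:, None, None]).sum(axis=0), 0.0, None)
    return gaussian_filter(irradiance, psf_sigma)  # blur with the PSF

img = render(np.zeros((32, 32)), albedo=np.ones((32, 32)))
```

For a flat surface the shading is constant, so the blur leaves it unchanged; real use would pass a non-trivial height field.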
3DSpaceFusion
Multisource data fusion, 3D surface recovery and super-resolution (Planetary Imaging)
3DEarthFusion
Multisource data fusion, 3D surface recovery, BRDF inference and super-resolution (Remote Sensing)
From N images, infer: cameras (registration), light sources, reflectance field (shading-free), geometry (DEM, height field). (Images: NASA)
The proposed approach
Bayesian inference from multiple observations
- Uncertainty estimates, recursive data processing
- In 2D: recover a well-sampled image, possibly super-resolved
- In 3D: recover the geometry
- Check the validity of this approach
[Diagram: the observed image Y depends on resolution, camera pose parameters, internal camera parameters, global PSF, sensor sampling grid and noise; model side: image, geometric mapping, rendering coefficients]
Model reconstruction from multiple observations is the inverse problem of rendering. Bayesian inference applied to this inverse problem: everything is described by random variables, and reconstruction becomes a parameter estimation problem.
p(θ | observations) = p(observations | θ) × p(θ) / p(observations)
- θ: parameters of interest (unknown solution)
- p(observations | θ): likelihood (image formation model)
- p(θ): prior model (a priori knowledge about the observed object)
- p(observations): evidence (useful for model comparison)
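In the simplest scalar case, each term of this decomposition can be evaluated numerically on a grid. A minimal sketch; the Gaussian prior, the observation values and the noise level are illustrative assumptions:

```python
import numpy as np

# Grid over one parameter of interest (e.g. a reflectance value in [0, 1])
theta = np.linspace(0.0, 1.0, 501)
dt = theta[1] - theta[0]

prior = np.exp(-0.5 * ((theta - 0.5) / 0.2) ** 2)   # a priori knowledge
prior /= prior.sum() * dt                            # normalize to a pdf

obs, sigma = [0.62, 0.58, 0.65], 0.05                # noisy observations
likelihood = np.ones_like(theta)
for y in obs:                                        # independent Gaussian noise
    likelihood *= np.exp(-0.5 * ((y - theta) / sigma) ** 2)

evidence = (likelihood * prior).sum() * dt           # p(observations)
posterior = likelihood * prior / evidence            # Bayes' rule

theta_map = theta[np.argmax(posterior)]              # MAP estimate
```

The posterior concentrates near the average of the observations, pulled slightly toward the prior mean; the evidence is the constant that makes it integrate to one.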
- All parameters are random variables
- Bayesian inference: functional optimization / approximations
- Deterministic optimization techniques for speed
OBJECTIVE: posterior probability density function (pdf)
[Diagram: combining Result #1 and Result #2: simple averaging vs. probabilistic fusion]
Formal framework for the combination of multiple observations
From the observation noise to the end result! Downside: algorithms ought to account for input uncertainties
find the max/mean of the posterior distribution
covariance matrix (Gaussian approx. of the posterior)
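The Gaussian approximation can be illustrated on a toy energy: the covariance is the inverse of the Hessian of the negative log-posterior at the optimum. A sketch in which the energy itself is made up for illustration:

```python
import numpy as np

def neg_log_post(x):
    # Hypothetical smooth energy; stands in for -log p(theta | observations)
    return 0.5 * (3 * x[0] ** 2 + 2 * x[0] * x[1] + 2 * x[1] ** 2)

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i] * h
            ej = np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

x_opt = np.zeros(2)                  # optimum of the quadratic energy above
H = hessian(neg_log_post, x_opt)     # curvature = inverse covariance
cov = np.linalg.inv(H)               # Gaussian approximation of the posterior
```

The diagonal of `cov` gives the per-parameter uncertainties; off-diagonal terms capture their correlations.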
[Graphical model: observations Y; unknowns: image model parameters, camera parameters; relations: likelihood, prior density, posterior density]
- Camera prior: Dirac pdf (calibrated camera); covers camera pose and camera physics
- Image formation model: Gaussian pdf; rendering I(v, L, ...) plus additive Gaussian noise, i.e. N(I(v, L, ...), σ²), with camera poses {v_k}, light sources {L_j}, vertices {X_i}, ...
- Surface model: Gaussian pdf (smoothness prior); geometry, reflectance...
P(surface, cameras | images) ∝ P(images | surface, cameras) × P(surface) × ∏_i P(camera_i)
[Diagram: scene radiance L(x) degraded by atmosphere, deformations and blur (PSF h)]
Probabilistic image formation: sensor noise modeling
Deterministic image formation: rendering
The target PSF helps preserve useful information while filtering the noise (otherwise some areas will undergo a deconvolution even if we just want data fusion...)
Output pixel size < input blur size / 2
Target PSF: B-Spline 3
Fractal surface geometry model: a priori surface roughness
- geometric wavelet details = Gaussian random variables
- adaptive parameters: local scales
Statistical self-similarity: a 3D analog of the fractional Brownian motion in 2D. Also works for the Mars DEM (MOLA).
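A self-similar rough surface of this kind can be synthesized spectrally: draw random phases and impose a power-law spectrum. A minimal sketch, not the wavelet-based prior of the project; the exponent `beta` is an illustrative stand-in for the roughness parameter:

```python
import numpy as np

def fractal_surface(n=64, beta=2.5, seed=0):
    """fBm-like random surface: power spectral density ~ 1/f^beta."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)                # radial spatial frequency
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amp = f ** (-beta / 2)              # amplitude spectrum
    amp[0, 0] = 0.0                     # zero-mean surface
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    return np.fft.ifft2(amp * phase).real

z = fractal_surface()
```

Larger `beta` yields smoother terrain; statistically, the surface looks the same at all scales, which is the self-similarity the prior exploits.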
P(image, surface | camera) = P(image | surface, camera)P(surface)
Surface model Image formation model
Prior samples with q = 1.1 and roughness values 0.5, 1.5, 5.0. Assumptions: uniform albedo and roughness.
Take samples from the joint density:
Usually known
Usually calibrated at instrument commissioning
Translation vector T
Goal: combine N images (different blur, resolution, FOV, noise...) into a single object: pixel values + uncertainties. Preserve the information from the original data set: photometry and astrometry in astronomy.
Could be generalized to 2D objects in a 3D space but no uncertainties, and some other drawbacks...
Drawbacks:
- ... is data dependent
- the data term is not natural
Deterministic image formation
Deformation (geometric mapping f and its parameters), convolution with the Point Spread Function (PSF) h, sampling on a discrete pixel grid.
B-spline interpolation coefficients L such that the target image X = L ∗ B-spline kernel
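These three steps can be sketched with standard B-spline interpolation, reducing the geometric mapping to a sub-pixel translation for simplicity; the function name and parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe(model, dy=-0.2, dx=0.3, psf_sigma=1.2, factor=2):
    """Forward model sketch: geometric mapping (here just a sub-pixel
    translation), convolution with a Gaussian PSF, then sampling on a
    coarser discrete pixel grid."""
    warped = shift(model, (dy, dx), order=3)      # cubic B-spline interpolation
    blurred = gaussian_filter(warped, psf_sigma)  # PSF convolution
    return blurred[::factor, ::factor]            # sensor sampling grid

y = observe(np.ones((32, 32)))
```

In the real forward model f is a general warp (distortions, projection) and the PSF varies per instrument, but the structure is the same: interpolate, blur, sample.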
[Diagram: mapping g between model (world) space and sensor space; target PSF (B-Spline 3) vs. instrument PSF at pixel p; irregular sampling grid; warped, shifted PSF; n instruments]
Compute the marginal Maximum A Posteriori, and a Gaussian approximation at this optimum
Iterative optimization of a marginal energy U’
U’= - log P(object | observations)
quadratic form at the optimum (interaction range depends ...)
[Figure: de-aliasing + regularized deconvolution in 1D. 2 observations (blur, noise, 1/3-sample shift); reconstructed signal with 95% confidence interval, data points, ideal signal]
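A 1D experiment of this kind can be reproduced with a linear forward model and a quadratic smoothness prior: the MAP estimate then solves a regularized least-squares system, and the posterior precision comes for free. A sketch with made-up signal, blur and noise values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
ideal = np.sin(2 * np.pi * 3 * np.arange(n) / n)   # band-limited test signal

def forward(shift, factor=2, psf_sigma=1.0):
    """Matrix that blurs (Gaussian PSF), shifts and undersamples."""
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[None, :] - i[:, None] - shift) / psf_sigma) ** 2)
    A /= A.sum(axis=1, keepdims=True)
    return A[::factor]

A1, A2 = forward(0.0), forward(1.0 / 3.0)          # 1/3-sample relative shift
sigma = 0.05
y1 = A1 @ ideal + sigma * rng.standard_normal(n // 2)
y2 = A2 @ ideal + sigma * rng.standard_normal(n // 2)

# MAP estimate with a first-difference smoothness prior
A = np.vstack([A1, A2])
y = np.concatenate([y1, y2])
D = np.diff(np.eye(n), axis=0)
lam = 10.0
P = A.T @ A / sigma ** 2 + lam * D.T @ D           # posterior precision
x_map = np.linalg.solve(P, A.T @ y / sigma ** 2)
cov = np.linalg.inv(P)   # Gaussian approx.: 95% CI ~ 1.96 * sqrt(diag(cov))
```

The sub-sample shift between the two observations is what makes de-aliasing possible: together they constrain frequencies that each undersampled image alone would fold over.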
[Figure: real PSF vs. target PSF; pointing pattern of images 1-3 in model space]
Experimental setting: 4 noisy images
undersampled (factor 2), shifted (1/2 pixel)
Image fusion result (mean)
Inverse covariance of the result
diagonal terms, and near-diagonal (covariance) terms
[Figure panels: pixel interlacing result; reference image; initialization; drizzling result; image fusion result (mean)]
[Diagram: surface with geometry (vertices) and texture (irradiance L); cameras 1 and 2 (poses v) produce images I1 and I2]
Assumptions (stereo setting): irradiance L = albedo × reflectance does not depend on the viewing conditions
Recover the surface geometry uncertainty given the observed images
[Figure: true vs. inferred geometry (P > 0.1) from images 1 and 2, with cameras 1 and 2 and the initialization]
Diagonal quasi-Newton, 10 iterations
Existing methods, special cases of the proposed framework:
The proposed approach is a generalization to multiple observations
Uncertainties are provided, recursive inference is made possible
Extensions
- Inferring deformation fields from satellite images (Earth Sciences)
- Non-optical signal fusion
2 images: one before, one after earthquake or deformation
Before EQ (simulation) After EQ (simulation)
Klinger et al, 2006, Kunlun fault. 1m accuracy, 320m resolution “optical image correlation”
Noblet et al. (LSIIT, medical imaging): regularized multigrid dense matching. Reference: cross-correlation (sliding-window Fourier transform)
Techniques from motion estimation: complex wavelets (not accurate enough); optical flow? We need good accuracy and a measure of the uncertainties: deformation field → 3D inversion procedure
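As a baseline for such displacement measurements, integer-pixel shifts can be estimated by phase correlation; accurate deformation fields need sub-pixel refinement and regularization on top. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation d such that b equals a shifted
    (cyclically) by d, via the phase of the cross-power spectrum."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-12                  # keep the phase only
    corr = np.fft.ifft2(R).real
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = a.shape
    # The correlation peak sits at -d modulo the image size; unwrap and negate
    dy = -((py + n // 2) % n - n // 2)
    dx = -((px + m // 2) % m - m // 2)
    return dy, dx

img = np.random.default_rng(1).standard_normal((32, 32))
moved = np.roll(img, (3, 5), axis=(0, 1))   # shift by (3, 5) pixels
est = phase_correlation(img, moved)
```

Unlike plain cross-correlation, normalizing away the magnitude makes the peak sharp and insensitive to illumination changes, which is why it is a common starting point before dense, regularized matching.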
[Figure: CIV 32, CIV 128, reference, nonrigid registration]
Accomplishments
- Bayesian approach to data fusion in 2D (theory)
- Validation in 1D/2D (band-limited signal reconstruction)
- Super-resolution from multiple undersampled observations
- Uncertainty computation: covariance & inverse covariance matrices
To do...
- 2D/3D: more complex imaging model, but same approach
- Full 3D surface recovery: extension of the 2D curve reconstruction method [MaxEnt04]; forward model (rendering): radial basis functions?
- Reflectance map inference
- Validation on real data (Ikonos / VO)