SLIDE 1

New Bayesian Fusion scheme for Hyperspectral Astronomical Data

Ch. Collet, M. Petremand, A. Jalobeanu, F. Salzenstein, V. Mazet, M. Louys

University of Strasbourg - FRANCE, LSIIT UMR CNRS, http://lsiit-miv.u-strasbg.fr/paseo/


Goals : generic fusion scheme


  • Reconstruction of a single hyperspectral image Y^f from n noisy observations Y^i.
  • Gather in an ideal image X the whole information included within each observation.
  • Each Y^i may possibly be acquired with different sensors characterized by different acquisition conditions: spatial sampling on heterogeneous lattices, geometric deformation, different Field Spread Functions (FSF, the spatial PSF) or Line Spread Functions (LSF).
  • Decrease data corruption due to cosmic rays.
  • Give the fusion result together with its associated uncertainty.


MUSE needs:

  • adding all 1-hour-exposure datacubes obtained under different observing conditions, to detect faint sources (deep field) in a 3D volume;
  • maximising the signal-to-noise ratio and the spatial/spectral resolution of the final cube;
  • recovering a compound model “image + uncertainties” that best relates to the observations;
  • removing cosmic rays as well as possible.

SLIDE 2

Data Fusion in the framework of new generation IFU : MUSE instrument

  • T : model space
  • F : continuous (3D B-spline filtered)
  • X : discrete (spatial and spectral sampling)
  • Y : sensor space
  • X̂ : reconstruction

Plan of the talk

Introduction
I- Direct Model
II- Inverse Model
III- Preliminary Results
IV- Conclusion and perspectives

I- Sensor modeling (IFU) and image acquisition: direct model


MUSE IFU (Integral Field Unit)


  • Galaxy-field observation at large redshift.
  • Requires a set of 80 observations Y^i, one hour each, to avoid non-reversible cosmic-ray corruption.
  • Each Y^i comes from the same IFU, whose final size after astronomical reduction equals 300 × 300 × 3500 pixels.

Fusion challenge: to gather these 80 observations into a single one in an optimal (Bayesian) way.
SLIDE 3

MUSE IFU (Integral Field Unit)


This fusion process needs to eliminate cosmic rays and outlier pixels, and to take into account seeing and variations in acquisition conditions (sky background, registration on the same lattice, etc.). The fusion process has to:

  • eliminate cosmic rays: easier if done on the CCD matrix without any pre-processing, because cosmic rays corrupt a neighborhood around a central location, according to the impact angle;
  • take into account dead pixels (more generally, all outliers including cosmic rays), the FSF-LSF, and sensor noise;
  • fuse the information and give an uncertainty at each location of the reconstructed hyperspectral data cube.

Direct Model


Image formation modeling : direct problem

  • Model space: 3D space where objects are observed (part of the celestial sphere), with a topology and an arbitrary geometry: square spatial sampling grid, spectral sampling grid starting from λ0 with a step λp. F stands for the continuous ideal image, whereas X represents the sampled ideal image.
  • Image space (2D): focal plane where the image is formed within the MUSE IFU, between the fore-optics and the field splitter. After successive cuts by the splitter and the slicers, each sub-image is spread by the spectrograph and printed on the CCD surface (sensor space).
  • Each column in the sensor space Y corresponds to a spectrum; each row stands for a spatial image.

Image reconstruction modeling: inverse problem in a Bayesian framework

Model space


F : ideal image (continuous) / X : ideal image (sampled) / T : truth

  • Let T be the truth defined within the model space (u, z).
  • Let F = T ⋆ ϕ be the ideal image, with finite spatial and spectral resolution, corresponding to the truth T observed with a perfect telescope modeled by a B-spline ϕ of degree 3.
  • F, T, X are all hyperspectral cubes (also called images hereafter).

F = T ⋆ ϕ : the truth 3D-filtered by a B-spline of degree 3.

Image F is sampled according to the Shannon condition: in this sense, X is the ideal sampled image.

Model space


F : ideal image (continuous) / X : ideal image (sampled) / T : truth / F = T ⋆ ϕ

X = L ⋆ ϕ, sampled on a lattice of variable resolution: X_p = F(p).

F can be interpolated at each location (u, z) using the following expression:

F(u, z) = Σ_j Σ_k L_jk ϕ(u − j) ϕ(z − k)   (1)

where j ∈ Z² stands for the spatial samples and k ∈ Z for the spectral samples. The coefficients L_jk are called interpolation coefficients or B-spline coefficients. X_p = F(p) is the digital version of F, linked to the interpolation coefficients by:

X = L ⋆ ϕ   (2)
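Equation (1) can be checked numerically. The sketch below implements a cubic B-spline kernel ϕ and evaluates F(u, z) in a toy 2D case (one spatial index j, one spectral index k); the sizes and coefficients are illustrative only, not the MUSE implementation.

```python
import numpy as np

def bspline3(t):
    """Cubic B-spline kernel phi (degree 3), support [-2, 2]."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2/3 - t[m1]**2 + t[m1]**3 / 2
    out[m2] = (2 - t[m2])**3 / 6
    return out

def interp_F(u, z, L):
    """Evaluate F(u, z) = sum_{j,k} L_jk phi(u - j) phi(z - k)
    on a toy grid: j in {0..J-1} (spatial), k in {0..K-1} (spectral)."""
    J, K = L.shape
    wu = bspline3(u - np.arange(J))   # spatial weights phi(u - j)
    wz = bspline3(z - np.arange(K))   # spectral weights phi(z - k)
    return wu @ L @ wz
```

Since the cubic B-spline is a partition of unity, interpolating a constant coefficient field away from the grid borders returns that constant, which is a quick sanity check.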

SLIDE 4

Model space : MUSE observation


Spatial side

  • 1. T is convolved by the FSF h^u_uz, which depends on the spatial location u and the spectral location z:

T(u, z) ⋆ h^u_uz(u)   (1)

  • 2. Spatial sampling of T(u, z) ⋆ h^u_uz(u) on the regular lattice u_s. This sampling grid takes into account the spatial shifts δ^i_xλ, δ^i_yλ, which vary with the wavelength (atmospheric refraction). Irregularities of the sampling due to the field cut by the splitter and the slicers remain unimportant:

J_s(z) = T(u_s, z) ⋆ h^u_{u_s z}(u_s)   (2)

where u_s is the spatial coordinate of the model space linked to location s within the sensor space.

  • 3. J_s(z) is the continuous spectrum at spatial position u_s.

Model space : MUSE observation


Spectral side

  • 1. Convolution of J_s(z) by the LSF h^z, assumed independent of the spatial position (h^z does not depend on u):

J_s(z) ⋆ h^z(z)   (1)

  • 2. Spectral sampling of J_s(z) on the regular spectral lattice z_st:

I_st = (J_s ⋆ h^z)(z_st)   (2)

where z_st stands for the coordinate z of the model space corresponding to point (s, t) in the sensor space. The shifts between IFUs on the spectral sampling grids are integrated within the spectral-sampling modeling, which depends on the spatial location s within the sensor space.

Model space : MUSE observation


To summarize, a pixel observed on Y can be written in the following manner:

Y_st = I_st + N(0, σ_st)   (1)

with

I_st = (J_s ⋆ h^z)(u_s, z_st) = (T ⋆ h^u_{u_s z_st} ⋆ h^z)(u_s, z_st)   (2)

where σ_st stands for the noise standard deviation observed at (s, t).

Rendering coefficients : exposure time and spectral sampling process


  • After the spectral sampling process, one obtains the following expression:

I_st = Σ_j Σ_k L_jk α_stjk   with   α_stjk = h^u_{u_s z_st}(u_s − j) h^z(z_st − k)   (1)

  • A gain factor W, modeling exposure time and sensor sensitivity at each pixel (s, t), is finally integrated within the rendering coefficients:

α_stjk = W_st h^u_{u_s z_st}(u_s − j) h^z(z_st − k)   (2)

The observed value on the sensor at location (s, t) is thus expressed as a combination of the spline coefficients L_jk of the model space (X) and the rendering coefficients linking the two spaces together (sensor space indexed by (s, t), model space indexed by (j, k)).
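The rendering coefficients of equation (2) can be sketched as follows; the Gaussian FSF/LSF profiles and all sizes are illustrative assumptions (in MUSE the real h^u varies with position and wavelength).

```python
import numpy as np

def gauss(t, fwhm):
    """Normalized Gaussian kernel; stand-in for the FSF h^u or the LSF h^z."""
    sigma = fwhm / 2.3548
    return np.exp(-0.5 * (t / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def rendering_coeffs(u_s, z_st, W_st, J, K, fwhm_u=3.0, fwhm_z=2.0):
    """alpha_stjk = W_st * h^u(u_s - j) * h^z(z_st - k) for one sensor
    pixel (s, t); returns a (J, K) array over model-space indices (j, k)."""
    hu = gauss(u_s - np.arange(J), fwhm_u)   # spatial response around u_s
    hz = gauss(z_st - np.arange(K), fwhm_z)  # spectral response around z_st
    return W_st * np.outer(hu, hz)
```

The gain W_st enters as a pure scale factor, so doubling the exposure time doubles every coefficient of the pixel.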

SLIDE 5

Direct problem : MUSE case


  • T : model space
  • F : continuous (3D B-spline filtered)
  • X : discrete (spatial and spectral sampling)
  • Y : sensor space
  • X̂ : reconstruction

F(u, z) = Σ_j Σ_k L_jk ϕ(u − j) ϕ(z − k)

X = L ⋆ ϕ

Y^i = Σ_j Σ_k L_jk α^i_stjk + N(0, σ^i_st)

α^i_stjk = W^i_st h^{i,u}_{u_s z_st}(u_s − j) h^z(z_st − k)
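In flattened matrix form the direct model reads Y^i = α^i L + B^i. A toy simulation of one observation, with arbitrary sizes and noise levels (not MUSE's):

```python
import numpy as np

rng = np.random.default_rng(0)

n_model = 50     # model-space size (flattened index l = (j, k))
n_sensor = 80    # sensor-space size (flattened index p = (s, t))

L = rng.normal(size=n_model)               # spline coefficients
alpha = rng.random((n_sensor, n_model))    # rendering coefficients alpha^i
sigma = 0.1 * np.ones(n_sensor)            # per-pixel noise std sigma^i_p

# One observation: Y^i = alpha^i L + B^i, with B^i ~ N(0, diag(sigma^2))
Y = alpha @ L + rng.normal(scale=sigma)
```

Each sensor pixel is a weighted sum of model-space spline coefficients plus independent Gaussian noise, which is exactly the structure the inverse model exploits.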

II- Inverse Model

Image reconstruction modeling: inverse problem in a Bayesian framework

  • Huge data set: cubes, covariance matrices, FSF, LSF, rendering coefficients...
  • Without a sequential algorithm, the fusion process would have to be computed with the whole data set at once.
  • New approach: a sequential algorithm with uncertainty propagation.
  • Possible improvement when new data become available.

Strasbourg Scheme May 2010


SLIDE 6

Available raw observations on the sensor

Each observation Y^i, i ∈ {1..n}, at location p = (s, t) can be written as:

Y^i_p = Σ_l α^i_pl L_l + B^i_p   where   B^i_p ∼ N(0, (σ^i_p)²)   (1)

where l = (j, k) is the 3D coordinate in the model space and B^i_p is a white Gaussian noise with standard deviation σ^i_p.


B : Available observations on the sensor

Y^i_p = Σ_l α^i_pl L_l + B^i_p, where B^i_p ∼ N(0, (σ^i_p)²)   (1)

can be rewritten in the following form:

Y^i = α^i L + B^i   (2)

where Y^i = (Y^i_1, ..., Y^i_p, ..., Y^i_η0)^T stacks all the sensor pixels of observation i.   (3)

Inversion model


Geometric transforms

  • Cube fusion has to take place within the same model space, so the successive shifts must be estimated.
  • Such estimation is possible with the reconstructed cubes coming from the DRS (Data Reduction System associated with the MUSE sensor).
  • The estimated parameters are then integrated through the rendering coefficients; thus they are implicitly taken into account during the fusion process.
  • The spectral shifts are known and constant through all the observations, but δ^i_xλ and δ^i_yλ need to be estimated. This can be done on several bands and interpolated for all wavelengths.

SLIDE 7

Geometric transforms

Geometric distortion during the acquisition process:

  • spatial shift of the sampling lattice, depending on Y^i and the wavelength λ: (x, y) → (x + δ^i_xλ, y + δ^i_yλ);
  • spectral shift of the spectral sampling grid of each IFU (but constant and known for all the observations): λ → λ + δλ;
  • spatial rotation of the sampling lattice, weak in the MUSE case.

Global scheme of Fusion with raw data

[Flowchart: DRS cubes; shift estimation; computation of rendering coefficients; export to HDF5; drizzling; cosmic-ray detection; fusion; deconvolution; convolution. Stages are labeled A through H, with primed labels (D', E', F', G', H') for the second pass.]

Y^i_p = Σ_l α^i_pl L_l + B^i_p   where   B^i_p ∼ N(0, (σ^i_p)²)


B : HDF5 format

  • HDF5 is the latest version of the “Hierarchical Data Format”.
  • It allows packing the observed data, the related metadata, and additional interpretation material (such as sensitivity maps and weight maps) in the same hierarchical file.
  • The HDF format is dedicated to handling large datasets and is commonly used within the scientific community for such data. Access to large tables is fast thanks to B-tree structures.
  • See http://www.hdfgroup.org/hdf-java-html/hdfview/
  • Visualization: QuickViz (Aladin plugin): http://lsiit-miv.u-strasbg.fr/paseo/
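A minimal sketch of such packing with the h5py library; the group name, dataset names, and attributes below are hypothetical, not the actual MUSE file layout, and the cube is tiny (a real reduced cube is 300 × 300 × 3500).

```python
import os
import tempfile

import numpy as np
import h5py

# Toy observation: data cube plus a weight map packed in one hierarchical file
cube = np.random.rand(8, 8, 16).astype(np.float32)
weights = np.ones_like(cube)

path = os.path.join(tempfile.mkdtemp(), "observation.h5")
with h5py.File(path, "w") as f:
    obs = f.create_group("obs_001")
    # chunking + compression give fast partial access to large cubes
    obs.create_dataset("cube", data=cube, chunks=(4, 4, 8), compression="gzip")
    obs.create_dataset("weight_map", data=weights)
    obs.attrs["exposure_s"] = 3600          # 1-hour exposure
    obs.attrs["instrument"] = "MUSE"

with h5py.File(path, "r") as f:
    cube_back = f["obs_001/cube"][...]
```

Chunked storage is what makes it practical to read a single spectral band out of an 80-cube archive without loading everything.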


SLIDE 8

C : Drizzling method

The fusion method we propose, based on the drizzling approach, allows a sequential reconstruction of each observation Y^i into an estimated image I^i:

I^i_l = Σ_p (α^i_pl / (σ^i_p)²) Y^i_p   (1)

where Y^i_p is the i-th observed value on the sensor at p = (s, t), expressed as a combination of rendering coefficients and noise at p, and I^i_l is the reconstructed image (hyperspectral cube) of the i-th observation at location l in the 3D cube space.
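Equation (1) is an inverse-variance-weighted back-projection of the sensor pixels into model space; a minimal sketch with flattened indices:

```python
import numpy as np

def drizzle(Y, alpha, sigma):
    """Sequential drizzling step: back-project one observation into model
    space, I^i_l = sum_p alpha^i_pl / sigma_p^2 * Y^i_p.
    Y: (P,) sensor pixels; alpha: (P, L) rendering coefficients;
    sigma: (P,) per-pixel noise standard deviations."""
    w = 1.0 / sigma ** 2        # inverse-variance weights (diagonal of D^i)
    return alpha.T @ (w * Y)    # shape (L,)
```

Because each observation is drizzled independently, the 80 MUSE exposures can be processed one at a time and simply accumulated, which is the point of the sequential scheme.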

[Flowchart: drizzling; cosmic-ray detection; fusion; deconvolution; convolution (second pass of the global fusion scheme).]

Y^i_p = Σ_l α^i_pl L_l + B^i_p   where   B^i_p ∼ N(0, (σ^i_p)²)

I^i_l = Σ_p (α^i_pl / (σ^i_p)²) Y^i_p

I^f = Σ_i I^i = Σ_i (α^iT D^i α^i L + N^i) = α^f L + N^f   where   N^f ∼ N(0, Σ^f)

α^f = Σ_i α^iT D^i α^i        Σ^f = Σ_i α^iT D^i α^i = α^f
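The fusion above can be sketched by accumulating I^f and α^f observation by observation. Taking D^i = diag(mask_p / (σ^i_p)²) is our reading of the slides (mask = 0 for rejected pixels), and all sizes are toy values.

```python
import numpy as np

def fuse(observations):
    """Accumulate the fused data  I^f = sum_i alpha^iT D^i Y^i  and the
    fused operator  alpha^f = Sigma^f = sum_i alpha^iT D^i alpha^i.
    observations: iterable of (Y, alpha, sigma, mask) tuples, where
    Y: (P,), alpha: (P, L), sigma: (P,), mask: (P,) with 0 = rejected."""
    I_f, alpha_f = None, None
    for Y, alpha, sigma, mask in observations:
        d = mask / sigma ** 2                 # diagonal of D^i
        if I_f is None:
            n = alpha.shape[1]
            I_f, alpha_f = np.zeros(n), np.zeros((n, n))
        I_f += alpha.T @ (d * Y)              # drizzled contribution I^i
        alpha_f += (alpha.T * d) @ alpha      # alpha^iT D^i alpha^i
    return I_f, alpha_f
```

Only the running sums I^f and α^f are kept in memory, so each exposure can be discarded after its pass, as the sequential scheme requires.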

Global scheme of Fusion with raw data

[Flowchart repeated: DRS cubes; shift estimation; computation of rendering coefficients; export to HDF5; drizzling; cosmic-ray detection; fusion; deconvolution; convolution.]

E : Detection of outliers

  • 80 observations per pixel;
  • median value and median absolute deviation;
  • thresholding step: detection and masking of obvious outliers.
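A minimal sketch of this median/MAD thresholding over a stack of co-located pixels; the threshold k and the 1.4826 MAD-to-sigma consistency factor are illustrative choices, not the values used by the authors.

```python
import numpy as np

def mask_outliers(stack, k=5.0):
    """Per-pixel robust rejection over n observations (n ~ 80 for MUSE):
    median and median absolute deviation, then k-sigma-style thresholding.
    stack: (n, ...) array of co-located pixel values. Returns a boolean
    mask (True = kept, False = obvious outlier such as a cosmic ray)."""
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)
    mad = np.maximum(mad, 1e-12)                 # avoid division by zero
    return np.abs(stack - med) <= k * 1.4826 * mad
```

The median/MAD pair is insensitive to a few extreme values, so a single cosmic-ray hit among 80 exposures does not bias the reference against which it is tested.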
SLIDE 9

G : Deconvolution

After the fusion step, three matrices are available:

  • I^f, the matrix of the fused cube, defined as I^f(l) = I^f_l;
  • α^f, the matrix of average rendering coefficients;
  • Σ^f, the covariance matrix of the fused image.

G : Deconvolution

The reconstruction estimates the ground truth X from the spline coefficients L; it is necessary to deconvolve I^f by α^f in order to estimate L:

I^f = Σ_i I^i = Σ_i (α^iT D^i α^i L + N^i) = α^f L + N^f   where   N^f ∼ N(0, Σ^f)   (1)
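Deconvolving I^f by α^f amounts to a linear solve; the optional ω RᵀR term below mirrors the quadratic regularizer that appears in the energy U(L) on the uncertainty slide. A sketch, with toy matrices:

```python
import numpy as np

def deconvolve(I_f, alpha_f, omega=0.0, R=None):
    """Estimate the spline coefficients L from the fused system
    alpha^f L = I^f, optionally with a quadratic regularizer
    2 * omega * R^T R added to the normal matrix."""
    A = alpha_f.copy()
    if R is not None:
        A = A + 2.0 * omega * R.T @ R   # regularized normal matrix
    return np.linalg.solve(A, I_f)
```

In practice α^f is large and sparse, so an iterative solver would replace `np.linalg.solve`; the algebra is the same.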


E’ : Outlier detection after fusion process

On the sensor plane

Outlier detection (e.g., cosmic rays) is now done within the sensor space:

  • The raw images Y^i can be compared with the reconstructed cube thanks to the L coefficients (cleaned of obvious outliers in the first fusion step). Given these L, it is possible to reconstruct each observation Y^i:

Ȳ^i = α^i L   (1)

  • Then we can compare Ȳ^i and Y^i in order to determine the set of cosmic-ray-contaminated pixels in Y^i. Finally, we generate a binary map (0: cosmic contamination, 1: normal).

Main advantage: we are able to take into account the neighborhoods around obviously detected outliers, because this detection proceeds directly on the CCD matrix (sensor space).
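A 1D sketch of this second-pass detection: reconstruct Ȳ^i = α^i L, flag pixels whose residual is large, and grow the mask to the neighborhood. The k·σ threshold and the 1-pixel growth are illustrative choices, and `np.roll` wraps at the edges, which is acceptable for a sketch.

```python
import numpy as np

def cosmic_ray_map(Y, alpha, L, sigma, k=4.0, grow=1):
    """E' step sketch: flag sensor pixels whose residual against the
    reconstruction Ybar = alpha @ L exceeds k * sigma, then grow the mask
    (cosmic rays corrupt a neighborhood around the impact point).
    Returns the binary map: 1 = normal, 0 = contaminated."""
    Y_bar = alpha @ L                      # reconstructed observation
    bad = np.abs(Y - Y_bar) > k * sigma    # residual thresholding
    for _ in range(grow):                  # dilate the mask by one pixel
        bad = bad | np.roll(bad, 1) | np.roll(bad, -1)
    return (~bad).astype(int)
```

The resulting map is exactly what sets the corresponding entries of D^i to 0 in the final fusion pass.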


F’ : Final Fusion

Outliers and contaminated pixels are now masked, i.e., the corresponding values within D^i are set to 0. Then the new coefficients L are convolved by a B-spline of degree 3 in order to obtain X̂:

X̂ = L ⋆ ϕ   (1)

where ϕ stands for the B-spline kernel.

SLIDE 10

F’ : Uncertainty estimation

Under a Gaussian assumption, the covariance-matrix estimate on X is given by inverting the Hessian matrix ∇²_X U(X). With L = S⁻¹X, one can write

U(L) = ½ L^T (α^f + 2ω R^T R) L − I^fT L + ½ I^fT α^f⁻¹ I^f

as a function of X:

U(X) = ½ (S⁻¹X)^T A (S⁻¹X) − I^fT S⁻¹X + ½ I^fT α^f⁻¹ I^f   (1)

whose gradient is:

∇_X U(X) = S⁻¹T A S⁻¹ X − S⁻¹T I^f   (2)

and whose Hessian is:

∇²_X U(X) = S⁻¹T A S⁻¹ = S⁻¹T ∇²_L U(L) S⁻¹ = S⁻¹T α^f S⁻¹ + 2ω D^T D = Σ⁻¹_X   (3)

(with A = α^f + 2ω R^T R and D = R S⁻¹).
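Numerically, Σ_X is obtained by assembling and inverting this Hessian; the small matrices below are arbitrary stand-ins for S (the B-spline synthesis operator), α^f, and R, purely to illustrate the algebra.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

S = np.eye(n) + 0.1 * rng.random((n, n))   # stand-in synthesis operator
alpha_f = np.diag(rng.random(n) + 1.0)     # stand-in fused operator
omega, R = 0.5, np.eye(n)                  # stand-in regularizer

A = alpha_f + 2 * omega * R.T @ R          # A = alpha^f + 2*omega*R^T R
S_inv = np.linalg.inv(S)
hessian = S_inv.T @ A @ S_inv              # nabla^2_X U(X)
Sigma_X = np.linalg.inv(hessian)           # posterior covariance on X
```

The Hessian is symmetric positive definite by construction, so its inverse is a valid covariance matrix; its diagonal gives the per-pixel uncertainties attached to X̂.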

Main advantages


Image reconstruction modeling: inverse problem in a Bayesian framework

  • Huge data set: cubes, covariance matrices, FSF, LSF, rendering coefficients...
  • Without a sequential algorithm, the fusion process would have to be computed with the whole data set at once.
  • New approach: a sequential algorithm with uncertainty propagation.
  • Possible improvement when new data become available.

Does it work ?


Plan of the talk

Introduction
I- Direct Model
II- Inverse Model
III- Preliminary Results
IV- Conclusion and perspectives

SLIDE 11

A large amount of data to manage in an optimal way...

Preliminary Results

  • Validate the whole fusion scheme.
  • Images: fuzzy/noisy, truth/reconstructed, spatial/spectral profiles.
  • Monoband image: galaxy. Multiband image: star + galaxy (multispectral cube), compass.


Result 1 – Simulated galaxies – 2D

  • Two galaxies with different resolutions in the same observation (64×128 px) + Gaussian noise
  • 4 different observations with different FSFs and noise levels

[Figure: ideal image]

Result 1 – Observations – (left FSF, right FSF, Var., SNR)

  • Obs. 1 – (2, 5,16, 26.6)
  • Obs. 2 – (5, 2,13, 28)
  • Obs. 3 – (4, 4, 4, 32.8)
  • Obs. 4 – (3, 3, 4, 33)
SLIDE 12

Result 1 – Fusion [Panels: ideal image; fusion (MSE: 25); mean image (MSE: 75)]

Result 1 – Profiles

Blue (light and dark) : 4 observations. Red: fusion

Result 1 – Profiles

Blue : ideal image. Red: fusion. Black: mean

Result 2 – Observations – (left FSF, right FSF, Var., SNR)

  • Obs. 1 – (2, 5, 36, 23.3)
  • Obs. 2 – (5, 2, 36, 23.25)
  • Obs. 3 – (4, 4, 16, 26.7)
  • Obs. 4 – (3, 3, 9, 29.4)
SLIDE 13

Result 2 – Fusion [Panels: ideal image; fusion (MSE: 19.8); mean image (MSE: 79)]

Result 2 – Profiles

Blue (light and dark): 4 observations. Red : fusion

Result 2 – Profiles

Blue : ideal image. Red : fusion. Black : mean

Result 3 – Real images – 2D

  • 2 different FSFs, one per observation (62×150 px)

[Figure: ideal image]

SLIDE 14

Result 3 – Observations – (left FSF, right FSF, Var., SNR)

  • Obs. 1 – (3, 8, 1, 43.2)
  • Obs. 2 – (8, 3, 1, 43.3)

Result 3 – Fusion [Panels: ideal image; fusion (MSE: 17); mean image (MSE: 24)]

Result 4 – 3D simulation

  • Three different galaxies: Gaussian spatial profile + mixture of functions for the spectra (Dirac + Gaussian functions)
  • 4 different observations (size: 32×32×32 px) with different LSFs and FSFs + variable spatial shifts + Gaussian noise (depending on intensities) + 4 IFUs (management of the pixtable)

Observation | FSF (FWHM) | LSF (FWHM) | Shift (x, y) | SNR (min–max)
1           | 3          | 2          | (0, 0)       | 1–35
2           | 2          | 3.5        | (0.5, 0.5)   | 11–35
3           | 4          | 3          | (0.75, 0.75) | 5–33
4           | 3          | 4          | (1.2, 1.6)   | 13–33

SLIDE 15

Result 4 – Observations – First bands

  • Obs. 1
  • Obs. 2
  • Obs. 3
  • Obs. 4

Result 4 – Fusion

  • Mean image, MSE = 1.53e-008
  • Fusion, MSE = 1.06e-009

Result 4 – Profiles (first band)

Blue : ideal image. Red : fusion. Black : mean

Result 4 – First galaxy spectrum

Blue : ideal image. Red : fusion. Black : mean

SLIDE 16

Result 4 – Second galaxy spectrum

Blue : ideal image. Red : fusion. Black : mean

Perspectives

  • Ability to deal with different observation parameters: PSF, noise, spatial shift, geometric distortion…
  • Generic capacity to deal with different kinds of instruments and to fuse their observations together in an optimal way (Bayesian framework)
  • Promising results, especially with:
    • PSF widths depending on spatial and spectral positions
    • different spectral shifts according to spatial position
    • the deconvolution step: add astronomical priors
    • cosmic-ray disturbance and outlier pixels
  • To do:
    • computation, simplification, and analysis of the covariance matrix of the fused image
    • apply the fusion algorithm to raw MUSE simulations (in progress)
    • cosmic-ray contamination of neighboring pixels
    • fusion pipeline on large hyperspectral images; computing time for four 32×32×32 simulations ≈ 30 min (mainly depends on the width of the PSF/LSF)