SLIDE 1

Inverse problems in signal and image processing and Bayesian inference framework: from basic to advanced Bayesian computation

Ali Mohammad-Djafari

Laboratoire des Signaux et Systèmes (L2S), UMR8506 CNRS-CentraleSupélec-Univ Paris-Sud, SUPELEC, 91192 Gif-sur-Yvette, France
http://lss.centralesupelec.fr
Email: djafari@lss.supelec.fr
http://djafari.free.fr
http://publicationslist.org/djafari

SLIDE 2

Contents

1. Signal and Image Processing: classical vs. inverse problems approaches
2. Inverse problems examples
  ◮ Instrumentation
  ◮ Imaging systems to see outside of a body
  ◮ Imaging systems to see inside of a body
  ◮ Other imaging systems (acoustics, radar, SAR, ...)
3. Analytical/algebraic methods
4. Deterministic regularization methods and their limitations
5. Bayesian approach
6. Two main steps: priors and computational aspects
7. Case studies: instrumentation, X ray Computed Tomography, microwave imaging, acoustic source localisation, ultrasound imaging, satellite image restoration, etc.

SLIDE 3

Signal and Image Processing: Classical/Inverse problems approach

◮ Classical: you are given a signal or an image; process it. Examples:
  ◮ Signal: detect periodicities or changes; model it for prediction (AR, MA, ARMA modeling); parameter estimation; ...
  ◮ Image: enhancement, restoration, segmentation, contour detection, compression, ...
◮ Model-based or inverse-problem approach:
  ◮ What does the observed signal or image represent?
  ◮ How is it related to the desired unknowns?
  ◮ Forward modelling / inversion
  ◮ Examples: deconvolution, image restoration, image reconstruction in Computed Tomography (CT), ...
  ◮ PCA, ICA / blind source separation
  ◮ Compressed Sensing / L1 regularization, Bayesian sparsity enforcing

SLIDE 4

Inverse Problems examples

◮ Example 1: Instrumentation: measuring the temperature with a thermometer → deconvolution
  ◮ f(t): input of the instrument
  ◮ g(t): output of the instrument
◮ Example 2: Seeing outside of a body: making an image using a camera, a microscope or a telescope → image restoration
  ◮ f(x, y): real scene
  ◮ g(x, y): observed image
◮ Example 3: Seeing inside of a body: Computed Tomography using X rays, US, microwaves, etc. → image reconstruction
  ◮ f(x, y): a section of a real 3D body f(x, y, z)
  ◮ gφ(r): a line of the observed radiograph gφ(r, z)
◮ Example 4: Seeing differently: MRI, radar, SAR, infrared, etc. → Fourier synthesis
  ◮ f(x, y): a section of a body or a scene
  ◮ g(u, v): partial data in the Fourier domain

SLIDE 5

Measuring variation of temperature with a thermometer

◮ f(t): variation of temperature over time
◮ g(t): variation of the length of the liquid in the thermometer
◮ Forward model: convolution

  g(t) = ∫ f(t′) h(t − t′) dt′ + ǫ(t)

  h(t): impulse response of the measurement system
◮ Inverse problem: deconvolution
  Given the forward model H (impulse response h(t)) and a set of data g(tᵢ), i = 1, ..., M, find f(t)

SLIDE 6

Measuring variation of temperature with a thermometer

Forward model: convolution

  g(t) = ∫ f(t′) h(t − t′) dt′ + ǫ(t)

  f(t) → [Thermometer h(t)] → g(t)

Inversion: deconvolution, i.e. recover f(t) from the observed g(t).
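A minimal numerical sketch of this forward model and a regularized inversion (the Gaussian impulse response, sizes and noise level are our illustrative assumptions, anticipating the regularization of slide 20; nothing here is from the talk itself):

```python
import numpy as np

# Forward model: g = h * f + noise (discrete convolution, illustrative h)
t = np.linspace(0, 10, 200)
f = ((t > 3) & (t < 6)).astype(float)       # unknown input: a boxcar
h = np.exp(-0.5 * ((t - 5) / 0.3) ** 2)     # assumed Gaussian impulse response
h /= h.sum()
# Circulant convolution matrix: column k is the response to an impulse at k
H = np.array([np.roll(h, k - len(t) // 2) for k in range(len(t))]).T
g = H @ f + 0.01 * np.random.randn(len(t))

# Naive inversion of H is unstable; Tikhonov regularization stabilizes it
lam = 1e-2
f_hat = np.linalg.solve(H.T @ H + lam * np.eye(len(t)), H.T @ g)
```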

SLIDE 7

Seeing outside of a body: Making an image with a camera, a microscope or a telescope

◮ f(x, y): real scene
◮ g(x, y): observed image
◮ Forward model: convolution

  g(x, y) = ∬ f(x′, y′) h(x − x′, y − y′) dx′ dy′ + ǫ(x, y)

  h(x, y): Point Spread Function (PSF) of the imaging system
◮ Inverse problem: image restoration
  Given the forward model H (PSF h(x, y)) and a set of data g(xᵢ, yᵢ), i = 1, ..., M, find f(x, y)

SLIDE 8

Making an image with an unfocused camera

Forward model: 2D convolution

  g(x, y) = ∬ f(x′, y′) h(x − x′, y − y′) dx′ dy′ + ǫ(x, y)

  f(x, y) → [h(x, y)] → + ← ǫ(x, y) → g(x, y)

Inversion: image deconvolution or restoration.

SLIDE 9

Seeing inside of a body: Computed Tomography

◮ f(x, y): a section of a real 3D body f(x, y, z)
◮ gφ(r): a line of the observed radiography gφ(r, z)
◮ Forward model: line integrals or Radon transform

  gφ(r) = ∫_{L(r,φ)} f(x, y) dl + ǫφ(r)
        = ∬ f(x, y) δ(r − x cos φ − y sin φ) dx dy + ǫφ(r)

◮ Inverse problem: image reconstruction
  Given the forward model H (Radon transform) and a set of data gφᵢ(r), i = 1, ..., M, find f(x, y)
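A small sketch of the forward and inverse problems, assuming scikit-image is available (its radon/iradon functions implement the Radon transform and the classical filtered back-projection; the phantom and angles are illustrative):

```python
import numpy as np
from skimage.transform import radon, iradon  # assumes a recent scikit-image

# Forward problem: sinogram g_phi(r) from a phantom f(x, y)
f = np.zeros((128, 128))
f[40:80, 50:90] = 1.0                        # simple rectangular phantom
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
g = radon(f, theta=angles)                   # line integrals for each angle

# Classical analytical inversion: filtered back-projection
f_fbp = iradon(g, theta=angles, filter_name='ramp')
```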

SLIDE 10

2D and 3D Computed Tomography

2D: gφ(r) = ∫_{L(r,φ)} f(x, y) dl
3D: gφ(r₁, r₂) = ∫_{L(r₁,r₂,φ)} f(x, y, z) dl

Forward problem: f(x, y) or f(x, y, z) → gφ(r) or gφ(r₁, r₂)
Inverse problem: gφ(r) or gφ(r₁, r₂) → f(x, y) or f(x, y, z)

SLIDE 11

Computed Tomography: Radon Transform

Forward: f(x, y) → g(r, φ)
Inverse: f(x, y) ← g(r, φ)

SLIDE 12

Microwave or ultrasound imaging

Measurements: the wave diffracted by the object, g(rᵢ)
Unknown quantity: f(r) = k₀² (n²(r) − 1)
Intermediate quantity: φ(r)

  g(rᵢ) = ∫_D Gm(rᵢ, r′) φ(r′) f(r′) dr′,  rᵢ ∈ S
  φ(r) = φ₀(r) + ∫_D Go(r, r′) φ(r′) f(r′) dr′,  r ∈ D

Born approximation (φ(r′) ≃ φ₀(r′)):

  g(rᵢ) = ∫_D Gm(rᵢ, r′) φ₀(r′) f(r′) dr′,  rᵢ ∈ S

Discretization:

  g = Gm F φ, φ = φ₀ + Go F φ → g = H(f) with F = diag(f), H(f) = Gm F (I − Go F)⁻¹ φ₀

(figure: incident field φ₀ illuminating the object (φ, f), measured scattered data g)

SLIDE 13

Fourier Synthesis in X ray Tomography

  g(r, φ) = ∬ f(x, y) δ(r − x cos φ − y sin φ) dx dy

  G(Ω, φ) = ∫ g(r, φ) exp[−jΩr] dr

  F(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy

  F(u, v) = G(Ω, φ) for u = Ω cos φ and v = Ω sin φ

This is the projection-slice theorem: the 1D Fourier transform G(Ω, φ) of the projection g(r, φ) equals the 2D Fourier transform F(u, v) of f(x, y) along the radial line at angle φ.

(figure: projection geometry f(x, y) → g(r, φ) → FT → G(Ω, φ), and the corresponding radial line at angle φ in the (u, v) Fourier plane)

SLIDE 14

Fourier Synthesis in X ray tomography

  G(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy

Forward problem: given f(x, y), compute G(u, v)
Inverse problem: given G(u, v) on those radial lines, estimate f(x, y)

SLIDE 15

Fourier Synthesis in Diffraction tomography

SLIDE 16

Fourier Synthesis in Diffraction tomography

  G(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy

Forward problem: given f(x, y), compute G(u, v)
Inverse problem: given G(u, v) on those semicircles, estimate f(x, y)

SLIDE 17

Fourier Synthesis in different imaging systems

  G(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy

X ray tomography, diffraction, eddy currents, SAR & radar.
Forward problem: given f(x, y), compute G(u, v)
Inverse problem: given G(u, v) on those algebraic lines, circles or curves, estimate f(x, y)
SLIDE 18

Linear inverse problems

◮ Deconvolution:
  g(t) = ∫ f(τ) h(t − τ) dτ
◮ Image restoration:
  g(x, y) = ∬ f(x′, y′) h(x − x′, y − y′) dx′ dy′
◮ Image reconstruction in X ray CT:
  g(r, φ) = ∬ f(x, y) δ(r − x cos φ − y sin φ) dx dy
◮ Fourier synthesis:
  g(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy
◮ Unified linear relation:
  g(s) = ∫ f(r) h(s, r) dr
SLIDE 19

Linear Inverse Problems

  g(sᵢ) = ∫ h(sᵢ, r) f(r) dr + ǫ(sᵢ),  i = 1, ..., M

◮ f(r) is assumed to be well approximated by
  f(r) ≃ Σⱼ₌₁ᴺ fⱼ φⱼ(r)
  with {φⱼ(r)} a basis or any other set of known functions, so that
  g(sᵢ) = gᵢ ≃ Σⱼ₌₁ᴺ fⱼ ∫ h(sᵢ, r) φⱼ(r) dr,  i = 1, ..., M
  g = Hf + ǫ with Hᵢⱼ = ∫ h(sᵢ, r) φⱼ(r) dr
◮ H has huge dimensions: 1D: 10³ × 10³; 2D: 10⁶ × 10⁶; 3D: 10⁹ × 10⁹
◮ Due to the ill-posedness of inverse problems, least squares (LS) methods,
  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖²,
  do not give satisfactory results. Hence the need for regularization methods:
  J(f) = ‖g − Hf‖² + λ‖f‖²
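A minimal numerical sketch of this instability (H here is a synthetic smoothing matrix, not from the talk): plain LS amplifies the noise when H is ill-conditioned, while the regularized criterion keeps the solution bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.arange(n)
# Ill-conditioned forward matrix: a Gaussian smoothing kernel (illustrative)
H = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
f_true = np.sin(2 * np.pi * x / n)
g = H @ f_true + 1e-3 * rng.standard_normal(n)

f_ls = np.linalg.lstsq(H, g, rcond=None)[0]                  # unregularized LS
lam = 1e-3
f_reg = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)  # Tikhonov

print(np.linalg.cond(H))                                     # huge condition number
print(np.linalg.norm(f_ls - f_true), np.linalg.norm(f_reg - f_true))
```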

SLIDE 20

Regularization theory

Inverse problems = ill-posed problems → need for prior information

Functional space (Tikhonov): g = H(f) + ǫ, J(f) = ‖g − H(f)‖₂² + λ‖Df‖₂²
Finite dimensional space (Phillips & Twomey): g = Hf + ǫ, J(f) = ‖g − Hf‖² + λ‖f‖²

◮ Minimum norm LS (MNLS): J(f) = ‖g − H(f)‖² + λ‖f‖²
◮ Classical regularization: J(f) = ‖g − H(f)‖² + λ‖Df‖²
◮ More general regularization:
  J(f) = Q(g − H(f)) + λΩ(Df)
  or J(f) = Δ₁(g, H(f)) + λΔ₂(Df, f₀)

Limitations:
◮ Errors are implicitly assumed white and Gaussian
◮ Limited prior information on the solution
◮ Lack of tools for the determination of the hyperparameters
SLIDE 21

Inversion: Probabilistic methods

Taking account of errors and uncertainties → probability theory

◮ Maximum Likelihood (ML)
◮ Minimum Inaccuracy (MI)
◮ Probability Distribution Matching (PDM)
◮ Maximum Entropy (ME) and Information Theory (IT)
◮ Bayesian inference (Bayes)

Advantages:
◮ Explicit account of the errors and noise
◮ A large class of priors via explicit or implicit modeling
◮ A coherent approach to combining the information content of the data and the priors

Limitations:
◮ Practical implementation and cost of computation

SLIDE 22

Bayesian estimation approach

  M: g = Hf + ǫ

◮ Observation model M + hypothesis on the noise ǫ →
  p(g|f; M) = pǫ(g − Hf)
◮ A priori information: p(f|M)
◮ Bayes:
  p(f|g; M) = p(g|f; M) p(f|M) / p(g|M)

Link with regularization:

◮ Maximum A Posteriori (MAP):
  f̂ = arg max_f {p(f|g)} = arg max_f {p(g|f) p(f)} = arg min_f {J(f) = −ln p(g|f) − ln p(f)}
◮ Regularization:
  f̂ = arg min_f {J(f) = Q(g, Hf) + λΩ(f)}
  with Q(g, Hf) = −ln p(g|f) and λΩ(f) = −ln p(f)

SLIDE 23

Case of linear models and Gaussian priors

  g = Hf + ǫ

◮ Prior knowledge on the noise:
  ǫ ∼ N(0, σǫ² I) → p(g|f) ∝ exp[−‖g − Hf‖² / (2σǫ²)]
◮ Prior knowledge on f:
  f ∼ N(0, σf² (D′D)⁻¹) → p(f) ∝ exp[−‖Df‖² / (2σf²)]
◮ A posteriori:
  p(f|g) ∝ exp[−‖g − Hf‖² / (2σǫ²) − ‖Df‖² / (2σf²)]
◮ MAP: f̂ = arg max_f {p(f|g)} = arg min_f {J(f)}
  with J(f) = ‖g − Hf‖² + λ‖Df‖², λ = σǫ²/σf²
◮ Advantage: characterization of the solution
  p(f|g) = N(f̂, P̂) with f̂ = P̂ H′g, P̂ = (H′H + λD′D)⁻¹
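A small numerical sketch of this Gaussian case with D a first-difference operator (all sizes and values are illustrative), computing f̂ = P̂H′g and the pointwise posterior uncertainties:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 40
H = rng.standard_normal((m, n)) / np.sqrt(n)   # illustrative forward matrix
D = np.eye(n) - np.eye(n, k=1)                 # first-difference operator
f_true = np.cumsum(rng.standard_normal(n))     # smooth-ish unknown
sigma_eps, sigma_f = 0.05, 1.0
g = H @ f_true + sigma_eps * rng.standard_normal(m)

lam = sigma_eps**2 / sigma_f**2
P = np.linalg.inv(H.T @ H + lam * (D.T @ D))   # posterior covariance up to sigma_eps^2
f_map = P @ H.T @ g                            # MAP estimate = posterior mean
post_std = sigma_eps * np.sqrt(np.diag(P))     # pointwise posterior uncertainty
```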

SLIDE 24

MAP estimation with other priors:

  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖² + λΩ(f)

Separable priors:
◮ Gaussian: p(fⱼ) ∝ exp[−α|fⱼ|²] → Ω(f) = α Σⱼ |fⱼ|²
◮ Gamma: p(fⱼ) ∝ fⱼ^α exp[−βfⱼ] → Ω(f) = α Σⱼ ln fⱼ + β Σⱼ fⱼ
◮ Beta: p(fⱼ) ∝ fⱼ^α (1 − fⱼ)^β → Ω(f) = α Σⱼ ln fⱼ + β Σⱼ ln(1 − fⱼ)
◮ Generalized Gaussian: p(fⱼ) ∝ exp[−α|fⱼ|^p], 1 < p < 2 → Ω(f) = α Σⱼ |fⱼ|^p

Markovian models:
  p(fⱼ|f) ∝ exp[−α Σ_{i∈Nⱼ} φ(fⱼ, fᵢ)] → Ω(f) = α Σⱼ Σ_{i∈Nⱼ} φ(fⱼ, fᵢ)

SLIDE 25

MAP estimation with Markovian priors:

  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖² + λΩ(f), Ω(f) = Σⱼ φ(fⱼ − fⱼ₋₁)

with potential functions φ(t):
◮ Convex functions:
  |t|^α, √(1 + t²) − 1, log(cosh(t)),
  φ(t) = t² if |t| ≤ T, 2T|t| − T² if |t| > T (Huber)
◮ Non-convex functions:
  log(1 + t²), t²/(1 + t²), arctan(t²),
  φ(t) = t² if |t| ≤ T, T² if |t| > T (truncated quadratic)
◮ A great number of methods and optimization algorithms exist.

SLIDE 26

Main advantages of the Bayesian approach

◮ MAP = regularization
◮ Posterior mean? Marginal MAP?
◮ More information in the posterior law than only its mode or its mean
◮ Tools for estimating hyperparameters
◮ Tools for model selection
◮ More specific and specialized priors, particularly through hidden variables and hierarchical models
◮ More computational tools:
  ◮ Expectation-Maximization for computing the maximum likelihood parameters
  ◮ MCMC for posterior exploration
  ◮ Variational Bayes for analytical computation of the posterior marginals
  ◮ ...

SLIDE 27

Bayesian Estimation: Simple priors

◮ Linear model: g = Hf + ǫ
◮ Gaussian case:
  p(g|f, θ₁) = N(Hf, θ₁I), p(f|θ₂) = N(0, θ₂I)
  → p(f|g, θ) = N(f̂, P̂) with P̂ = (H′H + λI)⁻¹, λ = θ₁/θ₂, f̂ = P̂ H′g
  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖₂² + λ‖f‖₂²
◮ Generalized Gaussian prior & MAP:
  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖₂² + λ‖f‖_β^β
◮ Double Exponential (β = 1):
  f̂ = arg min_f {J(f)} with J(f) = ‖g − Hf‖₂² + λ‖f‖₁
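For β = 1 there is no closed-form solution; one standard way to compute this MAP estimate is iterative soft thresholding (ISTA). A minimal sketch (the function names and the step-size choice are ours, not from the talk):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(H, g, lam, n_iter=500):
    """MAP with a Laplace prior: minimize ||g - Hf||_2^2 + lam * ||f||_1."""
    L = np.linalg.norm(H, 2) ** 2        # ||H||^2; full gradient Lipschitz const. is 2L
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ f - g)         # half the gradient of the quadratic term
        f = soft_threshold(f - grad / L, lam / (2 * L))
    return f
```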

SLIDE 28

Full (Unsupervised) Bayesian approach

  M: g = Hf + ǫ

◮ Forward & error model: → p(g|f, θ₁; M)
◮ Prior model: → p(f|θ₂; M)
◮ Hyperparameters θ = (θ₁, θ₂): → p(θ|M)
◮ Bayes:
  p(f, θ|g; M) = p(g|f, θ; M) p(f|θ; M) p(θ|M) / p(g|M)
◮ Joint MAP:
  (f̂, θ̂) = arg max_{(f,θ)} {p(f, θ|g; M)}
◮ Marginalization:
  p(f|g; M) = ∫ p(f, θ|g; M) dθ, p(θ|g; M) = ∫ p(f, θ|g; M) df
◮ Posterior means:
  f̂ = ∬ f p(f, θ|g; M) dθ df, θ̂ = ∬ θ p(f, θ|g; M) df dθ
◮ Evidence of the model:
  p(g|M) = ∬ p(g|f, θ; M) p(f|θ; M) p(θ|M) df dθ
SLIDE 29

Two main steps in the Bayesian approach

◮ Prior modeling
  ◮ Separable: Gaussian, Gamma; sparsity enforcing: Generalized Gaussian, mixture of Gaussians, mixture of Gammas, ...
  ◮ Markovian: Gauss-Markov, GGM, ...
  ◮ Markovian with hidden variables (contours, region labels)
◮ Choice of the estimator and computational aspects
  ◮ MAP, posterior mean, marginal MAP
  ◮ MAP needs optimization algorithms
  ◮ Posterior mean needs integration methods
  ◮ Marginal MAP and hyperparameter estimation need both integration and optimization
  ◮ Approximations:
    ◮ Gaussian approximation (Laplace)
    ◮ Numerical exploration (MCMC)
    ◮ Variational Bayes (separable approximation)

SLIDE 30

Different prior models for signals and images: Separable

Gaussian: p(fⱼ) ∝ exp[−α|fⱼ|²]
Generalized Gaussian: p(fⱼ) ∝ exp[−α|fⱼ|^p], 1 ≤ p ≤ 2
Gamma: p(fⱼ) ∝ fⱼ^α exp[−βfⱼ]
Beta: p(fⱼ) ∝ fⱼ^α (1 − fⱼ)^β

SLIDE 31

Sparsity enforcing prior models

◮ Sparse signals: direct sparsity
◮ Sparse signals: sparsity in a transform domain

SLIDE 32

Sparsity enforcing prior models

◮ Simple heavy-tailed models:
  ◮ Generalized Gaussian, Double Exponential
  ◮ Symmetric Weibull, symmetric Rayleigh
  ◮ Student-t, Cauchy
  ◮ Generalized hyperbolic
  ◮ Elastic net
◮ Hierarchical mixture models:
  ◮ Mixture of Gaussians
  ◮ Bernoulli-Gaussian
  ◮ Mixture of Gammas
  ◮ Bernoulli-Gamma
  ◮ Mixture of Dirichlet
  ◮ Bernoulli-Multinomial

SLIDE 33

Which images am I looking for?

SLIDE 34

Which image am I looking for?

(example images: Gauss-Markov, Generalized Gauss-Markov, piecewise Gaussian, mixture of Gauss-Markov)

SLIDE 35

Different prior models for signals and images: Separable

◮ Simple separable models: Gaussian, Gamma, Generalized Gaussian
  p(f) ∝ exp[−Σⱼ φ(fⱼ)]
◮ Simple Markovian models: Gauss-Markov, Generalized Gauss-Markov
  p(f) ∝ exp[−Σⱼ Σ_{i∈N(j)} φ(fⱼ − fᵢ)]
◮ Hierarchical models with hidden variables: Bernoulli-Gaussian, Gaussian-Gamma
  p(f|z) = Πⱼ p(fⱼ|zⱼ) and p(z) = Πⱼ p(zⱼ)
  with different choices for p(fⱼ|zⱼ) and p(zⱼ)

SLIDE 36

Hierarchical models and hidden variables

◮ Student-t model:
  St(f|ν) ∝ exp[−((ν + 1)/2) log(1 + f²/ν)]
◮ Infinite Scaled Gaussian Mixture (ISGM) equivalence:
  St(f|ν) = ∫₀^∞ N(f|0, 1/z) G(z|α, β) dz, with α = β = ν/2

  p(f|z) = Πⱼ p(fⱼ|zⱼ) = Πⱼ N(fⱼ|0, 1/zⱼ) ∝ exp[−(1/2) Σⱼ zⱼ fⱼ²]
  p(z|α, β) = Πⱼ G(zⱼ|α, β) ∝ Πⱼ zⱼ^(α−1) exp[−βzⱼ] ∝ exp[Σⱼ ((α − 1) ln zⱼ − βzⱼ)]
  p(f, z|α, β) ∝ exp[−(1/2) Σⱼ zⱼ fⱼ² + Σⱼ ((α − 1) ln zⱼ − βzⱼ)]
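This equivalence is easy to check by simulation: sampling z ∼ G(ν/2, ν/2) and then f|z ∼ N(0, 1/z) should reproduce Student-t samples. A minimal sketch (ν and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
nu, n = 3.0, 100_000

# Hierarchical sampling: z ~ Gamma(nu/2, rate=nu/2), then f | z ~ N(0, 1/z)
z = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # numpy uses scale = 1/rate
f_hier = rng.normal(0.0, 1.0 / np.sqrt(z))

f_direct = rng.standard_t(nu, size=n)               # direct Student-t samples
# Empirical quantiles of the two samples should match up to Monte Carlo noise
print(np.quantile(f_hier, [0.1, 0.5, 0.9]))
print(np.quantile(f_direct, [0.1, 0.5, 0.9]))
```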

SLIDE 37

Gauss-Markov-Potts prior models for images

Variables: image f(r), labels z(r), contours c(r) = 1 − δ(z(r) − z(r′))

  p(f(r)|z(r) = k, mₖ, vₖ) = N(mₖ, vₖ)
  p(f(r)) = Σₖ P(z(r) = k) N(mₖ, vₖ): mixture of Gaussians

◮ Separable iid hidden variables: p(z) = Πᵣ p(z(r))
◮ Markovian hidden variables: Potts-Markov model
  p(z(r)|z(r′), r′ ∈ V(r)) ∝ exp[γ Σ_{r′∈V(r)} δ(z(r) − z(r′))]
  p(z) ∝ exp[γ Σ_{r∈R} Σ_{r′∈V(r)} δ(z(r) − z(r′))]
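The Potts prior can be explored by Gibbs sampling; here is a minimal sketch for a K-label Potts field on a torus (γ, grid size and sweep count are illustrative assumptions; this samples the prior only, with no data term):

```python
import numpy as np

rng = np.random.default_rng(6)
K, size, gamma = 3, 32, 1.0          # labels, grid size, Potts coefficient
z = rng.integers(0, K, (size, size))

# Gibbs sweeps over the Potts field with a toroidal 4-neighbourhood:
# p(z(r) = k | neighbours) ∝ exp(gamma * #{neighbours with label k})
for sweep in range(30):
    for i in range(size):
        for j in range(size):
            neigh = [z[(i - 1) % size, j], z[(i + 1) % size, j],
                     z[i, (j - 1) % size], z[i, (j + 1) % size]]
            logp = np.array([gamma * neigh.count(k) for k in range(K)], dtype=float)
            p = np.exp(logp - logp.max())
            z[i, j] = rng.choice(K, p=p / p.sum())
# For gamma large enough, z organizes into compact homogeneous regions.
```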

SLIDE 38

Four different cases

To each pixel of the image are associated two variables, f(r) and z(r):

◮ f|z Gaussian iid, z iid: mixture of Gaussians
◮ f|z Gauss-Markov, z iid: mixture of Gauss-Markov
◮ f|z Gaussian iid, z Potts-Markov: mixture of independent Gaussians (MIG with hidden Potts)
◮ f|z Markov, z Potts-Markov: mixture of Gauss-Markov (MGM with hidden Potts)

SLIDE 39

Bayesian Computation and Algorithms

◮ Joint posterior probability law of all the unknowns f, z, θ:
  p(f, z, θ|g) ∝ p(g|f, θ₁) p(f|z, θ₂) p(z|θ₃) p(θ)
◮ Often the expression of p(f, z, θ|g) is complex;
  its optimization (for joint MAP), or its marginalization or integration (for marginal MAP or PM), is not easy.
◮ Two main techniques: MCMC and Variational Bayesian Approximation (VBA)
◮ MCMC: needs the expressions of the conditionals p(f|z, θ, g), p(z|f, θ, g), and p(θ|f, z, g)
◮ VBA: approximate p(f, z, θ|g) by a separable law
  q(f, z, θ|g) = q₁(f) q₂(z) q₃(θ)
  and then do all computations with these separable laws.

SLIDE 40

Hierarchical models

Simple case (1 layer):
(graphical model: θ₂ → f → H → g, with noise ǫ governed by θ₁)

  g = Hf + ǫ
  p(f|g, θ) ∝ p(g|f, θ₁) p(f|θ₂)

Objective: infer f
MAP: f̂ = arg max_f {p(f|g, θ)}
Posterior mean (PM): f̂ = ∫ f p(f|g, θ) df

Unsupervised case (2 layers):
(graphical model: hyperpriors α₀ → θ₁ and β₀ → θ₂; θ₂ → f → H → g, noise ǫ governed by θ₁)

  p(f, θ|g) ∝ p(g|f, θ₁) p(f|θ₂) p(θ)

Objective: infer (f, θ)
JMAP: (f̂, θ̂) = arg max_{(f,θ)} {p(f, θ|g)}
Marginalization: p(θ|g) = ∫ p(f, θ|g) df
VBA: approximate p(f, θ|g) by q₁(f) q₂(θ)

SLIDE 41

Hierarchical models (3 layers)

(graphical model: γ₀ → θ₃ → z; β₀ → θ₂; α₀ → θ₁; (z, θ₂) → f → H → g, noise ǫ governed by θ₁)

  p(f, z, θ|g) ∝ p(g|f, θ₁) p(f|z, θ₂) p(z|θ₃) p(θ)
  p(θ) = p(θ₁|α₀) p(θ₂|β₀) p(θ₃|γ₀)

Objective: infer (f, z, θ)
JMAP: (f̂, ẑ, θ̂) = arg max_{(f,z,θ)} {p(f, z, θ|g)}
Marginalization:
  p(z, θ|g) = ∫ p(f, z, θ|g) df, then p(θ|g) = ∫ p(z, θ|g) dz
  or p(f|g) = ∬ p(f, z, θ|g) dz dθ
VBA: approximate p(f, z, θ|g) by q₁(f) q₂(z) q₃(θ)

SLIDE 42

MCMC based algorithm

  p(f, z, θ|g) ∝ p(g|f, z, θ₁) p(f|z, θ₂) p(z|θ₃) p(θ)

General scheme:
  f̂ ∼ p(f|ẑ, θ̂, g) → ẑ ∼ p(z|f̂, θ̂, g) → θ̂ ∼ p(θ|f̂, ẑ, g)

◮ Estimate f using p(f|ẑ, θ̂, g) ∝ p(g|f, θ) p(f|ẑ, θ̂):
  when Gaussian, this can be done via optimization of a quadratic criterion.
◮ Estimate z using p(z|f̂, θ̂, g) ∝ p(g|f̂, z, θ̂) p(z):
  often needs sampling (hidden discrete variable).
◮ Estimate θ using p(θ|f̂, ẑ, g) ∝ p(g|f̂, σǫ² I) p(f̂|ẑ, (mₖ, vₖ)) p(θ):
  use of conjugate priors → analytical expressions.
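As an illustration of this scheme on the simplest case (no hidden z; Gaussian f and conjugate Gamma priors on the noise and prior precisions; all sizes and hyperparameters are illustrative assumptions), a minimal Gibbs sampler sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 30
H = rng.standard_normal((m, n))
f_true = rng.standard_normal(n)
g = H @ f_true + 0.1 * rng.standard_normal(m)

# Gibbs sampling for p(f, tau_eps, tau_f | g) with conjugate Gamma priors
a0 = b0 = 1e-3
tau_eps, tau_f = 1.0, 1.0
samples = []
for it in range(1000):
    # f | rest: Gaussian (quadratic criterion -> one linear solve)
    P = np.linalg.inv(tau_eps * H.T @ H + tau_f * np.eye(n))
    mu = tau_eps * P @ H.T @ g
    f = rng.multivariate_normal(mu, P)
    # tau_eps | rest: Gamma (conjugacy -> analytical expression)
    r = g - H @ f
    tau_eps = rng.gamma(a0 + m / 2, 1.0 / (b0 + r @ r / 2))
    # tau_f | rest: Gamma
    tau_f = rng.gamma(a0 + n / 2, 1.0 / (b0 + f @ f / 2))
    samples.append(f)

f_pm = np.mean(samples[200:], axis=0)   # posterior mean after burn-in
```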

SLIDE 43

Variational Bayesian Approximation

◮ Approximate p(f, θ|g) by q(f, θ|g) = q₁(f|g) q₂(θ|g) and then continue the computations.
◮ Criterion: KL(q(f, θ|g) : p(f, θ|g))
◮ KL(q : p) = ∬ q ln(q/p) = ∬ q₁q₂ ln(q₁q₂/p)
  = ∫ q₁ ln q₁ + ∫ q₂ ln q₂ − ∬ q ln p = −H(q₁) − H(q₂) − ⟨ln p⟩_q
◮ Iterative algorithm: q₁ → q₂ → q₁ → q₂ → ...
  q₁(f) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₂(θ)}]
  q₂(θ) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₁(f)}]

p(f, θ|g) → [Variational Bayesian Approximation] → q₁(f) → f̂ and q₂(θ) → θ̂

SLIDE 44

JMAP

  (f̂, θ̂) = arg max_{(f,θ)} {p(f, θ|g)}

Alternate optimization:
  θ̂ = arg max_θ {p(f̂, θ|g)}
  f̂ = arg max_f {p(f, θ̂|g)}

Main drawbacks:
◮ Convergence
◮ Uncertainties in each step are not accounted for

SLIDE 45

Marginalization

◮ Marginal MAP:
  θ̂ = arg max_θ {p(θ|g)} where p(θ|g) = ∫ p(f, θ|g) df ∝ p(g|θ) p(θ)
  and then f̂ = arg max_f {p(f|θ̂, g)}
  or posterior mean: f̂ = ∫ f p(f|θ̂, g) df
◮ Main drawback: needs the expression of the likelihood
  p(g|θ) = ∫ p(g|f, θ₁) p(f|θ₂) df,
  which is not always analytically available → EM, SEM and GEM algorithms

SLIDE 46

EM and GEM algorithms

◮ EM and GEM algorithms: f as hidden variable, g as incomplete data, (g, f) as complete data
  ln p(g|θ): incomplete-data log-likelihood
  ln p(g, f|θ): complete-data log-likelihood
◮ Iterative algorithm:
  E-step: Q(θ, θ̂^(k)) = E_{p(f|g, θ̂^(k))}[ln p(g, f|θ)]
  M-step: θ̂^(k+1) = arg max_θ {Q(θ, θ̂^(k))}
◮ GEM (Bayesian) algorithm:
  E-step: Q(θ, θ̂^(k)) = E_{p(f|g, θ̂^(k))}[ln p(g, f|θ) + ln p(θ)]
  M-step: θ̂^(k+1) = arg max_θ {Q(θ, θ̂^(k))}

p(f, θ|g) → [EM, GEM] → θ̂ → p(f|θ̂, g) → f̂
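A minimal sketch of EM in the linear Gaussian case, where both steps are analytical: f is the hidden variable and θ = (σǫ², σf²) are the unknown variances (sizes and initializations are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 40
H = rng.standard_normal((m, n))
f_true = rng.standard_normal(n)
g = H @ f_true + 0.2 * rng.standard_normal(m)

# EM for theta = (v_eps, v_f) in g = Hf + eps with hidden f ~ N(0, v_f I)
v_eps, v_f = 1.0, 1.0
for it in range(100):
    # E-step: posterior p(f | g, theta^(k)) is Gaussian N(mu, Sigma)
    Sigma = np.linalg.inv(H.T @ H / v_eps + np.eye(n) / v_f)
    mu = Sigma @ H.T @ g / v_eps
    # M-step: maximize Q(theta, theta^(k)) -> closed-form variance updates
    v_f = (mu @ mu + np.trace(Sigma)) / n
    resid = g - H @ mu
    v_eps = (resid @ resid + np.trace(H @ Sigma @ H.T)) / m

print(v_eps, v_f)   # estimated noise and prior variances
```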

SLIDE 47

JMAP, Marginalization, VBA

◮ JMAP:
  p(f, θ|g) → [joint optimization] → f̂, θ̂
◮ Marginalization:
  p(f, θ|g) → [marginalize over f] → p(θ|g) → θ̂ → p(f|θ̂, g) → f̂
◮ Variational Bayesian Approximation:
  p(f, θ|g) → [VBA] → q₁(f) → f̂ and q₂(θ) → θ̂

SLIDE 48

Variational Bayesian Approximation

◮ Approximate p(f, θ|g) by q(f, θ) = q₁(f) q₂(θ) and then use them for any inference on f and θ respectively.
◮ Criterion: KL(q(f, θ|g) : p(f, θ|g))
  KL(q : p) = ∬ q ln(q/p) = ∬ q₁q₂ ln(q₁q₂/p)
◮ Iterative algorithm: q₁ → q₂ → q₁ → q₂ → ...
  q₁(f) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₂(θ)}]
  q₂(θ) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₁(f)}]

p(f, θ|g) → [Variational Bayesian Approximation] → q₁(f) → f̂ and q₂(θ) → θ̂

SLIDE 49

Variational Bayesian Approximation

  p(f, θ|g, M) = p(f, θ, g|M) / p(g|M)
  p(f, θ, g|M) = p(g|f, θ, M) p(f|θ, M) p(θ|M)

  KL(q : p) = ∬ q(f, θ) ln [q(f, θ) / p(f, θ|g; M)] df dθ

  ln p(g|M) = ln ∬ q(f, θ) [p(g, f, θ|M) / q(f, θ)] df dθ
            ≥ ∬ q(f, θ) ln [p(g, f, θ|M) / q(f, θ)] df dθ  (Jensen)

Free energy:
  F(q) = ∬ q(f, θ) ln [p(g, f, θ|M) / q(f, θ)] df dθ

Evidence of the model M:
  ln p(g|M) = F(q) + KL(q : p)

Minimizing KL(q : p) = maximizing F(q)

SLIDE 50

VBA: Separable Approximation

  ln p(g|M) = F(q) + KL(q : p), q(f, θ) = q₁(f) q₂(θ)

Minimizing KL(q : p) = maximizing F(q):
  (q̂₁, q̂₂) = arg min_{(q₁,q₂)} {KL(q₁q₂ : p)} = arg max_{(q₁,q₂)} {F(q₁q₂)}

KL(q₁q₂ : p) is convex in q₁ when q₂ is fixed and vice versa:
  q̂₁ = arg min_{q₁} {KL(q₁q̂₂ : p)} = arg max_{q₁} {F(q₁q̂₂)}
  q̂₂ = arg min_{q₂} {KL(q̂₁q₂ : p)} = arg max_{q₂} {F(q̂₁q₂)}

  q₁(f) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₂(θ)}]
  q₂(θ) ∝ exp[⟨ln p(g, f, θ; M)⟩_{q₁(f)}]
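A minimal sketch of these coupled updates for the linear Gaussian model with unknown noise precision τ, where q₁ is Gaussian and q₂ is Gamma (the priors, sizes and convergence criterion are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 40
H = rng.standard_normal((m, n))
f_true = rng.standard_normal(n)
g = H @ f_true + 0.1 * rng.standard_normal(m)

# VBA for g = Hf + eps, eps ~ N(0, I/tau), f ~ N(0, I), tau ~ Gamma(a0, b0):
# q1(f) = N(mu, Sigma), q2(tau) = Gamma(a, b), alternating the exp(<ln p>) rule
a0 = b0 = 1e-3
tau_mean = 1.0
for it in range(50):
    Sigma = np.linalg.inv(tau_mean * H.T @ H + np.eye(n))    # q1 update
    mu = tau_mean * Sigma @ H.T @ g
    resid = g - H @ mu
    a = a0 + m / 2                                           # q2 update
    b = b0 + 0.5 * (resid @ resid + np.trace(H @ Sigma @ H.T))
    tau_mean = a / b                                         # <tau> under q2

print(tau_mean)   # should approach the true noise precision, 1 / 0.1**2
```

Note how the q₂ update uses the full expectation ⟨‖g − Hf‖²⟩ under q₁ (residual plus trace term), which is precisely how the uncertainty on f̂ propagates to θ̂, unlike JMAP.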

SLIDE 51

VBA: Choice of the families of laws q1 and q2

◮ Case 1: → joint MAP
  q₁(f|f̃) = δ(f − f̃), q₂(θ|θ̃) = δ(θ − θ̃)
  → f̃ = arg max_f {p(f, θ̃|g; M)}, θ̃ = arg max_θ {p(f̃, θ|g; M)}
◮ Case 2: → EM
  q₁(f) ∝ p(f|θ, g), q₂(θ|θ̃) = δ(θ − θ̃)
  → Q(θ, θ̃) = ⟨ln p(f, θ|g; M)⟩_{q₁(f|θ̃)}, θ̃ = arg max_θ {Q(θ, θ̃)}
◮ Appropriate choice for inverse problems:
  q₁(f) ∝ p(f|θ̃, g; M), q₂(θ) ∝ p(θ|f̃, g; M)
  → accounts for the uncertainties of θ̂ on f̂ and vice versa.
  Exponential families, conjugate priors.

SLIDE 52

JMAP, EM and VBA

JMAP alternate optimization algorithm:
  θ̂^(0) → f̂ = arg max_f {p(f, θ̂|g)} → f̂ → θ̂ = arg max_θ {p(f̂, θ|g)} → θ̂ → (iterate)

EM:
  θ̂^(0) → q₁(f) = p(f|θ̂, g) → Q(θ, θ̂) = ⟨ln p(f, θ|g)⟩_{q₁(f)} → θ̂ = arg max_θ {Q(θ, θ̂)} → (iterate) → f̂

VBA:
  θ̂^(0) → q₁(f) ∝ exp[⟨ln p(f, θ|g)⟩_{q₂(θ)}] → q₂(θ) ∝ exp[⟨ln p(f, θ|g)⟩_{q₁(f)}] → (iterate) → f̂, θ̂

SLIDE 53

Computed Tomography: Discretization

(figure: discretization grid with pixel values f₁, ..., f_N; ray i crosses pixel j with weight Hᵢⱼ and produces datum gᵢ)

  g(r, φ) = ∫_L f(x, y) dl, f(x, y) = Σⱼ fⱼ bⱼ(x, y)
  bⱼ(x, y) = 1 if (x, y) ∈ pixel j, 0 otherwise
  gᵢ = Σⱼ₌₁ᴺ Hᵢⱼ fⱼ + ǫᵢ → g = Hf + ǫ

Case study: reconstruction from two projections

  g₁(x) = ∫ f(x, y) dy, g₂(y) = ∫ f(x, y) dx

A very ill-posed inverse problem: f(x, y) = g₁(x) g₂(y) Ω(x, y), where Ω(x, y) is a copula:

  ∫ Ω(x, y) dx = 1, ∫ Ω(x, y) dy = 1
SLIDE 54

Simple example

(example: a 2×2 image with values 1, 3, 2, 4; the data are the two row sums (4, 6) and the two column sums (3, 7); the unknowns f₁, ..., f₄ must be recovered from g₁, ..., g₄. A 3×3 image f₁, ..., f₉ with six sums g₁, ..., g₆ is analogous.)

  [g₁, g₂, g₃, g₄]′ = H [f₁, f₂, f₃, f₄]′, where each row of H contains the ones selecting one row or one column of the image.

◮ Hf = g → f = H⁻¹g if H is invertible.
◮ Here H is rank deficient: rank(H) = 3.
◮ The problem has an infinite number of solutions.
◮ How do we find all those solutions?
◮ Which one is the right one? We need prior information.
◮ To find a unique solution, one needs either more data or prior information.
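This small example is easy to check numerically (the pixel ordering below is our illustrative assumption; a minimal sketch):

```python
import numpy as np

# 2x2-pixel example: two row sums and two column sums as data
H = np.array([[1, 1, 0, 0],     # g1 = f1 + f2 (first row)
              [0, 0, 1, 1],     # g2 = f3 + f4 (second row)
              [1, 0, 1, 0],     # g3 = f1 + f3 (first column)
              [0, 1, 0, 1]])    # g4 = f2 + f4 (second column)
print(np.linalg.matrix_rank(H))          # 3 < 4: rank deficient

f_true = np.array([1.0, 3.0, 2.0, 4.0])
g = H @ f_true                           # = [4, 6, 3, 7]
f_mnls = np.linalg.pinv(H) @ g           # minimum-norm LS: one of infinitely many
null = np.array([1.0, -1.0, -1.0, 1.0])  # null-space direction: H @ null = 0
print(H @ (f_mnls + 2.5 * null) - g)     # any f_mnls + c*null fits the data exactly
```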

SLIDE 55

Application in CT: Reconstruction from 2 projections

Model components:
◮ g|f: g = Hf + ǫ with g|f ∼ N(Hf, σǫ² I) (iid Gaussian noise)
◮ f|z: Gauss-Markov
◮ z: Potts
◮ c: binary contour variable c(r) = 1 − δ(z(r) − z(r′)) ∈ {0, 1}

  p(f, z, θ|g) ∝ p(g|f, θ₁) p(f|z, θ₂) p(z|θ₃) p(θ)

SLIDE 56

Proposed algorithms

  p(f, z, θ|g) ∝ p(g|f, θ₁) p(f|z, θ₂) p(z|θ₃) p(θ)

◮ MCMC based general scheme:
  f̂ ∼ p(f|ẑ, θ̂, g) → ẑ ∼ p(z|f̂, θ̂, g) → θ̂ ∼ p(θ|f̂, ẑ, g)
  Iterative algorithm:
  ◮ Estimate f using p(f|ẑ, θ̂, g) ∝ p(g|f, θ) p(f|ẑ, θ̂): needs optimization of a quadratic criterion.
  ◮ Estimate z using p(z|f̂, θ̂, g) ∝ p(g|f̂, z, θ̂) p(z): needs sampling of a Potts Markov field.
  ◮ Estimate θ using p(θ|f̂, ẑ, g) ∝ p(g|f̂, σǫ² I) p(f̂|ẑ, (mₖ, vₖ)) p(θ): conjugate priors → analytical expressions.
◮ Variational Bayesian Approximation:
  ◮ Approximate p(f, z, θ|g) by q₁(f) q₂(z) q₃(θ)

SLIDE 57

Results

(figure: original phantom and reconstructions by backprojection, filtered backprojection, least squares, Gauss-Markov with positivity, GM with hidden line process, and GM with hidden label process, together with the estimated contours c and labels z)

SLIDE 58

Application in Acoustic source localization

(Ning Chu et al.)

(figure, over the (x, y) source plane: source powers, beamforming powers, Bayesian MAP inversion, and the proposed VBA inversion)

SLIDE 59

Application in Acoustic source localization

(Ning Chu et al.)

(figure: wind-tunnel experiment; comparison of beamforming, DAMAS, and the proposed VBA inference)

SLIDE 60

Application in Microwave imaging

  g(ω) = ∫ f(r) exp[−j(ω·r)] dr + ǫ(ω)
  g(u, v) = ∬ f(x, y) exp[−j(ux + vy)] dx dy + ǫ(u, v)
  g = Hf + ǫ

(figure: object f(x, y), Fourier-domain data g(u, v), reconstruction f̂ by inverse FT, and f̂ by the proposed method)
SLIDE 61

Microwave Imaging for Breast Cancer detection

(L. Gharsalli et al.)

(figure: measurement configuration: a source and receivers around a 12.2 cm domain containing the skin, the breast (regions D₁, ..., D₄) and a tumor S; dimensions 10 cm, 7.5 cm and 2 cm)

SLIDE 62

Microwave Imaging for Breast Cancer detection

CSI: Contrast Source Inversion; VBA: Variational Bayesian Approach; MGI: independent Gaussian mixture; MGM: Gauss-Markov mixture.

SLIDE 63

Images fusion and joint segmentation

(with O. Féron)

  gᵢ(r) = fᵢ(r) + ǫᵢ(r)
  p(fᵢ(r)|z(r) = k) = N(mᵢₖ, σᵢₖ²)
  p(f|z) = Πᵢ p(fᵢ|z)

(figure: observed images g₁, g₂ → estimated f̂₁, f̂₂ and common segmentation ẑ)
SLIDE 64

Data fusion in medical imaging

(with O. Féron)

  gᵢ(r) = fᵢ(r) + ǫᵢ(r)
  p(fᵢ(r)|z(r) = k) = N(mᵢₖ, σᵢₖ²)
  p(f|z) = Πᵢ p(fᵢ|z)

(figure: two medical images g₁, g₂ → estimated f̂₁, f̂₂ and common segmentation ẑ)
SLIDE 65

Joint segmentation of hyper-spectral images

(with N. Bali & A. Mohammadpour)

  gᵢ(r) = fᵢ(r) + ǫᵢ(r)
  p(fᵢ(r)|z(r) = k) = N(mᵢₖ, σᵢₖ²), k = 1, ..., K
  p(f|z) = Πᵢ p(fᵢ|z)

The means mᵢₖ follow a Markovian model along the index i.

SLIDE 66

Segmentation of a video sequence of images

(with P. Brault)

  gᵢ(r) = fᵢ(r) + ǫᵢ(r)
  p(fᵢ(r)|zᵢ(r) = k) = N(mᵢₖ, σᵢₖ²), k = 1, ..., K
  p(f|z) = Πᵢ p(fᵢ|zᵢ)

The labels zᵢ(r) follow a Markovian model along the index i.

SLIDE 67

Image separation in satellite imaging

(with H. Snoussi & M. Ichir)

  gᵢ(r) = Σⱼ₌₁ᴺ Aᵢⱼ fⱼ(r) + ǫᵢ(r)
  p(fⱼ(r)|zⱼ(r) = k) = N(mⱼₖ, σⱼₖ²)
  p(Aᵢⱼ) = N(A₀ᵢⱼ, σ₀ᵢⱼ²)

(figure: mixed observations g → estimated sources f̂ and labels ẑ)
SLIDE 68

Conclusions

◮ Inverse problems arise in many science and engineering applications.
◮ Deterministic algorithms: optimization of a two-term criterion (data-fit term plus penalty/regularization term).
◮ Probabilistic methods: the Bayesian approach.
◮ Hierarchical prior models with hidden variables are very powerful tools for the Bayesian approach to inverse problems.
◮ Gauss-Markov-Potts models for images incorporate hidden regions and contours.
◮ Main Bayesian computation tools: JMAP, MCMC and VBA.
◮ Applications in different imaging systems (X ray CT, microwaves, PET, ultrasound, Optical Diffusion Tomography (ODT), acoustic source localization, ...).

Current projects:
◮ Efficient implementation in 2D and 3D cases

SLIDE 69

Thanks to:

Present PhD students:
◮ L. Gharsali (microwave imaging for cancer detection)
◮ M. Dumitru (multivariate time series analysis for biological signals)
◮ S. AlAli (diffraction imaging for geophysical applications)

Freshly graduated PhD students:
◮ C. Cai (2013: multispectral X ray tomography)
◮ N. Chu (2013: acoustic source localization)
◮ Th. Boulay (2013: non-cooperative radar target recognition)
◮ R. Prenon (2013: proteomics and mass spectrometry)
◮ Sh. Zhu (2012: SAR imaging)
◮ D. Fall (2012: positron emission tomography, non-parametric Bayesian)
◮ D. Pougaza (2011: copulas and tomography)
◮ H. Ayasso (2010: optical tomography, variational Bayes)

Older graduated PhD students:
◮ S. Fékih-Salem (2009: 3D X ray tomography)
◮ N. Bali (2007: hyperspectral imaging)
◮ O. Féron (2006: microwave imaging)
◮ F. Humblot (2005: super-resolution)
◮ M. Ichir (2005: image separation in the wavelet domain)
◮ P. Brault (2005: video segmentation using the wavelet domain)

SLIDE 70

Thanks to:

Older graduated PhD students:
◮ H. Snoussi (2003: source separation)
◮ Ch. Soussen (2000: geometrical tomography)
◮ G. Montémont (2000: detectors, filtering)
◮ H. Carfantan (1998: microwave imaging)
◮ S. Gautier (1996: gamma-ray imaging for NDT)
◮ M. Nikolova (1994: piecewise Gaussian models and GNC)
◮ D. Prémel (1992: eddy current imaging)

Post-docs:
◮ J. Lapuyade (2011: dimensionality reduction and multivariate analysis)
◮ S. Su (2006: color image separation)
◮ A. Mohammadpour (2004-2005: hyperspectral image segmentation)

Colleagues:
◮ B. Duchêne, A. Joisel & G. Perruson (L2S) (inverse scattering and microwave imaging)
◮ N. Gac (L2S) (GPU implementation)
◮ Th. Rodet (L2S) (computed tomography)

SLIDE 71

Thanks to:

National collaborators:
◮ A. Vabre & S. Legoupil (CEA-LIST) (3D X ray tomography)
◮ E. Barat (CEA-LIST) (positron emission tomography, non-parametric Bayesian)
◮ C. Comtat (SHFJ, CEA) (PET, spatio-temporal brain activity)
◮ J. Picheral (SSE, Supélec) (acoustic source localization)
◮ D. Blacodon (ONERA) (acoustic source separation)
◮ J. Lagoutte (Thales Air Systems) (non-cooperative radar target recognition)
◮ P. Grangeat (LETI, CEA, Grenoble) (proteomics and mass spectrometry)
◮ F. Lévi (CNRS-INSERM, Hôpital Paul Brousse) (biological rhythms and chronotherapy of cancer)

International collaborators:
◮ K. Sauer (Notre Dame University, IN, USA) (computed tomography, inverse problems)
◮ F. Marvasti (Sharif University) (sparse signal processing)
◮ M. Aminghafari (Amir Kabir University) (independent component analysis)
◮ A. Mohammadpour (AKU) (statistical inference)
◮ Gh. Yari (Tehran Technological University) (probability and analysis)

SLIDE 72

References 1

• A. Mohammad-Djafari, "Bayesian approach with prior models which enforce sparsity in signal and image processing," EURASIP Journal on Advances in Signal Processing, special issue on Sparse Signal Processing, 2012.
• A. Mohammad-Djafari (Ed.), Problèmes inverses en imagerie et en vision (Vol. 1 et 2), Hermès-Lavoisier, Traité Signal et Image, IC2, 2009.
• A. Mohammad-Djafari (Ed.), Inverse Problems in Vision and 3D Tomography, ISTE, Wiley and Sons, ISBN 9781848211728, December 2009, hardback, 480 pp.
• A. Mohammad-Djafari, "Gauss-Markov-Potts priors for images in computer tomography resulting to joint optimal reconstruction and segmentation," International Journal of Tomography & Statistics, 11(W09):76-92, 2008.
• A. Mohammad-Djafari, "Super-resolution: a short review, a new method based on hidden Markov modeling of HR image and future challenges," The Computer Journal, doi:10.1093/comjnl/bxn005, 2008.
• H. Ayasso and A. Mohammad-Djafari, "Joint NDT image restoration and segmentation using Gauss-Markov-Potts prior models and variational Bayesian computation," IEEE Trans. on Image Processing, TIP-04815-2009.R2, 2010.
• H. Ayasso, B. Duchêne and A. Mohammad-Djafari, "Bayesian inversion for optical diffraction tomography," Journal of Modern Optics, 2008.
• N. Bali and A. Mohammad-Djafari, "Bayesian approach with hidden Markov modeling and mean field approximation for hyperspectral data analysis," IEEE Trans. on Image Processing, 17(2):217-225, Feb. 2008.
• H. Snoussi and J. Idier, "Bayesian blind separation of generalized hyperbolic processes in noisy and underdeterminate mixtures," IEEE Trans. on Signal Processing, 2006.
SLIDE 73

References 2

• O. Féron, B. Duchène and A. Mohammad-Djafari, "Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data," Inverse Problems, 21(6):95-115, Dec. 2005.
• M. Ichir and A. Mohammad-Djafari, "Hidden Markov models for blind source separation," IEEE Trans. on Signal Processing, 15(7):1887-1899, Jul. 2006.
• F. Humblot and A. Mohammad-Djafari, "Super-resolution using hidden Markov model and Bayesian detection estimation framework," EURASIP Journal on Applied Signal Processing, special number on Super-Resolution Imaging: Analysis, Algorithms, and Applications, ID 36971, 16 pages, 2006.
• O. Féron and A. Mohammad-Djafari, "Image fusion and joint segmentation using an MCMC algorithm," Journal of Electronic Imaging, 14(2): paper no. 023014, Apr. 2005.
• H. Snoussi and A. Mohammad-Djafari, "Fast joint separation and segmentation of mixed images," Journal of Electronic Imaging, 13(2):349-361, Apr. 2004.
• A. Mohammad-Djafari, J.F. Giovannelli, G. Demoment and J. Idier, "Regularization, maximum entropy and probabilistic methods in mass spectrometry data processing problems," Int. Journal of Mass Spectrometry, 215(1-3):175-193, Apr. 2002.
• H. Snoussi and A. Mohammad-Djafari, "Estimation of structured Gaussian mixtures: the inverse EM algorithm," IEEE Trans. on Signal Processing, 55(7):3185-3191, Jul. 2007.
• N. Bali and A. Mohammad-Djafari, "A variational Bayesian algorithm for BSS problem with hidden Gauss-Markov models for the sources," in: Independent Component Analysis and Signal Separation (ICA 2007), M.E. Davies, Ch.J. James, S.A. Abdallah, M.D. Plumbley (Eds.), 137-144, Springer (LNCS 4666), 2007.

SLIDE 74

References 3

• N. Bali and A. Mohammad-Djafari, "Hierarchical Markovian models for joint classification, segmentation and data reduction of hyperspectral images," ESANN 2006, September 4-8, Belgium, 2006.
• M. Ichir and A. Mohammad-Djafari, "Hidden Markov models for wavelet-based blind source separation," IEEE Trans. on Image Processing, 15(7):1887-1899, Jul. 2005.
• S. Moussaoui, C. Carteret, D. Brie and A. Mohammad-Djafari, "Bayesian analysis of spectral mixture data using Markov chain Monte Carlo methods sampling," Chemometrics and Intelligent Laboratory Systems, 81(2):137-148, 2005.
• H. Snoussi and A. Mohammad-Djafari, "Fast joint separation and segmentation of mixed images," Journal of Electronic Imaging, 13(2):349-361, Apr. 2004.
• H. Snoussi and A. Mohammad-Djafari, "Bayesian unsupervised learning for source separation with mixture of Gaussians prior," Journal of VLSI Signal Processing Systems, 37(2/3):263-279, Jun./Jul. 2004.
• F. Su and A. Mohammad-Djafari, "An hierarchical Markov random field model for Bayesian blind image separation," International Congress on Image and Signal Processing (CISP 2008), Sanya, Hainan, China, 27-30 May 2008.
• N. Chu, J. Picheral and A. Mohammad-Djafari, "A robust super-resolution approach with sparsity constraint for near-field wideband acoustic imaging," IEEE International Symposium on Signal Processing and Information Technology, pp. 286-289, Bilbao, Spain, Dec. 14-17, 2011.

SLIDE 75

Questions, Discussions, Open mathematical problems

◮ Sparse representation, low-rank matrix decomposition
  ◮ Sparsity and positivity or other constraints
  ◮ Group sparsity
  ◮ Algorithmic and implementation issues for large-dimensional applications (Big Data)
  ◮ Joint estimation of the dictionary and the coefficients
◮ Optimization of the KL divergence for the Variational Bayesian Approximation
  ◮ Convergence of alternate optimization
  ◮ Other possible algorithms
◮ Properties of the obtained approximation
  ◮ Do the moments of the q's correspond to the moments of p?
  ◮ What about other statistics: entropy, ...?
  ◮ Other divergence or distance measures?
◮ Using sparsity as a prior in inverse problems
◮ Applications in biological data and signal analysis, medical imaging, Non Destructive Testing (NDT), industrial imaging, communication, geophysical imaging, radio astronomy, ...

SLIDE 76

Blind deconvolution (1)

(graphical models: simple case with θ₂ → f and θ₃ → h both feeding H and g, noise ǫ governed by θ₁; unsupervised case with hyperpriors α₀, β₀, γ₀ on θ₁, θ₂, θ₃)

  g = h ∗ f + ǫ = Hf + ǫ = F h + ǫ

Simple priors:
  p(f, h|g, θ) ∝ p(g|f, h, θ₁) p(f|θ₂) p(h|θ₃)
  Objective: infer (f, h)
  JMAP: (f̂, ĥ) = arg max_{(f,h)} {p(f, h|g)}
  VBA: approximate p(f, h|g) by q₁(f) q₂(h)

Unsupervised:
  p(f, h, θ|g) ∝ p(g|f, h, θ₁) p(f|θ₂) p(h|θ₃) p(θ)
  Objective: infer (f, h, θ)
  JMAP: (f̂, ĥ, θ̂) = arg max_{(f,h,θ)} {p(f, h, θ|g)}
  VBA: approximate p(f, h, θ|g) by q₁(f) q₂(h) q₃(θ)

SLIDE 77

Blind deconvolution (2)

(graphical model: hierarchical priors with hidden variables z_f on f and z_h on h, hyperpriors γ_f0, γ_h0 and α₀)

  g = h ∗ f + ǫ = Hf + ǫ = F h + ǫ

Simple priors: p(f, h|g, θ) ∝ p(g|f, h, θ₁) p(f|θ₂) p(h|θ₃)
Unsupervised: p(f, h, θ|g) ∝ p(g|f, h, θ₁) p(f|θ₂) p(h|θ₃) p(θ)
Sparsity enforcing prior for f:
  p(f, z_f, h, θ|g) ∝ p(g|f, h, θ₁) p(f|z_f) p(z_f|θ₂) p(h|θ₃) p(θ)
Sparsity enforcing prior for h:
  p(f, h, z_h, θ|g) ∝ p(g|f, h, θ₁) p(f|θ₂) p(h|z_h) p(z_h|θ₃) p(θ)
Hierarchical models for both f and h:
  p(f, z_f, h, z_h, θ|g) ∝ p(g|f, h, θ₁) p(f|z_f) p(z_f|θ₂) p(h|z_h) p(z_h|θ₃) p(θ)
JMAP: (f̂, ẑ_f, ĥ, ẑ_h, θ̂) = arg max {p(f, z_f, h, z_h, θ|g)}
VBA: approximate p(f, z_f, h, z_h, θ|g) by q₁(f) q₂(z_f) q₃(h) q₄(z_h) q₅(θ)
