Statistical image segmentation with Bayesian approach

Tapio Helin
Institute of Mathematics, Helsinki University of Technology
Workshop on Inverse and Partial Information Problems:
Outline

1. Introduction
2. Hierarchical stochastic model
3. Conclusions
1. Introduction
Hierarchical priors
Suppose some parameter of the prior distribution is unknown, or it is preferable not to fix it exactly. The prior is called hierarchical if these parameters are themselves modelled as random variables. One way to think about it: suppose the distribution of U depends on V. Then

M = AU + E  ⇒  M = A U(V) + E.
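To make the idea concrete, here is a minimal numpy sketch (my illustration, not from the talk) of sampling from such a two-level model; the Gaussian blur operator A and the diagonal dependence of the prior covariance of U on V are simplifying assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # Hypothetical forward operator: a normalized Gaussian blur matrix.
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    # Hyperprior level: V controls the pointwise prior deviation of U.
    v = 1.0 + 0.1 * rng.standard_normal(n)    # V ~ N(1, 0.01 I)
    u = np.abs(v) * rng.standard_normal(n)    # U | V ~ N(0, diag(v^2))
    e = 0.01 * rng.standard_normal(n)         # E ~ N(0, sigma^2 I)
    m = A @ u + e                             # M = A U(V) + E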
Data segmentation
In 1984 S. Geman and D. Geman proposed a statistical approach to image denoising. Let U, M and E be Rⁿ-valued random variables such that M = AU + E, where E ∼ N(0, σ²I). They introduced a {0, 1}ⁿ-valued random variable L that describes the edge set. The a priori distribution for the pair (U, L) was

π_pr(u, ℓ) ∝ exp( −Σ_i [ α(1 − ℓ_{i+1/2})(u_{i+1} − u_i)² + β ℓ_{i+1/2} ] ).
Data segmentation
In this setting the posterior distribution is π(u, ℓ | m) ∝ exp(−E(u, ℓ, m)), where the free energy E(u, ℓ, m) is given by

E(u, ℓ, m) = Σ_i [ α(1 − ℓ_{i+1/2})(u_{i+1} − u_i)² + β ℓ_{i+1/2} + (1/(2σ²))((Au)_i − m_i)² ].
Geman and Geman proposed maximizing π_post, i.e., computing the MAP estimate, which amounts to minimizing the free energy E.
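Geman and Geman's own computational scheme was stochastic relaxation (simulated annealing with a Gibbs sampler). As a simpler illustration of the same MAP problem, here is a coordinate-descent sketch (mine, with hypothetical parameter values), assuming A = I: for fixed u each binary ℓ_{i+1/2} has a closed-form minimizer, and for fixed ℓ the energy is quadratic in u.

    import numpy as np

    def icm_map(m, alpha=50.0, beta=1.0, sigma=0.05, iters=20):
        """MAP by coordinate descent on the Geman-Geman free energy (A = I)."""
        n = len(m)
        u = m.copy()
        ell = np.zeros(n - 1)
        for _ in range(iters):
            # For fixed u, each edge variable picks the cheaper of the two
            # terms: the smoothness penalty alpha*(du)^2 or the price beta.
            du2 = np.diff(u) ** 2
            ell = (alpha * du2 > beta).astype(float)
            # For fixed ell, the energy is quadratic in u: solve the
            # normal equations (L^T W L + I/(2 sigma^2)) u = m/(2 sigma^2).
            L = np.diff(np.eye(n), axis=0)        # forward differences
            w = alpha * (1.0 - ell)               # active smoothness weights
            H = L.T @ np.diag(w) @ L + np.eye(n) / (2 * sigma**2)
            u = np.linalg.solve(H, m / (2 * sigma**2))
        return u, ell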
Mumford-Shah functional

In 1989 Mumford and Shah presented the following continuous version of G-G's method:

arg min_{u,K}  ∫_{T\K} |Du|² dx + ♯(K) + ∫_T |u − m|² dx,    (1)

where K is the "discontinuity set" or "jump set" of a piecewise regular function u(x).
Ambrosio-Tortorelli

Ambrosio and Tortorelli proposed approximating the Mumford-Shah functional by the elliptic functionals

F_ε(u, v) = ∫_T [ (v² + ε²)|Du|² + ε|Dv|² + (1 − v)²/(4ε) + |Au − m|² ] dx.

F_ε Γ-converges in the L¹(T) × L¹(T) topology to the weak formulation of the Mumford-Shah functional,

F(u, v) = ∫_{T\S_u} |Du|² dx + ♯(S_u) + ∫_T |Au − m|² dx

for u ∈ SBV(T) and v(x) = 1 almost everywhere; otherwise F(u, v) = ∞.
Ambrosio-Tortorelli
Minimizing

∫_T [ (v² + ε²)|Du|² + ε|Dv|² + (1 − v)²/(4ε) + |u − m|² ] dx

produces results of the following kind:

[Figure: data m, reconstruction u and edge indicator v.]
Ambrosio-Tortorelli
If this,

arg min_{u,v ∈ H¹(T)}  ∫_T [ (v² + ε²)|Du|² + ε|Dv|² + (1 − v)²/(4ε) + |Au − m|² ] dx,

is the qualitative behaviour you want, how do you achieve it?
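Deterministically, one standard answer (a sketch of mine, not part of the talk) is alternating minimization: the functional is quadratic in u for fixed v and quadratic in v for fixed u, so each half-step reduces to a linear solve. A minimal 1D discretization with A = I and grid scalings omitted:

    import numpy as np

    def at_minimize(m, eps=0.05, iters=50):
        """Alternating minimization of a discretized A-T functional (A = I)."""
        n = len(m)
        D = np.diff(np.eye(n), axis=0)        # node -> edge differences
        D2 = np.diff(np.eye(n - 1), axis=0)   # edge -> edge differences
        u, v = m.copy(), np.ones(n - 1)       # v lives on the edges
        for _ in range(iters):
            # u-step: (v^2 + eps^2)-weighted smoothness plus data fit.
            W = np.diag(v**2 + eps**2)
            u = np.linalg.solve(D.T @ W @ D + np.eye(n), m)
            # v-step: (Du)^2 v^2 + eps|Dv|^2 + (1 - v)^2/(4 eps).
            du2 = (D @ u) ** 2
            H = np.diag(du2) + eps * (D2.T @ D2) + np.eye(n - 1) / (4 * eps)
            v = np.linalg.solve(H, np.full(n - 1, 1.0 / (4 * eps)))
        return u, v

On noisy piecewise-constant data one expects u to come out smoothed inside segments while v dips toward 0 at the jumps.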
2. Hierarchical stochastic model
Motivation
Suppose you are given a linear inverse problem M = AU + E in Rⁿ with a hierarchical prior such that U ∼ N(u0, C_U(V)) and V ∼ N(v0, C_V), and E white noise.
Motivation
Then π_post(u, v | m) ∝ exp(−(1/2)E(u, v)) with a free energy of the form

E(u, v) = log det C_U(v) + ⟨u − u0, C_U(v)⁻¹(u − u0)⟩ + ⟨v − v0, C_V⁻¹(v − v0)⟩ + ‖Au − m‖².
More motivation
Put u0 = 0, v0 = 1, C_V = ((1/(4ε)) Id − ε∆_n)⁻¹ and C_U(v) = (−D_n(ε² + v²)D_n)⁻¹. Then the free energy is close to the A-T functional; in fact,

E(u, v) = log det C_U(v) + ⟨u, −D_n(ε² + v²)D_n u⟩ + ⟨v − 1, ((1/(4ε)) Id − ε∆_n)(v − 1)⟩ + ‖Au − m‖²
        = −∫_T n log(ε² + v²) dx + F_ε(u, v).
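A minimal numpy sketch (mine) of this discrete free energy on a uniform grid, assuming D_n is the forward difference, v is placed on the n − 1 edges, and the log-determinant is replaced by the integral approximation above:

    import numpy as np

    def free_energy(u, v, m, A, eps):
        """Discrete free energy E(u, v) of the hierarchical model."""
        n = len(u)
        D = np.diff(np.eye(n), axis=0)             # nodes -> edges
        logdet = -np.sum(np.log(eps**2 + v**2))    # -int n log(eps^2+v^2) dx
        Du = D @ u
        prior_u = np.sum((eps**2 + v**2) * Du**2)  # <u, -D(eps^2+v^2)D u>
        prior_v = (np.sum((v - 1.0) ** 2) / (4 * eps)
                   + eps * np.sum(np.diff(v) ** 2))
        data = np.sum((A @ u - m) ** 2)            # ||Au - m||^2
        return logdet + prior_u + prior_v + data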
Things to consider
One could ask, e.g.:
(i) Is the corresponding L²-valued model well-defined?
(ii) How can one discretize invariantly?
(iii) How does the posterior distribution behave asymptotically?
(iv) Are we still doing what we were supposed to?
About MAP estimate
Consider the minimization problem

min_{u,v ∈ H¹(T) ∩ X_n}  ∫_T [ −n log(ε² + v²) + (ε² + v²)|D_n u|² + (1/(4ε))(1 − v)² + ε|D_n v|² + |Au − m|² ] dx.

Taking u = 0 and v = 1 gives the value −n log(ε² + 1)|T| + ‖m‖², so the minimum value decreases without bound as n grows. We also see that the minimizers diverge.
Definition of V
To simplify notation we assume that the probability space has the product structure Ω = Ω1 × Ω2, Σ = Σ1 × Σ2 and P = P1 × P2. Define V as a Gaussian random variable on L²(T) with mean v0 = 1 and covariance operator C_V = ((1/(4ε)) Id − ε∆)⁻¹, and assume that V = V(ω2).

Lemma. For 0 < α < 1/2 we have V ∈ C^{0,α}(T) almost surely.
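For intuition, a minimal sketch (mine) of drawing one discrete sample of V on an n-point periodic grid; the h-scalings needed for a discretization-invariant model are deliberately omitted here:

    import numpy as np

    def sample_V(n, eps, rng):
        """Draw V ~ N(1, C_V) with C_V = (Id/(4 eps) - eps * Laplacian)^{-1}."""
        # Discrete Laplacian with periodic boundary (the torus T).
        lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        lap[0, -1] = lap[-1, 0] = 1.0
        # Precision K = C_V^{-1}; if K = L L^T then L^{-T} xi ~ N(0, K^{-1}).
        K = np.eye(n) / (4 * eps) - eps * lap
        L = np.linalg.cholesky(K)
        return 1.0 + np.linalg.solve(L.T, rng.standard_normal(n))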
Definition of U
Define a mapping U for every test function φ ∈ C^∞(T) by

φ ↦ { ω ↦ ⟨W(ω1), A_{V(ω2)}φ⟩ | ω = (ω1, ω2) ∈ Ω },

where W is white noise and A_v = (D̃*(ε² + v²)D̃)^{−1/2}.

Lemma. The mapping U is a generalized random variable.
Corollary. We have U ∈ L²(T) almost surely.
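A finite-dimensional analogue (my sketch; a Dirichlet convention u(0) = 0 is imposed so that the precision matrix is invertible, which may differ from the talk's D̃): draw U | V = v by applying A_v, the inverse square root of the precision, to a white-noise vector.

    import numpy as np
    from scipy.linalg import sqrtm

    def sample_U_given_V(v, eps, rng):
        """Draw U | V = v with covariance C_U(v) = (D^T (eps^2+v^2) D)^{-1}."""
        n = len(v) + 1
        D = np.diff(np.eye(n), axis=0)[:, 1:]   # drop u_0: Dirichlet convention
        K = D.T @ np.diag(eps**2 + v**2) @ D    # precision matrix
        A_v = np.real(sqrtm(np.linalg.inv(K)))  # A_v = K^{-1/2}
        u = A_v @ rng.standard_normal(n - 1)    # U = A_v W, W white noise
        return np.concatenate(([0.0], u))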
Exponential moment
When dealing with additive Gaussian noise the following property is needed.

Theorem. For every b > 0 there exists a constant C_b > 0 such that the exponential moments satisfy

E exp(b‖(U, V)‖_{L²×L²}) < C_b  and  E exp(b‖(U_n, V_n)‖_{L²×L²}) < C_b

for all n ∈ N.
CM estimates converge
Corollary. Given the introduced prior, Gaussian noise and a weakly converging discretization scheme, the CM estimates converge.

A numerical example: deconvolution. [Figure: data m and CM estimates u and v.]
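In practice the CM estimate is the posterior mean, computed by averaging MCMC samples. A minimal random-walk Metropolis sketch (my illustration, not the talk's implementation), reusing the hypothetical free_energy helper from earlier:

    import numpy as np

    def cm_estimate(m, A, eps, n_samples=20000, step=0.02, rng=None):
        """CM estimate of (u, v) by random-walk Metropolis on exp(-E/2)."""
        if rng is None:
            rng = np.random.default_rng(0)
        n = len(m)
        u, v = np.zeros(n), np.ones(n - 1)
        E = free_energy(u, v, m, A, eps)
        u_sum, v_sum = np.zeros(n), np.zeros(n - 1)
        for _ in range(n_samples):
            u_new = u + step * rng.standard_normal(n)
            v_new = v + step * rng.standard_normal(n - 1)
            E_new = free_energy(u_new, v_new, m, A, eps)
            # Accept with probability min(1, exp(-(E_new - E)/2)).
            if rng.random() < np.exp(min(0.0, (E - E_new) / 2)):
                u, v, E = u_new, v_new, E_new
            u_sum += u
            v_sum += v
        return u_sum / n_samples, v_sum / n_samples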
3. Conclusions
Conclusions
Summary:
(1) We introduced a new prior model for segmenting signals.
(2) The CM estimates converge for a linear problem.
(3) There is a connection to the Mumford-Shah functional.

Future work includes (1) understanding the limiting estimate better, (2) analysing the error and (3) understanding the MAP estimates better.
If you want to know more...
- M. Lassas, E. Saksman, S. Siltanen: Discretization invariant Bayesian inversion and Besov space priors, submitted.
- S. Geman, D. Geman: Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. PAMI, PAMI-6(6), 1984.
- D. Mumford, J. Shah: Optimal approximation by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math., 1989.
- L. Ambrosio, V. M. Tortorelli: Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence. Comm. Pure Appl. Math., 1990.