Statistical Modeling, Estimation and Inverse Problems
Jalal Fadili
GREYC CNRS, Caen, France
Course "Méthodes variationnelles et parcimonieuses en traitement des signaux et des images" (variational and sparse methods in signal and image processing), IHP, 21 March 2008
Observation model: y = x + ε, with x synthesized in a dictionary of atoms, x = Φα, where y is (n × 1), Φ is (n × L) and α is (L × 1). Applications: denoising, deconvolution.
Statistical modeling. Transforms and sparsity. Estimation by thresholding. Bayesian estimation. Inverse problems. Image restoration.
The Bayesian paradigm for images. Bayesian estimation. Minimax estimation. Individual thresholding estimators. Block thresholding estimators. MAP formulation vs. variational formulation. Optimization framework.
The image is viewed as realization(s) of a random variable or a random field whose degradation equation is

Y_s = M[F((HX)_s) ⊙ ε_s],

where
- ⊙ is any composition of its two arguments (e.g. '+', '·');
- s ∈ S is the location index;
- ε_s is the (random) noise, generally assumed AWGN but not necessarily so (e.g. speckle, Poisson, 1/f);
- H is a (possibly non-linear) bounded degradation operator (e.g. convolution with a PSF);
- F is a transformation, not necessarily linear nor invertible (e.g. sensor-specific, a variance-stabilizing transform, a sensing operator, etc.);
- M is an operator reflecting the missing-data mechanism.

How to recover the unobserved X from the observed Y? An ill-posed inverse problem.
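As a toy illustration, the degradation model above can be simulated end to end. In this sketch the choices are hypothetical: H is a 3×3 box-blur convolution, F is the identity, ⊙ is '+', and M is a random binary mask.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
X = rng.standard_normal((n, n))              # unobserved image X

# H: circular convolution with a 3x3 box PSF (applied via FFT)
psf = np.ones((3, 3)) / 9.0
H = lambda u: np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(psf, s=(n, n))))

F = lambda u: u                              # sensor transform (identity here)
eps = 0.1 * rng.standard_normal((n, n))      # AWGN with sigma = 0.1

M = rng.random((n, n)) > 0.2                 # missing-data mask: ~80% observed
Y = np.where(M, F(H(X)) + eps, 0.0)          # Y_s = M[F((HX)_s) + eps_s]

print(Y.shape)
```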
Bayesian ingredients:
- p(x, z): prior distribution, z being some other image features;
- p(y|x, z): likelihood (given x and z), determined by p(ε);
- p(y): marginal distribution;
- p(x, z|y): posterior distribution,

p(x, z|y) = p(y|x, z) p(x, z) / p(y).
Bayesian estimation amounts to finding the operator D : Y → X, y ↦ x̂ = D(y), minimizing the Bayes risk:

D = arg inf_{D ∈ O_n} R(x, x̂ = D(y)) = arg inf_{D ∈ O_n} E_{Y,X}[L(X, D(Y))].

Proposition 1. Let X and Y be two RVs taking values in X and Y according to the above observation model.
(i) The MAP estimator corresponds to L(x, x̂) = 1 − δ(x − x̂): x̂_MAP = arg max_{x ∈ X} p_{X|Y}(x|y).
(ii) The MMSE estimator corresponds to L(x, x̂) = (x − x̂)²: x̂_MMSE = E[X|Y].
(iii) The MMAE estimator corresponds to L(x, x̂) = |x − x̂|: x̂_MMAE such that Pr(X > x̂_MMAE | Y = y) = 1/2 (the posterior median).
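The three losses lead to the posterior mode, mean and median respectively. A grid-based sketch for a scalar observation y = x + ε with a Laplace prior (the prior and all numerical values are hypothetical; with this prior the MAP rule is soft thresholding):

```python
import numpy as np

# Posterior on a grid for y = x + eps, eps ~ N(0, sigma^2), prior p(x) ∝ exp(-|x|/b)
sigma, b, y = 1.0, 1.0, 2.0
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

post = np.exp(-0.5 * (y - x) ** 2 / sigma**2 - np.abs(x) / b)
post /= post.sum() * dx                                 # normalized p(x|y)

x_map = x[np.argmax(post)]                              # MAP : posterior mode
x_mmse = (x * post).sum() * dx                          # MMSE: posterior mean E[X|Y=y]
x_mmae = x[np.searchsorted(np.cumsum(post) * dx, 0.5)]  # MMAE: posterior median

print(round(x_map, 3), round(x_mmse, 3), round(x_mmae, 3))
```

For this prior the posterior mode equals y − σ²/b = 1, the soft-thresholded value.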
Scale invariance: p(T_φ(X)) = p(X), where T_φ(X)(s) = X(φ(s)) for a scaling φ. There is no scale-invariant probability measure on image functions, but there is one on Schwartz distributions [MumfordGidas01]. Self-similarity: S_X(ω) = h(θ)|ω|^{−r}, |ω| = (ω₁² + ω₂²)^{1/2}.
Images with clutter c are random samples from an infinitely divisible probability measure p_c ∈ D′, such that p_{c+d} = p_c ∗ p_d. An image X with clutter c can thus be formed as a sum of m independent images (X_i)_{1≤i≤m}, each with clutter c/m: one forms an image with a given clutter as the superposition of independent images with less clutter. Be careful with partial occlusions (they violate independence).
The Lévy–Khintchine theorem makes explicit the nature of infinite divisibility:

X(s₁, s₂) = Σ_i G_i(λ_i s₁ + u_i, λ_i s₂ + v_i),

with (λ_i, u_i, v_i) a Poisson process in R⁺ × R² with density du dv dλ/λ, and the G_i independent random objects sampled from a reduced Lévy measure. [MumfordGidas01] called this a random wavelet expansion, and proved its convergence, as distributions, under mild conditions on the reduced Lévy measure.
Choose a Hilbert space equipped with a basis or frame. Project the original data into that space, where all components (coefficients) but a few are zero (the concept of sparsity): a matter of dimensionality reduction. Many images have a sparse gradient and fall within this intuitive model (e.g. TV norm).

[Figure: sorted coefficient amplitudes vs. sorted index — few big, many small.]

x = Φα, with Φ the (n × L) dictionary of atoms and α the (L × 1) coefficient vector: non-linear approximation theory (GP).
Expected marginal statistical behavior: unimodal, centered at zero; non-Gaussian, sharply peaked distribution; heavy tails: weak sparsity and NLA error decay. Expected joint statistical behavior: persistence across bands/scales; intra-band/scale dependence.

[Figure: log-histogram of wavelet coefficients with fitted marginals. Dashed: GGD [Mallat 89]; dotted: α-stable [Achim et al. 01, Boubchir and Fadili 04]; solid: Bessel K form [Grenander et al. 01, Fadili et al.].]
Transported Generator Model [Grenander et al. 02]:

X(s₁, s₂) = Σ_i A_i G_i(λ_i s₁ + u_i, λ_i s₂ + v_i), (s₁, s₂) ∈ [0, 1]²,

where the A_i are iid N(0, 1), the λ_i are iid uniform in [0, 1], and the (u_i, v_i) are independent 2D homogeneous Poisson P(µ).

Lemma 1. Let X_h = X ∗ h. Then X_h =_d Z√U, where U = Σ_i (G_i ∗ h)² > 0. Moreover, the RV X_h is leptokurtic and heavy-tailed.

Typically, h stems from a band-pass filter bank, e.g. wavelets.
Scale Mixture of Gaussians (SMG).

Definition 1 (Andrews and Mallows 74). Let X be a RV with real-valued realizations such that

X =_d Z√U,

where Z ~ N(0, σ²) and U ≥ 0 is a RV independent of Z.

Property 1.
- SMG is a subset of the elliptically symmetric distributions [Kotz et al. 89].
- p_X(0) exists iff E[U^{−1/2}] < +∞.
- The pdf of X is p_X(x) = (1/√(2π)) ∫₀^{+∞} u^{−1/2} e^{−x²/(2u)} f_U(u) du. It is unimodal, symmetric around the mode and differentiable almost everywhere.
- The characteristic function of X is Φ_X(ω) = L[f_U](ω²/2), where L denotes the Laplace transform.
Necessary and sufficient conditions for such a representation to exist:

Proposition 1 (Andrews and Mallows 74). The RV X has a SMG representation iff the kth derivatives of p_X(√y) have alternating signs, i.e.

(−1)^k (d^k/dy^k) p_X(√y) ≥ 0, ∀y > 0, ∀k.

Lemma 1 (Fadili et al. 05). If X =_d Z√U with random U ≥ 0 and Z ~ N(0, σ²), then kurtosis(X) > 0, i.e. the symmetric distribution of X is necessarily sharply peaked (leptokurtic) with heavy tails.

A good candidate as a sparsity prior: a generalization of the Laplacian (ℓ1 penalty), GGD, α-stable, BKF. There is an explicit relationship between the parameters of the SMG prior model and the Besov space to which the wavelet coefficients of the image belong a.s. [Fadili et al. 05].
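The lemma is easy to check by simulation: mixing Gaussian scales always produces excess kurtosis. A sketch with a hypothetical multiplier U ~ Exp(1), which yields a Laplace-type marginal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

Z = rng.standard_normal(n)       # Z ~ N(0, 1)
U = rng.exponential(1.0, n)      # U >= 0, here exponential (a Laplace-type SMG)
X = Z * np.sqrt(U)               # SMG sample: X =_d Z sqrt(U)

# Excess kurtosis E[X^4]/E[X^2]^2 - 3 (zero for a Gaussian, > 0 = leptokurtic)
kurt = np.mean(X**4) / np.mean(X**2) ** 2 - 3.0
print(kurt > 0.0)
```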
Barbara DWT coefficients: empirical (-•-), FM α-stable (solid), α-stable (dash-dotted), BKF (dashed), GGD (dotted).

[Figure: KL divergence per subband (HL1, LH1, HH1, ..., HH3) for BKF-cumulants, BKF-EM, α-stable, α-stable mixture and GGD fits.]

Comparison on a 100-image database.
Problem at hand: estimate X ∈ F from noisy data

Y_s = X_s + ε_s.

The minimax risk is defined as

R(F) = inf_{D ∈ O_n} sup_{X ∈ F} E‖D(Y) − X‖².

The prior set F in minimax estimation is typically: BV images/signals, a Besov body, etc. In the Bayesian framework, one puts an explicit prior distribution on signals/images in F (e.g. the Besov–GSM correspondence). The central minimax theorem states that the minimax risk is the minimum Bayes risk for a least favorable prior:

R(F) = sup_{p_X ∈ Θ} inf_{D ∈ O_n} E_{Y,X}‖D(Y) − X‖².
In Bayesian denoising with a sparsity-promoting prior in bases, coefficients are assumed independent (e.g. univariate SMG). This is clearly wrong because of the geometry (intra-band dependencies and inter-scale persistence). Still valuable. Why? Because this kind of prior is often close to a least favorable prior.
Signal domain: Y = X + ε, ε AWGN of variance σ². Transform domain: d = Φ∗Y = Φ∗X + η = α + η, where η is also AWGN.

Lemma 1 (oracle projection ideal risk). Let α̂ = d · 1(|α| > σ). If e_NLA(M) ∼ C²M^{−2β} with 1 ≤ C/σ ≤ n^{β+1/2}, then the oracle projector ideal risk satisfies

R_p(X) = E‖X̂ − X‖² ∼ C^{2/(1+2β)} σ^{4β/(2β+1)}.

In fact,

R_p(X) = E‖X̂ − X‖² = Σ_{i=1}^{n} min(α_i², σ²) = e_NLA(M_σ) + M_σ σ²,

where M_σ = #{i : |α_i| > σ}. The oracle estimator risk depends on the precision of the non-linear approximation in Φ. Approximation-theoretic results have a direct implication for statistical estimation: the sparser, the smaller the risk.
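The identity R_p(X) = Σ min(α_i², σ²) = e_NLA(M_σ) + M_σσ² can be checked numerically on a hypothetical power-law coefficient sequence α_k = k^{−(β+1/2)}:

```python
import numpy as np

beta, n, sigma = 1.0, 100_000, 0.01

# Sorted coefficients alpha_k = k^(-(beta+1/2)), so e_NLA(M) ~ M^(-2 beta)
k = np.arange(1, n + 1)
alpha = k ** (-(beta + 0.5))

# Oracle projector ideal risk: keep coefficient k iff |alpha_k| > sigma
Rp = np.sum(np.minimum(alpha**2, sigma**2))

M_sigma = int(np.sum(alpha > sigma))          # number of kept coefficients
e_nla = np.sum(alpha[M_sigma:] ** 2)          # NLA error of the M_sigma-term approximation
print(np.isclose(Rp, e_nla + M_sigma * sigma**2))
```

The risk also scales like σ^{4β/(2β+1)} up to a constant, as the lemma predicts.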
Thresholding estimation:

y → (Φ∗) → {d} → (threshold/shrink D_n) → {α̂} → (Φ) → X̂.

Estimator          | Prior                    | MAP energy                        | Rule
Hard thresholding  | C exp(−λ²‖α‖_{ℓ0}/2)     | ‖d − α‖² + λ²‖α‖_{ℓ0}             | d · 1(|d| > λ)
Soft thresholding  | C exp(−λ‖α‖_{ℓ1})        | (1/2)‖d − α‖² + λ‖α‖_{ℓ1}         | (d − λ sign(d)) · 1(|d| > λ)

Many other choices in the literature: SCAD, Firm, Garrote, etc. Generalizations in [Antoniadis99, Fadili06].

Theorem 1 ([DonohoJohnstone92]). Let λ_U = σ√(2 log n). Then the risk of HT or ST satisfies, for all n ≥ 4,

R_t(X) = E‖X̂_U − X‖² ≤ (2 log n + 1)(σ² + R_p(X)),

where X̂_U = HT_{λ_U}(Y) or X̂_U = ST_{λ_U}(Y). The threshold λ_U is optimal among all diagonal (term-by-term) estimators in Φ. The universal threshold λ_U is conservative; variations on this threshold (lower values) have been proposed by other authors [Antoniadis99, etc.]. Extension to colored noise: [Johnstone95].
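The two rules in the table, with the universal threshold λ_U = σ√(2 log n), in a minimal sketch on a synthetic sparse vector (sizes and amplitudes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def hard_thresh(d, lam):
    # d * 1(|d| > lam): MAP rule for the l0-type prior
    return d * (np.abs(d) > lam)

def soft_thresh(d, lam):
    # sign(d) * (|d| - lam)_+ : MAP rule for the Laplacian prior
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

n, sigma = 4096, 1.0
alpha = np.zeros(n)
alpha[:32] = 10.0                               # sparse coefficient vector
d = alpha + sigma * rng.standard_normal(n)      # d = alpha + eta (AWGN)

lam_U = sigma * np.sqrt(2.0 * np.log(n))        # universal threshold
mse = lambda a: np.mean((a - alpha) ** 2)
print(mse(hard_thresh(d, lam_U)), mse(soft_thresh(d, lam_U)), mse(d))
```

Both thresholded estimates have a much smaller risk than the raw noisy coefficients.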
Adaptive near-minimaxity of λ_U thresholding over almost the full array of Besov bodies Θ^β_{p,q}:

Θ^β_{p,q}(C) = { (α_{j,k})_{j,k} : Σ_j 2^{jβ′q} ( Σ_{k=0}^{2^j−1} |α_{j,k}|^p )^{q/p} ≤ C^q },  p ≥ 1, q ≥ 1, β′ = β + 1/2 − 1/p.

Theorem 1 ([DonohoJohnstone98]). Assume β > (1/p − 1/2)₊, 0 < p, q ≤ +∞, 0 < C < +∞; if p < 2, assume also β > 1/p. Then for any Besov body Θ^β_{p,q}(C),

sup_{Θ^β_{p,q}(C)} R_t(X̂_U, X) ≤ c_{β,p} (2 log n) C^{2/(2β+1)} σ^{4β/(2β+1)}.

A better risk can be achieved, but the threshold would then depend on (β, p) of the unknown signal.
Convergence rate O(σ^{4β/(2β+d)}) over d-dimensional (isotropic) Besov bodies. For C^β − C^β images, the best convergence rate is O(σ^{2β/(β+1)}) [KorostelevTsybakov93]. Wavelets are nearly minimax for BV images (O(σ), Θ¹_{1,1} ⊂ BV ⊂ Θ¹_{1,∞}). [Candès99] has shown that individual thresholding with ridgelets is nearly minimax for recovering piecewise smooth images away from discontinuities along lines. [CandesDonoho00] showed near-minimaxity (O(σ^{4/3}), up to log factors) of curvelet thresholding uniformly over C² − C² images. [LePennec et al. 07]: near-minimaxity (O(σ^{2β/(β+1)})) in an adaptively selected best bandlet orthobasis for C^β − C^β images.
Since the seminal work of Donoho and Johnstone, the literature has been inundated with modifications and extensions of this original idea (thresholding within sparse representations). A brief overview:

Classical term-by-term:
- Minimax estimation, SureShrink, etc. [Donoho et al. 92-95].
- Modifications of Donoho's shrinkage operators [Bruce and Gao, Antoniadis and Fan].
- Translation-invariant thresholding [Coifman and Donoho 95].
- Hypothesis testing [Abramovich and Benjamini 95-96, Ogden and Parzen 96].
- Cross-validation [Green and Silverman 94, Eubank 99], etc.

Bayesian term-by-term (univariate):
- Bernoulli-Gaussian FM [Abramovich et al. 98, Clyde and George 99,00].
- Bayesian hypothesis testing [Vidakovic et al. 98].
- SMG with exponential multiplier prior [Vidakovic et al. 00].
- Two-Gaussian FM [Chipman et al. 97]. Student-t prior [Vidakovic 98].
- GGD [Mallat99, Liu and Moulin 99]. Adaptive-variance Gaussian prior [Simoncelli 99].
- α-stable [Achim et al. 01], BKF [Fadili et al. 04-06].
Block thresholding. Let B = ⌊log n⌋, J_c = ⌊log₂ B⌋, and blocks U_{j,K} = {k : (K − 1)B ≤ k ≤ KB − 1}, K ∈ {1, ..., ⌊2^j/B⌋}. Let λ∗ be the root of x − log x = 3 (i.e. λ∗ = 4.50524...). Estimate α = (α_{j,k})_{j,k} by (α̂^B_{j,k})_{j,k}, where

α̂^B_{j,k} = d_{j,k},  for j ∈ {0, ..., J_c − 1}, k ∈ {0, ..., 2^j − 1},

α̂^B_{j,k} = d_{j,k} (1 − λ∗σ²B / Σ_{k′∈U_{j,K}} d²_{j,k′})₊,  for k ∈ U_{j,K}, all K and j ≥ J_c.
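A minimal sketch of this block James–Stein shrinkage rule on a single level (block size B = ⌊log n⌋, λ∗ ≈ 4.50524; the signal and noise below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

def block_js(d, sigma, B, lam_star=4.50524):
    """Shrink each block of size B by the factor (1 - lam* sigma^2 B / sum d^2)_+."""
    out = np.zeros_like(d)
    for start in range(0, len(d), B):
        blk = d[start:start + B]
        s2 = np.sum(blk**2)
        if s2 > 0:
            out[start:start + B] = max(0.0, 1.0 - lam_star * sigma**2 * B / s2) * blk
    return out

n, sigma = 4096, 1.0
alpha = np.zeros(n)
alpha[:64] = 8.0                                # sparse signal in the first blocks
d = alpha + sigma * rng.standard_normal(n)

est = block_js(d, sigma, B=int(np.floor(np.log(n))))
print(np.mean((est - alpha) ** 2), np.mean((d - alpha) ** 2))
```

Blocks of pure noise have energy ≈ Bσ² well below λ∗σ²B, so they are set to zero, while signal-dominated blocks are barely shrunk.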
Theorem 1 ([Cai99]). There exists a constant C′ > 0 such that

sup_{α ∈ Θ^β_{p,q}(C)} R(α̂^B, α) ≤ C′ σ^{4β/(2β+1)},  for p ≥ 2,
sup_{α ∈ Θ^β_{p,q}(C)} R(α̂^B, α) ≤ C′ σ^{4β/(2β+1)} (log n)^{(2−p)/(p(2β+1))},  for p < 2, sp ≥ 1.

The log factor has disappeared compared to α̂_U ⇒ optimally minimax.

The result generalizes to d dimensions (d = 2 for images), with frames and not-necessarily-iid noise in the transform domain.

Theorem 2 ([ChesneauFadili08]). The block estimator in d dimensions satisfies

sup_{α ∈ Θ^{β,d}_{p,q}(C)} R(α̂^B, α) ≤ C′ ρ_n,  ρ_n = σ^{4β/(2β+d)} for p ≥ 2,
ρ_n = σ^{4β/(2β+d)} (log n)^{2β/(2β+d)} for p < 2, sp > d ∨ (1 − p/2)d,

where Θ^{β,d}_{p,q}(C) = { α : Σ_{j≥0} 2^{qj(β+d/2−d/p)} ( Σ_k |α_{j,k}|^p )^{q/p} ≤ C^q }.
Bayesian block:
- Non-overlapping block Bayesian estimation [Abramovich and Sapatinas 00].
- Multivariate Gaussian prior [Huang and Cressie 00]. Mixed-effects models [Huang and Lu 00].
- MRF [Malfait et al. 97, Crouse et al. 98, Pizurica et al. 02]. HMT model [DSP Rice (Romberg, Baraniuk et al. 00-02)].
- Scale mixture of Gaussians [Li and Orchard 00, Mihcak et al. 99, Sendur and Selesnick 02, Portilla et al. 03, Kuruoglu and Achim 04].
- AMMGD [Boubchir and Fadili 05].
Denoising results (σ = 20, noisy PSNR = 22 dB):
- DWT Hard: PSNR = 23.48 dB
- DWT term-by-term Bayesian (GSM): PSNR = 27.40 dB
- DWT Block: PSNR = 28.41 dB
- DWT Block Bayesian: PSNR = 27.91 dB
Denoising results (σ = 20, noisy PSNR = 22 dB):
- UDWT Hard: PSNR = 27.36 dB
- UDWT term-by-term Bayesian (GSM): PSNR = 28.26 dB
- UDWT Block: PSNR = 29.06 dB
- UDWT Block Bayesian: PSNR = 29.50 dB
Denoising results (σ = 20, noisy PSNR = 22 dB):
- Curvelets Hard: PSNR = 28.81 dB
- Curvelets term-by-term Bayesian (GSM): PSNR = 29.49 dB
- Curvelets Block: PSNR = 29.93 dB
- Curvelets Block Bayesian: PSNR = 29.95 dB
What matters: the sparse representation and the thresholding/shrinkage operator.

Summary (criteria: implementation simplicity, theoretical optimality, parameters, practical performance, computation time):
- Classical individual thresholding (DWT, frame).
- Bayesian individual thresholding: depends on the prior; in practice DWT > classical, frame > classical.
- Classical block thresholding.
- Bayesian block thresholding: depends on the prior.
y = HΦα + ε, Var(ε) = σ².

Consider the following (equivalent) minimization problems:

(P^ψ_σ): min_α Ψ(α) s.t. ‖y − HΦα‖_{ℓ2} ≤ σ,
(P^ψ_λ): min_α (1/2)‖y − HΦα‖²_{ℓ2} + λΨ(α),  Ψ(α) = Σ_{γ∈Γ} ψ(α_γ).

ψ is an lsc, proper and convex sparsity-promoting penalty, e.g. |·|. (P^ψ_λ) is the MAP estimator with the iid prior ∝ Π_γ exp(−ψ(α_γ)).
Some algorithms to solve (P^ψ_σ):
- LASSO/LARS [Tibshirani96, Efron et al. 04] (ℓ1).
- SOCP [Candès et al. 05] (ℓ1).
- Dual formulation [MalgouyresZeng07] (ℓ1).
- Splitting of maximal monotone operators [Fadili08] (ψ lsc convex).

Much work on (P^ψ_λ):
- BPDN: perturbed linear program [CDS95] (ℓ1).
- BCR [Sardy et al. 98] (ℓ1).
- Iterative thresholding [Daubechies04, CombettesWajs05, Elad05, ...] (ℓ1).
- etc.

We focus on solving (P^ψ_λ) by iterative thresholding.
A function f is convex if ∀(x, x′) ∈ H², f(ρx + (1 − ρ)x′) ≤ ρf(x) + (1 − ρ)f(x′), 0 < ρ < 1.

The subdifferential ∂f of f is

∂f : H → 2^H : x ↦ {u ∈ H | f(y) ≥ f(x) + ⟨u, y − x⟩, ∀y ∈ H}.

An element of ∂f is called a subgradient. The resolvent of ∂f is the operator J_{∂f} = (I + ∂f)^{−1}. An operator T : H → H is non-expansive if

∀(x, x′) ∈ H², ‖Tx − Tx′‖ ≤ ‖x − x′‖.
Definition 1 ([Moreau 1962]). Let f be lsc, proper and convex. For every x ∈ H, the function y ↦ f(y) + ‖x − y‖²/2 has a unique minimizer, denoted prox_f x. The map prox_f : H → H so defined is the proximal operator of f. For all x, p ∈ H,

p = prox_f x ⇔ x − p ∈ ∂f(p) ⇔ p = J_{∂f}(x).
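For f = λ|·| in 1D, the proximal operator is soft thresholding. The definition can be checked by brute-force minimization of y ↦ f(y) + (x − y)²/2 on a grid (illustrative sketch, λ arbitrary):

```python
import numpy as np

lam = 0.7

def prox_l1(x, lam):
    # Proximal operator of f = lam*|.| : soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x_vals = np.linspace(-3.0, 3.0, 13)
p_vals = prox_l1(x_vals, lam)

# p = prox_f(x) must minimize y -> f(y) + (x - y)^2 / 2
y = np.linspace(-4.0, 4.0, 80001)
for xi, pi in zip(x_vals, p_vals):
    obj = lam * np.abs(y) + 0.5 * (xi - y) ** 2
    assert abs(y[np.argmin(obj)] - pi) < 1e-3
print("prox of lam*|.| matches soft thresholding on a grid")
```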
For the penalty Ψ(α) = ‖α‖_{ℓ1}, prox_{μ_t λΨ} is soft thresholding with threshold μ_t λ. A general form of prox_{μ_t λΨ} is given in [Fadili et al. 06]. Complexity: 2 N_iter (V_H + V_Φ) + O(L), with V_Φ the cost of applying Φ (or Φ∗) and V_H that of H (or H∗).

Theorem 1. Let Φ be a frame with constants A and B, and H a bounded linear operator. Let {μ_t, t ∈ N} be a sequence such that 0 < inf_t μ_t ≤ sup_t μ_t < 2/‖HΦ‖² = 2/(‖H‖² B). Fix α⁰ ∈ ℓ∞ and define the iteration

α^{t+1} = prox_{μ_t λΨ}( α^t + μ_t (HΦ)∗(y − HΦα^t) ),

where prox_{μ_t λΨ} is the proximal operator, which acts term by term. Then {α^t, t ≥ 0} converges to a minimizer of (P^ψ_λ).
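The iteration of the theorem is forward-backward splitting (iterative soft thresholding). A self-contained sketch with Ψ = ℓ1, where a random matrix A stands in for HΦ (purely illustrative; sizes, λ and the number of iterations are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# min_alpha 0.5*||y - A alpha||^2 + lam*||alpha||_1, with A standing in for H Phi
n, L, lam = 64, 128, 0.1
A = rng.standard_normal((n, L)) / np.sqrt(n)
alpha_true = np.zeros(L)
alpha_true[:5] = 2.0
y = A @ alpha_true + 0.01 * rng.standard_normal(n)

mu = 1.0 / np.linalg.norm(A, 2) ** 2            # step size mu < 2 / ||A||^2
alpha = np.zeros(L)
for _ in range(500):
    z = alpha + mu * A.T @ (y - A @ alpha)      # gradient (forward) step
    alpha = np.sign(z) * np.maximum(np.abs(z) - mu * lam, 0.0)  # prox (backward) step

obj = lambda a: 0.5 * np.sum((y - A @ a) ** 2) + lam * np.sum(np.abs(a))
print(obj(alpha) < obj(np.zeros(L)))
```

The large coefficients of alpha_true are recovered (with the usual ℓ1 shrinkage bias).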
Deconvolution results (blurred and noisy, BSNR = 25.00 dB):
- Proximal iteration, DWT: PSNR = 28.71 dB
- Proximal iteration, UDWT: PSNR = 29.20 dB
- Proximal iteration, Curvelets: PSNR = 28.93 dB
Compressed-sensing recovery: real Fourier projections, m/n = 0.17, SNR = 30 dB; LASSO solved by the proximal iteration, 1000 iterations, PSNR = 22.07 dB (vs. the original image).
Sparsity is a good prior model for a large class of images, consistent with linear generative models. Estimation by thresholding: non-linear approximation plays an essential role (the role of the sparse representation), with optimality guarantees. In practice: distinguish the estimator from the transform; dependence and redundancy matter. If you know how to denoise, you can invert: iterative thresholding (proximal framework), with a wide range of applications.
http://www.greyc.ensicaen.fr/~jfadili/themes.html http://www.morphologicaldiversity.org