Video denoising via Bayesian modeling of patches
Pablo Arias, joint work with Thibaud Ehret and Jean-Michel Morel
CMLA, ENS Paris-Saclay
IHP workshop: Statistical modeling for shapes and imaging, Paris, 14/03/2019
Denoising is mainly for people (i.e. not primarily a pre-processing step for a computer vision task):

◮ Photography
◮ Smartphone cameras
◮ Film industry
◮ Night vision cameras
◮ Surveillance cameras
◮ Medical imaging
◮ Astronomy

A lot of research has been devoted to still image denoising, but there is still room for improvement in video denoising.
In this talk: video denoising via Bayesian estimation
(Diagram: a frame of the noisy video is denoised.)
Non-recursive
◮ Non-local means [Buades et al. '05], [Liu, Freeman '10]
◮ K-SVD [Protter, Elad '07]
◮ V-BM3D [Dabov et al. '07]
◮ Graph regularization [Ghoniem et al. '10]
◮ BM4D, V-BM4D [Maggioni et al. '12]
◮ SPTWO [Buades et al. '17]
◮ VNLB [Arias, Morel '18]
◮ VIDOSAT [Wen et al. '19]
◮ SALT [Wen et al. '19]
◮ VNLnet [Davy et al. '19]

Recursive
◮ Bilateral + Kalman filter [Zuo et al. '13]
◮ Bilateral + Kalman filter [Pfleger et al. '17]
◮ Recursive NL-means [Ali, Hardie '17]
◮ Gaussian-Laplacian fusion [Ehmann et al. '18]

Non-recursive/mixed approaches achieve good results at a high computational cost. Recursive methods reach real-time performance but give worse results.
Non-recursive methods Recursive method I Recursive method II Empirical comparison
(Pipeline diagram: extract n similar patches [q1, . . . , qn] → estimate model parameters → Wiener filtering → aggregate by averaging [p̂1, . . . , p̂n]. In a second iteration, denoised patches [g1, . . . , gn] are set as guide for the parameter estimation.)
p1, . . . , pn : n “similar” clean patches from u
q1, . . . , qn : n “similar” noisy patches from v

Assumption: patches are IID samples of a Gaussian distribution
P(pi) = N(pi | µ, C)
P(qi | pi) = N(qi | pi, σ²I)

For each noisy patch qi, estimate pi as the MAP/MMSE, given by the Wiener filter:
p̂i = µ + C(C + σ²I)⁻¹(qi − µ)
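As a concrete sketch, the per-group Wiener filtering step can be written in a few lines of numpy (function and variable names are mine, not from the talk):

```python
import numpy as np

def wiener_filter_group(Q, mu, C, sigma):
    """MAP/MMSE estimate of each row of Q (n noisy patches of dimension d)
    under the model p ~ N(mu, C), q | p ~ N(p, sigma^2 I):
    p_hat = mu + C (C + sigma^2 I)^{-1} (q - mu)."""
    d = C.shape[0]
    # F = C (C + sigma^2 I)^{-1}; both matrices are symmetric, so we can
    # solve (C + sigma^2 I) X = C and transpose instead of forming an inverse.
    F = np.linalg.solve(C + sigma**2 * np.eye(d), C).T
    return mu + (Q - mu) @ F.T
```

Solving a linear system instead of inverting `C + σ²I` is the standard numerically safer choice; the filter matrix is shared by all patches in the group, so it is computed once per group.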
How to estimate µ and C?

STEP 1: Use the noisy patches q1, . . . , qn ∼ N(µ, C + σ²I):
µ̂ = (1/n) Σᵢ qᵢ
Ĉ = Ĉq − σ²I = (1/n) Σᵢ (qᵢ − µ̂)(qᵢ − µ̂)ᵀ − σ²I

STEP 2: Use the guide patches g1, . . . , gn ∼ N(µ, C):
µ̂ = (1/n) Σᵢ gᵢ
Ĉ = (1/n) Σᵢ (gᵢ − µ̂)(gᵢ − µ̂)ᵀ
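A minimal numpy version of the first-step estimate from the noisy patches. The PSD projection at the end is my addition (subtracting σ²I can produce negative eigenvalues); the talk instead refines this with eigenvalue hard-thresholding, discussed next:

```python
import numpy as np

def estimate_gaussian_params(Q, sigma):
    """Estimate (mu, C) from noisy patches Q (n x d), assuming
    q_i ~ N(mu, C + sigma^2 I): subtract the noise covariance from the
    sample covariance of the noisy patches."""
    mu = Q.mean(axis=0)
    Cq = (Q - mu).T @ (Q - mu) / Q.shape[0]
    C = Cq - sigma**2 * np.eye(Q.shape[1])
    # Project onto the PSD cone by clipping negative eigenvalues (my choice,
    # not specified on the slide).
    w, V = np.linalg.eigh(C)
    return mu, (V * np.maximum(w, 0.0)) @ V.T
```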
We use the following hard-thresholding of the eigenvalues ξ̂ᵢ of the sample covariance matrix of the noisy patches:

λ̂ᵢᴴ = Hτ(ξ̂ᵢ − σ²) = ξ̂ᵢ − σ² if ξ̂ᵢ ≥ τσ², and 0 if ξ̂ᵢ < τσ².

Based on the results of Debashis Paul (2007) on the asymptotic properties of the eigenvalues and eigenvectors of the noisy sample covariance matrix.
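A sketch of this estimator in numpy, going directly from the noisy patch group to a hard-thresholded covariance estimate (names are mine, not from the talk):

```python
import numpy as np

def hard_threshold_covariance(Q, sigma, tau):
    """Covariance estimate from noisy patches Q (n x d): eigenvalues xi of
    the sample covariance map to xi - sigma^2 where xi >= tau*sigma^2,
    and to 0 elsewhere (the H_tau rule from the slides)."""
    mu = Q.mean(axis=0)
    Cq = (Q - mu).T @ (Q - mu) / Q.shape[0]
    xi, U = np.linalg.eigh(Cq)
    lam = np.where(xi >= tau * sigma**2, xi - sigma**2, 0.0)
    # Rebuild the covariance with the thresholded eigenvalues.
    return (U * lam) @ U.T
```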
Groups of 200 similar 9 × 9 × 4 patches, in a 37 × 37 × 5 search region. Nearest neighbors 1 to 5; nearest neighbors 6 to 45; nearest neighbors 46 to 200.

(Figure: 20 nearest neighbors (noisy); sample mean and first 19 principal directions; corresponding MAP estimates.)
◮ 3D rectangular patches, typical sizes are 10 × 10 × 2.
◮ Motion compensated search region: a square tracing the motion trajectory of the target patch. Motion estimation using optical flow (TV-L1 [Zach et al. '07]).
◮ Search region size: 21 × 21 × 9.
◮ Two iterations over the whole video (as in BM3D [Dabov et al. '07]).
Non-recursive methods Recursive method I Recursive method II Empirical comparison
ut, Mt = recursive-method(ft, Mt−1)

Design constraints:
◮ reduce computational cost
◮ reduce memory cost (Mt = O(1))

Appealing properties:
◮ natural mechanism to enforce temporal consistency
◮ can aggregate an arbitrary number of past frames
(Diagram: a patch trajectory p = p0, p1, p2, p3, p4.)

“Static” model: q ∼ N(p, σ²I).

We assume a Gaussian linear dynamic model for a patch trajectory pt:
p0 = µ0 + w0,  w0 ∼ N(0, P0),
pt = pt−1 + wt,  wt ∼ N(0, Wt),
qt = pt + rt,  rt ∼ N(0, σ²I).
Full patch model: the stacked trajectory p = (p0, . . . , p4) is Gaussian, p ∼ N(µK, QK⁻¹), q ∼ N(p, σ²I), with

µK = (µ0, µ0, µ0, µ0, µ0)

and block-tridiagonal precision matrix

QK = [ P0⁻¹+W1⁻¹   −W1⁻¹        0           0           0
       −W1⁻¹       W1⁻¹+W2⁻¹   −W2⁻¹       0           0
       0           −W2⁻¹       W2⁻¹+W3⁻¹   −W3⁻¹       0
       0           0           −W3⁻¹       W3⁻¹+W4⁻¹   −W4⁻¹
       0           0           0           −W4⁻¹       W4⁻¹ ]
(Diagram: a group of patch trajectories with their noisy observations q.)

Kalman filter: recursive computation of P(pt | qt, . . . , q1) = N(p̂t, Pt):
p̂t = (I − Kt)p̂t−1 + Kt qt                        (state mean)
Pt = (I − Kt)(Pt−1 + Wt)(I − Kt)ᵀ + σ²Kt²         (state covariance)
Kt = (Pt−1 + Wt)(Pt−1 + Wt + σ²I)⁻¹               (Kalman gain)
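One step of this recursion is easy to sketch in numpy (identity state transition, as in the model above; this is an illustrative implementation, not the released code):

```python
import numpy as np

def nlkalman_step(p_prev, P_prev, W, q, sigma):
    """One step of the patch-trajectory Kalman filter from the slides:
    identity state transition with process noise W, observation noise
    sigma^2 I. Returns the filtered mean and covariance."""
    d = q.shape[0]
    I = np.eye(d)
    Ppred = P_prev + W                                    # predicted covariance
    K = Ppred @ np.linalg.inv(Ppred + sigma**2 * I)       # Kalman gain
    p = (I - K) @ p_prev + K @ q                          # filtered mean
    P = (I - K) @ Ppred @ (I - K).T + sigma**2 * K @ K.T  # filtered covariance
    return p, P
```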
Rauch-Tung-Striebel smoother: backward recursion for P(pt | qT , . . . , q1) = N(p̃t, P̃t):
p̃t = p̂t + St(p̃t+1 − p̂t)                       (state mean)
P̃t = Pt + St(P̃t+1 − Pt − Wt+1)Stᵀ              (state covariance)
St = Pt(Pt + Wt+1)⁻¹                           (smoothing gain)
The only model parameters that need to be estimated are the state transition covariances Wt associated to the group. We assume that similar patches are iid realizations of the same dynamic model:
pt,i = pt−1,i + wt,i with wt,i ∼ N(0, Wt)  (1)
qt,i = pt,i + rt,i with rt,i ∼ N(0, σ²I).  (2)
Since E{(qt,i − qt−1,i)(qt,i − qt−1,i)ᵀ} = Wt + 2σ²I, we estimate Wt via the sample covariance matrix of the innovations:
Ŵt = β Ŵt−1 + (1 − β) [ (1/n) Σᵢ (qt,i − qt−1,i)(qt,i − qt−1,i)ᵀ − 2σ²I ].
A forgetting factor β ∈ [0, 1] is introduced to increase temporal stability.
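A sketch of this running estimate in numpy. The slides do not say whether negative eigenvalues arising from the 2σ²I subtraction are clipped, so this sketch leaves them as is:

```python
import numpy as np

def update_transition_cov(W_prev, Qt, Qt_prev, sigma, beta):
    """Running estimate of W_t from the innovations q_{t,i} - q_{t-1,i}
    (rows of Qt and Qt_prev), with forgetting factor beta in [0, 1]."""
    D = Qt - Qt_prev
    # Sample covariance of innovations minus the 2*sigma^2 noise term.
    S = D.T @ D / D.shape[0] - 2 * sigma**2 * np.eye(D.shape[1])
    return beta * W_prev + (1 - beta) * S
```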
(Pipeline diagram, frame t: using the patch groups at t−1 and the forward optical flow t−1 → t, extract n similar patches {qt,i} from ft; estimate Wt from {qt,i}, {qt−1,i} and Wt−1; Kalman filtering with {p̂t−1,i}, Pt−1 produces {p̂t,i}, Pt; aggregation by averaging gives ut, and the patch groups at t are stored.)
◮ Occlusions: terminate patch trajectories if an occlusion is detected.
◮ Dis-occlusions: create groups for parts not covered by existing groups.

For each of these groups, we store:
◮ coordinates of the patches in the group
◮ estimated clean patches p̂t,i
◮ covariance of estimated patches Pt
◮ transition covariance matrix Wt
Non-recursive methods Recursive method I Recursive method II Empirical comparison
(Recap: pipeline diagram of recursive method I, frame t.)
Given p̂t−1, Pt−1, at time t we have
pt−1 ∼ N(p̂t−1, Pt−1),
pt = pt−1 + wt,  wt ∼ N(0, Wt),
qt = pt + rt,  rt ∼ N(0, σ²I).

Kalman filter: recursive computation of P(pt | qt, . . . , q1) = N(p̂t, Pt):
p̂t = (I − Kt)p̂t−1 + Kt qt                        (state mean)
Pt = (I − Kt)(Pt−1 + Wt)(I − Kt)ᵀ + σ²Kt²         (state covariance)
Kt = (Pt−1 + Wt)(Pt−1 + Wt + σ²I)⁻¹               (Kalman gain)
If we don't have p̂t−1, Pt−1, we introduce them as parameters:
pt−1 ∼ N(µt−1, Ct−1),
pt = pt−1 + wt,  wt ∼ N(0, Wt),
qt = pt + rt,  rt ∼ N(0, σ²I).

We have the following posterior P(pt | qt) = N(p̂t, Pt):
p̂t = (I − Kt)µt−1 + Kt qt                        (state mean)
Pt = (I − Kt)(Ct−1 + Wt)(I − Kt)ᵀ + σ²Kt²        (state covariance)
Kt = (Ct−1 + Wt)(Ct−1 + Wt + σ²I)⁻¹              (“Kalman” gain)
We assume that similar patches are iid realizations of the same dynamic model:
pt−1,i ∼ N(µt−1, Ct−1)
pt,i = pt−1,i + wt,i,  wt,i ∼ N(0, Wt)
qt,i = pt,i + rt,i,  rt,i ∼ N(0, σ²I).

µ̂t−1 = (1/m) Σᵢ p̂t−1,i
Ĉt−1 = (1/m) Σᵢ (p̂t−1,i − µ̂t−1)(p̂t−1,i − µ̂t−1)ᵀ
Ŵt = (1/n) Σᵢ (qt,i − p̂t−1,i)(qt,i − p̂t−1,i)ᵀ − σ²I

NOTE: We introduced m to control the spatial averaging in µ̂t−1. Typically m < n.
Assumption: Wt = U Diag(νt) Uᵀ, where U is the DCT basis. The estimated transition covariance then keeps the same form, Ŵt = U Diag(ν̂t) Uᵀ, so the filtering equations act coefficient-wise in the DCT domain.
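Under this diagonal-in-DCT assumption, the Kalman update for a single patch reduces to elementwise operations on DCT coefficients. A hypothetical sketch (how the talk actually stores the per-coefficient variances is my assumption):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_kalman_filter_patch(q, p_prev, c_prev, nu, sigma):
    """Kalman update for one 2D patch when all covariances are diagonal in
    the orthonormal DCT basis: c_prev and nu are vectors of DCT-domain
    variances, so gain, mean and covariance updates are all elementwise."""
    qh = dctn(q, norm='ortho').ravel()
    ph = dctn(p_prev, norm='ortho').ravel()
    cpred = c_prev + nu                      # predicted variances
    k = cpred / (cpred + sigma**2)           # elementwise "Kalman gain"
    ph_new = (1 - k) * ph + k * qh
    c_new = (1 - k)**2 * cpred + sigma**2 * k**2
    return idctn(ph_new.reshape(q.shape), norm='ortho'), c_new
```

With zero predicted variance the gain vanishes and the previous estimate passes through unchanged, mirroring the matrix equations above.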
(Pipeline diagram, frame t: the warped previous output uw_t−1 / fw_t−1, obtained with the optical flow t → t−1, provides the previous patches {p̂t−1,i}; from {qt,i, p̂t−1,i} estimate µt−1, Ct−1 and Wt; Kalman filtering and aggregation by averaging yield ut.)
Algorithm 1: Recursive video filtering
input: Noisy video f, noise level σ
1 for t = 1 . . . T do
2   vb_t ← compute-optical-flow(ft, ût−1, σ)
3   ûw_t−1 ← warp-bicubic(ût−1, vb_t)
4   ĝt ← bwd-nlkalman-filter(ft, ûw_t−1, σ)
5   ût ← bwd-nlkalman-filter(ft, ûw_t−1, ĝt, σ)

Algorithm 2: Recursive video smoothing
input: Noisy video f, noise level σ
1 for t = T − 1 . . . 1 do
2   vf_t ← compute-optical-flow(ût, ũt+1, σ)
3   ũw_t+1 ← warp-bicubic(ũt+1, vf_t)
4   ũt ← bwd-nlkalman-smoother(ût, ũw_t+1, σ)
VNLB
◮ Fixed 3D patch size
◮ No distinction between space and time
◮ Does not require OF
◮ Two iterations

FNLK
◮ Patches with arbitrary duration
◮ Processing organized by patch groups
◮ A lot of house-keeping
◮ Sensitive to OF
◮ Costly in memory

BNLK
◮ Patches with arbitrary duration (kind of)
◮ Very cheap in memory
◮ Processing in raster order
◮ DCT domain (for the moment)
◮ Two iterations
◮ Simple smoother
◮ Sensitive to OF
Non-recursive methods Recursive method I Recursive method II Empirical comparison
Average PSNR over 7 grayscale sequences 960 × 540 × 100.

        FNLK   BNLK1  BNLK2  BNLK2+S  V-BM3D-np  V-BM4D-mp  BM4D-OF  SPTWO  VNLnet  VNLB
σ = 10  38.27  36.20  38.17  38.69    38.24      38.89      39.54    39.56  40.21   40.46
σ = 20  34.59  32.19  34.59  35.26    34.68      35.10      36.19    35.99  36.47   36.81
σ = 40  30.28  28.20  31.19  31.95    31.11      31.40      32.81    30.93  32.51   32.96
Average SSIM over 7 grayscale sequences 960 × 540 × 100.

        FNLK   BNLK1  BNLK2  BNLK2+S  V-BM3D-np  V-BM4D-mp  BM4D-OF  SPTWO  VNLnet  VNLB
σ = 10  .9575  .9278  .9570  .9609    .9599      .9534      .9671    .9675  .9732   .9731
σ = 20  .9164  .8447  .9163  .9251    .9100      .9169      .9371    .9368  .9414   .9428
σ = 40  .8245  —      .8460  .8648    .8360      .8432      .8836    .7901  .8752   .8856
◮ Current state of the art in video denoising: either good results or fast results.
◮ We presented two recursive approaches bridging the gap between costly good methods and fast methods.
◮ They integrate information across longer time ranges and allow us to recover many more details.
◮ Still very sensitive to optical flow.

Ongoing work
◮ Joint denoising and optical flow computation.
◮ Implement BNLK in an adaptive basis (instead of DCT).
◮ Fixed-lag smoothers.
◮ Multiscale versions.
Reproducibility! Code and results:
◮ Non-recursive results: http://dev.ipol.im/~pariasm/video_denoising_models/
◮ VNLB: http://github.com/pariasm/vnlb
◮ SPTWO: https://doi.org/10.5201/ipol.2018.224
◮ BM4D-OF: https://github.com/pariasm/vbm3d
◮ BNLK: http://github.com/pariasm/bwd-nlkalman
◮ FNLK: http://github.com/tehret/nlkalman
[Dabov'07] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. on Image Processing, 16, 2007.
[Mairal'09] J. Mairal, F. Bach, J. Ponce, G. Sapiro and A. Zisserman. Non-local sparse models for image restoration. CVPR 2009.
[Ji et al.'10] H. Ji, C. Liu, Z. Shen, Y. Xu. Robust video denoising using low-rank matrix completion. CVPR 2010.
[He et al.'11] Y. He, T. Gan, W. Chen, H. Wang. Adaptive denoising by Singular Value Decomposition. 2011.
[Zoran'11] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. ICCV 2011.
[Wang et al.'12] S. Wang, L. Zhang, Y. Liang. Nonlocal Spectral Prior Model for Low-Level Vision. 2012.
[Yu et al.'12] G. Yu, G. Sapiro and S. Mallat. Solving Inverse Problems With Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. IEEE TIP, 21(5), 2012.
[Dong et al.'13] W. Dong, G. Shi, X. Li. Nonlocal Image Restoration With Bilateral Variance Estimation: A Low-Rank Approach. IEEE TIP, 22(2), 2013.
[Lebrun'13] M. Lebrun, A. Buades and J.M. Morel. A Nonlocal Bayesian Image Denoising Algorithm. SIAM Journal on Imaging Sciences, 6(3), 2013.
[Gu et al.'14] S. Gu, L. Zhang, W. Zuo, X. Feng. Weighted Nuclear Norm Minimization with Application to Image Denoising. CVPR 2014.
[Guo et al.'16] Q. Guo, C. Zhang, Y. Zhang and H. Liu. An Efficient SVD-Based Method for Image Denoising. IEEE Trans. on Circuits and Systems for Video Technology, 26(5), 2016.
[Badri et al.'16] H. Badri, H. Yahia and D. Aboutajdine. Low-Rankness Transfer for Realistic Denoising. IEEE TIP, 25(12), 2016.
[Xie et al.'16] Y. Xie, S. Gu, Y. Liu, W. Zuo, W. Zhang and L. Zhang. Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction. IEEE TIP, 25(10), 2016.
The ML estimator of the inverse covariance matrix Qy = Cy⁻¹ results from the following convex SDP:
max log det Qy − tr(Qy Ĉy)  subject to  0 ≺ Qy ⪯ σ⁻²Id.
Although there are efficient algorithms for solving such an SDP, they are still prohibitive for the present application.
Spiked covariance model: samples x1, . . . , xn ∼ N(0, C). We observe yi ∼ N(xi, σ²I). C = U Diag(λ1, . . . , λm, 0, . . . , 0) Uᵀ.

Theorem (Paul 2007). Suppose that d/n → γ ∈ (0, 1) as n → ∞. Let ξ̂i be the ith eigenvalue of the sample covariance matrix. Then ξ̂i converges almost surely to
σ²(1 + √γ)² if λi ≤ √γσ²,
f(λi) := (λi + σ²)(1 + γσ²/λi) if λi > √γσ².

We can estimate λi as
λ̂iS(ξ̂i) = 0 if ξ̂i ≤ σ²(1 + √γ)²,
λ̂iS(ξ̂i) = f⁻¹(ξ̂i) if ξ̂i > σ²(1 + √γ)².
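Inverting f gives a closed-form debiasing of each sample eigenvalue, obtained by solving the quadratic ξλ = (λ + σ²)(λ + γσ²). A sketch under the f stated above:

```python
import numpy as np

def debias_eigenvalue(xi, sigma, gamma):
    """Paul-style correction: invert f(lam) = (lam + sigma^2)(1 + gamma*sigma^2/lam)
    for sample eigenvalues xi above the bulk edge sigma^2*(1+sqrt(gamma))^2;
    return 0 below it."""
    edge = sigma**2 * (1 + np.sqrt(gamma))**2
    if xi <= edge:
        return 0.0
    # f(lam) = xi  <=>  lam^2 + (sigma^2*(1+gamma) - xi)*lam + gamma*sigma^4 = 0;
    # take the larger root of the quadratic.
    b = sigma**2 * (1 + gamma) - xi
    return (-b + np.sqrt(b**2 - 4 * gamma * sigma**4)) / 2
```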
(Plots: as a function of the true eigenvalue λ, the estimates λ̂ = λ, λ̂ = ξ̂ − σ² and λ̂ = λ̂S(ξ̂), with the threshold √γσ² marked.)
We propose an additional estimator by hard thresholding the difference ξ̂ − σ². The value of the threshold is given by a parameter τ:
λ̂iH = Hτ(ξ̂i − σ²) = ξ̂i − σ² if ξ̂i ≥ τσ², and 0 if ξ̂i < τσ².
(Plots: MSE/MMSE as a function of the threshold τ, for the estimators λ̂ = λ, λ̂ = ξ̂ − σ², λ̂ = λ̂S(ξ̂) and λ̂ = Hτ(ξ̂ − σ²).)

Normalized MSE obtained by estimating the data samples from their noisy observations.