

SLIDE 1

Adaptive Patch-based Image Denoising by EM-Adaptation

Purdue University. Joint work with Enming Luo and Truong Nguyen (UCSD)

SLIDE 2

Image Denoising … AGAIN!?

SLIDE 3

Image Denoising

Consider an additive iid Gaussian noise model
$\mathbf{y} = \mathbf{x} + \boldsymbol{\varepsilon}$, where $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$.
Our goal is to estimate the clean image $\mathbf{x}$ from the noisy observation $\mathbf{y}$.
Our approach: Maximum-a-Posteriori (MAP) estimation.

SLIDE 4

MAP Framework

Since the noise is iid Gaussian, the conditional distribution is
$p(\mathbf{y} \mid \mathbf{x}) = \mathcal{N}(\mathbf{y};\, \mathbf{x},\, \sigma^2 \mathbf{I}) \propto \exp\!\left(-\tfrac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{x}\|^2\right).$

Therefore, the MAP estimate is
$\widehat{\mathbf{x}} = \arg\max_{\mathbf{x}} \; p(\mathbf{x} \mid \mathbf{y}) = \arg\min_{\mathbf{x}} \left\{ \tfrac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{x}\|^2 - \log p(\mathbf{x}) \right\}.$
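As a toy illustration of this framework (not from the slides): if the prior is a zero-mean Gaussian, the MAP minimization has a closed form, a linear Wiener-style shrinkage. The function name and parameters below are illustrative assumptions.

```python
import numpy as np

def map_denoise_gaussian(y, sigma, prior_var):
    """MAP estimate under y = x + n, n ~ N(0, sigma^2 I), with a
    zero-mean Gaussian prior x ~ N(0, prior_var * I).  Minimizing
    ||y - x||^2 / (2 sigma^2) + ||x||^2 / (2 prior_var) gives a
    closed-form linear (Wiener-style) shrinkage of y."""
    return prior_var / (prior_var + sigma ** 2) * y
```

With `prior_var = 3` and `sigma = 1` the shrinkage factor is 3/(3+1) = 0.75, i.e., every pixel is pulled one quarter of the way toward the prior mean.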

SLIDE 5

Image Priors

  • Markov Random Field (80s)
  • Gradients (80s)
  • Total Variation (90s)
  • X-lets (wavelet, contourlet, curvelet, …, 90s)
  • Lp norm (00s)
  • Dictionary (K-SVD, 00s)
  • Example-based (00s)
  • Non-local (non-local means 2005, BM3D 2007)
  • Shotgun! (2011)
  • Graph Laplacian (2012)

SLIDE 6

Patch-based Priors

What is a patch? Why patches? What is a patch-based prior?

A patch is a small block of pixels in an image
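For concreteness, here is a minimal sketch of collecting patches from an image (the function name `extract_patches` and the stride parameter are illustrative assumptions, not from the slides):

```python
import numpy as np

def extract_patches(img, p=8, stride=4):
    """Collect all p x p blocks (flattened to length p*p) from a
    2-D image array, sliding with the given stride."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h - p + 1, stride)
                     for j in range(0, w - p + 1, stride)])
```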

SLIDE 7

Training a Patch-based Prior

Typically, we train a patch-based prior from a large collection of images, e.g., a Gaussian mixture
$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x};\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),$
whose parameters $\{\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\}$ are learned with the EM algorithm.
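The training step above can be sketched as a minimal numpy EM loop over patch vectors (an illustrative implementation, not the authors' code; initialization and regularization constants are assumptions):

```python
import numpy as np

def fit_gmm_em(X, K, iters=50):
    """Minimal EM for a full-covariance Gaussian mixture over patch
    vectors X (N x d).  Illustrative sketch only."""
    N, d = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[np.linspace(0, N - 1, K).astype(int)].copy()
    Sigma = np.stack([np.cov(X.T) + 1e-3 * np.eye(d)] * K)
    for _ in range(iters):
        # E-step: responsibilities from log Gaussian densities
        logp = np.empty((N, K))
        for k in range(K):
            diff = X - mu[k]
            L = np.linalg.cholesky(Sigma[k])
            sol = np.linalg.solve(L, diff.T)
            logp[:, k] = (np.log(pi[k]) - 0.5 * np.sum(sol ** 2, axis=0)
                          - np.sum(np.log(np.diag(L)))
                          - 0.5 * d * np.log(2 * np.pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        Nk = r.sum(axis=0)
        pi = Nk / N
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, Sigma
```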

SLIDE 8

Good Training Set

SLIDE 9

How good? Example: Text Image

[Figure: clean image, noisy image, BM3D (single-image method), and [Luo-Chan-Nguyen ’15] (targeted training)]

SLIDE 10

This Talk: Can priors be learned adaptively?

Challenge: (1) finding good examples is HARD; (2) finding a lot of good examples is EVEN HARDER.

[Diagram: a generic database of 2 million 8×8 image patches [Zoran-Weiss ’11] plus the target image are used to update the Gaussian mixture model]

SLIDE 11

Our Proposed Idea

SLIDE 12

Toy Example

Imagine that:

(a) Original generic database (a LOT of samples). (b) Ideal targeted database (a LOT of samples). (c) In reality, samples from the targeted database are FEW!

SLIDE 13

SLIDE 14

EM Adaptation

SLIDE 15

EM Adaptation

SLIDE 16

[Equations: classical EM update vs. EM-adaptation update]

EM Adaptation

SLIDE 17

EM Adaptation

SLIDE 18

[Equations: classical EM update vs. EM-adaptation update]

EM Adaptation

SLIDE 19

EM Adaptation

SLIDE 20

[Equations: classical EM update vs. EM-adaptation update]

EM Adaptation
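To make the adaptation idea concrete, here is a minimal numpy sketch of MAP adaptation of GMM means in the style of the speaker-verification literature cited on the references slide (weights and covariances adapt analogously). The function name and the relevance factor `r` are assumptions, not code from this talk: each mean is pulled toward the data-driven estimate with a data-dependent weight $\alpha_k = N_k / (N_k + r)$, so components that see many targeted samples adapt strongly while the rest stay close to the generic model.

```python
import numpy as np

def em_adapt_means(X, pi, mu, Sigma, r=16.0):
    """One EM-adaptation pass: compute responsibilities under the
    generic GMM (pi, mu, Sigma), then interpolate each generic mean
    toward the data-driven estimate with weight alpha_k = N_k/(N_k+r)."""
    N, d = X.shape
    K = len(pi)
    # E-step under the generic model (log densities for stability)
    logp = np.empty((N, K))
    for k in range(K):
        diff = X - mu[k]
        inv = np.linalg.inv(Sigma[k])
        _, logdet = np.linalg.slogdet(Sigma[k])
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
        logp[:, k] = np.log(pi[k]) - 0.5 * (quad + logdet + d * np.log(2 * np.pi))
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # Sufficient statistics from the (few) targeted samples
    Nk = resp.sum(axis=0)
    Ek = (resp.T @ X) / np.maximum(Nk, 1e-12)[:, None]
    # Adaptation: swing between generic mean and new estimate
    alpha = (Nk / (Nk + r))[:, None]
    return (1 - alpha) * mu + alpha * Ek
```

With `r = 0` the update reduces to the classical EM mean update; as `r` grows, the generic means dominate.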

SLIDE 21

SLIDE 22

EM Adaptation in the literature

  • D.A. Reynolds, T.F. Quatieri, and R.B. Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
  • P.C. Woodland, “Speaker adaptation for continuous density HMMs: A review,” in ITRW on Adaptation Methods for Speech Recognition, pp. 11–19, Aug. 2001.
  • M. Dixit, N. Rasiwasia, and N. Vasconcelos, “Adapted Gaussian models for image classification,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR’11), pp. 937–943, Jun. 2011.

Theory of EM Adaptation

  • J. Gauvain and C. Lee, “Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains,” IEEE Transactions on Speech and Audio Processing, vol. 2, no. 2, pp. 291–298, Apr. 1994.

SLIDE 23

EM Adaptation for Noisy Images

Assume the pre-filtered image approximately satisfies the noise model, i.e., first denoise the image with a method you like. In this case, the M-step is computed from the pre-filtered patches instead of clean ones.

SLIDE 24

Stein’s Unbiased Risk Estimator (SURE)

What is the difference between adapting with the clean image and with the pre-filtered image?
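As background on SURE (an illustrative sketch, not from the slides): SURE estimates the mean squared error of a denoiser applied to $\mathbf{y} \sim \mathcal{N}(\mathbf{x}, \sigma^2\mathbf{I})$ without access to the clean $\mathbf{x}$, which is exactly what makes it useful when only a pre-filtered image is available. The divergence term below is approximated with a standard random probe; the function name and probe step `eps` are assumptions.

```python
import numpy as np

def monte_carlo_sure(f, y, sigma, eps=1e-4, seed=0):
    """Estimate the per-pixel MSE of denoiser f on y = x + n,
    n ~ N(0, sigma^2 I), via SURE:
        ||f(y) - y||^2 / N - sigma^2 + (2 sigma^2 / N) * div f(y),
    with the divergence approximated by a random probe b:
        div f(y) ~= b^T (f(y + eps*b) - f(y)) / eps."""
    rng = np.random.default_rng(seed)
    N = y.size
    b = rng.standard_normal(y.shape)
    div = b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps
    return np.sum((f(y) - y) ** 2) / N - sigma ** 2 + 2 * sigma ** 2 * div / N
```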

SLIDE 25

SLIDE 26

Results

SLIDE 27

SLIDE 28

SLIDE 29

SLIDE 30

[Result: EPLL 31.48 dB vs. proposed 31.80 dB]

SLIDE 31

Conclusion

SLIDE 32

EM Adaptation automatically swings between the two sources:

  • Generic database
      • when the noise is extremely high
      • when patches are relatively smooth
      • when there are insufficient training samples
  • Noisy image
      • when there are sharp edges in a patch
      • when there are enough training samples

EM Adaptation is

  • a method to combine a generic database and the noisy image
SLIDE 33

Questions?

SLIDE 34

SLIDE 35

We want to address two questions for MAP:

Question 1: How to SOLVE this optimization problem?
(If we cannot solve this problem, then there is no point in continuing.)

Question 2: How to ADAPTIVELY learn a prior?
Generic prior (from an arbitrary database) vs. specific prior (matched to the image of interest).

SLIDE 36


SLIDE 37

The Algorithm: General Principle

Half-Quadratic Splitting [Geman-Yang, T-IP, 1995]: introduce an auxiliary variable $\mathbf{z}$ and split the MAP objective into
$\min_{\mathbf{x},\,\mathbf{z}} \; \tfrac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{x}\|^2 + \tfrac{\beta}{2}\|\mathbf{x}-\mathbf{z}\|^2 - \log p(\mathbf{z}),$
then alternate between (1) minimizing over $\mathbf{z}$ with $\mathbf{x}$ fixed and (2) minimizing over $\mathbf{x}$ with $\mathbf{z}$ fixed, increasing $\beta$ across iterations.
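The alternation can be sketched as follows. This is a minimal illustration assuming a single-Gaussian prior so that both subproblems are closed-form; the function name, $\beta$-schedule, and inner-iteration count are assumptions, and a mixture prior would replace the z-step with a component-wise version.

```python
import numpy as np

def half_quadratic_split(y, sigma, prior_mu, prior_cov,
                         betas=(1.0, 4.0, 16.0, 64.0), inner=50):
    """Alternating minimization of
        ||y - x||^2 / (2 sigma^2) + beta/2 ||x - z||^2 - log N(z; mu, C)
    with closed-form updates for both subproblems (single-Gaussian prior)."""
    d = y.size
    Cinv = np.linalg.inv(prior_cov)
    x = y.astype(float).copy()
    for beta in betas:  # increase beta so that x ~= z at the end
        for _ in range(inner):
            # (1) z-step: Gaussian prior => linear (Wiener-like) solve
            z = np.linalg.solve(Cinv + beta * np.eye(d),
                                Cinv @ prior_mu + beta * x)
            # (2) x-step: quadratic => weighted average of y and z
            x = (y / sigma ** 2 + beta * z) / (1.0 / sigma ** 2 + beta)
    return x
```

As $\beta$ grows, the two variables are forced together and the iterate approaches the MAP solution of the original unsplit problem.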

SLIDE 38

Solution to Problem (1): Example

Gaussian Mixture Model [Zoran-Weiss ’11]: if the prior is
$p(\mathbf{z}) = \sum_{k} \pi_k \, \mathcal{N}(\mathbf{z};\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),$
then, approximating the mixture by its most likely component $k^\ast$, the solution to (1) is the Wiener-like update
$\mathbf{z} = \left(\boldsymbol{\Sigma}_{k^\ast}^{-1} + \beta \mathbf{I}\right)^{-1} \left(\boldsymbol{\Sigma}_{k^\ast}^{-1} \boldsymbol{\mu}_{k^\ast} + \beta \mathbf{x}\right),$
where $k^\ast$ is the component with the largest posterior probability given $\mathbf{x}$.

SLIDE 39

Solution to Problem (2):

Problem (2) is quadratic in $\mathbf{x}$, so the solution is the closed-form weighted average
$\mathbf{x} = \dfrac{\tfrac{1}{\sigma^2}\,\mathbf{y} + \beta\,\mathbf{z}}{\tfrac{1}{\sigma^2} + \beta}.$

SLIDE 40

Question 1: How to SOLVE this optimization problem? For a Gaussian mixture:

SLIDE 41
