NL-Means Method: Buades (2005)


  1. New Idea: NL-Means Filter (Buades 2005)
     • Same goals: ‘Smooth within Similar Regions’
     • KEY INSIGHT: Generalize, extend ‘Similarity’
       – Bilateral: averages neighbors with similar intensities;
       – NL-Means: averages neighbors with similar neighborhoods!

  2. NL-Means Method: Buades (2005)
     • For each and every pixel p:

  3. NL-Means Method: Buades (2005)
     • For each and every pixel p:
       – Define a small, simple, fixed-size neighborhood;

  4. NL-Means Method: Buades (2005)
     • For each and every pixel p:
       – Define a small, simple, fixed-size neighborhood;
       – Define vector V_p: a list of the neighboring pixel values, e.g. V_p = (0.74, 0.32, 0.41, 0.55, …).

  5. NL-Means Method: Buades (2005)
     • ‘Similar’ pixels p, q ⇒ SMALL vector distance ||V_p − V_q||².

  6. NL-Means Method: Buades (2005)
     • ‘Dissimilar’ pixels p, q ⇒ LARGE vector distance ||V_p − V_q||².

  7. NL-Means Method: Buades (2005)
     • ‘Dissimilar’ pixels p, q ⇒ LARGE vector distance ||V_p − V_q||². Filter with this!

  8. NL-Means Method: Buades (2005)
     • The neighborhoods of pixels p, q define a vector distance ||V_p − V_q||². Filter with this: no spatial term!

  9. NL-Means Method: Buades (2005)
     • The neighborhoods of pixels p, q set a vector distance ||V_p − V_q||²; the vector distance to p sets the weight for each pixel q.
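Written out (a hedged reconstruction following Buades et al., who actually use a Gaussian-weighted patch norm; h is the filtering parameter), the weights contain no spatial falloff term at all:

    NL[I](p) = Σ_q w(p, q)·I(q),   w(p, q) = (1/Z(p))·exp(−||V_p − V_q||² / h²),   Z(p) = Σ_q exp(−||V_p − V_q||² / h²).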

  10. NL-Means Method: Buades (2005)
     • NL-Means maximizes the conditional probability of the central pixel given its neighborhood.
     • BF/WLS/RE assume the image is smooth. NL-Means assumes there are many repetitions in the image (i.e. the image is a fairly general stationary random process).
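A minimal NumPy sketch of the method as described on these slides (patch size, search window, and h = 0.1, which suits images scaled to [0, 1], are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def nl_means(img, patch=7, search=10, h=0.1):
    """Naive NL-Means: each output pixel is a weighted average of pixels q
    whose surrounding patches V_q resemble the patch V_p (small ||V_p - V_q||^2)."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            Vp = padded[i:i + patch, j:j + patch]            # patch around p
            # Restrict q to a window around p (a common practical speedup;
            # the 'non-local' ideal would scan the whole image).
            i0, i1 = max(0, i - search), min(rows, i + search + 1)
            j0, j1 = max(0, j - search), min(cols, j + search + 1)
            weights, values = [], []
            for y in range(i0, i1):
                for x in range(j0, j1):
                    Vq = padded[y:y + patch, x:x + patch]    # patch around q
                    d2 = np.sum((Vp - Vq) ** 2)              # ||V_p - V_q||^2
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(img[y, x])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()          # normalized average
    return out
```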

  11. NL-Means Filter (Buades 2005)
     • Noisy source image:

  12. NL-Means Filter (Buades 2005)
     • Gaussian Filter: low noise, low detail.

  13. NL-Means Filter (Buades 2005)
     • Anisotropic Diffusion (note ‘stairsteps’: ~ piecewise constant).

  14. NL-Means Filter (Buades 2005)
     • Bilateral Filter (better, but similar ‘stairsteps’).

  15. NL-Means Filter (Buades 2005)
     • NL-Means: sharp, low noise, few artifacts.

  16. NL-Means Filter (Buades 2005)

  17. NL-Means Filter (Buades 2005)

  18. NL-Means Filter (Buades 2005)

  19. Isotropic Diffusion (Heat Equation)
     The basic idea is simple: embed the original image in a family of derived images I(x, y, t), obtained by convolving the original image I_0(x, y) with a Gaussian kernel G(x, y; t) of variance t:
        I(x, y, t) = I_0(x, y) * G(x, y; t).
     This one-parameter family of derived images may equivalently be viewed as the solution of the heat-conduction, or diffusion, equation
        I_t = ΔI = I_xx + I_yy,
     with the initial condition I(x, y, 0) = I_0(x, y), the original image.
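A quick numerical check of the Gaussian/heat-equation equivalence (a sketch; it assumes the convention I_t = ΔI, whose kernel at time t is a Gaussian of variance 2t per axis, while other texts absorb the factor of 2 into t):

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def heat_diffusion(img, t, dt=0.1):
    """Explicit Euler integration of I_t = Laplacian(I) up to time t.
    Stable for dt <= 0.25 with the standard 5-point Laplacian."""
    I = img.astype(float)
    for _ in range(int(round(t / dt))):
        I = I + dt * laplace(I)     # I^{n+1} = I^n + dt * (I_xx + I_yy)
    return I

# Should closely match a Gaussian blur with sigma = sqrt(2 * t), e.g.:
# heat_diffusion(img, t=2.0)  ~  gaussian_filter(img.astype(float), sigma=2.0)
```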

  20. Criteria for the Diffusion Equation
     We would like diffusion to satisfy two criteria:
     1) Causality: any feature at a coarse level of resolution is required to possess a (not necessarily unique) “cause” at a finer level of resolution, although the reverse need not be true. In other words, no spurious detail should be generated when the resolution is diminished.
     2) Homogeneity and isotropy: the blurring is required to be space-invariant.
     The second criterion is not necessary and, as we shall see, using something else is a good idea!

  21. Anisotropic Diffusion
        I_t = div(c(x, y, t) ∇I) = c(x, y, t) ΔI + ∇c · ∇I
     (div: divergence; ∇: gradient; Δ: Laplacian)
     This reduces to the isotropic heat-diffusion equation if c(x, y, t) = const. Suppose we knew, at time (scale) t, the locations of the region boundaries appropriate for that scale. We would then want to encourage smoothing within a region in preference to smoothing across boundaries. This could be achieved by setting the conduction coefficient to 1 in the interior of each region and to 0 at the boundaries. The problem: we don’t know the boundaries!

  22. Criteria for the Anisotropic Diffusion Equation
     We would like diffusion to satisfy three criteria:
     1) Causality: any feature at a coarse level of resolution is required to possess a (not necessarily unique) “cause” at a finer level of resolution, although the reverse need not be true. In other words, no spurious detail should be generated when the resolution is diminished.
     2) Immediate localization: the boundaries should remain sharp and stable at all scales.
     3) Piecewise smoothing: at all scales, intra-region smoothing should occur before inter-region smoothing.

  23. Solution: Guesstimate!
     Let E(x, y, t) be an estimate of edge locations. It should ideally have the following properties:
     1) E(x, y, t) = 0 in the interior of each region;
     2) E(x, y, t) = K·e(x, y, t) at each edge point, where e is a unit vector normal to the edge at that point and K is the local contrast (the difference in image intensities on the left and right) of the edge.
     If an estimate E(x, y, t) is available, then the conduction coefficient c(x, y, t) can be chosen to be a function c = g(||E||). According to the previous discussion, g(·) has to be a nonnegative, monotonically decreasing function with g(0) = 1.

  24. Properties of AD
     1) AD maintains the causality principle (proof omitted).
     2) AD enhances edges.
     Proof: w.l.o.g. assume the edge is aligned with the y axis, and choose c to be a function of the gradient of I: c(x, y, t) = g(I_x(x, y, t)). Let φ(I_x) = g(I_x)·I_x denote c·I_x (known as the flux). Then the 1D version of equation (3) becomes
        I_t = ∂/∂x [φ(I_x)] = φ′(I_x)·I_xx.
     We are interested in the variation in time of the slope of the edge, ∂(I_x)/∂t: we want it to increase for strong (real) edges and decrease for weak (probably noise) edges.

  25. Edge Enhancement
     Given that I is smooth, we can invert the order of differentiation:
        ∂(I_x)/∂t = ∂(I_t)/∂x = ∂/∂x [φ′(I_x)·I_xx] = φ″(I_x)·I_xx² + φ′(I_x)·I_xxx.
     Suppose the edge is oriented in such a way that I_x > 0. At the point of inflection I_xx = 0, and I_xxx ≪ 0, since the point of inflection corresponds to the point with maximum slope. Then in a neighborhood of the point of inflection, ∂(I_x)/∂t has sign opposite to φ′(I_x): the φ″(I_x)·I_xx² term vanishes there, because I_xx = 0 at the point of inflection. So if φ′(I_x) > 0 the slope of the edge will decrease in time; if φ′(I_x) < 0 the slope will increase with time.

  26. The choice of function φ(·) that leads to edge enhancement
     Where φ increases, its derivative is greater than zero and the edge will get blurred. Where φ decreases, the edge will be enhanced. The g functions used in the paper are
        g(∇I) = exp(−(||∇I|| / K)²)   and   g(∇I) = 1 / (1 + (||∇I|| / K)²).
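In code (a small sketch; K is the contrast parameter and grad_mag is the local gradient magnitude):

```python
import numpy as np

# Perona-Malik's two conduction functions: the exponential one privileges
# high-contrast edges, the rational one privileges wide regions.
def g_exp(grad_mag, K):
    return np.exp(-(grad_mag / K) ** 2)

def g_rational(grad_mag, K):
    return 1.0 / (1.0 + (grad_mag / K) ** 2)
```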

  27. Implementation
     Discretizing the diffusion equation on the pixel grid leads to the scheme (a discrete Laplacian when g ≡ 1):
        I_s^(t+1) = I_s^t + (λ / |η_s|) · Σ_{p ∈ η_s} g(|∇I_s,p|) · ∇I_s,p,
     where η_s is the 4-neighborhood of pixel s, and the conduction coefficients are computed from the neighbor differences ∇I_s,p = I_p^t − I_s^t at every iteration.
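A minimal sketch of that scheme (explicit 4-neighbor updates; np.roll gives periodic boundaries for brevity, where reflecting borders would be more faithful):

```python
import numpy as np

def perona_malik(img, n_iter=20, K=10.0, lam=0.2):
    """Explicit Perona-Malik diffusion:
    I_s <- I_s + (lam / 4) * sum_p g(|I_p - I_s|) * (I_p - I_s)."""
    I = img.astype(float)
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)    # rational conduction coefficient
    for _ in range(n_iter):
        # Differences to the four nearest neighbors (N, S, E, W).
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I,  1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I,  1, axis=1) - I
        I = I + (lam / 4.0) * (g(np.abs(dN)) * dN + g(np.abs(dS)) * dS +
                               g(np.abs(dE)) * dE + g(np.abs(dW)) * dW)
    return I
```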

  28. Results

  29. Results

  30. Results

  31. Anisotropic Diffusion, Robust Statistics and Line Processes
     Robust statistics is about dealing with outliers. Within the context of image denoising, edges are outliers, because if we had no edges then a simple Gaussian blur would do the job. Formally, we want to minimize
        E(I) = Σ_s Σ_{p ∈ η_s} ρ(I_p − I_s, σ);
     that is, every pixel s should be close to its neighbors p. This objective function can be solved with the following gradient-descent scheme:
        I_s^(t+1) = I_s^t + (λ / |η_s|) · Σ_{p ∈ η_s} ψ(I_p^t − I_s^t, σ),
     where ψ = ρ′ is the influence function of the error norm ρ.

  32. The Quadratic Error Norm
     ρ(x) = x², with influence function ψ(x) = 2x: the influence of outliers grows without bound, so a single large difference can dominate the estimate.

  33. The Lorentzian Error Norm
     ρ(x, σ) = log(1 + (1/2)(x/σ)²), with influence function ψ(x, σ) = 2x / (2σ² + x²): the influence of outliers redescends toward zero as |x| grows.

  34. The connection between AD and RS
     Anisotropic diffusion and robust statistics correspond term by term: the robust objective function plays the role of the diffusion energy, and its gradient descent is the diffusion update. By defining
        g(x)·x ≡ ψ(x),   i.e.   g(x) = ψ(x) / x,
     we get equivalence between the two approaches. In the discrete case we have
        I_s^(t+1) = I_s^t + (λ / |η_s|) · Σ_{p ∈ η_s} g(|∇I_s,p|) · ∇I_s,p,   with   ∇I_s,p = I_p^t − I_s^t.
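The correspondence in code (a brief sketch; s plays the role of the robust scale σ):

```python
# Dividing the Lorentzian influence function psi(x) = 2x / (2s^2 + x^2)
# by x gives the conduction coefficient g(x) = 2 / (2s^2 + x^2), which is
# Perona-Malik's 1 / (1 + (x/K)^2) up to a constant scale factor.
def g_lorentzian(d, s):
    return 2.0 / (2.0 * s ** 2 + d ** 2)
```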

  35. Example
     Perona-Malik suggest g(x) = 1 / (1 + (x/K)²) for a positive constant K. We want to find a ρ() function such that the iterative solutions of the diffusion equation and of the robust-statistics equation are equivalent. Letting ψ(x) = g(x)·x and integrating, we have
        ρ(x) = (K²/2) · log(1 + (x/K)²),
     which is the Lorentzian error norm up to scale.

  36. PM and the Lorentzian

  37. Tukey’s biweight
     Why settle for the Lorentzian? Maybe we can choose a more robust error measure. Tukey’s biweight has influence function ψ(x, σ) = x·(1 − (x/σ)²)² for |x| ≤ σ and ψ(x, σ) = 0 otherwise (up to scale), so differences larger than σ are ignored entirely.

  38. Or Huber’s minimax norm?
     Huber’s minimax norm is equivalent to the L_1 norm for large values. But for normally distributed data, the L_1 norm produces estimates with higher variance than the optimal L_2 (quadratic) norm, so Huber’s minimax norm is designed to be quadratic for small values:
        ρ(x, σ) = x²/(2σ) + σ/2 for |x| ≤ σ,   ρ(x, σ) = |x| otherwise.

  39. Comparing all three functions
     Now we can compare the three error norms directly. The modified L_1 (Huber) norm gives all outliers a constant weight of one, while the Tukey norm gives zero weight to outliers whose magnitude is above a certain value. The Lorentzian (or Perona-Malik) norm is in between the other two. Based on the shape of the influence function ψ(), we would correctly predict that diffusing with the Tukey norm produces sharper boundaries than diffusing with the Lorentzian (standard Perona-Malik) norm, and that both produce sharper boundaries than the modified L_1 norm.
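The three influence functions side by side (a sketch; constant scale factors are dropped, so these match the norms above only up to scale, with s playing the role of σ):

```python
import numpy as np

def psi_lorentzian(x, s):
    # Redescends slowly: outliers keep a small but nonzero influence.
    return 2.0 * x / (2.0 * s ** 2 + x ** 2)

def psi_huber(x, s):
    # Quadratic core with constant (L1-like) tails: every outlier
    # keeps a constant weight of one.
    return np.where(np.abs(x) <= s, x / s, np.sign(x))

def psi_tukey(x, s):
    # Hard redescender: zero influence beyond s.
    return np.where(np.abs(x) <= s, x * (1.0 - (x / s) ** 2) ** 2, 0.0)
```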

  40. Results

  41. Results

  42. Results

  43. Robust Statistics → Line Processes
     Robust estimation minimizes
        E(I) = Σ_s Σ_{p ∈ η_s} ρ(I_p − I_s, σ),   where ψ = ρ′.
     Equivalently, we can formulate the following line-process minimization problem:
        E(I, l) = Σ_s Σ_{p ∈ η_s} [ (I_p − I_s)²·l_s,p + P(l_s,p) ],   0 ≤ l_s,p ≤ 1,
     where l_s,p is a (continuous) line-process variable marking a boundary between s and p, and P(·) is a penalty for introducing a line. One benefit of the line-process approach is that the “outliers” are made explicit and can therefore be manipulated, as we will see shortly.
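The equivalence rests on the standard half-quadratic identity (a hedged sketch in the Black-Rangarajan form, with P the line penalty from above):

    ρ(x) = min_{0 ≤ l ≤ 1} [ x²·l + P(l) ],

and the minimizing line variable works out to l = ψ(x) / (2x): close to 1 inside smooth regions (full smoothing), close to 0 across strong edges (smoothing switched off).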
