Edge Preserving Filtering Median Filter Bilateral Filter
Shai Avidan Tel-Aviv University
Slide Credits (partial list): Rick Szeliski, Steve Seitz, Alyosha Efros, Yacov Hel-Or, Marc Levoy, Bill Freeman, Fredo
Sylvain Paris – MIT CSAIL
Box Average

    BA[I]_p = Σ_q (1/|S|) I_q

where the sum runs over all pixels q in a square neighborhood S around p, 1/|S| is the normalized box function, I_q is the intensity at pixel q, and BA[I]_p is the result at pixel p.
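As a minimal NumPy sketch of the box average (function name, dtype, and edge-replicating border handling are my own choices, not from the slides):

```python
import numpy as np

def box_average(image, radius):
    """Unweighted mean over a (2*radius+1)^2 square neighborhood.

    Every pixel q in the window gets the same weight 1/|S|;
    borders are handled by replicating the edge pixels.
    """
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    size = 2 * radius + 1
    h, w = image.shape
    for dy in range(size):          # slide the window over all offsets
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / size**2            # normalize by the window area |S|

# A constant image is unchanged by averaging.
flat = np.full((5, 5), 7.0)
smooth = box_average(flat, 1)
```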
[Figure: pixel weight vs. pixel position. A box window gives the same weight to related and unrelated pixels alike; a Gaussian window tapers the weight with distance.]
[Figure: box average vs. Gaussian blur on the same input. Both compute a per-pixel multiplication by a kernel followed by an average.]
Gaussian Blur

    GB[I]_p = Σ_q G_σ(‖p − q‖) I_q

with the normalized Gaussian function

    G_σ(x) = (1 / 2πσ²) exp(−x² / 2σ²)

σ controls the size of the window. The weight decays smoothly with pixel position: pixels near p are treated as related, distant pixels as unrelated, with uncertain pixels in between.
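A brute-force NumPy sketch of this Gaussian blur (the 3σ window truncation and border handling are my own choices):

```python
import numpy as np

def gaussian_blur(image, sigma):
    """Gaussian blur: GB[I]_p = sum_q G_sigma(||p - q||) I_q.

    The kernel is renormalized within the truncated window so that
    a constant image stays constant.
    """
    radius = max(1, int(3 * sigma))              # window of ~ +/- 3 sigma
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()                       # normalized Gaussian
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```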
[Figure: effect of σ on the input — a small σ gives limited smoothing; a large σ gives strong smoothing.] The same Gaussian kernel is used everywhere.
Bilateral Filter: No Averaging Across Edges [Aurich 95, Smith 97, Tomasi 98]

The kernel shape depends on the image content.
    BF[I]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|I_p − I_q|) I_q

- space weight G_σs (not new): depends on pixel position, as in Gaussian blur
- range weight G_σr (new): depends on pixel intensity
- normalization factor W_p (new)
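A brute-force NumPy sketch of this bilateral filter (window size and border handling are my own choices; real implementations use faster approximations):

```python
import numpy as np

def bilateral_filter(image, sigma_s, sigma_r):
    """BF[I]_p = (1/W_p) sum_q G_s(||p - q||) G_r(|I_p - I_q|) I_q."""
    radius = max(1, int(2 * sigma_s))
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    num = np.zeros((h, w))
    W = np.zeros((h, w))                 # normalization factor W_p
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            g_space = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
            g_range = np.exp(-(shifted - image)**2 / (2 * sigma_r**2))
            weight = g_space * g_range
            num += weight * shifted
            W += weight
    return num / W

# A sharp step edge survives: cross-edge range weights are ~0.
step = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
filtered = bilateral_filter(step, sigma_s=2.0, sigma_r=0.1)
```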
Gaussian blur uses a space weight only:

    GB[I]_p = Σ_q G_σ(‖p − q‖) I_q

The bilateral filter [Aurich 95, Smith 97, Tomasi 98] adds a range weight:

    BF[I]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|I_p − I_q|) I_q

where q ranges over the considered neighborhood.
    BF[I]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|I_p − I_q|) I_q

How to set the parameters σs and σr? Depends on the application. For instance:
- space σs: e.g., 2% of the image diagonal
- range σr: proportional to the edge amplitude, e.g., the mean or median of image gradients
The filter smooths away features thinner than ~2σs.
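The heuristics above can be sketched as a small helper (the function name and the use of gradient magnitude for edge amplitude are my own choices, following the slide's suggestions):

```python
import numpy as np

def default_bilateral_params(image):
    """Heuristic parameter choices from the slide:
    sigma_s ~ 2% of the image diagonal,
    sigma_r ~ mean gradient magnitude (a proxy for edge amplitude).
    """
    h, w = image.shape
    sigma_s = 0.02 * np.hypot(h, w)            # 2% of the diagonal
    gy, gx = np.gradient(image.astype(float))  # per-axis gradients
    sigma_r = np.mean(np.hypot(gx, gy))        # mean gradient magnitude
    return sigma_s, sigma_r

# Horizontal ramp: gradient magnitude is 1 everywhere.
ramp = np.tile(np.arange(10.0), (10, 1))
s, r = default_bilateral_params(ramp)
```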
Iterating the filter: I^(n+1) = BF[I^(n)].

[Figure: close-up kernel; the input after 1, 2, and 4 iterations.]
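The iteration I^(n+1) = BF[I^(n)] can be sketched in 1D (a self-contained, brute-force version; function names and parameters are my own choices):

```python
import numpy as np

def bilateral_1d(signal, sigma_s, sigma_r, radius=4):
    """1D brute-force bilateral filter, for illustration only."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for p in range(n):
        q = np.arange(max(0, p - radius), min(n, p + radius + 1))
        w = (np.exp(-(q - p)**2 / (2 * sigma_s**2))
             * np.exp(-(signal[q] - signal[p])**2 / (2 * sigma_r**2)))
        out[p] = np.sum(w * signal[q]) / np.sum(w)
    return out

def iterate_bilateral(signal, n_iter, sigma_s, sigma_r):
    """I^(n+1) = BF[I^(n)]: repeated filtering flattens regions
    progressively while edges survive."""
    for _ in range(n_iter):
        signal = bilateral_1d(signal, sigma_s, sigma_r)
    return signal
```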
Grayscale:

    BF[I]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|I_p − I_q|) I_q

For color images, the scalar intensity difference becomes a color difference between 3D vectors (RGB, Lab):

    BF[C]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(‖C_p − C_q‖) C_q
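A sketch of the color variant in NumPy (the only change from the grayscale version is the range term, which now measures the Euclidean distance between color vectors; window size and border handling are my own choices):

```python
import numpy as np

def bilateral_color(image, sigma_s, sigma_r, radius=3):
    """Color bilateral filter: the range weight compares 3D color
    vectors ||C_p - C_q|| instead of scalar intensities."""
    h, w, _ = image.shape
    padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)),
                    mode="edge")
    num = np.zeros_like(image, dtype=float)
    W = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            g_space = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
            diff2 = np.sum((shifted - image)**2, axis=2)  # ||C_p - C_q||^2
            g_range = np.exp(-diff2 / (2 * sigma_r**2))
            weight = g_space * g_range
            num += weight[..., None] * shifted
            W += weight
    return num / W[..., None]
```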
Contrast management (tone mapping). Key idea:
- Break the image into base and detail layers
- Compress the base
- Recompose the image

A fast approximation [Paris and Durand 06] links the bilateral filter to linear filters, whose weights depend only on the distance between points.
Higher-dimensional interpretation: plot a 1D image as I = f(x), with pixel position on one axis and pixel intensity on the other. Two pixels can be close in space yet far in range.

    BF[I]_p = (1/W_p) Σ_{q∈S} G_σs(‖p − q‖) G_σr(|I_p − I_q|) I_q

    W_p = Σ_{q∈S} G_σs(‖p − q‖) G_σr(|I_p − I_q|)

In the joint space × range domain, the product of the space and range Gaussians is a single higher-dimensional Gaussian, so the bilateral filter becomes a downsampled convolution followed by normalization. (Conceptual view, not exactly the actual algorithm.)
Joint / Cross Bilateral Filter

The bilateral filter uses two kinds of weights; the NEW idea is to get them from two kinds of images. Why do this? To get the 'best of both images'.

Standard bilateral filter: two kinds of weights, one image A:

    BF[A]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|A_p − A_q|) A_q
NEW: two kinds of weights, two images:

    BF[A, B]_p = (1/W_p) Σ_q G_σs(‖p − q‖) G_σr(|B_p − B_q|) A_q

The averaged values come from A (noisy, dim ambient image); the range weights come from B (clean, strong flash image).
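A NumPy sketch of the joint/cross bilateral filter (window size and border handling are my own choices): the only change from the standard filter is that the range term looks at B while the averaged values come from A.

```python
import numpy as np

def joint_bilateral(A, B, sigma_s, sigma_r, radius=3):
    """Joint/cross bilateral: average values from A (e.g. noisy
    no-flash image) using edge weights from B (e.g. clean flash image)."""
    h, w = A.shape
    pA = np.pad(A, radius, mode="edge")
    pB = np.pad(B, radius, mode="edge")
    num = np.zeros((h, w))
    W = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sA = pA[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sB = pB[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            g_space = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
            g_range = np.exp(-(sB - B)**2 / (2 * sigma_r**2))  # edges from B
            weight = g_space * g_range
            num += weight * sA
            W += weight
    return num / W

# B has a clean step edge; A is B shifted by a constant.
B = np.hstack([np.zeros((6, 6)), np.ones((6, 6))])
A = B + 0.5
fused = joint_bilateral(A, B, sigma_s=2.0, sigma_r=0.05)
```

With a small σr, B's edge blocks averaging across it, so A's per-side values survive intact.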
Image A (no-flash): warm, with shadows, but too noisy (too dim for a good quick photo).
Image B (flash): simple light, ALMOST no shadows.
From this video: ASTA, the Adaptive Spatio-Temporal Accumulation filter.
Replace the pixel intensity difference with a general dissimilarity measure D, where D(x,x) = 0 and D(x,y) = D(y,x). Instead of comparing pixel intensities, compare their local spatial neighborhoods.
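A minimal sketch of such a patch-based dissimilarity (the RMS difference between neighborhoods is my own choice of D; the required properties hold by construction):

```python
import numpy as np

def patch_dissimilarity(image, p, q, radius=1):
    """Dissimilarity D between pixels p and q, computed from their
    local spatial neighborhoods rather than single intensities.
    D(x, x) = 0 and D(x, y) = D(y, x) by construction."""
    padded = np.pad(image, radius, mode="edge")

    def patch(r, c):
        # (2*radius+1)^2 neighborhood centered on original pixel (r, c)
        return padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1]

    return np.sqrt(np.mean((patch(*p) - patch(*q))**2))
```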
ASTA pipeline:
- Take the current frame (from the FIFO center)
- Estimate the gain for each pixel
- Average recent similar values
- Reject outliers (avoids 'ghosting'); spatial averaging as needed
- Tone mapping
[Visualization: color encodes the number of averaged pixels.]
Bilateral Filter Variant: Mostly Temporal
- Carry a gain estimate for each pixel
- Use future as well as previous values
- Static scene? Temporal-only averaging works well
- Motion? The bilateral filter rejects outliers: no ghosts!
Multispectral Bilateral Video Fusion [Bennett 07]
- Produces a watchable result from unwatchable input
- VERY robust; accepts almost any dark video
- Exploits temporal coherence to emulate low-light HDR video without special equipment