Super-Resolution
Shai Avidan Tel-Aviv University
Slide Credits (partial list): Rick Szeliski, Steve Seitz, Alyosha Efros, Yacov Hel-Or, Yossi Rubner, Miki Elad, Marc Levoy, Bill Freeman, Fredo Durand
Basic Super-Resolution Idea

Given a set of low-quality images, fuse them into a single higher-resolution image. How?

Comment: this is an actual super-resolution reconstruction result (40 images, ratio 1:4).
Example – Surveillance
Example – Enhance Mosaics
Super-Resolution - Agenda
Intuition (1D and 2D illustrations)
Rotation/Scale/Disp. – defining the relation between the given and the desired images
The Maximum-Likelihood Solution – a simple solution based on the measurements
Bayesian Super-Resolution Reconstruction – taking into account the behavior of images
Some Results and Variations – examples, robustifying, handling color
Super-Resolution: A Summary – the bottom line
The Model

For each measurement k = 1, ..., N, the high-resolution image X passes through a geometric warp F_k, a blur H_k, and a decimation D_k, and is corrupted by additive noise V_k, producing the low-resolution images Y_1, ..., Y_N. All operators are assumed known:

  Y_k = D_k H_k F_k X + V_k,   k = 1, ..., N
The Model as One Equation

Stacking the N measurements into one linear system:

  Y = [Y_1; Y_2; ...; Y_N] = [D_1 H_1 F_1; D_2 H_2 F_2; ...; D_N H_N F_N] X + [V_1; V_2; ...; V_N] = H X + V
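To make the model concrete, here is a minimal numpy sketch of the forward chain Y_k = D_k H_k F_k X + V_k. The choices below (circular integer-pixel shifts for F_k, a 3x3 box blur for H_k, plain subsampling for D_k) are illustrative assumptions, not the operators used in the deck's experiments:

```python
import numpy as np

def warp(X, dx, dy):
    # F_k: a hypothetical integer-pixel translation (circular, for simplicity)
    return np.roll(np.roll(X, dy, axis=0), dx, axis=1)

def blur(X):
    # H_k: a simple 3x3 box blur, standing in for the camera PSF
    K = np.ones((3, 3)) / 9.0
    P = np.pad(X, 1, mode="edge")
    return sum(P[i:i + X.shape[0], j:j + X.shape[1]] * K[i, j]
               for i in range(3) for j in range(3))

def decimate(X, r):
    # D_k: keep every r-th pixel in each direction
    return X[::r, ::r]

rng = np.random.default_rng(0)
X = rng.random((60, 60))                   # the unknown high-resolution image
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]  # one warp per measurement
sigma = 0.01
# Y_k = D H F_k X + V_k
Ys = [decimate(blur(warp(X, dx, dy)), 3) + sigma * rng.standard_normal((20, 20))
      for dx, dy in shifts]
print(len(Ys), Ys[0].shape)                # prints: 4 (20, 20)
```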
A Rule of Thumb

In the noiseless case we have Y = H X. Clearly, this linear system of equations should have more equations than unknowns in order to make it possible to have a unique Least-Squares solution. Example: assume that we have N images of 100-by-100 pixels, and we would like to produce an image X of size 300-by-300. Then we have 10,000·N equations and 90,000 unknowns, so at least N = 9 images are needed.
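The counting argument, as a quick check (assuming the example's target size is 300-by-300):

```python
import math

# Rule-of-thumb check: the noiseless system Y = H X needs at least as many
# equations (low-resolution pixels) as unknowns (high-resolution pixels)
eq_per_image = 100 * 100        # each measured image contributes 10,000 equations
unknowns = 300 * 300            # the desired X has 90,000 unknowns
n_needed = math.ceil(unknowns / eq_per_image)
print(n_needed)                 # prints: 9
```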
Super-Resolution - Model

  Y_k = D_k H_k F_k X + V_k,   V_k ~ N(0, σ_n² I),   k = 1, ..., N
Simplified Model

Assuming the same blur H and decimation D for all the measurements:

  Y_k = D H F_k X + V_k,   V_k ~ N(0, σ_n² I),   k = 1, ..., N
The Super-Resolution Problem
Yk – the measured images (noisy, blurry, down-sampled, ...)
H – the blur, can be extracted from the camera characteristics
D – the decimation, dictated by the required resolution ratio
Fk – the warp, can be estimated using motion estimation
σn – the noise level, can be extracted from the camera / image
X – the unknown high-resolution image

Goal: recover X from

  Y_k = D H F_k X + V_k,   V_k ~ N(0, σ_n² I)
The Model as One Equation

  Y = H X + V

with sizes: Y is [N·M × 1] (M pixels per low-resolution image), the stacked operator is [N·M × r·M], X is [r·M × 1] (r is the resolution-area ratio), and V is [N·M × 1].
SR - Solutions

The Maximum-Likelihood (Least-Squares) solution:

  X̂ = ArgMin_X Σ_{k=1..N} || D H F_k X − Y_k ||²

and its regularized version:

  X̂ = ArgMin_X Σ_{k=1..N} || D H F_k X − Y_k ||² + λ A{X}
ML Reconstruction (LS)

Minimize the Least-Squares error

  ε²(X) = Σ_{k=1..N} || D H F_k X − Y_k ||²

Setting the derivative to zero,

  ∂ε²(X)/∂X = 2 Σ_{k=1..N} F_k^T H^T D^T ( D H F_k X̂ − Y_k ) = 0

yields the normal equations

  [ Σ_{k=1..N} F_k^T H^T D^T D H F_k ] · X̂ = Σ_{k=1..N} F_k^T H^T D^T Y_k,   i.e.,   A X̂ = B
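A minimal 1D numpy check of these normal equations, under simplifying assumptions (no blur, circular integer shifts for F_k, decimation by 2, noiseless data):

```python
import numpy as np

M, r, N = 8, 2, 2                                 # HR length, decimation, frames
I = np.eye(M)
D = I[::r]                                        # decimation: every r-th sample
H = I                                             # no blur in this toy example
Fs = [np.roll(I, k, axis=1) for k in range(N)]    # F_k: circular shifts

rng = np.random.default_rng(1)
X_true = rng.random(M)
Ys = [D @ H @ F @ X_true for F in Fs]             # noiseless measurements

# Normal equations: [sum F^T H^T D^T D H F] X = sum F^T H^T D^T Y
A = sum(F.T @ H.T @ D.T @ D @ H @ F for F in Fs)
B = sum(F.T @ H.T @ D.T @ Y for F, Y in zip(Fs, Ys))
X_hat = np.linalg.solve(A, B)
print(np.allclose(X_hat, X_true))                 # prints: True (8 eqs, 8 unknowns)
```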
LS - Iterative Solution

Instead of solving the normal equations directly, use steepest descent:

  X̂_{n+1} = X̂_n − β Σ_{k=1..N} F_k^T H^T D^T ( D H F_k X̂_n − Y_k )

[Block diagram: each iteration applies F_k, H, D to the current estimate X̂_n, subtracts Y, and back-projects the error through F_k^T, H^T, D^T with step size β to obtain X̂_{n+1}.]
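A minimal 1D numpy sketch of this iteration (assumptions: no blur, circular integer shifts for F_k, decimation by 2, hand-picked step size β):

```python
import numpy as np

M, r, N = 8, 2, 2
I = np.eye(M)
D = I[::r]                                        # decimation
H = I                                             # no blur in this sketch
Fs = [np.roll(I, k, axis=1) for k in range(N)]    # F_k: circular shifts

rng = np.random.default_rng(1)
X_true = rng.random(M)
Ys = [D @ H @ F @ X_true for F in Fs]             # noiseless measurements

beta = 0.5                                        # assumed step size
X = np.zeros(M)                                   # start from a blank estimate
for _ in range(200):
    # X_{n+1} = X_n - beta * sum_k F_k^T H^T D^T (D H F_k X_n - Y_k)
    grad = sum(F.T @ H.T @ D.T @ (D @ H @ F @ X - Y) for F, Y in zip(Fs, Ys))
    X = X - beta * grad
print(np.max(np.abs(X - X_true)) < 1e-6)          # prints: True
```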
The Model – A Statistical View

In the stacked form Y = H X + V, we assume that the noise vector V is Gaussian and white:

  Pr{V} = Const · exp{ −V^T V / (2σ_v²) }

For a known X, Y is also Gaussian with a "shifted mean":

  Pr{Y | X} = Const · exp{ −(Y − H X)^T (Y − H X) / (2σ_v²) }
Maximum-Likelihood ... Again

The ML estimator is given by

  X̂_ML = ArgMax_X Pr{Y | X}

which means: find the image X such that the measurements are the most likely to have happened. In our case this leads to what we have seen before:

  X̂_ML = ArgMax_X Pr{Y | X} = ArgMin_X || Y − H X ||²
ML Often Sucks !!! For Example ...

For the image denoising problem (H = I) we get

  X̂_ML = ArgMin_X || Y − X ||²   ⇒   X̂ = Y

We got that the best ML estimate for a noisy image is ... the noisy image itself. The ML estimator is quite useless when we have insufficient information. A better approach is needed. The solution is the Bayesian approach.
Using The Posterior

Instead of maximizing the Likelihood function Pr{Y | X}, maximize the Posterior probability function Pr{X | Y}.

This is the Maximum-A-posteriori Probability (MAP) estimator: find the most probable X, given the measurements.
Why Called Bayesian?

Bayes' formula states that

  Pr{X | Y} = Pr{Y | X} · Pr{X} / Pr{Y}

and thus the MAP estimate leads to

  X̂_MAP = ArgMax_X Pr{X | Y} = ArgMax_X Pr{Y | X} · Pr{X}

The first factor, Pr{Y | X}, is already known; the prior Pr{X} is what remains to be chosen.
Image Priors?
This is the probability law of images. How can we describe it in a relatively simple expression? Much of the progress made in image processing in the past 20 years (PDEs in image processing, wavelets, MRF, advanced transforms, and more) can be attributed to the answers given to this question.
MAP Reconstruction

If we assume a distribution of the form

  Pr{X} = Const · exp{ −A{X} }

with some energy function A{X} for the prior, we have

  X̂_MAP = ArgMax_X Pr{Y | X} · Pr{X} = ArgMin_X || Y − H X ||² + λ A{X}

This additional term is also known as regularization.
Choice of Regularization

  X̂_MAP = ArgMin_X Σ_{k=1..N} || D_k H_k F_k X − Y_k ||² + λ A{X}

Possible prior functions A{X}. Examples: a simple energy term A{X} = ||X||², quadratic smoothness terms of the form A{X} = X^T T^T T X, robust smoothness measures A{X} = Σ_{m,n} a_{m,n} · ρ( X − S_h^n S_v^m X ) (with S_h, S_v denoting shift operators), sparse representations, ...
MAP Reconstruction

  X̂_MAP = ArgMin_X Σ_{k=1..N} || D H F_k X − Y_k ||² + λ A{X}

Common choices of A{X}:

Tikhonov cost function: A{X} = || Γ X ||²
Total variation: A{X} = || ∇X ||_1
Bilateral filter: A{X} = Σ_{l=−P..P} Σ_{m=−P..P} α^{|m|+|l|} || X − S_x^l S_y^m X ||_1
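These three regularizers can be sketched as small numpy functions. The forward-difference choice of Γ for Tikhonov and the circular shifts in the bilateral prior are simplifying assumptions for illustration:

```python
import numpy as np

def tikhonov(X):
    # A{X} = ||Gamma X||^2, with Gamma taken here as forward differences (an assumption)
    return np.sum(np.diff(X, axis=1) ** 2) + np.sum(np.diff(X, axis=0) ** 2)

def total_variation(X):
    # A{X} = ||grad X||_1 (anisotropic form)
    return np.sum(np.abs(np.diff(X, axis=1))) + np.sum(np.abs(np.diff(X, axis=0)))

def bilateral_prior(X, P=2, alpha=0.6):
    # A{X} = sum_{l,m} alpha^(|l|+|m|) ||X - S_x^l S_y^m X||_1, circular shifts for simplicity
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            shifted = np.roll(np.roll(X, l, axis=1), m, axis=0)
            total += alpha ** (abs(l) + abs(m)) * np.sum(np.abs(X - shifted))
    return total

flat = np.ones((8, 8))                      # constant image: all priors are zero
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0  # one vertical edge
print(total_variation(flat), total_variation(edge))   # prints: 0.0 8.0
```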
Robust Estimation + Regularization

Replacing the L2 norms with L1 in both terms gives the robust cost

  ε(X) = Σ_{k=1..N} || D H F_k X − Y_k ||_1 + λ Σ_{l=−P..P} Σ_{m=−P..P} α^{|m|+|l|} || X − S_x^l S_y^m X ||_1

and the corresponding steepest-descent update

  X̂_{n+1} = X̂_n − β [ Σ_{k=1..N} F_k^T H^T D^T sign( D H F_k X̂_n − Y_k ) + λ Σ_{l=−P..P} Σ_{m=−P..P} α^{|m|+|l|} ( I − S_y^{−m} S_x^{−l} ) sign( X̂_n − S_x^l S_y^m X̂_n ) ]
[Block diagram of the robust iteration: the same back-projection loop as in the LS case, with sign nonlinearities applied to the data residual D H F_k X̂_n − Y and to the regularization residuals, combined with the weights β, λ, and α.]
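A 1D numpy sketch of the robust data term (λ = 0, i.e. no regularizer), showing how the sign-based update votes down a gross outlier. The operators are toy assumptions (no blur, circular integer shifts, decimation by 2), not the deck's actual setup:

```python
import numpy as np

M, r, N = 12, 2, 6
I = np.eye(M)
D = I[::r]                                        # decimation
H = I                                             # no blur in this sketch
Fs = [np.roll(I, k, axis=1) for k in range(N)]    # F_k: circular shifts

rng = np.random.default_rng(2)
X_true = rng.random(M)
Ys = [D @ H @ F @ X_true for F in Fs]             # each HR sample observed 3 times
Ys[0][1] += 5.0                                   # one gross outlier measurement

X, beta = np.zeros(M), 0.005
for _ in range(2000):
    # X <- X - beta * sum_k F_k^T H^T D^T sign(D H F_k X - Y_k)
    g = sum(F.T @ H.T @ D.T @ np.sign(D @ H @ F @ X - Y) for F, Y in zip(Fs, Ys))
    X = X - beta * g
print(np.max(np.abs(X - X_true)) < 0.05)          # prints: True (outlier rejected)
```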
Example 0 – Sanity Check

Synthetic case: 9 images, no blur, 1:3 ratio. [Shown: one of the low-resolution images, the reconstructed result, and the original higher-resolution image.]
Example 1 – SR for Scanners

16 scanned images, ratio 1:2
Example 2 – SR for IR Imaging

8 images*, ratio 1:4. [Shown: a region taken from the given images vs. the same region taken from the reconstructed result.]
* This data is courtesy of the US Air Force
Example 3 – Surveillance

40 images, ratio 1:4
Robust SR

The MAP formulation uses an L2 data term:

  X̂_MAP = ArgMin_X Σ_{k=1..N} || D_k H_k F_k X − Y_k ||² + λ A{X}

In cases of measurement outliers, replace it with an L1 data term:

  X̂_MAP = ArgMin_X Σ_{k=1..N} || D_k H_k F_k X − Y_k ||_1 + λ A{X}
Example 4 – Robust SR

20 images, ratio 1:4. [Shown: L2-norm-based vs. L1-norm-based reconstructions.]

Example 5 – Robust SR

20 images, ratio 1:4. [Shown: L2-norm-based vs. L1-norm-based reconstructions.]
Handling Color in SR

Handling color: the classic approach is to convert the measurements to YCbCr, apply the SR on the Y channel, and use trivial interpolation on the Cb and Cr channels. Better treatment can be obtained if the statistical dependencies between the color layers are taken into account (i.e., forming a prior for color images). In the case of mosaiced measurements, demosaicing followed by SR is suboptimal; an algorithm that directly fuses the mosaic information into the SR is better.
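A sketch of the classic YCbCr approach. The "reconstruction" inside `sr_on_y` is only a placeholder (frame averaging plus nearest-neighbor upscaling) standing in for a real multi-frame SR solver, and the BT.601 full-range conversion is an assumed choice:

```python
import numpy as np

def rgb_to_ycbcr(img):
    # Assumed ITU-R BT.601 full-range conversion
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564
    cr = 0.5 + (r - y) * 0.713
    return y, cb, cr

def upscale(channel, s):
    # "trivial interpolation" stand-in: nearest neighbor
    return np.repeat(np.repeat(channel, s, axis=0), s, axis=1)

def sr_on_y(frames, s):
    # SR only on luminance; Cb/Cr get trivial interpolation
    y_channels = [rgb_to_ycbcr(f)[0] for f in frames]
    # placeholder "SR": average the (assumed registered) frames, then upscale;
    # a real system would run the multi-frame reconstruction here
    y_hr = upscale(np.mean(y_channels, axis=0), s)
    _, cb, cr = rgb_to_ycbcr(frames[0])
    return y_hr, upscale(cb, s), upscale(cr, s)

frames = [np.full((10, 10, 3), 0.5) for _ in range(4)]   # 4 flat gray frames
y_hr, cb_hr, cr_hr = sr_on_y(frames, 4)
print(y_hr.shape, cb_hr.shape)                           # prints: (40, 40) (40, 40)
```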
Example 6 – SR for Full Color
20 images, ratio 1:4
Example 7 – SR + Demosaicing

20 images, ratio 1:4. [Shown: mosaiced input; demosaicing followed by SR; combined treatment.]
Example-based Super Resolution
NN Failure
Markov Network Model
Single Pass
[Shown: original 70x70; cubic spline; example-based (generic training set); true 280x280.]
Super Resolution Result
Results
[Shown: original; cubic spline; MRF network; one-pass.]
Failure
[Shown: original; cubic spline; one-pass.]
Idea

[Shown: classical multi-image SR vs. single-image multi-patch SR.]

Why should it work?

[Shown: patch recurrence across image scales: all image patches vs. high-variance patches only (top 25%).]

Putting everything together

Results

[Shown: input; bicubic interpolation (x3); unified single-image SR (x3); ground-truth image.]
http://www.wisdom.weizmann.ac.il/~vision/SingleImageSR.html