Image Editing in the Gradient Domain
Shai Avidan, Tel Aviv University
Slide Credits (partial list)
- Rick Szeliski
- Steve Seitz
- Alyosha Efros
- Yacov Hel-Or
- Marc Levoy
- Bill Freeman
- Fredo Durand
- Sylvain Paris
Image Composition
[Figure: a target image composited from several source images]
Basics
- Images as scalar fields
– A function $I: \mathbb{R}^2 \to \mathbb{R}$
Vector Field
- A vector function $G: \mathbb{R}^2 \to \mathbb{R}^2$
- Each point (x, y) is associated with a vector (u, v):
$G(x,y) = \big( u(x,y),\ v(x,y) \big)$
Gradient Field
- Partial derivatives of scalar field
- Direction
– Maximum rate of change of scalar field
- Magnitude
– Rate of change
- Not all vector fields are gradients of an image.
– Only vector fields that are curl-free (a.k.a. conservative)
– What's the difference between a 1D and a 2D gradient field?
$\nabla I(x,y) = \left\{ \dfrac{\partial I}{\partial x},\ \dfrac{\partial I}{\partial y} \right\}$
Continuous vs. Discrete
Image I(x, y)
[Figures: the derivative images $I_x$ and $I_y$]
Continuous case → derivative; discrete case → finite differences:

$\dfrac{\partial I}{\partial x} \to I * [1\ \ {-1}], \qquad \dfrac{\partial I}{\partial y} \to I * [1\ \ {-1}]^T$
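To make the finite differences concrete, here is a minimal NumPy sketch (my illustration, not from the slides; the ramp image is made-up test data):

    import numpy as np

    # Toy scalar field I(x, y): a smooth ramp
    I = np.fromfunction(lambda y, x: 0.5 * x + 0.2 * y, (64, 64))

    # Forward differences, matching the [1 -1] kernels above
    Ix = I[:, 1:] - I[:, :-1]    # dI/dx ~ I(x+1, y) - I(x, y)
    Iy = I[1:, :] - I[:-1, :]    # dI/dy ~ I(x, y+1) - I(x, y)

    print(Ix.mean(), Iy.mean())  # ~0.5 and ~0.2 for this ramp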
Interpolation
- $S \subset \mathbb{R}^2$: the image domain
- $\Omega$: a closed subset of $S$, with boundary $\partial\Omega$
- $f^*$: a known scalar function over $S \setminus \Omega$, with boundary values on $\partial\Omega$
- $f$: an unknown scalar function over $\Omega$
Intuition – hole filling
- 1D: [figure: filling a hole over the interval $(x_1, x_2)$]
- 2D: [figure: filling a 2D hole with a membrane]
Membrane Interpolation
Solve the following minimization problem:

$\min_f \iint_\Omega \|\nabla f\|^2$

subject to Dirichlet boundary conditions:

$f|_{\partial\Omega} = f^*|_{\partial\Omega}$
Variational Methods to the Rescue!
Calculus: when we want to minimize g(x) over the space of real values x, we differentiate and set g'(x) = 0. But here the unknown is itself a function, so what is the derivative of a functional?
Variational methods: express your problem as an energy minimization over a space of functions.
Derivative Definition
1D Derivative:
$f'(x) = \lim_{\varepsilon \to 0} \dfrac{f(x+\varepsilon) - f(x)}{\varepsilon}$

Multidimensional derivative for some direction vector $w$:

$D_w f(x) = \lim_{\varepsilon \to 0} \dfrac{f(x + \varepsilon w) - f(x)}{\varepsilon}$

We want to minimize

$\int_{x_1}^{x_2} f'(x)^2\, dx \qquad \text{with } f(x_1) = a \text{ and } f(x_2) = b$
Assume we have a solution f and define a notion of a 1D derivative with respect to a scalar parameter ε in a given direction of function space. For a perturbation function η(x) that respects the boundary conditions (i.e. η(x₁) = η(x₂) = 0) and a scalar ε, the integral

$\int_{x_1}^{x_2} \left( f'(x) + \varepsilon\, \eta'(x) \right)^2 dx$

should be bigger than $\int_{x_1}^{x_2} f'(x)^2\, dx$ alone.
Calculus of Variations
Let's open the parentheses:

$\int_{x_1}^{x_2} \left( f'(x)^2 + 2\varepsilon\, f'(x)\, \eta'(x) + \varepsilon^2\, \eta'(x)^2 \right) dx$

The third term is always positive and is negligible as ε goes to zero. So differentiate the rest with respect to ε and set to zero:

$\int_{x_1}^{x_2} 2\, \eta'(x)\, f'(x)\, dx = 0$
Integrate by parts:
$\int_{x_1}^{x_2} \eta'(x)\, f'(x)\, dx = \Big[ \eta(x)\, f'(x) \Big]_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta(x)\, f''(x)\, dx$

where

$\Big[ f(x)\, g(x) \Big]_a^b = f(b)\, g(b) - f(a)\, g(a)$

And since η(x₁) = η(x₂) = 0, the expression in the square brackets is equal to zero.
And we are left with:

$\int_{x_1}^{x_2} \eta(x)\, f''(x)\, dx = 0$

But since this must be true for every η, it holds that f''(x) = 0 everywhere.
Intuition
The minimizer of $\int f'^2$ solves the problem over the whole interval. Locally, if the second derivative were not zero, the first derivative would be varying, which is bad, since we want $\int f'^2$ to be minimized.
Recap:
- Start with the functional we need to minimize
- Introduce the perturbation function
- Use the calculus of variations
- Set the derivative to zero
- Integrate by parts
- And obtain the solution
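As a quick numerical sanity check of this recipe (my sketch, not from the slides): among functions with the same endpoints, the straight line, for which f''(x) = 0, should give the smallest energy, and any admissible perturbation should only increase it.

    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)
    a, b = 0.0, 2.0                     # boundary conditions f(0)=a, f(1)=b

    line = a + (b - a) * x              # the straight line: f''(x) = 0
    eta = np.sin(np.pi * x)             # perturbation with eta(0) = eta(1) = 0

    def energy(f):
        """Riemann-sum approximation of the functional: integral of f'(x)^2 dx."""
        fp = np.gradient(f, x)
        return np.sum(fp ** 2) * (x[1] - x[0])

    print(energy(line))                 # ~4.0, the minimum (= (b - a)^2)
    print(energy(line + 0.3 * eta))     # strictly larger for any eps != 0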
Euler-Lagrange Equation
A fundamental equation of the calculus of variations: if J is defined by an integral of the form

$J = \int_{x_1}^{x_2} F(x, f, f_x)\, dx \qquad (1)$

then J has a stationary value if the following differential equation is satisfied:

$\dfrac{\partial F}{\partial f} - \dfrac{d}{dx} \dfrac{\partial F}{\partial f_x} = 0 \qquad (2)$
Recall, we want to solve the following minimization problem:

$\min_f \iint_\Omega \|\nabla f\|^2$

subject to Dirichlet boundary conditions:

$f|_{\partial\Omega} = f^*|_{\partial\Omega}$
Membrane Interpolation
In our case:

$F = \|\nabla f\|^2 = f_x^2 + f_y^2$

Then equation (2) becomes:

$\dfrac{\partial F}{\partial f} - \dfrac{d}{dx} \dfrac{\partial F}{\partial f_x} - \dfrac{d}{dy} \dfrac{\partial F}{\partial f_y} = 0$

with

$\dfrac{\partial F}{\partial f} = 0, \qquad \dfrac{d}{dx} \dfrac{\partial F}{\partial f_x} = \dfrac{d}{dx}\, 2 f_x = 2 \dfrac{\partial^2 f}{\partial x^2}, \qquad \dfrac{d}{dy} \dfrac{\partial F}{\partial f_y} = \dfrac{d}{dy}\, 2 f_y = 2 \dfrac{\partial^2 f}{\partial y^2}$

and we get the Laplacian:

$\dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} = 0$
Smooth image completion
Euler-Lagrange:

$\arg\min_f \iint_\Omega \|\nabla f\|^2 \qquad \text{s.t. } f|_{\partial\Omega} = f^*|_{\partial\Omega}$

The minimum is achieved when:

$\Delta f = 0 \ \text{over } \Omega \qquad \text{s.t. } f|_{\partial\Omega} = f^*|_{\partial\Omega}$
Discrete Approximation (Membrane interpolation)
$\Delta f = 0 \ \text{over } \Omega \qquad \text{s.t. } f|_{\partial\Omega} = f^*|_{\partial\Omega}$

$\Delta f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}$

$\dfrac{\partial f}{\partial x} \approx f_{x+1,y} - f_{x,y}, \qquad \dfrac{\partial^2 f}{\partial x^2} \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y}$

$\Delta f(x,y) \approx f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y} = 0$
Discrete Approximation
Each $f_{x,y}$ is an unknown variable $x_i$; there are N unknowns (the pixel values). This reduces to a sparse linear system of equations: the gradient constraints $A_x I = 0$ and $A_y I = 0$, plus the boundary conditions $A_{\text{boundary}} I = \text{boundary}$. Combining them all gives $Ax = b$:

$\begin{pmatrix} 4 & -1 & \cdots & -1 & & \\ -1 & 4 & -1 & \cdots & -1 & \\ & & \ddots & \ddots & \ddots & \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \end{pmatrix} = b$
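A minimal sketch of this construction (Python with NumPy/SciPy assumed; the ramp image and square hole are made-up test data): one row per pixel, with the 4/−1 Laplacian stencil for hole pixels and identity rows carrying the known values.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def membrane_fill(img, hole):
        """Fill masked pixels by solving Laplace(f) = 0 with Dirichlet
        values taken from img. Assumes the hole avoids the image border."""
        H, W = img.shape
        idx = lambda y, x: y * W + x
        A = sp.lil_matrix((H * W, H * W))
        b = np.zeros(H * W)
        for y in range(H):
            for x in range(W):
                i = idx(y, x)
                if hole[y, x]:
                    A[i, i] = 4.0                         # 4 f(x,y) ...
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        A[i, idx(y + dy, x + dx)] = -1.0  # ... minus the 4 neighbors
                else:
                    A[i, i] = 1.0                         # known pixel: f = img
                    b[i] = img[y, x]
        return spla.spsolve(A.tocsr(), b).reshape(H, W)

    # Made-up test: a horizontal ramp with a square hole in the middle
    img = np.fromfunction(lambda y, x: x / 31.0, (32, 32))
    hole = np.zeros((32, 32), dtype=bool)
    hole[8:24, 8:24] = True
    filled = membrane_fill(img, hole)     # recovers the ramp inside the hole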
What’s in the picture?
What’s in the picture?
What’s in the picture?
Editing in Gradient Domain
- Given a vector field G = (u(x,y), v(x,y)) (the pasted gradient) in a
bounded region Ω, find the values of f in Ω that optimize:

$\min_f \iint_\Omega \|\nabla f - G\|^2 \qquad \text{with } f|_{\partial\Omega} = f^*|_{\partial\Omega}$
[Figure: region Ω with unknown f inside, known f* outside, and guidance field G = (u, v)]
Intuition - What if G is null?
- 1D: [figure: linear interpolation between x₁ and x₂]
- 2D: membrane interpolation

$\min_f \iint_\Omega \|\nabla f\|^2 \qquad \text{with } f|_{\partial\Omega} = f^*|_{\partial\Omega}$
What if G is not null?
- 1D case: seamlessly paste a source signal onto the target
- Add a linear function so that the boundary condition is respected
- The gradient error is distributed equally over the whole region in order to respect the boundary condition
2D case
From Perez et al. 2003
Poisson Equation
In our case:

$F = \|\nabla f - G\|^2 = (f_x - G_x)^2 + (f_y - G_y)^2$

$\dfrac{\partial F}{\partial f} - \dfrac{d}{dx} \dfrac{\partial F}{\partial f_x} - \dfrac{d}{dy} \dfrac{\partial F}{\partial f_y} = 0$

$\dfrac{d}{dx} \dfrac{\partial F}{\partial f_x} = \dfrac{d}{dx}\, 2 (f_x - G_x) = 2\left( \dfrac{\partial^2 f}{\partial x^2} - \dfrac{\partial G_x}{\partial x} \right)$

$\dfrac{d}{dy} \dfrac{\partial F}{\partial f_y} = \dfrac{d}{dy}\, 2 (f_y - G_y) = 2\left( \dfrac{\partial^2 f}{\partial y^2} - \dfrac{\partial G_y}{\partial y} \right)$

and we get:

$\dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} = \dfrac{\partial G_x}{\partial x} + \dfrac{\partial G_y}{\partial y}, \qquad \text{i.e. } \Delta f = \operatorname{div} G$
Discrete Approximation (Poisson Cloning)
$\Delta f = \operatorname{div} G \ \text{over } \Omega \qquad \text{s.t. } f|_{\partial\Omega} = f^*|_{\partial\Omega}$

$\Delta f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}$

$\dfrac{\partial f}{\partial x} \approx f_{x+1,y} - f_{x,y}, \qquad \dfrac{\partial^2 f}{\partial x^2} \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y}$

$\Delta f(x,y) \approx f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y}$

$\operatorname{div} G = \dfrac{\partial G_x}{\partial x} + \dfrac{\partial G_y}{\partial y} \approx G_x(x,y) - G_x(x-1,y) + G_y(x,y) - G_y(x,y-1)$
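The same machinery gives a sketch of Poisson cloning (again my illustration, with made-up helper names): the matrix A is identical to the membrane case, but b now carries div G inside Ω; taking G = ∇(source), div G is simply the source's Laplacian.

    import numpy as np
    from scipy.ndimage import laplace
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def poisson_clone(target, source, mask):
        """Solve Laplace(f) = div G over the mask, f = target elsewhere,
        with G = grad(source). Assumes the mask avoids the image border."""
        H, W = target.shape
        idx = lambda y, x: y * W + x
        divG = laplace(source)            # (sum of 4 neighbors) - 4 * center
        A = sp.lil_matrix((H * W, H * W))
        b = np.zeros(H * W)
        for y in range(H):
            for x in range(W):
                i = idx(y, x)
                if mask[y, x]:
                    A[i, i] = 4.0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        A[i, idx(y + dy, x + dx)] = -1.0
                    b[i] = -divG[y, x]    # sign flipped to match the 4/-1 rows
                else:
                    A[i, i] = 1.0
                    b[i] = target[y, x]
        return spla.spsolve(A.tocsr(), b).reshape(H, W)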
Alternative Derivation (discrete notation)
- Let $D_x$ be the Toeplitz matrix that implements convolution with the kernel $[1\ {-1}]$, so that $\frac{\partial}{\partial x} * I = D_x I$ (and similarly $D_y$ for the vertical kernel):

$D_x = \begin{pmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \end{pmatrix}$

- We minimize

$\min_I \left\| \begin{pmatrix} D_x \\ D_y \end{pmatrix} I - \begin{pmatrix} u \\ v \end{pmatrix} \right\|^2$

- Normal equation:

$\left( D_x^T D_x + D_y^T D_y \right) I = D_x^T u + D_y^T v$

- Note: $D_x^T$ corresponds to the flipped kernel: $[1\ {-1}] \Rightarrow [-1\ 1]$.
Numerical Solution
- Discretize the Laplacian:

$\nabla^2 \equiv D_x^T D_x + D_y^T D_y$

As convolutions, $[1\ {-2}\ 1]$ in x plus $[1\ {-2}\ 1]^T$ in y combine (up to sign) into the 5-point stencil

$\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$

so the full matrix is a sparse Toeplitz matrix with 4 on the diagonal and −1 in the four neighbor positions.
Comments:
– A is sparse.
– A is symmetric and can be inverted.
– If Ω is rectangular, A is a Toeplitz matrix.
– The size of A is ~N×N for N pixels.
– It is impractical to form or store A.
– It is impractical to invert A directly.

$\left( D_x^T D_x + D_y^T D_y \right) I = D_x^T u + D_y^T v \quad\Longleftrightarrow\quad A I = b$
Iterative Solution: Conjugate Gradient
- Solves a linear system Ax=b
(in our case x=I)
- A is square, symmetric, positive semi-definite.
- Advantages:
– Fast!
– No need to store A; we only need to compute the product Ax.
– In our case Ax can be computed with a single convolution.
– Can deal with constraints.
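A sketch of the last two bullets (SciPy assumed; this is an illustration, not the original code): represent A implicitly by a function that applies the 4/−1 stencil with one convolution, and hand it to a conjugate-gradient solver.

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.sparse.linalg import LinearOperator, cg

    H, W = 64, 64
    stencil = np.array([[ 0., -1.,  0.],
                        [-1.,  4., -1.],
                        [ 0., -1.,  0.]])

    def apply_A(x):
        """Compute A @ x with a single convolution instead of storing A.
        Zero padding corresponds to zero Dirichlet boundary values."""
        return convolve(x.reshape(H, W), stencil,
                        mode='constant', cval=0.0).ravel()

    A = LinearOperator((H * W, H * W), matvec=apply_A)
    b = np.ones(H * W)
    x, info = cg(A, b)   # info == 0 on convergence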
Conjugate Gradient as a minimization problem
- Minimizes $f(x) = \tfrac{1}{2}\, x^T A x - b^T x$
- And since A is symmetric, $\nabla f(x) = A x - b$, so the minimum is exactly the solution of $Ax = b$.
Steepest Descent Method
- Pick the gradient direction $r^{(i)}$
- Find the optimum along this direction: $x^{(i)} + \alpha\, r^{(i)}$
[Figures: the gradient direction; the energy along the gradient]
Behavior of gradient descent
- Zigzags or goes straight, depending on whether we are lucky
– Ends up taking multiple steps in the same direction
Conjugate gradient
- For each step i:
– Take the residual $d^{(i)} = b - A x^{(i)}$ (= minus the gradient)
– Make it A-orthogonal to the previous directions
– Find the minimum along this direction
- Needs at most N iterations.
- Matlab command:
x=cgs(A,b)
A can be a function handle afun such that afun(x) returns A*x
Solving the Poisson equation with boundary conditions
- Define a circumscribing square $\Pi = \Omega \cup \Omega^*$
– Let $\Omega \subset \Pi$ denote the edited image area.
– Let $\Omega^* = \Pi - \Omega$ denote the surrounding area.
- Match the source gradients inside Ω and the target values outside:

$\left( D_x^T D_x + D_y^T D_y \right) I = \left( D_x^T D_x + D_y^T D_y \right) S \ \text{over } \Omega \qquad \text{s.t. } I = T \ \text{over } \Omega^*$

- The above requirements can be expressed as one linear set of equations. With the Laplacian kernel $k = \partial_x^T * \partial_x + \partial_y^T * \partial_y$:

$A I = \begin{cases} k * I & \text{over } \Omega \\ I & \text{over } \Omega^* \end{cases} \qquad\qquad b = \begin{cases} k * S & \text{over } \Omega \\ T & \text{over } \Omega^* \end{cases}$

so that $A I = b$, which is solved with x=cgs(A,b).
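A matrix-free SciPy translation of the Matlab call above (a sketch; `poisson_solve` and the masking details are my own, following the slide's notation):

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.sparse.linalg import LinearOperator, cgs

    def poisson_solve(S, T, omega):
        """S: source image, T: target image, omega: boolean mask of Omega.
        A applies k*I over Omega and the identity over Omega*."""
        H, W = T.shape
        k = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])

        def apply_A(x):
            I = x.reshape(H, W)
            AI = convolve(I, k, mode='nearest')           # k * I
            AI[~omega] = I[~omega]                        # identity over Omega*
            return AI.ravel()

        b = convolve(S.astype(float), k, mode='nearest')  # k * S over Omega
        b[~omega] = T[~omega]                             # T over Omega*

        A = LinearOperator((H * W, H * W), matvec=apply_A)
        I, info = cgs(A, b.ravel())                       # mirrors x = cgs(A, b)
        return I.reshape(H, W)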
Image stitching
Gradient Domain Composition
[Figures: Cut & Paste vs. Paste in Gradient Domain]
Another example
Transparent Cloning
Two ways to blend the source and target gradients:

$\nabla I = \dfrac{\nabla S + \nabla T}{2} \qquad \text{or} \qquad \nabla I = \max(\nabla S,\ \nabla T)$
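In code, the two guidance-field choices are one line each (a NumPy sketch; `gS` and `gT` stand for one component of ∇S and ∇T, and the max is interpreted as keeping the stronger gradient per pixel):

    import numpy as np

    def transparent_field(gS, gT):
        """Average the two gradient fields: (grad S + grad T) / 2."""
        return 0.5 * (gS + gT)

    def mixed_field(gS, gT):
        """Keep whichever gradient is stronger at each pixel."""
        return np.where(np.abs(gS) > np.abs(gT), gS, gT)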
Another example
Changing local illumination
Defect concealment
High Dynamic Range Compression
- Small exposure: dark inside
- Large exposure: outside saturated
Manipulate gradients
The manipulated gradient is given by:

$G(x,y) = \nabla H(x,y)\ \Phi(x,y), \qquad \Phi(x,y) = \dfrac{\alpha}{\|\nabla H(x,y)\|} \left( \dfrac{\|\nabla H(x,y)\|}{\alpha} \right)^{\beta}$

where H is the log-luminance, α is set to 0.1 of the average gradient magnitude, and β is set between 0.8 and 0.9. From Fattal et al. 2002.
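A sketch of that attenuation map (my variable names; `grad_mag` is the per-pixel magnitude of ∇H):

    import numpy as np

    def attenuation(grad_mag, beta=0.85, eps=1e-6):
        """Multiplier Phi: gradients above alpha are attenuated (beta < 1),
        gradients below alpha are slightly amplified."""
        alpha = 0.1 * grad_mag.mean()    # 0.1 of the average gradient magnitude
        g = grad_mag + eps               # avoid division by zero
        return (alpha / g) * (g / alpha) ** beta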
High Dynamic Range Compression
[Figures: desired image; software tone mapping vs. short exposure vs. long exposure]
Shadow Removal
Color2Grey Algorithm
Optimization:

$\min_g \sum_i \sum_{j \in N(i)} \big( (g_i - g_j) - \delta_{ij} \big)^2$

where j ranges over a neighborhood of i. If δᵢⱼ equals the luminance difference ΔLᵢⱼ, the ideal image g is the standard grayscale; otherwise δᵢⱼ is selectively modulated by the chrominance difference Cᵢⱼ.
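A toy sketch of this optimization (my code; for simplicity, j ranges over all pixel pairs rather than a neighborhood, δ is assumed antisymmetric, and constant factors are absorbed into the step size):

    import numpy as np

    def color2grey(L, delta, iters=500, lr=0.5):
        """Gradient descent on sum_ij ((g_i - g_j) - delta_ij)^2,
        starting from the flattened luminance values L."""
        g = L.astype(float).copy()
        n = len(g)
        for _ in range(iters):
            # descent direction: sum_j ((g_i - g_j) - delta_ij), up to a constant
            residual = (g[:, None] - g[None, :]) - delta
            g -= lr * residual.sum(axis=1) / n
        return g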
Results
[Figures: Original vs. Photoshop Grey vs. Color2Grey comparisons]
Color2Grey + Color
[Figures: further comparisons of Original, Photoshop Grey, Color2Grey, and Color2Grey + Color]