Old and New Algorithms for Blind Deconvolution, Yair Weiss, Hebrew University (PowerPoint PPT Presentation)



SLIDE 1

Old and New Algorithms for Blind Deconvolution

Yair Weiss

Hebrew University of Jerusalem joint work with Anat Levin, WIS (thanks to Meir Feder, EE, TAU)

SLIDE 2

Blind Deblurring

y = x * k

SLIDE 3

Much Recent Progress

(Fergus et al. 06, Cho et al. 07, Jia 07, Joshi et al. 08, Whyte et al. 10, Shan et al. 07, Harmeling et al. 10, Sroubek and Milanfar 11 · · · )

SLIDE 4

MAP using sparse derivative prior

[Figure: log histogram of image derivatives dx]

log P(x, k|y) = C − Σ_i |x_i|^α − λ‖k ∗ x − y‖²

  • MAPx,k: (x∗, k∗) = arg max_{x,k} P(x, k|y). Guaranteed to fail, even with global optimization. Often works well in practice.

  • MAPk: k∗ = arg max_k Σ_x P(x, k|y). Guaranteed to succeed if images are sampled from the prior. Can be difficult to optimize.

(Levin et al. 09)

SLIDE 5

This talk:

  • Old algorithms for blind deconvolution from communication systems. Not MAP. Rigorous correctness proofs.

  • Can explain success of some MAPx,k algorithms.

SLIDE 6

Blind Deconvolution in Communication

  • x(t) the transmitted signal.
  • y(t) = Σ_τ k_τ x(t − τ), the received signal.
  • y = x ∗ k

SLIDE 7

Blur makes signals Gaussians (Central Limit Theorem)

  • Orig. signal (IID): histogram κ = 1. Blurred: histogram κ = 2.683 (closer to the Gaussian value 3).

κ(y) = (1/N) Σ_i ȳ_i⁴,  ȳ_i = y_i/std(y)  (Shalvi and Weinstein 1990)
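A minimal numpy sketch of this effect (the Laplacian source, the 5-tap box kernel, and the sample size are illustrative choices, not from the slides):

```python
import numpy as np

def kurtosis(y):
    """Normalized fourth moment kappa(y) = (1/N) sum_i (y_i / std(y))^4.
    Equals 3 for Gaussian data; >3 is super-Gaussian, <3 sub-Gaussian."""
    yb = y / y.std()
    return np.mean(yb ** 4)

rng = np.random.default_rng(0)
x = rng.laplace(size=100_000)          # IID super-Gaussian source (kappa ~ 6)
k = np.ones(5) / 5                     # box blur kernel
y = np.convolve(x, k, mode="same")     # y = x * k

print(kurtosis(x))   # well above 3 (super-Gaussian)
print(kurtosis(y))   # pulled toward 3: blur makes the signal more Gaussian
```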

SLIDE 8

Blur makes signals Gaussians (Central Limit Theorem)

  • Orig. signal (IID): histogram κ = 26.43. Blurred: histogram κ = 5.45 (closer to the Gaussian value 3).

SLIDE 9

Simple Blind Deconvolution Algorithm (Shalvi and Weinstein 1990)

Assume xi IID. y = x ∗ k. Solve for the “inverse filter” e such that e ∗ y = x.

  • x_i sub-Gaussian (κ < 3):  e∗ = arg min_e κ(e ∗ y)  (Godard 1980)

  • x_i super-Gaussian (κ > 3):  e∗ = arg max_e κ(e ∗ y)

Claim: guaranteed to find the correct e, i.e. e∗ ∗ y = x. No local minima.
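The super-Gaussian case can be sketched as a brute-force grid search over 2-tap inverse filters (the kernel k = [1, 0.5], the grid, and the filter length are illustrative assumptions; a real equalizer uses a longer adaptive filter, so a 2-tap e only approximates the true inverse):

```python
import numpy as np

def kurtosis(y):
    yb = y / y.std()
    return np.mean(yb ** 4)

rng = np.random.default_rng(1)
x = rng.laplace(size=200_000)              # IID super-Gaussian source
k = np.array([1.0, 0.5])                   # unknown blur kernel
y = np.convolve(x, k)                      # observed y = x * k

# Grid search over 2-tap inverse filters e = [1, a]:
# keep the e that makes e * y as super-Gaussian as possible.
best_a, best_kappa = None, -np.inf
for a in np.linspace(-0.9, 0.9, 37):
    xhat = np.convolve(y, np.array([1.0, a]))
    kap = kurtosis(xhat)
    if kap > best_kappa:
        best_a, best_kappa = a, kap

# Maximizing kurtosis pushes e * k toward a delta: the selected a is
# negative (undoing the positive tap of k) and kappa rises above kappa(y).
print(best_a, best_kappa, kurtosis(y))
```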

SLIDE 10

Simple Proof

Assume x_i IID and super-Gaussian (κ > 3); y = x ∗ k.

e∗ = arg max_e κ(e ∗ y)

Proof: x̂ = e ∗ y = e ∗ k ∗ x = (e ∗ k) ∗ x ⇒ unless e ∗ k = δ, x̂ is more Gaussian than x, so κ(x̂) < κ(x).
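The "more Gaussian" step can be made quantitative. A sketch of the standard calculation, assuming x_i IID with zero mean and unit variance, writing c = e ∗ k:

```latex
% For \hat{x} = c * x with x_i IID, E[x_i] = 0, E[x_i^2] = 1:
%   E[\hat{x}_t^4] = \sum_i c_i^4 \, E[x^4] + 3 \sum_{i \neq j} c_i^2 c_j^2 .
% Dividing by \mathrm{std}(\hat{x})^4 = (\sum_i c_i^2)^2 gives
\kappa(\hat{x}) - 3
  = \frac{\sum_i c_i^4}{\bigl(\sum_i c_i^2\bigr)^2}\,\bigl(\kappa(x) - 3\bigr),
\qquad
\frac{\sum_i c_i^4}{\bigl(\sum_i c_i^2\bigr)^2} \le 1,
```

with equality iff c has a single nonzero tap, i.e. e ∗ k = δ. So any residual blur strictly shrinks |κ − 3|.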

SLIDE 11

Proof by pictures:

  • Orig. signal (IID): histogram κ = 26.43. Blurred: histogram κ = 5.45. Deblurred with the wrong e: histogram κ = 14.14.

SLIDE 12

Blind Deconvolution by Maximizing non-Gaussianity

Assume x_i IID and super-Gaussian (κ > 3); y = x ∗ k.

e∗ = arg max_e κ(e ∗ y)

  • Universal proof of correctness (don't need to know Pr(x_i), just IID and sub/super-Gaussianity).
  • Proofs of global convergence of iterative algorithms.
  • Used in microwave radio transmission (1980s), cable set-top boxes (mid 1990s), and wireless communication (late 1990s to today) (Johnson et al. 1998).
  • Can we use this for blind image deblurring?

SLIDE 13

Blur makes derivatives more Gaussian

  • Sharp image dx histogram: κ = 20.02. Blurred image dx histogram: κ = 17.99.

SLIDE 14

New class of blind image deblurring algorithms

  • y: blurred (deriv) image, histogram κ = 17.99.
  • x: sharp (deriv) image, histogram κ = 20.02.

Find (x∗, k∗) such that:

  • y = x∗ ∗ k∗
  • x∗ as non-Gaussian as possible.

If derivatives are IID, this algorithm provably finds the correct blur kernel.

SLIDE 15

Measuring non-Gaussianity using normalized moments

(1/N) Σ_i |x̄_i|^α,  x̄_i = x_i/std(x)

[Figure: log-probability curves for three priors, from sparsest to Gaussian]

  (1/N) Σ_i x̄_i⁴:      25.73    5.8     3
  (1/N) Σ_i x̄_i^(1/2):  0.60    0.74    0.81

The sparser the prior, the larger the fourth moment and the smaller the half moment (3 and 0.81 are the Gaussian values).
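A quick numpy check of the two moment orderings, using Gaussian and Laplacian samples as stand-ins for two of the priors above:

```python
import numpy as np

def normalized_moment(x, alpha):
    """(1/N) sum_i |x_i / std(x)|^alpha."""
    xb = np.abs(x / x.std())
    return np.mean(xb ** alpha)

rng = np.random.default_rng(2)
gauss = rng.normal(size=500_000)
laplace = rng.laplace(size=500_000)   # sparser (super-Gaussian) than Gaussian

# alpha = 4: sparser distributions score HIGHER (Gaussian value is 3)
print(normalized_moment(gauss, 4), normalized_moment(laplace, 4))
# alpha = 1/2: sparser distributions score LOWER (mass concentrated near zero)
print(normalized_moment(gauss, 0.5), normalized_moment(laplace, 0.5))
```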

SLIDE 16

A new(?) algorithm

  • y: blurred image, histogram κ = 17.99.

(x∗, k∗) = arg min_{x,k} Σ_i |x̄_i|^α + λ‖k ∗ x − y‖²,  x̄_i = x_i/std(x)

Guaranteed to succeed.

SLIDE 17

MAPx,k algorithm

  • y: blurred image, histogram κ = 17.99.

(x∗, k∗) = arg min_{x,k} Σ_i |x_i|^α + λ‖k ∗ x − y‖²

Guaranteed to fail.

SLIDE 18

Normalized Sparsity (Krishnan et al. 11)

min_{x,k} λ‖x ∗ k − y‖² + ‖x‖₁/‖x‖₂ + Ψ‖k‖₁

Can be seen as a special case of the "new" algorithm.
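Why the ‖x‖₁/‖x‖₂ term prefers sharp images can be checked numerically (the Laplacian stand-in for derivative statistics and the 9-tap box blur are illustrative assumptions):

```python
import numpy as np

def l1_over_l2(x):
    """Normalized sparsity measure: ||x||_1 / ||x||_2 (scale-invariant)."""
    return np.abs(x).sum() / np.sqrt((x ** 2).sum())

rng = np.random.default_rng(3)
x = rng.laplace(size=100_000)         # stand-in for sparse image derivatives
k = np.ones(9) / 9
y = np.convolve(x, k, mode="same")    # blurred derivatives

# Blur raises l1/l2 (the signal becomes less sparse, more Gaussian),
# so minimizing ||x||_1 / ||x||_2 favors the sharp explanation.
print(l1_over_l2(x), l1_over_l2(y))
```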

SLIDE 19

MAPxk with bilateral filtering (Cho and Lee 09, Hirsch et al. 11)

x̄ ← BilateralFilter(x)
k ← arg min_k ‖y − k ∗ x̄‖²
x ← arg min_x λ‖y − k ∗ x‖² + sparsity(x)

Can be seen as approximating a special case of the "new" algorithm.
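A 1-D toy sketch of this alternation, under loudly hypothetical simplifications: soft shrinkage stands in for the bilateral filter, a ridge penalty stands in for sparsity(x), and circular convolution is assumed so both arg mins have closed-form FFT solutions (none of these choices are from the slides):

```python
import numpy as np

def cconv(a, b):
    """Circular convolution of two length-N arrays via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def deblur(y, n_iter=10, lam=100.0, thresh=0.1, eps=1e-3):
    """Sketch of the alternation above: filter x, update k, update x."""
    n = len(y)
    x = y.copy()
    Y = np.fft.fft(y)
    for _ in range(n_iter):
        # x_bar <- "filtered" x: soft shrinkage toward zero (bilateral surrogate)
        xbar = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
        # k <- arg min ||y - k * x_bar||^2 (regularized FFT division)
        X = np.fft.fft(xbar)
        k = np.real(np.fft.ifft(Y * np.conj(X) / (np.abs(X) ** 2 + eps)))
        k = np.maximum(k, 0.0) + 1e-12      # kernel: nonnegative...
        k /= k.sum()                        # ...and sums to 1
        # x <- arg min lam ||y - k * x||^2 + ||x||^2 (ridge in place of sparsity)
        K = np.fft.fft(k)
        x = np.real(np.fft.ifft(lam * Y * np.conj(K) / (lam * np.abs(K) ** 2 + 1.0)))
    return x, k

rng = np.random.default_rng(4)
x_true = np.zeros(256)
x_true[rng.integers(0, 256, 12)] = rng.laplace(size=12) * 3   # sparse "derivatives"
k_true = np.zeros(256); k_true[:5] = 1 / 5                    # box blur
y = cconv(x_true, k_true)
x_est, k_est = deblur(y)
print(k_est[:6])
```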

SLIDE 20

A new(?) algorithm

  • y: blurred image, histogram κ = 17.99.

(x∗, k∗) = arg min_{x,k} Σ_i |x̄_i|^α + λ‖k ∗ x − y‖²,  x̄_i = x_i/std(x)

Guaranteed to succeed.

SLIDE 21

So what does this buy us?

  • Understanding recent algorithms. Proofs of success.
  • Improving recent algorithms: filters with better independence properties; iterative algorithms with global convergence.

SLIDE 22

Conclusions

  • MAP algorithms for blind image deconvolution. May work even when not supposed to.
  • Old algorithms for blind deconvolution in communications. Universal guarantees, global convergence, used in millions of devices.
  • Can help us understand and improve image deblurring algorithms.