SLIDE 1

Sparsity and image processing

Aurélie Boisbunon

INRIA-SAM, AYIN

March 26, 2014

SLIDE 2

Why sparsity?

Main advantages

◮ Dimensionality reduction
◮ Fast computation
◮ Better interpretability

Image processing

◮ pattern recognition
◮ denoising / deblurring
◮ compression
◮ super-resolution
◮ source separation


SLIDE 3

Context and objectives

Linear regression

x = Dα + ε

◮ x: (vectorized) image
◮ D: dictionary
◮ α: coefficient vector (to estimate)
◮ ε: noise

Assumption

α is a sparse vector/matrix

Dictionary

D = {φ_j}_{j=1}^J

◮ Fixed: Fourier basis, wavelets
◮ Learned

Sources: [Hastie et al., 2008], [Donoho et al., 1995]


SLIDE 4

Sparse optimization problem

min_α ‖x − Dα‖₂² + pen(α)

where the first term, ‖x − Dα‖₂², measures the goodness of fit / distortion rate.

Goodness of fit

Measures how close two images are.

[Figure: goodness-of-fit values (×10⁴) of the original image against its salt & pepper, Gaussian, and negative corruptions]


SLIDE 5

Sparse optimization problem

min_α ‖x − Dα‖₂² + pen(α)

where the second term, pen(α), is the penalty / regularization.

Penalty

Special case: pen non-differentiable at zero¹ ⇒ sparse solution α̂

[Figure: contour plots of the penalty in the (β₁, β₂) plane for ℓ0, ℓ1/Lasso, reweighted-ℓ1, and MCP]

MCP = Minimax Concave Penalty [Zhang, 2010]

¹ with 0 belonging to the subgradient of pen
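
For reference, the standard forms of the four penalties compared above, which the slide does not spell out (λ > 0 is the regularization weight, δ a small constant, γ > 1 the MCP concavity parameter):

pen_ℓ0(α) = λ ‖α‖₀ = λ #{j : α_j ≠ 0}
pen_ℓ1(α) = λ ‖α‖₁ = λ Σ_j |α_j|
pen_rw-ℓ1(α) = λ Σ_j w_j |α_j|, with weights w_j = 1/(|α̂_j| + δ) from a previous estimate [Candes et al., 2008]
pen_MCP(α) = λ Σ_j ∫₀^|α_j| (1 − t/(γλ))₊ dt [Zhang, 2010]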


SLIDE 6

Sparse optimization problem

min_α ‖x − Dα‖₂² + pen(α)

where the second term, pen(α), is the penalty / regularization.

Penalty

Special case: pen non-differentiable at zero² ⇒ sparse solution α̂

[Figure: thresholding rules β̂_j versus the least-squares estimate for ℓ0/hard threshold, ℓ1/soft threshold, reweighted-ℓ1 (adaptive Lasso), and MCP]

MCP = Minimax Concave Penalty [Zhang, 2010]

² with 0 belonging to the subgradient of pen
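
For concreteness, a minimal sketch of the three scalar thresholding rules pictured above; the hard and soft rules are standard, and the MCP rule follows [Zhang, 2010] with concavity parameter γ > 1 (gamma=3.0 is an illustrative default, not from the slide):

import numpy as np

def hard_threshold(b, lam):
    # l0 / hard threshold: keep b unchanged if |b| > lam, else set it to zero
    return np.where(np.abs(b) > lam, b, 0.0)

def soft_threshold(b, lam):
    # l1 / soft threshold: shrink every coefficient toward zero by lam
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def mcp_threshold(b, lam, gamma=3.0):
    # MCP: soft-threshold small coefficients, leave |b| > gamma*lam unbiased
    small = soft_threshold(b, lam) / (1.0 - 1.0 / gamma)
    return np.where(np.abs(b) > gamma * lam, b, small)

Applied to the same least-squares input, hard keeps or kills, soft shrinks everything, and MCP interpolates between the two, which is exactly the bias difference the figure illustrates.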


SLIDE 7

Matching/Basis pursuit

Algorithm

Start: α = 0, J = ∅
Repeat:

1. Find the vector φ_j most correlated with the residual:
   j = arg max_j |φ_jᵀ (x − D(J) α(J))|

2. Add it to the "active set": J ← J ∪ {j}

3. Update the coefficients α(J)

until stopping rule.
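
A minimal sketch of this greedy loop in Python, assuming step 3 refits the active coefficients by least squares (the orthogonal matching pursuit variant; k_max and tol are illustrative stopping parameters, not from the slide):

import numpy as np

def matching_pursuit(x, D, k_max=10, tol=1e-6):
    # Greedy sparse coding of x over dictionary D (columns = atoms phi_j).
    alpha = np.zeros(D.shape[1])
    active = []                                   # the active set J
    residual = x.copy()
    for _ in range(k_max):
        # 1. atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        # 2. add it to the active set
        if j not in active:
            active.append(j)
        # 3. update the active coefficients by a least-squares refit
        coef, *_ = np.linalg.lstsq(D[:, active], x, rcond=None)
        alpha[:] = 0.0
        alpha[active] = coef
        residual = x - D @ alpha
        if np.linalg.norm(residual) < tol:        # stopping rule
            break
    return alpha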


SLIDE 8

Matching/Basis pursuit

[Figure: coefficient paths versus algorithm step for ℓ0/matching pursuit, ℓ1/basis pursuit (LAR), reweighted-ℓ1, and MCP]


SLIDE 9

Applications

Compression³

[Figure: original image vs. its ℓ1 and reweighted-ℓ1 reconstructions]

³ [Candes et al., 2008]


SLIDE 10

Applications

Denoising/Deblurring⁴

[Figure: original image, noisy version, and the ℓ1 (FISTA) reconstruction]

⁴ [Beck and Teboulle, 2009]
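
A minimal sketch of the ℓ1 solver behind that result, written as plain ISTA; FISTA [Beck and Teboulle, 2009] adds a momentum step on top of the same soft-thresholded gradient update (lam and n_iter are illustrative placeholders):

import numpy as np

def ista_l1(x, D, lam, n_iter=200):
    # Solve min_alpha ||x - D alpha||_2^2 + lam * ||alpha||_1 by proximal gradient.
    L = 2.0 * np.linalg.norm(D, ord=2) ** 2       # Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ alpha - x)        # gradient of the data-fit term
        z = alpha - grad / L                      # gradient step
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return alpha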


SLIDE 11

Dictionary learning

Optimization problem

min_{α, D} ‖x − Dα‖₂² + pen(α)

Algorithm

Start: α = 0, D = D⁰

1. Extract patches from the image
2. Repeat:

◮ Solve the optimization problem for α with D fixed
◮ Solve the optimization problem for D with α fixed

until stopping rule.

Source: [Bach et al., 2011]
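
A minimal sketch of this alternation, assuming an ℓ1 penalty handled by the ista_l1 sketch from slide 10 and a least-squares dictionary update with renormalized atoms (patch extraction and the stopping rule are elided; n_atoms, lam, and n_iter are illustrative placeholders):

import numpy as np

def dictionary_learning(X, n_atoms, lam, n_iter=20):
    # Alternate sparse coding and dictionary updates on a patch matrix X (d x n).
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
    A = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_iter):
        # alpha-step: sparse-code each patch with D fixed (ista_l1 from slide 10)
        for i in range(X.shape[1]):
            A[:, i] = ista_l1(X[:, i], D, lam)
        # D-step: least-squares fit with A fixed, then renormalize the atoms
        D = X @ A.T @ np.linalg.pinv(A @ A.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, A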


SLIDE 12

Dictionary learning

Applications

Inpainting⁵ and texture recognition⁶

⁵ [Mairal et al., 2009]
⁶ [Mairal et al., 2008]


SLIDE 13

Thank you!


SLIDE 14

References

Bach, F., Jenatton, R., Mairal, J., and Obozinski, G. (2011). Convex optimization with sparsity-inducing norms. In Optimization for Machine Learning, pages 19–54. MIT Press.

Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202.

Candes, E. J., Wakin, M. B., and Boyd, S. P. (2008). Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5-6):877–905.

Donoho, D. L., Buckheit, J. B., Chen, S., Johnstone, I., and Scargle, J. D. (1995). About WaveLab.

Hastie, T., Tibshirani, R., and Friedman, J. (2008). The Elements of Statistical Learning: Data Mining, Inference and Prediction (2nd edition). Springer Series in Statistics.

Mairal, J., Bach, F., Ponce, J., and Sapiro, G. (2009). Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 689–696. ACM.

Mairal, J., Ponce, J., Sapiro, G., Zisserman, A., and Bach, F. R. (2008). Supervised dictionary learning. In Advances in Neural Information Processing Systems, pages 1033–1040.

Zhang, C. (2010). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics, 38(2):894–942.
