Optimising Data for PDE-Based Inpainting and Compression


  1. Dagstuhl Seminar "Inpainting-Based Image Compression"
     Optimising Data for PDE-Based Inpainting and Compression
     Laurent Hoeltgen (hoeltgen@b-tu.de)
     Chair for Mathematics of Engineering & Numerical Optimisation
     Brandenburg University of Technology, Cottbus-Senftenberg
     November 17th, 2016
     This work is licensed under a Creative Commons "Attribution-ShareAlike 4.0 International" license.

  2. Motivation (1): PDE-Based Image Compression
     • alternative to transform-based approaches
     • has three important pillars, each of which requires advanced optimisation
     [Diagram: Image Compression rests on Data Selection, Data Storage, and Data Reconstruction]

  3. Motivation (2): Challenges in Mathematical Modelling
     We have to make a few tough decisions:
     • How do we reconstruct the image?
       – The PDE has a significant influence on our design principles.
     • What are our optimisation criteria?
       – Which data should we optimise?
       – Should we maximise the quality or minimise the file size?

  4. Outlook
     • Data Reconstruction
     • Data Selection
       – Data Selection in the Domain
       – Data Selection in the Co-Domain
     • Summary and Conclusion

  5. Data Reconstruction (1): PDE-Based Image Inpainting
     We could use any diffusion-type PDE. However, we want it to be:
     • simple to analyse (linear, parameter-free, ...)
     • applicable to any domain and co-domain (no restriction on image type, size, quantisation, ...)
     • fast to carry out
     Laplace interpolation fulfils all these requirements.

  6. Data Reconstruction (2): Laplace Interpolation (Noma, Misulia 1959)
     Consider the Laplace equation with mixed boundary conditions:
         −∆u = 0     on Ω \ Ω_K
         u = g       on Ω_K
         ∂_n u = 0   on ∂Ω
     • Ω_K represents the known data
     • Ω \ Ω_K is the region to be inpainted (i.e. the missing data)
     • image reconstructions are given as solutions u
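
A minimal sketch of the discrete version, assuming a grey-value image stored as a NumPy array and a boolean mask of known pixels (illustrative code, not the presenter's implementation): rows of known pixels pin u = g, while the remaining rows carry the 5-point Laplacian, with mirrored neighbours realising the homogeneous Neumann boundary condition.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplace_inpaint(g, mask):
    """g: 2D grey-value image, mask: 2D bool array (True = known pixel)."""
    h, w = g.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for y in range(h):
        for x in range(w):
            i = idx[y, x]
            if mask[y, x]:
                rows.append(i); cols.append(i); vals.append(1.0)   # u = g
            else:
                # -Laplace(u) = 0 via the 5-point stencil; out-of-range
                # neighbours are mirrored (homogeneous Neumann boundary)
                for j in (idx[max(y - 1, 0), x], idx[min(y + 1, h - 1), x],
                          idx[y, max(x - 1, 0)], idx[y, min(x + 1, w - 1)]):
                    rows.append(i); cols.append(j); vals.append(-1.0)
                rows.append(i); cols.append(i); vals.append(4.0)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))  # duplicates sum up
    b = np.where(mask.ravel(), g.ravel(), 0.0)
    return spla.spsolve(A, b).reshape(h, w)
```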

  7. Data Selection (1): Highest Accuracy or Smallest Memory Footprint?
     Optimisation strategies can be split into 2 groups:
     1. maximise accuracy at the expense of file size
        • data locations have no apparent structure
        • data values are real-valued
        • large amounts of data are preferred
     2. minimise memory footprint at the expense of accuracy
        • data positions are stored in memory-efficient structures
        • data values are quantised
        • sparse sets of data are preferred

  8. Data Selection (2): Finding Optimal Interpolation Data
     • We can't optimise accuracy and file size at the same time.
     • Using all image data yields perfect accuracy.
     • Using no image data at all yields perfect file size.
     Our strategy:
     • We target a fixed amount of image data and optimise accuracy.
     • In case of identical accuracy, we choose the one with the smaller size.

  9. Data Selection (3)
     • A brute-force search for the optimal pixel locations is impossible:
       there are ~10^5600 ways to choose 5% of the data of a 0.07 megapixel image.
     • Most smartphones yield 12 megapixel photographs.
     • For comparison: 10^23 ...
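
A back-of-the-envelope check of this count (assuming "0.07 megapixel" means a 256 × 256 image, which is an assumption on my part; the log-gamma trick avoids forming the gigantic binomial coefficient):

```python
import math

n = 256 * 256                 # assumed image size: 65536 pixels
k = round(0.05 * n)           # 5% of the pixels: 3277
log10_count = (math.lgamma(n + 1) - math.lgamma(k + 1)
               - math.lgamma(n - k + 1)) / math.log(10)
print(f"binom({n}, {k}) ~ 10^{log10_count:.0f}")   # ~ 10^5651
```

The result, roughly 10^5650, matches the order of magnitude quoted on the slide.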

  10. Data Selection in the Domain (1): Laplace Interpolation (Reminder)
          −∆u = 0     on Ω \ Ω_K
          u = g       on Ω_K
          ∂_n u = 0   on ∂Ω
      can be rewritten as (Mainberger et al. 2011):
          c (u − g) + (1 − c)(−∆u) = 0   on Ω
          ∂_n u = 0                      on ∂Ω
      with c ≡ 1 on Ω_K and c ≡ 0 on Ω \ Ω_K
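
In discrete form, the rewritten equation becomes a single linear system: with C = diag(c) and L the discrete Laplacian, (C + (I − C)(−L)) u = C g. A minimal sketch under that discretisation (for binary c it reproduces the Laplace interpolation sketch above; the Kronecker-sum construction of L with reflecting boundaries is a standard choice, not necessarily the presenter's):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def neumann_laplacian(h, w):
    """Sparse 5-point Laplacian on an h x w grid, reflecting boundaries."""
    def lap1d(n):
        main = -2.0 * np.ones(n)
        main[0] = main[-1] = -1.0            # mirrored boundary neighbours
        off = np.ones(n - 1)
        return sp.diags([off, main, off], [-1, 0, 1])
    return sp.kronsum(lap1d(w), lap1d(h))    # row-major pixel ordering

def inpaint(c, g, h, w):
    """c: mask/confidence values, g: image; both flattened to length h*w."""
    L = neumann_laplacian(h, w)
    A = sp.diags(c) + (sp.eye(h * w) - sp.diags(c)) @ (-L)
    return spla.spsolve(A.tocsr(), c * g)
```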

  11. Data Selection in the Domain (2): Regularised Laplace Interpolation
          c (u − g) + (1 − c)(−∆u) = 0   on Ω
          ∂_n u = 0                      on ∂Ω
      • the PDE also makes sense if c : Ω → R, which can be seen as a regularisation
      • regions with c > 1 violate the max-min principle:
        – the PDE resembles a backward diffusion process
        – it may also be interpreted as a Helmholtz equation → contrast enhancement
      • unconditional existence of a solution is difficult to show

  12. Data Selection in the Domain (3): An Optimal Control Model for Sparse Masks (H., Setzer, Weickert 2013)
      Optimal non-binary and sparse masks c are given by
          argmin_{u,c} ∫_Ω { 1/2 (u − g)² + λ|c| + ε/2 |c|² } dx
          subject to:  c (u − g) + (1 − c)(−∆u) = 0   on Ω
                       ∂_n u = 0                      on ∂Ω
      • the model has strong similarities to Belhachmi et al. 2009
      • 1/2 (u − g)² favours accurate reconstructions
      • λ|c| favours sparse sets of data
      • ε/2 |c|² is required for technical reasons
      • the PDE enforces feasible solutions

  13. Data Selection in the Domain (4): Interpretation
          argmin_{u,c} ∫_Ω { 1/2 (u − g)² + λ|c| + ε/2 |c|² } dx
          subject to:  c (u − g) + (1 − c)(−∆u) = 0   on Ω
                       ∂_n u = 0                      on ∂Ω
      • the energy reflects the trade-off between accuracy and sparsity;
        the objectives cannot be fulfilled simultaneously
      • λ steers the sparsity of the interpolation data
        – small, positive λ: many data points, but good reconstruction
        – large, positive λ: few data points, but bad reconstruction
      • non-convex, non-smooth, and large-scale optimisation
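
The discretised objective is straightforward to write down, e.g. to monitor the iterates later on; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def energy(u, g, c, lam, eps):
    """Discrete objective: 1/2 ||u - g||^2 + lam ||c||_1 + eps/2 ||c||^2."""
    return (0.5 * np.sum((u - g) ** 2)
            + lam * np.sum(np.abs(c))
            + 0.5 * eps * np.sum(c ** 2))
```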

  14. Data Selection in the Domain (5): A Solution Strategy
      • linearise the constraint to handle the non-convexity:
            T(u, c) := c (u − g) + (1 − c)(−∆u)
            T(u, c) ≈ T(u_k, c_k) + D_u T(u_k, c_k)(u − u_k) + D_c T(u_k, c_k)(c − c_k)
      • add a proximal term and solve iteratively
            argmin_{u,c} ∫_Ω { 1/2 (u − g)² + λ|c| + ε/2 |c|²
                               + μ/2 (u − u_k)² + μ/2 (c − c_k)² } dx
            subject to:  T(u_k, c_k) + D_u T(u_k, c_k)(u − u_k) + D_c T(u_k, c_k)(c − c_k) = 0
        until a fixed point is reached

  15. Data Selection in the Domain (6): Theoretical Findings
      The algorithm yields several interesting results:
      1. The energy is decreasing as long as
             1/2 (‖u_{k+1} − g‖₂² − ‖u_k − g‖₂²)
                 ≤ λ (‖c_k‖₁ − ‖c_{k+1}‖₁) + ε/2 (‖c_k‖₂² − ‖c_{k+1}‖₂²),
         i.e. the gain in sparsity must outweigh the loss in accuracy.
      2. Fixed points fulfil the necessary optimality conditions:
             u − g − D_u T(u, c)^⊤ p = 0
             λ ∂‖·‖₁(c) + ε c + D_c T(u, c) p ∋ 0
             T(u, c) = 0
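
The optimality system suggests one simple numerical realisation: solve the PDE for u, solve an adjoint equation for p, then take a proximal (soft-shrinkage) step on c. The following is a heavily simplified reduced-gradient sketch, not the authors' exact linearised scheme; the sign of the adjoint term follows the standard reduced-gradient derivation, which may differ from the slide's sign convention. Here D_u T = C + (I − C)(−L) and D_c T acts as pointwise multiplication by (u − g) + Lu.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shrink(x, t):
    """Soft shrinkage: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mask_update(c, g, L, lam, eps, tau):
    """One proximal-gradient step on c; g, c flattened, L discrete Laplacian."""
    DuT = (sp.diags(c) + sp.diags(1.0 - c) @ (-L)).tocsc()
    u = spla.spsolve(DuT, c * g)              # state:   T(u, c) = 0
    p = spla.spsolve(DuT.T, u - g)            # adjoint: D_u T^T p = u - g
    grad = eps * c - ((u - g) + L @ u) * p    # reduced gradient w.r.t. c
    return shrink(c - tau * grad, tau * lam)  # proximal step for lam*||c||_1
```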

  16. Data Selection in the Domain (7): Example (Input)

  17.–20. Data Selection in the Domain (8): Evolution of the Iterates
      [Plot, shown over four animation steps: mean squared error and energy over iterations 0 to 700]

  21. Data Selection in the Domain (9): Example (5% of Mask Points and Reconstruction)

  22. Data Selection in the Co-Domain (1): Tonal Optimisation
      • optimal pixel values are necessary to maximise quality
      • quantisation to n ≪ 256 colours is essential for compression
        – often 30% less memory is needed
      • the best number of colours is hard to determine
        – optimisation criteria should take the file size into account
        – hard to predict

  23. Data Selection in the Co-Domain (2): The Inpainting Operator
      • given a mask c, the solution u = u(c) of
            c (u − g) + (1 − c)(−∆u) = 0   on Ω
            ∂_n u = 0                      on ∂Ω
        can be expressed in terms of a linear inpainting operator M_c:
            u = M_c (c g)
      • optimal data values can be found by solving:
            argmin_x ∫_Ω |M_c (c x) − f|² dx
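
Since M_c is linear, the optimal values solve a linear least-squares problem. A hedged sketch using a matrix-free operator (the function name, the use of LSQR, and the cached factorisations are my choices, not the presenter's solver; L is a discrete Laplacian as in the earlier sketch):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def tonal_optimisation(c, f, L, n_iter=100):
    """argmin_x || M_c(c x) - f ||_2^2 for a binary mask c (flattened),
    target image f and discrete Laplacian L."""
    n = L.shape[0]
    A = (sp.diags(c) + sp.diags(1.0 - c) @ (-L)).tocsc()
    solve = spla.factorized(A)                # M_c: v -> A^{-1} v
    solveT = spla.factorized(A.T.tocsc())     # adjoint solve for LSQR
    M = spla.LinearOperator((n, n),
                            matvec=lambda x: solve(c * x),
                            rmatvec=lambda y: c * solveT(y))
    return spla.lsqr(M, f, iter_lim=n_iter)[0]
```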

  24. Data Selection in the Co-Domain (3): Continuous Tonal Optimisation
      • least-squares model suggested by Mainberger et al. 2011
      • yields arbitrary values in R
      • equivalence to mask optimisation: optimal non-binary masks c are
        equivalent to optimal colours in R with binary masks (H., Weickert 2015)
      • maximises reconstruction quality
      • storage of colour values in R is very expensive

  25. Data Selection in the Co-Domain (4): Remarks
      • The presented model yields high-quality data.
      • The optimisation is time consuming.
      • Storing this data as-is is too expensive for compression tasks.
      We need algorithms to simplify the data:
      • memory-efficient representation of arbitrary binary masks
      • good colour quantisation strategies

  26. Data Selection in the Co-Domain (5): Quantisation of Optimal Data Values
      • Optimal colour values gather around a few dominant colours.
      • Replacing all colours by their closest dominant colour yields:
        – a slightly larger reconstruction error
        – data that is much easier to compress
      • achievable by using clustering strategies
      Important questions:
      • Which clustering method is the best?
      • How do we choose our feature values?

  27. Data Selection in the Co-Domain (6): Distribution of Optimal Colour Values
      [Histogram: optimal colour values of the trui test image on mask positions, frequency over grey values 0–255]

  28. Data Selection in the Co-Domain (7): Evaluated Strategies
      We tested, with various feature choices:
      • k-means++
      • hierarchical clustering
      • Gaussian mixture models
      Findings: k-means++ on colour values at mask positions works quite well.

  29. Data Selection in the Co-Domain (8): Discrete/Continuous Tonal Optimisation (H., Breuß 2016)
      • apply k-means++ on colour values from mask locations
      [Plot: mean squared error vs. number of clusters (10 to 75) for k-means,
       hierarchical, and probabilistic clustering, compared to no optimisation]
      • k-means++ with 30 colours outperforms the original data (170 colours)
      • the optimal quantisation is found with silhouette statistics
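
A hedged sketch of this quantisation step with scikit-learn: cluster the optimised grey values at the mask positions with k-means++ and pick the number of clusters via the silhouette score. The function name, candidate range, and defaults are illustrative; the slide does not specify the exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def quantise_tonal_values(values, candidate_ks=range(10, 76, 5)):
    """values: 1D array of optimised grey values at mask positions."""
    X = values.reshape(-1, 1)                 # feature = grey value only
    best_score, best_km = -1.0, None
    for k in candidate_ks:
        km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(X)
        score = silhouette_score(X, km.labels_)   # silhouette statistics
        if score > best_score:
            best_score, best_km = score, km
    # replace every value by its nearest dominant colour (cluster centre)
    return best_km.cluster_centers_[best_km.labels_].ravel()
```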
