
A Benchmark for Inpainting-Based Image Reconstruction and Compression



  1. Dagstuhl Seminar on Inpainting-Based Image Compression, Dagstuhl, November 13 – 18, 2016
     A Benchmark for Inpainting-Based Image Reconstruction and Compression
     Sarah Andris, Leif Bergerhoff, and Joachim Weickert
     Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science,
     Saarland University, Saarbrücken, Germany

  2. Motivation
     • Benchmarks can encourage research advances by allowing fair comparison.
     • Classical examples in computer vision: the Middlebury Optical Flow Benchmark
       and the Berkeley Segmentation Benchmark.
     • An inpainting test set with 100 images already exists: Kawai et al. 2009.
     • Our goal: introduce such a benchmark for inpainting-based reconstruction
       and compression. What features should we include?

  3. Outline
     • Motivation
     • Structure
     • Data Optimisation
     • Optimal Reconstruction and Complete Codecs
     • Error Measures
     • Conclusions

  4. Structure (1)
     • Provide test sets, code for basic methods, and rankings.
     • Layer 1: Data Optimisation
       – mask positions
       – grey / colour values
     • Layer 2: Optimal Reconstruction
       – mask positions
       – grey / colour values
       – inpainting operator
     • Layer 3: Complete Codecs
       – mask generation and grey / colour value optimisation
       – optimal inpainting operator
       – quantisation methods
       – storage and coding of acquired data

  5. Structure (2)
     Test Sets
     • Currently three test sets: SmallGrey, SmallColour, LargeColour.
     • The small sets contain 5 well-known images each;
       the large set contains the 24 images of the Kodak database.
     • The Kodak database has been released for unrestricted usage.
     • We might have to split it into a training set and a test set.

  6. Structure (3)
     Possible Extensions
     • Generate our own images tailored towards inpainting-specific challenges:
       texture, cartoon, ...
     • Include high-resolution images: 4K, 1080p, ...
     • Add data sets for special image types: high dynamic range (HDR) images,
       spectral images, medical images, depth maps, ...

  7. Data Optimisation (1)
     • Users can upload masks and corresponding grey or colour values for every test set.
     • Homogeneous diffusion inpainting is performed automatically
       (see the sketch after this slide).
     • Permitted masks contain 3%, 5%, or 10% of the total number of image pixels.
     • We measure the quality of reconstructed images in terms of MSE and SSIM.
     [Figure: original image, mask, stored colour values, and reconstruction]
     • Keep it simple: only binary masks and integer values.
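As a rough illustration of the automatic reconstruction step above, here is a minimal sketch of homogeneous diffusion inpainting, not the benchmark's actual implementation: the stored mask pixels act as Dirichlet data, and the Laplace equation is solved on the remaining pixels by plain Jacobi iteration (a production solver would use something much faster, e.g. multigrid). Function and parameter names are illustrative assumptions.

```python
import numpy as np

def homogeneous_diffusion_inpainting(f, mask, iterations=5000):
    """Reconstruct an image from sparse data by homogeneous diffusion.

    f    : 2D float array; only the values where mask is True are used.
    mask : 2D bool array marking the known (stored) pixels.
    """
    # initialise the unknown pixels with the mean of the known ones
    u = np.where(mask, f, f[mask].mean())
    for _ in range(iterations):
        # Jacobi step: replace every pixel by the average of its four
        # neighbours (image boundaries handled by edge replication) ...
        p = np.pad(u, 1, mode="edge")
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        # ... while keeping the known mask pixels fixed (Dirichlet data)
        u = np.where(mask, f, avg)
    return u
```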

  8. Data Optimisation (2)
     Possible Extensions
     • Provide additional inpainting methods:
       – biharmonic, EED (Weickert 1996), exemplar-based (Facciolo et al. 2009), ...
       – Problem: reconstruction quality may depend on the implementation.
     • Including methods from users is a security risk and therefore not planned.
     Two Showcase Methods
     • Method 1: dithering on the Laplace magnitude (Belhachmi et al. 2009)
       – The Laplace magnitude indicates good mask positions.
       – Realise the dithering through electrostatic halftoning (Schmaltz et al. 2010).
     • Method 2: probabilistic sparsification (Mainberger et al. 2012;
       see the sketch after this slide)
       – Randomly remove points from the full mask.
       – Add back a certain percentage of those with large inpainting error.
       – Repeat until the desired density is reached.
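The probabilistic sparsification loop of Method 2 translates almost directly into code. Below is a hedged sketch under stated assumptions: the removal fraction p, the put-back fraction q, and all names are illustrative choices, not the reference values of Mainberger et al. 2012, and `inpaint` stands for any reconstruction operator such as the diffusion sketch above.

```python
import numpy as np

def probabilistic_sparsification(f, inpaint, density=0.05, p=0.02, q=0.5):
    """Sketch of probabilistic sparsification (after Mainberger et al. 2012).

    f       : 2D float array (original image)
    inpaint : operator (f, mask) -> reconstructed image
    density : target fraction of pixels to keep
    p       : fraction of current mask points tentatively removed per pass
    q       : fraction of those candidates added back (largest error first)
    """
    rng = np.random.default_rng(0)
    mask = np.ones(f.shape, dtype=bool)          # start from the full mask
    target = int(density * f.size)
    while mask.sum() > target:
        idx = np.flatnonzero(mask)
        cand = rng.choice(idx, size=max(1, int(p * idx.size)), replace=False)
        mask.flat[cand] = False                  # tentatively remove candidates
        u = inpaint(f, mask)                     # reconstruct without them
        err = np.abs(u - f).flat[cand]           # error at the removed positions
        k = int(q * cand.size)
        if k > 0:                                # add back the worst offenders
            mask.flat[cand[np.argsort(err)[-k:]]] = True
    return mask
```

Each pass removes a fraction p of the mask and restores the share q of those candidates whose absence hurt the reconstruction most, so the mask shrinks by roughly p(1 − q) per iteration until the target density is reached.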

  9. Data Optimisation (3)
     Results (MSE of the reconstruction, lower is better):

       Mask density | Dithering | Sparsification
       5%           |      60.7 |           55.7
       3%           |     148.4 |          146.3

  10. Optimal Reconstruction and Complete Codecs (1)
      • Layer 2: Optimal Reconstruction
        – mask generation and grey / colour value optimisation
        – optimal inpainting operator
        Task: given a mask density, produce the best reconstruction.
      • Layer 3: Complete Codecs
        – mask generation and grey / colour value optimisation
        – optimal inpainting operator
        – quantisation methods
        – storage and coding of the acquired data
        Task: given a compression rate, produce the best reconstruction.
      • Efficiency can only be judged by the outcome of the complete process.
      • We record the error of the reconstructed image w.r.t. the original.

  11. Optimal Reconstruction and Complete Codecs (2)
      Verification
      • Verification through hidden ground truth is not possible.
      • Should we check for fraud by verifying submitted programs?

          Verification                      | No verification
          + fraud becomes much harder       | – fraud may be undetected
            (but not impossible)            |
          – security risk                   | + no attacks through malicious programs
          – high maintenance                | + low maintenance
          – restrictions on submitted code  | + no restrictions necessary

      • Verifying programs is risky, burdensome, and error-prone.
      • Users can still supply a link to their code.
      • We require a corresponding paper as a reference.

  12. Error Measures (1)
      • Choosing an appropriate error measure is essential for a benchmark.
      • Most benchmarks present several measures or use a combination.
      • The best-known measure is the MSE (or, equivalently, the PSNR;
        see the sketch after this slide).
      • Problem: it often does not reflect human perception.
        Remedy: error measures that take structure / statistics into account.
      • The structural similarity measure (SSIM) has become popular.
      • Is SSIM really superior?
        – Dosselmann and Yang 2011: strong correlation between localised MSE and SSIM.
        – Galić et al. 2012: MSE and SSIM behave similarly in a compression context.
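For concreteness, here is a minimal sketch of the two pixel-wise measures just mentioned, assuming 8-bit images with peak value 255. It also shows why MSE and PSNR are "equivalent": the PSNR is a monotone transform of the MSE, so both induce the same ranking. SSIM involves local means, variances, and covariances and is best taken from an existing implementation such as skimage.metrics.structural_similarity.

```python
import numpy as np

def mse(u, f):
    """Mean squared error between reconstruction u and original f."""
    return np.mean((u.astype(float) - f.astype(float)) ** 2)

def psnr(u, f, max_val=255.0):
    """Peak signal-to-noise ratio in dB: a monotonically decreasing
    function of the MSE, hence equivalent for ranking methods."""
    return 10.0 * np.log10(max_val ** 2 / mse(u, f))
```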

  13. Error Measures (2)
      MSE versus SSIM
      • Hypersphere experiment: apply different image distortions that all result
        in a similar MSE / SSIM.
      • Example: MSE ≈ 200, SSIM ≈ 0.7 (image courtesy: P. Peter).
      [Figure: blur, noise, JPEG compression, and grey-value shift,
       all tuned to the same error values]
      • Here, neither measure represents human perception well
        (see the numerical check after this slide).
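The grey-value shift makes the equal-MSE construction on this slide easy to check by hand: a constant shift s gives MSE = s² exactly, and zero-mean Gaussian noise with standard deviation σ gives an expected MSE of σ² (ignoring clipping to [0, 255]). So s = σ = √200 ≈ 14.1 reproduces MSE ≈ 200 for two visually very different distortions. A minimal numerical check, with a random stand-in image as an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(30, 220, size=(256, 256)).astype(float)  # stand-in image

s = np.sqrt(200.0)
shifted = f + s                            # constant shift: MSE = s^2 exactly
noisy = f + rng.normal(0.0, s, f.shape)    # Gaussian noise: E[MSE] = sigma^2

print(np.mean((shifted - f) ** 2))         # 200.0
print(np.mean((noisy - f) ** 2))           # approx. 200
```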

  14. Error Measures (3)
      Texture and Colour
      • Judging the quality of texture is particularly challenging.
      [Figure: original texture and two reconstructions with
       MSE 13136.9 / SSIM 0.25 and MSE 7479.2 / SSIM 0.48]
      • Possible remedy: a dedicated texture error measure (e.g. Žujović et al. 2012).
      • Error measures for colour images suffer from similar problems.
      • The choice of error measure influences the design of methods.

  15. Conclusions
      • We propose a benchmark with three layers of optimisation:
        – Layer 1: Data Optimisation
        – Layer 2: Optimal Reconstruction
        – Layer 3: Complete Codecs
      • Data optimisation can already be tested on the website.
      • The benchmark website is intended as a service to our research community.
      • The choice of appropriate error measures is open for discussion.

      Thank you!
      http://benchmark.mia.uni-saarland.de
