
Unsupervised Deconvolution-Segmentation of Textured Image
Bayesian approach: optimal strategy and stochastic sampling
Cornelia Vacar and Jean-François Giovannelli, with Audrey Giremus, Roxana Rosu and Andrei Barbos
Groupe Signal – Image


1. Unsupervised Deconvolution-Segmentation of Textured Image. Bayesian approach: optimal strategy and stochastic sampling. Cornelia Vacar and Jean-François Giovannelli, with Audrey Giremus, Roxana Rosu, Andrei Barbos. Groupe Signal – Image, Laboratoire de l'Intégration du Matériau au Système, Université de Bordeaux – CNRS – BINP. New mathematical methods in computational imaging, Maxwell Institute for Mathematical Sciences, School of Mathematical and Computer Sciences, Heriot-Watt University, 29 June 2017.

2. Inversion: standard question
[Block diagram: x → H → + (noise ε) → y]
Direct model, do the degradations (noise, blur, mixing, ...):
y = H(x) + ε = Hx + ε = h ⋆ x + ε
Inverse problem, undo them (denoising, deblurring, unmixing, ...):
x̂ = F(y)
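A minimal sketch (not part of the slides) of simulating the direct model y = Hx + ε = h ⋆ x + ε, assuming for illustration a circular Gaussian blur H and white Gaussian noise; the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def direct_model(x, blur_sigma=2.0, noise_std=0.05, rng=None):
    """Simulate y = H x + eps: circular Gaussian blur of x plus white Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    hx = gaussian_filter(x, sigma=blur_sigma, mode="wrap")  # H x, i.e. h * x (circular)
    return hx + noise_std * rng.standard_normal(x.shape)    # add the noise term eps

# Illustrative use on a random 64x64 image
x = np.random.default_rng(0).standard_normal((64, 64))
y = direct_model(x)
```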

3. Various fields, modalities, problems, ...
Fields
- Medical: diagnosis, prognosis, theranostics, ...
- Astronomy, geology, hydrology, ...
- Thermodynamics, fluid mechanics, transport phenomena, ...
- Remote sensing, airborne imaging, ...
- Surveillance, security, ...
- Non-destructive evaluation, control, ...
- Computer vision, under bad conditions, ...
- Photography, games, recreational activities, leisure, ...
⇒ Health, knowledge, leisure, ...
⇒ Aerospace, aeronautics, transport, energy, industry, ...

4. Various fields, modalities, problems, ...
Modalities
- Magnetic Resonance Imaging
- Tomography (X-ray, optical wavelength, tera-Hertz, ...)
- Thermography, ...
- Echography, Doppler echography, ...
- Ultrasonic imaging, sound, ...
- Microscopy, atomic force microscopy
- Interferometry (radio, optical, coherent, ...)
- Multi-spectral and hyper-spectral, ...
- Holography
- Polarimetry: optical and other
- Synthetic aperture radars
⇒ Essentially "wave ↔ matter" interaction

5. Various fields, modalities, problems, ...
"Signal – Image" problems
- Denoising
- Deconvolution
- Inverse Radon
- Fourier synthesis
- Resolution enhancement, super-resolution
- Inter / extra-polation, inpainting
- Component unmixing / source separation
And also:
- Segmentation, labels and contours
- Detection of impulses, salient points, ...
- Classification, clustering, ...

6. Inversion: standard question
y = H(x) + ε = Hx + ε = h ⋆ x + ε
[Block diagram: x → H → + (noise ε) → y]
x̂ = F(y)
⇒ Restoration, deconvolution, inter / extra-polation
- Issue: inverse problems
- Difficulty: ill-posed problems, i.e., lack of information
- Methodology: regularisation, i.e., information compensation
- Specificity of the inversion methods: compromise and tuning parameters
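To make the "compromise and tuning parameters" point concrete, here is a classical quadratic (Tikhonov-type) deconvolution sketch, not the Bayesian method developed in this talk: the regularisation weight lam is the tuning parameter trading data fidelity against stability; the kernel h and the value of lam are illustrative assumptions.

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam=1e-2):
    """Quadratic regularisation: minimise ||y - h*x||^2 + lam * ||x||^2.
    With circular convolution, the solution is diagonal in the Fourier domain."""
    H = np.fft.fft2(h, s=y.shape)                 # transfer function of the blur
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # lam > 0 avoids dividing by ~0
    return np.real(np.fft.ifft2(X))
```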

7. Inversion: advanced questions
y = H(x) + ε = Hx + ε = h ⋆ x + ε
[Block diagram: (x, ℓ, γ) → (H, η) → + (noise ε, γ) → y]
(x̂, ℓ̂, γ̂, η̂) = F(y)
Additional estimation problems
- Hyperparameters, tuning parameters: self-tuned, unsupervised
- Instrument parameters (resp. response): myopic (resp. blind)
- Hidden variables: edges, regions / labels, peaks, ...: augmented
- Different models for image, noise, response, ...: model selection

8. Some historical landmarks
Quadratic approaches and linear filtering ∼ 60
- Phillips, Twomey, Tikhonov
- Kalman
- Hunt (and Wiener ∼ 40)
Extension: discrete hidden variables ∼ 80
- Kormylo & Mendel (impulses, peaks, ...)
- Geman & Geman, Blake & Zisserman (lines, contours, edges, ...)
- Besag, Graffigne, Descombes (regions, labels, ...)
Convex penalties (also hidden variables, ...) ∼ 90
- L2-L1, Huber, hyperbolic: Sauer, Blanc-Féraud, Idier, ...
- L1: Alliney-Ruzinsky, Taylor ∼ 79, Yarlagadda ∼ 85, ... and the L1-boom ∼ 2005
Back to more complex approaches ∼ 2000
- Problems: unsupervised, semi-blind / blind, latent / hidden variables
- Models: stochastic and hierarchical models
- Methodology: Bayesian approaches and optimality
- Algorithms: stochastic sampling (MCMC, Metropolis-Hastings, ...)

9. Addressed problem in this talk
(ℓ̂, x̂_k, θ̂_k, γ̂_ε) = F(y)
[Block diagram: (ℓ, x_k, θ_k) → (T, H) → + (noise ε, γ_ε) → y]
Image specificities
- Piecewise homogeneous
- Textured images, oriented textures
- Defined by: label ℓ and texture parameters θ_k for k = 1, ..., K
Observation: triple complication
1. Convolution
2. Missing data, truncation, mask
3. Noise

10. Outline
Image model
- Textured images, orientation
- Piecewise homogeneous images, labels
Observation system model
- Convolution and missing data
- Noise
Hierarchical model
- Conditional dependencies / independencies
- Joint distribution
Estimation / decision strategy and computations
- Cost, risk and optimality ⇒ posterior distribution and estimation
- Convergent computations: stochastic sampler ⊕ empirical estimates
- Gibbs loop, inverse cumulative distribution function, Metropolis-Hastings
First numerical assessment
- Behaviour, convergence, ...
- Labels, texture parameters and hyperparameters
- Quantification of errors

11. Texture model: stationary Gaussian Random Field
- Original image x ∼ N(0, R_x), in C^P
- Parametric covariance R_x = R_x(γ_x, θ)
- Natural parametrization: R_x(γ_x, θ) = γ_x^{-1} P_x^{-1}(θ)
- Parameters: scale γ_x and shape θ
f(x | θ, γ_x) = (2π)^{-P} γ_x^P det P_x(θ) exp[ − γ_x x† P_x(θ) x ]
Whittle (circulant) approximation
- Matrix P_x(θ) ↔ eigenvalues λ_p(θ) ↔ field inverse PSD
f(x | θ, γ_x) = ∏_p (2π)^{-1} γ_x λ_p(θ) exp[ − γ_x λ_p(θ) |x̊_p|² ]
- Separability w.r.t. the Fourier coefficients x̊_p
- Precision parameter of the Fourier coefficient x̊_p: γ_x λ_p(θ)
- Any PSD, e.g., Gaussian, Laplacian, Lorentzian, ..., more complex, ...
- ... and K such models (PSD): x_k for k = 1, ..., K
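A sketch (under assumptions, not taken from the talk) of sampling one textured image x_k under the Whittle approximation: each Fourier coefficient is drawn as a complex Gaussian with precision γ_x λ_p(θ), then mapped back to the pixel domain by inverse FFT; the inverse-PSD function inv_psd and the DFT normalisation convention are illustrative choices.

```python
import numpy as np

def sample_whittle_texture(inv_psd, shape, gamma_x=1.0, rng=None):
    """Draw a stationary Gaussian texture whose Fourier coefficient at normalised
    frequency (nu_x, nu_y) has precision gamma_x * lambda(nu_x, nu_y)."""
    rng = np.random.default_rng() if rng is None else rng
    ny, nx = shape
    nu_y = np.fft.fftfreq(ny)[:, None]             # normalised frequencies
    nu_x = np.fft.fftfreq(nx)[None, :]
    lam = inv_psd(nu_x, nu_y)                      # inverse PSD lambda_p(theta)
    std = 1.0 / np.sqrt(2.0 * gamma_x * lam)       # std of real and imaginary parts
    coeffs = std * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
    # Unitary-DFT normalisation (a convention choice); the model is over complex fields,
    # the real part is kept here simply for display.
    return np.real(np.fft.ifft2(coeffs)) * np.sqrt(nx * ny)
```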

12. Examples: Power Spectral Density and texture
Laplacian PSD
θ = ((ν0x, ν0y), (ωx, ωy)): central frequency and widths
λ^{-1}(νx, νy, θ) = exp[ − ( |νx − ν0x| / ωx + |νy − ν0y| / ωy ) ]
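A possible implementation of this Laplacian inverse PSD, usable as the inv_psd argument of the sampling sketch above; the central frequency and widths are illustrative values, not those used in the talk.

```python
import numpy as np

def laplacian_inv_psd(nu_x, nu_y, nu0=(0.15, 0.15), widths=(0.05, 0.05)):
    """Laplacian PSD: lambda^{-1}(nu) = exp(-(|nu_x - nu0_x|/w_x + |nu_y - nu0_y|/w_y));
    this function returns lambda itself, the inverse PSD used as a precision factor."""
    psd = np.exp(-(np.abs(nu_x - nu0[0]) / widths[0] + np.abs(nu_y - nu0[1]) / widths[1]))
    return 1.0 / psd

# Illustrative use with the sampler sketched above:
# x_k = sample_whittle_texture(laplacian_inv_psd, (128, 128), gamma_x=1.0)
```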

13. Labels: a Markov field
Usual Potts model: favours large homogeneous regions
Piecewise homogeneous image
- P pixels in K classes (K is given)
- Labels ℓ_p for p = 1, ..., P with discrete value in {1, ..., K}
- Count pairs of identical neighbours, "parsimony of the gradient":
ν(ℓ) = Σ_{p∼q} δ(ℓ_p; ℓ_q) = "‖Grad ℓ‖_0"
- ∼: four nearest neighbours relation
- δ: Kronecker function
Probability law (exponential family)
Pr[ℓ | β] = C(β)^{-1} exp[ β ν(ℓ) ]
- β: "correlation" parameter (mean number / size of the regions)
- C(β): normalization constant
- Various extensions: neighbourhood, interaction
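A small sketch of the Potts statistic ν(ℓ) on a rectangular grid with the four-nearest-neighbour relation; the function name and the toy label map are illustrative.

```python
import numpy as np

def potts_stat(labels):
    """nu(l): number of 4-neighbour pixel pairs carrying the same label."""
    same_h = labels[:, :-1] == labels[:, 1:]   # horizontally adjacent pairs
    same_v = labels[:-1, :] == labels[1:, :]   # vertically adjacent pairs
    return int(same_h.sum() + same_v.sum())

# Toy 2-class label map: the unnormalised Potts probability is exp(beta * nu(l))
labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [0, 1, 1]])
print(potts_stat(labels))  # -> 8 identical neighbouring pairs
```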

14. Labels: a Potts field
[Examples of realizations, Ising case (K = 2): β = 0, 0.3, 0.6, 0.7, 0.8, 0.9, 1]
[Examples of realizations for K = 3: β = 0, 0.8, 1.1]

15. Labels: a Potts field
Partition function
Pr[ℓ | β] = C(β)^{-1} exp[ β ν(ℓ) ]
Partition function (normalizing coefficient):
C(β) = Σ_{ℓ ∈ {1, ..., K}^P} exp[ β ν(ℓ) ],   C̄(β) = log[ C(β) ]
- Crucial in order to estimate β
- No closed-form expression (except for K = 2, P = +∞)
- Enormous summation over the K^P configurations
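For a very small grid the summation is still feasible and illustrates why it cannot scale: a brute-force sketch of C(β), reusing the potts_stat function sketched above (illustrative, not from the talk).

```python
import itertools
import numpy as np

def brute_force_partition(beta, K=2, shape=(3, 3)):
    """C(beta) = sum over all K^P label configurations of exp(beta * nu(l)).
    Feasible only for tiny grids: here K^P = 2^9 = 512 configurations."""
    P = shape[0] * shape[1]
    total = 0.0
    for config in itertools.product(range(K), repeat=P):
        labels = np.array(config).reshape(shape)
        total += np.exp(beta * potts_stat(labels))   # potts_stat as sketched above
    return total

print(np.log(brute_force_partition(0.8)))  # log-partition of a 3x3, K = 2 field
```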

16. Partition: an expectation computed as an empirical mean
Distribution and partition:
Pr[ℓ | β] = C(β)^{-1} exp[ β ν(ℓ) ]   with   C(β) = Σ_ℓ exp[ β ν(ℓ) ]
A well-known result [MacKay] for the exponential family:
C′(β) = Σ_ℓ ν(ℓ) exp[ β ν(ℓ) ]
Yields the log-partition derivative:
C̄′(β) = Σ_ℓ ν(ℓ) C(β)^{-1} exp[ β ν(ℓ) ] = E[ ν(ℓ) ]
Approximated by an empirical mean:
C̄′(β) ≃ (1/Q) Σ_q ν(ℓ^(q))
where the ℓ^(q) are realizations of the field (given β)
Only a few weeks of computations ... but once and for all!
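A naive sketch of this empirical-mean approximation: single-site Gibbs sampling of the Potts prior (one plausible way to draw the ℓ^(q); the talk does not specify this particular sampler), then averaging ν(ℓ^(q)) to estimate the log-partition derivative. potts_stat is the function sketched earlier; sizes and iteration counts are illustrative.

```python
import numpy as np

def gibbs_sweep(labels, beta, K, rng):
    """One in-place single-site Gibbs sweep of the Potts prior Pr[l | beta]."""
    ny, nx = labels.shape
    for i in range(ny):
        for j in range(nx):
            counts = np.zeros(K)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighbourhood
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    counts[labels[ii, jj]] += 1
            p = np.exp(beta * counts)        # Pr[l_p = k | rest] proportional to exp(beta * counts[k])
            labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels

def mean_potts_stat(beta, K=2, shape=(32, 32), burn_in=200, Q=200, seed=0):
    """Empirical mean of nu(l) over Q post-burn-in samples: an estimate of the
    log-partition derivative, as on the slide."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(K, size=shape)
    for _ in range(burn_in):
        gibbs_sweep(labels, beta, K, rng)
    return np.mean([potts_stat(gibbs_sweep(labels, beta, K, rng)) for _ in range(Q)])
```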

17. Partition
[Figure: log-partition C̄(β), its first derivative and its second derivative, plotted against the parameter β.]

18. Image formation model
Image x writes:
x = Σ_{k=1}^{K} S_k(ℓ) x_k
- x_k for k = 1, ..., K: textured images (previous models)
- S_k(ℓ) for k = 1, ..., K: binary diagonal indicator of region k
S_k(ℓ) = diag[ s_k(ℓ_1), ..., s_k(ℓ_P) ]
s_k(ℓ_p) = δ(ℓ_p; k) = 1 if the pixel p is in the class k, 0 if not
[Illustration: labels ℓ; textures x_1, x_2, x_3; masked textures S_1(ℓ)x_1, S_2(ℓ)x_2, S_3(ℓ)x_3; composite image x]
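A direct sketch of the composition x = Σ_k S_k(ℓ) x_k: each pixel takes the value of the texture of its class; labels are assumed to take values in {0, ..., K-1} and the texture list is illustrative.

```python
import numpy as np

def compose_image(labels, textures):
    """x = sum_k S_k(l) x_k: at each pixel, keep the texture of that pixel's class."""
    x = np.zeros_like(textures[0])
    for k, x_k in enumerate(textures):
        x = x + (labels == k) * x_k      # S_k(l) x_k: indicator mask of class k times x_k
    return x

# Illustrative use, reusing the earlier sketches (values are hypothetical):
# textures = [sample_whittle_texture(laplacian_inv_psd, labels.shape) for _ in range(K)]
# x = compose_image(labels, textures)
```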

19. Outline
Image model
- Textured images, orientation
- Piecewise homogeneous images
Observation system model
- Convolution and missing data
- Noise
Hierarchical model
- Conditional dependencies / independencies
- Joint distribution
Estimation / decision strategy and computations
- Cost, risk and optimality ⇒ posterior distribution and estimation
- Convergent computations: stochastic sampler ⊕ empirical estimates
- Gibbs loop, inverse cumulative distribution function, Metropolis-Hastings
First numerical assessment
- Behaviour, convergence, ...
- Labels, texture parameters and hyperparameters
- Quantification of errors

20. Observation model
Observation: triple complication
- Convolution: low-pass filter H
- Missing data: truncation matrix T, size M × P
- Noise: ε accounts for measurement and model errors
[Block diagram: (ℓ, x_k, θ_k) → (T, H) → + (noise ε, γ_ε) → y]
y = THx + ε,   i.e.   y_m = [Hx]_m + ε_m for observed pixels, nothing for missing pixels
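A sketch of the full observation y = THx + ε, combining the blur of the earlier direct-model sketch with a boolean mask playing the role of the truncation T; the kernel, noise level and missing-pixel pattern are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def observe(x, mask, blur_sigma=2.0, noise_std=0.05, rng=None):
    """y = T H x + eps: blur x, keep only the M pixels where mask is True
    (the truncation T), and add white Gaussian noise to those pixels."""
    rng = np.random.default_rng() if rng is None else rng
    hx = gaussian_filter(x, sigma=blur_sigma, mode="wrap")    # H x (assumed Gaussian blur)
    return hx[mask] + noise_std * rng.standard_normal(int(mask.sum()))

# Illustrative use: observe the composite image x with about 20% of pixels missing
# mask = np.random.default_rng(1).random(x.shape) > 0.2
# y = observe(x, mask)   # length-M vector of observed pixels only
```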
