
Covariance Matrix Simplification For Efficient Uncertainty Management

1. MaxEnt 2007. Covariance Matrix Simplification For Efficient Uncertainty Management. André Jalobeanu, Jorge A. Gutiérrez, PASEO Research Group, LSIIT (CNRS / Univ. Strasbourg), Illkirch, France. Part of the SpaceFusion project, ANR “Jeunes Chercheurs” 2005-2008.

2. Outline
- Uncertainty and error propagation
  - Why do we need to simplify the covariances?
  - Computing / storing / using uncertainties
- Inverse covariance simplification
  - Information theory-based methods
  - The proposed algorithm
  - Some results
- Conclusions

3. Uncertainties and error propagation
- Error propagation from the source to the end result
- Computing uncertainties
- Storing uncertainties
- Using uncertainties

4. Error modeling: from source to result
[Diagram: data model → processing algorithm; input pdf → transformed output pdf]
- Input noise: a stochastic process (the observation is a realization of a random variable)
  - Several additive processes, zero-mean
  - Stochastic independence between detectors (white noise)
  - Stationary process
- Processing algorithm: in general, a deterministic transform
- Output noise: a stochastic process (the result is a realization of a random variable)
  - Same assumptions: additive and zero-mean, stochastic independence, stationarity
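
A minimal sketch of how such a deterministic processing step propagates the input covariance, assuming a linear transform (the matrix `A` and the noise level below are made up for illustration; the first-order rule Σ_out = A Σ_in Aᵀ is standard, not quoted from the slides):

```python
import numpy as np

# Input: zero-mean white noise on 4 detectors (independent, stationary),
# so the input covariance is a scaled identity.
sigma2 = 0.5
Sigma_in = sigma2 * np.eye(4)

# A deterministic linear processing step y = A x (a toy smoothing filter).
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

# First-order error propagation: Sigma_out = A Sigma_in A^T
Sigma_out = A @ Sigma_in @ A.T
print(Sigma_out)  # no longer diagonal: processing correlates the output noise
```

This is why the output covariance develops off-diagonal terms even when the input noise is white.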

5. Example: 2D super-resolution in astronomy
[Figure: the pointing pattern of the four frames (0-3) in model space, and the image fusion result (mean)]
- Experimental setting: 4 noisy images, undersampled (factor 2), shifted (1/2 pixel) [ADA 2006]
- Inverse covariance of the result: diagonal terms, and near-diagonal (covariance) terms

6. Uncertain knowledge of model variables
- Posterior pdf: $P(X \mid Y) \propto \exp(-U(X))$
- Gaussian approximation of the posterior pdf: the inverse covariance matrix is the matrix of second derivatives of $U$ at the optimum (sparse, but not enough!)
- Diagonal terms: equal to the inverse variance only if all other terms are zero (1 parameter per variable)
- Near-diagonal terms: nearest neighbors (left/right and up/down in 2D), and longer range (diagonal directions in 2D)
- Long-range terms (n per pixel): should be zero (more convenient, and realistic)
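
A small illustration of the warning about diagonal terms, using a made-up first-order Markov chain precision matrix (diagonal plus nearest-neighbor terms only; the numbers are arbitrary):

```python
import numpy as np

# Precision (inverse covariance) matrix of a 6-variable chain:
# diagonal + nearest-neighbour interactions only.
n = 6
Lam = np.diag(np.full(n, 2.0)) \
    + np.diag(np.full(n - 1, -0.8), 1) \
    + np.diag(np.full(n - 1, -0.8), -1)

Sigma = np.linalg.inv(Lam)

# The diagonal of Lam gives the inverse variance only when every other
# term is zero; with neighbours present the marginal variances differ:
print(1.0 / np.diag(Lam))  # naive reading of the diagonal
print(np.diag(Sigma))      # actual marginal variances
```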

7. Approximating uncertainties
- The true inverse covariance matrix is sparse, but not enough... so approximate it.
- Inverse covariance approximation. Goal: provide a 1st-order Markovian model
  - Drop the long-range interactions for simplicity
  - Minimize a distance between Gaussian distributions, e.g. $\inf_{\gamma} D_{KL}\big(\mathcal{G}(0, \Sigma_X),\, \mathcal{G}(0, \tilde{\Sigma}_X)\big)$
  - Preserve the variance and the nearest-neighbor covariance
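
Why not simply zero out the long-range entries? A quick demo (with precision values made up for illustration) shows that naive truncation preserves neither the variances nor the nearest-neighbor covariances, which is what the distance-minimization goal is meant to fix:

```python
import numpy as np

# Illustrative precision matrix with nearest-neighbour and longer-range terms.
n = 6
i, j = np.indices((n, n))
Lam = 4.0 * np.eye(n) - 1.2 * (np.abs(i - j) == 1) - 0.4 * (np.abs(i - j) == 2)

# Naive simplification: zero out everything beyond the nearest neighbours.
Lam_tri = np.where(np.abs(i - j) <= 1, Lam, 0.0)

# The implied covariances change: the variances are not preserved.
print(np.diag(np.linalg.inv(Lam)))      # true marginal variances
print(np.diag(np.linalg.inv(Lam_tri)))  # variances after naive truncation
```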

8. Storing uncertainties: 2D images, 1 band
[Figure: the types of interaction at the optimum: self, horizontal, vertical, and the two diagonal directions]
- For N×N pixels, the uncertainties take N×N × (1 + 2 [+ 2]) parameters
- Limited redundancy: 3 or 5 values per pixel

9. Multispectral uncertainties
- Add interactions between bands
- For M bands of N×N pixels, the uncertainties at the optimum take N×N × (M + 2M [+ 2M] + M−1) parameters
- Limited redundancy: at most 4 or 6 values per pixel and band
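
To make the counts on the last two slides concrete, a few lines of arithmetic (the image size and band count are example values, not from the talk):

```python
# Storage cost of the simplified uncertainties vs. a full covariance matrix.
N, M = 1024, 4  # image side and number of bands (example values)

mono = N * N * (1 + 2 + 2)                     # self + h/v + two diagonals
multi = N * N * (M + 2 * M + 2 * M + (M - 1))  # with inter-band terms
full = (N * N * M) ** 2                        # dense covariance matrix
print(mono, multi, full)  # the simplified forms scale linearly in N*N
```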

10. Using uncertainties
- Example: recursive data fusion (probabilistic fusion of Result #1 and Result #2 vs. a plain average)
- Bayesian updating and uncertainty propagation
  - Use a simplified posterior pdf (the approximate inverse covariance matrix) as a prior density for subsequent data processing
  - Recursive (vs. batch) data fusion allows for model updates: $\Phi(X)^{(k+1)} = X^T\, \tilde{\Sigma}^{-1}_{(k)}\, X$
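
A minimal sketch of such recursive updating in precision (inverse covariance) form, assuming independent Gaussian estimates; this is the standard information-filter update, not code from the talk:

```python
import numpy as np

def fuse(mu1, Lam1, mu2, Lam2):
    """Fuse two independent Gaussian estimates given in precision form:
    the precisions add, and the fused mean solves a linear system."""
    Lam = Lam1 + Lam2
    mu = np.linalg.solve(Lam, Lam1 @ mu1 + Lam2 @ mu2)
    return mu, Lam

# Two toy results with their (simplified) precision matrices.
n = 4
mu1, Lam1 = np.ones(n), 2.0 * np.eye(n)   # more certain input
mu2, Lam2 = np.zeros(n), np.eye(n)        # less certain input
mu, Lam = fuse(mu1, Lam1, mu2, Lam2)
print(mu)  # weighted towards the more certain input, not a plain average
```

The fused precision can then be simplified again before the next update, which is what keeps the recursion tractable.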

11. Analyzing processed images when uncertainties are provided
- Extended data term:
$$D(X) = \sum_p \Big[ \tfrac{1}{2}\, d_p (X - \hat{X})_p^2 + c_h (X - \hat{X})_p (X - \hat{X})_{r(p)} + c_v (X - \hat{X})_p (X - \hat{X})_{u(p)} \Big]$$
the usual quadratic term plus extra off-diagonal terms (the horizontal and vertical inverse covariances; $r(p)$ and $u(p)$ denote the horizontal and vertical neighbors of pixel $p$)
- Bayesian approach: data analysis from processed images
  - Goal: compute the posterior pdf of the parameters of interest
  - Use the extended data term: an approximate posterior pdf
  - Update existing statistical methods (Bayesian or not) to use this extended term; no other changes are required!
  - If possible, provide uncertainties on the analysis result as well
  - Example: prediction X = F(parameters), F = star profile...
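
A direct transcription of that data term into code (the array names and the placement of the ½ follow the reconstructed formula above, which is an interpretation of the slide):

```python
import numpy as np

def extended_data_term(X, X_hat, d, c_h, c_v):
    """Extended data term: per-pixel quadratic term with weights d (the
    diagonal inverse covariances), plus horizontal (c_h) and vertical
    (c_v) off-diagonal terms between neighbouring pixels."""
    r = X - X_hat
    D = 0.5 * np.sum(d * r**2)
    D += np.sum(c_h[:, :-1] * r[:, :-1] * r[:, 1:])  # p with its horizontal neighbour r(p)
    D += np.sum(c_v[:-1, :] * r[:-1, :] * r[1:, :])  # p with its vertical neighbour u(p)
    return D
```

Dropping the last two sums recovers the usual diagonal-only data term, which is why existing methods need no other changes.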

12. Inverse covariance simplification
- State of the art
- The proposed algorithm
- Some results
- Approximations for large matrices

13. Problem statement for Markov chains
[Figure: the given model (the computed posterior pdf), shown as a graphical model with its inverse covariance matrix $\Sigma^{-1}$, next to the simplified model, shown as a sparser graphical model with its inverse covariance matrix $\tilde{\Sigma}^{-1}$]

14. Information-theory based approaches
- Simply setting entries to zero is a bad idea: $\tilde{\Sigma}$ must stay positive definite
- Minimize a distance between the distributions with covariances $\Sigma$ and $\tilde{\Sigma}$:
  - Kullback-Leibler divergence (relative entropy)
  - Symmetric Kullback-Leibler
  - Bhattacharyya distance...
- Relative entropy maximization: maximum entropy subject to the constraints $\tilde{\Sigma}^{-1}_{uv} = 0$ and $\tilde{\Sigma}$ positive definite, with
$$D_{KL}(P_{\tilde{\Sigma}} \,\|\, P_{\Sigma}) \propto \log \frac{\det \Sigma}{\det \tilde{\Sigma}} + \operatorname{tr}\big(\Sigma^{-1} \tilde{\Sigma}\big)$$
- A nonlinear problem whose constraints are difficult to enforce. Any ideas?
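
For reference, the full divergence between two zero-mean Gaussians (the slide drops the additive constant, hence the ∝); a small helper written from the standard closed form:

```python
import numpy as np

def kl_zero_mean(Sigma_t, Sigma):
    """D_KL( N(0, Sigma_t) || N(0, Sigma) ) between zero-mean Gaussians."""
    n = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    _, logdet_t = np.linalg.slogdet(Sigma_t)
    trace_term = np.trace(np.linalg.solve(Sigma, Sigma_t))
    return 0.5 * (logdet - logdet_t + trace_term - n)

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
print(kl_zero_mean(Sigma, Sigma))  # 0: identical distributions
```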

15. Proposed approach
- Work with the defining identity $\tilde{\Sigma}^{-1} C = I$, where $C$ stands for the covariance
- Enforce the constraints: $\tilde{\Sigma}^{-1}_{uv} = 0$ if $(u, v) \in \Omega$, and $C_{kl} = \Sigma_{kl}$ if $(k, l) \in \bar{\Omega}$
- Minimize the norm of the residual $E = \| \tilde{\Sigma}^{-1} C - I \|^2$ to find the unknown inverse covariance entries $X$ and the unknown covariances $Z$ (even if the latter are not needed)
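
In code, the constraint structure and the residual look like this (a sketch: Ω is taken here to be all pairs beyond the nearest neighbors of a chain, which matches the 1st-order Markov target but is otherwise an assumption):

```python
import numpy as np

n = 6
i, j = np.indices((n, n))
Omega = np.abs(i - j) > 1   # inverse covariance entries forced to zero
Omega_bar = ~Omega          # covariance entries pinned to the given Sigma

def residual_norm(Lam_t, C):
    """E = ||Lam_t @ C - I||^2: zero exactly when Lam_t is the inverse of C."""
    return np.sum((Lam_t @ C - np.eye(C.shape[0]))**2)
```

The free variables are then X (the entries of $\tilde{\Sigma}^{-1}$ on the kept support $\bar{\Omega}$) and Z (the entries of $C$ on $\Omega$).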

16. Alternate optimization scheme
- $E = \| \tilde{\Sigma}^{-1} C - I \|^2$ is not a quadratic form in $(X, Z)$ jointly
- $Z$ fixed: minimize $E$ with respect to $X$, a quadratic form; use a conjugate gradient descent: $E_Z(X) = \tfrac{1}{2} X^t A_Z X + B_Z^t X + \mathrm{const}$
- $X$ fixed: minimize $E$ with respect to $Z$, a quadratic form; use a conjugate gradient descent: $E_X(Z) = \tfrac{1}{2} Z^t A_X Z + B_X^t Z + \mathrm{const}$
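
A compact sketch of the alternation, using scipy's general-purpose CG minimizer on each (quadratic) subproblem rather than forming $A_Z$, $B_Z$, $A_X$, $B_X$ explicitly; the band-shaped support, the initialization, and the absence of an explicit symmetry constraint are simplifications for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def alternate_simplify(Sigma, band=1, sweeps=10):
    """Alternately minimize E = ||Lam_t @ C - I||^2 over the in-band
    entries of Lam_t (the X step) and the out-of-band entries of C
    (the Z step). Each subproblem is quadratic, so CG applies."""
    n = Sigma.shape[0]
    i, j = np.indices((n, n))
    in_band = np.abs(i - j) <= band
    Lam_t = np.where(in_band, np.linalg.inv(Sigma), 0.0)  # initial guess
    C = Sigma.copy()
    I = np.eye(n)

    for _ in range(sweeps):
        def e_x(x):                      # Z fixed: quadratic in X
            L = Lam_t.copy(); L[in_band] = x
            return np.sum((L @ C - I)**2)
        Lam_t[in_band] = minimize(e_x, Lam_t[in_band], method='CG').x

        def e_z(z):                      # X fixed: quadratic in Z
            Cz = C.copy(); Cz[~in_band] = z
            return np.sum((Lam_t @ Cz - I)**2)
        C[~in_band] = minimize(e_z, C[~in_band], method='CG').x
    return Lam_t, C
```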

17. Some details...
$$A_Z = \sum_{ij} (a_Z)_{ij} (a_Z)_{ij}^t \qquad B_Z = -\sum_i (a_Z)_{ii}$$
$$A_X = \sum_{ij} (a_X)_{ij} (a_X)_{ij}^t \qquad B_X = -\sum_i (a_X)_{ii}$$

18. Test 1 (simulation, 6×6 matrix)
[Figure: the input $\Sigma^{-1}$, the simplified $\tilde{\Sigma}^{-1}$ and $C$, with convergence monitoring]

19. Test 2 (simulation, 6×6 matrix)
[Figure: $C$ and the simplified $\tilde{\Sigma}^{-1}$]

20. How to simplify large matrices? A block sweeping/averaging technique
[Figure: results of the original method vs. the block sweeping and averaging variant]
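
The slide gives no implementation detail, so the following is only a guess at the idea: run the small-matrix simplification on overlapping diagonal blocks and average the resulting inverse covariance entries where the blocks overlap:

```python
import numpy as np

def block_sweep(Sigma, simplify, block=64, step=32):
    """Sweep overlapping diagonal blocks of a large covariance matrix,
    simplify each block with the small-matrix routine `simplify`
    (e.g. alternate_simplify above), and average the overlaps."""
    n = Sigma.shape[0]
    acc = np.zeros((n, n))
    cnt = np.zeros((n, n))
    for s in range(0, max(n - block, 0) + 1, step):
        sl = slice(s, min(s + block, n))
        Lam_b, _ = simplify(Sigma[sl, sl])
        acc[sl, sl] += Lam_b
        cnt[sl, sl] += 1.0
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```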

21. Block size and 2D Markov random fields
- What is the minimum block size?
[Figure: the input inverse covariance next to the simplified one (block sweeping)]
- Example: simplification of an 8-neighbor MRF to obtain a 4-neighbor MRF

22. Conclusions
- Accomplishments:
  - A new algorithm to simplify inverse covariance matrices using covariance and support constraints
  - A fast alternate optimization scheme
- What's next?
  - Extension to general 2D Markov random fields
  - Improving the block size determination technique
  - Application to image processing (e.g. data fusion) in remote sensing and astronomy
