Constrained fits with non-normal distributions

  1. Constrained fits with non-normal distributions
R. Frühwirth 1,2 and O. Cencic 3
1 Institute of High Energy Physics, Austrian Academy of Sciences
2 Institute of Statistics and Mathematical Methods in Economics, Vienna University of Technology
3 Institute of Water Quality, Resources and Waste Management, Vienna University of Technology
Connecting the Dots 2015, Berkeley, 9–11 February 2015

  2–3. Outline
1 Introduction
2 A Simple Example
3 Other Applications
4 Summary and Outlook

  4. Motivation: material flow
- The motivation for this work was data reconciliation in material flow analysis: tracing valuable materials through the cycle of production, consumption, waste deposit, recycling, etc.
- The data are often just expert assessments and thus not normally distributed, but uniform, triangular, trapezoidal, etc.
- The quality of the data is improved by imposing constraints: conservation of mass
[Figure: example flow chart]

  5. Motivation: kinematic fit
- A very similar problem
- Track parameters are not always normally distributed, e.g. for electrons fitted with the Gaussian-sum filter
- Constraints are given by conservation of momentum and energy
[Figure: example process, particles 1 and 2 scattering into particles 3 and 4]

  6–7. Motivation: vertex fit and combination of experiments
Vertex fit:
- Again, track parameters are not always normally distributed
- Constraints are given by the common vertex
- Constraints may contain unobserved variables
Combination of experiments:
- Measurements are often not normally distributed, in particular if the measured parameter is non-negative
- Constraints are given by the fact that the same quantity is measured
- Feasible only if the experiments publish the full marginal likelihood (posterior density)

  8–9. Principle and details
Principal idea:
- Restrict the joint prior density of all observed and unobserved variables to the constraint manifold
- Unobserved variables are included via an uninformative or weakly informative prior
- Renormalize the restricted posterior density to 1
Details: a detailed description of the algorithm can be found in O. Cencic and R. Frühwirth, "A general framework for data reconciliation, Part I: Linear constraints", Computers and Chemical Engineering, in press. Available at: http://tinyurl.com/CencicFruhwirth
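As a concrete instance of "restrict, then renormalize" (matching the 2D illustration on slide 10 below, with constraint x1 = x2 =: x), restricting the joint prior f1(x1) f2(x2) to the line x1 = x2 and renormalizing gives the posterior

```latex
\pi(x) = \frac{f_1(x)\,f_2(x)}{\int f_1(t)\,f_2(t)\,\mathrm{d}t}
```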

  10. Graphical illustration: a problem in 2D, x1 = x2
[Figure: prior density of (x1, x2) and the prior density cut along x1 = x2; marginal densities of x1 (prior μ = 3.75, σ = 2.37) and x2 (prior μ = 4.00, σ = 4.00), each with posterior μ = 3.21, σ = 1.52]

  11. Graphical illustration: a problem in 3D, x3 = x1 + x2
[Figure]

  12. Outline
1 Introduction
2 A Simple Example
3 Other Applications
4 Summary and Outlook

  13. Combination of measurements: observables
- Three measurements X1, X2, X3 of a small cross section x
- Three experimental densities f1(x1), f2(x2), f3(x3), with support restricted to the positive axis
- They are considered as prior densities in this context
- The densities need not be given in closed form; it must only be possible to evaluate them and to draw random numbers from them
- In this simple example (see the sketch below):
  X1 ~ Ex(1.1) (exponential)
  X2 ~ Ga(2, 0.5) (gamma)
  X3 ~ TrNorm(0, 1.2, 0, ∞) (half-normal)
- Independence is assumed for the moment; correlations will be allowed later
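A minimal sketch of these priors in Python with SciPy. The scale parameterizations below (exponential with mean 1.1, gamma with shape 2 and scale 0.5, half-normal with σ = 1.2) are an assumption, chosen because they reproduce the moments quoted on the next slide:

```python
import numpy as np
from scipy import stats

# Prior densities of the three measurements (assumed parameterizations)
f1 = stats.expon(scale=1.1)        # X1 ~ Ex(1.1), exponential with mean 1.1
f2 = stats.gamma(a=2, scale=0.5)   # X2 ~ Ga(2, 0.5), gamma
f3 = stats.halfnorm(scale=1.2)     # X3 ~ TrNorm(0, 1.2, 0, inf), half-normal

# Check against the quoted moments: (1.10, 1.10), (1.00, 0.71), (0.96, 0.72)
for name, f in [("X1", f1), ("X2", f2), ("X3", f3)]:
    print(f"{name}: mean = {f.mean():.2f}, std = {f.std():.2f}")
```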

  14. Combination of measurements: prior densities
[Figure: prior densities of X1 (μ = 1.10, σ = 1.10), X2 (μ = 1.00, σ = 0.71), and X3 (μ = 0.96, σ = 0.72)]

  15. Combination of measurements: constraints
- We want to combine the measurements by imposing X1 = X2 = X3
- If the posteriors are normal densities, the combined measurement is the weighted mean
- If not, we compute the joint density of (X1, X2, X3) under the constraints X1 = X2 and X1 = X3
- The constraint manifold is the line x = λ(1, 1, 1)^T
- There is one free variable, which we choose to be y = x1
- The dependent variables z = (x2, x3)^T are functions of y: z = −Dy − d, or Iz + Dy + d = 0, with (see the sketch below)
  z = (x2, x3)^T,  y = x1,  D = (−1, −1)^T,  d = (0, 0)^T
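In code, this constraint bookkeeping is a small matrix-vector map. A sketch continuing the snippet above; the helper name `dependent` is ours:

```python
# Linear constraints I z + D y + d = 0, i.e. z = -D y - d
D = np.array([[-1.0],
              [-1.0]])          # shape (2, 1): two dependent variables, one free
d = np.array([0.0, 0.0])

def dependent(y):
    """Dependent variables z = (x2, x3) as a function of the free variable y = x1."""
    return -D @ np.atleast_1d(y) - d

print(dependent(2.5))           # [2.5 2.5]: the point lies on the line x = λ(1,1,1)^T
```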

  16. Combination of measurements: derivation of the posterior
- The joint prior density of the measurements is given by f(x) = f1(x1) · f2(x2) · f3(x3) = f_f(y) · f_d(z)
- We compute the posterior density of x conditional on the constraints
- The posterior is the prior restricted to the constraint manifold, renormalized to 1
- It is easy to show that the posterior of y is given by
  π(y) = f_d(−Dy − d) · f_f(y) / ∫ f_d(−Dy − d) · f_f(y) dy
- π(y) is also called the target density
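Continuing the sketch, the unnormalized target density (the numerator above) for this example; the function name is ours:

```python
# Unnormalized target pi(y) ∝ f_d(-D y - d) · f_f(y), with f_f = f1 and
# f_d(z) = f2(z1) · f3(z2) under the independence assumption
def target_unnorm(y):
    z = dependent(y)            # on the manifold: z = (y, y)
    return f2.pdf(z[0]) * f3.pdf(z[1]) * f1.pdf(y)
```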

  17. Combination of measurements: normalization of the posterior
- In this example the integral can be computed by numerical integration
- In more complex cases this may get rather tedious, in particular if the dimension of y is large
- Explicit calculation of the integral can be avoided by drawing a random sample from the posterior π(y) by Markov chain Monte Carlo (MCMC)
- We use the Metropolis-Hastings algorithm for sampling
- We need a proposal density p(y) to generate values of the free variables; there is no need to draw the dependent variables
- The independence sampler is most suitable in this context
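For the one-dimensional y of this example, the normalization is a single quadrature; a sketch continuing the snippets above:

```python
from scipy.integrate import quad

# Normalize by 1D numerical integration (feasible here: y = x1 is scalar)
norm, _ = quad(target_unnorm, 0.0, np.inf)

def posterior(y):
    return target_unnorm(y) / norm
```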

  18. Combination of measurements: generating the Markov chain
1 Set i = 1, choose the sample size L, and draw the starting value y_1 from p(y)
2 Draw a proposal value ŷ from p(y)
3 Compute the acceptance probability α by
  α(y_i, ŷ) = min{1, [π(ŷ) p(y_i)] / [π(y_i) p(ŷ)]}
4 Draw a uniform random number u ∈ [0, 1]
5 If u ≤ α, accept the proposal and set y_{i+1} = ŷ; otherwise set y_{i+1} = y_i
6 Increase i by 1. If i < L, go to 2; otherwise stop sampling
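These six steps as a compact sketch, continuing the snippets above. Note that the unnormalized target suffices, because the normalization constant cancels in the acceptance ratio:

```python
rng = np.random.default_rng(seed=1)

def metropolis_hastings(target, proposal, L):
    """Metropolis-Hastings with an independence sampler: the proposal p(y)
    (a frozen scipy.stats distribution) does not depend on the current state."""
    chain = np.empty(L)
    y = proposal.rvs(random_state=rng)              # step 1: y_1 ~ p(y)
    chain[0] = y
    for i in range(1, L):
        y_hat = proposal.rvs(random_state=rng)      # step 2: proposal value
        alpha = min(1.0, target(y_hat) * proposal.pdf(y)
                         / (target(y) * proposal.pdf(y_hat)))  # step 3
        if rng.uniform() <= alpha:                  # steps 4 and 5
            y = y_hat
        chain[i] = y                                # step 6: loop until i = L
    return chain

sample = metropolis_hastings(target_unnorm, f1, L=100_000)
print(sample.mean(), sample.std())                  # posterior mean and std of x
```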

  19. Combination of measurements: advantages of the independence sampler
- There is a natural proposal density: p(y) = f_f(y)
- There is no need for "burn-in"
- If the observations are independent, the acceptance probability has a very simple form:
  α(y, ŷ) = min{1, f_d(−Dŷ − d) / f_d(−Dy − d)}
- The sampler can be SIMDized by precomputing the proposal values and their pdf values (see the sketch below)
- The sampler can be parallelized by generating several independent Markov chains on different cores and combining them afterwards
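A sketch of the SIMD-friendly variant under the stated assumptions (proposal p(y) = f_f(y) = f1, independent observations), continuing the snippets above: all proposals and their f_d values are precomputed in vectorized calls, and the loop only accepts or rejects:

```python
N = 100_000
proposals = f1.rvs(size=N, random_state=rng)   # precompute all proposals from f_f = f1
fd = f2.pdf(proposals) * f3.pdf(proposals)     # precompute f_d(-D ŷ - d); here z = (ŷ, ŷ)
u = rng.uniform(size=N)

chain = np.empty(N)
y, fd_y = proposals[0], fd[0]
for i in range(N):
    if u[i] <= fd[i] / fd_y:                   # simplified acceptance ratio
        y, fd_y = proposals[i], fd[i]
    chain[i] = y
```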
