

  1. Introduction to Unfolding in High Energy Physics. Mikael Kuusela, Institute of Mathematics, EPFL. Advanced Scientific Computing Workshop, ETH Zurich, July 15, 2014.

  2. Outline:
        1. Introduction
        2. Basic unfolding methodology
           - Maximum likelihood estimation
           - Regularized frequentist techniques
           - Bayesian unfolding
        3. Challenges in unfolding
           - Choice of the regularization strength
           - Uncertainty quantification
           - MC dependence in the smearing matrix
        4. Unfolding with RooUnfold
        5. Conclusions

  3. Outline (repeated as a section divider).

  4. The unfolding problem. Unfolding refers to the problem of estimating the particle-level distribution of some physical quantity of interest on the basis of observations smeared by an imperfect measurement device. In other words: what would the distribution look like when measured with a device having perfect experimental resolution? Cf. deconvolution in optics and image reconstruction in medical imaging. [Figure: the true density and the smeared density, related by folding (true → smeared) and unfolding (smeared → true).]

  5. Why unfold? Unfolding is usually done to achieve one or more of the following goals:
        1. Comparison of the measurement with future theories
        2. Comparison of experiments with different responses
        3. Input to a subsequent analysis
        4. Exploratory data analysis
     Unfolding is most often used in measurement analyses (as opposed to discovery analyses): QCD, electroweak, top, forward physics, ...

  6. Examples of unfolding in LHC data analysis. [Figure: four examples of unfolded CMS measurements at √s = 7 TeV: a hadronic event shape distribution (compared with Pythia6, Pythia8, Herwig++, MadGraph+Pythia6 and Alpgen+Pythia6), the inclusive jet cross section d²σ/dp_T d|y| in bins of |y| (anti-k_T jets), the charged particle multiplicity, and the W boson cross section.]

  7. Problem formulation. Notation:
        λ ∈ R^p_+   bin means of the true histogram
        x ∈ N^p_0   bin counts of the true histogram
        µ ∈ R^n_+   bin means of the smeared histogram
        y ∈ N^n_0   bin counts of the smeared histogram
     Assume that:
        1. The true counts are independent and Poisson distributed: x | λ ∼ Poisson(λ), with the components x_i | λ mutually independent.
        2. The propagation of events to neighboring bins is multinomial conditional on x_i and independent for each true bin.
     It follows that the smeared counts are also independent and Poisson distributed: y | λ ∼ Poisson(Kλ), with the components y_i | λ mutually independent.

  8. Problem formulation. The elements of the smearing matrix K ∈ R^{n×p} are given by

        K_ij = P(smeared event in bin i | true event in bin j)

     and are assumed to be known. The unfolding problem: given the smeared observations y and the Poisson regression model y | λ ∼ Poisson(Kλ), what can be said about the means λ of the true histogram? The difficulty is that K is typically an ill-conditioned matrix. (A toy construction of this setup is sketched below.)
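
To make the setup concrete, here is a minimal sketch in Python/NumPy (not from the slides): it builds a smearing matrix K from an assumed Gaussian detector resolution and draws one set of smeared counts y ∼ Poisson(Kλ). The binning, the resolution σ = 0.5 and the two-peak true spectrum are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

p = n = 40                                  # number of true / smeared bins
edges = np.linspace(-6, 6, p + 1)           # common binning on [-6, 6]
centers = 0.5 * (edges[:-1] + edges[1:])
sigma = 0.5                                 # assumed Gaussian resolution

# K_ij = P(smeared event in bin i | true event in bin j), approximated by
# integrating a Gaussian resolution kernel centered at the true bin center.
# Note that every K_ij > 0, as required by the theorem on slide 16.
K = np.empty((n, p))
for j in range(p):
    K[:, j] = (stats.norm.cdf(edges[1:], loc=centers[j], scale=sigma)
               - stats.norm.cdf(edges[:-1], loc=centers[j], scale=sigma))

# Toy true bin means lam (two Gaussian peaks) and one smeared observation y.
lam = 1000.0 * (stats.norm.pdf(centers, -2, 1) + stats.norm.pdf(centers, 2, 1))
mu = K @ lam                                # smeared bin means, mu = K lam
y = rng.poisson(mu)                         # observed smeared counts
```

In this toy model the columns of K sum to slightly less than one because smearing can move events outside the histogram range, which plays the role of a detection efficiency.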

  9. Unfolding is an ill-posed inverse problem. The unfolding problem is typically ill-posed in the sense that the (pseudo)inverse of K is very sensitive to small perturbations in the data. From y | λ ∼ Poisson(Kλ) we have µ = Kλ, so we could naïvely estimate λ̂ = K^† µ̂ = K^† y. But this can lead to catastrophic results!
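
Continuing the toy sketch above, the naive pseudoinverse estimate is one line of NumPy; printing its range shows the noise amplification that the next two slides demonstrate.

```python
# Naive unfolded estimate lam_hat = K^+ y, continuing the sketch above.
lam_naive = np.linalg.pinv(K) @ y

print(f"condition number of K: {np.linalg.cond(K):.2e}")  # huge for ill-posed K
print(f"lam_naive range: [{lam_naive.min():.2e}, {lam_naive.max():.2e}]")
# The estimate oscillates between large negative and positive values and
# bears little resemblance to the smooth true spectrum lam.
```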

  10. Demonstration of the ill-posedness. [Figure: the smeared histogram and the true histogram, binned on (−6, 6) with bin counts up to about 500.]

  11. Demonstration of the ill-posedness. [Figure: the pseudoinverse estimate compared with the true histogram; the pseudoinverse oscillates on a scale of about ±2 × 10^13, so the true density is invisible next to it.]

  12. Outline (repeated as a section divider).

  13. Outline (repeated as a section divider).

  14. The likelihood function. The likelihood function in unfolding is

        L(λ) = p(y | λ) = ∏_{i=1}^{n} p(y_i | λ) = ∏_{i=1}^{n} [ (∑_{j=1}^{p} K_ij λ_j)^{y_i} / y_i! ] e^{−∑_{j=1}^{p} K_ij λ_j},   λ ∈ R^p_+.

     This function uses our Poisson regression model to link the observations y with the unknown λ, and it plays a key role in all sensible unfolding methods. In most statistical problems, the maximum of the likelihood, or equivalently the maximum of the log-likelihood, provides a good estimate of the unknown. In ill-posed problems this is usually not the case, but the maximum likelihood solution still provides a good starting point.
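
Continuing the toy sketch, the log-likelihood is a direct transcription of the formula above (the constant −∑_i log y_i! is dropped since it does not depend on λ):

```python
# Poisson-regression log-likelihood of the unfolding model, up to a
# lambda-independent constant. Assumes lam > 0 so that mu = K @ lam > 0.
def log_likelihood(lam, K, y):
    mu = K @ lam                        # mu_i = sum_j K_ij lam_j
    return np.sum(y * np.log(mu) - mu)  # sum_i [y_i log mu_i - mu_i] + const
```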

  15. Maximum likelihood estimation. Any histogram that maximizes the log-likelihood of the unfolding problem is called a maximum likelihood estimator λ̂_MLE of λ. Hence we want to solve

        max_{λ ∈ R^p_+} log p(y | λ) = max_{λ ∈ R^p_+} ∑_{i=1}^{n} [ y_i log( ∑_{j=1}^{p} K_ij λ_j ) − ∑_{j=1}^{p} K_ij λ_j ] + const.
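
One generic way to attack this problem (an alternative to the EM iteration of slide 18, and again only a sketch under the toy setup above) is direct numerical maximization with a bound-constrained optimizer:

```python
# Direct numerical maximization of the log-likelihood over lam >= 0,
# using the toy K, y and the log_likelihood function defined above.
from scipy.optimize import minimize

p = K.shape[1]
res = minimize(
    lambda lam: -log_likelihood(lam, K, y),  # minimize the negative log-lik.
    x0=np.full(p, y.sum() / p),              # flat, positive starting point
    bounds=[(1e-10, None)] * p,              # keep lam (essentially) positive
    method="L-BFGS-B",
)
lam_mle = res.x
```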

  16. Maximum likelihood estimation. Theorem (Vardi et al. (1985)): Assume K_ij > 0 and y ≠ 0. Then the following hold for the log-likelihood log p(y | λ) of the unfolding problem:
        1. The log-likelihood has a maximum.
        2. The log-likelihood is concave and hence all maxima are global maxima.
        3. The maximum is unique if and only if the columns of K are linearly independent.
     So a unique MLE exists when the columns of K are linearly independent, but how do we find it?
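
For the toy K built earlier, the uniqueness condition can be checked numerically (only up to floating-point tolerance, which is a caveat of this sketch rather than of the theorem):

```python
# The MLE is unique iff the columns of K are linearly independent,
# i.e. rank(K) = p. matrix_rank uses an SVD cutoff, so a severely
# ill-conditioned K may be reported as rank deficient numerically.
print(np.linalg.matrix_rank(K) == K.shape[1])
```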

  17. Maximum likelihood estimation. Proposition: Let K be an invertible square matrix and assume that λ̂ = K^{−1} y ≥ 0. Then λ̂ is the MLE of λ. That is, matrix inversion gives us the MLE if K is invertible and the resulting estimate is non-negative. Note that this result is more restrictive than it may seem:
        - K is often non-square.
        - Even if K is square, it is often not invertible.
        - And even if K is invertible, K^{−1} y often contains negative values.
     Is there a general recipe for finding the MLE?

  18. Maximum likelihood estimation. The MLE can always be found computationally by using the expectation-maximization (EM) algorithm (Dempster et al. (1977)). This is a widely used iterative algorithm for finding maximum likelihood solutions in problems that can be seen as containing incomplete observations. Starting from some initial value λ^{(0)} > 0, the EM iteration for unfolding is given by

        λ_j^{(k+1)} = ( λ_j^{(k)} / ∑_{i=1}^{n} K_ij ) ∑_{i=1}^{n} K_ij y_i / ( ∑_{l=1}^{p} K_il λ_l^{(k)} ),   j = 1, ..., p.

     The convergence of this iteration to an MLE (i.e. λ^{(k)} → λ̂_MLE as k → ∞) was proved by Vardi et al. (1985). A minimal implementation is sketched below.
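
A minimal NumPy implementation of the iteration, continuing the toy sketch (the eps guard against empty smeared bins is an implementation detail, not part of the slide's formula):

```python
# D'Agostini / EM iteration for the Poisson unfolding MLE.
def em_unfold(K, y, n_iter=1000, eps=1e-12):
    n, p = K.shape
    lam = np.full(p, y.sum() / p)    # flat, strictly positive start
    eff = K.sum(axis=0)              # sum_i K_ij for each true bin j
    for _ in range(n_iter):
        mu = K @ lam                 # current smeared means, mu = K lam
        lam = lam / eff * (K.T @ (y / (mu + eps)))
    return lam

lam_em = em_unfold(K, y)
```

Run to convergence, the iteration reproduces the noisy unregularized MLE; in practice it is stopped after a moderate number of iterations, and the choice of that stopping point is one form of the regularization-strength question discussed later in the talk.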

  19. Maximum likelihood estimation. The EM iteration for finding the MLE in Poisson regression problems has been rediscovered many times in different fields:
        - Optics: Richardson (1972)
        - Astronomy: Lucy (1974)
        - Tomography: Shepp and Vardi (1982); Lange and Carson (1984); Vardi et al. (1985)
        - HEP: Kondor (1983); Mülthei and Schorr (1987); Mülthei et al. (1987, 1989); D'Agostini (1995)
     In modern use, the algorithm is most often called the D'Agostini iteration in HEP and Lucy–Richardson deconvolution in astronomy and optics. In HEP the name "Bayesian unfolding" is also used, but this is an unfortunate misnomer: the D'Agostini iteration is a fully frequentist technique for finding the MLE. There is nothing Bayesian about it!
