  1. Using Kinematic Fitting in CLAS EG6: Beam-Spin Asymmetry of Exclusive Nuclear DVMP
     Frank Thanh Cao
     Advisor: K. Joo, Co-Advisor: K. Hafidi
     University of Connecticut, March 2018

  2. Motivation
     After particle identification, we are left with a set of particles, and we want to know whether they belong to the same process of interest. Usually, we rely on forming exclusivity variables from the measured 4-momenta of the positively identified particles. We must confront the fact that 4-vectors coming from detectors are not perfect, and it may be possible to do better. This presentation outlines kinematic fitting as an answer to this, along with some surprising results when applying it to the relatively rare process of DVπ⁰P off ⁴He in CLAS EG6.

  3. Outline
     ◮ Kinematic Fitting in a Nutshell
     ◮ Kinematic Fitting Formalism
     ◮ Constructing Constraints and Covariance Matrix
     ◮ Obtaining Fitted Variables
     ◮ Quality of Fit
     ◮ Kinematic Fitting Applied to EG6
     ◮ 4C-fit on DVCS: Validation
     ◮ 4C-fit on DVπ⁰P: Case for Kinematic Fit
     ◮ 5C-fit on DVπ⁰P: Folding in π⁰ Decay
     ◮ Comparison to Previous Exclusivity Cuts
     ◮ Conclusion

  4. Kinematic Fitting in a Nutshell
     Kinematic fitting takes measured values and allows them to move within their errors, directed by a set of constraints. This is perfectly suited to taking a set of measured 3-momenta and allowing each to move simultaneously, within detector resolutions, to satisfy energy and momentum conservation (a sketch of such constraint functions is given below).
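
     As a concrete illustration, the following is a minimal Python/NumPy sketch of how energy-momentum conservation can be written as a set of constraint functions f_k that a kinematic fit drives toward zero. It is not EG6 analysis code: the DVCS-like reaction e ⁴He → e' ⁴He' γ, the masses, and the beam energy below are placeholder assumptions chosen only to make the example self-contained.

       import numpy as np

       # Placeholder constants (GeV); illustrative only, not the EG6 values.
       M_E, M_HE4 = 0.000511, 3.7284
       E_BEAM = 6.0

       def four_vector(p3, mass):
           """Build (E, px, py, pz) from a measured 3-momentum and a known mass."""
           p3 = np.asarray(p3, dtype=float)
           return np.concatenate(([np.sqrt(p3 @ p3 + mass**2)], p3))

       def constraints(p3_eprime, p3_he4, p3_gamma):
           """The four constraint values f_k: total initial minus total final 4-momentum."""
           initial = four_vector([0.0, 0.0, E_BEAM], M_E) + four_vector([0.0, 0.0, 0.0], M_HE4)
           final = (four_vector(p3_eprime, M_E)
                    + four_vector(p3_he4, M_HE4)
                    + four_vector(p3_gamma, 0.0))
           return initial - final

       # Residuals for a toy event; the fit would move the momenta, within their
       # resolutions, to drive these four numbers toward zero.
       print(constraints([0.5, 0.0, 3.0], [-0.3, 0.0, 0.4], [-0.2, 0.0, 2.6]))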

  5. Formalism
     Let $\vec{\eta}$ be a vector of the n measured variables. Then the true vector of the n variables, $\vec{y}$, will be displaced from $\vec{\eta}$ by the errors $\vec{\varepsilon}$. They are related simply by:
     $\vec{y} = \vec{\eta} + \vec{\varepsilon}$
     If there are, say, m unmeasured variables too, then they can be put in a vector $\vec{x}$. The two vectors $\vec{x}$ and $\vec{y}$ are then related by r constraint equations, indexed by k:
     $f_k(\vec{x}, \vec{y}) = 0$

  6. Suppose $\vec{x}_0$ and $\vec{y}_0$ are our best guesses (measurements) of the vectors $\vec{x}$ and $\vec{y}$, respectively. Then Taylor expanding each $f_k(\vec{x}, \vec{y})$ to first order about $\vec{x}_0$ and $\vec{y}_0$ gives:
     $f_k(\vec{x}, \vec{y}) \approx f_k(\vec{x}_0, \vec{y}_0) + \sum_{i=1}^{m} \left.\frac{\partial f_k}{\partial x_i}\right|_{(\vec{x}_0, \vec{y}_0)} (\vec{x} - \vec{x}_0)_i + \sum_{j=1}^{n} \left.\frac{\partial f_k}{\partial y_j}\right|_{(\vec{x}_0, \vec{y}_0)} (\vec{y} - \vec{y}_0)_j \qquad (1)$
     where i and j denote the i-th and j-th components of the vector differences, respectively.

  7. For convenience, let's introduce
     $A_{ij} := \left.\frac{\partial f_i}{\partial x_j}\right|_{(\vec{x}_0, \vec{y}_0)}, \qquad B_{ij} := \left.\frac{\partial f_i}{\partial y_j}\right|_{(\vec{x}_0, \vec{y}_0)}, \qquad c_i := f_i(\vec{x}_0, \vec{y}_0), \qquad (2)$
     $\vec{\xi} := \vec{x} - \vec{x}_0, \qquad \vec{\delta} := \vec{y} - \vec{y}_0.$
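
     Numerically, the matrices A and B and the vector c of Eqs. (1)-(2) can be built by evaluating the constraint functions and their derivatives at the expansion point. A minimal sketch, assuming a generic constraint function f(x, y) that returns the r constraint values; central finite differences stand in for analytic derivatives:

       import numpy as np

       def linearize(f, x0, y0, step=1e-6):
           """A_ij = df_i/dx_j, B_ij = df_i/dy_j, and c_i = f_i, all evaluated at (x0, y0)."""
           x0, y0 = np.asarray(x0, float), np.asarray(y0, float)
           c = np.asarray(f(x0, y0), float)
           A = np.zeros((c.size, x0.size))
           B = np.zeros((c.size, y0.size))
           for j in range(x0.size):
               dx = np.zeros_like(x0)
               dx[j] = step
               A[:, j] = (f(x0 + dx, y0) - f(x0 - dx, y0)) / (2 * step)
           for j in range(y0.size):
               dy = np.zeros_like(y0)
               dy[j] = step
               B[:, j] = (f(x0, y0 + dy) - f(x0, y0 - dy)) / (2 * step)
           return A, B, c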

  8. Then, since $f_k(\vec{x}, \vec{y}) \equiv 0 \;\; \forall k$, Eq. 1 can be written in matrix form as:
     $\vec{0} \equiv A \vec{\xi} + B \vec{\delta} + \vec{c} \qquad (3)$
     where A and B are the (r × m) and (r × n) matrices with components $A_{ij}$ and $B_{ij}$, respectively, as defined in Eqs. 2.

  9. Kinematic fitting can be done iteratively to get the best* possible values of $\vec{x}$ and $\vec{y}$. Let ν be the index that denotes the ν-th iteration. Then we have
     $\vec{\xi} \to \vec{\xi}^{\,\nu} = \vec{x}^{\,\nu} - \vec{x}^{\,\nu-1}, \qquad \vec{\delta} \to \vec{\delta}^{\,\nu} = \vec{y}^{\,\nu} - \vec{y}^{\,\nu-1},$
     and
     $A \to A^{\nu}, \qquad B \to B^{\nu}, \qquad \vec{c} \to \vec{c}^{\,\nu}.$
     Finally, we introduce the overall difference:
     $\vec{\epsilon}^{\,\nu} := \vec{y}^{\,\nu} - \vec{y}_0 \qquad (4)$
     * We can quantify "best" by introducing and minimizing a χ².

  10. Constructing χ²
     If we have a really good understanding of the correlations between our initial measured values in $\vec{\eta} \equiv \vec{y}_0$, then we can construct a covariance matrix $C_\eta$:
     $C_\eta = \vec{\sigma}_\eta^{\,T} \rho_\eta \, \vec{\sigma}_\eta, \qquad \text{i.e.} \quad (C_\eta)_{ij} = (\sigma_\eta)_i \, (\rho_\eta)_{ij} \, (\sigma_\eta)_j,$
     where $\vec{\sigma}_\eta$ is a vector of the resolution errors of $\vec{\eta}$ and $\rho_\eta$ is a symmetric correlation matrix whose components $\rho_{ij} \in [-1, 1]$ are the pairwise correlation coefficients between $\eta_i$ and $\eta_j$ (so $\rho_{ii} = 1$).
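
     A minimal sketch of this construction, with made-up resolutions and correlation coefficients for three measured variables:

       import numpy as np

       sigma_eta = np.array([0.02, 0.01, 0.03])      # resolution errors (toy values)
       rho_eta = np.array([[1.0,  0.1,  0.0],
                           [0.1,  1.0, -0.2],
                           [0.0, -0.2,  1.0]])       # symmetric, rho_ii = 1

       # (C_eta)_ij = sigma_i * rho_ij * sigma_j
       C_eta = np.diag(sigma_eta) @ rho_eta @ np.diag(sigma_eta)
       print(C_eta)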

  11. Consider the χ², generalized to include correlations between measurements, to be:
     $\chi^2_\nu = (\vec{\epsilon}^{\,\nu})^T C_\eta^{-1} \, \vec{\epsilon}^{\,\nu} \qquad (5)$
     Then, if there are no correlations, $\rho_\eta$ is the unit matrix and so the covariance matrix is just a diagonal matrix of the variances of $\vec{\eta}$. In this case, the χ² becomes the recognizable:
     $\chi^2_\nu = \sum_{i=1}^{n} \frac{(\epsilon^{\nu}_i)^2}{(\sigma_\eta)_i^2} = \sum_{i=1}^{n} \left( \frac{y^{\nu}_i - y^{0}_i}{(\sigma_\eta)_i} \right)^2$
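
     A quick numerical check of Eq. (5) with toy numbers: when $\rho_\eta$ is the identity, the general matrix form reduces to the familiar sum of squared, normalized residuals.

       import numpy as np

       sigma_eta = np.array([0.02, 0.01, 0.03])
       C_eta = np.diag(sigma_eta**2)                      # no correlations: diagonal covariance
       eps = np.array([0.015, -0.004, 0.02])              # eps = y_nu - y_0 (toy values)

       chi2_matrix = eps @ np.linalg.solve(C_eta, eps)    # Eq. (5)
       chi2_diagonal = np.sum((eps / sigma_eta)**2)       # uncorrelated special case
       print(chi2_matrix, chi2_diagonal)                  # identical here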

  12. Now that we have a χ² to minimize, we can introduce a Lagrangian L, with Lagrange multipliers $\vec{\mu}$, such that
     $L = (\vec{\epsilon}^{\,\nu})^T C_\eta^{-1} \, \vec{\epsilon}^{\,\nu} + 2 (\vec{\mu}^{\,\nu})^T \left( A^{\nu} \vec{\xi}^{\,\nu} + B^{\nu} \vec{\delta}^{\,\nu} + \vec{c}^{\,\nu} \right) \qquad (6)$
     is to be minimized. The minimization conditions are then:
     $0 \equiv \frac{1}{2} \frac{\partial L}{\partial \vec{\delta}^{\,\nu}} = C_\eta^{-1} \vec{\epsilon}^{\,\nu} + (B^{\nu})^T \vec{\mu}^{\,\nu} \qquad (7)$
     $0 \equiv \frac{1}{2} \frac{\partial L}{\partial \vec{\mu}^{\,\nu}} = A^{\nu} \vec{\xi}^{\,\nu} + B^{\nu} \vec{\delta}^{\,\nu} + \vec{c}^{\,\nu} \qquad (8)$
     $0 \equiv \frac{1}{2} \frac{\partial L}{\partial \vec{\xi}^{\,\nu}} = (A^{\nu})^T \vec{\mu}^{\,\nu} \qquad (9)$

  13. Solving for the $\vec{\xi}^{\,\nu}$, $\vec{\mu}^{\,\nu}$, $\vec{\delta}^{\,\nu}$ that satisfy these conditions results in:
     $\vec{\xi}^{\,\nu} = - C_x^{\nu} (A^{\nu})^T C_B^{\nu} \, \vec{r}^{\,\nu}$
     $\vec{\mu}^{\,\nu} = C_B^{\nu} \left( A^{\nu} \vec{\xi}^{\,\nu} + \vec{r}^{\,\nu} \right) \qquad (10)$
     $\vec{\delta}^{\,\nu} = - C_\eta (B^{\nu})^T \vec{\mu}^{\,\nu} - \vec{\epsilon}^{\,\nu-1}$
     where, conveniently, we define
     $C_B^{\nu} := \left( B^{\nu} C_\eta (B^{\nu})^T \right)^{-1}, \qquad C_x^{\nu} := \left( (A^{\nu})^T C_B^{\nu} A^{\nu} \right)^{-1}, \qquad \vec{r}^{\,\nu} := \vec{c}^{\,\nu} - B^{\nu} \vec{\epsilon}^{\,\nu-1}$
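
     A minimal NumPy sketch of one iteration of Eq. (10), assuming A, B, c, the covariance matrix C_eta, and the previous residual eps_prev = y^(ν−1) − y_0 are already in hand:

       import numpy as np

       def fit_step(A, B, c, C_eta, eps_prev):
           """One iteration of Eq. (10); returns the increments xi, delta and the multipliers mu."""
           C_B = np.linalg.inv(B @ C_eta @ B.T)
           C_x = np.linalg.inv(A.T @ C_B @ A)
           r = c - B @ eps_prev
           xi = -C_x @ A.T @ C_B @ r
           mu = C_B @ (A @ xi + r)
           delta = -C_eta @ B.T @ mu - eps_prev
           return xi, delta, mu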

  14. With these new incremental vectors that satisfy the minimization conditions, we can finally form our new fitted vectors $\vec{x}^{\,\nu}$ and $\vec{y}^{\,\nu}$:
     $\vec{x}^{\,\nu} = \vec{x}^{\,\nu-1} + \vec{\xi}^{\,\nu} \qquad (11)$
     $\vec{y}^{\,\nu} = \vec{y}^{\,\nu-1} + \vec{\delta}^{\,\nu}$
     with new covariance matrices:
     $C_x = \left( \frac{\partial \vec{x}}{\partial \vec{\eta}} \right) C_\eta \left( \frac{\partial \vec{x}}{\partial \vec{\eta}} \right)^T = \left( A^T C_B A \right)^{-1}$
     $C_y = \left( \frac{\partial \vec{y}}{\partial \vec{\eta}} \right) C_\eta \left( \frac{\partial \vec{y}}{\partial \vec{\eta}} \right)^T = C_\eta - C_\eta B^T C_B B \, C_\eta + C_\eta B^T C_B \, A C_x A^T \, C_B B \, C_\eta$
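
     The corresponding covariance update, sketched under the same assumptions (A, B, and C_eta taken from the final iteration):

       import numpy as np

       def fitted_covariances(A, B, C_eta):
           """Post-fit covariances for the unmeasured (C_x) and measured (C_y) variables."""
           C_B = np.linalg.inv(B @ C_eta @ B.T)
           C_x = np.linalg.inv(A.T @ C_B @ A)
           G = C_eta @ B.T @ C_B                 # appears in both correction terms of C_y
           C_y = C_eta - G @ B @ C_eta + G @ A @ C_x @ A.T @ G.T
           return C_x, C_y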

  15. Quality of Fit
     To check the quality of the fit, we look at two sets of distributions: the confidence levels and the pull distributions.

  16. Confidence Levels
     Since $\chi^2 := \vec{\epsilon}^{\,T} C_\eta^{-1} \, \vec{\epsilon}$ will follow a χ² distribution for N degrees of freedom, let's define the confidence level, CL, as:
     $\mathrm{CL} := \int_{\chi^2}^{\infty} f_N(x) \, dx,$
     where $f_N(x)$ is the χ² distribution for N degrees of freedom. The fit is then referred to as an N C-fit (a numerical sketch follows below).
     Characteristics:
     ◮ If there is no background in the fit, the CL distribution is uniform and flat.
     [Figure: confidence-level distribution, flat between 0 and 1 (1000 entries, mean ≈ 0.50, std. dev. ≈ 0.29).]
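
     In practice this integral is the χ² survival function; a minimal sketch using SciPy:

       from scipy.stats import chi2

       def confidence_level(chisq, n_dof):
           """CL = integral of the chi-squared pdf from the observed chi2 to infinity."""
           return chi2.sf(chisq, df=n_dof)

       print(confidence_level(3.2, 4))   # e.g. a 4C-fit that returned chi2 = 3.2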

  17. Confidence Levels
     Characteristics:
     ◮ In the presence of background, there will be a sharp rise as CL → 0.
     [Figures: confidence-level distributions with background, on a log scale (2000 entries, mean ≈ 0.25, std. dev. ≈ 0.32); with a confidence-level cut at 0.04, 977 events survive with an estimated SNR ≈ 36.9 and a signal fraction ≈ 97.4%.]
     Cutting out the sharp rise as CL → 0 removes much of the background while keeping much of the signal intact.

  18. Pull Distributions
     To see whether the covariance matrix is correctly taking into account all pairwise correlations between the variables, we look at the pull distributions. Let's define $\vec{z}$ to house the pulls $z_i$, defined as
     $z_i := \frac{y_i - \eta_i}{\sqrt{\sigma_{\eta_i}^2 - \sigma_{y_i}^2}}$
     (the fit reduces the uncertainty, so $\sigma_{y_i} < \sigma_{\eta_i}$ and the argument of the square root is positive).
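
     A minimal sketch of the pull computation, assuming arrays of fitted values, measured values, and their respective errors:

       import numpy as np

       def pulls(y_fit, eta_meas, sigma_y, sigma_eta):
           """z_i = (y_i - eta_i) / sqrt(sigma_eta_i^2 - sigma_y_i^2)."""
           y_fit, eta_meas = np.asarray(y_fit, float), np.asarray(eta_meas, float)
           sigma_y, sigma_eta = np.asarray(sigma_y, float), np.asarray(sigma_eta, float)
           return (y_fit - eta_meas) / np.sqrt(sigma_eta**2 - sigma_y**2)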

  19. Pull Distributions
     Characteristics:
     ◮ Since these are normalized differences, the distributions should be normally distributed with mean 0 and width 1.
     [Figures: pull distributions before and after the confidence-level cut; the Gaussian fit is consistent with mean 0 and unit width (sigma ≈ 0.95 ± 0.02).]

  20. Kinematic Fit Applied to EG6: DVCS 4C-fit Validation
