Longitudinal Vector Boson Scattering with Deep Machine Learning




  1. Longitudinal Vector Boson Scattering with Deep Machine Learning. Jake Searcy, Lillian Huang, Marc-Andre Pleier, Junjie Zhu

  2. VBS and the Higgs: Without a Higgs, the matrix element for longitudinal VBS (VLVL) grows with energy until it becomes strongly coupled. A Higgs fixes this. The questions: Does our Higgs fix this? How can we find out?
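
As background to this slide (a standard result, not taken from the deck): in the Higgs-less case the gauge-only amplitude for purely longitudinal scattering grows with the centre-of-mass energy squared, up to channel-dependent numerical factors,

```latex
\mathcal{A}\!\left(W_L W_L \to W_L W_L\right)\Big|_{\text{no Higgs}} \;\sim\; \frac{s}{v^{2}},
\qquad v \simeq 246~\text{GeV},
```

so the amplitude becomes strongly coupled around the TeV scale unless Higgs-exchange diagrams cancel the growth, which they do exactly for a Standard Model Higgs.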

  3. Extra useful info: Longitudinal scattering (http://www.sciencedirect.com/science/article/pii/0550321385900380). Longitudinal bosons grow with M(WW) if there is no Higgs. [Plot: fraction of longitudinal W bosons vs. M(WW), comparing a 126 GeV Higgs with the Higgs-less case.]

  4. Event Signature: Look in same-sign W±W±: two same-sign leptons and two jets with high M(j,j) and high Δη(j,j). Phys. Rev. Lett. 113, 141803.

  5. Same-Sign W±W±, Di-leptonic (Phys. Rev. Lett. 113, 141803; Phys. Rev. Lett. 114, 051801)
      ● Aim: longitudinal scattering.
      ● Pros: easy to find; the signal is clean.
      ● Cons: two neutrinos.
      ● Other options: OS WWjj (all top), semileptonic WWjj (not yet sensitive to the SM), ZZjj and WZjj (very few events).
      ● What can we do with 2 neutrinos? Study parton-level truth for W±W±.

  6. How do we do it? Some proposals: K. Doroba, J. Kalinowski, J. Kuczmarski, S. Pokorski, J. Rosiek, M. Szleper, S. Tkaczyk: use the specially built R_pT variable, R_pT = pT(lep1)·pT(lep2) / (pT(jet1)·pT(jet2)) (http://xxx.lanl.gov/pdf/1201.2768v2). A. Freitas, J. S. Gainer: use a matrix-element analysis to differentiate Higgs models (http://xxx.lanl.gov/pdf/1212.3598v2.pdf). We want to try and measure the longitudinal fraction directly.
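
Purely as an illustration (not code from the talk), a minimal sketch of the R_pT variable of Doroba et al., assuming each event is a dict of transverse momenta in GeV with hypothetical key names:

```python
def r_pt(event):
    """R_pT = pT(lep1) * pT(lep2) / (pT(jet1) * pT(jet2)),
    the lepton-to-tagging-jet pT ratio proposed in arXiv:1201.2768."""
    return (event["pt_lep1"] * event["pt_lep2"]) / (event["pt_jet1"] * event["pt_jet2"])

# Illustrative same-sign WWjj event (made-up numbers)
print(r_pt({"pt_lep1": 60.0, "pt_lep2": 45.0, "pt_jet1": 120.0, "pt_jet2": 80.0}))
```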

  7. Measuring VLVL: We've seen the first signs of VBS in W+W+. The next step is to see VLVL; then, can we measure VLVL at high M(W,W)? The effect of polarization is on the θ* distribution. [Diagram: θ* is the angle of the e+ relative to the W+ direction, after boosting to the W rest frame.]
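
A hedged sketch (assuming generator-level four-vectors are available; plain NumPy, not the speaker's code) of how the decay angle can be computed: boost the charged lepton into its parent W rest frame and take the angle relative to the W flight direction in the lab.

```python
import numpy as np

def cos_theta_star(p_lep, p_w):
    """cos(theta*): angle of the charged lepton in the parent W rest frame,
    measured relative to the W direction of flight in the lab.
    p_lep, p_w are four-vectors (E, px, py, pz) in consistent units."""
    E_w, q_w = p_w[0], np.asarray(p_w[1:], dtype=float)
    E_l, q_l = p_lep[0], np.asarray(p_lep[1:], dtype=float)
    m_w = np.sqrt(E_w**2 - q_w.dot(q_w))
    beta = q_w / E_w                  # W velocity in the lab (assumes the W is moving)
    gamma = E_w / m_w
    # Lorentz boost of the lepton three-momentum into the W rest frame
    bp = beta.dot(q_l)
    q_l_rest = q_l + ((gamma - 1.0) * bp / beta.dot(beta) - gamma * E_l) * beta
    # Decay angle with respect to the W lab direction
    return q_l_rest.dot(q_w) / (np.linalg.norm(q_l_rest) * np.linalg.norm(q_w))

# Illustrative four-vectors in GeV (not from the analysis)
print(cos_theta_star((45.8, 20.0, 10.0, 40.0), (130.0, 40.0, 30.0, 90.0)))
```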

  8. Cos(θ*) distributions - 1D (http://arxiv.org/pdf/1203.2165v2.pdf): Fits to these distributions give the polarization fractions. Of course this can't be done in real events because of the two missing neutrinos. Do we have any sensitivity with measurable quantities?
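
For context, the leading-order decay-angle distribution such fits typically use (a textbook expression, not quoted from the slides; the association of the (1∓cosθ*)² terms with the two transverse helicities depends on the W charge and on which lepton defines θ*):

```latex
\frac{1}{N}\frac{dN}{d\cos\theta^{*}} =
\frac{3}{8}\,f_{-}\,(1-\cos\theta^{*})^{2}
+ \frac{3}{8}\,f_{+}\,(1+\cos\theta^{*})^{2}
+ \frac{3}{4}\,f_{0}\,(1-\cos^{2}\theta^{*}),
\qquad f_{-}+f_{+}+f_{0}=1,
```

with f_0 the longitudinal fraction that the analysis is after.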

  9. Machine Learning: Neural Networks. It is really common in HEP to use multivariate techniques for classification (discrete estimation). [Diagram: inputs → hidden layers → output ("signalness"), with the output squashed into (0,1).] The network is just a simple mapping f(x_i) → output; train the weights so this mapping gives you the best discrimination between signal and background.
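
As a hedged illustration of this classification setup (a generic Keras sketch on toy data, not the speaker's code):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data: one row of kinematic features per event; label 1 = signal, 0 = background
x = np.random.rand(1000, 10).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layers
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # squash the output into (0, 1): "signalness"
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=5, batch_size=64, verbose=0)
```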

  10. Machine Learning: Neural Networks. You can also train a NN to approximate continuous functions (regression). [Same diagram, but the output is not squashed into (0,1).]

  11. Machine Learning: Neural Networks. You can also train a NN to approximate continuous functions (regression). [Diagram: inputs → hidden layers → outputs, now labelled "my favorite truth value" and "my other favorite truth value".] The network is still just a simple mapping f(x_i) → output; train the weights so this mapping gives you the best approximation of the function you want (minimum squared error).

  12. Goal
      1. Take the measurable quantities in same-sign W±W± (pT, eta, phi of the leptons and jets, plus MET).
      2. Train a neural network to output the two true values of cos(θ*) (one for each W).
      3. Fit the neural network's cos(θ*) approximation to measure the longitudinal fractions.
      [Workflow: Run Neural Network → Select Signal Events → Subtract Background → Fit Data.]
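
A hedged sketch of this regression setup with made-up arrays and shapes (the real analysis would use simulated same-sign WW events; the 14-feature layout and two-hidden-layer network here are illustrative, with the deep 20-layer version sketched under slide 14):

```python
import numpy as np
import tensorflow as tf

# Illustrative inputs: (pT, eta, phi) for 2 leptons and 2 jets, plus (MET, MET phi) -> 14 features
n_events, n_features = 5000, 14
x = np.random.randn(n_events, n_features).astype("float32")          # stand-in for measured kinematics
y = np.random.uniform(-1, 1, size=(n_events, 2)).astype("float32")   # stand-in for the two true cos(theta*)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dense(200, activation="relu"),
    tf.keras.layers.Dense(2),                 # no squashing: two regression outputs, one per W
])
model.compile(optimizer="adam", loss="mse")   # minimise the squared error to the truth values
model.fit(x, y, epochs=5, batch_size=128, verbose=0)
cos_pred = model.predict(x[:10])              # per-event cos(theta*) estimates for the later template fit
```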

  13. Training the Neural Network: Deep Learning
      ● Deep learning is simply extending a simple neural network to many layers: conceptually simple, computationally a little difficult.
      ● It has had a lot of success recently, in HEP and elsewhere. P. Baldi, P. Sadowski, D. Whiteson (http://arxiv.org/pdf/1410.3469.pdf): "The deep networks trained on the low-level variables performed better than shallow networks trained on the high-level variables engineered by physicists, and almost as well as the deep network trained [on] high-level variables."
      ● Since we have only one "high-level variable", this is exactly what we want.

  14. Train Neural Network
      ● Use deep learning: a network with 20 layers of 200 nodes, instead of a shallow one-layer network.
        ○ Though results look pretty good with 1 layer; gain ~20% with deep learning.
      ● Validate on independent data: far from perfect, but certainly usable.
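
A hedged sketch of the quoted 20-layer, 200-node architecture (the activation, optimiser, and training details below are assumptions, not taken from the talk):

```python
import tensorflow as tf

def build_deep_regressor(n_features, n_layers=20, n_nodes=200):
    """Deep fully connected regressor: n_layers hidden layers of n_nodes each,
    with two linear outputs for the two true cos(theta*) values."""
    parts = [tf.keras.Input(shape=(n_features,))]
    parts += [tf.keras.layers.Dense(n_nodes, activation="relu") for _ in range(n_layers)]
    parts += [tf.keras.layers.Dense(2)]
    return tf.keras.Sequential(parts)

model = build_deep_regressor(n_features=14)
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, validation_data=(x_val, y_val), ...)   # validate on an independent sample
```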

  15. Fit Neural Network (1 ab⁻¹ fit shown)
      ● 6 templates: ++, --, +-, LL, +L, -L.
      ● Combine into 3: Transverse-Transverse, Transverse-Longitudinal, Longitudinal-Longitudinal.
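
One possible implementation of such a template fit (an assumed sketch, not the analysis code): with binned NN-output templates for the TT, TL, and LL components, the fractions can be extracted by minimising a Poisson negative log-likelihood, e.g. with scipy:

```python
import numpy as np
from scipy.optimize import minimize

def fit_fractions(data, templates):
    """Fit the polarization fractions (f_TT, f_TL, f_LL) by comparing a binned
    data histogram to three unit-normalised templates of the same binning."""
    templates = np.asarray(templates, dtype=float)   # shape (3, n_bins)
    data = np.asarray(data, dtype=float)
    n_tot = data.sum()

    def nll(params):
        f_tl, f_ll = params
        f_tt = 1.0 - f_tl - f_ll
        if min(f_tt, f_tl, f_ll) < 0.0:
            return 1e12                              # keep the fractions physical
        mu = n_tot * (f_tt * templates[0] + f_tl * templates[1] + f_ll * templates[2])
        return np.sum(mu - data * np.log(mu + 1e-12))

    res = minimize(nll, x0=[0.3, 0.1], method="Nelder-Mead")
    f_tl, f_ll = res.x
    return 1.0 - f_tl - f_ll, f_tl, f_ll
```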

  16. Other fit option: it is also possible to fit all six templates at once.

  17. Real Life
      ● Have to separate signal and background: get a little help from the NN, but still need to make event-level cuts.
      ● Finite detector resolution.
      ● ATLAS cuts: MET > 30 GeV, M(j,j) > 500 GeV, lepton pT > 25 GeV, ΔY(j,j) > 2.4, jet pT > 30 GeV.
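
A hedged sketch of the quoted event-level selection, with hypothetical field names (momenta and masses in GeV):

```python
def passes_atlas_like_cuts(event):
    """Apply the slide's event-level cuts: MET > 30 GeV, M(j,j) > 500 GeV,
    lepton pT > 25 GeV, |Delta Y(j,j)| > 2.4, jet pT > 30 GeV."""
    return (
        event["met"] > 30.0
        and event["m_jj"] > 500.0
        and all(pt > 25.0 for pt in event["lepton_pts"])
        and abs(event["dy_jj"]) > 2.4
        and all(pt > 30.0 for pt in event["jet_pts"])
    )
```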

  18. Sensitivity. [Plots: sensitivity at parton level, with ATLAS cuts, and with ATLAS cuts + Delphes simulation.]

  19. Some Comparisons: Compare against the variable from Doroba et al., R_pT = pT(lep1)·pT(lep2) / (pT(jet1)·pT(jet2)). [Plots shown on linear and log scales.]

  20. Fit Comparison. Precision at 3 ab⁻¹ (68%), parton level with cuts: NN: 7.5 +2.6 −2.4 %; R_pT: 7.5 +11.3 −7.5 %. The NN has > 4x the sensitivity to the LL fraction (equivalent to 16x the data, or hundreds of years of running the LHC!).
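
The "16x the data" equivalence follows from the usual statistics-limited scaling (a standard approximation, not spelled out on the slide):

```latex
\sigma_{\text{stat}} \;\propto\; \frac{1}{\sqrt{L_{\text{int}}}}
\quad\Longrightarrow\quad
4\times \text{ better precision} \;\approx\; 4^{2}=16\times \text{ more integrated luminosity}.
```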

  21. Future Studies
      ● A few nice properties of this regression could be explored more.
      ● What differential measurements are possible?
      ● Train on Delphes samples.
      ● Can be combined with cuts to enhance the LL fraction (e.g. Doroba et al.).

  22. Conclusions
      ● Regression is a great tool for pulling out "hidden" information, with many applications beyond this one.
      ● Measuring the properties of VBS is possible in the same-sign WW state: neural networks can more than double the sensitivity of the current state-of-the-art methods.
      ● Limits can be made as early as the first measurements; strong bounds will require the High-Lumi LHC.
      ● Differential distributions could be possible.
      ● Better understanding (generation) of the polarization distributions will help.
      ● Extra thanks to Olivier Mattelaer and Sally Dawson.
