  1. DUNE CVN. Alexander Radovic, College of William and Mary, on behalf of the DUNE Experiment.

  2. Who is DUNE CVN? Alexander Radovic (College of William and Mary), Leigh Whitehead (CERN), Robert Sulej (CERN), Dorota Stefan (CERN), Evan Niner (Fermilab), plus the many good people of NOvA CVN. Alexander Radovic, "Deep Learning at DUNE".

  3. Deep Learning: Deep Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Unsupervised Learning, Adversarial Networks, Neural Turing Machines.


  5. Why Deep Neural Networks? • Measuring neutrino oscillations is all about measuring how neutrinos change between different lepton flavor states as a function of distance traveled and neutrino energy. [Figure: Monte Carlo νµ spectrum without oscillations vs. with oscillations, for given sin²(2θ23) and Δm²32.]

  6. Why Deep Neural Networks? (continued) [Figure: oscillation probability, from S. Parke, "Neutrino Oscillation Phenomenology", in Neutrino Oscillations: Present Status and Future Plans.]
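The νµ disappearance measurement sketched above is governed, in the two-flavor approximation, by the standard survival probability (a textbook formula, not taken from the slides):

```latex
P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2(2\theta_{23})\,\sin^2\!\left(\frac{\Delta m^2_{32}\,L}{4E}\right)
```

Here sin²(2θ23) sets the depth of the oscillation dip and Δm²32 sets where in L/E it appears, which is why precise event identification and energy estimation matter for both parameters.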

  7. Why Deep Neural Networks? • Any oscillation analysis can benefit from precise identification of the interaction in two ways: • Estimating the lepton flavor of the incoming neutrino. • Correctly identifying the type of neutrino interaction to better estimate the neutrino energy, e.g. is it a quasi-elastic event or a resonance event? [Figure: quasi-elastic vs. resonance interaction diagrams.]

  8. Why Deep Neural Networks? • Liquid argon detectors are also the perfect domain: • Large, roughly uniform volumes where spatially invariant response is a benefit. • One main detector system. [Figure: DUNE νe candidate event display.]


  10. Convolutional Neural Networks. Instead of training a weight for every input pixel, learn weights that describe kernel operations, convolving each kernel across the entire image to exaggerate useful features. Inspired by research showing that cells in the visual cortex are responsive only to small portions of the visual field. [Figure: input image, kernel, and resulting feature map, from http://setosa.io/ev/image-kernels/]

  11. Convolutional Neural Networks (continued). [Figure: a kernel sliding over an input to produce a feature map, from https://developer.nvidia.com/deep-learning-courses]
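The kernel operation described above can be sketched in plain NumPy. This is a minimal valid-padding, stride-1 convolution; the Sobel-style edge kernel and the toy "track" image are illustrative, not taken from the slides:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel across the image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the elementwise product of the kernel
            # with the image patch under it, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel exaggerates track-like vertical features.
image = np.zeros((5, 5))
image[:, 2] = 1.0                       # a vertical "track" down column 2
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
feature_map = convolve2d(image, sobel_x)
print(feature_map.shape)  # (3, 3)
```

The same learned kernel is reused at every position, which is what makes the response spatially invariant, a property the slides call out as a good match for a large uniform detector volume.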

  12. Our Input. Each "pixel" is the integrated ADC response in that time/space slice. These maps are chosen to be 500 wires long and 1.2 ms wide (split into 500 time chunks).
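A minimal sketch of filling such a 500 × 500 pixel map. The hit format, a (wire, time, ADC) tuple, is an assumption for illustration, not the DUNE data model:

```python
import numpy as np

N_WIRES, N_TICKS = 500, 500   # 500 wires x 1.2 ms split into 500 time chunks
READOUT_MS = 1.2

def make_pixel_map(hits):
    """hits: iterable of (wire, time_ms, adc) tuples -- field names are illustrative."""
    pixel_map = np.zeros((N_WIRES, N_TICKS))
    for wire, time_ms, adc in hits:
        tick = int(time_ms / READOUT_MS * N_TICKS)
        if 0 <= wire < N_WIRES and 0 <= tick < N_TICKS:
            # Integrate the ADC response landing in each time/space slice.
            pixel_map[wire, tick] += adc
    return pixel_map

# Two hits on the same wire at the same time accumulate into one pixel.
pm = make_pixel_map([(10, 0.6, 120.0), (10, 0.6, 30.0)])
print(pm.shape)      # (500, 500)
print(pm[10, 250])   # 150.0
```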

  13. The Training Sample. • 1.2M events; the only preselection requires 100 hits split across any number of planes. • Labels are from GENIE truth; neutrino vs. antineutrino is ignored. • No oscillation information, just the raw input distributions. • 80% for training and 20% for testing. [Figure: work in progress; event counts by true class: νµ/νe/ντ × QE/RES/DIS/COH, plus NC.]
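The 80%/20% split above can be sketched with the standard library. The shuffle, seed, and event representation are illustrative; the slides only specify the fractions and the 1.2M total:

```python
import random

def split_sample(events, train_frac=0.8, seed=42):
    """Shuffle, then split into training and testing sets (80%/20% on the slide)."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = list(events)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# 1.2M events, as quoted on the slide (event IDs stand in for real events).
train, test = split_sample(range(1_200_000))
print(len(train), len(test))  # 960000 240000
```

Keeping the test set disjoint from the training set is what makes the overtraining check on slide 15 meaningful.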

  14. Our Architecture. Based on the NOvA CNN, named CVN, with small edits to better suit a larger input image and three distinct views. The architecture attempts to categorize events as {νµ, νe, ντ} × {QE, RES, DIS}, plus NC. Built in the excellent CAFFE framework.

  [Figure: Netscope rendering of the network graph (http://ethereon.github.io/netscope/#/editor): parallel towers for the x, y, and z views, each with initial convolutions (conv1/11x11_s4, conv2/3x3), pooling, normalization, and inception modules (inception_3a), merged (merge_x_y) ahead of a final inception_5b module, pooling (pool5/6x5_s1), dropout, and the classifier/loss layers.]

  Excerpt of the Caffe prototxt for the inception_5b branches:

      layer {
        name: "inception_5b/1x1"
        type: "Convolution"
        bottom: "merge_x_y"
        top: "inception_5b/1x1"
        param { lr_mult: 1 decay_mult: 1 }
        param { lr_mult: 2 decay_mult: 0 }
        convolution_param {
          num_output: 384
          kernel_size: 1
          weight_filler { type: "xavier" }
          bias_filler { type: "constant" value: 0.2 }
        }
      }
      layer {
        name: "inception_5b/relu_1x1"
        type: "ReLU"
        bottom: "inception_5b/1x1"
        top: "inception_5b/1x1"
      }
      layer {
        name: "inception_5b/3x3_reduce"
        type: "Convolution"
        bottom: "merge_x_y"
        top: "inception_5b/3x3_reduce"
        param { lr_mult: 1 decay_mult: 1 }
        param { lr_mult: 2 decay_mult: 0 }
        convolution_param {
          num_output: 192
          kernel_size: 1
          weight_filler { type: "xavier" }
          bias_filler { type: "constant" value: 0.2 }
        }
      }
      layer {
        name: "inception_5b/relu_3x3_reduce"
        type: "ReLU"
        bottom: "inception_5b/3x3_reduce"
        top: "inception_5b/3x3_reduce"
      }
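The inception modules above run several kernel sizes in parallel and concatenate their feature maps along the channel axis. A shape-level sketch of that pattern in NumPy; the num_output values 384 and 192 come from the prototxt excerpt, while the input channel count, the 16 × 16 spatial size, and the 3x3-branch width are illustrative:

```python
import numpy as np

def conv_shape(x, out_channels):
    """Stand-in for a same-padded convolution: only the channel count changes."""
    n, _, h, w = x.shape
    return np.zeros((n, out_channels, h, w))

# Input channel count and 16x16 spatial size are illustrative.
x = np.zeros((1, 832, 16, 16))
branch_1x1 = conv_shape(x, 384)            # inception_5b/1x1: num_output 384
reduce_3x3 = conv_shape(x, 192)            # inception_5b/3x3_reduce: num_output 192
branch_3x3 = conv_shape(reduce_3x3, 384)   # 3x3 conv after the reduce (width illustrative)

# The inception "output" layer concatenates all branches along the channel axis.
output = np.concatenate([branch_1x1, branch_3x3], axis=1)
print(output.shape)  # (1, 768, 16, 16)
```

The 1x1 "reduce" convolutions exist to shrink the channel count before the expensive larger kernels, which is the standard GoogLeNet-style trick this architecture inherits.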

  15. Training Performance. No sign of overtraining: excellent agreement between training-set and test-set performance. [Figure: work in progress.]

  16. Example CVN Kernels In Action: First Convolution. Here the earliest convolutional layer in the network starts by pulling out primitive shapes and lines. Already "showers" and "tracks" are starting to form. [Figure: input image convolved with first-layer kernels to give feature maps.]

  17. Example CVN Kernels In Action: First Inception Module Output (True NuMu DIS Event). Deeper in the network, after the first inception module, we can see that more complex features have started to be extracted. Some seem particularly sensitive to muon tracks and EM showers.

  18. Example CVN Kernels In Action: First Inception Module Output (True NuE COH Event). The same layer outputs as on the previous slide, shown for a νe COH event.

  19. NuMu PID. Neutrino beam and antineutrino beam: DUNE FD events, with oscillations, arbitrary exposure (work in progress). A cut at 0.5 guarantees no double counting, due to the softmax output of the CVN.
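The 0.5 cut avoids double counting because softmax probabilities sum to 1, so at most one class score can exceed 0.5. A small sketch; the logits and the class ordering are illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax: outputs are positive and sum to 1."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. one score each for {numu, nue, nutau, NC} -- labels illustrative.
scores = softmax([2.0, 0.5, -1.0, 0.1])
selected = [i for i, p in enumerate(scores) if p > 0.5]

# Since the scores sum to 1, at most one index can pass the 0.5 cut,
# so each event lands in at most one PID category.
assert len(selected) <= 1
print(selected)  # [0]
```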
