SLIDE 1

Data Selection Workshop: Welcome, Overview, Goals

Josh Klein, Penn

SLIDE 2

Welcome to Penn

First University in the US, Ben Franklin, ENIAC, blah blah blah

SLIDE 3

Overview

  • Data Selection is hierarchical
    • Trigger Primitives generated at front-end
    • Trigger Candidates formed at “APA Level” or “L0”
    • Module-level trigger commands generated from candidates at “L1”; sent to buffer(s)
    • External triggers across all modules, other detectors
    • After event building, additional HLT (L2) may be applied to filter triggers
  • Readout for TDR is simple-minded
    • For a high-energy (> 10 MeV) L1 trigger, everything is read out for 5.4 ms
  • Supernova bursts are handled differently
    • Many low-threshold candidates required at L1 (hopefully with weighting scheme)
    • Long buffer (−10 s, +20 s) of everything is saved and “dumped” if SN burst detected
  • Some calibrations are handled differently
    • FE electronics
    • Laser system
    • 39Ar?
SLIDE 4

[Diagram: hierarchical trigger dataflow. In each module (Module 1 … Module 4), APA 1 … APA 150 each run Trigger Primitive Generation feeding Trigger Candidate Generation (software or firmware); candidates feed Module-Level Triggers ([mostly] software), which issue trigger commands. An External Triggers block (software) spans all modules and other detectors (Det 1 … Det N), exchanging SN trigger commands with each module.]

SLIDE 5

DS Scope: 1 SP TPC

[Diagram: DS scope for one SP TPC. Front End Systems (APA-level trigger primitive generation) [x150] feed L0 (APA-level) data selection, then L1 (module-level) data selection; the per-module Event Builder feeds L2 (high-level data selection), applied to each event independently, and then permanent storage at Fermilab. Long-term storage of primitives, Run Control & Config Databases, and External & Calibration Signals also connect to the chain.]

SLIDE 6

Goals for this Workshop

SLIDE 7

Goals for this Workshop

LBNC Meeting has reinforced the need for:

  • Requirements for DAQ that follow ultimately from physics requirements
  • Hardware costs that are grounded in actual tests and calculations
  • Labor needs (and costs) that are matched to requirements
  • Narrowing of options in advance of TDR and focus on development of those

The era of “It could be this way or that way…” and “This will probably work…” is over.

SLIDE 8

Goals for this Workshop: Requirements

This is not so easy.

  • Physics requirements relevant for DAQ are not always directly useful:

SLIDE 10

Goals for this Workshop: Requirements

Data Selection Physics requirements takeaway:

  • We must be better than 90% efficient for Evis > 100 MeV
    • Goal is 99%, with 50% at ~10 MeV
    • But other factors affect this: dead APAs, HV downtime, DAQ downtime…
    • So it is not clear what our requirement is…
  • We should plan to detect neutrons
    • “Full” 5.4 ms readout achieves this
  • Zero suppression is either a requirement or not:
    • We should “optimize energy resolution over efficiency” for Evis < 100 MeV
    • But also need to see de-excitation gammas from CC events…
  • We need beam timeline in data selection (Module Level/L1)
    • Already discussed
  • No dead time
    • Already discussed
  • Need to identify fake SN bursts as fake in 1 minute (!)
SLIDE 11

Goals for this Workshop: Requirements

But nothing about:

  • How much data to store or length of window in the event of a supernova burst
  • Trigger efficiency per se or distinction between “acceptance” and efficiency
  • Total data volume
  • What to do about calibrations

We have our own versions of these, but they are not tied to the physics requirements in any direct way. We need agreement on what these should be, and on how they tie to the physics requirements.

SLIDE 12

Goals for this Workshop: Informing Costs

Data selection scope extends from algorithms (but not hardware) for TP generation to L2 and “external triggering.”

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

SLIDE 13

Goals for this Workshop: Informing Costs

Basic cost model so far assumes:

  • 1. Processing of trigger primitives (L0) to find trigger candidates done by hardware distinct from readout hardware (e.g., FELIX+CPU, etc.)
  • 2. Generation of Module Level (L1) triggers done by yet more distinct hardware (perhaps one single server); needs PDS, GPS, calibration information
  • 3. External Triggering is likewise a single machine with an interface to other modules and detectors
  • 4. Level 2 high-level triggering is also distinct hardware that looks at built events but also possibly TPs (less hardware because built event rate is low)

SLIDE 14

Goals for this Workshop: Informing Costs

Basic cost model so far assumes:

  • 1. Processing of trigger primitives (L0) to find trigger candidates done by hardware distinct from readout hardware (e.g., FELIX+CPU, etc.)

To defend this approach (or decide against it) we need to:

  • Define the interface between TP generation and L0
  • Develop L0 algorithms that satisfy requirements
  • Test L0 algorithms on “realistic” hardware and data samples to determine speeds and latencies
  • Determine total resources needed to make this work

A lot of what we’ll discuss these two days is how to do this by…October.

SLIDE 15

Goals for this Workshop: Informing Costs

Basic cost model so far assumes:

  • 2. Generation of Module Level (L1) triggers done by yet more distinct hardware (perhaps one single server); needs PDS, GPS, calibration information

To defend this approach (or decide against it) we need to:

  • Define the interface between L0 trigger candidates and L1
  • Define interface between PDS and L1
  • Define interface between calibrations and L1
  • Define interface between global timing and L1
  • Define interface between L1 and FE readout (trigger commands)
  • Define interface between L1 and supernova buffer (trigger commands)
  • Develop L1 algorithms that satisfy requirements
  • Test L1 algorithms on “realistic” hardware and data samples to determine speeds and latencies
  • Determine total resources needed to make this work

A lot of what we’ll discuss these two days is how to do this by…October.

SLIDE 16

Goals for this Workshop: Informing Costs

Basic cost model so far assumes:

  • 3. External Triggering is likewise a single machine with an interface to other modules and detectors

To defend this approach (or decide against it) we need to:

  • Define the bidirectional interface between L1 and external trigger
  • Define interface between external trigger and any other systems
  • Develop external trigger algorithms that satisfy requirements
  • Test external triggering on “realistic” hardware and data samples to determine speeds and latencies
  • Determine total resources needed to make this work

A lot of what we’ll discuss these two days is how to do this by…October.

SLIDE 17

Goals for this Workshop: Informing Costs

Basic cost model so far assumes:

  • 4. Level 2 high-level triggering is also distinct hardware that looks at built events but also possibly TPs (less hardware because built event rate is low)

To defend this approach (or decide against it) we need to:

  • Define the interface between TP generation and L2 (if necessary)
  • Define interface between event builder and L2
  • Define interface between L2 and permanent storage
  • Develop L2 algorithms that satisfy requirements
  • Test L2 algorithms on “realistic” hardware and data samples to determine speeds and latencies
  • Determine total resources needed to make this work

A lot of what we’ll discuss these two days is how to do this by…October.

SLIDE 18

The Way Forward

Existing task list (next slides) needs to get applied to 3 parallel paths:

  • 1. ProtoDUNE data (very good for testing TP algorithms and maybe L0)
  • 2. LArSOFT simulation (for L0/L1 algorithm development that satisfies physics requirements)
  • 3. “Local” test stands (e.g., BNL, Columbia, …)

By my count, DUNE data selection has about 3-4 FTEs working on it now. That sounds…a little bit light…. (Current costing has 12.4 in CY19; max of 17.4 in CY20)

SLIDE 19

Data Selection Task List

1. Trigger primitive development and testing (SP and DP)
   A. Determination of primitive data
      1. Summed ADC, peak, time…? [Tyler, Phil]
      2. Windowed or not (from an algorithm perspective, not implementation)? [Brett, Phil]
      3. Threshold target and efficiency [Tyler, Phil, Georgia]
      4. Filtering [Babak, Russel, Phil, …]
   B. Algorithm development
      1. Choice of hit-finding approach (discriminator, gauss, etc.) [Phil, Brett, ?]
      2. Baseline determination and consequent transfer function [Phil, Brett]
   C. Adding primitive generation in simulation [Tyler, Phil, Chris, ?]
      1. “Clean” MC waveforms + offline noise models [Phil, Chris, Miquel, Babak, Rivera, …]
   D. Firmware tests on simulated waveforms [Russel, ?]
      1. Determination of implementation constraints on algorithms [Russel, ?]
   E. Firmware tests on (e.g.) ProtoDUNE waveforms [Russel, ?]
   F. Software tests on simulated waveforms [Phil, Alex, Joe, Georgia, …]
      1. Determination of implementation constraints on algorithms [Phil, Brett, ?]
   G. Software tests on (e.g.) ProtoDUNE waveforms [Rivera, ?]

SLIDE 20

Data Selection Task List

2. Trigger Candidate Algorithm Development
   A. Supernova burst algorithm [Alex, Pierre, ?]
      1. Background source term investigation [?]
      2. Spallation backgrounds [JRK+Beacom, ?]
      3. Channel-to-channel variations, dead channels, robustness [?]
   B. Beam triggering [David L, David R, ?]
   C. Atmospheric neutrino triggering [Olivia, ?]
   D. Solar neutrino (?) triggering [Alex, Nuno, David R?]
   E. PDK triggers [?]
   F. “High-Level” (L2) Trigger algorithm development [Giles, Georgia, ?]

SLIDE 21

Data Selection Task List

3. PDS trigger development [hardware/software]
   A. Determination of trigger primitive data packets (charge, time, etc.) [Lasorak, Edinburgh]
   B. Decision on where primitive generation happens [?]
   C. Test of primitive generation in candidate hardware [?]
   D. Test of dispatch of PDS trigger primitives [?]

4. Trigger candidate algorithm testing [software]
   A. Define framework for software candidate generation [Weinstein, Warburton, Chris, ?]
   B. Define hardware environment for candidate generation [Viren?, ?]
   C. Create sample trigger primitive data streams [?]
   D. Implement algorithms on candidate hardware [Rodrigues, Viren, ?]
   E. Implement High-Level Algorithms on candidate hardware [Barr, Karagiorgi, ?]
   F. Speed tests with realistic rates [Rodrigues, ?]
   G. Dispatch of candidates to module-level logic [?]

SLIDE 22

Data Selection Task List

5. Module-level trigger logic
   A. Define framework for software module trigger generation [Weinstein, ?]
   B. Define hardware environment for module trigger generation [Viren?, Rodrigues?, ?]
   C. Create sample trigger candidate data streams [?]
   D. Implement algorithms on candidate hardware [?]
   E. Speed and latency tests [?]
   F. Trigger command generation development [?]

6. “External” trigger development
   A. Define framework for external trigger generation [Weinstein, ?]
   B. Define hardware environment for external trigger generation [Viren?, Rodrigues, ?]
   C. Create sample module-level trigger data streams [?]
   D. Implement algorithms on candidate hardware [?]
   E. Speed tests with realistic rates [?]
   F. Development of trigger commands to send back to modules [?]

SLIDE 23
Data Selection Task List

7. Integrated tests [hardware/software]

SLIDE 24

Summary

  • Need to start nailing things down on:
    • Requirements and tying them to physics requirements
    • Data Selection interfaces
  • Need to converge on prototype algorithms that satisfy the requirements
  • Need active efforts on ProtoDUNE data and test stands (+ ongoing simulation)
SLIDE 25

BACKUPS (Requirements, Costing, DAQ details)

SLIDE 26

Suggested Data Selection Requirements Discussed Previously

SLIDE 27
Data selection shall:

  • operate on both SP and DP data streams
    Rationale: Collaboration requirement
  • act independently and in concert from all four detector modules
    Rationale: Maintain sensitivity during downtimes as well as full 40 kt for supernova bursts
  • enable filtering of data stream so that data on disk can be limited to < 30 PB/year
    Rationale: Reasonable limit for downstream analysis and storage
    Comments: “Enable” means downstream nearline can filter further
  • be deadtimeless
    Rationale: No excuse to be otherwise; maximum livetime for follower events, supernova bursts
  • act on information from TPC, PDS, timing system, and any auxiliary calibration systems
    Rationale: Provide robustness to variations in noise; needed for detector calibrations
  • include timestamps for all events of interest
    Rationale: Allow tests in offline analysis; coordinate with other expts; provide unique IDs
  • provide tags for various types of selected data, written as part of data stream
    Rationale: Provide simple sorting for online and offline analyses; allow systematic checks
  • act as a “master” for calibration systems as necessary
    Rationale: Simplify tagging and run control interface
  • provide self-triggering modes (e.g., random triggers, pulsers)
    Rationale: Systematic checks of overall detector health

SLIDE 28
Data selection shall:

  • provide pre-scaling for all trigger types as well as rejected data
    Rationale: May need to deal with high-rate calibration sources, or low-threshold triggers
  • provide a coarse estimate of event centroid
    Rationale: Allow zero suppression via location
  • provide statistics for data selection to online monitoring
    Rationale: Detector health feedback
  • allow partitioning for commissioning and debugging, within modules
    Rationale: Commissioning of elements that may come together at different times

SLIDE 29
Data selection shall:

  • be >99% efficient for selection of neutrino events within beam spills with Evis > 100 MeV
    Rationale: Best exploitation of beam data (efficiency needs to be defined!)
  • be >99% efficient for selection of atmospheric νs and NDK events with Evis > 100 MeV
    Rationale: Smaller total flux in low (<100 MeV) window
    Comments: Change this to 50 MeV? 20 MeV…?
  • be >90% efficient for selection of supernova bursts within the Milky Way
    Rationale: Seems reasonable?
  • be >90% efficient for single supernova events within a burst for Evis > 5 MeV
    Rationale: Seems reasonable?
  • Supernova burst false trigger rate contributes < 10% of total data bandwidth

SLIDE 30

Georgia’s Costing Slides

SLIDE 31

DS Scope: 1 SP TPC

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

SLIDE 32

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

DS Scope: 1 SP TPC

Within DS scope is development of trigger primitive generation algorithms (that can be implemented in CPU, FPGA, or GPU), but not the hardware where these algorithms reside, plus physics validation studies. Notes:

  • Hardware development and production is not within scope; only software development and physics validation
  • We are also aiming to help with teststand construction and DS development and validation on the teststand
    • Candidate DS-focused teststand construction and development site: Columbia U. (also at other sites)
  • Within the scope is assistance with pre-/production testing, installation, and commissioning of the FE system (DS focused)

Guideline: PD = $100k/yr, GS = $120k/yr

SLIDE 33

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

DS Scope: 1 SP TPC

L0 (APA level) data selection

Within DS scope is development of L0 trigger algorithms (APA-level) as well as the hardware where these algorithms reside, and physics validation studies.

More specifically, within DS scope:

  • Hardware development and production (CPU or GPU; possibility of FPGA?)
  • Software and/or firmware development and physics validation
  • Teststand with fake trigger primitive inputs
  • Pre-/production testing, installation and commissioning of system

Example scheme 1: Each APA reports the following candidates to L1:

  • SN (“blob”)
  • Track
  • Shower

Example scheme 2:

  • Low-energy cluster (SN-like)
  • High-energy cluster (every other physics event of interest)

Each candidate is reported with some position, time, and calorimetric info (TBD)
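Since the record content is explicitly TBD, here is a hypothetical sketch of what one candidate record could look like; every field name below is my illustration of “position, time and calorimetric info,” not a defined DUNE interface:

```python
# Hypothetical trigger-candidate record (illustrative only).
from dataclasses import dataclass

@dataclass
class TriggerCandidate:
    apa: int           # which of the 150 APAs reported the candidate
    kind: str          # scheme 1: "SN", "track", "shower"; scheme 2: "low_E", "high_E"
    start_tick: int    # time information
    wire_lo: int       # coarse position: wire range on the APA
    wire_hi: int
    summed_adc: float  # calorimetric information (energy proxy)
```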

SLIDE 34

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

DS Scope: 1 SP TPC

L0 (APA level) data selection

L0 hardware scope: For a CPU approach, we can use current DUNE clustering resource use as a basis for evaluating system requirements.

Trigger primitives, if dominated by 39Ar, amount to ~50k argon decays per drift per module × 4 × 2-byte words per primitive, or ~200 MB/s. This amounts to ~0.5 PB/3 months, so we need local(?) disk with 0.5 PB total long-term storage for trigger primitives. → Do we want these stored on-site or shipped offline?
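As a sanity check, the quoted rate follows from the slide's own inputs (the 2.25 ms maximum drift time is taken from the trigger-primitive backup slides):

```python
# Back-of-envelope check of the 39Ar-dominated trigger-primitive data rate.
DECAYS_PER_DRIFT_PER_MODULE = 50_000  # from this slide
BYTES_PER_PRIMITIVE = 4 * 2           # 4 x 2-byte words per primitive (this slide)
DRIFT_TIME_S = 2.25e-3                # maximum drift time (backup slides)

rate = DECAYS_PER_DRIFT_PER_MODULE * BYTES_PER_PRIMITIVE / DRIFT_TIME_S
print(f"TP rate: {rate / 1e6:.0f} MB/s")  # ~178 MB/s, i.e. the slide's ~200 MB/s
```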
SLIDE 35

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

DS Scope: 1 SP TPC

L0 (APA level) data selection

Clustering in DUNE:

  • Currently, depending on the algorithm, clustering can take up to 10 s per simulated event (1x2x6 APA × 5.4 ms readout) for events with radiologicals. This is probably too large! (It translates to >45k dunegpvm interactive machines in order to keep up.)
  • Need simple clustering algorithms tested for resource usage per APA; need to check average and max hit multiplicity per event, and different clustering algorithms.

Overall L0 hardware scope (CPU solution): Is 25 multi-core machines dedicated to clustering and generating APA-level trigger primitives, at $10k each, sufficient for 150 APAs?
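To make the 25-machine question concrete, here is a rough throughput estimate; it assumes (my assumption, not the slide's) that the quoted 10 s applies to one 12-APA, 5.4 ms event and that data stream in continuously:

```python
# How oversubscribed would 25 machines be at today's offline clustering speed?
N_MACHINES, N_APAS = 25, 150
READOUT_S = 5.4e-3   # one readout window
T_PROC_S = 10.0      # worst-case clustering time per simulated event
APAS_PER_EVENT = 12  # 1x2x6 geometry

t_per_apa = T_PROC_S / APAS_PER_EVENT                 # ~0.83 s per APA per window
busy = (N_APAS / N_MACHINES) * t_per_apa / READOUT_S  # per-machine load factor
print(f"Each machine would be ~{busy:.0f}x oversubscribed")  # ~926x
# Even with many cores per machine, clustering must get orders of magnitude
# faster (or much simpler) before 25 machines could keep up.
```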

SLIDE 36

DS Scope: 1 SP TPC

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

Within DS scope is development of L1 trigger algorithms (that can be implemented in CPU), as well as the hardware where these algorithms reside, and physics validation studies. Also included is integration with external systems (calibration, accelerator, SNEWS, and timing systems).

More specifically, within DS scope:

  • Hardware development and production (most likely a single CPU server with the necessary hardware to receive timing and other (ext) trigger info, e.g., GPS card or WD node card)
  • Software and/or firmware development and physics validation
  • Teststand with L0 inputs
  • Pre-/production testing, installation and commissioning of system
SLIDE 37

DS Scope: 1 SP TPC

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

Within DS scope is development of L2 selection algorithms (that can be implemented in CPU or GPU) as well as the hardware where these algorithms reside, and physics validation studies. Also included is integration with external systems (nearline processing, DQM, or any other systems which might be clients of the event filtering system).

More specifically, within DS scope:

  • Hardware development and production (CPU or GPU; see next slide on the latter)
  • Software development and physics validation
  • Teststand? Need only simulated events as fake inputs
  • Pre-/production testing, installation and commissioning of system


SLIDE 38

DS Scope: 1 SP TPC

[Diagram: DS scope block diagram for one SP TPC, as on slide 5.]

GPU option: Based on CNN studies at Columbia, 6-fold classification takes 35 ms (up to 50 ms) per APA (collection only) per drift on a single NVIDIA GeForce TITAN X 12 GB. A single GPU card could therefore process 1 s / 0.050 s = 20 APA × 1 drift images in 1 second. Assuming a 1 Hz maximum trigger rate from the event builder (single module) stage, we’d need (150 APA × 2 drifts/event) / (20 APA × 1 drift/card) = 15 cards to keep up.

GPU hardware scope could be:

  • 15 GPU cards
  • 5 GPU servers (3 cards per server)
  • 1 PB capacity hard drives (sized for 5 SN events, 100 s each, ~150 TB × 5)
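The card count can be reproduced directly from the slide's own numbers:

```python
# GPU sizing from the quoted CNN benchmark; nothing here is an official number.
T_INFER_S = 0.050      # worst-case inference per APA (collection) per drift
N_APAS, DRIFTS = 150, 2
TRIGGER_RATE_HZ = 1.0  # assumed max built-event rate per module

apa_drifts_per_card = 1.0 / T_INFER_S    # 20 APA-drift images per second
cards = N_APAS * DRIFTS * TRIGGER_RATE_HZ / apa_drifts_per_card
print(f"GPU cards needed: {cards:.0f}")  # 15
```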

SLIDE 39

DAQ Paradigm Slides

SLIDE 40

Current DAQ Paradigm

“Standard” Triggering (not supernova burst triggering!)

  • All data from front-end is passed to a temporary buffer, without zero suppression (~10 Tb/s/10 kt)
    • Rationale: simplicity, preserves flexibility
  • Trigger “primitives” from collection wires are passed to data selection
    • Integrated/peak charge, time, time-over-threshold
  • If an interaction is above threshold-equivalent (e.g., 10 MeV), 5.4 ms of data from all channels is stored
    • Rationale: best to have low bias at channel level for “good events”
    • Rationale: 2x maximum drift time ensures we bracket the entire event
    • Rationale: u/v zero suppression is still evolving and is noise-sensitive
    • Rationale: neutrons from beam, atmospherics, and cosmics travel far

A single “DUNE Event” is therefore 6.22 GB (uncompressed). We have a cap of 30 PB/year for all 4 modules = 4.8 million events/year.
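The 6.22 GB event size follows from standard SP-module readout parameters; the channel count, sampling rate, and ADC width below are my assumptions (they are not stated on the slide):

```python
# One "DUNE Event" = 5.4 ms of every channel in a 10 kt SP module.
N_CHANNELS = 150 * 2560  # 150 APAs x 2560 channels (assumed)
F_SAMPLE_HZ = 2.0e6      # 2 MHz digitization (assumed)
BITS_PER_SAMPLE = 12     # ADC width (assumed)
READOUT_S = 5.4e-3

event_bytes = N_CHANNELS * F_SAMPLE_HZ * READOUT_S * BITS_PER_SAMPLE / 8
print(f"Event size: {event_bytes / 1e9:.2f} GB")        # 6.22 GB

CAP = 30e15              # 30 PB/year cap for all 4 modules
print(f"Events/year: {CAP / event_bytes / 1e6:.1f} M")  # ~4.8 M
```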

SLIDE 41

Current DAQ Paradigm

Supernova Triggering

  • All data from front-end is passed to a temporary buffer, without zero suppression (~10 Tb/s/10 kt)
  • Trigger “primitives” from collection wires are passed to data selection
  • If an interaction in a volume viewed by a single APA is above a low threshold equivalent (e.g., 3.5 MeV), it is passed on as a “trigger candidate” to a higher level
  • If there are enough such events in a given time window (e.g., 10 s), a supernova burst is triggered
  • EVERYTHING (every sample on every wire) is then stored in a window that is (−10 s to +20 s) around the time at which the burst criterion was satisfied
  • After 20 s, we can continue to acquire “normal” triggers, or even some with a lower threshold for a while (few minutes?)
  • We plan on doing this no more than 12/year = 0.5 PB/year/10 ktonne
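A quick check of the quoted yearly volume, using the ~10 Tb/s raw rate from the previous slide:

```python
# Supernova dump volume for a (-10 s, +20 s) window at the full raw rate.
RAW_RATE_B_S = 10e12 / 8  # ~10 Tb/s per 10 kt module -> 1.25 TB/s
WINDOW_S = 10 + 20
DUMPS_PER_YEAR = 12

dump = RAW_RATE_B_S * WINDOW_S
print(f"One dump: {dump / 1e12:.1f} TB")                   # ~37.5 TB
print(f"Per year: {dump * DUMPS_PER_YEAR / 1e15:.2f} PB")  # ~0.45 PB
```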

SLIDE 42

[TPC] Trigger Primitives

Working assumption:

  • Channel identifier
  • Hit time
  • Summed charge and/or peak in ADC counts above baseline

  • Time over threshold (ToT)

Sets of 960 collection wires/APA “streaming” primitives out 100% live. Need to:

  • Re-map channels if necessary
  • Filter each channel (NB: baseline determination is a low-pass filter)
  • Perform hit-finding with high efficiency and purity
  • Integrate charge/find peak
  • Count time over threshold
P. Rodrigues
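A minimal sketch of the per-channel steps above (baseline estimate, hit-finding, charge integration, ToT counting); this is an illustration under assumed parameters, not the DUNE firmware or software implementation:

```python
import numpy as np

def find_primitives(waveform, channel, threshold=20.0, baseline_win=100):
    """Return primitives as (channel, start_tick, summed_adc, peak_adc, tot)."""
    # Baseline estimate via running mean (a low-pass filter, as noted above).
    kernel = np.ones(baseline_win) / baseline_win
    signal = waveform - np.convolve(waveform, kernel, mode="same")

    # Find contiguous above-threshold regions (discriminator-style hit-finding).
    padded = np.concatenate(([False], signal > threshold, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(np.int8)))
    prims = []
    for r, f in zip(edges[::2], edges[1::2]):
        pulse = signal[r:f]
        prims.append((channel, int(r), float(pulse.sum()),  # integrate charge
                      float(pulse.max()),                   # find peak
                      int(f - r)))                          # time over threshold
    return prims
```

In the windowed model of the next slide, `waveform` would be one 50 µs chunk per channel rather than a continuous stream.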
SLIDE 43

[TPC] Trigger Primitives

  • Pulse widths vary from the shaping time up to 2.25 ms (the maximum drift time)
  • For simplicity, need to define a time window for Q and ToT
  • Length of window could be equal to downstream “clustering” window (e.g., 50 µs)

For a 10 MeV threshold, the maximum Δt is ~50 µs (≈ 3 39Ar decays/APA). More extended tracks (in time) could have primitives broken up into 50 µs chunks. In this model, a list of channels, times, and charges is sent as primitives every 50 µs (or whatever Δt).

D. Last
SLIDE 44

APA Level “L0”

Going from Trigger Primitives to “Trigger Candidates”

Trigger Candidates are based on “clustering” of hit wires, total charge of adjacent wires, and maximum time-over-threshold. Trigger Candidates must include:

  • Either putative energy and/or an energy-dependent “weight”
    • (At a minimum, must have 2 thresholds: low E for SNs and high E for beam, atmospherics, etc.)
  • Start time for “track”

The dominant source of background is neutrons from the rock (etc.); the detector is entirely unshielded.

Alex Booth
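As an illustration of the clustering described above, here is a crude sketch that groups primitives adjacent in time and channel and applies the two energy thresholds; all cut values and field names are placeholders, not DUNE parameters:

```python
def make_candidates(primitives, ch_gap=2, tick_gap=100,
                    low_e_adc=500.0, high_e_adc=5000.0):
    """primitives: (channel, start_tick, summed_adc, peak_adc, tot) tuples."""
    # Crude 1-D clustering: sort by time, split on time/channel gaps.
    clusters, cur = [], []
    for p in sorted(primitives, key=lambda p: (p[1], p[0])):
        if cur and (p[1] - cur[-1][1] > tick_gap or abs(p[0] - cur[-1][0]) > ch_gap):
            clusters.append(cur)
            cur = []
        cur.append(p)
    if cur:
        clusters.append(cur)

    candidates = []
    for c in clusters:
        q = sum(p[2] for p in c)                 # total charge of adjacent wires
        if q < low_e_adc:
            continue                             # below even the low-E threshold
        candidates.append({
            "start_tick": min(p[1] for p in c),  # start time for the "track"
            "n_wires": len({p[0] for p in c}),
            "charge": q,                         # putative energy proxy
            "max_tot": max(p[4] for p in c),     # maximum time-over-threshold
            "kind": "high_E" if q >= high_e_adc else "low_E",  # the 2 thresholds
        })
    return candidates
```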

SLIDE 45

Module Level “L1”

Goal is to go from Trigger Candidates to Trigger Commands

For supernova bursts, the simplest case is to count trigger candidates in a fixed time window (cf. Alex Booth), such as 10 s.

The tradeoff will always be between the fake burst rate (= coverage) and the total duration of the burst.

Alex Booth

SLIDE 46

Module Level “L1”

Goal is to go from Trigger Candidates to Trigger Commands

A more sophisticated approach would be to use weights:

N_int = Σ_(10 s) w_i(E_i) > N_burst

So, e.g., w_i = 1.0 for E_i = 18 “MeV” and w_i = 0.01(?) for E_i = 3 “MeV”(?). This has the nice behavior that we could trigger on (maybe even) just two 18 MeV neutrinos, extending our sensitivity to some fraction of more distant events (and non-standard events).
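A sketch of this weighted-window logic; the weight function below simply interpolates between the slide's two example points and is a placeholder, not a tuned curve:

```python
from collections import deque

class WeightedBurstTrigger:
    """Fire when the weighted candidate count in a sliding window reaches N_burst."""
    def __init__(self, window_s=10.0, n_burst=2.0):
        self.window_s, self.n_burst = window_s, n_burst
        self.hits, self.wsum = deque(), 0.0

    @staticmethod
    def weight(e_mev):
        # Placeholder: log-linear ramp from w=0.01 at 3 "MeV" to w=1.0 at 18 "MeV".
        if e_mev >= 18.0:
            return 1.0
        return max(0.01, 0.01 ** ((18.0 - e_mev) / 15.0))

    def add_candidate(self, t, e_mev):
        self.hits.append((t, self.weight(e_mev)))
        self.wsum += self.hits[-1][1]
        while self.hits and t - self.hits[0][0] > self.window_s:
            self.wsum -= self.hits.popleft()[1]
        # With n_burst=2.0, two w=1.0 (18 MeV) candidates in the window suffice.
        return self.wsum >= self.n_burst
```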

SLIDE 47

Module Level “L1”

Goal is to go from Trigger Candidates to Trigger Commands

L1 is also the place where we would add in information from the photon system (cf. Pierre Lasorak). It could also help to reject “neutron bursts” from cosmics going through dead regions like rock.

SLIDE 48

High Level Trigger “L2”

Goal is to Provide Additional Filtering After L1

  • May or may not operate in same physical server as L1
  • Apply more sophisticated algorithms (e.g., machine learning) to reject high-rate single events that are clearly not physics (e.g., HV discharge) or fake supernova bursts

SLIDE 49

External Trigger (“L3”?)

Goal is to do Inter-Module and Inter-Detector Triggers

  • Views all modules at once
  • Reduces rate from uncorrelated events (e.g., instrumentals, neutrons, etc.)
  • Also talks to other local detectors (e.g., LZ, HALO+, THEIA…) and could form realtime coincidences

SLIDE 50

TP DAQ Schemes

SLIDE 51

Technical Proposal Model

The “Nominal” design

SLIDE 52

Technical Proposal Model

The “Alternate” design