DUNE Trigger Requirements (PowerPoint presentation)


SLIDE 1

DUNE Trigger Requirements

  • Requirements are not “specifications”
  • But best if they are not so generic as to be useless: “System X will do what it needs to…”
  • Should be absolute big-picture goals that the system must meet (and be achievable!)
  • In principle should be derived from higher-level (e.g. physics) requirements
  • Usually phrased as, “System X shall…”

Have contacted various group leaders and experts but still need more input! (Probably some from Amanda’s talk today)

SLIDE 2

DUNE Trigger Requirements

Starting Points:

SLIDE 3

Data selection shall:

  • operate on both SP and DP data streams

Rationale: Collaboration requirement

  • act independently and in concert across all four detector modules

Rationale: Maintain sensitivity during downtimes as well as the full 40 kt for supernova bursts

  • enable filtering of the data stream so that data on disk can be limited to < 10 PB/year

Rationale: Reasonable limit for downstream analysis and storage
Comments: “Enable” means downstream nearline processing can filter further

  • be deadtimeless

Rationale: No excuse to be otherwise; maximum livetime for follower events and supernova bursts

  • act on information from the TPC, PDS, timing system, and any auxiliary calibration systems

Rationale: Provide robustness to variations in noise; needed for detector calibrations

  • include timestamps for all events of interest

Rationale: Allow tests in offline analysis; coordinate with other experiments; provide unique IDs

  • provide tags for various types of selected data, written as part of the data stream

Rationale: Provide simple sorting for online and offline analyses; allow systematic checks

  • act as a “master” for calibration systems as necessary

Rationale: Simplify tagging and the run-control interface

  • provide self-triggering modes (e.g., random triggers, pulsers)

Rationale: Systematic checks of overall detector health
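The timestamp and tag requirements above can be pictured as a minimal event-record layout. This is a sketch only; the field names and tag strings are illustrative assumptions, not DUNE data-format definitions:

```python
from dataclasses import dataclass, field

# Illustrative trigger tags -- the real tag vocabulary would be
# defined by the experiment, not by this sketch.
SN_BURST = "sn_burst"
BEAM = "beam"
RANDOM = "random"
CALIB = "calibration"

@dataclass
class TriggerRecord:
    """One selected event, as written into the data stream."""
    timestamp_ns: int   # absolute timestamp; doubles as a unique event ID
    module: str         # which of the four detector modules fired
    tags: list = field(default_factory=list)  # trigger types that selected it

# An event may carry several tags, enabling simple online/offline sorting.
rec = TriggerRecord(timestamp_ns=1_700_000_000_123_456_789,
                    module="SP-1", tags=[BEAM])
rec.tags.append(CALIB)
```

Writing the tags into the stream itself (rather than a side database) is what makes the later systematic checks cheap.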

SLIDE 4

Data selection shall:

  • provide pre-scaling for all trigger types as well as rejected data

Rationale: May need to deal with high-rate calibration sources, or low-threshold triggers

  • provide a coarse estimate of the event centroid

Rationale: Allow zero suppression via location

  • provide statistics for data selection to online monitoring

Rationale: Detector health feedback

  • allow partitioning for commissioning and debugging, within modules

Rationale: Commissioning of elements that may come together at different times
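Pre-scaling simply means keeping 1 out of every N triggers of a given type, so high-rate sources can be sampled without saturating the bandwidth. A minimal sketch; the prescale factors below are made-up examples:

```python
from collections import defaultdict

class Prescaler:
    """Keep every N-th trigger of each type; N = 1 keeps everything."""
    def __init__(self, factors):
        self.factors = factors             # trigger type -> prescale factor N
        self.counts = defaultdict(int)     # triggers seen, per type

    def accept(self, trigger_type):
        n = self.factors.get(trigger_type, 1)   # unknown types: keep all
        self.counts[trigger_type] += 1
        return self.counts[trigger_type] % n == 0

# Example: keep all beam triggers, but only 1 in 100 pulser triggers.
ps = Prescaler({"beam": 1, "pulser": 100})
kept = sum(ps.accept("pulser") for _ in range(1000))   # -> 10 kept
```

The same counter mechanism applied to rejected events gives an unbiased sample for measuring the selection's own efficiency.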

SLIDE 5

Data selection shall:

  • be >99% efficient for selection of neutrino events within beam spills with Evis > 100 MeV

Rationale: Best exploitation of beam data

  • be >99% efficient for selection of atmospheric νs and nucleon-decay events with Evis > 100 MeV

Rationale: Smaller total flux in the low (<100 MeV) window
Comments: Change this to 50 MeV? 20 MeV…?

  • be >90% efficient for selection of supernova bursts within the Milky Way

Rationale: Seems reasonable?

  • be >90% efficient for single supernova events within a burst for Evis > 5 MeV

Rationale: Seems reasonable?

  • have a supernova-burst false-trigger rate contributing < 10% of the total data bandwidth

(100 µs × 2 MHz × 12 bits ÷ 8 bits/byte × 40 MBq = 12 GB/s for the full detector = 120 PB/year)

“Loose” zero suppression still needs about a ×10 reduction
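The bandwidth arithmetic on this slide can be checked directly. A sketch; the ~10^7 s of effective livetime per year is my assumption, chosen to reproduce the slide's 120 PB figure:

```python
# 39Ar-dominated raw rate: each decay reads out 100 µs of waveform,
# sampled at 2 MHz with 12-bit ADC values, at 40 MBq over the full detector.
readout_s   = 100e-6     # s per readout window
sample_rate = 2e6        # samples/s
bits        = 12         # bits per ADC sample
decay_rate  = 40e6       # Bq, full detector

bytes_per_window = readout_s * sample_rate * bits / 8    # 300 bytes/decay
rate_gb_s = bytes_per_window * decay_rate / 1e9          # 12 GB/s

livetime_s  = 1e7        # assumed effective seconds/year (my assumption)
pb_per_year = rate_gb_s * 1e9 * livetime_s / 1e15        # 120 PB/year
```

Against the < 10 PB/year disk requirement, this raw rate confirms the slide's conclusion: even loose zero suppression leaves roughly a factor of 10 still to find.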

SLIDE 6

Data Selection Critical Questions

  • Is a simple trigger criterion enough to remove 39Ar “SN Bursts”?

At an adjacent multiplicity of 6 hits and ∆t = 750 µs, we get about 1 Hz (1 event each!) in 10 ktonnes. So 10 “SN events” in 10 s. Tiny probability of 100 events in 10 s. (Expect ~250 events/10 ktonnes at 10 kpc)

[Plot: Ntuple trigger rate (Hz) vs. adjacent wire multiplicity (2–16), for ∆t = 5, 2, 1, 0.75, 0.5, 0.25 ms. (Warburton, Rivera)]
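The “tiny probability” can be quantified with a quick Poisson tail bound, assuming the ~1 Hz of fakes is Poisson-distributed (a sketch using only the standard library):

```python
from math import lgamma, log

def log10_poisson_tail(mean, k):
    """log10 of an upper bound on P(N >= k) for Poisson(mean), k > mean.
    Bounds the tail by a geometric series starting at the k-th term:
    successive term ratios are mean/(j+1) <= mean/(k+1) for j >= k."""
    log_pmf = -mean + k * log(mean) - lgamma(k + 1)   # ln P(N = k)
    geom = 1.0 / (1.0 - mean / (k + 1))               # geometric-sum factor
    return (log_pmf + log(geom)) / log(10)

# 1 Hz of fakes for 10 s -> mean of 10; chance of >= 100 fakes:
print(log10_poisson_tail(10, 100))   # about -62: utterly negligible
```

The log-gamma form avoids the overflow/underflow that a direct 10^100/100! evaluation would hit.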

SLIDE 7

Data Selection Critical Questions

  • Is a simple trigger criterion enough to remove 39Ar “SN Bursts”?

At an adjacent multiplicity of 4 hits and ∆t = 50 µs, we get about 0.1 Hz (1 event each!) in 10 ktonnes. So 1 “SN event” in 10 s. Tiny probability of 100 events in 10 s. (Expect ~250 events/10 ktonnes at 10 kpc)

[Plot: Ntuple trigger rate (Hz) vs. adjacent wire multiplicity (2–10), for ∆t = 500, 50, 40, 20 µs. (Rivera, Warburton)]

SLIDE 8

Data Selection Critical Questions

  • What are we storing for any given “SN Burst”?

If we store everything (no zero suppression for 10 s), that is ~450 TB for each burst. We can tolerate just 2 such bursts each year if we stick with our 10%-of-total-data requirement. Zero-suppressing just the collection wires brings this down to ~250 GB, so we could tolerate ~4000 fake bursts, or ~10/day. If we only ever save zero-suppressed data, data selection needs only a ~1/5 reduction; it basically becomes a tagging scheme.
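The burst-storage budget works out as follows (a sketch re-deriving the slide's numbers; the 10 PB/year disk limit and the 10% burst share come from the requirements above):

```python
total_disk_pb = 10.0    # PB/year on disk (requirement from slide 3)
burst_share   = 0.10    # SN-burst false triggers may use 10% of it
budget_tb = total_disk_pb * 1000 * burst_share   # 1000 TB/year for bursts

full_burst_tb = 450.0   # no zero suppression, 10 s of full readout
zs_burst_tb   = 0.250   # collection wires zero-suppressed (~250 GB)

print(budget_tb / full_burst_tb)       # ~2 full-readout bursts per year
print(budget_tb / zs_burst_tb)         # ~4000 zero-suppressed bursts per year
print(budget_tb / zs_burst_tb / 365)   # ~10 fake bursts per day
```

The ~2000× gap between the two storage modes is what drives the fake-burst-rate requirement.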

SLIDE 9

Data Selection Critical Questions

  • Is simple adjacent multiplicity enough for high energy (non-beam) events?

p → K⁺ν̄ events can have the K⁺ going normal to the wire planes---we lose these if we require high multiplicity, but the acceptance loss is probably small. Nevertheless, we can gain these back with time-over-threshold and/or total charge---what does multiplicity vs. time-over-threshold vs. charge look like for high-energy events?
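One way to frame the question: the selection could be an OR of multiplicity, time-over-threshold, and total charge, so a K⁺ travelling normal to the wire planes (few adjacent wires, but long time-over-threshold on them) is still caught. A sketch with made-up placeholder thresholds, not tuned values:

```python
def select_high_e(n_adjacent, tot_us, charge_adc,
                  min_mult=6, min_tot_us=100.0, min_charge=5e4):
    """OR of three criteria; all thresholds are illustrative placeholders."""
    return (n_adjacent >= min_mult        # wide tracks: many adjacent wires
            or tot_us >= min_tot_us       # normal-going tracks: long ToT
            or charge_adc >= min_charge)  # backstop: total collected charge

# K+ heading normal to the planes: only 2 adjacent wires,
# but a long time-over-threshold recovers the event.
print(select_high_e(n_adjacent=2, tot_us=300.0, charge_adc=2e4))  # True
```

Plotting the three variables against each other for simulated high-energy events, as the slide asks, is exactly what would fix these thresholds.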

SLIDE 10

Data Selection Critical Questions

  • What should we assume about coherent noise?

Coherent noise may be rejected with a ∆t > N µs cut---how big is the acceptance loss? And what does the collection-wire “charge” look like for this?