  1. Data Selection Workshop: Welcome, Overview, Goals (Josh Klein, Penn)

  2. Welcome to Penn
     First university in the US, Ben Franklin, ENIAC, blah blah blah

  3. Overview
     • Data Selection is hierarchical (see the sketch after this slide)
       • Trigger Primitives generated at the front end
       • Trigger Candidates formed at “APA Level” or “L0”
       • Module-level trigger commands generated from candidates at “L1”; sent to buffer(s)
       • External triggers across all modules, other detectors
       • After event building, additional HLT (L2) may be applied to filter triggers
     • Readout for the TDR is simple-minded
       • For a high-energy (> 10 MeV) L1 trigger, everything is read out for 5.4 ms
     • Supernova bursts are handled differently
       • Many low-threshold candidates required at L1 (hopefully with a weighting scheme)
       • A long buffer (-10 s, +20 s) of everything is saved and “dumped” if an SN burst is detected
     • Some calibrations are handled differently
       • FE electronics
       • Laser system
       • 39Ar?
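
To make the hierarchy above concrete, here is a minimal Python sketch of the flow from L0 trigger candidates to module-level (L1) trigger commands. The class names, fields, and the candidate-count condition for a supernova burst are illustrative assumptions, not DUNE DAQ interfaces; only the >10 MeV threshold, the 5.4 ms readout, and the (-10 s, +20 s) buffer come from the slide.

    # Illustrative sketch of the hierarchical data selection described above.
    # Names, fields, and the SN multiplicity condition are placeholders,
    # not the actual DUNE DAQ interfaces.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TriggerCandidate:            # formed per APA at L0 from trigger primitives
        apa: int
        time_s: float
        energy_mev: float              # rough visible-energy estimate

    @dataclass
    class TriggerCommand:              # issued by the module-level trigger (L1)
        t_start_s: float
        t_end_s: float
        kind: str                      # "high_energy" or "sn_burst"

    HIGH_E_THRESHOLD_MEV = 10.0        # "high-energy (> 10 MeV)" per the slide
    READOUT_WINDOW_S = 5.4e-3          # full 5.4 ms readout
    SN_MIN_CANDIDATES = 100            # assumed low-threshold multiplicity for a burst

    def module_level_trigger(tcs: List[TriggerCandidate]) -> Optional[TriggerCommand]:
        """Toy L1 decision on one batch of trigger candidates."""
        high_e = [tc for tc in tcs if tc.energy_mev > HIGH_E_THRESHOLD_MEV]
        if high_e:
            t0 = min(tc.time_s for tc in high_e)
            return TriggerCommand(t0, t0 + READOUT_WINDOW_S, "high_energy")
        if len(tcs) >= SN_MIN_CANDIDATES:
            # many low-threshold candidates: dump the long buffer around the burst
            t0 = min(tc.time_s for tc in tcs)
            return TriggerCommand(t0 - 10.0, t0 + 20.0, "sn_burst")
        return None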

  4. [Block diagram: within Module 1, per-APA Trigger Primitive Generation (software or firmware) feeds Trigger Candidate Generation (software) for APA 1 … APA 150; the candidates go to a [mostly] software Module-Level Trigger, which issues trigger commands and SN trigger commands and exchanges External Triggers; the same chain is shown for Module 4 with detectors Det 1 … Det N]

  5. DS Scope: 1 SP TPC
     [Block diagram: Run Control & Config Databases; Front End Systems performing trigger primitive generation and L0 (APA-level) data selection [x150]; L1 (module-level) data selection; External & Calibration Signals; long-term storage of primitives; Event Builder (per module); L2 (high-level data selection), applied to each event independently; permanent storage at Fermilab]

  6. Goals for this Workshop

  7. Goals for this Workshop
     The LBNC meeting has reinforced the need for:
     • Requirements for DAQ that follow ultimately from physics requirements
     • Hardware costs that are grounded in actual tests and calculations
     • Labor needs (and costs) that are matched to requirements
     • Narrowing of options in advance of the TDR, and focus on development of those
     The era of “It could be this way or that way…” and “This will probably work…” is over.

  8. Goals for this Workshop: Requirements
     This is not so easy. Physics requirements relevant for DAQ are not always directly useful.

  10. Goals for this Workshop: Requirements
      Data Selection physics requirements takeaway:
      • We must be better than 90% efficient for Evis > 100 MeV
        • Goal is 99%; achieved by 50% at ~10 MeV (see the toy check after this slide)
      • But other factors affect this: dead APAs, HV downtime, DAQ downtime…
        • So it is not clear what our requirement is…
      • We should plan to detect neutrons
        • “Full” 5.4 ms readout achieves this
      • Zero suppression is either a requirement or not:
        • We should “optimize energy resolution over efficiency” for Evis < 100 MeV
        • But we also need to see de-excitation gammas from CC events…
      • We need the beam timeline in data selection (Module Level/L1)
        • Already discussed
      • No dead time
        • Already discussed
      • Need to identify fake SN bursts as fake in 1 minute (!)
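
Reading the numbers above as constraints on the trigger turn-on curve (better than 90%, goal 99%, above Evis = 100 MeV; roughly 50% near 10 MeV), the toy check below evaluates an assumed sigmoid-shaped turn-on against them. The curve shape and its width are illustrative assumptions, not DUNE simulation results.

    # Toy check of an assumed turn-on curve against the stated efficiency numbers.
    # The sigmoid shape and width are illustrative, not DUNE results.
    import math

    def toy_efficiency(e_vis_mev: float, e_half: float = 10.0,
                       width_mev: float = 5.0) -> float:
        """Sigmoid turn-on: 50% at e_half, approaching 100% well above it."""
        return 1.0 / (1.0 + math.exp(-(e_vis_mev - e_half) / width_mev))

    eff_100 = toy_efficiency(100.0)
    print(f"efficiency at 100 MeV: {eff_100:.4f}")
    print("meets > 90% requirement:", eff_100 > 0.90)
    print("meets 99% goal:         ", eff_100 > 0.99)
    print(f"efficiency at 10 MeV:  {toy_efficiency(10.0):.2f}  (50% by construction)")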

  11. Goals for this Workshop: Requirements
      But nothing about:
      • How much data to store, or the length of the window, in the event of a supernova burst
      • Trigger efficiency per se, or the distinction between “acceptance” and efficiency
      • Total data volume (a back-of-envelope estimate follows this slide)
      • What to do about calibrations
      We have our own versions of these, but they are not tied to the physics requirements in any direct way. We need agreement on what these should be, and on how they tie to the physics requirements.
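
To give a sense of scale for the "total data volume" and supernova-window questions, here is a back-of-envelope estimate for one SP module. The 5.4 ms window, the 150 APAs, and the (-10 s, +20 s) burst buffer come from the slides; the channel count per APA, sampling rate, and bytes per sample are assumptions for illustration, with no compression or zero suppression applied.

    # Back-of-envelope data volumes for one single-phase module.
    # Window lengths and APA count are from the slides; channel count, sampling
    # rate, and storage per sample are assumptions (uncompressed, not zero-suppressed).
    N_APA            = 150       # APAs per module
    CH_PER_APA       = 2560      # assumed TPC channels per APA
    SAMPLE_RATE_HZ   = 2.0e6     # assumed 2 MHz digitization
    BYTES_PER_SAMPLE = 2         # assumed 12-bit ADC stored in 16 bits

    def readout_bytes(window_s: float) -> float:
        return N_APA * CH_PER_APA * SAMPLE_RATE_HZ * window_s * BYTES_PER_SAMPLE

    full_readout = readout_bytes(5.4e-3)         # one high-energy trigger
    sn_dump      = readout_bytes(10.0 + 20.0)    # (-10 s, +20 s) burst buffer dump

    print(f"5.4 ms full readout: {full_readout / 1e9:.1f} GB")   # roughly 8 GB
    print(f"30 s SN buffer dump: {sn_dump / 1e12:.1f} TB")       # roughly 46 TB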

  12. Goals for this Workshop: Informing Costs
      Data selection scope extends from algorithms (but not hardware) for TP generation to L2 and “external triggering.”
      [Block diagram repeated from slide 5: Front End Systems with TP generation and L0 (APA-level) data selection [x150]; L1 (module-level) data selection; External & Calibration Signals; long-term storage of primitives; Event Builder (per module); L2 (high-level data selection), applied to each event independently; permanent storage at Fermilab]

  13. Goals for this Workshop: Informing Costs
      Basic cost model so far assumes:
      1. Processing of trigger primitives (L0) to find trigger candidates is done by hardware distinct from the readout hardware (e.g., FELIX+CPU, etc.)
      2. Generation of Module-Level (L1) triggers is done by yet more distinct hardware (perhaps one single server) and needs PDS, GPS, and calibration information
      3. External triggering is likewise a single machine with interfaces to other modules and detectors
      4. Level 2 high-level triggering is also distinct hardware that looks at built events, but also possibly TPs (less hardware because the built-event rate is low)

  14. Goals for this Workshop: Informing Costs
      Basic cost model so far assumes:
      1. Processing of trigger primitives (L0) to find trigger candidates is done by hardware distinct from the readout hardware (e.g., FELIX+CPU, etc.)
      To defend this approach (or decide against it) we need to:
      • Define the interface between TP generation and L0 (a toy version is sketched after this slide)
      • Develop L0 algorithms that satisfy requirements
      • Test L0 algorithms on “realistic” hardware and data samples to determine speeds and latencies
      • Determine total resources needed to make this work
      A lot of what we’ll discuss these two days is how to do this by…October.
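
To make the TP-to-L0 interface and the speed/latency tests concrete, here is a toy sketch: a trivial time-window clustering of trigger primitives into candidates, plus a crude throughput measurement on fake input. The TP fields, the clustering criterion, and the fake data are placeholders, not DUNE definitions, and the timing says nothing about "realistic" hardware.

    # Toy L0: cluster trigger primitives (TPs) from one APA into trigger candidates
    # (TCs), then time it on fake input. Fields and thresholds are placeholders.
    import random
    import time
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TP:                    # what TP generation hands to L0 (assumed fields)
        channel: int
        t_s: float
        adc_sum: int

    @dataclass
    class TC:                    # what L0 hands to the module-level trigger
        t_start_s: float
        t_end_s: float
        total_adc: int
        n_tps: int

    def l0_find_candidates(tps: List[TP], window_s: float = 100e-6,
                           min_tps: int = 5) -> List[TC]:
        """Cluster time-sorted TPs into candidates within a sliding time window."""
        tcs: List[TC] = []
        cluster: List[TP] = []
        for tp in sorted(tps, key=lambda x: x.t_s):
            if cluster and tp.t_s - cluster[0].t_s > window_s:
                if len(cluster) >= min_tps:
                    tcs.append(TC(cluster[0].t_s, cluster[-1].t_s,
                                  sum(c.adc_sum for c in cluster), len(cluster)))
                cluster = []
            cluster.append(tp)
        if len(cluster) >= min_tps:
            tcs.append(TC(cluster[0].t_s, cluster[-1].t_s,
                          sum(c.adc_sum for c in cluster), len(cluster)))
        return tcs

    # Crude throughput measurement on fake TPs (not "realistic" data or hardware).
    fake_tps = [TP(random.randrange(2560), random.uniform(0.0, 1.0),
                   random.randrange(100, 5000)) for _ in range(100_000)]
    start = time.perf_counter()
    candidates = l0_find_candidates(fake_tps)
    elapsed = time.perf_counter() - start
    print(f"{len(fake_tps)} TPs -> {len(candidates)} TCs in {elapsed * 1e3:.1f} ms")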

  15. Goals for this Workshop: Informing Costs
      Basic cost model so far assumes:
      2. Generation of Module-Level (L1) triggers is done by yet more distinct hardware (perhaps one single server) and needs PDS, GPS, and calibration information
      To defend this approach (or decide against it) we need to:
      • Define the interface between L0 trigger candidates and L1 (see the type sketch after this slide)
      • Define the interface between PDS and L1
      • Define the interface between calibrations and L1
      • Define the interface between global timing and L1
      • Define the interface between L1 and FE readout (trigger commands)
      • Define the interface between L1 and the supernova buffer (trigger commands)
      • Develop L1 algorithms that satisfy requirements
      • Test L1 algorithms on “realistic” hardware and data samples to determine speeds and latencies
      • Determine total resources needed to make this work
      A lot of what we’ll discuss these two days is how to do this by…October.
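
One lightweight way to start pinning down the interfaces listed above is to write each of them as an explicit type, as in the sketch below. Every class, field, and method here is a placeholder for discussion, not an agreed DUNE interface.

    # Sketch of the L1 (module-level trigger) interfaces as explicit types.
    # All names, fields, and methods are placeholders, not agreed interfaces.
    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass
    class TriggerCandidate:          # from L0, per APA
        apa: int
        time_s: float
        energy_mev: float

    @dataclass
    class PDSFlash:                  # from the photon detection system
        time_s: float
        n_photoelectrons: float

    @dataclass
    class TriggerCommand:            # from L1 to FE readout or to the SN buffer
        t_start_s: float
        t_end_s: float
        destination: str             # "fe_readout" or "sn_buffer"

    class TimingSource(Protocol):    # global timing / GPS
        def now_s(self) -> float: ...

    class CalibrationSource(Protocol):   # calibration state relevant to L1
        def in_calibration_run(self) -> bool: ...

    class ModuleLevelTrigger(Protocol):
        """The L1 decision: candidates + PDS + timing + calibration in, commands out."""
        def decide(self,
                   tcs: List[TriggerCandidate],
                   flashes: List[PDSFlash],
                   timing: TimingSource,
                   calib: CalibrationSource) -> List[TriggerCommand]: ...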

  16. Goals for this Workshop: Informing Costs
      Basic cost model so far assumes:
      3. External triggering is likewise a single machine with interfaces to other modules and detectors
      To defend this approach (or decide against it) we need to:
      • Define the bidirectional interface between L1 and the external trigger (a message sketch follows this slide)
      • Define the interface between the external trigger and any other systems
      • Develop external triggering algorithms that satisfy requirements
      • Test external triggering on “realistic” hardware and data samples to determine speeds and latencies
      • Determine total resources needed to make this work
      A lot of what we’ll discuss these two days is how to do this by…October.
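
For the bidirectional interface between L1 and the external trigger, a possible starting point is to define the messages exchanged, as sketched below. The message names, fields, and the forwarding policy are assumptions for discussion only.

    # Sketch of messages between a module's L1 and the external trigger machine.
    # Message names, fields, and the policy are assumptions, not agreed formats.
    from dataclasses import dataclass

    @dataclass
    class TriggerNotification:       # L1 -> external trigger
        module: int                  # which module issued it
        time_s: float
        kind: str                    # e.g. "high_energy" or "sn_burst_candidate"

    @dataclass
    class ExternalTriggerRequest:    # external trigger -> L1 of each module
        time_s: float
        source: str                  # e.g. "module 2" or an outside detector
        window_s: float              # how long the receiving module should read out

    def forward_burst(note: TriggerNotification) -> ExternalTriggerRequest:
        """Toy policy: turn a burst candidate into a request for the other modules."""
        return ExternalTriggerRequest(time_s=note.time_s,
                                      source=f"module {note.module}",
                                      window_s=30.0)   # placeholder window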

  17. Goals for this Workshop: Informing Costs
      Basic cost model so far assumes:
      4. Level 2 high-level triggering is also distinct hardware that looks at built events, but also possibly TPs (less hardware because the built-event rate is low)
      To defend this approach (or decide against it) we need to:
      • Define the interface between TP generation and L2 (if necessary)
      • Define the interface between the event builder and L2
      • Define the interface between L2 and permanent storage
      • Develop L2 algorithms that satisfy requirements (a toy filter is sketched after this slide)
      • Test L2 algorithms on “realistic” hardware and data samples to determine speeds and latencies
      • Determine total resources needed to make this work
      A lot of what we’ll discuss these two days is how to do this by…October.
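
As an illustration of L2 "applied to each event independently", here is a minimal filter sketch. The built-event fields and the selection criteria are placeholders, not DUNE definitions.

    # Sketch of an L2 (high-level) filter applied to each built event independently.
    # Event fields and selection criteria are placeholders, not DUNE definitions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BuiltEvent:
        trigger_kind: str            # e.g. "high_energy" or "sn_burst"
        total_energy_mev: float
        tp_multiplicity: int         # L2 may also look at trigger primitives
        payload: bytes = b""         # the raw readout window

    def l2_accept(event: BuiltEvent) -> bool:
        """Toy decision: keep burst-related events, prune low-activity ones."""
        if event.trigger_kind == "sn_burst":
            return True
        return event.total_energy_mev > 10.0 or event.tp_multiplicity > 50

    def l2_filter(events: List[BuiltEvent]) -> List[BuiltEvent]:
        # each built event is examined on its own, matching the slide-5 diagram
        return [e for e in events if l2_accept(e)]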

  18. The Way Forward
      The existing task list (next slides) needs to get applied to 3 parallel paths:
      1. ProtoDUNE data (very good for testing TP algorithms and maybe L0)
      2. LArSoft simulation (for L0/L1 algorithm development that satisfies physics requirements)
      3. “Local” test stands (e.g., BNL, Columbia, …)
      By my count, DUNE data selection has about 3-4 FTEs working on it now. That sounds…a little bit light….
      (Current costing has 12.4 FTEs in CY19; a maximum of 17.4 in CY20)
