Data storage and event building
  1. Data storage and event building. Kurt Biery, 07 November 2019, DUNE DAQ Physics Performance Working Group Meeting.

  2. Wes’ talk from DSWS, Aug ’18.

  3. My talk in one slide. We should work together to ensure that the downstream DAQ provides the optimal capability for physics, given the cost and effort. Thank you. Questions?

  4. “EB” performance questions. ‘What is the highest trigger rate that the downstream DAQ can deal with?’ One approach is to use experience from protoDUNE as a guide. Another approach is to turn the question around and ask what trigger rates will be needed. This group understands well that we should keep in mind both short-term peak rates (actually, rate and duration) and long-term average rates. Long-term average: < 0.15 TriggerRecords/sec [30 PB/yr / 6.5 GB/TR]; see the check below. The number that I’ve heard most often is 0.1 TR/sec. Less data per TR is likely! During the call, 200 MB/TR was mentioned as a possibility.
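A quick back-of-the-envelope check of the long-term average quoted above. This is a sketch only: the 30 PB/yr and 6.5 GB/TR inputs come from the slide, and the year length is approximate.

    # Long-term average TriggerRecord rate implied by 30 PB/yr and 6.5 GB/TR.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s
    annual_volume_bytes = 30e15                # 30 PB/yr
    trigger_record_bytes = 6.5e9               # 6.5 GB per TriggerRecord (TDR)

    tr_per_second = (annual_volume_bytes / trigger_record_bytes) / SECONDS_PER_YEAR
    print(f"{tr_per_second:.2f} TriggerRecords/s")   # ~0.15 TR/s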

  5. Turning the question around. Elements of this perspective: • ‘The EB shall not be an element limiting the performance of the DAQ.’ • Peak data transfer from SURF -> FNAL will be 100 Gb/s, so EB and storage should cope with more than that (a rough estimate follows below). Some possible questions: • ‘What would be the data rate if we recorded all activity in the detector, identifying regions of interest (say, 5 neighboring wires)?’ • ‘How much minimum-bias data should we take to have the noise under control?’
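As a rough illustration of what the 100 Gb/s figure would mean for trigger rates, assuming (hypothetically) the same 6.5 GB/TriggerRecord size; this is an order-of-magnitude sketch, not a design number.

    # Trigger-rate headroom implied by a 100 Gb/s SURF -> FNAL link.
    link_bytes_per_second = 100e9 / 8          # 100 Gb/s = 12.5 GB/s
    trigger_record_bytes = 6.5e9               # assumed 6.5 GB/TR (TDR)

    max_tr_rate = link_bytes_per_second / trigger_record_bytes
    print(f"~{max_tr_rate:.1f} TriggerRecords/s")    # ~1.9 TR/s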

  6. A ‘collaborative’ approach. DFWG works to understand what is possible. PP WG works to understand what is needed. We meet together to pick the optimal solution.

  7. Possibly useful graph 1. To help move this discussion forward, imagine the following hypothetical graph produced by the Dataflow working group. [Graph: downstream system cost vs. supported downstream DAQ throughput]

  8. Possibly useful graph 1b. As much as possible, this could/should include different EB/storage models and technologies. [Graph: downstream system cost vs. supported downstream DAQ throughput]

  9. Possibly useful graph 2. From the perspective of ‘what is needed’, we can also imagine a different hypothetical graph, this one produced by the Physics Performance working group. Possibly such graphs already exist. [Graph: usefulness for physics vs. supported downstream DAQ throughput]

  10. Complications. The simple graphs on the previous pages may be good starting points, but I’m sure that people will suggest variations. For example, certain studies may be best implemented with a non-standard readout scheme, e.g. partial detector readout. To make fair comparisons, those would need to be fed back into the DFWG graphs. Worse, different readout schemes might be best handled by different event building schemes.

  11. Dealing with the complication. Maybe there will be a better choice of x-axis that will avoid some of the complication. To understand whether that is true, we’ll need to know more about the needs. [Graph: usefulness for physics vs. supported downstream fragment rate?]

  12. For various physics scenarios: • Description • Proposed readout scheme • Useful range of trigger rates • Expected range of data volume • [other characteristics?] [How to generate these, and where to keep track of them?] One possible way to record them is sketched below.
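One possible way to collect these scenario descriptions in a single, comparable form is sketched below. The field names and the example values are hypothetical placeholders, not agreed-upon parameters.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PhysicsScenario:
        description: str
        readout_scheme: str                          # e.g. full, partial, ROI-only readout
        trigger_rate_hz: Tuple[float, float]         # useful range of trigger rates
        data_volume_gb_per_tr: Tuple[float, float]   # expected range of data volume per TR
        notes: str = ""                              # other characteristics

    # Placeholder entry, purely illustrative:
    example = PhysicsScenario(
        description="example scenario",
        readout_scheme="partial detector readout",
        trigger_rate_hz=(0.01, 0.1),
        data_volume_gb_per_tr=(0.2, 6.5),
    )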

  13. Using experience as a guide. A couple of plots from protoDUNE show: • 60 Hz of 36 MB events with 11 EBs (2.1 GB/s) • 38 Hz of 67 MB events with 7 EBs (2.5 GB/s) I’m sure that others have additional data. Taking 2.5 GB/s as a demonstrated building/writing performance value, and the 6.5 GB/TriggerRecord size from the TDR: • ~0.4 TriggerRecords / second • Some uncertainty in that calculation (the arithmetic is shown below).
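The arithmetic behind the ~0.4 TriggerRecords/s figure, as a sketch: it simply divides the demonstrated protoDUNE throughput by the TDR TriggerRecord size and ignores differences between protoDUNE and far-detector conditions.

    demonstrated_throughput_gb_s = 2.5   # protoDUNE: 38 Hz x 67 MB events with 7 EBs
    trigger_record_gb = 6.5              # TDR TriggerRecord size

    tr_rate = demonstrated_throughput_gb_s / trigger_record_gb
    print(f"~{tr_rate:.2f} TriggerRecords/s")   # ~0.38 TR/s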

  14. Plots from protoDUNE SP.

  15. Background regarding storage/EB. One of the storage/‘event building’ models that we are considering is a distributed one. • Each ‘readout process’ writes its ‘data fragment’ for each Trigger Record into a ‘shared file’. The subdetector components that contribute a fragment to each TR are tracked. • The full or partial data from TriggerRecords (whichever is needed) is delivered to real-time DQM, event displays, etc. within the DAQ, and also to the HLF, if needed. • The data from each Trigger Record is packaged in a format that is convenient for later analysis and shipped to Fermilab. In this description, I’ve glossed over lots of details (“magic happens”), and the work of understanding and demonstrating those details is part of what we need to do in the DFWG.
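A minimal sketch of the ‘shared file’ idea described above, assuming a hypothetical append-only layout and an in-memory bookkeeping map. None of the names or formats here are part of the actual design; working that out is part of the DFWG effort.

    import json
    from collections import defaultdict

    # Which subdetector components have contributed a fragment to each TriggerRecord.
    contributions = defaultdict(set)

    def write_fragment(shared_file, trigger_record_id, component, payload: bytes):
        """Append one readout process's fragment for one TriggerRecord."""
        header = {"tr": trigger_record_id, "component": component, "size": len(payload)}
        shared_file.write(json.dumps(header).encode() + b"\n")
        shared_file.write(payload)
        contributions[trigger_record_id].add(component)

    # Each readout process would call write_fragment() on its own file opened in
    # binary append mode; the 'contributions' map tells downstream consumers
    # (real-time DQM, event displays, offline packaging) which fragments exist.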

  16. A few notes on the discussion. There was a lot of discussion during this presentation, which was great. As a result, the slides were not really presented in the order in which they appear here. One outcome of the discussion was that Josh volunteered to give a presentation at an upcoming meeting on the physics scenarios that are currently being considered, and the table on slide 12 was seen as a useful guide for parts of that.
