What have we learned from building the LHC (CMS) DAQ systems

S. Cittolin, PH-CMD. CERN Openlab meeting, 3-4 March 2009.


  1. What have we learned from building the LHC (CMS) DAQ systems. S. Cittolin, PH-CMD. CERN Openlab meeting, 3-4 March 2009. Outline: DAQ at LHC overview, CMS systems, project timeline, CMS experience.

  2. Trigger and DAQ overview: collisions at LHC, the four experiments, readout and event selection, trigger-level architectures.

  3. Proton-proton collisions at LHC. Collision rate: ~10^9 Hz. Event selection: ~1/10^13.
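
A back-of-envelope sketch of what these two slide numbers imply together; the product (about one selected event every few hours) is my arithmetic, not a figure from the slide.

```python
# Selection arithmetic implied by the slide's two numbers.
collision_rate_hz  = 1e9     # ~10^9 p-p collisions per second
selection_fraction = 1e-13   # ~1 event kept out of 10^13

selected_rate_hz = collision_rate_hz * selection_fraction
print(f"Selected rate: {selected_rate_hz:.0e} Hz")              # 1e-04 Hz
print(f"Time between selections: {1 / selected_rate_hz:.0e} s") # ~10^4 s (~3 hours)
```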

  4. Data detection and event selection. Operating conditions: one "good" event (e.g. Higgs into 4 muons) plus ~20 minimum-bias events per crossing. All charged tracks with pT > 2 GeV; reconstructed tracks with pT > 25 GeV. Detector granularity: ~10^8 cells. Event size: ~1 MByte. Processing power: multi-TFlop.
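
A rough occupancy sketch relating the quoted granularity to the quoted event size; the ~1 byte per fired cell after zero suppression is an illustrative assumption of mine, not stated on the slide.

```python
# Granularity vs. event size (assumes ~1 byte per fired cell, illustrative).
cells = 1e8              # detector granularity: ~10^8 cells
event_size_bytes = 1e6   # ~1 MByte per event

occupancy = event_size_bytes / cells
print(f"Implied cell occupancy per event: {occupancy:.1%}")  # ~1.0%
```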

  5. General-purpose p-p detectors at LHC.

  6. The experiments.
     ATLAS: study of pp collisions. Tracker: Si (Pixel and SCT), TRT. Calorimeters: LAr, scintillating tiles. Muon system: MDT, RPC, TGC, CSC. Magnets: solenoid and toroid.
     CMS: study of pp and heavy-ion collisions. Tracker: Si (pixels, strips, discs). Calorimeters: BGO, brass scintillators, preshower. Muon system: RPC, MDT, CSC. Superconducting solenoid.
     ALICE: study of heavy-ion collisions. Tracker: Si (ITS), TPC, chambers, TRD, TOF. Particle ID: RICH, PHOS (scintillating crystals), RPC, FMD (forward multiplicity; Si), ZDC (zero-degree calorimeter). Magnets: solenoid, dipole.
     LHCb: study of CP violation in B decays (pp). Tracker (Si, VELO), 2 RICH, 4 tracking stations (straw tubes, Si), SPD (scintillator pads), preshower, ECAL (lead/scintillator), HCAL (steel/scintillator), muon stations (MWPCs).

  7. 40 MHz crossing: front-end structure. High-precision (~100 ps) timing, trigger and control distribution. 40 MHz digitizers and 25 ns pipeline readout buffers. 40 MHz Level-1 trigger (massively parallel pipelined processors). Multi-level event selection architecture with front-end pipeline readout.
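
The depth of the 25 ns front-end pipeline is set by the Level-1 decision latency; a sketch assuming a ~3.2 us latency, a typical LHC front-end figure that is not quoted on this slide.

```python
# Front-end pipeline depth = Level-1 latency / bunch spacing.
bunch_spacing_s  = 25e-9    # 40 MHz crossing rate
level1_latency_s = 3.2e-6   # assumed typical Level-1 latency (not on the slide)

depth = level1_latency_s / bunch_spacing_s
print(f"Required pipeline depth: {depth:.0f} samples")  # 128
```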

  8. CMS front-end readout systems.

  9. Level-1 trigger systems: massively parallel pipelines.

  10. Multi-level trigger DAQ architecture.
      On-line requirements: event rate 1 GHz; Level-1 trigger input 40 MHz; Level-2 trigger input 100 kHz; further levels down to a mass storage rate of ~100 Hz; event size 1 MByte; online rejection 99.999%; system dead time at the ~% level.
      Off-line / DAQ design issues: data network bandwidth (EVB) ~Tb/s; computing power (HLT) ~10 TFlop; computing cores ~10000; local storage ~300 TB. Minimize custom design; exploit data communication and computing technologies; stage the DAQ by modular design (scaling). The rates are cross-checked in the sketch below.
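
A minimal sketch checking the rate cascade and event-builder bandwidth implied by this slide; the only inputs are the numbers quoted above.

```python
# Rate-cascade cross-check for the on-line requirements above.
l1_input_hz = 40e6    # Level-1 trigger input (40 MHz crossing rate)
l2_input_hz = 100e3   # Level-2 input = Level-1 accept rate (100 kHz)
storage_hz  = 100     # mass storage rate (~100 Hz)
event_bytes = 1e6     # ~1 MByte per event

rejection = 1 - storage_hz / l1_input_hz
evb_tbps = l2_input_hz * event_bytes * 8 / 1e12
print(f"Overall online rejection: {rejection:.5%}")  # 99.99975%, i.e. ~99.999%
print(f"EVB bandwidth: {evb_tbps:.1f} Tb/s")         # 0.8 Tb/s, i.e. ~Tb/s
```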

  11. LHC trigger and DAQ summary.
      Experiment | No. levels | Level-0/1/2 trigger rate (Hz) | Event size (Byte) | Readout bandw. (GB/s) | HLT out, MB/s (event/s)
      ATLAS | 3 | LV-1 10^5, LV-2 3x10^3 | 1.5x10^6 | 4.5 | 300 (2x10^2)
      CMS | 2 | LV-1 10^5 | 10^6 | 100 | O(1000) (10^2)
      LHCb | 2 | LV-0 10^6 | 3x10^4 | 30 | 40 (2x10^2)
      ALICE | 4 | Pb-Pb 500, p-p 10^3 | 5x10^7 (Pb-Pb), 2x10^6 (p-p) | 25 | 1250 (10^2) Pb-Pb, 200 (10^2) p-p
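
The readout-bandwidth column follows from accept rate x event size; a quick consistency check over the table's own numbers (the experiment assignments are the standard ones, inferred here from the trigger-level labels):

```python
# Readout bandwidth = rate into the event builder x event size.
experiments = {                   # (accept rate Hz, event size Byte)
    "ATLAS (after LV-2)": (3e3, 1.5e6),
    "CMS   (after LV-1)": (1e5, 1e6),
    "LHCb  (after LV-0)": (1e6, 3e4),
    "ALICE (Pb-Pb)":      (500, 5e7),
}
for name, (rate_hz, size_b) in experiments.items():
    print(f"{name}: {rate_hz * size_b / 1e9:6.1f} GB/s")  # 4.5, 100, 30, 25
```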

  12. LHC DAQ architecture: DAQ technologies, DAQ systems at LHC.

  13. Evolution of DAQ technologies and structures.
      PS, 1970-80: minicomputers. Custom-designed detector readout and event building; first standard: CAMAC. On-line processing. Software: no OS, Assembler. Off-line data store. kByte/s.
      p-p/LEP, 1980-90: microprocessors. HEP proprietary standards (Fastbus), industry standards (VME). Embedded CPUs, servers. Software: RTOS, Assembler, Fortran. MByte/s.
      LHC, 200X: networks/clusters/grids. PCs, PCI, clusters, point-to-point switches. Software: Linux, C, C++, Java, Web services. Protocols: TCP/IP, I2O, SOAP. TByte/s.

  14. LHC trigger and data acquisition systems. An LHC DAQ is a computing and communication network. A single network cannot satisfy all the LHC requirements at once, therefore present LHC DAQ designs are implemented as multiple (specialized) networks. (Diagrams: ALICE, ATLAS, LHCb.)

  15. CMS DAQ baseline structure.
      Collision rate: 40 MHz. Level-1 maximum trigger rate: 100 kHz. Readout concentrators/links: 512 x 4 Gb/s. Event builder bandwidth (max.): 2 Tb/s. Average event size: ~1 MByte. Event filter computing power: ~10 TeraFlop. Flow control and monitor: ~10^6 messages/s. Data production: ~TByte/day. Processing nodes: ~thousands. See the consistency sketch below.
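
A consistency sketch for the baseline figures on this slide: the 512 readout links at 4 Gb/s give the quoted ~2 Tb/s event-builder ceiling, comfortably above the 0.8 Tb/s payload at the maximum Level-1 rate.

```python
# CMS DAQ baseline consistency check (numbers from the slide above).
links, link_gbps = 512, 4            # readout concentrators/links
l1_rate_hz, event_bytes = 100e3, 1e6 # max Level-1 rate, average event size

aggregate_tbps = links * link_gbps / 1e3
payload_tbps = l1_rate_hz * event_bytes * 8 / 1e12
print(f"Link aggregate:     {aggregate_tbps:.3f} Tb/s")  # ~2 Tb/s
print(f"Payload at 100 kHz: {payload_tbps:.1f} Tb/s")    # 0.8 Tb/s
```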

  16. Two trigger levels.
      Level-1: massively parallel processors, 40 MHz synchronous. Particle identification: high-pT electrons, muons, jets, missing ET. Local pattern recognition and energy evaluation on prompt macro-granular information from the calorimeter and muon detectors. 99.99% rejected, 0.01% accepted.
      Level-2: full event readout into PC farms, 100 kHz asynchronous. Clean particle signatures; finer-granularity precise measurements; kinematics, effective-mass cuts and event topology; track reconstruction and detector matching; event reconstruction and analysis. 100-1000 Hz to mass storage; 99.9% rejected, 0.1% accepted. Then offline reconstruction and analysis.
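
A sketch of the two-stage cascade, taking the ~1 GHz input event rate from slide 10 and reading the accepted fractions as 0.01% and 0.1%, so that the rates close on the 100 kHz and ~100 Hz figures quoted elsewhere in the deck.

```python
# Two-level selection cascade (input event rate of ~1 GHz from slide 10).
event_rate_hz = 1e9
l1_accept  = 1e-4    # Level-1: 99.99% rejected, 0.01% accepted
hlt_accept = 1e-3    # Level-2 farm: 99.9% rejected, 0.1% accepted

l1_out = event_rate_hz * l1_accept
hlt_out = l1_out * hlt_accept
print(f"Level-1 output: {l1_out:.0f} Hz")   # 100000 Hz = 100 kHz
print(f"HLT output:     {hlt_out:.0f} Hz")  # ~100 Hz to mass storage
```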

  17. 8-fold DAQ structure.

  18. September 2008: first events.

  19. March 2009: Technical Global Run.

  20. CMS experience: DAQ project timeline, industry trends and DAQ, hardware/software components, DAQ at Super-LHC.

  21. LHC/CMS-DAQ project timeline.
      Research and development (DRDC):
      1990: design of experiment; Trigger, Timing and Control distribution (TTC); readout prototypes (FPGA, PC, IOP-200 MB/s).
      1992: CMS Letter of Intent; networks (ATM, Fibre Channel, ...).
      1994: Technical Design Report; CMS 2-level trigger design.
      1996: event-builder demonstrators; FPGA/PC data concentrators.
      1998: 8x8 Fibre Channel EVB; 32x32 Myrinet EVB.
      2000: Trigger Technical Design Report; 64x64 Ethernet EVB, PC-driven.
      2002: DAQ Technical Design Report; final design.
      2004: pre-series 64x64 Myrinet/Ethernet.
      Construction and commissioning:
      2006: magnet test Global Run; 1024 2 Gb/s D2S Myrinet links and routers; 8x80x(80x7) GbEthernet EVB/HLT.
      2008: circulating-beam Global Run; 10000 on-line cores.
      2009: colliding beams.
      Lesson 1: 12 years of R&D (too much?).

  22. Computing and communication trends. Lesson 2: Moore's law confirmed.
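
A sketch of what "Moore's law confirmed" means over the project's lifetime: with the usual 18-month doubling period (my assumption, not the slide's), capability grows by roughly 250x between the 1996 demonstrators and the 2008 system.

```python
# Moore's-law growth over the CMS DAQ project (18-month doubling assumed).
years = 2008 - 1996
doubling_period_years = 1.5

growth = 2 ** (years / doubling_period_years)
print(f"Capability growth over {years} years: ~{growth:.0f}x")  # ~256x
```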

  23. Two trajectories. 1997: Google's first cluster; 1997: CMS 4x4 FC-EVB. 2008: one of Google's data centers, ~10^6 cores; 2008: Cessy CMS HLT center, ~10^4 cores, 2 Tb/s maximum bandwidth.

  24. Global Internet traffic (Cisco forecasts). US consumer traffic in PB per month (the 2014 column is quoted only for the CMS row):
      Category | 2007 | 2008 | 2009 | 2010
      Web, email, transfer | 710 | 999 | 1336 | 1785
      P2P | 1747 | 2361 | 3075 | 3981
      Gaming | 131 | 187 | 252 | 324
      Video communications | 25 | 37 | 49 | 70
      VoIP | 39 | 56 | 72 | 87
      Internet video to PC | 647 | 1346 | 2196 | 3215
      Internet video to TV | 99 | 330 | 756 | 1422
      Business | 1469 | 2031 | 2811 | 3818
      Mobile | 26 | 65 | 153 | 345
      Total global traffic (PB/month) | 4884 | 7394 | 10666 | 14984
      Global Internet traffic (Tb/s) | 20 | 30 | 40 | 60
      Total US traffic (Tb/s) | 3 | 4 | 6 | 8
      Google US traffic (Tb/s) | 0.3 | 0.7 | 1.5 | 3
      CMS maximum bandwidth (Tb/s) | 1 | 2 | 2 | 2 (2014: >10)
      Lesson 3: will we buy computing power and network bandwidth? See the conversion sketch below.
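
The Tb/s rows follow from the PB/month rows by unit conversion; a sketch assuming a 30-day month and average (not peak) rates, which lands near, though somewhat below, the slide's 20 Tb/s figure for 2007.

```python
# PB/month -> average Tb/s (30-day month assumed).
def pb_per_month_to_tbps(pb_per_month: float) -> float:
    seconds_per_month = 30 * 24 * 3600
    return pb_per_month * 1e15 * 8 / seconds_per_month / 1e12

print(f"{pb_per_month_to_tbps(4884):.0f} Tb/s")  # ~15 Tb/s for 2007's 4884 PB/month
```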
