SLIDE 1

U.S. CMS Detector Operations

Cathy Newman-Holmes

13 February 2013

Outline

  • Overview of CMS Operations

 Collisions - 2012  Efficiency, Down time

  • Survey of US CMS subsystems

 2012 Performance

  • Operations Cost and Milestones
  • Summary
SLIDE 2

pp Collisions March - Dec 2012

  • 8 TeV proton-proton collisions, 11 March 2012 – 16 December 2012.
  • CMS recorded 21.79 fb-1 of the 23.30 fb-1 delivered (= 94%).
  • This is more than 4 × the data sample at the last review (March 2012).
  • Also note that the efficiency for 2011 data collection was 91%.
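The efficiency quoted here is simply recorded over delivered integrated luminosity. A minimal sketch of the bookkeeping, using the numbers from this slide:

```python
def efficiency(recorded_fb, delivered_fb):
    """Data-taking efficiency = recorded / delivered integrated luminosity."""
    return recorded_fb / delivered_fb

# 2012 pp numbers from this slide (fb^-1): 21.79 recorded of 23.30 delivered
eff_2012 = efficiency(21.79, 23.30)
print(f"2012: {eff_2012:.1%}")  # 93.5%, quoted as 94% on the slide
```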

SLIDE 3

p-Pb Collisions January – February 2013

  • Proton–lead collisions started 20 Jan 2013.
  • CMS recorded 31.13 nb-1 of the 31.69 nb-1 delivered (= 98%).

SLIDE 4

Average efficiency in 2012

Period         Delivered* luminosity (fb-1)  Recorded* luminosity (fb-1)  Efficiency (luminosity)  Downtime  Dead-time
April-June     6.78                          6.26                         92.3%                    5.9%      1.8%
July-21 Aug**  4.97                          4.73                         95.1%                    3.8%      1%
22 Aug-16 Sep  2.99                          2.74                         94.4%                    4.1%      1.5%
26 Sep-7 Oct   1.44                          1.37                         95.1%                    3.4%      1.5%
7 Oct-6 Dec    6.87                          6.51                         94.8%                    3.7%      1.5%

** Fills 2957, 2992 and 2993 with B=OFF in August have been excluded: ~0.5/fb

Huge effort from all sub-systems, shifts crews, on-call experts, to reach this level and maintain it

from M. Chamizo Llatas
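The year's overall efficiency is the luminosity-weighted average of the periods in the table, i.e. total recorded over total delivered (the totals differ slightly from the full-year numbers because some fills are excluded). A quick check:

```python
# (delivered, recorded) per period in fb^-1, from the table above
periods = [(6.78, 6.26), (4.97, 4.73), (2.99, 2.74), (1.44, 1.37), (6.87, 6.51)]

delivered = sum(d for d, _ in periods)   # 23.05 fb^-1
recorded = sum(r for _, r in periods)    # 21.61 fb^-1
overall = recorded / delivered           # luminosity-weighted average efficiency
print(f"{overall:.1%}")                  # ~93.8%, consistent with the ~94% quoted
```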

SLIDE 5

Recorded vs Delivered vs Certified Luminosity March – Dec, 2012

  • Efficiency of good (“golden”) selection = 91.1%.

 All systems are certified as good for these data.

  • The “golden” plot shows 19.6 fb-1 good for any analysis.
  • We have 20.7 fb-1 for muon-only analyses (efficiency = 96.3%).

 Tracker and muon systems are certified good for these data.

[Plots: “golden” luminosity; luminosity for muon analyses]

From S. Maruyama

SLIDE 6

Fraction of live channels 2010-2012

  • CMS working well after three years of operation.
  • Thanks to the huge effort of many people to maintain the detector.
  • Note that order of subsystems on vertical scale is reversed for 2010 – 2012.

[Plot: fraction of live channels by subsystem; ~98-99% per subsystem as of Dec 2012]

from M. Chamizo Llatas

SLIDE 7

Run Time Logger

  • Plots on this page are from the web-based Run Time Logger, for all 2012 proton-proton collisions.

SLIDE 8

Run Time Logger - categories

  • The largest single source of lost luminosity (21%) was the tracker DAQ.
  • All groups have been working hard to reduce downtime.
SLIDE 9

Pile-up History

Period  Peak average pile-up
2010    3.5
2011    18
2012    34

Annotations from the plot: β* reduced from 1.5 m to 1 m; LHC increasing bunch charges; β* = 0.6 m and bunch charges of 1.6 × 10^11 protons/bunch.

Two “special high pile-up” fills in 2011 became “standard” operation in 2012. Two “special high pile-up” fills in 2012 … will they become “standard operation” after LS1?

[Plot shows the 2010, 2011, and 2012 pile-up distributions.]

from M. Chamizo Llatas

SLIDE 10

Run Coordination organization for 2012

Run Coordinator: M. Chamizo-Llatas
Deputies: G. Rakness, C. Delaere
Cross-coordination: G. Rakness
Organization support/training: A. Barisone

Operations managers for: BRM, LUMI, L1, HLT, DQM, Offline, Sub-detectors (Tracker, ECAL, HCAL, DT, RPC, CSC), Heavy Ions

DPG coordinator: C. Delaere
DPG contacts:
 Tracker DPG: P. Merkel, A. Venturi
 ECAL DPG: A. Bornheim, D. Petyt
 HCAL DPG: O. Kodolova, S. Banerjee
 DT DPG: M. Pelliccioni, L. Guiducci
 CSC DPG: T. Cox, V. Paltchik
 RPC DPG: C. Carrillo, K. Bunkowski
 L1 DPG: A. Heister, C. Battilana & V. Mihai

Related coordination areas: Upgrade Project, Detectors Projects, PPD

U.S. names in red (in the original chart).

SLIDE 11

US CMS Detector Operations - Level 2 Managers

  • Blue italics: appointed since the last review (both were already deputy L2s).
  • Drawings of subdetectors and the U.S. institutions working on each are in the background slides at the end of this talk.
  • Upgrade R&D will be described by Daniela Bortoletto in a later talk.

New Deputy Detector Operations Manager: Vivian O’Dell.

SLIDE 12

Tracker 2012 Performance

  • Highest priority: stable running to maximize the useful data sample.

Fraction of active (non-masked) channels:

Detector  Start 2012 (%)  Now (%)
TIB/TID   95.02           94.63
TOB       98.13           97.79
TEC+      98.81           98.81
TEC-      99.13           99.13
Tracker   97.75           97.48

Failures are understood and potentially addressable in LS1.

from S. Nahn

SLIDE 13

Tracker Uptime

  • CMS: 21.8 fb-1 collected
     Tracker-related lost luminosity: 0.231 fb-1, ~1% absolute
     Offline: an additional 0.166 fb-1 (HV off) + 0.070 fb-1 (misc) bad
  • Major sources fixed or mitigated
     Firmware upgrades for robust handling of intermittent data corruption
     Soft Error Recognition/Recovery: 1-2 minute “stop/start” replaced with < 30 second recovery.
  • HV turn-on was automated.

from S. Nahn

SLIDE 14

Effects of CPU and memory improvements

  • Many improvements over the past 2 years have yielded substantial savings in CPU and memory use. Some of this comes from better compilers, but most comes from improved tracking code.
  • But CPU time still shows significant non-linearities with pileup.
  • Results are for track reconstruction only; data results are from the 2011 high-pileup run with about 35 pileup interactions.

[Plot: 200 events from the 2011 high-pileup run]

from K. Stenson

SLIDE 15

Track Reconstruction Success

  • Tracker is crucial for charged lepton reconstruction (e, µ)
  • From a high-pileup run, we have an event with 78 reconstructed vertices.

SLIDE 16

Pixel Detector - 2012

  • # of working channels has not changed much in 2012.

from S. Zenz

SLIDE 17

Single Event Upset Automatic Recovery

  • Radiation from proton collisions causes single event effects in the detector electronics.
     Effects range from hardly noticeable to stopping the run.
     This started to become an issue with increasing luminosity in 2011.
  • 2012: full commissioning of automatic soft error recovery.
     Depending on the error and the system, this is done via hardware or software.
  • Systems continue to automate recovery from known problems.
  • Pixels overall: 3-4 minute recovery time → 12 seconds.
  • Saved 15 hours of running in 2012!
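As a rough consistency check (illustrative only; the slide does not give the number of recovery incidents), the quoted 15 hours implies on the order of a few hundred recoveries:

```python
old_recovery_s = 3.5 * 60   # midpoint of the quoted 3-4 minute recovery time
new_recovery_s = 12         # automated recovery time
saved_s = 15 * 3600         # 15 hours of running saved in 2012

saved_per_incident_s = old_recovery_s - new_recovery_s  # 198 s per incident
n_incidents = saved_s / saved_per_incident_s            # implied recovery count
print(round(n_incidents))   # ~270 recoveries over the year
```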

SLIDE 18

Pixel Luminosity Telescope (PLT)

  • During the 2011 Extended Technical Stop, a PLT demonstrator was installed on the CASTOR table.
  • The proto-PLT has been up and running since the first 2012 beams.
  • Running unattended since July 2012 (with on-call coverage).
  • The first-generation DAQ works and the electronics perform as designed.
  • In the 2012 pilot run, a decrease of diamond pulse height at high particle rates was observed.
     A high-priority effort is to confirm that this effect can be eliminated through removal of surface material. All hardware is in hand. A one-quarter PLT will be installed as a dry run in March 2013.

From J. Hegeman

SLIDE 19

ECAL detector status

Good overall status of ECAL and ES; very little evolution in the number of dead channels.

Active channels (%):

Date      EB+EE  EB     EE     ES
2011-Nov  99.07  99.16  98.72  95.1
2012-Feb  99.04  99.16  98.54  95.1
2012-Mar  99.04  99.16  98.54  96.9
2012-Dec  98.97  99.11  98.38  96.8

  • Number of bad channels is stable.
  • Very few single bad channels.
  • Periodic good-health checks record in detail the performance of each channel and monitor its evolution vs. time.
  • Possibility to recover a large fraction of the masked 5x5 regions in EE+ in LS1.

From F. Cavallari

SLIDE 20

New Laser

  • The new laser was procured, commissioned, and installed on schedule and was used throughout the 2012 run.
  • There were no hardware failures on the new laser itself, in contrast to the old lasers, which required frequent interventions (lamp replacements etc.).
  • After some initial operational optimization at P5, very stable operation.
  • In summer, some degradation was traced to auxiliary optics.
  • At the end of summer, some degradation was suspected to be caused by issues internal to the laser.
     Reduced the pump current from 55 to 45 A as a precaution. Very stable operation since then.
  • The quality of the transparency corrections was not degraded.

from F. Cavallari

SLIDE 21

ECAL Performance

[Plots: dielectron mass peaks with both electrons in EB, and with one e in EE + one e in EB. From B. Cox.]

  • Mass resolution is important for the Higgs discovery.

SLIDE 22

HCAL Performance 2012

  • > 99% good channels; contribution to lost luminosity ~3%.
  • Only 2 incidents > 30 min:
     CAEN VME link failure.
     Reboot of HCAL computers (spurious incident).
  • Another operational incident: laser firing in the abort gap.
     Problem discovered in the High Level Trigger.
     Laser events are now disabled during global running.

Effects managed by calibration: HBHE pixel drift, HF PMT gain loss, and radiation damage to the HF quartz fibers and HE.

  • HBHE pixel drift: monitored with HCAL LED runs.
  • HF PMT gain loss: same LED runs as above.
  • HF radiation damage: monitored with local laser runs.
  • HE radiation damage: the laser was added to megatile runs this year, which led to this being observed.

SLIDE 23

HF PMT Gain Drift

  • Rate of gain loss decreased with time in 2012.
  • No gain loss observed for the R7600s.
     These are the new PMTs that will replace the current HF PMTs.
  • Updates to LUTs allowed us to maintain stable HF luminosity performance.

[Plots: R7525 PMTs in 2011 and 2012; R7600s (24 installed Feb 2012)]

SLIDE 24

CSC in 2012 run

  • CSC readout efficiency was > 97.2%.
  • Downtime due to the CSC was less than 3%.
  • Only 120 pb-1 of accumulated luminosity was not certified as good for physics analysis due to CSC problems.
  • The electronics failure rate was ~20 CFEBs over 3 years (1% of the efficiency).
  • Muons reconstructed in the CSCs played a significant role in the discovery of the Higgs boson.

[Plots: CSC efficiency > 97%; four-lepton mass m_4l (80-180 GeV, events / 3 GeV) showing data, Z+X, Zγ*/ZZ, and m_H = 126 GeV; CMS preliminary, √s = 8 TeV, L = 12.2 fb-1]

SLIDE 25

Calorimeter Trigger Operations

Activities (Wisconsin):

  • Offline/Online: DQM Updates (more exact bit-level monitoring)
  • Online: Trigger Supervisor (TS) & RCT SW Updates
  • RCT On-Call for 2012/3 Physics Run
  • RCT Maintenance & Repairs
  • RCT Database and Calibration Updates
  • Online: DCS maintenance and updates

Example of Improvement:

  • With the combination of the e/γ corrections in the RCT and the new transparency corrections in ECAL, improvement was seen in the efficiency turn-on for electrons, particularly in the ECAL endcap. The plot shows this improvement using electrons from Z decays and a Level-1 trigger threshold of 15 GeV.

from W. Smith

SLIDE 26

EMU Trigger Operations

Activities (Florida & Rice):

  • On-call CSC expert coverage during the run to diagnose and intervene
  • Support of Rice-designed trigger boards
  • CSCTF firmware & LUT revisions
  • Repair of broken boards
  • Support of online software & DQM
  • Support of offline validation & monitoring
  • Revisions of GMT logic to incorporate CSC changes

Example of Improvement for |η| = 2.1-2.4:

  • The upper plot shows the 2011 CSCTF logic using a loose 3-station track with ME1; the lower plot shows the 2012 use of ME2-ME3-ME4, suppressing the contribution of low-pT tracks below threshold and improving rate suppression for L1SingleMu.

From W. Smith

SLIDE 27

HLT Operations

Activities (Boston U., Brown, U.I. Chicago, Minnesota, Notre Dame):

  • HLT code integration, trigger performance, trigger menu integration, trigger menu development

Example of Improvement:

  • After much work to improve and monitor CPU performance, the average CPU time per event of 135 ms is found to be well within the total HLT budget (~160 ms per event at a 100 kHz L1 trigger rate). The plot shows that the HLT CPU time per event extrapolates within this total budget up to luminosities close to 8 × 10^33 cm^-2 s^-1.

from W. Smith
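The per-event budget maps directly onto filter-farm size: by Little's law, the number of events in flight equals the input rate times the per-event CPU time, and this must not exceed the farm's core count. A sketch with the numbers above:

```python
l1_rate_hz = 100_000   # L1 accept rate feeding the HLT
budget_s = 0.160       # ~160 ms/event total budget
measured_s = 0.135     # measured average CPU time per event

cores_at_budget = l1_rate_hz * budget_s    # ~16000 core-equivalents needed at budget
cores_measured = l1_rate_hz * measured_s   # ~13500 in use on average
headroom = 1 - measured_s / budget_s       # fraction of the budget still spare
print(cores_at_budget, cores_measured, f"{headroom:.0%}")  # ~16% headroom
```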

SLIDE 28

Central DAQ in 2012

  • CDAQ was responsible for very little lost luminosity in 2012:
     About 6% of total lost luminosity, split about evenly between CDAQ_HW and CDAQ_SW.

Many improvements to Central DAQ in 2012:

  • Improved throughput: split loaded FEDBuilders and merged small ones.
  • Improved DAQ start/stop time.
     Gained about 24 seconds out of 66. With ~700 runs/year, it adds up!
  • Improved HLT startup time: eliminated one minute of dead time at the start of a run.
  • Added HLT hosts (Dell C6220): increased the HLT CPU budget per event from 100 ms to 150 ms.
  • Automatic generation of new configurations that remove failed hosts.
  • Soft Error Recovery: automatic procedure to recover from errors without stopping a run.
     Gained back about 3 hours of down time, mostly from stop/start time.
  • Automatic assignment of running modes based on LHC status.
  • Audio alarms for WBM, DCS, and DQM.
  • Dead-time monitoring is stored in the database.
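The start/stop gain quoted in the list above is easy to put in perspective:

```python
saved_per_run_s = 24   # seconds shaved off each DAQ start/stop cycle
runs_per_year = 700    # approximate number of runs per year

saved_h = saved_per_run_s * runs_per_year / 3600
print(f"~{saved_h:.1f} extra hours of live time per year")  # ~4.7 h
```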
SLIDE 29

US CMS Detector Operations Budget (without Upgrade R&D)

  • 2013 budget: includes ~5 months of Operations and then LS1.
     In the table above, Labor is technical (engineers, computer professionals, etc.), while the Travel column also includes substantial funding for physicists.
  • The large budget for Endcap Muon includes two substantial improvement projects:
     New ME42 chambers; ME11 electronics.
  • New issue: we have some key university personnel whose funding is affected by the recent DOE comparative review.
     This will require use of management reserve.

SLIDE 30

US CMS Detector Operations Budget (cont’d)

  • Expect the 2014 budget for improvements to be much smaller.
     Redirect funding to Software & Computing and Phase 2 upgrade R&D. But we may need to increase Travel & COLA when the next run starts.

SLIDE 31

US CMS Operations Metrics

  • US CMS has metrics to track Operations.
SLIDE 32

US CMS 2012 Milestones

  • US CMS has milestones for repair and improvement projects.
  • This table shows milestones achieved in 2012.
SLIDE 33

US CMS 2013 Milestones

  • Milestones for 2013.
  • Dates in green have already been achieved.
  • Under construction:
     Have to pare this list down and get some missing dates from ECAL.
     Also add milestones for the FPix shutdown work and PLT.

SLIDE 34

Summary

  • The CMS detector had another great year taking data at 8 TeV.
     The detector worked very well, with no major problems.
  • Data-taking efficiency was high, ~94%.
     Continued heroic effort by all groups to minimize losses.
  • Challenges ahead:
     Many improvement and repair projects are scheduled for the shutdown.
     Expect turnover in key operations personnel. We will have to make sure we retain key knowledge.
  • U.S. people and support are both vital for CMS.
     Leadership roles in management, operations, and physics analysis.

SLIDE 35

Background Material

SLIDE 36

Endcap Muon System (EMU)

Carnegie Mellon, FIT, Florida, FNAL, Northeastern, Northwestern, Ohio State, Purdue, Rice, Texas A&M, UC Davis, UCLA, UC Riverside, Wayne State, Wisconsin

US responsibility: Cathode Strip Chambers (CSC) in the Endcap Muon System.

SLIDE 37

Hadron Calorimeter (HCAL)

Boston, Brown, Caltech, Fairfield, FIT, FIU, FNAL, FSU, Iowa Kansas, Maryland, Minnesota, Mississippi, MIT, Northeastern, Northwestern, Notre Dame, Princeton, Purdue, Rochester, Rockefeller, TTU, UC Riverside, UC Santa Barbara, UIC

Section  Construction                                        Photodetector  |η| range
HB       Brass absorber (5 cm) + scintillator tile (3.7 mm)  HPD            0.000 - 1.393
HE       Brass absorber (8 cm) + scintillator tile (3.7 mm)  HPD            1.305 - 3.000
HO       Scintillator tile (10 mm) outside the solenoid      HPD            0.000 - 1.305
HF       Iron absorber + quartz fibers                       PMT            2.853 - 5.191

[Diagram labels: HB±, HE±, HF±, HO0, HO±1, HO±2; EB±, EE±, Tracker]

Shuichi Kunori, Maryland

SLIDE 38

Electromagnetic Calorimeter (ECAL)

Caltech, Cornell, Kansas State, Minnesota, Northeastern, Notre Dame, Rutgers, Virginia

Crystals are lead tungstate (PbWO4).

SLIDE 39

Silicon Tracking System

[Diagram: tracker layout in the r-z plane, with an η scale; Outer Barrel (TOB), End Cap (TEC), Barrel Pixels, Forward Pixels, Inner Barrel (TIB), Inner Disk (TID). Red = single-sided, blue = double-sided; double modules are made up of two single-sided modules glued back-to-back.]

SLIDE 40


Pixel Tracker

Note – US contributes to the Forward Pixels, but works closely with the Barrel Pixels group.

  • Barrel layers at r=4, 7, 11 cm
  • Two disks at each end, z=34, 46 cm
  • Pixel size ~100μm ×150μm
  • 48M pixels in barrel
  • 18M pixels in disks

  • 3 points for η < 2.5
  • r-φ resolution ~10 μm; r-z resolution ~20 μm

SLIDE 41

Forward Pixel Detector – FPix

Colorado, Cornell, FNAL, Iowa, Johns Hopkins, Kansas, Mississippi, Nebraska, Northwestern, Princeton, Puerto Rico, Purdue, Purdue Calumet, Rice, Rochester, Rutgers, SUNY-Buffalo, Tennessee, UC Davis, Vanderbilt

  • Two disks at each end, z = 34, 46 cm
  • Pixel size ~100 μm × 150 μm
  • 24 blades in each disk
  • Blades rotated by 20° for charge sharing (Lorentz angle, track inclination)
  • 7 detector modules per blade (4 on the front and 3 on the back of the blade)
  • 45 readout chips/blade
  • 18M pixels in the disks
  • Room for another disk at z = 58.5 cm if needed

SLIDE 42

Silicon Tracker

Brown, FNAL, Kansas, MIT, Rochester, UCSB, UC Riverside, UC San Diego, UIC

  • 11.4 million silicon strips; 65.9 million pixels in the final configuration
  • Volume 24.4 m³; running temperature −10 °C

[Diagram: Outer Barrel (TOB), Inner Barrel (TIB), Pixel, Inner Disks (TID), End Cap (TEC); 2.4 m]

SLIDE 43

Trigger

Boston, Brown, Caltech, Cornell, Florida, FNAL, Kansas State, Purdue Rice, Rockefeller, Texas A&M, UC Davis, UCLA, UCSB, UIC, Wisconsin

  • Trigger & DAQ architecture: 2 levels
  • Level-1 Trigger: 25 ns input, 3.2 μs latency
  • High Level Trigger: software processor farm

Interaction rate: 1 GHz
Bunch crossing rate: 40 MHz
Level-1 output: 100 kHz (50 kHz initially)
Output to storage: 100 Hz
Average event size: 1 MB
Data production: 1 TB/day

[Diagram labels: UXC, USC]
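The rates above fix the data volume at each stage: bandwidth is simply rate × event size. A sketch (note the quoted 1 TB/day then implies these rates apply only during stable beams, roughly 3 hours of data-taking per day):

```python
# Data rates implied by the trigger parameters above
event_size_bytes = 1_000_000   # 1 MB average event size
l1_output_hz = 100_000         # Level-1 accept rate
storage_hz = 100               # output rate to storage

builder_bw = l1_output_hz * event_size_bytes   # into the event builder
storage_bw = storage_hz * event_size_bytes     # onto storage

print(builder_bw / 1e9, "GB/s into the event builder")  # 100.0 GB/s
print(storage_bw / 1e6, "MB/s to storage")              # 100.0 MB/s
```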

SLIDE 44

Data Acquisition System (DAQ)

FNAL, MIT, UC San Diego, UCLA

[Diagram: 8 DAQ “slices”]

SLIDE 45

CMS DAQ

[Diagram: Data to Surface → Readout Builder → Event Filter; contributions from MIT, FNAL, UCSD, UCLA]

Vivian O’Dell, Fermilab

  • Data to Surface: acquires data from the front-end electronics.
  • Readout Builder: assembles data into events.
  • Event Filter: runs the High Level Trigger software for final event selection.

SLIDE 46

CMS DAQ overview

FRL crates, Myrinet fibers, LVDS cables

  • FEDBuilder: 8×8 switches (USC)
     Feeds the 8 DAQ slices
     Based on 2 Gb Myrinet
  • RU (640 Dell 2950 PCs)
     3 connections/RU (~250 MB/s/RU)
  • RUBuilder
     Event-building network: GbE
     3 connections/RU (~250 MB/s/RU)
  • FU/BU (720 Dell 1950 PCs)
     2 connections/BUFU (~1.5 BUFU/RU), ~170 MB/s
  • HLT running on the FUs
     Current HLT CPU budget ~50 ms/event for 100 kHz input
  • Storage Manager
     8 SATABeasts, 16 logger nodes
     ~2 GB/s, 320 TB
  • Uploading to T0
     2 × 10 Gb/s optical links

[Diagram labels: FED crates (tracker), Myrinet switches, GbE switches, BU/FU, Storage Managers, 446 FRLs]

SLIDE 47

Pixel Luminosity Telescope (PLT)

  • New device to measure luminosity.
     Based on counting tracks; more robust than using the HF calorimeter.
     Luminosity data collection is independent of the CMS DAQ.
  • Approved by the CMS Management Board in Nov 2009.
  • Expected to be ready for installation at the end of 2011.

PLT Design

  • Telescope arrays: 8 per CMS end.
     Location: r ~ 5 cm, z ~ 1.75 m.
  • Telescopes: three planes each.
  • Telescope planes:
     Diamond pixel sensors.
     Active area 4.0 mm × 4.0 mm.
     Bump-bonded to the PSI46v2 pixel ROC.
  • Measure the number of three-fold coincidences in each bunch crossing (40 MHz) using the fast-OR outputs.
     Read out the full pixel hit information of each plane at 1-10 kHz.
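The slide counts three-fold coincidences per bunch crossing but does not spell out the luminosity algorithm. A standard approach for such counters (an assumption here, not stated on the slide) is Poisson "zero counting": if the fraction of empty bunch crossings is f0, the mean number of detected interactions per crossing is μ = −ln(f0), which is proportional to the instantaneous luminosity. The numbers below are illustrative only:

```python
import math

def mean_interactions(n_crossings, n_empty):
    """Estimate the mean interactions per bunch crossing via zero counting."""
    f0 = n_empty / n_crossings  # probability of no three-fold coincidence
    return -math.log(f0)

# Illustrative counts, not from the slide
mu = mean_interactions(n_crossings=1_000_000, n_empty=90_484)
print(f"mu ~ {mu:.2f}")  # ~2.40, since exp(-2.40) ~ 0.0905
```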

SLIDE 48

HO HPDs → SiPMs

  • HPDs are sensitive to intermediate B fields → problems with the HO HPDs.
  • Had to lower the HV on HO HPDs in rings R1, R2.
     HV is now off on Ring 2.
  • The result is that HO is not operating as intended. (Cannot see muons for calibration or other uses; poor signal-to-noise reduces the utility of the leakage correction for jets.)
  • A Silicon Photomultiplier (SiPM) is an array of silicon APD pixels on a common substrate, each operating in Geiger mode.
  • Signal from minimum-ionizing muons has been observed from the SiPMs.
  • S/N is ~4-5 times better than from the HPDs.
SLIDE 49

Forward Hadron Calorimeter - Improvements

PMT Window Events

  • Known to be a problem for some time; observed in test beam.
  • Have been taking data with some PMTs covered to study window interactions.
  • Window interactions are earlier in time.
  • We are replacing the HF PMTs with a thinner-window multi-anode tube.
  • The PMTs were ordered in October 2010.

[Plot: data from covered PMTs, due to window interactions]

SLIDE 50

Forward Hadron Calorimeter - Improvements

Scintillation in Light Guide Sleeves

  • The high occupancy for the one slice shown is due to scintillation in High Efficiency Mirror (HEM) material.
  • All HF light guides have a HEM sleeve to electrically isolate the PMT.
  • For one light guide, the whole length was covered with HEM.

SLIDE 51

Forward Hadron Calorimeter - Improvements

  • Scintillation light produced by the HEM is the main source of anomalous high-energy signals in HF.
  • Late hits in time band 3 do not appear in the spectrum from covered PMTs.
  • Time band 3 events have a harder energy spectrum.
     Blue: HEM sleeve. Green: HEM sleeve replaced with Teflon.
  • Status: all sleeves were replaced in January 2011.
  • The final sleeve design (for the new PMTs) is being developed.

[Plot: data from PMTs (not covered)]

SLIDE 52

Endcap Muon – ME1/1 Electronics

  • ME1/1 – the EMU chambers at smallest r-z – is important for standalone muon momentum resolution for muons with |η| = 1.6 – 2.4.
     Subdivided into two regions.
  • Cathode strips are ganged 3:1 in the high-η (2.1 < |η| < 2.4) part of ME1/1.
     Two ghost segments for each real segment.
     Causes problems for the trigger.
  • This was needed at the time of the original design to reduce the channel count.
  • With today’s electronics, it is possible to read out every strip.
  • Another problem: the Trigger Motherboard is blind for several bunch crossings after reconstructing a muon candidate.
     The TMB is shared by both regions of ME1/1.
     So problems in the 2.1 < |η| < 2.4 region can affect the muon trigger in the 1.6 < |η| < 2.1 region.

SLIDE 53

Endcap Muon – ME1/1 Electronics

ME1/1 Electronics Improvement includes the following:

  • Replacement of the ME1/1 Cathode Front End Boards (CFEB) with a digital version (DCFEB).
     The current CFEBs store the charges on SCAs and digitize them only on coincidence of an L1A with a Local Cathode Trigger (LCT).
     The DCFEB would digitize all charges and store them in a digital pipeline.
     This is deadtimeless and removes rate issues. 504 boards would be replaced.

  • Replacement of the Skew-Clear cables: the DCFEBs would also send the data out optically, rather than over copper, which would give the necessary bandwidth increase for removing the ganging. The Skew-Clear cables were made longer than the specs of the drivers, so their replacement would also address one of the major weak points of the system.

  • Replacement of the DAQ Mother Boards (DMB) to be able to receive the optical DCFEB data (72 boards).
  • Replacement of the Trigger Mother Boards (TMB) to receive the new DCFEB signals (72 boards). It is sufficient to make just a replacement daughter card with the optical transceivers.
  • Status: the project was reviewed in December 2010. Prototypes are being developed.