
SLIDE 1

The Trigger and Real-time Reconstruction at LHCb

CPAD Instrumentation Frontier Workshop 2019

Daniel Craik, on behalf of the LHCb collaboration

Massachusetts Institute of Technology

9th December, 2019

Photo: User:Emery / Wikimedia Commons / CC-BY-SA-2.5

SLIDE 2

The LHCb detector

• Located at point 8 of the LHC
• General-purpose detector in the forward region
• Specialised in studying b- and c-hadron decays
• Instrumented in the forward region to exploit the forward production of c- and b-hadrons

[Figure: simulated angular distribution of bb̄ pair production versus the polar angles θ1 and θ2 of the two b hadrons, showing that both are predominantly produced together in the forward (or backward) region. LHCb MC, √s = 14 TeV.]

Dan Craik (MIT) Trigger & Real-time Reconstruction @ LHCb 2019-12-09 1 / 23

SLIDE 3

The LHCb detector

• Instrumentation in the forward region (2 < η < 5)
• Excellent secondary vertex reconstruction
• Precise tracking before and after the magnet
• Good PID separation up to ∼100 GeV/c

JINST 3 (2008) S08005

SLIDE 4

LHCb timeline

[Figure: LHC/HL-LHC timeline, 2010–2032: Run I, LS 1, Run II, LS 2 (now), Run III, LS 3, Run IV, LS 4, Runs V+.]

• Phase I Upgrade (LS 2): triggerless readout at 40 MHz
• Phase Ib Upgrade: possible stepping stone
• Phase II Upgrade: upgrade for the HL-LHC
• Integrated luminosity: 9 fb−1 (Runs I–II) → 50 fb−1 (Runs III–IV) → 300 fb−1 (Runs V+); Belle 2: 50 ab−1
• LHCb may be the only dedicated B-physics experiment; the timetable may shift

SLIDE 5

Real-time reconstruction in Run II

Run I:

• Chain: Hardware trigger (high pT, ET) → 1st software trigger (partial reco) → 2nd software trigger (full reco) → Reconstruction → Align + Calib → Analysis
• Rates: 40 MHz → 1 MHz → 100 kHz → 5 kHz
• “Online” (near the detector) up to the software triggers; “Offline” (grid computing) for reconstruction, alignment/calibration and analysis
• Time from collision: µs → ms → hours → weeks

Run II:

• Chain: Hardware trigger (high pT, ET) → 1st software trigger (partial reco) → 9 PB buffer → real-time Align + Calib → 2nd software trigger (full reco) → Analysis (Turbo)
• Rates: 40 MHz → 1 MHz → 100 kHz → 12 kHz
• Online through the second software trigger; only analysis remains offline
• Time from collision: µs → ms → hours → hours

• Calibration and alignment of Run I data were performed “offline”, weeks after data taking
• The trigger reconstruction differed from the offline reconstruction
• In Run II, data are buffered before the final trigger stage, allowing real-time alignment and calibration and offline-like reconstruction within the trigger
• Many analyses use “Turbo-stream” data: the online reconstruction is kept and the full raw event is not saved

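The rate cascade above can be read as cumulative retention fractions; a small sketch using the Run I numbers from the slide (the stage names and the `retention` helper are illustrative, not LHCb code):

```python
# Per-stage output rates of the Run I trigger cascade, from the slide.
RUN1_STAGES = [
    ("collisions",           40e6),   # 40 MHz bunch-crossing rate
    ("hardware trigger",      1e6),   # high-pT / ET selection
    ("1st software trigger", 100e3),  # partial reconstruction
    ("2nd software trigger",   5e3),  # full reconstruction
]

def retention(stages):
    """Fraction of events surviving each stage relative to the input rate."""
    start = stages[0][1]
    return {name: rate / start for name, rate in stages}

fractions = retention(RUN1_STAGES)
# Only 1 in 8000 bunch crossings survives to storage in Run I:
print(fractions["2nd software trigger"])  # 0.000125
```
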

SLIDE 6

Real-time reconstruction in Run II

Real-time alignment and calibration are performed for the vertex locator (velo), RICH detectors, tracking stations, calorimeter and muon stations; this talk focuses on the velo and the RICH.

Alignment is particularly important for the velo, which opens and closes between fills. The gas-filled RICH detectors also require frequent calibration.


SLIDE 7

Real-time reconstruction in Run II

Each alignment task is performed once per fill; alignment begins once a large enough dataset has been collected. Calibration of the RICH gas refractive index is performed regularly to account for temperature and pressure changes within the radiator gas.

[Figures: LHCb Preliminary plots comparing performance before (initial) and after (improved) the real-time alignment and calibration.]


SLIDE 8

Real-time alignment of the Velo

• Vertex locator modules sit 5 mm from the LHC beam
• Consists of two retractable halves (one shown); modules are formed of two sections, one on each velo half
• During beam injection the velo is retracted to 35 mm for safety and closed once the LHC beams are stable
• Moves every fill → aligned every fill


SLIDE 9

Real-time alignment of the Velo

Alignment of the velo is based on minimising residuals between hits and reconstructed tracks. The plot shows the x and y translation between the two velo halves. A tolerance of ±2 µm is allowed without an alignment update (empty markers); updates may also be triggered by other degrees of freedom, e.g. offsets or rotations within a velo half.

[Figure: variation (µm, ±20 µm scale) of the x- and y-translation between the velo halves versus alignment number; empty markers indicate no update. LHCb VELO Preliminary, 17/04/2018–21/11/2018.]

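The residual-minimisation idea can be sketched in a few lines. This is a toy 1D version: for a pure translation, the least-squares estimate is just the mean hit-minus-track residual. The real alignment fits many coupled degrees of freedom per module; every number here is invented:

```python
# Toy velo-half alignment: recover a rigid x-translation from residuals.
import numpy as np

rng = np.random.default_rng(0)

true_offset = 0.012                 # mm, misalignment to recover (invented)
track_x = rng.uniform(-5, 5, 1000)  # track predictions at the sensor plane
hits_x = track_x + true_offset + rng.normal(0, 0.005, 1000)  # measured hits

# For a pure translation, minimising sum((hit - track - dx)^2) gives
# dx = mean residual.
residuals = hits_x - track_x
fitted_offset = residuals.mean()

# Apply the slide's 2 um tolerance before pushing an alignment update.
update_needed = abs(fitted_offset) > 0.002
print(f"fitted offset = {fitted_offset * 1000:.1f} um")
```
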

SLIDE 10

Real-time calibration of the RICH

[Figure: schematic of RICH 1: spherical and plane mirrors, photon detectors, magnetic shield, C4F10 radiator gas, VELO exit window and beam pipe; 250 mrad acceptance, z up to ∼200 cm.]

• The RICH detectors provide particle-ID information based on the angle of Cherenkov radiation
• The refractive index of the gas radiators is sensitive to changes in temperature, pressure and composition
• These quantities are monitored, but a data-driven calibration is also required
• Recorded and expected Cherenkov angles are compared (bottom left); the alignment of the mirrors is also calibrated (bottom right)

[Figures, bottom left: distribution of reconstructed-minus-expected Cherenkov angle for all photons in the RICH 1 gas (run 182551, Aug 2016), fitted with a Gaussian peak plus polynomial background to extract a refractive-index scale factor (SF ≈ 1.026). Bottom right: Δθ versus φ maps used to calibrate the mirror alignment.]

NIM A 12 (2016) 041
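The logic of the refractive-index calibration can be sketched numerically: locate the peak of the (reconstructed − expected) Cherenkov-angle distribution and convert it into a scale factor on (n − 1), using the small-angle relation θ ≈ √(2(n − 1)) for a saturated ring. The numbers and the linearised update rule below are illustrative, not the LHCb procedure:

```python
# Sketch of deriving a refractive-index scale factor from the delta-theta peak.
import numpy as np

rng = np.random.default_rng(1)

theta_expected = 0.052                      # rad, saturated Cherenkov angle (assumed)
delta = rng.normal(0.0007, 0.002, 100_000)  # toy rec - exp angle sample

peak = np.mean(delta)  # stand-in for the Gaussian fit mean of the histogram

# theta^2 ~ 2(n - 1), so a fractional shift in theta maps onto (n - 1) as:
scale_factor = (1 + peak / theta_expected) ** 2
print(f"refractive-index scale factor ~ {scale_factor:.4f}")
```
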

SLIDE 11

The turbo stream

Save only the parts of the event needed for offline analysis. Multiple persistence levels:

• Candidate only (∼7 kB)
• Part of the event (∼16 kB)
• Full event (∼48 kB)
• cf. non-Turbo (∼69 kB)

Used by many analyses, e.g.

• CP violation in charm decays, PRL 122 (2019) 211803
• Search for dark photons decaying to dimuons, PRL 120 (2018) 061801
• Observation of Ξ++cc, PRL 119 (2017) 112001

[Figures: D → K−K+ and D → π−π+ candidate invariant-mass spectra with combinatorial background (LHCb); prompt-like dimuon mass spectrum from the dark-photon search, √s = 13 TeV, isolation applied, pT(µ) > 1 GeV, p(µ) > 20 GeV; Ξ++cc candidate mass spectrum (LHCb, 13 TeV).]

JINST 14 (2019) P04006
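The persistence-level sizes above translate directly into event-rate headroom: for a fixed disk bandwidth, smaller persisted events buy a proportionally higher event rate. A quick sanity check (the 0.7 GB/s disk budget is an assumed round number of the right order, not a figure from this slide):

```python
# Event sizes per Turbo persistence level, from the slide (kB).
EVENT_KB = {
    "candidate only": 7,
    "part of event": 16,
    "full event": 48,
    "non-turbo": 69,
}

DISK_BUDGET_KB_S = 700_000  # assumed ~0.7 GB/s to storage

# Sustainable event rate (events/s) at each persistence level.
rates = {level: DISK_BUDGET_KB_S / size for level, size in EVENT_KB.items()}

# Candidate-only persistence allows ~10x the event rate of full raw events:
print(rates["candidate only"] / rates["non-turbo"])  # 69/7 ≈ 9.86
```
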

SLIDE 12

The LHCb detector: Run III upgrade

Design choices for our physics programme: Run 3 comes before the HL-LHC era, which begins in Run 4.

[Figure: event display comparing a real Run I fill with the upgrade conditions overlaid.]

• Run at 5× higher luminosity
• Triggerless readout at 40 MHz
• New vertex locator
• New tracking (UT, SciFi)

CERN-LHCC-2012-007

SLIDE 13

Challenges in Run III

At the increased luminosity, charm (beauty) hadrons are produced in 24 % (2 %) of bunch crossings. Charm cannot simply be written out at 7 MHz: the trigger must distinguish signal from less-interesting signal as well as from background. It is no longer feasible to base the first trigger stage on the calorimeters and muon detectors alone; as much information about each event is needed as early as possible.

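The ∼7 MHz figure follows directly from the visible bunch-crossing rate and the charm fraction on the slide. The 30 MHz visible-crossing rate is the commonly quoted LHC figure, assumed here:

```python
# Back-of-the-envelope for the charm and beauty rates in Run III.
VISIBLE_CROSSING_RATE_HZ = 30e6  # assumed visible bunch-crossing rate
CHARM_FRACTION = 0.24            # from the slide
BEAUTY_FRACTION = 0.02           # from the slide

charm_rate = VISIBLE_CROSSING_RATE_HZ * CHARM_FRACTION
beauty_rate = VISIBLE_CROSSING_RATE_HZ * BEAUTY_FRACTION

print(charm_rate / 1e6, "MHz of charm")   # ~7 MHz, as quoted on the slide
print(beauty_rate / 1e6, "MHz of beauty")
```
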

SLIDE 14

LHCb trigger in Run III

Run 1 / Run 2 (x86 CPU farm): pp collisions → hardware trigger (L0) → event builder (EB) → HLT1 → 9 PB buffer → calibration → HLT2 → storage. Data rates fall from ∼1 TB/s off the detector to ∼50 GB/s after L0 and to well below 1 GB/s to storage.

Run 3, baseline (x86 CPU farm): pp collisions → EB → HLT1 → buffer on disk → calibration and alignment → HLT2 → storage. 5 TB/s into the event builder and HLT1; 10 GB/s to storage.

• The hardware trigger is removed for Run III
• The HLT1 software trigger must perform at a 30× higher rate with 5× the pileup
• The buffer depth reduces from O(weeks) to O(days)
• Significant increase in data-transfer rates
• The new trigger setup offers up to ∼10× efficiency improvement for some physics channels


SLIDE 15

Run III baseline HLT1 performance

Significant progress has been made in optimising the tracking algorithms: a ∼4× improvement in throughput from vectorisation, improvements to the event model and general optimisation.

[Figure: HLT1 throughput from late 2018 to mid 2019, in events processed per second assuming 1000 reference nodes (MHz), improving roughly fourfold across four stages of development (LHCb Upgrade simulation):

• Scalar event model, maximal SciFi reconstruction
• Scalar event model, fast SciFi reconstruction with tighter track-tolerance criteria
• Scalar event model, vectorizable SciFi reconstruction with entirely reworked algorithm logic
• Fully SIMD-POD-friendly event model, vectorizable SciFi and vectorized vertex-detector and PV reconstruction, I/O improvements]

Multi-threaded processes offer gains over running more processes in parallel. The optimal CPU architecture is under investigation; the new AMD architecture offers significant cost/benefit improvements.

LHCB-FIGURE-2019-002, LHCB-TDR-017
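Part of the vectorisation gain comes from moving to a SIMD-friendly, plain-old-data ("SIMD-POD") event model, i.e. struct-of-arrays rather than array-of-structs layouts. A toy contrast (the hit fields are hypothetical; the point is that contiguous per-field arrays let one operation touch all hits at once instead of a scalar loop):

```python
# Array-of-structs vs struct-of-arrays for a toy "hit" with x, y fields.
import numpy as np

rng = np.random.default_rng(2)
n_hits = 10_000

# Array of structs: one record per hit -- forces an explicit per-hit loop.
aos = [{"x": x, "y": y} for x, y in rng.normal(size=(n_hits, 2))]
r_aos = [(h["x"] ** 2 + h["y"] ** 2) ** 0.5 for h in aos]

# Struct of arrays: one contiguous array per field -- one vectorised call.
soa = {"x": np.array([h["x"] for h in aos]),
       "y": np.array([h["y"] for h in aos])}
r_soa = np.hypot(soa["x"], soa["y"])

# Same physics result, very different memory-access pattern.
assert np.allclose(r_aos, r_soa)
```
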

SLIDE 16

Run III baseline HLT1 performance

The framework allows for a flexible and configurable sequence. The physics performance of the single-track and two-track selections has been studied: loose (L) and tight (T) versions of the algorithms were simulated with different pT thresholds (500–1000 MeV/c). (Top) A ∼1 MHz output rate is achievable based on “minimum bias” simulation. The two-track selection remains efficient; the single-track selection still requires work.

[Figures: (top) output rate in MHz and heavy-flavour purity of the 1-track, 2-track and 1-OR-2-track selections in the LL/LT/TL/TT configurations at 500, 750 and 1000 MeV thresholds; (bottom) corresponding signal efficiencies for a range of charm modes (D, D+, Λ+c/Σ++c decays) and beauty modes (B decays to D, to K∗0µ+µ−, to Dµ+ν, etc.).]

LHCb-PUB-2017-006
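A minimal sketch of what single-track and two-track lines cut on: a displaced track above a pT threshold, or a displaced vertex built from two such tracks. The variables and threshold values below are illustrative, not the LHCb tuning; only the 500–1000 MeV/c scan range comes from the slide:

```python
# Toy one-track / two-track trigger selections.
from dataclasses import dataclass

@dataclass
class Track:
    pt: float       # transverse momentum, MeV/c
    ip_chi2: float  # significance of displacement from the primary vertex

def one_track(t: Track, pt_min: float = 1000.0, ip_chi2_min: float = 16.0) -> bool:
    """Single displaced, high-pT track (thresholds invented)."""
    return t.pt > pt_min and t.ip_chi2 > ip_chi2_min

def two_track(t1: Track, t2: Track, pt_min: float = 500.0,
              vtx_chi2: float = 0.0, vtx_chi2_max: float = 25.0) -> bool:
    """Two moderately displaced tracks forming a good vertex.
    Loose (L) vs tight (T) variants would just change pt_min here."""
    return (t1.pt > pt_min and t2.pt > pt_min
            and t1.ip_chi2 > 4 and t2.ip_chi2 > 4
            and vtx_chi2 < vtx_chi2_max)

soft = Track(pt=600, ip_chi2=30)
hard = Track(pt=1800, ip_chi2=50)
print(one_track(soft), two_track(soft, hard))  # False True
```
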

SLIDE 17

LHCb trigger in Run III

Run 1 / Run 2 (x86 CPU farm): pp collisions → hardware L0 → event builder (EB) → HLT1 → 9 PB buffer → calibration → HLT2 → storage (∼1 TB/s off the detector, below 1 GB/s to storage).

Run 3, baseline (x86 CPU farm): pp collisions → EB → HLT1 → buffer on disk → calibration and alignment → HLT2 → storage (5 TB/s in; 10 GB/s to storage).

Run 3, GPU-enhanced (x86 CPU farm + GPUs): pp collisions → EB + HLT1 on GPUs → buffer on disk → calibration and alignment → HLT2 → storage (5 TB/s in; 0.2 TB/s out of HLT1; 10 GB/s to storage).

• Option to move to a GPU-based HLT1 with the GPUs installed in the event-builder servers
• Frees up the full CPU farm for HLT2 and saves on networking between the event builders and the CPU farm
• Technical feasibility demonstrated
• Decision due in early 2020


SLIDE 18

Why GPUs?

• Moore’s law still holds, but single-thread performance has levelled off
• Gains are now to be made through parallelisation
• GPUs are specialised for massively parallel operations (100s–1000s of cores)


SLIDE 19

HLT1

HLT1 involves decoding, clustering and track reconstruction for all tracking subdetectors. Algorithms also perform a Kalman filter and the trigger selection. All stages of the process may be parallelised:

Raw data → Global Event Cut → Velo decoding and clustering → Velo tracking → simple Kalman filter → find primary vertices → UT decoding → UT tracking → SciFi decoding → SciFi tracking → parameterized Kalman filter → Muon decoding → Muon ID → find secondary vertices → select events → selected events

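The "simple Kalman filter" step can be illustrated with a toy one-dimensional predict/update loop over detector planes. The geometry and noise values are invented, and LHCb's filters act on full multi-parameter track states; this only shows the recursion:

```python
# Toy 1D Kalman filter along a straight track with known slope.
import numpy as np

def kalman_1d(zs, xs, slope, meas_var=0.01, process_var=1e-4):
    """Filter hit positions xs at detector planes zs."""
    x_est, p = xs[0], meas_var               # seed state/covariance from first hit
    for i in range(1, len(zs)):
        # Predict: propagate the state to the next plane.
        x_pred = x_est + slope * (zs[i] - zs[i - 1])
        p_pred = p + process_var             # e.g. multiple scattering
        # Update: weighted average of prediction and measurement.
        k = p_pred / (p_pred + meas_var)     # Kalman gain
        x_est = x_pred + k * (xs[i] - x_pred)
        p = (1 - k) * p_pred
    return x_est, p

zs = np.arange(5, dtype=float)
true_x = 0.1 * zs                            # straight track, slope 0.1
hits = true_x + np.random.default_rng(3).normal(0, 0.1, 5)
x_fit, var = kalman_1d(zs, hits, slope=0.1, meas_var=0.01)
print(f"filtered x at last plane: {x_fit:.3f} (truth 0.400)")
```

The filtered variance ends up well below the single-hit variance, which is the whole point of the recursion.
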

SLIDE 20

The Allen project

Generic, configurable framework for GPU-based execution of an algorithm sequence:

• Data are passed to the GPU device, all algorithms are executed in order, and the results are passed back to the host
• Thousands of events are processed in a single sequence – an opportunity for massive parallelisation
• Sequences are configurable at compile time; algorithms are configurable at run time
• Custom GPU memory management – no dynamic allocation
• Built-in validation and monitoring
• Cross-platform compatibility with CPU architectures
• Named for Frances E. Allen
• Implements HLT1 on GPUs

Photo: User:Rama / Wikimedia Commons / CC-BY-SA-2.0 fr

LHCB-FIGURE-2019-009
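The "no dynamic allocation" point amounts to arena-style memory management: reserve one large buffer up front, hand out sub-ranges with a bump pointer, and reset between event batches. Allen manages raw device memory; this Python sketch only mirrors the bookkeeping, and all names are illustrative:

```python
# Bump/arena allocator sketch: O(1) allocate, O(1) reset, no per-object free.
class BumpAllocator:
    def __init__(self, capacity: int):
        self.buffer = bytearray(capacity)  # single up-front reservation
        self.offset = 0

    def allocate(self, size: int, align: int = 16) -> memoryview:
        start = -(-self.offset // align) * align  # round up to alignment
        if start + size > len(self.buffer):
            raise MemoryError("arena exhausted")
        self.offset = start + size
        return memoryview(self.buffer)[start:start + size]

    def reset(self):
        """O(1) 'free' of everything, called between event batches."""
        self.offset = 0

arena = BumpAllocator(1 << 20)           # 1 MiB arena
hits = arena.allocate(4096)              # e.g. decoded velo hits
tracks = arena.allocate(1000, align=64)  # e.g. track candidates
print(arena.offset)                      # 5096
```
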

SLIDE 21

Allen selection ingredients

[Figures, LHCb simulation, GPU R&D:

• Primary vertices: reconstruction efficiency versus track multiplicity of the MC PV
• Tracks: reconstruction efficiency versus pT, shown separately for electrons and non-electrons
• Momentum: resolution σp/p versus p
• Impact parameter and secondary vertices: efficiency versus rate for the 1-track and 2-track selections, with simple and parameterized Kalman filters
• Muon ID: identification efficiency versus p]

Selections: one track, two tracks, single muon, two muons (displaced), two muons (high mass).

LHCB-FIGURE-2019-009

SLIDE 22

Allen performance

Trigger rates:

  Trigger            Rate [kHz]
  1-Track            249 ± 18
  2-Track            663 ± 30
  High-pT muon         1 ± 1
  Displaced dimuon    50 ± 8
  High-mass dimuon   101 ± 12
  Total              971 ± 36

The total rate is reduced from 30 MHz to ∼1 MHz; the physics performance is consistent with the x86 baseline.

Efficiencies [%]:

  Signal            GEC     TIS-OR-TOS   TOS      GEC × TOS
  B0 → K∗0µ+µ−      89 ± 2    85 ± 2     78 ± 3    69 ± 3
  B0 → K∗0e+e−      84 ± 3    69 ± 4     62 ± 4    53 ± 3
  B0s → φφ          83 ± 3    70 ± 3     65 ± 4    54 ± 3
  D+s → K+K−π+      82 ± 4    62 ± 5     38 ± 5    32 ± 4
  Z → µ+µ−          78 ± 1    97 ± 1     97 ± 1    75 ± 1

GEC = global event cut, TIS = trigger independent of signal, TOS = trigger on signal

LHCB-FIGURE-2019-009

SLIDE 23

Allen throughput

[Figure: Allen throughput (kHz) versus theoretical 32-bit TFLOPS for a range of cards – GeForce GTX 670, GTX 680, GTX 1060 6GB, GTX TITAN X, GTX 1080 Ti, Tesla T4, GeForce RTX 2080 Ti, Quadro RTX 2080 Ti, Tesla V100 32GB – showing near-linear scaling up to ∼80 kHz per card. LHCb simulation, GPU R&D.]

• The full HLT1 algorithm can be run on ∼500 current GPUs
• Buy GPUs instead of networking
• Performance scales with the GPU, so more can be expected from 2021 GPUs
• Not yet limited by Amdahl’s law – potential to perform more tasks within HLT1

LHCB-FIGURE-2019-009
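A back-of-the-envelope check of the "∼500 current GPUs" claim: dividing the 30 MHz input rate by a per-card Allen throughput of ∼60 kHz (an assumed figure at the top end of the measured range) gives the farm size. The Amdahl helper shows why the slide's "not yet limited by Amdahl's law" matters: a small serial fraction is what eventually caps such scaling.

```python
# Farm sizing and an Amdahl's-law helper (per-GPU throughput is assumed).
INPUT_RATE_HZ = 30e6
THROUGHPUT_PER_GPU_HZ = 60e3   # assumed: roughly a top-end 2019 card

n_gpus = INPUT_RATE_HZ / THROUGHPUT_PER_GPU_HZ
print(n_gpus)  # 500.0

def amdahl_speedup(parallel_fraction: float, n: float) -> float:
    """Maximum speedup on n processors if only part of the work parallelises."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n)

# With 99% parallel work, even infinite cores give at most ~100x:
print(round(amdahl_speedup(0.99, float("inf")), 1))  # 100.0
```
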

SLIDE 24

Summary

• Real-time reconstruction and calibration were a success story for LHCb in Run II
• Offline-quality reconstruction allowed many trigger selections to be moved to the Turbo stream:
  • Selections can be based on offline-quality features
  • Smaller event size → higher event rate for the same disk rate
  • Tradeoff: the full raw event is not saved, so the reconstruction cannot be rerun offline
  • Already crucial for charm decays in Run II
• LHCb detector and DAQ upgrades for Run III:
  • No hardware trigger; the first-stage software trigger must perform track reconstruction at the bunch-crossing rate
  • The baseline x86 implementation of the first-stage trigger has been significantly optimised to deal with the higher throughput
  • The Allen project offers a GPU implementation: a generic framework with a configurable algorithm sequence, with feasibility for use in LHCb Run III already demonstrated
