

  1. ATLAS Full Dress Rehearsals, Common Computing Readiness Challenges: a forward look
 ISGC 2008, 7-11 April 2008, Academia Sinica, Taipei, Taiwan
 Kors Bos, CERN/NIKHEF, ATLAS


  2. ALMOST READY TO GO: the ATLAS Main Control Room

  3. Top view of the open detector with the ECT moved off the beam line (before lowering of the Small Wheel), 2 April 2008
 DOE/NSF JOG#21, ATLAS status report


  4. Installation Status
 • The detector is now open and in its 'long shutdown' configuration:
   – Forward Muon Wheels (Big Wheels) have been retracted against the end-wall structures
   – Endcap Toroids are off the beam line in their parking positions; Endcap Calorimeters are in the open (3 m retracted) position
   – Both Shielding Disk JD/Small Wheels were lowered at the end of February
 • The Inner Detector is fully in place. All detectors except the Pixels are also cabled and operational. Pixel electrical connections are well advanced and everything, including cooling, should be finished in the coming week, followed by global tests and sign-off foreseen for mid to end April. This is the critical path of ATLAS.
 • Endcap Calorimeter electronics refurbishing has been completed and work on the Barrel electronics is progressing well. All work is foreseen to be completed by early April.
 • Both Endcap Toroids have been tested individually, up to 50% and 75% of current respectively. The ATLAS detector must be closed (run configuration) before the overall magnet tests at full power.
 • All Muon Barrel Chambers are now mechanically installed, including the special chambers on the Endcap Toroids. Here the critical path is the commissioning of the RPC chambers and the installation of all CAEN power supplies (late delivery).
 • The Muon end-wall chamber installation is well advanced but not fully completed. Some chambers remain to be installed (about 3 weeks of work) on both sides A and C. This can also be done once the beam pipe is already fully closed.

  5. Installation Schedule Version 9.3 for Completing the Detector


  6. Updated information available at: http://hcc.web.cern.ch/hcc/
 The progress on the cooldown has been good!
 5-April-2008


  7. 3 principal activities for the next challenge in May
 1. T0 processing and data distribution
 2. T1 data re-processing
 3. T2 Simulation Production
 Fully rely on srmv2 everywhere
 Test now at real scale (need disk space now!)
 Test the full show: shifts, communication, etc.


  8. -1- T0 processing and data distribution
 • Starts on May 5 for 4 weeks
 • Use M6 data and data generator
 • Test of new small-file merging schema
 • Simulate running of 10 hours @ 200 Hz per day
   – nominal is 50,000 seconds = 14 hours
 • Distribution of data to T1's and T2's
 • Request T1 storage classes ATLASDATADISK and ATLASDATATAPE for disk and tape
 • Request T1 storage space for the full 4 weeks
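 As a quick cross-check (a sketch, not from the slides themselves): 10 hours per day at 200 Hz gives the 7.2 M events/day used in the volume estimates later on, while the nominal 50,000 live seconds correspond to roughly 14 hours and 10 M events.

```python
# Minimal sketch checking the event-rate arithmetic quoted on this slide.
SECONDS_PER_HOUR = 3600
rate_hz = 200                               # trigger rate used for the challenge

# CCRC simulation: 10 hours of data taking per day at 200 Hz
ccrc_seconds = 10 * SECONDS_PER_HOUR
print(f"CCRC: {rate_hz * ccrc_seconds / 1e6:.1f} Mevents/day")                   # 7.2 Mevents/day

# Nominal ATLAS running: 50,000 live seconds per day
nominal_seconds = 50_000
print(f"Nominal live time: {nominal_seconds / SECONDS_PER_HOUR:.1f} hours/day")  # ~13.9 h ("14 hours")
print(f"Nominal: {rate_hz * nominal_seconds / 1e6:.1f} Mevents/day")             # 10.0 Mevents/day
```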


  9. [Diagram: SFO1-SFO5 stream data to the T0 farm for calibration and reconstruction; RAW is written to tape, ESD and AOD are checksummed and merged, and RAW, ESD and AOD are distributed to the T1's, with AOD passed on to the T2's]


  10. ATLAS Data @ T0
 • Raw data arrives on disk and is archived to tape
 • initial processing provides ESD, AOD, TAG and NTUP
 • a fraction (10%) of RAW, ESD and AOD is made available on disk
 • RAW data is distributed by ratio over the T1's to go to tape
 • AOD is copied to each T1 to remain on disk
 • ESD follows the RAW to the T1
 • a second ESD copy is sent to the paired T1
 • we may change this distribution for early running
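 A toy sketch of these replication rules follows. The MoU shares are the ones listed on the "Data sample per day" slide below; the T1 pairing used for the second ESD copy is an invented example, since the slides do not spell the pairs out.

```python
# Illustrative sketch of the replication rules on this slide; not ATLAS production code.
T1_SHARE = {"BNL": 0.25, "IN2P3": 0.15, "SARA": 0.15, "RAL": 0.10, "FZK": 0.10,
            "CNAF": 0.05, "ASGC": 0.05, "PIC": 0.05, "NDGF": 0.05, "Triumf": 0.05}
PAIRED_T1 = {"BNL": "IN2P3", "IN2P3": "BNL"}   # hypothetical pairing, for illustration only

def destinations(data_type: str, primary_t1: str) -> dict:
    """Return where one file of the given type is replicated from the T0."""
    if data_type == "RAW":
        return {primary_t1: "tape"}                             # one tape copy, by MoU share
    if data_type == "ESD":
        dests = {primary_t1: "disk"}                            # ESD follows the RAW to its T1
        dests[PAIRED_T1.get(primary_t1, primary_t1)] = "disk"   # second copy to the paired T1
        return dests
    if data_type == "AOD":
        return {t1: "disk" for t1 in T1_SHARE}                  # full AOD copy at every T1
    raise ValueError(data_type)

print(destinations("ESD", "BNL"))
```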


  11. Tier-0 Dataflow for ATLAS DATA
 [Diagram: from the t0atlas buffer, RAW goes to Tier-1 ATLASDATATAPE and ESD/AOD/TAG to Tier-1 ATLASDATADISK; AOD and TAG flow on to Tier-2 ATLASDATADISK for group analysis and to end-user analysis at Tier-3, using the ATLASGRP<name> and ATLASENDUSER tokens]


  12. Data sample per day
 Event sizes: RAW = 1.6 MB, ESD = 1 MB, AOD = 0.2 MB
 10 hrs @ 200 Hz = 7.2 Mevents/day
 In the T0:
 • 11.5 TB/day RAW to tape
 • 1.2 TB/day RAW to disk (10%)
 • 7.2 TB/day ESD to disk
 • 1.4 TB/day AOD to disk
 10-day t0atlas buffer must be 98 TByte

 T1       Share   Tape     Disk
 BNL      25 %    2.9 TB   9 TB
 IN2P3    15 %    1.7 TB   4 TB
 SARA     15 %    1.7 TB   4 TB
 RAL      10 %    1.2 TB   3 TB
 FZK      10 %    1.2 TB   3 TB
 CNAF      5 %    0.6 TB   2 TB
 ASGC      5 %    0.6 TB   2 TB
 PIC       5 %    0.6 TB   2 TB
 NDGF      5 %    0.6 TB   2 TB
 Triumf    5 %    0.6 TB   2 TB
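 The headline numbers on this slide follow from the event sizes and the 7.2 M events/day; below is a small sketch re-deriving them, assuming (as the slide's arithmetic implies) decimal units of 10^6 MB per TB.

```python
# Sketch re-deriving the per-day volumes quoted on this slide.
EVENTS_PER_DAY = 7.2e6                      # 10 h/day at 200 Hz
SIZE_MB = {"RAW": 1.6, "ESD": 1.0, "AOD": 0.2}
TB = 1e6                                    # MB per TB (decimal, as on the slide)

raw_per_day = EVENTS_PER_DAY * SIZE_MB["RAW"] / TB   # 11.5 TB/day RAW to tape
esd_per_day = EVENTS_PER_DAY * SIZE_MB["ESD"] / TB   #  7.2 TB/day ESD to disk
aod_per_day = EVENTS_PER_DAY * SIZE_MB["AOD"] / TB   #  1.4 TB/day AOD to disk
raw_disk    = 0.10 * raw_per_day                     #  1.2 TB/day RAW to disk (10%)

# 10-day t0atlas disk buffer holds the disk-resident fraction of each day's output
buffer_tb = 10 * (raw_disk + esd_per_day + aod_per_day)   # ~98 TB

# Each T1 archives its MoU share of the RAW stream to tape
T1_SHARE = {"BNL": 0.25, "IN2P3": 0.15, "SARA": 0.15, "RAL": 0.10, "FZK": 0.10,
            "CNAF": 0.05, "ASGC": 0.05, "PIC": 0.05, "NDGF": 0.05, "Triumf": 0.05}
for t1, share in T1_SHARE.items():
    print(f"{t1:7s} {share * raw_per_day:4.1f} TB/day RAW to tape")
print(f"t0atlas buffer: {buffer_tb:.0f} TB")
```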


  13. Tape & Disk Space Requirements for the 4 weeks of CCRC
 Event sizes: RAW = 1.6 MB, ESD = 1 MB, AOD = 0.2 MB
 10 hrs @ 200 Hz = 7.2 Mevents/day
 CCRC is 4 weeks = 28 days
 In the T0:
 • 322 TB RAW to tape
 • 32 TB RAW to disk (10%)
 • 202 TB ESD to disk
 • 39 TB AOD to disk
 atldata disk must be 273 TB

 T1       Share   Tape    Disk
 BNL      25 %    81 TB   252 TB
 IN2P3    15 %    48 TB   112 TB
 SARA     15 %    48 TB   112 TB
 RAL      10 %    34 TB    84 TB
 FZK      10 %    34 TB    84 TB
 CNAF      5 %    17 TB    56 TB
 ASGC      5 %    17 TB    56 TB
 PIC       5 %    17 TB    56 TB
 NDGF      5 %    17 TB    56 TB
 Triumf    5 %    17 TB    56 TB
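 These 4-week figures appear to be the per-day values from the previous slide scaled by 28 days; a short sketch of that scaling (my assumption, but it reproduces the table):

```python
# Sketch: scale the per-day T1 figures from the previous slide to the 28-day CCRC period.
DAYS = 28
# per-T1 (tape TB/day, disk TB/day) from the "Data sample per day" slide
PER_DAY = {"BNL": (2.9, 9), "IN2P3": (1.7, 4), "SARA": (1.7, 4), "RAL": (1.2, 3),
           "FZK": (1.2, 3), "CNAF": (0.6, 2), "ASGC": (0.6, 2), "PIC": (0.6, 2),
           "NDGF": (0.6, 2), "Triumf": (0.6, 2)}
for t1, (tape, disk) in PER_DAY.items():
    print(f"{t1:7s} tape {tape * DAYS:5.0f} TB   disk {disk * DAYS:5.0f} TB")

# T0 disk over the same period: RAW(10%) + ESD + AOD totals quoted on this slide
print(f"T0 atldata disk: {32 + 202 + 39} TB")   # 273 TB
```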


  14. Throughput during February CCRC
 • Generated data files of realistic sizes
   – RAW to all T1's to tape
   – ESD and AOD to all T1's to disk
   – Ramped up to nominal rates
   – Full Computing Model with MoU shares
 • Relatively good throughput achieved
   – Sustained 700 MB/s for 2 days
   – Peaks above 1.1 GB/s for several hours
   – Errors understood and fixed
 [Plot: throughput in MByte/sec per day during the February CCRC]

  15. U.S. Tier 2's

  16. ATLAS Data @ T1
 • T1's are for data archive and (re-)processing
 • And for group analysis on ESD (and AOD data)
 • A share of RAW data goes to tape @ T1
 • Each T1 receives a copy of all AOD files
 • Each T1 receives a share of the ESD files
   – In total 3 copies of all ESD files world-wide

 Space Token      Storage Type   Used for        Size
 ATLASDATADISK    T0D1           ESD, AOD, TAG   By Share
 ATLASDATATAPE    T1D0           RAW             By Share


  17. ATLAS Data @ T2
 T2's are for Monte Carlo Simulation Production
 • ATLAS assumes there is no tape storage available
 • Also used for Group analysis
   – Each physics group has its own space token ATLASGRP<name>
   – E.g. ATLASGRPHIGGS, ATLASGRPSUSY, ATLASGRPMINBIAS
   – Some initial volume for testing: 2 TB
 • T2's may request AOD datasets
   – Defined by the primary interest of the physics community
   – Another full copy of all AOD's should be available in the cloud
 • Also for End-User Analysis
   – Accounted as T3 activity, not under ATLAS control
   – Storage space not accounted as ATLAS
   – But almost all T2's (and even T1's) need space for the token ATLASENDUSER
   – Some initial value for testing: 2 TB

 Space Token      Storage Type   Used For        Size [TB]
 ATLASDATADISK    T0D1           AOD, TAG        2
 ATLASGRP<name>   T0D1           Group Data      2
 ATLASENDUSER     T0D1           End-User Data   2
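 Pulling the T1 and T2 tables together, here is a small illustrative summary (not an ATLAS tool) mapping each space token named so far to its storage class in the TxDy notation and its intended content:

```python
# Illustrative summary only: space tokens from the T1/T2 slides, the storage class
# each implies (tape copies / disk copies), and what is written into them.
SPACE_TOKENS = {
    # token            (tape copies, disk copies, content)
    "ATLASDATATAPE":   (1, 0, "RAW from the T0, by MoU share (T1)"),
    "ATLASDATADISK":   (0, 1, "ESD/AOD/TAG at T1 (by share); AOD/TAG at T2 (2 TB)"),
    "ATLASGRP<name>":  (0, 1, "physics-group data at T2, e.g. ATLASGRPHIGGS (2 TB)"),
    "ATLASENDUSER":    (0, 1, "end-user analysis output, accounted as T3 (2 TB)"),
}

def storage_class(token: str) -> str:
    """Express a token's storage class in the TxDy notation used on these slides."""
    tape, disk, _ = SPACE_TOKENS[token]
    return f"T{tape}D{disk}"

for token, (_, _, use) in SPACE_TOKENS.items():
    print(f"{token:16s} {storage_class(token)}  {use}")
```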


  18. -2- T1 re-processing
 • Not at full scale (yet)
 • M5 data staged back from tape per dataset
 • Conditions data on disk (140 files)
   – Each re-processing job opens ~35 of those files
 • M5 data file copied to the local disk of the WN
 • Output ESD and AOD files
   – Kept on disk and archived on tape (T1D1 storage class)
   – ESD files copied to one or two other T1's
   – AOD files copied to all other T1's

 Space Token         Storage Type   Used for                                            Size
 ATLASDATADISKTAPE   T1D1           ESD, AOD, TAG from re-processing of detector data   By Share
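 A schematic sketch of the per-job flow described on this slide follows; every helper below is a placeholder stub written for illustration, not an ATLAS or DDM tool.

```python
# Schematic sketch of one re-processing job; all helpers are placeholder stubs.
def stage_from_tape(f):            return f              # M5 RAW staged back per dataset
def copy_to_worker_node(f):        return f              # copied to the WN local disk
def select_conditions(files):      return files[:35]     # each job opens ~35 of the 140 files
def run_reconstruction(f, cond):   return f + ".ESD", f + ".AOD"
def write_to_space_token(f, tok):  print(f"store {f} in {tok}")
def replicate(f, n_other_t1s):     print(f"replicate {f} to {n_other_t1s} other T1(s)")

def reprocess_file(raw_file, conditions_files):
    local = copy_to_worker_node(stage_from_tape(raw_file))
    esd, aod = run_reconstruction(local, select_conditions(conditions_files))
    # outputs kept on disk and archived to tape (T1D1 storage class)
    write_to_space_token(esd, "ATLASDATADISKTAPE")
    write_to_space_token(aod, "ATLASDATADISKTAPE")
    replicate(esd, n_other_t1s=2)       # ESD copied to one or two other T1's
    replicate(aod, n_other_t1s="all")   # AOD copied to all other T1's

reprocess_file("M5.raw", [f"cond_{i}.db" for i in range(140)])
```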


  19.


  20. Space requirements for M5 re-processing
 • M5 RAW data was distributed over the T1's
 • Total data volume 60 TB (3 days)
 • Only (small) ESD output from re-processing
 • So minimal requirements for the T1D1 pool
 • Re-processing must be 3 times faster than initial processing to achieve 3 re-processing passes per year
 • So, should aim to re-process M5 every day
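 A quick sketch of the arithmetic behind the last two bullets, assuming the 60 TB were accumulated over the 3 days of M5 running: re-processing at three times the acquisition rate means turning over the whole M5 sample daily.

```python
# Sketch: turning the "3x faster" target into a daily M5 goal.
m5_volume_tb   = 60     # total M5 RAW distributed over the T1's
days_to_record = 3      # M5 was taken over 3 days, i.e. ~20 TB/day
speedup        = 3      # re-process 3x faster than first-pass processing
                        # to allow 3 full re-processings per year

reproc_tb_per_day = speedup * m5_volume_tb / days_to_record
print(f"target: {reproc_tb_per_day:.0f} TB/day, i.e. the whole {m5_volume_tb} TB M5 sample each day")
```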

