
SLIDE 1

ISGC – Taipei, May 2006

The Pilot WLCG Service: Last steps before full production

i) Definition of what the service actually is
ii) Highlight steps that still need to be taken
iii) Issues & concerns

Jamie Shiers, CERN

SLIDE 2

Abstract

  • The production phase of Service Challenge 4 – also known as the Pilot WLCG Service – is due to start at the beginning of June 2006. This leads to the full production WLCG service from October 2006.
  • Thus the WLCG pilot is the final opportunity to shake down not only the services provided as part of the WLCG computing environment – including their functionality – but also the operational and support procedures that are required to offer a full production service.
  • This talk will describe in detail all aspects of the service, together with the currently planned production and test activities of the LHC experiments to validate their computing models as well as the service itself. [ Probably no time for this, but slides included… ]
  • There is a Service Challenge talk on Thursday with detail about the recent SC4 T0-T1 disk-disk and disk-tape activities, plus the Tx-Ty transfers that follow…

SLIDE 3

Caveat

  • I realize that this presentation is primarily oriented towards the WLCG community.
  • However, I had the choice of:
    • Something targeted at the general Grid community (and probably of little interest to WLCG…)
    • Something addressing the main concerns of delivering, in the IMMEDIATE FUTURE, reliable production services for the WLCG (with possibly some useful messages for the general community)
  • I have chosen the latter…

There will nevertheless be a few introductory slides…

SLIDE 4

[This and several later slides are from Les Robertson's LCG talk at HEPiX, Rome, 05 Apr 2006 – les.robertson@cern.ch]

The Worldwide LHC Computing Grid

  • Purpose:
    • Develop, build and maintain a distributed computing environment for the storage and analysis of data from the four LHC experiments;
    • Ensure the computing service … and common application libraries and tools
  • Phase I – 2002-05 – Development & planning
  • Phase II – 2006-2008 – Deployment & commissioning of the initial services

The solution!

SLIDE 5

[Diagram: Data Handling and Computation for Physics Analysis (CERN) – detector → event filter (selection & reconstruction) → raw data → event reconstruction / event reprocessing → event summary data → batch physics analysis → analysis objects (extracted by physics topic) → interactive physics analysis, with event simulation feeding into the chain]

SLIDE 6

LCG Service Hierarchy

Tier-0 – the accelerator centre
  • Data acquisition & initial processing
  • Long-term data curation
  • Data distribution to Tier-1 centres

Tier-1 – “online” to the data acquisition process; high availability
  • Managed Mass Storage – grid-enabled data service
  • All re-processing passes
  • Data-heavy analysis
  • National, regional support

Tier-1 centres: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY)

Tier-2 – ~100 centres in ~40 countries
  • Simulation
  • End-user analysis – batch and interactive
  • Services, including Data Archive and Delivery, from Tier-1s

SLIDE 7

les robertson – cern-it, last update 02/05/2006 13:11

Summary of Computing Resource Requirements

All experiments – 2008 (from LCG TDR, June 2005):

                        CERN   All Tier-1s   All Tier-2s   Total
CPU (MSPECint2000s)       25            56            61     142
Disk (PetaBytes)           7            31            19      57
Tape (PetaBytes)          18            35             –      53

Shares from the accompanying pie charts – CPU: CERN 18%, all Tier-1s 39%, all Tier-2s 43%; Disk: CERN 12%, all Tier-1s 55%, all Tier-2s 33%; Tape: CERN 34%, all Tier-1s 66%.
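
As a cross-check, the totals and pie-chart shares follow directly from the table. A minimal sketch (numbers transcribed from the table above; differences of a percent or so are rounding on the slide):

```python
# Recompute row totals and CERN / Tier-1 / Tier-2 shares from the 2008
# resource table above; the shares should match the pie-chart figures.
resources = {
    # resource: (CERN, all Tier-1s, all Tier-2s)
    "CPU (MSPECint2000s)": (25, 56, 61),
    "Disk (PB)": (7, 31, 19),
    "Tape (PB)": (18, 35, 0),  # no Tier-2 tape in the TDR table
}

for name, parts in resources.items():
    total = sum(parts)
    shares = " / ".join(f"{100 * p / total:.0f}%" for p in parts)
    print(f"{name}: total = {total}, shares (CERN/T1/T2) = {shares}")
```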

SLIDE 8

What are the requirements for the WLCG?

  • Over the past 18–24 months, we have seen:
    • The LHC Computing Model documents and Technical Design Reports;
    • The associated LCG Technical Design Report;
    • The finalisation of the LCG Memorandum of Understanding (MoU).
  • Together, these define not only the functionality required (Use Cases), but also the requirements in terms of Computing, Storage (disk & tape) and Network
    • But not necessarily in a site-accessible format…
  • We also have close-to-agreement on the Services that must be run at each participating site
    • Tier0, Tier1, Tier2, VO-variations (few) and specific requirements
  • We also have close-to-agreement on the roll-out of Service upgrades to address critical missing functionality
  • We have an on-going programme to ensure that the service delivered meets the requirements, including the essential validation by the experiments themselves

SLIDE 9

More information on the Experiments’ Computing Models

  • LCG Planning Page
  • GDB Workshops
    • Mumbai Workshop – see GDB Meetings page (experiment presentations, documents)
    • Tier-2 workshop and tutorials, CERN, 12-16 June
  • Technical Design Reports
    • LCG TDR – review by the LHCC
    • ALICE TDR; supplement: Tier-1 dataflow diagrams
    • ATLAS TDR; supplement: Tier-1 dataflow
    • CMS TDR; supplement: Tier-1 Computing Model
    • LHCb TDR; supplement: additional site dataflow diagrams

Please register asap! Both workshop & tutorials!

SLIDE 10

How do we measure success?

  • By measuring the service we deliver against the MoU targets:
    • Data transfer rates;
    • Service availability and time to resolve problems;
    • Resources provisioned across the sites as well as measured usage…
  • By the “challenge” established at CHEP 2004:
    • [ The service ] “should not limit ability of physicist to exploit performance of detectors nor LHC’s physics potential“
    • “…whilst being stable, reliable and easy to use”
    • Preferably both…
  • Equally important is our state of readiness for startup / commissioning, which we know will be anything but steady state
  • [ Oh yes, and that favourite metric I’ve been saving… ]

SLIDE 11

The Requirements

  • Resource requirements, e.g. ramp-up in TierN CPU, disk, tape and network
    • Look at the Computing TDRs;
    • Look at the resources pledged by the sites (MoU etc.);
    • Look at the plans submitted by the sites regarding acquisition, installation and commissioning;
    • Measure what is currently (and historically) available; signal anomalies (see the sketch after this list).
  • Functional requirements, in terms of services and service levels, including operations, problem resolution and support
    • Implicit / explicit requirements in Computing Models;
    • Agreements from Baseline Services Working Group and Task Forces;
    • Service Level definitions in MoU;
    • Measure what is currently (and historically) delivered; signal anomalies.
  • Data transfer rates – the TierX ↔ TierY matrix
    • Understand Use Cases;
    • Measure …

And test extensively, both ‘dteam’ and other VOs
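
The “measure and signal anomalies” step is mechanical once pledges and measurements sit in one place. A hypothetical sketch (site names, numbers and the 10% tolerance are all invented for illustration):

```python
# Flag sites whose delivered capacity falls more than `tolerance`
# below their MoU pledge. All figures here are made up.
PLEDGED_KSI2K = {"Site-A": 5000, "Site-B": 2000, "Site-C": 1500}
DELIVERED_KSI2K = {"Site-A": 4900, "Site-B": 1200, "Site-C": 1480}

def anomalies(pledged, delivered, tolerance=0.10):
    for site, pledge in pledged.items():
        shortfall = 1 - delivered.get(site, 0) / pledge
        if shortfall > tolerance:
            yield site, shortfall

for site, shortfall in anomalies(PLEDGED_KSI2K, DELIVERED_KSI2K):
    print(f"ANOMALY: {site} is {shortfall:.0%} below pledge")  # Site-B: 40%
```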

SLIDE 12

Service Challenges

  • Purpose
    • Understand what it takes to operate a real grid service – run for weeks/months at a time (not just limited to experiment Data Challenges)
    • Trigger and verify Tier-1 & large Tier-2 planning and deployment – tested with realistic usage patterns
    • Get the essential grid services ramped up to target levels of reliability, availability, scalability, end-to-end performance
  • Four progressive steps from October 2004 thru September 2006:
    • End 2004 – SC1 – data transfer to subset of Tier-1s
    • Spring 2005 – SC2 – include mass storage, all Tier-1s, some Tier-2s
    • 2nd half 2005 – SC3 – Tier-1s, >20 Tier-2s – first set of baseline services
    • Jun-Sep 2006 – SC4 – pilot service
    • Autumn 2006 – LHC service in continuous operation – ready for data taking in 2007
SLIDE 13

WLCG Service Deadlines

[Timeline: cosmics in 2006, first physics in 2007, full physics run in 2008]

  • Pilot Services – stable service from 1 June 06
  • LHC Service in operation – 1 Oct 06; over the following six months, ramp up to full operational capacity & performance
  • LHC service commissioned – 1 Apr 07

SLIDE 14

SC4 – the Pilot LHC Service from June 2006

A stable service on which experiments can make a full demonstration of the experiment offline chain:
  • DAQ → Tier-0 → Tier-1: data recording, calibration, reconstruction
  • Offline analysis – Tier-1 ↔ Tier-2 data exchange: simulation, batch and end-user analysis

And sites can test their operational readiness:
  • Service metrics → MoU service levels
  • Grid services
  • Mass storage services, including magnetic tape

Extension to most Tier-2 sites. An evolution of SC3 rather than lots of new functionality. In parallel:
  • Development and deployment of distributed database services (3D project)
  • Testing and deployment of new mass storage services (SRM 2.1)

SLIDE 15

Production Services: Challenges

  • Why is it so hard to deploy reliable, production services?
  • What are the key issues remaining?
  • How are we going to address them?
SLIDE 16

Production WLCG Services

(a) The building blocks

SLIDE 17

Grid Computing

  • Today there are many definitions of Grid computing:
  • The definitive definition of a Grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist". The three points of this checklist are:
    • Computing resources are not administered centrally;
    • Open standards are used;
    • Non-trivial quality of service is achieved.
  • … Some sort of Distributed System at least…
  • WLCG could be called a fractal Grid (explained later…)

SLIDE 18

Distributed Systems…

  • “A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.” – Leslie Lamport

SLIDE 19

The Creation of the Internet

  • The USSR's launch of Sputnik spurred the U.S. to create the Defense Advanced Research Projects Agency (DARPA) in February 1958 to regain a technological lead. DARPA created the Information Processing Technology Office to further the research of the Semi Automatic Ground Environment program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution. Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching to make a network highly robust and survivable.

  • In August 1991 CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few web pages at CERN (which was set up by international treaty and not bound by the laws of either France or Switzerland).

SLIDE 20

Production WLCG Services

(b) So What Happens When¹ it Doesn’t Work?

¹ Something doesn’t work all of the time

SLIDE 21

The 1st Law Of (Grid) Computing

  • Murphy's law (also known as Finagle's law or Sod's law) is a popular adage in Western culture, which broadly states that things will go wrong in any given situation: "If there's more than one way to do a job, and one of those ways will result in disaster, then somebody will do it that way." It is most commonly formulated as "Anything that can go wrong will go wrong." In American culture the law was named after Major Edward A. Murphy, Jr., a development engineer working for a brief time on rocket sled experiments done by the United States Air Force in 1949.

  • … The law first received public attention during a press conference … asked why nobody had been severely injured during the rocket sled tests [of human tolerance for g-forces during rapid deceleration], Stapp replied that it was because they took Murphy's Law under consideration.

SLIDE 22

Problem Response Time and Availability targets – Tier-1 Centres

Maximum delay in responding to operational problems (hours), and availability:

  • Acceptance of data from the Tier-0 Centre during accelerator operation – service interruption: 12; degradation > 50%: 12; degradation > 20%: 24; availability: 99%
  • Other essential services – prime service hours – interruption: 2; degradation > 50%: 2; degradation > 20%: 4; availability: 98%
  • Other essential services – outside prime service hours – interruption: 24; degradation > 50%: 48; degradation > 20%: 48; availability: 97%

SLIDE 23

Problem Response Time and Availability targets – Tier-2 Centres

Maximum delay in responding to operational problems, and availability:

  • End-user analysis facility – prime time: 2 hours; other periods: 72 hours; availability: 95%
  • Other services – prime time: 12 hours; other periods: 72 hours; availability: 95%
SLIDE 24

CERN (Tier0) MoU Commitments

Maximum delay in responding to operational problems (DOWN / degradation > 50% / degradation > 20%), and average availability[1] on an annual basis (beam ON / beam OFF):

  • Raw data recording: 4 h / 6 h / 6 h; availability 99% / n/a
  • Event reconstruction / data distribution (beam ON): 6 h / 6 h / 12 h; availability 99% / n/a
  • Networking service to Tier-1 Centres (beam ON): 6 h / 6 h / 12 h; availability 99% / n/a
  • All other Tier-0 services: 12 h / 24 h / 48 h; availability 98% / 98%
  • All other services[2] – prime service hours[3]: 1 h / 1 h / 4 h; availability 98% / 98%
  • All other services – outside prime service hours: 12 h / 24 h / 48 h; availability 97% / 97%
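
Checking a measured service against these tables is then a simple lookup. A minimal sketch, encoding three of the Tier-1 rows above (the "measured" inputs are invented):

```python
# MoU targets from the Tier-1 table: (max hours to respond to a
# service interruption, availability target).
MOU_TIER1 = {
    "data acceptance from Tier-0": (12, 0.99),
    "other essential services (prime hours)": (2, 0.98),
    "other essential services (outside prime hours)": (24, 0.97),
}

def check(service, availability, response_hours):
    max_delay, target = MOU_TIER1[service]
    ok = availability >= target and response_hours <= max_delay
    return "OK" if ok else "MoU VIOLATION"

print(check("data acceptance from Tier-0", 0.995, 8))            # OK
print(check("other essential services (prime hours)", 0.96, 3))  # MoU VIOLATION
```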

SLIDE 25

  • The Service Challenge programme this year must show that we can run reliable services
  • Grid reliability is the product of many components – middleware, grid operations, computer centres, …
  • Target for September: 90% site availability; 90% user job success
  • Requires a major effort by everyone to monitor, measure, debug

First data will arrive next year. NOT an option to get things going later.

Too modest? Too ambitious?

SLIDE 26

The CERN Site Service Dashboard

SLIDE 27

SC4 Throughput Summary

  • We did not sustain a daily average of 1.6 GB/s out of CERN, nor the full nominal rates to all Tier1s for the period
    • Just under 80% of target in week 2
  • Things clearly improved – both since SC3 and during SC4:
    • Some sites meeting the targets! (In this context I always mean T0+T1)
    • Some sites ‘within spitting distance’ – optimisations? Bug-fixes? (See below)
    • Some sites still with a way to go…
  • “Operations” of Service Challenges still very heavy. Will this change?
    • Need more rigour in announcing / handling problems, site reports, convergence with standard operations etc. Vacations have a serious impact on quality of service!
  • We still need to learn:
    • How to ramp up rapidly at start of run;
    • How to recover from interventions (scheduled are worst! – 48 hours!)
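
For scale, statements like "just under 80% of target" come from comparing daily averaged volume with the 1.6 GB/s nominal rate out of CERN (≈ 138 TB/day). A rough sketch with invented daily volumes:

```python
# Convert transferred volume per day into an average rate and compare
# with the nominal 1.6 GB/s out of CERN. The daily figures are invented.
NOMINAL_GB_PER_S = 1.6
SECONDS_PER_DAY = 24 * 3600

daily_terabytes = [98, 110, 105, 92, 117, 101, 95]  # hypothetical week

for day, tb in enumerate(daily_terabytes, start=1):
    rate = tb * 1000 / SECONDS_PER_DAY  # TB/day -> GB/s
    print(f"day {day}: {rate:.2f} GB/s = {rate / NOMINAL_GB_PER_S:.0%} of nominal")
```
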
SLIDE 28

R. Bailey, Chamonix XV, January 2006

Breakdown of a normal year

  • ~ 140-160 days for physics per year – not forgetting ion and TOTEM operation
  • Leaves ~ 100-120 days for proton luminosity running?
  • Efficiency for physics 50%?
  • ~ 50 days ~ 1200 h ~ 4 × 10⁶ s of proton luminosity running / year (50 days × 24 h = 1200 h; × 3600 s/h ≈ 4.3 × 10⁶ s)

  • From Chamonix XIV – Service upgrade slots?

SLIDE 29

[ATLAS] T1 – T1 Rates (from LCG OPN meeting in Rome)

  • Take ATLAS as the example – highest inter-T1 rates due to multiple ESD copies
  • Given the spread of resources offered by T1s to ATLAS, this requires “pairing of sites” to store ESD mirrors
  • Reprocessing performed ~1 month after data taking with better calibrations & at end of year with better calibrations & algorithms
  • Continuous or continual? (i.e. is network load constant, or peaks + troughs?)

ATLAS shares: NDGF (6%), PIC (4-6%), TRIUMF (4%) + ASGC (8%), NIKHEF/SARA (13%), RAL (7%), CNAF (7%), BNL (22%), FZK (10%) + CCIN2P3 (13%)
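
One way to picture the "pairing of sites" problem: pair large shares with small ones so that each pair can hold a full ESD copy between them. The sketch below is only an illustration of the idea, not the pairing ATLAS actually chose (PIC is taken as 5% from its 4-6% range):

```python
# Greedy illustration: sort Tier-1s by ATLAS share, then pair the largest
# remaining site with the smallest. Not the real ATLAS pairing.
shares = {"BNL": 22, "CCIN2P3": 13, "NIKHEF/SARA": 13, "FZK": 10,
          "ASGC": 8, "RAL": 7, "CNAF": 7, "NDGF": 6, "PIC": 5, "TRIUMF": 4}

ordered = sorted(shares, key=shares.get, reverse=True)
pairs = [(ordered[i], ordered[-1 - i]) for i in range(len(ordered) // 2)]

for a, b in pairs:
    print(f"{a} ({shares[a]}%) <-> {b} ({shares[b]}%): "
          f"combined {shares[a] + shares[b]}%")
```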

SLIDE 30

My Concerns on Tx-Ty Coupling

  • Running cross-site services is complicated: hard to set up; hard to monitor; hard to debug…
  • IMHO, we need to make these services as loosely coupled as possible.
  • By design, ATLAS has introduced additional coupling to the T0-T1s with the T1-T1 matrix
  • I understand your reasons for doing this, but we need to be very clear about responsibilities, problem resolution etc – both during ‘prime shift’ and also outside, + HOLIDAY PERIODS

SLIDE 31

A Simple T2 Model (from early 2005)

N.B. this may vary from region to region

  • Each T2 is configured to upload MC data to, and download data via, a given T1
  • In case the T1 is logically unavailable, wait and retry
    • MC production might eventually stall
  • For data download, retrieve via alternate route / T1
    • Which may well be at lower speed, but hopefully rare
  • Data residing at a T1 other than the ‘preferred’ T1 is transparently delivered through an appropriate network route
  • T1s are expected to have at least as good interconnectivity as to T0
  • Scheduled T1 interventions: announced to dependent T2s (+ WLCG). A good time for routine maintenance / intervention also at these sites?
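
In code, the model is a plain retry/fallback loop. A minimal sketch, assuming a hypothetical transfer() helper (in practice, e.g., an FTS job submission) and placeholder site names:

```python
# Sketch of the T2 model above: MC uploads wait and retry against the
# preferred T1 (and may stall); downloads fall back to alternate T1s.
import time

PREFERRED_T1 = "T1-preferred"            # placeholder site names
ALTERNATE_T1S = ["T1-alt-1", "T1-alt-2"]

def transfer(t1, path):
    return False  # stub: replace with a real (e.g. FTS) transfer

def upload_mc(path, retries=5, backoff=600):
    for _ in range(retries):
        if transfer(PREFERRED_T1, path):
            return True
        time.sleep(backoff)               # T1 logically unavailable: wait
    return False                          # MC production stalls

def download(path):
    for t1 in [PREFERRED_T1, *ALTERNATE_T1S]:  # alternate route if needed
        if transfer(t1, path):
            return t1                          # possibly a slower route
    raise RuntimeError(f"no Tier-1 could deliver {path}")
```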

SLIDE 32

SC3 Services – Lessons (re-)Learnt

  • It takes a L O N G time to put services into (full) production
  • A lot of experience gained in running these services Grid-wide
  • Merge of ‘SC’ and ‘CERN’ daily operations meetings has been good
  • Still need to improve ‘Grid operations’ and ‘Grid support’
    • A CERN ‘Grid Operations Room’ needs to be established
  • Need to be more rigorous about:
    • Announcing scheduled downtimes;
    • Reporting unscheduled ones;
    • Announcing experiment plans;
    • Reporting experiment results;
    • Attendance at ‘V-meetings’;
  • A daily OPS ‘meeting’ is foreseen for LHC preparation / commissioning

Being addressed now

SLIDE 33

Measuring Response times and Availability

Site Functional Test Framework:

  • monitoring services by running regular tests
  • basic services – SRM, LFC, FTS, CE, RB, Top-level BDII, Site BDII, MyProxy, VOMS, R-GMA, …
  • VO environment – tests supplied by experiments
  • results stored in database
  • displays & alarms for sites, grid operations, experiments
  • high-level metrics for management
  • integrated with EGEE operations portal – main tool for daily operations
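
The core of such a framework is small: run a probe per service, timestamp and store the result, and alarm on failure. A toy sketch (the two probes here are stand-ins; real SFT probes for SRM, LFC, FTS, CE, etc. are far richer):

```python
# Toy Site Functional Test loop: run per-service probes at a site,
# store timestamped results in a database, raise alarms on failure.
import sqlite3, time

PROBES = {
    "CE":  lambda site: True,   # stand-in probes; always pass / fail
    "SRM": lambda site: False,
}

db = sqlite3.connect("sft.db")
db.execute("CREATE TABLE IF NOT EXISTS results "
           "(ts REAL, site TEXT, service TEXT, ok INTEGER)")

def run_tests(site):
    for service, probe in PROBES.items():
        ok = bool(probe(site))
        db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                   (time.time(), site, service, int(ok)))
        if not ok:
            print(f"ALARM: {service} test failed at {site}")
    db.commit()

run_tests("SomeSite-PROD")   # hypothetical site name
```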

SLIDE 34

Availability Targets

  • End September 2006 – end of Service Challenge 4: 8 Tier-1s and 20 Tier-2s at > 90% of MoU targets
  • April 2007 – service fully commissioned: all Tier-1s and 30 Tier-2s at > 100% of MoU targets

SLIDE 35

[Charts: “Availability of 10 Tier-1 Sites” and “Availability of 5 Tier-1 Sites” – percentage available per month, Jul-05 to Mar-06, y-axis 0-120%]

Site Functional Tests
  • Tier-1 sites without BNL
  • Basic tests only
  • Only partially corrected for scheduled down time
  • Not corrected for sites with less than 24-hour coverage
  • Average value of sites shown
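
The availability plotted per month is essentially the pass fraction of these test samples, with announced scheduled downtime excluded (only partially, per the caveats above). A minimal sketch with invented samples:

```python
# Monthly availability = fraction of test samples passing, ignoring
# samples that fall inside announced scheduled downtimes. Invented data.
def availability(samples, scheduled):
    """samples: [(timestamp, passed)]; scheduled: [(start, end)]."""
    in_downtime = lambda ts: any(s <= ts < e for s, e in scheduled)
    counted = [ok for ts, ok in samples if not in_downtime(ts)]
    return sum(counted) / len(counted) if counted else None

# 720 hourly samples (~1 month), failing one hour per day, with a
# 48-hour scheduled intervention at the start of the month:
samples = [(h, h % 24 != 3) for h in range(720)]
print(f"{availability(samples, scheduled=[(0, 48)]):.1%}")   # ~95.8%
```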

SLIDE 36

The Dashboard

  • Sounds like a conventional problem for a ‘dashboard’
  • But there is not one single viewpoint…
    • Funding agency – how well are the resources provided being used?
    • VO manager – how well is my production proceeding?
    • Site administrator – are my services up and running? MoU targets?
    • Operations team – are there any alarms?
    • LHCC referee – how is the overall preparation progressing? Areas of concern?
  • Nevertheless, much of the information that would need to be collected is common…
  • So separate the collection from presentation (views…)
  • As well as the discussion on metrics…
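
Separating collection from presentation can be as simple as one record store with per-stakeholder views over it. A sketch with illustrative record fields:

```python
# One common store of collected measurements; each viewpoint is just a
# different query over the same records. Fields and values are invented.
records = [
    {"site": "Site-A", "metric": "availability",  "value": 0.92},
    {"site": "Site-A", "metric": "cpu_used_frac", "value": 0.55},
    {"site": "Site-B", "metric": "availability",  "value": 0.97},
]

def site_admin_view(site):                 # are my services OK?
    return [r for r in records if r["site"] == site]

def funding_agency_view():                 # are resources being used?
    return [r for r in records if r["metric"] == "cpu_used_frac"]

def operations_view(threshold=0.95):       # any alarms?
    return [r for r in records
            if r["metric"] == "availability" and r["value"] < threshold]

print(operations_view())   # -> the Site-A availability record
```
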
SLIDE 37

Medium Term Schedule

[From the schedule chart:]
  • 3D distributed database services – development, test, deployment
  • SC4 – stable service for experiment tests
  • SRM 2 – test and deployment plan being elaborated; October target
  • Additional functionality to be agreed, developed, evaluated, then tested and deployed – ?? Deployment schedule ??

slide-38
SLIDE 38

Summary of Key Issues

  • There are clearly many areas where a great deal still remains

to be done, including:

  • Getting stable, reliable, data transfers up to full rates
  • Identifying and testing all other data transfer needs
  • Understanding experiments’ data placement policy
  • Bringing services up to required level – functionality,

availability, (operations, support, upgrade schedule, …)

  • Delivery and commissioning of needed resources
  • Enabling remaining sites to rapidly and effectively participate
  • Accurate and concise monitoring, reporting and accounting
  • Documentation, training, information dissemination…
SLIDE 39

Monitoring of Data Management

  • GridView is far from sufficient in terms of data management monitoring
  • We cannot really tell what is going on:
    • Globally;
    • At individual sites.
  • This is an area where we urgently need to improve things
  • Service Challenge Throughput tests are one thing…
  • But providing a reliable service for data distribution during accelerator operation is yet another…
    • Cannot just ‘go away’ for the weekend; staffing; coverage etc.

SLIDE 40

The Dashboard Again…

SLIDE 41

The Carminati Maxim

  • What is not there for SC4 (aka the WLCG pilot) will not be there for WLCG production (and vice versa)
  • This means we have to be using – consistently, systematically, daily, ALWAYS – all of the agreed tools and procedures that have been put in place by Grid projects such as EGEE, OSG, …
  • BY USING THEM WE WILL FIND – AND FIX – THE HOLES
  • If we continue to use – or invent more – stop-gap solutions, then these will continue well into production, resulting in confusion, duplication of effort, waste of time, …
  • (None of which can we afford)

SLIDE 42

Issues & Concerns

  • Operations: we have to be much more formal and systematic about logging and reporting. Much of the activity – e.g. on the Service Challenge throughput phases, including major service interventions – has not been systematically reported by all sites. Nor do sites regularly and systematically participate. Network operations needs to be included (site; global).
  • Support: the move to GGUS as the primary (sole?) entry point is advancing well. Need to continue efforts in this direction and ensure that the support teams behind it are correctly staffed and trained.
  • Monitoring and Accounting: we are well behind what is desirable here. Many activities – need better coordination and direction. (Although I am assured that it's coming soon…)
  • Services: all of the above need to be in place by June 1st(!) and fully debugged through the WLCG pilot phase. In conjunction with the specific services, based on Grid Middleware, Data Management products (CASTOR, dCache, …) etc.

SLIDE 43

Timeline – 2006

  • January: SC3 disk–disk repeat – nominal rates capped at 150MB/s; SRM 2.1 delivered (?)
  • February: CHEP w/s – T1-T1 Use Cases; SC3 disk–tape repeat (50MB/s, 5 drives)
  • March: Detailed plan for SC4 service agreed (M/W + DM service enhancements)
  • April: SC4 disk–disk (nominal) and disk–tape (reduced) throughput tests
  • May: Deployment of new M/W and DM services across sites – extensive testing
  • June: SC4 production – tests by experiments of ‘T1 Use Cases’; ‘Tier2 workshop’ – identification of key Use Cases and Milestones for T2s
  • July: Tape throughput tests at full nominal rates!
  • August: T2 Milestones – debugging of tape results if needed
  • September: LHCC review – rerun of tape tests if required?
  • October: WLCG Service officially opened; capacity continues to build up
  • November: 1st WLCG ‘conference’; all sites have network / tape h/w in production (?)
  • December: ‘Final’ service / middleware review leading to early 2007 upgrades for LHC data taking??

O/S upgrade? (SLC4) Sometime before April 2007!

SLIDE 44

Conclusions

  • The Service Challenge programme this year must show that we can run reliable services
  • Grid reliability is the product of many components – middleware, grid operations, computer centres, …
  • Target for September: 90% site availability; 90% user job success
  • Requires a major effort by everyone to monitor, measure, debug

First data will arrive next year. NOT an option to get things going later.

Too modest? Too ambitious?
