Computing at and Grid Application for Belle Experiment


  1. Computing at and Grid Application for Belle Experiment. S. Nishida (KEK), ISGC2006 @ Academia Sinica, Taipei, May 3, 2006

  2. Contents
     ● Introduction
     ● New B Factory Computer System at KEK
     ● Grid Application at Nagoya University
     ● Conclusion

  3. Introduction: Belle Experiment
     A “B factory” experiment at KEK (Japan), using the KEKB collider:
     ● Asymmetric e+ e− collider (3.5 GeV on 8 GeV)
     ● e+ e− → ϒ(4S) → BB (1.1 nb)
     ● Circumference 3 km
     ● World's highest luminosity
     ● Maximum beam current: LER (e+) 2.0 A, HER (e−) 1.36 A
     [Figure: aerial view of the KEKB ring and Linac near Mt. Tsukuba, with the Belle detector hall.]
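The beam energies above are chosen so that the center-of-mass energy sits on the ϒ(4S) resonance. A minimal Python sketch of the kinematics (the standard head-on, ultra-relativistic formula, not Belle software):

```python
import math

# Center-of-mass energy of an asymmetric head-on collider
# (ultra-relativistic approximation): sqrt(s) ~ 2*sqrt(E1*E2).
e_plus = 3.5   # LER positron energy [GeV]
e_minus = 8.0  # HER electron energy [GeV]

sqrt_s = 2 * math.sqrt(e_plus * e_minus)
print(f"sqrt(s) = {sqrt_s:.2f} GeV")   # ~10.58 GeV, the Y(4S) mass

# The energy asymmetry boosts the Y(4S), so the B mesons fly along
# the beam axis -- the key to time-dependent CP measurements.
beta_gamma = (e_minus - e_plus) / sqrt_s
print(f"boost beta*gamma = {beta_gamma:.3f}")  # ~0.425
```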

  4. Belle Detector
     General-purpose detector for various B/charm/τ physics.

  5. Belle Collaboration
     Aomori U., BINP, Chiba U., Chonnam Nat'l U., U. of Cincinnati, Ewha Womans U., Frankfurt U., Gyeongsang Nat'l U., U. of Hawaii, Hiroshima Tech., IHEP Beijing, IHEP Moscow, IHEP Vienna, ITEP, Jozef Stefan Inst. / U. of Ljubljana / U. of Maribor, Kanagawa U., KEK, Korea U., Krakow Inst. of Nucl. Phys., Kyoto U., Kyungpook Nat'l U., EPF Lausanne, U. of Melbourne, Nagoya U., Nara Women's U., National Central U., Nat'l Kaoshiung Normal U., National Taiwan U., National United U., Nihon Dental College, Niigata U., Osaka U., Osaka City U., Panjab U., Peking U., U. of Pittsburgh, Princeton U., Riken, Saga U., Seoul National U., Shinshu U., Sungkyunkwan U., U. of Sydney, Tata Institute, Toho U., Tohoku U., Tohoku Gakuin U., U. of Tokyo, Tokyo Inst. of Tech., Tokyo Metropolitan U., Tokyo U. of Agri. and Tech., Toyama Nat'l College, U. of Tsukuba, USTC, Utkal U., VPI, Yonsei U.
     13 countries, 57 institutes, ~400 collaborators.
     Lots of contribution from Taiwan. More contribution in the computing area, please!

  6. Luminosity
     Produce a large amount of B mesons! (1 fb⁻¹ ≈ 10⁶ BB pairs)
     ● Integrated luminosity: 570 fb⁻¹
     ● Peak luminosity: 1.63 × 10³⁴ cm⁻²s⁻¹ (~1 fb⁻¹/day)
     ● We will install the Crab Cavity this summer, which (hopefully) doubles the luminosity.
     [Plot: integrated luminosity (fb⁻¹) vs. time, 1999/6 to 2006/6.]
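The "1 fb⁻¹ ≈ 10⁶ BB pairs" rule of thumb follows directly from the 1.1 nb cross section quoted earlier (N = L × σ). A quick Python check:

```python
# Expected B Bbar yield: N = integrated luminosity x cross section.
# sigma(e+e- -> Y(4S) -> BB) ~ 1.1 nb, and 1 nb = 1e6 fb,
# so one inverse femtobarn yields about a million pairs.
sigma_bb_nb = 1.1   # cross section [nb]
nb_per_fb = 1e6     # unit conversion: 1 nb = 1e6 fb

def n_bb_pairs(lumi_fb):
    """B Bbar pairs for an integrated luminosity given in fb^-1."""
    return lumi_fb * sigma_bb_nb * nb_per_fb

print(f"{n_bb_pairs(1):.2e}")    # ~1.1e6 per fb^-1, as the slide states
print(f"{n_bb_pairs(570):.2e}")  # ~6.3e8 for the 570 fb^-1 sample
```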

  7. Results from Belle
     The Unitarity Triangle has been precisely determined!
     ● Success of the B factory experiments!
     ● Various B decay modes are studied (also charm, τ)
     (Notation: φ1 = β, φ2 = α, φ3 = γ)
     ● sin2φ1 from b → ccs
     ● |Vub| from b → ulν
     ● φ3 from B → DK Dalitz analysis (B−, B+)
     ● φ2 from B → ππ, ρπ, ρρ

  8. Results from Belle
     Higher luminosity (a larger amount of data) opens the possibility of various interesting measurements:
     ● Precise measurements of the elements of the CKM triangle
     ● Observation/search of rare decays (e.g. B → ργ, τ → μγ): studies of New Physics
     Example: full reconstruction technique, for the observation of B → τν
     ● Reconstruct one of the two B mesons
     ● Useful for B decays with neutrinos
     ● Needs an enormous number of B meson pairs, and hence CPU power, disks...

  9. Luminosity Scenario Toward Super KEKB
     Luminosity has almost doubled every year; the necessary computing resources also double.
     ● We are here: 570 fb⁻¹ (present KEKB)
     ● The integrated luminosity will reach 1~3 ab⁻¹ in the coming several years (Super KEKB).

  10. New B Factory Computing System
     The new B Factory Computer System started operation on March 23, 2006!
     In order to deal with the increasing data, we have moved from:
     ● expensive, most reliable components → less expensive, reasonably reliable components
     ● Solaris → Linux
     ● direct-access tape storage → HSM
     ● fixed → extensible
     ● closed → reasonably open

  11. Comparison with Old Systems

     Performance \ Year                     1997- (4 years)   2001- (5 years)   2006- (6 years)
     Computing server [SPECint2000 rate]    ~100 (WS)         ~1,250 (WS+PC)    ~42,500 (PC)
     Disk capacity [TB]                     ~4                ~9                1,000 (1 PB)
     Tape library capacity [TB]             160               620               3,500 (3.5 PB)
     Work group server [# of hosts]         3+(9)             11                80 + 16 FS
     User workstation [# of hosts]          23 WS + 68 X      25 WS + 100 PC    128 PC

     ● Great improvement! cf. Moore's law: doubling every 1.5 years gives ~×6.3 in 4 years, ~×10 in 5 years.
     ● An upgrade is planned in 2009, though the contract runs 6 years.
     ● Belle has additional computing resources (next page).
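The Moore's-law figures quoted above are easy to reproduce, and comparing them with the actual upgrade factor shows how far ahead of simple transistor scaling the new system is. A small Python check (the 1.5-year doubling time is the slide's assumption):

```python
# Moore's-law growth factor with an assumed 1.5-year doubling time,
# compared against the upgrade factors quoted on the slide.
def moore_factor(years, doubling_time=1.5):
    return 2 ** (years / doubling_time)

print(f"4 years: x{moore_factor(4):.1f}")   # ~6.3
print(f"5 years: x{moore_factor(5):.1f}")   # ~10.1

# Actual jump in computing power over the 5-year 2001 system:
print(f"actual: x{42500 / 1250:.0f}")       # x34, well beyond Moore's law
```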

  12. Additional (Belle's) Resources
     We now operate a high-performance computer system, but we didn't suddenly switch to the "less expensive" components; we have been testing such systems for several years:
     ● Linux-based PC clusters
     ● S-ATA disk based RAID drives (20 units / 20 TB)
     ● S-AIT tape drives
     These resources have been essential for Belle (production/analysis):
     ● Belle's own: 934 CPUs, 350 TB disks, 1.5 PB tapes
     ● New B computer (for comparison): 2,280 CPUs, 1,000 TB disks, 3.5 PB tapes

  13. Overview of the New B Computer
     [Diagram: workgroup servers (some reserved for Grid), storage, on-line reconstruction farm, and computing servers.]

  14. Computing Servers
     ● DELL PowerEdge 1855: Xeon 3.6 GHz × 2, memory 1 GB; made in Taiwan [Quanta]
     ● WG: 80 servers (for login), Linux (RHEL)
     ● CS: 1,128 servers, Linux (CentOS)
     ● Total: 45,662 SPECint2000 rate, equivalent to 8.7 THz
     ● 1 enclosure = 10 nodes / 7U space; 1 rack = 50 nodes
     ● CPU will be increased by ×2.5 (i.e. to 110,000 SPECint2000 rate) in 2009.

  15. Storage System (Disk)
     ● Total 1 PB with 42 file servers (1.5 PB in 2009)
     ● SATA-II 500 GB disks × ~2,000 (~1.8 failures/day?)
     ● 3 types of RAID (to avoid problems):
       - SystemWorks MASTER RAID B1230: 16 drives / 3U / 8 TB (made in Taiwan)
       - ADTX ArrayMasStor LP: 15 drives / 3U / 7.5 TB
       - Nexan SATA Beast: 42 drives / 4U / 21 TB
     ● HSM = 370 TB, non-HSM = 630 TB
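The "~1.8 failures/day?" estimate can be sanity-checked from the drive count. A short Python sketch; the annualized failure rates (AFR) below are illustrative assumptions, not measured Belle numbers:

```python
# Rough expected disk-failure rate for the ~2000-drive, 1 PB store.
# AFR values here are assumptions for illustration: the slide's
# pessimistic "~1.8/day" corresponds to an AFR of roughly 33%,
# while a few-percent AFR gives well under one failure per day.
n_disks = 2000  # ~2000 x 500 GB SATA-II drives

def failures_per_day(afr):
    """Expected failures/day given an annualized failure rate."""
    return n_disks * afr / 365.0

for afr in (0.03, 0.10, 0.33):
    print(f"AFR {afr:.0%}: {failures_per_day(afr):.2f} failures/day")
# AFR 3%: 0.16, AFR 10%: 0.55, AFR 33%: 1.81
```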

  16. Storage System (Tape)
     ● HSM: PetaSite (SONY): 3.5 PB + 60 drives + 13 servers; SAIT, 500 GB/volume, 30 MB/s per drive; PetaServe
     ● Backup: 90 TB + 12 drives + 3 servers; LTO3, 400 GB/volume; NetVault

  17. Usage of the B Computer (all the numbers are for 500 fb⁻¹)
     ● Online reconstruction → raw data, ~1 PB (HSM)
     ● DST production (2.5 THz, to finish in 6 months) → "DST" data (HSM)
     ● MC farm production (2 THz, to finish in 2 months) → MC
     ● "MDST" data (four-vectors, PID info, etc.): hadron 120 TB + others (non-HSM), used for users' analyses
     ● The location of the data files is managed by a postgres database.
     ● The new system has sufficient CPU and storage resources (at least for now).
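A file-location catalogue like the PostgreSQL database mentioned above might look roughly like the following. This is a hypothetical sketch: the table name, columns, and example file name are all invented for illustration (and SQLite stands in for PostgreSQL so the snippet is self-contained); Belle's actual schema is not described in the talk.

```python
# Hypothetical file-location catalogue; schema and names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE file_catalog (
        lfn      TEXT PRIMARY KEY,   -- logical file name
        server   TEXT NOT NULL,      -- file server holding the copy
        path     TEXT NOT NULL,      -- physical path on that server
        storage  TEXT NOT NULL       -- 'HSM' or 'non-HSM'
    )""")
conn.execute("INSERT INTO file_catalog VALUES (?, ?, ?, ?)",
             ("mdst/hadron/e000043r000001.mdst", "fs17",
              "/data17/mdst/e000043r000001.mdst", "non-HSM"))

# An analysis job would resolve a logical name to a physical location:
server, path = conn.execute(
    "SELECT server, path FROM file_catalog WHERE lfn = ?",
    ("mdst/hadron/e000043r000001.mdst",)).fetchone()
print(server, path)
```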

  18. Usage of the B Computers
     ● Workgroup servers: 80 servers (user login, ~5 persons/server)
     ● Computing Servers (CS): ~1,200 servers in 3 LSF (batch system) clusters (Cluster1, Cluster2, Cluster3)
     ● Workfile servers: 16 servers, 80 TB (for users' home dirs)
     ● Storage: 3.5 PB HSM (e.g. raw data) and 1 PB non-HSM (e.g. hadron data)
     ● Data are transferred from the storage servers to the CS using a Belle home-grown simple TCP/socket application.
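The "simple TCP/socket" transfer tool is not specified further in the talk; as a stand-in, a minimal push-style transfer over a raw socket could look like this (an illustrative sketch, not Belle's actual protocol or code; host, port, and payload are made up):

```python
# Illustrative sketch of a bare TCP file push, in the spirit of the
# home-grown transfer tool mentioned on the slide. Not Belle code.
import socket
import threading

def serve_file(data: bytes, port: int, ready: threading.Event):
    """Send `data` to the first client, then close (EOF marks the end)."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        ready.set()                 # server is now accepting connections
        conn, _ = srv.accept()
        with conn:
            conn.sendall(data)

def fetch_file(port: int) -> bytes:
    """Connect and read until the server closes the connection."""
    chunks = []
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        while chunk := cli.recv(65536):
            chunks.append(chunk)
    return b"".join(chunks)

payload = b"event record " * 1000
ready = threading.Event()
t = threading.Thread(target=serve_file, args=(payload, 9099, ready))
t.start()
ready.wait()
received = fetch_file(9099)
t.join()
assert received == payload
```

Closing the connection to signal end-of-file keeps the protocol trivial; a real tool would add length headers, checksums, and retries.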

  19. Status and Plan
     Still at the stage of setting up the environment:
     ● Transferring existing data from the old system.
     ● User environment not prepared yet.
     ● Full operation ~ summer.
     Grid activity at Belle:
     ● With help from the KEK Computing Research Center, we have applied for a Belle VO.
     ● On the KEK pre-production LCG site, we have successfully run the Belle simulation.
     ● Several servers in the new B computer are reserved for Grid studies, but are not used yet.
     ● Remote institutes will benefit the most; some (Australia, Taiwan, Nagoya...) are already involved.

  20. Computing at Nagoya Univ.
     Nagoya is the institute with the largest computing resources for Belle (except KEK):
     ● 900 GHz equivalent Linux PC clusters
     ● >130 TB RAID disks
     ● 1 Gbps networks
     Newly introduced (Jan 2006):
     ● 270 GHz equivalent Linux PC clusters
     ● 400 TB Virtual Disk systems (Fujitsu VD800 + LT270): LTO tapes with cache disk (4.5 TB)
     ● 1 Gbps + a few Gbps networks
     ● Direct connection to the KEK B computer (1 Gbps)
     ● Batch queue system using Sun Grid Engine

  21. Computing at Nagoya Univ.
     Targets:
     ● Analysis of Belle data
     ● Monte Carlo production for Belle
     ● Development for the new detector
     Typical usage in Belle:
     ● Read data on the file servers from many PCs
     ● Write output data to the file servers
     Efficient data management system:
     ● User- and manager-friendly
     ● Copy data without any changes for the user
     ● Fast recovery from disk faults
