
Computing at MPP

Arthur Erhardt

MPP Computing Commission + Fachabteilung IT MPP Project Review, 17.12.2012


17.12.2012 Project Review 2012 - A. Erhardt - Computing 2

C/N group → Fachabteilung IT

  • C/N group upgraded to Fachabteilung IT

Personnel Fachabteilung IT

Head of group: Erhardt, A.

Areas: Linux / MS / SW / LAN / HW / Pr / Tel

Erhardt, A.: x x x x x x x
Leupold, U.: x x x x x x
Krämer, M.: x x
Pan, Y.: (x) x
Salihagic, D.: x x x
Vidal, M.: x(AIX) x x
Krebs, K.: x x


Hardware Overview

  • Central Servers

– 29 fileservers with ~90 TB
– mail, web, DNS, accounts, backup, printer, …
– ~730 cores in condor for batch (BCs and PCs)

  • Experimental and engineering groups

– ~400 PCs, 2/3 Linux, 1/3 MS

  • Theory group

– ~ 80 PCs (1 DEC Alpha)
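The ~730-core condor batch pool above is driven by submit description files; a minimal sketch (the executable and file names are placeholders, not taken from the slides):

```
# Minimal HTCondor submit description (sketch; all names are placeholders)
universe     = vanilla
executable   = my_analysis.sh
arguments    = input.dat
output       = job.out
error        = job.err
log          = job.log
request_cpus = 1
queue
```

Saved as e.g. job.sub, this would be submitted with condor_submit job.sub and monitored with condor_q.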


LAN


Software overview

  • OS

– Linux: Ubuntu 10.04/12.04 (ATLAS/BELLE/ILC), OpenSUSE 12.2 (Theory), RHEL & clones
– AIX, Solaris (Electronics workshop)
– MacOSX, VMS (without C/N infrastructure)
– MS Windows (Admin., Labs, h1win, thwin)

  • Applications / libraries

– Mathematica, Maple, Portland & Intel compilers, IDL, NAG, Matlab, …
– OSS with C/N group & mppcc


Software Applications

  • Commercial

– Oracle 10, Infopark Fiona CMS, Tivoli backup, Wilma (WLAN manager), Gleitzeitserver, LogInventory, HP ProCurve Manager, …

  • OSS

– Indico, MySQL, ROOT, CERNLIB, phpBB, TWiki, Asterisk, eGroupware, KVM, …


New Phone System

  • Cisco/Asterisk VoIP system

– Phone system in production service
– Works agreement for the phone system (Betriebsvereinbarung Telefonanlage) in place
– TLS-aware phone firmware installed
– Templates for different phone use cases developed
– Maintenance contract with BayCom


Phone System Observations

  • Power outage revealed insufficient UPS capacity → UPS for 30+ min of service after power loss installed (needs testing)
  • Fax over VoIP not yet stable

IT Security

  • MPP email infrastructure redesigned:

– No NFS mounts from other desktops are needed on the mail/imap server anymore → significant increase in service reliability

– No NFS export of mail storage into MPP after a security incident on the Cisco IRS in 2012


C/N Plans 2013

  • Replacement of old WLAN APs
  • Phase out all 100 Mbps Ethernet in scientific environments
  • NFS+NIS → Krb5/LDAP+AFS: needs very thorough evaluation of the MPP situation and future requirements

  • Finish Linux OS upgrades
  • Tivoli Backup**2 to RZG
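For reference, the login sequence on a Krb5+AFS setup like the one under evaluation typically looks as follows (the user and realm names are placeholders, not taken from the slides):

```shell
# Typical Kerberos + AFS session setup (sketch; user/realm are placeholders)
kinit user@EXAMPLE.REALM   # obtain a Kerberos ticket
aklog                      # derive an AFS token from the ticket
klist                      # verify the Kerberos ticket
tokens                     # verify the AFS token
```

The evaluation mentioned above would have to cover how this flow replaces the current NIS logins and NFS home directories.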

MPP computing commission

  • Subcommittee of IA

– members: Abt, Bethke, Erhardt, Hahn, Kluth, Leupold, Reimann, Simon, Stonjek, Wagner

– meetings are public

  • Mandate

– oversight of C/N operations
– medium- and long-term planning

  • Please consult before buying hardware or requiring services!


Rahmenverträge

  • MPG has procurement contracts for IT

– hardware, LAN, software, services
– order without tendering procedure
– competitive prices (still check F&L or street prices)
– FSC, HP, IBM, Acer, Dell, Lenovo, Apple, …

  • Experience

– good for standard orders
– companies respond well


Computing at RZG (1)

  • Operation

– main users: ATLAS (WLCG), MAGIC
– open for all MPP groups (10% share)
– connection via 10GbE link

  • Usage

– need an RZG account (via web form)
– direct access: “ssh mppui[1,3].t2.rzg.mpg.de”
– direct access to dCache storage (dccp, dcap)
– direct access to tape storage via afs
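The access paths above combine into a short session; a sketch using the login hostname from the slides (the dCache door hostname and file path are placeholders):

```shell
# Interactive login and dCache copy at RZG (sketch; door host and paths
# are placeholders, only the login node name comes from the slides)
ssh mppui1.t2.rzg.mpg.de
dccp dcap://dcache-door.example.org/pnfs/example/file.root /tmp/file.root
```

Tape-resident data would instead be reached through the afs paths mentioned above.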


Computing at RZG (2)

  • Current status

– ~3000 cores + ~1400 virtual (HT) cores, 2 GiB/core, ~2200 TB disk
– new Intel arch: HT cores almost as good as real ones
– ATLAS Tier-2/3 and MDT calibration
– MAGIC analysis centre
– others: Theory, GERDA, ILC, BELLE(II)

  • AIX (xlf) on IBM POWER @RZG

Computing at RZG: Usage 2012


Computing at RZG: Usage 2011


Computing at RZG (5)

  • Software:

– SLC5, WLCG, dCache, afs, SGE
– access to tape storage (via afs)
– various gcc versions, other dev. tools
– experiment software

  • Functions

– WLCG: send/receive grid jobs and data
– local SGE batch jobs
– work interactively (mppui[1-5].t2.rzg.mpg.de)
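A local SGE batch job as mentioned above is submitted through a small job script; a minimal sketch (job name, runtime limit and payload script are placeholders, not taken from the slides):

```shell
#!/bin/sh
# Minimal SGE job script (sketch; all names/values are placeholders)
#$ -N mpp_job            # job name
#$ -cwd                  # run in the submission directory
#$ -l h_rt=01:00:00      # requested wall-clock limit
#$ -j y                  # merge stdout and stderr
./my_analysis.sh
```

Submitted with qsub job.sh and monitored with qstat from one of the mppui login nodes.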


Summary and Outlook

  • MPP IT landscape continuously changing

– rapid changes in hard- and software
– keeps C/N staff busy

  • Mass data storage at MPP growing fast

– >100 TB. CPU power? Management?
– use RZG cluster + storage (10GbE to MPP)

  • IT security

– needs a well-structured and -managed setup
– requires some protocol upgrades