SLIDE 1

May 2, 2006 P.S.Dhekne, BARC Talk at ISGS 2006 Taiwan 1

Status of Grid Computing in India

P.S.Dhekne, India Coordinator LCG

SLIDE 2

Mega Science Projects

– Today’s science is based on worldwide collaborations that share computation, data and equipment
– India is participating in the LHC, STAR and PHENIX experiments
– Researchers need more accurate & precise solutions to their problems in the shortest possible time
– The related computational problems are so complex that they cannot be solved even at the most powerful (single) computing centre in the world

SLIDE 3

Why High Performance Computing?

  • Mega-Science projects need

– Huge computations
– Good collaborative tools
– Reliable, robust, fault-tolerant systems

  • Users want a cost-effective solution and computing power good enough for 4-5 runs a day

  • Grid Computing may satisfy users’ demands
SLIDE 4

Making Information Technology (IT) as easy to use as plugging into an electrical or TV socket

  • Resource sharing and coordinated problem solving in dynamic, multiple R&D units: millions of users, thousands of organizations, many countries


SLIDE 5

New Opportunities

“Resource sharing and coordinated problem solving in dynamic, multiple R&D units, virtual organizations”

SLIDE 6

A Laboratory without walls

Collaborative Tools

  • Chat
  • Email
  • Video Conferencing
  • VR
  • White Boards
  • Web Portal
  • Libraries

With improved Web Services (SOAP, WSDL, UDDI, WSFL) and COM technology it is easy to develop loosely coupled distributed applications
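As a minimal sketch of what "loosely coupled" means here: a SOAP client only has to build an XML envelope and POST it over HTTP; it needs no compiled stubs for the remote service. The operation name, parameter and namespace below are hypothetical, purely for illustration.

```python
# Sketch of a SOAP 1.1 request envelope built with only the standard
# library. The service namespace and the 'getJobStatus' operation are
# illustrative assumptions, not a real grid service API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation: str, params: dict, ns: str) -> bytes:
    """Serialize a SOAP envelope whose Body holds one operation element."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        child = ET.SubElement(op, f"{{{ns}}}{name}")
        child.text = str(value)
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

# A WSDL for the service would describe the operation and its parameters;
# the envelope is then POSTed (e.g. with urllib.request) to the endpoint.
request = build_soap_request("getJobStatus", {"jobId": 42},
                             ns="http://example.org/grid")
print(request.decode("utf-8"))
```

Because the contract lives in the WSDL rather than in shared binaries, client and server can evolve independently, which is the loose coupling the slide refers to.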

SLIDE 7

LHC Computing

  • LHC (Large Hadron Collider) will begin taking data in 2006-2007 at CERN, Geneva.

  • Data rates per experiment of >100 Mbytes/sec.
  • >1 Pbytes/year of storage for raw data per experiment.
  • World-wide collaborations and analysis.

– Desirable to share computing and analysis throughout the world
– The computing requirement is so huge that it can’t be met by a single Computing Centre
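The two figures on this slide are consistent with each other, as a quick back-of-the-envelope check shows. The 1e7 live-seconds per year is the usual accelerator-physics assumption, not a number stated on the slide.

```python
# Sanity check: >100 Mbytes/sec per experiment, over roughly 1e7 seconds
# of beam time per year (a standard assumption), gives about a petabyte
# of raw data per experiment per year, matching the slide.
rate_bytes_per_s = 100e6           # >100 Mbytes/sec per experiment
beam_seconds_per_year = 1e7        # assumed LHC live time per year
raw_data = rate_bytes_per_s * beam_seconds_per_year
print(raw_data / 1e15, "PB/year")  # -> 1.0 PB/year
```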

SLIDE 8

LHC requirements

  • The computing challenges in LHC lie in the real-time storage of the huge amount of data, the reconstruction of tracks of particles released during collisions, and computational simulation for the physics experiments.

  • The performance required for the most rudimentary simulations is about 20 Teraflop sustained speed, which is equivalent to 40,000 personal computers.

  • Storage requirements are about a million times the storage presently available on desktop personal computers.
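The equivalence between 20 Teraflop sustained and 40,000 PCs implies a per-PC figure worth making explicit, since it is sustained rather than peak performance:

```python
# The slide's equivalence implies each PC sustains about half a gigaflop,
# a plausible sustained (not peak) figure for a circa-2006 desktop.
required_flops = 20e12   # 20 Teraflop sustained
num_pcs = 40_000
per_pc = required_flops / num_pcs
print(per_pc / 1e9, "GFlops sustained per PC")  # -> 0.5
```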

SLIDE 9

Data Grids for HEP

[Diagram: the multi-tier LCG data-grid model for HEP, with data flowing from Tier 0 down through Tier 1, Tier 2, Tier 3 and Tier 4 sites over links of decreasing bandwidth]

SLIDE 10

International Collaboration

  • India became a CERN observer state in 2002
  • Large Hadron Collider (LHC) Grid software development: DAE-CERN Protocol agreement on computing for LHC data analysis, a data grid called LCG; ~10 people working in India for 5 years, amounting to 7.5 MCHF

  • BARC-developed software is deployed at LCG, CERN:
    – Co-relation Engine, fabric management
    – Problem Tracking System (SHIVA)
    – Grid Operations (GRIDVIEW)
    – Quattor enhancements, a system administration toolkit
SLIDE 11

[Network diagram: CERN (Tier 0/1 Centre) connected over the Internet to the Indian centres TIFR, BARC, CAT and VECC (Tier 2/3 centres, CMS and ALICE users) with links ranging from 10 Mbps to 622 Mbps]

  • Development of LCG software agreement signed in 2002: 10 DAE people working in India for 5 years (7.5 MCHF)
  • Tier 2/3 Centres in India
  • Software developed by BARC for CERN: SHIVA, GRIDVIEW, Fabric Mgmt., Corelation Engine

DAE/DST/ERNET: Geant; Garuda: C-DAC national Grid

SLIDE 12

ERNET – GEANT Connectivity

  • A 45 Mbps IPLC based connectivity is planned between ERNET and GEANT.
  • It is a program funded by the European Union through DANTE and the Govt. of India through ERNET, TIFR & DST.
  • 10 research institutes/universities will use the link for collaborative research in High Energy Physics.
  • We would run IPv6 on this link.
SLIDE 13

ERNET Connectivity with European Grid

Universities / R&D institutions proposed to be connected in the 1st phase (ERNET PoPs): Univ. of Jammu, Panjab Univ. Chandigarh, Univ. of Raj. Jaipur, TIFR and BARC (Mumbai), IUCAA Pune, CAT Indore, IISC Bangalore, IIT Chennai, Univ. of Hyderabad, IOP Bhubaneshwar, IIT Guwahati, AMU, VECC (Kolkata), IIT Kanpur, (DU) Delhi

622 Mbps IPLC

  • Multi-Gigabit pan-European Research Network
  • Connecting 32 European Countries and 28 NRENs
  • Backbone capacity in the range of 622 Mb/s-10 Gb/s

[Map: GEANT member countries (AT Austria, BE Belgium, CH Switzerland, CY Cyprus, CZ Czech Republic, DE Germany, DK Denmark, EE Estonia, ES Spain, FI Finland, FR France, GR Greece, HR Croatia, HU Hungary, IE Ireland, IL Israel, IS Iceland, IT Italy, LT Lithuania, LU Luxembourg, LV Latvia, MT Malta, NL Netherlands, NO Norway, PL Poland, PT Portugal, RO Romania, SE Sweden, SI Slovenia, SK Slovakia, TR Turkey, UK United Kingdom) with ERNET backbone links and additional proposed links]

SLIDE 14

GARUDA

  • The Department of Information Technology (DIT), Govt. of India, has funded C-DAC to deploy a computational grid named GARUDA as a Proof of Concept project.
  • It will connect 45 institutes in 17 cities in the country at 10/100 Mbps bandwidth.

SLIDE 15

Features of Fabric

  • It will be a Layer 3 MPLS VPN connecting 45 institutes in 17 cities.
  • 2.4 Gbps bandwidth will be dedicated on the SP’s backbone.
  • The last mile connectivity will be Fast Ethernet/Ethernet delivered over fiber to each institute.
  • The last mile from the Service Provider (SP)’s POP to each node will be on a ring.

SLIDE 16

Tier-II Centre for ALICE (Update on VECC and SINP Activities)

  • New domain name “tier2-kol.res.in” has been registered and work is going on
  • CONDOR batch system is running with one server and eight clients under the QUATTOR cluster management environment
  • AliROOT, GEANT and other production related packages are tested successfully in both ways
  • ALICE Environment (AliEn) at present NOT running
  • Data Grid has been registered at cern.ch
  • Linked with CERN via the available 2 Mbps Internet link
  • 4 Mbps bandwidth is already installed and commissioned

SLIDE 17

Experience with Alice-Grid & PDC’04

The following H/W and S/W infrastructures were used:

  • 8-node cluster consisting of dual Xeon CPUs & 400 GB disk space
  • PBS batch system with one management server and eight clients under the OSCAR cluster management environment
  • ALICE Environment (AliEn) was installed
  • Data Grid has been registered at cern.ch
  • AliROOT, GEANT and other production related packages are tested successfully in both ways
  • Linked with CERN via the available 2 Mbps Internet link and participated in PDC’04
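Jobs on a PBS cluster like the one described are submitted as shell scripts carrying `#PBS` directives. The sketch below generates a script of the typical shape; the queue name and the AliRoot command are illustrative assumptions, not the actual production values.

```python
# Minimal sketch of a PBS job script for an 8-node dual-Xeon cluster.
# The 'workq' queue and the aliroot invocation are hypothetical.
def pbs_job_script(job_name: str, nodes: int, ppn: int, command: str) -> str:
    """Build a PBS submission script requesting nodes*ppn processors."""
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -N {job_name}",
        f"#PBS -l nodes={nodes}:ppn={ppn}",  # dual-Xeon nodes -> ppn=2
        "#PBS -q workq",                     # assumed queue name
        "cd $PBS_O_WORKDIR",                 # run from the submit directory
        command,
    ])

script = pbs_job_script("aliroot-sim", nodes=8, ppn=2,
                        command="aliroot -b -q sim.C")
print(script)
```

The resulting file would be handed to `qsub`; the management server then schedules it across the eight clients.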

SLIDE 18

Present Status

[Diagram: Tier-2@Kolkata, with the VECC cluster and the SINP cluster (both High Availability, managed with Quattor) behind a router/firewall, a management node plus a stand-by management node on a Gigabit network switch, connected to CERN through the Internet cloud over a 4 Mbps link]

SLIDE 19

[Diagram: participating sites interconnected over 4 Mbps links]

  • Resource sharing and coordinated problem solving in dynamic, multiple R&D units

SLIDE 20


Give alert for large earthquakes and tsunami

SLIDE 21

  • IERMON Stations in India

SLIDE 22

  • Reactor accidents: very very low probability (1 in 10^6), but high consequences

SLIDE 23

Campus Grid at BARC

[Diagram: four grid-enabled fabric clusters joined by a 100 Mbps fiber Grid-Enabled Visual Area Network, with HPC, a visual data server, web services, AFS, PBS & tools, and Globus GRAM/GSI middleware]

SLIDE 24

Capabilities in today’s Applications

  • Pre and Post Processing
  • Observational & processed data are growing
  • Data assimilation, analysis of global as well as local needs

  • Increased Resolution Requirements
SLIDE 25

INSTALLATIONS

  • NCMRWF, Noida
  • VSSC, Trivandrum
  • CAT, Indore
  • IGCAR, Kalpakkam
  • NPC, Mumbai
  • IIT, Kanpur
  • IIT, Powai
  • ADA, Bangalore
  • IOP, Bhubaneshwar
  • UDCT, Mumbai
  • IPR, Gandhinagar

For daily weather forecasting: NCMRWF, Noida
For CFD analysis: VSSC Trivandrum, LCA Bangalore

SLIDE 26

20 MPixel Multi-tiled Display System

[Diagram: graphics servers feed graphics data to a 4x4 array of 16 display tiles, assembled into a single image for the user]

Visual Data Parallelization
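Visual data parallelization on such a wall means each graphics server renders only its own tile's sub-rectangle of the full frame. A sketch of the tile geometry, assuming a 4x4 wall with ~1.2 MP tiles (an assumed per-tile resolution that yields roughly 20 MP in total):

```python
# Partition a large frame across a 4x4 tiled display: each of the 16
# graphics servers owns one sub-rectangle. The 1280x960 tile size is an
# illustrative assumption giving ~20 MPixel overall.
def tile_rect(index: int, cols: int, rows: int, tile_w: int, tile_h: int):
    """Return (x, y, w, h) of tile `index` in row-major order."""
    col, row = index % cols, index // cols
    return (col * tile_w, row * tile_h, tile_w, tile_h)

tiles = [tile_rect(i, 4, 4, 1280, 960) for i in range(16)]
total_pixels = sum(w * h for _, _, w, h in tiles)
print(total_pixels)  # 16 * 1280 * 960 = 19_660_800, about 20 MPixel
```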

SLIDE 27

Multi-Computer HPC Model

Multiprocessor Supercomputing Cluster

Multiple Graphics HW

[Diagram: front-end, pre-processing, solver and post-processing stages mapped onto commodity processors]

SLIDE 28

Current Status: ERNET

  • At present it is possible to get a 45 Mbps IPLC line via Geant at the same cost
  • We want a 622 Mbps link to take part in SC4 testing from June 2006
  • Proposal to go for 1 Gb/s in 2006, 2.5 Gb/s in 2007 and 10 Gb/s in 2008
  • Cost is a problem
  • More Grid projects should become operational
SLIDE 29

Other Grids in India

  • EGEE Grid with ERNET & C-DAC
  • Coordination with Geant for education research
  • DAE/DST/ERNET MOU for Tier II LHC Grid
  • Our MOU with INFN, Italy to set up a Grid research hub
  • The Department of Biotechnology is in Bio-Grid
  • Weather Grid proposed by IMD
SLIDE 30

Conclusions

  • Grids allow the scientific community to interact in real-time with modeled phenomena and to steer distributed simulations.
  • Grid collaboratories can successfully negotiate access to distributed yet highly expensive scientific instruments such as telescopes, SCM, bio-microscopes and telemedicine facilities.
  • Grid computing reaches out from high-energy physics to governance and helps to aggregate distributed computing resources.

SLIDE 31