Introduction to Grid Computing
Tokyo Institute of Technology, Global Scientific Information and Computing Center /
Department of Mathematical and Computing Sciences / JST
Satoshi Matsuoka
matsu@is.titech.ac.jp
(Parallel) supercomputers are very expensive
(hundreds of millions to tens of billions of yen per machine)
(in recent years Unix-based machines as well)
Structural analysis, fluid dynamics, other PDEs, QCD
Sparse CG, Monte Carlo, etc.
Makes HPC far cheaper
Achieving world records on large-scale problems, solving open problems
Example: the NUG30 problem in OR
Applying complex algorithms to real applications in science and engineering
Economic Simulation, Biochemistry, Architecture, Control Theory, Architecture Simulation, Planning
E-mail, CSCW, the Web, Gopher, ftp archives, netnews, Applets, etc.
Fast Ethernet / ATM / Myrinet; MPI_SEND(...), MPI_RECEIVE(...), MPI_ISEND(...); multiple network interface boards; large-capacity disks (a minimal MPI example follows)
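As a rough illustration of the message-passing style used on such clusters, here is a minimal MPI ping-pong in C; this is a generic sketch, not code from the slides (the buffer size and message tags are arbitrary):

/* mpi_pingpong.c: rank 0 sends a small buffer to rank 1, which echoes it back.
   Build with: mpicc mpi_pingpong.c -o pingpong ; run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data[4] = {1, 2, 3, 4};
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);       /* blocking send to rank 1 */
        MPI_Recv(data, 4, MPI_INT, 1, 1, MPI_COMM_WORLD, &st);  /* wait for the echo */
        printf("echoed: %d %d %d %d\n", data[0], data[1], data[2], data[3]);
    } else if (rank == 1) {
        MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &st);
        MPI_Send(data, 4, MPI_INT, 0, 1, MPI_COMM_WORLD);       /* echo back to rank 0 */
    }
    MPI_Finalize();
    return 0;
}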
DB interface, UI interface
High-Performance Distributed Computing ("ultra-wide-area high-performance computing")
Metacomputing [Smarr 87], "The GRID" [Foster et al. 98]
A layer above the existing software infrastructure: providing and standardizing services and protocols
(Grid book picture here)
Computational physics, large-scale parallel computing, engineering
On-demand creation of Virtual Computing Systems; a medium for Virtual Organizations
Web: Uniform access to HTML documents Grid: Flexible, high-perf access to all significant resources
Sensor nets, data archives, computers, software catalogs, colleagues
NUG30 (quadratic assignment) problem solved on June 16, 2000
,26,17,30,6,20,19,8,18,7,27,12,11,23
Recast the branch-and-bound problem into a master-worker structure
days (peak 1009 processors), using parallel computers, workstations, and clusters
MetaNEOS: Argonne, Northwestern, Wisconsin
Provides idle CPU resources on the Grid
Uses Condor to run large-scale master-worker, branch-and-bound computations (a generic master-worker sketch follows)
Fault tolerance and recovery are key
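The master-worker pattern named here is simple in outline; below is a generic skeleton written with plain MPI purely for illustration. It is not Condor's API, it omits the fault tolerance and recovery the slide calls key, and solve_subproblem is a made-up placeholder:

/* Generic master-worker skeleton (illustrative only; not Condor).
   The master hands out task indices; each worker returns one double. */
#include <mpi.h>

static double solve_subproblem(int id) { return id * 0.5; }  /* placeholder "work" */

int main(int argc, char **argv)
{
    int rank, size, ntasks = 1000;
    MPI_Status st;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                                  /* master */
        int sent = 0, done = 0, stop = -1, w;
        double r;
        for (w = 1; w < size && sent < ntasks; w++) { /* prime every worker */
            MPI_Send(&sent, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            sent++;
        }
        while (done < sent) {
            MPI_Recv(&r, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &st);
            done++;
            if (sent < ntasks) {                      /* refill the now-idle worker */
                MPI_Send(&sent, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
                sent++;
            }
        }
        for (w = 1; w < size; w++)                    /* tell every worker to stop */
            MPI_Send(&stop, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
    } else {                                          /* worker */
        int task;
        double r;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &st);
            if (task < 0) break;
            r = solve_subproblem(task);
            MPI_Send(&r, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}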
telnet, rlogin, etc.; distributed OS technology; Message Passing - PVM, MPI; WWW technology - HTTP/HTML/CGI; ORBs (Object Request Brokers) - CORBA, DCOM, Java RMI; agent technologies (Plangent, Aglets, Voyager, etc.); and so on
(As they are) these cannot serve as the Grid (global computing) infrastructure; usable in parts, but...
Unspecified users, mobile code
Heterogeneity
Languages, OSes, hardware, administrative policies
High performance
HPC, HTC; coping with high bandwidth and high latency
Site autonomy
No root privileges
Scalability
To world scale
Resource allocation
Computing resources, remote sensors, etc.
1980s: distributed computing
Early 1990s: gigabit testbeds
Mainly networking research
I-Way 1995
Mainly application feasibility
Alliance (NCSA) Virtual Machine Room; PACIs (NCSA/SDSC NSF National Technology Grid) 1998~; NASA Information Power Grid 1999~; ASCI DISCOM 1999~; GriPhyN (Grid Physics Network), PPDG 2000~; eGrid (European Grid), (EU/CERN) DataGrid 2000~; ApGrid (Asia-Pacific Grid) 2000~; NCSA-SDSC Distributed Terascale System 2001~; IDVGL (Distributed Virtual Grid Lab) 2001~
NASA Information Power Grid NSF National Technology Grid NPACI (SDSC), Alliance (NCSA)
Mainly universities: NCSA, SDSC, and 30+ universities
Large-scale projects of 5 to 10 years
50% of SDSC's resources go to NPACI
CS + applications
Building the real thing
NGI - vBNS (Internet2), Abilene (OC192)
San Diego Supercomputing Center
: Globus + Legion + NWS, etc.
neural science: Mark Ellisman (UCSD)
earth systems: Bernard Minster (SIO)
molecular structure: Russ Altman (Stanford)
engineering: Tinsley Oden (UTexas)
HPSS storage panel with Reagan Moore of SDSC, one of NPACI's technical VIPs
computing, remote interaction
Network for Earthquake Engineering Simulation (NEES)
Integrated instrumentation,
collaboration, simulation
ATLAS, CMS, LIGO, SDSS World-wide distributed analysis of
Petascale data
based collaboration
Toolkits, Framework
Globus, Legion, AppLes
Message Passing
MPICH-G2
Distributed collaboration
CAVERNsoft, Access Grid
High-throughput computing
Condor-G, Nimrod-G
Distributed data
management & analysis
Data Grid toolkits
GridRPC
Ninf, Netsolve, Nimrod
Desktop access to Grid
resources
Commodity Grid Toolkits
(CoG Kits)
Performance Monitoring
NWS
United Devices Entropia Platform Computing Parabon
Remote access, remote monitoring, information services, fault detection, resource management, collaboration tools, data management tools, distributed simulation, ... over the net
Argonne National Lab/USC-ISI
www.globus.org The most popular Grid “Toolkit”
Developed as a key enabling mechanism for the Grid
Grid Security Infrastructure (GSI)
Uniform authentication & authorization mechanisms in a multi-institutional setting
Single sign-on, delegation, identity mapping Public key technology, SSL, X.509, GSS-API
Used to construct Grid resource managers that provide
secure remote access to
Computers: GRAM server (HTTP), secure shell Storage: storage resource managers, GSIFTP Networks: differentiated service mechanisms
Globus project Co-PI: Carl Kesselman
tomographic reconstruction real-time collection wide-area dissemination desktop & VR clients with shared controls
Advanced Photon Source
archival storage; DOE X-ray grand challenge: ANL, USC/ISI, NIST, U.Chicago
→ the GGF
Chair: Charlie Catlett (Argonne National Lab); 13 Working Groups; 3 GGF meetings per year
From Japan: Muraoka (Advisory), Sekiguchi, Matsuoka (Steering)
Over 400 participants
Perhaps held in Japan in 2002?
A critical mass of participants from Asia-Pacific is needed
A high-performance RPC system for the Grid, independent of machine type and OS
Fortran, C/C++, Java, Mathematica, COM (Excel)
From the user's viewpoint: an ordinary library call
Ninf RPC IDL & protocol: dynamic, and specialized for numerical libraries
Automatic resource allocation
The MetaServer assigns each computation to an appropriate Ninf server
Support for parallel processing
Client side: task parallelism, transactions
Server side: data parallelism (and task parallelism)
Feed data from the WWW and distributed DBs directly into computations
NinfDB, WebAccess, Matrix Workshop
Security for both intra-organization and unspecified users
From campus-wide to global computing
www.ninf.org
AIST / University of Tsukuba / materials research institute
Satoshi Sekiguchi (AIST)
Mitsuhisa Sato (Univ. of Tsukuba)
Hidemoto Nakada (AIST)
Hiromitsu Takagi (AIST)
Yoshio Tanaka (AIST)
Osamu Tatebe (AIST)
Tokyo Institute of Technology:
Satoshi Matsuoka
Kento Aida
Hirotaka Ogawa
Other students
Other collaborations: Kyoto University, etc.
Netsolve, NWS (Univ. of Tennessee)
Jack Dongarra, Rich Wolski
AppLeS (UCSD)
Fran Berman Henri Casanova
Globus
Ian Foster (Argonne) Carl Kesselman (USC/ISI) Etc.
Arguments are presented with a shared-memory image
Dynamic specification via IDL, inter-argument dependency analysis, etc.
Asynchronous calls, transactions
Security and other Grid services are automated
Ninf_call(FUNC_NAME, ....);
FUNC_NAME = ninf://HOST:PORT/ENTRY_NAME (see the example below)
C, C++, Fortran, Java, Lisp, COM, Mathematica, ...
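For example, the same library entry can be named either by its short entry name or by a fully qualified Ninf URL; the host name and port below are hypothetical, and the short form is presumably resolved through the MetaServer described earlier:

/* Both forms are accepted by Ninf_call; hostA and port 3000 are made up. */
Ninf_call("dmmul", n, A, B, C);                               /* short entry name */
Ninf_call("ninf://hostA.titech.ac.jp:3000/dmmul", n, A, B, C);  /* explicit server  */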
Ninf_call
double A[n][n],B[n][n],C[n][n];  /* Data Decl. */
dmmul(n,A,B,C);                  /* Call local function */
Ninf_call("dmmul",n,A,B,C);      /* Call Ninf Func */
“Ninfy” via IDL descriptions
Network Layer Condor Globus
Higher-level Grid Middleware e.g.
GFarm
Client
Client Side Server Side
Client, Server, Monitor/Client Proxy, Monitor/Server Proxy
MetaServer (Agent) Directory Service DB
Scheduler Monitor/ Probe
GridRPC
Throughput Measurement Load Measurement
Ninf_call("linpack", ...);
Program, Ninf Client Library, Ninf Client
Ninf Library (Ninf Executable) Ninf Library (Ninf Executable)
Exec
Mathematica, Jini (JiPANG)...
Ninf_call
double A[n][n],B[n][n],C[n][n];  /* Data Decl. */
dmmul(n,A,B,C);                  /* Call local function */
Ninf_call("dmmul",n,A,B,C);      /* Call Ninf Func */
“Gridify” via IDL descriptions
IDL information:
library function's name and its alias (Define)
arguments' access mode and data type (mode_in, out, inout, ...)
required library for the routine (Required)
computation order (CalcOrder)
source language (Calls)
Define dmmul(long mode_in int n, mode_in double A[n][n],
             mode_in double B[n][n], mode_out double C[n][n])
  "description"
  Required "libXXX.o"
  CalcOrder n^3
  Calls "C" dmmul(n,A,B,C);
No client stub routines (cf. CORBA; more like Java Jini)
No modification of the client program when the server's libs are updated
The client library stays relatively static
Client Program
Ninf Server
Ninf library program
Interface Request → Interface Info
Argument → Result
Client Library Stub Program
(1) Write a (Ninf) IDL for the library/app (a hypothetical example follows this list)
(2) Run the Ninf interface generator on the server, producing stub programs and a Makefile
(3) Compile the library program and link it with the Ninf stub, producing a Ninf executable
(4) Register Ninf executables with the Ninf server
(5) Your app/lib is now Gridified---Away you go!
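As a concrete instance of step (1), an IDL entry for a hypothetical user routine mc_pi (Monte Carlo estimation of pi) might mirror the dmmul example above; everything here, including the library name and cost estimate, is invented for illustration:

Define mc_pi(long mode_in int n, mode_out double pi[1])
  "Monte Carlo estimate of pi using n random samples"
  Required "libmcpi.o"
  CalcOrder n
  Calls "C" mc_pi(n, pi);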
/* Ninf version: asynchronous calls, then wait for all */
for (i = 0; i < PU; i++) {
    Ninf_call_async(buffer, i, NPP, ktmp+i, xsumtmp+i, ysumtmp+i, qtmp[i]);
}
Ninf_wait_all();

/* Ninf version: the same loop inside a transaction */
Ninf_transaction_begin();
for (i = 0; i < PU; i++) {
    Ninf_call(buffer, i, NPP, ktmp+i, xsumtmp+i, ysumtmp+i, qtmp[i]);
}
Ninf_transaction_end();

/* PVM version: explicit packing, sending, and unpacking of results */
if (ptid != PvmNoParent) {
    pvm_initsend(PvmDataDefault);
    pvm_pkint(&k, 1, 1);
    pvm_pkdouble(&xsum, 1, 1);
    pvm_pkdouble(&ysum, 1, 1);
    pvm_pkint(q, 10, 1);
    pvm_send(ptid, M_RES);
} else {
    for (i = 1; i < PU; i++) {
        pvm_recv(tids[i], M_RES);
        pvm_upkint(&ktmp, 1, 1);
        pvm_upkdouble(&xsumtmp, 1, 1);
        pvm_upkdouble(&ysumtmp, 1, 1);
        pvm_upkint(qtmp, 10, 1);
        ...
    }
}
Ninf EP PVM EP
Why define GridRPC/NES for Grids?
Are general-purpose ORBs such as CORBA sufficient?
Our paper "Are Global Computing Systems Useful?" [IEEE IPDPS 2000] compares qualitative usability as well as performance
Ease of writing and maintaining client programs
Ease of installing and managing the whole system
Performance for Grid-enabling a library/application
Gridified Numerical Libs (Gridified Scalapack, etc.) Gridifying Application Services as Portals (NetCFD)
Netsolve/AppLes: MCell (Neuroscience)
Ninf: DOS, N-cyclic polynomial satisfaction (Operations Research)
Netsolve: Various apps on SinRG project (Pipeline) Ninf: OR SCRM optimizer (Iterative + PS), BMI optimizer
(Branch&Bound)
Gfarm DataGrid Project (New version of Ninf/GridRPC,
Massively Data Intensive)
A parallelized CFD program is "Ninfied" on the Ninf server, providing an interface to the running parallel CFD program
Callback functions are used to drive the browser GUI
http://pdplab.trc.rwcp.or.jp/netCFD/
Presto I
64 Celeron 500MHz, 384MB/node
Linux + RWC SCore + our stuff
Semi-production, parallel OR algorithm on the Grid
Prospero (Presto II)
256 procs (64-node PIII-800 x2 SMP 640MB + 128-node Celeron-800 256MB), 2-trunked 100Base-T, 3TB storage
General-purpose cluster research, Grid simulation, Grid app. runs (incl. over the Pacific)
Presto III
Athlon 78 procs, 1.33GHz, >768MB, Myrinet 2K, 15TB storage
"Gfarm" prototype
Pom
Heterogeneous development cluster
12-node, PIII-500MHz x2 or Celeron 300 x2, etc.
Parakeet
Plug & play clustering
20 high-performance notebooks (600MHz Mobile Celeron)
Leverage commodity PCs, ~440 procs
Greater than Titech GSIC (CC) @ 1/50 cost
700 procs, 1.5 TeraFlops by 1-2Q2002
> 1.5 TeraFlops by 1Q 2002
TITECH Campus Grid Proposal (“Field of Dreams”)
On-campus Titech Grid users
(hundreds of GFLOPS, tens of TeraBytes)
Easy sharing with researchers inside and outside the university
Computing and storage resources made available for use
A federation of each department's cluster machines, virtually installed at the computing center
50-100x the cost/performance of a supercomputer
processors, hundreds of GFLOPS, tens of TeraBytes
Software
Super TITANET
Worldwide, R&D on the Grid as the infrastructure for the new E-Science is advancing rapidly
Each project is on the scale of billions of yen with 3-10 year budgets; international forum: Global Grid Forum (1999)
Tokyo Tech is a member of the Steering Group; development of Grid infrastructure software
NWS (U. Tennessee)
Campus Grid construction projects
→ Similar in building a campus Grid from clusters distributed across the campus, but this proposal provides tens of times more computing and storage to E-Science
The Titech Grid as a whole provides on-campus E-Science with 100x the resources of the campus supercomputer, via Grid and PC cluster technologies
-class storage
→ Just as Japan's Internet started at this university, become an international COE for Grid infrastructure as well
→ Make this university a hub for the new E-Science
Truck
Linking to off-campus Grids via SuperSINET, etc.
New-generation E-Science applications demanding TeraFlops, Tera-to-PetaBytes, and Giga-to-TeraBps
Disaster simulation and the like → cannot be handled by modest incremental upgrades of conventional supercomputers (supercomputers are ill-suited; their cost/performance is unrealistic)
[Chart: cluster performance growth in GigaFLOPS, from the Sparc Cluster (1996) and RWC Cluster (1997) through the Presto series (1998-2000 upgrades) to the Campus Grid]
SF-Express Distributed Interactive Simulation: Caltech, USC/ISI
Resource allocation, distributed startup, I/O and configuration, fault detection
NCSA Origin, Caltech Exemplar, CEWES SP, Maui SP
NEOS Project (Argonne National Lab)
Our laboratory and the Applied Math Group
Use Ninf system and
Presto Clusters on Grid Testbed
SCRM(Generalized
Quadratic Optimization Algorithm)
Solves Non-Convex
Optimization Problems
APAN Tokyo, RWCP, TITECH
TransPAC 100Mbps
AIST/TACC
NWS Sensors, Virtual/Real Clients
ApGrid Testbed
NWS Sensors, Virtual/Real Clients
Kyoto-U
SCRM(Generalized
Quadratic Optimization Algorithm)
Iterative execution of multiple Semi-Definite Programming solvers with Ninf via Master-Worker
Some problems achieve 100-fold speedup on 128 procs (a world record in execution time)
[Chart: parallel execution of non-convex quadratic programs by the SCRM method on the PRESTO cluster; execution time (seconds) vs. number of processors (1-64) for NQP15_1.dat and NQP12_1.dat]
Salk Institute and CMU General simulator for cellular
microphysiology
Revolutionary disciplinary results
3-D Monte Carlo simulation
Embarrassingly parallel with file sharing
Input files (e.g. 150MB) Tasks Output file 3-D Rendering Post-processed Output
Use Ninf-Netsolve for large application runs over the Pacific (SDSC ASCI Blue Horizon, ORNL cluster, PRESTO clusters); a task-farming sketch follows
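Such embarrassingly parallel runs map naturally onto the asynchronous GridRPC calls shown earlier; a rough client-side sketch, where the entry name mcell_task and its argument list are invented for illustration and are not MCell's actual interface:

/* Farm out independent tasks with asynchronous calls, then wait for all of them.
   "mcell_task" and its arguments are hypothetical. */
for (i = 0; i < ntasks; i++)
    Ninf_call_async("mcell_task", i, input_file, results[i]);
Ninf_wait_all();   /* block until every remote task has completed */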
University of Tennessee, Knoxville
NetSolve + IBP
University of California, San Diego
GRAM + GASS
Tokyo Institute of Technology
NetSolve + NFS NetSolve + IBP
APST Daemon APST Client
PRESTO Cluster
Richly Integrated, End-to-End System
Imaging Instrument; Supercomputing; Large-scale Databases; Scanning, Acquisition; Reconstruction, Segmentation; Visualization, Measurement
Network GLOBUS - GRID
Remote Access for Data Acquisition and Analysis
Data Sizes - now
M-JPEG Video: 80 Kbps
Digital Video: 36 Mbps
HiRes Image: 16 MB
Tilt Series: 1936 MB
Raw Volume: 8 GB
Refined Volume: ???
Tokyo XP
(Chicago) STAR TAP
TransPAC, vBNS
(San Diego) SDSC, NCMIR
UCSD
UHVEM (Osaka, Japan)
CRL/MPT
NCMIR (San Diego), UHVEM (Osaka, Japan)
Globus
(slide courtesy of Mark Ellisman@UCSD)
Visualization Module: Neurologist (specialist in brain disease)
Data Acquisition Module: MEG at a hospital in your city
Computation Module: Supercomputers at the Cybermedia Center, Osaka University & Singapore iHPC
CTF Systems Inc.
Globus
MPI
Wavelet Analysis
iHPC 7 nodes, Osaka University
analysis, data mining
High Energy Physics (e.g. CERN LHC) Astronomical Observation, Earth Science Bioinformatics… Good support still needed
E-Government, E-Commerce, Data warehouse Search Engines Other Commercial Stuff
ATLAS Detector: 40m x 20m, 7000 tons
Other detectors, e.g. CMS (4 total)
~2000 physicists from 35 countries
~1 PB/year (1MB/event, 30MB/sec); ~300 TB/year (100KB/event); ~10 TB/year (10KB/event)
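As a rough sanity check on these rates: 30 MB/sec sustained over roughly 3x10^7 seconds of running per year gives about 10^15 bytes, consistent with ~1 PB/year at 1 MB/event.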
magnetic-field reconstruction algorithm tracker-2 reconstruction algorithm
RAW
tracker-1 digits tracker-2 digits
Event
calorimeter-1 digits calorimeter-2 digits magnet-1 digits
REC
tracker-1 position info tracker-2 position info
Event
magnet-1 field calorimeter-1 energy calorimeter-2 energy track reconstruction algorithm cluster reconstruction algorithm
ESD
track-1
Event
cluster-1 tracker-1 reconstruction algorithm calorimeter-2 reconstruction algorithm calorimeter-1 reconstruction algorithm cluster-2 cluster-3 track-2 track-3 track-4 track-5 jet identification algorithm electron identification algorithm
AOD
jet 1
Event
electron1 photon1 electron2 jet 2 Et miss Et miss identification algorithm
and access control
Based on Grid technology & PC Cluster tech.
CRL Tokyo Inst. Tech. Osaka-U, NAIST CERN KEK JGN Tokyo JGN Osaka JGN Tsukuba
JGN JGN JGN ANL UIC
JGN
STARTAP
IMnet
Internet
AIST
Prototype 1 – Presto III Cluster (1/50th
scale) @ Titech
AMD Athlon 1.33 Ghz x 128 nodes 768MB mem, 200GB HDD/node
100GB mem, 25TB HDD total 300 Gflops Peak, Myrinet 2K
78 nodes currently operational AMD Press Release Today
Prototype 2 – Presto IV (1/20th scale)
Design Mostly Done
0.13micron Athlon, 2Ghz x 128 Dual Nodes 400GB Disk/Node, 50 TB total 1 TFlops Peak, Myrinet 2K
Operational by 1Q2002@Titech (our lab) Domestic and International Data Challenges
via Infiniband
interconnect into local fabric
a big problem, need engineering development
technology due to high computing and I/O capacity requirement
Commodity Technology circa 2005 300GByte low power HD Drive, Raid 5, 25 Drives/box
=> 6 Terabytes/box (Plug&Play, Active Cooling)
> 10GigaFlop SMT 64-bit CPUx4-8, > 20GB RAM Multi-channel, Multi-gigabit LAN, > 10GigaBps 4U box, 600W power/box, Active cooling
60TeraBytes@250 disks, 40CPUs/40U chassis, 5KWatts 20 Chassis, 1.2 PetaBytes@5000HDDs, 8-16Teraflops
@800CPUs, 100KWatts, 3 Petabyte Tape Storage
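A rough check of these figures: 250 drives x 300 GB is 75 TB raw per chassis, so ~60 TB usable after RAID-5 parity and formatting overhead is plausible; 20 chassis then account for the ~5000 HDDs and roughly the quoted 1.2 PB.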
Direct Infiniband link into the WAN fabric
Initial Prototype 2000-2001
Gfarm filesystem, Metadata management,
data streaming and GridRPC
Mock Data Challenge (Monte Carlo); deploy on the Development Gfarm Cluster
Second Prototype 2002(-2003)
Load balance, Fault Tolerance, Scalability Accelerate by National “Broadband
Computing initiative” proposal
Full Production Development (2004-2005
and beyond)
Deploy on Production GFarm cluster Petascale online storage
Synchronize with ATLAS schedule
ATLAS-Japan Tier-1 RC “prime customer”
Today 135Mbps, planned 10xN Gbps; U-Tokyo (60km), TITECH (80km)
SuperSINET, Tsukuba WAN
solutions not well applicable
solutions
further research and development required
such challenge
Various Collaboration Successes
HE Electron Microscope (Osaka-U/UCSD)
Remote Magnetoencephalography (Osaka-U/iHPC Singapore)
Operations Research (TITECH/Kyoto-U) ATLAS-Gfarm
Interest Growing Rapidly
Astronomy, Subaru and Bisei Telescopes
(NAO, CRL)
Lunar Exploration, SELENE (NASDA) Earthquake Measurement (Bosai) Genome Informatics (Riken, JAIST, etc.)
Tsukuba WAN/ONE Meeting
Over 250 participants
The Next Big National Project…
Observatories Mecca in Hawaii
VLBI: Kashima 34m telescope
3D earthquake simulator in MIKI
[Map: Tsukuba WAN route map, total perimeter 46.4km; sites include STA/IMnet, KEK, NTT AS lab, NRED, Tsukuba Univ., STACI, the materials institute, AIST, NIES, Maffine, ULIS, TAO/JGN]
64bit CPU, massive HDD
Gigabit Ethernet
Myrinet
Interconnection; cluster component: CPU + disk
Federation of multiple PetaScale clusters over the Grid; internet 10Gbps-Tbps (SuperSINET); automated distribution; Infiniband; DWDM; "Post-Clusters"
Distributed processing of PetaBytes of data
for Asia-Pacific Grid researchers
interests to GGF
APAN/TransPAC
Not a project funded from single source
North America (STARTAP) Europe
Japan China Hong Kong Malaysia Singapore Indonesia
Philippines
TransPAC (100 Mbps)
America Europe Thailand
Provide “Free” Grid
Resources as Testbed
Port and Provide Various
Grid Services
Ninf/ GridRPC/Netsolve,
Gfarm, Globus, NWS, PACX, Stampi, RealGrid, APST, Legion, Cactus…
Collaboration from
Japanese SC companies
Would like to collaborate
w/other Grid partners
APAN Tokyo, KEK, UTokyo, TITECH
TransPAC 100Mbps
ETL/TACC
STAR TAP Chicago
ApGrid - Korea, Singapore, Australia, etc.
ApGrid nodes in Japan
NWS Sensors, Virtual/Real Clients
ApGrid Testbed
NWS Sensors, Virtual/Real Clients
US and European Partners Kyoto-U
Japan
AIST/Tsukuba Advanced Computing Center
Universities
TIT, Kyushu, Kyoto, Waseda, Osaka,
Tsukuba
Computing Center, labs
KEK (Gfarm), Real World Computing Partnership, other Govt. labs, universities, NEC, Fujitsu, Hitachi, …
Australia
ANU/ APAC, Monash U
United States
PNNL, (other labs and centers?)
Korea
Science and Technology Information
Europe
Thailand
Computer Technology Center
Taiwan
Performance Computing
Potential Asian Partners
Other APAN members
Centers, Labs
SR8000, SR2201, VPP, SX
IBM SPs, Origin 2K, Sun Enterprise
Severs
Several 100 procs each
I will be submitting 256 procs
Federation of Grid clusters Multiple 1000s node, Terascale
clusters in 2001-2002
Japan
AIST/Tsukuba Advanced Computing Center/ETL
Kyushu University
Computing Center, labs
Kyoto-U (Several labs) Waseda U, Osaka-u, KEK (DataGrid) Other Govt. Labs, Universities.
Australia
ANU, Monash U
United States
PNNL, (other labs and centers?)
Potential Asian Partners
Korea (KORDIC), Singapore (NUS), Malaysia, Thailand, ROC, Hong Kong, Taiwan, other APAN members
Grid Computing and Cluster Computing: convergence with high-performance networking
A more universal, general-purpose technology: "commoditization", the end of the supercomputer
Solving problems in science and engineering that were previously intractable
Networks make high-performance computational science even more pervasive
However, information technology requires mastering advanced techniques
Cutting-edge, ultra-wide-area computing infrastructure technology
Convergence of networking technology and computing technology
New advances in science and technology, and paradigm shifts, through computational science
HPC, language implementation, OS, distributed systems
Networking, the Internet
Application scientists
http://matsu-www.is.titech.ac.jp,
http://ninf.apgrid.org