CESNET e‐Infrastructure: Storage services – vision and plans (PowerPoint presentation)




SLIDE 1

CESNET e‐Infrastructure

Storage services – vision and plans

Peter Verčimák (peter.vercimak@cesnet.cz) CESNET, Prague Czech Republic

SLIDE 2

Czech national e‐Infrastructure projects

  • CESNET
  • IT4Innovations
  • CERIT‐SC

Potential customers of new e‐Infrastructure services including storage:

ESFRI roadmap projects and other national and international R&D groups and projects, e.g. BIOMEDREG, CzechPolar, CZERA, CzechGeo/EPOS, ESS, CANAM, FAIR, ALICE, ELI, PALS, UCNK, CESSDA, ESS‐survey, Reaktory Řež, LINDAT/CLARIN, ThALES, SHARE, CzechCOS/ICOS, …

Building of CESNET e‐Infrastructure – two projects:

1. The “OP VaVpI” project – within the EU Operational Programme for R&D (2011–2013)
2. The “Great Infrastructure” project – within the national ESFRI Roadmap (2011–2015…)

The goal of both projects in the storage area is common – to build up and put into service a storage system of three distributed large‐scale repositories for saving and sharing large volumes of data, including archiving.

SLIDE 3

CESNET Distributed Data Repository

[Map: storage locations – Pardubice, Plzeň, Brno – with DWDM distances between them]

SLIDE 4

The main purpose of the CESNET “national” storage system: an easily accessible and redundant data repository for the academic and scientific community.

  • From the user's point of view: “unlimited storage capacity”; in reality, the overall capacity is about 10 PB across the three locations
  • Technical concept: an HSM system composed of disk arrays and tape libraries (or equivalent systems – MAID, VTL and the like)
  • From the data‐access method and communication‐protocol point of view: a combined NAS/SAN system
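To make the HSM idea concrete, here is a minimal sketch (not CESNET's actual policy engine) of how a hierarchical storage manager might place files: hot data stays on fast disk, bulky or lukewarm data moves to slower disks, and cold data migrates to the tape tier. The tier names and thresholds are illustrative assumptions.

```python
import time

DAY = 86400  # seconds per day

def choose_tier(size_bytes, last_access_ts, now=None):
    """Pick a storage tier for a file based on size and access recency.

    Illustrative HSM-style policy: cold files go to tape, large or
    lukewarm files to slow SATA disks, the rest to fast FC/SAS disks.
    """
    now = time.time() if now is None else now
    idle_days = (now - last_access_ts) / DAY
    if idle_days > 90:
        return "tier3-tape"      # cold data -> tape library
    if size_bytes > 10 * 2**30 or idle_days > 14:
        return "tier2-sata"      # big or lukewarm -> slower 7k SATA tier
    return "tier1-fc"            # hot, small files -> fast 15k FC/SAS tier
```

In a real HSM system the migration also leaves a stub on disk so that a later read transparently recalls the file from tape; this sketch only shows the placement decision.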

Data repository services:

  • Storage element
  • File system services (including backup)
  • Data‐block access services
SLIDE 5
  • Storage element – a data repository oriented to capacity
  • Performance is an important but not critical parameter
  • Accessible via protocols for large‐volume data transfer – gridFTP (or just via common scp, ftp, rsync, …)
  • Locally connected file system
  • File system services – from the HSM point of view again a locally connected FS, but exported through the NFS or SMB protocols
  • Could also be used for backup services – either just as raw capacity for “customer‐administrated” backup, or as a back‐end repository providing the server part of the backup SW on the storage system
  • Data‐block access services – a limited set of customers would have the possibility to access their data via iSCSI or even FC
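As a hedged illustration of the transfer protocols listed above, the helper below only builds the command lines a user might run against the repository (the host name `du1.example.org` and paths are hypothetical placeholders, not real CESNET endpoints); actually running them would require real credentials and connectivity.

```python
def transfer_cmd(tool, src, user, host, dest):
    """Return the argv list for uploading `src` to user@host:dest.

    Covers the tools the repository is meant to accept: scp, rsync,
    and gridFTP (via globus-url-copy). Endpoints here are illustrative.
    """
    remote = f"{user}@{host}:{dest}"
    if tool == "scp":
        # -p preserves modification times and modes
        return ["scp", "-p", src, remote]
    if tool == "rsync":
        # -a preserves attributes, --partial allows resuming large transfers
        return ["rsync", "-a", "--partial", src, remote]
    if tool == "gridftp":
        # globus-url-copy speaks gsiftp:// for large-volume transfers
        return ["globus-url-copy", f"file://{src}", f"gsiftp://{host}{dest}"]
    raise ValueError(f"unknown tool: {tool}")

# Example: the rsync invocation for a hypothetical dataset
print(transfer_cmd("rsync", "/data/run1", "alice", "du1.example.org", "/archive/run1"))
```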

SLIDE 6

Simplified scheme of the data repository in one storage location

[Diagram: disk array and tape library attached via a fabric (FC) switch at 8 Gbps; front‐end servers (user interface – NFS, CIFS/SMB, FTP, HTTP, …) and virtualization servers on a 10 GE Ethernet switch; iSCSI over IP; a router with MSTP and FCoDWDM/FCIP links to the next storage location]

SLIDE 7

Datacenter composition

  • Disk array (Tier 1, Tier 2) – about 300 TB
  • Faster (15k FC, SAS) and slower (7k SATA) disk tiers
  • Tape library (Tier 3) – about 3 PB
  • Fabric and Ethernet network infrastructure
  • Redundant SAN (8 Gb) and LAN (10 GE) switches and lines
  • User interface (front‐end) servers
  • Virtualization farm (hypervisor platform)
  • Servers for application support of specific users' OS and other (application SW) requirements
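The per‐datacenter figures above line up with the “about 10 PB” overall capacity quoted earlier; a quick back‐of‐the‐envelope check (using decimal TB/PB, an assumption on my part):

```python
# Rough sanity check of the quoted capacities:
# ~300 TB of disk (Tier 1 + Tier 2) and ~3 PB of tape (Tier 3) per
# datacenter, replicated across the three storage locations.
TB = 10**12

disk_per_site = 300 * TB
tape_per_site = 3000 * TB          # 3 PB of tape

per_site = disk_per_site + tape_per_site
total = 3 * per_site               # three storage locations

print(per_site / 10**15)           # ~3.3 PB per location
print(total / 10**15)              # ~9.9 PB -> the "about 10 PB" overall
```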

SLIDE 8
  • Management system (SW and its platform, according to the proprietary vendor's solution)
  • HSM system
  • NAS heads
  • Metadata server
  • Superstructural application servers
  • Platforms for data‐administration systems (iRODS) or advanced storage services (WebDAV, ePosteRestante)
  • Replication interface components
  • Dedicated lines between the storage locations will use FCoDWDM or FCIP to ensure backup for DR purposes
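One basic building block of any such replication setup is verifying that a replica at another location matches the primary copy. A minimal sketch, assuming checksum‐based verification (the slides do not specify the mechanism): each site computes a content digest and the digests are compared, so only short hashes rather than full data need to cross the inter‐site links.

```python
import hashlib

def digest(data, algo="sha256"):
    """Content digest of a byte string; each site would compute this locally."""
    h = hashlib.new(algo)
    h.update(data)
    return h.hexdigest()

def replicas_consistent(primary_bytes, replica_bytes):
    """True if the replica matches the primary copy bit-for-bit.

    In practice only the two hex digests would be exchanged between the
    storage locations; holding both byte strings here is for illustration.
    """
    return digest(primary_bytes) == digest(replica_bytes)
```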

SLIDE 9

Thanks for your attention. Any questions? Not required, but tolerable…