


Slide 1: VMS SAN Technology

Spring 2002
John Covert, 3C01

Slide 2: Fibre Channel

ANSI standard network and storage interconnect
– OpenVMS, and most others, use it for SCSI storage
– TCP/IP and VIA also possible

1.06 gigabit/sec, full-duplex, serial interconnect
– Capable of 100 MB/sec per link (with 1Gb links)
– 2Gb in early 2002… 10Gb in 2003-2004

Long distance
– 500m multi-mode fiber
– 100km single-mode fiber
– 600km with FC/ATM links


Slide 3: Topologies

Arbitrated loop FC-AL (NT/UNIX today)
– Uses hubs
– Maximum number of nodes is fixed at 126
– Shared bandwidth

Switched (SAN: VMS / UNIX / NT)
– Highly scalable
– Multiple concurrent communications
– Switch can connect other interconnect types

Slide 4: Current Configurations

– Up to twenty switches (8- or 16-port) per FC fabric
– AlphaServer 800, 1000A*, 1200, 4100, 4000, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS80, GS140, GS160, and GS320
– Adapters (max) per host determined by the platform type: 2, 4, 8, 26
– Multipath support: no single point of failure
– 100km max length (without ATM)

* The AS1000A does not have console support for FC.


Slide 5: Long-Distance Storage Interconnect

FC is the first long-distance storage interconnect
– New possibilities for disaster tolerance
– Extensive multipath capability

Two replication approaches, compared on the following slides:
– Host-based Volume Shadowing (HBVS)
– Data Replication Manager (DRM)

Slide 6: HBVS: Multi-site FC Clusters

[Diagram: two sites, each with Alpha hosts and HSG controllers behind FC switches, linked by FC (100 km, or 600 km with ATM). A GigaSwitch carries host-to-host traffic over CI, DSSI, MC, FDDI, T3, ATM, or Gigabit Ethernet. A host-based shadow set spans the storage at both sites.]
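A minimal DCL sketch of building such a shadow set; the shadow set name DSA42, the member names $1$DGA101 and $1$DGA201, and the label DATA01 are hypothetical:

$ ! Form a two-member host-based shadow set, one member per site
$ ! (hypothetical device names and label)
$ MOUNT /SYSTEM DSA42: /SHADOW=($1$DGA101:, $1$DGA201:) DATA01

Once mounted, every write is issued to both members, which is the source of the extra CPU overhead noted on the next slide.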


Slide 7: HBVS Multi-site FC Pro and Con

Pro
– High performance, low latency
– Symmetric access
– Fast failover

Con
– Full shadow copies and merges are required today
    (HSG write logging, after V7.3)
– More CPU overhead

Slide 8: DRM Configuration

[Diagram: two sites, each with FC hosts and HSG controllers on FC switches, linked by inter-site FC switches at up to 600 km. The remote site holds cold stand-by nodes; host-to-host cluster communication runs between the sites.]


Slide 9: DRM Configuration

[Diagram: Alpha hosts and HSG controllers behind an FC switch at each site, linked by FC (100 km single-mode) carrying the controller-based remote copy set, with host-to-host communication over LAN/CI/DSSI/MC. The remote site holds cold stand-by nodes.]

Slide 10: DRM Pro and Con

Pro
– High performance, low latency
– No shadow merges
– Supported now, and enhancements are planned

Con
– Asymmetric access
– Cold standby
– Requires both HSG controller ports on the same fabric
– Manual failover (15 min. is typical)


Slide 11: VMS V7.3 SAN Features

Slide 12: FibreChannel/SCSI “Fast Path”

Applies to the KGPSA (FibreChannel) and KZPBA (SCSI)

Improves I/O scaling on SMP platforms
– Moves I/O processing off the primary CPU
– Reduces “hold time” of IOLOCK8 by ~30%
– Streamlines the normal I/O path (read/write)
– Uses pre-allocated “resource bundles”

Explicit controls available (sketched below)
– SET DEVICE/PREFERRED_CPU
– SYSGEN parameters FAST_PATH and FAST_PATH_PORTS
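A hedged sketch of those controls; the CPU number and the use of WRITE CURRENT here are illustrative, not a recommendation:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET FAST_PATH 1          ! enable fast path system-wide
SYSGEN> WRITE CURRENT            ! takes effect at the next boot
SYSGEN> EXIT
$ ! Bind FGA0's fast path I/O processing to CPU 3
$ SET DEVICE FGA0: /PREFERRED_CPU=3

The SHOW DEVICE display on the next slide reports the resulting “Current preferred CPU Id” and “Fastpath” settings.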


Slide 13: FibreChannel/SCSI “Fast Path”

$ show device /full fga0

Device FGA0:, device type KGPSA Fibre Channel, is online, shareable,
    error logging is enabled.

    Error count                          Operations completed
    Owner process               ""       Owner UIC                  [SYSTEM]
    Owner process ID      00000000       Dev Prot          S:RWPL,O:RWPL,G,W
    Reference count                      Default buffer size
    Current preferred CPU Id    15       Fastpath                          1
    FC Port Name  1000-0000-C921-BD93    FC Node Name    2000-0000-C921-BD93

$ set device fga0: /preferred=3

Slide 14: Fibre Channel Tape Support

Modular Data Router (MDR)
– Fibre Channel to parallel SCSI bridge
– Connects to one or two Fibre Channel ports on a SAN
– Multi-host, but not multi-path
– Can be served to the cluster via TMSCP (sketched below)
– Supported as a native VMS tape device by COPY, BACKUP, etc.
– ABS, MRU, SLS support
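A minimal sketch of enabling TMSCP serving on the hosts with direct fabric access; the parameter names are the standard ones, but the allocation-class value is illustrative:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET TMSCP_LOAD 1         ! load the TMSCP tape server at boot
SYSGEN> SET TAPE_ALLOCLASS 2     ! nonzero tape allocation class (illustrative)
SYSGEN> WRITE CURRENT
SYSGEN> EXIT

After a reboot, the FC tape appears cluster-wide under the allocation class (e.g., the $2$MGA4: device shown on Slide 21).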


Slide 15: FibreChannel Tape Support

[Diagram: an FC switch connecting WinNT, OpenVMS Alpha, and Tru64 Alpha hosts to a RAID array disk controller and, through an MDR, to a SCSI tape library; OpenVMS Alpha or VAX systems without fabric access reach the tape TMSCP served.]

Slide 16: VMS V7.3-1 SAN Features


Slide 17: Failover to the MSCP Served Path

Disk multipath failover to MSCP served paths
– The current implementation supports failover amongst direct paths
– The new implementation allows failover to a served path if all direct paths are down, and failback when a direct path is restored
– Supported for multihost FibreChannel and SCSI connections
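Paths can also be switched manually from DCL; a hedged sketch, reusing the path names from the display on the next slide:

$ ! Move $1$DGA100: from its current path to another direct path
$ SET DEVICE $1$DGA100: /SWITCH /PATH=PGB0.5000-1FE1-0011-AF09

SHOW DEVICE/FULL then reports the chosen path as the current path.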

Slide 18: Failover to the MSCP Served Path (continued)

$ sh dev /full $1$dga100:

Disk $1$DGA100: (CEAGLE), device type HSV100, is online, mounted,
    file-oriented device, shareable, device has multiple I/O paths,
    served to cluster via MSCP Server, error logging is enabled.

    Error count                          Operations completed         562369
    Owner process               ""       Owner UIC                  [SYSTEM]
    Owner process ID      00000000       Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count            352       Default buffer size             512
    Current preferred CPU Id             Fastpath                          1
    WWID  01000010:6005-08B4-0001-003F-0002-1000-0021-0000
    Total blocks          41943040       Sectors per track               128
    Total cylinders           2560       Tracks per cylinder             128
    Host name             "CEAGLE"       Host type, avail
                                           Compaq AlphaServer ES40, yes
    Alternate host name   "MARQUE"       Alt. type, avail
                                           Compaq AlphaServer ES45 Model 2, yes
    Allocation class             1

  I/O paths to device          5
  Path PGA0.5000-1FE1-0011-AF08 (CEAGLE), primary path, current path.
    Error count                          Operations completed         561886
  Path PGA0.5000-1FE1-0011-AF0C (CEAGLE).
    Error count                          Operations completed            161
  Path PGB0.5000-1FE1-0011-AF09 (CEAGLE).
    Error count                          Operations completed            161
  Path PGB0.5000-1FE1-0011-AF0D (CEAGLE).
    Error count                          Operations completed            161
  Path MSCP (MARQUE).
    Error count                          Operations completed


Slide 19: Failover To The MSCP Path

[Diagram: the multi-site HBVS configuration of Slide 6: Alpha hosts and HSG controllers behind FC switches at two sites, FC inter-site links (100 km), a GigaSwitch carrying CI, DSSI, MC, FDDI, T3, ATM, or Gigabit Ethernet for host-to-host traffic, and a host-based shadow set spanning the sites.]

Slide 20: Multipath Tape Support

Multipath tape support
– Allows user selection of path for load balancing (sketched below)
    200 MB/sec through a dual-FC MDR
    8 SDLT drives can be driven at full compacted bandwidth
    No need to use the MDR SSP
– Dynamic failover between the 2 ports on the MDR
– The MDR is still a single point of failure, but fabric failures are tolerated
– Failover to the MSCP path is not supported for tape
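A hedged sketch of selecting a tape path by hand, reusing the path names from the display on the next slide:

$ ! Balance load by moving $2$MGA4: to a path through the MDR's other port
$ SET DEVICE $2$MGA4: /SWITCH /PATH=PGC0.5005-08B3-0010-269A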


Slide 21: Multipath Tape Support (continued)

$ sh dev /full mga4:

Magtape $2$MGA4: (CLETA), device type COMPAQ SuperDLT1, is online,
    file-oriented device, available to cluster, device has multiple
    I/O paths, error logging is enabled, controller supports compaction
    (compaction disabled), device supports fastskip.

    Error count                          Operations completed
    Owner process               ""       Owner UIC                  [SYSTEM]
    Owner process ID      00000000       Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                      Default buffer size            2048
    WWID  02000008:500E-09E0-0005-460D
    Density                default       Format                    Normal-11
    Allocation class             2

  Volume status:  no-unload on dismount, position lost, odd parity.

  I/O paths to device          4
  Path PGA0.5005-08B3-0010-2699 (CLETA), primary path, current path.
    Error count                          Operations completed
  Path PGB0.5005-08B3-0010-2699 (CLETA).
    Error count                          Operations completed
  Path PGC0.5005-08B3-0010-269A (CLETA).
    Error count                          Operations completed
  Path PGD0.5005-08B3-0010-269A (CLETA).
    Error count                          Operations completed

Slide 22: Typical FibreChannel Configuration

[Diagram: a 4-CPU host with KGPSA adapters fga-fgd on two PCI buses connects through an FC switch to the four host ports (1-4) of an HSG controller pair (A/B); an identically configured host connects through an FC switch to a dual-port MDR. The storage ports appear on each host as paths pga-pgd.]


Slide 23: Distributed Interrupts

Distributed interrupts to Fastpath devices
– Allows the hardware interrupt to be targeted directly at the “preferred” fastpath CPU
– Frees up CPU cycles on the primary processor
– Avoids the IP (interprocessor) interrupt overhead of redirecting the interrupt to the “preferred” fastpath CPU
– CPU 0 load for I/O processing = 0% with distributed interrupts + fastpath

Slide 24: Interrupt Coalescing on the KGPSA

Interrupt coalescing on KGPSA adapters
– Aggregates I/O completion interrupts in the host bus adapter
– Saves passes through the interrupt handler and reduces IOLOCK8 hold time
– Initial tests show a 25% reduction of IOLOCK8 hold time (3-4 µs per I/O), resulting in a direct 25% increase in maximum I/Os per second for high-I/O workloads


Slide 25: FibreChannel Driver Optimization

FibreChannel driver optimization
– Reduces IOLOCK8 hold time by an additional 3-6 µs per I/O
– This feature cannot be backported to earlier versions of OpenVMS
– Combined with interrupt coalescing, this optimization cuts IOLOCK8 time by 50% and allows a 2x+ increase in maximum I/Os per second

Slide 26: KZPEA Fastpath

Extends Fastpath to the KZPEA (Ultra3 SCSI)
– Halves IOLOCK8 hold time compared to V7.3 (20 µs -> 9 µs)
– Enabled with FAST_PATH_PORTS bit 2 = 0 (sketched below)
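A hedged sketch of that setting; FAST_PATH_PORTS is a bitmask, and per the slide a clear bit 2 enables fast path for KZPEA ports (the value 0 below assumes no other port types need their bits set):

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW FAST_PATH_PORTS      ! inspect the current bitmask
SYSGEN> SET FAST_PATH_PORTS 0     ! bit 2 = 0 -> KZPEA fast path enabled
SYSGEN> WRITE CURRENT
SYSGEN> EXIT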


Slide 27: General SAN Enhancements

Slide 28: 2Gb FibreChannel

2Gb links
– End-to-end upgrade during early 2002
– LP9002 (2Gb PCI adapter)
– Pleiades 4 switch (16 2Gb ports)
– 2Gb storage array (Enterprise)


Slide 29: Enterprise – HSV110

HSV storage controller
– Follow-on to the HSG80/60
– Creates virtual volumes from physical storage
– 3-6x HSG80 performance
– 248 physical FC drives (18 TB)
    (dual-ported 10K rpm drives)
– 2Gb interface to the fabric
– 2Gb interface to drives
– Shipping in Q4 2001
– Measured VMS performance:
    80K IO/second (HSG80 = 15K IO/second)
    200 MB/sec (400 MB/sec from cache)

Slide 30: Enterprise – HSV110

[No slide text; illustration only.]

Slide 31: Enterprise – HSV110

[No slide text; illustration only.]

Slide 32: Enterprise – HSV110

Virtually Capacity-Free Snapshot
– Creates an instant clone of the mapping data
– As data is written to the master copy, additional storage is consumed and the maps diverge
– To use the snapshot in VMS, create it with the SAN appliance or SANscript, then:

    $ mc sysman io auto
    $ set volume $1$dgaxxx /label=newname
    $ mou/sys $1$dgaxxx newname


Slide 33: SAN/SCSI Futures

Slide 34: OpenVMS SAN/SCSI Futures

Modular SAN Array 1000
– 2Gb FibreChannel front end
– 4 U160 SCSI back-end ports
– 4U rackmount with 14 drives
– 28 additional drives with external storage shelves
– Low-cost 2-node clusters with FC-AL hub
– Latent support in V7.3-1


Slide 35: OpenVMS SAN/SCSI Futures

HSG write logging
– Enables mini-merge on the HSG80
– Requires ACS 8.7
– V7.3-1 + TIMA (Q3 2002)

SmartArray 5300
– Backplane RAID adapter
– 2/4 Ultra3 SCSI channels, up to 56 drives
– ~15K IO/sec
– Available Q3 2002

Slide 36: OpenVMS SAN/SCSI Futures

VersaStor
– Storage virtualization across multiple FibreChannel storage arrays (HSG/HSV in V1)
– 2003 availability for VMS

Other futures:
– Native FC tapes
– iSCSI host bus adapters and storage arrays
– Concurrent multipath I/O
– Dynamic volume expansion
– Further IOLOCK8 minimization/breakup
– SCS over FibreChannel


Slide 37: VMS Itanium Plans for SANs

[Diagram: a mixed-architecture cluster. OpenVMS VAX systems use CI storage (HSJ) through a Star Coupler; OpenVMS Alpha and OpenVMS Itanium™ systems use Fibre Channel storage (HSG/HSV) through an FC switch; a LAN carries the host-to-host communication.]

Fibre is good for you!