

SLIDE 1

Invesco Reducing costs and simplifying DR with NetApp

Julian Wood

Server Architecture Team

NetApp User Group Meeting

12 November 2008

SLIDE 2

Invesco Reducing costs and simplifying DR with NetApp

  • 1. About Invesco
  • 2. Our NetApp journey since 2001
  • 3. Current NetApp Architecture
  • 4. Disaster Recovery
  • 5. VMWare and NetApp Architecture
  • 6. VM Disaster Recovery Demo
SLIDE 3

About Invesco – A Premier Global Investment Management Organization

▪ Invesco is a leading independent global investment management company, with $409.6 billion in assets under management (as of September 30, 2008), dedicated to helping people worldwide build their financial security.
▪ Listed on the New York Stock Exchange (NYSE) under the symbol IVZ.

SLIDE 4

About Invesco – A Premier Global Investment Management Organization

▪ Invesco Perpetual is the UK-based brand of Invesco.
▪ One of the largest independent investment managers in the UK, currently managing assets on behalf of consumers, institutional clients and investment professionals.
▪ 5,354 employees in 55 offices in 20 countries worldwide.

SLIDE 5

Invesco and NetApp

▪ UK using NetApp technology since 2001
▪ 30 Filers and Nearstores over the years in the UK
▪ Currently 6 Filers and 2 Nearstores in the UK
▪ Globally 41 filers and 680 TB of storage
▪ 2007 NetApp Innovation Award Winner in the Enterprise Infrastructure category
  – Use of NetApp and VMWare: ASIS on R200 for backups & NFS

SLIDE 6

  • 2. Our NetApp journey since 2001
SLIDE 7

Stage 1a – NTFS in Henley 2001

[Diagram: 840c cluster in Henley with NTFS volumes and snapshots, Volume SnapMirrored every few minutes to an 840c DR filer, also in Henley]

▪ NTFS Data, Traditional Volumes, Snapshots, SnapMirror
▪ Primary Data Center Henley – 2 x 840c
▪ DR Data Center also in Henley – 1 x 840c
▪ Aliases

SLIDE 8

Stage 1a – NTFS in Henley 2001

▪ NTFS Data, Traditional Volumes, Snapshots, SnapMirror
▪ Primary Data Center Henley – 2 x 840c
▪ DR Data Center also in Henley – 1 x 840c
▪ R100 in London

[Diagram: NTFS data on the 840c cluster Qtree SnapMirrored to the R100, which retains 28 daily and month-end Snapshots]

SLIDE 9

Stage 1b – NTFS in London and Henley 2002

▪ NTFS Data, Traditional Volumes, Snapshots, SnapMirror
▪ Consolidate Data Centers to 2, London and Henley

[Diagram: London and Henley filers with NTFS volumes and snapshots, Volume SnapMirrored every few minutes between the two sites]

SLIDE 10

Stage 1b – NTFS in London and Henley 2002

▪ NTFS Data, Traditional Volumes, Snapshots, SnapMirror
▪ Consolidate Data Centers to 2, London and Henley

[Diagram: NTFS data in London and Henley Qtree SnapMirrored, with 28 daily and month-end Snapshots retained at each site]

SLIDE 11

Stage 2 2003

▪ 2 Primary Datacenters, London and Henley hosting Live and DR data
▪ Three Storage Environments
  – NTFS Data
  – Database
  – Messaging
▪ Tiered Storage
  – Primary Data on Filer
  – Secondary Data on Nearstore
▪ Extend Production NTFS Snapshots
▪ Archive NTFS Data
▪ Disk Backups
▪ SQL and Oracle Backups
▪ Archived Email

SLIDE 12

Stage 2 – NTFS Data 2003

▪ Live Filer Cluster per Data Center – 940c
▪ 6 NTFS Filers to 4
▪ NTFS
  – Client user data
  – Company and departmental shared data
  – Shared Application data
  – FTP Data
  – Documentation Processing
  – Terminal Server Profiles
▪ SnapShots on live for 3 days
▪ SnapMirror data to Nearline for extended snapshots
▪ SnapMirror data between data centers for DR

SLIDE 13

Stage 2 – NTFS Data 2003

▪ Live Filer Cluster per Data Center – 940c

[Diagram: London and Henley 940c clusters with NTFS volumes and snapshots, Volume SnapMirrored every few minutes; clients use local NTFS drive mappings at each site and remote NTFS drive mappings for DR]

SLIDE 14

Stage 2 – NTFS Data 2003

▪ Live Filer Cluster per Data Center – 940c
▪ Nearstore per Data Center R100/R200

[Diagram: NTFS data in London and Henley Qtree SnapMirrored to the Nearstores, each retaining 28 daily and month-end Snapshots]

SLIDE 15

Stage 2 – Database Data 2003

▪ Filer Cluster per Data Center – 940c
▪ 4 Filers
▪ Oracle and SQL Databases
  – 60 Oracle Databases
  – 240+ SQL Databases
  – 6 Oracle Windows Servers
  – 1 Corporate Active-Active SQL Server MS Cluster
▪ Private iSCSI Storage Network
▪ SnapDrive
▪ SnapManager for SQL – managed by DBAs
▪ Oracle Dataguard
▪ Nearstore for Backups

SLIDE 16

Stage 2 – Database Data 2003

▪ Filer Cluster per Data Center – 940c
▪ London Live, Henley DR
▪ SnapDrive + SnapManager for SQL

[Diagram: London cluster (Live) hosting SQL + SnapManager and Oracle + SnapDrive, replicated to the Henley cluster (DR) with SnapMirror and Oracle Dataguard]

SLIDE 17

Stage 2 – Database Data 2003

▪ Nearstore used for Disk Backups
▪ Oracle DEV / Test

[Diagram: SQL and Oracle dumps from the London cluster stored as SQL Backups and Oracle Backups on the Nearstore, alongside Oracle DEV / Test]

SLIDE 18

Stage 2 – Messaging Data 2003

▪ Live / DR Filer Cluster per Data Center – 940c
▪ 4 Filers
▪ Exchange, Sharepoint
▪ Private iSCSI Storage Network
▪ SnapDrive
▪ SnapManager for Exchange Databases – managed by Messaging Team
▪ SnapMirror for Exchange Logs
▪ Nearstore for KVault

SLIDE 19

Stage 2 – Messaging Data 2003

▪ Live / DR Filer Cluster per Data Center – 940c
▪ Microsoft Exchange

[Diagram: Exchange + SnapManager on the London and Henley clusters (each Live / DR), SnapMirrored between the two sites]

SLIDE 20

Stage 2 – Messaging Data 2003

▪ Live / DR Filer Cluster per Data Center – 940c
▪ Microsoft Sharepoint

[Diagram: Sharepoint data copied between the London and Henley clusters (each Live / DR) with Robocopy]

SLIDE 21

Stage 2 – Messaging Data 2003

▪ Nearstore used for KVault archived mail
▪ SnapMirrored between sites for DR

[Diagram: Exchange mail archived to KVault Nearstores in London and Henley, with the archived mail SnapMirrored to the opposite site for DR]

SLIDE 22

Stage 2 – 12 Filers, 2 Nearstores

▪ 3 Filer Clusters per Data Center – 940c
▪ 2 Nearstore R100 per Data Center to 1 R200

[Diagram: London and Henley each hosting NTFS, SQL, Oracle and Messaging filers plus Nearstores for Backups, KVault and Test / DEV]

SLIDE 23

  • 3. Current NetApp Architecture
SLIDE 24

Stage 3 2007

▪ 2 Primary Datacenters, London and Henley
▪ Storage – London Active, Henley DR / UAT
▪ Two Storage Environments
  – Messaging and Database
  – NTFS Data and VMWare
▪ Tiered Storage
  – Primary Data on Filer
  – Secondary Data on Nearstore
▪ Extend Production NTFS Snapshots
▪ Archive NTFS Data
▪ Disk Backups
▪ SQL and Oracle Backups / DEV
▪ Archived Emails
▪ Journal Emails
▪ VM Templates

SLIDE 25

Stage 3 2007

▪ Reduce Filers from 12 to 6
▪ Clustered Filer for Live, Single Filer for DR
▪ Change storage infrastructure from Live/Live to Live/DR
▪ RAID-DP
▪ Merge Messaging and Database
  – 8 Filers down to 3
▪ Store NTFS Data with VMWare Servers
  – 4 Filers down to 3
  – Store VM Disk files
▪ SnapMirror data from Live to DR
▪ SnapMirror data to Nearline for tiered storage
  – SnapLock for Journaled Mail

SLIDE 26

Messaging and Database

▪ 1 Filer Cluster in London – 3070
▪ DR / UAT Filer in Henley – 3070
▪ All Prod Servers in London, DR / UAT in Henley

[Diagram: London cluster (Live) hosting Exchange + SnapManager, Sharepoint, SQL + SnapManager and Oracle + SnapDrive, mirrored to the Henley filer (DR / UAT), which also hosts Oracle UAT and SQL UAT]

SLIDE 27

Messaging and Database

▪ Nearstore Tiered Storage
  – KVault Archived Mail in London and Henley
  – Journal SnapLock Mail in London and Henley
  – Oracle Database Backups and SQL Logs
  – Oracle Dev / Test databases

[Diagram: Nearstores in London and Henley holding Archived Mail, Archived DR Mail, Journal Mail, SQL Logs / SQL DR Logs, Oracle Backups and Oracle Dev/Test, alongside the Exchange and database data]

SLIDE 28

NTFS and VMWare

▪ 1 Filer Cluster in London – 3040
▪ DR Filer in Henley – 3040
▪ Merge all NTFS Data in London
▪ All Client drive mappings to London

[Diagram: London cluster (Live) hosting NTFS and VMWare volumes, mirrored to NTFS DR and VMWare DR volumes on the Henley filer; all client NTFS drive mappings point at London]

SLIDE 29

NTFS and VMWare

▪ R200 Nearstore Tiered Storage
  – UK NTFS 28 Days and Month End Snapshots
  – Livestate OS Backups
  – Archived European NTFS Data
  – Other NTFS Backup Data
  – VM Templates

[Diagram: London and Henley R200 Nearstores holding NTFS Archive, NTFS Backups, EU NTFS DR, Livestate Backups / Livestate DR, VM Templates and the VM DR Template]

SLIDE 30

Current 6 Filers, 2 Nearstores

▪ 2 Filer Clusters in London, 2 DR Filers in Henley
▪ 1 R200 per Data Center

[Diagram: London Live filers hosting NTFS, VMWare, Database and Messaging; Henley DR filers hosting NTFS DR, VMWare Live / DR, Database DR / UAT and Messaging DR; the R200 at each site holds NTFS Archive, Backups, KVault and Journal data]

Cluster, CIFS, iSCSI, NFS, Nearstore, SnapMirror, SnapShots, SnapRestore, SnapLock, ASIS

SLIDE 31

  • 4. Disaster Recovery
SLIDE 32

Disaster Recovery Planning

▪ What is required in DR?
▪ From when? Recovery Point Objective (RPO)
▪ How long? Recovery Time Objective (RTO)
▪ What is available?
▪ Budget

SLIDE 33

Invesco Disaster Recovery

▪ 4 hour Business Critical Applications (RTO)
  – Primary NTFS Data
  – Exchange email
  – Enterprise Vault Archived Mail
  – Sharepoint
  – Livestate OS backups
  – SQL
  – Oracle
  – VM Servers
  – Other applications
▪ Active / Active configurations
  – GC, DNS
  – Web Servers, FTP
  – Batch Scheduling
  – Citrix
▪ 24 hour applications
▪ No Tapes!

SLIDE 34

Exchange Disaster Recovery

▪ 1 Filer Cluster in London, DR Partner Henley – 3070
  – Exchange Stores SnapMirrored twice daily
  – Exchange Logs SnapMirrored every 15 minutes (RPO) – see the schedule sketch below
  – OS Livestate Image Daily to R200

[Diagram: Exchange + SnapManager and the Exchange server OS volumes on the London cluster (Live), SnapMirrored to the Henley cluster (DR)]
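For context, schedules like these are normally expressed in /etc/snapmirror.conf on the destination filer in Data ONTAP 7G, with a minute / hour / day-of-month / day-of-week field layout. The entries below are a rough illustration only; the filer and volume names are assumptions, not taken from the slides.

# /etc/snapmirror.conf on the destination (DR) filer – hypothetical names
# Exchange stores: twice daily at 06:00 and 18:00
londonfiler:v_exchstores   henleyfiler:v_exchstores   -   0 6,18 * *
# Exchange logs: every 15 minutes (the 15-minute RPO)
londonfiler:v_exchlogs     henleyfiler:v_exchlogs     -   0,15,30,45 * * *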

SLIDE 35

  • 5. VMWare and NetApp Architecture
SLIDE 36

Invesco and VMWare

▪ Proven Invesco and industry technology
▪ ESX in Production since 2005
▪ European Platform Review 2006
  – Look at entire operating platform and find efficiencies
  – Reduce Physical Server / Filer count
  – Retire old hardware
  – Better utilise hardware resources through virtualisation
▪ Planning
  – Management buy-in
  – Sell virtualisation
  – Explain technology and benefits
▪ Virtualise First Policy
  – Justify why not
  – Physical dependencies – NAS / IP / Telephony / Dongle
  – Licensing

SLIDE 37

European Platform Review

353 Server Devices down to 284

[Chart: Physical and Virtual Europe (stacked) – physical vs virtual server counts, Nov-06 to Sep-08]

SLIDE 38

European Platform Review

353 Server Devices down to 284

[Chart: % of Virtual Servers, Nov-06 to Sep-08 (axis 0%–50%)]

SLIDE 39

Invesco and VMWare

▪ Started with local attached storage
▪ ESX 3
▪ Great uptime!

SLIDE 40

VMWare and NetApp Architecture

▪ 1 Filer Cluster in London, DR Partner Henley – 3040
  – NTFS
  – VMWare
▪ iSCSI NAS, NFS for Templates
▪ Most Production Servers in London
▪ Active/Active only in Henley
▪ Virtual Center in London, Livestate OS to Henley
▪ Cluster of 7 ESX 3.5 Servers in London
  – VMotion, DRS, HA
  – EVC
▪ Cluster of 3 ESX 3.5 Servers in Henley
  – VMotion, DRS, HA
  – EVC
▪ Globally 48 ESX hosts, 1333 VMs

SLIDE 41

NTFS and VMWare

▪ 1 Filer Cluster in London, DR Partner Henley – 3040

[Diagram: NTFS and VMWare volumes on the London cluster (Live), SnapMirrored to NTFS and VMWare DR volumes on the Henley filer (Live/DR)]

SLIDE 42

VMWare Network Connectivity

▪ Separate LAN / NAS – non-routable
  – LAN + Service Console
  – NAS + VMotion
  – No VLAN tagging

[Diagram: clients and physical servers on the LAN (primary and secondary NICs plus Service Console), with a separate NAS network carrying storage and VMotion traffic]

SLIDE 43

ESX Host Networking

HP DL380 G5, 2 x Quad Core Intel CPUs with FlexMigration, 36 GB RAM

SLIDE 44

NetApp Filer Networking

SLIDE 45

VMWare NetApp Volumes, qtrees and iSCSI LUNs

[Diagram: volume v_lonvm_srvprod1 (1100 GB) containing qtree q_lonvm_srvprod1 with five 400 GB LUNs, lonvm_srvprod1-1.lun to lonvm_srvprod1-5.lun]

▪ Volumes Guaranteed – 1100 GB
  – No Snapshot reserve
  – ASIS
▪ Qtree
▪ LUNs
  – 400 GB presented to ESX
  – Space not Reserved
  – Backups / Restores
▪ Naming Standards
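As a rough illustration of that layout, the sketch below shows how a volume, qtree and LUNs like these could be provisioned from the Data ONTAP 7G command line, driven over rsh in the same style as the scripts later in the deck. The aggregate name, credentials and exact options are assumptions, not Invesco's actual build steps.

# Provisioning sketch – aggr0 and user:password are assumptions, not from the slides
$filer = "londonfiler"
$rsh = "$env:windir\system32\rsh.exe"
& $rsh $filer -l user:password vol create v_lonvm_srvprod1 aggr0 1100g
& $rsh $filer -l user:password vol options v_lonvm_srvprod1 guarantee volume   # volume guaranteed
& $rsh $filer -l user:password snap reserve v_lonvm_srvprod1 0                 # no Snapshot reserve
& $rsh $filer -l user:password sis on /vol/v_lonvm_srvprod1                    # enable A-SIS
& $rsh $filer -l user:password qtree create /vol/v_lonvm_srvprod1/q_lonvm_srvprod1
1..5 | ForEach-Object {
    # 400 GB LUNs, space not reserved, following the naming standard above
    $lun = "/vol/v_lonvm_srvprod1/q_lonvm_srvprod1/lonvm_srvprod1-$_.lun"
    & $rsh $filer -l user:password lun create -s 400g -t vmware -o noreserve $lun
}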

SLIDE 46

A-SIS

▪ A-SIS at Volume level
▪ Volume SnapMirror updates take along A-SIS
▪ Scheduled task
▪ Coordinate with Snapshots
  – A-SIS first, then Snapshots / SnapMirror (see the sketch below)
▪ ESX Servers unaware for iSCSI / FCP

df -g
Filesystem                         total     used     avail    capacity  Mounted on
/vol/v_lonvm_srvtest1/             1000GB    802GB    197GB    80%       /vol/v_lonvm_srvtest1/
/vol/v_lonvm_srvtest1/.snapshot    0GB       203GB    0GB      ---%      /vol/v_lonvm_srvtest1/.snapshot

df -g -s
Filesystem                  used     saved    %saved
/vol/v_lonvm_srvtest1/      802GB    940GB    54%

1381GB used
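A minimal sketch of the "A-SIS first, then Snapshots / SnapMirror" ordering, again driving the filer over rsh as in the later scripts; the filer, volume and credentials are assumptions.

# Scheduling sketch – filer name, volume and credentials are assumptions
$filer  = "londonfiler"
$volume = "v_lonvm_srvtest1"
$rsh    = "$env:windir\system32\rsh.exe"
& $rsh $filer -l user:password sis start "/vol/$volume"     # dedupe newly written blocks first
& $rsh $filer -l user:password sis status "/vol/$volume"    # check A-SIS is idle before snapping
# ...once A-SIS has finished, take the Snapshot / run the SnapMirror update as on the later slides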

SLIDE 47

iSCSI vs NFS

▪ iSCSI and FCP Block Level
▪ NFS like CIFS – scalable, mature networking protocols
▪ IP networking
▪ NFS
  – No LUNs, initiators, zoning, multipathing, iSCSI reservations
  – No rescanning
  – Browse and restore from Snapshots
  – Increase / Decrease on the fly (see the sketch after this list)
  – ESX Team manage storage provided by Storage Team
  – Easier backup and restore of individual VMs
  – Uses less Disk Space
  – Automatically thin-provisioned
  – ESX Servers have direct visibility of disk space including ASIS
▪ Cost
▪ SRM
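To illustrate the "Increase / Decrease on the fly" point, a sketch of growing an NFS-backed volume with the standard ONTAP vol size command; the names and the size increment are assumptions, not taken from the slides.

# Resizing sketch – names and sizes are assumptions
$filer  = "londonfiler"
$volume = "v_lonvm_srvtest1_nfs"
$rsh    = "$env:windir\system32\rsh.exe"
& $rsh $filer -l user:password vol size $volume +100g    # grow the NFS volume in place
# The NFS datastore simply reflects the new size – no LUN resize or HBA rescan needed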

SLIDE 48

iSCSI vs NFS

▪ iSCSI

df -g
Filesystem                         total     used     avail    capacity  Mounted on
/vol/v_lonvm_srvtest1/             1000GB    802GB    197GB    80%       /vol/v_lonvm_srvtest1/
/vol/v_lonvm_srvtest1/.snapshot    0GB       203GB    0GB      ---%      /vol/v_lonvm_srvtest1/.snapshot

df -g -s
Filesystem                  used     saved    %saved
/vol/v_lonvm_srvtest1/      802GB    940GB    54%

1381GB used

▪ NFS

df -g
Filesystem                          total    used     avail    capacity  Mounted on
/vol/v_lonvm_srvtest1_nfs/          750GB    519GB    230GB    69%       /vol/v_lonvm_srvtest1_nfs/

df -g -s
Filesystem                      used     saved    %saved
/vol/v_lonvm_srvtest1_nfs/      519GB    875GB    63%

SLIDE 49

  • 6. VM Disaster Recovery Demo
SLIDE 50

VMWare NetApp Volumes, qtrees and iSCSI LUNs DR

[Diagram: volume v_lonvm_srvprod1 (1100 GB) with qtree q_lonvm_srvprod1 and its five 400 GB LUNs in London (Live), Volume SnapMirrored to an identical volume, qtree and set of LUNs in Henley (DR)]

SLIDE 51

Virtual Machine DR Steps

▪ Source Data
  – Group VMs in volumes based on recovery
  – Consistent Snapshots?
▪ SnapMirror to BCP
▪ Testing
  – Shut down source VMs
▪ DR
  – Quiesce, Break SnapMirror at destination
  – Make LUNs available to ESX (iSCSI) – see the sketch after this list
  – Add VMs to inventory (iSCSI)
  – Power On
▪ Fail Back / End Test
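The scripts on the following slides cover the quiesce/break and the ESX-side rescan and registration. The "Make LUNs available to ESX (iSCSI)" step happens on the destination filer and is not scripted there; one common way to do it in Data ONTAP 7G is sketched below, with a hypothetical igroup name and a single example LUN path, and is not necessarily how Invesco's demo did it.

# DR-side LUN presentation sketch – igroup name and LUN path are assumptions
$destinationfiler = "henleyfiler"
$rsh = "$env:windir\system32\rsh.exe"
$lun = "/vol/v_lonvm_srvprod1/q_lonvm_srvprod1/lonvm_srvprod1-1.lun"
& $rsh $destinationfiler -l user:password lun online $lun                  # bring the LUN online if it is not already
& $rsh $destinationfiler -l user:password lun map $lun esx_henley_igroup   # present it to the Henley ESX hosts' igroup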

SLIDE 52

Windows PowerShell and the VI Toolkit

▪ All-singing, all-dancing scripting environment from Microsoft
▪ Extended with the VMWare VI Toolkit
▪ Not a plug-in to Virtual Center
▪ http://www.vmware.com/sdk/vitk_win/index.html
▪ http://blogs.vmware.com/vipowershell/

SLIDE 53

Windows PowerShell and the VI Toolkit

What can you do?

# Connect to VC Server
PS:> get-viserver viserver

# Power on VMs
PS:> get-vm -location (get-folder "Web Servers") | start-vm

# Display VM Name and Snapshot Name
PS:> Get-VM | Get-Snapshot | select @{name="VM Name"; Expression={$_.vm.name}},name

VM Name      Name
-------      ----
LONVMTEST1   Initial Build
LONVMTEST1   Before New Version
LONVMTEST1   Pre Update

# Identify all snapshots older than 30 days and export to .CSV
PS:> Get-VM | Get-Snapshot | Where { $_.Created -lt (Get-Date).AddDays(-30) } | select Name, Created | export-csv D:\snap.csv

# Get all VM Hard Disk sizes in GB
PS:> get-vm | select name, { $_.HardDisks | % { $_.CapacityKB / 1024 / 1024 } }

# Get the names of VMs with connected CD drives
PS:> get-vm | where { $_ | get-cddrive | where { $_.ConnectionState.Connected -eq "true" } } | select Name

# Display all events relating to a VM
PS:> Get-VM "MyVMName" | Get-VIEvent | Format-Table CreatedTime, FullFormattedMessage -AutoSize

SLIDE 54

Creating Consistent Snapshots - Powershell

iSCSI / NFS

# Connect to VC Server
Get-VIServer VIServer

$datastore = "london_iscsi_srvtest1*"
$sourcefiler = "londonfiler"
$destinationfiler = "henleyfiler"
$volume = "v_lonvm_srvtest1"
$netappsnapname = "quiesce"

# Get all VMs in Datastores
$vms = get-vm -Datastore (get-datastore $datastore)

# Create VM SnapShot for all VMs in Datastores
$vms | New-Snapshot -Name SnapQuiesce -Quiesce

# Create NetApp SnapShot for Volume
& "$env:windir\system32\rsh.exe" $sourcefiler -l user:password snap create $volume $netappsnapname

# Update NetApp SnapMirror for Volume to BCP site
& "$env:windir\system32\rsh.exe" $destinationfiler -l user:password snapmirror update $volume

# Delete VM SnapShot previously created
$vms | Get-Snapshot | where { $_.Name -eq 'SnapQuiesce' } | Remove-Snapshot -Confirm:$false

# Disconnect from VC Server
Disconnect-VIServer -Confirm:$False

SLIDE 55

Invoke DR - iSCSI

# Connect to VC Server
Get-VIServer VIServer

$destinationfiler = "henleyfiler"            # DR Filer
$hosts = "gbhenesx*"                         # Hosts to rescan storage
$datastorefind = "london_iscsi_srvdemo1-"    # Datastores to search
$vmhost = "henleyhost.invesco"               # ESX Host to place found VMs
$resourcepool = "Hen_DL380G5_1_Prod1"        # Resource Pool to place found VMs
$vmfolder = "Henley BCP"                     # VC Folder to place found VMs

# Add VI3 Community Extensions (used for get-datastorefiles, PowerShell CTP2 required)
add-module "C:\Program Files\VMware\Infrastructure\VIToolkitForWindows\Extensions.psm1"

# Quiesce and Break Filer SnapMirror for volume
& "$env:windir\system32\rsh.exe" $destinationfiler -l user:password snapmirror quiesce $volume
& "$env:windir\system32\rsh.exe" $destinationfiler -l user:password snapmirror break $volume

# Rescan all HBAs at destination
get-vmhoststorage (get-vmhost -name $hosts) -rescanallhba

# Get all SnapMirrored Datastores based on search
$datastores = get-datastore snap*$datastorefind*

# Loop through each SnapMirrored datastore
foreach ($datastore in $datastores) {
    # Rename datastore to its normal name prefixed with snap-
    $datastorename = $datastore.Name.substring(14, $datastore.Name.Length - 14)
    set-datastore -datastore $datastore -name snap-$datastorename

    # Register all .vmx files on the datastore as VMs
    $vmxpath = get-datastorefiles (get-datastore snap-$datastorename) | where { $_.Path -match '.vmx$' } | select path | % { Register-VM $_.Path -vmhost (get-vmhost $vmhost) -resourcepool (get-resourcepool $resourcepool) -folder (get-folder $vmfolder) }
}

# Disconnect from VC Server
Disconnect-VIServer -Confirm:$False

SLIDE 56

Invoke DR – NFS

The Long way

# Connect to VC Server
Get-VIServer VIServer

$datastorefind = "gbhennl01_nfs_template_iso"    # BCP NFS Datastores to search
$vmhost = "henleyhost"                           # ESX Host to place found VMs
$resourcepool = "Hen_DL380G5_1_Prod1"            # Resource Pool to place found VMs
$vmfolder = "Henley BCP"                         # Folder to place found VMs

# Add VI3 Community Extensions (used for get-datastorefiles)
add-module "C:\Program Files\VMware\Infrastructure\VIToolkitForWindows\Extensions.psm1"

# Quiesce and Break Filer SnapMirror for volume
& "$env:windir\system32\rsh.exe" $destinationfiler -l user:password snapmirror quiesce $volume
& "$env:windir\system32\rsh.exe" $destinationfiler -l user:password snapmirror break $volume

# Register all .vmx files as VMs on the datastore, ignoring the .snapshot directory
$vmxpath = get-datastorefiles (get-datastore $datastorefind) | where { $_.Path -match '.vmx$' -and $_.Path -notmatch '.snapshot' } | select path | % { Register-VM $_.Path -vmhost (get-vmhost $vmhost) -resourcepool (get-resourcepool $resourcepool) -folder (get-folder $vmfolder) }

# Disconnect from VC Server
Disconnect-VIServer -Confirm:$False

The Short Way - Preregister VMs and power on
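A minimal sketch of that short way, reusing only cmdlets shown earlier in the deck and assuming the DR copies have already been registered into the "Henley BCP" folder used by the scripts above.

# Short-way sketch – assumes the VMs were pre-registered in the "Henley BCP" folder
Get-VIServer VIServer
get-vm -location (get-folder "Henley BCP") | start-vm
Disconnect-VIServer -Confirm:$False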

SLIDE 57

Questions?

No VMs were harmed in the making of this presentation

SLIDE 58

Invesco Reducing costs and simplifying DR with NetApp

Julian Wood

julian.wood@invesco.com

NetApp User Group Meeting

12 November 2008