SLIDE 1


AARNet Mirror and CDN update

Stephen Walsh

Network Operations

15th TF-Storage, NDN2014, Uppsala

SLIDE 2


But First…

SLIDE 3


Our National Connectivity

SLIDE 4


Mirror Update

SLIDE 5


Mirror History

  • Mirror (1998)

– 4-processor Sun SS1000 with 256 MB of RAM and 50 GB of disk
– RAM upgraded via a donation from a member institution

  • Mirror2 (2001)

– Sun donated an Enterprise 450; we purchased 2 x A1000 disk systems

  • Mirror3 (2006)

– Redesigned to use commodity hardware backed by a SAN


SLIDE 6


AARNet RETAIN on AARNet 3

  • Mirrorv4/RETAIN (2010)

– Sited in Brisbane; multiple servers backed with a Hitachi SMS100
– HAProxy SSD-cache front ends are 10G-connected
– Run from a scavenger-pool IP class; ISPs were found to be major users
– Everything would fall apart as load increased.

  • Mirrorv5 planned for 2011/12, but fell through the cracks between OSI Layer 8 (budget), Layer 9 (management) and Layer 10 (free time)


SLIDE 7


[Map: AARNet Mirror (RETAIN). National backbone map of AARNet POPs, link speeds (<1, <2.5, <10 Gbps) and WDM transmission, marking the mirror front end.]

SLIDE 8


AARNetCDN on AARNet 4

  • AARNetCDN is born!
  • Supermicro storage chassis, 72 x HDD in 4RU
  • ~100 TB current storage; the upgrade path is simple.
  • Two storage nodes sited in Canberra, split between POPs; the primary site also has VM hardware and an SSD cache running ATS
  • Supermicro TwinPro 4-blade server for VM provisioning for repo and ‘special petal’ projects
  • Each capital city will have an SSD cache, as will some international sites


SLIDE 9


AARNetCDN on AARNet 4

  • Supermicro 6047R-E1R72L chassis, 72 disks in paired trays

– Each tray is a RAID 0 pair (see the sketch below)
– Storage growth is a matter of replacing a tray.
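The deck does not say how a tray pair is bound together, so the following is only a hedged illustration of striping a tray's two disks into a RAID 0 set with mdadm; the device names are hypothetical, and the chassis may use a hardware RAID controller instead.

```python
# Hedged sketch: stripe one tray's two disks into a RAID 0 pair via mdadm.
# /dev/sda, /dev/sdb and /dev/md0 are hypothetical names.
import subprocess

def run(*cmd):
    """Echo and run a command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("mdadm", "--create", "/dev/md0",
    "--level=0", "--raid-devices=2",
    "/dev/sda", "/dev/sdb")

# RAID 0 cannot be reshaped in place: a capacity upgrade means retiring the
# pair, fitting the tray with larger disks, recreating it, and re-copying
# its data, which matches the replace-a-tray growth model above.
```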


SLIDE 10


AARNetCDN on AARNet 4

  • Supermicro TwinPro Chassis – Virtualisation Node

– 4 blades per chassis
– Mix of SSD and HDD, clustered into Ceph for HA KVM


SLIDE 11


AARNetCDN on AARNet 4

  • Supermicro 2027R-AR24NV Chassis – Front end

– 24 x SAS SSD = ~300Tb
– Runs Apache Traffic Server, directly connected to BD Routers (see the probe sketch below)
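To check that a front end like this is actually serving from cache, one can fetch the same object twice and inspect cache-related headers. A minimal probe sketch, assuming a hypothetical object path; a cache must send an Age header on a hit, while Via only appears if the proxy is configured to insert one:

```python
# Hedged probe: request the same object twice through the cache front end
# and look for evidence of a hit. The path below is hypothetical.
import urllib.request

URL = "http://mirror.aarnet.edu.au/pub/example/file.iso"  # hypothetical path

for attempt in ("first request (likely miss)", "second request (likely hit)"):
    with urllib.request.urlopen(URL) as resp:
        print(attempt,
              "| status:", resp.status,
              "| Age:", resp.headers.get("Age"),   # set when served from cache
              "| Via:", resp.headers.get("Via"))   # only if configured
```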


SLIDE 12


[Map: AARNet 4. National backbone map of AARNet POPs, link speeds (<1, <2.5, <10 Gbps) and WDM transmission, marking the Mirror ATS front-end locations.]

SLIDE 13


AARNetCDN on AARNet 4

  • CephFS

– Initially, ran very well.
– Sync speeds were acceptable

  • Trouble developed and things fell apart when we hit high load
  • Failure of Ceph was more about the size of the hammer than the problem we were trying to fix with it.


SLIDE 14


AARNetCDN on AARNet 4

  • Currently running ZFS as an interim measure

– L2ARC and the ZFS Intent Log provided an unexpected performance boost
– Snapshots are making filesystem syncs easier (a sketch of the workflow follows the steps below)

  • Snapshot the fs
  • Update that fs, keeping the original fs mounted and running.
  • When the COW sync is complete and confirmed, ZFS send the snapshot to the original fs
  • Drink Beer.
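A minimal sketch of one way to script those steps: the dataset names, upstream rsync URL, and state file below are all hypothetical, the deck does not describe AARNet's actual tooling, and the first run would need a full rather than incremental zfs send.

```python
# Hedged sketch of the snapshot/send sync workflow described above.
import subprocess
from datetime import datetime, timezone

MIRROR_FS = "tank/mirror"            # hypothetical dataset rsync writes into
SERVING_FS = "tank/mirror-serving"   # hypothetical dataset clients read from
UPSTREAM = "rsync://upstream.example.org/repo/"  # hypothetical upstream
STATE = "/var/lib/mirror/last-snap"  # records the previous snapshot name

def run(cmd, **kwargs):
    """Echo and run a command, failing loudly on error."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, **kwargs)

def sync_once():
    # Snapshot sent on the previous run (bootstrap the very first run
    # with a full `zfs send` instead of an incremental one).
    prev = open(STATE).read().strip()
    snap = datetime.now(timezone.utc).strftime("sync-%Y%m%dT%H%M%SZ")

    # Update the dataset in place; it stays mounted and running throughout,
    # and copy-on-write means the old snapshot still pins the old blocks.
    run(["rsync", "-a", "--delete", UPSTREAM, f"/{MIRROR_FS}/"])

    # Once the sync is complete and confirmed, snapshot the new state...
    run(["zfs", "snapshot", f"{MIRROR_FS}@{snap}"])

    # ...and send just the delta between the two snapshots to the serving
    # dataset. `zfs receive -F` rolls it back to the common snapshot first.
    sender = subprocess.Popen(
        ["zfs", "send", "-i", f"@{prev}", f"{MIRROR_FS}@{snap}"],
        stdout=subprocess.PIPE)
    run(["zfs", "receive", "-F", SERVING_FS], stdin=sender.stdout)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")

    with open(STATE, "w") as f:
        f.write(snap)
    # Step four, drinking beer, is left to the operator.

if __name__ == "__main__":
    sync_once()
```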


SLIDE 15


AARNetCDN on AARNet 4

  • Gluster?

– The suggested model is to bind all 72 disks into one RAID set
– ‘Totally nothing will go wrong with that, really honest’
– Nope

  • Rsync always needs a filesystem to write to.
  • Beyond a certain size, mirror or CDN filesystems are hard if you don’t have infinite money and people to throw at the problem.


SLIDE 16


Thank You

Stephen.walsh@aarnet.edu.au