  1. High Availability with a minimal Cluster
     29 October 2009
     Thorsten Früauf, Availability Engineering, Sun Microsystems GmbH
     Open HA Cluster

  2. Agenda
     - Motivation
     - Open HA Cluster 2009.06
     - Minimal HA Configuration
     - Weak Membership
     - COMSTAR / iSCSI / ZFS
     - Crossbow
     - IPS
     - Live Demo
     - References

  3. Why care about High Availability?
     - Computer systems provide services: web services, databases, business logic, file systems, etc.
     - Failures are inevitable: software bugs, hardware components, people and processes, natural disasters, terrorism
     - Downtime is costly
     - Services should be available as close as possible to 100% of the time

  4. The Goal of High Availability
     - HA clusters automate the recovery process from inevitable failures to minimize downtime and cost

  5. You don't want your users to see this...

  6. Methods to implement HA
     - Redundant hardware: physical nodes, network adapters, network paths, storage, storage paths, etc.
     - Software monitoring: physical nodes, applications, network paths, storage paths, etc.
     - Failover to secondary hardware when problems detected

  7. Perceptions of HA Clusters
     - Complex
     - Difficult to use
     - Complicated
     - Requires special hardware
     - Heavyweight
     - Expensive
     - Difficult to install
     Perceptions not completely unfounded...

  8. Typical HA Cluster hardware configuration
     - Two or more physical machines
     - Four or more network adapters on each machine
     - Dedicated interconnects between nodes
     - Shared disk storage: multi-homed disks or network-attached storage
     - Redundant storage paths from each node
     - Quorum arbitration device
     - etc.

  9. Typical HA Cluster software components
     - Heartbeats
     - Membership
     - Distributed configuration repository
     - Service management
     - Cluster-private networking layer
     - Global file system
     - Network load-balancing
     - etc.

  10. Solaris Cluster Architecture (diagram)
      - Global Network Service: provides a global IP address, software load balancing with failure protection
      - Scalable Service
      - HA Failover Service: application failover (within nodes or containers), container failover, failover IP address
      - Monitoring, Heartbeats, Membership
      - Global File Service: failover file service
      - Quorum, Disk Fencing

  11. Campus / Metro Cluster (diagram): up to 400 km

  12. Solaris Cluster Geographic Edition (diagram: primary site, backup site, optional heartbeat network, optional storage network)
      - Oracle RAC 9 & 10 support
      - Replication: Sun StorEdge Availability Suite 4.0, EMC SRDF, HDS TrueCopy, Data Guard for Oracle RAC, script-based plugin for MySQL

  13. Re-evaluate HA Cluster Complexity
      - Many use cases (incl. SLAs) require some of the hardware and software in traditional HA Clusters... but not everything
      - Approach: "good enough" is sufficient as well!
      - Configure, install, and use only the hardware and software components you actually need

  14. Goals of Project Colorado
      - Provide a lightweight, modular cluster that can run on minimized hardware configurations
      - What has been possible before should still be possible to configure, as much as OpenSolaris allows

  15. How to Get There
      - Port the Open HA Cluster source to work with OpenSolaris
      - Add hardware minimization features
      - Leverage the OpenSolaris Image Packaging System (IPS) for software modularity and extensibility
      - Analyze all package dependencies

  16. Development context
      Binary distributions:
      - SCX 07/07 runs with SXCE build 68
      - SCX 10/07 runs with SXCE build 70b
      - SCX 02/08 runs with SXCE build 79b
      - SCX 06/08 runs with SXCE build 86
      - SCX 09/08 runs with SXCE build 97
      - SCX 12/08 runs with SXCE build 101b
      - OHAC 2009.06 runs with OpenSolaris build 111b
      Source code:
      - Open HA Cluster source code (open since 27 June 2007) follows current development
      - Corresponding commercial releases: Solaris Cluster 3.2 12/06, Solaris Cluster 3.2 02/08 (Update 1), Solaris Cluster 3.2 01/09 (Update 2)
      Legend: SCX = Solaris Cluster Express, OHAC = Open HA Cluster, SXCE = Solaris Express Community Edition

  17. Solaris Express vs. OpenSolaris
      Solaris Express (Nevada):
      - KSH88 (default), KSH93
      - Installer (old & some new)
      - SVR4 packages (default)
      - UFS root
      - CDE / JDS, Motif
      - Encumbered code: X window system, Motif libs, webconsole / JATO, binaries from closed source
      - Zones: native (sparse & full root), lx
      - Binary distribution of usr/src and usr/closed; not freely redistributable
      OpenSolaris 200X.Y:
      - KSH93 (default)
      - Installer (new)
      - IPS (default), SVR4 packages (legacy)
      - ZFS root
      - GNOME
      - x86/x64, SPARC
      - COMSTAR, Crossbow
      - Zones: ipkg (full root), lx
      - Binary distribution on LiveCD; freely redistributable packages (pkg.opensolaris.org repo) plus not freely redistributable packages (pkg.sun.com repo)
      Common to both: ON (= OS/Net), Java

  18. Open HA Cluster 2009.06 (Colorado-I)
      - Runs on OpenSolaris 2009.06 (SPARC & x86/x64)
      - Many features from Solaris Cluster 3.2 available
      - Free to use (without support); support subscriptions available
      - Installation from the IPS package repository at https://pkg.sun.com/opensolaris/ha-cluster (see the sketch below)
      - Source is open and freely available at http://www.opensolaris.org/os/community/ha-clusters/
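      A rough sketch of that installation, assuming the pkg.sun.com repository requires the registered certificate/key pair and that the framework package is named ha-cluster-full (both assumptions, not taken from the slides):

          # Register the Open HA Cluster publisher (adjust the key/cert paths
          # to wherever your downloaded pkg.sun.com credentials live).
          pfexec pkg set-publisher \
              -k /var/pkg/ssl/Sun_OpenSolaris_key.pem \
              -c /var/pkg/ssl/Sun_OpenSolaris_cert.pem \
              -O https://pkg.sun.com/opensolaris/ha-cluster ha-cluster

          # Install the cluster framework (package name is an assumption).
          pfexec pkg install ha-cluster-full

          # Afterwards the cluster itself is configured with the usual scinstall dialog.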

  19. Open HA Cluster 2009.06 Agents
      - Apache Webserver
      - Apache Tomcat
      - MySQL
      - GlassFish
      - NFS
      - DHCP
      - DNS
      - Kerberos
      - Samba
      - HA Containers (ipkg zones)
      - Generic Data Service (GDS)
      (an example of putting one of these agents into service follows below)
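      As an illustration only: bringing the Apache agent into service on a two-node cluster could look roughly like this. The group, resource, and hostname names (web-rg, apache-rs, web-lh) are invented, and the paths assume the stock OpenSolaris Apache 2.2 packages.

          # Register the Apache resource type shipped with the agents.
          pfexec clresourcetype register SUNW.apache

          # Failover resource group plus a logical hostname for clients
          # (web-lh must resolve to an address on the public subnet).
          pfexec clresourcegroup create web-rg
          pfexec clreslogicalhostname create -g web-rg -h web-lh web-lh-rs

          # The Apache resource itself, depending on the logical hostname.
          pfexec clresource create -g web-rg -t SUNW.apache \
              -p Bin_dir=/usr/apache2/2.2/bin \
              -p Port_list=80/tcp \
              -p Resource_dependencies=web-lh-rs apache-rs

          # Put the group under cluster control and bring it online.
          pfexec clresourcegroup manage web-rg
          pfexec clresourcegroup online web-rg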

  20. Hardware Minimization
      - Using local disks as "poor man's shared storage" with COMSTAR iSCSI and ZFS
      - Using Crossbow VNICs for private cluster traffic over the public network (see the sketch below)
      - "Weak membership" (preview-only feature)
      - Taken together, these allow any two nodes on the same IP subnet to form a functional cluster
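      A minimal sketch of the Crossbow part, assuming the public interface is called e1000g0 (a placeholder); during cluster configuration the resulting VNIC is then named as the transport adapter instead of a dedicated physical NIC:

          # Create a virtual NIC on top of the existing public link.
          pfexec dladm create-vnic -l e1000g0 vnic1

          # Confirm the new link exists.
          dladm show-vnic vnic1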

  21. Minimal HA Configuration (diagram)
      - Heartbeats over VNICs
      - IPS package repositories
      - External ping targets
      - Weak Membership, Monitoring
      - HA Failover Service: application failover, IP address failover, HA ZFS failover
      - Local storage exported as iSCSI targets
      - Failover ZFS file system mirroring the iSCSI targets

  22. Technologies usable for Minimization
      - Weak Membership
      - Software Quorum
      - Quorum Server
      - Optional Fencing
      - HA ZFS
      - COMSTAR / iSCSI
      - IPsec
      - Crossbow
      - IPS
      - VirtualBox for training and development

  23. HA Cluster "Strong Membership"
      - Uses the concept of quorum to ensure cluster consistency in the presence of partitions in space and time
      - A partition in space (network partition) can cause split-brain
      - A partition in time can cause amnesia
      - A two-node cluster requires a third arbitration device in case of partitions, typically a hardware disk or a software quorum server
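      For contrast with the weak-membership model on the next slide, adding a traditional quorum device to a running two-node cluster is a single command; d4 below is a placeholder for a shared DID device:

          # List the available devices, then add one shared disk as the quorum device.
          cldevice list -v
          pfexec clquorum add d4

          # Verify the quorum configuration and current vote counts.
          clquorum status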

  24. Weak Membership (preview feature)
      - Run a two-node cluster without a quorum device
      - An external "ping target" is used as a health check to arbitrate in case of split-brain
      - Worst case, both nodes stay up and provide service; OpenSolaris Duplicate Address Detection (DAD) can mitigate this somewhat
      - Places the importance of availability above data integrity; can lead to data loss

  25. Why use Weak Membership?
      - Read-only or read-mostly applications
      - Availability is more important than data integrity and the SLA matches (the solution is "good enough")
      - Test clusters with limited resources
      - Demos, development, training

  26. iSCSI Storage
      - IP-based storage networking standard
      - Initiators (clients) send SCSI commands to targets (storage devices) over regular IP networks
      - Alternative to NAS, SAN, and DAS
      - The OpenSolaris COMSTAR framework (Common Multiprotocol SCSI Target) provides the iSCSI target implementation

  27. COMSTAR iSCSI for OHAC 2009.06
      - Each node exports a directly-attached disk as an iSCSI target
      - Both nodes access both disks through iSCSI initiators
      - A mirrored zpool is built on top of the two disks
      - HAStoragePlus imports the zpool on the node hosting the services that need it
      - If one node goes down, the local half of the mirror is still available and accessible from the other node
      (a sketch of the target and initiator setup follows below)
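      The target and initiator side of this setup can be sketched roughly as follows. Disk names, GUIDs, and addresses are placeholders, and the real procedure (including how targets are restricted to the cluster nodes) is in the OHAC documentation rather than here:

          # On each node: enable COMSTAR and the iSCSI target service.
          pfexec svcadm enable stmf
          pfexec svcadm enable -r svc:/network/iscsi/target:default

          # Export the local data disk as a SCSI logical unit
          # (c1t1d0s2 is a placeholder for the local disk).
          pfexec sbdadm create-lu /dev/rdsk/c1t1d0s2

          # Make the new LU visible and create a target; the GUID comes from the
          # sbdadm output (the value below is a made-up placeholder).
          pfexec stmfadm add-view 600144f00000000000000000deadbeef
          pfexec itadm create-target

          # On each node: enable the initiator and discover both nodes' targets
          # (the addresses are placeholders for the two nodes).
          pfexec svcadm enable svc:/network/iscsi/initiator:default
          pfexec iscsiadm add discovery-address 192.168.10.11
          pfexec iscsiadm add discovery-address 192.168.10.12
          pfexec iscsiadm modify discovery --sendtargets enable
          pfexec devfsadm -i iscsi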

  28. COMSTAR iSCSI Configuration (diagram)
      - Node 1 and Node 2 each export a local disk as an iSCSI target
      - Each node runs an iSCSI initiator; the iSCSI traffic flows over clprivnet0
      - A mirrored zpool spans the two iSCSI disks
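      To complete the picture, building the mirrored pool and handing it to HAStoragePlus might look like the following sketch; the pool, group, resource, and node names (hapool, data-rg, hasp-rs, node2) and the disk names are invented for illustration:

          # On one node: build a mirrored pool from the two iSCSI-backed disks
          # (c2t1d0 and c3t1d0 are placeholders for the names devfsadm reported).
          pfexec zpool create hapool mirror c2t1d0 c3t1d0

          # Register HAStoragePlus and put the pool into a failover resource group.
          pfexec clresourcetype register SUNW.HAStoragePlus
          pfexec clresourcegroup create data-rg
          pfexec clresource create -g data-rg -t SUNW.HAStoragePlus \
              -p Zpools=hapool hasp-rs

          # Bring the group online, then switch it to the other node to test failover.
          pfexec clresourcegroup manage data-rg
          pfexec clresourcegroup online data-rg
          pfexec clresourcegroup switch -n node2 data-rg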
