MySQL High Availability Solutions - Alex Poritskiy, Percona (PowerPoint PPT presentation)



SLIDE 1

MySQL High Availability Solutions

Alex Poritskiy
 Percona

SLIDE 2

The Five 9s of Availability

Availability   Downtime/year   Required technology                     Typical users
90%            ~35 days        Unmanaged                               Small Business
99%            ~4 days         Well-Managed                            ISPs & Mainstream Business
99.9%          ~8 hours        Replication Technologies                Data Centers
99.99%         ~50 mins        Clustering Technologies                 Banking, Medical, Telco
99.999%        ~5 mins         Clustering & Geographical Redundancy    Military, Defense

Each step up the ladder also covers more failure classes: maintenance operations and software/hardware upgrades, then software and hardware failures, then network failures, power failures, and site disasters.
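The downtime column follows directly from the availability percentage; a small illustrative script to sanity-check the figures above:

```python
# Yearly downtime implied by an availability percentage ("number of nines").
# E.g. 99.9% allows roughly 8.8 hours of downtime per year, 99.999% only ~5 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at the given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (90.0, 99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} min/year")
```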

SLIDE 3

MySQL High Availability Solutions

  • MySQL Replication
  • MHA or Orchestrator (by GitHub) on top of MySQL Replication
  • Percona XtraDB Cluster/Galera
  • MySQL InnoDB Cluster (Group Replication)
  • MySQL NDB Cluster
SLIDE 4

MySQL Replication

99.9%

SLIDE 5

MySQL Replication Topologies

  • Master > Slave
  • Masters > Slave (Multi-Source)
  • Master <> Master (Multi-Master)
  • Master > Slaves
  • Circular (Multi-Master)
  • Master > Slave > Slaves

SLIDE 6

Inside MySQL Replication

[Diagram: Web/App servers send writes & reads to the MySQL Master (mysqld, data, index & binlogs). On the MySQL Slave, the I/O thread writes incoming binlog events to the relay log, and the SQL thread applies them to the slave's data.]
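The flow above is wired up with standard replication statements; a minimal sketch using the classic (pre-8.0) syntax that matches the deck's terminology. Host name, user, password, and binlog coordinates are placeholders:

```sql
-- On the master: create a dedicated replication account (names are illustrative).
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave: point the I/O thread at the master's binlog,
-- then start both replication threads.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- Verify that the I/O and SQL threads are both running.
SHOW SLAVE STATUS\G
```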

SLIDE 7

MySQL Replication (Scale-Out)

[Diagram: Web/App servers send writes to the Master Server (index & binlog rotation) and reads to Slave Servers; backups are taken from a slave; replication fans out from the master.]

Possible slave roles:

  • Fail-over server
  • Used for performing backups
  • Read load balancing
  • Additional slaves allow Scale-Out

SLIDE 8

MySQL Replication (Fail-Over)

[Diagram: the Master Server fails (X); a slave is promoted via manual fail-over, the Web/App servers' writes & reads are redirected to it, and index & binlog rotation moves to the new master.]

SLIDE 9

MySQL Replication Pros

  • Mature, standard feature of MySQL.
  • Compatible with many topologies.
  • Supports multi-datacenter “clusters”.
  • Easy to configure.
  • Continues to gain new features:

▪ Semi-synchronous replication
▪ Row-based binary logs
▪ Parallel SQL threads
▪ Global Transaction ID (GTID)
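The newer features listed above are enabled through server options; a minimal my.cnf sketch with illustrative values, using MySQL 5.7-era option names:

```ini
[mysqld]
server_id                = 2
log_bin                  = mysql-bin
binlog_format            = ROW            # row-based binary logs
gtid_mode                = ON             # global transaction IDs
enforce_gtid_consistency = ON
slave_parallel_workers   = 4              # parallel SQL (applier) threads
slave_parallel_type      = LOGICAL_CLOCK

# Semi-synchronous replication is provided by plugins; on the master:
#   INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
rpl_semi_sync_master_enabled = ON
```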

SLIDE 10

MySQL Replication Cons

  • Slave lag.

▪ Use row-based binary logs and other strategies.

  • Slave drift.

▪ Use read-only, pt-table-checksum, pt-table-sync.

  • No automatic failover.

▪ Use HAProxy and/or Keepalived, or other solutions.
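The drift check mentioned above is typically scripted with Percona Toolkit; a rough sketch in which host and credentials are placeholders:

```shell
# Checksum all tables on the master and record results in a table
# that replicates to the slaves, exposing any drift.
pt-table-checksum --replicate=percona.checksums \
  h=master.example.com,u=checksum_user,p=secret

# Print (or, with --execute, apply) the statements needed to bring
# drifted slaves back in sync with the master.
pt-table-sync --replicate=percona.checksums --print \
  h=master.example.com,u=checksum_user,p=secret
```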

SLIDE 11

Topology managers + MySQL replication

99.99%

SLIDE 12

MHA toolkit

SLIDE 13

MHA toolkit

SLIDE 14

MHA toolkit

  • Note that MHA never rebuilds an existing multi-master replication link; it always reconfigures the topology as master-slave.
  • MHA only takes care of the MySQL failover, but it can run custom code to make the application and any load balancers aware of the change.
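The failover described above is driven from the MHA manager host; the typical commands are shown below, with the config path as a placeholder:

```shell
# Sanity-check SSH connectivity and the replication topology first.
masterha_check_ssh  --conf=/etc/mha/app1.cnf
masterha_check_repl --conf=/etc/mha/app1.cnf

# Run the monitor; on master failure it promotes the best slave and
# re-points the remaining slaves to the new master.
masterha_manager --conf=/etc/mha/app1.cnf
```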

SLIDE 15

MHA toolkit: Pros

  • Uses traditional MySQL replication.
  • Easy to deploy for an existing replication group.
  • Subscribes slaves to the correct binlog position.
  • Strong command-line support.
  • Very good point-in-time recovery via binary logs.
SLIDE 16

MHA toolkit: Cons

  • Disconnects after failover.
  • No GUI.
  • Not actively updated; long-term stable release.
  • Doesn't crawl and auto-discover the topology.
  • Cannot manage PXC, co-masters, or active/active setups.
  • Failover isn't guaranteed to be instant - delays can occur while logs are scanned.
  • Still asynchronous replication (consistency may be compromised).
  • The monitor can be a single point of failure (though it is only needed for failover).

SLIDE 17

Orchestrator by Github

  • Orchestrator crawls through your replication topology and updates its GUI based on your current topology and health.
  • Orchestrator also knows about binary log positions, GTID, and Binlog Servers.
  • Refactoring replication topologies can be a matter of dragging and dropping a replica under another master.

SLIDE 18

Orchestrator by Github

SLIDE 19

Orchestrator: Pros

  • Integrated into PMM
  • GUI in addition to command line
  • Various options for managing replication (GTID, pseudo-GTID, etc.)

  • Built for GTID
  • Can manage backups and other tasks
  • Currently in development with a dedicated team
  • Auto Discovers topology
  • Works with co-masters
SLIDE 20

Orchestrator: Cons

  • Does not recognise PXC nodes as part of a single cluster (it sees them as separate masters).
  • Point-in-time recovery relies on good slaves; it doesn't do binary log discovery.

SLIDE 21

PXC, MySQL InnoDB Cluster

99.999%

SLIDE 22

Percona XtraDB Cluster (PXC)

  • Synchronous replication: every node is writable.
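Joining a node to a PXC/Galera cluster is configured through wsrep options in my.cnf; a minimal sketch in which addresses and names are placeholders (the provider library path varies by PXC version):

```ini
[mysqld]
wsrep_provider           = /usr/lib/galera3/libgalera_smm.so
wsrep_cluster_name       = pxc-cluster
wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name          = pxc-node-1
wsrep_node_address       = 10.0.0.1
wsrep_sst_method         = xtrabackup-v2   # state snapshot transfer for joining nodes

# Galera requirements:
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```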

SLIDE 23

Percona XtraDB Cluster (PXC)

SLIDE 24

Percona XtraDB Cluster (PXC): 5 nodes across 2 datacenters (example)

[Diagram: PXC nodes in the Main Data Center (gmcast.segment=1) and the Secondary Data Center (gmcast.segment=2), connected over a WAN.]

SLIDE 25

Percona XtraDB Cluster (PXC): 3 nodes in a single datacenter + async slave

[Diagram: a three-node Percona XtraDB Cluster in the Main Data Center, plus an asynchronous MySQL replication slave in the Secondary Data Center reached over the WAN; the MySQL Slave Server is not part of PXC.]

SLIDE 26

Percona XtraDB Cluster (PXC): Pros

  • Writing to any node updates all other nodes synchronously (like semi-sync replication).
  • All nodes can be writers.
  • Adding a new node, or recovering an offline node, initialises data automatically.
  • Multi-datacenter capable (assuming low latency).
  • Quorum-based selection of nodes that can take queries. Nodes not in the quorum will be shut down and must copy data before they can rejoin the cluster.

SLIDE 27

Percona XtraDB Cluster (PXC): Cons

  • Not suitable for large transactions.
  • Not suitable for heavy write traffic, especially rapid writes to a hotspot - don't rely on multiple writers.
  • Reinitialising a node can be expensive.
SLIDE 28

MySQL InnoDB Cluster

  • Synchronous replication: every node is writable as well.
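An InnoDB Cluster is assembled from MySQL Shell via the AdminAPI; a sketch of a mysqlsh (JavaScript mode) session in which host names are placeholders:

```javascript
// In mysqlsh, connected to the instance that will be the first member:
//   mysqlsh root@ic-1.example.com:3306
var cluster = dba.createCluster('mycluster');

// Add the remaining members; each joins Group Replication automatically.
cluster.addInstance('root@ic-2.example.com:3306');
cluster.addInstance('root@ic-3.example.com:3306');

// Inspect member state and the current primary.
cluster.status();
```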

SLIDE 29

MySQL InnoDB Cluster: Pros

  • Writing to any node updates all other nodes synchronously (like semi-sync replication).
  • All nodes can be writers.
  • Single-Primary and Multi-Primary modes available.
  • Quorum-based selection of nodes that can take queries.
SLIDE 30

MySQL InnoDB Cluster: Cons

  • Not suitable for large transactions.
  • Not suitable for heavy write traffic, especially rapid writes to a hotspot - don't rely on multiple writers.
  • No SST concept - all nodes need to start with the same data.
  • A replication error simply stops replication to that node.
  • Inconsistencies can (and should) be fixed manually.
SLIDE 31

Percona XtraDB Cluster and MySQL InnoDB Cluster: similar limitations

  • Large & long-running transactions:
    ▪ higher chance of failures
    ▪ inefficient replication of large transactions
  • Workload hotspots (updating the same set of rows in parallel).
  • Often writing to one node is the best solution.
SLIDE 32

99.9999%

SLIDE 33

MySQL NDB Cluster

  • Shared-Nothing architecture
  • Synchronous replication (2-phase commit)
  • In-memory store
  • Traditionally used by:
    ▪ Telecom operators
    ▪ Gaming companies

SLIDE 34

MySQL NDB Cluster Basic Architecture

[Diagram: Web/App servers reach the MySQL Cluster through MySQL Servers or the NDB API for all writes & reads. Four Data Nodes run the NDB storage engine (memory & disk), coordinated by redundant Management Servers.]

SLIDE 35

MySQL NDB Cluster Components

  • Data Nodes
    ▪ Store redundant data fragments
    ▪ Combined to form Node Groups
  • Management Nodes
    ▪ Used at startup
    ▪ Re-configuration
  • API Nodes
    ▪ Standard SQL interface
    ▪ NDB API (an object-oriented application programming interface for NDB Cluster that implements indexes, scans, transactions, and event handling)

SLIDE 36

MySQL NDB Cluster Data node Architecture

  • Four Data Nodes
  • Two Replicas
  • Two Node Groups

[Diagram: a sample table (ID, Capital, Country, UTC offset) is split into four partitions. Node Group 1 holds P1-Primary/P2-Secondary on one data node and P1-Secondary/P2-Primary on the other; Node Group 2 holds P3 and P4 in the same mirrored layout.]

SLIDE 37

MySQL NDB Cluster Data node Architecture

  • Four Data Nodes
  • Two Replicas
  • Two Node Groups

[Diagram: the same four-partition layout as the previous slide, but with one data node failed (X) in each node group. The surviving node in each group still holds a complete copy of its partitions, so the data is still available.]

SLIDE 38

MySQL NDB Cluster Scale-Out

[Diagram: the cluster scales out by adding more data nodes (and management servers) behind the same NDB storage engine and NDB API layer.]

SLIDE 39

MySQL NDB Cluster Failover

[Diagram: several components - data nodes, a management server, and API nodes - are marked as failed (X), yet the cluster continues to serve traffic through the remaining redundant components.]

SLIDE 40

MySQL NDB Cluster + Replication

SLIDE 41

MySQL NDB Cluster: Pros

  • Every component has redundancy for failover.
  • Exceptionally high performance for reads and writes, if queries are simple and single-row.
  • Ultimate HA - add new nodes with no downtime.
SLIDE 42

MySQL NDB Cluster: Cons

  • Not a general-purpose system.
  • You probably need to redesign your schema.
  • NDB is a distinct storage engine; it is not InnoDB.
  • Queries that do not hit primary keys can be much slower and transfer lots of data temporarily to SQL nodes.
  • Not suitable for multi-datacenter use (this can only be achieved via async replication between clusters).
  • Increased hardware costs - all components are redundant.
  • Needs enough RAM.
SLIDE 43

How to choose the right HA solution?

  • Know your availability goals and requirements.
  • Know your workload (what? how many? what ratio? how complex? etc.).
  • Know the limitations of the solutions under consideration.
  • Know your application's requirements and internal logic.
SLIDE 44

How to choose the right HA solution?

  • Do not fully rely on the HA solution - be ready to handle some scenarios on the application side, since usually no single solution covers every use case and scenario.
  • Once implemented, develop and follow code/infrastructure deployment guidelines to stay compatible with the implemented solution (what can it do? what can't it do? how are we doing things now? etc.).
  • Test, test, test...
SLIDE 45

Demo: MySQL NDB Cluster

Let's build a small MySQL NDB Cluster from 4 pre-set-up Ubuntu 18.10 VMs:

  mgm_sql1 - 192.168.56.103 - Management node
  node1    - 192.168.56.101 - Data node 1
  node2    - 192.168.56.102 - Data node 2
  mgm_sql2 - 192.168.56.104 - SQL node
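For the demo layout above, the management node's config.ini and the SQL node's my.cnf would look roughly like this (a sketch; section layout is standard NDB configuration, file paths are illustrative):

```ini
# config.ini on mgm_sql1 (192.168.56.103)
[ndbd default]
NoOfReplicas = 2             # two copies of every data fragment

[ndb_mgmd]
HostName = 192.168.56.103    # management node

[ndbd]
HostName = 192.168.56.101    # data node 1

[ndbd]
HostName = 192.168.56.102    # data node 2

[mysqld]
HostName = 192.168.56.104    # SQL node slot

# my.cnf on the SQL node (mgm_sql2, 192.168.56.104):
# [mysqld]
# ndbcluster
# ndb-connectstring = 192.168.56.103
```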

SLIDE 46

Thank you!

SLIDE 47

Thank you, Sponsors!