InnoDB ReplicaSet in MySQL 8.0.19 ( configuration + switchover of the primary member )

On 13th January 2020, the MySQL community released the latest MySQL 8.x version ( 8.0.19 ). MySQL 8.0.19 ships with a good number of bug fixes and some cool features. InnoDB ReplicaSet is one of them, and the most interesting one for me. In this blog I am going to explain the configuration of MySQL InnoDB ReplicaSet and how to do a smooth switchover with InnoDB ReplicaSet.

What is InnoDB ReplicaSet ?

From the MySQL documentation,

The AdminAPI includes support for InnoDB ReplicaSet, that enables you to administer a set of MySQL instances running asynchronous GTID-based replication in a similar way to InnoDB Cluster. An InnoDB ReplicaSet consists of a single primary and multiple secondaries.

Note : To configure an InnoDB ReplicaSet, the servers must be configured with GTID-based replication.

InnoDB ReplicaSet configuration :

First of all, I created two machines with MySQL 8.0.19 for the ReplicaSet configuration.

  • 192.168.33.14 ( sakthilabs14 )
  • 192.168.33.15 ( sakthilabs15 )

Below, I provide each step involved in the InnoDB ReplicaSet configuration.

Step 1 : ( allow communication between the hosts )

Allow hostname-based communication between the servers involved in the InnoDB ReplicaSet.

[root@sakthilabs14 ~]# cat /etc/hosts | grep -i sakthi
192.168.33.14 sakthilabs14 sakthilabs14
192.168.33.15 sakthilabs15 sakthilabs15

Step 2 : ( preparing the servers for the ReplicaSet configuration )

Each server must be configured with GTID enabled and a unique server_id; a minimal my.cnf sketch is shown below.
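
A minimal my.cnf sketch of these requirements ( values are illustrative; server_id must be unique on every member ):

[mysqld]
server_id                = 14   # unique per member
gtid_mode                = ON   # mandatory for InnoDB ReplicaSet
enforce_gtid_consistency = ON   # required alongside gtid_mode

Alternatively, dba.configureReplicaSetInstance() below can fix these settings for you and restart the server.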

MySQL localhost:33060+ ssl JS > dba.configureReplicaSetInstance('root@localhost',{clusterAdmin: "'sakthiRepl'@'%'", clusterAdminPassword: 'Repl@321'});
Configuring local MySQL instance listening at port 3306 for use in an InnoDB ReplicaSet…

This instance reports its own address as sakthilabs14:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |                | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Cluster admin user 'sakthiRepl'@'%' created.
Configuring instance…
The instance 'sakthilabs14:3306' was configured to be used in an InnoDB ReplicaSet.
Restarting MySQL…
NOTE: MySQL server at sakthilabs14:3306 was restarted.

Note : This step must be performed on all the servers that are going to participate in the InnoDB ReplicaSet.

Step 3 : ( creating the ReplicaSet )

This step initiates the ReplicaSet on the server that will act as the primary.

MySQL localhost:33060+ ssl JS > var rs = dba.createReplicaSet("firstReplicaset")
A new replicaset with instance 'sakthilabs14:3306' will be created.

Checking MySQL instance at sakthilabs14:3306

This instance reports its own address as sakthilabs14:3306
sakthilabs14:3306: Instance configuration is suitable.
Updating metadata…

ReplicaSet object successfully created for sakthilabs14:3306.
Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

MySQL localhost:33060+ ssl JS > rs.status()
{
    "replicaSet": {
        "name": "firstReplicaset",
        "primary": "sakthilabs14:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "sakthilabs14:3306": {
                "address": "sakthilabs14:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

Step 4 : ( adding the second machine as a secondary server )

MySQL localhost:33060+ ssl JS > rs.addInstance('sakthiRepl@192.168.33.15:3306');
Adding instance to the replicaset…

NOTE: The target instance 'sakthilabs15:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'sakthilabs15:3306' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): clone

Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.

Waiting for clone to finish…
NOTE: sakthilabs15:3306 is being cloned from sakthilabs14:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed

NOTE: sakthilabs15:3306 is shutting down…

Waiting for server restart… ready

sakthilabs15:3306 has restarted, waiting for clone to finish…
** Stage RESTART: Completed

Clone process has finished: 59.63 MB transferred in about 1 second (~59.63 MB/s)

** Configuring sakthilabs15:3306 to replicate from sakthilabs14:3306
** Waiting for new instance to synchronize with PRIMARY…

The instance 'sakthilabs15:3306' was added to the replicaset and is replicating from sakthilabs14:3306.
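
For scripted, non-interactive runs, the recovery method can also be passed explicitly instead of answering the prompt; a sketch using the same account:

MySQL localhost:33060+ ssl JS > rs.addInstance('sakthiRepl@192.168.33.15:3306', {recoveryMethod: 'clone'});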

Step 5 : ( checking the ReplicaSet status )

MySQL localhost:33060+ ssl JS > rs.status();
{
    "replicaSet": {
        "name": "firstReplicaset",
        "primary": "sakthilabs14:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "sakthilabs14:3306": {
                "address": "sakthilabs14:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "sakthilabs15:3306": {
                "address": "sakthilabs15:3306",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Slave has read all relay log; waiting for more updates",
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for master to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

From the ReplicaSet status output, you can monitor the complete replication state, including the replication lag.
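
If you need more detail, such as applier workers and received/applied transaction sets, status() also accepts an extended option ( a sketch; values 0-2 increase verbosity ):

MySQL localhost:33060+ ssl JS > rs.status({extended: 1});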

How to switch over the primary member in InnoDB ReplicaSet ?

From the output above,

  • sakthilabs14 ( PRIMARY )
  • sakthilabs15 ( SECONDARY )

In this example, I am going to make sakthilabs15 the PRIMARY and sakthilabs14 a SECONDARY member.

Changing the PRIMARY member:

MySQL localhost:33060+ ssl JS > rs.setPrimaryInstance('sakthilabs15:3306')
sakthilabs15:3306 will be promoted to PRIMARY of 'firstReplicaset'.
The current PRIMARY is sakthilabs14:3306.

Connecting to replicaset instances
** Connecting to sakthilabs14:3306
** Connecting to sakthilabs15:3306
** Connecting to sakthilabs14:3306
** Connecting to sakthilabs15:3306

Performing validation checks
** Checking async replication topology…
** Checking transaction state of the instance…

Synchronizing transaction backlog at sakthilabs15:3306

Updating metadata

Acquiring locks in replicaset instances
** Pre-synchronizing SECONDARIES
** Acquiring global lock at PRIMARY
** Acquiring global lock at SECONDARIES

Updating replication topology
** Configuring sakthilabs14:3306 to replicate from sakthilabs15:3306

sakthilabs15:3306 was promoted to PRIMARY.

There is also ReplicaSet.forcePrimaryInstance(), which forcefully reconfigures ( fails over ) the PRIMARY instance when the current primary is unreachable.
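
A sketch of such a failover call, run against a surviving member ( note that, unlike setPrimaryInstance(), this may lose transactions that were never replicated from the old primary ):

MySQL localhost:33060+ ssl JS > rs.forcePrimaryInstance('sakthilabs15:3306')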

Limitations :

  • InnoDB ReplicaSet does not support automatic failover
  • Data loss is possible in a crash, since the replication is asynchronous

Thanks !!

MySQL InnoDB Cluster Tutorial 3 ( Switching cluster mode and primary member )

So far, I have written two tutorial blogs about MySQL InnoDB Cluster. Those blogs describe the InnoDB Cluster configuration and how to integrate InnoDB Cluster with MySQL Router. You can get them through the links below.

In this blog I am going to explain the following two things:

  • How to switch the cluster to ( single | multi ) primary mode without downtime ?
  • How to make a specific node the primary member without downtime ?

I have already configured a three-node InnoDB Cluster with a single-primary topology.
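
If you are starting from a fresh MySQL Shell session, the cluster object can be fetched again before running the commands below ( a sketch, assuming the admin account from the earlier tutorials ):

MySQL 192.168.33.11:3306 ssl JS > var cluster = dba.getCluster('first_InnoDB_cluster');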

MySQL 192.168.33.11:3306 ssl JS > cluster.getName();
first_InnoDB_cluster

MySQL 192.168.33.11:3306 ssl JS > \sql
Switching to SQL mode… Commands end with ;

MySQL 192.168.33.11:3306 ssl SQL > select channel_name,member_host,member_state,member_role,member_version from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs11
  member_state: ONLINE
   member_role: PRIMARY
member_version: 8.0.18
*************************** 2. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs12
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
*************************** 3. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs13
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
3 rows in set (0.0070 sec)

Now it is time for the experiment.

1. How to switch the cluster to ( single | multi ) primary mode without downtime ?

Right now, my cluster topology is single-primary mode.

        }
    }, 
    "topologyMode": "Single-Primary"
}, 
"groupInformationSourceMember": "sakthilabs11:3306"

To convert to multi-primary mode:

MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.switchToMultiPrimaryMode();
Switching cluster 'first_InnoDB_cluster' to Multi-Primary mode…

Instance 'sakthilabs11:3306' remains PRIMARY.
Instance 'sakthilabs12:3306' was switched from SECONDARY to PRIMARY.
Instance 'sakthilabs13:3306' was switched from SECONDARY to PRIMARY.

The cluster successfully switched to Multi-Primary mode.

With plain MySQL Group Replication, you would need to turn the variable group_replication_single_primary_mode to OFF to switch to a multi-primary topology, and that change is not allowed while the group is active. So this is difficult to do without MySQL Shell.
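
For reference, MySQL 8.0.13 and later also expose SQL functions that perform the same switch online; MySQL Shell essentially automates these for you ( a sketch, assuming 8.0.13+; the UUID argument is a placeholder ):

mysql> SELECT group_replication_switch_to_multi_primary_mode();
mysql> SELECT group_replication_switch_to_single_primary_mode('<member_uuid>');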

To convert back to single-primary mode:

MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.switchToSinglePrimaryMode();
Switching cluster 'first_InnoDB_cluster' to Single-Primary mode…

Instance 'sakthilabs11:3306' remains PRIMARY.
Instance 'sakthilabs12:3306' was switched from PRIMARY to SECONDARY.
Instance 'sakthilabs13:3306' was switched from PRIMARY to SECONDARY.

WARNING: Existing connections that expected a R/W connection must be disconnected, i.e. instances that became SECONDARY.

The cluster successfully switched to Single-Primary mode.

Perfect !!

2. How to make a specific node the primary member without downtime ?

Right now, sakthilabs12 is the primary member of my cluster.

"primary": "sakthilabs12:3306",
"ssl": "REQUIRED",
"status": "OK",

Because of a maintenance task, I wanted to switch the primary member to sakthilabs11 without any downtime.

MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.setPrimaryInstance('sakthilabs11:3306')
Setting instance 'sakthilabs11:3306' as the primary instance of cluster 'first_InnoDB_cluster'…

Instance 'sakthilabs11:3306' was switched from SECONDARY to PRIMARY.
Instance 'sakthilabs12:3306' was switched from PRIMARY to SECONDARY.
Instance 'sakthilabs13:3306' remains SECONDARY.

The instance 'sakthilabs11:3306' was successfully elected as primary.

"primary": "sakthilabs11:3306",
"ssl": "REQUIRED",
"status": "OK",

Perfect !!

With plain Group Replication, you would need to perform the below task to switch the primary node without any downtime.

mysql> select @@hostname;
+---------------+
| @@hostname    |
+---------------+
| sakthilabs13  |
+---------------+
1 row in set (0.00 sec)
mysql> show global variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                | 
+---------------+--------------------------------------+
| server_uuid   | f374a06e-28a5-11ea-a6c6-080027bc6e8c |
+---------------+--------------------------------------+
1 row in set (0.00 sec)
mysql> select group_replication_set_as_primary('f374a06e-28a5-11ea-a6c6-080027bc6e8c');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('f374a06e-28a5-11ea-a6c6-080027bc6e8c') |
+--------------------------------------------------------------------------+
| Primary server switched to: f374a06e-28a5-11ea-a6c6-080027bc6e8c         |
+--------------------------------------------------------------------------+
1 row in set (0.08 sec)
mysql> select member_host,member_role from performance_schema.replication_group_members;
+---------------+-------------+
| member_host   | member_role |
+---------------+-------------+
| sakthilabs11  | SECONDARY   |
| sakthilabs12  | SECONDARY   |
| sakthilabs13  | PRIMARY     |
+---------------+-------------+
3 rows in set (0.02 sec)

I hope this blog helps someone who has started to learn MySQL InnoDB Cluster. I'll update my next blog soon.

Thanks !!

MySQL InnoDB Cluster Tutorial 2 ( Integrating with MySQL router )

In my last blog, I explained the details of configuring InnoDB Cluster ( Group Replication + MySQL Shell ). You can find the link below.

MySQL InnoDB Cluster Tutorial 1 ( Group Replication + MySQL Shell )

In this blog, I am going to explain how to integrate MySQL Router with the existing cluster setup.

As I explained in Tutorial 1, I have already configured the cluster with MySQL Shell and Group Replication:

MySQL 192.168.33.11:3306 ssl JS > cluster.getName();
first_InnoDB_cluster

MySQL 192.168.33.11:3306 ssl JS > \sql
Switching to SQL mode… Commands end with ;

MySQL 192.168.33.11:3306 ssl SQL > select channel_name,member_host,member_state,member_role,member_version from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs11
  member_state: ONLINE
   member_role: PRIMARY
member_version: 8.0.18
*************************** 2. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs12
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
*************************** 3. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs13
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
3 rows in set (0.0070 sec)

Let's jump into the topic. As a first step, we need to install MySQL Router:

yum install mysql-router-community.x86_64

I have installed the MySQL Router community edition.

# yum list installed | grep -i router
mysql-router-community.x86_64 8.0.18-1.el7 @mysql-tools-community

As a second step, we need to create a dedicated directory for the MySQL Router operation. After this, we need to run MySQL Router with the bootstrap option.

# mkdir -p /root/mysqlrouter

# mysqlrouter --bootstrap InnoDBCluster@sakthilabs11:3306 --directory /root/mysqlrouter --user=root
Please enter MySQL password for InnoDBCluster:

Bootstrapping MySQL Router instance at '/root/mysqlrouter'…

…….

MySQL Classic protocol

Read/Write Connections: localhost:6446
Read/Only Connections: localhost:6447

MySQL X protocol

Read/Write Connections: localhost:64460
Read/Only Connections: localhost:64470

  • --bootstrap : the bootstrap option automatically configures the router to operate against the MySQL InnoDB Cluster

The below files are created after bootstrapping the router.

pwd : /root/mysqlrouter

drwx------. 2 root root    6 Dec 30 15:07 run
-rw-------. 1 root root   88 Dec 30 15:07 mysqlrouter.key
drwx------. 2 root root   29 Dec 30 15:07 log
-rwx------. 1 root root  277 Dec 30 15:07 start.sh
-rw-------. 1 root root 1.4K Dec 30 15:07 mysqlrouter.conf
drwx------. 2 root root   39 Dec 30 15:07 data
-rwx------. 1 root root  161 Dec 30 15:07 stop.sh

mysqlrouter.conf contains the configuration options; an illustrative excerpt is shown below. By triggering the start.sh script we can start the MySQL Router daemon.
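
For reference, an illustrative excerpt of the routing sections a bootstrap typically generates in mysqlrouter.conf ( exact contents vary by router version ):

[routing:first_InnoDB_cluster_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://first_InnoDB_cluster/default?role=PRIMARY
routing_strategy=first-available
protocol=classic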

# ./start.sh
PID 14791 written to '/root/mysqlrouter/mysqlrouter.pid'
logging facility initialized, switching logging to loggers specified in configuration

# ps -ef | grep -i mysqlrou
root 14791 1 21 15:22 pts/0 00:00:04 /bin/mysqlrouter -c /root/mysqlrouter/mysqlrouter.conf
root 14801 14636 0 15:23 pts/0 00:00:00 grep --color=auto -i mysqlrou

# netstat -tulnp | grep -i mysqlrouter
tcp 0 0 0.0.0.0:64460 0.0.0.0:* LISTEN 14791/mysqlrouter
tcp 0 0 0.0.0.0:6446 0.0.0.0:* LISTEN 14791/mysqlrouter
tcp 0 0 0.0.0.0:6447 0.0.0.0:* LISTEN 14791/mysqlrouter
tcp 0 0 0.0.0.0:64470 0.0.0.0:* LISTEN 14791/mysqlrouter

Alright, we have integrated MySQL Router with the cluster. Now we can test it with read-only and read/write connections.

For read/write connections ( port : 6446 ):

# mysql -P6446 -uInnotest -p'xxxxxxxxxx' -h127.0.0.1 -e "create database test_write"

# mysql -P6446 -uInnotest -p'xxxxxxxxxxx' -h127.0.0.1 -e "use sakthi ; select database()"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| database() |
+------------+
| sakthi     |
+------------+

I can perform both reads and writes with port 6446.

For read-only connections ( port : 6447 ):

# mysql -P6447 -uInnotest -p'xxxxxxxx' -h127.0.0.1 -e "use sakthi ; select database()"
+------------+
| database() |
+------------+
| sakthi     |
+------------+

# mysql -P6447 -uInnotest -p'xxxxxxxxxx' -h127.0.0.1 -e "create database test_write"
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1290 (HY000) at line 1: The MySQL server is running with the --super-read-only option so it cannot execute this statement

We can perform only reads with port 6447. This illustrates that port 6447 connects only to the reader nodes, not the primary.

I hope this blog helps someone who has started to learn MySQL InnoDB Cluster.

Thanks !!

MySQL InnoDB Cluster Tutorial 1 ( Group Replication + MySQL Shell )

MySQL InnoDB Cluster was introduced by the MySQL team for High Availability ( HA ) purposes. It provides a complete high availability solution for MySQL.

Alright, I am planning to write a series of blogs about InnoDB Cluster configuration / management with MySQL Shell / monitoring, etc.

In this blog, I am going to show the InnoDB Cluster configuration with three nodes.

What is InnoDB Cluster ?

MySQL InnoDB Cluster is the combination of:

  • MySQL shell
  • Group Replication ( GR )
  • MySQL Router

Lab Environment :

I have prepared my lab with three servers:

  • OS : CentOS 7.7
  • MySQL 8.0.18 ( the latest version at the time of writing )

The server details are:

  • 192.168.33.11 ( hostname : sakthilabs11 )
  • 192.168.33.12 ( hostname : sakthilabs12 )
  • 192.168.33.13 ( hostname : sakthilabs13 )

Step 1 :

We need to allow complete communication between the cluster nodes based on hostname and IP. The below entries need to be made on all the cluster nodes individually.

[root@sakthilabs11 ~]# cat /etc/hosts | grep 192
192.168.33.11 sakthilabs11 sakthilabs11
192.168.33.12 sakthilabs12 sakthilabs12
192.168.33.13 sakthilabs13 sakthilabs13

Step 2 :

In this step, we need to prepare the MySQL server for InnoDB Cluster. The below step needs to be executed individually on all the cluster nodes.

cmd : dba.configureLocalInstance("username@userhost:3306"); 

When executing the above command, it prints information and prompts for the actions needed to configure the instance for InnoDB Cluster. I have highlighted them in the output section below.

output :

MySQL localhost:33060+ ssl JS > dba.configureLocalInstance("root@localhost:3306");
Please provide the password for 'root@localhost:3306': *
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): y
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster…
1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel
Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: InnoDBCluster
Password for new account: ***
Confirm password: ***
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |                | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Cluster admin user 'InnoDBCluster'@'%' created.
Configuring instance…
The instance 'localhost:3306' was configured for InnoDB cluster usage.
Restarting MySQL…
NOTE: MySQL server at localhost:3306 was restarted.

Step 3 :

After preparing all the nodes, log in to MySQL Shell on any one of them with the InnoDB Cluster account ( which was created during the preparation phase ).

cmd : shell.connect('InnoDBCluster@192.168.33.11:3306');

output :

MySQL localhost:33060+ ssl JS > shell.connect('InnoDBCluster@192.168.33.11:3306');
Creating a session to 'InnoDBCluster@192.168.33.11:3306'
Please provide the password for 'InnoDBCluster@192.168.33.11:3306': ***
Save password for 'InnoDBCluster@192.168.33.11:3306'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion… Press ^C to stop.
Closing old connection…
Your MySQL connection id is 9
Server version: 8.0.18 MySQL Community Server - GPL
No default schema selected; type \use to set one.

MySQL 192.168.33.11:3306 ssl JS >

Step 4 :

Create your first node of the InnoDB Cluster .

cmd : cluster = dba.createCluster('first_InnoDB_cluster');

output :

MySQL 192.168.33.11:3306 ssl JS > cluster = dba.createCluster('first_InnoDB_cluster');
A new InnoDB cluster will be created on instance '192.168.33.11:3306'.
Validating instance at 192.168.33.11:3306…
This instance reports its own address as sakthilabs11:3306
Instance configuration is suitable.
Creating InnoDB cluster 'first_InnoDB_cluster' on '192.168.33.11:3306'…
Adding Seed Instance…
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
MySQL 192.168.33.11:3306 ssl JS >

Step 5 :

Now that we have successfully created the single-node cluster, we have to add the other nodes as well. When adding a node, the Shell will ask for the recovery method, and we need to choose one. The clone plugin is the default.

cmd : cluster.addInstance('InnoDBCluster@192.168.33.12:3306');

output :

MySQL 192.168.33.11:3306 ssl JS > cluster.addInstance('InnoDBCluster@192.168.33.12:3306');
Please provide the password for 'InnoDBCluster@192.168.33.12:3306': ***
Save password for 'InnoDBCluster@192.168.33.12:3306'? [Y]es/[N]o/Ne[v]er (default No): y
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): clone
Validating instance at 192.168.33.12:3306…
This instance reports its own address as sakthilabs12:3306
Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster…
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
Waiting for clone to finish…
NOTE: 192.168.33.12:3306 is being cloned from sakthilabs11:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
** Stage RECOVERY: \
NOTE: 192.168.33.12:3306 is shutting down…
Waiting for server restart… ready
sakthilabs12:3306 has restarted, waiting for clone to finish…
Clone process has finished: 59.55 MB transferred in about 1 second (~59.55 MB/s)
Incremental distributed state recovery is now in progress.
Waiting for distributed recovery to finish…
NOTE: '192.168.33.12:3306' is being recovered from 'sakthilabs11:3306'
Distributed recovery has finished
The instance '192.168.33.12:3306' was successfully added to the cluster.

Similarly, I have added the third node as well; a sketch of the equivalent call is below.
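
A sketch of the equivalent call for the third node ( same prompts and flow as above ):

MySQL 192.168.33.11:3306 ssl JS > cluster.addInstance('InnoDBCluster@192.168.33.13:3306');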

Finally,

We can check the cluster status with the below command.

cmd : cluster.status();

output :

MySQL 192.168.33.11:3306 ssl JS > cluster.status();
{
    "clusterName": "first_InnoDB_cluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "sakthilabs11:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "sakthilabs11:3306": {
                "address": "sakthilabs11:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.18"
            },
            "sakthilabs12:3306": {
                "address": "sakthilabs12:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.18"
            },
            "sakthilabs13:3306": {
                "address": "sakthilabs13:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.18"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "sakthilabs11:3306"
}

Hope this blog helps someone who is trying to learn MySQL InnoDB Cluster. I will be coming back with a new blog soon.

Thanks !!

wsrep_sst_auth no longer exists ( Percona XtraDB Cluster 8.x )

Before PXC 8.x, wsrep_sst_auth was the variable used to set the SST ( State Snapshot Transfer ) user and password. From PXC 8, the variable is deprecated and completely removed, as it posed a security concern: the credentials were saved in a plain-text file, easily visible to OS users. For context, the old-style configuration looked like the sketch below.
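
A sketch of the pre-PXC 8 my.cnf usage ( illustrative credentials ):

[mysqld]
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth   = sstuser:sstpassword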

So, anyone using the variable wsrep_sst_auth on PXC 8+ will experience the below error.

2019-12-12T11:20:08.606323Z 0 [ERROR] [MY-000067] [Server] unknown variable 'wsrep_sst_auth=root:Jesus@7sakthI'.
2019-12-12T11:20:08.606494Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you're executing mysql_upgrade to correct the issue.
2019-12-12T11:20:08.608377Z 0 [ERROR] [MY-010119] [Server] Aborting
2019-12-12T11:20:10.612273Z 3 [ERROR] [MY-000000] [Galera] Exception: State wait was interrupted
2019-12-12T11:20:10.612530Z 3 [ERROR] [MY-000000] [Galera] View callback failed. This is unrecoverable, restart required. (FATAL)

Then how is the SST implemented in PXC 8 ?

The complete details have been provided in the Percona blog. Per the Percona blog, there are three major user accounts participating in the SST.

From Percona Blog,

mysql.pxc.internal.session

The mysql.pxc.internal.session user account provides the appropriate security context to create and set up the other PXC accounts. This account has a limited set of privileges, enough to create the mysql.pxc.sst.user.

This account is locked and cannot be used to login (the password field will not allow login).

mysql.pxc.sst.user

The mysql.pxc.sst.user is used by XtraBackup to perform the backup. This account has the full set of privileges needed by XtraBackup.

This account is created for an SST and is dropped at the end of the SST and also when the PXC node is shut down. The creation/provisioning of this user account is not written to the binlog and is not replicated to other nodes. However, this account is sent with the backup to the joiner node. So the joiner node also has to drop this user after the SST has finished.

mysql.pxc.sst.role

The mysql.pxc.sst.role is the MySQL role that provides the privileges needed for XtraBackup. This allows for easy addition/removal of privileges needed for an SST.

The experimental release of PXC is based on MySQL 8.0.15, and we have not implemented the role-based support due to issues found with MySQL 8.0.15. This will be revisited in future versions of PXC 8.0.

Okay, now let's verify this experimentally.

With wsrep_sst_auth set, the error occurs as I showed above.

2019-12-12T11:20:08.606323Z 0 [ERROR] [MY-000067] [Server] unknown variable 'wsrep_sst_auth=root:Jesus@7sakthI'.
2019-12-12T11:20:08.606494Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you're executing mysql_upgrade to correct the issue.
2019-12-12T11:20:08.608377Z 0 [ERROR] [MY-010119] [Server] Aborting

Without wsrep_sst_auth,

  • Internally, PXC creates the user mysql.pxc.sst.user for the SST.
  • The user is not written to the binary logs.
  • The user is dropped at the end of the SST process.

From the file innobackup.backup.log,

xtrabackup: recognized client arguments: --user=mysql.pxc.sst.user --password=* --socket=/8.0/mysql.sock --lock-ddl=1 --backup=1 --galera-info=1 --stream=xbstream --target-dir=/tmp/pxc_sst_E1qV/donor_xb_cl6N

From the MySQL process list and the mysql.user table ( screenshots not reproduced here ), the mysql.pxc.sst.user account is visible while the SST is in progress.

As I mentioned earlier, the mysql.pxc.sst.user is dropped once the SST has completed.
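
If you want to observe this yourself while an SST is in progress, a quick check on the donor ( a sketch ):

mysql> SELECT user, host FROM mysql.user WHERE user LIKE 'mysql.pxc%';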

At the end of the SST, completion logs appear in the joiner node's data directory ( output not reproduced here ).

ALL GOOD !!

I just experimented with and experienced this issue today, so I got curious to write about it. Honestly, the Percona blog ( I already shared the link in this blog ) has more details.

Thank you !!!

Backup streaming with Mariabackup

Recently, one of my friends had a requirement to configure a slave with MariaDB 10.4. He planned to stream the backup from the standalone master to the slave node and then configure replication. He was familiar with Percona XtraBackup, and this was the first time he worked with Mariabackup streaming.

While running the streaming Mariabackup, he experienced the below error.

Error Logs :

[02] 2019-11-15 19:41:25 Streaming ./mysql/innodb_table_stats.ibd

[02] 2019-11-15 19:41:25         …done

[05] 2019-11-15 19:41:25 Streaming ./mysql/innodb_index_stats.ibd

[01] 2019-11-15 19:41:25 Streaming ./mysql/transaction_registry.ibd

[01] 2019-11-15 19:41:25         …done

[04] 2019-11-15 19:41:25 Streaming ./mysql/gtid_slave_pos.ibd

[04] 2019-11-15 19:41:25         …done

[06] 2019-11-15 19:41:25 Streaming ./jesus/g.ibd

[06] 2019-11-15 19:41:25         …done

[05] 2019-11-15 19:41:25         …done

bash: xbstream: command not found

mariabackup: Error writing file 'UNKNOWN' (Errcode: 32 "Broken pipe")

[00] 2019-11-15 19:41:26 xb_stream_write_data() failed.

mariabackup: Error writing file 'UNKNOWN' (Errcode: 32 "Broken pipe")

[03] 2019-11-15 19:41:26 mariabackup: xtrabackup_copy_datafile() failed.

[00] FATAL ERROR: 2019-11-15 19:41:26 failed to copy datafile.

Then he reached out to me about this issue. While going through the logs, I identified that the error says:

bash: xbstream: command not found

I suggested he use mbstream to extract the files from the xbstream archive, since Mariabackup uses mbstream for extraction. The complete details are provided on the official Mariabackup website.

Then everything worked as expected. So, the complete streaming Mariabackup command is:

mariabackup --user='stream' --password="Jesus@7sakthI" --parallel=6 --backup --stream=xbstream | ssh <remote_user@remote_host> "cat - | mbstream -x -C /mariaBackup"

Backup completion logs :

[00] 2019-11-15 19:57:57 mariabackup: The latest check point (for incremental): '144398'

mariabackup: Stopping log copying thread.
[00] 2019-11-15 19:57:57 >> log scanned up to (144407)

[00] 2019-11-15 19:57:57 >> log scanned up to (144407)

[00] 2019-11-15 19:57:57 Executing BACKUP STAGE END

[00] 2019-11-15 19:57:57 All tables unlocked

[00] 2019-11-15 19:57:57 Streaming ib_buffer_pool to <STDOUT>

[00] 2019-11-15 19:57:57         …done

[00] 2019-11-15 19:57:57 Backup created in directory '/mariaBackup/xtrabackup_backupfiles/'

[00] 2019-11-15 19:57:57 Streaming backup-my.cnf

[00] 2019-11-15 19:57:57         …done

[00] 2019-11-15 19:57:57 Streaming xtrabackup_info

[00] 2019-11-15 19:57:57         …done

[00] 2019-11-15 19:57:57 Redo log (from LSN 144398 to 144407) was copied.

[00] 2019-11-15 19:57:57 completed OK!
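
As a next step on the slave, the streamed backup must be prepared before it can be used to build replication ( a sketch; the target directory matches the backup path from the logs above ):

mariabackup --prepare --target-dir=/mariaBackup/xtrabackup_backupfiles/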

Why am I writing about this ?

During the analysis, I searched for blogs about Mariabackup streaming, but there was no blog explaining the Mariabackup streaming with the exact command and details. So, I just wanted to write about this. Hope this helps someone who is new to Mariabackup.
