Upgrade Galera to v5.7 manually

Note

This feature is available starting from the MCP 2019.2.18 maintenance update. Before using the feature, follow the steps described in Apply maintenance updates.

This section instructs you on how to manually upgrade the Galera cluster to version 5.7. The upgrade of an underlying operating system is out of scope.

During the upgrade, the Galera cluster is taken offline: the nodes are stopped one by one, the packages on each node are updated, and the services are then started again.

Warning

Before performing the upgrade on a production environment:

  • Perform the procedure on a staging environment first to determine the required maintenance window duration.
  • Schedule an appropriate maintenance window to reduce the load on the cluster.
  • You do not need to shut down the VMs or workloads, as networking and storage functions are not affected.

To upgrade Galera to v5.7 manually:

  1. Prepare the Galera cluster for the upgrade:

    1. Verify that you have added the required repositories on the Galera nodes to download the updated MySQL and Galera packages.

    2. Verify that your Galera cluster is up and running as described in Verify a Galera cluster status.

    3. Create an instant backup of the MySQL database as described in Back up and restore a MySQL database.

    4. Perform a MySQL dump to a file that can later be used as a data source for manual restoration. Verify that the node has enough free space for the dump file. Run the following command on a database server node (dbs):

      mysqldump --defaults-file=/etc/mysql/debian.cnf -AR > %dump_file
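
      Before running the dump, you can check that the target file system has enough room. The following is a minimal sketch, not part of this guide: the paths and the assumption that the dump size is roughly comparable to the MySQL data directory size (typically /var/lib/mysql) are illustrative, and GNU coreutils du/df are assumed.

```shell
# Sketch: compare the data directory size with free space at the dump target.
enough_space() {
    needed=$(du -sb "$1" | awk '{print $1}')                     # bytes used by the data directory
    avail=$(df -B1 --output=avail "$2" | tail -n 1 | tr -d ' ')  # bytes free at the dump target
    [ "$avail" -gt "$needed" ]
}

# On a dbs node you would run something like:
#   enough_space /var/lib/mysql /var/backups && \
#       mysqldump --defaults-file=/etc/mysql/debian.cnf -AR > /var/backups/dump.sql
# Self-contained demo against a throwaway directory:
mkdir -p /tmp/galera_demo && echo sample > /tmp/galera_demo/f
enough_space /tmp/galera_demo /tmp && echo "enough free space"
```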
      
  2. Log in to the Salt Master node.

  3. Open the cluster level of your deployment model.

  4. Specify the new version for Galera packages:

    1. Create a new YAML file <cluster_name>/openstack/database/mysql_version.yml with the following content:

      parameters:
        _param:
          galera_mysql_version: "5.7"
      
    2. In <cluster_name>/openstack/database/master.yml and <cluster_name>/openstack/database/slave.yml, include the created file in the list of classes:

      classes:
      ...
      - cluster.<cluster_name>.openstack.database.mysql_version
      
    3. Refresh pillars on the database nodes:

      salt -C "I@galera:master or I@galera:slave" saltutil.refresh_pillar
      
    4. Verify that the pillars of the database nodes have Galera version 5.7:

      salt -C "I@galera:master or I@galera:slave" pillar.get galera:version:mysql
      

      Warning

      If the Galera version is not 5.7, resolve the issue before proceeding with the upgrade.

  5. Add repositories with new Galera packages:

    1. Apply the linux.system.repo state on the database nodes:

      salt -C "I@galera:master or I@galera:slave" state.sls linux.system.repo
      
    2. Verify the availability of the new MySQL packages:

      salt -C "I@galera:master or I@galera:slave" cmd.run 'apt-cache policy mysql-wsrep-server-5.7 mysql-wsrep-5.7'
      
    3. Verify the availability of the new percona-xtrabackup-24 packages:

      salt -C "I@galera:master or I@galera:slave" cmd.run 'apt-cache policy percona-xtrabackup-24'
      
    4. Verify that the salt-formula-galera version is 1.0+202202111257.6945afc or later:

      dpkg -l | grep salt-formula-galera
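
      The version check above can be scripted. This is a minimal sketch, assuming that sort -V ordering approximates the Debian version comparison closely enough for these date-based version strings:

```shell
# Sketch: succeed only when the installed salt-formula-galera is at least
# the required minimum version.
version_ge() {
    # returns 0 (true) when $1 >= $2 in sort -V ordering
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

installed=$(dpkg-query -W -f '${Version}' salt-formula-galera 2>/dev/null)
if version_ge "${installed:-0}" "1.0+202202111257.6945afc"; then
    echo "salt-formula-galera is recent enough"
else
    echo "upgrade salt-formula-galera first" >&2
fi
```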
      
  6. Verify the runtime versions of the MySQL nodes of the Galera cluster:

    salt -C "I@galera:master or I@galera:slave" mysql.version
    

    Example of system response:

    dbs02.openstack-ovs-core-ssl-pike-8602.local:
        5.6.51-1~u16.04+mcp1
    dbs01.openstack-ovs-core-ssl-pike-8602.local:
        5.6.51-1~u16.04+mcp1
    dbs03.openstack-ovs-core-ssl-pike-8602.local:
        5.6.51-1~u16.04+mcp1
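
    To spot nodes that still report a pre-5.7 version, you can parse this output with a small helper. This is a sketch that assumes the two-line node/version layout shown in the example above:

```shell
# Sketch: print the names of nodes whose reported MySQL version is not 5.7.
outdated_nodes() {
    awk '/:$/ { node = $1; sub(/:$/, "", node); next }
         $1 !~ /^5\.7/ { print node }'
}

# Demo on the captured sample output; on a live cluster you would pipe
# `salt -C "I@galera:master or I@galera:slave" mysql.version` instead.
outdated_nodes <<'EOF'
dbs02.openstack-ovs-core-ssl-pike-8602.local:
    5.6.51-1~u16.04+mcp1
dbs01.openstack-ovs-core-ssl-pike-8602.local:
    5.6.51-1~u16.04+mcp1
dbs03.openstack-ovs-core-ssl-pike-8602.local:
    5.6.51-1~u16.04+mcp1
EOF
```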
    
  7. Perform the following steps on the Galera nodes one by one:

    1. Stop the MySQL service on node 3:

      salt -C 'I@galera:slave and *03*' service.stop mysql
      
    2. Stop the MySQL service on node 2:

      salt -C 'I@galera:slave and *02*' service.stop mysql
      
    3. Stop the MySQL service on the master MySQL node:

      salt -C 'I@galera:master and *01*' service.stop mysql
      
  8. Perform the following steps on the MySQL master node that was stopped last:

    1. Open /etc/mysql/my.cnf for editing.

    2. Comment out the wsrep_cluster_address line:

      ...
      #wsrep_cluster_address="gcomm://192.168.2.51,192.168.2.52,192.168.2.53"
      ...
      
    3. Add the wsrep_cluster_address parameter without any IP addresses specified, so that the node bootstraps a new cluster when it starts:

      ...
      wsrep_cluster_address="gcomm://"
      ...
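
    The two edits above can also be scripted. This is a minimal sketch assuming GNU sed (the in-place edit keeps a .bak backup); the demo runs on a throwaway copy rather than the real /etc/mysql/my.cnf:

```shell
# Sketch: comment out the existing wsrep_cluster_address line and append an
# empty-gcomm bootstrap line directly after it.
comment_and_bootstrap() {
    sed -i.bak 's|^wsrep_cluster_address=.*|#&\nwsrep_cluster_address="gcomm://"|' "$1"
}

# Demo on a throwaway copy instead of the real /etc/mysql/my.cnf:
cat > /tmp/my.cnf.demo <<'EOF'
wsrep_cluster_address="gcomm://192.168.2.51,192.168.2.52,192.168.2.53"
EOF
comment_and_bootstrap /tmp/my.cnf.demo
cat /tmp/my.cnf.demo
```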
      
  9. On the same node, upgrade the packages:

    1. Obtain the MySQL root password:

      salt -C "I@galera:master" config.get mysql:client:server:database:admin:password
      
    2. Update the percona-xtrabackup package:

      apt-get -y install percona-xtrabackup-24
      
    3. Update the Galera packages:

      apt-get -y install -o Dpkg::Options::=--force-confold -o Dpkg::Options::=--force-confdef mysql-wsrep-5.7 mysql-wsrep-common-5.7 galera-3
      
    4. When prompted, enter the root password obtained in the first step.

    5. Restart the mysql service:

      systemctl restart mysql
      
    6. Verify the cluster status:

      salt-call mysql.status | grep -A1 wsrep_cluster_size
      
  10. Repeat step 9 on the remaining Galera nodes one by one.

  11. On the node where you have changed the wsrep_cluster_address parameter, apply the galera state and restart the service:

    salt -C "I@galera:master" state.apply galera
    salt -C "I@galera:master" service.restart mysql
    
  12. Verify that your Galera cluster is up and running:

    salt -C 'I@galera:master' mysql.status | \
    grep -EA1 'wsrep_(local_state_c|incoming_a|cluster_size)'
    

    Example of system response:

    wsrep_cluster_size:
      3
    
    wsrep_incoming_addresses:
      192.168.2.52:3306,192.168.2.53:3306,192.168.2.51:3306
    
    wsrep_local_state_comment:
      Synced
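
    The verification above can be turned into a scripted health check. This is a sketch that parses the key/value layout shown in the example; the expected cluster size of 3 matches this example deployment and is otherwise an assumption to adjust for your cluster:

```shell
# Sketch: succeed only when the cluster reports the expected size and the
# local state is Synced.
galera_healthy() {
    awk -v want="$1" '
        /wsrep_cluster_size:/        { getline; size  = $1 }
        /wsrep_local_state_comment:/ { getline; state = $1 }
        END { exit !(size == want && state == "Synced") }'
}

# Demo on the captured sample; on a live cluster you would pipe
# `salt -C "I@galera:master" mysql.status` instead.
if galera_healthy 3 <<'EOF'
wsrep_cluster_size:
  3
wsrep_incoming_addresses:
  192.168.2.52:3306,192.168.2.53:3306,192.168.2.51:3306
wsrep_local_state_comment:
  Synced
EOF
then echo "cluster is healthy"; fi
```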