Restore the Salt Master node

You may need to restore the Salt Master node after a hardware or software failure. This section describes how to restore the Salt Master node using Backupninja.

To restore the Salt Master node using Backupninja:

Select from the following options:

  • Restore the Salt Master node automatically as described in Restore the services using Backupninja pipeline.

  • Restore the Salt Master node manually:

    1. Redeploy the Salt Master node using the day01 image with the configuration ISO drive for the Salt Master VM as described in Deploy the Salt Master node.

      Caution

      Make sure to securely back up the configuration ISO drive image. This image contains critical information required to reinstall your cfg01 node in case of a storage failure, including the master key for all encrypted secrets in the cluster metadata model.

      Failure to back up the configuration ISO image may result in the loss of the ability to manage MCP in certain hardware failure scenarios.
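      As an illustration, the ISO image can be copied to separate storage together with a checksum. The paths below are assumptions, not fixed MCP locations; substitute the actual image location and a backup destination outside the Salt Master node:

```shell
# Illustrative paths only: adjust ISO_IMAGE to where your deployment stores the
# cfg01 configuration ISO, and BACKUP_DIR to storage outside the Salt Master node.
ISO_IMAGE=${ISO_IMAGE:-/var/lib/libvirt/images/cfg01-config.iso}
BACKUP_DIR=${BACKUP_DIR:-/srv/volumes/backup/iso}

# Copy the image and record a checksum so later corruption can be detected.
if [ -f "$ISO_IMAGE" ]; then
    mkdir -p "$BACKUP_DIR"
    cp "$ISO_IMAGE" "$BACKUP_DIR/"
    sha256sum "$ISO_IMAGE" > "$BACKUP_DIR/$(basename "$ISO_IMAGE").sha256"
fi
```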

    2. Log in to the Salt Master node.

    3. On the cluster level of the Reclass model, add the following pillar in the cluster/infra/config/init.yml file:

      parameters:
        salt:
          master:
            initial_data:
              engine: backupninja
              source: ${_param:backupninja_backup_host}  # the backupninja server that stores Salt Master backups, for example: kvm03
              host: ${_param:infra_config_hostname}.${_param:cluster_domain}  # for example: cfg01.deploy-name.local
              home_dir: '/path/to/backups/' # for example: '/srv/volumes/backup/backupninja'
          minion:
            initial_data:
              engine: backupninja
              source: ${_param:backupninja_backup_host}  # the backupninja server that stores Salt Master backups, for example: kvm03
              host: ${_param:infra_config_hostname}.${_param:cluster_domain}  # for example: cfg01.deploy-name.local
              home_dir: '/path/to/backups/' # for example: '/srv/volumes/backup/backupninja'
      
    4. Verify that the pillar for Backupninja is present:

      salt-call pillar.data backupninja
      

      If the pillar is not present, configure it as described in Enable a backup schedule for the Salt Master node using Backupninja.

    5. Verify that the pillars for the Salt Master and Salt Minion are present:

      salt-call pillar.data salt:minion:initial_data
      salt-call pillar.data salt:master:initial_data
      

      If the pillars are not present, verify the pillar configuration in the cluster/infra/config/init.yml file as described above.

    6. Apply the salt.master.restore and salt.minion.restore states.

      Mirantis recommends running the following command using GNU Screen or a similar terminal multiplexer.

      salt-call state.sls salt.master.restore,salt.minion.restore
      

      Running the states above restores the Salt Master node PKI and CA certificates and creates flag files in the /srv/salt/ directory that indicate that the Salt Master node restore is complete.

      Caution

      If you rerun the states, they will not restore the Salt Master node again. To repeat the restore procedure, first delete the master-restored and minion-restored files from the /srv/salt/ directory and then rerun the states above.
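      The repeat procedure can be sketched as follows, assuming the default /srv/salt location from the caution above; the FLAG_DIR variable is only for illustration:

```shell
# FLAG_DIR defaults to the /srv/salt directory mentioned in the caution above;
# it is parameterized here only so the snippet can be tried safely elsewhere.
FLAG_DIR=${FLAG_DIR:-/srv/salt}

# Remove the completion flags so that the restore states run again.
rm -f "$FLAG_DIR/master-restored" "$FLAG_DIR/minion-restored"

# Rerun the restore states on a node where salt-call is available.
if command -v salt-call >/dev/null 2>&1; then
    salt-call state.sls salt.master.restore,salt.minion.restore
fi
```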

    7. Verify that the Salt Master node is restored:

      salt-key
      salt -t2 '*' saltutil.refresh_pillar
      ls -la /etc/pki/ca/salt_master_ca/
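      The flag-file part of this verification can be wrapped in a small helper. The check_restore_flags function below is a hypothetical sketch; the flag file names come from step 6:

```shell
# Check that the restore flag files from step 6 exist. The function name and
# its directory argument are illustrative; on a real cfg01 node the flags live
# in /srv/salt, which is used as the default.
check_restore_flags() {
    dir=${1:-/srv/salt}
    for flag in master-restored minion-restored; do
        if [ ! -e "$dir/$flag" ]; then
            echo "missing: $dir/$flag"
            return 1
        fi
    done
    echo "restore flags present"
}
```

      On a restored Salt Master node, calling the function with no argument checks /srv/salt directly.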