You may need to restore the Salt Master node after a hardware or software failure.
To restore the Salt Master node from a Backupninja rsync backup:
Redeploy the Salt Master node using the day01 image with the configuration ISO drive for the Salt Master VM as described in Deploy the Salt Master node.
Caution

Make sure to securely back up the configuration ISO drive image. This image contains critical information required to reinstall your cfg01 node in case of a storage failure, including the master key for all encrypted secrets in the cluster metadata model. Failure to back up the configuration ISO image may result in the inability to manage MCP in certain hardware failure scenarios.
Log in to the Salt Master node.
Configure your deployment model by including the following pillar in cluster/infra/config/init.yml:
parameters:
  salt:
    master:
      initial_data:
        engine: backupninja
        source: kvm03              # the backupninja server that stores Salt Master backups
        host: cfg01.<domain_name>  # for example: cfg01.deploy-name.local
    minion:
      initial_data:
        engine: backupninja
        source: kvm03              # the backupninja server that stores Salt Master backups
        host: cfg01.<domain_name>  # for example: cfg01.deploy-name.local
Verify that the pillar for Backupninja is present:
salt-call pillar.data backupninja
If the pillar is not present, configure it as described in Enable a backup schedule for the Salt Master node using Backupninja.
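The pillar checks in this procedure can also be scripted. The following is a minimal sketch, not part of the official procedure: check_pillar and the salt-call availability guard are illustrative additions, while salt-call, pillar.get, and the newline_values_only outputter are standard Salt components.

```shell
# Illustrative helper: succeeds only when the named pillar key renders
# to non-empty data on this minion.
check_pillar() {
  salt-call --out=newline_values_only pillar.get "$1" 2>/dev/null | grep -q .
}

# Guard so the sketch is a no-op on hosts without Salt installed.
if command -v salt-call >/dev/null 2>&1; then
  check_pillar backupninja || echo "backupninja pillar is missing" >&2
fi
```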
Verify that the pillar for master and minion is present:
salt-call pillar.data salt:minion:initial_data
salt-call pillar.data salt:master:initial_data
If the pillar is not present, verify the pillar configuration in cluster/infra/config/init.yml as described above.
Apply the salt.master.restore and salt.minion.restore states.
Mirantis recommends running the following command in Linux GNU Screen or an alternative, so that a dropped SSH connection does not interrupt the restore.
salt-call state.sls salt.master.restore,salt.minion.restore
Running the states above restores the Salt Master node's PKI and CA certificates and creates flag files in the /srv/salt/ directory that indicate the Salt Master node restore is complete.
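One way to follow the GNU Screen recommendation is to launch the states in a detached session. This is a sketch, not part of the official procedure: the session name salt-restore is an arbitrary choice, and the command-availability guard exists only so the snippet degrades gracefully where screen is not installed.

```shell
# Run the restore states in a detached GNU Screen session so that an
# SSH disconnect does not interrupt them.
RESTORE_CMD="salt-call state.sls salt.master.restore,salt.minion.restore"
if command -v screen >/dev/null 2>&1; then
  screen -S salt-restore -dm bash -c "$RESTORE_CMD"
fi
# Reattach later to watch progress:
#   screen -r salt-restore
```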
Caution

If you rerun the states, they will not restore the Salt Master node again. To repeat the restore procedure, first delete the master-restored and minion-restored files from the /srv/salt directory and then rerun the states above.
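Resetting the flag files could look like the following sketch. The FLAG_DIR variable is an illustrative parameter added here only so the snippet can run outside cfg01; on the Salt Master node the files live in /srv/salt, as stated above.

```shell
# Remove the restore flag files so that rerunning the restore states
# performs a full restore again. FLAG_DIR is illustrative; on cfg01
# it is /srv/salt.
FLAG_DIR="${FLAG_DIR:-/srv/salt}"
rm -f "$FLAG_DIR/master-restored" "$FLAG_DIR/minion-restored"
```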
Verify that the Salt Master node is restored:
salt-key
salt -t2 '*' saltutil.refresh_pillar
ls -la /etc/pki/ca/salt_master_ca/