Restore MKE
MKE supports the following three approaches to performing a restore:
Run the restore on the machines from which the backup originated or on new machines, using either the swarm from which the backup originated or a new swarm.
Run the restore on a manager node of an existing swarm that does not have MKE installed. In this case, the MKE restore uses the existing swarm and runs in place of an MKE install.
Run the restore on an instance of MCR that is not included in a swarm. The restore performs docker swarm init just as the install operation would, creating a new swarm and restoring MKE onto it.
Note
During the MKE restore operation, Kubernetes declarative objects and containers are recreated and IP addresses are resolved.
For more information, refer to Restoring an etcd cluster.
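To determine which approach applies to a particular node, you can check whether MCR on that node already belongs to a swarm. The following commands are a minimal sketch using only standard Docker CLI calls and are not part of the MKE restore procedure itself:

# Show whether this node is part of a swarm (inactive, pending, or active).
docker info --format '{{.Swarm.LocalNodeState}}'

# If the node belongs to a swarm, list its nodes to confirm manager membership.
docker node ls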
Prerequisites
Consider the following requirements prior to restoring MKE:
To restore an existing MKE installation from a backup, you must first uninstall MKE from the swarm by using the uninstall-ucp command (see the example after this list).
Restore operations must run using the same major and minor MKE version and mirantis/ucp image version as the backed-up cluster.
If you restore MKE using a different swarm than the one where the backed-up MKE was deployed, MKE will use new TLS certificates. In this case, you must download new client bundles, as the existing ones will no longer be operational.
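The following is a hedged sketch of the uninstall-ucp invocation referenced in the first requirement; the image tag shown is an example, and you should confirm the exact flags against the MKE CLI reference for your release:

docker container run --rm -it \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.7.15 uninstall-ucp --interactive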
Restore MKE
Note
At the start of the restore operation, the script identifies the MKE version defined in the backup and performs one of the following actions:
The MKE restore fails if it runs using an image that does not match the MKE version from the backup. To override this behavior, for example in a testing scenario, use the --force flag.
MKE provides instructions on how to run the restore process for the MKE version in use.
Note
If SELinux is enabled, you must temporarily disable it prior to running the restore command and then re-enable it once the command has completed (see the example after this note).
Volumes are placed onto the host where you run the MKE restore command.
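If you do need to toggle SELinux, the following is a minimal sketch for a host with the standard SELinux utilities installed; setenforce 0 switches SELinux to permissive mode for the current boot only and does not change the persistent configuration:

# Check the current SELinux mode.
getenforce

# Temporarily switch SELinux to permissive mode before running the restore.
sudo setenforce 0

# Re-enable enforcing mode after the restore command completes.
sudo setenforce 1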
Restore MKE from an existing backup file. The following example illustrates how to restore MKE from a backup file located in /tmp/backup.tar:

docker container run \
  --rm \
  --interactive \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.7.15 restore \
  --san=${APISERVER_LB} < /tmp/backup.tar
Replace mirantis/ucp:3.7.15 with the MKE version in your backup file.
For the --san flag, assign the cluster API server IP address, without the port number, to the APISERVER_LB variable. For example, for https://172.16.243.2:443, use 172.16.243.2. For more information on the --san flag, refer to MKE CLI restore options.
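For example, you might set the variable as follows before running the restore command; the address shown is illustrative only and must be replaced with the IP address of your own API server or load balancer:

# Example value only: the cluster API server IP address, without the port.
APISERVER_LB=172.16.243.2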
If the backup file is encrypted with a passphrase, include the --passphrase flag in the restore command:

docker container run \
  --rm \
  --interactive \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.7.15 restore \
  --san=${APISERVER_LB} \
  --passphrase "secret" < /tmp/backup.tar
Alternatively, you can invoke the restore command in interactive mode by mounting the backup file to the container rather than streaming it through stdin:
docker container run \
  --rm \
  --interactive \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /tmp/backup.tar:/config/backup.tar \
  mirantis/ucp:3.7.15 restore -i
Regenerate certificates. The current certs volume, which contains cluster-specific information such as SANs, is invalid on new clusters with different IP addresses. For volumes that are not backed up, such as ucp-node-certs, the restore regenerates the certificates. For certificates that are backed up, such as those in ucp-controller-server-certs, the restore does not regenerate them, and you must correct those certificates when the restore completes.
After you successfully restore MKE, add new manager and worker nodes just as you would after a fresh installation.
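To see which certificate volumes are present on the node after the restore, you can list them with the standard Docker CLI. This is a hedged illustration; the exact set of ucp-prefixed volumes depends on your MKE version:

# List MKE-related volumes on the node where the restore ran, including the certificate volumes.
docker volume ls --filter name=ucp

# Inspect the backed-up controller server certificates volume.
docker volume inspect ucp-controller-server-certs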
After the restore operation completes, review the output of the restore command for errors.
Verify the MKE restore
Run the following command:
curl -s -k https://localhost/_ping
Log in to the MKE web UI.
In the left-side navigation panel, navigate to Shared Resources > Nodes.
Verify that all swarm manager nodes are healthy (see the example commands after this list):
Monitor all swarm managers for at least 15 minutes to ensure no degradation.
Verify that no containers on swarm manager nodes are in an unhealthy state.
Verify that no swarm nodes are running containers with the old version, except for Kubernetes Pods that use the ucp-pause image.
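In addition to the web UI checks above, the following is a minimal sketch of CLI spot checks that you can run on a manager node; it uses only standard Docker CLI commands and is not an exhaustive health verification:

# List swarm nodes; managers should report Ready status and a Reachable or Leader manager status.
docker node ls

# Check for containers on this manager that report an unhealthy state.
docker ps --filter health=unhealthy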