Restore MKE

To restore MKE, select one of the following options:

  • Run the restore on the machines from which the backup originated or on new machines. You can use either the same swarm from which the backup originated or a new one.

  • Run the restore on a manager node of an existing swarm that does not have MKE installed. In this case, the restore uses the existing swarm and runs in place of an install.

  • Run the restore on an instance of MCR that is not participating in a swarm, in which case the restore performs docker swarm init in the same way as the install operation would. A new swarm is created and MKE is restored on top. To determine which case applies, you can check the swarm state of the node as shown below.
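If you are not sure whether a node already participates in a swarm, a minimal check using the standard Docker CLI (not an MKE-specific command) is:

docker info --format '{{ .Swarm.LocalNodeState }}'
# Prints "active" if the node is part of a swarm, "inactive" otherwise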

Limitations

  • To restore an existing MKE installation from a backup, you must first uninstall MKE from the swarm by using the uninstall-ucp command (see the uninstall example after this list).

  • Restore operations must run using the same major and minor MKE version (and mirantis/ucp image version) as the backed-up cluster. Restoring to a later patch release is allowed.

  • If you restore MKE using a different Docker swarm than the one on which MKE was previously deployed, MKE starts using new TLS certificates. Existing client bundles will no longer work, so you must download new ones.
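The uninstall referenced in the first limitation follows the same invocation pattern as the restore commands later in this section. A sketch, with <version> as a placeholder for the version of your existing installation:

docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:<version> uninstall-ucp --interactive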

Kubernetes settings, data, and state

During the MKE restore operation, Kubernetes declarative objects and containers are re-created, and IPs are resolved.

For more information, see Restoring an etcd cluster.

Perform MKE restore

When the restore operation starts, the script identifies the MKE version defined in the backup and performs one of the following actions:

  • Fails if the restore operation runs using an image that does not match the MKE version from the backup. To override this, for example in a testing scenario, use the --force flag (see the example after this list).

  • Provides instructions on how to run the restore process using the matching MKE version from the backup.
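For example, a forced restore in a test environment might look as follows. This is a sketch based on the main restore example later in this section, with only the --force flag added; <version> is a placeholder:

docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:<version> restore --force < /tmp/backup.tar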

Volumes are placed on the host where you run the MKE restore command.
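You can confirm this after the restore completes by listing the MKE volumes on that host, for example:

docker volume ls --filter name=ucp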

The following example illustrates how to restore MKE from an existing backup file, located by default at /tmp/backup.tar:

Note

In the commands below:

  • In mirantis/ucp:<version>, replace <version> with the matching MKE version in your backup file.

  • Use the --san flag to pass the APISERVER_LB variable, which contains the cluster API server IP address without the port number. For example, for https://172.16.243.2:443, pass 172.16.243.2. For more details about the --san flag, see MKE CLI restore options.
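For example, using the placeholder address from the note above, you might set the variable in your shell before running the restore:

export APISERVER_LB=172.16.243.2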

docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:3.3.16 restore --san=${APISERVER_LB} < /tmp/backup.tar

If the backup file is encrypted with a passphrase, provide the passphrase to the restore operation. For example:

docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:3.3.16 restore --san=${APISERVER_LB} --passphrase "secret" < /tmp/backup.tar

You can also invoke the restore command in interactive mode by mounting the backup file to the container rather than streaming it through stdin:

docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume /tmp/backup.tar:/config/backup.tar \
mirantis/ucp:3.3.16 restore -i

Regenerate certs

The current certs volume, which contains cluster-specific information such as SANs, is invalid on new clusters with different IPs. For volumes that are not backed up (for example, ucp-node-certs), the restore regenerates certs. For certs that are backed up (ucp-controller-server-certs), the restore does not perform a regeneration, and you must correct those certs when the restore completes.
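To locate the certs that require correction, you can inspect the volume mount point. A sketch, assuming the default local volume driver:

docker volume inspect ucp-controller-server-certs --format '{{ .Mountpoint }}'
# Typically resolves to /var/lib/docker/volumes/ucp-controller-server-certs/_data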

After you successfully restore MKE, you can add new managers and workers the same way you would after a fresh installation.

Restore operation status

To check the status of a restore operation, view the output of the restore command.

Verify the MKE restore

A successful MKE restore involves verifying the following items:

  • All swarm managers are healthy, as verified by running the following command:

curl -s -k https://localhost/_ping

Alternatively, check the MKE UI Nodes page for node status, and monitor the UI for warning banners about unhealthy managers.

Note

Monitor all swarm managers for at least 15 minutes to ensure no degradation.

  • No containers on swarm managers are marked as "unhealthy" (see the command sketch after this list).

  • No swarm managers or nodes are running containers with the old version, except for Kubernetes Pods that use the "ucp-pause" image.
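The two container checks can be run from the command line on each manager. A sketch using standard Docker CLI filters; <old-version> is a placeholder for the MKE version you restored from:

docker ps --filter "health=unhealthy" --format '{{ .Names }}'
# Expect empty output

docker ps --format '{{ .Image }}' | grep <old-version>
# Expect no matches other than ucp-pause Pod containers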

See also