To restore MKE, select one of the following options:

- Run the restore on a manager node of an existing swarm that does not contain MKE. In this case, the restore uses the existing swarm.
- Run the restore on a Docker engine that is not participating in a swarm. In this case, the restore performs `docker swarm init` in the same way as the install operation would. A new swarm is created and MKE is restored on top.

To restore an existing MKE installation from a backup, you must first uninstall MKE from the swarm by using the `uninstall-ucp` command.

The restore operation must run using the same MKE version (and `docker/ucp` image version) as the backed-up cluster. Restoring to a later patch release version is allowed.

During the MKE restore, Kubernetes declarative objects are re-created, containers are re-created, and IPs are resolved. For more information, see Restoring an etcd cluster.
When the restore operation starts, it looks for the MKE version used in the backup and performs one of the following actions:

- Fails if the restore operation is running using an image that does not match the MKE version from the backup (a `--force` flag is available to override this if necessary)
- Provides instructions on how to run the restore process using the matching MKE version from the backup
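The version-matching rule (same major and minor version, with later patch releases allowed) can be sketched as a plain-bash check. The function name and version strings below are illustrative and are not part of the MKE tooling:

```shell
# Illustrative check of the version-compatibility rule: the restore image
# must match the backup's major.minor version; a later patch level is allowed.
version_ok() {
  local backup="$1" image="$2"
  local b_major b_minor b_patch i_major i_minor i_patch
  IFS=. read -r b_major b_minor b_patch <<< "$backup"
  IFS=. read -r i_major i_minor i_patch <<< "$image"
  [ "$b_major" = "$i_major" ] && [ "$b_minor" = "$i_minor" ] \
    && [ "$i_patch" -ge "$b_patch" ]
}

version_ok 3.2.5 3.2.8 && echo "compatible"    # later patch release: allowed
version_ok 3.2.5 3.3.0 || echo "incompatible"  # different minor version: rejected
```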
Volumes are placed onto the host on which you run the MKE restore command.
The following example shows how to restore MKE from an existing backup file, presumed to be located at /tmp/backup.tar (replace <MKE_VERSION> with the version of your backup):

$ docker container run \
    --rm \
    --interactive \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<MKE_VERSION> restore < /tmp/backup.tar
If the backup file is encrypted with a passphrase, provide the passphrase to the restore operation (replace <MKE_VERSION> with the version of your backup):

$ docker container run \
    --rm \
    --interactive \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:<MKE_VERSION> restore --passphrase "secret" < /tmp/backup.tar
The restore command can also be invoked in interactive mode, in which case the backup file should be mounted into the container rather than streamed through stdin:

$ docker container run \
    --rm \
    --interactive \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume /tmp/backup.tar:/config/backup.tar \
    docker/ucp:<MKE_VERSION> restore -i
The current certs volume, which contains cluster-specific information such as SANs, is invalid on new clusters with different IPs. For volumes that are not backed up (for example, `ucp-node-certs`), the restore regenerates the certs. For certs that are backed up (`ucp-controller-server-certs`), the restore does not regenerate them, and you must correct those certs yourself when the restore completes.
After you successfully restore MKE, you can add new managers and workers the same way you would after a fresh installation.
A successful MKE restore involves verifying the following items:

- View the output of the restore command for errors.
- Run `curl -s -k https://localhost/_ping` on each manager node to confirm that MKE responds as healthy.
- Alternatively, check the Nodes page in the MKE UI for node status, and monitor the UI for warning banners about unhealthy managers.
Note:
- Monitor all swarm managers for at least 15 minutes to ensure no degradation.
- Ensure that no containers on swarm managers are marked as "unhealthy".
- Verify that no swarm managers or nodes are running containers with the old version, except for Kubernetes Pods that use the "ucp-pause" image.
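The manual `_ping` check can be wrapped in a small polling loop for the monitoring window. The script below is a sketch: the function name, timeout, and interval are illustrative, and it treats any HTTP 200 from the endpoint as healthy:

```shell
#!/usr/bin/env bash
# Poll an MKE manager's health endpoint until it returns HTTP 200 or the
# timeout elapses. URL, timeout, and interval defaults are illustrative.
wait_for_mke() {
  local url="${1:-https://localhost/_ping}" timeout="${2:-900}" interval=5 waited=0
  while [ "$waited" -lt "$timeout" ]; do
    # -k skips TLS verification, matching the manual curl check above
    if [ "$(curl -s -k -o /dev/null -w '%{http_code}' --max-time 5 "$url")" = "200" ]; then
      echo "MKE manager at $url is healthy"
      return 0
    fi
    sleep "$interval"
    waited=$((waited + interval))
  done
  echo "MKE manager at $url not healthy after ${timeout}s" >&2
  return 1
}
```

Run it on each manager node, for example `wait_for_mke https://localhost/_ping 900` to cover the 15-minute window.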