Disaster recovery involves the following procedures, performed in order: bring lost manager nodes back online if you can, force a new cluster if the quorum cannot be restored, and rebalance tasks once the swarm is healthy again.
Swarm is resilient to failures and can recover from any number of temporary node failures (machine reboots or crashes followed by a restart) or other transient errors. However, a swarm cannot automatically recover if it loses a quorum. Tasks on existing worker nodes continue to run, but administrative tasks are not possible, including scaling or updating services and joining or removing nodes from the swarm. The best way to recover is to bring the missing manager nodes back online. If that is not possible, continue reading for some options for recovering your swarm.
In a swarm of N managers, a quorum (a majority) of manager nodes must always be available. For example, in a swarm with 5 managers, a minimum of 3 must be operational and in communication with each other. In other words, the swarm can tolerate up to (N-1)/2 permanent failures, beyond which requests involving swarm management cannot be processed. These types of failures include data corruption or hardware failures.
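To see whether you still have a quorum, you can list the nodes from any manager that is still reachable and inspect the MANAGER STATUS column. The check below is a minimal sketch using only the standard docker node ls command.
# From any reachable manager, list the nodes; the MANAGER STATUS column
# shows Leader, Reachable, or Unreachable for each manager, which tells
# you whether a majority of managers is still available.
$ docker node ls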
If you lose the quorum of managers, you cannot administer the swarm. If you have lost the quorum and you attempt to perform any management operation on the swarm, an error occurs:
Error response from daemon: rpc error: code = 4 desc = context deadline exceeded
The best way to recover from losing the quorum is to bring the failed nodes back online. If you can’t do that, the only way to recover from this state is to use the --force-new-cluster action from a manager node. This removes all managers except the manager the command was run from. The quorum is achieved because there is now only one manager. Promote nodes to be managers until you have the desired number of managers.
# From the node to recover
$ docker swarm init --force-new-cluster --advertise-addr node01:2377
When you run the docker swarm init command with the --force-new-cluster flag, the Docker Engine where you run the command becomes the manager node of a single-node swarm which is capable of managing and running services. The manager has all the previous information about services and tasks, worker nodes are still part of the swarm, and services are still running. You need to add or re-add manager nodes to achieve your previous task distribution and ensure that you have enough managers to maintain high availability and prevent losing the quorum.
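For example, you can promote existing worker nodes back to managers, or print the manager join token so fresh machines can join directly as managers. The node name below (node02) is a placeholder; substitute the names shown by docker node ls.
# List the nodes that are still part of the swarm
$ docker node ls

# Promote an existing worker back to a manager (node02 is a placeholder)
$ docker node promote node02

# Print the token and command a new machine needs to join as a manager
$ docker swarm join-token manager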
Generally, you do not need to force the swarm to rebalance its tasks. When you add a new node to a swarm, or a node reconnects to the swarm after a period of unavailability, the swarm does not automatically give a workload to the idle node. This is a design decision. If the swarm periodically shifted tasks to different nodes for the sake of balance, the clients using those tasks would be disrupted. The goal is to avoid disrupting running services for the sake of balance across the swarm. When new tasks start, or when a node with running tasks becomes unavailable, those tasks are given to less busy nodes. The goal is eventual balance, with minimal disruption to the end user.
In Docker 1.13 and higher, you can use the --force or -f flag with the docker service update command to force the service to redistribute its tasks across the available worker nodes. This causes the service tasks to restart. Client applications may be disrupted. If you have configured it, your service uses a rolling update.
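For instance, a forced redistribution might look like the following; the service name my-web is a placeholder for one of your own services.
# Force the scheduler to restart and redistribute the tasks of a service;
# expect a brief disruption unless a rolling update is configured
$ docker service update --force my-web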
If you use an earlier version and you want to achieve an even balance of load across workers and don’t mind disrupting running tasks, you can force your swarm to re-balance by temporarily scaling the service upward. Use docker service inspect --pretty <servicename> to see the configured scale of a service. When you use docker service scale, the nodes with the lowest number of tasks are targeted to receive the new workloads. There may be multiple under-loaded nodes in your swarm. You may need to scale the service up by modest increments a few times to achieve the balance you want across all the nodes.
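As a sketch, assuming a service named my-web that is configured with 10 replicas (both the name and the numbers are illustrative), the temporary scale-up might look like this:
# Check the currently configured number of replicas
$ docker service inspect --pretty my-web

# Temporarily scale up; new tasks are placed on the least busy nodes
$ docker service scale my-web=15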
When the load is balanced to your satisfaction, you can scale the service back down to the original scale. You can use docker service ps to assess the current balance of your service across nodes.
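Continuing the illustrative example above, you could check where each task landed and then return to the original replica count.
# See which node each task of the service is running on
$ docker service ps my-web

# Scale back down to the original number of replicas
$ docker service scale my-web=10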