MKE is designed for scaling horizontally as your applications grow in size and usage. You can add or remove nodes from the MKE cluster to make it scale to your needs.
Since MKE leverages the clustering functionality provided by Mirantis Container Runtime, you use the docker swarm join command to add more nodes to your cluster. When you join a new node, MKE services automatically start running on that node.
When joining a node to a cluster, you can specify its role: manager or worker.
Manager nodes
Manager nodes are responsible for cluster management functionality and for dispatching tasks to worker nodes. Having multiple manager nodes allows your cluster to be highly available and to tolerate node failures.
Manager nodes also run all MKE components in a replicated way, so by adding additional manager nodes, you’re also making MKE highly available. Learn more about the MKE architecture.
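Manager nodes reach agreement through majority consensus (Raft), so a cluster of N managers stays available only while a majority of them are reachable, which means it tolerates floor((N - 1) / 2) simultaneous manager failures. A quick sketch of that arithmetic (plain shell, no cluster required):

```shell
# Raft-based manager quorum: N managers tolerate floor((N - 1) / 2) failures.
# Print the fault tolerance for common manager counts.
for N in 1 3 5 7; do
  echo "$N manager(s) tolerate $(( (N - 1) / 2 )) failure(s)"
done
```

This is why production clusters typically run 3 or 5 managers; an even manager count adds no extra fault tolerance over the odd count below it.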
Worker nodes
Worker nodes receive and execute your services and applications. Having multiple worker nodes allows you to scale the computing capacity of your cluster.
When deploying Mirantis Secure Registry in your cluster, you deploy it to a worker node.
To join nodes to the cluster, go to the MKE web UI and navigate to the Nodes page.
Click Add Node to add a new node.
Copy the displayed command, use ssh to log into the host that you want to join to the cluster, and run the docker swarm join command on the host.
To add a Windows node, click Windows and follow the instructions in Join Windows worker nodes to a cluster.
After you run the join command on the node, you can view the node in the MKE web UI.
If the target node is a manager, you must first demote it to a worker before proceeding with the removal. Run
docker node ls
and identify the node ID or hostname of the target node. Then, run
docker node demote <nodeID or hostname>

If the status of the worker node is Ready, you must manually force the node to leave the cluster. To do this, connect to the target node through SSH and run docker swarm leave --force directly against the local Docker engine.
Loss of quorum
Do not perform this step if the node is still a manager, as this may cause loss of quorum.
Once the status of the node is reported as Down, you can remove the node:
docker node rm <nodeID or hostname>
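The full removal sequence described above can be sketched as a dry-run script. The node name worker-node-2 is a hypothetical example, and each command is echoed rather than executed; run each one on the host indicated in the comments:

```shell
#!/bin/sh
# Dry-run sketch of removing a node from the cluster.
# "worker-node-2" is a hypothetical node name; substitute your own.
NODE="worker-node-2"

# On a manager: demote the node first if it is currently a manager.
echo "docker node demote ${NODE}"

# On the target node (over SSH): force it to leave the swarm.
echo "docker swarm leave --force"

# Back on a manager, once the node reports Down: remove it.
echo "docker node rm ${NODE}"
```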
Once a node is part of the cluster, you can change its role, turning a manager node into a worker and vice versa. You can also configure the node availability, setting it to active, pause, or drain.
In the MKE web UI, browse to the Nodes page and select the node. In the details pane, click Configure to open the Edit Node page.
If you’re load-balancing user requests to MKE across multiple manager nodes, when demoting those nodes into workers, don’t forget to remove them from your load-balancing pool.
You can also use the command line to do all of the above operations. To get the join token, run the following command on a manager node:
docker swarm join-token worker
If you want to add a new manager node instead of a worker node, use
docker swarm join-token manager
instead. If you want to use a custom listen address, add the --listen-addr argument:
docker swarm join \
--token SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj \
--listen-addr 234.234.234.234 \
192.168.99.100:2377
Once your node is added, you can see it by running docker node ls on a manager:
docker node ls
To change the node’s availability, use:
docker node update --availability drain node2
You can set the availability to active, pause, or drain.
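A common use of these availability states is a maintenance cycle: drain the node so the scheduler stops assigning new tasks and reschedules its existing tasks elsewhere, do the maintenance, then set the node back to active. Sketched as a dry-run, with node2 as a hypothetical node name:

```shell
# Dry-run sketch of a maintenance cycle for a hypothetical node "node2".
# Commands are echoed, not executed; run them on a manager node.

# drain: stop scheduling new tasks and move existing tasks off the node.
echo "docker node update --availability drain node2"

# ...perform maintenance on node2 while it holds no tasks...

# active: make the node schedulable again.
echo "docker node update --availability active node2"
```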
To remove the node, use:
docker node rm <node-hostname>