Docker Enterprise is designed to scale horizontally as your applications grow in size and usage. You can add or remove nodes from the cluster to match its capacity to your needs, and you can join both Windows Server and Linux nodes to the cluster.
Because Docker Enterprise leverages the clustering functionality provided by Docker Engine, you use the docker swarm join command to add more nodes to your cluster. When you join a new node, Docker Enterprise services start running on the node automatically.
When you join a node to a cluster, you specify its role: manager or worker.
Manager: Manager nodes are responsible for cluster management functionality and dispatching tasks to worker nodes. Having multiple manager nodes allows your swarm to be highly available and tolerant of node failures.
Manager nodes also run all Docker Enterprise components in a replicated way, so by adding additional manager nodes, you’re also making the cluster highly available.
Worker: Worker nodes receive and execute your services and applications. Having multiple worker nodes allows you to scale the computing capacity of your cluster.
When deploying Docker Trusted Registry in your cluster, you deploy it to a worker node.
You can join Windows Server and Linux nodes to the cluster, but only Linux nodes can be managers.
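For example, the Docker Trusted Registry installer lets you choose which node it runs on. A minimal sketch only; the node name, UCP address, and username below are placeholders:

# Install DTR on the worker node named worker-1 (all values are placeholders)
docker run -it --rm docker/dtr install \
  --ucp-node worker-1 \
  --ucp-url https://ucp.example.com \
  --ucp-username admin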
To join nodes to the cluster, go to the UCP web interface and navigate to the Nodes page.
Click Add Node to add a new node.
Select the type of node to add, Windows or Linux.
Click Manager if you want to add the node as a manager.
Check the Use a custom listen address option to specify the address and port where the new node listens for inbound cluster management traffic.
Check the Use a custom advertise address option to specify the IP address that’s advertised to all members of the cluster for API access.
Copy the displayed command, use SSH to log in to the host that you want to join to the cluster, and run the docker swarm join command on that host (a representative example is shown after these steps).
To add a Windows node, click Windows and follow the instructions in Join Windows worker nodes to a cluster.
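The command you copy is a standard docker swarm join invocation that includes a UCP-generated token. The sketch below is representative only: the token, addresses, and ports are placeholders, and the --listen-addr and --advertise-addr flags correspond to the custom address options described above.

# Run on the host that is joining the cluster (all values are placeholders)
docker swarm join \
  --token SWMTKN-1-<generated-token> \
  --listen-addr 0.0.0.0:2377 \
  --advertise-addr 10.0.0.5:2377 \
  192.0.2.10:2377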
After you run the join command on the node, the node is displayed on the Nodes page in the UCP web interface. From there, you can change the node’s cluster configuration, including its assigned orchestrator type.
Once a node is part of the cluster, you can configure the node’s availability so that it is:
Active: the node can receive and execute tasks.
Paused: the node keeps running its existing tasks, but the scheduler doesn’t assign new tasks to it.
Drained: the node’s existing tasks are rescheduled to other nodes, and the scheduler doesn’t assign new tasks to it.
You can pause or drain a node from the Edit Node page.
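You can make the same change from the command line through a UCP client bundle (described later on this page). A brief sketch; node-1 is a placeholder node name:

# Stop scheduling new tasks on the node and reschedule its existing tasks
docker node update --availability drain node-1

# Keep existing tasks running, but don't schedule new ones
docker node update --availability pause node-1

# Return the node to normal scheduling
docker node update --availability active node-1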
You can promote worker nodes to managers to make UCP fault tolerant. You can also demote a manager node to a worker.
To promote or demote a node, select it on the Nodes page and change its role from the Edit Node page.
If you are load balancing user requests to Docker Enterprise across multiple manager nodes, remember to remove these nodes from the load-balancing pool when demoting them to workers.
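From the command line, promotion and demotion map to the docker node promote and docker node demote commands. A brief sketch with a placeholder node name:

# Promote a worker to a manager
docker node promote node-2

# Demote a manager back to a worker
docker node demote node-2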
Worker nodes can be removed from a cluster at any time.
Manager nodes are integral to the cluster’s overall health, so be careful when removing one from the cluster.
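As a rough sketch of the command-line flow (node names are placeholders): have a worker leave the swarm and then delete its entry; demote a manager before removing it.

# On the worker that is leaving the cluster
docker swarm leave

# From a manager, remove the node's entry once it is down
docker node rm node-3

# Demote a manager first, then follow the same steps to remove it
docker node demote node-4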
You can also manage your nodes from the command line. To do this, configure your Docker CLI client with a UCP client bundle.
Once you do that, you can start managing your UCP nodes:
docker node ls
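As a sketch of a typical session, assuming the client bundle has been downloaded and extracted into the current directory (the bundle includes an env.sh script that points your Docker CLI client at UCP and sets up TLS):

# Load the client bundle environment (sets DOCKER_HOST and TLS variables)
eval "$(<env.sh)"

# Inspect a node in detail; node-1 is a placeholder name
docker node inspect --pretty node-1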
You can use the API to manage your nodes in the following ways:
Use the node update API, /nodes/{id}/update, to add the orchestrator label (that is, com.docker.ucp.orchestrator.kubernetes), as in the sketch after this list.
Use the /api/ucp/config-toml API to change the default orchestrator setting.
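A rough sketch of the node update call, assuming the TLS files from a client bundle and placeholders for the UCP address, the node ID, and the node’s current spec version (the request body must carry the node’s full spec with the label added, not just the label by itself):

# Add the Kubernetes orchestrator label to a node (all values are placeholders)
curl --cacert ca.pem --cert cert.pem --key key.pem \
  -X POST "https://ucp.example.com/nodes/<node-id>/update?version=<current-spec-version>" \
  -H "Content-Type: application/json" \
  -d '{"Role": "worker", "Availability": "active", "Labels": {"com.docker.ucp.orchestrator.kubernetes": "true"}}'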