Manage nodes

The process for adding and removing nodes differs depending on whether the affected nodes are manager nodes, worker nodes, or MSR nodes.

Manager Nodes

Swarm manager nodes use the Raft Consensus Algorithm to manage the swarm state. As such, it is advisable to have an understanding of some general Raft concepts in order to manage a swarm.

  • There is no limit on the number of manager nodes that can be deployed. The decision on how many manager nodes to implement comes down to a trade-off between performance and fault tolerance. Adding manager nodes to a swarm makes the swarm more fault-tolerant; however, additional manager nodes reduce write performance, because more nodes must acknowledge proposals to update the swarm state, which means more network round-trip traffic (see the worked example after this list).

  • Raft requires a majority of managers, also referred to as the quorum, to agree on proposed updates to the swarm, such as node additions or removals. Membership operations are subject to the same constraints as state replication.

  • In addition, manager nodes host the control plane etcd cluster, so making changes to the cluster requires a healthy etcd cluster with a majority of its peers present and reachable.

  • It is highly advisable to run an odd number of peers in quorum-based systems. MKE only works when a majority can be formed, so once more than one manager node has been added it is not possible to (automatically) scale back down to a single node.
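
The majority (quorum) for N manager nodes is floor(N/2) + 1, and the number of manager failures that can be tolerated is N minus that majority:

    Managers    Quorum (majority)    Failures tolerated
    1           1                    0
    3           2                    1
    5           3                    2
    7           4                    3

An even number of managers does not improve fault tolerance: four managers still have a quorum of three and can only tolerate a single failure, the same as three managers, which is why an odd number is recommended.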

Add Manager Nodes

Adding manager nodes is as simple as adding them to the launchpad.yaml file. Re-running launchpad apply will configure MKE on the new node and make the necessary changes in the swarm and etcd cluster.
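
As an illustration, a new manager could be described with a host entry like the following sketch; the address, user, and key path are placeholders, and the connection fields should match the existing hosts in your launchpad.yaml:

spec:
  hosts:
    - ssh:
        address: 172.16.0.13
        user: root
        keyPath: ~/.ssh/id_rsa
      role: manager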

Remove Manager Nodes

  1. Remove the manager host from the launchpad.yaml file.

  2. Enable pruning by setting spec.cluster.prune to true in the launchpad.yaml file:

    spec:
      cluster:
        prune: true
    
  3. Run the launchpad apply command.

  4. Remove the node from your infrastructure.

Worker Nodes

Add Worker Nodes

To add worker nodes, simply include them in the launchpad.yaml file. Re-running launchpad apply will configure everything on the new node and join it to the cluster.
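
The host entry has the same shape as for a manager, only with the worker role; a minimal sketch with placeholder connection details:

spec:
  hosts:
    - ssh:
        address: 172.16.0.21
        user: root
        keyPath: ~/.ssh/id_rsa
      role: worker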

Remove Worker Nodes

  1. Remove the host from the launchpad.yaml file.

  2. Enable pruning by setting spec.cluster.prune to true in the launchpad.yaml file:

    spec:
      cluster:
        prune: true
    
  3. Run the launchpad apply command.

  4. Remove the node from your infrastructure.

MSR Nodes

MSR nodes are essentially worker nodes: they participate in the MKE swarm, but they should be dedicated to running MSR and not used as traditional worker nodes for regular cluster workloads.

Note

By default, MKE will prevent scheduling of containers on MSR nodes.

MSR forms its own cluster and quorum in addition to the swarm formed by MKE. There is no limit on the number of MSR nodes that can be configured; however, best practice is to limit the number to five. As with manager nodes, the decision on how many nodes to deploy should be made with an understanding of the trade-off between performance and fault tolerance, as a larger number of nodes can incur severe performance penalties.

The MSR quorum is backed by RethinkDB, which, like swarm, uses the Raft Consensus Algorithm.

Add MSR Nodes

To add MSR nodes, include them in the launchpad.yaml file with a host role of msr. When adding an MSR node, specify both adminUsername and adminPassword in the spec.mke section of the launchpad.yaml file so that MSR knows which admin credentials to use:

spec:
  mke:
    adminUsername: admin
    adminPassword: passw0rd!
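
The MSR host itself is listed under spec.hosts with the msr role. A minimal sketch follows; the address, user, and key path are placeholders, and the connection fields should match the existing hosts in your file:

spec:
  hosts:
    - ssh:
        address: 172.16.0.31
        user: root
        keyPath: ~/.ssh/id_rsa
      role: msr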

Next, re-run launchpad apply, which will configure everything on the new node and join it to the cluster.

Remove MSR Nodes

  1. Remove the host from the launchpad.yaml file.

  2. Enable pruning by setting spec.cluster.prune to true in the launchpad.yaml file:

    spec:
      cluster:
        prune: true
    
  3. Run the launchpad apply command.

  4. Remove the node from your infrastructure.