Add a machine
Warning
This section only applies to Container Cloud 2.27.2 (Cluster release 16.2.2) or earlier versions. Since Container Cloud 2.27.3 (Cluster release 16.2.3), support for vSphere-based clusters is suspended. For details, see Deprecation notes.
After you create a new VMware vSphere-based Mirantis Container Cloud managed cluster as described in Create a managed cluster, proceed with adding machines to this cluster using the Container Cloud web UI.
You can also use the instruction below to scale up an existing managed cluster.
To add a machine to a vSphere-based managed cluster:
Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.
Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name. The cluster page with Machines list opens.
On the cluster page, click Create Machine.
Fill out the form with the following parameters as required:
Parameter
Description
Create Machines Pool
Select to create a set of machines with the same provider spec to manage them as a single unit. Enter the machine pool name in the Pool Name field.
Count
Specify the number of machines to create. If you create a machine pool, specify the replicas count of the pool.
Manager or Worker
Select Manager or Worker to create a Kubernetes manager or worker node.
Caution
The required minimum number of manager machines is three for HA. A cluster can have more than three manager machines, but only an odd number of them.
In an even-sized cluster, the additional machine remains in the Pending state until an extra manager machine is added. An even number of manager machines does not provide additional fault tolerance but increases the number of nodes required for etcd quorum. For example, a four-manager cluster requires three healthy nodes for etcd quorum and therefore tolerates only one failure, the same as a three-manager cluster.
The required minimum number of worker machines for the Container Cloud workloads is two. If the multiserver mode is enabled for StackLight, add three worker nodes.
Template Path
Path to the VM template prepared during the management cluster bootstrap. Use the drop-down list to select the required item.
You can also select VM templates from your vSphere datacenter account, which are displayed in the same drop-down list. For the list of supported operating systems, refer to Requirements for a VMware vSphere-based cluster.
Note
Mirantis does not recommend using VM templates that contain the Unknown label in the drop-down list.
Caution
Container Cloud does not support mixed operating systems in one cluster, for example, RHEL combined with Ubuntu.
RHEL License
Applies to RHEL deployments only.
From the drop-down list, select the RHEL license that you previously added for the cluster being deployed.
VM Memory Size
VM memory size in GB, defaults to 16 GB.
To prevent issues with low RAM, Mirantis recommends the following VM templates for a managed cluster with 50-200 nodes:
16 vCPUs and 40 GB of RAM - manager node
16 vCPUs and 128 GB of RAM - nodes where the StackLight server components run
VM CPU Size
Number of VM CPUs, defaults to 8.
Upgrade Index
Optional. A positive integer value that defines the order of machine upgrade during a cluster update.
Note
You can change the upgrade order later on an existing cluster. For details, see Change the upgrade order of a machine or machine pool.
Consider the following upgrade index specifics:
The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.
If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.
If several machines have the same upgrade index, they have the same priority during upgrade.
If the value is not set, the machine is automatically assigned an upgrade index value.
Node Labels
Add the required node labels for the worker machine to run certain components on a specific node. For example, for the StackLight nodes that run OpenSearch and require more resources than a standard node, add the StackLight label. The list of available node labels is obtained from allowedNodeLabels of your current Cluster release. You can also inspect this list through the API, as shown in the sketch after this list of parameters.
If the value field is not defined in allowedNodeLabels, from the drop-down list, select the required label and define an appropriate custom value for this label to be set to the node. For example, the node-type label can have the storage-ssd value to meet the service scheduling logic on a particular machine.
Note
Due to the known issue 23002 fixed in Container Cloud 2.21.0, a custom value for a predefined node label cannot be set using the Container Cloud web UI. For a workaround, refer to the issue description.
Caution
If you deploy StackLight in the HA mode (recommended):
Add the StackLight label to a minimum of three worker nodes. Otherwise, StackLight will not be deployed until the required number of worker nodes is configured with the StackLight label.
Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to correctly maintain the worker nodes where the StackLight local volumes were provisioned. For details, see Delete a cluster machine.
To obtain the list of nodes where StackLight is deployed, refer to Upgrade managed clusters with StackLight deployed in HA mode.
If you move the StackLight label to a new worker machine on an existing cluster, manually deschedule all StackLight components from the old worker machine, which you remove the StackLight label from. For details, see Deschedule StackLight Pods from a worker machine.
Note
To add node labels after deploying a worker machine, navigate to the Machines page, click the More action icon in the last column of the required machine field, and select Configure machine.
Since Container Cloud 2.24.0, you can configure node labels for machine pools after deployment using the More > Configure Pool option.
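If you prefer the CLI, you can inspect the list of allowed node labels directly on the management cluster. The following is a minimal sketch, assuming that your kubeconfig points to the management cluster and that the allowedNodeLabels list is exposed in the ClusterRelease object spec; verify the exact resource name and field path against your Container Cloud version:

kubectl get clusterrelease <clusterReleaseName> -o jsonpath='{.spec.allowedNodeLabels}'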
Click Create.
Repeat the steps above for the remaining machines.
Monitor the deploy or update live status of the machine:
- Quick status
On the Clusters page, in the Managers or Workers column. The green status icon indicates that the machine is Ready; the orange status icon indicates that the machine is Updating.
- Detailed status
In the Machines section of a particular cluster page, in the Status column. Hover over a particular machine status icon to verify the deploy or update status of a specific machine component.
You can monitor the status of the following machine components:
Component
Description
Kubelet
Readiness of a node in a Kubernetes cluster.
Swarm
Health and readiness of a node in a Docker Swarm cluster.
LCM
LCM readiness status of a node.
ProviderInstance
Readiness of a node in the underlying infrastructure (virtual or bare metal, depending on the provider type).
Graceful Reboot
Readiness of a machine during a scheduled graceful reboot of a cluster, available since Cluster releases 15.0.1 and 14.0.0.
Infrastructure Status
Available since Container Cloud 2.25.0 for the bare metal provider only. Readiness of the IPAMHost, L2Template, BareMetalHost, and BareMetalHostProfile objects associated with the machine.
LCM Operation
Available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Health of all LCM operations on the machine.
LCM Agent
Available since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Health of the LCM Agent on the machine and the status of the LCM Agent update to the version from the current Cluster release.
The machine creation starts with the Provision status. During provisioning, the machine is not expected to be accessible since its infrastructure (VM, network, and so on) is being created.
Other machine statuses are the same as the LCMMachine object states:
Uninitialized - the machine is not yet assigned to an LCMCluster.
Pending - the agent reports a node IP address and host name.
Prepare - the machine executes StateItems that correspond to the prepare phase. This phase usually involves downloading the necessary archives and packages.
Deploy - the machine executes StateItems that correspond to the deploy phase, that is, becoming a Mirantis Kubernetes Engine (MKE) node.
Ready - the machine is deployed.
Upgrade - the machine is being upgraded to the new MKE version.
Reconfigure - the machine executes StateItems that correspond to the reconfigure phase. The machine configuration is being updated without affecting workloads running on the machine.
Once the status changes to Ready, the deployment of the cluster components on this machine is complete.
You can also monitor the live machine status using the API:
kubectl get machines <machineName> -o wide
Example of system response since Container Cloud 2.23.0:
NAME     READY   LCMPHASE   NODENAME             UPGRADEINDEX   REBOOTREQUIRED   WARNINGS
demo-0   true    Ready      kaas-node-c6aa8ad3   1              false
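As noted above, these statuses mirror the LCMMachine object states, so you can also inspect the LCMMachine objects directly. A minimal sketch, assuming that the lcmmachines resource is accessible in your project namespace on the management cluster:

kubectl get lcmmachines -n <projectNamespace> -o wide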
For the history of a machine deployment or update, refer to Inspect the history of a cluster and machine deployment or update.
Verify the status of the cluster nodes as described in Connect to a Mirantis Container Cloud cluster.
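For example, after downloading the managed cluster kubeconfig as described in that section, a basic readiness check with standard kubectl looks as follows, where the kubeconfig path is a placeholder:

kubectl --kubeconfig <pathToManagedClusterKubeconfig> get nodes -o wide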
Warning
An operational managed cluster must contain a minimum of 3 Kubernetes manager machines to meet the etcd quorum requirement, and 2 Kubernetes worker machines.
The deployment of the cluster does not start until the minimum number of machines is created.
A machine with the manager role is automatically deleted during the cluster deletion. Manual deletion of manager machines is allowed only for the purpose of node replacement or recovery.
Support status of manager machine deletion
Since the Cluster releases 17.0.0, 16.0.0, and 14.1.0, the feature is generally available.
Before the Cluster releases 16.0.0 and 14.1.0, the feature is available within the Technology Preview features scope for non-MOSK-based clusters.
Before the Cluster release 17.0.0, the feature is not supported for MOSK.
Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run on any master node:
docker info
Example of system response:
Server:
 ...
 Swarm:
  ...
  Default Address Pool: 10.0.0.0/16
  SubnetSize: 24
  ...
 Default Address Pools: Base: 10.99.0.0/16, Size: 20
 ...
Typically, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks: 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20.
To verify the actual state of the networks and the addresses in use, run:
docker network ls
docker network inspect <networkName>
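To list each network together with its subnet in a single pass, you can combine the two commands above. This sketch uses standard docker CLI Go-template formatting; verify the template fields against your Docker version:

docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)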