Add a machine
After you create a new AWS-based managed cluster as described in Create a managed cluster, proceed with adding machines to this cluster using the Mirantis Container Cloud web UI.
You can also use the instruction below to scale up an existing managed cluster.
Do not stop the AWS instances dedicated to the Container Cloud clusters to prevent data loss and cluster failure.
To add a machine to an AWS-based managed cluster:
For Container Cloud 2.17.0, apply the workaround for the known issue 24075.
After Container Cloud is upgraded to 2.18.0, remove the values added to the Cluster object during the workaround application.
Log in to the Container Cloud web UI with the required permissions.
Switch to the required project using the Switch Project action icon located at the top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name. The cluster page with the Machines list opens.
Click Create Machine.
Fill out the form with the following parameters as required:
Create Machines Pool
Available since Container Cloud 2.17.0. Select to create a set of machines with the same provider spec to manage them as a single unit. Enter the machine pool name in the Pool Name field.
Specify the number of machines to create. If you create a machine pool, specify the replicas count of the pool.
Select Manager or Worker to create a Kubernetes manager or worker node.
The required minimum number of machines:
3 manager nodes for HA.
2 worker nodes for the Container Cloud workloads. If the multiserver mode is enabled for StackLight, add 3 worker nodes.
A cluster can have more than 3 manager machines, but only an odd number. In an even-sized cluster, an additional machine remains in the
Pending state until an extra manager machine is added. An even number of manager machines does not provide additional fault tolerance but increases the number of nodes required for etcd quorum.
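To see why an even number of managers adds no fault tolerance, consider the etcd quorum arithmetic: quorum is a strict majority of members, so a 4-member cluster still survives only one failure, the same as a 3-member cluster. A quick illustrative sketch (not part of the product):

```python
def etcd_fault_tolerance(members: int) -> int:
    """Return how many member failures an etcd cluster survives.

    Quorum is a strict majority of members; the cluster stays
    available only while at least quorum members are healthy.
    """
    quorum = members // 2 + 1
    return members - quorum

# Adding a fourth manager does not improve fault tolerance:
for n in (3, 4, 5, 6):
    print(n, etcd_fault_tolerance(n))
# → 3 1 / 4 1 / 5 2 / 6 2
```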
From the drop-down list, select the required AWS instance type. For production deployments, Mirantis recommends:
c5.2xlarge for worker nodes
c5.4xlarge for manager nodes
r5.4xlarge for nodes where the StackLight server components run
For more details about requirements, see Requirements for an AWS-based cluster.
From the drop-down list, select the required AMI ID of Ubuntu 20.04.
Root device size
Select the required root device size.
Optional. Generally available since Container Cloud 2.19.0. A positive numeral value that defines the order of machine upgrade during a cluster update.
You can change the upgrade order later on an existing cluster. For details, see Change the upgrade order of a machine or machine pool.
Consider the following upgrade index specifics:
The first machine to upgrade is always one of the control plane machines with the lowest
upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.
Depending on the cluster configuration, worker machines are upgraded either only after the upgrade of all control plane machines finishes, or after the first control plane machine, concurrently with the remaining control plane machines.
If several machines have the same upgrade index, they have the same priority during upgrade.
If the value is not set, the machine is assigned an upgrade index value automatically.
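As a sketch of where the upgrade order lives outside the web UI, the index maps to an upgradeIndex field of the Machine object. The exact field path and API version below are assumptions that may differ between Container Cloud releases; verify against your cluster's actual Machine objects:

```yaml
apiVersion: cluster.k8s.io/v1alpha1   # assumed API version
kind: Machine
metadata:
  name: example-worker-0              # illustrative name
spec:
  providerSpec:
    value:
      upgradeIndex: 2                 # positive numeral; lower values upgrade first
```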
Select the required node labels for the worker machine to run certain components on a specific node. For example, for the StackLight nodes that run OpenSearch and require more resources than a standard node, select the StackLight label. The list of available node labels is obtained from your current Cluster release.
Due to the known issue 23002, a custom value for a predefined node label cannot be set using the Container Cloud web UI. For a workaround, refer to the issue description.
If you deploy StackLight in the HA mode (recommended):
Add the StackLight label to a minimum of three worker nodes. Otherwise, StackLight will not be deployed until the required number of worker nodes is configured with the StackLight label.
Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can make the StackLight components inaccessible. It is important to correctly maintain the worker nodes where the StackLight local volumes were provisioned. For details, see Delete a cluster machine.
To obtain the list of nodes where StackLight is deployed, refer to Upgrade managed clusters with StackLight deployed in HA mode.
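For reference, a node label selected in the web UI typically appears as a nodeLabels entry in the Machine object. The field path, API version, and label key/value below are illustrative assumptions; use a label allowed by your current Cluster release:

```yaml
apiVersion: cluster.k8s.io/v1alpha1   # assumed API version
kind: Machine
metadata:
  name: example-worker-1              # illustrative name
spec:
  providerSpec:
    value:
      nodeLabels:
      - key: stacklight               # illustrative; must be an allowed label
        value: enabled
```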
You can add node labels after deploying a worker machine. On the Machines page, click the More action icon in the last column of the required machine row and select Configure machine.
Repeat the steps above for the remaining machines.
Monitor the deploy or update live status of the machine:
- Quick status
On the Clusters page, in the Managers or Workers columns. The green status icon indicates that the machine is Ready; the orange status icon indicates that the machine is Updating.
- Detailed status
In the Machines section of a particular cluster page, in the Status column. Hover over a particular machine status icon to verify the deploy or update status of a specific machine component.
You can monitor the status of the following machine components:
Readiness of a node in a Kubernetes cluster
Health and readiness of a node in a Docker Swarm cluster
LCM readiness status of a node
Readiness of a node in the underlying infrastructure (virtual or bare metal, depending on the provider type)
The machine creation starts with the Provision status. During provisioning, the machine is not expected to be accessible since its infrastructure (VM, network, and so on) is being created.
Other machine statuses are the same as the LCMMachine object states described in LCM Controller.
Once the status changes to Ready, the deployment of the managed cluster components on this machine is complete.
Verify the status of the cluster nodes as described in Connect to a Mirantis Container Cloud cluster.
An operational managed cluster must contain a minimum of 3 Kubernetes manager nodes and 2 Kubernetes worker nodes. The deployment of the cluster does not start until the minimum number of nodes is created.
A machine with the manager node role is automatically deleted during the cluster deletion.
Before Container Cloud 2.17.0, to maintain the etcd quorum and prevent deployment failure, deletion of manager nodes is prohibited.
Since Container Cloud 2.17.0, deletion of the manager nodes is allowed for non-MOSK-based clusters within the Technology Preview features scope for the purpose of node replacement or recovery.