Create a machine using MOSK management console
After you add bare metal hosts and create a cluster as described in Create a MOSK cluster, proceed with associating Kubernetes machines of your cluster with the previously added bare metal hosts using the MOSK management console.
To add a Kubernetes machine to a MOSK cluster:
Log in to the MOSK management console with the `m:kaas:namespace@operator` or `m:kaas:namespace@writer` permissions.
Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name. The cluster page with the Machines list opens.
Click the Create Machine button.
Fill out the Create New Machine form as required:
- Name
New machine name. If empty, a name is automatically generated in the `<clusterName>-<machineType>-<uniqueSuffix>` format.
- Type
Machine type. Select Manager or Worker to create a Kubernetes manager or worker node.
Caution
The required minimum number of machines:
3 manager nodes for HA
3 worker storage nodes for a minimal Ceph cluster
- L2 Template
From the drop-down list, select the previously created L2 template, if any. For details, see Create L2 templates. Otherwise, leave the default selection to use the default L2 template of the cluster.
- Distribution
Operating system to provision the machine. From the drop-down list, select Ubuntu 22.04 Jammy as the machine distribution.
- Upgrade Index
Optional. A positive integer value that defines the order of machine upgrade during a cluster update.
Note
You can change the upgrade order later on an existing cluster. For details, see Change the upgrade order of a machine.
Consider the following upgrade index specifics:
The first machine to upgrade is always one of the control plane machines with the lowest `upgradeIndex`. Other control plane machines are upgraded one by one according to their upgrade indexes.
If the `Cluster` spec `dedicatedControlPlane` field is `false`, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.
If several machines have the same upgrade index, they have the same priority during upgrade.
If the value is not set, an upgrade index value is assigned to the machine automatically.
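The same index can also be inspected or set declaratively on the `Machine` resource. The fragment below is an illustrative sketch only: the API version and the exact nesting of `upgradeIndex` are assumptions based on the Container Cloud baremetal provider, so verify them against an existing `Machine` object in your management cluster before editing anything.

```yaml
# Hypothetical Machine fragment illustrating the upgrade index.
# Field placement is an assumption; compare with a Machine object
# that already exists in your management cluster.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: test-cluster-worker-1
  namespace: my-project          # project (namespace) of the MOSK cluster
spec:
  providerSpec:
    value:
      upgradeIndex: 2            # upgraded after machines with a lower index
```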
- Host Configuration
Configuration settings of the bare metal host to be used for the machine:
- Host Inventory
From the drop-down list, select the previously created custom bare metal host to be used for the new machine.
- Host Profile
From the drop-down list, select the previously created custom bare metal host profile, if any. For details, see Create a custom bare metal host profile. Otherwise, leave the default selection.
- Labels
Add the required node labels for the worker machine to run certain components on a specific node. For example, for the StackLight nodes that run OpenSearch and require more resources than a standard node, add the StackLight label. The list of available node labels is obtained from `allowedNodeLabels` of your current `Cluster` release.
If the `value` field is not defined in `allowedNodeLabels`, from the drop-down list, select the required label and define an appropriate custom value for this label to be set to the node. For example, the `node-type` label can have the `storage-ssd` value to meet the service scheduling logic on a particular machine.
Caution
When deploying StackLight in the HA mode (recommended), add the StackLight label to minimum three worker nodes. Otherwise, StackLight will not be deployed until the required number of worker nodes is configured with the StackLight label.
Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to correctly maintain the worker nodes where the StackLight local volumes were provisioned. For details, see Delete a cluster machine.
To obtain the list of nodes where StackLight is deployed
In the MOSK management console, download the required cluster `kubeconfig` as described in Connect to a MOSK cluster.
Export the `kubeconfig` parameters to your local machine with access to kubectl. For example:
export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
Obtain the list of machines with the StackLight local volumes attached:
kubectl get persistentvolumes -o=json | \
  jq '.items[] | select(.spec.claimRef.namespace=="stacklight") | .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[].values[] | sub("^kaas-node-"; "")' | \
  sort -u | \
  xargs -I {} kubectl --kubeconfig <mgmtKubeconfig> -n <projectName> get machines -o=jsonpath='{.items[?(@.metadata.annotations.kaas\.mirantis\.com/uid=="{}")].metadata.name}{"\n"}'
In the command above, substitute `<mgmtKubeconfig>` with the path to your management cluster `kubeconfig` and `<projectName>` with the project name where your MOSK cluster is located.
If you move the StackLight label to a new worker machine on an existing cluster, manually deschedule all StackLight components from the old worker machine, which you remove the StackLight label from. For details, see Deschedule StackLight Pods from a worker machine.
Note
To add node labels after deploying a worker machine, on the Machines page, click the More action icon in the last column of the required machine field and select Configure machine.
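Node labels assigned through the console correspond to entries on the `Machine` object. The following is a hedged sketch, assuming the baremetal provider's `nodeLabels` field; the label keys and values shown (`stacklight`, `node-type`, `storage-ssd`) follow the examples above, but the exact field name, nesting, and allowed keys depend on `allowedNodeLabels` of your `Cluster` release, so verify the schema on an existing `Machine`.

```yaml
# Illustrative fragment only: the nodeLabels field name and nesting
# are assumptions based on the Container Cloud baremetal provider.
spec:
  providerSpec:
    value:
      nodeLabels:
        - key: stacklight        # schedule StackLight components on this node
          value: enabled
        - key: node-type         # custom value is allowed when the label's
          value: storage-ssd     # value is not fixed in allowedNodeLabels
```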
- Auto-commence provisioning
Select to set `day1Provisioning` to `auto`, allowing immediate automatic provisioning after bare metal host inspection.
If unselected (default), the machine enters the `AwaitsProvisioning` state and requires manual approval using the Provision button.
- Auto-commence deployment
Select to set `day1Deployment` to `auto`, allowing immediate automatic deployment after provisioning completes.
If unselected (default), the machine enters the `AwaitsDeployment` state and requires manual approval using the Deploy button.
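These two console toggles map to fields on the `Machine` object. A minimal sketch, assuming that `day1Provisioning` and `day1Deployment` sit in the baremetal provider spec; confirm the placement on a `Machine` created through the console before relying on it.

```yaml
# Hypothetical fragment: field nesting is an assumption.
spec:
  providerSpec:
    value:
      day1Provisioning: auto   # provision right after host inspection
      day1Deployment: manual   # wait for the Deploy button (AwaitsDeployment)
```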
Click Create.
At this point, MOSK adds the new machine object to the specified cluster, and the Bare Metal Operator Controller creates the relation to a bare metal host with the labels matching the roles.
The workflow depends on the controlled provisioning settings:
If Auto-commence provisioning is selected, provisioning starts automatically after bare metal host inspection completes. Otherwise, the machine switches to the `AwaitsProvisioning` state. Click Provision in the machine menu to approve provisioning after verifying the hardware inventory.
If Auto-commence deployment is selected, deployment starts automatically. Otherwise, the machine switches to the `AwaitsDeployment` state. Click Deploy in the machine menu to approve deployment after verifying the provisioned configuration.
For details on the controlled provisioning workflow, see Controlled bare metal provisioning workflows.
Provisioning of the newly created machine starts when the `Machine` object is created and includes the following stages:
Creation of partitions on local disks as required by the operating system and MOSK architecture.
Configuration of network interfaces on the host as required by the operating system and MOSK architecture.
Installation and configuration of the MOSK LCM Agent.
Repeat the steps above for the remaining machines.
Now, proceed to Add a Ceph cluster using KaaSCephCluster (deprecated).