Mirantis Container Cloud (MCC) becomes part of Mirantis OpenStack for Kubernetes (MOSK)!

Starting with MOSK 25.2, the MOSK documentation set covers all product layers, including MOSK management (formerly Container Cloud). This means everything you need is in one place. Some legacy names may remain in the code and documentation and will be updated in future releases. The separate Container Cloud documentation site will be retired, so please update your bookmarks for continued easy access to the latest content.

Create a machine using MOSK management console

After you add bare metal hosts and create a cluster as described in Create a MOSK cluster, proceed with associating Kubernetes machines of your cluster with the previously added bare metal hosts using the MOSK management console.

To add a Kubernetes machine to a MOSK cluster:

  1. Log in to the MOSK management console with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

  2. Switch to the required project using the Switch Project action icon at the top of the main left-side navigation panel.

  3. In the Clusters tab, click the required cluster name. The cluster page with the Machines list opens.

  4. Click the Create Machine button.

  5. Fill out the Create New Machine form as required:

    • Name

      New machine name. If empty, a name is automatically generated in the <clusterName>-<machineType>-<uniqueSuffix> format.

    • Type

      Machine type. Select Manager or Worker to create a Kubernetes manager or worker node.

      Caution

      The required minimum number of machines:

      • 3 manager nodes for HA

      • 3 worker storage nodes for a minimal Ceph cluster

    • L2 Template

      From the drop-down list, select the previously created L2 template, if any. For details, see Create L2 templates. Otherwise, leave the default selection to use the default L2 template of the cluster.

    • Distribution

      Operating system to provision the machine. From the drop-down list, select Ubuntu 22.04 Jammy as the machine distribution.

      Warning

      Do not use the obsolete Ubuntu 20.04 distribution on greenfield deployments. Use it only on existing clusters that are already based on Ubuntu 20.04, which reaches end of life in April 2025. The MOSK 24.3 release series is the last one to support Ubuntu 20.04 as the host operating system.

      You cannot update management or MOSK clusters running Ubuntu 20.04 to the following major product release, in which Ubuntu 22.04 is the only supported version.

    • Upgrade Index

      Optional. A positive integer value that defines the order of machine upgrade during a cluster update.

      Note

      You can change the upgrade order later on an existing cluster. For details, see Change the upgrade order of a machine.

      Consider the following upgrade index specifics:

      • The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.

      • If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

      • If several machines have the same upgrade index, they have the same priority during upgrade.

      • If the value is not set, the machine is automatically assigned an upgrade index value.
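
      In the underlying Machine object, this setting corresponds to an upgrade index field. A minimal illustrative excerpt, assuming the kinds and field placement of the Container Cloud bare metal provider API (verify the exact names against your release):

      ```yaml
      # Illustrative Machine excerpt; kinds and field placement are assumptions
      # based on the Container Cloud bare metal provider API.
      apiVersion: cluster.k8s.io/v1alpha1
      kind: Machine
      metadata:
        name: demo-cluster-worker-1    # hypothetical machine name
        namespace: demo-project        # hypothetical project namespace
      spec:
        providerSpec:
          value:
            apiVersion: baremetal.k8s.io/v1alpha1
            kind: BareMetalMachineProviderSpec
            upgradeIndex: 10           # lower values are upgraded first
      ```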

    • Host Configuration

      Configuration settings of the bare metal host to be used for the machine:

      • Host Inventory

        From the drop-down list, select the previously created custom bare metal host to be used for the new machine.

        Note

        Before MOSK 25.2, the field name was Host.

      • Host Profile

        From the drop-down list, select the previously created custom bare metal host profile, if any. For details, see Create a custom bare metal host profile. Otherwise, leave the default selection.

    • Labels

      Add the required node labels for the worker machine to run certain components on a specific node. For example, for the StackLight nodes that run OpenSearch and require more resources than a standard node, add the StackLight label. The list of available node labels is obtained from allowedNodeLabels of your current Cluster release.

      If the value field is not defined in allowedNodeLabels, from the drop-down list, select the required label and define an appropriate custom value for this label to be set on the node. For example, the node-type label can have the storage-ssd value to meet the service scheduling logic on a particular machine.

      Caution

      If you deploy StackLight in the HA mode (recommended):

      • Add the StackLight label to a minimum of three worker nodes. Otherwise, StackLight will not be deployed until the required number of worker nodes is configured with the StackLight label.

      • Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to correctly maintain the worker nodes where the StackLight local volumes were provisioned. For details, see Delete a cluster machine.

        To obtain the list of nodes where StackLight is deployed, refer to Upgrade managed clusters with StackLight deployed in HA mode.

      If you move the StackLight label to a new worker machine on an existing cluster, manually deschedule all StackLight components from the old worker machine from which you remove the StackLight label. For details, see Deschedule StackLight Pods from a worker machine.

      Note

      To add node labels after deploying a worker machine, navigate to the Machines page, click the More action icon in the last column of the required machine row, and select Configure machine.
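
      The node labels selected in this form land in the Machine object as key/value pairs. An illustrative excerpt, assuming the nodeLabels placement of the Container Cloud bare metal provider API (verify per release):

      ```yaml
      # Illustrative excerpt; placement of nodeLabels is an assumption.
      spec:
        providerSpec:
          value:
            nodeLabels:
            - key: stacklight       # key taken from allowedNodeLabels
              value: enabled
            - key: node-type
              value: storage-ssd    # custom value when allowedNodeLabels defines none
      ```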

    • Auto-commence provisioning

      Available since MOSK 25.2 and MOSK management 2.30.0. Select to set day1Provisioning to auto, allowing immediate automatic provisioning after bare metal host inspection.

      If unselected (default), the machine will enter AwaitsProvisioning state and require a manual approval using the Provision button.

    • Auto-commence deployment

      Available since MOSK 25.2 and MOSK management 2.30.0. Select to set day1Deployment to auto, allowing immediate automatic deployment after provisioning completes.

      If unselected (default), the machine will enter AwaitsDeployment state and require a manual approval using the Deploy button.
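
      The two Auto-commence checkboxes map to the day1Provisioning and day1Deployment fields of the machine specification. An illustrative excerpt, with the field placement assumed (verify per release):

      ```yaml
      # Illustrative excerpt; the checkboxes set these fields to auto or manual.
      spec:
        providerSpec:
          value:
            day1Provisioning: auto   # provision right after host inspection
            day1Deployment: manual   # pause in AwaitsDeployment until Deploy is clicked
      ```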

    • Count

      Specify the number of machines to create. If you create a machine pool, specify the replicas count of the pool.

    • Manager

      Select Manager or Worker to create a Kubernetes manager or worker node.

      Caution

      The required minimum number of machines:

      • 3 manager nodes for HA

      • 3 worker storage nodes for a minimal Ceph cluster

    • BareMetal Host Label

      Assign a role to the new machine(s) to link each machine to a previously created bare metal host with the corresponding label. You can assign one role type per machine. The supported labels include:

      • Manager

        This node hosts the manager services of a MOSK cluster. For reliability reasons, MOSK does not permit running end-user workloads on the manager nodes or using them as storage nodes.

      • Worker

        The default role for any node in a MOSK cluster. Only the kubelet service is running on the machines of this type.
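
      Under the hood, the role label translates into a host selector that matches a labeled bare metal host. An illustrative excerpt in which both the selector placement and the label key/value are hypothetical (use the labels of your own hosts):

      ```yaml
      # Illustrative excerpt; hostSelector placement and the label are assumptions.
      spec:
        providerSpec:
          value:
            hostSelector:
              matchLabels:
                kaas.mirantis.com/baremetalhost-id: worker-host-1
      ```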

    • Upgrade Index

      Optional. A positive integer value that defines the order of machine upgrade during a cluster update.

      Note

      You can change the upgrade order later on an existing cluster. For details, see Change the upgrade order of a machine.

      Consider the following upgrade index specifics:

      • The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.

      • If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

      • If several machines have the same upgrade index, they have the same priority during upgrade.

      • If the value is not set, the machine is automatically assigned an upgrade index value.

    • Distribution

      Operating system to provision the machine. From the drop-down list, select the required Ubuntu distribution.

    • L2 Template

      From the drop-down list, select the previously created L2 template, if any. For details, see Create L2 templates. Otherwise, leave the default selection to use the default L2 template of the cluster.

      Note

      Before Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0), if you leave the default selection in the drop-down list, a preinstalled L2 template is used. Preinstalled templates are removed in the above-mentioned releases.

    • BM Host Profile

      From the drop-down list, select the previously created custom bare metal host profile, if any. For details, see Create a custom bare metal host profile. Otherwise, leave the default selection.

    • Node Labels

      Add the required node labels for the worker machine to run certain components on a specific node. For example, for the StackLight nodes that run OpenSearch and require more resources than a standard node, add the StackLight label. The list of available node labels is obtained from allowedNodeLabels of your current Cluster release.

      If the value field is not defined in allowedNodeLabels, from the drop-down list, select the required label and define an appropriate custom value for this label to be set on the node. For example, the node-type label can have the storage-ssd value to meet the service scheduling logic on a particular machine.

      Caution

      If you deploy StackLight in the HA mode (recommended):

      • Add the StackLight label to a minimum of three worker nodes. Otherwise, StackLight will not be deployed until the required number of worker nodes is configured with the StackLight label.

      • Removing the StackLight label from worker nodes, as well as removing worker nodes that have the StackLight label, can cause the StackLight components to become inaccessible. It is important to correctly maintain the worker nodes where the StackLight local volumes were provisioned. For details, see Delete a cluster machine.

        To obtain the list of nodes where StackLight is deployed, refer to Upgrade managed clusters with StackLight deployed in HA mode.

      If you move the StackLight label to a new worker machine on an existing cluster, manually deschedule all StackLight components from the old worker machine from which you remove the StackLight label. For details, see Deschedule StackLight Pods from a worker machine.

      Note

      To add node labels after deploying a worker machine, navigate to the Machines page, click the More action icon in the last column of the required machine row, and select Configure machine.

  6. Click Create.

    At this point, MOSK adds the new Machine object to the specified cluster, and the Bare Metal Operator Controller links it to a bare metal host whose labels match the assigned role.

    The workflow depends on the controlled provisioning settings:

    1. If Auto-commence provisioning is selected, provisioning starts automatically after bare metal host inspection completes. Otherwise, the machine switches to the AwaitsProvisioning state. Click Provision in the machine menu to approve provisioning after verifying the hardware inventory.

    2. If Auto-commence deployment is selected, deployment starts automatically. Otherwise, the machine switches to the AwaitsDeployment state. Click Deploy in the machine menu to approve deployment after verifying the provisioned configuration.

    For details on the controlled provisioning workflow, see Controlled bare metal provisioning workflows.

    Provisioning of the newly created machine starts when the Machine object is created and includes the following stages:

    1. Creation of partitions on local disks as required by the operating system and MOSK architecture.

    2. Configuration of network interfaces on the host as required by the operating system and MOSK architecture.

    3. Installation and configuration of the MOSK LCM Agent.

  7. Repeat the steps above for the remaining machines.

  8. Verify the machine status.
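
    Machine state can also be inspected on the Machine object itself. An illustrative, read-only status excerpt, assuming the provider reports the LCM state in providerStatus (exact state names vary per release):

    ```yaml
    # Illustrative status excerpt; state names are assumptions.
    status:
      providerStatus:
        status: Ready    # typically progresses through Provision, Deploy, Ready
    ```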

Now, proceed to Add a Ceph cluster prior to MOSK 25.2.