This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.24.0. For the list of enhancements in the Cluster release 14.0.0 that is introduced by the Container Cloud release 2.24.0, see Cluster release 14.0.0.

Automated upgrade of operating system on bare metal clusters

Support status of the feature

  • Since MOSK 23.2, the feature is generally available for MOSK clusters.

  • Since Container Cloud 2.24.2, the feature is generally available for any type of bare metal clusters.

  • Since Container Cloud 2.24.0, the feature is available as Technology Preview for management and regional clusters only.

Implemented automatic in-place upgrade of an operating system (OS) distribution on bare metal clusters. The OS upgrade occurs as part of a cluster update that requires a machine reboot. The OS upgrade workflow is as follows:

  1. The distribution ID value is taken from the id field of the distribution from the allowedDistributions list in the spec of the ClusterRelease object.

  2. The distribution that has the default: true value is used during update. This distribution ID is set in the spec:providerSpec:value:distribution field of the Machine object during cluster update.
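The steps above can be sketched with simplified object fragments. Only the field paths follow the description above; the distribution IDs and other values are illustrative placeholders:

```yaml
# Fragment of a ClusterRelease object (illustrative; distribution IDs are placeholders)
spec:
  allowedDistributions:
    - id: ubuntu/focal        # distribution ID taken from the id field
      default: true           # the default distribution is used during update
    - id: ubuntu/bionic
---
# Fragment of a Machine object after the distribution ID is set during cluster update
spec:
  providerSpec:
    value:
      distribution: ubuntu/focal
```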

On management and regional clusters, the operating system upgrades automatically during cluster update. For managed clusters, an in-place OS distribution upgrade should be performed between cluster updates. This scenario implies cordoning, draining, and rebooting of machines.


During the course of the Container Cloud 2.24.x series, Mirantis highly recommends upgrading an operating system on your cluster machines to Ubuntu 20.04 before the next major Cluster release becomes available. It is not mandatory to upgrade all machines at once. You can upgrade them one by one or in small batches, for example, if the maintenance window is limited in time.

Otherwise, the Cluster release update of Ubuntu 18.04-based clusters will become impossible as of the Cluster releases introduced in Container Cloud 2.25.0, in which only the Ubuntu 20.04 distribution will be supported.

Support for WireGuard on bare metal clusters


Added initial Technology Preview support for WireGuard that enables traffic encryption on the Kubernetes workloads network. Set secureOverlay: true in the Cluster object during deployment of management, regional, or managed bare metal clusters to enable WireGuard encryption.
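As a minimal sketch, assuming that secureOverlay resides under spec:providerSpec:value as other provider settings of the Cluster object do:

```yaml
# Fragment of a Cluster object enabling WireGuard encryption;
# the nesting under providerSpec:value is an assumption
spec:
  providerSpec:
    value:
      secureOverlay: true
```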

Also, added the possibility to configure the maximum transmission unit (MTU) size for Calico, which is required for the WireGuard functionality and helps maximize network performance.


For MOSK-based deployments, the feature support is available since MOSK 23.2.

MetalLB configuration changes for bare metal and vSphere

Support status of the feature

  • For management and regional clusters, the feature is generally available since Container Cloud 2.24.0.

  • For managed clusters, the MetalLBConfig object is available as Technology Preview and will become generally available in one of the following Container Cloud releases.

Introduced the following MetalLB configuration changes and objects related to address allocation and announcement of load-balanced services for the bare metal and vSphere providers:

  • Introduced the MetalLBConfigTemplate object for bare metal and the MetalLBConfig object for vSphere, which are now the default and recommended way to configure MetalLB.

  • For vSphere, during creation of clusters of any type, a separate MetalLBConfig object is now created instead of the corresponding settings in the Cluster object.

  • The use of either Subnet objects without the new MetalLB objects or the configInline MetalLB value of the Cluster object is deprecated and will be removed in one of the following releases.

  • If the MetalLBConfig object is not used for the MetalLB configuration related to address allocation and announcement of load-balanced services, automated migration applies during creation of clusters of any type or during cluster update to Container Cloud 2.24.0.

    During automated migration, the MetalLBConfig and MetalLBConfigTemplate objects for bare metal or the MetalLBConfig object for vSphere are created, and the contents of the MetalLB chart configInline value are converted to the parameters of the MetalLBConfigTemplate object for bare metal or of the MetalLBConfig object for vSphere.
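As an illustration only, a MetalLBConfig object for vSphere might look as follows. Only the MetalLBConfig kind is taken from the text above; the API group, field names, and addresses are hypothetical:

```yaml
# Hypothetical MetalLBConfig object; apiVersion, spec fields, and values are illustrative
apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  name: example-metallb-config
spec:
  ipAddressPools:            # hypothetical field mirroring upstream MetalLB IPAddressPool
    - name: services
      addresses:
        - 192.168.100.100-192.168.100.120
```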

The following changes apply to the bare metal bootstrap procedure:

  • Moved several environment variables from cluster.yaml.template to the dedicated ipam-objects.yaml.template.

  • Modified the default network configuration. Now it includes a bond interface and separated PXE and management networks. Mirantis recommends using separate PXE and management networks for management and regional clusters.

Support for RHEL 8.7 on the vSphere provider


Added support for RHEL 8.7 on the vSphere-based management, regional, and managed clusters.


Container Cloud does not support mixed operating systems, RHEL combined with Ubuntu, in one cluster.

Custom flavors for Octavia on OpenStack-based clusters

Implemented the possibility to use custom Octavia Amphora flavors that you can enable in the spec:providerSpec section of the Cluster object using serviceAnnotations:loadbalancer.openstack.org/flavor-id during management or regional cluster deployment.
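The annotation described above can be sketched as follows; the annotation key is from the text, while the nesting and the flavor ID are placeholders:

```yaml
# Fragment of a Cluster object; replace <octavia-flavor-id> with the ID of a custom
# Octavia Amphora flavor
spec:
  providerSpec:
    value:
      serviceAnnotations:
        loadbalancer.openstack.org/flavor-id: <octavia-flavor-id>
```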


For managed clusters, you can enable the feature through the Container Cloud API. The web UI functionality will be added in one of the following Container Cloud releases.

Deletion of persistent volumes during an OpenStack-based cluster deletion

Completed the development of persistent volume deletion during OpenStack-based managed cluster deletion by implementing the Delete all volumes in the cluster check box in the cluster deletion menu of the Container Cloud web UI.


The feature applies only to volumes created on clusters that are based on or updated to the Cluster release 11.7.0 or later.

If you added volumes to an existing cluster before it was updated to the Cluster release 11.7.0, delete such volumes manually after the cluster deletion.

Support for Keycloak Quarkus

Upgraded the Keycloak major version from 18.0.0 to 21.1.1. For the list of new features and enhancements, see Keycloak Release Notes.

The upgrade path is fully automated. No data migration or custom LCM changes are required.


After the Keycloak upgrade, access the Keycloak Admin Console using the new URL format: https://<keycloak.ip>/auth instead of https://<keycloak.ip>. Otherwise, the Resource not found error displays in a browser.

Custom host names for cluster machines


Added initial Technology Preview support for custom host names of machines on any supported provider and any cluster type. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.

You can enable the feature before or after management or regional cluster deployment. If enabled after deployment, custom host names will apply to all newly deployed machines in the region. Existing host names will remain the same.

Parallel update of worker nodes


Added initial Technology Preview support for parallelizing node update operations, which significantly improves the efficiency of cluster updates. To configure parallel node updates, use the following parameters located under spec.providerSpec of the Cluster object:

  • maxWorkerUpgradeCount - maximum number of worker nodes to update simultaneously, which limits machine draining during update

  • maxWorkerPrepareCount - maximum number of worker nodes that download artifacts simultaneously, which limits network load during update
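A minimal sketch of the parameters above, assuming they sit under the value subsection of spec.providerSpec as other provider settings do; the numbers are examples only:

```yaml
# Fragment of a Cluster object configuring parallel worker node updates
spec:
  providerSpec:
    value:
      maxWorkerUpgradeCount: 3   # at most 3 workers are drained and updated at once
      maxWorkerPrepareCount: 5   # at most 5 workers download artifacts at once
```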


For MOSK clusters, you can start using this feature during cluster update from 23.1 to 23.2. For details, see MOSK documentation: Parallelizing node update operations.

Cache warm-up for managed clusters

Implemented the CacheWarmupRequest resource to predownload (warm up) a list of artifacts included in a given set of Cluster releases into the mcc-cache service only once per release. The feature facilitates and speeds up deployment and update of managed clusters.

After a successful cache warm-up, the CacheWarmupRequest object is automatically deleted from the cluster, and the cache remains available for managed cluster deployment or update until the next Container Cloud auto-upgrade of the management or regional cluster.
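As an illustration only, a CacheWarmupRequest might look as follows. Only the resource kind is taken from the text above; the API group and field names are hypothetical:

```yaml
# Hypothetical CacheWarmupRequest object; apiVersion and spec fields are illustrative
apiVersion: kaas.mirantis.com/v1alpha1
kind: CacheWarmupRequest
metadata:
  name: warmup-example
spec:
  clusterReleases:      # hypothetical field: Cluster releases whose artifacts to predownload
    - 14-0-0
```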


If the disk space for cache runs out, the cache for the oldest object is evicted. To avoid running out of space in the cache, verify and adjust its size before each cache warm-up.


For MOSK-based deployments, the feature support is available since MOSK 23.2.

Support for auditd


Added initial Technology Preview support for the Linux Audit daemon auditd to monitor activity of cluster processes on any type of Container Cloud cluster. The feature is an essential requirement of many security guides and enables auditing of any cluster process to detect potentially malicious activity.

You can enable and configure auditd either during or after cluster deployment using the Cluster object.
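As a hypothetical sketch only; the text above does not name the exact field for enabling auditd, so the audit section below is illustrative:

```yaml
# Hypothetical fragment of a Cluster object enabling auditd; the audit section name
# is an assumption
spec:
  providerSpec:
    value:
      audit:
        auditd:
          enabled: true
```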


For MOSK-based deployments, the feature support is available since MOSK 23.2.

Enhancements for TLS certificates configuration


Enhanced TLS certificates configuration for cluster applications:

  • Added support for configuration of TLS certificates for MKE on management or regional clusters to the existing support on managed clusters.

  • Implemented the ability to configure TLS certificates using the Container Cloud web UI through the Security section located in the More > Configure cluster menu.

Graceful cluster reboot using web UI

Expanded the capability to perform a graceful reboot of a management, regional, or managed cluster for all supported providers by adding the Reboot machines option to the cluster menu in the Container Cloud web UI. The feature allows for a rolling reboot of all cluster machines without interrupting workloads. The reboot occurs in the order defined by the cluster upgrade policy.


For MOSK-based deployments, the feature support is available since MOSK 23.2.

Creation and deletion of bare metal host credentials using web UI

Improved management of bare metal host credentials using the Container Cloud web UI:

  • Added the Add Credential menu to the Credentials tab. The feature facilitates association of credentials with bare metal hosts created using the BM Hosts tab.

  • Implemented automatic deletion of credentials during deletion of bare metal hosts after deletion of a managed cluster.

Node labeling improvements in web UI

Improved the Node Labels menu in the Container Cloud web UI by making it more intuitive. Replaced the greyed-out (disabled) label names with the No labels have been assigned to this machine message and the Add a node label link.

Also, added the possibility to configure node labels for machine pools after deployment using the More > Configure Pool option.

Documentation enhancements

On top of continuous improvements delivered to the existing Container Cloud guides, added the documentation on managing Ceph OSDs with a separate metadata device.