This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.19.0. For the list of enhancements in the Cluster releases 11.3.0 and 7.9.0 that are introduced by the Container Cloud release 2.19.0, see the Cluster releases (managed).

General availability support for machine upgrade order

Implemented full support for the machine upgrade sequence, which allows prioritized machines to be upgraded first. You can now set the upgrade index on an existing machine or machine pool using the Container Cloud web UI.

Consider the following upgrade index specifics:

  • The first machine to upgrade is always one of the control plane machines with the lowest upgradeIndex. Other control plane machines are upgraded one by one according to their upgrade indexes.

  • If the Cluster spec dedicatedControlPlane field is false, worker machines are upgraded only after the upgrade of all control plane machines finishes. Otherwise, they are upgraded after the first control plane machine, concurrently with other control plane machines.

  • If several machines have the same upgrade index, they have the same priority during upgrade.

  • If the value is not set, the machine is automatically assigned an upgrade index.
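
The ordering rules above, for the dedicatedControlPlane: false case, can be sketched as follows. This is an illustrative model only, not product code: the dictionary keys (name, control_plane, upgrade_index) are assumptions made for the sketch.

```python
# Illustrative model of the upgrade ordering rules described above:
# control plane machines are upgraded first, ordered by upgradeIndex;
# worker machines follow once all control plane machines are done.

def upgrade_order(machines):
    """Return machine names in the order they would be upgraded.

    `machines` is a list of dicts with keys: name, control_plane (bool),
    upgrade_index (int or None). Machines without an index are placed
    after indexed ones (the product assigns an index automatically).
    Machines with equal indexes share the same priority, so this stable
    sort keeps their relative order.
    """
    def key(m):
        # (is_unset, index): unset indexes sort last.
        return (m["upgrade_index"] is None, m["upgrade_index"] or 0)

    control = sorted((m for m in machines if m["control_plane"]), key=key)
    workers = sorted((m for m in machines if not m["control_plane"]), key=key)
    return [m["name"] for m in control + workers]
```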

Web UI support for booting an OpenStack machine from a volume

Implemented the Boot From Volume option for the OpenStack machine creation wizard in the Container Cloud web UI. The feature allows booting OpenStack-based machines from a block storage volume.

The feature is beneficial for clouds that do not have enough space on hypervisors. With this option enabled, machines boot from Cinder block storage instead of Nova ephemeral storage.

Modification of network configuration on machines

Enabled the ability to modify existing network configuration on running bare metal clusters with a mandatory approval of new settings by an Infrastructure Operator. This validation is required to prevent accidental cluster failures due to misconfiguration.

After you make the necessary network configuration changes in the required L2 template, you now need to approve the changes by setting the spec.netconfigUpdateAllow field to true in each affected IpamHost object.
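
For reference, the approval flag might be set as follows. This is a sketch: only the spec.netconfigUpdateAllow field comes from the text above; the apiVersion and metadata values are placeholders, so verify them against your cluster's actual IpamHost objects.

```yaml
apiVersion: ipam.mirantis.com/v1alpha1  # API version assumed for illustration
kind: IpamHost
metadata:
  name: example-host          # placeholder
  namespace: example-project  # placeholder
spec:
  netconfigUpdateAllow: true  # approves the pending network configuration change
```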


For MKE clusters that are part of MOSK infrastructure, support for this feature will become available in one of the following Container Cloud releases.

New format of log entries on management clusters

Implemented a new format of log entries for cluster and machine logs of a management cluster. Each log entry now contains a request ID that identifies the chronology of actions performed on a cluster or machine. The feature applies to all supported cloud providers.

The new format is <providerType>.<objectName>.req:<requestID>. For example, bm.machine.req:374, bm.cluster.req:172.

  • <providerType> - provider name, possible values: aws, azure, os, bm, vsphere, equinix.

  • <objectName> - name of an object being processed by provider, possible values: cluster, machine.

  • <requestID> - request ID number that increases each time the provider receives a request from Kubernetes to create, update, or delete an object. The request ID allows combining all operations performed on an object within one request, for example, a machine creation followed by updates of its statuses.
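
The key format above can be parsed with a few lines of code. The following sketch is illustrative only; the function name and return shape are assumptions, while the field values come from the list above.

```python
import re

# Matches entries such as "bm.machine.req:374" following the
# <providerType>.<objectName>.req:<requestID> format described above.
LOG_KEY = re.compile(
    r"^(?P<provider>aws|azure|os|bm|vsphere|equinix)"
    r"\.(?P<object>cluster|machine)"
    r"\.req:(?P<request_id>\d+)$"
)

def parse_log_key(key):
    """Return (provider, object, request_id) or None if the key does not match."""
    m = LOG_KEY.match(key)
    if not m:
        return None
    return m.group("provider"), m.group("object"), int(m.group("request_id"))
```

A parser like this makes it easy to group log entries by request ID when reconstructing the sequence of operations on a single object.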

Extended and basic versions of logs

Implemented the --extended flag for collecting the extended version of logs, which contains system and MKE logs, logs from LCM Ansible and LCM Agent, cluster events, and Kubernetes resource descriptions and logs. You can use this flag to collect logs on any cluster type.

Without the --extended flag, the basic version of logs is collected, which is sufficient for most use cases. The basic version of logs contains all events, Kubernetes custom resources, and logs from all Container Cloud components. This version does not require passing --key-file.
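
The invocations below are a sketch only: the binary name (container-cloud) and the collect logs subcommand are assumptions based on the standard Container Cloud tooling, so check your release documentation for the exact command. Only the --extended and --key-file flags come from the text above.

```bash
# Basic log collection -- sufficient for most use cases, no SSH key required.
# Command name and subcommand assumed; placeholders in angle brackets.
./container-cloud collect logs --cluster-name <cluster-name>

# Extended collection: adds system and MKE logs plus LCM Ansible and
# LCM Agent logs; requires an SSH key to reach the cluster nodes.
./container-cloud collect logs --extended --key-file <ssh-key> --cluster-name <cluster-name>
```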

Distribution selector for bare metal machines in web UI

Added the Distribution field to the bare metal machine creation wizard in the Container Cloud web UI. The default operating system in the distribution list is Ubuntu 20.04.


Use the outdated Ubuntu 18.04 distribution only on existing clusters that are already based on Ubuntu 18.04; do not use it for greenfield deployments.

Removal of Helm v2 support from Helm Controller

After switching all remaining OpenStack Helm releases from v2 to v3, dropped support for Helm v2 in Helm Controller and removed the Tiller image from all related components.