This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.11.0. For the list of enhancements in the Cluster releases 7.1.0, 6.18.0, and 5.18.0 that are supported by the Container Cloud release 2.11.0, see the Cluster releases (managed).
Support for the Microsoft Azure cloud provider¶
Introduced Technology Preview support for the Microsoft Azure cloud provider, including the capability to create and operate management, regional, and managed clusters.
For the Technology Preview feature definition, refer to Technology Preview features.
RHEL 7.9 bootstrap node for the vSphere-based provider¶
Implemented the capability to bootstrap vSphere-based clusters from a bootstrap node that runs RHEL 7.9.
Validation labels for the vSphere-based VM templates¶
Implemented validation labels for the vSphere-based VM templates in the Container Cloud web UI. If a VM template was initially created using the built-in Packer mechanism, its Container Cloud version appears as a green label on the right side of the VM templates drop-down list. Otherwise, the template is marked with the Unknown label.
Mirantis recommends using only green-labeled templates for production deployments.
Automatic migration of Docker data and LVP volumes to NVMe on AWS clusters¶
Implemented automatic migration of Docker data located at /var/lib/docker and of local provisioner volumes from existing EBS to local NVMe SSDs during the upgrade of AWS-based management and managed clusters. On new clusters, Docker data is located on local NVMe SSDs from the start.
The migration allows moving heavy workloads such as etcd and MariaDB to local NVMe SSDs, which significantly improves cluster performance.
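As a quick post-upgrade check, you can inspect which mount source backs the Docker data root on a node. This is an illustrative command, not part of Container Cloud tooling:

```shell
# Illustrative check, not a Container Cloud command: show which block device
# backs the Docker data root. On a migrated AWS node, the source is expected
# to be a local NVMe device such as /dev/nvme1n1 rather than an EBS volume.
docker_root=/var/lib/docker
findmnt -n -o SOURCE,TARGET --target "${docker_root}" \
  || echo "No mount information found for ${docker_root}"
```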
Switch of core Helm releases from v2 to v3¶
Upgraded all core Helm releases from v2 to v3. The remaining Helm releases will be switched to v3 in one of the following Container Cloud releases.
Bond interfaces for baremetal-based management clusters¶
Added the possibility to configure L2 templates for the baremetal-based management cluster to set up a bond network interface to the PXE/Management network.
Apply this configuration to the bootstrap templates before you run the bootstrap script to deploy the management cluster.
Using this configuration requires that every host in your management cluster has at least two physical interfaces.
Connect at least two interfaces per host to an Ethernet switch that supports Link Aggregation Control Protocol (LACP) port groups and LACP fallback.
Configure an LACP group on the ports connected to the NICs of a host.
Configure the LACP fallback on the port group to ensure that the host can boot over the PXE network before the bond interface is set up on the host operating system.
Configure server BIOS for both NICs of a bond to be PXE-enabled.
If the server does not support booting from multiple NICs, configure the port of the LACP group that is connected to the PXE-enabled NIC of the server as the primary port. With this setting, the port becomes active in the fallback mode.
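The bonding part of such an L2 template can be sketched as a netplan-style fragment. The `nic` macros and exact field layout below are illustrative and may differ depending on your Container Cloud version, so treat this as a sketch rather than a ready-to-use template:

```yaml
# Illustrative netplan-style bond fragment for an L2 template.
# The interface macros and exact schema depend on the Container Cloud version.
bonds:
  bond0:
    interfaces:
      - {{ nic 0 }}   # first physical NIC connected to the LACP port group
      - {{ nic 1 }}   # second physical NIC in the same LACP port group
    parameters:
      mode: 802.3ad   # LACP mode; requires LACP and LACP fallback on the switch
```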
Bare metal advanced configuration using web UI¶
Implemented the following amendments for bare metal advanced configuration in the Container Cloud web UI:
On the Cluster page, added the Subnets section with a list of available subnets.
Added the Add new subnet wizard.
Renamed the BareMetal tab to BM Hosts.
Added the BM Host Profiles tab that contains a list of custom bare metal host profiles, if any.
Added the BM Host Profile drop-down list to the Create new machine wizard.
Equinix Metal capacity labels for machines in web UI¶
Implemented the verification mechanism for the actual capacity of the Equinix Metal facilities before machine deployment. Now, you can see the following labels in the Equinix Metal Create a machine wizard of the Container Cloud web UI:
Normal - the facility has a lot of available machines. Prioritize this machine type over others.
Limited - the facility has a limited number of machines. Do not request many machines of this type.
Unknown - Container Cloud cannot fetch information about the capacity level since the feature is disabled.
Documentation enhancements¶
On top of continuous improvements delivered to the existing Container Cloud guides, added a procedure on how to update the Keycloak IP address on bare metal clusters.