Enhancements

This section outlines new features and enhancements introduced in the Container Cloud release 2.25.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.25.0, see 17.0.0, 16.0.0, and 14.1.0.

Container Cloud Bootstrap v2

Implemented Container Cloud Bootstrap v2 that provides an improved user experience for setting up Container Cloud. With Bootstrap v2, you also gain access to a comprehensive and user-friendly web UI for the OpenStack and vSphere providers.

Bootstrap v2 enables you to prepare a management cluster configuration before deployment using a streamlined process that isolates each step. This approach not only simplifies the bootstrap procedure but also makes it easier to troubleshoot potential intermediate failures.

Note

The Bootstrap web UI support for the bare metal provider will be added in one of the following Container Cloud releases.

General availability for MetalLBConfigTemplate and MetalLBConfig objects

Completed development of the MetalLB configuration for address allocation and announcement of load-balanced services, which uses the MetalLBConfigTemplate object for bare metal and the MetalLBConfig object for vSphere. Container Cloud now uses these objects in the default templates recommended for creation of management and managed clusters.

At the same time, removed support for the deprecated options, such as the configInline value of the MetalLB chart and the use of Subnet objects without the new MetalLBConfigTemplate and MetalLBConfig objects.

The automated migration of these deprecated options, which was applied during creation of clusters of any type or during cluster update to Container Cloud 2.24.x, is removed during the management cluster upgrade to Container Cloud 2.25.0. After that, any changes in the MetalLB configuration related to address allocation and announcement of load-balanced services are applied using the MetalLBConfig, MetalLBConfigTemplate, and Subnet objects only.
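For reference, the sketch below illustrates the object-based approach for the bare metal provider. The apiVersion value and the spec fields are assumptions for illustration only; refer to the Container Cloud API documentation for the exact schema.

  # Illustrative sketch only: apiVersion and spec fields are assumptions
  apiVersion: kaas.mirantis.com/v1alpha1
  kind: MetalLBConfig
  metadata:
    name: kaas-mgmt-metallb            # hypothetical object name
    namespace: default
  spec:
    # Assumed field: for bare metal, the MetalLBConfig object references a
    # MetalLBConfigTemplate that renders address pools and announcements
    # from Subnet objects instead of the deprecated configInline value
    templateName: kaas-mgmt-metallb-template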

Manual IP address allocation for bare metal hosts during PXE provisioning

Technology Preview

Implemented the following annotations for bare metal hosts that enable manual allocation of IP addresses during PXE provisioning on managed clusters:

  • host.dnsmasqs.metal3.io/address - assigns a specific IP address to a host

  • baremetalhost.metal3.io/detached - pauses automatic host management

These annotations are useful if you have a limited number of free IP addresses for server provisioning. Using them, you can manually create bare metal hosts one by one and provision servers in small, manually controlled batches.
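The following sketch shows how these annotations might be set on a BareMetalHost object. The host name, namespace, and IP address are hypothetical examples; only the annotation keys come from this release, and the remaining BareMetalHost spec is omitted.

  apiVersion: metal3.io/v1alpha1
  kind: BareMetalHost
  metadata:
    name: worker-3                     # hypothetical host name
    namespace: managed-ns              # hypothetical project namespace
    annotations:
      # Assigns this specific IP address to the host during PXE provisioning
      host.dnsmasqs.metal3.io/address: "10.0.50.12"
      # Pauses automatic host management until the annotation is removed
      baremetalhost.metal3.io/detached: "true"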

Status of infrastructure health for bare metal and OpenStack providers

Implemented the Infrastructure Status condition to monitor infrastructure readiness in the Container Cloud web UI during cluster deployment for bare metal and OpenStack providers. Readiness of the following components is monitored:

  • Bare metal: the MetalLBConfig object along with MetalLB and DHCP subnets

  • OpenStack: cluster network, routers, load balancers, and Bastion along with their ports and floating IPs

For the bare metal provider, also implemented the Infrastructure Status condition for machines to monitor readiness of the IPAMHost, L2Template, BareMetalHost, and BareMetalHostProfile objects associated with the machine.
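Conceptually, the new condition appears in the object status along with other readiness conditions. The snippet below is purely illustrative: the exact condition name, field layout, and message wording are assumptions.

  # Illustrative only: condition type, fields, and message are assumptions
  status:
    providerStatus:
      conditions:
      - type: InfrastructureStatus
        ready: true
        message: IPAMHost, L2Template, BareMetalHost, and BareMetalHostProfile objects are ready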

General availability for RHEL 8.7 on vSphere-based clusters

Introduced general availability support for RHEL 8.7 on VMware vSphere-based clusters. You can install this operating system on any type of Container Cloud cluster, including the bootstrap node.

Note

RHEL 7.9 is not supported as the operating system for the bootstrap node.

Caution

A Container Cloud cluster based on mixed RHEL versions, such as RHEL 7.9 and 8.7, is not supported.

Automatic cleanup of old Ubuntu kernel packages

Implemented automatic cleanup of old Ubuntu kernel and other unnecessary system packages. During cleanup, Container Cloud keeps the two most recent kernel versions, which is the default behavior of the Ubuntu apt autoremove command.

Mirantis recommends keeping two kernel versions, with the previous version serving as a fallback in case the current kernel becomes unstable. However, if you absolutely require keeping only the latest kernel packages, you can use the cleanup-kernel-packages script after considering all possible risks.

Configuration of a custom OIDC provider for MKE on managed clusters

Implemented the ability to configure a custom OpenID Connect (OIDC) provider for MKE on managed clusters using the ClusterOIDCConfiguration custom resource. Using this resource, you can add your own OIDC provider configuration to authenticate user requests to Kubernetes.

Note

For OpenStack and StackLight, Container Cloud supports only Keycloak, which is configured on the management cluster, as the OIDC provider.
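A minimal sketch of the new resource is shown below. The spec fields and all values are assumptions for illustration only; verify them against the ClusterOIDCConfiguration API reference.

  # Illustrative sketch only: spec fields and values are assumptions
  apiVersion: kaas.mirantis.com/v1alpha1
  kind: ClusterOIDCConfiguration
  metadata:
    name: custom-oidc                  # hypothetical name
    namespace: managed-ns              # project of the managed cluster
  spec:
    issuerURL: https://oidc.example.com     # endpoint of your OIDC provider
    clientID: kubernetes                    # client registered with the provider
    caBundle: <base64-encoded CA certificate of the provider>
    # Assumed field: OIDC claim that grants admin access to matching users
    adminRoleCriteria:
      matchType: must
      name: groups
      value: kube-admins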

The admin role for the management cluster

Implemented the management-admin OIDC role to grant full admin access specifically to a management cluster. This role enables the user to manage Pods and all other resources of the cluster, for example, for debugging purposes.
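As a hypothetical sketch of how such a role could be bound to a user through the Container Cloud IAM objects (the binding kind and field layout are assumptions; only the management-admin role name comes from this release):

  # Illustrative sketch only: the binding kind and its fields are assumptions
  apiVersion: iam.mirantis.com/v1alpha1
  kind: IAMGlobalRoleBinding
  metadata:
    name: alice-management-admin
  role:
    name: management-admin             # role introduced in this release
  user:
    name: alice                        # hypothetical Keycloak user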

General availability for graceful machine deletion

Introduced general availability support for graceful machine deletion with a safe cleanup of node resources:

  • Changed the default deletion policy from unsafe to graceful for machine deletion using the Container Cloud API.

    Using the deletionPolicy: graceful parameter in the providerSpec.value section of the Machine object, the cloud provider controller prepares a machine for deletion by cordoning, draining, and removing the related node from Docker Swarm, as shown in the sketch after this list. If required, you can abort a machine deletion when using deletionPolicy: graceful, but only before the related node is removed from Docker Swarm.

  • Implemented the following machine deletion methods in the Container Cloud web UI: Graceful, Unsafe, Forced.

  • Added support for deletion of manager machines for MOSK-based clusters using any of the deletion policies mentioned above. This capability is intended only for replacement or recovery of failed nodes.
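For reference, the sketch below shows where the deletion policy is set in the Machine object. The machine name and namespace are hypothetical; only the deletionPolicy parameter and its location in the providerSpec.value section come from this release.

  apiVersion: cluster.k8s.io/v1alpha1  # assumed Machine API version
  kind: Machine
  metadata:
    name: worker-2                     # hypothetical machine name
    namespace: managed-ns              # hypothetical project namespace
  spec:
    providerSpec:
      value:
        # Default since this release: cordon and drain the node and remove
        # it from Docker Swarm before deleting the machine
        deletionPolicy: graceful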

General availability for parallel update of worker nodes

Completed development of the parallel update of worker nodes during cluster update by implementing the ability to configure the required options using the Container Cloud web UI. Parallelizing node update operations significantly improves the update efficiency of large clusters.

The following options are added to the Create Cluster window (see also the illustrative sketch after this list):

  • Parallel Upgrade Of Worker Machines that sets the maximum number of worker nodes to update simultaneously

  • Parallel Preparation For Upgrade Of Worker Machines that sets the maximum number of worker nodes for which new artifacts are downloaded at a given moment of time
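As a hypothetical illustration of how these limits might map to the Cluster object configuration, the field names below are assumptions and must be verified against the Cluster API reference:

  # Illustrative sketch only: field names are assumptions
  spec:
    providerSpec:
      value:
        # Maximum number of worker nodes to update simultaneously
        maxWorkerUpgradeCount: 5
        # Maximum number of worker nodes that download new artifacts
        # at the same time
        maxWorkerPrepareCount: 10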