Enhancements¶
This section outlines new features and enhancements introduced in the Container Cloud release 2.27.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.27.0, see 17.2.0 and 16.2.0.
General availability for Ubuntu 22.04 on bare metal clusters¶
Implemented full support for Ubuntu 22.04 LTS (Jammy Jellyfish) as the default host operating system, which now installs on non-MOSK bare metal management and managed clusters.
For MOSK deployments:
Existing management clusters are automatically updated to Ubuntu 22.04 during cluster upgrade to Container Cloud 2.27.0 (Cluster release 16.2.0).
Greenfield deployments of management clusters are based on Ubuntu 22.04.
Existing and greenfield deployments of managed clusters are still based on Ubuntu 20.04. The support for Ubuntu 22.04 on this cluster type will be announced in one of the following releases.
Caution
Upgrading from Ubuntu 20.04 to 22.04 on existing deployments of Container Cloud managed clusters is not supported.
Improvements in the day-2 management API for bare metal clusters¶
TechPreview
Enhanced the day-2 management API for the bare metal provider with several key improvements:
Implemented the sysctl, package, and irqbalance configuration modules, which become available for usage after your management cluster upgrade to the Cluster release 16.2.0. These Container Cloud modules use the designated HostOSConfiguration object named mcc-modules to distinguish them from custom modules.
Configuration modules allow managing the operating system of a bare metal host granularly without rebuilding the node from scratch. This approach prevents workload evacuation and significantly reduces configuration time.
Optimized performance for faster, more efficient operations.
Enhanced user experience for easier and more intuitive interactions.
Resolved various internal issues to ensure smoother functionality.
Added comprehensive documentation, including concepts, guidelines, and recommendations for effective use of day-2 operations.
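As an illustration of how a configuration module might be applied, the hypothetical HostOSConfiguration object below enables the sysctl module for a labeled set of machines. The apiVersion, field names, labels, and values are assumptions for sketch purposes, not the authoritative schema:

```yaml
# Hypothetical sketch: apply the sysctl configuration module to machines
# matching a label. Field names and apiVersion are illustrative only.
apiVersion: kaas.mirantis.com/v1alpha1
kind: HostOSConfiguration
metadata:
  name: sysctl-tuning
  namespace: managed-ns
spec:
  machineSelector:
    matchLabels:
      day2-sysctl: enabled     # hypothetical label selecting target machines
  configs:
    - module: sysctl           # one of the mcc-modules: sysctl, package, irqbalance
      moduleVersion: 1.0.0     # assumed version string
      values:
        vm.swappiness: "10"    # example kernel parameter to manage in place
```

Because the module edits host settings in place, no node rebuild or workload evacuation is required, which is the key benefit described above.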
Optimization of strict filtering for devices on bare metal clusters¶
Optimized strict filtering in the BareMetalHostProfile custom resource: the strict byID filtering now targets system disks using the reliable byPath, serialNumber, and wwn device options instead of the unpredictable byName naming format.
The optimization includes changes in admission-controller, which now blocks the use of bmhp:spec:devices:by_name in new BareMetalHostProfile objects.
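The fragment below sketches what a compliant device definition might look like, using stable identifiers rather than by_name. The exact schema is an assumption; the paths and serial numbers are placeholders:

```yaml
# Hypothetical BareMetalHostProfile fragment selecting disks by stable IDs.
# Paths, serial numbers, and field layout are illustrative placeholders.
spec:
  devices:
    - device:
        byPath: /dev/disk/by-path/pci-0000:00:05.0   # stable bus path
        wipe: true
      partitions:
        - name: bios_grub
          size: 4Mi
    - device:
        serialNumber: S3YJNX0M000000                 # placeholder serial
        # or: wwn: "0x5002538e00000000"
        wipe: true
```

Unlike /dev/sda-style names, these identifiers do not change across reboots or with device enumeration order, which is why the admission controller now rejects by_name in new profiles.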
Deprecation of SubnetPool and MetalLBConfigTemplate objects¶
As part of the bare metal provider refactoring, deprecated the
SubnetPool and MetalLBConfigTemplate objects. These objects will be
completely removed from the product in one of the following releases.
Both objects are automatically migrated to the MetalLBConfig object during
cluster update to the Cluster release 17.2.0 or 16.2.0.
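For orientation, a minimal MetalLBConfig object might look like the sketch below, mirroring upstream MetalLB resources; the apiVersion, field names, and addresses are assumptions, not the authoritative schema:

```yaml
# Hedged sketch of a MetalLBConfig object, which supersedes the deprecated
# MetalLBConfigTemplate. Addresses and field names are illustrative.
apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  name: metallb-config
  namespace: managed-ns
spec:
  ipAddressPools:
    - name: services
      spec:
        addresses:
          - 10.0.0.100-10.0.0.120   # placeholder load-balancer address range
  l2Advertisements:
    - name: services
      spec:
        ipAddressPools:
          - services
```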
The ClusterUpdatePlan object for a granular cluster update¶
TechPreview
Implemented the ClusterUpdatePlan custom resource to enable a granular
step-by-step update of a managed cluster. The operator can control the update
process by manually launching update stages using the commence flag.
Between the update stages, a cluster remains functional from the perspective
of cloud users and workloads.
A ClusterUpdatePlan object is automatically created by the respective
Container Cloud provider when a new Cluster release becomes available for your
cluster. This object contains a list of predefined self-descriptive update
steps that are cluster-specific. These steps are defined in the spec
section of the object with information about their impact on the cluster.
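To make the workflow concrete, the hypothetical object below shows an operator launching the first update stage by setting its commence flag. Step names, the cluster reference, and other fields are assumptions based on the description above:

```yaml
# Hypothetical ClusterUpdatePlan fragment. The operator sets commence: true
# on a step to launch that stage; all names and fields are illustrative.
apiVersion: kaas.mirantis.com/v1alpha1
kind: ClusterUpdatePlan
metadata:
  name: managed-cluster-update
  namespace: managed-ns
spec:
  cluster: managed-cluster       # cluster to update (placeholder name)
  steps:
    - name: update-control-plane
      commence: true             # operator-set flag that starts this stage
    - name: update-worker-machines
      commence: false            # remains paused until the operator proceeds
```

Pausing between stages is what keeps the cluster functional for cloud users and workloads while the operator validates each step.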
Update groups for worker machines¶
Implemented the UpdateGroup custom resource for creation of update groups
for worker machines on managed clusters. The use of update groups provides
enhanced control over update of worker machines. This feature decouples the
concurrency settings from the global cluster level, providing update
flexibility based on the workload characteristics of different worker machine
sets.
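A sketch of such a group is shown below; the field names and the way machines are associated with a group are assumptions for illustration:

```yaml
# Hedged sketch of an UpdateGroup limiting update concurrency for a set of
# worker machines. Field names and semantics are illustrative.
apiVersion: kaas.mirantis.com/v1alpha1
kind: UpdateGroup
metadata:
  name: compute-workers
  namespace: managed-ns
spec:
  index: 10               # assumed ordering hint: lower-index groups update first
  concurrentUpdates: 2    # machines in this group updated in parallel
```

Grouping lets, for example, storage-heavy workers update one at a time while stateless workers update in larger batches, independently of the global cluster concurrency setting.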
LCM Agent heartbeats¶
Implemented the same heartbeat model for the LCM Agent as Kubernetes uses for Nodes. This model allows reflecting the actual status of the LCM Agent when it fails. For visual representation, added the corresponding LCM Agent status to the Container Cloud web UI for clusters and machines. This status reflects the health of the LCM Agent along with the status of its update to the version from the current Cluster release.
Handling secret leftovers using secret-controller¶
Implemented secret-controller that runs on a management cluster and cleans
up secret leftovers of credentials that are not cleaned up automatically after
creation of new secrets. This controller replaces rhellicense-controller,
proxy-controller, and byo-credentials-controller as well as partially
replaces the functionality of license-controller and other credential
controllers.
Note
You can change memory limits for secret-controller on a
management cluster using the resources:limits parameter in the
spec:providerSpec:value:kaas:management:helmReleases: section of the
Cluster object.
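Following the parameter path given in the note, a Cluster object fragment that raises the limit might look like this; the layout of the helmReleases entry and the 512Mi value are illustrative assumptions:

```yaml
# Illustrative Cluster object fragment raising the memory limit for
# secret-controller. The release entry layout and value are assumptions.
spec:
  providerSpec:
    value:
      kaas:
        management:
          helmReleases:
            - name: secret-controller
              values:
                resources:
                  limits:
                    memory: 512Mi   # placeholder limit
```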
MariaDB backup for bare metal and vSphere providers¶
Implemented the capability to back up and restore MariaDB databases on management clusters for bare metal and vSphere providers. Also, added documentation on how to change the storage node for backups on clusters of these provider types.