Enhancements¶
This section outlines new features and enhancements introduced in the Container Cloud release 2.26.0. For the list of enhancements delivered with the Cluster releases introduced by Container Cloud 2.26.0, see 17.1.0 and 16.1.0.
Pre-update inspection of pinned product artifacts in a ‘Cluster’ object¶
To ensure that Container Cloud clusters consistently receive the latest security fixes and product improvements, the Admission Controller has been enhanced. It now prevents the use of pinned custom artifacts for Container Cloud components. Specifically, it blocks a management or managed cluster release update, or any cluster configuration update, for example, adding public keys or proxy, if a Cluster object contains any custom Container Cloud artifacts with global or image-related values overwritten in the helm-releases section, until these values are removed.
Normally, Container Cloud clusters do not contain pinned artifacts, which eliminates the need for any pre-update actions in most deployments. However, if the update of your cluster is blocked with the invalid HelmReleases configuration error, refer to Update notes: Pre-update actions for details.
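For illustration, an override of the following kind in the Cluster object triggers the block. The field layout is assumed from the standard helm-releases override structure, and the release name and tag are placeholders:
spec:
  providerSpec:
    value:
      helmReleases:
      - name: <container-cloud-component>
        values:
          image:
            tag: <pinned-custom-tag>  # pinned image override that must be removed before update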
Note
In rare cases, if the image-related or global values must be changed, you can use the ClusterRelease or KaaSRelease objects instead. However, make sure to update these values manually after every major and patch update.
Note
The pre-update inspection applies only to overwritten images delivered by Container Cloud. Any custom images unrelated to the product components are not verified and do not block cluster update.
Disablement of worker machines on managed clusters¶
TechPreview
Implemented the machine disabling API that allows you to seamlessly remove a worker machine from the LCM control of a managed cluster. This action isolates the affected node without impacting other machines in the cluster, effectively eliminating it from the Kubernetes cluster. This functionality proves invaluable in scenarios where a malfunctioning machine impedes cluster updates.
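The release note does not name the exact API field; as a purely hypothetical sketch, disabling a worker machine could look similar to the following change in the Machine object, where the flag name is an assumption:
spec:
  providerSpec:
    value:
      disable: true  # assumed flag name; removes the machine from LCM control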
Day-2 management API for bare metal clusters¶
TechPreview
Added initial Technology Preview support for the HostOSConfiguration and HostOSConfigurationModules custom resources in the bare metal provider. These resources introduce configuration modules that allow managing the operating system of a bare metal host granularly without rebuilding the node from scratch. This approach prevents workload evacuation and significantly reduces configuration time.
Configuration modules manage various settings of the operating system using Ansible playbooks, adhering to specific schemas and metadata requirements. For a description of the module format, schemas, and rules, contact Mirantis support.
Warning
For security reasons and to ensure safe and reliable cluster operability, contact Mirantis support to start using these custom resources.
Caution
Since the feature is still in the development stage, Mirantis highly recommends deleting all HostOSConfiguration objects, if any, before the automatic upgrade of the management cluster to Container Cloud 2.27.0 (Cluster release 16.2.0). After the upgrade, you can recreate the required objects using the updated parameters.
This precautionary step prevents re-processing and re-applying of the existing configuration defined in HostOSConfiguration objects during the management cluster upgrade to 2.27.0. Such behavior is caused by changes in the HostOSConfiguration API introduced in 2.27.0.
Strict filtering for devices on bare metal clusters¶
Implemented the strict byID filtering for targeting system disks using specific device options: byPath, serialNumber, and wwn. These options offer a more reliable alternative to the unpredictable byName naming format.
Mirantis recommends adopting these new device naming options when adding new nodes and redeploying existing ones to ensure a predictable and stable device naming schema.
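A minimal sketch of a BareMetalHostProfile device entry that targets the system disk by a stable identifier follows; the placement of the new selectors alongside byName is assumed, and the values are placeholders:
spec:
  devices:
  - device:
      serialNumber: <disk-serial-number>
      # Alternative stable selectors:
      # wwn: <disk-wwn>
      # byPath: /dev/disk/by-path/<path-id>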
Dynamic IP allocation for faster host provisioning¶
Introduced a mechanism in the Container Cloud dnsmasq server to dynamically allocate IP addresses to bare metal hosts during provisioning. This mechanism replaces sequential IP allocation with a ping check by dynamic IP allocation without the ping check. Such behavior significantly increases the number of bare metal servers that you can provision in parallel, which allows you to streamline the process of setting up a large managed cluster.
Support for Kubernetes auditing and profiling on management clusters¶
Added support for enabling and configuring Kubernetes auditing and profiling on management clusters. The auditing option is enabled by default. You can configure both options using the Cluster object of the management cluster.
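As a purely hypothetical sketch, such settings could be placed under the Cluster object of the management cluster as follows; the audit and profiling key names below are assumptions, not confirmed field names:
spec:
  providerSpec:
    value:
      audit:
        enabled: true  # assumed key name; auditing is enabled by default
      profiling:
        enabled: true  # assumed key name for the profiling option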
Note
For managed clusters, you can also configure Kubernetes auditing along with profiling using the Cluster object of a managed cluster.
Cleanup of LVM thin pool volumes during cluster provisioning¶
Implemented automatic cleanup of LVM thin pool volumes during the provisioning stage to prevent issues with logical volume detection before removal, which could cause node cleanup failure during cluster redeployment.
Wiping a device or partition before a bare metal cluster deployment¶
Implemented the capability to erase existing data from hardware devices to be used for a bare metal management or managed cluster deployment. Using the new wipeDevice structure, you can either erase an existing partition or remove all existing partitions from a physical device. For these purposes, use the eraseMetadata or eraseDevice option that configures cleanup behavior during configuration of a custom bare metal host profile.
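For example, the following sketch of a host profile device entry removes all existing partitions from a disk; the placement of wipeDevice next to the device selector is assumed for illustration, and the device path is a placeholder:
spec:
  devices:
  - device:
      byPath: /dev/disk/by-path/<path-id>
      wipeDevice:
        eraseDevice:
          enabled: true  # remove all existing partitions from the device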
Note
The wipeDevice option replaces the deprecated wipe option that will be removed in one of the following releases. For backward compatibility, any existing wipe: true option is automatically converted to the following structure:
wipeDevice:
  eraseMetadata:
    enabled: True
Policy Controller for validating pod image signatures¶
TechPreview
Introduced initial Technology Preview support for the Policy Controller that validates signatures of pod images. The Policy Controller verifies that images used by the Container Cloud and Mirantis OpenStack for Kubernetes controllers are signed by a trusted authority. The Policy Controller inspects defined image policies that list Docker registries and authorities for signature validation.
Configuring trusted certificates for Keycloak¶
Added support for configuring Keycloak truststore using the Container Cloud web UI to allow for a proper validation of client self-signed certificates. The truststore is used to ensure secured connection to identity brokers, LDAP identity providers, and others.
Health monitoring of cluster LCM operations¶
Added the LCM Operation condition to monitor the health of all LCM operations on a cluster and its machines, which is useful during cluster update. You can monitor the status of LCM operations using the Container Cloud web UI in the status hover menus of a cluster and machine.
Container Cloud web UI improvements for bare metal¶
Reorganized the Container Cloud web UI to optimize the baremetal-based managed cluster deployment and management:
Moved the L2 Templates and Subnets tabs from the Clusters menu to the separate Networks tab on the left sidebar.
Improved the Create Subnet menu by adding configuration for different subnet types.
Reorganized the Baremetal tab in the left sidebar, which now contains the Hosts, Hosts Profiles, and Credentials tabs.
Implemented the ability to add bare metal host profiles using the web UI.
Moved description of a baremetal host to Host info located in a baremetal host kebab menu on the Hosts page of the Baremetal tab.
Moved description of baremetal host credentials to Credential info located in a credential kebab menu on the Credentials page of the Baremetal tab.
Documentation enhancements¶
On top of continuous improvements delivered to the existing Container Cloud guides, added the documentation on how to export logs from OpenSearch dashboards to CSV.