Enhancements

This section outlines the new features implemented in the Cluster release 11.6.0, which is introduced in the Container Cloud release 2.22.0.

Bond interfaces monitoring

Implemented monitoring of bond interfaces for clusters based on bare metal and Equinix Metal with public or private networking. The number of active and configured slaves per bond is now monitored, with the following alerts raised in case of issues (see the illustrative rule sketch after the list):

  • BondInterfaceDown

  • BondInterfaceSlaveDown

  • BondInterfaceOneSlaveLeft

  • BondInterfaceOneSlaveConfigured
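As an illustration only, an alert similar to BondInterfaceSlaveDown can be expressed as a Prometheus alerting rule on top of the node-exporter bonding metrics node_bonding_active and node_bonding_slaves. This is a minimal sketch, not the exact rule shipped with StackLight; the group name, duration, and labels are assumptions:

  groups:
  - name: bond-interfaces
    rules:
    - alert: BondInterfaceSlaveDown
      # Fires when fewer bond slaves are active than are configured.
      expr: node_bonding_active < node_bonding_slaves
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Bond {{ $labels.master }} on {{ $labels.instance }} has inactive slaves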

Note

For MOSK-based deployments, the feature is supported since MOSK 23.1.

Calculation of storage retention time using OpenSearch and Prometheus panels

Implemented the following panels in the Grafana dashboards for OpenSearch and Prometheus. These panels provide details on storage usage and allow estimating the possible retention time based on the provisioned storage and the average usage (see the worked example after the list):

  • OpenSearch dashboard:

    • Cluster > Estimated Retention

    • Resources > Disk

    • Resources > File System Used Space by Percentage

    • Resources > Stored Indices Disk Usage

    • Resources > Age of Logs

  • Prometheus dashboard:

    • Cluster > Estimated Retention

    • Resources > Storage

    • Resources > Storage by Percentage
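As a rough illustration of how the Estimated Retention panels can be interpreted, assume 500 GiB of provisioned Prometheus storage and an average usage growth of about 25 GiB per day: the estimated retention is then approximately 500 / 25 = 20 days. The figures are hypothetical; the panels derive the equivalent value from the actual provisioned storage and the measured average usage of the cluster.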

Note

For MOSK-based deployments, the feature is supported since MOSK 23.1.

Deployment of cAdvisor as a StackLight component

Added cAdvisor to the StackLight deployment on all types of Container Cloud clusters to enable gathering metrics on container resource usage.
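As a hedged illustration, once cAdvisor is deployed, its standard metrics such as container_cpu_usage_seconds_total become available in Prometheus. The following recording rule is a minimal sketch with an assumed rule name, not a rule shipped with StackLight:

  groups:
  - name: cadvisor-examples
    rules:
    # Per-container CPU usage rate over the last 5 minutes, computed from
    # the standard cAdvisor counter container_cpu_usage_seconds_total.
    - record: container:cpu_usage_seconds:rate5m
      expr: rate(container_cpu_usage_seconds_total[5m])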

Container Cloud web UI support for Reference Application

Enhanced support for Reference Application, which is designed for workload monitoring on managed clusters, by adding the Enable Reference Application check box to the StackLight tab of the Create new cluster wizard in the Container Cloud web UI.

You can also enable this option after deployment using the Configure cluster menu of the Container Cloud web UI, or through the CLI by editing the StackLight parameters in the Cluster object as sketched below.
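The following Cluster object fragment is a minimal sketch of the CLI approach. It assumes that Reference Application is toggled through a refapp.enabled value of the StackLight Helm release; verify the exact parameter name against the StackLight configuration reference for your release:

  spec:
    providerSpec:
      value:
        helmReleases:
        - name: stacklight
          values:
            refapp:
              enabled: true  # assumed parameter name for this sketch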

The Reference Application enhancement also includes a switch from MariaDB to PostgreSQL to improve the application stability and performance.

Note

Reference Application requires the following resources per cluster on top of the main product requirements:

  • Up to 1 GiB of RAM

  • Up to 3 GiB of storage

Note

For the feature support on MOSK deployments, refer to the MOSK documentation: Deploy RefApp using automation tools.

General availability of Ceph Shared File System

Completed the development of the Ceph Shared File System (CephFS) feature. CephFS provides the capability to create read/write shared file system Persistent Volumes (PVs), as illustrated below.
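As an illustration, a shared CephFS-backed volume is requested through a standard Kubernetes PersistentVolumeClaim with the ReadWriteMany access mode. The claim and storage class names below are hypothetical; use the CephFS storage class provisioned in your cluster:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: shared-data
  spec:
    accessModes:
    - ReadWriteMany              # read/write shared access across pods
    resources:
      requests:
        storage: 10Gi
    storageClassName: cephfs     # hypothetical; use your CephFS storage class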

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Support of shared Ceph clusters

TechPreview

Implemented a mechanism that connects a consumer cluster to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store its data.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Sharing of a Ceph cluster with attached MKE clusters

Implemented the ability to share a Ceph cluster with MKE clusters that were not originally deployed by Container Cloud and are attached to the management cluster. A shared Ceph cluster provides the Ceph-based CSI driver to such MKE clusters. Both ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes are supported.

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

Two Ceph Managers by default for HA

Increased the default number of Ceph Managers deployed on a Ceph cluster to two, one active and one stand-by, to improve fault tolerance and HA.

On existing clusters, the second Ceph Manager deploys automatically after a managed cluster update.

Note

Mirantis recommends labeling at least three Ceph nodes with the mgr role, which equals the default number of Ceph nodes for the mon role. In this configuration, one backup Ceph node will be available to redeploy a failed Ceph Manager in case of a server outage.
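The following KaaSCephCluster fragment is a minimal sketch of such a layout, assuming three nodes that carry both the mon and mgr roles; the machine names are hypothetical, and the field layout should be verified against the KaaSCephCluster specification:

  spec:
    cephClusterSpec:
      nodes:
        # Three nodes labeled with the mgr role: the cluster runs one active
        # and one stand-by Ceph Manager, and the third node is available as
        # a backup in case of a server outage.
        machine-1:
          roles: [mon, mgr]
        machine-2:
          roles: [mon, mgr]
        machine-3:
          roles: [mon, mgr]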

Note

For MOSK-based deployments, the feature is supported since MOSK 23.1.