Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.24.0 along with the Cluster release 14.0.0. For the list of hot fixes delivered in the 2.24.1 patch release, see 2.24.1.

  • [5981] Fixed the issue with the upgrade of a cluster containing more than 120 nodes getting stuck on one node with IP address exhaustion errors in the Docker logs. On existing clusters, after updating to the Cluster release 14.0.0 or later, you can optionally remove the abandoned mke-overlay network using docker network rm mke-overlay.
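
    A minimal sketch of this optional cleanup, assuming SSH access to the affected node and that no container is still attached to the network:

      # Check whether the abandoned network is still present on the node
      docker network ls --filter name=mke-overlay

      # Remove it; the command fails if a container is still attached
      docker network rm mke-overlay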

  • [29604] Fixed the issue with the false positive failed to get kubeconfig error occurring at the Waiting for TLS settings to be applied stage during TLS configuration.

  • [29762] Fixed the issue with a wrong IP address being assigned after the MetalLB controller restart.

  • [30635] Fixed the issue with the pg_autoscaler module of Ceph Manager failing with the pool <poolNumber> has overlapping roots error if a Ceph cluster contains a mix of pools with the deviceClass parameter either explicitly specified or omitted.

  • [30857] Fixed the issue with an irrelevant error message being displayed in the osd-prepare Pod during the deployment of Ceph OSDs on removable devices on AMD nodes. Now, the error message clearly states that removable devices (with hotplug enabled) are not supported for deploying Ceph OSDs. This issue has been addressed since the Cluster release 14.0.0.

  • [30781] Fixed the issue with cAdvisor failing to collect metrics on CentOS-based deployments. Missing metrics affected the KubeContainersCPUThrottlingHigh alert and the following Grafana dashboards: Kubernetes Containers, Kubernetes Pods, and Kubernetes Namespaces.

  • [31288] Fixed the issue with the Fluentd agent failing and the fluentd-logs Pods reporting the maximum open shards limit error, which prevented OpenSearch from accepting new logs. The fix makes it possible to increase the limit for the maximum number of open shards using the cluster.max_shards_per_node parameter. For details, see Tune StackLight for long-term log retention.
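
    For illustration only, the sketch below shows the underlying OpenSearch cluster setting being raised directly through the cluster settings API; on Container Cloud clusters, adjust the value through the StackLight configuration as described in Tune StackLight for long-term log retention. The host and the value 2000 are placeholders:

      # Example of raising the maximum number of open shards per data node
      # directly via the OpenSearch cluster settings API
      curl -XPUT "https://<opensearch-host>:9200/_cluster/settings" \
        -H "Content-Type: application/json" \
        -d '{"persistent": {"cluster.max_shards_per_node": 2000}}'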

  • [31485] Fixed the issue with Elasticsearch Curator not deleting indices according to the configured retention period on any type of Container Cloud cluster.