The following issues have been addressed in the Mirantis Container Cloud release 2.11.0 along with the Cluster releases 7.1.0, 6.18.0, and 5.18.0.
For more issues addressed in the Cluster release 6.18.0, see also the issues addressed in 2.10.0.
[vSphere] Fixed the issue with the load balancer virtual IP address (VIP) being assigned to every manager node on any type of vSphere-based cluster.
[Ceph] To avoid a known Rook community issue with updating Rook to version 1.6, added the rgw_data_log_backing configuration option.
[Ceph] Fixed the issue with a Ceph OSD pod being stuck in the CrashLoopBackOff state due to the Ceph OSD authorization key failing to be created properly after disk replacement.
[Ceph][Upgrade] Fixed the issue with dnsmasq pods failing during a bare metal-based management cluster upgrade due to Ceph not unmounting RBD volumes.
[BM] Fixed the issue with a bare metal cluster deploying successfully but producing runtime errors in the IpamHost object if an L2 template was configured incorrectly.
[StackLight] Fixed the issue with some panels of the Alertmanager and Prometheus Grafana dashboards not displaying data due to an invalid query.
[StackLight] Removed the CPU resource limit from the elasticsearch-curator container to prevent the CPUThrottlingHigh alert from firing false positives for Elasticsearch Curator.
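In practice, this kind of change amounts to dropping the cpu key from the container's resources.limits while keeping the requests. A rough sketch of such a spec follows; the container name and resource values are illustrative assumptions, not the actual StackLight chart values:

```yaml
# Illustrative container spec with the CPU limit removed.
# The name and values below are assumptions, not the actual
# StackLight chart configuration.
containers:
  - name: elasticsearch-curator
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        memory: 256Mi   # memory limit kept; no cpu limit, so no CFS quota
```

Without a CPU limit, the kubelet applies no CFS quota to the container, which is the throttling that the CPUThrottlingHigh alert measures.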
[StackLight] Fixed the issue with the Alertmanager pod getting stuck in CrashLoopBackOff during the upgrade of a management, regional, or managed cluster, causing the upgrade to fail with the Loading configuration file failed error message in logs.
[StackLight][Upgrade] Fixed the issue with a management or regional cluster upgrade from version 2.9.0 to 2.10.0, or a managed cluster update from 5.16.0 to 5.17.0, failing with the Cannot evict pod error.
[StackLight] Fixed the issue with the inability to set false for Alertmanager email notifications.
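For context, email notification behavior in Alertmanager is controlled by boolean fields under email_configs in a receiver definition. A minimal hedged sketch; the receiver name, address, and the particular send_resolved field shown here are illustrative, not the cluster's actual configuration:

```yaml
# Illustrative Alertmanager receiver; all names and values are assumptions,
# not the StackLight-managed configuration.
receivers:
  - name: email-ops
    email_configs:
      - to: ops@example.com
        send_resolved: false   # a boolean email option explicitly set to false
```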
[LCM] Fixed the issue with managed cluster update from the Cluster release 6.12.0 to 6.14.0 failing with worker nodes being stuck in the Deploy state with the Network is unreachable error.
[LCM] Fixed the issue with the LCM agent upgrade failing with an x509 error during managed cluster update from the Cluster release 6.12.0 to 6.14.0.