The following issues have been addressed in the Mirantis Container Cloud release 2.12.0 along with the Cluster releases 7.2.0, 6.19.0, and 5.19.0.
[Equinix Metal] Fixed the issue with the Equinix Metal provider failing to create machines with an SSH key error if an Equinix Metal-based cluster was deployed in an Equinix Metal project with no SSH keys.
[bare metal] Fixed the issue with failure to add a new machine to a baremetal-based managed cluster after the management cluster upgrade.
[OpenStack] Fixed the issue with failure to create a proxy-based OpenStack regional cluster due to an error during the proxy secret creation.
[IAM] Fixed the issue with MariaDB pods failing to start after MariaDB blocked itself during the State Snapshot Transfer (SST) sync.
[LCM] Fixed the issue with joining etcd from a new node to an existing etcd cluster. The issue caused the new managed node to hang in the Deploy state when adding it to a managed cluster.
[bootstrap] Fixed the issue with a management cluster bootstrap failing with the failed to establish connection with tiller error because kind 0.9.0, delivered with the bootstrap script, was not compatible with the latest Ubuntu 18.04 image, which requires kind 0.11.1.
[Ceph] Fixed the issue with a bare metal or Equinix Metal management cluster upgrade getting stuck and then failing because some Ceph daemons were stuck on the upgrade to Octopus, with the insecure global_id reclaim health warning in the Ceph logs.
[StackLight] Fixed the issue that prevented overriding the default route matchers for the Salesforce notifier.
If you have applied the workaround described in StackLight known issues: 16843 after updating the cluster to the Cluster release 5.19.0, 7.2.0, or 6.19.0 and you need to define custom matchers, replace the deprecated match and match_re parameters with matchers as required. For details, see Deprecation notes and StackLight configuration parameters.
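As an illustration only, the replacement follows the Alertmanager convention of merging equality (match) and regular-expression (match_re) conditions into a single matchers list. The alertmanagerSimpleConfig.salesForce path and the label values below are assumptions for the sketch; check the StackLight configuration parameters reference for the exact keys in your Cluster release.

```yaml
# Deprecated form (sketch): separate equality and regex matchers
# alertmanagerSimpleConfig:
#   salesForce:
#     route:
#       match:
#         severity: critical
#       match_re:
#         service: ".*mysql.*"

# Replacement form (sketch): a single matchers list
alertmanagerSimpleConfig:
  salesForce:
    route:
      matchers:
        - severity = "critical"
        - service =~ ".*mysql.*"
```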
[Update][StackLight] Fixed the issue with StackLight in HA mode placed on controller nodes not being deployed or blocking the cluster update. Once you update your Mirantis OpenStack for Kubernetes cluster from the Cluster release 6.18.0 to 6.19.0, roll back the workaround applied as described in Upgrade known issues: 17477:
1. Remove the custom stacklight labels from the worker nodes. Wait for the labels to be removed.
2. Remove the custom nodeSelector section from the cluster spec.
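The rollback steps above can be sketched with kubectl. The node and cluster names below are placeholders, and the exact label key applied by the workaround may differ in your environment; verify it before removing.

```shell
# Step 1 (sketch): remove the custom stacklight label from each worker node.
# The trailing dash in "stacklight-" deletes the label.
kubectl label node worker-0 stacklight-
kubectl label node worker-1 stacklight-

# Confirm the label is gone before proceeding:
kubectl get nodes --show-labels

# Step 2 (sketch): remove the custom nodeSelector section from the cluster spec.
kubectl edit cluster example-cluster -n example-namespace
```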
[Update][StackLight] Fixed the issue causing the Cluster release update from 7.0.0 to 7.1.0 to fail due to a failed Patroni pod. The issue affected Container Cloud management, regional, and managed clusters of any cloud provider.
[Update][Ceph] Fixed the issue with upgrade of a bare metal or Equinix Metal-based management or managed cluster failing with the Failed to configure Ceph cluster error due to different versions of the
[Update] Fixed the issue with the false-positive release: "squid-proxy" not found error during a management cluster upgrade for any supported cloud provider except vSphere.