The following issues have been addressed in the Mirantis Container Cloud release 2.10.0 along with the Cluster releases 7.0.0 and 5.17.0.
[AWS] Fixed the issue with the deployment of managed clusters that require persistent volumes (PVs) failing with pods stuck in the Pending state and reporting the pod has unbound immediate PersistentVolumeClaims and node(s) had volume node affinity conflict errors. The issue affected only MKE deployments with Kubernetes 1.18 and is fixed for MKE 3.4.x with Kubernetes 1.20, which is available since the Cluster release 7.0.0.
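For context, the node(s) had volume node affinity conflict error class typically occurs on AWS when an EBS-backed PV is provisioned in one availability zone while the scheduler places the pod on a node in another. An illustrative (hypothetical, not the product's actual fix) StorageClass that avoids this by delaying volume binding until the pod is scheduled:

```yaml
# Illustrative StorageClass sketch for AWS EBS; names are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait-for-consumer
provisioner: kubernetes.io/aws-ebs
# Delay PV provisioning until a pod using the PVC is scheduled,
# so the volume is created in the same availability zone as the node.
volumeBindingMode: WaitForFirstConsumer
```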
[Equinix Metal] Fixed the issue with a manager machine deployment failing if the cluster contained at least one manager machine stuck in the Provisioning state due to capacity limits in the selected Equinix Metal data center.
[LCM] Fixed the issue with existing clusters failing with the no space left on device error due to an excessive amount of core dumps produced by frequently failing applications.
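Since the root cause was core dump files filling the disk, a minimal sketch of a node-level mitigation (assuming shell access to the affected node; this is illustrative, not the product's fix) is to cap the core file size:

```shell
# Disable core dump creation for the current shell session so that
# frequently crashing processes do not fill the disk with core files.
ulimit -c 0
# Print the effective core file size limit (prints 0 when disabled).
ulimit -c
```

To make such a limit persistent across reboots, it would normally go into limits.conf or a systemd unit rather than an interactive shell.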
[LCM] Fixed the issue with managed clusters deployed or updated on a regional cluster of another provider type displaying an inaccurate Nodes readiness live status in the Container Cloud web UI.
[StackLight] Fixed the issue with the Tiller container of the stacklight-helm-controller pods switching to CrashLoopBackOff and then being OOMKilled. The number of releases kept in history is now limited to 3 to prevent RAM overconsumption by Tiller.
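For reference, Helm v2 exposes this kind of cap through the --history-max option of helm init; a hedged sketch (the value 3 matches the fix above, but the exact mechanism used by the product is not stated here):

```shell
# Helm v2 only: deploy Tiller with release history capped at 3 entries,
# so old release records do not accumulate and consume memory.
helm init --history-max 3
```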
[Upgrade] Fixed the issue with a managed cluster release upgrade failing and the DNS names of Kubernetes services not being resolved on the affected pods due to DNS issues on pods with host networking.