Addressed issues
The following issues have been addressed in the Mirantis Container Cloud release 2.10.0 along with the Cluster releases 7.0.0 and 5.17.0.
For issues addressed in the Cluster release 6.16.0, also see the addressed issues for 2.8.0 and 2.9.0.
[8013] [AWS] Fixed the issue with the deployment of managed clusters that require persistent volumes (PVs) failing with pods stuck in the Pending state and reporting the "pod has unbound immediate PersistentVolumeClaims" and "node(s) had volume node affinity conflict" errors.

Note: The issue affects only MKE deployments with Kubernetes 1.18 and is fixed for MKE 3.4.x with Kubernetes 1.20, which is available since the Cluster release 7.0.0.
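When pods are stuck in Pending for this reason, the symptoms can usually be confirmed with standard kubectl commands. A minimal diagnostic sketch, assuming access to the affected cluster (the namespace and pod names below are placeholders):

```shell
# Placeholder names; substitute the affected namespace and pod.
# The scheduling events include the "unbound immediate PersistentVolumeClaims"
# and "volume node affinity conflict" messages for affected pods:
kubectl -n my-namespace describe pod my-pod

# List PersistentVolumeClaims that never got bound to a PV:
kubectl get pvc --all-namespaces | grep Pending
```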
[14981] [Equinix Metal] Fixed the issue with a manager machine deployment failing if the cluster contained at least one manager machine stuck in the Provisioning state due to capacity limits in the selected Equinix Metal data center.

[13402] [LCM] Fixed the issue with existing clusters failing with the "no space left on device" error due to an excessive amount of core dumps produced by frequently failing applications.
[14125] [LCM] Fixed the issue with managed clusters deployed or updated on a regional cluster of another provider type displaying inaccurate Nodes readiness live status in the Container Cloud web UI.
[14040] [StackLight] Fixed the issue with the Tiller container of the stacklight-helm-controller pods switching to CrashLoopBackOff and then being OOMKilled. The number of releases kept in history is now limited to 3 to prevent RAM overconsumption by Tiller.

[14152] [Upgrade] Fixed the issue with a managed cluster release upgrade failing and the DNS names of the Kubernetes services on the affected pod not being resolved due to DNS issues on pods with host networking.
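The release-history cap described in [14040] corresponds to Tiller's standard TILLER_HISTORY_MAX setting in Helm v2. A hypothetical excerpt from a Tiller container spec, assuming the stock Helm v2 Tiller image (the container name is illustrative):

```yaml
# Excerpt from a Tiller Deployment (Helm v2).
# TILLER_HISTORY_MAX caps how many revisions Tiller keeps per release;
# the value 3 matches the limit described in [14040].
containers:
  - name: tiller
    env:
      - name: TILLER_HISTORY_MAX
        value: "3"
```

Without such a cap, Tiller loads every stored release revision into memory, which is what drives the RAM overconsumption and eventual OOMKill.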