[24005] Deletion of a node with ironic Pod is stuck in the Terminating state¶
During deletion of a manager machine running the ironic Pod from a bare
metal management cluster, the following problems occur:
All Pods are stuck in the Terminating state
A new ironic Pod fails to start
The related bare metal host is stuck in the deprovisioning state
As a workaround, before deletion of the node running the ironic Pod,
cordon and drain the node using the kubectl cordon <nodeName> and
kubectl drain <nodeName> commands.
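For example, assuming kubectl targets the affected cluster and <nodeName> is the node to be deleted (the drain flags shown are typical rather than specific to this issue):
kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets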
[40747] Unsupported Cluster release is available for managed cluster deployment¶
The Cluster release 16.0.0, which is not supported for greenfield vSphere-based
deployments, is still available in the drop-down menu of the cluster creation
window in the Container Cloud web UI.
Do not select this Cluster release to prevent deployment failures.
Use the latest supported version instead.
[40036] Node is not removed from a cluster when its Machine is disabled¶
During the ClusterRelease update of a MOSK cluster, a
node cannot be removed from the Kubernetes cluster if the related
Machine object is disabled.
As a workaround, remove the finalizer from the affected Node
object.
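For example, one possible way to clear the finalizers with kubectl, assuming access to the affected cluster (the command removes all finalizers from the Node object):
kubectl patch node <nodeName> --type=merge -p '{"metadata":{"finalizers":null}}'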
[39437] Failure to replace a master node on a Container Cloud cluster¶
During the replacement of a master node on a cluster of any type, the process
may get stuck with Kubelet's NodeReady condition is Unknown in the
machine status on the remaining master nodes.
As a workaround, log in to the affected node and run the following
command:
docker restart ucp-kubelet
[31186,34132] Pods get stuck during MariaDB operations¶
Due to the upstream MariaDB issue,
during MariaDB operations on a management cluster, Pods may get stuck
in continuous restarts with the following example error:
Workaround:
Create a backup of the /var/lib/mysql directory on the
mariadb-server Pod.
Verify that other replicas are up and ready.
Remove the galera.cache file for the affected mariadb-server Pod.
Remove the affected mariadb-server Pod or wait until it is automatically
restarted.
After Kubernetes restarts the Pod, the Pod clones the database in 1-2 minutes
and restores the quorum.
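A possible sequence of commands for the steps above; the namespace and Pod name are placeholders to adjust to your deployment (add -c <containerName> if the Pod has several containers):
kubectl -n <namespaceName> exec <mariadbServerPodName> -- rm -f /var/lib/mysql/galera.cache
kubectl -n <namespaceName> delete pod <mariadbServerPodName>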
[30294] Replacement of a master node is stuck on the calico-node Pod start¶
During replacement of a master node on a cluster of any type, the
calico-node Pod fails to start on a new node that has the same IP address
as the node being replaced.
Workaround:
Log in to any master node.
From a CLI with an MKE client bundle, create a shell alias to start
calicoctl using the mirantis/ucp-dsinfo image:
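The exact alias depends on the MKE version and on how the Calico etcd datastore is exposed on the cluster. The following is only a sketch with placeholder values, not the verified command from the MKE documentation:
alias calicoctl="docker run -i --rm --pid host --net host \
  -e ETCD_ENDPOINTS=<etcdEndpoints> \
  -e ETCD_KEY_FILE=<etcdKeyFile> \
  -e ETCD_CA_CERT_FILE=<etcdCaCertFile> \
  -e ETCD_CERT_FILE=<etcdCertFile> \
  -v <etcdCertsHostDir>:<etcdCertsHostDir>:ro \
  mirantis/ucp-dsinfo:<mkeVersion> \
  calicoctl"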
[5568] The calico-kube-controllers Pod fails to clean up resources¶
During the unsafe or forced deletion of a manager machine running the
calico-kube-controllers Pod in the kube-system namespace,
the following issues occur:
The calico-kube-controllers Pod fails to clean up resources associated
with the deleted node
The calico-node Pod may fail to start up on a newly created node if the
machine is provisioned with the same IP address as the deleted machine had
As a workaround, before deletion of the node running the
calico-kube-controllers Pod, cordon and drain the node:
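For example, using the same commands as in the workaround for issue 24005:
kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets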
[26441] Cluster update fails with the MountDevice failed for volume warning¶
Update of a managed cluster based on bare metal with Ceph enabled fails with
PersistentVolumeClaim getting stuck in the Pending state for the
prometheus-server StatefulSet and the
MountVolume.MountDevice failed for volume warning in the StackLight event
logs.
Workaround:
Verify that the description of the Pods that failed to run contains the
FailedMount events:
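For example (the describe command below is one possible way to view the events):
kubectl -n <affectedProjectName> describe pod <affectedPodName>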
In the command above, replace the following values:
<affectedProjectName> is the Container Cloud project name where
the Pods failed to run
<affectedPodName> is a Pod name that failed to run in the specified project
In the Pod description, identify the node name where the Pod failed to run.
Verify that the csi-rbdplugin logs of the affected node contain the
rbd volume mount failed: <csi-vol-uuid> is being used error.
The <csi-vol-uuid> is a unique RBD volume name.
Identify csiPodName of the corresponding csi-rbdplugin:
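A possible way to find it, assuming the Ceph CSI Pods run in the rook-ceph namespace and carry the app=csi-rbdplugin label (adjust to your deployment):
kubectl -n rook-ceph get pod -l app=csi-rbdplugin -o wide | grep <nodeName>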
[36928] The helm-controller Deployment is stuck during cluster update¶
During a cluster update, a Kubernetes helm-controller Deployment may
get stuck in a restarting Pod loop with Terminating and Running states
flapping. Other Deployment types may also be affected.
As a workaround, restart the Deployment that got stuck:
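For example, assuming the Deployment name and namespace identified from the restarting Pods:
kubectl -n <namespaceName> rollout restart deployment <deploymentName>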
[41806] Configuration of a management cluster fails without Keycloak settings¶
During configuration of the management cluster settings using the
Configure cluster web UI menu, the web UI incorrectly requires the
Keycloak Truststore settings to be updated, although these settings are optional.
As a workaround, update the management cluster using the API or CLI.
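For example, a possible CLI-based edit of the management cluster object, assuming it resides in the default project:
kubectl --kubeconfig <mgmtClusterKubeconfig> edit cluster <managementClusterName> -n default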