During netplan configuration after cluster deployment, changes in the
IpamHost object are not propagated to the LCMMachine object.
The workaround is to manually add any new label to the labels section
of the Machine object for the target host, which triggers machine
reconciliation and propagates network changes.
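For example, assuming the Machine object resides in the project namespace of the affected cluster, the label can be added with kubectl; the label key and value below are arbitrary placeholders:

kubectl -n <projectNamespace> label machine <machineName> retrigger-netplan=<anyValue>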
[35429] The WireGuard interface does not have the IPv4 address assigned¶
Due to the upstream Calico
issue, on clusters
with WireGuard enabled, the WireGuard interface on a node may not have
the IPv4 address assigned. This leads to broken inter-Pod communication
between the affected node and other cluster nodes.
The node is affected if the IP address is missing on the WireGuard interface:
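For example, assuming the default Calico WireGuard interface name wireguard.cali, check whether an inet (IPv4) address is present in the output:

ip a show dev wireguard.cali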
Due to the upstream Calico issue, a controller node
cannot be deleted if the calico-node Pod running on it is stuck, which blocks node deletion.
One of the symptoms is the following warning in the baremetal-operator
logs:
Resolving dependency Service dhcp-lb in namespace kaas failed:\
the server was unable to return a response in the time allotted,\
but may still be processing the request (get endpoints dhcp-lb).
As a workaround, delete the Pod that is stuck to retrigger the node
deletion.
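For example, assuming the stuck Pod is a calico-node Pod in the kube-system namespace:

kubectl -n kube-system delete pod <stuckCalicoNodePodName>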
[24005] Deletion of a node with ironic Pod is stuck in the Terminating state¶
During deletion of a manager machine running the ironic Pod from a bare
metal management cluster, the following problems occur:
All Pods are stuck in the Terminating state
A new ironic Pod fails to start
The related bare metal host is stuck in the deprovisioning state
As a workaround, before deletion of the node running the ironic Pod,
cordon and drain the node using the kubectl cordon <nodeName> and
kubectl drain <nodeName> commands.
[20736] Region deletion failure after regional deployment failure¶
If a baremetal-based regional cluster deployment fails before pivoting is
done, the corresponding region deletion fails.
Workaround:
Using the command below, manually delete all possible traces of the failed
regional cluster deployment, including but not limited to the following
objects that contain the kaas.mirantis.com/region label of the affected
region:
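The exact set of object kinds varies per deployment. As a hedged sketch, objects carrying this label can be located and then removed with kubectl:

kubectl get <objectKind> -A -l kaas.mirantis.com/region=<regionName>
kubectl -n <namespaceName> delete <objectKind> <objectName>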
[31186,34132] Pods get stuck during MariaDB operations¶
Due to the upstream MariaDB issue,
during MariaDB operations on a management cluster, Pods may get stuck
in continuous restarts with the following example error:
On MOSK clusters, the Ansible provisioner may hang in a loop while trying to
remove LVM thin pool logical volumes (LVs) due to issues with volume detection
before removal. The Ansible provisioner cannot remove LVM thin pool LVs
correctly, so it consistently detects the same volumes whenever it scans
disks, leading to a repetitive cleanup process.
The following symptoms mean that a cluster can be affected:
A node was configured to use thin pool LVs. For example, it had the
OpenStack Cinder role in the past.
A bare metal node deployment flaps between the provisioning and
deprovisioning states.
In the Ansible provisioner logs, the following example warnings are growing:
88621.log:7389:2023-06-22 16:30:45.109 88621 ERROR ansible.plugins.callback.ironic_log[-] Ansible task clean:fail failed on node 14eb0dbc-c73a-4298-8912-4bb12340ff49:{'msg':'There are more devices to clean', '_ansible_no_log': None, 'changed': False}
Important
There are more devices to clean is a regular warning
indicating some in-progress tasks. However, if the number of such warnings
keeps growing while the node flaps between the provisioning and
deprovisioning states, the cluster is highly likely affected by the
issue.
As a workaround, erase disks manually using any preferred tool.
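For example, assuming the leftover volume group and disks on the node are identified, one possible approach is to remove the LVM structures and wipe the disk signatures manually:

lvremove -f <vgName>        # remove all LVs, including thin pools, in the volume group
vgremove -f <vgName>        # remove the volume group itself
wipefs -a /dev/<diskName>   # wipe remaining filesystem and LVM signatures from the disk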
MKE backup may fail during update of a management, regional, or managed
cluster due to incorrect permissions on the etcd backup
/var/lib/docker/volumes/ucp-backup/_data directory.
The issue affects only clusters that were originally deployed using early
Container Cloud releases delivered in 2020-2021.
Using the admin kubeconfig, increase the mkeUpgradeAttempts value:
Open the LCMCluster object of the management cluster for editing:
kubectl edit lcmcluster <mgmtClusterName>
In the mkeUpgradeAttempts field, increase the value to 6.
Once done, MKE backup retriggers automatically.
[30294] Replacement of a master node is stuck on the calico-node Pod start¶
During replacement of a master node on a cluster of any type, the
calico-node Pod fails to start on a new node that has the same IP address
as the node being replaced.
Workaround:
Log in to any master node.
From a CLI with an MKE client bundle, create a shell alias to start
calicoctl using the mirantis/ucp-dsinfo image:
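A minimal sketch of such an alias, assuming Calico uses the etcd datastore and that the MKE node certificates reside under /var/lib/docker/volumes/ucp-node-certs/_data; verify the exact image tag, endpoints, and certificate paths for your MKE version:

# Run calicoctl from the mirantis/ucp-dsinfo image on a master node
alias calicoctl="\
docker run -i --rm \
--pid host \
--net host \
-e ETCD_ENDPOINTS=<etcdEndpoint> \
-e ETCD_KEY_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/key.pem \
-e ETCD_CA_CERT_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/ca.pem \
-e ETCD_CERT_FILE=/var/lib/docker/volumes/ucp-node-certs/_data/cert.pem \
-v /var/lib/docker/volumes/ucp-node-certs/_data:/var/lib/docker/volumes/ucp-node-certs/_data:ro \
mirantis/ucp-dsinfo:<mkeVersion> \
calicoctl"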
[5568] The calico-kube-controllers Pod fails to clean up resources¶
During the unsafe or forced deletion of a manager machine running the
calico-kube-controllers Pod in the kube-system namespace,
the following issues occur:
The calico-kube-controllers Pod fails to clean up resources associated
with the deleted node
The calico-node Pod may fail to start up on a newly created node if the
machine is provisioned with the same IP address as the deleted machine had
As a workaround, before deletion of the node running the
calico-kube-controllers Pod, cordon and drain the node:
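For example, using the same kubectl commands as for the ironic Pod issue above; the drain flags are typical and may need adjusting for your workloads:

kubectl cordon <nodeName>
kubectl drain <nodeName> --ignore-daemonsets --delete-emptydir-data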
Due to the upstream Ceph issue,
on clusters with the Federal Information Processing Standard (FIPS) mode
enabled, the Ceph rook-operator fails to connect to Ceph RADOS Gateway
(RGW) pods.
As a workaround, do not place Ceph RGW pods on nodes where FIPS mode is
enabled.
[34599] Ceph ‘ClusterWorkloadLock’ blocks upgrade from 2.23.5 to 2.24.1¶
On management clusters based on Ubuntu 18.04, after the cluster starts
upgrading from 2.23.5 to 2.24.1, all controller machines are stuck in the
In Progress state with the Distribution update in
progress hover message displayed in the Container Cloud web UI.
The issue is caused by clusterworkloadlock containing the outdated
release name in the status.release field, which blocks the LCM Controller
from proceeding with the machine upgrade. This behavior is caused by a complete removal
of the ceph-controller chart from management clusters and a failed
ceph-clusterworkloadlock removal.
The workaround is to manually remove ceph-clusterworkloadlock from the
management cluster to unblock upgrade:
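A hedged example, assuming ClusterWorkloadLock is a cluster-scoped resource and the lock object is named ceph-clusterworkloadlock:

kubectl delete clusterworkloadlock ceph-clusterworkloadlock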
[26441] Cluster update fails with the MountDevice failed for volume warning¶
Update of a managed cluster based on bare metal with Ceph enabled fails with
a PersistentVolumeClaim getting stuck in the Pending state for the
prometheus-server StatefulSet and the
MountVolume.MountDevice failed for volume warning in the StackLight event
logs.
Workaround:
Verify that the description of the Pods that failed to run contains the
FailedMount events:
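A possible form of this check, assuming kubectl access to the affected cluster:

kubectl -n <affectedProjectName> describe pod <affectedPodName>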
In the command above, replace the following values:
<affectedProjectName> is the Container Cloud project name where
the Pods failed to run
<affectedPodName> is a Pod name that failed to run in the specified project
In the Pod description, identify the node name where the Pod failed to run.
Verify that the csi-rbdplugin logs of the affected node contain the
rbd volume mount failed: <csi-vol-uuid> is being used error.
The <csi-vol-uuid> is a unique RBD volume name.
Identify csiPodName of the corresponding csi-rbdplugin:
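For example, assuming Ceph CSI runs in the rook-ceph namespace with the app=csi-rbdplugin label (adjust if your deployment differs), list the plugin Pods and pick the one on the affected node:

kubectl -n rook-ceph get pod -l app=csi-rbdplugin -o wide | grep <nodeName>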
[33438] ‘CalicoDataplaneFailuresHigh’ alert is firing during cluster update¶
During an update of a managed bare metal cluster, the false positive
CalicoDataplaneFailuresHigh alert may fire. Disregard this alert,
which disappears once the cluster update succeeds.
The observed behavior is typical for calico-node during upgrades,
as workload changes occur frequently. Consequently, there is a possibility
of temporary desynchronization in the Calico dataplane. This can occasionally
result in throttling when applying workload changes to the Calico dataplane.