Known issues¶
This section lists known issues with workarounds for the Mirantis Container Cloud release 2.25.0, including the Cluster releases 17.0.0, 16.0.0, and 14.1.0.
For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.
Note
This section also outlines known issues from previous Container Cloud releases that are still valid.
Bare metal¶
[35089] Calico does not set up networking for a pod¶
An arbitrary Kubernetes pod may get stuck in an error loop due to a failed Calico networking setup for that pod. The pod cannot access any network resources. The issue occurs more often during cluster upgrade or node replacement, but it can sometimes happen during a new deployment as well.
You may find the following log for the failed pod IP (for example, 10.233.121.132) in the calico-node logs:
felix/route_table.go 898: Syncing routes: found unexpected route; ignoring due to grace period. dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=254
felix/route_table.go 898: Syncing routes: found unexpected route; ignoring due to grace period. dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=254
...
felix/route_table.go 902: Remove old route dest=10.233.121.132/32 ifaceName="cali9731b965838" ifaceRegex="^cali.*" ipVersion=0x4 routeProblems=[]string{"unexpected route"} tableIndex=254
felix/conntrack.go 90: Removing conntrack flows ip=10.233.121.132
The workaround is to manually restart the affected pod:
kubectl delete pod <failedPodID>
[33936] Deletion failure of a controller node during machine replacement¶
Due to the upstream Calico issue, a controller node cannot be deleted if the calico-node Pod is stuck blocking the node deletion.
One of the symptoms is the following warning in the baremetal-operator logs:
Resolving dependency Service dhcp-lb in namespace kaas failed: \
the server was unable to return a response in the time allotted,\
but may still be processing the request (get endpoints dhcp-lb).
As a workaround, delete the Pod that is stuck to retrigger the node deletion.
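For example, a minimal sketch of the workaround, assuming the stuck Pod is a calico-node Pod in the kube-system namespace (substitute the actual namespace and Pod name from your cluster):
kubectl -n kube-system get pods -o wide | grep <affectedNodeName>
kubectl -n kube-system delete pod <stuckPodName>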
LCM¶
[34132] Pods get stuck during MariaDB operations¶
Due to the upstream MariaDB issue, during MariaDB operations on a management cluster, Pods may get stuck in continuous restarts with the following example error:
[ERROR] WSREP: Corrupt buffer header: \
addr: 0x7faec6f8e518, \
seqno: 3185219421952815104, \
size: 909455917, \
ctx: 0x557094f65038, \
flags: 11577. store: 49, \
type: 49
Workaround:
Log in to the node where the affected Pod is running.
In /mnt/local-volumes/src/iam/kaas-iam-data/vol00/, remove the galera.cache file for the affected Pod.
Remove the affected Pod or wait until it is automatically restarted. For example commands, see the sketch after this procedure.
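A hedged sketch of the cleanup, assuming the affected Pod is a MariaDB server Pod of the management cluster IAM service (the volume path suffix, namespace, and Pod name are placeholders to substitute with the values from your cluster):
# on the affected node, remove the stale Galera cache file
sudo rm /mnt/local-volumes/src/iam/kaas-iam-data/vol00/galera.cache
# from a host with kubectl access to the management cluster, restart the Pod
kubectl -n <affectedNamespace> delete pod <affectedPodName>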
[32761] Node cleanup fails due to remaining devices¶
On MOSK clusters, the Ansible provisioner may hang in a loop while trying to remove LVM thin pool logical volumes (LVs) due to issues with volume detection before removal. The Ansible provisioner cannot remove LVM thin pool LVs correctly, so it consistently detects the same volumes whenever it scans disks, leading to a repetitive cleanup process.
The following symptoms mean that a cluster can be affected:
A node was configured to use thin pool LVs. For example, it had the OpenStack Cinder role in the past.
A bare metal node deployment flaps between the provisioning and deprovisioning states.
In the Ansible provisioner logs, the number of the following example warnings keeps growing:
88621.log:7389:2023-06-22 16:30:45.109 88621 ERROR ansible.plugins.callback.ironic_log [-] Ansible task clean : fail failed on node 14eb0dbc-c73a-4298-8912-4bb12340ff49: {'msg': 'There are more devices to clean', '_ansible_no_log': None, 'changed': False}
Important
There are more devices to clean is a regular warning indicating some in-progress tasks. However, if the number of such warnings keeps growing while the node flaps between the provisioning and deprovisioning states, the cluster is highly likely affected by the issue.
As a workaround, erase disks manually using any preferred tool.
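A minimal sketch of a manual disk erase using standard Linux tools, assuming /dev/sdX is a disk that held the thin pool LVs (run it for each affected disk; this is one possible approach, not a prescribed procedure, and it destroys all data on the disk):
# remove LVM and other filesystem signatures from the disk
sudo wipefs --all /dev/sdX
# wipe the GPT and MBR partition table structures
sudo sgdisk --zap-all /dev/sdX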
[30294] Replacement of a ‘master’ node is stuck on the ‘calico-node’ Pod start¶
During replacement of a master node on a cluster of any type, the calico-node Pod fails to start on a new node that has the same IP address as the node being replaced.
Workaround:
Log in to any master node.
From a CLI with an MKE client bundle, create a shell alias to start calicoctl using the mirantis/ucp-dsinfo image:
alias calicoctl="\
docker run -i --rm \
--pid host \
--net host \
-e constraint:ostype==linux \
-e ETCD_ENDPOINTS=<etcdEndpoint> \
-e ETCD_KEY_FILE=/ucp-node-certs/key.pem \
-e ETCD_CA_CERT_FILE=/ucp-node-certs/ca.pem \
-e ETCD_CERT_FILE=/ucp-node-certs/cert.pem \
-v /var/run/calico:/var/run/calico \
-v ucp-node-certs:/ucp-node-certs:ro \
mirantis/ucp-dsinfo:<mkeVersion> \
calicoctl --allow-version-mismatch \
"
In the above command, replace the following values with the corresponding settings of the affected cluster:
<etcdEndpoint> is the etcd endpoint defined in the Calico configuration file. For example, ETCD_ENDPOINTS=127.0.0.1:12378.
<mkeVersion> is the MKE version installed on your cluster. For example, mirantis/ucp-dsinfo:3.5.7.
Verify the node list on the cluster:
kubectl get node
Compare this list with the node list in Calico to identify the old node:
calicoctl get node -o wide
Remove the old node from Calico:
calicoctl delete node kaas-node-<nodeID>
[5782] Manager machine fails to be deployed during node replacement¶
During replacement of a manager machine, the following problems may occur:
The system adds the node to Docker swarm but not to Kubernetes
The node Deployment gets stuck with failed RethinkDB health checks
Workaround:
Delete the failed node. For an example command, see the sketch after this procedure.
Wait for the MKE cluster to become healthy. To monitor the cluster status:
Log in to the MKE web UI as described in Connect to the Mirantis Kubernetes Engine web UI.
Monitor the cluster status as described in MKE Operations Guide: Monitor an MKE cluster with the MKE web UI.
Deploy a new node.
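For the node deletion step, a minimal sketch using the Container Cloud API on the management cluster, assuming the failed machine is represented by a Machine resource in the project namespace (the Container Cloud web UI is the usual path for machine deletion; substitute your project and machine names):
kubectl -n <projectName> get machines
kubectl -n <projectName> delete machine <failedMachineName>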
[5568] The ‘calico-kube-controllers’ Pod fails to clean up resources¶
During the unsafe or forced deletion of a manager machine running the calico-kube-controllers Pod in the kube-system namespace, the following issues occur:
The calico-kube-controllers Pod fails to clean up resources associated with the deleted node
The calico-node Pod may fail to start up on a newly created node if the machine is provisioned with the same IP address as the deleted machine had
As a workaround, before deletion of the node running the calico-kube-controllers Pod, cordon and drain the node:
kubectl cordon <nodeName>
kubectl drain <nodeName>
Ceph¶
[34820] The Ceph ‘rook-operator’ fails to connect to RGW on FIPS nodes¶
Due to the upstream Ceph issue, on clusters with the Federal Information Processing Standard (FIPS) mode enabled, the Ceph rook-operator fails to connect to Ceph RADOS Gateway (RGW) pods.
As a workaround, do not place Ceph RGW pods on nodes where FIPS mode is enabled.
[26441] Cluster update fails with the ‘MountDevice failed for volume’ warning¶
Update of a bare metal based managed cluster with Ceph enabled fails: the PersistentVolumeClaim of the prometheus-server StatefulSet gets stuck in the Pending state, and the MountVolume.MountDevice failed for volume warning appears in the StackLight event logs.
Workaround:
Verify that the descriptions of the Pods that failed to run contain the FailedMount events:
kubectl -n <affectedProjectName> describe pod <affectedPodName>
In the command above, replace the following values:
<affectedProjectName> is the Container Cloud project name where the Pods failed to run
<affectedPodName> is the name of a Pod that failed to run in the specified project
In the Pod description, identify the node name where the Pod failed to run.
Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.
Identify csiPodName of the corresponding csi-rbdplugin:
kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
-o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
Output the affected csiPodName logs:
kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
Scale down the affected StatefulSet or Deployment of the failing Pod to 0 replicas. For an example, see the sketch after this procedure.
On every csi-rbdplugin Pod, search for the stuck csi-vol:
for pod in `kubectl -n rook-ceph get pods | grep rbdplugin | grep -v provisioner | awk '{print $1}'`; do
  echo $pod
  kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
done
Unmap the affected csi-vol:
rbd unmap -o force /dev/rbd<i>
The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.
Delete volumeattachment of the affected Pod:
kubectl get volumeattachments | grep <csi-vol-uuid>
kubectl delete volumeattachment <id>
Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state becomes Running.
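For the scale-down and scale-up steps above, a minimal sketch, assuming the affected object is the prometheus-server StatefulSet in the stacklight namespace (an illustrative target; substitute the actual kind, name, namespace, and the original replica count recorded before scaling down):
# record the original number of replicas
kubectl -n stacklight get statefulset prometheus-server -o jsonpath='{.spec.replicas}'
# scale down before the cleanup
kubectl -n stacklight scale statefulset prometheus-server --replicas 0
# scale back up after the cleanup
kubectl -n stacklight scale statefulset prometheus-server --replicas <originalReplicasNumber>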
Update¶
[37268] Container Cloud upgrade is blocked by a node in ‘Prepare’ or ‘Deploy’ state¶
Container Cloud upgrade may be blocked by a node being stuck in the Prepare or Deploy state with the following error: error processing package openssh-server.
The issue is caused by customizations in /etc/ssh/sshd_config, such as additional Match statements. This file is managed by Container Cloud and must not be altered manually.
As a workaround, move customizations from sshd_config to a new file in the /etc/ssh/sshd_config.d/ directory.
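A minimal sketch of the workaround, assuming the customization to move is a Match block (the file name 99-custom.conf and the Match content are illustrative examples; the SSH service name may be sshd instead of ssh depending on the distribution):
# create a drop-in file with the custom settings previously kept in sshd_config
sudo tee /etc/ssh/sshd_config.d/99-custom.conf <<'EOF'
Match Address 10.0.0.0/8
    PasswordAuthentication no
EOF
# remove the same lines from /etc/ssh/sshd_config, then validate and reload the daemon
sudo sshd -t
sudo systemctl reload ssh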
[36328] The helm-controller Deployment is stuck during cluster update¶
During a cluster update, a Kubernetes helm-controller Deployment may get stuck in a restarting pod loop with the Terminating and Running states flapping. Other Deployment types may also be affected.
As a workaround, restart the Deployment that got stuck:
kubectl -n <affectedProjectName> get deploy <affectedDeployName> -o yaml
kubectl -n <affectedProjectName> scale deploy <affectedDeployName> --replicas 0
kubectl -n <affectedProjectName> scale deploy <affectedDeployName> --replicas <replicasNumber>
In the commands above, replace the following values:
<affectedProjectName> is the Container Cloud project name containing the cluster with stuck pods
<affectedDeployName> is the Deployment name that failed to run pods in the specified project
<replicasNumber> is the original number of replicas for the Deployment that you can obtain using the get deploy command
[33438] ‘CalicoDataplaneFailuresHigh’ alert is firing during cluster update¶
During cluster update of a managed bare metal cluster, the false positive CalicoDataplaneFailuresHigh alert may fire. Disregard this alert; it will disappear once the cluster update succeeds.
The observed behavior is typical for calico-node during upgrades because workload changes occur frequently. Consequently, the Calico dataplane can become temporarily desynchronized, which can occasionally result in throttling when workload changes are applied to the dataplane.