Cluster update known issues
This section lists the cluster update known issues with workarounds for the Mirantis OpenStack for Kubernetes release 21.6.
[4288] Cluster update failure with kubelet being stuck
Note
In future releases, the workaround for this issue will move from the Release Notes to the Mirantis Container Cloud documentation: MOS clusters update fails with stuck kubelet.
A MOS cluster may fail to update to the latest Cluster release with kubelet being stuck and reporting authorization errors.
The cluster is affected by the issue if you see the Failed to make webhook authorizer request: context canceled error in the kubelet logs:
docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'
As a workaround, restart the ucp-kubelet container on the affected node(s):
ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet
Note
Ignore failures in the output of the first command, if any.
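Optionally, after the restart, verify that the kubelet on the node has recovered. The following check is not part of the documented workaround, only a suggested follow-up: confirm that the ucp-kubelet container is running again and that the error no longer appears in its recent logs.
docker ps --filter name=ucp-kubelet
docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'
The grep command should return no output once the node has recovered.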
[16987] Cluster update fails at Ceph CSI pod eviction
An update of a MOS cluster may fail with the ceph csi-driver is not evacuated yet, waiting… error during the Ceph CSI pod eviction.
Workaround:
1. Scale the affected StatefulSet of the pod that fails to init down to 0 replicas. If it is a DaemonSet, such as nova-compute, it must not be scheduled on the affected node.
2. On every csi-rbdplugin pod, search for the stuck csi-vol (see the kubectl example after this procedure):

   rbd device list | grep <csi-vol-uuid>

3. Unmap the affected csi-vol:

   rbd unmap -o force /dev/rbd<i>

4. Delete the volumeattachment of the affected pod:

   kubectl get volumeattachments | grep <csi-vol-uuid>
   kubectl delete volumeattachment <id>

5. Scale the affected StatefulSet back to the original number of replicas or until its state is Running. If it is a DaemonSet, run the pod on the affected node again.
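The following sketch illustrates steps 1 and 2 with kubectl. The namespaces, label, and container name used here (openstack, rook-ceph, app=csi-rbdplugin, csi-rbdplugin) are typical for a MOS deployment but are assumptions that may differ in your environment; <statefulset-name>, <csi-rbdplugin-pod>, and <csi-vol-uuid> are placeholders.

# Scale the StatefulSet of the failing pod down to 0 replicas
kubectl -n openstack scale statefulset <statefulset-name> --replicas=0

# List the csi-rbdplugin pods, then search each of them for the stuck csi-vol
kubectl -n rook-ceph get pods -l app=csi-rbdplugin -o wide
kubectl -n rook-ceph exec <csi-rbdplugin-pod> -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>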
[18871] MySQL crashes during managed cluster update or instances live migration
MySQL may crash when performing live migration of instances or during an update of a managed cluster running MOS from version 6.19.0 to 6.20.0. After the crash, MariaDB cannot connect to the cluster and gets stuck in the CrashLoopBackOff state.
Workaround:
1. Verify that the other MariaDB replicas are up and running and have joined the cluster:

   1. Verify that at least 2 pods are running and operational (2/2 and Running):

      kubectl -n openstack get pods | grep maria

      Example of system response where the pods mariadb-server-0 and mariadb-server-2 are operational:

      mariadb-controller-77b5ff47d5-ndj68   1/1   Running   0   39m
      mariadb-server-0                      2/2   Running   0   39m
      mariadb-server-1                      0/2   Running   0   39m
      mariadb-server-2                      2/2   Running   0   39m

   2. Log in to each operational pod and verify that the node is Primary and the cluster size is at least 2 (see also the kubectl exec example after this procedure). For example:

      mysql -u root -p$MYSQL_DBADMIN_PASSWORD -e "show status;" | grep -e wsrep_cluster_size -e wsrep_cluster_status -e wsrep_local_state_comment

      Example of system response:

      wsrep_cluster_size            2
      wsrep_cluster_status          Primary
      wsrep_local_state_comment     Synced

2. Remove the content of the /var/lib/mysql/* directory:

   kubectl -n openstack exec -it mariadb-server-1 -- rm -rf /var/lib/mysql/*

3. Restart the MariaDB container:

   kubectl -n openstack delete pod mariadb-server-1
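For the verification in step 1.2, you can also run the check from outside the pods with kubectl exec. This is a minimal sketch, assuming the database container in the mariadb-server pods is named mariadb and that MYSQL_DBADMIN_PASSWORD is set in the container environment:

# Run the wsrep status check inside an operational replica (repeat for each 2/2 pod)
kubectl -n openstack exec mariadb-server-0 -c mariadb -- sh -c \
  'mysql -u root -p"$MYSQL_DBADMIN_PASSWORD" -e "show status;" | grep -E "wsrep_cluster_size|wsrep_cluster_status|wsrep_local_state_comment"'

# After deleting the pod in step 3, watch the affected replica rejoin and reach 2/2 Running
kubectl -n openstack get pod mariadb-server-1 -w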