Cluster update known issues

This section lists the cluster update known issues with workarounds for the Mirantis OpenStack for Kubernetes release 21.6.


[4288] Cluster update failure with kubelet being stuck

Note

In future releases, the workaround for this issue will be moved from these Release Notes to the Mirantis Container Cloud documentation: MOS clusters update fails with stuck kubelet.

A MOS cluster may fail to update to the latest Cluster release with kubelet being stuck and reporting authorization errors.

The cluster is affected by the issue if you see the Failed to make webhook authorizer request: context canceled error in the kubelet logs:

docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'

As a workaround, restart the ucp-kubelet container on the affected node(s):

ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet

Note

Ignore failures in the output of the first command, if any.
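
After the restart, you can confirm that kubelet recovered on the node. A minimal check, reusing the log query from above (the 5-minute log window is only an example):

docker ps --filter name=ucp-kubelet --format '{{.Names}}: {{.Status}}'
docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'

An empty grep output means that kubelet no longer reports the authorization errors.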


[16987] Cluster update fails at Ceph CSI pod eviction

Fixed in MOS 22.2

An update of a MOS cluster may fail with the ceph csi-driver is not evacuated yet, waiting… error during the Ceph CSI pod eviction.

Workaround:

  1. Scale the StatefulSet of the affected pod that fails to initialize down to 0 replicas. If the pod belongs to a DaemonSet, such as nova-compute, make sure that the pod is not scheduled on the affected node. For example scaling commands, see the sketch after this procedure.

  2. In every csi-rbdplugin pod, search for the stuck csi-vol:

    rbd device list | grep <csi-vol-uuid>
    
  3. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    
  4. Delete the VolumeAttachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  5. Scale the affected StatefulSet back to the original number of replicas and wait until its state is Running. If it is a DaemonSet, allow the pod to be scheduled on the affected node again.
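
The exact commands for steps 1, 2, and 5 depend on your deployment. The following is a minimal sketch that assumes a hypothetical StatefulSet <statefulset-name> in the openstack namespace and a Rook-managed Ceph CSI driver in the rook-ceph namespace with the standard app=csi-rbdplugin pod label and csi-rbdplugin container name; adjust these names and namespaces to your environment.

# Step 1: record the current replica count, then scale the StatefulSet down to 0
kubectl -n openstack get statefulset <statefulset-name> -o jsonpath='{.spec.replicas}'
kubectl -n openstack scale statefulset <statefulset-name> --replicas=0

# Step 2: list the csi-rbdplugin pods and search each of them for the stuck csi-vol
kubectl -n rook-ceph get pods -l app=csi-rbdplugin -o wide
kubectl -n rook-ceph exec -it <csi-rbdplugin-pod> -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>

# Step 5: scale the StatefulSet back to the original number of replicas
kubectl -n openstack scale statefulset <statefulset-name> --replicas=<original-replicas>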


[18871] MySQL crashes during managed cluster update or instances live migration

Fixed in MOS 22.2

MySQL may crash during instance live migration or during an update of a managed cluster running MOS from version 6.19.0 to 6.20.0. After the crash, MariaDB cannot connect to the cluster and gets stuck in the CrashLoopBackOff state.

Workaround:

  1. Verify that other MariaDB replicas are up and running and have joined the cluster:

    1. Verify that at least 2 pods are running and operational (2/2 and Running):

      kubectl -n openstack get pods | grep maria
      

      Example of system response where the pods mariadb-server-0 and mariadb-server-2 are operational:

      mariadb-controller-77b5ff47d5-ndj68   1/1     Running     0          39m
      mariadb-server-0                      2/2     Running     0          39m
      mariadb-server-1                      0/2     Running     0          39m
      mariadb-server-2                      2/2     Running     0          39m
      
    2. Log in to each operational pod (see the login sketch after this procedure) and verify that the node is Primary and the cluster size is at least 2. For example:

      mysql -u root -p$MYSQL_DBADMIN_PASSWORD -e "show status;" | grep -e \
      wsrep_cluster_size -e wsrep_cluster_status -e wsrep_local_state_comment
      

      Example of system response:

      wsrep_cluster_size          2
      wsrep_cluster_status        Primary
      wsrep_local_state_comment   Synced
      
  2. Remove the content of the /var/lib/mysql directory on the affected replica:

    kubectl -n openstack exec -it mariadb-server-1 -- sh -c 'rm -rf /var/lib/mysql/*'
    
  3. Restart the affected MariaDB pod:

    kubectl -n openstack delete pod mariadb-server-1
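
Step 1.2 is performed from inside an operational MariaDB pod, and after step 3 it is worth confirming that the restarted replica rejoins the cluster. A minimal sketch, assuming that the MariaDB container inside the pod is named mariadb and that mariadb-server-1 is the affected replica:

# Step 1.2: log in to an operational replica, for example mariadb-server-0,
# and run the mysql command shown above from inside the pod
kubectl -n openstack exec -it mariadb-server-0 -c mariadb -- bash

# After step 3: wait until the restarted pod reports 2/2 Running, then repeat
# the wsrep check to confirm that wsrep_cluster_size increased
kubectl -n openstack get pod mariadb-server-1 -w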