Cluster update known issues
This section lists the cluster update known issues, with workarounds, for Mirantis OpenStack for Kubernetes release 21.5.
[4288] Cluster update failure with kubelet being stuck
A MOS cluster may fail to update to the latest Cluster release with kubelet stuck and reporting authorization errors.
The cluster is affected by the issue if the kubelet logs contain the Failed to make webhook authorizer request: context canceled error:
docker logs ucp-kubelet --since 5m 2>&1 | grep 'Failed to make webhook authorizer request: context canceled'
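If several nodes may be affected, the same check can be looped over the cluster; a minimal sketch, assuming passwordless SSH access to the nodes and a hypothetical nodes.txt file listing one node address per line:

    # Report every node whose kubelet logs show the webhook authorizer error.
    # ssh -n keeps ssh from consuming the loop's stdin (the nodes.txt lines).
    while read -r node; do
      if ssh -n "$node" 'docker logs ucp-kubelet --since 5m 2>&1' \
          | grep -q 'Failed to make webhook authorizer request: context canceled'; then
        echo "$node: affected"
      fi
    done < nodes.txt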
As a workaround, restart the ucp-kubelet container on the affected node(s):
ctr -n com.docker.ucp snapshot rm ucp-kubelet
docker rm -f ucp-kubelet
Note
Ignore failures in the output of the first command, if any.
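To verify that the restart helped, re-run the same log check on the node; a minimal sketch reusing the command above:

    # No grep output over a fresh window means the authorizer errors stopped.
    docker logs ucp-kubelet --since 2m 2>&1 \
      | grep 'Failed to make webhook authorizer request: context canceled' \
      || echo 'ucp-kubelet: no authorizer errors in the last 2 minutes'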
[16987] Cluster update fails at Ceph CSI pod eviction
An update of a MOS cluster may fail with the ceph csi-driver is not evacuated yet, waiting… error during the Ceph CSI pod eviction.
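The <csi-vol-uuid> used in the workaround below can typically be derived from the PersistentVolume of the affected pod; a minimal sketch, assuming you know the pod's PVC name and namespace (angle-bracket names are placeholders):

    # Find the PV bound to the affected pod's PVC, then read its CSI volumeHandle.
    PV=$(kubectl -n <namespace> get pvc <pvc-name> -o jsonpath='{.spec.volumeName}')
    kubectl get pv "$PV" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
    # With ceph-csi, the UUID is the trailing field of the volumeHandle, and the
    # corresponding RBD image is usually named csi-vol-<uuid>.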
Workaround:
1. Scale the affected StatefulSet of the pod that fails to init down to 0 replicas. If it is a DaemonSet such as nova-compute, make sure it is not scheduled on the affected node.
2. On every csi-rbdplugin pod, search for the stuck csi-vol:
       rbd device list | grep <csi-vol-uuid>
3. Unmap the affected csi-vol:
       rbd unmap -o force /dev/rbd<i>
4. Delete the volumeattachment of the affected pod:
       kubectl get volumeattachments | grep <csi-vol-uuid>
       kubectl delete volumeattachment <id>
5. Scale the affected StatefulSet back to the original number of replicas and wait until its state is Running. If it is a DaemonSet, run the pod on the affected node again.
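For reference, the workaround steps can be strung together into a single shell session; a minimal sketch for the StatefulSet case, assuming kubectl access and the rbd client on the relevant nodes (all names in angle brackets are placeholders):

    # 1. Scale the StatefulSet of the failing pod down to 0 replicas.
    kubectl -n <namespace> scale statefulset <name> --replicas=0
    # 2. On each node running a csi-rbdplugin pod, look for the stuck device.
    rbd device list | grep <csi-vol-uuid>
    # 3. Force-unmap the device found in the previous step.
    rbd unmap -o force /dev/rbd<i>
    # 4. Remove the stale volumeattachment.
    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    # 5. Scale the StatefulSet back and wait until it is Running again.
    kubectl -n <namespace> scale statefulset <name> --replicas=<original>
    kubectl -n <namespace> rollout status statefulset <name>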