Cluster update known issues

This section lists the cluster update known issues with workarounds for the Mirantis OpenStack for Kubernetes release 23.1.

[27797] Cluster ‘kubeconfig’ stops working during MKE minor version update

Fixed in MOSK 23.2

During update of a Container Cloud management cluster, if the MKE minor version is updated from 3.4.x to 3.5.x, access to the cluster using the existing kubeconfig fails with the You must be logged in to the server (Unauthorized) error due to OIDC settings being reconfigured.
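
For example, an attempt to access the affected cluster with the existing kubeconfig fails similarly to the following, where the kubeconfig path is a placeholder and the exact output may vary:

kubectl --kubeconfig <existingClusterKubeconfig> get nodes
error: You must be logged in to the server (Unauthorized)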

As a workaround, during the Container Cloud cluster update, use the admin kubeconfig instead of the existing one. Once the update completes, you can use the existing cluster kubeconfig again.

To obtain the admin kubeconfig:

kubectl --kubeconfig <pathToMgmtKubeconfig> get secret -n <affectedClusterNamespace> \
-o yaml <affectedClusterName>-kubeconfig | awk '/admin.conf/ {print $2}' | \
head -1 | base64 -d > clusterKubeconfig.yaml
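
Once extracted, use this kubeconfig for the duration of the update, for example:

kubectl --kubeconfig clusterKubeconfig.yaml get nodes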

[32311] Update is stuck due to the ‘tf-rabbit-exporter’ ReplicaSet issue

Fixed in MOSK 23.2

On a cluster with Tungsten Fabric enabled, the cluster update is stuck with the tf-rabbit-exporter deployment having a number of pods in the Terminating state.

To verify whether your cluster is affected:

kubectl -n tf get pods | grep tf-rabbit-exporter

Example of system response on the affected cluster:

tf-rabbit-exporter-6cd5bcd677-dz4bw        1/1     Running       0          9m13s
tf-rabbit-exporter-8665b5886f-4n66m        0/1     Terminating   0          5s
tf-rabbit-exporter-8665b5886f-58q4z        0/1     Terminating   0          0s
tf-rabbit-exporter-8665b5886f-7t5bp        0/1     Terminating   0          7s
tf-rabbit-exporter-8665b5886f-b2vp9        0/1     Terminating   0          3s
tf-rabbit-exporter-8665b5886f-k4gn2        0/1     Terminating   0          6s
tf-rabbit-exporter-8665b5886f-lscb2        0/1     Terminating   0          5s
tf-rabbit-exporter-8665b5886f-pdp78        0/1     Terminating   0          1s
tf-rabbit-exporter-8665b5886f-qgpcl        0/1     Terminating   0          1s
tf-rabbit-exporter-8665b5886f-vpfrg        0/1     Terminating   0          8s
tf-rabbit-exporter-8665b5886f-vsqqk        0/1     Terminating   0          13s
tf-rabbit-exporter-8665b5886f-xfjgf        0/1     Terminating   0          2s
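
As an additional check, which is not part of the original procedure, you can also list the ReplicaSets of the deployment. On an affected cluster, you may see more than one tf-rabbit-exporter ReplicaSet, matching the pod name prefixes above:

kubectl -n tf get replicaset | grep tf-rabbit-exporter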

Workaround:

  1. List the RabbitMQ custom resources to identify the extra one:

    kubectl -n tf get rabbitmq
    

    Example of system response on the affected cluster:

    NAME                 AGE
    tf-rabbit-exporter   545d
    tf-rabbitmq          545d
    
  2. Delete the tf-rabbit-exporter custom resource:

    kubectl -n tf delete rabbitmq tf-rabbit-exporter
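
After the extra custom resource is deleted, verify that the tf-rabbit-exporter pods are no longer being recreated by re-running the verification command above. Based on the symptom described in this issue, the expected steady state is a single pod in the Running state:

kubectl -n tf get pods | grep tf-rabbit-exporter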