Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.18.0, including the Cluster releases 11.2.0 and 7.8.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.

Note

This section also outlines still valid known issues from previous Container Cloud releases.


MKE

[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes
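
To check the current state of the affected deployments before and after applying the workaround, you can run, for example:

kubectl -n kube-system get deployment compose compose-api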

Workaround:

  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments, such as calico-kube-controllers, compose, compose-api, and coredns:

    kubectl -n kube-system delete deployment <deploymentName>
    

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.



Bare metal

[24806] The dnsmasq parameters are not applied on multi-rack clusters

Fixed in 2.19.0

During bootstrap of a bare metal management cluster with a multi-rack topology, the dhcp-option=tag parameters are not applied to dnsmasq.conf.

Symptoms:

The dnsmasq-controller container logs contain errors similar to the following example:

KUBECONFIG=kaas-mgmt-kubeconfig kubectl -n kaas logs --tail 50 deployment/dnsmasq -c dnsmasq-controller

...
I0622 09:05:26.898898       8 handler.go:19] Failed to watch Object, kind:'dnsmasq': failed to list *unstructured.Unstructured: the server could not find the requested resource
E0622 09:05:26.899108       8 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.8/tools/cache/reflector.go:167: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: the server could not find the requested resource
...

Workaround:

Manually update deployment/dnsmasq with the updated image:

KUBECONFIG=kaas-mgmt-kubeconfig kubectl -n kaas set image deployment/dnsmasq dnsmasq-controller=mirantis.azurecr.io/bm/dnsmasq-controller:base-focal-2-18-issue24806-20220618085127

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

Fixed in 17.0.0 and 16.0.0

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
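
For example, assuming <nodeName> is the name of the node to be deleted (the drain options shown are common defaults and may need adjustment for your Kubernetes version and workloads):

kubectl cordon <nodeName>

kubectl drain <nodeName> --ignore-daemonsets --delete-emptydir-data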

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.

Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the kaas.mirantis.com/region label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l kaas.mirantis.com/region=<regionName>
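
To clean up several object kinds in one pass, you can loop over them; the following is a sketch based on the list above, which may not cover all leftover kinds in your deployment:

for kind in cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr; do
  kubectl delete ${kind} -l kaas.mirantis.com/region=<regionName>
done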

Warning

Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.



Equinix Metal

[16379,23865] Cluster update fails with the FailedMount warning

Fixed in 2.19.0

An Equinix-based management or managed cluster fails to update with the FailedAttachVolume and FailedMount warnings.

Workaround:

  1. Verify that the descriptions of the pods that failed to run contain FailedMount events:

    kubectl -n <affectedProjectName> describe pod <affectedPodName>
    
    • <affectedProjectName> is the Container Cloud project name where the pods failed to run

    • <affectedPodName> is a pod name that failed to run in this project

    In the pod description, identify the node name where the pod failed to run.

  2. Verify that the csi-rbdplugin logs of the affected node contain the rbd volume mount failed: <csi-vol-uuid> is being used error. The <csi-vol-uuid> is a unique RBD volume name.

    1. Identify csiPodName of the corresponding csi-rbdplugin:

      kubectl -n rook-ceph get pod -l app=csi-rbdplugin \
      -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
      
    2. Output the affected csiPodName logs:

      kubectl -n rook-ceph logs <csiPodName> -c csi-rbdplugin
      
  3. Scale down to 0 replicas the affected StatefulSet or Deployment of the pod that fails to initialize, as shown in the example below.
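
    For example (a sketch; <affectedStatefulSetName> and <affectedDeploymentName> are placeholder names for the controller of the affected pod, and only the command matching its workload type applies):

    kubectl -n <affectedProjectName> scale --replicas 0 statefulset <affectedStatefulSetName>

    kubectl -n <affectedProjectName> scale --replicas 0 deployment <affectedDeploymentName>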

  4. On every csi-rbdplugin pod, search for stuck csi-vol:

    for pod in `kubectl -n rook-ceph get pods|grep rbdplugin|grep -v provisioner|awk '{print $1}'`; do
      echo $pod
      kubectl exec -it -n rook-ceph $pod -c csi-rbdplugin -- rbd device list | grep <csi-vol-uuid>
    done
    
  5. Unmap the affected csi-vol:

    rbd unmap -o force /dev/rbd<i>
    

    The /dev/rbd<i> value is a mapped RBD volume that uses csi-vol.
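
    If the rbd CLI is not available on the node directly, the unmap can typically be run from inside the csi-rbdplugin pod identified in the previous step. A sketch under that assumption:

    kubectl -n rook-ceph exec -it <csiPodName> -c csi-rbdplugin -- rbd unmap -o force /dev/rbd<i>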

  6. Delete the VolumeAttachment of the affected pod:

    kubectl get volumeattachments | grep <csi-vol-uuid>
    kubectl delete volumeattachment <id>
    
  7. Scale up the affected StatefulSet or Deployment back to the original number of replicas and wait until its state is Running.



StackLight

[27732-1] OpenSearch PVC size custom settings are dismissed during deployment

Fixed in 11.6.0 and 12.7.0

The OpenSearch elasticsearch.persistentVolumeClaimSize custom setting is overwritten by logging.persistentVolumeClaimSize during deployment of a Container Cloud cluster of any type and is set to the default 30Gi.

Note

This issue does not block OpenSearch cluster operations if the default retention time is set. With the default retention time, the default capacity size is usually sufficient for this cluster.

The issue may affect the following Cluster releases:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

To verify that the cluster is affected:

Note

In the commands below, substitute parameters enclosed in angle brackets to match the affected cluster values.

kubectl --kubeconfig=<managementClusterKubeconfigPath> \
-n <affectedClusterProjectName> \
get cluster <affectedClusterName> \
-o=jsonpath='{.spec.providerSpec.value.helmReleases[*].values.elasticsearch.persistentVolumeClaimSize}' | xargs echo config size:


kubectl --kubeconfig=<affectedClusterKubeconfigPath> \
-n stacklight get pvc -l 'app=opensearch-master' \
-o=jsonpath="{.items[*].status.capacity.storage}" | xargs echo capacity sizes:

  • The cluster is not affected if the configuration size matches or is less than each capacity size. For example:

    config size: 30Gi
    capacity sizes: 30Gi 30Gi 30Gi
    
    config size: 50Gi
    capacity sizes: 100Gi 100Gi 100Gi
    
  • The cluster is affected if the configuration size is larger than any capacity size. For example:

    config size: 200Gi
    capacity sizes: 100Gi 100Gi 100Gi
    

Workaround for a new cluster creation:

  1. Select from the following options:

    • For a management or regional cluster, during the bootstrap procedure, open cluster.yaml.template for editing.

    • For a managed cluster, open the Cluster object for editing.

      Caution

      For a managed cluster, use the Container Cloud API instead of the web UI for cluster creation.

  2. In the opened .yaml file, add logging.persistentVolumeClaimSize along with elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  3. Continue the cluster deployment. The system will use the custom value set in logging.persistentVolumeClaimSize.

    Caution

    If elasticsearch.persistentVolumeClaimSize is absent in the .yaml file, the Admission Controller blocks the configuration update.

Workaround for an existing cluster:

Caution

During the application of the below workarounds, a short outage of OpenSearch and its dependent components may occur with the following alerts firing on the cluster. This behavior is expected. Therefore, disregard these alerts.

StackLight alerts that may fire during the workaround application, grouped by cluster size and outage probability (labels identify the affected component):

  • Any cluster, high probability:

    • KubeStatefulSetOutage (statefulset=opensearch-master)

    • KubeDeploymentOutage (deployment=opensearch-dashboards, deployment=metricbeat)

  • Large cluster, average probability:

    • KubePodsNotReady Removed in 17.0.0, 16.0.0, and 14.1.0 (created_by_name="opensearch-master*", created_by_name="opensearch-dashboards*", created_by_name="metricbeat-*")

    • OpenSearchClusterStatusWarning

    • OpenSearchNumberOfPendingTasks

    • OpenSearchNumberOfInitializingShards

    • OpenSearchNumberOfUnassignedShards

  • Any cluster, low probability:

    • KubeStatefulSetReplicasMismatch (statefulset=opensearch-master)

    • KubeDeploymentReplicasMismatch (deployment=opensearch-dashboards, deployment=metricbeat)

StackLight in HA mode with LVP provisioner for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Therefore, if required, migrate log data to a new persistent volume (PV).

  1. Move the existing log data to a new PV, if required.

  2. Increase the disk size for local volume provisioner (LVP).

  3. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  4. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.
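
    Optionally, verify that the recreated StatefulSet requests the new size; a supplementary check that is not part of the original procedure (the JSONPath targets the first volume claim template):

    kubectl -n stacklight get statefulset opensearch-master -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}'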

  5. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  6. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as elasticsearch.persistentVolumeClaimSize. For example:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    spec:
    ...
      providerSpec:
        value:
        ...
          helmReleases:
          - name: stacklight
            values:
              elasticsearch:
                persistentVolumeClaimSize: 100Gi
              logging:
                enabled: true
                persistentVolumeClaimSize: 100Gi
    
  7. Scale up the opensearch-master StatefulSet with dependent resources and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 3 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with an expandable StorageClass for OpenSearch PVCs

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl -n stacklight get statefulset opensearch-master -o yaml | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Patch the PVCs with the new elasticsearch.persistentVolumeClaimSize value:

    kubectl -n stacklight patch pvc opensearch-master-opensearch-master-0 -p '{ "spec": { "resources": { "requests": { "storage": "<pvcSize>" }}}}'
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as elasticsearch.persistentVolumeClaimSize. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    
StackLight in non-HA mode with a non-expandable StorageClass and no LVP for OpenSearch PVCs

Warning

After applying this workaround, the existing log data will be lost. Depending on your custom provisioner, a third-party tool such as pv-migrate may allow you to copy all data from one PV to another.

If data loss is acceptable, proceed with the workaround below.

Note

To verify whether a StorageClass is expandable:

kubectl -n stacklight get pvc | grep opensearch-master | awk '{print $6}' | xargs -I{} kubectl get storageclass {} -o yaml | grep 'allowVolumeExpansion: true'

A positive system response is allowVolumeExpansion: true. A negative system response is blank or false.

  1. Scale down the opensearch-master StatefulSet with dependent resources to 0 and disable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 0 statefulset opensearch-master
    
    kubectl -n stacklight scale --replicas 0 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 0 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : true }}'
    
  2. Recreate the opensearch-master StatefulSet with the updated disk size:

    kubectl get statefulset opensearch-master -o yaml -n stacklight | sed 's/storage: 30Gi/storage: <pvcSize>/g' > opensearch-master.yaml
    
    kubectl -n stacklight delete statefulset opensearch-master
    
    kubectl create -f opensearch-master.yaml
    

    Replace <pvcSize> with the elasticsearch.persistentVolumeClaimSize value.

  3. Delete existing PVCs:

    kubectl delete pvc -l 'app=opensearch-master' -n stacklight
    

    Warning

    This command removes all existing logs data from PVCs.

  4. In the Cluster configuration, set logging.persistentVolumeClaimSize to the same value as the size of the elasticsearch.persistentVolumeClaimSize parameter. For example:

     apiVersion: cluster.k8s.io/v1alpha1
     kind: Cluster
     spec:
     ...
       providerSpec:
         value:
         ...
           helmReleases:
           - name: stacklight
             values:
               elasticsearch:
                 persistentVolumeClaimSize: 100Gi
               logging:
                 enabled: true
                 persistentVolumeClaimSize: 100Gi
    
  5. Scale up the opensearch-master StatefulSet with dependent resources to 1 and enable the elasticsearch-curator CronJob:

    kubectl -n stacklight scale --replicas 1 statefulset opensearch-master
    
    sleep 100
    
    kubectl -n stacklight scale --replicas 1 deployment opensearch-dashboards
    
    kubectl -n stacklight scale --replicas 1 deployment metricbeat
    
    kubectl -n stacklight patch cronjobs elasticsearch-curator -p '{"spec" : {"suspend" : false }}'
    

[27732-2] Custom settings for ‘elasticsearch.logstashRetentionTime’ are dismissed

Fixed in 11.6.0 and 12.7.0

Custom settings for the deprecated elasticsearch.logstashRetentionTime parameter are overwritten by the default value of 1 day.

The issue may affect the following Cluster releases with enabled elasticsearch.logstashRetentionTime:

  • 11.2.0 - 11.5.0

  • 7.8.0 - 7.11.0

  • 8.8.0 - 8.10.0, 12.5.0 (MOSK clusters)

  • 10.2.4 - 10.8.1 (attached MKE 3.4.x clusters)

  • 13.0.2 - 13.5.1 (attached MKE 3.5.x clusters)

As a workaround, in the Cluster object, replace elasticsearch.logstashRetentionTime with elasticsearch.retentionTime that was implemented to replace the deprecated parameter. For example:

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  ...
  providerSpec:
    value:
    ...
      helmReleases:
      - name: stacklight
        values:
          elasticsearch:
            retentionTime:
              logstash: 10
              events: 10
              notifications: 10
          logging:
            enabled: true

For the StackLight configuration procedure and parameters description, refer to Configure StackLight.

[20876] StackLight pods get stuck with the ‘NodeAffinity failed’ error

Note

Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshoot StackLight.

On a managed cluster, the StackLight pods may get stuck with the Pod predicate NodeAffinity failed error in the pod status. The issue may occur if the StackLight node label was added to one machine and then removed from another one.

The issue does not affect the StackLight services: all required StackLight pods migrate successfully, except for the extra pods that are created and get stuck during pod migration.

As a workaround, remove the stuck pods:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight delete pod <stuckPodName>
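
If you need to list the stuck pods first, you can filter for pods in the Failed phase; a sketch that assumes the stuck pods are reported as Failed:

kubectl --kubeconfig <managedClusterKubeconfig> -n stacklight get pods --field-selector status.phase=Failed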

Upgrade

[24802] Container Cloud upgrade to 2.18.0 can trigger managed clusters update

Affects only Container Cloud 2.18.0

On clusters with proxy enabled and NO_PROXY settings that contain localhost/127.0.0.1 or match the automatically added Container Cloud internal endpoints, the Container Cloud release upgrade from 2.17.0 to 2.18.0 triggers an automatic update of managed clusters to the latest available Cluster releases in their respective series.

For the issue workaround, contact Mirantis support.

[21810] Upgrade to Cluster releases 5.22.0 and 7.5.0 may get stuck

Affects Ubuntu-based clusters deployed after Feb 10, 2022

If you deploy an Ubuntu-based cluster using the deprecated Cluster release 7.4.0 (and earlier) or 5.21.0 (and earlier) starting from February 10, 2022, the cluster update to the Cluster releases 7.5.0 and 5.22.0 may get stuck while applying the Deploy state to the cluster machines. The issue affects all cluster types: management, regional, and managed.

To verify that the cluster is affected:

  1. Log in to the Container Cloud web UI.

  2. In the Clusters tab, capture the RELEASE and AGE values of the required Ubuntu-based cluster. If the values match the ones from the issue description, the cluster may be affected.

  3. Using SSH, log in to the manager or worker node that got stuck while applying the Deploy state and identify the containerd package version:

    containerd --version
    

    If the version is 1.5.9, the cluster is affected.
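
    Optionally, list the containerd.io versions available from the configured repositories before applying the downgrade described in the workaround below; this supplementary check is not part of the original procedure:

    apt-cache policy containerd.io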

  4. In /var/log/lcm/runners/<nodeName>/deploy/, verify whether the Ansible deployment logs contain the following errors that indicate that the cluster is affected:

    The following packages will be upgraded:
      docker-ee docker-ee-cli
    The following packages will be DOWNGRADED:
      containerd.io
    
    STDERR:
    E: Packages were downgraded and -y was used without --allow-downgrades.
    

Workaround:

Warning

Apply the steps below to the affected nodes one by one and only after each consecutive node gets stuck in the Deploy phase with the Ansible log errors. This sequence ensures that each node is cordoned and drained and that Docker is properly stopped, so no workloads are affected.

  1. Using SSH, log in to the first affected node and install containerd 1.5.8:

    apt-get install containerd.io=1.5.8-1 -y --allow-downgrades --allow-change-held-packages
    
  2. Wait for Ansible to reconcile. The node should become Ready in several minutes.

  3. Wait for the next node of the cluster to get stuck on the Deploy phase with the Ansible log errors. Only after that, apply the steps above on the next node.

  4. Patch the remaining nodes one-by-one using the steps above.


Container Cloud web UI

[23002] Inability to set a custom value for a predefined node label

Fixed in 7.11.0, 11.5.0 and 12.5.0

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.
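
A minimal sketch of editing the Machine object directly from the management cluster; <projectName> and <machineName> are placeholders for the project and machine of the affected node:

kubectl --kubeconfig <managementClusterKubeconfigPath> -n <projectName> edit machine <machineName>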


[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the user token is missing the necessary role for the new project. As a workaround, log out and log back in to the Container Cloud web UI.