Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.2.0 including the Cluster release 5.9.0.

Note

This section also outlines the known issues from previous Container Cloud releases that are still valid.


AWS

[8013] Managed cluster deployment requiring PVs may fail

Fixed in the Cluster release 7.0.0

Note

The issue below affects only Kubernetes 1.18 deployments. Moving forward, the workaround for this issue will be moved from Release Notes to Operations Guide: Troubleshooting.

On a management cluster with multiple AWS-based managed clusters, some clusters fail to complete the deployments that require persistent volumes (PVs), for example, Elasticsearch. Some of the affected pods get stuck in the Pending state with the "pod has unbound immediate PersistentVolumeClaims" and "node(s) had volume node affinity conflict" errors.
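
To identify the affected pods and confirm the errors, you can list the pods stuck in the Pending state and inspect their events. The commands below are a diagnostic sketch; the stacklight namespace is used only as an example.

    # List the pods that are stuck in the Pending state across all namespaces.
    kubectl get pods --all-namespaces --field-selector=status.phase=Pending

    # Inspect the events of an affected pod for the volume-related errors.
    kubectl -n stacklight describe pod <pod_name>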

Warning

The workaround below applies to HA deployments where data can be rebuilt from replicas. If you have a non-HA deployment, back up any existing data before proceeding, since all data will be lost while applying the workaround.

Workaround:

  1. Obtain the persistent volume claims related to the storage mounts of the affected pods:

    kubectl get pod/<pod_name1> pod/<pod_name2> \
    -o jsonpath='{.spec.volumes[?(@.persistentVolumeClaim)].persistentVolumeClaim.claimName}'
    

    Note

    In the command above and in the subsequent steps, substitute the parameters enclosed in angle brackets with the corresponding values.

  2. Delete the affected Pods and PersistentVolumeClaims to reschedule them. For example, for StackLight:

    kubectl -n stacklight delete \
      pod/<pod_name1> pod/<pod_name2> ... \
      pvc/<pvc_name1> pvc/<pvc_name2> ...
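
After the pods and PVCs are deleted, their controllers, for example, the StatefulSet controller in the StackLight case, recreate them and the pods are rescheduled. As a quick check, you can watch the resources until the pods reach the Running state and the PVCs are Bound. This is a sketch that assumes the StackLight example above:

    # Watch the recreated pods and PVCs until the pods are Running
    # and the PVCs are Bound.
    kubectl -n stacklight get pods,pvc -w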
    


Bare metal

[6988] LVM fails to deploy if the volume group name already exists

Fixed in Container Cloud 2.5.0

During a management or managed cluster deployment, LVM cannot be deployed on a new disk if an old volume group with the same name already exists on the target hardware node but on a different disk.

Workaround:

In the bare metal host profile specific to your hardware configuration, add the wipe: true parameter to the device that fails to be deployed. For the procedure details, see Operations Guide: Create a custom host profile.
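
If you are not sure which disk still carries the conflicting volume group, the standard LVM tooling on the target hardware node can help locate it before you adjust the profile. This is a diagnostic sketch only; the volume group name is a placeholder.

    # List all volume groups and the physical volumes that back them.
    sudo vgs
    sudo pvs

    # Display the details of the conflicting volume group.
    sudo vgdisplay <volume_group_name>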

[7655] Wrong status for an incorrectly configured L2 template

Fixed in 2.11.0

If an L2 template is configured incorrectly, a bare metal cluster is deployed successfully but with runtime errors in the IpamHost object.

Workaround:

If you suspect that the machine is not working properly because of incorrect network configuration, verify the status of the corresponding IpamHost object. Inspect the l2RenderResult and ipAllocationResult object fields for error messages.
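
For example, assuming the project namespace and the IpamHost object name of the affected machine, you can dump the object and search it for these fields. This is a sketch; substitute the placeholders with the values from your environment.

    # Dump the IpamHost object and check the L2 rendering and IP allocation
    # results for error messages.
    kubectl -n <project_name> get ipamhost <ipamhost_name> -o yaml | \
    grep -A 10 -E 'l2RenderResult|ipAllocationResult'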


[8560] Manual deletion of BareMetalHost leads to its silent removal

Fixed in Container Cloud 2.5.0

If a BareMetalHost object is manually deleted from a managed cluster, it is removed silently, without power-off and deprovisioning of the host, which leads to managed cluster failures.

Workaround:

Do not manually delete a BareMetalHost that has the Provisioned status.
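
To verify the status of the bare metal hosts before performing any manual operations on them, you can list the BareMetalHost objects in the corresponding project namespace. The namespace below is a placeholder.

    # List the BareMetalHost objects and verify their provisioning status.
    kubectl -n <project_name> get baremetalhosts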


IAM

[2757] IAM fails to start during management cluster deployment

Fixed in Container Cloud 2.4.0

During a management cluster deployment, IAM fails to start with the IAM pods being in the CrashLoopBackOff status.

Workaround:

  1. Log in to the bootstrap node.

  2. Remove the iam-mariadb-state configmap:

    kubectl delete cm -n kaas iam-mariadb-state
    
  3. Manually delete the mariadb pods:

    kubectl delete po -n kaas mariadb-server-{0,1,2}
    

    Wait for the pods to start. If a mariadb pod fails to start with the "connection to peer timed out" exception, repeat step 2.

  4. Obtain the MariaDB database admin password:

    kubectl get secrets -n kaas mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d ; echo
    
  5. Log in to MariaDB:

    kubectl exec -it -n kaas mariadb-server-0 -- bash -c 'mysql -uroot -p<mysqlDbadminPassword>'
    

    Substitute <mysqlDbadminPassword> with the corresponding value obtained in the previous step.

  6. Run the following command:

    DROP DATABASE IF EXISTS keycloak;
    
  7. Manually delete the Keycloak pods:

    kubectl delete po -n kaas iam-keycloak-{0,1,2}
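
After the Keycloak pods are deleted, they are recreated so that Keycloak can re-initialize its database. To verify that the procedure succeeded, you can watch the pods in the kaas namespace until the mariadb-server and iam-keycloak pods reach the Running state, for example:

    # Watch the pods in the kaas namespace until the mariadb-server
    # and iam-keycloak pods reach the Running state.
    kubectl -n kaas get pods -w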
    

Storage

[7073] Cannot automatically remove a Ceph node

Fixed in 2.16.0

When removing a worker node, it is not possible to automatically remove a Ceph node. The workaround is to manually remove the Ceph node from the Ceph cluster as described in Operations Guide: Add, remove, or reconfigure Ceph nodes before removing the worker node from your deployment.
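
Before removing the Ceph node, it may help to verify which Ceph daemons run on the node in question. The sketch below assumes a Rook-based Ceph deployment with the rook-ceph-tools pod in the rook-ceph namespace; the actual names may differ in your deployment, and the authoritative procedure is the one in the Operations Guide.

    # Check the Ceph OSD topology to see which OSDs belong to the node
    # that is being removed. Assumes the rook-ceph-tools deployment.
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree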


Container Cloud web UI

[249] A newly created project does not display in the Container Cloud web UI

Affects only Container Cloud 2.18.0 and earlier

A project that is newly created in the Container Cloud web UI does not display in the Projects list even after refreshing the page. The issue occurs because the user token is missing the necessary role for the new project. As a workaround, log out and log back in to the Container Cloud web UI.