Known issues

This section lists known issues with workarounds for the Mirantis Container Cloud release 2.20.0, including the Cluster releases 11.4.0 and 7.10.0.

For other issues that can occur while deploying and operating a Container Cloud cluster, see Deployment Guide: Troubleshooting and Operations Guide: Troubleshooting.


This section also outlines still valid known issues from previous Container Cloud releases.


[20651] A cluster deployment or update fails with not ready compose deployments

A managed cluster deployment, attachment, or update to a Cluster release with MKE versions 3.3.13, 3.4.6, 3.5.1, or earlier may fail with the compose pods flapping (ready > terminating > pending) and with the following error message appearing in logs:

'not ready: deployments: kube-system/compose got 0/0 replicas, kube-system/compose-api
 got 0/0 replicas'
 ready: false
 type: Kubernetes

Workaround:


  1. Disable Docker Content Trust (DCT):

    1. Access the MKE web UI as admin.

    2. Navigate to Admin > Admin Settings.

    3. In the left navigation pane, click Docker Content Trust and disable it.

  2. Restart the affected deployments such as calico-kube-controllers, compose, compose-api, coredns, and so on:

    kubectl -n kube-system delete deployment <deploymentName>

    Once done, the cluster deployment or update resumes.

  3. Re-enable DCT.
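The restart in step 2 can be scripted. A minimal dry-run sketch, assuming the deployment names from the step above — it only prints the delete commands for review instead of running kubectl:

```shell
# Dry-run sketch: build the kubectl commands that restart the flapping
# kube-system deployments; the names come from the workaround step above.
deployments="calico-kube-controllers compose compose-api coredns"
cmds=""
for d in $deployments; do
  cmds="${cmds}kubectl -n kube-system delete deployment ${d}
"
done
# Print the commands for an operator to review and run manually.
printf '%s' "$cmds"
```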

Bare metal

[24005] Deletion of a node with ironic Pod is stuck in the Terminating state

During deletion of a manager machine running the ironic Pod from a bare metal management cluster, the following problems occur:

  • All Pods are stuck in the Terminating state

  • A new ironic Pod fails to start

  • The related bare metal host is stuck in the deprovisioning state

As a workaround, before deletion of the node running the ironic Pod, cordon and drain the node using the kubectl cordon <nodeName> and kubectl drain <nodeName> commands.
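The workaround above can be captured as a short dry-run helper that prints the two commands for the affected node; <nodeName> is a placeholder, and nothing is executed against the cluster:

```shell
# Dry-run sketch: print the cordon and drain commands to run before
# deleting the manager machine that hosts the ironic Pod.
node="<nodeName>"
plan="kubectl cordon ${node}
kubectl drain ${node}"
printf '%s\n' "$plan"
```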

[20736] Region deletion failure after regional deployment failure

If a baremetal-based regional cluster deployment fails before pivoting is done, the corresponding region deletion fails.


Workaround:

Using the command below, manually delete all possible traces of the failed regional cluster deployment, including but not limited to the following objects that contain the label of the affected region:

  • cluster

  • machine

  • baremetalhost

  • baremetalhostprofile

  • l2template

  • subnet

  • ipamhost

  • ipaddr

kubectl delete <objectName> -l <regionLabelKey>=<regionName>
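The per-kind cleanup can be sketched as a dry-run loop over the object kinds listed above; <regionLabelKey> and <regionName> are placeholders for the label of the affected region, and the loop only prints the commands:

```shell
# Dry-run sketch: print one delete-by-label command per object kind
# listed above; replace the placeholders before running any command.
kinds="cluster machine baremetalhost baremetalhostprofile l2template subnet ipamhost ipaddr"
out=""
for k in $kinds; do
  out="${out}kubectl delete ${k} -l <regionLabelKey>=<regionName>
"
done
printf '%s' "$out"
```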


Do not use the same region name again after the regional cluster deployment failure since some objects that reference the region name may still exist.


[26070] RHEL system cannot be registered in Red Hat portal over MITM proxy

Deployment of RHEL machines using the Red Hat portal registration, which requires user and password credentials, over MITM proxy fails while building the virtual machine OVF template with the following error:

Unable to verify server's identity: [SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed (_ssl.c:618)

The Container Cloud deployment gets stuck while applying the RHEL license to machines with the same error in the lcm-agent logs.

As a workaround, use the internal Red Hat Satellite server that a VM can access directly without a MITM proxy.

Management cluster upgrade

[26740] Failure to upgrade a management cluster with a custom certificate

An upgrade of a Container Cloud management cluster with a custom Keycloak or web UI TLS certificate fails with the following example error:

failed to update management cluster: \
admission webhook "" denied the request: \
failed to validate TLS spec for Cluster 'default/kaas-mgmt': \
desired hostname is not set for 'ui'


Workaround:

Verify that the tls section of the management cluster contains the hostname and certificate fields for configured applications:

  1. Open the management Cluster object for editing:

    kubectl edit cluster <mgmtClusterName>

  2. Verify that the tls section contains the following fields:

    - name: keycloak
      hostname: <keycloakHostName>
      tlsConfigRef: "" or "keycloak"
    - name: ui
      hostname: <webUIHostName>
      tlsConfigRef: "" or "ui"
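As a quick sanity check, the expected shape of the tls entries can be verified by counting hostname fields; the fragment below mirrors the snippet above with placeholder values and is only an illustration:

```shell
# Sketch: a tls fragment with placeholder values; the grep verifies that
# each configured application (keycloak and ui) has a hostname set.
tls='- name: keycloak
  hostname: <keycloakHostName>
  tlsConfigRef: ""
- name: ui
  hostname: <webUIHostName>
  tlsConfigRef: ""'
printf '%s\n' "$tls" | grep -c 'hostname:'
```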

Container Cloud web UI

[23002] Inability to set a custom value for a predefined node label

During machine creation using the Container Cloud web UI, a custom value for a node label cannot be set.

As a workaround, manually add the value to spec.providerSpec.value.nodeLabels in machine.yaml.
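For illustration, a nodeLabels entry in the Machine object might look as follows; the key and value are hypothetical placeholders, and the exact list format should be checked against the existing Machine objects:

```yaml
# Hypothetical example only: add the custom value under nodeLabels in
# spec.providerSpec.value of the Machine object.
spec:
  providerSpec:
    value:
      nodeLabels:
      - key: <labelKey>
        value: <customValue>
```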


Ceph

[26820] KaaSCephCluster does not reflect issues during Ceph cluster deletion

The status section of the KaaSCephCluster CR does not reflect issues that occur during deletion of a Ceph cluster.

As a workaround, inspect Ceph Controller logs on the managed cluster:

kubectl --kubeconfig <managedClusterKubeconfig> -n ceph-lcm-mirantis logs <ceph-controller-pod-name>