Known issues#

This section describes the MKE 4k known issues and their available workarounds.

Post-install kubelet parameter modifications require a k0s restart#

Modifications made to the kubelet parameters in the mke4.yaml configuration file after the initial MKE 4k installation require a restart of k0s on every cluster node. To do this:

  1. Wait roughly 60 seconds after running the mkectl apply command, to give the pods time to enter the Running state.

  2. Run the systemctl restart k0scontroller command on all manager nodes and the systemctl restart k0sworker command on all worker nodes.
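
A minimal restart sequence, assuming the default k0s service names (k0scontroller on manager nodes and k0sworker on worker nodes):

# On every manager node
systemctl restart k0scontroller

# On every worker node
systemctl restart k0sworker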

Upgrade may fail on clusters with two manager nodes#

MKE 3 upgrades to MKE 4k may fail on clusters that have only two manager nodes.

Info

Mirantis does not support upgrading MKE 3 clusters that have an even number of manager nodes. In general, clustering systems avoid an even number of manager nodes due to quorum and availability considerations.

Calico IPVS mode is not supported#

Calico IPVS mode is not yet supported in MKE 4k. As such, upgrading from an MKE 3 cluster that uses this networking mode results in the following error:

FATA[0640] Upgrade failed due to error: failed to run step [Upgrade Tasks]:
unable to install BOP: unable to apply MKE4 config: failed to wait for pods:
failed to wait for pods: failed to list pods: client rate limiter Wait returned
an error: context deadline exceeded
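
Before upgrading, you can check which kube-proxy mode the source MKE 3 cluster uses. A minimal sketch, assuming IPVS is reflected in the kube_proxy_mode setting of the MKE 3 configuration and that the standard MKE 3 config-toml endpoint is available; substitute your own MKE 3 host name and credentials:

# Obtain an auth token, then inspect the current MKE 3 configuration for the kube-proxy mode
AUTHTOKEN=$(curl -sk -d '{"username":"<username>","password":"<password>"}' https://<mke3-host>/auth/login | jq -r .auth_token)
curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://<mke3-host>/api/ucp/config-toml | grep kube_proxy_mode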

Upgrade to MKE 4k fails if kubeconfig file is present in source MKE 3.x#

Upgrade to MKE 4k fails if the ~/.mke/mke.kubeconf file is present in the source MKE 3.x system.

Workaround:

Make a backup of the old ~/.mke/mke.kubeconf file and then delete it.
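
A minimal sketch of the backup, assuming a .bak suffix; moving the file both preserves a copy and removes it from the path that the upgrade checks:

mv ~/.mke/mke.kubeconf ~/.mke/mke.kubeconf.bak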

reset command must be run with --force flag#

You must run the reset command with the --force flag; without it, the command always returns an error. For example, running reset without the flag fails:

mkectl reset -f mke4.yaml

Example output:

time="2025-09-08T19:35:44-04:00" level=info msg="==> Running phase: Disconnect from hosts"
Error: reset requires --force
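
With the flag added, and assuming --force combines with the same configuration file option shown above, the invocation would resemble:

mkectl reset --force -f mke4.yaml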

Addition of extra scopes with mkectl login causes CLI to authenticate twice#

When you create the kubeconfig with the mkectl login command and add extra scopes using the --oidc-extra-scopes flag, the CLI attempts to authenticate twice: during the generation of the configuration and on each cluster interaction with the generated kubeconfig.

Workaround:

When adding extra scopes to the --oidc-extra-scopes flag, make sure to also add the offline_access scope. For example:

--oidc-extra-scopes=groups,offline_access

mkectl config get command generates log lines that malform YAML output#

The output of the mkectl config get command contains log records at the beginning that invalidate the resulting YAML configuration file.

Workaround:

Exclude unwanted logs by running the mkectl config get command with a higher log level. For example:

mkectl config get -l fatal
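
With log output suppressed, you can redirect the result straight to a file; a minimal sketch, assuming mke4.yaml is the intended destination:

mkectl config get -l fatal > mke4.yaml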

Pod logs do not display when MKE 4k is uninstalled and then reinstalled on the same nodes#

If you uninstall MKE 4k and later reinstall it on the same nodes, the installation will succeed, but because of data left over from the previous installation, the pods that run on the manager nodes will not display logs and will present the following CA error:

tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes-ca"

Workaround:

  1. Following the uninstallation of MKE 4k, reboot the manager nodes before you reinstall the software.

  2. Run the following command on each manager node:

    rm -rf /var/lib/kubelet/
    

System addons fail to upgrade from MKE 4k 4.1.1 to MKE 4k 4.1.2#

When you upgrade an MKE 4k 4.1.1 cluster that is configured with an external authentication provider (OIDC, SAML, LDAP) to MKE 4k 4.1.2, mkectl reports overall upgrade success despite the upgrade failure of the following system addons: Dex, NGINX Ingress Controller, MKE 4k Dashboard.

The mke-operator logs present the following error message regarding a missing secret:

failed to create or update ClusterDeployment: failed to prepare MKE ClusterDeployment: failed to prepare service authentication: unable to retrieve the Dex deployment secret: Secret "authentication-credentials" not found

The root cause is that the naming convention for authentication secrets changed between MKE 4k versions, from protocol-specific names (for example, ldap-bind-password) to a universal name (authentication-credentials). The mke-operator fails early in the reconcile loop because it attempts to locate the authentication-credentials secret, which does not yet exist, and this prevents the upgrade of the system addons.

Workaround:

Manually create the secret by copying the data from the old secret to the new expected secret name. This allows the operator to locate the required credentials and proceed with the upgrade.

  1. Identify the existing authentication secret. For example, the ldap-bind-password for LDAP configurations.

  2. Copy the content of the secret into a new Secret named authentication-credentials within the same namespace.

    kubectl get secret ldap-bind-password -o json -n mke | \
    jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.selfLink, .metadata.ownerReferences) | .metadata.name = "authentication-credentials"' | \
    kubectl apply -f -
    
  3. Verify the new secret.

    kubectl get secret authentication-credentials -n mke