Known issues

MKE 3.7.6 known issues with available workarounds include:

[MKE-10152] Upgrading large Windows clusters can initiate a rollback

Upgrades can roll back on clusters with a large number of Windows worker nodes.


Workaround: Invoke the --manual-worker-upgrade option and then manually upgrade the workers.
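
As a sketch, the option is passed to the MKE upgrade command. The image tag and volume mount shown here are illustrative assumptions, not a required invocation:

    docker container run --rm -it \
      --name ucp \
      -v /var/run/docker.sock:/var/run/docker.sock \
      mirantis/ucp:3.7.6 \
      upgrade --manual-worker-upgrade

With the option set, the upgrade does not touch the worker nodes automatically; you then upgrade each worker manually.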

[MKE-9699] Ingress Controller with external load balancer can enter crashloop

Due to the upstream Kubernetes issue 73140, rapidly toggling the Ingress Controller while an external load balancer is in use can cause the resource to become stuck in a crash loop.

Workaround:

  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, navigate to <user name> > Admin Settings > Ingress.

  3. Click the Kubernetes tab to display the HTTP Ingress Controller for Kubernetes pane.

  4. Toggle the HTTP Ingress Controller for Kubernetes enabled control to the left to disable the Ingress Controller.

  5. Use the CLI to delete the Ingress Controller resources:

    kubectl delete service ingress-nginx-controller-admission --namespace ingress-nginx
    kubectl delete deployment ingress-nginx-controller --namespace ingress-nginx
  6. Verify the successful deletion of the resources:

    kubectl get all --namespace ingress-nginx

    Example output:

    No resources found in ingress-nginx namespace.
  7. Return to the HTTP Ingress Controller for Kubernetes pane in the MKE web UI and change the NodePort numbers for HTTP Port, HTTPS Port, and TCP Port.

  8. Toggle the HTTP Ingress Controller for Kubernetes enabled control to the right to re-enable the Ingress Controller.

[MKE-8662] Swarm-only manager nodes are labeled as mixed mode

When MKE is installed in swarm-only mode, manager nodes start off labeled as mixed mode. Because the Kubernetes installation is skipped altogether, however, they should be labeled as swarm mode.

Workaround: Change the labels following installation.

[MKE-8914] Windows Server Core with Containers images incompatible with GCP

The use of Windows Server Core with Containers images prevents kubelet from starting up, as these images are not compatible with GCP.

Workaround: Use Windows Server or Windows Server Core images.

[MKE-8814] Mismatched MTU values cause Swarm overlay network issues on GCP

Communication between GCP VPCs and Docker networks that use Swarm overlay networks will fail if their MTU values are not manually aligned. By default, the MTU value for GCP VPCs is 1460, while the default MTU value for Docker networks is 1500.


Workaround: Select from the following options:

  • Create a new VPC and set the MTU value to 1500.

  • Set the MTU value of the existing VPC to 1500.

For more information, refer to the Google Cloud Platform documentation, Change the MTU setting of a VPC network.
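
For example, with the gcloud CLI the VPC-side options look like the following sketch, where the network name is a placeholder:

    # Create a new VPC with an MTU of 1500:
    gcloud compute networks create example-vpc --subnet-mode=auto --mtu=1500

    # Or set the MTU of an existing VPC to 1500:
    gcloud compute networks update example-vpc --mtu=1500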

[FIELD-6785] Reinstallation can fail following cluster CA rotation

If MKE 3.7.x is uninstalled soon after a cluster CA rotation, reinstalling MKE 3.7.x or 3.6.x on the existing Docker Swarm can fail with the following error message:

unable to sign cert: {"code":1000,"message":"x509: provided PrivateKey doesn't match parent's PublicKey"}

Workaround:

  1. Forcefully trigger swarm snapshot:

    old_val=$(docker info --format '{{.Swarm.Cluster.Spec.Raft.SnapshotInterval}}')
    docker swarm update --snapshot-interval 1
    docker swarm update --snapshot-interval ${old_val}
  2. Reattempt to install MKE.

[FIELD-6402] Default metric collection memory settings may be insufficient

In MKE 3.7, ucp-metrics collects more metrics than in previous versions of MKE. As such, for large clusters with many nodes, the following ucp-metrics component default settings may be insufficient:

  • memory request: 1Gi

  • memory limit: 2Gi


Workaround: Administrators can modify the MKE configuration file to increase the default memory request and memory limit values for the ucp-metrics component. Both settings are under the cluster section:

  • For memory request, modify the prometheus_memory_request setting

  • For memory limit, modify the prometheus_memory_limit setting
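
As a sketch, the adjusted settings in the MKE configuration file (TOML) might look like the following, assuming the cluster section is the [cluster_config] table and using example values sized for a large cluster:

    [cluster_config]
      # Example values; the defaults are 1Gi and 2Gi.
      prometheus_memory_request = "2Gi"
      prometheus_memory_limit = "4Gi"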

[MKE-11281] cAdvisor Pods on Windows nodes cannot enter ‘Running’ state

When you enable cAdvisor, Pods are deployed to every node in the cluster. cAdvisor Pods only work on Linux nodes, however, so the Pods deployed to Windows nodes remain perpetually suspended and never run.

Workaround: Patch the ucp-cadvisor DaemonSet to include a node selector, so that only Linux nodes are targeted:

kubectl patch daemonset ucp-cadvisor -n kube-system --type='json' \
-p='[{"op": "replace", "path": "/spec/template/spec/nodeSelector", "value":
{"kubernetes.io/os": "linux"}}]'

[MKE-11282] --swarm-only upgrade fails due to ‘unavailable’ manager ports

Upgrades of Swarm-only clusters that were originally installed with the --swarm-only option fail the pre-upgrade checks at the Check 7 of 8: [Port Requirements] step.


Workaround: Include the --force-port-check option when upgrading a Swarm-only cluster.
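
As a sketch, the option is added to the upgrade invocation. The image tag and volume mount are illustrative assumptions:

    docker container run --rm -it \
      --name ucp \
      -v /var/run/docker.sock:/var/run/docker.sock \
      mirantis/ucp:3.7.6 \
      upgrade --force-port-check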