Interlock NGINX proxy


What’s new

  • In previous versions, MKE did not support Kubernetes ingress, so services running inside the cluster could not serve requests that originated outside the cluster. Now a service running inside the cluster can be exposed to external requests using Istio Ingress, which means you can route incoming requests to a specific service by hostname, URL, or request header.

  • You can now use the following Istio ingress mechanisms:

    • Gateway

    • Virtual Service

    • Destination Rule

    • Mixer (handler, instance, and rule)

    For this release, only Istio Ingress for Kubernetes is available.
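
    The Gateway and Virtual Service mechanisms above combine to route by hostname and URL path. As an illustrative sketch only (the hostname, resource names, backend service, and ports are all hypothetical, not MKE defaults):

```yaml
# Hypothetical example: route requests for example.com to a backend service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway          # hypothetical name
spec:
  selector:
    istio: ingressgateway        # Istio's default ingress gateway selector
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"              # hypothetical hostname
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-routes           # hypothetical name
spec:
  hosts:
  - "example.com"
  gateways:
  - example-gateway
  http:
  - match:
    - uri:
        prefix: /api             # route by URL path
    route:
    - destination:
        host: my-service         # hypothetical backend service
        port:
          number: 8080
```

    A Destination Rule could then be layered on the same `host` to set load-balancing or TLS policy for the routed traffic.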

  • MKE provides GPU support for Kubernetes workloads.
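
    A workload consumes a GPU through the standard Kubernetes extended-resource request. A minimal sketch, assuming NVIDIA device-plugin resources (the pod name and image are hypothetical):

```yaml
# Hypothetical pod spec requesting one NVIDIA GPU.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.2-base   # hypothetical image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # request a single GPU
```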

  • MKE supports Kubernetes on Windows Server nodes.

  • Windows Server Node VXLAN overlay network data plane in Kubernetes

    • Windows Server and Linux nodes are now normalized to both use VXLAN.

    • Overlay networking is applied to ensure communication between cluster nodes.

    • A BGP control plane is no longer required when using VXLAN.

    • Upgrading to the VXLAN data plane is not supported; upgraded clusters continue to use IPIP, in which case Windows nodes will not be part of the Kubernetes cluster.
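
  To target the new Windows Server nodes, a workload typically pins itself with the standard OS node selector. A minimal sketch (the deployment name and image are hypothetical):

```yaml
# Hypothetical deployment pinned to Windows Server nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-app                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-app
  template:
    metadata:
      labels:
        app: win-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows     # standard OS label in Kubernetes 1.17
      containers:
      - name: app
        image: mcr.microsoft.com/windows/nanoserver:1809   # hypothetical image
```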

  • This release upgrades the Kubernetes version from 1.14.8 (in MKE 3.2) to 1.17.4 (in MKE 3.3). For information about Kubernetes 1.17, including a comprehensive list of new features, see the Kubernetes 1.17 Release Notes.

  • The Kubernetes API underwent significant change with v1.16. For detailed information, refer to the release announcement. To learn about deprecated APIs that were removed in v1.16, refer to deprecated APIs. Significantly, as a result of the deprecation and removal of the “containerized” flag in Kubernetes, neither kubelet nor kube-proxy can run as a container. For detailed information, refer to https://github.com/kubernetes/kubernetes/issues/74148.

Bug fixes

  • MKE 3.2 and earlier provided a global flag for granting users and service accounts permission to deploy Kubernetes workloads to a specific node. MKE 3.3.0 instead lets you grant permissions to an individual user or service account by setting the Kubernetes cluster-admin role binding for just that user or service account.

    This change does not affect Swarm workloads.
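
    As an illustrative sketch of such a per-user binding (the binding name and username are hypothetical, not values MKE creates for you):

```yaml
# Hypothetical example: grant cluster-admin to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jane-cluster-admin       # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  name: jane                     # hypothetical MKE username
  apiGroup: rbac.authorization.k8s.io
```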

  • The feature gates VolumeSnapshotDataSource, ExpandCSIVolumes, CSIMigration, and VolumeSubpathEnvExpansion are included in the experimental storage features and are now enabled by default.
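
    Because these gates are now on by default, no configuration is needed to use them. For reference, if you ever needed to toggle them explicitly, Kubernetes feature gates are expressed as a comma-separated flag of this generic form (this is a standard kubelet/apiserver flag illustration, not an MKE-specific option):

```
--feature-gates=VolumeSnapshotDataSource=true,ExpandCSIVolumes=true,CSIMigration=true,VolumeSubpathEnvExpansion=true
```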

Known issues

  • Kubernetes Ingress cannot be deployed on a cluster after MKE is upgraded from 3.2.6 to 3.3.0. A fresh install of 3.3.0 is not impacted. This issue is internally tracked as FIELD-2602.

    • To reproduce this issue, upgrade an MKE 3.2.6 cluster to 3.3.0. Next, log in to MKE in a browser window, navigate to Admin Settings, and select the Ingress tab from the left navigation. In the Ingress tab, set the following configuration options:



      • HTTP ingress for Kubernetes

      • HTTP port

      • HTTPS port

      • TCP port

      When you click Save, the system fails to respond.

    • You can mitigate the problem by manually editing the MKE Configuration file.

      1. Download the ucp-config.toml file from MKE as described in MKE Configuration File.

      2. Find the [cluster_config.service_mesh] heading and set it to true.

      3. Upload the modified ucp-config.toml file as described in MKE Configuration File.

      4. Navigate to MKE → Admin Settings → Ingress, adjust the parameters, and enable Ingress.

      5. Click Save to enable Istio for the cluster.
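
      As a sketch of the edit in step 2, the relevant portion of ucp-config.toml would look something like the following. The `enabled` key name is an assumption for illustration; confirm the exact key against the file you downloaded in step 1:

```toml
# Sketch of the relevant ucp-config.toml section.
# The "enabled" key name is an assumption -- verify it against
# the file downloaded in step 1.
[cluster_config.service_mesh]
  enabled = true
```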

  • To allow Windows worker nodes to join the cluster, all images must be pre-pulled. Refer to Use Kubernetes on Windows Server nodes.

  • The logs for the ucp-worker-agent-win container may warn that a certificate cannot be written because the file cannot be found. These warnings can be ignored. The container is rescheduled automatically, within 90 seconds, to repair this condition.

  • The MKE web interface can behave inconsistently or unexpectedly. To work around this, use the command-line tools when necessary.

  • You cannot configure VXLAN MTU and port on Windows Server. There is currently no workaround.

  • After a vSwitch is created on a Windows node, the connection to the metadata server is lost. This is a known Microsoft issue.

  • When a Kubernetes deployment is scaled up so that the number of pods on a Windows node increases (for example, kubectl scale --replicas=30 deployment.apps/win-nano), nodes sometimes become “not ready”. Related upstream issue: https://github.com/kubernetes/kubernetes/issues/88153. The issue occurs more frequently on node flavors with fewer vCPUs and with larger scaling steps. To reduce the probability of hitting it, use node flavors with more vCPUs (8 or more) and scale pods in smaller steps.

  • You may see a ‘Failed to compute desired number of replicas based on listed metrics’ error in the Istio logs. This error can be ignored.

  • When you reduce the number of gateways using the MKE web interface, the change does not take effect. The workaround is to toggle Istio off and back on, which lowers the number of gateway replicas. Increasing replicas behaves correctly and needs no workaround. Note also that CRDs from a pre-existing Istio installation will be overwritten; to avoid this, do not enable Istio Ingress on clusters where Istio is already installed.

  • Resource types (instance, rule, and handler) cannot be created through the UI. The workaround is to perform the operation via kubectl.
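
    As a sketch of the kubectl workaround, a Mixer resource can be applied from a manifest. The resource name, namespace, and params below are hypothetical; in Istio versions that include Mixer, these kinds live in the config.istio.io/v1alpha2 API group:

```yaml
# Hypothetical Mixer "instance" resource, applied with:
#   kubectl apply -f instance.yaml
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcount        # hypothetical name
  namespace: istio-system   # typical Istio namespace; verify for your cluster
spec:
  compiledTemplate: metric
  params:
    value: "1"              # count each request once
```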

  • The Kubernetes cloud provider for AWS is not enabled for Windows Server nodes. If MKE is installed with the command mirantis/ucp install --cloud-provider aws, the AWS cloud provider is enabled for Linux nodes only. This issue does not affect Windows or Linux nodes on Azure (mirantis/ucp install --cloud-provider azure).

  • On Azure, the MKE installer may fail on the last step. Wait a minute and check the MKE UI to verify all nodes are healthy; the installer will continue when the nodes are ready.