Introduction

Warning

In accordance with the end of life (EOL) date for MKE 3.4.x, Mirantis stopped maintaining this documentation version on 2023-04-11. The latest MKE product documentation is available on the Mirantis documentation portal.

This documentation describes how to deploy and operate Mirantis Kubernetes Engine (MKE). It is intended to help operators understand the core concepts of the product and provides the information needed to deploy and operate the solution.

The information in this documentation set is continually improved and amended based on feedback and requests from MKE users.

Product Overview

Warning

In accordance with the end of life (EOL) date for MKE 3.4.x, Mirantis stopped maintaining this documentation version on 2023-04-11. The latest MKE product documentation is available on the Mirantis documentation portal.

Mirantis Kubernetes Engine (MKE, formerly Universal Control Plane or UCP) is the industry-leading container orchestration platform for developing and running modern applications at scale, on private clouds, public clouds, and on bare metal.

MKE delivers immediate value to your business by allowing you to adopt modern application development and delivery models that are cloud-first and cloud-ready. With MKE you get a centralized place with a graphical UI to manage and monitor your Kubernetes and/or Swarm cluster instance.

Your business benefits from using MKE as a container orchestration platform, especially in the following use cases:

More than one container orchestrator

Whether your application requirements are complex, calling for medium to large clusters, or simple enough to be deployed quickly in a development environment, MKE gives you a choice of container orchestrators. Deploy Kubernetes, Swarm, or both types of clusters and manage them on a single MKE instance, or centrally manage your instance using Mirantis Container Cloud.

Robust and scalable applications deployment

Monolithic applications are giving way to microservices, the modern way to deploy applications at scale. Delivering applications through an automated CI/CD pipeline can dramatically improve time-to-market and service agility. Adopting microservices becomes much easier when you use Kubernetes and/or Swarm clusters to deploy and test microservice-based applications.

Multi-tenant software offerings

Containerizing existing monolithic SaaS applications enables quicker development cycles and automated continuous integration and deployment. Such applications, however, must allow multiple users to share a single instance of the software. MKE can operate multi-tenant environments, isolate teams and organizations, separate cluster resources, and so on.

See also

Kubernetes

Reference Architecture

Warning

In accordance with the end of life (EOL) date for MKE 3.4.x, Mirantis stopped maintaining this documentation version on 2023-04-11. The latest MKE product documentation is available on the Mirantis documentation portal.

The MKE Reference Architecture provides a technical overview of Mirantis Kubernetes Engine (MKE). It is your source for the product hardware and software specifications, standards, component information, and configuration detail.

Introduction to MKE

Mirantis Kubernetes Engine (MKE) allows you to adopt modern application development and delivery models that are cloud-first and cloud-ready. With MKE you get a centralized place with a graphical UI to manage and monitor your Kubernetes and/or Swarm cluster instance.

The core MKE components are:

  • ucp-cluster-agent

    Reconciles the cluster-wide state, including Kubernetes add-ons such as Kubecompose and KubeDNS, manages the replication configuration of the etcd and RethinkDB clusters, and syncs the node inventories of SwarmKit and Swarm Classic. This component is a single-replica service that runs on any manager node in the cluster.

  • ucp-manager-agent

    Reconciles the node-local state on manager nodes, including the configuration of the local Docker daemon, local data volumes, certificates, and local container components. Each manager node in the cluster runs a task from this service.

  • ucp-worker-agent

    Performs the same reconciliation operations as ucp-manager-agent but on worker nodes. This component runs a task on each worker node.

The following MKE component names differ based on the node’s operating system:

Component name on Linux              Component name on Windows
ucp-worker-agent                     ucp-worker-agent-win
ucp-containerd-shim-process          ucp-containerd-shim-process-win
ucp-dsinfo                           ucp-dsinfo-win
No equivalent                        ucp-kube-binaries-win
ucp-pause                            ucp-pause-win

MKE hardware requirements

Take careful note of the minimum and recommended hardware requirements for MKE manager and worker nodes prior to deployment.

Note

  • High availability (HA) installations require transferring files between hosts.

  • On manager nodes, MKE only supports the workloads it requires to run.

  • Windows container images are typically larger than Linux container images. As such, provision more local storage for Windows nodes and for any MSR repositories that store Windows container images.

Minimum and recommended hardware requirements

Minimum hardware requirements

Manager nodes:

  • 16 GB of RAM

  • 2 vCPUs

  • 79 GB available storage:

    • 79 GB available storage for the /var partition, unpartitioned

    OR

    • 79 GB available storage, partitioned as follows:

      • 25 GB for a single /var/ partition

      • 25 GB for /var/lib/kubelet/ (for installations and future upgrades)

      • 25 GB for /var/lib/docker/

      • 4 GB for /var/lib/containerd/

Worker nodes:

  • 4 GB RAM

  • 15 GB storage for the /var/ partition

Recommended hardware requirements

Manager nodes:

  • 24 - 32 GB RAM

  • 4 vCPUs

  • At least 79 GB available storage, partitioned as follows:

    • 25 GB for a single /var/ partition

    • 25 GB for /var/lib/kubelet/ (for installations and future upgrades)

    • 25 GB for /var/lib/docker/

    • 4 GB for /var/lib/containerd/

Worker nodes:

  Recommendations vary depending on the workloads.

MKE software requirements

Prior to MKE deployment, consider the following software requirements:

  • Run the same MCR version (20.10.0 or later) on all nodes.

  • Run Linux kernel 3.10 or higher on all nodes.

    For debugging purposes, the host OS kernel versions should match as closely as possible.

  • Use a static IP address for each node in the cluster.

Manager nodes

Manager nodes manage a swarm and persist the swarm state. Using several containers per node, the ucp-manager-agent automatically deploys all MKE components on manager nodes, including the MKE web UI and the data stores that MKE uses.

Note

Some Kubernetes components are run as Swarm services because the MKE control plane is itself a Docker Swarm cluster.

The following tables detail the MKE services that run on manager nodes:

Swarm services

MKE component

Description

ucp-auth-api

The centralized service for identity and authentication used by MKE and MSR.

ucp-auth-store

A container that stores authentication configurations and data for users, organizations, and teams.

ucp-auth-worker

A container that performs scheduled LDAP synchronizations and cleans authentication and authorization data.

ucp-client-root-ca

A certificate authority to sign client bundles.

ucp-cluster-agent

The agent that monitors the cluster-wide MKE components. Runs on only one manager node.

ucp-cluster-root-ca

A certificate authority used for TLS communication between MKE components.

ucp-controller

The MKE web server.

ucp-hardware-info

A container for collecting disk/hardware information about the host.

ucp-interlock

A container that monitors Swarm workloads configured to use layer 7 routing. Only runs when you enable layer 7 routing.

ucp-interlock-config

A service that manages Interlock configuration.

ucp-interlock-extension

A service that verifies the run status of the Interlock extension.

ucp-interlock-proxy

A service that provides load balancing and proxying for Swarm workloads. Runs only when layer 7 routing is enabled.

ucp-kube-apiserver

A master component that serves the Kubernetes API. It persists its state in etcd directly, and all other components communicate directly with the API server. The Kubernetes API server is configured to encrypt Secrets using AES-CBC with a 256-bit key. The encryption key is never rotated, and the encryption key is stored on manager nodes, in a file on disk.

ucp-kube-controller-manager

A master component that manages the desired state of controllers and other Kubernetes objects. It monitors the API server and performs background tasks when needed.

ucp-kubelet

The Kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage.

ucp-kube-proxy

The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods by way of cluster IP addresses.

ucp-kube-scheduler

A master component that manages Pod scheduling, which communicates with the API server only to obtain workloads that need to be scheduled.

ucp-kv

A container used to store the MKE configurations. Do not use it in your applications, as it is for internal use only. Also used by Kubernetes components.

ucp-manager-agent

The agent that monitors the manager node and ensures that the right MKE services are running.

ucp-proxy

A TLS proxy that allows secure access from the local Mirantis Container Runtime to MKE components.

ucp-swarm-manager

A container used to provide backward compatibility with Docker Swarm.

Kubernetes components

MKE component

Description

k8s_calico-kube-controllers

A cluster-scoped Kubernetes controller that coordinates Calico networking. Runs on one manager node only.

k8s_calico-node

The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the calico-node DaemonSet. Runs on all nodes.

Configure the container network interface (CNI) plugin using the --cni-installer-url flag. If this flag is not set, MKE uses Calico as the default CNI plugin.

k8s_discovery_istio-pilot

An Istio ingress component for Kubernetes layer 7 routing. Runs on manager nodes when Kubernetes Ingress is enabled.

k8s_enable-strictaffinity

An init container for Calico controller that sets the StrictAffinity in Calico networking according to the configured boolean value.

k8s_firewalld-policy_calico-node

An init container for calico-node that verifies whether systems with firewalld are compatible with Calico.

k8s_ingress-sds_istio-ingressgateway

An Istio ingress component for Kubernetes layer 7 routing. Runs on manager nodes when Kubernetes Ingress is enabled.

k8s_install-cni_calico-node

A container in which the Calico CNI plugin binaries are installed and configured on each host. Part of the calico-node DaemonSet. Runs on all nodes.

k8s_istio-proxy_istio-ingressgateway

An Istio ingress component for Kubernetes layer 7 routing. Runs on manager nodes when Kubernetes Ingress is enabled.

k8s_POD_istio-ingressgateway

An Istio ingress component for Kubernetes layer 7 routing. Runs on manager nodes when Kubernetes Ingress is enabled.

k8s_POD_istio-pilot

An Istio ingress component for Kubernetes layer 7 routing. Runs on manager nodes when Kubernetes Ingress is enabled.

k8s_ucp-coredns_coredns

The CoreDNS plugin, which provides service discovery for Kubernetes services and Pods.

k8s_ucp-kube-compose

A custom Kubernetes resource component that translates Compose files into Kubernetes constructs. Part of the Compose deployment. Runs on one manager node only.

k8s_ucp-kube-compose-api

The API server for Kube Compose, which is part of the compose deployment. Runs on one manager node only.

k8s_ucp-metrics-inventory

A container that generates the inventory targets for Prometheus server. Part of the Kubernetes Prometheus Metrics plugin.

k8s_ucp-metrics-prometheus

A container used to collect and process metrics for a node. Part of the Kubernetes Prometheus Metrics plugin.

k8s_ucp-metrics-proxy

A container that runs a proxy for the metrics server. Part of the Kubernetes Prometheus Metrics plugin.

k8s_ucp-node-feature-discovery-master

A container that provides node feature discovery labels for Kubernetes nodes.

k8s_ucp-node-feature-discovery-worker

A container that provides node feature discovery labels for Kubernetes nodes.

Kubernetes pause containers

MKE component

Description

k8s_POD_calico-node

The pause container for the calico-node Pod.

k8s_POD_calico-kube-controllers

The pause container for the calico-kube-controllers Pod.

k8s_POD_compose

The pause container for the compose Pod.

k8s_POD_compose-api

The pause container for ucp-kube-compose-api.

k8s_POD_coredns

The pause container for the ucp-coredns Pod.

k8s_POD_ucp-metrics

The pause container for the ucp-metrics Pod.

k8s_POD_ucp-node-feature-discovery

The pause container for the node feature discovery labels on Kubernetes nodes.

Worker nodes

Worker nodes are instances of MCR that participate in a swarm for the purpose of executing containers. Such nodes receive and execute tasks dispatched from manager nodes. Because worker nodes do not participate in the Raft distributed state, perform scheduling, or serve the swarm mode HTTP API, every cluster must also contain at least one manager node.

Note

Some Kubernetes components are run as Swarm services because the MKE control plane is itself a Docker Swarm cluster.

The following tables detail the MKE services that run on worker nodes.

Swarm services

MKE component

Description

ucp-hardware-info

A container for collecting host information regarding disks and hardware.

ucp-interlock-config

A service that manages Interlock configuration.

ucp-interlock-extension

A helper service that reconfigures the ucp-interlock-proxy service, based on the Swarm workloads that are running.

ucp-interlock-proxy

A service that provides load balancing and proxying for swarm workloads. Only runs when you enable layer 7 routing.

ucp-kube-proxy

The networking proxy running on every node, which enables Pods to contact Kubernetes services and other Pods through cluster IP addresses. Named ucp-kube-proxy-win in Windows systems.

ucp-kubelet

The Kubernetes node agent running on every node, which is responsible for running Kubernetes Pods, reporting the health of the node, and monitoring resource usage. Named ucp-kubelet-win in Windows systems.

ucp-pod-cleaner-win

A service that removes all the Kubernetes Pods that remain after Kubernetes components are removed from Windows nodes. Runs only on Windows nodes.

ucp-proxy

A TLS proxy that allows secure access from the local Mirantis Container Runtime to MKE components.

ucp-tigera-node-win

The Calico node agent that coordinates networking fabric for Windows nodes according to the cluster-wide Calico configuration. Runs on Windows nodes when Kubernetes is set as the orchestrator.

ucp-tigera-felix-win

A Calico component that runs on every machine that provides endpoints. Runs on Windows nodes when Kubernetes is set as the orchestrator.

ucp-worker-agent-x and ucp-worker-agent-y

A service that monitors the worker node and ensures that the correct MKE services are running. The ucp-worker-agent service ensures that only authorized users and other MKE services can run Docker commands on the node. The ucp-worker-agent-<x/y> deploys a set of containers onto worker nodes, which is a subset of the containers that ucp-manager-agent deploys onto manager nodes. This component is named ucp-worker-agent-win-<x/y> on Windows nodes.

Kubernetes components

MKE component

Description

k8s_calico-node

The Calico node agent that coordinates networking fabric according to the cluster-wide Calico configuration. Part of the calico-node DaemonSet. Runs on all nodes.

k8s_firewalld-policy_calico-node

An init container for calico-node that verifies whether systems with firewalld are compatible with Calico.

k8s_install-cni_calico-node

A container that installs the Calico CNI plugin binaries and configuration on each host. Part of the calico-node DaemonSet. Runs on all nodes.

k8s_ucp-node-feature-discovery-master

A container that provides node feature discovery labels for Kubernetes nodes.

k8s_ucp-node-feature-discovery-worker

A container that provides node feature discovery labels for Kubernetes nodes.

Kubernetes pause containers

MKE component

Description

k8s_POD_calico-node

The pause container for the Calico-node Pod. This container is hidden by default, but you can see it by running the following command:

docker ps -a

k8s_POD_ucp-node-feature-discovery

The pause container for the node feature discovery labels on Kubernetes nodes.

Admission controllers

Admission controllers are plugins that govern and enforce cluster usage. There are two types of admission controllers: default and custom. The tables below list the available admission controllers. For more information, see Kubernetes documentation: Using Admission Controllers.

Note

You cannot enable or disable custom admission controllers.


Default admission controllers

Name

Description

DefaultStorageClass

Adds a default storage class to PersistentVolumeClaim objects that do not request a specific storage class.

DefaultTolerationSeconds

Sets the pod default forgiveness toleration to tolerate the notready:NoExecute and unreachable:NoExecute taints based on the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds Kubernetes API server input parameters if they do not already have toleration for the node.kubernetes.io/not-ready:NoExecute or node.kubernetes.io/unreachable:NoExecute taints. The default value for both input parameters is five minutes.

LimitRanger

Ensures that incoming requests do not violate the constraints in a namespace LimitRange object.

MutatingAdmissionWebhook

Calls any mutating webhooks that match the request.

NamespaceLifecycle

Ensures that users cannot create new objects in namespaces undergoing termination and that MKE rejects requests in nonexistent namespaces. It also prevents users from deleting the reserved default, kube-system, and kube-public namespaces.

NodeRestriction

Limits the Node and Pod objects that a kubelet can modify.

PersistentVolumeLabel (deprecated)

Attaches region or zone labels automatically to PersistentVolumes as defined by the cloud provider.

PodNodeSelector

Limits which node selectors can be used within a namespace by reading a namespace annotation and a global configuration.

PodSecurityPolicy

Determines whether a new or modified pod should be admitted based on the requested security context and the available Pod Security Policies.

ResourceQuota

Observes incoming requests and ensures they do not violate any of the constraints in a namespace ResourceQuota object.

ServiceAccount

Implements automation for ServiceAccount resources.

ValidatingAdmissionWebhook

Calls any validating webhooks that match the request.


Custom admission controllers

Name

Description

UCPAuthorization

  • Annotates Docker Compose-on-Kubernetes Stack resources with the identity of the user performing the request so that the Docker Compose-on-Kubernetes resource controller can manage Stacks with correct user authorization.

  • Detects the deleted ServiceAccount resources to correctly remove them from the scheduling authorization back end of an MKE node.

  • Simplifies creation of the RoleBindings and ClusterRoleBindings resources by automatically converting user, organization, and team Subject names into their corresponding unique identifiers.

  • Prevents users from deleting the built-in cluster-admin ClusterRole and ClusterRoleBinding resources.

  • Prevents under-privileged users from creating or updating PersistentVolume resources with host paths.

  • Works in conjunction with the built-in PodSecurityPolicies admission controller to prevent under-privileged users from creating Pods with privileged options. To grant non-administrators and non-cluster-admins access to privileged attributes, refer to Use admission controllers for access in the MKE Operations Guide.

CheckImageSigning

Enforces the MKE Docker Content Trust policy which, if enabled, requires that all pods use container images that have been digitally signed by trusted and authorized users, who are members of one or more teams in MKE.

UCPNodeSelector

Adds a com.docker.ucp.orchestrator.kubernetes:* toleration to pods in the kube-system namespace and removes the com.docker.ucp.orchestrator.kubernetes tolerations from pods in other namespaces. This ensures that user workloads do not run on swarm-only nodes, which MKE taints with com.docker.ucp.orchestrator.kubernetes:NoExecute. It also adds a node affinity to prevent pods from running on manager nodes depending on MKE settings.

Pause containers

Every Kubernetes Pod includes an empty pause container, which bootstraps the Pod to establish all of the cgroups, reservations, and namespaces before its individual containers are created. The pause container image is always present, so the pod resource allocation happens instantaneously as containers are created.


To display pause containers:

When using the client bundle, pause containers are hidden by default.

  • To display pause containers when using the client bundle:

    docker ps -a | grep -i pause
    
  • To display pause containers when not using the client bundle:

    1. Log in to a manager or worker node.

    2. Display pause containers:

      docker ps | grep -i pause
      

    Example output on a manager node:

    6d55b40f80ff   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_ucp-node-feature-discovery-m8552_node-feature-discovery_bd2fd91b-ffb8-426d-b006-0edde0071f44_0
    7f914f75b898   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_compose-5bd794b958-qgnk8_kube-system_c31fcbe7-29f1-4e7f-9a57-2ca34fb7b42f_0
    4107244c6f8d   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_compose-api-64d65cc8c8-dgcf9_kube-system_2bb25d91-54bc-4129-869a-5b96a02b2e9e_0
    2afd5515b535   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_ucp-metrics-2t4tl_kube-system_1c9d6e11-4bc4-41e5-858e-59dc8fb91ef1_0
    14baabbb85c7   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_coredns-5968d48-hhnwp_kube-system_edda9821-077f-4223-a52f-d72a370506b3_0
    00f73ccd7266   mirantis/ucp-pause:3.4.9   "/pause"   4 minutes ago   Up 4 minutes   k8s_POD_coredns-5968d48-j224d_kube-system_3fc289e0-efca-4ce1-99d1-e70b2c8ae994_0
    8c2c2d5540b3   mirantis/ucp-pause:3.4.9   "/pause"   5 minutes ago   Up 5 minutes   k8s_POD_calico-kube-controllers-7789bcd9cc-jb8kp_kube-system_8ca7f7ef-4500-4164-931b-ff02785f6484_0
    0682011872d1   mirantis/ucp-pause:3.4.9   "/pause"   5 minutes ago   Up 5 minutes   k8s_POD_calico-node-l4cl7_kube-system_02ed0562-f3c5-4724-815a-788ad38ff3c4_0
    

    Example output on a worker node:

    d94e211ae1fe   mirantis/ucp-pause:3.4.9   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_ucp-node-feature-discovery-58mtc_node-feature-discovery_472be762-14e6-47ba-80b5-c86dce79fbab_2
    80282e0baa43   mirantis/ucp-pause:3.4.9   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_calico-node-wxk5l_kube-system_030fc501-ec75-4675-a742-d19929818065_0
    

See also

Kubernetes Pods

Volumes

MKE uses named volumes to persist data on all nodes on which it runs.

Volumes used by MKE manager nodes

Volume name

Contents

ucp-auth-api-certs

Certificate and keys for the authentication and authorization service.

ucp-auth-store-certs

Certificate and keys for the authentication and authorization store.

ucp-auth-store-data

Data of the authentication and authorization store, replicated across managers.

ucp-auth-worker-certs

Certificate and keys for authentication worker.

ucp-auth-worker-data

Data of the authentication worker.

ucp-client-root-ca

Root key material for the MKE root CA that issues client certificates.

ucp-cluster-root-ca

Root key material for the MKE root CA that issues certificates for swarm members.

ucp-controller-client-certs

Certificate and keys that the MKE web server uses to communicate with other MKE components.

ucp-controller-server-certs

Certificate and keys for the MKE web server running in the node.

ucp-kv

MKE configuration data, replicated across managers.

ucp-kv-certs

Certificates and keys for the key-value store.

ucp-metrics-data

Monitoring data that MKE gathers.

ucp-metrics-inventory

Configuration file that the ucp-metrics service uses.

ucp-node-certs

Certificate and keys for node communication.

ucp-backup

Backup artifacts that are created while processing a backup. The artifacts persist on the volume for the duration of the backup and are cleaned up when the backup completes, though the volume itself remains.

mke-containers

Symlinks to MKE component log files, created by ucp-agent.

Volumes used by MKE worker nodes

Volume name

Contents

ucp-node-certs

Certificate and keys for node communication.

mke-containers

Symlinks to MKE component log files, created by ucp-agent.

You can customize the volume driver for the volumes by creating the volumes prior to installing MKE. During installation, MKE determines which volumes do not yet exist on the node and creates those volumes using the default volume driver.

By default, MKE stores the data for these volumes at /var/lib/docker/volumes/<volume-name>/_data.

Configuration

The table below presents the configuration files in use by MKE:

Configuration files in use by MKE

Configuration file name

Description

com.docker.interlock.extension

Configuration of the Interlock extension service that monitors and configures the proxy service

com.docker.interlock.proxy

Configuration of the service that handles and routes user requests

com.docker.license

MKE license

com.docker.ucp.interlock.conf

Configuration of the core Interlock service

Web UI and CLI

You can interact with MKE either through the web UI or the CLI.

With the MKE web UI you can manage your swarm, grant and revoke user permissions, deploy, configure, manage, and monitor your applications.

In addition, MKE exposes the standard Docker API, so you can continue using such existing tools as the Docker CLI client. As MKE secures your cluster with RBAC, you must configure your Docker CLI client and other client tools to authenticate your requests using client certificates that you can download from your MKE profile page.
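For example, on a Linux workstation you can point the Docker CLI at MKE with a downloaded client bundle. The following is a minimal sketch: the bundle file name is illustrative, and the bundle is assumed to contain the env.sh helper script.

unzip ucp-bundle-admin.zip -d mke-bundle
cd mke-bundle
eval "$(<env.sh)"   # exports DOCKER_HOST and the TLS settings that point the CLI at MKE
docker version      # verify that requests now authenticate with the bundled client certificates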

Role-based access control

MKE allows administrators to authorize users to view, edit, and use cluster resources by granting role-based permissions for specific resource sets.

To authorize access to cluster resources across your organization, high-level actions that MKE administrators can take include the following:

  • Add and configure subjects (users, teams, organizations, and service accounts).

  • Define custom roles (or use defaults) by adding permitted operations per resource type.

  • Group cluster resources into resource sets of Swarm collections or Kubernetes namespaces.

  • Create grants by combining subject, role, and resource set.

Note

Only administrators can manage Role-based access control (RBAC).

The following table describes the core elements used in RBAC:

Element

Description

Subjects

Subjects are granted roles that define the permitted operations for one or more resource sets and include:

User

A person authenticated by the authentication back end. Users can belong to more than one team and more than one organization.

Team

A group of users that share permissions defined at the team level. A team can be in only one organization.

Organization

A group of teams that share a specific set of permissions, defined by the roles of the organization.

Service account

A Kubernetes object that enables a workload to access cluster resources assigned to a namespace.

Roles

Roles define what operations can be done by whom. A role is a set of permitted operations for a type of resource, such as a container or volume. It is assigned to a user or a team with a grant.

For example, the built-in Restricted Control role includes permissions to view and schedule but not to update nodes. Whereas a custom role may include permissions to read, write, and execute (r-w-x) volumes and secrets.

Most organizations use multiple roles to fine-tune the appropriate access for different subjects. Users and teams may have different roles for the different resources they access.

Resource sets

Users can group resources into two types of resource sets to control user access: Docker Swarm collections and Kubernetes namespaces.

Docker Swarm collections

Collections have a directory-like structure that holds Swarm resources. You can create collections in MKE by defining a directory path and moving resources into it. Alternatively, you can use labels in your YAML file to assign application resources to the path. Resource types that users can access in Swarm collections include containers, networks, nodes, services, secrets, and volumes.
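For example, a Compose file can assign a service to a collection path with the com.docker.ucp.access.label label. The path and service below are illustrative; verify the label usage against your MKE version:

version: "3.7"
services:
  web:
    image: nginx:alpine
    deploy:
      labels:
        com.docker.ucp.access.label: /Shared/ops   # places the service in the /Shared/ops collection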

Each Swarm resource can be in only one collection at a time, but collections can be nested inside one another to a maximum depth of two layers. Collection permission includes permission for child collections.

For child collections and users belonging to more than one team, the system concatenates permissions from multiple roles into an effective role for the user, which specifies the operations that are allowed for the target.

Kubernetes namespaces

Namespaces are virtual clusters that allow multiple teams to access a given cluster with different permissions. Kubernetes automatically sets up four namespaces, and users can add more as necessary, though unlike Swarm collections they cannot be nested. Resource types that users can access in Kubernetes namespaces include pods, deployments, network policies, nodes, services, secrets, and more.

Note

MKE uses two default security policies: privileged and unprivileged. To prevent users from bypassing the MKE security model, only administrators and service accounts granted the cluster-admin ClusterRole for all Kubernetes namespaces through a ClusterRoleBinding can deploy pods with privileged options. Refer to Default Pod security policies in MKE for more information.

Grants

Grants consist of a subject, role, and resource set, and define how specific users can access specific resources. All the grants of an organization taken together constitute an access control list (ACL), which is a comprehensive access policy for the organization.

For complete information on how to configure and use role-based access control in MKE, refer to Authorize role-based access.

MKE limitations

See also

Kubernetes

Installation Guide

Warning

In accordance with the end of life (EOL) date for MKE 3.4.x, Mirantis stopped maintaining this documentation version on 2023-04-11. The latest MKE product documentation is available on the Mirantis documentation portal.

The MKE Installation Guide provides everything you need to install and configure Mirantis Kubernetes Engine (MKE). The guide offers detailed information, procedures, and examples that are specifically designed to help DevOps engineers and administrators install and configure the MKE container orchestration platform.

Plan the deployment

Default install directories

The following table details the default MKE install directories:

Path

Description

/var/lib/docker

Docker data root directory

/var/lib/kubelet

kubelet data root directory (created with ftype = 1)

/var/lib/containerd

containerd data root directory (created with ftype = 1)

Host name strategy

Before installing MKE, plan a single host name strategy to use consistently throughout the cluster, keeping in mind that MKE and MCR both use host names.

There are two general strategies for creating host names: short host names and fully qualified domain names (FQDN). Consider the following examples:

  • Short host name: engine01

  • Fully qualified domain name: node01.company.example.com

MCR considerations

A number of MCR considerations must be taken into account when deploying any MKE cluster.

default-address-pools

MCR uses three separate IP ranges for the docker0, docker_gwbridge, and ucp-bridge interfaces. By default, MCR assigns the first available subnet in default-address-pools (172.17.0.0/16) to docker0, the second (172.18.0.0/16) to docker_gwbridge, and the third (172.19.0.0/16) to ucp-bridge.

Note

The ucp-bridge bridge network specifically supports MKE component containers.

You can reassign the docker0, docker_gwbridge, and ucp-bridge subnets in default-address-pools. To do so, replace the relevant values in default-address-pools in the /etc/docker/daemon.json file, making sure that the setting includes at least three IP pools. Be aware that you must restart the docker.service to activate your daemon.json file edits.
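For example, after editing /etc/docker/daemon.json you would typically restart MCR on a systemd-based host as follows:

sudo systemctl restart docker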

By default, default-address-pools contains the following values:

{
  "default-address-pools": [
   {"base":"172.17.0.0/16","size":16}, <-- docker0
   {"base":"172.18.0.0/16","size":16}, <-- docker_gwbridge
   {"base":"172.19.0.0/16","size":16}, <-- ucp-bridge
   {"base":"172.20.0.0/16","size":16},
   {"base":"172.21.0.0/16","size":16},
   {"base":"172.22.0.0/16","size":16},
   {"base":"172.23.0.0/16","size":16},
   {"base":"172.24.0.0/16","size":16},
   {"base":"172.25.0.0/16","size":16},
   {"base":"172.26.0.0/16","size":16},
   {"base":"172.27.0.0/16","size":16},
   {"base":"172.28.0.0/16","size":16},
   {"base":"172.29.0.0/16","size":16},
   {"base":"172.30.0.0/16","size":16},
   {"base":"192.168.0.0/16","size":20}
   ]
 }
The default-address-pools parameters

Parameter

Description

default-address-pools

The list of CIDR ranges used to allocate subnets for local bridge networks.

base

The CIDR range allocated for bridge networks in each IP address pool.

size

The CIDR netmask that determines the subnet size to allocate from the base pool. If the size matches the netmask of the base, then the pool contains one subnet. For example, {"base":"172.17.0.0/16","size":16} creates the subnet: 172.17.0.0/16 (172.17.0.1 - 172.17.255.255).

For example, {"base":"192.168.0.0/16","size":20} allocates /20 subnets from 192.168.0.0/16, including the following subnets for bridge networks:

192.168.0.0/20 (192.168.0.1 - 192.168.15.255)

192.168.16.0/20 (192.168.16.1 - 192.168.31.255)

192.168.32.0/20 (192.168.32.1 - 192.168.47.255)

192.168.48.0/20 (192.168.48.1 - 192.168.63.255)

192.168.64.0/20 (192.168.64.1 - 192.168.79.255)

192.168.240.0/20 (192.168.240.1 - 192.168.255.255)

docker0

MCR creates and configures the host system with the docker0 virtual network interface, an ethernet bridge through which all traffic between MCR and the container moves. MCR uses docker0 to handle all container routing. You can specify an alternative network interface when you start the container.

MCR allocates IP addresses from the docker0 configurable IP range to the containers that connect to docker0. The default IP range, or subnet, for docker0 is 172.17.0.0/16.

You can change the docker0 subnet in /etc/docker/daemon.json using the settings in the following table. Be aware that you must restart the docker.service to activate your daemon.json file edits.

Parameter

Description

default-address-pools

Modify the first pool in default-address-pools.

Caution

By default, MCR assigns the second pool to docker_gwbridge. If you modify the first pool such that the size does not match the base netmask, it can affect docker_gwbridge.

{
   "default-address-pools": [
         {"base":"172.17.0.0/16","size":16}, <-- Modify this value
         {"base":"172.18.0.0/16","size":16},
         {"base":"172.19.0.0/16","size":16},
         {"base":"172.20.0.0/16","size":16},
         {"base":"172.21.0.0/16","size":16},
         {"base":"172.22.0.0/16","size":16},
         {"base":"172.23.0.0/16","size":16},
         {"base":"172.24.0.0/16","size":16},
         {"base":"172.25.0.0/16","size":16},
         {"base":"172.26.0.0/16","size":16},
         {"base":"172.27.0.0/16","size":16},
         {"base":"172.28.0.0/16","size":16},
         {"base":"172.29.0.0/16","size":16},
         {"base":"172.30.0.0/16","size":16},
         {"base":"192.168.0.0/16","size":20}
   ]
}

fixed-cidr

Configures a CIDR range.

Customize the subnet for docker0 using standard CIDR notation. The default subnet is 172.17.0.0/16, the network gateway is 172.17.0.1, and MCR allocates IPs 172.17.0.2 - 172.17.255.254 for your containers.

{
  "fixed-cidr": "172.17.0.0/16"
}

bip

Configures a gateway IP address and CIDR netmask of the docker0 network.

Customize the subnet for docker0 using the <gateway IP>/<CIDR netmask> notation. The default subnet is 172.17.0.0/16, the network gateway is 172.17.0.1, and MCR allocates IPs 172.17.0.2 - 172.17.255.254 for your containers.

{
  "bip": "172.17.0.1/16"
}

docker_gwbridge

The docker_gwbridge is a virtual network interface that connects overlay networks (including ingress) to individual MCR container networks. Initializing a Docker swarm or joining a Docker host to a swarm automatically creates docker_gwbridge in the kernel of the Docker host. The default docker_gwbridge subnet (172.18.0.0/16) is the second available subnet in default-address-pools.

To change the docker_gwbridge subnet, open daemon.json and modify the second pool in default-address-pools:

{
    "default-address-pools": [
       {"base":"172.17.0.0/16","size":16},
       {"base":"172.18.0.0/16","size":16}, <-- Modify this value
       {"base":"172.19.0.0/16","size":16},
       {"base":"172.20.0.0/16","size":16},
       {"base":"172.21.0.0/16","size":16},
       {"base":"172.22.0.0/16","size":16},
       {"base":"172.23.0.0/16","size":16},
       {"base":"172.24.0.0/16","size":16},
       {"base":"172.25.0.0/16","size":16},
       {"base":"172.26.0.0/16","size":16},
       {"base":"172.27.0.0/16","size":16},
       {"base":"172.28.0.0/16","size":16},
       {"base":"172.29.0.0/16","size":16},
       {"base":"172.30.0.0/16","size":16},
       {"base":"192.168.0.0/16","size":20}
   ]
}

Caution

  • Modifying the first pool to customize the docker0 subnet can affect the default docker_gwbridge subnet. Refer to docker0 for more information.

  • You can only customize the docker_gwbridge settings before you join the host to the swarm or after temporarily removing it.

Docker swarm

The default address pool that Docker Swarm uses for its overlay network is 10.0.0.0/8. If this pool conflicts with your current network implementation, you must use a custom IP address pool. Prior to installing MKE, specify your custom address pool using the --default-addr-pool option when initializing swarm.

Note

The Swarm default-addr-pool and MCR default-address-pools settings define two separate IP address ranges used for different purposes.
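For example, the following initializes a swarm with a custom address pool before MKE is installed. The pool and the optional subnet mask length are illustrative values:

docker swarm init --default-addr-pool 10.10.0.0/16 --default-addr-pool-mask-length 24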

Kubernetes

Kubernetes uses two internal IP ranges, either of which can overlap and conflict with the underlying infrastructure, in which case you must specify custom IP ranges.

The pod network

Either Calico or Azure IPAM services gives each Kubernetes pod an IP address in the default 192.168.0.0/16 range. To customize this range, during MKE installation, use the --pod-cidr flag with the ucp install command.

The services network

You can access Kubernetes services with a VIP in the default 10.96.0.0/16 Cluster IP range. To customize this range, during MKE installation, use the --service-cluster-ip-range flag with the ucp install command.
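For example, both ranges can be customized when running the MKE installer. The image tag and CIDR values below are illustrative:

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.4.x install \
  --pod-cidr 10.32.0.0/16 \
  --service-cluster-ip-range 10.96.0.0/16 \
  --interactive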

docker data-root

The Docker data root (data-root in /etc/docker/daemon.json) is the storage path for persisted data such as images, volumes, and cluster state.

MKE clusters require that all nodes have the same docker data-root for the Kubernetes network to function correctly. In addition, if you change the data-root on all nodes, you must recreate the Kubernetes network configuration in MKE by running the following commands:

kubectl -n kube-system delete configmap/calico-config
kubectl -n kube-system delete ds/calico-node deploy/calico-kube-controllers
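For reference, the data root is configured in /etc/docker/daemon.json and must point to the same path on every node. The path below is illustrative; restart docker.service after changing it:

{
  "data-root": "/mnt/docker-data"
}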

See also

Kubernetes

no-new-privileges

MKE currently does not support no-new-privileges: true in the /etc/docker/daemon.json file, as this causes several MKE components to enter a failed state.

Device Mapper storage driver

MCR hosts that run the devicemapper storage driver use the loop-lvm configuration mode by default. This mode uses sparse files to build the thin pool used by image and container snapshots and is designed to work without any additional configuration.

Note

Mirantis recommends that you use direct-lvm mode in production environments in lieu of loop-lvm mode. direct-lvm mode is more efficient in its use of system resources than loop-lvm mode, and you can scale it as necessary.

For information on how to configure direct-lvm mode, refer to the Docker documentation, Use the Device Mapper storage driver.

Memory metrics reporting

To report accurate memory metrics, MCR requires that you enable specific kernel settings that are often disabled on Ubuntu and Debian systems. For detailed instructions on how to do this, refer to the Docker documentation, Your kernel does not support cgroup swap limit capabilities.
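In practice, this typically means enabling memory and swap accounting on the kernel command line of GRUB-based Ubuntu or Debian hosts. The following is a hedged sketch of that change:

# /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

Apply the change and reboot for it to take effect:

sudo update-grub
sudo reboot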

Perform pre-deployment configuration

Configure networking

A well-configured network is essential for the proper functioning of your MKE deployment. Pay particular attention to such key factors as IP address provisioning, port management, and traffic enablement.

IP considerations

Before installing MKE, adopt the following practices when assigning IP addresses:

  • Ensure that your network and nodes support using a static IPv4 address and assign one to every node.

  • Avoid IP range conflicts. The following list presents the recommended addresses you can use to avoid IP range conflicts:

    • MCR default-address-pools (CIDR range for interface and bridge networks): 172.17.0.0/16 - 172.30.0.0/16, 192.168.0.0/16

    • Swarm default-addr-pool (CIDR range for Swarm overlay networks): 10.0.0.0/8

    • Kubernetes pod-cidr (CIDR range for Kubernetes pods): 192.168.0.0/16

    • Kubernetes service-cluster-ip-range (CIDR range for Kubernetes services): 10.96.0.0/16 (minimum: 10.96.0.0/24)

See also

Kubernetes

Open ports to incoming traffic

When installing MKE on a host, you need to open specific ports to incoming traffic. Each port listens for incoming traffic from a particular set of hosts, known as the port scope.

MKE uses the following scopes:

Scope

Description

External

Traffic arrives from outside the cluster through end-user interaction.

Internal

Traffic arrives from other hosts in the same cluster.

Self

Traffic arrives to that port only from processes on the same host.


Open the following ports for incoming traffic on each host type:

Hosts | Port | Scope | Purpose

  • Managers, workers | TCP 179 | Internal | BGP peers, used for Kubernetes networking
  • Managers | TCP 443 (configurable) | External, internal | MKE web UI and API
  • Managers | TCP 2376 (configurable) | Internal | Docker swarm manager, used for backwards compatibility
  • Managers | TCP 2377 (configurable) | Internal | Control communication between swarm nodes
  • Managers, workers | UDP 4789 | Internal | Overlay networking
  • Managers | TCP 6443 (configurable) | External, internal | Kubernetes API server endpoint
  • Managers, workers | TCP 6444 | Self | Kubernetes API reverse proxy
  • Managers, workers | TCP, UDP 7946 | Internal | Gossip-based clustering
  • Managers, workers | TCP 9091 | Self | Felix Prometheus calico-node metrics
  • Managers | TCP 9094 | Self | Felix Prometheus kube-controller metrics
  • Managers, workers | TCP 9099 | Self | Calico health check
  • Managers, workers | TCP 10250 | Internal | Kubelet
  • Managers, workers | TCP 12376 | Internal | TLS authentication proxy that provides access to MCR
  • Managers, workers | TCP 12378 | Self | etcd reverse proxy
  • Managers | TCP 12379 | Internal | etcd Control API
  • Managers | TCP 12380 | Internal | etcd Peer API
  • Managers | TCP 12381 | Internal | MKE cluster certificate authority
  • Managers | TCP 12382 | Internal | MKE client certificate authority
  • Managers | TCP 12383 | Internal | Authentication storage back end
  • Managers | TCP 12384 | Internal | Authentication storage back end for replication across managers
  • Managers | TCP 12385 | Internal | Authentication service API
  • Managers | TCP 12386 | Internal | Authentication worker
  • Managers | TCP 12387 | Internal | Prometheus server (Beta, non-production use only)
  • Managers | TCP 12388 | Internal | Kubernetes API server
  • Managers, workers | TCP 12389 | Self | Hardware Discovery API
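As an illustration, on hosts that use firewalld you might open a subset of these ports as follows. The exact list depends on the node role and your security policy:

sudo firewall-cmd --permanent --add-port=443/tcp --add-port=2377/tcp --add-port=4789/udp --add-port=7946/tcp --add-port=7946/udp --add-port=10250/tcp
sudo firewall-cmd --reload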

Calico networking

Calico is the default networking plugin for MKE. The default Calico encapsulation setting for MKE is VXLAN; however, the plugin also supports IP-in-IP encapsulation. Refer to the Calico documentation on Overlay networking for more information.

Important

NetworkManager can impair the Calico agent routing function. To resolve this issue, you must create a file called /etc/NetworkManager/conf.d/calico.conf with the following content:

[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
Enable ESP traffic

For overlay networks with encryption to function, you must allow IP protocol 50 Encapsulating Security Payload (ESP) traffic.

If you are running RHEL 8.x, Rocky Linux 8.x, or CentOS 8, install kernel module xt_u32:

sudo dnf install kernel-modules-extra
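As an illustration, on hosts managed directly with iptables you can allow ESP (IP protocol 50) traffic with a rule such as the following; adapt it to your firewall tooling:

sudo iptables -A INPUT -p esp -j ACCEPT
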
Avoid firewall conflicts

Avoid firewall conflicts in the following Linux distributions:

Linux distribution

Procedure

SUSE Linux Enterprise Server 12 SP2

Installations have the FW_LO_NOTRACK flag turned on by default in the openSUSE firewall. It speeds up packet processing on the loopback interface but breaks certain firewall setups that redirect outgoing packets via custom rules on the local machine.

To turn off the FW_LO_NOTRACK option:

  1. In /etc/sysconfig/SuSEfirewall2, set FW_LO_NOTRACK="no".

  2. Either restart the firewall or reboot the system.

SUSE Linux Enterprise Server 12 SP3

No change is required, as installations have the FW_LO_NOTRACK flag turned off by default.

Red Hat Enterprise Linux (RHEL) 8

Configure the FirewallBackend option:

  1. Verify that firewalld is running.

  2. In /etc/firewalld/firewalld.conf, set FirewallBackend=iptables (formerly FirewallBackend=nftables).

Alternatively, to allow traffic to enter the default bridge network (docker0), run the following commands:

firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
DNS entry in hosts file

MKE adds the proxy.local DNS entry to the following files at install time:

Linux

/etc/hosts

Windows

c:\Windows\System32\Drivers\etc\hosts


To configure MCR to connect to the Internet using HTTP_PROXY, you must include proxy.local in the NO_PROXY value so that traffic to MKE components bypasses the proxy.
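A minimal sketch of such a proxy configuration on a systemd host, assuming a drop-in file for the Docker service (the proxy address is illustrative):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=proxy.local,localhost,127.0.0.1"

Reload the systemd configuration and restart docker.service to apply the change:

sudo systemctl daemon-reload
sudo systemctl restart docker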

Preconfigure an SLES installation

Before performing SUSE Linux Enterprise Server (SLES) installations, consider the following prerequisite steps:

  • For SLES 15 installations, disable CLOUD_NETCONFIG_MANAGE prior to installing MKE:

    1. Set CLOUD_NETCONFIG_MANAGE="no" in the /etc/sysconfig/network/ifcfg-eth0 network interface configuration file.

    2. Run the service network restart command.

  • By default, SLES disables connection tracking. To allow Kubernetes controllers in Calico to reach the Kubernetes API server, enable connection tracking on the loopback interface for SLES by running the following commands for each node in the cluster:

    sudo mkdir -p /etc/sysconfig/SuSEfirewall2.d/defaults
    echo FW_LO_NOTRACK=no | sudo tee \
    /etc/sysconfig/SuSEfirewall2.d/defaults/99-docker.cfg
    sudo SuSEfirewall2 start
    

Verify the timeout settings

Confirm that MKE components have the time they require to effectively communicate.

Default timeout settings

Raft consensus between manager nodes: 3000 ms (not configurable)
Gossip protocol for overlay networking: 5000 ms (not configurable)
etcd: 500 ms (configurable)
RethinkDB: 10000 ms (not configurable)
Stand-alone cluster: 90000 ms (not configurable)

Network lag of more than two seconds between MKE manager nodes can cause problems in your MKE cluster. For example, such a lag can indicate to MKE components that the other nodes are down, triggering unnecessary leadership elections that cause temporary outages and reduced performance. To resolve the issue, decrease the latency of the MKE node communication network.

Configure time synchronization

Configure all containers in an MKE cluster to regularly synchronize with a Network Time Protocol (NTP) server, to ensure consistency between all containers in the cluster and to circumvent unexpected behavior that can lead to poor performance.

  1. Install NTP on every machine in your cluster:

    Ubuntu or Debian:

    sudo apt-get update && sudo apt-get install ntp ntpdate

    CentOS or RHEL:

    sudo yum install ntp ntpdate
    sudo systemctl start ntpd
    sudo systemctl enable ntpd
    sudo systemctl status ntpd
    sudo ntpdate -u -s 0.centos.pool.ntp.org
    sudo systemctl restart ntpd

    SLES:

    sudo zypper ref && zypper install ntp
    

    In addition to installing NTP, the command sequence starts ntpd, a daemon that periodically syncs the machine clock to a central server.

  2. Sync the machine clocks:

    sudo ntpdate pool.ntp.org
    
  3. Verify that the time of each machine is in sync with the NTP servers:

    sudo ntpq -p
    

    Example output, which illustrates how much the machine clock is out of sync with the NTP servers:

         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
     45.35.50.61     139.78.97.128    2 u   24   64    1   60.391  4623378   0.004
     time-a.timefreq .ACTS.           1 u   23   64    1   51.849  4623377   0.004
     helium.constant 128.59.0.245     2 u   22   64    1   71.946  4623379   0.004
     tock.usshc.com  .GPS.            1 u   21   64    1   59.576  4623379   0.004
     golem.canonical 17.253.34.253    2 u   20   64    1  145.356  4623378   0.004
    

Configure a load balancer

Though MKE does not include a load balancer, you can configure your own to balance user requests across all manager nodes. Before that, decide whether you will add nodes to the load balancer using their IP address or their fully qualified domain name (FQDN), and then use that strategy consistently throughout the cluster. Take note of all IP addresses or FQDNs before you start the installation.

If you plan to deploy both MKE and MSR, your load balancer must be able to differentiate between the two: either by IP address or port number. Because both MKE and MSR use port 443 by default, your options are as follows:

  • Configure your load balancer to expose either MKE or MSR on a port other than 443.

  • Configure your load balancer to listen on port 443 with separate virtual IP addresses for MKE and MSR.

  • Configure separate load balancers for MKE and MSR, both listening on port 443.

If you want to install MKE in a high-availability configuration with a load balancer in front of your MKE controllers, include the appropriate IP address and FQDN for the load balancer VIP. To do so, use one or more --san flags either with the ucp install command or in interactive mode when MKE requests additional SANs.
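For example, the load balancer VIP and FQDN can be supplied as SANs at install time. The image tag and values below are illustrative:

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.4.x install \
  --san mke.company.example.com \
  --san 203.0.113.10 \
  --interactive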

Configure IPVS

MKE supports setting values for all IPVS-related parameters that kube-proxy exposes.

kube-proxy runs on each cluster node, load balancing traffic that is destined for services (through cluster IPs and node ports) to the correct back-end pods. Of the modes in which kube-proxy can run, IPVS (IP Virtual Server) offers the widest choice of load balancing algorithms and superior scalability.

Refer to the Calico documentation, Comparing kube-proxy modes: iptables or IPVS? for detailed information on IPVS.

Caution

You can only enable IPVS for MKE at installation, and it persists throughout the life of the cluster. Thus, you cannot switch to iptables at a later stage or switch over existing MKE clusters to use IPVS proxier.

For full parameter details, refer to the Kubernetes documentation for kube-proxy.

Use the kube-proxy-mode parameter at install time to enable IPVS proxier. The two valid values are iptables (default) and ipvs.

You can specify the following ipvs parameters for kube-proxy:

  • ipvs_exclude_cidrs

  • ipvs_min_sync_period

  • ipvs_scheduler

  • ipvs_strict_arp = false

  • ipvs_sync_period

  • ipvs_tcp_timeout

  • ipvs_tcpfin_timeout

  • ipvs_udp_timeout

To set these values at the time of bootstrap/installation:

  1. Add the required values under [cluster_config] in a TOML file (for example, config.toml), as sketched after this procedure.

  2. Create a config named com.docker.ucp.config from this TOML file:

    docker config create com.docker.ucp.config config.toml
    
  3. Use the --existing-config parameter when installing MKE. You can also change these values post-install using the MKE ucp/config-toml endpoint.
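A minimal config.toml sketch, using illustrative values for a few of the parameters listed above:

[cluster_config]
  ipvs_scheduler = "rr"
  ipvs_sync_period = "30s"
  ipvs_min_sync_period = "10s"
  ipvs_strict_arp = false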

Use an External Certificate Authority

You can customize MKE to use certificates signed by an External Certificate Authority (ECA). When using your own certificates, include a certificate bundle with the following:

  • ca.pem file with the root CA public certificate.

  • cert.pem file with the server certificate and any intermediate CA public certificates. This certificate should also have Subject Alternative Names (SANs) for all addresses used to reach the MKE manager.

  • key.pem file with a server private key.

You can either use separate certificates for every manager node or one certificate for all managers. If you use separate certificates, you must use a common SAN throughout. For example, MKE permits the following on a three-node cluster:

  • node1.company.example.org with the SAN mke.company.org

  • node2.company.example.org with the SAN mke.company.org

  • node3.company.example.org with the SAN mke.company.org

If you use a single certificate for all manager nodes, MKE automatically copies the certificate files both to new manager nodes and to those promoted to a manager role.
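One common approach, sketched below under the assumption that the certificate bundle is staged in the ucp-controller-server-certs volume before installation, is the following; verify the exact volume layout and install option against the MKE install reference for your version:

docker volume create ucp-controller-server-certs
sudo cp ca.pem cert.pem key.pem /var/lib/docker/volumes/ucp-controller-server-certs/_data/
# then run the MKE install command with its external server certificate option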

Customize named volumes

Note

Skip this step if you want to use the default named volumes.

MKE uses named volumes to persist data. If you want to customize the drivers that manage such volumes, create the volumes before installing MKE. During the installation process, the installer will automatically detect the existing volumes and start using them. Otherwise, MKE will create the default named volumes.
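For example, to have specific MKE volumes backed by a non-default driver, create them before running the installer. The driver name below is a placeholder:

docker volume create --driver <your-volume-driver> ucp-kv
docker volume create --driver <your-volume-driver> ucp-node-certs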

Configure kernel parameters

MKE uses the kernel parameters detailed here. The information is presented in tables that are organized by parameter prefix, offering both the default parameter values and the values as they are set following MKE installation.

Note

The MKE parameter values are not set by MKE, but by either MCR or an upstream component.

kernel.<subtree>

Parameter

Values

Description

panic

  • Default: Distribution dependent

  • MKE: 1

Sets the number of seconds the kernel waits to reboot following a panic.

panic_on_oops

  • Default: Distribution dependent

  • MKE: 1

Sets whether the kernel should panic on an oops rather than continuing to attempt operations.

pty.nr

  • Default: Dependent on number of logins. Not user-configurable.

  • MKE: 1

Sets the number of open PTYs.

net.bridge.bridge-nf-<subtree>

Parameter

Values

Description

call-arptables

  • Default: No default

  • MKE: 1

Sets whether arptables rules apply to bridged network traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

call-ip6tables

  • Default: No default

  • MKE: 1

Sets whether ip6tables rules apply to bridged network traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

call-iptables

  • Default: No default

  • MKE: 1

Sets whether iptables rules apply to bridged network traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

filter-pppoe-tagged

  • Default: No default

  • MKE: 0

Sets whether netfilter rules apply to bridged PPPOE network traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

filter-vlan-tagged

  • Default: No default

  • MKE: 0

Sets whether netfilter rules apply to bridged VLAN network traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

pass-vlan-input-dev

  • Default: No default

  • MKE: 0

Sets whether netfilter strips the incoming VLAN interface name from bridged traffic. If the bridge module is not loaded, and thus no bridges are present, this key is not present.

net.fan.<subtree>

Parameter

Values

Description

vxlan

  • Default: No default

  • MKE: 4

Sets the version of the VXLAN module on older kernels; this parameter is not present on kernel version 5.x. If the VXLAN module is not loaded, this key is not present.

net.ipv4.<subtree>

Note

  • The *.vs.* default values persist, changing only when the ipvs kernel module has not been previously loaded. For more information, refer to the Linux kernel documentation.

Parameter

Values

Description

conf.all.accept_redirects

  • Default: 1

  • MKE: 0

Sets whether ICMP redirects are permitted. This key affects all interfaces.

conf.all.forwarding

  • Default: 0

  • MKE: 1

Sets whether network traffic is forwarded. This key affects all interfaces.

conf.all.route_localnet

  • Default: 0

  • MKE: 1

Sets 127/8 for local routing. This key affects all interfaces.

conf.default.forwarding

  • Default: 0

  • MKE: 1

Sets whether network traffic is forwarded. This key affects new interfaces.

conf.lo.forwarding

  • Default: 0

  • MKE: 1

Sets forwarding for localhost traffic.

ip_forward

  • Default: 0

  • MKE: 1

Sets whether traffic forwards between interfaces. For Kubernetes to run, this parameter must be set to 1.

vs.am_droprate

  • Default: 10

  • MKE: 10

Sets the always mode drop rate used in mode 3 of the drop_packet defense.

vs.amemthresh

  • Default: 1024

  • MKE: 1024

Sets the available memory threshold in pages, which is used in the automatic modes of defense. When there is not enough available memory, this enables the strategy and the variable is set to 2. Otherwise, the strategy is disabled and the variable is set to 1.

vs.backup_only

  • Default: 0

  • MKE: 0

Sets whether the director function is disabled while the server is in back-up mode, to avoid packet loops for DR/TUN methods.

vs.cache_bypass

  • Default: 0

  • MKE: 0

Sets whether packets forward directly to the original destination when no cache server is available and the destination address is not local (iph->daddr is RTN_UNICAST). This mostly applies to transparent web cache clusters.

vs.conn_reuse_mode

  • Default: 1

  • MKE: 1

Sets how IPVS handles connections detected on port reuse. It is a bitmap with the following values:

  • 0 disables any special handling on port reuse. The new connection is delivered to the same real server that was servicing the previous connection, effectively disabling expire_nodest_conn.

  • bit 1 enables rescheduling of new connections when it is safe. That is, whenever expire_nodest_conn is set and, for TCP sockets, when the connection is in the TIME_WAIT state (which is only possible if you use NAT mode).

  • bit 2 is bit 1 plus, for TCP connections, rescheduling when connections are in the FIN_WAIT state, as this is the last state seen by the load balancer in Direct Routing mode. This bit helps when adding new real servers to a very busy cluster.

vs.conntrack

  • Default: 0

  • MKE: 0

Sets whether connection-tracking entries are maintained for connections handled by IPVS. Enable if connections handled by IPVS are to be subject to stateful firewall rules. That is, iptables rules that make use of connection tracking. Otherwise, disable this setting to optimize performance. Connections handled by the IPVS FTP application module have connection tracking entries regardless of this setting, which is only available when IPVS is compiled with CONFIG_IP_VS_NFCT enabled.

vs.drop_entry

  • Default: 0

  • MKE: 0

Sets whether entries are randomly dropped in the connection hash table, to collect memory back for new connections. In the current code, the drop_entry procedure can be activated every second; it then randomly scans 1/32 of the whole table and drops entries that are in the SYN-RECV/SYNACK state, which should be effective against syn-flooding attacks.

The valid values of drop_entry are 0 to 3, where 0 indicates that the strategy is always disabled, 1 and 2 indicate automatic modes (when there is not enough available memory, the strategy is enabled and the variable is automatically set to 2, otherwise the strategy is disabled and the variable is set to 1), and 3 indicates that the strategy is always enabled.

vs.drop_packet

  • Default: 0

  • MKE: 0

Sets whether 1/rate of incoming packets are dropped prior to being forwarded to real servers. A rate of 1 drops all incoming packets.

The value definition is the same as that for drop_entry. In automatic mode, the following formula determines the rate: rate = amemthresh / (amemthresh - available_memory) when available memory is less than the available memory threshold. When mode 3 is set, the always mode drop rate is controlled by /proc/sys/net/ipv4/vs/am_droprate.

vs.expire_nodest_conn

  • Default: 0

  • MKE: 0

Sets whether the load balancer silently drops packets when its destination server is not available. This can be useful when the user-space monitoring program deletes the destination server (due to server overload or wrong detection) and later adds the server back, and the connections to the server can continue.

If this feature is enabled, the load balancer terminates the connection immediately whenever a packet arrives and its destination server is not available, after which the client program will be notified that the connection is closed. This is equivalent to the feature that is sometimes required to flush connections when the destination is not available.

vs.ignore_tunneled

  • Default: 0

  • MKE: 0

Sets whether IPVS configures the ipvs_property on all packets of unrecognized protocols. This prevents users from routing tunneled protocols such as IPIP, which is useful in preventing the rescheduling of packets that have been tunneled to the IPVS host (that is, to prevent IPVS routing loops when IPVS is also acting as a real server).

vs.nat_icmp_send

  • Default: 0

  • MKE: 0

Sets whether ICMP error messages (ICMP_DEST_UNREACH) are sent for VS/NAT when the load balancer receives packets from real servers but the connection entries do not exist.

vs.pmtu_disc

  • Default: 0

  • MKE: 0

Sets whether all DF packets that exceed the PMTU are rejected with FRAG_NEEDED, irrespective of the forwarding method. For the TUN method, the flag can be disabled to fragment such packets.

vs.schedule_icmp

  • Default: 0

  • MKE: 0

Sets whether scheduling ICMP packets in IPVS is enabled.

vs.secure_tcp

  • Default: 0

  • MKE: 0

Sets the use of a more complicated TCP state transition table. For VS/NAT, the secure_tcp defense delays entering the TCP ESTABLISHED state until the three-way handshake completes. The value definition is the same as that of drop_entry and drop_packet.

vs.sloppy_sctp

  • Default: 0

  • MKE: 0

Sets whether IPVS is permitted to create a connection state on any packet, rather than an SCTP INIT only.

vs.sloppy_tcp

  • Default: 0

  • MKE: 0

Sets whether IPVS is permitted to create a connection state on any packet, rather than a TCP SYN only.

vs.snat_reroute

  • Default: 0

  • MKE: 1

Sets whether the route of SNATed packets is recalculated from real servers as if they originate from the director. If disabled, SNATed packets are routed as if they have been forwarded by the director.

If policy routing is in effect, a packet originating from the director may be routed differently from a packet being forwarded by the director.

If policy routing is not in effect, then the recalculated route will always be the same as the original route. It is an optimization to disable snat_reroute and avoid the recalculation.

vs.sync_persist_mode

  • Default: 0

  • MKE: 0

Sets the synchronization of connections when using persistence. The possible values are defined as follows:

  • 0 means all types of connections are synchronized.

  • 1 attempts to reduce the synchronization traffic depending on the connection type. For persistent services, synchronization is avoided for normal connections and performed only for persistence templates. In such cases, for TCP and SCTP, you may need to enable the sloppy_tcp and sloppy_sctp flags on back-up servers. For non-persistent services, this optimization is not applied and mode 0 is assumed.

vs.sync_ports

  • Default: 1

  • MKE: 1

Sets the number of threads that the master and back-up servers can use for sync traffic. Every thread uses a single UDP port, thread 0 uses the default port 8848, and the last thread uses port 8848+sync_ports-1.

vs.sync_qlen_max

  • Default: Calculated

  • MKE: Calculated

Sets a hard limit for queued sync messages that are not yet sent. It defaults to 1/32 of the memory pages but actually represents the number of messages. This protects against allocating large amounts of memory when the sending rate is lower than the queuing rate.

vs.sync_refresh_period

  • Default: 0

  • MKE: 0

Sets (in seconds) the difference in the reported connection timer that triggers new sync messages. It can be used to avoid sync messages for the specified period (or half of the connection timeout if it is lower) if the connection state has not changed since last sync.

This is useful for normal connections with high traffic, to reduce the sync rate. Additionally, sync messages are retried sync_retries times with a period of sync_refresh_period/8.

vs.sync_retries

  • Default: 0

  • MKE: 0

Sets the number of sync retries, performed with a period of sync_refresh_period/8. This is useful to protect against loss of sync messages. The valid range of sync_retries is 0 to 3.

vs.sync_sock_size

  • Default: 0

  • MKE: 0

Sets the configuration of SNDBUF (master) or RCVBUF (slave) socket limit. Default value is 0 (preserve system defaults).

vs.sync_threshold

  • Default: 3 50

  • MKE: 3 50

Sets the synchronization threshold, which is the minimum number of incoming packets that a connection must receive before the connection is synchronized. A connection will be synchronized every time the number of its incoming packets modulus sync_period equals the threshold. The range of the threshold is 0 to sync_period. When sync_period and sync_refresh_period are 0, a sync message is sent only for state changes or only once, when the packet count matches sync_threshold.

vs.sync_version

  • Default: 1

  • MKE: 1

Sets the version of the synchronization protocol to use when sending synchronization messages. The possible values are:

  • 0 selects the original synchronization protocol (version 0). This should be used when sending synchronization messages to a legacy system that only understands the original synchronization protocol.

  • 1 selects the current synchronization protocol (version 1). This should be used whenever possible.

Kernels with this sync_version entry are able to receive messages of both version 0 and version 1 of the synchronization protocol.

net.netfilter.nf_conntrack_<subtree>

Note

  • The net.netfilter.nf_conntrack_<subtree> default values persist, changing only when the nf_conntrack kernel module has not been previously loaded. For more information, refer to the Linux kernel documentation.

Parameter

Values

Description

acct

  • Default: 0

  • MKE: 0

Sets whether connection-tracking flow accounting is enabled. Adds 64-bit byte and packet counter per flow.

buckets

  • Default: Calculated

  • MKE: Calculated

Sets the size of the hash table. If not specified during module loading, the default size is calculated by dividing total memory by 16384 to determine the number of buckets. The hash table will never have fewer than 1024 and never more than 262144 buckets. This sysctl is only writeable in the initial net namespace.

checksum

  • Default: 0

  • MKE: 0

Sets whether the checksum of incoming packets is verified. Packets with bad checksums are in an invalid state. If this is enabled, such packets are not considered for connection tracking.

dccp_loose

  • Default: 0

  • MKE: 1

Sets whether picking up already established connections for Datagram Congestion Control Protocol (DCCP) is permitted.

dccp_timeout_closereq

  • Default: Distribution dependent

  • MKE: 64

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_closing

  • Default: Distribution dependent

  • MKE: 64

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_open

  • Default: Distribution dependent

  • MKE: 43200

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_partopen

  • Default: Distribution dependent

  • MKE: 480

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_request

  • Default: Distribution dependent

  • MKE: 240

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_respond

  • Default: Distribution dependent

  • MKE: 480

The parameter description is not yet available in the Linux kernel documentation.

dccp_timeout_timewait

  • Default: Distribution dependent

  • MKE: 240

The parameter description is not yet available in the Linux kernel documentation.

events

  • Default: 0

  • MKE: 1

Sets whether the connection tracking code provides userspace with connection-tracking events through ctnetlink.

expect_max

  • Default: Calculated

  • MKE: 1024

Sets the maximum size of the expectation table. The default value is nf_conntrack_buckets / 256. The minimum is 1.

frag6_high_thresh

  • Default: Calculated

  • MKE: 4194304

Sets the maximum memory used to reassemble IPv6 fragments. When nf_conntrack_frag6_high_thresh bytes of memory is allocated for this purpose, the fragment handler tosses packets until nf_conntrack_frag6_low_thresh is reached. The size of this parameter is calculated based on system memory.

frag6_low_thresh

  • Default: Calculated

  • MKE: 3145728

See nf_conntrack_frag6_high_thresh. The size of this parameter is calculated based on system memory.

frag6_timeout

  • Default: 60

  • MKE: 60

Sets the time to keep an IPv6 fragment in memory.

generic_timeout

  • Default: 600

  • MKE: 600

Sets the default for a generic timeout. This refers to layer 4 unknown and unsupported protocols.

gre_timeout

  • Default: 30

  • MKE: 30

Sets the GRE timeout in the conntrack table.

gre_timeout_stream

  • Default: 180

  • MKE: 180

Sets the GRE timeout for streamed connections. This extended timeout is used when a GRE stream is detected.

helper

  • Default: 0

  • MKE: 0

Sets whether the automatic conntrack helper assignment is enabled. If disabled, you must set up iptables rules to assign helpers to connections. See the CT target description in the iptables-extensions(8) man page for more information.

icmp_timeout

  • Default: 30

  • MKE: 30

Sets the default for ICMP timeout.

icmpv6_timeout

  • Default: 30

  • MKE: 30

Sets the default for ICMP6 timeout.

log_invalid

  • Default: 0

  • MKE: 0

Sets whether invalid packets of a type specified by value are logged.

max

  • Default: Calculated

  • MKE: 131072

Sets the maximum number of allowed connection tracking entries. This value is set to nf_conntrack_buckets by default.

Connection-tracking entries are added to the table twice, once for the original direction and once for the reply direction (that is, with the reversed address). Thus, with default settings a maxed-out table will have an average hash chain length of 2, not 1.

sctp_timeout_closed

  • Default: Distribution dependent

  • MKE: 10

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_cookie_echoed

  • Default: Distribution dependent

  • MKE: 3

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_cookie_wait

  • Default: Distribution dependent

  • MKE: 3

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_established

  • Default: Distribution dependent

  • MKE: 432000

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_heartbeat_acked

  • Default: Distribution dependent

  • MKE: 210

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_heartbeat_sent

  • Default: Distribution dependent

  • MKE: 30

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_shutdown_ack_sent

  • Default: Distribution dependent

  • MKE: 3

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_shutdown_recd

  • Default: Distribution dependent

  • MKE: 0

The parameter description is not yet available in the Linux kernel documentation.

sctp_timeout_shutdown_sent

  • Default: Distribution dependent

  • MKE: 0

The parameter description is not yet available in the Linux kernel documentation.

tcp_be_liberal

  • Default: 0

  • MKE: 0

Sets whether only out of window RST segments are marked as INVALID.

tcp_loose

  • Default: 0

  • MKE: 1

Sets whether already established connections are picked up.

tcp_max_retrans

  • Default: 3

  • MKE: 3

Sets the maximum number of packets that can be retransmitted without receiving an acceptable ACK from the destination. If this number is reached, a shorter timer is started.

tcp_timeout_close

  • Default: Distribution dependent

  • MKE: 10

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_close_wait

  • Default: Distribution dependent

  • MKE: 3600

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_fin_wait

  • Default: Distribution dependent

  • MKE: 120

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_last_ack

  • Default: Distribution dependent

  • MKE: 30

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_max_retrans

  • Default: Distribution dependent

  • MKE: 300

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_syn_recv

  • Default: Distribution dependent

  • MKE: 60

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_syn_sent

  • Default: Distribution dependent

  • MKE: 120

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_time_wait

  • Default: Distribution dependent

  • MKE: 120

The parameter description is not yet available in the Linux kernel documentation.

tcp_timeout_unacknowledged

  • Default: Distribution dependent

  • MKE: 30

The parameter description is not yet available in the Linux kernel documentation.

timestamp

  • Default: 0

  • MKE: 0

Sets whether connection-tracking flow timestamping is enabled.

udp_timeout

  • Default: 30

  • MKE: 30

Sets the UDP timeout.

udp_timeout_stream

  • Default: 120

  • MKE: 120

Sets the extended timeout that is used whenever a UDP stream is detected.

net.nf_conntrack_<subtree>

Note

  • The net.nf_conntrack_<subtree> default values persist, changing only when the nf_conntrack kernel module has not been previously loaded. For more information, refer to the Linux kernel documentation.

Parameter

Values

Description

max

  • Default: Calculated

  • MKE: 131072

Sets the maximum number of connections to track. The size of this parameter is calculated based on system memory.

vm.overcommit_<subtree>

Parameter

Values

Description

memory

  • Default: Distribution dependent

  • MKE: 1

Sets whether the kernel permits memory overcommitment from malloc() calls.

Install the MKE image

To install MKE:

  1. Log in to the target host using Secure Shell (SSH).

  2. Pull the latest version of MKE:

    docker image pull mirantis/ucp:3.4.15
    
  3. Install MKE:

    docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 install \
    --host-address <node-ip-address> \
    --interactive
    

    The ucp install command runs in interactive mode, prompting you for the necessary configuration values. For more information about the ucp install command, including how to install MKE on a system with SELinux enabled, refer to the MKE Operations Guide: mirantis/ucp install.

Note

MKE installs Project Calico for Kubernetes container-to-container communication. However, you may install an alternative CNI plugin, such as Cilium, Weave, or Flannel. For more information, refer to the MKE Operations Guide: Installing an unmanaged CNI plugin.
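
To confirm that the installation completed, you can list the MKE components running on the manager node. This is a quick sanity check rather than an official verification procedure:

docker container ls --filter name=ucp- --format 'table {{.Names}}\t{{.Status}}'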

Obtain the license

After you Install the MKE image, proceed with downloading your MKE license as described below. This section also contains steps to apply your new license using the MKE web UI.

Warning

Users are not authorized to run MKE without a valid license. For more information, refer to Mirantis Agreements and Terms.

To download your MKE license:

  1. Open an email from Mirantis Support with the subject Welcome to Mirantis’ CloudCare Portal and follow the instructions for logging in.

    If you did not receive the CloudCare Portal email, it is likely that you have not yet been added as a Designated Contact. To remedy this, contact your Designated Administrator.

  2. In the top navigation bar, click Environments.

  3. Click the Cloud Name associated with the license you want to download.

  4. Scroll down to License Information and click the License File URL. A new tab opens in your browser.

  5. Click View file to download your license file.

To update your license settings in the MKE web UI:

  1. Log in to your MKE instance using an administrator account.

  2. In the left navigation, click Settings.

  3. On the General tab, click Apply new license. A file browser dialog displays.

  4. Navigate to where you saved the license key (.lic) file, select it, and click Open. MKE automatically updates with the new settings.

Note

Though MKE is generally a subscription-only service, Mirantis offers a free trial license by request. Use our contact form to request a free trial license.

Install MKE on AWS

This section describes how to customize your MKE installation on AWS. It is for those deploying Kubernetes workloads while leveraging the AWS Kubernetes cloud provider, which provides dynamic volume and load balancer provisioning.

Note

You may skip this topic if you plan to install MKE on AWS with no customizations or if you will only deploy Docker Swarm workloads. Refer to Install the MKE image for the appropriate installation instruction.

Prerequisites

Complete the following prerequisites prior to installing MKE on AWS.

  1. Log in to the AWS Management Console.

  2. Assign a host name to your instance. To determine the host name, run the following curl command within the EC2 instance:

    curl http://169.254.169.254/latest/meta-data/hostname
    
  3. Tag your instance, VPC, and subnets by specifying kubernetes.io/cluster/<unique-cluster-id> in the Key field and <cluster-type> in the Value field. Possible <cluster-type> values are as follows:

    • owned, if the cluster owns and manages the resources that it creates

    • shared, if the cluster shares its resources between multiple clusters

    For example, Key: kubernetes.io/cluster/1729543642a6 and Value: owned. A CLI sketch for applying such a tag follows this list.

  4. To enable introspection and resource provisioning, specify an instance profile with appropriate policies for manager nodes. The following is an example of a very permissive instance profile:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [ "ec2:*" ],
          "Resource": [ "*" ]
        },
        {
          "Effect": "Allow",
          "Action": [ "elasticloadbalancing:*" ],
          "Resource": [ "*" ]
        },
        {
          "Effect": "Allow",
          "Action": [ "route53:*" ],
          "Resource": [ "*" ]
        },
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": [ "arn:aws:s3:::kubernetes-*" ]
        }
      ]
    }
    
  5. To enable access to dynamically provisioned resources, specify an instance profile with appropriate policies for worker nodes. The following is an example of a very permissive instance profile:

    {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": ["arn:aws:s3:::kubernetes-*"]
        },
        {
          "Effect": "Allow",
          "Action": "ec2:Describe*",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": "ec2:AttachVolume",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": "ec2:DetachVolume",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": ["route53:*"],
          "Resource": ["*"]
        }
      ]
    }
    

Install MKE

After you perform the steps described in Prerequisites, run the following command to install MKE on a master node. Substitute <ucp-ip> with the private IP address of the master node.

Warning

If your cluster includes Kubernetes Windows worker nodes, you must omit the --cloud-provider aws flag from the following command, as its inclusion causes the Kubernetes Windows worker nodes never to enter a healthy state.

docker container run --rm -it \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:3.4.15 install \
--host-address <ucp-ip> \
--cloud-provider aws \
--interactive

Install MKE on Azure

Mirantis Kubernetes Engine (MKE) closely integrates with Microsoft Azure for its Kubernetes Networking and Persistent Storage feature set. MKE deploys the Calico CNI provider. In Azure, the Calico CNI leverages the Azure networking infrastructure for data path networking and the Azure IPAM for IP address management.

Prerequisites

To avoid significant issues during the installation process, you must meet the following infrastructure prerequisites to successfully deploy MKE on Azure.

  • Deploy all MKE nodes (managers and workers) into the same Azure resource group. You can deploy the Azure networking components (virtual network, subnets, security groups) in a second Azure resource group.

  • Size the Azure virtual network and subnet appropriately for your environment, because addresses from this pool will be consumed by Kubernetes Pods.

  • Attach all MKE worker and manager nodes to the same Azure subnet.

  • Set internal IP addresses for all nodes to Static rather than the Dynamic default.

  • Match the Azure virtual machine object name to the Azure virtual machine computer name and to the node operating system hostname, which is the FQDN of the host (including domain names). All characters in the names must be lowercase.

  • Ensure the presence of an Azure Service Principal with Contributor access to the Azure resource group hosting the MKE nodes. Kubernetes uses this Service Principal to communicate with the Azure API. The Service Principal ID and Secret Key are MKE prerequisites.

    If you are using a separate resource group for the networking components, the same Service Principal must have Network Contributor access to this resource group.

  • Ensure that the network security group is open between all IPs on the Azure subnet passed into MKE during installation. Kubernetes Pods integrate into the underlying Azure networking stack, from an IPAM and routing perspective, with the Azure CNI IPAM module. As such, Azure network security groups (NSG) impact pod-to-pod communication. Because end users may expose containerized services on a range of underlying ports, a closed NSG would require manually opening an NSG port every time a new containerized service is deployed on the platform. This affects only workloads deployed on the Kubernetes orchestrator.

    To limit exposure, restrict the use of the Azure subnet to container host VMs and Kubernetes Pods. Additionally, you can leverage Kubernetes Network Policies to provide micro segmentation for containerized applications and services.

The MKE installation requires the following information:

subscriptionId

Azure Subscription ID in which to deploy the MKE objects

tenantId

Azure Active Directory Tenant ID in which to deploy the MKE objects

aadClientId

Azure Service Principal ID

aadClientSecret

Azure Service Principal Secret Key

Networking

MKE configures the Azure IPAM module for Kubernetes so that it can allocate IP addresses for Kubernetes Pods. Per Azure IPAM module requirements, the configuration of each Azure VM that is part of the Kubernetes cluster must include a pool of IP addresses.

You can use automatic or manual IP provisioning for the Kubernetes cluster on Azure.

  • Automatic provisioning

    Allows for IP pool configuration and maintenance for standalone Azure virtual machines (VMs). This service runs within the calico-node daemonset and provisions 128 IP addresses for each node by default.

    Note

    If you are using a VXLAN data plane, MKE automatically uses Calico IPAM. It is not necessary to do anything specific for Azure IPAM.

    New MKE installations use Calico VXLAN as the default data plane (the MKE configuration calico_vxlan is set to true). MKE does not use Calico VXLAN if the MKE version is lower than 3.3.0 or if you upgrade MKE from lower than 3.3.0 to 3.3.0 or higher.

  • Manual provisioning

    Manual provisioning of additional IP addresses for each Azure VM can be done through the Azure Portal, the Azure CLI (az network nic ip-config create), or an ARM template.
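
    For example, a manually added secondary IP configuration might look like the following Azure CLI sketch, in which the resource group, NIC, virtual network, and subnet names are placeholders:

    az network nic ip-config create \
      --resource-group <resource-group> \
      --nic-name <nic-name> \
      --name <ip-config-name> \
      --vnet-name <vnet-name> \
      --subnet <subnet-name>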

Azure configuration file

For MKE to integrate with Microsoft Azure, the azure.json configuration file must be identical across all manager and worker nodes in your cluster. For Linux nodes, place the file in /etc/kubernetes on each host. For Windows nodes, place the file in C:\k on each host. Because root owns the configuration file, set its permissions to 0644 to ensure that the container user has read access.

The following is an example template for azure.json.

{
    "cloud":"AzurePublicCloud",
    "tenantId": "<parameter_value>",
    "subscriptionId": "<parameter_value>",
    "aadClientId": "<parameter_value>",
    "aadClientSecret": "<parameter_value>",
    "resourceGroup": "<parameter_value>",
    "location": "<parameter_value>",
    "subnetName": "<parameter_value>",
    "securityGroupName": "<parameter_value>",
    "vnetName": "<parameter_value>",
    "useInstanceMetadata": true
}
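
After populating the template, distribute the file to every node as described above. The following is a minimal sketch for a Linux node; the target path and permissions follow the requirements listed earlier:

sudo mkdir -p /etc/kubernetes
sudo cp azure.json /etc/kubernetes/azure.json
sudo chmod 0644 /etc/kubernetes/azure.json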

Optional parameters are available for Azure deployments:

primaryAvailabilitySetName

Worker nodes availability set

vnetResourceGroup

Virtual network resource group if your Azure network objects live in a separate resource group

routeTableName

Applicable if you have defined multiple route tables within an Azure subnet

Guidelines for IPAM configuration

Warning

To avoid significant issues during the installation process, follow these guidelines to either use an appropriately sized network in Azure or take the necessary actions to fit within the subnet.

Configure the subnet and the virtual network associated with the primary interface of the Azure VMs with an adequate address prefix/range. The number of required IP addresses depends on the workload and the number of nodes in the cluster.

For example, for a cluster of 256 nodes, make sure that the address space of the subnet and the virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods concurrently on a node. This is in addition to the initial IP allocations to the VM network interface cards (NICs) during Azure resource creation.

Accounting for the allocation of IP addresses to NICs that occurs during VM bring-up, set the address space of the subnet and virtual network to 10.0.0.0/16. This ensures that the network can dynamically allocate at least 32768 addresses, plus a buffer for initial allocations for primary IP addresses.

Note

The Azure IPAM module queries the metadata of an Azure VM to obtain a list of the IP addresses that are assigned to the VM NICs. The IPAM module allocates these IP addresses to Kubernetes pods. You configure the IP addresses as ipConfigurations in the NICs associated with a VM or scale set member, so that Azure IPAM can provide the addresses to Kubernetes on request.

Manually provision IP address pools as part of an Azure VM scale set

Configure IP Pools for each member of the VM scale set during provisioning by associating multiple ipConfigurations with the scale set’s networkInterfaceConfigurations.

The following example networkProfile configuration for an ARM template configures pools of 32 IP addresses for each VM in the VM scale set.

"networkProfile": {
  "networkInterfaceConfigurations": [
    {
      "name": "[variables('nicName')]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "[variables('ipConfigName1')]",
            "properties": {
              "primary": "true",
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              },
              "loadBalancerBackendAddressPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/backendAddressPools/', variables('bePoolName'))]"
                }
              ],
              "loadBalancerInboundNatPools": [
                {
                  "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/inboundNatPools/', variables('natPoolName'))]"
                }
              ]
            }
          },
          {
            "name": "[variables('ipConfigName2')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          }
          .
          .
          .
          {
            "name": "[variables('ipConfigName32')]",
            "properties": {
              "subnet": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
              }
            }
          }
        ],
        "primary": "true"
      }
    }
  ]
}

Adjust the IP count value

During an MKE installation, you can alter the number of Azure IP addresses that MKE automatically provisions for pods.

By default, MKE will provision 128 addresses, from the same Azure subnet as the hosts, for each VM in the cluster. If, however, you have manually attached additional IP addresses to the VMs (by way of an ARM template, the Azure CLI, or the Azure Portal) or you are deploying into a small Azure subnet (smaller than /16), you can use the --azure-ip-count flag at install time.

Note

Do not set the --azure-ip-count variable to a value of less than 6 if you have not manually provisioned additional IP addresses for each VM. The MKE installation needs at least 6 IP addresses to allocate to the core MKE components that run as Kubernetes pods (in addition to the VM’s private IP address).

Below are several example scenarios that require defining the --azure-ip-count variable.

Scenario 1: Manually provisioned addresses

If you have manually provisioned additional IP addresses for each VM and want to disable MKE from dynamically provisioning more IP addresses, you must pass --azure-ip-count 0 into the MKE installation command.

Scenario 2: Reducing the number of provisioned addresses

Pass --azure-ip-count <custom_value> into the MKE installation command to reduce the number of IP addresses dynamically allocated from 128 to a custom value due to:

  • Primary use of the Swarm Orchestrator

  • Deployment of MKE on a small Azure subnet (for example, /24)

  • Plans to run a small number of Kubernetes pods on each node

To adjust this value post-installation, refer to the instructions on how to download the MKE configuration file, change the value, and update the configuration via the API.

Note

If you reduce the value post-installation, existing VMs will not reconcile and you will need to manually edit the IP count in Azure.
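
For reference, the post-installation adjustment described above can be scripted against the MKE configuration API. The following hedged sketch assumes an administrator authentication token in $AUTHTOKEN and the /api/ucp/config-toml endpoint; consult the MKE Operations Guide for the authoritative procedure:

# Download the current MKE configuration
curl -sk -H "Authorization: Bearer $AUTHTOKEN" \
  https://<mke-ip>/api/ucp/config-toml -o ucp-config.toml

# Edit the Azure IP count setting in ucp-config.toml, then upload the file
curl -sk -X PUT -H "Authorization: Bearer $AUTHTOKEN" \
  --data-binary @ucp-config.toml https://<mke-ip>/api/ucp/config-toml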

Run the following command to install MKE on a manager node.

docker container run --rm -it \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.4.15 install \
  --host-address <ucp-ip> \
  --pod-cidr <ip-address-range> \
  --cloud-provider Azure \
  --interactive

  • The --pod-cidr option maps to the IP address range that you configured for the Azure subnet.

    Note

    The pod-cidr range must match the Azure virtual network’s subnet attached to the hosts. For example, if the Azure virtual network had the range 172.0.0.0/16 with VMs provisioned on an Azure subnet of 172.0.1.0/24, then the Pod CIDR should also be 172.0.1.0/24.

    This requirement applies only when MKE does not use the VXLAN data plane. If MKE uses the VXLAN data plane, the pod-cidr range must be different than the node IP subnet.

  • The --host-address maps to the private IP address of the manager node.

  • The --azure-ip-count option adjusts the number of IP addresses provisioned to each VM.

Azure custom roles

You can create your own Azure custom roles for use with MKE. You can assign these roles to users, groups, and service principals at management group (in preview only), subscription, and resource group scopes.

Deploy an MKE cluster into a single resource group

A resource group is a container that holds resources for an Azure solution. These resources are the virtual machines (VMs), networks, and storage accounts that are associated with the swarm.

To create a custom all-in-one role with permissions to deploy an MKE cluster into a single resource group:

  1. Create the role permissions JSON file.

    For example:

    {
      "Name": "Docker Platform All-in-One",
      "IsCustom": true,
      "Description": "Can install and manage Docker platform.",
      "Actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Authorization/roleAssignments/write",
        "Microsoft.Compute/availabilitySets/read",
        "Microsoft.Compute/availabilitySets/write",
        "Microsoft.Compute/disks/read",
        "Microsoft.Compute/disks/write",
        "Microsoft.Compute/virtualMachines/extensions/read",
        "Microsoft.Compute/virtualMachines/extensions/write",
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/write",
        "Microsoft.Network/loadBalancers/read",
        "Microsoft.Network/loadBalancers/write",
        "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
        "Microsoft.Network/networkInterfaces/read",
        "Microsoft.Network/networkInterfaces/write",
        "Microsoft.Network/networkInterfaces/join/action",
        "Microsoft.Network/networkSecurityGroups/read",
        "Microsoft.Network/networkSecurityGroups/write",
        "Microsoft.Network/networkSecurityGroups/join/action",
        "Microsoft.Network/networkSecurityGroups/securityRules/read",
        "Microsoft.Network/networkSecurityGroups/securityRules/write",
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/publicIPAddresses/write",
        "Microsoft.Network/publicIPAddresses/join/action",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/write",
        "Microsoft.Network/virtualNetworks/subnets/read",
        "Microsoft.Network/virtualNetworks/subnets/write",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Resources/subscriptions/resourcegroups/read",
        "Microsoft.Resources/subscriptions/resourcegroups/write",
        "Microsoft.Security/advancedThreatProtectionSettings/read",
        "Microsoft.Security/advancedThreatProtectionSettings/write",
        "Microsoft.Storage/*/read",
        "Microsoft.Storage/storageAccounts/listKeys/action",
        "Microsoft.Storage/storageAccounts/write"
      ],
      "NotActions": [],
      "AssignableScopes": [
        "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
      ]
    }
    
  2. Create the Azure RBAC role.

    az role definition create --role-definition all-in-one-role.json
    
Deploy MKE compute resources

Compute resources act as servers for running containers.

To create a custom role to deploy MKE compute resources only:

  1. Create the role permissions JSON file.

    For example:

    {
      "Name": "Docker Platform",
      "IsCustom": true,
      "Description": "Can install and run Docker platform.",
      "Actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Authorization/roleAssignments/write",
        "Microsoft.Compute/availabilitySets/read",
        "Microsoft.Compute/availabilitySets/write",
        "Microsoft.Compute/disks/read",
        "Microsoft.Compute/disks/write",
        "Microsoft.Compute/virtualMachines/extensions/read",
        "Microsoft.Compute/virtualMachines/extensions/write",
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/write",
        "Microsoft.Network/loadBalancers/read",
        "Microsoft.Network/loadBalancers/write",
        "Microsoft.Network/networkInterfaces/read",
        "Microsoft.Network/networkInterfaces/write",
        "Microsoft.Network/networkInterfaces/join/action",
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/subnets/read",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Resources/subscriptions/resourcegroups/read",
        "Microsoft.Resources/subscriptions/resourcegroups/write",
        "Microsoft.Security/advancedThreatProtectionSettings/read",
        "Microsoft.Security/advancedThreatProtectionSettings/write",
        "Microsoft.Storage/storageAccounts/read",
        "Microsoft.Storage/storageAccounts/listKeys/action",
        "Microsoft.Storage/storageAccounts/write"
      ],
      "NotActions": [],
      "AssignableScopes": [
        "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
      ]
    }
    
  2. Create the Docker Platform RBAC role.

    az role definition create --role-definition platform-role.json
    
Deploy MKE network resources

Network resources are services inside your cluster. These resources can include virtual networks, security groups, address pools, and gateways.

To create a custom role to deploy MKE network resources only:

  1. Create the role permissions JSON file.

    For example:

    {
      "Name": "Docker Networking",
      "IsCustom": true,
      "Description": "Can install and manage Docker platform networking.",
      "Actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Network/loadBalancers/read",
        "Microsoft.Network/loadBalancers/write",
        "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
        "Microsoft.Network/networkInterfaces/read",
        "Microsoft.Network/networkInterfaces/write",
        "Microsoft.Network/networkInterfaces/join/action",
        "Microsoft.Network/networkSecurityGroups/read",
        "Microsoft.Network/networkSecurityGroups/write",
        "Microsoft.Network/networkSecurityGroups/join/action",
        "Microsoft.Network/networkSecurityGroups/securityRules/read",
        "Microsoft.Network/networkSecurityGroups/securityRules/write",
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/publicIPAddresses/write",
        "Microsoft.Network/publicIPAddresses/join/action",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/write",
        "Microsoft.Network/virtualNetworks/subnets/read",
        "Microsoft.Network/virtualNetworks/subnets/write",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Resources/subscriptions/resourcegroups/read",
        "Microsoft.Resources/subscriptions/resourcegroups/write"
      ],
      "NotActions": [],
      "AssignableScopes": [
        "/subscriptions/6096d756-3192-4c1f-ac62-35f1c823085d"
      ]
    }
    
  2. Create the Docker Networking RBAC role.

    az role definition create --role-definition networking-role.json
    

Install MKE offline

To install MKE on an offline host, you must first use a separate computer with an Internet connection to download a single package with all the images and then copy that package to the host where you will install MKE. Once the package is on the host and loaded, you can install MKE offline as described in Install the MKE image.

Note

During the offline installation, both manager and worker nodes must be offline.

To install MKE offline:

  1. Download the required MKE package:

    Note

    MKE 3.4.1 and 3.4.3 are discontinued and thus not available for download.

    Caution

    Users running kernel version 4.15 or earlier may encounter an issue with MKE 3.4.2 wherein support dumps fail and nodes disconnect. Mirantis strongly recommends that these users either upgrade to kernel version 4.16 (or later) or upgrade to MKE 3.4.4.

  2. Copy the MKE package to the host machine:

    scp ucp.tar.gz <user>@<host>:
    
  3. Use SSH to log in to the host where you transferred the package.

  4. Load the MKE images from the .tar.gz file:

    docker load -i ucp.tar.gz
    
  5. Install the MKE image.

Uninstall MKE

This topic describes how to uninstall MKE from your cluster. After uninstalling MKE, your instances of MCR will continue running in swarm mode and your applications will run normally. You will not, however, be able to do the following unless you reinstall MKE:

  • Enforce role-based access control (RBAC) to the cluster.

  • Monitor and manage the cluster from a central place.

  • Join new nodes using docker swarm join.

    Note

    You cannot join new nodes to your cluster after uninstalling MKE because your cluster will be in swarm mode, and swarm mode relies on MKE to provide the CA certificates that allow nodes to communicate with each other. After the certificates expire, the nodes will not be able to communicate at all. Either reinstall MKE before the certificates expire, or disable swarm mode by running docker swarm leave --force on every node.

To uninstall MKE:

Note

If SELinux is enabled, you must temporarily disable it prior to running the uninstall-ucp command.

  1. Log in to a manager node using SSH.

  2. Run the uninstall-ucp command in interactive mode, thus prompting you for the necessary configuration values:

    docker container run --rm -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/log:/var/log \
      --name ucp \
      mirantis/ucp:3.4.15 uninstall-ucp --interactive
    

    Note

    The uninstall-ucp command completely removes MKE from every node in the cluster. You do not need to run the command from multiple nodes.

    If the uninstall-ucp command fails, manually uninstall MKE.

    1. On any manager node, remove the remaining MKE services:

      docker service rm $(docker service ls -f name=ucp- -q)
      
    2. On each manager node, remove the remaining MKE containers:

      docker container rm -f $(docker container ps -a -f name=ucp- -f name=k8s_ -q)
      
    3. On each manager node, remove the remaining MKE volumes:

      docker volume rm $(docker volume ls -f name=ucp -q)
      

    Note

    For more information about the uninstall-ucp failure, refer to the logs in /var/log on any manager node. Be aware that you will not be able to access the logs if the volume /var/log:/var/log is not mounted while running the ucp container.

  3. Optional. Delete the MKE configuration:

    docker container run --rm -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/log:/var/log \
      --name ucp \
      mirantis/ucp:3.4.15 uninstall-ucp \
      --purge-config --interactive
    

    MKE keeps the configuration by default in case you want to reinstall MKE later with the same configuration. For all available uninstall-ucp options, refer to mirantis/ucp uninstall-ucp.

  4. Optional. Restore the host IP tables to their pre-MKE installation values by restarting the node.

    Note

    The Calico network plugin changed the host IP tables from their original values during MKE installation.

Deploy Swarm-only mode

Caution

Swarm-only mode is currently in beta, and thus you may experience issues in your use of the feature.

Swarm-only mode is an MKE configuration that supports only Swarm orchestration. Because it lacks Kubernetes and its operational and health-check dependencies, the resulting installation is smaller and highly stable compared with a typical mixed-orchestration MKE installation.

You can only enable or disable swarm-only mode at the time of MKE installation. MKE preserves the swarm-only setting through upgrades, backups, and system restoration. Installing MKE in swarm-only mode pulls only the images required to run MKE in this configuration. Refer to Swarm-only images for more information.

Note

Installing MKE in swarm-only mode removes all Kubernetes options from the web UI.

To install MKE in Swarm-only mode:

  1. Complete the steps and recommendations in Plan the deployment and Perform pre-deployment configuration.

  2. Install MKE in swarm-only mode by adding the --swarm-only flag to the install command found in Install the MKE image:

    docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 install \
    --host-address <node-ip-address> \
    --interactive \
    --swarm-only
    

Note

In addition, MKE includes the --swarm-only flag with the bootstrapper images command, which you can use to pull or to check the required images on manager nodes.
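
For example, a minimal sketch of checking the required Swarm-only images on a manager node, assuming the --list flag of the bootstrapper images command:

docker container run --rm -it --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp:3.4.15 images --list --swarm-only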

Caution

To restore Swarm-only clusters, invoke the ucp restore command with the --swarm-only option.

Swarm-only images

Installing MKE in swarm-only mode pulls the following set of images, which is smaller than that of a typical MKE installation:

  • ucp-agent (ucp-agent-win on Windows)

  • ucp-auth-store

  • ucp-auth

  • ucp-azure-ip-allocator

  • ucp-cfssl

  • ucp-compose

  • ucp-containerd-shim-process (ucp-containerd-shim-process-win on Windows)

  • ucp-controller

  • ucp-csi-attacher

  • ucp-csi-liveness-probe

  • ucp-csi-node-driver-registrar

  • ucp-csi-provisioner

  • ucp-csi-resizer

  • ucp-csi-snapshotter

  • ucp-dsinfo (ucp-dsinfo-win on Windows)

  • ucp-etcd

  • ucp-interlock-config

  • ucp-interlock-extension

  • ucp-interlock-proxy

  • ucp-interlock

  • ucp-metrics

  • ucp-openstack-ccm

  • ucp-openstack-cinder-csi-plugin

  • ucp-swarm

Prometheus

In swarm-only mode, MKE runs the Prometheus server and the authenticating proxy in a single container on each manager node. Thus, unlike in conventional MKE installations, you cannot configure Prometheus server placement. Prometheus does not collect Kubernetes metrics in swarm-only mode, and it requires an additional reserved port on manager nodes: 12387.

See also

Kubernetes


Operations Guide

Warning

In correlation with the end of life (EOL) date for MKE 3.4.x, Mirantis stopped maintaining this documentation version as of 2023-04-11. The latest MKE product documentation is available here.

The MKE Operations Guide provides the comprehensive information you need to run the MKE container orchestration platform. The guide is intended for anyone who needs to effectively develop and securely administer applications at scale, on private clouds, public clouds, and on bare metal.

Access an MKE cluster

You can access an MKE cluster in a variety of ways including through the MKE web UI, Docker CLI, and kubectl (the Kubernetes CLI). To use the Docker CLI and kubectl with MKE, first download a client certificate bundle. This topic describes the MKE web UI, how to download and configure the client bundle, and how to configure kubectl with MKE.

Access the MKE web UI

MKE allows you to control your cluster visually using the web UI. Role-based access control (RBAC) gives administrators and non-administrators access to the following web UI features:

  • Administrators:

    • Manage cluster configurations.

    • View and edit all cluster images, networks, volumes, and containers.

    • Manage the permissions of users, teams, and organizations.

    • Grant node-specific task scheduling permissions to users.

  • Non-administrators:

    • View and edit all cluster images, networks, volumes, and containers. An administrator must grant this access.

To access the MKE web UI:

  1. Open a browser and navigate to https://<ip-address> (substituting <ip-address> with the IP address of the machine that ran docker run).

  2. Enter the user name and password that you set up when installing the MKE image.

Note

To set up two-factor authentication for logging in to the MKE web UI, see Use two-factor authentication.

Download and configure the client bundle

Download and configure the MKE client certificate bundle to use MKE with Docker CLI and kubectl. The bundle includes:

  • A private and public key pair for authorizing your requests using MKE

  • Utility scripts for configuring Docker CLI and kubectl with your MKE deployment

Note

MKE issues different certificates for each user type:

User certificate bundles

Allow running docker commands only through MKE manager nodes.

Administrator certificate bundles

Allow running docker commands through all node types.

Download the client bundle

This section explains how to download the client certificate bundle using either the MKE web UI or the MKE API.

To download the client certificate bundle using the MKE web UI:

  1. Navigate to My Profile.

  2. Click Client Bundles > New Client Bundle.

To download the client certificate bundle using the MKE API on Linux:

  1. Create an environment variable with the user security token:

    AUTHTOKEN=$(curl -sk -d \
    '{"username":"<username>","password":"<password>"}' \
    https://<mke-ip>/auth/login | jq -r .auth_token)
    
  2. Download the client certificate bundle:

    curl -k -H "Authorization: Bearer $AUTHTOKEN" \
    https://<mke-ip>/api/clientbundle -o bundle.zip
    

To download the client certificate bundle using the MKE API on Windows Server 2016:

  1. Open an elevated PowerShell prompt.

  2. Create an environment variable with the user security token:

    $AUTHTOKEN=((Invoke-WebRequest `
    -Body '{"username":"<username>","password":"<password>"}' `
    -Uri https://`<mke-ip`>/auth/login `
    -Method POST).Content)|ConvertFrom-Json|select auth_token `
    -ExpandProperty auth_token
    
  3. Download the client certificate bundle:

    [io.file]::WriteAllBytes("ucp-bundle.zip", `
    ((Invoke-WebRequest -Uri https://`<mke-ip`>/api/clientbundle `
    -Headers @{"Authorization"="Bearer $AUTHTOKEN"}).Content))
    
Configure the client bundle

This section explains how to configure the client certificate bundle to authenticate your requests with MKE using the Docker CLI and kubectl.

To configure the client certificate bundle:

  1. Extract the client bundle .zip file into a directory, and use the appropriate utility script for your system:

    • For Linux:

      cd client-bundle && eval "$(<env.sh)"
      
    • For Windows (from an elevated PowerShell prompt):

      cd client-bundle && env.cmd
      

    The utility scripts do the following:

    • Update DOCKER_HOST to make the client tools communicate with your MKE deployment.

    • Update DOCKER_CERT_PATH to use the certificates included in the client bundle.

    • Configure kubectl with the kubectl config command.

      Note

      The kubeconfig file is named kube.yaml and is located in the unzipped client bundle directory.

  2. Verify that your client tools communicate with MKE:

    docker version --format '{{.Server.Version}}'
    kubectl config current-context
    

    The expected Docker CLI server version starts with ucp/, and the expected kubectl context name starts with ucp_.

  3. Optional. Change your context directly using the client certificate bundle .zip files. In the directory where you downloaded the user bundle, add the new context:

    cd client-bundle && docker context \
    import myucp ucp-bundle-$USER.zip
    

Note

If you use the client certificate bundle with buildkit, make sure that builds are not accidentally scheduled on manager nodes. For more information, refer to Manage services node deployment.

Configure kubectl with MKE

MKE installations include Kubernetes. Users can deploy, manage, and monitor Kubernetes using either the MKE web UI or kubectl.

To install and use kubectl:

  1. Identify which version of Kubernetes you are running by using the MKE web UI, the MKE API version endpoint, or the Docker CLI docker version command with the client bundle.

    Caution

    Kubernetes requires that kubectl and Kubernetes be within one minor version of each other.

  2. Refer to Kubernetes: Install Tools to download and install the appropriate kubectl binary.

  3. Download the client bundle.

  4. Refer to Configure the client bundle to configure kubectl with MKE using the certificates and keys contained in the client bundle.

  5. Optional. Install Helm, the Kubernetes package manager, and Tiller, the Helm server.

    Caution

    Helm requires MKE 3.1.x or higher.

    To use Helm and Tiller with MKE, grant the default service account within the kube-system namespace the necessary roles:

    kubectl create rolebinding default-view --clusterrole=view \
    --serviceaccount=kube-system:default --namespace=kube-system
    
    kubectl create clusterrolebinding add-on-cluster-admin \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default
    

    Note

    Helm recommends that you specify a Role and RoleBinding to limit the scope of Tiller to a particular namespace. Refer to the official Helm documentation for more information.
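
    A minimal sketch of such a namespace-scoped setup follows. The tiller-world namespace, the tiller service account, and the rule set are illustrative assumptions; adapt them to your environment and treat the Helm documentation as authoritative:

    kubectl create namespace tiller-world
    kubectl create serviceaccount tiller --namespace tiller-world

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: tiller-manager        # illustrative name
      namespace: tiller-world
    rules:
    - apiGroups: ["", "batch", "extensions", "apps"]
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tiller-binding        # illustrative name
      namespace: tiller-world
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: tiller-world
    roleRef:
      kind: Role
      name: tiller-manager
      apiGroup: rbac.authorization.k8s.io
    EOF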

See also

Kubernetes

Administer an MKE cluster

Add labels to cluster nodes

With MKE, you can add labels to your nodes. Labels are metadata that describe the node, such as:

  • node role (development, QA, production)

  • node region (US, EU, APAC)

  • disk type (HDD, SSD)

Once you apply a label to a node, you can specify constraints when deploying a service to ensure that the service only runs on nodes that meet particular criteria.

Hint

Use resource sets (MKE collections or Kubernetes namespaces) to organize access to your cluster, rather than creating labels for authorization and permissions to resources.

Apply labels to a node

The following example procedure applies the ssd label to a node.

  1. Log in to the MKE web UI with administrator credentials.

  2. Click Shared Resources in the navigation menu to expand the selections.

  3. Click Nodes. The details pane will display the full list of nodes.

  4. Click the node on the list that you want to attach labels to. The details pane will transition, presenting the Overview information for the selected node.

  5. Click the settings icon in the upper-right corner to open the Edit Node page.

  6. Navigate to the Labels section and click Add Label.

  7. Add a label, entering disk into the Key field and ssd into the Value field.

  8. Click Save to dismiss the Edit Node page and return to the node Overview.

Hint

You can use the CLI to apply a label to a node:

docker node update --label-add <key>=<value> <node-id>
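
To confirm that the label was applied, you can inspect the node. For example, for the disk label used in this procedure:

docker node inspect --format '{{ index .Spec.Labels "disk" }}' <node-id>
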
Deploy a service with constraints

The following example procedure deploys a service with a constraint that ensures that the service only runs on nodes with SSD storage (node.labels.disk == ssd).


To deploy an application stack with service constraints:

  1. Log in to the MKE web UI with administrator credentials.

  2. Verify that the target node orchestrator is set to Swarm.

  3. Click Shared Resources in the left-side navigation panel to expand the selections.

  4. Click Stacks. The details pane will display the full list of stacks.

  5. Click the Create Stack button to open the Create Application page.

  6. Under 1. Configure Application, enter “wordpress” into the Name field.

  7. Under ORCHESTRATOR NODE, select Swarm Services.

  8. Under 2. Add Application File, paste the following stack file in the docker-compose.yml editor:

    version: "3.1"
    
    services:
      db:
        image: mysql:5.7
        deploy:
          placement:
            constraints:
              - node.labels.disk == ssd
          restart_policy:
            condition: on-failure
        networks:
          - wordpress-net
        environment:
          MYSQL_ROOT_PASSWORD: wordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress
      wordpress:
        depends_on:
          - db
        image: wordpress:latest
        deploy:
          replicas: 1
          placement:
            constraints:
              - node.labels.disk == ssd
          restart_policy:
            condition: on-failure
            max_attempts: 3
        networks:
          - wordpress-net
        ports:
          - "8000:80"
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_PASSWORD: wordpress
    
    networks:
      wordpress-net:
    
  9. Click Create to deploy the stack.

  10. Once the stack deployment completes, click Done to return to the stacks list, which now includes your newly created stack.


To verify service tasks deployed to labeled node:

  1. In the left-side navigation panel, navigate to Shared Resources > Nodes. The details pane will display the full list of nodes.

  2. Click the node with the disk label.

  3. In the details pane, click the Metrics tab to verify that WordPress containers are scheduled on the node.

  4. In the left-side navigation panel, navigate to Shared Resources > Nodes.

  5. Click any node that does not have the disk label.

  6. In the details pane, click the Metrics tab to verify that there are no WordPress containers scheduled on the node.
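
You can also check placement from the CLI with the client bundle sourced. The service names below assume the wordpress stack name used in this example:

docker stack services wordpress
docker service ps wordpress_db --format '{{.Name}} {{.Node}}'
docker service ps wordpress_wordpress --format '{{.Name}} {{.Node}}'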

Add Swarm placement constraints

If a node is set to use Kubernetes as its orchestrator while simultaneously running Swarm services, you must deploy placement constraints to prevent those services from being scheduled on the node.

The necessary service constraints will be automatically adopted by any new MKE-created Swarm services, as well as by older Swarm services that you have updated. MKE does not automatically add placement constraints, however, to Swarm services that were created using older versions of MKE, as to do so would restart the service tasks.


To add placement constraints to older Swarm services:

  1. Download and configure the client bundle.

  2. Identify the Swarm services that do not have placement constraints:

    services=$(docker service ls -q)
    for service in $services; do
        if docker service inspect $service --format '{{.Spec.TaskTemplate.Placement.Constraints}}' | grep -q -v 'node.labels.com.docker.ucp.orchestrator.swarm==true'; then
            name=$(docker service inspect $service --format '{{.Spec.Name}}')
            if [ "$name" = "ucp-agent" ] || [ "$name" = "ucp-agent-win" ] || [ "$name" = "ucp-agent-s390x" ]; then
                continue
            fi
            echo "Service $name (ID: $service) is missing the node.labels.com.docker.ucp.orchestrator.swarm==true placement constraint"
        fi
    done
    
  3. Add placement constraints to the Swarm services you identified:

    Note

    All service tasks will restart, thus causing some amount of service downtime.

    services=$(docker service ls -q)
    for service in $services; do
        if docker service inspect $service --format '{{.Spec.TaskTemplate.Placement.Constraints}}' | grep -q -v 'node.labels.com.docker.ucp.orchestrator.swarm==true'; then
            name=$(docker service inspect $service --format '{{.Spec.Name}}')
            if [ "$name" = "ucp-agent" ] || [ "$name" = "ucp-agent-win" ] || [ "$name" = "ucp-agent-s390x" ]; then
                continue
            fi
            echo "Updating service $name (ID: $service)"
            docker service update --detach=true --constraint-add node.labels.com.docker.ucp.orchestrator.swarm==true $service
        fi
    done
    
Add or remove a service constraint using the MKE web UI

You can declare the deployment constraints in your docker-compose.yml file or when you create a stack. Also, you can apply constraints when you create a service.

To add or remove a service constraint:

  1. Verify whether a service has deployment constraints:

    1. Navigate to the Services page and select that service.

    2. In the details pane, click Constraints to list the constraint labels.

  2. Edit the constraints on the service:

    1. Click Configure and select Details to open the Update Service page.

    2. Click Scheduling to view the constraints.

    3. Add or remove deployment constraints.
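
As an alternative to the web UI, you can also set constraints directly from the CLI when creating or updating a service. A brief example using the disk label from earlier in this section, with an illustrative nginx-ssd service name:

docker service create --name nginx-ssd --constraint 'node.labels.disk==ssd' nginx:latest

docker service update --constraint-rm 'node.labels.disk==ssd' nginx-ssd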

Add SANs to cluster certificates

A SAN (Subject Alternative Name) is a structured means for associating various values (such as domain names, IP addresses, email addresses, URIs, and so on) with a security certificate.

MKE always runs with HTTPS enabled. As such, whenever you connect to MKE, you must ensure that the MKE certificates recognize the host name in use. For example, if MKE is behind a load balancer that forwards traffic to your MKE instance, your requests will not be for the MKE host name or IP address but for the host name of the load balancer. Thus, MKE will reject the requests, unless you include the address of the load balancer as a SAN in the MKE certificates.

Note

  • To use your own TLS certificates, confirm first that these certificates have the correct SAN values.

  • To use the self-signed certificate that MKE offers out-of-the-box, you can use the --san argument to set up the SANs during MKE deployment.

To add new SANs using the MKE web UI:

  1. Log in to the MKE web UI using administrator credentials.

  2. Navigate to the Nodes page.

  3. Click on a manager node to display the details pane for that node.

  4. Click Configure and select Details.

  5. In the SANs section, click Add SAN and enter one or more SANs for the cluster.

  6. Click Save.

  7. Repeat for every existing manager node in the cluster.

    Note

    Thereafter, the SANs are automatically applied to any new manager nodes that join the cluster.

To add new SANs using the MKE CLI:

  1. Get the current set of SANs for the given manager node:

    docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
    

    Example of system response:

    default-cs,127.0.0.1,172.17.0.1
    
  2. Append the desired SAN to the list (for example, default-cs,127.0.0.1,172.17.0.1,example.com) and run:

    docker node update --label-add com.docker.ucp.SANs=<SANs-list> <node-id>
    

    Note

    <SANs-list> is the comma-separated list of SANs with your new SAN appended at the end.

  3. Repeat the command sequence for each manager node.
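
For example, with the SAN list shown in the example response and example.com appended, the update command takes the following form:

docker node update --label-add com.docker.ucp.SANs=default-cs,127.0.0.1,172.17.0.1,example.com <node-id>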

Collect MKE cluster metrics with Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit to which you can configure MKE as a target.

Prometheus runs as a Kubernetes DaemonSet that, by default, is scheduled on every manager node. A key benefit of this approach is that you can set the DaemonSet to not schedule on any nodes, which effectively disables Prometheus if you do not use the MKE web interface.

Along with events and logs, metrics are data sources that provide a view into your cluster, presenting numerical data values that have a time-series component. There are several sources from which you can derive metrics, each providing different meanings for a business and its applications.

As the metrics data is stored locally on disk for each Prometheus server, it does not replicate on new managers or if you schedule Prometheus to run on a new node. The metrics are kept no longer than 24 hours.

MKE metrics types

MKE provides a base set of metrics that gets you into production without having to rely on external or third-party tools. Mirantis strongly encourages, though, the use of additional monitoring to provide more comprehensive visibility into your specific MKE environment.

Metrics types

Metric type

Description

Business

High-level aggregate metrics that typically combine technical, financial, and organizational data to create IT infrastructure information for business leaders. Examples of business metrics include:

  • Company or division-level application downtime

  • Aggregation resource utilization

  • Application resource demand growth

Application

Metrics in the domain of APM tools (such as AppDynamics and DynaTrace) that supply information on the state or performance of the application itself.

  • Service state

  • Container platform

  • Host infrastructure

Service

Metrics on the state of services that are running on the container platform. Such metrics have very low cardinality, meaning the values are typically from a small fixed set of possibilities (commonly binary).

  • Application health

  • Convergence of Kubernetes deployments and Swarm services

  • Cluster load by number of services or containers or pods

Note

Web UI disk usage (including free space) reflects only the MKE managed portion of the file system: /var/lib/docker. To monitor the total space available on each filesystem of an MKE worker or manager, deploy a third-party monitoring solution to oversee the operating system.

See also

Kubernetes

Metrics labels

The metrics that MKE exposes in Prometheus have standardized labels, depending on the target resource.

Container labels

Label name

Value

collection

The collection ID of the collection the container is in, if any.

container

The ID of the container.

image

The name of the container image.

manager

Set to true if the container node is an MKE manager.

name

The container name.

podName

The pod name, if the container is part of a Kubernetes Pod.

podNamespace

The pod namespace, if the container is part of a Kubernetes Pod namespace.

podContainerName

The container name in the pod spec, if the container is part of a Kubernetes pod.

service

The service ID, if the container is part of a Swarm service.

stack

The stack name, if the container is part of a Docker Compose stack.

Container networking labels

Label name

Value

collection

The collection ID of the collection the container is in, if any.

container

The ID of the container.

image

The name of the container image.

manager

Set to true if the container node is an MKE manager.

name

The container name.

network

The ID of the network.

podName

The pod name, if the container is part of a Kubernetes pod.

podNamespace

The pod namespace, if the container is part of a Kubernetes pod namespace.

podContainerName

The container name in the pod spec, if the container is part of a Kubernetes pod.

service

The service ID, if the container is part of a Swarm service.

stack

The stack name, if the container is part of a Docker Compose stack.

Note

The container networking labels are the same as the Container labels, with the addition of network.

Node labels

Label name

Value

manager

Set to true if the node is an MKE manager.

See also

Kubernetes

MKE Metrics exposed by Prometheus

MKE exports metrics on every node and also exports additional metrics from every controller.

Node-sourced MKE metrics

The metrics that MKE exports from nodes are specific to those nodes (for example, the total memory on that node).

The tables below offer detail on the node-sourced metrics that MKE exposes in Prometheus, each prefixed with ucp_.

ucp_engine_container_cpu_percent

Units

Percentage

Description

Percentage of CPU time in use by the container

Labels

Container

ucp_engine_container_cpu_total_time_nanoseconds

Units

Nanoseconds

Description

Total CPU time used by the container

Labels

Container

ucp_engine_container_health

Units

0.0 or 1.0

Description

The container health, according to its healthcheck.

The 0 value indicates that the container is not reporting as healthy, which is likely because it either does not have a healthcheck defined or because healthcheck results have not yet been returned

Labels

Container

ucp_engine_container_memory_max_usage_bytes

Units

Bytes

Description

Maximum memory in use by the container in bytes

Labels

Container

ucp_engine_container_memory_usage_bytes

Units

Bytes

Description

Current memory in use by the container in bytes

Labels

Container

ucp_engine_container_memory_usage_percent

Units

Percentage

Description

Percentage of total node memory currently in use by the container

Labels

Container

ucp_engine_container_network_rx_bytes_total

Units

Bytes

Description

Number of bytes received by the container over the network in the last sample

Labels

Container networking

ucp_engine_container_network_rx_dropped_packets_total

Units

Number of packets

Description

Number of packets bound for the container over the network that were dropped in the last sample

Labels

Container networking

ucp_engine_container_network_rx_errors_total

Units

Number of errors

Description

Number of received network errors for the container over the network in the last sample

Labels

Container networking

ucp_engine_container_network_rx_packets_total

Units

Number of packets

Description

Number of packets received by the container over the network in the last sample

Labels

Container networking

ucp_engine_container_network_tx_bytes_total

Units

Bytes

Description

Number of bytes sent by the container over the network in the last sample

Labels

Container networking

ucp_engine_container_network_tx_dropped_packets_total

Units

Number of packets

Description

Number of packets sent from the container over the network that were dropped in the last sample

Labels

Container networking

ucp_engine_container_network_tx_errors_total

Units

Number of errors

Description

Number of sent network errors for the container on the network in the last sample

Labels

Container networking

ucp_engine_container_network_tx_packets_total

Units

Number of packets

Description

Number of sent packets for the container over the network in the last sample

Labels

Container networking

ucp_engine_container_unhealth

Units

0.0 or 1.0

Description

Indicates whether the container is healthy, according to its healthcheck.

The 0 value indicates that the container is not reporting as healthy, which is likely because it either does not have a healthcheck defined or because healthcheck results have not yet been returned

Labels

Container

ucp_engine_containers

Units

Number of containers

Description

Total number of containers on the node

Labels

Node

ucp_engine_cpu_total_time_nanoseconds

Units

Nanoseconds

Description

System CPU time used by the container

Labels

Container

ucp_engine_disk_free_bytes

Units

Bytes

Description

Free disk space on the Docker root directory on the node, in bytes. This metric is not available for Windows nodes

Labels

Node

ucp_engine_disk_total_bytes

Units

Bytes

Description

Total disk space on the Docker root directory on this node in bytes. Note that the ucp_engine_disk_free_bytes metric is not available for Windows nodes

Labels

Node

ucp_engine_images

Units

Number of images

Description

Total number of images on the node

Labels

Node

ucp_engine_memory_total_bytes

Units

Bytes

Description

Total amount of memory on the node

Labels

Node

ucp_engine_networks

Units

Number of networks

Description

Total number of networks on the node

Labels

Node

ucp_engine_num_cpu_cores

Units

Number of cores

Description

Number of CPU cores on the node

Labels

Node

ucp_engine_volumes

Units

Number of volumes

Description

Total number of volumes on the node

Labels

Node
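
As an illustration of how these metrics can be combined, a PromQL expression similar to the following computes the percentage of memory in use on each manager node; the same style of query appears in the audit log sample later in this guide:

(sum by (instance) (ucp_engine_container_memory_usage_bytes{manager="true"}))
  / (sum by (instance) (ucp_engine_memory_total_bytes{manager="true"})) * 100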

Controller-sourced MKE metrics

The metrics that MKE exports from controllers are cluster-scoped (for example, the total number of Swarm services).

The tables below offer detail on the controller-sourced metrics that MKE exposes in Prometheus, each prefixed with ucp_.

ucp_controller_services

Units

Number of services

Description

Total number of Swarm services

Labels

Not applicable

ucp_engine_node_health

Units

0.0 or 1.0

Description

Health status of the node, as determined by MKE

Labels

nodeName: node name, nodeAddr: node IP address

ucp_engine_pod_container_ready

Units

0.0 or 1.0

Description

Readiness of the container in a Kubernetes pod, as determined by its readiness probe

Labels

Pod

ucp_engine_pod_ready

Units

0.0 or 1.0

Description

Readiness of the Kubernetes pod, as determined by its readiness probes

Labels

Pod

See also

Kubernetes Pods

Deploy Prometheus on worker nodes

MKE deploys Prometheus by default on the manager nodes to provide a built-in metrics back end. For cluster sizes over 100 nodes, or if you need to scrape metrics from Prometheus instances, Mirantis recommends that you deploy Prometheus on dedicated worker nodes in the cluster.

To deploy Prometheus on worker nodes:

  1. Source an admin bundle.

  2. Verify that ucp-metrics pods are running on all managers:

    $ kubectl -n kube-system get pods -l k8s-app=ucp-metrics -o wide
    
    NAME               READY  STATUS   RESTARTS  AGE  IP            NODE
    ucp-metrics-hvkr7  3/3    Running  0         4h   192.168.80.66 3a724a-0
    
  3. Add a Kubernetes node label to one or more workers. The following example adds a label with the key ucp-metrics and an empty value to a node named 3a724a-1.

    $ kubectl label node 3a724a-1 ucp-metrics=
    
    node "3a724a-1" labeled
    

    SELinux Prometheus Deployment

    If you use SELinux, label your ucp-node-certs directories properly on the worker nodes before you move the ucp-metrics workload to them. To run ucp-metrics on a worker node, update the ucp-node-certs label by running:

    sudo chcon -R system_u:object_r:container_file_t:s0 /var/lib/docker/volumes/ucp-node-certs/_data

  4. Patch the ucp-metrics DaemonSet’s nodeSelector with the same key and value in use for the node label. This example shows the key ucp-metrics and the value "".

    $ kubectl -n kube-system patch daemonset ucp-metrics --type json -p \
      '[{"op": "replace", "path": "/spec/template/spec/nodeSelector", "value": {"ucp-metrics": ""}}]'

    daemonset "ucp-metrics" patched
    
  5. Confirm that ucp-metrics pods are running only on the labeled workers.

    $ kubectl -n kube-system get pods -l k8s-app=ucp-metrics -o wide
    
    NAME               READY  STATUS       RESTARTS  AGE IP           NODE
    ucp-metrics-88lzx  3/3    Running      0         12s 192.168.83.1 3a724a-1
    ucp-metrics-hvkr7  3/3    Terminating  0         4h 192.168.80.66 3a724a-0
    

See also

Kubernetes

Configure external Prometheus to scrape metrics from MKE

To configure your external Prometheus server to scrape metrics from Prometheus in MKE:

  1. Source an admin bundle.

  2. Create a Kubernetes secret that contains your bundle TLS material.

    (cd $DOCKER_CERT_PATH && kubectl create secret generic prometheus --from-file=ca.pem --from-file=cert.pem --from-file=key.pem)
    
  3. Create a Prometheus deployment and ClusterIP service using YAML.

    On AWS with the Kubernetes cloud provider configured:

    1. Replace ClusterIP with LoadBalancer in the service YAML.

    2. Access the service through the load balancer.

    3. If you run Prometheus external to MKE, change the domain for the inventory container in the Prometheus deployment from ucp-controller.kube-system.svc.cluster.local to an external domain, to access MKE from the Prometheus node.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus
    data:
      prometheus.yaml: |
        global:
          scrape_interval: 10s
        scrape_configs:
        - job_name: 'ucp'
          tls_config:
            ca_file: /bundle/ca.pem
            cert_file: /bundle/cert.pem
            key_file: /bundle/key.pem
            server_name: proxy.local
          scheme: https
          file_sd_configs:
          - files:
            - /inventory/inventory.json
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: prometheus
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: prometheus
      template:
        metadata:
          labels:
            app: prometheus
        spec:
          containers:
          - name: inventory
            image: alpine
            command: ["sh", "-c"]
            args:
            - apk add --no-cache curl &&
              while :; do
                curl -Ss --cacert /bundle/ca.pem --cert /bundle/cert.pem --key /bundle/key.pem --output /inventory/inventory.json https://ucp-controller.kube-system.svc.cluster.local/metricsdiscovery;
                sleep 15;
              done
            volumeMounts:
            - name: bundle
              mountPath: /bundle
            - name: inventory
              mountPath: /inventory
          - name: prometheus
            image: prom/prometheus
            command: ["/bin/prometheus"]
            args:
            - --config.file=/config/prometheus.yaml
            - --storage.tsdb.path=/prometheus
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            volumeMounts:
            - name: bundle
              mountPath: /bundle
            - name: config
              mountPath: /config
            - name: inventory
              mountPath: /inventory
          volumes:
          - name: bundle
            secret:
              secretName: prometheus
          - name: config
            configMap:
              name: prometheus
          - name: inventory
            emptyDir:
              medium: Memory
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus
    spec:
      ports:
      - port: 9090
        targetPort: 9090
      selector:
        app: prometheus
      sessionAffinity: ClientIP
    EOF
    
  4. Determine the service ClusterIP:

    $ kubectl get service prometheus
    
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    prometheus   ClusterIP   10.96.254.107   <none>        9090/TCP   1h
    
  5. Forward port 9090 on the local host to the ClusterIP. The tunnel you create does not need to be kept alive as its only purpose is to expose the Prometheus UI.

    ssh -L 9090:10.96.254.107:9090 ANY_NODE
    
  6. Visit http://127.0.0.1:9090 to explore the MKE metrics that Prometheus is collecting.
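
As a quick verification that scraping works, you can run a query against one of the MKE metrics described earlier in this guide, for example:

ucp_controller_services

If the query returns no data, check the Targets page in the Prometheus UI to confirm that the ucp job is reachable.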

See also

Kubernetes


Configure native Kubernetes role-based access control

MKE uses native Kubernetes RBAC, which is active by default for Kubernetes clusters. The YAML files of many ecosystem applications and integrations use Kubernetes RBAC to access service accounts. Also, organizations looking to run MKE both on-premises and in hosted cloud services want to run Kubernetes applications in both environments without having to manually change RBAC in their YAML file.

Note

Kubernetes and Swarm roles have separate views. Using the MKE web UI, you can view all the roles for a particular cluster:

  1. Click Access Control in the navigation menu at the left.

  2. Click Roles.

  3. Select the Kubernetes tab or the Swarm tab to view the specific roles for each.

Create a Kubernetes role

You create Kubernetes roles either through the CLI using Kubernetes kubectl tool or through the MKE web UI.

To create a Kubernetes role using the MKE web UI:

  1. Log in to the MKE web UI.

  2. In the navigation menu at the left, click Access Control to display the available options.

  3. Click Roles.

  4. At the top of the details pane, click the Kubernetes tab.

  5. Click Create to open the Create Kubernetes Object page.

  6. Click Namespace to select a namespace for the role from one of the available options.

  7. Provide the YAML file for the role. To do this, either enter it in the Object YAML editor, or upload an existing .yml file using the Click to upload a .yml file selection link at the right.

  8. Click Create to complete role creation.
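
To create a role from the CLI instead, apply a Role manifest with kubectl after configuring the client bundle. The following is a minimal sketch; the pod-reader role name, the default namespace, and the rule set are illustrative:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF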


Create a Kubernetes role grant

Kubernetes provides two types of role grants:

  • ClusterRoleBinding (applies to all namespaces)

  • RoleBinding (applies to a specific namespace)

To create a grant for a Kubernetes role in the MKE web UI:

  1. Log in to the MKE web UI.

  2. In the navigation menu at the left, click Access Control to display the available options.

  3. Click the Grants option.

  4. At the top of the details pane, click the Kubernetes tab. All existing grants to Kubernetes roles are present in the details pane.

  5. Click Create Role Binding to open the Create Role Binding page.

  6. Select the subject type at the top of the 1. Subject section (Users, Organizations, or Service Account).

  7. Create a role binding for the selected subject type:

    • Users: Select a type from the User drop-down list.

    • Organizations: Select a type from the Organization drop-down list. Optionally, you can also select a team using the Team(optional) drop-down list, if any have been established.

    • Service Account: Select a NAMESPACE from the Namespace drop-down list, then a type from the Service Account drop-down list.

  8. Click Next to activate the 2. Resource Set section.

  9. Select a resource set for the subject.

    By default, the default namespace is indicated. To use a different namespace, select the Select Namespace button associated with the desired namespace.

    For ClusterRoleBinding, slide the Apply Role Binding to all namespace (Cluster Role Binding) selector to the right.

  10. Click Next to activate the 3. Role section.

  11. Select the role type.

    • Role

    • Cluster Role

    Note

    Cluster Role type is the only role type available if you enabled Apply Role Binding to all namespace (Cluster Role Binding) in the 2. Resource Set section.

  12. Select the role from the drop-down list.

  13. Click Create to complete grant creation.
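
The same bindings can also be created with kubectl. A minimal sketch with illustrative names (the pod-reader role from the previous example and a user named jane):

kubectl create rolebinding pod-reader-binding --role=pod-reader \
  --user=jane --namespace=default

kubectl create clusterrolebinding cluster-admin-jane \
  --clusterrole=cluster-admin --user=jane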

See also

Kubernetes

MKE audit logging

Audit logs are a chronological record of security-relevant activities by individual users, administrators, or software components that have had an effect on an MKE system. They focus on external user/agent actions and security, rather than attempting to understand state or events of the system itself.

Audit logs capture all HTTP actions (GET, PUT, POST, PATCH, DELETE) that are invoked against MKE API, Swarm API, and Kubernetes API endpoints (except those on the ignored list), and are sent to Mirantis Container Runtime via stdout.

The benefits that audit logs provide include:

Historical troubleshooting

You can use audit logs to determine a sequence of past events that can help explain why an issue occurred.

Security analysis and auditing

A full record of all user interactions with the container infrastructure can provide your security team with the visibility necessary to root out questionable or unauthorized access attempts.

Chargeback

Use audit log data about resource usage to generate chargeback information.

Alerting

By watching the event stream or the notifications that events create, you can build alerting features on top of event tools that generate alerts for operations teams (PagerDuty, OpsGenie, Slack, or custom solutions).

Logging levels

MKE provides three levels of audit logging to administrators:

None

Audit logging is disabled.

Metadata

Includes:
  • Method and API endpoint for the request

  • MKE user who made the request

  • Response status (success or failure)

  • Timestamp of the call

  • Object ID of any created or updated resource (for create or update API calls). The names of created or updated resources are not included.

  • License key

  • Remote address

Request

Includes all fields from the Metadata level, as well as the request payload.

Once you enable MKE audit logging, the audit logs will collect within the container logs of the ucp-controller container on each MKE manager node.

Note

Be sure to configure a logging driver with log rotation set, as audit logging can generate a large amount of data.

Enable MKE audit logging

You can enable MKE audit logging using the MKE web user interface, the MKE API, and the MKE configuration file.

Enable MKE audit logging using the web UI
  1. Log in to the MKE web user interface.

  2. Click admin to open the navigation menu at the left.

  3. Click Admin Settings.

  4. Click Logs & Audit Logs to open the Logs & Audit Logs details pane.

  5. In the Configure Audit Log Level section, select the relevant logging level.

  6. Click Save.

Enable MKE audit logging using the API
  1. Download the MKE client bundle from the command line, as described in Download the client bundle.

  2. Retrieve the JSON file for current audit log configuration:

    export DOCKER_CERT_PATH=~/ucp-bundle-dir/
    curl --cert ${DOCKER_CERT_PATH}/cert.pem --key ${DOCKER_CERT_PATH}/key.pem --cacert ${DOCKER_CERT_PATH}/ca.pem -k -X GET https://ucp-domain/api/ucp/config/logging > auditlog.json
    
  3. In auditlog.json, set the auditLevel field to metadata or request:

    {
        "logLevel": "INFO",
        "auditLevel": "metadata",
        "supportDumpIncludeAuditLogs": false
    }
    
  4. Send the JSON request for the audit logging configuration with the same API path, but using the PUT method:

    curl --cert ${DOCKER_CERT_PATH}/cert.pem --key ${DOCKER_CERT_PATH}/key.pem \
    --cacert ${DOCKER_CERT_PATH}/ca.pem -k -H "Content-Type: application/json" \
    -X PUT --data "$(cat auditlog.json)" https://ucp-domain/api/ucp/config/logging
    
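To confirm the change, retrieve the logging configuration again and check the auditLevel field:

curl --cert ${DOCKER_CERT_PATH}/cert.pem --key ${DOCKER_CERT_PATH}/key.pem \
  --cacert ${DOCKER_CERT_PATH}/ca.pem -k -X GET https://ucp-domain/api/ucp/config/logging
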
Enable MKE audit logging using the configuration file

You can enable MKE audit logging using the MKE configuration file before or after MKE installation.

The section of the MKE configuration file that controls MKE auditing logging is [audit_log_configuration]:

[audit_log_configuration]
  level = "metadata"
  support_dump_include_audit_logs = false

The level setting supports the following variables:

  • ""

  • "metadata"

  • "request"

Caution

The support_dump_include_audit_logs flag specifies whether user identification information from the ucp-controller container logs is included in the support bundle. To prevent this information from being sent with the support bundle, verify that support_dump_include_audit_logs is set to false. When disabled, the support bundle collection tool filters out any lines from the ucp-controller container logs that contain the substring auditID.

Access audit logs using the docker CLI

The audit logs are exposed through the ucp-controller logs. You can access these logs locally through the Docker CLI.

Note

You can also access MKE audit logs using an external container logging solution, such as ELK.

To access audit logs using the Docker CLI:

  1. Source a MKE client bundle.

  2. Run docker logs to obtain audit logs.

    The following example tails the command to show the last log entry.

    $ docker logs ucp-controller --tail 1
    
    {"audit":{"auditID":"f8ce4684-cb55-4c88-652c-d2ebd2e9365e","kind":"docker-swarm","level":"metadata","metadata":{"creationTimestamp":null},"requestReceivedTimestamp":"2019-01-30T17:21:45.316157Z","requestURI":"/metricsservice/query?query=(%20(sum%20by%20(instance)%20(ucp_engine_container_memory_usage_bytes%7Bmanager%3D%22true%22%7D))%20%2F%20(sum%20by%20(instance)%20(ucp_engine_memory_total_bytes%7Bmanager%3D%22true%22%7D))%20)%20*%20100\u0026time=2019-01-30T17%3A21%3A45.286Z","sourceIPs":["172.31.45.250:48516"],"stage":"RequestReceived","stageTimestamp":null,"timestamp":null,"user":{"extra":{"licenseKey":["FHy6u1SSg_U_Fbo24yYUmtbH-ixRlwrpEQpdO_ntmkoz"],"username":["admin"]},"uid":"4ec3c2fc-312b-4e66-bb4f-b64b8f0ee42a","username":"4ec3c2fc-312b-4e66-bb4f-b64b8f0ee42a"},"verb":"GET"},"level":"info","msg":"audit","time":"2019-01-30T17:21:45Z"}
    

    Sample audit log for a Kubernetes cluster:

    {"audit": {
          "metadata": {...},
          "level": "Metadata",
          "timestamp": "2018-08-07T22:10:35Z",
          "auditID": "7559d301-fa6b-4ad6-901c-b587fab75277",
          "stage": "RequestReceived",
          "requestURI": "/api/v1/namespaces/default/pods",
          "verb": "list",
          "user": {"username": "alice",...},
          "sourceIPs": ["127.0.0.1"],
          ...,
          "requestReceivedTimestamp": "2018-08-07T22:10:35.428850Z"}}
    

    Sample audit log for a Swarm cluster:

    {"audit": {
          "metadata": {...},
          "level": "Metadata",
          "timestamp": "2018-08-07T22:10:35Z",
          "auditID": "7559d301-94e7-4ad6-901c-b587fab31512",
          "stage": "RequestReceived",
          "requestURI": "/v1.30/configs/create",
          "verb": "post",
          "user": {"username": "alice",...},
          "sourceIPs": ["127.0.0.1"],
          ...,
          "requestReceivedTimestamp": "2018-08-07T22:10:35.428850Z"}}
    
API endpoints logging constraints

With regard to audit logging, for reasons having to do with system security a number of MKE API endpoints are either ignored or have their information redacted.

API endpoints ignored

The following API endpoints are ignored since they are not considered security events and can create a large amount of log entries:

  • /_ping

  • /ca

  • /auth

  • /trustedregistryca

  • /kubeauth

  • /metrics

  • /info

  • /version*

  • /debug

  • /openid_keys

  • /apidocs

  • /kubernetesdocs

  • /manage

API endpoints information redacted

For security purposes, information for the following API endpoints is redacted from the audit logs:

  • /secrets/create (POST)

  • /secrets/{id}/update (POST)

  • /swarm/join (POST)

  • /swarm/update (POST)

  • /auth/login (POST)

  • Kubernetes secrets create/update endpoints

See also

Kubernetes


Enable MKE telemetry

You can set MKE to automatically record and transmit data to Mirantis through an encrypted channel for monitoring and analysis purposes. The data collected provides the Mirantis Customer Success Organization with information that helps us to better understand the operational use of MKE by our customers. It also provides key feedback in the form of product usage statistics, which enable our product teams to enhance Mirantis products and services.

Specifically, with MKE you can send hourly usage reports, as well as information on API and UI usage.

Caution

To send the telemetry, verify that dockerd and the MKE application container can resolve api.segment.io and create a TCP (HTTPS) connection on port 443.

To enable telemetry in MKE:

  1. Log in to the MKE web UI as an administrator.

  2. At the top of the navigation menu at the left, click the user name drop-down to display the available options.

  3. Click Admin Settings to display the available options.

  4. Click Usage to open the Usage Reporting screen.

  5. Toggle the Enable API and UI tracking slider to the right.

  6. (Optional) Enter a unique label to identify the cluster in the usage reporting.

  7. Click Save.

Enable and integrate SAML authentication

Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between parties. It is commonly supported by enterprise authentication systems. SAML-based single sign-on (SSO) gives you access to MKE through a SAML 2.0-compliant identity provider.

MKE supports the Okta and ADFS identity providers.

The SAML integration process is as follows.

  1. Configure the Identity Provider (IdP).

  2. Enable SAML and configure MKE as the Service Provider under Admin Settings > Authentication and Authorization.

  3. Create (Edit) Teams to link with the Group memberships. This updates team membership information when a user signs in with SAML.

Note

If you enable LDAP integration, you cannot enable SAML for authentication. Note, though, that this does not affect local MKE user account authentication.

Configure SAML integration on identity provider

Identity providers require certain values to successfully integrate with MKE. As these values vary depending on the identity provider, consult your identity provider documentation for instructions on how to best provide the needed information.

Okta integration values

Okta integration requires the following values:

Value

Description

URL for single signon (SSO)

URL for MKE, qualified with /enzi/v0/saml/acs. For example, https://111.111.111.111/enzi/v0/saml/acs.

Service provider audience URI

URL for MKE, qualified with /enzi/v0/saml/metadata. For example, https://111.111.111.111/enzi/v0/saml/metadata.

NameID format

Select Unspecified.

Application user name

Email. For example, a custom ${f:substringBefore(user.email, "@")} specifies the user name portion of the email address.

Attribute Statements

  • Name: fullname
    Value: user.displayName

Group Attribute Statement

  • Name: member-of
    Filter: (user defined) for associate group membership.
    The group name is returned with the assertion.
  • Name: is-admin
    Filter: (user defined) for identifying whether the user is an admin.

Okta configuration

When two or more group names are expected to return with the assertion, use the regex filter. For example, use the value apple|orange to return groups apple and orange.

ADFS integration values

To enable ADFS integration:

  1. Add a relying party trust.

  2. Obtain the service provider metadata URI.

    The service provider metadata URI value is the URL for MKE, qualified with /enzi/v0/saml/metadata. For example, https://111.111.111.111/enzi/v0/saml/metadata.

  3. Add claim rules.

    1. Convert values from AD to SAML

      • Display-name : Common Name

      • E-Mail-Addresses : E-Mail Address

      • SAM-Account-Name : Name ID

    2. Create a full name for MKE (custom rule):

      c:[Type == "http://schemas.xmlsoap.org/claims/CommonName"]
        => issue(Type = "fullname", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);
      
    3. Transform account name to Name ID:

      • Incoming type: Name ID

      • Incoming format: Unspecified

      • Outgoing claim type: Name ID

      • Outgoing format: Transient ID

    4. Pass admin value to allow admin access based on AD group. Send group membership as claim:

      • Users group: your admin group

      • Outgoing claim type: is-admin

      • Outgoing claim value: 1

    5. Configure group membership for more complex organizations, with multiple groups able to manage access.

      • Send LDAP attributes as claims

      • Attribute store: Active Directory

        • Add two rows with the following information:

          • LDAP attribute = email address; outgoing claim type: email address

          • LDAP attribute = Display-Name; outgoing claim type: common name

      • Mapping:

        • Token-Groups - Unqualified Names : member-of

Note

Once you enable SAML, Service Provider metadata is available at https://<SPHost>/enzi/v0/saml/metadata. The metadata link is also labeled as entityID.

Only POST binding is supported for the Assertion Consumer Service, which is located at https://<SP Host>/enzi/v0/saml/acs.

Configure SAML integration on MKE

SAML configuration requires that you know the metadata URL for your chosen identity provider, as well as the URL for the MKE host that contains the IP address or domain of your MKE installation.

To configure SAML integration on MKE:

  1. Log in to the MKE web UI.

  2. In the navigation menu at the left, click the user name drop-down to display the available options.

  3. Click Admin Settings to display the available options.

  4. Click Authentication & Authorization.

  5. In the Identity Provider section in the details pane, move the slider next to SAML to enable the SAML settings.

  6. In the SAML idP Server subsection, enter the URL for the identity provider metadata in the IdP Metadata URL field.

    Note

    If the metadata URL is publicly certified, you can continue with the default settings:

    • Skip TLS Verification unchecked

    • Root Certificates Bundle blank

    Mirantis recommends TLS verification in production environments. If the metadata URL cannot be certified by the default certificate authority store, you must provide the certificates from the identity provider in the Root Certificates Bundle field.

  7. In the SAML Service Provider subsection, in the MKE Host field, enter the URL that includes the IP address or domain of your MKE installation.

    The port number is optional. The current IP address or domain displays by default.

  8. (Optional) Customize the text of the sign-in button by entering the text for the button in the Customize Sign In Button Text field. By default, the button text is Sign in with SAML.

  9. Copy the SERVICE PROVIDER METADATA URL, the ASSERTION CONSUMER SERVICE (ACS) URL, and the SINGLE LOGOUT (SLO) URL to paste into the identity provider workflow.

  10. Click Save.

Note

  • To configure a service provider, enter the Identity Provider’s metadata URL to obtain its metadata. To access the URL, you may need to provide the CA certificate that can verify the remote server.

  • To link group membership with users, use the Edit or Create team dialog to associate a SAML group assertion with an MKE team, thus synchronizing user team membership when the user logs in.

SAML security considerations

From the MKE web UI you can download a client bundle with which you can access MKE using the CLI and the API.

A client bundle is a group of certificates that enable command-line access and API access to the software. It lets you authorize a remote Docker engine to access specific user accounts that are managed in MKE, absorbing all associated RBAC controls in the process. Once you obtain the client bundle, you can execute Docker Swarm commands from your remote machine to take effect on the remote cluster.

Previously-authorized client bundle users can still access MKE, regardless of the newly configured SAML access controls.

Mirantis recommends that you take the following steps to ensure that access from the client bundle stays in sync with the identity provider, and to thus prevent any previously authorized users from accessing MKE through their existing client bundles:

  1. Remove the user account from MKE that grants the client bundle access.

  2. If group membership in the identity provider changes, replicate the change in MKE.

  3. Continue using LDAP to sync group membership.

To download the client bundle:

  1. Log in to the MKE web UI.

  2. In the navigation menu at the left, click the user name drop-down to display the available options.

  3. Click your account name to display the available options.

  4. Click My Profile.

  5. Click the New Client Bundle drop-down in the details pane and select Generate Client Bundle.

  6. (Optional) Enter a name for the bundle into the Label field.

  7. Click Confirm to initiate the bundle download.

Enable Helm with MKE

To use Helm with MKE, you must define the necessary roles in the kube-system default service account.

Note

For comprehensive information on the use of Helm, refer to the Helm user documentation.

To enable Helm with MKE, enter the following kubectl commands in sequence:

kubectl create rolebinding default-view --clusterrole=view
--serviceaccount=kube-system:default --namespace=kube-system

kubectl create clusterrolebinding add-on-cluster-admin
--clusterrole=cluster-admin --serviceaccount=kube-system:default

Integrate SCIM

System for Cross-domain Identity Management (SCIM) is a standard for automating the exchange of user identity information between identity domains or IT systems. It offers an LDAP alternative for provisioning and managing users and groups in MKE, as well as for syncing users and groups with an upstream identity provider. Using SCIM schema and API, you can utilize Single sign-on services (SSO) across various tools.

Mirantis certifies the use of Okta 3.2.0; however, MKE offers the discovery endpoints necessary to provide any system or application with the product SCIM configuration.

Configure SCIM for MKE

The Mirantis SCIM implementation uses SCIM version 2.0.

MKE SCIM integration typically involves the following steps:

  1. Enable SCIM.

  2. Configure SCIM for authentication and access.

  3. Specify user attributes.

Enable SCIM
  1. Log in to the MKE web UI.

  2. Click Admin Settings > Authentication & Authorization.

  3. In the Identity Provider Integration section in the details pane, move the slider next to SCIM to enable the SCIM settings.

Configure SCIM authentication and access

In the SCIM configuration subsection, either enter the API token in the API Token field or click Generate to have MKE generate a UUID.

The base URL for all SCIM API calls is https://<Host IP>/enzi/v0/scim/v2/. All SCIM methods are accessible API endpoints of this base URL.

Bearer Auth is the API authentication method. When configured, you access SCIM API endpoints through the Bearer <token> HTTP Authorization request header.

Note

  • SCIM API endpoints are not accessible by any other user (or their token), including the MKE administrator and MKE admin Bearer token.

  • The only SCIM authentication method that MKE supports is an HTTP Authorization request header that contains a Bearer token.
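
For example, once SCIM is enabled and an API token is configured, a request to list users resembles the following; the host IP and token are placeholders:

curl -k -H "Authorization: Bearer <token>" \
  -H "Accept: application/scim+json" \
  https://<Host IP>/enzi/v0/scim/v2/Users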

Specify user attributes

The following table maps the user attribute fields in use by Mirantis to SCIM and SAML attributes.

MKE

SAML

SCIM

Account name

nameID in response

userName

Account full name

Attribute value in fullname assertion

User’s name.formatted

Team group link name

Attribute value in member-of assertion

Group’s displayName

Team name

N/A

When creating a team, use the group’s displayName + _SCIM

Supported SCIM API endpoints

MKE supports SCIM API endpoints across three operational areas: User, Group, and Service Provider Configuration.

User operations

The SCIM API endpoints that serve in user operations provide the means to:

  • Retrieve user information

  • Create a new user

  • Update user information

For user GET and POST operations:

  • Filtering is only supported using the userName attribute and eq operator. For example, filter=userName Eq "john".

  • Attribute name and attribute operator are case insensitive. For example, the following two expressions have the same logical value:

    • filter=userName Eq "john"

    • filter=Username eq "john"

  • Pagination is fully supported.

  • Sorting is not supported.

GET /Users

Returns a list of SCIM users (by default, 200 users per page).

Use the startIndex and count query parameters to paginate long lists of users. For example, to retrieve the first 20 users, set startIndex to 1 and count to 20, as in the following request:

GET {Host IP}/enzi/v0/scim/v2/Users?startIndex=1&count=20
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8

The response to the previous query returns paging metadata that is similar to the following example:

{
  "totalResults":100,
  "itemsPerPage":20,
  "startIndex":1,
  "schemas":["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
  "Resources":[{
     ...
  }]
}
GET /Users/{id}

Retrieves a single user resource.

The value of the {id} should be the user’s ID. You can also use the userName attribute to filter the results.

GET {Host IP}/enzi/v0/scim/v2/Users?{user ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
POST /Users

Creates a user.

The operation must include the userName attribute and at least one email address.

POST {Host IP}/enzi/v0/scim/v2/Users
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
PATCH /Users/{id}

Updates a user’s active status.

Reactivate inactive users by specifying "active": true. To deactivate active users, specify "active": false. The value of the {id} should be the user’s ID.

PATCH {Host IP}/enzi/v0/scim/v2/Users?{user ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
PUT /Users/{id}

Updates existing user information.

All attribute values are overwritten, including attributes for which empty values or no values have been provided. If a previously set attribute value is left blank during a PUT operation, the value is updated with a blank value in accordance with the attribute data type and storage provider. The value of the {id} should be the user’s ID.

Group operations

The SCIM API endpoints that serve in group operations provide the means to:

  • Create a new user group

  • Retrieve group information

  • Update user group membership (add/replace/remove users)

For group GET and POST operations:

  • Pagination is fully supported.

  • Sorting is not supported.

GET /Groups/{id}

Retrieves information for a single group.

GET /scim/v1/Groups?{Group ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
GET /Groups

Returns a paginated list of groups (by default, ten groups per page).

Use the startIndex and count query parameters to paginate long lists of groups.

GET /scim/v1/Groups?startIndex=4&count=500 HTTP/1.1
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
POST /Groups

Creates a new group.

Add users to the group during group creation by supplying user ID values in the members array.

PATCH /Groups/{id}

Updates an existing group resource, allowing the addition or removal of individual (or groups of) users from the group with a single operation. Add is the default operation.

To remove members from a group, set the operation attribute of a member object to delete.

PUT /Groups/{id}

Updates an existing group resource, overwriting all values for a group even if an attribute is empty or is not provided.

PUT replaces all members of a group with members that are provided by way of the members attribute. If a previously set attribute is left blank during a PUT operation, the new value is set to blank in accordance with the data type of the attribute and the storage provider.

Service Provider configuration operations

The SCIM API endpoints that serve in Service provider configuration operations provide the means to:

  • Retrieve service provider resource type metadata

  • Retrieve schema for service provider and SCIM resources

  • Retrieve schema for service provider configuration

SCIM defines three endpoints to facilitate discovery of the SCIM service provider features and schema that you can retrieve using HTTP GET:

GET /ResourceTypes

Discovers the resource types available on a SCIM service provider (for example, Users and Groups).

Each resource type defines the endpoints, the core schema URI that defines the resource, and any supported schema extensions.

GET /Schemas

Retrieves information about all supported resource schemas supported by a SCIM service provider.

GET /ServiceProviderConfig

Returns a JSON structure that describes the SCIM specification features that are available on a service provider using a schemas attribute of urn:ietf:params:scim:schemas:core:2.0:ServiceProviderConfig.

Integrate with an LDAP directory

MKE integrates with LDAP directory services, thus allowing you to manage users and groups from your organization directory and to automatically propagate the information to MKE and MSR.

Once you enable LDAP, MKE uses a remote directory server to create users automatically, and all logins are forwarded thereafter to the directory server.

When you switch from built-in authentication to LDAP authentication, all manually created users whose usernames fail to match any LDAP search results remain available.

When you enable LDAP authentication, you configure MKE to create user accounts only when users log in for the first time.

Note

If you enable SAML integration, you cannot enable LDAP for authentication. This does not affect local MKE user account authentication.

MKE integration with LDAP

To control the integration of MKE with LDAP, you create user searches. For these user searches, you use the MKE web UI to specify multiple search configurations and specify multiple LDAP servers with which to integrate. Searches start with the Base DN, the Distinguished Name of the node in the LDAP directory tree in which the search looks for users.

MKE to LDAP synchronization workflow

The following occurs when MKE synchronizes with LDAP:

  1. MKE creates a set of search results by iterating over each of the user search configurations, in an order that you specify.

  2. MKE chooses an LDAP server from the list of domain servers by considering the Base DN from the user search configuration and selecting the domain server with the longest domain suffix match.

    Note

    If no domain server has a domain suffix that matches the Base DN from the search configuration, MKE uses the default domain server.

  3. MKE creates a list of users from the search and creates MKE accounts for each one.

    Note

    If you select the Just-In-Time User Provisioning option, user accounts are created only when users first log in.

Example workflow:

Consider an example with three LDAP domain servers and three user search configurations.

The example LDAP domain servers:

  • default: ldaps://ldap.example.com

  • dc=subsidiary1,dc=com: ldaps://ldap.subsidiary1.com

  • dc=subsidiary2,dc=subsidiary1,dc=com: ldaps://ldap.subsidiary2.com

The example user search configurations:

  • baseDN=ou=people,dc=subsidiary1,dc=com

    For this search configuration, dc=subsidiary1,dc=com is the only server with a domain that is a suffix, so MKE uses the server ldaps://ldap.subsidiary1.com for the search request.

  • baseDN=ou=product,dc=subsidiary2,dc=subsidiary1,dc=com

    For this search configuration, two of the domain servers have a domain that is a suffix of this Base DN. As dc=subsidiary2,dc=subsidiary1,dc=com is the longer of the two, MKE uses the server ldaps://ldap.subsidiary2.com for the search request.

  • baseDN=ou=eng,dc=example,dc=com

    For this search configuration, no server with a domain specified is a suffix of this Base DN, so MKE uses the default server, ldaps://ldap.example.com, for the search request.

Whenever user search results contain username collisions between the domains, MKE uses only the first search result, and thus the ordering of the user search configurations can be important. For example, if both the first and third user search configurations result in a record with the username jane.doe, the first takes precedence and the third is ignored. As such, it is important to implement a username attribute that is unique for your users across all domains. As a best practice, choose something that is specific to the subsidiary, such as the email address for each user.

Configure the LDAP integration

Note

MKE saves a minimum amount of user data required to operate, including any user name and full name attributes that you specify in the configuration, as well as the Distinguished Name (DN) of each synced user. MKE does not store any other data from the directory server.

Use the MKE web UI to configure MKE to create and authenticate users using an LDAP directory.

Access the LDAP controls

To configure LDAP integration, you must first access the LDAP controls in the MKE web UI.

  1. Log in to the MKE web UI.

  2. In the left-side navigation menu, click the user name drop-down to display the available options.

  3. Navigate to Admin Settings > Authentication & Authorization.

  4. In the Identity Provider section in the details pane, move the slider next to LDAP to enable the LDAP settings.

Set up an LDAP server

To configure an LDAP server, perform the following steps:

  1. To set up a new LDAP server, configure the settings in the LDAP Server subsection:

    Control

    Description

    LDAP Server URL

    The URL for the LDAP server.

    Reader DN

    The DN of the LDAP account that is used to search entries in the LDAP server. As a best practice, this should be an LDAP read-only user.

    Reader Password

    The password of the account used to search entries in the LDAP server.

    Skip TLS verification

    Sets whether to verify the LDAP server certificate when TLS is in use. The connection is still encrypted, however it is vulnerable to man-in-the-middle attacks.

    Use Start TLS

    Defines whether to authenticate or encrypt the connection after connection is made to the LDAP server over TCP. To ignore the setting, set the LDAP Server URL field to ldaps://.

    No Simple Pagination (RFC 2696)

    Indicates that your LDAP server does not support pagination.

    Just-In-Time User Provisioning

    Sets whether to create user accounts only when users log in for the first time. Mirantis recommends using the default true value.

  2. Click Save to add your LDAP server.

Add additional LDAP domains

To integrate MKE with additional LDAP domains:

  1. In the LDAP Additional Domains subsection, click Add LDAP Domain +. A set of input tools for configuring the additional domain displays.

  2. Configure the settings for the new LDAP domain:

    Control

    Description

    LDAP Domain

    Text field in which to enter the root domain component of this server. A longest-suffix match of the Base DN for LDAP searches is used to select which LDAP server to use for search requests. If no matching domain is found, the default LDAP server configuration is put to use.

    LDAP Server URL

    Text field in which to enter the URL for the LDAP server.

    Reader DN

    Text field in which to enter the DN of the LDAP account that is used to search entries in the LDAP server. As a best practice, this should be an LDAP read-only user.

    Reader Password

    The password of the account used to search entries in the LDAP server.

    Skip TLS verification

    Sets whether to verify the LDAP server certificate when TLS is in use. The connection is still encrypted, however it is vulnerable to man-in-the-middle attacks.

    Use Start TLS

    Sets whether to authenticate or encrypt the connection after connection is made to the LDAP server over TCP. To ignore the setting, set the LDAP Server URL field to ldaps://.

    No Simple Pagination (RFC 2696)

    Select if your LDAP server does not support pagination.

  3. Click Confirm to add the new LDAP domain.

  4. Repeat the procedure to add any additional LDAP domains.

Add LDAP user search configurations

To add LDAP user search configurations to your LDAP integration:

  1. In the LDAP User Search Configurations subsection, click Add LDAP User Search Configuration +. A set of input tools for configuring the LDAP user search configurations displays.

    Field

    Description

    Base DN

    Text field in which to enter the DN of the node in the directory tree, where the search should begin seeking out users.

    Username Attribute

    Text field in which to enter the LDAP attribute that serves as username on MKE. Only user entries with a valid username will be created.

    A valid username must not be longer than 100 characters and must not contain any unprintable characters, whitespace characters, or any of the following characters: / \ [ ] : ; | = , + * ? < > ' ".

    Full Name Attribute

    Text field in which to enter the LDAP attribute that serves as the user’s full name, for display purposes. If the field is left empty, MKE does not create new users with a full name value.

    Filter

    Text field in which to enter an LDAP search filter to use to find users. If the field is left empty, all directory entries in the search scope with valid username attributes are created as users.

    Search subtree instead of just one level

    Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN.

    Match Group Members

    Sets whether to filter users further, by selecting those who are also members of a specific group on the directory server. The feature is helpful when the LDAP server does not support memberOf search filters.

    Iterate through group members

    Sets whether, when the Match Group Members option is enabled to sync users, the sync is done by iterating over the target group’s membership and making a separate LDAP query for each member, rather than through the use of a broad user search filter. This option can increase efficiency in situations where the number of members of the target group is significantly smaller than the number of users that would match the above search filter, or if your directory server does not support simple pagination of search results.

    Group DN

    Text field in which to enter the DN of the LDAP group from which to select users, when the Match Group Members option is enabled.

    Group Member Attribute

    Text field in which to enter the name of the LDAP group entry attribute that corresponds to the DN of each of the group members.

  2. Click Confirm to add the new LDAP user search configurations.

  3. Repeat the procedure to add any additional user search configurations. More than one such configuration can be useful in cases where users may be found in multiple distinct subtrees of your organization directory. Any user entry that matches at least one of the search configurations will be synced as a user.

Test LDAP login

Prior to saving your configuration changes, you can use the dedicated LDAP Test login tool to test the integration using the login credentials of an LDAP user.

  1. Input the credentials for the test user into the provided Username and Password fields:

    Field

    Description

    Username

    An LDAP user name for testing authentication to MKE. The value corresponds to the Username Attribute that is specified in the Add LDAP user search configurations section.

    Password

    The password used to authenticate (BIND) to the directory server.

  2. Click Test. A search is made against the directory using the provided search Base DN, scope, and filter. Once the user entry is found in the directory, a BIND request is made using the input user DN and the given password value.

Set LDAP synchronization

Following LDAP integration, MKE synchronizes users at the top of the hour, based on an interval that is defined in hours.

To set LDAP synchronization, configure the following settings in the LDAP Sync Configuration section:

Field

Description

Sync interval

The interval, in hours, to synchronize users between MKE and the LDAP server. When the synchronization job runs, new users found in the LDAP server are created in MKE with the default permission level. MKE users that do not exist in the LDAP server become inactive.

Enable sync of admin users

This option specifies that system admins should be synced directly with members of a group in your organization’s LDAP directory. The admins will be synced to match the membership of the group. The configured recovery admin user will also remain a system admin.

Manually synchronize LDAP

In addition to configuring MKE LDAP synchronization, you can also perform a hot synchronization by clicking the Sync Now button in the LDAP Sync Jobs subsection. Here you can also view the logs for each sync job by clicking the View Logs link associated with that job.

Revoke user access

Whenever a user is removed from LDAP, the effect on their MKE account is determined by the Just-In-Time User Provisioning setting:

  • false: Users deleted from LDAP become inactive in MKE following the next LDAP synchronization run.

  • true: A user deleted from LDAP cannot authenticate. Their MKE accounts remain active, however, and thus they can use their client bundles to run commands. To prevent this, deactivate the user’s MKE user account.

Synchronize teams with LDAP

MKE enables the syncing of teams within Organizations with LDAP, using either a search query or by matching a group that is established in your LDAP directory.

  1. Log in to the MKE web UI as an administrator.

  2. Navigate to Access Control > Orgs & Teams to display the Organizations that exist within your MKE instance.

  3. Locate the name of the Organization that contains the MKE team that you want to sync to LDAP and click it to display all of the MKE teams for that Organization.

  4. Hover your cursor over the MKE team that you want to sync with LDAP to reveal its vertical ellipsis, at the far right.

  5. Click the vertical ellipsis and select Edit to call the Details screen for the team.

  6. Toggle ENABLE SYNC TEAM MEMBERS to Yes to reveal the LDAP sync controls.

  7. Toggle LDAP MATCH METHOD to set the LDAP match method you want to use to make the sync, Match Search Results (default) or Match Group Members.

    • For Match Search Results:

      1. Enter a Base DN into the Search Base DN field, as it is established in LDAP.

      2. Enter a search filter based on one or more attributes into the Search filter field.

      3. Optional. Check Search subtree instead of just one level to enable search down through any sub-groups that exist within the group you entered into the Search Base DN field.

    • For Match Group Members:

      1. Enter the group Distinguished Name (DN) into the Group DN field.

      2. Enter a member attribute into the Group Member field.

  8. Toggle IMMEDIATELY SYNC TEAM MEMBERS as appropriate.

  9. Toggle ALLOW NON-LDAP MEMBERS as appropriate.

  10. Click Save.

LDAP Configuration through API

LDAP-specific GET and PUT API endpoints are available in the configuration resource. Swarm mode must be enabled to use the following endpoints:

  • GET /api/ucp/config/auth/ldap - Returns information on your current system LDAP configuration.

  • PUT /api/ucp/config/auth/ldap - Updates your LDAP configuration.
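
For example, using the AUTHTOKEN and MKE_HOST variables defined later in Modify an existing MKE configuration, you can download the current LDAP configuration, edit it, and upload it again. This sketch assumes the endpoint accepts and returns JSON; the structure to edit is whatever the GET call returns.

curl --silent --insecure -X GET \
  -H "Authorization: Bearer $AUTHTOKEN" \
  https://$MKE_HOST/api/ucp/config/auth/ldap > ldap-config.json

# Edit ldap-config.json as needed, then upload the result:
curl --silent --insecure -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AUTHTOKEN" \
  --data @ldap-config.json \
  https://$MKE_HOST/api/ucp/config/auth/ldap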

Manage services node deployment

You can configure MKE to allow users to deploy and run services in worker nodes only, to ensure that all cluster management functionality remains performant and to enhance cluster security.

Important

If a user deploys a malicious service that affects the node on which it is running, that service cannot affect any other nodes in the cluster or impact cluster management functionality.

Restrict services deployment to Swarm worker nodes

To keep manager nodes performant, it is necessary at times to restrict service deployment to Swarm worker nodes.

To restrict services deployment to Swarm worker nodes:

  1. Log in to the MKE web UI with administrator credentials.

  2. Click the user name at the top of the navigation menu.

  3. Navigate to Admin Settings > Orchestration.

  4. Under Container Scheduling, toggle all of the sliders to the left to restrict the deployment only to worker nodes.

Note

Creating a grant with the Scheduler role against the / collection takes precedence over any other grants with Node Schedule on subcollections.

Restrict services deployment to Kubernetes worker nodes

By default, MKE clusters use Kubernetes taints and tolerations to prevent user workloads from deploying to MKE manager or MSR nodes.

Note

Workloads deployed by an administrator in the kube-system namespace do not follow scheduling constraints. If an administrator deploys a workload in the kube-system namespace, a toleration is applied to bypass the taint, and the workload is scheduled on all node types.

To view the taints, run the following command:

$ kubectl get nodes <mkemanager> -o json | jq -r '.spec.taints | .[]'

Example of system response:

{
  "effect": "NoSchedule",
  "key": "com.docker.ucp.manager"
}

Allow services deployment on Kubernetes MKE manager or MSR nodes

You can circumvent the protections put in place by Kubernetes taints and tolerations. For details, refer to Restrict services deployment to Kubernetes worker nodes.

Schedule services deployment on manager and MSR nodes
  1. Log in to the MKE web UI with administrator credentials.

  2. Click the user name at the top of the navigation menu.

  3. Navigate to Admin Settings > Orchestration.

  4. Select from the following options:

    • Under Container Scheduling, toggle to the right the slider for Allow administrators to deploy containers on MKE managers or nodes running MSR.

    • Under Container Scheduling, toggle to the right the slider for Allow all authenticated users, including service accounts, to schedule on all nodes, including MKE managers and MSR nodes.

Following any scheduling action, MKE applies a toleration to new workloads, to allow the Pods to be scheduled on all node types. For existing workloads, however, it is necessary to manually add the toleration to the Pod specification.

Add a toleration to the Pod specification for existing workloads
  1. Add the following toleration to the Pod specification, either through the MKE web UI or using the kubectl edit <object> <workload> command:

    tolerations:
    - key: "com.docker.ucp.manager"
      operator: "Exists"
    
  2. Run the following command to confirm the successful application of the toleration:

    kubectl get <object> <workload> -o json | jq -r '.spec.template.spec.tolerations | .[]'
    

Example of system response:

{
  "key": "com.docker.ucp.manager",
  "operator": "Exists"
}

Caution

A NoSchedule taint is present on MKE manager and MSR nodes, and if you disable scheduling on managers and/or workers a toleration for that taint will not be applied to the deployments. As such, you should not schedule on these nodes, except when the Kubernetes workload is deployed in the kube-system namespace.

Run only the images you trust

With MKE you can force applications to use only Docker images that are signed by MKE users you trust. Every time a user attempts to deploy an application to the cluster, MKE verifies that the application is using a trusted Docker image. If a trusted Docker image is not in use, MKE halts the deployment.

By signing and verifying the Docker images, you ensure that the images in use in your cluster are trusted and have not been altered, either in the image registry or on their way from the image registry to your MKE cluster.

Example workflow

  1. A developer makes changes to a service and pushes their changes to a version control system.

  2. A CI system creates a build, runs tests, and pushes an image to the Mirantis Secure Registry (MSR) with the new changes.

  3. The quality engineering team pulls the image, runs more tests, and signs and pushes the image if the image is verified.

  4. IT operations deploys the service, but only if the image in use is signed by the QA team. Otherwise, MKE will not deploy.

To configure MKE to only allow running services that use Docker trusted images:

  1. Log in to the MKE web UI.

  2. In the left-side navigation menu, click the user name drop-down to display the available options.

  3. Click Admin Settings > Docker Content Trust to reveal the Content Trust Settings page.

  4. Enable Run only signed images.

    Important

    At this point, MKE allows the deployment of any signed image, regardless of signee.

  5. (Optional) Make it necessary for the image to be signed by a particular team or group of teams:

    1. Click Add Team+ to reveal the two-part tool.

    2. From the drop-down at the left, select an organization.

    3. From the drop-down at the right, select a team belonging to the organization you selected.

    4. Repeat the procedure to configure additional teams.

      Note

      If you specify multiple teams, the image must be signed by a member of each team, or someone who is a member of all of the teams.

  6. Click Save.

    MKE immediately begins enforcing the image trust policy. Existing services continue to run and you can restart them as necessary. From this point, however, MKE only allows the deployment of new services that use a trusted image.

Set user session properties

MKE enables the setting of various user sessions properties, such as session timeout and the permitted number of concurrent sessions.

To configure MKE login session properties:

  1. Log in to the MKE web UI.

  2. In the left-side navigation menu, click the user name drop-down to display the available options.

  3. Click Admin Settings > Authentication & Authorization to reveal the MKE login session controls.

The following table offers information on the MKE login session controls:

Field

Description

Lifetime Minutes

The set duration of a login session in minutes, starting from the moment MKE generates the session. MKE invalidates the active session once this period expires and the user must re-authenticate to establish a new session.

  • Default: 60

  • Minimum: 10

Renewal Threshold Minutes

The time increment in minutes by which MKE extends an active session prior to session expiration. MKE extends the session by the amount specified in Lifetime Minutes. The threshold value cannot be greater than that set in Lifetime Minutes.

To specify that sessions not be extended, set the threshold value to 0. Be aware, though, that this may cause MKE web UI users to be unexpectedly logged out.

  • Default: 20

  • Maximum: 5 minutes less than Lifetime Minutes

Per User Limit

The maximum number of sessions that a user can have running simultaneously. If the creation of a new session results in the exceeding of this limit, MKE deletes the least recently used session. Specifically, every time you use a session token, the server marks it with the current time (lastUsed metadata). When creating a new session exceeds the per-user limit, the session with the oldest lastUsed time is deleted, which is not necessarily the oldest session.

To disable the Per User Limit setting, set the value to 0.

  • Default: 10

  • Minimum: 1 / Maximum: No limit

Configure an MKE cluster

Important

The MKE configuration file documentation is up-to-date for the latest MKE 3.4.x release. As such, if you are running an earlier version of MKE, you may encounter detail for configuration options and parameters that are not applicable to the version of MKE you are currently running.

Refer to the MKE Release Notes for specific version-by-version information on MKE configuration file additions and changes.

You configure an MKE cluster through a TOML file. You use this file, the MKE configuration file, to import and export MKE configurations, both to create new MKE instances and to modify existing ones.

Refer to example-config in the MKE CLI reference documentation to learn how to download an example MKE configuration file.

Use an MKE configuration file

Put the MKE configuration file to work for the following use cases:

  • Set the configuration file to run at the install time of new MKE clusters

  • Use the API to import the file back into the same cluster

  • Use the API to import the file into multiple clusters

To make use of an MKE configuration file, you edit the file using either the MKE web UI or the command line interface (CLI). Using the CLI, you can either export the existing configuration file for editing, or use the example-config command to view and edit an example TOML MKE configuration file.

docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.4.15 \
  example-config

Modify an existing MKE configuration

Working as an MKE admin, use the config-toml API from within the directory of your client certificate bundle to export the current MKE settings to a TOML file.

As detailed herein, the command set exports the current configuration for the MKE hostname MKE_HOST to a file named mke-config.toml:

  1. Define the following environment variables:

    export MKE_USERNAME=<mke-username>
    export MKE_PASSWORD=<mke-password>
    export MKE_HOST=<mke-fqdm-or-ip-address>
    
  2. Obtain and define an AUTHTOKEN environment variable:

    AUTHTOKEN=$(curl --silent --insecure --data '{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}' https://$MKE_HOST/auth/login | jq --raw-output .auth_token)
    
  3. Download the current MKE configuration file.

    curl --silent --insecure -X GET "https://$MKE_HOST/api/ucp/config-toml" -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" > mke-config.toml
    
  4. Edit the MKE configuration file, as needed. For comprehensive detail, refer to Configuration options.

  5. Upload the newly edited MKE configuration file:

    Note

    You may need to reacquire the AUTHTOKEN, if significant time has passed since you first acquired it.

    curl --silent --insecure -X PUT -H "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' https://$MKE_HOST/api/ucp/config-toml
    
Apply an existing configuration at install time

To customize a new MKE instance using a configuration file, you must create the file prior to installation. Then, once the new configuration file is ready, you can configure MKE to import it during the installation process using Docker Swarm.

To import a configuration file at installation:

  1. Create a Docker Swarm Config object named com.docker.mke.config with the contents of your MKE configuration file as its TOML value, as shown in the sketch that follows this procedure.

  2. When installing MKE on the cluster, specify the --existing-config flag to force the installer to use the new Docker Swarm Config object for its initial configuration.

  3. Following the installation, delete the com.docker.mke.config object.
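
The following is a minimal sketch of the procedure above. The --host-address value and the use of the interactive installer are illustrative assumptions; only the config object name and the --existing-config flag come directly from this procedure.

# 1. Create the Swarm Config object from your prepared TOML file:
docker config create com.docker.mke.config mke-config.toml

# 2. Install MKE, forcing the installer to use the existing config object:
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.4.15 install \
  --host-address <node-ip-address> \
  --existing-config \
  --interactive

# 3. After the installation completes, delete the config object:
docker config rm com.docker.mke.config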

Configuration options
auth table

Parameter

Required

Description

backend

no

The name of the authorization back end to use, managed or ldap.

Default: managed

default_new_user_role

no

The role assigned to new users for their private resource sets.

Valid values: admin, viewonly, scheduler, restrictedcontrol, or fullcontrol.

Default: restrictedcontrol

auth.sessions

Parameter

Required

Description

lifetime_minutes

no

The initial session lifetime, in minutes.

Default: 60

renewal_threshold_minutes

no

The length of time, in minutes, before session expiration during which, if the session is used, MKE extends the session by the currently configured lifetime. A value of 0 disables session extension.

Default: 20

per_user_limit

no

The maximum number of sessions that a user can have simultaneously active. If creating a new session will put a user over this limit, the least recently used session is deleted.

A value of 0 disables session limiting.

Default: 10

store_token_per_session

no

If set, the user token is stored in sessionStorage instead of localStorage. Setting this option logs the user out and requires that they log back in, as they are actively changing the manner in which their authentication is stored.
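
Assembled from the parameters above, the auth section of an MKE configuration file might look like the following sketch, which uses the documented default values; store_token_per_session is shown explicitly disabled for illustration.

[auth]
  backend = "managed"
  default_new_user_role = "restrictedcontrol"

[auth.sessions]
  lifetime_minutes = 60
  renewal_threshold_minutes = 20
  per_user_limit = 10
  store_token_per_session = false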

registries array (optional)

An array of tables that specifies the MSR instances that are managed by the current MKE instance.

Parameter

Required

Description

host_address

yes

Sets the address for connecting to the MSR instance tied to the MKE cluster.

service_id

yes

Sets the MSR instance’s OpenID Connect Client ID, as registered with the Docker authentication provider.

ca_bundle

no

Specifies the root CA bundle for the MSR instance if you are using a custom certificate authority (CA). The value is a string with the contents of a ca.pem file.
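
A registries entry assembled from the parameters above might look like the following sketch; the host address and service ID are placeholders for your own MSR deployment.

[[registries]]
  host_address = "msr.example.com"
  service_id = "<msr-openid-client-id>"
  # ca_bundle is only needed for a custom CA; its value is the contents of a ca.pem file.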

audit_log_configuration table (optional)

Configures audit logging options for MKE components.

Parameter

Required

Description

level

no

Specifies the audit logging level.

Valid values: empty (to disable audit logs), metadata, request.

Default: empty

support_dump_include_audit_logs

no

Sets support dumps to include audit logs in the logs of the ucp-controller container of each manager node.

Valid values: true, false.

Default: false

scheduling_configuration table (optional)

Specifies scheduling options and the default orchestrator for new nodes.

Note

If you run a kubectl command, such as kubectl describe nodes, to view scheduling rules on Kubernetes nodes, the results do not reflect the MKE admin settings configuration. MKE uses taints to control container scheduling on nodes, which is unrelated to the kubectl Unschedulable boolean flag.

Parameter

Required

Description

enable_admin_ucp_scheduling

no

Determines whether administrators can schedule containers on manager nodes.

Valid values: true, false.

Default: false

You can also set the parameter using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. Click the user name drop-down in the left-side navigation panel.

  3. Click Admin Settings > Orchestration to view the Orchestration screen.

  4. Scroll down to the Container Scheduling section and toggle on the Allow administrators to deploy containers on MKE managers or nodes running MSR slider.

default_node_orchestrator

no

Sets the type of orchestrator to use for new nodes that join the cluster.

Valid values: swarm, kubernetes.

Default: swarm

tracking_configuration table (optional)

Specifies the analytics data that MKE collects.

Parameter

Required

Description

disable_usageinfo

no

Set to disable analytics of usage information.

Valid values: true, false.

Default: false

disable_tracking

no

Set to disable analytics of API call information.

Valid values: true, false.

Default: false

cluster_label

no

Set a label to be included with analytics.

trust_configuration table (optional)

Specifies whether MSR images require signing.

Parameter

Required

Description

require_content_trust

no

Set to require the signing of images by content trust.

Valid values: true, false.

Default: false

You can also set the parameter using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. Click the user name drop-down in the left-side navigation panel.

  3. Click Admin Settings > Docker Content Trust to open the Content Trust Settings screen.

  4. Toggle on the Run only signed images slider.

require_signature_from

no

A string array that specifies which users or teams must sign images.

allow_repos

no

A string array that specifies repos that are to bypass content trust check, for example, ["docker.io/mirantis/dtr-rethink" , "docker.io/mirantis/dtr-registry" ....].

log_configuration table (optional)

Configures the logging options for MKE components.

Parameter

Required

Description

protocol

no

The protocol to use for remote logging.

Valid values: tcp, udp.

Default: tcp

host

no

Specifies a remote syslog server to receive sent MKE controller logs. If omitted, controller logs are sent through the default Docker daemon logging driver from the ucp-controller container.

level

no

The logging level for MKE components.

Valid values (syslog priority levels): debug, info, notice, warning, err, crit, alert, emerg.

license_configuration table (optional)

Enables automatic renewal of the MKE license.

Parameter

Required

Description

auto_refresh

no

Set to enable attempted automatic license renewal when the license nears expiration. If disabled, you must manually upload a renewed license after expiration.

Valid values: true, false.

Default: true

custom headers (optional)

Included when you need to set custom API headers. You can repeat this section multiple times to specify multiple separate headers. If you include custom headers, you must specify both name and value.

[[custom_api_server_headers]]

Item

Description

name

Set to specify the name of the custom header with name = “X-Custom-Header-Name”.

value

Set to specify the value of the custom header with value = “Custom Header Value”.
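
Taken together, a custom header entry looks like the following sketch, repeated once per header.

[[custom_api_server_headers]]
  name = "X-Custom-Header-Name"
  value = "Custom Header Value"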

user_workload_defaults (optional)

A map describing default values to set on Swarm services at creation time if those fields are not explicitly set in the service spec.

[user_workload_defaults]

[user_workload_defaults.swarm_defaults]

Parameter

Required

Description

[tasktemplate.restartpolicy.delay]

no

Delay between restart attempts. The value is input in the <number><value type> format. Valid value types include:

  • ns = nanoseconds

  • us = microseconds

  • ms = milliseconds

  • s = seconds

  • m = minutes

  • h = hours

Default: value = "5s"

[tasktemplate.restartpolicy.maxattempts]

no

Maximum number of restarts before giving up.

Default: value = "3"
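
Combining the headers and parameters above, a user_workload_defaults section might look like the following sketch, shown with the documented default values; the fully qualified table names are an assumption based on the nesting shown.

[user_workload_defaults]

  [user_workload_defaults.swarm_defaults]

    [user_workload_defaults.swarm_defaults.tasktemplate.restartpolicy.delay]
      value = "5s"

    [user_workload_defaults.swarm_defaults.tasktemplate.restartpolicy.maxattempts]
      value = "3"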

cluster_config table (required)

Configures the cluster that the current MKE instance manages.

The dns, dns_opt, and dns_search settings configure the DNS settings for MKE components. These values, when assigned, override the settings in a container /etc/resolv.conf file.

Parameter

Required

Description

controller_port

yes

Sets the port that the ucp-controller monitors.

Default: 443

kube_apiserver_port

yes

Sets the port the Kubernetes API server monitors.

swarm_port

yes

Sets the port that the ucp-swarm-manager monitors.

Default: 2376

swarm_strategy

no

Sets placement strategy for container scheduling. Be aware that this does not affect swarm-mode services.

Valid values: spread, binpack, random.

dns

yes

Array of IP addresses that serve as nameservers.

dns_opt

yes

Array of options in use by DNS resolvers.

dns_search

yes

Array of domain names to search whenever a bare unqualified host name is used inside of a container.

profiling_enabled

no

Determines whether specialized debugging endpoints are enabled for profiling MKE performance.

Valid values: true, false.

Default: false

authz_cache_timeout

no

Sets the timeout in seconds for the RBAC information cache of MKE non-Kubernetes resource listing APIs. Setting changes take immediate effect and do not require a restart of the MKE controller.

Default: 0 (cache is not enabled)

Once you enable the cache, the result of non-Kubernetes resource listing APIs only reflects the latest RBAC changes for the user when the cached RBAC info times out.

kv_timeout

no

Sets the key-value store timeout setting, in milliseconds.

Default: 5000

kv_snapshot_count

Required

Sets the key-value store snapshot count.

Default: 20000

external_service_lb

no

Specifies an optional external load balancer for default links to services with exposed ports in the MKE web interface.

cni_installer_url

no

Specifies the URL of a Kubernetes YAML file to use to install a CNI plugin. Only applicable during initial installation. If left empty, the default CNI plugin is put to use.

metrics_retention_time

no

Sets the metrics retention time.

metrics_scrape_interval

no

Sets the interval for how frequently managers gather metrics from nodes in the cluster.

metrics_disk_usage_interval

no

Sets the interval for the gathering of storage metrics, an operation that can become expensive when large volumes are present.

nvidia_device_plugin Available since MKE 3.4.6

no

Enables the nvidia-gpu-device-plugin, which is disabled by default.

rethinkdb_cache_size

no

Sets the size of the cache for MKE RethinkDB servers.

Default: 1GB

Leaving the field empty or specifying auto instructs RethinkDB to automatically determine the cache size.

exclude_server_identity_headers

no

Determines whether the X-Server-Ip and X-Server-Name headers are disabled.

Valid values: true, false.

Default: false

cloud_provider

no

Sets the cloud provider for the Kubernetes cluster.

pod_cidr

yes

Sets the subnet pool from which the CNI IPAM plugin allocates Pod IPs.

Default: 192.168.0.0/16

calico_mtu

no

Sets the maximum transmission unit (MTU) size for the Calico plugin.

ipip_mtu

no

Sets the IPIP MTU size for the Calico IPIP tunnel interface.

azure_ip_count

yes

Sets the IP count for Azure allocator to allocate IPs per Azure virtual machine.

service_cluster_ip_range

yes

Sets the subnet pool from which the IP for Services should be allocated.

Default: 10.96.0.0/16

nodeport_range

yes

Sets the port range for Kubernetes services within which the type NodePort can be exposed.

Default: 32768-35535

custom_kube_api_server_flags

no

Sets the configuration options for the Kubernetes API server.

Be aware that this parameter function is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the MKE Software Support Agreement.

custom_kube_controller_manager_flags

no

Sets the configuration options for the Kubernetes controller manager.

Be aware that this parameter function is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the MKE Software Support Agreement.

custom_kubelet_flags

no

Sets the configuration options for kubelet.

Be aware that this parameter function is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the MKE Software Support Agreement.

custom_kube_scheduler_flags

no

Sets the configuration options for the Kubernetes scheduler.

Be aware that this parameter function is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the MKE Software Support Agreement.

local_volume_collection_mapping

no

Set to store data about collections for volumes in the MKE local KV store instead of on the volume labels. The parameter is used to enforce access control on volumes.

manager_kube_reserved_resources

no

Reserves resources for MKE and Kubernetes components that are running on manager nodes.

worker_kube_reserved_resources

no

Reserves resources for MKE and Kubernetes components that are running on worker nodes.

kubelet_max_pods

yes

Sets the number of Pods that can run on a node.

Maximum: 250

Default: 110

kubelet_pods_per_core

no

Sets the maximum number of Pods per core.

0 indicates that there is no limit on the number of Pods per core. The number cannot exceed the kubelet_max_pods setting.

Recommended: 10

Default: 0

secure_overlay

no

Enables IPSec network encryption in Kubernetes.

Valid values: true, false.

Default: false

image_scan_aggregation_enabled

no

Enables image scan result aggregation. The feature displays image vulnerabilities in shared resource/containers and shared resources/images pages.

Valid values: true, false.

Default: false

swarm_polling_disabled

no

Determines whether resource polling is disabled for both Swarm and Kubernetes resources, which is recommended for production instances.

Valid values: true, false.

Default: false

oidc_client_id

no

Sets the OIDC client ID, using the eNZi service ID that is in the OIDC authorization flow.

hide_swarm_ui

no

Determines whether the UI is hidden for all Swarm-only object types (has no effect on Admin Settings).

Valid values: true, false.

Default: false

You can also set the parameter using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, click the user name drop-down.

  3. Click Admin Settings > Tuning to open the Tuning screen.

  4. Toggle on the Hide Swarm Navigation slider located under the Configure MKE UI heading.

unmanaged_cni

yes

Sets Calico as the CNI provider, managed by MKE. Note that Calico is the default CNI provider.

kube_proxy_mode

yes

Sets the operational mode for kube-proxy.

Valid values: iptables, ipvs, disabled.

Default: iptables

cipher_suites_for_kube_api_server

no

Sets the value for the kube-apiserver --tls-cipher-suites parameter.

cipher_suites_for_kubelet

no

Sets the value for the kubelet --tls-cipher-suites parameter.

cipher_suites_for_etcd_server

no

Sets the value for the etcd server --cipher-suites parameter.

shared_sans

no

Subject alternative names for manager nodes.
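
As a brief illustration, a cluster_config section might begin like the following sketch. Only a few of the parameters above are shown, using their documented defaults; the DNS arrays are left empty so that container /etc/resolv.conf settings are not overridden, and the kube_apiserver_port value is an assumption, as no default is documented above.

[cluster_config]
  controller_port = 443
  kube_apiserver_port = 6443
  swarm_port = 2376
  dns = []
  dns_opt = []
  dns_search = []
  pod_cidr = "192.168.0.0/16"
  service_cluster_ip_range = "10.96.0.0/16"
  nodeport_range = "32768-35535"
  kube_proxy_mode = "iptables"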

cluster_config.service_mesh (optional)

Set the configuration for the Istio ingress to manage ingress traffic from outside the cluster.

Parameter

Required

Description

enabled

No

Disable HTTP ingress for Kubernetes. Default: false

ingress_num_replicas

No

Set the number of Istio Ingress Gateway (proxy) deployment replicas. Default: 2

ingress_external_ips

No

Set the list of external IPs for Ingress Gateway service. Default: [] (empty)

ingress_enable_lb

No

Enable external load balancer. Default: false

ingress_preserve_client_ip

No

Enable preserving inbound traffic source IP. Default: false

ingress_exposed_ports

No

Set ports to expose.

For each port, supply arrays containing the following port information (defaults shown):

  • name = “http2”

  • port = 80

  • target_port = 0

  • node_port = 33000


  • name = “https”

  • port = 443

  • target_port = 0

  • node_port = 33001


  • name = “tcp”

  • port = 31400

  • target_port = 0

  • node_port = 33002

ingress_node_affinity

No

Set node affinity.

  • key = “com.docker.ucp.manager”

  • value = “”

  • target_port = 0

  • node_port = 0

ingress_node_toleration

No

Set node toleration.

For each node, supply an array containing the following information (defaults shown):

  • key = “com.docker.ucp.manager”

  • value = “”

  • operator = “Exists”

  • effect = “NoSchedule”

etcd-storage-quota Available since MKE 3.4.11

no

Sets the etcd storage size limit.

Example values: 500M, 4GB, 8G.

Default value: 2G.

iSCSI (optional)

Configures iSCSI options for MKE.

Parameter

Required

Description

--storage-iscsi=true

no

Enables iSCSI-based Persistent Volumes in Kubernetes.

Valid values: true, false.

Default: false

--iscsiadm-path=<path>

no

Specifies the path of the iscsiadm binary on the host.

Default: /usr/sbin/iscsiadm

--iscsidb-path=<path>

no

Specifies the path of the iscsi database on the host.

Default: /etc/iscsi

pre_logon_message

Configures a pre-logon message.

Parameter

Required

Description

pre_logon_message

no

Sets a pre-logon message to alert users prior to log in.

Scale an MKE cluster

By adding or removing nodes from the MKE cluster, you can horizontally scale MKE to fit your needs as your applications grow in size and use.

Scale using the MKE web UI

For detail on how to use the MKE web UI to scale your cluster, refer to Join Linux nodes or Join Windows worker nodes, depending on which operating system you use. In particular, these topics offer information on adding nodes to a cluster and configuring node availability.

Scale using the CLI

You can also use the command line to perform all scaling operations.

Scale operation

Command

Obtain the join token

Run the following command on a manager node to obtain the join token that is required for cluster scaling. Use either worker or manager for the <node-type>:

docker swarm join-token <node-type>

Configure a custom listen address

Specify the address and port where the new node listens for inbound cluster management traffic:

docker swarm join \
   --token  SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj \
   --listen-addr 234.234.234.234 \
   192.168.99.100:2377

Verify node addition

Once your node is added, run the following command on a manager node to verify its presence:

docker node ls

Set node availability state

Use the --availability option to set node availability, indicating active, pause, or drain:

docker node update --availability <availability-state> <node-hostname>

Remove the node

docker node rm <node-hostname>

Configure KMS plugin for MKE

Mirantis Kubernetes Engine (MKE) offers support for a Key Management Service (KMS) plugin that allows access to third-party secrets management solutions, such as Vault. MKE uses this plugin to facilitate access from Kubernetes clusters.

MKE will not health check, clean up, or otherwise manage the KMS plugin. Thus, you must deploy KMS before a machine becomes a MKE manager, or else it may be considered unhealthy.

Configuration

Use MKE to configure the KMS plugin configuration. MKE maintains ownership of the Kubernetes EncryptionConfig file, where the KMS plugin is configured for Kubernetes. MKE does not check the file contents following deployment.

MKE adds new configuration options to the cluster configuration table. Configuration of these options takes place through the API and not the MKE web UI.

The following table presents the configuration options for the KMS plugin, all of which are optional.

Parameter

Type

Description

kms_enabled

bool

Sets MKE to configure a KMS plugin.

kms_name

string

Name of the KMS plugin resource (for example, vault).

kms_endpoint

string

Path of the KMS plugin socket. The path must refer to a UNIX socket on the host (for example, /tmp/socketfile.sock). MKE bind mounts this file to make it accessible to the API server.

kms_cachesize

int

Number of data encryption keys (DEKs) to cache in the clear.
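
Because these options live in the cluster configuration table, setting them through the API amounts to adding lines such as the following sketch to the cluster_config section of the MKE configuration file; the socket path and cache size are illustrative.

[cluster_config]
  kms_enabled = true
  kms_name = "vault"
  kms_endpoint = "/tmp/socketfile.sock"
  kms_cachesize = 1000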

Use a local node network in a swarm

Mirantis Kubernetes Engine (MKE) can use local network drivers to orchestrate your cluster. You can create a config network with a driver such as MAC VLAN, and use this network in the same way as any other named network in MKE. In addition, if it is set up as attachable you can attach containers.

Warning

Encrypting communication between containers on different nodes only works with overlay networks.

Create node-specific networks with MKE

To create a node-specific network for use with MKE, always do so through MKE, using either the MKE web UI or the CLI with an admin bundle (a CLI sketch follows the procedure below). If you create such a network without MKE, it will not have the correct access label and it will not be available in MKE.

Create a MAC VLAN network
  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation menu, click Swarm > Networks.

  3. Click Create to call the Create Network screen.

  4. Select macvlan from the Drivers drop-down.

  5. Enter macvlan into the Name field.

  6. Select the type of network to create, Network or Local Config.

    • If you select Local Config, the SCOPE is automatically set to Local. You subsequently select the nodes for which to create the Local Config from those listed. MKE prefixes the network with the node name for each selected node to ensure consistent application of access labels, and you then select a Collection for the Local Configs to reside in. All Local Configs with the same name must be in the same collection, or MKE returns an error. If you do not select a Collection, the network is placed in your default collection, which is / in a new MKE installation.

    • If you select Network, the SCOPE is automatically set to Swarm. Choose an existing Local Config from which to create the network. The network and its labels and collection placement are inherited from the related Local Configs.

  7. Optional. Configure IPAM.

  8. Click Create.
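
For the CLI route mentioned above, the underlying Docker workflow looks like the following sketch: a node-local config-only network plus a swarm-scoped MAC VLAN network built from it. The subnet, gateway, and parent interface are illustrative, and the commands should be run through an admin client bundle so that MKE applies the access labels described above.

# Node-local configuration (config-only network):
docker network create --config-only \
  --subnet 192.168.30.0/24 --gateway 192.168.30.1 \
  -o parent=eth0 \
  macvlan-config

# Swarm-scoped, attachable MAC VLAN network built from the config:
docker network create -d macvlan --scope swarm --attachable \
  --config-from macvlan-config \
  macvlan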

Use your own TLS certificates

To ensure all communications between clients and MKE are encrypted, all MKE services are exposed using HTTPS. By default, this is done using self-signed TLS certificates that are not trusted by client tools such as web browsers. Thus, when you try to access MKE, your browser warns that it does not trust MKE or that MKE has an invalid certificate.

You can configure MKE to use your own TLS certificates. As a result, your browser and other client tools will trust your MKE installation.

Mirantis recommends that you make this change outside of peak business hours. Your applications will continue to run normally, but existing MKE client certificates will become invalid, and thus users will have to download new certificates to access MKE from the CLI.


To configure MKE to use your own TLS certificates and keys:

  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, navigate to <user name> > Admin Settings > Certificates.

  3. Upload your certificates and keys based on the following table.

    Note

    All keys and certificates must be uploaded in PEM format.

    Type

    Description

    Private key

    The unencrypted private key for MKE. This key must correspond to the public key used in the server certificate. This key does not use a password.

    Click Upload Key to upload a PEM file.

    Server certificate

    The MKE public key certificate, which establishes a chain of trust up to the root CA certificate. It is followed by the certificates of any intermediate certificate authorities.

    Click Upload Certificate to upload a PEM file.

    CA certificate

    The public key certificate of the root certificate authority that issued the MKE server certificate. If you do not have a CA certificate, use the top-most intermediate certificate instead.

    Click Upload CA Certificate to upload a PEM file.

    Client CA

    This field may contain one or more Root CA certificates that the MKE controller uses to verify that client certificates are issued by a trusted entity.

    Click Upload CA Certificate to upload a PEM file.

    Click Download MKE Server CA Certificate to download the certificate as a PEM file.

    Note

    MKE is automatically configured to trust its internal CAs, which issue client certificates as part of generated client bundles. However, you may supply MKE with additional custom root CA certificates using this field to enable MKE to trust the client certificates issued by your corporate or trusted third-party certificate authorities. Note that your custom root certificates will be appended to MKE internal root CA certificates.

  4. Click Save.

After replacing the TLS certificates, your users will not be able to authenticate with their old client certificate bundles. Ask your users to access the MKE web UI and download new client certificate bundles.

Finally, Mirantis Secure Registry (MSR) deployments must be reconfigured to trust the new MKE TLS certificates. If you are running MSR 3.0.x, refer to Use your own TLS certificates. If you are running MSR 2.9.x, refer to Use your own TLS certificates.
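
After saving the new certificates, one quick way to confirm that MKE is serving the expected certificate chain is with openssl; the host name below is a placeholder for your MKE address.

openssl s_client -connect mke.example.com:443 -servername mke.example.com -showcerts </dev/null \
  | openssl x509 -noout -subject -issuer -dates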

Manage and deploy private images

Mirantis offers its own image registry, Mirantis Secure Registry (MSR), which you can use to store and manage the images that you deploy to your cluster. This topic describes how to use MKE to push the official WordPress image to MSR and later deploy that image to your cluster.


To create an MSR image repository:

  1. Log in to the MKE web UI.

  2. From the left-side navigation panel, navigate to <user name> > Admin Settings > Mirantis Secure Registry.

  3. In the Installed MSRs section, capture the MSR URL for your cluster.

  4. In a new browser tab, navigate to the MSR URL captured in the previous step.

  5. From the left-side navigation panel, click Repositories.

  6. Click New repository.

  7. In the namespace field under New Repository, select the required namespace. The default namespace is your user name.

  8. In the name field under New Repository, enter the name wordpress.

  9. To create the repository, click Save.


To push an image to MSR:

In this example, you will pull the official WordPress image from Docker Hub, tag it, and push it to MSR. Once pushed to MSR, only authorized users will be able to make changes to the image. Pushing to MSR requires CLI access to a licensed MSR installation.

  1. Pull the public WordPress image from Docker Hub:

    docker pull wordpress
    
  2. Tag the image, using the IP address or DNS name of your MSR instance. For example:

    docker tag wordpress:latest <msr-url>:<port>/<namespace>/wordpress:latest
    
  3. Log in to an MKE manager node.

  4. Push the tagged image to MSR:

    docker image push <msr-url>:<port>/admin/wordpress:latest
    
  5. Verify that the image is stored in your MSR repository:

    1. Log in to the MSR web UI.

    2. In the left-side navigation panel, click Repositories.

    3. Click admin/wordpress to open the repo.

    4. Click the Tags tab to view the stored images.

    5. Verify that the latest tag is present.


To deploy the private image to MKE:

  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, click Kubernetes.

  3. Click Create to open the Create Kubernetes Object page.

  4. In the Namespace dropdown, select default.

  5. In the Object YAML editor, paste the following Deployment object YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress-deployment
    spec:
      selector:
        matchLabels:
          app: wordpress
      replicas: 2
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
            - name: wordpress
              image: 52.10.217.20:444/admin/wordpress:latest
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress-service
      labels:
        app: wordpress
    spec:
      type: NodePort
      ports:
        - port: 80
          nodePort: 32768
      selector:
        app: wordpress
    

    The Deployment object YAML specifies your MSR image in the Pod template spec: image: <msr-url>:<port>/admin/wordpress:latest. Also, the YAML file defines a NodePort service that exposes the WordPress application so that it is accessible from outside the cluster.

  6. Click Create. Creating the new Kubernetes objects will open the Controllers page.

  7. After a few seconds, verify that wordpress-deployment has a green status icon and is thus successfully deployed.

Set the node orchestrator

When you add a node to your cluster, by default its workloads are managed by Swarm. Changing the default orchestrator does not affect existing nodes in the cluster. You can also change the orchestrator type for individual nodes in the cluster.

Select the node orchestrator

The workloads on your cluster can be scheduled by Kubernetes, Swarm, or a combination of the two. If you choose to run a mixed cluster, be aware that different orchestrators are not aware of each other, and thus there is no coordination between them.

Mirantis recommends that you decide which orchestrator you will use when initially setting up your cluster. Once you start deploying workloads, avoid changing the orchestrator setting. If you do change the node orchestrator, your workloads will be evicted and you will need to deploy them again using the new orchestrator.

Caution

When you promote a worker node to be a manager, its orchestrator type automatically changes to Mixed. If you later demote that node to be a worker, its orchestrator type remains as Mixed.

Note

The default behavior for Mirantis Secure Registry (MSR) nodes is to run in the Mixed orchestration mode. If you change the MSR orchestrator type to Swarm or Kubernetes only, reconciliation will revert the node back to the Mixed mode.

Changing a node orchestrator

When you change the node orchestrator, existing workloads are evicted and they are not automatically migrated to the new orchestrator. You must manually migrate them to the new orchestrator. For example, if you deploy WordPress on Swarm, and you change the node orchestrator to Kubernetes, MKE does not migrate the workload, and WordPress continues running on Swarm. You must manually migrate your WordPress deployment to Kubernetes.

The following table summarizes the results of changing a node orchestrator.

Workload

Orchestrator-related change

Containers

Containers continue running on the node.

Docker service

The node is drained and tasks are rescheduled to another node.

Pods and other imperative resources

Imperative resources continue running on the node.

Deployments and other declarative resources

New declarative resources will not be scheduled on the node and existing ones will be rescheduled at a time that can vary based on resource details.

If a node is running containers and you change the node to Kubernetes, the containers will continue running and Kubernetes will not be aware of them. This is functionally the same as running the node in the Mixed mode.

Warning

The Mixed mode is not intended for production use and it may impact the existing workloads on the node.

This is because the two orchestrator types have different views of the node resources and they are not aware of the other orchestrator resources. One orchestrator can schedule a workload without knowing that the node resources are already committed to another workload that was scheduled by the other orchestrator. When this happens, the node can run out of memory or other resources.

Mirantis strongly recommends against using the Mixed mode in production environments.

Change the node orchestrator

This topic describes how to set the default orchestrator and change the orchestrator for individual nodes.

Set the default orchestrator

To set the default orchestrator using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, navigate to <user name> > Admin Settings > Orchestration.

  3. Under Scheduler, select the required default orchestrator.

  4. Click Save.

New workloads will now be scheduled by the specified orchestrator type. Existing nodes in the cluster are not affected.

Once a node is joined to the cluster, you can change the orchestrator that schedules its workloads.


To set the default orchestrator using the MKE configuration file:

  1. Obtain the current MKE configuration file for your cluster.

  2. Set default_node_orchestrator to "swarm" or "kubernetes".

  3. Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.

Change the node orchestrator

To change the node orchestrator using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. From the left-side navigation panel, navigate to Shared Resources > Nodes.

  3. Click the node that you want to assign to a different orchestrator.

  4. In the upper right, click the Edit Node icon.

  5. In the Details pane, in the Role section under ORCHESTRATOR TYPE, select either Swarm, Kubernetes, or Mixed.

    Warning

    Mirantis strongly recommends against using the Mixed mode in production environments.

  6. Click Save to assign the node to the selected orchestrator.


To change the node orchestrator using the CLI:

Set the orchestrator on a node by setting the orchestrator label com.docker.ucp.orchestrator.swarm or com.docker.ucp.orchestrator.kubernetes to true.

  1. Change the node orchestrator. Select from the following options:

    • Schedule Swarm workloads on a node:

      docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
      
    • Schedule Kubernetes workloads on a node:

      docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
      
    • Schedule both Kubernetes and Swarm workloads on a node:

      docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
      docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
      

      Warning

      Mirantis strongly recommends against using the Mixed mode in production environments.

    • Change the orchestrator type for a node from Swarm to Kubernetes:

      docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <node-id>
      docker node update --label-rm com.docker.ucp.orchestrator.swarm <node-id>
      
    • Change the orchestrator type for a node from Kubernetes to Swarm:

      docker node update --label-add com.docker.ucp.orchestrator.swarm=true <node-id>
      docker node update --label-rm com.docker.ucp.orchestrator.kubernetes <node-id>
      

    Note

    You must first add the target orchestrator label and then remove the old orchestrator label. Performing these steps in the reverse order can cause the orchestrator change to fail.

  2. Verify the value of the orchestrator label by inspecting the node:

    docker node inspect <node-id> | grep -i orchestrator
    

    Example output:

    "com.docker.ucp.orchestrator.kubernetes": "true"
    

Important

The com.docker.ucp.orchestrator label is not displayed in the MKE web UI Labels list, which appears in the Overview pane for each node.

View Kubernetes objects in a namespace

MKE administrators can filter the view of Kubernetes objects by the namespace that the objects are assigned to, specifying a single namespace or all available namespaces. This topic describes how to deploy services to two newly created namespaces and then view those services, filtered by namespace.


To create two namespaces:

  1. Log in to the MKE web UI as an administrator.

  2. From the left-side navigation panel, click Kubernetes.

  3. Click Create to open the Create Kubernetes Object page.

  4. Leave the Namespace drop-down blank.

  5. In the Object YAML editor, paste the following YAML code:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: blue
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: green
    
  6. Click Create to create the blue and green namespaces.


To deploy services:

  1. Create a NodePort service in the blue namespace:

    1. From the left-side navigation panel, navigate to Kubernetes > Create.

    2. In the Namespace drop-down, select blue.

    3. In the Object YAML editor, paste the following YAML code:

      apiVersion: v1
      kind: Service
      metadata:
        name: app-service-blue
        labels:
          app: app-blue
      spec:
        type: NodePort
        ports:
          - port: 80
            nodePort: 32768
        selector:
          app: app-blue
      
    4. Click Create to deploy the service in the blue namespace.

  2. Create a NodePort service in the green namespace:

    1. From the left-side navigation panel, navigate to Kubernetes > Create.

    2. In the Namespace drop-down, select green.

    3. In the Object YAML editor, paste the following YAML code:

      apiVersion: v1
      kind: Service
      metadata:
        name: app-service-green
        labels:
          app: app-green
      spec:
        type: NodePort
        ports:
          - port: 80
            nodePort: 32769
        selector:
          app: app-green
      
    4. Click Create to deploy the service in the green namespace.


To view the newly created services:

  1. In the left-side navigation panel, click Namespaces.

  2. In the upper-right corner, click the Set context for all namespaces toggle. The indicator in the left-side navigation panel under Namespaces changes to All Namespaces.

  3. Click Services to view your services.


Filter the view by namespace:

  1. In the left-side navigation panel, click Namespaces.

  2. Hover over the blue namespace and click Set Context. The indicator in the left-side navigation panel under Namespaces changes to blue.

  3. Click Services to view the app-service-blue service. Note that the app-service-green service does not display.

Repeat the preceding steps for the green namespace to view only the services deployed in the green namespace.
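
If you prefer the command line, the same namespace-scoped views are available through kubectl. This is a minimal sketch that assumes you have downloaded and sourced an MKE client bundle:

  # List the services in each of the namespaces created above.
  kubectl get services --namespace blue
  kubectl get services --namespace green

  # List services across all namespaces.
  kubectl get services --all-namespaces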

Join Nodes

Set up high availability

MKE is designed to facilitate high availability (HA). You can join multiple manager nodes to the cluster, so that if one manager node fails, another one can automatically take its place without impacting the cluster.

Including multiple manager nodes in your cluster allows you to handle manager node failures and load-balance user requests across all manager nodes.

The number of failures that your cluster can tolerate depends on the number of manager nodes that it uses:

  • 1 manager node: tolerates 0 failures

  • 3 manager nodes: tolerate 1 failure

  • 5 manager nodes: tolerate 2 failures

For deployment into production environments, follow these best practices:

  • For HA with minimal network overhead, Mirantis recommends using three manager nodes, and no more than five. Adding more manager nodes than this can lead to performance degradation, as configuration changes must be replicated across all manager nodes.

  • You should bring failed manager nodes back online as soon as possible, as each failed manager node decreases the number of failures that your cluster can tolerate.

  • You should distribute your manager nodes across different availability zones. This way your cluster can continue working even if an entire availability zone goes down.

Join Linux nodes

MKE allows you to add or remove nodes from your cluster as your needs change over time.

Because MKE leverages the clustering functionality provided by Mirantis Container Runtime (MCR), you use the docker swarm join command to add more nodes to your cluster. When you join a new node, MCR services start running on the node automatically.

You can add both Linux manager and worker nodes to your cluster.

Join a node to the cluster
  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, navigate to Shared Resources > Nodes.

  3. Click Add Node.

  4. Select Linux for the node type.

  5. Select either Manager or Worker, as required.

  6. Optional. Select Use a custom listen address to specify the address and port where the new node listens for inbound cluster management traffic.

  7. Optional. Select Use a custom advertise address to specify the IP address that is advertised to all members of the cluster for API access.

  8. Copy the displayed command, which looks similar to the following:

    docker swarm join --token <token> <mke-node-ip>
    
  9. Use SSH to log in to the host that you want to join to the cluster.

  10. Run the docker swarm join command captured previously.

    The node will display in the Shared Resources > Nodes page.
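
If you prefer to obtain the join command without the web UI, you can print it from any manager node or through an MKE client bundle. This is a minimal sketch using standard Docker Swarm commands:

  # Print the full join command, including the token, for a worker node.
  docker swarm join-token worker

  # Print the full join command, including the token, for a manager node.
  docker swarm join-token manager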

Pause or drain a node

Note

Pausing or draining a node applies only to Swarm workloads.

You can configure the availability of a node so that it is in one of the following three states:

  • Active: The node can receive and execute tasks.

  • Paused: The node continues running existing tasks, but does not receive new tasks.

  • Drained: Existing tasks are stopped, while replica tasks are launched on active nodes. The node does not receive new tasks.


To pause or drain a node:

  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, navigate to Shared Resources > Nodes and select the required node.

  3. In the Details pane, click Configure and select Details to open the Edit Node page.

  4. In the upper right, select the Edit Node icon.

  5. In the Availability section, click Active, Pause, or Drain.

  6. Click Save.
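
The same availability states can be set from the CLI. This is a minimal sketch that assumes an MKE client bundle or direct access to a manager node, with <node-id> standing in for the node name or ID:

  # Set the node availability to active, pause, or drain.
  docker node update --availability active <node-id>
  docker node update --availability pause <node-id>
  docker node update --availability drain <node-id>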

Promote or demote a node

You can promote worker nodes to managers to make MKE fault tolerant. You can also demote a manager node into a worker node.

  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, navigate to Shared Resources > Nodes and select the required node.

  3. In the upper right, select the Edit Node icon.

  4. In the Role section, click Manager or Worker.

  5. Click Save and wait until the operation completes.

  6. Navigate to Shared Resources > Nodes and verify the new node role.

Note

If you are load balancing user requests to MKE across multiple manager nodes, you must remove these nodes from the load-balancing pool when demoting them to workers.
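
For reference, the same role changes can be made from the CLI. This is a minimal sketch that assumes an MKE client bundle or direct access to a manager node, with <node-id> standing in for the node name or ID:

  # Promote a worker node to a manager.
  docker node promote <node-id>

  # Demote a manager node to a worker.
  docker node demote <node-id>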

Remove a node from the cluster

To remove a worker node:

  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, navigate to Shared Resources > Nodes and select the required node.

  3. In the upper right, select the vertical ellipsis and click Remove.

  4. When prompted, click Confirm.


To remove a manager node:

  1. Verify that all nodes in the cluster are healthy.

    Warning

    Do not remove a manager node unless all nodes in the cluster are healthy.

  2. Demote the manager to a worker node.

  3. Remove the newly demoted worker from the cluster, as described in the preceding steps.
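
For reference, node removal can also be performed from the CLI. This is a minimal sketch; run the first command on the node that is leaving the cluster and the second from a manager node, with <node-id> standing in for the node name or ID:

  # On the node that is being removed, leave the swarm.
  docker swarm leave

  # On a manager node, remove the now-down node from the node list.
  docker node rm <node-id>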

Join Windows worker nodes

MKE allows you to add or remove nodes from your cluster as your needs change over time.

Because MKE leverages the clustering functionality provided by Mirantis Container Runtime (MCR), you use the docker swarm join command to add more nodes to your cluster. When you join a new node, MCR services start running on the node automatically.

MKE supports running worker nodes on Windows Server. You must run all manager nodes on Linux.

Windows nodes limitations

The following features are not yet supported using Windows Server:

  • Networking: Encrypted networks are not supported. If you have upgraded from a previous version of MKE, you will need to recreate an unencrypted version of the ucp-hrm network.

  • Secrets:

    • When using secrets with Windows services, Windows stores temporary secret files on your disk. You can use BitLocker on the volume containing the Docker root directory to encrypt the secret data at rest.

    • When creating a service that uses Windows containers, the options to specify UID, GID, and mode are not supported for secrets. Secrets are only accessible by administrators and users with system access within the container.

  • Mounts: On Windows, Docker cannot listen on a Unix socket. Use TCP or a named pipe instead.

Configure the Docker daemon for Windows nodes

Note

If the cluster is deployed in a site that is offline, sideload MKE images onto the Windows Server nodes. For more information, refer to Install MKE offline.

  1. On a manager node, list the images that are required on Windows nodes:

    docker container run --rm -v /var/run/docker.sock:/var/run/docker.sock mirantis/ucp:3.4.15 images --list --enable-windows
    

    Example output:

    mirantis/ucp-agent-win:3.4.15
    mirantis/ucp-dsinfo-win:3.4.15
    
  2. Pull the required images. For example:

    docker image pull mirantis/ucp-agent-win:3.4.15
    docker image pull mirantis/ucp-dsinfo-win:3.4.15
    
Join Windows nodes to the cluster
  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, navigate to Shared Resources > Nodes.

  3. Click Add Node.

  4. Select Windows for the node type.

  5. Optional. Select Use a custom listen address to specify the address and port where the new node listens for inbound cluster management traffic.

  6. Optional. Select Use a custom advertise address to specify the IP address that is advertised to all members of the cluster for API access.

  7. Copy the displayed command, which looks similar to the following:

    docker swarm join --token <token> <mke-worker-ip>
    

    Alternatively, you can use the command line to obtain the join token. Using your MKE client bundle, run:

    docker swarm join-token worker
    
  8. Run the docker swarm join command captured in the previous step on each instance of Windows Server that will be a worker node.

Use a load balancer

After joining multiple manager nodes for high availability (HA), you can configure your own load balancer to balance user requests across all manager nodes.

Use of a load balancer allows users to access MKE using a centralized domain name. The load balancer can detect when a manager node fails and stop forwarding requests to that node, so that users are unaffected by the failure.

Configure load balancing on MKE
  1. Because MKE uses TLS, do the following when configuring your load balancer:

    • Load-balance TCP traffic on ports 443 and 6443.

    • Do not terminate HTTPS connections.

    • On each manager node, use the /_ping endpoint to verify whether the node is healthy and whether or not it should remain in the load balancing pool.

  2. Use the following examples to configure your load balancer for MKE. The examples are provided, in order, for NGINX, HAProxy, and an AWS Elastic Load Balancer (ELB):

    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    events {
        worker_connections  1024;
    }

    stream {
        upstream ucp_443 {
            server <UCP_MANAGER_1_IP>:443 max_fails=2 fail_timeout=30s;
            server <UCP_MANAGER_2_IP>:443 max_fails=2 fail_timeout=30s;
            server <UCP_MANAGER_N_IP>:443 max_fails=2 fail_timeout=30s;
        }
        server {
            listen 443;
            proxy_pass ucp_443;
        }
    }
    
    global
        log /dev/log    local0
        log /dev/log    local1 notice

    defaults
        mode    tcp
        option  dontlognull
        timeout connect     5s
        timeout client      50s
        timeout server      50s
        timeout tunnel      1h
        timeout client-fin  50s

    ### frontends
    # Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
    frontend ucp_stats
        mode http
        bind 0.0.0.0:8181
        default_backend ucp_stats
    frontend ucp_443
        mode tcp
        bind 0.0.0.0:443
        default_backend ucp_upstream_servers_443

    ### backends
    backend ucp_stats
        mode http
        option httplog
        stats enable
        stats admin if TRUE
        stats refresh 5m
    backend ucp_upstream_servers_443
        mode tcp
        option httpchk GET /_ping HTTP/1.1\r\nHost:\ <UCP_FQDN>
        server node01 <UCP_MANAGER_1_IP>:443 weight 100 check check-ssl verify none
        server node02 <UCP_MANAGER_2_IP>:443 weight 100 check check-ssl verify none
        server node03 <UCP_MANAGER_N_IP>:443 weight 100 check check-ssl verify none
    
    {
          "Subnets": [
             "subnet-XXXXXXXX",
             "subnet-YYYYYYYY",
             "subnet-ZZZZZZZZ"
          ],
          "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
          "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
          "ListenerDescriptions": [
             {
                   "Listener": {
                      "InstancePort": 443,
                      "LoadBalancerPort": 443,
                      "Protocol": "TCP",
                      "InstanceProtocol": "TCP"
                   },
                   "PolicyNames": []
             }
          ],
          "HealthCheck": {
             "HealthyThreshold": 2,
             "Interval": 10,
             "Target": "HTTPS:443/_ping",
             "Timeout": 2,
             "UnhealthyThreshold": 4
          },
          "VPCId": "vpc-XXXXXX",
          "BackendServerDescriptions": [],
          "Instances": [
             {
                   "InstanceId": "i-XXXXXXXXX"
             },
             {
                   "InstanceId": "i-XXXXXXXXX"
             },
             {
                   "InstanceId": "i-XXXXXXXXX"
             }
          ],
          "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
          "SecurityGroups": [
             "sg-XXXXXXXXX"
          ],
          "Policies": {
             "LBCookieStickinessPolicies": [],
             "AppCookieStickinessPolicies": [],
             "OtherPolicies": []
          },
          "LoadBalancerName": "ELB-UCP",
          "CreatedTime": "2017-02-13T21:40:15.400Z",
          "AvailabilityZones": [
             "us-west-2c",
             "us-west-2a",
             "us-west-2b"
          ],
          "Scheme": "internet-facing",
          "SourceSecurityGroup": {
             "OwnerAlias": "XXXXXXXXXXXX",
             "GroupName":  "XXXXXXXXXXXX"
          }
       }
    
  3. Create either the nginx.conf or haproxy.cfg file, as required.

    For instructions on deploying with an AWS load balancer, refer to Getting Started with Network Load Balancers in the AWS documentation.

  4. Deploy the load balancer, using the NGINX or HAProxy command below, as appropriate:

    docker run --detach \
    --name ucp-lb \
    --restart=unless-stopped \
    --publish 443:443 \
    --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx:stable-alpine
    
    docker run --detach \
    --name ucp-lb \
    --publish 443:443 \
    --publish 8181:8181 \
    --restart=unless-stopped \
    --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
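
After the load balancer is running, you can verify that each manager node answers on the /_ping health check endpoint referenced in the configuration examples above. This is a minimal sketch that assumes direct network access to the manager IP addresses:

  # A healthy manager responds to /_ping with HTTP 200.
  curl --insecure https://<UCP_MANAGER_1_IP>/_ping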
    
Load balancing MKE and MSR together

By default, both MKE and Mirantis Secure Registry (MSR) use port 443. If you plan to deploy MKE and MSR together, your load balancer must distinguish traffic between the two by IP address or port number.

If you want MKE and MSR both to use port 443, then you must either use separate load balancers for each or use two virtual IPs. Otherwise, you must configure your load balancer to expose MKE or MSR on a port other than 443.

Use two-factor authentication

Two-factor authentication (2FA) adds an extra layer of security when logging in to the MKE web UI. Once enabled, 2FA requires the user to submit an additional authentication code generated on a separate mobile device along with their user name and password at login.

Configure 2FA

MKE 2FA requires the use of a time-based one-time password (TOTP) application installed on a mobile device to generate a time-based authentication code for each login to the MKE web UI. Examples of such applications include 1Password, Authy, and LastPass Authenticator.

To configure 2FA:

  1. Install a TOTP application to your mobile device.

  2. In the MKE web UI, navigate to My Profile > Security.

  3. Toggle the Two-factor authentication control to enabled.

  4. Open the TOTP application and scan the offered QR code. The device will display a six-digit code.

  5. Enter the six-digit code in the offered field and click Register. The TOTP application will save your MKE account.

    Important

    A set of recovery codes displays in the MKE web UI when two-factor authentication is enabled. Save these codes in a safe location, as they can be used to access the MKE web UI if for any reason the configured mobile device becomes unavailable. Refer to Recover 2FA for details.

Access MKE using 2FA

Once 2FA is enabled, you will need to provide an authentication code each time you log in to the MKE web UI. Typically, the TOTP application installed on your mobile device generates the code and refreshes it every 30 seconds.

To access the MKE web UI with 2FA enabled:

  1. In the MKE web UI, click Sign in. The Sign in page will display.

  2. Enter a valid user name and password.

  3. Access the MKE code in the TOTP application on your mobile device.

  4. Enter the current code in the 2FA Code field in the MKE web UI.

Note

Multiple authentication failures may indicate a lack of synchronization between the mobile device clock and the mobile provider.

Disable 2FA

Mirantis strongly recommends using 2FA to secure MKE accounts. If you need to temporarily disable 2FA, re-enable it as soon as possible.

To disable 2FA:

  1. In the MKE web UI, navigate to My Profile > Security.

  2. Toggle the Two-factor authentication control to disabled.

Recover 2FA

If the mobile device with authentication codes is unavailable, you can re-access MKE using any of the recovery codes that display in the MKE web UI when 2FA is first enabled.

To recover 2FA:

  1. Enter one of the recovery codes when prompted for the two-factor authentication code upon login to the MKE web UI.

  2. Navigate to My Profile > Security.

  3. Disable 2FA and then re-enable it.

  4. Open the TOTP application and scan the offered QR code. The device will display a six-digit code.

  5. Enter the six-digit code in the offered field and click Register. The TOTP application will save your MKE account.

If there are no recovery codes to draw from, ask your system administrator to disable 2FA in order to regain access to the MKE web UI. Once done, repeat the Configure 2FA procedure to reinstate 2FA protection.

MKE administrators are not able to re-enable 2FA for users.

Migrate an MKE cluster to a new OS

MKE supports a node-replacement strategy for migrating an active cluster to any supported Linux OS.

Migrate manager nodes

When migrating manager nodes, Mirantis recommends that you replace one manager node at a time, to preserve fault tolerance and minimize performance impact.

  1. Add a node that is running the new OS to your MKE cluster.

  2. Promote the new node to an MKE manager and wait until the node becomes healthy.

  3. Demote a manager node that is running the old OS.

  4. Remove the demoted node from the cluster.

  5. Repeat the previous steps until all manager nodes are running the new OS.

Migrate worker nodes

It is not necessary to migrate worker nodes one at a time.

  1. Add the required worker nodes that are running the new OS to your MKE cluster.

  2. Remove the worker nodes that are running the old OS.

Authorize role-based access

MKE allows administrators to authorize users to view, edit, and use cluster resources by granting role-based permissions for specific resource sets. This section describes how to configure all the relevant components of role-based access control (RBAC).

Refer to Role-based access control for detailed reference information.

Create organizations, teams, and users

This topic describes how to create organizations, teams, and users.

Note

  • Individual users can belong to multiple teams but a team can belong to only one organization.

  • New users have a default permission level that you can extend by adding the user to a team and creating grants. Alternatively, you can make the user an administrator to extend their permission level.

  • In addition to integrating with LDAP services, MKE provides built-in authentication. You must manually create users to use MKE built-in authentication.

Create an organization
  1. Log in to the MKE web UI as an administrator.

  2. Navigate to Access Control > Orgs & Teams > Create.

  3. Enter a unique organization name that is 1-100 characters in length and which does not contain any of the following:

    • Capital letters

    • Spaces

    • The following non-alphabetic characters: * + [ ] : ; | = , ? < > " '

  4. Click Create.

Create a team in the organization
  1. Log in to the MKE web UI as an administrator.

  2. Navigate to the required organization and click the plus icon in the top right corner to call the Create Team dialog.

  3. Enter a team name with a maximum of 100 characters.

  4. Optional. Enter a description for the team. Maximum: 140 characters.

  5. Click Create.

Add an existing user to a team
  1. Log in to the MKE web UI as an administrator.

  2. Navigate to the required team and click the plus sign in the top right corner.

  3. Select the users you want to include and click Add Users.

Create a user
  1. Log in to the MKE web UI as an administrator.

  2. Navigate to Access Control > Users > Create.

  3. Enter a unique user name that is 1-100 characters in length and which does not contain any of the following:

    • Capital letters

    • Spaces

    • The following non-alphabetic characters: * + [ ] : ; | = , ? < > " '

  4. Enter a password that contains at least 8 characters.

  5. Enter the full name of the user.

  6. Optional. Toggle IS A MIRANTIS KUBERNETES ENGINE ADMIN to Yes to give the user administrator privileges.

  7. Click Create.

Enable LDAP and sync teams and users

This topic describes how to enable LDAP and to sync your LDAP directory to the teams and users that you have created in MKE.


To enable LDAP:

  1. Log in to the MKE web UI as an MKE administrator.

  2. In the left-side navigation panel, navigate to <user name> > Admin Settings > Authentication & Authorization.

  3. Scroll down to the Identity Provider Integration section.

  4. Toggle LDAP to Enabled. A list of LDAP settings displays.

  5. Enter the values that correspond with your LDAP server installation.

  6. Use the built-in MKE LDAP Test login tool to confirm that your LDAP settings are correctly configured.


To synchronize LDAP users into MKE teams:

  1. In the left-side navigation panel, navigate to Access Control > Orgs & Teams and select an organization.

  2. Click + to create a team.

  3. Enter a team name and description.

  4. Toggle ENABLE SYNC TEAM MEMBERS to Yes.

  5. Choose between the following two methods for matching group members from an LDAP directory. Refer to the table below for more information.

    • Keep the default Match Search Results method and fill out the Search Base DN, Search filter, and Search subtree instead of just one level fields, as required.

    • Toggle LDAP MATCH METHOD to Match Group Members to change the method for matching group members in the LDAP directory, and then fill out the Group DN and Group Member Attribute fields.

  6. Optional. Select Immediately Sync Team Members to run an LDAP sync operation after saving the configuration for the team.

  7. Click Create.

  8. Repeat the preceding steps to synchronize LDAP users into additional teams.


There are two methods for matching group members from an LDAP directory:

  • Match Search Results (search bind): Specifies that team members are synced using a search query against the LDAP directory of your organization. The team membership is synced to match the users in the search results. The related settings are as follows:

    • Search Base DN: The distinguished name of the node in the directory tree where the search starts looking for users.

    • Search filter: Filter to find users. If empty, existing users in the search scope are added as members of the team.

    • Search subtree instead of just one level: Defines search through the full LDAP tree, not just one level, starting at the base DN.

  • Match Group Members (direct bind): Specifies that team members are synced directly with members of a group in your LDAP directory. The team membership syncs to match the membership of the group. The related settings are as follows:

    • Group DN: The distinguished name of the group from which you select users.

    • Group Member Attribute: The value of this attribute corresponds to the distinguished names of the members of the group.

Define roles with authorized API operations

Roles define a set of API operations permitted for a resource set. You apply roles to users and teams by creating grants. Roles have the following important characteristics:

  • Roles are always enabled.

  • Roles cannot be edited. To change a role, you must delete it and create a new role with the changes you want to implement.

  • To delete roles used within a grant, you must first delete the grant.

  • Only administrators can create and delete roles.

This topic explains how to create custom Swarm roles and describes default and Swarm operations roles.

Default roles

The following describes the built-in roles:

  • None: Users have no access to Swarm or Kubernetes resources. Maps to the No Access role in UCP 2.1.x.

  • View Only: Users can view resources but cannot create them.

  • Restricted Control: Users can view and edit resources but cannot run a service or container in a way that affects the node where it is running. Users cannot mount a node directory, exec into containers, or run containers in privileged mode or with additional kernel capabilities.

  • Scheduler: Users can view worker and manager nodes and schedule, but not view, workloads on these nodes. By default, all users are granted the Scheduler role for the Shared collection. To view workloads, users need Container View permissions.

  • Full Control: Users can view and edit all granted resources. They can create containers without any restriction, but cannot see the containers of other users.

To learn how to apply a default role using a grant, refer to Create grants.

Create a custom Swarm role

You can use default or custom roles.

To create a custom Swarm role:

  1. Log in to the MKE web UI.

  2. Click Access Control > Roles.

  3. Select the Swarm tab and click Create.

  4. On the Details tab, enter the role name.

  5. On the Operations tab, select the permitted operations for each resource type. For the operation descriptions, refer to Swarm operations roles.

  6. Click Create.

Note

  • The Roles page lists all applicable default and custom roles in the organization.

  • You can apply a role with the same name to different resource sets.

To learn how to apply a custom role using a grant, refer to Create grants.

Swarm operations roles

The following describes the set of operations (calls) that you can execute against Swarm resources. Each permission corresponds to a CLI command and enables the user to execute that command. Refer to the Docker CLI documentation for a complete list of commands and examples.

  • Config (docker config): Manage Docker configurations.

  • Container (docker container): Manage Docker containers.

  • Container (docker container create): Create a new container.

  • Container (docker create [OPTIONS] IMAGE [COMMAND] [ARG...]): Create new containers.

  • Container (docker update [OPTIONS] CONTAINER [CONTAINER...]): Update the configuration of one or more containers. Using this command can also prevent containers from consuming too many resources from their Docker host.

  • Container (docker rm [OPTIONS] CONTAINER [CONTAINER...]): Remove one or more containers.

  • Image (docker image COMMAND): Manage images.

  • Image (docker image remove): Remove one or more images.

  • Network (docker network): Manage networks. You can use child commands to create, inspect, list, remove, prune, connect, and disconnect networks.

  • Node (docker node COMMAND): Manage Swarm nodes.

  • Secret (docker secret COMMAND): Manage Docker secrets.

  • Service (docker service COMMAND): Manage services.

  • Volume (docker volume create [OPTIONS] [VOLUME]): Create a new volume that containers can consume and store data in.

  • Volume (docker volume rm [OPTIONS] VOLUME [VOLUME...]): Remove one or more volumes. Users cannot remove a volume that is in use by a container.

Use collections and namespaces

MKE enables access control to cluster resources by grouping them into two types of resource sets: Swarm collections (for Swarm workloads) and Kubernetes namespaces (for Kubernetes workloads). Refer to Role-based access control for a description of the difference between Swarm collections and Kubernetes namespaces. Administrators use grants to combine resource sets, giving users permission to access specific cluster resources.

Swarm collection labels

Users assign resources to collections with labels. The following resource types have editable labels and thus you can assign them to collections: services, nodes, secrets, and configs. For these resource types, change com.docker.ucp.access.label to move a resource to a different collection. Collections have generic names by default, but you can assign them meaningful names as required (such as dev, test, and prod).

Note

The following resource types do not have editable labels and thus you cannot assign them to collections: containers, networks, and volumes.

Groups of resources identified by a shared label are called stacks. You can place one stack of resources in multiple collections. MKE automatically places resources in the default collection. Users can change this using a specific com.docker.ucp.access.label in the stack/compose file.
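
For example, an authorized user can place a service in a specific collection at deploy time by setting the access label explicitly. This is a minimal sketch in which /Shared/dev is a hypothetical collection path:

  # Deploy a service into the /Shared/dev collection by setting its access label.
  docker service create \
    --name web \
    --label com.docker.ucp.access.label="/Shared/dev" \
    nginx:latest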

The system uses com.docker.ucp.collection.* to enable efficient resource lookup. You do not need to manage these labels, as MKE controls them automatically. Nodes have the following labels set to true by default:

  • com.docker.ucp.collection.root

  • com.docker.ucp.collection.shared

  • com.docker.ucp.collection.swarm

Default and built-in Swarm collections

This topic describes both MKE default and built-in Swarm collections.


Default Swarm collections

Each user has a default collection, which can be changed in the MKE preferences.

To deploy resources, they must belong to a collection. When a user deploys a resource without using an access label to specify its collection, MKE automatically places the resource in the default collection.

Default collections are useful for the following types of users:

  • Users who work only on a well-defined portion of the system

  • Users who deploy stacks but do not want to edit the contents of their compose files

Custom collections are appropriate for users with more complex roles in the system, such as administrators.

Note

For those using Docker Compose, the system applies default collection labels across all resources in the stack unless you explicitly set com.docker.ucp.access.label.

Built-in Swarm collections

MKE includes the following built-in Swarm collections:

  • /: Path to all resources in the Swarm cluster. Resources not in a collection are put here.

  • /System: Path to MKE managers, MSR nodes, and MKE/MSR system services. By default, only administrators have access to this collection.

  • /Shared: Default collection for worker nodes. MKE places all worker nodes here by default.

  • /Shared/Private: Path to a user's private collection. Private collections are not created until the user logs in for the first time.

  • /Shared/Legacy: Path to the access control labels of legacy versions (UCP 2.1 and earlier).

Group and isolate cluster resources

This topic describes how to group and isolate cluster resources into swarm collections and Kubernetes namespaces.

Log in to the MKE web UI as an administrator and complete the following steps:

To create a Swarm collection:

  1. Navigate to Shared Resources > Collections.

  2. Click View Children next to Swarm.

  3. Click Create Collection.

  4. Enter a collection name and click Create.


To add a resource to the collection:

  1. Navigate to the resource you want to add to the collection. For example, click Shared Resources > Nodes and then click the node you want to add.

  2. Click the gear icon in the top right to edit the resource.

  3. Scroll down to Labels and enter the name of the collection you want to add the resource to, for example, Prod.


To create a Kubernetes namespace:

  1. Navigate to Kubernetes > Namespaces and click Create.

  2. Leave the Namespace drop-down blank.

  3. Paste the following in the Object YAML editor:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: namespace-name
    
  4. Click Create.

Note

For more information on assigning resources to a particular namespace, refer to Kubernetes Documentation: Namespaces Walkthrough.

See also

Kubernetes

See also

Kubernetes

Create grants

MKE administrators create grants to control how users and organizations access resource sets. A grant defines user permissions to access resources. Each grant associates one subject with one role and one resource set. For example, you can grant the Prod Team Restricted Control over services in the /Production collection.

The following is a common workflow for creating grants:

  1. Create subjects (organizations, teams, and users), either manually or by syncing an LDAP directory.

  2. Define custom roles (or use defaults) by adding permitted API operations per type of resource.

  3. Group cluster resources into Swarm collections or Kubernetes namespaces.

  4. Create grants by combining subject, role, and resource set.

Note

This section assumes that you have created the relevant objects for the grant, including the subject, role, and resource set (Kubernetes namespace or Swarm collection).

To create a Kubernetes grant:

  1. Log in to the MKE web UI.

  2. Navigate to Access Control > Grants.

  3. Select the Kubernetes tab and click Create Role Binding.

  4. Under Subject, select Users, Organizations, or Service Account.

    • For Users, select the user from the pull-down menu.

    • For Organizations, select the organization and, optionally, the team from the pull-down menu.

    • For Service Account, select the namespace and service account from the pull-down menu.

  5. Click Next to save your selections.

  6. Under Resource Set, toggle the switch labeled Apply Role Binding to all namespaces (Cluster Role Binding).

  7. Click Next.

  8. Under Role, select a cluster role.

  9. Click Create.


To create a Swarm grant:

  1. Log in to the MKE web UI.

  2. Navigate to Access Control > Grants.

  3. Select the Swarm tab and click Create Grant.

  4. Under Subject, select Users or Organizations.

    • For Users, select a user from the pull-down menu.

    • For Organizations, select the organization and, optionally, the team from the pull-down menu.

  5. Click Next to save your selections.

  6. Under Resource Set, click View Children until the required collection displays.

  7. Click Select Collection next to the required collection.

  8. Click Next.

  9. Under Role, select a role type from the drop-down menu.

  10. Click Create.

Note

MKE places new users in the docker-datacenter organization by default. To apply permissions to all MKE users, create a grant with the docker-datacenter organization as a subject.

Grant users permission to pull images

By default, only administrators can pull images into a cluster managed by MKE. This topic describes how to give non-administrator users permission to pull images.

Images are always in the Swarm collection, as they are a shared resource. Grant users the Image Create permission for the Swarm collection to allow them to pull images.

To grant a user permission to pull images:

  1. Log in to the MKE web UI as an administrator.

  2. Navigate to Access Control > Roles.

  3. Select the Swarm tab and click Create.

  4. On the Details tab, enter Pull images for the role name.

  5. On the Operations tab, select Image Create from the IMAGE OPERATIONS drop-down.

  6. Click Create.

  7. Navigate to Access Control > Grants.

  8. Select the Swarm tab and click Create Grant.

  9. Under Subject, click Users and select the required user from the drop-down.

  10. Click Next.

  11. Under Resource Set, select the Swarm collection and click Next.

  12. Under Role, select Pull images from the drop-down.

  13. Click Create.

Reset passwords

This topic describes how to reset passwords for users and administrators.

To change a user password in MKE:

  1. Log in to the MKE web UI with administrator credentials.

  2. Click Access Control > Users.

  3. Select the user whose password you want to change.

  4. Click the gear icon in the top right corner.

  5. Select Security from the left navigation.

  6. Enter the new password, confirm that it is correct, and click Update Password.

Note

For users managed with an LDAP service, you must change user passwords on the LDAP server.

To change an administrator password in MKE:

  1. SSH to an MKE manager node and run:

    docker run --net=host -v ucp-auth-api-certs:/tls -it \
    "$(docker inspect --format \
    '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' \
    ucp-auth-api)" \
    "$(docker inspect --format \
    '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' \
    ucp-auth-api)" \
    passwd -i
    
  2. Optional. If you have DEBUG set as your global log level within MKE, running docker inspect --format '{{ index .Spec.TaskTemplate.ContainerSpec.Args 0 }}' ucp-auth-api returns --debug instead of --db-addr.

    In that case, pass Args index 1 to docker inspect instead to reset your administrator password:

    docker run --net=host -v ucp-auth-api-certs:/tls -it \
    "$(docker inspect --format \
    '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' \
    ucp-auth-api)" \
    "$(docker inspect --format \
    '{{ index .Spec.TaskTemplate.ContainerSpec.Args 1 }}' \
    ucp-auth-api)" \
    passwd -i
    

Note

Alternatively, ask another administrator to change your password.

RBAC tutorials

This section contains a collection of tutorials that explain how to use RBAC in a variety of scenarios.

Deploy a simple stateless app with RBAC

This topic describes how to deploy an NGINX web server, limiting access to one team using role-based access control (RBAC).

You are the MKE system administrator and will configure permissions to company resources using a four-step process:

  1. Build the organization with teams and users.

  2. Define roles with allowable operations per resource type, such as permission to run containers.

  3. Create collections or namespaces for accessing actual resources.

  4. Create grants that join team, role, and resource set.


To deploy a simple stateless app with RBAC:

  1. Build the organization:

    1. Log in to the MKE web UI.

    2. Add an organization called company-datacenter.

    3. Create three teams according to the following structure:

      • DBA: Alex

      • Dev: Bett

      • Ops: Alex, Chad

  2. Deploy NGINX with Kubernetes:

    1. Click Kubernetes > Namespaces.

    2. Paste the following manifest in the Object YAML editor and click Create.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: nginx-namespace
      
    3. Create a simple role for the Ops team called Kube Deploy.

    4. Create a grant for the Ops team to access the nginx-namespace with the Kube Deploy custom role.

    5. Log in to the MKE web UI as Chad on the Ops team.

    6. Click Kubernetes > Namespaces.

    7. Paste the following manifest in the Object YAML editor and click Create.

      apiVersion: apps/v1beta2
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        selector:
          matchLabels:
            app: nginx
        replicas: 2
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:latest
              ports:
              - containerPort: 80
      

      Note

      Use apps/v1beta1 for versions lower than 1.8.0.

    8. Sign in as each user and verify that the following users cannot see nginx-namespace:

      • Alex on the DBA team

      • Bett on the Dev team

  3. Deploy NGINX as a Swarm service:

    1. Create a collection for NGINX resources called nginx-collection nested under the Shared collection. To view child collections, click View Children.

    2. Create a simple role for the Ops team called Swarm Deploy.

    3. Create a grant for the Ops team to access the nginx-collection with the Swarm Deploy custom role.

    4. Log in to the MKE web UI as Chad on the Ops team.

    5. Click Swarm > Services > Create.

    6. On the Details tab, enter the following:

      • Name: nginx-service

      • Image: nginx:latest

    7. On the Collection tab, click View Children next to Swarm and then next to Shared.

    8. Click nginx-collection, then click Create.

    9. Sign in as each user and verify that the following users cannot see nginx-collection:

      • Alex on the DBA team

      • Bett on the Dev team

Isolate volumes to specific teams

This topic describes how to grant two teams access to separate volumes in two different resource collections such that neither team can see the volumes of the other team. MKE allows you to do this even if the volumes are on the same nodes.

To create two teams:

  1. Log in to the MKE web UI.

  2. Navigate to Orgs & Teams.

  3. Create two teams in the engineering organization named Dev and Prod.

  4. Add a non-admin MKE user to the Dev team.

  5. Add a non-admin MKE user to the Prod team.

To create two resource collections:

  1. Create a Swarm collection called dev-volumes nested under the Shared collection.

  2. Create a Swarm collection called prod-volumes nested under the Shared collection.

To create grants for controlling access to the new volumes:

  1. Create a grant for the Dev team to access the dev-volumes collection with the Restricted Control built-in role.

  2. Create a grant for the Prod team to access the prod-volumes collection with the Restricted Control built-in role.

To create a volume as a team member:

  1. Log in as one of the users on the Dev team.

  2. Navigate to Swarm > Volumes and click Create.

  3. On the Details tab, name the new volume dev-data.

  4. On the Collection tab, navigate to the dev-volumes collection and click Create.

  5. Log in as one of the users on the Prod team.

  6. Navigate to Swarm > Volumes and click Create.

  7. On the Details tab, name the new volume prod-data.

  8. On the Collection tab, navigate to the prod-volumes collection and click Create.

As a result, the user on the Prod team cannot see the Dev team volumes, and the user on the Dev team cannot see the Prod team volumes. MKE administrators can see all of the volumes created by either team.

Isolate nodes

You can use MKE to physically isolate resources by organizing nodes into collections and granting Scheduler access for different users. Control access to nodes by moving them to dedicated collections where you can grant access to specific users, teams, and organizations.

The following tutorials explain how to isolate nodes using Swarm and Kubernetes.

Isolate cluster nodes with Swarm

This tutorial explains how to give a team access to a node collection and a resource collection. MKE access control ensures that team members cannot view or use Swarm resources that are not in their collection.

Note

You need an MKE license and at least two worker nodes to complete this tutorial.

The following is a high-level overview of the steps you will take to isolate cluster nodes:

  1. Create an Ops team and assign a user to it.

  2. Create a Prod collection for the team node.

  3. Assign a worker node to the Prod collection.

  4. Grant the Ops teams access to its collection.


To create a team:

  1. Log in to the MKE web UI.

  2. Create a team named Ops in your organization.

  3. Add a user to the team who is not an administrator.


To create the team collections:

In this example, the Ops team uses a collection for its assigned nodes and another for its resources.

  1. Create a Swarm collection called Prod nested under the Swarm collection.

  2. Create a Swarm collection called Webserver nested under the Prod collection.

The Prod collection is for the worker nodes and the Webserver sub-collection is for an application that you will deploy on the corresponding worker nodes.


To move a worker node to a different collection:

Note

MKE places worker nodes in the Shared collection by default, and it places those running MSR in the System collection.

  1. Navigate to Shared Resources > Nodes to view all of the nodes in the swarm.

  2. Find a node located in the Shared collection. You cannot move worker nodes that are assigned to the System collection.

  3. Click the gear icon on the node details page.

  4. In the Labels section on the Details tab, change com.docker.ucp.access.label from /Shared to /Prod.

  5. Click Save to move the node to the Prod collection.
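
The same move can be made from the CLI by updating the access label on the node. This is a minimal sketch that assumes an administrator client bundle, with <node-id> standing in for the worker node name or ID:

  # Move the node from the /Shared collection to the /Prod collection.
  docker node update --label-add com.docker.ucp.access.label=/Prod <node-id>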


To create two grants for team access to the two collections:

  1. Create a grant for the Ops team to access the Webserver collection with the built-in Restricted Control role.

  2. Create a grant for the Ops team to access the Prod collection with the built-in Scheduler role.

The cluster is now set up for node isolation. Users with access to nodes in the Prod collection can deploy Swarm services and Kubernetes apps. They cannot, however, schedule workloads on nodes that are not in the collection.


To deploy a Swarm service as a team member:

When a user deploys a Swarm service, MKE assigns its resources to the default collection. As a user on the Ops team, set Webserver to be your default collection.

Note

From the resource target collection, MKE walks up the ancestor collections until it finds the highest ancestor that the user has Scheduler access to. MKE schedules tasks on any nodes in the tree below this ancestor. In this example, MKE assigns the user service to the Webserver collection and schedules tasks on nodes in the Prod collection.

  1. Log in as a user on the Ops team.

  2. Navigate to Shared Resources > Collections.

  3. Navigate to the Webserver collection.

  4. Under the vertical ellipsis menu, select Set to default.

  5. Navigate to Swarm > Services and click Create to create a Swarm service.

  6. Name the service NGINX, enter nginx:latest in the Image* field, and click Create.

  7. Click the NGINX service when it turns green.

  8. Scroll down to TASKS, click the NGINX container, and confirm that it is in the Webserver collection.

  9. Navigate to the Metrics tab on the container page, select the node, and confirm that it is in the Prod collection.

Note

An alternative approach is to use a grant instead of changing the default collection. An administrator can create a grant for a role that has the Service Create permission for the Webserver collection or a child collection. In this case, the user sets the value of com.docker.ucp.access.label to the new collection or one of its children that has a Service Create grant for the required user.

Isolate cluster nodes with Kubernetes

This topic describes how to use a Kubernetes namespace to deploy a Kubernetes workload to worker nodes using the MKE web UI.

MKE uses the scheduler.alpha.kubernetes.io/node-selector annotation key to assign node selectors to namespaces. Assigning the name of the node selector to this annotation pins all applications deployed in the namespace to the nodes that have the given node selector specified.

To isolate cluster nodes with Kubernetes:

  1. Create a Kubernetes namespace.

    Note

    You can also associate nodes with a namespace by providing the namespace definition information in a configuration file.

    1. Log in to the MKE web UI as an administrator.

    2. In the left-side navigation panel, navigate to Kubernetes and click Create to open the Create Kubernetes Object page.

    3. Paste the following in the Object YAML editor:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: namespace-name
      
    4. Click Create to create the namespace-name namespace.

  2. Grant access to the Kubernetes namespace:

    1. Create a role binding for a user of your choice to access the namespace-name namespace with the built-in cluster-admin Cluster Role.

  3. Associate nodes with the namespace:

    1. From the left-side navigation panel, navigate to Shared Resources > Nodes.

    2. Select the required node.

    3. Click the Edit Node icon in the upper-right corner.

    4. Scroll down to the Kubernetes Labels section and click Add Label.

    5. In the Key field, enter zone.

    6. In the Value field, enter example-zone.

    7. Click Save.

    8. Add a scheduler node selector annotation as part of the namespace definition:

      apiVersion: v1
      kind: Namespace
      metadata:
        annotations:
          scheduler.alpha.kubernetes.io/node-selector: zone=example-zone
        name: ops-nodes
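
As an alternative to the web UI steps above, the node label and the namespace annotation can also be applied with kubectl. This is a minimal sketch that assumes an administrator client bundle, with <node-name> standing in for your worker node:

  # Label the worker node so that the namespace node selector can match it.
  kubectl label nodes <node-name> zone=example-zone

  # Annotate the ops-nodes namespace so that all workloads deployed into it
  # are scheduled only on nodes carrying the zone=example-zone label.
  kubectl annotate namespace ops-nodes \
    scheduler.alpha.kubernetes.io/node-selector=zone=example-zone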
      
Set up access control architecture

This tutorial explains how to set up a complete access architecture for a fictitious company called OrcaBank.

OrcaBank is reorganizing their application teams by product with each team providing shared services as necessary. Developers at OrcaBank perform their own DevOps and deploy and manage the lifecycle of their applications.

OrcaBank has four teams with the following resource needs:

  • Security needs view-only access to all applications in the cluster.

  • DB (database) needs full access to all database applications and resources.

  • Mobile needs full access to their mobile applications and limited access to shared DB services.

  • Payments needs full access to their payments applications and limited access to shared DB services.

OrcaBank is taking advantage of the flexibility in the MKE grant model by applying two grants to each application team. One grant allows each team to fully manage the apps in their own collection, and the second grant gives them the (limited) access they need to networks and secrets within the db collection.

The resulting access architecture has applications connecting across collection boundaries. By assigning multiple grants per team, the Mobile and Payments applications teams can connect to dedicated database resources through a secure and controlled interface, leveraging database networks and secrets.

Note

MKE deploys all resources across the same group of worker nodes while providing the option to segment nodes.


To set up a complete access control architecture:

  1. Set up LDAP/AD integration and create the required teams.

    OrcaBank will standardize on LDAP for centralized authentication to help their identity team scale across all the platforms they manage.

    To implement LDAP authentication in MKE, OrcaBank is using the MKE native LDAP/AD integration to map LDAP groups directly to MKE teams. You can add or remove users from MKE teams via LDAP, which the OrcaBank identity team will centrally manage.

    1. Enable LDAP in MKE and sync your directory.

    2. Create the following teams: Security, DB, Mobile, and Payments.

  2. Define the required roles:

    1. Define an Ops role that allows users to perform all operations against configs, containers, images, networks, nodes, secrets, services, and volumes.

    2. Define a View & Use Networks + Secrets role that enables users to view and connect to networks and view and use secrets used by DB containers, but that prevents them from seeing or impacting the DB applications themselves.

    Note

    You will also use the built-in View Only role that allows users to see all resources, but not edit or use them.

  3. Create the required Swarm collections.

    All OrcaBank applications share the same physical resources, so all nodes and applications are configured in collections that nest under the built-in Shared collection.

    Create the following collections:

    • /Shared/mobile to host all mobile applications and resources.

    • /Shared/payments to host all payments applications and resources.

    • /Shared/db to serve as a top-level collection for all db resources.

    • /Shared/db/mobile to hold db resources for mobile applications.

    • /Shared/db/payments to hold db resources for payments applications.

    Note

    The OrcaBank grant composition will ensure that the Swarm collection architecture gives the DB team access to all db resources and restricts app teams to shared db resources.

  4. Create the required grants:

    1. For the Security team, create grants to access the following collections with the View Only built-in role: /Shared/mobile, /Shared/payments, /Shared/db, /Shared/db/mobile, and /Shared/db/payments.

    2. For the DB team, create grants to access the /Shared/db, /Shared/db/mobile, and /Shared/db/payments collections with the Ops custom role.

    3. For the Mobile team, create a grant to access the /Shared/mobile collection with the Ops custom role.

    4. For the Mobile team, create a grant to access the /Shared/db/mobile collection with the View & Use Networks + Secrets custom role.

    5. For the Payments team, create a grant to access the /Shared/payments collection with the Ops custom role.

    6. For the Payments team, create a grant to access the /Shared/db/payments collection with the View & Use Networks + Secrets custom role.

Set up access control architecture with additional security requirements

Caution

Complete the Set up access control architecture tutorial before you attempt this advanced tutorial.

In the previous tutorial, you assigned multiple grants to resources across collection boundaries on a single platform. In this tutorial, you will implement the following stricter security requirements for the fictitious company, OrcaBank:

  • OrcaBank is adding a staging zone to their deployment model, deploying applications first from development, then from staging, and finally from production.

  • OrcaBank will no longer permit production applications to share any physical infrastructure with non-production infrastructure. They will use node access control to segment application scheduling and access.

    Note

    Node access control is an MKE feature that provides secure multi-tenancy with node-based isolation. Use it to place nodes in different collections so that you can schedule and isolate resources on disparate physical or virtual hardware. For more information, refer to Isolate nodes.

OrcaBank will still use its three application teams from the previous tutorial (DB, Mobile, and Payments) but with varying levels of segmentation between them. The new access architecture will organize the MKE cluster into staging and production collections with separate security zones on separate physical infrastructure.

The four OrcaBank teams now have the following production and staging needs:

  • Security needs view-only access to all applications in production and no access to staging.

  • DB needs full access to all database applications and resources in production and no access to staging.

  • In both production and staging, Mobile needs full access to their applications and limited access to shared DB services.

  • In both production and staging, Payments needs full access to their applications and limited access to shared DB services.

The resulting access architecture will provide physical segmentation between production and staging using node access control.

Applications are scheduled only on MKE worker nodes in the dedicated application collection. Applications use shared resources across collection boundaries to access the databases in the /prod/db collection.


To set up a complete access control architecture with additional security requirements:

  1. Verify LDAP, teams, and roles are set up properly:

    1. Verify LDAP is enabled and syncing. If it is not, configure that now.

    2. Verify that the following teams are present in your organization: Security, DB, Mobile, and Payments. If they are not, create them.

    3. Verify that there is a View & Use Networks + Secrets role. If there is not, define a View & Use Networks + Secrets role that enables users to view and connect to networks and view and use secrets used by DB containers. Configure the role so that it prevents those who use it from seeing or impacting the DB applications themselves.

    Note

    You will also use the following built-in roles:

    • View Only allows users to see but not edit all cluster resources.

    • Full Control allows users complete control of all collections granted to them. They can also create containers without restriction but cannot see the containers of other users. This role will replace the custom Ops role from the previous tutorial.

  2. Create the required Swarm collections.

    In the previous tutorial, OrcaBank created separate collections for each application team and nested them all under /Shared.

    To meet their new security requirements for production, OrcaBank will add top-level prod and staging collections with mobile and payments application collections nested underneath. The prod collection (but not the staging collection) will also include a db collection with a second set of mobile and payments collections nested underneath.

    OrcaBank will also segment their nodes such that the production and staging zones will have dedicated nodes, and in production each application will be on a dedicated node.

    Create the following collections:

    • /prod

    • /prod/mobile

    • /prod/payments

    • /prod/db

    • /prod/db/mobile

    • /prod/db/payments

    • /staging

    • /staging/mobile

    • /staging/payments

  3. Create the required grants as described in Create grants:

    1. For the Security team, create grants to access the following collections with the View Only built-in role: /prod, /prod/mobile, /prod/payments, /prod/db, /prod/db/mobile, and /prod/db/payments.

    2. For the DB team, create grants to access the following collections with the Full Control built-in role: /prod/db, /prod/db/mobile, and /prod/db/payments.

    3. For the Mobile team, create grants to access the /prod/mobile and /staging/mobile collections with the Full Control built-in role.

    4. For the Mobile team, create a grant to access the /prod/db/mobile collection with the View & Use Networks + Secrets custom role.

    5. For the Payments team, create grants to access the /prod/payments and /staging/payments collections with the Full Control built-in role.

    6. For the Payments team, create a grant to access the /prod/db/payments collection with the View & Use Networks + Secrets custom role.

Upgrade an MKE installation

Note

Prior to upgrading MKE, review the MKE release notes for information that may be relevant to the upgrade process.

In line with your MKE upgrade, you should plan to upgrade the Mirantis Container Runtime (MCR) instance on each cluster node to version 20.10.0 or later. Mirantis recommends that you schedule the upgrade for non-business hours to ensure minimal user impact.

Do not make changes to your MKE configuration while upgrading, as doing so can cause misconfigurations that are difficult to troubleshoot.

Semantic versioning

MKE uses semantic versioning. While downgrades are not supported, Mirantis supports upgrades according to the following rules:

  • When you upgrade from one patch version to another, you can skip patch versions as no data migration takes place between patch versions.

  • When you upgrade between minor releases, you cannot skip releases. You can, however, upgrade from any patch version from the previous minor release to any patch version of the subsequent minor release.

  • When you upgrade between major releases, you cannot skip releases.

Warning

Upgrading from one MKE minor version to another minor version can result in a downgrading of MKE middleware components. For more information, refer to the component listings in the release notes of both the source and target MKE versions.

Supported upgrade paths

Description                           From    To         Supported
------------------------------------  ------  ---------  ---------
Patch upgrade                         x.y.0   x.y.1      Yes
Skip patch version                    x.y.0   x.y.2      Yes
Patch downgrade                       x.y.2   x.y.1      No
Minor upgrade                         x.y.*   x.y+1.*    Yes
Skip minor version                    x.y.*   x.y+2.*    No
Minor downgrade                       x.y.*   x.y-1.*    No
Major upgrade                         x.y.z   x+1.0.0    Yes
Major upgrade skipping minor version  x.y.z   x+1.y+1.z  No
Skip major version                    x.*.*   x+2.*.*    No
Major downgrade                       x.*.*   x-1.*.*    No

Verify your environment

Before you perform the environment verifications necessary to ensure a smooth upgrade, Mirantis recommends that you run upgrade checks:

docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
mirantis/ucp \
upgrade checks [command options]

This process confirms:

  • Port availability

  • Sufficient memory and disk space

  • Supported OS version is in use

  • Existing backup availability


To perform system verifications:

  1. Verify time synchronization across all nodes and assess the time daemon logs for any large time drift (see the example check following this procedure).

  2. Verify that PROD=4vCPU/16GB system requirements are met for MKE managers and MSR replicas.

  3. Verify that your port configurations meet all MKE, MSR, and MCR port requirements.

  4. Verify that your cluster nodes meet the minimum requirements.

  5. Verify that you meet all minimum hardware and software requirements.

Note

Azure installations have additional prerequisites. Refer to Install MKE on Azure for more information.
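
The following is a minimal sketch of one way to spot-check clock synchronization across nodes. It assumes systemd-based hosts with timedatectl and SSH access; the node names are placeholders:

# Spot-check clock synchronization status on each node.
for node in manager-0 worker-0 worker-1; do
    echo "== ${node} =="
    ssh "${node}" 'timedatectl status | grep -E "synchronized|NTP service"'
done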


To perform storage verifications:

  1. Verify that no more than 70% of /var/ storage is used. If more than 70% is used, allocate enough storage to meet this requirement. Refer to MKE hardware requirements for the minimum and recommended storage requirements.

  2. Verify whether any node local file systems have disk storage issues, including MSR back-end storage, for example, NFS.

  3. Verify that you are using the overlay2 storage driver, as it is more stable. If you are not, transition to overlay2 at this time, keeping in mind that transitioning from device mapper to overlay2 is a destructive rebuild.
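
    For example, you can spot-check both the /var usage from step 1 and the active storage driver on a node:

    df -h /var
    docker info --format '{{ .Driver }}'   # expect overlay2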


To perform operating system verifications:

  1. Patch all relevant packages to the most recent cluster node operating system version, including the kernel.

  2. Perform a rolling restart of each node to confirm that the in-memory settings match the startup scripts.

  3. After performing the rolling restarts, run check-config.sh on each cluster node to check for kernel compatibility issues.
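
    For example, assuming the node has Internet access, you can fetch the script from the Moby repository and run it against the running kernel (the URL is an assumption; use the copy of check-config.sh that matches your MCR version):

    curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
    bash check-config.sh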


To perform procedural verifications:

  1. Perform Swarm, MKE, and MSR backups.

  2. Gather Compose, service, and stack files.

  3. Generate an MKE support bundle for this specific point in time.

  4. Preinstall MKE, MSR, and MCR images. If your cluster does not have an Internet connection, Mirantis provides tarballs containing all the required container images. If your cluster does have an Internet connection, pull the required container images onto your nodes:

    $ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 images \
    --list | xargs -L 1 docker pull
    
  5. Load troubleshooting packages, for example, netshoot.


To upgrade MCR:

The MKE upgrade requires MCR 20.10.0 or later to be running on every cluster node. If it is not, perform the following steps first on manager and then on worker nodes:

  1. Log in to the node using SSH.

  2. Upgrade MCR to version 20.10.0 or later.

  3. Using the MKE web UI, verify that the node is in a healthy state:

    1. Log in to the MKE web UI.

    2. Navigate to Shared Resources > Nodes.

    3. Verify that the node is healthy and a part of the cluster.

Caution

Mirantis recommends upgrading in the following order: MCR, MKE, MSR. This topic is limited to the upgrade instructions for MKE.


To perform cluster verifications:

  1. Verify that your cluster is in a healthy state, as a healthy cluster is easier to troubleshoot should a problem occur (see the example check following this procedure).

  2. Create a backup of your cluster, thus allowing you to recover should something go wrong during the upgrade process.

Note

You cannot use the backup archive during the upgrade process, as it is version specific. For example, if you create a backup archive for an MKE 3.4.2 cluster, you cannot use the archive file after you upgrade to MKE 3.4.4.
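
For example, a quick health spot-check from any manager node can confirm that all nodes are ready and the managers are reachable before you start the upgrade:

docker node ls
docker node ls --format '{{ .Hostname }}: {{ .Status }} / {{ .ManagerStatus }}'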

Perform the upgrade

This topic describes the following three different methods of upgrading MKE:

Note

To upgrade MKE on machines that are not connected to the Internet, refer to Install MKE offline to learn how to download the MKE package for offline installation.

In all three methods, manager nodes are automatically upgraded in place. You cannot control the order of manager node upgrades. For each worker node that requires an upgrade, you can upgrade that node in place or you can replace the node with a new worker node. The type of upgrade you perform depends on what is needed for each node.

Consult the following table to determine which method is right for you:

Upgrade method

Description

Automated in-place cluster upgrade

Performed on any manager node. This method automatically upgrades the entire cluster.

Phased in-place cluster upgrade

Automatically upgrades manager nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more advanced than the automated in-place cluster upgrade.

Replace existing worker nodes using blue-green deployment

This type of upgrade allows you to stand up a new cluster in parallel to the current one and switch over when the upgrade is complete. It requires that you join new worker nodes, schedule workloads to run on them, pause, drain, and remove old worker nodes in batches (rather than one at a time), and shut down servers to remove worker nodes. This is the most advanced upgrade method.

Automated in-place cluster upgrade

This is the standard method of upgrading MKE. It updates all MKE components on all nodes within the MKE cluster one-by-one until the upgrade is complete, and is thus not ideal for those needing to upgrade their worker nodes in a particular order.

  1. Verify that all MCR instances have been upgraded to the corresponding new version.

  2. SSH into one MKE manager node and run the following command (do not run this command on a workstation with a client bundle):

    docker container run --rm -it \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 \
    upgrade \
    --interactive
    

    The upgrade command will print messages as it automatically upgrades MKE on all nodes in the cluster.

Phased in-place cluster upgrade

This method allows granular control of the MKE upgrade process by first upgrading a manager node and then allowing you to upgrade worker nodes manually in the order that you select. This allows you to migrate workloads and control traffic while upgrading. You can temporarily run MKE worker nodes with different versions of MKE and MCR.

This method allows you to handle failover by adding additional worker node capacity during an upgrade. You can add worker nodes to a partially-upgraded cluster, migrate workloads, and finish upgrading the remaining worker nodes.

  1. Verify that all MCR instances have been upgraded to the corresponding new version.

  2. SSH into one MKE manager node and run the following command (do not run this command on a workstation with a client bundle):

    docker container run --rm -it \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 \
    upgrade \
    --manual-worker-upgrade \
    --interactive
    

    The --manual-worker-upgrade flag allows MKE to upgrade only the manager nodes. It adds an upgrade-hold label to all worker nodes, which prevents MKE from upgrading each worker node until you remove the label.

  3. Optional. Join additional worker nodes to your cluster:

    docker swarm join --token SWMTKN-<swarm-token> <manager-ip>:2377
    

    For more information, refer to Join Linux nodes.

    Note

    New worker nodes will already have the newer version of MCR and MKE installed when they join the cluster.

  4. Remove the upgrade-hold label from each worker node to upgrade:

    docker node update --label-rm com.docker.ucp.upgrade-hold \
    <node-name-or-id>
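
    For example, a quick way to list the worker nodes that still carry the upgrade-hold label:

    for node in $(docker node ls --format '{{ .Hostname }}'); do
        docker node inspect "${node}" --format '{{ .Spec.Labels }}' \
        | grep -q com.docker.ucp.upgrade-hold && echo "${node}"
    done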
    
Replace existing worker nodes using blue-green deployment

This method creates a parallel environment for a new deployment, which reduces downtime, upgrades worker nodes without disrupting workloads, and allows you to migrate traffic to the new environment with worker node rollback capability.

Note

You do not have to replace all worker nodes in the cluster at one time, but can instead replace them in groups.

  1. Verify that all MCR instances have been upgraded to the corresponding new version.

  2. SSH into one MKE manager node and run the following command (do not run this command on a workstation with a client bundle):

    docker container run --rm -it \
    --name ucp \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.4.15 \
    upgrade \
    --manual-worker-upgrade \
    --interactive
    

    The --manual-worker-upgrade flag allows MKE to upgrade only the manager nodes. It adds an upgrade-hold label to all worker nodes, which prevents MKE from upgrading each worker node until the label is removed.

  3. Join additional worker nodes to your cluster:

    docker swarm join --token SWMTKN-<swarm-token> <manager-ip>:2377
    

    For more information, refer to Join Linux nodes.

    Note

    New worker nodes will already have the newer version of MCR and MKE installed when they join the cluster.

  4. Join each node running the new MCR version to the cluster:

    docker swarm join --token SWMTKN-<your-token> <manager-ip>:2377
    
  5. Pause all existing worker nodes to ensure that MKE does not deploy new workloads on existing nodes:

    docker node update --availability pause <node-name>
    
  6. Drain the paused nodes in preparation for migrating your workloads:

    docker node update --availability drain <node-name>
    

    Note

    MKE automatically reschedules workloads onto new nodes while existing nodes are paused.

  7. Remove each fully-drained node:

    docker swarm leave <node-name>
    
  8. From a manager node, remove each old worker node once it becomes unresponsive (shows as Down):

    docker node rm <node-name>
    
  9. From any manager node, remove old MKE agents after the upgrade is complete, including s390x and Windows agents carried over from the previous install:

    docker service rm ucp-agent
    docker service rm ucp-agent-win
    docker service rm ucp-agent-s390x
    
Troubleshoot the upgrade process

This topic describes common problems and errors that occur during the upgrade process and how to identify and resolve them.


To check for multiple conflicting upgrades:

The upgrade command automatically checks for multiple ucp-worker-agents, the existence of which can indicate that the cluster is still undergoing a prior manual upgrade. You must resolve the conflicting node labels before proceeding with the upgrade.


To resolve upgrade failures:

You can resolve upgrade failures on worker nodes by changing the node labels back to the previous version, but this is not supported on manager nodes.


To check Kubernetes errors:

For more information on anything that might have gone wrong during the upgrade process, check Kubernetes errors in node state messages after the upgrade is complete.

Deploy applications with Swarm

Deploy a single-service application

This topic describes how to use both the MKE web UI and the CLI to deploy an NGINX web server and make it accessible on port 8000.


To deploy a single-service application using the MKE web UI:

  1. Log in to the MKE web UI.

  2. Navigate to Swarm > Services and click Create a service.

  3. In the Service Name field, enter nginx.

  4. In the Image Name field, enter nginx:latest.

  5. Navigate to Network > Ports and click Publish Port.

  6. In the Target port field, enter 80.

  7. In the Protocol field, enter tcp.

  8. In the Publish mode field, enter Ingress.

  9. In the Published port field, enter 8000.

  10. Click Confirm to map the ports for the NGINX service.

  11. Once you have specified the service image and ports, click Create to deploy the service into the MKE cluster.


To view the default NGINX page through the MKE web UI:

  1. Navigate to Swarm > Services.

  2. Click nginx.

  3. Click Published Endpoints.

  4. Click the link to open a new tab with the default NGINX home page.


To deploy a single service using the CLI:

  1. Verify that you have downloaded and configured the client bundle.

  2. Deploy the single-service application:

    docker service create --name nginx \
    --publish mode=ingress,target=80,published=8000 \
    --label com.docker.ucp.access.owner=<your-username> \
    nginx
    
  3. View the default NGINX page by visiting http://<node-ip>:8000.
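
    For example, you can confirm that the service responds with a quick check from your workstation, replacing <node-ip> with the address of any cluster node:

    curl -I http://<node-ip>:8000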

See also

NGINX

Deploy a multi-service application

This topic describes how to use both the MKE web UI and the CLI to deploy a multi-service application for voting on whether you prefer cats or dogs.


To deploy a multi-service application using the MKE web UI:

  1. Log in to the MKE web UI.

  2. Navigate to Shared Resources > Stacks and click Create Stack.

  3. In the Name field, enter voting-app.

  4. Under ORCHESTRATOR MODE, select Swarm Services and click Next.

  5. In the Add Application File editor, paste the following application definition written in the docker-compose.yml format:

    version: "3"
    services:
    
      # A Redis key-value store to serve as message queue
      redis:
        image: redis:alpine
        ports:
          - "6379"
        networks:
          - frontend
    
      # A PostgreSQL database for persistent storage
      db:
        image: postgres:9.4
        volumes:
          - db-data:/var/lib/postgresql/data
        networks:
          - backend
    
      # Web UI for voting
      vote:
        image: dockersamples/examplevotingapp_vote:before
        ports:
          - 5000:80
        networks:
          - frontend
        depends_on:
          - redis
    
      # Web UI to count voting results
      result:
        image: dockersamples/examplevotingapp_result:before
        ports:
          - 5001:80
        networks:
          - backend
        depends_on:
          - db
    
      # Worker service to read from message queue
      worker:
        image: dockersamples/examplevotingapp_worker
        networks:
          - frontend
          - backend
    
    networks:
      frontend:
      backend:
    
    volumes:
      db-data:
    
  6. Click Create to deploy the stack.

  7. In the list on the Shared Resources > Stacks page, verify that the application is deployed by looking for voting-app. If the application is in the list, it is deployed.

  8. To view the individual application services, click voting-app and navigate to the Services tab.

  9. Cast votes by accessing the service on port 5000.

Caution

  • MKE does not support referencing external files when using the MKE web UI to deploy applications, and thus does not support the following keywords:

    • build

    • dockerfile

    • env_file

  • You must use a version control system to store the stack definition used to deploy the stack, as MKE does not store the stack definition.


To deploy a multi-service application using the MKE CLI:

  1. Download and configure the client bundle.

  2. Create a file named docker-compose.yml with the following content:

    version: "3"
    services:
    
      # A Redis key-value store to serve as message queue
      redis:
        image: redis:alpine
        ports:
          - "6379"
        networks:
          - frontend
    
      # A PostgreSQL database for persistent storage
      db:
        image: postgres:9.4
        volumes:
          - db-data:/var/lib/postgresql/data
        networks:
          - backend
        environment:
          - POSTGRES_PASSWORD=<password>
    
      # Web UI for voting
      vote:
        image: dockersamples/examplevotingapp_vote:before
        ports:
          - 5000:80
        networks:
          - frontend
        depends_on:
          - redis
    
      # Web UI to count voting results
      result:
        image: dockersamples/examplevotingapp_result:before
        ports:
          - 5001:80
        networks:
          - backend
        depends_on:
          - db
    
      # Worker service to read from message queue
      worker:
        image: dockersamples/examplevotingapp_worker
        networks:
          - frontend
          - backend
    
    networks:
      frontend:
      backend:
    
    volumes:
      db-data:
    
  3. Create the application, using either docker stack deploy or docker-compose:

    docker stack deploy --compose-file docker-compose.yml voting-app
      
    docker-compose --file docker-compose.yml --project-name voting-app up -d
      
  4. Verify that the application is deployed:

    docker stack ps voting-app
    
  5. Cast votes by accessing the service on port 5000.

Deploy services to a Swarm collection

This topic describes how to use both the CLI and a Compose file to deploy application resources to a particular Swarm collection. Attach the Swarm collection path to the service access label to assign the service to the required collection. MKE automatically assigns new services to the default collection unless you use either of the methods presented here to assign a different Swarm collection.

Caution

To assign services to Swarm collections, an administrator must first create the Swarm collection and grant the user access to the required collection. Otherwise the deployment will fail.

Note

If required, you can place application resources into multiple collections.


To deploy a service to a Swarm collection using the CLI:

Use docker service create to deploy your service to a collection:

docker service create \
--name <service-name> \
--label com.docker.ucp.access.label="</collection/path>" \
<app-name>:<version>
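
For example, the following sketch (with a hypothetical service name) deploys an NGINX service into the /Shared/mobile collection used in the tutorials; substitute your own service name, image, and collection path:

docker service create \
--name mobile-web \
--label com.docker.ucp.access.label="/Shared/mobile" \
nginx:latest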

To deploy a service to a Swarm collection using a Compose file:

  1. Use a labels: dictionary in a Compose file and add the Swarm collection path to the com.docker.ucp.access.label key.

    The following example specifies two services, WordPress and MySQL, and assigns /Shared/wordpress to their access labels:

    version: '3.1'
    
    services:
    
      wordpress:
        image: wordpress
        networks:
          - wp
        ports:
          - 8080:80
        environment:
          WORDPRESS_DB_PASSWORD: example
        deploy:
          labels:
            com.docker.ucp.access.label: /Shared/wordpress
      mysql:
        image: mysql:5.7
        networks:
          - wp
        environment:
          MYSQL_ROOT_PASSWORD: example
        deploy:
          labels:
            com.docker.ucp.access.label: /Shared/wordpress
    
    networks:
      wp:
        driver: overlay
        labels:
          com.docker.ucp.access.label: /Shared/wordpress
    
  2. Log in to the MKE web UI.

  3. Navigate to Shared Resources > Stacks and click Create Stack.

  4. Name the application wordpress.

  5. Under ORCHESTRATOR MODE, select Swarm Services and click Next.

  6. In the Add Application File editor, paste the Compose file.

  7. Click Create to deploy the application.

  8. Click Done when the deployment completes.

Note

MKE reports an error if the /Shared/wordpress collection does not exist or if you do not have a grant for accessing it.


To confirm that the service deployed to the correct Swarm collection:

  1. Navigate to Shared Resources > Stacks and select your application.

  2. Navigate to the Services tab and select the required service.

  3. On the details page, verify that the service is assigned to the correct Swarm collection.

Note

MKE creates a default overlay network for your stack that attaches to each container you deploy. This works well for administrators and those assigned full control roles. If you have lesser permissions, define a custom network with the same com.docker.ucp.access.label label as your services and attach this network to each service. This correctly groups your network with the other resources in your stack.

Use secrets in Swarm deployments

This topic describes how to create and use secrets with MKE by showing you how to deploy a WordPress application that uses a secret for storing a plaintext password. Other sensitive information you might use a secret to store includes TLS certificates and private keys. MKE allows you to securely store secrets and configure who can access and manage them using role-based access control (RBAC).

The application you will create in this topic includes the following two services:

  • wordpress

    Apache, PHP, and WordPress

  • wordpress-db

    MySQL database

The following example stores a password in a secret, and the secret is stored in a file inside the container that runs the services you will deploy. The services have access to the file, but no one else can see the plaintext password. To make things simple, you will not configure the database to persist data, and thus when the service stops, the data is lost.


To create a secret:

  1. Log in to the MKE web UI.

  2. Navigate to Swarm > Secrets and click Create.

    Note

    After you create the secret, you will not be able to edit or see the secret again.

  3. Name the secret wordpress-password-v1.

  4. In the Content field, assign a value to the secret.

  5. Optional. Define a permission label so that other users can be given permission to use this secret.

    Note

    To use services and secrets together, they must either have the same permission label or no label at all.


To create a network for your services:

  1. Navigate to Swarm > Networks and click Create.

  2. Create a network called wordpress-network with the default settings.


To create the MySQL service:

  1. Navigate to Swarm > Services and click Create.

  2. Under Service Details, name the service wordpress-db.

  3. Under Task Template, enter mysql:5.7.

  4. In the left-side menu, navigate to Network, click Attach Network +, and select wordpress-network from the drop-down.

  5. In the left-side menu, navigate to Environment, click Use Secret +, and select wordpress-password-v1 from the drop-down.

  6. Click Confirm to associate the secret with the service.

  7. Scroll down to Environment variables and click Add Environment Variable +.

  8. Enter the following string to create an environment variable that contains the path to the password file in the container:

    MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v1
    
  9. If you specified a permission label on the secret, you must set the same permission label on this service.

  10. Click Create to deploy the MySQL service.

This creates a MySQL service that is attached to the wordpress-network network and that uses the wordpress-password-v1 secret. By default, this creates a file with the same name in /run/secrets/<secret-name> inside the container running the service.

We also set the MYSQL_ROOT_PASSWORD_FILE environment variable to configure MySQL to use the content of the /run/secrets/wordpress-password-v1 file as the root password.
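
If you prefer to work from the CLI with a client bundle, the following is a rough equivalent of the secret, network, and MySQL service created above; the secret value is only a placeholder:

printf 'my-database-password' | docker secret create wordpress-password-v1 -
docker network create --driver overlay wordpress-network
docker service create \
--name wordpress-db \
--network wordpress-network \
--secret wordpress-password-v1 \
--env MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v1 \
mysql:5.7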


To create the WordPress service:

  1. Navigate to Swarm > Services and click Create.

  2. Under Service Details, name the service wordpress.

  3. Under Task Template, enter wordpress:latest.

  4. In the left-side menu, navigate to Network, click Attach Network +, and select wordpress-network from the drop-down.

  5. In the left-side menu, navigate to Environment, click Use Secret +, and select wordpress-password-v1 from the drop-down.

  6. Click Confirm to associate the secret with the service.

  7. Scroll down to Environment variables and click Add Environment Variable +.

  8. Enter the following string to create an environment variable that contains the path to the password file in the container:

    WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wordpress-password-v1
    
  9. Add another environment variable and enter the following string:

    WORDPRESS_DB_HOST=wordpress-db:3306
    
  10. If you specified a permission label on the secret, you must set the same permission label on this service.

  11. Click Create to deploy the WordPress service.

This creates a WordPress service that is attached to the same network as the MySQL service so that they can communicate, and maps port 80 of the service to port 8000 of the cluster routing mesh.

Once you deploy this service, you will be able to access it on port 8000 using the IP address of any node in your MKE cluster.
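
Similarly, a rough CLI equivalent for the WordPress service, publishing port 8000 on the routing mesh as described above:

docker service create \
--name wordpress \
--network wordpress-network \
--secret wordpress-password-v1 \
--env WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wordpress-password-v1 \
--env WORDPRESS_DB_HOST=wordpress-db:3306 \
--publish mode=ingress,target=80,published=8000 \
wordpress:latest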


To update a secret:

If the secret is compromised, you need to change it, update the services that use it, and delete the old secret.

  1. Create a new secret named wordpress-password-v2.

  2. From Swarm > Secrets, select the wordpress-password-v1 secret to view all the services that you need to update. In this example, it is straightforward, but that will not always be the case.

  3. Update wordpress-db to use the new secret.

  4. Update the MYSQL_ROOT_PASSWORD_FILE environment variable with either of the following methods:

    • Update the environment variable directly with the following:

      MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v2
      
    • Mount the secret file in /run/secrets/wordpress-password-v1 by setting the Target Name field with wordpress-password-v1. This mounts the file with the wordpress-password-v2 content in /run/secrets/wordpress-password-v1.

  5. Delete the wordpress-password-v1 secret and click Update.

  6. Repeat the foregoing steps for the WordPress service.

Interlock

Layer 7 routing

MKE includes a system for application-layer (layer 7) routing that offers both application routing and load balancing (ingress routing) for Swarm orchestration. The Interlock architecture leverages Swarm components to provide scalable layer 7 routing and Layer 4 VIP mode functionality.

Swarm mode provides MCR with a routing mesh, which enables users to access services using the IP address of any node in the swarm. Layer 7 routing enables you to access services through any node in the swarm by using a domain name, with Interlock routing the traffic to the node that runs the relevant container.

Interlock uses the Docker remote API to automatically configure extensions such as NGINX and HAProxy for application traffic. Interlock is designed for:

  • Full integration with MCR, including Swarm services, secrets, and configs

  • Enhanced configuration, including context roots, TLS, zero downtime deployment, and rollback

  • Support through extensions for external load balancers, such as NGINX, HAProxy, and F5

  • Least privilege for extensions, such that they have no Docker API access

Note

Interlock and Layer 7 routing are used for Swarm deployments. Refer to Use Istio Ingress for Kubernetes for information on routing traffic to your Kubernetes applications.

Terminology
Cluster

A group of compute resources running MKE

Swarm

An MKE cluster running in Swarm mode

Upstream

An upstream container that serves an application

Proxy service

A service, such as NGINX, that provides load balancing and proxying

Extension service

A secondary service that configures the proxy service

Service cluster

A combined Interlock extension and proxy service

gRPC

A high-performance RPC framework

Interlock services
Interlock

The central piece of the layer 7 routing solution. The core service is responsible for interacting with the Docker remote API and building an upstream configuration for the extensions. Interlock uses the Docker API to monitor events and to manage the extension and proxy services, and it serves the upstream configuration over a gRPC API that the extensions are configured to access.

Interlock manages extension and proxy service updates for both configuration changes and application service deployments. There is no operator intervention required.

The Interlock service starts a single replica on a manager node. The Interlock extension service runs a single replica on any available node, and the Interlock proxy service starts two replicas on any available node. Interlock prioritizes replica placement in the following order:

  • Replicas on the same worker node

  • Replicas on different worker nodes

  • Replicas on any available nodes, including managers

Interlock extension

A secondary service that queries the Interlock gRPC API for the upstream configuration. The extension service configures the proxy service according to the upstream configuration. For proxy services that use files such as NGINX or HAProxy, the extension service generates the file and sends it to Interlock using the gRPC API. Interlock then updates the corresponding Docker configuration object for the proxy service.

Interlock proxy

A proxy and load-balancing service that handles requests for the upstream application services. Interlock configures these using the data created by the corresponding extension service. By default, this service is a containerized NGINX deployment.

Features and benefits
High availability

All layer 7 routing components are failure-tolerant and leverage Docker Swarm for high availability.

Automatic configuration

Interlock uses the Docker API for automatic configuration, without needing you to manually update or restart anything to make services available. MKE monitors your services and automatically reconfigures proxy services.

Scalability

Interlock uses a modular design with a separate proxy service, allowing an operator to individually customize and scale the proxy layer to handle user requests and meet service demands, with transparency and no downtime for users.

TLS

You can leverage Docker secrets to securely manage TLS certificates and keys for your services. Interlock supports both TLS termination and TCP passthrough.

Context-based routing

Interlock supports advanced application request routing by context or path.

Host mode networking

Layer 7 routing leverages the Docker Swarm routing mesh by default, but Interlock also supports running proxy and application services in host mode networking, allowing you to bypass the routing mesh completely, thus promoting maximum application performance.

Security

The layer 7 routing components that are exposed to the outside world run on worker nodes, thus your cluster will not be affected if they are compromised.

SSL

Interlock leverages Docker secrets to securely store and use SSL certificates for services, supporting both SSL termination and TCP passthrough.

Blue-green and canary service deployment

Interlock supports blue-green service deployment, allowing an operator to deploy a new application version while the current version continues serving. Once traffic to the new version is verified, the operator can scale the older version to zero. If there is a problem, the operation is easy to reverse.

Service cluster support

Interlock supports multiple extension and proxy service combinations, thus allowing operators to partition load balancing resources, for example, for region- or organization-based load balancing.

Least privilege

Interlock supports being deployed where the load balancing proxies do not need to be colocated with a Swarm manager. This is a more secure approach to deployment as it ensures that the extension and proxy services do not have access to the Docker API.

Optimize Interlock deployments

This topic describes various ways to optimize your Interlock deployments. First, it will be helpful to review the stages of an Interlock deployment. The following process occurs each time you update an application:

  1. The user updates a service with a new version of an application.

  2. The default stop-first policy stops the first replica before scheduling the second. The Interlock proxies remove ip1.0 from the back-end pool as the app.1 task is removed.

  3. Interlock reschedules the first application task with the new image after the first task stops.

  4. Interlock reschedules proxy.1 with the new NGINX configuration containing the new app.1 task update.

  5. After proxy.1 is complete, proxy.2 redeploys with the updated NGINX configuration for the app.1 task.

In this scenario, the service is unavailable for less than 30 seconds.

Application update optimizations

To optimize your application update order:

Using --update-order, Swarm allows you to control the order in which tasks are stopped when you replace them with new tasks:

Optimization type

Description

stop-first (default)

Configures the old task to stop before the new task starts. Use this if the old and new tasks cannot serve clients at the same time.

start-first

Configures the old task to stop after the new task starts. Use this if you have a single application replica and you cannot have service interruption. This optimizes for high availability.

To optimize the order in which you update your application, [need-instructions-from-sme].
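
In the meantime, a minimal sketch using the standard Swarm --update-order flag on an existing service (my-service is a placeholder name):

docker service update --update-order start-first my-service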


To set an application update delay:

Using update-delay, Swarm allows you to control how long it takes an application to update by adding a delay between updating tasks. The delay occurs between the time when the first task enters a healthy state and when the next task begins its update. The default is 0 seconds, meaning there is no delay.

Use update-delay if either of the following applies:

  • You can tolerate a longer update cycle with the benefit of fewer dropped connections.

  • Interlock update convergence takes a long time in your environment, often due to having a large number of overlay networks.

Do not use update-delay if either of the following applies:

  • You need service updates to occur rapidly.

  • The old and new tasks cannot serve clients at the same time.

To set the update delay, [need-instructions-from-sme].
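
In the meantime, a minimal sketch using the standard Swarm --update-delay flag (the 30s value and the service name are placeholders):

docker service update --update-delay 30s my-service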


To configure application health checks:

Using health-cmd, Swarm allows you to check application health to ensure that updates do not cause service interruption. Without using health-cmd, Swarm considers an application healthy as soon as the container process is running, even if the application is not yet capable of serving clients, thus leading to dropped connections. You can configure health-cmd using either a Dockerfile or a Compose file.

To configure health-cmd, [need-instructions-from-sme].
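
In the meantime, a minimal sketch using the standard Swarm health-check flags on an existing HTTP service. The command (which assumes curl is available in the image), interval, and service name are placeholders; you can set the same options through a HEALTHCHECK instruction in a Dockerfile or a healthcheck section in a Compose file:

docker service update \
--health-cmd "curl -fsS http://localhost:80/ || exit 1" \
--health-interval 10s \
--health-timeout 5s \
--health-retries 3 \
my-service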


To configure an application stop grace period:

Using stop-grace-period, Swarm allows you to set the maximum wait time before it force-kills a task. A task can run no longer than the value of this setting after initiating its shutdown cycle. The default is 10 seconds. Use longer wait times for applications that require long periods to process requests, allowing connections to terminate normally.

To configure stop-grace-period, [need-instructions-from-sme].
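
In the meantime, a minimal sketch using the standard Swarm --stop-grace-period flag (the 30s value and the service name are placeholders):

docker service update --stop-grace-period 30s my-service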

Interlock optimizations

To use service clusters for Interlock segmentation:

Interlock can be segmented into multiple logical instances called service clusters, with independently-managed proxies. Application traffic can be fully-segmented, as it only uses the proxies for a particular service cluster. Each service cluster only connects to the networks that use that specific service cluster, reducing the number of overlay networks that proxies connect to. The use of separate proxies enables service clusters to reduce the amount of load balancer configuration churn during service updates.

To configure service clusters, [need-instructions-from-sme].
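
As a rough sketch only, service clusters can be expressed as additional named sections under [Extensions] in the Interlock TOML configuration, each with its own ServiceCluster value (compare the ServiceCluster option in the default configuration shown in Deploy a layer 7 routing solution). The extension names, cluster names, and ports below are illustrative assumptions, not a tested configuration:

cat << EOF | docker config create service.interlock.conf.clusters -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"

[Extensions]
  [Extensions.us-east]
    Image = "mirantis/ucp-interlock-extension:3.4.15"
    ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    ServiceCluster = "us-east"
    PublishedPort = 8080
    TargetPort = 80
    PublishedSSLPort = 8443
    TargetSSLPort = 443
  [Extensions.us-west]
    Image = "mirantis/ucp-interlock-extension:3.4.15"
    ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    ServiceCluster = "us-west"
    PublishedPort = 8081
    TargetPort = 80
    PublishedSSLPort = 8444
    TargetSSLPort = 443
EOF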


To minimize the number of overlay networks:

Every overlay network connected to Interlock adds one to two seconds of additional update delay, and too many connected networks cause the load balancer configuration to be out of date for too long, resulting in dropped traffic.

The following are two different ways you can minimize the number of overlay networks that Interlock connects to:

  • Group applications together to share a network if the architecture permits doing so.

  • Use Interlock service clusters, as they segment which networks are connected to Interlock, reducing the number of networks each proxy is connected to. And use admin-defined networks, limiting the number of networks per service cluster.


To use Interlock VIP Mode:

Using VIP mode, Interlock allows you to reduce the impact of application updates on the Interlock proxies. It uses the Swarm L4 load balancing VIPs instead of individual task IPs to load balance traffic to a more stable internal endpoint. This prevents the proxy load balancer configurations from changing for most kinds of app service updates, thus reducing Interlock churn.

These are the features that VIP mode supports:

  • Host and context routing

  • Context root rewrites

  • Interlock TLS termination

  • TLS passthrough

  • Service clusters

These are the features that VIP mode does not support:

  • Sticky sessions

  • Websockets

  • Canary deployments

To use Interlock VIP mode, [need-instructions-from-sme].
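
As a rough sketch only, VIP mode is typically selected per service through an Interlock backend-mode label; the label name and value below are assumptions based on the stock NGINX extension, so verify them against the Interlock configuration reference for your release (my-service is a placeholder):

docker service update \
--label-add com.docker.lb.backend_mode=vip \
my-service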

Deploy
Deploy a layer 7 routing solution

This topic describes how to route traffic to Swarm services by deploying a layer 7 routing solution into a Swarm-orchestrated cluster. It has the following prerequisites:


Enabling layer 7 routing causes the following to occur:

  1. MKE creates the ucp-interlock overlay network.

  2. MKE deploys the ucp-interlock service and attaches it both to the Docker socket and to the overlay network that was created. This allows the Interlock service to use the Docker API, which is why this service needs to run on a manager node.

  3. The ucp-interlock service starts the ucp-interlock-extension service and attaches it to the ucp-interlock network, allowing both services to communicate.

  4. The ucp-interlock-extension generates a configuration for the proxy service to use. By default the proxy service is NGINX, so this service generates a standard NGINX configuration. MKE creates the com.docker.ucp.interlock.conf-1 configuration file and uses it to configure all the internal components of this service.

  5. The ucp-interlock service takes the proxy configuration and uses it to start the ucp-interlock-proxy service.

Note

Layer 7 routing is disabled by default.


To enable layer 7 routing using the MKE web UI:

  1. Log in to the MKE web UI as an administrator.

  2. Navigate to <user-name> > Admin Settings.

  3. Click Ingress.

  4. Toggle the Swarm HTTP ingress slider to the right.

  5. Optional. By default, the routing mesh service listens on port 8080 for HTTP and 8443 for HTTPS. Change these ports if you already have services using them.

The three primary Interlock services are the core service, the extension service, and the proxy service. The following is the default MKE configuration, which MKE creates automatically when you enable Interlock as described in this topic.

ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
AllowInsecure = false
PollInterval = "3s"

[Extensions]
  [Extensions.default]
    Image = "mirantis/ucp-interlock-extension:3.4.15"
    ServiceName = "ucp-interlock-extension"
    Args = []
    Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
    ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
    ProxyServiceName = "ucp-interlock-proxy"
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    ProxyReplicas = 2
    ProxyStopSignal = "SIGQUIT"
    ProxyStopGracePeriod = "5s"
    ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
    PublishMode = "ingress"
    PublishedPort = 8080
    TargetPort = 80
    PublishedSSLPort = 8443
    TargetSSLPort = 443
    [Extensions.default.Labels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ContainerLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ProxyLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ProxyContainerLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.Config]
      Version = ""
      User = "nginx"
      PidPath = "/var/run/proxy.pid"
      MaxConnections = 1024
      ConnectTimeout = 5
      SendTimeout = 600
      ReadTimeout = 600
      IPHash = false
      AdminUser = ""
      AdminPass = ""
      SSLOpts = ""
      SSLDefaultDHParam = 1024
      SSLDefaultDHParamPath = ""
      SSLVerify = "required"
      WorkerProcesses = 1
      RLimitNoFile = 65535
      SSLCiphers = "HIGH:!aNULL:!MD5"
      SSLProtocols = "TLSv1.2"
      AccessLogPath = "/dev/stdout"
      ErrorLogPath = "/dev/stdout"
      MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t    '$status $body_bytes_sent \"$http_referer\" '\n\t\t    '\"$http_user_agent\" \"$http_x_forwarded_for\"';"
      TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t    '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t    '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t    '$upstream_connect_time $upstream_header_time $upstream_response_time';"
      KeepaliveTimeout = "75s"
      ClientMaxBodySize = "32m"
      ClientBodyBufferSize = "8k"
      ClientHeaderBufferSize = "1k"
      LargeClientHeaderBuffers = "4 8k"
      ClientBodyTimeout = "60s"
      UnderscoresInHeaders = false
      HideInfoHeaders = false

Note

The value of LargeClientHeaderBuffers indicates the number of buffers to use to read a large client request header, as well as the size of those buffers.


To enable layer 7 routing from the command line:

Interlock uses a TOML file for the core service configuration. The following example uses Swarm deployment and recovery features by creating a Docker config object.

  1. Create a Docker config object:

    cat << EOF | docker config create service.interlock.conf -
    ListenAddr = ":8080"
    DockerURL = "unix:///var/run/docker.sock"
    PollInterval = "3s"
    
    [Extensions]
      [Extensions.default]
        Image = "mirantis/ucp-interlock-extension:3.4.15"
        Args = ["-D"]
        ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
        ProxyArgs = []
        ProxyConfigPath = "/etc/nginx/nginx.conf"
        ProxyReplicas = 1
        ProxyStopGracePeriod = "3s"
        ServiceCluster = ""
        PublishMode = "ingress"
        PublishedPort = 8080
        TargetPort = 80
        PublishedSSLPort = 8443
        TargetSSLPort = 443
        [Extensions.default.Config]
          User = "nginx"
          PidPath = "/var/run/proxy.pid"
          WorkerProcesses = 1
          RlimitNoFile = 65535
          MaxConnections = 2048
    EOF
    oqkvv1asncf6p2axhx41vylgt
    
  2. Create a dedicated network for Interlock and the extensions:

    docker network create --driver overlay ucp-interlock
    
  3. Create the Interlock service:

    docker service create \
    --name ucp-interlock \
    --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
    --network ucp-interlock \
    --constraint node.role==manager \
    --config src=service.interlock.conf,target=/config.toml \
    mirantis/ucp-interlock:3.4.15 -D run -c /config.toml
    

    Note

    The Interlock core service must have access to a Swarm manager (--constraint node.role==manager); however, Mirantis recommends running the extension and proxy services on worker nodes.

  4. Verify that the three services are created, one for the Interlock service, one for the extension service, and one for the proxy service:

    docker service ls
    ID                  NAME                     MODE                REPLICAS            IMAGE                                                                PORTS
    sjpgq7h621ex        ucp-interlock            replicated          1/1                 mirantis/ucp-interlock:3.4.15
    oxjvqc6gxf91        ucp-interlock-extension  replicated          1/1                 mirantis/ucp-interlock-extension:3.4.15
    lheajcskcbby        ucp-interlock-proxy      replicated          1/1                 mirantis/ucp-interlock-proxy:3.4.15        *:80->80/tcp *:443->443/tcp
    
Configure layer 7 routing for production

This topic describes how to configure Interlock for a production environment and builds upon the instructions in the previous topic, Deploy a layer 7 routing solution. It does not describe infrastructure deployment, and it assumes you are using a typical Swarm cluster created with docker swarm init and docker swarm join from the nodes.

The layer 7 solution that ships with MKE is highly available, fault tolerant, and designed to work independently of how many nodes you manage with MKE.

The following procedures require that you dedicate two worker nodes for running the ucp-interlock-proxy service. This tuning ensures the following:

  • The proxy services have dedicated resources to handle user requests. You can configure these nodes with higher performance network interfaces.

  • No application traffic can be routed to a manager node, thus making your deployment more secure.

  • If one of the two dedicated nodes fails, layer 7 routing continues working.


To dedicate two nodes to running the proxy service:

  1. Select two nodes that you will dedicate to running the proxy service.

  2. Log in to one of the Swarm manager nodes.

  3. Add labels to the two dedicated proxy service nodes, configuring them as load balancer worker nodes, for example, lb-00 and lb-01:

    docker node update --label-add nodetype=loadbalancer lb-00
    lb-00
    docker node update --label-add nodetype=loadbalancer lb-01
    lb-01
    
  4. Verify that the labels were added successfully:

    docker node inspect -f '{{ .Spec.Labels  }}' lb-00
    map[nodetype:loadbalancer]
    docker node inspect -f '{{ .Spec.Labels  }}' lb-01
    map[nodetype:loadbalancer]
    

To update the proxy service:

You must update the ucp-interlock-proxy service configuration to deploy the proxy service properly constrained to the dedicated worker nodes.

  1. From a manager node, add a constraint to the ucp-interlock-proxy service to update the running service:

    docker service update --replicas=2 \
    --constraint-add node.labels.nodetype==loadbalancer \
    --stop-signal SIGQUIT \
    --stop-grace-period=5s \
    $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
    

    This updates the proxy service to have two replicas, ensures that they are constrained to the workers with the label nodetype==loadbalancer, and configures the stop signal for the tasks to be a SIGQUIT with a grace period of five seconds. This ensures that NGINX does not exit before the client request is finished.

  2. Inspect the service to verify that the replicas have started on the selected nodes:

    docker service ps $(docker service ls -f \
    'label=type=com.docker.interlock.core.proxy' -q)
    

    Example of system response:

    ID            NAME                    IMAGE          NODE     DESIRED STATE   CURRENT STATE                   ERROR   PORTS
    o21esdruwu30  interlock-proxy.1       nginx:alpine   lb-01    Running         Preparing 3 seconds ago
    n8yed2gp36o6   \_ interlock-proxy.1   nginx:alpine   mgr-01   Shutdown        Shutdown less than a second ago
    aubpjc4cnw79  interlock-proxy.2       nginx:alpine   lb-00    Running         Preparing 3 seconds ago
    
  3. Add the constraint to the ProxyConstraints array in the interlock-proxy service configuration in case Interlock is restored from backup:

    [Extensions]
      [Extensions.default]
        ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
    
  4. Optional. By default, the config service is global, scheduling one task on every node in the cluster. To modify constraint scheduling, update the ProxyConstraints variable in the Interlock configuration file. Refer to Configure layer 7 routing service for more information.

  5. Verify that the proxy service is running on the dedicated nodes:

    docker service ps ucp-interlock-proxy
    
  6. Update the settings in the upstream load balancer, such as ELB or F5, with the addresses of the dedicated ingress workers, thus directing all traffic to these two worker nodes.

See also

NGINX

Offline installation considerations

To install Interlock on your cluster without an Internet connection, you must have the required Docker images loaded on your computer. This topic describes how to export the required images from a local instance of MCR and then load them to your Swarm-orchestrated cluster.

To export Docker images from a local instance:

  1. Using a local instance of MCR, save the required images:

    docker save mirantis/ucp-interlock:3.4.15 > interlock.tar
    docker save mirantis/ucp-interlock-extension:3.4.15 > interlock-extension-nginx.tar
    docker save mirantis/ucp-interlock-proxy:3.4.15 > interlock-proxy-nginx.tar
    

    This saves the following three files:

    • interlock.tar - the core Interlock application.

    • interlock-extension-nginx.tar - the Interlock extension for NGINX.

    • interlock-proxy-nginx.tar - the official NGINX image based on Alpine.

    Note

    Replace mirantis/ucp-interlock-extension:3.4.15 and mirantis/ucp-interlock-proxy:3.4.15 with the corresponding extension and proxy image if you are not using NGINX.

  2. Copy the three files you just saved to each node in the cluster and load each image:

    docker load < interlock.tar
    docker load < interlock-extension-nginx.tar
    docker load < interlock-proxy-nginx.tar
    

Refer to Deploy a layer 7 routing solution to continue the installation.

See also

NGINX

Configure
Configure layer 7 routing service

This section describes how to customize layer 7 routing by updating the ucp-interlock service with a new Docker configuration, including configuration options and the procedure for creating a proxy service.

Configure the Interlock service

This topic describes how to update the ucp-interlock service with a new Docker configuration.

  1. Obtain the current configuration for the ucp-interlock service and save it as a TOML file named config.toml:

    CURRENT_CONFIG_NAME=$(docker service inspect --format \
    '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
    ucp-interlock) && docker config inspect --format \
    '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
    
  2. Configure config.toml as required. Refer to Configuration file options for layer 7 routing for layer 7 routing customization options.

  3. Create a new Docker configuration object from the config.toml file:

    NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$\
    (( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
    docker config create $NEW_CONFIG_NAME config.toml
    
  4. Verify that the configuration was successfully created:

    docker config ls --filter name=com.docker.ucp.interlock
    

    Example output:

    ID                          NAME                              CREATED          UPDATED
    vsnakyzr12z3zgh6tlo9mqekx   com.docker.ucp.interlock.conf-1   6 hours ago      6 hours ago
    64wp5yggeu2c262z6flhaos37   com.docker.ucp.interlock.conf-2   54 seconds ago   54 seconds ago
    
  5. Optional. By default, if you provide an invalid configuration, the ucp-interlock service rolls back to the previous stable configuration. To configure the service to pause instead of rolling back:

    docker service update \
    --update-failure-action pause \
    ucp-interlock
    
  6. Update the ucp-interlock service to begin using the new configuration:

    docker service update \
    --config-rm $CURRENT_CONFIG_NAME \
    --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
    ucp-interlock
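
After the update completes, you can optionally confirm that the service now references the new configuration object by re-running the inspect command from step 1:

    docker service inspect --format \
    '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
    ucp-interlock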
    

Enable Interlock proxy NGINX debugging mode

Because Interlock proxy NGINX debugging mode generates copious log files and can produce core dumps, it can only be enabled manually.

Caution

Mirantis strongly recommends that you use debugging mode only for as long as is necessary, and that you do not use it in production environments.

  1. Obtain the current configuration for the ucp-interlock service and save it as a TOML file named config.toml:

    CURRENT_CONFIG_NAME=$(docker service inspect --format \
    '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
    ucp-interlock) && docker config inspect --format \
    '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
    
  2. Add the ProxyArgs attribute to the config.toml file, if it is not already present, and assign to it the following value:

    ProxyArgs = ["/entrypoint.sh","nginx-debug","-g","daemon off;"]
    
  3. Create a new Docker configuration object from the config.toml file:

    NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-$\
    (( $(cut -d '-' -f 2 <<< "$CURRENT_CONFIG_NAME") + 1 ))"
    docker config create $NEW_CONFIG_NAME config.toml
    
  4. Update the ucp-interlock service to begin using the new configuration:

    docker service update \
    --config-rm $CURRENT_CONFIG_NAME \
    --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
    ucp-interlock
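
Once the service converges on the new configuration, you can confirm that debug-level output is being generated, for example by tailing the proxy service logs:

    docker service logs --follow --tail 100 ucp-interlock-proxy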
    
Configuration file options for layer 7 routing

This topic describes the configuration options for the primary Interlock services.

For configuration instructions, see Configure layer 7 routing service.

Core configuration

The following core configuration options are available for the ucp-interlock service:

Option

Type

Description

ListenAddr

string

Address to serve the Interlock GRPC API. The default is :8080.

DockerURL

string

Path to the socket or TCP address to the Docker API. The default is unix:///var/run/docker.sock.

TLSCACert

string

Path to the CA certificate for connecting securely to the Docker API.

TLSCert

string

Path to the certificate for connecting securely to the Docker API.

TLSKey

string

Path to the key for connecting securely to the Docker API.

AllowInsecure

bool

A value of true skips TLS verification when connecting to the Docker API via TLS.

PollInterval

string

Interval to poll the Docker API for changes. The default is 3s.

EndpointOverride

string

Override the default GRPC API endpoint for extensions. By default, the endpoint is detected through Swarm.

Extensions

[]extension

Refer to Extension configuration for the array of extensions.

Extension configuration

The following options are available to configure the extensions. Interlock must contain at least one extension to service traffic.

Option

Type

Description

Image

string

Name of the Docker image to use for the extension.

Args

[]string

Arguments to pass to the extension service.

Labels

map[string]string

Labels to add to the extension service.

Networks

[]string

Allows the administrator to cherry pick a list of networks that Interlock can connect to. If this option is not specified, the proxy service can connect to all networks.

ContainerLabels

map[string]string

Labels for the extension service tasks.

Constraints

[]string

One or more constraints to use when scheduling the extension service.

PlacementPreferences

[]string

One or more placement preferences.

ServiceName

string

Name of the extension service.

ProxyImage

string

Name of the Docker image to use for the proxy service.

ProxyArgs

[]string

Arguments to pass to the proxy service.

ProxyLabels

map[string]string

Labels to add to the proxy service.

ProxyContainerLabels

map[string]string

Labels to add to the proxy service tasks.

ProxyServiceName

string

Name of the proxy service.

ProxyConfigPath

string

Path in the service for the generated proxy configuration.

ProxyReplicas

uint

Number of proxy service replicas.

ProxyStopSignal

string

Stop signal for the proxy service. For example, SIGQUIT.

ProxyStopGracePeriod

string

Stop grace period for the proxy service in seconds. For example, 5s.

ProxyConstraints

[]string

One or more constraints to use when scheduling the proxy service.

ProxyPlacementPreferences

[]string

One or more placement preferences to use when scheduling the proxy service.

ProxyUpdateDelay

string

Delay between rolling proxy container updates.

ServiceCluster

string

Name of the cluster that this extension serves.

PublishMode

string (ingress or host)

Publish mode that the proxy service uses.

PublishedPort

int

Port on which the proxy service serves non-SSL traffic.

PublishedSSLPort

int

Port on which the proxy service serves SSL traffic.

Template

string

Docker configuration object that is used as the extension template.

Config

config

Proxy configuration used by the extensions as described in this section.

HitlessServiceUpdate

bool

When set to true, services can be updated without restarting the proxy container.

ConfigImage

string

Name for the config service used by hitless service updates. For example, mirantis/ucp-interlock-config:3.2.1.

ConfigServiceName

string

Name of the config service. This name is equivalent to ProxyServiceName. For example, ucp-interlock-config.
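
As a point of reference, the following is a minimal sketch of how several of the core and extension options above fit together in a config.toml file; the image tags and values shown are illustrative and based on the examples elsewhere in this guide:

    ListenAddr = ":8080"
    DockerURL = "unix:///var/run/docker.sock"
    PollInterval = "3s"

    [Extensions]
      [Extensions.default]
        Image = "mirantis/ucp-interlock-extension:3.4.15"
        ServiceName = "ucp-interlock-extension"
        ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
        ProxyServiceName = "ucp-interlock-proxy"
        ProxyConfigPath = "/etc/nginx/nginx.conf"
        ProxyReplicas = 2
        PublishedPort = 80
        PublishedSSLPort = 8443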

Proxy configuration

The following options are made available to the extensions, which apply those that are relevant to the proxy service configuration. These options act as overrides to the extension configuration.

Because Interlock passes the extension configuration directly to the extension, each extension has different configuration options available.

The default proxy service used by MKE to provide layer 7 routing is NGINX. If users try to access a route that has not been configured, they will see the default NGINX 404 page.

You can customize this by labeling a service with com.docker.lb.default_backend=true. If users try to access a route that is not configured, they will be redirected to the custom service.

For details, see Create a proxy service.

See also

NGINX

Create a proxy service

If you want to customize the default NGINX proxy service that MKE uses to provide layer 7 routing, follow the steps below to create an example service to which users are directed when they try to access a route that is not configured.

To create an example proxy service:

  1. Create a docker-compose.yml file:

    version: "3.2"
    
    services:
      demo:
        image: httpd
        deploy:
          replicas: 1
          labels:
            com.docker.lb.default_backend: "true"
            com.docker.lb.port: 80
        networks:
          - demo-network
    
    networks:
      demo-network:
        driver: overlay
    
  2. Download and configure the client bundle and deploy the service:

    docker stack deploy --compose-file docker-compose.yml demo
    

    If users try to access a route that is not configured, they are directed to this demo service.

  3. Optional. To minimize forwarding interruptions while updating a service that has a single replica, add the following line to the labels section of the docker-compose.yml file:

    com.docker.lb.backend_mode: "vip"
    

    And then update the existing service:

    docker stack deploy --compose-file docker-compose.yml demo
    

Refer to Use service labels for information on how to set Interlock labels on services.

Configure host mode networking

Layer 7 routing components communicate with one another by default using overlay networks, but Interlock also supports host mode networking in a variety of ways, including proxy only, Interlock only, application only, and hybrid.

When using host mode networking, you cannot use DNS service discovery, since that functionality requires overlay networking. For services to communicate, each service needs to know the IP address of the node where the other service is running.

Note

Use an alternative to DNS service discovery such as Registrator if you require this functionality.

The following is a high-level overview of how to use host mode instead of overlay networking:

  1. Update the ucp-interlock configuration.

  2. Deploy your Swarm services.

  3. Configure proxy services.

If you have not already done so, configure the layer 7 routing solution for production with the ucp-interlock-proxy service replicas running on their own dedicated nodes.

Update the ucp-interlock configuration
  1. Update the PublishMode key in the ucp-interlock service configuration so that it uses host mode networking:

    PublishMode = "host"
    
  2. Update the ucp-interlock service to use the new Docker configuration so that it starts publishing its port on the host:

    docker service update \
    --config-rm $CURRENT_CONFIG_NAME \
    --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
    --publish-add mode=host,target=8080 \
    ucp-interlock
    

    The ucp-interlock and ucp-interlock-extension services are now communicating using host mode networking.

Deploy Swarm services

This section describes how to deploy an example Swarm service on an eight-node cluster using host mode networking to route traffic without using overlay networks. The cluster has three manager nodes and five worker nodes, with two workers configured as dedicated ingress cluster load balancer nodes that will receive all application traffic.

This example does not cover the actual infrastructure deployment, and assumes that you have a typical Swarm cluster created with docker swarm init and docker swarm join from the nodes, as sketched below.
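
For reference, a cluster of this kind is typically formed with commands along the following lines; the manager address is a placeholder:

    # On the node that will become the first manager
    docker swarm init --advertise-addr <manager-ip>

    # Print the join command for workers, then run it on each worker node
    docker swarm join-token worker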

  1. Download and configure the client bundle.

  2. Deploy an example Swarm demo service that uses host mode networking:

    docker service create \
    --name demo \
    --detach=false \
    --label com.docker.lb.hosts=app.example.org \
    --label com.docker.lb.port=8080 \
    --publish mode=host,target=8080 \
    --env METADATA="demo" \
    mirantiseng/docker-demo
    

    This example allocates a high random port on the host where the service can be reached.

  3. Test that the service works:

    curl --header "Host: app.example.org" \
    http://<proxy-address>:<routing-http-port>/ping
    
    • <proxy-address> is the domain name or IP address of a node where the proxy service is running.

    • <routing-http-port> is the port used to route HTTP traffic.

    A properly-working service will produce a result similar to the following:

    {"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
    
  4. Log in to one of the manager nodes and configure the load balancer worker nodes with node labels in order to pin the Interlock Proxy service:

    docker node update --label-add nodetype=loadbalancer lb-00
    lb-00
    docker node update --label-add nodetype=loadbalancer lb-01
    lb-01
    
  5. Verify that the labels were successfully added to each node:

    docker node inspect -f '{{ .Spec.Labels  }}' lb-00
    map[nodetype:loadbalancer]
    docker node inspect -f '{{ .Spec.Labels  }}' lb-01
    map[nodetype:loadbalancer]
    
  6. Create a configuration object for Interlock that specifies host mode networking:

    cat << EOF | docker config create service.interlock.conf -
    ListenAddr = ":8080"
    DockerURL = "unix:///var/run/docker.sock"
    PollInterval = "3s"
    
    [Extensions]
      [Extensions.default]
        Image = "mirantis/ucp-interlock-extension:3.4.15"
        Args = []
        ServiceName = "interlock-ext"
        ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
        ProxyArgs = []
        ProxyServiceName = "interlock-proxy"
        ProxyConfigPath = "/etc/nginx/nginx.conf"
        ProxyReplicas = 1
        PublishMode = "host"
        PublishedPort = 80
        TargetPort = 80
        PublishedSSLPort = 443
        TargetSSLPort = 443
        [Extensions.default.Config]
          User = "nginx"
          PidPath = "/var/run/proxy.pid"
          WorkerProcesses = 1
          RlimitNoFile = 65535
          MaxConnections = 2048
    EOF
    oqkvv1asncf6p2axhx41vylgt
    
  7. Create the Interlock service using host mode networking:

    docker service create \
    --name interlock \
    --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
    --constraint node.role==manager \
    --publish mode=host,target=8080 \
    --config src=service.interlock.conf,target=/config.toml \
    mirantis/ucp-interlock:3.4.15 -D run -c /config.toml
    sjpgq7h621exno6svdnsvpv9z
    
Configure proxy services

You can use node labels to reconfigure the Interlock Proxy services to be constrained to the workers.

  1. From a manager node, pin the proxy services to the load balancer worker nodes:

    docker service update \
    --constraint-add node.labels.nodetype==loadbalancer \
    interlock-proxy
    
  2. Deploy the application:

    docker service create \
    --name demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --publish mode=host,target=8080 \
    --env METADATA="demo" \
    mirantiseng/docker-demo
    

    This runs the service using host mode networking. Each task for the service has a high port, such as 32768, and uses the node IP address to connect.

  3. Inspect the headers from the request to verify that each task uses the node IP address to connect:

    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    

    Example of system response:

    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
    > GET /ping HTTP/1.1
    > Host: demo.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.6
    < Date: Fri, 10 Nov 2017 15:38:40 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 110
    < Connection: keep-alive
    < Set-Cookie: session=1510328320174129112; Path=/; Expires=Sat, 11 Nov 2017 15:38:40 GMT; Max-Age=86400
    < x-request-id: e4180a8fc6ee15f8d46f11df67c24a7d
    < x-proxy-id: d07b29c99f18
    < x-server-info: interlock/2.0.0-preview (17476782) linux/amd64
    < x-upstream-addr: 172.20.0.4:32768
    < x-upstream-response-time: 1510328320.172
    <
    {"instance":"897d3c7b9e9c","version":"0.1","metadata":"demo","request_id":"e4180a8fc6ee15f8d46f11df67c24a7d"}
    
Configure NGINX

By default, NGINX is used as a proxy. The following configuration options are available for the NGINX extension.

Note

The ServerNamesHashBucketSize option, which allowed the user to manually set the bucket size for the server names hash table, was removed in MKE 3.4.2 because MKE now adaptively calculates the setting and overrides any manual input.

Option

Type

Description

Defaults

User

string

User name for the proxy

nginx

PidPath

string

Path to the PID file for the proxy service

/var/run/proxy.pid

MaxConnections

int

Maximum number of connections for the proxy service

1024

ConnectTimeout

int

Timeout in seconds for clients to connect

600

SendTimeout

int

Timeout in seconds for the service to read a response from the proxied upstream

600

ReadTimeout

int

Timeout in seconds for the service to read a response from the proxied upstream

600

SSLOpts

int

Options to be passed when configuring SSL

N/A

SSLDefaultDHParam

int

Size of DH parameters

1024

SSLDefaultDHParamPath

string

Path to DH parameters file

N/A

SSLVerify

string

SSL client verification

required

WorkerProcesses

string

Number of worker processes for the proxy service

1

RLimitNoFile

int

Maximum number of open files for the proxy service

65535

SSLCiphers

string

SSL ciphers to use for the proxy service

HIGH:!aNULL:!MD5

SSLProtocols

string

Enable the specified TLS protocols

TLSv1.2

HideInfoHeaders

bool

Hide proxy-related response headers

N/A

KeepaliveTimeout

string

Connection keep-alive timeout

75s

ClientMaxBodySize

string

Maximum allowed client request body size

1m

ClientBodyBufferSize

string

Buffer size for reading client request body

8k

ClientHeaderBufferSize

string

Buffer size for reading the client request header

1k

LargeClientHeaderBuffers

string

Maximum number and size of buffers used for reading large client request header

4 8k

ClientBodyTimeout

string

Timeout for reading client request body

60s

UnderscoresInHeaders

bool

Enables or disables the use of underscores in client request header fields

false

UpstreamZoneSize

int

Size of the shared memory zone (in KB)

64

GlobalOptions

[]string

List of options that are included in the global configuration

N/A

HTTPOptions

[]string

List of options that are included in the HTTP configuration

N/A

TCPOptions

[]string

List of options that are included in the stream (TCP) configuration

N/A

AccessLogPath

string

Path to use for access logs

/dev/stdout

ErrorLogPath

string

Path to use for error logs

/dev/stdout

MainLogFormat

string

Format to use for main logger

N/A

TraceLogFormat

string

Format to use for trace logger

N/A
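
As a sketch of how these options are applied, the following fragment shows a few of them set in the [Extensions.default.Config] section of the Interlock configuration; the values are illustrative:

    [Extensions.default.Config]
      User = "nginx"
      PidPath = "/var/run/proxy.pid"
      WorkerProcesses = 1
      MaxConnections = 2048
      KeepaliveTimeout = "75s"
      ClientMaxBodySize = "8m"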

See also

NGINX

Tune the proxy service

This topic describes how to tune various components of the proxy service.

  • Constrain the proxy service to one or more dedicated worker nodes, for example, to the nodes labeled nodetype==loadbalancer earlier in this guide:

    docker service update --constraint-add \
    node.labels.nodetype==loadbalancer interlock-proxy
    
  • Adjust the stop signal and grace period, for example, to SIGTERM for the stop signal and ten seconds for the grace period:

    docker service update --stop-signal=SIGTERM \
    --stop-grace-period=10s interlock-proxy
    
  • Change the action that Swarm takes when an update fails using update-failure-action (the default is pause), for example, to rollback to the previous configuration:

    docker service update --update-failure-action=rollback \
    interlock-proxy
    
  • Change the amount of time between proxy updates using update-delay (the default is a rolling update with no delay), for example, setting the delay to thirty seconds:

    docker service update --update-delay=30s interlock-proxy
    
Update Interlock services

This topic describes how to update Interlock services by first updating the Interlock configuration to specify the new extension or proxy image versions and then updating the Interlock services to use the new configuration and image.

To update Interlock services:

  1. Create the new Interlock configuration:

    docker config create service.interlock.conf.v2 <path-to-new-config>
    
  2. Remove the old configuration and specify the new configuration:

    docker service update --config-rm \
    service.interlock.conf ucp-interlock
    docker service update --config-add \
    source=service.interlock.conf.v2,target=/config.toml \
    ucp-interlock
    
  3. Update the Interlock service to use the new image, for example, to pull the latest version of MKE:

    docker pull mirantis/ucp:latest
    

    Example output:

    latest: Pulling from mirantis/ucp
    cd784148e348: Already exists
    3871e7d70c20: Already exists
    cad04e4a4815: Pull complete
    Digest: sha256:63ca6d3a6c7e94aca60e604b98fccd1295bffd1f69f3d6210031b72fc2467444
    Status: Downloaded newer image for mirantis/ucp:latest
    docker.io/mirantis/ucp:latest
    
  4. List all of the latest MKE images:

    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp images --list
    

    Example output

    mirantis/ucp-agent:3.4.15
    mirantis/ucp-auth-store:3.4.15
    mirantis/ucp-auth:3.4.15
    mirantis/ucp-azure-ip-allocator:3.4.15
    mirantis/ucp-calico-cni:3.4.15
    mirantis/ucp-calico-kube-controllers:3.4.15
    mirantis/ucp-calico-node:3.4.15
    mirantis/ucp-cfssl:3.4.15
    mirantis/ucp-compose:3.4.15
    mirantis/ucp-controller:3.4.15
    mirantis/ucp-dsinfo:3.4.15
    mirantis/ucp-etcd:3.4.15
    mirantis/ucp-hyperkube:3.4.15
    mirantis/ucp-interlock-extension:3.4.15
    mirantis/ucp-interlock-proxy:3.4.15
    mirantis/ucp-interlock:3.4.15
    mirantis/ucp-kube-compose-api:3.4.15
    mirantis/ucp-kube-compose:3.4.15
    mirantis/ucp-kube-dns-dnsmasq-nanny:3.4.15
    mirantis/ucp-kube-dns-sidecar:3.4.15
    mirantis/ucp-kube-dns:3.4.15
    mirantis/ucp-metrics:3.4.15
    mirantis/ucp-pause:3.4.15
    mirantis/ucp-swarm:3.4.15
    mirantis/ucp:3.4.15
    
  5. Update the ucp-interlock service to use the new image. Interlock then starts, verifies the configuration object, which has the new extension version, and deploys a rolling update to all extensions:

    docker service update \
    --image mirantis/ucp-interlock:3.4.15 \
    ucp-interlock
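
You can then optionally confirm that the Interlock services are running the expected image versions, for example:

    docker service ls --filter name=ucp-interlock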
    
Routing traffic to services
Route traffic to a Swarm service

After Interlock is deployed, you can launch and publish services and applications. This topic describes how to configure services to publish themselves to the load balancer by using service labels.

Caution

The following procedures assume that a DNS entry exists for each of the applications (or a local hosts file entry for local testing), as in the example below.
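
For local testing, such a hosts file entry might resemble the following, where the IP address is a placeholder for one of your MKE nodes:

    203.0.113.10    demo.local app.example.org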


To publish a demo service with four replicas to the host (demo.local):

  1. Create a Docker Service using the following two labels:

    • com.docker.lb.hosts for Interlock to determine where the service is available.

    • com.docker.lb.port for the proxy service to determine which port to use to access the upstreams.

  2. Create an overlay network so that service traffic is isolated and secure:

    docker network create -d overlay demo
    1se1glh749q1i4pw0kf26mfx5
    
  3. Deploy the application:

    docker service create \
    --name demo \
    --network demo \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    mirantiseng/docker-demo
    6r0wiglf5f3bdpcy6zesh1pzx
    

    Interlock detects when the service is available and publishes it.

  4. After tasks are running and the proxy service is updated, the application is available through http://demo.local:

    curl -s -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
    
  5. To increase service capacity, use the docker service scale command:

    docker service scale demo=4
    demo scaled to 4
    

The load balancer balances traffic across all four service replicas configured in this example.


To publish a service with a web interface

This procedure deploys a simple service that includes the following:

  • A JSON endpoint that returns the ID of the task serving the request.

  • A web interface available at http://app.example.org that shows how many tasks the service is running.


  1. Create a docker-compose.yml file that includes the following:

    version: "3.2"
    
    services:
      demo:
        image: mirantiseng/docker-demo
        deploy:
          replicas: 1
          labels:
            com.docker.lb.hosts: app.example.org
            com.docker.lb.network: demo_demo-network
            com.docker.lb.port: 8080
        networks:
          - demo-network
    
    networks:
      demo-network:
        driver: overlay
    

    Label

    Description

    com.docker.lb.hosts

    Defines the hostname for the service. When the layer 7 routing solution gets a request containing app.example.org in the host header, that request is forwarded to the demo service.

    com.docker.lb.network

    Defines which network the ucp-interlock-proxy should attach to in order to communicate with the demo service. To use layer 7 routing, you must attach your services to at least one network. If your service is attached to a single network, you do not need to add a label to specify which network to use for routing. When using a common stack file for multiple deployments leveraging MKE Interlock and layer 7 routing, prefix com.docker.lb.network with the stack name to ensure traffic is directed to the correct overlay network. In combination with com.docker.lb.ssl_passthrough, the label is mandatory even if your service is only attached to a single network.

    com.docker.lb.port

    Specifies which port the ucp-interlock-proxy service should use to communicate with this demo service. Your service does not need to expose a port in the Swarm routing mesh. All communications are done using the network that you have specified.

    The ucp-interlock service detects that your service is using these labels and automatically reconfigures the ucp-interlock-proxy service.

  2. Download and configure the client bundle and deploy the service:

    docker stack deploy --compose-file docker-compose.yml demo
    

To test your services using the CLI:

Verify that requests are routed to the demo service:

curl --header "Host: app.example.org" \
http://<mke-address>:<routing-http-port>/ping
  • <mke-address> is the domain name or IP address of an MKE node.

  • <routing-http-port> is the port used to route HTTP traffic.

Example of a successful response:

{"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}

To test your services using a browser:

Because the demo service exposes an HTTP endpoint, you can also use your browser to validate that it works.

  1. Verify that the /etc/hosts file in your system has an entry mapping app.example.org to the IP address of an MKE node.

  2. Navigate to http://app.example.org in your browser.

Publish a service as a canary instance

This topic describes how to publish an initial or an updated service as a canary instance.


To publish a service as a canary instance:

  1. Create an overlay network to isolate and secure service traffic:

    docker network create -d overlay demo
    

    Example output:

    1se1glh749q1i4pw0kf26mfx5
    
  2. Create the initial service:

    docker service create \
    --name demo-v1 \
    --network demo \
    --detach=false \
    --replicas=4 \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --env METADATA="demo-version-1" \
    mirantiseng/docker-demo
    

    Interlock detects when the service is available and publishes it.

  3. After tasks are running and the proxy service is updated, the application is available at http://demo.local:

    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    

    Example output:

    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to demo.local (127.0.0.1) port 80 (#0)
    > GET /ping HTTP/1.1
    > Host: demo.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.6
    < Date: Wed, 08 Nov 2017 20:28:26 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 120
    < Connection: keep-alive
    < Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400
    < x-request-id: f884cf37e8331612b8e7630ad0ee4e0d
    < x-proxy-id: 5ad7c31f9f00
    < x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
    < x-upstream-addr: 10.0.2.4:8080
    < x-upstream-response-time: 1510172906.714
    <
    {"instance":"df20f55fc943","version":"0.1","metadata":"demo-version-1","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
    

    The value of metadata is demo-version-1.


To deploy an updated service as a canary instance:

  1. Deploy an updated service as a canary instance:

    docker service create \
    --name demo-v2 \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --env METADATA="demo-version-2" \
    --env VERSION="0.2" \
    mirantiseng/docker-demo
    

    Because this has one replica and the initial version has four replicas, 20% of application traffic is sent to demo-version-2:

    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"23d9a5ec47ef","version":"0.1","metadata":"demo-version-1","request_id":"060c609a3ab4b7d9462233488826791c"}
    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"f42f7f0a30f9","version":"0.1","metadata":"demo-version-1","request_id":"c848e978e10d4785ac8584347952b963"}
    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"}
    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"1b0d55ed3d2f","version":"0.2","metadata":"demo-version-2","request_id":"b86ff1476842e801bf20a1b5f96cf94e"}
    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    {"instance":"c2a686ae5694","version":"0.1","metadata":"demo-version-1","request_id":"724c21d0fb9d7e265821b3c95ed08b61"}
    
  2. Optional. Increase traffic to the new version by adding more replicas. For example:

    docker service scale demo-v2=4
    

    Example output:

    demo-v2
    
  3. Complete the upgrade by scaling the demo-v1 service to zero replicas:

    docker service scale demo-v1=0
    

    Example output:

    demo-v1
    

    This routes all application traffic to the new version. If you need to roll back your service, scale the v1 service back up and the v2 service back down.

Use context or path-based routing

This topic describes how to publish a service using context or path-based routing.


  1. Create an overlay network to isolate and secure service traffic:

    docker network create -d overlay demo
    

    Example output:

    1se1glh749q1i4pw0kf26mfx5
    
  2. Create the initial service:

    docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.context_root=/app \
    --label com.docker.lb.context_root_rewrite=true \
    --env METADATA="demo-context-root" \
    mirantiseng/docker-demo
    

    Interlock detects when the service is available and publishes it.

    Note

    Interlock only supports one path per host for each service cluster. When a specific com.docker.lb.hosts label is applied, it cannot be applied again in the same service cluster.

  3. After the tasks are running and the proxy service is updated, the application is available at http://demo.local:

    curl -vs -H "Host: demo.local" http://127.0.0.1/app/
    

    Example output:

    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
    > GET /app/ HTTP/1.1
    > Host: demo.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.6
    < Date: Fri, 17 Nov 2017 14:25:17 GMT
    < Content-Type: text/html; charset=utf-8
    < Transfer-Encoding: chunked
    < Connection: keep-alive
    < x-request-id: 077d18b67831519defca158e6f009f82
    < x-proxy-id: 77c0c37d2c46
    < x-server-info: interlock/2.0.0-dev (732c77e7) linux/amd64
    < x-upstream-addr: 10.0.1.3:8080
    < x-upstream-response-time: 1510928717.306
    
Configure a routing mode

This topic describes how to publish services using the task and VIP back-end routing modes.

Routing modes

The following table describes the two back-end routing modes:

Routing modes

Task mode

VIP mode

Default

yes

no

Traffic routing

Interlock uses back-end task IPs to route traffic from the proxy to each container. Traffic to the front-end route is layer 7 load balanced directly to service tasks. This allows for routing functionality such as sticky sessions for each container. Task routing mode applies layer 7 routing and then sends packets directly to a container.

Interlock uses the Swarm service VIP as the back-end IP instead of using container IPs. Traffic to the front-end route is layer 7 load balanced to the Swarm service VIP, which Layer 4 load balances to back-end tasks. VIP mode is useful for reducing the amount of churn in Interlock proxy service configurations, which can be an advantage in highly dynamic environments.

VIP mode optimizes for fewer proxy updates with the tradeoff of a reduced feature set. Most application updates do not require configuring back ends in VIP mode. In VIP routing mode, Interlock uses the service VIP, which is a persistent endpoint that exists from service creation to service deletion, as the proxy back end. VIP routing mode applies Layer 7 routing and then sends packets to the Swarm Layer 4 load balancer, which routes traffic to service containers.

Canary deployments

In task mode, a canary service with one task next to an existing service with four tasks represents one out of five total tasks, so the canary will receive 20% of incoming requests.

Because VIP mode routes by service IP rather than by task IP, it affects the behavior of canary deployments. In VIP mode, a canary service with one task next to an existing service with four tasks will receive 50% of incoming requests, as it represents one out of two total services.

Specify a routing mode

You can set each service to use either the task or the VIP back-end routing mode. Task mode is the default and is used if a label is not specified or if it is set to task.

Set the routing mode to VIP
  1. Apply the following label to set the routing mode to VIP:

    com.docker.lb.backend_mode=vip
    
  2. Note that the following two updates require a proxy reconfiguration, as they create or remove a service VIP:

    • Adding or removing a network on a service

    • Deploying or deleting a service

    Note

    The following is a non-exhaustive list of application events that do not require proxy reconfiguration in VIP mode:

    • Increasing or decreasing a service replica

    • Deploying a new image

    • Updating a configuration or secret

    • Adding or removing a label

    • Adding or removing an environment variable

    • Rescheduling a failed application task

Publish a default host service

The following example publishes a service to be a default host. The service responds whenever a request is made to an unconfigured host.

  1. Create an overlay network to isolate and secure the service traffic:

    docker network create -d overlay demo
    

    Example output:

    1se1glh749q1i4pw0kf26mfx5
    
  2. Create the initial service:

    docker service create \
    --name demo-default \
    --network demo \
    --detach=false \
    --replicas=1 \
    --label com.docker.lb.default_backend=true \
    --label com.docker.lb.port=8080 \
    ehazlett/interlock-default-app
    

    Interlock detects when the service is available and publishes it. After tasks are running and the proxy service is updated, the application is available at any URL that is not configured.

Publish a service using the VIP back-end mode
  1. Create an overlay network to isolate and secure the service traffic:

    docker network create -d overlay demo
    

    Example output:

    1se1glh749q1i4pw0kf26mfx5
    
  2. Create the initial service:

    docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --replicas=4 \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.backend_mode=vip \
    --env METADATA="demo-vip-1" \
    mirantiseng/docker-demo
    

    Interlock detects when the service is available and publishes it.

  3. After tasks are running and the proxy service is updated, the application is available at http://demo.local:

    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    

    Example output:

    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to demo.local (127.0.0.1) port 80 (#0)
    > GET /ping HTTP/1.1
    > Host: demo.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.6
    < Date: Wed, 08 Nov 2017 20:28:26 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 120
    < Connection: keep-alive
    < Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400
    < x-request-id: f884cf37e8331612b8e7630ad0ee4e0d
    < x-proxy-id: 5ad7c31f9f00
    < x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
    < x-upstream-addr: 10.0.2.9:8080
    < x-upstream-response-time: 1510172906.714
    <
    {"instance":"df20f55fc943","version":"0.1","metadata":"demo","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
    

    Using VIP mode causes Interlock to use the virtual IPs of the service for load balancing rather than using each task IP.

  4. Inspect the service to see the VIPs, as in the following example:

    "Endpoint": {
        "Spec": {
                    "Mode": "vip"
    
        },
        "VirtualIPs": [
            {
                    "NetworkID": "jed11c1x685a1r8acirk2ylol",
                    "Addr": "10.0.2.9/24"
            }
        ]
    }
    

    In this example, Interlock configures a single upstream for the host using IP 10.0.2.9. Interlock skips further proxy updates as long as there is at least one replica for the service, as the only upstream is the VIP.
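
    To produce the endpoint output shown above, you can inspect the service directly, for example:

    docker service inspect --format '{{ json .Endpoint }}' demo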

Use service labels

Interlock uses service labels to configure how applications are published, to define the host names that are routed to the service, to define the applicable ports, and to define other routing configurations.

The following occurs when you deploy or update a Swarm service with service labels:

  1. The ucp-interlock service monitors the Docker API for events and publishes the events to the ucp-interlock-extension service.

  2. The ucp-interlock-extension service generates a new configuration for the proxy service based on the labels you have added to your services.

  3. The ucp-interlock service takes the new configuration and reconfigures ucp-interlock-proxy to start using the new configuration.

This process occurs in milliseconds and does not interrupt services.
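
For instance, you can add these labels to a service that is already running with docker service update, which triggers the reconfiguration flow described above; the host name, port, and service name here are illustrative:

    docker service update \
    --label-add com.docker.lb.hosts=app.example.org \
    --label-add com.docker.lb.port=8080 \
    demo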


The following table lists the service labels that Interlock uses:

Label

Description

Example

com.docker.lb.hosts

Comma-separated list of the hosts for the service to serve.

example.com, test.com

com.docker.lb.port

Port to use for internal upstream communication.

8080

com.docker.lb.network

Name of the network for the proxy service to attach to for upstream connectivity.

app-network-a

com.docker.lb.context_root

Context or path to use for the application.

/app

com.docker.lb.context_root_rewrite

Changes the path from the value of label com.docker.lb.context_root to / when set to true.

true

com.docker.lb.ssl_cert

Docker secret to use for the SSL certificate.

example.com.cert

com.docker.lb.ssl_key

Docker secret to use for the SSL key.

example.com.key

com.docker.lb.websocket_endpoints

Comma-separated list of endpoints to be upgraded for websockets.

/ws,/foo

com.docker.lb.service_cluster

Name of the service cluster to use for the application.

us-east

com.docker.lb.sticky_session_cookie

Cookie to use for sticky sessions.

app_session

com.docker.lb.redirects

Semicolon-separated list of redirects to add in the format of <source>, <target>.

http://old.example.com, http://new.example.com

com.docker.lb.ssl_passthrough

Enables SSL passthrough when set to true.

false

com.docker.lb.backend_mode

Selects the back-end mode that the proxy should use to access the upstreams. The default is task.

vip

Configure redirects

This topic describes how to publish a service with a redirect from old.local to new.local.

Note

Redirects do not work if a service is configured for TLS passthrough in the Interlock proxy.


  1. Create an overlay network to isolate and secure service traffic:

    docker network create -d overlay demo
    

    Example output:

    1se1glh749q1i4pw0kf26mfx5
    
  2. Create the service with the redirect:

    docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --label com.docker.lb.hosts=old.local,new.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.redirects=http://old.local,http://new.local \
    --env METADATA="demo-new" \
    mirantiseng/docker-demo
    

    Interlock detects when the service is available and publishes it.

  3. After tasks are running and the proxy service is updated, the application is available through http://new.local with a redirect configured that sends http://old.local to http://new.local:

    curl -vs -H "Host: old.local" http://127.0.0.1
    

    Example output:

    * Rebuilt URL to: http://127.0.0.1/
    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
    > GET / HTTP/1.1
    > Host: old.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 302 Moved Temporarily
    < Server: nginx/1.13.6
    < Date: Wed, 08 Nov 2017 19:06:27 GMT
    < Content-Type: text/html
    < Content-Length: 161
    < Connection: keep-alive
    < Location: http://new.local/
    < x-request-id: c4128318413b589cafb6d9ff8b2aef17
    < x-proxy-id: 48854cd435a4
    < x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
    <
    <html>
    <head><title>302 Found</title></head>
    <body bgcolor="white">
    <center><h1>302 Found</h1></center>
    <hr><center>nginx/1.13.6</center>
    </body>
    </html>
    
Service clusters

Reconfiguring the single proxy service that Interlock manages by default can take one to two seconds for each overlay network that the proxy manages. You can scale up to a larger number of Interlock-routed networks and services by implementing a service cluster. Service clusters use Interlock to manage multiple proxy services, each responsible for routing to a separate set of services and their corresponding networks, thereby minimizing proxy reconfiguration time.

Configure service clusters

This topic and the next assume that the following prerequisites have been met:

  • You have an operational MKE cluster with at least two worker nodes (mke-node-0 and mke-node-1), which you will use as dedicated proxy servers for two independent Interlock service clusters.

  • You have enabled Interlock with an HTTP port of 80 and an HTTPS port of 8443.


  1. From a manager node, apply node labels to the MKE workers that you have chosen to use as your proxy servers:

    docker node update --label-add nodetype=loadbalancer --label-add region=east mke-node-0
    docker node update --label-add nodetype=loadbalancer --label-add region=west mke-node-1
    

    In this example, mke-node-0 serves as the proxy for the east region and mke-node-1 serves as the proxy for the west region.

  2. Create a dedicated overlay network for each region proxy to manage traffic:

    docker network create --driver overlay eastnet
    docker network create --driver overlay westnet
    
  3. Obtain the current Interlock configuration, which you will then modify to create two service clusters:

    CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ \
    (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
    ucp-interlock)
    docker config inspect --format '{{ printf "%s" .Spec.Data }}' \
    $CURRENT_CONFIG_NAME > old_config.toml
    
  4. Create the following config.toml file that declares two service clusters, east and west:

    ListenAddr = ":8080"
    DockerURL = "unix:///var/run/docker.sock"
    AllowInsecure = false
    PollInterval = "3s"
    
    [Extensions]
      [Extensions.east]
        Image = "mirantis/ucp-interlock-extension:3.2.3"
        ServiceName = "ucp-interlock-extension-east"
        Args = []
        Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
        ConfigImage = "mirantis/ucp-interlock-config:3.2.3"
        ConfigServiceName = "ucp-interlock-config-east"
        ProxyImage = "mirantis/ucp-interlock-proxy:3.2.3"
        ProxyServiceName = "ucp-interlock-proxy-east"
        ServiceCluster="east"
        Networks=["eastnet"]
        ProxyConfigPath = "/etc/nginx/nginx.conf"
        ProxyReplicas = 1
        ProxyStopSignal = "SIGQUIT