Layer 7 routing¶
MKE includes a system for application-layer (Layer 7) routing that offers both application routing and load balancing (ingress routing) for Swarm orchestration. The Interlock architecture leverages Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
Swarm mode provides MCR with a routing mesh, which enables users to access services using the IP address of any node in the swarm. Layer 7 routing enables you to access services through any node in the swarm by using a domain name, with Interlock routing the traffic to the node with the relevant container.
Interlock uses the Docker remote API to automatically configure extensions such as NGINX and HAProxy for application traffic. Interlock is designed for:
- Full integration with MCR, including Swarm services, secrets, and configs
- Enhanced configuration, including context roots, TLS, zero-downtime deployment, and rollback
- Support through extensions for external load balancers, such as NGINX, HAProxy, and F5
- Least privilege for extensions, such that they have no Docker API access
Interlock and Layer 7 routing are used for Swarm deployments. Refer to NGINX Ingress Controller for information on routing traffic to your Kubernetes applications.
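As a sketch of how this works in practice, a Swarm service can be published through Interlock by attaching `com.docker.lb.*` labels at deployment time. The service name, image, overlay network name, and domain below are illustrative:

```shell
# Create an overlay network for the application (name is illustrative)
docker network create --driver overlay demo-net

# Deploy the service with Interlock labels. Interlock detects the labels
# through the Docker API and configures the proxy to route requests for
# demo.example.com to the service's containers.
docker service create \
  --name demo \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  nginx:alpine
```

Once the proxy is reconfigured, requests carrying a `Host: demo.example.com` header that reach any node in the swarm are routed to the service.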
Terminology¶
- Cluster
A group of compute resources running MKE
- Swarm
An MKE cluster running in Swarm mode
- Upstream
An upstream container that serves an application
- Proxy service
A service, such as NGINX, that provides load balancing and proxying
- Extension service
A secondary service that configures the proxy service
- Service cluster
A combined Interlock extension and proxy service
- gRPC
A high-performance RPC framework
- Interlock
The central piece of the Layer 7 routing solution. The core service uses the Docker remote API to monitor events and build an upstream configuration for the extensions, manages the extension and proxy services, and serves the configuration over a gRPC API that the extensions are configured to access.
Interlock manages extension and proxy service updates for both configuration changes and application service deployments. There is no operator intervention required.
The Interlock service starts a single replica on a manager node. The Interlock extension service runs a single replica on any available node, and the Interlock proxy service starts two replicas on any available node. Interlock prioritizes replica placement in the following order:
1. Replicas on the same worker node
2. Replicas on different worker nodes
3. Replicas on any available node, including managers
- Interlock extension
A secondary service that queries the Interlock gRPC API for the upstream configuration. The extension service configures the proxy service according to the upstream configuration. For proxy services that use files such as NGINX or HAProxy, the extension service generates the file and sends it to Interlock using the gRPC API. Interlock then updates the corresponding Docker configuration object for the proxy service.
- Interlock proxy
A proxy and load-balancing service that handles requests for the upstream application services. Interlock configures these using the data created by the corresponding extension service. By default, this service is a containerized NGINX deployment.
Features and benefits¶
- High availability
All Layer 7 routing components are failure-tolerant and leverage Docker Swarm for high availability.
- Automatic configuration
Interlock uses the Docker API for automatic configuration, so you do not need to manually update or restart anything to make services available. MKE monitors your services and automatically reconfigures the proxy services.
- Scalability
Interlock uses a modular design with a separate proxy service, allowing an operator to individually customize and scale the proxy layer to handle user requests and meet service demands, with transparency and no downtime for users.
- TLS
You can leverage Docker secrets to securely manage TLS certificates and keys for your services. Interlock supports both TLS termination and TCP passthrough.
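A minimal sketch of TLS termination with Docker secrets follows. The certificate files, secret names, domain, and network are hypothetical; the `com.docker.lb.ssl_cert` and `com.docker.lb.ssl_key` labels reference the secret names:

```shell
# Store the certificate and private key as Docker secrets
# (file names are illustrative)
docker secret create demo.example.com.cert cert.pem
docker secret create demo.example.com.key key.pem

# Deploy the service; Interlock configures the proxy to terminate TLS
# for demo.example.com using the referenced secrets.
docker service create \
  --name demo \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.ssl_cert=demo.example.com.cert \
  --label com.docker.lb.ssl_key=demo.example.com.key \
  nginx:alpine
```

For TCP passthrough, the application terminates TLS itself and the proxy forwards the encrypted stream unmodified.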
- Context-based routing
Interlock supports advanced application request routing by context or path.
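Path-based routing can be sketched with the `com.docker.lb.context_root` label (service name, domain, and network are illustrative, and the application is assumed to serve content under the given path):

```shell
# Route requests for demo.example.com/app to this service
docker service create \
  --name demo-app \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.context_root=/app \
  nginx:alpine
```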
- Host mode networking
Layer 7 routing leverages the Docker Swarm routing mesh by default, but Interlock also supports running proxy and application services in host mode networking, allowing you to bypass the routing mesh completely for maximum application performance.
- Security
The Layer 7 routing components that are exposed to the outside world run on worker nodes, so your cluster is not affected if they are compromised.
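An application service can bypass the routing mesh by publishing its port directly on the host and advertising that port to Interlock (the service name, image, domain, and port below are hypothetical; the Interlock proxy service must likewise be configured for host mode publishing):

```shell
# Publish the application port directly on each host running a task,
# bypassing the Swarm routing mesh; the proxy connects to the node port.
docker service create \
  --name demo \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.port=8080 \
  --publish mode=host,target=8080 \
  my-app:latest
```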
- Blue-green and canary service deployment
Interlock supports blue-green service deployment, allowing an operator to deploy a new application version while the current version is still serving traffic. Once traffic to the new version is verified, the operator can scale the older version to zero. If there is a problem, the operation is easy to reverse.
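A blue-green rollout can be sketched as follows (service names, image tags, domain, and network are hypothetical; both versions share the same host label, so the proxy balances across them while both are running):

```shell
# Deploy the new (green) version alongside the existing demo-v1 (blue)
docker service create \
  --name demo-v2 \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  my-app:2.0

# After verifying traffic to the new version, drain the old one;
# scaling demo-v2 to zero instead would roll the change back.
docker service scale demo-v1=0
```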
- Service cluster support
Interlock supports multiple extension and proxy service combinations, allowing operators to partition load balancing resources, for example, for region- or organization-based load balancing.
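A service can be pinned to a particular service cluster with the `com.docker.lb.service_cluster` label. The cluster name must match a service cluster defined in the Interlock configuration; all names below are illustrative:

```shell
# Assign this service to the "us-east" Interlock service cluster, so only
# that cluster's extension and proxy handle its traffic.
docker service create \
  --name demo-east \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.service_cluster=us-east \
  nginx:alpine
```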
- Least privilege
Interlock can be deployed so that the load balancing proxies are not colocated with a Swarm manager. This is a more secure deployment approach, as it ensures that the extension and proxy services have no access to the Docker API.