Configure layer 7 routing for production¶
This topic describes how to configure Interlock for a production environment and builds upon the instructions in the previous topic, Deploy a layer 7 routing solution. It does not describe infrastructure deployment, and it assumes a typical Swarm cluster created with docker swarm init and joined with docker swarm join from the other nodes.
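If you are building such a cluster from scratch, the basic bootstrap looks roughly like the following sketch; the advertise address and join token shown here are placeholders, and the real join command is printed by docker swarm init:
# On the node that will become the first manager (replace the IP with its own address):
docker swarm init --advertise-addr 192.0.2.10
# On every other node, run the join command that the manager printed, for example:
docker swarm join --token <worker-join-token> 192.0.2.10:2377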
The layer 7 solution that ships with MKE is highly available, fault tolerant, and designed to work independently of how many nodes you manage with MKE.
The following procedures require that you dedicate two worker nodes to running the ucp-interlock-proxy service. This tuning ensures the following:
The proxy services have dedicated resources to handle user requests. You can configure these nodes with higher performance network interfaces.
No application traffic can be routed to a manager node, thus making your deployment more secure.
If one of the two dedicated nodes fails, layer 7 routing continues working.
To dedicate two nodes to running the proxy service:
Select two nodes that you will dedicate to running the proxy service.
Log in to one of the Swarm manager nodes.
Add labels to the two dedicated proxy service nodes, configuring them as load balancer worker nodes, for example, lb-00 and lb-01:
docker node update --label-add nodetype=loadbalancer lb-00
lb-00
docker node update --label-add nodetype=loadbalancer lb-01
lb-01
Verify that the labels were added successfully:
docker node inspect -f '{{ .Spec.Labels }}' lb-00
map[nodetype:loadbalancer]
docker node inspect -f '{{ .Spec.Labels }}' lb-01
map[nodetype:loadbalancer]
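If you want a single cluster-wide view of the labels, a combined command such as the following is an optional convenience check, not part of the documented procedure:
# Print every node's hostname together with its node labels:
docker node ls -q | xargs docker node inspect -f '{{ .Description.Hostname }}: {{ .Spec.Labels }}'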
To update the proxy service:
You must update the ucp-interlock-proxy service configuration so that the proxy service is constrained to the dedicated worker nodes.
From a manager node, add a constraint to the ucp-interlock-proxy service to update the running service:
docker service update --replicas=2 \
    --constraint-add node.labels.nodetype==loadbalancer \
    --stop-signal SIGQUIT \
    --stop-grace-period=5s \
    $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
This updates the proxy service to have two replicas, ensures that they are constrained to the workers with the label nodetype==loadbalancer, and configures the stop signal for the tasks to be a SIGQUIT with a grace period of five seconds. This ensures that NGINX finishes serving in-flight client requests before it exits.
Inspect the service to verify that the replicas have started on the selected nodes:
docker service ps $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)
Example of system response:
ID             NAME                    IMAGE          NODE     DESIRED STATE   CURRENT STATE                     ERROR   PORTS
o21esdruwu30   interlock-proxy.1       nginx:alpine   lb-01    Running         Preparing 3 seconds ago
n8yed2gp36o6    \_ interlock-proxy.1   nginx:alpine   mgr-01   Shutdown        Shutdown less than a second ago
aubpjc4cnw79   interlock-proxy.2       nginx:alpine   lb-00    Running         Preparing 3 seconds ago
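To confirm that the new stop signal, grace period, and placement constraint were recorded in the service spec, an optional read-back such as the following can be used; note that Swarm reports the grace period in nanoseconds:
# Show the stop signal, stop grace period, and placement constraints of the proxy service:
docker service inspect \
    -f 'signal={{ .Spec.TaskTemplate.ContainerSpec.StopSignal }} grace={{ .Spec.TaskTemplate.ContainerSpec.StopGracePeriod }} constraints={{ .Spec.TaskTemplate.Placement.Constraints }}' \
    $(docker service ls -f 'label=type=com.docker.interlock.core.proxy' -q)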
Add the constraint to the ProxyConstraints array in the interlock-proxy service configuration in case Interlock is restored from backup:
[Extensions]
  [Extensions.default]
    ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
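If your deployment stores the Interlock configuration as a Swarm config object attached to the ucp-interlock service, one way to apply the edited TOML is to export the current config, modify it, and rotate in a new config object, roughly as sketched below. The new config name, local file name, and target path are illustrative; adjust them to match your environment:
# Find the config object currently attached to the ucp-interlock service:
CURRENT_CONFIG_NAME=$(docker service inspect \
    --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' ucp-interlock)
# Export it, edit ProxyConstraints in the resulting file, then create a new config object:
docker config inspect --format '{{ printf "%s" .Spec.Data }}' "$CURRENT_CONFIG_NAME" > config.toml
docker config create interlock-config-v2 config.toml
# Swap the service over to the new config object:
docker service update \
    --config-rm "$CURRENT_CONFIG_NAME" \
    --config-add source=interlock-config-v2,target=/config.toml \
    ucp-interlock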
Optional. By default, the config service is global, scheduling one task on every node in the cluster. To modify constraint scheduling, update the ProxyConstraints variable in the Interlock configuration file. Refer to Configure layer 7 routing service for more information.
Verify that the proxy service is running on the dedicated nodes:
docker service ps ucp-interlock-proxy
Update the settings in the upstream load balancer, such as ELB or F5, with the addresses of the dedicated ingress workers, thus directing all traffic to these two worker nodes.
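Once the upstream load balancer points at the dedicated workers, you can spot-check routing by sending a request with the Host header of an application published through Interlock directly to one of the ingress nodes. The hostname below is a placeholder, and port 8080 is Interlock's default published HTTP port; substitute your own values:
# Send a test request through one of the dedicated ingress workers:
curl -s -H "Host: demo.example.org" http://lb-00:8080/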