Optimize Interlock deployments

This topic describes several ways to optimize your Interlock deployments. First, it helps to review the stages of an Interlock deployment. The following process occurs each time you update an application:

  1. The user updates a service with a new version of an application.

  2. The default stop-first update policy stops the old app.1 task before scheduling its replacement. As the app.1 task is removed, the Interlock proxies remove its IP address, ip1.0, from the back-end pool.

  3. Swarm reschedules the first application task with the new image after the old task stops.

  4. Interlock reschedules proxy.1 with the new NGINX configuration containing the new app.1 task update.

  5. After proxy.1 finishes redeploying, proxy.2 redeploys with the updated NGINX configuration for the app.1 task.

In this scenario, the service is unavailable for less than 30 seconds.
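As a concrete illustration, the flow above is triggered by an ordinary service update. This is a minimal sketch; the service name app and image demo/app:2.0 are hypothetical:

```shell
# Hypothetical example: updating the service "app" to a new image
# triggers the stop-first update flow described above.
docker service update --image demo/app:2.0 app

# Watch tasks being stopped and rescheduled one at a time.
docker service ps app
```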

Application update optimizations

To optimize your application update order:

The --update-order flag lets you control the order in which Swarm stops and starts tasks when you replace them with new tasks:

stop-first (default)

  Configures the old task to stop before the new task starts. Use this if the old and new tasks cannot serve clients at the same time.

start-first

  Configures the new task to start before the old task stops. Use this if you have a single application replica and cannot tolerate service interruption. This optimizes for high availability.

To optimize the order in which you update your application, [need-instructions-from-sme].
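As a rough sketch, the update order can be set with the --update-order flag on docker service create or docker service update; the service name app and image are hypothetical:

```shell
# Start new tasks before stopping old ones (start-first),
# optimizing for availability during updates.
docker service update --update-order start-first app

# Or set it at creation time; stop-first is the default.
docker service create --name app --update-order start-first demo/app:1.0
```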


To set an application update delay:

The --update-delay flag lets you control how long an application update takes by adding a delay between updating tasks. The delay occurs between the time the first task enters a healthy state and the time the next task begins its update. The default is 0 seconds, meaning there is no delay.

Use --update-delay if either of the following applies:

  • You can tolerate a longer update cycle with the benefit of fewer dropped connections.

  • Interlock update convergence takes a long time in your environment, often due to having a large number of overlay networks.

Do not use --update-delay if either of the following applies:

  • You need service updates to occur rapidly.

  • The old and new tasks cannot serve clients at the same time.

To set the update delay, [need-instructions-from-sme].
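A minimal sketch of setting the delay on an existing service (the service name app is hypothetical):

```shell
# Hypothetical example: wait 30 seconds after each updated task
# becomes healthy before updating the next one.
docker service update --update-delay 30s app
```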


To configure application health checks:

The --health-cmd flag lets Swarm check application health so that updates do not cause service interruption. Without a health check, Swarm considers an application healthy as soon as the container process is running, even if the application is not yet able to serve clients, which can lead to dropped connections. You can configure a health check with --health-cmd, in a Dockerfile (HEALTHCHECK), or in a Compose file.

To configure health-cmd, [need-instructions-from-sme].
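As a hedged sketch using the CLI form, the check below assumes the application exposes a /health endpoint on port 8080 and that curl is available inside the container; adjust the command to whatever actually indicates readiness for your app:

```shell
# Mark a task healthy only when the app answers on its health endpoint;
# Swarm waits for this before moving on to the next task in an update.
docker service update \
  --health-cmd "curl --fail http://localhost:8080/health || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  app
```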


To configure an application stop grace period:

The --stop-grace-period flag sets the maximum time Swarm waits before it force-kills a task: a task can run no longer than this value after its shutdown cycle begins. The default is 10 seconds. Use longer wait times for applications that need long periods to finish processing requests, so connections can terminate normally.

To configure stop-grace-period, [need-instructions-from-sme].
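A minimal sketch (service name hypothetical):

```shell
# Hypothetical example: give tasks up to 60 seconds to drain
# in-flight connections before Swarm force-kills them.
docker service update --stop-grace-period 60s app
```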

Interlock optimizations

To use service clusters for Interlock segmentation:

Interlock can be segmented into multiple logical instances called service clusters, each with independently managed proxies. Application traffic can be fully segmented, because it only uses the proxies for its particular service cluster. Each service cluster only connects to the networks that use that specific service cluster, which reduces the number of overlay networks the proxies connect to. Because service clusters use separate proxies, they also reduce the amount of load balancer configuration churn during service updates.

To configure service clusters, [need-instructions-from-sme].
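As a hedged sketch using Interlock's com.docker.lb.* service-label conventions (the cluster name us-east, network, hostname, and image below are hypothetical), an application is assigned to a service cluster with a label at creation time:

```shell
# Attach the service to the "us-east" service cluster so that only
# that cluster's proxies and networks handle its traffic.
docker service create --name app \
  --network us-east-net \
  --label com.docker.lb.service_cluster=us-east \
  --label com.docker.lb.hosts=app.example.com \
  --label com.docker.lb.port=8080 \
  demo/app:1.0
```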


To minimize the number of overlay networks:

Every overlay network connected to Interlock adds one to two seconds of update delay. Too many connected networks leave the load balancer configuration out of date for too long, resulting in dropped traffic.

There are two ways to minimize the number of overlay networks that Interlock connects to:

  • Group applications together to share a network if the architecture permits doing so.

  • Use Interlock service clusters, which segment the networks connected to Interlock and reduce the number of networks each proxy connects to. Also use admin-defined networks to limit the number of networks per service cluster.
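The first approach, grouping applications onto a shared network, can be sketched as follows (network, service, and image names are hypothetical):

```shell
# Create one shared overlay network and attach multiple related
# applications to it, instead of one overlay network per application.
docker network create --driver overlay shared-apps
docker service create --name app1 --network shared-apps demo/app1:1.0
docker service create --name app2 --network shared-apps demo/app2:1.0
```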


To use Interlock VIP Mode:

VIP mode reduces the impact of application updates on the Interlock proxies. In VIP mode, Interlock load balances traffic to the Swarm L4 load-balancing VIPs, which are more stable internal endpoints, instead of to individual task IPs. This prevents the proxy load balancer configuration from changing for most kinds of application service updates, reducing Interlock churn.

These are the features that VIP mode supports:

  • Host and context routing

  • Context root rewrites

  • Interlock TLS termination

  • TLS passthrough

  • Service clusters

These are the features that VIP mode does not support:

  • Sticky sessions

  • Websockets

  • Canary deployments

To use Interlock VIP mode, [need-instructions-from-sme].
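As a hedged sketch using Interlock's com.docker.lb.* service-label conventions (the service name app is hypothetical), VIP mode is typically enabled per service via a backend-mode label:

```shell
# Route proxy traffic to the service's Swarm VIP instead of
# individual task IPs.
docker service update --label-add com.docker.lb.backend_mode=vip app
```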