Service discovery registers a service and publishes its connectivity information so that other services are aware of how to connect to the service. As applications move toward microservices and service-oriented architectures, service discovery has become an integral part of any distributed system, increasing the operational complexity of these environments.
Docker Enterprise Edition includes service discovery and load balancing capabilities to aid the devops initiatives across any organization. Service discovery and load balancing make it easy for developers to create applications that can dynamically discover each other. Also, these features simplify the scaling of applications by operations engineers.
Docker uses a concept called services to deploy applications. Services consist of containers created from the same image. Each service consists of tasks that execute on worker nodes and define the state of the application. When a service is deployed, a service definition is included upon its creation. The service definition specifies, among other things, the containers that comprise the service, which ports are published, which networks are attached, and the number of replicas. All of these tasks together make up the desired state of the service. If a node fails a health check, or if a specific service task defined in a service definition fails its health check, then the cluster reconciles the service state by rescheduling tasks onto a healthy node. Docker Enterprise includes service discovery, load balancing, scaling, and reconciliation events so that this orchestration works seamlessly.
For Kubernetes-based service discovery and load-balancing, please refer to ucp-ingress-k8s.
This reference architecture covers the solutions that Docker Enterprise provides in the topic areas of service discovery and load balancing for swarm mode workloads. In swarm mode, Docker uses DNS for service discovery as services are created, and different routing meshes are built into Docker to ensure your applications remain highly available. The release of UCP 3.0 introduced a versatile, enhanced application layer (Layer 7) routing mesh called the Interlock Proxy that routes HTTP traffic based on the DNS hostname. After reading this document, you will have a good understanding of how the Interlock Proxy works and how it integrates with the other service discovery and load balancing features native to Docker.
Additionally, UCP 3.0 introduced enterprise support for using Kubernetes as the orchestrator of your application workloads. The document linked below will provide a good understanding of how to use Kubernetes resource objects within Docker Enterprise to deploy, run, and manage application workloads.
Docker uses embedded DNS to provide service discovery for containers running on a single Docker engine and tasks running in a Docker swarm. The Docker engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks. Each Docker container (or task in swarm mode) has a DNS resolver that forwards DNS queries to the Docker engine, which acts as a DNS server. The Docker engine then checks if the DNS query belongs to a container or service on each network that the requesting container belongs to. If it does, then the Docker engine looks up the IP address that matches the name of the container, task, or service in its key-value store and returns that IP, or the service Virtual IP (VIP), back to the requester.
Service discovery is network-scoped, meaning only containers or tasks that are on the same network can use the embedded DNS functionality. Containers not on the same network cannot resolve each others’ addresses. Additionally, only the nodes that have containers or tasks on a particular network store that network’s DNS entries. This promotes security and performance.
If the destination container or service and the source container are not on the same network, the Docker engine forwards the DNS query to the default DNS server.
In this example, there is a service of two containers called myservice. A second service (client) exists on the same network. The client executes two curl operations, for docker.com and myservice. These are the resulting actions:

- client issues DNS queries for docker.com and myservice.
- The container's built-in resolver intercepts the DNS queries on 127.0.0.11:53 and sends them to Mirantis Container Runtime's DNS server.
- myservice resolves to the Virtual IP (VIP) of that service, which is internally load balanced to the individual task IP addresses. Container names are resolved as well, albeit directly to their IP addresses.
- docker.com does not exist as a service name in the mynet network, so the request is forwarded to the configured default DNS server.

When services are created in a Docker swarm cluster, they are automatically assigned a Virtual IP (VIP) that is part of the service's network. The VIP is returned when resolving the service's name. Traffic to the VIP is automatically sent to all healthy tasks of that service across the overlay network. This approach avoids any client-side load balancing because only a single IP is returned to the client. Docker takes care of routing and equally distributes the traffic across the healthy service tasks.
To get the VIP of a service, run the
docker service inspect myservice
command like so:
# Create an overlay network called mynet
$ docker network create -d overlay mynet
a59umzkdj2r0ua7x8jxd84dhr
# Create myservice with 2 replicas as part of that network
$ docker service create --network mynet --name myservice --replicas 2 busybox ping localhost
8t5r8cr0f0h6k2c3k7ih4l6f5
# Get the VIP that was created for that service
$ docker service inspect myservice
...
"VirtualIPs": [
{
"NetworkID": "a59umzkdj2r0ua7x8jxd84dhr",
"Addr": "10.0.0.3/24"
},
]
Note

DNS round robin (DNS RR) load balancing is another load balancing option for services (configured with --endpoint-mode). In DNS RR mode, a VIP is not created for each service. The Docker DNS server resolves the service name to individual container IPs in round robin fashion.
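As a sketch of the difference between the two modes, the following commands create one service in each endpoint mode. Network, service, and image names are illustrative, and the commands require an initialized swarm:

```shell
# Illustrative sketch (requires an initialized swarm and an overlay network).
# Default VIP mode: resolving "vip-svc" returns one stable virtual IP.
docker service create --network mynet --name vip-svc --replicas 2 nginx

# DNS round robin mode: resolving "rr-svc" returns the task IPs in turn,
# so the client (or its resolver) sees a different task IP per lookup.
docker service create --network mynet --name rr-svc --replicas 2 \
  --endpoint-mode dnsrr nginx
```

DNS RR mode is useful when an external load balancer or client library wants to see the individual task IPs rather than a single VIP.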
You can expose services externally by using the --publish
flag when
creating or updating the service. Publishing ports in Docker swarm mode
means that every node in your cluster is listening on that port, but
what happens if the service’s task isn’t on the node that is listening
on that port?
This is where the routing mesh comes into play. The routing mesh combines ipvs and iptables to create a powerful cluster-wide transport-layer (L4) load balancer. It allows all the swarm nodes to accept connections on the services' published ports. When any swarm node receives traffic destined to the published TCP/UDP port of a running service, it forwards the traffic to the service's VIP using a pre-defined overlay network called ingress. The ingress network behaves similarly to other overlay networks, but its sole purpose is to transport mesh routing traffic from external clients to cluster services. It uses the same VIP-based internal load balancing as described in the previous section.
Once you launch services, you can create an external DNS record for your applications and map it to any or all Docker swarm nodes. You do not need to worry about where your container is running, as all nodes in your cluster appear as one with the routing mesh feature.
# Create an overlay network called appnet
$ docker network create -d overlay appnet
# Create a service with two replicas and publish port 8000 on the cluster
$ docker service create --name app --replicas 2 --network appnet --publish 8000:80 nginx
This diagram illustrates how the routing mesh works:

- The app service is published on port 8000.
- The routing mesh exposes port 8000 on each host in the cluster.
- Traffic destined for the service can arrive at any node; it is then routed across the ingress overlay network to a healthy service replica.

The swarm mode routing mesh is great for transport-layer routing. It routes to services using the service's published ports. But what if you wanted to route traffic to services based on hostname instead? Swarm Layer 7 Routing (Interlock) is a feature that enables service discovery on the application layer (L7). This Layer 7 routing extends the swarm mode routing mesh by adding application layer capabilities, such as inspecting the HTTP header. The Interlock and swarm mode routing meshes are used together for flexible and robust service delivery. Interlock allows each service to be accessible via a DNS label passed to the service. As the service scales horizontally and more replicas are added, requests to the service are load balanced round robin as well.
The Interlock Proxy works by using the HTTP/1.1 header field definition. Every HTTP/1.1 request contains a Host: header. An HTTP request header can be viewed using curl:
$ curl -v docker.com
* Rebuilt URL to: docker.com/
* Trying 52.20.149.52...
* Connected to docker.com (52.20.149.52) port 80 (#0)
> GET / HTTP/1.1
> Host: docker.com
> User-Agent: curl/7.49.1
> Accept: */*
When using Interlock with HTTP requests, both the swarm mode routing mesh and Interlock are used in tandem. When a service is created with the com.docker.lb.hosts label, the Interlock configuration is updated to route all HTTP requests containing the Host: header specified in the com.docker.lb.hosts label to the VIP of the newly created service. Since Interlock runs as a service, it is accessible on any node in the cluster via the configured published port.
The following is an overview diagram to show how the swarm mode routing mesh and Interlock work together.
- The ucp-interlock-proxy service is configured to listen on ports 8080 and 8443, so any request to port 8080 or 8443 on the MKE cluster hits this service first.
- The proxy then routes each request to the appropriate backend service based on the request's Host: header.

The following graphic is a closer look at the previous diagram, showing how Interlock works under the hood.
- External traffic arrives via the swarm mode routing mesh on the ingress network to the Interlock Proxy service's published port.
- The core service is called ucp-interlock. This listens to the Docker remote API for events and configures an upstream service that is accessed by another service called ucp-interlock-extension.
- The extension service is called ucp-interlock-extension. This service queries the core ucp-interlock service and uses the response information from that service to generate the configuration file appropriate for the proxy service called ucp-interlock-proxy. The configuration file is generated in the form of a Docker config object, which is used by the proxy service.
- The proxy service is called ucp-interlock-proxy. This is the reverse proxy and is responsible for handling the actual requests for application services. The proxy uses the configuration generated by the corresponding extension service ucp-interlock-extension.
- The ucp-interlock-proxy service receives the TCP packet and inspects the HTTP header.
- Service labels com.docker.lb.hosts are checked to see if they match the HTTP Host: header.
- If the Host: header and a service label match, then the value of the label com.docker.lb.port is queried for that service. This indicates which port the ucp-interlock-proxy should use to access the application service.

The main difference between the Interlock Proxy and the swarm mode routing mesh is that the Interlock Proxy is intended to be used only for HTTP traffic at the application layer, while the swarm mode routing mesh works at a lower level, on the transport layer.
Deciding which to use depends on the application. If the application is intended to be publicly accessible and is an HTTP service, then the Interlock Proxy could be a good fit. If mutual TLS is required for the backend application, then using the transport layer would probably be preferred.
Another advantage of using the Interlock Proxy is that less configuration is required for traffic to be routed to the service. Often, only a DNS record is needed along with setting the label on the service. If a wildcard DNS entry is used, then no configuration outside of setting the service label is necessary. In many organizations, access to load balancers and DNS is restricted. Being able to control requests to applications with just a service label can empower developers to quickly iterate over changes. With the swarm mode routing mesh, any frontend load balancer can be configured to send traffic to the service's published port.
The following diagram shows an example with wildcard DNS:
The Interlock Proxy can be enabled from the MKE web console.
Once enabled, MKE creates three services on the swarm cluster: ucp-interlock, ucp-interlock-extension, and ucp-interlock-proxy. The ucp-interlock-proxy service is responsible for routing traffic to the specified container based on the HTTP Host: header. Since the Interlock Proxy is a swarm mode service, every node in the MKE cluster can route traffic to it. By default, the Interlock Proxy service exposes ports 8080 and 8443 cluster-wide, and any request on ports 8080 or 8443 to any node in the cluster is sent to the Interlock Proxy service.
The Interlock Proxy uses one or more overlay networks to communicate with the backend application services. To communicate with, and consequently forward requests to, an application's frontend service, the Interlock Proxy needs to share a network with that service. This is accomplished by setting the label com.docker.lb.network to the name of the network the Interlock Proxy service should attach to for upstream connectivity. This action does not require administrator-level access within MKE.

This configuration also allows isolation between frontend services using the Interlock Proxy, since the exposed application services do not share a common network with other similarly exposed services.
There are three requirements that services must satisfy to use the Interlock Proxy:

- The service must be attached to an overlay network that the Interlock Proxy can also attach to, optionally named with the label com.docker.lb.network.
- The internal upstream port must be specified with the label com.docker.lb.port in the service.
- The service must carry the label com.docker.lb.hosts to specify the host (or FQDN) served by the service. Multiple hosts can optionally be configured using a comma-separated list.

This section covers how to configure DNS for services using the Interlock Proxy. To use the Interlock Proxy, a DNS record for the service needs to point to the MKE cluster. This can be accomplished in a variety of ways because of the flexibility that the swarm mode routing mesh provides.
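A service satisfying the three requirements above could be created as follows. This is an illustrative sketch: the network, service, image, and domain names are placeholders, and the commands require a swarm cluster with Interlock enabled:

```shell
# Illustrative only: names, image, and domain are placeholders.
docker network create -d overlay demo-net

docker service create \
  --name demo-web \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.port=8080 \
  nginx
# com.docker.lb.network would only be needed if this service were
# attached to multiple overlay networks.
```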
If a service needs to be publicly accessible for requests to foo.example.com, then the DNS record for that service can be configured in one of the following ways:

- Point the DNS record at a single node in the cluster. All requests for foo.example.com will get routed through that node to the Interlock Proxy.
- Point the DNS record at several (or all) nodes in the cluster, for example with round-robin DNS. Requests for foo.example.com will get routed through the Interlock Proxy regardless of which node receives them.
- Place an external load balancer in front of the cluster nodes, and configure DNS for foo.example.com to point to the external load balancer.

Which nodes should be used to route traffic: managers or workers? There are a few ways to approach that question.
Regardless of which type of instance your frontend load balancer is directing traffic to, it’s important to make sure the instances have an adequate network connection.
The following sections cover various use cases and the deployment syntax for the Interlock Proxy for HTTP routing, logging, monitoring, and setting up of secure application clusters (also known as service clusters).
For services to be published using the Interlock Proxy, they must contain, among other labels, at least two labels whose keys are com.docker.lb.hosts and com.docker.lb.port.

- The value of the label com.docker.lb.hosts should be the host that the service should serve. Optionally, it can also be a comma-separated list of the hosts that the service should serve.
- The value of the label com.docker.lb.port should contain the port to use for internal upstream communication with the service. Note that this port on the service need not be published externally.
- Optionally, the label com.docker.lb.network can be set to the name of the network that the Interlock Proxy service needs to attach to for upstream connectivity. This label is required only if the service to be published using the Interlock Proxy is attached to multiple overlay networks.

To monitor the Interlock Proxy from a frontend load balancer, set the load balancer to monitor the exposed Interlock Proxy ports on the cluster using a TCP health check. If the Interlock Proxy is configured to listen on the default ports of 8080 and 8443, then the frontend load balancer simply needs to perform a TCP health check on all nodes in its pool.
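The check a frontend load balancer performs can be emulated from the command line. This is a minimal sketch, assuming the default ports and a placeholder node name (node1.example.com); a real load balancer performs the equivalent TCP check natively:

```shell
# Minimal TCP health probe sketch; node1.example.com is a placeholder.
probe() {
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 on $1 is up"
  else
    echo "port $2 on $1 is down"
  fi
}

probe node1.example.com 8080
probe node1.example.com 8443
```

Any node reporting "down" on either port should be removed from the load balancer pool until it recovers.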
This section discusses a few usage considerations with regards to Interlock Proxy running in high-availability mode.
The ucp-interlock-proxy service can be scaled up to have more replicas, and those replicas can be constrained to only those nodes that have high-performance network interfaces. An additional benefit of this architecture is improved security, since it keeps application traffic from routing through the managers.
The following steps are needed to accomplish this design:

Add a node label to the high-performance nodes to identify them:
$ docker node update --label-add nodetype=loadbalancer <node>
Constrain the Interlock Proxy service tasks to run only on the high-performance nodes using the node labels. This is done by updating the ucp-interlock service configuration to deploy the Interlock Proxy service with the updated constraints in the ProxyConstraints array, as explained below:
Retrieve the configuration that is currently being used for the
ucp-interlock
service and save it to a file (config.toml
for example):
$ CURRENT_CONFIG_NAME=$(docker service inspect --format \
'{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
ucp-interlock)
$ docker config inspect --format '{{ printf "%s" .Spec.Data }}' \
$CURRENT_CONFIG_NAME > config.toml
Update the ProxyConstraints
array in the config.toml
file
as shown below:
[Extensions]
[Extensions.default]
ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux", "node.labels.nodetype==loadbalancer"]
Create a new Docker config
from the file that was just edited:
$ docker config create $NEW_CONFIG_NAME config.toml
Update the ucp-interlock
service to start using the new
configuration:
$ docker service update \
--config-rm $CURRENT_CONFIG_NAME \
--config-add source=$NEW_CONFIG_NAME,target=/config.toml \
ucp-interlock
Configure the upstream load balancer to direct requests to only those high performance nodes on the Interlock Proxy ports. This ensures all traffic is directed to only these nodes.
For more information and steps, see:

- Configure Layer 7 routing for production
- Configure Layer 7 routing service
This section explains several types of applications, using all of the available networking modes for the Interlock Proxy.
To run through these examples showcasing service discovery and load balancing, the following are required:
A Docker client that has the MKE client bundle loaded and communicating with the MKE cluster.
DNS pointing to a load balancer sitting in front of your MKE cluster. If no load balancer can be used, then point entries in your local hosts file at a host in your MKE cluster. If connecting directly to a host in your MKE cluster, connect over the published Interlock Proxy ports (8080 and 8443 by default).
Note: The repository for the sample application can be found on GitHub.
Consider an example standard 3-tier application that showcases service discovery and load balancing in Docker EE.
To deploy the application stack, run these commands with the MKE client bundle loaded:
$ wget https://raw.githubusercontent.com/dockersuccess/counter-demo/master/interlock-docker-compose.yml
$ DOMAIN=<domain-to-route> docker stack deploy -c interlock-docker-compose.yml http-example
Then access the example application at http://<domain-to-route>/.
The example also demonstrates support for websockets using the label
com.docker.lb.websocket_endpoints
with its value set to /total
as shown in this section. The value can also be a comma-separated list
of endpoints to configure to be upgraded for websockets.
This is the contents of the compose file if you just want to copy/paste into the MKE UI instead:
version: "3.3"

services:
  web:
    image: dockersuccess/webserver:latest
    environment:
      app_url: app:8080
    deploy:
      replicas: 2
      labels:
        com.docker.lb.hosts: ${DOMAIN:-app.dockerdemos.com}
        com.docker.lb.port: 2015
        com.docker.lb.websocket_endpoints: /total
    networks:
      - frontend
  app:
    image: dockersuccess/counter-demo:latest
    environment:
      ENVIRONMENT: ${env:-PRODUCTION}
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
    networks:
      - frontend
      - backend
  db:
    image: redis:latest
    volumes:
      - data:/data
    networks:
      - backend

networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay

volumes:
  data:
It is also possible to deploy through the MKE UI by going to Shared
Resources -> Stacks -> Create Stack. Name the stack, change
the Mode to Swarm Services, and copy/paste the above compose
file into the open text field. Be sure to replace
${DOMAIN:-app.dockerdemos.com}
with the correct domain name when
deploying through the UI. Click on the Create button to deploy the
stack.
The Interlock Proxy polls the Docker API for changes every 3 seconds by default, so once an application is deployed, the Interlock Proxy picks up the new service, which then becomes reachable at http://<domain-to-route>.
When the application stack is deployed using the compose file shown in this section, the following happens:

- Three services are created: a service called web that is the frontend running a Caddy webserver, a service called app which contains the application logic, and another service db running a Redis database to store data.
- Two overlay networks are created: <stack-name>_frontend and <stack-name>_backend. The web and app services share the <stack-name>_frontend network, and the app and db services share the <stack-name>_backend network. In other words, the web service cannot connect to the db service directly; it needs to connect to the app service, which is the only service that can connect to the db service.
- The app service creates a DNS A record of app on the <stack-name>_frontend network. This DNS record directs to the IP address(es) of the app containers.
- The web service uses an environment variable app_url, the value of which is set to app:8080. This value points to the app service and does not need to change regardless of the stack name.
- Similarly, the db service creates a DNS A record of db on the <stack-name>_backend network.
- The app service does not need to change for every stack when accessing the Redis DB. It always connects to db, and this can be hardcoded inside the app service. This is also independent of the stack name.
- The frontend service web contains the labels com.docker.lb.hosts and com.docker.lb.port. The value for the label com.docker.lb.hosts is set to the domain(s) where the application needs to be made available. This can be conveniently set using the $DOMAIN environment variable. The value for the label com.docker.lb.port is set to the port where the web service is running, which is 2015 in this example. Because the web service is connected to a single overlay network, <stack-name>_frontend, the Interlock Proxy was able to attach itself to the <stack-name>_frontend network.
- The Interlock Proxy routes requests to the web service replicas on the configured port.
- The scale of the web service is 2 replicas, so 2 replica tasks are created. The two web service replicas are configured as upstreams. The Interlock Proxy is responsible for balancing traffic across all published service replicas.
- The application is served at the domain given by the $DOMAIN environment variable that was passed for the stack deploy.
- When accessing http://$DOMAIN in a web browser, the hit counter should increment with every request, since requests are load balanced across all of the frontend web service replicas.
- The Interlock Proxy continues to watch for com.docker.lb.* labels on any newly created or updated services.

The Interlock Proxy has the ability to route to a specific backend service based on a named cookie. For example, if your application uses a cookie named JSESSIONID as the session cookie, you can persist connections to a specific service replica task by setting the value of the label com.docker.lb.sticky_session_cookie to JSESSIONID.
Why would cookie-based persistence be used? It can reduce load on the load balancer: the load balancer picks a certain instance in the backend pool and maintains the connection instead of having to re-route on new requests. Another use case is rolling deployments. When you bring a new application server into the load balancer pool, you can avoid a "thundering herd" against the new instances; connections are instead eased onto the new instances as existing sessions expire.
In general, sticky sessions are better suited for improving cache performance and lessening the load on certain aspects of the system. If you need to hit the same backend every time because your application is not using distributed storage, then you can run into more problems down the road when swarm mode reschedules your tasks. It’s important to keep this in mind while using application cookie-based persistence.
To deploy the example service for sticky sessions, run these commands
with the MKE client bundle loaded. Note the label
com.docker.lb.sticky_session_cookie
used to indicate the cookie to
use to enable sticky sessions.
# Create an overlay network so that service traffic is isolated and secure
$ docker network create -d overlay demo
# Next create the service with the cookie to use for sticky sessions.
# Replace <domain-to-route> with a valid domain.
$ docker service create \
--name demo \
--network demo \
--detach=false \
--replicas=5 \
--label com.docker.lb.hosts=<domain-to-route> \
--label com.docker.lb.sticky_session_cookie=session \
--label com.docker.lb.port=8080 \
--env METADATA="demo-sticky" \
dockersuccess/docker-demo
Access the example application at http://<domain-to-route>
in a
browser tab or use curl
as shown below:
Note

To test using a browser, you need a DNS entry for <domain-to-route>, with a valid domain, pointing to a load balancer sitting in front of your MKE cluster. If no load balancer can be used, then point entries in your local hosts file at a host in your MKE cluster. If connecting directly to a host in your MKE cluster, you need to connect over the published Interlock Proxy ports (8080 and 8443 by default). The following examples use demo.local in lieu of <domain-to-route>.
$ curl -vs -c cookie.txt -b cookie.txt http://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
GET /ping HTTP/1.1
Host: demo.local
User-Agent: curl/7.54.0
Accept: */*
Cookie: session=1510171444496686286
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 20:04:36 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 117
< Connection: keep-alive
* Replaced cookie session="1510171444496686286" for domain demo.local, path /, expire 0
< Set-Cookie: session=1510171444496686286
< x-request-id: 3014728b429320f786728401a83246b8
< x-proxy-id: eae36bf0a3dc
< x-server-info: interlock/2.0.0 (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.5:8080
< x-upstream-response-time: 1510171476.948
<
{"instance":"9c67a943ffce","version":"0.1","metadata":"demo-sticky","request_id":"3014728b429320f786728401a83246b8"}
In the output of curl
above, the Set-Cookie
attribute from the
application is sent with subsequent requests, which are pinned to the
same instance. The same x-upstream-addr
is used for new requests.
Interlock Proxy also supports IP Hashing
. In this mode a unique hash
key is generated using the IP addresses of the source and destination.
This hash is used by the load balancer (Interlock Proxy in our case) to
allocate clients to a particular backend server.
Below is an example that uses IP hashing to enable sticky sessions. The
label to use in this case is com.docker.lb.ip_hash
with its value
set to true.
Run the following commands with the MKE bundle loaded.
$ docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
$ docker service create \
--name demo \
--network demo \
--detach=false \
--replicas=5 \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ip_hash=true \
--env METADATA="demo-sticky" \
dockersuccess/docker-demo
The Interlock Proxy has support for routing using HTTPS/SSL. Both "SSL Termination" and "SSL Passthrough" can be set up to provide different configurations for load balancing encrypted web traffic.

In SSL Termination, SSL-encrypted requests are decrypted by the Interlock Proxy (the load balancer layer), and the unencrypted request is sent to the backend servers.

In SSL Passthrough, the SSL requests are sent as-is directly to the backend servers. The requests remain encrypted, and the backend servers become responsible for the decryption. This also implies that the backend servers must have the necessary certificates and libraries to perform the decryption.
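As a hedged sketch, passthrough is selected per service with the com.docker.lb.ssl_passthrough label. The service name, network, and image below are placeholders; the backend image is assumed to serve its own certificate:

```shell
# Illustrative sketch: with ssl_passthrough the proxy forwards TLS traffic
# unterminated, so the backend (placeholder image) must terminate TLS itself.
docker service create \
  --name demo-passthrough \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8443 \
  --label com.docker.lb.ssl_passthrough=true \
  my-tls-backend:latest
```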
Secrets and certificates are almost always involved when dealing with encrypted communications. Before deploying the example application, generate the necessary certificates. You can also use one of several Certificate Authorities like Let’s Encrypt to generate the certificates.
Here are some helpful commands to generate self-signed certificates to
use with the application using openssl
.
$ openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "/C=US/ST=SomeState/L=SomeCity/O=Interlock/CN=demo.local" \
-keyout demo.local.key \
-out demo.local.cert
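Before storing the files as secrets, it can help to confirm that the certificate and key actually pair up. The check below is a self-contained sketch: it regenerates a throwaway pair with the same kind of openssl command so it can run on its own, then compares the public-key digests (substitute your real demo.local files in practice):

```shell
# Self-contained sanity check: generate a throwaway pair, then verify the
# certificate and key share the same public key before creating secrets.
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/CN=demo.local" \
  -keyout demo.local.key -out demo.local.cert 2>/dev/null

cert_hash=$(openssl x509 -in demo.local.cert -noout -pubkey | openssl sha256)
key_hash=$(openssl pkey -in demo.local.key -pubout 2>/dev/null | openssl sha256)

if [ "$cert_hash" = "$key_hash" ]; then
  echo "certificate and key match"
fi
```

If the two digests differ, the cert and key are from different pairs and the proxy would fail its TLS handshake, so catching this before creating the secrets saves a debugging round trip.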
Now that the certificates are generated, you can use the resulting files as input to create two Docker secrets using the following commands:
$ docker secret create demo.local.cert demo.local.cert
ywn8ykni6cmnq4iz64um1pj7s
$ docker secret create demo.local.key demo.local.key
e2xo036ukhfapip05c0sizf5w
The secrets are now encrypted in the cluster-wide key-value store. They are encrypted at rest and protected with TLS in transit to the nodes that need them. Secrets can only be viewed by the applications that need to use them.
Note
For more details on using Docker secrets please refer to the Reference Architecture covering Securing Docker Enterprise and Security Best Practices.
Now create an overlay network so that service traffic is isolated and secure:
$ docker network create -d overlay demo
1se1glh749q1i4pw0kf26mfx5
$ docker service create \
--name demo \
--network demo \
--label com.docker.lb.hosts=demo.local \
--label com.docker.lb.port=8080 \
--label com.docker.lb.ssl_cert=demo.local.cert \
--label com.docker.lb.ssl_key=demo.local.key \
dockersuccess/docker-demo
6r0wiglf5f3bdpcy6zesh1pzx
The Interlock Proxy detects when the service is available and publishes it. Once the tasks are running and the proxy service has been updated, the application should be available via https://demo.local.
Note

To test using a browser, you need a DNS entry for <domain-to-route> pointing to a load balancer sitting in front of your MKE cluster. If no load balancer can be used, then point entries in your local hosts file at a host in your MKE cluster. If connecting directly to a host in your MKE cluster, you would need to connect over the published Interlock Proxy ports (8080 and 8443 by default). The following examples use demo.local in lieu of <domain-to-route>.
$ curl -vsk https://demo.local/ping
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* start date: Nov 8 16:23:03 2017 GMT
* expire date: Nov 6 16:23:03 2027 GMT
* issuer: C=US; ST=SomeState; L=SomeCity; O=Interlock; CN=demo.local
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 16:26:55 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 92
< Connection: keep-alive
< Set-Cookie: session=1510158415298009207; Path=/; Expires=Thu, 09 Nov 2017 16:26:55 GMT; Max-Age=86400
< x-request-id: 4b15ab2aaf2e0bbdea31f5e4c6b79ebd
< x-proxy-id: a783b7e646af
< x-server-info: interlock/2.0.0 (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.3:8080
{"instance":"c2f1afe673d4","version":"0.1",request_id":"7bcec438af14f8875ffc3deab9215bc5"}
Since the certificate and key are stored securely within the Docker swarm, you can safely scale this service as well as the proxy service, and the Docker swarm will handle granting access to the credentials only as needed.
Below are common optimizations for production deployments.
Interlock service clusters allow Interlock to be segmented into multiple logical instances called “service clusters”, which have independently managed proxies. Application traffic only uses the proxies for a specific service cluster, allowing the full segmentation of traffic. Each service cluster only connects to the networks using that specific service cluster, which reduces the number of overlay networks to which proxies connect. Because service clusters also deploy separate proxies, this also reduces the amount of churn in LB configs when there are service updates.
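As a hedged sketch, a service is pinned to a particular service cluster with the com.docker.lb.service_cluster label; the cluster name must match an extension defined in the Interlock configuration, and all names below are illustrative:

```shell
# Illustrative: route this service only through the "us-east" service
# cluster's proxies; "us-east" must exist in the Interlock configuration.
docker service create \
  --name east-app \
  --network east-net \
  --label com.docker.lb.hosts=east.example.com \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.service_cluster=us-east \
  nginx
```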
The Interlock proxy containers connect to the overlay network of every Swarm service that they publish. Having many networks connected to Interlock adds incremental delay when Interlock updates its load balancer configuration; each network generally adds 1-2 seconds of update delay. With many networks, the Interlock update delay can leave the LB configuration out of date for too long, which can cause traffic to be dropped.
Minimizing the number of overlay networks that Interlock connects to can be accomplished in two ways:

- Reduce the number of networks. If the architecture permits it, applications can be grouped together to use the same networks.
- Use Interlock service clusters. By segmenting Interlock, service clusters also segment which networks are connected to Interlock, reducing the number of networks that each proxy is connected to.
VIP mode can be used to reduce the impact of application updates on the Interlock proxies. It utilizes the Swarm L4 load balancing VIPs instead of individual task IPs to load balance traffic to a more stable internal endpoint. This prevents the proxy LB configs from changing for most kinds of app service updates, reducing churn for Interlock. The major tradeoff is that the following features are not supported in VIP mode:

- Sticky Sessions
- Canary Deployments

The following features are still supported in VIP mode:

- Host & Context Routing
- Context Root Rewrites
- Interlock TLS Termination
- TLS Passthrough
- Service Clusters
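As a hedged sketch, VIP mode is selected per service with the com.docker.lb.backend_mode label; the service and network names below are illustrative:

```shell
# Illustrative: back the proxy upstream with the service VIP instead of
# individual task IPs, so most service updates do not change the LB config.
docker service create \
  --name demo-vip \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.backend_mode=vip \
  dockersuccess/docker-demo
```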
For additional tips on optimizing Interlock for applications, see Optimizing Interlock for applications.
The ability to scale and discover services in Docker is now easier than ever. With the service discovery and load balancing features built into Docker, engineers can spend less time creating these types of supporting capabilities on their own and more time focusing on their applications. Instead of creating API calls to set DNS for service discovery, Docker automatically handles it for you. If an application needs to be scaled, Docker takes care of adding it to the load balancer pool. By leveraging these features, organizations can deliver highly available and resilient applications in a shorter amount of time.