Specify a routing mode¶
You can set each service to use either the task or the VIP backend routing mode. Task mode is the default and is used if the com.docker.lb.backend_mode label is not specified or if it is set to task.
Set the routing mode to VIP¶
Apply the following label to set the routing mode to VIP:
com.docker.lb.backend_mode=vip
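If the service already exists, you can switch modes without recreating it. As a minimal sketch, assuming an existing Interlock-published service named demo, add the label with docker service update:
docker service update --label-add com.docker.lb.backend_mode=vip demo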
The following two updates require a proxy reconfiguration, because they create or remove a service VIP:
Adding or removing a network on a service
Deploying or deleting a service
Note
The following is a non-exhaustive list of application events that do not require proxy reconfiguration in VIP mode:
Increasing or decreasing the number of service replicas (see the scaling example after this list)
Deploying a new image
Updating a configuration or secret
Adding or removing a label
Adding or removing an environment variable
Rescheduling a failed application task
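For example, scaling a published service up or down does not change the proxy configuration in VIP mode, because the service VIP stays the same. This sketch assumes a service named demo that has already been published through Interlock, as in the VIP example later in this section:
docker service scale demo=8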
Publish a default host service¶
The following example publishes a service to be a default host. The service responds whenever a request is made to an unconfigured host.
Create an overlay network to isolate and secure the service traffic:
docker network create -d overlay demo
Example output:
1se1glh749q1i4pw0kf26mfx5
Create the initial service:
docker service create \
    --name demo-default \
    --network demo \
    --detach=false \
    --replicas=1 \
    --label com.docker.lb.default_backend=true \
    --label com.docker.lb.port=8080 \
    ehazlett/interlock-default-app
Interlock detects when the service is available and publishes it. After tasks are running and the proxy service is updated, the application is available at any URL that is not configured.
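To verify the default backend, send a request with a host header that is not configured anywhere. This sketch assumes the proxy is reachable at 127.0.0.1 on port 80, as in the curl example later in this section, and uses a made-up host name:
curl -vs -H "Host: unconfigured.example.local" http://127.0.0.1/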
Publish a service using the VIP backend mode¶
Create an overlay network to isolate and secure the service traffic:
docker network create -d overlay demo
Example output:
1se1glh749q1i4pw0kf26mfx5
Create the initial service:
docker service create \
    --name demo \
    --network demo \
    --detach=false \
    --replicas=4 \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --label com.docker.lb.backend_mode=vip \
    --env METADATA="demo-vip-1" \
    mirantiseng/docker-demo
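Optionally, confirm that all four replicas are running before testing:
docker service ps demo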
Interlock detects when the service is available and publishes it.
After tasks are running and the proxy service is updated, the application is available at http://demo.local:
curl -vs -H "Host: demo.local" http://127.0.0.1/ping
Example output:
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to demo.local (127.0.0.1) port 80 (#0)
> GET /ping HTTP/1.1
> Host: demo.local
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.6
< Date: Wed, 08 Nov 2017 20:28:26 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 120
< Connection: keep-alive
< Set-Cookie: session=1510172906715624280; Path=/; Expires=Thu, 09 Nov 2017 20:28:26 GMT; Max-Age=86400
< x-request-id: f884cf37e8331612b8e7630ad0ee4e0d
< x-proxy-id: 5ad7c31f9f00
< x-server-info: interlock/2.0.0-development (147ff2b1) linux/amd64
< x-upstream-addr: 10.0.2.9:8080
< x-upstream-response-time: 1510172906.714
<
{"instance":"df20f55fc943","version":"0.1","metadata":"demo","request_id":"f884cf37e8331612b8e7630ad0ee4e0d"}
Using VIP mode causes Interlock to use the virtual IPs of the service for load balancing rather than using each task IP.
Inspect the service to see the VIPs, as in the following example:
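One way to do this is with docker service inspect, assuming the service name demo used above; the relevant section of the output is Endpoint:
docker service inspect demo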
"Endpoint": { "Spec": { "Mode": "vip" }, "VirtualIPs": [ { "NetworkID": "jed11c1x685a1r8acirk2ylol", "Addr": "10.0.2.9/24" } ] }
In this example, Interlock configures a single upstream for the host using the IP 10.0.2.9. Interlock skips further proxy updates as long as there is at least one replica for the service, as the only upstream is the VIP.
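Because the VIP is the only upstream, repeated requests report the same x-upstream-addr regardless of which task serves them. A quick way to observe this, assuming the same host and port as the earlier curl example:
for i in 1 2 3 4; do
  curl -si -H "Host: demo.local" http://127.0.0.1/ping | grep -i x-upstream-addr
done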