Configure host mode networking

By default, layer 7 routing components communicate with one another using overlay networks, but Interlock also supports host mode networking in a variety of configurations, including proxy only, Interlock only, application only, and hybrid.

When using host mode networking, you cannot use DNS service discovery, since that functionality requires overlay networking. For services to communicate, each service needs to know the IP address of the node where the other service is running.

Note

If you require service discovery, use an alternative to DNS such as Registrator.

The following is a high-level overview of how to use host mode instead of overlay networking:

  1. Update the ucp-interlock configuration.

  2. Deploy your Swarm services.

  3. Configure proxy services.

If you have not already done so, configure the layer 7 routing solution for production with the ucp-interlock-proxy service replicas running on their own dedicated nodes.

Update the ucp-interlock configuration

  1. Update the PublishMode key in the ucp-interlock service configuration so that it uses host mode networking (a sketch of the full export-edit-create workflow follows these steps):

    PublishMode = "host"
    
  2. Update the ucp-interlock service to use the new Docker configuration so that it starts publishing its port on the host:

    docker service update \
    --config-rm $CURRENT_CONFIG_NAME \
    --config-add source=$NEW_CONFIG_NAME,target=/config.toml \
    --publish-add mode=host,target=8080 \
    ucp-interlock
    

    The ucp-interlock and ucp-interlock-extension services are now communicating using host mode networking.
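
The following is a minimal sketch of the export-edit-create workflow referenced in step 1. It assumes the standard UCP config naming; the new configuration name shown is only an example, and because configuration objects are immutable, the edited file must be stored under a new name:

    # Find the configuration object currently attached to the ucp-interlock service.
    CURRENT_CONFIG_NAME=$(docker service inspect \
      --format '{{ (index .Spec.TaskTemplate.ContainerSpec.Configs 0).ConfigName }}' \
      ucp-interlock)

    # Export its contents to a local file for editing.
    docker config inspect --format '{{ printf "%s" .Spec.Data }}' \
      $CURRENT_CONFIG_NAME > config.toml

    # Edit config.toml and set PublishMode = "host", then store the result
    # as a new configuration object. The name below is an example.
    NEW_CONFIG_NAME="com.docker.ucp.interlock.conf-2"
    docker config create $NEW_CONFIG_NAME config.toml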

Deploy Swarm services

This section describes how to deploy an example Swarm service on an eight-node cluster using host mode networking to route traffic without using overlay networks. The cluster has three manager nodes and five worker nodes, with two workers configured as dedicated ingress cluster load balancer nodes that will receive all application traffic.

This example does not cover the actual infrastructure deployment, and assumes you have a typical Swarm cluster created with docker swarm init and docker swarm join on the nodes.

  1. Download and configure the client bundle.

  2. Deploy an example Swarm demo service that uses host mode networking:

    docker service create \
    --name demo \
    --detach=false \
    --label com.docker.lb.hosts=app.example.org \
    --label com.docker.lb.port=8080 \
    --publish mode=host,target=8080 \
    --env METADATA="demo" \
    mirantiseng/docker-demo
    

    Because no published port is specified, Swarm allocates a random high port on the host where the service can be reached (a sketch showing how to discover it follows these steps).

  3. Test that the service works:

    curl --header "Host: app.example.org" \
    http://<proxy-address>:<routing-http-port>/ping
    
    • <proxy-address> is the domain name or IP address of a node where the proxy service is running.

    • <routing-http-port> is the port used to route HTTP traffic.

    A properly functioning service produces a response similar to the following:

    {"instance":"63b855978452", "version":"0.1", "request_id":"d641430be9496937f2669ce6963b67d6"}
    
  4. Log in to one of the manager nodes and configure the load balancer worker nodes with node labels in order to pin the Interlock Proxy service:

    docker node update --label-add nodetype=loadbalancer lb-00
    lb-00
    docker node update --label-add nodetype=loadbalancer lb-01
    lb-01
    
  5. Verify that the labels were successfully added to each node:

    docker node inspect -f '{{ .Spec.Labels }}' lb-00
    map[nodetype:loadbalancer]
    docker node inspect -f '{{ .Spec.Labels }}' lb-01
    map[nodetype:loadbalancer]
    
  6. Create a configuration object for Interlock that specifies host mode networking:

    cat << EOF | docker config create service.interlock.conf -
    ListenAddr = ":8080"
    DockerURL = "unix:///var/run/docker.sock"
    PollInterval = "3s"
    
    [Extensions]
      [Extensions.default]
        Image = "mirantis/ucp-interlock-extension:3.4.15"
        Args = []
        ServiceName = "interlock-ext"
        ProxyImage = "mirantis/ucp-interlock-proxy:3.4.15"
        ProxyArgs = []
        ProxyServiceName = "interlock-proxy"
        ProxyConfigPath = "/etc/nginx/nginx.conf"
        ProxyReplicas = 1
        PublishMode = "host"
        PublishedPort = 80
        TargetPort = 80
        PublishedSSLPort = 443
        TargetSSLPort = 443
        [Extensions.default.Config]
          User = "nginx"
          PidPath = "/var/run/proxy.pid"
          WorkerProcesses = 1
          RlimitNoFile = 65535
          MaxConnections = 2048
    EOF
    oqkvv1asncf6p2axhx41vylgt
    
  7. Create the Interlock service using host mode networking (a verification sketch follows these steps):

    docker service create \
    --name interlock \
    --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
    --constraint node.role==manager \
    --publish mode=host,target=8080 \
    --config src=service.interlock.conf,target=/config.toml \
    mirantis/ucp-interlock:3.4.15 -D run -c /config.toml
    sjpgq7h621exno6svdnsvpv9z
    
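To find the random high port allocated for the demo service in step 2, first locate the node running its task, then list the container port mappings on that node. A minimal sketch, assuming you can run commands against the individual nodes:

    # Show which node each demo task landed on.
    docker service ps --format '{{ .Name }} {{ .Node }} {{ .CurrentState }}' demo

    # On that node, list the port mappings; with mode=host publishing,
    # target port 8080 maps to a random high port such as 32768.
    docker ps --filter name=demo --format '{{ .Names }}: {{ .Ports }}'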
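
After step 7, the interlock service reads the configuration object and launches the extension and proxy services it names (interlock-ext and interlock-proxy in the configuration above). A quick verification sketch:

    # All three Interlock services should appear once startup completes.
    docker service ls --filter name=interlock

    # If they do not, the core service logs are the first place to look.
    docker service logs interlock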

Configure proxy services

Use the node labels you applied earlier to constrain the Interlock Proxy service to the load balancer worker nodes.

  1. From a manager node, pin the proxy services to the load balancer worker nodes (a placement-verification sketch follows these steps):

    docker service update \
    --constraint-add node.labels.nodetype==loadbalancer \
    interlock-proxy
    
  2. Deploy the application (if the demo service from the earlier example is still running, remove it first with docker service rm demo, because service names must be unique):

    docker service create \
    --name demo \
    --detach=false \
    --label com.docker.lb.hosts=demo.local \
    --label com.docker.lb.port=8080 \
    --publish mode=host,target=8080 \
    --env METADATA="demo" \
    mirantiseng/docker-demo
    

    This runs the service using host mode networking. Each task for the service is assigned a random high port, such as 32768, and is reached through the node IP address.

  3. Make a request and inspect the response headers to verify that the proxy connects to the task through the node IP address and high port, shown in the x-upstream-addr header:

    curl -vs -H "Host: demo.local" http://127.0.0.1/ping
    

    Example of system response:

    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
    > GET /ping HTTP/1.1
    > Host: demo.local
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.6
    < Date: Fri, 10 Nov 2017 15:38:40 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 110
    < Connection: keep-alive
    < Set-Cookie: session=1510328320174129112; Path=/; Expires=Sat, 11 Nov 2017 15:38:40 GMT; Max-Age=86400
    < x-request-id: e4180a8fc6ee15f8d46f11df67c24a7d
    < x-proxy-id: d07b29c99f18
    < x-server-info: interlock/2.0.0-preview (17476782) linux/amd64
    < x-upstream-addr: 172.20.0.4:32768
    < x-upstream-response-time: 1510328320.172
    <
    {"instance":"897d3c7b9e9c","version":"0.1","metadata":"demo","request_id":"e4180a8fc6ee15f8d46f11df67c24a7d"}
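
To confirm that the constraint from step 1 took effect, check where the proxy tasks are now scheduled. A minimal sketch, using the node labels added earlier; each task should report lb-00 or lb-01:

    # List each interlock-proxy task with the node it is running on.
    docker service ps --format '{{ .Name }} {{ .Node }} {{ .CurrentState }}' interlock-proxy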