With Docker Enterprise, you can enable physical isolation of resources by
organizing nodes into collections and granting Scheduler
access for
different users. To control access to nodes, move them to dedicated collections
where you can grant access to specific users, teams, and organizations.
In this example, a team gets access to a node collection and a resource collection, and MKE access control ensures that the team members cannot view or use swarm resources that aren’t in their collection.
Note
You need a Docker Enterprise license and at least two worker nodes to complete this example.
To isolate cluster nodes:

1. Create an Ops team and assign a user to it.
2. Create a /Prod collection for the team’s nodes.
3. Assign the worker nodes to the /Prod collection.
4. Grant the Ops team access to its collection.

In the web UI, navigate to the Organizations & Teams page to create a team named “Ops” in your organization. Add a user who is not an MKE administrator to the team.
In this example, the Ops team uses an assigned group of nodes, which it accesses through a collection. Also, the team has a separate collection for its resources.
Create two collections: one for the team’s worker nodes and another for the team’s resources.
You’ve created two new collections. The /Prod collection is for the worker nodes, and the /Prod/Webserver sub-collection is for access control to an application that you’ll deploy on the corresponding worker nodes.

By default, worker nodes are located in the /Shared collection.
Worker nodes that are running MSR are assigned to the /System collection. To control access to the team’s nodes, move them to a dedicated collection.

Move a worker node by changing the value of its access label key, com.docker.ucp.access.label, to a different collection.
If a worker node is in the /System collection, click another worker node, because you can’t move nodes that are in the /System collection. By default, worker nodes are assigned to the /Shared collection.

In the node’s configuration, find the com.docker.ucp.access.label key and change its value from /Shared to /Prod. The node is now in the /Prod collection.

Note
If you don’t have a Docker Enterprise license, you will get the following error message when you try to change the access label: “Nodes must be in either the shared or system collection without a license.”
You need two grants to control access to nodes and container resources:

- Grant the Ops team the Restricted Control role for the /Prod/Webserver resources.
- Grant the Ops team the Scheduler role against the nodes in the /Prod collection.

Create two grants for team access to the two collections:
Create the first grant for Restricted Control against the /Prod/Webserver collection. The same steps apply for the nodes in the /Prod collection, except that there you grant Scheduler access to the nodes in the /Prod collection.

The cluster is set up for node isolation. Users with access to nodes in the /Prod collection can deploy Swarm services and Kubernetes apps, and their workloads won’t be scheduled on nodes that aren’t in the collection.
When a user deploys a Swarm service, MKE assigns its resources to the user’s default collection. From the target collection of a resource, MKE walks up the ancestor collections until it finds the highest ancestor that the user has Scheduler access to. Tasks are scheduled on any nodes in the tree below this ancestor. In this example, MKE assigns the user’s service to the /Prod/Webserver collection and schedules tasks on nodes in the /Prod collection.
As a user on the Ops team, set your default collection to /Prod/Webserver.

Deploy a service. It is deployed automatically to worker nodes in the /Prod collection: all resources are deployed under the user’s default collection, /Prod/Webserver, and the containers are scheduled only on the nodes under /Prod.
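The deployment step above can be sketched as a Compose stack file deployed by the Ops user. The service name, image, and replica count are illustrative assumptions, not part of the example:

```
# Hypothetical stack file, deployed as the Ops team user.
# No access label is set: MKE assigns the service to the user's
# default collection, /Prod/Webserver, and schedules its tasks
# only on nodes under /Prod.
version: "3.7"
services:
  webserver:
    image: nginx:alpine
    deploy:
      replicas: 2
```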
Another approach is to use a grant instead of changing the user’s default collection. An administrator can create a grant for a role that has the Service Create permission against the /Prod/Webserver collection or a child collection. In this case, the user sets the value of the service’s access label, com.docker.ucp.access.label, to the new collection or one of its children that has a Service Create grant for the user.
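A minimal sketch of this variant, assuming the same illustrative service: here the access label is set explicitly as a service label instead of relying on the user’s default collection.

```
# Hypothetical stack file: the deploy-level label places the
# service in the /Prod/Webserver collection, against which the
# user holds a Service Create grant.
version: "3.7"
services:
  webserver:
    image: nginx:alpine
    deploy:
      labels:
        com.docker.ucp.access.label: /Prod/Webserver
```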
Starting in Docker Enterprise Edition 2.0, you can deploy a Kubernetes workload to specific worker nodes based on its Kubernetes namespace.
An administrator must create a Kubernetes namespace to enable node isolation for Kubernetes workloads.
1. In the left pane, click Kubernetes.
2. Click Create to open the Create Kubernetes Object page.
3. In the Object YAML editor, paste the following YAML:

```
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-name
```

4. Click Create to create the namespace-name namespace.
5. Create a grant to the namespace-name namespace by giving the team a Full Control grant.

Namespaces can be associated with a node collection in either of the following ways:
The scheduler.alpha.kubernetes.io/node-selector annotation key assigns node selectors to namespaces. If you define a scheduler.alpha.kubernetes.io/node-selector: name-of-node-selector annotation key when creating a namespace, all applications deployed in that namespace are pinned to the nodes with the specified node selector.

For example, to pin all applications deployed in the ops-nodes namespace to nodes in the example-zone region:

1. Label the nodes with example-zone.
2. Add a scheduler node selector annotation as part of the namespace definition:
```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: zone=example-zone
  name: ops-nodes
```
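As a quick check of the annotation’s effect, any pod created in the ops-nodes namespace inherits the zone=example-zone node selector and is scheduled only on nodes carrying that label. The pod name and image below are illustrative assumptions:

```
# Hypothetical pod: because it is created in the ops-nodes
# namespace, the namespace annotation pins it to nodes labeled
# zone=example-zone.
apiVersion: v1
kind: Pod
metadata:
  name: selector-demo
  namespace: ops-nodes
spec:
  containers:
  - name: web
    image: nginx:alpine
```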