Install Docker Trusted Registry


DTR system requirements

Docker Trusted Registry can be installed on-premises or in the cloud. Before installing, be sure your infrastructure meets these requirements.

Hardware and Software requirements

You can install DTR on-premises or on a cloud provider. To install DTR, all nodes must:

  • Be a worker node managed by UCP (Universal Control Plane)

  • Have a fixed hostname

Minimum requirements

  • 16GB of RAM for nodes running DTR

  • 2 vCPUs for nodes running DTR

  • 10GB of free disk space
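
The minimums above can be pre-checked on a candidate node before installing. A minimal Linux-only sketch (it inspects the root filesystem; adjust the mount point if DTR data will live elsewhere):

```shell
# Quick pre-flight check against the DTR minimums
# (16GB RAM, 2 vCPUs, 10GB free disk).
ram_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
cpus=$(nproc)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "RAM: ${ram_mb} MB (want >= 16384)"
echo "vCPUs: ${cpus} (want >= 2)"
echo "Free disk: ${disk_gb} GB (want >= 10)"
```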

Ports used

When installing DTR on a node, make sure the following ports are open on that node:

| Port    | Purpose                               |
|---------|---------------------------------------|
| 80/tcp  | Web app and API client access to DTR. |
| 443/tcp | Web app and API client access to DTR. |

These ports are configurable when installing DTR.
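
Before installing, it can be worth confirming that nothing else on the node is already bound to those ports. A sketch, assuming iproute2's `ss` is available:

```shell
# Report whether anything is already listening on the default DTR ports.
for p in 80 443; do
  if ss -ltn "sport = :$p" | grep -q LISTEN; then
    echo "port $p/tcp: already in use"
  else
    echo "port $p/tcp: free"
  fi
done
```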

UCP Configuration

When installing or backing up DTR on a UCP cluster, Administrators need to be able to deploy containers on “UCP manager nodes or nodes running DTR”. This setting can be adjusted in the UCP Settings menu.

The DTR installation or backup will fail with the following error message if Administrators are unable to deploy on “UCP manager nodes or nodes running DTR”.

Error response from daemon: {"message":"could not find any nodes on which the container could be created"}

Compatibility and maintenance lifecycle

Docker Enterprise Edition is a software subscription that includes three products:

  • Docker Enterprise Engine

  • Docker Trusted Registry

  • Docker Universal Control Plane

Step-by-step DTR installation

Docker Trusted Registry (DTR) is a containerized application that runs on a swarm managed by the Universal Control Plane (UCP). It can be installed on-premises or on a cloud infrastructure.

Step 1. Validate the system requirements

Before installing DTR, make sure your infrastructure meets the DTR system requirements.

Step 2. Install UCP

DTR requires Docker Universal Control Plane (UCP) to run.


Prior to installing DTR:

  • When upgrading, upgrade UCP before DTR for each major version. For example, if you are upgrading four major versions, upgrade one major version at a time: first UCP, then DTR, then repeat for the remaining three versions.

  • Before an initial install of DTR, upgrade UCP to the most recent version.

  • Docker Engine should be updated to the most recent version before installing or updating UCP.

DTR and UCP must not be installed on the same node, due to the potential for resource and port conflicts. Instead, install DTR on worker nodes that will be managed by UCP. Note also that DTR cannot be installed on a standalone Docker Engine.

Step 3. Install DTR

  1. Once UCP is installed, navigate to the UCP web interface as an admin. Expand your profile on the left navigation pane, and select Admin Settings > Docker Trusted Registry.

  2. After you configure all the options, you should see a Docker CLI command that you can use to install DTR. Before you run the command, take note of the --dtr-external-url parameter:

    $ docker run -it --rm \
      docker/dtr:2.8.0 install \
      --dtr-external-url <dtr-external-url> \
      --ucp-node <ucp-node-name> \
      --ucp-username admin \
      --ucp-url <ucp-url>

    If you want to point this parameter to a load balancer that uses HTTP for health probes over port 80 or 443, temporarily reconfigure the load balancer to use TCP over a known open port. Once DTR is installed, you can configure the load balancer however you need to.

  3. Run the DTR install command on any node that is connected to the UCP cluster and has the Docker Engine installed. DTR is not installed on the node where you run the command; it is installed on the UCP worker defined by the --ucp-node flag.

    For example, you could SSH into a UCP node and run the DTR install command from there. Running the installation command in interactive TTY or -it mode means you will be prompted for any required additional information.

    Here are some useful options you can set during installation:

    • To install a specific version of DTR, replace 2.8.0 with your desired version in the installation command above.

    • DTR is deployed with self-signed certificates by default, so UCP might not be able to pull images from DTR. Use the --dtr-external-url <dtr-domain>:<port> optional flag during installation, or during a reconfiguration, so that UCP is automatically reconfigured to trust DTR.

    • Starting with DTR 2.7, you can enable browser authentication via client certificates at install time. This bypasses the DTR login page and hides the logout button, thereby skipping the need for entering your username and password.

  4. Verify that DTR is installed. Either:

    • Go to https://<ucp-fqdn>/manage/settings/dtr, or

    • Navigate to Admin Settings > Docker Trusted Registry from the UCP web UI. Under the hood, UCP modifies /etc/docker/certs.d for each host and adds DTR’s CA certificate. UCP can then pull images from DTR because the Docker Engine for each node in the UCP swarm has been configured to trust DTR.

  5. Reconfigure your load balancer back to your desired protocol and port.

Step 4. Check that DTR is running

  1. In your browser, navigate to the UCP web interface.

  2. Select Shared Resources > Stacks from the left navigation pane. You should see DTR listed as a stack.

  3. To verify that DTR is accessible from the browser, enter your DTR IP address or FQDN in the address bar. Since an HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR prefixed with https://, or your browser may refuse to load the web interface.
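
Because of the HSTS header, the address you hand to a browser (or a load balancer health check) should always carry the https:// scheme. A small illustrative helper (the hostname below is a hypothetical example, not from this install):

```shell
# Normalize a DTR address to the https:// form the browser needs.
normalize_dtr_url() {
  case "$1" in
    https://*) echo "$1" ;;
    http://*)  echo "https://${1#http://}" ;;
    *)         echo "https://$1" ;;
  esac
}

normalize_dtr_url dtr.example.com          # → https://dtr.example.com
normalize_dtr_url http://dtr.example.com   # → https://dtr.example.com
```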

Step 5. Configure DTR

After installing DTR, you should configure:

  • The certificates used for TLS communication

  • The storage backend to store the Docker images

Web interface

  • To update your TLS certificates, access DTR from the browser and navigate to System > General.

  • To configure your storage backend, navigate to System > Storage. If you are upgrading and changing your existing storage backend, see Switch storage backends for the recommended steps.

Command line interface

To reconfigure DTR using the CLI, refer to reconfigure.

Step 6. Test pushing and pulling

Now that you have a working installation of DTR, you should test that you can push and pull images:

  • Configure your local Docker Engine

  • Create a repository

  • Push and pull images

Step 7. Join replicas to the cluster

This step is optional.

To set up DTR for high availability, you can add more replicas to your DTR cluster. Adding more replicas allows you to load-balance requests across all replicas, and keep DTR working if a replica fails.

For high availability, you should deploy 3 or 5 DTR replicas. The replica nodes must be managed by the same UCP.
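
The choice of 3 or 5 comes from quorum math: a cluster of n replicas stays available while floor((n-1)/2) replicas are down, so an even replica count adds cost without adding failure tolerance. A quick sketch:

```shell
# Number of replica failures a cluster of n can tolerate while keeping quorum.
tolerated() { echo $(( ($1 - 1) / 2 )); }

tolerated 3   # → 1
tolerated 4   # → 1 (no better than 3)
tolerated 5   # → 2
```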

To add replicas to a DTR cluster, use the join command.

  1. Load your UCP user bundle.

  2. Run the join command.

    docker run -it --rm \
      docker/dtr:2.8.0 join \
      --ucp-node <ucp-node-name>


    The <ucp-node-name> following the --ucp-node flag is the node on which the DTR replica will be installed. This is NOT the UCP manager URL.

    When you join a replica to a DTR cluster, you need to specify the ID of a replica that is already part of the cluster. You can find an existing replica ID by going to the Shared Resources > Stacks page on UCP.

  3. Check that all replicas are running.

    In your browser, navigate to UCP’s web interface. Select Shared Resources > Stacks. All replicas should be displayed.
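
An existing replica ID can also be recovered from container names, since DTR containers carry the replica ID as a suffix (e.g. dtr-rethinkdb-<replica-id>). A sketch of extracting it with sed; the sample names below are made up and stand in for the output of `docker ps --format '{{.Names}}'` on a DTR node:

```shell
# Pull the replica ID out of DTR container names.
printf '%s\n' \
  dtr-rethinkdb-f1a2b3c4d5e6 \
  dtr-registry-f1a2b3c4d5e6 \
  ucp-agent-xyz \
  | sed -n 's/^dtr-rethinkdb-//p'
# → f1a2b3c4d5e6
```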

Installing DTR Offline

The procedure to install Docker Trusted Registry on a host is the same, whether that host has access to the internet or not.

The only difference when installing on an offline host is that instead of pulling the DTR images from Docker Hub, you use a computer that is connected to the internet to download a single package with all the images. You then copy that package to the host where you will install DTR.

Download the offline package

Use a computer with internet access to download a package with all DTR images:

$ wget <package-url> -O dtr.tar.gz

Now that you have the package in your local machine, you can transfer it to the machines where you want to install DTR.

For each machine where you want to install DTR:

  1. Copy the DTR package to that machine.

    $ scp dtr.tar.gz <user>@<host>:
  2. Use ssh to log into the hosts where you transferred the package.

  3. Load the DTR images.

    Once the package is transferred to the hosts, you can use the docker load command to load the Docker images from the tar archive:

    $ docker load -i dtr.tar.gz

Install DTR

Now that the offline hosts have all the images needed to install DTR, you can install DTR on that host.

Preventing outgoing connections

DTR makes outgoing connections to:

  • report analytics,

  • check for new versions,

  • check online licenses,

  • update the vulnerability scanning database

All of these uses of online connections are optional. You can choose to disable or not use any or all of these features on the admin settings page.

Upgrade DTR

DTR uses semantic versioning. While downgrades are not supported, Mirantis supports upgrades according to the following rules:

  • When upgrading from one patch version to another, you can skip patch versions because no data migration is performed for patch versions.

  • When upgrading between minor versions, you cannot skip versions; however, you can upgrade from any patch version of the previous minor version to any patch version of the current minor version.

  • When upgrading between major versions, upgrade one major version at a time, and upgrade to the earliest available minor version of the new major release. It is strongly recommended that you first upgrade to the latest minor/patch version available for your current major version.





| Description                          | From  | To        | Supported |
|--------------------------------------|-------|-----------|-----------|
| patch upgrade                        | x.y.z | x.y.z+1   | Yes       |
| skip patch version                   | x.y.z | x.y.z+2   | Yes       |
| patch downgrade                      | x.y.z | x.y.z-1   | No        |
| minor upgrade                        | x.y.z | x.y+1.z   | Yes       |
| skip minor version                   | x.y.z | x.y+2.z   | No        |
| minor downgrade                      | x.y.z | x.y-1.z   | No        |
| skip major version                   | x.y.z | x+2.y.z   | No        |
| major downgrade                      | x.y.z | x-1.y.z   | No        |
| major upgrade                        | x.y.z | x+1.y.z   | Yes       |
| major upgrade skipping minor version | x.y.z | x+1.y+1.z | No        |
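
The rules above can be sketched as a shell check. This is illustrative only, not a supported tool: it assumes well-formed x.y.z input and ignores the "earliest available minor version" requirement for major upgrades.

```shell
# Decide whether an upgrade from $1 to $2 is a supported DTR upgrade path.
upgrade_supported() {
  IFS=. read -r fM fm fp <<EOF
$1
EOF
  IFS=. read -r tM tm tp <<EOF
$2
EOF
  if [ "$tM" -lt "$fM" ] || { [ "$tM" -eq "$fM" ] && [ "$tm" -lt "$fm" ]; } ||
     { [ "$tM" -eq "$fM" ] && [ "$tm" -eq "$fm" ] && [ "$tp" -lt "$fp" ]; }; then
    echo No                                  # downgrades are never supported
  elif [ "$tM" -eq "$fM" ]; then
    [ $((tm - fm)) -le 1 ] && echo Yes || echo No   # patch skips OK, minor skips not
  else
    [ $((tM - fM)) -eq 1 ] && echo Yes || echo No   # one major version at a time
  fi
}

upgrade_supported 2.6.0 2.6.4   # → Yes (patch versions may be skipped)
upgrade_supported 2.5.3 2.7.0   # → No  (skips a minor version)
upgrade_supported 2.7.1 2.8.0   # → Yes (one minor version up)
```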




A few seconds of interruption may occur during the upgrade of a DTR cluster, so schedule the upgrade to take place outside of peak hours to avoid any business impacts.

2.5 to 2.6 upgrade


Upgrade Best Practices

Important changes have been made to the upgrade process that, if not correctly followed, can impact the availability of applications running on the swarm during upgrades. These constraints apply to any upgrade from a version earlier than 18.09 to version 18.09 or later.

In addition, to ensure high availability during the DTR upgrade, drain the DTR replicas and move their workloads to updated workers. This can be done by joining new workers as DTR replicas to your existing cluster and then removing the old replicas.

Minor upgrade

Before starting the upgrade, confirm that:

  • The version of UCP in use is supported by the upgrade version of DTR.

  • The DTR backup is recent.

  • Docker content trust in UCP is disabled.

  • All system requirements are met.

Step 1. Upgrade DTR to 2.7 if necessary

Confirm that you are running DTR 2.7. If this is not the case, upgrade your installation to 2.7 before proceeding.

Step 2. Upgrade DTR

Pull the latest version of DTR:

docker pull docker/dtr:2.8.0

Confirm that at least 16GB RAM is available on the node on which you are running the upgrade. If the DTR node does not have access to the internet, follow the offline installation documentation to get the images.

Once you have the latest image on your machine (and the images on the target nodes, if upgrading offline), run the upgrade command.


The upgrade command can be run from any available node, as UCP is aware of which worker nodes have replicas.

docker run -it --rm \
  docker/dtr:2.8.0 upgrade

By default, the upgrade command runs in interactive mode and prompts for any necessary information. If you are performing the upgrade on an existing replica, pass the --existing-replica-id flag.

The upgrade command will start replacing every container in your DTR cluster, one replica at a time. It will also perform certain data migrations. If anything fails or the upgrade is interrupted for any reason, rerun the upgrade command (the upgrade will resume from the point of interruption).

Metadata Store Migration

When upgrading from 2.5 to 2.6, the system will run a metadatastoremigration job following a successful upgrade. This involves migrating the blob links for your images, which is necessary for online garbage collection. With 2.6, you can log into the DTR web interface and navigate to System > Job Logs to check the status of the metadatastoremigration job.

Garbage collection is disabled while the migration is running. In the case of a failed metadatastoremigration, the system will retry twice.

If the three attempts fail, it will be necessary to manually retrigger the metadatastoremigration job. To do this, send a POST request to the /api/v0/jobs endpoint:

curl https://<dtr-external-url>/api/v0/jobs -X POST \
  -u username:accesstoken \
  -H 'Content-Type: application/json' \
  -d '{"action": "metadatastoremigration"}'

Alternatively, select API from the bottom left navigation pane of the DTR web interface and use the Swagger UI to send your API request.

Patch upgrade

A patch upgrade changes only the DTR containers and is always safer than a minor version upgrade. The command is the same as for a minor upgrade.

DTR cache upgrade

If you have previously deployed a cache, be sure to upgrade the node dedicated for your cache to keep it in sync with your upstream DTR replicas. This prevents authentication errors and other strange behaviors.

Download the vulnerability database

After upgrading DTR, it is necessary to redownload the vulnerability database.

Uninstalling DTR

Uninstalling DTR can be done by simply removing all data associated with each replica. To do that, you just run the destroy command once per replica:

docker run -it --rm \
  docker/dtr:2.8.0 destroy

You will be prompted for the UCP URL, UCP credentials, and which replica to destroy.