Introduction

This documentation describes how to deploy and operate Mirantis Secure Registry (MSR). It is intended to help operators understand the core concepts of the product and provides the information necessary to deploy and operate the solution.

The information provided in this documentation set is continually improved and amended based on feedback and requests from MSR users.

Product Overview

Mirantis Secure Registry (MSR) is a solution that enables enterprises to store and manage their container images on-premises or in their virtual private clouds. Built-in security enables you to verify and trust the provenance and content of your applications and ensure secure separation of concerns. Using MSR, you can meet security and regulatory compliance requirements. In addition, automated operations and integration with CI/CD speed up application testing and delivery. The most common use cases for MSR include:

Helm charts repositories

Deploying applications to Kubernetes can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources, such as pods, services, deployments, and replica sets, each of which requires a detailed YAML manifest file written by hand. With Helm charts (packages that consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files), you can install the software you need with all of its dependencies, and then upgrade and configure it, saving considerable time and effort.
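
For example, the following minimal sketch shows the Helm workflow described above; the repository URL, chart, and release names are placeholders:

# Add a chart repository (hypothetical URL) and install a chart with its dependencies
helm repo add examplerepo https://charts.example.com
helm install myapp examplerepo/myapp --set image.tag=1.0.0

# Later, upgrade the same release with new configuration
helm upgrade myapp examplerepo/myapp --set image.tag=1.1.0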

Automated development

Easily create an automated workflow where you push a commit that triggers a build on a CI provider, which pushes a new image into your registry. Then, the registry fires off a webhook and triggers deployment on a staging environment, or notifies other systems that a new image is available.

Secure and vulnerability-free images

When an industry requires applications to comply with certain security standards to meet regulatory compliance, your applications are only as secure as the images from which they run. To ensure that your images are secure and free of vulnerabilities, track your images using a binary image scanner that detects the components in the images and identifies any associated CVEs. In addition, you can run image enforcement policies to prevent vulnerable or inappropriate images from being pulled and deployed from your registry.

Reference Architecture

The MSR Reference Architecture provides comprehensive technical information on Mirantis Secure Registry (MSR), including component particulars, infrastructure specifications, and networking and volumes detail.

Introduction to MSR

Mirantis Secure Registry (MSR) is an enterprise-grade image storage solution. Installed behind a firewall, either on-premises or on a virtual private cloud, MSR provides a secure environment where users can store and manage their images.

The advantages of MSR include the following:

Image and job management

MSR has a web-based user interface used for browsing images and auditing repository events. With the web UI, you can see which Dockerfile lines produced an image and, if security scanning is enabled, a list of all of the software installed in that image and any Common Vulnerabilities and Exposures (CVEs). You can also audit jobs with the web UI.

MSR can serve as a continuous integration and continuous delivery (CI/CD) component in the building, shipping, and running of applications.

Availability

MSR is highly available through the use of multiple replicas of all containers and metadata. As such, MSR will continue to operate in the event of machine failure, thus allowing for repair.

Efficiency

MSR can reduce the bandwidth used when pulling images by caching images closer to users. In addition, MSR can clean up unreferenced manifests and layers.

Built-in access control

As with Mirantis Kubernetes Engine (MKE), MSR uses role-based access control (RBAC), which allows you to manage image access, either manually, with LDAP, or with Active Directory.

Security scanning

A security scanner is built into MSR, which you can use to discover the versions of the software in use in your images. This tool scans each layer and aggregates the results, offering a complete picture of what is being shipped as part of your stack. Most importantly, as the security scanner is kept up to date by tapping into a periodically updated vulnerability database, it is able to provide unprecedented insight into your exposure to known security threats.

Image signing

MSR ships with Notary, which allows you to sign and verify images using Docker Content Trust.

Components

Mirantis Secure Registry (MSR) is a containerized application that runs on a Mirantis Kubernetes Engine cluster. After deploying MSR, you can use your Docker CLI client to log in, push, and pull images. For high availability, you can deploy multiple MSR replicas, one on each MKE worker node.

All MSR replicas run the same set of services, and changes to the configuration of one replica are automatically propagated to the other replicas.

Installing MSR on a node starts the following containers:

dtr-api-<replica_id>
   Executes the MSR business logic, serving the MSR web application and API.

dtr-garant-<replica_id>
   Manages MSR authentication.

dtr-jobrunner-<replica_id>
   Runs cleanup jobs in the background.

dtr-nginx-<replica_id>
   Receives HTTP and HTTPS requests and proxies those requests to other MSR components. By default, the container listens to host ports 80 and 443.

dtr-notary-server-<replica_id>
   Receives, validates, and serves Content Trust metadata, and is consulted when pushing to or pulling from MSR with Content Trust enabled.

dtr-notary-signer-<replica_id>
   Performs server-side timestamp and snapshot signing for Content Trust metadata.

dtr-registry-<replica_id>
   Implements pull and push functionality for Docker images and manages the storage of images.

dtr-rethinkdb-<replica_id>
   Serves as a database for persisting repository metadata.

dtr-scanningstore-<replica_id>
   Stores security scanning data.

Important

Do not use the MSR components in your applications, as they are for internal MSR use only.
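
To see these components on a replica node, you can list the running containers whose names begin with dtr- using the standard Docker CLI:

docker ps --filter name=dtr- --format 'table {{.Names}}\t{{.Status}}'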

System requirements

Mirantis Secure Registry can be installed on-premises or in the cloud. Before installing, verify that your infrastructure meets the following requirements.

To install MSR, all nodes must:

  • Be a worker node managed by MKE (Mirantis Kubernetes Engine)

  • Have a fixed hostname

Minimum requirements:

  • 16GB of RAM for nodes running MSR

  • 4 vCPUs for nodes running MSR

  • 25GB of free disk space

Recommended production requirements:

  • 32GB of RAM for nodes running MSR

  • 4 vCPUs for nodes running MSR

  • 100GB of free disk space

Note that Windows container images are typically larger than Linux container images. For this reason, consider provisioning more local storage for Windows nodes and for MSR setups that will store Windows container images.

When the image scanning feature is used, we recommend that you have at least 32 GB of RAM. As developers and teams push images into MSR, the repository grows over time, so you should regularly inspect RAM, CPU, and disk usage on the MSR nodes, and increase the available resources whenever saturation occurs on a regular basis.
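
As a starting point for such inspections, standard tooling on an MSR node is sufficient; for example:

# Snapshot the CPU and memory usage of the MSR containers
docker stats --no-stream $(docker ps --filter name=dtr- --quiet)

# Check free disk space where the MSR volumes reside by default
df -h /var/lib/docker/volumes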

Networks

MSR creates the dtr-ol network at the time of installation. This network allows for communication between MSR components running on different nodes, for the purpose of MSR data replication.

When installing MSR on a node, make sure the following ports are open on that node:

Port      Direction   Purpose
80/tcp    in          Web app and API client access to MSR.
443/tcp   in          Web app and API client access to MSR.

You can configure these ports during MSR installation.
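
For example, assuming the --replica-http-port and --replica-https-port options described in the MSR install CLI reference, a sketch of an installation that moves MSR off the default ports:

docker run -it --rm \
  mirantis/dtr:2.9.16 install \
  --replica-http-port 8080 \
  --replica-https-port 8443 \
  <other options>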

Volumes

MSR uses these named volumes for persisting data:

dtr-ca-<replica_id>
   Root key material for the MSR root CA that issues certificates.

dtr-notary-<replica_id>
   Certificates and keys for the Notary components.

dtr-postgres-<replica_id>
   Vulnerability scan data.

dtr-registry-<replica_id>
   Docker image data, if MSR is configured to store images on the local filesystem.

dtr-rethink-<replica_id>
   Repository metadata.

dtr-nfs-registry-<replica_id>
   Docker image data, if MSR is configured to store images on NFS.

You can customize the volume driver used for these volumes by creating the volumes before installing MSR. During installation, MSR checks which volumes do not exist on the node and creates them using the default volume driver.

By default, the data for these volumes can be found at /var/lib/docker/volumes/<volume-name>/_data.
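
For example, to have the image data placed on a volume with a custom driver, you might pre-create the registry volume before running the installer; the driver name is a placeholder, and the replica ID must match the one you assign at install time:

docker volume create --driver <volume-driver> dtr-registry-<replica_id>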

Storage

By default, Mirantis Secure Registry stores images on the filesystem of the node where it is running, but you should configure it to use a centralized storage backend.

MSR supports the following storage systems:

Persistent volume

  • NFS (v3 and v4)

  • Bind mount

  • Volume

Cloud storage providers

  • Amazon S3

  • Microsoft Azure

  • Google Cloud Storage

  • Alibaba Cloud Object Storage Service

Note

Deploying MSR to Windows nodes is not supported.

MSR Web UI

MSR has a web UI where you can manage settings and user permissions.

You can push and pull images using the standard Docker CLI client or other tools that can interact with a Docker registry.

Rule engine

MSR uses a rule engine to evaluate policies, such as tag pruning and image enforcement.

The rule engine supports the following operators:

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

  • starts with

  • ends with

  • contains

  • one of

  • not one of

  • matches

  • before

  • after

Note

The matches operator conforms subject fields to a user-provided regular expression (regex). The regex for matches must follow the specification in the official Go documentation: Package syntax.
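
As a hypothetical illustration, a tag pruning rule built with the matches operator might pair the tag name field with a Go regular expression, so that every tag beginning with dev- is selected:

Field:    Tag name
Operator: matches
Value:    ^dev-.*$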

Each of the following policies uses the rule engine:

Installation Guide

Targeted to deployment specialists and QA engineers, the MSR Installation Guide provides the detailed information and procedures you need to install and configure Mirantis Secure Registry (MSR).

Pre-configure MKE

When installing or backing up MSR on an MKE cluster, administrators must be able to deploy containers on MKE manager nodes or nodes running MSR. This setting can be adjusted in the MKE Settings menu.

The MSR installation or backup will fail with the following error message if administrators are unable to deploy on MKE manager nodes or nodes running MSR:

Error response from daemon: {"message":"could not find any nodes on which the container could be created"}

See also

MSR Compatibility Matrix

Install MSR online

Mirantis Secure Registry (MSR) is a containerized application that runs on a swarm managed by Mirantis Kubernetes Engine (MKE). It can be installed on-premises or on a cloud-based infrastructure.

Prerequisite steps

  1. Verify that your infrastructure meets the MSR system requirements.

  2. Update Mirantis Container Runtime (MCR) to the latest version. For details, refer to the section of the MCR installation guide that corresponds with your operating system.

  3. Upgrade MKE to the latest version.

    Note

    MKE and MSR must be installed on different nodes, due to the potential for resource and port conflicts. Install MSR on worker nodes that will be managed by MKE. Note also that MSR cannot be installed on a standalone MCR.

Install MSR

  1. Log in to the MKE web UI as an administrator.

  2. In the left-side navigation panel, navigate to <user name> > Admin Settings > Mirantis Secure Registry.

  3. Optional. Provide an external URL for MSR.

  4. Select the MKE worker node where you want to install MSR.

  5. Optional. Enable any of the following options, as required:

    • Assign an MSR replica ID

    • Disable TLS CA certificate for MKE

    • Use a PEM-encoded TLS CA certificate for MKE

  6. A Docker CLI command used to install MSR will display. For example:

    docker run -it --rm \
      mirantis/dtr:2.9.16 install \
      --dtr-external-url <msr.example.com> \
      --ucp-node <mke-node-name> \
      --ucp-username admin \
      --ucp-url <mke-url>
    
  7. Optional. To run a load balancer that uses HTTP for health probes over port 80 or 443, temporarily reconfigure the load balancer to use TCP over a known open port, and enter the load balancer IP address as the value of --dtr-external-url. Once MSR is installed, you can reconfigure the load balancer to meet your requirements.

  8. Run the MSR install command on any node that is both connected to the MKE cluster and running MCR. Running the installation command in interactive TTY (or -it) mode will prompt you for any required additional information.

    Note

    MSR will not be installed on the node where you run the install command. MSR will be installed on the MKE worker defined by the --ucp-node flag.

    • To install a different version of MSR, replace 2.9.16 with the required version of MSR in the provided command.

    • MSR is deployed with self-signed certificates by default, so MKE might not be able to successfully pull images from MSR. Use the optional --dtr-external-url <msr-domain>:<port> flag during installation or during a reconfiguration to automatically reconfigure MKE to trust MSR.

    • You can enable browser authentication using client certificates at install time. This bypasses the MSR login page and hides the logout button, thus overriding the requirement that you log in with a user name and password.

  9. Verify that MSR is installed by logging in to the MKE web UI and then navigating to <user name> > Admin Settings > Mirantis Secure Registry. A successful installation will display the MSR fully qualified domain name (FQDN).

    Note

    MKE modifies /etc/docker/certs.d for each host and adds the MSR CA certificate. MKE can then pull images from MSR because MCR on each node in the MKE swarm has been configured to trust MSR. The resulting layout is sketched after this procedure.

  10. Optional. Reconfigure your load balancer back to your desired protocol and port.
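
For reference, the layout that MKE produces on each node resembles the following sketch, in which the MSR FQDN is a placeholder; the ca.crt file in the registry-named directory is what MCR consults when verifying the registry certificate:

ls /etc/docker/certs.d/<msr-fqdn>/
ca.crt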

Verify MSR after installation

  1. Log in to the MKE web UI.

  2. From the left-side navigation panel, select Shared Resources > Stacks. You should see MSR listed as a stack.

  3. Verify that the MSR web UI is accessible by navigating either to your MSR IP address or FQDN in a browser window.

    Note

    Be sure to prefix the IP address or FQDN with https:// or your browser may not load the web UI.

Configure MSR

  1. Configure the certificates used for TLS communication:

    1. Log in to the MSR web UI.

    2. From the left-side navigation panel, navigate to System and select the General tab.

    3. Scroll down to Domain & Proxies and select Show TLS settings.

    4. Enter your TLS information and click Save.

  2. Configure the storage back end to store your Docker images:

    1. Log in to the MSR web UI.

    2. From the left-side navigation panel, navigate to System and select the Storage tab.

    3. Configure the storage settings as required.

To configure MSR using the CLI, refer to the CLI reference documentation.

Join replicas to the cluster (optional)

To make MSR highly available, you can add additional replicas to your MSR cluster. Adding more replicas allows you to load-balance requests across all replicas, thus enabling MSR to continue working if a replica fails.

For high availability, you should deploy 3 or 5 MSR replicas. The replica nodes must be managed by the same MKE cluster.


To join replicas to your MSR cluster:

  1. Download and configure the MKE client bundle.

  2. Run the join command, as in the following example:

    docker run -it --rm \
      mirantis/dtr:2.9.16 join \
      --ucp-node <mke-node-name> \
      --ucp-insecure-tls
    

    Important

    The <mke-node-name> following the --ucp-node flag is the target node to install the MSR replica. This is not the MKE manager URL.

    When you join a replica to an MSR cluster, you need to specify the ID of a replica that is already part of the cluster. You can find an existing replica ID by navigating to the Shared Resources > Stacks page in the MKE web UI, or by inspecting the MSR container names, as sketched after this procedure.

  3. Verify that all replicas are running:

    1. Log in to the MKE web UI.

    2. Select Shared Resources > Stacks. All replicas will display.
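
As a sketch of the container-name approach, assuming default container naming, the replica ID is the suffix of any MSR container name on a node that already runs MSR:

docker ps --format '{{.Names}}' --filter name=dtr-rethinkdb
# Example output: dtr-rethinkdb-f59fbe79e4e1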

Install MSR offline

To install MSR on an offline host, you must first use a separate computer with an Internet connection to download a single package with all the images and then copy that package to the host where you will install MSR. Once the package is on the host and loaded, you can install MSR offline as described in Install MSR online.

To install MSR offline:

  1. Download the required MSR package:

    Note

    MSR 2.9.2 is discontinued and thus not available for download.

  2. Copy the MSR package to the host machine:

    scp dtr.tar.gz <user>@<host>:
    
  3. Use SSH to log in to the host where you transferred the package.

  4. Load the MSR images from the dtr.tar.gz file:

    docker load -i dtr.tar.gz
    
  5. Follow the instructions in Install MSR online.

  6. Optional. Disable outgoing connections in the MSR web UI Admin Settings. MSR makes outgoing connections for the following tasks:

    • Analytics reporting

    • New version notifications

    • Online license verification

    • Vulnerability scanning database updates

Obtain the license

After you install MSR, download your new MSR license and apply it using the MSR web UI.

Warning

Users are not authorized to run MSR without a valid license. For more information, refer to Mirantis Agreements and Terms.

To download your MSR license:

  1. Open an email from Mirantis Support with the subject Welcome to Mirantis’ CloudCare Portal and follow the instructions for logging in.

    If you did not receive the CloudCare Portal email, you likely have not yet been added as a Designated Contact and should contact your Designated Administrator.

  2. In the top navigation bar, click Environments.

  3. Click the Cloud Name associated with the license you want to download.

  4. Scroll down to License Information and click the License File URL. A new tab opens in your browser.

  5. Click View file to download your license file.

To update your license settings in the MSR web UI:

  1. Log in to your MSR instance as an administrator.

  2. In the left-side navigation panel, click Settings.

  3. On the General tab, click Apply new license. A file browser dialog displays.

  4. Navigate to where you saved the license key (.lic) file, select it, and click Open. MSR automatically updates with the new settings.

Uninstall MSR

Uninstalling MSR is a simple matter of removing all data associated with each replica. To do this, run the destroy command once per replica:

docker run -it --rm \
  mirantis/dtr:2.9.16 destroy \
  --ucp-insecure-tls

Each time you run the destroy command, the system will prompt you for the MKE URL, your MKE credentials, and the name of the replica you want to destroy.

Operations Guide

The MSR Operations Guide provides the detailed information you need to store and manage images on-premises or in a virtual private cloud, to meet security or regulatory compliance requirements.

Access MSR

Configure your Mirantis Container Runtime

By default, Mirantis Container Runtime uses TLS when pushing images to and pulling images from an image registry such as Mirantis Secure Registry (MSR).

If MSR is using the default configurations or was configured to use self-signed certificates, you need to configure your Mirantis Container Runtime to trust MSR. Otherwise, when you try to log in, push to, or pull images from MSR, you’ll get an error:

docker login msr.example.org

x509: certificate signed by unknown authority

The first step to make your Mirantis Container Runtime trust the certificate authority used by MSR is to get the MSR CA certificate. Then you configure your operating system to trust that certificate.

Configure your host
macOS

In your browser navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Then add that certificate to macOS Keychain.
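
If you prefer the command line to the Keychain Access application, a sketch using the macOS security tool, assuming the certificate was saved as msr-ca.crt:

# Add the MSR CA certificate to the system keychain as a trusted root
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain msr-ca.crt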

After adding the CA certificate to Keychain, restart Docker Desktop for Mac.

Windows

In your browser navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Open Windows Explorer, right-click the file you’ve downloaded, and choose Install certificate.

Then, select the following options:

  • Store location: local machine

  • Check Place all certificates in the following store

  • Click Browse, and select Trusted Root Certification Authorities

  • Click Finish

Learn more about managing TLS certificates.

After adding the CA certificate to Windows, restart Docker Desktop for Windows.

Ubuntu/Debian
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
RHEL/CentOS
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
Boot2Docker
  1. Log into the virtual machine with ssh:

    docker-machine ssh <machine-name>
    
  2. Create the bootsync.sh file, and make it executable:

    sudo touch /var/lib/boot2docker/bootsync.sh
    sudo chmod 755 /var/lib/boot2docker/bootsync.sh
    
  3. Add the following content to the bootsync.sh file. You can use nano or vi for this.

    #!/bin/sh
    
    cat /var/lib/boot2docker/server.pem >> /etc/ssl/certs/ca-certificates.crt
    
  4. Add the MSR CA certificate to the server.pem file:

    curl -k https://<msr-domain-name>/ca | sudo tee -a /var/lib/boot2docker/server.pem
    
  5. Run bootsync.sh and restart the Docker daemon:

    sudo /var/lib/boot2docker/bootsync.sh
    sudo /etc/init.d/docker restart
    
Log into MSR

To validate that your Docker daemon trusts MSR, try authenticating against MSR.

docker login msr.example.org

Configure your Notary client

Configure your Notary client as described in Delegations for content trust.

Use a cache

Mirantis Secure Registry can be configured to have one or more caches, which allows you to choose the cache from which to pull images, for faster download times.

If an administrator has set up caches, you can choose which cache to use when pulling images.

In the MSR web UI, navigate to your Account, and check the Content Cache options.

Once you save, your images are pulled from the cache instead of the central MSR.

Manage access tokens

You can create and distribute access tokens in MSR that grant users access at specific permission levels.

Access tokens are associated with a particular user account. They take on the permissions of that account when in use, adjusting automatically to any permissions changes that are made to the associated user account.

Note

Regular MSR users can create access tokens that adopt their own account permissions, while administrators can create access tokens that adopt the account permissions of any account they choose, including the admin account.

Access tokens are useful in building CI/CD pipelines and other integrations, as you can issue separate tokens for each integration and later deactivate or delete such tokens at any time. You can also use access tokens to generate a temporary password for a user who is locked out of their account.

Create an access token

  1. Log in to the MSR web UI as the user whose permissions you want associated with the token.

  2. In the left-side navigation panel, navigate to <user name> > Profile.

  3. Select the Access Tokens tab.

  4. Click New access token.

  5. Add a description for the new token. You can, for example, describe the purpose of the token or illustrate a use scenario.

  6. Click Create. The token will temporarily display. Once you click Done, you will never again be able to see the token.

Modify an access token

Although you cannot view the access token itself following its initial display, you can give the token a new description, deactivate it, or delete it.

To give an access token a new description:

  1. Select the View details link associated with the required access token.

  2. Enter a new description in the Description field.

  3. Click Save.

To deactivate an access token:

  1. Select View details next to the required access token.

  2. Slide the Is active toggle to the left.

  3. Click Save.

To delete an access token:

  1. Select the checkbox associated with the access token you want to delete.

  2. Click Delete.

  3. Type delete in the pop-up window and click OK.

Use an access token

You can use an access token anywhere you need an MSR password.

Examples:

  • You can pass your access token to the --password or -p option when logging in from your Docker CLI client:

    docker login dtr.example.org --username <username> --password <token>
    
  • You can pass your access token to an MSR API endpoint to list the repositories to which the associated user has access:

    curl --silent --insecure --user <username>:<token> dtr.example.org/api/v0/repositories
    

Configure MSR

Add a custom TLS certificate

Mirantis Secure Registry (MSR) services are exposed using HTTPS by default, which ensures encrypted communications between clients and your trusted registry. If you do not pass a PEM-encoded TLS certificate during installation, MSR generates a self-signed certificate, which can lead to an insecure site warning whenever you access MSR through a browser. In addition, MSR includes an HTTP Strict Transport Security (HSTS) header in all API responses, which can cause your browser not to load the MSR web UI.

You can configure MSR to use your own TLS certificates, so that it is automatically trusted by your browsers and client tools. You can also enable user authentication using the client certificates provided by your organization's Public Key Infrastructure (PKI).

You can upload your own TLS certificates and keys using the MSR web UI, or you can pass them as CLI options during installation or whenever you reconfigure your MSR instance.


To replace the server certificates using the MSR web UI:

  1. Log in at https://<msr-url>.

  2. In the left-side navigation panel, navigate to System and scroll down to Domain & Proxies.

  3. Enter your MSR domain name and upload or copy and paste the certificate information:

    Load balancer/public address
       The domain name for accessing MSR.

    TLS private key
       The server private key.

    TLS certificate chain
       The server certificate and any intermediate public certificates from your certificate authority (CA). The certificate must be valid for the MSR public address and have SANs for all addresses that are used to reach the MSR replicas, including load balancers.

    TLS CA
       The root CA public certificate.

  4. Click Save.

At this point, if you have added certificates issued by a globally trusted CA, any web browser or client tool should trust MSR. If you are using an internal CA, you must configure the client systems to trust that CA.


To replace the server certificates using the CLI:

Refer to install and reconfigure for TLS certificate options and usage information.

Enable single sign-on

MSR and MKE share users by default, but the applications have distinct web UIs that each require separate authentication. You can, however, configure MSR to use single sign-on with MKE.

Note

Once you configure MSR to use single sign-on, you must create an access token to interact with MSR using the CLI.

Enable at install time

Include --dtr-external-url <msr-url> in the MSR install command, where <msr-url> is the MSR fully qualified domain name (FQDN) or the load balancer address, if one is in use:

docker run --rm -it \
mirantis/dtr:2.9.16 install \
--dtr-external-url <msr-url> \
--dtr-cert "$(cat cert.pem)" \
--dtr-ca "$(cat dtr_ca.pem)" \
--dtr-key "$(cat key.pem)" \
--ucp-url <mke-url> \
--ucp-username <user name> \
--ucp-ca "$(cat ucp_ca.pem)"

When you navigate to the MSR web UI, you will be redirected to the MKE log in page for authentication. After authentication, you will be directed back to the MSR web UI.

Enable after install time

To enable single sign-on using the MSR web UI:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System.

  3. On the General tab, scroll down to Domains & Proxies.

  4. In the Load Balancer / Public Address field, enter the MSR FQDN or load balancer IP address, if one is in use. This is the URL where users will be redirected once they are logged in.

  5. Click Save.

  6. Scroll down to Single Sign-On and slide the toggle that is next to Automatically redirect users to MKE for login.

To enable single sign-on using the CLI:

Run the following reconfigure command:

docker run --rm -it \
mirantis/dtr:2.9.16 reconfigure \
--dtr-external-url <msr-url> \
--dtr-cert "$(cat cert.pem)" \
--dtr-ca "$(cat dtr_ca.pem)" \
--dtr-key "$(cat key.pem)" \
--ucp-url <mke-url>  \
--ucp-username <user name> \
--ucp-ca "$(cat ucp_ca.pem)"

Disable persistent cookies

By default, Mirantis Secure Registry (MSR) uses persistent cookies. Alternatively, you can switch to using session-based authentication cookies that expire when you close your browser.

To disable persistent cookies:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System.

  3. On the General tab, scroll down to Browser Cookies.

  4. Slide the Disable persistent cookies toggle to the right.

  5. Verify that persistent cookies are disabled:

    In Chrome:

    1. Log in to the MSR web UI using Chrome.

    2. Right-click any page and select Inspect.

    3. In the Developer Tools panel, navigate to Application > Cookies > https://<msr-external-url>.

    4. Verify that Expires / Max-Age is set to Session.

    In Firefox:

    1. Log in to the MSR web UI using Firefox.

    2. Right-click any page and select Inspect.

    3. In the Developer Tools panel, navigate to Storage > Cookies > https://<msr-external-url>.

    4. Verify that Expires / Max-Age is set to Session.

Disable MSR telemetry

By default, MSR automatically records and transmits data to Mirantis through an encrypted channel for monitoring and analysis purposes. The data collected provides the Mirantis Customer Success Organization with information that helps Mirantis to better understand the operational use of MSR by our customers. It also provides key feedback in the form of product usage statistics, which assists our product teams in making enhancements to Mirantis products and services.

Caution

To send MSR telemetry, the container runtime and the jobrunner container must be able to resolve api.segment.io and create a TCP (HTTPS) connection on port 443.
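
A quick way to test that connectivity from an MSR node; any HTTP status code in the output means the HTTPS connection itself succeeded:

# Print only the HTTP status code of a request to the telemetry endpoint
curl -s -o /dev/null -w '%{http_code}\n' https://api.segment.io/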

To disable telemetry for MSR:

  1. Log in to the MSR web UI as an administrator.

  2. Click System in the left-side navigation panel to open the System page.

  3. Click the General tab in the details pane.

  4. Scroll down in the details pane to the Analytics section.

  5. Toggle the Send data slider to the left.

External storage

Configure MSR image storage
Configure your storage back end

By default, MSR uses the local filesystem of the node where it is running to store your Docker images. You can configure MSR to use an external storage back end for improved performance or high availability.

If your MSR deployment has a single replica, you can continue using the local filesystem for storing your Docker images. If your MSR deployment has multiple replicas, make sure all replicas are using the same storage back end for high availability. Whenever a user pulls an image, the MSR node serving the request needs to have access to that image.

MSR supports the following storage systems:

  • Local filesystem

    • NFS

    • Bind Mount

    • Volume

  • Cloud Storage Providers

    • Amazon S3

    • Microsoft Azure

    • OpenStack Swift

    • Google Cloud Storage

Note

Some of the previous links are meant to be informative and are not representative of MSR’s implementation of these storage systems.

To configure the storage back end, log in to the MSR web interface as an admin, and navigate to System > Storage.

The storage configuration page gives you the most common configuration options, but you have the option to upload a configuration file in .yml, .yaml, or .txt format.
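
As a minimal sketch, assuming the uploaded file follows the Docker Registry storage configuration format, an S3 configuration might look as follows; the region and bucket values are placeholders:

storage:
  s3:
    region: us-east-1
    bucket: my-msr-images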

Local filesystem

By default, MSR creates a volume named dtr-registry-<replica-id> to store your images using the local filesystem. You can customize the name and path of the volume by using mirantis/dtr install --dtr-storage-volume or mirantis/dtr reconfigure --dtr-storage-volume.
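
For example, a sketch of a reconfigure that points MSR at a pre-created volume; the volume name is a placeholder:

docker run -it --rm \
  mirantis/dtr:2.9.16 reconfigure \
  --dtr-storage-volume <custom-volume-name> \
  <other options>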

Important

When running 2.6.0 to 2.6.3 (with experimental online garbage collection), there is an issue with reconfiguring MSR with --nfs-storage-url that leads to erased tags. Make sure to back up your MSR metadata before you proceed. To work around the --nfs-storage-url flag issue, manually create a storage volume on each MSR node. If MSR is already installed in your cluster, reconfigure MSR with the --dtr-storage-volume flag using your newly created volume.

If you’re deploying MSR with high availability, you need to use NFS or another centralized storage back end so that all your MSR replicas have access to the same images.

To check how much space your images are utilizing in the local filesystem, SSH into the MSR node and run:

# Find the path to the volume
docker volume inspect dtr-registry-<replica-id>

# Check the disk usage
sudo du -hs \
$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>))
NFS

You can configure your MSR replicas to store images on an NFS partition, so that all replicas can share the same storage back end.

Cloud Storage
Amazon S3

MSR supports Amazon S3 and other S3-compatible storage systems, such as MinIO.

Switching storage back ends

Switching storage back ends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection. In earlier versions, MSR would subsequently start a tag migration job to rebuild tag metadata from the file layout in the image layer store. This job has been discontinued as of DTR 2.5.x (with garbage collection) and DTR 2.6, as your storage back end could get out of sync with your MSR metadata, such as your manifests and existing repositories. As a best practice, MSR storage back ends and metadata should always be moved, backed up, and restored together.

The --storage-migrated flag in reconfigure lets you indicate the migration status of your storage data during a reconfigure. If you are not worried about losing your existing tags, you can skip the recommended steps below and perform a reconfigure.

Note

Starting with MSR 2.9.0, switching your storage back end does not initialize a new metadata store or erase your existing storage. MSR now requires the new storage back end to contain an exact copy of the prior configuration’s data. If this requirement is not met, the storage must be reinitialized using the --reinitialize-storage flag with the dtr reconfigure command, which reinitializes a new metadata store and erases your existing tags.

It is a best practice to always move, back up, and restore your storage back ends with your metadata.

Best practice for data migration
  1. Disable garbage collection by selecting “Never” under System > Garbage Collection, so blobs referenced in the backup that you create continue to exist. Make sure to keep it disabled while you’re performing the metadata backup and migrating your storage data.

  2. Back up your existing metadata.

  3. Migrate the contents of your current storage back end to the new one you are switching to. For example, upload your current storage data to your new NFS server.

  4. Restore MSR from your backup and specify your new storage back end.

  5. With MSR restored from your backup and your storage data migrated to your new back end, garbage collect any dangling blobs using the following API request:

    curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"onlinegc_blobs\" }"
    

    On success, you should get a 202 Accepted response with a job ID and other related details. This ensures that any blobs which are not referenced in your previously created backup get destroyed.

Alternative option for data migration

If you have a long maintenance window, you can skip some steps from above and do the following:

  1. Put MSR in “read-only” mode using the following API request:

    curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
    

    On success, you should get a 202 Accepted response.

  2. Migrate the contents of your current storage back end to the new one you are switching to. For example, upload your current storage data to your new NFS server.

  3. Reconfigure MSR while specifying the --storage-migrated flag to preserve your existing tags.

Regarding previous versions…

  • Make sure to perform a backup before you change your storage back end when running DTR 2.5 (with online garbage collection) and 2.6.0-2.6.3.

  • Upgrade to DTR 2.6.4 and follow best practice for data migration to avoid the wiped tags issue when moving from one NFS server to another.

Configuring MSR for S3

You can configure MSR to store Docker images on Amazon S3, or other file servers with an S3-compatible API like Cleversafe or Minio.

Amazon S3 and compatible services store files in “buckets”, and users have permissions to read, write, and delete files from those buckets. When you integrate MSR with Amazon S3, MSR sends all read and write operations to the S3 bucket so that the images are persisted there.

Create a bucket on Amazon S3

Before configuring MSR, you need to create a bucket on Amazon S3. For faster pulls and pushes, create the S3 bucket in a region that is physically close to the servers where MSR is running.

Start by creating a bucket. Then, as a best practice, create a new IAM user just for the MSR integration and apply an IAM policy that grants the user limited permissions.

This user needs permissions only to access the bucket that you’ll use to store images, with the ability to read, write, and delete files.

Here’s an example of a user policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}
Configure MSR

Once you’ve created a bucket and user, you can configure MSR to use them. In your browser, navigate to https://<msr-url> and select System > Storage.

Select the S3 option, and fill in the information about the bucket and user.

Root directory
   The path in the bucket where images are stored.

AWS Region name
   The region where the bucket is located.

S3 bucket name
   The name of the bucket in which to store the images.

AWS access key
   The access key to use to access the S3 bucket. This can be left empty if you’re using an IAM policy.

AWS secret key
   The secret key to use to access the S3 bucket. This can be left empty if you’re using an IAM policy.

Region endpoint
   The endpoint name for the region you’re using.

There are also some advanced settings.

Signature version 4 auth
   Authenticate the requests using AWS signature version 4.

Use HTTPS
   Secure all requests with HTTPS, or make requests in an insecure way.

Skip TLS verification
   Encrypt all traffic, but don’t verify the TLS certificate used by the storage back end.

Root CA certificate
   The public key certificate of the root certificate authority that issued the storage back end certificate.

Once you click Save, MSR validates the configurations and saves the changes.

Configure your clients

If you’re using a TLS certificate in your storage back end that is not globally trusted, you’ll have to configure all Mirantis Container Runtimes that push or pull from MSR to trust that certificate. When you push or pull an image, MSR redirects the requests to the storage back end, so if clients don’t trust the TLS certificates of both MSR and the storage back end, they won’t be able to push or pull images.

If you’ve configured MSR to skip TLS verification, you also need to configure all Mirantis Container Runtimes that push or pull from MSR to skip TLS verification. You do this by adding MSR to the list of insecure registries when starting Docker, as sketched below.
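
With the standard Docker daemon, this is typically done in /etc/docker/daemon.json, followed by a daemon restart; for example:

{
  "insecure-registries": ["msr.example.org"]
}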

Supported regions

MSR supports the following S3 regions:

  • us-east-1

  • us-east-2

  • us-west-1

  • us-west-2

  • eu-west-1

  • eu-west-2

  • eu-central-1

  • ap-south-1

  • ap-southeast-1

  • ap-southeast-2

  • ap-northeast-1

  • ap-northeast-2

  • sa-east-1

  • cn-north-1

  • us-gov-west-1

  • ca-central-1

Update your S3 settings on the web interface

When running 2.6.0 to 2.6.4 (with experimental online garbage collection), there is an issue with changing your S3 settings on the web interface that leads to erased metadata. Make sure to back up your MSR metadata before you proceed.

Restore MSR with S3

To restore MSR using your previously configured S3 settings, use restore with --dtr-use-default-storage to keep your metadata.
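
A sketch of such a restore, reading the metadata backup from stdin; verify the flag spellings against the restore CLI reference for your MSR version:

docker run -i --rm \
  mirantis/dtr:2.9.16 restore \
  --ucp-url <mke-url> \
  --ucp-username <user name> \
  --ucp-insecure-tls \
  --dtr-use-default-storage < msr-metadata-backup.tar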

Configuring MSR for NFS

You can configure MSR to store Docker images in an NFS directory. Starting in DTR 2.6, changing storage back ends involves initializing a new metadata store instead of reusing an existing volume. This helps facilitate online garbage collection. See the changes to NFS reconfiguration below if you have previously configured MSR to use NFS.

Before installing or configuring MSR to use an NFS directory, make sure that:

  • The NFS server has been correctly configured

  • The NFS server has a fixed IP address

  • All hosts running MSR have the correct NFS libraries installed

To confirm that the hosts can connect to the NFS server, try to list the directories exported by your NFS server:

showmount -e <nfsserver>

You should also try to mount one of the exported directories:

mkdir /tmp/mydir && sudo mount -t nfs <nfs server>:<directory> /tmp/mydir
Install MSR with NFS

One way to configure MSR to use an NFS directory is at install time:

docker run -it --rm mirantis/dtr:2.9.16 install \
  --nfs-storage-url <nfs-storage-url> \
  <other options>

Use the format nfs://<nfs server>/<directory> for the NFS storage URL. To support NFS v4, you can now specify additional options when running install with --nfs-storage-url.
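
For instance, a sketch that mounts the export with NFSv4 options, assuming the --nfs-options install flag; check the install CLI reference for your version:

docker run -it --rm mirantis/dtr:2.9.16 install \
  --nfs-storage-url nfs://nfs.example.com/registry \
  --nfs-options "rw,nfsvers=4" \
  <other options>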

When joining replicas to an MSR cluster, the replicas pick up your storage configuration, so you will not need to specify it again.

Reconfigure MSR to use NFS

You can use the --storage-migrated flag with the reconfigure CLI command to indicate the migration status of your storage data during a reconfigure.

To reconfigure MSR using an NFSv4 volume as a storage back end:

docker run --rm -it \
  mirantis/dtr:2.9.16 reconfigure \
  --ucp-url <mke_url> \
  --ucp-username <mke_username> \
  --nfs-storage-url <nfs-storage-url> \
  --async-nfs \
  --storage-migrated

To reconfigure MSR to stop using NFS storage, leave the --nfs-storage-url option blank:

docker run -it --rm mirantis/dtr:2.9.16 reconfigure \
  --nfs-storage-url ""

Set up high availability

Mirantis Secure Registry is designed to scale horizontally as your usage increases. You can add more replicas to make MSR scale to your demand and for high availability.

All MSR replicas run the same set of services and changes to their configuration are automatically propagated to other replicas.

To make MSR tolerant to failures, add additional replicas to the MSR cluster.

MSR replicas   Failures tolerated
1              0
3              1
5              2
7              3

When sizing your MSR installation for high-availability, follow these rules of thumb:

  • Don’t create an MSR cluster with just two replicas. Such a cluster won’t tolerate any failures, and you may experience performance degradation.

  • When a replica fails, the number of failures tolerated by your cluster decreases. Don’t leave that replica offline for long.

  • Adding too many replicas to the cluster might also lead to performance degradation, as data needs to be replicated across all replicas.

To have high-availability on MKE and MSR, you need a minimum of:

  • 3 dedicated nodes to install MKE with high availability

  • 3 dedicated nodes to install MSR with high availability

  • As many nodes as you need for running your containers and applications

You also need to configure the MSR replicas to share the same object storage.

Join more MSR replicas

To add replicas to an existing MSR deployment:

  1. Use ssh to log into any node that is already part of MKE.

  2. Run the MSR join command:

    docker run -it --rm \
      mirantis/dtr:2.9.16 join \
      --ucp-node <mke-node-name> \
      --ucp-insecure-tls
    

    Where --ucp-node is the hostname of the MKE node on which you want to deploy the MSR replica, and --ucp-insecure-tls tells the command to trust the certificates used by MKE.

  3. If you have a load balancer, add this MSR replica to the load balancing pool.

Remove existing replicas

To remove a MSR replica from your deployment:

  1. Use ssh to log into any node that is part of MKE.

  2. Run the MSR remove command:

    docker run -it --rm \
    mirantis/dtr:2.9.16 remove \
    --ucp-insecure-tls
    

    You will be prompted for:

    • Existing replica ID: the ID of any healthy MSR replica in the cluster

    • Replica ID: the ID of the MSR replica you want to remove; it can be the ID of an unhealthy replica

    • MKE username and password: the administrator credentials for MKE

If you’re load-balancing user requests across multiple MSR replicas, don’t forget to remove this replica from the load balancing pool.

Use a load balancer

With a load balancer, users can access MSR using a single domain name.

Once you have achieved high availability by joining multiple MSR replica nodes, you can configure a load balancer to balance user requests across those replicas. The load balancer detects when a replica fails and immediately stops forwarding requests to it, thus ensuring that the failure goes unnoticed by users.

MSR does not provide a load balancing service. You must use either an on-premises or cloud-based load balancer to balance requests across multiple MSR replicas.

Important

Additional steps are needed to use the same load balancer with both MSR and MKE. For more information, refer to Configure a load balancer in the MKE documentation.

Verify cluster health

MSR exposes several endpoints that you can use to assess the health of an MSR replica:

/_ping

Verifies whether the MSR replica is healthy. This is useful for load balancing and other automated health check tasks. This endpoint is unauthenticated.

/nginx_status

Returns the number of connections handled by the MSR NGINX front end.

/api/v0/meta/cluster_status

Returns detailed information about all MSR replicas.

You can use the unauthenticated /_ping endpoint on each MSR replica to check the health status of the replica and determine whether it should remain in the load balancing pool.
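
For example, to query a replica directly; self-signed certificates may require the -k flag:

curl -ks https://<msr-replica-ip>/_ping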

The /_ping endpoint returns a JSON object for the replica being queried that takes the following form:

{
  "Error": "<error-message>",
  "Healthy": true
}

A response of "Healthy": true, together with an HTTP 200 status code, indicates that the replica is suitable for taking requests.

An unhealthy replica will return 503 as the status code and populate "Error" with more details on any of the following services:

  • Storage container (MSR)

  • Authorization (Garant)

  • Metadata persistence (RethinkDB)

  • Content trust (Notary)

Note that the purpose of the /_ping endpoint is to check the health of a single replica. To obtain the health of every replica in a cluster, you must individually query each replica.

Load balance MSR
  1. Configure your load balancer for MSR, using the pertinent example below:

     # nginx.conf
     user  nginx;
       worker_processes  1;
    
       error_log  /var/log/nginx/error.log warn;
       pid        /var/run/nginx.pid;
    
       events {
          worker_connections  1024;
       }
    
       stream {
          upstream dtr_80 {
             server <MSR_REPLICA_1_IP>:80  max_fails=2 fail_timeout=30s;
             server <MSR_REPLICA_2_IP>:80  max_fails=2 fail_timeout=30s;
             server <MSR_REPLICA_N_IP>:80   max_fails=2 fail_timeout=30s;
          }
          upstream dtr_443 {
             server <MSR_REPLICA_1_IP>:443 max_fails=2 fail_timeout=30s;
             server <MSR_REPLICA_2_IP>:443 max_fails=2 fail_timeout=30s;
             server <MSR_REPLICA_N_IP>:443  max_fails=2 fail_timeout=30s;
          }
          server {
             listen 443;
             proxy_pass dtr_443;
          }
    
          server {
             listen 80;
             proxy_pass dtr_80;
          }
       }
    
     # haproxy.cfg
     global
          log /dev/log    local0
          log /dev/log    local1 notice
    
       defaults
             mode    tcp
             option  dontlognull
             timeout connect 5s
             timeout client 50s
             timeout server 50s
             timeout tunnel 1h
             timeout client-fin 50s
       ### frontends
       # Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
       frontend dtr_stats
             mode http
             bind 0.0.0.0:8181
             default_backend dtr_stats
       frontend dtr_80
             mode tcp
             bind 0.0.0.0:80
             default_backend dtr_upstream_servers_80
       frontend dtr_443
             mode tcp
             bind 0.0.0.0:443
             default_backend dtr_upstream_servers_443
       ### backends
       backend dtr_stats
             mode http
             option httplog
             stats enable
             stats admin if TRUE
             stats refresh 5m
       backend dtr_upstream_servers_80
             mode tcp
             option httpchk GET /_ping HTTP/1.1\r\nHost:\ <MSR_FQDN>
             server node01 <MSR_REPLICA_1_IP>:80 check weight 100
             server node02 <MSR_REPLICA_2_IP>:80 check weight 100
             server node03 <MSR_REPLICA_N_IP>:80 check weight 100
       backend dtr_upstream_servers_443
             mode tcp
             option httpchk GET /_ping HTTP/1.1\r\nHost:\ <MSR_FQDN>
             server node01 <MSR_REPLICA_1_IP>:443 weight 100 check check-ssl verify none
             server node02 <MSR_REPLICA_2_IP>:443 weight 100 check check-ssl verify none
             server node03 <MSR_REPLICA_N_IP>:443 weight 100 check check-ssl verify none
    
     # AWS Elastic Load Balancer description (JSON)
     {
          "Subnets": [
             "subnet-XXXXXXXX",
             "subnet-YYYYYYYY",
             "subnet-ZZZZZZZZ"
          ],
          "CanonicalHostedZoneNameID": "XXXXXXXXXXX",
          "CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
          "ListenerDescriptions": [
             {
                   "Listener": {
                      "InstancePort": 443,
                      "LoadBalancerPort": 443,
                      "Protocol": "TCP",
                      "InstanceProtocol": "TCP"
                   },
                   "PolicyNames": []
             }
          ],
          "HealthCheck": {
             "HealthyThreshold": 2,
             "Interval": 10,
             "Target": "HTTPS:443/_ping",
             "Timeout": 2,
             "UnhealthyThreshold": 4
          },
          "VPCId": "vpc-XXXXXX",
          "BackendServerDescriptions": [],
          "Instances": [
             {
                   "InstanceId": "i-XXXXXXXXX"
             },
             {
                   "InstanceId": "i-XXXXXXXXX"
             },
             {
                   "InstanceId": "i-XXXXXXXXX"
             }
          ],
          "DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
          "SecurityGroups": [
             "sg-XXXXXXXXX"
          ],
          "Policies": {
             "LBCookieStickinessPolicies": [],
             "AppCookieStickinessPolicies": [],
             "OtherPolicies": []
          },
          "LoadBalancerName": "ELB-MSR",
          "CreatedTime": "2017-02-13T21:40:15.400Z",
          "AvailabilityZones": [
             "us-west-2c",
             "us-west-2a",
             "us-west-2b"
          ],
          "Scheme": "internet-facing",
          "SourceSecurityGroup": {
             "OwnerAlias": "XXXXXXXXXXXX",
             "GroupName":  "XXXXXXXXXXXX"
          }
       }
    
  2. Deploy your load balancer:

    # Create the nginx.conf file, then
    # deploy the load balancer
    
    docker run --detach \
    --name dtr-lb \
    --restart=unless-stopped \
    --publish 80:80 \
    --publish 443:443 \
    --volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx:stable-alpine
    
    # Create the haproxy.cfg file, then
    # deploy the load balancer
    
    docker run --detach \
    --name dtr-lb \
    --publish 443:443 \
    --publish 80:80 \
    --publish 8181:8181 \
    --restart=unless-stopped \
    --volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
    

Configure your load balancer to:

  • Load balance TCP traffic on ports 80 and 443.

  • Not terminate HTTPS connections.

  • Not buffer requests.

  • Correctly forward the Host HTTP header.

  • Not include a timeout for idle connections, or set the timeout to more than 10 minutes.

Set up security scanning

For MSR to perform security scanning, you must have a running deployment of Mirantis Secure Registry (MSR), administrator access, and an MSR license that includes security scanning.

Before you can set up security scanning, you must verify that your Docker ID can access and download your MSR license from Docker Hub. If you are using a license associated with an organization account, verify that your Docker ID is a member of the Owners team, as only members of that team can download license files for an organization. If you are using a license associated with an individual account, no additional action is needed.

Note

To verify that your MSR license includes security scanning:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

If the Enable Scanning toggle displays, the license includes security scanning.

To learn how to obtain and install your MSR license, refer to Obtain the license.

Enable MSR security scanning
  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Slide the Enable Scanning toggle to the right.

  4. Set the security scanning mode by selecting either Online or Offline.

    • Online mode:

      Online mode downloads the latest vulnerability database from a Docker server and installs it.

      To enable online security scanning, click Sync Database now.

    • Offline mode:

      Offline mode requires that you manually perform the following steps.

      1. Download the most recent CVE database.

        Be aware that the example command specifies default values. It instructs the container to output the database file to the ~/Downloads directory and configures the volume to map from the local machine into the container. If the destination for the database is in a separate directory, you must define an additional volume. For more information, refer to the table that follows this procedure.

        docker run -it --rm \
        -v ${HOME}/Downloads:/data \
        -e CVE_DB_URL_ONLY=false \
        -e CLOBBER_FILE=false \
        -e DATABASE_OUTPUT="/data" \
        -e DATABASE_SCHEMA=3 \
        -e DEBUG=false \
        -e VERSION_ONLY=false \
        mirantis/get-dtr-cve-db:latest
        
      2. Click Select Database and open the downloaded CVE database file.

Runtime environment variable override

CLOBBER_FILE (default: false)

  Set to true to overwrite an existing file with the same database name.

CVE_DB_URL_ONLY (default: false)

  Set to true to output the CVE database URL rather than download the CVE database.

DATABASE_OUTPUT (default: /data)

  Indicates the database download directory inside the container.

DATABASE_SCHEMA (default: 3)

  Valid values:

  • 1 (DTR 2.2.5 or lower)

  • 2 (DTR 2.3.x; 2.4.x; 2.5.15 or lower; 2.6.11 or lower; 2.7.4 or lower)

  • 3 (DTR 2.5.16 or higher; 2.6.12 or higher; 2.7.5 or higher)

DEBUG (default: false)

  Set to true to execute the script with set -x.

VERSION_ONLY (default: false)

  Set to true to produce a dry run that outputs the CVE database version number but does not download the CVE database.
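
For example, the following sketch downloads the database to a hypothetical /opt/cve directory on the host, overwriting any existing file of the same name:

docker run -it --rm \
  -v /opt/cve:/data \
  -e CLOBBER_FILE=true \
  -e DATABASE_SCHEMA=3 \
  mirantis/get-dtr-cve-db:latest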

Set repository scanning mode

Two image scanning modes are available:

On push

The image is re-scanned (1) on each docker push to the repository and (2) when a user with write access clicks the Start a scan link or the Scan button.

Manual

The image is scanned only when a user with write access clicks the Start a scan link or the Scan button.

By default, new repositories are set to scan On push, and any repositories that existed before scanning was enabled are set to Manual.

To change the scanning mode for an individual repository:

  1. Verify that you have write or admin access to the repository.

  2. Navigate to the repository, and click the Settings tab.

  3. Scroll down to the Image scanning section.

  4. Select the desired scanning mode.

Update the CVE scanning database

MSR security scanning indexes the components in your MSR images and compares them against a CVE database. This database is routinely updated with new vulnerability signatures, and thus MSR must be regularly updated with the latest version to properly scan for all possible vulnerabilities. After updating the database, MSR matches the components in the new CVE reports to the indexed components in your images, and generates an updated report.

Note

MSR users with administrator access can learn when the CVE database was last updated by accessing the Security tab in the MSR System page.

Update CVE database in online mode

In online mode, MSR security scanning monitors for updates to the vulnerability database, and downloads them when available.

To ensure that MSR can access the database updates, verify that the host can access both https://license.mirantis.com and https://dss-cve-updates.mirantis.com/ on port 443 using HTTPS.
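
One way to check this reachability from the host is a quick probe with curl, for example:

# A completed TLS handshake indicates that the endpoint is reachable on
# port 443; the HTTP status code itself is not significant here.
curl -sI https://license.mirantis.com
curl -sI https://dss-cve-updates.mirantis.com/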

MSR checks for new CVE database updates every day at 3:00 AM UTC. If an update is available, it is automatically downloaded and applied, without interrupting any scans in progress. Once the update is completed, the security scanning system checks the indexed components for new vulnerabilities.

To set the update mode to online:

  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Click Online.

Your choice is saved automatically.

Note

To check immediately for a CVE database update, click Sync Database now.

Update CVE database in offline mode

When connection to the update server is not possible, you can update the CVE database for your MSR instance using a .tar file that contains the database updates.

To set the update mode to offline:

  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Select Offline.

  4. Click Select Database and open the downloaded CVE database file.

MSR installs the new CVE database and begins checking the images that are already indexed for components that match new or updated vulnerabilities.

Caches

The time needed to pull and push images is directly influenced by the distance between your users and the geographic location of your MSR deployment, as the image files must traverse the physical distance and cross multiple networks. You can, however, deploy MSR caches at different geographic locations to add greater efficiency and shorten user wait times.

With MSR caches you can:

  • Accelerate image pulls for users in a variety of geographical regions.

  • Manage user permissions from a central location.

MSR caches are transparent to your users, who continue to log in and pull images using the same MSR URL.

When MSR receives a user request, it first authenticates the request and verifies that the user has permission to pull the requested image. Assuming the user has permission, they then receive an image manifest that contains the list of image layers to pull and which directs them to pull the images from a particular cache.

When your users request image layers from the indicated cache, the cache pulls these images from MSR and maintains a copy. This enables the cache to serve the image layers to other users without having to retrieve them again from MSR.

Note

Avoid using caches if your users need to push images faster or if you want to implement region-based RBAC policies. Instead, deploy multiple MSR clusters and apply mirroring policies between them. For further details, refer to Promotion policies and monitoring.

MSR cache prerequisites

Before deploying an MSR cache in a datacenter:

  • Obtain access to the Kubernetes cluster that is running MSR in your data center.

  • Join the nodes into a cluster.

  • Dedicate one or more worker nodes for running the MSR cache.

  • Obtain TLS certificates with which to secure the cache.

  • Configure a shared storage system, if you want the cache to be highly available.

  • Configure your firewall rules to ensure that your users have access to the cache through your chosen port.

    Note

    For illustration purposes only, the MSR cache documentation details caches that are exposed on port 443/TCP using an ingress controller.

MSR cache deployment scenario

MSR caches running in different geographic locations can provide your users with greater efficiency and shorten the amount of time required to pull images from MSR.

Consider a scenario in which you are running an MSR instance that is installed in the United States, with a user base that includes developers located in the United States, Asia, and Europe. The US-based developers can pull their images from MSR quickly, however those working in Asia and Europe have to contend with unacceptably long wait times to pull the same images. You can address this issue by deploying MSR caches in Asia and Europe, thus reducing the wait time for developers located in those areas.

The described MSR cache scenario requires three datacenters:

  1. US-based datacenter, running MSR configured for high availability

  2. Asia-based datacenter, running an MSR cache that is configured to fetch images from MSR

  3. Europe-based datacenter, running an MSR cache that is configured to fetch images from MSR

For information on datacenter configuration, refer to MSR cache prerequisites.

Deploy an MSR cache with Swarm

Note

The MSR on Swarm deployment detailed herein assumes that you have a running MSR deployment and that you have provisioned multiple nodes and joined them into a swarm.

You will deploy your MSR cache as a Docker service, thus ensuring that Docker automatically schedules and restarts the service in the event of a problem.

You manage the cache configuration using a Docker configuration object and the TLS certificates using Docker secrets. This setup enables you to securely manage the configuration of the node on which the cache is running.

Prepare the cache deployment

Important

To ensure MSR cache functionality, Mirantis highly recommends that you deploy the cache on a dedicated node.

Label the cache node

To target your deployment to the cache node, you must first label that node. To do this, SSH into a manager node of the swarm within which you want to deploy the MSR cache.

docker node update --label-add dtr.cache=true <node-hostname>

Note

If you are using MKE to manage that swarm, use a client bundle to configure your Docker CLI client to connect to the swarm.
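
To confirm that the label has been applied, you can, for example, inspect the node; the command outputs true when the label is present:

docker node inspect --format '{{ index .Spec.Labels "dtr.cache" }}' <node-hostname>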

Configure the MSR cache

Following cache preparation, you will have the following file structure on your workstation:

├── docker-stack.yml
├── config.yml          # The cache configuration file
└── certs
    ├── cache.cert.pem  # The cache public key certificate
    ├── cache.key.pem   # The cache private key
    └── dtr.cert.pem    # MSR CA certificate

With the configuration detailed herein, the cache fetches image layers from MSR and keeps a local copy for 24 hours. After that, if a user requests that image layer, the cache re-fetches it from MSR.

The cache is configured to store data inside its container. If something goes wrong with the cache service, Docker automatically redeploys a new container, but previously cached data does not persist. You can customize the storage parameters if you want to store the image layers using a persistent storage back end.

Also, the cache is configured to use port 443. If you are already using that port in the swarm, update the deployment and configuration files to use another port. Remember to create firewall rules for the port you choose.

Edit the docker-stack.yml file

The docker-stack.yml file enables you to deploy the cache with a single command.

Edit the sample Docker stack file that follows to fit your environment:

version: "3.3"
services:
  cache:
    image: mirantis/dtr-content-cache:2.8.2
    entrypoint:
      - "/start.sh"
      - "/config.yml"
    ports:
      - 443:443
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.dtr.cache == true]
      restart_policy:
        condition: on-failure
    configs:
      - config.yml
    secrets:
      - dtr.cert.pem
      - cache.cert.pem
      - cache.key.pem
configs:
  config.yml:
    file: ./config.yml
secrets:
  dtr.cert.pem:
    file: ./certs/dtr.cert.pem
  cache.cert.pem:
    file: ./certs/cache.cert.pem
  cache.key.pem:
    file: ./certs/cache.key.pem
Edit the config.yml file

You configure the MSR cache using a configuration file that you mount into the container.

Edit the sample MSR cache configuration file that follows to fit your environment, entering the relevant external MSR cache, worker node, or external load balancer FQDN. Once configured, the cache fetches image layers from MSR and maintains a local copy for 24 hours. If a user requests the image layer after that period, the cache re-fetches it from MSR.

version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: '0.0.0.0:443'
  secret: generate-random-secret
  host: 'https://<cache-url>'
  tls:
    certificate: /run/secrets/cache.cert.pem
    key: /run/secrets/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url>:<msr-port>
        cas:
          - /run/secrets/dtr.cert.pem
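
The secret value shown, generate-random-secret, is a placeholder for a unique random string, which you can generate, for example, with openssl:

openssl rand -hex 32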
Create the MSR cache certificates

To deploy the MSR cache with a TLS endpoint, you must generate a TLS certificate and key from a certificate authority.

Be aware that to expose the MSR cache through a node port or a host port, you must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.

Create the MSR cache certificates:

  1. Create a private key and certificate for the cache. The following is a self-signed example suitable for testing; for production, have your certificate authority issue the certificate:

    openssl req -newkey rsa:4096 -nodes -sha256 \
      -keyout cache.key.pem \
      -x509 -days 365 -out cache.cert.pem \
      -subj "/CN=<cache-fqdn>" \
      -addext "subjectAltName = DNS:<cache-fqdn>"
    
  2. Create a directory called certs and place in it the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.

  3. Configure the cert pem files, as detailed below:

    cache.cert.pem

      Add the public key certificate for the cache. If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file.

    cache.key.pem

      Add the unencrypted private key for the cache.

    dtr.cert.pem

      The cache communicates with MSR using TLS. If you have customized MSR to use TLS certificates issued by a globally trusted certificate authority, the cache automatically trusts MSR. If, however, you are using the default MSR configuration, or MSR is using TLS certificates signed by your own certificate authority, you must configure the cache to trust MSR and edit the daemon.json file to allow insecure registries.

    1. Add the MSR CA certificate to the certs/dtr.cert.pem file:

      curl -sk https://<msr-url>/ca > certs/dtr.cert.pem
      
    2. Modify the daemon.json file to include:

      "insecure-registries" : "<msraddress:portnumber>"
      
Deploy the cache
  1. Run the following command to deploy the cache:

    docker stack deploy --compose-file docker-stack.yml dtr-cache
    
  2. Verify the successful deployment of the cache:

    docker stack ps dtr-cache
    

    Docker should display the dtr-cache stack as running.

  3. Register the cache with MSR.

    You must configure MSR to recognize the cache. Use the POST /api/v0/content_caches API to do this, by way of the MSR interactive API documentation.

    1. Access the MSR web UI.

    2. Select API docs from the top-right menu.

    3. Navigate to POST /api/v0/content_caches and click to expand it.

    4. Type the following into the body field:

      {
      "name": "region-asia",
      "host": "https://<cache-url>:<cache-port>"
      }
      
    5. Click Try it out! to make the API call.
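
    Alternatively, you can register the cache from the command line. The following curl sketch assumes administrator credentials and the same request body:

      curl --user <admin-user>:<password> \
        --request POST "https://<msr-url>/api/v0/content_caches" \
        --header "accept: application/json" \
        --header "content-type: application/json" \
        --data '{ "name": "region-asia", "host": "https://<cache-url>:<cache-port>" }'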

  4. Configure your user account.

    In the MSR web UI, navigate to your Account, click the Settings tab, and set the Content Cache option to the newly deployed cache.

    Note

    To set up user accounts for multiple users simultaneously, use the /api/v0/accounts/{username}/settings API endpoint.

    Henceforth, you will be using the cache whenever you pull images.

  5. Test the cache.

    1. Verify that the cache is functioning properly:

      1. Push an image to MSR.

      2. Verify that the cache is configured to your user account.

      3. Delete the image from your local system.

      4. Pull the image from MSR.

    2. Check the logs to verify that the cache is serving your request:

      docker service logs --follow dtr-cache_cache
      

      TLS authentication issues are the most common cache misconfigurations, including:

      • MSR not trusting the cache TLS certificates.

      • The cache not trusting MSR TLS certificates.

      • Your machine not trusting MSR or the cache.

      You can use the logs to troubleshoot cache misconfigurations.

  6. Clean up sensitive files, such as private keys for the cache, by running the following command:

    rm -rf certs
    
Deploy an MSR cache with Kubernetes

Note

The MSR on Kubernetes deployment detailed herein assumes that you have a running MSR deployment.

When you establish the MSR cache as a Kubernetes deployment, you ensure that Kubernetes will automatically schedule and restart the service in the event of a problem.

You manage the cache configuration with a Kubernetes ConfigMap and the TLS certificates with Kubernetes secrets. This setup enables you to securely manage the configuration of the node on which the cache is running.

Prepare the cache deployment

Following cache preparation, you will have the following file structure on your workstation:

├── dtrcache.yaml
├── config.yaml
└── certs
    ├── cache.cert.pem
    ├── cache.key.pem
    └── dtr.cert.pem
dtrcache.yaml

The YAML file that allows you to deploy the cache with a single command.

config.yaml

The cache configuration file.

certs

The certificates subdirectory.

cache.cert.pem

The cache public key certificate, including any intermediaries.

cache.key.pem

The cache private key.

dtr.cert.pem

The MSR CA certificate.

Create the MSR cache certificates

To deploy the MSR cache with a TLS endpoint, you must generate a TLS certificate and key from a certificate authority.

The manner in which you expose the MSR cache determines the Subject Alternative Names (SANs) that are required for the certificate. For example:

  • To deploy the MSR cache with an ingress object, you must include an external MSR cache address that resolves to your ingress controller as part of your certificate.

  • To expose the MSR cache through a Kubernetes Cloud Provider, you must include the external load balancer address as part of your certificate.

  • To expose the MSR cache through a node port or a host port, you must use a node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.

Create the MSR cache certificates:

  1. Create a private key and certificate for the cache. The following is a self-signed example suitable for testing; for production, have your certificate authority issue the certificate:

    openssl req -newkey rsa:4096 -nodes -sha256 \
      -keyout cache.key.pem \
      -x509 -days 365 -out cache.cert.pem \
      -subj "/CN=<cache-fqdn>" \
      -addext "subjectAltName = DNS:<cache-fqdn>"
    
  2. Create a directory called certs.

  3. In the certs directory, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.

  4. Place the certificate authority in the certs directory, including any intermediate certificate authorities of the certificate from your MSR deployment. If your MSR deployment uses cert-manager, use kubectl to source this from the main MSR deployment:

    kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}' > certs/dtr.cert.pem
    

Note

If cert-manager is not in use, you must provide your custom nginx.webtls certificate.

Configure the MSR cache

The MSR cache takes its configuration from a configuration file that you mount into the container.

You can edit the following MSR cache configuration file for your environment, entering the relevant external MSR cache, worker node, or external load balancer FQDN. Once you have configured the cache, it fetches image layers from MSR and maintains a local copy for 24 hours. If a user requests the image layer after that period, the cache fetches it again from MSR.

cat > config.yaml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-dtrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
      - name: downstream
        options:
          blobttl: 24h
          upstreams:
            - https://<msr-url> # URL of the Main MSR Deployment
          cas:
            - /certs/dtr.cert.pem # MSR CA certificate
EOF

By default, the cache stores image data inside its container. Thus, if something goes wrong with the cache service and Kubernetes deploys a new Pod, cached data is not persisted. The data is not lost, however, as it persists in the primary MSR.

Note

Kubernetes persistent volumes or persistent volume claims must be in use to provide persistent back end storage capabilities for the cache.
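
For illustration, a minimal PersistentVolumeClaim sketch that could back the cache storage follows; the claim name and the requested size are assumptions, and you would additionally need to mount the claim over /var/lib/registry in the cache Deployment:

cat > dtrcache-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dtr-cache-storage
  namespace: dtr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

kubectl create -f dtrcache-pvc.yaml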

Define Kubernetes resources

The Kubernetes manifest file you use to deploy the MSR cache is independent of how you choose to expose the MSR cache within your environment.

cat > dtrcache.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dtr-cache
  template:
    metadata:
      labels:
        app: dtr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: dtr-cache
          image: mirantis/dtr-content-cache:2.9.16
          command: ["/bin/sh"]
          args:
            - start.sh
            - /config/config.yaml
          ports:
          - name: https
            containerPort: 443
          volumeMounts:
          - name: dtr-certs
            readOnly: true
            mountPath: /certs/
          - name: dtr-cache-config
            readOnly: true
            mountPath: /config
      volumes:
      - name: dtr-certs
        secret:
          secretName: dtr-certs
      - name: dtr-cache-config
        configMap:
          defaultMode: 0666
          name: dtr-cache-config
EOF
Create Kubernetes resources

To create the Kubernetes resources, you must have the kubectl command line tool configured to communicate with your Kubernetes cluster, through either a Kubernetes configuration file or an MKE client bundle.

Note

The documentation herein assumes that you have the necessary file structure on your workstation.

To create the Kubernetes resources:

  1. Create a Kubernetes namespace to logically separate all of the MSR cache components:

    kubectl create namespace dtr
    
  2. Create the Kubernetes Secrets that contain the MSR cache TLS certificates and a Kubernetes ConfigMap that contains the MSR cache configuration file:

    kubectl -n dtr create secret generic dtr-certs \
      --from-file=certs/dtr.cert.pem \
      --from-file=certs/cache.cert.pem \
      --from-file=certs/cache.key.pem
    
    kubectl -n dtr create configmap dtr-cache-config \
      --from-file=config.yaml
    
  3. Create the Kubernetes deployment:

    kubectl create -f dtrcache.yaml
    
  4. Review the running Pods in your cluster to confirm successful deployment:

    kubectl -n dtr get pods
    
  5. Optional. Troubleshoot your deployment:

    kubectl -n dtr describe pods <pods>
    
    kubectl -n dtr logs <pods>
    
Expose the MSR Cache

To provide external access to your MSR cache, you must expose the cache Pods.

Important

  • Expose your MSR cache through only one external interface.

  • To ensure TLS certificate validity, you must expose the cache through the same interface for which you previously created a certificate.

Kubernetes supports several methods for exposing a service, based on your infrastructure and your environment. Detail is offered below for the NodePort method and the Ingress Controllers method.

NodePort method
  1. Ensure that a worker node FQDN is included as a SAN in the TLS certificate that you created earlier, then create a NodePort service so that you can access the MSR cache through an exposed port on that worker node:

    cat > dtrcacheservice.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: dtr-cache
      namespace: dtr
    spec:
      type: NodePort
      ports:
      - name: https
        port: 443
        targetPort: 443
        protocol: TCP
      selector:
        app: dtr-cache
    EOF
    
    kubectl create -f dtrcacheservice.yaml
    
  2. Run the following command to determine the port on which you have exposed the MSR cache:

    kubectl -n dtr get services
    
  3. Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using both the external address of a worker node and the NodePort:

    curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog
    {"repositories":[]}
    
Ingress Controllers method

In the ingress controller exposure scheme, you expose the MSR cache through an ingress object.

  1. Create a DNS rule in your environment to resolve an external MSR cache FQDN to the address of your ingress controller. In addition, ensure that the same external MSR cache FQDN is included as a SAN in the MSR cache certificate that you created earlier.

    cat > dtrcacheingress.yaml <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dtr-cache
      namespace: dtr
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
        nginx.ingress.kubernetes.io/secure-backends: "true"
    spec:
      tls:
      - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      rules:
      - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
        http:
          paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: dtr-cache
                port:
                  number: 443
    EOF
    
    kubectl create -f dtrcacheingress.yaml
    
  2. Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using the external MSR cache FQDN that you defined in the ingress object:

    curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog
    {"repositories":[]}

Configure caches for high availability

To ensure that your MSR cache is always available to users and is highly performant, configure it for high availability.

You will require the following to deploy MSR caches with high availability:

  • Multiple nodes, one for each cache replica

  • A load balancer

  • A shared storage system that has read-after-write consistency

For high availability, Mirantis recommends that you configure the replicas to store data using a shared storage system. The MSR cache deployment process is the same regardless of whether you deploy a single replica or multiple replicas.

When using a shared storage system, once an image layer is cached, any replica is able to serve it to users without having to fetch a new copy from MSR.

MSR caches support the following storage systems:

  • Alibaba Cloud Object Storage Service

  • Amazon S3

  • Azure Blob Storage

  • Google Cloud Storage

  • NFS

  • OpenStack Swift

Note

If you are using NFS as a shared storage system, ensure read-after-write consistency by verifying that the shared directory is configured with:

/dtr-cache *(rw,root_squash,no_wdelay)

In addition, mount the NFS directory on each node where you will deploy an MSR cache replica.
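
For example, a sketch of mounting the exported directory on a cache node, with the NFS server address and local mount point as placeholders:

mount -t nfs <nfs-server>:/dtr-cache /mnt/dtr-cache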

To configure caches for high availability:

  1. Use SSH to log in to a manager node of the cluster on which you want to deploy the MSR cache. If you are using MKE to manage that cluster, you can also use a client bundle to configure your Docker CLI client to connect to the cluster.

  2. Label each node that is going to run the cache replica:

    docker node update --label-add dtr.cache=true <node-hostname>
    
  3. Create the cache configuration files by following the instructions for deploying a single cache replica. Be sure to adapt the storage object, using the configuration options for the shared storage of your choice.

  4. Deploy a load balancer of your choice to balance requests across your set of replicas.

MSR cache configuration

MSR caches are based on Docker Registry and use the same configuration file format. The MSR cache, however, extends the Docker Registry configuration file format with a new middleware called downstream, which has three configuration options: blobttl, upstreams, and cas:

middleware:
  registry:
      - name: downstream
        options:
          blobttl: 24h
          upstreams:
            - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
          cas:
            - <Absolute path to next-hop upstream registry or content cache
              CA certificate in the container's filesystem>

The following detail is specific to MSR caches for each parameter:

blobttl (optional)

  The TTL (Time to Live) value for blobs in the cache, offered as a positive integer and a suffix denoting a unit of time.

  Valid suffixes:

  • ns (nanoseconds)

  • us (microseconds)

  • ms (milliseconds)

  • s (seconds)

  • m (minutes)

  • h (hours)

  Note

  If the suffix is omitted, the system interprets the value as nanoseconds.

  If blobttl is configured, storage.delete.enabled must be set to true.

cas (optional)

  An optional list of absolute paths to PEM-encoded CA certificates of upstream registries or content caches.

upstreams (required)

  A list of externally reachable addresses for upstream registries or content caches. If you specify more than one host, the cache pulls from the registries in round-robin fashion.

Garbage collection

Mirantis Secure Registry (MSR) supports garbage collection, the automatic cleanup of unused image layers. You can configure garbage collection to occur at regularly scheduled times, as well as set a specific duration for the process.

Garbage collection first identifies and marks unused image layers, then subsequently deletes the layers that have been marked.

Schedule garbage collection
  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System and select the Garbage collection tab.

  3. Set the duration for the garbage collection job:

    • Until done

    • For <number> minutes

    • Never

  4. Set the garbage collection schedule:

    • Custom cron schedule (<hour, date, month, day>)

    • Daily at midnight UTC

    • Every Saturday at 1AM UTC

    • Every Sunday at 1AM UTC

    • Do not repeat

  5. Click either Save & Start or Save. Save & Start runs the garbage collection job immediately and Save runs the job at the next scheduled time.

  6. At the scheduled start time, verify that garbage collection has begun by navigating to the Job Logs tab.

How garbage collection works

In conducting garbage collection, MSR performs the following actions in sequence:

  1. Establishes a cutoff time.

  2. Marks each referenced manifest file with a timestamp. When manifest files are pushed to MSR, they are also marked with a timestamp.

  3. Sweeps each manifest file that does not have a timestamp after the cutoff time.

  4. Deletes the file if it is never referenced, meaning that no image tag uses it.

  5. Repeats the process for blob links and blob descriptors.

Each image stored in MSR is comprised of the following files:

  • The image filesystem, which consists of a list of unioned image layers.

  • A configuration file, which contains the architecture of the image along with other metadata.

  • A manifest file, which contains a list of all the image layers and the configuration file for the image.

MSR tracks these files in its metadata store, which uses RethinkDB, in a content-addressable manner in which each file corresponds to a cryptographic hash of its content. Thus, if two image tags hold exactly the same content, MSR stores that content only once, even when the image tag names differ, and hash collisions are nearly impossible. For example, if wordpress:4.8 and wordpress:latest have the same content, MSR will only store that content once. If you delete one of these tags, the other will remain intact.

As a result, when you delete an image tag, MSR cannot delete the underlying files as it is possible that other tags also use the same underlying files.
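
You can observe this content addressing locally. For example, listing a repository with the --digests flag shows the digest that identifies each tag, and two tags with identical content report the same digest (the repository name is illustrative):

docker images --digests msr-example.com/library/wordpress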

Create a new repository when pushing an image

By default, MSR only allows users to push images to repositories that already exist, and for which the user has write privileges. Alternatively, you can configure MSR to create a new private repository when an image is pushed.

To create a new repository when pushing an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Settings and scroll down to Repositories.

  3. Slide the Create repository on push toggle to the right.

  4. Alternatively, enable the setting through the API rather than the web UI:

    curl --user <admin-user>:<password> \
    --request POST "<msr-url>/api/v0/meta/settings" \
    --header "accept: application/json" \
    --header "content-type: application/json" \
    --data "{ \"createRepositoryOnPush\": true}"
    

Pushing an image to a non-existing repository will create a new repository using the following naming convention:

  • Non-admin users: <user-name>/<repository>

  • Admin users: <organization>/<repository>
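
For example, the following sketch shows a push by a non-admin user that creates the <user-name>/my-new-repo repository, with the image and repository names purely illustrative:

docker tag alpine:latest <msr-url>/<user-name>/my-new-repo:1.0
docker push <msr-url>/<user-name>/my-new-repo:1.0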

Use a web proxy

Mirantis Secure Registry (MSR) makes outgoing connections to check for new versions, automatically renew its license, and update its vulnerability database. If MSR cannot access the Internet, you must manually apply any updates.

One option to keep your environment secure while still allowing MSR access to the Internet is to use a web proxy. If you have an HTTP or HTTPS proxy, you can configure MSR to use it. To avoid downtime, you should do this configuration outside business peak hours.

To configure MSR for web proxy use:

  1. Log in as an administrator to a node where MSR is deployed.

  2. Reconfigure MSR to use a web proxy:

    docker run -it --rm \
      mirantis/dtr:2.9.16 reconfigure \
      --http-proxy http://<domain>:<port> \
      --https-proxy https://<domain>:<port> \
      --ucp-insecure-tls
    

    If the web proxy requires authentication, submit your user name and password:

    docker run -it --rm \
      mirantis/dtr:2.9.16 reconfigure \
      --http-proxy username:password@<domain>:<port> \
      --https-proxy username:password@<domain>:<port> \
      --ucp-insecure-tls
    

    Note

    MSR does not display the password portion of the URL when it is presented in the MSR UI.

  3. Verify that your web proxy is properly configured:

    1. Log in to the MSR web UI.

    2. In the left-side navigation panel, navigate to System.

    3. Scroll down to Domains & Proxies and review the values of HTTP proxy and HTTPS proxy.

Manage applications

In addition to storing individual and multi-architecture container images and plugins, MSR supports the storage of applications as their own distinguishable type.

Applications include the following two tags:

Invocation image (<app-tag>-invoc)

  Type: Container image, represented by OS and architecture (for example, linux amd64).

  Under the hood: Uses Mirantis Container Runtime. The Docker daemon is responsible for building and pushing the image. Includes scan results for the invocation image.

Application with bundled components (<app-tag>)

  Type: Application.

  Under the hood: Uses the application client to build and push the image. Includes scan results for the bundled components. Docker App is an experimental Docker CLI feature.

Use docker app push to push your applications to MSR. For more information, refer to Docker App in the official Docker documentation.

View application vulnerabilities

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Repositories.

  3. Select the desired repository and click the Tags tab.

  4. Click View details on the <app-tag> or <app-tag>-invoc row.

Limitations

  • You cannot sign an application, because the Notary signer cannot sign Open Container Initiative (OCI) indices.

  • Scanning-based policies do not take effect until after all images bundled in the application have been scanned.

  • Docker Content Trust (DCT) does not work for applications and multi-architecture images, which have the same underlying structure.

Parity with existing repository and image features

Repository and image management features, such as those detailed in the sections that follow, also apply to applications.

Manage images

Create a repository

MSR requires that you create the image repository before pushing any images to the registry.

To create an image repository:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click New repository.

  4. Select the required namespace and enter the name for your repository using only lowercase letters, numbers, underscores, and hyphens.

  5. Select whether your repository is public or private:

    • Public repositories are visible to all users, but can only be modified by those with write permissions.

    • Private repositories are visible only to users with repository permissions.

  6. Optional. Click Show advanced settings:

    • Select On to make tags immutable, and thus unable to be overwritten.

    • Select On push to configure images to be scanned automatically when they are pushed to MSR. You will also be able to scan them manually.

  7. Click Create.

Note

To enable tag pruning, refer to Set a tag limit. This feature requires that tag immutability is turned off at the repository level.

Image names in MSR

MSR image names must have the following characteristics:

  • The organization and repository names both must have fewer than 56 characters.

  • The complete image name, which includes the domain, organization, and repository name, must not exceed 255 characters.

  • When you tag your images for MSR, they must take the following form:

    <msr-domain-name>/<user-or-org>/<repository-name>.

    For example, 127.0.0.1/admin/nginx.

Multi-architecture images

While it is possible to enable the just-in-time creation of multi-architecture image repositories when creating a repository using the API, Mirantis does not recommend using this option, as it will cause Docker Content Trust to fail along with other issues. To manage Docker image manifests and manifest lists, instead use the experimental command docker manifest.
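
For example, provided that experimental CLI features are enabled, you can review the manifest list of a multi-architecture image as follows:

docker manifest inspect <msr-domain-name>/<user-or-org>/<repository-name>:<tag>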

Review repository information

The MSR web UI has an Info page for each repository that includes the following sections:

  • A README file, which is editable by admin users.

  • The docker pull command for pulling the images contained in the given repository. To learn more about pulling images, refer to Pull and push images.

  • The permissions associated with the user who is currently logged in.

To view the Info section:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Repositories.

  3. Select the required repository by clicking the repository name rather than the namespace name that precedes the /.

    The Info tab displays by default.

To view the repository events that your permissions level has access to, hover over the question mark next to the permissions level that displays under Your permission.

Note

Your permissions list may include repository events that are not displayed in the Activity tab. Also, it is not an exhaustive list of the event types that are displayed in your activity stream. To learn more about repository events, refer to Audit repository events.

Pull and push images

Just as with Docker Hub, interactions with MSR consist of the following:

  • docker login <msr-url> authenticates the user on MSR

  • docker pull <image>:<tag> pulls an image from MSR

  • docker push <image>:<tag> pushes an image to MSR

Pull an image

Note

It is only necessary to authenticate using docker login before pulling a private image.

  1. If you need to pull a private image, log in to MSR:

    docker login <registry-host-name>
    
  2. Pull the required image:

    docker pull <registry-host-name>/<namespace>/<repository>:<tag>
    
Push an image

Before you can push an image to MSR, you must create a repository and tag your image.

  1. Create a repository for the required image.

  2. Tag the image using the host name, namespace, repository name, and tag:

    docker tag <image-name> <registry-host-name>/<namespace>/<repository>:<tag>
    
  3. Log in to MSR:

    docker login <registry-host-name>
    
  4. Push the image to MSR:

    docker push <registry-host-name>/<namespace>/<repository>:<tag>
    
  5. Verify that the image successfully pushed:

    1. Log in to the MSR web UI.

    2. In the left-side navigation panel, click Repositories.

    3. Select the relevant repository.

    4. Navigate to the Tags tab.

    5. Verify that the required tag is listed on the page.

Windows image limitations

The base layers of the Microsoft Windows base images have redistribution restrictions. When you push a Windows image to MSR, Docker only pushes the image manifest and the layers that are above the Windows base layers. As a result:

  • When a user pulls a Windows image from MSR, the Windows base layers are automatically fetched from Microsoft.

  • Because MSR does not have access to the image base layers, it cannot scan those image layers for vulnerabilities. The Windows base layers are, however, scanned by Docker Hub.

On air-gapped or similarly limited systems, you can configure Docker to push Windows base layers to MSR by adding the following line to C:\ProgramData\docker\config\daemon.json:

"allow-nondistributable-artifacts": ["<msr-host-name>:<msr-port>"]

Caution

For production environments, Mirantis does not recommend configuring Docker to push Windows base layers to MSR.

Delete images

Note

If your MSR instance uses image signing, you will need to remove any trust data on the image before you can delete it. For more information, refer to Delete signed images.

To delete an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the relevant repository and navigate to the Tags tab.

  4. Select the check box next to the tags that you want to delete.

  5. Click Delete.

Alternatively, you can delete every tag for a particular image by deleting the relevant repository.

To delete a repository:

  1. Click the required repository and navigate to the Settings tab.

  2. Scroll down to Delete repository and click Delete.

Scan images for vulnerabilities

Mirantis Secure Registry (MSR) has the ability to scan images for security vulnerabilities contained in the US National Vulnerability Database. Security scan results are reported for each image tag contained in a repository.

Security scanning is available as an add-on to MSR. If security scan results are not available on your repositories, your organization may not have purchased the security scanning feature or it may be disabled. Administrator permissions are required to enable security scanning on your MSR instance.

Note

Only users with write access to a repository can manually start a scan. Users with read-only access can, however, view the scan results.

Security scan process

Scans run on demand when you initiate them in the MSR web UI or automatically when you push an image to the registry.

The scanner first performs a binary scan on each layer of the image, identifies the software components in each layer, and indexes the SHA of each component in a bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, so vulnerable components are discovered even if they are statically linked or use a different name.

The scan then compares the SHA of each component against the US National Vulnerability Database that is installed on your MSR instance. When this database is updated, MSR verifies whether the indexed components have newly discovered vulnerabilities.

MSR can scan both Linux and Windows images. However, because Docker defaults to not pushing the foreign image layers of Windows images, MSR does not scan those layers. If you want MSR to scan the Windows base layers as well, configure Docker to push them, as described in Windows image limitations.

Scan images

Note

Only users with write access to a repository can manually start a scan. Users with read-only access can, however, view the scan results.

Security scan on push

By default, a security scan runs automatically when you push an image to the registry.

To view the results of a security scan:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click View details on the required tag.

Manual scanning

You can manually start a scan for images in repositories that you have write access to.

To manually scan an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click Start a scan on the required image tag.

  5. To review the scan results, click View details.

Change the scanning mode

You can change the scanning mode for each individual repository at any time. You might want to disable scanning in either of the following scenarios:

  • You are pushing an image repeatedly during troubleshooting and do not want to waste resources on rescanning.

  • A repository contains legacy code that is not used or updated frequently.

Note

To change an individual repository scanning mode, you must have write or administrator access to the repository.

To change the scanning mode:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Settings tab.

  4. Scroll down to Image scanning and under Scan on push, select either On push or Manual.

Review security scan results

Once MSR has run a security scan for an image, you can view the results.

Scan summaries

A summary of the results displays next to each scanned tag on the repository Tags tab, and presents in one of the following ways:

  • If the scan did not find any vulnerabilities, the word Clean displays in green.

  • If the scan found vulnerabilities, the severity level, Critical, Major, or Minor, displays in red or orange with the number of vulnerabilities. If the scan could not detect the version of a component, the vulnerabilities are reported for all versions of the component.

Detailed report

To view the full scanning report, click View details for the required image tag.

The top of the resulting page includes metadata about the image, including the SHA, the image size, the date of the last push, the user who initiated the push, the security scan summary, and the security scan progress.

The scan results for each image include two different modes so you can quickly view details about the image, its components, and any vulnerabilities found:

  • The Layers view lists the layers of the image in the order that they are built by the Dockerfile.

    This view can help you identify which command in the build introduced the vulnerabilities, and which components are associated with that command. Click a layer to see a summary of its components. You can then click on a component to switch to the Component view and obtain more details about the specific item.

    Note

    The layers view can be long, so be sure to scroll down if you do not immediately see the reported vulnerabilities.

  • The Components view lists the individual component libraries indexed by the scanning system in order of severity and number of vulnerabilities found, with the most vulnerable library listed first.

    Click an individual component to view details on the vulnerability it introduces, including a short summary and a link to the official CVE database report. A single component can have multiple vulnerabilities, and the scan report provides details on each one. In addition, the component details include the license type used by the component, the file path to the component in the image, and the number of layers that contain the component.

Note

The CVE count presented in the scan summary of an image with multiple layers may differ from the count obtained through summation of the CVEs for each individual image component. This is because the scan summary performs a summation of the CVEs in every layer of the image, and a component may be present in more than one layer of an image.

What to do next

If you find that an image in your registry contains vulnerable components, you can use the linked CVE scan information in each scan report to evaluate the vulnerability and decide what to do.

If you discover vulnerable components, you should verify whether there is an updated version available where the security vulnerability has been addressed. If necessary, you can contact the component maintainers to ensure that the vulnerability is being addressed in a future version or a patch update.

If the vulnerability is in a base layer, such as an operating system, you might not be able to correct the issue in the image. In this case, you can switch to a different version of the base layer, or you can find a less vulnerable equivalent.

You can address vulnerabilities in your repositories by updating the images to use updated and corrected versions of vulnerable components or by using a different component that offers the same functionality. When you have updated the source code, run a build to create a new image, tag the image, and push the updated image to your MSR instance. You can then re-scan the image to confirm that you have addressed the vulnerabilities.

Override a vulnerability

MSR security scanning sometimes reports image vulnerabilities that you know have already been fixed. In such cases, it is possible to hide the vulnerability warning.

To override a vulnerability:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Navigate to the required repository and click View details.

  4. To review the vulnerabilities associated with each component in the image, click the Components tab.

  5. Select the component with the vulnerability you want to ignore, navigate to the vulnerability, and click Hide.

Once dismissed, the vulnerability is hidden system-wide and will no longer be reported as a vulnerability on affected images with the same layer IDs or digests. In addition, MSR will not re-evaluate the promotion policies that have been set up for the repository.

After hiding a particular vulnerability, you can re-evaluate the promotion policy for the affected image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Navigate to the required repository and click View details.

  4. Click Promote.

Prevent tags from being overwritten

By default, users can push the same tag multiple times to a repository, thus overwriting the older versions of the tag. This can, however, lead to problems if a user pushes an image with the same tag name but different functionality. In addition, when images are overwritten, it can be difficult to determine which build originally generated the image.

To prevent tags from being overwritten, you can configure a repository to be immutable. Once configured, MSR will not allow another image with the same tag to be pushed to the repository.

Note

Enabling tag immutability disables repository tag limits.

Make tags immutable

You can enable tag immutability when creating a new repository or at a later time.

To enable tag immutability when creating a new repository:

  1. Log in to the MSR web UI.

  2. Follow the steps in Create a repository.

  3. On the new repository creation page, click Show advanced settings.

  4. Under Immutability, select On.

To enable tag immutability on an existing repository:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Select the relevant repository and navigate to the Settings tab.

  4. In the General section under Immutability, select On.

Once tag immutability is enabled, MSR will return an error message such as the following when you try to push a tag that already exists:

docker push msr-example.com/library/wordpress:latest
unknown: tag=latest cannot be overwritten because
msr-example.com/library/wordpress is an immutable repository

Sign images

Sign an image

Two key components of the Mirantis Secure Registry are the Notary Server and the Notary Signer. These two containers provide the required components for using Docker Content Trust (DCT) out of the box. Docker Content Trust allows you to sign image tags, therefore giving consumers a way to verify the integrity of your image.

Note

If the MSR certificate authority (CA) is self-signed, you must take steps to make the machine running the docker trust command trust the MSR CA. You can do this by creating a folder with the name of MSR hostname under $HOME/.docker/tls/ and placing the MSR CA file in that folder. For example:

mkdir -p $HOME/.docker/tls/msr.example.com
curl -k -o $HOME/.docker/tls/msr.example.com/msr-ca.crt https://msr.example.com/ca

As part of MSR, both the Notary and the Registry servers are accessed through a front-end proxy, with both components sharing MKE's role-based access control (RBAC) engine. Therefore, you do not need additional Docker client configuration to use DCT.

DCT is integrated with the Docker CLI, and allows you to:

  • Configure repositories

  • Add signers

  • Sign images using the docker trust command

Sign images that MKE can trust

MKE has a feature that prevents untrusted images from being deployed on the cluster. To use this feature, you must sign and push images to your MSR. To tie the signed images back to MKE, you must sign the images with the private keys of MKE users. From an MKE client bundle, use key.pem as your private key and cert.pem as your public key on an x509 certificate.

To sign images in a way that MKE can trust, you need to:

  1. Download a client bundle for the user account you want to use for signing the images.

  2. Add the user’s private key to your machine’s trust store.

  3. Initialize trust metadata for the repository.

  4. Delegate signing for that repository to the MKE user.

  5. Sign the image.

The following example shows the nginx image getting pulled from Docker Hub, tagged as msr.example.com/dev/nginx:1, pushed to MSR, and signed in a way that is trusted by MKE.

Import an MKE user’s private key

After downloading and extracting an MKE client bundle into your local directory, you need to load the private key into the local Docker trust store (~/.docker/trust). To illustrate the process, we will use jeff as an example user.

$ docker trust key load --name jeff key.pem
Loading key from "key.pem"...
Enter passphrase for new jeff key with ID a453196:
Repeat passphrase for new jeff key with ID a453196:
Successfully imported key from key.pem
Initialize the trust metadata and add the user’s public certificate

Next, initialize trust metadata for an MSR repository. If you have not already done so, navigate to the MSR web UI, and create a repository for your image. This example uses the nginx repository in the prod namespace.

As part of initializing the repository, the public key of the MKE user needs to be added to the Notary server as a signer for the repository. You will be asked for a number of passphrases to protect the keys. Make a note of these passphrases.

$ docker trust signer add --key cert.pem jeff msr.example.com/prod/nginx
Adding signer "jeff" to msr.example.com/prod/nginx...
Initializing signed repository for msr.example.com/prod/nginx...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID e0d15a2:
Repeat passphrase for new repository key with ID e0d15a2:
Successfully initialized "msr.example.com/prod/nginx"
Successfully added signer: jeff to msr.example.com/prod/nginx

Inspect the trust metadata of the repository to make sure the user has been added correctly.

$ docker trust inspect --pretty msr.example.com/prod/nginx

No signatures for msr.example.com/prod/nginx

List of signers and their keys for msr.example.com/prod/nginx

SIGNER              KEYS
jeff                927f30366699

Administrative keys for msr.example.com/prod/nginx

  Repository Key:       e0d15a24b7...540b4a2506b
  Root Key:             b74854cb27...a72fbdd7b9a
Sign the image

Finally, user jeff can sign an image tag. The following steps include downloading the image from Docker Hub, tagging the image for Jeff’s MSR repository, pushing the image to MSR, and signing the tag with Jeff’s keys.

$ docker pull nginx:latest

$ docker tag nginx:latest msr.example.com/prod/nginx:1

$ docker trust sign msr.example.com/prod/nginx:1
Signing and pushing trust data for local image msr.example.com/prod/nginx:1, may overwrite remote trust data
The push refers to repository [msr.example.com/prod/nginx]
6b5e2ed60418: Pushed
92c15149e23b: Pushed
0a07e81f5da3: Pushed
1: digest: sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 size: 948
Signing and pushing trust metadata
Enter passphrase for jeff key with ID 927f303:
Successfully signed msr.example.com/prod/nginx:1

Inspect the trust metadata again to make sure the image tag has been signed successfully.

$ docker trust inspect --pretty msr.example.com/prod/nginx:1

Signatures for msr.example.com/prod/nginx:1

SIGNED TAG          DIGEST                   SIGNERS
1                   5b49c8e2c8...90fbb2033   jeff

List of signers and their keys for msr.example.com/prod/nginx:1

SIGNER              KEYS
jeff                927f30366699

Administrative keys for msr.example.com/prod/nginx:1

  Repository Key:       e0d15a24b74...96540b4a2506b
  Root Key:             b74854cb27c...1ea72fbdd7b9a

Alternatively, you can review the signed image from the MSR web UI.

Add delegations

You have the option to sign an image using multiple MKE users’ keys. For example, an image needs to be signed by a member of the Security team and a member of the Developers team. Let’s assume jeff is a member of the Developers team. In this case, we only need to add a member of the Security team.

To do so, first add the private key of the Security team member to the local Docker trust store.

$ docker trust key load --name ian key.pem
Loading key from "key.pem"...
Enter passphrase for new ian key with ID 5ac7d9a:
Repeat passphrase for new ian key with ID 5ac7d9a:
Successfully imported key from key.pem

Upload the user’s public key to the Notary server and sign the image. You will be asked for the passphrase of jeff, the developer, as well as the passphrase of the ian user, in order to sign the tag.

$ docker trust signer add --key cert.pem ian msr.example.com/prod/nginx
Adding signer "ian" to msr.example.com/prod/nginx...
Enter passphrase for repository key with ID e0d15a2:
Successfully added signer: ian to msr.example.com/prod/nginx

$ docker trust sign msr.example.com/prod/nginx:1
Signing and pushing trust metadata for msr.example.com/prod/nginx:1
Existing signatures for tag 1 digest 5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 from:
jeff
Enter passphrase for jeff key with ID 927f303:
Enter passphrase for ian key with ID 5ac7d9a:
Successfully signed msr.example.com/prod/nginx:1

Finally, check the tag again to make sure it includes two signers.

$ docker trust inspect --pretty msr.example.com/prod/nginx:1

Signatures for msr.example.com/prod/nginx:1

SIGNED TAG          DIGEST                     SIGNERS
1                   5b49c8e2c89...5bb69e2033   jeff, ian

List of signers and their keys for msr.example.com/prod/nginx:1

SIGNER     KEYS
jeff       927f30366699
ian        5ac7d9af7222

Administrative keys for msr.example.com/prod/nginx:1

  Repository Key:       e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b
  Root Key:             b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a

Delete trust data

If an administrator wants to delete an MSR repository that contains trust metadata, they will be prompted to delete the trust metadata first before removing the repository.

To delete trust metadata, you need to use the Notary CLI.

$ notary delete msr.example.com/prod/nginx --remote
Deleting trust data for repository msr.example.com/prod/nginx
Enter username: admin
Enter password:
Successfully deleted local and remote trust data for repository msr.example.com/prod/nginx

If you don’t include the --remote flag, Notary deletes local cached content but will not delete data from the Notary server.

Delete signed images

To delete signed images, you must first identify the roles that signed the image and then remove the trust data for each of those roles.

Identify the roles that signed an image
  1. Determine the roles that are trusted to sign the image:

    1. Configure your Notary client. A minimal example configuration is shown after this list.

    2. List the trusted roles:

      notary delegation list <registry-host-name>/<namespace>/<repository>
      

      Example output:

      ROLE                PATHS             KEY IDS                  THRESHOLD
      ----                -----             -------                  ---------
      targets/releases    "" <all paths>    c3470c45cefde5...2ea9bc8    1
      targets/qa          "" <all paths>    c3470c45cefde5...2ea9bc8    1
      

      In this example, the repository owner delegated trust to the targets/releases and targets/qa roles.

  2. For each role listed in the previous step, identify whether it signed the image:

    notary list <registry-host-name>/<namespace>/<repository> --roles <role-name>
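
For the Notary client configuration referenced above, note that the Notary client reads its settings from ~/.notary/config.json; you can also pass the server and trust directory per command with the -s and -d flags. The following is a minimal sketch, assuming your MSR is reachable at msr.example.com and its CA certificate has been saved to ca.crt (both values are illustrative):

{
  "trust_dir": "~/.docker/trust",
  "remote_server": {
    "url": "https://msr.example.com",
    "root_ca": "ca.crt"
  }
}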
    
Remove trust data for a role

Note

Only users with private keys that have the required roles can perform this operation.

For each role that signed the image, remove the trust data for that role:

notary remove <registry-host-name>/<namespace>/<repository> <tag> \
--roles <role-name> --publish

The image will display as unsigned once the trust data has been removed for all of the roles that signed the image.

Delete the image

To delete the image, refer to Delete images.

Using Docker Content Trust with a Remote MKE Cluster

For more advanced deployments, you may want to share one Mirantis Secure Registry across multiple Mirantis Kubernetes Engines. However, customers wanting to adopt this model alongside the Only Run Signed Images MKE feature run into problems, as each MKE operates an independent set of users.

Docker Content Trust (DCT) gets around this problem, since users from a remote MKE are able to sign images in the central MSR and still apply runtime enforcement.

In the following example, we will connect MSR managed by MKE cluster 1 with a remote MKE cluster which we are calling MKE cluster 2, sign the image with a user from MKE cluster 2, and provide runtime enforcement within MKE cluster 2. This process could be repeated over and over, integrating MSR with multiple remote MKE clusters, signing the image with users from each environment, and then providing runtime enforcement in each remote MKE cluster separately.

Note

Before attempting this guide, familiarize yourself with Docker Content Trust and Only Run Signed Images on a single MKE. Many of the concepts within this guide may be new without that background.

Prerequisites
  • Cluster 1, running UCP 3.0.x or higher, with DTR 2.5.x or higher deployed within the cluster.

  • Cluster 2, running UCP 3.0.x or higher, with no MSR node.

  • Nodes on Cluster 2 need to trust the Certificate Authority which signed MSR’s TLS Certificate. This can be tested by logging on to a cluster 2 virtual machine and running curl https://msr.example.com.

  • The MSR TLS certificate needs to be properly configured, with the Loadbalancer/Public Address field set and that address included within the certificate.

  • A machine with the Docker Client (CE 17.12 / EE 18.03 or newer) installed, as this contains the relevant docker trust commands.

Registering MSR with a remote Mirantis Kubernetes Engine

As there is no registry running within cluster 2, by default MKE will not know where to check for trust data. Therefore, the first thing we need to do is register MSR within the remote MKE in cluster 2. When you install MSR normally, this registration happens automatically with the local MKE, that is, cluster 1.

Note

The registration process allows the remote MKE to get signature data from MSR; however, this will not provide Single Sign-On (SSO). Users on cluster 2 will not be synced with cluster 1’s MKE or MSR. Therefore, when pulling images, registry authentication will still need to be passed as part of the service definition if the repository is private. See the Kubernetes example.

To add a new registry, retrieve the certificate of the Certificate Authority (CA) that was used to sign the MSR TLS certificate, through the MSR URL’s /ca endpoint.

$ curl -ks https://msr.example.com/ca > dtr.crt

Next, convert the MSR certificate into a JSON configuration file for registration within the MKE for cluster 2.

You can find a template of the dtr-bundle.json below. Replace the host address with your MSR URL, and enter the contents of the MSR CA certificate between the newline escape sequences \n and \n.

Note

JSON Formatting

Ensure there are no literal line breaks within the MSR CA certificate in the JSON file; each line break must be encoded as \n. Use your favorite JSON formatter for validation.

$ cat dtr-bundle.json
{
  "hostAddress": "msr.example.com",
  "caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
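
If jq (version 1.6 or later, for the --rawfile option) is available, you can generate this file directly from the certificate rather than editing it by hand, since jq escapes the newlines for you. This is a sketch reusing the file and host names from the example above:

$ jq -n --arg hostAddress "msr.example.com" --rawfile caBundle dtr.crt \
    '{hostAddress: $hostAddress, caBundle: $caBundle}' > dtr-bundle.json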

Now upload the configuration file to cluster 2’s MKE through the MKE API endpoint, /api/config/trustedregistry_. To authenticate against the API of cluster 2’s MKE, we have downloaded an MKE client bundle, extracted it in the current directory, and will reference the keys for authentication.

$ curl --cacert ca.pem --cert cert.pem --key key.pem \
    -X POST \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -d @dtr-bundle.json \
    https://cluster2.example.com/api/config/trustedregistry_

Navigate to the MKE web interface to verify that the JSON file was imported successfully, as the MKE endpoint will not output anything. Select Admin > Admin Settings > Mirantis Secure Registry. If the registry has been added successfully, you should see the MSR listed.

Additionally, you can check the full MKE configuration file within cluster 2’s MKE. Once downloaded, the ucp-config.toml file should now contain a section called [registries].

$ curl --cacert ca.pem --cert cert.pem --key key.pem https://cluster2.example.com/api/ucp/config-toml > ucp-config.toml
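
To quickly confirm that the downloaded file contains the section (the exact contents will vary with your configuration), you can search it with grep:

$ grep -A 3 -F '[registries]' ucp-config.toml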

If the new registry isn’t shown in the list, check the ucp-controller container logs on cluster 2.

Signing an image in MSR

We will now sign an image and push it to MSR. To sign images, we need a user’s public-private key pair from cluster 2. It can be found in a client bundle, with key.pem being the private key and cert.pem being the public key on an X.509 certificate.

First, load the private key into the local Docker trust store (~/.docker/trust). The name used here is purely metadata to help keep track of which keys you have imported.

$ docker trust key load --name cluster2admin key.pem
Loading key from "key.pem"...
Enter passphrase for new cluster2admin key with ID a453196:
Repeat passphrase for new cluster2admin key with ID a453196:
Successfully imported key from key.pem

Next, initiate the repository, and add the public key of cluster 2’s user as a signer. You will be asked for a number of passphrases to protect the keys. Make a note of these passphrases, and see the Docker Content Trust documentation to learn more about managing delegation keys.

$ docker trust signer add --key cert.pem cluster2admin msr.example.com/admin/trustdemo
Adding signer "cluster2admin" to msr.example.com/admin/trustdemo...
Initializing signed repository for msr.example.com/admin/trustdemo...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID dd4460f:
Repeat passphrase for new repository key with ID dd4460f:
Successfully initialized "msr.example.com/admin/trustdemo"
Successfully added signer: cluster2admin to msr.example.com/admin/trustdemo

Finally, sign the image tag. This pushes the image up to MSR and signs the tag with the cluster 2 user’s keys.

$ docker trust sign msr.example.com/admin/trustdemo:1
Signing and pushing trust data for local image msr.example.com/admin/trustdemo:1, may overwrite remote trust data
The push refers to repository [msr.example.com/admin/trustdemo]
27c0b07c1b33: Layer already exists
aa84c03b5202: Layer already exists
5f6acae4a5eb: Layer already exists
df64d3292fd6: Layer already exists
1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153
Signing and pushing trust metadata
Enter passphrase for cluster2admin key with ID a453196:
Successfully signed msr.example.com/admin/trustdemo:1

Within the MSR web interface, you should now be able to see your newly pushed tag with the Signed text next to the size.

You can sign this image multiple times if required, whether by multiple teams from the same cluster wanting to sign the image, or by integrating MSR with more remote MKEs so that users from clusters 1, 2, 3, or more can all sign the same image.

Enforce Signed Image Tags on the Remote MKE

We can now enable Only Run Signed Images on the remote MKE. To do this, log in to cluster 2’s MKE web interface as an admin. Select Admin > Admin Settings > Docker Content Trust.

Finally, we can deploy a workload on cluster 2 using a signed image from the MSR running on cluster 1. This workload could be a simple $ docker run, a Swarm service, or a Kubernetes workload. As a simple test, source a client bundle, and try running one of your signed images.

$ source env.sh

$ docker service create msr.example.com/admin/trustdemo:1
nqsph0n6lv9uzod4lapx0gwok
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

$ docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                   PORTS
nqsph0n6lv9u        laughing_lamarr         replicated          1/1                 msr.example.com/admin/trustdemo:1

Troubleshooting

If the image is stored in a private repository within MSR, you need to pass credentials to the Orchestrator as there is no SSO between cluster 2 and MSR. See the relevant Kubernetes documentation for more details.
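
As a sketch of what passing credentials looks like (the account, secret, and image names are illustrative): for Swarm services, log in to the registry and forward your credentials with the --with-registry-auth flag; for Kubernetes, create an image pull secret and reference it from the workload definition under imagePullSecrets.

$ docker login msr.example.com
$ docker service create --with-registry-auth msr.example.com/admin/trustdemo:1

$ kubectl create secret docker-registry msr-creds \
    --docker-server=msr.example.com \
    --docker-username=cluster1user \
    --docker-password='<password-or-token>'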

Example Errors
Image or trust data does not exist
image or trust data does not exist for msr.example.com/admin/trustdemo:1

This means something went wrong when initiating the repository or signing the image, as the tag contains no signing data.

Image did not meet required signing policy
Error response from daemon: image did not meet required signing policy

msr.example.com/admin/trustdemo:1: image did not meet required signing policy

This means that the image was signed correctly; however, the user who signed the image does not meet the signing policy in cluster 2. This could be because you signed the image with the wrong user keys.

MSR URL must be a registered trusted registry
Error response from daemon: msr.example.com must be a registered trusted registry. See 'docker run --help'.

This means you have not registered MSR to work with a remote MKE instance yet, as outlined in Registering MSR with a remote Mirantis Kubernetes Engine.

Manage jobs

Job queue

Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs. Jobs are added to a cluster-wide job queue, and then consumed and executed by a job runner within MSR.

All MSR replicas have access to the job queue, and have a job runner component that can get and execute work.

How it works

When a job is created, it is added to a cluster-wide job queue and enters the waiting state. When one of the MSR replicas is ready to claim the job, it waits a random time of up to 3 seconds to give every replica the opportunity to claim the task.

A replica claims a job by adding its replica ID to the job. That way, other replicas will know the job has been claimed. Once a replica claims a job, it adds that job to an internal queue, which in turn sorts the jobs by their scheduledAt time. Once that happens, the replica updates the job status to running, and starts executing it.

The job runner component of each MSR replica keeps a heartbeatExpiration entry on the database that is shared by all replicas. If a replica becomes unhealthy, other replicas notice the change and update the status of the failing worker to dead. Also, all the jobs that were claimed by the unhealthy replica enter the worker_dead state, so that other replicas can claim the job.

Job types

MSR runs periodic and long-running jobs. The following is a complete list of jobs you can filter for via the user interface or the API.

Job                     Description
gc                      A garbage collection job that deletes layers associated with deleted images.
onlinegc                A garbage collection job that deletes layers associated with deleted images without putting the registry in read-only mode.
onlinegc_metadata       A garbage collection job that deletes metadata associated with deleted images.
onlinegc_joblogs        A garbage collection job that deletes job logs based on a configured job history setting.
metadatastoremigration  A necessary migration that enables the onlinegc feature.
sleep                   Used for testing the correctness of the jobrunner. It sleeps for 60 seconds.
false                   Used for testing the correctness of the jobrunner. It runs the false command and immediately fails.
tagmigration            Used for synchronizing tag and manifest information between the MSR database and the storage backend.
bloblinkmigration       A DTR 2.1 to 2.2 upgrade process that adds references for blobs to repositories in the database.
license_update          Checks for license expiration extensions if online license updates are enabled.
scan_check              An image security scanning job. This job does not perform the actual scanning; rather, it spawns scan_check_single jobs (one for each layer in the image). Once all of the scan_check_single jobs are complete, this job terminates.
scan_check_single       A security scanning job for a particular layer given by the parameter SHA256SUM. This job breaks up the layer into components and checks each component for vulnerabilities.
scan_check_all          A security scanning job that updates all of the currently scanned images to display the latest vulnerabilities.
update_vuln_db          A job that is created to update MSR’s vulnerability database. It uses an Internet connection to check for database updates through https://dss-cve-updates.docker.com/ and updates the dtr-scanningstore container if a new update is available.
scannedlayermigration   A DTR 2.4 to 2.5 upgrade process that restructures scanned image data.
push_mirror_tag         A job that pushes a tag to another registry after a push mirror policy has been evaluated.
poll_mirror             A global cron that evaluates poll mirroring policies.
webhook                 A job that is used to dispatch a webhook payload to a single endpoint.
nautilus_update_db      The old name for the update_vuln_db job. This may be visible in old log files.
ro_registry             A user-initiated job for manually switching MSR into read-only mode.
tag_pruning             A job for cleaning up unnecessary or unwanted repository tags, which can be configured by repository admins.

Job status

Jobs can have one of the following status values:

Status               Description
waiting              Unclaimed job waiting to be picked up by a worker.
running              The job is currently being run by the specified workerID.
done                 The job has successfully completed.
errors               The job has completed with errors.
cancel_request       The status of the job is monitored by the worker in the database. If the job status changes to cancel_request, the job is canceled by the worker.
cancel               The job has been canceled and was not fully executed.
deleted              The job and its logs have been removed.
worker_dead          The worker for this job has been declared dead and the job will not continue.
worker_shutdown      The worker that was running this job has been gracefully stopped.
worker_resurrection  The worker for this job has reconnected to the database and will cancel this job.

Audit jobs with the web interface

As of DTR 2.2, admins were able to view and audit jobs within the software using the API. MSR 2.6 enhances those capabilities by adding a Job Logs tab under System settings on the user interface. The tab displays a sortable and paginated list of jobs along with links to associated job logs.

Prerequisite
  • Job Queue

View jobs list

To view the list of jobs within MSR, do the following:

  1. Navigate to https://<msr-url> and log in with your MKE credentials.

  2. Select System from the left-side navigation panel, and then click Job Logs. You should see a paginated list of past, running, and queued jobs. By default, Job Logs shows the latest 10 jobs on the first page.

  3. Specify a filtering option. Job Logs lets you filter by:

    • Action

    • Worker ID (the ID of the worker in an MSR replica that is responsible for running the job)

  4. Optional: Click Edit Settings on the right of the filtering options to update your Job Logs settings.

Job details

The following is an explanation of the job-related fields displayed in Job Logs, using the filtered onlinegc action from above as an example.

Job Detail    Description                                                  Example
Action        The type of action or job being performed.                  onlinegc
ID            The ID of the job.                                          ccc05646-569a-4ac4-b8e1-113111f63fb9
Worker        The ID of the worker node responsible for running the job.  8f553c8b697c
Status        Current status of the action or job.                        done
Start Time    Time when the job started.                                  9/23/2018 7:04 PM
Last updated  Time when the job was last updated.                         9/23/2018 7:04 PM
View Logs     Links to the full logs for the job.                         [View Logs]

View job-specific logs

To view the log details for a specific job, do the following:

  1. Click View Logs next to the job’s Last Updated value. You will be redirected to the log detail page of your selected job.

    Notice how the job ID is reflected in the URL while the Action and the abbreviated form of the job ID are reflected in the heading. Also, the JSON lines displayed are job-specific MSR container logs.

  2. Enter or select a different line count to truncate the number of lines displayed. Lines are cut off from the end of the logs.

Audit jobs with the API

Overview

This section covers troubleshooting batch jobs via the API, a capability introduced in DTR 2.2. Starting in MSR 2.6, admins also have the ability to audit jobs using the web interface.

Prerequisite
  • Job Queue

Job capacity

Each job runner has a limited capacity and will not claim jobs that require a higher capacity. You can see the capacity of a job runner via the GET /api/v0/workers endpoint:
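
For example, a request of the following form returns the response shown below (the hostname and credentials are illustrative, following the curl conventions used elsewhere in this guide):

$ curl -u admin:$TOKEN -X GET "https://msr-example.com/api/v0/workers" -H "accept: application/json"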

{
  "workers": [
    {
      "id": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1,
        "scanCheck": 1
      },
      "heartbeatExpiration": "2017-02-18T00:51:02Z"
    }
  ]
}

This means that the worker with replica ID 000000000000 has a capacity of 1 scan and 1 scanCheck. Next, review the list of available jobs:

{
  "jobs": [
    {
      "id": "0",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scan": 1
      }
    },
    {
       "id": "1",
       "workerID": "",
       "status": "waiting",
       "capacityMap": {
         "scan": 1
       }
    },
    {
     "id": "2",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}

If worker 000000000000 notices the jobs in waiting state above, then it will be able to pick up jobs 0 and 2 since it has the capacity for both. Job 1 will have to wait until the previous scan job, 0, is completed. The job queue will then look like:

{
  "jobs": [
    {
      "id": "0",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1
      }
    },
    {
       "id": "1",
       "workerID": "",
       "status": "waiting",
       "capacityMap": {
         "scan": 1
       }
    },
    {
     "id": "2",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}

You can get a list of jobs via the GET /api/v0/jobs/ endpoint. Each job looks like:

{
    "id": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
    "retryFromID": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
    "workerID": "000000000000",
    "status": "done",
    "scheduledAt": "2017-02-17T01:09:47.771Z",
    "lastUpdated": "2017-02-17T01:10:14.117Z",
    "action": "scan_check_single",
    "retriesLeft": 0,
    "retriesTotal": 0,
    "capacityMap": {
          "scan": 1
    },
    "parameters": {
          "SHA256SUM": "1bacd3c8ccb1f15609a10bd4a403831d0ec0b354438ddbf644c95c5d54f8eb13"
    },
    "deadline": "",
    "stopTimeout": ""
}

The JSON fields of interest here are:

  • id: The ID of the job

  • workerID: The ID of the worker in an MSR replica that is running this job

  • status: The current state of the job

  • action: The type of job the worker will actually perform

  • capacityMap: The available capacity a worker needs for this job to run
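
To retrieve the job list and extract just those fields, a request along the following lines works (the hostname and credentials are illustrative, and jq is assumed to be installed):

$ curl -su admin:$TOKEN "https://msr-example.com/api/v0/jobs/" -H "accept: application/json" | \
    jq '.jobs[] | {id, workerID, status, action}'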

Cron jobs

Several of the jobs performed by MSR are run in a recurrent schedule. You can see those jobs using the GET /api/v0/crons endpoint:

{
  "crons": [
    {
      "id": "48875b1b-5006-48f5-9f3c-af9fbdd82255",
      "action": "license_update",
      "schedule": "57 54 3 * * *",
      "retries": 2,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:54:57Z"
    },
    {
      "id": "b1c1e61e-1e74-4677-8e4a-2a7dacefffdc",
      "action": "update_db",
      "schedule": "0 0 3 * * *",
      "retries": 0,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:00:00Z"
    }
  ]
}

The schedule field uses a cron expression following the (seconds) (minutes) (hours) (day of month) (month) (day of week) format. For example, the schedule 57 54 3 * * * for cron ID 48875b1b-5006-48f5-9f3c-af9fbdd82255 runs every day at 03:54:57, which corresponds to the nextRun value 2017-02-22T03:54:57Z in the example JSON response above.
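
As with jobs and workers, the cron list can be retrieved with an authenticated request of the following form (the hostname and credentials are illustrative):

$ curl -u admin:$TOKEN -X GET "https://msr-example.com/api/v0/crons" -H "accept: application/json"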

Enable auto-deletion of job logs

Mirantis Secure Registry has a global setting for auto-deletion of job logs, which allows them to be removed as part of garbage collection. MSR admins can enable auto-deletion of job logs in MSR 2.6 based on specified conditions, which are covered below.

  1. In your browser, navigate to https://<msr-url> and log in with your MKE credentials.

  2. Select System on the left-side navigation panel, which will display the Settings page by default.

  3. Scroll down to Job Logs and turn on Auto-Deletion.

  4. Specify the conditions with which a job log auto-deletion will be triggered.

    MSR allows you to set your auto-deletion conditions based on the following optional job log attributes:

    Name                  Description                                                                                              Example
    Age                   Lets you remove job logs which are older than your specified number of hours, days, weeks, or months.    2 months
    Max number of events  Lets you specify the maximum number of job logs allowed within MSR.                                      100

    If you check and specify both, job logs will be removed from MSR during garbage collection if either condition is met. You should see a confirmation message right away.

  5. Click Start Deletion if you’re ready. Read more about configuring garbage collection if you’re unsure about this operation.

  6. Navigate to System > Job Logs to confirm that onlinegc_joblogs has started.

Note

When you enable auto-deletion of job logs, the logs will be permanently deleted during garbage collection.

Manage users

Authentication and authorization in MSR

With MSR you get to control which users have access to your image repositories.

By default, anonymous users can only pull images from public repositories. They can’t create new repositories or push to existing ones. You can then grant permissions to enforce fine-grained access control to image repositories. For that:

  • Start by creating a user.

    Users are shared across MKE and MSR. When you create a new user in Mirantis Kubernetes Engine, that user becomes available in MSR and vice versa. Registered users can create and manage their own repositories.

    You can also integrate with an LDAP service to manage users from a single place.

  • Extend the permissions by adding the user to a team.

    To extend a user’s permission and manage their permissions over repositories, you add the user to a team. A team defines the permissions users have for a set of repositories.

Organizations and teams

When a user creates a repository, only that user can make changes to the repository settings, and push new images to it.

Organizations take permission management one step further, since they allow multiple users to own and manage a common set of repositories. This is useful when implementing team workflows. With organizations you can delegate the management of a set of repositories and user permissions to the organization administrators.

An organization owns a set of repositories, and defines a set of teams. With teams you can define fine-grained permissions that a team of users has for a set of repositories.

In this example, the ‘Whale’ organization has three repositories and two teams:

  • Members of the blog team can only see and pull images from the whale/java repository,

  • Members of the billing team can manage the whale/golang repository, and push and pull images from the whale/java repository.

Create and manage teams

You can extend a user’s default permissions by granting them individual permissions in other image repositories, by adding the user to a team. A team defines the permissions a set of users have for a set of repositories.

To create a new team, go to the MSR web UI, and navigate to the Organizations page. Then click the organization where you want to create the team.

Navigate to the Teams tab, click the New team button, and give the team a name.

Add users to a team

Once you have created a team, click the team name, to manage its settings. The first thing we need to do is add users to the team. Click the Add Member button and add users to the team.

Manage team permissions

The next step is to define the permissions this team has for a set of repositories. Navigate to the Repositories tab, and click the Add repository button.

Choose the repositories this team has access to, and what permission levels the team members have.

Three permission levels are available:

Permission level  Description
Read only         View repository and pull images.
Read & Write      View repository, pull and push images.
Admin             Manage repository and change its settings, pull and push images.

Delete a team

If you’re an organization owner, you can delete a team in that organization. Navigate to the Team, choose the Settings tab, and click Delete.

Create and manage organizations

When a user creates a repository, only that user has permissions to make changes to the repository.

For team workflows, where multiple users have permissions to manage a set of common repositories, create an organization. By default, MSR has one organization called ‘docker-datacenter’, which is shared between MSR and MKE.

To create a new organization, navigate to the MSR web UI, and go to the Organizations page.

Click the New organization button, and choose a meaningful name for the organization.

Repositories owned by this organization will contain the organization name, so to pull an image from that repository, you’ll use:

docker pull <msr-domain-name>/<organization>/<repository>:<tag>

Click Save to create the organization, and then click the organization to define which users are allowed to manage this organization. These users will be able to edit the organization settings, edit all repositories owned by the organization, and define the user permissions for this organization.

For this, click the Add user button, select the users that you want to grant permissions to manage the organization, and click Save. Then change their permissions from ‘Member’ to Org Owner.

Permission levels

Mirantis Secure Registry allows you to define fine-grained permissions over image repositories.

Administrators

Users are shared across MKE and MSR. When you create a new user in Mirantis Kubernetes Engine, that user becomes available in MSR and vice versa. When you create a trusted admin in MSR, the admin has permissions to manage:

  • Users across MKE and MSR

  • MSR repositories and settings

  • MKE resources and settings

Team permission levels

With Teams you can define the repository permissions for a set of users (read, read-write, and admin).

Repository operation   read   read-write   admin
View/browse            x      x            x
Pull                   x      x            x
Push                          x            x
Start a scan                  x            x
Delete tags                   x            x
Edit description                           x
Set public or private                      x
Manage user access                         x
Delete repository                          x

Note

Team permissions are additive. When a user is a member of multiple teams, they have the highest permission level defined by those teams.

Overall permissions

Permission level                    Description
Anonymous or unauthenticated users  Can search and pull public repositories.
Authenticated users                 Can search and pull public repos, and create and manage their own repositories.
Team member                         Everything a user can do, plus the permissions granted by the team the user is a member of.
Organization owner                  Can manage repositories and teams for the organization.
Admin                               Can manage anything across MKE and MSR.

Manage webhooks

You can configure MSR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images.

Webhook types

Repository events (scope: individual repositories; access level: repository admin; availability: web UI and API):

  • Tag pushed to repository (TAG_PUSH)

  • Tag pulled from repository (TAG_PULL)

  • Tag deleted from repository (TAG_DELETE)

  • Manifest pushed to repository (MANIFEST_PUSH)

  • Manifest pulled from repository (MANIFEST_PULL)

  • Manifest deleted from repository (MANIFEST_DELETE)

  • Security scan completed (SCAN_COMPLETED)

  • Security scan failed (SCAN_FAILED)

  • Image promoted from repository (PROMOTION)

  • Image mirrored from repository (PUSH_MIRRORING)

  • Image mirrored from remote repository (POLL_MIRRORING)

Namespace and organization events (scope: namespaces, organizations; access level: namespace and organization owners; availability: API only):

  • Repository created, updated, or deleted (REPO_CREATED, REPO_UPDATED, and REPO_DELETED)

Global events (scope: global; access level: MSR admin; availability: API only):

  • Security scanner update completed (SCANNER_UPDATE_COMPLETED)

You must have admin privileges to a repository or namespace in order to subscribe to its webhook events. For example, a user must be an admin of repository “foo/bar” to subscribe to its tag push events. An MSR admin can subscribe to any event.

Manage repository webhooks with the web interface

Prerequisites
  • You must have admin privileges to the repository in order to create a webhook.

  • See Webhook types for a list of events you can trigger notifications for using the web interface.

Create a webhook for your repository
  1. In your browser, navigate to https://<msr-url> and log in with your credentials.

  2. Select Repositories from the left-side navigation panel, and then click on the name of the repository that you want to view. Note that you must click the repository name that follows the / after your repository’s namespace.

  3. Select the Webhooks tab, and click New Webhook.

  4. From the drop-down list, select the event that will trigger the webhook.

  5. Set the URL that will receive the JSON payload. Click Test next to the Webhook URL field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.

    {
      "type": "TAG_PUSH",
      "createdAt": "2019-05-15T19:39:40.607337713Z",
      "contents": {
        "namespace": "foo",
        "repository": "bar",
        "tag": "latest",
        "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
        "imageName": "foo/bar:latest",
        "os": "linux",
        "architecture": "amd64",
        "author": "",
        "pushedAt": "2015-01-02T15:04:05Z"
      },
      "location": "/repositories/foo/bar/tags/latest"
    }
    
  6. Expand “Show advanced settings” to paste the TLS certificate associated with your webhook URL. For testing purposes, you can test over HTTP instead of HTTPS.

  7. Click Create. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.

As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry using the API.

Manage repository webhooks with the API

Triggering notifications

Refer to Webhook types for a list of events you can trigger notifications for via the API.

Your MSR hostname serves as the base URL for your API requests.

From the MSR web UI, click API in the left-side navigation panel to explore the API resources and endpoints. Click Execute to send your API request.

API requests via curl

You can use curl to send HTTP or HTTPS API requests. Note that you will have to specify skipTLSVerification: true on your request in order to test the webhook endpoint over HTTP.

Example curl request
curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks" -H "accept: application/json" -H "content-type: application/json" -d "{ \"endpoint\": \"https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019\", \"key\": \"maria-testorg/lab-words\", \"skipTLSVerification\": true, \"type\": \"TAG_PULL\"}"
Example JSON response
{
  "id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
  "type": "TAG_PULL",
  "key": "maria-testorg/lab-words",
  "endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
  "authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
  "createdAt": "2019-05-22T01:55:20.471286995Z",
  "lastSuccessfulAt": "0001-01-01T00:00:00Z",
  "inactive": false,
  "tlsCert": "",
  "skipTLSVerification": true
}

Subscribe to events

To subscribe to events, send a POST request to /api/v0/webhooks with the following JSON payload:

Example usage
{
  "type": "TAG_PUSH",
  "key": "foo/bar",
  "endpoint": "https://example.com"
}

The keys in the payload are:

  • type: The event type to subscribe to.

  • key: The namespace/organization or repo to subscribe to. For example, “foo/bar” to subscribe to pushes to the “bar” repository within the namespace/organization “foo”.

  • endpoint: The URL to send the JSON payload to.

Normal users must supply a “key” to scope a particular webhook event to a repository or a namespace/organization. MSR admins can choose to omit this, meaning a POST event notification of your specified type will be sent for all MSR repositories and namespaces.

Receive a payload

Whenever your specified event type occurs, MSR will send a POST request to the given endpoint with a JSON-encoded payload. The payload will always have the following wrapper:

{
  "type": "...",
  "createdAt": "2012-04-23T18:25:43.511Z",
  "contents": {...}
}
  • type refers to the event type received at the specified subscription endpoint.

  • contents refers to the payload of the event itself. Each event is different, therefore the structure of the JSON object in contents will change depending on the event type. See Content structure for more details.

Test payload subscriptions

Before subscribing to an event, you can view and test your endpoints using fake data. To send a test payload, send a POST request to /api/v0/webhooks/test with the following payload:

{
  "type": "...",
  "endpoint": "https://www.example.com/"
}

Change type to the event type that you want to receive. MSR will then send an example payload to your specified endpoint. The example payload sent is always the same.
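
Following the same curl pattern as the earlier webhook examples (the hostname, credentials, and endpoint are illustrative):

$ curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks/test" -H "accept: application/json" -H "content-type: application/json" -d "{ \"type\": \"TAG_PUSH\", \"endpoint\": \"https://www.example.com/\" }"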

Content structure

Comments after (//) are for informational purposes only, and the example payloads have been clipped for brevity.

Repository event content structure

Tag push

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just pushed
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who pushed the tag
  "pushedAt": "",     // (string) JSON-encoded timestamp of when the push occurred
  ...
}

Tag delete

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just deleted
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who deleted the tag
  "deletedAt": "",     // (string) JSON-encoded timestamp of when the delete occurred
  ...
}

Manifest push

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who pushed the manifest
  ...
}

Manifest delete

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who deleted the manifest
  "deletedAt": "",    // (string) JSON-encoded timestamp of when the delete occurred
  ...
}

Security scan completed

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag scanned
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "scanSummary": {
    "namespace": "",          // (string) repository's namespace/organization name
    "repository": "",         // (string) repository name
    "tag": "",                // (string) the name of the tag just pushed
    "critical": 0,            // (int) number of critical issues, where CVSS >= 7.0
    "major": 0,               // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
    "minor": 0,               // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
    "last_scan_status": 0,    // (int) enum; see scan status section
    "check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
    ...
  }
}

Security scan failed

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag scanned
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "error": "",        // (string) the error that occurred while scanning
  ...
}

Namespace-specific event structure

Repository event (created/updated/deleted)

{
  "namespace": "",    // (string) repository's namespace/organization name
  "repository": "",   // (string) repository name
  "event": "",        // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
  "author": "",       // (string) the name of the user responsible for the event
  "data": {}          // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}

Global event structure

Security scanner update complete

{
  "scanner_version": "",
  "scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
  "db_version": 0,          // (int) newly updated database version
  "db_updated_at": "",      // (string) JSON-encoded timestamp of when the database updated
  "success": <true|false>   // (bool) whether the update was successful
  "replicas": {             // (object) a map keyed by replica ID containing update information for each replica
    "replica_id": {
      "db_updated_at": "",  // (string) JSON-encoded time of when the replica updated
      "version": "",        // (string) version updated to
      "replica_id": ""      // (string) replica ID
    },
    ...
  }
}

Security scan status codes
  • 0: Failed. An error occurred checking an image’s layer

  • 1: Unscanned. The image has not yet been scanned

  • 2: Scanning. Scanning in progress

  • 3: Pending. The image will be scanned when a worker is available

  • 4: Scanned. The image has been scanned but vulnerabilities have not yet been checked

  • 5: Checking. The image is being checked for vulnerabilities

  • 6: Completed. The image has been fully security scanned

View and manage existing subscriptions
View all subscriptions

To view existing subscriptions, send a GET request to /api/v0/webhooks. As a normal user (i.e., not an MSR admin), this will show all of your current subscriptions across every namespace/organization and repository. As an MSR admin, this will show every webhook configured for your MSR.
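
For example, using the same curl conventions as earlier in this section (the hostname and credentials are illustrative):

$ curl -u test-user:$TOKEN -X GET "https://msr-example.com/api/v0/webhooks" -H "accept: application/json"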

The API response will be in the following format:

[
  {
    "id": "",        // (string): UUID of the webhook subscription
    "type": "",      // (string): webhook event type
    "key": "",       // (string): the individual resource this subscription is scoped to
    "endpoint": "",  // (string): the endpoint to send POST event notifications to
    "authorID": "",  // (string): the user ID resposible for creating the subscription
    "createdAt": "", // (string): JSON-encoded datetime when the subscription was created
  },
  ...
]

View subscriptions for a particular resource

You can also view subscriptions for a given resource that you are an admin of. For example, if you have admin rights to the repository “foo/bar”, you can view all subscriptions (even other people’s) from a particular API endpoint. These endpoints are:

  • GET /api/v0/repositories/{namespace}/{repository}/webhooks: View all webhook subscriptions for a repository

  • GET /api/v0/repositories/{namespace}/webhooks: View all webhook subscriptions for a namespace/organization

Delete a subscription

To delete a webhook subscription, send a DELETE request to /api/v0/webhooks/{id}, replacing {id} with the webhook subscription ID which you would like to delete.

Only an MSR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only delete subscriptions for repositories which you manage.
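
For example, to delete the TAG_PULL subscription created earlier in this section (the hostname and credentials are illustrative):

$ curl -u test-user:$TOKEN -X DELETE "https://msr-example.com/api/v0/webhooks/b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155"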

Manage repository events

Audit repository events

Starting in DTR 2.6, each repository page includes an Activity tab which displays a sortable and paginated list of the most recent events within the repository. This offers better visibility along with the ability to audit events. Event types listed vary according to your repository permission level. Additionally, MSR admins can enable auto-deletion of repository events as part of maintenance and cleanup.

In the following section, we will show you how to view and audit the list of events in a repository. We will also cover the event types associated with your permission level.

View List of Events

As of DTR 2.3, admins were able to view a list of MSR events using the API. MSR 2.6 enhances that feature by showing a permission-based events list for each repository page on the web interface. To view the list of events within a repository, do the following:

  1. Navigate to https://<msr-url> and log in with your MSR credentials.

  2. Select Repositories from the left-side navigation panel, and then click on the name of the repository that you want to view. Note that you will have to click on the repository name following the / after the specific namespace for your repository.

  3. Select the Activity tab. You should see a paginated list of the latest events based on your repository permission level. By default, Activity shows the latest 10 events and excludes pull events, which are only visible to repository and MSR admins.

    • If you’re a repository admin or an MSR admin, uncheck Exclude pull to view pull events. This should give you a better understanding of who is consuming your images.

    • To update your event view, select a different time filter from the drop-down list.

Activity Stream

The following table breaks down the data included in an event and uses the highlighted Create Promotion Policy event as an example.

Event detail

Description

Example

Label

Friendly name of the event.

Create Promotion Policy

Repository

This will always be the repository in review, following the <user-or-org>/<repository_name> convention outlined in Create a repository.

test-org/test-repo-1

Tag

Tag affected by the event, when applicable.

test-org/test-repo-1:latest where latest is the affected tag

SHA

The digest value for CREATE operations such as creating a new image tag or a promotion policy.

sha256:bbf09ba3

Type

Event type. Possible values are: CREATE, GET, UPDATE, DELETE, SEND, FAIL and SCAN.

CREATE

Initiated by

The actor responsible for the event. For user-initiated events, this will reflect the user ID and link to that user’s profile. For image events triggered by a policy – pruning, pull / push mirroring, or promotion – this will reflect the relevant policy ID except for manual promotions where it reflects PROMOTION MANUAL_P, and link to the relevant policy page. Other event actors may not include a link.

PROMOTION CA5E7822

Date and Time

When the event happened in your configured time zone.

2018 9:59 PM

Event Audits

Given the level of detail on each event, it should be easy for MSR and security admins to determine what events have taken place inside of MSR. For example, when an image which shouldn’t have been deleted ends up getting deleted, the security admin can determine when and who initiated the deletion.

Event Permissions

Repository event

Description

Minimum permission level

Push

Refers to Create Manifest and Update Tag events. Learn more about pushing images.

Authenticated users

Scan

Requires security scanning to be set up by an MSR admin. Once enabled, this will display as a SCAN event type.

Authenticated users

Promotion

Refers to a Create Promotion Policy event which links to the Promotions tab of the repository where you can edit the existing promotions. See Promotion Policies for different ways to promote an image.

Repository admin

Delete

Refers to “Delete Tag” events. Learn more about Delete images.

Authenticated users

Pull

Refers to “Get Tag” events. Learn more about Pull an image.

Repository admin

Mirror

Refers to Pull mirroring and Push mirroring events. See Mirror images to another registry and Mirror images from another registry for more details.

Repository admin

Create repo

Refers to Create Repository events. See Create a repository for more details.

Authenticated users


Enable Auto-Deletion of Repository Events

Mirantis Secure Registry has a global setting for repository event auto-deletion. This allows event records to be removed as part of garbage collection. MSR administrators can enable auto-deletion of repository events in DTR 2.6 based on specified conditions which are covered below.

  1. In your browser, navigate to https://<msr-url> and log in with your admin credentials.

  2. Select System from the left-side navigation panel, which displays the Settings page by default.

  3. Scroll down to Repository Events and turn on Auto-Deletion.

  4. Specify the conditions with which an event auto-deletion will be triggered.

MSR allows you to set your auto-deletion conditions based on the following optional repository event attributes:

Name                  Description                                                                                 Example
Age                   Lets you remove events older than your specified number of hours, days, weeks, or months.   2 months
Max number of events  Lets you specify the maximum number of events allowed in the repositories.                  6000

If you check and specify both, events in your repositories will be removed during garbage collection if either condition is met. You should see a confirmation message right away.

  5. Click Start GC if you are ready.

  6. Navigate to System > Job Logs to confirm that onlinegc has taken place.


Promotion policies and monitoring

Promotion policies overview

Mirantis Secure Registry allows you to automatically promote and mirror images based on a policy. This way you can create a Docker-centric development pipeline. In MSR 2.7, you have the option to promote applications with the experimental docker app CLI addition. Note that scanning-based promotion policies do not take effect until all application-bundled images have been scanned.

You can mix and match promotion policies, mirroring policies, and webhooks to create flexible development pipelines that integrate with your existing CI/CD systems.

Promote an image using policies

One way to create a promotion pipeline is to automatically promote images to another repository.

You start by defining a promotion policy that’s specific to a repository. When someone pushes an image to that repository, MSR checks if it complies with the policy you set up and automatically pushes the image to another repository.

Learn how to promote an image using policies.

Mirror images to another registry

You can also promote images between different MSR deployments. This not only allows you to create promotion policies that span multiple MSRs, but also allows you to mirror images for security and high availability.

You start by configuring a repository with a mirroring policy. When someone pushes an image to that repository, MSR checks if the policy is met, and if so pushes it to another MSR deployment or Docker Hub.

Learn how to mirror images to another registry.

Mirror images from another registry

Another option is to mirror images from another MSR deployment. You configure a repository to poll for changes in a remote repository. All new images pushed into the remote repository are then pulled into MSR.

This is an easy way to configure a mirror for high availability since you won’t need to change firewall rules that are in place for your environments.

Learn how to mirror images from another registry.

Promote an image using policies

Mirantis Secure Registry allows you to create image promotion pipelines based on policies.

In this example we will create an image promotion pipeline such that:

  1. Developers iterate and push their builds to the dev/website repository.

  2. When the team creates a stable build, they make sure their image is tagged with -stable.

  3. When a stable build is pushed to the dev/website repository, it will automatically be promoted to qa/website so that the QA team can start testing.

With this promotion policy, the development team doesn’t need access to the QA repositories, and the QA team doesn’t need access to the development repositories.

Configure your repository

Once you’ve created a repository, navigate to the repository page on the MSR web interface, and select the Promotions tab.

Note

Only administrators can globally create and edit promotion policies. By default, users can only create and edit promotion policies on repositories within their user namespace.

Click New promotion policy, and define the image promotion criteria.

MSR allows you to set your promotion policy based on the following image attributes:

Image attributes

Name

Description

Example

Tag name

Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Promote to Target if Tag name ends in stable

Component

Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Promote to Target if Component name starts with b

Vulnerabilities

Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number

Promote to Target if Critical vulnerabilities = 3

License

Whether the image uses an intellectual property license and is one of or not one of your specified words

Promote to Target if License name = docker

Now you need to choose what happens to an image that meets all the criteria.

Select the target organization or namespace and repository where the image is going to be pushed. You can choose to keep the image tag, or transform the tag into something more meaningful in the destination repository, by using a tag template.

In this example, if an image in the dev/website repository is tagged with a word that ends in “stable”, MSR will automatically push that image to the qa/website repository. In the destination repository the image will be tagged with the timestamp of when the image was promoted.

Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted. To confirm, select the Promotions tab on the dev/website repository.

You can also review the newly pushed tag in the target repository by navigating to qa/website and selecting the Tags tab.
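Promotion policies can also be created programmatically. The following is a hedged sketch only: it assumes the promotionPolicies endpoint visible in the MSR API explorer, and the JSON field names are illustrative, so confirm the exact request schema in the API reference before use.

# Sketch: create a promotion policy on dev/website via the API.
# The field names in the JSON body are assumptions; verify them in the API explorer.
curl -u <username>:<accesstoken> \
  -H 'Content-Type: application/json' \
  -X POST "https://<msr-url>/api/v0/repositories/dev/website/promotionPolicies" \
  -d '{"targetRepository": "qa/website", "tagTemplate": "%n", "rules": [{"field": "tag", "operator": "endsWith", "values": ["stable"]}]}'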

Where to go next

Mirror images to another registry

Mirantis Secure Registry allows you to create mirroring policies for a repository. When an image gets pushed to a repository and meets the mirroring criteria, MSR automatically pushes it to a repository in a remote Mirantis Secure Registry or Hub registry.

This not only allows you to mirror images but also allows you to create image promotion pipelines that span multiple MSR deployments and datacenters.

In this example we will create an image mirroring policy such that:

  1. Developers iterate and push their builds to the msr-example.com/dev/website repository in the MSR deployment dedicated to development.

  2. When the team creates a stable build, they make sure their image is tagged with -stable.

  3. When a stable build is pushed to msr-example.com/dev/website, it will automatically be pushed to qa-example.com/qa/website, mirroring the image and promoting it to the next stage of development.

With this mirroring policy, the development team does not need access to the QA cluster, and the QA team does not need access to the development cluster.

You need to have permissions to push to the destination repository in order to set up the mirroring policy.

Configure your repository connection

Once you have created a repository, navigate to the repository page on the web interface, and select the Mirrors tab.

Click New mirror to define where the image will be pushed if it meets the mirroring criteria.

Under Mirror direction, choose Push to remote registry. Specify the following details:

Field

Description

Registry type

You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to https://index.docker.io

Username and password or access token

Your credentials in the remote repository you wish to push to. To use an access token instead of your password, see authentication token.

Repository

Enter the namespace and the repository name, separated by a / (for example, <namespace>/<reponame>)

Show advanced settings

Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed TLS certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing https://<msr-domain>/ca. Remote certificate authority is optional for a remote repository in Docker Hub.

Note

Make sure the account you use for the integration has permissions to write to the remote repository.

Click Connect to test the integration.

In this example, the image gets pushed to the qa/example repository of an MSR deployment available at qa-example.com using a service account that was created just for mirroring images between repositories.

Next, set your push triggers. MSR allows you to set your mirroring policy based on the following image attributes:

Name

Description

Example

Tag name

Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Copy image to remote repository if Tag name ends in stable

Component

Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Copy image to remote repository if Component name starts with b

Vulnerabilities

Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number

Copy image to remote repository if Critical vulnerabilities = 3

License

Whether the image uses an intellectual property license and is one of or not one of your specified words

Copy image to remote repository if License name = docker

You can choose to keep the image tag, or transform the tag into something more meaningful in the remote registry by using a tag template.

In this example, if an image in the dev/website repository is tagged with a word that ends in stable, MSR will automatically push that image to the MSR deployment available at qa-example.com. The image is pushed to the qa/example repository and is tagged with the timestamp of when the image was promoted.

Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted to qa/example in the remote trusted registry at qa-example.com.

Metadata persistence

When an image is pushed to another registry using a mirroring policy, scanning and signing data is not persisted in the destination repository.

If you have scanning enabled for the destination repository, MSR is going to scan the image pushed. If you want the image to be signed, you need to do it manually.

Where to go next

Mirror images from another registry.

Mirror images from another registry

Mirantis Secure Registry allows you to set up a mirror of a repository by constantly polling it and pulling new image tags as they are pushed. This ensures your images are replicated across different registries for high availability. It also makes it easy to create a development pipeline that allows different users access to a certain image without giving them access to everything in the remote registry.

To mirror a repository, start by creating a repository in the MSR deployment that will serve as your mirror. Previously, you were only able to set up pull mirroring from the API. Starting in DTR 2.6, you can also mirror and pull from a remote MSR or Docker Hub repository.

Pull mirroring on the web interface

To get started, navigate to https://<msr-url> and log in with your MKE credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to view. Note that you must click the part of the repository name that follows the /, after your repository's namespace.

Next, select the Mirrors tab and click New mirror. On the New mirror page, choose Pull from remote registry.

Specify the following details:

Field

Description

Registry type

You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to https://index.docker.io

Username and password or access token

Your credentials in the remote repository you wish to poll from. To use an access token instead of your password, see authentication token.

Repository

Enter the namespace and the repository name, separated by a / (for example, <namespace>/<reponame>)

Show advanced settings

Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing https://<msr-domain>/ca. Remote certificate authority is optional for a remote repository in Docker Hub.

After you have filled out the details, click Connect to test the integration.

Once you have successfully connected to the remote repository, new buttons appear:

  • Click Save to mirror only future tags, or

  • To mirror all existing and future tags, click Save & Apply instead.

Pull mirroring on the API

There are a few different ways to send your MSR API requests. To explore the different API resources and endpoints from the web interface, click API on the bottom left-side navigation panel.

Search for the endpoint:

POST /api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies

Click Try it out and enter your HTTP request details. namespace and reponame refer to the repository that will be poll mirrored. The boolean field, initialEvaluation, corresponds to Save when set to false and will only mirror images created after your API request. Setting it to true corresponds to Save & Apply which means all tags in the remote repository will be evaluated and mirrored. The other body parameters correspond to the relevant remote repository details that you can see on the MSR web interface. As a best practice, use a service account just for this purpose. Instead of providing the password for that account, you should pass an authentication token.

If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can get it by accessing https://<msr-domain>/ca. The remoteCA field is optional for mirroring a Docker Hub repository.

Click Execute. On success, the API returns an HTTP 201 response.
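For reference, the same request can be issued with curl. This is a minimal sketch: initialEvaluation is documented above, while the remaining JSON field names are illustrative assumptions that you should check against the request schema displayed in the API explorer.

# Sketch: configure pull mirroring via the API.
# Field names other than initialEvaluation are assumptions.
curl -u <username>:<accesstoken> \
  -H 'Content-Type: application/json' \
  -X POST "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pollMirroringPolicies" \
  -d '{"initialEvaluation": true, "username": "<service-account>", "authToken": "<token>", "remoteHost": "https://<remote-msr-url>", "remoteRepository": "<namespace>/<reponame>", "remoteCA": "<remote-ca-pem>"}'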

Review the poll mirror job log

Once configured, the system polls for changes in the remote repository and runs the poll_mirror job every 30 minutes. On success, the system will pull in new images and mirror them in your local repository. Starting in DTR 2.6, you can filter for poll_mirror jobs to review when the job last ran. To manually trigger the job and force pull mirroring, use the POST /api/v0/jobs API endpoint and specify poll_mirror as your action.

curl -X POST "https:/<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"poll_mirror\"}"

See Manage jobs to learn more about job management within MSR.

Where to go next

Mirror images to another registry.

Template reference

When defining promotion policies you can use templates to dynamically name the tag that is going to be created.

Important

Whenever an image promotion event occurs, the MSR timestamp for the event is in UTC (Coordinated Universal Time). That timestamp, however, is converted by the browser and presented in the user's time zone. Conversely, if a time-based tag is applied to a target image, MSR captures it in UTC but cannot convert it to the user's time zone, because tags are immutable strings.

You can use these template keywords to define your new tag:

Template

Description

Example result

%n

The tag to promote

1, 4.5, latest

%A

Day of the week

Sunday, Monday

%a

Day of the week, abbreviated

Sun, Mon, Tue

%w

Day of the week, as a number

0, 1, 6

%d

Number for the day of the month

01, 15, 31

%B

Month

January, December

%b

Month, abbreviated

Jan, Jun, Dec

%m

Month, as a number

01, 06, 12

%Y

Year

1999, 2015, 2048

%y

Year, two digits

99, 15, 48

%H

Hour, in 24 hour format

00, 12, 23

%I

Hour, in 12 hour format

01, 10, 12

%p

Period of the day

AM, PM

%M

Minute

00, 10, 59

%S

Second

00, 10, 59

%f

Microsecond

000000, 999999

%Z

Name for the timezone

UTC, PST, EST

%j

Day of the year

001, 200, 366

%W

Week of the year

00, 10, 53
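As a worked example based on the keywords above, the template %n-%Y-%m-%d combines the promoted tag with the promotion date. Promoting a tag named 1.0 on 15 June 2021 would produce the following tag in the target repository:

1.0-2021-06-15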

Use Helm charts

Helm is a tool that manages Kubernetes packages called charts, which are put to use in defining, installing, and upgrading Kubernetes applications. These charts, in conjunction with Helm tooling, deploy applications into Kubernetes clusters. Charts consist of a collection of files and directories, arranged in a particular structure and packaged as a .tgz file. Charts define Kubernetes objects, such as the Service and DaemonSet objects used in the application under deployment.

MSR enables you to use Helm to store and serve Helm charts, thus allowing users to push charts to and pull charts from MSR repositories using the Helm CLI and the MSR API.

MSR supports both Helm v2 and v3. The two versions differ significantly with regard to the Helm CLI, which affects the applications under deployment rather than Helm chart support in MSR. One key difference is that while Helm v2 includes both the Helm CLI and Tiller (Helm Server), Helm v3 includes only the Helm CLI. Helm charts (referred to as releases following their installation in Kubernetes) are managed by Tiller in Helm v2 and by Helm CLI in Helm v3.

Note

For a breakdown of the key differences between Helm v2 and Helm v3, refer to Helm official documentation.

Add a Helm chart repository

Users can add a Helm chart repository to MSR through the MSR web UI.

  1. Log in to the MSR web UI.

  2. Click Repositories in the navigation menu.

  3. Click New repository.

  4. In the name field, enter the name for the new repository and click Create.

  5. To add the new MSR repository as a Helm repository:

    helm repo add <reponame> https://<msrhost>/charts/<namespace>/<reponame> --username <username> --password <password> --ca-file ca.crt
    
    "<reponame>" has been added to your repositories
    
  6. To verify that the new MSR Helm repository has been added:

    helm repo list
    
    NAME        URL
    <reponame>  https://<msrhost>/charts/<namespace>/<reponame>
    

Pull charts and their provenance files

Helm charts can be pulled from MSR Helm repositories using either the MSR API or the Helm CLI.

Pull with the MSR API

Note

Though the MSR API can be used to pull both Helm charts and provenance files, it is not possible to use it to pull both at the same time.

Pull a chart

To pull a Helm chart:

curl -u <username>:<password> \
  --request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz \
  -H "accept: application/octet-stream" \
  -o <chartname>-<chartversion>.tgz \
  --cacert ca.crt
Pull a provenance file

To pull a provenance file:

curl -u <username>:<password> \
  --request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz.prov \
  -H "accept: application/octet-stream" \
  -o <chartname>-<chartversion>.tgz.prov \
  --cacert ca.crt
Pull with the Helm CLI

Note

Though the Helm CLI can be used to pull a Helm chart by itself or a Helm chart and its provenance file, it is not possible to use the Helm CLI to pull a provenance file by itself.

Pull a chart

Use the helm pull CLI command to pull a Helm chart:

helm pull <reponame>/<chartname> --version <chartversion>
ls
ca.crt  <chartname>-<chartversion>.tgz

Alternatively, use the following command:

helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt
Pull a chart and a provenance file in tandem

Use the helm pull CLI command with the --prov option to pull a Helm chart and a provenance file at the same time:

helm pull <reponame>/<chartname> --version <chartversion> --prov

ls
ca.crt  <chartname>-<chartversion>.tgz  <chartname>-<chartversion>.tgz.prov

Alternatively, use the following command:

helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt --prov

Push charts and their provenance files

You can use the MSR API or the Helm CLI to push Helm charts and their provenance files to an MSR Helm repository.

Note

Pushing and pulling Helm charts can be done with or without a provenance file.

Pushing charts with the MSR API

Using the MSR API, you can push Helm charts with application/octet-stream or multipart/form-data.

Pushing with application/octet-stream

To push a Helm chart through the MSR API with application/octet-stream:

curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Pushing with multipart/form-data

To push a Helm chart through the MSR API with multipart/form-data:

curl -F "chart=@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force pushing a chart

To overwrite an existing chart, turn off repository immutability and include a ?force query parameter in the HTTP request.

  1. Navigate to Repositories and click the Settings tab.

  2. Under Immutability, select Off.

To force push a Helm chart using the MSR API:

curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Pushing provenance files with the MSR API

You can use the MSR API to separately push provenance files related to Helm charts.

To push a provenance file through the MSR API:

curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/prov -u <username>:<password> --cacert ca.crt

Note

Attempting to push a provenance file for a nonexistent chart will result in an error.

Force pushing a provenance file

To force push a provenance file using the MSR API:

curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/prov?force" -u <username>:<password> --cacert ca.crt
Pushing a chart and its provenance file with a single API request

To push a Helm chart and a provenance file with a single API request:

curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" https://msrhost/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force pushing a chart and a provenance file

To force push both a Helm chart and a provenance file using a single API request:

curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Pushing charts with the Helm CLI

Note

To push a Helm chart using the Helm CLI, first install the helm push plugin from chartmuseum/helm-push. It is not possible to push a provenance file using the Helm CLI.
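The plugin is typically installed directly from its GitHub repository, as described in the chartmuseum/helm-push documentation:

helm plugin install https://github.com/chartmuseum/helm-push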

Use the helm push CLI command to push a Helm chart:

helm push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt
Force pushing a chart

Use the helm push CLI command with the --force option to force push a Helm chart:

helm push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt --force

View charts in a Helm repository

View charts in a Helm repository using either the MSR API or the MSR web UI.

Viewing charts with the MSR API

To view charts that have been pushed to a Helm repository using the MSR API, consider the following options:

Option

CLI command

View the index file

curl --request GET
https://<msrhost>/charts/<namespace>/<reponame>/index.yaml -u
<username>:<password> --cacert ca.crt

View a paginated list of all charts

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts -u <username>:<password> \
--cacert ca.crt

View a paginated list of chart versions

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname> -u <username>:<password> \
--cacert ca.crt

Describe a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion> -u \
<username>:<password> --cacert ca.crt

Return the default values of a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/values -u \
<username>:<password> --cacert ca.crt

Produce a template of a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/template -u \
<username>:<password> --cacert ca.crt
Viewing charts with the MSR web UI

Use the MSR web UI to view the MSR Helm repository charts.

  1. In the MSR web UI, navigate to Repositories.

  2. Click the name of the repository that contains the charts you want to view. The page will refresh to display the details of the selected Helm repository.

  3. Click the Charts tab. The page will refresh to display all the repository charts.

View

UI sequence

Chart versions

Click the View Chart button associated with the required Helm repository.

Chart description

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

Default values

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

  3. Click Configuration.

Chart templates

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

  3. Click Template.

Delete charts from a Helm repository

You can only delete charts from MSR Helm repositories using the MSR API, not the web UI.

To delete a version of a particular chart from a Helm repository through the MSR API:

curl --request DELETE https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion> -u <username>:<password> --cacert ca.crt

Helm chart linting

Helm chart linting can ensure that Kubernetes YAML files and Helm charts adhere to a set of best practices, with a focus on production readiness and security.

A set of established rules forms the basis of Helm chart linting. The process generates a report that you can use to take any necessary actions.

Implement Helm linting

Perform Helm linting using either the MSR web UI or the MSR API.

Helm linting with the web UI
  1. Open the MSR web UI.

  2. Navigate to Repositories.

  3. Click the name of the repository that contains the chart you want to lint.

  4. Click the Charts tab.

  5. Click the View Chart button associated with the required Helm chart.

  6. Click the View Chart button for the required chart version.

  7. Click the Linting Summary tab.

  8. Click the Lint Chart button to generate a Helm chart linting report.

Helm linting with the API
  1. Run the Helm chart linter on a particular chart.

    curl -k -H "Content-Type: application/json" --request POST "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lint" -u <username>:<password>
    
  2. Generate a Helm chart linting report.

    curl -k -X GET "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lintsummary" -u <username>:<password>
    
Helm chart linting rules

Helm linting reports present the linting rules, rule descriptions, and remediations, as detailed in the following table.

Name

Description

Remediation

dangling-service

Indicates when services do not have any associated deployments.

Confirm that your service’s selector correctly matches the labels on one of your deployments.

default-service-account

Indicates when pods use the default service account.

Create a dedicated service account for your pod. Refer to Configure Service Accounts for Pods for details.

deprecated-service-account-field

Indicates when deployments use the deprecated serviceAccount field.

Use the serviceAccountName field instead.

drop-net-raw-capability

Indicates when containers do not drop NET_RAW capability.

NET_RAW makes it so that an application within the container is able to craft raw packets, use raw sockets, and bind to any address. Remove this capability in the containers under containers security contexts.

env-var-secret

Indicates when objects use a secret in an environment variable.

Do not use raw secrets in environment variables. Instead, either mount the secret as a file or use a secretKeyRef. Refer to Using Secrets for details.

mismatching-selector

Indicates when deployment selectors fail to match the pod template labels.

Confirm that your deployment selector correctly matches the labels in its pod template.

no-anti-affinity

Indicates when deployments with multiple replicas fail to specify inter-pod anti-affinity, to ensure that the orchestrator attempts to schedule replicas on different nodes.

Specify anti-affinity in your pod specification to ensure that the orchestrator attempts to schedule replicas on different nodes. Using podAntiAffinity, specify a labelSelector that matches pods for the deployment, and set the topologyKey to kubernetes.io/hostname. Refer to Inter-pod affinity and anti-affinity for details.

no-extensions-v1beta

Indicates when objects use deprecated API versions under extensions/ v1beta.

Migrate using the apps/v1 API versions for the objects. Refer to Deprecated APIs Removed In 1.16 for details.

no-liveness-probe

Indicates when containers fail to specify a liveness probe.

Specify a liveness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details.

no-read-only-root-fs

Indicates when containers are running without a read-only root filesystem.

Set readOnlyRootFilesystem to true in the container securityContext.

no-readiness-probe

Indicates when containers fail to specify a readiness probe.

Specify a readiness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details.

non-existent-service-account

Indicates when pods reference a service account that is not found.

Create the missing service account, or refer to an existing service account.

privileged-container

Indicates when deployments have containers running in privileged mode.

Do not run your container as privileged unless it is required.

required-annotation-email

Indicates when objects do not have an email annotation with a valid email address.

Add an email annotation to your object with the email address of the object’s owner.

required-label-owner

Indicates when objects do not have an owner label.

Add an owner label to your object with the name of the object's owner.

run-as-non-root

Indicates when containers are not set to runAsNonRoot.

Set runAsUser to a non-zero number and runAsNonRoot to true in your pod or container securityContext. Refer to Configure a Security Context for a Pod or Container for details.

ssh-port

Indicates when deployments expose port 22, which is commonly reserved for SSH access.

Ensure that non-SSH services are not using port 22. Confirm that any actual SSH servers have been vetted.

unset-cpu-requirements

Indicates when containers do not have CPU requests and limits set.

Set CPU requests and limits for your container based on its requirements. Refer to Requests and limits for details.

unset-memory-requirements

Indicates when containers do not have memory requests and limits set.

Set memory requests and limits for your container based on its requirements. Refer to Requests and limits for details.

writable-host-mount

Indicates when containers mount a host path as writable.

Set containers to mount host paths as readOnly, if you need to access files on the host.

cluster-admin-role-binding

CIS Benchmark 5.1.1 Ensure that the cluster-admin role is only used where required.

Create and assign a separate role that has access to specific resources/actions needed for the service account.

docker-sock

Alert on deployments with docker.sock mounted in containers.

Ensure the Docker socket is not mounted inside any containers by removing the associated Volume and VolumeMount in deployment yaml specification. If the Docker socket is mounted inside a container it could allow processes running within the container to execute Docker commands which would effectively allow for full control of the host.

exposed-services

Alert on services for forbidden types.

Ensure containers are not exposed through a forbidden service type such as NodePort or LoadBalancer.

host-ipc

Alert on pods/deployment-likes with sharing host’s IPC namespace.

Ensure the host’s IPC namespace is not shared.

host-network

Alert on pods/deployment-likes with sharing host’s network namespace.

Ensure the host’s network namespace is not shared.

host-pid

Alert on pods/deployment-likes with sharing host’s process namespace.

Ensure the host’s process namespace is not shared.

privilege-escalation-container

Alert on containers if allowing privilege escalation that could gain more privileges than its parent process.

Ensure containers do not allow privilege escalation by setting allowPrivilegeEscalation=false. See Configure a Security Context for a Pod or Container for more details.

privileged-ports

Alert on deployments with privileged ports mapped in containers.

Ensure privileged ports [0, 1024] are not mapped within containers.

sensitive-host-mounts

Alert on deployments with sensitive host system directories mounted in containers.

Ensure sensitive host system directories are not mounted in containers by removing those Volumes and VolumeMounts.

unsafe-proc-mount

Alert on deployments with unsafe /proc mount (procMount=Unmasked) that will bypass the default masking behavior of the container runtime.

Ensure the container does not unsafely expose parts of /proc by setting procMount=Default. Unmasked ProcMount bypasses the default masking behavior of the container runtime. See Pod Security Standards for more details.

unsafe-sysctls

Alert on deployments specifying unsafe sysctls, which can lead to severe problems such as incorrect container behavior.

Ensure the container does not allow unsafe allocation of system resources by removing unsafe sysctls configurations. For more details, see Using sysctls in a Kubernetes Cluster and Configure namespaced kernel parameters (sysctls) at runtime.

Helm limitations

Storage redirects

The option to redirect clients on pull for Helm repositories is present in the web UI. However, it is currently ineffective. Refer to the relevant issue on GitHub for more information.

MSR API endpoints

For the following endpoints, note that while the Swagger API Reference does not specify example responses for HTTP 200 codes, this is due to a Swagger bug and responses will be returned.

# Get chart or provenance file from repo
GET     https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>
# Template a chart version
GET     https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Chart storage limit

Users can safely store up to 100,000 charts per repository; storing a greater number may compromise some MSR functionality.

Tag pruning

Tag pruning is the process of cleaning up unnecessary or unwanted repository tags. As of v2.6, you can configure Mirantis Secure Registry (MSR) to automatically perform tag pruning on repositories that you manage by:

  • Specifying a tag pruning policy or alternatively,

  • Setting a tag limit

Note

When run, tag pruning only deletes a tag and does not carry out any actual blob deletion.

Known Issue

While the tag limit field is disabled when you turn on immutability for a new repository, this is currently not the case with Repository Settings. As a workaround, turn off immutability when setting a tag limit via Repository Settings > Pruning.

The following section covers how to specify a tag pruning policy and set a tag limit on repositories that you manage. It does not cover modifying or deleting a tag pruning policy.

Specify a tag pruning policy

As a repository administrator, you can now add tag pruning policies on each repository that you manage. To get started, navigate to https://<msr-url> and log in with your credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to update. Note that you must click the part of the repository name that follows the /, after your repository's namespace.

Select the Pruning tab, and click New pruning policy to specify your tag pruning criteria:

MSR allows you to set your pruning triggers based on the following image attributes:

Image attributes

Name

Description

Example

Tag name

Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Tag name = test

Component name

Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Component name starts with b

Vulnerabilities

Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number

Critical vulnerabilities = 3

License

Whether the image uses an intellectual property license and is one of or not one of your specified words

License name = docker

Last updated at

Whether the last image update was before your specified number of hours, days, weeks, or months. For details on valid time units, see Go’s ParseDuration function

Last updated at: Hours = 12

Specify one or more image attributes to add to your pruning criteria, then choose:

  • Prune future tags to save the policy and apply your selection to future tags. Only matching tags after the policy addition will be pruned during garbage collection.

  • Prune all tags to save the policy, and evaluate both existing and future tags on your repository.

Upon selection, you will see a confirmation message and will be redirected to your newly updated Pruning tab.

If you have specified multiple pruning policies on the repository, the Pruning tab displays a list of your prune triggers, along with details on when tag pruning was last performed for each trigger, a toggle for deactivating or reactivating each trigger, and a View link for modifying or deleting a selected trigger.

All tag pruning policies on your account are evaluated every 15 minutes. Any qualifying tags are then deleted from the metadata store. If a tag pruning policy is modified or created, then the tag pruning policy for the affected repository will be evaluated.
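Pruning policies can also be managed through the MSR API. The following is a hedged sketch only: it assumes a pruningPolicies endpoint analogous to the policy endpoints visible in the API explorer, and the JSON field names are illustrative, so verify the request schema under the API link in the web UI before use.

# Sketch: create a pruning policy via the API.
# The field names in the JSON body are assumptions; verify them in the API explorer.
curl -u <username>:<accesstoken> \
  -H 'Content-Type: application/json' \
  -X POST "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pruningPolicies" \
  -d '{"enabled": true, "rules": [{"field": "tag", "operator": "startsWith", "values": ["test"]}]}'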

Set a tag limit

In addition to pruning policies, you can also set tag limits on repositories that you manage to restrict the number of tags on a given repository. Repository tag limits are processed in a first in first out (FIFO) manner. For example, if you set a tag limit of 2, adding a third tag would push out the first.

To set a tag limit, do the following:

  1. Select the repository that you want to update and click the Settings tab.

  2. Turn off immutability for the repository.

  3. Specify a number in the Pruning section and click Save. The Pruning tab will now display your tag limit above the prune triggers list along with a link to modify this setting.

Vulnerability scanning

In addition to its primary function of storing Docker images, MSR offers a deeply integrated vulnerability scanner that analyzes container images, either by manual user request or automatically whenever an image is uploaded to the registry.

MSR image scanning occurs in a service known as the dtr-jobrunner container. To scan an image, MSR:

  • Extracts a copy of the image layers from backend storage.

  • Extracts the files from the layer into a working directory inside the dtr-jobrunner container.

  • Executes the scanner against the files in this working directory, collecting a series of scanning data. Once the scanning data is collected, the working directory for the layer is removed.

Important

In scanning images for security vulnerabilities, MSR temporarily extracts the contents of your images to disk. If malware is contained in these images, external malware scanners may wrongly attribute that malware to MSR. The key indication of this is the detection of malware in the dtr-jobrunner container in /tmp/findlib-workdir-*. To prevent any recurrence of the issue, Mirantis recommends configuring the run-time scanner to exclude files found in the MSR dtr-jobrunner containers in /tmp or more specifically, if wildcards can be used, /tmp/findlib-workdir-*.

Scanner reporting

You can review vulnerability scanning results and submit those results to Mirantis Customer Support to help with the troubleshooting process.

Possible scanner report issues include:

  • Scanner crashes

  • Improperly extracted containers

  • Improperly detected components

  • Incorrectly matched backport

  • Vulnerabilities improperly matched to components

  • Vulnerability false positives

Export a scanner report

You can export a scanner report as a JSON file (for support and diagnostics) or as a CSV file (for processing using Windows or Linux shell scripts).

  1. Sign in to MSR.

  2. Navigate to Repositories > <repo-name> > Tags.

  3. Click View Details for the required image.

  4. Click Export Report and select Export as JSON or Export as CSV.

    Find the report as either scannerReport.json (for JSON) or scannerReport.txt (for CSV) in your browser downloads directory.

Submit a scanner report

You can send a scanner report directly to Mirantis Customer Support to help the group in their troubleshooting efforts.

  1. Sign in to MSR.

  2. Navigate to the View Details page and click the Components tab.

  3. Click Show layers affected for the layer you want to report.

  4. Click Report Issue. A pop-up window displays with the fields detailed in the following table:

    Field

    Description

    Component

    The Component field is automatically filled out and is not editable. If the information is incorrect, make a note in the Additional info field.

    Reported version or date

    The Reported version or date field is automatically filled out and is not editable. If the information is incorrect, make a note in the Additional info field.

    Report layer

    Indicate the image or image layer. Options include: Omit layer, Include layer, Include image.

    False Positive(s)

    Optional. Select from the drop-down menu all CVEs you suspect are false positives. Toggle the False Positive(s) control to edit the field.

    Missing Issue(s)

    Optional. List CVEs you suspect are missing from the report. Enter CVEs in the format CVE-yyyy-#### or CVE-yyyy-##### and separate each CVE with a comma. Toggle the Missing Issue(s) control to edit the field.

    Incorrect Component Version

    Optional. Enter any incorrect component version information in the Missing Issue(s) field. Toggle the Incorrect Component Version control to edit the field.

    Additional info

    Optional. Indicate anything else that does not pertain to the other fields. Toggle the Additional info control to edit this field.

  5. Fill out the fields in the pop-up window and click Submit.

MSR generates a JSON-formatted scanner report, which it bundles into a file together with the scan data. This file downloads to your local drive, at which point you can share it as needed with Mirantis Customer Support.

Important

To submit a scanner report along with the associated image, bundle the items into a .tgz file and include that file in a new Mirantis Customer Support ticket.

To download the relevant image:

docker save <msr-address>/<user>/<image-name>:<tag> -o <image-name>.tar

To bundle the report and image as a .tgz file:

tar -cvzf scannerIssuesReport.tgz <image-name>.tar scannerIssuesReport.json

Image enforcement policies and monitoring

MSR users can automatically block clients from pulling images stored in the registry by configuring enforcement policies at either the global or repository level.

An enforcement policy is a collection of rules used to determine whether an image can be pulled.

A good example of a scenario in which an enforcement policy can be useful is when an administrator wants to house images in MSR but does not want those images to be pulled into environments by MSR users. In this case, the administrator would configure an enforcement policy either at the global or repository level based on a selected set of rules.

Enforcement policies: global versus repository

Global image enforcement policies differ from those set at the repository level in several important respects:

  • Whereas both administrators and regular users can set up enforcement policies at the repository level, only administrators can set up enforcement policies at the global level.

  • Only one global enforcement policy can be set for each MSR instance, whereas multiple enforcement policies can be configured at the repository level.

  • Global enforcement policies are evaluated prior to repository policies.

Enforcement policy rule attributes

Global and repository enforcement policies are generated from the same set of rule attributes.

Note

All rules must evaluate to true for an image to be pulled; if any rules evaluate to false, the image pull will be blocked.
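For example, under a policy with the two rules Tag name starts with dev and Critical CVSS 3 vulnerabilities less than 3, an image tagged dev-1.2 with two critical vulnerabilities satisfies both rules and can be pulled. The same image with five critical vulnerabilities fails the second rule, and the pull is blocked.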

Rule attributes

Name

Filters

Example

Tag name

  • equals

  • starts with

  • ends with

  • contains

  • one of

  • not one of

Tag name starts with dev

Component name

  • equals

  • starts with

  • ends with

  • contains

  • one of

  • not one of

Component name starts with b

All CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

All CVSS 3 vulnerabilities less than 3

Critical CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Critical CVSS 3 vulnerabilities less than 3

High CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

High CVSS 3 vulnerabilities less than 3

Medium CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Medium CVSS 3 vulnerabilities less than 3

Low CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Low CVSS 3 vulnerabilities less than 3

License name

  • one of

  • not one of

License name one of msr

Last updated at

  • before

Last updated at before 12 hours

Configure enforcement policies

Use the MSR web UI to set up enforcement policies for both repository and global enforcement.

Set up repository enforcement

Important

Users can only create and edit enforcement policies for repositories within their user namespace.

To set up a repository enforcement policy using the MSR web UI:

  1. Log in to the MSR web UI.

  2. Navigate to Repositories.

  3. Select the repository to edit.

  4. Click the Enforcement tab and select New enforcement policy.

  5. Define the enforcement policy rules with the desired rule attributes and select Save. The screen displays the new enforcement policy in the Enforcement tab. By default, the new enforcement policy is toggled on.

Once a repository enforcement policy is set up and activated, pull requests that do not satisfy the policy rules will return the following error message:

Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: enforcement policies '<enforcement-policy-id>'
blocked request
Set up global enforcement

Important

Only administrators can set up global enforcement policies.

To set up a global enforcement policy using the MSR web UI:

  1. Log in to the MSR web UI.

  2. Navigate to System.

  3. Select the Enforcement tab.

  4. Confirm that the global enforcement function is Enabled.

  5. Define the enforcement policy rules with the desired criteria and select Save.

Once the global enforcement policy is set up, pull requests against any repository that do not satisfy the policy rules will return the following error message:

Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: global enforcement policy blocked request

Monitor enforcement activity

Administrators and users can monitor enforcement activity in the MSR web UI.

Important

Enforcement events can only be monitored at the repository level. It is not possible, for example, to view in one location all enforcement events that correspond to the global enforcement policy.

  1. Navigate to Repositories.

  2. Select the repository whose enforcement activity you want to review.

  3. Select the Activity tab to view enforcement event activity. For instance you can:

    • Identify which policy triggered an event using the enforcement ID displayed on the event entry. (The enforcement IDs for each enforcement policy are located on the Enforcement tab.)

    • Identify the user responsible for making a blocked pull request, and the time of the event.

Upgrade MSR

Upgrade MSR

MSR uses semantic versioning. While downgrades are not supported, Mirantis supports upgrades according to the following rules:

  • When upgrading from one patch version to another, you can skip patch versions because no data migration is performed for patch versions.

  • When upgrading between minor versions, you cannot skip versions, however you can upgrade from any patch version of the previous minor version to any patch version of the current minor version.

  • When upgrading between major versions, upgrade one major version at a time, moving to the earliest available minor version of the next major version. Mirantis strongly recommends that you first upgrade to the latest minor/patch version of your current major version.

Description

From

To

Supported

patch upgrade

x.y.0

x.y.1

yes

skip patch version

x.y.0

x.y.2

yes

patch downgrade

x.y.2

x.y.1

no

minor upgrade

x.y.*

x.y+1.*

yes

skip minor version

x.y.*

x.y+2.*

no

minor downgrade

x.y.*

x.y-1.*

no

skip major version

x.*.*

x+2.*.*

no

major downgrade

x.*.*

x-1.*.*

no

major upgrade

x.y.z

x+1.0.0

yes

major upgrade skipping minor version

x.y.z

x+1.y+1.z

no

A few seconds of interruption may occur during the upgrade of an MSR cluster, so schedule the upgrade to take place outside of peak hours to avoid any business impact.

Minor upgrade

Important

Only perform the MSR upgrade once any correlating upgrades to Mirantis Kubernetes Engine (MKE) and/or Mirantis Container Runtime (MCR) have completed.

Mirantis recommends the following upgrade sequence:

  1. MCR

  2. MKE

  3. MSR

Before starting the MSR upgrade, confirm that:

  • The version of MKE in use is supported by the upgrade version of MSR.

  • The MKE and MSR backups are both recent.

  • A backup of current swarm state has been created.

    To create a swarm state backup, perform the following from a MKE manager node:

    ENGINE=$(docker version -f '{{.Server.Version}}')
    systemctl stop docker
    sudo tar czvf "/tmp/swarm-${ENGINE}-$(hostname -s)-$(date +%s%z).tgz" /var/lib/docker/swarm/
    systemctl start docker
    
  • (if possible) A backup exists of the images stored by MSR, if it is configured to store images on the local filesystem or within an NFS store.

    # REPLICA_ID is the 12-character ID of the MSR replica on the node
    BACKUP_LOCATION=/example_directory/filename
    # If local filesystem
    sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes/dtr-registry-${REPLICA_ID} .
    # If NFS store
    sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes/dtr-registry-nfs-${REPLICA_ID} .
    
  • None of the MSR replica nodes are exhibiting time drift. To make this determination, review the kernel log timestamps for each of the nodes. If time drift is occurring, use clock synchronization (e.g., NTP) to keep node clocks in sync.

  • Local filesystems across MSR nodes are not exhibiting any disk storage issues.

  • Docker Content Trust in MKE is disabled.

  • All system requirements are met.

Step 1. Upgrade MSR to 2.8 if necessary

Confirm that you are running MSR 2.8. If this is not the case, upgrade your installation to the latest MSR 2.8.x iteration.

Step 2. Upgrade MSR

Pull the latest version of MSR:

docker pull mirantis/dtr:2.9.16

Confirm that at least 16GB RAM is available on the node on which you are running the upgrade. If the MSR node does not have access to the internet, follow the offline installation documentation to get the images.

Once you have the latest image on your machine (and the images on the target nodes, if upgrading offline), run the upgrade command.

Note

The upgrade command can be run from any available node, as MKE is aware of which worker nodes have replicas.

docker run -it --rm \
  mirantis/dtr:2.9.16 upgrade

By default, the upgrade command runs in interactive mode and prompts for any necessary information. If you are performing the upgrade on an existing replica, pass the --existing-replica-id flag.

The upgrade command will start replacing every container in your MSR cluster, one replica at a time. It will also perform certain data migrations. If anything fails or the upgrade is interrupted for any reason, rerun the upgrade command (the upgrade will resume from the point of interruption).
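For reference, the following is a sketch of the same command run against an existing replica. The --existing-replica-id flag is documented above; the MKE connection flags shown here follow the usual MSR bootstrapper conventions and are assumptions to verify with docker run -it --rm mirantis/dtr:2.9.16 upgrade --help:

docker run -it --rm \
  mirantis/dtr:2.9.16 upgrade \
  --existing-replica-id <replica-id> \
  --ucp-url <mke-url> \
  --ucp-username <admin-username> \
  --ucp-password <admin-password> \
  --ucp-insecure-tls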

Step 3. Verify Upgrade Success

To confirm that the newly upgraded MSR environment is ready:

  • Make sure that all running MSR containers reflect the newly upgraded MSR version:

    docker ps --filter name=dtr
    
  • Verify that the MSR web UI is accessible and operational.

  • Confirm push and pull functionality of Docker images to and from the registry

  • Ensure that the MSR metadata store is in good standing:

    REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
    docker run -it --rm --net dtr-ol \
       -v dtr-ca-$REPLICA_ID:/ca \
       mirantis/rethinkcli:v2.3.0 $REPLICA_ID
    # List problems in the cluster detected by the current node.
    > r.db("rethinkdb").table("current_issues")
    []
    
Metadata Store Migration

When upgrading from 2.5 to 2.6, the system will run a metadatastoremigration job following a successful upgrade. This involves migrating the blob links for your images, which is necessary for online garbage collection. With 2.6, you can log into the MSR web interface and navigate to System > Job Logs to check the status of the metadatastoremigration job.

Garbage collection is disabled while the migration is running. In the case of a failed metadatastoremigration, the system will retry twice.

If the three attempts fail, it will be necessary to manually retrigger the metadatastoremigration job. To do this, send a POST request to the /api/v0/jobs endpoint:

curl https://<msr-external-url>/api/v0/jobs -X POST \
-u username:accesstoken -H 'Content-Type':'application/json' -d \
'{"action": "metadatastoremigration"}'

Alternatively, select API from the bottom left-side navigation panel of the MSR web interface and use the Swagger UI to send your API request.

Patch upgrade

A patch upgrade changes only the MSR containers and is always safer than a minor version upgrade. The command is the same as for a minor upgrade.

MSR cache upgrade

If you have previously deployed a cache, be sure to upgrade the node dedicated for your cache to keep it in sync with your upstream MSR replicas. This prevents authentication errors and other strange behaviors.

Download the vulnerability database

After upgrading MSR, it is necessary to redownload the vulnerability database.

Monitor MSR

Mirantis Secure Registry is a Dockerized application. To monitor it, you can use the same tools and techniques you're already using to monitor other containerized applications running on your cluster. One way to monitor MSR is to use the monitoring capabilities of Mirantis Kubernetes Engine (MKE).

In your browser, log in to Mirantis Kubernetes Engine (MKE), and navigate to the Stacks page. If you have MSR set up for high availability, then all the MSR replicas are displayed.

To check the containers for the MSR replica, click the replica you want to inspect, click Inspect Resource, and choose Containers.

Now you can drill into each MSR container to see its logs and find the root cause of the problem.

Health check endpoints

MSR also exposes several endpoints you can use to assess whether an MSR replica is healthy:

  • /_ping: Checks if the MSR replica is healthy, and returns a simple JSON response. This is useful for load balancing or other automated health check tasks.

  • /nginx_status: Returns the number of connections being handled by the NGINX front-end used by MSR.

  • /api/v0/meta/cluster_status: Returns extensive information about all MSR replicas.
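For example, a load balancer health probe can call the ping endpoint directly. A healthy replica answers with a small JSON document (the response body shown here is illustrative):

curl -ksL https://<msr-url>/_ping
{"Error":"","Healthy":true}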

Cluster status

The /api/v0/meta/cluster_status endpoint requires administrator credentials, and returns a JSON object for the entire cluster as observed by the replica being queried. You can authenticate your requests using HTTP basic auth.

curl -ksL -u <user>:<pass> https://<msr-domain>/api/v0/meta/cluster_status
{
  "current_issues": [
   {
    "critical": false,
    "description": "... some replicas are not ready. The following servers are
                    not reachable: dtr_rethinkdb_f2277ad178f7",
  }],
  "replica_health": {
    "f2277ad178f7": "OK",
    "f3712d9c419a": "OK",
    "f58cf364e3df": "OK"
  },
}

You can find health status in the current_issues and replica_health arrays. If this endpoint does not provide meaningful information, try troubleshooting using the logs.

Check notary audit logs

Docker Content Trust (DCT) keeps audit logs of changes made to trusted repositories. Every time you push a signed image to a repository, or delete trust data for a repository, DCT logs that information.

These logs are only available from the MSR API.

Get an authentication token

To access the audit logs you need to authenticate your requests using an authentication token. You can get an authentication token for all repositories, or one that is specific to a single repository.

curl --insecure --silent \
  --user <user>:<password> \
  "https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*"
curl --insecure --silent \
  --user <user>:<password> \
  "https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=repository:<dtr-url>/<repository>:pull"

MSR returns a JSON file with a token, even when the user doesn’t have access to the repository to which they requested the authentication token. This token doesn’t grant access to MSR repositories.

The JSON file returned has the following structure:

{
  "token": "<token>",
  "access_token": "<token>",
  "expires_in": "<expiration in seconds>",
  "issued_at": "<time>"
}

Changefeed API

Once you have an authentication token you can use the following endpoints to get audit logs:

URL

Description

Authorization

GET /v2/_trust/changefeed

Get audit logs for all repositories.

Global scope token

GET /v2/<msr-url>/<repository>/_trust/changefeed

Get audit logs for a specific repository.

Repository-specific token

Both endpoints have the following query string parameters:

Field name

Required

Type

Description

change_id

Yes

String

A non-inclusive starting change ID from which to start returning results. This will typically be the first or last change ID from the previous page of records requested, depending on which direction you are paging in.

The value 0 indicates records should be returned starting from the beginning of time.

The value 1 indicates records should be returned starting from the most recent record. If 1 is provided, the implementation will also assume the records value is meant to be negative, regardless of the given sign.

records

Yes

String integer

The number of records to return. A negative value indicates the number of records preceding the change_id should be returned. Records are always returned sorted from oldest to newest.
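For example, to retrieve the five most recent records across all repositories, pass the global-scope token obtained earlier as a Bearer authorization header. Because change_id=1 starts from the most recent record, the records value is treated as negative regardless of its sign:

curl --insecure --silent \
  -H "Authorization: Bearer <token>" \
  "https://<dtr-url>/v2/_trust/changefeed?change_id=1&records=5"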

The response is a JSON like:

{
  "count": 1,
  "records": [
    {
      "ID": "0a60ec31-d2aa-4565-9b74-4171a5083bef",
      "CreatedAt": "2017-11-06T18:45:58.428Z",
      "GUN": "msr.example.org/library/wordpress",
      "Version": 1,
      "SHA256": "a4ffcae03710ae61f6d15d20ed5e3f3a6a91ebfd2a4ba7f31fc6308ec6cc3e3d",
      "Category": "update"
    }
  ]
}

Below is the description for each of the fields in the response:

Field name

Description

count

The number of records returned.

ID

The ID of the change record. Should be used in the change_id field of requests to provide a non-inclusive starting index. It should be treated as an opaque value that is guaranteed to be unique within an instance of notary.

CreatedAt

The time the change happened.

GUN

The MSR repository that was changed.

Version

The version that the repository was updated to. This increments every time there’s a change to the trust repository.

This is always 0 for events representing trusted data being removed from the repository.

SHA256

The checksum of the timestamp being updated to. This can be used with the existing notary APIs to request said timestamp.

This is always an empty string for events representing trusted data being removed from the repository.

Category

The kind of change that was made to the trusted repository. Can be either update or deletion.

The results only include audit logs for events that happened more than 60 seconds ago, and are sorted from oldest to newest.

Even though the authentication API always returns a token, the changefeed API validates whether the user has access to the audit logs:

  • If the user is an admin, they can see the audit logs for any repository.

  • All other users can only see audit logs for repositories to which they have read access.

Troubleshoot MSR

This guide contains tips and tricks for troubleshooting MSR problems.

Troubleshoot overlay networks

High availability in MSR depends on swarm overlay networking. One way to test if overlay networks are working correctly is to deploy containers to the same overlay network on different nodes and see if they can ping one another.

Use SSH to log into a node and run:

docker run -it --rm \
  --net dtr-ol --name overlay-test1 \
  --entrypoint sh mirantis/dtr

Then use SSH to log into another node and run:

docker run -it --rm \
  --net dtr-ol --name overlay-test2 \
  --entrypoint ping mirantis/dtr -c 3 overlay-test1

If the second command succeeds, it indicates overlay networking is working correctly between those nodes.

You can run this test with any attachable overlay network and any Docker image that has sh and ping.
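
As an illustration, a minimal sketch of the same test using a generic attachable overlay network and the alpine image; the network and container names here are arbitrary:

# On a manager node: create an attachable overlay network.
docker network create --driver overlay --attachable net-test

# On one node: start a long-running container attached to the network.
docker run -d --rm --net net-test --name overlay-target alpine sleep 300

# On another node: ping the first container across the overlay network.
docker run -it --rm --net net-test alpine ping -c 3 overlay-target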

Access RethinkDB directly

MSR uses RethinkDB for persisting data and replicating it across replicas. It might be helpful to connect directly to the RethinkDB instance running on a MSR replica to check the MSR internal state.

Warning

Modifying RethinkDB directly is not supported and may cause problems.

via RethinkCLI

The RethinkCLI can be run from a separate image in the mirantis organization. Note that the commands below use separate image tags for non-interactive and interactive modes.

Non-interactive

Use SSH to log into a node that is running a MSR replica, and run the following:

# List problems in the cluster detected by the current node.
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && echo 'r.db("rethinkdb").table("current_issues")' | docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" -e DTR_REPLICA_ID=$REPLICA_ID mirantis/rethinkcli:v2.2.0-ni non-interactive

On a healthy cluster the output will be [].

Interactive

Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. First, set an environment variable for your MSR replica ID:

REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')

RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode and query the contents of the DB:

docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca mirantis/rethinkcli:v2.3.0 $REPLICA_ID
# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
[]

# List all the DBs in RethinkDB
> r.dbList()
[ 'dtr2',
  'jobrunner',
  'notaryserver',
  'notarysigner',
  'rethinkdb' ]

# List the tables in the dtr2 db
> r.db('dtr2').tableList()
[ 'blob_links',
  'blobs',
  'client_tokens',
  'content_caches',
  'events',
  'layer_vuln_overrides',
  'manifests',
  'metrics',
  'namespace_team_access',
  'poll_mirroring_policies',
  'promotion_policies',
  'properties',
  'pruning_policies',
  'push_mirroring_policies',
  'repositories',
  'repository_team_access',
  'scanned_images',
  'scanned_layers',
  'tags',
  'user_settings',
  'webhooks' ]

# List the entries in the repositories table
> r.db('dtr2').table('repositories')
[ { enableManifestLists: false,
    id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
    immutableTags: false,
    name: 'test-repo-1',
    namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
    namespaceName: 'admin',
    pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
    pulls: 0,
    pushes: 0,
    scanOnPush: false,
    tagLimit: 0,
    visibility: 'public' },
  { enableManifestLists: false,
    id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
    immutableTags: false,
    longDescription: '',
    name: 'testing',
    namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
    namespaceName: 'admin',
    pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
    pulls: 0,
    pushes: 0,
    scanOnPush: false,
    shortDescription: '',
    tagLimit: 1,
    visibility: 'public' } ]

Individual DBs and tables are a private implementation detail and may change in MSR from version to version, but you can always use dbList() and tableList() to explore the contents and data structure.
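
For example, assuming the dtr2 layout shown above, you can sketch read-only queries such as the following in the interactive shell; the output here reflects the two example repositories listed earlier:

# Count the repositories.
> r.db('dtr2').table('repositories').count()
2

# Project a couple of fields from each repository record.
> r.db('dtr2').table('repositories').pluck('name', 'namespaceName')
[ { name: 'test-repo-1', namespaceName: 'admin' },
  { name: 'testing', namespaceName: 'admin' } ]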

Learn more about RethinkDB queries.

via API

To check on the overall status of your MSR cluster without interacting with RethinkCLI, run the following API request:

curl -u admin:$TOKEN -X GET "https://<msr-url>/api/v0/meta/cluster_status" -H "accept: application/json"

Example API response:
{
  "rethink_system_tables": {
    "cluster_config": [
      {
        "heartbeat_timeout_secs": 10,
        "id": "heartbeat"
      }
    ],
    "current_issues": [],
    "db_config": [
      {
        "id": "339de11f-b0c2-4112-83ac-520cab68d89c",
        "name": "notaryserver"
      },
      {
        "id": "aa2e893f-a69a-463d-88c1-8102aafebebc",
        "name": "dtr2"
      },
      {
        "id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd",
        "name": "jobrunner"
      },
      {
        "id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039",
        "name": "notarysigner"
      }
    ],
    "server_status": [
      {
        "id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a",
        "name": "dtr_rethinkdb_5eb9459a7832",
        "network": {
          "canonical_addresses": [
            {
              "host": "dtr-rethinkdb-5eb9459a7832.dtr-ol",
              "port": 29015
            }
          ],
          "cluster_port": 29015,
          "connected_to": {
            "dtr_rethinkdb_56b65e8c1404": true
          },
          "hostname": "9e83e4fee173",
          "http_admin_port": "<no http admin>",
          "reql_port": 28015,
          "time_connected": "2019-02-15T00:19:22.035Z"
        }
      },
      ...
    ]
  }
}

Recover from an unhealthy replica

When a MSR replica is unhealthy or down, the MSR web UI displays a warning:

Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.

To fix this, you should remove the unhealthy replica from the MSR cluster, and join a new one. Start by running:

docker run -it --rm \
  mirantis/dtr:2.9.16 remove \
  --ucp-insecure-tls

And then:

docker run -it --rm \
  mirantis/dtr:2.9.16 join \
  --ucp-node <mke-node-name> \
  --ucp-insecure-tls

Vulnerability scan warnings

Warnings display in a red banner at the top of the MSR web UI to indicate potential vulnerability scanning issues.

Warning

Cause

Warning: Cannot perform security scans because no vulnerability database was found.

Displays when vulnerability scanning is enabled but no vulnerability database is available to MSR. Typically, the warning displays when a vulnerability database update runs for the first time and the operation fails, as no usable vulnerability database exists at that point.

Warning: Last vulnerability database sync failed.

Displays when a vulnerability database update fails even though a previous usable vulnerability database remains available for vulnerability scans, typically because a prior update completed successfully.

Note

The terms vulnerability database sync and vulnerability database update are interchangeable, in the context of MSR web UI warnings.

Note

Warnings are issued in the same way regardless of whether the vulnerability database update is performed manually or automatically through a job.

MSR performs a vulnerability database update in a number of steps, including TAR file download and extraction, file validation, and the update operation itself. Errors that trigger warnings can occur at any point in the update process and can include system-related issues such as low disk space, transient network failures, or configuration problems. As such, the best strategy for troubleshooting MSR vulnerability scanning issues is to review the logs.


To view the logs for an online vulnerability database update:

Online vulnerability database updates are performed by a jobrunner container, the logs for which you can view through a docker CLI command or by using the MSR web UI:

  • CLI command (to locate the container name, see the sketch following this list):

    docker logs <jobrunner-container-name>
    
  • MSR web UI:

    Navigate to System > Job Logs in the left-side navigation panel.
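
To locate the jobrunner container name for the CLI command above, a sketch that assumes the dtr-<component>-<replicaID> container naming convention shown elsewhere in this guide:

# List jobrunner containers; names follow the dtr-jobrunner-<replica-id> pattern.
docker ps --format '{{.Names}}' -f name=dtr-jobrunner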


To view the logs for an offline vulnerability database update:

The MSR vulnerability database update occurs through the dtr-api container. As such, access the logs for that container to ascertain the reason for update failure.


To obtain more log information:

If the logs do not initially offer enough detail on the cause of the vulnerability database update failure, enable debug logging in MSR to surface additional detail.

Refer to the reconfigure CLI command documentation for information on how to enable debug logging. For example:

docker run -it --rm mirantis/dtr:<version-number> reconfigure \
  --ucp-url $MKE_URL \
  --ucp-username $USER \
  --ucp-password $PASSWORD \
  --ucp-insecure-tls \
  --dtr-external-url $MSR_URL \
  --log-level debug

Certificate issues when pushing and pulling images

If TLS is not properly configured, you are likely to encounter an x509: certificate signed by unknown authority error when attempting to run the following commands:

  • docker login

  • docker push

  • docker pull

To resolve the issue:

Verify that your MSR instance has been configured with the Fully Qualified Domain Name (FQDN) of your TLS certificate. For more information, refer to Add a custom TLS certificate.

Alternatively, but only in testing scenarios, you can skip using a certificate by adding your registry host name as an insecure registry in the Docker daemon.json file:

{
    "insecure-registries" : [ "registry-host-name" ]
}
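
Note that the Docker daemon only rereads daemon.json when it restarts; on a systemd-based host, for example:

# Restart the Docker daemon so the insecure-registries change takes effect.
sudo systemctl restart docker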

Disaster recovery

Disaster recovery overview

Mirantis Secure Registry is a clustered application. You can join multiple replicas for high availability.

For a MSR cluster to be healthy, a majority of its replicas (n/2 + 1) need to be healthy and be able to communicate with the other replicas. This is also known as maintaining quorum.

This means that there are three possible failure scenarios.

Replica is unhealthy but cluster maintains quorum

One or more replicas are unhealthy, but the overall majority (n/2 + 1) is still healthy and able to communicate with one another.

In this example the MSR cluster has five replicas, but one of the nodes stopped working and another has problems with the MSR overlay network.

Even though these two replicas are unhealthy, the MSR cluster still has a majority of replicas working, which means that the cluster is healthy.

In this case you should repair the unhealthy replicas, or remove them from the cluster and join new ones.

Learn how to repair a replica.

The majority of replicas are unhealthy

A majority of replicas are unhealthy, making the cluster lose quorum, but at least one replica is still healthy, or at least the data volumes for MSR are accessible from that replica.

Failure scenario 2

In this example the MSR cluster is unhealthy, but since one replica is still running it’s possible to repair the cluster without having to restore from a backup. This minimizes the amount of data loss.

Learn how to do an emergency repair.

All replicas are unhealthy

This is a total disaster scenario in which all MSR replicas are lost and the data volumes for all MSR replicas are corrupted or lost.

Failure scenario 3

In a disaster scenario like this, you’ll have to restore MSR from an existing backup. Restoring from a backup should only be used as a last resort, since doing an emergency repair might prevent some data loss.

Learn how to restore from a backup.

Repair a single replica

When one or more MSR replicas are unhealthy but the overall majority (n/2 + 1) is healthy and able to communicate with one another, your MSR cluster is still functional and healthy.

Cluster with two nodes unhealthy

Given that the MSR cluster is healthy, there’s no need to execute any disaster recovery procedures like restoring from a backup.

Instead, you should:

  1. Remove the unhealthy replicas from the MSR cluster.

  2. Join new replicas to make MSR highly available.

Since a MSR cluster requires a majority of replicas to be healthy at all times, the order of these operations is important. If you join more replicas before removing the ones that are unhealthy, your MSR cluster might become unhealthy.

Split-brain scenario

To understand why you should remove unhealthy replicas before joining new ones, imagine you have a five-replica MSR deployment and something goes wrong with the overlay network connecting the replicas, causing them to be separated into two groups.

Cluster with network problem

Because the cluster originally had five replicas, it can work as long as three replicas are still healthy and able to communicate (5 / 2 + 1 = 3). Even though the network separated the replicas into two groups, MSR is still healthy.

If at this point you join a new replica instead of fixing the network problem or removing the two replicas that got isolated from the rest, it’s possible that the new replica ends up on the side of the network partition that has fewer replicas.

cluster with split brain

When this happens, both groups have the minimum number of replicas needed to establish a cluster. This is also known as a split-brain scenario, because both groups can now accept writes and their histories start diverging, making the two groups effectively two different clusters.

Remove replicas

To remove unhealthy replicas, you’ll first have to find the replica ID of one of the replicas you want to keep, and the replica IDs of the unhealthy replicas you want to remove.

You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface, or by using the MKE client bundle to run:

docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6

Another way to determine the replica ID is to SSH into a MSR node and run the following:

REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \
  && echo $REPLICA_ID

Then use the MKE client bundle to remove the unhealthy replicas:

docker run -it --rm mirantis/dtr:2.9.16 remove \
  --existing-replica-id <healthy-replica-id> \
  --replica-ids <unhealthy-replica-id> \
  --ucp-insecure-tls \
  --ucp-url <mke-url> \
  --ucp-username <user> \
  --ucp-password <password>

You can remove more than one replica at the same time by specifying multiple IDs separated by commas, for example --replica-ids <unhealthy-replica-id-1>,<unhealthy-replica-id-2>.

Healthy cluster
Join replicas

Once you’ve removed the unhealthy nodes from the cluster, you should join new ones to make sure your cluster is highly available.

Use your MKE client bundle to run the following command which prompts you for the necessary parameters:

docker run -it --rm \
  mirantis/dtr:2.9.16 join \
  --ucp-node <mke-node-name> \
  --ucp-insecure-tls

Repair a cluster

For a MSR cluster to be healthy, a majority of its replicas (n/2 + 1) need to be healthy and be able to communicate with the other replicas. This is known as maintaining quorum.

In a scenario where quorum is lost but at least one replica is still accessible, you can use that replica to repair the cluster. That replica doesn’t need to be completely healthy; the cluster can still be repaired as long as the MSR data volumes are persisted and accessible.

Unhealthy cluster

Repairing the cluster from an existing replica minimizes the amount of data lost. If this procedure doesn’t work, you’ll have to restore from an existing backup.

Diagnose an unhealthy cluster

When a majority of replicas are unhealthy, causing the overall MSR cluster to become unhealthy, operations like docker login, docker pull, and docker push return an internal server error.

Accessing the /_ping endpoint of any replica also returns the same error. It’s also possible that the MSR web UI is partially or fully unresponsive.

Perform an emergency repair

Use the mirantis/dtr emergency-repair command to try to repair an unhealthy MSR cluster from an existing replica.

This command checks that the data volumes for the MSR replica are uncorrupted, redeploys all internal MSR components, and reconfigures them to use the existing volumes. It also reconfigures MSR by removing all other nodes from the cluster, leaving MSR as a single-replica cluster with the replica you chose.

Start by finding the ID of the MSR replica that you want to repair from. You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface, or by using a MKE client bundle to run:

docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6

Another way to determine the replica ID is to SSH into a MSR node and run the following:

REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \
  && echo $REPLICA_ID

Then, use your MKE client bundle to run the emergency repair command:

docker run -it --rm mirantis/dtr:2.9.16 emergency-repair \
  --ucp-insecure-tls \
  --existing-replica-id <replica-id>

If the emergency repair procedure is successful, your MSR cluster now has a single replica. You should now join more replicas for high availability.

Note

Learn more about the high availability configuration in Set up high availability.

If the emergency repair command fails, try running it again using a different replica ID. As a last resort, you can restore your cluster from an existing backup.


Create a backup

Data managed by MSR

Mirantis Secure Registry maintains data about:

Data

Description

Configurations

The MSR cluster configurations.

Repository metadata

The metadata about the repositories and images deployed.

Access control to repos and images

Permissions for teams and repositories.

Scan results

Security scanning results for images.

Certificates and keys

The certificates, public keys, and private keys that are used for mutual TLS communication.

Image content

The images you push to MSR. This can be stored on the file system of the node running MSR, or on another storage system, depending on the configuration.

This data is persisted on the host running MSR, using named volumes.

To perform a backup of a MSR node, run the mirantis/dtr backup command. This command backs up the following data:

Data                                 Backed up   Description

Configurations                       yes         MSR settings
Repository metadata                  yes         Metadata such as image architecture and size
Access control to repos and images   yes         Data about who has access to which images
Notary data                          yes         Signatures and digests for images that are signed
Scan results                         yes         Information about vulnerabilities in your images
Certificates and keys                yes         TLS certificates and keys used by MSR
Image content                        no          Needs to be backed up separately; depends on MSR configuration
Users, orgs, teams                   no          Create a MKE backup to back up this data
Vulnerability database               no          Can be redownloaded after a restore

Back up MSR data

To create a backup of MSR, you need to:

  1. Back up image content

  2. Back up MSR metadata

You should always create backups from the same MSR replica, to ensure a smoother restore. If you have not previously performed a backup, the web interface displays a warning advising you to do so.

Find your replica ID

Since you need your MSR replica ID during a backup, the following covers a few ways for you to determine your replica ID:

MKE web interface

You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface.

MKE client bundle

From a terminal using a MKE client bundle, run:

docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
SSH access

Another way to determine the replica ID is to log into a MSR node using SSH and run the following:

REPLICA_ID=$(docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-') && echo $REPLICA_ID
Back up image content

Since you can configure the storage backend that MSR uses to store images, the way you back up images depends on the storage backend you’re using.

If you’ve configured MSR to store images on the local file system or NFS mount, you can back up the images by using SSH to log in to an MSR node, and creating a tar archive of the MSR volume.

Example backup command for local images:

sudo tar -cvf image-backup.tar /var/lib/docker/volumes/dtr-registry-<replica-id>

Expected system response:

tar: Removing leading '/' from member names

If you’re using a different storage backend, follow the best practices recommended for that system.

Back up MSR metadata

To create a MSR backup, load your MKE client bundle, and run the following command.

Chained commands (Linux only):

DTR_VERSION=$(docker container inspect $(docker container ps -f name=dtr-registry -q) | \
  grep -m1 -Po '(?<=DTR_VERSION=)\d+.\d+.\d+'); \
REPLICA_ID=$(docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'); \
read -p 'mke-url (The MKE URL including domain and port): ' UCP_URL; \
read -p 'mke-username (The MKE administrator username): ' UCP_ADMIN; \
read -sp 'mke password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  mirantis/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID > dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
MKE field prompts
  • <mke-url> is the URL you use to access MKE.

  • <mke-username> is the username of a MKE administrator.

  • <mke-password> is the password for the indicated MKE administrator.

The above chained commands run through the following tasks:

  1. Sets your MSR version and replica ID. To back up a specific replica, set the replica ID manually by modifying the --existing-replica-id flag in the backup command.

  2. Prompts you for your MKE URL (domain and port) and admin username.

  3. Prompts you for your MKE password without saving it to your disk or printing it on the terminal.

  4. Retrieves the CA certificate for your specified MKE URL. To skip TLS verification, replace the --ucp-ca flag with --ucp-insecure-tls. Mirantis does not recommend this flag for production environments.

  5. Includes MSR version and timestamp to your tar backup file.

Important

To ensure constant user access to MSR, by default the backup command does not pause the MSR replica that is undergoing the backup operation. As such, you can continue to make changes to the replica; however, those changes will not be saved into the backup. To circumvent this behavior, use the --offline-backup option, and be sure to remove the replica from the load balancing pool to avoid user interruption.

As the backup contains sensitive information (for example, private keys), you can encrypt it by running:

gpg --symmetric {{ metadata_backup_file }}

This prompts you for a password and creates an encrypted copy of the backup file.

Refer to mirantis/dtr backup for more information on supported command options.

Test your backups

To validate that the backup was correctly performed, you can print the contents of the tar file created. The backup of the images should look like:

tar -tf {{ images_backup_file }}

dtr-backup-v2.9.16/
dtr-backup-v2.9.16/rethink/
dtr-backup-v2.9.16/rethink/layers/

And the backup of the MSR metadata should look like:

tar -tf {{ metadata_backup_file }}

# The archive should look like this
dtr-backup-v2.9.16/
dtr-backup-v2.9.16/rethink/
dtr-backup-v2.9.16/rethink/properties/
dtr-backup-v2.9.16/rethink/properties/0

If you’ve encrypted the metadata backup, you can use:

gpg -d {{ metadata_backup_file }} | tar -t

You can also create a backup of a MKE cluster and restore it into a new cluster. Then restore MSR on that new cluster to confirm that everything is working as expected.

Restore from backup

Restore MSR data

If your MSR has a majority of unhealthy replicas, you can restore it to a working state by restoring from an existing backup.

To restore MSR, you must:

  1. Stop and remove any MSR containers that might be running.

  2. Restore the images from a backup.

  3. Restore MSR metadata from a backup.

  4. Re-fetch the vulnerability database.

Important

  • You must restore MSR on the same MKE cluster upon which you created the backup. If you restore on a different MKE cluster, the MSR resources will be owned by non-existent users, and thus you will not be able to manage the resources despite their being stored in the MSR data store.

  • When restoring, you must use the same version of the mirantis/dtr image that you used in creating the backup.

Remove MSR containers

Start by removing any MSR container that is still running. Run the following command with the client bundle:

docker run -it --rm \
  mirantis/dtr:2.9.16 destroy \
  --ucp-insecure-tls

Note

If the client bundle is not activated, the command can be run from an MSR node.

Restore images

If you had MSR configured to store images on the local filesystem, you can extract your backup:

sudo tar -xf {{ image_backup_file }} -C /var/lib/docker/volumes

If you’re using a different storage backend, follow the best practices recommended for that system.

Restore MSR metadata

You can restore the MSR metadata with the mirantis/dtr restore command. This performs a fresh installation of MSR and reconfigures it with the configuration created during the backup.

Load your MKE client bundle and run the following command, replacing the placeholders with real values:

read -sp 'ucp password: ' UCP_PASSWORD;

This prompts you for the MKE password. Next, run the following to restore MSR from your backup. You can learn more about the supported flags in mirantis/dtr restore.

docker run -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  mirantis/dtr:2.9.16 restore \
  --ucp-url <mke-url> \
  --ucp-insecure-tls \
  --ucp-username <mke-username> \
  --ucp-node <hostname> \
  --replica-id <replica-id> \
  --dtr-external-url <msr-external-url> < {{ metadata_backup_file }}

Where:

  • <mke-url> is the URL you use to access MKE

  • <mke-username> is the username of a MKE administrator

  • <hostname> is the hostname of the node where you’ve restored the images

  • <replica-id> is the ID of the replica you backed up

  • <msr-external-url> is the URL that clients use to access MSR

DTR 2.5 and below

If you’re using NFS as a storage backend, also include --nfs-storage-url as part of your restore command, otherwise MSR is restored but starts using a local volume to persist your Docker images.

DTR 2.6.0-2.6.3 (with experimental online garbage collection)

Warning

When running 2.6.0 to 2.6.3 (with experimental online garbage collection), there is an issue with reconfiguring and restoring MSR with --nfs-storage-url, which leads to erased tags. Make sure to back up your MSR metadata before you proceed. To work around the --nfs-storage-url flag issue, manually create a storage volume on each MSR node. To restore MSR from an existing backup, use mirantis/dtr restore with --dtr-storage-volume and the new volume.

Re-fetch the vulnerability database

If you’re scanning images, you now need to download the vulnerability database.

After you successfully restore MSR, you can join new replicas the same way you would after a fresh installation.


Customer feedback

You can submit feedback on MSR to Mirantis either by rating your experience or through a Jira ticket.

To rate your MSR experience:

  1. Log in to the MSR web UI.

  2. Click Give feedback at the bottom of the screen.

  3. Rate your MSR experience from one to five stars, and add any additional comments in the provided field.

  4. Click Send feedback.

To offer more detailed feedback:

  1. Log in to the MSR web UI.

  2. Click Give feedback at the bottom of the screen.

  3. Click create a ticket in the 5-star review dialog to open a Jira feedback collector.

  4. Fill in the Jira feedback collector fields and add attachments as necessary.

  5. Click Submit.

Mirantis Migration Tool Guide

The Mirantis Migration Tool (MMT) enables you to migrate metadata and image binaries to a new Kubernetes or Swarm MSR cluster. It is flexible: you can switch cluster orchestrators and deployment methods during the migration process, and you can migrate to the same version or upgrade to a later major, minor, or patch version.

Available MSR system orchestrations include:

MSR system orchestrations

MSR system

Orchestration

  • MSR 3.1.x, Helm

  • MSR 3.1.x, Operator

  • MSR 3.0.x, Helm

Kubernetes orchestration

  • MSR 3.1.x, Swarm

Docker Swarm orchestration

  • MSR 3.1.x, Helm

  • MSR 3.1.x, Operator

  • MSR 3.1.x, Swarm

  • MSR 3.0.x, Helm

  • MSR 2.9.x

MKE orchestration

Migration paths

Note

MMT does not support migrating to 2.9.x target systems.

Supported migration paths

Source MSR system

Target MSR system

MSR 2.9

  • MSR 3.1, Helm

  • MSR 3.1, Swarm

  • MSR 3.0, Helm

MSR 3.0, Helm

  • MSR 3.1, Helm

  • MSR 3.1, Operator

  • MSR 3.1, Swarm

MSR 3.1, Helm

  • MSR 3.1, Operator

  • MSR 3.1, Swarm

MSR 3.1, Operator

  • MSR 3.1, Helm

  • MSR 3.1, Swarm

MSR 3.1, Swarm

  • MSR 3.1, Helm

  • MSR 3.1, Operator

The workflow for migrating MSR deployments is a multi-stage sequential operation.

Migrations from MSR 2.9.x:

  1. Source verification

  2. Estimation

  3. Extraction

  4. Transformation

  5. Restoration

Migrations from MSR 3.x.x:

  1. Extraction

  2. Restoration

Note

Refer to Kubernetes migrations for all migrations that include Kubernetes-based source or target systems.

Backup and restoration paths

You can use MMT to create an MSR system backup as well as to restore an MSR system from a previously created backup.

Source MSR system    Target MSR system

MSR 3.0, Helm        MSR 3.0, Helm
MSR 3.1, Helm        MSR 3.1, Helm
MSR 3.1, Operator    MSR 3.1, Operator
MSR 3.1, Swarm       MSR 3.1, Swarm

MMT architecture

The Mirantis Migration Tool is designed to work with MSR-based source registries.

The mmt command syntax is as follows:

mmt <command> <command-mode> --storage-mode <storage-mode> ... <directory>

The <command> argument represents the particular stage of the migration process:

Migration stage

Description

verify

Verification of the MSR source system configuration. The verify command must be run on the source MSR system. Refer to Verify the source system configuration for more information.

Applies only to migrations that originate from MSR 2.9.x systems.

estimate

Estimation of the number of images and the amount of metadata to migrate. The estimate command must be run on the source MSR system. Refer to Estimate the migration for more information.

Applies only to migrations that originate from MSR 2.9.x systems.

extract

Extracts metadata, storage configuration, and, in the case of the copy storage mode, blob storage from the source registry. The extract command must be run on the source MSR system. Refer to Extract the data for more information.

transform

Transformation of metadata from the source registry for use with the target MSR system. The transform command must be run on the target MSR system. Refer to Transform the data extract for more information.

Applies only to migrations that originate from MSR 2.9.x systems.

restore

Restores the transformed metadata, storage configuration, and, in the case of the copy storage mode, blob storage onto the target MSR environment. The restore command must be run on the target MSR system. Refer to Restore the data extract for more information.

The <command-mode> argument indicates the mode in which the command is to run, specific to the source or target registry. msr and msr3 are the only accepted values, as MMT currently supports only the migration of MSR registries.

The --storage-mode flag and its accompanying <storage-mode> argument indicate the storage mode to use in migrating the registry blob storage.

Storage mode

Description

inplace

The binary image data remains in its original location.

The target MSR system must be configured to use the same external storage as the source MSR system. Refer to configure-external-storage for more information.

Important

Due to its ability to handle large amounts of data, Mirantis recommends the use of inplace storage mode for most migration scenarios.

copy

The binary image data is copied from the source system to a local directory on the workstation that is running MMT. This mode allows movement from one storage location to another. It is especially useful in air-gapped environments.

The <directory> argument is used to share state across each command. The resulting directory is typically the destination for the data that is extracted from the source registry, which then serves as the source for the extracted data in subsequent commands.
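
To illustrate, a sketch of how a single shared directory threads through a copy-mode migration from a 2.9.x source; flags are abridged here, and the full commands appear in the sections that follow:

# Each stage reads from and writes to the same shared directory (/migration).
mmt extract msr --storage-mode copy ... /migration             # writes the data extract
mmt transform metadata msr --storage-mode copy ... /migration  # rewrites it for MSR 3.x
mmt restore msr --storage-mode copy ... /migration             # restores it to the target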

Migration prerequisites

You must meet certain prerequisites to successfully migrate an MSR system using the Mirantis Migration Tool (MMT):

  • Placement of the source MSR registry into read-only mode. To do this, execute the following API request:

    curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
    

    A 202 Accepted response indicates success.

    Important

    • To avoid data inconsistencies, the source registry must remain in read-only mode throughout the migration to the target MSR system.

      Revert the value of readOnlyRegistry to false after the migration is complete, as shown in the sketch following this list.

    • Be aware that MSR 3.0.x source systems cannot be placed into read-only mode. If you are migrating from a 3.0.x source system, be careful not to write any files during the migration process.

  • An active MSR 3.x.x installation, version 3.0.3 or later, to serve as the migration target.

  • Configuration of the namespace for the MSR target installation, which you set by running the following command:

    kubectl config set-context --current --namespace=<NAMESPACE-for-MSR-3.x.x-migration-target>
    
  • You must pull the MMT image to both the source and target systems, using the following command:

    docker pull registry.mirantis.com/msr/mmt
    
  • 2.9.x source systems only. Administrator credentials for the MKE cluster on which the source MSR 2.9 system is running.

  • Kubernetes target systems only. A kubectl config file, which is typically located in $HOME/.kube.

  • Kubernetes target systems only. Credentials within the kubectl config file that supply cluster admin access to the Kubernetes cluster that is running MSR 3.x.x.
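
As referenced in the read-only prerequisite above, a sketch for reverting the setting once the migration is complete; it is the same API request with the value flipped:

curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" \
  -H "accept: application/json" -H "content-type: application/json" \
  -d "{ \"readOnlyRegistry\": false }"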

Select the storage mode

Once the prerequisites are met, you can select from two available storage modes for migrating binary image data from a source MSR system to a target MSR system: inplace and copy.

Note

In all but one stage of the migration workflow, you will indicate the storage mode of choice in the storage-mode parameter setting. The step in which you do not indicate the storage mode is Restore the data extract.

Storage mode

Description

inplace

The binary image data remains in its original location.

The target MSR system must be configured to use the same external storage as the source MSR system. Refer to configure-external-storage for more information.

Important

Due to its ability to handle large amounts of data, Mirantis recommends the use of inplace storage mode for most migration scenarios.

copy

The binary image data is copied from the source system to a local directory on the workstation that is running MMT. This mode allows movement from one storage location to another. It is especially useful in air-gapped environments.

Important

Migrations from source MSR systems that use Docker volumes for image storage, such as local filesystem storage back end, can only be performed using the copy storage mode. Refer to Filesystem storage back ends for more information.

Kubernetes migrations

For all Kubernetes-based migrations, Mirantis recommends running MMT in a Pod rather than using the docker run deployment method. Migration scenarios in which this does not apply are limited to MSR 2.9.x source systems and Swarm-based MSR 3.1.x source and target systems.

Important

  • All Kubernetes-based migrations that use a filesystem back end must run MMT in a Pod.

  • When performing a restore from within the MMT Pod, the Persistent Volume Claim (PVC) used by the Pod must contain the data extracted from the source MSR system.

Before you perform the migration, deploy the following Pod onto your Kubernetes-based source and target systems:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mmt-serviceaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-role
rules:
  - apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mmt-rolebinding
subjects:
  - kind: ServiceAccount
    name: mmt-serviceaccount
roleRef:
  kind: Role
  name: mmt-role
  apiGroup: rbac.authorization.k8s.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mmt-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: v1
kind: Pod
metadata:
  name: mmt
spec:
  serviceAccountName: mmt-serviceaccount
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: msr
    - name: migration
      persistentVolumeClaim:
        claimName: mmt-pvc
  containers:
    - name: mmt
      image: registry.mirantis.com/msr/mmt:v2.0.0
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c", "tail -f /dev/null"]
      volumeMounts:
      - name: storage
        mountPath: /storage
      - name: migration
        mountPath: /migration
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 256Mi
  restartPolicy: Never

Note

  • In the rules section of the Role definition, add or remove permissions according to your requirements.

  • For the PersistentVolumeClaim definition, modify the spec.resources.requests.storage value according to your requirements.

  • In the Pod definition, the spec.volumes[0].persistentVolumeClaim.claimName field refers to the PVC used by the target MSR 3.x system. Modify the value as required.

Step-by-step migration

Once you have met the Migration prerequisites, configured your source MSR system and your target MSR system, and selected the storage mode, you can perform the migration workflow as a sequence of individual steps.

Migrations from MSR 2.9.x to 3.x.x must follow each of the five migration steps, whereas migrations from MSR 3.x.x source systems skip the verify, estimate, and transform steps, and instead begin with extract before proceeding directly to restore.

Important

All MMT commands that are run on MSR 3.x.x systems, including both source and target deployments, must include the --fullname option, which specifies the name of the MSR instance.

To obtain the name of your MSR instance:

helm ls
Verify the source system configuration

Note

If your migration originates from MSR 3.x.x, proceed directly to Extract the data.

The first step in migrating your source MSR system to a target MSR system is to verify the configuration of the source system.

docker run \
--rm \
-it \
registry.mirantis.com/msr/mmt:<mmt-version> \
verify msr  \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration

Note

Migrations that use the copy storage mode and a filesystem storage back end must also include the --mount option, to specify the MSR 2.9.x Docker volume that will be mounted to the MMT container at the /storage directory. As --mount is a Docker option, it must be included prior to the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.

--mount source=dtr-registry-<replica-id>,target=/storage

To obtain the MSR replica ID, run the following command from within an MSR node:

docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
Command line parameters

Parameter

Description

source-mke-url

Set the URL for the source Mirantis Kubernetes Engine (MKE) system.

source-username

Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-password

Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-url

Set the URL for the source MSR system.

storage-mode

Set the registry migration storage mode.

Valid values: inplace, copy

source-insecure-tls

Optional. Set whether to use an insecure connection.

Valid values: true (skip certificate validation when communicating with the source system), false (perform certificate validation when communicating with the source system)

Default: false

Example output:

Note

Sizing information displays only when a migration is run in copy storage mode.

INFO[0000] Logging level set to "info"
INFO[0000] Migration will be performed with "copy" storage mode
INFO[0000] Verifying health of source MSR <source-msr-url>
INFO[0000] ok
INFO[0000] Verifying provided credentials with source MSR...
INFO[0000] ok
INFO[0000] Verifying health of source MKE <source-mke-url>
INFO[0000] ok
INFO[0000] Verifying provided credentials with source MKE...
INFO[0001] ok
INFO[0001] Extracting MSR storage configuration
INFO[0001] Checking the size of used source storage...
INFO[0001] Retrieving AWS S3 storage size
INFO[0001] Source has size 249 MB
INFO[0001] ok
Estimate the migration

Note

If your migration originates from MSR 3.x.x, proceed directly to Extract the data.

Before extracting the data for migration you must estimate the number of images and the amount of metadata to migrate from your source MSR system to the new MSR target system. To do so, run the following command on the source MSR system.

docker run \
--rm \
-it \
-v <local-migration-directory>:/migration:Z \
registry.mirantis.com/msr/mmt:<mmt-version> \
estimate msr  \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration

Note

Migrations that use the copy storage mode and a filesystem storage back end must also include the --mount option, to specify the MSR 2.9.x Docker volume that will be mounted to the MMT container at the /storage directory. As --mount is a Docker option, it must be included prior to the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.

--mount source=dtr-registry-<replica-id>,target=/storage

To obtain the MSR replica ID, run the following command from within an MSR node:

docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
Command line parameters

Parameter

Description

source-mke-url

Set the URL for the source Mirantis Kubernetes Engine (MKE) system.

source-username

Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-password

Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-url

Set the URL for the source MSR system.

storage-mode

Set the registry migration storage mode.

Valid values: inplace, copy

source-insecure-tls

Optional. Set whether to use an insecure connection.

Valid values: true (skip certificate verification when communicating with the source system), false (perform certificate validation when communicating with the source system)

Example output:

Source Registry: "https://172.17.0.1" (Type: "msr") with authentication data from MKE: "https://172.17.0.1:444"
Mode: "copy"
Metadata: 30 MB
Image tags: 2 (2.8 MB)

As a result, all existing MSR storage is copied.

Extract the data

You can extract metadata and, optionally, binary image data from an MSR source system using commands that are presented herein.

Important

To avoid data inconsistencies, the source registry must remain in read-only mode throughout the migration to the target MSR system.

Be aware that MSR 3.0.x source systems cannot be placed into read-only mode. If you are migrating from a 3.0.x source system, be careful not to write any files during the migration process.

extract msr (2.9.x source systems)

Use the extract msr command for migrations that originate from an MSR 2.9.x system.

docker run \
--rm -it \
-v <local-migration-directory>:/migration:Z \
registry.mirantis.com/msr/mmt:<mmt-version> \
extract msr \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration

Note

Migrations that use the copy storage mode and a filesystem storage back end must also include the --mount option, to specify the MSR 2.9.x Docker volume that will be mounted to the MMT container at the /storage directory. As --mount is a Docker option, it must be included prior to the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.

--mount source=dtr-registry-<replica-id>,target=/storage

To obtain the MSR replica ID, run the following command from within an MSR node:

docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
Command line parameters

Parameter

Description

source-mke-url

Set the URL for the source Mirantis Kubernetes Engine (MKE) system.

source-username

Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-password

Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user.

source-url

Set the URL for the source MSR system.

storage-mode

Set the registry migration storage mode.

Valid values: inplace, copy

source-insecure-tls

Optional. Set whether to use an insecure connection.

Valid values: true (skip certificate verification when communicating with the source system), false (perform certificate validation when communicating with the source system)

disable-analytics

Optional. Disables MMT metrics collection for the extract command. You must include the flag each time you run the command.

Example output:

The Mirantis Migration Tool extracted your registry of MSR 2.9, using the
following parameters:
Source Registry: https://172.17.0.1
Mode: copy
Image data: 2 blobs (2.8 MB)

The data extract is rendered as a TAR file with the name dtr-metadata-mmt-backup.tar in the <local-migration-directory>. The file name is later converted to msr-backup-<MSR-version>-mmt.tar, following the transform step.

extract msr3 (3.x.x source systems)

Available since MMT 2.0.0

Use the extract msr3 command for migrations that originate from an MSR 3.x.x system.

  1. Deploy MMT as a Pod onto your MSR source cluster.

  2. Exec into the MMT Pod.

  3. Execute the extract command:

    ./mmt extract msr3 \
    --storage-mode <inplace|copy> \
    --fullname <source-MSR-instance-name> \
    /migration
    

For Swarm-based source systems, execute the following command instead on a Swarm worker node on which MSR is installed:

docker run \
--rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v <local-migration-directory>:/migration:Z \
--mount source=msr_msr-storage,target=/storage \
--network msr_msr-ol \
registry.mirantis.com/msr/mmt:<mmt-version> \
extract msr3 \
--storage-mode <inplace|copy> \
--fullname <source-MSR-instance-name> \
--swarm \
/migration
Command line parameters

Parameter

Description

disable-analytics

Optional. Disables MMT metrics collection for the extract command. You must include the flag each time you run the command.

fullname

Optional. Sets the name of the MSR instance from which MMT will perform the data extract.

Default: msr

ignore-events-table

Optional. Excludes the events table from the data extract.

parallel-io-count

Optional. Sets the number of parallel IO copies when performing blob storage copy tasks.

Default: 4

signed-images-only

Optional. Excludes unsigned images from the data extract.

storage-mode

Sets the registry migration storage mode.

Valid values: inplace, copy

swarm

Optional. Indicates that the source system runs on a Swarm cluster.

Example output:

INFO[0000] Migration will be performed with "inplace" storage mode
INFO[0000] Backing up metadata...
{"level":"info","msg":"Writing RethinkDB backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backing up MSR","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Recording time of backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backup file checksum is: 0e2134abf81147eef953e2668682b5e6b0e9761f3cbbb3551ae30204d0477291","time":"2023-07-06T01:25:51Z"}
INFO[0002] The Mirantis Migration Tool extracted your registry of MSR 3.x, using the following parameters:
Source Registry: MSR3
Mode: metadata only
Existing MSR3 storage will be backed up.
The source registry must remain in read-only mode for the duration of the operation to avoid data inconsistencies.

The data extract is rendered as a TAR file with the name msr-backup-<MSR-version>-mmt.tar in the <local-migration-directory>.

Transform the data extract

Note

If your migration originates from MSR 3.x.x, proceed directly to Restore the data extract.

Once you have extracted the data from your source MSR system, you must transform the metadata into a format that is suitable for migration to an MSR 3.x.x system.

  1. Deploy MMT as a Pod onto your MSR target cluster.

  2. Exec into the MMT Pod.

  3. Execute the transform command:

    ./mmt transform metadata msr \
    --fullname <dest-MSR-instance-name> \
    --storage-mode <inplace|copy> \
    --enzipassword <source-MSR-password> \
    /migration
    

    Note

    The value of --enzipassword is the MSR source system password. This optional parameter is required when the source and target MSR passwords differ.

For Swarm-based target systems, run the following command from inside a worker node on which MSR is installed:

docker run \
--rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v <local-migration-directory>:/migration:Z \
--mount source=msr_msr-storage,target=/storage \
--network msr_msr-ol \
registry.mirantis.com/msr/mmt:$MMT_VERSION \
transform metadata msr \
--storage-mode <inplace|copy> \
--enzipassword <source-MSR-password> \
--swarm=true \
/migration

Note

The value of --enzipassword is the MSR source system password. This optional parameter is required when the source and target MSR passwords differ.

Command line parameters

Parameter

Description

storage-mode

Set the registry migration storage mode.

Valid values: inplace, copy

disable-analytics

Optional. Disables MMT metrics collection for the transform command. You must include the flag each time you run the command.

swarm

Optional. Specifies that the source system runs on Docker Swarm.

Default: false

fullname

Sets the name of the MSR instance to which MMT will migrate the transformed data extract. Use only when the target system runs on a Kubernetes cluster.

Default: msr

Example output:

Writing migration summary file
Finalizing backup directory structure
Creating tar file
Cleaning transform operation artifacts from directory: "/home/<user-directory>/tmp/migrate"
Restore the data extract

You can restore a transformed data extract into a target MSR environment using commands that are presented herein on the target MSR system.

  1. Deploy MMT as a Pod onto your MSR target cluster.

  2. Exec into the MMT Pod.

  3. Execute the restore command:

    ./mmt restore msr \
    --storage-mode <inplace|copy> \
    --fullname <target-MSR-instance-name> \
    /migration
    

    Example output:

    Successfully restored metadata from:
    "/home/<user-directory>/tmp/migrate/msr-backup-<MSR-version>-mmt.tar"
    
  4. Register MSR with eNZi:

    kubectl exec -it deployment/<msr-instance-name>-api -- \
    msr auth register \
    --username <username> \
    --password <password> \
    https://<msr-instance-name>-enzi:4443/enzi
    
  5. Restart the affected MSR Pods:

    kubectl -n <msr-namespace> rollout restart deployment \
    <msr-instance-name>-enzi-api \
    <msr-instance-name>-api \
    <msr-instance-name>-registry \
    <msr-instance-name>-garant \
    <msr-instance-name>-jobrunner-<postfix>
    
  1. For Swarm-based target systems, run the following command from inside a worker node on which MSR is installed:

    docker run \
    --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v <local-migration-directory>:/migration:Z \
    --mount source=msr_msr-storage,target=/storage \
    --network msr_msr-ol \
    registry.mirantis.com/msr/mmt:$MMT_VERSION \
    restore msr \
    --storage-mode <inplace|copy> \
    --swarm \
    /migration
    

    Example output:

    Successfully restored metadata from:
    "/home/<user-directory>/tmp/migrate/msr-backup-<MSR-version>-mmt.tar"
    
  2. Register MSR with eNZi:

    docker exec -it $(docker ps -q --filter "name=msr-api") sh -c 'msr auth register https://$TASK_SLOT.msr-enzi-api:4443/enzi'
    
  3. Restart the affected services:

    docker service update --force msr_msr-enzi-api && \
    docker service update --force msr_msr-api-server && \
    docker service update --force msr_msr-registry && \
    docker service update --force msr_msr-garant && \
    docker service update --force msr_msr-jobrunner
    
Command line parameters

Parameter

Target system orchestrator

Description

backup-file

Kubernetes, Swarm

Optional. Sets the path to the data extract from which to restore your MSR deployment.

Default: Data extract in the current directory.

blob-dir

Kubernetes, Swarm

Optional. Sets the path to the blob storage directory from which to restore your MSR image blobs. Use only if extraction was performed in copy mode.

Default: Blob storage in the current directory.

disable-analytics

Kubernetes, Swarm

Optional. Disables MMT metrics collection for the restore command. You must include the flag each time you run the command.

enzipassword

Swarm

Optional. Sets the eNZi admin password.

fullname

Kubernetes

Sets the name of the MSR instance to which MMT will restore the data extract.

Note

Use only when the target system runs on a Kubernetes cluster.

Default: msr

manifests-dir

Kubernetes, Swarm

Optional. Sets the path to the manifests directory from which to load the configuration.

Default: Manifests in the current directory.

msr-chart

Kubernetes

Optional. Sets the location of the MSR 3.x chart.

Valid values: path to chart directory or packaged chart, URL for MSR repository, or fully qualified chart URL.

Default: https://registry.mirantis.com/charts/msr/msr.

namespace

Kubernetes

Optional. Sets the namespace scope for the given command.

Default: default.

parallel-io-count

Kubernetes, Swarm

Optional. Sets the number of parallel IO copies when performing blob storage copy tasks.

Default: 4

storage-mode

Kubernetes, Swarm

Sets the registry migration storage mode.

Valid values: inplace, copy

swarm

Swarm

Optional. Specifies that the source system runs on Docker Swarm.

Default: false

Settings not migrated

MSR settings that do not persist through the migration process include:

  • Single Sign-On, located in the General tab of the MSR web UI.

  • Automatic Scanning Timeouts, located in the Security tab of the MSR web UI.

  • Vulnerability database

  • Results of image scans

  • MSR license

Telemetry

Available as of MMT 1.0.1

By default, MMT sends usage metrics to Mirantis whenever you run the extract, transform, and restore commands. To disable this functionality, include the --disable-analytics flag whenever you issue any of these commands.
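
For example, a sketch of an extract invocation with telemetry disabled, using flags documented in Extract the data:

./mmt extract msr3 \
--storage-mode inplace \
--fullname msr \
--disable-analytics \
/migration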

MMT collects the following metrics to improve the product and facilitate its use:

Metric

Description

BlobImageCount

Number of images stored in the source MSR system.

BlobStorageSize

Total size of all the images stored in the source MSR system.

EndTime

Time at which the command stops running.

ErrorCount

Number of errors that occurred during the given migration step.

MigrationStep

Migration step for which metrics are being collected. For example, extract.

StartTime

Time at which the command begins running.

Status

Command status.

In the case of command failure, MMT reports all associated error messages.

StorageMode

Storage mode used for migration.

Valid values: copy and inplace.

StorageType

Storage type used in the MSR source and target systems.

Valid values: s3, azure, swift, gcs, filesystem, and nfs.

UserId

Source MSR IP address or URL that is used to associate metrics from separate commands.

Troubleshoot migration

You can address various potential MMT issues using the tips and suggestions detailed herein.

Filesystem storage back ends

Migrations from source MSR 2.9.x systems that use Docker volumes for image storage can only be performed using the copy storage mode. Such migrations must have the Docker volume and associated Persistent Volume Claims (PVCs) mounted to the MMT container.

Important

To run the estimate and extract commands on a filesystem back end, you must download and configure the MKE client bundle.

  1. Obtain the name of the source MSR 2.9.x volume:

    docker volume ls --filter name=dtr-registry
    

    The volume name returns as dtr-registry-<volume-id>.

  2. Mount the source MSR 2.9.x volume to the MMT container at /storage to provide the container with access to the volume data, for both the Estimate and Extract migration stages.

    Estimate

    docker run \
    --rm \
    -it \
    -v <local-migration-directory>:/migration:Z \
    --mount source=<dtr-registry-id>,target=/storage \
    registry.mirantis.com/msr/mmt:<mmt-version> \
    estimate msr  \
    --source-mke-url <mke-url> \
    --source-username <mke-admin-username> \
    --source-password <mke-admin-password> \
    --source-url <msr-2.9-url> \
    --storage-mode copy \
    --source-insecure-tls \
    /migration
    

    Extract

    docker run \
    --rm \
    -it \
    -v <local-migration-directory>:/migration:Z \
    --mount source=<dtr-registry-id>,target=/storage \
    registry.mirantis.com/msr/mmt:<mmt-version> \
    extract msr  \
    --source-mke-url <mke-url> \
    --source-username <mke-admin-username> \
    --source-password <mke-admin-password> \
    --source-url <msr-2.9-url> \
    --storage-mode copy \
    --source-insecure-tls \
    /migration
    
  3. Obtain the name of the target MSR 3.0 volume:

    kubectl get pvc <instance-name> --template {{.spec.volumeName}}
    
  4. Migrate the data extract to the PVC of the target MSR 3.0.x system. To do this, you must run the MMT container in a Pod with the PVC mounted at the container /storage directory.

    Note

    In the event the PVC is not mounted at the MMT container /storage directory, the Restore migration step may still complete, and the target MSR 3.0.x system may display the restored data in the MSR web UI. Pulling images from the target system, however, will fail, as the source MSR 2.9.x image data is not migrated to the MSR 3.0.x PVC.

    Use the YAML template that follows as an example for how to create the MMT Pod and other required Kubernetes objects:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <mmt-serviceaccount-name>
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: <mmt-role-name>
    rules:
      # Add/remove more permissions as needed
      - apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
        resources: ["*"]
        verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <mmt-rolebinding-name>
    subjects:
      - kind: ServiceAccount
        name: <mmt-serviceaccount-name>
    roleRef:
      kind: Role
      name: <mmt-role-name>
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: <mmt-pod-name>
    spec:
      serviceAccountName: <mmt-serviceaccount-name>
      volumes:
        - name: storage
          persistentVolumeClaim:
            # This is the PersistentVolumeClaim that the destination/target MSR 3.0.x is using.
            # This PVC is acting as the filesystem storage backend for MSR 3.0.x.
            claimName: <msr-pvc-name>
      containers:
        - name: msr-migration-tool
          image: registry.mirantis.com/msr/mmt:<mmt-image-tag>
          imagePullPolicy: IfNotPresent
          command: [ "sh", "-c", "while true; do sleep 30; done;" ]
          volumeMounts:
          - name: storage
            mountPath: /storage
      restartPolicy: Never
    
  5. Once <mmt-pod-name> is running, copy your source MSR 2.9.x data extract to the /migration location of the MMT container running within the Pod:

    kubectl cp <local-migration-directory> <mmt-pod-name>:/migration
    
  6. Open a shell into the <mmt-pod-name>:

    kubectl exec --stdin --tty <mmt-pod-name> -- sh
    
  7. Perform the Transform step from within the container:

    ./mmt transform metadata msr  \
    --storage-mode copy \
    /migration
    
  8. While still inside the container, perform the Restore migration step.

    ./mmt restore msr \
    --fullname <MSR-3.0.x-Helm-release-name> \
    --storage-mode copy \
    /migration
    
  9. Delete the Kubernetes resources you created for MMT.
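
    For example, assuming the placeholder names used in the template above:

    kubectl delete pod <mmt-pod-name>
    kubectl delete rolebinding <mmt-rolebinding-name>
    kubectl delete role <mmt-role-name>
    kubectl delete serviceaccount <mmt-serviceaccount-name>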

Too many open files

When migrating a large source installation to your MSR target environment, MMT can fail due to too many files being open. If this happens, the following error message displays:

failed to extract blob data to filesystem: failed to copy file <filename>
from storage driver to <registry name>: error creating file: <filename>:
open <filename>: too many open files

To resolve the issue, run the following command in the terminal on which you are running MMT:

ulimit -n 1048576

Failure to load data error message

During the Restore stage of the migration workflow you may encounter the following error:

failed to determine custom restore path options: failed to get MSR version
information: no pod found in namespace with component label: api: default

To resolve the issue:

  1. Run kubectl config get-contexts to list all available contexts.

  2. Find the correct context and run the following command:

    kubectl config use-context <name-of-context-that-connects-to-cluster-running-MSR-3.0>
    
No space left on device

During the Extract stage of the migration workflow, you may encounter the following error message:

failed to extract blob data to filesystem: failed to copy file <filename>
from storage driver to <registry location>: error copying file <filename> to
<registry location>: write <filename>: no space left on device

To resolve this error ensure the directory provided as a parameter has enough space to store the migration data.

Failed to estimate migration error message

During the Estimate stage of the migration workflow, you may encounter the following error message:

failed to estimate MSR registry migration: failed to verify given directory: unable to get directory FileInfo: /mnt/test2: stat /mnt/test2: no such file or directory

To resolve the issue:

  1. When running MMT from a Docker container, ensure that the path provided for storing migration data has been mounted as a Docker volume to the local machine.

  2. When running MMT outside of Docker, ensure the path provided exists.

rethinkdb row cannot be restored

During the Restore stage of the migration workflow, you may encounter an error message that indicates an issue with rethinkdb row restoration:

Can't restore rethinkdb row: rethinkdb: Cannot perform write: lost contact
with primary replica in:\n<rethink-db-statement>

Kubernetes deployments

The error is reported when the rethinkdb Pod for the destination MSR 3.x installation does not have enough disk space available due to the sizing of its provisioned volume.

  1. Edit the values.yaml file you used for MSR deployment, changing the rethinkdb.cluster.persistentVolume.size value to match the source RethinkDB volume size.

  2. Run the helm upgrade --values <path to values.yaml> msr msr/msr command.

Swarm deployments

The error is reported when the node on which RethinkDB is running on the target MSR system does not have enough available disk space.

  1. SSH into the node on which RethinkDB is running.

  2. Review the amount of disk space used by the docker daemon on the node:

    docker system df
    
  3. Review the total size and available storage of the node filesystem:

    df
    
  4. Allocate more storage to the host machine on which the target node is running.

Admin password on MSR 3.0.x target no longer works

As a result of the migration, the source MSR system security settings completely replace the settings in the target MSR system. Thus, to gain admin access to the target system, you must use the admin password for the source system.

Blob image copy considerations

MMT copies image blobs using several parallel subroutines. The number of subroutines is controlled by the --parallel-io-count parameter, which defaults to 4.

Image blobs are copied only when you use the copy storage mode for your migration, during the Extract and Restore stages of the migration workflow. For optimum performance, allocate a number of CPUs to the MMT container (--cpus=<value>) equal to the --parallel-io-count value plus one for MMT itself.
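
The following sketch, based on the copy-mode extract command shown earlier, raises the parallelism to 8 and reserves one CPU more than the --parallel-io-count value; the flag values are illustrative placeholders, not recommendations:

docker run \
--rm -it \
--cpus=9 \
-v <local-migration-directory>:/migration:Z \
registry.mirantis.com/msr/mmt:<mmt-version> \
extract msr \
--source-mke-url <mke-url> \
--source-username <mke-admin-username> \
--source-password <mke-admin-password> \
--source-url <msr-2.9-url> \
--storage-mode copy \
--parallel-io-count 8 \
/migration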

Additional parameters

Errors can occur in the use of MMT that require the use of additional parameters at various stages of the migration process.

For scenarios wherein the pulling of Docker images has failed, you can use the parameters detailed in the following table to pull the needed images to your MKE cluster running MSR 2.9.x.

Command line parameters

Parameter

Description

bootstrap-image-name

Set the MSR 2.9.x dtr image that is to run in the MSR 2.9.x environment during migration.

Default: dtr

bootstrap-image-repo

Set the MSR 2.9.x repository within which the dtr image will run in the MSR 2.9.x environment during migration.

Default: mirantis

bootstrap-image-tag

Set the image tag of the MSR 2.9.x repository where the dtr image is to run in the MSR 2.9.x environment during migration.

Defaults to the version of the 2.9.x source system.

mmt-image-name

Set the MMT Docker image, which you use during migration to run a copy of the MMT image within the 2.9.x (MKE) environment.

Default: mmt

mmt-image-repo

Set the MMT repository that is to be used during migration to run a copy of the MMT image within the 2.9.x (MKE) environment.

Default: registry.mirantis.com/msr

mmt-image-tag

Set the image tag of the MMT Docker image that is to be used during migration to run a copy of the MMT image within the MSR 2.9.x (MKE) environment.

Default: 1.0

Additional volume mappings for containers

During the Transform and Restore stages of the migration workflow, you may encounter the following error message:

[unable to read client-cert
/home/<username>/.minikube/profiles/minikube/client.crt for minikube due to
open /home/<username>/.minikube/profiles/minikube/client.crt: no such file
or directory, unable to read client-key
/home/<username>/.minikube/profiles/minikube/client.key for minikube due to
open /home/<username>/.minikube/profiles/minikube/client.key: no such file
or directory, unable to read certificate-authority
/home/<username>/.minikube/ca.crt for minikube due to open
/home/<username>/.minikube/ca.crt: no such file or directory]

To address this error, add additional volume mappings to running Docker containers as needed:

-v $HOME/.minikube/profiles/minikube:/.minikube/profiles/minikube
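
For example, a docker run invocation of the Transform step with the additional minikube mappings might look like the following sketch; the kubeconfig and ca.crt mappings are assumptions drawn from a typical minikube setup and the error message above, and the exact set of mounts depends on your environment:

docker run \
--rm -it \
-v <local-migration-directory>:/migration:Z \
-v $HOME/.kube:/.kube \
-v $HOME/.minikube/profiles/minikube:/.minikube/profiles/minikube \
-v $HOME/.minikube/ca.crt:/.minikube/ca.crt \
registry.mirantis.com/msr/mmt:<mmt-version> \
transform metadata msr \
--storage-mode copy \
/migration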

Failed to query for metadata size

You must pull the MMT image to both your source MSR system and your target MSR system, otherwise the migration will fail with the following error message:

Failed to query RethinkDB for total metadata size: failed to convert query
response to int: strconv.Atoi: parsing "": invalid syntax
Continuing without total metadata size ...

To remedy this you must pull the MMT image to each of the two systems, using the following commands:

docker login registry.mirantis.com -u <username>
docker pull registry.mirantis.com/msr/mmt

flag provided but not defined: -append

MSR 3.0.3 or later must be running on your target MSR 3.x cluster, otherwise the restore step will fail with the following error message:

{"level":"fatal","msg":"flag provided but not defined: -append","time":"<time>"}
failed to restore metadata from "/migration/msr-backup-<msr-version>-mmt.tar": restore failed: command terminated with exit code 1

To resolve the issue, upgrade your target cluster to MSR 3.0.3 or later. Refer to Upgrade MSR for more information.

Storage configuration is out of sync with metadata

With the inplace storage mode, an error message will display if you fail to configure the external storage location for your target MSR system to the same storage location that your source MSR system uses:

Storage configuration may be out of sync with metadata: storage backend is
missing expected files (expected BlobStoreID <BlobStoreID>)

To remedy the error, do one of the following:

  • Configure your target MSR system to use the same external storage as your source MSR system. Refer to configure-external-storage for more information.

  • Rerun the migration using the copy storage mode.

  • Manually copy the files from the source MSR system to the target MSR system.

The estimate command returns an image data value of 0

Running the estimate command on a filesystem back end can result in the display of an image data size of zero bytes:

Image data: 0 blobs (0 B)

Note

If the estimate command produces this issue, the same zero values will carry forward to the output of the extract command.

To resolve this issue, you must download and configure the MKE client bundle before you perform the migration.
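
As a sketch of one way to obtain the bundle from the command line, assuming the standard MKE authentication (/auth/login) and client bundle (/api/clientbundle) endpoints and that curl, jq, and unzip are available:

# Request an authentication token from MKE
AUTHTOKEN=$(curl -sk -d '{"username":"<mke-admin-username>","password":"<mke-admin-password>"}' https://<mke-url>/auth/login | jq -r .auth_token)

# Download and unpack the client bundle, then load its environment
curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://<mke-url>/api/clientbundle -o client-bundle.zip
unzip client-bundle.zip -d client-bundle
cd client-bundle && eval "$(<env.sh)"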

Unable to get FileInfo: /blobs

Running the restore command on a filesystem back end can result in the following error, indicating that the command did not succeed:

failed to verify given file path: unable to get FileInfo: /blobs

To resolve the issue, you must download and configure the MKE client bundle before you perform the migration.

failed to run container: mmt-dtr-rethinkdb-backup

During the Estimate and Extract stages of the migration workflow, you may encounter the following error message:

FATA[0001] failed to extract MSR metadata: \
failed to run container: \
mmt-dtr-rethinkdb-backup: \
Error response from daemon: \
Conflict: \
The name mmt-dtr-rethinkdb-backup is already assigned. \
You have to delete (or rename) that container to be able to assign \
mmt-dtr-rethinkdb-backup to a container again.

To resolve the issue:

  1. Identify the node on which the mmt-dtr-rethinkdb-backup container was created.

  2. From the node on which the mmt-dtr-rethinkdb-backup container was created, delete the RethinkDB backup container:

    docker rm -f mmt-dtr-rethinkdb-backup
    

MMT release notes

MMT 2.0.1 (current)

Patch release for the MMT 2.0 release that introduces the following:

  • Addressed issues

MMT 2.0.0

Initial MMT 2.0 release that introduces the following key features:

  • New migration paths

  • Additional command line operations

MMT 1.0.1

Patch release for MMT 1.0 that introduces the following key feature:

  • MMT usage metrics

2.0.1

(2023-11-20)

Enhancements
  • MMT supports the following migration paths:

    Source          Destination

    2.9             3.0 Helm, 3.1 Swarm
    3.0 Helm        3.0 Helm, 3.1 Operator, 3.1 Swarm
    3.1 Helm        3.1 Operator
    3.1 Operator    3.1 Operator
    3.1 Swarm       3.1 Swarm

  • Implemented NFS storage migration for inplace mode.

  • Implemented the new --swarm option, which enables extract/transform/restore from MSR 2.9 to MSR 3.x on Docker Swarm.

  • Implemented mmt extract msr3, which backs up MSR 3.x metadata and images, for Swarm, Helm, and MSR Operator deployments.

  • Implemented MSR Operator support in MMT for transform/restore.

  • Introduced the --enzipassword option, which adds the eNZi admin password to Swarm and MSR Operator restore.

  • Fixed the Helm upgrade process.

  • Fixed an issue wherein container exec failed with an unknown HTTP header.

  • Fixed MMT versioning.

  • Fixed the process of pulling an MMT image on a random node during the estimation step.

  • Upgraded:

    • Alpine to version 3.18

    • Go to 1.20.10

    • Go modules to fix CVEs

Addressed issues
  • [FIELD-6379] Fixed an issue wherein the estimation command in air-gapped environments failed due to attempts to pull the MMT image on a random node. The fix ensures that the MMT image is pulled on the required node, where the estimation command is executed.

2.0.0

(2023-09-28)

Enhancements
  • MMT supports the following migration paths:

    Source          Destination

    2.9             3.0 Helm, 3.1 Swarm
    3.0 Helm        3.0 Helm, 3.1 Swarm
    3.1 Swarm       3.1 Swarm

  • MMT now includes the following command line operations:

    extract msr3 command

    Performs the extract migration step on MSR 3.x source systems.

    --swarm option

    Used in conjunction with the extract msr3, transform, and restore msr commands to indicate that the source or target system runs on a Swarm cluster.

1.0.1

(2023-05-16)

Enhancements

[ENGDTR-3102] MMT now collects migration data, which Mirantis will use to identify ways to improve the product and facilitate its use.

Learn more

Telemetry

Addressed issues
  • [ENGDTR-3517] Fixed an issue wherein the restore command did not continue from its stopping point when it was terminated prior to completion.

  • [ENGDTR-3385] When run again following an interruption, the extract command now logs the number of blobs that it previously copied.

  • To improve MMT CLI help text readability, commands are now grouped into types.

Security
  • The critical and high severity CVEs addressed in this MMT release are detailed in the following table:

    CVE

    Status

    Problem details from upstream

    CVE-2022-1996

    Resolved

    Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.

    CVE-2022-41716

    Resolved

    Due to unsanitized NUL values, attackers may be able to maliciously set environment variables on Windows. In syscall.StartProcess and os/exec.Cmd, invalid environment variable values containing NUL values are not properly checked for. A malicious environment variable value can exploit this behavior to set a value for a different environment variable. For example, the environment variable string "A=B\x00C=D" sets the variables "A=B" and "C=D".

Get Support

Subscriptions for MKE, MSR, and MCR provide access to prioritized support for designated contacts from your company, agency, team, or organization. Mirantis service levels for MKE, MSR, and MCR are based on your subscription level and the Cloud (or cluster) you designate in your technical support case. Our support offerings are described here, and if you do not already have a support subscription, you may inquire about one via the contact us form.

Mirantis’ primary means of interacting with customers who have technical issues with MKE, MSR, or MCR is our CloudCare Portal. Access to our CloudCare Portal requires prior authorization by your company, agency, team, or organization, and a brief email verification step. After Mirantis sets up its back end systems at the start of the support subscription, a designated administrator at your company, agency, team or organization, can designate additional contacts. If you have not already received and verified an invitation to our CloudCare Portal, contact your local designated administrator, who can add you to the list of designated contacts. Most companies, agencies, teams, and organizations have multiple designated administrators for the CloudCare Portal, and these are often the persons most closely involved with the software. If you don’t know who is a local designated administrator, or are having problems accessing the CloudCare Portal, you may also send us an email.

Once you have verified your contact details via our verification email, and changed your password as part of your first login, you and all your colleagues will have access to all of the cases and resources purchased. We recommend you retain your ‘Welcome to Mirantis’ email, because it contains information on accessing our CloudCare Portal, guidance on submitting new cases, managing your resources, and so forth. Thus, it can serve as a reference for future visits.

We encourage all customers with technical problems to use the knowledge base, which you can access on the Knowledge tab of our CloudCare Portal. We also encourage you to review the MKE, MSR, and MCR products documentation which includes release notes, solution guides, and reference architectures. These are available in several formats. We encourage use of these resources prior to filing a technical case; we may already have fixed the problem in a later release of software, or provided a solution or technical workaround to a problem experienced by other customers.

One of the features of the CloudCare Portal is the ability to associate cases with a specific MKE cluster; these are known as “Clouds” in our portal. Mirantis has pre-populated customer accounts with one or more Clouds based on your subscription(s). Customers may also create and manage their Clouds to better match how you use your subscription.

We also recommend and encourage our customers to file new cases based on a specific Cloud in your account. This is because most Clouds also have associated support entitlements, licenses, contacts, and cluster configurations. These greatly enhance Mirantis’ ability to support you in a timely manner.

You can locate the existing Clouds associated with your account by using the “Clouds” tab at the top of the portal home page. Navigate to the appropriate Cloud, and click on the Cloud’s name. Once you’ve verified that Cloud represents the correct MKE cluster and support entitlement, you can create a new case via the New Case button towards the top of the Cloud’s page.

One of the key items required for technical support of most MKE, MSR, and MCR cases is the support dump. This is a compressed archive of configuration data and log files from the cluster. There are several ways to gather a support dump, each described in the paragraphs below. After you have collected a support dump, you can upload the dump to your new technical support case by following this guidance and using the “detail” view of your case.

Use the Web UI to get a support dump

To get the support dump from the web UI:

  1. Log into the MKE web UI with an administrator account.

  2. In the top-left menu, click your username and choose Support Dump.

It may take a few minutes for the download to complete.

To submit the support dump to Mirantis Customer Support:

  1. Click Share support bundle on the success prompt that displays when the support dump finishes downloading.

  2. Fill in the Jira feedback dialog, and click Submit.

Use the CLI to get a support dump

To get the support dump from the CLI, use SSH to log into a node and run:

MKE_VERSION=$((docker container inspect ucp-proxy --format '{{index .Config.Labels "com.docker.ucp.version"}}' 2>/dev/null || echo -n 3.2.6)|tr -d [[:space:]])

docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
--log-driver none \
mirantis/ucp:${MKE_VERSION} \
support > \
docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz

Note

The support dump only contains logs for the node where you’re running the command. If your MKE is highly available, you should collect support dumps from all of the manager nodes.

To submit the support dump to Mirantis Customer Support, add the --submit option to the support command, as shown in the sketch after this list. This sends the support dump along with the following information:

  • Cluster ID

  • MKE version

  • MCR version

  • OS/architecture

  • Cluster size
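
For example, reusing the MKE_VERSION variable set above, a sketch of the same dump command with the --submit option added:

docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
--log-driver none \
mirantis/ucp:${MKE_VERSION} \
support --submit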

Use PowerShell to get a support dump

On Windows worker nodes, run the following command to generate a local support dump:

docker container run --name windowssupport -v 'C:\ProgramData\docker\daemoncerts:C:\ProgramData\docker\daemoncerts' -v 'C:\Windows\system32\winevt\logs:C:\eventlogs:ro' mirantis/ucp-dsinfo-win:3.2.6; docker cp windowssupport:'C:\dsinfo' .; docker rm -f windowssupport

This command creates a directory named dsinfo in your current directory. If you want an archive file, you need to create it from the dsinfo directory.

API Reference

The Mirantis Secure Registry (MSR) API is a REST API, available using HTTPS, that enables programmatic access to resources managed by MSR.

CLI Reference

The CLI tool has commands to install, configure, back up, and uninstall Mirantis Secure Registry (MSR). By default, the tool runs in interactive mode and prompts you for the values it needs.

Additional help is available for each command with the --help option.

Syntax

docker run -it --rm mirantis/dtr \
    command [command options]

If not specified, mirantis/dtr uses the latest tag by default. To work with a different version, specify it in the command. For example, docker run -it --rm mirantis/dtr:2.9.16.

mirantis/dtr backup

Create a backup of MSR.

Usage
docker run -i --rm mirantis/dtr \
    backup [command options] > backup.tar
Example Commands
Basic
docker run -i --rm --log-driver none mirantis/dtr:2.9.16 \
    backup --ucp-ca "$(cat ca.pem)" --existing-replica-id 5eb9459a7832 > backup.tar
Advanced (with chained commands)

The following command has been tested on Linux:

DTR_VERSION=$(docker container inspect $(docker container ps -f \
  name=dtr-registry -q) | grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-'); \
read -p 'ucp-url (The MKE URL including domain and port): ' UCP_URL; \
read -p 'ucp-username (The MKE administrator username): ' UCP_ADMIN; \
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  mirantis/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID > \
  dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
Description

The backup command creates a tar file with the contents of the volumes used by MSR, and prints it. You can then use mirantis/dtr restore to restore the data from an existing backup.

Note

  • This command only creates backups of configurations, and image metadata. It does not back up users and organizations. Users and organizations can be backed up during an MKE backup.

    It also does not back up Docker images stored in your registry. You should implement a separate backup policy for the Docker images stored in your registry, taking into consideration whether your MSR installation is configured to store images on the filesystem or is using a cloud provider.

  • This backup contains sensitive information and should be stored securely.

  • Using the --offline-backup flag temporarily shuts down the RethinkDB container. Take the replica out of your load balancer to avoid downtime.

Options

Option

Environment variable

Description

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify an MSR replica, you must connect to the database of an existing replica.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--ignore-events-table

$MSR_IGNORE_EVENTS_TABLE

Prevents backup of the events table during online backups, to reduce backup size. This option is not available for offline backups.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--offline-backup

$MSR_OFFLINE_BACKUP

Takes RethinkDB down during the backup in order to produce a more reliable backup. RethinkDB will be unavailable for the duration of the backup; however, offline backups are guaranteed to be more consistent than online backups.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

mirantis/dtr destroy

Destroy MSR replica data.

Usage
docker run -it --rm mirantis/dtr \
    destroy [command options]
Description

The destroy command forcefully removes all containers and volumes associated with an MSR replica without notifying the rest of the cluster. To uninstall MSR, run this command on all replicas.

Use the remove command to gracefully scale down your MSR cluster.
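
Example Usage

A sketch of a destroy invocation, with bracketed placeholders for your environment; the tool prompts for any values you omit:

docker run -it --rm mirantis/dtr:2.9.16 \
    destroy \
    --ucp-url <mke-url> \
    --ucp-username <mke-admin-username> \
    --ucp-insecure-tls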

Options

Option

Environment variable

Description

--replica-id

$MSR_DESTROY_REPLICA_ID

The ID of the replica to destroy.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--debug

$DEBUG

Enable debug mode for additional logs.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

mirantis/dtr emergency-repair

Recover MSR from loss of quorum.

Usage
docker run -it --rm mirantis/dtr \
    emergency-repair [command options]
Description

The emergency-repair command repairs an MSR cluster that has lost quorum by reverting your cluster to a single MSR replica.

There are three actions you can take to recover an unhealthy MSR cluster:

  • If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join new ones for high availability.

  • If the majority of replicas are unhealthy, use the emergency-repair command to revert your cluster to a single MSR replica.

  • If you cannot repair your cluster to a single replica, you must restore from an existing backup, using the restore command.

When you run this command, an MSR replica of your choice is repaired and turned into the only replica in the whole MSR cluster. The containers for all the other MSR replicas are stopped and removed. When you use the --prune option, the volumes for these replicas are also deleted.

After repairing the cluster, you should use the join command to add more MSR replicas for high availability.
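
Example Usage

A sketch of an emergency-repair invocation; the bracketed values are placeholders, and the flags are described in the options table below:

docker run -it --rm mirantis/dtr:2.9.16 \
    emergency-repair \
    --existing-replica-id <replica-id> \
    --ucp-url <mke-url> \
    --ucp-username <mke-admin-username> \
    --ucp-insecure-tls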

Options

Option

Environment variable

Description

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to the database of an existing healthy replica.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--overlay-subnet

$MSR_OVERLAY_SUBNET

The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24. For high-availability, MSR creates an overlay network between MKE nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where MSR replicas are deployed.

--prune

$PRUNE

Delete the data volumes of all unhealthy replicas. With this option, the volume of the MSR replica you’re restoring is preserved but the volumes for all other replicas are deleted. This has the same result as completely uninstalling MSR from those replicas.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

-y, --yes

$YES

Answer yes to any prompts.

--max-wait

$MAX_WAIT

The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, "10m" represents 10 minutes.

mirantis/dtr images

List all the images necessary to install MSR.

Usage
docker run -it --rm mirantis/dtr \
    images [command options]
Description

The images command lists all the images necessary to install MSR.

mirantis/dtr install

Install Mirantis Secure Registry.

Usage
docker run -it --rm mirantis/dtr \
    install [command options]
Description

The install command installs Mirantis Secure Registry (MSR) on a node managed by Mirantis Kubernetes Engine (MKE).

After installing MSR, you can join additional MSR replicas using mirantis/dtr join.

Example Usage
docker run -it --rm mirantis/dtr:2.9.16 install \
    --ucp-node <UCP_NODE_HOSTNAME> \
    --ucp-insecure-tls

Note

Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.

Options

Option

Environment variable

Description

--async-nfs

$ASYNC_NFS

Use async NFS volume options on the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Using --async-nfs will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down.

--client-cert-auth-ca

$CLIENT_CA

Specify root CA certificates for client authentication with --client-cert-auth-ca "$(cat ca.pem)".

--custom-ca-cert-bundle

$CUSTOM_CA_CERTS_BUNDLE

Provide a file containing additional CA certificates for MSR service containers to use when verifying TLS server certificates.

--debug

$DEBUG

Enable debug mode for additional logs.

--dtr-ca

$MSR_CA

Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with --dtr-ca "$(cat ca.pem)".

--dtr-cert

$MSR_CERT

Use a PEM-encoded TLS certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with --dtr-cert "$(cat cert.pem)". If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust.

--dtr-external-url

$MSR_EXTERNAL_URL

URL of the host or load balancer clients use to reach MSR. When you use this flag, users are redirected to MKE for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you do not use this flag, MSR is deployed without single sign-on with MKE. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your MSR system settings. Format https://host[:port], where port is the value you used with --replica-https-port. Since HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your MSR, or your browser fails to load the web interface.

--dtr-key

$MSR_KEY

Use a PEM-encoded TLS private key for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with --dtr-key "$(cat key.pem)".

--dtr-storage-volume

$MSR_STORAGE_VOLUME

Customize the volume to store Docker images. By default MSR creates a volume to store the Docker images in the local filesystem of the node where MSR is running, without high-availability. Use this flag to specify a full path or volume name for MSR to store images. For high-availability, make sure all MSR replicas can read and write data on this volume. If you are using NFS, use --nfs-storage-url instead.

--enable-client-cert-auth

$ENABLE_CLIENT_CERT_AUTH

Enables TLS client certificate authentication. Use --enable-client-cert-auth=false to disable it. If enabled, MSR will additionally authenticate users via TLS client certificates. You must also specify the root certificate authorities (CAs) that issued the certificates with --client-cert-auth-ca.

--enable-pprof

$MSR_PPROF

Enables pprof profiling of the server. Use --enable-pprof=false to disable it. Once MSR is deployed with this flag, you can access the pprof endpoint for the API server at /debug/pprof, and the registry endpoint at /registry_debug_pprof/debug/pprof.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--http-proxy

$MSR_HTTP_PROXY

The HTTP proxy used for outgoing requests.

--https-proxy

$MSR_HTTPS_PROXY

The HTTPS proxy used for outgoing requests.

--log-host

$LOG_HOST

The endpoint of the syslog system to send logs to. Use this flag if you set --log-protocol to tcp or udp.

--log-level

$LOG_LEVEL

Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal.

--log-protocol

$LOG_PROTOCOL

The protocol for sending logs. Default is internal. By default, MSR internal components log information using the logger specified in the Docker daemon in the node where the MSR replica is deployed. Use this option to send MSR logs to an external syslog system. The supported values are tcp, udp, or internal. Internal is the default option, stopping MSR from sending logs to an external system. Use this flag with --log-host.

--nfs-options

$NFS_OPTIONS

Pass in NFS volume options verbatim for the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Specifying --nfs-options will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument.

--nfs-storage-url

$NFS_STORAGE_URL

Use NFS to store Docker images following this format: nfs://<ip|hostname>/<mountpoint>. By default, MSR creates a volume to store the Docker images in the local filesystem of the node where MSR is running, without high availability. To use this flag, you need to install an NFS client library, such as nfs-common, on the node where you are deploying MSR. You can test this by running showmount -e <nfs-server>. When you join new replicas, they will start using NFS, so there is no need to specify this flag again. To reconfigure MSR to stop using NFS, set --nfs-storage-url to an empty value (""). Refer to Configuring MSR for NFS for more details.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--no-proxy

$MSR_NO_PROXY

List of domains the proxy should not be used for. When using --http-proxy you can use this flag to specify a list of domains that you do not want to route through the proxy. Format acme.com[, acme.org].

--overlay-subnet

$MSR_OVERLAY_SUBNET

The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24. For high-availability, MSR creates an overlay network between MKE nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where MSR replicas are deployed.

--replica-http-port

$REPLICA_HTTP_PORT

The public HTTP port for the MSR replica. Default is 80. This allows you to customize the HTTP port where users can reach MSR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port

$REPLICA_HTTPS_PORT

The public HTTPS port for the MSR replica. Default is 443. This allows you to customize the HTTPS port where users can reach MSR. Each replica can use a different port.

--replica-id

$MSR_INSTALL_REPLICA_ID

Assign a 12-character hexadecimal ID to the MSR replica. Random by default.

--replica-rethinkdb-cache-mb

$RETHINKDB_CACHE_MB

The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-node

$UCP_NODE

The hostname of the MKE node to use to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls command on an MKE manager node. Note that MKE and MSR must not be installed on the same node, and you should instead install MSR on worker nodes that will be managed by MKE.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

mirantis/dtr join

Add a new replica to an existing MSR cluster. Use SSH to log into any node that is already part of MKE.

Usage
docker run -it --rm \
  mirantis/dtr:2.9.16 join \
  --ucp-node <mke-node-name> \
  --ucp-insecure-tls
Description

The join command creates a replica of an existing MSR on a node managed by Mirantis Kubernetes Engine (MKE).

To set up MSR for high availability, create 3, 5, or 7 replicas of MSR.

Options

Option

Environment variable

Description

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to the database of an existing healthy replica.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--replica-http-port

$REPLICA_HTTP_PORT

The public HTTP port for the MSR replica. Default is 80. This allows you to customize the HTTP port where users can reach MSR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port

$REPLICA_HTTPS_PORT

The public HTTPS port for the MSR replica. Default is 443. This allows you to customize the HTTPS port where users can reach MSR. Each replica can use a different port.

--replica-id

$MSR_INSTALL_REPLICA_ID

Assign a 12-character hexadecimal ID to the MSR replica. Random by default.

--replica-rethinkdb-cache-mb

$RETHINKDB_CACHE_MB

The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--skip-network-test

$MSR_SKIP_NETWORK_TEST

Do not test whether overlay networks are working correctly between MKE nodes. For high-availability, MSR creates an overlay network between MKE nodes and tests it before joining replicas.

Important

Do not use the --skip-network-test option in production deployments.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-node

$UCP_NODE

The hostname of the MKE node to use to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls on an MKE manager node. Note that MKE and MSR cannot be installed on the same node; instead, install MSR on worker nodes that will be managed by MKE.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

--unsafe-join

$MSR_UNSAFE_JOIN

Join a new replica even if the cluster is unhealthy. Joining replicas to an unhealthy MSR cluster leads to split-brain scenarios, and data loss. Don’t use this option for production deployments.

--max-wait

$MAX_WAIT

The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, "10m" represents 10 minutes.

mirantis/dtr reconfigure

Change MSR configurations.

Usage
docker run -it --rm mirantis/dtr reconfigure [command options]
Description

The reconfigure command changes MSR configuration settings.

MSR is restarted for the new configurations to take effect. To avoid downtime, configure your MSR for high availability.

Options

Option

Environment variable

Description

--async-nfs

$ASYNC_NFS

Use async NFS volume options on the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Using --async-nfs will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down.

--client-cert-auth-ca

$CLIENT_CA

Specify root CA certificates for client authentication with --client-cert-auth-ca "$(cat ca.pem)".

--custom-ca-cert-bundle

$CUSTOM_CA_CERTS_BUNDLE

Specify additional CA certificates for MSR service containers to use when verifying TLS server certificates with --custom-ca-cert-bundle "$(cat ca.pem)".

--debug

$DEBUG

Enable debug mode for additional logs of this bootstrap container (the log level of downstream MSR containers can be set with --log-level).

--dtr-ca

$MSR_CA

Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with --dtr-ca "$(cat ca.pem)".

--dtr-cert

$MSR_CERT

Use a PEM-encoded TLS certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with --dtr-cert "$(cat cert.pem)". If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust.

--dtr-external-url

$MSR_EXTERNAL_URL

URL of the host or load balancer clients use to reach MSR. When you use this flag, users are redirected to MKE for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don’t use this flag, MSR is deployed without single sign-on with MKE. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on in the MSR settings. Format https://host[:port], where port is the value you used with --replica-https-port. Since the HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your MSR, or your browser may refuse to load the web interface.

--dtr-key

$MSR_KEY

Use a PEM-encoded TLS private key for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with --dtr-key "$(cat key.pem)".

--dtr-storage-volume

$MSR_STORAGE_VOLUME

Customize the volume to store Docker images. By default MSR creates a volume to store the Docker images in the local filesystem of the node where MSR is running, without high-availability. Use this flag to specify a full path or volume name for MSR to store images. For high-availability, make sure all MSR replicas can read and write data on this volume. If you’re using NFS, use --nfs-storage-url instead.

--enable-client-cert-auth

$ENABLE_CLIENT_CERT_AUTH

Enables TLS client certificate authentication; use --enable-client-cert-auth=false to disable it. If enabled, MSR will additionally authenticate users via TLS client certificates. You must also specify the root certificate authorities (CAs) that issued the certificates with --client-cert-auth-ca.

--enable-pprof

$MSR_PPROF

Enables pprof profiling of the server. Use --enable-pprof=false to disable it. Once MSR is deployed with this flag, you can access the pprof endpoint for the API server at /debug/pprof, and the registry endpoint at /registry_debug_pprof/debug/pprof.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database.

--force-recreate-nfs-volume

$FORCE_RECREATE_NFS_VOLUME

Force MSR to recreate NFS volumes on the replica specified by --existing-replica-id.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--http-proxy

$MSR_HTTP_PROXY

The HTTP proxy used for outgoing requests.

--https-proxy

$MSR_HTTPS_PROXY

The HTTPS proxy used for outgoing requests.

--log-host

$LOG_HOST

The endpoint of the syslog system to send logs to. Use this flag if you set --log-protocol to tcp or udp.

--log-level

$LOG_LEVEL

Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal.

--log-protocol

$LOG_PROTOCOL

The protocol for sending logs. Default is internal. By default, MSR internal components log information using the logger specified in the Docker daemon in the node where the MSR replica is deployed. Use this option to send MSR logs to an external syslog system. The supported values are tcp, udp, and internal. Internal is the default option, stopping MSR from sending logs to an external system. Use this flag with --log-host.

--max-wait

$MAX_WAIT

The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, "10m" represents 10 minutes.

--nfs-options

$NFS_OPTIONS

Pass in NFS volume options verbatim for the replica specified in the --existing-replica-id option. The NFS configuration must be set with --nfs-storage-url explicitly to use this option. Specifying --nfs-options will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument.

--nfs-storage-url

$NFS_STORAGE_URL

Set the URL for the NFS storage back end.

docker run -it --rm mirantis/dtr:2.9.16 reconfigure --nfs-storage-url nfs://<IP-of-NFS-server>/path/to/mountdir

To reconfigure MSR to stop using NFS, leave the option empty:

docker run -it --rm mirantis/dtr:2.9.16 reconfigure --nfs-storage-url ""

Refer to Reconfigure MSR to use NFS for more details.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--no-proxy

$MSR_NO_PROXY

List of domains the proxy should not be used for. When using --http-proxy you can use this flag to specify a list of domains that you don’t want to route through the proxy. Format acme.com[, acme.org].

--reinitialize-storage

$REINITIALIZE_STORAGE

Set the flag when you have changed storage back ends but have not moved the contents of the old storage back end to the new one. Erases all tags in the registry.

--replica-http-port

$REPLICA_HTTP_PORT

The public HTTP port for the MSR replica. Default is 80. This allows you to customize the HTTP port where users can reach MSR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port

$REPLICA_HTTPS_PORT

The public HTTPS port for the MSR replica. Default is 443. This allows you to customize the HTTPS port where users can reach MSR. Each replica can use a different port.

--replica-rethinkdb-cache-mb

$RETHINKDB_CACHE_MB

The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--storage-migrated

$STORAGE_MIGRATED

A flag added in 2.6.4 which lets you indicate the migration status of your storage data. Specify this flag if you are migrating to a new storage back end and have already moved all contents from your old back end to your new one. If not specified, MSR will assume the new back end is empty during a back end storage switch, and consequently destroy your existing tags and related image metadata.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

mirantis/dtr remove

Remove an MSR replica from a cluster.

Usage
docker run -it --rm mirantis/dtr \
    remove [command options]
Description

The remove command scales down your MSR cluster by removing exactly one replica. All other replicas must be healthy and will remain healthy after this operation.
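
Example Usage

A sketch of a remove invocation; the bracketed values are placeholders, and the flags are described in the options table below:

docker run -it --rm mirantis/dtr:2.9.16 \
    remove \
    --replica-ids <replica-id-to-remove> \
    --existing-replica-id <healthy-replica-id> \
    --ucp-url <mke-url> \
    --ucp-username <mke-admin-username> \
    --ucp-insecure-tls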

Options

Option

Environment variable

Description

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to the database of an existing healthy replica.

--force

$DTR_FORCE_REMOVE_REPLICA

Ignore pre-checks when removing a replica.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--replica-id

$MSR_REMOVE_REPLICA_ID

DEPRECATED Alias for --replica-ids

--replica-ids

$MSR_REMOVE_REPLICA_IDS

A comma-separated list of IDs of replicas to remove from the cluster.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

mirantis/dtr restore

Install and restore MSR from an existing backup.

Usage
docker run -i --rm mirantis/dtr \
    restore
    --replica-id <replica-id>
    [command options] < backup.tar
Description

The restore command performs a fresh installation of MSR, and reconfigures it with configuration data from a tar file generated by mirantis/dtr backup. If you are restoring MSR after a failure, please make sure you have destroyed the old MSR fully.

There are three actions you can take to recover an unhealthy MSR cluster:

  • If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join new nodes for high availability.

  • If the majority of replicas are unhealthy, use the emergency-repair command to revert your cluster to a single MSR replica.

  • If you cannot repair your cluster to a single replica, you must restore from an existing backup, using the restore command.

This command does not restore Docker images. You should implement a separate restore procedure for the Docker images stored in your registry, taking into consideration whether your MSR installation is configured to store images on the local filesystem or with a cloud provider.

After restoring the cluster, use the join command to add more MSR replicas for high availability.
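
For example, a hedged restore invocation, with placeholder values for the replica ID, MKE URL, and user name, and a backup file named backup.tar in the current directory:

docker run -i --rm mirantis/dtr \
    restore \
    --replica-id b46d6b2acb2c \
    --ucp-url https://mke.example.com \
    --ucp-username admin \
    --ucp-ca "$(cat ca.pem)" \
    --dtr-use-default-storage < backup.tar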

Options

Option

Environment variable

Description

--async-nfs

$ASYNC_NFS

Use async NFS volume options on the replica specified by --existing-replica-id.

--client-cert-auth-ca

$CLIENT_CA

PEM-encoded TLS root CA certificates for client certificate authentication.

--custom-ca-cert-bundle

$CUSTOM_CA_CERTS_BUNDLE

Provide a file containing additional CA certificates for MSR service containers to use when verifying TLS server certificates.

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database.

--dtr-ca

$MSR_CA

Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS CA certificate with --dtr-ca "$(cat ca.pem)".

--dtr-cert

$MSR_CERT

Use a PEM-encoded TLS certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS certificate with --dtr-cert "$(cat cert.pem)".

--dtr-external-url

$MSR_EXTERNAL_URL

URL of the host or load balancer clients use to reach MSR. When you use this flag, users are redirected to MKE for logging in; once authenticated, they are redirected to the URL you specify here. If you do not use this flag, MSR is deployed without single sign-on with MKE: users and teams are shared, but users log in separately to the two applications. You can enable and disable single sign-on within your MSR system settings. Format: https://host[:port], where port is the value you used with --replica-https-port.

--dtr-key

$MSR_KEY

Use a PEM-encoded TLS private key for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with --dtr-key "$(cat key.pem)".

--dtr-storage-volume

$MSR_STORAGE_VOLUME

Note

One of three options you can use for MSR backend storage, the other two being --dtr-use-default-storage and --nfs-storage-url. You must use one of the three options, depending on your setup, so that MSR can fall back to the storage setting you configured at the time of backup.

If you previously configured MSR to use a full path or volume name for storage, specify the --dtr-storage-volume option, as this causes MSR to use the same setting on restore. Refer to mirantis/dtr install and mirantis/dtr reconfigure for usage details.

--dtr-use-default-storage

$MSR_DEFAULT_STORAGE

Note

One of three options you can use for MSR backend storage, the other two being --dtr-storage-volume and --nfs-storage-url. You must use one of the three options, depending on your setup, so that MSR can fall back to the storage setting you configured at the time of backup.

If cloud storage was previously configured, then the default storage on restore is cloud storage. Otherwise, local storage is used.

--nfs-storage-url

$NFS_STORAGE_URL

Note

One of three options you can use for MSR backend storage, the other two being --dtr-storage-volume and --dtr-use-default-storage. You must use one of the three options, depending on your setup, so that MSR can fall back to the storage setting you configured at the time of backup.

If NFS was previously configured, you must manually create a storage volume on each MSR node and specify --dtr-storage-volume with the newly created volume instead. For additional NFS configuration options to support NFS v4, refer to mirantis/dtr install and mirantis/dtr reconfigure.

--enable-client-cert-auth

$ENABLE_CLIENT_CERT_AUTH

Enables TLS client certificate authentication; use --enable-client-cert-auth=false to disable it.

--enable-pprof

$MSR_PPROF

Enables pprof profiling of the server. Use --enable-pprof=false to disable it. Once MSR is deployed with this flag, you can access the pprof endpoint for the api server at /debug/pprof, and the registry endpoint at /registry_debug_pprof/debug/pprof.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--http-proxy

$MSR_HTTP_PROXY

The HTTP proxy used for outgoing requests.

--https-proxy

$MSR_HTTPS_PROXY

The HTTPS proxy used for outgoing requests.

--log-host

$LOG_HOST

The endpoint of the syslog system to which logs are sent. Use this flag if you set --log-protocol to tcp or udp.

--log-level

$LOG_LEVEL

Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal.

--log-protocol

$LOG_PROTOCOL

The protocol for sending logs. The supported values are tcp, udp, and internal, with internal as the default. By default, MSR internal components log information using the logger specified in the Docker daemon on the node where the MSR replica is deployed; the internal value keeps MSR from sending logs to an external system. To send MSR logs to an external syslog system instead, set this flag to tcp or udp and use it together with --log-host.

--max-wait

$MAX_WAIT

The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, "10m" represents 10 minutes.

--nfs-options

$NFS_OPTIONS

Pass in NFS volume options verbatim for the replica specified by --existing-replica-id.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--no-proxy

$MSR_NO_PROXY

List of domains for which the proxy should not be used. When using --http-proxy, you can use this flag to specify a list of domains that you do not want to route through the proxy. Format: acme.com[, acme.org].

--replica-http-port

$REPLICA_HTTP_PORT

The public HTTP port for the MSR replica. Default is 80. This allows you to customize the HTTP port where users can reach MSR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks.

--replica-https-port

$REPLICA_HTTPS_PORT

The public HTTPS port for the MSR replica. Default is 443. This allows you to customize the HTTPS port where users can reach MSR. Each replica can use a different port.

--replica-id

$MSR_INSTALL_REPLICA_ID

Assign a 12-character hexadecimal ID to the MSR replica. Mandatory.

--replica-rethinkdb-cache-mb

$RETHINKDB_CACHE_MB

The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is (available_memory - 1024) / 2. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-node

$UCP_NODE

The hostname of the MKE node on which to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls on an MKE manager node. Note that MKE and MSR must not be installed on the same node; instead, install MSR on worker nodes that will be managed by MKE.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

mirantis/dtr upgrade

Upgrade a DTR 2.8.x cluster to DTR 2.9.x.

Usage
docker run -it --rm mirantis/dtr \
    upgrade [command options]
Description

The dtr upgrade command upgrades DTR 2.8.x to the current version (2.9.x) of the image.
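
For example, a hedged invocation, with placeholder values for the existing replica ID, MKE URL, and user name:

docker run -it --rm mirantis/dtr \
    upgrade \
    --existing-replica-id 5eb9459a7832 \
    --ucp-url https://mke.example.com \
    --ucp-username admin \
    --ucp-ca "$(cat ca.pem)"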

Options

Option

Environment variable

Description

--debug

$DEBUG

Enable debug mode for additional logs.

--existing-replica-id

$MSR_REPLICA_ID

The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database.

--help-extended

$MSR_EXTENDED_HELP

Display extended help text for a given command.

--max-wait

$MAX_WAIT

The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, "10m" represents 10 minutes.

--nocolor

$NOCOLOR

Disable output coloring in logs.

--ucp-ca

$UCP_CA

Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".

--ucp-insecure-tls

$UCP_INSECURE_TLS

Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.

--ucp-password

$UCP_PASSWORD

The MKE administrator password.

--ucp-url

$UCP_URL

The MKE URL including domain and port.

--ucp-username

$UCP_USERNAME

The MKE administrator username.

Release Notes

This document describes the latest changes, enhancements, known issues, and fixes for Mirantis Secure Registry (MSR) 2.9.x.

2.9.16

(2024-JAN-31)

Enhancements

  • (FIELD-6040) The performance of the previously slow /repositories/tags API call has been significantly improved, and as a result operators no longer need to wait long periods of time for tags to display.

  • (ENGDTR-4066) Job filtering, previously available only for the running job status, is now extended to include all available job status options.

Addressed issues

  • (FIELD-6748) Fixed an issue wherein the navigation buttons in the MSR web UI Organizations tab were not enabled, and thus users could not navigate to organizations that were not in the default view of 10.

Security information

  • Updated Alpine to 3.18.5, Golang to 1.20.12, and Synopsys Scanner to 2023.9 to resolve vulnerabilities.

  • All CVEs reported in Pillow are false positives, the result of being picked up from cache but for a version not in use with MSR.

  • Resolved CVEs, as detailed:

    CVE

    Status

    Problem details from upstream

    CVE-2023-37920

    Not Vulnerable

    Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi prior to version 2023.07.22 recognizes “e-Tugra” root certificates. e-Tugra’s root certificates were subject to an investigation prompted by reporting of security issues in their systems. Certifi 2023.07.22 removes root certificates from “e-Tugra” from the root store.

    CVE-2023-25173

    Resolved

    containerd is an open source container runtime. A bug was found in containerd prior to versions 1.6.18 and 1.5.18 where supplementary groups are not set up properly inside a container. If an attacker has direct access to a container and manipulates their supplementary group access, they may be able to use supplementary group access to bypass primary group restrictions in some cases, potentially gaining access to sensitive information or gaining the ability to execute code in that container. Downstream applications that use the containerd client library may be affected as well. This bug has been fixed in containerd v1.6.18 and v.1.5.18. Users should update to these versions and recreate containers to resolve this issue. Users who rely on a downstream application that uses containerd’s client library should check that application for a separate advisory and instructions. As a workaround, ensure that the USER $USERNAME Dockerfile instruction is not used. Instead, set the container entrypoint to a value similar to ENTRYPOINT ["su", "-", "user"] to allow su to properly set up supplementary groups.

    CVE-2023-25153

    Resolved

    containerd is an open source container runtime. Before versions 1.6.18 and 1.5.18, when importing an OCI image, there was no limit on the number of bytes read for certain files. A maliciously crafted image with a large file where a limit was not applied could cause a denial of service. This bug has been fixed in containerd 1.6.18 and 1.5.18. Users should update to these versions to resolve the issue. As a workaround, ensure that only trusted images are used and that only trusted users have permissions to import images.

    CVE-2023-0286

    Resolved

    There is a type confusion vulnerability relating to X.400 address processing inside an X.509 GeneralName. X.400 addresses were parsed as an ASN1_STRING but the public structure definition for GENERAL_NAME incorrectly specified the type of the x400Address field as ASN1_TYPE. This field is subsequently interpreted by the OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than an ASN1_STRING. When CRL checking is enabled (i.e. the application sets the X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow an attacker to pass arbitrary pointers to a memcmp call, enabling them to read memory contents or enact a denial of service. In most cases, the attack requires the attacker to provide both the certificate chain and CRL, neither of which need to have a valid signature. If the attacker only controls one of these inputs, the other input must already contain an X.400 address as a CRL distribution point, which is uncommon. As such, this vulnerability is most likely to only affect applications which have implemented their own functionality for retrieving CRLs over a network.

    CVE-2023-0215

    Resolved

    The public API function BIO_new_NDEF is a helper function used for streaming ASN.1 data via a BIO. It is primarily used internally to OpenSSL to support the SMIME, CMS and PKCS7 streaming capabilities, but may also be called directly by end user applications. The function receives a BIO from the caller, prepends a new BIO_f_asn1 filter BIO onto the front of it to form a BIO chain, and then returns the new head of the BIO chain to the caller. Under certain conditions, for example if a CMS recipient public key is invalid, the new filter BIO is freed and the function returns a NULL result indicating a failure. However, in this case, the BIO chain is not properly cleaned up and the BIO passed by the caller still retains internal pointers to the previously freed filter BIO. If the caller then goes on to call BIO_pop() on the BIO then a use-after-free will occur. This will most likely result in a crash. This scenario occurs directly in the internal function B64_write_ASN1() which may cause BIO_new_NDEF() to be called and will subsequently call BIO_pop() on the BIO. This internal function is in turn called by the public API functions PEM_write_bio_ASN1_stream, PEM_write_bio_CMS_stream, PEM_write_bio_PKCS7_stream, SMIME_write_ASN1, SMIME_write_CMS and SMIME_write_PKCS7. Other public API functions that may be impacted by this include i2d_ASN1_bio_stream, BIO_new_CMS, BIO_new_PKCS7, i2d_CMS_bio_stream and i2d_PKCS7_bio_stream. The OpenSSL cms and smime command line applications are similarly affected.

    CVE-2022-23471

    Resolved

    containerd is an open source container runtime. A bug was found in containerd’s CRI implementation where a user can exhaust memory on the host. In the CRI stream server, a goroutine is launched to handle terminal resize events if a TTY is requested. If the user’s process fails to launch due to, for example, a faulty command, the goroutine will be stuck waiting to send without a receiver, resulting in a memory leak. Kubernetes and crictl can both be configured to use containerd’s CRI implementation and the stream server is used for handling container IO. This bug has been fixed in containerd 1.6.12 and 1.5.16. Users should update to these versions to resolve the issue. Users unable to upgrade should ensure that only trusted images and commands are used and that only trusted users have permissions to execute commands in running containers.

    CVE-2022-4450

    Resolved

    The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and decodes the name (e.g. CERTIFICATE), any header data and the payload data. If the function succeeds then the name_out, header and data arguments are populated with pointers to buffers containing the relevant decoded data. The caller is responsible for freeing those buffers. It is possible to construct a PEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex() will return a failure code but will populate the header argument with a pointer to a buffer that has already been freed. If the caller also frees this buffer then a double free will occur. This will most likely lead to a crash. This could be exploited by an attacker who has the ability to supply malicious PEM files for parsing to achieve a denial of service attack. The functions PEM_read_bio() and PEM_read() are simple wrappers around PEM_read_bio_ex() and therefore these functions are also directly affected. These functions are also called indirectly by a number of other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal uses of these functions are not vulnerable because the caller does not free the header argument if PEM_read_bio_ex() returns a failure code. These locations include the PEM_read_bio_TYPE() functions as well as the decoders introduced in OpenSSL 3.0. The OpenSSL asn1parse command line application is also impacted by this issue.

    CVE-2021-3826

    Resolved

    Heap/stack buffer overflow in the dlang_lname function in d-demangle.c in libiberty allows attackers to potentially cause a denial of service (segmentation fault and crash) via a crafted mangled symbol.

    CVE-2019-19450

    Not Vulnerable

    paraparser in ReportLab before 3.5.31 allows remote code execution because start_unichar in paraparser.py evaluates untrusted user input in a unichar element in a crafted XML document with <unichar code=" followed by arbitrary Python code, a similar issue to CVE-2019-17626.

    CVE-2019-17626

    Not Vulnerable

    ReportLab through 3.5.26 allows remote code execution because of toColor(eval(arg)) in colors.py, as demonstrated by a crafted XML document with <span color=" followed by arbitrary Python code.

2.9.15

(2023-11-13)

Enhancements

  • (FIELD-5384) A search field is now present on the Organizations screen in the MSR web UI, to aid customers in filtering through large numbers of organizations on their clusters.

  • (ENGDTR-3949) Improved the message that displays on any attempt to overwrite an image with the same tag. Previously MSR returned error 500; it now returns denied: Repository is marked as immutable.

Addressed issues

  • (FIELD-6384) Corrected the license global notification banners in the MSR web UI.

Security information

  • Updated Alpine to 3.18, Golang to 1.20.10, and NGINX to 1.24.0 to resolve vulnerabilities.

2.9.14

(2023-09-26)

Addressed issues

  • (FIELD-6364) Fixed an issue wherein the --enable-pprof flag did not function as expected.

  • (FIELD-6330) Fixed an issue wherein a specific corner case could incorrectly produce a nil pointer error in the dtr-api container.

  • (FIELD-2238) Fixed an issue wherein pushing an image to an immutable registry resulted in an uninformative error message.

    Note

    FIELD-2238 was appended to these release notes on 2023-09-28.

Security information

  • Resolved CVEs, as detailed:

    CVE

    Status

    Problem details from upstream

    CVE-2023-39417

    Resolved

    In the extension script, a SQL Injection vulnerability was found in PostgreSQL if it uses @extowner@, @extschema@, or @extschema:...@ inside a quoting construct (dollar quoting, '', or ""). If an administrator has installed files of a vulnerable, trusted, non-bundled extension, an attacker with database-level CREATE privilege can execute arbitrary code as the bootstrap superuser.

2.9.13

(2023-07-13)

Enhancements

  • (ENGDTR-3902) Updated Go to version 1.20.5.

  • (ENGDTR-3813) Updated Synopsys Scanner to version 2023.3.0.

Security information

The following CVEs are no longer reported as false positives:

2.9.12

(2023-05-16)

Enhancements

  • (ENGDTR-3799) Updated Go to version 1.20.3.

Addressed issues

  • (FIELD-5947) Resolved an issue with sub-optimal garbage collection performance.

    Note

    FIELD-5947 was appended to these release notes on 2023-07-18.

Security information

  • Resolved CVEs, as detailed:

CVE

Status

Problem details from upstream

CVE-2023-0401

Resolved

A NULL pointer can be dereferenced when signatures are being verified on PKCS7 signed or signedAndEnveloped data. In case the hash algorithm used for the signature is known to the OpenSSL library but the implementation of the hash algorithm is not available the digest initialization will fail. There is a missing check for the return value from the initialization function which later leads to invalid usage of the digest API most likely leading to a crash. The unavailability of an algorithm can be caused by using FIPS enabled configuration of providers or more commonly by not loading the legacy provider. PKCS7 data is processed by the SMIME library calls and also by the time stamp (TS) library calls. The TLS implementation in OpenSSL does not call these functions however third party applications would be affected if they call these functions to verify signatures on untrusted data.

CVE-2023-0217

Resolved

An invalid pointer dereference on read can be triggered when an application tries to check a malformed DSA public key by the EVP_PKEY_public_check() function. This will most likely lead to an application crash. This function can be called on public keys supplied from untrusted sources which could allow an attacker to cause a denial of service attack. The TLS implementation in OpenSSL does not call this function but applications might call the function if there are additional security requirements imposed by standards such as FIPS 140-3.

CVE-2023-0216

Resolved

An invalid pointer dereference on read can be triggered when an application tries to load malformed PKCS7 data with the d2i_PKCS7(), d2i_PKCS7_bio() or d2i_PKCS7_fp() functions. The result of the dereference is an application crash which could lead to a denial of service attack. The TLS implementation in OpenSSL does not call this function however third party applications might call these functions on untrusted data.

CVE-2022-41862

Resolved

In PostgreSQL, a modified, unauthenticated server can send an unterminated string during the establishment of Kerberos transport encryption. In certain conditions a server can cause a libpq client to over-read and report an error message containing uninitialized bytes.

CVE-2022-3996

Resolved

If an X.509 certificate contains a malformed policy constraint and policy processing is enabled, then a write lock will be taken twice recursively. On some operating systems (most widely: Windows) this results in a denial of service when the affected process hangs. Policy processing being enabled on a publicly facing server is not considered to be a common setup. Policy processing is enabled by passing the -policy argument to the command line utilities or by calling the X509_VERIFY_PARAM_set1_policies() function. Update (31 March 2023): The description of the policy processing enablement was corrected based on CVE-2023-0466.

CVE-2017-9225

False positive

An issue was discovered in Oniguruma 6.2.0, as used in Oniguruma-mod in Ruby through 2.4.1 and mbstring in PHP through 7.1.5. A stack out-of-bounds write in onigenc_unicode_get_case_fold_codes_by_str() occurs during regular expression compilation. Code point 0xFFFFFFFF is not properly handled in unicode_unfold_key(). A malformed regular expression could result in 4 bytes being written off the end of a stack buffer of expand_case_fold_string() during the call to onigenc_unicode_get_case_fold_codes_by_str(), a typical stack buffer overflow.

2.9.11

(2023-02-16)

Enhancements

  • (ENGDTR-3573) MSR now offers the option to disable coloring in the log output.

  • (ENGDTR-3558) Updated Go to version 1.19.4.

  • (ENGDTR-3649) Updated Synopsys scanner to version 2022-12-2.

Addressed issues

  • (FIELD-5447) Fixed an issue with the /api/v0/api_tokens endpoint wherein changing the value of the pageStart parameter did not change the page returned in the request output.

    When upgrading from a previous MSR version, for the fix to go into effect you must run a particular command sequence using the RethinkDB CLI. Contact Mirantis support for the RethinkDB CLI instructions. Fresh installations do not require the manual CLI steps.

  • (ENGDTR-3421) Fixed an issue wherein the MSR web UI would break whenever a user tried to access the repository page for an organization from a repository list.

  • (FIELD-4211) MSR now issues a warning when installations or upgrades fail due to the disabling of MKE admin container scheduling.

Security information

  • CVE information, as detailed:

    CVE

    Status

    Problem details from upstream

    CVE-2022-46908

    Resolved

    SQLite through 3.40.0, when relying on --safe for execution of an untrusted CLI script, does not properly implement the azProhibitedFunctions protection mechanism, and instead allows UDF functions such as WRITEFILE.

    CVE-2017-9225

    False positive

    An issue was discovered in Oniguruma 6.2.0, as used in Oniguruma-mod in Ruby through 2.4.1 and mbstring in PHP through 7.1.5. A stack out-of-bounds write in onigenc_unicode_get_case_fold_codes_by_str() occurs during regular expression compilation. Code point 0xFFFFFFFF is not properly handled in unicode_unfold_key(). A malformed regular expression could result in 4 bytes being written off the end of a stack buffer of expand_case_fold_string() during the call to onigenc_unicode_get_case_fold_codes_by_str(), a typical stack buffer overflow.

2.9.10

(2022-11-21)

Enhancements

  • (ENGDTR-3410) Updated Synopsys scanner to version 2022.9.1.

Addressed issues

  • (FIELD-5205) MSR repo names are now limited to 55 characters at creation. Prior to this fix, MSR users could create repo names in excess of 55 characters despite a 55-character system limitation, which resulted in non-specific error messages.

  • (FIELD-4421) Fixed an issue wherein the MSR web UI would sometimes go blank when the user clicked any of the toggles on the Settings page.

  • (FIELD-5131) Fixed an issue wherein API calls to push mirror tags from MSR 2.9.x to MSR 3.0.x would fail.

  • (FIELD-5121) Fixed an issue wherein promotion policies listed using the API were missing a counter header.

  • (ENGDTR-2783) Fixed an issue wherein API requests with an improperly specified Helm chart version returned an internal server error.

Security information

  • Updated Golang to version 1.18.7 to resolve vulnerabilities. For more information, refer to the Go release announcement for version 1.18.7.

  • Resolved CVEs, as detailed:

    CVE

    Status

    Problem details from upstream

    CVE-2022-31030

    Resolved

    A bug was found in the containerd CRI implementation where programs inside a container can cause the containerd daemon to consume memory without bound during invocation of the ExecSync API. This can cause containerd to consume all available memory on the computer, denying service to other legitimate workloads. Kubernetes and crictl can both be configured to use the containerd CRI implementation; ExecSync may be used when running probes or when executing processes via an exec facility. This bug has been fixed in containerd 1.6.6 and 1.5.13. Users should update to these versions to resolve the issue. Users unable to upgrade should ensure that only trusted images and commands are used.

    CVE-2022-1996

    Resolved

    Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.

    CVE-2022-35737

    Resolved

    SQLite 1.0.12 through 3.39.x before 3.39.2 sometimes allow an array-bounds overflow if billions of bytes are used in a string argument to a C API.

    CVE-2022-36359

    Resolved

    An issue was discovered in the HTTP FileResponse class in Django 3.2 before 3.2.15 and 4.0 before 4.0.7. An application is vulnerable to a reflected file download (RFD) attack that sets the Content-Disposition header of a FileResponse when the filename is derived from user-supplied input.

    CVE-2022-32207

    Resolved

    When curl < 7.84.0 saves cookies, alt-svc and hsts data to local files, it makes the operation atomic by finalizing the operation with a rename from a temporary name to the final target file name. In that rename operation, it might accidentally widen the permissions for the target file, leaving the updated file accessible to more users than intended.

    CVE-2022-40674

    Resolved

    libexpat before 2.4.9 has a use-after-free in the doContent function in xmlparse.c.

    CVE-2022-3358

    Resolved

    OpenSSL supports creating a custom cipher via the legacy EVP_CIPHER_meth_new() function and associated function calls. This function was deprecated in OpenSSL 3.0 and application authors are instead encouraged to use the new provider mechanism in order to implement custom ciphers. OpenSSL versions 3.0.0 to 3.0.5 incorrectly handle legacy custom ciphers passed to the EVP_EncryptInit_ex2(), EVP_DecryptInit_ex2(), and EVP_CipherInit_ex2() functions (as well as other similarly named encryption and decryption initialization functions). Instead of using the custom cipher directly it incorrectly tries to fetch an equivalent cipher from the available providers. An equivalent cipher is found based on the NID passed to EVP_CIPHER_meth_new(). This NID is supposed to represent the unique NID for a given cipher. However it is possible for an application to incorrectly pass NID_undef as this value in the call to EVP_CIPHER_meth_new(). When NID_undef is used in this way the OpenSSL encryption/decryption initialization function will match the NULL cipher as being equivalent and will fetch this from the available providers. This will succeed if the default provider has been loaded (or if a third party provider has been loaded that offers this cipher). Using the NULL cipher means that the plaintext is emitted as the ciphertext. Applications are only affected by this issue if they call EVP_CIPHER_meth_new() using NID_undef and subsequently use it in a call to an encryption/decryption initialization function. Applications that only use SSL/TLS are not impacted by this issue. Fixed in OpenSSL 3.0.6 (Affected 3.0.0-3.0.5).

    CVE-2022-3602

    Resolved

    A buffer overrun can be triggered in X.509 certificate verification, specifically in name constraint checking. Note that this occurs after certificate chain signature verification and requires either a CA to have signed the malicious certificate or for the application to continue certificate verification despite failure to construct a path to a trusted issuer. An attacker can craft a malicious email address to overflow four attacker-controlled bytes on the stack. This buffer overflow could result in a crash (causing a denial of service) or potentially remote code execution. Many platforms implement stack overflow protections which would mitigate against the risk of remote code execution. The risk may be further mitigated based on stack layout for any given platform/compiler. Users are encouraged to upgrade to a new version as soon as possible. In a TLS client, this can be triggered by connecting to a malicious server. In a TLS server, this can be triggered if the server requests client authentication and a malicious client connects. Fixed in OpenSSL 3.0.7 (Affected 3.0.0,3.0.1,3.0.2,3.0.3,3.0.4,3.0.5,3.0.6).

    CVE-2022-3786

    Resolved

    A buffer overrun can be triggered in X.509 certificate verification, specifically in name constraint checking. Note that this occurs after certificate chain signature verification and requires either a CA to have signed a malicious certificate or for an application to continue certificate verification despite failure to construct a path to a trusted issuer. An attacker can craft a malicious email address in a certificate to overflow an arbitrary number of bytes containing the ‘.’ character (decimal 46) on the stack. This buffer overflow could result in a crash (causing a denial of service). In a TLS client, this can be triggered by connecting to a malicious server. In a TLS server, this can be triggered if the server requests client authentication and a malicious client connects.

2.9.9

(2022-08-11)

Enhancements

  • (ENGDTR-3220) Upgraded Synopsys scanner to version 2022.6.0.

Addressed issues

  • (FIELD-4537) Fixed the invalid documentation links that are embedded in MSR vulnerability scan warnings.

Security information

  • Updated Golang to version 1.17.13 to resolve vulnerabilities. For more information, refer to the Go release announcements for versions 1.17.12 and 1.17.13.

  • Resolved CVEs, as detailed:

    CVE

    Status

    Problem details from upstream

    CVE-2022-29458

    Resolved

    ncurses 6.3 before patch 20220416 has an out-of-bounds read and segmentation violation in convert_strings in tinfo/read_entry.c in the terminfo library.

    CVE-2022-29155

    Resolved

    In OpenLDAP 2.x before 2.5.12 and 2.6.x before 2.6.2, an SQL injection vulnerability exists in the experimental back-sql back end to slapd, through an SQL statement within an LDAP query. This can occur during an LDAP search operation when the search filter is processed, due to a lack of proper escaping.

    CVE-2022-34265

    Resolved

    An issue was discovered in Django 3.2 before 3.2.14 and 4.0 before 4.0.6. The Trunc() and Extract() database functions are subject to SQL injection if untrusted data is used as a kind/lookup_name value. Applications that constrain the lookup name and kind choice to a known safe list are unaffected.

    CVE-2022-2274

    Resolved

    The OpenSSL 3.0.4 release introduced a serious bug in the RSA implementation for X86_64 CPUs supporting the AVX512IFMA instructions. This issue makes the RSA implementation with 2048-bit private keys incorrect on such machines and memory corruption will happen during the computation. As a consequence of the memory corruption, an attacker may be able to trigger a remote code execution on the machine performing the computation. SSL/TLS servers or other servers using 2048-bit RSA private keys running on machines that support AVX512IFMA instructions of the X86_64 architecture are affected by this issue.

    CVE-2015-20107

    Resolved

    In Python (aka CPython) through 3.10.4, the mailcap module does not add escape characters into commands discovered in the system mailcap file. This may allow attackers to inject shell commands into applications that call mailcap.findmatch with untrusted input (if they lack validation of user-provided file names or arguments).

    CVE-2022-30065

    Resolved

    A use-after-free in Busybox 1.35-x’s awk applet leads to denial of service and possibly code execution when processing a crafted awk pattern in the copyvar function.

    CVE-2022-27782

    Resolved

    libcurl would reuse a previously created connection even when a TLS or SSH-related option had been changed that should have prohibited reuse. libcurl keeps previously used connections in a connection pool for subsequent transfers to reuse if one of them matches the setup. However, several TLS and SSH settings were left out from the configuration match checks, making them match too easily.

    CVE-2022-27781

    Resolved

    libcurl provides the CURLOPT_CERTINFO option to allow applications to request details to be returned about a server’s certificate chain. Due to an erroneous function, a malicious server could make libcurl built with NSS get stuck in a never-ending busy-loop when trying to retrieve that information.

    CVE-2022-32148

    Resolved

    No description is available for this CVE.

    CVE-2022-30631

    Resolved

    No description is available for this CVE.

    CVE-2022-30633

    Resolved

    No description is available for this CVE.

    CVE-2022-28131

    Resolved

    No description is available for this CVE.

    CVE-2022-30635

    Resolved

    No description is available for this CVE.

    CVE-2022-30632

    Resolved

    No description is available for this CVE.

    CVE-2022-30630

    Resolved

    No description is available for this CVE.

    CVE-2022-1962

    Resolved

    No description is available for this CVE.

    CVE-2022-32189

    Resolved

    No description is available for this CVE.

    CVE-2022-1996

Not vulnerable [1]

    Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.

    CVE-2021-41556

    False positive

    sqclass.cpp in Squirrel through 2.2.5 and 3.x through 3.1 allows an out-of-bounds read in the core interpreter that can lead to code execution. If a victim executes an attacker-controlled squirrel script, it is possible for the attacker to break out of the squirrel script sandbox even if all dangerous functionality such as file system functions have been disabled. An attacker might abuse this bug to target, for example, cloud services that allow customization using SquirrelScripts, or distribute malware through video games that embed a Squirrel Engine.

    [1] The issue is likely to be triggered in software that uses a CORS filter, which MSR does not.

2.9.8

(2022-06-22)

Enhancements

  • Upgraded Synopsys scanner to version 2022.3.1.

Addressed issues

  • (FIELD-4718) Fixed a pagination issue in the MSR API GET /api/v0/imagescan/scansummary/cve/{cve} endpoint. The fix requires that you upgrade MSR to 2.9.8 and that you take certain manual steps using the database CLI (contact Mirantis Support for the steps). Note that the manual CLI steps are not required for fresh MSR installations.

  • (ENGDTR-3184) Fixed an issue wherein Ubuntu 22.04-based images could not be successfully scanned for vulnerabilities.

Security information

  • All CVEs reported in OpenJDK 1.8.0u302 have been resolved by removal of the component.

  • All CVEs reported in NumPy are false positives, the result of being picked up from cache but for a version not in use with MSR.

  • Resolved CVEs, as detailed:

    CVE

    Status

    Description

    CVE-2018-25032

    Resolved

Prior to 1.2.12, zlib allows memory corruption when deflating if the input has many distant matches.

    CVE-2022-28391

    Resolved

    BusyBox up through version 1.35.0 allows remote attackers to execute arbitrary code when netstat is used to print the value of a DNS PTR record to a VT-compatible terminal. Alternatively, attackers can choose to change the colors of the terminal.

    CVE-2019-15562

    Resolved

    Prior to 1.9.10, GORM permits SQL injection through incomplete parentheses. Note that misusing GORM by passing untrusted user input when GORM expects trusted SQL fragments is not a vulnerability in GORM but in the application.

    CVE-2022-23648

    Resolved

    A bug was found in containerd prior to versions 1.6.1, 1.5.10, and 1.14.12 in which containers launched through containerd’s CRI implementation on Linux with a specially-crafted image configuration could gain access to read-only copies of arbitrary files and directories on the host.

    CVE-2022-29155

    Not Vulnerable

The CVE is present in the JobRunner image; however, while it is a required dependency of a component running in JobRunner, its functionality is never exercised.

    In OpenLDAP 2.x prior to 2.5.12 and in 2.6.x prior to 2.6.2, a SQL injection vulnerability exists in the experimental back-sql backend to slapd, via a SQL statement within an LDAP query. This can occur during an LDAP search operation when the search filter is processed, due to a lack of proper escaping.

    CVE-2022-1292

    False Positive

    Though Alpine Linux contains the affected OpenSSL version, the c_rehash script has been replaced by a C binary.

The c_rehash script does not properly sanitize shell metacharacters to prevent command injection. Some operating systems distribute this script in a manner in which it is automatically executed, in which case attackers can execute arbitrary commands with the privileges of the script. Use of this script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. The vulnerability is fixed in OpenSSL 3.0.3, OpenSSL 1.1.1o, and in OpenSSL 1.0.2ze.

    CVE-2019-6446

    False Positive

    NumPy 1.16.0 and earlier use the pickle Python module in an unsafe manner that allows remote attackers to execute arbitrary code via a crafted serialized object, as demonstrated by a numpy.load call. Note that third parties dispute the issue as, for example, it is a behavior that can have legitimate applications in loading serialized Python object arrays from trusted and authenticated sources.

2.9.7

(2022-04-18)

Enhancements

  • Improvements have been made to clarify the presentation of vulnerability scan summary counts in the MSR web UI, for Critical, High, Medium, and Low in both the Vulnerabilities column and in the View Details view.

    Note

    Although ENGDTR-3008 was reported as a known issue for MSR 2.9.6, the reported counts were at all times reliable and factually correct.

    (ENGDTR-3008)

Addressed issues

  • Fixed an issue in the MSR web UI wherein an input was missing from the team LDAP sync form that prevented users from submitting the form (ENGDTR-3089, FIELD-4587).

Security information

2.9.6

(2022-02-10)

Enhancements

  • Updated the Synopsys scanner to release 2021.12.0.

    With the 2021.12.0 release, Synopsys scanner can now self-scan all MSR components and run other test cases without any regressions.

    (ENGDTR-2816)

Addressed issues

  • Fixed an issue wherein, on logout from the MSR web UI, users sometimes received the warning: Sorry, we don't recognize this path (FIELD-4339).

  • Fixed an issue with the MSR web UI wherein a user could not be added to an organization that has “team” in its name (FIELD-4436).

  • Fixed an issue in the MSR web UI wherein if a user who wants to change their password entered an incorrect password into the Current password field and clicked Save, the screen would go blank (ENGDTR-2785).

Known issues

  • Vulnerability scan miscalculation in MSR web UI

    The summary counts that MSR displays for Critical, High, Medium, and Low in both the Vulnerabilities column and in the View Details view are unreliable and may be incorrect when displaying non-zero values. The Components tab displays correct values for each component.

    Workaround:

    Navigate to the Components tab, review the individual non-green components, and separately calculate the total of the numbers that present as Critical, High, Medium, and Low.

    (ENGDTR-3008)

Security information

2.9.5

(2021-11-09)

Enhancements

  • Added a new sub-command, rotate-certificates, to the rethinkops binary inside the dtr-rethinkdb image. This command allows you to rotate the certificates that secure intracluster communication between the MSR system containers and RethinkDB.

    To rotate certificates, use docker exec to run the rethinkops command inside the dtr-rethinkdb container, as in the following example (the --debug flag, shown here, provides additional information and is optional):

    REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
    docker exec -e DTR_REPLICA_ID=$REPLICA_ID -it $(docker ps -q --filter name=dtr-rethinkdb) \
        rethinkops rotate-certificates --replica-id $REPLICA_ID --debug
    

    (FIELD-4044)

Addressed issues

  • Fixed an issue wherein the webhook could fail to trigger, thus issuing the “argument list too long” error (FIELD-3424).

  • Fixed an issue with the MSR web UI wherein the value of {{tag}} is absent from the scanning report (FIELD-3931).

  • Fixed an issue wherein the MSR image scan CSV report was missing the CVSS3 score and only had the CVSS2 score (FIELD-3946).

  • Fixed issues wherein the list of org repositories was limited to ten and was wrapping incorrectly (FIELD-3987).

  • Fixed an issue with the MSR web UI wherein the Teams page displayed no more than 10 users and 10 repositories and the Organizations page displayed no more than 10 teams (FIELD-4187).

  • Fixed an issue with the MSR web UI wherein the Add User button failed to display for organization owners (FIELD-4261).

  • Fixed an issue with the MSR web UI wherein performing a search from the left-side navigation panel produced search results that displayed on top of the background text (FIELD-4268).

  • Made improvements to MSR administrative actions to circumvent failures that can result from stale containers (FIELD-4270) (FIELD-4291).

  • Fixed an image signing regression issue that applies to MSR 2.9.3 and MSR 2.9.4 (FIELD-4320).

Known issues

  • The image signing functionality in MSR 2.9.3 and 2.9.4 is incompatible with other MSR versions.

    Workaround:

    For images signed by MSR 2.9.3 and 2.9.4 it is necessary to delete trust data and re-sign the images using MSR 2.9.5 (FIELD-4320).

Security information

2.9.4

(2021-08-19)

Enhancements

  • To help administrators troubleshoot authorization issues, MSR now includes the name and ID of the requesting user in log messages from the dtr-garant container when handling /auth/token API requests (FIELD-3509).

  • MSR now includes support for the GET /v2/_catalog endpoint from the Docker Registry HTTP API V2. Authenticated MSR users can use this API to list all the repositories in the registry that they have permission to view, as shown in the example following this list (ENGDTR-2667).

  • MSR now accepts only JWT licenses. To upgrade MSR, customers using a Docker Hub-issued license must first replace it with the new license version (ENGDTR-2631).

    To request a JWT license, contact support@mirantis.com.

  • KubeLinter has been updated to version 0.2.2, which includes 11 additional rules, and new rule-remediation descriptions have been added to existing rules (ENGDTR-2624).

  • The following MSR commands now include a --max-wait option:

    • emergency-repair

    • join

    • reconfigure

    • restore

    • upgrade

    With this new option you can set the maximum amount of time that MSR allows for operations to complete. The --max-wait option is especially useful when allocating additional startup time for very large MSR databases (FIELD-4070).
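
A hedged example of the GET /v2/_catalog request described above, with placeholder host and credentials (the n query parameter, part of the Registry HTTP API V2 pagination convention, limits the number of repositories returned):

    curl -u <username>:<access-token> "https://msr.example.com/v2/_catalog?n=100"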

Addressed issues

  • Fixed an issue wherein the webhook client timeout settings caused reconnections to wait too long (FIELD-4083).

  • Fixed an issue with the MSR web UI wherein the enforcement policy page did not allow users to enable or disable enforcement policies within a repository (ENGDTR-2679).

  • Fixed an issue wherein connecting to MSR with IPv6 failed after an MCR upgrade to version 20.10.0 or later (FIELD-4144).

Known issues

  • MSR administrative actions such as backup, restore, and reconfigure can continuously fail with the invalid session token error shortly after entering phase 2. The error resembles the following example:

    FATA[0000] Failed to get new conv client: Docker version check failed: \
    Failed to get docker version: Error response from daemon: \
    {"message":"invalid session token"}
    

    Workaround:

    1. Before running any bootstrap command, source a client bundle in order to locate the existing dtr-phase2 container.

    2. Remove the existing dtr-phase2 container.

    Refer to MSR Bootstrap Commands (Restore, Backup, Reconfigure) Fail with “invalid session token” in the Mirantis knowledge base for more information.
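
    A minimal sketch of step 2, assuming a sourced client bundle and a stale container whose name includes dtr-phase2:

    docker rm -f $(docker ps -aq --filter name=dtr-phase2)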

    (FIELD-4270)

Security information

Deprecation notes

  • In correlation with the End of Life date for MKE 3.2.x and MSR 2.7.x, Mirantis stopped maintaining the associated documentation set on 2021-07-21.

2.9.3

(2021-07-01)

Enhancements

  • MSR now tags all analytics reports with the user license ID when telemetry is enabled. It does not, though, collect any further identifying information. In line with this change, the MSR settings API no longer contains anonymizeAnalytics, and the MSR web UI no longer includes the Make data anonymous toggle (ENGDTR-2607).

  • The response for the /api/v0/meta/settings/compliance security compliance API now includes the following information (an example request follows this list):

    • Product version

    • Global enforcement policy

    • For each repository, a list of the following:

      • Enforcement policies

      • Promotion policies

      • Pruning policies

      • Push mirroring policies

      • Poll mirroring policies

    (ENGDTR-2532)

  • Added a matches operator to the rule engine that matches subject fields to a user-provided regex. This operator can be used for promotion, pruning, image enforcement, and push mirroring policies (ENGDTR-2498).

  • MSR now boosts container security by running the scanner process in a sandbox with restricted permissions. In the event the scanner process is compromised, it has no access to the RethinkDB private keys or to any portion of the file system that it does not require (ENGDTR-1915).

  • Updated Django to version 3.1.10, resolving the following CVEs: CVE-2021-31542 and CVE-2021-32052 (ENGDTR-2651).
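
A hedged example of the compliance API request described above, with placeholder host and credentials:

    curl -s -u admin:$MSR_TOKEN https://msr.example.com/api/v0/meta/settings/compliance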

Addressed issues

  • Fixed an issue with the MSR web UI wherein the repository listing on the Organizations > Teams > Permissions tab displayed no more than ten teams (FIELD-3998).

  • Fixed an issue in the MSR web UI wherein the Scanning enabled setting failed to display correctly after changing it, navigating away from, and back to the Security tab (FIELD-3541).

  • Fixed an issue in the MSR web UI wherein after clicking Sync Database Now, the In Progress icon failed to disappear at the correct time and the scanning information (including the database version) failed to update without a browser refresh (FIELD-3541).

  • Fixed an issue in the MSR web UI wherein the value of Scanning timeout limit failed to display correctly after changing it, navigating away from, and back to the Security tab (FIELD-3541).

  • Fixed an issue wherein one or more RethinkDB servers in an unavailable state caused dtr emergency-repair to hang indefinitely (ENGDTR-2640).

  • Fixed an issue in MSR 2.9.2 that caused bootstrapper to panic when performing manual operations in an unhealthy environment.

Security information

  • Vulnerability scans no longer reveal a false positive for CVE-2020-17541 as of CVE database version 1388, published 2021-06-24 at 1:04 PM EST (ENGDTR-2634).

  • Vulnerability scans no longer reveal a false positive for CVE-2021-23017 as of CVE database version 1437, published 2021-06-27 at 5:11 PM EST (ENGDTR-2634).

  • Vulnerability scans may reveal the following CVE, though MSR is not impacted: CVE-2021-29921 (ENGDTR-2634).

  • Resolved the following CVEs in MSR containers:

    (ENGDTR-2634)

2.9.2 (discontinued)

(2021-06-29)

Warning

MSR 2.9.2 was discontinued shortly after release due to an issue wherein bootstrapper panicked when performing manual operations in an unhealthy environment. The product enhancements and bug fixes planned for MSR 2.9.2 are a part of MSR 2.9.3, which also resolves the bootstrapper issue.

Mirantis strongly recommends that customers who deployed MSR 2.9.2 upgrade to MSR 2.9.3.

2.9.1

(2021-05-17)

Enhancements

  • Added 5-star rating form to web UI (ENGDTR-2541, ENGDTR-2540).

  • MSR now applies a 56-character limit on “namespace/repository” length at creation, and thus eliminates a situation wherein attempts to push tags to repos with too-long names return a 500 Internal Server Error (ENGDTR-2525).

  • MSR now alerts administrators if the storage back end contents do not match the metadata, or if a new install of MSR uses a storage back end that contains data from a different MSR installation (ENGDTR-2501).

  • Updated golang to 1.16.3 and kube-linter to 0.2.1 (ENGDTR-2561).

  • Added activity log type DELETE for TagLimit pruning (ENGDTR-2497).

  • The MSR UI now includes a horizontal scrollbar (in addition to the existing vertical scrollbar), thus allowing users to better adjust the window dimensions.

  • The enableManifestLists setting is no longer needed and has been removed due to breaking Docker Content Trust (FIELD-2642, FIELD-2644).

  • Updated the MSR web UI Last updated at trigger for the promotion and mirror policies to include the option to specify before a particular time (after already exists) (FIELD-2180).

  • The mirantis/dtr --help documentation no longer recommends using the --rm option when invoking commands. Leaving it out preserves containers after they have finished running, thus allowing users to retrieve logs at a later time (FIELD-2204).
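
    A sketch of this pattern, with placeholder values for the replica ID, MKE URL, and user name:

    # Omit --rm so that the bootstrap container is preserved after it exits:
    docker run -i mirantis/dtr \
        backup \
        --ucp-url https://mke.example.com \
        --ucp-username admin \
        --existing-replica-id 5eb9459a7832 > backup.tar

    # Later, locate the finished container and retrieve its logs:
    docker ps -a --filter ancestor=mirantis/dtr
    docker logs <container-id>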

Addressed issues

  • Fixed broken links to MSR documentation in the MSR web UI (FIELD-3822).

  • Fixed “nasa bootstrap” integration test (and emergency repair procedure) (ENGDTR-2433).

  • Fixed an issue wherein pushing images with previously-pushed layer data that has been deleted from storage caused unknown blob errors. Pushing such images now replaces missing layer data. Sweeping image layers with image layer data missing from storage no longer causes garbage collection to error out (FIELD-1836).

Security information

  • MSR is not vulnerable to the following CVEs as a result of the update of mirantiseng/rethinkdb to Alpine 3.13.5:

    (ENGDTR-2580)

  • Though the version of busybox within the container is not vulnerable, dtr-rethink vulnerability scans may present false positives for CVE-2018-1000500 and CVE-2021-28831 in the busybox component (ENGDTR-2571).

  • Though the jvm-hotspot-openjdk component is not present in the dtr-jobrunner container, dtr-jobrunner vulnerability scans may detect CVE-2021-2161 and CVE-2021-2163 in the component (ENGDTR-2571).

  • Vulnerability scans no longer report CVE-2016-4074 as a result of the 2021.03 scanner update.

  • A self scan of MSR 2.9.1 reveals five vulnerabilities, however these CVEs are not a threat to MSR:

    (ENGDTR-2543)

  • urllib3 version 1.26.4 and later fixes CVE-2021-28363, however the dtr-jobrunner container uses Alpine which has yet to release urllib3 1.26.4 in a stable repository.

    The dtr-jobrunner container does not make any outgoing HTTP requests to containers external to MSR and therefore is not susceptible to CVE-2021-28363 (ENGDTR-2581).

  • A self-scan can report a false positive for CVE-2021-29482 (ENGDTR-2608).

2.9.0

(2021-04-12)

Enhancements

  • Added support for hosting Helm charts using Helm v2 and v3 (ENGDTR-1750).

    • Includes support for standard Helm charts, as well as OCI-based Helm charts (an experimental feature in Helm v3).

    • Chart and optional provenance file can be stored in the repository.

    • Includes the ability to lint Helm charts against a set of best practices and generate a linting report.

    • The MSR API now includes the following endpoints for supporting Helm charts in MSR (example requests follow the table):

      Description

      Endpoint

      Retrieve repository index file

      GET /charts/<namespace>/<reponame>/index.yaml
      

      Retrieve chart or provenance file from repository

      GET /charts/<namespace>/<reponame>/<chartname>/<filename>
      

      Upload chart (and, optionally, provenance file) to repository

      POST /charts/api/<namespace>/<reponame>/charts
      

      Upload provenance file to repository

      POST /charts/<namespace>/<reponame>/prov
      

      Delete chart (and provenance file, if present) from repository

      DELETE /charts/<namespace>/<reponame>/charts/ \
      <chartname>/<chartversion>
      

Get metadata for all charts in repository

      GET /charts/<namespace>/<reponame>/charts
      

      Get metadata for all versions of a chart in the repository

      GET /charts/<namespace>/<reponame>/charts/<chartname>
      

      Get metadata for chart version in repository

      GET /charts/<namespace>/<reponame>/charts/ \
      <chartname>/<chartversion>
      

      Get default values for chart version in repository

      GET /charts/<namespace>/<reponame>/charts/<chartname>/ \
      <chartversion>/values
      

      Template a chart version in repository

      GET /charts/<namespace>/<reponame>/charts/<chartname>/ \
      <chartversion>/template
      

      Lint a particular chart version

      POST /charts/api/<namespace>/<reponame>/charts/ \
      <chartname>/<chartversion>/lint
      

      Lint every version of all available charts

      POST /charts/api/lint
      

      Get linting results (as a JSON file) for a particular chart version in a particular repository

      GET /charts/api/<namespace>/<reponame>/charts/ \
      <chartname>/<chartversion>/lintsummary
      

      Export linting results (as a CSV file) for a particular chart version in a particular repository

      GET /charts/api/<namespace>/<reponame>/charts/ \
      <chartname>/<chartversion>/lintsummary/export
      

      Retrieve all available linting rules (each rule includes a name, description, remediation, template, and parameters)

      GET /charts/api/lintingrules
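
      The following sketch shows how some of these endpoints might be exercised with curl. The host msr.example.com, namespace dev, repository webapp, chart nginx-1.2.3, and credentials are illustrative placeholders, and the format of the upload request body is an assumption rather than a documented contract:

        # Upload a chart package to the repository
        # (request body format is an assumption)
        curl -u admin:password \
          --data-binary "@nginx-1.2.3.tgz" \
          https://msr.example.com/charts/api/dev/webapp/charts

        # Retrieve the repository index file
        curl -u admin:password \
          https://msr.example.com/charts/dev/webapp/index.yaml

        # Lint chart version 1.2.3, then retrieve the JSON linting summary
        curl -u admin:password -X POST \
          https://msr.example.com/charts/api/dev/webapp/charts/nginx/1.2.3/lint
        curl -u admin:password \
          https://msr.example.com/charts/api/dev/webapp/charts/nginx/1.2.3/lintsummary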
      
  • Added running image enforcement policy support to MSR, which allows users to block clients from pulling images based on specified criteria.

    • Users can configure policies scoped either globally or at the repository level.

    • MSR logs all enforcement events to the activity log in the MSR web UI. (MSR tracks enforcement events triggered by global enforcement policies as GLOBAL event types.)

    • The MSR API now includes the following endpoints for supporting running image enforcement in MSR (a usage sketch follows the list):

      Get all repository enforcement policies:
        GET /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies

      Create repository enforcement policy:
        POST /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies

      Delete all repository enforcement policies:
        DELETE /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies

      Retrieve specific repository enforcement policy:
        GET /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>

      Delete specific repository enforcement policy:
        DELETE /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>

      Patch specific repository enforcement policy:
        PATCH /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>

      Add rules to specific repository enforcement policy:
        POST /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>/rules

      Delete specific rule from repository enforcement policy:
        DELETE /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>/rules/<ruleid>

      Patch specific rule within repository enforcement policy:
        PATCH /api/v0/repositories/<namespace>/<reponame>/enforcementPolicies/<enforcementpolicyid>/rules/<ruleid>

      Modify global enforcement policy:
        POST /api/v0/meta/settings/globalEnforcementPolicy/rules

      Delete and disable global enforcement policy:
        DELETE /api/v0/meta/settings/globalEnforcementPolicy

      Update specific rule within global enforcement policy:
        PUT /api/v0/meta/settings/globalEnforcementPolicy/rules/<ruleid>

      Delete specific rule within global enforcement policy:
        DELETE /api/v0/meta/settings/globalEnforcementPolicy/rules/<ruleid>
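
      A sketch of how the repository-scoped endpoints might be called with curl. The host, namespace, repository, and credentials are illustrative placeholders, and the JSON rule body is hypothetical; the actual policy schema is defined by the MSR API reference:

        # Create a repository enforcement policy
        # (the rule shown is a hypothetical illustration of the shape,
        # not a documented schema)
        curl -u admin:password -X POST \
          -H "Content-Type: application/json" \
          -d '{"rules": [{"field": "tag", "operator": "equals", "values": ["latest"]}]}' \
          https://msr.example.com/api/v0/repositories/dev/webapp/enforcementPolicies

        # List all enforcement policies for the repository
        curl -u admin:password \
          https://msr.example.com/api/v0/repositories/dev/webapp/enforcementPolicies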
      
  • Added the capability to download scanner reports and optionally bundle them with the following information (which you can then send to Mirantis, as required):

    • Missing vulnerabilities

    • False positives

    • Incorrect component versions

    • Additional information

    (ENGDTR-2332)

  • Starting with this release, you no longer need to indicate that storage has been migrated when modifying the back end storage configuration. MSR now assumes that the new storage back end is not empty and that its contents match those of the old back end. Accordingly, we have removed the --storage-migrated flag and the web UI storage migration checkbox from MSR.

    If the new back end is empty, or if its contents do not match those of the old back end, MSR produces unknown blob errors during pushes and pulls.

    If you deploy a brand new storage back end and the data inside does not match the old back end, you must first reinitialize the storage with the new --reinitialize-storage flag within reconfigure. Note that this action erases all tag metadata (FIELD-2571). A brief illustration follows.
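
    A minimal sketch of such a reconfiguration, assuming the MSR 2.9 bootstrapper image; the MKE URL and username are illustrative placeholders:

      # Reconfigure MSR against a brand new storage back end,
      # reinitializing storage (this erases all tag metadata)
      docker run -it mirantis/dtr:2.9.0 reconfigure \
        --ucp-url https://mke.example.com \
        --ucp-username admin \
        --reinitialize-storage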

  • All analytics reports for instances of MSR with a Mirantis-issued license key now include the license ID (even when the anonymize analytics setting is enabled). The license subject reads License ID in the web UI (ENGDTR-2327).

  • Intermittent failures no longer occur during metadata garbage collection when using Google Cloud Storage as the back end (ENGDTR-2376).

  • Pulling images from a repository using crictl no longer returns a 500 error (FIELD-3331).

  • Lengthy tag names no longer overlap with adjacent text in the repository tag list (FIELD-1631).

Security information

  • MSR is not vulnerable to CVE-2019-15562, despite its detection in dtr-notary-signer and dtr-notary-server vulnerability scans, as the SQL back end is not used in Notary deployment (ENGDTR-2319).

  • Vulnerability scans of the dtr-jobrunner container can give false positives for CVE-2020-29363, CVE-2020-29361, and CVE-2020-29362 in the p11-kit component. The container’s version of p11-kit is not vulnerable to these CVEs (ENGDTR-2319).

  • Resolved CVE-2019-20907 (ENGDTR-2259).

Considerations

  • CentOS 8 entered EOL status as of 31-December-2021. For this reason, Mirantis no longer supports CentOS 8 for any version of MSR. We encourage customers who are using CentOS 8 to migrate to any of the supported operating systems, as further bug fixes will not be forthcoming.

  • In developing MSR 2.9.x, Mirantis has been transitioning from legacy Docker Hub-issued licenses to JWT licenses, as detailed below:

    • Versions 2.9.0 to 2.9.3: Docker Hub licenses and JWT licenses

    • Versions 2.9.4 and later: JWT licenses only

  • When malware is present in customer images, malware scanners operating on MSR nodes at runtime can wrongly report MSR as a bad actor. If your malware scanner detects any issue in a running instance of MSR, refer to Vulnerability scanning.

Note

The Mirantis Migration Tool (MMT) release notes are included within the Mirantis Migration Tool guide.

Release Compatibility Matrix

MSR 2.9 Compatibility Matrix

Mirantis Secure Registry (MSR, formerly Docker Trusted Registry) provides an enterprise-grade container registry solution that can be easily integrated to form the core of an effective, secure software supply chain.

Support for MSR is defined in the Mirantis Cloud Native Platform Subscription Services agreement.

Operating system compatibility

MSR functionality is dependent on MKE, and MKE functionality is dependent on MCR. As such, MSR operating system compatibility is contingent on the operating system compatibility of the MCR versions with which your particular MKE version is compatible.

To determine MSR operating system compatibility:

  1. Access the MKE compatibility matrix and locate the version of MKE that you are running with MSR.

  2. Note the MCR versions with which that MKE version is compatible.

  3. Access the MCR compatibility matrix and locate the MCR versions that are compatible with your version of MKE to determine operating system compatibility.

MSR version    Required MKE version

2.9.15         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0

2.9.14         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0

2.9.13         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0

2.9.12         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.11         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.10         3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.9          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.8          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.7          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.6          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.5          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.4          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.3          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.2 1        Not applicable

2.9.1          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

2.9.0          3.7.2, 3.7.1, 3.7.0
               3.6.8, 3.6.7, 3.6.6, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0
               3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0
               3.4.15, 3.4.14, 3.4.13, 3.4.12, 3.4.11, 3.4.10, 3.4.9, 3.4.8, 3.4.7, 3.4.6, 3.4.5, 3.4.4, 3.4.2, 3.4.0

1

MSR 2.9.2 was discontinued shortly after release due to an issue wherein the bootstrapper panicked when performing manual operations in an unhealthy environment. The product enhancements and bug fixes planned for MSR 2.9.2 are included in MSR 2.9.3, which also resolves the bootstrapper issue. Mirantis strongly recommends that customers who deployed MSR 2.9.2 upgrade to MSR 2.9.3 or later.

Storage back ends

MSR supports the following storage systems:

Persistent volume

  • NFS (v3 and v4)

  • Bind mount

  • Volume

Cloud storage providers

  • Amazon S3

  • Microsoft Azure

  • Google Cloud Storage

  • Alibaba Cloud Object Storage Service

Note

MSR cannot be deployed to Windows nodes.

MKE and MSR Browser compatibility

The Mirantis Kubernetes Engine (MKE) and Mirantis Secure Registry (MSR) web user interfaces (UIs) both run in the browser, separate from any back-end software. As such, Mirantis aims to support browsers independently of the back-end software in use.

Mirantis currently supports the following web browsers:

Browser           Supported version     Release date        Operating systems

Google Chrome     96.0.4664 or newer    15 November 2021    macOS, Windows

Microsoft Edge    95.0.1020 or newer    21 October 2021     Windows only

Firefox           94.0 or newer         2 November 2021     macOS, Windows

To ensure the best user experience, Mirantis recommends that you use the latest version of any of the supported browsers. The use of other browsers, or of older versions of the supported browsers, can result in rendering issues, and can even lead to glitches and crashes when certain JavaScript language features or browser web APIs are not supported.

Important

Mirantis does not tie browser support to any particular MKE or MSR software release.

Mirantis strives to leverage the latest in browser technology to build more performant client software, as well as to ensure that our customers benefit from the latest browser security updates. To this end, our strategy is to regularly move our supported browser versions forward, while lagging behind the latest releases by approximately one year to give our customers a sufficient upgrade buffer.

MKE, MSR, and MCR Maintenance Lifecycle

The MKE, MSR, and MCR platform subscription provides software, support, and certification to enterprise development and IT teams that build and manage critical applications in production at scale. It provides a trusted platform, comprised primarily of Mirantis Kubernetes Engine (MKE), Mirantis Secure Registry (MSR), and Mirantis Container Runtime (MCR), that supplies integrated management and security across the application lifecycle.

Mirantis validates the MKE, MSR, and MCR platform for the operating system environments specified in the MCR 23.0 compatibility matrix, adhering to the Maintenance Lifecycle detailed here. Support for the MKE, MSR, and MCR platform is defined in the Mirantis Cloud Native Platform Subscription Services agreement.

Detailed here are all currently supported product versions, as well as the product versions most recently deprecated. It can be assumed that all earlier product versions are at End of Life (EOL).

Important Definitions

  • “Major Releases” (X.y.z): Vehicles for delivering major and minor feature development and enhancements to existing features. They incorporate all applicable Error corrections made in prior Major Releases, Minor Releases, and Maintenance Releases.

  • “Minor Releases” (x.Y.z): Vehicles for delivering minor feature developments, enhancements to existing features, and defect corrections. They incorporate all applicable Error corrections made in prior Minor Releases, and Maintenance Releases.

  • “Maintenance Releases” (x.y.Z): Vehicles for delivering Error corrections that are severely affecting a number of customers and cannot wait for the next major or minor release. They incorporate all applicable defect corrections made in prior Maintenance Releases.

  • “End of Life” (EOL): Versions that are no longer supported by Mirantis. Updating to a later version is recommended.

Support lifecycle

Support period     Support level

GA to 12 months    Full support

12 to 18 months    Full support 1

18 to 24 months    Limited Support for existing installations 2

1 Software patches for critical bugs and security issues only; no feature enablement.

2 Software patches for critical security issues only.

Mirantis Kubernetes Engine (MKE)

                           3.6.z                  3.7.z

General Availability (GA)  2022-OCT-13 (3.6.0)    2023-AUG-30 (3.7.0)

End of Life (EOL)          2024-OCT-13            2025-AUG-29

Release frequency          x.y.Z every 6 weeks    x.y.Z every 6 weeks

Patch release content      As needed, for both versions: maintenance releases, security patches, and custom hotfixes

Supported lifespan         2 years 1              2 years 1

1 Refer to the Support lifecycle table for details.

EOL MKE Versions

MKE Version    EOL date

2.0.z          2017-AUG-16
2.1.z          2018-FEB-07
2.2.z          2019-NOV-01
3.0.z          2020-APR-16
3.1.z          2020-NOV-06
3.2.z          2021-JUL-21
3.3.z          2022-MAY-27
3.4.z          2023-APR-11
3.5.z          2023-NOV-22

Mirantis Secure Registry (MSR)

                           2.9.z                  3.0.z                  3.1.z

General Availability (GA)  2021-APR-12 (2.9.0)    2021-DEC-21 (3.0.0)    2023-SEP-28 (3.1.0)

End of Life (EOL)          2024-OCT-13            2024-APR-20            2025-SEP-27

Release frequency          x.y.Z every 6 weeks, for all versions

Patch release content      As needed, for all versions: maintenance releases, security patches, and custom hotfixes

Supported lifespan         2 years 1, for all versions

1 Refer to the Support lifecycle table for details.

EOL MSR Versions

MSR Version    EOL date

2.1.z          2017-AUG-16
2.2.z          2018-FEB-07
2.3.z          2019-FEB-15
2.4.z          2019-NOV-01
2.5.z          2020-APR-16
2.6.z          2020-NOV-06
2.7.z          2021-JUL-21
2.8.z          2022-MAY-27

Mirantis Container Runtime (MCR)

                           Enterprise 23.0

General Availability (GA)  2023-FEB-23 (23.0.1)

End of Life (EOL)          2025-FEB-22

Release frequency          x.y.Z every 6 weeks

Patch release content      As needed: maintenance releases, security patches, and custom hotfixes

Supported lifespan         2 years 1

1 Refer to the Support lifecycle table for details.

EOL MCR Versions

MCR Version                          EOL date

CSE 1.11.z                           2017-MAR-02
CSE 1.12.z                           2017-NOV-14
CSE 1.13.z                           2018-FEB-07
EE 17.03.z                           2018-MAR-01
Docker Engine - Enterprise v17.06    2020-APR-16
Docker Engine - Enterprise 18.03     2020-JUN-16
Docker Engine - Enterprise 18.09     2020-NOV-06
Docker Engine - Enterprise 19.03     2021-JUL-21
MCR 19.03.8+                         2022-MAY-27
MCR 20.10.0+                         2023-DEC-10

Release Cadence and Support Lifecycle

With the intent of improving the customer experience, Mirantis strives to offer maintenance releases for the Mirantis Secure Registry (MSR) software every six to eight weeks. Primarily, these maintenance releases aim to resolve known issues and issues reported by customers, address CVEs, and reduce technical debt. The version of each MSR maintenance release is reflected in the third digit position of the version number (for example, the most current maintenance release of MSR 2.9 is MSR 2.9.16).

In parallel with our MSR maintenance release work, each year Mirantis will develop and release a new major version of MSR, the Mirantis support lifespan of which will adhere to our legacy two-year standard.

End of Life Date

The End of Life (EOL) date for MSR 2.9 is 2024-OCT-13.

For more information on MSR version lifecycles, refer to the MKE, MSR, and MCR Maintenance Lifecycle.

The MSR team will make every effort to hold to the release cadence stated here. Customers should be aware, though, that development and release cycles can change without advance notice.

Technology Preview features

A Technology Preview feature provides early access to upcoming product innovations, allowing customers to experiment with the functionality and provide feedback.

Technology Preview features may be privately or publicly available, but neither type is intended for production use. While Mirantis will provide assistance with such features through official channels, normal Service Level Agreements do not apply.

As Mirantis considers making future iterations of Technology Preview features generally available, we will do our best to resolve any issues that customers experience when using these features.

During the development of a Technology Preview feature, additional components may become available to the public for evaluation. Mirantis cannot guarantee the stability of such features. As a result, if you are using Technology Preview features, you may not be able to seamlessly upgrade to subsequent product releases.

Mirantis makes no guarantees that Technology Preview features will graduate to generally available features.

Open Source Components and Licenses

Click any product component license below to download a text file of that license to your local system.