Introduction

The Mirantis Secure Registry (MSR) documentation is your resource for information on how to deploy and operate an MSR instance. The content is intended to provide users with an understanding of the core concepts of the product, as well as instruction sufficient to deploy and operate the software.

Mirantis is committed to constantly building on and improving the MSR documentation, in response to the feedback and requests we receive from the MSR user base.

Product Overview

Mirantis Secure Registry (MSR) is a solution that enables enterprises to store and manage their container images on-premises or in their virtual private clouds. With the advent of MSR 3.1.0, the software can run alongside your other apps in any standard Kubernetes distribution, or you can deploy it onto a Swarm cluster. As a result, the MSR user has far greater flexibility, as many resources are administered by the orchestrator rather than the registry itself. And while MSR 3.1.0 is not integrated with Mirantis Kubernetes Engine (MKE) as it was prior to version 3.0.0, it runs just as well on MKE as on any supported Kubernetes distribution or on Docker Swarm.

The security that is built into MSR enables you to verify and trust the provenance and content of your applications and to ensure secure separation of concerns. Using MSR, you can meet security and regulatory compliance requirements. In addition, automated operations and integration with CI/CD speed up application testing and delivery. The most common use cases for MSR include:

Helm charts repositories

Deploying applications to Kubernetes can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources, such as pods, services, deployments, and replica sets, each of which requires the manual creation of a detailed YAML manifest file. This amounts to a significant investment of work and time. With Helm charts (packages that consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files), you can install the software you need with all of its dependencies, as well as upgrade and configure it, all while saving considerable time.

Automated development

Easily create an automated workflow where you push a commit that triggers a build on a CI provider, which pushes a new image into your registry. Then, the registry fires off a webhook and triggers deployment on a staging environment, or notifies other systems that a new image is available.

Secure and vulnerability-free images

In industries in which applications must comply with certain security standards to meet regulatory requirements, your applications are only as secure as the images from which they run. To ensure that your images are secure and free of vulnerabilities, track your images using a binary image scanner that detects the components in each image and identifies any associated CVEs. In addition, you can run image enforcement policies to prevent vulnerable or inappropriate images from being pulled and deployed from your registry.

Reference Architecture

The MSR Reference Architecture provides comprehensive technical information on Mirantis Secure Registry (MSR), including component particulars, infrastructure specifications, and networking and volumes detail.

Introduction to MSR

Mirantis Secure Registry (MSR) is an enterprise-grade image storage solution. Installed behind a firewall, either on-premises or on a virtual private cloud, MSR provides a secure environment where users can store and manage their images.

Starting with MSR 3.1.0, MSR can run alongside your other apps in any standard Kubernetes distribution, or you can deploy it onto a Swarm cluster. As a result, the MSR user has a great deal of flexibility, as many resources are administered by the orchestrator rather than by the registry itself.

While MSR 3.1.x is not integrated with Mirantis Kubernetes Engine (MKE), as it was prior to version 3.0.0, it runs just as well on MKE as on any supported Kubernetes distribution or on Docker Swarm.

The advantages of MSR include the following:

Image and job management

MSR has a web-based user interface used for browsing images and auditing repository events. With the web UI, you can see which Dockerfile lines produced an image and, if security scanning is enabled, a list of all of the software installed in that image and any Common Vulnerabilities and Exposures (CVEs). You can also audit jobs with the web UI.

MSR can serve as a continuous integration and continuous delivery (CI/CD) component, in the building, shipping, and running of applications.

Availability

MSR is highly available through the use of multiple replicas of all containers and metadata. As such, MSR continues to operate in the event of machine failure, allowing time for repair.

Efficiency

MSR can reduce the bandwidth used when pulling images by caching images closer to users. In addition, MSR can clean up unreferenced manifests and layers.

Built-in access control

As with Mirantis Kubernetes Engine (MKE), MSR uses role-based access control (RBAC), which allows you to manage image access, either manually, with LDAP, or with Active Directory.

Security scanning

A security scanner is built into MSR, which you can use to discover the versions of the software in use in your images. The tool scans each layer and aggregates the results, offering a complete picture of what is being shipped as a part of your stack. Most importantly, because the security scanner draws on a periodically updated vulnerability database, it provides current insight into your exposure to known security threats.

Image signing

MSR ships with Notary, which allows you to sign and verify images using Docker Content Trust.
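As a client-side sketch of how signing is typically exercised, the following enables Docker Content Trust so that pushes are signed and pulls are verified through Notary. The Notary server URL is an assumption shown as a placeholder; substitute your MSR FQDN.

```shell
# Enable Docker Content Trust for this shell session. With these set,
# subsequent `docker push` and `docker pull` commands sign and verify images.
export DOCKER_CONTENT_TRUST=1
# Placeholder assumption: Notary endpoint on the MSR host.
export DOCKER_CONTENT_TRUST_SERVER=https://msr.example.com:4443
```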

Components

Mirantis Secure Registry (MSR) is a containerized application that runs on a Kubernetes cluster. After deploying MSR, you can use your Docker CLI client to log in, push images, and pull images. For high availability, you can horizontally scale your MSR workloads across multiple Kubernetes worker nodes.

Workloads

Descriptions for each of the workloads that MSR creates during installation are available in the table below.

Caution

Do not use these components in your applications, as they are for internal MSR use only.

MSR installation workloads

  • API
    Kubernetes: deployment/<release-name>-msr-api
    Swarm: msr_msr-api-server
    Executes the MSR business logic, serving the MSR web application and API.

  • Garant
    Kubernetes: deployment/<release-name>-msr-garant
    Swarm: msr_msr-garant
    Manages MSR authentication.

  • Jobrunner
    Kubernetes: deployment/<release-name>-msr-jobrunner-<deployment>
    Swarm: msr_msr-jobrunner
    Runs asynchronous background jobs, including garbage collection and image vulnerability scans.

  • NGINX
    Kubernetes: deployment/<release-name>-msr-nginx
    Swarm: msr_msr-nginx
    Receives HTTP and HTTPS requests and proxies those requests to other MSR components.

  • Notary server
    Kubernetes: deployment/<release-name>-msr-notary-server
    Swarm: msr_msr-notary-server
    Provides signing and verification for images that are pushed to or pulled from the secure registry.

  • Notary signer
    Kubernetes: deployment/<release-name>-msr-notary-signer
    Swarm: msr_msr-notary-signer
    Performs server-side timestamp and snapshot signing for Content Trust metadata.

  • Registry
    Kubernetes: deployment/<release-name>-msr-registry
    Swarm: msr_msr-registry
    Implements pull and push functionality for Docker images and manages how images are stored.

  • RethinkDB
    Kubernetes: statefulset/<release-name>-msr-rethinkdb-cluster, deployment/<release-name>-msr-rethinkdb-proxy
    Swarm: msr_msr-rethinkdb
    Stores persistent repository metadata.

  • Scanningstore
    Kubernetes: statefulset/<release-name>-msr-scanningstore
    Swarm: msr_msr-scanningstore
    Stores security scanning data.

  • eNZi
    Kubernetes: deployment/<release-name>-enzi-api, statefulset/<release-name>-enzi-worker
    Swarm: msr_msr-enzi-api, msr_msr-enzi-worker
    Authenticates and authorizes MSR users.

Third-party components

  • PostgreSQL
    Kubernetes: deployment/postgres-operator
    Manages the security scanning database.

  • cert-manager
    Kubernetes: deployment/cert-manager, deployment/cert-manager-cainjector, deployment/cert-manager-webhook
    Manages certificates for all MSR components.

Note

Third-party components are present only in Kubernetes deployments. Swarm-based installations include only the components listed in the MSR installation workloads table.

The communication flow between MSR workloads is illustrated below:

[Diagram: msr-architecture]

Note

The third-party cert-manager component interacts with all of the components displayed in the above diagram.

JobRunner

Descriptions for each of the job types that are run by MSR are available in the table below.

MSR job types

  • analytics_report: Uploads an analytics report to Mirantis.

  • helm_chart_lint: Lints a Helm chart.

  • helm_chart_lint_all: Lints all charts in all repositories.

  • onlinegc: Performs garbage collection for all types of MSR data and metadata.

  • onlinegc_blobs: Performs garbage collection of orphaned image layer data.

  • onlinegc_events: Performs auto-deletion of repository events.

  • onlinegc_joblogs: Performs auto-deletion of job logs.

  • onlinegc_metadata: Performs garbage collection of image metadata.

  • onlinegc_scans: Performs garbage collection of security scan results for deleted layers.

  • poll_mirror: Pulls tags from remote repositories, as determined by mirroring policies.

  • push_mirror_tag: Pushes image tags to remote repositories, as determined by mirroring policies.

  • scan_check: Scans an image by digest.

  • scan_check_all: Rescans all previously scanned images.

  • scan_check_single: Scans a single layer of an image.

  • tag_prune: Deletes tags from remote repositories, as determined by the pruning policies of the repositories.

  • update_vuln_db: Updates the vulnerability database (CVE list).

  • webhook: Sends a webhook.

System requirements

Make sure you review the resource allocation detail for MSR prior to installation.

System requirements on Kubernetes

Herein, we offer detail on both a minimum resource allotment and guidelines for an optimum resource allotment.

Minimum resource allotment

Verify that at a minimum your system can allocate the following resources solely to the running of MSR:

  • Nodes: One Linux/AMD64 worker node, running Kubernetes 1.21 - 1.27, with:

    • 16 GB RAM

    • 4 vCPUs

  • Kubernetes command line tool: kubectl

  • Kubernetes configuration file: kubeconfig, which is necessary for accessing the Kubernetes cluster.

    Note

    If you are installing MSR 3.0.x on an MKE Kubernetes cluster, you must download the MKE client bundle to obtain the kubeconfig file.

  • Certificate management: cert-manager installed on the cluster. Minimum required version: 1.7.2.

  • Kubernetes package management: Helm. Minimum required version: 3.7.0.

  • Metadata storage: One 64 GB Kubernetes persistent volume that supports the ReadWriteOnce volume access mode, or a StorageClass that can provision such a volume.

  • Image data storage: Any of the following:

    • One Kubernetes persistent volume that supports the ReadWriteMany volume access mode, or a StorageClass that can provision such a volume

    • One cloud object storage bucket, such as Amazon S3

    For more information, refer to Storage.

  • Image-scanning CVE database: A PostgreSQL server with sufficient storage for a 24 GB database. This can be either:

    • An MSR-deployed dedicated PostgreSQL server, an option that requires:

      • Postgres Operator installed on the cluster. Minimum required version: 1.9.0.

      • 4 GB of RAM and 1 vCPU available for reservation on a Kubernetes worker node

      • One Kubernetes persistent volume with 24 GB of available storage that supports the ReadWriteOnce volume access mode, or a StorageClass that can provision such a volume

    • An existing PostgreSQL server with sufficient storage for a 24 GB database

System requirements on Swarm

Herein, we offer detail on both a minimum resource allotment and guidelines for an optimum resource allotment.

Minimum resource allotment

Verify that at a minimum your system can allocate the following resources solely to the running of MSR:

  • Nodes: One Linux/AMD64 worker node, running Docker Swarm, with:

    • 16 GB RAM

    • 4 vCPUs

  • Docker Swarm command line tool: docker swarm

  • Storage: One cloud storage bucket, such as Amazon S3, with at least 88 GB of space reserved.

Volumes

MSR handles the creation of default volumes differently in Kubernetes and Swarm deployments.

Kubernetes deployments

By default, MSR creates the following persistent volume claims (PVCs):

  • <release-name>: Stores image data when MSR is configured to store image data in a persistent volume

  • <release-name>-rethinkdb-cluster-<n>: Stores repository metadata

  • <release-name>-scanningstore-<n>: Stores vulnerability scan data when MSR is configured to deploy an internal PostgreSQL cluster

You can customize the storage class that is used to provision persistent volumes for these claims, or you can pre-provision volumes for use with MSR. Refer to install-online for more information.

Swarm deployments

By default, MSR creates the following volumes:

  • msr_msr-storage: Stores image data when MSR is configured to store image data in a persistent volume

  • msr_msr-rethink: Stores repository metadata

  • msr_pgdata-msr-scanningstore: Stores vulnerability scan data when MSR is configured to deploy an internal PostgreSQL cluster

Storage

MSR supports the use of either a Persistent Volume or Cloud storage:

  • Persistent Volume (orchestrator: Kubernetes)

    MSR is compatible with the types of Persistent Volumes listed in the Kubernetes documentation.

    Note

    nfs is commonly used in production environments. The hostPath and local options are not suitable for production; however, they may be of use in certain limited testing scenarios.

  • Cloud (orchestrators: Kubernetes, Swarm)

    MSR is compatible with the following storage providers:

    • NFS

    • Amazon S3

    • Microsoft Azure

    • OpenStack Swift

    • Google Cloud Storage

    • Alibaba Cloud Object Storage Service
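For cloud storage on Kubernetes, the back end is selected in the same registry storage section of the custom resource manifest that the persistentVolume examples in this guide use. The following sketch is illustrative only: the backend value 's3' and the s3 field names mirror the Docker Registry S3 storage driver and are assumptions, so consult the MSR manifest reference for the authoritative schema.

```yaml
spec:
  registry:
    storage:
      backend: 's3'        # assumption: selects the S3 storage driver
      s3:
        bucket: '<bucket-name>'
        region: '<aws-region>'
```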

Note

The deployment of MSR to Windows nodes is not supported.

MSR web UI

Use the MSR web UI to manage settings and user permissions for your MSR installation.

Rule engine

MSR uses a rule engine to evaluate policies, such as tag pruning and image enforcement.

The rule engine supports the following operators:

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

  • starts with

  • ends with

  • contains

  • one of

  • not one of

  • matches

  • before

  • after

Note

The matches operator evaluates subject fields against a user-provided regular expression (regex). The regex for matches must follow the specification in the official Go documentation: Package syntax.

Policies such as tag pruning and image enforcement use the rule engine.
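As an illustration of the matches operator, the following sketch uses grep's extended regular expressions, which overlap with Go's RE2 syntax for simple patterns, to test whether a tag conforms to a release pattern:

```shell
# Test whether an image tag matches a semantic-version release pattern.
tag="v1.2.3"
pattern='^v[0-9]+\.[0-9]+\.[0-9]+$'
if printf '%s\n' "$tag" | grep -Eq "$pattern"; then
  result="match"
else
  result="no match"
fi
echo "$result"
```

A pruning or enforcement rule using matches would apply its action only to tags for which the pattern evaluates as a match.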

Installation Guide

Targeted to deployment specialists and QA engineers, the MSR Installation Guide provides the detailed information and procedures you need to install and configure Mirantis Secure Registry (MSR).

There are three paths available for the installation of MSR 3.1.x: MSR on Swarm, MSR on Kubernetes using the MSR Operator, and MSR on Kubernetes using a Helm chart.

Prepare MKE for MSR Install

Important

The information herein is targeted solely to Kubernetes deployments.

To install MSR on MKE you must first configure both the default:postgres-operator user account and the default:postgres-pod service account in MKE with the privileged permission.

To prepare MKE for MSR install:

  1. Log in to the MKE web UI.

  2. In the left-side navigation panel, click the <user name> drop-down to display the available options.

  3. For MKE 3.6.0 or earlier, click Admin Settings > Orchestration. For MKE 3.6.1 or later, click Admin Settings > Privileges.

  4. Navigate to the User account privileges section.

  5. Enter <namespace-name>:postgres-operator into the User accounts field.

    Note

    You can replace <namespace-name> with default to indicate the use of the default namespace.

  6. Select the privileged check box.

  7. Scroll down to the Service account privileges section.

  8. Enter <namespace-name>:postgres-pod into the Service accounts field.

    Note

    You can replace <namespace-name> with default to indicate the use of the default namespace.

  9. Select the privileged checkbox.

  10. Click Save.

Important

For already deployed MSR instances, issue a rolling restart of the postgres-operator deployment:

kubectl rollout restart deploy/postgres-operator

Install on Kubernetes

In MSR 3.1, you can use either of two methods for installing the software on any Kubernetes distribution that supports persistent storage: the recommended MSR Operator method and the legacy Helm chart method.

For information on installing high availability MSR instances, refer to Install an HA MSR deployment.

Install using the MSR Operator

Available since MSR 3.1.1

This guide details how to install MSR using the MSR Operator in either an online or an air-gapped Kubernetes environment.

Install online using the MSR Operator

Herein, Mirantis provides step-by-step instruction on how to install MSR onto an Internet-connected Kubernetes cluster using the MSR Operator.

Prepare your environment
  1. Install and configure your Kubernetes distribution.

  2. Ensure that the default StorageClass on your cluster supports the dynamic provisioning of volumes. If necessary, refer to the Kubernetes documentation Change the default StorageClass.

    If no default StorageClass is set, you can specify a StorageClass for MSR to use by providing the following additional parameters to the custom resource manifest:

    spec:
      registry:
        storage:
          persistentVolume:
            storageClassName: '<my-storageclass>'
      postgresql:
        volume:
          storageClass: '<my-storageclass>'
      rethinkdb:
        cluster:
          persistentVolume:
            storageClass: '<my-storageclass>'
    

    The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

    spec:
      registry:
        storage:
          backend: 'persistentVolume'
    

    MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the ReadWriteMany access modes, depending on the purpose for which they are created. Thus the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the ReadWriteOnce and ReadWriteMany access modes.

    The <release-name> PVC is created by default with the ReadWriteMany access mode. If you choose to install MSR with a persistentVolume back end, you can override this default access mode by adding the following parameter to the custom resource manifest:

    spec:
      registry:
        storage:
          persistentVolume:
            accessModes: ['<new-access-mode>']
    
Prerequisites

The following key components must be in place before you can install MSR on Kubernetes using the online method:

  • cert-manager

  • Postgres Operator

  • RethinkDB Operator

  • MSR Operator

To ensure that all of the key prerequisites are present:

  1. Install cert-manager:

    Important

    The cert-manager version must be 1.7.2 or later.

    helm upgrade --install cert-manager cert-manager \
         --repo https://charts.jetstack.io \
         --version 1.12.3 \
         --set installCRDs=true
    
  2. Install Postgres Operator:

    Important

    The Postgres Operator version you install must be 1.9.0 or later, as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1 Kubernetes API, which is no longer served as of Kubernetes 1.25. This being the case, various MSR features may not function properly if a Postgres Operator prior to 1.9.0 is installed alongside MSR on Kubernetes 1.25 or later.

    helm upgrade --install postgres-operator postgres-operator \
         --repo https://opensource.zalando.com/postgres-operator/charts/postgres-operator/ \
         --version 1.10.0 \
         --set configKubernetes.spilo_runasuser=101 \
         --set configKubernetes.spilo_runasgroup=103 \
         --set configKubernetes.spilo_fsgroup=103
    

    Note

    By default, MSR uses the persistent volume claims detailed in Volumes.

    If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can add the following to the MSR custom resource manifest to provide the new instance with the name of the associated PersistentVolumeClaim:

    spec:
      registry:
        storage:
          backend: 'persistentVolume'
          persistentVolume:
            existingClaim: '<pre-existing-msr-pvc>'
    

    This setting indicates the <release-name> PVC referred to in volumes.

  3. Install RethinkDB Operator:

    helm upgrade --install rethinkdb-operator rethinkdb-operator \
         --repo https://registry.mirantis.com/charts/rethinkdb/rethinkdb-operator \
         --version 1.0.0
    
  4. Install MSR Operator:

    helm upgrade --install msr-operator msr-operator \
         --repo https://registry.mirantis.com/charts/msr/msr-operator \
         --version 1.0.0
    
Install MSR

After installing the prerequisites, you can deploy MSR by editing and applying the custom resource manifest, downloadable herein.

Following MSR installation, you can make changes to the MSR CustomResource (CR) by using kubectl to edit the custom resource manifest.

To install MSR:

  1. Download the cr-sample-manifest YAML file by clicking cr-sample-manifest.yaml.

  2. Make further edits to the cr-sample-manifest.yaml file as needed. Default values are applied to any field that is present in the manifest but left blank, whereas a field that is absent from the manifest receives an empty value.

  3. Invoke the following command to run the webhook health check and create the custom resource:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  4. Verify completion of the reconciliation process:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

    To troubleshoot the reconciliation process, run the following commands:

    kubectl describe msrs.msr.mirantis.com
    kubectl describe rethinkdbs.rethinkdb.com
    

    Review the MSR Operator Pod logs for more detailed results:

    kubectl logs <msr-operator-pod-name>
    

To verify the success of your MSR installation:

  1. Verify that all msr-* Pods are in the running state.

  2. Set up your load balancer.

  3. Log in to the MSR web UI.

  4. Log in to MSR from the command line:

    docker login $FQDN
    
  5. Push an image to MSR using docker push.

Note

The default credentials for MSR are:

  • User name: admin

  • Password: password

See also

Check the Pods

If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the KUBECONFIG environment variable to the path of the unique config file for the cluster.

kubectl get pods

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
cert-manager-6bf59fc5c7-5wchj                     1/1     Running   0          23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k          1/1     Running   0          23m
cert-manager-webhook-6fcbbd87c9-7ftv7             1/1     Running   0          23m
msr-api-cfc88f8ff-8lh9n                           1/1     Running   4          18m
msr-enzi-api-77bf8558b9-p6q7x                     1/1     Running   1          18m
msr-enzi-worker-0                                 1/1     Running   3          18m
msr-garant-d84bbfccd-j94qc                        1/1     Running   4          18m
msr-jobrunner-default-54675dd9f4-cwnfg            1/1     Running   3          18m
msr-nginx-6d7c775dd9-nt48c                        1/1     Running   0          18m
msr-notary-server-64f9dd68fc-xzpp4                1/1     Running   4          18m
msr-notary-signer-5b6f7f6bd9-bcqwv                1/1     Running   3          18m
msr-registry-6b6c6b59d5-8bnsl                     1/1     Running   0          18m
msr-rethinkdb-cluster-0                           1/1     Running   0          18m
msr-rethinkdb-proxy-7fccc79db7-njrfl              1/1     Running   2          18m
msr-scanningstore-0                               1/1     Running   0          18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt   1/1     Running   0          19m
postgres-operator-54bb64998c-mjs6q                1/1     Running   0          22m

If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must have Running status. If this is not the case, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, configure a default StorageClass and then re-install MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following parameters in the custom resource manifest when you install MSR:

spec:
  registry:
    storage:
      persistentVolume:
        storageClass: '<my-storageclass>'
  postgresql:
    volume:
      storageClass: '<my-storageclass>'
  rethinkdb:
    cluster:
      persistentVolume:
        storageClass: '<my-storageclass>'

Note

The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

spec:
  registry:
    storage:
      backend: 'persistentVolume'
Add load balancer (AWS)

If you deploy MSR to AWS, you should consider adding a load balancer to your installation.

  1. Set an environment variable to use in assigning an internal service name to the load balancer service:

    export MSR_ELB_SERVICE="msr-public-elb"
    
  2. Use Kubernetes to create an AWS load balancer to expose NGINX, the front end for the MSR web UI:

    kubectl expose deployment msr-nginx --type=LoadBalancer \
      --name="${MSR_ELB_SERVICE}"
    
  3. Check the status:

    kubectl get svc | grep "${MSR_ELB_SERVICE}" | awk '{print $4}'
    

    Note

The output returned on AWS will be an FQDN, whereas other cloud providers may return an FQDN or an IP address.

    Example output:

    af42a8a8351864683b584833065b62c7-1127599283.us-west-2.elb.amazonaws.com
    

    Note

    • If nothing returns after you have run the command, wait a few minutes and run the command again.

    • If the command returns an FQDN it may be necessary to wait for the new DNS record to resolve. You can check the resolution status by running the following script, inserting the output string you received in place of $FQDN:

      while : ; do dig +short $FQDN ; sleep 5 ; done
      
    • If the command returns an IP address, you can access the load balancer at: https://<load-balancer-IP>/

  4. When one or more IP addresses display, you can interrupt the shell loop and access your MSR load balancer at: https://$FQDN/

    Note

    The load balancer will stop any attempt to tear down the VPC in which the EC2 instances are running. As such, in order to tear down the VPC you must first remove the load balancer:

    kubectl delete svc msr-public-elb
    
  5. Optional. Configure MSR to use Notary to sign images. To do this, update NGINX to add the DNS name:

    1. Modify your custom resource manifest to contain the following values:

      nginx:
        webtls:
          spec:
            dnsNames: ["nginx","localhost","${MSR_FQDN}"]
      
  6. Invoke the following command to run the webhook health check and apply the changes to the custom resource:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  7. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    

    To troubleshoot the reconciliation process, run the following commands:

    kubectl describe msrs.msr.mirantis.com
    

    Review the MSR Operator Pod logs for more detailed results:

    kubectl logs <msr-operator-pod-name>
    
Install offline using the MSR Operator

Herein, Mirantis provides step-by-step instruction on how to install MSR onto an air-gapped Kubernetes cluster using the MSR Operator.

For documentation purposes, Mirantis assumes that you are installing MSR on an offline Kubernetes cluster from an Internet-connected machine that has access to the Kubernetes cluster. In doing so, you will use Helm and the MSR Operator to perform the MSR installation from the Internet-connected machine.

Prepare your environment
  1. Confirm that the default StorageClass on your cluster supports dynamic volume provisioning. For more information, refer to the Kubernetes documentation Change the default StorageClass.

    If a default StorageClass is not set, you can specify a StorageClass to MSR by providing the following additional parameters to the custom resource manifest:

    spec:
      registry:
        storage:
          persistentVolume:
            storageClassName: '<my-storageclass>'
      postgresql:
        volume:
          storageClass: '<my-storageclass>'
      rethinkdb:
        cluster:
          persistentVolume:
            storageClass: '<my-storageclass>'
    

    The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

    spec:
      registry:
        storage:
          backend: 'persistentVolume'
    

    MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the ReadWriteMany access modes, depending on the purpose for which they are created. Thus the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the ReadWriteOnce and the ReadWriteMany access modes.

    The <release-name> PVC is created by default with the ReadWriteMany access mode. If you choose to install MSR with a persistentVolume back end, you can override this default access mode by adding the following parameter to the custom resource manifest:

    spec:
      registry:
        storage:
          persistentVolume:
            accessModes: ['<new-access-mode>']
    
  2. On the Internet-connected computer, configure your environment to use the kubeconfig of the offline Kubernetes cluster. You can do this by setting a KUBECONFIG environment variable.
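The kubeconfig step above can be as simple as exporting the KUBECONFIG variable; the path shown is an example, so substitute the location of your cluster's config file:

```shell
# Point kubectl at the offline cluster for all subsequent commands
# in this session (example path).
export KUBECONFIG="$HOME/offline-cluster/kubeconfig"
```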

See also

Kubernetes official documentation: Storage Classes

Set up a Docker registry

Prepare a Docker registry on the Internet-connected machine that contains all of the images that are necessary to install MSR. Kubernetes will pull the required images from this registry to the offline nodes during the installation of the prerequisites and MSR.

  1. On the Internet-connected machine, set up a Docker registry that the offline Kubernetes cluster can access using a private IP address. For more information, refer to Docker official documentation: Deploy a registry server.

  2. Add the msrofficial, postgres-operator, jetstack, and rethinkdb-operator Helm repositories:

    helm repo add msrofficial https://registry.mirantis.com/charts/msr/msr
    helm repo add postgres-operator https://opensource.zalando.com/postgres-operator/charts/postgres-operator
    helm repo add jetstack https://charts.jetstack.io
    helm repo add rethinkdb-operator https://registry.mirantis.com/charts/rethinkdb/rethinkdb-operator/
    helm repo update
    
  3. Obtain the names of all the images that are required for installing MSR from the desired version of the Helm charts for MSR, postgres-operator, cert-manager, and rethinkdb-operator. You can do this by templating each chart and grepping for image:, as follows:

    helm template msr msrofficial/msr \
    --version=<msr-chart-version> \
    --api-versions=acid.zalan.do/v1 \
    --api-versions=cert-manager.io/v1 | grep image:
    
    helm template postgres-operator postgres-operator/postgres-operator \
    --version 1.7.1 \
    --set configKubernetes.spilo_runasuser=101 \
    --set configKubernetes.spilo_runasgroup=103 \
    --set configKubernetes.spilo_fsgroup=103 | grep image:
    
    helm template cert-manager jetstack/cert-manager \
    --version 1.7.2 \
    --set installCRDs=true | grep image:
    
    helm template rethinkdb-operator rethinkdb-operator/rethinkdb-operator \
      --version 1.0.0 | grep image:
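If you save the templated output of each chart to a single file, the image list can be extracted mechanically. The following is a sketch that assumes the combined helm template output was written to a hypothetical chart-manifests.yaml file:

```shell
# Extract a deduplicated list of image references from the saved
# chart manifests; images.txt is an example output file name.
grep 'image:' chart-manifests.yaml \
  | awk '{print $2}' \
  | tr -d '"' \
  | sort -u > images.txt
```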
    
  4. Pull the images listed in the previous step.

  5. Tag each image, including its original namespace, in preparation for pushing the image to the Docker registry. For example:

    docker tag registry.mirantis.com/msr/msr-api:<msr-version> <registry-ip>/msr/msr-api:<msr-version>
    
  6. Push all the required images to the Docker registry. For example:

    docker push <registry-ip>/msr/msr-api:<msr-version>
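Steps 4 through 6 can be scripted when the image list is long. A sketch, assuming a hypothetical images.txt file that lists one fully qualified source image per line:

```shell
# Pull each source image, retag it for the private registry while
# keeping its original namespace and tag, then push it.
# <registry-ip> is a placeholder for your registry address.
REGISTRY='<registry-ip>'
while read -r src; do
  dst="${REGISTRY}/${src#*/}"   # swap only the registry host portion
  docker pull "$src"
  docker tag "$src" "$dst"
  docker push "$dst"
done < images.txt
```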
    
  7. Create the following YAML files, which you will reference to override the image repository information that is contained in the Helm charts used for MSR installation:

    • my_postgres_values.yaml:

      image:
        registry: <registry-ip>
      
      configGeneral:
        docker_image: <registry-ip>/acid/spilo-14:<version>
      
      configLogicalBackup:
        logical_backup_docker_image: <registry-ip>/acid/logical-backup:<version>
      
      configConnectionPooler:
        connection_pooler_image: <registry-ip>/acid/pgbouncer:<version>
      
    • my_certmanager_values.yaml:

      image:
        registry: <registry-ip>
        repository: jetstack/cert-manager-controller
      
      webhook:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-webhook
      
      cainjector:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-cainjector
      
      startupapicheck:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-ctl
      
    • my_rethinkdb_operator_values.yaml:

      controllerManager:
        kubeRbacProxy:
          image:
            repository: <registry-ip>/kubebuilder/kube-rbac-proxy
            tag: <tag>
        manager:
          image:
            repository: <registry-ip>/msr/rethinkdb-operator
            tag: <tag>
        
Prerequisites

You must have cert-manager, the Postgres Operator, and the RethinkDB Operator in place before you can use the offline method to install MSR.

Install cert-manager

Important

You must be running cert-manager 1.7.2 or later.

  1. Run the helm install command:

    helm install cert-manager jetstack/cert-manager \
    --version 1.7.2 \
    --set installCRDs=true \
    -f my_certmanager_values.yaml
    
  2. Verify that cert-manager is in the Running state:

    kubectl get pods
    

    If any of the cert-manager Pods are not in the Running state, run kubectl describe on each Pod:

    kubectl describe pod <cert-manager-pod-name>
    

    Note

    To troubleshoot the issues that present in the kubectl describe command output, refer to Troubleshooting in the official cert-manager documentation.

Install Postgres Operator

Important

You must be running Postgres Operator 1.9.0 or later. 1

  1. Run the helm install command with spilo_* parameters:

    helm install postgres-operator postgres-operator/postgres-operator \
    --version <version> \
    --set configKubernetes.spilo_runasuser=101 \
    --set configKubernetes.spilo_runasgroup=103 \
    --set configKubernetes.spilo_fsgroup=103 \
    -f my_postgres_values.yaml
    
  2. Verify that Postgres Operator is in the Running state:

    kubectl get pods
    

    To troubleshoot a failing Postgres Operator Pod, run the following command:

    kubectl describe pod <postgres-operator-pod-name>
    

    Review the Pod logs for more detailed results:

    kubectl logs <postgres-operator-pod-name>
    

Note

By default, MSR uses the persistent volume claims detailed in Volumes.

If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can add the following to the MSR custom resource manifest to provide the new instance with the name of the associated PersistentVolumeClaim:

spec:
  registry:
    storage:
      backend: 'persistentVolume'
      persistentVolume:
        existingClaim: '<pre-existing-msr-pvc>'

Be aware that this setting indicates the <release-name> PVC referred to in Volumes.

Install RethinkDB Operator
  1. Run the helm install command:

    helm install rethinkdb-operator rethinkdb-operator/rethinkdb-operator \
      --version 1.0.0 \
      -f my_rethinkdb_operator_values.yaml
    
  2. Verify that RethinkDB Operator is in the Running state:

    kubectl get pods
    

    The RethinkDB Operator Pod name begins with rethinkdb-operator-controller-manager.

    To troubleshoot a failing RethinkDB Operator Pod, run the following command:

    kubectl describe pod <rethinkdb-operator-pod-name>
    

    Review the Pod logs for more detailed results:

    kubectl logs <rethinkdb-operator-pod-name>
    
Install MSR Operator
  1. Download the msr-operator YAML file by clicking msr-operator.yaml.

  2. Update the msr-operator.yaml file to include references to the required images in the offline registry:

    1. Identify the kube-rbac-proxy image reference in the msr-operator.yaml file:

      grep -n 'kube-rbac-proxy:' msr-operator.yaml
      
    2. Edit the line so that it refers to the correct image:

      image: <registry-ip>/kubebuilder/kube-rbac-proxy:v0.13.0
      
    3. Identify the msr-operator image reference in the msr-operator.yaml file:

      grep -n 'msr-operator:' msr-operator.yaml
      
    4. Edit the line to refer to the correct image:

      image: <registry-ip>/msr/msr-operator:1.0.0
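The two edits above can also be made non-interactively. The following sed sketch assumes GNU sed and that each target line has the form image: <repository>/<name>:<tag>; verify the resulting lines afterward:

```shell
# Back up the manifest, then rewrite both image references to point
# at the private registry. <registry-ip> is a placeholder.
cp msr-operator.yaml msr-operator.yaml.bak
sed -i \
  -e 's|image: .*/kube-rbac-proxy:|image: <registry-ip>/kubebuilder/kube-rbac-proxy:|' \
  -e 's|image: .*/msr-operator:|image: <registry-ip>/msr/msr-operator:|' \
  msr-operator.yaml

# Confirm the substitutions.
grep -n 'image:' msr-operator.yaml
```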
      
  3. Install the MSR Operator:

    kubectl apply --server-side=true -f msr-operator.yaml
    
  4. Verify that the MSR Operator Pod is in the Running state:

    kubectl get pods
    

    The MSR Operator Pod name begins with msr-operator-controller-manager.

    To troubleshoot a failing MSR Operator Pod, run the following command:

    kubectl describe pod <msr-operator-pod-name>
    

    Review the Pod logs for more detailed results:

    kubectl logs <msr-operator-pod-name>
    
1

Postgres Operator up through 1.8.2 uses the PodDisruptionBudget policy/v1beta1 Kubernetes API, which is no longer served as of Kubernetes 1.25. As such, various features of MSR may not function properly if Postgres Operator 1.8.2 or earlier is installed alongside MSR on Kube v1.25 or later.

Install MSR

After installing the prerequisites, you can deploy MSR by editing and applying the custom resource manifest, downloadable herein.

Following MSR installation, you can make changes to the MSR CustomResource (CR) by using kubectl to edit the custom resource manifest.

To install MSR:

  1. Download the cr-sample-manifest YAML file by clicking cr-sample-manifest.yaml.

  2. Edit the cr-sample-manifest.yaml to include a reference to the offline registry:

    spec:
      image:
        registry: <registry-ip>
    
  3. Make further edits to the cr-sample-manifest.yaml file as needed. Fields that are present in the manifest but left blank receive their default values, whereas fields that are absent from the manifest receive an empty value.

  4. Invoke the following command to run the webhook health check and create the custom resources:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  5. Verify completion of the reconciliation process:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

    To troubleshoot the reconciliation process, run the following commands:

    kubectl describe msrs.msr.mirantis.com
    kubectl describe rethinkdbs.rethinkdb.com
    

    Review the MSR Operator Pod logs for more detailed information:

    kubectl logs <msr-operator-pod-name>
    

To verify the success of your MSR installation:

  1. Verify that all msr-* Pods are in the running state.

  2. Log into the MSR web UI.

  3. Log into MSR from the command line:

    docker login <private-ip>
    
  4. Push an image to MSR using docker push.

    Note

    The default credentials for MSR are:

    • User name: admin

    • Password: password

  5. Optional. Disable outgoing connections in the MSR web UI Admin Settings.

    MSR offers outgoing connections for the following tasks:

    • Analytics reporting

    • New version notifications

    • Online license verification

    • Vulnerability scanning database updates

Check the Pods

If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the KUBECONFIG environment variable to the path of the unique config file for the cluster.

kubectl get pods

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
cert-manager-6bf59fc5c7-5wchj                     1/1     Running   0          23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k          1/1     Running   0          23m
cert-manager-webhook-6fcbbd87c9-7ftv7             1/1     Running   0          23m
msr-api-cfc88f8ff-8lh9n                           1/1     Running   4          18m
msr-enzi-api-77bf8558b9-p6q7x                     1/1     Running   1          18m
msr-enzi-worker-0                                 1/1     Running   3          18m
msr-garant-d84bbfccd-j94qc                        1/1     Running   4          18m
msr-jobrunner-default-54675dd9f4-cwnfg            1/1     Running   3          18m
msr-nginx-6d7c775dd9-nt48c                        1/1     Running   0          18m
msr-notary-server-64f9dd68fc-xzpp4                1/1     Running   4          18m
msr-notary-signer-5b6f7f6bd9-bcqwv                1/1     Running   3          18m
msr-registry-6b6c6b59d5-8bnsl                     1/1     Running   0          18m
msr-rethinkdb-cluster-0                           1/1     Running   0          18m
msr-rethinkdb-proxy-7fccc79db7-njrfl              1/1     Running   2          18m
msr-scanningstore-0                               1/1     Running   0          18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt   1/1     Running   0          19m
postgres-operator-54bb64998c-mjs6q                1/1     Running   0          22m
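To spot problem Pods at a glance, you can filter the kubectl get pods output for anything not in the Running state (a convenience sketch; the awk filter checks the STATUS column and skips the header row):

```shell
# List only the Pods whose STATUS column is not "Running".
kubectl get pods | awk 'NR > 1 && $3 != "Running"'
```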

If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must be in the Running state. If it is not, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, configure a default StorageClass and then reinstall MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following settings in the custom resource manifest when you install MSR:

spec:
  registry:
    storage:
      persistentVolume:
        storageClass: '<my-storageclass>'
  postgresql:
    volume:
      storageClass: '<my-storageclass>'
  rethinkdb:
    cluster:
      persistentVolume:
        storageClass: '<my-storageclass>'

Note

The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

spec:
  registry:
    storage:
      backend: 'persistentVolume'

Install MSR using a Helm chart

Contained herein are the legacy instructions for installing MSR using a Helm chart in either an online or an air-gapped Kubernetes environment.

Install MSR online using a Helm chart

Herein, Mirantis provides step-by-step instruction on how to install MSR onto an Internet-connected Kubernetes cluster using a Helm chart.

Prerequisites

You must have the following key components in place before you can install MSR online using a Helm chart: a Kubernetes platform, cert-manager, and the Postgres Operator.

Prepare your Kubernetes environment
  1. Install and configure your Kubernetes distribution.

  2. Ensure that the default StorageClass on your cluster supports the dynamic provisioning of volumes. If necessary, refer to the Kubernetes documentation Change the default StorageClass.

    If no default StorageClass is set, you can specify a StorageClass for MSR to use by providing the following additional parameters to MSR when running the helm install command:

    --set registry.storage.persistentVolume.storageClass=<my-storageclass>
    --set postgresql.volume.storageClass=<my-storageclass>
    --set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>
    

    The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

    --set registry.storage.backend=persistentVolume
    

    MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the ReadWriteMany access modes, depending on the purpose for which they are created. Thus the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the ReadWriteOnce and ReadWriteMany access modes.

    The <release-name> PVC is created by default with the ReadWriteMany access mode. If you choose to install MSR with a persistentVolume back end, you can override this default access mode with the following parameter when running the helm install command:

    --set registry.storage.persistentVolume.accessMode=<new-access-mode>
    
Install cert-manager

Important

The cert-manager version must be 1.7.2 or later.

  1. Run the following helm install command:

    helm repo add jetstack https://charts.jetstack.io
    
    helm repo update
    
    helm install cert-manager jetstack/cert-manager \
       --version 1.7.2 \
       --set installCRDs=true
    
  2. Verify that cert-manager is in the Running state:

    kubectl get pods
    

    If any of the cert-manager Pods are not in the Running state, run kubectl describe on each Pod:

    kubectl describe pod <cert-manager-pod-name>
    

    Note

    To troubleshoot the issues that present in the kubectl describe command output, refer to Troubleshooting in the official cert-manager documentation.

Install Postgres Operator

Important

The Postgres Operator version you install must be 1.9.0 or later, as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1 Kubernetes API, which is no longer served as of Kubernetes 1.25. This being the case, various MSR features may not function properly if a Postgres Operator prior to 1.9.0 is installed alongside MSR on Kubernetes 1.25 or later.

  1. Run the following helm install command, including spilo_* parameters:

    helm repo add postgres-operator \
      https://opensource.zalando.com/postgres-operator/charts/postgres-operator/
    
    helm repo update
    
    helm install postgres-operator postgres-operator/postgres-operator \
      --version <version> \
      --set configKubernetes.spilo_runasuser=101 \
      --set configKubernetes.spilo_runasgroup=103 \
      --set configKubernetes.spilo_fsgroup=103
    
  2. Verify that Postgres Operator is in the Running state:

    kubectl get pods
    

    To troubleshoot a failing Postgres Operator Pod, run the following command:

    kubectl describe pod <postgres-operator-pod-name>
    

    Review the Pod logs for more detailed results:

    kubectl logs <postgres-operator-pod-name>
    

Note

By default, MSR uses the persistent volume claims detailed in Volumes.

If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can use Helm to provide the new instance with the name of the associated PersistentVolumeClaim:

--set registry.storage.persistentVolume.existingClaim=<pre-existing-msr-pvc>

This setting indicates the <release-name> PVC referred to in Volumes.

Run install command
  1. Use a Helm chart to install MSR:

    helm repo add msrofficial https://registry.mirantis.com/charts/msr/msr
    
    helm repo update
    
    helm install msr msrofficial/msr \
      --version <helm-chart-version> \
      --set-file license=path/to/file/license.lic
    

    Note

    If the installation fails and MSR Pods continue to run in your cluster, it is likely that MSR failed to complete the initialization process, and thus you must reinstall MSR. To delete the Pods and completely uninstall MSR:

    1. Delete any running msr-initialize Pods:

      kubectl delete job msr-initialize
      
    2. Delete any remaining Pods:

      helm uninstall msr
      
  2. Verify the success of your MSR installation.

    1. Verify that all msr-* Pods are in the running state. For more detail, refer to check-the-pods-online-helm.

    2. Set up your load balancer.

    3. Log into the MSR web UI.

    4. Log into MSR from the command line:

      docker login $FQDN
      
    5. Push an image to MSR using docker push.

    Note

    The default credentials for MSR are:

    • User name: admin

    • Password: password

    Be aware that the Helm chart values also include the default MSR credentials information. As such, Mirantis strongly recommends that you change the credentials immediately following installation.

See also

Check the Pods

If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the KUBECONFIG environment variable to the path of the unique config file for the cluster.

kubectl get pods

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
cert-manager-6bf59fc5c7-5wchj                     1/1     Running   0          23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k          1/1     Running   0          23m
cert-manager-webhook-6fcbbd87c9-7ftv7             1/1     Running   0          23m
msr-api-cfc88f8ff-8lh9n                           1/1     Running   4          18m
msr-enzi-api-77bf8558b9-p6q7x                     1/1     Running   1          18m
msr-enzi-worker-0                                 1/1     Running   3          18m
msr-garant-d84bbfccd-j94qc                        1/1     Running   4          18m
msr-jobrunner-default-54675dd9f4-cwnfg            1/1     Running   3          18m
msr-nginx-6d7c775dd9-nt48c                        1/1     Running   0          18m
msr-notary-server-64f9dd68fc-xzpp4                1/1     Running   4          18m
msr-notary-signer-5b6f7f6bd9-bcqwv                1/1     Running   3          18m
msr-registry-6b6c6b59d5-8bnsl                     1/1     Running   0          18m
msr-rethinkdb-cluster-0                           1/1     Running   0          18m
msr-rethinkdb-proxy-7fccc79db7-njrfl              1/1     Running   2          18m
msr-scanningstore-0                               1/1     Running   0          18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt   1/1     Running   0          19m
postgres-operator-54bb64998c-mjs6q                1/1     Running   0          22m

If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must be in the Running state. If it is not, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, configure a default StorageClass and then reinstall MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following parameters when using Helm to install MSR:

--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>

Note

The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

--set registry.storage.backend=persistentVolume
Add load balancer (AWS)

If you deploy MSR to AWS, consider adding a load balancer to your installation.

  1. Set an environment variable to use in assigning an internal service name to the load balancer service:

    export MSR_ELB_SERVICE="msr-public-elb"
    
  2. Use Kubernetes to create an AWS load balancer to expose NGINX, the front end for the MSR web UI:

    kubectl expose deployment msr-nginx --type=LoadBalancer \
      --name="${MSR_ELB_SERVICE}"
    
  3. Check the status:

    kubectl get svc | grep "${MSR_ELB_SERVICE}" | awk '{print $4}'
    

    Note

    The output returned on AWS will be an FQDN, whereas other cloud providers may return an FQDN or an IP address.

    Example output:

    af42a8a8351864683b584833065b62c7-1127599283.us-west-2.elb.amazonaws.com
    

    Note

    • If nothing returns after you have run the command, wait a few minutes and run the command again.

    • If the command returns an FQDN it may be necessary to wait for the new DNS record to resolve. You can check the resolution status by running the following script, inserting the output string you received in place of $FQDN:

      while : ; do dig +short $FQDN ; sleep 5 ; done
      
    • If the command returns an IP address, you can access the load balancer at: https://<load-balancer-IP>/

  4. When one or more IP addresses display, you can interrupt the shell loop and access your MSR 3.0.x load balancer at: https://$FQDN/

    Note

    The load balancer blocks any attempt to tear down the VPC in which the EC2 instances are running. To tear down the VPC, you must first remove the load balancer:

    kubectl delete svc msr-public-elb
    
  5. Optional. Configure MSR to use Notary to sign images. To do this, update NGINX to add the DNS name:

    1. Set the MSR_FQDN environment variable to the load balancer FQDN, substitute the Helm chart version, such as 1.0.0, for <MSR-chart-version>, and run:

      helm upgrade msr msrofficial/msr \
        --version <MSR-chart-version> \
        --set-file license=path/to/file/license.lic \
        --set nginx.webtls.spec.dnsNames="{nginx,localhost,${MSR_FQDN}}" \
        --reuse-values
      
    2. Verify the upgrade change:

      helm get values msr
      

      Example output:

      USER-SUPPLIED VALUES:
      license: |
        e3ded81fe8de30b857fe1de1d1f6968bcb8b5b1078021a88839ad3b3c9e1a77a94fa7987bd2591c8dd59ad8bae4ce0719a67d9756561b7c67c12ee42b1c505bf596e4224abb792a00bfbdf4c9fc32ea727f82f8f6250720bb634b082162842797e87ad3bfbf6f408dae41e81a862cd73a3d2729dc81365900e293b4724231b2c6f0fc6c2e83ee32d1eb0107ca9afa42a4f5b20ac5c6b538a551d8f380f6a89d9746fc7405d5ba96738c1365a6b91b2c0572225b8a5d39e4b6956c48bf9b07068248762c71987999dfc8c1e4432e39fd20f52b6d9ddf4839ea5c5e0164acb3956c01da4dd3f5499deed204dff40323445b87196a11e3ee966f238e32b414fe8e5b1881859e3fadc8394826882fb3e39f6c4d2369e5b9161b9495455c4587dbec33d197accf9f5c1032be5ed32a776f091e1935fd0fecdf7010caa8cf3034b15d46247146cc5917843e771
      nginx:
        webtls:
          spec:
            dnsNames:
            - nginx
            - localhost
            - af42a8a8351864683b584833065b62c7-1127599283.us-west-2.elb.amazonaws.com
      
Install MSR offline using a Helm chart

Herein, Mirantis provides step-by-step instruction on how to install MSR onto an air-gapped Kubernetes cluster using a Helm chart.

For documentation purposes, Mirantis assumes that you are installing MSR on an offline Kubernetes cluster from an Internet-connected machine that has access to the Kubernetes cluster. In doing so, you will use Helm to perform the MSR installation from the Internet-connected machine.

Prepare your environment
  1. Confirm that the default StorageClass on your cluster supports dynamic volume provisioning. For more information, refer to the Kubernetes documentation Change the default StorageClass.

    If a default StorageClass is not set, you can specify a StorageClass for MSR to use by providing the following additional parameters when running the helm install command:

    --set registry.storage.persistentVolume.storageClass=<my-storageclass>
    --set postgresql.volume.storageClass=<my-storageclass>
    --set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>
    

    The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

    --set registry.storage.backend=persistentVolume
    

    MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the ReadWriteMany access modes, depending on the purpose for which they are created. Thus the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the ReadWriteOnce and the ReadWriteMany access modes.

    The <release-name> PVC is created by default with the ReadWriteMany access mode. If you choose to install MSR with a persistentVolume back end, you can override this default access mode with the following parameter when running the helm install command:

    --set registry.storage.persistentVolume.accessMode=<new-access-mode>
    
  2. On the Internet-connected computer, configure your environment to use the kubeconfig of the offline Kubernetes cluster. You can do this by setting a KUBECONFIG environment variable.

See also

Kubernetes official documentation: Storage Classes

Set up a Docker registry

Prepare a Docker registry on the Internet-connected machine that contains all of the images that are necessary to install MSR. Kubernetes will pull the required images from this registry to the offline nodes during the installation of the prerequisites and MSR.

  1. On the Internet-connected machine, set up a Docker registry that the offline Kubernetes cluster can access using a private IP address. For more information, refer to Docker official documentation: Deploy a registry server.

  2. Add the msrofficial, postgres-operator, and jetstack Helm repositories:

    helm repo add msrofficial https://registry.mirantis.com/charts/msr/msr
    helm repo add postgres-operator https://opensource.zalando.com/postgres-operator/charts/postgres-operator
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
  3. Obtain the names of all the images that are required to install MSR from the desired versions of the MSR, postgres-operator, and cert-manager Helm charts. You can do this by templating each chart and grepping for image::

    helm template msr msrofficial/msr \
    --version=<msr-chart-version> \
    --api-versions=acid.zalan.do/v1 \
    --api-versions=cert-manager.io/v1 | grep image:
    
    helm template postgres-operator postgres-operator/postgres-operator \
    --version 1.7.1 \
    --set configKubernetes.spilo_runasuser=101 \
    --set configKubernetes.spilo_runasgroup=103 \
    --set configKubernetes.spilo_fsgroup=103 | grep image:
    
    helm template cert-manager jetstack/cert-manager \
    --version 1.7.2 \
    --set installCRDs=true | grep image:
    
  4. Pull the images listed in the previous step.

  5. Tag each image, including its original namespace, in preparation for pushing the image to the Docker registry. For example:

    docker tag registry.mirantis.com/msr/msr-api:<msr-version> <registry-ip>/msr/msr-api:<msr-version>
    
  6. Push all the required images to the Docker registry. For example:

    docker push <registry-ip>/msr/msr-api:<msr-version>
    
  7. Create the following YAML files, which you will reference to override the image repository information that is contained in the Helm charts used for MSR installation:

    • my_msr_values.yaml:

      imageRegistry: <registry-ip>
      
      enzi:
        image:
          registry: <registry-ip>
      
      rethinkdb:
        image:
          registry: <registry-ip>
      
    • my_postgres_values.yaml:

      image:
        registry: <registry-ip>
      
      configGeneral:
        docker_image: <registry-ip>/acid/spilo-14:<version>
      
      configLogicalBackup:
        logical_backup_docker_image: <registry-ip>/acid/logical-backup:<version>
      
      configConnectionPooler:
        connection_pooler_image: <registry-ip>/acid/pgbouncer:<version>
      
    • my_certmanager_values.yaml:

      image:
        registry: <registry-ip>
        repository: jetstack/cert-manager-controller
      
      webhook:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-webhook
      
      cainjector:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-cainjector
      
      startupapicheck:
        image:
          registry: <registry-ip>
          repository: jetstack/cert-manager-ctl
      
Prerequisites

You must have cert-manager and the Postgres Operator in place before you can install MSR using the offline method.

Install cert-manager

Important

The cert-manager version must be 1.7.2 or later.

  1. Run the following helm install command:

    helm install cert-manager jetstack/cert-manager \
    --version 1.7.2 \
    --set installCRDs=true \
    -f my_certmanager_values.yaml
    
  2. Verify that cert-manager is in the Running state:

    kubectl get pods
    

    If any of the cert-manager Pods are not in the Running state, run kubectl describe on each Pod:

    kubectl describe pod <cert-manager-pod-name>
    

    Note

    To troubleshoot the issues that present in the kubectl describe command output, refer to Troubleshooting in the official cert-manager documentation.

Install Postgres Operator

Important

The Postgres Operator version you install must be 1.9.0 or later, as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1 Kubernetes API, which is no longer served as of Kubernetes 1.25. This being the case, various MSR features may not function properly if a Postgres Operator prior to 1.9.0 is installed alongside MSR on Kubernetes 1.25 or later.

  1. Run the following helm install command, including spilo_* parameters:

    helm install postgres-operator postgres-operator/postgres-operator \
    --version <version> \
    --set configKubernetes.spilo_runasuser=101 \
    --set configKubernetes.spilo_runasgroup=103 \
    --set configKubernetes.spilo_fsgroup=103 \
    -f my_postgres_values.yaml
    
  2. Verify that Postgres Operator is in the Running state:

    kubectl get pods
    

    To troubleshoot a failing Postgres Operator Pod, run the following command:

    kubectl describe pod <postgres-operator-pod-name>
    

    Review the Pod logs for more detailed results:

    kubectl logs <postgres-operator-pod-name>
    

Note

By default, MSR uses the persistent volume claims detailed in Volumes.

If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can use Helm to provide the new instance with the name of the associated PersistentVolumeClaim:

--set registry.storage.persistentVolume.existingClaim=<pre-existing-msr-pvc>

This setting indicates the <release-name> PVC referred to in Volumes.

Run install command
  1. Use a Helm chart to install MSR:

    helm install msr msrofficial/msr \
    --version <helm-chart-version> \
    --set-file license=path/to/file/license.lic \
    -f my_msr_values.yaml
    

    Note

    If the installation fails and MSR Pods continue to run in your cluster, it is likely that MSR failed to complete the initialization process, and thus you must reinstall MSR. To delete the Pods and completely uninstall MSR:

    1. Delete any running msr-initialize Pods:

      kubectl delete job msr-initialize
      
    2. Delete any remaining Pods:

      helm uninstall msr
      
  2. Verify the success of your MSR installation.

    1. Verify that all msr-* Pods are in the running state. For more detail, refer to check-the-pods-offline-helm.

    2. Log into the MSR web UI.

    3. Log into MSR from the command line:

      docker login <private-ip>
      
    4. Push an image to MSR using docker push.

    Note

    The default credentials for MSR are:

    • User name: admin

    • Password: password

    Be aware that the Helm chart values also include the default MSR credentials information. As such, Mirantis strongly recommends that you change the credentials immediately following installation.

  3. Optional. Disable outgoing connections in the MSR web UI Admin Settings. MSR offers outgoing connections for the following tasks:

    • Analytics reporting

    • New version notifications

    • Online license verification

    • Vulnerability scanning database updates

See also

Helm official documentation: Helm Install

Check the Pods

If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the KUBECONFIG environment variable to the path of the unique config file for the cluster.

kubectl get pods

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
cert-manager-6bf59fc5c7-5wchj                     1/1     Running   0          23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k          1/1     Running   0          23m
cert-manager-webhook-6fcbbd87c9-7ftv7             1/1     Running   0          23m
msr-api-cfc88f8ff-8lh9n                           1/1     Running   4          18m
msr-enzi-api-77bf8558b9-p6q7x                     1/1     Running   1          18m
msr-enzi-worker-0                                 1/1     Running   3          18m
msr-garant-d84bbfccd-j94qc                        1/1     Running   4          18m
msr-jobrunner-default-54675dd9f4-cwnfg            1/1     Running   3          18m
msr-nginx-6d7c775dd9-nt48c                        1/1     Running   0          18m
msr-notary-server-64f9dd68fc-xzpp4                1/1     Running   4          18m
msr-notary-signer-5b6f7f6bd9-bcqwv                1/1     Running   3          18m
msr-registry-6b6c6b59d5-8bnsl                     1/1     Running   0          18m
msr-rethinkdb-cluster-0                           1/1     Running   0          18m
msr-rethinkdb-proxy-7fccc79db7-njrfl              1/1     Running   2          18m
msr-scanningstore-0                               1/1     Running   0          18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt   1/1     Running   0          19m
postgres-operator-54bb64998c-mjs6q                1/1     Running   0          22m
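
As a quick check, the STATUS column of this output can be filtered for Pods that are not yet healthy. The following helper is an illustration only, not part of the MSR tooling; it assumes the default kubectl get pods column layout, in which STATUS is the third column:

```shell
# Print the names of Pods whose STATUS is anything other than Running.
# Reads `kubectl get pods` output on stdin and skips the header row.
not_running() {
    awk 'NR > 1 && $3 != "Running" { print $1 }'
}
```

For example, kubectl get pods | not_running prints nothing once every Pod has reached the Running state.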

If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must have Running status. If it does not, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, configure a default StorageClass and then re-install MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following parameters when using Helm to install MSR:

--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>

Note

The first of these three parameters is only applicable when you install MSR with a persistentVolume back end, the default setting:

--set registry.storage.backend=persistentVolume
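
The same three selections can be expressed in the Helm values file rather than on the command line. A hypothetical my_msr_values.yaml fragment, with the nesting inferred from the --set flag paths above:

```yaml
# Pin all three components to a single StorageClass (placeholder name).
registry:
  storage:
    backend: "persistentVolume"    # storageClass applies to this back end only
    persistentVolume:
      storageClass: "my-storageclass"
postgresql:
  volume:
    storageClass: "my-storageclass"
rethinkdb:
  cluster:
    persistentVolume:
      storageClass: "my-storageclass"
```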

Install on Swarm

Available since MSR 3.1.0

In this section of the Mirantis documentation, we provide comprehensive information on how to install MSR on a Swarm-orchestrated cluster.

Install MSR online

The following procedure guides you through the installation of MSR onto a Swarm cluster that has one manager and one worker node, with MSR installed on the worker. You can, however, adjust the number of nodes to fit your specific needs.

Important

  • You must install MSR on an odd number of nodes.

  • Mirantis recommends that you install MSR on worker nodes only.

  1. If you have not done so, create the swarm where MSR will run.

  2. SSH into the manager node.

  3. Generate the values.yml file that you will use to configure and deploy MSR:

    docker run -it --rm \
    --entrypoint cat registry.mirantis.com/msr/msr-installer:<msr-version> \
    /config/values.yml > values.yml
    
  4. Edit the values.yml file to customize your MSR deployment. Be sure to place your license in the license section.

    license: '<license-string>'
    
  5. Obtain a list of non-manager nodes along with their node IDs:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  6. In the swarm.nodeList section of the values.yml file, add the node IDs of the worker nodes on which you plan to install MSR:

    swarm:
      nodeList:
        - <node-id-1>
        - <node-id-2>
        - <node-id-3>
    
  7. Execute the following command to install MSR:

    docker run \
      --rm \
      -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      install \
      --https_port 8443 \
      --http_port 8888
    

    Note

    • If you do not specify any worker nodes on which to install MSR, the software is installed only on the manager node on which msr-installer runs.

    • You must mount the values file into the container at /config/values.yml. Mounting it to any other path causes the container deployment to fail, which can render the cluster inoperable.

    • To switch the log-level from the default info to debug, you need to insert the --log-level debug flag between the msr-installer image and the install subcommand.

    • The example above specifies port 8443 to illustrate a scenario in which MKE and MSR are both in use and would otherwise conflict over port 443. In all other installation configurations, use port 443.

  8. Review the status of the deployed services:

    docker stack services msr
    
  9. Access the MSR web UI at https://<node-ip>:<https-port>, using the HTTPS port specified at installation (8443 in the example above). The default user name and password are admin:password.
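
Taken together, steps 4 and 6 of this procedure produce a values.yml file that minimally resembles the following sketch (the license string and node IDs are placeholders):

```yaml
license: '<license-string>'
swarm:
  nodeList:
    - <node-id-1>
    - <node-id-2>
    - <node-id-3>
```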

Install MSR offline

The example that follows assumes that you are installing MSR on an offline Swarm cluster from an Internet-connected machine that has access to the Swarm cluster through private IP addresses.

Important

  • You must install MSR on an odd number of nodes.

  • Mirantis recommends that you install MSR on worker nodes only.

  1. Run the following shell script from the Internet-connected machine:

    #!/bin/sh
    
    TAG="<msr-version>"
    REGISTRY="registry.mirantis.com/msr"
    RETHINK_TAG="2.4.3-mirantis-0.1.0"
    ENZI_TAG="1.0.85"
    FILE="msr-${TAG}.tar.gz"
    
    IMAGES="$REGISTRY/msr-garant:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-installer:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-notary-signer:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-registry:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-nginx:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-api:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-notary-server:$TAG"
    IMAGES="$IMAGES $REGISTRY/msr-jobrunner:$TAG"
    IMAGES="$IMAGES $REGISTRY/enzi:$ENZI_TAG"
    IMAGES="$IMAGES registry.opensource.zalan.do/acid/spilo-14:2.1-p3"
    IMAGES="$IMAGES registry.mirantis.com/rethinkdb/rethinkdb:$RETHINK_TAG"
    
    echo "Pulling images..."
    for NAME in ${IMAGES}; do
        docker image pull ${NAME};
    done
    
    echo "Saving images..."
    docker image save $IMAGES -o $FILE
    echo "Images saved. To load use docker image load -i $FILE"
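
Before copying the archive to the offline hosts, it can be worth confirming that the save step succeeded. The helper below is a sketch, not part of the MSR tooling; it relies on the fact that every archive written by docker image save contains a top-level manifest.json, which docker image load expects:

```shell
# Sanity-check an archive produced by `docker image save -o`.
check_archive() {
    if tar -tf "$1" 2>/dev/null | grep -q 'manifest.json'; then
        echo "OK: $1 looks like a docker image archive"
    else
        echo "ERROR: $1 is missing or unreadable" >&2
        return 1
    fi
}
```

For example, run check_archive msr-<msr-version>.tar.gz before proceeding to the scp step.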
    
  2. Copy the msr-<msr-version>.tar.gz file to each offline host machine on which you will install MSR:

    scp msr-<msr-version>.tar.gz <user-name>@<host-ip-address>:</path/to/destination>
    
  3. From each offline host machine on which you will install MSR, including the manager node, load the MSR images from the msr-<msr-version>.tar.gz file:

    ssh <user-name>@<host-ip-address> 'docker load -i msr-<msr-version>.tar.gz'
    
  4. SSH into the manager node.

  5. Generate the values.yml file that you will use to configure and deploy MSR:

    docker run -it --rm \
    --entrypoint cat registry.mirantis.com/msr/msr-installer:<msr-version> \
    /config/values.yml > values.yml
    
  6. Edit the values.yml file to customize your MSR deployment. Be sure to place your license in the license section:

    license: '<license-string>'
    
  7. Obtain a list of non-manager nodes along with their node IDs:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  8. In the swarm.nodeList section of the values.yml file, add the node IDs of the worker nodes on which you plan to install MSR:

    swarm:
      nodeList:
        - <node-id-1>
        - <node-id-2>
        - <node-id-3>
    
  9. Execute the following command to install MSR:

    docker run \
      --rm \
      -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      install \
      --https_port 8443 \
      --http_port 8888
    

    Note

    If you do not specify any worker nodes on which to install MSR, the software is installed only on the manager node on which msr-installer runs.

  10. Review the status of the deployed services. Be aware that this may take up to two minutes.

    docker stack services msr
    
  11. Access the MSR web UI at https://<node-ip>:<https-port>, using the HTTPS port specified at installation (8443 in the example above). The default user name and password are admin:password.

  12. Optional. Disable outgoing connections in the MSR web UI Admin Settings. MSR offers outgoing connections for the following tasks:

    • Analytics reporting

    • New version notifications

    • Online license verification

    • Vulnerability scanning database updates
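
Steps 7 and 8 of the procedure above can be chained together. The helper below is an illustration only: it reads node IDs on stdin, one per line, and renders them as the swarm.nodeList fragment, ready to paste into your values file:

```shell
# Render node IDs (stdin, one per line) as a swarm.nodeList YAML fragment.
to_nodelist() {
    printf 'swarm:\n  nodeList:\n'
    awk 'NF { print "    - " $1 }'
}
```

For example: docker node ls --format "{{ .ID }}" --filter "role=worker" | to_nodelist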

Obtain the MSR license

After you install MSR, download your new MSR license and apply it to your deployment.

Warning

Users are not authorized to run MSR without a valid license. For more information, refer to Mirantis Agreements and Terms.

Download your MSR license

Note

If you do not have the CloudCare Portal welcome email, contact your designated administrator.

  1. Log in to the Mirantis CloudCare Portal.

  2. In the top navigation bar, click Environments.

  3. Click the Environment Name associated with the license you want to download.

  4. Scroll down to Licenses and click the License File URL. A new tab opens in your browser.

  5. Click View file to download your license file.

Update your license settings

The procedure for updating your MSR license differs, depending on whether you are deploying the software with Kubernetes or Swarm.

Kubernetes deployments
  1. Insert the contents of your MSR license in the license field of your custom resource manifest:

    spec:
      license: '<license-string>'
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

Alternatively, to apply your MSR license to an unlicensed MSR instance using Helm:

helm upgrade msr msr --repo https://registry.mirantis.com/charts/msr/msr \
--version 1.0.0 \
--set-file license=path/to/file/license.lic

Swarm deployments
  1. SSH into a manager node on the Swarm cluster on which MSR is running.

  2. Insert your license information into the license section of your values.yaml file:

    license: '<license-string>'
    
  3. Obtain a list of non-manager nodes along with their node IDs, noting the IDs of the nodes on which MSR is installed:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  4. Upgrade MSR, specifying a node ID for each node on which MSR is installed:

    docker run \
      --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<new-msr-version> \
      upgrade \
      --node <node-id>
    
  5. Review the status of the deployed services:

    docker stack services msr
    

Uninstall MSR

The method used to uninstall MSR differs based on the orchestrator employed to manage your MSR instance.

Kubernetes deployments

To prevent data loss, uninstalling MSR does not delete persistent volumes (PVs) or certificate secrets.

To uninstall MSR using the MSR Operator:

  1. Run the following command to uninstall MSR:

    kubectl delete --ignore-not-found=true -f msr-operator.yaml
    
  2. List the persistent volume claims (PVCs):

    kubectl get pvc
    
  3. Delete the PVCs:

    kubectl delete pvc <pvcs>
    

    Note

    The spec.PersistentVolumeClaimRetentionPolicy field in the custom resource manifest differs from the PersistentVolume Reclaim policy in Kubernetes. The MSR Operator PersistentVolumeClaim Retention policy can accept either of the following values:

    • retain: When the MSR custom resource is deleted, the PVCs used by MSR are retained (default).

    • delete: Deleting the MSR custom resource results in the automatic deletion of the PVCs used by MSR.

    For more information on deleting and retaining PVs, refer to the official Kubernetes documentation.

  4. Delete the secrets associated with your MSR deployment:

    kubectl delete secrets -l app.kubernetes.io/name=msr

To uninstall MSR using a Helm chart:

  1. Run the following Helm command:

    helm uninstall <release-name>
    
  2. Remove persistent volumes and certificate secrets.

Swarm deployments

  1. SSH into a manager node on the Swarm cluster in which MSR is running.

  2. Uninstall MSR:

    docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      uninstall
    

    By default, the uninstaller does not delete the data associated with your MSR deployment. To delete that data, you must include the --destroy flag with the uninstall command.

Operations Guide

The MSR Operations Guide provides the detailed information you need to store and manage images on-premises or in a virtual private cloud, to meet security or regulatory compliance requirements.

Access MSR

Configure your Mirantis Container Runtime

By default, Mirantis Container Runtime uses TLS when pushing and pulling images to an image registry such as Mirantis Secure Registry (MSR).

If MSR is using the default configuration or was configured to use self-signed certificates, you need to configure your Mirantis Container Runtime to trust MSR. Otherwise, when you try to log in, push to, or pull images from MSR, you will get an error:

docker login msr.example.org

x509: certificate signed by unknown authority

The first step to make your Mirantis Container Runtime trust the certificate authority used by MSR is to get the MSR CA certificate. Then you configure your operating system to trust that certificate.

Configure your host
macOS

In your browser navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Then add that certificate to macOS Keychain.

After adding the CA certificate to Keychain, restart Docker Desktop for Mac.

Windows

In your browser navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Open Windows Explorer, right-click the file you’ve downloaded, and choose Install certificate.

Then, select the following options:

  • Store location: local machine

  • Check Place all certificates in the following store

  • Click Browse, and select Trusted Root Certification Authorities

  • Click Finish

Learn more about managing TLS certificates.

After adding the CA certificate to Windows, restart Docker Desktop for Windows.

Ubuntu/Debian
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
RHEL/CentOS
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
Boot2Docker
  1. Log into the virtual machine with ssh:

    docker-machine ssh <machine-name>
    
  2. Create the bootsync.sh file, and make it executable:

    sudo touch /var/lib/boot2docker/bootsync.sh
    sudo chmod 755 /var/lib/boot2docker/bootsync.sh
    
  3. Add the following content to the bootsync.sh file. You can use nano or vi for this.

    #!/bin/sh
    
    cat /var/lib/boot2docker/server.pem >> /etc/ssl/certs/ca-certificates.crt
    
  4. Add the MSR CA certificate to the server.pem file:

    curl -k https://<msr-domain-name>/ca | sudo tee -a /var/lib/boot2docker/server.pem
    
  5. Run bootsync.sh and restart the Docker daemon:

    sudo /var/lib/boot2docker/bootsync.sh
    sudo /etc/init.d/docker restart
    
Log into MSR

To validate that your Docker daemon trusts MSR, try authenticating against MSR.

docker login msr.example.org
Where to go next

Configure your Notary client

Configure your Notary client as described in Delegations for content trust.

Use a cache

Mirantis Secure Registry can be configured to have one or more caches, which allows you to choose the cache from which to pull images for faster download times.

If an administrator has set up caches, you can choose which cache to use when pulling images.

In the MSR web UI, navigate to your Account, and check the Content Cache options.

Once you save, your images are pulled from the cache instead of the central MSR.

Manage access tokens

You can create and distribute access tokens in MSR that grant users access at specific permission levels.

Access tokens are associated with a particular user account. They take on the permissions of that account when in use, adjusting automatically to any permissions changes that are made to the associated user account.

Note

Regular MSR users can create access tokens that adopt their own account permissions, while administrators can create access tokens that adopt the account permissions of any account they choose, including the admin account.

Access tokens are useful in building CI/CD pipelines and other integrations, as you can issue a separate token for each integration and deactivate or delete tokens at any time. You can also use access tokens to generate a temporary password for a user who is locked out of their account.

Create an access token

  1. Log in to the MSR web UI as the user whose permissions you want associated with the token.

  2. In the left-side navigation panel, navigate to <user name> > Profile.

  3. Select the Access Tokens tab.

  4. Click New access token.

  5. Add a description for the new token. You can, for example, describe the purpose of the token or illustrate a use scenario.

  6. Click Create. The token displays only once; after you click Done, you cannot view it again.

Modify an access token

Although you cannot view an access token following its initial display, you can give the token a new description, deactivate it, or delete it.

To give an access token a new description:

  1. Select the View details link associated with the required access token.

  2. Enter a new description in the Description field.

  3. Click Save.

To deactivate an access token:

  1. Select View details next to the required access token.

  2. Slide the Is active toggle to the left.

  3. Click Save.

To delete an access token:

  1. Select the checkbox associated with the access token you want to delete.

  2. Click Delete.

  3. Type delete in the pop-up window and click OK.

Use an access token

You can use an access token anywhere you need an MSR password.

Examples:

  • You can pass your access token to the --password or -p option when logging in from your Docker CLI client:

    docker login msr.example.org --username <username> --password <token>
    
  • You can pass your access token to an MSR API endpoint to list the repositories to which the associated user has access:

    curl --silent --insecure --user <username>:<token> msr.example.org/api/v0/repositories
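
In both examples, the credentials travel as an HTTP basic-auth header, which curl builds from the --user flag. Where a CI system injects headers rather than credentials, the equivalent header can be constructed by hand; make_basic_auth_header is a hypothetical helper, not an MSR tool:

```shell
# Emit the Authorization header equivalent to `curl --user <username>:<token>`.
make_basic_auth_header() {
    printf 'Authorization: Basic %s' "$(printf '%s:%s' "$1" "$2" | base64)"
}
```

For example: curl -H "$(make_basic_auth_header <username> <token>)" msr.example.org/api/v0/repositories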
    

Configure MSR

Add a custom TLS certificate

By default, Mirantis Secure Registry (MSR) services are exposed using HTTPS. This ensures encrypted communications between clients and your trusted registry. If you do not pass a PEM-encoded TLS certificate during installation, MSR will generate a self-signed certificate, which leads to an insecure site warning when accessing MSR through a browser. In addition, MSR includes an HTTP Strict Transport Security (HSTS) header in all API responses, which can cause your browser not to load the MSR web UI.

You can configure MSR to use your own TLS certificates, so that browsers and client tools automatically trust MSR. You can also enable user authentication through client certificates provided by your organization's Public Key Infrastructure (PKI).

Kubernetes deployments

You can upload your own TLS certificate and key either at installation time or when reconfiguring an existing MSR instance, using the MSR Operator or the Helm chart to customize the web TLS certificate:

Using the MSR Operator:

  1. Obtain your TLS certificate and key files.

    Note

    You can use a previously created CA signed SSL certificate, or create a new one. [1]

  2. Add the secret to the cluster:

    kubectl create secret tls <secret-name> \
      --key <keyfile>.pem \
      --cert <certfile>.pem
    
  3. Update your custom resource manifest:

    spec:
      nginx:
        webtls:
          secretName: '<secret-name>'
          create: false
    
  4. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  5. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    
  6. Enable port forwarding:

    kubectl port-forward service/msr 8080 8443:443
    
  7. Go to https://localhost:8443/login and log in as an administrator.

  8. Verify the presence of a valid certificate by matching the information with that of the generated certificate.

Using the Helm chart:

  1. Acquire your TLS certificate and key files.

    Note

    You can use a previously created CA signed SSL certificate, or you can create a new one. [1]

  2. Add the secret to the cluster:

    kubectl create secret tls <secret-name> \
      --key <keyfile>.pem \
      --cert <certfile>.pem
    
  3. Install the Helm chart with the custom certificate:

    helm install msr msr \
      --repo https://registry.mirantis.com/charts/msr/msr \
      --version 1.0.0 \
      --set-file license=path/to/file/license.lic \
      --set nginx.webtls.secretName="<secret-name>"
    
  4. Enable port forwarding:

    kubectl port-forward service/msr 8080 8443:443
    
  5. Log in as an administrator at https://localhost:8443/login.

  6. Verify the presence of a valid certificate by matching the information with that of the generated certificate.

Swarm deployments

Add a custom TLS certificate to an existing Swarm deployment, using the Docker CLI:

  1. Acquire your PEM-encoded x509 certificate.

    Note

    You can use a previously created CA signed SSL certificate, or you can create a new one. [1]

  2. Verify that your certificate is split into the following three files:

    cert.pem

    This is the public key and includes everything between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

    key.pem

    This is the private key and includes everything between -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----.

    ca.pem

    This is the public certificate of the Certificate Authority and includes everything between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

    Note

    If the certificate is not already split, you can split it yourself by copy-pasting each of the three sections into its own separate file.

  3. Create a Docker secret for each of the three certificate files:

    docker secret create msr-web-cert cert.pem
    docker secret create msr-web-key key.pem
    docker secret create msr-web-ca ca.pem
    
  4. Update the NGINX service with the custom certificate:

    docker service update msr_msr-nginx \
      --secret-add msr-web-ca \
      --secret-add msr-web-cert \
      --secret-add msr-web-key \
      --env-rm MSR_WEB_TLS_CERT_FILE \
      --env-rm MSR_WEB_TLS_KEY_FILE \
      --env-rm MSR_WEB_TLS_CA_FILE \
      --env-add MSR_WEB_TLS_CERT_FILE=/var/run/secrets/msr-web-cert \
      --env-add MSR_WEB_TLS_KEY_FILE=/var/run/secrets/msr-web-key \
      --env-add MSR_WEB_TLS_CA_FILE=/var/run/secrets/msr-web-ca
    
[1] Users who want to create a new self-signed certificate that is valid for the host name can do so using mkcert or openssl.
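
The manual certificate split described in step 2 of the Swarm procedure above can also be scripted. The sketch below assumes a single combined PEM file containing the server certificate first, then the private key, then the CA certificate; the file name is a placeholder:

```shell
# Split a combined PEM into cert.pem, key.pem, and ca.pem in the current
# directory. CERTIFICATE blocks are written in order of appearance: the
# first to cert.pem, any later ones to ca.pem; the PRIVATE KEY block goes
# to key.pem.
split_pem() {
    awk '
        /-----BEGIN .*PRIVATE KEY-----/ { out = "key.pem" }
        /-----BEGIN CERTIFICATE-----/   { out = (++n == 1 ? "cert.pem" : "ca.pem") }
        out != "" { print > out }
        /-----END/ { out = "" }
    ' "$1"
}
```

For example, run split_pem combined.pem, then proceed to the docker secret create commands in step 3.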

Disable persistent cookies

By default, Mirantis Secure Registry (MSR) uses persistent cookies. Alternatively, you can switch to using session-based authentication cookies that expire when you close your browser.

To disable persistent cookies:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System.

  3. On the General tab, scroll down to Browser Cookies.

  4. Slide the Disable persistent cookies toggle to the right.

  5. Verify that persistent cookies are disabled:

    1. Log in to the MSR web UI using Chrome.

    2. Right-click any page and select Inspect.

    3. In the Developer Tools panel, navigate to Application > Cookies > https://<msr-external-url>.

    4. Verify that Expires / Max-Age is set to Session.

    1. Log in to the MSR web UI using Firefox.

    2. Right-click any page and select Inspect.

    3. In the Developer Tools panel, navigate to Storage > Cookies > https://<msr-external-url>.

    4. Verify that Expires / Max-Age is set to Session.

Disable MSR telemetry

By default, MSR automatically records and transmits data to Mirantis through an encrypted channel for monitoring and analysis purposes. The data collected provides the Mirantis Customer Success Organization with information that helps Mirantis to better understand the operational use of MSR by our customers. It also provides key feedback in the form of product usage statistics, which assists our product teams in making enhancements to Mirantis products and services.

Caution

To send MSR telemetry, the container runtime and the jobrunner container must be able to resolve api.segment.io and create a TCP (HTTPS) connection on port 443.

To disable telemetry for MSR:

  1. Log in to the MSR web UI as an administrator.

  2. Click System in the left-side navigation panel to open the System page.

  3. Click the General tab in the details pane.

  4. Scroll down in the details pane to the Analytics section.

  5. Toggle the Send data slider to the left.

Configure external storage

By default, MSR uses the local filesystem of the node on which it is running to store your Docker images. As an alternative, you can configure MSR to use an external storage back end for improved performance or high availability.

Configure MSR image storage

If your MSR deployment has a single replica, you can continue to use the local filesystem to store your Docker images. If your MSR deployment has multiple replicas, however, make sure that all of the replicas use the same storage back end for high availability.

Whenever a user pulls an image, the MSR node serving the request must have access to that image.

Storage back ends

MSR supports the following storage systems:

Persistent volume

  • NFS

  • Bind mount

  • Volume

Cloud storage providers

  • Amazon S3

  • Microsoft Azure

  • OpenStack Swift

  • Google Cloud Storage

  • Alibaba Cloud Object Storage Service

You can configure your storage back end at the time of MSR installation or upgrade. To do so, specify the registry.storage.backend parameter in your custom resource manifest or Helm chart values.yaml file with one of the following values, as appropriate:

  • "persistentVolume"

  • "azure"

  • "gcs"

  • "s3"

  • "swift"

  • "oss"

You can configure the following fields in the registry.storage.persistentVolume section of the custom resource manifest and Helm chart values.yaml file:

  • storageClass: The StorageClass for the persistentVolume.

  • accessMode: The access mode for the persistentVolume.

  • size: The size of the persistentVolume.
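
Expressed in the values.yaml file rather than on the command line, these fields sit under registry.storage.persistentVolume. A hypothetical fragment, with placeholder values:

```yaml
registry:
  storage:
    backend: "persistentVolume"
    persistentVolume:
      storageClass: "standard"
      accessMode: "ReadWriteMany"
      size: "100Gi"
```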

Local filesystem

The default MSR back end is persistentVolume.

You must configure a default StorageClass on your cluster that supports the dynamic provisioning of persistent volumes. The StorageClass must support the provisioning of ReadWriteOnce and ReadWriteMany volumes.

To verify the current default StorageClass:

kubectl get sc

Example output:

NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  33d

MSR deployments with high availability must use either NFS or another centralized storage back end to ensure that all MSR replicas have access to the same images.

To verify the amount of persistent volume space that is in use:

kubectl -n <NAMESPACE> exec service/<RELEASE_NAME> -- df
Deploy MSR on NFS

You can configure your MSR replicas to store images on an NFS partition, thus enabling all replicas to share the same storage back end.

Note

As MSR does not migrate storage content when it switches back ends, you must migrate the content prior to changing the MSR storage configuration.

Prepare MSR for NFS
  1. Verify that the NFS server has the correct configuration.

  2. Verify that the NFS server has a fixed IP address.

  3. Verify that all hosts that are running MSR have the correct NFS libraries.

  4. Verify that the hosts can connect to the NFS server by listing the directories exported by your NFS server:

    showmount -e <nfsserver>
    
  5. Mount one of the exported directories:

    mkdir /tmp/mydir && sudo mount -t nfs <nfs server>:<directory> /tmp/mydir
    
Configure NFS for MSR

Note

The manifest examples herein are offered for demonstration purposes only. They do not exist in the Mirantis repository and thus are not available for use. To use NFS with MSR 3.0.x, you must enlist an external provisioner, such as NFS Ganesha server and external provisioner or NFS subdir external provisioner.

Kubernetes deployments
  1. Define the NFS service:

    kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
    
  2. Create an NFS server and service:

    1. Create the NFS server from the service definition:

      kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
      
    2. Expose the NFS server as a service:

      kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
      
    3. Verify that the Pods are correctly deployed:

      kubectl get pods -l role=nfs-server
      
  3. Create the persistent volume claim:

    1. Locate the cluster IP for your server:

      kubectl describe services nfs-server
      
    2. Edit the NFS persistent volume to use the correct IP address. Because there are not yet any service names, you must hard-code the IP address.

  4. Set up the persistent volume to use the NFS service:

    kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml
    kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
    
Swarm deployments
  1. Edit your values.yaml file to include the following information:

    driverOpts:
      type: "nfs"
      o: "addr=<remote-host>,rw,nfsvers=<nfs-version>,async"
      device: ":<remote-path>"
    
  2. Proceed with your MSR on Swarm installation.

Configure MSR for a cloud storage provider (S3)

You can configure MSR to store Docker images on Amazon S3 or on any other file servers with an S3-compatible API.

All S3-compatible services store files in “buckets”, to which you can authorize users to read, write, and delete files. Whenever you integrate MSR with such a service, MSR sends all read and write operations to the S3 bucket where the images then persist.

Note

The instructions offered below pertain specifically to the configuration of MSR to Amazon S3. They can, however, also serve as a guide for how to configure MSR to other available cloud storage providers.

Create a bucket on Amazon S3

Before you configure MSR you must first create a bucket on Amazon S3. To optimize pulls and pushes, Mirantis suggests that you create the S3 bucket in the AWS region that is physically closest to the servers on which MSR is set to run.

  1. Create an S3 bucket.

  2. Create a new IAM user for the MSR integration.

  3. Apply an IAM policy that has the following limited user permissions:

    • Access to the newly-created bucket

    • Ability to read, write, and delete files

    Example user policy
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "arn:aws:s3:::*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": "arn:aws:s3:::<bucket-name>"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": "arn:aws:s3:::<bucket-name>/*"
            }
        ]
    }
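As a sketch of how the policy might be prepared, the following script renders the example policy for a hypothetical bucket name and validates the JSON before it is attached to the IAM user; the aws command in the trailing comment assumes the AWS CLI is installed and configured:

```shell
# Render the example IAM policy for a specific bucket and verify that it
# is valid JSON. The bucket and user names below are placeholders.
BUCKET="testing-msr-kube"

sed "s/<bucket-name>/${BUCKET}/g" > msr-s3-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}
EOF

# Fail early on malformed JSON:
python3 -m json.tool msr-s3-policy.json > /dev/null && echo "policy OK"

# With the AWS CLI configured, the policy could then be attached to the
# MSR IAM user, for example:
#   aws iam put-user-policy --user-name msr-registry \
#     --policy-name msr-s3-access --policy-document file://msr-s3-policy.json
```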
    
Configure MSR on Amazon S3
Kubernetes deployments
To configure MSR for S3 using the MSR Operator:

  1. Add the following values to the custom resource manifest. If you are using IAM role authentication, do not include the lines for accesskey and secretkey. Running Kubernetes on AWS requires that you include v4auth: true.

    spec:
      registry:
        storage:
          backend: "s3"
          s3:
            region: <region>
            bucket: <bucket-name>
            accesskey: <access-key>
            secretkey: <secret-key>
            v4auth: true
          persistentVolume:
            size: <size>
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    
To configure MSR for S3 using a Helm chart:

  1. Set registry.storage.backend to s3.

  2. Specify registry.storage.s3.region and registry.storage.s3.bucket.

  3. If you are not using IAM role authentication, you must also set registry.storage.s3.accesskey and registry.storage.s3.secretkey.

  4. To activate the new storage configuration settings, issue the helm upgrade command.

Example configuration command at install time:

helm install msr msrofficial/msr \
--version 1.0.0 \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<> \
--set registry.storage.s3.secretkey=<> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube

Example configuration command at time of upgrade:

helm upgrade msr msrofficial/msr \
--version 1.0.0 \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<> \
--set registry.storage.s3.secretkey=<> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube
Swarm deployments
  1. Update your values.yaml file to include the following values.

    Note

    If you are using IAM role authentication, do not include the lines that set the accesskey and secretkey values.

    registry:
      storage:
        backend: 's3'
        s3:
          region: <region>
          bucket: <bucket-name>
          accesskey: <access-key>
          secretkey: <secret-key>
    
  2. Install or upgrade your deployment, as needed.

The following parameters are available for configuration in the registry.storage.s3 section of the custom resource manifest, Helm chart, or Swarm cluster values.yaml file:

Amazon S3

Field           Level     Description
accesskey       Standard  AWS access key.
secretkey       Standard  AWS secret key.
region          Standard  The AWS region in which your bucket exists.
regionendpoint  Standard  The endpoint for S3-compatible storage services.
bucket          Standard  The name of the bucket in which image data is stored.
encrypt         Advanced  Indicates whether images are stored in encrypted format.
keyid           Advanced  The KMS key ID to use for encryption of images.
secure          Advanced  Indicates whether to use HTTPS for data transfers to the bucket.
v4auth          Advanced  Indicates whether to use AWS Signature Version 4 to authenticate requests.
chunksize       Advanced  The default part size for multipart uploads.
rootdirectory   Advanced  A prefix applied to all object keys, to allow you to segment data in your bucket if necessary.
storageclass    Advanced  The S3 storage class applied to each registry file. Valid options are "STANDARD" and "REDUCED_REDUNDANCY".

MSR supports the following S3 regions:

us-east-1

us-east-2

us-west-1

us-west-2

eu-west-1

eu-west-2

eu-central-1

ap-south-1

ap-southeast-1

ap-southeast-2

ap-northeast-1

ap-northeast-2

sa-east-1

cn-north-1

us-gov-west-1

ca-central-1

Restore MSR with your previous settings
Restore MSR with S3 settings

To restore MSR using your previously configured S3 settings, use restore.

Restore MSR with non-S3 cloud storage provider settings

For S3-compatible cloud storage providers other than Amazon S3, configure the following parameters in the registry.storage section of the custom resource manifest, Helm chart, or Swarm cluster values.yaml file:

Microsoft Azure

Field        Level     Description
accountname  Standard  The name of the Azure storage account.
accountkey   Standard  The primary or secondary key for the storage account.
container    Standard  The name of the Azure root storage container in which image data is stored.
realm        Advanced  The domain name suffix for the Storage API endpoint.

OpenStack Swift

Field               Level     Description
authurl             Standard  The URL for obtaining an authentication token.
username            Standard  OpenStack user name.
password            Standard  OpenStack password.
container           Standard  The name of the Swift container in which to store the registry images.
region              Advanced  The OpenStack region in which the container exists.
tenant              Advanced  OpenStack tenant name.
tenantid            Advanced  OpenStack tenant ID.
domain              Advanced  OpenStack domain name for Identity v3 API.
domainid            Advanced  OpenStack domain ID for Identity v3 API.
trustid             Advanced  OpenStack trust ID for Identity v3 API.
insecureskipverify  Advanced  Skips TLS server certificate verification.
chunksize           Advanced  The chunk size for Swift Dynamic Large Objects.
prefix              Advanced  A prefix applied to all Swift object keys, to allow you to segment data in your container if necessary.
secretkey           Advanced  The secret key used to generate temporary URLs.
accesskey           Advanced  The access key used to generate temporary URLs.
authversion         Advanced  Specifies the OpenStack auth version.
endpointtype        Advanced  The endpoint type used when connecting to Swift.

Google Cloud Storage

Field          Level     Description
bucket         Standard  The name of the Google Cloud Storage bucket in which image data is stored.
credentials    Advanced  The contents of a service account private key file in JSON format, used for Service Account authentication.
rootdirectory  Advanced  The root directory tree in which all registry files are stored. The prefix is applied to all Google Cloud Storage keys, to allow you to segment data in your bucket as necessary.
chunksize      Advanced  The chunk size used for uploading large blobs.

Alibaba Cloud Object Storage Service

Field            Level     Description
accesskeyid      Standard  Access key ID.
accesskeysecret  Standard  Access key secret.
region           Standard  The ID of the OSS region in which you would like to store objects.
bucket           Standard  The name of the OSS bucket in which to store objects.
endpoint         Advanced  The endpoint domain name for accessing OSS.
internal         Advanced  Indicates whether to use the internal endpoint instead of the public endpoint for OSS access.
encrypt          Advanced  Indicates whether to encrypt your data on the server side.
secure           Advanced  Indicates whether to transfer data to the bucket over HTTPS.
chunksize        Advanced  The default part size for multipart uploads.
rootdirectory    Advanced  A prefix applied to all object keys, to allow you to segment data in your bucket if necessary.

Switch storage back ends

To facilitate online garbage collection, switching storage back ends initializes a new metadata store and erases your existing tags. As a best practice, you should always move, back up, and restore MSR storage back ends together with your metadata.

Kubernetes deployments

To switch your storage back end to Amazon S3 using the MSR Operator:

  1. Add the following values to the custom resource manifest. If you are using IAM role authentication, do not include the lines for accesskey and secretkey. Running Kubernetes on AWS requires that you include v4auth: true.

    spec:
      registry:
        storage:
          backend: "s3"
          s3:
            region: <region>
            bucket: <bucket-name>
            accesskey: <access-key>
            secretkey: <secret-key>
            v4auth: true
          persistentVolume:
            size: <size>
    
  2. Apply these changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify that the reconciliation process for the custom resource is complete:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

To switch your storage back end to Amazon S3 using a Helm chart:

helm upgrade msr msrofficial/msr \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<> \
--set registry.storage.s3.secretkey=<> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube
Swarm deployments
  1. SSH into a manager node on the Swarm cluster on which you are running MSR.

  2. Edit your values.yaml file to include the new storage back end:

    registry:
      storage:
        backend: '<storage-backend>'
    
  3. Obtain a list of non-manager nodes along with their node IDs, noting the IDs of the nodes on which MSR is installed:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  4. Update your MSR deployment, specifying a node ID for each node on which MSR is installed:

    docker run -it \
      --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      upgrade \
      --node <node-id>
    
  5. Review the status of the deployed services:

    docker stack services msr
    

Set up high availability

Mirantis Secure Registry (MSR) is designed to scale horizontally as your usage increases. You can scale each of the resources that the custom resource manifest creates by editing the replicaCount setting for that resource. Adding replicas enables MSR to scale to demand and provides high availability.

To ensure that MSR is tolerant to failures, you can add additional replicas to each of the resources MSR deploys. MSR with high availability requires a minimum of three Nodes.

When sizing your MSR installation for high availability, Mirantis recommends that you follow these best practices:

  • Ensure that multiple Pods created for the same resource are not scheduled on the same Node. To do this, enable a Pod affinity setting in your Kubernetes environment that schedules Pod replicas on different Nodes.

    Note

    If you are unsure of which Pod affinity settings to use, set the podAntiAffinityPreset field to hard, to enable the recommended affinity settings intended for a highly available workload.

  • Do not scale RethinkDB with just two replicas.

    Caution

    RethinkDB cannot tolerate a failure with an even number of replicas.

    To determine the best way to scale RethinkDB, refer to the following table.

    MSR RethinkDB replicas    1    3    5    7
    Failures tolerated        0    1    2    3

    Caution

    Adding too many replicas to the RethinkDB cluster can lead to performance degradation.
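The replica table follows from Raft majority voting: a cluster of n replicas remains available only while a majority of them are up, so it tolerates floor((n - 1) / 2) failures. A quick shell check of that formula:

```shell
# Failures tolerated by a RethinkDB cluster of n replicas under Raft
# majority voting: floor((n - 1) / 2).
for n in 1 2 3 4 5 6 7; do
  echo "replicas=$n  failures_tolerated=$(( (n - 1) / 2 ))"
done
```

Note that an even replica count never tolerates more failures than the next-lower odd count, which is why scaling RethinkDB to two replicas adds no fault tolerance.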

Install an HA MSR deployment

Note

The instruction herein is a supplement to the MSR installation procedure detailed at Installation Guide.

Kubernetes deployments

High availability (HA) MSR deployments require a Kubernetes environment that has:

  • At least two different nodes on which to run an MSR deployment

  • An additional node on which to replicate the RethinkDB cluster, to ensure fault tolerance

To install an HA MSR deployment using the MSR Operator:

  1. Modify your cr-sample-manifest.yaml file to include the values contained in the following YAML example:

    spec:
      podAntiAffinityPreset: hard
      rethinkdb:
        cluster:
          replicaCount: 3
        proxy:
          replicaCount: 2
      enzi:
        api:
          replicaCount: 2
        worker:
          replicaCount: 2
      nginx:
        replicaCount: 2
      garant:
        replicaCount: 2
      api:
        replicaCount: 2
      jobrunner:
        deployments:
          default:
            replicaCount: 2
      notarySigner:
        replicaCount: 2
      notaryServer:
        replicaCount: 2
      registry:
        replicaCount: 2
    

    Note

    You can edit the replica counts in the custom resource manifest, but be aware that rethinkdb.cluster.replicaCount must always be an odd number. Refer to the RethinkDB scaling chart for details.

  2. Invoke the following command to run the webhook health check and apply the changes to the custom resource:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

    To troubleshoot the reconciliation process, run the following commands:

    kubectl describe msrs.msr.mirantis.com
    kubectl describe rethinkdbs.rethinkdb.com
    

    Review the MSR Operator Pod logs for more detailed results:

    kubectl logs <msr-operator-pod-name>
    
  4. Optional. Another way to troubleshoot the reconciliation process is to monitor the cluster scaling in the RethinkDB admin console.

To install an HA MSR deployment using a Helm chart:

  1. Create an ha.yaml file with the following content:

    The ha.yaml file sample
    global:
      podAntiAffinityPreset: hard
      rethinkdb:
        cluster:
          replicaCount: 3
        proxy:
          replicaCount: 2
      enzi:
        api:
          replicaCount: 2
        worker:
          replicaCount: 2
      nginx:
        replicaCount: 2
      garant:
        replicaCount: 2
      api:
        replicaCount: 2
      jobrunner:
        deployments:
          default:
            replicaCount: 2
      notarySigner:
        replicaCount: 2
      notaryServer:
        replicaCount: 2
      registry:
        replicaCount: 2
    

    Note

    You can edit the replica counts in the ha.yaml file. However, you must make sure that rethinkdb.cluster.replicaCount is always an odd number. Refer to the RethinkDB scaling chart for details.

  2. Using Helm, apply the YAML file to a new installation:

    helm install msr msrofficial/msr -f ha.yaml
    
Swarm deployments

You must have at least three worker nodes to run a robust and fault-tolerant high availability (HA) MSR deployment.

Note

The procedure that follows is supplementary to the MSR installation procedure. Refer to Install MSR online for the comprehensive installation instructions.

  1. SSH into a manager node.

  2. Obtain a list of non-manager nodes along with their node IDs:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  3. Install an HA deployment, specifying the node IDs of the workers on which MSR will run:

    docker run \
      --rm \
      -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      install \
      --node <node-id> \
      --node <node-id> \
      --node <node-id> \
      --https_port 443 \
      --http_port 80
    

    Important

    You must install MSR onto an odd number of worker nodes, because RethinkDB uses the Raft consensus algorithm to ensure data consistency and fault tolerance.

  4. Review the status of the deployed services:

    docker stack services msr
    
Modify replica counts on an existing installation
Kubernetes deployments

To modify replica counts for MSR resources using MSR Operator:

You can use the kubectl apply command on your custom resource manifest to modify replica counts across MSR resources.

For information on how many RethinkDB replicas to use, refer to the RethinkDB replica count table.

  1. In the cr-sample-manifest.yaml file, edit the key-value pair that corresponds to the MSR resource whose replica count you want to modify. For example, nginx:

    Note

    For the full configuration example, refer to the CRD file sample.

    nginx:
      replicaCount: <desired-replica-count>
    
  2. Invoke the following command to run the webhook health check and apply the changes to the custom resource:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

    To troubleshoot the reconciliation process, run the following commands:

    kubectl describe msrs.msr.mirantis.com
    kubectl describe rethinkdbs.rethinkdb.com
    

    Review the MSR Operator Pod logs for more detailed results:

    kubectl logs <msr-operator-pod-name>
    
  4. Optional. Another way to troubleshoot the reconciliation process is to monitor the cluster scaling in the RethinkDB admin console.

To modify replica counts for MSR resources using a Helm chart:

You can use the helm upgrade command to modify replica counts across non-RethinkDB MSR resources. For the RethinkDB resources, refer to Modify replica counts for RethinkDB resources.

  1. In the ha.yaml file, edit the key-value pair that corresponds to the MSR resource whose replica count you wish to modify. For example, nginx:

    Note

    For the full configuration example, refer to The ha.yaml file sample.

    nginx:
      replicaCount: <desired-replica-count>
    
  2. To apply the new values, run helm upgrade:

    helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
    
Swarm deployments
  1. SSH into a manager node.

  2. Verify that you have the values.yaml that you generated to install and modify your MSR deployment.

  3. Scale your deployment to the required number of worker nodes:

    docker run \
      --rm \
      -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<msr-version> \
      scale --replicas <number-of-replicas>
    

    Important

    Because RethinkDB uses a raft consensus algorithm to ensure data consistency and fault tolerance, you must install MSR onto an odd number of worker nodes.

  4. Review the status of the deployed services:

    docker stack services msr
    
Modify replica counts for RethinkDB resources

Note

The procedure outlined herein is not necessary if you are using the MSR Operator to install and manage your MSR deployment.

Unlike other MSR resources, modifications to RethinkDB resources require that you scale the RethinkDB tables. The scaling of the cluster occurs when you alter the replicaCount value in the ha.yaml file.

Add replicas to RethinkDB
  1. Adjust the replicaCount value by creating or editing an existing ha.yaml file:

    Note

    Refer to ha-yaml-sample for the full configuration example.

    rethinkdb:
      cluster:
        replicaCount: <desired-replica-count>
    
  2. To apply the new values, run helm upgrade:

    helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
    
  3. Monitor the addition of the RethinkDB replicas to ensure that each replica has the Running status before you continue.

    kubectl get pods
    -l="app.kubernetes.io/component=cluster","app.kubernetes.io/name=rethinkdb"
    

    Example output:

    NAME                      READY   STATUS    RESTARTS   AGE
    msr-rethinkdb-cluster-0   1/1     Running   0          3h19m
    msr-rethinkdb-cluster-1   1/1     Running   0          110s
    msr-rethinkdb-cluster-2   1/1     Running   0          83s
    
  4. Use the MSR CLI to scale the RethinkDB tables within the cluster to use the newly added replicas:

    kubectl exec -it deploy/msr-api -- msr db scale
    
Remove replicas from RethinkDB

As an example, the replica removal procedure illustrates how to scale down from three servers to one server.

  1. Decommission the RethinkDB servers that you want to remove:

    1. Obtain a current list of RethinkDB servers:

      kubectl exec deploy/msr-api -- msr rethinkdb list
      

      Example output:

      NAME                    ID                                   TAGS    CACHE (MB)
      msr_rethinkdb_cluster_1 fa5d11f0-d47f-4a8f-895f-246271212204 default 100
      msr_rethinkdb_cluster_0 b81cca8a-6584-4b9a-9c97-e9f3c86b24fd default 100
      msr_rethinkdb_cluster_2 d6d29977-6ab6-4815-ab24-25519ab3339f default 100
      
    2. Determine the servers to decommission. Be aware that the number of replicas will scale down from the highest number to the lowest.

    3. Run msr rethinkdb decommission on the servers to be decommissioned. As the scale down in the example is from three servers to one server, the two servers with the highest numbers should be targeted for decommission.

      kubectl exec deploy/msr-api -- msr rethinkdb decommission msr_rethinkdb_cluster_2 msr_rethinkdb_cluster_1
      
  2. Scale down the RethinkDB tables within the cluster:

    kubectl exec -it deploy/msr-api -- msr db scale
    
  3. Adjust the replicaCount value by creating or editing an existing ha.yaml file.

    rethinkdb:
      cluster:
        replicaCount: 1
    
  4. Apply the new replicaCount values:

    helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
    
  5. Monitor the removal of the cluster pods to ensure their termination:

    kubectl get pods
    -l="app.kubernetes.io/component=cluster","app.kubernetes.io/name=rethinkdb"
    

    Example output:

    NAME                      READY   STATUS        RESTARTS   AGE
    msr-rethinkdb-cluster-0   1/1     Running       0          3h19m
    msr-rethinkdb-cluster-1   1/1     Running       0          1h22m
    msr-rethinkdb-cluster-2   0/1     Terminating   0          1h22m
    

Set up security scanning

For MSR to perform security scanning, you must have a running deployment of Mirantis Secure Registry (MSR), administrator access, and an MSR license that includes security scanning.

Before you can set up security scanning, you must verify that your Docker ID can access and download your MSR license from Docker Hub. If you are using a license that is associated with an organization account, verify that your Docker ID is a member of the Owners team, as only members of that team can download license files for an organization. If you are using a license associated with an individual account, no additional action is needed.

Note

To verify that your MSR license includes security scanning:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

If the Enable Scanning toggle displays, the license includes security scanning.

To learn how to obtain and install your MSR license, refer to Obtain the MSR license.

Enable MSR security scanning
  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Slide the Enable Scanning toggle to the right.

  4. Set the security scanning mode by selecting either Online or Offline.

    • Online mode:

      Online mode downloads the latest vulnerability database from a Docker server and installs it.

      1. Select whether to include jobrunner and postgresDB logs.

      2. Click Sync Database now.

    • Offline mode:

      Offline mode requires that you manually perform the following steps.

      1. Download the most recent CVE database.

        Be aware that the example command specifies default values. It instructs the container to output the database file to the ~/Downloads directory and configures the volume to map from the local machine into the container. If the destination for the database is in a separate directory, you must define an additional volume. For more information, refer to the table that follows this procedure.

        docker run -it --rm \
        -v ${HOME}/Downloads:/data \
        -e CVE_DB_URL_ONLY=false \
        -e CLOBBER_FILE=false \
        -e DATABASE_OUTPUT="/data" \
        -e DATABASE_SCHEMA=3 \
        -e DEBUG=false \
        -e VERSION_ONLY=false \
        mirantis/get-dtr-cve-db:latest
        
      2. Click Select Database and open the downloaded CVE database file.

Runtime environment variable override

Variable         Default  Override detail
CLOBBER_FILE     false    Set to true to overwrite an existing file with the same database name.
CVE_DB_URL_ONLY  false    Set to true to output the CVE database URL without downloading the database.
DATABASE_OUTPUT  /data    The database download directory inside the container.
DATABASE_SCHEMA  3        Valid values: 1 (DTR 2.2.5 or lower); 2 (DTR 2.3.x, 2.4.x, 2.5.15 or lower, 2.6.11 or lower, 2.7.4 or lower); 3 (DTR 2.5.16 or higher, 2.6.12 or higher, 2.7.5 or higher).
DEBUG            false    Set to true to execute the script with set -x.
VERSION_ONLY     false    Set to true to produce a dry run that outputs the CVE database version number without downloading the database.

Set repository scanning mode

Two image scanning modes are available:

On push

The image is re-scanned (1) on each docker push to the repository and (2) when a user with write access clicks the Start Scan link or the Scan button.

Manual

The image is scanned only when a user with write access clicks the Start Scan link or the Scan button.

By default, new repositories are set to scan On push, and any repositories that existed before scanning was enabled are set to Manual.

To change the scanning mode for an individual repository:

  1. Verify that you have write or admin access to the repository.

  2. Navigate to the repository, and click the Settings tab.

  3. Scroll down to the Image scanning section.

  4. Select the desired scanning mode.

Update the CVE scanning database

MSR security scanning indexes the components in your MSR images and compares them against a CVE database. This database is routinely updated with new vulnerability signatures, and thus MSR must be regularly updated with the latest version to properly scan for all possible vulnerabilities. After updating the database, MSR matches the components in the new CVE reports to the indexed components in your images, and generates an updated report.

Note

MSR users with administrator access can learn when the CVE database was last updated by accessing the Security tab in the MSR System page.

Update CVE database in online mode

In online mode, MSR security scanning monitors for updates to the vulnerability database, and downloads them when available.

To ensure that MSR can access the database updates, verify that the host can access both https://license.mirantis.com and https://dss-cve-updates.mirantis.com/ on port 443 using HTTPS.

MSR checks for new CVE database updates every day at 3:00 AM UTC. If an update is available, it is automatically downloaded and applied, without interrupting any scans in progress. Once the update is completed, the security scanning system checks the indexed components for new vulnerabilities.

To set the update mode to online:

  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Click Online.

Your choice is saved automatically.

Note

To check immediately for a CVE database update, click Sync Database now.

Update CVE database in offline mode

When connection to the update server is not possible, you can update the CVE database for your MSR instance using a .tar file that contains the database updates.

To set the update mode to offline:

  1. Log in to the MSR web UI as an administrator.

  2. In the left-side navigation panel, click System and navigate to the Security tab.

  3. Select Offline.

  4. Click Select Database and open the downloaded CVE database file.

MSR installs the new CVE database and begins checking the images that are already indexed for components that match new or updated vulnerabilities.

Caches

The time needed to pull and push images is directly influenced by the distance between your users and the geographic location of your MSR deployment. This is because the files need to traverse the physical space and cross multiple networks. You can, however, deploy MSR caches at different geographic locations, to add greater efficiency and shorten user wait time.

With MSR caches you can:

  • Accelerate image pulls for users in a variety of geographical regions.

  • Manage user permissions from a central location.

MSR caches are transparent to your users, who continue to log in and pull images using the provided MSR URL.

When MSR receives a user request, it first authenticates the request and verifies that the user has permission to pull the requested image. If the user has permission, they receive an image manifest that lists the image layers to pull and directs them to pull those layers from a particular cache.

When your users request image layers from the indicated cache, the cache pulls these images from MSR and maintains a copy. This enables the cache to serve the image layers to other users without having to retrieve them again from MSR.

Note

Avoid using caches if your users need to push images faster or if you want to implement region-based RBAC policies. Instead, deploy multiple MSR clusters and apply mirroring policies between them. For further details, refer to Promotion policies and monitoring.

MSR cache prerequisites

Before deploying an MSR cache in a datacenter:

  • Obtain access to the Kubernetes cluster that is running MSR in your data center.

  • Join the nodes into a cluster.

  • Dedicate one or more worker nodes for running the MSR cache.

  • Obtain TLS certificates with which to secure the cache.

  • Configure a shared storage system, if you want the cache to be highly available.

  • Configure your firewall rules to ensure that your users have access to the cache through your chosen port.

    Note

    For illustration purposes only, the MSR cache documentation details caches that are exposed on port 443/TCP using an ingress controller.

MSR cache deployment scenario

MSR caches running in different geographic locations can provide your users with greater efficiency and shorten the amount of time required to pull images from MSR.

Consider a scenario in which you are running an MSR instance that is installed in the United States, with a user base that includes developers located in the United States, Asia, and Europe. The US-based developers can pull their images from MSR quickly; however, those working in Asia and Europe have to contend with unacceptably long wait times to pull the same images. You can address this issue by deploying MSR caches in Asia and Europe, thus reducing the wait time for developers located in those areas.

The described MSR cache scenario requires three datacenters:

  1. US-based datacenter, running MSR configured for high availability

  2. Asia-based datacenter, running an MSR cache that is configured to fetch images from MSR

  3. Europe-based datacenter, running an MSR cache that is configured to fetch images from MSR

For information on datacenter configuration, refer to MSR cache prerequisites.

Deploy an MSR cache with Kubernetes

Note

The MSR with Kubernetes deployment detailed herein assumes that you have a running MSR deployment.

When you establish the MSR cache as a Kubernetes deployment, you ensure that Kubernetes will automatically schedule and restart the service in the event of a problem.

You manage the cache configuration with a Kubernetes Config Map and the TLS certificates with Kubernetes secrets. This setup enables you to securely manage the configurations of the node on which the cache is running.

Prepare the cache deployment

Following cache preparation, you will have the following file structure on your workstation:

├── msrcache.yml
├── config.yml
└── certs
    ├── cache.cert.pem
    ├── cache.key.pem
    └── msr.cert.pem
msrcache.yml

The YAML file that allows you to deploy the cache with a single command.

config.yml

The cache configuration file.

certs

The certificates subdirectory.

cache.cert.pem

The cache public key certificate, including any intermediaries.

cache.key.pem

The cache private key.

msr.cert.pem

The MSR CA certificate.

Create the MSR cache certificates

To deploy the MSR cache with a TLS endpoint you must generate a TLS certificate and key from a certificate authority.

The manner in which you expose the MSR cache determines the Subject Alternative Names (SANs) that are required for the certificate. For example:

  • To deploy the MSR cache with an ingress object you must use an external MSR cache address that resolves to your ingress controller as part of your certificate.

  • To expose the MSR cache through a Kubernetes cloud provider load balancer, you must include the external load balancer address as part of your certificate.

  • To expose the MSR cache through a Node port or a host port you must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.
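
For illustration, the following generates a throwaway self-signed certificate carrying a SAN and then verifies that the SAN is present. This is a sketch only: the FQDN is a placeholder, and a production cache certificate should instead be issued by your certificate authority.

```shell
# Generate a self-signed certificate with a SAN (the FQDN is a
# placeholder), then confirm the SAN is embedded in the certificate.
# Requires OpenSSL 1.1.1 or later for the -addext flag.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key.pem -out demo.cert.pem -days 1 \
  -subj "/CN=msrcache.example.com" \
  -addext "subjectAltName=DNS:msrcache.example.com"

# Print the SAN extension to verify it:
openssl x509 -in demo.cert.pem -noout -ext subjectAltName
```

The same `openssl x509 -noout -ext subjectAltName` check is useful for confirming that a CA-issued cache certificate carries the FQDN you intend to expose.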

Create the MSR cache certificates:

  1. Create a private key and certificate signing request (CSR) for the cache, and have your certificate authority issue the cache certificate from the CSR. For example:

    openssl req -new -newkey rsa:4096 -nodes \
      -keyout cache.key.pem -out cache.csr \
      -subj "/CN=<external-fqdn-msrcache>" \
      -addext "subjectAltName=DNS:<external-fqdn-msrcache>"
    
  2. Create a directory called certs.

  3. In the certs directory, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.

  4. Place the MSR CA certificate in the certs directory as msr.cert.pem, including any intermediate certificate authorities of the certificate from your MSR deployment. If your MSR deployment uses cert-manager, use kubectl to source this from the main MSR deployment:

    kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}' > certs/msr.cert.pem
    

Note

If cert-manager is not in use, you must provide your custom nginx.webtls certificate.

Configure the MSR cache

The MSR cache takes its configuration from a configuration file that you mount into the container.

You can edit the following MSR cache configuration file for your environment, entering the relevant external MSR cache, worker node, or external load balancer FQDN. Once you have configured the cache, it fetches image layers from MSR and maintains a local copy for 24 hours. If a user requests the image layer after that period, the cache fetches it again from MSR.

cat > config.yml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-msrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
      - name: downstream
        options:
          blobttl: 24h
          upstreams:
            - https://<msr-url> # URL of the Main MSR Deployment
          cas:
            - /certs/msr.cert.pem
EOF
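
The secret field should contain an actual random value rather than the generate-random-secret placeholder. One way to produce a suitable value, assuming the openssl CLI is available:

```shell
# Produce a 32-character hex string to use as the http.secret value.
SECRET=$(openssl rand -hex 16)
echo "$SECRET"
```

Paste the resulting string into config.yml in place of generate-random-secret.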

By default, the cache stores image data inside its container. Thus, if something goes wrong with the cache service and Kubernetes deploys a new Pod, cached data is not persisted. The data is not lost, however, as it persists in the primary MSR.

Note

Use Kubernetes persistent volumes or persistent volume claims to provide persistent back-end storage for the cache.
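
As a sketch only, with hypothetical names and a placeholder size, a PersistentVolumeClaim such as the following could back the cache storage; you would then reference it as a volume in the Deployment and mount it at /var/lib/registry:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msr-cache-storage   # hypothetical name
  namespace: msr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # size this to your expected cache volume
```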

Define Kubernetes resources

The Kubernetes manifest file you use to deploy the MSR cache is independent from how you choose to expose the MSR cache within your environment.

cat > msrcache.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msr-cache
  namespace: msr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msr-cache
  template:
    metadata:
      labels:
        app: msr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: msr-cache
          image: registry.mirantis.com/msr/msr-content-cache:3.1.2
          command: ["/bin/sh"]
          args:
            - /start.sh
            - /config/config.yml
          ports:
          - name: https
            containerPort: 443
          volumeMounts:
          - name: msr-certs
            readOnly: true
            mountPath: /certs/
          - name: msr-cache-config
            readOnly: true
            mountPath: /config
      volumes:
      - name: msr-certs
        secret:
          secretName: msr-certs
      - name: msr-cache-config
        configMap:
          defaultMode: 0666
          name: msr-cache-config
EOF
Create Kubernetes resources

To create the Kubernetes resources, you must have the kubectl command line tool configured to communicate with your Kubernetes cluster, through either a Kubernetes configuration file or an MKE client bundle.

Note

The documentation herein assumes that you have the necessary file structure on your workstation.

To create the Kubernetes resources:

  1. Create a Kubernetes namespace to logically separate all of the MSR cache components:

    kubectl create namespace msr
    
  2. Create the Kubernetes Secrets that contain the MSR cache TLS certificates and a Kubernetes ConfigMap that contains the MSR cache configuration file:

    kubectl -n msr create secret generic msr-certs \
      --from-file=certs/msr.cert.pem \
      --from-file=certs/cache.cert.pem \
      --from-file=certs/cache.key.pem
    
    kubectl -n msr create configmap msr-cache-config \
      --from-file=config.yml
    
  3. Create the Kubernetes deployment:

    kubectl create -f msrcache.yml
    
  4. Review the running Pods in your cluster to confirm successful deployment:

    kubectl -n msr get pods
    
  5. Optional. Troubleshoot your deployment:

    kubectl -n msr describe pods <pods>
    
    and/or
    
    kubectl -n msr logs <pods>
    
Expose the MSR Cache

To provide external access to your MSR cache you must expose the cache Pods.

Important

  • Expose your MSR cache through only one external interface.

  • To ensure TLS certificate validity, you must expose the cache through the same interface for which you previously created a certificate.

Kubernetes supports several methods for exposing a service, based on your infrastructure and your environment. Detail is offered below for the NodePort method and the Ingress Controllers method.

NodePort method
  1. Ensure that a worker node FQDN was included as a SAN in the TLS certificate you created earlier, then access the MSR cache through an exposed port on that worker node FQDN.

    cat > msrcacheservice.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: msr-cache
      namespace: msr
    spec:
      type: NodePort
      ports:
      - name: https
        port: 443
        targetPort: 443
        protocol: TCP
      selector:
        app: msr-cache
    EOF
    
    kubectl create -f msrcacheservice.yaml
    
  2. Run the following command to determine the port on which you have exposed the MSR cache:

    kubectl -n msr get services
    
  3. Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using both the external address of a worker node and the NodePort:

    curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog
    {"repositories":[]}
    
Ingress Controllers method

In the ingress controller exposure scheme, you expose the MSR cache through an ingress object.

  1. Create a DNS rule in your environment that resolves an MSR cache external FQDN address to the address of your ingress controller. The same MSR cache external FQDN must also be present as a SAN in the MSR cache certificate.

    cat > msrcacheingress.yaml <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: msr-cache
      namespace: msr
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
        nginx.ingress.kubernetes.io/secure-backends: "true"
    spec:
      tls:
      - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      rules:
      - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
        http:
          paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: msr-cache
                port:
                  number: 443
    EOF
    
    kubectl create -f msrcacheingress.yaml
    
  2. Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using the address that you previously defined in the ingress definition file.

curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog
{"repositories":[]}

Deploy an MSR cache with Swarm

Note

The MSR on Swarm deployment detailed herein assumes that you have a running MSR deployment and that you have provisioned multiple nodes and joined them into a swarm.

You will deploy your MSR cache as a Docker service, thus ensuring that Docker automatically schedules and restarts the service in the event of a problem.

You manage the cache configuration using a Docker configuration and the TLS certificates using Docker secrets. This setup enables you to securely manage the configuration for the node on which the cache is running.

Prepare the cache deployment

Important

To ensure MSR cache functionality, Mirantis highly recommends that you deploy the cache on a dedicated node.

Label the cache node

To target your deployment to the cache node, you must first label that node. To do this, SSH into a manager node of the swarm within which you want to deploy the MSR cache.

docker node update --label-add msr.cache=true <node-hostname>

Note

If you are using MKE to manage the swarm, use a client bundle to configure your Docker CLI client to connect to the swarm.

Configure the MSR cache

Following cache preparation, you will have the following file structure on your workstation:

├── docker-stack.yml
├── config.yml          # The cache configuration file
└── certs
    ├── cache.cert.pem  # The cache public key certificate
    ├── cache.key.pem   # The cache private key
    └── msr.cert.pem    # MSR CA certificate

With the configuration detailed herein, the cache fetches image layers from MSR and retains a local copy for 24 hours. After that, if a user requests that image layer, the cache re-fetches it from MSR.

The cache is configured to persist data inside its container. If something goes wrong with the cache service, Docker automatically redeploys a new container, but the previously cached data does not persist. You can customize the storage parameters, if you want to store the image layers using a persistent storage back end.

Also, the cache is configured to use port 443. If you are already using that port in the swarm, update the deployment and configuration files to use another port. Remember to create firewall rules for the port you choose.

Edit the docker-stack.yml file

The docker-stack.yml file enables you to deploy the cache with a single command.

Edit the sample MSR cache configuration file that follows to fit your environment:

version: "3.3"
services:
  cache:
    image: registry.mirantis.com/msr/msr-content-cache:3.0.7
    entrypoint:
      - "/start.sh"
      - "/config.yml"
    ports:
      - 443:443
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.msr.cache == true]
      restart_policy:
        condition: on-failure
    configs:
      - source: config.yml
        target: /config.yml
    secrets:
      - msr.cert.pem
      - cache.cert.pem
      - cache.key.pem
configs:
  config.yml:
    file: ./config.yml
secrets:
  msr.cert.pem:
    file: ./certs/msr.cert.pem
  cache.cert.pem:
    file: ./certs/cache.cert.pem
  cache.key.pem:
    file: ./certs/cache.key.pem
Edit the config.yml file

You configure the MSR cache using a configuration file that you mount into the container.

Edit the sample MSR cache configuration file that follows to fit your environment, entering the relevant external MSR cache, worker node, or external load balancer FQDN. Once configured, the cache fetches image layers from MSR and maintains a local copy for 24 hours. If a user requests the image layer after that period, the cache re-fetches it from MSR.

version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: '0.0.0.0:443'
  secret: generate-random-secret
  host: 'https://<cache-url>'
  tls:
    certificate: /run/secrets/cache.cert.pem
    key: /run/secrets/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url>:<msr-port>
        cas:
          - /run/secrets/msr.cert.pem
Create the MSR cache certificates

To deploy the MSR cache with a TLS endpoint, you must generate a TLS certificate and key from a certificate authority.

Be aware that to expose the MSR cache through a node port or a host port, you must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.

Create the MSR cache certificates:

  1. Create a private key and certificate signing request (CSR) for the cache, and have your certificate authority issue the cache certificate from the CSR. For example:

    openssl req -new -newkey rsa:4096 -nodes \
      -keyout cache.key.pem -out cache.csr \
      -subj "/CN=<external-fqdn-msrcache>" \
      -addext "subjectAltName=DNS:<external-fqdn-msrcache>"
    
  2. Create a directory called certs and place in it the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.

  3. Configure the cert pem files, as detailed below:

    cache.cert.pem

    Add the public key certificate for the cache. If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file.

    cache.key.pem

    Add the unencrypted private key for the cache.

    msr.cert.pem

    Add the MSR CA certificate, which configures the cache to trust MSR. This is necessary if you are using the default MSR configuration, or if MSR is using TLS certificates signed by your own certificate authority. If you have customized MSR to use TLS certificates issued by a globally trusted certificate authority, the cache automatically trusts MSR and configuring msr.cert.pem is not necessary. To obtain the MSR CA certificate:

    curl -sk https://<msr-url>/ca > certs/msr.cert.pem
    
Deploy the cache
  1. Run the following command to initiate cache deployment:

    docker stack deploy --compose-file docker-stack.yml msr-cache
    
  2. Verify the successful deployment of the cache:

    docker stack ps msr-cache
    

    Docker should display the msr-cache stack as running.

  3. Register the cache with MSR.

    You must configure MSR to recognize the cache. Use the POST /api/v0/content_caches API to do this, by way of the MSR interactive API documentation.

    1. Access the MSR web UI.

    2. Select API docs from the top-right menu.

    3. Navigate to POST /api/v0/content_caches and click to expand it.

    4. Type the following into the body field:

      {
        "name": "region-asia",
        "host": "https://<cache-url>:<cache-port>"
      }
      
    5. Click Try it out! to make the API call.

  4. Configure your user account.

    In the MSR web UI, navigate to your Account, click the Settings tab, and set the Content Cache option to the newly deployed cache.

    Note

    To set up user accounts for multiple users simultaneously, use the /api/v0/accounts/{username}/settings API endpoint.

    Henceforth, you will be using the cache whenever you pull images.

  5. Test the cache.

    1. Verify that the cache is functioning properly:

      1. Push an image to MSR.

      2. Verify that the cache is configured to your user account.

      3. Delete the image from your local system.

      4. Pull the image from MSR.

    2. Check the logs to verify that the cache is serving your request:

      docker service logs --follow msr-cache_cache
      

      Issues with TLS authentication are the most common causes of cache misconfiguration, including:

      • MSR not trusting the cache TLS certificates.

      • The cache not trusting MSR TLS certificates.

      • Your machine not trusting MSR or the cache.

      You can use the logs to troubleshoot cache misconfigurations.

  6. Clean up sensitive files, such as private keys for the cache, by running the following command:

    rm -rf certs
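
The cache registration performed in step 3 through the web UI can also be scripted. The sketch below validates the request body locally before sending it; the cache name, host, MSR URL, and credentials are placeholders, and the final curl is commented out because it requires a reachable MSR instance (python3 is assumed for the JSON check):

```shell
# Placeholder values: replace the name, host, URL, and credentials.
BODY='{"name": "region-asia", "host": "https://msrcache.example.com:443"}'

# Confirm the body is valid JSON before sending it:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body OK"

# Register the cache (requires a reachable MSR and admin credentials):
# curl -u <admin-user>:<password> -X POST "https://<msr-url>/api/v0/content_caches" \
#   -H "Content-Type: application/json" -d "$BODY"
```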
    
Configure caches for high availability

To ensure that your MSR cache is always available to users and is highly performant, configure it for high availability.

You will require the following to deploy MSR caches with high availability:

  • Multiple nodes, one for each cache replica

  • A load balancer

  • Shared storage system that has read-after-write consistency

With high availability, Mirantis recommends that you configure the replicas to store data using a shared storage system. MSR cache deployment is the same, though, regardless of whether you are deploying a single replica or multiple replicas.

When using a shared storage system, once an image layer is cached, any replica is able to serve it to users without having to fetch a new copy from MSR.

MSR caches support the following storage systems:

  • Alibaba Cloud Object Storage Service

  • Amazon S3

  • Azure Blob Storage

  • Google Cloud Storage

  • NFS

  • OpenStack Swift

Note

If you are using NFS as a shared storage system, ensure read-after-write consistency by verifying that the shared directory is configured with:

/dtr-cache *(rw,root_squash,no_wdelay)

In addition, mount the NFS directory on each node where you will deploy an MSR cache replica.

To configure caches for high availability:

  1. Use SSH to log in to a manager node of the cluster on which you want to deploy the MSR cache. If you are using MKE to manage that cluster, you can also use a client bundle to configure your Docker CLI client to connect to the cluster.

  2. Label each node that is going to run the cache replica:

    docker node update --label-add msr.cache=true <node-hostname>
    
  3. Create the cache configuration files by following the instructions for deploying a single cache replica. Be sure to adapt the storage object, using the configuration options for the shared storage of your choice.

  4. Deploy a load balancer of your choice to balance requests across your set of replicas.
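
For example, to share cached layers across replicas through Amazon S3, the filesystem section of the cache configuration could be replaced with the registry's s3 storage driver. This is a sketch; the bucket, region, and credentials are placeholders:

```yaml
storage:
  delete:
    enabled: true
  s3:
    accesskey: <access-key>
    secretkey: <secret-key>
    region: us-east-1
    bucket: msr-cache-storage
```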

MSR cache configuration

MSR caches are based on Docker Registry, and use the same configuration file format. The MSR cache extends the Docker Registry configuration file format, though, introducing a new middleware called downstream with three configuration options: blobttl, upstreams, and cas:

middleware:
  registry:
      - name: downstream
        options:
          blobttl: 24h
          upstreams:
            - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
          cas:
            - <Absolute path to next-hop upstream registry or content cache CA certificate in the container's filesystem>

The following table offers detail specific to MSR caches for each parameter:

Parameter

Required

Description

blobttl

no

The TTL (Time to Live) value for blobs in the cache, offered as a positive integer and suffix denoting a unit of time.

Valid values:

  • ns (nanoseconds)

  • us (microseconds)

  • ms (milliseconds)

  • s (seconds)

  • m (minutes)

  • h (hours)

Note

If the suffix is omitted, the system interprets the value as nanoseconds.

If blobttl is configured, storage.delete.enabled must be set to true.

cas

no

An optional list of absolute paths to PEM-encoded CA certificates of upstream registries or content caches.

upstreams

yes

A list of externally-reachable addresses for upstream registries or content caches. If you specify more than one host, the cache pulls from them in a round-robin fashion.
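
To illustrate how the blobttl duration suffixes scale, the following sketch converts the s, m, and h forms to seconds. This is an illustration only; MSR performs this parsing internally, and the fractional units (ns, us, ms) are omitted:

```shell
# Convert a blobttl-style duration (s, m, or h suffix) to seconds.
to_seconds() {
  v=${1%?}               # numeric portion
  suffix=${1#"$v"}       # trailing unit character
  case $suffix in
    s) echo "$v" ;;
    m) echo $((v * 60)) ;;
    h) echo $((v * 3600)) ;;
    *) echo "unsupported suffix: $suffix" >&2; return 1 ;;
  esac
}

to_seconds 24h   # prints 86400
```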

Garbage collection

Mirantis Secure Registry (MSR) supports garbage collection, the automatic cleanup of unused image layers. You can configure garbage collection to occur at regularly scheduled times, as well as set a specific duration for the process.

Garbage collection first identifies and marks unused image layers, then subsequently deletes the layers that have been marked.

Schedule garbage collection
  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System and select the Garbage collection tab.

  3. Set the duration for the garbage collection job:

    • Until done

    • For <number> minutes

    • Never

  4. Set the garbage collection schedule:

    • Custom cron schedule (<hour, date, month, day>)

    • Daily at midnight UTC

    • Every Saturday at 1AM UTC

    • Every Sunday at 1AM UTC

    • Do not repeat

  5. Click either Save & Start or Save. Save & Start runs the garbage collection job immediately and Save runs the job at the next scheduled time.

  6. At the scheduled start time, verify that garbage collection has begun by navigating to the Job Logs tab.

How garbage collection works

In conducting garbage collection, MSR performs the following actions in sequence:

  1. Establishes a cutoff time.

  2. Marks each referenced manifest file with a timestamp. When manifest files are pushed to MSR, they are also marked with a timestamp.

  3. Sweeps each manifest file that does not have a timestamp after the cutoff time.

  4. Deletes the file if it is never referenced, meaning that no image tag uses it.

  5. Repeats the process for blob links and blob descriptors.

Each image stored in MSR is comprised of the following files:

  • The image filesystem, which consists of a list of unioned image layers.

  • A configuration file, which contains the architecture of the image along with other metadata.

  • A manifest file, which contains a list of all the image layers and the configuration file for the image.

MSR tracks these files in its metadata store, using RethinkDB, doing so in a content-addressable manner in which each file corresponds to a cryptographic hash of the file content. Thus, if two image tags hold exactly the same content, MSR only stores that content once, regardless of how the tags are named. For example, if wordpress:4.8 and wordpress:latest have the same content, MSR will only store that content once. If you delete one of these tags, the other will remain intact.

As a result, when you delete an image tag, MSR cannot delete the underlying files as it is possible that other tags also use the same underlying files.
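
The content-addressable scheme is easy to demonstrate with sha256sum (assumed available): identical content always produces the identical digest, so a content-addressable store keeps a single copy no matter how many tags reference it:

```shell
# Two files standing in for two tags with identical layer content:
printf 'layer-data' > tag-a.bin
printf 'layer-data' > tag-b.bin

# Both digests are identical, so the content is stored only once:
sha256sum tag-a.bin tag-b.bin
```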

Create a new repository when pushing an image

By default, MSR only allows users to push images to repositories that already exist, and for which the user has write privileges. Alternatively, you can configure MSR to create a new private repository when an image is pushed.

To create a new repository when pushing an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Settings and scroll down to Repositories.

  3. Slide the Create repository on push toggle to the right.

  4. Alternatively, enable the setting through the API:

    curl --user <admin-user>:<password> \
    --request POST "<msr-url>/api/v0/meta/settings" \
    --header "accept: application/json" \
    --header "content-type: application/json" \
    --data "{ \"createRepositoryOnPush\": true}"
    

Pushing an image to a non-existing repository will create a new repository using the following naming convention:

  • Non-admin users: <user-name>/<repository>

  • Admin users: <organization>/<repository>

Use a web proxy

Mirantis Secure Registry (MSR) makes outgoing connections to check for new versions, automatically renew its license, and update its vulnerability database. If MSR cannot access the Internet, you must manually apply any updates.

One way to keep your environment secure while still allowing MSR access to the Internet is to deploy a web proxy. If you have an HTTP or HTTPS proxy, you can configure MSR to use it.

Configure web proxy usage on Kubernetes

You can configure web proxy usage on Kubernetes using either the MSR Operator or a Helm chart.

  1. In the custom resource manifest, insert the following values to add the HTTP_PROXY and HTTPS_PROXY environment variables to all containers in your MSR deployment:

    spec:
      extraEnv:
        HTTP_PROXY: "<domain>:<port>"
        HTTPS_PROXY: "username:password@<domain>:<port>"
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    
  4. Verify the MSR configuration by reviewing the Pod resources that the MSR Helm chart deploys for the environment variables:

    kubectl get deploy/msr-registry -o jsonpath='{.spec.template.spec.containers[*].env}'
    

    Example output:

    [{"name":"HTTP_PROXY","value":"example.com:444"}]
    
  1. In values.yaml, insert the following snippet to add the HTTP_PROXY and HTTPS_PROXY environment variables to all containers in your MSR deployment:

    global:
      extraEnv:
        HTTP_PROXY: "<domain>:<port>"
        HTTPS_PROXY: "username:password@<domain>:<port>"
    
  2. Apply the newly inserted values:

    helm upgrade msr msrofficial/msr --version 1.0.0 -f values.yaml
    
  3. Verify the MSR configuration by reviewing the Pod resources that the MSR Helm chart deploys for the environment variables:

    kubectl get deploy/msr-registry -o jsonpath='{.spec.template.spec.containers[*].env}'
    

    Example output:

    [{"name":"HTTP_PROXY","value":"example.com:444"}]
    
Configure web proxy usage on Swarm
  1. Update your MSR services to include the HTTP_PROXY and HTTPS_PROXY environment variables:

    docker service update msr_msr-api-server \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-garant \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-jobrunner \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-nginx \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-notary-server \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-notary-signer \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-registry \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-scanningstore \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-enzi-api \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    docker service update msr_msr-enzi-worker \
      --env-add HTTP_PROXY=<domain>:<port> \
      --env-add HTTPS_PROXY=<username>:<password>@<domain>:<port>
    
  2. Verify that each environment variable is appropriately set:

    docker service inspect <msr-service-name> --format '{{.Spec.TaskTemplate.ContainerSpec.Env }}' | grep 'HTTP_PROXY\|HTTPS_PROXY'
    

Manage applications

In addition to storing individual and multi-architecture container images and plugins, MSR supports the storage of applications as their own distinguishable type.

Applications include the following two tags:

Invocation: <app-tag>-invoc

Type: container image represented by OS and architecture, for example, linux/amd64.

Under the hood: uses Mirantis Container Runtime. The Docker daemon is responsible for building and pushing the image. Includes scan results for the invocation image.

Application with bundled components: <app-tag>

Type: application.

Under the hood: uses the application client to build and push the image. Includes scan results for the bundled components. Docker App is an experimental Docker CLI feature.

Use docker app push to push your applications to MSR. For more information, refer to Docker App in the official Docker documentation.

View application vulnerabilities

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Repositories.

  3. Select the desired repository and click the Tags tab.

  4. Click View details on the <app-tag> or <app-tag>-invoc row.

Limitations

  • You cannot sign an application as the Notary signer cannot sign Open Container Initiative (OCI) indices.

  • Scanning-based policies do not take effect until after all images bundled in the application have been scanned.

  • Docker Content Trust (DCT) does not work for applications and multi-architecture images, which have the same underlying structure.

Parity with existing repository and image features

The following repository and image management events also apply to applications:

Manage images

Create a repository

MSR requires that you create the image repository before pushing any images to the registry.

To create an image repository:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click New repository.

  4. Select the required namespace and enter the name for your repository using only lowercase letters, numbers, underscores, and hyphens.

  5. Select whether your repository is public or private:

    • Public repositories are visible to all users, but can only be modified by those with write permissions.

    • Private repositories are visible only to users with repository permissions.

  6. Optional. Click Show advanced settings:

    • Select On to make tags immutable, and thus unable to be overwritten.

    • Select On push to configure images to be scanned automatically when they are pushed to MSR. You will also be able to scan them manually.

  7. Click Create.

Note

To enable tag pruning, refer to Set a tag limit. This feature requires that tag immutability is turned off at the repository level.

Image names in MSR

MSR image names must have the following characteristics:

  • Both the organization and the repository name must be fewer than 56 characters in length.

  • The complete image name, which includes the domain, organization, and repository name, must not exceed 255 characters.

  • When you tag your images for MSR, they must take the following form:

    <msr-domain-name>/<user-or-org>/<repository-name>.

    For example, 127.0.0.1/admin/nginx.
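The naming rules above can be expressed as a short shell check. The following is an illustrative sketch only; the validate_msr_name helper is hypothetical and not part of MSR:

```shell
# Hypothetical helper that applies the MSR naming rules described above.
validate_msr_name() {
  local domain="$1" org="$2" repo="$3"
  # Organization and repository names: fewer than 56 characters each.
  [ "${#org}" -lt 56 ] && [ "${#repo}" -lt 56 ] || return 1
  # The complete image name must not exceed 255 characters.
  local full="${domain}/${org}/${repo}"
  [ "${#full}" -le 255 ] || return 1
  # Only lowercase letters, numbers, underscores, and hyphens in the repository name.
  case "$repo" in
    *[!a-z0-9_-]*) return 1 ;;
  esac
  return 0
}

validate_msr_name 127.0.0.1 admin nginx && echo "valid"
validate_msr_name 127.0.0.1 admin NGINX || echo "invalid"
```

Running the two sample calls prints `valid` for the well-formed name and `invalid` for the name containing uppercase letters.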

Multi-architecture images

While it is possible to enable the just-in-time creation of multi-architecture image repositories when creating a repository through the API, Mirantis does not recommend this option, as it causes Docker Content Trust to fail, among other issues. To manage Docker image manifests and manifest lists, use the experimental docker manifest command instead.

Review repository information

The MSR web UI has an Info page for each repository that includes the following sections:

  • A README file, which is editable by admin users.

  • The docker pull command for pulling the images contained in the given repository. To learn more about pulling images, refer to Pull and push images.

  • The permissions associated with the user who is currently logged in.

To view the Info section:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, click Repositories.

  3. Select the required repository by clicking the repository name rather than the namespace name that precedes the /.

    The Info tab displays by default.

To view the repository events that your permissions level has access to, hover over the question mark next to the permissions level that displays under Your permission.

Note

Your permissions list may include repository events that are not displayed in the Activity tab. Also, it is not an exhaustive list of the event types that are displayed in your activity stream. To learn more about repository events, refer to Audit repository events.

Pull and push images

Just as with Docker Hub, interactions with MSR consist of the following:

  • docker login <msr-url> authenticates the user on MSR

  • docker pull <image>:<tag> pulls an image from MSR

  • docker push <image>:<tag> pushes an image to MSR

Pull an image

Note

It is only necessary to authenticate using docker login before pulling a private image.

  1. If you need to pull a private image, log in to MSR:

    docker login <registry-host-name>
    
  2. Pull the required image:

    docker pull <registry-host-name>/<namespace>/<repository>:<tag>
    
Push an image

Before you can push an image to MSR, you must create a repository and tag your image.

  1. Create a repository for the required image.

  2. Tag the image using the host name, namespace, repository name, and tag:

    docker tag <image-name> <registry-host-name>/<namespace>/<repository>:<tag>
    
  3. Log in to MSR:

    docker login <registry-host-name>
    
  4. Push the image to MSR:

    docker push <registry-host-name>/<namespace>/<repository>:<tag>
    
  5. Verify that the image successfully pushed:

    1. Log in to the MSR web UI.

    2. In the left-side navigation panel, click Repositories.

    3. Select the relevant repository.

    4. Navigate to the Tags tab.

    5. Verify that the required tag is listed on the page.

Windows image limitations

The base layers of the Microsoft Windows base images have redistribution restrictions. When you push a Windows image to MSR, Docker only pushes the image manifest and the layers that are above the Windows base layers. As a result:

  • When a user pulls a Windows image from MSR, the Windows base layers are automatically fetched from Microsoft.

  • Because MSR does not have access to the image base layers, it cannot scan those image layers for vulnerabilities. The Windows base layers are, however, scanned by Docker Hub.

On air-gapped or similarly limited systems, you can configure Docker to push Windows base layers to MSR by adding the following line to C:\ProgramData\docker\config\daemon.json:

"allow-nondistributable-artifacts": ["<msr-host-name>:<msr-port>"]

Caution

For production environments, Mirantis does not recommend configuring Docker to push Windows base layers to MSR.
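For reference, a complete daemon.json that carries only this setting might look like the following sketch. The host name and port are placeholders, and any options already present in your daemon.json must be preserved when you edit the file:

```json
{
  "allow-nondistributable-artifacts": ["<msr-host-name>:<msr-port>"]
}
```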

Delete images

Note

If your MSR instance uses image signing, you will need to remove any trust data on the image before you can delete it. For more information, refer to Delete signed images.

To delete an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the relevant repository and navigate to the Tags tab.

  4. Select the check box next to the tags that you want to delete.

  5. Click Delete.

Alternatively, you can delete every tag for a particular image by deleting the relevant repository.

To delete a repository:

  1. Click the required repository and navigate to the Settings tab.

  2. Scroll down to Delete repository and click Delete.

Scan images for vulnerabilities

Mirantis Secure Registry (MSR) has the ability to scan images for security vulnerabilities contained in the US National Vulnerability Database. Security scan results are reported for each image tag contained in a repository.

Security scanning is available as an add-on to MSR. If security scan results are not available on your repositories, your organization may not have purchased the security scanning feature or it may be disabled. Administrator permissions are required to enable security scanning on your MSR instance.

Note

Only users with write access to a repository can manually start a scan. Users with read-only access can, however, view the scan results.

Security scan process

Scans run on demand when you initiate them in the MSR web UI or automatically when you push an image to the registry.

The scanner first performs a binary scan on each layer of the image, identifies the software components in each layer, and indexes the SHA of each component in a bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, so vulnerable components are discovered even if they are statically linked or use a different name.

The scan then compares the SHA of each component against the US National Vulnerability Database that is installed on your MSR instance. When this database is updated, MSR verifies whether the indexed components have newly discovered vulnerabilities.
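The indexing-and-lookup idea can be illustrated with a toy sketch: hash a component identifier, then check the digest against a local vulnerability table. The file name, component name, and CVE entry here are all hypothetical, and the real scanner works on binary layer contents rather than component names:

```shell
# Hypothetical sketch of component indexing: hash a component,
# then check the digest against a local vulnerability index.
digest=$(printf 'openssl-1.0.2' | sha256sum | cut -d' ' -f1)

# A one-line stand-in for the vulnerability database (digest -> CVE).
printf '%s CVE-2016-0800\n' "$digest" > cve-index.txt

# A match means the component is flagged as vulnerable.
grep "$digest" cve-index.txt
```

Because the lookup is keyed on the digest rather than the component name, a renamed or statically linked copy of the same bits still matches.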

MSR has the ability to scan both Linux and Windows images. However, because Docker defaults to not pushing the foreign layers of Windows images, MSR does not scan those layers. If you want MSR to scan your Windows images fully, configure Docker to push the Windows base layers as well, as described in Windows image limitations.

Scan images

Security scan on push

By default, a security scan runs automatically when you push an image to the registry.

To view the results of a security scan:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click View details on the required tag.

Manual scanning

You can manually start a scan for images in repositories that you have write access to.

To manually scan an image:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click Start a scan on the required image tag.

  5. To review the scan results, click View details.

Change the scanning mode

You can change the scanning mode for each individual repository at any time. You might want to disable scanning in either of the following scenarios:

  • You are pushing an image repeatedly during troubleshooting and do not want to waste resources on rescanning.

  • A repository contains legacy code that is not used or updated frequently.

Note

To change an individual repository scanning mode, you must have write or administrator access to the repository.

To change the scanning mode:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Settings tab.

  4. Scroll down to Image scanning and under Scan on push, select either On push or Manual.

Review security scan results

Once MSR has run a security scan for an image, you can view the results.

Scan summaries

A summary of the results displays next to each scanned tag on the repository Tags tab, and presents in one of the following ways:

  • If the scan did not find any vulnerabilities, the word Clean displays in green.

  • If the scan found vulnerabilities, the severity level, Critical, Major, or Minor, displays in red or orange with the number of vulnerabilities. If the scan could not detect the version of a component, the vulnerabilities are reported for all versions of the component.

Detailed report

To view the full scanning report, click View details for the required image tag.

The top of the resulting page displays metadata about the image, including the SHA, image size, last push date, the user who initiated the push, the security scan summary, and the security scan progress.

The scan results for each image include two different modes so you can quickly view details about the image, its components, and any vulnerabilities found:

  • The Layers view lists the layers of the image in the order that they are built by the Dockerfile.

    This view can help you identify which command in the build introduced the vulnerabilities, and which components are associated with that command. Click a layer to see a summary of its components. You can then click on a component to switch to the Component view and obtain more details about the specific item.

    Note

    The Layers view can be long, so be sure to scroll down if you do not immediately see the reported vulnerabilities.

  • The Components view lists the individual component libraries indexed by the scanning system in order of severity and number of vulnerabilities found, with the most vulnerable library listed first.

    Click an individual component to view details on the vulnerability it introduces, including a short summary and a link to the official CVE database report. A single component can have multiple vulnerabilities, and the scan report provides details on each one. In addition, the component details include the license type used by the component, the file path to the component in the image, and the number of layers that contain the component.

Note

The CVE count presented in the scan summary of an image with multiple layers may differ from the count obtained through summation of the CVEs for each individual image component. This is because the scan summary performs a summation of the CVEs in every layer of the image, and a component may be present in more than one layer of an image.
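The counting difference can be shown with a toy example that uses hypothetical CVE identifiers: the scan summary sums over layers, while the component view counts each vulnerability once:

```shell
# Hypothetical CVE lists for two layers that share one vulnerable component.
layer1="CVE-2021-0001 CVE-2021-0002"
layer2="CVE-2021-0002 CVE-2021-0003"

# Scan summary: sum across every layer (shared components counted twice).
summary=$(printf '%s\n' $layer1 $layer2 | wc -l | tr -d ' ')

# Component view: each vulnerability counted once.
unique=$(printf '%s\n' $layer1 $layer2 | sort -u | wc -l | tr -d ' ')

echo "summary=$summary unique=$unique"   # prints: summary=4 unique=3
```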

What to do next

If you find that an image in your registry contains vulnerable components, you can use the linked CVE scan information in each scan report to evaluate the vulnerability and decide what to do.

If you discover vulnerable components, you should verify whether there is an updated version available where the security vulnerability has been addressed. If necessary, you can contact the component maintainers to ensure that the vulnerability is being addressed in a future version or a patch update.

If the vulnerability is in a base layer, such as an operating system, you might not be able to correct the issue in the image. In this case, you can switch to a different version of the base layer, or you can find a less vulnerable equivalent.

You can address vulnerabilities in your repositories by updating the images to use updated and corrected versions of vulnerable components or by using a different component that offers the same functionality. When you have updated the source code, run a build to create a new image, tag the image, and push the updated image to your MSR instance. You can then re-scan the image to confirm that you have addressed the vulnerabilities.

Override a vulnerability

MSR security scanning sometimes reports image vulnerabilities that you know have already been fixed. In such cases, it is possible to hide the vulnerability warning.

To override a vulnerability:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click View details on the required tag.

  5. To review the vulnerabilities associated with each component in the image, click the Components tab.

  6. Select the component with the vulnerability you want to ignore, navigate to the vulnerability, and click Hide.

Once dismissed, the vulnerability is hidden system-wide and will no longer be reported as a vulnerability on affected images with the same layer IDs or digests. In addition, MSR will not re-evaluate the promotion policies that have been set up for the repository.

After hiding a vulnerability, you can re-evaluate the promotion policy for the affected image.

To re-evaluate the promotion policy:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Click the required repository and select the Tags tab.

  4. Click View details on the required tag.

  5. Click Promote.

Prevent tags from being overwritten

By default, users can push the same tag multiple times to a repository, thus overwriting older versions of the tag. This can, however, lead to problems if a user pushes an image that has the same tag name but different functionality. In addition, when images are overwritten, it can be difficult to determine which build originally generated the image.

To prevent tags from being overwritten, you can configure a repository to be immutable. Once configured, MSR will not allow another image with the same tag to be pushed to the repository.

Note

Enabling tag immutability disables repository tag limits.

Make tags immutable

You can enable tag immutability when creating a new repository or at a later time.

To enable tag immutability when creating a new repository:

  1. Log in to the MSR web UI.

  2. Follow the steps in Create a repository.

  3. On the new repository creation page, click Show advanced settings.

  4. Under Immutability, select On.

To enable tag immutability on an existing repository:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, select Repositories.

  3. Select the relevant repository and navigate to the Settings tab.

  4. In the General section under Immutability, select On.

Once tag immutability is enabled, MSR will return an error message such as the following when you try to push a tag that already exists:

docker push msr-example.com/library/wordpress:latest
unknown: tag=latest cannot be overwritten because
msr-example.com/library/wordpress is an immutable repository

Sign images with Docker Content Trust

Docker Content Trust (DCT) allows you to sign image tags, thus giving consumers a way to verify the integrity of your images. Users interact with DCT using a combination of docker trust and notary commands.

Configure image signing

To configure image signing, you must enable Docker Content Trust (DCT) and initiate a repository for use with DCT.

Enable DCT

While MSR supports DCT use by default, you must opt in to use it on the Docker client side by setting the following environment variable:

export DOCKER_CONTENT_TRUST=1

Important

Mirantis recommends that you add this environment variable to your shell login configuration, so that it is always active.

Trust MSR CA certificate

If your MSR instance uses a certificate that is issued by a well-known, public certificate authority (CA), then skip this section and proceed to Configure repository for signing.

If the MSR certificate authority (CA) is self-signed, you must configure the machine that runs the docker trust commands to trust the CA, as detailed in this section.

Caution

It is not possible to use DCT with a remote MSR that is set up as an insecure registry in the Docker daemon configuration. This is because DCT operations are not processed by the Docker daemon, but are instead sent directly to the back-end Notary components that handle signing. It is not possible to configure the back-end components to allow insecure operation.

To configure your machine to trust a self-signed CA:

  1. Create a certificate directory for the MSR host in the Docker configuration directory:

    export MSR=<registry-hostname>
    mkdir -p ~/.docker/certs.d/${MSR}
    
  2. Download the MSR CA certificate into the newly created directory:

    curl -ks https://${MSR}/ca > ~/.docker/certs.d/${MSR}/ca.crt
    
  3. Restart the Docker daemon.

  4. Verify that you do not receive certificate errors when accessing MSR:

    docker login ${MSR}
    
  5. Create a symlink between the certs.d and tls directories. This link allows the Docker client to share the same CA trust as established for the Docker daemon in the preceding steps.

    ln -s certs.d ~/.docker/tls
    
Configure repository for signing

Initialize a repository for use with DCT by pushing an image to the relevant repository. You will be prompted for both a new root key password and a new repository key password, as displayed in the example output.

docker push <registry-host-name>/<namespace>/<repository>:<tag>

Example output:

The push refers to repository [<registry-host-name>/<namespace>/<repository>]
b2d5eeeaba3a: Layer already exists
latest: digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 size: 528
Signing and pushing trust metadata
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
choose a long, complex passphrase and be careful to keep the password and the
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new root key with ID 8128255: <root-password>
Repeat passphrase for new root key with ID 8128255: <root-password>
Enter passphrase for new repository key with ID 493e995: <repository-password>
Repeat passphrase for new repository key with ID 493e995: <repository-password>
Finished initializing "<registry-host-name>/<namespace>/<repository>"
Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>

The root and repository keys are kept only locally in your content trust store.

Sign an image

Once you have initialized a repository for use with Docker Content Trust (DCT), you can sign images.

To sign an image:

  1. Push the required image to MSR. You will be prompted for the repository key password, as displayed in the example output.

    docker push <registry-host-name>/<namespace>/<repository>:<tag>
    

    Example output:

    The push refers to repository [<registry-host-name>/<namespace>/<repository>]
    b2d5eeeaba3a: Layer already exists
    latest: digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 size: 528
    Signing and pushing trust metadata
    Enter passphrase for repository key with ID c549efc: <repository-password>
    Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>
    
  2. Inspect the repository trust metadata to verify that the image is signed by the user:

    docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
    

    Example output:

    Signatures for <registry-host-name>/<namespace>/<repository>
    
    SIGNED TAG   DIGEST                                                             SIGNERS
    <tag>        def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748   Repo Admin
    
    Administrative keys for <registry-host-name>/<namespace>/<repository>
    
      Repository Key:       e0d15a24b7...540b4a2506b
      Root Key:             b74854cb27...a72fbdd7b9a
    
Add an additional signer

You have the option to sign an image using multiple user keys. This topic describes how to add a regular user as a signer in addition to the repository admin.

Note

Signers in Docker Content Trust (DCT) do not correspond with users in MSR, thus you can add a signer using a user name that does not exist in MSR.

To add a signer:

  1. On the user machine, obtain a signing key pair:

    docker trust key generate <user-name>
    

    Example output:

    Generating key for <user-name>...
    Enter passphrase for new <user-name> key with ID c549efc: <user-password>
    Repeat passphrase for new <user-name> key with ID c549efc: <user-password>
    Successfully generated and loaded private key. Corresponding public key available:
    /path/to/public/key/<user-name>.pub
    

    The private key is password protected and kept in the local trust store, where it remains throughout all signing operations. The public key is stored in the .pub file, which you must provide to the repository administrator to add the user as a signer.

  2. Provide the user public key to the repository admin.

  3. On the admin machine, add the user as a signer to the repository. You will be prompted for the repository key password that you created in Configure repository for signing, as displayed in the example output.

    docker trust signer add --key /path/to/public/key/<user-name>.pub <user-name> <registry-host-name>/<namespace>/<repository>
    

    Example output:

    Adding signer "<user-name>" to <registry-host-name>/<namespace>/<repository>...
    Enter passphrase for repository key with ID 493e995: <repository-password>
    Successfully added signer: <user-name> to <registry-host-name>/<namespace>/<repository>
    
  4. Inspect the repository trust metadata to verify that the user is correctly added:

    docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
    

    Example output:

    Signatures for <registry-host-name>/<namespace>/<repository>
    
    SIGNED TAG   DIGEST                                                             SIGNERS
    <tag>        def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748   Repo Admin
    
    List of signers and their keys for <registry-host-name>/<namespace>/<repository>
    
    SIGNER           KEYS
    <user-name>      c9f9039a520a
    
    Administrative keys for <registry-host-name>/<namespace>/<repository>
    
      Repository Key:       e0d15a24b7...540b4a2506b
      Root Key:             b74854cb27...a72fbdd7b9a
    
  5. On the user machine, sign the image as the regular user. You will be prompted for the user key password, as displayed in the example output.

    docker trust sign <registry-host-name>/<namespace>/<repository>:<tag>
    

    Example output:

    Signing and pushing trust metadata for <registry-host-name>/<namespace>/<repository>:<tag>
    Enter passphrase for <user-name> key with ID 927f303: <user-password>
    Enter passphrase for <user-name> key with ID 5ac7d9a: <user-password>
    Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>
    
  6. Inspect the repository trust metadata to verify that the image is signed by the user:

    docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
    

    Example output:

    Signatures for <registry-host-name>/<namespace>/<repository>
    
    SIGNED TAG   DIGEST                       SIGNERS
    <tag>        5b49c8e2c89...5bb69e2033     <user-name>
    
    List of signers and their keys for <registry-host-name>/<namespace>/<repository>
    
    SIGNER         KEYS
    <user-name>    927f30366699
    
    Administrative keys for <registry-host-name>/<namespace>/<repository>
    
      Repository Key:       e0d15a24b7...540b4a2506b
      Root Key:             b74854cb27...a72fbdd7b9a
    

    Note

    Once an additional signer signs an image, the repository admin is no longer listed under SIGNERS.

Delete trust data

Repositories that contain trust metadata cannot be deleted until the trust metadata is removed. Doing so requires use of the Notary CLI.

To delete trust metadata from a repository:

Run the following command to delete the trust metadata. You will be prompted for your user name and password, as displayed in the example output.

notary delete <registry-host-name>/<namespace>/<repository> --remote

Example output:

Deleting trust data for <registry-host-name>/<namespace>/<repository>
Enter username: <user-name>
Enter password: <password>
Successfully deleted local and remote trust data for <registry-host-name>/<namespace>/<repository>

Note

If you do not include the --remote flag, Notary deletes local cached content but does not delete data from the Notary server.

Delete signed images

To delete a signed image, you must first remove trust data for all of the roles that have signed the image. After you remove the trust data, proceed to deleting the image, as described in Delete images.

To identify the roles that signed an image:

  1. Configure your Notary client.

  2. List the roles that are trusted to sign the image:

    notary delegation list <registry-host-name>/<namespace>/<repository>
    

    Example output:

    ROLE                PATHS             KEY IDS                  THRESHOLD
    ----                -----             -------                  ---------
    targets/releases    "" <all paths>    c3470c45cefde5...2ea9bc8    1
    targets/qa          "" <all paths>    c3470c45cefde5...2ea9bc8    1
    

    In this example, the repository owner delegated trust to the targets/releases and targets/qa roles.

  3. For each role listed in the previous step, identify whether it signed the image:

    notary list <registry-host-name>/<namespace>/<repository> --roles <role-name>
    

To remove trust data for a role:

Note

Only users with private keys that have the required roles can perform this operation.

For each role that signed the image, remove the trust data for that role:

notary remove <registry-host-name>/<namespace>/<repository> <tag> \
--roles <role-name> --publish

The image will display as unsigned once the trust data has been removed for all of the roles that signed the image.

Use Docker Content Trust with a remote MKE cluster

For more advanced deployments, you may want to share one Mirantis Secure Registry across multiple Mirantis Kubernetes Engine clusters. However, customers who want to adopt this model alongside the Only Run Signed Images MKE feature run into problems, because each MKE operates an independent set of users.

Docker Content Trust (DCT) gets around this problem, since users from a remote MKE are able to sign images in the central MSR and still apply runtime enforcement.

In the following example, we connect the MSR managed by MKE cluster 1 to a remote MKE cluster, which we call MKE cluster 2, sign an image with a user from MKE cluster 2, and provide runtime enforcement within MKE cluster 2. This process can be repeated to integrate MSR with multiple remote MKE clusters, signing the image with users from each environment and then providing runtime enforcement in each remote MKE cluster separately.

Note

Before attempting this guide, familiarize yourself with Docker Content Trust and Only Run Signed Images on a single MKE. Many of the concepts within this guide may be new without that background.

Prerequisites
  • Cluster 1, running MKE 3.5.x or later, with an MSR 2.9.x or later deployed within the cluster.

  • Cluster 2, running MKE 3.5.x or later, with no MSR node.

  • Nodes on cluster 2 must trust the certificate authority that signed the MSR TLS certificate. You can test this by logging in to a cluster 2 virtual machine and running curl https://msr.example.com.

  • The MSR TLS certificate must be properly configured: the Loadbalancer/Public Address field must be set, and that address must be included in the certificate.

  • A machine with MCR 20.10.x or later installed, as this contains the relevant docker trust commands.

Register MSR with a remote Mirantis Kubernetes Engine

As there is no registry running within cluster 2, MKE does not know by default where to check for trust data. Therefore, the first step is to register MSR with the remote MKE in cluster 2. In a standard MSR installation, this registration happens automatically with the local MKE, that is, cluster 1.

Note

The registration process allows the remote MKE to get signature data from MSR; however, it does not provide single sign-on (SSO). Users on cluster 2 are not synced with the MKE or MSR of cluster 1. Therefore, when pulling images, registry authentication must still be passed as part of the service definition if the repository is private. See the Kubernetes example.

To add a new registry, retrieve the Certificate Authority (CA) used to sign the MSR TLS Certificate through the MSR URL’s /ca endpoint.

$ curl -ks https://msr.example.com/ca > dtr.crt

Next, convert the MSR certificate into a JSON configuration file for registration with the MKE on cluster 2.

A template of dtr-bundle.json appears below. Replace the host address with your MSR URL, and enter the contents of the MSR CA certificate between the newline escape sequences (\n).

Note

JSON Formatting

Ensure there are no line breaks between each line of the MSR CA certificate within the JSON file. Use your favorite JSON formatter for validation.

$ cat dtr-bundle.json
{
  "hostAddress": "msr.example.com",
  "caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
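If you prefer not to paste the certificate by hand, a small shell pipeline can produce the single-line form. The following is a sketch that uses a stand-in certificate file so it is self-contained; with a real MSR, read from the dtr.crt you downloaded from the /ca endpoint instead, and adjust the host address:

```shell
# Stand-in certificate so the sketch is self-contained; with a real MSR,
# use the dtr.crt downloaded from the /ca endpoint instead.
printf '%s\n' '-----BEGIN CERTIFICATE-----' '<contents of cert>' '-----END CERTIFICATE-----' > sample-ca.crt

# Flatten the PEM into one line, joining lines with literal \n (JSON-safe).
CA=$(awk 'NF { if (s != "") s = s "\\n"; s = s $0 } END { print s }' sample-ca.crt)

# Emit the registration JSON in the template format shown above.
printf '{\n  "hostAddress": "msr.example.com",\n  "caBundle": "%s"\n}\n' "$CA" > dtr-bundle.json

cat dtr-bundle.json
```

Because the certificate is embedded as a single line, the resulting file passes JSON validation without manual reformatting.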

Now upload the configuration file to cluster 2’s MKE through the MKE API endpoint, /api/config/trustedregistry_. To authenticate against the API of cluster 2’s MKE, we have downloaded an MKE client bundle, extracted it in the current directory, and will reference the keys for authentication.

$ curl --cacert ca.pem --cert cert.pem --key key.pem \
    -X POST \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -d @dtr-bundle.json \
    https://cluster2.example.com/api/config/trustedregistry_

Navigate to the MKE web interface to verify that the JSON file was imported successfully, as the MKE endpoint will not output anything. Select Admin > Admin Settings > Mirantis Secure Registry. If the registry has been added successfully, you should see the MSR listed.

Additionally, you can check the full MKE configuration file of cluster 2. Once downloaded, the ucp-config.toml file should contain a section called [registries]:

$ curl --cacert ca.pem --cert cert.pem --key key.pem https://cluster2.example.com/api/ucp/config-toml > ucp-config.toml

If the new registry is not shown in the list, check the ucp-controller container logs on cluster 2.

Sign an image in MSR

We will now sign an image and push it to MSR. To sign images, we need the public and private key pair of a user from cluster 2. These can be found in a client bundle: key.pem is the private key, and cert.pem is the public key on an X.509 certificate.

First, load the private key into the local Docker trust store (~/.docker/trust). The name used here is purely metadata to help keep track of which keys you have imported.

docker trust key load --name cluster2admin key.pem
Loading key from "key.pem"...
Enter passphrase for new cluster2admin key with ID a453196:
Repeat passphrase for new cluster2admin key with ID a453196:
Successfully imported key from key.pem

Next, initialize the repository and add the public key of cluster 2's user as a signer. You will be asked for a number of passphrases to protect the keys. Keep note of these passphrases, and refer to the Docker Content Trust documentation (/engine/security/trust/trust_delegation/#managing-delegations-in-a-notary-server) to learn more about managing keys.

docker trust signer add --key cert.pem cluster2admin msr.example.com/admin/trustdemo
Adding signer "cluster2admin" to msr.example.com/admin/trustdemo...
Initializing signed repository for msr.example.com/admin/trustdemo...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID dd4460f:
Repeat passphrase for new repository key with ID dd4460f:
Successfully initialized "msr.example.com/admin/trustdemo"
Successfully added signer: cluster2admin to msr.example.com/admin/trustdemo

Finally, sign the image tag. This pushes the image up to MSR and signs the tag with the keys of cluster 2’s user.

docker trust sign msr.example.com/admin/trustdemo:1
Signing and pushing trust data for local image msr.example.com/admin/trustdemo:1, may overwrite remote trust data
The push refers to repository [msr.example.com/admin/trustdemo]
27c0b07c1b33: Layer already exists
aa84c03b5202: Layer already exists
5f6acae4a5eb: Layer already exists
df64d3292fd6: Layer already exists
1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153
Signing and pushing trust metadata
Enter passphrase for cluster2admin key with ID a453196:
Successfully signed msr.example.com/admin/trustdemo:1

Within the MSR web interface, you should now be able to see your newly pushed tag with the Signed text next to the size.

You can sign this image multiple times if required, whether because multiple teams from the same cluster want to sign the image, or because you have integrated MSR with additional remote MKEs so that users from clusters 1, 2, 3, and beyond can all sign the same image.

Enforce Signed Image Tags on the Remote MKE

We can now enable Only Run Signed Images on the remote MKE. To do this, log in to cluster 2’s MKE web interface as an admin and select Admin > Admin Settings > Docker Content Trust.

Finally, we can deploy a workload on cluster 2 using a signed image from an MSR running on cluster 1. This workload can be a simple $ docker run, a Swarm service, or a Kubernetes workload. As a simple test, source a client bundle and try running one of your signed images.

source env.sh

docker service create msr.example.com/admin/trustdemo:1
nqsph0n6lv9uzod4lapx0gwok
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                   PORTS
nqsph0n6lv9u        laughing_lamarr         replicated          1/1                 msr.example.com/admin/trustdemo:1
Troubleshooting

If the image is stored in a private repository within MSR, you need to pass credentials to the orchestrator, as there is no SSO between cluster 2 and MSR. See the relevant Kubernetes documentation for more details.
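For a Kubernetes workload, the usual approach is an image pull secret. The following sketch only renders the secret without applying it; the host, user name, secret name, and token are all placeholders.

```shell
#!/bin/sh
# Placeholder host and credentials; substitute your own values.
MSR_HOST="msr.example.com"
if command -v kubectl >/dev/null 2>&1; then
  # Render (without applying) a docker-registry secret for the private repository:
  kubectl create secret docker-registry msr-pull-secret \
      --docker-server="$MSR_HOST" \
      --docker-username=admin \
      --docker-password="$MSR_TOKEN" \
      --dry-run=client -o yaml
else
  echo "kubectl not available; secret would target $MSR_HOST"
fi
# Reference the secret from your pod spec:
#   spec:
#     imagePullSecrets:
#       - name: msr-pull-secret
```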

Example Errors
Image or trust data does not exist
image or trust data does not exist for msr.example.com/admin/trustdemo:1

This means something went wrong when initiating the repository or signing the image, as the tag contains no signing data.

Image did not meet required signing policy
Error response from daemon: image did not meet required signing policy

msr.example.com/admin/trustdemo:1: image did not meet required signing policy

This means that the image was signed correctly; however, the user who signed the image does not meet the signing policy in cluster 2. This may be because you signed the image with the wrong user keys.

MSR URL must be a registered trusted registry
Error response from daemon: msr.example.com must be a registered trusted registry. See 'docker run --help'.

This means you have not registered MSR to work with a remote MKE instance yet, as outlined in Registering MSR with a remote Mirantis Kubernetes Engine.

Manage jobs

Job queue

Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs. Jobs are added to a cluster-wide job queue, and then consumed and executed by a job runner within MSR.

All MSR replicas have access to the job queue, and have a job runner component that can get and execute work.

How it works

When a job is created, it is added to a cluster-wide job queue and enters the waiting state. When one of the MSR replicas is ready to claim the job, it waits a random period of up to 3 seconds, giving every replica the opportunity to claim the task.

A replica claims a job by adding its replica ID to the job. That way, other replicas will know the job has been claimed. Once a replica claims a job, it adds that job to an internal queue, which in turn sorts the jobs by their scheduledAt time. Once that happens, the replica updates the job status to running, and starts executing it.

The job runner component of each MSR replica keeps a heartbeatExpiration entry in the database that is shared by all replicas. If a replica becomes unhealthy, the other replicas notice the change and update the status of the failing worker to dead. All of the jobs that were claimed by the unhealthy replica then enter the worker_dead state, so that other replicas can claim them.

Job types

MSR runs periodic and long-running jobs. The following is a complete list of jobs you can filter for via the user interface or the API.

Job types

Job

Description

gc

A garbage collection job that deletes layers associated with deleted images.

onlinegc

A garbage collection job that deletes layers associated with deleted images without putting the registry in read-only mode.

onlinegc_metadata

A garbage collection job that deletes metadata associated with deleted images.

onlinegc_joblogs

A garbage collection job that deletes job logs based on a configured job history setting.

metadatastoremigration

A necessary migration that enables the onlinegc feature.

sleep

Used for testing the correctness of the jobrunner. It sleeps for 60 seconds.

false

Used for testing the correctness of the jobrunner. It runs the false command and immediately fails.

tagmigration

Used for synchronizing tag and manifest information between the MSR database and the storage backend.

bloblinkmigration

A DTR 2.1 to 2.2 upgrade process that adds references for blobs to repositories in the database.

license_update

Checks for license expiration extensions if online license updates are enabled.

scan_check

An image security scanning job. This job does not perform the actual scanning, rather it spawns scan_check_single jobs (one for each layer in the image). Once all of the scan_check_single jobs are complete, this job will terminate.

scan_check_single

A security scanning job for a particular layer given by the parameter: SHA256SUM. This job breaks up the layer into components and checks each component for vulnerabilities.

scan_check_all

A security scanning job that updates all of the currently scanned images to display the latest vulnerabilities.

update_vuln_db

A job that is created to update MSR’s vulnerability database. It uses an Internet connection to check for database updates through https://dss-cve-updates.docker.com/ and updates the dtr-scanningstore container if there is a new update available.

scannedlayermigration

A DTR 2.4 to 2.5 upgrade process that restructures scanned image data.

push_mirror_tag

A job that pushes a tag to another registry after a push mirror policy has been evaluated.

poll_mirror

A global cron that evaluates poll mirroring policies.

webhook

A job that is used to dispatch a webhook payload to a single endpoint.

nautilus_update_db

The old name for the update_vuln_db job. This may be visible in old log files.

ro_registry

A user-initiated job for manually switching MSR into read-only mode.

tag_pruning

A job for cleaning up unnecessary or unwanted repository tags which can be configured by repository admins.

Job status

Jobs can have one of the following status values:

Job values

Status

Description

waiting

Unclaimed job waiting to be picked up by a worker.

running

The job is currently being run by the specified workerID.

done

The job has successfully completed.

errors

The job has completed with errors.

cancel_request

The worker monitors the status of the job in the database. If the job status changes to cancel_request, the worker cancels the job.

cancel

The job has been canceled and was not fully executed.

deleted

The job and its logs have been removed.

worker_dead

The worker for this job has been declared dead and the job will not continue.

worker_shutdown

The worker that was running this job has been gracefully stopped.

worker_resurrection

The worker for this job has reconnected to the database and will cancel this job.

Audit jobs with the web interface

Admins can view and audit jobs within the software using either the API or the MSR web UI.

Prerequisite
  • Job Queue

View jobs list

To view the list of jobs within MSR, do the following:

  1. Log in to the MSR web UI.

  2. Navigate to System > Job Logs in the left-side navigation panel. You should see a paginated list of past, running, and queued jobs. By default, Job Logs shows the latest 10 jobs on the first page.

  3. If required, filter the jobs by:

    • Action

    • Worker ID, which is the ID of the worker in an MSR replica responsible for running the job

  4. Optional. Click Edit Settings on the right of the filtering options to update your Job Logs settings.

Job details

The following is an explanation of the job-related fields displayed in Job Logs, using the onlinegc action filtered above as an example.

Jobs values

Job Detail      Description                                                   Example

Action          The type of action or job being performed.                    onlinegc
ID              The ID of the job.                                            ccc05646-569a-4ac4-b8e1-113111f63fb9
Worker          The ID of the worker node responsible for running the job.    8f553c8b697c
Status          Current status of the action or job.                          done
Start Time      Time when the job started.                                    9/23/2018 7:04 PM
Last updated    Time when the job was last updated.                           9/23/2018 7:04 PM
View Logs       Links to the full logs for the job.                           [View Logs]

View job-specific logs

To view the log details for a specific job, do the following:

  1. Click View Logs to the right of the Last Updated value for the job in question. You will be redirected to the log detail page of your selected job.

    Notice how the job ID is reflected in the URL while the Action and the abbreviated form of the job ID are reflected in the heading. Also, the JSON lines displayed are job-specific MSR container logs.

  2. Enter or select a different line count to truncate the number of lines displayed. Lines are cut off from the end of the logs.

Audit jobs with the API

Overview

Admins can audit jobs using the API.

Prerequisite
  • Job Queue

Job capacity

Each job runner has a limited capacity and will not claim jobs that require a higher capacity. You can see the capacity of a job runner via the GET /api/v0/workers endpoint:

{
  "workers": [
    {
      "id": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1,
        "scanCheck": 1
      },
      "heartbeatExpiration": "2017-02-18T00:51:02Z"
    }
  ]
}

This means that the worker with replica ID 000000000000 has a capacity of 1 scan and 1 scanCheck. Next, review the list of available jobs:

{
  "jobs": [
    {
      "id": "0",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scan": 1
      }
    },
    {
       "id": "1",
       "workerID": "",
       "status": "waiting",
       "capacityMap": {
         "scan": 1
       }
    },
    {
     "id": "2",
      "workerID": "",
      "status": "waiting",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}

If worker 000000000000 notices the jobs in waiting state above, then it will be able to pick up jobs 0 and 2 since it has the capacity for both. Job 1 will have to wait until the previous scan job, 0, is completed. The job queue will then look like:

{
  "jobs": [
    {
      "id": "0",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scan": 1
      }
    },
    {
       "id": "1",
       "workerID": "",
       "status": "waiting",
       "capacityMap": {
         "scan": 1
       }
    },
    {
     "id": "2",
      "workerID": "000000000000",
      "status": "running",
      "capacityMap": {
        "scanCheck": 1
      }
    }
  ]
}

You can get a list of jobs via the GET /api/v0/jobs/ endpoint. Each job looks like:

{
    "id": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
    "retryFromID": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
    "workerID": "000000000000",
    "status": "done",
    "scheduledAt": "2017-02-17T01:09:47.771Z",
    "lastUpdated": "2017-02-17T01:10:14.117Z",
    "action": "scan_check_single",
    "retriesLeft": 0,
    "retriesTotal": 0,
    "capacityMap": {
          "scan": 1
    },
    "parameters": {
          "SHA256SUM": "1bacd3c8ccb1f15609a10bd4a403831d0ec0b354438ddbf644c95c5d54f8eb13"
    },
    "deadline": "",
    "stopTimeout": ""
}

The JSON fields of interest here are:

  • id: The ID of the job

  • workerID: The ID of the worker in an MSR replica that is running this job

  • status: The current state of the job

  • action: The type of job the worker will actually perform

  • capacityMap: The available capacity a worker needs for this job to run
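As a sketch, both the workers and jobs endpoints can be queried with curl and an admin account. The host and credential variables below are placeholders, and the requests are only issued when credentials are provided.

```shell
#!/bin/sh
# Placeholder host and credentials; substitute your own values.
MSR_HOST="msr.example.com"
WORKERS_URL="https://${MSR_HOST}/api/v0/workers"
JOBS_URL="https://${MSR_HOST}/api/v0/jobs/"
if [ -n "$MSR_USER" ] && [ -n "$MSR_TOKEN" ]; then
  curl -s -u "$MSR_USER:$MSR_TOKEN" "$WORKERS_URL"   # job runner capacities
  curl -s -u "$MSR_USER:$MSR_TOKEN" "$JOBS_URL"      # list of jobs
else
  echo "credentials not set; would call GET $WORKERS_URL and GET $JOBS_URL"
fi
```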

Cron jobs

Several of the jobs performed by MSR run on a recurring schedule. You can see those jobs using the GET /api/v0/crons endpoint:

{
  "crons": [
    {
      "id": "48875b1b-5006-48f5-9f3c-af9fbdd82255",
      "action": "license_update",
      "schedule": "57 54 3 * * *",
      "retries": 2,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:54:57Z"
    },
    {
      "id": "b1c1e61e-1e74-4677-8e4a-2a7dacefffdc",
      "action": "update_db",
      "schedule": "0 0 3 * * *",
      "retries": 0,
      "capacityMap": null,
      "parameters": null,
      "deadline": "",
      "stopTimeout": "",
      "nextRun": "2017-02-22T03:00:00Z"
    }
  ]
}

The schedule field uses a cron expression in the (seconds) (minutes) (hours) (day of month) (month) (day of week) format. For example, the schedule 57 54 3 * * * of cron ID 48875b1b-5006-48f5-9f3c-af9fbdd82255 runs at 03:54:57 on every day of every month, which corresponds to the nextRun value of 2017-02-22T03:54:57Z in the example JSON response above.
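The six-field layout can be illustrated with a short shell sketch that splits the example schedule into its named fields:

```shell
#!/bin/sh
# Split an MSR cron expression into its six fields.
set -f  # disable globbing so the '*' fields stay literal
schedule="57 54 3 * * *"
set -- $schedule
sec=$1; min=$2; hour=$3; dom=$4; month=$5; dow=$6
echo "seconds=$sec minutes=$min hours=$hour day-of-month=$dom month=$month day-of-week=$dow"
# Prints: seconds=57 minutes=54 hours=3 day-of-month=* month=* day-of-week=*
```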

Enable auto-deletion of job logs

Mirantis Secure Registry has a global setting for the auto-deletion of job logs, which allows them to be removed as part of garbage collection. MSR admins can enable auto-deletion of job logs based on specified conditions, which are covered below.

  1. Log in to the MSR web UI.

  2. Navigate to System in the left-side navigation panel.

  3. Scroll down to Job Logs and turn on Auto-Deletion.

  4. Specify the conditions with which a job log auto-deletion will be triggered.

    MSR allows you to set your auto-deletion conditions based on the following optional job log attributes:

    Name                    Description                                                                              Example

    Age                     Removes job logs that are older than your specified number of hours, days, weeks, or months.    2 months
    Max number of events    The maximum number of job logs allowed within MSR.                                       100

    If you enable and specify both conditions, job logs are removed from MSR during garbage collection whenever either condition is met. A confirmation message displays right away.

  5. Click Start Deletion if you are ready. Read more about Garbage collection if you are unsure about this operation.

  6. Navigate to System > Job Logs in the left-side navigation panel to verify that onlinegc_joblogs has started.

Note

When you enable auto-deletion of job logs, the logs will be permanently deleted during garbage collection.

Manage users

Authentication and authorization

With MSR you can control which users have access to your image repositories.

Users

By default, anonymous users can only pull images from public repositories. They cannot create new repositories or push to existing ones. You can then grant permissions to enforce fine-grained access control to image repositories.

  1. Create a user.

    Registered users can create and manage their own repositories. You can also integrate with an LDAP service to manage users from a single place.

  2. Extend the permissions by adding the user to a team.

    To extend a user’s permission and manage their permissions over repositories, you add the user to a team. A team defines the permissions users have for a set of repositories.

Organizations and teams

When a user creates a repository, only that user can make changes to the repository settings, and push new images to it.

Organizations take permission management one step further by allowing multiple users to own and manage a common set of repositories. This is useful when implementing team workflows. With organizations you can delegate the management of a set of repositories and user permissions to the organization administrators.

An organization owns a set of repositories and defines a set of teams. With teams you can define fine-grain permissions that a team of users has for a set of repositories.

Enable LDAP and sync teams and users

Enabling LDAP and subsequently syncing your LDAP directory with your MSR-created teams and users is essential to MSR authentication and authorization.

To enable LDAP and sync to your LDAP directory:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System to display the General tab.

  3. Scroll down the page to the Auth Settings section and select Click here to configure auth settings. The Authentication & Authorization settings display in the details pane.

  4. In the Identity Provider Integration section, move the slider next to LDAP to enable the LDAP settings.

  5. Enter the values that correspond with your LDAP server installation.

  6. Test your configuration in MSR.

  7. Create a team in MSR to mirror your LDAP directory.

  8. Select ENABLE SYNC TEAM MEMBERS.

  9. Choose between the following two methods for matching group members from an LDAP directory. Refer to the table below for more information.

    • Select LDAP MATCH METHOD to change the method for matching group members in the LDAP directory from Match Search Results (default) to Match Group Members. Fill out Group DN and Group Member Attribute as required.

    • Keep the default Match Search Results method and fill out Search Base DN, Search filter, and Search subtree instead of just one level, as required.

  10. Optional. Select Immediately Sync Team Members to run an LDAP sync operation immediately after saving the configuration for the team.

  11. Click Create.


You can match group members from an LDAP directory either by matching group members or by matching search results:

Bind method

Description

Match Group Members (direct bind)

Specifies that team members are synced directly with members of a group in the LDAP directory of your organization. The team membership is synced to match the membership of the group.

Group DN

The distinguished name of the group from which you select users.

Group Member Attribute

The value of this group attribute corresponds to the distinguished names of the members of the group.

Match Search Results (search bind)

Specifies that team members are synced using a search query against the LDAP directory of your organization. The team membership is synced to match the users in the search results.

Search Base DN

The distinguished name of the node in the directory tree where the search starts looking for users.

Search filter

Filters to find users. If empty, existing users in the search scope are added as members of the team.

Search subtree instead of just one level

Defines search through the full LDAP tree, not just one level, starting at the base DN.

Configure SAML integration on MSR

SAML configuration requires that you know the metadata URL for your chosen identity provider, as well as the MSR host URL, which contains the IP address or domain of your MSR installation.

To configure SAML integration on MSR:

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System to display the General tab.

  3. Scroll down the page to the Auth Settings section and select Click here to configure auth settings. The Authentication & Authorization settings display in the details pane.

  4. In the Identity Provider Integration section, move the slider next to SAML to enable the SAML settings.

  5. In the SAML idP Server subsection, enter values for the following fields: SAML Proxy URL, SAML Proxy User, SAML Proxy Password, and IdP Metadata URL.

    SAML Proxy URL

    Optional. URL of the user proxy server used by MSR to fetch the metadata specified in the IdP Metadata URL field.

    SAML Proxy User

    Optional. The user name for proxy authentication.

    SAML Proxy Password

    Optional. The password for proxy authentication.

    IdP Metadata URL

    The URL for the identity provider metadata.

    Note

    If the metadata URL is publicly certified, you can continue with the default settings:

    • Skip TLS Verification unchecked

    • Root Certificates Bundle blank

    Mirantis recommends the use of TLS verification in production environments. If the metadata URL cannot be certified by the default certificate authority store, you must provide the certificates from the identity provider in the Root Certificates Bundle field.

  6. Click Test Proxy Settings to verify that the proxy server has access to the URL entered into the IdP Metadata URL field.

  7. In the SAML Service Provider subsection, in the MSR Host field, enter the URL that includes the IP address or domain of your MSR installation.

    The port number is optional. The current IP address or domain displays by default.

  8. Optional. Customize the text of the sign-in button by entering the text for the button in the Customize Sign In Button Text field. By default, the button text is Sign in with SAML.

  9. Copy the SERVICE PROVIDER METADATA URL, the ASSERTION CONSUMER SERVICE (ACS) URL, and the SINGLE LOGOUT (SLO) URL, to paste later into the identity provider workflow.

  10. Click Save.

Note

  • To configure a service provider, enter the Service provider metadata URL to obtain its metadata. To access the URL, you may need to provide the CA certificate that can verify the remote server.

  • To link group membership with users, use the Edit or Create team dialog to associate a SAML group assertion with an MSR team, thus synchronizing user team membership whenever a user logs in.

SCIM integration

System for Cross-domain Identity Management (SCIM) provides an LDAP alternative for provisioning and managing users and groups, as well as for syncing users and groups with an upstream identity provider. Using the SCIM schema and API, you can use single sign-on (SSO) across various tools.

SCIM implementation allows proactive synchronization with MSR and eliminates manual intervention.

Supported identity providers
  • Okta 3.2.0

Typical steps involved in SCIM integration:
  1. Configure SCIM for MSR.

  2. Configure SCIM authentication and access.

  3. Specify user attributes.

Configure SCIM for MSR

The MSR SCIM implementation uses SCIM version 2.0.

  1. Log in to the MSR web UI.

  2. In the left-side navigation panel, navigate to System to display the General tab.

  3. Scroll down the page to the Auth Settings section and select Click here to configure auth settings. The Authentication & Authorization settings display in the details pane.

  4. In the Identity Provider Integration section, move the slider next to SCIM to enable the SCIM settings.

    By default, docker-datacenter is the organization to which the SCIM team belongs. Enter the API token in the UI or have MSR generate a UUID for you.

Configure SCIM authentication and access

The base URL for all SCIM API calls is https://<Host IP>/enzi/v0/scim/v2/. All SCIM methods are accessible API endpoints of this base URL.

Bearer Auth is the API authentication method. When configured, SCIM API endpoints are accessed via the following HTTP header: Authorization: Bearer <token>.

Note

  • SCIM API endpoints are not accessible by any other user (or their token), including the MSR administrator and MSR admin Bearer token.

  • An HTTP authentication request header that contains a Bearer token is the only method supported.
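A minimal request sketch, assuming a host of msr.example.com and a placeholder token; the request is only issued when a token is provided.

```shell
#!/bin/sh
# Placeholder host; the token comes from the MSR SCIM configuration.
MSR_HOST="msr.example.com"
SCIM_BASE="https://${MSR_HOST}/enzi/v0/scim/v2"
if [ -n "$SCIM_TOKEN" ]; then
  # List users, authenticating with the Bearer token header:
  curl -s -H "Authorization: Bearer $SCIM_TOKEN" \
       -H "Accept: application/scim+json" "$SCIM_BASE/Users"
else
  echo "SCIM_TOKEN not set; would call GET $SCIM_BASE/Users"
fi
```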

Specify user attributes

The following table maps SCIM and SAML attributes to the user attribute fields that MSR uses.

MSR                     SAML                                        SCIM

Account name            nameID in response                          userName
Account full name       attribute value in fullname assertion       user name.formatted
Team group link name    attribute value in member-of assertion      group displayName
Team name               N/A                                         group displayName + _SCIM (when creating a team)

Supported SCIM API endpoints
  • User operations

    • Retrieve user information

    • Create a new user

    • Update user information

  • Group operations

    • Create a new user group

    • Retrieve group information

    • Update user group membership (add/replace/remove users)

  • Service provider configuration operations

    • Retrieve service provider resource type metadata

    • Retrieve schema for service provider and SCIM resources

    • Retrieve schema for service provider configuration

User operations

For user GET and POST operations:

  • Filtering is only supported using the userName attribute and eq operator. For example, filter=userName Eq "john".

  • Attribute name and attribute operator are case insensitive. For example, the following two expressions evaluate to the same logical value:

    • filter=userName Eq "john"

    • filter=Username eq "john"

  • Pagination is fully supported.

  • Sorting is not supported.
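For example, the userName filter can be URL-encoded and passed as a query parameter. The host and token below are placeholders, and the request is only issued when a token is provided.

```shell
#!/bin/sh
# Placeholder host and token; substitute your own values.
MSR_HOST="msr.example.com"
URL="https://${MSR_HOST}/enzi/v0/scim/v2/Users"
FILTER='userName eq "john"'
if [ -n "$SCIM_TOKEN" ]; then
  # -G turns the --data-urlencode value into a GET query string:
  curl -G -s -H "Authorization: Bearer $SCIM_TOKEN" \
       --data-urlencode "filter=$FILTER" "$URL"
else
  echo "SCIM_TOKEN not set; would call GET $URL with filter=$FILTER"
fi
```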

GET /Users

Returns a list of SCIM users, 200 users per page by default. Use the startIndex and count query parameters to paginate long lists of users.

For example, to retrieve the first 20 users, set startIndex to 1 and count to 20 with the following request:

GET {Host IP}/enzi/v0/scim/v2/Users?startIndex=1&count=20
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8

The response to the previous query returns paging metadata that is similar to the following example:

{
   "totalResults":100,
   "itemsPerPage":20,
   "startIndex":1,
   "schemas":["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
   "Resources":[{
      ...
   }]
}
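The paging parameters can drive a simple loop. This sketch only prints the requests it would make, and the total of 100 mirrors the totalResults value from the example response above; the host is a placeholder.

```shell
#!/bin/sh
# Walk the user list 20 entries at a time.
MSR_HOST="msr.example.com"   # placeholder host
count=20
startIndex=1
total=100   # in practice, read totalResults from the first response
while [ "$startIndex" -le "$total" ]; do
  echo "GET https://${MSR_HOST}/enzi/v0/scim/v2/Users?startIndex=${startIndex}&count=${count}"
  startIndex=$((startIndex + count))
done
```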
GET /Users/{id}

Retrieves a single user resource. The value of the {id} should be the user ID. You can also use the userName attribute to filter the results.

GET {Host IP}/enzi/v0/scim/v2/Users/{user ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
POST /Users

Creates a user. Must include the userName attribute and at least one email address.

POST {Host IP}/enzi/v0/scim/v2/Users
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
PATCH /Users/{id}

Updates a user’s active status. Inactive users can be reactivated by specifying "active": true. Active users can be deactivated by specifying "active": false. The value of the {id} should be the user ID.

PATCH {Host IP}/enzi/v0/scim/v2/Users/{user ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
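A deactivation request can be sketched as follows. The request body shape follows the standard SCIM 2.0 PatchOp message; the host, token, and user ID are placeholders, and the request is only issued when a token is provided.

```shell
#!/bin/sh
MSR_HOST="msr.example.com"                         # placeholder host
USER_ID="2819c223-7f76-453a-919d-413861904646"     # placeholder user ID
BODY='{"schemas":["urn:ietf:params:scim:api:messages:2.0:PatchOp"],"Operations":[{"op":"replace","value":{"active":false}}]}'
if [ -n "$SCIM_TOKEN" ]; then
  curl -s -X PATCH \
       -H "Authorization: Bearer $SCIM_TOKEN" \
       -H "Content-Type: application/scim+json" \
       -d "$BODY" "https://${MSR_HOST}/enzi/v0/scim/v2/Users/${USER_ID}"
else
  echo "SCIM_TOKEN not set; would PATCH Users/${USER_ID}"
fi
```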
PUT /Users/{id}

Updates existing user information. All attribute values are overwritten, including attributes for which empty values or no values were provided. If a previously set attribute value is left blank during a PUT operation, the value is updated with a blank value in accordance with the attribute data type and storage provider. The value of the {id} should be the user ID.

Group operations

For group GET and POST operations:

  • Pagination is fully supported.

  • Sorting is not supported.

GET /Groups/{id}

Retrieves information for a single group.

GET {Host IP}/enzi/v0/scim/v2/Groups/{Group ID}
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
GET /Groups

Returns a paginated list of groups, ten groups per page by default. Use the startIndex and count query parameters to paginate long lists of groups.

GET {Host IP}/enzi/v0/scim/v2/Groups?startIndex=4&count=500
Host: example.com
Accept: application/scim+json
Authorization: Bearer h480djs93hd8
POST /Groups

Creates a new group. Users can be added to the group during group creation by supplying user ID values in the members array.

PATCH /Groups/{id}

Updates an existing group resource, allowing individual (or groups of) users to be added or removed from the group with a single operation. Add is the default operation.

Setting the operation attribute of a member object to delete removes members from a group.

PUT /Groups/{id}

Updates an existing group resource, overwriting all values for a group even if an attribute is empty or not provided. PUT replaces all members of a group with members provided via the members attribute. If a previously set attribute is left blank during a PUT operation, the new value is set to blank in accordance with the data type of the attribute and the storage provider.

Service provider configuration operations

SCIM defines three endpoints to facilitate discovery of SCIM service provider features and schema that can be retrieved using HTTP GET:

GET /ResourceTypes

Discovers the resource types available on a SCIM service provider, for example, Users and Groups. Each resource type defines the endpoints, the core schema URI that defines the resource, and any supported schema extensions.

GET /Schemas

Retrieves information about all resource schemas supported by a SCIM service provider.

GET /ServiceProviderConfig

Returns a JSON structure that describes the SCIM specification features available on a service provider using the schemas attribute of urn:ietf:params:scim:schemas:core:2.0:ServiceProviderConfig.

Create and manage teams

You can extend a user’s default permissions by adding the user to a team, thereby granting the user individual permissions in other image repositories. A team defines the permissions that a set of users has for a set of repositories.

To create a new team:

  1. Log in to the MSR web UI.

  2. Navigate to the Organizations page.

  3. Click the organization within which you want to create the team.

  4. Click + to create a new team.

  5. Give the team a name.

  6. Click the team name to manage its settings.

  7. Click the Add user button to add team members.

Manage team permissions

Once you have created the team, the next step is to define the team permissions for a set of repositories.

To manage team permissions:

  1. Navigate to the Permissions tab, and click the Add repository permissions button.

  2. Choose the repositories that the team has access to, and what permission levels the team members have.

    Three permission levels are available:

    Permission level

    Description

    Read only

    View repository, pull images.

    Read & Write

    View repository, pull and push images.

    Admin

    Manage repository and change its settings, pull and push images.

Delete a team

If you are an organization owner, you can delete a team in that organization.

To delete a team:

  1. Navigate to the Team.

  2. Choose the Settings tab.

  3. Click Delete.

Create and manage organizations

When a user creates a repository, only that user has permissions to make changes to the repository.

For team workflows, where multiple users have permissions to manage a set of common repositories, you can create an organization.

To create a new organization, navigate to the MSR web UI and go to the Organizations page.

Click the New organization button, and choose a meaningful name for the organization.

Repositories owned by this organization will contain the organization name, so to pull an image from that repository you will use:

docker pull <msr-domain-name>/<organization>/<repository>:<tag>

Click Save to create the organization, and then click the organization to define which users are allowed to manage this organization. These users will be able to edit the organization settings, edit all repositories owned by the organization, and define the user permissions for this organization.

For this, click the Add user button, select the users that you want to grant permissions to manage the organization, and click Save. Then change their permissions from Member to Org Owner.

Permission levels

Mirantis Secure Registry (MSR) allows you to define fine-grained permissions over image repositories.

Administrators

MSR administrators have permission to manage all MSR repositories and settings.

Team permission levels

With teams you can define the repository permissions for a set of users (read, read-write, and admin).

Repository operation      read    read-write    admin
View/browse                x          x           x
Pull                       x          x           x
Push                                  x           x
Start a scan                          x           x
Delete tags                           x           x
Edit description                                  x
Set public or private                             x
Manage user access                                x
Delete repository                                 x

Note

Team permissions are additive. When a user is a member of multiple teams, they have the highest permission level defined by those teams.
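The additive rule amounts to taking the highest permission level across all of a user's teams. A minimal sketch of that logic (the function name and list representation are illustrative, not part of MSR):

```python
# Permission levels in ascending order of privilege, as in the table above.
LEVELS = ["read", "read-write", "admin"]

def effective_permission(team_levels):
    """Return the highest permission level granted by any of a user's teams."""
    if not team_levels:
        return None
    return max(team_levels, key=LEVELS.index)
```

For example, a user in one team with read access and another with read-write access effectively has read-write access to the shared repositories.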

Overall permissions

  • Anonymous or unauthenticated users: Search and pull public repositories.

  • Authenticated users: Search and pull public repositories, and create and manage their own repositories.

  • Team member: Everything an authenticated user can do, plus the permissions granted by the teams the user belongs to.

  • Organization owner: Manage repositories and teams for the organization.

  • Admin: Manage anything across MKE and MSR.

Manage webhooks

You can configure MSR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images.

Webhook types

The following event types are scoped to individual repositories, require repository admin access, and are available through both the web UI and the API:

  • Tag pushed to repository (TAG_PUSH)

  • Tag pulled from repository (TAG_PULL)

  • Tag deleted from repository (TAG_DELETE)

  • Manifest pushed to repository (MANIFEST_PUSH)

  • Manifest pulled from repository (MANIFEST_PULL)

  • Manifest deleted from repository (MANIFEST_DELETE)

  • Security scan completed (SCAN_COMPLETED)

  • Security scan failed (SCAN_FAILED)

  • Image promoted from repository (PROMOTION)

  • Image mirrored from repository (PUSH_MIRRORING)

  • Image mirrored from remote repository (POLL_MIRRORING)

  • Helm chart pushed to repository (CHART_PUSH)

  • Helm chart pulled from repository (CHART_PULL)

  • Helm chart deleted from repository (CHART_DELETE)

  • Helm chart linting completed (CHART_LINTED)

The following event types differ in scope and are available through the API only:

  • Repository created, updated, or deleted (REPO_CREATED, REPO_UPDATED, and REPO_DELETED): scoped to namespaces and organizations; requires namespace or organization owner access.

  • Security scanner update completed (SCANNER_UPDATE_COMPLETED): global scope; requires MSR admin access.

You must have admin privileges to a repository or namespace in order to subscribe to its webhook events. For example, a user must be an admin of repository “foo/bar” to subscribe to its tag push events. An MSR admin can subscribe to any event.

Manage repository webhooks with the web interface

You must have admin privileges to the repository in order to create a webhook or edit any aspect of an existing webhook.

Create a webhook for your repository
  1. In your browser, navigate to https://<msr-url> and log in with your credentials.

  2. Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you must click the repository name that follows the / after the namespace.

  3. Select the Webhooks tab, and click New Webhook.

  4. From the Notification to receive drop-down list, select the event that will trigger the webhook.

  5. Set the URL that will receive the JSON payload. Click Test next to the Webhook URL field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.

    {
      "type": "TAG_PUSH",
      "createdAt": "2019-05-15T19:39:40.607337713Z",
      "contents": {
        "namespace": "foo",
        "repository": "bar",
        "tag": "latest",
        "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
        "imageName": "foo/bar:latest",
        "os": "linux",
        "architecture": "amd64",
        "author": "",
        "pushedAt": "2015-01-02T15:04:05Z"
      },
      "location": "/repositories/foo/bar/tags/latest"
    }
    
  6. (Optional) Assign a TLS certificate to your webhook.

    1. Expand Show advanced settings.

    2. Paste the TLS certificate associated with your webhook URL into the TLS Cert field.

      Note

      For testing purposes, you can test your TLS certificate over HTTP rather than HTTPS.

    3. To circumvent TLS verification, tick the Skip TLS Verification checkbox.

  7. (Optional) Format your webhook message (available since MSR 3.0.2).

    You can use Golang templates to format the webhook messages that are sent.

    1. Expand Show advanced settings.

    2. Paste the configured Golang template for the webhook message into the Webhook Message Format field.

  8. Click Create to save the webhook. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.

As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry using the API.

Change the Active status of a webhook

Note

By default, the webhook status is set to Active on its creation.

  1. In your browser, navigate to https://<msr-url> and log in with your credentials.

  2. Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you must click the repository name that follows the / after the namespace.

  3. Select the Webhooks tab. The existing webhooks display on the page.

  4. Locate the webhook whose Active status you want to change and move the slider underneath the Active heading accordingly.

Manage repository webhooks with the API

Triggering notifications

Refer to Webhook types for a list of events you can trigger notifications for via the API.

Your MSR hostname serves as the base URL for your API requests.

From the MSR web interface, click API on the bottom left-side navigation panel to explore the API resources and endpoints. Click Execute to send your API request.

API requests via curl

You can use curl to send HTTP or HTTPS API requests. Note that you must specify skipTLSVerification: true in your request in order to test the webhook endpoint over HTTP.

Example curl request
curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks" -H "accept: application/json" -H "content-type: application/json" -d "{ \"endpoint\": \"https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019\", \"key\": \"maria-testorg/lab-words\", \"skipTLSVerification\": true, \"type\": \"TAG_PULL\"}"
Example JSON response
{
  "id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
  "type": "TAG_PULL",
  "key": "maria-testorg/lab-words",
  "endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
  "authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
  "createdAt": "2019-05-22T01:55:20.471286995Z",
  "lastSuccessfulAt": "0001-01-01T00:00:00Z",
  "inactive": false,
  "tlsCert": "",
  "skipTLSVerification": true
}
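The same request can be composed with the Python standard library. This sketch only builds the request object, using the placeholder endpoint, credentials, and webhook URL from the curl example above; sending it (with urllib.request.urlopen) is left to the caller:

```python
import base64
import json
import urllib.request

def build_webhook_request(msr_url, user, token, payload):
    """Compose a POST /api/v0/webhooks request; send it with urllib.request.urlopen."""
    body = json.dumps(payload).encode()
    credentials = base64.b64encode(f"{user}:{token}".encode()).decode()
    return urllib.request.Request(
        f"{msr_url}/api/v0/webhooks",
        data=body,
        method="POST",
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": f"Basic {credentials}",
        },
    )

# Placeholder values mirroring the curl example above.
req = build_webhook_request(
    "https://msr-example.com", "test-user", "<token>",
    {
        "endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
        "key": "maria-testorg/lab-words",
        "skipTLSVerification": True,
        "type": "TAG_PULL",
    },
)
```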
Subscribe to events

To subscribe to events, send a POST request to /api/v0/webhooks with the following JSON payload:

Example usage
{
  "type": "TAG_PUSH",
  "key": "foo/bar",
  "endpoint": "https://example.com"
}

The keys in the payload are:

  • type: The event type to subscribe to.

  • key: The namespace/organization or repo to subscribe to. For example, “foo/bar” to subscribe to pushes to the “bar” repository within the namespace/organization “foo”.

  • endpoint: The URL to send the JSON payload to.

Normal users must supply a “key” to scope a particular webhook event to a repository or a namespace/organization. MSR admins can omit the key, in which case POST event notifications of the specified type are sent for all MSR repositories and namespaces.

Receive a payload

Whenever your specified event type occurs, MSR will send a POST request to the given endpoint with a JSON-encoded payload. The payload will always have the following wrapper:

{
  "type": "...",
  "createdAt": "2012-04-23T18:25:43.511Z",
  "contents": {...}
}
  • type refers to the event type received at the specified subscription endpoint.

  • contents refers to the payload of the event itself. Each event is different, therefore the structure of the JSON object in contents will change depending on the event type. See Content structure for more details.
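To see the wrapper in practice, here is a minimal receiver sketch built on Python's standard library. It accepts the POST, decodes the wrapper, and records the type, createdAt, and contents fields; the collection logic is illustrative, not part of MSR:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # events collected by the handler, for demonstration

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # Every MSR notification carries the same wrapper keys.
        received.append((event["type"], event["createdAt"], event["contents"]))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

if __name__ == "__main__":
    # Listen on port 8000; point your webhook endpoint at http://<host>:8000/
    HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

The structure of contents varies by event type, so a real receiver would dispatch on the type field before inspecting contents.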

Test payload subscriptions

Before subscribing to an event, you can view and test your endpoints using fake data. To send a test payload, send a POST request to /api/v0/webhooks/test with the following payload:

{
  "type": "...",
  "endpoint": "https://www.example.com/"
}

Change type to the event type that you want to receive. MSR will then send an example payload to your specified endpoint. The example payload sent is always the same.

Content structure

Comments after (//) are for informational purposes only, and the example payloads have been clipped for brevity.

Repository event content structure

Tag push

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just pushed
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who pushed the tag
  "pushedAt": "",     // (string) JSON-encoded timestamp of when the push occurred
  ...
}

Tag delete

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag just deleted
  "digest": "",       // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "os": "",           // (string) the OS for the tag's manifest
  "architecture": "", // (string) the architecture for the tag's manifest
  "author": "",       // (string) the username of the person who deleted the tag
  "deletedAt": "",     // (string) JSON-encoded timestamp of when the delete occurred
  ...
}

Manifest push

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who pushed the manifest
  ...
}

Manifest delete

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "digest": "",       // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "os": "",           // (string) the OS for the manifest
  "architecture": "", // (string) the architecture for the manifest
  "author": "",       // (string) the username of the person who deleted the manifest
  "deletedAt": "",    // (string) JSON-encoded timestamp of when the delete occurred
  ...
}

Security scan completed

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag scanned
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
  "scanSummary": {
    "namespace": "",          // (string) repository's namespace/organization name
    "repository": "",         // (string) repository name
    "tag": "",                // (string) the name of the tag just pushed
    "critical": 0,            // (int) number of critical issues, where CVSS >= 7.0
    "major": 0,               // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
    "minor": 0,               // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
    "last_scan_status": 0,    // (int) enum; see scan status section
    "check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
    ...
  }
}

Security scan failed

{
  "namespace": "",    // (string) namespace/organization for the repository
  "repository": "",   // (string) repository name
  "tag": "",          // (string) the name of the tag scanned
  "imageName": "",    // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
  "error": "",        // (string) the error that occurred while scanning
  ...
}

Chart push

{
    "namespace": "foo",       // (string) namespace/organization for the repository
    "repository": "bar",      // (string) repository name
    "event": "CHART_PUSH",    // (string) event name
    "author": "exampleUser",  // (string) the username of the person who deleted the manifest
    "data": {
      "urls": [
        "http://example.com"  //
      ],
      "created": "2015-01-02T15:04:05Z",
      "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c" // (string) sha256 digest of the manifest of the helm chart (eg. "sha256:0afb...")
  }
}

Chart pull

{
  "namespace": "foo",
  "repository": "bar",
  "event": "CHART_PULL",
  "author": "exampleUser",
  "data": {
    "urls": [
      "http://example.com"
    ],
    "created": "2015-01-02T15:04:05Z",
    "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
  }
}

Chart linted

{
  "namespace": "foo",
  "repository": "bar",
  "event": "CHART_LINTED",
  "author": "exampleUser",
  "data": {
    "chartName": "test-chart",
    "chartVersion": "1.0"
  }
}

Chart delete

{
  "namespace": "foo",
  "repository": "bar",
  "event": "CHART_DELETE",
  "author": "exampleUser",
  "data": {
    "urls": [
      "http://example.com"
    ],
    "created": "2015-01-02T15:04:05Z",
    "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
  }
}
Namespace-specific event structure

Repository event (created/updated/deleted)

{
  "namespace": "",    // (string) repository's namespace/organization name
  "repository": "",   // (string) repository name
  "event": "",        // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
  "author": "",       // (string) the name of the user responsible for the event
  "data": {}          // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
Global event structure

Security scanner update complete

{
  "scanner_version": "",
  "scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
  "db_version": 0,          // (int) newly updated database version
  "db_updated_at": "",      // (string) JSON-encoded timestamp of when the database updated
  "success": <true|false>   // (bool) whether the update was successful
  "replicas": {             // (object) a map keyed by replica ID containing update information for each replica
    "replica_id": {
      "db_updated_at": "",  // (string) JSON-encoded time of when the replica updated
      "version": "",        // (string) version updated to
      "replica_id": ""      // (string) replica ID
    },
    ...
  }
}
Security scan status codes
  • 0: Failed. An error occurred while checking an image’s layer.

  • 1: Unscanned. The image has not yet been scanned.

  • 2: Scanning. Scanning is in progress.

  • 3: Pending. The image will be scanned when a worker is available.

  • 4: Scanned. The image has been scanned, but vulnerabilities have not yet been checked.

  • 5: Checking. The image is being checked for vulnerabilities.

  • 6: Completed. The image has been fully security scanned.
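When consuming last_scan_status from a scan payload, the enum above can be mapped to readable labels. A small lookup sketch (the function name is illustrative; the labels come from the list above):

```python
# last_scan_status values from the "Security scan completed" payload.
SCAN_STATUS = {
    0: "Failed",
    1: "Unscanned",
    2: "Scanning",
    3: "Pending",
    4: "Scanned",
    5: "Checking",
    6: "Completed",
}

def scan_status_label(code):
    """Translate a last_scan_status enum value into its label."""
    return SCAN_STATUS.get(code, f"Unknown ({code})")
```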

View and manage existing subscriptions
View all subscriptions

To view existing subscriptions, send a GET request to /api/v0/webhooks. As a normal user (that is, not an MSR admin), this shows all of your current subscriptions across every namespace/organization and repository. As an MSR admin, this shows every webhook configured for your MSR.

The API response will be in the following format:

[
  {
    "id": "",        // (string): UUID of the webhook subscription
    "type": "",      // (string): webhook event type
    "key": "",       // (string): the individual resource this subscription is scoped to
    "endpoint": "",  // (string): the endpoint to send POST event notifications to
    "authorID": "",  // (string): the user ID resposible for creating the subscription
    "createdAt": "", // (string): JSON-encoded datetime when the subscription was created
  },
  ...
]
View subscriptions for a particular resource

You can also view subscriptions for a given resource that you administer. For example, if you have admin rights to the repository “foo/bar”, you can view all of its subscriptions, including those created by other users, through the following API endpoints:

  • GET /api/v0/repositories/{namespace}/{repository}/webhooks: View all webhook subscriptions for a repository

  • GET /api/v0/repositories/{namespace}/webhooks: View all webhook subscriptions for a namespace/organization

Delete a subscription

To delete a webhook subscription, send a DELETE request to /api/v0/webhooks/{id}, replacing {id} with the webhook subscription ID which you would like to delete.

Only an MSR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only delete subscriptions for repositories that you manage.

Manage repository events

Audit repository events

Starting in DTR 2.6, each repository page includes an Activity tab which displays a sortable and paginated list of the most recent events within the repository. This offers better visibility along with the ability to audit events. Event types listed vary according to your repository permission level. Additionally, MSR admins can enable auto-deletion of repository events as part of maintenance and cleanup.

In the following section, we will show you how to view and audit the list of events in a repository. We will also cover the event types associated with your permission level.

View List of Events

As of DTR 2.3, admins could view a list of MSR events using the API. MSR 2.6 enhances that feature by showing a permission-based events list on each repository page in the web interface. To view the list of events within a repository, do the following:

  1. Navigate to https://<msr-url> and log in with your MSR credentials.

  2. Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you must click the repository name that follows the / after the namespace.

  3. Select the Activity tab. You should see a paginated list of the latest events based on your repository permission level. By default, Activity shows the latest 10 events and excludes pull events, which are only visible to repository and MSR admins.

    • If you are a repository admin or an MSR admin, uncheck Exclude pull to view pull events. This gives you a better understanding of who is consuming your images.

    • To update your event view, select a different time filter from the drop-down list.

Activity Stream

Each event includes the following details, illustrated here with a Create Promotion Policy event as the example:

  • Label: Friendly name of the event. Example: Create Promotion Policy.

  • Repository: Always the repository in review, following the <user-or-org>/<repository_name> convention outlined in Create a repository. Example: test-org/test-repo-1.

  • Tag: Tag affected by the event, when applicable. Example: test-org/test-repo-1:latest, where latest is the affected tag.

  • SHA: The digest value for CREATE operations such as creating a new image tag or a promotion policy. Example: sha256:bbf09ba3.

  • Type: Event type. Possible values are CREATE, GET, UPDATE, DELETE, SEND, FAIL, and SCAN. Example: CREATE.

  • Initiated by: The actor responsible for the event. For user-initiated events, this reflects the user ID and links to that user’s profile. For image events triggered by a policy (pruning, pull/push mirroring, or promotion), this reflects the relevant policy ID and links to the relevant policy page, except for manual promotions, where it reflects PROMOTION MANUAL_P. Other event actors may not include a link. Example: PROMOTION CA5E7822.

  • Date and Time: When the event happened, in your configured time zone. Example: 2018 9:59 PM.

Event Audits

Given the level of detail in each event, MSR and security admins can readily determine what events have taken place inside of MSR. For example, if an image that should not have been deleted is deleted, a security admin can determine when the deletion happened and who initiated it.

Event Permissions

  • Push: Refers to Create Manifest and Update Tag events. Learn more about pushing images. Minimum permission level: authenticated users.

  • Scan: Requires security scanning to be set up by an MSR admin. Once enabled, this displays as a SCAN event type. Minimum permission level: authenticated users.

  • Promotion: Refers to a Create Promotion Policy event, which links to the Promotions tab of the repository where you can edit the existing promotions. See Promotion Policies for different ways to promote an image. Minimum permission level: repository admin.

  • Delete: Refers to Delete Tag events. Learn more about deleting images. Minimum permission level: authenticated users.

  • Pull: Refers to Get Tag events. Learn more about pulling an image. Minimum permission level: repository admin.

  • Mirror: Refers to pull mirroring and push mirroring events. See Mirror images to another registry and Mirror images from another registry for more details. Minimum permission level: repository admin.

  • Create repo: Refers to Create Repository events. See Create a repository for more details. Minimum permission level: authenticated users.


Enable Auto-Deletion of Repository Events

Mirantis Secure Registry has a global setting for repository event auto-deletion, which allows event records to be removed as part of garbage collection. Starting in DTR 2.6, MSR administrators can enable auto-deletion of repository events based on the conditions specified below.

  1. In your browser, navigate to https://<msr-url> and log in with your admin credentials.

  2. Select System from the left-side navigation panel which displays the Settings page by default.

  3. Scroll down to Repository Events and turn on Auto-Deletion.

  4. Specify the conditions with which an event auto-deletion will be triggered.

MSR allows you to set your auto-deletion conditions based on the following optional repository event attributes:

  • Age: Remove events older than your specified number of hours, days, weeks, or months. Example: 2 months.

  • Max number of events: The maximum number of events allowed in the repositories. Example: 6000.

If you specify both conditions, events in your repositories are removed during garbage collection whenever either condition is met.

  5. Click Start GC when you are ready. You should see a confirmation message right away.

  6. Navigate to System > Job Logs to confirm that onlinegc has run.
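The either-condition rule can be sketched as a filter over event records: an event survives garbage collection only if it is both younger than the age limit and within the count cap. All names here are illustrative, not MSR internals:

```python
from datetime import datetime, timedelta

def events_to_keep(events, max_age=None, max_count=None):
    """Return the events that survive auto-deletion.

    events: list of (timestamp, payload) pairs, newest first.
    An event is removed if it exceeds max_age OR falls beyond max_count.
    """
    now = datetime.now()
    kept = []
    for index, (timestamp, payload) in enumerate(events):
        if max_age is not None and now - timestamp > max_age:
            continue  # too old: removed by the age condition
        if max_count is not None and index >= max_count:
            continue  # beyond the cap: removed by the count condition
        kept.append((timestamp, payload))
    return kept
```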


Promotion policies and monitoring

Promotion policies overview

Mirantis Secure Registry allows you to automatically promote and mirror images based on a policy, so that you can create a Docker-centric development pipeline. In MSR 2.7, you also have the option to promote applications with the experimental docker app CLI addition. Note that scanning-based promotion policies do not take effect until all application-bundled images have been scanned.

You can mix and match promotion policies, mirroring policies, and webhooks to create flexible development pipelines that integrate with your existing CI/CD systems.

Promote an image using policies

One way to create a promotion pipeline is to automatically promote images to another repository.

You start by defining a promotion policy that’s specific to a repository. When someone pushes an image to that repository, MSR checks if it complies with the policy you set up and automatically pushes the image to another repository.

Learn how to promote an image using policies.

Mirror images to another registry

You can also promote images between different MSR deployments. This not only allows you to create promotion policies that span multiple MSRs, but also allows you to mirror images for security and high availability.

You start by configuring a repository with a mirroring policy. When someone pushes an image to that repository, MSR checks if the policy is met, and if so pushes it to another MSR deployment or Docker Hub.

Learn how to mirror images to another registry.

Mirror images from another registry

Another option is to mirror images from another MSR deployment. You configure a repository to poll for changes in a remote repository. All new images pushed into the remote repository are then pulled into MSR.

This is an easy way to configure a mirror for high availability since you won’t need to change firewall rules that are in place for your environments.

Learn how to mirror images from another registry.

Promote an image using policies

Mirantis Secure Registry allows you to create image promotion pipelines based on policies.

In this example we will create an image promotion pipeline such that:

  1. Developers iterate and push their builds to the dev/website repository.

  2. When the team creates a stable build, they make sure their image is tagged with -stable.

  3. When a stable build is pushed to the dev/website repository, it will automatically be promoted to qa/website so that the QA team can start testing.

With this promotion policy, the development team doesn’t need access to the QA repositories, and the QA team doesn’t need access to the development repositories.
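The policy check in this example amounts to a predicate on the pushed tag. A sketch of the "Tag name ends with" criterion (the function is purely illustrative; MSR evaluates promotion policies server-side):

```python
def should_promote(tag, suffix="stable"):
    """Return True when a pushed tag meets the 'Tag name ends with' criterion."""
    return tag.endswith(suffix)

# Builds tagged -stable would be promoted from dev/website to qa/website;
# ordinary development tags would not.
```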

Configure your repository

Once you’ve created a repository, navigate to the repository page on the MSR web interface, and select the Promotions tab.

Note

Only administrators can create and edit promotion policies globally. By default, users can create and edit promotion policies only on repositories within their own user namespace.

Click New promotion policy, and define the image promotion criteria.

MSR allows you to set your promotion policy based on the following image attributes:

  • Tag name: Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values. Example: Promote to Target if Tag name ends in stable.

  • Component: Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values. Example: Promote to Target if Component name starts with b.

  • Vulnerabilities: Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number. Example: Promote to Target if Critical vulnerabilities = 3.

  • License: Whether the image uses an intellectual property license and is one of or not one of your specified words. Example: Promote to Target if License name = docker.

Now you need to choose what happens to an image that meets all the criteria.

Select the target organization or namespace and repository where the image is going to be pushed. You can choose to keep the image tag, or transform the tag into something more meaningful in the destination repository, by using a tag template.

In this example, if an image in the dev/website is tagged with a word that ends in “stable”, MSR will automatically push that image to the qa/website repository. In the destination repository the image will be tagged with the timestamp of when the image was promoted.

Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted. To confirm, select the Promotions tab on the dev/website repository.

You can also review the newly pushed tag in the target repository by navigating to qa/website and selecting the Tags tab.


Mirror images to another registry

Mirantis Secure Registry allows you to create mirroring policies for a repository. When an image gets pushed to a repository and meets the mirroring criteria, MSR automatically pushes it to a repository in a remote Mirantis Secure Registry or Hub registry.

This not only allows you to mirror images but also allows you to create image promotion pipelines that span multiple MSR deployments and datacenters.

In this example we will create an image mirroring policy such that:

  1. Developers iterate and push their builds to the msr-example.com/dev/website repository in the MSR deployment dedicated to development.

  2. When the team creates a stable build, they make sure their image is tagged with -stable.

  3. When a stable build is pushed to msr-example.com/dev/website, it will automatically be pushed to qa-example.com/qa/website, mirroring the image and promoting it to the next stage of development.

With this mirroring policy, the development team does not need access to the QA cluster, and the QA team does not need access to the development cluster.

You need to have permissions to push to the destination repository in order to set up the mirroring policy.

Configure your repository connection

Once you have created a repository, navigate to the repository page on the web interface, and select the Mirrors tab.

Click New mirror to define where the image will be pushed if it meets the mirroring criteria.

Under Mirror direction, choose Push to remote registry. Specify the following details:

Field

Description

Registry type

You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. If you choose Docker Hub, the URL defaults to https://index.docker.io

Username and password or access token

Your credentials for the remote repository you want to push to. To use an access token instead of your password, see authentication token.

Repository

Enter the namespace and the repository name, separated by a / (for example, namespace/repository_name)

Show advanced settings

Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed TLS certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing https://<msr-domain>/ca. Remote certificate authority is optional for a remote repository in Docker Hub.

Note

Make sure the account you use for the integration has permissions to write to the remote repository.

Click Connect to test the integration.

In this example, the image gets pushed to the qa/example repository of an MSR deployment available at qa-example.com using a service account that was created just for mirroring images between repositories.

Next, set your push triggers. MSR allows you to set your mirroring policy based on the following image attributes:

Name

Description

Example

Tag name

Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Copy image to remote repository if Tag name ends in stable

Component

Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Copy image to remote repository if Component name starts with b

Vulnerabilities

Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number

Copy image to remote repository if Critical vulnerabilities = 3

License

Whether the image uses an intellectual property license and is one of or not one of your specified words

Copy image to remote repository if License name = docker
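Each trigger above reduces to a simple string or number comparison. As an illustration only (the tag names below are made up), the “Tag name ends in stable” trigger amounts to a suffix check:

```shell
# Suffix check equivalent to a "Tag name ends in stable" push trigger.
# The tag values are illustrative, not taken from any real repository.
MATCHED=""
for TAG in 1.0-stable 2.0-beta nightly-stable; do
  case "$TAG" in
    *stable) MATCHED="${MATCHED:+$MATCHED }$TAG" ;;  # would be mirrored
  esac
done
echo "would mirror: $MATCHED"
```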

You can choose to keep the image tag, or transform the tag into something more meaningful in the remote registry by using a tag template.

In this example, if an image in the dev/website repository is tagged with a word that ends in stable, MSR will automatically push that image to the MSR deployment available at qa-example.com. The image is pushed to the qa/example repository and is tagged with the timestamp of when the image was promoted.

Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted to qa/example in the remote trusted registry at qa-example.com.

Metadata persistence

When an image is pushed to another registry using a mirroring policy, scanning and signing data is not persisted in the destination repository.

If you have scanning enabled for the destination repository, MSR scans the pushed image. If you want the image to be signed, you must do so manually.

Where to go next

Mirror images from another registry.

Mirror images from another registry

Mirantis Secure Registry allows you to set up a mirror of a repository by constantly polling it and pulling new image tags as they are pushed. This ensures your images are replicated across different registries for high availability. It also makes it easy to create a development pipeline that allows different users access to a certain image without giving them access to everything in the remote registry.

To mirror a repository, start by creating a repository in the MSR deployment that will serve as your mirror. Previously, pull mirroring could only be set up through the API. Starting in DTR 2.6, you can also mirror and pull from a remote MSR or Docker Hub repository through the web interface.

Pull mirroring on the web interface

To get started, navigate to https://<msr-url> and log in with your MKE credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository you want to view. Note that the repository name is the portion that follows the / after your namespace.

Next, select the Mirrors tab and click New mirror. On the New mirror page, choose Pull from remote registry.

Specify the following details:

Field

Description

Registry type

You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. If you choose Docker Hub, the URL defaults to https://index.docker.io

Username and password or access token

Your credentials for the remote repository you want to poll from. To use an access token instead of your password, see authentication token.

Repository

Enter the namespace and the repository name, separated by a / (for example, namespace/repository_name)

Show advanced settings

Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing https://<msr-domain>/ca. Remote certificate authority is optional for a remote repository in Docker Hub.

After you have filled out the details, click Connect to test the integration.

Once you have successfully connected to the remote repository, new buttons appear:

  • Click Save to mirror future tags only, or

  • To mirror all existing and future tags, click Save & Apply instead.

Pull mirroring on the API

There are a few different ways to send your MSR API requests. To explore the different API resources and endpoints from the web interface, click API on the bottom left-side navigation panel.

Search for the endpoint:

POST /api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies

Click Try it out and enter your HTTP request details. namespace and reponame refer to the repository that will be poll mirrored. The boolean field initialEvaluation corresponds to Save when set to false, and will only mirror images created after your API request. Setting it to true corresponds to Save & Apply, which means all tags in the remote repository will be evaluated and mirrored. The other body parameters correspond to the relevant remote repository details that you can see on the MSR web interface. As a best practice, use a service account created just for this purpose, and pass an authentication token instead of the account password.

If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can get it by accessing https://<msr-domain>/ca. The remoteCA field is optional for mirroring a Docker Hub repository.
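The request body can be sketched as below. Only initialEvaluation and remoteCA are named in the documentation above; the remaining field names here are hypothetical placeholders for the remote repository details shown in the web interface, so check the API reference in your deployment for the exact schema.

```shell
# Illustrative request body for the pollMirroringPolicies endpoint.
# "username" and "authToken" are hypothetical field names; "initialEvaluation"
# and "remoteCA" are documented above.
BODY='{
  "initialEvaluation": true,
  "username": "mirror-svc",
  "authToken": "<token>",
  "remoteCA": "<PEM certificate, if needed>"
}'
echo "$BODY"
```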

Click Execute. On success, the API returns an HTTP 201 response.

Review the poll mirror job log

Once configured, the system polls for changes in the remote repository and runs the poll_mirror job every 30 minutes. On success, the system pulls in new images and mirrors them in your local repository. Starting in DTR 2.6, you can filter for poll_mirror jobs to review when the job was last run. To manually trigger the job and force pull mirroring, use the POST /api/v0/jobs API endpoint and specify poll_mirror as your action.

curl -X POST "https://<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"poll_mirror\"}"

See Manage jobs to learn more about job management within MSR.

Where to go next

Mirror images to another registry.

Template reference

When defining promotion policies you can use templates to dynamically name the tag that is going to be created.

Important

Whenever an image promotion event occurs, the MSR timestamp for the event is in UTC (Coordinated Universal Time). That timestamp, however, is converted by the browser and presented in the user’s time zone. Conversely, if a time-based tag is applied to a target image, MSR captures it in UTC but cannot convert it to the user’s time zone, as tags are immutable strings.

You can use these template keywords to define your new tag:

Template

Description

Example result

%n

The tag to promote

1, 4.5, latest

%A

Day of the week

Sunday, Monday

%a

Day of the week, abbreviated

Sun, Mon, Tue

%w

Day of the week, as a number

0, 1, 6

%d

Number for the day of the month

01, 15, 31

%B

Month

January, December

%b

Month, abbreviated

Jan, Jun, Dec

%m

Month, as a number

01, 06, 12

%Y

Year

1999, 2015, 2048

%y

Year, two digits

99, 15, 48

%H

Hour, in 24 hour format

00, 12, 23

%I

Hour, in 12 hour format

01, 10, 10

%p

Period of the day

AM, PM

%M

Minute

00, 10, 59

%S

Second

00, 10, 59

%f

Microsecond

000000, 999999

%Z

Name for the timezone

UTC, PST, EST

%j

Day of the year

001, 200, 366

%W

Week of the year

00, 10, 53
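Because most of these keywords match standard strftime codes, you can preview how a template will render using date(1). The %n keyword (the promoted tag) is not a date code, so substitute it first; the template and tag below are illustrative.

```shell
# Preview a hypothetical tag template "%n-%Y%m%d": substitute %n (the tag
# being promoted) by hand, then let date(1) expand the strftime-style codes.
# Note that date(1) itself treats %n as a newline, so the substitution order
# matters.
SOURCE_TAG="latest"
TEMPLATE="%n-%Y%m%d"
NEW_TAG=$(date -u +"$(printf '%s\n' "$TEMPLATE" | sed "s/%n/$SOURCE_TAG/")")
echo "$NEW_TAG"
```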

Use Helm charts

Helm is a tool that manages Kubernetes packages called charts, which are put to use in defining, installing, and upgrading Kubernetes applications. These charts, in conjunction with Helm tooling, deploy applications into Kubernetes clusters. Charts consist of a collection of files and directories, arranged in a particular structure and packaged as a .tgz file. Charts define Kubernetes objects, such as the Service and DaemonSet objects used in the application under deployment.

MSR enables you to use Helm to store and serve Helm charts, thus allowing users to push charts to and pull charts from MSR repositories using the Helm CLI and the MSR API.

MSR supports both Helm v2 and v3. The two versions differ significantly with regard to the Helm CLI, which affects the applications under deployment rather than Helm chart support in MSR. One key difference is that while Helm v2 includes both the Helm CLI and Tiller (Helm Server), Helm v3 includes only the Helm CLI. Helm charts (referred to as releases following their installation in Kubernetes) are managed by Tiller in Helm v2 and by Helm CLI in Helm v3.

Note

For a breakdown of the key differences between Helm v2 and Helm v3, refer to Helm official documentation.

Add a Helm chart repository

Users can add a Helm chart repository to MSR through the MSR web UI.

  1. Log in to the MSR web UI.

  2. Click Repositories in the left-side navigation panel.

  3. Click New repository.

  4. In the name field, enter the name for the new repository and click Create.

  5. To add the new MSR repository as a Helm repository:

    helm repo add <reponame> https://<msrhost>/charts/<namespace>/<reponame> --username <username> --password <password> --ca-file ca.crt
    
    "<reponame>" has been added to your repositories
    
  6. To verify that the new MSR Helm repository has been added:

    helm repo list
    
    NAME        URL
    <reponame>  https://<msrhost>/charts/<namespace>/<reponame>
    

Pull charts and their provenance files

Helm charts can be pulled from MSR Helm repositories using either the MSR API or the Helm CLI.

Pull with the MSR API

Note

Though the MSR API can be used to pull both Helm charts and provenance files, it is not possible to use it to pull both at the same time.

Pull a chart

To pull a Helm chart:

curl -u <username>:<password> \
  --request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz \
  -H "accept: application/octet-stream" \
  -o <chartname>-<chartversion>.tgz \
  --cacert ca.crt
Pull a provenance file

To pull a provenance file:

curl -u <username>:<password> \
  --request GET https://msrhost/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz.prov \
  -H "accept: application/octet-stream" \
  -o <chartname>-<chartversion>.tgz.prov \
  --cacert ca.crt
Pull with the Helm CLI

Note

Though the Helm CLI can be used to pull a Helm chart by itself or a Helm chart and its provenance file, it is not possible to use the Helm CLI to pull a provenance file by itself.

Pull a chart

Use the helm pull CLI command to pull a Helm chart:

helm pull <reponame>/<chartname> --version <chartversion>
ls
ca.crt  <chartname>-<chartversion>.tgz

Alternatively, use the following command:

helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt
Pull a chart and a provenance file in tandem

Use the helm pull CLI command with the --prov option to pull a Helm chart and a provenance file at the same time:

helm pull <reponame>/<chartname> --version <chartversion> --prov

ls
ca.crt  <chartname>-<chartversion>.tgz  <chartname>-<chartversion>.tgz.prov

Alternatively, use the following command:

helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt --prov

Push charts and their provenance files

You can use the MSR API or the Helm CLI to push Helm charts and their provenance files to an MSR Helm repository.

Note

Pushing and pulling Helm charts can be done with or without a provenance file.

Push charts with the MSR API

Using the MSR API, you can push Helm charts with application/octet-stream or multipart/form-data.

Push with application/octet-stream

To push a Helm chart through the MSR API with application/octet-stream:

curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Push with multipart/form-data

To push a Helm chart through the MSR API with multipart/form-data:

curl -F "chart=@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force push a chart

To overwrite an existing chart, turn off repository immutability and include a ?force query parameter in the HTTP request.

  1. Navigate to Repositories and click the Settings tab.

  2. Under Immutability, select Off.

To force push a Helm chart using the MSR API:

curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Push provenance files with the MSR API

You can use the MSR API to separately push provenance files related to Helm charts.

To push a provenance file through the MSR API:

curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/prov -u <username>:<password> --cacert ca.crt

Note

Attempting to push a provenance file for a nonexistent chart will result in an error.

Force push a provenance file

To force push a provenance file using the MSR API:

curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/prov?force" -u <username>:<password> --cacert ca.crt
Push a chart and its provenance file with a single API request

To push a Helm chart and a provenance file with a single API request:

curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" https://msrhost/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force push a chart and a provenance file

To force push both a Helm chart and a provenance file using a single API request:

curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Push charts with the Helm CLI

Note

To push a Helm chart using the Helm CLI, first install the helm cm-push plugin from chartmuseum/helm-push. It is not possible to push a provenance file using the Helm CLI.

Use the helm cm-push CLI command to push a Helm chart:

helm cm-push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt
Force push a chart

Use the helm cm-push CLI command with the --force option to force push a Helm chart:

helm cm-push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt --force

View charts in a Helm repository

View charts in a Helm repository using either the MSR API or the MSR web UI.

Viewing charts with the MSR API

To view charts that have been pushed to a Helm repository using the MSR API, consider the following options:

Option

CLI command

View the index file

curl --request GET
https://<msrhost>/charts/<namespace>/<reponame>/index.yaml -u
<username>:<password> --cacert ca.crt

View a paginated list of all charts

curl --request GET
https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u
<username>:<password> --cacert ca.crt

View a paginated list of chart versions

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname> -u <username>:<password> \
--cacert ca.crt

Describe a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion> -u \
<username>:<password> --cacert ca.crt

Return the default values of a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/values -u \
<username>:<password> --cacert ca.crt

Produce a template of a version of a particular chart

curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/template -u \
<username>:<password> --cacert ca.crt
Viewing charts with the MSR web UI

Use the MSR web UI to view the MSR Helm repository charts.

  1. In the MSR web UI, navigate to Repositories.

  2. Click the name of the repository that contains the charts you want to view. The page will refresh to display the detail for the selected Helm repository.

  3. Click the Charts tab. The page will refresh to display all the repository charts.

View

UI sequence

Chart versions

Click the View Chart button associated with the required Helm repository.

Chart description

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

Default values

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

  3. Click Configuration.

Chart templates

  1. Click the View Chart button associated with the required Helm repository.

  2. Click the View Chart button for the particular chart version.

  3. Click Template.

Delete charts from a Helm repository

You can only delete charts from MSR Helm repositories using the MSR API, not the web UI.

To delete a version of a particular chart from a Helm repository through the MSR API:

curl --request DELETE https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion> -u <username>:<password> --cacert ca.crt

Helm chart linting

Helm chart linting can ensure that Kubernetes YAML files and Helm charts adhere to a set of best practices, with a focus on production readiness and security.

A set of established rules forms the basis of Helm chart linting. The process generates a report that you can use to take any necessary actions.

Implement Helm linting

Perform Helm linting using either the MSR web UI or the MSR API.

Helm linting with the web UI
  1. Open the MSR web UI.

  2. Navigate to Repositories.

  3. Click the name of the repository that contains the chart you want to lint.

  4. Click the Charts tab.

  5. Click the View Chart button associated with the required Helm chart.

  6. Click the View Chart button for the required chart version.

  7. Click the Linting Summary tab.

  8. Click the Lint Chart button to generate a Helm chart linting report.

Helm linting with the API
  1. Run the Helm chart linter on a particular chart.

    curl -k -H "Content-Type: application/json" --request POST "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lint" -u <username>:<password>
    
  2. Generate a Helm chart linting report.

    curl -k -X GET "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lintsummary" -u <username>:<password>
    
Helm chart linting rules

Helm linting reports present the linting rules, rule descriptions, and remediations, as laid out in the following table.

Name

Description

Remediation

dangling-service

Indicates when services do not have any associated deployments.

Confirm that your service’s selector correctly matches the labels on one of your deployments.

default-service-account

Indicates when pods use the default service account.

Create a dedicated service account for your pod. Refer to Configure Service Accounts for Pods for details.

deprecated-service-account-field

Indicates when deployments use the deprecated serviceAccount field.

Use the serviceAccountName field instead.

drop-net-raw-capability

Indicates when containers do not drop NET_RAW capability.

NET_RAW enables an application within the container to craft raw packets, use raw sockets, and bind to any address. Remove this capability in the containers’ security contexts.

env-var-secret

Indicates when objects use a secret in an environment variable.

Do not use raw secrets in environment variables. Instead, either mount the secret as a file or use a secretKeyRef. Refer to Using Secrets for details.

mismatching-selector

Indicates when deployment selectors fail to match the pod template labels.

Confirm that your deployment selector correctly matches the labels in its pod template.

no-anti-affinity

Indicates when deployments with multiple replicas fail to specify inter-pod anti-affinity, to ensure that the orchestrator attempts to schedule replicas on different nodes.

Specify anti-affinity in your pod specification to ensure that the orchestrator attempts to schedule replicas on different nodes. Using podAntiAffinity, specify a labelSelector that matches pods for the deployment, and set the topologyKey to kubernetes.io/hostname. Refer to Inter-pod affinity and anti-affinity for details.

no-extensions-v1beta

Indicates when objects use deprecated API versions under extensions/ v1beta.

Migrate using the apps/v1 API versions for the objects. Refer to Deprecated APIs Removed In 1.16 for details.

no-liveness-probe

Indicates when containers fail to specify a liveness probe.

Specify a liveness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details.

no-read-only-root-fs

Indicates when containers are running without a read-only root filesystem.

Set readOnlyRootFilesystem to true in the container securityContext.

no-readiness-probe

Indicates when containers fail to specify a readiness probe.

Specify a readiness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details.

non-existent-service-account

Indicates when pods reference a service account that is not found.

Create the missing service account, or refer to an existing service account.

privileged-container

Indicates when deployments have containers running in privileged mode.

Do not run your container as privileged unless it is required.

required-annotation-email

Indicates when objects do not have an email annotation with a valid email address.

Add an email annotation to your object with the email address of the object’s owner.

required-label-owner

Indicates when objects do not have an owner label.

Add an owner label to your object with the name of the object’s owner.

run-as-non-root

Indicates when containers are not set to runAsNonRoot.

Set runAsUser to a non-zero number and runAsNonRoot to true in your pod or container securityContext. Refer to Configure a Security Context for a Pod or Container for details.

ssh-port

Indicates when deployments expose port 22, which is commonly reserved for SSH access.

Ensure that non-SSH services are not using port 22. Confirm that any actual SSH servers have been vetted.

unset-cpu-requirements

Indicates when containers do not have CPU requests and limits set.

Set CPU requests and limits for your container based on its requirements. Refer to Requests and limits for details.

unset-memory-requirements

Indicates when containers do not have memory requests and limits set.

Set memory requests and limits for your container based on its requirements. Refer to Requests and limits for details.

writable-host-mount

Indicates when containers mount a host path as writable.

Set containers to mount host paths as readOnly, if you need to access files on the host.

cluster-admin-role-binding

CIS Benchmark 5.1.1 Ensure that the cluster-admin role is only used where required.

Create and assign a separate role that has access to specific resources/actions needed for the service account.

docker-sock

Alert on deployments with docker.sock mounted in containers.

Ensure the Docker socket is not mounted inside any containers by removing the associated Volume and VolumeMount in the deployment YAML specification. If the Docker socket is mounted inside a container, processes running within the container can execute Docker commands, effectively allowing full control of the host.

exposed-services

Alert on services for forbidden types.

Ensure containers are not exposed through a forbidden service type such as NodePort or LoadBalancer.

host-ipc

Alert on pods/deployment-likes with sharing host’s IPC namespace.

Ensure the host’s IPC namespace is not shared.

host-network

Alert on pods/deployment-likes with sharing host’s network namespace.

Ensure the host’s network namespace is not shared.

host-pid

Alert on pods/deployment-likes with sharing host’s process namespace.

Ensure the host’s process namespace is not shared.

privilege-escalation-container

Alert on containers that allow privilege escalation, through which a process can gain more privileges than its parent process.

Ensure containers do not allow privilege escalation by setting allowPrivilegeEscalation=false. See Configure a Security Context for a Pod or Container for more details.

privileged-ports

Alert on deployments with privileged ports mapped in containers.

Ensure privileged ports [0, 1024] are not mapped within containers.

sensitive-host-mounts

Alert on deployments with sensitive host system directories mounted in containers.

Ensure sensitive host system directories are not mounted in containers by removing those Volumes and VolumeMounts.

unsafe-proc-mount

Alert on deployments with an unsafe /proc mount (procMount=Unmasked) that bypasses the default masking behavior of the container runtime.

Ensure the container does not unsafely expose parts of /proc by setting procMount=Default. An Unmasked procMount bypasses the default masking behavior of the container runtime. See Pod Security Standards for more details.

unsafe-sysctls

Alert on deployments specifying unsafe sysctls that may lead to severe problems, such as incorrect container behavior.

Ensure the container does not allow unsafe allocation of system resources by removing unsafe sysctls configurations. For more details, see Using sysctls in a Kubernetes Cluster and Configure namespaced kernel parameters (sysctls) at runtime.

Helm limitations

Storage redirects

The option to redirect clients on pull for Helm repositories is present in the web UI. However, it is currently ineffective. Refer to the relevant issue on GitHub for more information.

MSR API endpoints

For the following endpoints, note that while the Swagger API Reference does not specify example responses for HTTP 200 codes, this is due to a Swagger bug and responses will be returned.

# Get chart or provenance file from repo
GET     https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>
# Template a chart version
GET     https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Chart storage limit

Users can safely store up to 100,000 charts per repository; storing a greater number may compromise some MSR functionality.

Tag pruning

Tag pruning is the process of cleaning up unnecessary or unwanted repository tags. As of v2.6, you can configure Mirantis Secure Registry (MSR) to automatically perform tag pruning on repositories that you manage by:

  • Specifying a tag pruning policy, or

  • Setting a tag limit

Note

When run, tag pruning only deletes a tag and does not carry out any actual blob deletion.

Known Issue

While the tag limit field is disabled when you turn on immutability for a new repository, this is currently not the case with Repository Settings. As a workaround, turn off immutability when setting a tag limit via Repository Settings > Pruning.

In the following section, we will cover how to specify a tag pruning policy and set a tag limit on repositories that you manage. It will not include modifying or deleting a tag pruning policy.

Specify a tag pruning policy

As a repository administrator, you can now add tag pruning policies on each repository that you manage. To get started, navigate to https://<msr-url> and log in with your credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository you want to update. Note that the repository name is the portion that follows the / after your namespace.

Select the Pruning tab, and click New pruning policy to specify your tag pruning criteria:

MSR allows you to set your pruning triggers based on the following image attributes:

Image attributes

Name

Description

Example

Tag name

Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Tag name = test

Component name

Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values

Component name starts with b

Vulnerabilities

Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number

Critical vulnerabilities = 3

License

Whether the image uses an intellectual property license and is one of or not one of your specified words

License name = docker

Last updated at

Whether the last image update was before your specified number of hours, days, weeks, or months. For details on valid time units, see Go’s ParseDuration function

Last updated at: Hours = 12

Specify one or more image attributes to add to your pruning criteria, then choose:

  • Prune future tags to save the policy and apply your selection to future tags. Only matching tags after the policy addition will be pruned during garbage collection.

  • Prune all tags to save the policy, and evaluate both existing and future tags on your repository.

Upon selection, you will see a confirmation message and will be redirected to your newly updated Pruning tab.

If you have specified multiple pruning policies on the repository, the Pruning tab will display a list of your prune triggers and details on when the last tag pruning was performed based on the trigger, a toggle for deactivating or reactivating the trigger, and a View link for modifying or deleting your selected trigger.

All tag pruning policies on your account are evaluated every 15 minutes. Any qualifying tags are then deleted from the metadata store. If a tag pruning policy is created or modified, the tag pruning policies for the affected repository are re-evaluated.
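The periodic evaluation described above can be sketched as a filter over a repository’s tags. The example below applies a “Last updated at: Hours = 12” style rule; the tag names and timestamps are illustrative:

```shell
# Sketch of a "Last updated at: Hours = 12" pruning rule: tags whose last
# update is older than 12 hours qualify for pruning. Tag names and epoch
# timestamps are illustrative.
NOW=$(date +%s)
CUTOFF=$((NOW - 12 * 3600))
PRUNED=""
for entry in "old-tag:$((NOW - 86400))" "fresh-tag:$((NOW - 60))"; do
  tag=${entry%%:*}        # tag name before the colon
  updated=${entry##*:}    # last-updated epoch after the colon
  if [ "$updated" -lt "$CUTOFF" ]; then
    PRUNED="${PRUNED:+$PRUNED }$tag"
  fi
done
echo "pruned: $PRUNED"
```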

Set a tag limit

In addition to pruning policies, you can also set tag limits on repositories that you manage to restrict the number of tags on a given repository. Repository tag limits are processed in a first in first out (FIFO) manner. For example, if you set a tag limit of 2, adding a third tag would push out the first.
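
The FIFO behavior of a tag limit can be sketched in a few lines of Python (an illustration only; the function and tag names are hypothetical, not part of MSR):

```python
from collections import deque

def apply_tag_limit(pushed_tags, limit):
    """Illustrative FIFO tag limit: once the limit is reached,
    each new tag pushes out the oldest remaining tag."""
    kept = deque(maxlen=limit)  # a full deque discards its oldest entry
    for tag in pushed_tags:     # tags in the order they were pushed
        kept.append(tag)
    return list(kept)

# With a tag limit of 2, pushing a third tag evicts the first.
print(apply_tag_limit(["1.0", "1.1", "1.2"], 2))  # ['1.1', '1.2']
```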

To set a tag limit, do the following:

  1. Select the repository that you want to update and click the Settings tab.

  2. Turn off immutability for the repository.

  3. Specify a number in the Pruning section and click Save. The Pruning tab will now display your tag limit above the prune triggers list along with a link to modify this setting.

Vulnerability scanning

In addition to its primary function of storing Docker images, MSR offers a deeply integrated vulnerability scanner that analyzes container images, either by manual user request or automatically whenever an image is uploaded to the registry.

MSR image scanning occurs in a service known as the dtr-jobrunner container. To scan an image, MSR:

  • Extracts a copy of the image layers from backend storage.

  • Extracts the files from the layer into a working directory inside the dtr-jobrunner container.

  • Executes the scanner against the files in this working directory, collecting a series of scanning data. Once the scanning data is collected, the working directory for the layer is removed.

Important

In scanning images for security vulnerabilities, MSR temporarily extracts the contents of your images to disk. If malware is contained in these images, external malware scanners may wrongly attribute that malware to MSR. The key indication of this is the detection of malware in the dtr-jobrunner container in /tmp/findlib-workdir-*. To prevent any recurrence of the issue, Mirantis recommends configuring the run-time scanner to exclude files found in the MSR dtr-jobrunner containers in /tmp or more specifically, if wildcards can be used, /tmp/findlib-workdir-*.

Scanner reporting

You can review vulnerability scanning results and submit those results to Mirantis Customer Support to help with the troubleshooting process.

Possible scanner report issues include:

  • Scanner crashes

  • Improperly extracted containers

  • Improperly detected components

  • Incorrectly matched backport

  • Vulnerabilities improperly matched to components

  • Vulnerability false positives

Export a scanner report

You can export a scanner report as a JSON (for support and diagnostics) or a CSV file (for processing using Windows or Linux shell scripts).

  1. Sign in to MSR.

  2. Navigate to Repositories > <repo-name> > Tags.

  3. Click View Details for the required image.

  4. Click Export Report and select Export as JSON or Export as CSV.

    Find the report as either scannerReport.json (for JSON) or scannerReport.txt (for CSV) in your browser downloads directory.
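
As an example of the kind of scripted processing the CSV export enables, the following sketch tallies report rows by a severity column. The column name is an assumption; inspect the header row of your exported scannerReport.txt to confirm the actual field names.

```python
import csv

def count_by_column(report_path, column="severity"):
    """Tally rows in an exported scanner report CSV by the given column.

    NOTE: 'severity' is a hypothetical column name; check the actual
    header row of scannerReport.txt before relying on it.
    """
    counts = {}
    with open(report_path, newline="") as fh:
        for row in csv.DictReader(fh):
            key = (row.get(column) or "unknown").lower()
            counts[key] = counts.get(key, 0) + 1
    return counts
```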

Submit a scanner report

You can send a scanner report directly to Mirantis Customer Support to assist in their troubleshooting efforts.

  1. Sign in to MSR.

  2. Navigate to the View Details page and click the Components tab.

  3. Click Show layers affected for the layer you want to report.

  4. Click Report Issue. A pop-up window displays with the fields detailed in the following table:

    Field

    Description

    Component

    The Component field is automatically filled out and is not editable. If the information is incorrect, make a note in the Additional info field.

    Reported version or date

    The Reported version or date field is automatically filled out and is not editable. If the information is incorrect, make a note in the Additional info field.

    Report layer

    Indicate the image or image layer. Options include: Omit layer, Include layer, Include image.

    False Positive(s)

    Optional. Select from the drop-down menu all CVEs you suspect are false positives. Toggle the False Positive(s) control to edit the field.

    Missing Issue(s)

    Optional. List CVEs you suspect are missing from the report. Enter CVEs in the format CVE-yyyy-#### or CVE-yyyy-##### and separate each CVE with a comma. Toggle the Missing Issue(s) control to edit the field.

    Incorrect Component Version

    Optional. Enter any incorrect component version information in the Missing Issue(s) field. Toggle the Incorrect Component Version control to edit the field.

    Additional info

    Optional. Indicate anything else that does not pertain to the other fields. Toggle the Additional info control to edit this field.

  5. Fill out the fields in the pop-up window and click Submit.

MSR generates a JSON-formatted scanner report, which it bundles into a file together with the scan data. This file downloads to your local drive, at which point you can share it as needed with Mirantis Customer Support.

Important

To submit a scanner report along with the associated image, bundle the items into a .tgz file and include that file in a new Mirantis Customer Support ticket.

To download the relevant image:

docker save -o <image-name>.tar <msr-address>/<user>/<image-name>:<tag>

To bundle the report and image as a .tgz file:

tar -cvzf scannerIssuesReport.tgz <image-name>.tar scannerIssuesReport.json

Image enforcement policies and monitoring

MSR users can automatically block clients from pulling images stored in the registry by configuring enforcement policies at either the global or repository level.

An enforcement policy is a collection of rules used to determine whether an image can be pulled.

An enforcement policy is useful, for example, when an administrator wants to house images in MSR but does not want MSR users to pull those images into their environments. In this case, the administrator configures an enforcement policy at either the global or repository level, based on a selected set of rules.

Enforcement policies: global versus repository

Global image enforcement policies differ from those set at the repository level in several important respects:

  • Whereas both administrators and regular users can set up enforcement policies at the repository level, only administrators can set up enforcement policies at the global level.

  • Only one global enforcement policy can be set for each MSR instance, whereas multiple enforcement policies can be configured at the repository level.

  • Global enforcement policies are evaluated prior to repository policies.

Enforcement policy rule attributes

Global and repository enforcement policies are generated from the same set of rule attributes.

Note

All rules must evaluate to true for an image to be pulled; if any rule evaluates to false, the image pull is blocked.
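
The evaluation semantics amount to a logical AND over the configured rules, as in this sketch (the rule predicates and image fields shown are illustrative, not MSR's internal representation):

```python
def pull_allowed(image, rules):
    """An image may be pulled only if every rule evaluates to true."""
    return all(rule(image) for rule in rules)

# Two illustrative rules: permit only non-'dev' tags and images with
# fewer than 3 critical CVSS 3 vulnerabilities.
rules = [
    lambda img: not img["tag"].startswith("dev"),
    lambda img: img["critical_cvss3"] < 3,
]

print(pull_allowed({"tag": "stable", "critical_cvss3": 0}, rules))     # True
print(pull_allowed({"tag": "dev-build", "critical_cvss3": 0}, rules))  # False
```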

Rule attributes

Name

Filters

Example

Tag name

  • equals

  • starts with

  • ends with

  • contains

  • one of

  • not one of

Tag name starts with dev

Component name

  • equals

  • starts with

  • ends with

  • contains

  • one of

  • not one of

Component name starts with b

All CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

All CVSS 3 vulnerabilities less than 3

Critical CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Critical CVSS 3 vulnerabilities less than 3

High CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

High CVSS 3 vulnerabilities less than 3

Medium CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Medium CVSS 3 vulnerabilities less than 3

Low CVSS 3 vulnerabilities

  • greater than or equals

  • greater than

  • equals

  • not equals

  • less than or equals

  • less than

Low CVSS 3 vulnerabilities less than 3

License name

  • one of

  • not one of

License name one of msr

Last updated at

  • before

Last updated at before 12 hours

Configure enforcement policies

Use the MSR web UI to set up enforcement policies for both repository and global enforcement.

Set up repository enforcement

Important

Users can only create and edit enforcement policies for repositories within their user namespace.

To set up a repository enforcement policy using the MSR web UI:

  1. Log in to the MSR web UI.

  2. Navigate to Repositories.

  3. Select the repository to edit.

  4. Click the Enforcement tab and select New enforcement policy.

  5. Define the enforcement policy rules with the desired rule attributes and select Save. The screen displays the new enforcement policy in the Enforcement tab. By default, the new enforcement policy is toggled on.

Once a repository enforcement policy is set up and activated, pull requests that do not satisfy the policy rules will return the following error message:

Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: enforcement policies '<enforcement-policy-id>'
blocked request

Set up global enforcement

Important

Only administrators can set up global enforcement policies.

To set up a global enforcement policy using the MSR web UI:

  1. Log in to the MSR web UI.

  2. Navigate to System.

  3. Select the Enforcement tab.

  4. Confirm that the global enforcement function is Enabled.

  5. Define the enforcement policy rules with the desired criteria and select Save.

Once the global enforcement policy is set up, pull requests against any repository that do not satisfy the policy rules will return the following error message:

Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: global enforcement policy blocked request

Monitor enforcement activity

Administrators and users can monitor enforcement activity in the MSR web UI.

Important

Enforcement events can only be monitored at the repository level. It is not possible, for example, to view in one location all enforcement events that correspond to the global enforcement policy.

  1. Navigate to Repositories.

  2. Select the repository whose enforcement activity you want to review.

  3. Select the Activity tab to view enforcement event activity. For instance, you can:

    • Identify which policy triggered an event using the enforcement ID displayed on the event entry. (The enforcement IDs for each enforcement policy are located on the Enforcement tab.)

    • Identify the user responsible for making a blocked pull request, and the time of the event.

Upgrade MSR

The information offered herein relates exclusively to upgrades between MSR 3.x.x versions. To upgrade to MSR 3.x.x from MSR 2.x.x, you must use the Mirantis Migration Tool.

Schedule your upgrade outside of peak hours to avoid any business impact, as brief interruptions may occur.

Semantic versioning

MSR uses semantic versioning. While downgrades are not supported, Mirantis supports upgrades according to the following rules:

  • When upgrading from one patch version to another, you can skip patch versions as no data migration takes place between patch versions.

  • When upgrading between minor releases, you cannot skip releases. You can, however, upgrade from any patch version of the previous minor release to any patch version of the subsequent minor release.

  • When upgrading between major releases, you must upgrade one major version at a time.

Description

From

To

Supported

Patch upgrade

x.y.0

x.y.1

Yes

Skip patch version

x.y.0

x.y.2

Yes

Patch downgrade

x.y.2

x.y.1

No

Minor upgrade

x.y.*

x.y+1.*

Yes

Skip minor version

x.y.*

x.y+2.*

No

Minor downgrade

x.y.*

x.y-1.*

No

Major upgrade

x.y.z

x+1.0.0

Yes

Major upgrade skipping minor version

x.y.z

x+1.y+1.z

No

Skip major version

x.*.*

x+2.*.*

No

Major downgrade

x.*.*

x-1.*.*

No
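
The rules in the table above can be condensed into a short sketch. Version tuples are (major, minor, patch); this is an illustration of the table, not an official support checker:

```python
def upgrade_supported(from_v, to_v):
    """Condenses the upgrade-support table: no downgrades, no skipped
    minor or major versions; patch versions may be skipped freely."""
    if to_v < from_v:                 # tuple comparison: any downgrade
        return False
    f_major, f_minor, _ = from_v
    t_major, t_minor, _ = to_v
    if t_major == f_major:
        # The same or next minor version is supported; skipping minors is not.
        return t_minor in (f_minor, f_minor + 1)
    # Major upgrades go one version at a time, to the x+1.0.x line.
    return t_major == f_major + 1 and t_minor == 0

print(upgrade_supported((3, 1, 0), (3, 1, 2)))  # True:  skip patch version
print(upgrade_supported((3, 0, 2), (3, 2, 0)))  # False: skip minor version
```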

Upgrade on Kubernetes

Two upgrade paths and two upgrade methods are available over the life of MSR 3.x.x. The following table presents the methods available for upgrading between MSR minor and patch versions.

Note

You must use the Mirantis Migration Tool to migrate from MSR 2.x.x to MSR 3.x.x.

From

To

Available upgrade method

3.0.x

3.1.x

Helm chart

3.1.x

3.1.x+1

MSR Operator, Helm chart

Upgrade on Kubernetes using the MSR Operator

To upgrade from MSR 3.0.x to 3.1.x, use the Mirantis Migration Tool (MMT).

To upgrade to a new patch version:

  1. Edit the custom resource manifest to include the MSR version to which you plan to upgrade:

    spec:
      image:
        tag: <3.1.x>
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    
Upgrade on Kubernetes using a Helm chart

Note

Before upgrading from MSR 3.0.0 to a later patch version, you must verify that you are running cert-manager 1.7.2 or later:

helm history cert-manager

To upgrade cert-manager to version 1.7.2:

helm upgrade cert-manager jetstack/cert-manager \
  --version 1.7.2 \
  --set installCRDs=true

To upgrade to a new MSR version:

  1. Run the helm upgrade command:

    helm upgrade msr msrofficial/msr --version <helm-chart-version> --set-file license=path/to/file/license.lic
    
  2. Verify the installation of all MSR components.

    1. Verify that each Pod is in the Running state:

      kubectl get pods
      
    2. Troubleshoot any failing Pods by running the following command on each failed Pod:

      kubectl describe pod <pod-name>
      
    3. Optional. Review the Pod logs for more detailed results:

      kubectl logs <pod-name>
      

Upgrade on Swarm

To upgrade to a later patch version, you must include a reference to the values.yaml file when running the msr-installer image.

  1. SSH into a manager node on the Swarm cluster in which MSR is running.

  2. Verify that you have the values.yaml file that you generated to install and modify your MSR deployment.

  3. Obtain a list of the worker nodes along with their node IDs, noting the IDs of the nodes on which MSR is installed:

    docker node ls --format "{{ .ID }}" --filter "role=worker"
    
  4. Edit the values.yaml file to specify the list of worker nodes on which MSR is installed.

  5. Upgrade MSR, specifying a node ID for each node on which MSR is installed:

    docker run \
      --rm \
      -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v <path-to-values.yml>:/config/values.yml \
      registry.mirantis.com/msr/msr-installer:<new-msr-version> \
      upgrade
    
  6. Review the status of the deployed services:

    docker stack services msr
    

Monitor MSR

Gain valuable insights into the health of your MSR cluster through effective monitoring. You can optimize your monitoring strategy either by setting up a Prometheus server to scrape MSR metrics or by accessing a range of MSR endpoints to assess the health of your cluster.

Collect MSR metrics with Prometheus

Available since MSR 3.1.0

MSR provides an extensive set of metrics with which you can monitor and assess the health of your registry. These metrics are designed to work with Prometheus, a powerful monitoring system, and can be combined with Grafana to create interactive metric dashboards.

Herein, we present an example of deploying a Prometheus server to scrape your MSR metrics. There are, however, multiple valid approaches to configuring your metrics ecosystem, and you can choose the setup that best suits your needs.

Configure Prometheus to scrape MSR metrics

To collect MSR metrics, you must install a Prometheus server on your MSR cluster.

Kubernetes deployments

To install a Prometheus server:

  1. Add the Prometheus repository:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    
  2. Update the Prometheus repository:

    helm repo update
    
  3. Obtain the target IP address and port for the msr-api service:

    kubectl get svc
    

    The IP address is listed under CLUSTER-IP and the port is listed under PORT(S).

  4. Create a prometheus-values.yaml file with the following configuration detail:

    extraScrapeConfigs: |
      - job_name: '<metrics-job-name>'
        metrics_path: /api/v0/admin/metrics
        static_configs:
          - targets: ['<msr-api-service-cluster-ip>:<port>']
        basic_auth:
          username: '<user-name>'
          password: '<password>'
        scheme: https
        tls_config:
          <tls-detail>
    

    Note

    For more information on how to fill out this configuration detail, refer to <scrape_config> in the official Prometheus documentation.

    To learn how to generate and add your own certs to MSR, refer to Add a custom TLS certificate.

    Note

    The Helm chart used herein includes the prometheus-node-exporter, which may crash at startup in some local environments.

    To resolve this issue, include the following configuration detail in the prometheus-values.yaml file:

    prometheus-node-exporter:
      hostRootFsMount:
        enabled: false
    
  5. Install the Prometheus server:

    helm install -f prometheus-values.yaml prometheus prometheus-community/prometheus
    

To verify that your Prometheus server is running and scraping the MSR metrics endpoint:

  1. Forward the Prometheus server to port 9090:

    kubectl port-forward `kubectl get pods | grep prometheus-server | tr -s ' ' | cut -d' ' -f1`  9090
    
  2. In a web browser, navigate to http://<prometheus-host>:9090.

  3. Select Status > Targets in the Prometheus UI menu bar.

  4. Verify that the MSR metrics endpoint is listed on the page with the up status.

    The metrics endpoint is labeled with the job-name entered in the extraScrapeConfigs section of the prometheus-values.yaml file.

Swarm deployments

To install a Prometheus server:

  1. SSH into a manager node on your Swarm cluster.

  2. Create a prometheus.yml file that includes the following values:

    scrape_configs:
      - job_name: '<metrics-job-name>'
        metrics_path: /api/v0/admin/metrics
        static_configs:
          - targets: ['msr-api-server:443']
        basic_auth:
          username: '<user-name>'
          password: '<password>'
        scheme: https
        tls_config:
          <tls-detail>
    

    Note

    For more information on how to fill out this configuration detail, refer to <scrape_config> in the official Prometheus documentation.

    To learn how to generate and add your own certs to MSR, refer to Add a custom TLS certificate.

  3. Create the following docker-stack.yaml file to configure a Swarm service for deploying the Prometheus server:

    version: '3.7'
    
    volumes:
      prometheus_data:
    
    services:
      prometheus:
        image: prom/prometheus:v2.45.0
        volumes:
          - ./prometheus.yml:/etc/prometheus/prometheus.yml
          - prometheus_data:/prometheus
        ports:
          - <prometheus-ui-port>:9090
        networks:
          - msr_msr-ol
        deploy:
          placement:
            constraints:
              - node.role==manager
    
    networks:
      msr_msr-ol:
        name: msr_msr-ol
        external: true
    

    Note

    For the <prometheus-ui-port> value in the ports section, select a port that is currently available in your Swarm cluster.

  4. Deploy the Prometheus server onto your Swarm cluster:

    docker stack deploy -c docker-stack.yaml prometheus
    

To verify that your Prometheus server is running and scraping the MSR metrics endpoint:

  1. Verify that your Prometheus service is running:

    docker service ls
    
  2. In a web browser, navigate to http://<manager-node-ip>:<prometheus-ui-port>. This is the same <prometheus-ui-port> that you included in the ports section of the docker-stack.yaml file.

  3. Select Status > Targets in the Prometheus UI menu bar.

  4. Verify that the MSR metrics endpoint is listed on the page with the up status. You may need to wait approximately 30 seconds for this to occur.

    The metrics endpoint is labeled with the <metrics-job-name> entered in the scrape_configs section of the prometheus.yml file.

MSR metrics exposed for Prometheus

This section provides comprehensive detail on all of the metrics exposed by MSR. For specific key metrics, refer to the accompanying Usage information, which offers insights on interpreting the data and using it to troubleshoot your MSR deployment.

Registry metrics

Registry metrics capture essential MSR functionality, such as repository count, tag count, push events, and pull events.

Metrics often incorporate labels to differentiate specific attributes of the measured item. The table below provides a list of possible values for the labels associated with registry metrics:

Label

Possible values

namespace

Namespace name

repository

Repository name

repos

Description

Current number of repositories

Metric type

Gauge

Labels

None

public_repos

Description

Current number of public repositories

Metric type

Gauge

Labels

None

private_repos

Description

Current number of private repositories

Metric type

Gauge

Labels

None

pull_count

Description

Running total of image pulls

Metric type

Counter

Labels

None

pull_count_per_repo

Description

Running total of image pulls per repository

Metric type

Counter

Labels

namespace, repository

push_count

Description

Running total of image pushes

Metric type

Counter

Labels

None

push_count_per_repo

Description

Running total of image pushes per repository

Metric type

Counter

Labels

namespace, repository

tags

Description

Current number of image tags

Metric type

Gauge

Labels

None

Usage

If your tag count increases beyond your needs, you can enable tag pruning policies on individual repositories to manage the growth effectively.

Note

Tag pruning selectively removes image tags, but it does not eliminate the associated data blobs. To completely remove unwanted image tags and free up cluster resources, it is necessary that you schedule garbage collection as well.

tags_per_repo

Description

Current number of image tags per repository

Metric type

Gauge

Labels

namespace, repository

Usage

If an individual repository tag count increases beyond your needs, you can enable tag pruning policies to manage the growth effectively.

Note

Tag pruning selectively removes image tags, but it does not eliminate the associated data blobs. To completely remove unwanted image tags and free up cluster resources, it is necessary that you schedule garbage collection as well.

pruning_policy_enabled_repos

Description

Current number of repositories for which at least one pruning policy is enabled

Metric type

Gauge

Labels

None

Usage

To assess whether pruning policy usage should be increased across your cluster, compare this number with the total number of repositories.

Mirroring metrics

Mirroring metrics track the number of push and pull mirroring jobs, categorized by job status.

Considered as a whole, these metrics offer real-time insights into the performance of your mirroring jobs. For example, when you observe a simultaneous decrease in poll_mirror_running and an increase in poll_mirror_done, this provides immediate assurance that your poll mirroring configuration is functioning properly.

poll_mirror_waiting

Description

Current number of poll mirroring jobs with a ‘waiting’ status

Metric type

Gauge

Labels

None

Usage

If there is a significant number of poll mirroring jobs in the waiting state, consider updating the Jobrunner capacity configuration to allow more mirroring jobs to run in parallel.

poll_mirror_running

Description

Current number of poll mirroring jobs with a ‘running’ status

Metric type

Gauge

Labels

None

poll_mirror_done

Description

Running total of poll mirroring jobs with a ‘done’ status

Metric type

Counter

Labels

None

poll_mirror_errored

Description

Running total of poll mirroring jobs with an ‘errored’ status

Metric type

Counter

Labels

None

Usage

If there is a sudden surge in the number of poll mirroring jobs in the errored state, investigate the Jobrunner logs to troubleshoot the issue.

push_mirror_waiting

Description

Current number of push mirroring jobs with a ‘waiting’ status

Metric type

Gauge

Labels

None

Usage

If there is a significant number of push mirroring jobs in the waiting state, consider updating the Jobrunner capacity configuration to allow more mirroring jobs to run in parallel.

push_mirror_running

Description

Current number of push mirroring jobs with a ‘running’ status

Metric type

Gauge

Labels

None

push_mirror_done

Description

Running total of push mirroring jobs with a ‘done’ status

Metric type

Counter

Labels

None

push_mirror_errored

Description

Running total of push mirroring jobs with an ‘errored’ status

Metric type

Counter

Labels

None

Usage

If there is a sudden surge in the number of push mirroring jobs in the errored state, investigate the Jobrunner logs to troubleshoot the issue.

Authentication metrics

Authentication metrics monitor the count of CLI logins and active web UI sessions.

cli_login_count

Description

Running total of CLI logins made

Metric type

Counter

Labels

None

Usage

If you observe a sharp decline in CLI logins, investigate the Garant logs to troubleshoot the issue.

ui_sessions

Description

Current number of active user interface sessions

Metric type

Gauge

Labels

None

Usage

If you observe a sharp decline in active UI sessions, investigate the eNZi logs to troubleshoot the issue.

RethinkDB metrics

The metrics for RethinkDB are extracted from the system statistics and current issues tables, providing a broad range of information about your RethinkDB deployment.

Metrics often incorporate labels to differentiate specific attributes of the measured item. The table below provides a list of possible values for the labels associated with RethinkDB metrics:

Label

Possible values

db

Database name

table

Table name

server

Server name

operation

read, written

cluster_client_connections

Description

Current number of connections from the cluster

Metric type

Gauge

Labels

None

cluster_docs_per_second

Description

Current number of document reads and writes per second from the cluster

Metric type

Gauge

Labels

operation

server_client_connections

Description

Current number of client connections to the server

Metric type

Gauge

Labels

server

server_queries_per_second

Description

Current number of queries per second from the server

Metric type

Gauge

Labels

server

server_docs_per_second

Description

Current number of document reads and writes per second from the server

Metric type

Gauge

Labels

server, operation

table_docs_per_second

Description

Current number of document reads and writes per second from the table

Metric type

Gauge

Labels

db, table, operation

Usage

If you observe that certain tables have a high volume of reads or writes, it is advisable to evenly distribute the primary replicas associated with those tables across the RethinkDB servers. This approach ensures a balanced distribution of the cluster load, leading to improved performance across the system.

table_rows_count

Description

Current number of rows in the table

Metric type

Gauge

Labels

db, table

tablereplica_docs_per_second

Description

Current number of document reads and writes per second from the table replica

Metric type

Gauge

Labels

db, table, server, operation

tablereplica_cache_bytes

Description

Table replica cache size, in bytes

Metric type

Gauge

Labels

db, table, server

tablereplica_io

Description

Table replica byte reads and writes per second

Metric type

Gauge

Labels

db, table, server, operation

tablereplica_data_bytes

Description

Table replica size, in stored bytes

Metric type

Gauge

Labels

db, table, server

log_write_issues

Description

Current number of log write issues

Metric type

Gauge

Labels

None

Usage

Log write issues refer to situations where RethinkDB encounters failures while attempting to write to its log file. Refer to System current issues table in the official RethinkDB documentation for more information.

name_collision_issues

Description

Current number of name collision issues

Metric type

Gauge

Labels

None

Usage

Name collision issues arise when multiple servers, databases, or tables within the same database are assigned identical names. Refer to System current issues table in the official RethinkDB documentation for more information.

outdated_index_issues

Description

Current number of outdated index issues

Metric type

Gauge

Labels

None

Usage

Outdated index issues occur when indexes that were created using an older version of RethinkDB need to be rebuilt due to changes in the indexing mechanism employed by RethinkDB Query Language (ReQL). Refer to System current issues table in the official RethinkDB documentation for more information.

total_availability_issues

Description

Current number of total availability issues

Metric type

Gauge

Labels

None

Usage

Total availability issues occur when a table within the RethinkDB cluster is missing at least one replica. Refer to System current issues table in the official RethinkDB documentation for more information.

memory_availability_issues

Description

Current number of memory availability issues

Metric type

Gauge

Labels

None

Usage

Memory availability issues arise when a page fault occurs on a RethinkDB server and the system starts using swap space. Refer to System current issues table in the official RethinkDB documentation for more information.

connectivity_issues

Description

Current number of connectivity issues

Metric type

Gauge

Labels

None

Usage

Connectivity issues occur when certain servers within a RethinkDB cluster are unable to establish a connection or communicate with all other servers in the cluster. Refer to System current issues table in the official RethinkDB documentation for more information.

other_issues

Description

Current number of uncategorized issues

Metric type

Gauge

Labels

None

Usage

Refer to your RethinkDB logs to diagnose the issue.

Note

If the number of other_issues is greater than zero, it indicates the need to expand the existing set of metrics to cover those additional issue types. Please reach out to Mirantis and inform us that you are seeing other_issues tracked in your cluster.

table_size

Description

Table size in MB

Metric type

Gauge

Labels

db, table

Usage

When a specific table in your MSR deployment grows unchecked, it may indicate a potential issue with the corresponding functionality. For instance, if the size of the tags table is increasing beyond expectations, it could be a sign that your pruning policies, which are responsible for managing tag retention, are not functioning properly. Similarly, if the blobs table is growing more than anticipated, it could suggest a problem with the garbage collection process, which is responsible for removing unused data blobs.
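
One way to act on table_size is to compare successive samples and flag abnormal growth. A hypothetical sketch, in which the threshold and the sample format are assumptions rather than MSR conventions:

```python
def growing_tables(previous, current, threshold_mb=100):
    """Flag tables whose size grew by more than threshold_mb between
    two table_size samples, each mapping (db, table) -> size in MB."""
    return sorted(key for key, size in current.items()
                  if size - previous.get(key, 0) > threshold_mb)

# Illustrative samples: 'blobs' grew by 300 MB, 'tags' by only 10 MB.
before = {("dtr2", "tags"): 50, ("dtr2", "blobs"): 900}
after = {("dtr2", "tags"): 60, ("dtr2", "blobs"): 1200}
print(growing_tables(before, after))  # [('dtr2', 'blobs')]
```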

Prometheus scrape metrics

Prometheus scrape metrics capture the duration of each metrics scrape and the number of errors returned during the process.

scrape_latency

Description

Duration of metrics collection

Metric type

Gauge

Labels

None

Usage

Elevated metrics scrape latency can serve as an indicator that additional resources should be allocated to your Prometheus server.

scrape_errors

Description

Current number of errors that occurred during metrics collection

Metric type

Gauge

Labels

None

Usage

Since MSR metrics depend heavily on the use of RethinkDB, any scrape errors encountered are likely to be caused by issues related to RethinkDB itself. To diagnose and troubleshoot the problem, refer to the logs of your RethinkDB deployment.

See also

Health check endpoints

MSR exposes several endpoints that you can use to assess whether an MSR replica is healthy:

  • /_ping: Checks if the MSR replica is healthy, and returns a simple JSON response. This is useful for load balancing and other automated health check tasks.

  • /nginx_status: Returns the number of connections handled by the NGINX MSR front end.

  • /api/v0/meta/cluster_status: Returns detailed information about all MSR replicas.

Cluster status

The /api/v0/meta/cluster_status endpoint requires administrator credentials, and returns a JSON object for the entire cluster as observed by the replica being queried. You can authenticate your requests using HTTP basic auth.

curl -ksL -u <user>:<pass> https://<msr-domain>/api/v0/meta/cluster_status
{
  "current_issues": [
    {
      "critical": false,
      "description": "... some replicas are not ready. The following servers are not reachable: dtr_rethinkdb_f2277ad178f7"
    }
  ],
  "replica_health": {
    "f2277ad178f7": "OK",
    "f3712d9c419a": "OK",
    "f58cf364e3df": "OK"
  }
}

You can find health status in the current_issues and replica_health fields.
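
As a sketch, you can summarize cluster health programmatically from the cluster_status response. The script below assumes the JSON has already been retrieved, for example with the curl command above, and parses an illustrative payload:

```python
import json

# Illustrative cluster_status payload, trimmed to the fields used below.
cluster_status = json.loads("""
{
  "current_issues": [
    {"critical": false,
     "description": "... some replicas are not ready."}
  ],
  "replica_health": {
    "f2277ad178f7": "OK",
    "f3712d9c419a": "OK",
    "f58cf364e3df": "OK"
  }
}
""")

def summarize(status):
    """Return (critical_issue_count, unhealthy_replica_ids)."""
    critical = sum(1 for issue in status.get("current_issues", [])
                   if issue.get("critical"))
    unhealthy = [replica for replica, health
                 in status.get("replica_health", {}).items()
                 if health != "OK"]
    return critical, unhealthy

critical, unhealthy = summarize(cluster_status)
print(f"critical issues: {critical}, unhealthy replicas: {unhealthy}")
```

Non-critical issues, such as the unreachable replica shown above, still warrant investigation even when all replicas report OK.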

For even more detailed troubleshooting information, examine the individual container logs.

Review the Notary audit logs

Docker Content Trust (DCT) keeps audit logs of changes made to trusted repositories. Every time you push a signed image to a repository, or delete trust data for a repository, DCT logs that information.

These logs are only available from the MSR API.

Get an authentication token

To access the audit logs you need to authenticate your requests using an authentication token. You can get an authentication token for all repositories, or one that is specific to a single repository.

To get a token for all repositories:

curl --insecure --silent \
  --user <user>:<password> \
  "https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*"

To get a token for a single repository:

curl --insecure --silent \
  --user <user>:<password> \
  "https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=repository:<dtr-url>/<repository>:pull"

MSR returns a JSON response that contains a token even when the user does not have access to the repository for which they requested it. A token issued under those circumstances does not grant access to MSR repositories.

The returned JSON has the following structure:

{
  "token": "<token>",
  "access_token": "<token>",
  "expires_in": "<expiration in seconds>",
  "issued_at": "<time>"
}
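
The token in this response is typically passed as a Bearer token on subsequent API requests. A minimal sketch of extracting it and building the Authorization header, using an illustrative payload that follows the structure above:

```python
import json

# Illustrative token response; real tokens are long JWT-style strings.
auth_response = json.loads("""
{
  "token": "<token>",
  "access_token": "<token>",
  "expires_in": "300",
  "issued_at": "2023-06-27T23:01:47Z"
}
""")

def bearer_header(response):
    """Build the Authorization header for subsequent API requests."""
    return {"Authorization": "Bearer " + response["token"]}

headers = bearer_header(auth_response)
```
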

Changefeed API

Once you have an authentication token, you can use the following endpoints to get audit logs:

URL

Description

Authorization

GET /v2/_trust/changefeed

Get audit logs for all repositories.

Global scope token

GET /v2/<msr-url>/<repository>/_trust/changefeed

Get audit logs for a specific repository.

Repository-specific token

Both endpoints have the following query string parameters:

Field name

Required

Type

Description

change_id

Yes

String

A non-inclusive starting change ID from which to start returning results. This is typically the first or last change ID from the previous page of records requested, depending on which direction you are paging.

The value 0 indicates records should be returned starting from the beginning of time.

The value 1 indicates records should be returned starting from the most recent record. If 1 is provided, the implementation will also assume the records value is meant to be negative, regardless of the given sign.

records

Yes

String integer

The number of records to return. A negative value indicates the number of records preceding the change_id should be returned. Records are always returned sorted from oldest to newest.

Example JSON response:

{
  "count": 1,
  "records": [
    {
      "ID": "0a60ec31-d2aa-4565-9b74-4171a5083bef",
      "CreatedAt": "2017-11-06T18:45:58.428Z",
      "GUN": "msr.example.org/library/wordpress",
      "Version": 1,
      "SHA256": "a4ffcae03710ae61f6d15d20ed5e3f3a6a91ebfd2a4ba7f31fc6308ec6cc3e3d",
      "Category": "update"
    }
  ]
}

The following table describes each field in the response:

Field name

Description

count

The number of records returned.

ID

The ID of the change record. Use it in the change_id field of requests to provide a non-inclusive starting index. Treat it as an opaque value that is guaranteed to be unique within an instance of Notary.

CreatedAt

The time the change happened.

GUN

The MSR repository that was changed.

Version

The version that the repository was updated to. This increments every time there’s a change to the trust repository.

This is always 0 for events representing trusted data being removed from the repository.

SHA256

The checksum of the timestamp being updated to. This can be used with the existing notary APIs to request said timestamp.

This is always an empty string for events representing trusted data being removed from the repository.

Category

The kind of change that was made to the trusted repository. Can be update, or deletion.

The results only include audit logs for events that happened more than 60 seconds ago, and are sorted from oldest to newest.

Even though the authentication API always returns a token, the changefeed API validates whether the user has access to the audit logs:

  • Admin users can see the audit logs for any repository.

  • All other users can only see the audit logs for repositories to which they have read access.
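
The change_id and records semantics described above can be captured in a small client-side paging helper. This is a sketch of pagination logic, not an MSR-provided utility:

```python
def next_page_params(last_record_id=None, page_size=25, newest_first=False):
    """Compute query string parameters for the next changefeed page.

    change_id=0 starts from the oldest record and change_id=1 from the
    newest; a negative records value pages backward from change_id.
    """
    if last_record_id is None:
        # First page: start from the newest or the oldest record.
        change_id = "1" if newest_first else "0"
    else:
        # Subsequent pages: continue from the last record seen.
        change_id = last_record_id
    records = -page_size if newest_first else page_size
    return {"change_id": change_id, "records": str(records)}

# First page of the 25 oldest records, then the page following a record.
first = next_page_params()
following = next_page_params(last_record_id="0a60ec31-d2aa-4565-9b74-4171a5083bef")
```
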

Troubleshoot MSR

You can handle many potential MSR issues using the tips and tricks detailed herein.

Troubleshoot your MSR Kubernetes deployment

You can use general Kubernetes troubleshooting and debugging techniques to troubleshoot your MSR Kubernetes deployment.

To list the Pods and identify any that are failing:

kubectl get pods

Example output:

NAME                                     READY   STATUS              RESTARTS      AGE
msr-api-95dc9979b-4sgfg                  1/1     Running             3 (54s ago)   99s
msr-enzi-api-6f6f54c4c5-72bkb            1/1     Running             1 (39s ago)   100s
msr-enzi-worker-55b5786699-pnlh4         1/1     Running             3 (81s ago)   100s
msr-garant-84c5d9489b-t4bl4              1/1     Running             3 (51s ago)   100s
msr-jobrunner-default-7fcc9bb849-4whcl   1/1     Running             3 (54s ago)   100s
msr-nginx-76dbf47797-slllp               0/1     ContainerCreating   0             99s
msr-notary-server-6dfb9c67c9-mft97       1/1     Running             2 (85s ago)   99s
msr-notary-signer-576c5f574b-ftm5z       1/1     Running             2 (90s ago)   99s
msr-registry-7df8fd6fcd-l67d6            1/1     Running             3 (51s ago)   100s
msr-rethinkdb-cluster-0                  1/1     Running             0             100s
msr-rethinkdb-proxy-d5798dd75-ft75c      1/1     Running             2 (85s ago)   99s
msr-scanningstore-0                      1/1     Running             0             99s
postgres-operator-569b58b8c6-c6vxv       1/1     Running             0             32h
postgres-operator-ui-7b9f8d69bc-pv9nm    1/1     Running             0             32h
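
When scanning long listings such as the output above, it can help to filter for Pods that are not fully ready. The sketch below parses the plain-text output of kubectl get pods, captured here as a string for illustration:

```python
# Sample kubectl get pods output, abbreviated from the example above.
listing = """\
NAME                                     READY   STATUS              RESTARTS      AGE
msr-api-95dc9979b-4sgfg                  1/1     Running             3 (54s ago)   99s
msr-nginx-76dbf47797-slllp               0/1     ContainerCreating   0             99s
"""

def not_ready(text):
    """Return the names of Pods whose READY column shows fewer ready
    containers than desired, for example 0/1."""
    problems = []
    for line in text.splitlines()[1:]:  # skip the header row
        if not line.strip():
            continue
        name, ready = line.split()[:2]
        up, want = ready.split("/")
        if up != want:
            problems.append(name)
    return problems

print(not_ready(listing))
```
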

To review more detailed information about a failed Pod:

kubectl get pods -o wide

Example output:

NAME                                     READY   STATUS              RESTARTS        AGE     IP            NODE       NOMINATED NODE   READINESS GATES
msr-api-95dc9979b-4sgfg                  1/1     Running             3 (2m48s ago)   3m33s   172.17.0.14   minikube   <none>           <none>
msr-enzi-api-6f6f54c4c5-72bkb            1/1     Running             1 (2m33s ago)   3m34s   172.17.0.13   minikube   <none>           <none>
msr-enzi-worker-55b5786699-pnlh4         1/1     Running             3 (3m15s ago)   3m34s   172.17.0.8    minikube   <none>           <none>
msr-garant-84c5d9489b-t4bl4              1/1     Running             3 (2m45s ago)   3m34s   172.17.0.11   minikube   <none>           <none>
msr-jobrunner-default-7fcc9bb849-4whcl   1/1     Running             3 (2m48s ago)   3m34s   172.17.0.9    minikube   <none>           <none>
msr-nginx-76dbf47797-slllp               0/1     ContainerCreating   0               3m33s   <none>        minikube   <none>           <none>
msr-notary-server-6dfb9c67c9-mft97       1/1     Running             3 (51s ago)     3m33s   172.17.0.18   minikube   <none>           <none>
msr-notary-signer-576c5f574b-ftm5z       1/1     Running             3 (57s ago)     3m33s   172.17.0.12   minikube   <none>           <none>
msr-registry-7df8fd6fcd-l67d6            1/1     Running             3 (2m45s ago)   3m34s   172.17.0.15   minikube   <none>           <none>
msr-rethinkdb-cluster-0                  1/1     Running             0               3m34s   172.17.0.10   minikube   <none>           <none>
msr-rethinkdb-proxy-d5798dd75-ft75c      1/1     Running             2 (3m19s ago)   3m33s   172.17.0.17   minikube   <none>           <none>
msr-scanningstore-0                      1/1     Running             0               3m33s   172.17.0.16   minikube   <none>           <none>
postgres-operator-569b58b8c6-c6vxv       1/1     Running             0               32h     172.17.0.7    minikube   <none>           <none>
postgres-operator-ui-7b9f8d69bc-pv9nm    1/1     Running             0               32h     172.17.0.6    minikube   <none>           <none>

To review the Pods running in all namespaces:

kubectl get pods --all-namespaces

Example output:

NAMESPACE      NAME                                       READY   STATUS              RESTARTS        AGE
cert-manager   cert-manager-7dd5854bb4-hx7mj              1/1     Running             1 (7d5h ago)    7d9h
cert-manager   cert-manager-cainjector-64c949654c-gwvgg   1/1     Running             2 (2d9h ago)    7d9h
cert-manager   cert-manager-webhook-6b57b9b886-7prtc      1/1     Running             1 (2d9h ago)    7d9h
default        msr-api-95dc9979b-4sgfg                    1/1     Running             3 (4m44s ago)   5m29s
default        msr-enzi-api-6f6f54c4c5-72bkb              1/1     Running             1 (4m29s ago)   5m30s
default        msr-enzi-worker-55b5786699-pnlh4           1/1     Running             3 (5m11s ago)   5m30s
default        msr-garant-84c5d9489b-t4bl4                1/1     Running             3 (4m41s ago)   5m30s
default        msr-jobrunner-default-7fcc9bb849-4whcl     1/1     Running             3 (4m44s ago)   5m30s
default        msr-nginx-76dbf47797-slllp                 0/1     ContainerCreating   0               5m29s
default        msr-notary-server-6dfb9c67c9-mft97         1/1     Running             3 (2m47s ago)   5m29s
default        msr-notary-signer-576c5f574b-ftm5z         1/1     Running             3 (2m53s ago)   5m29s
default        msr-registry-7df8fd6fcd-l67d6              1/1     Running             3 (4m41s ago)   5m30s
default        msr-rethinkdb-cluster-0                    1/1     Running             0               5m30s
default        msr-rethinkdb-proxy-d5798dd75-ft75c        1/1     Running             2 (5m15s ago)   5m29s
default        msr-scanningstore-0                        1/1     Running             0               5m29s
default        postgres-operator-569b58b8c6-c6vxv         1/1     Running             0               32h
default        postgres-operator-ui-7b9f8d69bc-pv9nm      1/1     Running             0               32h
kube-system    coredns-78fcd69978-48bfx                   1/1     Running             1 (7d5h ago)    7d9h
kube-system    etcd-minikube                              1/1     Running             1 (2d9h ago)    7d9h
kube-system    kube-apiserver-minikube                    1/1     Running             1 (2d9h ago)    7d9h
kube-system    kube-controller-manager-minikube           1/1     Running             1 (7d5h ago)    7d9h
kube-system    kube-proxy-2h2z5                           1/1     Running             1 (2d9h ago)    7d9h
kube-system    kube-scheduler-minikube                    1/1     Running             1 (2d9h ago)    7d9h
kube-system    storage-provisioner                        1/1     Running             2 (2d9h ago)    7d9h

To review all services:

kubectl get services

Example output:

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP            7d10h
msr                        ClusterIP   10.98.33.163     <none>        8080/TCP,443/TCP   8m14s
msr-api                    ClusterIP   10.102.145.77    <none>        443/TCP            8m14s
msr-enzi                   ClusterIP   10.102.7.61      <none>        4443/TCP           8m14s
msr-garant                 ClusterIP   10.102.139.182   <none>        443/TCP            8m14s
msr-notary                 ClusterIP   10.107.27.10     <none>        443/TCP            8m14s
msr-notary-signer          ClusterIP   10.103.28.108    <none>        7899/TCP           8m14s
msr-registry               ClusterIP   10.109.12.52     <none>        443/TCP            8m14s
msr-rethinkdb-admin        ClusterIP   None             <none>        8080/TCP           8m14s
msr-rethinkdb-cluster      ClusterIP   None             <none>        29015/TCP          8m14s
msr-rethinkdb-proxy        ClusterIP   10.103.235.96    <none>        28015/TCP          8m14s
msr-scanningstore          ClusterIP   10.99.62.126     <none>        5432/TCP           8m13s
msr-scanningstore-config   ClusterIP   None             <none>        <none>             7m56s
msr-scanningstore-repl     ClusterIP   10.107.82.163    <none>        5432/TCP           8m13s
postgres-operator          ClusterIP   10.108.77.171    <none>        8080/TCP           32h
postgres-operator-ui       ClusterIP   10.108.138.75    <none>        80/TCP             32h

To review the state of a running or failed Pod:

kubectl describe pod msr-nginx-76dbf47797-slllp

Example output, including status, environment variables, certificates used, and recent events such as why the Pod might have failed to start:

Name:           msr-nginx-76dbf47797-slllp
Namespace:      default
Priority:       0
Node:           minikube/192.168.49.2
Start Time:     Wed, 17 Nov 2021 19:22:17 -0500
Labels:         app.kubernetes.io/component=nginx
app.kubernetes.io/instance=msr
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=msr
app.kubernetes.io/version=3.0.0-tp2
helm.sh/chart=msr-1.0.0-tp2.1
pod-template-hash=76dbf47797
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/msr-nginx-76dbf47797

   .
   .
   .
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/arch=amd64
kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason       Age                   From               Message

Normal   Scheduled    9m17s                 default-scheduler  Successfully assigned default/msr-nginx-76dbf47797-slllp to minikube
Warning  FailedMount  58s (x12 over 9m13s)  kubelet            MountVolume.SetUp failed for volume "secrets" : secret "bad" not found
Warning  FailedMount  27s (x4 over 7m15s)   kubelet            Unable to attach or mount volumes: unmounted volumes=[secrets], unattached volumes=[secrets kube-api-access-6h99g]: timed out waiting for the condition

To view the Pod logs:

kubectl logs <pod-name>

To create a shell to examine things from inside a Pod:

kubectl exec --stdin --tty <pod-name> -- /bin/sh

Troubleshoot your MSR Swarm deployment

The commands herein allow you to diagnose and resolve common issues you may encounter in deploying MSR on a Swarm cluster.

To identify a failed service on your cluster:

List the services in your MSR stack and identify any that are not running.

docker stack services msr

Example output:

ID             NAME                    MODE         REPLICAS   IMAGE                                                   PORTS
k8taishq5xxk   msr_msr-api-server      replicated   3/3        registry.mirantis.com/msr/msr-api:<release number>
fk344mcex0gp   msr_msr-enzi-api        replicated   3/3        registry.mirantis.com/msr/enzi:1.0.85
p75o0wug72ck   msr_msr-enzi-worker     replicated   3/3        registry.mirantis.com/msr/enzi:1.0.85
bnulom7u88fd   msr_msr-garant          replicated   3/3        registry.mirantis.com/msr/msr-garant:<release number>
p14k98kl9tt6   msr_msr-initialize      replicated   0/1        registry.mirantis.com/msr/msr-api:<release number>
k5qsenngjxc4   msr_msr-jobrunner       replicated   3/3        registry.mirantis.com/msr/msr-jobrunner:<release number>
qv3cdf30ebbb   msr_msr-nginx           replicated   3/3        registry.mirantis.com/msr/msr-nginx:<release number>            *:443->443/tcp, *:8080->8080/tcp
eroxakg061ns   msr_msr-notary-server   replicated   3/3        registry.mirantis.com/msr/msr-notary-server:<release number>
8osnskkpvv9d   msr_msr-notary-signer   replicated   3/3        registry.mirantis.com/msr/msr-notary-signer:<release number>
v9q1e6nnzutq   msr_msr-registry        replicated   0/3        registry.mirantis.com/msr/msr-registry:<release number>
o32erkkz8tjo   msr_msr-rethinkdb       replicated   3/3        mirantis/rethinkdb:2.3.7-mirantis-41-a02bade

To obtain detailed information for a service that is not running:

docker service ps msr_msr-registry --no-trunc

Example output:

ID                          NAME                     IMAGE                                                                                                                             NODE           DESIRED STATE   CURRENT STATE           ERROR                                                                                                                                                                                                                                                                                                                    PORTS
7o8rjdjydwfqnz0qhekz46tq5   msr_msr-registry.1       registry.mirantis.com/msr/msr-registry:<release number>@sha256:a4d3a083da310dff374c37850e1e8de81ad9150b770683b1529cabf508ae8f07   6e1b4b0f0dcc   Ready           Ready 1 second ago
lickekmwnp6d2ot558ohh2cnj    \_ msr_msr-registry.1   registry.mirantis.com/msr/msr-registry:<release number>@sha256:a4d3a083da310dff374c37850e1e8de81ad9150b770683b1529cabf508ae8f07   aed603d27071   Shutdown        Failed 1 second ago     "starting container failed: error while mounting volume '/var/lib/docker/volumes/msr_msr-storage/_data': failed to mount local volume: mount :/:/var/lib/docker/volumes/msr_msr-storage/_data, data: addr=172.17.0.10,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport: connection refused"

To review all of the services that are running on the cluster:

docker service ls

Example output:

ID             NAME                    MODE         REPLICAS   IMAGE                                                   PORTS
sr1ivj8c0iyh   msr_msr-api-server      replicated   3/3        registry.mirantis.com/msr/msr-api:<release number>
ks7r7nctqaon   msr_msr-enzi-api        replicated   3/3        registry.mirantis.com/msr/enzi:1.0.85
rj7z7iojd54g   msr_msr-enzi-worker     replicated   3/3        registry.mirantis.com/msr/enzi:1.0.85
n7mufyqsl8n3   msr_msr-garant          replicated   3/3        registry.mirantis.com/msr/msr-garant:<release number>
s0p4vmxopdbt   msr_msr-initialize      replicated   0/1        registry.mirantis.com/msr/msr-api:<release number>
llvu69o504ks   msr_msr-jobrunner       replicated   3/3        registry.mirantis.com/msr/msr-jobrunner:<release number>
kycj3hoqd74s   msr_msr-nginx           replicated   3/3        registry.mirantis.com/msr/msr-nginx:<release number>           *:443->443/tcp, *:8080->8080/tcp
jsxdq6j25r7h   msr_msr-notary-server   replicated   3/3        registry.mirantis.com/msr/msr-notary-server:<release number>
3zrjhpe2rb4i   msr_msr-notary-signer   replicated   3/3        registry.mirantis.com/msr/msr-notary-signer:<release number>
znz4ioqyegkt   msr_msr-registry        replicated   3/3        registry.mirantis.com/msr/msr-registry:<release number>
lm47q08a7t9i   msr_msr-rethinkdb       replicated   3/3        mirantis/rethinkdb:2.3.7-mirantis-41-a02bade

To obtain the service logs:

docker service logs msr_msr-api-server

Example output:

msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"Generating an authenticator for eNZi client","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"Attempting to create or update MSR's Service registration with the eNZi server","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"Updated service \"Mirantis Secure Registry\"","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"Obtaining eNZi service registration","time":"2023-06-27T23:01:48Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"error","msg":"failed to obtain repository counts: rethinkdb: Cannot reduce over an empty stream. in:\nr.DB(\"dtr2\").Table(\"repositories\").Group(\"visibility\").Count().Ungroup().Map(func(var_2 r.Term) r.Term { return r.Object(var_2.Field(\"group\"), var_2.Field(\"reduction\")) }).Reduce(func(var_3, var_4 r.Term) r.Term { return var_3.Merge(var_4) })","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"Starting temporary CVE file cleanup within \"/storage/scan_update/\" directory","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"error":"open /storage/scan_update/: no such file or directory","level":"error","msg":"Could not delete all tmp files","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"level":"info","msg":"No files to remove","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc    | {"address":":443","level":"info","msg":"Admin server about to listen for connections","time":"2023-06-27T23:01:49Z"}

To create a shell to examine the contents of a container:

  1. SSH into the host that is running the container to which you want to connect.

  2. Obtain the required container ID:

    CONTAINER_ID=$(docker ps --quiet --filter "name=<container-name>")
    
  3. Run a shell within the required container:

    docker exec -it $CONTAINER_ID sh
    

Access RethinkDB

MSR uses RethinkDB to persist and replicate data across replicas. To review the internal state of MSR, you can connect directly to the RethinkDB instance running on an MSR replica, using either the RethinkCLI or the MSR API.

Warning

Mirantis does not support direct modifications to RethinkDB, and thus any unforeseen issues that result from doing so are solely the user’s responsibility.

Access RethinkDB with the RethinkCLI

Note

If you are using a Helm chart to install and manage your MSR deployment, enable the RethinkDB admin console by including the following flag in your helm install or helm upgrade command:

--set rethinkdb.admin.service.enabled=true

  1. In the cr-sample-manifest.yaml file that you applied when installing MSR, enable the RethinkDB admin console:

    spec:
      rethinkdb:
        admin:
          enabled: true
    
  2. Invoke the following command to run the webhook health check and apply the changes to the custom resource:

    kubectl wait --for=condition=ready pod -l \
    app.kubernetes.io/name="msr-operator" && kubectl apply -f cr-sample-manifest.yaml
    
  3. Enable external access to the RethinkDB Admin Console:

    kubectl port-forward service/msr-rethinkdb-admin 8080:8080
    
  4. Access the interactive RethinkDB Admin Console by opening http://localhost:8080 in a web browser.

  5. Navigate to the Tables page to verify the number of replicas serving each table.

  6. Query the database contents:

    • List the cluster problems as detected by the current node:

      r.db("rethinkdb").table("current_issues")
      

      Example output:

      []
      
    • List the databases that RethinkDB contains:

      r.dbList()
      

      Example output:

      [ 'dtr2',
        'jobrunner',
        'notaryserver',
        'notarysigner',
        'rethinkdb' ]
      
    • List the tables contained in the dtr2 database:

      r.db('dtr2').tableList()
      

      Example output:

      [ 'blob_links',
        'blobs',
        'client_tokens',
        'content_caches',
        'events',
        'layer_vuln_overrides',
        'manifests',
        'metrics',
        'namespace_team_access',
        'poll_mirroring_policies',
        'promotion_policies',
        'properties',
        'pruning_policies',
        'push_mirroring_policies',
        'repositories',
        'repository_team_access',
        'scanned_images',
        'scanned_layers',
        'tags',
        'user_settings',
        'webhooks' ]
      
    • List the entries contained in the repositories table:

      r.db('dtr2').table('repositories')
      

      Example output:

      [ { enableManifestLists: false,
          id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
          immutableTags: false,
          name: 'test-repo-1',
          namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
          namespaceName: 'admin',
          pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
          pulls: 0,
          pushes: 0,
          scanOnPush: false,
          tagLimit: 0,
          visibility: 'public' },
        { enableManifestLists: false,
          id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
          immutableTags: false,
          longDescription: '',
          name: 'testing',
          namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
          namespaceName: 'admin',
          pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
          pulls: 0,
          pushes: 0,
          scanOnPush: false,
          shortDescription: '',
          tagLimit: 1,
          visibility: 'public' } ]
      

Note

Individual databases and tables are a private implementation detail and may change from one MSR version to another. You can, however, always use dbList() and tableList() to explore the contents and data structure.

Access RethinkDB with the MSR API
  1. Enable external access to the MSR API:

    kubectl port-forward service/msr 8443:443
    
  2. Review the status of your MSR cluster:

    curl -u admin:$TOKEN -X GET "https://<msr-url>/api/v0/meta/cluster_status" -H "accept: application/json"
    

    Example API response:

    {
      "rethink_system_tables": {
        "cluster_config": [
          {
            "heartbeat_timeout_secs": 10,
            "id": "heartbeat"
          }
        ],
        "current_issues": [],
        "db_config": [
          {
            "id": "339de11f-b0c2-4112-83ac-520cab68d89c",
            "name": "notaryserver"
          },
          {
            "id": "aa2e893f-a69a-463d-88c1-8102aafebebc",
            "name": "dtr2"
          },
          {
            "id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd",
            "name": "jobrunner"
          },
          {
            "id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039",
            "name": "notarysigner"
          }
        ],
        "server_status": [
          {
            "id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a",
            "name": "dtr_rethinkdb_5eb9459a7832",
            "network": {
              "canonical_addresses": [
                {
                  "host": "dtr-rethinkdb-5eb9459a7832.dtr-ol",
                  "port": 29015
                }
              ],
              "cluster_port": 29015,
              "connected_to": {
                "dtr_rethinkdb_56b65e8c1404": true
              },
              "hostname": "9e83e4fee173",
              "http_admin_port": "<no http admin>",
              "reql_port": 28015,
              "time_connected": "2019-02-15T00:19:22.035Z"
            },
           }
         ...
        ]
      }
    }
    

See also

The RethinkDB documentation on RethinkDB queries

Troubleshoot scanning or CVE updates failure

CVE database connectivity issues are often at the root of any scanning or CVE updating problems you may encounter. On Kubernetes deployments, a faulty installation of the PostgreSQL operator is often the root cause for such issues, whereas on Swarm these issues are likely to be linked to the Scanningstore service.

On Kubernetes deployments

Verify that the postgres-operator Pod is running by invoking the kubectl get pods command. If the output resembles the following example, the PostgreSQL operator is properly installed:

postgres-operator-6788c8bf6-494lt     1/1   Running  0         16d

If, however, the command produces no output, or the reported state is something other than Running, install the PostgreSQL operator as follows:

helm upgrade -i postgres-operator postgres-operator/postgres-operator \
  --version 1.7.1 \
  --set configKubernetes.spilo_runasuser=101 \
  --set configKubernetes.spilo_runasgroup=103 \
  --set configKubernetes.spilo_fsgroup=103

On Swarm deployments

Examine the service logs to ascertain whether there are issues with the Scanningstore service.

docker service logs msr_msr-scanningstore

Vulnerability scan warnings

Warnings display in a red banner at the top of the MSR web UI to indicate potential vulnerability scanning issues.

Warning

Cause

Warning: Cannot perform security scans because no vulnerability database was found.

Displays when vulnerability scanning is enabled but there is no vulnerability database available to MSR. Typically, the warning displays when a vulnerability database update runs for the first time and the operation fails, as no usable vulnerability database exists at that point.

Warning: Last vulnerability database sync failed.

Displays when a vulnerability database update fails even though a previous, usable vulnerability database remains available for scans; that is, a prior update completed successfully but the most recent one did not.

Note

The terms vulnerability database sync and vulnerability database update are interchangeable, in the context of MSR web UI warnings.

Note

The issuing of warnings is the same regardless of whether vulnerability database updating is done manually or is performed automatically through a job.

MSR performs a number of steps during a vulnerability database update, including TAR file download and extraction, file validation, and the update operation itself. Errors that trigger warnings can occur at any point in the update process, and can include system-related matters such as low disk space, transient network issues, or configuration complications. As such, the best strategy for troubleshooting MSR vulnerability scanning issues is to review the logs.

View the logs for an online vulnerability database update

Online vulnerability database updates are performed by a jobrunner container, the logs for which you can view through a docker CLI command or by using the MSR web UI:

  • CLI command:

    docker logs <jobrunner-container-name>
    
  • MSR web UI:

    Navigate to System > Job Logs in the left-side navigation panel.

View the logs for an offline vulnerability database update

The MSR vulnerability database update occurs through the dtr-api container. As such, access the logs for that container to ascertain the reason for update failure.

Obtain more log information

If the logs do not initially offer adequate detail on the cause of vulnerability database update failure, you can display additional logs by setting MSR to enable debug logging.

Use the MSR Operator to enable or disable debug logging:

  1. Edit the custom resource manifest to set the log level. For example:

    spec:
      logLevel: 'debug'
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    

Use Helm to enable and disable debug logging. For example:

helm upgrade --reuse-values --set logLevel=debug [RELEASE] [CHART]

For Swarm deployments, update the log level for each service:

docker service update msr_msr-api-server --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-garant --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-jobrunner --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-nginx --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-notary-server --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-notary-signer --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-registry --env-add MSR_LOG_LEVEL=debug
docker service update msr_msr-scanningstore --env-add DEBUG=true
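Once troubleshooting is complete, you can return the services to the default log level. A sketch that reverts the services listed above, assuming the same service names:

```shell
# Revert MSR Swarm services to the default (info) log level.
for svc in msr_msr-api-server msr_msr-garant msr_msr-jobrunner \
           msr_msr-nginx msr_msr-notary-server msr_msr-notary-signer \
           msr_msr-registry; do
  docker service update "$svc" --env-add MSR_LOG_LEVEL=info
done

# Remove the debug flag from the scanningstore service.
docker service update msr_msr-scanningstore --env-rm DEBUG
```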

Certificate issues when pushing and pulling images

If TLS is not properly configured, you are likely to encounter an x509: certificate signed by unknown authority error when attempting to run the following commands:

  • docker login

  • docker push

  • docker pull

To resolve the issue:

Verify that your MSR instance has been configured with your TLS certificate Fully Qualified Domain Name (FQDN). For more information, refer to Add a custom TLS certificate.

Alternatively, but only in testing scenarios, you can skip using a certificate by adding your registry host name as an insecure registry in the Docker daemon.json file:

{
    "insecure-registries" : [ "registry-host-name" ]
}
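Changes to daemon.json take effect only after the Docker daemon restarts. A sketch, assuming a systemd-based host:

```shell
# Restart the daemon so the insecure-registries change takes effect,
# then confirm the registry appears in the daemon configuration.
sudo systemctl restart docker
docker info | grep -A 3 'Insecure Registries'
```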

Configure AWS_CA_BUNDLE environment variable

You may encounter an insecure TLS connection error if you are running MSR behind an MITM proxy and using AWS S3 for your storage back end.

Kubernetes resolution
  1. Add the AWS_CA_BUNDLE environment variable to all of the MSR containers by adding the following to your custom resource manifest:

    spec:
      extraEnv:
        AWS_CA_BUNDLE: "path_to_the_certificate"
    
  2. Apply the changes to the custom resource:

    kubectl apply -f cr-sample-manifest.yaml
    
  3. Verify completion of the reconciliation process for the custom resource:

    kubectl get msrs.msr.mirantis.com
    kubectl get rethinkdbs.rethinkdb.com
    
  1. Add the AWS_CA_BUNDLE environment variable to all of the MSR containers by appending the MSR Helm chart values.yaml file as follows:

    global:
      extraEnv:
        AWS_CA_BUNDLE: "path_to_the_certificate"
    
  2. Apply the new value:

    helm upgrade msr msrofficial/msr --version <version-number> -f values.yaml
    
Swarm resolution
  1. Update your Registry service to include the AWS_CA_BUNDLE environment variable:

    docker service update msr_msr-registry \
      --env-add AWS_CA_BUNDLE=<bundle-path>
    
  2. Verify that the environment variable is set:

    docker service inspect msr_msr-registry \
      --format '{{.Spec.TaskTemplate.ContainerSpec.Env }}' \
      | grep 'AWS_CA_BUNDLE'
    

MSR on Swarm one node to multi node scaling failure

The RethinkDB node can fail when scaling MSR on Swarm on a new cluster from one node to several nodes, resulting in the following error message during execution of the scale command:

level=fatal msg="polling failed with 40 attempts 1s apart: service
\"msr_msr-rethinkdb\" is not yet ready"

To prevent such a failure, pre-pull RethinkDB images on all nodes. To do so, run the following command on each node in the Swarm cluster:

docker pull registry.mirantis.com/rethinkdb/rethinkdb:2.4.3-mirantis-0.1.3
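From a manager node, you can script the pre-pull across the cluster. A sketch, assuming passwordless SSH access to each node under its Swarm hostname:

```shell
# Pre-pull the RethinkDB image on every node in the Swarm.
# Assumes SSH access to each node by its Swarm hostname.
for node in $(docker node ls --format '{{.Hostname}}'); do
  ssh "$node" docker pull registry.mirantis.com/rethinkdb/rethinkdb:2.4.3-mirantis-0.1.3
done
```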

Disaster recovery

Disaster recovery overview

Mirantis Secure Registry (MSR) uses RethinkDB to store metadata. RethinkDB is a clustered application; thus, to configure it for high availability, it must have three or more servers, and its tables must be configured with three or more replicas.

For a RethinkDB table to be healthy, a majority (n/2 + 1) of replicas per table must be available. As such, there are three possible failure scenarios:

Failure scenarios

Scenario

Description

Minority of replicas are unhealthy

One or more table replicas are unhealthy, but the overall majority (n/2 + 1) remains healthy and able to communicate with one another.

As long as more than half of the table's voting replicas and more than half of the voting replicas for each shard remain available, one of those voting replicas is arbitrarily selected as the new primary.

Majority of replicas are unhealthy

Half or more voting replicas of a shard are lost and cannot be reconnected.

An emergency repair of the cluster remains possible, without having to restore from a backup, which minimizes the amount of data lost. Refer to mirantis/msr db emergency-repair for more detail.

All replicas are unhealthy

A complete disaster scenario wherein all replicas are lost, the result being the loss or corruption of all associated data volumes. In this scenario, you must restore MSR from a backup. Restoring from a backup should be a last resort solution. You should first attempt an emergency repair, as this can mitigate data loss. Refer to Restore from an MSR backup for more information.
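The majority requirement can be checked with simple integer arithmetic. A sketch, not part of any MSR tooling, illustrating the n/2 + 1 rule for a five-replica table:

```shell
# Quorum needed for a table with n replicas (integer division).
n=5
quorum=$(( n / 2 + 1 ))       # 5 / 2 + 1 = 3
tolerated=$(( n - quorum ))   # failures the table can survive
echo "replicas=$n quorum=$quorum tolerated_failures=$tolerated"
# → replicas=5 quorum=3 tolerated_failures=2
```

Note that a three-replica table yields a quorum of 2, so it tolerates only a single failure.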

Repair a single replica

When one or more MSR replicas are unhealthy but the overall majority (n/2 + 1) is healthy and able to communicate with one another, your MSR cluster is still functional and healthy.

Cluster with two nodes unhealthy

Given that the MSR cluster is healthy, there is no need to execute a disaster recovery procedure, such as restoring from a backup. Instead, you should:

  1. Remove the unhealthy replicas from the MSR cluster.

  2. Join new replicas to make MSR highly available.

The order in which you perform these operations is important, as an MSR cluster requires a majority of replicas to be healthy at all times. If you join more replicas before removing the ones that are unhealthy, your MSR cluster might become unhealthy.

Split-brain scenario

To understand why you should remove unhealthy replicas before joining new ones, imagine you have a five-replica MSR deployment, and something goes wrong with the overlay network connecting the replicas, causing them to be separated into two groups.

Cluster with network problem

Because the cluster originally had five replicas, it can work as long as three replicas are still healthy and able to communicate (5 / 2 + 1 = 3). Even though the network separated the replicas into two groups, MSR is still healthy.

If at this point you join a new replica instead of fixing the network problem or removing the two replicas that got isolated from the rest, it is possible that the new replica ends up on the side of the network partition that has fewer replicas.

cluster with split brain

When this happens, both groups have the minimum number of replicas needed to establish a cluster. This is known as a split-brain scenario: both groups can now accept writes and their histories start diverging, making the two groups effectively two different clusters.

Configure replicas
MSR on Swarm

To add or remove MSR on Swarm nodes, you must reconfigure the application with the updated list of nodes.

  1. Obtain the MSR on Swarm configuration file:

    docker run -it --rm --entrypoint \
    cat registry.mirantis.com/msr/msr-installer:3.1.0 \
    /config/values.yml > newvalues-swarm.yaml
    
  2. Edit the newvalues-swarm.yaml file and specify the worker nodes on which MSR is to be deployed:

    swarm:
      ## nodeList is a comma-separated list of node IDs within the swarm that represent nodes that MSR will be allowed to
      ## deploy to. To retrieve a list of nodes within a swarm, execute `docker node ls`. If no nodes are specified, then MSR
      ## will be installed on the current node.
      ##
      nodeList:
    
  3. Update MSR.

Scale Helm deployment

To scale your Helm deployment, you must first obtain your MSR deployment:

kubectl get deployment

Next, run the following command to add and remove replicas from your MSR deployment.