Introduction¶
Warning
In accordance with the end-of-life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version on 2024-APR-20. The latest MSR product documentation is available here.
The Mirantis Secure Registry (MSR) documentation is your resource for information on how to deploy and operate an MSR instance. The content is intended to give users an understanding of the core concepts of the product, together with instruction sufficient to deploy and operate the software.
Mirantis is committed to continually building on and improving the MSR documentation, in response to the feedback and requests we receive from the MSR user base.
Product Overview¶
Warning
In accordance with the end-of-life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version on 2024-APR-20. The latest MSR product documentation is available here.
Mirantis Secure Registry (MSR) is a solution that enables enterprises to store and manage their container images on-premises or in their virtual private clouds. As of MSR 3.0.0, the software can run alongside your other applications in any standard Kubernetes distribution (version 1.20 or later), using standard Helm techniques. As a result, the MSR user has far greater flexibility, as many resources are administered by the orchestrator rather than by the registry itself. And while MSR 3.0.0 is not integrated with Mirantis Kubernetes Engine (MKE) as previous versions were, it runs just as well on MKE as on any supported Kubernetes distribution.
The security that is built into MSR enables you to verify and trust the provenance and content of your applications, and to ensure secure separation of concerns, helping you to meet security and regulatory compliance requirements. In addition, automated operations and integration with CI/CD speed up application testing and delivery. The most common use cases for MSR include:
- Helm chart repositories
Deploying applications to Kubernetes can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources, such as Pods, services, deployments, and replica sets, each of which requires a detailed YAML manifest file written by hand. With Helm charts (packages comprising a few YAML configuration files and templates that are rendered into Kubernetes manifest files), you can install the software you need with all of its dependencies, and upgrade and configure it, with far less time and effort.
- Automated development
Easily create an automated workflow where you push a commit that triggers a build on a CI provider, which pushes a new image into your registry. Then, the registry fires off a webhook and triggers deployment on a staging environment, or notifies other systems that a new image is available.
- Secure and vulnerability-free images
When an industry requires applications to comply with certain security standards for regulatory compliance, your applications are only as secure as the images from which they run. To ensure that your images are secure and free of vulnerabilities, track your images using a binary image scanner to detect the components in the images and identify the associated CVEs. You can also apply image enforcement policies to prevent vulnerable or inappropriate images from being pulled and deployed from your registry.
Reference Architecture¶
Warning
In accordance with the end-of-life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version on 2024-APR-20. The latest MSR product documentation is available here.
The MSR Reference Architecture provides comprehensive technical information on Mirantis Secure Registry (MSR), including component particulars, infrastructure specifications, and networking and volumes detail.
Introduction to MSR¶
Mirantis Secure Registry (MSR) is an enterprise-grade image storage solution. Installed behind a firewall, either on-premises or on a virtual private cloud, MSR provides a secure environment where users can store and manage their images.
Starting with MSR 3.0.0, MSR can run alongside your other apps in any standard Kubernetes distribution, through the use of standard Helm techniques. As a result, the MSR user has a great deal of flexibility, as many resources are administered by the orchestrator rather than by the registry itself.
While MSR 3.0.x is not integrated with Mirantis Kubernetes Engine (MKE), as it was with previous versions, it runs just as well on MKE as on any supported Kubernetes distribution.
The advantages of MSR include the following:
- Image and job management
MSR has a web-based user interface used for browsing images and auditing repository events. With the web UI, you can see which Dockerfile lines produced an image and, if security scanning is enabled, a list of all of the software installed in that image and any Common Vulnerabilities and Exposures (CVEs). You can also audit jobs with the web UI.
MSR can serve as a continuous integration and continuous delivery (CI/CD) component, in the building, shipping, and running of applications.
- Availability
MSR is highly available through the use of multiple replicas of all containers and metadata. As such, MSR continues to operate in the event of a machine failure, allowing time for repair.
- Efficiency
MSR can reduce the bandwidth used when pulling images by caching images closer to users. In addition, MSR can clean up unreferenced manifests and layers.
- Built-in access control
As with Mirantis Kubernetes Engine (MKE), MSR uses role-based access control (RBAC), which allows you to manage image access, either manually, with LDAP, or with Active Directory.
- Security scanning
A security scanner is built into MSR, which you can use to discover the versions of the software in use in your images. The tool scans each layer and aggregates the results, offering a complete picture of what is shipped as part of your stack. Most importantly, because the security scanner draws on a periodically updated vulnerability database, it provides ongoing insight into your exposure to known security threats.
- Image signing
MSR ships with Notary, which allows you to sign and verify images using Docker Content Trust.
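As an illustration, Docker Content Trust is enabled client-side through an environment variable, after which pushes are signed. This sketch assumes a hypothetical MSR host msr.example.org and an existing repository:

# Enable Docker Content Trust for this shell, then push a signed tag
export DOCKER_CONTENT_TRUST=1
docker push msr.example.org/library/wordpress:latest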
Components¶
Mirantis Secure Registry (MSR) is a containerized application that runs on a Kubernetes cluster. After deploying MSR, you can use your Docker CLI client to log in, push images, and pull images. For high availability, you can horizontally scale your MSR workloads across multiple Kubernetes worker nodes.
Workloads¶
Descriptions for each of the workloads that MSR creates during installation are available in the table below.
Caution
Do not use these components in your applications, as they are for internal MSR use only.
| Name | Full name | Description |
|---|---|---|
| API | `<release-name>-api` | Executes the MSR business logic, serving the MSR web application and API. |
| Garant | `<release-name>-garant` | Manages MSR authentication. |
| Jobrunner | `<release-name>-jobrunner` | Runs asynchronous background jobs, including garbage collection and image vulnerability scans. |
| NGINX | `<release-name>-nginx` | Receives HTTP and HTTPS requests and proxies those requests to other MSR components. |
| Notary server | `<release-name>-notary-server` | Provides signing and verification for images that are pushed to or pulled from the secure registry. |
| Notary signer | `<release-name>-notary-signer` | Performs server-side timestamp and snapshot signing for Content Trust metadata. |
| Registry | `<release-name>-registry` | Implements pull and push functionality for Docker images and manages how images are stored. |
| RethinkDB | `<release-name>-rethinkdb` | Stores persisted repository metadata. |
| Scanningstore | `<release-name>-scanningstore` | Stores security scanning data. |
| eNZi | `<release-name>-enzi` | Authenticates and authorizes MSR users. |

MSR additionally relies on the following third-party workloads:

| Name | Full name | Description |
|---|---|---|
| PostgreSQL | `postgres-operator` | Manages the security scanning database. |
| cert-manager | `cert-manager` | Manages certificates for all MSR components. |
The communication flow between MSR workloads is illustrated below:
Note
The third-party cert-manager component interacts with all of the components displayed in the above diagram.
JobRunner¶
Descriptions for each of the job types that are run by MSR are available in the table below.
| Job type | Description |
|---|---|
| | Uploads an analytics report to Mirantis. |
| | Lints a Helm chart. |
| | Lints all charts in all repositories. |
| `onlinegc` | Performs garbage collection for all types of MSR data and metadata. |
| | Performs garbage collection of orphaned image layer data. |
| | Performs auto-deletion of repository events. |
| `onlinegc_joblogs` | Performs auto-deletion of job logs. |
| `onlinegc_metadata` | Performs garbage collection of image metadata. |
| | Performs garbage collection of security scan results for deleted layers. |
| `poll_mirror` | Pulls tags from remote repositories, as determined by mirroring policies. |
| `push_mirror` | Pushes image tags to remote repositories, as determined by mirroring policies. |
| `scan_check` | Scans an image by digest. |
| `scan_check_all` | Rescans all previously scanned images. |
| `scan_check_single` | Scans a single layer of an image. |
| `tag_prune` | Deletes tags from remote repositories, as determined by the pruning policies of the repositories. |
| `update_vulnerability_database` | Updates the vulnerability database (CVE list). |
| `webhook` | Sends a webhook. |
System requirements¶
Make sure that you review the resource allocation details for MSR prior to installation. The following sections provide both the minimum resource allotment and guidelines for an optimal resource allotment.
Minimum resource allotment¶
Verify that at a minimum your system can allocate the following resources solely to the running of MSR:
| Component | Requirement |
|---|---|
| Nodes | One Linux/AMD64 worker node, running Kubernetes 1.21–1.27 [1] |
| Kubernetes command line tool | `kubectl` |
| Kubernetes configuration files | Necessary for accessing the Kubernetes cluster. Note: If you are installing MSR 3.0.x on an MKE Kubernetes cluster, you must download the MKE client bundle to obtain the required configuration files. |
| Certificate management | cert-manager installed on the cluster. Minimum required version: 1.7.2 |
| Kubernetes package management | Helm |
| Metadata storage | One 64 GB Kubernetes persistent volume [2] that supports the `ReadWriteOnce` access mode |
| Image data storage | Any of the following: a persistent volume that supports the `ReadWriteMany` access mode, or a supported cloud storage provider (Amazon S3 or an S3-compatible service, Microsoft Azure Blob Storage, Google Cloud Storage, OpenStack Swift, Alibaba Cloud OSS). For more information, refer to Storage. |
| Image-scanning CVE database | A PostgreSQL server with sufficient storage for a 24 GB database. This can be either a PostgreSQL instance that MSR deploys on the cluster by way of Postgres Operator, or an external PostgreSQL server. |
Recommended resource allotment¶
For optimal performance, Mirantis recommends that you allocate the following resources solely to MSR:
| Component | Requirement |
|---|---|
| Nodes | Three Linux/AMD64 worker nodes, running Kubernetes 1.21–1.27 [1] |
| Kubernetes command line tool | `kubectl` |
| Kubernetes configuration files | Necessary for accessing the Kubernetes cluster. Note: If you are installing MSR 3.0.x on an MKE Kubernetes cluster, you must download the MKE client bundle to obtain the required configuration files. |
| Certificate management | cert-manager installed on the cluster. Minimum required version: 1.7.2 |
| Kubernetes package management | Helm |
| Metadata storage | Three 64 GB Kubernetes persistent volumes [2] that support the `ReadWriteOnce` access mode |
| Image data storage | Any of the following: a persistent volume that supports the `ReadWriteMany` access mode, or a supported cloud storage provider (Amazon S3 or an S3-compatible service, Microsoft Azure Blob Storage, Google Cloud Storage, OpenStack Swift, Alibaba Cloud OSS). For more information, refer to Storage. |
| Image-scanning CVE database | A high availability PostgreSQL server with sufficient storage for a 24 GB database. This can be either a PostgreSQL instance that MSR deploys on the cluster by way of Postgres Operator, or an external PostgreSQL server. |
[1] Postgres Operator up through 1.8.2 uses the PodDisruptionBudget `policy/v1beta1` Kubernetes API, which is no longer served as of Kubernetes 1.25. As such, various features of MSR may not function properly if Postgres Operator 1.8.2 or earlier is installed alongside MSR on Kubernetes 1.25 or later.

[2] For persistent volume configuration information, refer to the Kubernetes documentation, Configure a Pod to Use a PersistentVolume for Storage.
Volumes¶
By default, MSR creates the following persistent volume claims:
| Volume name | Description |
|---|---|
| `<release-name>` | Stores image data when MSR is configured to store image data in a persistent volume |
| `rethinkdb-cluster-data-<release-name>-rethinkdb-cluster-<N>` | Stores repository metadata |
| `pgdata-<release-name>-scanningstore-<N>` | Stores vulnerability scan data when MSR is configured to deploy an internal PostgreSQL cluster |
You can customize the storage class that is used to provision persistent volumes for these claims, or you can pre-provision volumes for use with MSR. Refer to Install MSR online for more information.
Storage¶
MSR supports the use of either a Persistent Volume or Cloud storage:
| Storage type | Description |
|---|---|
| Persistent Volume | MSR is compatible with the types of Persistent Volumes listed in the Kubernetes documentation. |
| Cloud | MSR is compatible with the following cloud storage providers: Amazon S3 (and S3-compatible services), Microsoft Azure Blob Storage, Google Cloud Storage, OpenStack Swift, and Alibaba Cloud Object Storage Service (OSS). |
Note
The deployment of MSR to Windows nodes is not supported.
MSR web UI¶
Use the MSR web UI to manage settings and user permissions for your MSR installation.
Rule engine¶
MSR uses a rule engine to evaluate policies, such as tag pruning and image enforcement.
The rule engine supports a set of comparison operators, including the matches operator described in the following note.
Note
The matches operator conforms subject fields to a user-provided regular expression (regex). The regex for matches must follow the specification in the official Go documentation: Package syntax.
The policies that use the rule engine include tag pruning and image enforcement.
Installation Guide¶
Warning
In accordance with the end-of-life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version on 2024-APR-20. The latest MSR product documentation is available here.
Targeted at deployment specialists and QA engineers, the MSR Installation Guide provides the detailed information and procedures you need to install and configure Mirantis Secure Registry (MSR).
Prepare MKE for MSR Install¶
To install MSR on MKE, you must first configure both the `default:postgres-operator` user account and the `default:postgres-pod` service account in MKE with the privileged permission.
To prepare MKE for MSR install:
1. Log in to the MKE web UI.

2. In the left-side navigation panel, click the <user name> drop-down to display the available options.

3. For MKE 3.6.0 or earlier, click Admin Settings > Orchestration. For MKE 3.6.1 or later, click Admin Settings > Privileges.

4. Navigate to the User account privileges section.

5. Enter `<namespace-name>:postgres-operator` into the User accounts field.

   Note: You can replace `<namespace-name>` with `default` to indicate the use of the default namespace.

6. Select the privileged check box.

7. Scroll down to the Service account privileges section.

8. Enter `<namespace-name>:postgres-pod` into the Service accounts field.

   Note: You can replace `<namespace-name>` with `default` to indicate the use of the default namespace.

9. Select the privileged check box.

10. Click Save.
Important
For already deployed MSR instances, issue a rolling restart of the `postgres-operator` deployment:
kubectl rollout restart deploy/postgres-operator
Install MSR online¶
Use a Helm chart to install MSR onto any Kubernetes distribution that supports persistent storage.
Prerequisites¶
You must have the following key components in place before you can install MSR online using a Helm chart: a Kubernetes platform, cert-manager, and the Postgres Operator.
Prepare your Kubernetes environment¶
Install and configure your Kubernetes distribution.
Ensure that the default StorageClass on your cluster supports the dynamic provisioning of volumes. If necessary, refer to the Kubernetes documentation Change the default StorageClass.

If no default StorageClass is set, you can specify a StorageClass for MSR to use by providing the following additional parameters when running the helm install command:

--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>

The first of these three parameters is only applicable when you install MSR with a persistentVolume backend, the default setting:

--set registry.storage.backend=persistentVolume

MSR creates PersistentVolumeClaims with either the `ReadWriteOnce` or the `ReadWriteMany` access mode, depending on the purpose for which they are created. Thus, the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the `ReadWriteOnce` and `ReadWriteMany` access modes.

The `<release-name>` PVC is created by default with the `ReadWriteMany` access mode. If you choose to install MSR with a persistentVolume backend, you can override this default access mode with the following parameter when running the helm install command, as shown in the sketch below:

--set registry.storage.persistentVolume.accessMode=<new-access-mode>
Install cert-manager¶
Important
The cert-manager version must be 1.7.2 or later.
Run the following helm install command:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --version 1.7.2 \
  --set installCRDs=true

Verify that cert-manager is in the `Running` state:

kubectl get pods

If any of the cert-manager Pods are not in the `Running` state, run kubectl describe on each Pod:

kubectl describe pod <cert-manager-pod-name>
Note
To troubleshoot the issues that present in the kubectl describe command output, refer to Troubleshooting in the official cert-manager documentation.
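As an additional readiness check, you can wait on the cert-manager Pods. This sketch assumes that cert-manager was installed into the current namespace under the release name cert-manager:

# Block until all cert-manager Pods report Ready, or time out after two minutes
kubectl wait pod \
  -l app.kubernetes.io/instance=cert-manager \
  --for=condition=Ready \
  --timeout=120s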
Install Postgres Operator¶
Important
The Postgres Operator version you install must be 1.9.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.9.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Run the following helm install command, including the `spilo_*` parameters:

helm repo add postgres-operator \
  https://opensource.zalando.com/postgres-operator/charts/postgres-operator/
helm repo update
helm install postgres-operator postgres-operator/postgres-operator \
  --version <version> \
  --set configKubernetes.spilo_runasuser=101 \
  --set configKubernetes.spilo_runasgroup=103 \
  --set configKubernetes.spilo_fsgroup=103

Verify that Postgres Operator is in the `Running` state:

kubectl get pods

To troubleshoot a failing Postgres Operator Pod, run the following command:

kubectl describe pod <postgres-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <postgres-operator-pod-name>
Note
By default, MSR uses the persistent volume claims detailed in Volumes.
If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can use Helm to provide the new instance with the name of the associated PersistentVolumeClaim:
--set registry.storage.persistentVolume.existingClaim=<pre-existing-msr-pvc>
This setting indicates the `<release-name>` PVC referred to in Volumes.
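If you need to pre-provision such a claim, the following is a minimal sketch; the claim name, size, and StorageClass are illustrative assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-msr-blob-data        # hypothetical claim name, passed through existingClaim
spec:
  accessModes:
    - ReadWriteMany             # matches the default access mode of the <release-name> PVC
  storageClassName: standard    # assumed StorageClass
  resources:
    requests:
      storage: 64Gi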
See also
Helm official documentation: Helm Install
Run install command¶
Use a Helm chart to install MSR:
helm repo add msrofficial https://registry.mirantis.com/charts/msr/msr
helm repo update
helm install msr msrofficial/msr \
  --version <helm-chart-version> \
  --set-file license=path/to/file/license.lic
Note
If the installation fails and MSR Pods continue to run in your cluster, it is likely that MSR failed to complete the initialization process, and thus you must reinstall MSR. To delete the Pods and completely uninstall MSR:
Delete any running msr-initialize Pods:
kubectl delete job msr-initialize
Delete any remaining Pods:
helm uninstall msr
Verify the success of your MSR installation.
Verify that all `msr-*` Pods are in the `Running` state. For more detail, refer to Check the Pods.

Log in to the MSR web UI.
Log into MSR from the command line:
docker login $FQDN
Push an image to MSR using docker push.
Note

The default credentials for MSR are:

- User name: admin
- Password: password

Be aware that the Helm chart values also include the default MSR credentials. As such, Mirantis strongly recommends that you change the credentials immediately following installation.
See also
Helm official documentation: Helm Install
Kubernetes official documentation: Storage Classes
Check the Pods¶
If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the `KUBECONFIG` environment variable to the path of the unique config file for the cluster.
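For example, to point kubectl at a cluster config file stored at a hypothetical path:

export KUBECONFIG=$HOME/bundles/mke/kube.yml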
kubectl get pods
Example output:
NAME READY STATUS RESTARTS AGE
cert-manager-6bf59fc5c7-5wchj 1/1 Running 0 23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k 1/1 Running 0 23m
cert-manager-webhook-6fcbbd87c9-7ftv7 1/1 Running 0 23m
msr-api-cfc88f8ff-8lh9n 1/1 Running 4 18m
msr-enzi-api-77bf8558b9-p6q7x 1/1 Running 1 18m
msr-enzi-worker-0 1/1 Running 3 18m
msr-garant-d84bbfccd-j94qc 1/1 Running 4 18m
msr-jobrunner-default-54675dd9f4-cwnfg 1/1 Running 3 18m
msr-nginx-6d7c775dd9-nt48c 1/1 Running 0 18m
msr-notary-server-64f9dd68fc-xzpp4 1/1 Running 4 18m
msr-notary-signer-5b6f7f6bd9-bcqwv 1/1 Running 3 18m
msr-registry-6b6c6b59d5-8bnsl 1/1 Running 0 18m
msr-rethinkdb-cluster-0 1/1 Running 0 18m
msr-rethinkdb-proxy-7fccc79db7-njrfl 1/1 Running 2 18m
msr-scanningstore-0 1/1 Running 0 18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt 1/1 Running 0 19m
postgres-operator-54bb64998c-mjs6q 1/1 Running 0 22m
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must have Running status. If this is not the case, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, you must configure a default StorageClass and then reinstall MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following parameters when using Helm to install MSR:
--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>
Note
The first of these three parameters is only applicable when you install MSR with a persistentVolume backend, the default setting:
--set registry.storage.backend=persistentVolume
Add load balancer (AWS)¶
If you deploy MSR to AWS you may also want to add a load balancer to your installation.
Set an environment variable to use in assigning an internal service name to the load balancer service:
export MSR_ELB_SERVICE="msr-public-elb"
Use Kubernetes to create an AWS load balancer to expose NGINX, the front end for the MSR web UI:
kubectl expose deployment msr-nginx --type=LoadBalancer \
  --name="${MSR_ELB_SERVICE}"
Check the status:
kubectl get svc | grep "${MSR_ELB_SERVICE}" | awk '{print $4}'
Note
The output returned on AWS will be an FQDN, whereas other cloud providers may return either an FQDN or an IP address.
Example output:
af42a8a8351864683b584833065b62c7-1127599283.us-west-2.elb.amazonaws.com
Note
If nothing returns after you have run the command, wait a few minutes and run the command again.
If the command returns an FQDN, it may be necessary to wait for the new DNS record to resolve. You can check the resolution status by running the following script, inserting the output string you received in place of `$FQDN`:

while : ; do dig +short $FQDN ; sleep 5 ; done

When one or more IP addresses display, you can interrupt the shell loop and access your MSR 3.0.x load balancer at:

https://$FQDN/

If the command returns an IP address, you can access the load balancer at:

https://<load-balancer-IP>/
Note
The load balancer blocks any attempt to tear down the VPC in which the EC2 instances are running. As such, to tear down the VPC, you must first remove the load balancer:
kubectl delete svc msr-public-elb
Optional. Configure MSR to use Notary to sign images. To do this, update NGINX to add the DNS name:
Substituting your Helm chart version (for example, 1.0.0) for <MSR-chart-version> and the FQDN you obtained for MSR_FQDN, run:
helm upgrade msr msrofficial/msr \
  --version <MSR-chart-version> \
  --set-file license=path/to/file/license.lic \
  --set nginx.webtls.spec.dnsNames="{nginx,localhost,${MSR_FQDN}}" \
  --reuse-values
Verify the upgrade change:
helm get values msr
Example output:
USER-SUPPLIED VALUES:
license: |
  e3ded81fe8de30b857fe1de1d1f6968bcb8b5b1078021a88839ad3b3c9e1a77a94fa7987bd2591c8dd59ad8bae4ce0719a67d9756561b7c67c12ee42b1c505bf596e4224abb792a00bfbdf4c9fc32ea727f82f8f6250720bb634b082162842797e87ad3bfbf6f408dae41e81a862cd73a3d2729dc81365900e293b4724231b2c6f0fc6c2e83ee32d1eb0107ca9afa42a4f5b20ac5c6b538a551d8f380f6a89d9746fc7405d5ba96738c1365a6b91b2c0572225b8a5d39e4b6956c48bf9b07068248762c71987999dfc8c1e4432e39fd20f52b6d9ddf4839ea5c5e0164acb3956c01da4dd3f5499deed204dff40323445b87196a11e3ee966f238e32b414fe8e5b1881859e3fadc8394826882fb3e39f6c4d2369e5b9161b9495455c4587dbec33d197accf9f5c1032be5ed32a776f091e1935fd0fecdf7010caa8cf3034b15d46247146cc5917843e771
nginx:
  webtls:
    spec:
      dnsNames:
      - nginx
      - localhost
      - af42a8a8351864683b584833065b62c7-1127599283.us-west-2.elb.amazonaws.com
Install MSR offline¶
For documentation purposes, Mirantis assumes here that you are installing MSR on an offline Kubernetes cluster from an Internet-connected machine that has access to the Kubernetes cluster. In this scenario, you use Helm to perform the MSR installation from the Internet-connected machine.
Prepare your environment¶
Confirm that the default StorageClass on your cluster supports dynamic volume provisioning. For more information, refer to the Kubernetes documentation Change the default StorageClass.
If a default StorageClass is not set, you can specify a StorageClass for MSR to use by providing the following additional parameters when running the helm install command:

--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>

The first of these three parameters is only applicable when you install MSR with a persistentVolume backend, the default setting:

--set registry.storage.backend=persistentVolume

MSR creates PersistentVolumeClaims with either the `ReadWriteOnce` or the `ReadWriteMany` access mode, depending on the purpose for which they are created. Thus, the StorageClass provisioner that MSR uses must be able to provision PersistentVolumes with at least the `ReadWriteOnce` and `ReadWriteMany` access modes.

The `<release-name>` PVC is created by default with the `ReadWriteMany` access mode. If you choose to install MSR with a persistentVolume backend, you can override this default access mode with the following parameter when running the helm install command:

--set registry.storage.persistentVolume.accessMode=<new-access-mode>
On the Internet-connected computer, configure your environment to use the kubeconfig of the offline Kubernetes cluster. You can do this by setting a KUBECONFIG environment variable.
See also
Kubernetes official documentation: Storage Classes
Set up a Docker registry¶
Prepare a Docker registry on the Internet-connected machine that contains all of the images that are necessary to install MSR. Kubernetes will pull the required images from this registry to the offline nodes during the installation of the prerequisites and MSR.
On the Internet-connected machine, set up a Docker registry that the offline Kubernetes cluster can access using a private IP address. For more information, refer to Docker official documentation: Deploy a registry server.
Add the `msrofficial`, `postgres-operator`, and `jetstack` Helm repositories:

helm repo add msrofficial https://registry.mirantis.com/charts/msr/msr
helm repo add postgres-operator https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm repo add jetstack https://charts.jetstack.io
helm repo update
Obtain the names of all of the images that are required to install MSR from the desired version of the Helm charts for MSR, postgres-operator, and cert-manager. You can do this by templating each chart and grepping for `image:`:

helm template msr msrofficial/msr \
  --version=<msr-chart-version> \
  --api-versions=acid.zalan.do/v1 \
  --api-versions=cert-manager.io/v1 | grep image:

helm template postgres-operator postgres-operator/postgres-operator \
  --version <version> \
  --set configKubernetes.spilo_runasuser=101 \
  --set configKubernetes.spilo_runasgroup=103 \
  --set configKubernetes.spilo_fsgroup=103 | grep image:

helm template cert-manager jetstack/cert-manager \
  --version 1.7.2 \
  --set installCRDs=true | grep image:
Pull the images listed in the previous step.
Tag each image, including its original namespace, in preparation for pushing the image to the Docker registry. For example:
docker tag registry.mirantis.com/msr/msr-api:<msr-version> <registry-ip>/msr/msr-api:<msr-version>
Push all the required images to the Docker registry. For example:
docker push <registry-ip>/msr/msr-api:<msr-version>
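As an illustrative sketch of the pull, tag, and push steps, assume a plain-text file named images.txt that lists the required source images one per line; you can then re-home them in a single loop:

# Re-home every image in images.txt to the private registry,
# preserving the original namespace (for example, msr/msr-api:<tag>)
while read -r src; do
  dst="<registry-ip>/${src#*/}"
  docker pull "$src"
  docker tag "$src" "$dst"
  docker push "$dst"
done < images.txt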
Create the following YAML files, which you will reference to override the image repository information contained in the Helm charts used for MSR installation:

my_msr_values.yaml:

imageRegistry: <registry-ip>
enzi:
  image:
    registry: <registry-ip>
rethinkdb:
  image:
    registry: <registry-ip>

my_postgres_values.yaml:

image:
  registry: <registry-ip>
configGeneral:
  docker_image: <registry-ip>/acid/spilo-14:<version>
configLogicalBackup:
  logical_backup_docker_image: <registry-ip>/acid/logical-backup:<version>
configConnectionPooler:
  connection_pooler_image: <registry-ip>/acid/pgbouncer:<version>

my_certmanager_values.yaml:

image:
  registry: <registry-ip>
  repository: jetstack/cert-manager-controller
webhook:
  image:
    registry: <registry-ip>
    repository: jetstack/cert-manager-webhook
cainjector:
  image:
    registry: <registry-ip>
    repository: jetstack/cert-manager-cainjector
startupapicheck:
  image:
    registry: <registry-ip>
    repository: jetstack/cert-manager-ctl
Prerequisites¶
You must have cert-manager and the Postgres Operator in place before you can install MSR using the offline method.
Install cert-manager¶
Important
The cert-manager version must be 1.7.2 or later.
Run the following helm install command:

helm install cert-manager jetstack/cert-manager \
  --version 1.7.2 \
  --set installCRDs=true \
  -f my_certmanager_values.yaml

Verify that cert-manager is in the `Running` state:

kubectl get pods

If any of the cert-manager Pods are not in the `Running` state, run kubectl describe on each Pod:

kubectl describe pod <cert-manager-pod-name>
Note
To troubleshoot the issues that present in the kubectl describe command output, refer to Troubleshooting in the official cert-manager documentation.
Install Postgres Operator¶
Important
The Postgres Operator version you install must be 1.9.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.9.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Run the following helm install command, including the `spilo_*` parameters:

helm install postgres-operator postgres-operator/postgres-operator \
  --version <version> \
  --set configKubernetes.spilo_runasuser=101 \
  --set configKubernetes.spilo_runasgroup=103 \
  --set configKubernetes.spilo_fsgroup=103 \
  -f my_postgres_values.yaml

Verify that Postgres Operator is in the `Running` state:

kubectl get pods

To troubleshoot a failing Postgres Operator Pod, run the following command:

kubectl describe pod <postgres-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <postgres-operator-pod-name>
Note
By default, MSR uses the persistent volume claims detailed in Volumes.
If you have a pre-existing PersistentVolume that contains image blob data that you intend to use with a new instance of MSR, you can use Helm to provide the new instance with the name of the associated PersistentVolumeClaim:
--set registry.storage.persistentVolume.existingClaim=<pre-existing-msr-pvc>
This setting indicates the `<release-name>` PVC referred to in Volumes.
See also
Helm official documentation: Helm Install
Run install command¶
Use a Helm chart to install MSR:
helm install msr msrofficial/msr \
  --version <helm-chart-version> \
  --set-file license=path/to/file/license.lic \
  -f my_msr_values.yaml
Note
If the installation fails and MSR Pods continue to run in your cluster, it is likely that MSR failed to complete the initialization process, and thus you must reinstall MSR. To delete the Pods and completely uninstall MSR:
Delete any running `msr-initialize` Pods:

kubectl delete job msr-initialize
Delete any remaining Pods:
helm uninstall msr
Verify the success of your MSR installation.
Verify that all `msr-*` Pods are in the `Running` state. For more detail, refer to Check the Pods.

Log in to the MSR web UI.
Log into MSR from the command line:
docker login <private-ip>
Push an image to MSR using docker push.
Note

The default credentials for MSR are:

- User name: admin
- Password: password

Be aware that the Helm chart values also include the default MSR credentials. As such, Mirantis strongly recommends that you change the credentials immediately following installation.
Optional. Disable outgoing connections in the MSR web UI Admin Settings. MSR makes outgoing connections for the following tasks:

- Analytics reporting
- New version notifications
- Online license verification
- Vulnerability scanning database updates
See also
Helm official documentation: Helm Install
Check the Pods¶
If you are using MKE with your cluster, download and configure the client bundle. Otherwise, ensure that you can access the cluster using kubectl, either by updating the default Kubernetes config file or by setting the `KUBECONFIG` environment variable to the path of the unique config file for the cluster.
kubectl get pods
Example output:
NAME READY STATUS RESTARTS AGE
cert-manager-6bf59fc5c7-5wchj 1/1 Running 0 23m
cert-manager-cainjector-5c5f8bfbd6-mlr2k 1/1 Running 0 23m
cert-manager-webhook-6fcbbd87c9-7ftv7 1/1 Running 0 23m
msr-api-cfc88f8ff-8lh9n 1/1 Running 4 18m
msr-enzi-api-77bf8558b9-p6q7x 1/1 Running 1 18m
msr-enzi-worker-0 1/1 Running 3 18m
msr-garant-d84bbfccd-j94qc 1/1 Running 4 18m
msr-jobrunner-default-54675dd9f4-cwnfg 1/1 Running 3 18m
msr-nginx-6d7c775dd9-nt48c 1/1 Running 0 18m
msr-notary-server-64f9dd68fc-xzpp4 1/1 Running 4 18m
msr-notary-signer-5b6f7f6bd9-bcqwv 1/1 Running 3 18m
msr-registry-6b6c6b59d5-8bnsl 1/1 Running 0 18m
msr-rethinkdb-cluster-0 1/1 Running 0 18m
msr-rethinkdb-proxy-7fccc79db7-njrfl 1/1 Running 2 18m
msr-scanningstore-0 1/1 Running 0 18m
nfs-subdir-external-provisioner-c5f64f6cd-mjjqt 1/1 Running 0 19m
postgres-operator-54bb64998c-mjs6q 1/1 Running 0 22m
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod must have Running status. If this is not the case, it is likely that the StorageClass is missing or misconfigured, or that no default StorageClass is set. To rectify this, you must configure a default StorageClass and then reinstall MSR. Alternatively, you can specify a StorageClass for MSR to use by providing the following parameters when using Helm to install MSR:
--set registry.storage.persistentVolume.storageClass=<my-storageclass>
--set postgresql.volume.storageClass=<my-storageclass>
--set rethinkdb.cluster.persistentVolume.storageClass=<my-storageclass>
Note
The first of these three parameters is only applicable when you install MSR with a persistentVolume backend, the default setting:
--set registry.storage.backend=persistentVolume
Obtain the MSR license¶
After you install MSR, download your new MSR license and apply it using a Helm command.
Warning
Users are not authorized to run MSR without a valid license. For more information, refer to Mirantis Agreements and Terms.
To download your MSR license:
Note
If you do not have the CloudCare Portal welcome email, contact your designated administrator.
Log in to the Mirantis CloudCare Portal.
In the top navigation bar, click Environments.
Click the Environment Name associated with the license you want to download.
Scroll down to Licenses and click the License File URL. A new tab opens in your browser.
Click View file to download your license file.
To update your license settings:
Apply your MSR license to an unlicensed MSR instance:
helm upgrade msr msr --repo https://registry.mirantis.com/charts/msr/msr \
--version 1.0.0 \
--set-file license=path/to/file/license.lic
Uninstall MSR¶
You can uninstall MSR using a Helm command. To prevent data loss, uninstalling MSR does not delete persistent volumes or certificate secrets.
Run the following Helm command to uninstall MSR:
helm uninstall <Release.Name>
Next, remove persistent volumes and certificate secrets.
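The following sketch illustrates the cleanup, assuming the release was named msr and installed into the current namespace; the resource names are hypothetical, so review what actually remains before deleting:

# Review the persistent volume claims and secrets that remain
kubectl get pvc,secrets
# Delete the leftover resources by name (examples only)
kubectl delete pvc msr
kubectl delete secret msr-nginx-tls-cert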
Operations Guide¶
Warning
In accordance with the end-of-life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version on 2024-APR-20. The latest MSR product documentation is available here.
The MSR Operations Guide provides the detailed information you need to store and manage images on-premises or in a virtual private cloud, to meet security or regulatory compliance requirements.
Access MSR¶
Configure your Mirantis Container Runtime¶
By default, Mirantis Container Runtime uses TLS when pushing images to and pulling images from an image registry such as Mirantis Secure Registry (MSR).

If MSR is using the default configuration or is configured to use self-signed certificates, you must configure your Mirantis Container Runtime to trust MSR. Otherwise, when you try to log in to, push images to, or pull images from MSR, you will get an error:
docker login msr.example.org
x509: certificate signed by unknown authority
The first step in making your Mirantis Container Runtime trust the certificate authority used by MSR is to obtain the MSR CA certificate. You then configure your operating system to trust that certificate.
Configure your host¶
macOS¶
In your browser, navigate to `https://<msr-url>/ca` to download the TLS certificate used by MSR. Then add that certificate to the macOS Keychain.
After adding the CA certificate to Keychain, restart Docker Desktop for Mac.
Windows¶
In your browser, navigate to `https://<msr-url>/ca` to download the TLS certificate used by MSR. Open Windows Explorer, right-click the file you downloaded, and choose Install certificate.

Then, select the following options:

- Store location: local machine
- Select Place all certificates in the following store
- Click Browse, and select Trusted Root Certificate Authorities
- Click Finish
Learn more about managing TLS certificates.
After adding the CA certificate to Windows, restart Docker Desktop for Windows.
Ubuntu/Debian¶
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
RHEL/CentOS¶
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
Boot2Docker¶
Log into the virtual machine with ssh:
docker-machine ssh <machine-name>
Create the `bootsync.sh` file, and make it executable:

sudo touch /var/lib/boot2docker/bootsync.sh
sudo chmod 755 /var/lib/boot2docker/bootsync.sh

Add the following content to the `bootsync.sh` file. You can use nano or vi for this.

#!/bin/sh
cat /var/lib/boot2docker/server.pem >> /etc/ssl/certs/ca-certificates.crt

Add the MSR CA certificate to the `server.pem` file:

curl -k https://<msr-domain-name>/ca | sudo tee -a /var/lib/boot2docker/server.pem

Run `bootsync.sh` and restart the Docker daemon:

sudo /var/lib/boot2docker/bootsync.sh
sudo /etc/init.d/docker restart
Log into MSR¶
To validate that your Docker daemon trusts MSR, try authenticating against MSR.
docker login msr.example.org
Where to go next¶
Configure your Notary client¶
Configure your Notary client as described in Delegations for content trust.
Use a cache¶
Mirantis Secure Registry can be configured with one or more caches, which allows you to choose the cache from which to pull images, for faster download times.
If an administrator has set up caches, you can choose which cache to use when pulling images.
In the MSR web UI, navigate to your Account, and check the Content Cache options.
Once you save, your images are pulled from the cache instead of the central MSR.
Manage access tokens¶
You can create and distribute access tokens in MSR that grant users access at specific permission levels.
Access tokens are associated with a particular user account. They take on the permissions of that account when in use, adjusting automatically to any permissions changes that are made to the associated user account.
Note
Regular MSR users can create access tokens that adopt their own account permissions, while administrators can create access tokens that adopt the account permissions of any account they choose, including the admin account.
Access tokens are useful in building CI/CD pipelines and other integrations, as you can issue separate tokens for each integration and then deactivate or delete those tokens at any time. You can also use access tokens to generate a temporary password for a user who is locked out of their account.
Create an access token¶
Log in to the MSR web UI as the user whose permissions you want associated with the token.
In the left-side navigation panel, navigate to <user name> > Profile.
Select the Access Tokens tab.
Click New access token.
Add a description for the new token. You can, for example, describe the purpose of the token or illustrate a use scenario.
Click Create. The token displays only temporarily. Once you click Done, you will not be able to view the token again.
Modify an access token¶
Although you cannot view the access token itself following its initial display, you can give it a new description, deactivate it, or delete it.
To give an access token a new description:
Select the View details link associated with the required access token.
Enter a new description in the Description field.
Click Save.
To deactivate an access token:
Select View details next to the required access token.
Slide the Is active toggle to the left.
Click Save.
To delete an access token:
Select the checkbox associated with the access token you want to delete.
Click Delete.
Type `delete` in the pop-up window and click OK.
Use an access token¶
You can use an access token anywhere you need an MSR password.
Examples:
You can pass your access token to the `--password` or `-p` option when logging in from your Docker CLI client:

docker login dtr.example.org --username <username> --password <token>
You can pass your access token to an MSR API endpoint to list the repositories to which the associated user has access:
curl --silent --insecure --user <username>:<token> dtr.example.org/api/v0/repositories
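Assuming the API returns a JSON object containing a repositories array (an assumption about the response shape), you can extract just the repository names with jq:

curl --silent --insecure --user <username>:<token> \
  https://dtr.example.org/api/v0/repositories | jq '.repositories[].name'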
Configure MSR¶
Add a custom TLS certificate¶
By default, Mirantis Secure Registry (MSR) services are exposed using HTTPS. This ensures encrypted communications between clients and your trusted registry. If you do not pass a PEM-encoded TLS certificate during installation, MSR will generate a self-signed certificate, which leads to an insecure site warning when accessing MSR through a browser. In addition, MSR includes an HTTP Strict Transport Security (HSTS) header in all API responses, which can cause your browser not to load the MSR web UI.
You can configure MSR to use your own TLS certificates, to ensure that MSR automatically trusts browsers and client tools. You can also enable user authentication through client certificates that your organization Public Key Infrastructure (PKI) provides.
To upload your own TLS certificates and keys, you can use the Helm CLI options to either install or reconfigure your MSR instance.
Customize the WebTLS certificate¶
Acquire your TLS certificate and key files.
Note
You can use a previously created CA-signed SSL certificate, or you can create a new one. [1]
Add the secret to the cluster:
kubectl create secret tls <secret-name> \
  --key <keyfile>.pem \
  --cert <certfile>.pem
Install the Helm chart with the custom certificate:

helm install msr msr \
  --repo https://registry.mirantis.com/charts/msr/msr \
  --version 1.0.0 \
  --set-file license=path/to/file/license.lic \
  --set nginx.webtls.secretName="<secret-name>"
Enable port forwarding:

kubectl port-forward service/msr 8080 8443:443

Log in as an administrator at `https://localhost:8443/login`.

Verify the presence of a valid certificate by matching its information with that of the generated certificate.
[1] Users who want to create a new self-signed certificate that is valid for the host name can do so using `mkcert` or `openssl`.
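For example, the following sketch uses openssl to generate a self-signed certificate and key for a hypothetical host name msr.example.org; the file names are placeholders:

# Generate a self-signed certificate valid for one year
# (-addext requires OpenSSL 1.1.1 or later)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout msr-key.pem -out msr-cert.pem \
  -subj "/CN=msr.example.org" \
  -addext "subjectAltName=DNS:msr.example.org"

You can then pass msr-key.pem and msr-cert.pem to the kubectl create secret tls command shown above.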
Disable MSR telemetry¶
By default, MSR automatically records and transmits data to Mirantis through an encrypted channel for monitoring and analysis purposes. The data collected provides the Mirantis Customer Success Organization with information that helps Mirantis to better understand the operational use of MSR by our customers. It also provides key feedback in the form of product usage statistics, which assists our product teams in making enhancements to Mirantis products and services.
Caution
To send MSR telemetry, the container runtime and the `jobrunner` container must be able to resolve `api.segment.io` and create a TCP (HTTPS) connection on port 443.
To disable telemetry for MSR:
Log in to the MSR web UI as an administrator.
Click System in the left-side navigation panel to open the System page.
Click the General tab in the details pane.
Scroll down in the details pane to the Analytics section.
Toggle the Send data slider to the left.
Configure external storage¶
By default, MSR uses the local filesystem of the node on which it is running to store your Docker images. As an alternative, you can configure MSR to use an external storage backend for improved performance or high availability.
Configure MSR image storage¶
If your MSR deployment has a single replica, you can continue to use the local filesystem to store your Docker images. If, however, your MSR deployment has multiple replicas, make sure that all of the replicas use the same storage backend for high availability.
Whenever a user pulls an image, the MSR node serving the request must have access to that image.
Storage backends¶
MSR supports the following storage systems:

- Persistent volumes
- Cloud storage providers: Amazon S3 (and S3-compatible services), Microsoft Azure Blob Storage, Google Cloud Storage, OpenStack Swift, and Alibaba Cloud Object Storage Service (OSS)
You can configure your storage backend at the time of Helm chart installation or upgrade. To do so, specify in your Helm chart the `registry.storage.backend` parameter with one of the following values, as appropriate:
"persistentVolume"
"azure"
"gcs"
"s3"
"swift"
"oss"
The `registry.storage.persistentVolume` section of `values.yaml` in your Helm chart contains the following detailed storage configuration information:

| Field | Description |
|---|---|
| `storageClass` | The StorageClass for the persistentVolume. |
| `accessMode` | The access mode for the persistentVolume. |
| `size` | The size of the persistentVolume. |
Local filesystem¶
The default MSR backend is `persistentVolume`. You must configure a default StorageClass on your cluster that supports the dynamic provisioning of persistent volumes. The StorageClass must support the provisioning of `ReadWriteOnce` and `ReadWriteMany` volumes.

To verify the current default StorageClass:
kubectl get sc
Example output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 33d
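If you need to mark an existing StorageClass as the default, the Kubernetes documentation referenced above describes the following annotation; the StorageClass name standard is an assumption:

kubectl patch storageclass standard -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'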
MSR deployments with high availability must use either NFS or another centralized storage backend to ensure that all MSR replicas have access to the same images.
To verify the amount of persistent volume space that is in use:
kubectl -n <NAMESPACE> exec service/<RELEASE_NAME> -- df
Deploy MSR on NFS¶
You can configure your MSR replicas to store images on an NFS partition, enabling all replicas to share the same storage backend.
Note
As MSR does not migrate storage content when it switches backends, you must migrate the content prior to changing the MSR storage configuration.
Prepare MSR for NFS¶
Verify that the NFS server has the correct configuration.
Verify that the NFS server has a fixed IP address.
Verify that all hosts that are running MSR have the correct NFS libraries.
Verify that the hosts can connect to the NFS server by listing the directories exported by your NFS server:
showmount -e <nfsserver>
Mount one of the exported directories:
mkdir /tmp/mydir && sudo mount -t nfs <nfs server>:<directory> /tmp/mydir
Configure NFS for MSR¶
Note
The manifest examples herein are offered for demonstration purposes only. They do not exist in the Mirantis repository and thus are not available for use. To use NFS with MSR 3.0.x, you must enlist an external provisioner, such as NFS Ganesha server and external provisioner or NFS subdir external provisioner.
Define the NFS service:
kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
Create an NFS server and service:
Create the NFS server from the service definition:
kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
Expose the NFS server as a service:
kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
Verify that the Pods are correctly deployed:
kubectl get pods -l role=nfs-server
Create the persistent volume claim:
Locate the cluster IP for your server:
kubectl describe services nfs-server
Edit the NFS persistent volume to use the correct IP address. Because there are not yet any service names, you must hard-code the IP address.
Set up the persistent volume to use the NFS service:
kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
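As a rough sketch of the hard-coded volume definition, the referenced nfs-pv.yaml and nfs-pvc.yaml files look approximately as follows; the server IP, path, and sizes are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 64Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-service-cluster-ip>   # the cluster IP located in the previous step
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 64Gi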
Note
Switching MSR to NFS
As MSR does not migrate storage content when it switches backends, you must migrate the content prior to changing the MSR storage configuration.
Configure MSR for a cloud storage provider (S3)¶
You can configure MSR to store Docker images on Amazon S3 or on any other file servers with an S3-compatible API.
All S3-compatible services store files in “buckets”, in which you can authorize users to read, write, and delete files. Whenever you integrate MSR with such a service, MSR sends all read and write operations to the S3 bucket, where the images then persist.
Note
The instructions offered below pertain specifically to the configuration of MSR to Amazon S3. They can, however, also serve as a guide for how to configure MSR to other available cloud storage providers.
Create a bucket on Amazon S3¶
Before you configure MSR you must first create a bucket on Amazon S3. To optimize pulls and pushes, Mirantis suggests that you create the S3 bucket in the AWS region that is physically closest to the servers on which MSR is set to run.
Create an S3 bucket.
Create a new IAM user for the MSR integration.
Apply an IAM policy that has the following limited user permissions:
Access to the newly-created bucket
Ability to read, write, and delete files
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": "arn:aws:s3:::<bucket-name>" }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads" ], "Resource": "arn:aws:s3:::<bucket-name>/*" } ] }
Configure MSR on Amazon S3¶
1. Set `registry.storage.backend` to `s3`.

2. Specify `registry.storage.s3.region` and `registry.storage.s3.bucket`.

3. If you are not using IAM role authentication, you must also set `registry.storage.s3.accesskey` and `registry.storage.s3.secretkey`.

4. To activate the new storage configuration settings, issue the helm upgrade command.
Example configuration command at install time:
helm install msr msrofficial/msr \
--version 1.0.0 \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<> \
--set registry.storage.s3.secretkey=<> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube
Example configuration command at time of upgrade:
helm upgrade msr msrofficial/msr \
--version 1.0.0 \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<> \
--set registry.storage.s3.secretkey=<> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube
The following parameters are available for configuration in the `registry.storage.s3` section of the `values.yaml` file in your Helm chart:

| Field | Description | Level |
|---|---|---|
| `accesskey` | The AWS access key. | Standard |
| `secretkey` | The AWS secret key. | Standard |
| `region` | The AWS region in which your bucket exists. | Standard |
| `regionendpoint` | The endpoint for S3-compatible storage services. | Standard |
| `bucket` | The name of the bucket in which image data is stored. | Standard |
| `encrypt` | Indicates whether images are stored in encrypted format. | Advanced |
| `keyid` | The KMS key ID to use for encryption of images. | Advanced |
| `secure` | Indicates whether to use HTTPS for data transfers to the bucket. | Advanced |
| `v4auth` | Indicates whether to use AWS Signature Version 4 to authenticate requests. | Advanced |
| `chunksize` | The default part size for multipart uploads. | Advanced |
| `rootdirectory` | A prefix that is applied to all object keys, to allow you to segment data in your bucket if necessary. | Advanced |
| `storageclass` | The S3 storage class applied to each registry file. Valid options are "STANDARD" and "REDUCED_REDUNDANCY". | Advanced |
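If you prefer a values file to --set flags, the following sketch expresses the same S3 configuration; the region and bucket values are illustrative:

registry:
  storage:
    backend: s3
    s3:
      region: us-east-2          # illustrative region
      bucket: testing-msr-kube   # illustrative bucket name
      accesskey: <access-key>
      secretkey: <secret-key>

You would then pass the file with -f when running helm install or helm upgrade.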
MSR supports the following S3 regions:
us-east-1, us-east-2, us-west-1, us-west-2, eu-west-1, eu-west-2, eu-central-1, ap-south-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, sa-east-1, cn-north-1, us-gov-west-1, ca-central-1
Restore MSR with S3¶
To restore MSR using your previously configured S3 settings, use restore.
Other cloud storage providers¶
For cloud storage providers other than Amazon S3, configure the following parameters in the corresponding provider subsection of the `registry.storage` section of the `values.yaml` file in your Helm chart:
Microsoft Azure (`registry.storage.azure`):

| Field | Description | Level |
|---|---|---|
| `accountname` | The name of the Azure Storage Account. | Standard |
| `accountkey` | The Primary or Secondary Key for the Storage Account. | Standard |
| `container` | The name of the Azure root storage container in which image data is stored. | Standard |
| `realm` | The domain name suffix for the Storage API endpoint. | Advanced |
OpenStack Swift (`registry.storage.swift`):

| Field | Description | Level |
|---|---|---|
| `username` | The OpenStack user name. | Standard |
| `password` | The OpenStack password. | Standard |
| `container` | The name of the Swift container in which to store the registry images. | Standard |
| `tenant` | The OpenStack tenant name. | Advanced |
| `tenantid` | The OpenStack tenant ID. | Advanced |
| `domain` | The OpenStack domain name for Identity v3 API. | Advanced |
| `domainid` | The OpenStack domain ID for Identity v3 API. | Advanced |
| `trustid` | The OpenStack trust ID for Identity v3 API. | Advanced |
| `insecureskipverify` | Skips TLS server certificate verification. | Advanced |
| `chunksize` | Data segments for the Swift Dynamic Large Objects. | Advanced |
| `prefix` | A prefix that is applied to all Swift object keys, to allow you to segment data in your container if necessary. | Advanced |
| `secretkey` | The secret key used to generate temporary URLs. | Advanced |
| `accesskey` | The access key used to generate temporary URLs. | Advanced |
| `authversion` | Specifies the OpenStack Auth version. | Advanced |
| `endpointtype` | The endpoint type used when connecting to Swift. | Advanced |
Google Cloud Storage (`registry.storage.gcs`):

| Field | Description | Level |
|---|---|---|
| `bucket` | The name of the Google Cloud Storage bucket in which image data is stored. | Standard |
| `credentials` | The contents of a service account private key file in JSON format that is used for Service Account Authentication. | Advanced |
| `rootdirectory` | The root directory tree in which all registry files are stored. The prefix is applied to all Google Cloud Storage keys, to allow you to segment data in your bucket as necessary. | Advanced |
| `chunksize` | The chunk size used for uploading large blobs. | Advanced |
Alibaba Cloud Object Storage Service (`registry.storage.oss`):

| Field | Description | Level |
|---|---|---|
| `accesskeyid` | The access key ID. | Standard |
| `accesskeysecret` | The access key secret. | Standard |
| `region` | The ID of the OSS region in which you would like to store objects. | Standard |
| `bucket` | The name of the OSS bucket in which to store objects. | Standard |
| `endpoint` | The endpoint domain name for accessing OSS. | Advanced |
| `internal` | Indicates whether to use the internal endpoint, instead of the public endpoint, for OSS access. | Advanced |
| `encrypt` | Indicates whether to encrypt your data on the server side. | Advanced |
| `secure` | Indicates whether to transfer data to the bucket over HTTPS. | Advanced |
| `chunksize` | The default part size for multipart uploads. | Advanced |
| `rootdirectory` | A prefix that is applied to all object keys, to allow you to segment data in your bucket if necessary. | Advanced |
Switch storage backends¶
To facilitate online garbage collection, switching storage backends initializes a new metadata store and erases your existing tags. As a best practice, you should always move, back up, and restore MSR storage backends together with your metadata.

The following example demonstrates how to switch your storage backend to Amazon S3:
helm upgrade msr msrofficial/msr \
--set registry.storage.backend=s3 \
--set registry.storage.s3.accesskey=<access-key> \
--set registry.storage.s3.secretkey=<secret-key> \
--set registry.storage.s3.region=us-east-2 \
--set registry.storage.s3.bucket=testing-msr-kube
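To confirm the applied storage settings afterward, you can review the release values with the standard helm get values command, reusing the msr release name from the example above:

helm get values msr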
Set up high availability¶
Mirantis Secure Registry (MSR) is designed to scale horizontally as your usage increases. You can scale each of the resources that the MSR Helm chart creates by editing its replicaCount setting. Adding replicas both allows MSR to scale to demand and provides high availability.
To ensure that MSR is tolerant to failures, you can add additional replicas to each of the resources MSR deploys. MSR with high availability requires a minimum of three Nodes.
When sizing your MSR installation for high availability, Mirantis recommends that you follow these best practices:
Ensure that multiple Pods created for the same resource are not scheduled on the same Node. To do this, enable a Pod anti-affinity setting in your Kubernetes environment that schedules Pod replicas on different Nodes.
Note
If you are unsure of which Pod affinity settings to use, set
global.podAntiAffinityPreset
to hard, to enable the recommended affinity settings intended for a highly available workload.
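A minimal sketch of applying the preset with Helm, reusing the msr release and msrofficial/msr chart names from the surrounding examples:

helm upgrade msr msrofficial/msr \
  --set global.podAntiAffinityPreset=hard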
Do not scale RethinkDB with an even number of replicas.

Caution

RethinkDB cannot tolerate a failure with an even number of replicas.
To determine how to best scale RethinkDB, refer to the following table.
MSR RethinkDB replicas | Failures tolerated
---|---
1 | 0
3 | 1
5 | 2
7 | 3
Caution
Adding too many replicas to the RethinkDB cluster can lead to performance degradation.
Install an HA MSR deployment¶
High availability (HA) MSR deployments require a Kubernetes environment with:
At least two different nodes to run an MSR deployment
An additional node on which to replicate the RethinkDB cluster, to ensure fault tolerance
To install an HA MSR deployment:
Create an ha.yaml file with the following content:

global:
  podAntiAffinityPreset: hard
rethinkdb:
  cluster:
    replicaCount: 3
  proxy:
    replicaCount: 2
enzi:
  api:
    replicaCount: 2
  worker:
    replicaCount: 2
nginx:
  replicaCount: 2
garant:
  replicaCount: 2
api:
  replicaCount: 2
jobrunner:
  deployments:
    default:
      replicaCount: 2
notarySigner:
  replicaCount: 2
notaryServer:
  replicaCount: 2
registry:
  replicaCount: 2
Note
You can edit the replica counts in the ha.yaml file. However, you must make sure that rethinkdb.cluster.replicaCount is always an odd number. Refer to the RethinkDB scaling chart for details.

Use Helm to apply the YAML file to a new installation:
helm install msr msrofficial/msr -f ha.yaml
Modify replica counts on an existing installation¶
You can use the helm upgrade command to modify replica counts across non-RethinkDB MSR resources. For the RethinkDB resources, refer to Modify replica counts for RethinkDB resources.
To modify replica counts for MSR resources:
In the ha.yaml file, edit the key-value pair that corresponds to the MSR resource whose replica count you wish to modify. For example, for nginx:

nginx:
  replicaCount: <desired-replica-count>

Note

Refer to The ha.yaml file sample for the full configuration example.
To apply the new values, run the helm upgrade command:
helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
Modify replica counts for RethinkDB resources¶
Unlike other MSR resources, modifications to RethinkDB resources require that
you scale the RethinkDB tables. Cluster scaling occurs when you alter the
replicaCount
value in the ha.yaml
file.
Add replicas to RethinkDB¶
Adjust the replicaCount value by creating or editing an existing ha.yaml file:

Note

Refer to The ha.yaml file sample for the full configuration example.

rethinkdb:
  cluster:
    replicaCount: <desired-replica-count>
Run the helm upgrade command to apply the new values:
helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
Monitor the addition of the RethinkDB replicas to ensure that each one has a Running status prior to scaling the RethinkDB tables in the cluster:

kubectl get pods -l="app.kubernetes.io/component=cluster","app.kubernetes.io/name=rethinkdb"
Example output:
NAME                      READY   STATUS    RESTARTS   AGE
msr-rethinkdb-cluster-0   1/1     Running   0          3h19m
msr-rethinkdb-cluster-1   1/1     Running   0          110s
msr-rethinkdb-cluster-2   1/1     Running   0          83s
Scale the RethinkDB tables within the cluster to use the newly added replicas:
kubectl exec -it deploy/msr-api -- msr db scale
Remove replicas from RethinkDB¶
The replica removal procedure offers an example of how to scale down from three servers to one server.
Decommission the RethinkDB servers that you want to remove:
Obtain a current list of RethinkDB servers:
kubectl exec deploy/msr-api -- msr rethinkdb list
Example output:
NAME                      ID                                     TAGS      CACHE (MB)
msr_rethinkdb_cluster_1   fa5d11f0-d47f-4a8f-895f-246271212204   default   100
msr_rethinkdb_cluster_0   b81cca8a-6584-4b9a-9c97-e9f3c86b24fd   default   100
msr_rethinkdb_cluster_2   d6d29977-6ab6-4815-ab24-25519ab3339f   default   100
Determine which servers to decommission.
Run msr rethinkdb decommission on the servers you want to decommission.
Note
The number of replicas will scale down from the highest number to the lowest. Thus, as the scale down in the example is from three servers to one server, the two servers with the highest numbers should be targeted for decommission.
kubectl exec deploy/msr-api -- msr rethinkdb decommission msr_rethinkdb_cluster_2 msr_rethinkdb_cluster_1
Scale down the RethinkDB tables within the cluster:
kubectl exec -it deploy/msr-api -- msr db scale
Adjust the replicaCount value by creating or editing an existing ha.yaml file:

rethinkdb:
  cluster:
    replicaCount: 1
Apply the new replicaCount values:

helm upgrade msr msrofficial/msr --version 1.0.0 -f ha.yaml
Monitor the removal of the cluster pods to ensure their termination:
kubectl get pods -l="app.kubernetes.io/component=cluster","app.kubernetes.io/name=rethinkdb"
Example output:
NAME                      READY   STATUS        RESTARTS   AGE
msr-rethinkdb-cluster-0   1/1     Running       0          3h19m
msr-rethinkdb-cluster-1   1/1     Running       0          1h22m
msr-rethinkdb-cluster-2   0/1     Terminating   0          1h22m
Set up security scanning¶
For MSR to perform security scanning, you must have a running deployment of Mirantis Secure Registry (MSR), administrator access, and an MSR license that includes security scanning.
Before you can set up security scanning, you must verify that your Docker ID
can access and download your MSR license from Docker Hub. If you are using a
license that is associated with an organization account, verify that your
Docker ID is a member of the Owners
team, as only members of that team can
download license files for an organization. If you are using a license
associated with an individual account, no additional action is needed.
Note
To verify that your MSR license includes security scanning:
Log in to the MSR web UI.
In the left-side navigation panel, click System and navigate to the Security tab.
If the Enable Scanning toggle displays, the license includes security scanning.
To learn how to obtain and install your MSR license, refer to Obtain the MSR license.
Enable MSR security scanning¶
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate to the Security tab.
Slide the Enable Scanning toggle to the right.
Set the security scanning mode by selecting either Online or Offline.
Online mode:
Online mode downloads the latest vulnerability database from a Docker server and installs it.
Select whether to include the jobrunner and postgresDB logs.
Click Sync Database now.
Offline mode:
Offline mode requires that you manually perform the following steps.
Download the most recent CVE database.
Be aware that the example command specifies default values. It instructs the container to output the database file to the ~/Downloads directory and configures the volume to map from the local machine into the container. If the destination for the database is in a separate directory, you must define an additional volume. For more information, refer to the table that follows this procedure.

docker run -it --rm \
  -v ${HOME}/Downloads:/data \
  -e CVE_DB_URL_ONLY=false \
  -e CLOBBER_FILE=false \
  -e DATABASE_OUTPUT="/data" \
  -e DATABASE_SCHEMA=3 \
  -e DEBUG=false \
  -e VERSION_ONLY=false \
  mirantis/get-dtr-cve-db:latest
Click Select Database and open the downloaded CVE database file.
Variable | Default | Override detail
---|---|---
CLOBBER_FILE | false | Set to true to overwrite an existing database file of the same name.
CVE_DB_URL_ONLY | false | Set to true to output only the URL of the CVE database, without downloading the database itself.
DATABASE_OUTPUT | /data | Indicates the database download directory inside the container.
DATABASE_SCHEMA | 3 | The schema version of the CVE database to download.
DEBUG | false | Set to true to enable verbose debug output.
VERSION_ONLY | false | Set to true to output only the version of the CVE database, without downloading the database itself.
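For example, to download the database to a directory other than ~/Downloads, map an additional volume and point DATABASE_OUTPUT at it. The /opt/cve host path below is purely illustrative:

docker run -it --rm \
  -v /opt/cve:/cve-data \
  -e DATABASE_OUTPUT="/cve-data" \
  mirantis/get-dtr-cve-db:latest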
Set repository scanning mode¶
Two image scanning modes are available:
- On push
The image is re-scanned both (1) on each docker push to the repository and (2) when a user with write access clicks the Start Scan links or the Scan button.
- Manual
The image is scanned only when a user with write access clicks the Start Scan links or Scan button.
By default, new repositories are set to scan On push, and any repositories that existed before scanning was enabled are set to Manual.
To change the scanning mode for an individual repository:
Verify that you have write or admin access to the repository.

Navigate to the repository, and click the Settings tab.
Scroll down to the Image scanning section.
Select the desired scanning mode.
Update the CVE scanning database¶
MSR security scanning indexes the components in your MSR images and compares them against a CVE database. This database is routinely updated with new vulnerability signatures, and thus MSR must be regularly updated with the latest version to properly scan for all possible vulnerabilities. After updating the database, MSR matches the components in the new CVE reports to the indexed components in your images, and generates an updated report.
Note
MSR users with administrator access can learn when the CVE database was last updated by accessing the Security tab in the MSR System page.
Update CVE database in online mode¶
In online mode, MSR security scanning monitors for updates to the vulnerability database, and downloads them when available.
To ensure that MSR can access the database updates, verify that the host can
access both https://license.mirantis.com
and
https://dss-cve-updates.mirantis.com/
on port 443 using HTTPS.
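One way to verify that connectivity from the host is to request the endpoint headers with curl:

curl -sI https://license.mirantis.com
curl -sI https://dss-cve-updates.mirantis.com/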
MSR checks for new CVE database updates every day at 3:00 AM UTC. If an update is available, it is automatically downloaded and applied, without interrupting any scans in progress. Once the update is completed, the security scanning system checks the indexed components for new vulnerabilities.
To set the update mode to online:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate to the Security tab.
Click Online.
Your choice is saved automatically.
Note
To check immediately for a CVE database update, click Sync Database now.
Update CVE database in offline mode¶
When connection to the update server is not possible, you can update the CVE
database for your MSR instance using a .tar
file that contains the database
updates.
To set the update mode to offline:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate to the Security tab.
Select Offline.
Click Select Database and open the downloaded CVE database file.
MSR installs the new CVE database and begins checking the images that are already indexed for components that match new or updated vulnerabilities.
Caches¶
The time needed to pull and push images is directly influenced by the distance between your users and the geographic location of your MSR deployment. This is because the files need to traverse the physical space and cross multiple networks. You can, however, deploy MSR caches at different geographic locations, to add greater efficiency and shorten user wait time.
With MSR caches you can:
Accelerate image pulls for users in a variety of geographical regions.
Manage user permissions from a central location.
MSR caches are transparent to your users, who continue to log in and pull images using the provided MSR URL address.
When MSR receives a user request, it first authenticates the request and verifies that the user has permission to pull the requested image. Assuming the user has permission, they then receive an image manifest that contains the list of image layers to pull and which directs them to pull the images from a particular cache.
When your users request image layers from the indicated cache, the cache pulls these images from MSR and maintains a copy. This enables the cache to serve the image layers to other users without having to retrieve them again from MSR.
Note
Avoid using caches if your users need to push images faster or if you want to implement region-based RBAC policies. Instead, deploy multiple MSR clusters and apply mirroring policies between them. For further details, refer to Promotion policies and monitoring.
MSR cache prerequisites¶
Before deploying an MSR cache in a datacenter:
Obtain access to the Kubernetes cluster that is running MSR in your data center.
Join the nodes into a cluster.
Dedicate one or more worker nodes for running the MSR cache.
Obtain TLS certificates with which to secure the cache.
Configure a shared storage system, if you want the cache to be highly available.
Configure your firewall rules to ensure that your users have access to the cache through your chosen port.
Note
For illustration purposes only, the MSR cache documentation details caches that are exposed on port 443/TCP using an ingress controller.
MSR cache deployment scenario¶
MSR caches running in different geographic locations can provide your users with greater efficiency and shorten the amount of time required to pull images from MSR.
Consider a scenario in which you are running an MSR instance that is installed in the United States, with a user base that includes developers located in the United States, Asia, and Europe. The US-based developers can pull their images from MSR quickly, however those working in Asia and Europe have to contend with unacceptably long wait times to pull the same images. You can address this issue by deploying MSR caches in Asia and Europe, thus reducing the wait time for developers located in those areas.
The described MSR cache scenario requires three datacenters:
US-based datacenter, running MSR configured for high availability
Asia-based datacenter, running an MSR cache that is configured to fetch images from MSR
Europe-based datacenter, running an MSR cache that is configured to fetch images from MSR
For information on datacenter configuration, refer to MSR cache prerequisites.
Deploy an MSR cache with Kubernetes¶
Note
The MSR with Kubernetes deployment detailed herein assumes that you have a running MSR deployment.
When you establish the MSR cache as a Kubernetes deployment, you ensure that Kubernetes will automatically schedule and restart the service in the event of a problem.
You manage the cache configuration with a Kubernetes Config Map and the TLS certificates with Kubernetes secrets. This setup enables you to securely manage the configurations of the node on which the cache is running.
Prepare the cache deployment¶
Following cache preparation, you will have the following file structure on your workstation:
├── msrcache.yml
├── config.yml
└── certs
    ├── cache.cert.pem
    ├── cache.key.pem
    └── msr.cert.pem
- msrcache.yml
The YAML file that allows you to deploy the cache with a single command.
- config.yml
The cache configuration file.
- certs
The certificates subdirectory.
- cache.cert.pem
The cache public key certificate, including any intermediaries.
- cache.key.pem
The cache private key.
- msr.cert.pem
The MSR CA certificate.
To deploy the MSR cache with a TLS endpoint you must generate a TLS certificate and key from a certificate authority.
The manner in which you expose the MSR cache determines the Subject Alternative Names (SANs) that are required for the certificate. For example:
To deploy the MSR cache with an ingress object you must use an external MSR cache address that resolves to your ingress controller as part of your certificate.
To expose the MSR cache through a Kubernetes Cloud Provider, you must have the external Loadbalancer address as part of your certificate.
To expose the MSR cache through a Node port or a host port you must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.
Create the MSR cache certificates:
Generate a key pair for the cache certificate:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -m PEM
Create a directory called certs.

In the certs directory, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.

Place the certificate authority in the certs directory, including any intermediate certificate authorities of the certificate from your MSR deployment. If your MSR deployment uses cert-manager, use kubectl to source this from the main MSR deployment:

kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}'
Note
If cert-manager is not in use, you must provide your custom nginx.webtls
certificate.
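To capture the CA bundle directly into certs/msr.cert.pem, as expected by the file structure above, you can redirect the output of the earlier command; the msr-nginx-ca-cert secret name assumes cert-manager is in use:

kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}' > certs/msr.cert.pem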
The MSR cache takes its configuration from a configuration file that you mount into the container.
You can edit the following MSR cache configuration file for your environment, entering the relevant external MSR cache, worker node, or external loadbalancer FQDN. Once you have configured the cache it fetches image layers from MSR and maintains a local copy for 24 hours. If a user requests the image layer after that period, the cache fetches it again from MSR.
cat > config.yml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-msrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the Main MSR Deployment
        cas:
          - /certs/msr.cert.pem
EOF
By default, the cache stores image data inside its container. Thus, if something goes wrong with the cache service and Kubernetes deploys a new Pod, cached data is not persisted. The data is not lost, however, as it persists in the primary MSR.
Note
Kubernetes persistent volumes or persistent volume claims must be in use to provide persistent backend storage capabilities for the cache.
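A minimal PersistentVolumeClaim sketch follows. The claim name and storage size are illustrative assumptions; to use it, you would also add a corresponding volume and a volumeMount at /var/lib/registry to the cache Deployment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msr-cache-storage   # illustrative name
  namespace: msr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # illustrative size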
The Kubernetes manifest file you use to deploy the MSR cache is independent from how you choose to expose the MSR cache within your environment.
cat > msrcache.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msr-cache
  namespace: msr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msr-cache
  template:
    metadata:
      labels:
        app: msr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: msr-cache
          image: registry.mirantis.com/msr/msr-content-cache:3.0.11
          command: ["/bin/sh"]
          args:
            - start.sh
            - /config/config.yml
          ports:
            - name: https
              containerPort: 443
          volumeMounts:
            - name: msr-certs
              readOnly: true
              mountPath: /certs/
            - name: msr-cache-config
              readOnly: true
              mountPath: /config
      volumes:
        - name: msr-certs
          secret:
            secretName: msr-certs
        - name: msr-cache-config
          configMap:
            defaultMode: 0666
            name: msr-cache-config
EOF
Create Kubernetes resources¶
To create the Kubernetes resources, you must have the kubectl command line tool configured to communicate with your Kubernetes cluster, through either a Kubernetes configuration file or an MKE client bundle.
Note
The documentation herein assumes that you have the necessary file structure on your workstation.
To create the Kubernetes resources:
Create a Kubernetes namespace to logically separate all of the MSR cache components:
kubectl create namespace msr
Create the Kubernetes Secrets that contain the MSR cache TLS certificates and a Kubernetes ConfigMap that contains the MSR cache configuration file:
kubectl -n msr create secret generic msr-certs \
  --from-file=certs/msr.cert.pem \
  --from-file=certs/cache.cert.pem \
  --from-file=certs/cache.key.pem

kubectl -n msr create configmap msr-cache-config \
  --from-file=config.yml
Create the Kubernetes deployment:
kubectl create -f msrcache.yml
Review the running Pods in your cluster to confirm successful deployment:
kubectl -n msr get pods
Optional. Troubleshoot your deployment:

kubectl -n msr describe pods <pod-name>
kubectl -n msr logs <pod-name>
Expose the MSR Cache¶
To provide external access to your MSR cache you must expose the cache Pods.
Important
Expose your MSR cache through only one external interface.
To ensure TLS certificate validity, you must expose the cache through the same interface for which you previously created a certificate.
Kubernetes supports several methods for exposing a service, based on your infrastructure and your environment. Detail is offered below for the NodePort method and the Ingress Controllers method.
For the NodePort method, the worker node FQDN must already be included as a SAN in your TLS certificate; you then access the MSR cache through a port exposed on the worker node FQDN.

cat > msrcacheservice.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: msr-cache
  namespace: msr
spec:
  type: NodePort
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: msr-cache
EOF

kubectl create -f msrcacheservice.yaml
Run the following command to determine the port on which you have exposed the MSR cache:
kubectl -n msr get services
Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using both the external address of a worker node and the NodePort:

curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog

Example output:

{"repositories":[]}
In the ingress controller exposure scheme, you expose the MSR cache through an ingress object.
Create a DNS rule in your environment to resolve an MSR cache external FQDN address to the address of your ingress controller. The same MSR cache external FQDN must already be specified within the MSR cache certificate.

cat > msrcacheingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: msr-cache
  namespace: msr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      http:
        paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: msr-cache
                port:
                  number: 443
EOF

kubectl create -f msrcacheingress.yaml
Test the external reachability of your MSR cache. To do this, use curl to hit the API endpoint, using the external MSR cache FQDN that you previously defined in the ingress object:

curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog

Example output:

{"repositories":[]}
See also
The official Kubernetes documentation on Publishing services - service types.
Configure caches for high availability¶
To ensure that your MSR cache is always available to users and is highly performant, configure it for high availability.
You will require the following to deploy MSR caches with high availability:
Multiple nodes, one for each cache replica
A load balancer
A shared storage system that has read-after-write consistency
For high availability, Mirantis recommends that you configure the replicas to store data using a shared storage system. The MSR cache deployment procedure, though, is the same regardless of whether you are deploying a single replica or multiple replicas.
When using a shared storage system, once an image layer is cached, any replica is able to serve it to users without having to fetch a new copy from MSR.
MSR caches support the following storage systems:
Alibaba Cloud Object Storage Service
Amazon S3
Azure Blob Storage
Google Cloud Storage
NFS
OpenStack Swift
Note
If you are using NFS as a shared storage system, ensure read-after-write consistency by verifying that the shared directory is configured with:
/dtr-cache *(rw,root_squash,no_wdelay)
In addition, mount the NFS directory on each node where you will deploy an MSR cache replica.
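As an illustration, mounting such an export on a node might look as follows; the server address and local mount point are hypothetical:

sudo mount -t nfs <nfs-server>:/dtr-cache /mnt/dtr-cache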
To configure caches for high availability:
Use SSH to log in to a manager node of the cluster on which you want to deploy the MSR cache. If you are using MKE to manage that cluster, you can also use a client bundle to configure your Docker CLI client to connect to the cluster.
Label each node that is going to run the cache replica:
docker node update --label-add dtr.cache=true <node-hostname>
Create the cache configuration files by following the instructions for deploying a single cache replica. Be sure to adapt the storage object, using the configuration options for the shared storage of your choice (see the sketch after this procedure).

Deploy a load balancer of your choice to balance requests across your set of replicas.
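With regard to adapting the storage object, a sketch for Amazon S3 follows, based on the upstream Docker Registry S3 driver options; the bucket and credential values are placeholders:

storage:
  delete:
    enabled: true
  s3:
    region: us-east-2        # example region
    bucket: <bucket-name>
    accesskey: <access-key>
    secretkey: <secret-key>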
MSR cache configuration¶
MSR caches are based on Docker Registry, and use the same configuration file format. However, the MSR cache extends the Docker Registry configuration file format with a new middleware called downstream, which has three configuration options: blobttl, upstreams, and cas:
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
        cas:
          - <Absolute path to next-hop upstream registry or content cache CA certificate in the container's filesystem>
The following table offers detail specific to MSR caches for each parameter:
The following table offers detail specific to MSR caches for each parameter:

Parameter | Required | Description
---|---|---
blobttl | no | The TTL (time to live) for blobs in the cache, expressed as a positive integer with a suffix denoting the unit of time. Valid suffixes include ns, us, ms, s, m, and h; if the suffix is omitted, the system interprets the value as nanoseconds. If blobttl is configured, storage.delete.enabled must be set to true.
cas | no | An optional list of absolute paths to PEM-encoded CA certificates of upstream registries or content caches.
upstreams | yes | A list of externally reachable addresses for upstream registries or content caches. If you specify more than one host, the cache pulls from the registries in round-robin fashion.
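For example, to rotate cache fills across two upstream registries in round-robin fashion, list both addresses under upstreams; the host names here are illustrative:

middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://msr-0.example.com
          - https://msr-1.example.com
        cas:
          - /certs/msr.cert.pem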
Garbage collection¶
Mirantis Secure Registry (MSR) supports garbage collection, the automatic cleanup of unused image layers. You can configure garbage collection to occur at regularly scheduled times, as well as set a specific duration for the process.
Garbage collection first identifies and marks unused image layers, then subsequently deletes the layers that have been marked.
Schedule garbage collection¶
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System and select the Garbage collection tab.
Set the duration for the garbage collection job:
Until done
For <number> minutes
Never
Set the garbage collection schedule:
Custom cron schedule (
<hour, date, month, day>
)Daily at midnight UTC
Every Saturday at 1AM UTC
Every Sunday at 1AM UTC
Do not repeat
Click either Save & Start or Save. Save & Start runs the garbage collection job immediately and Save runs the job at the next scheduled time.
At the scheduled start time, verify that garbage collection has begun by navigating to the Job Logs tab.
How garbage collection works¶
In conducting garbage collection, MSR performs the following actions in sequence:
Establishes a cutoff time.
Marks each referenced manifest file with a timestamp. When manifest files are pushed to MSR, they are also marked with a timestamp.
Sweeps each manifest file that does not have a timestamp after the cutoff time.
Deletes the file if it is never referenced, meaning that no image tag uses it.
Repeats the process for blob links and blob descriptors.
Each image stored in MSR consists of the following files:
The image filesystem, which consists of a list of unioned image layers.
A configuration file, which contains the architecture of the image along with other metadata.
A manifest file, which contains a list of all the image layers and the configuration file for the image.
MSR tracks these files in its RethinkDB metadata store in a content-addressable manner, whereby each file corresponds to a cryptographic hash of the file content. Thus, if two image tags hold exactly the same content, MSR stores that content only once, even though the tag names differ. For example, if wordpress:4.8 and wordpress:latest have the same content, MSR will only store that content once. If you delete one of these tags, the other will remain intact.
As a result, when you delete an image tag, MSR cannot delete the underlying files as it is possible that other tags also use the same underlying files.
Create a new repository when pushing an image¶
By default, MSR only allows users to push images to repositories that already exist, and for which the user has write privileges. Alternatively, you can configure MSR to create a new private repository when an image is pushed.
To create a new repository when pushing an image:
Log in to the MSR web UI.
In the left-side navigation panel, click Settings and scroll down to Repositories.
Slide the Create repository on push toggle to the right.
Alternatively, you can enable the setting through the MSR API:

curl --user <admin-user>:<password> \
  --request POST "<msr-url>/api/v0/meta/settings" \
  --header "accept: application/json" \
  --header "content-type: application/json" \
  --data "{ \"createRepositoryOnPush\": true}"
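To review the stored settings afterward, a GET request against the same endpoint should return the current configuration; that the endpoint accepts GET in this way is an assumption, so verify it against your MSR API reference:

curl --user <admin-user>:<password> \
  --request GET "<msr-url>/api/v0/meta/settings" \
  --header "accept: application/json"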
Pushing an image to a non-existing repository will create a new repository using the following naming convention:
Non-admin users:
<user-name>/<repository>
Admin users:
<organization>/<repository>
Use a web proxy¶
Mirantis Secure Registry (MSR) makes outgoing connections to check for new versions, automatically renew its license, and update its vulnerability database. If MSR cannot access the Internet, you must manually apply any updates.
One way to keep your environment secure while still allowing MSR access to the Internet is to deploy a web proxy. If you have an HTTP or HTTPS proxy, you can configure MSR to use it.
To configure MSR for web proxy usage:
In values.yaml, insert the following snippet to add the HTTP_PROXY and HTTPS_PROXY environment variables to all containers in your MSR deployment:

global:
  extraEnv:
    HTTP_PROXY: "<domain>:<port>"
    HTTPS_PROXY: "username:password@<domain>:<port>"
Apply the newly inserted values:
helm upgrade msr msrofficial/msr --version 1.0.0 -f values.yaml
Verify the MSR configuration by reviewing the Pod resources that the MSR Helm chart deploys for the environment variables:
kubectl get deploy/msr-registry -o jsonpath='{@.spec.template.spec.containers[].env}'
Example output:
[{"name":"HTTP_PROXY","value":"example.com:444"}]%
Manage applications¶
In addition to storing individual and multi-architecture container images and plugins, MSR supports the storage of applications as their own distinguishable type.
Applications include the following two tags:

Image | Tag | Type | Under the hood
---|---|---|---
Invocation | <app-tag>-invoc | Container image represented by OS and architecture. For example, linux amd64. | Uses Mirantis Container Runtime. The Docker daemon is responsible for building and pushing the image. Includes scan results for the invocation image.
Application with bundled components | <app-tag> | Application | Uses the application client to build and push the image. Includes scan results for the bundled components. Docker App is an experimental Docker CLI feature.
Use docker app push to push your applications to MSR. For more information, refer to Docker App in the official Docker documentation.
View application vulnerabilities¶
Log in to the MSR web UI.
In the left-side navigation panel, click Repositories.
Select the desired repository and click the Tags tab.
Click View details on the <app-tag> or <app-tag>-invoc row.
Limitations¶
You cannot sign an application as the Notary signer cannot sign Open Container Initiative (OCI) indices.
Scanning-based policies do not take effect until after all images bundled in the application have been scanned.
Docker Content Trust (DCT) does not work for applications and multi-architecture images, which have the same underlying structure.
Parity with existing repository and image features¶
The following repository and image management events also apply to applications:
Manage images¶
Create a repository¶
MSR requires that you create the image repository before pushing any images to the registry.
To create an image repository:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click New repository.
Select the required namespace and enter the name for your repository using only lowercase letters, numbers, underscores, and hyphens.
Select whether your repository is public or private:
Public repositories are visible to all users, but can only be modified by those with write permissions.
Private repositories are visible only to users with repository permissions.
Optional. Click Show advanced settings:
Select On to make tags immutable, and thus unable to be overwritten.
Select On push to configure images to be scanned automatically when they are pushed to MSR. You will also be able to scan them manually.
Click Create.
Note
To enable tag pruning, refer to Set a tag limit. This feature requires that tag immutability is turned off at the repository level.
Image names in MSR¶
MSR image names must have the following characteristics:
The organization and repository names both must have fewer than 56 characters.
The complete image name, which includes the domain, organization, and repository name, must not exceed 255 characters.
When you tag your images for MSR, they must take the following form:
<msr-domain-name>/<user-or-org>/<repository-name>
.For example,
https://127.0.0.1/admin/nginx
.
Multi-architecture images¶
While it is possible to enable the just-in-time creation of multi-architecture image repositories when creating a repository using the API, Mirantis does not recommend using this option, as it will cause Docker Content Trust to fail along with other issues. To manage Docker image manifests and manifest lists, instead use the experimental command docker manifest.
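For example, with the experimental CLI feature enabled, you can review a tag's manifest list as follows; the image reference is a placeholder:

docker manifest inspect <msr-domain-name>/<user-or-org>/<repository-name>:<tag>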
Review repository information¶
The MSR web UI has an Info page for each repository that includes the following sections:
A
README
file, which is editable by admin users.The docker pull command for pulling the images contained in the given repository. To learn more about pulling images, refer to Pull and push images.
The permissions associated with the user who is currently logged in.
To view the Info section:
Log in to the MSR web UI.
In the left-side navigation panel, click Repositories.
Select the required repository by clicking the repository name rather than the namespace name that precedes the /.
The Info tab displays by default.
To view the repository events that your permissions level has access to, hover over the question mark next to the permissions level that displays under Your permission.
Note
Your permissions list may include repository events that are not displayed in the Activity tab. Also, it is not an exhaustive list of the event types that are displayed in your activity stream. To learn more about repository events, refer to Audit repository events.
Pull and push images¶
Just as with Docker Hub, interactions with MSR consist of the following:
docker login <msr-url> authenticates the user on MSR
docker pull <image>:<tag> pulls an image from MSR
docker push <image>:<tag> pushes an image to MSR
Pull an image¶
Note
It is only necessary to authenticate using docker login before pulling a private image.
If you need to pull a private image, log in to MSR:
docker login <registry-host-name>
Pull the required image:
docker pull <registry-host-name>/<namespace>/<repository>:<tag>
Push an image¶
Before you can push an image to MSR, you must create a repository and tag your image.
Create a repository for the required image.
Tag the image using the host name, namespace, repository name, and tag:
docker tag <image-name> <registry-host-name>/<namespace>/<repository>:<tag>
Log in to MSR:
docker login <registry-host-name>
Push the image to MSR:
docker push <registry-host-name>/<namespace>/<repository>:<tag>
Verify that the image successfully pushed:
Log in to the MSR web UI.
In the left-side navigation panel, click Repositories.
Select the relevant repository.
Navigate to the Tags tab.
Verify that the required tag is listed on the page.
Windows image limitations¶
The base layers of the Microsoft Windows base images have redistribution restrictions. When you push a Windows image to MSR, Docker only pushes the image manifest and the layers that are above the Windows base layers. As a result:
When a user pulls a Windows image from MSR, the Windows base layers are automatically fetched from Microsoft.
Because MSR does not have access to the image base layers, it cannot scan those image layers for vulnerabilities. The Windows base layers are, however, scanned by Docker Hub.
On air-gapped or similarly limited systems, you can configure Docker to push
Windows base layers to MSR by adding the following line to
C:\ProgramData\docker\config\daemon.json
:
"allow-nondistributable-artifacts": ["<msr-host-name>:<msr-port>"]
Caution
For production environments, Mirantis does not recommend configuring Docker to push Windows base layers to MSR.
Delete images¶
Note
If your MSR instance uses image signing, you will need to remove any trust data on the image before you can delete it. For more information, refer to Delete signed images.
To delete an image:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the relevant repository and navigate to the Tags tab.
Select the check box next to the tags that you want to delete.
Click Delete.
Alternatively, you can delete every tag for a particular image by deleting the relevant repository.
To delete a repository:
Click the required repository and navigate to the Settings tab.
Scroll down to Delete repository and click Delete.
Scan images for vulnerabilities¶
Mirantis Secure Registry (MSR) has the ability to scan images for security vulnerabilities contained in the US National Vulnerability Database. Security scan results are reported for each image tag contained in a repository.
Security scanning is available as an add-on to MSR. If security scan results are not available on your repositories, your organization may not have purchased the security scanning feature or it may be disabled. Administrator permissions are required to enable security scanning on your MSR instance.
Note
Only users with write access to a repository can manually start a scan. Users with read-only access can, however, view the scan results.
Security scan process¶
Scans run on demand when you initiate them in the MSR web UI or automatically when you push an image to the registry.
The scanner first performs a binary scan on each layer of the image, identifies the software components in each layer, and indexes the SHA of each component in a bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, so vulnerable components are discovered even if they are statically linked or use a different name.
The scan then compares the SHA of each component against the US National Vulnerability Database that is installed on your MSR instance. When this database is updated, MSR verifies whether the indexed components have newly discovered vulnerabilities.
MSR has the ability to scan both Linux and Windows images. However, because Docker defaults to not pushing foreign image layers for Windows images, MSR does not scan those layers. If you want MSR to scan your Windows images, configure Docker to always push image layers, and it will scan the non-foreign layers.
Scan images¶
Note
Only users with write access to a repository can manually start a scan. Users with read-only access can, however, view the scan results.
Security scan on push¶
By default, a security scan runs automatically when you push an image to the registry.
To view the results of a security scan:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the required repository and select the Tags tab.
Click View details on the required tag.
Manual scanning¶
You can manually start a scan for images in repositories that you have
write
access to.
To manually scan an image:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the required repository and select the Tags tab.
Click Start a scan on the required image tag.
To review the scan results, click View details.
Change the scanning mode¶
You can change the scanning mode for each individual repository at any time. You might want to disable scanning in either of the following scenarios:
You are pushing an image repeatedly during troubleshooting and do not want to waste resources on rescanning.
A repository contains legacy code that is not used or updated frequently.
Note
To change an individual repository scanning mode, you must have write
or
administrator
access to the repository.
To change the scanning mode:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the required repository and select the Settings tab.
Scroll down to Image scanning and under Scan on push, select either On push or Manual.
Review security scan results¶
Once MSR has run a security scan for an image, you can view the results.
Scan summaries¶
A summary of the results displays next to each scanned tag on the repository Tags tab, and presents in one of the following ways:
If the scan did not find any vulnerabilities, the word Clean displays in green.
If the scan found vulnerabilities, the severity level, Critical, Major, or Minor, displays in red or orange with the number of vulnerabilities. If the scan could not detect the version of a component, the vulnerabilities are reported for all versions of the component.
Detailed report¶
To view the full scanning report, click View details for the required image tag.
The top of the resulting page includes metadata about the image including the SHA, image size, last push date, user who initiated the push, security scan summary, and the security scan progress.
The scan results for each image include two different modes so you can quickly view details about the image, its components, and any vulnerabilities found:
The Layers view lists the layers of the image in the order that they are built by the Dockerfile.
This view can help you identify which command in the build introduced the vulnerabilities, and which components are associated with that command. Click a layer to see a summary of its components. You can then click on a component to switch to the Component view and obtain more details about the specific item.
Note
The layers view can be long, so be sure to scroll down if you do not immediately see the reported vulnerabilities.
The Components view lists the individual component libraries indexed by the scanning system in order of severity and number of vulnerabilities found, with the most vulnerable library listed first.
Click an individual component to view details on the vulnerability it introduces, including a short summary and a link to the official CVE database report. A single component can have multiple vulnerabilities, and the scan report provides details on each one. In addition, the component details include the license type used by the component, the file path to the component in the image, and the number of layers that contain the component.
Note
The CVE count presented in the scan summary of an image with multiple layers may differ from the count obtained through summation of the CVEs for each individual image component. This is because the scan summary performs a summation of the CVEs in every layer of the image, and a component may be present in more than one layer of an image.
What to do next¶
If you find that an image in your registry contains vulnerable components, you can use the linked CVE scan information in each scan report to evaluate the vulnerability and decide what to do.
If you discover vulnerable components, you should verify whether there is an updated version available where the security vulnerability has been addressed. If necessary, you can contact the component maintainers to ensure that the vulnerability is being addressed in a future version or a patch update.
If the vulnerability is in a base layer, such as an operating system, you might not be able to correct the issue in the image. In this case, you can switch to a different version of the base layer, or you can find a less vulnerable equivalent.
You can address vulnerabilities in your repositories by updating the images to use updated and corrected versions of vulnerable components or by using a different component that offers the same functionality. When you have updated the source code, run a build to create a new image, tag the image, and push the updated image to your MSR instance. You can then re-scan the image to confirm that you have addressed the vulnerabilities.
Override a vulnerability¶
MSR security scanning sometimes reports image vulnerabilities that you know have already been fixed. In such cases, it is possible to hide the vulnerability warning.
To override a vulnerability:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
To review the vulnerabilities associated with each component in the image, click the Components tab.
Select the component with the vulnerability you want to ignore, navigate to the vulnerability, and click Hide.
Once dismissed, the vulnerability is hidden system-wide and will no longer be reported as a vulnerability on affected images with the same layer IDs or digests. In addition, MSR will not re-evaluate the promotion policies that have been set up for the repository.
To re-evaluate the promotion policy for the affected image:
After hiding a particular vulnerability, you can re-evaluate the promotion policy for the affected image.
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
Click Promote.
Sign images with Docker Content Trust¶
Docker Content Trust (DCT) allows you to sign image tags, thus giving consumers a way to verify the integrity of your images. Users interact with DCT using a combination of docker trust and notary commands.
Configure image signing¶
To configure image signing, you must enable Docker Content Trust (DCT) and initiate a repository for use with DCT.
Enable DCT¶
While MSR supports DCT use by default, you must opt in to use it on the Docker client side by setting the following environment variable:
export DOCKER_CONTENT_TRUST=1
Important
Mirantis recommends that you add this environment variable to your shell login configuration, so that it is always active.
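For example, assuming a Bash login shell, you can append the variable to your profile:

echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc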
Trust MSR CA certificate¶
If your MSR instance uses a certificate that is issued by a well-known, public certificate authority (CA), then skip this section and proceed to Configure repository for signing.
If the MSR certificate authority (CA) is self-signed, you must configure the machine that runs the docker trust commands to trust the CA, as detailed in this section.
Caution
It is not possible to use DCT with a remote MSR that is set up as an insecure registry in the Docker daemon configuration. This is because DCT operations are not processed by the Docker daemon, but are instead sent directly to the back-end Notary components that handle signing. It is not possible to configure the back-end components to allow insecure operation.
To configure your machine to trust a self-signed CA:
Create a certificate directory for the MSR host in the Docker configuration directory:
export MSR=<registry-hostname>
mkdir -p ~/.docker/certs.d/${MSR}
Download the MSR CA certificate into the newly created directory:
curl -ks https://${MSR}/ca > ~/.docker/certs.d/${MSR}/ca.crt
Restart the Docker daemon.
Verify that you do not receive certificate errors when accessing MSR:
docker login ${MSR}
Create a symlink between the
certs.d
andtls
directories. This link allows the Docker client to share the same CA trust as established for the Docker daemon in the preceding steps.ln -s certs.d ~/.docker/tls
Configure repository for signing¶
Initialize a repository for use with DCT by pushing an image to the relevant repository. You will be prompted for both a new root key password and a new repository key password, as displayed in the example output.
docker push <registry-host-name>/<namespace>/<repository>:<tag>
Example output:
The push refers to repository [<registry-host-name>/<namespace>/<repository>]
b2d5eeeaba3a: Layer already exists
latest: digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 size: 528
Signing and pushing trust metadata
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
choose a long, complex passphrase and be careful to keep the password and the
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new root key with ID 8128255: <root-password>
Repeat passphrase for new root key with ID 8128255: <root-password>
Enter passphrase for new repository key with ID 493e995: <repository-password>
Repeat passphrase for new repository key with ID 493e995: <repository-password>
Finished initializing "<registry-host-name>/<namespace>/<repository>"
Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>
The root and repository keys are kept only locally in your content trust store.
Sign an image¶
Once you have initiated a repository for use with Docker Content Trust (DCT), you can now sign images.
To sign an image:
Push the required image to MSR. You will be prompted for the repository key password, as displayed in the example output.
docker push <registry-host-name>/<namespace>/<repository>:<tag>
Example output:
The push refers to repository [<registry-host-name>/<namespace>/<repository>]
b2d5eeeaba3a: Layer already exists
latest: digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 size: 528
Signing and pushing trust metadata
Enter passphrase for repository key with ID c549efc: <repository-password>
Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>
Inspect the repository trust metadata to verify that the image is signed by the user:
docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
Example output:
Signatures for <registry-host-name>/<namespace>/<repository>

SIGNED TAG   DIGEST                                                             SIGNERS
<tag>        def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748   Repo Admin

Administrative keys for <registry-host-name>/<namespace>/<repository>

  Repository Key:   e0d15a24b7...540b4a2506b
  Root Key:         b74854cb27...a72fbdd7b9a
Add an additional signer¶
You have the option to sign an image using multiple user keys. This topic describes how to add a regular user as a signer in addition to the repository admin.
Note
Signers in Docker Content Trust (DCT) do not correspond with users in MSR, thus you can add a signer using a user name that does not exist in MSR.
To add a signer:
On the user machine, obtain a signing key pair:
docker trust key generate <user-name>
Example output:
Generating key for <user-name>...
Enter passphrase for new <user-name> key with ID c549efc: <user-password>
Repeat passphrase for new <user-name> key with ID c549efc: <user-password>
Successfully generated and loaded private key. Corresponding public key available:
/path/to/public/key/<user-name>.pub
The private key is password protected and kept in the local trust store, where it remains throughout all signing operations. The public key is stored in the
.pub
file, which you must provide to the repository administrator to add the user as a signer.Provide the user public key to the repository admin.
On the admin machine, add the user as a signer to the repository. You will be prompted for the repository key password that you created in Configure repository for signing, as displayed in the example output.
docker trust signer add --key /path/to/public/key/<user-name>.pub <user-name> <registry-host-name>/<namespace>/<repository>
Example output:
Adding signer "<user-name>" to <registry-host-name>/<namespace>/<repository>...
Enter passphrase for repository key with ID 493e995: <repository-password>
Successfully added signer: <user-name> to <registry-host-name>/<namespace>/<repository>
Inspect the repository trust metadata to verify that the user is correctly added:
docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
Example output:
Signatures for <registry-host-name>/<namespace>/<repository>

SIGNED TAG   DIGEST                                                             SIGNERS
<tag>        def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748   Repo Admin

List of signers and their keys for <registry-host-name>/<namespace>/<repository>

SIGNER        KEYS
<user-name>   c9f9039a520a

Administrative keys for <registry-host-name>/<namespace>/<repository>

  Repository Key:   e0d15a24b7...540b4a2506b
  Root Key:         b74854cb27...a72fbdd7b9a
On the user machine, sign the image as the regular user. You will be prompted for the user key password, as displayed in the example output.
docker trust sign <registry-host-name>/<namespace>/<repository>:<tag>
Example output:
Signing and pushing trust metadata for <registry-host-name>/<namespace>/<repository>:<tag>
Enter passphrase for <user-name> key with ID 927f303: <user-password>
Enter passphrase for <user-name> key with ID 5ac7d9a: <user-password>
Successfully signed <registry-host-name>/<namespace>/<repository>:<tag>
Inspect the repository trust metadata to verify that the image is signed by the user:
docker trust inspect --pretty <registry-host-name>/<namespace>/<repository>
Example output:
Signatures for <registry-host-name>/<namespace>/<repository>:<tag>

SIGNED TAG   DIGEST                    SIGNERS
<tag>        5b49c8e2c89...5bb69e2033   <user-name>

List of signers and their keys for <registry-host-name>/<namespace>/<repository>:<tag>

SIGNER        KEYS
<user-name>   927f30366699

Administrative keys for <registry-host-name>/<namespace>/<repository>:<tag>

  Repository Key: e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b
  Root Key: b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a
Note
Once an additional signer signs an image, the repository admin is no longer listed under SIGNERS.
Delete trust data¶
Repositories that contain trust metadata cannot be deleted until the trust metadata is removed. Doing so requires use of the Notary CLI.
To delete trust metadata from a repository:
Run the following command to delete the trust metadata. You will be prompted for your user name and password, as displayed in the example output.
notary delete <registry-host-name>/<namespace>/<repository> --remote
Example output:
Deleting trust data for <registry-host-name>/<namespace>/<repository>
Enter username: <user-name>
Enter password: <password>
Successfully deleted local and remote trust data for <registry-host-name>/<namespace>/<repository>
Note
If you do not include the --remote
flag, Notary deletes local cached
content but does not delete data from the Notary server.
Delete signed images¶
To delete a signed image, you must first remove trust data for all of the roles that have signed the image. After you remove the trust data, proceed to deleting the image, as described in Delete images.
To identify the roles that signed an image:
Determine the roles that are trusted to sign the image:
List the trusted roles:
notary delegation list <registry-host-name>/<namespace>/<repository>
Example output:
ROLE               PATHS            KEY IDS                     THRESHOLD
----               -----            -------                     ---------
targets/releases   "" <all paths>   c3470c45cefde5...2ea9bc8    1
targets/qa         "" <all paths>   c3470c45cefde5...2ea9bc8    1
In this example, the repository owner delegated trust to the targets/releases and targets/qa roles.

For each role listed in the previous step, identify whether it signed the image:
notary list <registry-host-name>/<namespace>/<repository> --roles <role-name>
To remove trust data for a role:
Note
Only users with private keys that have the required roles can perform this operation.
For each role that signed the image, remove the trust data for that role:
notary remove <registry-host-name>/<namespace>/<repository> <tag> \
--roles <role-name> --publish
The image will display as unsigned once the trust data has been removed for all of the roles that signed the image.
Using Docker Content Trust with a Remote MKE Cluster¶
For more advanced deployments, you may want to share one Mirantis Secure Registry across multiple Mirantis Kubernetes Engines. However, customers who want to adopt this model alongside the Only Run Signed Images MKE feature run into problems, as each MKE operates an independent set of users.
Docker Content Trust (DCT) gets around this problem, since users from a remote MKE are able to sign images in the central MSR and still apply runtime enforcement.
In the following example, we will connect the MSR managed by MKE cluster 1 with a remote MKE cluster, which we call MKE cluster 2; sign the image with a user from MKE cluster 2; and provide runtime enforcement within MKE cluster 2. This process can be repeated, integrating MSR with multiple remote MKE clusters, signing the image with users from each environment, and then providing runtime enforcement in each remote MKE cluster separately.
Note
Before attempting this guide, familiarize yourself with Docker Content Trust and Only Run Signed Images on a single MKE. Many of the concepts within this guide may be new without that background.
Prerequisites¶
Cluster 1, running MKE 3.5.x or later, with an MSR 2.9.x or later deployed within the cluster.
Cluster 2, running MKE 3.5.x or later, with no MSR node.
Nodes on cluster 2 need to trust the Certificate Authority that signed the MSR TLS certificate. You can test this by logging in to a cluster 2 virtual machine and running curl https://msr.example.com. A minimal sketch of one way to set up this trust follows this list.

The MSR TLS certificate needs to be properly configured, ensuring that the Loadbalancer/Public Address field has been set, with that address included within the certificate.
A machine with MCR 20.10.x or later installed, as this contains the relevant docker trust commands.
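The following is a minimal sketch of establishing that trust on a cluster 2 node; it assumes a Debian- or Ubuntu-style trust store and the /ca endpoint described later in this guide, so adjust the paths for your distribution:

# Fetch the CA that signed the MSR TLS certificate (hypothetical MSR URL)
curl -ks https://msr.example.com/ca -o msr-ca.crt

# Install it into the system trust store (Debian/Ubuntu layout assumed)
sudo cp msr-ca.crt /usr/local/share/ca-certificates/msr-ca.crt
sudo update-ca-certificates

# Verify: this request should now succeed without certificate errors
curl https://msr.example.com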
Registering MSR with a remote Mirantis Kubernetes Engine¶
As there is no registry running within cluster 2, MKE will not, by default, know where to check for trust data. Therefore, the first thing we need to do is register MSR with the remote MKE in cluster 2. In a standard MSR installation, this registration happens automatically against the local MKE, which here is cluster 1.
Note
The registration process allows the remote MKE to get signature data from MSR; however, it does not provide Single Sign-On (SSO). Users on cluster 2 will not be synced with cluster 1’s MKE or MSR. Therefore, when pulling images, registry authentication will still need to be passed as part of the service definition if the repository is private. See the Kubernetes example.
To add a new registry, retrieve the Certificate Authority (CA) used to sign the MSR TLS certificate through the /ca endpoint of your MSR URL:
$ curl -ks https://msr.example.com/ca > dtr.crt
Next, convert the MSR certificate into a JSON configuration file for registration within the MKE for cluster 2. A template of the dtr-bundle.json file appears below. Replace the host address with your MSR URL, and enter the contents of the MSR CA certificate between the newline escape sequences \n and \n.
Note
JSON Formatting
Ensure there are no line breaks between each line of the MSR CA certificate within the JSON file. Use your favorite JSON formatter for validation.
$ cat dtr-bundle.json
{
"hostAddress": "msr.example.com",
"caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
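Rather than hand-editing the certificate into the template, you can generate the file with jq, which JSON-encodes the certificate and converts its real newlines into \n escapes automatically. This is a sketch assuming jq 1.6 or later (for --rawfile) and the dtr.crt file retrieved above:

jq -n --arg host "msr.example.com" --rawfile ca dtr.crt \
  '{hostAddress: $host, caBundle: $ca}' > dtr-bundle.json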
Now upload the configuration file to cluster 2’s MKE through the /api/config/trustedregistry_ MKE API endpoint. To authenticate against the API of cluster 2’s MKE, we have downloaded an MKE client bundle, extracted it in the current directory, and will reference the keys for authentication.
$ curl --cacert ca.pem --cert cert.pem --key key.pem \
-X POST \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d @dtr-bundle.json \
https://cluster2.example.com/api/config/trustedregistry_
Navigate to the MKE web interface to verify that the JSON file was imported successfully, as the MKE endpoint will not output anything. Select Admin > Admin Settings > Mirantis Secure Registry. If the registry has been added successfully, you should see the MSR listed.
Additionally, you can check the full MKE configuration file within cluster 2’s MKE. Once downloaded, the ucp-config.toml file should now contain a section called [registries].
$ curl --cacert ca.pem --cert cert.pem --key key.pem https://cluster2.example.com/api/ucp/config-toml > ucp-config.toml
If the new registry is not shown in the list, check the ucp-controller container logs on cluster 2.
Signing an image in MSR¶
We will now sign an image and push it to MSR. To sign images, we need a user’s public/private key pair from cluster 2. The pair can be found in a client bundle, with key.pem being the private key and cert.pem being the public key on an X.509 certificate.

First, load the private key into the local Docker trust store (~/.docker/trust). The name used here is purely metadata, to help keep track of which keys you have imported.
docker trust key load --name cluster2admin key.pem
Loading key from "key.pem"...
Enter passphrase for new cluster2admin key with ID a453196:
Repeat passphrase for new cluster2admin key with ID a453196:
Successfully imported key from key.pem
Next, initialize the repository and add the public key of cluster 2’s user as a signer. You will be asked for a number of passphrases to protect the keys. Keep note of these passphrases, and see the [Docker Content Trust documentation](/engine/security/trust/trust_delegation/#managing-delegations-in-a-notary-server) to learn more about managing keys.
docker trust signer add --key cert.pem cluster2admin msr.example.com/admin/trustdemo
Adding signer "cluster2admin" to msr.example.com/admin/trustdemo...
Initializing signed repository for msr.example.com/admin/trustdemo...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID dd4460f:
Repeat passphrase for new repository key with ID dd4460f:
Successfully initialized "msr.example.com/admin/trustdemo"
Successfully added signer: cluster2admin to msr.example.com/admin/trustdemo
Finally, sign the image tag. This pushes the image up to MSR, as well as signs the tag with the user from cluster 2’s keys.
docker trust sign msr.example.com/admin/trustdemo:1
Signing and pushing trust data for local image msr.example.com/admin/trustdemo:1, may overwrite remote trust data
The push refers to repository [msr.example.com/admin/trustdemo]
27c0b07c1b33: Layer already exists
aa84c03b5202: Layer already exists
5f6acae4a5eb: Layer already exists
df64d3292fd6: Layer already exists
1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153
Signing and pushing trust metadata
Enter passphrase for cluster2admin key with ID a453196:
Successfully signed msr.example.com/admin/trustdemo:1
Within the MSR web interface, you should now be able to see your newly pushed tag with the Signed text next to the size.
You could sign this image multiple times if required, whether because multiple teams from the same cluster want to sign the image, or because you are integrating MSR with more remote MKEs so that users from clusters 1, 2, 3, or more can all sign the same image.
Troubleshooting¶
If the image is stored in a private repository within MSR, you need to pass credentials to the Orchestrator as there is no SSO between cluster 2 and MSR. See the relevant Kubernetes documentation for more details.
image or trust data does not exist for msr.example.com/admin/trustdemo:1
This means something went wrong when initializing the repository or signing the image, as the tag contains no signing data.
Error response from daemon: image did not meet required signing policy
msr.example.com/admin/trustdemo:1: image did not meet required signing policy
This means that the image was signed correctly, however the user who signed the image does not meet the signing policy in cluster 2. This could be because you signed the image with the wrong user keys.
Error response from daemon: msr.example.com must be a registered trusted registry. See 'docker run --help'.
This means you have not registered MSR to work with a remote MKE instance yet, as outlined in Registering MSR with a remote Mirantis Kubernetes Engine.
Manage jobs¶
Job queue¶
Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs. Jobs are added to a cluster-wide job queue, and then consumed and executed by a job runner within MSR.
All MSR replicas have access to the job queue, and have a job runner component that can get and execute work.
How it works¶
When a job is created, it is added to a cluster-wide job queue and enters the waiting state. When one of the MSR replicas is ready to claim the job, it waits a random time of up to three seconds, giving every replica the opportunity to claim the task.

A replica claims a job by adding its replica ID to the job, so that other replicas know the job has been claimed. Once a replica claims a job, it adds the job to an internal queue, which sorts jobs by their scheduledAt time. Once that happens, the replica updates the job status to running and starts executing it.
The job runner component of each MSR replica keeps a heartbeatExpiration entry in a database that is shared by all replicas. If a replica becomes unhealthy, the other replicas notice the change and update the status of the failing worker to dead. In addition, all of the jobs claimed by the unhealthy replica enter the worker_dead state, so that other replicas can claim them.
Job types¶
MSR runs periodic and long-running jobs. The following is a complete list of the jobs that you can filter for in the web UI or through the API.
| Job | Description |
|---|---|
| gc | A garbage collection job that deletes layers associated with deleted images. |
| onlinegc | A garbage collection job that deletes layers associated with deleted images without putting the registry in read-only mode. |
| onlinegc_metadata | A garbage collection job that deletes metadata associated with deleted images. |
| onlinegc_joblogs | A garbage collection job that deletes job logs based on a configured job history setting. |
| metadatastoremigration | A necessary migration that enables the … |
| sleep | Used for testing the correctness of the jobrunner. It sleeps for 60 seconds. |
| false | Used for testing the correctness of the jobrunner. It runs the false command. |
| tagmigration | Used for synchronizing tag and manifest information between the MSR database and the storage backend. |
| bloblinkmigration | A DTR 2.1 to 2.2 upgrade process that adds references for blobs to repositories in the database. |
| license_update | Checks for license expiration extensions if online license updates are enabled. |
| scan_check | An image security scanning job. This job does not perform the actual scanning, rather it spawns scan_check_single jobs. |
| scan_check_single | A security scanning job for a particular layer, given by the SHA256SUM parameter. |
| scan_check_all | A security scanning job that updates all of the currently scanned images to display the latest vulnerabilities. |
| update_vuln_db | A job that is created to update MSR’s vulnerability database. It uses an Internet connection to check for database updates through … |
| scannedlayermigration | A DTR 2.4 to 2.5 upgrade process that restructures scanned image data. |
| push_mirror_tag | A job that pushes a tag to another registry after a push mirror policy has been evaluated. |
| poll_mirror | A global cron that evaluates poll mirroring policies. |
| webhook | A job that is used to dispatch a webhook payload to a single endpoint. |
| nautilus_update_db | The old name for the update_vuln_db job. |
| ro_registry | A user-initiated job for manually switching MSR into read-only mode. |
| tag_pruning | A job for cleaning up unnecessary or unwanted repository tags, which can be configured by repository admins. |
Job status¶
Jobs can have one of the following status values:
| Status | Description |
|---|---|
| waiting | Unclaimed job waiting to be picked up by a worker. |
| running | The job is currently being run by the specified workerID. |
| done | The job has successfully completed. |
| errors | The job has completed with errors. |
| cancel_request | The status of a job is monitored by the worker in the database. If the job status changes to cancel_request, the job is canceled by the worker. |
| cancel | The job has been canceled and was not fully executed. |
| deleted | The job and its logs have been removed. |
| worker_dead | The worker for this job has been declared dead and the job will not continue. |
| worker_shutdown | The worker that was running this job has been gracefully stopped. |
| worker_resurrection | The worker for this job has reconnected to the database and will cancel this job. |
Audit jobs with the web interface¶
Admins can view and audit jobs within the software using either the API or the MSR web UI.
Prerequisite¶
Job Queue
View jobs list¶
To view the list of jobs within MSR, do the following:
Log in to the MSR web UI.
Navigate to System > Job Logs in the left-side navigation panel. You should see a paginated list of past, running, and queued jobs. By default, Job Logs shows the latest 10 jobs on the first page.

If required, filter the jobs by:
Action
Worker ID, which is the ID of the worker in an MSR replica responsible for running the job
Optional. Click Edit Settings on the right of the filtering options to update your Job Logs settings.
Job details¶
The following is an explanation of the job-related fields displayed in Job Logs, using the filtered online_gc action from above.
| Job detail | Description | Example |
|---|---|---|
| Action | The type of action or job being performed. | |
| ID | The ID of the job. | |
| Worker | The ID of the worker node responsible for running the job. | |
| Status | Current status of the action or job. | |
| Start Time | Time when the job started. | |
| Last Updated | Time when the job was last updated. | |
| View Logs | Links to the full logs for the job. | |
View job-specific logs¶
To view the log details for a specific job, do the following:
Click View Logs to the right of the Last Updated value for the job. You will be redirected to the log detail page of your selected job.

Notice how the job ID is reflected in the URL, while the Action and the abbreviated form of the job ID are reflected in the heading. Also, the JSON lines displayed are job-specific MSR container logs.

Enter or select a different line count to truncate the number of lines displayed. Lines are cut off from the end of the logs.
Audit jobs with the API¶
Overview¶
Admins have the ability to audit jobs using the API.
Prerequisite¶
Job Queue
Job capacity¶
Each job runner has a limited capacity and will not claim jobs that require a higher capacity. You can see the capacity of a job runner via the GET /api/v0/workers endpoint:
{
"workers": [
{
"id": "000000000000",
"status": "running",
"capacityMap": {
"scan": 1,
"scanCheck": 1
},
"heartbeatExpiration": "2017-02-18T00:51:02Z"
}
]
}
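For example, a request of the following form returns the worker list shown above; the hostname and credentials are placeholders, reusing the user:token authentication style from the curl examples later in this guide:

curl -u admin:$TOKEN -H "accept: application/json" \
  https://msr.example.com/api/v0/workers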
This means that the worker with replica ID 000000000000 has a capacity of 1 scan and 1 scanCheck. Next, review the list of available jobs:
{
"jobs": [
{
"id": "0",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "1",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "2",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scanCheck": 1
}
}
]
}
If worker 000000000000 notices the jobs in the waiting state above, it will be able to pick up jobs 0 and 2, since it has the capacity for both. Job 1 will have to wait until the previous scan job, 0, is completed. The job queue will then look like:
{
"jobs": [
{
"id": "0",
"workerID": "000000000000",
"status": "running",
"capacityMap": {
"scan": 1
}
},
{
"id": "1",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "2",
"workerID": "000000000000",
"status": "running",
"capacityMap": {
"scanCheck": 1
}
}
]
}
You can get a list of jobs via the GET /api/v0/jobs/ endpoint. Each job looks like:
{
"id": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
"retryFromID": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
"workerID": "000000000000",
"status": "done",
"scheduledAt": "2017-02-17T01:09:47.771Z",
"lastUpdated": "2017-02-17T01:10:14.117Z",
"action": "scan_check_single",
"retriesLeft": 0,
"retriesTotal": 0,
"capacityMap": {
"scan": 1
},
"parameters": {
"SHA256SUM": "1bacd3c8ccb1f15609a10bd4a403831d0ec0b354438ddbf644c95c5d54f8eb13"
},
"deadline": "",
"stopTimeout": ""
}
The JSON fields of interest here are:
- id: The ID of the job.
- workerID: The ID of the worker in an MSR replica that is running this job.
- status: The current state of the job.
- action: The type of job the worker will actually perform.
- capacityMap: The available capacity a worker needs for this job to run.
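To pick out just the unclaimed work from the jobs endpoint, you can filter the response client-side. This is a sketch assuming the jq tool, placeholder credentials, and that the response wraps the list in a jobs field as in the queue examples above:

curl -su admin:$TOKEN -H "accept: application/json" \
  https://msr.example.com/api/v0/jobs/ | jq '.jobs[] | select(.status == "waiting")'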
Cron jobs¶
Several of the jobs performed by MSR run on a recurring schedule. You can see those jobs using the GET /api/v0/crons endpoint:
{
"crons": [
{
"id": "48875b1b-5006-48f5-9f3c-af9fbdd82255",
"action": "license_update",
"schedule": "57 54 3 * * *",
"retries": 2,
"capacityMap": null,
"parameters": null,
"deadline": "",
"stopTimeout": "",
"nextRun": "2017-02-22T03:54:57Z"
},
{
"id": "b1c1e61e-1e74-4677-8e4a-2a7dacefffdc",
"action": "update_db",
"schedule": "0 0 3 * * *",
"retries": 0,
"capacityMap": null,
"parameters": null,
"deadline": "",
"stopTimeout": "",
"nextRun": "2017-02-22T03:00:00Z"
}
]
}
The schedule field uses a cron expression following the (seconds) (minutes) (hours) (day of month) (month) (day of week) format. For example, 57 54 3 * * * with cron ID 48875b1b-5006-48f5-9f3c-af9fbdd82255 will run at 03:54:57 on any day of the week or month, which is 2017-02-22T03:54:57Z in the example JSON response above.
Enable auto-deletion of job logs¶
Mirantis Secure Registry has a global setting for the auto-deletion of job logs, which allows them to be removed as part of garbage collection. MSR admins can enable auto-deletion of job logs based on specified conditions, which are covered below.
Log in to the MSR web UI.
Navigate to System in the left-side navigation panel.
Scroll down to Job Logs and turn on Auto-Deletion.
Specify the conditions with which a job log auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following optional job log attributes:

| Name | Description | Example |
|---|---|---|
| Age | Lets you remove job logs that are older than your specified number of hours, days, weeks, or months. | 2 months |
| Max number of events | Lets you specify the maximum number of job logs allowed within MSR. | 100 |
If you check and specify both, job logs will be removed from MSR during garbage collection if either condition is met. You should see a confirmation message right away.
Click Start Deletion if you are ready. Read more about Garbage collection if you are unsure about this operation.
Navigate to System > Job Logs in the left-side navigation panel to verify that onlinegc_joblogs has started.
Note
When you enable auto-deletion of job logs, the logs will be permanently deleted during garbage collection.
Manage users¶
Create and manage teams¶
You can extend a user’s default permissions by granting them individual permissions in other image repositories by adding the user to a team. A team defines the permissions that a set of users has for a set of repositories.
To create a new team:
Log in to the MSR web UI.
Navigate to the Organizations page.
Click the organization within which you want to create the team.
Click + to create a new team.
Give the team a name.
Click the team name to manage its settings.
Click the Add user button to add team members.
Manage team permissions¶
Once you have created the team, the next step is to define the team permissions for a set of repositories.
To manage team permissions:
Navigate to the Permissions tab, and click the Add repository permissions button.
Choose the repositories that the team has access to, and what permission levels the team members have.
Three permission levels are available:

| Permission level | Description |
|---|---|
| Read only | View repository and pull images. |
| Read & Write | View repository, pull and push images. |
| Admin | Manage repository and change its settings, pull and push images. |
Delete a team¶
If you are an organization owner, you can delete a team in that organization. To delete a team:
Navigate to the Team.
Choose the Settings tab.
Click Delete.
Create and manage organizations¶
When a user creates a repository, only that user has permissions to make changes to the repository.
For team workflows, where multiple users have permissions to manage a set of common repositories, you can create an organization.
To create a new organization, navigate to the MSR web UI and go to the Organizations page.
Click the New organization button, and choose a meaningful name for the organization.
Repositories owned by this organization will contain the organization name, so to pull an image from that repository you will use:
docker pull <msr-domain-name>/<organization>/<repository>:<tag>
Click Save to create the organization, and then click the organization to define which users are allowed to manage this organization. These users will be able to edit the organization settings, edit all repositories owned by the organization, and define the user permissions for this organization.
For this, click the Add user button, select the users that you want to grant permissions to manage the organization, and click Save. Then change their permissions from Member to Org Owner.
Permission levels¶
Mirantis Secure Registry (MSR) allows you to define fine-grained permissions over image repositories.
Administrators¶
MSR administrators have permission to manage all MSR repositories and settings.
Team permission levels¶
With teams you can define the repository permissions for a set of users (read, read-write, and admin).
| Repository operation | read | read-write | admin |
|---|---|---|---|
| View/browse | x | x | x |
| Pull | x | x | x |
| Push | | x | x |
| Start a scan | | x | x |
| Delete tags | | x | x |
| Edit description | | | x |
| Set public or private | | | x |
| Manage user access | | | x |
| Delete repository | | | x |
Note
Team permissions are additive. When a user is a member of multiple teams, they have the highest permission level defined by those teams.
Overall permissions¶
| Permission level | Description |
|---|---|
| Anonymous or unauthenticated users | Search and pull public repositories. |
| Authenticated users | Search and pull public repos, and create and manage their own repositories. |
| Team member | Do everything a user can do, plus the permissions granted by the team the user belongs to. |
| Organization owner | Manage repositories and teams for the organization. |
| Admin | Manage anything across MKE and MSR. |
Manage webhooks¶
You can configure MSR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images.
Webhook types¶
| Event type | Scope | Access level | Availability |
|---|---|---|---|
| Tag pushed to repository (TAG_PUSH) | Individual repositories | Repository admin | Web UI and API |
| Tag pulled from repository (TAG_PULL) | Individual repositories | Repository admin | Web UI and API |
| Tag deleted from repository (TAG_DELETE) | Individual repositories | Repository admin | Web UI and API |
| Manifest pushed to repository (MANIFEST_PUSH) | Individual repositories | Repository admin | Web UI and API |
| Manifest pulled from repository (MANIFEST_PULL) | Individual repositories | Repository admin | Web UI and API |
| Manifest deleted from repository (MANIFEST_DELETE) | Individual repositories | Repository admin | Web UI and API |
| Security scan completed (SCAN_COMPLETED) | Individual repositories | Repository admin | Web UI and API |
| Security scan failed (SCAN_FAILED) | Individual repositories | Repository admin | Web UI and API |
| Image promoted from repository (PROMOTION) | Individual repositories | Repository admin | Web UI and API |
| Image mirrored from repository (PUSH_MIRRORING) | Individual repositories | Repository admin | Web UI and API |
| Image mirrored from remote repository (POLL_MIRRORING) | Individual repositories | Repository admin | Web UI and API |
| Repository created, updated, or deleted (REPO_EVENT) | Namespace, organizations | Namespace, organization owners | API only |
| Security scanner update completed (SCANNER_UPDATE_COMPLETED) | Global | MSR admin | API only |
| Helm chart deleted from repository (CHART_DELETE) | Individual repositories | Repository admin | Web UI and API |
| Helm chart pushed to repository (CHART_PUSH) | Individual repositories | Repository admin | Web UI and API |
| Helm chart pulled from repository (CHART_PULL) | Individual repositories | Repository admin | Web UI and API |
| Helm chart linting completed (CHART_LINTED) | Individual repositories | Repository admin | Web UI and API |
You must have admin privileges to a repository or namespace in order to subscribe to its webhook events. For example, a user must be an admin of repository “foo/bar” to subscribe to its tag push events. An MSR admin can subscribe to any event.
Manage repository webhooks with the web interface¶
You must have admin privileges to the repository in order to create a webhook or edit any aspect of an existing webhook.
Create a webhook for your repository¶
In your browser, navigate to https://<msr-url> and log in with your credentials.

Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.

Select the Webhooks tab, and click New Webhook.
From the Notification to receive drop-down list, select the event that will trigger the webhook.
Set the URL that will receive the JSON payload. Click Test next to the Webhook URL field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.
{ "type": "TAG_PUSH", "createdAt": "2019-05-15T19:39:40.607337713Z", "contents": { "namespace": "foo", "repository": "bar", "tag": "latest", "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c", "imageName": "foo/bar:latest", "os": "linux", "architecture": "amd64", "author": "", "pushedAt": "2015-01-02T15:04:05Z" }, "location": "/repositories/foo/bar/tags/latest" }
(Optional) Assign a TLS certificate to your webhook.
Expand Show advanced settings.
Paste the TLS certificate associated with your webhook URL into the TLS Cert field.
Note
For testing purposes, you can test your webhook endpoint over HTTP rather than HTTPS. To circumvent TLS verification, tick the Skip TLS Verification checkbox.
(Optional) Format your webhook message (available since MSR 3.0.2).
You can use Golang templates to format the webhook messages that are sent.
Expand Show advanced settings.
Paste the configured Golang template for the webhook message into the Webhook Message Format field.
Click Create to save the webhook. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.
As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry using the API.
Change the Active status of a webhook¶
Note
By default, the webhook status is set to Active on its creation.
In your browser, navigate to https://<msr-url> and log in with your credentials.

Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.

Select the Webhooks tab. The existing webhooks display on the page.
Locate the webhook whose Active status you want to change and move the slider underneath the Active heading accordingly.
Manage repository webhooks with the API¶
Triggering notifications
Refer to Webhook types for a list of events you can trigger notifications for via the API.
Your MSR hostname serves as the base URL for your API requests.
From the MSR web interface, click API on the bottom left-side navigation panel to explore the API resources and endpoints. Click Execute to send your API request.
API requests via curl¶
You can use curl to send HTTP or HTTPS API requests. Note that you will have to specify skipTLSVerification: true in your request in order to test the webhook endpoint over HTTP.
Example curl request¶
curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks" -H "accept: application/json" -H "content-type: application/json" -d "{ \"endpoint\": \"https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019\", \"key\": \"maria-testorg/lab-words\", \"skipTLSVerification\": true, \"type\": \"TAG_PULL\"}"
Example JSON response¶
{
"id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
"type": "TAG_PULL",
"key": "maria-testorg/lab-words",
"endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
"authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
"createdAt": "2019-05-22T01:55:20.471286995Z",
"lastSuccessfulAt": "0001-01-01T00:00:00Z",
"inactive": false,
"tlsCert": "",
"skipTLSVerification": true
}
Subscribe to events¶
To subscribe to events, send a POST request to /api/v0/webhooks with the following JSON payload:
Example usage¶
{
"type": "TAG_PUSH",
"key": "foo/bar",
"endpoint": "https://example.com"
}
The keys in the payload are:
- type: The event type to subscribe to.
- key: The namespace/organization or repo to subscribe to. For example, “foo/bar” to subscribe to pushes to the “bar” repository within the namespace/organization “foo”.
- endpoint: The URL to send the JSON payload to.
Normal users must supply a “key” to scope a particular webhook event to a repository or a namespace/organization. MSR admins can choose to omit this, meaning a POST event notification of your specified type will be sent for all MSR repositories and namespaces.
Receive a payload¶
Whenever your specified event type occurs, MSR will send a POST request to the given endpoint with a JSON-encoded payload. The payload will always have the following wrapper:
{
"type": "...",
"createdAt": "2012-04-23T18:25:43.511Z",
"contents": {...}
}
- type refers to the event type received at the specified subscription endpoint.
- contents refers to the payload of the event itself. Each event is different; therefore, the structure of the JSON object in contents will change depending on the event type. See Content structure for more details.
Test payload subscriptions¶
Before subscribing to an event, you can view and test your endpoints using fake data. To send a test payload, send a POST request to /api/v0/webhooks/test with the following payload:
{
"type": "...",
"endpoint": "https://www.example.com/"
}
Change type to the event type that you want to receive. MSR will then send an example payload to your specified endpoint. The example payload sent is always the same.
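For instance, a test request for tag-push payloads might look like the following; the user, token, and receiving endpoint are placeholders in the style of the earlier curl example:

curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks/test" \
  -H "accept: application/json" -H "content-type: application/json" \
  -d '{ "type": "TAG_PUSH", "endpoint": "https://www.example.com/" }'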
Content structure¶
Comments after (//) are for informational purposes only, and the example payloads have been clipped for brevity.
Repository event content structure¶
Tag push
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who pushed the tag
"pushedAt": "", // (string) JSON-encoded timestamp of when the push occurred
...
}
Tag delete
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just deleted
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who deleted the tag
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
Manifest push
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who pushed the manifest
...
}
Manifest delete
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who deleted the manifest
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
Security scan completed
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"scanSummary": {
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"critical": 0, // (int) number of critical issues, where CVSS >= 7.0
"major": 0, // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
"minor": 0, // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
"last_scan_status": 0, // (int) enum; see scan status section
"check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
...
}
}
Security scan failed
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"error": "", // (string) the error that occurred while scanning
...
}
Chart push
{
"namespace": "foo", // (string) namespace/organization for the repository
"repository": "bar", // (string) repository name
"event": "CHART_PUSH", // (string) event name
"author": "exampleUser", // (string) the username of the person who deleted the manifest
"data": {
"urls": [
"http://example.com" //
],
"created": "2015-01-02T15:04:05Z",
"digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c" // (string) sha256 digest of the manifest of the helm chart (eg. "sha256:0afb...")
}
}
Chart pull
{
"namespace": "foo",
"repository": "bar",
"event": "CHART_PULL",
"author": "exampleUser",
"data": {
"urls": [
"http://example.com"
],
"created": "2015-01-02T15:04:05Z",
"digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
}
}
Chart linted
{
"namespace": "foo",
"repository": "bar",
"event": "CHART_LINTED",
"author": "exampleUser",
"data": {
"chartName": "test-chart",
"chartVersion": "1.0"
}
}
Chart delete
{
"namespace": "foo",
"repository": "bar",
"event": "CHART_DELETE",
"author": "exampleUser",
"data": {
"urls": [
"http://example.com"
],
"created": "2015-01-02T15:04:05Z",
"digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
}
}
Namespace-specific event structure¶
Repository event (created/updated/deleted)
{
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"event": "", // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
"author": "", // (string) the name of the user responsible for the event
"data": {} // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
Global event structure¶
Security scanner update complete
{
"scanner_version": "",
"scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
"db_version": 0, // (int) newly updated database version
"db_updated_at": "", // (string) JSON-encoded timestamp of when the database updated
"success": <true|false> // (bool) whether the update was successful
"replicas": { // (object) a map keyed by replica ID containing update information for each replica
"replica_id": {
"db_updated_at": "", // (string) JSON-encoded time of when the replica updated
"version": "", // (string) version updated to
"replica_id": "" // (string) replica ID
},
...
}
}
Security scan status codes¶
0: Failed. An error occurred checking an image’s layer.
1: Unscanned. The image has not yet been scanned.
2: Scanning. Scanning in progress.
3: Pending. The image will be scanned when a worker is available.
4: Scanned. The image has been scanned, but vulnerabilities have not yet been checked.
5: Checking. The image is being checked for vulnerabilities.
6: Completed. The image has been fully security scanned.
View and manage existing subscriptions¶
View all subscriptions¶
To view existing subscriptions, send a GET request to /api/v0/webhooks. As a normal user (that is, not an MSR admin), this will show all of your current subscriptions across every namespace/organization and repository. As an MSR admin, this will show every webhook configured for your MSR.
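For example, the following sketch lists your subscriptions, with placeholder credentials in the style of the earlier curl example:

curl -u test-user:$TOKEN -H "accept: application/json" \
  https://msr-example.com/api/v0/webhooks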
The API response will be in the following format:
[
{
"id": "", // (string): UUID of the webhook subscription
"type": "", // (string): webhook event type
"key": "", // (string): the individual resource this subscription is scoped to
"endpoint": "", // (string): the endpoint to send POST event notifications to
"authorID": "", // (string): the user ID resposible for creating the subscription
"createdAt": "", // (string): JSON-encoded datetime when the subscription was created
},
...
]
View subscriptions for a particular resource¶
You can also view subscriptions for a given resource that you are an admin of. For example, if you have admin rights to the repository “foo/bar”, you can view all subscriptions (even other people’s) from a particular API endpoint. These endpoints are:
- GET /api/v0/repositories/{namespace}/{repository}/webhooks: View all webhook subscriptions for a repository.
- GET /api/v0/repositories/{namespace}/webhooks: View all webhook subscriptions for a namespace/organization.
Delete a subscription¶
To delete a webhook subscription, send a DELETE request to /api/v0/webhooks/{id}, replacing {id} with the ID of the webhook subscription that you would like to delete.
Only an MSR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only delete subscriptions for repositories that you manage.
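As a sketch, deleting the subscription created in the earlier example would look like the following, with the webhook ID taken from that example JSON response:

curl -u test-user:$TOKEN -X DELETE \
  "https://msr-example.com/api/v0/webhooks/b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155"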
Manage repository events¶
Audit repository events¶
Starting in DTR 2.6, each repository page includes an Activity tab which displays a sortable and paginated list of the most recent events within the repository. This offers better visibility along with the ability to audit events. Event types listed vary according to your repository permission level. Additionally, MSR admins can enable auto-deletion of repository events as part of maintenance and cleanup.
In the following section, we will show you how to view and audit the list of events in a repository. We will also cover the event types associated with your permission level.
View List of Events¶
As of DTR 2.3, admins were able to view a list of MSR events using the API. MSR 2.6 enhances that feature by showing a permission-based events list for each repository page on the web interface. To view the list of events within a repository, do the following:

Navigate to https://<msr-url> and log in with your MSR credentials.

Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.

Select the Activity tab. You should see a paginated list of the latest events based on your repository permission level. By default, Activity shows the latest 10 events and excludes pull events, which are only visible to repository and MSR admins.

If you are a repository or an MSR admin, uncheck Exclude pull to view pull events. This should give you a better understanding of who is consuming your images.
To update your event view, select a different time filter from the drop-down list.
Activity Stream¶
The following table breaks down the data included in an event, using the highlighted Create Promotion Policy event as an example.
| Event detail | Description | Example |
|---|---|---|
| Label | Friendly name of the event. | |
| Repository | This will always be the repository in review, following the … | |
| Tag | Tag affected by the event, when applicable. | |
| SHA | The digest value for CREATE operations, such as creating a new image tag or a promotion policy. | |
| Type | Event type. Possible values are: … | |
| Initiated by | The actor responsible for the event. For user-initiated events, this will reflect the user ID and link to that user’s profile. For image events triggered by a policy (pruning, pull/push mirroring, or promotion), this will reflect the relevant policy ID, except for manual promotions, where it reflects … | |
| Date and Time | When the event happened in your configured time zone. | |
Event Audits¶
Given the level of detail on each event, it should be easy for MSR and security admins to determine what events have taken place inside of MSR. For example, when an image that should not have been deleted ends up deleted, the security admin can determine when the deletion took place and who initiated it.
Event Permissions¶
| Repository event | Description | Minimum permission level |
|---|---|---|
| Push | Refers to … | Authenticated users |
| Scan | Requires security scanning to be set up by an MSR admin. Once enabled, this will display as a … | Authenticated users |
| Promotion | Refers to a … | Repository admin |
| Delete | Refers to “Delete Tag” events. Learn more about Delete images. | Authenticated users |
| Pull | Refers to “Get Tag” events. Learn more about Pull an image. | Repository admin |
| Mirror | Refers to … | Repository admin |
| Create repo | Refers to … | Authenticated users |
Enable Auto-Deletion of Repository Events¶
Mirantis Secure Registry has a global setting for repository event auto-deletion, which allows event records to be removed as part of garbage collection. MSR administrators can enable auto-deletion of repository events in DTR 2.6 based on specified conditions, which are covered below.

In your browser, navigate to https://<msr-url> and log in with your admin credentials.

Select System from the left-side navigation panel, which displays the Settings page by default.
Scroll down to Repository Events and turn on Auto-Deletion.
Specify the conditions with which an event auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following optional repository event attributes:
| Name | Description | Example |
|---|---|---|
| Age | Lets you remove events older than your specified number of hours, days, weeks, or months. | |
| Max number of events | Lets you specify the maximum number of events allowed in the repositories. | |
If you check and specify both, events in your repositories will be removed during garbage collection if either condition is met. You should see a confirmation message right away.
Click Start GC if you’re ready.
Navigate to System > Job Logs to confirm that onlinegc has happened.
Promotion policies and monitoring¶
Promotion policies overview¶
Mirantis Secure Registry allows you to automatically promote and mirror images based on a policy. In MSR 2.7, you have the option to promote applications with the experimental docker app CLI addition. Note that scanning-based promotion policies do not take effect until all application-bundled images have been scanned. In this way, you can create a Docker-centric development pipeline.
You can mix and match promotion policies, mirroring policies, and webhooks to create flexible development pipelines that integrate with your existing CI/CD systems.
Promote an image using policies
One way to create a promotion pipeline is to automatically promote images to another repository.
You start by defining a promotion policy that’s specific to a repository. When someone pushes an image to that repository, MSR checks if it complies with the policy you set up and automatically pushes the image to another repository.
Learn how to promote an image using policies.
Mirror images to another registry
You can also promote images between different MSR deployments. This not only allows you to create promotion policies that span multiple MSRs, but also allows you to mirror images for security and high availability.
You start by configuring a repository with a mirroring policy. When someone pushes an image to that repository, MSR checks if the policy is met, and if so pushes it to another MSR deployment or Docker Hub.
Learn how to mirror images to another registry.
Mirror images from another registry
Another option is to mirror images from another MSR deployment. You configure a repository to poll for changes in a remote repository. All new images pushed into the remote repository are then pulled into MSR.
This is an easy way to configure a mirror for high availability since you won’t need to change firewall rules that are in place for your environments.
Promote an image using policies¶
Mirantis Secure Registry allows you to create image promotion pipelines based on policies.
In this example we will create an image promotion pipeline such that:
Developers iterate and push their builds to the dev/website repository.

When the team creates a stable build, they make sure their image is tagged with -stable.

When a stable build is pushed to the dev/website repository, it will automatically be promoted to qa/website so that the QA team can start testing.
With this promotion policy, the development team doesn’t need access to the QA repositories, and the QA team doesn’t need access to the development repositories.
Configure your repository¶
Once you’ve created a repository, navigate to the repository page on the MSR web interface, and select the Promotions tab.
Note
Only administrators can globally create and edit promotion policies. By default users can only create and edit promotion policies on repositories within their user namespace.
Click New promotion policy, and define the image promotion criteria.
MSR allows you to set your promotion policy based on the following image attributes:
| Name | Description | Example |
|---|---|---|
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Promote to Target if Tag name ends in stable |
| Component | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Promote to Target if Component name starts with … |
| Vulnerabilities | Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Promote to Target if Critical vulnerabilities = … |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | Promote to Target if License name = … |
Now you need to choose what happens to an image that meets all the criteria.
Select the target organization or namespace and repository where the image is going to be pushed. You can choose to keep the image tag, or transform the tag into something more meaningful in the destination repository, by using a tag template.
In this example, if an image in the dev/website repository is tagged with a word that ends in “stable”, MSR will automatically push that image to the qa/website repository. In the destination repository, the image will be tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted. To confirm, select the Promotions tab on the dev/website repository.

You can also review the newly pushed tag in the target repository by navigating to qa/website and selecting the Tags tab.
Mirror images to another registry¶
Mirantis Secure Registry allows you to create mirroring policies for a repository. When an image gets pushed to a repository and meets the mirroring criteria, MSR automatically pushes it to a repository in a remote Mirantis Secure Registry or Hub registry.
This not only allows you to mirror images but also allows you to create image promotion pipelines that span multiple MSR deployments and datacenters.
In this example we will create an image mirroring policy such that:
Developers iterate and push their builds to msr-example.com/dev/website, the repository in the MSR deployment dedicated to development.

When the team creates a stable build, they make sure their image is tagged with -stable.

When a stable build is pushed to msr-example.com/dev/website, it will automatically be pushed to qa-example.com/qa/website, mirroring the image and promoting it to the next stage of development.
With this mirroring policy, the development team does not need access to the QA cluster, and the QA team does not need access to the development cluster.
You need to have permissions to push to the destination repository in order to set up the mirroring policy.
Configure your repository connection¶
Once you have created a repository, navigate to the repository page on the web interface, and select the Mirrors tab.
Click New mirror to define where the image will be pushed if it meets the mirroring criteria.
Under Mirror direction, choose Push to remote registry. Specify the following details:
| Field | Description |
|---|---|
| Registry type | You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to https://index.docker.io. |
| Username and password or access token | Your credentials in the remote repository you wish to push to. To use an access token instead of your password, see authentication token. |
| Repository | Enter the namespace/repository-name to push to. |
| Show advanced settings | Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed TLS certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing https://<msr-domain>/ca. |
Note
Make sure the account you use for the integration has permissions to write to the remote repository.
Click Connect to test the integration.
In this example, the image gets pushed to the qa/example repository of an MSR deployment available at qa-example.com, using a service account that was created just for mirroring images between repositories.
Next, set your push triggers. MSR allows you to set your mirroring policy based on the following image attributes:
| Name | Description | Example |
|---|---|---|
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Copy image to remote repository if Tag name ends in stable |
| Component | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Copy image to remote repository if Component name starts with … |
| Vulnerabilities | Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Copy image to remote repository if Critical vulnerabilities = … |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | Copy image to remote repository if License name = … |
You can choose to keep the image tag, or transform the tag into something more meaningful in the remote registry by using a tag template.
In this example, if an image in the dev/website repository is tagged with a word that ends in stable, MSR will automatically push that image to the MSR deployment available at qa-example.com. The image is pushed to the qa/example repository and is tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted to qa/example in the remote trusted registry at qa-example.com.
Metadata persistence¶
When an image is pushed to another registry using a mirroring policy, scanning and signing data is not persisted in the destination repository.

If you have scanning enabled for the destination repository, MSR will scan the pushed image. If you want the image to be signed, you need to do so manually.
Mirror images from another registry¶
Mirantis Secure Registry allows you to set up a mirror of a repository by constantly polling it and pulling new image tags as they are pushed. This ensures your images are replicated across different registries for high availability. It also makes it easy to create a development pipeline that allows different users access to a certain image without giving them access to everything in the remote registry.
To mirror a repository, start by creating a repository in the MSR deployment that will serve as your mirror. Previously, you were only able to set up pull mirroring from the API. Starting in DTR 2.6, you can also mirror and pull from a remote MSR or Docker Hub repository.
Pull mirroring on the web interface¶
To get started, navigate to https://<msr-url> and log in with your MKE credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.
Next, select the Mirrors tab and click New mirror. On the New mirror page, choose Pull from remote registry.
Specify the following details:
| Field | Description |
|---|---|
| Registry type | You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to https://index.docker.io. |
| Username and password or access token | Your credentials in the remote repository you wish to poll from. To use an access token instead of your password, see authentication token. |
| Repository | Enter the namespace/repository-name to poll from. |
| Show advanced settings | Enter the TLS details for the remote repository or check Skip TLS verification. … |
After you have filled out the details, click Connect to test the integration.
Once you have successfully connected to the remote repository, new buttons appear:
Click Save to mirror only future tags, or click Save & Apply to mirror all existing and future tags.
Pull mirroring on the API¶
There are a few different ways to send your MSR API requests. To explore the different API resources and endpoints from the web interface, click API on the bottom left-side navigation panel.
Search for the endpoint:
POST /api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies
Click Try it out and enter your HTTP request details.
namespace and reponame refer to the repository that will be poll mirrored. The boolean field initialEvaluation corresponds to Save when set to false, and will only mirror images created after your API request. Setting it to true corresponds to Save & Apply, which means that all tags in the remote repository will be evaluated and mirrored. The other body parameters correspond to the relevant remote repository details that you can see on the MSR web interface. As a best practice, use a service account just for this purpose. Instead of providing the password for that account, you should pass an authentication token.
If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can get it by accessing https://<msr-domain>/ca. The remoteCA field is optional for mirroring a Docker Hub repository.
Click Execute. On success, the API returns an HTTP 201 response.
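For reference, the same policy can be created directly with curl. The following is a minimal sketch only: the body field names (remoteHost, remoteRepository, and so on) are assumptions based on the form fields shown in the web interface, so confirm the exact schema in the API explorer before use.
# Hypothetical body fields; verify the schema in the API explorer first.
curl -u <user>:<authtoken> -X POST \
  "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pollMirroringPolicies" \
  -H "accept: application/json" \
  -H "content-type: application/json" \
  -d '{
        "enabled": true,
        "initialEvaluation": false,
        "username": "<remote-user>",
        "password": "<remote-token>",
        "remoteHost": "https://<remote-registry-url>",
        "remoteRepository": "<namespace>/<reponame>"
      }'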
Review the poll mirror job log¶
Once configured, the system polls for changes in the remote repository and runs the poll_mirror job every 30 minutes. On success, the system pulls in new images and mirrors them in your local repository. Starting in DTR 2.6, you can filter for poll_mirror jobs to review when the job last ran. To manually trigger the job and force pull mirroring, use the POST /api/v0/jobs API endpoint and specify poll_mirror as the action:
curl -X POST "https://<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"poll_mirror\" }"
See Manage jobs to learn more about job management within MSR.
Template reference¶
When defining promotion policies you can use templates to dynamically name the tag that is going to be created.
Important
Whenever an image promotion event occurs, the MSR timestamp for the event is in UTC (Coordinated Universal Time). That timestamp, however, is converted by the browser and presented in the user’s time zone. Conversely, if a time-based tag is applied to a target image, MSR captures it in UTC but cannot convert it to the user’s time zone, as tags are immutable strings.
You can use these template keywords to define your new tag:
Template | Description | Example result
---|---|---
%n | The tag to promote | 1, 4.5, latest
%A | Day of the week | Sunday, Monday
%a | Day of the week, abbreviated | Sun, Mon, Tue
%w | Day of the week, as a number | 0, 1, 6
%d | Number for the day of the month | 01, 15, 31
%B | Month | January, December
%b | Month, abbreviated | Jan, Jun, Dec
%m | Month, as a number | 01, 06, 12
%Y | Year | 1999, 2015, 2048
%y | Year, two digits | 99, 15, 48
%H | Hour, in 24 hour format | 00, 12, 23
%I | Hour, in 12 hour format | 01, 10, 10
%p | Period of the day | AM, PM
%M | Minute | 00, 10, 59
%S | Second | 00, 10, 59
%f | Microsecond | 000000, 999999
%Z | Name for the timezone | UTC, PST, EST
%j | Day of the year | 001, 200, 366
%W | Week of the year | 00, 10, 53
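As an illustration, assuming the strftime-style keywords shown above, promoting the tag latest on 15 January 2024 with the following template would yield the tag latest-2024-01-15:
%n-%Y-%m-%d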
Use Helm charts¶
Helm is a tool that manages Kubernetes packages called charts, which are used to define, install, and upgrade Kubernetes applications. These charts, in conjunction with Helm tooling, deploy applications into Kubernetes clusters. Charts are composed of a collection of files and directories, arranged in a particular structure and packaged as a .tgz file. Charts define Kubernetes objects, such as the Service and DaemonSet objects used in the application under deployment.
MSR enables you to use Helm to store and serve Helm charts, thus allowing users to push charts to and pull charts from MSR repositories using the Helm CLI and the MSR API.
MSR supports both Helm v2 and v3. The two versions differ significantly with regard to the Helm CLI, which affects the applications under deployment rather than Helm chart support in MSR. One key difference is that while Helm v2 includes both the Helm CLI and Tiller (Helm Server), Helm v3 includes only the Helm CLI. Helm charts (referred to as releases following their installation in Kubernetes) are managed by Tiller in Helm v2 and by Helm CLI in Helm v3.
Note
For a breakdown of the key differences between Helm v2 and Helm v3, refer to Helm official documentation.
Add a Helm chart repository¶
Users can add a Helm chart repository to MSR through the MSR web UI.
Log in to the MSR web UI.
Click Repositories in the left-side navigation panel.
Click New repository.
In the name field, enter the name for the new repository and click Create.
To add the new MSR repository as a Helm repository:
helm repo add <reponame> https://<msrhost>/charts/<namespace>/<reponame> --username <username> --password <password> --ca-file ca.crt
Expected output:
"<reponame>" has been added to your repositories
To verify that the new MSR Helm repository has been added:
helm repo list
Expected output:
NAME         URL
<reponame>   https://<msrhost>/charts/<namespace>/<reponame>
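Once the repository has been added, charts that it contains can be installed directly with the Helm CLI. A minimal sketch, in which the release name my-release is illustrative:
helm install my-release <reponame>/<chartname> --version <chartversion>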
Pull charts and their provenance files¶
Helm charts can be pulled from MSR Helm repositories using either the MSR API or the Helm CLI.
Pull with the MSR API¶
Note
Though the MSR API can be used to pull both Helm charts and provenance files, it is not possible to use it to pull both at the same time.
Pull a chart¶
To pull a Helm chart:
curl -u <username>:<password> \
--request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz \
-H "accept: application/octet-stream" \
-o <chartname>-<chartversion>.tgz \
--cacert ca.crt
Pull a provenance file¶
To pull a provenance file:
curl -u <username>:<password> \
--request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz.prov \
-H "accept: application/octet-stream" \
-o <chartname>-<chartversion>.tgz.prov \
--cacert ca.crt
Pull with the Helm CLI¶
Note
Though the Helm CLI can be used to pull a Helm chart by itself or a Helm chart and its provenance file, it is not possible to use the Helm CLI to pull a provenance file by itself.
Pull a chart¶
Use the helm pull CLI command to pull a Helm chart, then list the directory contents to confirm the download:
helm pull <reponame>/<chartname> --version <chartversion>
ls
Expected output:
ca.crt <chartname>-<chartversion>.tgz
Alternatively, use the following command:
helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt
Pull a chart and a provenance file in tandem¶
Use the helm pull CLI command with the --prov option to pull a Helm chart and its provenance file at the same time, then list the directory contents to confirm the download:
helm pull <reponame>/<chartname> --version <chartversion> --prov
ls
Expected output:
ca.crt <chartname>-<chartversion>.tgz <chartname>-<chartversion>.tgz.prov
Alternatively, use the following command:
helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt --prov
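Having pulled a chart together with its provenance file, you can verify the chart's integrity and origin. A minimal sketch, assuming the chart was signed and that the signer's public key is available in a local GnuPG keyring (the keyring path shown is the Helm default):
helm verify <chartname>-<chartversion>.tgz --keyring ~/.gnupg/pubring.gpg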
Push charts and their provenance files¶
You can use the MSR API or the Helm CLI to push Helm charts and their provenance files to an MSR Helm repository.
Note
Pushing and pulling Helm charts can be done with or without a provenance file.
Push charts with the MSR API¶
Using the MSR API, you can push Helm charts with application/octet-stream or multipart/form-data.
Push with application/octet-stream¶
To push a Helm chart through the MSR API with application/octet-stream:
curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Push with multipart/form-data¶
To push a Helm chart through the MSR API with multipart/form-data:
curl -F "chart=@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force push a chart¶
To overwrite an existing chart, turn off repository immutability and include a ?force query parameter in the HTTP request.
Navigate to Repositories and click the Settings tab.
Under Immutability, select Off.
To force push a Helm chart using the MSR API:
curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Push provenance files with the MSR API¶
You can use the MSR API to separately push provenance files related to Helm charts.
To push a provenance file through the MSR API:
curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/prov -u <username>:<password> --cacert ca.crt
Note
Attempting to push a provenance file for a nonexistent chart will result in an error.
Force push a provenance file¶
To force push a provenance file using the MSR API:
curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/prov?force" -u <username>:<password> --cacert ca.crt
Push a chart and its provenance file with a single API request¶
To push a Helm chart and a provenance file with a single API request:
curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force push a chart and a provenance file¶
To force push both a Helm chart and a provenance file using a single API request:
curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Push charts with the Helm CLI¶
Note
To push a Helm chart using the Helm CLI, first install the helm cm-push plugin from chartmuseum/helm-push. It is not possible to push a provenance file using the Helm CLI.
Use the helm cm-push CLI command to push a Helm chart:
helm cm-push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt
Force push a chart¶
Use the helm cm-push CLI command with the --force option to force push a Helm chart:
helm cm-push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt --force
View charts in a Helm repository¶
View charts in a Helm repository using either the MSR API or the MSR web UI.
Viewing charts with the MSR API¶
To view charts that have been pushed to a Helm repository using the MSR API, consider the following options:
Option |
CLI command |
---|---|
View the index file |
curl --request GET
https://<msrhost>/charts/<namespace>/<reponame>/index.yaml -u
<username>:<password> --cacert ca.crt
|
View a paginated list of all charts |
curl --request GET
https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u
<username>:<password> --cacert ca.crt
|
View a paginated list of chart versions |
curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname> -u <username>:<password> \
--cacert ca.crt
|
Describe a version of a particular chart |
curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion> -u \
<username>:<password> --cacert ca.crt
|
Return the default values of a version of a particular chart |
curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/values -u \
<username>:<password> --cacert ca.crt
|
Produce a template of a version of a particular chart |
curl --request GET https://<msrhost>/charts/api/<namespace>/ \
<reponame>/charts/<chartname>/<chartversion>/template -u \
<username>:<password> --cacert ca.crt
|
Viewing charts with the MSR web UI¶
Use the MSR web UI to view the MSR Helm repository charts.
In the MSR web UI, navigate to Repositories.
Click the name of the repository that contains the charts you want to view. The page will refresh to display the detail for the selected Helm repository.
Click the Charts tab. The page will refresh to display all the repository charts.
View |
UI sequence |
---|---|
Chart versions |
Click the View Chart button associated with the required Helm repository. |
Chart description |
|
Default values |
|
Chart templates |
|
Delete charts from a Helm repository¶
You can only delete charts from MSR Helm repositories using the MSR API, not the web UI.
To delete a version of a particular chart from a Helm repository through the MSR API:
curl --request DELETE https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion> -u <username>:<password> --cacert ca.crt
Helm chart linting¶
Helm chart linting can ensure that Kubernetes YAML files and Helm charts adhere to a set of best practices, with a focus on production readiness and security.
A set of established rules forms the basis of Helm chart linting. The process generates a report that you can use to take any necessary actions.
Implement Helm linting¶
Perform Helm linting using either the MSR web UI or the MSR API.
Helm linting with the web UI¶
Open the MSR web UI.
Navigate to Repositories.
Click the name of the repository that contains the chart you want to lint.
Click the Charts tab.
Click the View Chart button associated with the required Helm chart.
Click the View Chart button for the required chart version.
Click the Linting Summary tab.
Click the Lint Chart button to generate a Helm chart linting report.
Helm linting with the API¶
Run the Helm chart linter on a particular chart.
curl -k -H "Content-Type: application/json" --request POST "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lint" -u <username>:<password>
Generate a Helm chart linting report.
curl -k -X GET "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lintsummary" -u <username>:<password>
Helm chart linting rules¶
Helm linting reports present the linting rules, rule descriptions, and remediations, as shown in the following table.
Name |
Description |
Remediation |
---|---|---|
|
Indicates when services do not have any associated deployments. |
Confirm that your service’s selector correctly matches the labels on one of your deployments. |
|
Indicates when pods use the default service account. |
Create a dedicated service account for your pod. Refer to Configure Service Accounts for Pods for details. |
|
Indicates when deployments use the deprecated |
Use the |
|
Indicates when containers do not drop |
|
|
Indicates when objects use a secret in an environment variable. |
Do not use raw secrets in environment variables. Instead, either mount
the secret as a file or use a |
|
Indicates when deployment selectors fail to match the pod template labels. |
Confirm that your deployment selector correctly matches the labels in its pod template. |
|
Indicates when deployments with multiple replicas fail to specify inter-pod anti-affinity, to ensure that the orchestrator attempts to schedule replicas on different nodes. |
Specify anti-affinity in your pod specification to ensure that the
orchestrator attempts to schedule replicas on different nodes. Using
|
|
Indicates when objects use deprecated API versions under |
Migrate using the |
|
Indicates when containers fail to specify a liveness probe. |
Specify a liveness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details. |
|
Indicates when containers are running without a read-only root filesystem. |
Set |
|
Indicates when containers fail to specify a readiness probe. |
Specify a readiness probe in your container. Refer to Configure Liveness, Readiness, and Startup Probes for details. |
|
Indicates when pods reference a service account that is not found. |
Create the missing service account, or refer to an existing service account. |
|
Indicates when deployments have containers running in privileged mode. |
Do not run your container as privileged unless it is required. |
|
Indicates when objects do not have an |
Add an |
|
Indicates when objects do not have an |
Add an |
|
Indicates when containers are not set to |
Set |
|
Indicates when deployments expose port 22, which is commonly reserved for SSH access. |
Ensure that non-SSH services are not using port 22. Confirm that any actual SSH servers have been vetted. |
|
Indicates when containers do not have CPU requests and limits set. |
Set CPU requests and limits for your container based on its requirements. Refer to Requests and limits for details. |
|
Indicates when containers do not have memory requests and limits set. |
Set memory requests and limits for your container based on its requirements. Refer to Requests and limits for details. |
|
Indicates when containers mount a host path as writable. |
Set containers to mount host paths as |
|
CIS Benchmark 5.1.1 Ensure that the |
Create and assign a separate role that has access to specific resources/actions needed for the service account. |
|
Alert on deployments with |
Ensure the Docker socket is not mounted inside any containers by
removing the associated |
|
Alert on services for forbidden types. |
Ensure containers are not exposed through a forbidden service type such
as |
|
Alert on pods/deployment-likes with sharing host’s IPC namespace. |
Ensure the host’s IPC namespace is not shared. |
|
Alert on pods/deployment-likes with sharing host’s network namespace. |
Ensure the host’s network namespace is not shared. |
|
Alert on pods/deployment-likes with sharing host’s process namespace. |
Ensure the host’s process namespace is not shared. |
|
Alert on containers if allowing privilege escalation that could gain more privileges than its parent process. |
Ensure containers do not allow privilege escalation by setting
|
|
Alert on deployments with privileged ports mapped in containers. |
Ensure privileged ports [ |
|
Alert on deployments with sensitive host system directories mounted in containers. |
Ensure sensitive host system directories are not mounted in containers
by removing those |
|
Alert on deployments with unsafe |
Ensure the container does not unsafely expose parts of |
|
Alert on deployments specifying unsafe |
Ensure the container does not allow unsafe allocation of system resources by removing unsafe |
Helm limitations¶
Storage redirects¶
The option to redirect clients on pull for Helm repositories is present in the web UI. However, it is currently ineffective. Refer to the relevant issue on GitHub for more information.
MSR API endpoints¶
For the following endpoints, note that while the Swagger API Reference does not specify example responses for HTTP 200 codes, this is due to a Swagger bug; the responses are in fact returned.
# Get chart or provenance file from repo
GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>
# Template a chart version
GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Chart storage limit¶
Users can safely store up to 100,000 charts per repository; storing a greater number may compromise some MSR functionality.
Tag pruning¶
Tag pruning is the process of cleaning up unnecessary or unwanted repository tags. As of v2.6, you can configure Mirantis Secure Registry (MSR) to automatically perform tag pruning on repositories that you manage by:
Specifying a tag pruning policy, or
Setting a tag limit
Note
When run, tag pruning only deletes a tag and does not carry out any actual blob deletion.
Known Issue
While the tag limit field is disabled when you turn on immutability for a new repository, this is currently not the case with Repository Settings. As a workaround, turn off immutability when setting a tag limit via Repository Settings > Pruning.
The following sections cover how to specify a tag pruning policy and how to set a tag limit on repositories that you manage. They do not cover modifying or deleting a tag pruning policy.
Specify a tag pruning policy¶
As a repository administrator, you can add tag pruning policies to each repository that you manage. To get started, navigate to https://<msr-url> and log in with your credentials.
Select Repositories in the left-side navigation panel, and then click the name of the repository you want to update. Note that you must click the repository name that follows the / after your repository namespace.
Select the Pruning tab, and click New pruning policy to specify your tag pruning criteria:
MSR allows you to set your pruning triggers based on the following image attributes:
Name |
Description |
Example |
---|---|---|
Tag name |
Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values |
Tag name = test |
Component name |
Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values |
Component name starts with |
Vulnerabilities |
Whether the image has vulnerabilities – critical, major, minor, or all – and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number |
Critical vulnerabilities = |
License |
Whether the image uses an intellectual property license and is one of or not one of your specified words |
License name = |
Last updated at |
Whether the last image update was before your specified number of hours, days, weeks, or months. For details on valid time units, see Go’s ParseDuration function |
Last updated at: Hours = |
Specify one or more image attributes to add to your pruning criteria, then choose:
Prune future tags to save the policy and apply your selection to future tags. Only matching tags after the policy addition will be pruned during garbage collection.
Prune all tags to save the policy, and evaluate both existing and future tags on your repository.
Upon selection, you will see a confirmation message and will be redirected to your newly updated Pruning tab.
If you have specified multiple pruning policies on the repository, the Pruning tab will display a list of your prune triggers and details on when the last tag pruning was performed based on the trigger, a toggle for deactivating or reactivating the trigger, and a View link for modifying or deleting your selected trigger.
All tag pruning policies on your account are evaluated every 15 minutes. Any qualifying tags are then deleted from the metadata store. If a tag pruning policy is modified or created, then the tag pruning policy for the affected repository will be evaluated.
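Pruning policies can also be managed programmatically. The following is a rough sketch only: it assumes a pruningPolicies endpoint of the same general shape as the pollMirroringPolicies endpoint shown earlier, and the rule schema shown is hypothetical, so confirm the exact path and body in the API explorer (API in the bottom left-side navigation panel) before use.
# Hypothetical endpoint and rule schema; verify in the MSR API explorer first.
curl -u <user>:<authtoken> -X POST \
  "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pruningPolicies" \
  -H "content-type: application/json" \
  -d '{ "enabled": true, "rules": [ { "field": "tag", "operator": "starts with", "values": ["test"] } ] }'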
Set a tag limit¶
In addition to pruning policies, you can also set tag limits on repositories that you manage to restrict the number of tags on a given repository. Repository tag limits are processed in a first in first out (FIFO) manner. For example, if you set a tag limit of 2, adding a third tag would push out the first.
To set a tag limit, do the following:
Select the repository that you want to update and click the Settings tab.
Turn off immutability for the repository.
Specify a number in the Pruning section and click Save. The Pruning tab will now display your tag limit above the prune triggers list along with a link to modify this setting.
Vulnerability scanning¶
In addition to its primary function of storing Docker images, MSR offers a deeply integrated vulnerability scanner that analyzes container images, either by manual user request or automatically whenever an image is uploaded to the registry.
MSR image scanning occurs in a service known as the dtr-jobrunner container. To scan an image, MSR:
Extracts a copy of the image layers from backend storage.
Extracts the files from the layer into a working directory inside the dtr-jobrunner container.
Executes the scanner against the files in this working directory, collecting a series of scanning data. Once the scanning data is collected, the working directory for the layer is removed.
Important
In scanning images for security vulnerabilities, MSR temporarily extracts the contents of your images to disk. If malware is contained in these images, external malware scanners may wrongly attribute that malware to MSR. The key indication of this is the detection of malware in the dtr-jobrunner container in /tmp/findlib-workdir-*. To prevent any recurrence of the issue, Mirantis recommends configuring the run-time scanner to exclude files found in the MSR dtr-jobrunner containers in /tmp or, more specifically, if wildcards can be used, /tmp/findlib-workdir-*.
Scanner reporting¶
You can review vulnerability scanning results and submit those results to Mirantis Customer Support to help with the troubleshooting process.
Possible scanner report issues include:
Scanner crashes
Improperly extracted containers
Improperly detected components
Incorrectly matched backport
Vulnerabilities improperly matched to components
Vulnerability false positives
Export a scanner report¶
You can export a scanner report as a JSON (for support and diagnostics) or a CSV file (for processing using Windows or Linux shell scripts).
Sign in to MSR.
Navigate to Repositories > <repo-name> > Tags.
Click View Details for the required image.
Click Export Report and select Export as JSON or Export as CSV.
Find the report as either scannerReport.json (for JSON) or scannerReport.txt (for CSV) in your browser downloads directory.
Submit a scanner report¶
You can send a scanner report directly to Mirantis Customer Support to assist in troubleshooting.
Sign in to MSR.
Navigate to the View Details page and click the Components tab.
Click Show layers affected for the layer you want to report.
Click Report Issue. A pop-up window displays with the fields detailed in the following table:
Field | Description
---|---
Component | Automatically filled out and not editable. If the information is incorrect, make a note in the Additional info field.
Reported version or date | Automatically filled out and not editable. If the information is incorrect, make a note in the Additional info field.
Report layer | Indicate the image or image layer. Options include: Omit layer, Include layer, Include image.
False Positive(s) | Optional. Select from the drop-down menu all CVEs you suspect are false positives. Toggle the False Positive(s) control to edit the field.
Missing Issue(s) | Optional. List CVEs you suspect are missing from the report, in the format CVE-yyyy-#### or CVE-yyyy-#####, separated by commas. Toggle the Missing Issue(s) control to edit the field.
Incorrect Component Version | Optional. Enter any incorrect component version information in the Missing Issue(s) field. Toggle the Incorrect Component Version control to edit the field.
Additional info | Optional. Indicate anything else that does not pertain to the other fields. Toggle the Additional info control to edit this field.
Fill out the fields in the pop-up window and click Submit.
MSR generates a JSON-formatted scanner report, which it bundles into a file together with the scan data. This file downloads to your local drive, at which point you can share it as needed with Mirantis Customer Support.
Important
To submit a scanner report along with the associated image, bundle the items into a .tgz file and include that file in a new Mirantis Customer Support ticket.
To download the relevant image:
docker save <msr-address>/<user>/<image-name>:<tag> -o <image-name>.tar
To bundle the report and image as a .tgz file:
tar -cvzf scannerIssuesReport.tgz <image-name>.tar scannerIssuesReport.json
Image enforcement policies and monitoring¶
MSR users can automatically block clients from pulling images stored in the registry by configuring enforcement policies at either the global or repository level.
An enforcement policy is a collection of rules used to determine whether an image can be pulled.
A good example of a scenario in which an enforcement policy can be useful is when an administrator wants to house images in MSR but does not want those images to be pulled into environments by MSR users. In this case, the administrator would configure an enforcement policy either at the global or repository level based on a selected set of rules.
Enforcement policies: global versus repository¶
Global image enforcement policies differ from those set at the repository level in several important respects:
Whereas both administrators and regular users can set up enforcement policies at the repository level, only administrators can set up enforcement policies at the global level.
Only one global enforcement policy can be set for each MSR instance, whereas multiple enforcement policies can be configured at the repository level.
Global enforcement policies are evaluated prior to repository policies.
Enforcement policy rule attributes¶
Global and repository enforcement policies are generated from the same set of rule attributes.
Note
All rules must evaluate to true for an image to be pulled; if any rule evaluates to false, the image pull is blocked.
Name |
Filters |
Example |
---|---|---|
Tag name |
|
Tag name starts with |
Component name |
|
Component name starts with |
All CVSS 3 vulnerabilities |
|
All CVSS 3 vulnerabilities less than |
Critical CVSS 3 vulnerabilities |
|
Critical CVSS 3 vulnerabilities less than |
High CVSS 3 vulnerabilities |
|
High CVSS 3 vulnerabilities less than |
Medium CVSS 3 vulnerabilities |
|
Medium CVSS 3 vulnerabilities less than |
Low CVSS 3 vulnerabilities |
|
Low CVSS 3 vulnerabilities less than |
License name |
|
License name one of |
Last updated at |
|
Last updated at before |
Configure enforcement policies¶
Use the MSR web UI to set up enforcement policies for both repository and global enforcement.
Set up repository enforcement¶
Important
Users can only create and edit enforcement policies for repositories within their user namespace.
To set up a repository enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to Repositories.
Select the repository to edit.
Click the Enforcement tab and select New enforcement policy.
Define the enforcement policy rules with the desired rule attributes and select Save. The screen displays the new enforcement policy in the Enforcement tab. By default, the new enforcement policy is toggled on.
Once a repository enforcement policy is set up and activated, pull requests that do not satisfy the policy rules will return the following error message:
Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: enforcement policies '<enforcement-policy-id>'
blocked request
Set up global enforcement¶
Important
Only administrators can set up global enforcement policies.
To set up a global enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to System.
Select the Enforcement tab.
Confirm that the global enforcement function is Enabled.
Define the enforcement policy rules with the desired criteria and select Save.
Once the global enforcement policy is set up, pull requests against any repository that do not satisfy the policy rules will return the following error message:
Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: global enforcement policy blocked request
Monitor enforcement activity¶
Administrators and users can monitor enforcement activity in the MSR web UI.
Important
Enforcement events can only be monitored at the repository level. It is not possible, for example, to view in one location all enforcement events that correspond to the global enforcement policy.
Navigate to Repositories.
Select the repository whose enforcement activity you want to review.
Select the Activity tab to view enforcement event activity. For instance, you can:
Identify which policy triggered an event using the enforcement ID displayed on the event entry. (The enforcement IDs for each enforcement policy are located on the Enforcement tab.)
Identify the user responsible for making a blocked pull request, and the time of the event.
Upgrade MSR¶
To use Mirantis Secure Registry (MSR) 3.0.x, you must perform a fresh installation, as it is not currently possible to upgrade to MSR 3.0.x from any MSR 2.x.x version. You can, though, upgrade from MSR 3.0.x to a later 3.0.x patch version.
Note
Before upgrading from MSR 3.0.0 to a later patch version, you must confirm that the cert-manager component is version 1.7.2 or later:
helm history cert-manager
To upgrade cert-manager to version 1.7.2:
helm upgrade cert-manager jetstack/cert-manager \
--version 1.7.2 \
--set installCRDs=true
Schedule your upgrade outside of peak hours to avoid any business impact, as brief interruptions may occur.
To upgrade to a new patch version:
Run the helm upgrade command:
helm upgrade msr msrofficial/msr --version <helm-chart-version> --set-file license=path/to/file/license.lic
Verify the installation of all MSR components by confirming that each Pod is in the Running state:
kubectl get pods
Troubleshoot any failing Pods by running the following command on each failed Pod:
kubectl describe pod <pod-name>
Review the Pod logs for more detailed results:
kubectl logs <pod-name>
Monitor MSR¶
Mirantis Secure Registry is a containerized application. To monitor it, you can use the same tools and techniques that you already use to monitor other containerized applications running on your cluster. One way to monitor MSR is to use the monitoring capabilities of Mirantis Kubernetes Engine (MKE).
In your browser, log in to Mirantis Kubernetes Engine (MKE), and navigate to the Stacks page. If you have MSR set up for high-availability, then all the MSR replicas are displayed.
To check the containers for the MSR replica, click the replica you want to inspect, click Inspect Resource, and choose Containers.
Now you can drill into each MSR container to see its logs and find the root cause of the problem.
Health check endpoints¶
MSR also exposes several endpoints that you can use to assess whether an MSR replica is healthy:
/_ping: Checks whether the MSR replica is healthy and returns a simple JSON response. This is useful for load balancing and other automated health check tasks.
/nginx_status: Returns the number of connections being handled by the NGINX front end used by MSR.
/api/v0/meta/cluster_status: Returns extensive information about all MSR replicas.
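For example, a load balancer health check might probe the /_ping endpoint directly. A minimal sketch (the -k flag skips TLS verification and is appropriate only for testing):
curl -ks https://<msr-url>/_ping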
Cluster status¶
The /api/v0/meta/cluster_status
endpoint requires administrator
credentials, and returns a JSON object for the entire cluster as observed by
the replica being queried. You can authenticate your requests using HTTP basic
auth.
curl -ksL -u <user>:<pass> https://<msr-domain>/api/v0/meta/cluster_status
{
"current_issues": [
{
"critical": false,
"description": "... some replicas are not ready. The following servers are
not reachable: dtr_rethinkdb_f2277ad178f7",
}],
"replica_health": {
"f2277ad178f7": "OK",
"f3712d9c419a": "OK",
"f58cf364e3df": "OK"
},
}
You can find health status in the current_issues and replica_health arrays. If this endpoint does not provide meaningful information when you are troubleshooting, try troubleshooting using logs.
Check notary audit logs¶
Docker Content Trust (DCT) keeps audit logs of changes made to trusted repositories. Every time you push a signed image to a repository, or delete trust data for a repository, DCT logs that information.
These logs are only available from the MSR API.
Get an authentication token¶
To access the audit logs you need to authenticate your requests using an authentication token. You can get an authentication token for all repositories, or one that is specific to a single repository.
To get a token with a global scope, valid for all repositories:
curl --insecure --silent \
--user <user>:<password> \
"https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*"
To get a token that is specific to a single repository:
curl --insecure --silent \
--user <user>:<password> \
"https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=repository:<dtr-url>/<repository>:pull"
MSR returns a JSON file with a token, even when the user does not have access to the repository for which they requested the authentication token. In that case, the token does not grant access to MSR repositories.
The JSON file returned has the following structure:
{
"token": "<token>",
"access_token": "<token>",
"expires_in": "<expiration in seconds>",
"issued_at": "<time>"
}
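For convenience, you can capture the token into a shell variable for use in subsequent requests. A minimal sketch, assuming the jq utility is installed:
# Request a global-scope token and extract the token field with jq.
TOKEN=$(curl --insecure --silent --user <user>:<password> \
  "https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*" | jq -r .token)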
Changefeed API¶
Once you have an authentication token you can use the following endpoints to get audit logs:
URL | Description | Authorization
---|---|---
https://<dtr-url>/v2/_trust/changefeed | Get audit logs for all repositories. | Global-scope token
https://<dtr-url>/v2/<dtr-url>/<repository>/_trust/changefeed | Get audit logs for a specific repository. | Repository-specific token
Both endpoints have the following query string parameters:
Field name | Required | Type | Description
---|---|---|---
change_id | Yes | String | A non-inclusive starting change ID from which to start returning results. This is typically the first or last change ID from the previous page of records requested, depending on which direction you are paging in. The value 0 indicates that records should be returned starting from the beginning of time. The value 1 indicates that records should be returned starting from the most recent record. If 1 is provided, the implementation also assumes that the records value is meant to be negative, regardless of the given sign.
records | Yes | Signed integer | The number of records to return. A negative value indicates that the number of records preceding the change_id should be returned. Records are always returned sorted from oldest to newest.
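A sketch of a changefeed request using the token captured earlier; the Bearer authorization header is an assumption based on standard registry token authentication, so adjust as needed for your deployment:
curl --insecure --silent \
  -H "Authorization: Bearer $TOKEN" \
  "https://<dtr-url>/v2/_trust/changefeed?change_id=0&records=10"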
The response is a JSON object of the following form:
{
"count": 1,
"records": [
{
"ID": "0a60ec31-d2aa-4565-9b74-4171a5083bef",
"CreatedAt": "2017-11-06T18:45:58.428Z",
"GUN": "msr.example.org/library/wordpress",
"Version": 1,
"SHA256": "a4ffcae03710ae61f6d15d20ed5e3f3a6a91ebfd2a4ba7f31fc6308ec6cc3e3d",
"Category": "update"
}
]
}
Below is the description for each of the fields in the response:
Field name | Description
---|---
count | The number of records returned.
ID | The ID of the change record. Should be used in the change_id field of requests to provide a non-exclusive starting index. It should be treated as an opaque value that is guaranteed to be unique within an instance of Notary.
CreatedAt | The time the change happened.
GUN | The MSR repository that was changed.
Version | The version that the repository was updated to. This increments every time there is a change to the trust repository. This is always 0 for events representing trusted data being removed from the repository.
SHA256 | The checksum of the timestamp being updated to. This can be used with the existing Notary APIs to request said timestamp. This is always an empty string for events representing trusted data being removed from the repository.
Category | The kind of change that was made to the trusted repository. Can be update or deletion.
The results only include audit logs for events that happened more than 60 seconds ago, and are sorted from oldest to newest.
Even though the authentication API always returns a token, the changefeed API validates whether the user has access to the audit logs:
If the user is an admin, they can see the audit logs for any repository.
All other users can only see audit logs for repositories to which they have read access.
Troubleshoot MSR¶
You can handle many potential MSR issues using the tips and tricks detailed herein.
Troubleshoot your MSR Kubernetes deployment¶
You can use general Kubernetes troubleshooting and debugging techniques to troubleshoot your MSR Kubernetes deployment.
To review an example of a failed Pod:
kubectl get pods
Example output:
NAME READY STATUS RESTARTS AGE
msr-api-95dc9979b-4sgfg 1/1 Running 3 (54s ago) 99s
msr-enzi-api-6f6f54c4c5-72bkb 1/1 Running 1 (39s ago) 100s
msr-enzi-worker-55b5786699-pnlh4 1/1 Running 3 (81s ago) 100s
msr-garant-84c5d9489b-t4bl4 1/1 Running 3 (51s ago) 100s
msr-jobrunner-default-7fcc9bb849-4whcl 1/1 Running 3 (54s ago) 100s
msr-nginx-76dbf47797-slllp 0/1 ContainerCreating 0 99s
msr-notary-server-6dfb9c67c9-mft97 1/1 Running 2 (85s ago) 99s
msr-notary-signer-576c5f574b-ftm5z 1/1 Running 2 (90s ago) 99s
msr-registry-7df8fd6fcd-l67d6 1/1 Running 3 (51s ago) 100s
msr-rethinkdb-cluster-0 1/1 Running 0 100s
msr-rethinkdb-proxy-d5798dd75-ft75c 1/1 Running 2 (85s ago) 99s
msr-scanningstore-0 1/1 Running 0 99s
postgres-operator-569b58b8c6-c6vxv 1/1 Running 0 32h
postgres-operator-ui-7b9f8d69bc-pv9nm 1/1 Running 0 32h
To review a greater amount of information about a failed Pod:
kubectl get pods -o wide
Example output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
msr-api-95dc9979b-4sgfg 1/1 Running 3 (2m48s ago) 3m33s 172.17.0.14 minikube <none> <none>
msr-enzi-api-6f6f54c4c5-72bkb 1/1 Running 1 (2m33s ago) 3m34s 172.17.0.13 minikube <none> <none>
msr-enzi-worker-55b5786699-pnlh4 1/1 Running 3 (3m15s ago) 3m34s 172.17.0.8 minikube <none> <none>
msr-garant-84c5d9489b-t4bl4 1/1 Running 3 (2m45s ago) 3m34s 172.17.0.11 minikube <none> <none>
msr-jobrunner-default-7fcc9bb849-4whcl 1/1 Running 3 (2m48s ago) 3m34s 172.17.0.9 minikube <none> <none>
msr-nginx-76dbf47797-slllp 0/1 ContainerCreating 0 3m33s <none> minikube <none> <none>
msr-notary-server-6dfb9c67c9-mft97 1/1 Running 3 (51s ago) 3m33s 172.17.0.18 minikube <none> <none>
msr-notary-signer-576c5f574b-ftm5z 1/1 Running 3 (57s ago) 3m33s 172.17.0.12 minikube <none> <none>
msr-registry-7df8fd6fcd-l67d6 1/1 Running 3 (2m45s ago) 3m34s 172.17.0.15 minikube <none> <none>
msr-rethinkdb-cluster-0 1/1 Running 0 3m34s 172.17.0.10 minikube <none> <none>
msr-rethinkdb-proxy-d5798dd75-ft75c 1/1 Running 2 (3m19s ago) 3m33s 172.17.0.17 minikube <none> <none>
msr-scanningstore-0 1/1 Running 0 3m33s 172.17.0.16 minikube <none> <none>
postgres-operator-569b58b8c6-c6vxv 1/1 Running 0 32h 172.17.0.7 minikube <none> <none>
postgres-operator-ui-7b9f8d69bc-pv9nm 1/1 Running 0 32h 172.17.0.6 minikube <none> <none>
To review the Pods running in all namespaces:
kubectl get pods --all-namespaces
Example output:
NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager cert-manager-7dd5854bb4-hx7mj 1/1 Running 1 (7d5h ago) 7d9h
cert-manager cert-manager-cainjector-64c949654c-gwvgg 1/1 Running 2 (2d9h ago) 7d9h
cert-manager cert-manager-webhook-6b57b9b886-7prtc 1/1 Running 1 (2d9h ago) 7d9h
default msr-api-95dc9979b-4sgfg 1/1 Running 3 (4m44s ago) 5m29s
default msr-enzi-api-6f6f54c4c5-72bkb 1/1 Running 1 (4m29s ago) 5m30s
default msr-enzi-worker-55b5786699-pnlh4 1/1 Running 3 (5m11s ago) 5m30s
default msr-garant-84c5d9489b-t4bl4 1/1 Running 3 (4m41s ago) 5m30s
default msr-jobrunner-default-7fcc9bb849-4whcl 1/1 Running 3 (4m44s ago) 5m30s
default msr-nginx-76dbf47797-slllp 0/1 ContainerCreating 0 5m29s
default msr-notary-server-6dfb9c67c9-mft97 1/1 Running 3 (2m47s ago) 5m29s
default msr-notary-signer-576c5f574b-ftm5z 1/1 Running 3 (2m53s ago) 5m29s
default msr-registry-7df8fd6fcd-l67d6 1/1 Running 3 (4m41s ago) 5m30s
default msr-rethinkdb-cluster-0 1/1 Running 0 5m30s
default msr-rethinkdb-proxy-d5798dd75-ft75c 1/1 Running 2 (5m15s ago) 5m29s
default msr-scanningstore-0 1/1 Running 0 5m29s
default postgres-operator-569b58b8c6-c6vxv 1/1 Running 0 32h
default postgres-operator-ui-7b9f8d69bc-pv9nm 1/1 Running 0 32h
kube-system coredns-78fcd69978-48bfx 1/1 Running 1 (7d5h ago) 7d9h
kube-system etcd-minikube 1/1 Running 1 (2d9h ago) 7d9h
kube-system kube-apiserver-minikube 1/1 Running 1 (2d9h ago) 7d9h
kube-system kube-controller-manager-minikube 1/1 Running 1 (7d5h ago) 7d9h
kube-system kube-proxy-2h2z5 1/1 Running 1 (2d9h ago) 7d9h
kube-system kube-scheduler-minikube 1/1 Running 1 (2d9h ago) 7d9h
kube-system storage-provisioner 1/1 Running 2 (2d9h ago) 7d9h
To review all services:
kubectl get services
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d10h
msr ClusterIP 10.98.33.163 <none> 8080/TCP,443/TCP 8m14s
msr-api ClusterIP 10.102.145.77 <none> 443/TCP 8m14s
msr-enzi ClusterIP 10.102.7.61 <none> 4443/TCP 8m14s
msr-garant ClusterIP 10.102.139.182 <none> 443/TCP 8m14s
msr-notary ClusterIP 10.107.27.10 <none> 443/TCP 8m14s
msr-notary-signer ClusterIP 10.103.28.108 <none> 7899/TCP 8m14s
msr-registry ClusterIP 10.109.12.52 <none> 443/TCP 8m14s
msr-rethinkdb-admin ClusterIP None <none> 8080/TCP 8m14s
msr-rethinkdb-cluster ClusterIP None <none> 29015/TCP 8m14s
msr-rethinkdb-proxy ClusterIP 10.103.235.96 <none> 28015/TCP 8m14s
msr-scanningstore ClusterIP 10.99.62.126 <none> 5432/TCP 8m13s
msr-scanningstore-config ClusterIP None <none> <none> 7m56s
msr-scanningstore-repl ClusterIP 10.107.82.163 <none> 5432/TCP 8m13s
postgres-operator ClusterIP 10.108.77.171 <none> 8080/TCP 32h
postgres-operator-ui ClusterIP 10.108.138.75 <none> 80/TCP 32h
To review the state of a running or failed Pod:
kubectl describe pod msr-nginx-76dbf47797-slllp
Example output, including status, environment variables, certificates used, and recent events such as why the Pod might have failed to start:
Name: msr-nginx-76dbf47797-slllp
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 17 Nov 2021 19:22:17 -0500
Labels: app.kubernetes.io/component=nginx
app.kubernetes.io/instance=msr
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=msr
app.kubernetes.io/version=3.0.0-tp2
helm.sh/chart=msr-1.0.0-tp2.1
pod-template-hash=76dbf47797
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/msr-nginx-76dbf47797
.
.
.
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 9m17s default-scheduler Successfully assigned default/msr-nginx-76dbf47797-slllp to minikube
Warning FailedMount 58s (x12 over 9m13s) kubelet MountVolume.SetUp failed for volume "secrets" : secret "bad" not found
Warning FailedMount 27s (x4 over 7m15s) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets], unattached volumes=[secrets kube-api-access-6h99g]: timed out waiting for the condition
To view the Pod logs:
kubectl logs <pod-name>
To create a shell to examine things from inside a Pod:
kubectl exec --stdin --tty <pod-name> -- /bin/sh
See also
Access RethinkDB¶
MSR uses RethinkDB to persist and reproduce data across replicas. To review the internal state of MSR, you can connect directly to the RethinkDB instance that is running on an MSR replica, using either the RethinkCLI or the MSR API.
Warning
Mirantis does not support direct modifications to RethinkDB, and thus any unforeseen issues that result from doing so are solely the user’s responsibility.
Access RethinkDB with the RethinkCLI¶
Enable external access to the RethinkDB Admin Console:
kubectl port-forward service/msr-rethinkdb-admin 8080:8080
Access the interactive RethinkDB Admin Console by opening http://localhost:8080 in a web browser.
Query the database contents:
List the cluster problems as detected by the current node:
r.db("rethinkdb").table("current_issues")
Example output:
[]
List the databases that RethinkDB contains:
r.dbList()
Example output:
[ 'dtr2', 'jobrunner', 'notaryserver', 'notarysigner', 'rethinkdb' ]
List the tables contained in the dtr2 database:
r.db('dtr2').tableList()
Example output:
[ 'blob_links', 'blobs', 'client_tokens', 'content_caches', 'events', 'layer_vuln_overrides', 'manifests', 'metrics', 'namespace_team_access', 'poll_mirroring_policies', 'promotion_policies', 'properties', 'pruning_policies', 'push_mirroring_policies', 'repositories', 'repository_team_access', 'scanned_images', 'scanned_layers', 'tags', 'user_settings', 'webhooks' ]
List the entries contained in the repositories table:
r.db('dtr2').table('repositories')
Example output:
[ { enableManifestLists: false, id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b', immutableTags: false, name: 'test-repo-1', namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', namespaceName: 'admin', pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6', pulls: 0, pushes: 0, scanOnPush: false, tagLimit: 0, visibility: 'public' }, { enableManifestLists: false, id: '9f43f029-9683-459f-97d9-665ab3ac1fda', immutableTags: false, longDescription: '', name: 'testing', namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481', namespaceName: 'admin', pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be', pulls: 0, pushes: 0, scanOnPush: false, shortDescription: '', tagLimit: 1, visibility: 'public' } ]
Note
Individual databases and tables are a private implementation detail that may change in MSR from version to version. You can, though, always use dbList() and tableList() to explore the contents and data structure.
Access RethinkDB with the MSR API¶
Enable external access to the MSR API:
kubectl port-forward service/msr 8443:443
Review the status of your MSR cluster:
curl -u admin:$TOKEN -X GET "https://<msr-url>/api/v0/meta/cluster_status" -H "accept: application/json"
Example API response:
{ "rethink_system_tables": { "cluster_config": [ { "heartbeat_timeout_secs": 10, "id": "heartbeat" } ], "current_issues": [], "db_config": [ { "id": "339de11f-b0c2-4112-83ac-520cab68d89c", "name": "notaryserver" }, { "id": "aa2e893f-a69a-463d-88c1-8102aafebebc", "name": "dtr2" }, { "id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd", "name": "jobrunner" }, { "id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039", "name": "notarysigner" } ], "server_status": [ { "id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a", "name": "dtr_rethinkdb_5eb9459a7832", "network": { "canonical_addresses": [ { "host": "dtr-rethinkdb-5eb9459a7832.dtr-ol", "port": 29015 } ], "cluster_port": 29015, "connected_to": { "dtr_rethinkdb_56b65e8c1404": true }, "hostname": "9e83e4fee173", "http_admin_port": "<no http admin>", "reql_port": 28015, "time_connected": "2019-02-15T00:19:22.035Z" }, } ... ] } }
See also
The RethinkDB documentation on RethinkDB queries
Troubleshoot scanning or CVE updates failure¶
CVE database connectivity issues are often at the root of scanning or CVE update problems. A faulty installation of the PostgreSQL operator is often to blame for such issues.
To determine the state of your PostgreSQL installation:
Verify that the postgres operator is running by invoking the kubectl get pods command. If the output you receive is similar to the following example, your PostgreSQL installation is healthy:
postgres-operator-6788c8bf6-494lt 1/1 Running 0 16d
If, however, the command produces no output, or the state presented is something other than Running, install PostgreSQL as follows:
helm upgrade -i postgres-operator postgres-operator/postgres-operator \
--version 1.7.1 \
--set configKubernetes.spilo_runasuser=101 \
--set configKubernetes.spilo_runasgroup=103 \
--set configKubernetes.spilo_fsgroup=103
Vulnerability scan warnings¶
Warnings display in a red banner at the top of the MSR web UI to indicate potential vulnerability scanning issues.
Warning |
Cause |
---|---|
Warning: Cannot perform security scans because no vulnerability database was found. |
Displays when vulnerability scanning is enabled but there is no vulnerability database available to MSR. Typically, the warning displays when a vulnerability database update is run for the first time and the operation fails, as no usable vulnerability database exists at this point. |
Warning: Last vulnerability database sync failed. |
Displays when a vulnerability database update fails, even though a previous usable vulnerability database remains available for vulnerability scans; that is, when an update fails despite the successful completion of a prior update. |
Note
The terms vulnerability database sync and vulnerability database update are interchangeable, in the context of MSR web UI warnings.
Note
The issuing of warnings is the same regardless of whether vulnerability database updating is done manually or is performed automatically through a job.
MSR undergoes a number of steps in performing a vulnerability database update, including TAR file download and extraction, file validation, and the update operation itself. Errors that can trigger warnings can occur at any point in the update process. These errors can include such system-related matters as low disk space, issues with the transient network, or configuration complications. As such, the best strategy for troubleshooting MSR vulnerability scanning issues is to review the logs.
To view the logs for an online vulnerability database update:
Online vulnerability database updates are performed by a jobrunner container, the logs for which you can view through a docker CLI command or by using the MSR web UI.
CLI command:
docker logs <jobrunner-container-name>
MSR web UI:
Navigate to System > Job Logs in the left-side navigation panel.
To view the logs for an offline vulnerability database update:
The MSR vulnerability database update occurs through the dtr-api container. As such, access the logs for that container to ascertain the reason for update failure.
To obtain more log information:
If the logs do not initially offer enough detail on the cause of vulnerability database update failure, set MSR to enable debug logging, which will display additional debug logs.
Use Helm to enable and disable debug logging. For example:
helm upgrade --reuse-values --set logLevel=debug [RELEASE] [CHART]
Certificate issues when pushing and pulling images¶
If TLS is not properly configured, you are likely to encounter an x509: certificate signed by unknown authority error when attempting to run the following commands:
docker login
docker push
docker pull
To resolve the issue:
Verify that your MSR instance has been configured with your TLS certificate Fully Qualified Domain Name (FQDN). For more information, refer to Add a custom TLS certificate.
Alternatively, but only in testing scenarios, you can skip using a certificate by adding your registry host name as an insecure registry in the Docker daemon.json file:
{
"insecure-registries" : [ "registry-host-name" ]
}
Configure AWS_CA_BUNDLE environment variable¶
You may encounter an insecure TLS connection error if you are running MSR behind an MITM proxy and using AWS S3 for your storage backend.
To resolve the issue:
Add the AWS_CA_BUNDLE environment variable to all of the MSR containers by appending the following to the MSR Helm chart values.yaml file:
global:
  extraEnv:
    AWS_CA_BUNDLE: "path_to_the_certificate"
Apply the new value:
helm upgrade msr msrofficial/msr --version <version-number> -f values.yaml
Disaster recovery¶
Disaster recovery overview¶
Mirantis Secure Registry (MSR) uses RethinkDB to store metadata. RethinkDB is a clustered application, and thus to configure it with high availability it must have three or more servers, and its tables must be configured to have three or more replicas.
For a RethinkDB table to be healthy, a majority (n/2 + 1) of replicas per table must be available. As such, there are three possible failure scenarios:
Scenario |
Description |
---|---|
Minority of replicas are unhealthy |
One or more table replicas are unhealthy, but the overall majority (n/2 + 1) remains healthy and able to communicate with each other. As long as more than half of the table voting replicas and more than half of the voting replicas for each shard remain available, one of those voting replicas will be arbitrarily selected as the new primary. |
Majority of replicas are unhealthy |
Half or more voting replicas of a shard are lost and cannot be reconnected. An emergency repair of the cluster remains possible, without having to restore from a backup, which minimizes the amount of data lost. Refer to mirantis/msr db emergency-repair for more detail. |
All replicas are unhealthy |
A complete disaster scenario wherein all replicas are lost, the result being the loss or corruption of all associated data volumes. In this scenario, you must restore MSR from a backup. Restoring from a backup should be a last resort solution. You should first attempt an emergency repair, as this can mitigate data loss. Refer to Restore from an MSR backup for more information. |
See also
Repair a single replica¶
When one or more MSR replicas are unhealthy but the overall majority (n/2 + 1) is healthy and able to communicate with one another, your MSR cluster is still functional and healthy.
Given that the MSR cluster is healthy, there is no need to execute a disaster recovery procedure, such as restoring from a backup. Instead, you should:
Remove the unhealthy replicas from the MSR cluster.
Join new replicas to make MSR highly available.
The order in which you perform these operations is important, as an MSR cluster requires a majority of replicas to be healthy at all times. If you join more replicas before removing the ones that are unhealthy, your MSR cluster might become unhealthy.
Split-brain scenario¶
To understand why you should remove unhealthy replicas before joining new ones, imagine you have a five-replica MSR deployment and something goes wrong with the overlay network connecting the replicas, splitting them into two groups.
Because the cluster originally had five replicas, it can work as long as three replicas are still healthy and able to communicate (5 / 2 + 1 = 3). Even though the network separated the replicas into two groups, MSR is still healthy.
If at this point you join a new replica instead of fixing the network problem or removing the two isolated replicas, the new replica may end up on the side of the network partition that has fewer replicas.
When this happens, both groups have the minimum number of replicas needed to establish a cluster. This is known as a split-brain scenario: both groups can accept writes, their histories diverge, and the two groups effectively become two different clusters.
Scale Helm deployment¶
Important
With MSR 3.0 you can configure the number of replicas; however, you cannot add or remove individual replicas.
To scale your Helm deployment, you must first obtain your MSR deployment:
kubectl get deployment
Next, run the following command to add or remove replicas from your MSR deployment:
kubectl scale deployment --replicas=3 <deployment-name>
Example:
kubectl scale deployment --replicas=3 msr-api
For comprehensive information on how to scale MSR on Helm up and down as a Kubernetes application, refer to the Kubernetes documentation Running Multiple Instances of Your App.
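To confirm that the scaling operation completed, you can check the deployment status; a minimal check, assuming the msr-api deployment from the example above:

kubectl get deployment msr-api

The READY column should report the requested number of replicas, for example 3/3.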
Repair a cluster¶
For an MSR cluster to be healthy, a majority of its replicas (n/2 + 1) must be healthy and able to communicate with the other replicas. This is known as maintaining quorum.
In a scenario where quorum is lost, but at least one replica is still accessible, you can use that replica to repair the cluster. That replica doesn’t need to be completely healthy. The cluster can still be repaired as the MSR data volumes are persisted and accessible.
Repairing the cluster from an existing replica minimizes the amount of data lost. If this procedure doesn’t work, you’ll have to restore from an existing backup.
Diagnose an unhealthy cluster¶
When a majority of replicas are unhealthy, causing the overall MSR cluster to become unhealthy, an internal server error presents for operations such as docker login, docker pull, and docker push. Accessing the /_ping endpoint of any replica returns the same error. It is also possible that the MSR web UI is partially or fully unresponsive.
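A quick way to probe the health endpoint is with curl; a minimal sketch, assuming <msr-url> is your MSR URL and that the deployment uses a self-signed certificate (hence the -k flag):

curl -k https://<msr-url>/_ping

A healthy replica responds successfully; an unhealthy cluster returns the internal server error described above.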
Using the msr db scale command returns an error such as:
{"level":"fatal","msg":"unable to reconfigure replication: unable to
reconfigure replication for table \"org_membership\": unable to
reconfigure database replication: rethinkdb: The server(s) hosting table
`enzi.org_membership` are currently unreachable. The table was not
reconfigured. If you do not expect the server(s) to recover, you can use
`emergency_repair` to restore availability of the table.
\u003chttp://rethinkdb.com/api/javascript/reconfigure/#emergency-repair-mode\u003e
in:\nr.DB(\"enzi\").Table(\"org_membership\").Reconfigure(replicas=1, shards=1)","time":"2022-12-09T20:13:47Z"}
command terminated with exit code 1
Perform an emergency repair¶
Use the msr db emergency-repair command to repair an unhealthy MSR cluster from the msr-api Deployment. This command overrides the standard safety checks that occur when scaling a RethinkDB cluster, allowing RethinkDB to modify the replication factor to the setting most appropriate for the number of rethinkdb-cluster Pods that are connected to the database.
The msr db emergency-repair command is commonly used when the msr db scale command is no longer able to reliably scale the database. This typically occurs after a prior loss of quorum, which often happens when you scale rethinkdb.cluster.replicaCount without first decommissioning and scaling down RethinkDB servers. For more information on scaling down RethinkDB servers, refer to Remove replicas from RethinkDB.
Run the following command to perform an emergency repair:
kubectl exec deploy/msr-api -- msr db emergency-repair
Create an MSR backup¶
An MSR backup contains the data that MSR manages, with the exception of images, charts, and the vulnerability database.
Data managed by MSR¶
The table that follows describes the various types of data that MSR manages, and indicates which of these data types is backed up when you run the msr backup command.
| Data | Backup | Description |
|---|---|---|
| Configurations | yes | MSR settings. |
| Repository metadata | yes | Metadata about the repositories, charts, and images deployed, such as architecture and size. |
| Access control to repos and images | yes | Permissions for teams and repositories. |
| Notary data | yes | Signatures and digests for images that are signed. |
| Scan results | yes | Information about security vulnerabilities in your images. |
| Image and chart content | no | The images and charts stored in MSR repositories; must be backed up separately, depending on the MSR configuration. |
| Users, orgs, teams | yes | Data related to users, organizations, and teams. |
| Vulnerability database | no | Database of vulnerabilities, which can be redownloaded after a restore. |
Back up MSR data¶
The creation of a complete MSR backup requires that you back up both the contents of repositories (such as images and charts) and the metadata MSR manages.
Back up image content¶
Note
As you can configure MSR with several types of storage backends, the method for backing up images and charts will vary. The example offered here is for persistentVolume. If you are using a different storage backend, such as a cloud provider, adhere to the recommended practices for that system.
When MSR is configured with persistentVolume, images and charts are stored on the local file system or on mounted network storage.
One way to back up the images and charts data is to create a tar archive of the data volume that MSR uses. To find the path of the volume, describe the PersistentVolume associated with the PersistentVolumeClaim:
kubectl get persistentvolumeclaim msr

NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
msr    Bound    pvc-36c236cb-d5f2-431d-aeb7-76c0de49b17b   10Gi       RWX            standard       17h

kubectl get persistentvolume pvc-36c236cb-d5f2-431d-aeb7-76c0de49b17b -o jsonpath='{.spec.hostPath.path}'

/tmp/hostpath-provisioner/myns0/msr

Then create the archive from the volume path, passing the archive file name as the first argument to tar:

sudo tar -cvf image-backup.tar /tmp/hostpath-provisioner/myns0/msr
Back up MSR metadata¶
Use the msr backup command to create a backup of the MSR metadata. The command is present in any API Pod and can be run using the kubectl exec command.
The following example creates a backup of an MSR installation named mymsr. The backup contents are streamed to standard output, which is redirected locally to the file backup.tar.
kubectl exec -i deployment/mymsr-api -- msr backup - > backup.tar
Note
If your backup file contains sensitive information, you may want to encrypt it.
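For example, a minimal sketch using OpenSSL symmetric encryption (the tool and cipher choice here are assumptions; use whichever encryption utility your organization approves):

# Encrypt the backup, prompting for a passphrase
openssl enc -aes-256-cbc -pbkdf2 -salt -in backup.tar -out backup.tar.enc

# Decrypt the backup before restoring
openssl enc -d -aes-256-cbc -pbkdf2 -in backup.tar.enc -out backup.tar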
Test your backup¶
To validate your backup, print and review the contents of the created tar file:
tar -tf backup.tar
You can also test your backup by restoring it to a new MSR instance, as described in Restore from an MSR backup.
Restore from an MSR backup¶
If a majority of the RethinkDB table replicas in use by MSR are unhealthy and you are unable to run a successful emergency repair, you will have to restore the cluster from a backup.
To restore MSR:
Confirm installation of the version of MSR that corresponds to the one in use when the backup was made.
Restore the images and charts content.
Restore MSR metadata from a backup created using the msr backup command.
Register MSR with eNZi.
Download the vulnerability database.
Install corresponding MSR version¶
Before you can restore from a backup, a running MSR instance must be set up to serve as the restore target, and that instance must be the same version as the one from which the backup was created.
Restore images and charts¶
If you had MSR configured to store images on the local filesystem, you can extract your backup:
sudo tar -xf image-backup.tar -C /var/lib/docker/volumes
Note
If you are using a different storage backend, adhere to the best practices recommended for that system.
Restore MSR metadata from a backup¶
Use the msr restore command to restore MSR metadata from a previously created backup. The command is present in any API Pod and can be run using the kubectl exec command.
The following example restores onto an MSR installation named mymsr. The backup contents are streamed from standard input, which receives its data from the local file backup.tar.
kubectl exec -i deployment.apps/mymsr-api -- msr restore - < backup.tar
Register MSR with eNZi (auth service)¶
Whenever you restore MSR from a backup, you must register the software with eNZi.
Run the msr auth register admin command:
kubectl exec -i deployment.apps/mymsr-api -- msr auth register --username admin -p password https://mymsr-enzi:4443/enzi
Restart MSR Pods:
kubectl rollout restart deployment mymsr-api mymsr-enzi-api mymsr-garant mymsr-registry
Re-fetch the vulnerability database¶
If you have image scanning enabled, you must re-download the vulnerability database following any successful restore operation.
Where to go next¶
compatibility-matrix
Customer feedback¶
You can submit feedback on MSR to Mirantis either by rating your experience or through a Jira ticket.
To rate your MSR experience:
Log in to the MSR web UI.
Click Give feedback at the bottom of the screen.
Rate your MSR experience from one to five stars, and add any additional comments in the provided field.
Click Send feedback.
To offer more detailed feedback:
Log in to the MSR web UI.
Click Give feedback at the bottom of the screen.
Click create a ticket in the 5-star review dialog to open a Jira feedback collector.
Fill in the Jira feedback collector fields and add attachments as necessary.
Click Submit.
Migration Guide¶
Warning
In correlation with the end of life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version as of 2024-APR-20. The latest MSR product documentation is available here.
The migration of MSR metadata and image binaries to a new Kubernetes or Swarm cluster can be a complex operation. To help you to successfully complete this task, Mirantis provides the Mirantis Migration Tool (MMT).
With MMT, you can transition to the same MSR version you already have in use, or you can opt to upgrade to a more recent major, minor, or patch version of the software. In addition, MMT allows you to switch cluster orchestrators and deployment methods as part of the migration process.
Available MSR system orchestrations include:
- Kubernetes orchestration
- Docker Swarm orchestration
- MKE orchestration
Migration paths¶
Note
MMT does not support migrating to 2.9.x target systems.
| Source MSR system | Target MSR system |
|---|---|
| MSR 2.9 | MSR 3.0, Helm |
| MSR 2.9 | MSR 3.1, Helm |
| MSR 2.9 | MSR 3.1, Operator |
| MSR 2.9 | MSR 3.1, Swarm |
The workflow for migrating MSR deployments is a multi-stage sequential operation.
Migrations from MSR 2.9.x:
Source verification
Estimation
Extraction
Transformation
Restoration
Migrations from MSR 3.x.x:
Extraction
Restoration
Note
Refer to Kubernetes migrations for all migrations that include Kubernetes-based source or target systems.
Backup and restoration paths¶
You can use MMT to create an MSR system backup as well as to restore an MSR system from a previously created backup.
| Source MSR system | Target MSR system |
|---|---|
| MSR 3.0, Helm | MSR 3.0, Helm |
| MSR 3.1, Helm | MSR 3.1, Helm |
| MSR 3.1, Operator | MSR 3.1, Operator |
| MSR 3.1, Swarm | MSR 3.1, Swarm |
MMT architecture¶
The Mirantis Migration Tool is designed to work with MSR-based source registries.
The mmt command syntax is as follows:
mmt <command> <command-mode> --storage-mode <storage-mode> ... <directory>
The <command> argument represents the particular stage of the migration process:
| Migration stage | Description |
|---|---|
| verify | Verification of the MSR source system configuration. The verify command must be run on the source MSR system. Refer to Verify the source system configuration for more information. Applies only to migrations that originate from MSR 2.9.x systems. |
| estimate | Estimation of the number of images and the amount of metadata to migrate. The estimate command must be run on the source MSR system. Refer to Estimate the migration for more information. Applies only to migrations that originate from MSR 2.9.x systems. |
| extract | Extraction of metadata, storage configuration, and, in the case of the copy storage mode, blob storage. |
| transform | Transformation of metadata from the source registry for use with the target MSR system. The transform command must be run on the target MSR system. Refer to Transform the data extract for more information. Applies only to migrations that originate from MSR 2.9.x systems. |
| restore | Restoration of transformed metadata, storage configuration, and, in the case of the copy storage mode, blob storage. |
The <command-mode> argument indicates the mode in which the command is to run, specific to the source or target registry. msr and msr3 are currently the only accepted values, as MMT only supports the migration of MSR registries.
The --storage-mode flag and its accompanying <storage-mode> argument indicate the storage mode to use in migrating the registry blob storage.
| Storage mode | Description |
|---|---|
| inplace | The binary image data remains in its original location. The target MSR system must be configured to use the same external storage as the source MSR system. Refer to Configure external storage for more information. Important: Due to its ability to handle large amounts of data, Mirantis recommends the use of inplace. |
| copy | The binary image data is copied from the source system to a local directory on the workstation that is running MMT. This mode allows movement from one storage location to another. It is especially useful in air-gapped environments. |
The <directory> argument is used to share state across commands. It is typically the destination for the data extracted from the source registry, which then serves as the source data for subsequent commands.
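To illustrate how the pieces fit together, the following is a hypothetical invocation of the extract stage in copy mode (the full set of flags required for each scenario is covered in the step-by-step sections that follow):

mmt extract msr --storage-mode copy /migration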
Migration prerequisites¶
You must meet certain prerequisites to successfully migrate an MSR system using the Mirantis Migration Tool (MMT):
Placement of the source MSR registry into read-only mode. To do this, execute the following API request:
curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
A 202 Accepted response indicates success.
Important
To avoid data inconsistencies, the source registry must remain in read-only mode throughout the migration to the target MSR system. Revert the value of readOnlyRegistry to false after the migration is complete, as shown in the example after this list.
Be aware that MSR 3.0.x source systems cannot be placed into read-only mode. If you are migrating from a 3.0.x source system, be careful not to write any files during the migration process.
An active MSR 3.x.x installation, version 3.0.3 or later, to serve as the migration target.
Configuration of the namespace for the MSR target installation, which you set by running the following command:
kubectl config set-context --current --namespace=<NAMESPACE-for-MSR-3.x.x-migration-target>
You must pull the MMT image to both the source and target systems, using the following command:
docker pull registry.mirantis.com/msr/mmt
2.9.x source systems only. Administrator credentials for the MKE cluster on which the source MSR 2.9 system is running.
Kubernetes target systems only. A kubectl config file, which is typically located in $HOME/.kube.
Kubernetes target systems only. Credentials within the kubectl config file that supply cluster admin access to the Kubernetes cluster that is running MSR 3.x.x.
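As noted above, once the migration is complete you must take the source registry out of read-only mode. The same settings endpoint handles this, for example:

curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": false }"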
Select the storage mode¶
Once the prerequisites are met, you can select from two available storage modes for migrating binary image data from a source MSR system to a target MSR system: inplace and copy.
Note
In all but one stage of the migration workflow, you indicate the storage mode of choice in the storage-mode parameter setting. The step in which you do not indicate the storage mode is Restore the data extract.
| Storage mode | Description |
|---|---|
| inplace | The binary image data remains in its original location. The target MSR system must be configured to use the same external storage as the source MSR system. Refer to Configure external storage for more information. Important: Due to its ability to handle large amounts of data, Mirantis recommends the use of inplace. |
| copy | The binary image data is copied from the source system to a local directory on the workstation that is running MMT. This mode allows movement from one storage location to another. It is especially useful in air-gapped environments. |
Important
Migrations from source MSR systems that use Docker volumes for image storage, such as the local filesystem storage backend, can only be performed using the copy storage mode. Refer to Filesystem storage backends for more information.
Kubernetes migrations¶
For all Kubernetes-based migrations, Mirantis recommends running MMT in a Pod rather than using the docker run deployment method. Migration scenarios in which this does not apply are limited to MSR 2.9.x source systems and Swarm-based MSR 3.1.x source and target systems.
Important
All Kubernetes-based migrations that use a filesystem backend must run MMT in a Pod.
When performing a restore from within the MMT Pod, the Persistent Volume Claim (PVC) used by the Pod must contain the data extracted from the source MSR system.
Before you perform the migration, deploy the following Pod onto your Kubernetes-based source and target systems:
apiVersion: v1
kind: ServiceAccount
metadata:
name: mmt-serviceaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: mmt-role
rules:
- apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: mmt-rolebinding
subjects:
- kind: ServiceAccount
name: mmt-serviceaccount
roleRef:
kind: Role
name: mmt-role
apiGroup: rbac.authorization.k8s.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mmt-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: "20Gi"
---
apiVersion: v1
kind: Pod
metadata:
name: mmt
spec:
serviceAccountName: mmt-serviceaccount
volumes:
- name: storage
persistentVolumeClaim:
claimName: msr
- name: migration
persistentVolumeClaim:
claimName: mmt-pvc
containers:
- name: mmt
image: registry.mirantis.com/msr/mmt:2.0.1
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "tail -f /dev/null"]
volumeMounts:
- name: storage
mountPath: /storage
- name: migration
mountPath: /migration
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 256Mi
restartPolicy: Never
Note
In the rules section of the Role definition, add or remove permissions according to your requirements.
For the PersistentVolumeClaim definition, modify the spec.resources.requests.storage value according to your requirements.
In the Pod definition, the spec.volumes[0].persistentVolumeClaim.claimName field refers to the PVC used by the target MSR 3.x system. Modify the value as required.
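Assuming the manifest above is saved as mmt-pod.yaml (the file name is arbitrary), a typical deployment and access sequence looks like the following:

# Deploy the MMT Pod and its supporting resources
kubectl apply -f mmt-pod.yaml

# Wait for the Pod to become ready, then open a shell inside it
kubectl wait --for=condition=Ready pod/mmt
kubectl exec -it mmt -- sh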
Step-by-step migration¶
Once you have met the Migration prerequisites, configured your source MSR system and your target MSR system, and selected the storage mode, you can perform the migration workflow as a sequence of individual steps.
Migrations from MSR 2.9.x to 3.x.x must follow each of the five migration steps, whereas migrations from MSR 3.x.x source systems skip the verify, estimate, and transform steps, and instead begin with extract before proceeding directly to restore.
Important
All MMT commands that are run on MSR 3.x.x systems, including both source and target deployments, must include the --fullname option, which specifies the name of the MSR instance.
To obtain the name of your MSR instance:
helm ls
Verify the source system configuration¶
Note
If your migration originates from MSR 3.x.x, proceed directly to Extract the data.
The first step in migrating your source MSR system to a target MSR system is to verify the configuration of the source system.
docker run \
--rm \
-it \
registry.mirantis.com/msr/mmt:<mmt-version> \
verify msr \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration
Note
Migrations that use the copy storage mode and a filesystem storage backend must also include the --mount option, to specify the MSR 2.9.x Docker volume that is mounted to the MMT container at the /storage directory. As --mount is a Docker option, it must precede the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
--mount source=dtr-registry-<replica-id>,target=/storage
To obtain the MSR replica ID, run the following command from within an MSR node:
docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
| Parameter | Description |
|---|---|
| --source-mke-url | Set the URL for the source Mirantis Kubernetes Engine (MKE) system. |
| --source-username | Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-password | Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-url | Set the URL for the source MSR system. |
| --storage-mode | Set the registry migration storage mode. Valid values: inplace, copy |
| --source-insecure-tls | Optional. Set whether to use an insecure connection. Valid values: true, false. Default: false |
Example output:
Note
Sizing information displays only when a migration is run in copy
storage mode.
INFO[0000] Logging level set to "info"
INFO[0000] Migration will be performed with "copy" storage mode
INFO[0000] Verifying health of source MSR <source-msr-url>
INFO[0000] ok
INFO[0000] Verifying provided credentials with source MSR...
INFO[0000] ok
INFO[0000] Verifying health of source MKE <source-mke-url>
INFO[0000] ok
INFO[0000] Verifying provided credentials with source MKE...
INFO[0001] ok
INFO[0001] Extracting MSR storage configuration
INFO[0001] Checking the size of used source storage...
INFO[0001] Retrieving AWS S3 storage size
INFO[0001] Source has size 249 MB
INFO[0001] ok
Estimate the migration¶
Note
If your migration originates from MSR 3.x.x, proceed directly to Extract the data.
Before extracting the data for migration you must estimate the number of images and the amount of metadata to migrate from your source MSR system to the new MSR target system. To do so, run the following command on the source MSR system.
docker run \
--rm \
-it \
-v <local-migration-directory>:/migration:Z \
registry.mirantis.com/msr/mmt:<mmt-version> \
estimate msr \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration
Note
Migrations that use the copy
storage mode and a filesystem
storage
backend must also include the --mount
option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount
is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version>
portion of the command.
--mount source=dtr-registry-<replica-id>,target=/storage
To obtain the MSR replica ID, run the following command from within an MSR node:
docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
| Parameter | Description |
|---|---|
| --source-mke-url | Set the URL for the source Mirantis Kubernetes Engine (MKE) system. |
| --source-username | Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-password | Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-url | Set the URL for the source MSR system. |
| --storage-mode | Set the registry migration storage mode. Valid values: inplace, copy |
| --source-insecure-tls | Optional. Set whether to use an insecure connection. Valid values: true, false |
Example output:
Source Registry: "https://172.17.0.1" (Type: "msr") with authentication data from MKE: "https://172.17.0.1:444"
Mode: "copy"
Metadata: 30 MB
Image tags: 2 (2.8 MB)
As a result, all existing MSR storage is copied.
Extract the data¶
You can extract metadata and, optionally, binary image data from an MSR source system using commands that are presented herein.
Important
To avoid data inconsistencies, the source registry must remain in read-only mode throughout the migration to the target MSR system.
Be aware that MSR 3.0.x source systems cannot be placed into read-only mode. If you are migrating from a 3.0.x source system, be careful not to write any files during the migration process.
extract msr (2.9.x source systems)¶
Use the extract msr command for migrations that originate from an MSR 2.9.x system.
docker run \
--rm -it \
-v <local-migration-directory>:/migration:Z \
registry.mirantis.com/msr/mmt:<mmt-version> \
extract msr \
--source-mke-url <mke-url> \
--source-username <admin-username> \
--source-password <admin-password> \
--source-url <source-msr-url> \
--storage-mode <inplace|copy> \
--source-insecure-tls \
/migration
Note
Migrations that use the copy
storage mode and a filesystem
storage
backend must also include the --mount
option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount
is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version>
portion of the command.
--mount source=dtr-registry-<replica-id>,target=/storage
To obtain the MSR replica ID, run the following command from within an MSR node:
docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'
| Parameter | Description |
|---|---|
| --source-mke-url | Set the URL for the source Mirantis Kubernetes Engine (MKE) system. |
| --source-username | Set the username of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-password | Set the password of the admin user. For MSR 2.9.x source systems, use the MKE admin user. |
| --source-url | Set the URL for the source MSR system. |
| --storage-mode | Set the registry migration storage mode. Valid values: inplace, copy |
| --source-insecure-tls | Optional. Set whether to use an insecure connection. Valid values: true, false |
| --disable-analytics | Optional. Disables MMT metrics collection for the extract command. You must include the flag each time you run the command. |
Example output:
The Mirantis Migration Tool extracted your registry of MSR 2.9, using the
following parameters:
Source Registry: https://172.17.0.1
Mode: copy
Image data: 2 blobs (2.8 MB)
The data extract is rendered as a TAR file with the name dtr-metadata-mmt-backup.tar in the <local-migration-directory>. The file name is later converted to msr-backup-<MSR-version>-mmt.tar, following the transform step.
extract msr3 (3.x.x source systems)¶
Available since MMT 2.0.0
Use the extract msr3 command for migrations that originate from an MSR 3.x.x system.
Deploy MMT as a Pod onto your MSR source cluster.
Exec into the MMT Pod.
Execute the extract command:
./mmt extract msr3 \
  --storage-mode <inplace|copy> \
  --fullname <source-MSR-instance-name> \
  /migration
For Swarm-based source systems, execute the following command on a Swarm worker node on which MSR is installed:
docker run \
--rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v <local-migration-directory>:/migration:Z \
--mount source=msr_msr-storage,target=/storage \
--network msr_msr-ol \
registry.mirantis.com/msr/mmt:<mmt-version> \
extract msr3 \
--storage-mode <inplace|copy> \
--fullname <source-MSR-instance-name> \
--swarm \
/migration
| Parameter | Description |
|---|---|
| --disable-analytics | Optional. Disables MMT metrics collection for the extract command. You must include the flag each time you run the command. |
| --fullname | Optional. Sets the name of the MSR instance from which MMT will perform the data extract. Default: |
| | Optional. Excludes the events table from the data extract. |
| --parallel-io-count | Optional. Sets the number of parallel IO copies when performing blob storage copy tasks. Default: 4 |
| | Optional. Excludes unsigned images from the data extract. |
| --storage-mode | Sets the registry migration storage mode. Valid values: inplace, copy |
| --swarm | Optional. Indicates that the source system runs on a Swarm cluster. |
Example output:
INFO[0000] Migration will be performed with "inplace" storage mode
INFO[0000] Backing up metadata...
{"level":"info","msg":"Writing RethinkDB backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backing up MSR","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Recording time of backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backup file checksum is: 0e2134abf81147eef953e2668682b5e6b0e9761f3cbbb3551ae30204d0477291","time":"2023-07-06T01:25:51Z"}
INFO[0002] The Mirantis Migration Tool extracted your registry of MSR 3.x, using the following parameters:
Source Registry: MSR3
Mode: metadata only
Existing MSR3 storage will be backed up.
The source registry must remain in read-only mode for the duration of the operation to avoid data inconsistencies.
The data extract is rendered as a TAR file with the name msr-backup-<MSR-version>-mmt.tar in the <local-migration-directory>.
Transform the data extract¶
Note
If your migration originates from MSR 3.x.x, proceed directly to Restore the data extract.
Once you have extracted the data from your source MSR system, you must transform the metadata into a format that is suitable for migration to an MSR 3.x.x system.
Deploy MMT as a Pod onto your MSR target cluster.
Exec into the MMT Pod.
Execute the transform command:
./mmt transform metadata msr \
  --fullname <dest-MSR-instance-name> \
  --storage-mode <inplace|copy> \
  --enzipassword <source-MSR-password> \
  /migration
Note
The value of --enzipassword is the MSR source system password. This optional parameter is required when the source and target MSR passwords differ.
On the target system, run the following command from inside a worker node on which MSR is installed:
docker run \
--rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v <local-migration-directory>:/migration:Z \
--mount source=msr_msr-storage,target=/storage \
--network msr_msr-ol \
registry.mirantis.com/msr/mmt:$MMT_VERSION \
transform metadata msr \
--storage-mode <inplace|copy> \
--enzipassword <source-MSR-password> \
--swarm=true \
/migration
Note
The value of --enzipassword is the MSR source system password. This optional parameter is required when the source and target MSR passwords differ.
| Parameter | Description |
|---|---|
| --storage-mode | Set the registry migration storage mode. Valid values: inplace, copy |
| --disable-analytics | Optional. Disables MMT metrics collection for the transform command. You must include the flag each time you run the command. |
| --swarm | Optional. Specifies that the source system runs on Docker Swarm. Default: |
| --fullname | Sets the name of the MSR instance to which MMT will migrate the transformed data extract. Use only when the target system runs on a Kubernetes cluster. Default: |
Example output:
Writing migration summary file
Finalizing backup directory structure
Creating tar file
Cleaning transform operation artifacts from directory: "/home/<user-directory>/tmp/migrate"
Restore the data extract¶
You can restore a transformed data extract into a target MSR environment by running the following commands on the target MSR system.
Deploy MMT as a Pod onto your MSR target cluster.
Exec into the MMT Pod.
Execute the restore command:
./mmt restore msr \
  --storage-mode <inplace|copy> \
  --fullname <source-MSR-instance-name> \
  /migration
Example output:
Successfully restored metadata from: "/home/<user-directory>/tmp/migrate/msr-backup-<MSR-version>-mmt.tar"
Register MSR with eNZi:
kubectl exec -it deployment/<msr-instance-name>-api -- \
  msr auth register \
  --username <username> \
  --password <password> \
  https://<msr-instance-name>-enzi:4443/enzi
Restart the affected MSR Pods:
for each in $(kubectl get deployments.apps -l "app.kubernetes.io/instance=msr" | tail -n+2 | cut -d ' ' -f1); do kubectl rollout restart $each; done
On the target system, run the following command from inside a worker node on which MSR is installed:
docker run \
  --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v <local-migration-directory>:/migration:Z \
  --mount source=msr_msr-storage,target=/storage \
  --network msr_msr-ol \
  registry.mirantis.com/msr/mmt:$MMT_VERSION \
  restore msr \
  --storage-mode <inplace|copy> \
  --swarm \
  /migration
Example output:
Successfully restored metadata from: "/home/<user-directory>/tmp/migrate/msr-backup-<MSR-version>-mmt.tar"
Register MSR with eNZi:
docker exec -it $(docker ps -q --filter "name=msr-api") sh -c 'msr auth register https://$TASK_SLOT.msr-enzi-api:4443/enzi'
Restart the affected services:
docker service update --force msr_msr-enzi-api && \
docker service update --force msr_msr-api-server && \
docker service update --force msr_msr-registry && \
docker service update --force msr_msr-garant && \
docker service update --force msr_msr-jobrunner
| Parameter | Target system orchestrator | Description |
|---|---|---|
| | Kubernetes, Swarm | Optional. Sets the path to the data extract from which to restore your MSR deployment. Default: Data extract in the current directory. |
| | Kubernetes, Swarm | Optional. Sets the path to the blob storage directory from which to restore your MSR image blobs. Use only if extraction was performed in copy mode. Default: Blob storage in the current directory. |
| --disable-analytics | Kubernetes, Swarm | Optional. Disables MMT metrics collection for the restore command. You must include the flag each time you run the command. |
| --enzipassword | Swarm | Optional. Sets the eNZi admin password. |
| --fullname | Kubernetes | Sets the name of the MSR instance to which MMT will restore the data extract. Note: Use only when the target system runs on a Kubernetes cluster. Default: |
| | Kubernetes, Swarm | Optional. Sets the path to the manifests directory from which to load the configuration. Default: Manifests in the current directory. |
| | Kubernetes | Optional. Sets the location of the MSR 3.x chart. Valid values: path to chart directory or packaged chart, URL for MSR repository, or fully qualified chart URL. Default: |
| | Kubernetes | Optional. Sets the namespace scope for the given command. Default: |
| --parallel-io-count | Kubernetes, Swarm | Optional. Sets the number of parallel IO copies when performing blob storage copy tasks. Default: 4 |
| --storage-mode | Kubernetes, Swarm | Sets the registry migration storage mode. Valid values: inplace, copy |
| --swarm | Swarm | Optional. Specifies that the source system runs on Docker Swarm. Default: |
Settings not migrated¶
MSR settings that do not persist through the migration process include:
Single Sign-On, located in the General tab of the MSR web UI.
Automatic Scanning Timeouts, located in the Security tab of the MSR web UI.
Vulnerability database
Results of image scans
MSR license
Telemetry¶
Available as of MMT 1.0.1
By default, MMT sends usage metrics to Mirantis whenever you run the extract, transform, and restore commands. To disable this functionality, include the --disable-analytics flag whenever you issue any of these commands.
MMT collects the following metrics to improve the product and facilitate its use:
- Number of images stored in the source MSR system.
- Total size of all the images stored in the source MSR system.
- Time at which the command stops running.
- Number of errors that occurred during the given migration step.
- Migration step for which metrics are being collected.
- Time at which the command begins running.
- Command status. In the case of command failure, MMT reports all associated error messages.
- Storage mode used for migration.
- Storage type used in the MSR source and target systems.
- Source MSR IP address or URL, which is used to associate metrics from separate commands.
Troubleshoot migration¶
You can address various potential MSR migration issues using the tips and suggestions detailed herein.
Restore MSR reusing an extract copy¶
To reuse an extract copy for a restore, reset the appropriate flags in the migration_summary.json file to false or leave the flags empty. Otherwise, the MMT restore command will skip the extract.
Reset values in the migration_summary.json file:

"Restore": {
  "BlobStorage": {
    "BlobsProcessed": {}
  },
  "AllBlobsComplete": false
},
"EnziMetadata": false,
"RethinkMetadata": {
  "Copied": false,
  "IgnoredEventsTable": false
}
| Value | Description |
|---|---|
| "BlobsProcessed": {}, "AllBlobsComplete": false | Allows redoing the backup and skipping the restoration. |
| "EnziMetadata": false, "RethinkMetadata": false | Allows copying over the eNZi metadata during the command rerun. |
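As a convenience, you can reset the flags in one pass with jq; a hypothetical sketch, assuming the summary file structure shown above:

# Reset the restore flags so the next restore run does not skip the extract
jq '.Restore.BlobStorage.BlobsProcessed = {}
    | .Restore.AllBlobsComplete = false
    | .EnziMetadata = false
    | .RethinkMetadata.Copied = false' \
  migration_summary.json > migration_summary.json.tmp && \
  mv migration_summary.json.tmp migration_summary.json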
Run the restore command, using a backup file that was created from an extract on the MSR 3.x.x system.
Too many open files¶
When migrating a large source installation to your MSR target environment, MMT can fail due to too many files being open. If this happens, the following error message displays:
failed to extract blob data to filesystem: failed to copy file <filename>
from storage driver to <registry name>: error creating file: <filename>:
open <filename>: too many open files
To resolve the issue, run the following command in the terminal on which you are running MMT:
ulimit -n 1048576
Failure to load data error message¶
During the Restore stage of the migration workflow you may encounter the following error:
failed to determine custom restore path options: failed to get MSR version
information: no pod found in namespace with component label: api: default
Run kubectl config get-contexts to list all available contexts.
Find the correct context and run the following command:
kubectl config use-context <name-of-context-that-connects-to-cluster-running-MSR-3.0>
No space left on device¶
During the Extract stage of the migration workflow, you may encounter the following error message:
failed to extract blob data to filesystem: failed to copy file <filename>
from storage driver to <registry location>: error copying file <filename> to
<registry location>: write <filename>: no space left on device
To resolve this error, ensure that the directory provided as a parameter has enough space to store the migration data.
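A quick way to check the available space before rerunning the stage, assuming <local-migration-directory> is the directory you pass to MMT:

df -h <local-migration-directory>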
Failed to estimate migration error message¶
During the Estimate stage of the migration workflow, you may encounter the following error message:
failed to estimate MSR registry migration: failed to verify given directory: unable to get directory FileInfo: /mnt/test2: stat /mnt/test2: no such file or directory
When running MMT from a Docker container, ensure that the path provided for storing migration data has been mounted as a Docker volume from the local machine into the container.
When running MMT outside of Docker, ensure the path provided exists.
rethinkdb row cannot be restored¶
During the Restore stage of the migration workflow, you may encounter an error message that indicates an issue with rethinkdb row restoration:
Can't restore rethinkdb row: rethinkdb: Cannot perform write: lost contact
with primary replica in:\n<rethink-db-statement>
Kubernetes deployments¶
The error is reported when the rethinkdb Pod for the destination MSR 3.x installation does not have enough disk space available due to the sizing of its provisioned volume.
Edit the values.yaml file you used for MSR deployment, changing the rethinkdb.cluster.persistentVolume.size value to match the source RethinkDB volume size.
Run the helm upgrade --values <path-to-values.yaml> msr msr/msr command.
Swarm deployments¶
The error is reported when the node on which RethinkDB is running on the target MSR system does not have enough available disk space.
SSH into the node on which RethinkDB is running.
Review the amount of disk space used by the Docker daemon on the node:
docker system df
Review the total size and available storage of the node filesystem:
df
Allocate more storage to the host machine on which the target node is running.
Admin password on MSR 3.0.x target no longer works¶
As a result of the migration, the source MSR system security settings completely replace the settings in the target MSR system. Thus, to gain admin access to the target system, you must use the admin password for the source system.
Blob image copy considerations¶
MMT uses several parallel sub-routines to copy image blobs, the number of which is controlled by the --parallel-io-count parameter, which has a default value of 4.
Image blobs are copied only when you are using the copy storage mode for your migration, during the Extract and Restore stages of the migration workflow. For optimum performance, the number of CPU resources to allocate for the MMT container (--cpus=<value>) is --parallel-io-count plus one for MMT itself.
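For example, with the default --parallel-io-count of 4, the guidance above works out to five CPUs for the MMT container. A hedged sketch of combining the two settings (all other flags elided):

docker run --cpus=5 \
  ... \
  registry.mirantis.com/msr/mmt:<mmt-version> \
  extract msr --storage-mode copy --parallel-io-count 4 /migration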
Total blob size: 0¶
You may encounter an INFO[0014] Total blob size: 0 error message during migration in copy storage mode. This indicates that the storage is empty or that the blob storage mapping is defined incorrectly. In versions prior to MMT 2.0.2-GA, the error may result in a panic message.
To resolve the issue, ensure that the correct source volume is specified in the mount parameter of the MMT command line. Note that the exact source storage name may vary.
--mount source=dtr-registry-nfs-000000000003,target=/storage
If the issue is not resolved, update to MMT 2.0.2-GA or later.
Additional parameters¶
Errors can occur during migration that require the use of additional MMT parameters at various stages of the migration process.
For scenarios wherein the pulling of Docker images has failed, you can use the parameters detailed in the following table to pull the needed images to your MKE cluster running MSR 2.9.x.
| Parameter | Description |
|---|---|
| | Set the MSR 2.9.x dtr image that is to run in the MSR 2.9.x environment during migration. Default: |
| | Set the MSR 2.9.x repository within which the dtr image will run in the MSR 2.9.x environment during migration. Default: |
| | Set the image tag of the MSR 2.9.x repository where the dtr image is to run in the MSR 2.9.x environment during migration. Defaults to the version of the 2.9.x source system. |
| | Set the MMT Docker image, which you use during migration to run a copy of the MMT image within the 2.9.x (MKE) environment. Default: |
| | Set the MMT repository that is to be used during migration to run a copy of the MMT image within the 2.9.x (MKE) environment. Default: |
| | Set the image tag of the MMT Docker image that is to be used during migration to run a copy of the MMT image within the MSR 2.9.x (MKE) environment. Default: |
Additional volume mappings for containers¶
During the Transform and Restore stages of the migration workflow, you may encounter the following error message:
[unable to read client-cert
/home/<username>/.minikube/profiles/minikube/client.crt for minikube due to
open /home/<username>/.minikube/profiles/minikube/client.crt: no such file
or directory, unable to read client-key
/home/<username>/.minikube/profiles/minikube/client.key for minikube due to
open /home/<username>/.minikube/profiles/minikube/client.key: no such file
or directory, unable to read certificate-authority
/home/<username>/.minikube/ca.crt for minikube due to open
/home/<username>/.minikube/ca.crt: no such file or directory]
To address this error, add additional volume mappings to running Docker containers as needed:
-v $HOME/.minikube/profiles/minikube:/.minikube/profiles/minikube
Failed to query for metadata size¶
You must pull the MMT image to both your source MSR system and your target MSR system, otherwise the migration will fail with the following error message:
Failed to query RethinkDB for total metadata size: failed to convert query
response to int: strconv.Atoi: parsing "": invalid syntax
Continuing without total metadata size ...
To remedy this you must pull the MMT image to each of the two systems, using the following commands:
docker login registry.mirantis.com -u <username>
docker pull registry.mirantis.com/msr/mmt
flag provided but not defined: -append¶
MSR 3.0.3 or later must be running on your target MSR 3.x cluster, otherwise the restore migration step will fail with the following error message:
{"level":"fatal","msg":"flag provided but not defined: -append","time":"<time>"}
failed to restore metadata from "/migration/msr-backup-<msr-version>-mmt.tar": restore failed: command terminated with exit code 1
To resolve the issue, upgrade your target cluster to MSR 3.0.3 or later. Refer to Upgrade MSR for more information.
Storage configuration is out of sync with metadata¶
With the inplace storage mode, an error message displays if you fail to configure the external storage location for your target MSR system to the same storage location that your source MSR system uses:
Storage configuration may be out of sync with metadata: storage backend is
missing expected files (expected BlobStoreID <BlobStoreID>)
To remedy the error, do one of the following:
Configure your target MSR system to use the same external storage as your source MSR system. Refer to Configure external storage for more information.
Rerun the migration using the copy storage mode.
Manually copy the files from the source MSR system to the target MSR system, as sketched after this list.
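A minimal sketch of the manual copy option, assuming a filesystem backend and hypothetical blob store paths on both systems (adjust paths and host names to your deployment):

# Copy the blob store from the source host to the target host
rsync -a /var/lib/msr/blobs/ target-host:/var/lib/msr/blobs/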
The estimate command returns an image data value of 0¶
Running the estimate command on a filesystem backend can result in the display of an image data size of zero bytes:
Image data: 0 blobs (0 B)
Note
If the estimate command produces the issue, it is certain to carry forward to the output of the extract command.
To resolve this issue, you must Download and configure the MKE client bundle prior to performing the migration.
Unable to get FileInfo: /blobs¶
Running the restore command on a filesystem backend can result in the following error, indicating that the command did not succeed:
failed to verify given file path: unable to get FileInfo: /blobs
To resolve the issue, you must download and configure the MKE client bundle before you perform the migration.
failed to run container: mmt-dtr-rethinkdb-backup¶
During the Estimate and Extract stages of the migration workflow, you may encounter the following error message:
FATA[0001] failed to extract MSR metadata: \
failed to run container: \
mmt-dtr-rethinkdb-backup: \
Error response from daemon: \
Conflict: \
The name mmt-dtr-rethinkdb-backup is already assigned. \
You have to delete (or rename) that container to be able to assign \
mmt-dtr-rethinkdb-backup to a container again.
Identify the node on which the mmt-dtr-rethinkdb-backup container was created.
From that node, delete the RethinkDB backup container:
docker rm -f mmt-dtr-rethinkdb-backup
MMT release notes¶
MMT 2.0.2 current
Patch release for MMT 2.0 that introduces the following key features:
Improved performance of migrating blobs
Addressed issues
MMT 2.0.1
Patch release for MMT 2.0 that introduces the following key features:
Addressed issues
MMT 2.0.0
Initial MMT 2.0 release that introduces the following key features:
New migration paths
Additional command line operations
MMT 1.0.2
Patch release for MMT 1.0 that introduces the following key feature:
Improved performance of migrating blobs
MMT 1.0.1
Patch release for MMT 1.0 that introduces the following key feature:
MMT usage metrics
2.0.2¶
Release date: 2024-MAR-27
Enhancements¶
[FIELD-6173] Improved the performance of migrating blobs from older versions of MSR.
Addressed issues¶
[ENGDTR-4170] Fixed an issue wherein during migration the LDAP setting was not appearing in the destination MSR. Now, the setting is completely transferred to MSR 3.x metadata and can be accessed on the Settings page of the MSR web UI.
2.0.1¶
Release date: 2023-NOV-20
Enhancements¶
Implemented NFS storage migration for inplace mode.
Implemented the new --swarm option, which enables extract/transform/restore from MSR 2.9 to MSR 3.x on Docker Swarm.
Implemented mmt extract msr3, which backs up MSR 3.x metadata and images, for Swarm, Helm, and MSR Operator.
Implemented MSR Operator support in MMT for transform/restore.
Introduced the --enzipassword option, which adds the eNZi admin password to Swarm and MSR Operator restore.
Fixed the Helm upgrade process.
Fixed an issue wherein container exec failed with an unknown HTTP header.
Fixed MMT versioning.
Fixed the process of pulling an MMT image on a random node during the estimation step.
Upgraded:
Alpine to version 3.18
Go to 1.20.10
Go modules to fix CVEs
Addressed issues¶
[FIELD-6379] Fixed an issue wherein the estimation command in air-gapped environments failed due to attempts to pull the MMT image on a random node. The fix ensures that the MMT image is pulled on the required node, where the estimation command is executed.
2.0.0¶
Release date: 2023-SEP-28
Enhancements¶
MMT now includes the following command line operations:
- extract msr3 command
Performs the extract migration step on MSR 3.x source systems.
- --swarm option
Used in conjunction with the extract msr3, transform, and restore msr commands to indicate that the source or target system runs on a Swarm cluster.
1.0.2¶
Release date: 2024-MAR-27
Enhancements¶
[FIELD-6173] Improved the performance of migrating blobs from older versions of MSR.
1.0.1¶
Release date: 2023-MAY-16
Enhancements¶
[ENGDTR-3102] MMT now collects migration data, which Mirantis will use to identify ways to improve the product and facilitate its use.
Learn more
Addressed issues¶
[ENGDTR-3517] Fixed an issue wherein the restore command did not continue from its stopping point when it was terminated prior to completion.
[ENGDTR-3385] When run again following an interruption, the extract command now logs the number of blobs that it previously copied.
To improve MMT CLI help text readability, commands are now grouped into types.
Security¶
The critical and high severity CVEs addressed in this MMT release are detailed in the following table:
| CVE | Status | Problem details from upstream |
|---|---|---|
| | Resolved | Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0. |
| | Resolved | Due to unsanitized NUL values, attackers may be able to maliciously set environment variables on Windows. In syscall.StartProcess and os/exec.Cmd, invalid environment variable values containing NUL values are not properly checked for. A malicious environment variable value can exploit this behavior to set a value for a different environment variable. For example, the environment variable string "A=B\x00C=D" sets the variables "A=B" and "C=D". |
Get Support¶
Warning
In correlation with the end of life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version as of 2024-APR-20. The latest MSR product documentation is available here.
Mirantis Secure Registry (MSR) subscriptions provide access to prioritized support for designated contacts from your company, agency, team, or organization. MSR service levels are based on your subscription level and the cloud or cluster that you designate in your technical support case. Our support offerings are described on the Enterprise-Grade Cloud Native and Kubernetes Support page. You may inquire about Mirantis support subscriptions by using the contact us form.
The CloudCare Portal is the chief way in
which Mirantis interacts with customers who are experiencing technical
issues. Access to the CloudCare Portal requires prior authorization by your
company, agency, team, or organization, and a brief email verification step.
After Mirantis sets up its back-end systems at the start of the support
subscription, a designated administrator at your company, agency, team, or
organization can designate additional contacts. If you have not already
received and verified an invitation to our CloudCare Portal, contact your local
designated administrator, who can add you to the list of designated contacts.
Most companies, agencies, teams, and organizations have multiple designated
administrators for the CloudCare Portal, and these are often the persons most
closely involved with the software. If you do not know who your
local designated administrator is, or you are having problems accessing the
CloudCare Portal, you can also send an email to Mirantis support at support@mirantis.com.
Once you have verified your contact details and changed your password, you and all of your colleagues will have access to all of the cases and purchased resources. Mirantis recommends that you retain your Welcome to Mirantis email, as it contains information on how to access the CloudCare Portal, guidance on submitting new cases, managing your resources, and other related issues.
Mirantis encourages all customers with technical problems to use the knowledge base, which you can access on the Knowledge tab of the CloudCare Portal. We also encourage you to review the MSR product documentation and release notes prior to filing a technical case, as the problem may have already been fixed in a later release, or a workaround solution may be available for a similar problem that other customers have experienced.
One of the features of the CloudCare Portal is the ability to associate cases with a specific MSR cluster. These cluster associations are referred to in the Portal as “Clouds”. Mirantis pre-populates your customer account with one or more Clouds based on your subscription(s). You may also create and manage your Clouds to better match the way in which you use your subscription.
Mirantis also recommends and encourages customers to file new cases based on a specific Cloud in your account. This is because most Clouds also have associated support entitlements, licenses, contacts, and cluster configurations. These submissions greatly enhance the ability of Mirantis to support you in a timely manner.
You can locate the existing Clouds associated with your account by using the Clouds tab at the top of the portal home page. Navigate to the appropriate Cloud and click on the Cloud name. Once you have verified that the Cloud represents the correct MSR cluster and support entitlement, create a new case via the New Case button near the top of the Cloud page.
The support bundle, which is a compressed archive in ZIP format of configuration data and log files from the cluster, is the key to receiving effective technical support for most MSR cases. There are several ways to gather a support bundle, each of which is described in the sections that follow. Once you have obtained a support bundle, you can upload the bundle to your new technical support case by following the instructions in the Mirantis knowledge base, using the Detail view of your case.
Note
MSR users can obtain a support bundle using the Mirantis Support Console. For those running MSR on Mirantis Kubernetes Engine (MKE), there are additional methods for obtaining a support bundle that are detailed in MSR support bundles on MKE.
Mirantis Support Console¶
Use the Mirantis Support Console to obtain an MSR support bundle, using either the Support Console UI or the API.
Install the Support Console¶
You can install the Support Console on online and offline clusters.
Install the Support Console online¶
Use a Helm chart to install the Support Console:
helm repo add support-console-official https://registry.mirantis.com/charts/support/console
helm repo update
helm install support-console support-console-official/support-console --version 1.0.0 --set env.PRODUCT=msr
Once the Support Console is successfully installed, the system returns the commands needed to access the Support Console UI:
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=support-console,app.kubernetes.io/instance=support-console" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8000 to use your application"
kubectl --namespace default port-forward $POD_NAME 8000:$CONTAINER_PORT
Install the Support Console offline¶
An Internet-connected system is required for offline installation of the Support Console, for the purpose of downloading and transferring the necessary files to the offline host.
Download the Support Console image package from https://s3-us-east-2.amazonaws.com/packages-mirantis.com/caas/msc_image_1.0.0.tar.gz.
Download the Helm chart package:
helm pull https://registry.mirantis.com/charts/support/console/support-console/support-console-1.0.0.tgz
Copy the image and Helm chart packages to the offline host machine, for example:
scp support-console-1.0.0.tgz msc_image_1.0.0.tar.gz <user>@<offline-host>:<destination-path>
Install the Support Console:
helm install support-console support-console-1.0.0.tgz --version 1.0.0 --set env.PRODUCT=msr
Once the Support Console is successfully installed, the system returns the commands needed to access the Support Console UI:
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=support-console,app.kubernetes.io/instance=support-console" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8000 to use your application"
kubectl --namespace default port-forward $POD_NAME 8000:$CONTAINER_PORT
Obtain the support bundle¶
You can use the Support Console UI or API to obtain the MSR support bundle.
Obtain the support bundle using the Support Console UI¶
Forward the Support Console to port 8000:
kubectl --namespace default port-forward service/support-console 8000:8000
In your web browser, navigate to localhost:8000 to view the Support Console UI.
Click Collect Support Bundle.
In the pop-up window, enter the namespace from which you want to collect support data. By default, the Support Console gathers support data from the default namespace.
Optional. If you no longer require access to the Support Console, click Uninstall in the left-side navigation panel to remove the support-console Pod from your cluster.
Obtain the support bundle using the Support Console API¶
Forward the Support Console to port 8000:
kubectl --namespace default port-forward service/support-console 8000:8000
Obtain the support bundle, specifying the namespace from which you want to collect support data. By default, the Support Console gathers support data from the default namespace:

curl localhost:8000/collect?ns=<namespace> -O -J
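For example, to collect support data from a hypothetical msr namespace, saving the bundle under the file name that the server supplies (the -O -J flags instruct curl to honor the Content-Disposition header):

curl "localhost:8000/collect?ns=msr" -O -J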
Optional. If you no longer require access to the Support Console, run the following command to remove the support-console Pod from your cluster:

helm uninstall support-console
MSR support bundles on MKE¶
If your MSR instance runs on MKE, you can use any of the following methods to obtain a support bundle.
Obtain a full-cluster support bundle using the MKE web UI¶
Log in to the MKE web UI as an administrator.
In the left-side navigation panel, navigate to <user name> and click Support Bundle. The support bundle download can take several minutes to complete.
Note
The default name for the generated support bundle file is docker-support-<cluster-id>-YYYYmmdd-hh_mm_ss.zip. Mirantis suggests that you not alter the file name before submitting it to the customer portal. However, if necessary, you can add a custom string between docker-support and <cluster-id>, as in docker-support-MyProductionCluster-<cluster-id>-YYYYmmdd-hh_mm_ss.zip.

Submit the support bundle to Mirantis Customer Support by clicking Share support bundle on the success prompt that displays once the support bundle has finished downloading.
Fill in the Jira feedback dialog, and click Submit.
Obtain a full-cluster support bundle using the MKE API¶
Create an environment variable with the user security token:
export AUTHTOKEN=$(curl -sk -d \
  '{"username":"<username>","password":"<password>"}' \
  https://<mke-ip>/auth/login | jq -r .auth_token)
Obtain a cluster-wide support bundle:
curl -k -X POST -H "Authorization: Bearer $AUTHTOKEN" \
  -H "accept: application/zip" https://<mke-ip>/support \
  -o docker-support-$(date +%Y%m%d-%H_%M_%S).zip
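Before uploading, you can verify that the downloaded archive is intact. A quick, optional check (the file name reflects the timestamp of the download):

# List the archive contents without extracting them
unzip -l docker-support-*.zip | head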
Append the --submit option to the support command to submit the support bundle to Mirantis Customer Support. In addition to the support bundle, the following information will also be sent:
Cluster ID
MKE version
MCR version
OS/architecture
Cluster size
For more information on the support command, refer to mke-cli-support.
Obtain a single-node support bundle using the CLI¶
Use SSH to log into a node and run:
MKE_VERSION=$((docker container inspect ucp-proxy \
  --format '{{index .Config.Labels "com.docker.ucp.version"}}' \
  2>/dev/null || echo -n 3.7.4)|tr -d [[:space:]])

docker container run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --log-driver none \
  mirantis/ucp:${MKE_VERSION} \
  support > \
  docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
Important

If SELinux is enabled, include the following flag: --security-opt label=disable.

Note

The CLI-derived support bundle only contains logs for the node on which you are running the command. If you are running a high availability MKE cluster, collect support bundles from all manager nodes.
Append the --submit option to the support command to submit the support bundle to Mirantis Customer Support. In addition to the support bundle, the following information will also be sent:
Cluster ID
MKE version
MCR version
OS/architecture
Cluster size
For more information on the support command, refer to mke-cli-support.
Use the MKE CLI with PowerShell to get a support bundle¶
Run the following command on Windows worker nodes to collect the support information and have it placed automatically into a zip file:
$MKE_SUPPORT_DIR = Join-Path -Path (Get-Location) -ChildPath 'dsinfo'
$MKE_SUPPORT_ARCHIVE = Join-Path -Path (Get-Location) -ChildPath $('docker-support-' + (hostname) + '-' + (Get-Date -UFormat "%Y%m%d-%H_%M_%S") + '.zip')
$MKE_PROXY_CONTAINER = & docker container ls --filter "name=ucp-proxy" --format "{{.Image}}"
$MKE_REPO = if ($MKE_PROXY_CONTAINER) { ($MKE_PROXY_CONTAINER -split '/')[0] } else { 'mirantis' }
$MKE_VERSION = if ($MKE_PROXY_CONTAINER) { ($MKE_PROXY_CONTAINER -split ':')[1] } else { '3.6.0' }
docker container run --name windowssupport `
-e UTILITY_CONTAINER="$MKE_REPO/ucp-containerd-shim-process-win:$MKE_VERSION" `
-v \\.\pipe\docker_engine:\\.\pipe\docker_engine `
-v \\.\pipe\containerd-containerd:\\.\pipe\containerd-containerd `
-v 'C:\Windows\system32\winevt\logs:C:\eventlogs:ro' `
-v 'C:\Windows\Temp:C:\wintemp:ro' $MKE_REPO/ucp-dsinfo-win:$MKE_VERSION
docker cp windowssupport:'C:\dsinfo' .
docker rm -f windowssupport
Compress-Archive -Path $MKE_SUPPORT_DIR -DestinationPath $MKE_SUPPORT_ARCHIVE
API Reference¶
Warning
In correlation with the end of life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version as of 2024-APR-20. The latest MSR product documentation is available here.
Mirantis offers two APIs for use with the Mirantis Secure Registry (MSR).
MSR API¶
The Mirantis Secure Registry (MSR) API is a REST API, available using HTTPS, that enables programmatic access to resources managed by MSR.
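As with most REST APIs, you can exercise the MSR API with any HTTP client. The following is an illustrative sketch only: the address and credentials are placeholders, and the repository-listing endpoint path is an assumption patterned on earlier MSR versions rather than a definitive reference.

# List the repositories visible to the authenticated user (illustrative endpoint)
curl -sk -u <username>:<password> https://<msr-url>/api/v0/repositories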
For details, refer to the MSR API reference that corresponds to your product version.
eNZi API¶
The eNZi service provides authentication and authorization functions for MSR. It offers a rich API through which users and OpenID Connect clients can query identity, sessions, membership, teams, and label permissions.
For details, refer to the API specification that aligns with your MSR version:
MSR 3.0.8 to latest: Mirantis eNZi 1.0.85 API Reference
MSR 3.0.7 and MSR 3.0.6: Mirantis eNZi 1.0.7 API Reference
MSR 3.0.5: Mirantis eNZi 1.0.6 API Reference
MSR 3.0.4: Mirantis eNZi 1.0.3 API Reference
MSR 3.0.3: Mirantis eNZi 1.0.2 API Reference
MSR 3.0.2: Mirantis eNZi 1.0.1 API Reference
MSR 3.0.1 and MSR 3.0.0: Mirantis eNZi 1.0.0 API Reference
CLI Reference¶
Warning
In correlation with the end of life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version as of 2024-APR-20. The latest MSR product documentation is available here.
You can use the MSR CLI tool to back up and restore the software, perform database administration tasks, and gather information on RethinkDB clusters. The tool runs in interactive mode by default, issuing prompts as necessary for any required values.

Following MSR installation, run the helm get notes command for instructions on how to access the MSR CLI:

helm get notes <RELEASE_NAME>

Additional help is available for the CLI and for each command by way of the --help option.
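For example, with a hypothetical Helm release named msr installed in the msr namespace:

# Retrieve the post-install notes, which include CLI access instructions
helm get notes msr --namespace msr

# Display help for the CLI as a whole and for an individual command
msr --help
msr backup --help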
Usage¶
msr [global options] command [command options] [arguments...]
Commands¶
mirantis/msr auth register¶
Register MSR with an authentication service.
Usage¶
msr auth register [command options] [URL]
Options¶
| Option | Description |
| --- | --- |
|  | Verifies the authentication service certificate with the custom CAs in PEMFILE. |
|  | The admin user name. |
|  | The password for the named admin user. Only valid in conjunction with the admin user name option. |
|  | Registers the CA certificate in PEMFILE with eNZi to authenticate the MSR application (required). |
|  | The web address for MSR. |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Indicates whether to suppress interactive prompts. |
|  | Indicates whether to show help. |
mirantis/msr auth status¶
Obtain the status of MSR authentication registration.
Usage¶
msr auth status [command options] [arguments...]
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Indicates whether to show help. |
mirantis/msr backup¶
Create a backup of MSR.
Usage¶
To back up MSR metadata to a file:
msr backup [command options] [filename]
To back up MSR to standard output:
msr backup [command options] -
Description¶
The msr backup command creates a backup of the metadata in use by MSR, which you can restore with the msr restore command.
Note
msr backup only creates backups of configuration, image, and chart metadata. It does not back up the Docker images or Helm charts stored in your registry. Mirantis suggests that you implement a separate backup policy for the contents of your storage backend, taking into consideration whether your MSR installation is configured to store images on the filesystem or through the use of a cloud provider.
Important
Mirantis recommends that you store your backup in a secure location, as it contains sensitive information.
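A minimal sketch of both invocation forms, with illustrative file names (the command may still prompt interactively for any required values):

# Back up MSR metadata to a file
msr backup msr-metadata-backup.tar

# Stream the backup to standard output, compressing it on the fly
msr backup - | gzip > msr-metadata-backup.tar.gz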
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Ignores the events table during a backup. |
|  | Indicates whether to show help. |
mirantis/msr db migrate¶
Apply MSR database migrations.
Usage¶
msr db migrate [command options] [arguments...]
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The number of RethinkDB servers to replicate new tables across. |
|  | Indicates whether to show help. |
mirantis/msr db scale¶
Change the replication factor of RethinkDB tables in use by MSR.
When the --replicas flag is present, the db scale command uses the associated value as the replication factor for the tables. Otherwise, the replication factor is set automatically, based on the number of RethinkDB servers that are connected to the cluster at that moment.
Usage¶
msr db scale [command options] [arguments...]
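For example, to set an explicit replication factor of three (an arbitrary illustrative value; omit the flag to have the factor derived from the number of connected servers):

msr db scale --replicas 3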
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The number of RethinkDB servers to replicate new tables across. |
|  | Indicates whether to show help. |
mirantis/msr db wait¶
Wait for all database tables to be ready.
Usage¶
msr db wait [command options] [arguments...]
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The maximum amount of time to wait. For example, |
|  | Indicates whether to show help. |
mirantis/msr db emergency-repair¶
Recover MSR tables from a loss of quorum.
Usage¶
msr db emergency-repair [command options] [arguments...]
Description¶
Use the db emergency-repair command to repair all RethinkDB tables in use by MSR that have lost quorum. The command accomplishes its work by running the RethinkDB unsafe_rollback emergency repair on the tables and then scaling the tables in the same manner as the msr db scale command (refer to mirantis/msr db scale for information on which replication factor to use).
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The number of RethinkDB servers to replicate new tables across once emergency-repair is complete. |
|  | Indicates whether to show help. |
See also
The official RethinkDB documentation on Emergency Repair
mirantis/msr init¶
Initialize MSR and set it up with an authentication service.
Usage¶
msr init [command options] [URL]
Options¶
| Option | Description |
| --- | --- |
|  | Verifies the authentication service certificate with the custom CAs in PEMFILE. |
|  | The admin user name. |
|  | The password for the named admin user. Only valid in conjunction with the admin user name option. |
|  | Registers the CA certificate in PEMFILE with eNZi to authenticate the MSR application (required). |
|  | The web address for MSR. |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The number of RethinkDB servers to replicate new tables across. |
|  | Indicates whether to suppress interactive prompts. |
|  | Indicates whether to show help. |
mirantis/msr restore¶
Restore MSR from an existing backup.
Usage¶
To restore MSR metadata from a file:
msr restore [command options] [filename]
To restore MSR from standard input:
msr restore [command options] -
Description¶
The msr restore command restores the metadata used by MSR from a backup file generated by the msr backup command.
Note
msr restore does not restore Docker images or Helm charts. Mirantis suggests that you implement a separate restore procedure for the contents of your storage backend, taking into consideration whether your MSR installation is configured to store images on the local filesystem or through a cloud provider.
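A minimal sketch mirroring the two backup forms, with illustrative file names:

# Restore MSR metadata from a backup file
msr restore msr-metadata-backup.tar

# Restore from a compressed backup streamed through standard input
gunzip -c msr-metadata-backup.tar.gz | msr restore -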
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Indicates whether to show help. |
mirantis/msr rethinkdb count¶
Obtain the number of active servers in the RethinkDB cluster.
Usage¶
msr rethinkdb count [command options] [arguments...]
Description¶
The rethinkdb count command prints to standard output the number of servers in the RethinkDB cluster that have the server tag default.
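For example, on a healthy three-server cluster the command prints just the count (illustrative output):

msr rethinkdb count
# Output:
# 3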
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Blocks the print action until the cluster contains a set number of servers. |
|  | The maximum amount of time to wait. For example, |
|  | Indicates whether to show help. |
mirantis/msr rethinkdb list¶
List the servers in the RethinkDB cluster.
Usage¶
msr rethinkdb list [command options] [arguments...]
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | The output format. |
|  | Indicates whether to show help. |
mirantis/msr rethinkdb decommission¶
Decommission RethinkDB servers.
Usage¶
msr rethinkdb decommission [command options] SERVER...
Description¶
Use the rethinkdb decommission command to remove all tags from a RethinkDB server, so that table replicas are removed from that server the next time the respective tables are reconfigured (scaled).
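A sketch of a typical retirement workflow, assuming a server named rethinkdb-2 (a hypothetical name; use msr rethinkdb list to find actual server names):

# Remove all tags from the server so it stops hosting table replicas
msr rethinkdb decommission rethinkdb-2

# Reconfigure (scale) the tables so that replicas move off the server
msr db scale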
Options¶
| Option | Description |
| --- | --- |
|  | The RethinkDB address. |
|  | The path for the RethinkDB client certificate file. |
|  | The path for the RethinkDB client key file. |
|  | The path for the RethinkDB CA certificate file. |
|  | Indicates whether to skip TLS verification. |
|  | Indicates whether to show help. |
Note
Due to the significant changes introduced with the MSR 3.0.0 release, several legacy MSR CLI commands became unnecessary and have been removed. The following commands are no longer available:
dtr destroy
dtr images
dtr join
dtr reconfigure
dtr remove
dtr upgrade
Global options¶
| Option | Description |
| --- | --- |
|  | Sets the log level. |
|  | Indicates whether to show help. |
Release Notes¶
Warning
In correlation with the end of life (EOL) date for MSR 3.0.x, Mirantis stopped maintaining this documentation version as of 2024-APR-20. The latest MSR product documentation is available here.
Considerations
CentOS 8 entered EOL status as of 31-December-2021. For this reason, Mirantis no longer supports CentOS 8 for all versions of MSR. We encourage customers who are using CentOS 8 to migrate onto any one of the supported operating systems, as further bug fixes will not be forthcoming.
Note
The Mirantis Migration Tool (MMT) release notes are included within the Migration Guide.
MSR 3.0.12 (current)
Patch release for MSR 3.0 that addresses middleware updates, including Golang 1.21.8 and Synopsys Scanner 2023.12.
MSR 3.0.11
Patch release for MSR 3.0 that addresses middleware updates, including Golang 1.20.12 and Synopsys Scanner 2023.9. In addition, a number of CVEs have been resolved.
MSR 3.0.10
Patch release for MSR 3.0 that focuses on middleware updates, including Go 1.20.10.
MSR 3.0.9
Patch release for MSR 3.0 that focuses on the resolution of CVEs and a bug fix in the web UI.
MSR 3.0.8
Patch release for MSR 3.0 that focuses on middleware updates including Go 1.20.5 and Synopsys scanner 2023.3.0.
MSR 3.0.7
Patch release for MSR 3.0 that focuses on CVE fixes. Refer to Security information for full details.
In addition, MSR 3.0.7 includes a Go upgrade to version 1.20.3.
MSR 3.0.6
Patch release for MSR 3.0 that focuses on middleware updates including Go 1.19.4 and Synopsys scanner 2022.12.2.
The MSR 3.0.6 release also features updated SAML proxy settings in the MSR web UI.
MSR 3.0.5
Patch release for MSR 3.0 that focuses on CVE fixes. Refer to Security information for full details.
In addition, MSR 3.0.5 offers an upgrade of Synopsys scanner to version 2022.9.1.
MSR 3.0.4
Patch release for MSR 3.0 that focuses on CVE fixes. Refer to Security information for full details.
In addition, MSR 3.0.4 offers an upgrade of Synopsys scanner to version 2022.6.0 and improved command-line logging.
MSR 3.0.3
Patch release for MSR 3.0 focusing on delivering bug and CVE fixes. Refer to Addressed issues and Security information for full details.
In addition, MSR 3.0.3 offers an upgrade of Synopsys scanner to release 2022.3.1.
MSR 3.0.2
Patch release for MSR 3.0 introducing the following key features:
CVE database update failure information is easier to discover
Webhooks can be set up on Helm chart-related actions
Improvements to vulnerability scan summary counts presentation
Update of cert-manager prerequisite to 1.7.2
MSR 3.0.1
Patch release for MSR 3.0 introducing the following key features:
MSR data restore speed increase
Update of Synopsys scanner to release 2021.12.0
Update of cert-manager prerequisite to 1.6.1
MSR 3.0.0
Initial MSR 3.0.0 release introducing the following key features:
MSR runs on any standard Kubernetes 1.20 and above distribution.
MSR no longer requires deployment onto dedicated nodes.
Jobrunner workers are now grouped into deployments, with their capacity maps defined at the deployment level, rather than separately for each replica.
3.0.12¶
| Release date | Name | Highlights |
| --- | --- | --- |
| 2024-MAR-27 | MSR 3.0.12 | Patch release for MSR 3.0 that addresses middleware updates, including Golang 1.21.8 and Synopsys Scanner 2023.12. |
Addressed issues¶
The addressed issues in MSR 3.0.12 include:
[ENGDTR-4158] Fixed an issue wherein the initialEvaluation flag of a created or updated tag pruning policy was set to true, which caused its evaluation to run in the API server. The evaluation of the policy is now executed in the JobRunner as a single tag_prune job.

[ENGDTR-4159] Fixed an issue wherein the tag pruning policy feature, responsible for the automated testing of tags and providing the count of affected tags, was preventing the creation of policies. To ensure the reliable creation of tag pruning policies, this feature has been removed. Consequently, users will not see the number of affected tags when creating new policies. For testing purposes before evaluation, Mirantis recommends that you use the <