The Mirantis Secure Registry (MSR) documentation is your resource for
information on how to deploy and operate an MSR instance. The intent of the
content therein is to provide users with an understanding of the core concepts
of the product, while also providing instruction sufficient to deploy and
operate the software.
Mirantis is committed to constantly building on and improving the MSR
documentation, in response to the feedback and kind requests we receive from
the MSR user base.
Mirantis Secure Registry (MSR) is a solution that enables enterprises to store
and manage their container images on-premises or in their virtual private
clouds. With the advent of MSR 3.1.0, the software can run alongside your other
apps in any standard Kubernetes distribution, or you can deploy it onto a Swarm
cluster. As a result, the MSR user has far greater flexibility, as many
resources are administered by the orchestrator rather than the registry itself.
And while MSR 3.1.0 is not integrated with Mirantis Kubernetes Engine (MKE) as
it was prior to version 3.0.0, it runs just as well on MKE as on any supported
Kubernetes distribution or on Docker Swarm.
The security that is built into MSR enables you to verify and trust the
provenance and content of your applications and ensure secure separation of
concerns. Using MSR, you can meet security and regulatory compliance requirements.
In addition, the automated operations and integration with CI/CD speed up
application testing and delivery. The most common use cases for MSR include:
Helm charts repositories
Deploying applications to Kubernetes can be complex. Setting up a single
application can involve creating multiple interdependent Kubernetes
resources, such as pods, services, deployments, and replica sets. Each of
these requires the manual creation of a detailed YAML manifest file, which
takes significant work and time. With Helm charts (packages that consist of
a few YAML configuration files and some templates that are rendered into
Kubernetes manifest files), you can save time by installing the software you
need with all of its dependencies, and then upgrading and configuring it.
Automated development
Easily create an automated workflow where you push a commit that
triggers a build on a CI provider, which pushes a new image into
your registry. Then, the registry fires off a webhook and triggers
deployment on a staging environment, or notifies other systems
that a new image is available.
Secure and vulnerability-free images
When an industry requires applications to comply with certain security
standards to meet regulatory compliance requirements, your applications are
only as secure as the images that run them. To ensure that your
images are secure and free of vulnerabilities, track your
images using a binary image scanner to detect the components in images
and identify any associated CVEs. In addition, you can run image
enforcement policies to prevent vulnerable or inappropriate images
from being pulled and deployed from your registry.
The Mirantis Secure Registry (MSR) Reference Architecture provides
comprehensive technical information on MSR, including component particulars,
infrastructure specifications, and networking and volumes detail.
Mirantis Secure Registry (MSR) is an enterprise-grade image storage
solution. Installed behind a firewall, either on-premises or on a virtual
private cloud, MSR provides a secure environment where users can store and
manage their images.
Starting with MSR 3.1.0, MSR can run alongside your other apps in any standard
Kubernetes distribution, or you can deploy it onto a Swarm cluster. As a
result, the MSR user has a great deal of flexibility, as many resources are
administered by the orchestrator rather than by the registry itself.
While MSR 3.1.x is not integrated with Mirantis Kubernetes Engine (MKE), as it
was prior to version 3.0.0, it runs just as well on MKE as on any
supported Kubernetes distribution or on Docker Swarm.
The advantages of MSR include the following:
Image and job management
MSR has a web-based user interface used for browsing images and auditing
repository events. With the web UI, you can see which Dockerfile lines
produced an image and, if security scanning is enabled, a list of all of the
software installed in that image and any Common Vulnerabilities and Exposures
(CVEs). You can also audit jobs with the web UI.
MSR can serve as a continuous integration and continuous delivery (CI/CD)
component, in the building, shipping, and running of applications.
Availability
MSR is highly available through the use of multiple replicas of all
containers and metadata. As such, MSR will continue to operate in the event
of machine failure, thus allowing for repair.
Efficiency
MSR can reduce the bandwidth used when pulling images by caching images
closer to users. In addition, MSR can clean up unreferenced manifests and
layers.
Built-in access control
As with Mirantis Kubernetes Engine (MKE), MSR uses role-based access control
(RBAC), which allows you to manage image access, either manually, with LDAP,
or with Active Directory.
Security scanning
A security scanner is built into MSR, which can be used to discover the
versions of the software that is in use in your images. This tool scans each
layer and aggregates the results, offering a complete picture of what is
being shipped as a part of your stack. Most importantly, as the security
scanner is kept up-to-date by tapping into a periodically updated
vulnerability database, it is able to provide unprecedented insight into your
exposure to known security threats.
Image signing
MSR ships with Notary, which allows you to sign and verify images using
Docker Content Trust.
Mirantis Secure Registry (MSR) is a containerized application that runs on a
Kubernetes cluster. After deploying MSR, you can use your Docker CLI client to
log in, push images, and pull images. For high availability, you can
horizontally scale your MSR workloads across multiple Kubernetes worker nodes.
Third-party components are present only in Kubernetes deployments.
Swarm-based installations include only the components listed in the
MSR installation workloads table.
To mitigate the risk of security breaches and exploits, Mirantis strongly
recommends upgrading the third-party components to the latest supported
version.
The communication flow between MSR workloads is illustrated below:
[msr-architecture diagram]
Note
The third-party cert-manager component interacts with all of the components
displayed in the above diagram.
4 GB of RAM and 1 vCPU available, for reservation on a Kubernetes
worker node
One Kubernetes persistent volume with 24 GB of available
storage that supports the ReadWriteOnce volume access mode, or a
StorageClass that can provision such a volume
An existing PostgreSQL server with sufficient storage for a 24 GB
database
Three 64 GB Kubernetes persistent volumes that support the
ReadWriteOnce volume access mode, or a StorageClass that can
provision such volumes
Image data storage
Use any of the following:
One Kubernetes persistent volume with 25 - 100 GB of
available storage that supports the ReadWriteMany volume access
mode, or a StorageClass that can provision such a volume
One cloud object storage bucket, such as Amazon S3, with 25 - 100 GB
of available storage
Two Kubernetes nodes, each with 4 GB of RAM and 1 vCPU available for
reservation
Two persistent volumes, each with 24 GB of available
storage, that support the ReadWriteOnce volume access mode, or a
StorageClass that can provision such volumes
An existing high availability PostgreSQL server with sufficient
storage for a 24 GB database, which supports synchronous replication
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the
PodDisruptionBudget policy/v1beta1 Kubernetes API, which is no
longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
By default, MSR creates the following persistent volume claims (PVCs):
<release-name>
  Stores image data when MSR is configured to store image data in a
  persistent volume.
<release-name>-rethinkdb-cluster-<n>
  Stores repository metadata.
<release-name>-scanningstore-<n>
  Stores vulnerability scan data when MSR is configured to deploy an
  internal PostgreSQL cluster.
You can customize the storage class that is used to provision persistent
volumes for these claims, or you can pre-provision volumes for use with MSR.
Refer to install-online for more information.
nfs is commonly used in production environments. The hostPath
and local options are not suitable for production; however, they
may be of use in certain limited testing scenarios.
Cloud storage is supported on both Kubernetes and Swarm deployments.
MSR is compatible with the following storage providers:
NFS
Amazon S3
Microsoft Azure
OpenStack Swift
Google Cloud Storage
Alibaba Cloud Object Storage Service
Note
The deployment of MSR to Windows nodes is not supported.
The matches operator evaluates subject fields against a user-provided regular
expression (regex). The regex for matches must follow the specification
in the official Go documentation: Package syntax.
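For example, assuming a hypothetical tag-based policy rule, a matches pattern
such as ^v[0-9]+\.[0-9]+\.[0-9]+$ would apply the rule only to tags that
follow semantic versioning, while a pattern such as .*-dev$ would apply it
only to tags that end in -dev.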
Each of the following policies uses the rule engine:
Targeted to deployment specialists and QA engineers, the MSR Installation Guide
provides the detailed information and procedures you need to install
and configure Mirantis Secure Registry (MSR).
There are three paths available for the installation of MSR 3.1.x: MSR on
Swarm, MSR on Kubernetes using the MSR Operator, and MSR on Kubernetes using a
Helm chart.
The information herein is targeted solely to Kubernetes deployments.
To install MSR on MKE you must first configure both the
default:postgres-operator user account and the default:postgres-pod
service account in MKE with the privileged permission.
To prepare MKE for MSR install:
Log in to the MKE web UI.
In the left-side navigation panel, click the <user name>
drop-down to display the available options.
For MKE 3.6.0 or earlier, click Admin Settings > Orchestration.
For MKE 3.6.1 or later, click Admin Settings > Privileges.
Navigate to the User account privileges section.
Enter <namespace-name>:postgres-operator into the User
accounts field.
Note
You can replace <namespace-name> with default to indicate the use
of the default namespace.
Select the privileged check box.
Scroll down to the Service account privileges section.
Enter <namespace-name>:postgres-pod into the Service accounts
field.
Note
You can replace <namespace-name> with default to indicate the use
of the default namespace.
Select the privileged checkbox.
Click Save.
Important
For already deployed MSR instances, issue a rolling restart of the
postgres-operator deployment:
In MSR 3.1, you can use either of two methods for installing the software on
any Kubernetes distribution that supports persistent storage: the recommended
MSR Operator method and the legacy Helm chart method.
Install and configure your Kubernetes distribution.
Ensure that the default StorageClass on your cluster supports the dynamic
provisioning of volumes. If necessary, refer to the Kubernetes documentation
Change the default StorageClass.
If no default StorageClass is set, you can specify a StorageClass for MSR to
use by providing the following additional parameters to the custom
resource manifest:
The first of these three parameters is only applicable when you install MSR
with a persistentVolume backend, the default setting:
spec:
  registry:
    storage:
      backend: 'persistentVolume'
MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the
ReadWriteMany access mode, depending on the purpose for which they are
created. Thus, the StorageClass provisioner that MSR uses must be able to
provision PersistentVolumes with at least the ReadWriteOnce and
ReadWriteMany access modes.
The <release-name> PVC is created by default with the ReadWriteMany
access mode. If you choose to install MSR with a persistentVolume backend,
you can override this default access mode by adding the following parameter
to the custom resource manifest:
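A minimal sketch of such an override, assuming the accessMode key sits under
the persistentVolume storage settings of the custom resource (the exact field
path is an assumption and may differ in your MSR version):

spec:
  registry:
    storage:
      persistentVolume:
        accessMode: 'ReadWriteOnce'   # assumed field name; overrides the default ReadWriteMany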
The following key components must be in place before you can install MSR on
Kubernetes using the online method:
cert-manager
Postgres Operator
RethinkDB Operator
MSR Operator
The MSR Operator, RethinkDB Operator, and MSR must all run in the same
namespace. With the MSR Operator, however, you can install cert-manager
and the Postgres Operator in a different namespace from the one where
the MSR resource is running.
Tip
To mitigate the risk of security breaches and exploits, Mirantis strongly
recommends upgrading any third-party components that are already installed
to the latest supported version before proceeding with installation.
To ensure that all of the key prerequisites are present:
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
By default, MSR uses the persistent volume claims detailed in
Volumes.
If you have a pre-existing PersistentVolume that contains image blob data
that you intend to use with a new instance of MSR, you can add the following
to the MSR custom resource manifest to provide the new instance with the
name of the associated PersistentVolumeClaim:
Make further edits to the cr-sample-manifest.yaml file as needed.
Default values will be applied for any fields that are both present in the
manifest and left blank. If the field is not present in the manifest, it
will receive an empty value.
Note
You can override the default MSR password, which is password, by
adding the enzi.adminPassword parameter to the
cr-sample-manifest.yaml file:
enzi:
  adminPassword: '<my-password>'
Invoke the following command to run the webhook health check and create the
custom resource:
If you are using MKE with your cluster, download and configure the client
bundle. Otherwise, ensure that you can access the cluster using kubectl,
either by updating the default Kubernetes config file or by setting the
KUBECONFIG environment variable to the path of the unique config file for
the cluster.
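For example, a minimal check of cluster access; the kubeconfig file path shown
here is illustrative:

export KUBECONFIG=$HOME/.kube/msr-cluster-config
# Confirm that the target cluster responds
kubectl get nodes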
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod
must have Running status. If this is not the case, it is likely that
the StorageClass is missing or misconfigured, or that no default
StorageClass is set. To rectify this, you must configure a default
StorageClass and then reinstall MSR. Alternatively, you can specify a
StorageClass for MSR to use by providing the following in the custom
resource manifest when you install MSR:
If nothing returns after you have run the command, wait a few minutes
and run the command again.
If the command returns an FQDN, it may be necessary to wait for the new
DNS record to resolve. You can check the resolution status by running
the following script, inserting the output string you received in
place of $FQDN:
while :; do dig +short $FQDN; sleep 5; done
If the command returns an IP address, you can access the load balancer
at: https://<load-balancer-IP>/
When one or more IP addresses display, you can interrupt the shell loop and
access your MSR load balancer at: https://$FQDN/
Note
The load balancer will stop any attempt to tear down the VPC in which the
EC2 instances are running. As such, in order to tear down the VPC you must
first remove the load balancer:
kubectl delete svc msr-public-elb
Optional. Configure MSR to use Notary to sign images. To do this, update
NGINX to add the DNS name:
Modify your custom resource manifest to contain the following
values:
Herein, Mirantis provides step-by-step instruction on how to install MSR onto
an air-gapped Kubernetes cluster using the MSR Operator.
For documentation purposes, Mirantis assumes that you are installing MSR
on an offline Kubernetes cluster from an Internet-connected machine that has
access to the Kubernetes cluster. In doing so, you will use Helm and the MSR
Operator to perform the MSR installation from the Internet-connected machine.
Confirm that the default StorageClass on your cluster supports dynamic
volume provisioning. For more information, refer to the Kubernetes
documentation Change the default StorageClass.
If a default StorageClass is not set, you can specify a StorageClass for MSR
to use by providing the following additional parameters to the custom
resource manifest:
The first of these three parameters is only applicable when you install MSR
with a persistentVolume backend, the default setting:
spec:
  registry:
    storage:
      backend: 'persistentVolume'
MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the
ReadWriteMany access mode, depending on the purpose for which they are
created. Thus, the StorageClass provisioner that MSR uses must be able to
provision PersistentVolumes with at least the ReadWriteOnce and
ReadWriteMany access modes.
The <release-name> PVC is created by default with the ReadWriteMany
access mode. If you choose to install MSR with a persistentVolume backend,
you can override this default access mode by adding the following parameter
to the custom resource manifest:
On the Internet-connected computer, configure your environment to use the
kubeconfig of the offline Kubernetes cluster. You can do this by setting a
KUBECONFIG
environment variable.
Prepare a Docker registry on the Internet-connected machine that contains all
of the images that are necessary to install MSR. Kubernetes will pull the
required images from this registry to the offline nodes during the installation
of the prerequisites and MSR.
On the Internet-connected machine, set up a Docker registry that the offline
Kubernetes cluster can access using a private IP address. For
more information, refer to
Docker official documentation: Deploy a registry server.
Add the msrofficial, postgres-operator, jetstack, and
rethinkdb-operator Helm repositories:
Obtain the names of all the images that are required for installing MSR
from the desired version of the Helm charts, for MSR, postgres-operator,
cert-manager, and rethinkdb-operator. You can do this by templating each
chart and grepping for image:.
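For example, a sketch of this approach for the MSR chart, assuming the chart
is published as msr in the msrofficial repository (adjust the chart reference
and version to match your installation, and repeat for the other charts):

helm template msrofficial/msr --version <msr-chart-version> | grep 'image:' | sort -u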
Push all the required images to the Docker registry. For example:
docker push <registry-ip>/msr/msr-api:<msr-version>
Create the following YAML files, which you will reference to override the
image repository information that is contained in the Helm charts used for
MSR installation:
The following key components must be in place before you can install MSR on
Kubernetes using the offline method:
cert-manager
Postgres Operator
RethinkDB Operator
MSR Operator
The MSR Operator, RethinkDB Operator, and MSR must all run in the same
namespace. With the MSR Operator, however, you can install cert-manager
and the Postgres Operator in a different namespace from the one where
the MSR resource is running.
Tip
To mitigate the risk of security breaches and exploits, Mirantis strongly
recommends upgrading any third-party components that are already installed
to the latest supported version before proceeding with installation.
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Run the helm install command with spilo_* parameters:
Verify that Postgres Operator is in the Running state:
kubectl get pods
To troubleshoot a failing Postgres Operator Pod, run the following
command:
kubectl describe pod <postgres-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <postgres-operator-pod-name>
Note
By default, MSR uses the persistent volume claims detailed in
Volumes.
If you have a pre-existing PersistentVolume that contains image blob data
that you intend to use with a new instance of MSR, you can add the following
to the MSR custom resource manifest to provide the new instance with the
name of the associated PersistentVolumeClaim:
Identify the msr-operator image reference in the
msr-operator.yaml file:
cat msr-operator.yaml | grep -n 'msr-operator:'
Edit the line to refer to the correct image:
image: <registry-ip>/msr/msr-operator:1.0.4
Install the MSR Operator:
kubectl apply --server-side=true -f msr-operator.yaml
Verify that the MSR Operator Pod is in the Running
state:
kubectl get pods
The MSR Operator Pod name begins with msr-operator-controller-manager.
To troubleshoot a failing MSR Operator Pod, run the following
command:
kubectl describe pod <msr-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <msr-operator-pod-name>
Important
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Edit the cr-sample-manifest.yaml to include a reference to the
offline registry:
spec:
  image:
    registry: <registry-ip>
Make further edits to the cr-sample-manifest.yaml file as needed.
Default values will be applied for any fields that are both present in the
manifest and left blank. If the field is not present in the manifest, it
will receive an empty value.
Note
You can override the default MSR password, which is password, by
adding the enzi.adminPassword parameter to the
cr-sample-manifest.yaml file:
enzi:
  adminPassword: '<my-password>'
Invoke the following command to run the webhook health check and create the
custom resources.
If you are using MKE with your cluster, download and configure the client
bundle. Otherwise, ensure that you can access the cluster using kubectl,
either by updating the default Kubernetes config file or by setting the
KUBECONFIG environment variable to the path of the unique config file for
the cluster.
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod
must have Running status. If this is not the case, it is likely that
the StorageClass is missing or misconfigured, or that no default
StorageClass is set. To rectify this, you must configure a default
StorageClass and then reinstall MSR. Alternatively, you can specify a
StorageClass for MSR to use by providing the following in the custom
resource manifest when you install MSR:
You must have the following key components in place before you can install
MSR online using a Helm chart: a Kubernetes platform, cert-manager, and the
Postgres Operator.
Tip
To mitigate the risk of security breaches and exploits, Mirantis strongly
recommends upgrading any third-party components that are already installed
to the latest supported version before proceeding with installation.
Install and configure your Kubernetes distribution.
Ensure that the default StorageClass on your cluster supports the dynamic
provisioning of volumes. If necessary, refer to the Kubernetes documentation
Change the default StorageClass.
If no default StorageClass is set, you can specify a StorageClass for MSR to
use by providing the following additional parameters to MSR when running the
helm install command:
The first of these three parameters is only applicable when you install MSR
with a persistentVolume backend, the default setting:
--set registry.storage.backend=persistentVolume
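A hedged example of how these parameters might be combined in a single
helm install command; the storageClass key under
registry.storage.persistentVolume is an assumed parameter name, so confirm it
against the chart values for your MSR version:

helm install msr msrofficial/msr --version <msr-chart-version> \
  --set registry.storage.backend=persistentVolume \
  --set registry.storage.persistentVolume.storageClass=<storage-class-name>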
MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the
ReadWriteMany access mode, depending on the purpose for which they are
created. Thus, the StorageClass provisioner that MSR uses must be able to
provision PersistentVolumes with at least the ReadWriteOnce and
ReadWriteMany access modes.
The <release-name> PVC is created by default with the ReadWriteMany
access mode. If you choose to install MSR with a persistentVolume backend,
you can override this default access mode with the following parameter
when running the helm install command:
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Run the following helm install command, including spilo_*
parameters:
Verify that Postgres Operator is in the Running state:
kubectl get pods
To troubleshoot a failing Postgres Operator Pod, run the following
command:
kubectl describe pod <postgres-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <postgres-operator-pod-name>
Note
By default, MSR uses the persistent volume claims detailed in
Volumes.
If you have a pre-existing PersistentVolume that contains image blob data
that you intend to use with a new instance of MSR, you can use Helm to
provide the new instance with the name of the associated
PersistentVolumeClaim:
As the Helm chart values include the default MSR credentials information,
Mirantis strongly recommends that you change these credentials immediately
following installation.
Beginning with MSR 3.1.10, you can set your password during installation
using the enzi.adminPassword parameter. Alternatively, you can define
the enzi.adminPassword parameter in the values.yaml file in advance
of MSR installation. Be aware that the enzi.adminPassword parameter is
only used during MSR installation and does not affect upgrades.
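For example, a sketch of setting the password at install time (MSR 3.1.10 or
later), assuming the chart is installed from the msrofficial repository:

helm install msr msrofficial/msr --version <msr-chart-version> \
  --set enzi.adminPassword='<my-password>'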
Important
Mirantis has transitioned to an OCI-based Helm registry for
registry.mirantis.com. As a result, Helm repository management is no
longer required. Commands that rely on Helm repository operations,
such as helm repo update and helm upgrade,
will fail with HTTP 4xx errors.
For both new installations and upgrades, use the OCI-based registry URL
directly. To check for available upgrades, run
helm upgrade --dry-run without specifying a version.
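As an illustration only, an OCI-based upgrade check might resemble the
following; the exact chart path under registry.mirantis.com is an assumption
here and should be confirmed against the MSR release notes:

helm upgrade msr oci://registry.mirantis.com/charts/msr/msr --dry-run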
If the installation fails and MSR Pods continue to run in your cluster,
it is likely that MSR failed to complete the initialization process, and
thus you must reinstall MSR. To delete the Pods and completely uninstall
MSR:
Delete any running msr-initialize Pods:
kubectl delete job msr-initialize
Delete any remaining Pods:
helm uninstall msr
Verify the success of your MSR installation.
Verify that all msr-* Pods are in the running state. For more
detail, refer to check-the-pods-online-helm.
If you are using MKE with your cluster, download and configure the client
bundle. Otherwise, ensure that you can access the cluster using kubectl,
either by updating the default Kubernetes config file or by setting the
KUBECONFIG environment variable to the path of the unique config file for
the cluster.
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod
must have Running status. If this is not the case, it is likely that
the StorageClass is missing or misconfigured, or that no default
StorageClass is set. To rectify this, you must configure a default
StorageClass and then reinstall MSR. Alternatively, you can specify a
StorageClass for MSR to use by providing the following when using Helm to
install MSR:
If nothing returns after you have run the command, wait a few minutes
and run the command again.
If the command returns an FQDN, it may be necessary to wait for the new
DNS record to resolve. You can check the resolution status by running
the following script, inserting the output string you received in
place of $FQDN:
while :; do dig +short $FQDN; sleep 5; done
If the command returns an IP address, you can access the load balancer
at: https://<load-balancer-IP>/
When one or more IP addresses display, you can interrupt the shell loop and
access your MSR load balancer at: https://$FQDN/
Note
The load balancer will stop any attempt to tear down the VPC in which the
EC2 instances are running. As such, in order to tear down the VPC you must
first remove the load balancer:
kubectl delete svc msr-public-elb
Optional. Configure MSR to use Notary to sign images. To do this, update
NGINX to add the DNS name:
Using an <MSR-chart-version> for the Helm chart, such as 1.0.0, and your
MSR_FQDN, run:
Herein, Mirantis provides step-by-step instruction on how to install MSR onto
an air-gapped Kubernetes cluster using a Helm chart.
For documentation purposes, Mirantis assumes that you are installing MSR
on an offline Kubernetes cluster from an Internet-connected machine that has
access to the Kubernetes cluster. In doing so, you will use Helm to perform the
MSR installation from the Internet-connected machine.
Confirm that the default StorageClass on your cluster supports dynamic
volume provisioning. For more information, refer to the Kubernetes
documentation Change the default StorageClass.
If a default StorageClass is not set, you can specify a StorageClass for MSR
to use by providing the following additional parameters when running the
helm install command:
The first of these three parameters is only applicable when you install MSR
with a persistentVolume backend, the default setting:
--set registry.storage.backend=persistentVolume
MSR creates PersistentVolumeClaims with either the ReadWriteOnce or the
ReadWriteMany access mode, depending on the purpose for which they are
created. Thus, the StorageClass provisioner that MSR uses must be able to
provision PersistentVolumes with at least the ReadWriteOnce and
ReadWriteMany access modes.
The <release-name> PVC is created by default with the ReadWriteMany
access mode. If you choose to install MSR with a persistentVolume backend,
you can override this default access mode with the following parameter
when running the helm install command:
On the Internet-connected computer, configure your environment to use the
kubeconfig of the offline Kubernetes cluster. You can do this by setting a
KUBECONFIG
environment variable.
Prepare a Docker registry on the Internet-connected machine that contains all
of the images that are necessary to install MSR. Kubernetes will pull the
required images from this registry to the offline nodes during the installation
of the prerequisites and MSR.
On the Internet-connected machine, set up a Docker registry that the offline
Kubernetes cluster can access using a private IP address. For
more information, refer to
Docker official documentation: Deploy a registry server.
Add the msrofficial, postgres-operator, and jetstack Helm
repositories:
Obtain the names of all the images that are required for installing MSR
from the desired version of the Helm charts, for MSR, postgres-operator,
and cert-manager. You can do this by templating each chart and
grepping for image:.
Push all the required images to the Docker registry. For example:
docker push <registry-ip>/msr/msr-api:<msr-version>
Create the following YAML files, which you will reference to override the
image repository information that is contained in the Helm charts used for
MSR installation:
You must have cert-manager and the Postgres Operator in place before you can
install MSR using the offline method.
Tip
To mitigate the risk of security breaches and exploits, Mirantis strongly
recommends upgrading any third-party components that are already installed
to the latest supported version before proceeding with installation.
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Run the following helm install command, including spilo_*
parameters:
Verify that Postgres Operator is in the Running state:
kubectl get pods
To troubleshoot a failing Postgres Operator Pod, run the following
command:
kubectl describe pod <postgres-operator-pod-name>
Review the Pod logs for more detailed results:
kubectl logs <postgres-operator-pod-name>
Note
By default, MSR uses the persistent volume claims detailed in
Volumes.
If you have a pre-existing PersistentVolume that contains image blob data
that you intend to use with a new instance of MSR, you can use Helm to
provide the new instance with the name of the associated
PersistentVolumeClaim:
As the Helm chart values include the default MSR credentials information,
Mirantis strongly recommends that you change these credentials immediately
following installation.
Beginning with MSR 3.1.10, you can set your password during installation
using the enzi.adminPassword parameter. Alternatively, you can define
the enzi.adminPassword parameter in the values.yaml file in advance
of MSR installation. Be aware that the enzi.adminPassword parameter is
only used during MSR installation and does not affect upgrades.
Important
Mirantis has transitioned to an OCI-based Helm registry for
registry.mirantis.com. As a result, Helm repository management is no
longer required. Commands that rely on Helm repository operations,
such as helm repo update and helm upgrade,
will fail with HTTP 4xx errors.
For both new installations and upgrades, use the OCI-based registry URL
directly. To check for available upgrades, run
helm upgrade --dry-run without specifying a version.
If the installation fails and MSR Pods continue to run in your cluster,
it is likely that MSR failed to complete the initialization process, and
thus you must reinstall MSR. To delete the Pods and completely uninstall
MSR:
Delete any running msr-initialize Pods:
kubectl delete job msr-initialize
Delete any remaining Pods:
helm uninstall msr
Verify the success of your MSR installation.
Verify that all msr-* Pods are in the running state. For more
detail, refer to check-the-pods-offline-helm.
Log into the MSR web UI.
Log into MSR from the command line:
docker login <private-ip>
Push an image to MSR using docker push.
Optional. Disable outgoing connections in the MSR web UI
Admin Settings. MSR offers outgoing connections for the
following tasks:
If you are using MKE with your cluster, download and configure the client
bundle. Otherwise, ensure that you can access the cluster using kubectl,
either by updating the default Kubernetes config file or by setting the
KUBECONFIG environment variable to the path of the unique config file for
the cluster.
If you intend to run vulnerability scans, the msr-scanningstore-0 Pod
must have Running status. If this is not the case, it is likely that
the StorageClass is missing or misconfigured, or that no default
StorageClass is set. To rectify this, you must configure a default
StorageClass and then reinstall MSR. Alternatively, you can specify a
StorageClass for MSR to use by providing the following when using Helm to
install MSR:
The procedure provided herein will guide you in your installation of MSR onto a
Swarm cluster that has one manager and one worker node, with the MSR
installation occurring on one worker. Be aware, though, that you can adjust the
number of nodes to fit your specific needs.
Important
Mirantis recommends that you:
Install MSR on an odd number of nodes.
To bypass the recommendation check in the
apply command, add the --force
option.
For MSR 3.1.4 or earlier use the install command instead of
the apply command.
If you do not specify any worker nodes on which to install MSR, the
process fails. You must specify at least one node within
swarm.nodeList to indicate the node that msr-installer should
use.
You must specify the destination file in the destination container
as :/config/values.yml. Any other name will cause the container
deployment to fail, which will result in the cluster becoming
inoperable.
To switch the log-level from the default info to debug,
you must insert the --log-level debug flag between the
msr-installer image and the apply subcommand.
To modify the log-level of the containers that will be deployed,
use logLevel within the values.yml file.
Port 8443 is indicated in the provided example, demonstrating a
scenario in which MKE and MSR are both in use and have a conflict with
port 443. Port 443 should be used exclusively for all other
installation configurations.
To configure the host or load balancer URL for accessing MSR,
use the --external-url flag at the time of installation or upgrade.
You can use the flag alongside such options as --https-port
or --http-port, for example, --external-url msr.example.com.
Alternatively, you can configure the external URL in the values.yaml
file by setting the global.externalURL field.
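For example, a minimal values.yaml fragment that uses the example hostname
from above:

global:
  externalURL: msr.example.com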
Optional. Use a load balancer to expose services externally in the swarm.
MSR on Swarm relies on Ingress load balancing.
Refer to the official
Load balancing
documentation for more information.
Review the status of the deployed services:
docker stack services msr
Access the MSR web UI at https://<node-ip>:443. The default username
and password are admin:password.
The procedure provided herein assumes that you are installing MSR
on an offline Swarm cluster from an Internet-connected machine that has
access to the Swarm cluster through private IP addresses.
Important
Mirantis recommends that you:
Install MSR on an odd number of nodes.
To bypass the recommendation check in the
apply command, add the --force
option.
Install MSR on worker nodes only.
Enable all authenticated users, including service accounts, to schedule
services and perform tasks on all nodes.
For MSR 3.1.4 or earlier use the install command instead of
the apply command.
If you do not specify any worker nodes on which to install MSR, the
process fails. You must specify at least one node within
swarm.nodeList to indicate which node msr-installer should use.
Optional. Use a load balancer to expose services externally in the swarm.
MSR on Swarm relies on Ingress load balancing.
Refer to the official
Load balancing
documentation for more information.
Review the status of the deployed services. Be aware that this may require
a wait time of up to two minutes.
docker stack services msr
Access the MSR web UI at https://<node-ip>:443. The default username
and password are admin:password.
Optional. Disable outgoing connections in the MSR web UI
Admin Settings. MSR offers outgoing connections for the
following tasks:
The spec.PersistentVolumeClaimRetentionPolicy field in the custom
resource manifest differs from the PersistentVolume Reclaim policy in
Kubernetes. The MSR Operator PersistentVolumeClaim Retention policy can
accept either of the following values:
retain: When the MSR custom resource is deleted, the PVCs used by
MSR are retained (default).
delete: Deleting the MSR custom resource results in the automatic
deletion of the PVCs used by MSR.
By default, the uninstaller does not delete the data associated with your
MSR deployment. To delete that data, you must include the --destroy flag
with the uninstall command.
The MSR Operations Guide provides the detailed information you
need to store and manage images on-premises or in a virtual private
cloud, to meet security or regulatory compliance requirements.
By default, Mirantis Container Runtime uses TLS when pushing images to and
pulling images from an image registry such as Mirantis Secure Registry (MSR).
If MSR is using the default configurations or was configured to use
self-signed certificates, you need to configure your Mirantis Container Runtime
to trust MSR. Otherwise, when you try to log in, push to, or pull images
from MSR, you’ll get an error:
The first step to make your Mirantis Container Runtime trust the certificate
authority used by MSR is to get the MSR CA certificate. Then you
configure your operating system to trust that certificate.
In your browser navigate to https://<msr-url>/ca to download the TLS
certificate used by MSR. Open Windows Explorer, right-click the file
you’ve downloaded, and choose Install certificate.
Then, select the following options:
Store location: Local Machine
Check Place all certificates in the following store
Click Browse, and select Trusted Root Certificate
Authorities
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
Mirantis Secure Registry can be configured to have one or more caches.
This allows you to choose the cache from which to pull images for
faster download times.
If an administrator has set up caches, you can
choose which cache to use when pulling images.
In the MSR web UI, navigate to your Account, and check the
Content Cache options.
Once you save, your images are pulled from the cache instead of the
central MSR.
You can create and distribute access tokens in MSR that grant users access at
specific permission levels.
Access tokens are associated with a particular user account. They take on the
permissions of that account when in use, adjusting automatically to any
permissions changes that are made to the associated user account.
Note
Regular MSR users can create access tokens that adopt their own account
permissions, while administrators can create access tokens that adopt the
account permissions of any account they choose, including the admin account.
Access tokens are of use in building CI/CD pipelines and other integrations, as
you can issue separate tokens for each integration and henceforth deactivate or
delete such tokens at any time. You can also use access tokens to generate a
temporary password for a user who is locked out of their account.
Note
To monitor user login events, enable the auditAuthLogsEnabled parameter
in the /settings API endpoint:
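A hedged sketch of enabling the setting through the API, assuming the settings
endpoint is served at /api/v0/meta/settings and that you authenticate as an
administrator; verify the path and payload against the MSR API reference:

curl -u admin:<password-or-token> \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"auditAuthLogsEnabled": true}' \
  https://<msr-url>/api/v0/meta/settings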
By default, Mirantis Secure Registry (MSR) services are exposed using HTTPS.
This ensures encrypted communications between clients and your trusted
registry. If you do not pass a PEM-encoded TLS certificate during installation,
MSR will generate a self-signed certificate, which leads to an insecure site
warning when accessing MSR through a browser. In addition, MSR includes an HTTP
Strict Transport Security (HSTS) header in all API responses, which can cause
your browser not to load the MSR web UI.
You can configure MSR to use your own TLS certificates, to ensure that MSR
automatically trusts browsers and client tools. You can also enable user
authentication through client certificates that your organization's Public
Key Infrastructure (PKI) provides.
By default, Mirantis Secure Registry (MSR) uses persistent cookies.
Alternatively, you can switch to using session-based authentication cookies
that expire when you close your browser.
To disable persistent cookies:
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System.
On the General tab, scroll down to Browser Cookies.
Slide the toggle next to Disable persistent cookies to the
right.
Verify that persistent cookies are disabled:
Using Chrome
Log in to the MSR web UI using Chrome.
Right-click any page and select Inspect.
In the Developer Tools panel, navigate to
Application > Cookies > https://<msr-external-url>.
Verify that Expires / Max-Age is set to
Session.
Using Firefox
Log in to the MSR web UI using Firefox.
Right-click any page and select Inspect.
In the Developer Tools panel, navigate to
Storage > Cookies > https://<msr-external-url>.
Verify that Expires / Max-Age is set to Session.
By default, MSR automatically records and transmits data to Mirantis
through an encrypted channel for monitoring and analysis purposes. The data
collected provides the Mirantis Customer Success Organization with information
that helps Mirantis to better understand the operational use of MSR by our
customers. It also provides key feedback in the form of product usage
statistics, which assists our product teams in making enhancements to Mirantis
products and services.
Caution
To send MSR telemetry, the container runtime and the jobrunner
container must be able to resolve api.segment.io and create a TCP
(HTTPS) connection on port 443.
To disable telemetry for MSR:
Log in to the MSR web UI as an administrator.
Click System in the left-side navigation panel to open the
System page.
Click the General tab in the details pane.
Scroll down in the details pane to the Analytics section.
By default, MSR uses the local filesystem of the node on which it is running to
store your Docker images. As an alternative, you can configure MSR to use an
external storage backend for improved performance or high availability.
If your MSR deployment has a single replica, you can continue to use the local
filesystem to store your Docker images. If, though, your MSR deployment has
multiple replicas, make sure that all of the replicas are using the same
storage backend for high availability.
Whenever a user pulls an image, the MSR node serving the request must have
access to that image.
You can configure your storage backend at the time of MSR installation or
upgrade. To do so, specify the registry.storage.backend parameter in your
custom resource manifest or Helm chart values.yaml file with one of the
following values, as appropriate:
"persistentVolume"
"azure"
"gcs"
"s3"
"swift"
"oss"
The following table details the fields that you can configure in the
registry.storage.persistentVolume section of the custom resource manifest
and Helm chart values.yaml file:
MSR deployments with high availability must use either NFS or another
centralized storage backend to ensure that all MSR replicas have access to the
same images.
To verify the amount of persistent volume space that is in use:
The manifest examples herein are offered for demonstration purposes only.
They do not exist in the Mirantis repository and thus are not available for
use. To use NFS with MSR 3.0.x, you must enlist an external provisioner,
such as NFS Ganesha server and external provisioner
or NFS subdir external provisioner.
You can configure MSR to store Docker images on Amazon S3 or on any other file
servers with an S3-compatible API.
All S3-compatible services store files in “buckets”, to which you can authorize
users to read, write, and delete files. Whenever you integrate MSR with such a
service, MSR sends all read and write operations to the S3 bucket where the
images then persist.
Before you configure MSR you must first create a bucket on Amazon S3. To
optimize pulls and pushes, Mirantis suggests that you create the S3 bucket in
the AWS region that is physically closest to the servers on which MSR is set to
run.
Create an S3 bucket.
Create a new IAM user for the MSR integration.
Apply an IAM policy that has the following limited user permissions:
Add the following values to the custom resource manifest. If you are
using IAM role authentication, do not include the lines for
accesskey and secretkey. Running Kubernetes on AWS requires that
you include v4auth: true.
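A minimal sketch of what these values might look like; the region and bucket
keys follow the standard registry S3 driver settings and are assumptions here,
while accesskey, secretkey, and v4auth are described above:

spec:
  registry:
    storage:
      backend: 's3'
      s3:
        region: <aws-region>          # assumed S3 driver field name
        bucket: <s3-bucket-name>      # assumed S3 driver field name
        accesskey: <aws-access-key>   # omit when using IAM role authentication
        secretkey: <aws-secret-key>   # omit when using IAM role authentication
        v4auth: true                  # required when running Kubernetes on AWS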
The following parameters are available for configuration in the
registry.storage.s3 section of the custom resource manifest, Helm chart, or
Swarm cluster values.yaml file:
To restore MSR using your previously configured S3 settings, use
restore.
Restore MSR with non-S3 cloud storage provider settings
For S3-compatible cloud storage providers other than Amazon S3, configure the
following parameters in the registry.storage section of the custom resource
manifest, Helm chart, or Swarm cluster values.yaml file:
bucket (Standard)
  The name of the Google Cloud Storage bucket in which image
  data is stored.
credentials (Standard)
  The contents of a service account private key file in
  JSON format that is used for Service Account Authentication.
rootdirectory (Advanced)
  The root directory tree in which all registry files
  are stored. The prefix is applied to all Google Cloud Storage keys,
  to allow you to segment data in your bucket as necessary.
To facilitate online garbage collection, switching storage backends
initializes a new metadata store and erases your existing tags. As a best
practice, you should always move, back up, and restore MSR storage backends
together with your metadata.
To switch your storage backend to Amazon S3 using the MSR Operator:
Add the following values to the custom resource manifest. If you are
using IAM role authentication, do not include the lines for
accesskey and secretkey. Running Kubernetes on AWS requires that
you include v4auth: true.
Mirantis Secure Registry (MSR) is designed to scale horizontally as your usage
increases. You can scale each of the resources that the custom resource
manifest creates by editing the replicaCount setting in the custom resource
manifest. You can also add more replicas so that MSR scales to demand and
supports high availability.
To ensure that MSR is tolerant to failures, you can add additional replicas to
each of the resources MSR deploys. MSR with high availability requires a
minimum of three Nodes.
When sizing your MSR installation for high availability, Mirantis recommends
that you follow these best practices:
Ensure that multiple Pods created for the same resource are not scheduled
on the same Node. To do this, enable a Pod affinity setting in your
Kubernetes environment that schedules Pod replicas on different Nodes.
Note
If you are unsure of which Pod affinity settings to use, set the
podAntiAffinityPreset field to hard, to enable the recommended
affinity settings intended for a highly available workload.
Do not scale RethinkDB with just two replicas.
Caution
RethinkDB cannot tolerate a failure with an even number of replicas.
To determine the best way to scale RethinkDB, refer to the following table.
MSR RethinkDB replicas    Failures tolerated
1                         0
3                         1
5                         2
7                         3
Caution
Adding too many replicas to the RethinkDB cluster can lead to performance
degradation.
You can edit the replica counts in the custom resource manifest, but be
aware that rethinkdb.cluster.replicaCount must always be an odd
number. Refer to the RethinkDB scaling chart for details.
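For example, to run three RethinkDB cluster replicas, the relevant fragment of
the custom resource manifest would resemble the following:

spec:
  rethinkdb:
    cluster:
      replicaCount: 3   # must always be an odd number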
Invoke the following command to run the webhook health check and apply the
changes to the custom resource:
You can edit the replica counts in the ha.yaml file. However,
you must make sure that rethinkdb.cluster.replicaCount is always
an odd number. Refer to the RethinkDB scaling chart for details.
Use Helm to apply the YAML file to a new installation:
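For example, assuming the chart is published as msr in the msrofficial
repository:

helm install msr msrofficial/msr --version <msr-chart-version> -f ha.yaml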
You must have at least three worker nodes to run a robust and fault-tolerant
high availability (HA) MSR deployment.
Note
The procedure that follows is supplementary to the MSR installation
procedure. Refer to Install MSR online for the comprehensive
installation instructions.
SSH into a manager node.
Obtain a list of non-manager nodes along with their node IDs:
For MSR 3.1.4 or earlier use the install command instead of
the apply command.
You must install MSR onto an odd number of worker nodes because RethinkDB
uses a Raft consensus algorithm to ensure data consistency and fault
tolerance.
Review the status of the deployed services:
docker stack services msr
Modify replica counts on an existing installation
In the cr-sample-manifest.yaml file, edit the key-value pair that
corresponds to the MSR resource whose replica count you want to modify. For
example, nginx:
To modify replica counts for MSR resources using a Helm chart:
You can use the helm upgrade command to modify replica counts
across non-RethinkDB MSR resources. For the RethinkDB resources, refer to
Modify replica counts for RethinkDB resources.
In the ha.yaml file, edit the key-value pair that corresponds to the MSR
resource whose replica count you wish to modify. For example, nginx:
For MSR 3.1.4 or earlier, use the scale command instead of
the apply command.
Important
Because RethinkDB uses a raft consensus algorithm to ensure data
consistency and fault tolerance, you must install MSR onto an odd number
of worker nodes.
The procedure outlined herein is not necessary if you are using the
MSR Operator to install and manage your MSR deployment.
Unlike other MSR resources, modifications to RethinkDB resources require that
you scale the RethinkDB tables. Cluster scaling occurs when you alter the
replicaCount value in the ha.yaml file.
Run msr rethinkdb decommission on the servers you want to
decommission.
Note
The number of replicas will scale down from the highest number to the
lowest. Thus, as the scale down in the example is from three servers
to one server, the two servers with the highest numbers should be
targeted for decommission.
For MSR to perform security scanning, you must have a running deployment of
Mirantis Secure Registry (MSR), administrator access, and an MSR license that
includes security scanning.
Before you can set up security scanning, you must verify that your Docker ID
can access and download your MSR license from DockerHub. If you are using a
license that is associated with an organization account, verify that your
Docker ID is a member of the Owners team, as only members of that team can
download license files for an organization. If you are using a license
associated with an individual account, no additional action is needed.
Note
To verify that your MSR license includes security scanning:
Log in to the MSR web UI.
In the left-side navigation panel, click System and navigate
to the Security tab.
If the Enable Scanning toggle displays, the license includes
security scanning.
In the left-side navigation panel, click System and navigate
to the Security tab.
Slide the Enable Scanning toggle to the right.
Set the security scanning mode by selecting either Online or
Offline.
Online mode:
Online mode downloads the latest vulnerability database from a Docker
server and installs it.
Select whether to include jobrunner and postgresDB logs.
Click Sync Database now.
Offline mode:
Offline mode requires that you manually perform the following steps.
Download the most recent CVE database.
Be aware that the example command specifies default values. It
instructs the container to output the database file to the
~/Downloads directory and configures the volume to map from the
local machine into the container. If the destination for the database
is in a separate directory, you must define an additional volume. For
more information, refer to the table that follows this procedure.
MSR security scanning indexes the components in your MSR images and
compares them against a CVE database. This database is routinely updated
with new vulnerability signatures, and thus MSR must be regularly updated with
the latest version to properly scan for all possible vulnerabilities. After
updating the database, MSR matches the components in the new CVE reports to the
indexed components in your images, and generates an updated report.
Note
MSR users with administrator access can learn when the CVE database was last
updated by accessing the Security tab in the MSR
System page.
In online mode, MSR security scanning monitors for updates to
the vulnerability database, and downloads them when available.
To ensure that MSR can access the database updates, verify that the host can
access both https://license.mirantis.com and
https://dss-cve-updates.mirantis.com/ on port 443 using HTTPS.
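For example, a quick connectivity check from the host:

curl -sI https://license.mirantis.com
curl -sI https://dss-cve-updates.mirantis.com/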
MSR checks for new CVE database updates every day at 3:00 AM UTC. If an update
is available, it is automatically downloaded and applied, without interrupting
any scans in progress. Once the update is completed, the security scanning
system checks the indexed components for new vulnerabilities.
To set the update mode to online:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate
to the Security tab.
Click Online.
Your choice is saved automatically.
Note
To check immediately for a CVE database update, click
Sync Database now.
When connection to the update server is not possible, you can update the CVE
database for your MSR instance using a .tar file that contains the database
updates.
To set the update mode to offline:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate
to the Security tab.
Select Offline
Click Select Database and open the downloaded CVE database file.
MSR installs the new CVE database and begins checking the images that are
already indexed for components that match new or updated vulnerabilities.
The time needed to pull and push images is directly influenced by the distance
between your users and the geographic location of your MSR deployment. This is
because the files need to traverse the physical space and cross multiple
networks. You can, however, deploy MSR caches at different geographic
locations, to add greater efficiency and shorten user wait time.
With MSR caches you can:
Accelerate image pulls for users in a variety of geographical regions.
Manage user permissions from a central location.
MSR caches are transparent to your users, who continue to log in and
pull images using the provided MSR URL address.
When MSR receives a user request, it first authenticates the request and
verifies that the user has permission to pull the requested image. Assuming
the user has permission, they then receive an image manifest that contains the
list of image layers to pull and which directs them to pull the images from a
particular cache.
When your users request image layers from the indicated cache, the cache pulls
these images from MSR and maintains a copy. This enables the cache to serve the
image layers to other users without having to retrieve them again from MSR.
Note
Avoid using caches if your users need to push images faster or if you want
to implement region-based RBAC policies. Instead, deploy multiple MSR
clusters and apply mirroring policies between them. For further details,
refer to Promotion policies and monitoring.
MSR caches running in different geographic locations can provide your users
with greater efficiency and shorten the amount of time required to pull images
from MSR.
Consider a scenario in which you are running an MSR instance that is installed
in the United States, with a user base that includes developers located in the
United States, Asia, and Europe. The US-based developers can pull their images
from MSR quickly, however those working in Asia and Europe have to contend with
unacceptably long wait times to pull the same images. You can address this
issue by deploying MSR caches in Asia and Europe, thus reducing the wait time
for developers located in those areas.
The described MSR cache scenario requires three datacenters: one in the United
States hosting the primary MSR deployment, and one each in Asia and Europe
hosting an MSR cache.
The MSR with Kubernetes deployment detailed herein assumes that you have a
running MSR deployment.
When you establish the MSR cache as a Kubernetes deployment, you ensure that
Kubernetes will automatically schedule and restart the service in the event
of a problem.
You manage the cache configuration with a Kubernetes Config Map and the TLS
certificates with Kubernetes secrets. This setup enables you to securely
manage the configurations of the node on which the cache is running.
To deploy the MSR cache with a TLS endpoint you must generate a TLS
certificate and key from a certificate authority.
The manner in which you expose the MSR cache determines the Subject Alternative
Names (SANs) that are required for the certificate. For example:
To deploy the MSR cache with an ingress object you must use an external MSR
cache address that resolves to your ingress controller as part of your
certificate.
To expose the MSR cache through a Kubernetes Cloud Provider, you must have
the external load balancer address as part of your certificate.
To expose the MSR cache through a Node port or a host port you must use a
Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.
In the certs directory, place the newly created certificate
cache.cert.pem and key cache.key.pem for your MSR cache.
Place the certificate authority in the certs directory, including any
intermediate certificate authorities of the certificate from your MSR
deployment. If your MSR deployment uses cert-manager, use kubectl to
source this from the main MSR deployment.
kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}'
Note
If cert-manager is not in use, you must provide your custom nginx.webtls
certificate.
The MSR cache takes its configuration from a configuration file that you mount
into the container.
You can edit the following MSR cache configuration file for your environment,
entering the relevant external MSR cache, worker node, or external load
balancer FQDN. Once you have configured the cache it fetches image layers from
MSR and maintains a local copy for 24 hours. If a user requests the image layer
after that period, the cache fetches it again from MSR.
cat > config.yml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-msrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the Main MSR Deployment
        cas:
          - /certs/msr.cert.pem
EOF
By default, the cache stores image data inside its container. Thus, if
something goes wrong with the cache service and Kubernetes deploys a new Pod,
cached data is not persisted. The data is not lost, however, as it
persists in the primary MSR.
Note
Kubernetes persistent volumes or persistent volume claims must be in use to
provide persistent backend storage capabilities for the cache.
To create the Kubernetes resources, you must have the kubectl
command line tool configured to communicate with your Kubernetes cluster,
through either a Kubernetes configuration file or an MKE client bundle.
To provide external access to your MSR cache you must expose the cache Pods.
Important
Expose your MSR cache through only one external interface.
To ensure TLS certificate validity, you must expose the cache through the
same interface for which you previously created a certificate.
Kubernetes supports several methods for exposing a service, based on
your infrastructure and your environment. Detail is offered below for the
NodePort method and the Ingress Controllers method.
Run the following command to determine the port on which you have exposed
the MSR cache:
kubectl -n msr get services
Test the external reachability of your MSR cache. To do this, use curl
to hit the API endpoint, using both the external address of a worker node
and the NodePort:
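A hedged example, assuming the worker node FQDN and the NodePort reported by
the previous command; the /v2/ path is simply the registry API root, used
here only as a reachability check:

curl -ks https://<worker-node-fqdn>:<nodeport>/v2/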
In the ingress controller exposure scheme, you expose the MSR cache through an
ingress object.
Create a DNS rule in your environment to resolve an MSR cache external FQDN
address to the address of your ingress controller. In addition, the same MSR
cache external FQDN must be included in the MSR cache certificate, as specified
when you created the certificate.
cat > msrcacheingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: msr-cache
  namespace: msr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      http:
        paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: msr-cache
                port:
                  number: 443
EOF

kubectl create -f msrcacheingress.yaml
Test the external reachability of your MSR cache. To do this, use curl
to hit the API endpoint. The address should be the one you have previously
defined in the service definition file.
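A hedged example, assuming the external MSR cache FQDN that resolves to your
ingress controller; the /v2/ path is the registry API root, used here only as
a reachability check:

curl -ks https://<external-msr-cache-fqdn>/v2/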
The MSR on Swarm deployment detailed herein assumes that you have a
running MSR deployment and that you have provisioned multiple
nodes and joined them into a swarm.
You will deploy your MSR cache as a Docker service, thus ensuring that Docker
automatically schedules and restarts the service in the event of a problem.
You manage the cache configuration using a Docker configuration and the TLS
certificates using Docker secrets. This setup enables you to securely manage
the node configurations for the node on which the cache is running.
To target your deployment to the cache node, you must first label that node. To
do this, SSH into a manager node of the swarm within which you want to deploy
the MSR cache.
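A minimal sketch of the labeling step; the label key dtr.cache=true is an
assumption, so match it to whichever placement constraint your
docker-stack.yml references, and substitute the node hostname:

docker node update --label-add dtr.cache=true <node-hostname>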
Following cache preparation, you will have the following file structure on your
workstation:
├── docker-stack.yml
├── config.yml          # The cache configuration file
└── certs
    ├── cache.cert.pem  # The cache public key certificate
    ├── cache.key.pem   # The cache private key
    └── msr.cert.pem    # MSR CA certificate
With the configuration detailed herein, the cache fetches image layers
from MSR and retains a local copy for 24 hours. After that, if a user requests
that image layer, the cache re-fetches it from MSR.
The cache is configured to persist data inside its container. If something goes
wrong with the cache service, Docker automatically redeploys a new container,
but the previously cached data does not persist. You can customize the storage
parameters, if you want to store the image layers using a persistent storage
backend.
Also, the cache is configured to use port 443. If you are already using that
port in the swarm, update the deployment and configuration files to use another
port. Remember to create firewall rules for the port you choose.
You configure the MSR cache using a configuration file that you mount into the
container.
Edit the sample MSR cache configuration file that follows to fit your
environment, entering the relevant external MSR cache, worker node, or external
load balancer FQDN. Once configured, the cache fetches image layers from MSR and
maintains a local copy for 24 hours. If a user requests the image layer after
that period, the cache re-fetches it from MSR.
To deploy the MSR cache with a TLS endpoint, you must generate a TLS
certificate and key from a certificate authority.
Be aware that to expose the MSR cache through a node port or a host port, you
must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your
certificate.
Create a directory called certs and place in it the newly created
certificate cache.cert.pem and key cache.key.pem for your MSR cache.
Configure the cert pem files, as detailed below:
pem file
Information to add
cache.cert.pem
Add the public key certificate for the cache. If the certificate
has been signed by an intermediate certificate authority, append its
public key certificate at the end of the file.
cache.key.pem
Add the unencrypted private key for the cache.
msr.cert.pem
Configure the cache to trust MSR.
Add the MSR CA certificate to the certs/msr.cert.pem file, if
you are using the default MSR configuration, or if MSR is using TLS
certificates signed by your own certificate authority. Note that
configuring msr.cert.pem is not necessary if you have customized
MSR to use TLS certificates issued by a globally trusted certificate
authority, as in this case the cache will automatically trust MSR.
You will require the following to deploy MSR caches with high availability:
Multiple nodes, one for each cache replica
A load balancer
Shared storage system that has read-after-write consistency
With high availability, Mirantis recommends that you configure the replicas to
store data using a shared storage system. MSR cache deployment is the same,
though, regardless of whether you are deploying a single replica or multiple
replicas.
When using a shared storage system, once an image layer is cached, any replica
is able to serve it to users without having to fetch a new copy from MSR.
MSR caches support the following storage systems:
Alibaba Cloud Object Storage Service
Amazon S3
Azure Blob Storage
Google Cloud Storage
NFS
OpenStack Swift
Note
If you are using NFS as a shared storage system, ensure read-after-write
consistency by verifying that the shared directory is configured with:
/dtr-cache *(rw,root_squash,no_wdelay)
In addition, mount the NFS directory on each node where you will deploy
an MSR cache replica.
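For illustration only, with the NFS server address and the local mount point
as placeholders:

mount -t nfs <nfs-server>:/dtr-cache /mnt/dtr-cache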
To configure caches for high availability:
Use SSH to log in to a manager node of the cluster on which you want to
deploy the MSR cache. If you are using MKE to manage that cluster, you can
also use a client bundle to configure your Docker CLI client to connect to
the cluster.
Label each node that is going to run the cache replica:
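A sketch of the labeling command, run once per cache node; the
dtr.cache=true label key is an assumption, so match it to the placement
constraint in your deployment files:

docker node update --label-add dtr.cache=true <node-hostname>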
Create the cache configuration files by following the instructions for
deploying a single cache replica. Be sure to adapt the storage object,
using the configuration options for the shared storage of your choice.
Deploy a load balancer of your choice to balance requests across your
set of replicas.
MSR caches are based on Docker Registry, and use the same configuration file
format. The MSR cache extends the Docker Registry configuration file format,
though, introducing a new middleware called downstream with three
configuration options: blobttl, upstreams, and cas:
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
        cas:
          - <Absolute path to next-hop upstream registry or content cache CA certificate in the container's filesystem>
The following table offers detail specific to MSR caches for each parameter:
Parameter
Required
Description
blobttl
no
The TTL (Time to Live) value for blobs in the cache, offered as a
positive integer and suffix denoting a unit of time.
Valid values:
ns (nanoseconds)
us (microseconds)
ms (milliseconds)
s (seconds)
m (minutes)
h (hours)
Note
If the suffix is omitted, the system interprets the value as
nanoseconds.
If blobttl is configured, storage.delete.enabled must be set to
true.
cas
no
An optional list of absolute paths to PEM-encoded CA certificates of
upstream registries or content caches.
upstreams
yes
A list of externally-reachable addresses for upstream registries or
content caches. If you specify more than one host, the cache pulls from
the registries in a round-robin fashion.
Mirantis Secure Registry (MSR) supports garbage collection, the automatic
cleanup of unused image layers. You can configure garbage collection to occur
at regularly scheduled times, as well as set a specific duration for the
process.
Garbage collection first identifies and marks unused image layers, then
subsequently deletes the layers that have been marked.
In conducting garbage collection, MSR performs the following actions in
sequence:
Establishes a cutoff time.
Marks each referenced manifest file with a timestamp. When manifest files
are pushed to MSR, they are also marked with a timestamp.
Sweeps each manifest file that does not have a timestamp after the cutoff
time.
Deletes the file if it is never referenced, meaning that no image tag uses
it.
Repeats the process for blob links and blob descriptors.
Each image stored in MSR is comprised of the following files:
The image filesystem, which consists of a list of unioned image layers.
A configuration file, which contains the architecture of the image along with
other metadata.
A manifest file, which contains a list of all the image layers and the
configuration file for the image.
MSR tracks these files in its metadata store, using RethinkDB, doing so in a
content-addressable manner in which each file corresponds to a cryptographic
hash of the file content. Thus, if two image tags hold exactly the same
content, MSR stores that content only once, even when the tag names differ.
For example, if wordpress:4.8 and wordpress:latest have the same content,
MSR will only store that content once. If you delete one of these tags, the
other will remain intact.
As a result, when you delete an image tag, MSR cannot delete the
underlying files as it is possible that other tags also use the same
underlying files.
By default, MSR only allows users to push images to repositories that already
exist, and for which the user has write privileges. Alternatively, you can
configure MSR to create a new private repository when an image is pushed.
To create a new repository when pushing an image:
Log in to the MSR web UI.
In the left-side navigation panel, click Settings and scroll
down to Repositories.
Slide the Create repository on push toggle to the right.
Mirantis Secure Registry (MSR) makes outgoing connections to check for new
versions, automatically renew its license, and update its vulnerability
database. If MSR cannot access the Internet, you must manually apply
any updates.
One way to keep your environment secure while still allowing MSR
access to the Internet is to deploy a web proxy. If you have an HTTP or
HTTPS proxy, you can configure MSR to use it.
You can configure web proxy usage on Kubernetes using either the MSR Operator
or a Helm chart.
MSR Operator
In the custom resource manifest, insert the following values to
add the HTTP_PROXY and HTTPS_PROXY environment variables to all
containers in your MSR deployment:
In addition to storing individual and multi-architecture container images and
plugins, MSR supports the storage of applications as their own
distinguishable type.
Applications include the following two tags:
Image
Tag
Type
Under the hood
Invocation
<app-tag>-invoc
Container image represented by OS and architecture.
For example, linux amd64.
Uses Mirantis Container Runtime. The Docker daemon is responsible for
building and pushing the image. Includes scan results for the invocation
image.
Application with bundled components
<app-tag>
Application
Uses the application client to build and push the image. Includes scan
results for the bundled components. Docker App is an experimental Docker
CLI feature.
Use docker app push to push your applications to MSR. For more
information, refer to Docker App
in the official Docker documentation.
While it is possible to enable the just-in-time creation of multi-architecture
image repositories when creating a repository using the API, Mirantis does not
recommend using this option, as it will cause Docker Content Trust to fail
along with other issues. To manage Docker image manifests and manifest
lists, instead use the experimental command docker manifest.
The MSR web UI has an Info page for each repository that
includes the following sections:
A README file, which is editable by admin users.
The docker pull command for pulling the images contained in the
given repository. To learn more about pulling images, refer to
Pull and push images.
The permissions associated with the user who is currently logged in.
To view the Info section:
Log in to the MSR web UI.
In the left-side navigation panel, click Repositories.
Select the required repository by clicking the repository
name rather than the namespace name that precedes the /.
The Info tab displays by default.
To view the repository events that your permissions level has access to,
hover over the question mark next to the permissions level that displays under
Your permission.
Note
Your permissions list may include repository events that are not displayed
in the Activity tab. Also, it is not an exhaustive list of the
event types that are displayed in your activity stream. To learn more about
repository events, refer to Audit repository events.
The base layers of the Microsoft Windows base images have redistribution
restrictions. When you push a Windows image to MSR, Docker only pushes the
image manifest and the layers that are above the Windows base layers. As a
result:
When a user pulls a Windows image from MSR, the Windows base layers
are automatically fetched from Microsoft.
Because MSR does not have access to the image base layers, it cannot scan
those image layers for vulnerabilities. The Windows base layers are,
however, scanned by Docker Hub.
On air-gapped or similarly limited systems, you can configure Docker to push
Windows base layers to MSR by adding the following line to
C:\ProgramData\docker\config\daemon.json:
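The option in question is the Docker daemon allow-nondistributable-artifacts
setting; a minimal sketch, with <msr-host>:<port> standing in for your
registry address:

{
  "allow-nondistributable-artifacts": ["<msr-host>:<port>"]
}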
If your MSR instance uses image signing, you will need to remove any trust
data on the image before you can delete it. For more information, refer to
Delete signed images.
To delete an image:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the relevant repository and navigate to the Tags tab.
Select the check box next to the tags that you want to delete.
Click Delete.
Alternatively, you can delete every tag for a particular image by deleting the
relevant repository.
To delete a repository:
Click the required repository and navigate to the Settings
tab.
Scroll down to Delete repository and click
Delete.
Mirantis Secure Registry (MSR) has the ability to scan images for security
vulnerabilities contained in the US National Vulnerability Database. Security
scan results are reported for each image tag contained in a repository.
Security scanning is available as an add-on to MSR. If security scan results
are not available on your repositories, your organization may not have
purchased the security scanning feature or it may be disabled. Administrator
permissions are required to enable security scanning on your MSR instance.
Important
When scanning images for security vulnerabilities, MSR temporarily
extracts the contents of your images to disk. If malware is contained in
these images, external scanners may wrongly attribute that malware
to MSR. The key indication of this is the detection of malware in the
dtr-jobrunner container in /tmp/findlib-workdir-*.
To prevent any recurrence of the issue, Mirantis recommends configuring
the run-time scanner to exclude files found in the MSR dtr-jobrunner
containers in /tmp, or more specifically, if wildcards can be used,
in /tmp/findlib-workdir-*.
The scanner first performs a binary scan on each layer of the image,
identifies the software components in each layer, and indexes the SHA of
each component in a bill-of-materials. A binary scan evaluates the
components on a bit-by-bit level, so vulnerable components are
discovered even if they are statically linked or use a different name.
The scan then compares the SHA of each component against the US National
Vulnerability Database that is installed on your MSR instance. When this
database is updated, MSR verifies whether the indexed components have newly
discovered vulnerabilities.
MSR has the ability to scan both Linux and Windows images. However, because
Docker defaults to not pushing foreign image layers for Windows images,
MSR does not scan those layers. If you want MSR to scan your Windows images,
configure Docker to always push image layers, and
it will scan the non-foreign layers.
A summary of the results displays next to each scanned tag on the repository
Tags tab, and presents in one of the following ways:
If the scan did not find any vulnerabilities, the word Clean
displays in green.
If the scan found vulnerabilities, the severity level, Critical,
Major, or Minor, displays in red or orange with the
number of vulnerabilities. If the scan could not detect the version of
a component, the vulnerabilities are reported for all versions of the
component.
To view the full scanning report, click View details for the
required image tag.
The top of the resulting page includes metadata about the image including
the SHA, image size, last push date, user who initiated the push, security scan
summary, and the security scan progress.
The scan results for each image include two different modes so you can
quickly view details about the image, its components, and any
vulnerabilities found:
The Layers view lists the layers of the image in the order that
they are built by the Dockerfile.
This view can help you identify which command in the build
introduced the vulnerabilities, and which components are associated
with that command. Click a layer to see a summary of its
components. You can then click on a component to switch to the
Component view and obtain more details about the specific item.
Note
The layers view can be long, so be sure to scroll down if
you do not immediately see the reported vulnerabilities.
The Components view lists the individual component libraries
indexed by the scanning system in order of severity and number of
vulnerabilities found, with the most vulnerable library listed first.
Click an individual component to view details on the vulnerability it
introduces, including a short summary and a link to the official CVE database
report. A single component can have multiple vulnerabilities, and the scan
report provides details on each one. In addition, the component details
include the license type used by the component, the file path to the
component in the image, and the number of layers that contain the component.
Note
The CVE count presented in the scan summary of an image with multiple layers
may differ from the count obtained through summation of the CVEs for each
individual image component. This is because the scan summary performs a
summation of the CVEs in every layer of the image, and a component may be
present in more than one layer of an image.
If you find that an image in your registry contains vulnerable
components, you can use the linked CVE scan information in each scan
report to evaluate the vulnerability and decide what to do.
If you discover vulnerable components, you should verify whether there is an
updated version available where the security vulnerability has been
addressed. If necessary, you can contact the component maintainers to
ensure that the vulnerability is being addressed in a future version or
a patch update.
If the vulnerability is in a base layer, such as an operating
system, you might not be able to correct the issue in the image. In this
case, you can switch to a different version of the base layer, or you
can find a less vulnerable equivalent.
You can address vulnerabilities in your repositories by updating the images to
use updated and corrected versions of vulnerable components or by using
a different component that offers the same functionality. When you have
updated the source code, run a build to create a new image, tag the
image, and push the updated image to your MSR instance. You can then
re-scan the image to confirm that you have addressed the
vulnerabilities.
MSR security scanning sometimes reports image vulnerabilities that you know
have already been fixed. In such cases, it is possible to hide the
vulnerability warning.
Note
Only MSR administrators can override layer vulnerabilities.
To override a vulnerability:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
To review the vulnerabilities associated with each component in the image,
click the Components tab.
Select the component with the vulnerability you want to ignore,
navigate to the vulnerability, and click Hide.
Once dismissed, the vulnerability is hidden system-wide and will no
longer be reported as a vulnerability on affected images with the
same layer IDs or digests. In addition, MSR will not re-evaluate the
promotion policies that have been set up for the repository.
To re-evaluate the promotion policy for the affected image:
After hiding a particular vulnerability, you can re-evaluate the promotion
policy for the affected image.
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
To send a scanner report directly to Mirantis Customer Support:
Log in to the MSR web UI.
Navigate to View Details and click Components.
Click Show layers affected for the layer you want to
report.
Click Report Issue. A pop-up window displays with the
fields detailed in the following table:
Field
Description
Component
Automatically filled out and not editable. If the information is
incorrect, make a note in the Additional info field.
Reported version or date
Automatically filled out and not editable. If the information is
incorrect, make a note in the Additional info field.
Report layer
Indicate the image or image layer. Options include:
Omit layer, Include layer, Include
image.
False Positive(s)
Optional. Select from the drop-down menu all CVEs you suspect are
false positives. Toggle the False Positive(s) control to
edit the field.
Missing Issue(s)
Optional. List CVEs you suspect are missing from the report. Enter
CVEs in the format CVE-yyyy-#### or CVE-yyyy-##### and
separate each CVE with a comma. Toggle the Missing
Issue(s) control to edit the field.
Incorrect Component Version
Optional. Enter any incorrect component version information in the
Missing Issue(s) field. Toggle the
Incorrect Component Version control to edit the field.
Additional info
Optional. Indicate anything else that does not pertain to other
fields. Toggle the Additional info control to edit this
field.
Fill out the fields in the pop-up window and click Submit.
MSR generates a JSON-formatted scanner report, which it bundles into a file
together with the scan data. This file downloads to your local drive, at which
point you can share it as needed with Mirantis Customer Support.
Important
To submit a scanner report along with the associated image, bundle the items
into a .tgz file and include that file in a Mirantis Customer
Support ticket.
By default, users can push the same tag multiple times to a repository,
thus overwriting the older versions of the tag. This can however lead to
problems if a user pushes an image with the same tag name but different
functionality. Also, when images are overwritten, it can be difficult to
determine which build originally generated the image.
To prevent tags from being overwritten, you can configure a repository
to be immutable. Once configured, MSR will not allow another image with the
same tag to be pushed to the repository.
Note
Enabling tag immutability disables repository tag limits.
Docker Content Trust (DCT) allows you to sign image tags, thus giving consumers
a way to verify the integrity of your images. Users interact with DCT using a
combination of docker trust and notary commands.
If your MSR instance uses a certificate that is issued by a well-known, public
certificate authority (CA), then skip this section and proceed to
Configure repository for signing.
If the MSR certificate authority (CA) is self-signed, you must configure the
machine that runs the docker trust commands to trust the CA, as
detailed in this section.
Caution
It is not possible to use DCT with a remote MSR that is set up as an
insecure registry in the Docker daemon configuration. This is because DCT
operations are not processed by the Docker daemon, but are instead sent
directly to the back-end Notary components that handle signing. It is not
possible to configure the back-end components to allow insecure operation.
To configure your machine to trust a self-signed CA:
Create a certificate directory for the MSR host in the Docker configuration
directory:
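A hedged sketch, assuming MSR is reachable at <msr-host> and that the daemon
reads per-registry CAs from /etc/docker/certs.d; the /ca endpoint is the same
one used elsewhere in this guide to retrieve the MSR CA:

export MSR=<msr-host>
sudo mkdir -p /etc/docker/certs.d/${MSR}
sudo curl -ks https://${MSR}/ca -o /etc/docker/certs.d/${MSR}/ca.crt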
Verify that you do not receive certificate errors when accessing MSR:
docker login ${MSR}
Create a symlink between the certs.d and tls directories. This link
allows the Docker client to share the same CA trust as established for the
Docker daemon in the preceding steps.
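A sketch of the symlink, assuming the Docker client looks for Notary TLS
material under ~/.docker/tls (an assumption about your client configuration):

mkdir -p ~/.docker/tls
ln -s /etc/docker/certs.d/${MSR} ~/.docker/tls/${MSR}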
Initialize a repository for use with DCT by pushing an image to the relevant
repository. You will be prompted for both a new root key password and a new
repository key password, as displayed in the example output.
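For example, with Docker Content Trust enabled for the push; the namespace,
repository, and tag are placeholders:

export DOCKER_CONTENT_TRUST=1
docker push ${MSR}/<namespace>/<repository>:<tag>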
You have the option to sign an image using multiple user keys. This topic
describes how to add a regular user as a signer in addition to the repository
admin.
Note
Signers in Docker Content Trust (DCT) do not correspond with users in MSR,
thus you can add a signer using a user name that does not exist in MSR.
The private key is password protected and kept in the local trust store,
where it remains throughout all signing operations. The public key is stored
in the .pub file, which you must provide to the repository administrator
to add the user as a signer.
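The key pair described above is typically produced with docker trust key
generate; a minimal sketch, run on the user's machine, with the signer name
as a placeholder:

docker trust key generate <signer-name>
# Writes <signer-name>.pub to the current directory and stores the
# password-protected private key in the local trust store (~/.docker/trust).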
Provide the user public key to the repository admin.
On the admin machine, add the user as a signer to the repository. You will
be prompted for the repository key password that you created in
Configure repository for signing, as displayed in the example output.
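A sketch of the command run on the admin machine; the public key file, signer
name, namespace, and repository are placeholders:

docker trust signer add --key <signer-name>.pub <signer-name> ${MSR}/<namespace>/<repository>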
To delete a signed image, you must first remove trust data for all of the roles
that have signed the image. After you remove the trust data, proceed to
deleting the image, as described in Delete images.
To identify the roles that signed an image:
Determine the roles that are trusted to sign the image:
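For example, using docker trust inspect, with the image reference as a
placeholder:

docker trust inspect --pretty ${MSR}/<namespace>/<repository>:<tag>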
The image will display as unsigned once the trust data has been removed for
all of the roles that signed the image.
Using Docker Content Trust with a Remote MKE Cluster
For more advanced deployments, you may want to share one Mirantis Secure
Registry across multiple Mirantis Kubernetes Engines. However, customers
wanting to adopt this model alongside the Only Run Signed Images
MKE feature, run into problems as each MKE operates an independent
set of users.
Docker Content Trust (DCT) gets around this problem, since users from a
remote MKE are able to sign images in the central MSR and still apply
runtime enforcement.
In the following example, we will connect MSR managed by MKE cluster 1
with a remote MKE cluster which we are calling MKE cluster 2, sign the
image with a user from MKE cluster 2, and provide runtime enforcement
within MKE cluster 2. This process could be repeated over and over,
integrating MSR with multiple remote MKE clusters, signing the image
with users from each environment, and then providing runtime enforcement
in each remote MKE cluster separately.
Note
Before attempting this guide, familiarize yourself with Docker
Content Trust and Only Run Signed Images on a single MKE.
Many of the concepts within this guide may be new without
that background.
Cluster 1, running MKE 3.5.x or later, with an MSR 2.9.x or later
deployed within the cluster.
Cluster 2, running MKE 3.5.x or later, with no MSR node.
Nodes on Cluster 2 need to trust the Certificate Authority which
signed MSR’s TLS Certificate. This can be tested by logging on to a
cluster 2 virtual machine and running
curl https://msr.example.com.
The MSR TLS Certificate needs to be properly configured, ensuring that
the Loadbalancer/Public Address field has been configured, with
this address included within the certificate.
A machine with MCR 20.10.x or later
installed, as this contains the relevant
docker trust commands.
Registering MSR with a remote Mirantis Kubernetes Engine
As there is no registry running within cluster 2, by default MKE will
not know where to check for trust data. Therefore, the first thing we
need to do is register MSR within the remote MKE in cluster 2. When you
normally install MSR, this registration process happens by default to a
local MKE, or cluster 1.
Note
The registration process allows the remote MKE to get signature data
from MSR, however this will not provide Single Sign On (SSO). Users
on cluster 2 will not be synced with cluster 1’s MKE or MSR.
Therefore when pulling images, registry authentication will still
need to be passed as part of the service definition if the repository
is private. See the Kubernetes
example.
To add a new registry, retrieve the Certificate Authority (CA) used to
sign the MSR TLS Certificate through the MSR URL’s /ca endpoint.
$ curl -ks https://msr.example.com/ca > dtr.crt
Next, convert the MSR certificate into a JSON configuration file for
registration within the MKE for cluster 2.
You can find a template of the dtr-bundle.json below. Replace the
host address with your MSR URL, and enter the contents of the MSR CA
certificate between the newline characters \n and \n.
Note
JSON Formatting
Ensure there are no line breaks between each line of the MSR CA
certificate within the JSON file. Use your favorite JSON formatter
for validation.
$ cat dtr-bundle.json
{
  "hostAddress": "msr.example.com",
  "caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
Now upload the configuration file to cluster 2’s MKE through the MKE API
endpoint, /api/config/trustedregistry_. To authenticate against the
API of cluster 2’s MKE, we have downloaded an MKE client bundle,
extracted it in the current directory, and will reference the keys for
authentication.
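A hedged sketch of the upload, assuming the client bundle files ca.pem,
cert.pem, and key.pem sit in the current directory and that cluster 2's MKE
is reachable at <cluster-2-mke-url>:

curl --cacert ca.pem --cert cert.pem --key key.pem \
  -X POST -H "Content-Type: application/json" \
  -d @dtr-bundle.json \
  https://<cluster-2-mke-url>/api/config/trustedregistry_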
Navigate to the MKE web interface to verify that the JSON file was
imported successfully, as the MKE endpoint will not output anything.
Select Admin > Admin Settings > Mirantis Secure Registry. If the
registry has been added successfully, you should see the MSR listed.
Additionally, you can check the full MKE configuration
file within cluster 2’s MKE. Once downloaded, the
ucp-config.toml file should now contain a section called [registries].
We will now sign an image and push this to MSR. To sign images we need a
user’s public-private key pair from cluster 2. The pair can be found in a
client bundle, with key.pem being the private key and cert.pem
being the public key on an X.509 certificate.
First, load the private key into the local Docker trust store
(~/.docker/trust). The name used here is purely metadata to help
keep track of which keys you have imported.
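For example, with key.pem taken from the cluster 2 client bundle and the key
name chosen freely as metadata:

docker trust key load --name cluster2-user key.pem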
Next initiate the repository, and add the public key of cluster 2’s user
as a signer. You will be asked for a number of passphrases to protect
the keys. Keep note of these passphrases, and see the Docker Content Trust
documentation on managing delegations in a Notary server
(/engine/security/trust/trust_delegation/#managing-delegations-in-a-notary-server)
to learn more about managing keys.
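A sketch of those two steps, with cert.pem taken from the cluster 2 client
bundle and placeholder image coordinates:

docker trust signer add --key cert.pem cluster2-user msr.example.com/<namespace>/<repository>
docker trust sign msr.example.com/<namespace>/<repository>:<tag>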
Within the MSR web interface, you should now be able to see your newly
pushed tag with the Signed text next to the size.
You could sign this image multiple times if required, whether it’s
multiple teams from the same cluster wanting to sign the image, or you
integrating MSR with more remote MKEs so users from clusters 1, 2, 3, or
more can all sign the same image.
We can now enable Only Run Signed Images on the remote MKE. To do
this, login to cluster 2’s MKE web interface as an admin.
Select Admin > Admin Settings > Docker Content Trust.
Finally we can now deploy a workload on cluster 2, using a signed image
from an MSR running on cluster 1. This workload could be a simple
docker run, a Swarm Service, or a Kubernetes workload. As a simple
test, source a client bundle, and try running one of your signed images.
If the image is stored in a private repository within MSR, you need to
pass credentials to the Orchestrator as there is no SSO between cluster
2 and MSR. See the relevant
Kubernetes
documentation for more details.
Example Errors
Image or trust data does not exist
This means that the image was signed correctly, however the user who
signed the image does not meet the signing policy in cluster 2. This
could be because you signed the image with the wrong user keys.
Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs.
Jobs are added to a cluster-wide job queue, and then consumed and
executed by a job runner within MSR.
All MSR replicas have access to the job queue, and have a job runner
component that can get and execute work.
When a job is created, it is added to a cluster-wide job queue and
enters the waiting state. When one of the MSR replicas is ready to
claim the job, it waits a random time of up to 3 seconds to give
every replica the opportunity to claim the task.
A replica claims a job by adding its replica ID to the job. That way,
other replicas will know the job has been claimed. Once a replica claims
a job, it adds that job to an internal queue, which in turn sorts the
jobs by their scheduledAt time. Once that happens, the replica
updates the job status to running, and starts executing it.
The job runner component of each MSR replica keeps a
heartbeatExpiration entry on the database that is shared by all
replicas. If a replica becomes unhealthy, other replicas notice the
change and update the status of the failing worker to dead. Also,
all the jobs that were claimed by the unhealthy replica enter the
worker_dead state, so that other replicas can claim the job.
gc
A garbage collection job that deletes layers associated with deleted
images.
onlinegc
A garbage collection job that deletes layers associated with deleted
images without putting the registry in read-only mode.
onlinegc_metadata
A garbage collection job that deletes metadata associated with deleted
images.
onlinegc_joblogs
A garbage collection job that deletes job logs based on a configured job
history setting.
metadatastoremigration
A necessary migration that enables the onlinegc feature.
sleep
Used for testing the correctness of the jobrunner. It sleeps for 60
seconds.
false
Used for testing the correctness of the jobrunner. It runs the false
command and immediately fails.
tagmigration
Used for synchronizing tag and manifest information between the MSR
database and the storage backend.
bloblinkmigration
A DTR 2.1 to 2.2 upgrade process that adds references for blobs to
repositories in the database.
license_update
Checks for license expiration extensions if online license updates are
enabled.
scan_check
An image security scanning job. This job does not perform the actual
scanning, rather it spawns scan_check_single jobs (one for each layer
in the image). Once all of the scan_check_single jobs are complete,
this job will terminate.
scan_check_single
A security scanning job for a particular layer, given by the parameter
SHA256SUM. This job breaks up the layer into components and checks each
component for vulnerabilities.
scan_check_all
A security scanning job that updates all of the currently scanned images
to display the latest vulnerabilities.
update_vuln_db
A job that is created to update MSR’s vulnerability database. It uses an
Internet connection to check for database updates through
https://dss-cve-updates.docker.com/ and updates the
dtr-scanningstore container if there is a new update available.
scannedlayermigration
A DTR 2.4 to 2.5 upgrade process that restructures scanned image data.
push_mirror_tag
A job that pushes a tag to another registry after a push mirror policy
has been evaluated.
poll_mirror
A global cron that evaluates poll mirroring policies.
webhook
A job that is used to dispatch a webhook payload to a single endpoint.
nautilus_update_db
The old name for the update_vuln_db job. This may be visible on old
log files.
ro_registry
A user-initiated job for manually switching MSR into read-only mode.
tag_pruning
A job for cleaning up unnecessary or unwanted repository tags which can
be configured by repository admins.
To view the list of jobs within MSR, do the following:
Log in to the MSR web UI.
Navigate to System > Job Logs in the left-side navigation panel.
You should see a paginated list of past, running, and
queued jobs. By default, Job Logs shows the latest 10 jobs
on the first page.
If required, filter the jobs by:
Action
Worker ID, which is the ID of the worker in an MSR replica
responsible for running the job
Optional. Click Edit Settings on the right of the filtering
options to update your Job Logs settings.
To view the log details for a specific job, do the following:
Click View Logs next to the Last Updated value of the required job.
You will be redirected to the log detail page of your selected job.
Notice how the job ID is reflected in the URL while the
Action and the abbreviated form of the job ID are reflected
in the heading. Also, the JSON lines displayed are job-specific MSR
container logs.
Enter or select a different line count to truncate the number of
lines displayed. Lines are cut off from the end of the logs.
Each job runner has a limited capacity and will not claim jobs that
require a higher capacity. You can see the capacity of a job runner via
the GET /api/v0/workers endpoint:
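A hedged example of such a request; the URL and authentication details are
placeholders to adapt to your deployment:

curl -ks -u <admin-user>:<password-or-token> https://<msr-url>/api/v0/workers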
If worker 000000000000 notices the jobs in waiting state above,
then it will be able to pick up jobs 0 and 2 since it has the
capacity for both. Job 1 will have to wait until the previous scan
job, 0, is completed. The job queue will then look like:
The schedule field uses a cron expression following the
(seconds) (minutes) (hours) (day of month) (month) (day of week)
format. For example, 57 54 3 * * * with cron ID
48875b1b-5006-48f5-9f3c-af9fbdd82255 will be run at 03:54:57 on
any day of the week or the month, which is 2017-02-22T03:54:57Z in
the example JSON response above.
Mirantis Secure Registry has a global setting for auto-deletion of job logs
which allows them to be removed as part of garbage collection. MSR admins can
enable auto-deletion of repository events in MSR 2.6 based on specified
conditions which are covered below.
Log in to the MSR web UI.
Navigate to System in the left-side navigation panel.
Scroll down to Job Logs and turn on Auto-Deletion.
Specify the conditions with which a job log auto-deletion will be
triggered.
MSR allows you to set your auto-deletion conditions based on the
following optional job log attributes:
Name
Description
Example
Age
Lets you remove job logs which are older than your specified number
of hours, days, weeks or months
2 months
Max number of events
Lets you specify the maximum number of job logs allowed within MSR.
100
If you check and specify both, job logs will be removed from MSR
during garbage collection if either condition is met. You should see
a confirmation message right away.
Click Start Deletion if you are ready. Read more about
Garbage collection if you are unsure about this
operation.
Navigate to System > Job Logs in the left-side navigation panel
to verify that onlinegc_joblogs has started.
Note
When you enable auto-deletion of job logs, the logs will be
permanently deleted during garbage collection.
By default, anonymous users can only pull images from public
repositories. They cannot create new repositories or push to existing
ones. You can then grant permissions to enforce fine-grained access
control to image repositories.
Create a user.
Registered users can create and manage their own repositories.
You can also integrate with an LDAP service to manage users from
a single place.
Extend the permissions by adding the user to a team.
To extend a user’s permission and manage their permissions over
repositories, you add the user to a team. A team defines the
permissions users have for a set of repositories.
Note
To monitor user login events, enable the auditAuthLogsEnabled parameter
in the /settings API endpoint:
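A hedged example of such a request, assuming the settings endpoint is exposed
at /api/v0/meta/settings; the path and authentication details are assumptions
to adapt to your deployment:

curl -ks -u <admin-user>:<password-or-token> \
  -X POST -H "Content-Type: application/json" \
  -d '{"auditAuthLogsEnabled": true}' \
  https://<msr-url>/api/v0/meta/settings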
When a user creates a repository, only that user can make changes to the
repository settings, and push new images to it.
Organizations take permission management one step further by allowing multiple
users to own and manage a common set of repositories. This is useful when
implementing team workflows. With organizations you can delegate the management
of a set of repositories and user permissions to the organization
administrators.
An organization owns a set of repositories and defines a set of teams.
With teams you can define fine-grain permissions that a team of users has
for a set of repositories.
Essential to MSR authentication and authorization is the enablement of LDAP and
the subsequent syncing of your LDAP directory to your MSR-created teams and
users.
To enable LDAP and sync to your LDAP directory:
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System to display
the General tab.
Scroll down the page to the Auth Settings section and select
Click here to configure auth settings. The right pane will
display Authentication & Authorization in the details pane.
In the Identity Provider Integration section, move the
slider next to LDAP to enable the LDAP settings.
Enter the values that correspond with your LDAP server installation.
Choose between the following two methods for matching group members from an
LDAP directory. Refer to the table below for more information.
Select LDAP MATCH METHOD to change the method for
matching group members in the LDAP directory from
Match Search Results (default) to
Match Group Members. Fill out Group DN and
Group Member Attribute as required.
Keep the default Match Search Results method and fill out
Search Base DN, Search filter, and
Search subtree instead of just one level, as required.
Optional. Select Immediately Sync Team Members to run an LDAP
sync operation immediately after saving the configuration for the team.
Click Create.
You can match group members from an LDAP directory either by matching group
members or by matching search results:
Bind method
Description
Match Group Members (direct bind)
Specifies that team members are synced directly with
members of a group in the LDAP directory of your organization. The team
membership is synced to match the membership of the group.
Group DN
The distinguished name of the group from which you select users.
Group Member Attribute
The value of this group attribute corresponds to the distinguished
names of the members of the group.
Match Search Results (search bind)
Specifies that team members are synced using a search
query against the LDAP directory of your organization. The team
membership is synced to match the users in the search results.
Search Base DN
The distinguished name of the node in the directory tree where the
search starts looking for users.
Search filter
Filters to find users. If empty, existing users in the search scope are
added as members of the team.
Search subtree instead of just one level
Defines search through the full LDAP tree, not just one level, starting
at the base DN.
SAML configuration requires that you know the metadata URL for your chosen
identity provider, as well as the URL for the MSR host that contains the IP
address or domain of your MSR installation.
To configure SAML integration on MSR:
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System to display
the General tab.
Scroll down the page to the Auth Settings section and select
Click here to configure auth settings. The right pane will
display Authentication & Authorization in the details pane.
In the Identity Provider Integration section, move the
slider next to SAML to enable the SAML settings.
In the SAML idP Server subsection, enter values for the
following fields: SAML Proxy URL, SAML Proxy User,
SAML Proxy Password, and IdP Metadata URL.
SAML Proxy URL
Optional. URL of the user proxy server used by MSR to fetch the metadata
specified in the IdP Metadata URL field.
SAML Proxy User
Optional. The user name for proxy authentication.
SAML Proxy Password
Optional. The password for proxy authentication.
IdP Metadata URL
URL for the identity provider metadata
Note
If the metadata URL is publicly certified, you can continue with the
default settings:
Skip TLS Verification unchecked
Root Certificates Bundle blank
Mirantis recommends the use of TLS verification in production
environments. If the metadata URL cannot be certified by the default
certificate authority store, you must provide the certificates from the
identity provider in the Root Certificates Bundle field.
Click Test Proxy Settings to verify that the proxy server has
access to the URL entered into the IdP Metadata URL field.
In the SAML Service Provider subsection, in the MSR
Host field, enter the URL that includes the IP address or
domain of your MSR installation.
The port number is optional. The current IP address or domain displays by
default.
Optional. Customize the text of the sign-in button by entering the text for
the button in the Customize Sign In Button Text field. By
default, the button text is Sign in with SAML.
Copy the SERVICE PROVIDER METADATA URL, the
ASSERTION CONSUMER SERVICE (ACS) URL, and the SINGLE
LOGOUT (SLO) URL, to paste later into the identity provider workflow.
Click Save.
Note
To configure a service provider, enter the Service provider metadata URL
to obtain its metadata. To access the URL, you may need to provide the
CA certificate that can verify the remote server.
To link group membership with users, use the Edit or
Create team dialog to associate SAML group assertion with the
MSR team, to synchronize user team membership when the user logs in.
Simple Cloud Identity Management / System for Cross-domain Identity Management
(SCIM) provides an alternative to LDAP for provisioning and managing users
and groups, as well as syncing users and groups with an upstream
identity provider. Using the SCIM schema and the API, you can use single
sign-on (SSO) across various tools.
SCIM implementation allows proactive synchronization with MSR and eliminates
manual intervention.
Docker’s SCIM implementation uses SCIM version 2.0.
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System to display
the General tab.
Scroll down the page to the Auth Settings section and select
Click here to configure auth settings. The right pane will
display Authentication & Authorization in the details pane.
In the Identity Provider Integration section, move the
slider next to SCIM to enable the SCIM settings.
By default, docker-datacenter is the organization to which the SCIM
team belongs. Enter the API token in the UI or have MSR generate a UUID
for you.
The base URL for all SCIM API calls is
https://<HostIP>/enzi/v0/scim/v2/. All SCIM methods are accessible as
API endpoints under this base URL.
Bearer Auth is the API authentication method. When configured, SCIM API
endpoints are accessed via the following HTTP header: Authorization: Bearer <token>
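For example, to list users through the standard SCIM Users resource; a hedged
sketch in which the host and token are placeholders:

curl -ks -H "Authorization: Bearer <token>" https://<HostIP>/enzi/v0/scim/v2/Users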
Note
SCIM API endpoints are not accessible by any other user (or their
token), including the MSR administrator and MSR admin Bearer token.
An HTTP authentication request header that contains
a Bearer token is the only method supported.
Updates a user’s active status. Inactive users can be reactivated by
specifying "active":true. Active users can be deactivated by
specifying "active":false. The value of the {id} should be the
user ID.
Updates existing user information. All attribute values are overwritten,
including attributes for which empty values or no values were provided.
If a previously set attribute value is left blank during a PUT
operation, the value is updated with a blank value in accordance with
the attribute data type and storage provider. The value of the {id}
should be the user ID.
Updates an existing group resource, allowing individual (or groups of)
users to be added or removed from the group with a single operation.
Add is the default operation.
Setting the operation attribute of a member object to delete removes
members from a group.
Updates an existing group resource, overwriting all values for a group
even if an attribute is empty or not provided. PUT replaces all
members of a group with members provided via the members attribute.
If a previously set attribute is left blank during a PUT operation,
the new value is set to blank in accordance with the data type of the
attribute and the storage provider.
Discovers the resource types available on a SCIM service provider, for
example, Users and Groups. Each resource type defines the
endpoints, the core schema URI that defines the resource, and any
supported schema extensions.
Returns a JSON structure that describes the SCIM specification features
available on a service provider using the schemas attribute of
urn:ietf:params:scim:schemas:core:2.0:ServiceProviderConfig.
You can extend a user’s default permissions by granting them individual
permissions in other image repositories, by adding the user to a team. A team
defines the permissions that a set of users has for a set of repositories.
To create a new team:
Log in to the MSR web UI.
Navigate to the Organizations page.
Click the organization within which you want to create the team.
Click Save to create the organization, and then click
the organization to define which users are allowed to manage this
organization. These users will be able to edit the organization settings, edit
all repositories owned by the organization, and define the user permissions for
this organization.
For this, click the Add user button, select the users
that you want to grant permissions to manage the organization, and click
Save. Then change their permissions from Member to
Org Owner.
You can configure MSR to automatically post event notifications to a
webhook URL of your choosing. This lets you build complex CI and CD
pipelines with your Docker images.
To subscribe to the webhook events for a repository or namespace you must have
admin rights for the particular component.
For example, a “foo/bar” repository admin may subscribe to its tag push
events, whereas an MSR admin can subscribe to any event.
In your browser, navigate to https://<msr-url> and log in with
your credentials.
Select Repositories from the left-side navigation panel, and
then click the name of the repository that you want to view. Note that
you will have to click the repository name following the / after the
specific namespace for your repository.
Select the Webhooks tab, and click New Webhook.
From the Notification to receive drop-down list, select the
event that will trigger the webhook.
Set the URL that will receive the JSON payload.
Validate the integration by clicking the Test button
next to the Webhook URL field.
If the integration is working, you will receive a JSON payload at the URL
you specified for the event type notification you selected.
Optional. Assign a TLS certificate to your webhook:
Expand Show advanced settings.
Paste the TLS certificate associated with your webhook URL into the
TLS Cert field.
Note
For testing purposes, you can test your TLS certificate over HTTP
rather than HTTPS.
To circumvent TLS verification, tick the Skip TLS
Verification checkbox.
Optional. Format your webhook message:
You can use Golang templates
to format the webhook messages that are sent.
Expand Show advanced settings.
In the Webhook Message Format field,
paste the configured Golang template for the webhook message.
Click Create to save the webhook. Once saved, your webhook is
active and starts sending POST notifications whenever your selected event
type is triggered.
As a repository admin, you can add or delete a webhook at any point.
Additionally, you can create, view, and delete webhooks for your
organization or trusted registry using the API.
By default, the webhook status is set to Active on its creation.
In your browser, navigate to https://<msr-url> and log in with
your credentials.
Select Repositories from the left-side navigation panel, and
then click the name of the repository that you want to view. Note that
you will have to click the repository name following the / after the
specific namespace for your repository.
Select the Webhooks tab. The existing webhooks display on the
page.
Locate the webhook for which you want to change the status and
move the slider underneath the Active heading accordingly.
You can use Golang templates to dynamically format webhook
messages. This feature enables you to personalize webhook messages based
on your needs.
Create your webhook message template using Golang syntax:
In the Webhooks tab, expand
Show advanced settings.
In the Webhook Message Format field,
configure the Golang template for the webhook message.
Define variables and control structures for the system
to make templates dynamic. Defined variables are replaced with their
respective values during the creation of the webhook message.
You can also use standard Golang functions to manipulate the values of
the variables.
Click Create to save the webhook. Once saved, your webhook is
active and starts sending POST notifications whenever your selected event
type is triggered.
{"message":"Tag {{ .Contents.Tag }} was pushed for repository {{ .Contents.Repository }}"}
Example output:
{"message":"Tag 1.0 was pushed for repository example_repo"}
The variables used in the template are defined in the webhook message
and are enclosed in double curly braces.
For example, the variable .Contents.Tag is replaced with the value of
the Tag field in the webhook message contents.
Control structures are used to add conditional logic to the template.
For instance, you can use an if statement to verify the value of a field:
{{if eq .Contents.Name "test"}}{"message":"The Name field is test"}{{end}}
Every field in the webhook message is accessible through Golang template in the
webhook format field. This includes fields such as .Type, .CreatedAt,
.Location, and the nested fields within .Contents.
Refer to Create a webhook for your repository for more details.
Refer to Webhook types for a list of events that can trigger
notifications through the API.
From the MSR web interface, click API on the bottom left-side
navigation panel to explore the API resources and endpoints. Click
Execute to send your API request.
Your MSR hostname serves as the base URL for your API requests.
Use curl to send HTTP or HTTPS API requests. Note that you must
specify skipTLSVerification:true on your request to test the
webhook endpoint over HTTP.
key
The namespace/organization or repo to subscribe to. For
example, foo/bar to subscribe to pushes to the bar repository
within the namespace/organization foo.
endpoint
The URL to send the JSON payload to.
You must supply a “key” to scope a particular webhook event
to a repository or a namespace/organization.
If you are an MSR admin, you can omit the “key”, in which case a POST event
notification of the specified type will be triggered for all MSR repositories
and namespaces.
type
Applies to the event type received at the specified
subscription endpoint.
contents
Refers to the payload of the event itself. Each event is
different, therefore the structure of the JSON object in contents
will change depending on the event type. Refer to Content
structure for more details.
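A minimal sketch of creating such a subscription with curl follows; the TAG_PUSH event type, the foo/bar key, and the receiving endpoint are illustrative values only.

# Subscribe to tag push events on the foo/bar repository.
curl -u <username>:<access-token> -X POST "https://<msr-url>/api/v0/webhooks" \
  -H "Content-Type: application/json" \
  -d '{
        "type": "TAG_PUSH",
        "key": "foo/bar",
        "endpoint": "https://example.com/webhook-receiver"
      }'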
Before subscribing to an event, you can view and test your endpoints
using fake data. To send a test payload, send a POST request to
/api/v0/webhooks/test with the following payload:
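The following curl sketch sends such a test request; the event type and endpoint shown are illustrative.

# Send a fake payload of the chosen event type to your endpoint.
curl -u <username>:<access-token> -X POST "https://<msr-url>/api/v0/webhooks/test" \
  -H "Content-Type: application/json" \
  -d '{
        "type": "TAG_PUSH",
        "endpoint": "https://example.com/webhook-receiver"
      }'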
Change type to the event type that you want to receive. MSR will
then send an example payload to your specified endpoint. The example
payload sent is always the same.
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (e.g. 10.10.10.1/foo/bar:tag)
"scanSummary": {
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"critical": 0, // (int) number of critical issues, where CVSS >= 7.0
"major": 0, // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
"minor": 0, // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
"last_scan_status": 0, // (int) enum; see scan status section
"check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
...
}
}
{
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"event": "", // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
"author": "", // (string) the name of the user responsible for the event
"data": {} // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
To view the subscriptions for a resource you must first have admin rights for
that resource. After which, you can send requests for all subscriptions from a
particular API endpoint. The response will include data for all resource users.
To view all webhook subscriptions for a repository, run:
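A minimal sketch of such a request with curl, using the illustrative foo/bar repository:

curl -u <username>:<access-token> \
  "https://<msr-url>/api/v0/repositories/foo/bar/webhooks"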
You can delete a subscription if you are an MSR repository admin or an
admin of the resource associated with the event subscription. Regular users,
however, can only delete subscriptions for the repositories they manage.
To delete a webhook subscription, send a DELETE /api/v0/webhooks/{id}
request, replacing {id} with the ID of the webhook subscription you intend
to delete.
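A minimal curl sketch of that request:

curl -u <username>:<access-token> -X DELETE \
  "https://<msr-url>/api/v0/webhooks/<webhook-id>"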
Starting in DTR 2.6, each repository page includes an Activity tab
which displays a sortable and paginated list of the most recent events within
the repository. This offers better visibility along with the ability to audit
events. Event types listed vary according to your repository
permission level. Additionally, MSR admins can enable auto-deletion
of repository events as part of maintenance and cleanup.
In the following section, we will show you how to view and audit the
list of events in a repository. We will also cover the event types
associated with your permission level.
As of DTR 2.3, admins were able to view a list of MSR events using the API. MSR
2.6 enhances that feature by showing a permission-based events list for each
repository page on the web interface. To view the list of events within a
repository, do the following:
Navigate to https://<msr-url> and log in with your MSR credentials.
Select Repositories from the left-side navigation panel, and
then click on the name of the repository that you want to view. Note that
you will have to click on the repository name following the / after the
specific namespace for your repository.
Select the Activity tab. You should see a paginated list of the
latest events based on your repository permission level. By default,
Activity shows the latest 10 events and excludes pull
events, which are only visible to repository and MSR admins.
If you’re a repository or an MSR admin, uncheck Exclude pull
to view pull events. This should give you a better understanding of who
is consuming your images.
To update your event view, select a different time filter from the
drop-down list.
The following table breaks down the data included in an event and uses
the highlighted CreatePromotionPolicy event as an example.
Event detail
Description
Example
Label
Friendly name of the event.
CreatePromotionPolicy
Repository
This will always be the repository in review following the
<user-or-org>/<repository_name> convention outlined in
Create a repository
test-org/test-repo-1
Tag
Tag affected by the event, when applicable.
test-org/test-repo-1:latest where latest is the affected tag
SHA
The digest value for CREATE operations, such as creating a new image
tag or a promotion policy.
sha256:bbf09ba3
Type
Event type. Possible values are: CREATE, GET, UPDATE,
DELETE, SEND, FAIL and SCAN.
CREATE
Initiated by
The actor responsible for the event. For user-initiated events, this
will reflect the user ID and link to that user’s profile. For image
events triggered by a policy – pruning, pull / push mirroring, or
promotion – this will reflect the relevant policy ID except for manual
promotions where it reflects PROMOTIONMANUAL_P, and link to the
relevant policy page. Other event actors may not include a link.
PROMOTIONCA5E7822
Date and Time
When the event happened in your configured time zone.
Given the level of detail on each event, it should be easy for MSR and
security admins to determine what events have taken place inside of MSR.
For example, when an image which shouldn’t have been deleted ends up
getting deleted, the security admin can determine when and who initiated
the deletion.
Refers to CreateManifest and UpdateTag events. Learn more
about pushing images.
Authenticated users
Scan
Requires security scanning to be set
up by an MSR admin.
Once enabled, this will display as a SCAN event type.
Authenticated users
Promotion
Refers to a CreatePromotionPolicy event which links to the
Promotions tab of the repository where you can edit
the existing promotions. See Promotion Policies for different ways to promote
an image.
Repository admin
Delete
Refers to “Delete Tag” events. Learn more about Delete images.
Authenticated users
Pull
Refers to “Get Tag” events. Learn more about Pull an image.
Mirantis Secure Registry has a global setting for repository event
auto-deletion. This allows event records to be removed as part of garbage
collection. MSR administrators can enable auto-deletion of repository
events in DTR 2.6 based on specified conditions which are covered below.
In your browser, navigate to https://<msr-url> and log in with your
admin credentials.
Select System from the left-side navigation panel which displays
the Settings page by default.
Scroll down to Repository Events and turn on
Auto-Deletion.
Specify the conditions with which an event auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following
optional repository event attributes:
Name
Description
Example
Age
Lets you remove events older than your specified number of hours, days,
weeks or months.
2months
Max number of events
Lets you specify the maximum number of events allowed in the
repositories.
6000
If you check and specify both, events in your repositories will be removed
during garbage collection if either condition is met. You should see a
confirmation message right away.
Click Start GC if you’re ready.
Navigate to System > Job Logs to confirm that onlinegc has happened.
Mirantis Secure Registry allows you to automatically promote and mirror
images based on a policy. In MSR 2.7, you have the option to promote
applications with the experimental docker app CLI addition. Note that
scanning-based promotion policies do not take effect until all
application-bundled images have been scanned. This way you can create a
Docker-centric development pipeline.
You can mix and match promotion policies, mirroring policies, and
webhooks to create flexible development pipelines that integrate with
your existing CI/CD systems.
Promote an image using policies
One way to create a promotion pipeline is to automatically promote
images to another repository.
You start by defining a promotion policy that’s specific to a
repository. When someone pushes an image to that repository, MSR checks
if it complies with the policy you set up and automatically pushes the
image to another repository.
You can also promote images between different MSR deployments. This not
only allows you to create promotion policies that span multiple MSRs,
but also allows you to mirror images for security and high availability.
You start by configuring a repository with a mirroring policy. When
someone pushes an image to that repository, MSR checks if the policy is
met, and if so pushes it to another MSR deployment or Docker Hub.
Another option is to mirror images from another MSR deployment. You
configure a repository to poll for changes in a remote repository. All
new images pushed into the remote repository are then pulled into MSR.
This is an easy way to configure a mirror for high availability since
you won’t need to change firewall rules that are in place for your
environments.
Mirantis Secure Registry allows you to create image promotion pipelines
based on policies.
In this example we will create an image promotion pipeline such that:
Developers iterate and push their builds to the dev/website
repository.
When the team creates a stable build, they make sure their image is
tagged with -stable.
When a stable build is pushed to the dev/website repository, it
will automatically be promoted to qa/website so that the QA team
can start testing.
With this promotion policy, the development team doesn’t need access to
the QA repositories, and the QA team doesn’t need access to the
development repositories.
Once you’ve created a repository, navigate to the
repository page on the MSR web interface, and select the Promotions tab.
Note
Only administrators can globally create and edit promotion policies.
By default users can only create and edit promotion policies on
repositories within their user namespace.
Click New promotion policy, and define the image promotion
criteria.
MSR allows you to set your promotion policy based on the following image
attributes:
Tag name
Whether the tag name equals, starts with, ends with, contains, is one
of, or is not one of your specified string values
Promote to Target if Tag name ends in stable
Component
Whether the image has a given component and the component name equals,
starts with, ends with, contains, is one of, or is not one of your
specified string values
Promote to Target if Component name starts with b
Vulnerabilities
Whether the image has vulnerabilities – critical, major, minor,
or all – and your selected vulnerability filter is greater than or
equals, greater than, equals, not equals, less than or equals, or less
than your specified number
Promote to Target if Critical vulnerabilities = 3
Note
Only integer values are supported.
License
Whether the image uses an intellectual property license and is one of
or not one of your specified words
Promote to Target if License name = docker
Now you need to choose what happens to an image that meets all the
criteria.
Select the target organization or namespace and repository
where the image is going to be pushed. You can choose to keep the image
tag, or transform the tag into something more meaningful in the
destination repository, by using a tag template.
In this example, if an image in the dev/website is tagged with a
word that ends in “stable”, MSR will automatically push that image to
the qa/website repository. In the destination repository the image
will be tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that
complies with the policy, it automatically gets promoted. To confirm,
select the Promotions tab on the dev/website repository.
You can also review the newly pushed tag in the target repository by
navigating to qa/website and selecting the Tags tab.
Mirantis Secure Registry allows you to create mirroring policies for a
repository. When an image gets pushed to a repository and meets the
mirroring criteria, MSR automatically pushes it to a repository in a
remote Mirantis Secure Registry or Hub registry.
This not only allows you to mirror images but also allows you to create
image promotion pipelines that span multiple MSR deployments and
datacenters.
Available since MSR 3.1.8
Similarly, Helm charts can also be mirrored between MSR instances.
This capability ensures consistent availability and management of Helm charts
across multiple MSR environments. Furthermore, Helm charts can be mirrored
and pushed to Docker Hub repositories, which enables integration with external
sources.
In this example we will create an image mirroring policy such that:
Developers iterate and push their builds to the
msr-example.com/dev/website repository in the MSR
deployment dedicated to development.
When the team creates a stable build, they make sure their image is
tagged with -stable.
When a stable build is pushed to msr-example.com/dev/website, it
will automatically be pushed to qa-example.com/qa/website,
mirroring the image and promoting it to the next stage of
development.
With this mirroring policy, the development team does not need access to
the QA cluster, and the QA team does not need access to the development
cluster.
You need to have permissions to push to the destination repository in
order to set up the mirroring policy.
Once you have created a repository, navigate to
the repository page on the web interface, and select the Mirrors
tab.
Click New mirror to define where the image will be pushed if it
meets the mirroring criteria.
Under Mirror direction, choose Push to remote registry.
Specify the following details:
Field
Description
Registry type
You can choose between Mirantis Secure Registry and
Docker Hub. If you choose MSR, enter your MSR URL.
Otherwise, Docker Hub defaults to
https://index.docker.io
Username and password or access token
Your credentials in the remote repository you wish to push to.
To use an access token instead of your password, see
authentication token.
Repository
Enter the namespace and the repository_name after the /
Show advanced settings
Enter the TLS details for the remote repository or check
Skip TLS verification. If the MSR remote repository is
using self-signed TLS certificates or certificates signed by your own
certificate authority, you also need to provide the public key
certificate for that CA. You can retrieve the certificate by accessing
https://<msr-domain>/ca. Remote certificate authority
is optional for a remote repository in Docker Hub.
Note
Make sure the account you use for the integration has
permissions to write to the remote repository.
Click Connect to test the integration.
In this example, the image gets pushed to the qa/example repository
of an MSR deployment available at qa-example.com using a service
account that was created just for mirroring images between repositories.
Next, set your push triggers. MSR allows you to set your mirroring
policy based on the following image attributes:
Name
Description
Example
Tag name
Whether the tag name equals, starts with, ends with, contains, is one
of, or is not one of your specified string values
Copy image to remote repository if Tag name ends in stable
Component
Whether the image has a given component and the component name equals,
starts with, ends with, contains, is one of, or is not one of your
specified string values
Copy image to remote repository if Component name starts with b
Vulnerabilities
Whether the image has vulnerabilities – critical, major, minor,
or all – and your selected vulnerability filter is greater than or
equals, greater than, equals, not equals, less than or equals, or less
than your specified number
Copy image to remote repository if Critical vulnerabilities = 3
License
Whether the image uses an intellectual property license and is one of
or not one of your specified words
Copy image to remote repository if License name = docker
You can choose to keep the image tag, or transform the tag into
something more meaningful in the remote registry by using a tag
template.
In this example, if an image in the dev/website repository is tagged
with a word that ends in stable, MSR will automatically push that
image to the MSR deployment available at qa-example.com. The image
is pushed to the qa/example repository and is tagged with the
timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that
complies with the policy, it automatically gets promoted to
qa/example in the remote trusted registry at qa-example.com.
When an image is pushed to another registry using a mirroring policy,
scanning and signing data is not persisted in the destination
repository.
If you have scanning enabled for the destination repository, MSR is
going to scan the image pushed. If you want the image to be signed, you
need to do it manually.
Mirantis Secure Registry allows you to set up a mirror of a repository by
constantly polling it and pulling new image tags as they are pushed.
This ensures your images are replicated across different registries for
high availability. It also makes it easy to create a development
pipeline that allows different users access to a certain image without
giving them access to everything in the remote registry.
To mirror a repository, start by
creating a repository in the MSR deployment that
will serve as your mirror. Previously, you were only able to set up pull
mirroring from the API. Starting in DTR 2.6, you can also mirror and pull
from a remote MSR or Docker Hub repository.
Available since MSR 3.1.8
In addition to mirroring images, Helm charts can also be mirrored between
MSR instances. This capability ensures consistent availability and management
of Helm charts across multiple MSR environments. Furthermore, MSR supports
the mirroring and pulling of Helm charts from Docker Hub repositories,
which enables integration with external sources.
To get started, navigate to https://<msr-url> and log in with your
MKE credentials.
Select Repositories in the left-side navigation panel, and then
click the name of the repository you want to view. Note that you will
have to click on the repository name following the / after the specific
namespace for your repository.
Next, select the Mirrors tab and click New mirror.
On the New mirror page, choose
Pull from remote registry.
Specify the following details:
Field
Description
Registry type
You can choose between Mirantis Secure Registry and
Docker Hub. If you choose MSR, enter your MSR URL.
Otherwise, Docker Hub defaults to
https://index.docker.io
Username and password or access token
Your credentials in the remote repository you wish to poll from.
To use an access token instead of your password, see
authentication token.
Repository
Enter the namespace and the repository_name after the /
Show advanced settings
Enter the TLS details for the remote repository or check
Skip TLS verification. If the MSR remote repository is using
self-signed certificates or certificates signed by your own certificate
authority, you also need to provide the public key certificate for that
CA. You can retrieve the certificate by accessing
https://<msr-domain>/ca. Remote certificate authority
is optional for a remote repository in Docker Hub.
After you have filled out the details, click Connect to test the
integration.
Once you have successfully connected to the remote repository, new
buttons appear:
Click Save to mirror only future tags, or
To mirror all existing and future tags, click Save & Apply
instead.
There are a few different ways to send your MSR API requests. To explore
the different API resources and endpoints from the web interface, click
API on the bottom left-side navigation panel.
Click Try it out and enter your HTTP request details.
namespace and reponame refer to the repository that will be poll
mirrored. The boolean field, initialEvaluation, corresponds to
Save when set to false and will only mirror images created
after your API request. Setting it to true corresponds to
Save & Apply which means all tags in the remote repository will
be evaluated and mirrored. The other body parameters correspond to the
relevant remote repository details that you can see on the MSR web
interface. As a best practice,
use a service account just for this purpose. Instead of providing the
password for that account, you should pass an authentication
token.
If the MSR remote repository is using self-signed certificates or
certificates signed by your own certificate authority, you also need to
provide the public key certificate for that CA. You can get it by
accessing https://<msr-domain>/ca. The remoteCA field is
optional for mirroring a Docker Hub repository.
Click Execute. On success, the API returns an HTTP 201
response.
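The equivalent request can also be issued with curl. The sketch below is illustrative only: the endpoint path and field names mirror the parameters described above, but you should confirm the exact schema in the API reference before use.

curl -u <username>:<access-token> -X POST \
  "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pollMirroringPolicies" \
  -H "Content-Type: application/json" \
  -d '{
        "initialEvaluation": false,
        "remoteHost": "https://qa-example.com",
        "remoteRepository": "qa/example",
        "username": "<service-account>",
        "password": "<authentication-token>",
        "remoteCA": "<remote-ca-certificate-pem>",
        "skipTLSVerification": false
      }'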
Once configured, the system polls for changes in the remote repository
and runs the poll_mirror job every 15 minutes. On success, the
system will pull in new images and mirror them in your local repository.
Starting in DTR 2.6, you can filter for poll_mirror jobs to review
when the job last ran. To manually trigger the job and force pull
mirroring, use the POST /api/v0/jobs API endpoint and specify
poll_mirror as your action.
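A minimal curl sketch of that request; the body shape is an assumption based on the action parameter described above.

curl -u <admin-username>:<access-token> -X POST "https://<msr-url>/api/v0/jobs" \
  -H "Content-Type: application/json" \
  -d '{"action": "poll_mirror"}'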
When defining promotion policies you can use templates to dynamically
name the tag that is going to be created.
Important
Whenever an image promotion event occurs, the MSR timestamp for the event
is in UTC (Coordinated Universal Time). That timestamp, however, is converted
by the browser and presented in the user's time zone. Inversely, if a
time-based tag is applied to a target image, MSR captures it in UTC but
cannot convert it to the user’s timezone due to the tags being immutable
strings.
You can use these template keywords to define your new tag:
Helm is a tool that manages Kubernetes packages called charts, which are
put to use in defining, installing, and upgrading Kubernetes applications.
These charts, in conjunction with Helm tooling, deploy applications
into Kubernetes clusters. Charts are comprised of a collection of files and
directories, arranged in a particular structure and packaged as a .tgz
file. Charts define Kubernetes objects, such as the Service
and DaemonSet objects used in the application under deployment.
MSR enables you to use Helm to store and serve Helm charts,
thus allowing users to push charts to and pull charts from MSR
repositories using the Helm CLI and the MSR API.
Available since MSR 3.1.8
Helm charts can be mirrored between MSR instances, thus ensuring consistent
availability and management across multiple environments. Furthermore, MSR
supports the pulling and pushing of Helm charts between its instances, as well
as the mirroring of Helm charts between MSR and external repositories, such
as Docker Hub.
Note
To obtain the CA certificate required by the Helm charts commands, navigate
to https://<msr-url>/ca and download the certificate, or run:
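For example, a minimal curl sketch:

# Download the MSR CA certificate to a local file.
curl -sk "https://<msr-url>/ca" -o ca.crt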
Though the Helm CLI can be used to pull a Helm chart by itself or a Helm
chart and its provenance file, it is not possible to use the Helm CLI to
pull a provenance file by itself.
To push a Helm chart using the Helm CLI, first install the helm cm-push
plugin from chartmuseum/helm-push. It is not possible to push a
provenance file using the Helm CLI.
Use the helm push CLI command to push a Helm chart:
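A minimal sketch of such a push, assuming the cm-push plugin and the /charts/<namespace>/<reponame> repository layout used elsewhere in this guide:

# Install the plugin once, then push a packaged chart to the MSR Helm repository.
helm plugin install https://github.com/chartmuseum/helm-push
helm cm-push mychart-0.1.0.tgz "https://<msr-url>/charts/<namespace>/<reponame>" \
  --username <username> --password <access-token> --ca-file ca.crt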
Use the MSR web UI to view the MSR Helm repository charts.
In the MSR web UI, navigate to Repositories.
Click the name of the repository that contains the charts you want to view.
The page will refresh to display the detail for the selected Helm
repository.
Click the Charts tab. The page will refresh to display
all the repository charts.
View
UI sequence
Chart versions
Click the View Chart button associated with the required
Helm repository.
Chart description
Click the View Chart button associated with the required
Helm repository.
Click the View Chart button for the particular chart
version.
Default values
Click the View Chart button associated with the required
Helm repository.
Click the View Chart button for the particular chart
version.
Click Configuration.
Chart templates
Click the View Chart button associated with the required
Helm repository.
Click the View Chart button for the particular chart
version.
Helm chart linting can ensure that Kubernetes YAML files and Helm charts
adhere to a set of best practices, with a focus on production readiness and
security.
A set of established rules forms the basis of Helm chart linting. The process
generates a report that you can use to take any necessary actions.
deprecated-service-account-field
Indicates when deployments use the deprecated serviceAccount field.
Use the serviceAccountName field instead.
drop-net-raw-capability
Indicates when containers do not drop NET_RAW capability.
NET_RAW makes it so that an application within the container is able
to craft raw packets, use raw sockets, and bind to any address. Remove
this capability in the containers' security contexts.
env-var-secret
Indicates when objects use a secret in an environment variable.
Do not use raw secrets in environment variables. Instead, either mount
the secret as a file or use a secretKeyRef. Refer to Using Secrets
for details.
mismatching-selector
Indicates when deployment selectors fail to match the pod template
labels.
Confirm that your deployment selector correctly matches the labels in
its pod template.
no-anti-affinity
Indicates when deployments with multiple replicas fail to specify
inter-pod anti-affinity, to ensure that the orchestrator attempts to
schedule replicas on different nodes.
Specify anti-affinity in your pod specification to ensure that the
orchestrator attempts to schedule replicas on different nodes. Using
podAntiAffinity, specify a labelSelector that matches pods for
the deployment, and set the topologyKey to
kubernetes.io/hostname. Refer to Inter-pod affinity and anti-affinity
for details.
no-extensions-v1beta
Indicates when objects use deprecated API versions under extensions/v1beta.
ssh-port
Indicates when deployments expose port 22, which is commonly reserved
for SSH access.
Ensure that non-SSH services are not using port 22. Confirm that any
actual SSH servers have been vetted.
unset-cpu-requirements
Indicates when containers do not have CPU requests and limits set.
Set CPU requests and limits for your container based on its
requirements. Refer to Requests and limits
for details.
unset-memory-requirements
Indicates when containers do not have memory requests and limits set.
Set memory requests and limits for your container based on its
requirements. Refer to Requests and limits
for details.
writable-host-mount
Indicates when containers mount a host path as writable.
Set containers to mount host paths as readOnly, if you need to
access files on the host.
cluster-admin-role-binding
CIS Benchmark 5.1.1 Ensure that the cluster-admin role is only used
where required.
Create and assign a separate role that has access to specific
resources/actions needed for the service account.
docker-sock
Alert on deployments with docker.sock mounted in containers.
Ensure the Docker socket is not mounted inside any containers by
removing the associated Volume and VolumeMount in deployment
yaml specification. If the Docker socket is mounted inside a container
it could allow processes running within the container to execute Docker
commands which would effectively allow for full control of the host.
exposed-services
Alert on services for forbidden types.
Ensure containers are not exposed through a forbidden service type such
as NodePort or LoadBalancer.
host-ipc
Alert on pods/deployment-likes with sharing host’s IPC namespace.
Ensure the host’s IPC namespace is not shared.
host-network
Alert on pods/deployment-likes with sharing host’s network namespace.
Ensure the host’s network namespace is not shared.
host-pid
Alert on pods/deployment-likes with sharing host’s process namespace.
Ensure the host’s process namespace is not shared.
privilege-escalation-container
Alert on containers if allowing privilege escalation that could gain
more privileges than its parent process.
privileged-ports
Alert on deployments with privileged ports mapped in containers.
Ensure privileged ports [0, 1024] are not mapped within
containers.
sensitive-host-mounts
Alert on deployments with sensitive host system directories mounted in containers.
Ensure sensitive host system directories are not mounted in containers
by removing those Volumes and VolumeMounts.
unsafe-proc-mount
Alert on deployments with unsafe /proc mount
(procMount=Unmasked) that will bypass the default masking behavior
of the container runtime.
Ensure the container does not unsafely expose parts of /proc by setting
procMount=Default. Unmasked ProcMount bypasses the default
masking behavior of the container runtime. See Pod Security Standards
for more details.
unsafe-sysctls
Alert on deployments specifying unsafe sysctls that may lead to
severe problems like wrong behavior of containers.
For the following endpoints, note that while the Swagger API Reference
does not specify example responses for HTTP 200 codes,
this is due to a Swagger bug and responses will be returned.
# Get chart or provenance file from repo
GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>

# Template a chart version
GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Tag pruning is the process of cleaning up unnecessary or unwanted repository
tags. As of v2.6, you can configure Mirantis Secure Registry (MSR) to
automatically perform tag pruning on repositories that you manage by:
Specifying a tag pruning policy or alternatively,
Setting a tag limit
Note
When run, tag pruning only deletes a tag and does not carry out any
actual blob deletion.
Known Issue
While the tag limit field is disabled when you turn on immutability for a
new repository, this is currently not the case with Repository Settings. As
a workaround, turn off immutability when setting a tag limit via
Repository Settings > Pruning.
In the following section, we will cover how to specify a tag pruning
policy and set a tag limit on repositories that you manage. It will not
include modifying or deleting a tag pruning policy.
As a repository administrator, you can now add tag pruning policies on
each repository that you manage. To get started, navigate to
https://<msr-url> and log in with your credentials.
Select Repositories in the left-side navigation panel, and then
click the name of the repository you want to update. Note that you will
have to click on the repository name following the / after the specific
namespace for your repository.
Select the Pruning tab, and click New pruning policy
to specify your tag pruning criteria:
MSR allows you to set your pruning triggers based on the following image
attributes:
Tag name
Whether the tag name equals, starts with, ends with, contains, is one
of, or is not one of your specified string values
Tag name = test
Component name
Whether the image has a given component and the component name equals,
starts with, ends with, contains, is one of, or is not one of your
specified string values
Component name starts with b
Vulnerabilities
Whether the image has vulnerabilities – critical, major, minor, or
all – and your selected vulnerability filter is greater than or equals,
greater than, equals, not equals, less than or equals, or less than
your specified number
Critical vulnerabilities = 3
License
Whether the image uses an intellectual property license and is one of
or not one of your specified words
License name = docker
Last updated at
Whether the last image update was before your specified number of
hours, days, weeks, or months. For details on valid time units, see
Go’s ParseDuration function
Last updated at: Hours = 12
Specify one or more image attributes to add to your pruning criteria,
then choose:
Prune future tags to save the policy and apply your selection to
future tags. Only matching tags after the policy addition will be
pruned during garbage collection.
Prune all tags to save the policy, and evaluate both existing and
future tags on your repository.
Upon selection, you will see a confirmation message and will be
redirected to your newly updated Pruning tab.
If you have specified multiple pruning policies on the repository, the
Pruning tab will display a list of your prune triggers and
details on when the last tag pruning was performed based on the trigger,
a toggle for deactivating or reactivating the trigger, and a
View link for modifying or deleting your selected trigger.
All tag pruning policies on your account are evaluated every 15 minutes.
Any qualifying tags are then deleted from the metadata store. If a tag
pruning policy is modified or created, then the tag pruning policy for
the affected repository will be evaluated.
In addition to pruning policies, you can also set tag limits on
repositories that you manage to restrict the number of tags on a given
repository. Repository tag limits are processed in a first in first out
(FIFO) manner. For example, if you set a tag limit of 2, adding a third
tag would push out the first.
To set a tag limit, do the following:
Select the repository that you want to update and click the
Settings tab.
Turn off immutability for the repository.
Specify a number in the Pruning section and click
Save. The Pruning tab will now display your tag
limit above the prune triggers list along with a link to modify this
setting.
MSR users can automatically block clients from pulling images stored in the
registry by configuring enforcement policies at either the global or repository
level.
An enforcement policy is a collection of rules used to determine whether an
image can be pulled.
A good example of a scenario in which an enforcement policy can be useful is
when an administrator wants to house images in MSR but does not want those
images to be pulled into environments by MSR users. In this case, the
administrator would configure an enforcement policy either at the global or
repository level based on a selected set of rules.
Global image enforcement policies differ from those set at the repository level
in several important respects:
Whereas both administrators and regular users can set up enforcement policies
at the repository level, only administrators can set up enforcement
policies at the global level.
Only one global enforcement policy can be set for each MSR instance, whereas
multiple enforcement policies can be configured at the repository level.
Global enforcement policies are evaluated prior to repository policies.
Global and repository enforcement policies are generated from the same set of
rule attributes.
Note
Images must comply with all the enforcement policy rules to be pulled.
If any rule evaluates to false, the system blocks image pull.
This requirement also applies to tags associated with an image digest.
All tags must meet all the enforcement policy rules for an image digest they
refer to.
Users can only create and edit enforcement policies for repositories
within their user namespace.
To set up a repository enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to Repositories.
Select the repository to edit.
Click the Enforcement tab and select New enforcement
policy.
Define the enforcement policy rules with the desired rule attributes and
select Save. The screen displays the new enforcement policy in
the Enforcement tab. By default, the new enforcement policy is
toggled on.
Once a repository enforcement policy is set up and activated, pull requests
that do not satisfy the policy rules will return the following error message:
Only administrators can set up global enforcement policies.
To set up a global enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to System.
Select the Enforcement tab.
Confirm that the global enforcement function is Enabled.
Define the enforcement policy rules with the desired criteria and select
Save.
Once the global enforcement policy is set up, pull requests against any
repository that do not satisfy the policy rules will return the following
error message:
Administrators and users can monitor enforcement activity in the MSR web UI.
Important
Enforcement events can only be monitored at the repository level. It is not
possible, for example, to view in one location all enforcement events that
correspond to the global enforcement policy.
Navigate to Repositories.
Select the repository whose enforcement activity you want to review.
Select the Activity tab to view enforcement event activity. For
instance you can:
Identify which policy triggered an event using the enforcement ID
displayed on the event entry. (The enforcement IDs for each enforcement
policy are located on the Enforcement tab.)
Identify the user responsible for making a blocked pull request, and the
time of the event.
The information offered herein relates exclusively to upgrades between MSR
3.x.x versions. To upgrade to MSR 3.x.x from MSR 2.x.x, you must use the
Mirantis Migration Tool.
Schedule your upgrade outside of peak hours to avoid any business impact,
as brief interruptions may occur.
MSR uses semantic versioning. While downgrades are not supported,
Mirantis supports upgrades according to the following rules:
When upgrading from one patch version to another, you can skip patch
versions as no data migration takes place between patch versions.
When upgrading between minor releases, you cannot skip releases. You can,
however, upgrade from any patch version from the previous minor release to
any patch version of the subsequent minor release.
When upgrading between major releases, you must upgrade one major
version at a time.
There are two upgrade paths and two upgrade methods to consider in the life of
MSR 3.x.x. The following table presents the methods available to upgrade
between MSR minor and patch versions.
Third-party components are not upgraded alongside MSR, which means they can
become vulnerable to security breaches and exploits. To mitigate this risk,
Mirantis strongly recommends upgrading cert-manager and Postgres Operator
before proceeding with the MSR upgrade.
Run the following command to upgrade cert-manager and Postgres Operator:
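The exact commands depend on how the components were originally installed; the following is a minimal sketch that assumes the upstream jetstack and postgres-operator Helm charts with their default release names and namespaces.

# Upgrade cert-manager (assumes the jetstack chart repository is configured).
helm upgrade cert-manager jetstack/cert-manager --namespace cert-manager
# Upgrade Postgres Operator (assumes the postgres-operator-charts repository is configured).
helm upgrade postgres-operator postgres-operator-charts/postgres-operator --namespace postgres-operator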
Mirantis has transitioned to an OCI-based Helm registry for
registry.mirantis.com. As a result, Helm repository management is no
longer required. Commands that rely on Helm repository operations,
such as helm repo update and helm upgrade,
will fail with HTTP 4xx errors.
For both new installations and upgrades, use the OCI-based registry URL
directly. To check for available upgrades, run
helm upgrade --dry-run without specifying a version.
Gain valuable insights into the health of your MSR cluster through effective
monitoring. You can optimize your monitoring strategy either by setting up a
Prometheus server to scrape MSR metrics or by accessing a range of MSR
endpoints to assess the health of your cluster.
MSR provides an extensive set of metrics with which you can monitor and assess
the health of your registry. These metrics are designed to work with
Prometheus, a powerful monitoring system, and can be combined with Grafana to
create interactive metric dashboards.
Herein, we present an example of deploying a Prometheus server
to scrape your MSR metrics. There are, however, multiple valid approaches to
configuring your metrics ecosystem, and you can choose the setup that best
suits your needs.
For the <prometheus-ui-port> value in the ports section, select a
port that is currently available in your Swarm cluster.
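The following is a minimal sketch of the two files involved; the service layout, the metrics path placeholder, and the credentials are assumptions to adapt to your environment.

# docker-stack.yaml: run Prometheus as a Swarm service and expose its UI.
cat > docker-stack.yaml <<'EOF'
version: "3.7"
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "<prometheus-ui-port>:9090"
    configs:
      - source: prometheus_config
        target: /etc/prometheus/prometheus.yml
configs:
  prometheus_config:
    file: ./prometheus.yml
EOF

# prometheus.yml: scrape the MSR metrics endpoint with a named job.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: "<metrics-job-name>"
    metrics_path: <msr-metrics-path>   # substitute the MSR metrics endpoint path
    scheme: https
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: "<msr-admin-username>"
      password: "<msr-admin-password>"
    static_configs:
      - targets: ["<msr-url>"]
EOF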
Deploy the Prometheus server onto your Swarm cluster:
docker stack deploy -c docker-stack.yaml prometheus
To verify that your Prometheus server is running and scraping the MSR metrics
endpoint:
Verify that your Prometheus service is running:
docker service ls
In a web browser, navigate to
http://<manager-node-ip>:<prometheus-ui-port>. This is
the same <prometheus-ui-port> that you included in the ports section
of the docker-stack.yaml file.
Select Status > Targets in the Prometheus UI menu bar.
Verify that the MSR metrics endpoint is listed on the page with the
up status. You may need to wait approximately 30 seconds for
this to occur.
The metrics endpoint is labeled with the <metrics-job-name> entered in
the scrape_configs section of the prometheus.yml file.
Comprehensive detail on all of the metrics exposed by MSR is provided herein.
For specific key metrics, refer to the Usage information, which
offers valuable insights on interpreting the data and using it to troubleshoot
your MSR deployment.
Registry metrics capture essential MSR functionality, such as repository
count, tag count, push events, and pull events.
Metrics often incorporate labels to differentiate specific attributes of the
measured item. The table below provides a list of possible values for the
labels associated with registry metrics:
If your tag count increases beyond your needs, you can enable
tag pruning policies on individual repositories to
manage the growth effectively.
Note
Tag pruning selectively removes image tags, but it does not eliminate
the associated data blobs. To completely remove unwanted image tags
and free up cluster resources, it is necessary that you schedule
garbage collection as well.
If an individual repository tag count increases beyond your
needs, you can enable tag pruning policies to
manage the growth effectively.
Note
Tag pruning selectively removes image tags, but it does not eliminate
the associated data blobs. To completely remove unwanted image tags
and free up cluster resources, it is necessary that you schedule
garbage collection as well.
Mirroring metrics track the number of push and pull mirroring jobs, categorized
by job status.
Considered as a whole, these metrics offer real-time insights into the
performance of your mirroring jobs. For example, when you observe a
simultaneous decrease in poll_mirror_running and an increase in
poll_mirror_done, this provides immediate assurance that your poll
mirroring configuration is functioning properly.
Current number of poll mirroring jobs with a ‘waiting’ status
Metric type
Gauge
Labels
None
Usage
If there is a significant number of poll mirroring jobs in the
waiting state, consider updating the
Jobrunner capacity configuration to allow a higher
parallel execution of mirroring jobs.
Current number of push mirroring jobs with a ‘waiting’ status
Metric type
Gauge
Labels
None
Usage
If there is a significant number of push mirroring jobs in the
waiting state, consider updating the
Jobrunner capacity configuration to allow a higher
parallel execution of mirroring jobs.
The metrics for RethinkDB are extracted from the system statistics and current
issues tables, providing a broad range of information about your RethinkDB
deployment.
Metrics often incorporate labels to differentiate specific attributes of the
measured item. The table below provides a list of possible values for the
labels associated with RethinkDB metrics:
Current number of document reads and writes per second from the table
Metric type
Gauge
Labels
db, table, operation
Usage
If you observe that certain tables have a high volume of reads or
writes, it is advisable to evenly distribute the primary replicas
associated with those tables across the RethinkDB servers. This approach
ensures a balanced distribution of the cluster load, leading to improved
performance across the system.
Log write issues refer to situations where RethinkDB encounters failures
while attempting to write to its log file. Refer to
System current issues table in the official RethinkDB
documentation for more information.
Name collision issues arise when multiple servers, databases, or tables
within the same database are assigned identical names. Refer to
System current issues table in the official RethinkDB
documentation for more information.
Outdated index issues occur when indexes that were created using an
older version of RethinkDB need to be rebuilt due to changes in the
indexing mechanism employed by RethinkDB Query Language (ReQL). Refer to
System current issues table in the official RethinkDB
documentation for more information.
Total availability issues occur when a table within the RethinkDB
cluster is missing at least one replica. Refer to
System current issues table in the official RethinkDB
documentation for more information.
Memory availability issues arise when a page fault occurs on a
RethinkDB server and the system starts using swap space. Refer to
System current issues table in the official RethinkDB
documentation for more information.
Connectivity issues occur when certain servers within a RethinkDB
cluster are unable to establish a connection or communicate with all
other servers in the cluster. Refer to
System current issues table in the official RethinkDB
documentation for more information.
Refer to your RethinkDB logs to diagnose the issue.
Note
If the number of other_issues is greater than zero, it indicates
the need to expand the existing set of metrics to cover those
additional issue types. Please reach out to Mirantis and inform us
that you are seeing other_issues tracked in your cluster.
When a specific table in your MSR deployment grows unchecked, it may
indicate a potential issue with the corresponding functionality. For
instance, if the size of the tags table is increasing beyond
expectations, it could be a sign that your pruning policies, which are
responsible for managing tag retention, are not functioning properly.
Similarly, if the blobs table is growing more than anticipated, it
could suggest a problem with the garbage collection process, which is
responsible for removing unused data blobs.
Current number of errors that occurred during metrics collection
Metric type
Gauge
Labels
None
Usage
Since MSR metrics depend heavily on the use of RethinkDB, any scrape
errors encountered are likely to be caused by issues related to
RethinkDB itself. To diagnose and troubleshoot the problem, refer to the
logs of your RethinkDB deployment.
MSR exposes several endpoints that you can use to assess whether or not an MSR
replica is healthy:
/_ping: Checks if the MSR replica is healthy, and returns a
simple JSON response. This is useful for load balancing and other
automated health check tasks.
/nginx_status: Returns the number of connections handled by
the NGINX MSR front end.
/api/v0/meta/cluster_status: Returns detailed information about
all MSR replicas.
The /api/v0/meta/cluster_status endpoint requires administrator
credentials, and returns a JSON object for the entire cluster as observed by
the replica being queried. You can authenticate your requests using HTTP basic
auth.
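A quick curl sketch of the three endpoints described above:

curl -sk "https://<msr-url>/_ping"
curl -sk "https://<msr-url>/nginx_status"
# cluster_status requires administrator credentials (HTTP basic auth).
curl -sk -u <admin-username>:<password> "https://<msr-url>/api/v0/meta/cluster_status"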
{"current_issues":[{"critical":false,"description":"... some replicas are not ready. The following servers are not reachable: dtr_rethinkdb_f2277ad178f7",}],"replica_health":{"f2277ad178f7":"OK","f3712d9c419a":"OK","f58cf364e3df":"OK"},}
You can find the health status in the current_issues and
replica_health fields.
For even more detailed troubleshooting information, examine the
individual container logs.
Docker Content Trust (DCT) keeps audit logs of changes made to trusted
repositories. Every time you push a signed image to a repository, or
delete trust data for a repository, DCT logs that information.
To access the audit logs you need to authenticate your requests using an
authentication token. You can get an authentication token for all
repositories, or one that is specific to a single repository.
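The following curl sketch requests a repository-scoped token with HTTP basic auth. The scope query parameter shown is illustrative; the token service for your deployment may require additional realm and service parameters.

curl -sk -u <username>:<password> \
  "https://<msr-url>/auth/token?scope=repository:<namespace>/<repository>:pull"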
MSR returns a JSON file with a token, even when the user does not have
access to the repository to which they requested the authentication
token. This token does not grant access to MSR repositories.
The returned JSON file has the following structure:
{"token":"<token>","access_token":"<token>","expires_in":"<expiration in seconds>","issued_at":"<time>"}
Once you have an authentication token, you can use the following
endpoints to get audit logs:
URL
Description
Authorization
GET /v2/_trust/changefeed
Get audit logs for all repositories.
Global scope token
GET /v2/<msr-url>/<repository>/_trust/changefeed
Get audit logs for a specific repository.
Repository-specific token
Both endpoints have the following query string parameters:
Field name
Required
Type
Description
change_id
Yes
String
A non-inclusive starting change ID from which to start
returning results. This will typically be the first or last change ID
from the previous page of records requested, depending on which
direction you are paging in.
The value 0 indicates records should be returned starting from the
beginning of time.
The value 1 indicates records should be returned starting from the
most recent record. If 1 is provided, the implementation will also
assume the records value is meant to be negative, regardless of the
given sign.
records
Yes
String integer
The number of records to return. A negative value indicates the number
of records preceding the change_id should be returned. Records are
always returned sorted from oldest to newest.
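For example, the following curl sketch pages through the global audit log from the beginning of time, ten records at a time, using a previously obtained token passed as a bearer token.

curl -sk -H "Authorization: Bearer <token>" \
  "https://<msr-url>/v2/_trust/changefeed?change_id=0&records=10"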
Below is the description for each of the fields in the response:
Field name
Description
count
The number of records returned.
ID
The ID of the change record. Should be used in the change_id field of
requests to provide a non-exclusive starting index. It should be treated
as an opaque value that is guaranteed to be unique within an instance of
notary.
CreatedAt
The time the change happened.
GUN
The MSR repository that was changed.
Version
The version that the repository was updated to. This increments every
time there’s a change to the trust repository.
This is always 0 for events representing trusted data being removed
from the repository.
SHA256
The checksum of the timestamp being updated to. This can be used with
the existing notary APIs to request said timestamp.
This is always an empty string for events representing trusted data
being removed from the repository.
Category
The kind of change that was made to the trusted repository. Can be
update, or deletion.
The results only include audit logs for events that happened more than
60 seconds ago, and are sorted from oldest to newest.
Even though the authentication API always returns a token, the
changefeed API validates if the user has access to see the audit logs or
not:
If the user is an admin, they can see the audit logs for any
repository.
All other users can only see audit logs for repositories to which they
have read access.
You can use the Python 3 utility tool to learn the size of your MSR repository.
With the tool, you can make both basic size queries and simple size queries:
Basic size queries return the total size shared with other repositories
and the portion unique to the repository itself.
Simple size queries return the total size of a repository only, without
information as to which portion is shared with other repositories or which
portions are unique to the repository itself.
The host and port to use for MSR access. Default: 127.0.0.1:8443.
username
The MSR username.
password
The MSR password.
page-size
Maximum number of results to return per API request. Default: 10.
simple
If set to True, only the total size of the repository is fetched.
If set to False, the total size of the repository is fetched,
as is the size that is unique to the repository, and the size that
is shared with other repositories through common blobs.
namespaces
List of comma-separated namespaces.
repositories
List of comma-separated repositories.
cacert
Path to the MSR CA certificate file.
insecure
Use an insecure connection.
output
Output the result to a JSON file, or to console if “-” is provided.
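A hypothetical invocation is sketched below; the script name msr_repo_size.py and the exact flag spellings are placeholders that stand in for the utility's real interface, while the options mirror the parameters described above.

python3 msr_repo_size.py \
  --host 127.0.0.1:8443 \
  --username <username> --password <password> \
  --repositories foo/bar,foo/baz \
  --cacert ca.crt \
  --output -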
To obtain detailed information for a service that is not running:
docker service ps msr_msr-registry --no-trunc
Example output:
ID                          NAME                    IMAGE                                                                                                                                 NODE           DESIRED STATE   CURRENT STATE         ERROR                                                                                                                                                                                                                                                                                                                     PORTS
7o8rjdjydwfqnz0qhekz46tq5   msr_msr-registry.1      registry.mirantis.com/msr/msr-registry:<release number>@sha256:a4d3a083da310dff374c37850e1e8de81ad9150b770683b1529cabf508ae8f07      6e1b4b0f0dcc   Ready           Ready 1 second ago
lickekmwnp6d2ot558ohh2cnj   \_ msr_msr-registry.1   registry.mirantis.com/msr/msr-registry:<release number>@sha256:a4d3a083da310dff374c37850e1e8de81ad9150b770683b1529cabf508ae8f07      aed603d27071   Shutdown        Failed 1 second ago   "starting container failed: error while mounting volume '/var/lib/docker/volumes/msr_msr-storage/_data': failed to mount local volume: mount :/:/var/lib/docker/volumes/msr_msr-storage/_data, data: addr=172.17.0.10,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport: connection refused"
To review all of the services that are running on the cluster:
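The listing and log-streaming commands are sketched below; the output that follows was produced by a docker service logs command of this kind.

docker service ls
docker service logs --follow msr_msr-api-server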
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"Generating an authenticator for eNZi client","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"Attempting to create or update MSR's Service registration with the eNZi server","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"Updated service \"Mirantis Secure Registry\"","time":"2023-06-27T23:01:47Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"Obtaining eNZi service registration","time":"2023-06-27T23:01:48Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"error","msg":"failed to obtain repository counts: rethinkdb: Cannot reduce over an empty stream. in:\nr.DB(\"dtr2\").Table(\"repositories\").Group(\"visibility\").Count().Ungroup().Map(func(var_2 r.Term) r.Term { return r.Object(var_2.Field(\"group\"), var_2.Field(\"reduction\")) }).Reduce(func(var_3, var_4 r.Term) r.Term { return var_3.Merge(var_4) })","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"Starting temporary CVE file cleanup within \"/storage/scan_update/\" directory","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"error":"open /storage/scan_update/: no such file or directory","level":"error","msg":"Could not delete all tmp files","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"level":"info","msg":"No files to remove","time":"2023-06-27T23:01:49Z"}
msr_msr-api-server.3.iippai90ljtr@c1138be288cc|{"address":":443","level":"info","msg":"Admin server about to listen for connections","time":"2023-06-27T23:01:49Z"}
To create a shell to examine the contents of a container:
SSH into the host that is running the container to which you want to
connect.
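A minimal sketch of opening a shell in the container follows; the
msr_msr-registry filter is only an example, so substitute the name of the
container you want to inspect:
# List the running containers to find the container ID.
docker ps --filter name=msr_msr-registry
# Open an interactive shell inside the container (use bash if the image provides it).
docker exec -it <container-id> sh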
MSR uses RethinkDB to persist and reproduce data across replicas. To review the
internal state of MSR, you can connect directly to the RethinkDB instance that
is running on an MSR replica, using either the RethinkDB web interface or
the MSR API.
Warning
Mirantis does not support direct modifications to RethinkDB, and thus any
unforeseen issues that result from doing so are solely the user’s
responsibility.
Access RethinkDB with the RethinkDB web interface¶
For both Kubernetes and Swarm deployments, you can use the RethinkDB web
interface to directly access RethinkDB.
Kubernetes deployments
Note
If you are using a Helm chart to install and manage your MSR deployment,
enable the RethinkDB Administration Console by including the following flag
in your helm install or helm upgrade command:
--set rethinkdb.admin.service.enabled=true
In the cr-sample-manifest.yaml file that you applied when
installing MSR, enable the RethinkDB Administration Console:
spec:
  rethinkdb:
    admin:
      enabled: true
Invoke the following command to run the webhook health check and apply the
changes to the custom resource:
Individual databases and tables are a private MSR implementation detail
that may change from version to version. Instead, you can use dbList()
and tableList() to explore the contents and data structure.
Swarm deployments - MSR 3.1.10 and later
SSH into the manager node and edit the rethinkdb.admin section of
the values.yml file:
Note
For instruction on how to generate the values.yml file, refer to
the Install MSR online.
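A hedged sketch of the relevant values.yml section follows, assuming the same
rethinkdb.admin.enabled key that the custom resource uses:
rethinkdb:
  admin:
    # Enables the RethinkDB Administration Console for the Swarm deployment.
    enabled: true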
Individual databases and tables are a private MSR implementation detail
that may change from version to version. Thus, you can use dbList()
and tableList() to explore the contents and data structure.
CVE database connectivity issues are often at the root of any scanning or CVE
updating problems you may encounter. On Kubernetes deployments, a faulty
installation of the PostgreSQL operator is often the root cause for such
issues, whereas on Swarm these issues are likely to be linked to the
Scanningstore service.
Verify that the postgres operator is running by invoking the
kubectl get pods command. If the output you receive resembles the
following example, your PostgreSQL is properly installed:
postgres-operator-6788c8bf6-494lt   1/1   Running   0   16d
If, however, the command produces no output, or the state that presents is
something other than Running, install PostgreSQL as follows:
Warnings display in a red banner at the top of the MSR web UI to indicate
potential vulnerability scanning issues.
Warning
Cause
Warning: Cannot perform security scans because no
vulnerability database was found.
Displays when vulnerability scanning is enabled but there is no
vulnerability database available to MSR. Typically, the warning displays
when a vulnerability database update is run for the first time
and the operation fails, as no usable vulnerability database exists at
this point.
Warning: Last vulnerability database sync failed.
Displays when a vulnerability database update fails, even though there
is a previous usable vulnerability database available for vulnerability
scans. The warning typically displays when a vulnerability database
update fails, despite successful completion of a prior vulnerability
database update.
Note
The terms vulnerability database sync and
vulnerability database update are interchangeable, in the
context of MSR web UI warnings.
Note
The issuing of warnings is the same regardless of whether vulnerability
database updating is done manually or is performed automatically through a
job.
MSR undergoes a number of steps in performing a vulnerability database update,
including TAR file download and extraction, file validation, and the update
operation itself. Errors that can trigger warnings can occur at any point in
the update process. These errors can include such system-related matters as low
disk space, transient network issues, or configuration
complications. As such, the best strategy for troubleshooting MSR vulnerability
scanning issues is to review the logs.
View the logs for an online vulnerability database update¶
Online vulnerability database updates are performed by a jobrunner container,
the logs for which you can view through a docker CLI command or by using the
MSR web UI.
CLI command:
docker logs <jobrunner-container-name>
MSR web UI:
Navigate to System > Job Logs in the left-side navigation
panel.
View the logs for an offline vulnerability database update¶
The MSR vulnerability database update occurs through the dtr-api container.
As such, access the logs for that container to ascertain the reason for update
failure.
If the logs do not initially offer adequate detail on the cause of
vulnerability database update failure, you can display additional logs by
setting MSR to enable debug logging.
MSR Operator
Use MSR Operator to enable and disable debug logging.
Edit the custom resource manifest to enable and disable debug
logging. For example:
spec:
  logLevel: 'debug'
Apply the changes to the custom resource:
kubectl apply -f cr-sample-manifest.yaml
Verify completion of the reconciliation process for the custom resource:
Certificate issues when pushing and pulling images¶
If TLS is not properly configured, you are likely to encounter an
x509: certificate signed by unknown authority error when attempting to run
the following commands:
docker login
docker push
docker pull
To resolve the issue:
Verify that your MSR instance has been configured with your TLS certificate
Fully Qualified Domain Name (FQDN). For more information, refer to
Add a custom TLS certificate.
Alternatively, but only in testing scenarios, you can skip using a certificate
by adding your registry host name as an insecure registry in the Docker
daemon.json file:
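For example, a minimal /etc/docker/daemon.json entry might look like the
following, with msr.example.com standing in for your registry host name
(restart the Docker daemon after editing the file):
{
  "insecure-registries": ["msr.example.com"]
}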
MSR on Swarm one node to multi node scaling failure¶
The RethinkDB node can fail when scaling MSR on Swarm on a new cluster from one
node to several nodes, resulting in the generation of the following error
message during execution of scale command:
level=fatal msg="polling failed with 40 attempts 1s apart: service \"msr_msr-rethinkdb\" is not yet ready"
To prevent such a failure, pre-pull RethinkDB images on all nodes. To do so,
run the following command on each node in the Swarm cluster:
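A hedged example follows; the exact image reference depends on your MSR release,
so substitute the RethinkDB image and tag that your deployment values specify
(the reference below is an assumption):
# Pre-pull the MSR RethinkDB image so the service can start immediately after scaling.
docker pull registry.mirantis.com/msr/msr-rethinkdb:<release number>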
Mirantis Secure Registry (MSR) uses RethinkDB to store metadata. RethinkDB is a
clustered application, and thus to configure it with high availability it must
have three or more servers, and its tables must be configured to have three or
more replicas.
For a RethinkDB table to be healthy, a majority (n/2 + 1) of replicas per table
must be available. As such, there are three possible failure scenarios:
One or more table replicas are unhealthy, but the overall majority
(n/2 + 1) remains healthy and is able to communicate, each with the
others.
As long as more than half of the table voting replicas and more than
half of the voting replicas for each shard remain available, one of
those voting replicas will be arbitrarily selected as the new primary.
Majority of replicas are unhealthy
Half or more voting replicas of a shard are lost and cannot be
reconnected.
An emergency repair of the cluster remains possible, without having to
restore from a backup, which minimizes the amount of data lost. Refer to
mirantis/msr db emergency-repair for more detail.
All replicas are unhealthy
A complete disaster scenario wherein all replicas are lost, the result
being the loss or corruption of all associated data volumes. In this
scenario, you must restore MSR from a backup. Restoring from a backup
should be a last resort solution. You should first attempt an emergency
repair, as this can mitigate data loss. Refer to
Restore from an MSR backup for more information.
When one or more MSR replicas are unhealthy but the overall majority
(n/2 + 1) is healthy and able to communicate with one another, your MSR
cluster is still functional and healthy.
Given that the MSR cluster is healthy, there is no need to execute a disaster
recovery procedure, such as restoring from a backup. Instead, you should:
Remove the unhealthy replicas from the MSR cluster.
Join new replicas to make MSR highly available.
The order in which you perform these operations is important, as an MSR cluster
requires a majority of replicas to be healthy at all times. If you join more
replicas before removing the ones that are unhealthy, your MSR cluster might
become unhealthy.
To understand why you should remove unhealthy replicas before joining
new ones, imagine you have a five-replica MSR deployment, and something
goes wrong with the overlay network connecting the replicas, causing
them to be separated into two groups.
Because the cluster originally had five replicas, it can work as long as
three replicas are still healthy and able to communicate (5 / 2 + 1 =
3). Even though the network separated the replicas in two groups, MSR is
still healthy.
If at this point you join a new replica instead of fixing the network
problem or removing the two replicas that got isolated from the rest,
it is possible that the new replica ends up on the side of the network
partition that has fewer replicas.
When this happens, both groups now have the minimum number of replicas
needed to establish a cluster. This is also known as a split-brain
scenario, because both groups can now accept writes and their histories
start diverging, making the two groups effectively two different
clusters.
Edit the new values-swarm.yaml file and specify the worker nodes on which
MSR is to be deployed:
swarm:
  ## nodeList is a comma separated list of node IDs within the swarm that represent nodes that MSR will be allowed to
  ## deploy to. To retrieve a list of nodes within a swarm execute `docker node ls`. If no nodes are specified then MSR
  ## will be installed on the current node.
  ##
  nodeList:
For comprehensive information on how to scale MSR on Helm up and down as a
Kubernetes application, refer to the Kubernetes documentation Running Multiple
Instances of Your App.
MSR Operator uses its own lifecycle manager, and thus the number of replicas
is controlled by the MSR CRD manifest.
To increase or decrease the number of replicas for an MSR Operator deployment,
adjust the replicaCount: parameters in the manifest _v1_msr.yaml file. After
doing so, reapply the CRD manifest and the required replica count is spawned.
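A hedged sketch of the kind of change involved follows; the component keys shown
are assumptions for illustration, so match them to the components listed in your
own _v1_msr.yaml manifest:
spec:
  registry:
    # Number of registry replicas to run; reapply the manifest after changing it.
    replicaCount: 3
  nginx:
    replicaCount: 3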
For an MSR cluster to be healthy, a majority of its replicas (n/2 + 1)
need to be healthy and be able to communicate with the other replicas.
This is known as maintaining quorum.
In a scenario where quorum is lost, but at least one replica is still
accessible, you can use that replica to repair the cluster. That replica
doesn’t need to be completely healthy. The cluster can still be repaired
as the MSR data volumes are persisted and accessible.
Repairing the cluster from an existing replica minimizes the amount of
data lost. If this procedure doesn’t work, you’ll have to restore from
an existing backup.
When a majority of replicas are unhealthy, causing the overall MSR
cluster to become unhealthy, an internal server error presents for operations
such as docker login, docker pull, and docker push.
Accessing the /_ping endpoint of any replica also returns the same
error. It is also possible that the MSR web UI is partially or fully
unresponsive.
Using the msr db scale command returns an error such as:
{"level":"fatal","msg":"unable to reconfigure replication: unable toreconfigure replication for table \"org_membership\": unable toreconfigure database replication: rethinkdb: The server(s) hosting table`enzi.org_membership` are currently unreachable. The table was notreconfigured. If you do not expect the server(s) to recover, you can use`emergency_repair` to restore availability of the table.\u003chttp://rethinkdb.com/api/javascript/reconfigure/#emergency-repair-mode\u003ein:\nr.DB(\"enzi\").Table(\"org_membership\").Reconfigure(replicas=1, shards=1)","time":"2022-12-09T20:13:47Z"}commandterminatedwithexitcode1
Use the msr db emergency-repair command to repair an
unhealthy MSR cluster from the msr-api Deployment.
This command overrides the standard safety checks that occur when scaling a
RethinkDB cluster. This allows RethinkDB to modify the replication factor to
the setting most appropriate for the number of rethinkdb-cluster Pods that
are connected to the database.
The msr db emergency-repair command is commonly used when the
msr db scale command is no longer able to reliably scale the
database. This typically occurs when there is a prior loss of quorum, which
often happens when you scale rethinkdb.cluster.replicaCount without first
decommissioning and scaling RethinkDB servers. For more information on scaling
down RethinkDB servers, refer to Remove replicas from RethinkDB.
Run the following command to perform an emergency repair:
Kubernetes deployments
kubectl exec deploy/msr-api -- msr db emergency-repair
Swarm deployments
Specify the number of replicas in the values.yml file and run:
The table that follows describes the various data types that MSR manages, and
indicates which data types are backed up when you perform either an automatic
or a manual backup.
Data | Automatic | Manual | Description
Configurations | Yes | Yes | MSR settings.
Repository metadata | Yes | Yes | Metadata about the repositories, charts, and images deployed, such as architecture and size.
Access control to repos and images | Yes | Yes | Permissions for teams and repositories.
Notary data | Yes | Yes | Signatures and digests for images that are signed.
Scan results | Yes | Yes | Information about security vulnerabilities in your images.
Image and chart content | Yes, when fullBackup is set to true; otherwise no | No | The images and charts that have been stored in MSR within a repository; must be backed up separately, depending on the MSR configuration.
Users, orgs, teams | Yes | Yes | The data related to users, orgs, and teams that MSR backs up.
Vulnerability database | No | No | Database of vulnerabilities, which you can re-download following a restore operation.
To schedule automatic backups, you must use the MSR web UI to enable and
configure the SMTP setting.
Log in to the MSR web UI.
In the left-side navigation panel, click System to display the
System pane.
In the General tab, scroll down to SMTP Settings.
Toggle the Enable SMTP control to the right.
Enter the appropriate information into the following fields:
User
Password
Server Address
Server Port
Sender Address
Click Save.
Schedule automatic backups and backup purges using either the MSR web UI or
the MSR API:
Web UI
In the left-side navigation panel, click System to display
the System pane.
Navigate to the Backups tab and click Edit.
Toggle the Enable Backups control to the right.
Click the Backup Type dropdown and select either
Full or Metadata Only.
Select Daily, Weekly, or Monthly
to set the frequency with which backups are performed.
Alternatively, you can set the schedule in the
schedule (cron syntax) field using the
Cronjob format.
Note
You can schedule a single automatic backup using either
relative or absolute scheduling.
To schedule the backup for the beginning of the next hour:
"schedule":"0 0 * * * *"
To schedule the backup for a specific time:
"schedule":"0 30 17 6 OCT *"
To perform only one backup, you must disable automatic backup
scheduling after the backup completes.
Optional. In the Email Notification List field, include
the email addresses to which you want automatic backup notifications
to be sent.
Optional. In the Backup Deadline field, specify the
retention period in minutes or hours. If left empty or set to zero,
the deadline defaults to one hour.
Optional. Configure automatic backup purges.
Toggle the Purge past backups control to the right.
In the Keep backups for field, input the desired
number of Days, Weeks, or
Months to retain backups.
Select the relevant unit of time.
Click Save.
API
Schedule automatic backups by performing a PUT request to the
/api/v0/meta/settings/backup endpoint.
In the following configuration example:
A backup is performed every minute
The backup process is terminated after one hour if the set deadline is reached
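A hedged sketch of such a request follows; the payload field names are
assumptions made for illustration, so confirm the exact schema against the MSR
API reference before use:
# Hypothetical PUT request: enable a backup every minute with a one-hour deadline.
curl -u <username>:<access token> -X PUT \
  "https://<msr-url>/api/v0/meta/settings/backup" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true, "schedule": "0 * * * * *", "backupDeadline": "1h"}'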
The creation of a complete MSR backup requires that you back up both the
contents of repositories (such as images and charts) and the metadata MSR
manages.
As you can configure MSR for several types of storage backends, the method
for backing up images and charts will vary. The example we offer is for
persistentVolume. If you are using a different storage backend, such as
a cloud provider, you should adhere to the recommended practices for that
system.
When MSR is configured with persistentVolume, images and charts are stored
on the local file system or on mounted network storage.
One way you can back up the images and charts data is by creating a tar archive
of the data volume that MSR uses. To find the path of the volume, describe the
PersistentVolume associated with the PersistentVolumeClaim:
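For example, assuming the PersistentVolumeClaim is named msr (replace it with
the claim your deployment actually uses):
# Find the PersistentVolume bound to the claim, then inspect it for the backing path.
kubectl get pvc msr -o jsonpath='{.spec.volumeName}'
kubectl describe pv <volume-name>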
Use the msr backup command to create a backup of the MSR metadata.
The command is present in any API Pod and can be run using the
kubectl exec command.
An example follows of how to create a backup for an MSR installation named
mymsr. The backup contents are streamed to standard output, which is
redirected locally to the file backup.tar.
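A hedged sketch of the command follows, assuming the API Deployment for the
mymsr installation is named mymsr-api (check kubectl get deploy for the actual
name in your cluster):
# Stream the metadata backup to standard output and save it locally as backup.tar.
kubectl exec -i deploy/mymsr-api -- msr backup --fullname mymsr > backup.tar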
In the event that a majority of the RethinkDB table replicas in use by MSR are
unhealthy, and an emergency repair is unsuccessful, you must restore
the cluster from a backup.
To restore MSR from a backup:
Set up an MSR instance to serve as the restore target.
Verify that the MSR version in use by the cluster matches the one used to
create the backup.
Extract your backup:
If MSR is configured to store images on the local file system, run the
following command:
If MSR uses a different storage back end, follow the best practices
recommended for that system.
Use the msr restore command to restore MSR metadata from a
previously created backup. The command is present in any API Pod and can be
run using the kubectl exec command.
The following is an example of restoring onto an MSR installation named
mymsr. The backup contents are streamed from standard input, which
receives its data from the local file backup.tar.
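A hedged sketch of the command follows, again assuming the API Deployment is
named mymsr-api:
# Stream the backup file into the restore command running inside the API Pod.
kubectl exec -i deploy/mymsr-api -- msr restore --fullname mymsr < backup.tar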
The migration of MSR metadata and image binaries to a new Kubernetes or Swarm
cluster can be a complex operation. To help you to successfully complete
this task, Mirantis provides the Mirantis Migration Tool (MMT).
With MMT, you can transition to the same MSR version you already have in use,
or you can opt to upgrade to a more recent major, minor, or patch version
of the software. In addition, MMT allows you to switch cluster orchestrators
and deployment methods as part of the migration process.
The <command> argument represents the particular stage of the migration
process:
Migration stage
Description
verify
Verification of the MSR source system configuration. The
verify command must be run on the source MSR system. Refer
to Verify the source system configuration for more information.
Applies only to migrations that originate from MSR 2.9.x systems.
estimate
Estimation of the number of images and the amount of metadata to
migrate. The estimate command must be run on the source MSR
system. Refer to Estimate the migration for more information.
Applies only to migrations that originate from MSR 2.9.x systems.
extract
Extraction of metadata, storage configuration, and blob storage in the
case of the copy storage mode, from the source registry. The
extract command must be run on the source MSR system. Refer
to Extract the data for more information.
transform
Transformation of metadata from the source registry for use with the
target MSR system. The transform command must be run on the
target MSR system. Refer to Transform the data extract for more
information.
Applies only to migrations that originate from MSR 2.9.x systems.
restore
Restoration of transformed metadata, storage configuration, and blob
storage in the case of the copy storage mode, is made onto the
target MSR environment. The restore command must be run on
the target MSR system. Refer to Restore the data extract for
more information.
The <command-mode> argument indicates the mode in which the command is to
run specific to the source or target registry. msr and msr3 are
currently the only accepted values, as MMT currently only supports the
migration of MSR registries.
The --storage-mode flag and its accompanying <storage-mode> argument
indicate the storage mode to use in migrating the
registry blob storage.
Storage mode
Description
inplace
The binary image data remains in its original location.
The target MSR system must be configured to use the same external
storage as the source MSR system. Refer to
Configure external storage for more information.
Important
Due to its ability to handle large amounts of data, Mirantis
recommends the use of inplace storage mode for most migration
scenarios.
copy
The binary image data is copied from the source system to a local
directory on the workstation that is running MMT. This mode allows
movement from one storage location to another. It is especially useful
in air-gapped environments.
The <directory> argument is used to share state across each command. The
resulting directory is typically the destination for the data that is extracted
from the source registry, which then serves as the source for the extracted
data in subsequent commands.
To avoid data inconsistencies, the source registry must remain in
read-only mode throughout the migration to the target MSR system.
Revert the value of readOnlyRegistry to false after the
migration is complete.
Be aware that MSR 3.0.x source systems cannot be placed into
read-only mode. If you are migrating from a 3.0.x source system,
be careful not to write any files during the migration process.
An active MSR 3.x.x installation, version 3.0.3 or later, to serve as the
migration target.
Configuration of the namespace for the MSR target installation, which
you set by running the following command:
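A minimal sketch of one way to set this with kubectl follows, assuming the
target namespace is named msr:
# Make msr the default namespace for subsequent kubectl commands in this context.
kubectl config set-context --current --namespace=msr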
You must pull the MMT image to both the source and target systems, using the
following command:
docker pull registry.mirantis.com/msr/mmt
2.9.x source systems only. Administrator credentials for the MKE cluster on
which the source MSR 2.9 system is running.
Kubernetes target systems only. A kubectl config file, which is typically
located in $HOME/.kube.
Kubernetes target systems only. Credentials within the kubectl config
file that supply cluster admin access to the Kubernetes cluster that is
running MSR 3.x.x.
Swarm target systems only. Enable all authenticated users, including service
accounts, to schedule services and perform tasks on all nodes.
Once the prerequisites are met, you can select
from two available storage modes for migrating binary image data from a source
MSR system to a target MSR system: inplace and copy.
Note
In all but one stage of the migration workflow, you will indicate the
storage mode of choice in the storage-mode parameter setting. The step
in which you do not indicate the storage mode is
Restore the data extract.
Storage mode
Description
inplace
The binary image data remains in its original location.
The target MSR system must be configured to use the same external
storage as the source MSR system. Refer to
Configure external storage for more information.
Important
Due to its ability to handle large amounts of data, Mirantis
recommends the use of inplace storage mode for most migration
scenarios.
copy
The binary image data is copied from the source system to a local
directory on the workstation that is running MMT. This mode allows
movement from one storage location to another. It is especially useful
in air-gapped environments.
Important
Migrations from source MSR systems that use Docker volumes for image
storage, such as local filesystem storage backend, can only be performed
using the copy storage mode. Refer to
Filesystem storage backends for more information.
The migrate command introduced in MMT 2.0.3 streamlines the legacy
multi-step migration by reducing
verification, estimation, data extraction, transformation, and restoration into
one or two steps.
Before you can perform the migration, it is necessary to:
All MMT commands run on MSR 3.x.x systems, both source
and target deployments, must include the --fullname option, which
specifies the name of the MSR instance.
The migration operation differs depending on whether you are running it on
a Docker volume storage system or a Filesystem storage system.
Docker volume storage
A migration performed on a Docker volume storage system (such as AWS S3,
Google Cloud, or Azure) requires that you run
the migrate command within the MMT container only.
Run the following command sequence, to:
Connect to the MSR 2.9.x or 3.x.x system.
Mount the DTR (Docker Trusted Registry) volume from the 2.9.x or 3.x.x
system to /storage in the MMT container.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume you want to mount to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
A migration performed on a filesystem storage backend
requires that you run the migrate command twice, once within
the MMT container, and then again on the target system.
Run the following command sequence, to:
Connect to the MSR 2.9.x or 3.x.x system.
Mount the DTR (Docker Trusted Registry) volume from the 2.9.x or 3.x.x
system to /storage in the MMT container.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume you want to mount to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
A migration performed on a filesystem storage backend
requires that you run the migrate command twice, once within
the Docker container, and then again within the deployed Pod.
Run the migrate command within the Docker container:
Deploy the MMT Pod with kubectl apply -f, using the
provided YAML configuration.
Pod configuration
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mmt-serviceaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-role
rules:
  - apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mmt-rolebinding
subjects:
  - kind: ServiceAccount
    name: mmt-serviceaccount
roleRef:
  kind: Role
  name: mmt-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-clusterrole
rules:
  # Add/remove more permissions as needed
  - apiGroups: ["", "msr.mirantis.com", "rethinkdb.com", "apiextensions.k8s.io"]
    resources: ["msrs", "rethinkdbs", "customresourcedefinitions", "persistentvolumes"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: mmt-serviceaccount
    # Change this to the correct namespace.
    namespace: default
roleRef:
  kind: ClusterRole
  name: mmt-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mmt-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migration
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mmt-migration
  hostPath:
    path: <migration directory>
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migration-pvc
  namespace: default
spec:
  storageClassName: mmt-migration
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mmt
spec:
  serviceAccountName: mmt-serviceaccount
  volumes:
    - name: storage
      persistentVolumeClaim:
        # Locate the appropriate PVC using the kubectl get pvc command and replace as necessary.
        claimName: msr
    - name: migration
      persistentVolumeClaim:
        claimName: migration-pvc
  containers:
    - name: mmt
      image: msr.ci.mirantis.com/msr/mmt:latest
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c", "tail -f /dev/null"]
      volumeMounts:
        - name: storage
          mountPath: /storage
        - name: migration
          mountPath: /migration
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 256Mi
  restartPolicy: Never
Modify, as required:
The permissions in the rules section of the Role directive.
The spec.resources.storage value in the PersistentVolumeClaim
directive.
The spec.volumes[0].persistentVolumeClaim.claimName value in the Pod
directive, which refers to the PVC in use by the target MSR 3.x system.
The subjects.namespace value in the ClusterRoleBinding directive,
which must refer to your MSR namespace.
Ensure the migration directory used in step 1 is accessible to the MMT Pod.
The Pod YAML configuration file snippet below shows how the migration
directory in use by the Docker command is mounted into the MMT Pod,
from where the directory can be read:
Multi-step migration is intended for users who implement MMT 2.0.2 or earlier
to perform an MSR migration, as well as for those who prefer to maintain
full control over each step of the migration process.
Once you have met the Migration prerequisites, configured your source MSR
system and your target MSR system, and selected the storage
mode, you can perform the migration workflow as a
sequence of individual steps.
Migrations from MSR 2.9.x to 3.x.x must follow each of the five migration
steps, whereas migrations from MSR 3.x.x source systems skip the
verify,
estimate, and
transform steps, and instead begin with
extract before proceeding directly to
restore.
Important
All MMT commands that are run on MSR 3.x.x systems, including both source
and target deployments, must include the --fullname option, which
specifies the name of the MSR instance.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate validation when communicating
with the source system), false (perform certificate validation when
communicating with the source system)
Default: false
Example output:
Note
Sizing information displays only when a migration is run in copy storage mode.
If your migration originates from MSR 3.x.x, proceed directly to
Extract the data.
Before extracting the data for migration you must estimate the number of images
and the amount of metadata to migrate from your source MSR system to the new
MSR target system. To do so, run the following command on the source MSR
system.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate verification when communicating
with the source system), false (perform certificate validation when communicating with the source system)
You can extract metadata and, optionally, binary image data from an MSR source
system using commands that are presented herein.
Important
To avoid data inconsistencies, the source registry must remain in
read-only mode throughout the migration to the target MSR system.
Be aware that MSR 3.0.x source systems cannot be placed into
read-only mode. If you are migrating from a 3.0.x source system,
be careful not to write any files during the migration process.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate verification when communicating
with the source system), false (perform certificate validation when
communicating with the source system)
disable-analytics
Optional. Disables MMT metrics collection for the
extract command. You must include the flag each time you run the
command.
The data extract is rendered as a TAR file with the name
dtr-metadata-mmt-backup.tar in the <local-migration-directory>. The
file name is later converted to msr-backup-<MSR-version>-mmt.tar, following
the transform step.
Optional. Indicates that the source system runs on a Swarm cluster.
Example output:
INFO[0000] Migration will be performed with "inplace" storage mode
INFO[0000] Backing up metadata...
{"level":"info","msg":"Writing RethinkDB backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backing up MSR","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Recording time of backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backup file checksum is: 0e2134abf81147eef953e2668682b5e6b0e9761f3cbbb3551ae30204d0477291","time":"2023-07-06T01:25:51Z"}
INFO[0002] The Mirantis Migration Tool extracted your registry of MSR 3.x, using the following parameters:
Source Registry: MSR 3
Mode: metadata only
Existing MSR 3 storage will be backed up.
The source registry must remain in read-only mode for the duration of the operation to avoid data inconsistencies.
The data extract is rendered as a TAR file with the name
msr-backup-<MSR-version>-mmt.tar in the <local-migration-directory>.
Once you have extracted the data from your source MSR system, you must
transform the metadata into a format that is suitable for migration to an
MSR 3.x.x system.
Optional. Disables MMT metrics collection for the
transform command. You must include the flag each time you run the
command.
swarm
Optional. Specifies that the source system runs on Docker Swarm.
Default: false
fullname
Sets the name of the MSR instance to which MMT will migrate the
transformed data extract. Use only when the target system runs on a
Kubernetes cluster.
Default: msr
namespace
Optional. Sets the namespace scope for the given command.
For all Kubernetes-based migrations, Mirantis recommends running MMT in a Pod
rather than using the docker run deployment method. Migration
scenarios in which this does not apply are limited to MSR 2.9.x source systems
and Swarm-based MSR 3.1.x source and target systems.
Important
All Kubernetes-based migrations that use a filesystem backend must run
MMT in a Pod.
When performing a restore from within the
MMT Pod, the Persistent Volume Claim (PVC) used by the Pod must contain
the data extracted from the source MSR system.
Before you perform the multi-step migration,
deploy the following Pod onto your Kubernetes-based source and target systems:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mmt-serviceaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-role
rules:
  - apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mmt-rolebinding
subjects:
  - kind: ServiceAccount
    name: mmt-serviceaccount
roleRef:
  kind: Role
  name: mmt-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-clusterrole
rules:
  # Add/remove more permissions as needed
  - apiGroups: ["", "msr.mirantis.com", "rethinkdb.com", "apiextensions.k8s.io"]
    resources: ["msrs", "rethinkdbs", "customresourcedefinitions", "persistentvolumes"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mmt-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: mmt-serviceaccount
    # Change this to the correct namespace.
    namespace: default
roleRef:
  kind: ClusterRole
  name: mmt-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mmt-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: v1
kind: Pod
metadata:
  name: mmt
spec:
  serviceAccountName: mmt-serviceaccount
  volumes:
    - name: storage
      persistentVolumeClaim:
        # Locate the appropriate PVC using the kubectl get pvc command and replace as necessary.
        claimName: msr
    - name: migration
      persistentVolumeClaim:
        claimName: mmt-pvc
  containers:
    - name: mmt
      image: registry.mirantis.com/msr/mmt:2.0.1
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c", "tail -f /dev/null"]
      volumeMounts:
        - name: storage
          mountPath: /storage
        - name: migration
          mountPath: /migration
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 256Mi
  restartPolicy: Never
Modify, as required:
The permissions in the rules section of the Role directive.
The spec.resources.storage value in the PersistentVolumeClaim
directive.
The spec.volumes[0].persistentVolumeClaim.claimName value in the Pod
directive, which refers to the PVC in use by the target MSR 3.x system.
The subjects.namespace value in the ClusterRoleBinding directive,
which must refer to your MSR namespace.
By default, MMT sends usage metrics to Mirantis whenever you run the
extract, transform, and restore commands. To disable this
functionality, include the --disable-analytics flag whenever you issue any
of these commands.
MMT collects the following metrics to improve the product and
facilitate its use:
Metric
Description
BlobImageCount
Number of images stored in the source MSR system.
BlobStorageSize
Total size of all the images stored in the source MSR system.
EndTime
Time at which the command stops running.
ErrorCount
Number of errors that occurred during the given migration step.
MigrationStep
Migration step for which metrics are being collected.
For example, extract.
StartTime
Time at which the command begins running.
Status
Command status.
In the case of command failure, MMT reports all associated error
messages.
StorageMode
Storage mode used for migration.
Valid values: copy and inplace.
StorageType
Storage type used in the MSR source and target systems.
Valid values: s3, azure, swift, gcs, filesystem, and
nfs.
UserId
Source MSR IP address or URL that is used to associate metrics from
separate commands.
To reuse the extract copy for a restore, reset the appropriate flags
in the migration_summary.json file to false or leave the flags
empty. Otherwise, the MMT restore command will skip the extract.
Migrations from source MSR 2.9.x systems that use Docker volumes for image
storage can only be performed using the copy storage mode. Such migrations must have the Docker volume and
associated Persistent Volume Claims (PVCs) mounted to the MMT container.
The volume name returns as dtr-registry-<volume-id>.
Mount the source MSR 2.9.x volume to the MMT container at /storage to
provide the container with access to the volume data, for both the
Estimate and Extract
migration stages.
Migrate the data extract to the PVC of the target MSR 3.0.x system. To do
this, you must run the MMT container in a Pod with the PVC mounted at the
container /storage directory.
Note
In the event the PVC is not mounted at the MMT container /storage
directory, the Restore migration step may
still complete, and the target MSR 3.0.x system may display the restored
data in the MSR web UI. Pulling images from the target system, however,
will fail, as the source MSR 2.9.x image data is not migrated to the MSR
3.0.x PVC.
Use the YAML template that follows as an example for how to create the MMT
Pod and other required Kubernetes objects:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <mmt-serviceaccount-name>
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <mmt-role-name>
rules:
  # Add/remove more permissions as needed
  - apiGroups: ["", "apps", "rbac.authorization.k8s.io", "cert-manager.io", "acid.zalan.do"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <mmt-rolebinding-name>
subjects:
  - kind: ServiceAccount
    name: <mmt-serviceaccount-name>
roleRef:
  kind: Role
  name: <mmt-role-name>
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: <mmt-pod-name>
spec:
  serviceAccountName: <mmt-serviceaccount-name>
  volumes:
    - name: storage
      persistentVolumeClaim:
        # This is the PersistentVolumeClaim that the destination/target MSR 3.0.x is using.
        # This PVC is acting as the filesystem storage backend for MSR 3.0.x.
        claimName: <msr-pvc-name>
  containers:
    - name: msr-migration-tool
      image: registry.mirantis.com/msr/mmt:<mmt-image-tag>
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c", "while true; do sleep 30; done;"]
      volumeMounts:
        - name: storage
          mountPath: /storage
  restartPolicy: Never
Once <mmt-pod-name> is running, copy your source MSR 2.9.x data extract
to the /migration location of the MMT container running within the Pod:
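A hedged sketch using kubectl cp follows; the Pod name, container name, and
destination path are placeholders, so adjust them to match your Pod
specification:
# Copy the local data extract into the MMT container running in the Pod.
kubectl cp <local-migration-directory> <mmt-pod-name>:/migration -c msr-migration-tool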
When migrating a large source installation to your MSR target environment, MMT
can fail due to too many files being open. If this happens, the following error
message displays:
When running MMT from a Docker container, ensure that the path provided for
storing migration data has been mounted as a docker volume to the local
machine.
When running MMT outside of Docker, ensure the path provided exists.
The error is reported when the rethinkdb Pod for the destination MSR 3.x
installation does not have enough disk space available due to the sizing of its
provisioned volume.
To increase volume size using the MSR Operator:
Edit the custom resource manifest, changing the
rethinkdb.cluster.persistentVolume.size value to match the source
RethinkDB volume size.
Apply the changes to the custom resource:
kubectl apply -f cr-sample-manifest.yaml
Verify completion of the reconciliation process for the custom resource:
Edit the values.yaml file you used for MSR deployment, changing the
rethinkdb.cluster.persistentVolume.size value to match the source
RethinkDB volume size.
Run the helm upgrade --values <path to values.yaml> msr
msr/msr command.
The error is reported when the node on which RethinkDB is running on the target
MSR system does not have enough available disk space.
SSH into the node on which RethinkDB is running.
Review the amount of disk space used by the docker daemon on the node:
docker system df
Review the total size and available storage of the node filesystem:
df
Allocate more storage to the host machine on which the target node is
running.
Admin password on MSR 3.0.x target no longer works¶
As a result of the migration, the source MSR system security settings
completely replace the settings in the target MSR system. Thus, to gain
admin access to the target system, you must use the admin password for the
source system.
MMT uses several parallel sub-routines in copying image blobs, the number of
which is controlled by the --parallel-io-count parameter, which has a
default value of 4.
Image blobs are copied only when you are using the copy storage mode for
your migration, during the Extract and Restore stages of the migration workflow. For optimum
performance, the number of CPU resources to allocate for the MMT container
(--cpus=<value>) is the --parallel-io-count value plus one for MMT itself.
For example, with the default --parallel-io-count of 4, set --cpus=5.
You may encounter an INFO[0014] Total blob size: 0 error message during
migration with copy mode.
This indicates that the storage is empty or that blob storage mapping
is defined incorrectly.
The error may result in a panic message in versions prior to MMT 2.0.2-GA.
To resolve the issue, ensure that the correct source volume is specified
in the mount parameter of the MMT command line.
Note that the exact source storage name may vary.
Errors can occur during migration that require the use of additional MMT
parameters at various stages of the migration process.
For scenarios wherein the pulling of Docker images has failed, you can use the
parameters detailed in the following table to pull the needed images to your
MKE cluster running MSR 2.9.x.
You must pull the MMT image to both your source MSR system and your target MSR
system, otherwise the migration will fail with the following error message:
MSR 3.0.3 or later must be running on your target MSR 3.x cluster, otherwise
the restore migration step will fail with the following error message:
{"level":"fatal","msg":"flag provided but not defined: -append","time":"<time>"}
failed to restore metadata from "/migration/msr-backup-<msr-version>-mmt.tar": restore failed: command terminated with exit code 1
To resolve the issue, upgrade your target cluster to MSR 3.0.3 or later. Refer
to Upgrade MSR for more information.
Storage configuration is out of sync with metadata¶
With the inplace storage mode, an error
message will display if you fail to configure the external storage location for
your target MSR system to the same storage location that your source MSR system
uses:
failed to run container: mmt-dtr-rethinkdb-backup¶
During the Estimate and
Extract stages of the migration workflow, you may
encounter the following error message:
FATA[0001] failed to extract MSR metadata:\
failed to run container: \
mmt-dtr-rethinkdb-backup: \
Error response from daemon: \
Conflict: \
The name mmt-dtr-rethinkdb-backup is already assigned. \
You have to delete (or rename) that container to be able to assign \
mmt-dtr-rethinkdb-backup to a container again.
Identify the node on which mmt-dtr-rethinkdb-backup was created.
From the node on which the mmt-dtr-rethinkdb-backup container was
created, delete the RethinkDB backup container:
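For example, a minimal sketch, run on the node you identified above:
# Remove the stale backup container so MMT can recreate it.
docker rm mmt-dtr-rethinkdb-backup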
[ENGDTR-4170] Fixed an issue wherein during migration the LDAP setting was
not appearing in the destination MSR. Now, the setting is completely
transferred to MSR 3.x metadata and can be accessed on
the Settings page of the MSR web UI.
[FIELD-6379] Fixed an issue wherein the estimation command in air-gapped
environments failed due to attempts to pull the MMT image on a random node.
The fix ensures that the MMT image is pulled on the required node, where the
estimation command is executed.
Due to unsanitized NUL values, attackers may be able to
maliciously set environment variables on Windows. In
syscall.StartProcess and os/exec.Cmd, invalid environment
variable values containing NUL values are not properly checked
for. A malicious environment variable value can exploit this behavior
to set a value for a different environment variable. For example, the
environment variable string "A=B\x00C=D" sets the variables
"A=B" and "C=D".
Mirantis Secure Registry (MSR) subscriptions provide access to prioritized
support for designated contacts from your company, agency, team, or
organization. MSR service levels are based on your subscription level and the
cloud or cluster that you designate in your technical support case. Our support
offerings are described on the
Enterprise-Grade Cloud Native and Kubernetes Support page.
You may inquire about Mirantis support subscriptions by using the
contact us form.
The CloudCare Portal is the chief way in
which Mirantis interacts with customers who are experiencing technical
issues. Access to the CloudCare Portal requires prior authorization by your
company, agency, team, or organization, and a brief email verification step.
After Mirantis sets up its back-end systems at the start of the support
subscription, a designated administrator at your company, agency, team, or
organization can designate additional contacts. If you have not already
received and verified an invitation to our CloudCare Portal, contact your local
designated administrator, who can add you to the list of designated contacts.
Most companies, agencies, teams, and organizations have multiple designated
administrators for the CloudCare Portal, and these are often the persons most
closely involved with the software. If you do not know who your
local designated administrator is, or you are having problems accessing the
CloudCare Portal, you can also send an email to Mirantis support at
support@mirantis.com.
Once you have verified your contact details and changed your password, you and
all of your colleagues will have access to all of the cases and purchased
resources. Mirantis recommends that you retain your Welcome to Mirantis
email, as it contains information on how to access the CloudCare Portal,
guidance on submitting new cases, managing your resources, and other related
issues.
Mirantis encourages all customers with technical problems to use the
knowledge base, which you can access on the Knowledge tab
of the CloudCare Portal. We also encourage you to review the
MSR product documentation and release notes prior to
filing a technical case, as the problem may have already been fixed in a later
release, or a workaround solution may be available for a similar problem that
other customers have experienced.
One of the features of the CloudCare Portal is the ability to associate
cases with a specific MSR cluster. The associated clusters are referred to in
the Portal as “Clouds”. Mirantis pre-populates your customer account with one or
more Clouds based on your subscription(s). You may also create and manage
your Clouds to better match the way in which you use your subscription.
Mirantis also recommends and encourages customers to file new cases based on a
specific Cloud in your account. This is because most Clouds also have
associated support entitlements, licenses, contacts, and cluster
configurations. These submissions greatly enhance the ability of Mirantis to
support you in a timely manner.
You can locate the existing Clouds associated with your account by using the
Clouds tab at the top of the portal home page. Navigate to the
appropriate Cloud and click on the Cloud name. Once you have verified that the
Cloud represents the correct MSR cluster and support entitlement, create a new
case via the New Case button near the top of the Cloud page.
The support bundle, which is a compressed archive in ZIP format of
configuration data and log files from the cluster, is the key to receiving
effective technical support for most MSR cases. There are several ways to
gather a support bundle, each of which is described in the sections that
follow. Once you have obtained a support bundle, you can upload the bundle to
your new technical support case by following the instructions in
the Mirantis knowledge base,
using the Detail view of your case.
Note
MSR users can obtain a support bundle using the
Mirantis Support Console. For those running MSR on Mirantis
Kubernetes Engine (MKE), there are additional methods for obtaining a
support bundle that are detailed in MSR support bundles on MKE.
Once the Support Console is successfully installed, the system returns the
commands needed to access the Support Console UI:
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=support-console,app.kubernetes.io/instance=support-console" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8000 to use your application"
kubectl --namespace default port-forward $POD_NAME 8000:$CONTAINER_PORT
An Internet-connected system is required for offline installation of the
Support Console, for the purpose of downloading and transferring the necessary
files to the offline host.
Download the Support Console image package from
https://s3-us-east-2.amazonaws.com/packages-mirantis.com/caas/msc_image_1.0.0.tar.gz.
Once the Support Console is successfully installed, the system returns the
commands needed to access the Support Console UI:
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=support-console,app.kubernetes.io/instance=support-console" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8000 to use your application"
kubectl --namespace default port-forward $POD_NAME 8000:$CONTAINER_PORT
In your web browser, navigate to localhost:8000 to view the Support
Console UI.
Click Collect Support Bundle.
In the pop-up window, enter the namespace from which you want to collect
support data. By default, the Support Console gathers support data from the
default namespace.
Optional. If you no longer require access to the Support Console, click
Uninstall in the left-side navigation panel to remove the
support-console Pod from your cluster.
Obtain the support bundle using the Support Console API¶
Obtain the support bundle, specifying the namespace from which you want to
collect support data. By default, the Support Console gathers support data
from the default namespace.
curl localhost:8000/collect?ns=<namespace> -O -J
Optional. If you no longer require access to the Support Console, run the
following command to remove the support-console Pod from your
cluster:
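A hedged sketch follows, assuming the console was installed as a Helm release
named support-console:
# Remove the Support Console release and its Pod from the cluster.
helm uninstall support-console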
Obtain a full-cluster support bundle using the MKE web UI¶
Log in to the MKE web UI as an administrator.
In the left-side navigation panel, navigate to
<user name> and click Support Bundle. The support
bundle download will require several minutes to complete.
Note
The default name for the generated support bundle file is
docker-support-<cluster-id>-YYYYmmdd-hh_mm_ss.zip. Mirantis suggests
that you not alter the file name before submitting it to the customer
portal. However, if necessary, you can add a custom string between
docker-support and <cluster-id>, as in:
docker-support-MyProductionCluster-<cluster-id>-YYYYmmdd-hh_mm_ss.zip.
Submit the support bundle to Mirantis Customer Support by clicking
Share support bundle on the success prompt that displays
once the support bundle has finished downloading.
Fill in the Jira feedback dialog, and click Submit.
Obtain a full-cluster support bundle using the MKE API¶
Create an environment variable with the user security token:
If SELinux is enabled, include the following flag:
--security-opt label=disable.
Note
The CLI-derived support bundle only contains logs for the node on which
you are running the command. If you are running a high availability
MKE cluster, collect support bundles from all manager nodes.
Use the MKE CLI with PowerShell to get a support bundle¶
Run the following command on Windows worker nodes to collect the support
information and have it placed automatically into a zip file:
The eNZi service provides authentication and authorization functions for MSR.
It offers a rich API through which users and OpenID Connect clients can query
identity, sessions, membership, teams, and label permissions.
You can use the MSR CLI tool to backup and restore the software, perform
database administration tasks, and gather information on RethinkDB
clusters. The tool runs in interactive mode by default, issuing prompts
as necessary for any required values.
To access the MSR CLI:
Kubernetes deployments
Following MSR installation, run the helm get notes command for
instructions on how to access the MSR CLI:
helm get notes <RELEASE_NAME>
Note
Additional help is available for the CLI and for each command using the
--help option.
Swarm deployments
To access the MSR CLI on Swarm deployments, execute the following command
from any node where MSR is installed:
The msr backup command creates a backup of the metadata in use by
MSR, which you can restore with the msr restore command.
Note
msr backup only creates backups of configuration, image, and
chart metadata. It does not back up the Docker images or Helm charts
stored in your registry. Mirantis suggests that you implement a separate
backup policy for the contents of your storage backend, taking into
consideration whether your MSR installation is configured to store
images on the filesystem or through the use of a cloud provider.
Important
Mirantis recommends that you store your backup in a secure location, as
it contains sensitive information.
Change the replication factor of RethinkDB tables in use by MSR.
When the --replicas flag is present, the db scale command uses the
associated value as the replication factor for the tables. Otherwise, the
replication factor is set automatically, based on the number of RethinkDB
servers that are connected to the server at that moment.
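For example, to set the replication factor to an explicit value (the value 3 below is illustrative only):
msr db scale --replicas 3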
Use the db emergency-repair command to repair all RethinkDB tables
in use by MSR that have lost quorum. The command accomplishes its work by
running the RethinkDB unsafe_rollback emergency repair on the tables and
then scaling the tables in the same manner as the msr db scale command
(refer to mirantis/msr db scale for information on which replication factor
to use).
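A typical invocation, run interactively so that the tool can prompt for any required values:
msr db emergency-repair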
The msr restore command performs a restore of the metadata used
by MSR, from a backup file that has been generated by the
msr backup command.
Note
msr restore does not restore Docker images or Helm charts. Mirantis
suggests that you implement a separate restore procedure
for the contents of your storage backend, taking into consideration
whether your MSR installation is configured to store images on the local
filesystem or through a cloud provider.
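A minimal sketch of the round trip, with both commands run interactively so that the tool prompts for any values it needs; where and how the backup file is written is governed by those prompts and is not shown here:
msr backup    # creates a backup of the MSR metadata
msr restore   # restores metadata from a backup produced by msr backup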
Use the rethinkdb decommission command to remove all tags from a
RethinkDB server, so that table replicas are removed from that server the
next time the respective tables are reconfigured (scaled).
Due to the significant changes put forward with the
introduction of the MSR 3.0.0 release, several legacy
MSR CLI commands became unnecessary and have been
removed. The commands that are no longer available are:
[FIELD-5075] Ability to control redirects on pull¶
Users can now enable and disable redirect on pull by setting
the redirect flag in the Helm values.yaml file to true or false.
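For instance, the setting can also be applied on the Helm command line rather than by editing values.yaml; the release name and the exact key path within the values hierarchy are assumptions here:
helm upgrade msr <msr-chart> --reuse-values --set redirect=true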
[FIELD-7548] Improved error handling and API
behavior for artifact references¶
MSR improved error handling by adding:
ARTIFACT_SCANNER_REPORT_UNAVAILABLE error, to indicate that a report
export failed due to missing layer details for the specified artifact.
This replaces the previously used generic NO_SUCH_TAG error.
NO_DIGEST_PERMITTED error, to indicate that digest-based references
are not supported for report exports.
The list of the addressed issues in MSR 3.1.11 includes:
[FIELD-7515] Fixed an issue wherein the Show/Hide button for layer
vulnerabilities displayed as enabled for non-admin users in scanning results.
The button is now disabled and features a tooltip that explains the
restriction.
[FIELD-7537] Fixed an issue wherein MSR remained in read-only
mode whenever a backup job timed out.
[FIELD-7552] Fixed an issue wherein msr-installer did not update the
storage configuration whenever the backend storage was changed.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
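A hedged sketch of such an invocation; the installer image reference and the mounted socket path are assumptions and should be taken from the installation instructions for your MSR version:
docker run --rm -it --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <msr-installer-image>:<version> install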
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
musl libc 0.9.13 through 1.2.5 before 1.2.6 has an out-of-bounds write
vulnerability when an attacker can trigger iconv conversion of untrusted
EUC-KR text to UTF-8.
Calling Decoder.Decode on a message which contains deeply nested
structures can cause a panic due to stack exhaustion. This is a
follow-up to CVE-2022-30635.
The list of the enhancements in MSR 3.1.10 includes:
[ENGDTR-4233] Added the Backup Deadline option to the MSR web UI
to configure a maximum duration for backups, in hours and minutes.
[ENGDTR-4369] Updated MSR Operator to 1.0.4.
[ENGDTR-4370] Updated RethinkDB Operator to 1.0.3.
[ENGDTR-4380] Updated Mirantis eNZi to 1.0.89-ui.
[FIELD-7051] Added support for specifying an initial admin password
during MSR installations that are performed using a Helm chart. The
new adminPassword parameter allows users to set a custom password
for eNZi registration, replacing the default. This parameter applies
only during installation and does not affect upgrades.
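For example, a fresh Helm installation might set the parameter on the command line instead of editing values.yaml; the release name, the chart reference, and the placement of the parameter at the top level of the values hierarchy are assumptions here:
helm install msr <msr-chart> --set adminPassword=<initial-admin-password>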
The list of the addressed issues in MSR 3.1.10 includes:
[FIELD-6433] Fixed an issue wherein the search function in the
User page of the MSR web UI incorrectly returned organizations
and repositories information in addition to user information.
[FIELD-7005] Fixed an issue wherein the MSR web UI failed to clearly
identify a successful user password change by an administrator.
Now, the Save button is disabled until a valid password
is entered into the New password field, and a popup
displays to indicate that the operation was a success.
[FIELD-7185] Fixed an MSR Operator issue wherein some jobrunner Pods
remained pending when podAntiAffinityPreset was set to hard.
[FIELD-7468] Fixed an issue wherein the RethinkDB CLI could not be
accessed in a Swarm environment.
[FIELD-7476] Fixed an issue wherein repositories with immutable tags were not
correctly skipped during pruning operations, which caused pruning jobs to fail
instead of skipping those tags and continuing execution. Now, immutable
repositories are correctly skipped, allowing tag pruning to execute for the
rest of them.
[FIELD-7483] Fixed an issue wherein duplicated resources exist in the
Helm Chart artifact for both MSR Operator and RethinkDB Operator.
[FIELD-7499] Fixed an issue wherein the MSR web UI failed to provide
proper error feedback whenever attempts were made to create pruning
policies for repositories with immutable tags.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
musl libc 0.9.13 through 1.2.5 before 1.2.6 has an out-of-bounds write
vulnerability when an attacker can trigger iconv conversion of untrusted
EUC-KR text to UTF-8.
The list of the addressed issues in MSR 3.1.9 includes:
[FIELD-7274] Fixed an issue wherein tolerations and node selectors were not
applied to the msr-operator-controller-manager and
rethinkdb-operator-controller-manager deployments. The issue caused Pods
to remain in a pending state.
[FIELD-7380] Fixed an issue wherein the MSR Operator deployed RethinkDB Pods
with emptyDir volumes. As the storage was tied to
the Pod lifecycle, data was lost whenever the Pods were updated or deleted.
The fix ensures that RethinkDB Pods use persistent storage, thus preserving
data during Pod updates or deletions.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
An issue was discovered in libexpat before 2.6.3. dtdCopy in xmlparse.c
can have an integer overflow for nDefaultAtts on 32-bit platforms (where
UINT_MAX equals SIZE_MAX).
An issue was discovered in libexpat before 2.6.3. nextScaffoldPart in
xmlparse.c can have an integer overflow for m_groupSize on 32-bit
platforms (where UINT_MAX equals SIZE_MAX).
An issue was discovered in Django 5.1 before 5.1.4, 5.0 before 5.0.10,
and 4.2 before 4.2.17. Direct usage of the
django.db.models.fields.json.HasKey lookup, when an Oracle database is
used, is subject to SQL injection if untrusted data is used as an lhs
value. (Applications that use the jsonfield.has_key lookup via __ are
unaffected.)
Issue summary: Use of the low-level GF(2^m) elliptic curve APIs with
untrusted explicit values for the field polynomial can lead to
out-of-bounds memory reads or writes.
An attacker can craft an input to the Parse functions that would be
processed non-linearly with respect to its length, resulting in extremely
slow parsing. This could cause a denial of service.
Applications and libraries which misuse the
ServerConfig.PublicKeyCallback callback may be susceptible to an
authorization bypass. The documentation for
ServerConfig.PublicKeyCallback says that “A call to this function does
not guarantee that the key offered is in fact used to authenticate.”
Specifically, the SSH protocol allows clients to inquire about whether a
public key is acceptable before proving control of the corresponding
private key. PublicKeyCallback may be called with multiple keys, and the
order in which the keys were provided cannot be used to infer which key
the client successfully authenticated with, if any. Some applications,
which store the key(s) passed to PublicKeyCallback (or derived
information) and make security relevant determinations based on it once
the connection is established, may make incorrect assumptions.
The list of the enhancements in MSR 3.1.8 includes:
[ENGDTR-4349] Updated Mirantis eNZi to 1.0.88-ui.
[FIELD-6955] Enhancements to Helm chart management:
Improved chart placement
Helm charts pulled from Docker Hub are now placed under the
Charts tab in MSR repositories, which streamlines
the Helm chart management experience and ensures proper functionality
for Helm-related operations.
Ability to pull mirrored Helm charts between MSR instances
Helm charts can now be mirrored between different MSR instances, which
enables consistent management of charts across multiple MSR environments.
Ability to push mirrored Helm charts to remote registries
Helm charts can now be pushed from MSR to remote registries, such as Docker Hub
or other MSR instances, with proper chart handling.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
Issue summary: Calling the OpenSSL API function SSL_select_next_proto
with an empty supported client protocols buffer may cause a crash or
memory contents to be sent to the peer.
Issue summary: Applications performing certificate name checks
(e.g., TLS clients checking server certificates) may attempt to read an
invalid memory address resulting in abnormal termination of the
application process.
Issue summary: Use of the low-level GF(2^m) elliptic curve APIs with
untrusted explicit values for the field polynomial can lead to
out-of-bounds memory reads or writes.
An issue was discovered in Cloud Native Computing Foundation (CNCF) Helm
through 3.13.3. It displays values of secrets when the --dry-run flag is
used. This is a security concern in some use cases, such as a --dry-run
call by a CI/CD tool. NOTE: the vendor's position is that this behavior
was introduced intentionally, and cannot be removed without breaking
backwards compatibility (some users may be relying on these values).
Also, it is not the Helm Project's responsibility if a user decides to
use --dry-run within a CI/CD environment whose output is visible to
unauthorized persons.
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of
header data by sending an excessive number of CONTINUATION frames.
Maintaining HPACK state requires parsing and processing all HEADERS and
CONTINUATION frames on a connection. When a request’s headers exceed
MaxHeaderBytes, no memory is allocated to store the excess headers, but
they are still parsed. This permits an attacker to cause an HTTP/2
endpoint to read arbitrary amounts of header data, all associated with a
request which is going to be rejected. These headers can include
Huffman-encoded data which is significantly more expensive for the
receiver to decode than for an attacker to send. The fix sets a limit on
the amount of excess header frames we will process before closing a
connection.
The protojson.Unmarshal function can enter an infinite loop when
unmarshaling certain forms of invalid JSON. This condition can occur when
unmarshaling into a message which contains a google.protobuf.Any value,
or when the UnmarshalOptions.DiscardUnknown option is set.
Helm is a package manager for Charts for Kubernetes. Versions prior to
3.14.2 contain an uninitialized variable vulnerability when Helm parses
index and plugin yaml files missing expected content. When either an
index.yaml file or a plugins plugin.yaml file were missing all
metadata a panic would occur in Helm. In the Helm SDK, this is found when
using the LoadIndexFile or DownloadIndexFile functions in the repo
package or the LoadDir function in the plugin package. For the Helm
client this impacts functions around adding a repository and all Helm
functions if a malicious plugin is added as Helm inspects all known
plugins on each invocation. This issue has been resolved in Helm v3.14.2.
If a malicious plugin has been added which is causing all Helm client
commands to panic, the malicious plugin can be manually removed from the
filesystem. If using Helm SDK versions prior to 3.14.2, calls to affected
functions can use recover to catch the panic.
The list of the enhancements in MSR 3.1.7 includes:
[ENGDTR-4140] Updated Mirantis eNZi to 1.0.87.
[ENGDTR-4290] Added a new --include-job-logs flag to the
backup command that enables users to include job logs in the
backup.
[ENGDTR-4332] Updated Golang to 1.21.13.
[FIELD-7096] The apply command now includes the --external-url
flag, which allows users to configure the host or load balancer URL at the
time of installation or upgrade. The --external-url flag can be applied
alongside such options as --https-port and --http-port.
Alternatively, the external URL can be configured in the values.yaml
file by setting the global.externalURL field.
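For example, an apply invocation might combine the new flag with the port options as follows; the installer invocation is a sketch, and the hostname and port values are placeholders:
<msr-installer> apply --external-url https://msr.example.com \
  --https-port 443 --http-port 80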
The list of the addressed issues in MSR 3.1.7 includes:
[ENGDTR-2623] Fixed an issue wherein eNZi configuration changes required
manual intervention and restart of MSR containers.
[FIELD-7041] Fixed an issue wherein SAML metadata was not deleted
from RethinkDB upon disablement of the SAML configuration.
[FIELD-7118] Fixed an issue wherein the rethinkdb pods failed to retain
the default node selectors when both global.nodeSelector and
rethinkdb.nodeSelector were specified in the values.yaml file or set
in the helm upgrade --install command.
With this update, the rethinkdb.nodeSelector now correctly
overrides the global setting while also maintaining the default node
selector.
[FIELD-7122] Fixed an issue wherein the MSR web UI would crash whenever a tag
had no layers to display. Now in such cases, the MSR web UI reports that
layer details are not available for the particular image.
[FIELD-7124] Fixed an issue wherein MSR failed to create a storage
configuration during installation due to the presence of an existing
configuration.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
pgx is a PostgreSQL driver and toolkit for Go. SQL injection can occur if
an attacker can cause a single query or bind message to exceed 4 GB in
size. An integer overflow in the calculated message size can cause the
one large message to be sent as multiple messages under the attacker’s
control. The problem is resolved in v4.18.2 and v5.5.4. As a workaround,
reject user input large enough to cause a single query or bind message to
exceed 4 GB in size.
Moby is an open-source project created by Docker for software
containerization. A security vulnerability has been detected in certain
versions of Docker Engine, which could allow an attacker to bypass
authorization plugins (AuthZ) under specific circumstances. The base
likelihood of this being exploited is low. Using a specially-crafted API
request, an Engine API client could make the daemon forward the request
or response to an authorization plugin without the body. In certain
circumstances, the authorization plugin may allow a request which it
would have otherwise denied if the body had been forwarded to it.
A security issue was discovered in 2018, where an attacker could bypass
AuthZ plugins using a specially crafted API request. This could lead to
unauthorized actions, including privilege escalation. Although this issue
was fixed in Docker Engine v18.09.1 in January 2019, the fix was not
carried forward to later major versions, resulting in a regression.
Anyone who depends on authorization plugins that introspect the request
and/or response body to make access control decisions is potentially
impacted. Docker EE v19.03.x and all versions of Mirantis Container
Runtime are not vulnerable. docker-ce v27.1.1 contains patches to fix
the vulnerability. Patches have also been merged into the master, 19.03,
20.0, 23.0, 24.0, 25.0, 26.0, and 26.1 release branches. If one is unable
to upgrade immediately, avoid using AuthZ plugins and/or restrict access
to the Docker API to trusted parties, following the principle of least
privilege.
A vulnerability in the package_index module of pypa/setuptools versions
up to 69.1.1 allows for remote code execution via its download functions.
These functions, which are used to download packages from URLs provided
by users or retrieved from package index servers, are susceptible to code
injection. If these functions are exposed to user-controlled inputs, such
as package URLs, they can execute arbitrary commands on the system.
The issue is fixed in version 70.0.
[ENGDTR-4272] /{namespace}/{reponame}/size API endpoint
[ENGDTR-4288] Python 3 utility tool
Capability to install cert-manager and Postgres Operator in different namespace¶
[FIELD-6966] MSR Operator 1.0.2 now allows you to install cert-manager
and Postgres Operator in a different namespace from the one in which
the MSR resource is running.
The list of the addressed issues in MSR 3.1.6 includes:
[ENGDTR-4255] Fixed an issue wherein an uninstall option --destroy was
not removing volumes from the host.
[FIELD-7038] Fixed an issue wherein the scanningstore and
rethinkdb-cli pods did not respect the global.nodeSelector
specified in the values.yaml file or set in the
helm upgrade --install command.
With this update, both the scanningstore and the rethinkdb-cli pods
now correctly apply the global.nodeSelector, ensuring they are
deployed to the nodes specified in the configuration.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Changes to eNZi configuration are not live-reloaded.
To work around the issue, restart the *-api, *-enzi-api,
*-garant, and *-registry Pods every time you change your eNZi
registration using the administrative commands.
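A hedged sketch of that workaround on a Kubernetes deployment, assuming the affected components run as Deployments whose names are prefixed with the Helm release name (msr here) in the default namespace; the workload names and types are assumptions:
kubectl rollout restart --namespace default \
  deployment/msr-api deployment/msr-enzi-api \
  deployment/msr-garant deployment/msr-registry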
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
The various Is methods (IsPrivate, IsLoopback, etc) did not work as
expected for IPv4-mapped IPv6 addresses, returning false for addresses
which would return true in their traditional IPv4 forms.
An out-of-bounds read flaw was found in the CLARRV, DLARRV, SLARRV,
and ZLARRV functions in lapack through version 3.10.0, as also used
in OpenBLAS before version 0.3.18. Specially crafted inputs passed to
these functions could cause an application using lapack to crash or
possibly disclose portions of its memory.
Pillow through 10.1.0 allows PIL.ImageMath.eval Arbitrary Code
Execution via the environment parameter, a different vulnerability
than CVE-2022-22817 (which was about the expression parameter).
The new apply command takes the place of the scale,
upgrade, and install commands.
The msr-installer now determines which operation to perform based on
the configuration specified in the values.yml file.
MSR can now be configured with an even number of replicas¶
Users can now use the --force flag to configure an MSR deployment with an
even number of replicas, and thus override the recommendation
check in the apply command. Be advised that Mirantis does not
recommend running an even number of replicas in a production environment.
The list of the addressed issues in MSR 3.1.5 includes:
[ENGDTR-4225] Fixed an issue wherein login events were not created.
The auditAuthLogsEnabled parameter in the /settings API endpoint must
be set for login events to be generated on any successful or failed login.
[ENGDTR-4239] Fixed an issue wherein during scale down the msr-installer
placed MSR containers on nodes outside those specified in the
swarm.nodeList.
[FIELD-6436] Fixed an issue wherein users who continued to use their default
password were not urged to change it. Now, users receive a warning in the MSR
web UI when they log in using the default password.
[FIELD-6924] Added a new webhook that includes CVSSv3 results in the webhook
payload for image scan reports; these results were previously absent.
[FIELD-6947] Fixed an issue wherein outdated index warnings appeared in
RethinkDB logs after an MSR upgrade.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Running the msr-installer uninstall command on a clean cluster
leaves behind an msr-finalizer service.
Installing MSR is not possible while this service remains in the cluster.
To work around the issue, delete the msr-finalizer service and
re-run the installation command.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Changes to eNZi configuration are not live-reloaded.
To work around the issue, restart the *-api, *-enzi-api,
*-garant, and *-registry Pods every time you change your eNZi
registration using the administrative commands.
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
An out-of-bounds read flaw was found in the CLARRV, DLARRV, SLARRV,
and ZLARRV functions in lapack through version 3.10.0, as also used
in OpenBLAS before version 0.3.18. Specially crafted inputs passed to
these functions could cause an application using lapack to crash or
possibly disclose portions of its memory.
Pillow through 10.1.0 allows PIL.ImageMath.eval Arbitrary Code
Execution via the environment parameter, a different vulnerability
than CVE-2022-22817 (which was about the expression parameter).
The backup configuration now includes a deadline setting that prevents
prolonged execution of automatic backups. Access and adjust this
functionality through the /api/v0/meta/settings/backup endpoint.
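A hedged example of adjusting the setting through that endpoint; the authentication method, request body shape, and field name are assumptions and may differ:
# the "deadline" field name and value format below are assumptions
curl -sk -u admin:$MSR_TOKEN -X POST \
  -H 'Content-Type: application/json' \
  -d '{"deadline": "2h30m"}' \
  https://<msr-url>/api/v0/meta/settings/backup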
The list of the addressed issues in MSR 3.1.4 includes:
[ENGDTR-4028] Fixed an issue wherein the loglevel flag in the Swarm
installer was not applied. The loglevel flag needs to be added
between the msr-installer image and the install subcommand.
[ENGDTR-4179] Fixed an issue wherein the scale command for MSR
on Swarm failed to stabilize.
[ENGDTR-4237] Fixed an issue wherein the msr-image-pull command
failed to retrieve images during scaling operation in an air-gapped
environment. A noImagePull option that disables automatic
image pulling was added to values.yml in msr-installer.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Changes to eNZi configuration are not live-reloaded.
To work around the issue, restart the *-api, *-enzi-api,
*-garant, and *-registry Pods every time you change your eNZi
registration using the administrative commands.
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
The list of the addressed issues in MSR 3.1.3 includes:
[ENGDTR-4158] Fixed an issue wherein the initialEvaluation flag of
a created or updated tag pruning policy was set to true, which caused its
evaluation to run in the API server. Now, the evaluation of the
policy is executed in the JobRunner as a single tag_prune job.
[ENGDTR-4159] Fixed an issue wherein the tag pruning policy feature,
responsible for the automated testing of tags and providing the count of
affected tags, was preventing the creation of policies. To ensure
the reliable creation of tag pruning policies, this feature has been removed.
Consequently, users will not see the number of affected tags when creating
new policies. For testing purposes before evaluation, Mirantis recommends
that you use the /pruningPolicies/test API endpoint.
[ENGDTR-4164] Fixed an issue wherein the documentation for the eNZi API
was not published.
[FIELD-6819] Fixed an issue wherein the MSR scaling would fail when SELinux
was enabled.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Changes to eNZi configuration are not live-reloaded.
To work around the issue, restart the *-api, *-enzi-api,
*-garant, and *-registry Pods every time you change your eNZi
registration using the administrative commands.
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] An MSR administrator who is logged in and closes their browser
instance does not need to log in again when they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
Verifying a certificate chain which contains a certificate with
an unknown public key algorithm will cause Certificate.Verify to
panic. This affects all crypto/tls clients, and servers that set
Config.ClientAuth to VerifyClientCertIfGiven or
RequireAndVerifyClientCert. The default behavior is for TLS servers
to not verify client certificates.
When parsing a multipart form (either explicitly with
Request.ParseMultipartForm or implicitly with
Request.FormValue, Request.PostFormValue, or
Request.FormFile), limits on the total size of the parsed form
were not applied to the memory consumed while reading a single form
line. This permits a maliciously crafted input containing very long
lines to cause allocation of arbitrarily large amounts of memory,
potentially leading to memory exhaustion. With fix,
the ParseMultipartForm function now correctly limits the maximum
size of form lines.
CVE-2023-45288 (Resolved): The CVE has been reserved by an organization or
individual and is not currently available in the NVD.
When following an HTTP redirect to a domain which is not a subdomain
match or exact match of the initial domain, an http.Client does not
forward sensitive headers such as “Authorization” or “Cookie”. For
example, a redirect from foo.com to www.foo.com will forward
the Authorization header, but a redirect to bar.com will not. A
maliciously crafted HTTP redirect could cause sensitive headers
to be unexpectedly forwarded.
An out-of-bounds read flaw was found in the CLARRV, DLARRV, SLARRV,
and ZLARRV functions in lapack through version 3.10.0, as also used
in OpenBLAS before version 0.3.18. Specially crafted inputs passed
to these functions could cause an application using lapack to crash
or possibly disclose portions of its memory.
Pillow through 10.1.0 allows PIL.ImageMath.eval Arbitrary Code
Execution via the environment parameter, a different vulnerability
than CVE-2022-22817 (which was about the expression parameter).
Whereas job filtering was previously only available for the running job
status, the functionality is now extended to include all available job
status options.
Issues addressed in the MSR 3.1.2 release include:
[FIELD-6748] Fixed an issue wherein the navigation buttons in the MSR web UI
Organizations tab were not enabled, and thus users could not
navigate to organizations that were not in the default view of 10.
[FIELD-6493] Fixed an issue with the MSR web UI wherein the
Customize Sign in Button Text control for
SAML Service Provider did not function.
[ENGDTR-4012] Fixed an issue wherein newly created pull mirrors did not
function.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] A logged-in MSR administrator who closes a browser instance
does not need to log in again if they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
Certifi is a curated collection of Root Certificates for validating
the trustworthiness of SSL certificates while verifying the identity
of TLS hosts. Certifi prior to version 2023.07.22 recognizes “e-Tugra”
root certificates. e-Tugra’s root certificates were subject to an
investigation prompted by reporting of security issues in their
systems. Certifi 2023.07.22 removes root certificates from “e-Tugra”
from the root store.
containerd is an open source container runtime. A bug was found in
containerd prior to versions 1.6.18 and 1.5.18 where supplementary
groups are not set up properly inside a container. If an attacker has
direct access to a container and manipulates their supplementary group
access, they may be able to use supplementary group access to bypass
primary group restrictions in some cases, potentially gaining access
to sensitive information or gaining the ability to execute code in
that container. Downstream applications that use the containerd client
library may be affected as well. This bug has been fixed in containerd
v1.6.18 and v.1.5.18. Users should update to these versions and
recreate containers to resolve this issue. Users who rely on a
downstream application that uses containerd’s client library should
check that application for a separate advisory and instructions. As a
workaround, ensure that the USER $USERNAME Dockerfile instruction
is not used. Instead, set the container entrypoint to a value similar
to ENTRYPOINT ["su", "-", "user"] to allow su to properly set
up supplementary groups.
containerd is an open source container runtime. Before versions 1.6.18
and 1.5.18, when importing an OCI image, there was no limit on the
number of bytes read for certain files. A maliciously crafted image
with a large file where a limit was not applied could cause a denial
of service. This bug has been fixed in containerd 1.6.18 and 1.5.18.
Users should update to these versions to resolve the issue. As a
workaround, ensure that only trusted images are used and that only
trusted users have permissions to import images.
There is a type confusion vulnerability relating to X.400 address
processing inside an X.509 GeneralName. X.400 addresses were parsed as
an ASN1_STRING but the public structure definition for
GENERAL_NAME incorrectly specified the type of the x400Address
field as ASN1_TYPE. This field is subsequently interpreted by the
OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than
an ASN1_STRING. When CRL checking is enabled (i.e. the application
sets the X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow
an attacker to pass arbitrary pointers to a memcmp call, enabling them
to read memory contents or enact a denial of service. In most cases,
the attack requires the attacker to provide both the certificate chain
and CRL, neither of which need to have a valid signature. If the
attacker only controls one of these inputs, the other input must
already contain an X.400 address as a CRL distribution point, which is
uncommon. As such, this vulnerability is most likely to only affect
applications which have implemented their own functionality for
retrieving CRLs over a network.
The public API function BIO_new_NDEF is a helper function used for
streaming ASN.1 data via a BIO. It is primarily used internally to
OpenSSL to support the SMIME, CMS and PKCS7 streaming capabilities,
but may also be called directly by end user applications. The function
receives a BIO from the caller, prepends a new BIO_f_asn1 filter
BIO onto the front of it to form a BIO chain, and then returns the new
head of the BIO chain to the caller. Under certain conditions, for
example if a CMS recipient public key is invalid, the new filter BIO
is freed and the function returns a NULL result indicating a failure.
However, in this case, the BIO chain is not properly cleaned up and
the BIO passed by the caller still retains internal pointers to the
previously freed filter BIO. If the caller then goes on to call
BIO_pop() on the BIO then a use-after-free will occur. This will
most likely result in a crash. This scenario occurs directly in the
internal function B64_write_ASN1() which may cause
BIO_new_NDEF() to be called and will subsequently call
BIO_pop() on the BIO. This internal function is in turn called by
the public API functions PEM_write_bio_ASN1_stream,
PEM_write_bio_CMS_stream, PEM_write_bio_PKCS7_stream,
SMIME_write_ASN1,SMIME_write_CMS and SMIME_write_PKCS7. Other
public API functions that may be impacted by this include
i2d_ASN1_bio_stream, BIO_new_CMS, BIO_new_PKCS7,
i2d_CMS_bio_stream and i2d_PKCS7_bio_stream. The OpenSSL cms
and smime command line applications are similarly affected.
containerd is an open source container runtime. A bug was found in
containerd’s CRI implementation where a user can exhaust memory on the
host. In the CRI stream server, a goroutine is launched to handle
terminal resize events if a TTY is requested. If the user’s process
fails to launch due to, for example, a faulty command, the goroutine
will be stuck waiting to send without a receiver, resulting in a
memory leak. Kubernetes and crictl can both be configured to use
containerd’s CRI implementation and the stream server is used for
handling container IO. This bug has been fixed in containerd 1.6.12
and 1.5.16. Users should update to these versions to resolve the
issue. Users unable to upgrade should ensure that only trusted images
and commands are used and that only trusted users have permissions to
execute commands in running containers.
The function PEM_read_bio_ex() reads a PEM file from a BIO and
parses and decodes the name (e.g. CERTIFICATE), any header
data and the payload data. If the function succeeds then the
name_out, header and data arguments are populated with
pointers to buffers containing the relevant decoded data. The caller
is responsible for freeing those buffers. It is possible to construct
a PEM file that results in 0 bytes of payload data. In this case
PEM_read_bio_ex() will return a failure code but will populate the
header argument with a pointer to a buffer that has already been
freed. If the caller also frees this buffer then a double free will
occur. This will most likely lead to a crash. This could be exploited
by an attacker who has the ability to supply malicious PEM files for
parsing to achieve a denial of service attack. The functions
PEM_read_bio() and PEM_read() are simple wrappers around
PEM_read_bio_ex() and therefore these functions are also directly
affected. These functions are also called indirectly by a number of
other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and
SSL_CTX_use_serverinfo_file() which are also vulnerable. Some
OpenSSL internal uses of these functions are not vulnerable because
the caller does not free the header argument if PEM_read_bio_ex()
returns a failure code. These locations include the
PEM_read_bio_TYPE() functions as well as the decoders introduced
in OpenSSL 3.0. The OpenSSL asn1parse command line application is also
impacted by this issue.
Heap/stack buffer overflow in the dlang_lname function in
d-demangle.c in libiberty allows attackers to potentially
cause a denial of service (segmentation fault and crash) via a crafted
mangled symbol.
paraparser in ReportLab before 3.5.31 allows remote code execution
because start_unichar in paraparser.py evaluates untrusted user input
in a unichar element in a crafted XML document with
<unichar code=" followed by arbitrary Python code, a similar issue
to CVE-2019-17626.
ReportLab through 3.5.26 allows remote code execution because of
toColor(eval(arg)) in colors.py, as demonstrated by a crafted
XML document with <span color=" followed by arbitrary Python code.
Using MMT, an extract of MSR installed through Helm can now be restored into an
MSR custom resource (CR) that is managed by the MSR Operator. If no CR exists
and if MSR Operator is installed, a new CR is created.
Descriptive message on override of image with same tag¶
The message displayed on any attempt to override an image with the same tag
has been improved: previously a generic error 500 was returned, now the message
reads denied: Repository is marked as immutable.
Error message on command trigger to indicate odd number of nodes¶
When the MSR installer install or scale command is used against
an even number of nodes, the installer now exits with an explicit error message
that indicates that an odd number of nodes must be specified.
A search field is now present on the Organizations screen in the
MSR web UI to aid users in filtering through large numbers of organizations
on their clusters.
When malware is present in user images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] A logged-in MSR administrator who closes a browser instance
does not need to log in again if they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
You can now deploy a Prometheus server on your Kubernetes or Swarm cluster to
scrape a set of key MSR health metrics. The metrics cover such product elements
as core registry functionality, authentication, push and pull mirroring, and
RethinkDB operations.
This section describes the MSR known issues with available workarounds,
along with a list of current product limitations.
Note
When malware is present in customer images, malware scanners operating on
MSR Nodes at runtime can wrongly report MSR as a bad actor. If your
malware scanner detects any issue in a running instance of MSR, refer
to Scan images for vulnerabilities.
Attempting to install MSR on a Swarm cluster running RHEL 9.2 may result in a
failure with the following error message:
FATA[0000] installer prerequisite check failed:\
could not detect docker swarm: \
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: \
Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/swarm": \
dial unix /var/run/docker.sock: connect: permission denied
Workaround:
Use the --privileged flag when installing MSR on a
Swarm cluster that runs on RHEL 9.2, as exemplified below:
Integration with MKE authentication is not yet supported.
Client-certificate authentication for MSR users is not currently available.
MSR operators cannot currently specify passwords for the MSR administrators,
and the Helm chart configures MSR with a static default password at install.
[ENGDTR-3005] A logged-in MSR admin who closes a browser instance does not
need to log in again if they open a new browser instance.
[ENGDTR-3003] Enabling Require users to Log In per Tab Session in
eNZi for MSR does not result in users being required to reenter their
credentials when they open the MSR web UI in a new tab.
As Mirantis continuously reorganizes and enhances Mirantis Secure Registry
(MSR), certain components are deprecated and eventually removed from the
product. This section provides the following details about the deprecated and
removed functionality that may potentially impact existing MSR deployments:
The MSR release version in which deprecation is announced
The final MSR release version in which a deprecated component is present
The MSR release version in which a deprecated component is removed
Mirantis Secure Registry (MSR, formerly Docker Trusted Registry) provides
an enterprise-grade container registry solution that can be easily integrated
to provide the core of an effective secure software supply chain.
MSR functionality is dependent on MKE, and MKE functionality is dependent on MCR. As such, MSR operating system compatibility is contingent on the operating system compatibility of the MCR versions with which your particular MKE version is compatible.
To determine MSR operating system compatibility:
Access the MKE compatibility matrix
and locate the version of MKE that you are running with MSR.
Note the MCR versions with which that MKE version is compatible.
Access the MCR compatibility matrix
and locate the MCR versions that are compatible with your version of MKE
to determine operating system compatibility.
MSR       Kubernetes required   Compatible MKE versions
3.1.11    1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.10    1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.9     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.8     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.7     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.6     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.5     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.4     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.3     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.2     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.1     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
3.1.0     1.24 - 1.27           3.6.x, 3.7.x, 3.8.x
Important
The Postgres Operator version you install must be 1.10.0 or later,
as all versions up through 1.8.2 use the PodDisruptionBudget policy/v1beta1
Kubernetes API, which is no longer served as of Kubernetes 1.25.
This being the case, various MSR features may not function properly if
a Postgres Operator prior to 1.10.0 is installed alongside MSR
on Kubernetes 1.25 or later.
Note
MKE 3.7.x and 3.6.x provide Kubernetes versions that are compatible
with MSR 3.1.x.
For use with MSR, Kubernetes requires persistent volumes that support both the
ReadWriteOnce and ReadWriteMany volume access modes, or a
StorageClass that can provision such volumes. Refer to the
System requirements for more information.
The Mirantis Kubernetes Engine (MKE) and Mirantis Secure Registry (MSR) web
user interfaces (UIs) both run in the browser, separate from any backend
software. As such, Mirantis aims to support browsers separately from
the backend software in use.
Mirantis currently supports the following web browsers:
Browser           Supported version     Release date        Operating systems
Google Chrome     96.0.4664 or newer    15 November 2021    MacOS, Windows
Microsoft Edge    95.0.1020 or newer    21 October 2021     Windows only
Firefox           94.0 or newer         2 November 2021     MacOS, Windows
To ensure the best user experience, Mirantis recommends that you use the
latest version of any of the supported browsers. The use of other browsers
or older versions of the browsers we support can result in rendering issues,
and can even lead to glitches and crashes in the event that some JavaScript
language features or browser web APIs are not supported.
Important
Mirantis does not tie browser support to any particular MKE or MSR software
release.
Mirantis strives to leverage the latest in browser technology to build more
performant client software, as well as ensuring that our customers benefit from
the latest browser security updates. To this end, our strategy is to regularly
move our supported browser versions forward, while also lagging behind the
latest releases by approximately one year to give our customers a
sufficient upgrade buffer.
The MKE, MSR, and MCR platform subscription provides software, support, and
certification to enterprise development and IT teams that build and manage
critical apps in production at scale. It provides a trusted platform for all
apps which supply integrated management and security across the app lifecycle,
comprised primarily of Mirantis Kubernetes Engine, Mirantis Secure Registry
(MSR), and Mirantis Container Runtime (MCR).
Detailed here are all currently supported product versions, as well as the
product versions most recently deprecated. It can be assumed that all earlier
product versions are at End of Life (EOL).
Important Definitions
“Major Releases” (X.y.z): Vehicles for delivering major and minor feature
development and enhancements to existing features. They incorporate all
applicable Error corrections made in prior Major Releases, Minor Releases,
and Maintenance Releases.
“Minor Releases” (x.Y.z): Vehicles for delivering minor feature
developments, enhancements to existing features, and defect corrections. They
incorporate all applicable Error corrections made in prior Minor Releases,
and Maintenance Releases.
“Maintenance Releases” (x.y.Z): Vehicles for delivering Error corrections
that are severely affecting a number of customers and cannot wait for the
next major or minor release. They incorporate all applicable defect
corrections made in prior Maintenance Releases.
“End of Life” (EOL): Versions that are no longer supported by Mirantis;
updating to a later version is recommended.
With the intent of improving the customer experience, Mirantis strives to offer
maintenance releases for the Mirantis Secure Registry (MSR) software every
six to eight weeks. Primarily, these maintenance releases will aim to resolve
known issues and issues reported by customers, quash CVEs, and reduce technical
debt. The version of each MSR maintenance release is reflected in the third
digit position of the version number (as an example, for MSR 3.1 the most
current maintenance release is MSR 3.1.11).
In parallel with our maintenance MKE release work, each year Mirantis will
develop and release a new major version of MSR, the Mirantis support lifespan
of which will adhere to our legacy two year standard.
End of Life Date
The End of Life (EOL) date for MSR 3.1 is 2025-SEP-27.
The MSR team will make every effort to hold to the release cadence stated here.
Customers should be aware, though, that development and release cycles can
change, and without advance notice.
A Technology Preview feature provides early access to upcoming product
innovations, allowing customers to experiment with the functionality and
provide feedback.
Technology Preview features may be privately or publicly available and neither
are intended for production use. While Mirantis will provide assistance with
such features through official channels, normal Service Level Agreements do not
apply.
As Mirantis considers making future iterations of Technology Preview features
generally available, we will do our best to resolve any issues that customers
experience when using these features.
During the development of a Technology Preview feature, additional components
may become available to the public for evaluation. Mirantis cannot guarantee
the stability of such features. As a result, if you are using Technology
Preview features, you may not be able to seamlessly upgrade to subsequent
product releases.
Mirantis makes no guarantees that Technology Preview features will graduate to
generally available features.