This documentation describes how to deploy and operate Mirantis Secure
Registry (MSR). It is intended to help operators understand the core concepts
of the product and provides the information needed to deploy and operate the
solution.
The information in this documentation set is continually improved and amended
based on feedback and requests from MSR users.
Mirantis Secure Registry (MSR) is a solution that enables enterprises to store
and manage their container images on-premise or in their virtual private
clouds. Built-in security enables you to verify and trust the provenance
and content of your applications and ensure secure separation of concerns.
Using MSR, you meet security and regulatory compliance requirements.
In addition, the automated operations and integration with CI/CD speed up
application testing and delivery. The most common use cases for MSR include:
Helm charts repositories
Deploying applications to Kubernetes can be complex. Setting up a single
application can involve creating multiple interdependent Kubernetes
resources, such as pods, services, deployments, and replica sets, each of
which requires the manual creation of a detailed YAML manifest file. This
demands significant work and time. With Helm charts (packages that consist
of a few YAML configuration files and some templates that are rendered into
Kubernetes manifest files), you can save time and install the software you
need with all of its dependencies, as well as upgrade and configure it.
Automated development
Easily create an automated workflow where you push a commit that
triggers a build on a CI provider, which pushes a new image into
your registry. Then, the registry fires off a webhook and triggers
deployment on a staging environment, or notifies other systems
that a new image is available.
Secure and vulnerability-free images
When your industry requires applications to comply with certain security
standards for regulatory compliance, your applications are only as secure
as the images from which they run. To ensure that your
images are secure and do not have any vulnerabilities, track your
images using a binary image scanner that detects the components in images
and identifies the associated CVEs. In addition, you can run image
enforcement policies to prevent vulnerable or inappropriate images
from being pulled and deployed from your registry.
The MSR Reference Architecture provides comprehensive technical information on
Mirantis Secure Registry (MSR), including component particulars, infrastructure
specifications, and networking and volumes detail.
Mirantis Secure Registry (MSR) is an enterprise-grade image storage
solution. Installed behind a firewall, either on-premises or on a virtual
private cloud, MSR provides a secure environment where users can store and
manage their images.
The advantages of MSR include the following:
Image and job management
MSR has a web-based user interface used for browsing images and auditing
repository events. With the web UI, you can see which Dockerfile lines
produced an image and, if security scanning is enabled, a list of all of the
software installed in that image and any Common Vulnerabilities and Exposures
(CVEs). You can also audit jobs with the web UI.
MSR can serve as a continuous integration and continuous delivery (CI/CD)
component in the building, shipping, and running of applications.
Availability
MSR is highly available through the use of multiple replicas of all
containers and metadata. As such, MSR continues to operate in the event
of machine failure, allowing time for repair.
Efficiency
MSR can reduce the bandwidth used when pulling images by caching images
closer to users. In addition, MSR can clean up unreferenced manifests and
layers.
Built-in access control
As with Mirantis Kubernetes Engine (MKE), MSR uses role-based access control
(RBAC), which allows you to manage image access, either manually, with LDAP,
or with Active Directory.
Security scanning
A security scanner is built into MSR, which can be used to discover the
versions of the software that is in use in your images. This tool scans each
layer and aggregates the results, offering a complete picture of what is
being shipped as a part of your stack. Most importantly, as the security
scanner is kept up-to-date by tapping into a periodically updated
vulnerability database, it is able to provide unprecedented insight into your
exposure to known security threats.
Image signing
MSR ships with Notary, which allows you to sign and verify images using
Docker Content Trust.
Mirantis Secure Registry (MSR) is a containerized application that runs on a
Mirantis Kubernetes Engine cluster. After deploying MSR, you can use your
Docker CLI client to log in, push, and pull images. For high availability, you
can deploy multiple MSR replicas, one on each MKE worker node.
All MSR replicas run the same set of services, and changes to the configuration
of one replica are automatically propagated to the other replicas.
Installing MSR on a node starts the containers that are detailed in the
following table:
Name
Description
dtr-api-<replica_id>
Executes the MSR business logic, serving the MSR web application and
API.
dtr-garant-<replica_id>
Manages MSR authentication.
dtr-jobrunner-<replica_id>
Runs cleanup jobs in the background.
dtr-nginx-<replica_id>
Receives HTTP and HTTPS requests and proxies those requests to other MSR
components. By default, the container listens to host ports 80 and 443.
dtr-notary-server-<replica_id>
Receives, validates, and serves Content Trust metadata, and is consulted
when pushing to or pulling from MSR with Content Trust enabled.
dtr-notary-signer-<replica_id>
Performs server-side timestamp and snapshot signing for Content Trust
metadata.
dtr-registry-<replica_id>
Implements pull and push functionality for Docker images and manages
the storage of images.
dtr-rethinkdb-<replica_id>
Serves as a database for persisting repository metadata.
dtr-scanningstore-<replica_id>
Stores security scanning data.
Important
Do not use the MSR components in your applications, as they are for internal
MSR use only.
Mirantis Secure Registry can be installed on-premises or in the cloud.
Before installing, be sure that your infrastructure meets the following
requirements.
You can install MSR on-premises or on a cloud provider. Each MSR node must:
Be a worker node managed by MKE (Mirantis Kubernetes Engine)
Have a fixed hostname
Minimum requirements:
16GB of RAM for nodes running MSR
4 vCPUs for nodes running MSR
25GB of free disk space
Recommended production requirements:
32GB of RAM for nodes running MSR
4 vCPUs for nodes running MSR
100GB of free disk space
Note that Windows container images are typically larger than Linux ones
and for this reason, you should consider provisioning more local storage
for Windows nodes and for MSR setups that will store Windows container
images.
When the image scanning feature is in use, Mirantis recommends at least
32 GB of RAM. As developers and teams push images into MSR, the
repository grows over time. As such, you should regularly inspect RAM, CPU, and
disk usage on MSR nodes, and increase resources whenever you observe recurring
resource saturation.
MSR creates the dtr-ol network at the time of installation. This
network allows for communication between MSR components running on different
nodes, for the purpose of MSR data replication.
When installing MSR on a node, make sure the following ports are open on
that node:
Port      Direction   Purpose
80/tcp    in          Web app and API client access to MSR.
443/tcp   in          Web app and API client access to MSR.
You can configure these ports during MSR installation.
MSR uses the following named volumes to persist data:
dtr-ca-<replica_id>
Root key material for the MSR root CA that issues certificates
dtr-notary-<replica_id>
Certificate and keys for the Notary components
dtr-postgres-<replica_id>
Vulnerability scans data
dtr-registry-<replica_id>
Docker images data, if MSR is configured to store images on the local
filesystem
dtr-rethink-<replica_id>
Repository metadata
dtr-nfs-registry-<replica_id>
Docker images data, if MSR is configured to store images on NFS
You can customize the volume driver used for these volumes by creating
the volumes before installing MSR. During installation, MSR checks
which volumes do not exist on the node, and creates them using the
default volume driver.
By default, the data for these volumes can be found at
/var/lib/docker/volumes/<volume-name>/_data.
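For instance, the following sketch pre-creates the registry volume with a
custom driver; it assumes that you fix the replica ID at install time so that
the pre-created volume name matches, and the driver name is a placeholder:
# Pre-create the registry volume with a custom driver before installing MSR
docker volume create --driver <volume-driver> dtr-registry-<replica-id>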
By default, Mirantis Secure Registry stores images on the filesystem of
the node where it is running, but you should configure it to use a
centralized storage backend.
The matches operator evaluates subject fields against a user-provided regular
expression (regex). The regex for matches must follow the specification
in the official Go documentation: Package syntax.
Each of the following policies uses the rule engine:
Targeted to deployment specialists and QA engineers, the MSR Installation Guide
provides the detailed information and procedures you need to install
and configure Mirantis Secure Registry (MSR).
When installing or backing up MSR on a Mirantis Kubernetes Engine (MKE)
cluster, administrators must be able to deploy containers on MKE manager nodes
or MSR nodes. Take the following steps to enable this setting:
Log in to the MKE web UI.
In the left-side navigation panel, navigate to
<user name> > Admin Settings > Orchestration.
Scroll down to Container Scheduling and toggle the slider next
to Allow administrators to deploy containers on MKE managers or
nodes running MSR.
If MSR administrators are unable to deploy on MKE manager nodes or MSR nodes,
the MSR installation or backup will fail with the following error message:
Error response from daemon: {"message":"could not find any nodes on which the container could be created"}
Mirantis Secure Registry (MSR) is a containerized application that runs on a
swarm managed by Mirantis Kubernetes Engine (MKE). It can be installed
on-premises or on a cloud-based infrastructure.
Update Mirantis Container Runtime (MCR) to the latest version. For details,
refer to the section of the
MCR installation guide
that corresponds with your operating system.
MKE and MSR must be installed on different nodes, due to the potential
for resource and port conflicts. Install MSR on worker nodes
that will be managed by MKE. Note also that MSR cannot be installed on a
standalone MCR.
Optional. To run a load balancer that uses HTTP for health probes over port
80 or 443, temporarily reconfigure it to use TCP over a known open
port and enter the load balancer IP address as the value of
--dtr-external-url. Once MSR is installed, you can reconfigure the load
balancer to meet your requirements.
Run the MSR install command on any node that is both connected to the MKE
cluster and running MCR. Running the installation command in interactive TTY
(or -it) mode will prompt you for any required additional
information.
Note
MSR will not be installed on the node where you run the install command.
MSR will be installed on the MKE worker defined by the --ucp-node
flag.
To install a different version of MSR, replace 2.9.21
with the required version of MSR in the provided command.
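For example, the following install sketch uses placeholder values for the
target worker node, the MKE administrator credentials, and the MKE URL; the
exact set of flags your deployment requires may differ:
# Install MSR, targeting the MKE worker node specified by --ucp-node
docker run -it --rm mirantis/dtr:2.9.21 install \
  --ucp-node <mke-worker-node-name> \
  --ucp-username <mke-admin-username> \
  --ucp-url <mke-url> \
  --ucp-insecure-tls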
MSR is deployed with self-signed certificates by default, so MKE might
not be able to successfully pull images from MSR. Use the optional
--dtr-external-url <msr-domain>:<port> flag during installation or
during a reconfiguration to automatically reconfigure MKE to trust MSR.
You can enable browser authentication using client certificates at install
time. This bypasses the MSR login page and hides the logout button, thus
overriding the requirement that you log in with a user name and password.
Verify that MSR is installed by logging in to the MKE web UI and then
navigating to
<user name> > Admin Settings > Mirantis Secure Registry. A
successful installation will display the MSR fully qualified domain name
(FQDN).
Note
MKE modifies /etc/docker/certs.d for each host and
adds the MSR CA certificate. MKE can then pull images from
MSR because MCR for each node in the MKE swarm has been configured to
trust MSR.
Optional. Reconfigure your load balancer back to your desired protocol and
port.
To make MSR highly available, you can add additional replicas to your MSR
cluster. Adding more replicas allows you to load-balance requests across all
replicas, thus enabling MSR to continue working if a replica fails.
For high availability, deploy 3 or 5 MSR replicas. The replica nodes
must be managed by the same MKE cluster.
The <mke-node-name> value following the --ucp-node flag is the
target node on which to install the MSR replica. It is not the MKE
manager URL.
When you join a replica to an MSR cluster, you need to specify the ID of a
replica that is already part of the cluster. You can find an existing
replica ID by navigating to the Shared Resources > Stacks page
in the MKE web UI.
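As an illustrative sketch, a join invocation with placeholder values might
look as follows; adjust the flags for your environment:
# Join a new replica to an existing MSR cluster
docker run -it --rm mirantis/dtr:2.9.21 join \
  --ucp-node <mke-node-name> \
  --existing-replica-id <existing-replica-id> \
  --ucp-username <mke-admin-username> \
  --ucp-url <mke-url> \
  --ucp-insecure-tls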
Verify that all replicas are running:
Log in to the MKE web UI.
Select Shared Resources > Stacks.
All replicas will display.
To install MSR on an offline host, you must first use a separate computer with
an Internet connection to download a single package with all the images and
then copy that package to the host where you will install MSR. Once the package
is on the host and loaded, you can install MSR offline as described in
Install MSR online.
To install MSR offline:
Download the required MSR package:
Note
MSR 2.9.2 is discontinued and thus not available for download.
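As a sketch, assuming the downloaded package is named msr-2.9.x.tar.gz (a
hypothetical file name), copy it to the offline host and load the images into
the local container runtime:
# Load the MSR images from the package on the offline host
docker load < msr-2.9.x.tar.gz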
After you install MSR, download your new MSR license
and apply it using the MSR web UI.
Warning
Users are not authorized to run MSR without a valid license. For more
information, refer to Mirantis Agreements and Terms.
To download your MSR license:
Open an email from Mirantis Support with the subject Welcome to
Mirantis’ CloudCare Portal and follow the instructions for logging in.
If you did not receive the CloudCare Portal email, you likely have not yet
been added as a Designated Contact and should contact your Designated
Administrator.
In the top navigation bar, click Environments.
Click the Cloud Name associated with the license you want to
download.
Scroll down to License Information and click the
License File URL. A new tab opens in your browser.
Click View file to download your license file.
To update your license settings in the MSR web UI:
Log in to your MSR instance as an administrator.
In the left-side navigation panel, click Settings.
On the General tab, click Apply new license. A file
browser dialog displays.
Navigate to where you saved the license key (.lic) file, select it,
and click Open. MSR automatically updates with the new settings.
Each time you run the destroy command, the system will prompt you
for the MKE URL, your MKE credentials, and the name of the replica you want to
destroy.
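A minimal sketch of the destroy invocation; the command prompts interactively
for the values described above:
# Remove an MSR replica; you will be prompted for the MKE URL,
# credentials, and the replica to destroy
docker run -it --rm mirantis/dtr:2.9.21 destroy --ucp-insecure-tls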
The MSR Operations Guide provides the detailed information you
need to store and manage images on-premises or in a virtual private
cloud, to meet security or regulatory compliance requirements.
By default, Mirantis Container Runtime uses TLS when pushing images to and
pulling images from an image registry such as Mirantis Secure Registry (MSR).
If MSR is using the default configuration or was configured to use
self-signed certificates, you must configure your Mirantis Container Runtime
to trust MSR. Otherwise, when you try to log in to, push to, or pull images
from MSR, you will get an error:
The first step to make your Mirantis Container Runtime trust the certificate
authority used by MSR is to get the MSR CA certificate. Then you
configure your operating system to trust that certificate.
In your browser navigate to https://<msr-url>/ca to download the TLS
certificate used by MSR. Open Windows Explorer, right-click the file
you’ve downloaded, and choose Install certificate.
Then, select the following options:
Store location: local machine
Check Place all certificates in the following store
Click Browse, and select Trusted Root Certificate
Authorities
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
Mirantis Secure Registry can be configured with one or more caches.
This allows you to choose the cache from which to pull images, for
faster download times.
If an administrator has set up caches, you can
choose which cache to use when pulling images.
In the MSR web UI, navigate to your Account, and check the
Content Cache options.
Once you save, your images are pulled from the cache instead of the
central MSR.
You can create and distribute access tokens in MSR that grant users access at
specific permission levels.
Access tokens are associated with a particular user account. They take on the
permissions of that account when in use, adjusting automatically to any
permissions changes that are made to the associated user account.
Note
Regular MSR users can create access tokens that adopt their own account
permissions, while administrators can create access tokens that adopt the
account permissions of any account they choose, including the admin account.
Access tokens are of use in building CI/CD pipelines and other integrations, as
you can issue separate tokens for each integration and henceforth deactivate or
delete such tokens at any time. You can also use access tokens to generate a
temporary password for a user who is locked out of their account.
Note
To monitor user login events, enable the auditAuthLogsEnabled parameter
through the /settings API endpoint:
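The following curl sketch assumes the settings endpoint is reachable at
/api/v0/meta/settings and that you authenticate as an administrator; the
exact path and authentication details may differ in your deployment:
# Enable auditing of user login events
curl -u <admin-username>:<access-token> \
  -X POST https://<msr-url>/api/v0/meta/settings \
  -H "Content-Type: application/json" \
  -d '{"auditAuthLogsEnabled": true}'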
Mirantis Secure Registry (MSR) services are exposed using HTTPS by default,
which ensures encrypted communications between clients and your trusted
registry. If you do not pass a PEM-encoded TLS certificate during installation,
MSR generates a self-signed certificate, which can lead to an insecure site
warning whenever you access MSR through a browser. In addition, MSR includes an
HTTP Strict Transport Security (HSTS) header in all API responses, which can
cause your browser not to load the MSR web UI.
You can configure MSR to use your own TLS certificates, so that it is
automatically trusted by your browsers and client tools. You can also enable
user authentication using the client certificates provided by your
organization's Public Key Infrastructure (PKI).
You can upload your own TLS certificates and keys using the MSR web UI, or you
can pass them as CLI options during installation or whenever you reconfigure
your MSR instance.
To replace the server certificates using the MSR web UI:
Log in at https://<msr-url>.
In the left-side navigation panel, navigate to System and scroll
down to Domain & Proxies.
Enter your MSR domain name and upload or copy and paste the certificate
information:
Certificate information
Description
Load balancer/public address
The domain name for accessing MSR.
TLS private key
The server private key.
TLS certificate chain
The server certificate and any intermediate public certificates from
your certificate authority (CA). The certificate must be valid for
the MSR public address and have SANs for all addresses that are used
to reach the MSR replicas, including load balancers.
TLS CA
The root CA public certificate.
Click Save.
At this point, if you have added certificates issued by a globally trusted CA,
any web browser or client tool should trust MSR. If you are using an
internal CA, you must configure the client systems to trust that CA.
To replace the server certificates using the CLI:
Refer to install and
reconfigure for TLS certificate options and usage
information.
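For illustration, a hedged sketch of passing certificate material at
reconfiguration time; the flag names follow the --dtr-cert, --dtr-key, and
--dtr-ca options in the CLI reference, and the file names are placeholders:
# Replace the server certificates during a reconfiguration
docker run -it --rm mirantis/dtr:2.9.21 reconfigure \
  --dtr-cert "$(cat server.crt)" \
  --dtr-key "$(cat server.key)" \
  --dtr-ca "$(cat ca.crt)" \
  --ucp-insecure-tls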
MSR and MKE share users by default, but the applications have distinct web UIs
that each require separate authentication. You can, however, configure MSR
to use single sign-on with MKE.
Note
Once you configure MSR to use single sign-on, you must create an
access token to interact with MSR using the CLI.
Include --dtr-external-url <msr-url> in the MSR install command, where
<msr-url> is the MSR fully qualified domain name (FQDN) or a load
balancer, if one is in use:
When you navigate to the MSR web UI, you are redirected to the MKE login
page for authentication. After authentication, you are directed back to the
MSR web UI.
In the left-side navigation panel, navigate to System.
On the General tab, scroll down to
Domains & Proxies.
In the Load Balancer / Public Address field, enter the MSR FQDN
or load balancer IP address, if one is in use. This is the URL where users
will be redirected once they are logged in.
Click Save.
Scroll down to Single Sign-On and slide the toggle that is next
to Automatically redirect users to MKE for login.
By default, Mirantis Secure Registry (MSR) uses persistent cookies.
Alternatively, you can switch to using session-based authentication cookies
that expire when you close your browser.
To disable persistent cookies:
Log in to the MSR web UI.
In the left-side navigation panel, navigate to System.
On the General tab, scroll down to Browser Cookies.
Slide the toggle to the right next to
Disable persistent cookies.
Verify that persistent cookies are disabled:
Using Chrome
Log in to the MSR web UI using Chrome.
Right-click any page and select Inspect.
In the Developer Tools panel, navigate to
Application > Cookies > https://<msr-external-url>.
Verify that Expires / Max-Age is set to
Session.
Using Firefox
Log in to the MSR web UI using Firefox.
Right-click any page and select Inspect.
In the Developer Tools panel, navigate to
Storage > Cookies > https://<msr-external-url>.
Verify that Expires / Max-Age is set to Session.
By default, MSR automatically records and transmits data to Mirantis
through an encrypted channel for monitoring and analysis purposes. The data
collected provides the Mirantis Customer Success Organization with information
that helps Mirantis to better understand the operational use of MSR by our
customers. It also provides key feedback in the form of product usage
statistics, which assists our product teams in making enhancements to Mirantis
products and services.
Caution
To send MSR telemetry, the container runtime and the jobrunner
container must be able to resolve api.segment.io and create a TCP
(HTTPS) connection on port 443.
To disable telemetry for MSR:
Log in to the MSR web UI as an administrator.
Click System in the left-side navigation panel to open the
System page.
Click the General tab in the details pane.
Scroll down in the details pane to the Analytics section.
By default, MSR uses the local file system of the node where it is running to
store your Docker images. You can, though, configure MSR to use an external
storage backend, for improved performance or high availability.
If your MSR deployment has a single replica, you can continue to use the local
file system to store your Docker images. If, though, your MSR deployment has
multiple replicas, make sure that they are all using the same storage backend
for high availability.
Whenever a user pulls an image, the MSR node serving the request needs to have
access to that image.
To configure the storage backend, log in to the MSR web UI as an
administrator, and in the left-side navigation panel navigate to
System > Storage.
The storage configuration details pane presents the most common configuration
options. You can, however, upload your own configuration file in .yml,
.yaml, or .txt format.
By default, MSR creates a volume named dtr-registry-<replica-id> to store
your images using the local file system. You can customize the name and path of
the volume by using mirantis/dtr install --dtr-storage-volume
or mirantis/dtr reconfigure --dtr-storage-volume.
Important
To deploy MSR with high availability, you
must use a centralized storage backend to ensure that all MSR replicas can
access the same set of images.
To verify the amount of space your images are using in the local file system:
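One way to check, as a sketch that assumes the default volume name and the
local volume driver:
# Report the disk space used by the local registry volume
sudo du -hs $(docker volume inspect --format '{{ .Mountpoint }}' dtr-registry-<replica-id>)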
You can configure MSR to store Docker images in a Network File System (NFS)
directory.
Note
Changing storage backends involves initializing a new metadata store,
rather than reusing an existing volume, which facilitates online garbage
collection.
Use the format nfs://<nfsserver>/<directory> for the NFS storage URL.
To support NFS v4, you can specify additional options when running the
install command with the --nfs-storage-url
option.
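A hedged example of supplying the NFS storage URL at install time, with
placeholder values:
# Install MSR with an NFS storage backend
docker run -it --rm mirantis/dtr:2.9.21 install \
  --nfs-storage-url nfs://<nfsserver>/<directory> \
  --ucp-node <mke-node-name> \
  --ucp-username <mke-admin-username> \
  --ucp-url <mke-url> \
  --ucp-insecure-tls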
Note
When you join replicas to an MSR cluster, the replicas pick up your storage
configuration, and thus it is not necessary to respecify the configuration.
Configure MSR for S3-compatible cloud storage providers
You can configure MSR to store Docker images on Amazon S3 or any other file
servers with an S3-compatible API, such as Cleversafe or Minio.
Amazon S3 and compatible services store files in buckets, and users
have permissions to read, write, and delete files from those buckets.
When you integrate MSR with Amazon S3, MSR sends all read and write
operations to the S3 bucket so that the images are persisted in that location.
Navigate to System > Storage in the left-side navigation panel.
Select the Amazon S3 option in the details pane.
Adjust the S3 Settings.
Toggle the Send data slider to the right to configure MSR to
redirect clients each time a pull operation occurs.
Enter the pertinent information into the provided fields.
Field
Description
AWS Region Name
AWS region that hosts your S3 bucket.
S3 Bucket Name
Name of the S3 bucket in which the images are stored.
Region Endpoint
Endpoint name for the AWS region that hosts your S3 bucket.
Root Directory
Path to the location in the S3 bucket within which
the images are stored.
Access key
AWS access key to use to access the S3 bucket.
Note
If you are using an IAM policy, leave the
AWS access key field empty.
Secret Key
AWS secret key you can use to access the S3 bucket.
Note
If you are using an IAM policy, leave the AWS secret
key field empty.
Click Show advanced settings.
Toggle the Signature version 4 auth slider to the right to
configure MSR to authenticate requests with AWS signature version 4.
Toggle the Use HTTPS slider to the right to
configure MSR to secure all requests using the HTTPS protocol.
Toggle the Skip TLS slider to the right to
configure MSR to encrypt all traffic without verifying the TLS
certificate in use by the storage backend.
If pertinent, in the Root CA Certificate field, enter the
public key certificate of the root certificate
authority that issued the storage backend certificate.
Click Submit to validate the configuration settings and save
the changes.
Whenever you push or pull an image using MSR, the software redirects the
requests to the storage backend.
If MSR is configured for TLS verification and the TLS certificate in use by
your storage backend is not globally trusted, you must configure all
Mirantis Container Runtime instances that push or pull from MSR to trust that
certificate.
If MSR is configured to skip TLS verification, you must also
configure all Mirantis Container Runtime instances that push or pull from
MSR to skip TLS verification. To do this, add MSR to the list of
insecure registries when starting Docker.
To restore MSR using your previously configured S3 settings, use the
restore command with the
--dtr-use-default-storage option to maintain your metadata.
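As a sketch, assuming a metadata backup file named msr-metadata-backup.tar
(a hypothetical name) and placeholder MKE values; note that restore reads the
backup from standard input:
# Restore MSR metadata while keeping the previously configured S3 settings
docker run -i --rm mirantis/dtr:2.9.21 restore \
  --dtr-use-default-storage \
  --ucp-username <mke-admin-username> \
  --ucp-url <mke-url> \
  --ucp-insecure-tls < msr-metadata-backup.tar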
Your metadata store and your storage both persist when you migrate your data to
a new storage backend. Unlike previous versions of the product, MSR 2.9.x
requires the new storage backend to contain an exact copy of the data from the
prior configuration. If you do not meet this requirement, you must reinitialize
the storage using the --reinitialize-storage option with the
dtr reconfigure command, which initializes a new metadata store
and erases your existing tags.
As a best practice, you should always move, back up, and restore MSR storage
backends together with your metadata.
To migrate data to a new storage backend:
Disable garbage collection:
Log in to the MSR web UI.
From the left-side menu, navigate to
System > Garbage Collection.
Select Never to disable garbage collection, thus ensuring that any
blobs referenced in the backup you create will persist.
Important
Garbage collection must remain disabled throughout metadata backup
and the migration of your storage data.
Upon success, you will receive a 202 Accepted response.
Migrate the contents of your current storage backend to the new one.
Reconfigure MSR using the reconfigure command with the
--storage-migrated option to preserve your existing tags. For
reconfigure command usage details, refer
to mirantis/dtr reconfigure.
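A sketch of the final reconfiguration step, with placeholder MKE values:
# Point MSR at the new backend while preserving existing tags
docker run -it --rm mirantis/dtr:2.9.21 reconfigure \
  --storage-migrated \
  --ucp-username <mke-admin-username> \
  --ucp-url <mke-url> \
  --ucp-insecure-tls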
Mirantis Secure Registry is designed to scale horizontally as your usage
increases. You can increase the number of replicas for each of the resources
that MSR deploys.
Note
Additional replicas make MSR more tolerant to failure, but be aware that
performance degradation can result from having too many replicas in your
RethinkDB cluster.
All MSR replicas run the same set of services, and changes to their
configuration are automatically propagated to other replicas.
When sizing your MSR installation for high availability, Mirantis recommends
the following best practices:
Do not scale RethinkDB with only two replicas.
Caution
RethinkDB cannot tolerate a failure with an even number of replicas.
MSR replicas    Failures tolerated
1               0
3               1
5               2
7               3
Address failed replicas quickly, as the number of failures your cluster can
tolerate decreases whenever a replica is offline.
With a load balancer, users can access MSR using a single domain name.
Once you have achieved high availability by joining multiple MSR replica nodes,
you can configure a load balancer to balance user requests across those
replicas. The load balancer detects when a replica fails and immediately stops
forwarding requests to it, thus ensuring that the failure goes unnoticed by
users.
MSR does not provide a load balancing service. You must use either an
on-premises or cloud-based load balancer to balance requests across
multiple MSR replicas.
Important
Additional steps are needed to use the same load balancer with both MSR and
MKE. For more information, refer to Configure a load balancer
in the MKE documentation.
MSR exposes several endpoints that you can use to assess the health of an MSR
replica:
/_ping
Verifies whether the MSR replica is healthy. This is useful for load
balancing and other automated health check tasks. This endpoint is
unauthenticated.
/nginx_status
Returns the number of connections handled by the MSR NGINX front end.
/api/v0/meta/cluster_status
Returns detailed information about all MSR replicas.
You can use the unauthenticated /_ping endpoint on each MSR replica
to check the health status of the replica and determine whether it should
remain in the load balancing pool.
The /_ping endpoint returns a JSON object for the replica being
queried that takes the following form:
{"Error":"<error-message>",
"Healthy":true}
A response of "Healthy":true indicates that the replica is suitable for
taking requests; such a response is accompanied by a 200 HTTP status code.
An unhealthy replica will return 503 as the status code and populate
"Error" with more details on any of the following services:
Storage container (MSR)
Authorization (Garant)
Metadata persistence (RethinkDB)
Content trust (Notary)
Note that the purpose of the /_ping endpoint is to check the health of a
single replica. To obtain the health of every replica in a cluster, you must
individually query each replica.
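For example, a load balancer health check or a quick manual sweep can query
each replica in turn; the replica addresses here are placeholders:
# Query the health endpoint of each MSR replica
for replica in <msr-replica-1> <msr-replica-2> <msr-replica-3>; do
  curl -ks https://$replica/_ping
done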
For MSR to perform security scanning, you must have a running deployment of
Mirantis Secure Registry (MSR), administrator access, and an MSR license that
includes security scanning.
Before you can set up security scanning, you must verify that your Docker ID
can access and download your MSR license from Docker Hub. If you are using a
license that is associated with an organization account, verify that your
Docker ID is a member of the Owners team, as only members of that team can
download license files for an organization. If you are using a license
associated with an individual account, no additional action is needed.
Note
To verify that your MSR license includes security scanning:
Log in to the MSR web UI.
In the left-side navigation panel, click System and navigate
to the Security tab.
If the Enable Scanning toggle displays, the license includes
security scanning.
To learn how to obtain and install your MSR license, refer to
Obtain the license.
In the left-side navigation panel, click System and navigate
to the Security tab.
Slide the Enable Scanning toggle to the right.
Set the security scanning mode by selecting either Online or
Offline.
Online mode:
Online mode downloads the latest vulnerability database from a Docker
server and installs it.
To enable online security scanning, click Sync Database now.
Offline mode:
Offline mode requires that you manually perform the following steps.
Download the most recent CVE database.
Be aware that the example command specifies default values. It
instructs the container to output the database file to the
~/Downloads directory and configures the volume to map from the
local machine into the container. If the destination for the database
is in a separate directory, you must define an additional volume. For
more information, refer to the table that follows this procedure.
MSR security scanning indexes the components in your MSR images and
compares them against a CVE database. This database is routinely updated
with new vulnerability signatures, and thus MSR must be regularly updated with
the latest version to properly scan for all possible vulnerabilities. After
updating the database, MSR matches the components in the new CVE reports to the
indexed components in your images, and generates an updated report.
Note
MSR users with administrator access can learn when the CVE database was last
updated by accessing the Security tab in the MSR
System page.
In online mode, MSR security scanning monitors for updates to
the vulnerability database, and downloads them when available.
To ensure that MSR can access the database updates, verify that the host can
access both https://license.mirantis.com and
https://dss-cve-updates.mirantis.com/ on port 443 using HTTPS.
MSR checks for new CVE database updates every day at 3:00 AM UTC. If an update
is available, it is automatically downloaded and applied, without interrupting
any scans in progress. Once the update is completed, the security scanning
system checks the indexed components for new vulnerabilities.
To set the update mode to online:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate
to the Security tab.
Click Online.
Your choice is saved automatically.
Note
To check immediately for a CVE database update, click
Sync Database now.
When connection to the update server is not possible, you can update the CVE
database for your MSR instance using a .tar file that contains the database
updates.
To set the update mode to offline:
Log in to the MSR web UI as an administrator.
In the left-side navigation panel, click System and navigate
to the Security tab.
Select Offline.
Click Select Database and open the downloaded CVE database file.
MSR installs the new CVE database and begins checking the images that are
already indexed for components that match new or updated vulnerabilities.
The time needed to pull and push images is directly influenced by the distance
between your users and the geographic location of your MSR deployment. This is
because the files must traverse the physical distance and cross multiple
networks. You can, however, deploy MSR caches in different geographic
locations to add efficiency and shorten user wait times.
With MSR caches you can:
Accelerate image pulls for users in a variety of geographical regions.
Manage user permissions from a central location.
MSR caches are transparent to your users, who continue to log in and
pull images using the provided MSR URL address.
When MSR receives a user request, it first authenticates the request and
verifies that the user has permission to pull the requested image. Assuming
the user has permission, they then receive an image manifest that contains the
list of image layers to pull and which directs them to pull the images from a
particular cache.
When your users request image layers from the indicated cache, the cache pulls
these images from MSR and maintains a copy. This enables the cache to serve the
image layers to other users without having to retrieve them again from MSR.
Note
Avoid using caches if your users need to push images faster or if you want
to implement region-based RBAC policies. Instead, deploy multiple MSR
clusters and apply mirroring policies between them. For further details,
refer to Promotion policies and monitoring.
MSR caches running in different geographic locations can provide your users
with greater efficiency and shorten the amount of time required to pull images
from MSR.
Consider a scenario in which you are running an MSR instance that is installed
in the United States, with a user base that includes developers located in the
United States, Asia, and Europe. The US-based developers can pull their images
from MSR quickly, however those working in Asia and Europe have to contend with
unacceptably long wait times to pull the same images. You can address this
issue by deploying MSR caches in Asia and Europe, thus reducing the wait time
for developers located in those areas.
The described MSR cache scenario requires three datacenters:
The MSR on Swarm deployment detailed herein assumes that you have a
running MSR deployment and that you have provisioned multiple
nodes and joined them into a swarm.
You will deploy your MSR cache as a Docker service, thus ensuring that Docker
automatically schedules and restarts the service in the event of a problem.
You manage the cache configuration using a Docker configuration and the TLS
certificates using Docker secrets. This setup enables you to securely manage
the node configurations for the node on which the cache is running.
To target your deployment to the cache node, you must first label that node. To
do this, SSH into a manager node of the swarm within which you want to deploy
the MSR cache.
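For example, assuming the label key dtr.cache is used to mark the cache node
(the key is an illustrative convention, not a requirement):
# Label the node that will run the MSR cache
docker node update --label-add dtr.cache=true <node-hostname>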
Following cache preparation, you will have the following file structure on your
workstation:
├── docker-stack.yml
├── config.yml          # The cache configuration file
└── certs
    ├── cache.cert.pem  # The cache public key certificate
    ├── cache.key.pem   # The cache private key
    └── dtr.cert.pem    # MSR CA certificate
With the configuration detailed herein, the cache fetches image layers
from MSR and keeps a local copy for 24 hours. After that, if a user requests
that image layer, the cache re-fetches it from MSR.
The cache is configured to persist data inside its container. If something goes
wrong with the cache service, Docker automatically redeploys a new container,
but previously cached data is not persisted. You can customize the storage
parameters, if you want to store the image layers using a persistent storage
backend.
Also, the cache is configured to use port 443. If you are already using that
port in the swarm, update the deployment and configuration files to use another
port. Remember to create firewall rules for the port you choose.
You configure the MSR cache using a configuration file that you mount into the
container.
Edit the sample MSR cache configuration file that follows to fit your
environment, entering the relevant external MSR cache, worker node, or external
loadbalancer FQDN. Once configured, the cache fetches image layers from MSR and
maintains a local copy for 24 hours. If a user requests the image layer after
that period, the cache re-fetches it from MSR.
To deploy the MSR cache with a TLS endpoint, you must generate a TLS
certificate and key from a certificate authority.
Be aware that to expose the MSR cache through a node port or a host port, you
must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your
certificate.
Create a directory called certs and place in it the newly created
certificate cache.cert.pem and key cache.key.pem for your MSR cache.
Configure the cert pem files, as detailed below:
pem file
Content to add
cache.cert.pem
Add the public key certificate for the cache. If the certificate
has been signed by an intermediate certificate authority, append its
public key certificate at the end of the file.
cache.key.pem
Add the unencrypted private key for the cache.
dtr.cert.pem
The cache communicates with MSR using TLS. If you have customized MSR
to use TLS certificates issued by a globally trusted certificate
authority, the cache automatically trusts MSR. If, though, you are
using the default MSR configuration, or MSR is using TLS certificates
signed by your own certificate authority, you need to configure the
cache to trust MSR, and edit the daemon.json file to allow for insecure registries.
Add the MSR CA certificate to the certs/dtr.cert.pem file:
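For example, the following sketch downloads the CA bundle from the MSR /ca
endpoint, which is also used elsewhere in this guide:
# Download the MSR CA certificate into the certs directory
curl -ks https://<msr-url>/ca > certs/dtr.cert.pem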
The MSR with Kubernetes deployment detailed herein assumes that you have a
running MSR deployment.
When you establish the MSR cache as a Kubernetes deployment, you ensure that
Kubernetes will automatically schedule and restart the service in the event
of a problem.
You manage the cache configuration with a Kubernetes Config Map and the TLS
certificates with Kubernetes secrets. This setup enables you to securely
manage the configurations of the node on which the cache is running.
To deploy the MSR cache with a TLS endpoint you must generate a TLS
certificate and key from a certificate authority.
The manner in which you expose the MSR cache determines the Subject
Alternative Names (SANs) that are required for the certificate. For example:
To deploy the MSR cache with an ingress object you must use an external MSR
cache address that resolves to your ingress controller as part of your
certificate.
To expose the MSR cache through a Kubernetes Cloud Provider, you must have
the external load balancer address as part of your certificate.
To expose the MSR cache through a Node port or a host port you must use a
Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.
In the certs directory, place the newly created certificate
cache.cert.pem and key cache.key.pem for your MSR cache.
Place the certificate authority in the certs directory, including any
intermediate certificate authorities of the certificate from your MSR
deployment. If your MSR deployment uses cert-manager, use kubectl to
source this from the main MSR deployment.
kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}'
Note
If cert-manager is not in use, you must provide your custom nginx.webtls
certificate.
The MSR cache takes its configuration from a configuration file that you mount
into the container.
You can edit the following MSR cache configuration file for your environment,
entering the relevant external MSR cache, worker node, or external loadbalancer
FQDN. Once you have configured the cache it fetches image layers from MSR and
maintains a local copy for 24 hours. If a user requests the image layer after
that period, the cache fetches it again from MSR.
cat > config.yaml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-dtrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the main MSR deployment
        cas:
          - /certs/msr.cert.pem
EOF
By default, the cache stores image data inside its container. Thus, if
something goes wrong with the cache service and Kubernetes deploys a new Pod,
cached data is not persisted. The data is not lost, however, as it
persists in the primary MSR.
Note
Kubernetes persistent volumes or persistent volume claims must be in use to
provide persistent backend storage capabilities for the cache.
To create the Kubernetes resources, you must have the kubectl
command line tool configured to communicate with your Kubernetes cluster,
through either a Kubernetes configuration file or an MKE client bundle.
To provide external access to your MSR cache you must expose the cache Pods.
Important
Expose your MSR cache through only one external interface.
To ensure TLS certificate validity, you must expose the cache through the
same interface for which you previously created a certificate.
Kubernetes supports several methods for exposing a service, based on
your infrastructure and your environment. Detail is offered below for the
NodePort method and the Ingress Controllers method.
Run the following command to determine the port on which you have exposed
the MSR cache:
kubectl -n dtr get services
Test the external reachability of your MSR cache. To do this, use curl
to hit the API endpoint, using both the external address of a worker node
and the NodePort:
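As an illustrative check, issue an unauthenticated request against the
registry API root; depending on the configuration, the response may be an
empty body or an authentication challenge, either of which confirms TLS
reachability:
# Test reachability of the cache through the NodePort
curl -ks https://<worker-node-fqdn>:<node-port>/v2/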
In the ingress controller exposure scheme, you expose the MSR cache through an
ingress object.
Create a DNS rule in your environment that resolves an MSR cache external FQDN
address to the address of your ingress controller. In addition, the same MSR
cache external FQDN must be included in the MSR cache certificate.
cat > dtrcacheingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dtr-cache
  namespace: dtr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR cache address
      http:
        paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: dtr-cache
                port:
                  number: 443
EOF
kubectl create -f dtrcacheingress.yaml
Test the external reachability of your MSR cache. To do this, use curl
to hit the API endpoint. The address should be the one you have previously
defined in the service definition file.
You will require the following to deploy MSR caches with high availability:
Multiple nodes, one for each cache replica
A load balancer
Shared storage system that has read-after-write consistency
For high availability, Mirantis recommends that you configure the replicas to
store data using a shared storage system. MSR cache deployment is otherwise
the same, regardless of whether you deploy a single replica or multiple
replicas.
When using a shared storage system, once an image layer is cached, any replica
is able to serve it to users without having to fetch a new copy from MSR.
MSR caches support the following storage systems:
Alibaba Cloud Object Storage Service
Amazon S3
Azure Blob Storage
Google Cloud Storage
NFS
Openstack Swift
Note
If you are using NFS as a shared storage system, ensure read-after-write
consistency by verifying that the shared directory is configured with:
/dtr-cache*(rw,root_squash,no_wdelay)
In addition, mount the NFS directory on each node where you will deploy
an MSR cache replica.
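As a sketch, assuming the /dtr-cache export from the example above and a
hypothetical local mount point:
# Mount the shared NFS directory on a cache node
sudo mount -t nfs <nfsserver>:/dtr-cache /mnt/dtr-cache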
To configure caches for high availability:
Use SSH to log in to a manager node of the cluster on which you want to
deploy the MSR cache. If you are using MKE to manage that cluster, you can
also use a client bundle to configure your Docker CLI client to connect to
the cluster.
Label each node that is going to run a cache replica, as shown in the example following this procedure:
Create the cache configuration files by following the instructions for
deploying a single cache replica. Be sure to adapt the storage object,
using the configuration options for the shared storage of your choice.
Deploy a load balancer of your choice to balance requests across your
set of replicas.
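The labeling referenced above might look as follows for a three-replica
cache; the dtr.cache key is an illustrative convention:
# Label each node that will run a cache replica
docker node update --label-add dtr.cache=true <node-hostname-1>
docker node update --label-add dtr.cache=true <node-hostname-2>
docker node update --label-add dtr.cache=true <node-hostname-3>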
MSR caches are based on Docker Registry, and use the same configuration file
format. The MSR cache extends the Docker Registry configuration file format,
though, introducing a new middleware called downstream with three
configuration options: blobttl, upstreams, and cas:
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
        cas:
          - <Absolute path to next-hop upstream registry or content cache CA certificate in the container's filesystem>
The following table offers detail specific to MSR caches for each parameter:
Parameter
Required
Description
blobttl
no
The TTL (Time to Live) value for blobs in the cache, offered as a
positive integer and suffix denoting a unit of time.
Valid values:
ns (nanoseconds)
us (microseconds)
ms (milliseconds)
s (seconds)
m (minutes)
h (hours)
Note
If the suffix is omitted, the system interprets the value as
nanoseconds.
If blobttl is configured, storage.delete.enabled must be set to
true.
cas
no
An optional list of absolute paths to PEM-encoded CA certificates of
upstream registries or content caches.
upstreams
yes
A list of externally reachable addresses for upstream registries or
content caches. If you specify more than one host, the cache pulls from
the registries in round-robin fashion.
Mirantis Secure Registry (MSR) supports garbage collection, the automatic
cleanup of unused image layers. You can configure garbage collection to occur
at regularly scheduled times, as well as set a specific duration for the
process.
Garbage collection first identifies and marks unused image layers, then
subsequently deletes the layers that have been marked.
In conducting garbage collection, MSR performs the following actions in
sequence:
Establishes a cutoff time.
Marks each referenced manifest file with a timestamp. When manifest files
are pushed to MSR, they are also marked with a timestamp.
Sweeps each manifest file that does not have a timestamp after the cutoff
time.
Deletes the file if it is never referenced, meaning that no image tag uses
it.
Repeats the process for blob links and blob descriptors.
Each image stored in MSR is comprised of the following files:
The image filesystem, which consists of a list of unioned image layers.
A configuration file, which contains the architecture of the image along with
other metadata.
A manifest file, which contains a list of all the image layers and the
configuration file for the image.
MSR tracks these files in its metadata store, RethinkDB, in a
content-addressable manner in which each file corresponds to a cryptographic
hash of the file content. Because hash collisions are nearly impossible, if
two image tags hold exactly the same content, MSR stores that content only
once, even when the tag names differ. For example, if wordpress:4.8
and wordpress:latest have the same content, MSR will only store that
content once. If you delete one of these tags, the other will remain intact.
As a result, when you delete an image tag, MSR cannot delete the
underlying files as it is possible that other tags also use the same
underlying files.
By default, MSR only allows users to push images to repositories that already
exist, and for which the user has write privileges. Alternatively, you can
configure MSR to create a new private repository when an image is pushed.
To create a new repository when pushing an image:
Log in to the MSR web UI.
In the left-side navigation panel, click Settings and scroll
down to Repositories.
Slide the Create repository on push toggle to the right.
Mirantis Secure Registry (MSR) makes outgoing connections to check for new
versions, automatically renew its license, and update its vulnerability
database. If MSR cannot access the Internet, you must manually apply
any updates.
One option to keep your environment secure while still allowing MSR
access to the Internet is to use a web proxy. If you have an HTTP or
HTTPS proxy, you can configure MSR to use it. To avoid downtime, you
should do this configuration outside business peak hours.
To configure MSR for web proxy use:
Log in as an administrator to a node where MSR is deployed.
In addition to storing individual and multi-architecture container images and
plugins, MSR supports the storage of applications as their own
distinguishable type.
Applications include the following two tags:
Invocation
Tag: <app-tag>-invoc
Type: Container image represented by OS and architecture, for example, linux amd64.
Under the hood: Uses Mirantis Container Runtime. The Docker daemon is
responsible for building and pushing the image. Includes scan results for the
invocation image.
Application with bundled components
Tag: <app-tag>
Type: Application
Under the hood: Uses the application client to build and push the image.
Includes scan results for the bundled components. Docker App is an
experimental Docker CLI feature.
Use docker app push to push your applications to MSR. For more
information, refer to Docker App
in the official Docker documentation.
While it is possible to enable the just-in-time creation of multi-architecture
image repositories when creating a repository using the API, Mirantis does not
recommend using this option, as it will cause Docker Content Trust to fail
along with other issues. To manage Docker image manifests and manifest
lists, instead use the experimental command docker manifest.
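For illustration, a minimal docker manifest workflow with placeholder image
references, assuming per-architecture tags already exist in the repository:
# Create a manifest list from existing per-architecture tags, then push it
docker manifest create <msr-domain>/<namespace>/<repository>:latest \
  <msr-domain>/<namespace>/<repository>:amd64 \
  <msr-domain>/<namespace>/<repository>:arm64
docker manifest push <msr-domain>/<namespace>/<repository>:latest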
The MSR web UI has an Info page for each repository that
includes the following sections:
A README file, which is editable by admin users.
The docker pull command for pulling the images contained in the
given repository. To learn more about pulling images, refer to
Pull and push images.
The permissions associated with the user who is currently logged in.
To view the Info section:
Log in to the MSR web UI.
In the left-side navigation panel, click Repositories.
Select the required repository by clicking the repository
name rather than the namespace name that precedes the /.
The Info tab displays by default.
To view the repository events that your permissions level has access to,
hover over the question mark next to the permissions level that displays under
Your permission.
Note
Your permissions list may include repository events that are not displayed
in the Activity tab. Also, it is not an exhaustive list of the
event types that are displayed in your activity stream. To learn more about
repository events, refer to Audit repository events.
The base layers of the Microsoft Windows base images have redistribution
restrictions. When you push a Windows image to MSR, Docker only pushes the
image manifest and the layers that are above the Windows base layers. As a
result:
When a user pulls a Windows image from MSR, the Windows base layers
are automatically fetched from Microsoft.
Because MSR does not have access to the image base layers, it cannot scan
those image layers for vulnerabilities. The Windows base layers are,
however, scanned by Docker Hub.
On air-gapped or similarly limited systems, you can configure Docker to push
Windows base layers to MSR by adding the following line to
C:\ProgramData\docker\config\daemon.json:
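The setting in question is Docker's allow-nondistributable-artifacts option;
a sketch with a placeholder registry address:
{
  "allow-nondistributable-artifacts": ["<msr-domain>:<port>"]
}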
If your MSR instance uses image signing, you will need to remove any trust
data on the image before you can delete it. For more information, refer to
Delete signed images.
To delete an image:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Click the relevant repository and navigate to the Tags tab.
Select the check box next to the tags that you want to delete.
Click Delete.
Alternatively, you can delete every tag for a particular image by deleting the
relevant repository.
To delete a repository:
Click the required repository and navigate to the Settings
tab.
Scroll down to Delete repository and click
Delete.
Mirantis Secure Registry (MSR) has the ability to scan images for security
vulnerabilities contained in the US National Vulnerability Database. Security
scan results are reported for each image tag contained in a repository.
Security scanning is available as an add-on to MSR. If security scan results
are not available on your repositories, your organization may not have
purchased the security scanning feature or it may be disabled. Administrator
permissions are required to enable security scanning on your MSR instance.
Important
While scanning images for security vulnerabilities, MSR temporarily
extracts the contents of your images to disk. If malware is contained in
these images, external scanners may wrongly attribute that malware
to MSR. The key indication of this is the detection of malware in the
dtr-jobrunner container in /tmp/findlib-workdir-*.
To prevent any recurrence of the issue, Mirantis recommends configuring
the run-time scanner to exclude files found in the MSR dtr-jobrunner
containers in /tmp, or more specifically, if wildcards can be used,
in /tmp/findlib-workdir-*.
The scanner first performs a binary scan on each layer of the image,
identifies the software components in each layer, and indexes the SHA of
each component in a bill-of-materials. A binary scan evaluates the
components on a bit-by-bit level, so vulnerable components are
discovered even if they are statically linked or use a different name.
The scan then compares the SHA of each component against the US National
Vulnerability Database that is installed on your MSR instance. When this
database is updated, MSR verifies whether the indexed components have newly
discovered vulnerabilities.
MSR can scan both Linux and Windows images. However, because
Docker defaults to not pushing foreign image layers for Windows images,
MSR does not scan those layers. If you want MSR to scan your Windows images,
configure Docker to always push image layers, and MSR will scan the
non-foreign layers.
A summary of the results displays next to each scanned tag on the repository
Tags tab, and presents in one of the following ways:
If the scan did not find any vulnerabilities, the word Clean
displays in green.
If the scan found vulnerabilities, the severity level, Critical,
Major, or Minor, displays in red or orange with the
number of vulnerabilities. If the scan could not detect the version of
a component, the vulnerabilities are reported for all versions of the
component.
To view the full scanning report, click View details for the
required image tag.
The top of the resulting page includes metadata about the image including
the SHA, image size, last push date, user who initiated the push, security scan
summary, and the security scan progress.
The scan results for each image include two different modes so you can
quickly view details about the image, its components, and any
vulnerabilities found:
The Layers view lists the layers of the image in the order that
they are built by the Dockerfile.
This view can help you identify which command in the build
introduced the vulnerabilities, and which components are associated
with that command. Click a layer to see a summary of its
components. You can then click on a component to switch to the
Component view and obtain more details about the specific item.
Note
The layers view can be long, so be sure to scroll down if
you do not immediately see the reported vulnerabilities.
The Components view lists the individual component libraries
indexed by the scanning system in order of severity and number of
vulnerabilities found, with the most vulnerable library listed first.
Click an individual component to view details on the vulnerability it
introduces, including a short summary and a link to the official CVE database
report. A single component can have multiple vulnerabilities, and the scan
report provides details on each one. In addition, the component details
include the license type used by the component, the file path to the
component in the image, and the number of layers that contain the component.
Note
The CVE count presented in the scan summary of an image with multiple layers
may differ from the count obtained through summation of the CVEs for each
individual image component. This is because the scan summary performs a
summation of the CVEs in every layer of the image, and a component may be
present in more than one layer of an image.
If you find that an image in your registry contains vulnerable
components, you can use the linked CVE scan information in each scan
report to evaluate the vulnerability and decide what to do.
If you discover vulnerable components, you should verify whether there is an
updated version available where the security vulnerability has been
addressed. If necessary, you can contact the component maintainers to
ensure that the vulnerability is being addressed in a future version or
a patch update.
If the vulnerability is in a base layer, such as an operating
system, you might not be able to correct the issue in the image. In this
case, you can switch to a different version of the base layer, or you
can find a less vulnerable equivalent.
You can address vulnerabilities in your repositories by updating the images to
use updated and corrected versions of vulnerable components or by using
a different component that offers the same functionality. When you have
updated the source code, run a build to create a new image, tag the
image, and push the updated image to your MSR instance. You can then
re-scan the image to confirm that you have addressed the
vulnerabilities.
MSR security scanning sometimes reports image vulnerabilities that you know
have already been fixed. In such cases, it is possible to hide the
vulnerability warning.
To override a vulnerability:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
To review the vulnerabilities associated with each component in the image,
click the Components tab.
Select the component with the vulnerability you want to ignore,
navigate to the vulnerability, and click Hide.
Once dismissed, the vulnerability is hidden system-wide and will no
longer be reported as a vulnerability on affected images with the
same layer IDs or digests. In addition, MSR will not re-evaluate the
promotion policies that have been set up for the repository.
After hiding a particular vulnerability, you can re-evaluate the promotion
policy for the affected image.
To re-evaluate the promotion policy for the affected image:
Log in to the MSR web UI.
In the left-side navigation panel, select Repositories.
Navigate to the required repository and click View details.
You can send a scanner report directly to Mirantis Customer Support to aid
the support team in their troubleshooting efforts.
To send a scanner report directly to Mirantis Customer Support:
Log in to the MSR web UI.
Navigate to View Details and click Components.
Click Show layers affected for the layer you want to
report.
Click Report Issue. A pop-up window displays with the
fields detailed in the following table:
Field
Description
Component
Automatically filled out and not editable. If the information is
incorrect, make a note in the Additional info field.
Reported version or date
Automatically filled out and not editable. If the information is
incorrect, make a note in the Additional info field.
Report layer
Indicate the image or image layer. Options include:
Omit layer, Include layer, Include
image.
False Positive(s)
Optional. Select from the drop-down menu all CVEs you suspect are
false positives. Toggle the False Positive(s) control to
edit the field.
Missing Issue(s)
Optional. List CVEs you suspect are missing from the report. Enter
CVEs in the format CVE-yyyy-#### or CVE-yyyy-##### and
separate each CVE with a comma. Toggle the Missing
Issue(s) control to edit the field.
Incorrect Component Version
Optional. Enter any incorrect component version information in the
Missing Issue(s) field. Toggle the
Incorrect Component Version control to edit the field.
Additional info
Optional. Indicate anything else that does not pertain to other
fields. Toggle the Additional info control to edit this
field.
Fill out the fields in the pop-up window and click Submit.
MSR generates a JSON-formatted scanner report, which it bundles into a file
together with the scan data. This file downloads to your local drive, at which
point you can share it as needed with Mirantis Customer Support.
Important
To submit a scanner report along with the associated image, bundle the items
into a .tgz file and include that file in a Mirantis Customer
Support ticket.
By default, users can push the same tag multiple times to a repository,
thus overwriting older versions of the tag. This can, however, lead to
problems if a user pushes an image with the same tag name but different
functionality. Also, when images are overwritten, it can be difficult to
determine which build originally generated the image.
To prevent tags from being overwritten, you can configure a repository
to be immutable. Once configured, MSR will not allow another image with the
same tag to be pushed to the repository.
Note
Enabling tag immutability disables repository tag limits.
Two key components of the Mirantis Secure Registry are the Notary Server and
the Notary Signer. These two containers provide the required components for
using Docker Content Trust (DCT) out of the box. Docker Content Trust allows
you to sign image tags, therefore giving consumers a way to verify the
integrity of your image.
Note
If the MSR certificate authority (CA) is self-signed, you must take steps to
make the machine running the docker trust command trust the MSR
CA. You can do this by creating a folder named after the MSR
hostname under $HOME/.docker/tls/ and placing the MSR CA file in that
folder. For example:
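A minimal sketch, assuming the MSR hostname msr.example.com and that the
MSR CA has already been downloaded locally as ca.crt:
$ mkdir -p $HOME/.docker/tls/msr.example.com/
$ cp ca.crt $HOME/.docker/tls/msr.example.com/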
As part of MSR, both the Notary and the Registry servers are accessed
through a front-end proxy, with both components sharing MKE's
role-based access control (RBAC) engine. Therefore, you do not need
additional Docker client configuration in order to use DCT.
DCT is integrated with the Docker CLI through the docker trust command,
which allows you to sign image tags.
MKE has a feature that prevents untrusted images from being deployed
on the cluster. To use the feature, you need to sign and push images
to your MSR. To tie the signed images back to MKE, you need
to sign the images with the private keys of the MKE users.
From an MKE client bundle, use key.pem as your private key,
and cert.pem as your public key on an x509 certificate.
To sign images in a way that MKE can trust, you need to:
Download a client bundle for the user account you want to use for
signing the images.
Add the user’s private key to your machine’s trust store.
Initialize trust metadata for the repository.
Delegate signing for that repository to the MKE user.
Sign the image.
The following example shows the nginx image getting pulled from
Docker Hub, tagged as msr.example.com/dev/nginx:1, pushed to MSR,
and signed in a way that is trusted by MKE.
After downloading and extracting an MKE client bundle into your local
directory, you need to load the private key into the local Docker trust
store (~/.docker/trust). To illustrate the process, we will use
jeff as an example user.
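For example, a minimal sketch assuming the client bundle's key.pem is in
the current directory:
$ docker trust key load --name jeff key.pem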
Initialize the trust metadata and add the user's public certificate
Next, initiate trust metadata for an MSR repository. If you have not
already done so, navigate to the MSR web UI, and create a repository
for your image. This example uses the nginx repository in the
dev namespace.
As part of initiating the repository, the public key of the MKE user
needs to be added to the Notary server as a signer for the repository.
You will be asked for a number of passphrases to protect the keys. Make a
note of these passphrases.
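A sketch of this step, assuming the cert.pem from jeff's client bundle is
in the current directory and the dev/nginx repository created above:
$ docker trust signer add --key cert.pem jeff msr.example.com/dev/nginx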
Finally, user jeff can sign an image tag. The following steps
include downloading the image from Hub, tagging the image for Jeff’s MSR
repository, pushing the image to Jeff’s MSR, as well as signing the tag
with Jeff’s keys.
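A sketch of those steps for the dev/nginx example:
$ docker pull nginx:latest
$ docker tag nginx:latest msr.example.com/dev/nginx:1
$ docker trust sign msr.example.com/dev/nginx:1
The docker trust sign command pushes the tag and its trust metadata to MSR
in a single step.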
You have the option to sign an image using multiple MKE users’ keys. For
example, an image needs to be signed by a member of the Security
team and a member of the Developers team. Let’s assume jeff is a
member of the Developers team. In this case, we only need to add a
member of the Security team.
To do so, first add the private key of the Security team member to the
local Docker trust store.
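For example, assuming the Security team member's client bundle has been
extracted into the current directory:
$ docker trust key load --name ian key.pem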
Upload the user's public key to the Notary Server and sign the image.
You will be asked for the passphrase of jeff, the developer, as well as
the passphrase of the ian user, to sign the tag.
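A sketch of these steps for the dev/nginx example:
$ docker trust signer add --key cert.pem ian msr.example.com/dev/nginx
$ docker trust sign msr.example.com/dev/nginx:1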
If an administrator wants to delete an MSR repository that contains trust
metadata, they will be prompted to delete the trust metadata first
before removing the repository.
To delete trust metadata, you need to use the Notary CLI.
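For example, a sketch using the notary delete command, assuming the Notary
CLI has been configured to point at MSR's Notary server:
$ notary delete msr.example.com/dev/nginx --remote
The --remote flag removes the trust data on the server in addition to the
local cache.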
Using Docker Content Trust with a Remote MKE Cluster
For more advanced deployments, you may want to share one Mirantis Secure
Registry across multiple Mirantis Kubernetes Engines. However, customers
wanting to adopt this model alongside the Only Run Signed Images
MKE feature run into problems, as each MKE operates an independent
set of users.
Docker Content Trust (DCT) gets around this problem, since users from a
remote MKE are able to sign images in the central MSR and still apply
runtime enforcement.
In the following example, we will connect MSR managed by MKE cluster 1
with a remote MKE cluster which we are calling MKE cluster 2, sign the
image with a user from MKE cluster 2, and provide runtime enforcement
within MKE cluster 2. This process could be repeated over and over,
integrating MSR with multiple remote MKE clusters, signing the image
with users from each environment, and then providing runtime enforcement
in each remote MKE cluster separately.
Note
Before attempting this guide, familiarize yourself with Docker
Content Trust and Only Run Signed Images on a single MKE.
Many of the concepts within this guide may be new without
that background.
Cluster 1, running UCP 3.0.x or higher, with DTR 2.5.x or higher
deployed within the cluster.
Cluster 2, running UCP 3.0.x or higher, with no MSR node.
Nodes on Cluster 2 need to trust the Certificate Authority which
signed MSR's TLS Certificate. This can be tested by logging on to a
cluster 2 virtual machine and running:
$ curl https://msr.example.com
The MSR TLS Certificate needs to be properly configured, ensuring that
the Loadbalancer/Public Address field has been configured, with
this address included within the certificate.
A machine with the Docker Client (CE 17.12 / EE 1803 or newer)
installed, as this contains the relevant
docker trust commands.
Registering MSR with a remote Mirantis Kubernetes Engine
As there is no registry running within cluster 2, by default MKE will
not know where to check for trust data. Therefore, the first thing we
need to do is register MSR within the remote MKE in cluster 2. When you
normally install MSR, this registration process happens by default to a
local MKE, or cluster 1.
Note
The registration process allows the remote MKE to get signature data
from MSR; however, this does not provide Single Sign On (SSO). Users
on cluster 2 will not be synced with cluster 1's MKE or MSR.
Therefore, when pulling images, registry authentication will still
need to be passed as part of the service definition if the repository
is private. See the Kubernetes
example.
To add a new registry, retrieve the Certificate Authority (CA) used to
sign the MSR TLS Certificate through the MSR URL’s /ca endpoint.
$ curl -ks https://msr.example.com/ca > dtr.crt
Next, convert the MSR certificate into a JSON configuration file for
registration within the MKE for cluster 2.
You can find a template of the dtr-bundle.json below. Replace the
host address with your MSR URL, and enter the contents of the MSR CA
certificate between the \n newline escapes.
Note
JSON Formatting
Ensure there are no line breaks between each line of the MSR CA
certificate within the JSON file. Use your favorite JSON formatter
for validation.
$ cat dtr-bundle.json
{
  "hostAddress": "msr.example.com",
  "caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
Now upload the configuration file to cluster 2’s MKE through the MKE API
endpoint, /api/config/trustedregistry_. To authenticate against the
API of cluster 2’s MKE, we have downloaded an MKE client bundle,
extracted it in the current directory, and will reference the keys for
authentication.
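A sketch of that request, assuming the client bundle keys (ca.pem,
cert.pem, key.pem) are in the current directory and cluster 2's MKE is
reachable at mke.cluster2.example.com:
$ curl --cacert ca.pem --cert cert.pem --key key.pem \
    -X POST -H "Content-Type: application/json" \
    -d @dtr-bundle.json \
    https://mke.cluster2.example.com/api/config/trustedregistry_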
Navigate to the MKE web interface to verify that the JSON file was
imported successfully, as the MKE endpoint will not output anything.
Select Admin > Admin Settings > Mirantis Secure Registry. If the
registry has been added successfully, you should see the MSR listed.
Additionally, you can check the full MKE configuration
file within cluster 2's MKE. Once downloaded, the
ucp-config.toml file should now contain a section called [registries].
We will now sign an image and push it to MSR. To sign images, we need a
user's public/private key pair from cluster 2. It can be found in a
client bundle, with key.pem being the private key and cert.pem
being the public key on an X.509 certificate.
First, load the private key into the local Docker trust store
(~/.docker/trust). The name used here is purely metadata to help
keep track of which keys you have imported.
$ docker trust key load --name cluster2admin key.pem
Loading key from "key.pem"...
Enter passphrase for new cluster2admin key with ID a453196:
Repeat passphrase for new cluster2admin key with ID a453196:
Successfully imported key from key.pem
Next, initiate the repository, and add the public key of cluster 2's user
as a signer. You will be asked for a number of passphrases to protect
the keys. Keep note of these passphrases, and see the Docker Content Trust
documentation to learn more about managing keys.
$ docker trust signer add --key cert.pem cluster2admin msr.example.com/admin/trustdemo
Adding signer "cluster2admin" to msr.example.com/admin/trustdemo...
Initializing signed repository for msr.example.com/admin/trustdemo...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID dd4460f:
Repeat passphrase for new repository key with ID dd4460f:
Successfully initialized "msr.example.com/admin/trustdemo"
Successfully added signer: cluster2admin to msr.example.com/admin/trustdemo
Finally, sign the image tag. This pushes the image up to MSR, as well as
signs the tag with the keys of the user from cluster 2.
$ docker trust sign msr.example.com/admin/trustdemo:1
Signing and pushing trust data for local image msr.example.com/admin/trustdemo:1, may overwrite remote trust data
The push refers to repository [dtr.olly.dtcntr.net/admin/trustdemo]
27c0b07c1b33: Layer already exists
aa84c03b5202: Layer already exists
5f6acae4a5eb: Layer already exists
df64d3292fd6: Layer already exists
1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153
Signing and pushing trust metadata
Enter passphrase for cluster2admin key with ID a453196:
Successfully signed msr.example.com/admin/trustdemo:1
Within the MSR web interface, you should now be able to see your newly
pushed tag with the Signed text next to the size.
You can sign this image multiple times if required, whether by
multiple teams from the same cluster wanting to sign the image, or by
integrating MSR with more remote MKEs so that users from clusters 1, 2, 3, or
more can all sign the same image.
We can now enable Only Run Signed Images on the remote MKE. To do
this, log in to cluster 2's MKE web interface as an admin.
Select Admin > Admin Settings > Docker Content Trust.
Finally, we can deploy a workload on cluster 2, using a signed image
from the MSR running on cluster 1. This workload could be a simple
docker run, a Swarm service, or a Kubernetes workload. As a simple
test, source a client bundle, and try running one of your signed images.
$ source env.sh
$ docker service create msr.example.com/admin/trustdemo:1
nqsph0n6lv9uzod4lapx0gwok
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nqsph0n6lv9u laughing_lamarr replicated 1/1 msr.example.com/admin/trustdemo:1
If the image is stored in a private repository within MSR, you need to
pass credentials to the Orchestrator as there is no SSO between cluster
2 and MSR. See the relevant
Kubernetes
documentation for more details.
Example errors
Image or trust data does not exist
This error means that the image was signed correctly; however, the user who
signed the image does not meet the signing policy in cluster 2. This
may be because you signed the image with the wrong user keys.
Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs.
Jobs are added to a cluster-wide job queue, and then consumed and
executed by a job runner within MSR.
All MSR replicas have access to the job queue, and have a job runner
component that can get and execute work.
When a job is created, it is added to a cluster-wide job queue and
enters the waiting state. When one of the MSR replicas is ready to
claim the job, it waits a random time of up to 3 seconds to give
every replica the opportunity to claim the task.
A replica claims a job by adding its replica ID to the job. That way,
other replicas will know the job has been claimed. Once a replica claims
a job, it adds that job to an internal queue, which in turn sorts the
jobs by their scheduledAt time. Once that happens, the replica
updates the job status to running, and starts executing it.
The job runner component of each MSR replica keeps a
heartbeatExpiration entry on the database that is shared by all
replicas. If a replica becomes unhealthy, other replicas notice the
change and update the status of the failing worker to dead. Also,
all the jobs that were claimed by the unhealthy replica enter the
worker_dead state, so that other replicas can claim the job.
gc
A garbage collection job that deletes layers associated with deleted
images.
onlinegc
A garbage collection job that deletes layers associated with deleted
images without putting the registry in read-only mode.
onlinegc_metadata
A garbage collection job that deletes metadata associated with deleted
images.
onlinegc_joblogs
A garbage collection job that deletes job logs based on a configured job
history setting.
metadatastoremigration
A necessary migration that enables the onlinegc feature.
sleep
Used for testing the correctness of the jobrunner. It sleeps for 60
seconds.
false
Used for testing the correctness of the jobrunner. It runs the false
command and immediately fails.
tagmigration
Used for synchronizing tag and manifest information between the MSR
database and the storage backend.
bloblinkmigration
A DTR 2.1 to 2.2 upgrade process that adds references for blobs to
repositories in the database.
license_update
Checks for license expiration extensions if online license updates are
enabled.
scan_check
An image security scanning job. This job does not perform the actual
scanning, rather it spawns scan_check_single jobs (one for each layer
in the image). Once all of the scan_check_single jobs are complete,
this job will terminate.
scan_check_single
A security scanning job for a particular layer, given by the parameter
SHA256SUM. This job breaks up the layer into components and checks each
component for vulnerabilities.
scan_check_all
A security scanning job that updates all of the currently scanned images
to display the latest vulnerabilities.
update_vuln_db
A job that is created to update MSR’s vulnerability database. It uses an
Internet connection to check for database updates through
https://dss-cve-updates.docker.com/ and updates the
dtr-scanningstore container if there is a new update available.
scannedlayermigration
A DTR 2.4 to 2.5 upgrade process that restructures scanned image data.
push_mirror_tag
A job that pushes a tag to another registry after a push mirror policy
has been evaluated.
poll_mirror
A global cron that evaluates poll mirroring policies.
webhook
A job that is used to dispatch a webhook payload to a single endpoint.
nautilus_update_db
The old name for the update_vuln_db job. This name may still appear in old
log files.
ro_registry
A user-initiated job for manually switching MSR into read-only mode.
tag_pruning
A job for cleaning up unnecessary or unwanted repository tags which can
be configured by repository admins.
As of DTR 2.2, admins were able to view and audit jobs within the software
using the API. MSR 2.6 enhances those capabilities by adding a Job
Logs tab under System settings on the user interface. The tab
displays a sortable and paginated list of jobs along with links to associated
job logs.
To view the list of jobs within MSR, do the following:
Navigate to https://<msr-url> and log in with your MKE
credentials.
Select System from the left-side navigation panel, and then
click Job Logs. You should see a paginated list of past,
running, and queued jobs. By default, Job Logs shows the latest
10 jobs on the first page.
Specify a filtering option. Job Logs lets you filter by:
Action
Worker ID (the ID of the worker in an MSR replica that is
responsible for running the job)
Optional: Click Edit Settings on the right of the filtering
options to update your Job Logs settings.
To view the log details for a specific job, do the following:
Click View Logs next to the job’s Last Updated
value. You will be redirected to the log detail page of your selected job.
Notice how the job ID is reflected in the URL while the
Action and the abbreviated form of the job ID are reflected
in the heading. Also, the JSON lines displayed are job-specific MSR
container logs.
Enter or select a different line count to truncate the number of
lines displayed. Lines are cut off from the end of the logs.
The ability to troubleshoot batch jobs through the API was introduced in DTR
2.2. Starting in MSR 2.6, admins also have the ability to audit jobs using
the web interface.
Each job runner has a limited capacity and will not claim jobs that
require a higher capacity. You can see the capacity of a job runner through
the GET /api/v0/workers endpoint:
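For example, a sketch assuming admin credentials stored in an access
token:
$ curl -u admin:$MSR_TOKEN -X GET "https://msr.example.com/api/v0/workers"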
If worker 000000000000 notices the jobs in waiting state above,
then it will be able to pick up jobs 0 and 2 since it has the
capacity for both. Job 1 will have to wait until the previous scan
job, 0, is completed. The job queue will then look like:
The schedule field uses a cron expression following the
(seconds) (minutes) (hours) (day of month) (month) (day of week)
format. For example, 57 54 3 * * * with cron ID
48875b1b-5006-48f5-9f3c-af9fbdd82255 will be run at 03:54:57 on
any day of the week or the month, which is 2017-02-22T03:54:57Z in
the example JSON response above.
Mirantis Secure Registry has a global setting for auto-deletion of job logs,
which allows them to be removed as part of garbage collection. MSR admins can
enable auto-deletion of job logs in MSR 2.6 based on specified
conditions, which are covered below.
In your browser, navigate to https://<msr-url> and log in with
your MKE credentials.
Select System on the left-side navigation panel, which will
display the Settings page by default.
Scroll down to Job Logs and turn on Auto-Deletion.
Specify the conditions with which a job log auto-deletion will be
triggered.
MSR allows you to set your auto-deletion conditions based on the
following optional job log attributes:
Name
Description
Example
Age
Lets you remove job logs which are older than your specified number
of hours, days, weeks or months
2 months
Max number of events
Lets you specify the maximum number of job logs allowed within MSR.
100
If you check and specify both, job logs will be removed from MSR
during garbage collection if either condition is met. You should see
a confirmation message right away.
Click Start Deletion if you’re ready. Read more about
Garbage collection if you’re unsure
about this operation.
Navigate to System > Job Logs to confirm that
onlinegc_joblogs has started.
Note
When you enable auto-deletion of job logs, the logs will be
permanently deleted during garbage collection.
With MSR you get to control which users have access to your image
repositories.
By default, anonymous users can only pull images from public
repositories. They can’t create new repositories or push to existing
ones. You can then grant permissions to enforce fine-grained access
control to image repositories. For that:
Start by creating a user.
Users are shared across MKE and MSR. When you create a new user in
Mirantis Kubernetes Engine, that user becomes available in MSR
and vice versa. Registered users can create and manage their own
repositories.
You can also integrate with an LDAP service to manage users from a
single place.
Extend the permissions by adding the user to a team.
To extend a user’s permission and manage their permissions over
repositories, you add the user to a team. A team defines the
permissions users have for a set of repositories.
Note
To monitor user login events, enable the auditAuthLogsEnabled parameter
through the /settings API endpoint:
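A minimal sketch, assuming admin credentials and that the settings
endpoint is exposed at /api/v0/meta/settings (the exact path may differ
across MSR versions):
$ curl -u admin:$MSR_TOKEN -X POST "https://msr.example.com/api/v0/meta/settings" \
    -H "Content-Type: application/json" \
    -d '{"auditAuthLogsEnabled": true}'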
When a user creates a repository, only that user can make changes to the
repository settings, and push new images to it.
Organizations take permission management one step further, since they
allow multiple users to own and manage a common set of repositories.
This is useful when implementing team workflows. With organizations you
can delegate the management of a set of repositories and user
permissions to the organization administrators.
An organization owns a set of repositories, and defines a set of teams.
With teams you can define fine-grained permissions that a team of users has
for a set of repositories.
In this example, the ‘Whale’ organization has three repositories and two
teams:
Members of the blog team can only see and pull images from the
whale/java repository.
Members of the billing team can manage the whale/golang repository,
and push and pull images from the whale/java repository.
You can extend a user’s default permissions by granting them individual
permissions in other image repositories, by adding the user to a team. A
team defines the permissions a set of users have for a set of
repositories.
To create a new team, go to the MSR web UI, and navigate to the
Organizations page. Then click the organization where you want
to create the team.
Navigate to the Teams tab, click the New team
button, and give the team a name.
Once you have created a team, click the team name to manage its
settings. The first thing we need to do is add users to the team. Click
the Add Member button and add users to the team.
The next step is to define the permissions this team has for a set of
repositories. Navigate to the Repositories tab, and click the Add
repository button.
Choose the repositories this team has access to, and what permission
levels the team members have.
Three permission levels are available:
Permission level
Description
Read only
View repository and pull images.
Read & Write
View repository, pull and push images.
Admin
Manage repository and change its settings, pull and push images.
When a user creates a repository, only that user has permissions to make
changes to the repository.
For team workflows, where multiple users have permissions to manage a
set of common repositories, create an organization. By default, MSR has
one organization called ‘docker-datacenter’, that is shared between MSR
and MKE.
To create a new organization, navigate to the MSR web UI, and go to
the Organizations page.
Click the New organization button, and choose a meaningful name for
the organization.
Repositories owned by this organization will contain the organization
name, so to pull an image from that repository, you’ll use:
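For example, using placeholder names:
$ docker pull msr.example.com/<organization>/<repository>:<tag>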
Click Save to create the organization, and then click the
organization to define which users are allowed to manage this
organization. These users will be able to edit the organization
settings, edit all repositories owned by the organization, and define
the user permissions for this organization.
For this, click the Add user button, select the users that you
want to grant permissions to manage the organization, and click
Save. Then change their permissions from ‘Member’ to Org Owner.
Users are shared across MKE and MSR. When you create a new user in
Mirantis Kubernetes Engine, that user becomes available in MSR and
vice versa. When you create a trusted admin in MSR, the admin has
permissions to manage:
You can configure MSR to automatically post event notifications to a
webhook URL of your choosing. This lets you build complex CI and CD
pipelines with your Docker images.
To subscribe to the webhook events for a repository or namespace you must have
admin rights for the particular component.
For example, a “foo/bar” repository admin may subscribe to its tag push
events, whereas an MSR admin can subscribe to any event.
In your browser, navigate to https://<msr-url> and log in with
your credentials.
Select Repositories from the left-side navigation panel, and
then click the name of the repository that you want to view. Note that
you will have to click the repository name following the / after the
specific namespace for your repository.
Select the Webhooks tab, and click New Webhook.
From the drop-down list, select the event that will trigger the
webhook.
Set the URL that will receive the JSON payload.
Validate the integration by clicking the Test button
next to the Webhook URL field.
If the integration is working, you will receive a JSON payload at the URL
you specified for the event type notification you selected.
Paste the TLS certificate associated with your webhook URL into the
TLS Cert field.
Note
For testing purposes, you can test your TLS certificate over HTTP
rather than HTTPS.
Click Create to save the webhook. Once saved, your webhook is
active and starts sending POST notifications whenever your selected event
type is triggered.
As a repository admin, you can add or delete a webhook at any point.
Additionally, you can create, view, and delete webhooks for your
organization or trusted registry using the API.
Refer to Webhook types for a list of events that can trigger
notifications through the API.
From the MSR web interface, click API on the bottom left-side
navigation panel to explore the API resources and endpoints. Click
Execute to send your API request.
Your MSR hostname serves as the base URL for your API requests.
Use curl to send HTTP or HTTPS API requests. Note that you must
specify skipTLSVerification:true on your request to test the
webhook endpoint over HTTP.
key
The namespace/organization or repo to subscribe to. For
example, foo/bar to subscribe to pushes to the bar repository
within the namespace/organization foo. You must supply a “key” to scope
a particular webhook event to a repository or a namespace/organization.
If you are an MSR admin, you can omit the “key”, in which case a POST event
notification of the specified type will be triggered for all MSR repositories
and namespaces.
endpoint
The URL to send the JSON payload to.
type
Applies to the event type received at the specified
subscription endpoint.
contents
Refers to the payload of the event itself. Each event is
different, therefore the structure of the JSON object in contents
will change depending on the event type. Refer to Content
structure for more details.
Before subscribing to an event, you can view and test your endpoints
using fake data. To send a test payload, send a POST request to
/api/v0/webhooks/test with the following payload:
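A sketch of the test payload, using TAG_PUSH as an illustrative event type
and an assumed endpoint URL:
{
  "type": "TAG_PUSH",
  "endpoint": "https://webhook.url/"
}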
Change type to the event type that you want to receive. MSR will
then send an example payload to your specified endpoint. The example
payload sent is always the same.
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (e.g. 10.10.10.1/foo/bar:tag)
"scanSummary": {
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"critical": 0, // (int) number of critical issues, where CVSS >= 7.0
"major": 0, // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
"minor": 0, // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
"last_scan_status": 0, // (int) enum; see scan status section
"check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
...
}
}
{
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"event": "", // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
"author": "", // (string) the name of the user responsible for the event
"data": {} // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
To view the subscriptions for a resource, you must first have admin rights
for that resource, after which you can send requests for all subscriptions
from a particular API endpoint. The response will include data for all
resource users.
To view all webhook subscriptions for a repository, run:
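A sketch of the request, assuming the foo/bar repository and admin
credentials:
$ curl -u admin:$MSR_TOKEN -X GET "https://msr.example.com/api/v0/repositories/foo/bar/webhooks"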
You can delete a subscription if you are an MSR repository admin or an
admin of the resource associated with the event subscription. Regular users,
however, can only delete subscriptions for the repositories they manage.
To delete a webhook subscription, send a DELETE /api/v0/webhooks/{id}
request, replacing {id} with the ID of the webhook subscription you intend
to delete.
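For example, a sketch assuming admin credentials:
$ curl -u admin:$MSR_TOKEN -X DELETE "https://msr.example.com/api/v0/webhooks/{id}"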
Starting in DTR 2.6, each repository page includes an Activity tab
which displays a sortable and paginated list of the most recent events within
the repository. This offers better visibility along with the ability to audit
events. Event types listed vary according to your repository
permission level. Additionally, MSR admins can enable auto-deletion
of repository events as part of maintenance and cleanup.
In the following section, we will show you how to view and audit the
list of events in a repository. We will also cover the event types
associated with your permission level.
As of DTR 2.3, admins were able to view a list of MSR events using the API. MSR
2.6 enhances that feature by showing a permission-based events list for each
repository page on the web interface. To view the list of events within a
repository, do the following:
Navigate to https://<msr-url> and log in with your MSR credentials.
Select Repositories from the left-side navigation panel, and
then click on the name of the repository that you want to view. Note that
you will have to click on the repository name following the / after the
specific namespace for your repository.
Select the Activity tab. You should see a paginated list of the
latest events based on your repository permission level. By default,
Activity shows the latest 10 events and excludes pull
events, which are only visible to repository and MSR admins.
If you’re a repository or an MSR admin, uncheck Exclude pull
to view pull events. This should give you a better understanding of who
is consuming your images.
To update your event view, select a different time filter from the
drop-down list.
The following table breaks down the data included in an event and uses
the highlighted CreatePromotionPolicy event as an example.
Event detail
Description
Example
Label
Friendly name of the event.
CreatePromotionPolicy
Repository
This will always be the repository in review following the
<user-or-org>/<repository_name> convention outlined in
Create a repository
test-org/test-repo-1
Tag
Tag affected by the event, when applicable.
test-org/test-repo-1:latest where latest is the affected tag
SHA
The digest value for CREATE operations, such as creating a new image
tag or a promotion policy.
sha256:bbf09ba3
Type
Event type. Possible values are: CREATE, GET, UPDATE,
DELETE, SEND, FAIL and SCAN.
CREATE
Initiated by
The actor responsible for the event. For user-initiated events, this
will reflect the user ID and link to that user’s profile. For image
events triggered by a policy – pruning, pull / push mirroring, or
promotion – this will reflect the relevant policy ID except for manual
promotions where it reflects PROMOTIONMANUAL_P, and link to the
relevant policy page. Other event actors may not include a link.
PROMOTIONCA5E7822
Date and Time
When the event happened in your configured time zone.
Given the level of detail on each event, it should be easy for MSR and
security admins to determine what events have taken place inside of MSR.
For example, when an image that should not have been deleted ends up
deleted, the security admin can determine when the deletion occurred and
who initiated it.
Push
Refers to CreateManifest and UpdateTag events. Learn more
about pushing images.
Authenticated users
Scan
Requires security scanning to be set
up by an MSR admin.
Once enabled, this will display as a SCAN event type.
Authenticated users
Promotion
Refers to a CreatePromotionPolicy event which links to the
Promotions tab of the repository where you can edit
the existing promotions. See Promotion Policies for different ways to promote
an image.
Repository admin
Delete
Refers to “Delete Tag” events. Learn more about Delete images.
Authenticated users
Pull
Refers to “Get Tag” events. Learn more about Pull an image.
Repository and MSR admins
Mirantis Secure Registry has a global setting for repository event
auto-deletion. This allows event records to be removed as part of garbage
collection. MSR administrators can enable auto-deletion of repository
events in DTR 2.6 based on specified conditions which are covered below.
In your browser, navigate to https://<msr-url> and log in with your
admin credentials.
Select System from the left-side navigation panel, which
displays the Settings page by default.
Scroll down to Repository Events and turn on
Auto-Deletion.
Specify the conditions with which an event auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following
optional repository event attributes:
Name
Description
Example
Age
Lets you remove events older than your specified number of hours, days,
weeks or months.
2 months
Max number of events
Lets you specify the maximum number of events allowed in the
repositories.
6000
If you check and specify both, events in your repositories will be removed
during garbage collection if either condition is met. You should see a
confirmation message right away.
Click Start GC if you are ready.
Navigate to System > Job Logs to confirm that onlinegc has
taken place.
Mirantis Secure Registry allows you to automatically promote and mirror
images based on a policy, enabling you to create a Docker-centric
development pipeline. In MSR 2.7, you have the option to promote
applications with the experimental docker app CLI addition. Note that
scanning-based promotion policies do not take effect until all
application-bundled images have been scanned.
You can mix and match promotion policies, mirroring policies, and
webhooks to create flexible development pipelines that integrate with
your existing CI/CD systems.
Promote an image using policies
One way to create a promotion pipeline is to automatically promote
images to another repository.
You start by defining a promotion policy that’s specific to a
repository. When someone pushes an image to that repository, MSR checks
if it complies with the policy you set up and automatically pushes the
image to another repository.
You can also promote images between different MSR deployments. This not
only allows you to create promotion policies that span multiple MSRs,
but also allows you to mirror images for security and high availability.
You start by configuring a repository with a mirroring policy. When
someone pushes an image to that repository, MSR checks if the policy is
met, and if so pushes it to another MSR deployment or Docker Hub.
Another option is to mirror images from another MSR deployment. You
configure a repository to poll for changes in a remote repository. All
new images pushed into the remote repository are then pulled into MSR.
This is an easy way to configure a mirror for high availability since
you won’t need to change firewall rules that are in place for your
environments.
Mirantis Secure Registry allows you to create image promotion pipelines
based on policies.
In this example we will create an image promotion pipeline such that:
Developers iterate and push their builds to the dev/website
repository.
When the team creates a stable build, they make sure their image is
tagged with -stable.
When a stable build is pushed to the dev/website repository, it
will automatically be promoted to qa/website so that the QA team
can start testing.
With this promotion policy, the development team doesn’t need access to
the QA repositories, and the QA team doesn’t need access to the
development repositories.
Once you’ve created a repository, navigate to the
repository page on the MSR web interface, and select the Promotions tab.
Note
Only administrators can globally create and edit promotion policies.
By default users can only create and edit promotion policies on
repositories within their user namespace.
Click New promotion policy, and define the image promotion
criteria.
MSR allows you to set your promotion policy based on the following image
attributes:
Tag name
Whether the tag name equals, starts with, ends with, contains, is one
of, or is not one of your specified string values
Promote to Target if Tag name ends in stable
Component
Whether the image has a given component and the component name equals,
starts with, ends with, contains, is one of, or is not one of your
specified string values
Promote to Target if Component name starts with b
Vulnerabilities
Whether the image has vulnerabilities – critical, major, minor,
or all – and your selected vulnerability filter is greater than or
equals, greater than, equals, not equals, less than or equals, or less
than your specified number
Promote to Target if Critical vulnerabilities = 3
License
Whether the image uses an intellectual property license and is one of
or not one of your specified words
Promote to Target if License name = docker
Now you need to choose what happens to an image that meets all the
criteria.
Select the target organization or namespace and repository
where the image is going to be pushed. You can choose to keep the image
tag, or transform the tag into something more meaningful in the
destination repository, by using a tag template.
In this example, if an image in the dev/website is tagged with a
word that ends in “stable”, MSR will automatically push that image to
the qa/website repository. In the destination repository the image
will be tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that
complies with the policy, it automatically gets promoted. To confirm,
select the Promotions tab on the dev/website repository.
You can also review the newly pushed tag in the target repository by
navigating to qa/website and selecting the Tags tab.
Mirantis Secure Registry allows you to create mirroring policies for a
repository. When an image gets pushed to a repository and meets the
mirroring criteria, MSR automatically pushes it to a repository in a
remote Mirantis Secure Registry or Hub registry.
This not only allows you to mirror images but also allows you to create
image promotion pipelines that span multiple MSR deployments and
datacenters.
In this example we will create an image mirroring policy such that:
Developers iterate and push their builds to the
msr-example.com/dev/website repository in the MSR
deployment dedicated to development.
When the team creates a stable build, they make sure their image is
tagged with -stable.
When a stable build is pushed to msr-example.com/dev/website, it
will automatically be pushed to qa-example.com/qa/website,
mirroring the image and promoting it to the next stage of
development.
With this mirroring policy, the development team does not need access to
the QA cluster, and the QA team does not need access to the development
cluster.
You need to have permissions to push to the destination repository in
order to set up the mirroring policy.
Once you have created a repository, navigate to
the repository page on the web interface, and select the Mirrors
tab.
Click New mirror to define where the image will be pushed if it
meets the mirroring criteria.
Under Mirror direction, choose Push to remote registry.
Specify the following details:
Field
Description
Registry type
You can choose between Mirantis Secure Registry and
Docker Hub. If you choose MSR, enter your MSR URL.
Otherwise, Docker Hub defaults to
https://index.docker.io
Username and password or access token
Your credentials in the remote repository you wish to push to.
To use an access token instead of your password, see
authentication token.
Repository
Enter the namespace and the repository_name after the /
Show advanced settings
Enter the TLS details for the remote repository or check
Skip TLS verification. If the MSR remote repository is
using self-signed TLS certificates or certificates signed by your own
certificate authority, you also need to provide the public key
certificate for that CA. You can retrieve the certificate by accessing
https://<msr-domain>/ca. Remote certificate authority
is optional for a remote repository in Docker Hub.
Note
Make sure the account you use for the integration has
permissions to write to the remote repository.
Click Connect to test the integration.
In this example, the image gets pushed to the qa/example repository
of an MSR deployment available at qa-example.com using a service
account that was created just for mirroring images between repositories.
Next, set your push triggers. MSR allows you to set your mirroring
policy based on the following image attributes:
Name
Description
Example
Tag name
Whether the tag name equals, starts with, ends with, contains, is one
of, or is not one of your specified string values
Copy image to remote repository if Tag name ends in stable
Component
Whether the image has a given component and the component name equals,
starts with, ends with, contains, is one of, or is not one of your
specified string values
Copy image to remote repository if Component name starts with b
Vulnerabilities
Whether the image has vulnerabilities – critical, major, minor,
or all – and your selected vulnerability filter is greater than or
equals, greater than, equals, not equals, less than or equals, or less
than your specified number
Copy image to remote repository if Critical vulnerabilities = 3
License
Whether the image uses an intellectual property license and is one of
or not one of your specified words
Copy image to remote repository if License name = docker
You can choose to keep the image tag, or transform the tag into
something more meaningful in the remote registry by using a tag
template.
In this example, if an image in the dev/website repository is tagged
with a word that ends in stable, MSR will automatically push that
image to the MSR deployment available at qa-example.com. The image
is pushed to the qa/example repository and is tagged with the
timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that
complies with the policy, it automatically gets promoted to
qa/example in the remote trusted registry at qa-example.com.
When an image is pushed to another registry using a mirroring policy,
scanning and signing data is not persisted in the destination
repository.
If you have scanning enabled for the destination repository, MSR will scan
the pushed image. If you want the image to be signed, you need to do so
manually.
Mirantis Secure Registry allows you to set up a mirror of a repository by
constantly polling it and pulling new image tags as they are pushed.
This ensures your images are replicated across different registries for
high availability. It also makes it easy to create a development
pipeline that allows different users access to a certain image without
giving them access to everything in the remote registry.
To mirror a repository, start by
creating a repository in the MSR deployment that
will serve as your mirror. Previously, you were only able to set up pull
mirroring from the API. Starting in DTR 2.6, you can also mirror and pull
from a remote MSR or Docker Hub repository.
To get started, navigate to https://<msr-url> and log in with your
MKE credentials.
Select Repositories in the left-side navigation panel, and then
click on the name of the repository that you want to view. Note that you will
have to click on the repository name following the / after the specific
namespace for your repository.
Next, select the Mirrors tab and click New mirror.
On the New mirror page, choose
Pull from remote registry.
Specify the following details:
Field
Description
Registry type
You can choose between Mirantis Secure Registry and
Docker Hub. If you choose MSR, enter your MSR URL.
Otherwise, Docker Hub defaults to
https://index.docker.io
Username and password or access token
Your credentials in the remote repository you wish to poll from.
To use an access token instead of your password, see
authentication token.
Repository
Enter the namespace and the repository_name after the /
Show advanced settings
Enter the TLS details for the remote repository or check
Skip TLS verification. If the MSR remote repository is using
self-signed certificates or certificates signed by your own certificate
authority, you also need to provide the public key certificate for that
CA. You can retrieve the certificate by accessing
https://<msr-domain>/ca. Remote certificate authority
is optional for a remote repository in Docker Hub.
After you have filled out the details, click Connect to test the
integration.
Once you have successfully connected to the remote repository, new
buttons appear:
Click Save to mirror only future tags.
To mirror all existing and future tags, click Save & Apply
instead.
There are a few different ways to send your MSR API requests. To explore
the different API resources and endpoints from the web interface, click
API on the bottom left-side navigation panel.
Click Try it out and enter your HTTP request details.
namespace and reponame refer to the repository that will be poll
mirrored. The boolean field, initialEvaluation, corresponds to
Save when set to false and will only mirror images created
after your API request. Setting it to true corresponds to
Save & Apply which means all tags in the remote repository will
be evaluated and mirrored. The other body parameters correspond to the
relevant remote repository details that you can see on the MSR web
interface. As a best practice,
use a service account just for this purpose. Instead of providing the
password for that account, you should pass an authentication
token.
If the MSR remote repository is using self-signed certificates or
certificates signed by your own certificate authority, you also need to
provide the public key certificate for that CA. You can get it by
accessing https://<msr-domain>/ca. The remoteCA field is
optional for mirroring a Docker Hub repository.
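A sketch of a request body under the assumptions above; field names other
than initialEvaluation and remoteCA are illustrative, not confirmed API
parameters:
{
  "initialEvaluation": false, // corresponds to Save; mirror only new tags
  "remoteHost": "https://qa-example.com", // illustrative name for the remote registry URL
  "remoteRepository": "qa/example", // illustrative name for the remote repository
  "remoteCA": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}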
Click Execute. On success, the API returns an HTTP 201
response.
Once configured, the system polls for changes in the remote repository
and runs the poll_mirror job every 30 minutes. On success, the
system will pull in new images and mirror them in your local repository.
Starting in DTR 2.6, you can filter for poll_mirror jobs to review
when the job last ran. To manually trigger the job and force pull
mirroring, use the POST /api/v0/jobs API endpoint and specify
poll_mirror as your action.
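A sketch of the manual trigger, assuming admin credentials and that the
endpoint accepts the action in a JSON body:
$ curl -u admin:$MSR_TOKEN -X POST "https://msr.example.com/api/v0/jobs" \
    -H "Content-Type: application/json" \
    -d '{"action": "poll_mirror"}'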
When defining promotion policies you can use templates to dynamically
name the tag that is going to be created.
Important
Whenever an image promotion event occurs, the MSR timestamp for the event
is in UTC (Coordinated Universal Time). That timestamp, however, is converted
by the browser and presented in the user's time zone. Conversely, if a
time-based tag is applied to a target image, MSR captures it in UTC but
cannot convert it to the user's time zone, as tags are immutable
strings.
You can use these template keywords to define your new tag:
Helm is a tool that manages Kubernetes packages called charts, which are
put to use in defining, installing, and upgrading Kubernetes applications.
These charts, in conjunction with Helm tooling, deploy applications
into Kubernetes clusters. Charts are comprised of a collection of files and
directories, arranged in a particular structure and packaged as a .tgz
file. Charts define Kubernetes objects, such as the Service
and DaemonSet objects used in the application under deployment.
MSR enables you to use Helm to store and serve Helm charts,
thus allowing users to push charts to and pull charts from MSR
repositories using the Helm CLI and the MSR API.
Note
To obtain the CA certificate required by the Helm charts commands, navigate
to https://<msr-url>/ca and download the certificate, or run:
curl -sk https://<msr-url>/ca > ca.crt
MSR supports both Helm v2 and v3. The two versions differ significantly with
regard to the Helm CLI, which affects the applications under deployment rather
than Helm chart support in MSR. One key difference is that while Helm v2
includes both the Helm CLI and Tiller (Helm Server), Helm v3 includes only the
Helm CLI. Helm charts (referred to as releases following their installation
in Kubernetes) are managed by Tiller in Helm v2 and by Helm CLI in Helm v3.
Though the Helm CLI can be used to pull a Helm chart by itself or a Helm
chart and its provenance file, it is not possible to use the Helm CLI to
pull a provenance file by itself.
To push a Helm chart using the Helm CLI, first install the helm push
plugin from chartmuseum/helm-push. Note that it is not possible to push a
provenance file using the Helm CLI.
Use the helm push CLI command to push a Helm chart, as shown in the
following sketch.
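This is a minimal sketch, assuming a packaged chart named demo-0.1.0.tgz and an MSR Helm repository added under the name msr-repo; the chart, repository name, and credentials are illustrative:

# Install the push plugin from chartmuseum/helm-push
helm plugin install https://github.com/chartmuseum/helm-push

# Add the MSR Helm repository, using the CA certificate downloaded earlier
helm repo add msr-repo https://<msr-url>/charts/<namespace>/<reponame> \
  --ca-file ca.crt --username <user> --password <password>

# Push the packaged chart to the MSR repository
helm push demo-0.1.0.tgz msr-repo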
Use the MSR web UI to view the MSR Helm repository charts.
In the MSR web UI, navigate to Repositories.
Click the name of the repository that contains the charts you want to view.
The page will refresh to display the detail for the selected Helm
repository.
Click the Charts tab. The page will refresh to display
all the repository charts.
To view chart versions:
Click the View Chart button associated with the required Helm repository.

To view the chart description:
Click the View Chart button associated with the required Helm repository.
Click the View Chart button for the particular chart version.

To view the default values:
Click the View Chart button associated with the required Helm repository.
Click the View Chart button for the particular chart version.
Click Configuration.

To view the chart templates:
Click the View Chart button associated with the required Helm repository.
Click the View Chart button for the particular chart version.
Helm chart linting can ensure that Kubernetes YAML files and Helm charts
adhere to a set of best practices, with a focus on production readiness and
security.
A set of established rules forms the basis of Helm chart linting. The process
generates a report that you can use to take any necessary actions.
deprecated-service-account-field
Indicates when deployments use the deprecated serviceAccount field.
Use the serviceAccountName field instead.
drop-net-raw-capability
Indicates when containers do not drop the NET_RAW capability.
NET_RAW allows an application within the container to craft raw
packets, use raw sockets, and bind to any address. Remove
this capability in the securityContext of the affected containers.
env-var-secret
Indicates when objects use a secret in an environment variable.
Do not use raw secrets in environment variables. Instead, either mount
the secret as a file or use a secretKeyRef. Refer to Using Secrets
for details.
mismatching-selector
Indicates when deployment selectors fail to match the pod template
labels.
Confirm that your deployment selector correctly matches the labels in
its pod template.
no-anti-affinity
Indicates when deployments with multiple replicas fail to specify
inter-pod anti-affinity, to ensure that the orchestrator attempts to
schedule replicas on different nodes.
Specify anti-affinity in your pod specification to ensure that the
orchestrator attempts to schedule replicas on different nodes. Using
podAntiAffinity, specify a labelSelector that matches pods for
the deployment, and set the topologyKey to
kubernetes.io/hostname. Refer to Inter-pod affinity and anti-affinity
for details.
no-extensions-v1beta
Indicates when objects use deprecated API versions under extensions/v1beta.
ssh-port
Indicates when deployments expose port 22, which is commonly reserved
for SSH access.
Ensure that non-SSH services are not using port 22. Confirm that any
actual SSH servers have been vetted.
unset-cpu-requirements
Indicates when containers do not have CPU requests and limits set.
Set CPU requests and limits for your container based on its
requirements. Refer to Requests and limits
for details.
unset-memory-requirements
Indicates when containers do not have memory requests and limits set.
Set memory requests and limits for your container based on its
requirements. Refer to Requests and limits
for details.
writable-host-mount
Indicates when containers mount a host path as writable.
Set containers to mount host paths as readOnly, if you need to
access files on the host.
cluster-admin-role-binding
CIS Benchmark 5.1.1 Ensure that the cluster-admin role is only used
where required.
Create and assign a separate role that has access to specific
resources/actions needed for the service account.
docker-sock
Alert on deployments with docker.sock mounted in containers.
Ensure that the Docker socket is not mounted inside any containers, by
removing the associated Volume and VolumeMount from the deployment
YAML specification. If the Docker socket is mounted inside a container,
processes running within the container could execute Docker commands,
which would effectively allow for full control of the host.
exposed-services
Alert on services for forbidden types.
Ensure containers are not exposed through a forbidden service type such
as NodePort or LoadBalancer.
host-ipc
Alert on pods and deployment-like objects that share the host's IPC namespace.
Ensure that the host's IPC namespace is not shared.
host-network
Alert on pods and deployment-like objects that share the host's network namespace.
Ensure that the host's network namespace is not shared.
host-pid
Alert on pods and deployment-like objects that share the host's process namespace.
Ensure that the host's process namespace is not shared.
privilege-escalation-container
Alert on containers that allow privilege escalation, through which a
process can gain more privileges than its parent process.
privileged-ports
Alert on deployments with privileged ports mapped in containers.
Ensure privileged ports [0, 1024] are not mapped within
containers.
sensitive-host-mounts
Alert on deployments with sensitive host system directories mounted in containers.
Ensure sensitive host system directories are not mounted in containers
by removing those Volumes and VolumeMounts.
unsafe-proc-mount
Alert on deployments with an unsafe /proc mount
(procMount=Unmasked) that bypasses the default masking behavior
of the container runtime.
Ensure that containers do not unsafely expose parts of /proc, by setting
procMount=Default. An unmasked procMount bypasses the default
masking behavior of the container runtime. See Pod Security Standards
for more details.
unsafe-sysctls
Alert on deployments that specify unsafe sysctls, which may lead to
severe problems, such as incorrect container behavior.
The option to redirect clients on pull for Helm repositories is present in the
web UI. However, it is currently ineffective. Refer to the relevant
issue on GitHub for more
information.
For the following endpoints, note that while the Swagger API Reference
does not specify example responses for HTTP 200 codes,
this is due to a Swagger bug; responses are in fact returned.
# Get chart or provenance file from repo
GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>

# Template a chart version
GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Tag pruning is the process of cleaning up unnecessary or unwanted repository
tags. As of v2.6, you can configure the Mirantis Secure Registry (MSR) to
automatically perform tag pruning on repositories that you manage by:
Specifying a tag pruning policy or alternatively,
Setting a tag limit
Note
When run, tag pruning only deletes a tag and does not carry out any
actual blob deletion.
Known Issue
While the tag limit field is disabled when you turn on immutability for a
new repository, this is currently not the case with Repository Settings. As
a workaround, turn off immutability when setting a tag limit via
Repository Settings > Pruning.
In the following section, we will cover how to specify a tag pruning
policy and set a tag limit on repositories that you manage. It will not
include modifying or deleting a tag pruning policy.
As a repository administrator, you can now add tag pruning policies on
each repository that you manage. To get started, navigate to
https://<msr-url> and log in with your credentials.
Select Repositories in the left-side navigation panel, and then
click the name of the repository that you want to update. Note that you
must click the repository name that follows the / after your
repository's namespace.
Select the Pruning tab, and click New pruning policy
to specify your tag pruning criteria:
MSR allows you to set your pruning triggers based on the following image
attributes:

Tag name
  Whether the tag name equals, starts with, ends with, contains, is one
  of, or is not one of your specified string values.
  Example: Tag name = test

Component name
  Whether the image has a given component and the component name equals,
  starts with, ends with, contains, is one of, or is not one of your
  specified string values.
  Example: Component name starts with b

Vulnerabilities
  Whether the image has vulnerabilities – critical, major, minor, or
  all – and your selected vulnerability filter is greater than or equals,
  greater than, equals, not equals, less than or equals, or less than
  your specified number.
  Example: Critical vulnerabilities = 3

License
  Whether the image uses an intellectual property license and is one of
  or not one of your specified words.
  Example: License name = docker

Last updated at
  Whether the last image update was before your specified number of
  hours, days, weeks, or months. For details on valid time units, see
  Go's ParseDuration function.
  Example: Last updated at: Hours = 12
Specify one or more image attributes to add to your pruning criteria,
then choose:
Prune future tags to save the policy and apply your selection to
future tags. Only matching tags after the policy addition will be
pruned during garbage collection.
Prune all tags to save the policy, and evaluate both existing and
future tags on your repository.
Upon selection, you will see a confirmation message and will be
redirected to your newly updated Pruning tab.
If you have specified multiple pruning policies on the repository, the
Pruning tab will display a list of your prune triggers and
details on when the last tag pruning was performed based on the trigger,
a toggle for deactivating or reactivating the trigger, and a
View link for modifying or deleting your selected trigger.
All tag pruning policies on your account are evaluated every 15 minutes.
Any qualifying tags are then deleted from the metadata store. If a tag
pruning policy is modified or created, then the tag pruning policy for
the affected repository will be evaluated.
In addition to pruning policies, you can also set tag limits on
repositories that you manage to restrict the number of tags on a given
repository. Repository tag limits are processed in a first in first out
(FIFO) manner. For example, if you set a tag limit of 2, adding a third
tag would push out the first.
To set a tag limit, do the following:
Select the repository that you want to update and click the
Settings tab.
Turn off immutability for the repository.
Specify a number in the Pruning section and click
Save. The Pruning tab will now display your tag
limit above the prune triggers list along with a link to modify this
setting.
MSR users can automatically block clients from pulling images stored in the
registry by configuring enforcement policies at either the global or repository
level.
An enforcement policy is a collection of rules used to determine whether an
image can be pulled.
A good example of a scenario in which an enforcement policy can be useful is
when an administrator wants to house images in MSR but does not want those
images to be pulled into environments by MSR users. In this case, the
administrator would configure an enforcement policy either at the global or
repository level based on a selected set of rules.
Global image enforcement policies differ from those set at the repository level
in several important respects:
Whereas both administrators and regular users can set up enforcement policies
at the repository level, only administrators can set up enforcement
policies at the global level.
Only one global enforcement policy can be set for each MSR instance, whereas
multiple enforcement policies can be configured at the repository level.
Global enforcement policies are evaluated prior to repository policies.
Global and repository enforcement policies are generated from the same set of
rule attributes.
Note
Images must comply with all the enforcement policy rules to be pulled.
If any rule evaluates to false, the system blocks the image pull.
This requirement also applies to tags associated with an image digest:
all tags must meet all the enforcement policy rules for the image digest
they refer to.
Users can only create and edit enforcement policies for repositories
within their user namespace.
To set up a repository enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to Repositories.
Select the repository to edit.
Click the Enforcement tab and select New enforcement
policy.
Define the enforcement policy rules with the desired rule attributes and
select Save. The screen displays the new enforcement policy in
the Enforcement tab. By default, the new enforcement policy is
toggled on.
Once a repository enforcement policy is set up and activated, pull requests
that do not satisfy the policy rules will return the following error message:
Only administrators can set up global enforcement policies.
To set up a global enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to System.
Select the Enforcement tab.
Confirm that the global enforcement function is Enabled.
Define the enforcement policy rules with the desired criteria and select
Save.
Once the global enforcement policy is set up, pull requests against any
repository that do not satisfy the policy rules will return the following
error message:
Administrators and users can monitor enforcement activity in the MSR web UI.
Important
Enforcement events can only be monitored at the repository level. It is not
possible, for example, to view in one location all enforcement events that
correspond to the global enforcement policy.
Navigate to Repositories.
Select the repository whose enforcement activity you want to review.
Select the Activity tab to view enforcement event activity. For
instance, you can:
Identify which policy triggered an event using the enforcement ID
displayed on the event entry. (The enforcement IDs for each enforcement
policy are located on the Enforcement tab.)
Identify the user responsible for making a blocked pull request, and the
time of the event.
MSR uses semantic versioning. While downgrades are not supported, Mirantis
supports upgrades according to the following rules:
When upgrading from one patch version to another, you can skip patch
versions because no data migration is performed between patch versions.
When upgrading between minor versions, you cannot skip versions; however,
you can upgrade from any patch version of the previous minor version to any
patch version of the current minor version.
When upgrading between major versions, upgrade one major version at a
time, moving to the earliest available minor version of the next major
version. Mirantis strongly recommends that you first upgrade to the latest
minor/patch version of your current major version.
Description                           From    To         Supported
patch upgrade                         x.y.0   x.y.1      yes
skip patch version                    x.y.0   x.y.2      yes
patch downgrade                       x.y.2   x.y.1      no
minor upgrade                         x.y.*   x.y+1.*    yes
skip minor version                    x.y.*   x.y+2.*    no
minor downgrade                       x.y.*   x.y-1.*    no
skip major version                    x.*.*   x+2.*.*    no
major downgrade                       x.*.*   x-1.*.*    no
major upgrade                         x.y.z   x+1.0.0    yes
major upgrade skipping minor version  x.y.z   x+1.y+1.z  no
A few seconds of interruption may occur during the upgrade of an MSR cluster,
so schedule the upgrade to take place outside of peak hours to avoid any
business impacts.
(if possible) A backup exists of the images stored by MSR, in the case
that MSR is configured to store images on the local filesystem or within
an NFS store:

BACKUP_LOCATION=/example_directory/filename

# If local filesystem
sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes dtr-registry-${REPLICA_ID}

# If NFS store
sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes dtr-registry-nfs-${REPLICA_ID}
None of the MSR replica nodes are exhibiting time drift. To make this
determination, review the kernel log timestamps for each of the nodes. If
time drift is occurring, use clock synchronization (e.g., NTP) to keep
node clocks in sync.
Local filesystems across MSR nodes are not exhibiting any disk storage
issues.
Confirm that at least 16GB RAM is available on the node on which you are
running the upgrade. If the MSR node does not have access to the internet,
follow the offline installation documentation to get the images.
Once you have the latest image on your machine (and the images on the target
nodes, if upgrading offline), run the upgrade command.
Note
The upgrade command can be run from any available node, as MKE is
aware of which worker nodes have replicas.
docker run -it --rm \
  mirantis/dtr:2.9.21 upgrade
By default, the upgrade command runs in interactive mode and prompts for any
necessary information. If you are performing the upgrade on an existing
replica, pass the --existing-replica-id flag.
The upgrade command will start replacing every container in your MSR cluster,
one replica at a time. It will also perform certain data migrations. If
anything fails or the upgrade is interrupted for any reason, rerun the upgrade
command (the upgrade will resume from the point of interruption).
To confirm that the newly upgraded MSR environment is ready:
Make sure that all running MSR containers reflect the newly upgraded MSR
version:
docker ps --filter name=dtr
Verify that the MSR web UI is accessible and operational.
Confirm push and pull functionality of Docker images to and from the
registry
Ensure that the MSR metadata store is in good standing:
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
docker run -it --rm --net dtr-ol \
  -v dtr-ca-$REPLICA_ID:/ca \
  dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID

# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
[]
When upgrading from 2.5 to 2.6, the system runs a
metadatastoremigration job following a successful upgrade. This job
migrates the blob links for your images, which is necessary for online garbage
collection. With 2.6, you can log in to the MSR web interface and navigate
to System > Job Logs to check the status of the
metadatastoremigration job.
Garbage collection is disabled while the migration is running. In the case of a
failed metadatastoremigration, the system will retry twice.
If all three attempts fail, you must manually retrigger the
metadatastoremigration job. To do this, send a POST request to the
/api/v0/jobs endpoint:
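The following is a minimal sketch of such a request using curl, assuming admin credentials; the payload shape beyond the documented action value is an assumption:

# Manually retrigger the metadata store migration job
curl -u <admin-user>:<password> -X POST "https://<msr-url>/api/v0/jobs" \
  -H "Content-Type: application/json" \
  -d '{"action": "metadatastoremigration"}'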
If you have previously deployed a cache, be sure to upgrade the node dedicated
to your cache to keep it in sync with your upstream MSR replicas. This
prevents authentication errors and other unexpected behavior.
Mirantis Secure Registry is a Dockerized application. To monitor it, you
can use the same tools and techniques that you are already using to monitor
other containerized applications running on your cluster. One way to
monitor MSR is to use the monitoring capabilities of Mirantis Kubernetes
Engine (MKE, formerly Docker Universal Control Plane).
In your browser, log in to Mirantis Kubernetes Engine (MKE), and
navigate to the Stacks page. If you have MSR set up for
high-availability, then all the MSR replicas are displayed.
To check the containers for the MSR replica, click the replica you
want to inspect, click Inspect Resource, and choose Containers.
Now you can drill into each MSR container to see its logs and find the
root cause of the problem.
MSR also exposes several endpoints that you can use to assess whether an MSR
replica is healthy:
/_ping: Checks whether the MSR replica is healthy and returns a
simple JSON response. This is useful for load balancing and other
automated health check tasks.
/nginx_status: Returns the number of connections being handled by
the NGINX front-end used by MSR.
/api/v0/meta/cluster_status: Returns extensive information about
all MSR replicas.
The /api/v0/meta/cluster_status endpoint requires administrator
credentials, and returns a JSON object for the entire cluster as observed by
the replica being queried. You can authenticate your requests using HTTP basic
auth.
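For example, you can run the following minimal curl checks, in which the MSR URL and credentials are illustrative:

# Check whether the replica is healthy; no authentication is required
curl -k https://<msr-url>/_ping

# Query cluster-wide status, authenticating with HTTP basic auth (admin credentials)
curl -k -u <admin-user>:<password> https://<msr-url>/api/v0/meta/cluster_status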
{"current_issues":[{"critical":false,"description":"... some replicas are not ready. The following servers arenotreachable:dtr_rethinkdb_f2277ad178f7",}],"replica_health":{"f2277ad178f7":"OK","f3712d9c419a":"OK","f58cf364e3df":"OK"},}
You can find the health status in the current_issues and
replica_health fields. If this endpoint does not provide meaningful
information when you are troubleshooting, try troubleshooting using the
logs.
Docker Content Trust (DCT) keeps audit logs of changes made to trusted
repositories. Every time you push a signed image to a repository, or
delete trust data for a repository, DCT logs that information.
To access the audit logs, you need to authenticate your requests using an
authentication token. You can get an authentication token for all
repositories, or one that is specific to a single repository.
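The following is a minimal sketch of each token request using curl; the realm and service query parameters reflect the conventional MSR token endpoint and should be confirmed against your deployment:

# Request a token with global scope
curl -ks -u <user>:<password> \
  "https://<msr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*"

# Request a token scoped to a single repository
curl -ks -u <user>:<password> \
  "https://<msr-url>/auth/token?realm=dtr&service=dtr&scope=repository:<namespace>/<repository>:pull"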
MSR returns a JSON file with a token, even when the user doesn’t have
access to the repository to which they requested the authentication
token. This token doesn’t grant access to MSR repositories.
The JSON file returned has the following structure:
{"token":"<token>","access_token":"<token>","expires_in":"<expiration in seconds>","issued_at":"<time>"}
Once you have an authentication token you can use the following
endpoints to get audit logs:
GET /v2/_trust/changefeed
  Get audit logs for all repositories. Authorization: global-scope token.

GET /v2/<msr-url>/<repository>/_trust/changefeed
  Get audit logs for a specific repository. Authorization: repository-specific token.
Both endpoints have the following query string parameters:
change_id (required, string)
  A non-inclusive starting change ID from which to start
  returning results. This will typically be the first or last change ID
  from the previous page of records requested, depending on which
  direction you are paging in.
  The value 0 indicates that records should be returned starting from the
  beginning of time.
  The value 1 indicates that records should be returned starting from the
  most recent record. If 1 is provided, the implementation will also
  assume the records value is meant to be negative, regardless of the
  given sign.

records (required, string integer)
  The number of records to return. A negative value indicates that the
  number of records preceding the change_id should be returned. Records
  are always returned sorted from oldest to newest.
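For example, the following sketch requests the ten oldest audit records across all repositories, assuming a previously obtained global-scope token:

curl -k -H "Authorization: Bearer <token>" \
  "https://<msr-url>/v2/_trust/changefeed?change_id=0&records=10"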
Below is the description for each of the fields in the response:

count
  The number of records returned.

ID
  The ID of the change record. It should be used in the change_id field of
  requests to provide a non-inclusive starting index, and treated
  as an opaque value that is guaranteed to be unique within an instance of
  Notary.

CreatedAt
  The time at which the change happened.

GUN
  The MSR repository that was changed.

Version
  The version to which the repository was updated. This increments every
  time the trust repository changes.
  This is always 0 for events representing trusted data being removed
  from the repository.

SHA256
  The checksum of the timestamp being updated to. This can be used with
  the existing Notary APIs to request said timestamp.
  This is always an empty string for events representing trusted data
  being removed from the repository.

Category
  The kind of change that was made to the trusted repository. Can be
  either update or deletion.
The results include only audit logs for events that happened more than
60 seconds ago, sorted from oldest to newest.
Even though the authentication API always returns a token, the
changefeed API validates whether the user has access to the requested
audit logs:
If the user is an admin, they can see the audit logs for any
repository.
All other users can only see audit logs for repositories to which they
have read access.
High availability in MSR depends on swarm overlay networking. One way to
test if overlay networks are working correctly is to deploy containers
to the same overlay network on different nodes and see if they can ping
one another.
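The following is a minimal sketch of such a test; the network and container names are illustrative:

# On node 1: create an attachable overlay network and run a named test container
docker network create -d overlay --attachable overlay-test
docker run -it --rm --net overlay-test --name overlay-test1 alpine sh

# On node 2: ping the first container across the overlay network
docker run -it --rm --net overlay-test alpine ping -c 3 overlay-test1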
MSR uses RethinkDB for persisting data and replicating it across
replicas. It might be helpful to connect directly to the RethinkDB
instance running on an MSR replica to check the MSR internal state.
Warning
Modifying RethinkDB directly is not supported and may cause problems.
The RethinkCLI can be run from a separate
image in the mirantis organization. Note that the
commands below are using separate tags for non-interactive and
interactive modes.
Use SSH to log into a node that is running an MSR replica, and run the
following:
# List problems in the cluster detected by the current node.
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n1) && \
echo 'r.db("rethinkdb").table("current_issues")' | \
docker run --rm -i --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" \
  -e DTR_REPLICA_ID=$REPLICA_ID mirantis/rethinkcli:v2.2.0-ni non-interactive
RethinkDB stores data in different databases that contain multiple
tables. Run the following command to get into interactive mode and query
the contents of the DB:
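A minimal sketch follows; the interactive image tag and subcommand are assumed to mirror the non-interactive invocation above, without the -ni suffix:

REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n1)
docker run --rm -it --net dtr-ol -v "dtr-ca-${REPLICA_ID}:/ca" \
  -e DTR_REPLICA_ID=$REPLICA_ID mirantis/rethinkcli:v2.2.0 interactive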
# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
[]
# List all the DBs in RethinkDB
> r.dbList()
[ 'dtr2',
'jobrunner',
'notaryserver',
'notarysigner',
'rethinkdb' ]
# List the tables in the dtr2 db
> r.db('dtr2').tableList()
[ 'blob_links',
'blobs',
'client_tokens',
'content_caches',
'events',
'layer_vuln_overrides',
'manifests',
'metrics',
'namespace_team_access',
'poll_mirroring_policies',
'promotion_policies',
'properties',
'pruning_policies',
'push_mirroring_policies',
'repositories',
'repository_team_access',
'scanned_images',
'scanned_layers',
'tags',
'user_settings',
'webhooks' ]
# List the entries in the repositories table
> r.db('dtr2').table('repositories')
[ { enableManifestLists: false,
id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
immutableTags: false,
name: 'test-repo-1',
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
namespaceName: 'admin',
pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
pulls: 0,
pushes: 0,
scanOnPush: false,
tagLimit: 0,
visibility: 'public' },
{ enableManifestLists: false,
id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
immutableTags: false,
longDescription: '',
name: 'testing',
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
namespaceName: 'admin',
pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
pulls: 0,
pushes: 0,
scanOnPush: false,
shortDescription: '',
tagLimit: 1,
visibility: 'public' } ]
Individual DBs and tables are a private implementation detail and may
change in MSR from version to version, but you can always use
dbList() and tableList() to explore the contents and data
structure.
When an MSR replica is unhealthy or down, the MSR web UI displays a
warning:
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
To fix this, you should remove the unhealthy replica from the MSR
cluster, and join a new one. Start by running:
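The following is a minimal sketch, assuming MSR 2.9.x; the remove command prompts for the IDs of the replicas to remove and for the ID of a healthy replica from which to perform the operation:

docker run -it --rm \
  mirantis/dtr:2.9.21 remove \
  --ucp-insecure-tls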
Warnings display in a red banner at the top of the MSR web UI to indicate
potential vulnerability scanning issues.
Warning: Cannot perform security scans because no vulnerability database was found.
  Displays when vulnerability scanning is enabled but no vulnerability
  database is available to MSR. Typically, the warning displays
  when a vulnerability database update is run for the first time
  and the operation fails, as no usable vulnerability database exists at
  that point.

Warning: Last vulnerability database sync failed.
  Displays when a vulnerability database update fails, even though a
  previous usable vulnerability database is available for vulnerability
  scans. The warning typically displays when a vulnerability database
  update fails, despite successful completion of a prior vulnerability
  database update.
Note
The terms vulnerability database sync and
vulnerability database update are interchangeable, in the
context of MSR web UI warnings.
Note
The issuing of warnings is the same regardless of whether vulnerability
database updating is done manually or is performed automatically through a
job.
MSR undergoes a number of steps in performing a vulnerability database update,
including TAR file download and extraction, file validation, and the update
operation itself. Errors that can trigger warnings can occur at any point in
the update process. These errors can include such system-related matters as low
disk space, issues with the transient network, or configuration
complications. As such, the best strategy for troubleshooting MSR vulnerability
scanning issues is to review the logs.
To view the logs for an online vulnerability database update:
Online vulnerability database updates are performed by a jobrunner container,
the logs for which you can view through a docker CLI command or by using the
MSR web UI:
CLI command:
docker logs <jobrunner-container-name>
MSR web UI:
Navigate to System > Job Logs in the left-side navigation
panel.
To view the logs for an offline vulnerability database update:
The MSR vulnerability database update occurs through the dtr-api container.
As such, access the logs for that container to ascertain the reason for update
failure.
To obtain more log information:
If the logs do not initially offer enough detail on the cause of vulnerability
database update failure, set MSR to enable debug logging, which will display
additional debug logs.
Refer to the reconfigure CLI command documentation for
information on how to enable debug logging. For example:
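A minimal sketch follows, assuming MSR 2.9.x and the --log-level option of the reconfigure command:

docker run -it --rm \
  mirantis/dtr:2.9.21 reconfigure \
  --ucp-insecure-tls \
  --log-level debug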
Certificate issues when pushing and pulling images
If TLS is not properly configured, you are likely to encounter an
x509: certificate signed by unknown authority error when attempting to run
the following commands:
docker login
docker push
docker pull
To resolve the issue:
Verify that your MSR instance has been configured with your TLS certificate
Fully Qualified Domain Name (FQDN). For more information, refer to
Add a custom TLS certificate.
Alternatively, but only in testing scenarios, you can skip using a certificate
by adding your registry host name as an insecure registry in the Docker
daemon.json file:
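For example, a minimal daemon.json entry follows, in which the registry host and port are illustrative; restart the Docker daemon afterward for the change to take effect:

{
  "insecure-registries": ["<msr-domain>:<port>"]
}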
Mirantis Secure Registry is a clustered application. You can join
multiple replicas for high availability.
For an MSR cluster to be healthy, a majority of its replicas (n/2 + 1)
must be healthy and able to communicate with the other replicas.
This is also known as maintaining quorum.
The three possible failure scenarios are detailed below.
Replica is unhealthy but cluster maintains quorum
One or more replicas are unhealthy, but the overall majority (n/2 + 1)
is still healthy and able to communicate with one another.
Here, the MSR cluster has five replicas but one of the nodes has
stopped working, and the other has problems with the MSR overlay
network.
Even though these two replicas are unhealthy, the MSR cluster still has a
majority of replicas that are working, which means that the cluster is
healthy.
In this case, you should repair the unhealthy replicas, or remove them
from the cluster and join new ones.
A majority of replicas are unhealthy, making the cluster lose quorum,
but at least one replica is still healthy, or at least the data volumes
for MSR are accessible from that replica.
Here, the MSR cluster is unhealthy but since one replica is
still running it is possible to repair the cluster without having to
restore from a backup, which minimizes the amount of data loss.
In this total disaster scenario, in which all MSR replicas are lost, the data
volumes for all MSR replicas are corrupted or lost.
Here, you must restore MSR from an existing backup. Treat such a restore
as a last resort: whenever at least one replica survives, prefer an
emergency repair, which can prevent some data loss.
When one or more MSR replicas are unhealthy but the overall majority
(n/2 + 1) is healthy and able to communicate with one another, your MSR
cluster is still functional and healthy.
Given that the MSR cluster is healthy, there’s no need to execute any
disaster recovery procedures like restoring from a backup.
Instead, you should:
Remove the unhealthy replicas from the MSR cluster.
Join new replicas to make MSR highly available.
Since an MSR cluster requires a majority of replicas to be healthy at all
times, the order of these operations is important. If you join more
replicas before removing the ones that are unhealthy, your MSR cluster
might become unhealthy.
To understand why you should remove unhealthy replicas before joining
new ones, imagine you have a five-replica MSR deployment, and something
goes wrong with the overlay network connecting the replicas, causing
them to be separated into two groups.
Because the cluster originally had five replicas, it can work as long as
three replicas are still healthy and able to communicate (5 / 2 + 1 =
3). Even though the network separated the replicas into two groups, MSR is
still healthy.
If at this point you join a new replica instead of fixing the network
problem or removing the two replicas that got isolated from the rest,
it is possible that the new replica ends up on the side of the network
partition that has fewer replicas.
When this happens, both groups have the minimum number of replicas
needed to establish a cluster. This is also known as a split-brain
scenario, because both groups can now accept writes and their histories
start to diverge, making the two groups effectively two different
clusters.
To remove unhealthy replicas, you’ll first have to find the replica ID
of one of the replicas you want to keep, and the replica IDs of the
unhealthy replicas you want to remove.
You can find the list of replicas by navigating to Shared Resources >
Stacks or Swarm > Volumes (when using swarm mode) on the MKE web
interface, or by using the MKE client bundle to run:
docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
Another way to determine the replica ID is to SSH into an MSR node and
run the following:
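A minimal sketch follows, assuming that the replica ID is the suffix of the dtr-rethink container name:

REPLICA_ID=$(docker ps --format '{{.Names}}' --filter name=dtr-rethink | cut -d'-' -f3)
echo $REPLICA_ID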
For an MSR cluster to be healthy, a majority of its replicas (n/2 + 1)
need to be healthy and be able to communicate with the other replicas.
This is known as maintaining quorum.
In a scenario where quorum is lost, but at least one replica is still
accessible, you can use that replica to repair the cluster. That replica
doesn’t need to be completely healthy. The cluster can still be repaired
as the MSR data volumes are persisted and accessible.
Repairing the cluster from an existing replica minimizes the amount of
data lost. If this procedure doesn’t work, you’ll have to restore from
an existing backup.
When a majority of replicas are unhealthy, causing the overall MSR
cluster to become unhealthy, operations such as docker login,
docker pull, and docker push return an internal server error.
Accessing the /_ping endpoint of any replica also returns the same
error. It is also possible that the MSR web UI is partially or fully
unresponsive.
Use the mirantis/dtr emergency-repair command to try to repair an
unhealthy MSR cluster from an existing replica.
This command checks that the data volumes for the MSR replica are
uncorrupted, redeploys all internal MSR components, and reconfigures them
to use the existing volumes. It also reconfigures MSR, removing all other
nodes from the cluster and leaving MSR as a single-replica cluster with the
replica you chose.
Start by finding the ID of the MSR replica that you want to repair from. You
can find the list of replicas by navigating to Shared Resources > Stacks or
Swarm > Volumes (when using swarm mode) on the MKE web interface, or by
using an MKE client bundle to run:
docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
Another way to determine the replica ID is to SSH into an MSR node and
run the following:
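A minimal sketch follows, assuming MSR 2.9.x; it reads the replica ID from the dtr-rethink container name and then runs the repair against that replica:

REPLICA_ID=$(docker ps --format '{{.Names}}' --filter name=dtr-rethink | cut -d'-' -f3)
docker run -it --rm \
  mirantis/dtr:2.9.21 emergency-repair \
  --ucp-insecure-tls \
  --existing-replica-id $REPLICA_ID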
If the emergency repair command fails, try running it again using a
different replica ID. As a last resort, you can restore your cluster
from an existing backup.
Repository metadata
Metadata on the deployed repositories and images, such as image
architecture and size.
Access control to repos and images
Permissions for teams and repositories, pertaining to who has access to
which images.
Notary data
Notary tags and signatures, as applicable to images that are signed.
Scan results
Image security scanning results, pertaining to vulnerabilities in your
images.
Certificates and keys
The certificates, public keys, and private keys in use by MSR for mutual
TLS communication.
Image content
The images you push to MSR. This data can either be stored on the
filesystem of the node that is running MSR or, depending on the
configuration, on a different storage system.
Note
A number of data types are not backed up when you run mirantis/dtr
backup, including:
Image content, which must be backed up separately, depending on the MSR
configuration.
User, organization, and team information, the backup for
which must be made as part of your MKE backup.
The vulnerability database,
which can be redownloaded following a restore operation.
You need your MSR replica ID to create a backup. You can determine your replica
ID through the MKE web UI, using the MKE client bundle, or by accessing an MSR
node with SSH and running a command.
To locate your replica ID through the MKE web UI:
Navigate to Shared Resources > Stacks or Swarm >
Volumes (when using swarm mode).
To locate your replica ID using the MKE client bundle:
From a terminal that uses an MKE client bundle, run:
docker ps --format "{{.Names}}" | grep dtr

# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
To locate your replica ID using SSH to access an MSR node:
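A minimal sketch follows, assuming MSR 2.9.x: it determines the replica ID from the dtr-rethink container name, then runs the chained metadata backup whose behavior is described below; adjust the version to your environment:

DTR_VERSION=2.9.21
REPLICA_ID=$(docker ps --format '{{.Names}}' --filter name=dtr-rethink | cut -d'-' -f3)
read -p 'mke-url (the MKE URL, including domain and port): ' UCP_URL
read -p 'mke-username (an MKE administrator): ' UCP_ADMIN
read -sp 'mke password: ' UCP_PASSWORD
docker run --log-driver none -i --rm \
  -e UCP_PASSWORD=$UCP_PASSWORD \
  mirantis/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl -sk https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID \
  > dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar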
The chained commands detailed above perform the following tasks:
Set your MSR version and replica ID.
To back up a specific replica, modify the --existing-replica-id
flag in the backup command to manually set the replica ID.
Prompt for the <mke-url> and <mke-username>.
Prompt for your <mke-password> without saving it to your disk or
printing it to the terminal.
Retrieve the CA certificate for the specified <mke-url>.
To skip TLS verification, replace the --ucp-ca flag with
--ucp-insecure-tls. Mirantis does not recommend this flag for
production environments.
Include the MSR version and timestamp in your tar backup file.
Important
To ensure constant user access to MSR, by default the backup
command does not pause the MSR replica that is undergoing the backup
operation. As such, you can continue to make changes to the replica, however
those changes will not be saved into the backup. To circumvent this
behavior, use the --offline-backup option and be sure to remove the
replica from the load balancing pool to avoid user interruption.
As the backup contains sensitive information, such as private keys, you may opt
to encrypt it. To do so, run:
gpg --symmetric {{metadata_backup_file}}
This command prompts you for a password to encrypt the backup, copies the
backup file, and encrypts it.
Refer to mirantis/dtr backup for more information on supported command
options.
The backup of the MSR metadata should present as follows:
tar -tf {{metadata_backup_file}}

# The archive should look like this
dtr-backup-v2.9.21/
dtr-backup-v2.9.21/rethink/
dtr-backup-v2.9.21/rethink/properties/
dtr-backup-v2.9.21/rethink/properties/0
Use the following command if you have encrypted the metadata backup:
gpg -d {{metadata_backup_file}} | tar -t
Alternatively, you can create a backup of an MKE cluster and restore it to a
new cluster, and then restore MSR on the new cluster to confirm your MSR
backup.
In the event that a majority of the RethinkDB table replicas in use by MSR are
unhealthy, and an emergency repair is unsuccessful, you must restore
the cluster from a backup.
Important
Use the same MKE cluster upon which you created the
backup. If you restore on a different MKE cluster, the MSR resources will
be owned by non-existent users, and thus you will not be able to manage
the resources despite their being stored in the MSR data store.
Use the same version of the mirantis/dtr
image that you used in creating the backup.
To restore MSR from a backup:
Using the client bundle, run the following command to stop and remove any
MSR container that is still running:
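A minimal sketch follows, assuming MSR 2.9.x; the destroy command prompts for the replicas to stop and remove:

docker run -it --rm \
  mirantis/dtr:2.9.21 destroy \
  --ucp-insecure-tls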
The migration of MSR metadata and image binaries to a new Kubernetes or Swarm
cluster can be a complex operation. To help you to successfully complete
this task, Mirantis provides the Mirantis Migration Tool (MMT).
With MMT, you can transition to the same MSR version you already have in use,
or you can opt to upgrade to a more recent major, minor, or patch version
of the software. In addition, MMT allows you to switch cluster orchestrators
and deployment methods as part of the migration process.
The <command> argument represents the particular stage of the migration
process:
verify
Verification of the MSR source system configuration. The
verify command must be run on the source MSR system. Refer
to Verify the source system configuration for more information.
Applies only to migrations that originate from MSR 2.9.x systems.
estimate
Estimation of the number of images and the amount of metadata to
migrate. The estimate command must be run on the source MSR
system. Refer to Estimate the migration for more information.
Applies only to migrations that originate from MSR 2.9.x systems.
extract
Extraction of metadata, storage configuration, and blob storage in the
case of the copy storage mode, from the source registry. The
extract command must be run on the source MSR system. Refer
to Extract the data for more information.
transform
Transformation of metadata from the source registry for use with the
target MSR system. The transform command must be run on the
target MSR system. Refer to Transform the data extract for more
information.
Applies only to migrations that originate from MSR 2.9.x systems.
restore
Restoration of transformed metadata, storage configuration, and blob
storage in the case of the copy storage mode, is made onto the
target MSR environment. The restore command must be run on
the target MSR system. Refer to Restore the data extract for
more information.
The <command-mode> argument indicates the mode in which the command is to
run, specific to the source or target registry. msr and msr3 are
the only accepted values, as MMT currently supports only the
migration of MSR registries.
The --storage-mode flag and its accompanying <storage-mode> argument
indicate the storage mode to use in migrating the
registry blob storage.
inplace
The binary image data remains in its original location.
The target MSR system must be configured to use the same external
storage as the source MSR system. Refer to
Configure external storage for more information.
Important
Due to its ability to handle large amounts of data, Mirantis
recommends the use of inplace storage mode for most migration
scenarios.
copy
The binary image data is copied from the source system to a local
directory on the workstation that is running MMT. This mode allows
movement from one storage location to another. It is especially useful
in air-gapped environments.
The <directory> argument is used to share state across each command. The
resulting directory is typically the destination for the data that is extracted
from the source registry, which then serves as the source for the extracted
data in subsequent commands.
To avoid data inconsistencies, the source registry must remain in
read-only mode throughout the migration to the target MSR system.
Revert the value of readOnlyRegistry to false after the
migration is complete.
Be aware that MSR 3.0.x source systems cannot be placed into
read-only mode. If you are migrating from a 3.0.x source system,
be careful not to write any files during the migration process.
An active MSR 3.x.x installation, version 3.0.3 or later, to serve as the
migration target.
Configuration of the namespace for the MSR target installation, which
you set by running the following command:
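For example, the following command scopes subsequent kubectl operations to the MSR namespace; the namespace name is illustrative:

kubectl config set-context --current --namespace=<msr-namespace>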
You must pull the MMT image to both the source and target systems, using the
following command:
docker pull registry.mirantis.com/msr/mmt
2.9.x source systems only. Administrator credentials for the MKE cluster on
which the source MSR 2.9 system is running.
Kubernetes target systems only. A kubectl config file, which is typically
located in $HOME/.kube.
Kubernetes target systems only. Credentials within the kubectl config
file that supply cluster admin access to the Kubernetes cluster that is
running MSR 3.x.x.
Once the prerequisites are met, you can select
from two available storage modes for migrating binary image data from a source
MSR system to a target MSR system: inplace and copy.
Note
In all but one stage of the migration workflow, you will indicate the
storage mode of choice in the storage-mode parameter setting. The step
in which you do not indicate the storage mode is
Restore the data extract.
inplace
The binary image data remains in its original location.
The target MSR system must be configured to use the same external
storage as the source MSR system. Refer to
Configure external storage for more information.
Important
Due to its ability to handle large amounts of data, Mirantis
recommends the use of inplace storage mode for most migration
scenarios.
copy
The binary image data is copied from the source system to a local
directory on the workstation that is running MMT. This mode allows
movement from one storage location to another. It is especially useful
in air-gapped environments.
Important
Migrations from source MSR systems that use Docker volumes for image
storage, such as local filesystem storage backend, can only be performed
using the copy storage mode. Refer to
Filesystem storage backends for more information.
For all Kubernetes-based migrations, Mirantis recommends running MMT in a Pod
rather than using the docker run deployment method. Migration
scenarios in which this does not apply are limited to MSR 2.9.x source systems
and Swarm-based MSR 3.1.x source and target systems.
Important
All Kubernetes-based migrations that use a filesystem backend must run
MMT in a Pod.
When performing a restore from within the
MMT Pod, the Persistent Volume Claim (PVC) used by the Pod must contain
the data extracted from the source MSR system.
Before you perform the migration, deploy the
following Pod onto your Kubernetes-based source and target systems:
In the rules section of the Role definition, add or remove
permissions according to your requirements.
For the PersistentVolumeClaim definition, modify the
spec.resources.storage value according to your requirements.
In the Pod definition, the
spec.volumes[0].persistentVolumeClaim.claimName field refers to the
PVC used by the target MSR 3.x system. Modify the value as required.
Once you have met the Migration prerequisites, configured your source MSR
system and your target MSR system, and selected the storage
mode, you can perform the migration workflow as a
sequence of individual steps.
Migrations from MSR 2.9.x to 3.x.x must follow each of the five migration
steps, whereas migrations from MSR 3.x.x source systems skip the
verify,
estimate, and
transform steps, and instead begin with
extract before proceeding directly to
restore.
Important
All MMT commands that are run on MSR 3.x.x systems, including both source
and target deployments, must include the --fullname option, which
specifies the name of the MSR instance.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate validation when communicating
with the source system), false (perform certificate validation when
communicating with the source system)
Default: false
Example output:
Note
Sizing information displays only when a migration is run in copy storage mode.
If your migration originates from MSR 3.x.x, proceed directly to
Extract the data.
Before extracting the data for migration, you must estimate the number of
images and the amount of metadata to migrate from your source MSR system to
the new MSR target system. To do so, run the estimate command on the
source MSR system.
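A minimal sketch of the estimate invocation follows; the volume mounts and flag names are illustrative assumptions, so confirm them against the MMT reference before use:

docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v <local-migration-directory>:/migration \
  registry.mirantis.com/msr/mmt:<mmt-version> \
  estimate msr \
  --storage-mode <storage-mode> \
  /migration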
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate verification when communicating
with the source system), false (perform certificate validation when communicating with the source system)
You can extract metadata and, optionally, binary image data from an MSR source
system using the commands presented in this section.
Important
To avoid data inconsistencies, the source registry must remain in
read-only mode throughout the migration to the target MSR system.
Be aware that MSR 3.0.x source systems cannot be placed into
read-only mode. If you are migrating from a 3.0.x source system,
be careful not to write any files during the migration process.
Migrations that use the copy storage mode and a filesystem storage
backend must also include the --mount option, to specify the MSR 2.9.x
Docker volume that will be mounted to the MMT container at the /storage
directory. As --mount is a Docker option, it must be included prior to
the registry.mirantis.com/msr/mmt:<mmt-version> portion of the command.
Optional. Set whether to use an insecure connection.
Valid values: true (skip certificate verification when communicating
with the source system), false (perform certificate validation when
communicating with the source system)
disable-analytics
Optional. Disables MMT metrics collection for the
extract command. You must include the flag each time you run the
command.
The data extract is rendered as a TAR file with the name
dtr-metadata-mmt-backup.tar in the <local-migration-directory>. The
file name is later converted to msr-backup-<MSR-version>-mmt.tar, following
the transform step.
Optional. Indicates that the source system runs on a Swarm cluster.
Example output:
INFO[0000] Migration will be performed with "inplace" storage mode
INFO[0000] Backing up metadata...
{"level":"info","msg":"Writing RethinkDB backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backing up MSR","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Recording time of backup","time":"2023-07-06T01:25:51Z"}
{"level":"info","msg":"Backup file checksum is: 0e2134abf81147eef953e2668682b5e6b0e9761f3cbbb3551ae30204d0477291","time":"2023-07-06T01:25:51Z"}
INFO[0002] The Mirantis Migration Tool extracted your registry of MSR 3.x, using the following parameters:
Source Registry: MSR3
Mode: metadata only
Existing MSR3 storage will be backed up.
The source registry must remain in read-only mode for the duration of the operation to avoid data inconsistencies.
The data extract is rendered as a TAR file with the name
msr-backup-<MSR-version>-mmt.tar in the <local-migration-directory>.
Once you have extracted the data from your source MSR system, you must
transform the metadata into a format that is suitable for migration to an
MSR 3.x.x system.
Optional. Disables MMT metrics collection for the
transform command. You must include the flag each time you run the
command.
swarm
Optional. Specifies that the source system runs on Docker Swarm.
Default: false
fullname
Sets the name of the MSR instance to which MMT will migrate the
transformed data extract. Use only when the target system runs on a
Kubernetes cluster.
Default: msr
namespace
Optional. Sets the namespace scope for the given command.
By default, MMT sends usage metrics to Mirantis whenever you run the
extract, transform, and restore commands. To disable this
functionality, include the --disable-analytics flag whenever you issue any
of these commands.
MMT collects the following metrics to improve the product and
facilitate its use:
BlobImageCount
Number of images stored in the source MSR system.
BlobStorageSize
Total size of all the images stored in the source MSR system.
EndTime
Time at which the command stops running.
ErrorCount
Number of errors that occurred during the given migration step.
MigrationStep
Migration step for which metrics are being collected.
For example, extract.
StartTime
Time at which the command begins running.
Status
Command status.
In the case of command failure, MMT reports all associated error
messages.
StorageMode
Storage mode used for migration.
Valid values: copy and inplace.
StorageType
Storage type used in the MSR source and target systems
Valid values: s3, azure, swift, gcs, filesystem, and
nfs.
UserId
Source MSR IP address or URL that is used to associate metrics from
separate commands.
To reuse the extract copy for a restore, reset the appropriate flags
in the migration_summary.json file to false or leave the flags
empty. Otherwise, the MMT restore command will skip the extract.
When migrating a large source installation to your MSR target environment, MMT
can fail due to too many files being open. If this happens, the following error
message displays:
When running MMT from a Docker container, ensure that the path provided for
storing migration data has been mounted as a docker volume to the local
machine.
When running MMT outside of Docker, ensure the path provided exists.
The error is reported when the rethinkdb Pod for the destination MSR 3.x
installation does not have enough disk space available due to the sizing of its
provisioned volume.
Edit the values.yaml file you used for MSR deployment, changing the
rethinkdb.cluster.persistentVolume.size value to match the source
RethinkDB volume size
Run the following command:

helm upgrade --values <path to values.yaml> msr msr/msr
The error is reported when the node on which RethinkDB is running on the target
MSR system does not have enough available disk space.
SSH into the node on which RethinkDB is running.
Review the amount of disk space used by the docker daemon on the node:
docker system df
Review the total size and available storage of the node filesystem:
df
Allocate more storage to the host machine on which the target node is
running.
Admin password on MSR 3.0.x target no longer works
As a result of the migration, the source MSR system security settings
completely replace the settings in the target MSR system. Thus, to gain
admin access to the target system, you must use the admin password for the
source system.
MMT uses several parallel sub-routines in copying image blobs, the number of
which is controlled by the --parallel-io-count parameter, which has a
default value of 4.
Image blobs are copied only when you are using the copy storage mode for
your migration, during the Extract and Restore stages of the migration workflow. For optimum
performance, the number of CPU resources to allocate for the MMT container
(--cpus=<value>) is --parallel-io-count, plus one for MMT itself.
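For example, the following sketch applies that arithmetic with the default --parallel-io-count of 4; the remaining flags are placeholders:

# 4 parallel copy routines + 1 for MMT itself = 5 CPUs
docker run --rm -it --cpus=5 \
  registry.mirantis.com/msr/mmt:<mmt-version> \
  extract msr --storage-mode copy --parallel-io-count 4 <other-flags> <directory>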
You may encounter an INFO[0014] Total blob size: 0 message during
migration with copy mode.
This message indicates that the storage is empty or that the blob storage
mapping is defined incorrectly.
The error may result in a panic message in versions prior to MMT 2.0.2-GA.
To resolve the issue, ensure that the correct source volume is specified
in the mount parameter of the MMT command line.
Note that the exact source storage name may vary.
Errors can occur during migration that require the use of additional MMT
parameters at various stages of the migration process.
For scenarios wherein the pulling of Docker images has failed, you can use the
parameters detailed in the following table to pull the needed images to your
MKE cluster running MSR 2.9.x.
You must pull the MMT image to both your source MSR system and your target MSR
system, otherwise the migration will fail with the following error message:
MSR 3.0.3 or later must be running on your target MSR 3.x cluster, otherwise
the restore migration step will fail with the following error message:
{"level":"fatal","msg":"flag provided but not defined: -append","time":"<time>"}
failed to restore metadata from "/migration/msr-backup-<msr-version>-mmt.tar": restore failed: command terminated with exit code 1
To resolve the issue, upgrade your target cluster to MSR 3.0.3 or later. Refer
to Upgrade MSR for more information.
Storage configuration is out of sync with metadata
With the inplace storage mode, an error message displays if you fail to
configure the external storage location for your target MSR system to the
same storage location that your source MSR system uses:
failed to run container: mmt-dtr-rethinkdb-backup
During the Estimate and
Extract stages of the migration workflow, you may
encounter the following error message:
FATA[0001] failed to extract MSR metadata:\
failed to run container: \
mmt-dtr-rethinkdb-backup: \
Error response from daemon: \
Conflict: \
The name mmt-dtr-rethinkdb-backup is already assigned. \
You have to delete (or rename) that container to be able to assign \
mmt-dtr-rethinkdb-backup to a container again.
Identify the node on which mmt-dtr-rethinkdb-backup was created.
From the node on which the mmt-dtr-rethinkdb-backup container was
created, delete the RethinkDB backup container:
[ENGDTR-4170] Fixed an issue wherein during migration the LDAP setting was
not appearing in the destination MSR. Now, the setting is completely
transferred to MSR 3.x metadata and can be accessed on
the Settings page of the MSR web UI.
[FIELD-6379] Fixed an issue wherein the estimation command in air-gapped
environments failed due to attempts to pull the MMT image on a random node.
The fix ensures that the MMT image is pulled on the required node, where the
estimation command is executed.
Due to unsanitized NUL values, attackers may be able to
maliciously set environment variables on Windows. In
syscall.StartProcess and os/exec.Cmd, invalid environment
variable values containing NUL values are not properly checked
for. A malicious environment variable value can exploit this behavior
to set a value for a different environment variable. For example, the
environment variable string "A=B\x00C=D" sets the variables
"A=B" and "C=D".
Subscriptions for MKE, MSR, and MCR provide access to
prioritized support for designated contacts from your company, agency, team,
or organization. Mirantis service levels for MKE, MSR, and MCR are
based on your subscription level and the Cloud (or cluster) you designate in
your technical support case. Our support offerings are described
here,
and if you do not already have a support subscription, you may inquire about
one via the contact us form.
Mirantis’ primary means of interacting with customers who have technical
issues with MKE, MSR, or MCR is our
CloudCare Portal. Access to our
CloudCare Portal requires prior authorization by your company, agency, team,
or organization, and a brief email verification step. After Mirantis sets up
its backend systems at the start of the support subscription, a designated
administrator at your company, agency, team or organization, can designate
additional contacts. If you have not already received and verified an
invitation to our CloudCare Portal, contact your local designated
administrator, who can add you to the list of designated contacts.
Most companies, agencies, teams, and organizations have multiple designated
administrators for the CloudCare Portal, and these are often the persons most
closely involved with the software. If you don’t know who is a
local designated administrator, or are having problems accessing the
CloudCare Portal, you may also send us an email.
Once you have verified your contact details via our verification email, and
changed your password as part of your first login, you and all your colleagues
will have access to all of the cases and resources purchased. We
recommend you retain your ‘Welcome to Mirantis’ email, because it contains
information on accessing our CloudCare Portal, guidance on
submitting new cases, managing your resources, and so forth. Thus, it can
serve as a reference for future visits.
We encourage all customers with technical problems to use the
knowledge base, which you can access on the Knowledge tab
of our CloudCare Portal. We also encourage you to review the
MKE, MSR, and MCR products documentation which includes release notes,
solution guides, and reference architectures. These are
available in several formats. We encourage use of
these resources prior to filing a technical case; we may already have fixed
the problem in a later release of software, or provided a solution or
technical workaround to a problem experienced by other customers.
One of the features of the CloudCare Portal is the ability to associate
cases with a specific MKE cluster; these are known as “Clouds” in
our portal. Mirantis has pre-populated customer accounts with one or more
Clouds based on your subscription(s). Customers may also create and manage
their Clouds to better match how you use your subscription.
We also recommend and encourage our customers to file new cases based on a
specific Cloud in your account. This is because most Clouds also have
associated support entitlements, licenses, contacts, and cluster
configurations. These greatly enhance Mirantis’ ability to support you in a
timely manner.
You can locate the existing Clouds associated with your account by using the
“Clouds” tab at the top of the portal home page. Navigate to the appropriate
Cloud, and click on the Cloud’s name. Once you’ve verified that Cloud
represents the correct MKE cluster and support entitlement, you
can create a new case via the New Case button towards the top of the
Cloud’s page.
One of the key items required for technical support of most MKE, MSR, and MCR
cases is the support dump. This is a compressed archive of configuration data
and log files from the cluster. There are several ways to gather a support
dump, each described in the paragraphs below. After you have collected a
support dump, you can upload the dump to your new technical support case
by following
this guidance
and using the “detail” view of your case.
The support dump only contains logs for the node where you’re running the
command. If your MKE is highly available, you should collect support dumps
from all of the manager nodes.
To submit the support dump to Mirantis Customer Support, add the
--submit option to the support command. This will send
the support dump along with the following information:
The CLI tool has commands to install, configure, and back up Mirantis
Secure Registry (MSR), as well as to uninstall it. By default, the tool
runs in interactive mode and prompts you for the values needed.
Additional help is available for each command with the --help option.
If not specified, mirantis/dtr uses the latest tag by default. To
work with a different version, specify it in the command. For example,
docker run -it --rm mirantis/dtr:2.9.21.
The backup command creates a tar file with the contents
of the volumes used by MSR, and prints it. You can then use
mirantis/dtr restore to restore the data from an existing backup. An
example invocation appears after the options list below.
Note
This command only creates backups of configurations and image
metadata. It does not back up users and organizations. Users and
organizations can be backed up during an MKE backup.
It also does not back up Docker images stored in your registry. You
should implement a separate backup policy for the Docker images
stored in your registry, taking into consideration whether your MSR
installation is configured to store images on the filesystem or is
using a cloud provider.
This backup contains sensitive information and should be stored
securely.
Using the --offline-backup flag temporarily shuts down the
RethinkDB container. Take the replica out of your load balancer to
avoid downtime.
The ID of an existing MSR replica. To add, remove or modify an MSR
replica, you must connect to the database of an existing replica.
--help-extended
$DTR_EXTENDED_HELP
Display extended help text for a given command.
--ignore-events-table
$DTR_IGNORE_EVENTS_TABLE
Option to prevent backup of the events table for online backups, to
reduce backup size (the option is not available for offline backups).
--ignore-scan-data
$DTR_IGNORE_SCAN_DATA
Option to prevent backup of the scanning data for online backups, to
reduce backup size (the option is not available for offline backups).
--include-job-logs
$DTR_INCLUDE_JOB_LOGS
Option to include job logs in online backups.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--offline-backup
$DTR_OFFLINE_BACKUP
This flag takes RethinkDB down during the backup in order to produce a
more reliable backup: offline backups are guaranteed to be more
consistent than online backups.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE
TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation
uses TLS but always trusts the TLS certificate used by MKE, which can
lead to man-in-the-middle attacks. For production deployments,
use --ucp-ca "$(cat ca.pem)" instead.
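Putting these options together, a representative online backup invocation
with placeholder values follows; in interactive mode, the tool prompts
for any values you omit:

docker run -i --rm mirantis/dtr:2.9.21 backup \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --ucp-username <admin-username> \
  --existing-replica-id <replica-id> > backup.tar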
The destroy command forcefully removes all containers
and volumes associated with an MSR replica without notifying
the rest of the cluster.
Use this command on all replicas to uninstall MSR.
Use the remove command to gracefully scale down your MSR cluster.
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to
man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
The emergency-repair command repairs an MSR cluster that has lost
quorum by reverting your cluster to a single MSR replica.
There are three actions you can take to recover an unhealthy MSR cluster:
If the majority of replicas are healthy, remove the unhealthy nodes
from the cluster, and join new ones for high availability.
If the majority of replicas are unhealthy, use the
emergency-repair command to revert your cluster to a single MSR
replica.
If you cannot repair your cluster to a single replica, you must
restore from an existing backup, using the restore command.
When you run this command, an MSR replica of your choice is repaired and
turned into the only replica in the whole MSR cluster. The containers
for all the other MSR replicas are stopped and removed. When using the
force option, the volumes for these replicas are also deleted.
After repairing the cluster, you should use the join command to add
more MSR replicas for high availability.
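A sketch of such a repair, with placeholder values:

docker run -it --rm mirantis/dtr:2.9.21 emergency-repair \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --existing-replica-id <healthy-replica-id>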
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to the database of an existing healthy replica.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--overlay-subnet
$MSR_OVERLAY_SUBNET
The subnet used by the dtr-ol overlay network.
Example: 10.0.0.0/24. For high-availability, MSR creates an overlay
network between MKE nodes. This flag allows you to choose the subnet for
that network. Make sure the subnet you choose is not used on any machine
where MSR replicas are deployed.
--prune
$PRUNE
Delete the data volumes of all unhealthy replicas. With this
option, the volume of the MSR replica you’re restoring is preserved but
the volumes for all other replicas are deleted. This has the same result
as completely uninstalling MSR from those replicas.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE
TLS CA certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation
uses TLS but always trusts the TLS certificate used by MKE, which can
lead to man-in-the-middle attacks. For production deployments,
use --ucp-ca "$(cat ca.pem)" instead.
--ucp-password
$UCP_PASSWORD
The MKE administrator password.
--ucp-url
$UCP_URL
The MKE URL including domain and port.
--ucp-username
$UCP_USERNAME
The MKE administrator username.
-y, --yes
$YES
Answer yes to any prompts.
--max-wait
$MAX_WAIT
The maximum amount of time that MSR allows for an operation to complete.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, "10m"
represents 10 minutes.
--async-nfs
$ASYNC_NFS
Use async NFS volume options on the replica specified in the
--existing-replica-id option. The NFS configuration must be set with
--nfs-storage-url explicitly to use this option. Using
--async-nfs will bring down any containers on the replica that use
the NFS volume, delete the NFS volume, bring it back up with the
appropriate configuration, and restart any containers that were brought
down.
--client-cert-auth-ca
$CLIENT_CA
Specify root CA certificates for client authentication with
--client-cert-auth-ca "$(cat ca.pem)".
--custom-ca-cert-bundle
$CUSTOM_CA_CERTS_BUNDLE
Provide a file containing additional CA certificates for MSR service
containers to use when verifying TLS server certificates.
--debug
$DEBUG
Enable debug mode for additional logs.
--dtr-ca
$MSR_CA
Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own root
CA public certificate with --dtr-ca "$(cat ca.pem)".
--dtr-cert
$MSR_CERT
Use a PEM-encoded TLS certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own
public key certificate with --dtr-cert "$(cat cert.pem)". If the
certificate has been signed by an intermediate certificate authority,
append its public key certificate at the end of the file to establish a
chain of trust.
--dtr-external-url
$MSR_EXTERNAL_URL
URL of the host or load balancer clients use to reach MSR. When you use
this flag, users are redirected to MKE for logging in. Once
authenticated, they are redirected to the URL you specify in this flag.
If you do not use this flag, MSR is deployed without single sign-on with
MKE. Users and teams are shared, but users log in separately into the two
applications. You can enable and disable single sign-on within your MSR
system settings. Format: https://host[:port], where port is the
value you used with --replica-https-port. Since the HSTS (HTTP
Strict-Transport-Security) header is included in all API responses, make
sure to specify the FQDN (Fully Qualified Domain Name) of your MSR, or
your browser will fail to load the web interface.
--dtr-key
$MSR_KEY
Use a PEM-encoded TLS private key for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
private key with --dtr-key "$(cat key.pem)".
--dtr-storage-volume
$MSR_STORAGE_VOLUME
Customize the volume to store Docker images. By default MSR creates a
volume to store the Docker images in the local filesystem of the node
where MSR is running, without high-availability. Use this flag to
specify a full path or volume name for MSR to store images. For
high-availability, make sure all MSR replicas can read and write data on
this volume. If you are using NFS, use --nfs-storage-url instead.
--enable-client-cert-auth
$ENABLE_CLIENT_CERT_AUTH
Enables TLS client certificate authentication. Use
--enable-client-cert-auth=false to disable it. If enabled, MSR will
additionally authenticate users via TLS client certificates. You must
also specify the root certificate authorities (CAs) that issued the
certificates with --client-cert-auth-ca.
--enable-pprof
$MSR_PPROF
Enables pprof profiling of the server. Use --enable-pprof=false to
disable it. Once MSR is deployed with this flag, you can access the
pprof endpoint for the API server at /debug/pprof, and the registry
endpoint at /registry_debug_pprof/debug/pprof.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--http-proxy
$MSR_HTTP_PROXY
The HTTP proxy used for outgoing requests.
--https-proxy
$MSR_HTTPS_PROXY
The HTTPS proxy used for outgoing requests.
--log-host
$LOG_HOST
The syslog endpoint to send logs to. Use
this flag if you set --log-protocol to tcp or udp.
--log-level
$LOG_LEVEL
Log level for all container logs when logging to syslog. Default: INFO.
The supported log levels are debug, info, warn, error, or fatal.
--log-protocol
$LOG_PROTOCOL
The protocol for sending logs. Default is internal. By default, MSR
internal components log information using the logger specified in the
Docker daemon in the node where the MSR replica is deployed. Use this
option to send MSR logs to an external syslog system. The supported
values are tcp, udp, or internal. Internal is the default
option, stopping MSR from sending logs to an external system. Use this
flag with --log-host.
--nfs-options
$NFS_OPTIONS
Pass in NFS volume options verbatim for the replica specified in the
--existing-replica-id option. The NFS configuration must be set with
--nfs-storage-url explicitly to use this option. Specifying
--nfs-options will pass in character-for-character the options
specified in the argument when creating or recreating the NFS volume.
For instance, to use NFS v4 with async, pass in “rw,nfsvers=4,async” as
the argument.
--nfs-storage-url
$NFS_STORAGE_URL
Use NFS to store Docker images following this format: nfs://<ip|hostname>/<mountpoint>. By default, MSR creates a volume to store the
Docker images in the local filesystem of the node where MSR is running,
without high availability. To use this flag, you need to install an NFS
client library like nfs-common in the node where you are deploying MSR.
You can test this by running showmount -e <nfs-server>. When you
join new replicas, they will start using NFS, so there is no need to
specify this flag. To reconfigure MSR to stop using NFS, pass an empty
value with --nfs-storage-url "". Refer to
Deploy MSR on NFS for more details.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--no-proxy
$MSR_NO_PROXY
List of domains the proxy should not be used for. When using
--http-proxy you can use this flag to specify a list of domains that
you do not want to route through the proxy. Format acme.com[,acme.org].
--overlay-subnet
$MSR_OVERLAY_SUBNET
The subnet used by the dtr-ol overlay network. Example: 10.0.0.0/24.
For high-availability, MSR creates an overlay network between MKE nodes.
This flag allows you to choose the subnet for that network. Make sure
the subnet you choose is not used on any machine where MSR replicas are
deployed.
--replica-http-port
$REPLICA_HTTP_PORT
The public HTTP port for the MSR replica. Default is 80. This allows
you to customize the HTTP port where users can reach MSR. Once users
access the HTTP port, they are redirected to use an HTTPS connection,
using the port specified with --replica-https-port. This port can
also be used for unencrypted health checks.
--replica-https-port
$REPLICA_HTTPS_PORT
The public HTTPS port for the MSR replica. Default is 443. This
allows you to customize the HTTPS port where users can reach MSR. Each
replica can use a different port.
--replica-id
$MSR_INSTALL_REPLICA_ID
Assign a 12-character hexadecimal ID to the MSR replica. Random by
default.
--replica-rethinkdb-cache-mb
$RETHINKDB_CACHE_MB
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto, which is calculated as
(available_memory - 1024) / 2. This setting changes the RethinkDB cache
usage per replica; run the command once per replica to change each one.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to
man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
--ucp-node
$UCP_NODE
The hostname of the MKE node to use to deploy MSR. Random by default.
You can find the hostnames of the nodes in the cluster in the MKE web
interface, or by running the docker node ls command on an MKE manager
node. Note that MKE and MSR must not be installed on the same node;
instead, install MSR on worker nodes that will be managed by MKE.
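Pulling the core options together, a minimal install sketch with
placeholder values follows; your environment may require additional flags
from the list above:

docker run -it --rm mirantis/dtr:2.9.21 install \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --ucp-node <worker-node-hostname> \
  --dtr-external-url https://<msr-fqdn>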
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to the database of an existing healthy replica.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--replica-http-port
$REPLICA_HTTP_PORT
The public HTTP port for the MSR replica. Default is 80. This allows
you to customize the HTTP port where users can reach MSR. Once users
access the HTTP port, they are redirected to use an HTTPS connection,
using the port specified with --replica-https-port. This port can
also be used for unencrypted health checks.
--replica-https-port
$REPLICA_HTTPS_PORT
The public HTTPS port for the MSR replica. Default is 443. This
allows you to customize the HTTPS port where users can reach MSR. Each
replica can use a different port.
--replica-id
$MSR_INSTALL_REPLICA_ID
Assign a 12-character hexadecimal ID to the MSR replica. Random by
default.
--replica-rethinkdb-cache-mb
$RETHINKDB_CACHE_MB
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto, which is calculated as
(available_memory - 1024) / 2. This setting changes the RethinkDB cache
usage per replica; run the command once per replica to change each one.
--skip-network-test
$MSR_SKIP_NETWORK_TEST
Do not test whether overlay networks are working correctly between MKE nodes.
For high-availability, MSR creates an overlay network between MKE nodes
and tests it before joining replicas.
Important
Do not use the --skip-network-test option in production deployments.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to
man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
--ucp-node
$UCP_NODE
The hostname of the MKE node to use to deploy MSR. Random by default.
You can find the hostnames of the nodes in the cluster in the MKE web
interface, or by running docker node ls on an MKE manager
node. Note that MKE and MSR cannot be installed on the same node;
instead, install MSR on worker nodes that will be managed by MKE.
--ucp-password
$UCP_PASSWORD
The MKE administrator password.
--ucp-url
$UCP_URL
The MKE URL including domain and port.
--ucp-username
$UCP_USERNAME
The MKE administrator username.
--unsafe-join
$MSR_UNSAFE_JOIN
Join a new replica even if the cluster is unhealthy. Joining replicas to
an unhealthy MSR cluster leads to split-brain scenarios and data loss.
Do not use this option for production deployments.
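A representative join invocation, with placeholder values:

docker run -it --rm mirantis/dtr:2.9.21 join \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --ucp-node <worker-node-hostname> \
  --existing-replica-id <replica-id>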
--max-wait
$MAX_WAIT
The maximum amount of time that MSR allows for an operation to complete.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, "10m"
represents 10 minutes.
--async-nfs
$ASYNC_NFS
Use async NFS volume options on the replica specified in the
--existing-replica-id option. The NFS configuration must be set with
--nfs-storage-url explicitly to use this option. Using
--async-nfs will bring down any containers on the replica that use
the NFS volume, delete the NFS volume, bring it back up with the
appropriate configuration, and restart any containers that were brought
down.
--client-cert-auth-ca
$CLIENT_CA
Specify root CA certificates for client authentication with
--client-cert-auth-ca "$(cat ca.pem)".
--custom-ca-cert-bundle
$CUSTOM_CA_CERTS_BUNDLE
Specify additional CA certificates for MSR service containers to use
when verifying TLS server certificates with
--custom-ca-cert-bundle "$(cat ca.pem)".
--debug
$DEBUG
Enable debug mode for additional logs of this bootstrap container (the
log level of downstream MSR containers can be set with --log-level).
--dtr-ca
$MSR_CA
Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own root
CA public certificate with --dtr-ca "$(cat ca.pem)".
--dtr-cert
$MSR_CERT
Use a PEM-encoded TLS certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own
public key certificate with --dtr-cert "$(cat cert.pem)". If the
certificate has been signed by an intermediate certificate authority,
append its public key certificate at the end of the file to establish a
chain of trust.
--dtr-external-url
$MSR_EXTERNAL_URL
URL of the host or load balancer clients use to reach MSR. When you use
this flag, users are redirected to MKE for logging in. Once
authenticated, they are redirected to the URL you specify in this flag.
If you do not use this flag, MSR is deployed without single sign-on with
MKE. Users and teams are shared, but users log in separately into the two
applications. You can enable and disable single sign-on in the MSR
settings. Format: https://host[:port], where port is the value you
used with --replica-https-port. Since the HSTS (HTTP
Strict-Transport-Security) header is included in all API responses, make
sure to specify the FQDN (Fully Qualified Domain Name) of your MSR, or
your browser may refuse to load the web interface.
--dtr-key
$MSR_KEY
Use a PEM-encoded TLS private key for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
private key with --dtr-key "$(cat key.pem)".
--dtr-storage-volume
$MSR_STORAGE_VOLUME
Customize the volume to store Docker images. By default MSR creates a
volume to store the Docker images in the local filesystem of the node
where MSR is running, without high-availability. Use this flag to
specify a full path or volume name for MSR to store images. For
high-availability, make sure all MSR replicas can read and write data on
this volume. If you’re using NFS, use --nfs-storage-url instead.
--enable-client-cert-auth
$ENABLE_CLIENT_CERT_AUTH
Enables TLS client certificate authentication; use
--enable-client-cert-auth=false to disable it. If enabled, MSR will
additionally authenticate users via TLS client certificates. You must
also specify the root certificate authorities (CAs) that issued the
certificates with --client-cert-auth-ca.
--enable-pprof
$MSR_PPROF
Enables pprof profiling of the server. Use --enable-pprof=false to
disable it. Once MSR is deployed with this flag, you can access the
pprof endpoint for the API server at /debug/pprof, and the registry
endpoint at /registry_debug_pprof/debug/pprof.
--existing-replica-id
$MSR_REPLICA_ID
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to an existing healthy replica’s database.
--force-recreate-nfs-volume
$FORCE_RECREATE_NFS_VOLUME
Force MSR to recreate NFS volumes on the replica specified by
--existing-replica-id.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--http-proxy
$MSR_HTTP_PROXY
The HTTP proxy used for outgoing requests.
--https-proxy
$MSR_HTTPS_PROXY
The HTTPS proxy used for outgoing requests.
--log-host
$LOG_HOST
The syslog endpoint to send logs to. Use
this flag if you set --log-protocol to tcp or udp.
--log-level
$LOG_LEVEL
Log level for all container logs when logging to syslog. Default: INFO.
The supported log levels are debug, info, warn, error,
or fatal.
--log-protocol
$LOG_PROTOCOL
The protocol for sending logs. Default is internal. By default, MSR
internal components log information using the logger specified in the
Docker daemon in the node where the MSR replica is deployed. Use this
option to send MSR logs to an external syslog system. The supported
values are tcp, udp, and internal. Internal is the default
option, stopping MSR from sending logs to an external system. Use this
flag with --log-host.
--max-wait
$MAX_WAIT
The maximum amount of time that MSR allows for an operation to complete.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, "10m"
represents 10 minutes.
--nfs-options
$NFS_OPTIONS
Pass in NFS volume options verbatim for the replica specified in the
--existing-replica-id option. The NFS configuration must be set with
--nfs-storage-url explicitly to use this option. Specifying
--nfs-options will pass in character-for-character the options
specified in the argument when creating or recreating the NFS volume.
For instance, to use NFS v4 with async, pass in “rw,nfsvers=4,async” as
the argument.
--no-proxy
$MSR_NO_PROXY
List of domains the proxy should not be used for. When using
--http-proxy you can use this flag to specify a list of domains that
you don’t want to route through the proxy. Format acme.com[,acme.org].
--reinitialize-storage
$REINITIALIZE_STORAGE
Set the flag when you have changed storage backends but have not moved
the contents of the old storage backend to the new one. Erases
all tags in the registry.
--replica-http-port
$REPLICA_HTTP_PORT
The public HTTP port for the MSR replica. Default is 80. This allows
you to customize the HTTP port where users can reach MSR. Once users
access the HTTP port, they are redirected to use an HTTPS connection,
using the port specified with --replica-https-port. This port can also
be used for unencrypted health checks.
--replica-https-port
$REPLICA_HTTPS_PORT
The public HTTPS port for the MSR replica. Default is 443. This
allows you to customize the HTTPS port where users can reach MSR. Each
replica can use a different port.
--replica-rethinkdb-cache-mb
$RETHINKDB_CACHE_MB
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto, which is calculated as
(available_memory - 1024) / 2. This setting changes the RethinkDB cache
usage per replica; run the command once per replica to change each one.
--storage-migrated
$STORAGE_MIGRATED
A flag added in 2.6.4 which lets you indicate the migration status of
your storage data. Specify this flag if you are migrating to a new
storage backend and have already moved all contents from your old
backend to your new one. If not specified, MSR will assume the new
backend is empty during a backend storage switch, and consequently
destroy your existing tags and related image metadata.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
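For example, after moving all blobs from the old storage backend to the
new one, a reconfigure sketch that preserves existing tags might look as
follows (placeholder values):

docker run -it --rm mirantis/dtr:2.9.21 reconfigure \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --existing-replica-id <replica-id> \
  --storage-migrated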
The remove command scales down your MSR cluster by removing exactly
one replica. All other replicas must be healthy and will remain healthy
after this operation.
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to the database of an existing healthy replica.
--force
$DTR_FORCE_REMOVE_REPLICA
Ignore pre-checks when removing a replica.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--replica-id
$MSR_REMOVE_REPLICA_ID
DEPRECATED. Alias for --replica-ids.
--replica-ids
$MSR_REMOVE_REPLICA_IDS
A comma separated list of IDs of replicas to remove from the cluster.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to MITM
(man-in-the-middle) attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
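A sketch of removing a single replica, with placeholder values:

docker run -it --rm mirantis/dtr:2.9.21 remove \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --existing-replica-id <healthy-replica-id> \
  --replica-ids <replica-id-to-remove>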
The restore command performs a fresh installation of MSR, and
reconfigures it with configuration data from a tar file generated by
mirantis/dtr backup. If you are restoring MSR after a failure, make
sure you have fully destroyed the old MSR first.
There are three actions you can take to recover an unhealthy MSR cluster:
If the majority of replicas are healthy, remove the unhealthy nodes
from the cluster, and join new nodes for high availability.
If the majority of replicas are unhealthy, use the
emergency-repair command to revert your
cluster to a single MSR replica.
If you cannot repair your cluster to a single replica, you must
restore from an existing backup, using the restore command.
This command does not restore Docker images. You should implement a
separate restore procedure for the Docker images stored in your
registry, taking into consideration whether your MSR installation is
configured to store images on the local filesystem or using a cloud
provider.
After restoring the cluster, you should use the join command to add
more MSR replicas for high availability.
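A sketch of restoring from a previously created backup file, with
placeholder values; the backup tar file is read from standard input:

docker run -i --rm mirantis/dtr:2.9.21 restore \
  --ucp-url https://<mke-url> \
  --ucp-ca "$(cat ca.pem)" \
  --ucp-node <worker-node-hostname> < backup.tar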
--async-nfs
$ASYNC_NFS
Use async NFS volume options on the replica specified by
--existing-replica-id.
--client-cert-auth-ca
$CLIENT_CA
PEM-encoded TLS root CA certificates for client certificate
authentication.
--custom-ca-cert-bundle
$CUSTOM_CA_CERTS_BUNDLE
Provide a file containing additional CA certificates for MSR service
containers to use when verifying TLS server certificates.
--debug
$DEBUG
Enable debug mode for additional logs.
--existing-replica-id
$MSR_REPLICA_ID
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to an existing healthy replica’s database.
--dtr-ca
$MSR_CA
Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
CA certificate with --dtr-ca "$(cat ca.pem)".
--dtr-cert
$MSR_CERT
Use a PEM-encoded TLS certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
certificate with --dtr-cert "$(cat cert.pem)".
--dtr-external-url
$MSR_EXTERNAL_URL
URL of the host or load balancer clients use to reach MSR. When you use
this flag, users are redirected to MKE for logging in. Once
authenticated, they are redirected to the URL you specify in this flag.
If you don’t use this flag, MSR is deployed without single sign-on with
MKE. Users and teams are shared but users log in separately into the two
applications. You can enable and disable single sign-on within your MSR
system settings. Format https://host[:port], where port is the value
you used with --replica-https-port.
--dtr-key
$MSR_KEY
Use a PEM-encoded TLS private key for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
private key with --dtr-key "$(cat key.pem)".
--dtr-storage-volume
$MSR_STORAGE_VOLUME
Note
One of three options you can use for MSR backend storage, the other
two being --dtr-use-default-storage and --nfs-storage-url. The
use of one of the three options is mandatory, depending on your
setup, to allow MSR to fall back to the storage setting you
configured at the time of backup.
If you have previously configured MSR to use a full path or volume name
for storage, specify the --dtr-storage-volume option, as this will
cause MSR to use the same setting on restore. Refer to
mirantis/dtr install and mirantis/dtr reconfigure for usage detail.
--dtr-use-default-storage
$MSR_DEFAULT_STORAGE
Note
One of three options you can use for MSR backend storage, the other
two being --dtr-storage-volume and --nfs-storage-url. The
use of one of the three options is mandatory, depending on your
setup, to allow for MSR to fall back to the storage setting you
configured at the time of backup.
If cloud storage was previously configured, then the default storage on
restore is cloud storage. Otherwise, local storage is used.
--nfs-storage-url
$NFS_STORAGE_URL
Note
One of three options you can use for MSR backend storage, the other
two being --dtr-storage-volume and --dtr-use-default-storage.
The use of one of the three options is mandatory, depending on your
setup, to allow for MSR to fall back to the storage setting you
configured at the time of backup.
If NFS was previously configured, you must manually create a storage
volume on each MSR node and specify --dtr-storage-volume with the
newly-created volume instead. For additional NFS configuration options
to support NFS v4, refer to mirantis/dtr install and mirantis/dtr reconfigure.
--enable-client-cert-auth
$ENABLE_CLIENT_CERT_AUTH
Enables TLS client certificate authentication; use
--enable-client-cert-auth=false to disable it.
--enable-pprof
$MSR_PPROF
Enables pprof profiling of the server. Use --enable-pprof=false to
disable it. Once MSR is deployed with this flag, you can access the
pprof endpoint for the API server at /debug/pprof, and the registry
endpoint at /registry_debug_pprof/debug/pprof.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--http-proxy
$MSR_HTTP_PROXY
The HTTP proxy used for outgoing requests.
--https-proxy
$MSR_HTTPS_PROXY
The HTTPS proxy used for outgoing requests.
--log-host
$LOG_HOST
The syslog endpoint to send logs to. Use this
flag if you set --log-protocol to tcp or udp.
--log-level
$LOG_LEVEL
Log level for all container logs when logging to syslog. Default:
INFO. The supported log levels are debug, info, warn,
error, or fatal.
--log-protocol
$LOG_PROTOCOL
The protocol for sending logs. Default is internal. By default, MSR
internal components log information using the logger specified in the
Docker daemon in the node where the MSR replica is deployed. Use this
option to send MSR logs to an external syslog system. The supported
values are tcp, udp, and internal. Internal is the default option,
stopping MSR from sending logs to an external system. Use this flag with
--log-host.
--max-wait
$MAX_WAIT
The maximum amount of time that MSR allows for an operation to complete.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, "10m"
represents 10 minutes.
--nfs-options
$NFS_OPTIONS
Pass in NFS volume options verbatim for the replica specified by
--existing-replica-id.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--no-proxy
$MSR_NO_PROXY
List of domains the proxy should not be used for. When using
--http-proxy you can use this flag to specify a list of domains that
you don’t want to route through the proxy. Format acme.com[,acme.org].
--replica-http-port
$REPLICA_HTTP_PORT
The public HTTP port for the MSR replica. Default is 80. This allows
you to customize the HTTP port where users can reach MSR. Once users
access the HTTP port, they are redirected to use an HTTPS connection,
using the port specified with --replica-https-port. This port can
also be used for unencrypted health checks.
--replica-https-port
$REPLICA_HTTPS_PORT
The public HTTPS port for the MSR replica. Default is 443. This
allows you to customize the HTTPS port where users can reach MSR. Each
replica can use a different port.
--replica-id
$MSR_INSTALL_REPLICA_ID
Assign a 12-character hexadecimal ID to the MSR replica. Mandatory.
--replica-rethinkdb-cache-mb
$RETHINKDB_CACHE_MB
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto. Auto is (available_memory-1024)/2. This config allows changing the RethinkDB cache usage per
replica. You need to run it once per replica to change each one.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to
man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
--ucp-node
$UCP_NODE
The hostname of the MKE node to use to deploy MSR. Random by default.
You can find the hostnames of the nodes in the cluster in the MKE web
interface, or by running docker node ls on an MKE manager
node. Note that MKE and MSR must not be installed on the same node;
instead, install MSR on worker nodes that will be managed by MKE.
The ID of an existing MSR replica. To add, remove or modify MSR, you
must connect to an existing healthy replica’s database.
--help-extended
$MSR_EXTENDED_HELP
Display extended help text for a given command.
--nocolor
$NOCOLOR
Disable output coloring in logs.
--ucp-ca
$UCP_CA
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from https://<mke-url>/ca, and use --ucp-ca "$(cat ca.pem)".
--ucp-insecure-tls
$UCP_INSECURE_TLS
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to
man-in-the-middle attacks. For production deployments, use --ucp-ca "$(cat ca.pem)" instead.
--ucp-password
$UCP_PASSWORD
The MKE administrator password.
--ucp-url
$UCP_URL
The MKE URL including domain and port.
--ucp-username
$UCP_USERNAME
The MKE administrator username.
--max-wait
$MAX_WAIT
The maximum amount of time that MSR allows for an operation to complete.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, "10m"
represents 10 minutes.
[FIELD-7122] Fixed an issue wherein the MSR web UI would crash whenever a tag
had no layers to display. Now in such cases, the MSR web UI reports that
layer details are not available for the particular image.
[FIELD-7180] Fixed an issue wherein dtr-registry with S3 storage could crash.
A vulnerability in the package_index module of pypa/setuptools versions
up to 69.1.1 allows for remote code execution via its download functions.
These functions, which are used to download packages from URLs provided
by users or retrieved from package index servers, are susceptible to code
injection. If these functions are exposed to user-controlled inputs, such
as package URLs, they can execute arbitrary commands on the system.
The issue is fixed in version 70.0.
[ENGDTR-4225] Fixed an issue wherein login events were not created.
The auditAuthLogsEnabled parameter in the /settings API endpoint must
be set for MSR to generate login events on any successful or failed login.
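A minimal sketch of enabling the setting, assuming the
/api/v0/meta/settings path exposed by the MSR API; the URL and
credentials are placeholders:

curl -u <admin>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"auditAuthLogsEnabled": true}' \
  https://<msr-url>/api/v0/meta/settings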
[ENGDTR-4158] Fixed an issue wherein the initialEvaluation flag of
a created or updated tag pruning policy was set to true, which caused its
evaluation to run in the API server. The evaluation of the policy is now
executed in the JobRunner as a single tag_prune job.
[ENGDTR-4159] Fixed an issue wherein the tag pruning policy feature,
responsible for the automated testing of tags and providing the count of
affected tags, was preventing the creation of policies. To ensure
the reliable creation of tag pruning policies, this feature has been removed.
Consequently, users will not see the number of affected tags when creating
new policies. For testing purposes before evaluation, Mirantis recommends
that you use the /pruningPolicies/test API endpoint.
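A minimal sketch of such a test call; the repository-scoped path and the
rule payload are assumptions to be checked against the MSR API reference,
and the URL, namespace, and credentials are placeholders:

curl -u <admin>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"rules": [{"field": "tag", "operator": "eq", "values": ["latest"]}]}' \
  https://<msr-url>/api/v0/repositories/<namespace>/<repo>/pruningPolicies/test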
Verifying a certificate chain which contains a certificate with
an unknown public key algorithm will cause Certificate.Verify to
panic. This affects all crypto/tls clients, and servers that set
Config.ClientAuth to VerifyClientCertIfGiven or
RequireAndVerifyClientCert. The default behavior is for TLS servers
to not verify client certificates.
When parsing a multipart form (either explicitly with
Request.ParseMultipartForm or implicitly with
Request.FormValue, Request.PostFormValue, or
Request.FormFile), limits on the total size of the parsed form
were not applied to the memory consumed while reading a single form
line. This permits a maliciously crafted input containing very long
lines to cause allocation of arbitrarily large amounts of memory,
potentially leading to memory exhaustion. With fix,
the ParseMultipartForm function now correctly limits the maximum
size of form lines.
CVE-2023-45288: Resolved. The CVE has been reserved by an organization or
individual and is not currently available in the NVD.
When following an HTTP redirect to a domain which is not a subdomain
match or exact match of the initial domain, an http.Client does not
forward sensitive headers such as “Authorization” or “Cookie”. For
example, a redirect from foo.com to www.foo.com will forward
the Authorization header, but a redirect to bar.com will not. A
maliciously crafted HTTP redirect could cause sensitive headers
to be unexpectedly forwarded.
An out-of-bounds read flaw was found in the CLARRV, DLARRV, SLARRV,
and ZLARRV functions in lapack through version 3.10.0, as also used
in OpenBLAS before version 0.3.18. Specially crafted inputs passed
to these functions could cause an application using lapack to crash
or possibly disclose portions of its memory.
Pillow through 10.1.0 allows PIL.ImageMath.eval Arbitrary Code
Execution via the environment parameter, a different vulnerability
than CVE-2022-22817 (which was about the expression parameter).
(FIELD-6040) The previously slow /repositories/tags API call has been
significantly improved, and thus operators no longer need to wait long
periods of time for the tags to display.
(ENGDTR-4066) Whereas job filtering was previously only available for the
running job status, the functionality is now extended to include
all available job status options.
(FIELD-6748) Fixed an issue wherein the navigation buttons in the MSR web UI
Organizations tab were not enabled, and thus users could not
navigate to organizations that were not in the default view of 10.
Certifi is a curated collection of Root Certificates for validating
the trustworthiness of SSL certificates while verifying the identity
of TLS hosts. Certifi prior to version 2023.07.22 recognizes “e-Tugra”
root certificates. e-Tugra’s root certificates were subject to an
investigation prompted by reporting of security issues in their
systems. Certifi 2023.07.22 removes root certificates from “e-Tugra”
from the root store.
containerd is an open source container runtime. A bug was found in
containerd prior to versions 1.6.18 and 1.5.18 where supplementary
groups are not set up properly inside a container. If an attacker has
direct access to a container and manipulates their supplementary group
access, they may be able to use supplementary group access to bypass
primary group restrictions in some cases, potentially gaining access
to sensitive information or gaining the ability to execute code in
that container. Downstream applications that use the containerd client
library may be affected as well. This bug has been fixed in containerd
v1.6.18 and v.1.5.18. Users should update to these versions and
recreate containers to resolve this issue. Users who rely on a
downstream application that uses containerd’s client library should
check that application for a separate advisory and instructions. As a
workaround, ensure that the USER $USERNAME Dockerfile instruction
is not used. Instead, set the container entrypoint to a value similar
to ENTRYPOINT ["su", "-", "user"] to allow su to properly set
up supplementary groups.
containerd is an open source container runtime. Before versions 1.6.18
and 1.5.18, when importing an OCI image, there was no limit on the
number of bytes read for certain files. A maliciously crafted image
with a large file where a limit was not applied could cause a denial
of service. This bug has been fixed in containerd 1.6.18 and 1.5.18.
Users should update to these versions to resolve the issue. As a
workaround, ensure that only trusted images are used and that only
trusted users have permissions to import images.
There is a type confusion vulnerability relating to X.400 address
processing inside an X.509 GeneralName. X.400 addresses were parsed as
an ASN1_STRING but the public structure definition for
GENERAL_NAME incorrectly specified the type of the x400Address
field as ASN1_TYPE. This field is subsequently interpreted by the
OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than
an ASN1_STRING. When CRL checking is enabled (i.e. the application
sets the X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow
an attacker to pass arbitrary pointers to a memcmp call, enabling them
to read memory contents or enact a denial of service. In most cases,
the attack requires the attacker to provide both the certificate chain
and CRL, neither of which need to have a valid signature. If the
attacker only controls one of these inputs, the other input must
already contain an X.400 address as a CRL distribution point, which is
uncommon. As such, this vulnerability is most likely to only affect
applications which have implemented their own functionality for
retrieving CRLs over a network.
The public API function BIO_new_NDEF is a helper function used for
streaming ASN.1 data via a BIO. It is primarily used internally to
OpenSSL to support the SMIME, CMS and PKCS7 streaming capabilities,
but may also be called directly by end user applications. The function
receives a BIO from the caller, prepends a new BIO_f_asn1 filter
BIO onto the front of it to form a BIO chain, and then returns the new
head of the BIO chain to the caller. Under certain conditions, for
example if a CMS recipient public key is invalid, the new filter BIO
is freed and the function returns a NULL result indicating a failure.
However, in this case, the BIO chain is not properly cleaned up and
the BIO passed by the caller still retains internal pointers to the
previously freed filter BIO. If the caller then goes on to call
BIO_pop() on the BIO then a use-after-free will occur. This will
most likely result in a crash. This scenario occurs directly in the
internal function B64_write_ASN1() which may cause
BIO_new_NDEF() to be called and will subsequently call
BIO_pop() on the BIO. This internal function is in turn called by
the public API functions PEM_write_bio_ASN1_stream,
PEM_write_bio_CMS_stream, PEM_write_bio_PKCS7_stream,
SMIME_write_ASN1,SMIME_write_CMS and SMIME_write_PKCS7. Other
public API functions that may be impacted by this include
i2d_ASN1_bio_stream, BIO_new_CMS, BIO_new_PKCS7,
i2d_CMS_bio_stream and i2d_PKCS7_bio_stream. The OpenSSL cms
and smime command line applications are similarly affected.
containerd is an open source container runtime. A bug was found in
containerd’s CRI implementation where a user can exhaust memory on the
host. In the CRI stream server, a goroutine is launched to handle
terminal resize events if a TTY is requested. If the user’s process
fails to launch due to, for example, a faulty command, the goroutine
will be stuck waiting to send without a receiver, resulting in a
memory leak. Kubernetes and crictl can both be configured to use
containerd’s CRI implementation and the stream server is used for
handling container IO. This bug has been fixed in containerd 1.6.12
and 1.5.16. Users should update to these versions to resolve the
issue. Users unable to upgrade should ensure that only trusted images
and commands are used and that only trusted users have permissions to
execute commands in running containers.
The function PEM_read_bio_ex() reads a PEM file from a BIO and
parses and decodes the name (e.g. CERTIFICATE), any header
data and the payload data. If the function succeeds then the
name_out, header and data arguments are populated with
pointers to buffers containing the relevant decoded data. The caller
is responsible for freeing those buffers. It is possible to construct
a PEM file that results in 0 bytes of payload data. In this case
PEM_read_bio_ex() will return a failure code but will populate the
header argument with a pointer to a buffer that has already been
freed. If the caller also frees this buffer then a double free will
occur. This will most likely lead to a crash. This could be exploited
by an attacker who has the ability to supply malicious PEM files for
parsing to achieve a denial of service attack. The functions
PEM_read_bio() and PEM_read() are simple wrappers around
PEM_read_bio_ex() and therefore these functions are also directly
affected. These functions are also called indirectly by a number of
other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and
SSL_CTX_use_serverinfo_file() which are also vulnerable. Some
OpenSSL internal uses of these functions are not vulnerable because
the caller does not free the header argument if PEM_read_bio_ex()
returns a failure code. These locations include the
PEM_read_bio_TYPE() functions as well as the decoders introduced
in OpenSSL 3.0. The OpenSSL asn1parse command line application is also
impacted by this issue.
Heap/stack buffer overflow in the dlang_lname function in
d-demangle.c in libiberty allows attackers to potentially
cause a denial of service (segmentation fault and crash) via a crafted
mangled symbol.
paraparser in ReportLab before 3.5.31 allows remote code execution
because start_unichar in paraparser.py evaluates untrusted user input
in a unichar element in a crafted XML document with
<unichar code=" followed by arbitrary Python code, a similar issue
to CVE-2019-17626.
ReportLab through 3.5.26 allows remote code execution because of
toColor(eval(arg)) in colors.py, as demonstrated by a crafted
XML document with <span color=" followed by arbitrary Python code.
(FIELD-5384) A search field is now present on the Organizations
screen in the MSR web UI, to aid customers in filtering through large numbers
of organizations on their clusters.
(ENGDTR-3949) Improved the message displayed on any attempt to override
an image with the same tag. Previously error 500, now denied: Repository
is marked as immutable.
In the extension script, a SQL Injection vulnerability was found in
PostgreSQL if it uses @extowner@, @extschema@, or
@extschema:...@ inside a quoting construct (dollar quoting,
'', or ""). If an administrator has installed files of a
vulnerable, trusted, non-bundled extension, an attacker with
database-level CREATE privilege can execute arbitrary code as the
bootstrap superuser.
A NULL pointer can be dereferenced when signatures are being verified on
PKCS7 signed or signedAndEnveloped data. In case the hash algorithm used
for the signature is known to the OpenSSL library but the implementation of
the hash algorithm is not available the digest initialization will fail.
There is a missing check for the return value from the initialization
function which later leads to invalid usage of the digest API most likely
leading to a crash. The unavailability of an algorithm can be caused by
using FIPS enabled configuration of providers or more commonly by not
loading the legacy provider. PKCS7 data is processed by the SMIME library
calls and also by the time stamp (TS) library calls. The TLS implementation
in OpenSSL does not call these functions however third party applications
would be affected if they call these functions to verify signatures on
untrusted data.
An invalid pointer dereference on read can be triggered when an application
tries to check a malformed DSA public key by the
EVP_PKEY_public_check() function. This will most likely lead to an
application crash. This function can be called on public keys supplied from
untrusted sources which could allow an attacker to cause a denial of
service attack. The TLS implementation in OpenSSL does not call this
function but applications might call the function if there are additional
security requirements imposed by standards such as FIPS 140-3.
An invalid pointer dereference on read can be triggered when an application
tries to load malformed PKCS7 data with the d2i_PKCS7(),
d2i_PKCS7_bio() or d2i_PKCS7_fp() functions. The result of the
dereference is an application crash which could lead to a denial of service
attack. The TLS implementation in OpenSSL does not call this function
however third party applications might call these functions on untrusted
data.
In PostgreSQL, a modified, unauthenticated server can send an unterminated
string during the establishment of Kerberos transport encryption. In
certain conditions a server can cause a libpq client to over-read and
report an error message containing uninitialized bytes.
If an X.509 certificate contains a malformed policy constraint and policy
processing is enabled, then a write lock will be taken twice recursively.
On some operating systems (most widely: Windows) this results in a denial
of service when the affected process hangs. Policy processing being enabled
on a publicly facing server is not considered to be a common setup. Policy
processing is enabled by passing the -policy argument to the command
line utilities or by calling the X509_VERIFY_PARAM_set1_policies()
function. Update (31 March 2023): The description of the policy processing
enablement was corrected based on CVE-2023-0466.
An issue was discovered in Oniguruma 6.2.0, as used in Oniguruma-mod
in Ruby through 2.4.1 and mbstring in PHP through 7.1.5. A stack
out-of-bounds write in
onigenc_unicode_get_case_fold_codes_by_str() occurs during regular
expression compilation. Code point 0xFFFFFFFF is not properly
handled in unicode_unfold_key(). A malformed regular expression
could result in 4 bytes being written off the end of a stack buffer of
expand_case_fold_string() during the call to
onigenc_unicode_get_case_fold_codes_by_str(), a typical stack
buffer overflow.
(FIELD-5447) Fixed an issue with the /api/v0/api_tokens endpoint wherein
changing the value of the pageStart parameter did not change the page
returned in the request output.
When upgrading from a previous MSR version, for the fix to go into effect you
must run a particular command sequence using the RethinkDB CLI. Contact
Mirantis support for the RethinkDB CLI instructions. Fresh installations do
not require the manual CLI steps.
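For reference, a minimal sketch of paging through the endpoint with curl;
the host, credentials, and the pageSize companion parameter are illustrative
assumptions rather than documented values:

   # Hypothetical host and token; adjust pageStart to page through results.
   curl -ks -u admin:$TOKEN \
     "https://msr.example.com/api/v0/api_tokens?pageSize=10&pageStart=0"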
(ENGDTR-3421) Fixed an issue wherein the MSR web UI would break whenever a
user tried to access the repository page for an organization from a
repository list.
(FIELD-4211) MSR now issues a warning when installations or upgrades fail due
to the disabling of MKE admin container scheduling.
SQLite through 3.40.0, when relying on --safe for execution of an
untrusted CLI script, does not properly implement the
azProhibitedFunctions protection mechanism, and instead allows UDF
functions such as WRITEFILE.
An issue was discovered in Oniguruma 6.2.0, as used in
Oniguruma-mod in Ruby through 2.4.1 and mbstring in PHP
through 7.1.5. A stack out-of-bounds write in
onigenc_unicode_get_case_fold_codes_by_str() occurs during regular
expression compilation. Code point 0xFFFFFFFF is not properly
handled in unicode_unfold_key(). A malformed regular expression
could result in 4 bytes being written off the end of a stack buffer of
expand_case_fold_string() during the call to
onigenc_unicode_get_case_fold_codes_by_str(), a typical stack
buffer overflow.
(FIELD-5205) MSR repo names are now limited to 55 characters at creation.
Previously, MSR users could create repo names longer than 55 characters
despite a 55-character system limitation, which resulted in non-specific
error messages.
(FIELD-4421) Fixed an issue wherein the MSR web UI would sometimes go blank
when the user clicked any of the toggles on the Settings
page.
(FIELD-5131) Fixed an issue wherein API calls to push mirror tags from MSR
2.9.x to MSR 3.0.x would fail.
(FIELD-5121) Fixed an issue wherein promotion policies listed using the API
were missing a counter header.
(ENGDTR-2783) Fixed an issue wherein API requests with an
improperly specified Helm chart version returned an internal server error.
A bug was found in the containerd CRI implementation where programs
inside a container can cause the containerd daemon to consume memory
without bound during invocation of the ExecSync API. This can
cause containerd to consume all available memory on the computer,
denying service to other legitimate workloads. Kubernetes and crictl
can both be configured to use the containerd CRI implementation;
ExecSync may be used when running probes or when executing
processes via an exec facility. This bug has been fixed in
containerd 1.6.6 and 1.5.13. Users should update to these versions to
resolve the issue. Users unable to upgrade should ensure that only
trusted images and commands are used.
An issue was discovered in the HTTP FileResponse class in Django 3.2
before 3.2.15 and 4.0 before 4.0.7. An application is vulnerable to a
reflected file download (RFD) attack that sets the Content-Disposition
header of a FileResponse when the filename is derived from
user-supplied input.
When curl < 7.84.0 saves cookies, alt-svc and hsts data to local
files, it makes the operation atomic by finalizing the operation with
a rename from a temporary name to the final target file name. In that
rename operation, it might accidentally widen the permissions for
the target file, leaving the updated file accessible to more users
than intended.
OpenSSL supports creating a custom cipher via the legacy
EVP_CIPHER_meth_new() function and associated function calls. This
function was deprecated in OpenSSL 3.0 and application authors are
instead encouraged to use the new provider mechanism in order to
implement custom ciphers. OpenSSL versions 3.0.0 to 3.0.5 incorrectly
handle legacy custom ciphers passed to the EVP_EncryptInit_ex2(),
EVP_DecryptInit_ex2(), and EVP_CipherInit_ex2() functions (as
well as other similarly named encryption and decryption initialization
functions). Instead of using the custom cipher directly, it incorrectly
tries to fetch an equivalent cipher from the available providers. An
equivalent cipher is found based on the NID passed to
EVP_CIPHER_meth_new(). This NID is supposed to represent the
unique NID for a given cipher. However, it is possible for an
application to incorrectly pass NID_undef as this value in the call to
EVP_CIPHER_meth_new(). When NID_undef is used in this way, the
OpenSSL encryption/decryption initialization function will match the
NULL cipher as being equivalent and will fetch this from the
available providers. This will succeed if the default provider has
been loaded (or if a third party provider has been loaded that offers
this cipher). Using the NULL cipher means that the plaintext is
emitted as the ciphertext. Applications are only affected by this
issue if they call EVP_CIPHER_meth_new() using NID_undef and
subsequently use it in a call to an encryption/decryption
initialization function. Applications that only use SSL/TLS are not
impacted by this issue. Fixed in OpenSSL 3.0.6 (Affected 3.0.0-3.0.5).
A buffer overrun can be triggered in X.509 certificate verification,
specifically in name constraint checking. Note that this occurs after
certificate chain signature verification and requires either a CA to
have signed the malicious certificate or for the application to
continue certificate verification despite failure to construct a path
to a trusted issuer. An attacker can craft a malicious email address
to overflow four attacker-controlled bytes on the stack. This buffer
overflow could result in a crash (causing a denial of service) or
potentially remote code execution. Many platforms implement stack
overflow protections which would mitigate against the risk of remote
code execution. The risk may be further mitigated based on stack
layout for any given platform/compiler. Users are encouraged to
upgrade to a new version as soon as possible. In a TLS client, this
can be triggered by connecting to a malicious server. In a TLS server,
this can be triggered if the server requests client authentication and
a malicious client connects. Fixed in OpenSSL 3.0.7 (affected versions
3.0.0 through 3.0.6).
A buffer overrun can be triggered in X.509 certificate verification,
specifically in name constraint checking. Note that this occurs after
certificate chain signature verification and requires either a CA to
have signed a malicious certificate or for an application to continue
certificate verification despite failure to construct a path to a
trusted issuer. An attacker can craft a malicious email address in a
certificate to overflow an arbitrary number of bytes containing the
‘.’ character (decimal 46) on the stack. This buffer overflow could
result in a crash (causing a denial of service). In a TLS client, this
can be triggered by connecting to a malicious server. In a TLS server,
this can be triggered if the server requests client authentication and
a malicious client connects.
Updated Golang to version 1.17.13 to resolve vulnerabilities. For more
information, refer to the Go release announcements for versions 1.17.12 and
1.17.13.
ncurses 6.3 before patch 20220416 has an out-of-bounds read and
segmentation violation in convert_strings in
tinfo/read_entry.c in the terminfo library.
In OpenLDAP 2.x before 2.5.12 and 2.6.x before 2.6.2, an SQL injection
vulnerability exists in the experimental back-sql backend to
slapd, through an SQL statement within an LDAP query. This can
occur during an LDAP search operation when the search filter is
processed, due to a lack of proper escaping.
An issue was discovered in Django 3.2 before 3.2.14 and 4.0 before
4.0.6. The Trunc() and Extract() database functions are
subject to SQL injection if untrusted data is used as a
kind/lookup_name value. Applications that constrain the
lookup name and kind choice to a known safe list are unaffected.
The OpenSSL 3.0.4 release introduced a serious bug in the RSA
implementation for X86_64 CPUs supporting the AVX512IFMA
instructions. This issue makes the RSA implementation with 2048-bit
private keys incorrect on such machines and memory corruption will
happen during the computation. As a consequence of the memory
corruption, an attacker may be able to trigger a remote code execution
on the machine performing the computation. SSL/TLS servers or other
servers using 2048-bit RSA private keys running on machines that
support AVX512IFMA instructions of the X86_64 architecture are
affected by this issue.
In Python (aka CPython) through 3.10.4, the mailcap module does not
add escape characters into commands discovered in the system mailcap
file. This may allow attackers to inject shell commands into
applications that call mailcap.findmatch with untrusted input
(if they lack validation of user-provided file names or arguments).
A use-after-free in Busybox 1.35-x’s awk applet leads to denial of
service and possibly code execution when processing a crafted awk
pattern in the copyvar function.
libcurl would reuse a previously created connection even when a
TLS or SSH-related option had been changed that should have prohibited
reuse. libcurl keeps previously used connections in a connection
pool for subsequent transfers to reuse if one of them matches the
setup. However, several TLS and SSH settings were left out from the
configuration match checks, making them match too easily.
libcurl provides the CURLOPT_CERTINFO option to allow
applications to request details to be returned about a server’s
certificate chain. Due to an erroneous function, a malicious server
could make libcurl built with NSS get stuck in a never-ending
busy-loop when trying to retrieve that information.
sqclass.cpp in Squirrel through 2.2.5 and 3.x through 3.1 allows
an out-of-bounds read in the core interpreter that can lead to code
execution. If a victim executes an attacker-controlled squirrel
script, it is possible for the attacker to break out of the squirrel
script sandbox even if all dangerous functionality such as file system
functions have been disabled. An attacker might abuse this bug to
target, for example, cloud services that allow customization using
Squirrel scripts, or to distribute malware through video games that embed
a Squirrel Engine.
(FIELD-4718) Fixed a pagination issue in the MSR API GET
/api/v0/imagescan/scansummary/cve/{cve} endpoint. The fix requires that
you upgrade MSR to 2.9.8 and that you take certain manual steps using the
database CLI (contact Mirantis Support for the steps). Note that the manual
CLI steps are not required for fresh MSR installations.
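As a hedged illustration only, a paged request against the fixed endpoint
might look as follows; the host, credentials, and pagination parameter names
are assumptions:

   # Hypothetical host and token; the CVE identifier is the path parameter.
   curl -ks -u admin:$TOKEN \
     "https://msr.example.com/api/v0/imagescan/scansummary/cve/CVE-2021-23017?pageStart=0&pageSize=25"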
(ENGDTR-3184) Fixed an issue wherein Ubuntu 22.04-based images could not be
successfully scanned for vulnerabilities.
BusyBox up through version 1.35.0 allows remote attackers to execute
arbitrary code when netstat is used to print the value of a DNS PTR
record to a VT-compatible terminal. Alternatively, attackers can
choose to change the colors of the terminal.
Prior to 1.9.10, GORM permits SQL injection through incomplete
parentheses. Note that misusing GORM by passing untrusted user input
when GORM expects trusted SQL fragments is not a vulnerability in GORM
but in the application.
A bug was found in containerd prior to versions 1.6.1, 1.5.10, and
1.4.12 in which containers launched through containerd’s CRI
implementation on Linux with a specially-crafted image configuration
could gain access to read-only copies of arbitrary files and
directories on the host.
The CVE is present in the JobRunner image; however, while the affected
component is a required dependency of a component running in JobRunner, its
functionality is never exercised.
In OpenLDAP 2.x prior to 2.5.12 and in 2.6.x prior to 2.6.2, a SQL
injection vulnerability exists in the experimental back-sql backend to
slapd, via a SQL statement within an LDAP query. This can occur during
an LDAP search operation when the search filter is processed, due to a
lack of proper escaping.
Though Alpine Linux contains the affected OpenSSL version, the
c_rehash script has been replaced by a C binary.
The c_rehash script does not properly sanitize shell
metacharacters to prevent command injection. Some operating systems
distribute this script in a manner in which it is automatically
executed, in which case attackers can execute arbitrary commands with
the privileges of the script. Use of this script is considered
obsolete and should be replaced by the OpenSSL rehash command line
tool. The vulnerability is fixed in OpenSSL 3.0.3, OpenSSL 1.1.1o, and
in OpenSSL 1.0.2ze.
NumPy 1.16.0 and earlier use the pickle Python module in an unsafe
manner that allows remote attackers to execute arbitrary code via a
crafted serialized object, as demonstrated by a numpy.load call.
Note that third parties dispute the issue as, for example, it is a
behavior that can have legitimate applications in loading serialized
Python object arrays from trusted and authenticated sources.
Improvements have been made to clarify the presentation of vulnerability scan
summary counts in the MSR web UI, for Critical,
High, Medium, and Low in both the
Vulnerabilities column and in the View Details view.
Note
Although ENGDTR-3008 was reported as a known issue for MSR 2.9.6, the reported counts were at
all times reliable and factually correct.
Fixed an issue in the MSR web UI wherein an input was missing from the team
LDAP sync form that prevented users from submitting the form (ENGDTR-3089,
FIELD-4587).
Fixed an issue wherein, on logout from the MSR web UI, users sometimes
received the warning: Sorry, we don’t recognize this path (FIELD-4339).
Fixed an issue with the MSR web UI wherein a user could not be added to an
organization that has “team” in its name (FIELD-4436).
Fixed an issue in the MSR web UI wherein if a user who wants to change their
password entered an incorrect password into the Current password
field and clicked Save, the screen would go blank (ENGDTR-2785).
The summary counts that MSR displays for Critical,
High, Medium, and Low in both the
Vulnerabilities column and in the View Details view
are unreliable and may be incorrect when displaying non-zero values. The
Components tab displays correct values for each component.
Workaround:
Navigate to the Components tab, review the individual non-green
components, and separately calculate the total of the numbers that present as
Critical, High, Medium, and
Low.
Added a new rotate-certificates sub-command to the
rethinkops binary that resides inside the dtr-rethinkdb image.
This command allows you to rotate the certificates used for
intracluster communication between the MSR system containers and RethinkDB.
To rotate certificates, docker exec into the dtr-rethinkdb
container and run the command below; you can provide the --debug flag for
more information.
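A minimal sketch of the invocation, assuming the standard replica-ID suffix
on the MSR system container name:

   # Replace <replica-id> with the ID of the target replica.
   docker exec -it dtr-rethinkdb-<replica-id> rethinkops rotate-certificates --debug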
Fixed an issue wherein webhooks could fail to trigger, producing an
“argument list too long” error (FIELD-3424).
Fixed an issue with the MSR web UI wherein the {{tag}} value was absent
from the scanning report (FIELD-3931).
Fixed an issue wherein the MSR image scan CSV report was missing the CVSS3
score and only had the CVSS2 score (FIELD-3946).
Fixed issues wherein the list of org repositories was limited to ten and was
wrapping incorrectly (FIELD-3987).
Fixed an issue with the MSR web UI wherein the Teams page
displayed no more than 10 users and 10 repositories and the
Organizations page displayed no more than 10 teams
(FIELD-4187).
Fixed an issue with the MSR web UI wherein the Add User button
failed to display for organization owners (FIELD-4261).
Fixed an issue with the MSR web UI wherein performing a search from the
left-side navigation panel produced search results that displayed on top of
the background text (FIELD-4268).
Made improvements to MSR administrative actions to circumvent failures
that can result from stale containers (FIELD-4270) (FIELD-4291).
Fixed an image signing regression issue that applies to MSR 2.9.3 and
MSR 2.9.4 (FIELD-4320).
To help administrators troubleshoot authorization issues, MSR now includes
the name and ID of the requesting user in log messages from the
dtr-garant container when handling /auth/token API requests
(FIELD-3509).
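One hedged way to surface these entries is to filter the dtr-garant
container logs; the container-name suffix and the filter pattern are
assumptions:

   # Replace <replica-id> with the ID of the replica under inspection.
   docker logs dtr-garant-<replica-id> 2>&1 | grep '/auth/token'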
MSR now includes support for the GET /v2/_catalog endpoint from the
Docker Registry HTTP API V2. Authenticated MSR users can use this API to list
all the repositories in the registry that they have permission to view
(ENGDTR-2667).
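For example, an authenticated user might list visible repositories with
curl; the host and credentials are placeholders, and the n parameter is the
standard Docker Registry HTTP API V2 page-size control:

   # Returns a JSON list of repositories the user is permitted to view.
   curl -ks -u $MSR_USER:$MSR_TOKEN "https://msr.example.com/v2/_catalog?n=100"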
MSR now accepts only JWT licenses. To upgrade MSR, customers using a Docker
Hub-issued license must first replace it with the new license version
(ENGDTR-2631).
KubeLinter has been updated to version 0.2.2, which includes 11 additional
rules, and new rule-mediation descriptions have been added to existing rules
(ENGDTR-2624).
The following MSR commands now include a --max-wait option:
emergency-repair
join
reconfigure
restore
upgrade
With this new option you can set the maximum amount of time that MSR allows
for operations to complete. The --max-wait option is especially useful
when allocating additional startup time for very large MSR databases
(FIELD-4070).
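A hedged sketch of the option in use; the image tag and duration format are
assumptions, and the other flags that the bootstrapper requires are omitted
here:

   # Allow up to one hour for a very large database to come up during upgrade.
   # Supply the remaining required upgrade flags for your deployment.
   docker run -it --rm mirantis/dtr:2.9.21 upgrade --max-wait 1h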
Fixed an issue wherein the webhook client timeout settings caused
reconnections to wait too long (FIELD-4083).
Fixed an issue with the MSR web UI wherein the enforcement policy page did
not allow users to enable or disable enforcement policies within a repository
(ENGDTR-2679).
Fixed an issue wherein connecting to MSR with IPv6 failed after an MCR
upgrade to version 20.10.0 or later (FIELD-4144).
MSR administrative actions such as backup, restore, and
reconfigure can continuously fail with an invalid session token error
shortly after entering phase 2.
MSR now tags all analytics reports with the user license ID when telemetry
is enabled. It does not, though, collect any further identifying information.
In line with this change, the MSR settings API no longer contains
anonymizeAnalytics, and the MSR web UI no longer includes the
Make data anonymous toggle (ENGDTR-2607).
The response for the /api/v0/meta/settings/compliance security
compliance API now includes the following information:
Product version
Global enforcement policy
For each repository, a list of the following:
Enforcement policies
Promotion policies
Pruning policies
Push mirroring policies
Poll mirroring policies
(ENGDTR-2532)
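As a simple illustration, the endpoint can be queried with curl; the host
and credentials are placeholders:

   # Returns the security compliance summary described above.
   curl -ks -u admin:$TOKEN "https://msr.example.com/api/v0/meta/settings/compliance"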
Added a matches operator to the rule engine that matches subject fields
to a user-provided regex. This operator can be used for promotion, pruning,
image enforcement, and push mirroring policies (ENGDTR-2498).
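As a hedged illustration of the operator, the request below creates a
promotion policy whose rule matches semantic-version tags; the endpoint path
and the JSON field names are assumptions for illustration only, not the
documented API schema:

   # Assumed payload shape; consult the MSR API reference for the exact schema.
   curl -ks -u admin:$TOKEN -X POST -H 'Content-Type: application/json' \
     -d '{"rules":[{"field":"tag","operator":"matches","values":["^v[0-9]+\\.[0-9]+\\.[0-9]+$"]}]}' \
     "https://msr.example.com/api/v0/repositories/myorg/myrepo/promotionPolicies"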
MSR now boosts container security by running the scanner process in a sandbox
with restricted permissions. In the event the scanner process is
compromised, it does not have access to the Rethink database private keys or
to any portion of the file system that it does not require
(ENGDTR-1915).
Updated Django to version 3.1.10, resolving the following CVEs:
CVE-2021-31542
and CVE-2021-32052
(ENGDTR-2651).
Fixed an issue with the MSR web UI wherein the repository listing on the
Organizations > Teams > Permissions tab displayed no more than
ten repositories (FIELD-3998).
Fixed an issue in the MSR web UI wherein the Scanning enabled
setting failed to display correctly after changing it, navigating away from,
and back to the Security tab (FIELD-3541).
Fixed an issue in the MSR web UI wherein after clicking
Sync Database Now, the In Progress icon failed to
disappear at the correct time and the scanning information (including the
database version) failed to update without a browser refresh (FIELD-3541).
Fixed an issue in the MSR web UI wherein the value of
Scanning timeout limit failed to display correctly after changing
it, navigating away from, and back to the Security tab
(FIELD-3541).
Fixed an issue wherein one or more RethinkDB servers in an unavailable state
caused dtr emergency-repair to hang indefinitely (ENGDTR-2640).
Fixed an issue in MSR 2.9.2 that caused bootstrapper to panic when performing
manual operations in an unhealthy environment.
Vulnerability scans no longer reveal a false positive for
CVE-2020-17541
as of CVE database version 1388, published 2021-06-24 at 1:04 PM EST
(ENGDTR-2634).
Vulnerability scans no longer reveal a false positive for
CVE-2021-23017
as of CVE database version 1437, published 2021-06-27 at 5:11 PM EST
(ENGDTR-2634).
Vulnerability scans may reveal the following CVE, though MSR is not impacted:
CVE-2021-29921
(ENGDTR-2634).
MSR 2.9.2 was discontinued shortly after release due to an issue wherein
bootstrapper panicked when performing manual operations in an unhealthy
environment. The product enhancements and bug fixes planned for MSR 2.9.2
are a part of MSR 2.9.3, which also resolves the bootstrapper issue.
Mirantis strongly recommends that customers who deployed MSR 2.9.2 upgrade
to MSR 2.9.3.
MSR now applies a 56-character limit on “namespace/repository” length at
creation, and thus eliminates a situation wherein attempts to push tags
to repos with too-long names return a 500 Internal Server Error
(ENGDTR-2525).
MSR now alerts administrators if the storage backend contents do not match
the metadata, or if a new install of MSR uses a storage
backend that contains data from a different MSR installation (ENGDTR-2501).
Updated golang to 1.16.3 and kube-linter to 0.2.1 (ENGDTR-2561).
Added activity log type DELETE for TagLimit pruning (ENGDTR-2497).
The MSR UI now includes a horizontal scrollbar (in addition to the existing
vertical scrollbar), thus allowing users to better adjust the window
dimensions.
The enableManifestLists setting is no longer needed and has been removed
because it broke Docker Content Trust (FIELD-2642, FIELD-2644).
Updated the MSR web UI Last updated at trigger for the promotion
and mirror policies to include the option to specify before a particular
time, in addition to the existing after option (FIELD-2180).
The mirantis/dtr --help documentation no longer recommends using the
--rm option when invoking commands. Leaving it out preserves containers
after they have finished running, thus allowing users to retrieve logs at a
later time (FIELD-2204).
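For instance, a bootstrapper command run without --rm leaves its container
behind for later inspection; the container name and image tag here are
illustrative:

   # Run without --rm so the finished container is preserved.
   docker run -it --name msr-upgrade mirantis/dtr:2.9.21 upgrade
   # Retrieve the logs at a later time.
   docker logs msr-upgrade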
Fixed broken links to MSR documentation in the MSR web UI (FIELD-3822).
Fixed “nasa bootstrap” integration test (and emergency repair procedure)
(ENGDTR-2433).
Fixed an issue wherein pushing images whose previously pushed layer data had
been deleted from storage caused unknown blob errors. Pushing such
images now replaces the missing layer data. Sweeping image layers whose
data is missing from storage no longer causes garbage collection to error
out (FIELD-1836).
Though the version of busybox within the container is not vulnerable,
dtr-rethink vulnerability scans may present false positives for
CVE-2018-1000500
and CVE-2021-28831 in
the busybox component (ENGDTR-2571).
Though the jvm-hotspot-openjdk component is not present in the
dtr-jobrunner container, dtr-jobrunner vulnerability scans may
detect CVE-2021-2161 and
CVE-2021-2163 in the component (ENGDTR-2571).
Vulnerability scans no longer report CVE-2016-4074 as a result of the 2021.03 scanner
update.
A self-scan of MSR 2.9.1 reveals five vulnerabilities; however, these CVEs
are not a threat to MSR:
urllib3 version 1.26.4 and later fixes CVE-2021-28363; however, the
dtr-jobrunner container uses Alpine, which has yet to release urllib3
1.26.4 in a stable repository.
The dtr-jobrunner container does not make any outgoing HTTP requests to
containers external to MSR and therefore is not susceptible to
CVE-2021-28363 (ENGDTR-2581).
A self-scan can report a false positive for CVE-2021-29482
(ENGDTR-2608).
Added running image enforcement policy support to MSR, which allows users to
block clients from pulling images based on specified criteria.
Users can configure policies scoped either globally or at the repository
level.
MSR logs all enforcement events to the activity log in the MSR web UI. (MSR
tracks enforcement events triggered by global enforcement policies as
GLOBAL event types.)
The MSR API now includes new endpoints that support running image
enforcement in MSR.
Added the capability to download scanner reports and optionally bundle them
with additional diagnostic information, which you can then send to Mirantis
as required.
Starting with this release, you no longer need to indicate that storage has
migrated when modifying backend storage configuration. MSR now assumes that
the new storage backend is not empty and that its contents match those of
the old backend. Thus, we have removed the --storage-migrated
flag and web UI storage migration checkbox from MSR.
If the backend is empty, or if the new backend content does not match
the old backend content, MSR produces unknown blob errors during
pushes and pulls.
If you deploy a brand new storage backend and the data inside does not match
the old backend, you must first reinitialize the storage with the new
--reinitialize-storage flag within reconfigure. Note that this
action erases all tag metadata (FIELD-2571).
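A minimal sketch of the reconfigure invocation with the new flag; the image
tag is illustrative, and the other flags that reconfigure requires are
omitted:

   # WARNING: erases all tag metadata, as noted above.
   docker run -it --rm mirantis/dtr:2.9.21 reconfigure --reinitialize-storage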
All analytics reports for instances of MSR with a Mirantis-issued license key
now include the license ID (even when the anonymizeAnalytics setting is
enabled). The license subject reads License ID in the web UI
(ENGDTR-2327).
Intermittent failures no longer occur during metadata garbage collection when
using Google Cloud Storage as the backend (ENGDTR-2376).
Pulling images from a repository using crictl no longer returns a 500
error (FIELD-3331).
Lengthy tag names no longer overlap with adjacent text in the repository
tag list (FIELD-1631).
MSR is not vulnerable to CVE-2019-15562, despite its detection in
dtr-notary-signer and dtr-notary-server vulnerability scans, as the
SQL backend is not used in Notary deployment (ENGDTR-2319).
Vulnerability scans of the dtr-jobrunner container can give false positives
for CVE-2020-29363, CVE-2020-29361, and CVE-2020-29362 in the p11-kit
component. The container’s version of p11-kit is not vulnerable to these
CVEs (ENGDTR-2319).
Resolved CVE-2019-20907 (ENGDTR-2259).
Considerations
CentOS 8 entered EOL status on 31 December 2021. For this reason,
Mirantis no longer supports CentOS 8 on any version of MSR. We encourage
customers who are using CentOS 8 to migrate to one of the supported
operating systems, as further bug fixes will not be forthcoming.
In developing MSR 2.9.x, Mirantis has been transitioning from legacy
Docker Hub-issued licenses to JWT licenses, as detailed below:
Versions 2.9.0 to 2.9.3: Docker Hub licenses and JWT licenses
Versions 2.9.4 and later: JWT licenses only
When malware is present in customer images, malware scanners operating on
MSR nodes at runtime can wrongly report MSR as a bad actor. If your malware
scanner detects any issue in a running instance of MSR, refer to
Scan images for vulnerabilities.
Mirantis Secure Registry (MSR, formerly Docker Trusted Registry) provides
an enterprise-grade container registry solution that can be easily integrated
to form the core of an effective secure software supply chain.
MSR functionality is dependent on MKE, and MKE functionality is dependent on MCR. As such, MSR operating system compatibility is contingent on the operating system compatibility of the MCR versions with which your particular MKE version is compatible.
To determine MSR operating system compatibility:
Access the MKE compatibility matrix
and locate the version of MKE that you are running with MSR.
Note the MCR versions with which that MKE version is compatible.
Access the MCR compatibility matrix
and locate the MCR versions that are compatible with your version of MKE
to determine operating system compatibility.
Postgres Operator up through version 1.8.2 uses the policy/v1beta1
PodDisruptionBudget Kubernetes API, which is no longer served as of
Kubernetes 1.25. As such, various features of MSR may not function
properly if Postgres Operator 1.8.2 or earlier is installed alongside
MSR on Kubernetes 1.25 or later.
MSR 2.9.2 was discontinued shortly after release due to an issue
wherein bootstrapper panicked when performing manual operations in an
unhealthy environment. The product enhancements and bug fixes planned
for MSR 2.9.2 are a part of MSR 2.9.3, which also resolves the
bootstrapper issue. Mirantis strongly recommends that customers who
deployed MSR 2.9.2 upgrade to MSR 2.9.3 (or later).
The Mirantis Kubernetes Engine (MKE) and Mirantis Secure Registry (MSR) web
user interfaces (UIs) both run in the browser, separate from any backend
software. As such, Mirantis aims to support browsers separately from
the backend software in use.
Mirantis currently supports the following web browsers:
Browser          Supported version     Release date        Operating systems
Google Chrome    96.0.4664 or newer    15 November 2021    MacOS, Windows
Microsoft Edge   95.0.1020 or newer    21 October 2021     Windows only
Firefox          94.0 or newer         2 November 2021     MacOS, Windows
To ensure the best user experience, Mirantis recommends that you use the
latest version of any of the supported browsers. The use of other browsers
or older versions of the browsers we support can result in rendering issues,
and can even lead to glitches and crashes in the event that some JavaScript
language features or browser web APIs are not supported.
Important
Mirantis does not tie browser support to any particular MKE or MSR software
release.
Mirantis strives to leverage the latest in browser technology to build more
performant client software, as well as ensuring that our customers benefit from
the latest browser security updates. To this end, our strategy is to regularly
move our supported browser versions forward, while also lagging behind the
latest releases by approximately one year to give our customers a
sufficient upgrade buffer.
The MKE, MSR, and MCR platform subscription provides software, support, and
certification to enterprise development and IT teams that build and manage
critical apps in production at scale. It provides a trusted platform for all
apps which supply integrated management and security across the app lifecycle,
comprised primarily of Mirantis Kubernetes Engine, Mirantis Secure Registry
(MSR), and Mirantis Container Runtime (MCR).
Detailed here are all currently supported product versions, as well as the
product versions most recently deprecated. It can be assumed that all earlier
product versions are at End of Life (EOL).
Important Definitions
“Major Releases” (X.y.z): Vehicles for delivering major and minor feature
development and enhancements to existing features. They incorporate all
applicable Error corrections made in prior Major Releases, Minor Releases,
and Maintenance Releases.
“Minor Releases” (x.Y.z): Vehicles for delivering minor feature
developments, enhancements to existing features, and defect corrections. They
incorporate all applicable Error corrections made in prior Minor Releases,
and Maintenance Releases.
“Maintenance Releases” (x.y.Z): Vehicles for delivering Error corrections
that are severely affecting a number of customers and cannot wait for the
next major or minor release. They incorporate all applicable defect
corrections made in prior Maintenance Releases.
“End of Life” (EOL): Versions that are no longer supported by Mirantis;
updating to a later version is recommended.
With the intent of improving the customer experience, Mirantis strives to offer
maintenance releases for the Mirantis Secure Registry (MSR) software every
six to eight weeks. Primarily, these maintenance releases will aim to resolve
known issues and issues reported by customers, quash CVEs, and reduce technical
debt. The version of each MSR maintenance release is reflected in the third
digit position of the version number (as an example, for MSR 2.9 the most
current maintenance release is MSR 2.9.21).
In parallel with our MSR maintenance release work, each year Mirantis will
develop and release a new major version of MSR, the Mirantis support lifespan
of which will adhere to our legacy two-year standard.
End of Life Date
The End of Life (EOL) date for MSR 2.9 is 2025-SEP-27.
The MSR team will make every effort to hold to the release cadence stated here.
Customers should be aware, though, that development and release cycles can
change, and without advance notice.
A Technology Preview feature provides early access to upcoming product
innovations, allowing customers to experiment with the functionality and
provide feedback.
Technology Preview features may be privately or publicly available, but in
neither case are they intended for production use. While Mirantis will
provide assistance with such features through official channels, normal
Service Level Agreements do not apply.
As Mirantis considers making future iterations of Technology Preview features
generally available, we will do our best to resolve any issues that customers
experience when using these features.
During the development of a Technology Preview feature, additional components
may become available to the public for evaluation. Mirantis cannot guarantee
the stability of such features. As a result, if you are using Technology
Preview features, you may not be able to seamlessly upgrade to subsequent
product releases.
Mirantis makes no guarantees that Technology Preview features will graduate to
generally available features.