Introduction¶
Warning
In correlation with the end of life (EOL) date for MSR 2.8.x, Mirantis stopped maintaining this documentation version as of 2022-05-27. The latest MSR product documentation is available here.
This documentation provides information on how to deploy and operate a Mirantis Secure Registry (MSR). It is intended to help operators understand the core concepts of the product and provides sufficient information to deploy and operate the solution.
The information provided in this documentation set is constantly improved and amended based on feedback and requests from MSR users.
Product Overview¶
Mirantis Secure Registry (MSR) is a solution that enables enterprises to store and manage their container images on-premise or in their virtual private clouds. Built-in security enables you to verify and trust the provenance and content of your applications and ensure secure separation of concerns. Using MSR, you meet security and regulatory compliance requirements. In addition, the automated operations and integration with CI/CD speed up application testing and delivery. The most common use cases for MSR include:
- Helm charts repositories
Deploying applications to Kubernetes can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources, such as pods, services, deployments, and replica sets, each of which requires writing a detailed YAML manifest file. This takes considerable work and time. With Helm charts (packages that consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files), you can save time and install the software you need, along with all of its dependencies, as well as upgrade and configure it.
- Automated development
Easily create an automated workflow where you push a commit that triggers a build on a CI provider, which pushes a new image into your registry. Then, the registry fires off a webhook and triggers deployment on a staging environment, or notifies other systems that a new image is available.
- Secure and vulnerability-free images

When an industry requires applications to comply with certain security standards for regulatory compliance, your applications are only as secure as the images that run them. To ensure that your images are secure and free of vulnerabilities, track your images using a binary image scanner that detects the components in each image and identifies any associated CVEs. In addition, you can run image enforcement policies to prevent vulnerable or inappropriate images from being pulled and deployed from your registry.
Reference Architecture¶
The MSR Reference Architecture provides comprehensive technical information on Mirantis Secure Registry (MSR), including component particulars, infrastructure specifications, and networking and volumes detail.
Introduction to MSR¶
Mirantis Secure Registry (MSR) is Mirantis’s enterprise-grade image storage solution. Installed behind the firewall, either on-premises or on a virtual private cloud, MSR provides a secure environment from which users can store and manage Docker images.
Image and job management
MSR has a web-based user interface that you can use to browse images and audit repository events. With the UI, you can see which Dockerfile lines produced an image and, if security scanning is enabled, a list of all of the software installed in that image. You can also audit jobs with the web interface.
MSR can serve as a Continuous Integration and Continuous Delivery (CI/CD) component, in the building, shipping, and running of applications.
Availability
MSR is highly available through the use of multiple replicas of all containers and metadata. As such, MSR continues to operate in the event of a machine failure, giving you time to repair or replace the failed node.
Efficiency
MSR is able to reduce the bandwidth used when pulling Docker images by caching images closer to users. In addition, MSR can clean up unreferenced manifests and layers.
Built-in access control
As with Mirantis Kubernetes Engine (MKE), MSR uses Role Based Access Control (RBAC), which allows you to manage image access, either manually, with LDAP, or with Active Directory.
Security scanning
A security scanner is built into MSR that you can use to discover the versions of the software in use in your images. This tool scans each layer and aggregates the results, offering a complete picture of what is being shipped as part of your stack. Most importantly, because the security scanner taps into a periodically updated vulnerability database, it provides up-to-date insight into your exposure to known security threats.
Image signing
MSR ships with Notary, which allows you to sign and verify images using Docker Content Trust. For more information on managing Notary data in MSR, refer to Using Notary to sign an image.
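With content trust enabled in the Docker client, a push to MSR is signed through the integrated Notary service. As a sketch, the registry address and repository below are placeholders, not values from this guide:

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Tag an image for the registry and push it; with content trust
# enabled, the push is signed via MSR's integrated Notary server.
# msr.example.com and engineering/alpine are placeholder values.
docker tag alpine:3.14 msr.example.com/engineering/alpine:3.14
docker push msr.example.com/engineering/alpine:3.14
```

On the first signed push, the Docker client prompts you to create root and repository signing keys.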
Components¶
Mirantis Secure Registry (MSR) is a containerized application that runs on a Mirantis Kubernetes Engine cluster.
Once you have MSR deployed, you use your Docker CLI client to log in, push, and pull images.

For high availability, you can deploy multiple MSR replicas, one on each MKE worker node.

All MSR replicas run the same set of services, and changes to their configuration are automatically propagated to the other replicas.
When you install MSR on a node, the following containers are started:
| Name | Description |
|---|---|
| dtr-api-<replica_id> | Executes the MSR business logic, serving the MSR web application and API |
| dtr-garant-<replica_id> | Manages MSR authentication |
| dtr-jobrunner-<replica_id> | Runs cleanup jobs in the background |
| dtr-nginx-<replica_id> | Receives HTTP and HTTPS requests and proxies them to other MSR components. By default, it listens on ports 80 and 443 of the host |
| dtr-notary-server-<replica_id> | Receives, validates, and serves content trust metadata; consulted when pushing to or pulling from MSR with content trust enabled |
| dtr-notary-signer-<replica_id> | Performs server-side timestamp and snapshot signing for content trust metadata |
| dtr-registry-<replica_id> | Implements the functionality for pulling and pushing Docker images, and handles how images are stored |
| dtr-rethinkdb-<replica_id> | A database for persisting repository metadata |
| dtr-scanningstore-<replica_id> | Stores security scanning data |
All of these components are for the internal use of MSR. Do not use them in your applications.
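Because every MSR component container name carries the dtr- prefix, you can quickly review their status on a replica node. This sketch assumes the Docker CLI is available on that node:

```shell
# List the MSR system containers running on this node, showing
# each container's name and current status.
docker ps --format 'table {{.Names}}\t{{.Status}}' --filter name=dtr-
```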
System requirements¶
You can install Mirantis Secure Registry on-premises or on a cloud provider. Before installing, be sure that your infrastructure meets the following requirements. All MSR nodes must:
Be a worker node managed by MKE (Mirantis Kubernetes Engine)
Have a fixed hostname
Minimum requirements:
16GB of RAM for nodes running MSR
4 vCPUs for nodes running MSR
25GB of free disk space
Recommended production requirements:
32GB of RAM for nodes running MSR
4 vCPUs for nodes running MSR
100GB of free disk space
Note that Windows container images are typically larger than Linux container images. For this reason, consider provisioning more local storage for Windows nodes and for MSR setups that will store Windows container images.
When the image scanning feature is used, we recommend at least 32 GB of RAM. As developers and teams push images into MSR, the repository grows over time. As such, you should regularly inspect RAM, CPU, and disk usage on MSR nodes, and increase resources whenever saturation occurs on a regular basis.
Networking¶
To allow containers to communicate, the following networks are created when installing MSR:

| Name | Type | Description |
|---|---|---|
| dtr-ol | overlay | Allows MSR components running on different nodes to communicate and to replicate MSR data |
When installing MSR on a node, make sure the following ports are open on that node:
| Direction | Port | Purpose |
|---|---|---|
| in | 80/tcp | Web app and API client access to MSR |
| in | 443/tcp | Web app and API client access to MSR |
These ports are configurable when installing MSR.
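One way to confirm that the ports answer after installation is to probe them from a client machine. The /_ping health endpoint used below is an assumption, and msr.example.com is a placeholder for your MSR address:

```shell
# Probe the HTTP and HTTPS listeners; -k skips certificate
# verification, which is needed if MSR uses self-signed certificates.
curl -fsS -o /dev/null -w 'HTTP  %{http_code}\n' http://msr.example.com/
curl -fsSk -o /dev/null -w 'HTTPS %{http_code}\n' https://msr.example.com/_ping
```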
Volumes¶
MSR uses these named volumes for persisting data:
| Volume name | Description |
|---|---|
| dtr-ca-<replica_id> | Root key material for the MSR root CA that issues certificates |
| dtr-notary-<replica_id> | Certificates and keys for the Notary components |
| dtr-postgres-<replica_id> | Vulnerability scan data |
| dtr-registry-<replica_id> | Docker image data, if MSR is configured to store images on the local filesystem |
| dtr-rethink-<replica_id> | Repository metadata |
| dtr-nfs-registry-<replica_id> | Docker image data, if MSR is configured to store images on NFS |
You can customize the volume driver used for these volumes by creating them before installing MSR. During installation, MSR checks which volumes do not yet exist on the node and creates them using the default volume driver.
By default, the data for these volumes can be found at /var/lib/docker/volumes/<volume-name>/_data.
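To see how much disk each MSR volume consumes on a node, you can iterate over the dtr- volumes. This is a sketch that assumes the Docker CLI and sudo access on the node:

```shell
# For each MSR named volume, print its name and disk usage.
for v in $(docker volume ls --quiet --filter name=dtr-); do
  # Resolve the volume's mount point on the host filesystem.
  mountpoint=$(docker volume inspect --format '{{.Mountpoint}}' "$v")
  printf '%s\t' "$v"
  sudo du -sh "$mountpoint" | cut -f1
done
```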
Storage¶
By default, Mirantis Secure Registry stores images on the filesystem of the node where it is running, but you should configure it to use a centralized storage backend.
MSR supports the following storage systems:

- Persistent volumes: NFS, bind mounts, and Docker volumes
- Cloud storage providers: Amazon S3, Microsoft Azure, OpenStack Swift, and Google Cloud Storage
Note
MSR cannot be deployed to Windows nodes.
MSR Web UI¶
MSR has a web UI where you can manage settings and user permissions.
You can push and pull images using the standard Docker CLI client or other tools that can interact with a Docker registry.
Rule engine¶
MSR uses a rule engine to evaluate policies, such as tag pruning and image enforcement.
The rule engine supports a number of comparison operators, including matches.

Note

The matches operator conforms subject fields to a user-provided regular expression (regex). The regex for matches must follow the specification in the official Go documentation: Package syntax.
Both of the aforementioned policies, tag pruning and image enforcement, use the rule engine.
Installation Guide¶
Targeted to deployment specialists and QA engineers, the MSR Installation Guide provides the detailed information and procedures you need to install and configure Mirantis Secure Registry (MSR).
Pre-configure MKE¶
When installing or backing up MSR on an MKE cluster, administrators must be able to deploy containers on MKE manager nodes or nodes running MSR. You can adjust this setting in the MKE Settings menu.

If administrators are unable to deploy on MKE manager nodes or nodes running MSR, the MSR installation or backup fails with the following error message:
Error response from daemon: {"message":"could not find any nodes on which the container could be created"}
See also
compatibility-matrix
Install MSR online¶
Mirantis Secure Registry (MSR) is a containerized application that runs on a swarm managed by the Mirantis Kubernetes Engine (MKE). It can be installed on-premises or on a cloud infrastructure.
Step 1. Validate the system requirements¶
Before installing MSR, make sure your infrastructure meets the MSR system requirements.
Step 2. Install MKE¶
MSR requires Mirantis Kubernetes Engine (MKE) to run.
Note
Prior to installing MSR:
- When upgrading, upgrade MKE before MSR for each major version. For example, if you are upgrading four major versions, upgrade one major version at a time: first MKE, then MSR, and then repeat for the remaining three versions.
- Before an initial install of MSR, upgrade MKE to the most recent version.
- Update Mirantis Container Runtime to the most recent version before installing or updating MKE.
MKE and MSR must not be installed on the same node, due to the potential for resource and port conflicts. Instead, install MSR on worker nodes that will be managed by MKE. Note also that MSR cannot be installed on a standalone Mirantis Container Runtime.
Step 3. Install MSR¶
Once MKE is installed, navigate to the MKE web interface as an admin. Expand your profile on the left-side navigation panel, and select Admin Settings > Mirantis Secure Registry.

After you configure all the options, you should see a Docker CLI command that you can use to install MSR. Before you run the command, take note of the --dtr-external-url parameter:

$ docker run -it --rm \
  mirantis/dtr:2.8.13 install \
  --dtr-external-url <msr.example.com> \
  --ucp-node <mke-node-name> \
  --ucp-username admin \
  --ucp-url <mke-url>

If you want to point this parameter to a load balancer that uses HTTP for health probes over port 80 or 443, temporarily reconfigure the load balancer to use TCP over a known open port. Once MSR is installed, you can configure the load balancer however you need to.

Run the MSR install command on any node that is connected to the MKE cluster and has Mirantis Container Runtime installed. MSR will not be installed on the node where you run the install command; it will be installed on the MKE worker defined by the --ucp-node flag.

For example, you could SSH into an MKE node and run the MSR install command from there. Running the installation command in interactive TTY (-it) mode means you will be prompted for any required additional information.

Here are some useful options you can set during installation:

- To install a different version of MSR, replace 2.8.13 with your desired version in the installation command above.
- MSR is deployed with self-signed certificates by default, so MKE might not be able to pull images from MSR. Use the optional --dtr-external-url <msr-domain>:<port> flag during installation, or during a reconfiguration, so that MKE is automatically reconfigured to trust MSR.
- Starting with MSR 2.7, you can enable browser authentication via client certificates at install time. This bypasses the MSR login page and hides the logout button, thereby skipping the need to enter your username and password.

Verify that MSR is installed. Either:

- Navigate to https://<mke-fqdn>/manage/settings/dtr, or
- Navigate to Admin Settings > Mirantis Secure Registry from the MKE web UI.

Under the hood, MKE modifies /etc/docker/certs.d for each host and adds MSR's CA certificate. MKE can then pull images from MSR because the Mirantis Container Runtime for each node in the MKE swarm has been configured to trust MSR.

Reconfigure your load balancer back to your desired protocol and port.
Step 4. Check that MSR is running¶
In your browser, navigate to the MKE web interface.
Select Shared Resources > Stacks from the left-side navigation panel. You should see MSR listed as a stack.
To verify that MSR is accessible from the browser, enter your MSR IP address or FQDN in the address bar. Since the HSTS (HTTP Strict-Transport-Security) header is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your MSR prefixed with https://, or your browser may refuse to load the web interface.
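You can confirm that the HSTS header is being set by inspecting the response headers with curl; msr.example.com is a placeholder for your MSR FQDN:

```shell
# Fetch only the response headers (-I) over TLS, skipping certificate
# verification (-k) in case self-signed certificates are still in use,
# and filter for the HSTS header that MSR includes in API responses.
curl -skI https://msr.example.com/ | grep -i strict-transport-security
```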
Step 5. Configure MSR¶
After installing MSR, you should configure:
The certificates used for TLS communication
The storage back end to store the Docker images
Web interface¶
To update your TLS certificates, access MSR from the browser and navigate to System > General.
To configure your storage back end, navigate to System > Storage.
Command line interface¶
To reconfigure MSR using the CLI, refer to MSR Operations Guide: CLI reference.
Step 6. Test pushing and pulling¶
Now that you have a working installation of MSR, you should test that you can push and pull images:
Configure your local Mirantis Container Runtime
Create a repository
Push and pull images
Step 7. Join replicas to the cluster¶
This step is optional.
To set up MSR for high availability, you can add more replicas to your MSR cluster. Adding more replicas allows you to load-balance requests across all replicas, and keep MSR working if a replica fails.
For high availability, you should deploy 3 or 5 MSR replicas. The replica nodes also need to be managed by the same MKE.
To add replicas to an MSR cluster, use the join command:

Load your MKE user bundle.

Run the join command:

docker run -it --rm \
  mirantis/dtr:2.8.13 join \
  --ucp-node <mke-node-name> \
  --ucp-insecure-tls

Important

The <mke-node-name> following the --ucp-node flag is the target node on which to install the MSR replica. This is NOT the MKE Manager URL.

When you join a replica to an MSR cluster, you need to specify the ID of a replica that is already part of the cluster. You can find an existing replica ID by going to the Shared Resources > Stacks page in the MKE web UI.

Check that all replicas are running:

In your browser, navigate to the MKE web UI.

Select Shared Resources > Stacks. All replicas should be displayed.
Install MSR offline¶
The procedure to install Mirantis Secure Registry on a host is the same whether that host has access to the internet or not.

The only difference when installing on an offline host is that instead of pulling the MSR images from Docker Hub, you use a computer that is connected to the internet to download a single package with all the images. You then copy that package to the host where you will install MSR.
Versions available¶
Download the offline package¶
Use a computer with internet access to download a package with all MSR images:
$ wget <package-url> -O dtr.tar.gz
Now that you have the package on your local machine, you can transfer it to the machines where you want to install MSR.
For each machine where you want to install MSR:
Copy the MSR package to that machine.
$ scp dtr.tar.gz <user>@<host>
Use ssh to log into the hosts where you transferred the package.
Load the MSR images.
Once the package is transferred to the hosts, you can use the docker load command to load the Docker images from the tar archive:

$ docker load -i dtr.tar.gz
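The per-machine steps above can be sketched as a small loop. The host names, user, and paths are placeholders for your environment:

```shell
# Copy the offline package to each MSR host and load the images there.
# HOSTS, the admin user, and /tmp paths are placeholder values.
HOSTS="node1.example.com node2.example.com node3.example.com"
for host in $HOSTS; do
  scp dtr.tar.gz "admin@${host}:/tmp/dtr.tar.gz"
  ssh "admin@${host}" "docker load -i /tmp/dtr.tar.gz"
done
```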
Install MSR¶
Now that the offline hosts have all the images needed to install MSR, you can install MSR on those hosts.
Preventing outgoing connections¶
MSR makes outgoing connections to:

- report analytics
- check for new versions
- check online licenses
- update the vulnerability scanning database

All of these online connections are optional. You can choose to disable any or all of these features on the admin settings page.
Obtain the license¶
After you install MSR, download your new MSR license and apply it using the MSR web UI.
Warning
Users are not authorized to run MSR on production workloads without a valid license. Refer to Mirantis Agreements and Terms for more information.
To download your MSR license:
Open an email from Mirantis Support with the subject Welcome to Mirantis’ CloudCare Portal and follow the instructions for logging in.
If you did not receive the CloudCare Portal email, you likely have not yet been added as a Designated Contact and should contact your Designated Administrator.
In the top navigation bar, click Environments.
Click the Cloud Name associated with the license you want to download.
Scroll down to License Information and click the License File URL. A new tab opens in your browser.
Click View file to download your license file.
To update your license settings in the MSR web UI:
Log in to your MSR instance as an administrator.
In the left-side navigation panel, click Settings.
On the General tab, click Apply new license. A file browser dialog displays.
Navigate to where you saved the license key (.lic) file, select it, and click Open. MSR automatically updates with the new settings.
Uninstall MSR¶
To uninstall MSR, remove all data associated with each replica. To do so, run the destroy command once per replica:
docker run -it --rm \
mirantis/dtr:2.8.13 destroy \
--ucp-insecure-tls
You will be prompted for the MKE URL, MKE credentials, and which replica to destroy.
Operations Guide¶
The MSR Operations Guide provides the detailed information you need to store and manage images on-premises or in a virtual private cloud, to meet security or regulatory compliance requirements.
Access MSR¶
Configure your Mirantis Container Runtime¶
By default, Mirantis Container Runtime uses TLS when pushing images to and pulling images from an image registry such as Mirantis Secure Registry (MSR).
If MSR is using the default configurations or was configured to use self-signed certificates, you need to configure your Mirantis Container Runtime to trust MSR. Otherwise, when you try to log in, push to, or pull images from MSR, you’ll get an error:
docker login msr.example.org
x509: certificate signed by unknown authority
The first step to make your Mirantis Container Runtime trust the certificate authority used by MSR is to get the MSR CA certificate. Then you configure your operating system to trust that certificate.
Configure your host¶
macOS¶
In your browser, navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Then add that certificate to the macOS Keychain.

After adding the CA certificate to Keychain, restart Docker Desktop for Mac.
Windows¶
In your browser, navigate to https://<msr-url>/ca to download the TLS certificate used by MSR. Open Windows Explorer, right-click the file you downloaded, and choose Install certificate.
Then, select the following options:

- Store location: local machine
- Check Place all certificates in the following store
- Click Browse, and select Trusted Root Certificate Authorities
- Click Finish
Learn more about managing TLS certificates.
After adding the CA certificate to Windows, restart Docker Desktop for Windows.
Ubuntu/Debian¶
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /usr/local/share/ca-certificates/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-certificates
# Restart the Docker daemon
sudo service docker restart
RHEL/CentOS¶
# Download the MSR CA certificate
sudo curl -k https://<msr-domain-name>/ca -o /etc/pki/ca-trust/source/anchors/<msr-domain-name>.crt
# Refresh the list of certificates to trust
sudo update-ca-trust
# Restart the Docker daemon
sudo /bin/systemctl restart docker.service
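On either distribution, you can verify that the CA certificate is now trusted by making a TLS connection without curl's -k (insecure) flag; a sketch:

```shell
# With the MSR CA in the system trust store, this TLS request should
# succeed without -k. A non-zero exit code means the CA certificate
# is not yet trusted by the system.
curl -fsS -o /dev/null https://<msr-domain-name>/ && echo "MSR certificate trusted"
```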
Boot2Docker¶
Log into the virtual machine with ssh:

docker-machine ssh <machine-name>

Create the bootsync.sh file, and make it executable:

sudo touch /var/lib/boot2docker/bootsync.sh
sudo chmod 755 /var/lib/boot2docker/bootsync.sh

Add the following content to the bootsync.sh file. You can use nano or vi for this.

#!/bin/sh
cat /var/lib/boot2docker/server.pem >> /etc/ssl/certs/ca-certificates.crt

Add the MSR CA certificate to the server.pem file:

curl -k https://<msr-domain-name>/ca | sudo tee -a /var/lib/boot2docker/server.pem

Run bootsync.sh and restart the Docker daemon:

sudo /var/lib/boot2docker/bootsync.sh
sudo /etc/init.d/docker restart
Log into MSR¶
To validate that your Docker daemon trusts MSR, try authenticating against MSR.
docker login msr.example.org
Where to go next¶
Configure your Notary client¶
Configure your Notary client as described in Delegations for content trust.
Use a cache¶
Mirantis Secure Registry can be configured to have one or more caches. If an administrator has set up caches, you can choose which cache to pull images from, for faster download times.
In the MSR web UI, navigate to your Account, and check the Content Cache options.
Once you save, your images are pulled from the cache instead of the central MSR.
Manage access tokens¶
Mirantis Secure Registry lets you create and distribute access tokens to enable programmatic access to MSR. Access tokens are linked to a particular user account and duplicate whatever permissions that account has at the time of use. If the account changes permissions, so will the token.
Access tokens are useful in cases such as building integrations since you can issue multiple tokens – one for each integration – and revoke them at any time.
Create an access token¶
To create an access token for the first time, log in to https://<msr-url> with your MKE credentials.

Expand your Profile from the left-side navigation panel and select Profile > Access Tokens.
Add a description for your token. Use something that indicates where the token is going to be used, or set a purpose for the token. Administrators can also create tokens for other users.
Modify an access token¶
Once the token is created, you will not be able to see it again. You do have the option to rename, deactivate, or delete the token as needed. You can delete the token by selecting it and clicking Delete, or you can click View Details.
Use the access token¶
You can use an access token anywhere that requires your MSR password. For example, you can pass your access token to the --password or -p option when logging in from your Docker CLI client:
docker login dtr.example.org --username <username> --password <token>
To use the MSR API to list the repositories your user has access to:
curl --silent --insecure --user <username>:<token> dtr.example.org/api/v0/repositories
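If jq is available, you can extract just the repository names from the API response. The helper name below is hypothetical, and the response shape ({"repositories":[{"name":...},...]}) is an assumption about the v0 API:

```shell
# Hypothetical helper: filter repository names out of the JSON
# returned by the /api/v0/repositories endpoint (shape assumed).
list_repo_names() {
  jq -r '.repositories[].name'
}

# Feed a sample payload through the filter to show what it produces:
echo '{"repositories":[{"name":"alpine"},{"name":"nginx"}]}' | list_repo_names
```

In practice, you would pipe the curl call shown above into list_repo_names.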
Configure MSR¶
Use your own TLS certificates¶
Mirantis Secure Registry (MSR) services are exposed using HTTPS by default. This ensures encrypted communications between clients and your trusted registry. If you do not pass a PEM-encoded TLS certificate during installation, MSR will generate a self-signed certificate. This leads to an insecure site warning when accessing MSR through a browser. Additionally, MSR includes an HSTS (HTTP Strict-Transport-Security) header in all API responses which can further lead to your browser refusing to load MSR’s web interface.
You can configure MSR to use your own TLS certificates, so that it is automatically trusted by your users’ browser and client tools. As of v2.7, you can also enable user authentication via client certificates provided by your organization’s public key infrastructure (PKI).
You can upload your own TLS certificates and keys using the web interface, or pass them as CLI options when installing or reconfiguring your MSR instance.
Use the web interface to replace the server certificates¶
Navigate to https://<msr-url> and log in with your credentials.

Select System from the left-side navigation panel, and scroll down to Domain & Proxies.
Enter your MSR domain name and upload or copy and paste the certificate details:
Load balancer/public address. The domain name clients will use to access MSR.
TLS private key. The server private key.
TLS certificate chain. The server certificate and any intermediate public certificates from your certificate authority (CA). This certificate needs to be valid for the MSR public address, and have SANs for all addresses used to reach the MSR replicas, including load balancers.
TLS CA. The root CA public certificate.
Click Save to apply your changes.
At this point, if you’ve added certificates issued by a globally trusted CA, any web browser or client tool should now trust MSR. If you’re using an internal CA, you will need to configure the client systems to trust that CA.
Use the command line interface to replace the server certificates¶
See install and reconfigure for TLS certificate options and usage.
Enable single sign-on¶
Users are shared between MKE and MSR by default, but the applications have separate browser-based interfaces which require authentication.
To only authenticate once, you can configure MSR to have single sign-on (SSO) with MKE.
Note
After configuring single sign-on with MSR, users accessing MSR via docker login should create an access token and use it to authenticate.
At install time¶
When installing MSR, pass --dtr-external-url <url> to enable SSO. Specify the Fully Qualified Domain Name (FQDN) of your MSR, or of a load balancer, to load-balance requests across multiple MSR replicas.
docker run --rm -it \
mirantis/dtr:2.8.13 install \
--dtr-external-url msr.example.com \
--dtr-cert "$(cat cert.pem)" \
--dtr-ca "$(cat dtr_ca.pem)" \
--dtr-key "$(cat key.pem)" \
--ucp-url mke.example.com \
--ucp-username admin \
--ucp-ca "$(cat ucp_ca.pem)"
This makes it so that when you access MSR's web user interface, you are redirected to the MKE login page for authentication. Upon successfully logging in, you are then redirected to the MSR external URL you specified during installation.
Post-installation¶
Web user interface¶
Navigate to https://<msr-url> and log in with your credentials.

Select System from the left-side navigation panel, and scroll down to Domain & Proxies.
Update the Load balancer / Public Address field with the external URL where users should be redirected once they are logged in. Click Save to apply your changes.
Toggle Single Sign-on to automatically redirect users to MKE for logging in.
Command line interface¶
You can also enable single sign-on from the command line by reconfiguring your MSR. To do so, run the following:
docker run --rm -it \
mirantis/dtr:2.8.13 reconfigure \
--dtr-external-url msr.example.com \
--dtr-cert "$(cat cert.pem)" \
--dtr-ca "$(cat dtr_ca.pem)" \
--dtr-key "$(cat key.pem)" \
--ucp-url mke.example.com \
--ucp-username admin \
--ucp-ca "$(cat ucp_ca.pem)"
Enable MSR telemetry¶
You can set MSR to automatically record and transmit data to Mirantis through an encrypted channel for monitoring and analysis purposes. The data collected provides the Mirantis Customer Success Organization with information that helps us to better understand the operational use of MSR by our customers. It also provides key feedback in the form of product usage statistics, which enable our product teams to enhance Mirantis products and services.
Caution
To send telemetry, verify that dockerd and the MSR application container can resolve api.segment.io and create a TCP (HTTPS) connection on port 443.
Log in to the MSR Web UI as an administrator.
Click System in the left-side navigation panel to open the System page.
Click the General tab in the detail pane.
Scroll down in the detail pane to the Analytics section.
Toggle the Send data slider to the right.
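Before enabling telemetry, you can check the connectivity requirement from the node with a quick probe. This sketch assumes getent and curl are available on the host:

```shell
# Confirm that the node can resolve api.segment.io via DNS.
getent hosts api.segment.io

# Confirm that an HTTPS (TCP 443) connection can be established;
# any HTTP status code indicates the connection itself succeeded.
curl -sS -o /dev/null -w 'HTTPS reachable: %{http_code}\n' https://api.segment.io/
```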
External storage¶
Configure MSR image storage¶
Configure your storage back end¶
By default MSR uses the local filesystem of the node where it is running to store your Docker images. You can configure MSR to use an external storage back end, for improved performance or high availability.
If your MSR deployment has a single replica, you can continue using the local filesystem for storing your Docker images. If your MSR deployment has multiple replicas, make sure all replicas are using the same storage back end for high availability. Whenever a user pulls an image, the MSR node serving the request needs to have access to that image.
MSR supports the following storage systems:
Local filesystem
NFS
Bind Mount
Volume
Cloud Storage Providers
Amazon S3
Microsoft Azure
OpenStack Swift
Google Cloud Storage
Note
Some of the previous links are meant to be informative and are not representative of MSR’s implementation of these storage systems.
To configure the storage back end, log in to the MSR web interface as an admin, and navigate to System > Storage.
The storage configuration page gives you the most common configuration options, but you also have the option to upload a configuration file in .yml, .yaml, or .txt format.
Local filesystem¶
By default, MSR creates a volume named dtr-registry-<replica-id> to store your images using the local filesystem. You can customize the name and path of the volume by using mirantis/dtr install --dtr-storage-volume or mirantis/dtr reconfigure --dtr-storage-volume.
Important
When running 2.6.0 to 2.6.3 (with experimental online garbage collection), there is an issue with reconfiguring MSR with --nfs-storage-url which leads to erased tags. Make sure to back up your MSR metadata before you proceed. To work around the --nfs-storage-url flag issue, manually create a storage volume on each MSR node. If MSR is already installed in your cluster, reconfigure MSR with the --dtr-storage-volume flag using your newly-created volume.
If you’re deploying MSR with high-availability, you need to use NFS or any other centralized storage back end so that all your MSR replicas have access to the same images.
To check how much space your images are utilizing in the local filesystem, SSH into the MSR node and run:
# Find the path to the volume
docker volume inspect dtr-registry-<replica-id>
# Check the disk usage
sudo du -hs \
$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>))
You can configure your MSR replicas to store images on an NFS partition, so that all replicas can share the same storage back end.
Cloud Storage¶
MSR supports Amazon S3 or other storage systems that are S3-compatible like Minio.
Switching storage back ends¶
Switching storage back ends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection. In earlier versions, MSR would subsequently start a tagmigration job to rebuild tag metadata from the file layout in the image layer store. This job has been discontinued for DTR 2.5.x (with garbage collection) and DTR 2.6, as your storage back end could get out of sync with your MSR metadata, such as your manifests and existing repositories. As a best practice, MSR storage back ends and metadata should always be moved, backed up, and restored together.
The --storage-migrated flag in reconfigure lets you indicate the migration status of your storage data during a reconfigure. If you are not worried about losing your existing tags, you can skip the recommended steps below and perform a reconfigure.
Note
Starting with MSR 2.9.0, switching your storage back end does not initialize a new metadata store or erase your existing storage. MSR now requires the new storage back end to contain an exact copy of the prior configuration’s data. If this requirement is not met, the storage must be reinitialized using the --reinitialize-storage flag with the dtr reconfigure command, which reinitializes a new metadata store and erases your existing tags.
It is a best practice to always move, back up, and restore your storage back ends with your metadata.
Best practice for data migration¶
Disable garbage collection by selecting “Never” under System > Garbage Collection, so blobs referenced in the backup that you create continue to exist. Make sure to keep it disabled while you’re performing the metadata backup and migrating your storage data.
Back up your existing metadata.
Migrate the contents of your current storage back end to the new one you are switching to. For example, upload your current storage data to your new NFS server.
Restore MSR from your backup and specify your new storage back end.
With MSR restored from your backup and your storage data migrated to your new back end, garbage collect any dangling blobs using the following API request:
curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"onlinegc_blobs\" }"
On success, you should get a 202 Accepted response with a jobid and other related details. This ensures any blobs which are not referenced in your previously created backup get destroyed.
Alternative option for data migration¶
If you have a long maintenance window, you can skip some steps from above and do the following:
Put MSR in “read-only” mode using the following API request:
curl -u <username>:$TOKEN -X POST "https://<msr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
On success, you should get a 202 Accepted response.
Migrate the contents of your current storage back end to the new one you are switching to. For example, upload your current storage data to your new NFS server.
Reconfigure MSR while specifying the --storage-migrated flag to preserve your existing tags. Once the reconfiguration completes, take MSR out of read-only mode by repeating the API request above with "readOnlyRegistry": false.
Regarding previous versions…
Make sure to perform a backup before you change your storage back end when running DTR 2.5 (with online garbage collection) and 2.6.0-2.6.3.
Upgrade to DTR 2.6.4 and follow best practice for data migration to avoid the wiped tags issue when moving from one NFS server to another.
Configuring MSR for S3¶
You can configure MSR to store Docker images on Amazon S3, or other file servers with an S3-compatible API like Cleversafe or Minio.
Amazon S3 and compatible services store files in “buckets”, and users have permissions to read, write, and delete files from those buckets. When you integrate MSR with Amazon S3, MSR sends all read and write operations to the S3 bucket so that the images are persisted there.
Create a bucket on Amazon S3¶
Before configuring MSR, you need to create a bucket on Amazon S3. For faster pulls and pushes, create the S3 bucket in a region that's physically close to the servers where MSR is running.
Start by creating a bucket. Then, as a best practice you should create a new IAM user just for the MSR integration and apply an IAM policy that ensures the user has limited permissions.
This user only needs permission to read, write, and delete files in the bucket that you'll use to store images.
Here’s an example of a user policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:::<bucket-name>"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListMultipartUploadParts"
],
"Resource": "arn:aws:s3:::<bucket-name>/*"
}
]
}
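As an optional sanity check, not an official MSR step, you can confirm the policy parses as valid JSON before attaching it to the IAM user. The file path below is arbitrary and <bucket-name> remains a placeholder:

```shell
# Save the example user policy to a file and verify it is well-formed JSON.
cat > /tmp/msr-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads"],
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListMultipartUploadParts"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
EOF
python3 -m json.tool /tmp/msr-s3-policy.json > /dev/null && echo "policy is valid JSON"
```

Any JSON linter (jq, for example) works equally well here.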
Configure MSR¶
Once you’ve created a bucket and user, you can configure MSR to use it.
In your browser, navigate to https://<msr-url>. Select System > Storage.
Select the S3 option, and fill in the information about the bucket and user.
Field | Description
---|---
Root directory | The path in the bucket where images are stored.
AWS Region name | The region where the bucket is located.
S3 bucket name | The name of the bucket in which to store the images.
AWS access key | The access key used to access the S3 bucket. Can be left empty if you're using an IAM policy.
AWS secret key | The secret key used to access the S3 bucket. Can be left empty if you're using an IAM policy.
Region endpoint | The endpoint name for the region you're using.
There are also some advanced settings.
Field | Description
---|---
Signature version 4 auth | Authenticate requests using AWS signature version 4.
Use HTTPS | Secure all requests with HTTPS, or make requests in an insecure way.
Skip TLS verification | Encrypt all traffic, but don't verify the TLS certificate used by the storage back end.
Root CA certificate | The public key certificate of the root certificate authority that issued the storage back end certificate.
Once you click Save, MSR validates the configurations and saves the changes.
Configure your clients¶
If you’re using a TLS certificate in your storage back end that’s not globally trusted, you’ll have to configure all Mirantis Container Runtimes that push or pull from MSR to trust that certificate. When you push or pull an image MSR redirects the requests to the storage back end, so if clients don’t trust the TLS certificates of both MSR and the storage back end, they won’t be able to push or pull images.
And if you’ve configured MSR to skip TLS verification, you also need to configure all Mirantis Container Runtimes that push or pull from MSR to skip TLS verification. You do this by adding MSR to the list of insecure registries when starting Docker.
Supported regions¶
MSR supports the following S3 regions:
us-east-1
us-east-2
us-west-1
us-west-2
eu-west-1
eu-west-2
eu-central-1
ap-south-1
ap-southeast-1
ap-southeast-2
ap-northeast-1
ap-northeast-2
sa-east-1
cn-north-1
us-gov-west-1
ca-central-1
Update your S3 settings on the web interface¶
When running 2.6.0 to 2.6.4 (with experimental online garbage collection), there is an issue with changing your S3 settings on the web interface which leads to erased metadata. Make sure to back up your MSR metadata before you proceed.
Restore MSR with S3¶
To restore MSR using your previously configured S3 settings, use restore with --dtr-use-default-storage to keep your metadata.
Configuring MSR for NFS¶
You can configure MSR to store Docker images in an NFS directory. Starting in DTR 2.6, changing storage back ends involves initializing a new metadata store instead of reusing an existing volume. This helps facilitate online garbage collection. See changes to NFS reconfiguration below if you have previously configured MSR to use NFS.
Before installing or configuring MSR to use an NFS directory, make sure that:
The NFS server has been correctly configured
The NFS server has a fixed IP address
All hosts running MSR have the correct NFS libraries installed
To confirm that the hosts can connect to the NFS server, try to list the directories exported by your NFS server:
showmount -e <nfsserver>
You should also try to mount one of the exported directories:
mkdir /tmp/mydir && sudo mount -t nfs <nfs server>:<directory> /tmp/mydir
Install MSR with NFS¶
One way to configure MSR to use an NFS directory is at install time:
docker run -it --rm mirantis/dtr:2.8.13 install \
--nfs-storage-url <nfs-storage-url> \
<other options>
Use the format nfs://<nfs server>/<directory> for the NFS storage URL. To support NFS v4, you can specify additional options when running install with --nfs-storage-url.
When joining replicas to a MSR cluster, the replicas will pick up your storage configuration, so you will not need to specify it again.
You can use the --storage-migrated flag with the reconfigure CLI command to indicate the migration status of your storage data during a reconfigure.
To reconfigure MSR using an NFSv4 volume as a storage back end:
docker run --rm -it \
mirantis/dtr:2.8.13 reconfigure \
--ucp-url <mke_url> \
--ucp-username <mke_username> \
--nfs-storage-url <nfs-storage-url> \
--async-nfs \
--storage-migrated
To reconfigure MSR to stop using NFS storage, leave the --nfs-storage-url option blank:
docker run -it --rm mirantis/dtr:2.8.13 reconfigure \
--nfs-storage-url ""
Set up high availability¶
Mirantis Secure Registry is designed to scale horizontally as your usage increases. You can add more replicas to make MSR scale to your demand and for high availability.
All MSR replicas run the same set of services and changes to their configuration are automatically propagated to other replicas.
To make MSR tolerant to failures, add additional replicas to the MSR cluster.
MSR replicas | Failures tolerated
---|---
1 | 0
3 | 1
5 | 2
7 | 3
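The tolerance figures follow simple quorum arithmetic: a cluster of n replicas keeps a majority while no more than floor((n - 1) / 2) of them are down. A quick sketch:

```shell
# Quorum math behind the table: n replicas tolerate (n - 1) / 2 failures
# (integer division).
tolerated_failures() {
  echo $(( ($1 - 1) / 2 ))
}

tolerated_failures 1   # 0
tolerated_failures 3   # 1
tolerated_failures 5   # 2
tolerated_failures 7   # 3
```

This is also why two replicas are no better than one: (2 - 1) / 2 is still zero.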
When sizing your MSR installation for high-availability, follow these rules of thumb:
Don’t create a MSR cluster with just two replicas. Your cluster won’t tolerate any failures, and it’s possible that you experience performance degradation.
When a replica fails, the number of failures tolerated by your cluster decreases. Don’t leave that replica offline for long.
Adding too many replicas to the cluster might also lead to performance degradation, as data needs to be replicated across all replicas.
To have high availability on MKE and MSR, you need a minimum of:
3 dedicated nodes to install MKE with high availability
3 dedicated nodes to install MSR with high availability
As many nodes as you want for running your containers and applications
You also need to configure the MSR replicas to share the same object storage.
Join more MSR replicas¶
To add replicas to an existing MSR deployment:
Use ssh to log into any node that is already part of MKE.
Run the MSR join command:
docker run -it --rm \
mirantis/dtr:2.8.13 join \
--ucp-node <mke-node-name> \
--ucp-insecure-tls
Where --ucp-node is the hostname of the MKE node where you want to deploy the MSR replica, and --ucp-insecure-tls tells the command to trust the certificates used by MKE.
If you have a load balancer, add this MSR replica to the load balancing pool.
Remove existing replicas¶
To remove a MSR replica from your deployment:
Use ssh to log into any node that is part of MKE.
Run the MSR remove command:
docker run -it --rm \
mirantis/dtr:2.8.13 remove \
--ucp-insecure-tls
You will be prompted for:
Existing replica ID: the ID of any healthy MSR replica of that cluster
Replica ID: the ID of the MSR replica you want to remove. It can be the ID of an unhealthy replica.
MKE username and password: the administrator credentials for MKE
If you’re load-balancing user requests across multiple MSR replicas, don’t forget to remove this replica from the load balancing pool.
Use a load balancer¶
Once you’ve joined multiple MSR replica nodes for high availability, you can configure your own load balancer to balance user requests across all replicas.
This allows users to access MSR using a centralized domain name. If a replica goes down, the load balancer can detect that and stop forwarding requests to it, so that the failure goes unnoticed by users.
MSR exposes several endpoints you can use to assess if a MSR replica is healthy or not:
/_ping: An unauthenticated endpoint that checks whether the MSR replica is healthy. Useful for load balancing and other automated health checks.
/nginx_status: Returns the number of connections being handled by the NGINX front end used by MSR.
/api/v0/meta/cluster_status: Returns extensive information about all MSR replicas.
Load balance MSR¶
MSR does not provide a load balancing service. You can use an on-premises or cloud-based load balancer to balance requests across multiple MSR replicas.
Important
Additional load balancer requirements for MKE
If you are also using MKE, there are additional requirements if you plan to load balance both MKE and MSR using the same load balancer.
You can use the unauthenticated /_ping endpoint on each MSR replica to check whether the replica is healthy and whether it should remain in the load balancing pool.
Also, make sure you configure your load balancer to:
Load balance TCP traffic on ports 80 and 443.
Not terminate HTTPS connections.
Not buffer requests.
Forward the Host HTTP header correctly.
Have no timeout for idle connections, or set it to more than 10 minutes.
The /_ping endpoint returns a JSON object for the replica being queried, of the form:
{
"Error": "error message",
"Healthy": true
}
A response of "Healthy": true means the replica is suitable for taking requests. It is also sufficient to check whether the HTTP status code is 200.
An unhealthy replica returns a 503 status code and populates "Error" with more details on any one of these services:
Storage container (registry)
Authorization (garant)
Metadata persistence (rethinkdb)
Content trust (notary)
Note that this endpoint reports the health of a single replica. To get the health of every replica in a cluster in real time, query each replica individually.
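A health probe can key off either the HTTP status code or the JSON body. The sketch below is illustrative rather than part of MSR (the function name and sample responses are invented) and classifies a replica from a captured /_ping result:

```shell
# Illustrative health-probe logic: classify a replica from its /_ping
# HTTP status code and response body. In practice you would capture both
# with something like:
#   curl -ks -o /tmp/ping.json -w '%{http_code}' https://<replica>/_ping
classify_replica() {
  status="$1"
  body="$2"
  if [ "$status" = "200" ]; then
    echo "in-pool"
  else
    # 503 bodies carry an "Error" field naming the failing service
    echo "drain: ${body}"
  fi
}

classify_replica 200 '{ "Error": "", "Healthy": true }'
classify_replica 503 '{ "Error": "rethinkdb: not reachable", "Healthy": false }'
```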
Configuration examples¶
Use the following examples to configure your load balancer for MSR.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
stream {
upstream dtr_80 {
server <MSR_REPLICA_1_IP>:80 max_fails=2 fail_timeout=30s;
server <MSR_REPLICA_2_IP>:80 max_fails=2 fail_timeout=30s;
server <MSR_REPLICA_N_IP>:80 max_fails=2 fail_timeout=30s;
}
upstream dtr_443 {
server <MSR_REPLICA_1_IP>:443 max_fails=2 fail_timeout=30s;
server <MSR_REPLICA_2_IP>:443 max_fails=2 fail_timeout=30s;
server <MSR_REPLICA_N_IP>:443 max_fails=2 fail_timeout=30s;
}
server {
listen 443;
proxy_pass dtr_443;
}
server {
listen 80;
proxy_pass dtr_80;
}
}
global
log /dev/log local0
log /dev/log local1 notice
defaults
mode tcp
option dontlognull
timeout connect 5s
timeout client 50s
timeout server 50s
timeout tunnel 1h
timeout client-fin 50s
### frontends
# Optional HAProxy Stats Page accessible at http://<host-ip>:8181/haproxy?stats
frontend dtr_stats
mode http
bind 0.0.0.0:8181
default_backend dtr_stats
frontend dtr_80
mode tcp
bind 0.0.0.0:80
default_backend dtr_upstream_servers_80
frontend dtr_443
mode tcp
bind 0.0.0.0:443
default_backend dtr_upstream_servers_443
### backends
backend dtr_stats
mode http
option httplog
stats enable
stats admin if TRUE
stats refresh 5m
backend dtr_upstream_servers_80
mode tcp
option httpchk GET /_ping HTTP/1.1\r\nHost:\ <MSR_FQDN>
server node01 <MSR_REPLICA_1_IP>:80 check weight 100
server node02 <MSR_REPLICA_2_IP>:80 check weight 100
server node03 <MSR_REPLICA_N_IP>:80 check weight 100
backend dtr_upstream_servers_443
mode tcp
option httpchk GET /_ping HTTP/1.1\r\nHost:\ <MSR_FQDN>
server node01 <MSR_REPLICA_1_IP>:443 weight 100 check check-ssl verify none
server node02 <MSR_REPLICA_2_IP>:443 weight 100 check check-ssl verify none
server node03 <MSR_REPLICA_N_IP>:443 weight 100 check check-ssl verify none
{
"Subnets": [
"subnet-XXXXXXXX",
"subnet-YYYYYYYY",
"subnet-ZZZZZZZZ"
],
"CanonicalHostedZoneNameID": "XXXXXXXXXXX",
"CanonicalHostedZoneName": "XXXXXXXXX.us-west-XXX.elb.amazonaws.com",
"ListenerDescriptions": [
{
"Listener": {
"InstancePort": 443,
"LoadBalancerPort": 443,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
}
],
"HealthCheck": {
"HealthyThreshold": 2,
"Interval": 10,
"Target": "HTTPS:443/_ping",
"Timeout": 2,
"UnhealthyThreshold": 4
},
"VPCId": "vpc-XXXXXX",
"BackendServerDescriptions": [],
"Instances": [
{
"InstanceId": "i-XXXXXXXXX"
},
{
"InstanceId": "i-XXXXXXXXX"
},
{
"InstanceId": "i-XXXXXXXXX"
}
],
"DNSName": "XXXXXXXXXXXX.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-XXXXXXXXX"
],
"Policies": {
"LBCookieStickinessPolicies": [],
"AppCookieStickinessPolicies": [],
"OtherPolicies": []
},
"LoadBalancerName": "ELB-MSR",
"CreatedTime": "2017-02-13T21:40:15.400Z",
"AvailabilityZones": [
"us-west-2c",
"us-west-2a",
"us-west-2b"
],
"Scheme": "internet-facing",
"SourceSecurityGroup": {
"OwnerAlias": "XXXXXXXXXXXX",
"GroupName": "XXXXXXXXXXXX"
}
}
You can deploy your load balancer using:
# Create the nginx.conf file, then
# deploy the load balancer
docker run --detach \
--name dtr-lb \
--restart=unless-stopped \
--publish 80:80 \
--publish 443:443 \
--volume ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro \
nginx:stable-alpine
# Create the haproxy.cfg file, then
# deploy the load balancer
docker run --detach \
--name dtr-lb \
--publish 443:443 \
--publish 80:80 \
--publish 8181:8181 \
--restart=unless-stopped \
--volume ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
haproxy:1.7-alpine haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
Set up security scanning in MSR¶
This page explains how to set up and enable Docker Security Scanning on an existing installation of Mirantis Secure Registry.
Prerequisites¶
These instructions assume that you have already installed Mirantis Secure Registry (MSR), and have access to an account on the MSR instance with administrator access.
Before you begin, make sure that you or your organization has purchased a MSR license that includes Docker Security Scanning, and that your Docker ID can access and download this license from the Docker Hub.
If you are using a license associated with an individual account, no additional action is needed. If you are using a license associated with an organization account, you may need to make sure your Docker ID is a member of the Owners team. Only Owners team members can download license files for an Organization.
If you will be allowing the Security Scanning database to update itself automatically, make sure that the server hosting your MSR instance can access both http://license.mirantis.com and https://dss-cve-updates.mirantis.com/ on the standard HTTPS port 443.
Get the security scanning license¶
If your MSR instance already has a license that includes Security Scanning, skip this section and proceed to Enable MSR security scanning.
Tip
To check if your existing MSR license includes scanning, navigate to the MSR Settings page, and click Security. If an Enable scanning toggle appears, the license includes scanning.
If your current MSR license doesn’t include scanning, you must first download the new license.
Search for an email from Mirantis Support with the subject Welcome to Mirantis’ CloudCare Portal, and follow the instructions for logging in.
If you did not receive the CloudCare Portal email, it is likely that you have not yet been added as a Designated Contact. To remedy this, contact your Designated Administrator.
Click Environments in the top navigation bar.
Click the Cloud Name associated with the license you want to download.
Scroll down to License Information and click the License File url. A new tab will open in your browser.
Click View file to download your license file.
Next, install the new license on the MSR instance.
Log in to your MSR instance using an administrator account.
Click Settings in the left-side navigation panel.
On the General tab click Apply new license.
A file browser dialog displays.
Navigate to where you saved the license key (.lic) file, select it, and click Open.
Enable MSR security scanning¶
To enable security scanning in MSR:
Log in to your MSR instance with an administrator account.
Click Settings in the left-side navigation panel.
Click the Security tab.
Click the Enable scanning toggle so that it turns blue and says “on”.
Next, provide a security database for the scanner. Security scanning will not function until MSR has a security database to use.
By default, security scanning is enabled in Online mode. In this mode, MSR attempts to download a security database from a Docker server. If your installation cannot access https://dss-cve-updates.docker.com/ you must manually upload a .tar file containing the security database.
If you are using Online mode, the MSR instance will contact a Docker server, download the latest vulnerability database, and install it. Scanning can begin once this process completes.
If you are using Offline mode, use the instructions in Update scanning database - offline mode to upload an initial security database.
By default, when Security Scanning is enabled, new repositories automatically scan on docker push. If you had existing repositories before you enabled security scanning, you might want to change the repository scanning behavior.
Set repository scanning mode¶
Two modes are available when Security Scanning is enabled:
Scan on push & Scan manually: the image is re-scanned on each docker push to the repository, and whenever a user with write access clicks the Start Scan links or Scan button.
Scan manually: the image is scanned only when a user with write access clicks the Start Scan links or Scan button.
By default, new repositories are set to Scan on push & Scan manually, but you can change this setting during repository creation.
Any repositories that existed before scanning was enabled are set to Scan manually mode by default. If these repositories are still in use, you can change this setting from each repository’s Settings page.
Note
To change an individual repository scanning mode, you must have write or admin access to the repo.
To change an individual repository’s scanning mode:
Navigate to the repository, and click the Settings tab.
Scroll down to the Image scanning section.
Select the desired scanning mode.
Update the CVE scanning database¶
Docker Security Scanning indexes the components in your MSR images and compares them against a known CVE database. When new vulnerabilities are reported, Docker Security Scanning matches the components in new CVE reports to the indexed components in your images, and quickly generates an updated report.
Users with administrator access to MSR can check when the CVE database was last updated from the Security tab in the MSR Settings pages.
Update CVE database - online mode¶
By default Docker Security Scanning checks automatically for updates to the vulnerability database, and downloads them when available. If your installation does not have access to the public internet, use the Offline mode instructions below.
To ensure that MSR can access these updates, confirm that the host can reach both http://license.mirantis.com and https://dss-cve-updates.mirantis.com/ on port 443 using HTTPS.
MSR checks for new CVE database updates at 3:00 AM UTC every day. If an update is found it is downloaded and applied without interrupting any scans in progress. Once the update is complete, the security scanning system looks for new vulnerabilities in the indexed components.
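If you want to know when the daily 3:00 AM UTC update window falls in your server's local time zone, a quick conversion (GNU date assumed):

```shell
# Print the local-time equivalent of the 03:00 UTC update window.
date -d '03:00 UTC' '+%H:%M %Z'
```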
To set the update mode to Online:
Log in to MSR as a user with administrator rights.
Click Settings in the left-side navigation panel and click Security.
Click Online.
Your choice is saved automatically.
Note
MSR also checks for CVE database updates when scanning is first enabled, and when you switch update modes. If you need to check for a CVE database update immediately, you can briefly switch modes from online to offline and back again.
Update CVE database - offline mode¶
To update the CVE database for your MSR instance when connection to the update server is not possible, download and install a .tar file that contains the database updates.
Run the following command to download the most recent CVE database:
Note
The example command specifies default values. It assumes that you want the container to output the database file to ~/Downloads and that the volume should map from the local machine into the container. If the destination for the database is in a separate directory, you must define an additional volume.
docker run -it --rm \
-v ${HOME}/Downloads:/data \
-e CVE_DB_URL_ONLY=false \
-e CLOBBER_FILE=false \
-e DATABASE_OUTPUT="/data" \
-e DATABASE_SCHEMA=3 \
-e DEBUG=false \
-e VERSION_ONLY=false \
mirantis/get-dtr-cve-db:latest
Variable | Default | Override detail
---|---|---
CLOBBER_FILE | false | Set to true to overwrite an existing database file of the same name.
CVE_DB_URL_ONLY | false | Set to true to output the database download URL instead of downloading the database file.
DATABASE_OUTPUT | /data | Indicates the database download directory inside the container.
DATABASE_SCHEMA | 3 | Indicates the schema version of the CVE database to download.
DEBUG | false | Set to true to enable verbose output.
VERSION_ONLY | false | Set to true to output only the database version, without downloading the database file.
To manually update the MSR CVE database using the downloaded .tar file:
Log in to MSR as a user with administrator rights.
Click Settings in the left-side navigation panel and click Security.
Click Upload .tar database file.
Browse to the latest .tar file that you received, and click Open.
MSR installs the new CVE database, and begins checking already indexed images for components that match new or updated vulnerabilities.
Note
The Upload button is unavailable while MSR applies CVE database updates.
Enable or disable automatic database updates¶
To change the update mode:
Log in to MSR as a user with administrator rights.
Click Settings in the left-side navigation panel and click Security.
Click Online/Offline.
Your choice is saved automatically.
Caches¶
MSR cache fundamentals¶
The further away you are from the geographical location where MSR is deployed, the longer it will take to pull and push images. This happens because the files being transferred from MSR to your machine need to travel a longer distance, across multiple networks.
To decrease the time to pull an image, you can deploy MSR caches geographically closer to users.
Caches are transparent to users, since users still log in and pull images using the MSR URL address. MSR checks if users are authorized to pull the image, and redirects the request to the cache.
In this example, MSR is deployed on a datacenter in the United States, and a cache is deployed in the Asia office.
Users in the Asia office update their user profile within MSR to fetch from the cache in their office. They pull an image using:
# Log in to MSR
docker login msr.example.org
# Pull image
docker image pull msr.example.org/website/ui:3-stable
MSR authenticates the request and checks whether the user has permission to pull the image they are requesting. If they do, they receive an image manifest containing the list of image layers to pull, and are redirected to pull the image layers from the Asia cache.
When users request those image layers from the Asia cache, the cache pulls them from MSR and keeps a copy that can be used to serve to other users without having to pull the image layers from MSR again.
Caches or mirroring policies¶
Use caches if you:
Want to make image pulls faster for users in different geographical regions.
Want to manage user permissions from a central place.
If you need users to be able to push images faster, or you want to implement RBAC policies based on different regions, do not use caches. Instead, deploy multiple MSR clusters and implement mirroring policies between them.
With mirroring policies you can set up a development pipeline where images are automatically pushed between different MSR repositories, or across MSR deployments.
As an example you can set up a development pipeline with three different stages. Developers can push and pull images from the development environment, only pull from QA, and have no access to Production.
With multiple MSR deployments you can control the permissions developers have for each deployment, and you can create policies to automatically push images from one deployment to the next.
Cache deployment strategy¶
The main reason to use a MSR cache is so that users can pull images from a service that’s geographically closer to them.
For example, a company has developers spread across three locations: United States, Asia, and Europe. Developers working in the US office can pull their images from MSR without problem, but developers in the Asia and Europe offices complain that it takes them a long time to pull images.
To address that, you can deploy MSR caches in the Asia and Europe offices, so that developers working from there can pull images much faster.
Deployment overview¶
To deploy the MSR caches for the example scenario, you need three datacenters:
The US datacenter runs MSR configured for high availability.
The Asia datacenter runs a MSR cache.
The Europe datacenter runs another MSR cache.
Both caches are configured to fetch images from MSR.
System requirements¶
Before deploying a MSR cache in a datacenter, make sure you:
Provision multiple nodes and install Docker on them.
Join the nodes into a Swarm.
Have one or more dedicated worker nodes just for running the MSR cache.
Have TLS certificates to use for securing the cache.
Have a shared storage system, if you want the cache to be highly available.
Ports used¶
You can customize the port used by the MSR cache, so you’ll have to configure your firewall rules to make sure users can access the cache using the port you chose.
By default the documentation guides you in deploying caches that are exposed on port 443/TCP using the swarm routing mesh.
Deploy a MSR cache with Swarm¶
This example guides you in deploying a MSR cache, assuming that you’ve got a MSR deployment up and running. It also assumes that you’ve provisioned multiple nodes and joined them into a swarm.
The MSR cache is going to be deployed as a Docker service, so that Docker automatically takes care of scheduling and restarting the service if something goes wrong.
We’ll manage the cache configuration using a Docker configuration, and the TLS certificates using Docker secrets. This allows you to manage the configurations securely and independently of the node where the cache is actually running.
Dedicate a node for the cache¶
To make sure the MSR cache is performant, it should be deployed on a node dedicated just for it. Start by labelling the node where you want to deploy the cache, so that you target the deployment to that node.
Use SSH to log in to a manager node of the swarm where you want to deploy the MSR cache. If you’re using MKE to manage that swarm, use a client bundle to configure your Docker CLI client to connect to the swarm.
docker node update --label-add dtr.cache=true <node-hostname>
Prepare the cache deployment¶
Create a file structure, as illustrated below:
├── docker-stack.yml # Stack file to deploy cache with a single command
├── config.yml # The cache configuration file
└── certs
├── cache.cert.pem # The cache public key certificate
├── cache.key.pem # The cache private key
└── dtr.cert.pem # MSR CA certificate
Add the following content to the two YAML files, docker-stack.yml and config.yml, respectively:

docker-stack.yml:

version: "3.3"
services:
  cache:
    image: mirantis/dtr-content-cache:2.8.2
    entrypoint:
      - /start.sh
      - "/config.yml"
    ports:
      - 443:443
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.dtr.cache == true]
      restart_policy:
        condition: on-failure
    configs:
      - config.yml
    secrets:
      - dtr.cert.pem
      - cache.cert.pem
      - cache.key.pem
configs:
  config.yml:
    file: ./config.yml
secrets:
  dtr.cert.pem:
    file: ./certs/dtr.cert.pem
  cache.cert.pem:
    file: ./certs/cache.cert.pem
  cache.key.pem:
    file: ./certs/cache.key.pem
config.yml:

version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<cache-url>
  tls:
    certificate: /run/secrets/cache.cert.pem
    key: /run/secrets/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url>:<msr-port>
        cas:
          - /run/secrets/dtr.cert.pem
Next, add content to the three certificate PEM files, as described below.

| PEM file | Content to add |
|---|---|
| cache.cert.pem | The public key certificate for the cache. If the certificate has been signed by an intermediate certificate authority, append its public key certificate to the end of the file. |
| cache.key.pem | The unencrypted private key for the cache. |
| dtr.cert.pem | The MSR CA certificate. The cache communicates with MSR using TLS. If you have customized MSR to use TLS certificates issued by a globally trusted certificate authority, the cache automatically trusts MSR. If, however, you are using the default MSR configuration, or MSR is using TLS certificates signed by your own certificate authority, you need to configure the cache to trust MSR. To do so, add the MSR CA certificate to the certs/dtr.cert.pem file: curl -sk https://<msr-url>/ca > certs/dtr.cert.pem |
With this configuration, the cache fetches image layers from MSR and keeps a local copy for 24 hours. After that, if a user requests that image layer, the cache fetches it again from MSR.
The cache is configured to persist data inside its container. If something goes wrong with the cache service, Docker automatically redeploys a new container, but previously cached data is not persisted. You can customize the storage parameters, if you want to store the image layers using a persistent storage backend.
Also, the cache is configured to use port 443. If you’re already using that port in the swarm, update the deployment and configuration files to use another port. Don’t forget to create firewall rules for the port you choose.
Deploy the cache¶
Now that everything is set up, you can deploy the cache by running:
docker stack deploy --compose-file docker-stack.yml dtr-cache
You can check if the cache has been successfully deployed by running:
docker stack ps dtr-cache
Docker should show the dtr-cache stack is running.
Register the cache with MSR¶
Now that you’ve deployed a cache, you need to configure MSR to know about it. This is done using the POST /api/v0/content_caches API. You can use the MSR interactive API documentation to make this call.

In the MSR web UI, click the top-right menu, and choose API docs.

Navigate to the POST /api/v0/content_caches line and click it to expand. In the body field, include:

{
  "name": "region-asia",
  "host": "https://<cache-url>:<cache-port>"
}
Click the Try it out! button to make the API call.
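If you prefer scripting to the interactive API docs, the same call can be made with curl. The sketch below uses placeholder credentials and URLs, and python3 is used only to validate the payload before sending it:

```shell
# Build the registration payload; the name and host values are placeholders.
cat > payload.json <<'EOF'
{
  "name": "region-asia",
  "host": "https://cache.example.com:443"
}
EOF
# Validate the JSON before sending it.
python3 -m json.tool payload.json > /dev/null && echo "payload ok"
# Then POST it to MSR (commented out; requires a live MSR and credentials):
# curl --user <admin-user>:<password> \
#   --header "content-type: application/json" \
#   --data @payload.json \
#   "https://<msr-url>/api/v0/content_caches"
```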
Configure your user account¶
Now that you’ve registered the cache with MSR, users can configure their user profile to pull images from MSR or the cache.
In the MSR web UI, navigate to your Account, click the Settings tab, and change the Content Cache settings to use the cache you deployed.
If you need to set this for multiple users at the same time, use the /api/v0/accounts/{username}/settings API endpoint.
Now when you pull images, you’ll be using the cache.
Test that the cache is working¶
To validate that the cache is working as expected:
Push an image to MSR.
Make sure your user account is configured to use the cache.
Delete the image from your local system.
Pull the image from MSR.
To validate that the cache is actually serving your request, and to troubleshoot misconfigurations, check the logs for the cache service by running:
docker service logs --follow dtr-cache_cache
The most common causes of misconfiguration are TLS authentication issues:
MSR not trusting the cache TLS certificates.
The cache not trusting MSR TLS certificates.
Your machine not trusting MSR or the cache.
When this happens, check the cache logs to troubleshoot the misconfiguration.
Clean up sensitive files¶
The certificates and private keys are now managed by Docker in a secure way. Don’t forget to delete sensitive files you’ve created on disk, like the private keys for the cache:
rm -rf certs
Deploy a MSR cache with Kubernetes¶
This example guides you through deploying a MSR cache, assuming that you’ve got a MSR deployment up and running.
The MSR cache is going to be deployed as a Kubernetes Deployment, so that Kubernetes automatically takes care of scheduling and restarting the service if something goes wrong.
We’ll manage the cache configuration using a Kubernetes Config Map, and the TLS certificates using Kubernetes secrets. This allows you to manage the configurations securely and independently of the node where the cache is actually running.
Prepare the cache deployment¶
At the end of this exercise you should have the following file structure on your workstation:
├── dtrcache.yaml # Yaml file to deploy cache with a single command
├── config.yaml # The cache configuration file
└── certs
├── cache.cert.pem # The cache public key certificate, including any intermediaries
├── cache.key.pem # The cache private key
└── dtr.cert.pem # MSR CA certificate
The MSR cache will be deployed with a TLS endpoint. For this you will need to generate a TLS certificate and key from a certificate authority. The way you expose the MSR cache determines the SANs required for this certificate.
For example:
If you are deploying the MSR cache with an Ingress object, you will need to use an external MSR cache address that resolves to your ingress controller as part of your certificate.
If you are exposing the MSR cache through a Kubernetes cloud provider, you will need the external load balancer address as part of your certificate.
If you are exposing the MSR cache through a NodePort or a host port, you will need to use a node’s FQDN as a SAN in your certificate.
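As an illustration, a self-signed certificate carrying the external FQDN as a SAN could be generated as follows. This is a sketch for test environments only: cache.example.com is a placeholder, production deployments should use CA-issued certificates, and the -addext flag requires OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway key and self-signed certificate for a test cache.
# "cache.example.com" is a placeholder external FQDN.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout certs/cache.key.pem -out certs/cache.cert.pem \
  -subj "/CN=cache.example.com" \
  -addext "subjectAltName=DNS:cache.example.com"
```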
On your workstation, create a directory called certs. Within it, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache. Also place the certificate authority (including any intermediate certificate authorities) of the certificate from your MSR deployment. This can be sourced from the main MSR deployment using curl:

curl -s https://<dtr-fqdn>/ca -o certs/dtr.cert.pem
The MSR cache takes its configuration from a file mounted into the container. Below is an example configuration file for the MSR cache. This YAML should be customized for your environment with the relevant external MSR cache, worker node, or external load balancer FQDN.
With this configuration, the cache fetches image layers from MSR and keeps a local copy for 24 hours. After that, if a user requests that image layer, the cache will fetch it again from MSR.
The cache, by default, is configured to store image data inside its container. Therefore, if something goes wrong with the cache service and Kubernetes deploys a new pod, cached data is not persisted. Data is not lost, however, as it is still stored in the primary MSR. You can customize the storage parameters if you want the cached images to be backed by persistent storage.
Note
Kubernetes Persistent Volumes or Persistent Volume Claims would have to be used to provide persistent backend storage capabilities for the cache.
cat > config.yaml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-dtrcache> # External FQDN of the MSR cache, load balancer, or worker node
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the main MSR deployment
        cas:
          - /certs/dtr.cert.pem
EOF
The Kubernetes manifest file to deploy the MSR cache is independent of how you choose to expose the MSR cache within your environment. The example below has been tested to work on Universal Control Plane 3.1, however it should work on any Kubernetes cluster running version 1.8 or higher.
cat > dtrcache.yaml <<EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dtr-cache
  template:
    metadata:
      labels:
        app: dtr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: dtr-cache
          image: mirantis/dtr-content-cache:2.8.13
          command: ["/bin/sh"]
          args:
            - start.sh
            - /config/config.yaml
          ports:
            - name: https
              containerPort: 443
          volumeMounts:
            - name: dtr-certs
              readOnly: true
              mountPath: /certs/
            - name: dtr-cache-config
              readOnly: true
              mountPath: /config
      volumes:
        - name: dtr-certs
          secret:
            secretName: dtr-certs
        - name: dtr-cache-config
          configMap:
            defaultMode: 0666
            name: dtr-cache-config
EOF
Create Kubernetes Resources¶
At this point you should have a file structure on your workstation which looks like this:
├── dtrcache.yaml # Yaml file to deploy cache with a single command
├── config.yaml # The cache configuration file
└── certs
├── cache.cert.pem # The cache public key certificate
├── cache.key.pem # The cache private key
└── dtr.cert.pem # MSR CA certificate
You will also need the kubectl command line tool configured to talk to your Kubernetes cluster, either through a Kubernetes config file or a Mirantis Kubernetes Engine client bundle.
First we will create a Kubernetes namespace to logically separate all of our MSR cache components.
$ kubectl create namespace dtr
Create the Kubernetes Secrets, containing the MSR cache TLS certificates, and a Kubernetes ConfigMap containing the MSR cache configuration file.
$ kubectl -n dtr create secret generic dtr-certs \
--from-file=certs/dtr.cert.pem \
--from-file=certs/cache.cert.pem \
--from-file=certs/cache.key.pem
$ kubectl -n dtr create configmap dtr-cache-config \
--from-file=config.yaml
Finally create the Kubernetes Deployment.
$ kubectl create -f dtrcache.yaml
You can check whether the deployment has been successful by listing the running pods in your cluster:

kubectl -n dtr get pods

If you need to troubleshoot your deployment, you can use kubectl -n dtr describe pods <pods> and/or kubectl -n dtr logs <pods>.
For external access to the MSR cache, we need to expose the cache pods to the outside world. In Kubernetes there are multiple ways to expose a service, depending on your infrastructure and your environment. For more information, see Publishing services - service types in the Kubernetes docs. It is important, though, that you expose the cache through the same interface for which you created a certificate previously. Otherwise, the TLS certificate may not be valid for this alternative interface.
MSR Cache Exposure
You only need to expose your MSR cache through one external interface.
The first example exposes the MSR cache through NodePort. In this example you would have added a worker node’s FQDN to the TLS Certificate in step 1. Here you will be accessing the MSR cache through an exposed port on a worker node’s FQDN.
cat > dtrcacheservice.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  type: NodePort
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: dtr-cache
EOF
kubectl create -f dtrcacheservice.yaml
To find out which port the MSR cache has been exposed on, run:
$ kubectl -n dtr get services
You can test that your MSR cache is externally reachable by using curl to hit the API endpoint, using both a worker node’s external address and the NodePort:

curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog
{"repositories":[]}
This second example will expose the MSR cache through an ingress object. In this example, you will need to create a DNS rule in your environment that resolves an external MSR cache FQDN to the address of your ingress controller. You should also have specified the same external MSR cache FQDN within the MSR cache certificate in step 1.
Note
An ingress controller is a prerequisite for this example. If you have not deployed an ingress controller on your cluster, refer to Layer 7 Routing for MKE. This ingress controller will also need to support SSL passthrough.
cat > dtrcacheservice.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  selector:
    app: dtr-cache
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dtr-cache
  namespace: dtr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR cache address
      http:
        paths:
          - backend:
              serviceName: dtr-cache
              servicePort: 443
EOF
kubectl create -f dtrcacheservice.yaml
You can test that your MSR cache is externally reachable by using curl to hit the API endpoint. The address should be the one you defined above in the service definition file.

curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog
{"repositories":[]}
Configure caches for high availability¶
If you’re deploying a MSR cache in a zone with few users and with no uptime SLAs, a single cache service is enough.
But if you want to make sure your MSR cache is always available to users and is highly performant, you should configure your cache deployment for high availability.
System requirements¶
Multiple nodes, one for each cache replica.
A load balancer.
Shared storage system that has read-after-write consistency.
The way you deploy a MSR cache is the same, whether you’re deploying a single replica or multiple ones. The difference is that you should configure the replicas to store data using a shared storage system.
When using a shared storage system, once an image layer is cached, any replica is able to serve it to users without having to fetch a new copy from MSR.
MSR caches support the following storage systems:
Alibaba Cloud Object Storage Service
Amazon S3
Azure Blob Storage
Google Cloud Storage
NFS
OpenStack Swift
If you’re using NFS as a shared storage system, make sure the shared directory is configured with:
/dtr-cache *(rw,root_squash,no_wdelay)
This ensures read-after-write consistency for NFS.
You should also mount the NFS directory on each node where you’ll deploy a MSR cache replica.
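For example, the mount could be declared in /etc/fstab on each node; the server address and mount point below are placeholders:

```
<nfs-server>:/dtr-cache  /mnt/dtr-cache  nfs  rw,hard  0 0
```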
Label the MSR cache nodes¶
Use SSH to log in to a manager node of the swarm where you want to deploy the MSR cache.
If you’re using MKE to manage that swarm you can also use a client bundle to configure your Docker CLI client to connect to that swarm.
Label each node that is going to run the cache replica, by running:
docker node update --label-add dtr.cache=true <node-hostname>
Configure and deploy the cache¶
Create the cache configuration files by following the instructions for deploying a single cache replica.
Make sure you adapt the storage object, using the configuration options for the shared storage of your choice.
Configure your load balancer¶
The last step is to deploy a load balancer of your choice to load-balance requests across the multiple replicas you deployed.
MSR cache configuration reference¶
MSR caches are based on Docker Registry, and use the same configuration file format.
The MSR cache extends the Docker Registry configuration file format by introducing a new middleware called downstream that has three configuration options: blobttl, upstreams, and cas:

# Settings that you would include in a
# Docker Registry configuration file followed by
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - <Externally-reachable address for upstream registry or content cache in format scheme://host:port>
        cas:
          - <Absolute path to next-hop upstream registry or content cache CA certificate in the container's filesystem>
Below you can find the description for each parameter, specific to MSR caches.

| Parameter | Required | Description |
|---|---|---|
| blobttl | no | A positive integer and an optional unit of time suffix to determine the TTL (Time to Live) value for blobs in the cache. If blobttl is configured, storage.delete.enabled must be set to true. Acceptable units of time are ns (nanoseconds), us (microseconds), ms (milliseconds), s (seconds), m (minutes), and h (hours). |
| cas | no | An optional list of absolute paths to PEM-encoded CA certificates of upstream registries or content caches. |
| upstreams | yes | A list of externally-reachable addresses for upstream registries or content caches. If more than one host is specified, the cache pulls from registries in round-robin order. |
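A bad blobttl value silently breaks the cache configuration, so it can be worth validating candidates before writing config.yml. A minimal shell sketch, assuming Go-style duration suffixes (ns, us, ms, s, m, h):

```shell
# Validate a blobttl candidate: a positive integer with an optional
# time-unit suffix. "24h" is an example value.
blobttl="24h"
if echo "$blobttl" | grep -Eq '^[0-9]+(ns|us|ms|s|m|h)?$'; then
  echo "valid blobttl: $blobttl"
else
  echo "invalid blobttl: $blobttl" >&2
fi
```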
Garbage collection¶
You can configure the Mirantis Secure Registry (MSR) to automatically delete unused image layers, thus saving you disk space. This process is also known as garbage collection.
How MSR deletes unused layers¶
First you configure MSR to run a garbage collection job on a fixed schedule. At the scheduled time, MSR:
Identifies and marks unused image layers.
Deletes the marked image layers.
MSR uses online garbage collection. This allows MSR to run garbage collection without setting MSR to read-only/offline mode. In previous versions, garbage collection would set MSR to read-only/offline mode so MSR would reject pushes.
Schedule garbage collection¶
In your browser, navigate to https://<msr-url> and log in with your credentials. Select System in the left-side navigation panel, and then click the Garbage collection tab to schedule garbage collection.
Select for how long the garbage collection job should run:
Until done: Run the job until all unused image layers are deleted.
For x minutes: Only run the garbage collection job for a maximum of x minutes at a time.
Never: Never delete unused image layers.
If you select Until done or For x minutes, you can specify a recurring schedule in UTC (Coordinated Universal Time) with the following options:
Custom cron schedule - (Hour, Day of Month, Month, Weekday)
Daily at midnight UTC
Every Saturday at 1am UTC
Every Sunday at 1am UTC
Do not repeat
Once everything is configured you can choose to Save & Start to run the garbage collection job immediately, or just Save to run the job on the next scheduled interval.
Review the garbage collection job log¶
If you clicked Save & Start previously, verify that the garbage collection routine started by navigating to Job Logs.
Under the hood¶
Each image stored in MSR is made up of multiple files:
A list of image layers that are unioned to represent the image filesystem
A configuration file that contains the architecture of the image and other metadata
A manifest file containing the list of all layers and configuration file for an image
All these files are tracked in MSR’s metadata store in RethinkDB, in a content-addressable way: each file corresponds to a cryptographic hash of its content. This means that if two image tags hold exactly the same content, MSR stores it only once, even if the tag names differ, while making hash collisions nearly impossible.
As an example, if wordpress:4.8 and wordpress:latest have the same content, the content is stored only once. If you delete one of these tags, the other won’t be deleted.
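The deduplication follows directly from hashing: identical bytes always produce the same digest, so two tags pointing at the same bytes can share one stored copy. A quick illustration ("layer-bytes" is a stand-in for real layer content):

```shell
# Two pushes of identical content produce the same sha256 digest,
# so a registry only needs to store the bytes once.
d1=$(printf 'layer-bytes' | sha256sum | cut -d' ' -f1)
d2=$(printf 'layer-bytes' | sha256sum | cut -d' ' -f1)
[ "$d1" = "$d2" ] && echo "identical digests: content stored once"
```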
This means that when you delete an image tag, MSR cannot delete the underlying files of that image tag since other tags may also use the same files.
To facilitate online garbage collection, MSR makes a couple of changes to how it uses the storage backend:
Layer links – the references within repository directories to their associated blobs – are no longer in the storage backend. That is because MSR stores these references in RethinkDB instead to enumerate through them during the marking phase of garbage collection.
Any layers created after an upgrade to 2.6 are no longer content-addressed in the storage backend. Many cloud provider backends do not give the sequential consistency guarantees required to deal with the simultaneous deleting and re-pushing of a layer in a predictable manner. To account for this, MSR assigns each newly pushed layer a unique ID and performs the translation from content hash to ID in RethinkDB.
To delete unused files, MSR does the following:
Establish a cutoff time.
Mark each referenced manifest file with a timestamp. When manifest files are pushed to MSR, they are also marked with a timestamp.
Sweep each manifest file that does not have a timestamp after the cutoff time.
If a file is never referenced – which means no image tag uses it – delete the file.
Repeat the process for blob links and blob descriptors.
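The mark-and-sweep logic above can be pictured with a toy sketch. This is an illustration only, not MSR’s actual implementation, and the digests are made up:

```shell
# Blobs referenced by at least one manifest are kept ("marked");
# everything else is swept. Digests here are placeholders.
referenced="sha256:aaa sha256:bbb"
all_blobs="sha256:aaa sha256:bbb sha256:ccc"
for blob in $all_blobs; do
  case " $referenced " in
    *" $blob "*) echo "keep  $blob" ;;
    *)           echo "sweep $blob" ;;
  esac
done
```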
Allow users to create repositories when pushing¶
By default MSR only allows pushing images if the repository exists, and you have write access to the repository.
As an example, if you try to push to dtr.example.org/library/java:9 and the library/java repository doesn’t exist yet, your push fails.
You can configure MSR to allow pushing to repositories that don’t exist yet. As an administrator, log into the MSR web UI, navigate to the Settings page, and enable Create repository on push.
From now on, when a user pushes to their personal sandbox (<user-name>/<repository>), or if the user is an administrator for the organization (<org>/<repository>), MSR will create the repository if it doesn’t exist yet. In that case, the repository is created as private.
Alternatively, you can enable the Create repository on push setting from the CLI:

curl --user <admin-user>:<password> \
  --request POST "<msr-url>/api/v0/meta/settings" \
  --header "accept: application/json" \
  --header "content-type: application/json" \
  --data "{ \"createRepositoryOnPush\": true}"
Use a web proxy¶
Mirantis Secure Registry makes outgoing connections to check for new versions, automatically renew its license, and update its vulnerability database. If MSR can’t access the internet, then you’ll have to manually apply updates.
One option to keep your environment secure while still allowing MSR access to the internet is to use a web proxy. If you have an HTTP or HTTPS proxy, you can configure MSR to use it. To avoid downtime you should do this configuration outside business peak hours.
As an administrator, log into a node where MSR is deployed, and run:
docker run -it --rm \
  mirantis/dtr:2.8.13 reconfigure \
  --http-proxy http://<domain>:<port> \
  --https-proxy https://<domain>:<port> \
  --ucp-insecure-tls
To confirm how MSR is configured, check the Settings page on the web UI.
If the web proxy requires authentication, you can include the username and password in the command, as shown below:

docker run -it --rm \
  mirantis/dtr:2.8.13 reconfigure \
  --http-proxy username:password@<domain>:<port> \
  --https-proxy username:password@<domain>:<port> \
  --ucp-insecure-tls
Note
MSR will hide the password portion of the URL when it is displayed in the MSR UI.
Manage applications¶
With the introduction of the experimental app plugin to the Docker CLI, MSR has been enhanced to include application management. Starting from MSR 2.7, you can push an app to your MSR repository and have an application be clearly distinguished from individual and multi-architecture container images as well as plugins. When you push an application to MSR, you see two image tags:
| Image | Tag | Type | Under the hood |
|---|---|---|---|
| Invocation | <app-tag>-invoc | Container image represented by OS and architecture (for example, linux amd64) | Uses Mirantis Container Runtime. The Docker daemon is responsible for building and pushing the image. |
| Application with bundled components | <app-tag> | Application | Uses the app client to build and push the image. |
Notice the app-specific tags, app and app-invoc, with scan results for the bundled components in the former and the invocation image in the latter. To view the scanning results for the bundled components, click View Details next to the app tag.
Click on the image name or digest to see the vulnerabilities for that specific image.
Parity with existing repository and image features¶
The following repository and image management events also apply to applications:
Creation
MSR pushes
Limitations¶
You cannot sign an application since the Notary signer cannot sign OCI (Open Container Initiative) indices.
Scanning-based policies do not take effect until after all images bundled in the application have been scanned.
Docker Content Trust (DCT) does not work for applications and multi-arch images, which are the same under the hood.
Troubleshooting tips¶
x509 certificate errors¶
fixing up "35.165.223.150/admin/lab-words:0.1.0" for push: failed to resolve "35.165.223.150/admin/lab-words:0.1.0-invoc", push the image to the registry before pushing the bundle: failed to do request: Head https://35.165.223.150/v2/admin/lab-words/manifests/0.1.0-invoc: x509: certificate signed by unknown authority
Workaround¶
Check that your MSR has been configured with your TLS certificate’s Fully Qualified Domain Name (FQDN).
For docker app testing purposes, you can pass the --insecure-registries option when pushing an application:
docker app push hello-world --tag 35.165.223.150/admin/lab-words:0.1.0 --insecure-registries 35.165.223.150
35.165.223.150/admin/lab-words:0.1.0-invoc
Successfully pushed bundle to 35.165.223.150/admin/lab-words:0.1.0. Digest is sha256:bd1a813b6301939fa46e617f96711e0cca1e4065d2d724eb86abde6ef7b18e23.
Known Issues¶
See MSR 2.7 Release Notes for known issues related to applications in MSR.
Manage images¶
Create a repository¶
Since MSR is secure by default, you need to create the image repository before being able to push the image to MSR.
In this example, we’ll create the wordpress repository in MSR.

To create an image repository for the first time, log in to https://<msr-url> with your MKE credentials. Select Repositories from the left-side navigation panel, and click New repository in the upper-right corner of the Repositories page.
Select your namespace and enter a name for your repository (upper case letters and some special characters not accepted). You can optionally add a description.
Choose whether your repository is public or private:

Public repositories are visible to all users, but can only be changed by users with write permissions to them.

Private repositories can only be seen by users that have been granted permissions to that repository.
Click Create to create the repository.
When creating a repository in MSR, the full name of the repository becomes <msr-domain-name>/<user-or-org>/<repository-name>. In this example, the full name of our repository will be msr-example.com/test-user-1/wordpress.

Optional. Click Show advanced settings to make your tags immutable or set your image scanning trigger.
Note
Starting in DTR 2.6, repository admins can enable tag pruning by setting a tag limit. This can only be set if you turn off Immutability and allow your repository tags to be overwritten.
Image name size for MSR
When creating an image name for use with MSR, ensure that the organization and repository name is less than 56 characters and that the entire image name, which includes the domain, organization, and repository name, does not exceed 255 characters.
The 56-character <user-or-org/repository-name> limit in MSR is due to an underlying limitation in how the image name information is stored within MSR metadata in RethinkDB. RethinkDB currently has a Primary Key length limit of 127 characters.
When MSR stores the above data, it appends a sha256sum of 72 characters to the end of the value to ensure uniqueness within the database. If the <user-or-org/repository-name> is 56 characters or longer, it exceeds the 127-character limit in RethinkDB (72 + 56 = 128).
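The arithmetic can be checked up front with a quick shell sketch; the repository name and domain below are placeholders:

```shell
# Check a candidate repository path against MSR's documented limits.
# "test-user-1/wordpress" and "msr-example.com" are placeholder values.
repo="test-user-1/wordpress"
full="msr-example.com/${repo}"
if [ "${#repo}" -lt 56 ] && [ "${#full}" -le 255 ]; then
  echo "ok: ${repo} is ${#repo} characters"
else
  echo "too long: ${repo} would exceed MSR's metadata limits"
fi
```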
Multi-architecture images
While you can enable just-in-time creation of multi-architecture image repositories when creating a repository via the API, Docker does not recommend using this option. This breaks content trust and causes other issues. To manage Docker image manifests and manifest lists, use the experimental CLI command, docker manifest, instead.
Review repository info¶
The Repository Info tab, which you can view by clicking the View details link for any visible repository on the Repositories page, includes the following details:
README, which you can edit if you have admin rights to the repository
Docker Pull Command
Your repository permissions
To learn more about pulling images, see Pull and push images. To review your repository permissions, do the following:
Navigate to https://<msr-url> and log in with your MKE credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.

You should see the Info tab by default. Notice Your Permission under Docker Pull Command.
Hover over the question mark next to your permission level to view the list of repository events you have access to.
Limitations
Your permissions list may include repository events that are not displayed in the Activity tab. It is also not an exhaustive list of event types displayed on your activity stream. To learn more about repository events, see Audit Repository Events.
Pull and push images¶
You interact with Mirantis Secure Registry in the same way you interact with Docker Hub or any other registry:

docker login <msr-url>: authenticates you on MSR

docker pull <image>:<tag>: pulls an image from MSR

docker push <image>:<tag>: pushes an image to MSR
Pull an image¶
Pulling an image from Mirantis Secure Registry is the same as pulling an image from Docker Hub or any other registry. Since MSR is secure by default, you always need to authenticate before pulling images.
In this example, MSR can be accessed at msr-example.com, and the user was granted permissions to access the nginx and wordpress repositories in the library organization.
Click on the repository name to see its details.
To pull the latest tag of the library/wordpress image, run:
docker login msr-example.com
docker pull msr-example.com/library/wordpress:latest
Push an image¶
Before you can push an image to MSR, you need to create a repository to store the image. In this example, the full name of the repository is `msr-example.com/library/wordpress`.
Tag the image¶
In this example, we pull the wordpress image from Docker Hub and tag it with the full MSR repository name. A tag defines where the image was pulled from, and where it will be pushed to.
# Pull from Docker Hub the latest tag of the wordpress image
docker pull wordpress:latest
# Tag the wordpress:latest image with the full repository name we've created in MSR
docker tag wordpress:latest msr-example.com/library/wordpress:latest
Push the image¶
Now that you have tagged the image, you only need to authenticate and push the image to MSR.
docker login msr-example.com
docker push msr-example.com/library/wordpress:latest
On the web interface, navigate to the Tags tab on the repository page to confirm that the tag was successfully pushed.
Windows images¶
The base layers of the Microsoft Windows base images have restrictions on how they can be redistributed. When you push a Windows image to MSR, Docker only pushes the image manifest and all the layers on top of the Windows base layers. The Windows base layers are not pushed to MSR. This means that:
MSR won’t be able to scan those images for vulnerabilities since MSR doesn’t have access to the layers (the Windows base layers are scanned by Docker Hub, however).
When a user pulls a Windows image from MSR, the Windows base layers are automatically fetched from Microsoft and the other layers are fetched from MSR.
This default behavior is recommended for Mirantis Container Runtime installations. For air-gapped or similarly limited setups, however, you can optionally configure Docker to also push the Windows base layers to MSR.
To configure Docker to always push Windows layers to MSR, add the following to your `C:\ProgramData\docker\config\daemon.json` configuration file:

{
  "allow-nondistributable-artifacts": ["<msr-domain>:<msr-port>"]
}
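If `daemon.json` already contains other settings, editing it by hand risks clobbering them. The helper below is a minimal sketch of merging the key into an existing file; the function name and the example paths are illustrative, not part of Docker or MSR.

```python
import json

def allow_nondistributable(config_path, registry):
    """Add a registry to the allow-nondistributable-artifacts list in
    daemon.json, preserving any settings the file already contains."""
    try:
        with open(config_path) as f:
            config = json.load(f)
    except FileNotFoundError:
        config = {}
    registries = config.setdefault("allow-nondistributable-artifacts", [])
    if registry not in registries:
        registries.append(registry)
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
    return config

# Example (hypothetical path and address):
# allow_nondistributable(r"C:\ProgramData\docker\config\daemon.json",
#                        "msr-example.com:443")
```

Restart the Docker daemon after changing the file so the new setting takes effect.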
Delete images¶
To delete an image, navigate to the Tags tab of the repository page on the MSR web interface. In the Tags tab, select all the image tags you want to delete, and click Delete.
You can also delete all image versions by deleting the repository. To delete a repository, navigate to Settings and click Delete under Delete Repository.
Delete signed images¶
MSR only allows deleting images that have not been signed. You first need to delete all the trust data associated with an image before you can delete the image itself.

There are three steps to delete a signed image:

1. Find which roles signed the image.
2. Remove the trust data for each of those roles.
3. Delete the now-unsigned image.
Find which roles signed an image¶
To find which roles signed an image, you first need to learn which roles are trusted to sign the image.
Configure your Notary client and run:
notary delegation list msr-example.com/library/wordpress
In this example, the repository owner delegated trust to the `targets/releases` and `targets/qa` roles:

ROLE                PATHS             KEY IDS                    THRESHOLD
----                -----             -------                    ---------
targets/releases    "" <all paths>    c3470c45cefde5...2ea9bc8   1
targets/qa          "" <all paths>    c3470c45cefde5...2ea9bc8   1
Now that you know which roles are allowed to sign images in this repository, you can learn which roles actually signed it:
# Check if the image was signed by the "targets" role
notary list msr-example.com/library/wordpress
# Check if the image was signed by a specific role
notary list msr-example.com/library/wordpress --roles <role-name>
In this example, the image was signed by three roles: `targets`, `targets/releases`, and `targets/qa`.
Remove trust data for a role¶
Once you know which roles signed an image, you can remove the trust data for those roles. Only users with the private keys for those roles can perform this operation.
For each role that signed the image, run:
notary remove msr-example.com/library/wordpress <tag> \
--roles <role-name> --publish
Once you have removed the trust data for all roles, MSR shows the image as unsigned, and you can then delete it.
Scan images for vulnerabilities¶
Mirantis Secure Registry (MSR) can scan images in your repositories to verify that they are free from known security vulnerabilities or exposures, using Docker Security Scanning. The results of these scans are reported for each image tag in a repository.
Security Scanning is available as an add-on to MSR, and an administrator configures it for your MSR instance. If you do not see security scan results available on your repositories, your organization may not have purchased the Security Scanning feature or it may be disabled.
Note
Only users with write access to a repository can manually start a scan. Users with read-only access can view the scan results, but cannot start a new scan.
The Docker Security Scan process¶
Scans run either on demand when you click the Start a Scan link or
Scan button, or automatically on any docker push
to the repository.
First the scanner performs a binary scan on each layer of the image, identifies the software components in each layer, and indexes the SHA of each component in a bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, so vulnerable components are discovered even if they are statically linked or under a different name.
The scan then compares the SHA of each component against the US National Vulnerability Database that is installed on your MSR instance. When this database is updated, MSR reviews the indexed components for newly discovered vulnerabilities.
MSR scans both Linux and Windows images, but by default Docker does not push foreign image layers for Windows images, so MSR cannot scan them. If you want MSR to scan your Windows images, configure Docker to always push the Windows base layers, as described in Pull and push images, and MSR will then scan the non-foreign layers.
Security scan on push¶
By default, Docker Security Scanning runs automatically on every `docker push` to an image repository.

If your MSR instance is configured in this way, you do not need to do anything once your `docker push` completes. The scan runs automatically, and the results are reported in the repository's Tags tab after the scan finishes.
Manual scanning¶
If your repository owner enabled Docker Security Scanning but disabled automatic scanning, you can manually start a scan for images in repositories you have write access to.
To start a security scan, navigate to the repository Tags tab on the web interface, click “View details” next to the relevant tag, and click Scan.
MSR begins the scanning process. You will need to refresh the page to see the results once the scan is complete.
Change the scanning mode¶
You can change the scanning mode for each individual repository at any time. You might want to disable scanning if you are pushing an image repeatedly during troubleshooting and don’t want to waste resources scanning and re-scanning, or if a repository contains legacy code that is not used or updated frequently.
Note
To change an individual repository's scanning mode, you must have write or administrator access to the repository.
To change the repository scanning mode:
Navigate to the repository, and click the Settings tab.
Scroll down to the Image scanning section.
Select the desired scanning mode.
View security scan results¶
Once MSR has run a security scan for an image, you can view the results.
The Tags tab for each repository includes a summary of the most recent scan results for each image.
The text Clean in green indicates that the scan did not find any vulnerabilities.
Red or orange text indicates that vulnerabilities were found; the number of vulnerabilities is included on the same line by severity: Critical, Major, Minor.
If the vulnerability scan could not detect the version of a component, it reports the vulnerabilities for all versions of that component.
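The version-unknown rule above can be illustrated with a short sketch. This is not MSR's actual data model; the function, field names, and CVE identifiers below are hypothetical, chosen only to show the matching behavior.

```python
def vulns_for_component(name, version, vuln_db):
    """Return CVE IDs matching a component. When the scanner could not
    determine the component's version (modeled here as version=None),
    every known vulnerability for that component is reported,
    regardless of which versions it actually affects."""
    matches = []
    for vuln in vuln_db:
        if vuln["component"] != name:
            continue
        if version is None or version in vuln["affected_versions"]:
            matches.append(vuln["cve"])
    return matches

# Hypothetical database entries, for illustration only.
example_db = [
    {"component": "libexample", "affected_versions": ["1.0"], "cve": "CVE-0000-0001"},
    {"component": "libexample", "affected_versions": ["2.0"], "cve": "CVE-0000-0002"},
]
```

With a known version, only the matching entry is reported; with an unknown version, both entries are, which is why unscannable version strings inflate the reported count.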
From the repository Tags tab, you can click View details for a specific tag to see the full scan results. The top of the page also includes metadata about the image, including the SHA, image size, last push date, user who initiated the push, the security scan summary, and the security scan progress.
The scan results for each image include two different modes so you can quickly view details about the image, its components, and any vulnerabilities found.
The Layers view lists the layers of the image in the order in which they are built by the Dockerfile.
This view can help you find exactly which command in the build introduced the vulnerabilities, and which components are associated with that single command. Click a layer to see a summary of its components. You can then click on a component to switch to the Component view and get more details about the specific item.
Note
The layers view can be long, so be sure to scroll down if you don’t immediately see the reported vulnerabilities.
The Components view lists the individual component libraries indexed by the scanning system, in order of severity and number of vulnerabilities found, with the most vulnerable library listed first.
Click on an individual component to view details about the vulnerability it introduces, including a short summary and a link to the official CVE database report. A single component can have multiple vulnerabilities, and the scan report provides details on each one. The component details also include the license type used by the component, and the filepath to the component in the image.
If you find that an image in your registry contains vulnerable components, you can use the linked CVE scan information in each scan report to evaluate the vulnerability and decide what to do.
If you discover vulnerable components, you should check if there is an updated version available where the security vulnerability has been addressed. If necessary, you can contact the component’s maintainers to ensure that the vulnerability is being addressed in a future version or a patch update.
If the vulnerability is in a base layer (such as an operating system), you might not be able to correct the issue in the image. In this case, you can switch to a different version of the base layer, or you can find an equivalent, less vulnerable base layer.
Address vulnerabilities in your repositories by updating the images to use updated and corrected versions of vulnerable components, or by using a different component offering the same functionality. When you have updated the source code, run a build to create a new image, tag the image, and push the updated image to your MSR instance. You can then re-scan the image to confirm that you have addressed the vulnerabilities.
Override a vulnerability¶
MSR scans images for vulnerabilities. At times, however, it may report vulnerabilities that you know have been fixed; when that happens, you can dismiss the warning.
1. Log in to the MSR web UI.
2. Click Repositories in the left-side navigation panel, and locate the repository that has been scanned.
3. Click View details to review the image scan results, and select Components to see the vulnerabilities for each component packaged in the image.
4. Select the component with the vulnerability you want to ignore, navigate to the vulnerability, and click Hide.
Once dismissed, the vulnerability is hidden system-wide and will no longer be reported as a vulnerability on affected images with the same layer IDs or digests. In addition, MSR will not reevaluate the promotion policies that have been set up for the repository.
If after hiding a particular vulnerability you want the promotion policy for the image to be reevaluated, click Promote.
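Because a dismissal is keyed on the layer rather than on a repository, it propagates to every image that shares that layer. The sketch below illustrates that behavior only; the structures and names are hypothetical, not MSR internals.

```python
# Hidden findings are keyed on (CVE ID, layer digest), not on a repository,
# which is why one dismissal hides the finding on every image sharing
# that layer.
hidden = set()

def hide(cve_id, layer_digest):
    """Record an admin's dismissal of a finding on a specific layer."""
    hidden.add((cve_id, layer_digest))

def visible_findings(findings):
    """findings: list of (cve_id, layer_digest) tuples for one image."""
    return [f for f in findings if f not in hidden]
```

Hiding a finding on one image's base layer also clears it from any other image built on the same layer digest.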
Sign images¶
Sign an image¶
Two key components of the Mirantis Secure Registry are the Notary Server and the Notary Signer. These two containers provide the required components for using Docker Content Trust (DCT) out of the box. Docker Content Trust allows you to sign image tags, therefore giving consumers a way to verify the integrity of your image.
As part of MSR, both the Notary and the Registry servers are accessed through a front-end proxy, with both components sharing the MKE’s RBAC (Role-based Access Control) Engine. Therefore, you do not need additional Docker client configuration in order to use DCT.
DCT is integrated with the Docker CLI, and allows you to:
Configure repositories
Add signers
Sign images using the docker trust command
Sign images that MKE can trust¶
MKE has a feature that prevents untrusted images from being deployed on the cluster. To use this feature, you need to sign and push images to your MSR. To tie the signed images back to MKE, you need to sign the images with the private keys of the MKE users.
From an MKE client bundle, use `key.pem` as your private key and `cert.pem` as your public key on an x509 certificate.
To sign images in a way that MKE can trust, you need to:
Download a client bundle for the user account you want to use for signing the images.
Add the user’s private key to your machine’s trust store.
Initialize trust metadata for the repository.
Delegate signing for that repository to the MKE user.
Sign the image.
The following example shows the `nginx` image getting pulled from Docker Hub, tagged as `msr.example.com/prod/nginx:1`, pushed to MSR, and signed in a way that is trusted by MKE.
After downloading and extracting an MKE client bundle into your local directory, you need to load the private key into the local Docker trust store (`~/.docker/trust`). To illustrate the process, we will use `jeff` as an example user.
$ docker trust key load --name jeff key.pem
Loading key from "key.pem"...
Enter passphrase for new jeff key with ID a453196:
Repeat passphrase for new jeff key with ID a453196:
Successfully imported key from key.pem
Next, initiate trust metadata for an MSR repository. If you have not already done so, navigate to the MSR web UI and create a repository for your image. This example uses the `nginx` repository in the `prod` namespace.
As part of initiating the repository, the public key of the MKE user needs to be added to the Notary server as a signer for the repository. You will be asked for a number of passphrases to protect the keys. Make a note of these passphrases.
$ docker trust signer add --key cert.pem jeff msr.example.com/prod/nginx
Adding signer "jeff" to msr.example.com/prod/nginx...
Initializing signed repository for msr.example.com/prod/nginx...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID e0d15a2:
Repeat passphrase for new repository key with ID e0d15a2:
Successfully initialized "msr.example.com/prod/nginx"
Successfully added signer: jeff to msr.example.com/prod/nginx
Inspect the trust metadata of the repository to make sure the user has been added correctly.
$ docker trust inspect --pretty msr.example.com/prod/nginx
No signatures for msr.example.com/prod/nginx
List of signers and their keys for msr.example.com/prod/nginx
SIGNER KEYS
jeff 927f30366699
Administrative keys for msr.example.com/prod/nginx
Repository Key: e0d15a24b7...540b4a2506b
Root Key: b74854cb27...a72fbdd7b9a
Finally, user `jeff` can sign an image tag. The following steps include downloading the image from Docker Hub, tagging it for Jeff's MSR repository, pushing the image to MSR, and signing the tag with Jeff's keys.
$ docker pull nginx:latest
$ docker tag nginx:latest msr.example.com/prod/nginx:1
$ docker trust sign msr.example.com/prod/nginx:1
Signing and pushing trust data for local image msr.example.com/prod/nginx:1, may overwrite remote trust data
The push refers to repository [msr.example.com/prod/nginx]
6b5e2ed60418: Pushed
92c15149e23b: Pushed
0a07e81f5da3: Pushed
1: digest: sha256:5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 size: 948
Signing and pushing trust metadata
Enter passphrase for jeff key with ID 927f303:
Successfully signed msr.example.com/prod/nginx:1
Inspect the trust metadata again to make sure the image tag has been signed successfully.
$ docker trust inspect --pretty msr.example.com/prod/nginx:1
Signatures for msr.example.com/prod/nginx:1
SIGNED TAG DIGEST SIGNERS
1 5b49c8e2c8...90fbb2033 jeff
List of signers and their keys for msr.example.com/prod/nginx:1
SIGNER KEYS
jeff 927f30366699
Administrative keys for msr.example.com/prod/nginx:1
Repository Key: e0d15a24b74...96540b4a2506b
Root Key: b74854cb27c...1ea72fbdd7b9a
Alternatively, you can review the signed image from the MSR web UI.
You have the option to sign an image using the keys of multiple MKE users. For example, suppose an image needs to be signed by a member of the Security team and a member of the Developers team. Assuming `jeff` is a member of the Developers team, we only need to add a member of the Security team.
To do so, first add the private key of the Security team member to the local Docker trust store.
$ docker trust key load --name ian key.pem
Loading key from "key.pem"...
Enter passphrase for new ian key with ID 5ac7d9a:
Repeat passphrase for new ian key with ID 5ac7d9a:
Successfully imported key from key.pem
Upload the user’s public key to the Notary Server and sign the image.
You will be asked for the developer `jeff`'s passphrase, as well as the `ian` user's passphrase, to sign the tag.
$ docker trust signer add --key cert.pem ian msr.example.com/prod/nginx
Adding signer "ian" to msr.example.com/prod/nginx...
Enter passphrase for repository key with ID e0d15a2:
Successfully added signer: ian to msr.example.com/prod/nginx
$ docker trust sign msr.example.com/prod/nginx:1
Signing and pushing trust metadata for msr.example.com/prod/nginx:1
Existing signatures for tag 1 digest 5b49c8e2c890fbb0a35f6050ed3c5109c5bb47b9e774264f4f3aa85bb69e2033 from:
jeff
Enter passphrase for jeff key with ID 927f303:
Enter passphrase for ian key with ID 5ac7d9a:
Successfully signed msr.example.com/prod/nginx:1
Finally, check the tag again to make sure it includes two signers.
$ docker trust inspect --pretty msr.example.com/prod/nginx:1
Signatures for msr.example.com/prod/nginx:1
SIGNED TAG DIGEST SIGNERS
1 5b49c8e2c89...5bb69e2033 jeff, ian
List of signers and their keys for msr.example.com/prod/nginx:1
SIGNER KEYS
jeff 927f30366699
ian 5ac7d9af7222
Administrative keys for msr.example.com/prod/nginx:1
Repository Key: e0d15a24b741ab049470298734397afbea539400510cb30d3b996540b4a2506b
Root Key: b74854cb27cc25220ede4b08028967d1c6e297a759a6939dfef1ea72fbdd7b9a
Delete trust data¶
If an administrator wants to delete an MSR repository that contains trust metadata, they will be prompted to delete the trust metadata before removing the repository.
To delete trust metadata, you need to use the Notary CLI.
$ notary delete msr.example.com/prod/nginx --remote
Deleting trust data for repository msr.example.com/prod/nginx
Enter username: admin
Enter password:
Successfully deleted local and remote trust data for repository msr.example.com/prod/nginx
If you do not include the `--remote` flag, Notary deletes the locally cached content but does not delete the data from the Notary server.
Using Docker Content Trust with a Remote MKE Cluster¶
For more advanced deployments, you may want to share one Mirantis Secure Registry across multiple Mirantis Kubernetes Engine clusters. However, customers wanting to adopt this model alongside the Only Run Signed Images MKE feature run into problems, as each MKE operates an independent set of users.
Docker Content Trust (DCT) gets around this problem, since users from a remote MKE are able to sign images in the central MSR and still apply runtime enforcement.
In the following example, we will connect MSR managed by MKE cluster 1 with a remote MKE cluster which we are calling MKE cluster 2, sign the image with a user from MKE cluster 2, and provide runtime enforcement within MKE cluster 2. This process could be repeated over and over, integrating MSR with multiple remote MKE clusters, signing the image with users from each environment, and then providing runtime enforcement in each remote MKE cluster separately.
Note
Before attempting this guide, familiarize yourself with Docker Content Trust and Only Run Signed Images on a single MKE. Many of the concepts within this guide may be new without that background.
Prerequisites¶
- Cluster 1, running UCP 3.0.x or higher, with a DTR 2.5.x or higher deployed within the cluster.
- Cluster 2, running UCP 3.0.x or higher, with no MSR node.
- Nodes on Cluster 2 need to trust the Certificate Authority which signed MSR's TLS certificate. This can be tested by logging on to a cluster 2 virtual machine and running `curl https://msr.example.com`.
- The MSR TLS certificate needs to be properly configured, ensuring that the Loadbalancer/Public Address field has been configured, with this address included within the certificate.
- A machine with the Docker Client (CE 17.12 / EE 1803 or newer) installed, as this contains the relevant docker trust commands.
Registering MSR with a remote Mirantis Kubernetes Engine¶
As there is no registry running within cluster 2, by default MKE will not know where to check for trust data. Therefore, the first thing we need to do is register MSR within the remote MKE in cluster 2. When you normally install MSR, this registration process happens by default to a local MKE, or cluster 1.
Note
The registration process allows the remote MKE to get signature data from MSR, however this will not provide Single Sign On (SSO). Users on cluster 2 will not be synced with cluster 1’s MKE or MSR. Therefore when pulling images, registry authentication will still need to be passed as part of the service definition if the repository is private. See the Kubernetes example.
To add a new registry, retrieve the Certificate Authority (CA) used to sign the MSR TLS certificate through the MSR URL's `/ca` endpoint.
$ curl -ks https://msr.example.com/ca > dtr.crt
Next, convert the MSR certificate into a JSON configuration file for registration within the MKE for cluster 2.
You can find a template of the `dtr-bundle.json` file below. Replace the host address with your MSR URL, and enter the contents of the MSR CA certificate between the `\n` newline escapes.
Note
JSON Formatting
Ensure there are no line breaks between each line of the MSR CA certificate within the JSON file. Use your favorite JSON formatter for validation.
$ cat dtr-bundle.json
{
"hostAddress": "msr.example.com",
"caBundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}
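Producing the single-line `caBundle` value by hand is error-prone. The sketch below (function name and paths are illustrative, not part of MKE or MSR) builds the file from the downloaded certificate, relying on JSON serialization to escape the certificate's real line breaks as `\n`:

```python
import json

def make_bundle(host, ca_path, out_path="dtr-bundle.json"):
    """Build the registration payload. json.dump escapes the
    certificate's real newlines as \\n, so the resulting file holds
    the caBundle as one line, with no literal line breaks inside it."""
    with open(ca_path) as f:
        ca = f.read().strip()
    bundle = {"hostAddress": host, "caBundle": ca}
    with open(out_path, "w") as f:
        json.dump(bundle, f, indent=2)
    return bundle

# Example (hypothetical host):
# make_bundle("msr.example.com", "dtr.crt")
```

The output is valid JSON by construction, so no separate JSON formatter pass is needed.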
Now upload the configuration file to cluster 2's MKE through the MKE API endpoint, `/api/config/trustedregistry_`. To authenticate against the API of cluster 2's MKE, we have downloaded an MKE client bundle, extracted it in the current directory, and will reference the keys for authentication.
$ curl --cacert ca.pem --cert cert.pem --key key.pem \
-X POST \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d @dtr-bundle.json \
https://cluster2.example.com/api/config/trustedregistry_
Navigate to the MKE web interface to verify that the JSON file was imported successfully, as the MKE endpoint will not output anything. Select Admin > Admin Settings > Mirantis Secure Registry. If the registry has been added successfully, you should see the MSR listed.
Additionally, you can check the full MKE configuration file within cluster 2's MKE. Once downloaded, the `ucp-config.toml` file should now contain a section called `[registries]`:
$ curl --cacert ca.pem --cert cert.pem --key key.pem https://cluster2.example.com/api/ucp/config-toml > ucp-config.toml
If the new registry is not shown in the list, check the `ucp-controller` container logs on cluster 2.
Signing an image in MSR¶
We will now sign an image and push it to MSR. To sign images, we need a user's public/private key pair from cluster 2. It can be found in a client bundle, with `key.pem` being the private key and `cert.pem` being the public key on an X.509 certificate.
First, load the private key into the local Docker trust store (`~/.docker/trust`). The name used here is purely metadata to help keep track of which keys you have imported.
$ docker trust key load --name cluster2admin key.pem
Loading key from "key.pem"...
Enter passphrase for new cluster2admin key with ID a453196:
Repeat passphrase for new cluster2admin key with ID a453196:
Successfully imported key from key.pem
Next, initiate the repository and add the public key of cluster 2's user as a signer. You will be asked for a number of passphrases to protect the keys. Keep note of these passphrases, and see the Docker Content Trust documentation on managing delegations in a Notary server to learn more about managing keys.
$ docker trust signer add --key cert.pem cluster2admin msr.example.com/admin/trustdemo
Adding signer "cluster2admin" to msr.example.com/admin/trustdemo...
Initializing signed repository for msr.example.com/admin/trustdemo...
Enter passphrase for root key with ID 4a72d81:
Enter passphrase for new repository key with ID dd4460f:
Repeat passphrase for new repository key with ID dd4460f:
Successfully initialized "msr.example.com/admin/trustdemo"
Successfully added signer: cluster2admin to msr.example.com/admin/trustdemo
Finally, sign the image tag. This pushes the image up to MSR, as well as signs the tag with the user from cluster 2’s keys.
$ docker trust sign msr.example.com/admin/trustdemo:1
Signing and pushing trust data for local image msr.example.com/admin/trustdemo:1, may overwrite remote trust data
The push refers to repository [dtr.olly.dtcntr.net/admin/trustdemo]
27c0b07c1b33: Layer already exists
aa84c03b5202: Layer already exists
5f6acae4a5eb: Layer already exists
df64d3292fd6: Layer already exists
1: digest: sha256:37062e8984d3b8fde253eba1832bfb4367c51d9f05da8e581bd1296fc3fbf65f size: 1153
Signing and pushing trust metadata
Enter passphrase for cluster2admin key with ID a453196:
Successfully signed msr.example.com/admin/trustdemo:1
Within the MSR web interface, you should now be able to see your newly pushed tag with the Signed text next to the size.
You could sign this image multiple times if required, whether it’s multiple teams from the same cluster wanting to sign the image, or you integrating MSR with more remote MKEs so users from clusters 1, 2, 3, or more can all sign the same image.
Troubleshooting¶
If the image is stored in a private repository within MSR, you need to pass credentials to the Orchestrator as there is no SSO between cluster 2 and MSR. See the relevant Kubernetes documentation for more details.
image or trust data does not exist for msr.example.com/admin/trustdemo:1
This means something went wrong when initiating the repository or signing the image, as the tag contains no signing data.
Error response from daemon: image did not meet required signing policy
msr.example.com/admin/trustdemo:1: image did not meet required signing policy
This means that the image was signed correctly, however the user who signed the image does not meet the signing policy in cluster 2. This could be because you signed the image with the wrong user keys.
Error response from daemon: msr.example.com must be a registered trusted registry. See 'docker run --help'.
This means you have not registered MSR to work with a remote MKE instance yet, as outlined in Registering MSR with a remote Mirantis Kubernetes Engine.
Manage jobs¶
Job queue¶
Mirantis Secure Registry (MSR) uses a job queue to schedule batch jobs. Jobs are added to a cluster-wide job queue, and then consumed and executed by a job runner within MSR.
All MSR replicas have access to the job queue, and have a job runner component that can get and execute work.
How it works¶
When a job is created, it is added to a cluster-wide job queue and enters the `waiting` state. When one of the MSR replicas is ready to claim the job, it waits a random time of up to 3 seconds, to give every replica the opportunity to claim the task.

A replica claims a job by adding its replica ID to the job. That way, other replicas will know the job has been claimed. Once a replica claims a job, it adds that job to an internal queue, which in turn sorts the jobs by their `scheduledAt` time. Once that happens, the replica updates the job status to `running` and starts executing it.
The job runner component of each MSR replica keeps a `heartbeatExpiration` entry in a database that is shared by all replicas. If a replica becomes unhealthy, other replicas notice the change and update the status of the failing worker to `dead`. Also, all the jobs that were claimed by the unhealthy replica enter the `worker_dead` state, so that other replicas can claim them.
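The claim-and-order behavior described above can be sketched in a few lines. This is an illustration of the documented behavior under assumed field names (`replicaID`, `scheduledAt` comes from the text; the dict layout is invented), not MSR's actual implementation.

```python
def claim(job, replica_id):
    """A replica claims a job by writing its replica ID into the job
    record; a job that already carries a replica ID cannot be claimed
    again. (In MSR, each replica first waits a random time of up to
    3 seconds before attempting the claim.)"""
    if job.get("replicaID") is not None:
        return False
    job["replicaID"] = replica_id
    return True

def run_order(claimed_jobs):
    """A replica executes the jobs it has claimed in scheduledAt order."""
    return sorted(claimed_jobs, key=lambda j: j["scheduledAt"])
```

Once a job is claimed, any other replica's claim attempt fails, which is what prevents two replicas from executing the same job.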
Job types¶
MSR runs periodic and long-running jobs. The following is a complete list of jobs you can filter for via the user interface or the API.
| Job | Description |
|---|---|
| `gc` | A garbage collection job that deletes layers associated with deleted images. |
| `onlinegc` | A garbage collection job that deletes layers associated with deleted images without putting the registry in read-only mode. |
| `onlinegc_metadata` | A garbage collection job that deletes metadata associated with deleted images. |
| `onlinegc_joblogs` | A garbage collection job that deletes job logs based on a configured job history setting. |
| `metadatastoremigration` | A necessary migration that enables the `onlinegc` feature. |
| `sleep` | Used for testing the correctness of the job runner. It sleeps for 60 seconds. |
| `false` | Used for testing the correctness of the job runner. It runs the `false` command and immediately fails. |
| `tagmigration` | Used for synchronizing tag and manifest information between the MSR database and the storage backend. |
| `bloblinkmigration` | A DTR 2.1 to 2.2 upgrade process that adds references for blobs to repositories in the database. |
| `license_update` | Checks for license expiration extensions if online license updates are enabled. |
| `scan_check` | An image security scanning job. This job does not perform the actual scanning; rather, it spawns `scan_check_single` jobs, one for each layer in the image. Once all of the `scan_check_single` jobs are complete, this job terminates. |
| `scan_check_single` | A security scanning job for a particular layer given by the `SHA256SUM` parameter. This job breaks up the layer into components and checks each component for vulnerabilities. |
| `scan_check_all` | A security scanning job that updates all of the currently scanned images to display the latest vulnerabilities. |
| `update_vuln_db` | A job that is created to update MSR's vulnerability database. It uses an Internet connection to check for database updates through `https://dss-cve-updates.docker.com/`. |
| `scannedlayermigration` | A DTR 2.4 to 2.5 upgrade process that restructures scanned image data. |
| `push_mirror_tag` | A job that pushes a tag to another registry after a push mirror policy has been evaluated. |
| `poll_mirror` | A global cron that evaluates poll mirroring policies. |
| `webhook` | A job that is used to dispatch a webhook payload to a single endpoint. |
| `nautilus_update_db` | The old name for the `update_vuln_db` job. This name may still be visible in old log files. |
| `ro_registry` | A user-initiated job for manually switching MSR into read-only mode. |
| `tag_pruning` | A job for cleaning up unnecessary or unwanted repository tags, which can be configured by repository admins. |
Job status¶
Jobs can have one of the following status values:
| Status | Description |
|---|---|
| `waiting` | Unclaimed job waiting to be picked up by a worker. |
| `running` | The job is currently being run by the specified worker ID. |
| `done` | The job has successfully completed. |
| `errors` | The job has completed with errors. |
| `cancel_request` | The status of a job is monitored by the worker in the database. If the job status changes to `cancel_request`, the job is canceled by the worker. |
| `cancel` | The job has been canceled and was not fully executed. |
| `deleted` | The job and its logs have been removed. |
| `worker_dead` | The worker for this job has been declared `dead`, and the job will not continue. |
| `worker_shutdown` | The worker that was running this job has been gracefully stopped. |
| `worker_resurrection` | The worker for this job has reconnected to the database and will cancel this job. |
Audit jobs with the web interface¶
As of DTR 2.2, admins were able to view and audit jobs within the software using the API. MSR 2.6 enhances those capabilities by adding a Job Logs tab under System settings on the user interface. The tab displays a sortable and paginated list of jobs along with links to associated job logs.
Prerequisite¶
Job Queue
View jobs list¶
To view the list of jobs within MSR, do the following:
Navigate to https://<msr-url> and log in with your MKE credentials.
Select System from the left-side navigation panel, and then click Job Logs. You should see a paginated list of past, running, and queued jobs. By default, Job Logs shows the latest 10 jobs on the first page.
Specify a filtering option. Job Logs lets you filter by:
Action
Worker ID (the ID of the worker in an MSR replica that is responsible for running the job)
Optional: Click Edit Settings on the right of the filtering options to update your Job Logs settings.
Job details¶
The following table explains the job-related fields displayed in Job Logs, using the filtered online_gc action from above.
Job Detail | Description
---|---
Action | The type of action or job being performed.
ID | The ID of the job.
Worker | The ID of the worker node responsible for running the job.
Status | Current status of the action or job.
Start Time | Time when the job started.
Last updated | Time when the job was last updated.
View Logs | Links to the full logs for the job.
View job-specific logs¶
To view the log details for a specific job, do the following:
Click View Logs next to the job’s Last Updated value. You will be redirected to the log detail page of your selected job. Notice how the job ID is reflected in the URL, while the Action and the abbreviated form of the job ID are reflected in the heading. Also, the JSON lines displayed are job-specific MSR container logs.
Enter or select a different line count to truncate the number of lines displayed. Lines are cut off from the end of the logs.
Audit jobs with the API¶
Overview¶
This section covers troubleshooting batch jobs through the API, a capability introduced in DTR 2.2. Starting in MSR 2.6, admins also have the ability to audit jobs using the web interface.
Prerequisite¶
Job Queue
Job capacity¶
Each job runner has a limited capacity and will not claim jobs that
require a higher capacity. You can see the capacity of a job runner via
the GET /api/v0/workers
endpoint:
{
"workers": [
{
"id": "000000000000",
"status": "running",
"capacityMap": {
"scan": 1,
"scanCheck": 1
},
"heartbeatExpiration": "2017-02-18T00:51:02Z"
}
]
}
This means that the worker with replica ID 000000000000
has a
capacity of 1 scan
and 1 scanCheck
. Next, review the list of
available jobs:
{
"jobs": [
{
"id": "0",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "1",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "2",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scanCheck": 1
}
}
]
}
If worker 000000000000
notices the jobs in waiting
state above,
then it will be able to pick up jobs 0
and 2
since it has the
capacity for both. Job 1
will have to wait until the previous scan
job, 0
, is completed. The job queue will then look like:
{
"jobs": [
{
"id": "0",
"workerID": "000000000000",
"status": "running",
"capacityMap": {
"scan": 1
}
},
{
"id": "1",
"workerID": "",
"status": "waiting",
"capacityMap": {
"scan": 1
}
},
{
"id": "2",
"workerID": "000000000000",
"status": "running",
"capacityMap": {
"scanCheck": 1
}
}
]
}
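The capacity-based claiming described above can be sketched as follows. The job and worker shapes mirror the JSON responses, but `claim_jobs` is purely illustrative; MSR's real scheduler runs server-side.

```python
def claim_jobs(worker_capacity, jobs):
    """Greedily claim waiting jobs that fit in the worker's remaining capacity.

    worker_capacity: dict like {"scan": 1, "scanCheck": 1} (the capacityMap).
    jobs: list of dicts with "id", "status", and "capacityMap" keys.
    Returns the IDs of the jobs the worker can claim right now.
    """
    remaining = dict(worker_capacity)
    claimed = []
    for job in jobs:
        if job["status"] != "waiting":
            continue
        need = job["capacityMap"]
        # Claim only if every required resource fits in what is left.
        if all(remaining.get(res, 0) >= n for res, n in need.items()):
            for res, n in need.items():
                remaining[res] -= n
            claimed.append(job["id"])
    return claimed

jobs = [
    {"id": "0", "status": "waiting", "capacityMap": {"scan": 1}},
    {"id": "1", "status": "waiting", "capacityMap": {"scan": 1}},
    {"id": "2", "status": "waiting", "capacityMap": {"scanCheck": 1}},
]

# A worker with capacity {"scan": 1, "scanCheck": 1} claims jobs 0 and 2;
# job 1 must wait until the scan slot frees up.
print(claim_jobs({"scan": 1, "scanCheck": 1}, jobs))  # ['0', '2']
```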
You can get a list of jobs via the GET /api/v0/jobs/
endpoint. Each
job looks like:
{
"id": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
"retryFromID": "1fcf4c0f-ff3b-471a-8839-5dcb631b2f7b",
"workerID": "000000000000",
"status": "done",
"scheduledAt": "2017-02-17T01:09:47.771Z",
"lastUpdated": "2017-02-17T01:10:14.117Z",
"action": "scan_check_single",
"retriesLeft": 0,
"retriesTotal": 0,
"capacityMap": {
"scan": 1
},
"parameters": {
"SHA256SUM": "1bacd3c8ccb1f15609a10bd4a403831d0ec0b354438ddbf644c95c5d54f8eb13"
},
"deadline": "",
"stopTimeout": ""
}
The JSON fields of interest here are:
id: The ID of the job.
workerID: The ID of the worker in an MSR replica that is running this job.
status: The current state of the job.
action: The type of job the worker will actually perform.
capacityMap: The available capacity a worker needs for this job to run.
Cron jobs¶
Several of the jobs performed by MSR are run in a recurrent schedule.
You can see those jobs using the GET /api/v0/crons
endpoint:
{
"crons": [
{
"id": "48875b1b-5006-48f5-9f3c-af9fbdd82255",
"action": "license_update",
"schedule": "57 54 3 * * *",
"retries": 2,
"capacityMap": null,
"parameters": null,
"deadline": "",
"stopTimeout": "",
"nextRun": "2017-02-22T03:54:57Z"
},
{
"id": "b1c1e61e-1e74-4677-8e4a-2a7dacefffdc",
"action": "update_db",
"schedule": "0 0 3 * * *",
"retries": 0,
"capacityMap": null,
"parameters": null,
"deadline": "",
"stopTimeout": "",
"nextRun": "2017-02-22T03:00:00Z"
}
]
}
The schedule field uses a cron expression in the (seconds) (minutes) (hours) (day of month) (month) (day of week) format. For example, the cron with ID 48875b1b-5006-48f5-9f3c-af9fbdd82255 and schedule 57 54 3 * * * runs at 03:54:57 on any day of any month, which is 2017-02-22T03:54:57Z in the example JSON response above.
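A quick way to read such an expression is to split it into its six named fields. This parser is a sketch for illustration only; it is not part of the MSR API.

```python
# Field order for the 6-field cron expressions returned by GET /api/v0/crons.
FIELDS = ["seconds", "minutes", "hours", "day_of_month", "month", "day_of_week"]

def parse_schedule(expr):
    """Split a 6-field cron expression into a dict keyed by field name."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError("expected 6 cron fields, got %d" % len(parts))
    return dict(zip(FIELDS, parts))

print(parse_schedule("57 54 3 * * *"))
```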
Enable auto-deletion of job logs¶
Mirantis Secure Registry has a global setting for auto-deletion of job logs, which allows them to be removed as part of garbage collection. MSR admins can enable auto-deletion of job logs in MSR 2.6 based on specified conditions, which are covered below.
In your browser, navigate to https://<msr-url> and log in with your MKE credentials.
Select System on the left-side navigation panel, which will display the Settings page by default.
Scroll down to Job Logs and turn on Auto-Deletion.
Specify the conditions with which a job log auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following optional job log attributes:
Name | Description | Example
---|---|---
Age | Lets you remove job logs that are older than your specified number of hours, days, weeks, or months. | 2 months
Max number of events | Lets you specify the maximum number of job logs allowed within MSR. | 100
If you check and specify both, job logs will be removed from MSR during garbage collection if either condition is met. You should see a confirmation message right away.
Click Start Deletion if you’re ready. Read more about configuring garbage collection if you’re unsure about this operation.
Navigate to System > Job Logs to confirm that
onlinegc_joblogs
has started.
Note
When you enable auto-deletion of job logs, the logs will be permanently deleted during garbage collection.
Manage users¶
Create and manage teams¶
You can extend a user’s default permissions by granting them individual permissions in other image repositories, by adding the user to a team. A team defines the permissions a set of users have for a set of repositories.
To create a new team, go to the MSR web UI, and navigate to the Organizations page. Then click the organization where you want to create the team.
Click + to create a new team, and give it a name.
Add users to a team¶
Once you have created a team, click the team name, to manage its settings. The first thing we need to do is add users to the team. Click the Add user button and add users to the team.
Manage team permissions¶
The next step is to define the permissions this team has for a set of repositories. Navigate to the Repositories tab, and click the Add repository button.
Choose the repositories this team has access to, and what permission levels the team members have.
Three permission levels are available:
Permission level | Description
---|---
Read only | View repository and pull images.
Read & Write | View repository, pull and push images.
Admin | Manage repository and change its settings, pull and push images.
Delete a team¶
If you’re an organization owner, you can delete a team in that organization. Navigate to the Team, choose the Settings tab, and click Delete.
Create and manage organizations¶
When a user creates a repository, only that user has permissions to make changes to the repository.
For team workflows, where multiple users have permissions to manage a set of common repositories, create an organization. By default, MSR has one organization called ‘docker-datacenter’, that is shared between MSR and MKE.
To create a new organization, navigate to the MSR web UI, and go to the Organizations page.
Click the New organization button, and choose a meaningful name for the organization.
Repositories owned by this organization will contain the organization name, so to pull an image from that repository, you’ll use:
docker pull <msr-domain-name>/<organization>/<repository>:<tag>
Click Save to create the organization, and then click the organization to define which users are allowed to manage this organization. These users will be able to edit the organization settings, edit all repositories owned by the organization, and define the user permissions for this organization.
For this, click the Add user button, select the users that you want to grant permissions to manage the organization, and click Save. Then change their permissions from Member to Org Owner.
Permission levels¶
Mirantis Secure Registry allows you to define fine-grained permissions over image repositories.
Administrators¶
Users are shared across MKE and MSR. When you create a new user in Mirantis Kubernetes Engine, that user becomes available in MSR and vice versa. When you create a trusted admin in MSR, the admin has permissions to manage:
Users across MKE and MSR
MSR repositories and settings
MKE resources and settings
Team permission levels¶
With Teams you can define the repository permissions for a set of users (read, read-write, and admin).
Repository operation | read | read-write | admin
---|---|---|---
View/browse | x | x | x
Pull | x | x | x
Push | | x | x
Start a scan | | x | x
Delete tags | | x | x
Edit description | | | x
Set public or private | | | x
Manage user access | | | x
Delete repository | | | x
Note
Team permissions are additive. When a user is a member of multiple teams, they have the highest permission level defined by those teams.
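The additive rule can be sketched as taking the highest level across a user's teams. The level ordering below is illustrative, not an MSR API.

```python
# Rank the three team permission levels from weakest to strongest.
LEVELS = {"read": 1, "read-write": 2, "admin": 3}

def effective_permission(team_grants):
    """team_grants: the levels a user's teams grant on one repository.

    Returns the highest level, since team permissions are additive.
    """
    return max(team_grants, key=lambda lvl: LEVELS[lvl])

print(effective_permission(["read", "read-write"]))  # read-write
```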
Overall permissions¶
Permission level | Description
---|---
Anonymous or unauthenticated users | Can search and pull public repositories.
Authenticated users | Can search and pull public repos, and create and manage their own repositories.
Team member | Everything a user can do, plus the permissions granted by the teams the user is a member of.
Organization owner | Can manage repositories and teams for the organization.
Admin | Can manage anything across MKE and MSR.
Manage webhooks¶
You can configure MSR to automatically post event notifications to a webhook URL of your choosing. This lets you build complex CI and CD pipelines with your Docker images.
Webhook types¶
Event type | Scope | Access level | Availability
---|---|---|---
Tag pushed to repository (TAG_PUSH) | Individual repositories | Repository admin | Web UI and API
Tag pulled from repository (TAG_PULL) | Individual repositories | Repository admin | Web UI and API
Tag deleted from repository | Individual repositories | Repository admin | Web UI and API
Manifest pushed to repository | Individual repositories | Repository admin | Web UI and API
Manifest pulled from repository | Individual repositories | Repository admin | Web UI and API
Manifest deleted from repository | Individual repositories | Repository admin | Web UI and API
Security scan completed | Individual repositories | Repository admin | Web UI and API
Security scan failed | Individual repositories | Repository admin | Web UI and API
Image promoted from repository | Individual repositories | Repository admin | Web UI and API
Image mirrored from repository | Individual repositories | Repository admin | Web UI and API
Image mirrored from remote repository | Individual repositories | Repository admin | Web UI and API
Repository created, updated, or deleted | Namespace, organizations | Namespace, organization owners | API only
Security scanner update completed | Global | MSR admin | API only
You must have admin privileges to a repository or namespace in order to subscribe to its webhook events. For example, a user must be an admin of repository “foo/bar” to subscribe to its tag push events. An MSR admin can subscribe to any event.
Manage repository webhooks with the web interface¶
Prerequisites¶
You must have admin privileges to the repository in order to create a webhook.
See Webhook types for a list of events for which you can trigger notifications using the web interface.
Create a webhook for your repository¶
In your browser, navigate to https://<msr-url> and log in with your credentials.
Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.
Select the Webhooks tab, and click New Webhook.
From the drop-down list, select the event that will trigger the webhook.
Set the URL that will receive the JSON payload. Click Test next to the Webhook URL field, so that you can validate that the integration is working. At your specified URL, you should receive a JSON payload for your chosen event type notification.
{
  "type": "TAG_PUSH",
  "createdAt": "2019-05-15T19:39:40.607337713Z",
  "contents": {
    "namespace": "foo",
    "repository": "bar",
    "tag": "latest",
    "digest": "sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
    "imageName": "foo/bar:latest",
    "os": "linux",
    "architecture": "amd64",
    "author": "",
    "pushedAt": "2015-01-02T15:04:05Z"
  },
  "location": "/repositories/foo/bar/tags/latest"
}
Expand “Show advanced settings” to paste the TLS certificate associated with your webhook URL. For testing purposes, you can test over HTTP instead of HTTPS.
Click Create. Once saved, your webhook is active and starts sending POST notifications whenever your chosen event type is triggered.
As a repository admin, you can add or delete a webhook at any point. Additionally, you can create, view, and delete webhooks for your organization or trusted registry using the API.
Manage repository webhooks with the API¶
Triggering notifications
Refer to Webhook types for a list of events you can trigger notifications for via the API.
Your MSR hostname serves as the base URL for your API requests.
From the MSR web UI, click API in the left-side navigation panel to explore the API resources and endpoints. Click Execute to send your API request.
API requests via curl¶
You can use curl to send HTTP or HTTPS API requests. Note that you will have to
specify skipTLSVerification: true
on your request in order to test the
webhook endpoint over HTTP.
Example curl request¶
curl -u test-user:$TOKEN -X POST "https://msr-example.com/api/v0/webhooks" \
  -H "accept: application/json" \
  -H "content-type: application/json" \
  -d '{"endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019", "key": "maria-testorg/lab-words", "skipTLSVerification": true, "type": "TAG_PULL"}'
Example JSON response¶
{
"id": "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
"type": "TAG_PULL",
"key": "maria-testorg/lab-words",
"endpoint": "https://webhook.site/441b1584-949d-4608-a7f3-f240bdd31019",
"authorID": "194efd8e-9ee6-4d43-a34b-eefd9ce39087",
"createdAt": "2019-05-22T01:55:20.471286995Z",
"lastSuccessfulAt": "0001-01-01T00:00:00Z",
"inactive": false,
"tlsCert": "",
"skipTLSVerification": true
}
Subscribe to events¶
To subscribe to events, send a POST
request to /api/v0/webhooks
with the following JSON payload:
Example usage¶
{
"type": "TAG_PUSH",
"key": "foo/bar",
"endpoint": "https://example.com"
}
The keys in the payload are:
type: The event type to subscribe to.
key: The namespace/organization or repo to subscribe to. For example, “foo/bar” to subscribe to pushes to the “bar” repository within the namespace/organization “foo”.
endpoint: The URL to send the JSON payload to.
Normal users must supply a “key” to scope a particular webhook event to a repository or a namespace/organization. MSR admins can choose to omit this, meaning a POST event notification of your specified type will be sent for all MSR repositories and namespaces.
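Building the subscription payload can be sketched in a few lines. `subscription_payload` is a hypothetical helper, not part of any MSR client library; the optional `key` mirrors the admin behavior described above.

```python
import json

def subscription_payload(event_type, endpoint, key=None):
    """Build the JSON body for POST /api/v0/webhooks.

    key scopes the subscription to a repo or namespace (e.g. "foo/bar");
    MSR admins may omit it to subscribe globally.
    """
    payload = {"type": event_type, "endpoint": endpoint}
    if key is not None:
        payload["key"] = key
    return json.dumps(payload)

print(subscription_payload("TAG_PUSH", "https://example.com", key="foo/bar"))
```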
Receive a payload¶
Whenever your specified event type occurs, MSR will send a POST request to the given endpoint with a JSON-encoded payload. The payload will always have the following wrapper:
{
"type": "...",
"createdAt": "2012-04-23T18:25:43.511Z",
"contents": {...}
}
type refers to the event type received at the specified subscription endpoint.
contents refers to the payload of the event itself. Each event is different, therefore the structure of the JSON object in contents will change depending on the event type. See Content structure for more details.
Test payload subscriptions¶
Before subscribing to an event, you can view and test your endpoints using fake data. To send a test payload, send a POST request to /api/v0/webhooks/test with the following payload:
{
"type": "...",
"endpoint": "https://www.example.com/"
}
Change type
to the event type that you want to receive. MSR will
then send an example payload to your specified endpoint. The example
payload sent is always the same.
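Constructing the test request can be sketched as follows. `msr-example.com` and `build_test_request` are placeholders, and the sketch only builds the request object without sending it (a real client would also attach authentication).

```python
import json
from urllib import request

def build_test_request(msr_url, event_type, endpoint):
    """Build a POST request for /api/v0/webhooks/test (not sent here)."""
    body = json.dumps({"type": event_type, "endpoint": endpoint}).encode()
    return request.Request(
        msr_url.rstrip("/") + "/api/v0/webhooks/test",
        data=body,
        headers={"content-type": "application/json"},
        method="POST",
    )

req = build_test_request("https://msr-example.com", "TAG_PUSH", "https://www.example.com/")
print(req.full_url)  # https://msr-example.com/api/v0/webhooks/test
```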
Content structure¶
Comments (after //) are for informational purposes only, and the example payloads have been clipped for brevity.
Repository event content structure¶
Tag push
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who pushed the tag
"pushedAt": "", // (string) JSON-encoded timestamp of when the push occurred
...
}
Tag delete
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just deleted
"digest": "", // (string) sha256 digest of the manifest the tag points to (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"os": "", // (string) the OS for the tag's manifest
"architecture": "", // (string) the architecture for the tag's manifest
"author": "", // (string) the username of the person who deleted the tag
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
Manifest push
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who pushed the manifest
...
}
Manifest delete
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"digest": "", // (string) sha256 digest of the manifest (eg. "sha256:0afb...")
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"os": "", // (string) the OS for the manifest
"architecture": "", // (string) the architecture for the manifest
"author": "", // (string) the username of the person who deleted the manifest
"deletedAt": "", // (string) JSON-encoded timestamp of when the delete occurred
...
}
Security scan completed
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar:tag)
"scanSummary": {
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag just pushed
"critical": 0, // (int) number of critical issues, where CVSS >= 7.0
"major": 0, // (int) number of major issues, where CVSS >= 4.0 && CVSS < 7
"minor": 0, // (int) number of minor issues, where CVSS > 0 && CVSS < 4.0
"last_scan_status": 0, // (int) enum; see scan status section
"check_completed_at": "", // (string) JSON-encoded timestamp of when the scan completed
...
}
}
Security scan failed
{
"namespace": "", // (string) namespace/organization for the repository
"repository": "", // (string) repository name
"tag": "", // (string) the name of the tag scanned
"imageName": "", // (string) the fully-qualified image name including MSR host used to pull the image (eg. 10.10.10.1/foo/bar@sha256:0afb...)
"error": "", // (string) the error that occurred while scanning
...
}
Namespace-specific event structure¶
Repository event (created/updated/deleted)
{
"namespace": "", // (string) repository's namespace/organization name
"repository": "", // (string) repository name
"event": "", // (string) enum: "REPO_CREATED", "REPO_DELETED" or "REPO_UPDATED"
"author": "", // (string) the name of the user responsible for the event
"data": {} // (object) when updating or creating a repo this follows the same format as an API response from /api/v0/repositories/{namespace}/{repository}
}
Global event structure¶
Security scanner update complete
{
"scanner_version": "",
"scanner_updated_at": "", // (string) JSON-encoded timestamp of when the scanner updated
"db_version": 0, // (int) newly updated database version
"db_updated_at": "", // (string) JSON-encoded timestamp of when the database updated
"success": <true|false>, // (bool) whether the update was successful
"replicas": { // (object) a map keyed by replica ID containing update information for each replica
"replica_id": {
"db_updated_at": "", // (string) JSON-encoded time of when the replica updated
"version": "", // (string) version updated to
"replica_id": "" // (string) replica ID
},
...
}
}
Security scan status codes¶
0: Failed. An error occurred while checking an image’s layer.
1: Unscanned. The image has not yet been scanned.
2: Scanning. Scanning is in progress.
3: Pending. The image will be scanned when a worker is available.
4: Scanned. The image has been scanned, but vulnerabilities have not yet been checked.
5: Checking. The image is being checked for vulnerabilities.
6: Completed. The image has been fully security scanned.
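When processing scan summaries from webhook payloads, these codes can be mapped to labels. The `SCAN_STATUS` table below is transcribed from the list above; `describe_scan` is a hypothetical helper, not an MSR API.

```python
# Labels for the last_scan_status values listed above.
SCAN_STATUS = {
    0: "Failed",
    1: "Unscanned",
    2: "Scanning",
    3: "Pending",
    4: "Scanned",
    5: "Checking",
    6: "Completed",
}

def describe_scan(summary):
    """summary: a scanSummary dict from a webhook payload."""
    return SCAN_STATUS.get(summary.get("last_scan_status"), "Unknown")

print(describe_scan({"last_scan_status": 6}))  # Completed
```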
View and manage existing subscriptions¶
View all subscriptions¶
To view existing subscriptions, send a GET request to /api/v0/webhooks. As a normal user (that is, not an MSR admin), this will show all of your current subscriptions across every namespace/organization and repository. As an MSR admin, this will show every webhook configured for your MSR.
The API response will be in the following format:
[
{
"id": "", // (string): UUID of the webhook subscription
"type": "", // (string): webhook event type
"key": "", // (string): the individual resource this subscription is scoped to
"endpoint": "", // (string): the endpoint to send POST event notifications to
"authorID": "", // (string): the user ID responsible for creating the subscription
"createdAt": "", // (string): JSON-encoded datetime when the subscription was created
},
...
]
View subscriptions for a particular resource¶
You can also view subscriptions for a given resource that you are an admin of. For example, if you have admin rights to the repository “foo/bar”, you can view all subscriptions (even other people’s) from a particular API endpoint. These endpoints are:
GET /api/v0/repositories/{namespace}/{repository}/webhooks: View all webhook subscriptions for a repository.
GET /api/v0/repositories/{namespace}/webhooks: View all webhook subscriptions for a namespace/organization.
Delete a subscription¶
To delete a webhook subscription, send a DELETE
request to
/api/v0/webhooks/{id}
, replacing {id}
with the webhook
subscription ID which you would like to delete.
Only an MSR admin or an admin for the resource with the event subscription can delete a subscription. As a normal user, you can only delete subscriptions for repositories which you manage.
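Constructing the DELETE request can be sketched as follows. The base URL is a placeholder taken from the earlier examples, and the sketch builds the request without sending it.

```python
from urllib import request

def build_delete_request(msr_url, subscription_id):
    """Build a DELETE request for /api/v0/webhooks/{id} (not sent here)."""
    return request.Request(
        "{}/api/v0/webhooks/{}".format(msr_url.rstrip("/"), subscription_id),
        method="DELETE",
    )

req = build_delete_request(
    "https://msr-example.com",
    "b7bf702c31601efb4796da59900ddc1b7c72eb8ca80fdfb1b9fecdbad5418155",
)
print(req.get_method())  # DELETE
```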
Manage repository events¶
Audit repository events¶
Starting in DTR 2.6, each repository page includes an Activity tab which displays a sortable and paginated list of the most recent events within the repository. This offers better visibility along with the ability to audit events. Event types listed vary according to your repository permission level. Additionally, MSR admins can enable auto-deletion of repository events as part of maintenance and cleanup.
In the following section, we will show you how to view and audit the list of events in a repository. We will also cover the event types associated with your permission level.
View List of Events¶
As of DTR 2.3, admins were able to view a list of MSR events using the API. MSR 2.6 enhances that feature by showing a permission-based events list for each repository page on the web interface. To view the list of events within a repository, do the following:
Navigate to https://<msr-url> and log in with your MSR credentials.
Select Repositories from the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the / after the specific namespace for your repository.
Select the Activity tab. You should see a paginated list of the latest events based on your repository permission level. By default, Activity shows the latest 10 events and excludes pull events, which are only visible to repository and MSR admins.
If you’re a repository admin or an MSR admin, uncheck Exclude pull to view pull events. This should give you a better understanding of who is consuming your images.
To update your event view, select a different time filter from the drop-down list.
Activity Stream¶
The following table breaks down the data included in an event, using the highlighted Create Promotion Policy event as an example.
Event detail | Description
---|---
Label | Friendly name of the event.
Repository | This will always be the repository in review.
Tag | Tag affected by the event, when applicable.
SHA | The digest value for CREATE operations such as creating a new image tag or a promotion policy.
Type | Event type.
Initiated by | The actor responsible for the event. For user-initiated events, this will reflect the user ID and link to that user’s profile. For image events triggered by a policy (pruning, pull/push mirroring, or promotion), this will reflect the relevant policy ID, except for manual promotions.
Date and Time | When the event happened in your configured time zone.
Event Audits¶
Given the level of detail on each event, it should be easy for MSR and security admins to determine what events have taken place inside of MSR. For example, when an image which shouldn’t have been deleted ends up getting deleted, the security admin can determine when and who initiated the deletion.
Event Permissions¶
Repository event | Description | Minimum permission level
---|---|---
Push | Refers to image push events. | Authenticated users
Scan | Requires security scanning to be set up by an MSR admin. | Authenticated users
Promotion | Refers to image promotion events where the target is the repository in review. | Repository admin
Delete | Refers to “Delete Tag” events. Learn more about Delete images. | Authenticated users
Pull | Refers to “Get Tag” events. Learn more about Pull an image. | Repository admin
Mirror | Refers to pull and push mirroring events. | Repository admin
Create repo | Refers to repository creation events. | Authenticated users
Where to go next¶
Enable Auto-Deletion of Repository Events¶
Mirantis Secure Registry has a global setting for repository event auto-deletion, which allows event records to be removed as part of garbage collection. MSR administrators can enable auto-deletion of repository events in DTR 2.6 based on specified conditions, which are covered below.
In your browser, navigate to https://<msr-url> and log in with your admin credentials.
Select System from the left-side navigation panel, which displays the Settings page by default.
Scroll down to Repository Events and turn on Auto-Deletion.
Specify the conditions with which an event auto-deletion will be triggered.
MSR allows you to set your auto-deletion conditions based on the following optional repository event attributes:
Name | Description
---|---
Age | Lets you remove events older than your specified number of hours, days, weeks, or months.
Max number of events | Lets you specify the maximum number of events allowed in the repositories.
If you check and specify both, events in your repositories will be removed during garbage collection if either condition is met. You should see a confirmation message right away.
Click Start GC if you are ready.
Navigate to System > Job Logs to confirm that
onlinegc
has taken place.
Where to go next¶
Promotion policies and monitoring¶
Promotion policies overview¶
Mirantis Secure Registry allows you to automatically promote and mirror images based on a policy. In MSR 2.7, you have the option to promote applications with the experimental docker app CLI addition. Note that scanning-based promotion policies do not take effect until all application-bundled images have been scanned. This way you can create a Docker-centric development pipeline.
You can mix and match promotion policies, mirroring policies, and webhooks to create flexible development pipelines that integrate with your existing CI/CD systems.
Promote an image using policies
One way to create a promotion pipeline is to automatically promote images to another repository.
You start by defining a promotion policy that’s specific to a repository. When someone pushes an image to that repository, MSR checks if it complies with the policy you set up and automatically pushes the image to another repository.
Learn how to promote an image using policies.
Mirror images to another registry
You can also promote images between different MSR deployments. This not only allows you to create promotion policies that span multiple MSRs, but also allows you to mirror images for security and high availability.
You start by configuring a repository with a mirroring policy. When someone pushes an image to that repository, MSR checks if the policy is met, and if so pushes it to another MSR deployment or Docker Hub.
Learn how to mirror images to another registry.
Mirror images from another registry
Another option is to mirror images from another MSR deployment. You configure a repository to poll for changes in a remote repository. All new images pushed into the remote repository are then pulled into MSR.
This is an easy way to configure a mirror for high availability since you won’t need to change firewall rules that are in place for your environments.
Promote an image using policies¶
Mirantis Secure Registry allows you to create image promotion pipelines based on policies.
In this example we will create an image promotion pipeline such that:
Developers iterate and push their builds to the dev/website repository.
When the team creates a stable build, they make sure their image is tagged with -stable.
When a stable build is pushed to the dev/website repository, it will automatically be promoted to qa/website so that the QA team can start testing.
With this promotion policy, the development team doesn’t need access to the QA repositories, and the QA team doesn’t need access to the development repositories.
Configure your repository¶
Once you’ve created a repository, navigate to the repository page on the MSR web interface, and select the Promotions tab.
Note
Only administrators can globally create and edit promotion policies. By default users can only create and edit promotion policies on repositories within their user namespace.
Click New promotion policy, and define the image promotion criteria.
MSR allows you to set your promotion policy based on the following image attributes:
| Name | Description | Example |
|---|---|---|
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Promote to Target if Tag name ends in |
| Component | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Promote to Target if Component name starts with |
| Vulnerabilities | Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Promote to Target if Critical vulnerabilities = |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | Promote to Target if License name = |
Now you need to choose what happens to an image that meets all the criteria.
Select the target organization or namespace and repository where the image is going to be pushed. You can choose to keep the image tag, or transform the tag into something more meaningful in the destination repository, by using a tag template.
In this example, if an image in the `dev/website` repository is tagged with a word that ends in "stable", MSR will automatically push that image to the `qa/website` repository. In the destination repository, the image will be tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted. To confirm, select the Promotions tab on the `dev/website` repository.

You can also review the newly pushed tag in the target repository by navigating to `qa/website` and selecting the Tags tab.
Mirror images to another registry¶
Mirantis Secure Registry allows you to create mirroring policies for a repository. When an image gets pushed to a repository and meets the mirroring criteria, MSR automatically pushes it to a repository in a remote Mirantis Secure Registry or Hub registry.
This not only allows you to mirror images but also allows you to create image promotion pipelines that span multiple MSR deployments and datacenters.
In this example we will create an image mirroring policy such that:
- Developers iterate and push their builds to the `msr-example.com/dev/website` repository in the MSR deployment dedicated to development.
- When the team creates a stable build, they make sure their image is tagged with `-stable`.
- When a stable build is pushed to `msr-example.com/dev/website`, it is automatically pushed to `qa-example.com/qa/website`, mirroring the image and promoting it to the next stage of development.
With this mirroring policy, the development team does not need access to the QA cluster, and the QA team does not need access to the development cluster.
You need to have permissions to push to the destination repository in order to set up the mirroring policy.
Configure your repository connection¶
Once you have created a repository, navigate to the repository page on the web interface, and select the Mirrors tab.
Click New mirror to define where the image will be pushed if it meets the mirroring criteria.
Under Mirror direction, choose Push to remote registry. Specify the following details:
| Field | Description |
|---|---|
| Registry type | You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to |
| Username and password or access token | Your credentials in the remote repository you wish to push to. To use an access token instead of your password, see authentication token. |
| Repository | Enter the |
| Show advanced settings | Enter the TLS details for the remote repository or check Skip TLS verification. If the MSR remote repository is using self-signed TLS certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can retrieve the certificate by accessing |
Note
Make sure the account you use for the integration has permissions to write to the remote repository.
Click Connect to test the integration.
In this example, the image gets pushed to the `qa/example` repository of an MSR deployment available at `qa-example.com`, using a service account that was created just for mirroring images between repositories.
Next, set your push triggers. MSR allows you to set your mirroring policy based on the following image attributes:
| Name | Description | Example |
|---|---|---|
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Copy image to remote repository if Tag name ends in |
| Component | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Copy image to remote repository if Component name starts with |
| Vulnerabilities | Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Copy image to remote repository if Critical vulnerabilities = |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | Copy image to remote repository if License name = |
You can choose to keep the image tag, or transform the tag into something more meaningful in the remote registry by using a tag template.
In this example, if an image in the `dev/website` repository is tagged with a word that ends in `stable`, MSR will automatically push that image to the MSR deployment available at `qa-example.com`. The image is pushed to the `qa/example` repository and is tagged with the timestamp of when the image was promoted.
Everything is set up! Once the development team pushes an image that complies with the policy, it automatically gets promoted to `qa/example` in the remote trusted registry at `qa-example.com`.
Metadata persistence¶
When an image is pushed to another registry using a mirroring policy, scanning and signing data is not persisted in the destination repository.
If you have scanning enabled for the destination repository, MSR will scan the pushed image. If you want the image to be signed, you need to do so manually.
Mirror images from another registry¶
Mirantis Secure Registry allows you to set up a mirror of a repository by constantly polling it and pulling new image tags as they are pushed. This ensures your images are replicated across different registries for high availability. It also makes it easy to create a development pipeline that allows different users access to a certain image without giving them access to everything in the remote registry.
To mirror a repository, start by creating a repository in the MSR deployment that will serve as your mirror. Previously, you were only able to set up pull mirroring from the API. Starting in DTR 2.6, you can also mirror and pull from a remote MSR or Docker Hub repository.
Pull mirroring on the web interface¶
To get started, navigate to `https://<msr-url>` and log in with your MKE credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to view. Note that you will have to click the repository name following the `/` after the specific namespace for your repository.
Next, select the Mirrors tab and click New mirror. On the New mirror page, choose Pull from remote registry.
Specify the following details:
| Field | Description |
|---|---|
| Registry type | You can choose between Mirantis Secure Registry and Docker Hub. If you choose MSR, enter your MSR URL. Otherwise, Docker Hub defaults to |
| Username and password or access token | Your credentials in the remote repository you wish to poll from. To use an access token instead of your password, see authentication token. |
| Repository | Enter the |
| Show advanced settings | Enter the TLS details for the remote repository or check |
After you have filled out the details, click Connect to test the integration.
Once you have successfully connected to the remote repository, new buttons appear:
- Click Save to mirror future tags only.
- To mirror all existing and future tags, click Save & Apply instead.
Pull mirroring on the API¶
There are a few different ways to send your MSR API requests. To explore the different API resources and endpoints from the web interface, click API on the bottom left-side navigation panel.
Search for the endpoint:
POST /api/v0/repositories/{namespace}/{reponame}/pollMirroringPolicies
Click Try it out and enter your HTTP request details.
The `namespace` and `reponame` parameters refer to the repository that will be poll mirrored. The boolean field `initialEvaluation` corresponds to Save when set to `false`, and will only mirror images created after your API request. Setting it to `true` corresponds to Save & Apply, which means all tags in the remote repository will be evaluated and mirrored. The other body parameters correspond to the relevant remote repository details that you can see on the MSR web interface. As a best practice, use a service account just for this purpose. Instead of providing the password for that account, you should pass an authentication token.
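As a rough sketch, such a request body might be written to a file for use with curl. Apart from `initialEvaluation`, which the documentation describes, the field names below are assumptions modeled on the web interface fields; check the endpoint schema in your MSR API reference before use:

```shell
# Illustrative body for POST .../pollMirroringPolicies. Only
# initialEvaluation is confirmed above; the other field names are
# assumptions based on the web UI and may differ from the real schema.
cat > poll-mirror-policy.json <<'EOF'
{
  "initialEvaluation": false,
  "username": "mirror-service-account",
  "authToken": "<token>",
  "remoteHost": "https://msr-example.com",
  "remoteRepository": "dev/website"
}
EOF

# The request itself would then look like this (placeholder URL, do not
# run as-is):
# curl -X POST "https://<msr-url>/api/v0/repositories/<namespace>/<reponame>/pollMirroringPolicies" \
#   -H "accept: application/json" -H "content-type: application/json" \
#   -d @poll-mirror-policy.json
```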
If the MSR remote repository is using self-signed certificates or certificates signed by your own certificate authority, you also need to provide the public key certificate for that CA. You can get it by accessing `https://<msr-domain>/ca`. The `remoteCA` field is optional for mirroring a Docker Hub repository.
Click Execute. On success, the API returns an HTTP `201` response.
Review the poll mirror job log¶
Once configured, the system polls for changes in the remote repository and runs the `poll_mirror` job every 30 minutes. On success, the system will pull in new images and mirror them in your local repository. Starting in DTR 2.6, you can filter for `poll_mirror` jobs to review when the job last ran. To manually trigger the job and force pull mirroring, use the `POST /api/v0/jobs` API endpoint and specify `poll_mirror` as your action.
curl -X POST "https://<msr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"poll_mirror\"}"
See Manage jobs to learn more about job management within MSR.
Template reference¶
When defining promotion policies you can use templates to dynamically name the tag that is going to be created.
Important
Whenever an image promotion event occurs, the MSR timestamp for the event is in UTC (Coordinated Universal Time). That timestamp, however, is converted by the browser and presented in the user’s time zone. Conversely, if a time-based tag is applied to a target image, MSR captures it in UTC but cannot convert it to the user’s time zone, because tags are immutable strings.
You can use these template keywords to define your new tag:
| Template | Description | Example result |
|---|---|---|
|  | The tag to promote | 1, 4.5, latest |
|  | Day of the week | Sunday, Monday |
|  | Day of the week, abbreviated | Sun, Mon, Tue |
|  | Day of the week, as a number | 0, 1, 6 |
|  | Number for the day of the month | 01, 15, 31 |
|  | Month | January, December |
|  | Month, abbreviated | Jan, Jun, Dec |
|  | Month, as a number | 01, 06, 12 |
|  | Year | 1999, 2015, 2048 |
|  | Year, two digits | 99, 15, 48 |
|  | Hour, in 24 hour format | 00, 12, 23 |
|  | Hour, in 12 hour format | 01, 10, 10 |
|  | Period of the day | AM, PM |
|  | Minute | 00, 10, 59 |
|  | Second | 00, 10, 59 |
|  | Microsecond | 000000, 999999 |
|  | Name for the timezone | UTC, PST, EST |
|  | Day of the year | 001, 200, 366 |
|  | Week of the year | 00, 10, 53 |
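Because MSR captures time-based tags in UTC, a tag built from date and time keywords resolves to the promotion time in UTC rather than the user's local time. As a rough illustration (using the ordinary shell `date` utility, not MSR's template engine), such a timestamp tag might look like this:

```shell
# Print a UTC timestamp in a shape commonly used for promoted-image tags,
# e.g. 2022-05-27-134501 (illustrative format, not MSR template syntax).
date -u +"%Y-%m-%d-%H%M%S"
```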
Use Helm charts¶
Helm is a tool that manages Kubernetes packages called charts, which are
put to use in defining, installing, and upgrading Kubernetes applications.
These charts, in conjunction with Helm tooling, deploy applications into Kubernetes clusters. Charts consist of a collection of files and directories, arranged in a particular structure and packaged as a `.tgz` file. Charts define Kubernetes objects, such as the Service and DaemonSet objects used in the application under deployment.
MSR enables you to use Helm to store and serve Helm charts, thus allowing users to push charts to and pull charts from MSR repositories using the Helm CLI and the MSR API.
MSR supports both Helm v2 and v3. The two versions differ significantly with regard to the Helm CLI, which affects the applications under deployment rather than Helm chart support in MSR. One key difference is that while Helm v2 includes both the Helm CLI and Tiller (Helm Server), Helm v3 includes only the Helm CLI. Helm charts (referred to as releases following their installation in Kubernetes) are managed by Tiller in Helm v2 and by Helm CLI in Helm v3.
Note
For a breakdown of the key differences between Helm v2 and Helm v3, refer to Helm official documentation.
Add a Helm chart repository¶
Users can add a Helm chart repository to MSR through the MSR web UI.
1. Log in to the MSR web UI.
2. Click Repositories in the navigation menu.
3. Click New repository.
4. In the name field, enter the name for the new repository and click Create.
To add the new MSR repository as a Helm repository:
helm repo add <reponame> https://<msrhost>/charts/<namespace>/<reponame> --username <username> --password <password> --ca-file ca.crt
"<reponame>" has been added to your repositories
To verify that the new MSR Helm repository has been added:
helm repo list
NAME          URL
<reponame>    https://<msrhost>/charts/<namespace>/<reponame>
Pull charts and their provenance files¶
Helm charts can be pulled from MSR Helm repositories using either the MSR API or the Helm CLI.
Pulling with the MSR API¶
Note
Though the MSR API can be used to pull both Helm charts and provenance files, it is not possible to use it to pull both at the same time.
Pulling a chart¶
To pull a Helm chart:
curl --request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz -u <username>:<password> -o <chartname>-<chartversion>.tgz --cacert ca.crt
Pulling a provenance file¶
To pull a provenance file:
curl --request GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz.prov -u <username>:<password> -o <chartname>-<chartversion>.tgz.prov --cacert ca.crt
Pulling with the Helm CLI¶
Note
Though the Helm CLI can be used to pull a Helm chart by itself or a Helm chart and its provenance file, it is not possible to use the Helm CLI to pull a provenance file by itself.
Pulling a chart¶
Use the `helm pull` CLI command to pull a Helm chart:
helm pull <reponame>/<chartname> --version <chartversion>
ls
ca.crt <chartname>-<chartversion>.tgz
Alternatively, use the following command:
helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt
Pulling a chart and a provenance file in tandem¶
Use the `helm pull` CLI command with the `--prov` option to pull a Helm chart and a provenance file at the same time:
helm pull <reponame>/<chartname> --version <chartversion> --prov
ls
ca.crt <chartname>-<chartversion>.tgz <chartname>-<chartversion>.tgz.prov
Alternatively, use the following command:
helm pull https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<chartname>-<chartversion>.tgz --username <username> --password <password> --ca-file ca.crt --prov
Push charts and their provenance files¶
You can use the MSR API or the Helm CLI to push Helm charts and their provenance files to an MSR Helm repository.
Note
Pushing and pulling Helm charts can be done with or without a provenance file.
Pushing charts with the MSR API¶
Using the MSR API, you can push Helm charts with `application/octet-stream` or `multipart/form-data`.
Pushing with application/octet-stream¶
To push a Helm chart through the MSR API with `application/octet-stream`:
curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Pushing with multipart/form-data¶
To push a Helm chart through the MSR API with `multipart/form-data`:
curl -F "chart=@<chartname>-<chartversion>.tgz" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force pushing a chart¶
To overwrite an existing chart, turn off repository immutability and include a `?force` query parameter in the HTTP request.
1. Navigate to Repositories and click the Settings tab.
2. Under Immutability, select Off.
To force push a Helm chart using the MSR API:
curl -H "Content-Type:application/octet-stream" --data-binary "@<chartname>-<chartversion>.tgz" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Pushing provenance files with the MSR API¶
You can use the MSR API to separately push provenance files related to Helm charts.
To push a provenance file through the MSR API:
curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/prov -u <username>:<password> --cacert ca.crt
Note
Attempting to push a provenance file for a nonexistent chart will result in an error.
Force pushing a provenance file¶
To force push a provenance file using the MSR API:
curl -H "Content-Type:application/json" --data-binary "@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/prov?force" -u <username>:<password> --cacert ca.crt
Pushing a chart and its provenance file with a single API request¶
To push a Helm chart and a provenance file with a single API request:
curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" https://<msrhost>/charts/api/<namespace>/<reponame>/charts -u <username>:<password> --cacert ca.crt
Force pushing a chart and a provenance file¶
To force push both a Helm chart and a provenance file using a single API request:
curl -k -F "chart=@<chartname>-<chartversion>.tgz" -F "prov=@<chartname>-<chartversion>.tgz.prov" "https://<msrhost>/charts/api/<namespace>/<reponame>/charts?force" -u <username>:<password> --cacert ca.crt
Pushing charts with the Helm CLI¶
Note
To push a Helm chart using the Helm CLI, first install the `helm push` plugin from chartmuseum/helm-push. It is not possible to push a provenance file using the Helm CLI.
Use the `helm push` CLI command to push a Helm chart:
helm push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt
Force pushing a chart¶
Use the `helm push` CLI command with the `--force` option to force push a Helm chart:
helm push <chartname>-<chartversion>.tgz <reponame> --username <username> --password <password> --ca-file ca.crt --force
View charts in a Helm repository¶
View charts in a Helm repository using either the MSR API or the MSR web UI.
Viewing charts with the MSR API¶
To view charts that have been pushed to a Helm repository using the MSR API, consider the following options:
| Option | CLI command |
|---|---|
| View the index file | `curl --request GET https://<msrhost>/charts/<namespace>/<reponame>/index.yaml -u <username>:<password> --cacert ca.crt` |
| View a paginated list of all charts | `curl --request GET https://<msrhost>/charts/<namespace>/<reponame>/index.yaml -u <username>:<password> --cacert ca.crt` |
| View a paginated list of chart versions | `curl --request GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname> -u <username>:<password> --cacert ca.crt` |
| Describe a version of a particular chart | `curl --request GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion> -u <username>:<password> --cacert ca.crt` |
| Return the default values of a version of a particular chart | `curl --request GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/values -u <username>:<password> --cacert ca.crt` |
| Produce a template of a version of a particular chart | `curl --request GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template -u <username>:<password> --cacert ca.crt` |
Viewing charts with the MSR web UI¶
Use the MSR web UI to view the MSR Helm repository charts.
1. In the MSR web UI, navigate to Repositories.
2. Click the name of the repository that contains the charts you want to view. The page will refresh to display the detail for the selected Helm repository.
3. Click the Charts tab. The page will refresh to display all the repository charts.
| View | UI sequence |
|---|---|
| Chart versions | Click the View Chart button associated with the required Helm repository. |
| Chart description |  |
| Default values |  |
| Chart templates |  |
Delete charts from a Helm repository¶
You can only delete charts from MSR Helm repositories using the MSR API, not the web UI.
To delete a version of a particular chart from a Helm repository through the MSR API:
curl --request DELETE https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion> -u <username>:<password> --cacert ca.crt
Helm chart linting¶
Helm chart linting can ensure that Kubernetes YAML files and Helm charts adhere to a set of best practices, with a focus on production readiness and security.
A set of established rules forms the basis of Helm chart linting. The process generates a report that you can use to take any necessary actions.
Implement Helm linting¶
Perform Helm linting using either the MSR web UI or the MSR API.
Helm linting with the web UI¶
1. Open the MSR web UI.
2. Navigate to Repositories.
3. Click the name of the repository that contains the chart you want to lint.
4. Click the Charts tab.
5. Click the View Chart button associated with the required Helm chart.
6. Click the View Chart button for the required chart version.
7. Click the Linting Summary tab.
8. Click the Lint Chart button to generate a Helm chart linting report.
Helm linting with the API¶
1. Run the Helm chart linter on a particular chart:

   curl -k -H "Content-Type: application/json" --request POST "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lint" -u <username>:<password>

2. Generate a Helm chart linting report:

   curl -k -X GET "https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/lintsummary" -u <username>:<password>
Helm chart linting rules¶
Helm linting reports offer the linting rules, rule descriptions, and remediations, as presented in the following table.
| Name | Description | Remediation |
|---|---|---|
|  | Alert on services that don’t have any matching deployments | Make sure your service selector correctly matches the labels on one of your deployments. |
|  | Alert on pods that use the default service account | Create a dedicated service account for your pod. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ for more details. |
|  | Alert on deployments that use the deprecated | Use the |
|  | Alert on containers not dropping |  |
|  | Alert on objects using a secret in an environment variable | Don’t use raw secrets in an environment variable. Instead, either mount the secret as a file or use a |
|  | Alert on deployments where the selector doesn’t match the pod template labels | Make sure your deployment’s selector correctly matches the labels in its pod template. |
|  | Alert on deployments with multiple replicas that don’t specify inter-pod anti-affinity, to ensure that the orchestrator attempts to schedule replicas on different nodes | Specify anti-affinity in your pod spec to ensure that the orchestrator attempts to schedule replicas on different nodes. You can do this by using |
|  | Alert on objects using deprecated API versions under extensions v1beta | Migrate to using the apps/v1 API versions for these objects. See https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/ for more details. |
|  | Alert on containers which don’t specify a liveness probe | Specify a liveness probe in your container. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ for more details. |
|  | Alert on containers not running with a read-only root filesystem | Set |
|  | Alert on containers which don’t specify a readiness probe | Specify a readiness probe in your container. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ for more details. |
|  | Alert on pods referencing a service account that isn’t found | Make sure to create the service account, or to refer to an existing service account. |
|  | Alert on deployments with containers running in privileged mode | Don’t run your container as privileged unless required. |
|  | Alert on objects without an | Add an |
|  | Alert on objects without the | Add an email annotation to your object with information about the object’s owner. |
|  | Alert on containers not set to | Set runAsUser to a non-zero number, and |
|  | Alert on deployments exposing port 22, commonly reserved for SSH access | Ensure that non-SSH services are not using port 22. Ensure that any actual SSH servers have been vetted. |
|  | Alert on containers without CPU requests and limits set | Set your container’s CPU requests and limits depending on its requirements. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for more details. |
|  | Alert on containers without memory requests and limits set | Set your container’s memory requests and limits depending on its requirements. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for more details. |
|  | Alert on containers that mount a host path as writable | If you need to access files on the host, mount them as |
Helm limitations¶
Storage redirects¶
The option to redirect clients on pull for Helm repositories is present in the web UI. However, it is currently ineffective. Refer to the relevant issue on GitHub for more information.
MSR API endpoints¶
For the following endpoints, note that while the Swagger API Reference does not specify example responses for HTTP 200 codes, this is due to a Swagger bug; responses will in fact be returned.
# Get chart or provenance file from repo
GET https://<msrhost>/charts/<namespace>/<reponame>/<chartname>/<filename>
# Template a chart version
GET https://<msrhost>/charts/api/<namespace>/<reponame>/charts/<chartname>/<chartversion>/template
Chart storage limit¶
Users can safely store up to 100,000 charts per repository; storing a greater number may compromise some MSR functionality.
Tag pruning¶
Tag pruning is the process of cleaning up unnecessary or unwanted repository tags. As of v2.6, you can configure Mirantis Secure Registry (MSR) to automatically perform tag pruning on repositories that you manage by:
- Specifying a tag pruning policy, or alternatively
- Setting a tag limit
Note
When run, tag pruning only deletes a tag and does not carry out any actual blob deletion.
Known Issue
While the tag limit field is disabled when you turn on immutability for a new repository, this is currently not the case with Repository Settings. As a workaround, turn off immutability when setting a tag limit via Repository Settings > Pruning.
In the following section, we will cover how to specify a tag pruning policy and set a tag limit on repositories that you manage. It will not include modifying or deleting a tag pruning policy.
Specify a tag pruning policy¶
As a repository administrator, you can now add tag pruning policies on each repository that you manage. To get started, navigate to `https://<msr-url>` and log in with your credentials.

Select Repositories in the left-side navigation panel, and then click the name of the repository that you want to update. Note that you will have to click the repository name following the `/` after the specific namespace for your repository.
Select the Pruning tab, and click New pruning policy to specify your tag pruning criteria:
MSR allows you to set your pruning triggers based on the following image attributes:
| Name | Description | Example |
|---|---|---|
| Tag name | Whether the tag name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Tag name = `test` |
| Component name | Whether the image has a given component and the component name equals, starts with, ends with, contains, is one of, or is not one of your specified string values | Component name starts with |
| Vulnerabilities | Whether the image has vulnerabilities (critical, major, minor, or all) and your selected vulnerability filter is greater than or equals, greater than, equals, not equals, less than or equals, or less than your specified number | Critical vulnerabilities = |
| License | Whether the image uses an intellectual property license and is one of or not one of your specified words | License name = |
| Last updated at | Whether the last image update was before your specified number of hours, days, weeks, or months. For details on valid time units, see Go’s ParseDuration function. | Last updated at: Hours = |
Specify one or more image attributes to add to your pruning criteria, then choose:

- Prune future tags to save the policy and apply your selection to future tags. Only tags matching the policy after its addition will be pruned during garbage collection.
- Prune all tags to save the policy and evaluate both existing and future tags on your repository.
Upon selection, you will see a confirmation message and will be redirected to your newly updated Pruning tab.
If you have specified multiple pruning policies on the repository, the Pruning tab will display a list of your prune triggers and details on when the last tag pruning was performed based on the trigger, a toggle for deactivating or reactivating the trigger, and a View link for modifying or deleting your selected trigger.
All tag pruning policies on your account are evaluated every 15 minutes. Any qualifying tags are then deleted from the metadata store. If a tag pruning policy is modified or created, then the tag pruning policy for the affected repository will be evaluated.
Set a tag limit¶
In addition to pruning policies, you can also set tag limits on repositories that you manage to restrict the number of tags on a given repository. Repository tag limits are processed in a first in first out (FIFO) manner. For example, if you set a tag limit of 2, adding a third tag would push out the first.
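The FIFO behavior described above can be sketched with ordinary shell tools (hypothetical tag history, oldest first; not an MSR command):

```shell
# With a tag limit of 2, only the two most recent tags are retained;
# pushing v3 evicts v1, the oldest tag.
printf 'v1\nv2\nv3\n' | tail -n 2
```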
To set a tag limit, do the following:
1. Select the repository that you want to update and click the Settings tab.
2. Turn off immutability for the repository.
3. Specify a number in the Pruning section and click Save. The Pruning tab will now display your tag limit above the prune triggers list, along with a link to modify this setting.
Vulnerability scanning¶
In addition to its primary function of storing Docker images, MSR offers a deeply integrated vulnerability scanner that analyzes container images, either by manual user request or automatically whenever an image is uploaded to the registry.
MSR image scanning occurs in a service known as the dtr-jobrunner container. To scan an image, MSR:
1. Extracts a copy of the image layers from backend storage.
2. Extracts the files from the layer into a working directory inside the dtr-jobrunner container.
3. Executes the scanner against the files in this working directory, collecting a series of scanning data. Once the scanning data is collected, the working directory for the layer is removed.
Important
In scanning images for security vulnerabilities, MSR temporarily extracts the contents of your images to disk. If malware is contained in these images, external malware scanners may wrongly attribute that malware to MSR. The key indication of this is the detection of malware in the dtr-jobrunner container in /tmp/findlib-workdir-*. To prevent any recurrence of the issue, Mirantis recommends configuring the run-time scanner to exclude files found in the MSR dtr-jobrunner containers in /tmp or, more specifically, if wildcards can be used, in /tmp/findlib-workdir-*.
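As a hedged example, with ClamAV's clamd the exclusion could be expressed in clamd.conf as follows; adapt the path pattern to your own scanner's exclusion syntax:

```
# Exclude MSR jobrunner scan workspaces from malware scans.
# ExcludePath takes a POSIX regular expression in clamd.conf.
ExcludePath ^/tmp/findlib-workdir-
```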
Image enforcement policies and monitoring¶
MSR users can automatically block clients from pulling images stored in the registry by configuring enforcement policies at either the global or repository level.
An enforcement policy is a collection of rules used to determine whether an image can be pulled.
A good example of a scenario in which an enforcement policy can be useful is when an administrator wants to house images in MSR but does not want those images to be pulled into environments by MSR users. In this case, the administrator would configure an enforcement policy either at the global or repository level based on a selected set of rules.
Enforcement policies: global versus repository¶
Global image enforcement policies differ from those set at the repository level in several important respects:
Whereas both administrators and regular users can set up enforcement policies at the repository level, only administrators can set up enforcement policies at the global level.
Only one global enforcement policy can be set for each MSR instance, whereas multiple enforcement policies can be configured at the repository level.
Global enforcement policies are evaluated prior to repository policies.
Enforcement policy rule attributes¶
Global and repository enforcement policies are generated from the same set of rule attributes.
Note
All rules must evaluate to true for an image to be pulled; if any rule evaluates to false, the image pull is blocked.
Name | Filters | Example
---|---|---
Tag name | | Tag name starts with
Component name | | Component name starts with
All CVSS 3 vulnerabilities | | All CVSS 3 vulnerabilities less than
Critical CVSS 3 vulnerabilities | | Critical CVSS vulnerabilities less than
High CVSS 3 vulnerabilities | | High CVSS 3 vulnerabilities less than
Medium CVSS 3 vulnerabilities | | Medium CVSS 3 vulnerabilities less than
Low CVSS 3 vulnerabilities | | Low CVSS 3 vulnerabilities less than
License name | | License name one of
Last updated at | | Last updated at before
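The all-rules-must-pass semantics can be sketched as follows. This is an illustrative model only; the rule predicates and image fields are hypothetical, not MSR's internal types:

```python
def image_pull_allowed(image, rules):
    """An enforcement policy blocks the pull if any rule evaluates to false."""
    return all(rule(image) for rule in rules)

# Hypothetical rules mirroring two of the attributes in the table above.
rules = [
    lambda img: img["tag"].startswith("release-"),  # Tag name starts with
    lambda img: img["critical_cvss3_vulns"] < 1,    # Critical CVSS 3 vulnerabilities less than
]

image = {"tag": "release-1.4", "critical_cvss3_vulns": 0}
print(image_pull_allowed(image, rules))  # → True
```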
Configure enforcement policies¶
Use the MSR web UI to set up enforcement policies for both repository and global enforcement.
Set up repository enforcement¶
Important
Users can only create and edit enforcement policies for repositories within their user namespace.
To set up a repository enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to Repositories.
Select the repository to edit.
Click the Enforcement tab and select New enforcement policy.
Define the enforcement policy rules with the desired rule attributes and select Save. The screen displays the new enforcement policy in the Enforcement tab. By default, the new enforcement policy is toggled on.
Once a repository enforcement policy is set up and activated, pull requests that do not satisfy the policy rules will return the following error message:
Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: enforcement policies '<enforcement-policy-id>'
blocked request
Set up global enforcement¶
Important
Only administrators can set up global enforcement policies.
To set up a global enforcement policy using the MSR web UI:
Log in to the MSR web UI.
Navigate to System.
Select the Enforcement tab.
Confirm that the global enforcement function is Enabled.
Define the enforcement policy rules with the desired criteria and select Save.
Once the global enforcement policy is set up, pull requests against any repository that do not satisfy the policy rules will return the following error message:
Error response from daemon: unknown: pull access denied against
<namespace>/<reponame>: global enforcement policy blocked request
Monitor enforcement activity¶
Administrators and users can monitor enforcement activity in the MSR web UI.
Important
Enforcement events can only be monitored at the repository level. It is not possible, for example, to view in one location all enforcement events that correspond to the global enforcement policy.
Navigate to Repositories.
Select the repository whose enforcement activity you want to review.
Select the Activity tab to view enforcement event activity. For instance you can:
Identify which policy triggered an event using the enforcement ID displayed on the event entry. (The enforcement IDs for each enforcement policy are located on the Enforcement tab.)
Identify the user responsible for making a blocked pull request, and the time of the event.
Upgrade MSR¶
MSR uses semantic versioning. While downgrades are not supported, Mirantis supports upgrades according to the following rules:
When upgrading from one patch version to another, you can skip patch versions because no data migration is performed for patch versions.
When upgrading between minor versions, you cannot skip versions; however, you can upgrade from any patch version of the previous minor version to any patch version of the current minor version.
When upgrading between major versions, upgrade one major version at a time, to the earliest available minor version of the target major version. Mirantis strongly recommends that you first upgrade to the latest minor/patch version of your current major version.
Description | From | To | Supported
---|---|---|---
patch upgrade | x.y.0 | x.y.1 | yes
skip patch version | x.y.0 | x.y.2 | yes
patch downgrade | x.y.2 | x.y.1 | no
minor upgrade | x.y.* | x.y+1.* | yes
skip minor version | x.y.* | x.y+2.* | no
minor downgrade | x.y.* | x.y-1.* | no
skip major version | x.*.* | x+2.*.* | no
major downgrade | x.*.* | x-1.*.* | no
major upgrade | x.y.z | x+1.0.0 | yes
major upgrade skipping minor version | x.y.z | x+1.y+1.z | no
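The upgrade-support rules in the table above can be sketched as a simple version check. This is an illustrative model of the table, not an official tool:

```python
def upgrade_supported(src, dst):
    """Return True if an MSR upgrade from src to dst follows the support table.

    src and dst are (major, minor, patch) tuples.
    """
    if dst <= src:
        return False                 # downgrades (and no-ops) are not supported
    if dst[0] == src[0]:
        if dst[1] == src[1]:
            return True              # patch upgrades may skip patch versions
        return dst[1] == src[1] + 1  # minor upgrades cannot skip minor versions
    # One major version at a time, to the earliest available minor version.
    return dst[0] == src[0] + 1 and dst[1] == 0

print(upgrade_supported((1, 2, 0), (1, 2, 2)))  # skip patch version → True
print(upgrade_supported((1, 2, 3), (2, 0, 0)))  # major upgrade → True
print(upgrade_supported((1, 2, 3), (2, 3, 3)))  # major upgrade skipping minor → False
```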
A few seconds of interruption may occur during the upgrade of a MSR cluster, so schedule the upgrade to take place outside of peak hours to avoid any business impacts.
Minor upgrade¶
Important
Only perform the MSR upgrade once any correlating upgrades to Mirantis Kubernetes Engine (MKE) and/or Mirantis Container Runtime (MCR) have completed.
Mirantis recommends the following upgrade sequence:
MCR
MKE
MSR
Before starting the MSR upgrade, confirm that:
The version of MKE in use is supported by the upgrade version of MSR.
The MKE and MSR backups are both recent.
A backup of current swarm state has been created.
To create a swarm state backup, perform the following from a MKE manager node:
ENGINE=$(docker version -f '{{.Server.Version}}')
systemctl stop docker
sudo tar czvf "/tmp/swarm-${ENGINE}-$(hostname -s)-$(date +%s%z).tgz" /var/lib/docker/swarm/
systemctl start docker
(if possible) A backup exists of the images stored by MSR, if it is configured to store images on the local filesystem or within an NFS store.
# REPLICA_ID must be set to the ID of your MSR replica.
BACKUP_LOCATION=/example_directory/filename

# If local filesystem
sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes/dtr-registry-${REPLICA_ID} .

# If NFS store
sudo tar -cf ${BACKUP_LOCATION} -C /var/lib/docker/volumes/dtr-registry-nfs-${REPLICA_ID} .
None of the MSR replica nodes are exhibiting time drift. To make this determination, review the kernel log timestamps for each of the nodes. If time drift is occurring, use clock synchronization (e.g., NTP) to keep node clocks in sync.
Local filesystems across MSR nodes are not exhibiting any disk storage issues.
Docker Content Trust in MKE is disabled.
All system requirements are met.
Step 1. Upgrade MSR to 2.7 if necessary¶
Confirm that you are running MSR 2.7.x. If you are still using an earlier version of MSR, upgrade your installation to MSR 2.7.13.
Step 2. Upgrade MSR¶
Pull the latest version of MSR:
docker pull mirantis/dtr:2.8.13
Confirm that at least 16GB RAM is available on the node on which you are running the upgrade. If the MSR node does not have access to the internet, follow the offline installation documentation to get the images.
Once you have the latest image on your machine (and the images on the target nodes, if upgrading offline), run the upgrade command.
Note
The upgrade command can be run from any available node, as MKE is aware of which worker nodes have replicas.
docker run -it --rm \
mirantis/dtr:2.8.13 upgrade
By default, the upgrade command runs in interactive mode and prompts for any
necessary information. If you are performing the upgrade on an existing
replica, pass the --existing-replica-id
flag.
The upgrade command will start replacing every container in your MSR cluster, one replica at a time. It will also perform certain data migrations. If anything fails or the upgrade is interrupted for any reason, rerun the upgrade command (the upgrade will resume from the point of interruption).
Step 3. Verify upgrade success¶
To confirm that the newly upgraded MSR environment is ready:
Make sure that all running MSR containers reflect the newly upgraded MSR version:
docker ps --filter name=dtr
Verify that the MSR web UI is accessible and operational.
Confirm push and pull functionality of Docker images to and from the registry.
Ensure that the MSR metadata store is in good standing:
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
docker run -it --rm --net dtr-ol \
  -v dtr-ca-$REPLICA_ID:/ca \
  dockerhubenterprise/rethinkcli:v2.3.0 $REPLICA_ID

# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
[]
Metadata Store Migration¶
When upgrading from 2.5 to 2.6, the system runs a metadatastoremigration job following a successful upgrade. This involves migrating the blob links for your images, which is necessary for online garbage collection. With 2.6, you can log in to the MSR web interface and navigate to System > Job Logs to check the status of the metadatastoremigration job.
Garbage collection is disabled while the migration is running. In the case of a failed metadatastoremigration, the system will retry twice.
If all three attempts fail, it will be necessary to manually retrigger the metadatastoremigration job. To do this, send a POST request to the /api/v0/jobs endpoint:
curl https://<msr-external-url>/api/v0/jobs -X POST \
-u username:accesstoken -H 'Content-Type':'application/json' -d \
'{"action": "metadatastoremigration"}'
Alternatively, select API from the bottom left-side navigation panel of the MSR web interface and use the Swagger UI to send your API request.
Patch upgrade¶
A patch upgrade changes only the MSR containers and is always safer than a minor version upgrade. The command is the same as for a minor upgrade.
MSR cache upgrade¶
If you have previously deployed a cache, be sure to upgrade the node dedicated for your cache to keep it in sync with your upstream MSR replicas. This prevents authentication errors and other strange behaviors.
Download the vulnerability database¶
After upgrading MSR, it is necessary to redownload the vulnerability database.
Monitor MSR¶
Mirantis Secure Registry is a Dockerized application. To monitor it, you can use the same tools and techniques you’re already using to monitor other containerized applications running on your cluster. One way to monitor MSR is using the monitoring capabilities of Docker Universal Control Plane.
In your browser, log in to Mirantis Kubernetes Engine (MKE), and navigate to the Stacks page. If you have MSR set up for high-availability, then all the MSR replicas are displayed.
To check the containers for the MSR replica, click the replica you want to inspect, click Inspect Resource, and choose Containers.
Now you can drill into each MSR container to see its logs and find the root cause of the problem.
Health check endpoints¶
MSR also exposes several endpoints you can use to assess whether a MSR replica is healthy:
- /_ping: Checks if the MSR replica is healthy and returns a simple JSON response. This is useful for load balancing and other automated health check tasks.
- /nginx_status: Returns the number of connections being handled by the NGINX front-end used by MSR.
- /api/v0/meta/cluster_status: Returns extensive information about all MSR replicas.
Cluster status¶
The /api/v0/meta/cluster_status endpoint requires administrator credentials, and returns a JSON object for the entire cluster as observed by the replica being queried. You can authenticate your requests using HTTP basic auth.
curl -ksL -u <user>:<pass> https://<msr-domain>/api/v0/meta/cluster_status
{
"current_issues": [
{
"critical": false,
"description": "... some replicas are not ready. The following servers are
not reachable: dtr_rethinkdb_f2277ad178f7",
}],
"replica_health": {
"f2277ad178f7": "OK",
"f3712d9c419a": "OK",
"f58cf364e3df": "OK"
},
}
You can find health status in the current_issues and replica_health arrays. If this endpoint doesn't provide meaningful information when you are troubleshooting, try troubleshooting using logs.
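As a hedged sketch, a monitoring script could evaluate that payload as follows. The field names match the sample response above; anything beyond that structure is an assumption:

```python
def summarize_cluster_status(status):
    """Flag critical issues and replicas not reporting OK, given the parsed
    JSON body of a /api/v0/meta/cluster_status response."""
    critical = [i["description"] for i in status.get("current_issues", [])
                if i.get("critical")]
    bad_replicas = [rid for rid, health in status.get("replica_health", {}).items()
                    if health != "OK"]
    return critical, bad_replicas

status = {
    "current_issues": [{"critical": False,
                        "description": "... some replicas are not ready."}],
    "replica_health": {"f2277ad178f7": "OK", "f3712d9c419a": "OK"},
}
print(summarize_cluster_status(status))  # → ([], [])
```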
Check notary audit logs¶
Docker Content Trust (DCT) keeps audit logs of changes made to trusted repositories. Every time you push a signed image to a repository, or delete trust data for a repository, DCT logs that information.
These logs are only available from the MSR API.
Get an authentication token¶
To access the audit logs you need to authenticate your requests using an authentication token. You can get an authentication token for all repositories, or one that is specific to a single repository.
To get a token for all repositories:
curl --insecure --silent \
--user <user>:<password> \
"https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=registry:catalog:*"
To get a token that is specific to a single repository:
curl --insecure --silent \
--user <user>:<password> \
"https://<dtr-url>/auth/token?realm=dtr&service=dtr&scope=repository:<dtr-url>/<repository>:pull"
MSR returns a JSON file with a token, even when the user doesn’t have access to the repository to which they requested the authentication token. This token doesn’t grant access to MSR repositories.
The JSON file returned has the following structure:
{
"token": "<token>",
"access_token": "<token>",
"expires_in": "<expiration in seconds>",
"issued_at": "<time>"
}
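A sketch of using that token on subsequent API requests follows. The Bearer scheme is standard for Docker registry tokens; the sample payload is illustrative:

```python
import json

def auth_header(token_response_body):
    """Build the Authorization header from the token endpoint's JSON reply."""
    payload = json.loads(token_response_body)
    return {"Authorization": "Bearer " + payload["token"]}

body = '{"token": "abc123", "access_token": "abc123", "expires_in": "300", "issued_at": "2021-01-01T00:00:00Z"}'
print(auth_header(body))  # → {'Authorization': 'Bearer abc123'}
```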
Changefeed API¶
Once you have an authentication token you can use the following endpoints to get audit logs:
URL | Description | Authorization
---|---|---
 | Get audit logs for all repositories. | Global-scope token
 | Get audit logs for a specific repository. | Repository-specific token
Both endpoints have the following query string parameters:
Field name | Required | Type | Description
---|---|---|---
change_id | Yes | String | A non-inclusive starting change ID from which to start returning results. This will typically be the first or last change ID from the previous page of records requested, depending on which direction you are paging in. The value 0 indicates records should be returned starting from the beginning of time. The value 1 indicates records should be returned starting from the most recent record. If 1 is provided, the implementation will also assume the records value is meant to be negative, regardless of the given sign.
records | Yes | String integer | The number of records to return. A negative value indicates the number of records preceding the change_id should be returned. Records are always returned sorted from oldest to newest.
The response is a JSON like:
{
"count": 1,
"records": [
{
"ID": "0a60ec31-d2aa-4565-9b74-4171a5083bef",
"CreatedAt": "2017-11-06T18:45:58.428Z",
"GUN": "msr.example.org/library/wordpress",
"Version": 1,
"SHA256": "a4ffcae03710ae61f6d15d20ed5e3f3a6a91ebfd2a4ba7f31fc6308ec6cc3e3d",
"Category": "update"
}
]
}
Below is the description for each of the fields in the response:
Field name | Description
---|---
count | The number of records returned.
ID | The ID of the change record. Should be used in the change_id field of requests to provide a non-inclusive starting index. It should be treated as an opaque value that is guaranteed to be unique within an instance of notary.
CreatedAt | The time the change happened.
GUN | The MSR repository that was changed.
Version | The version that the repository was updated to. This increments every time there's a change to the trust repository. This is always 0 for events representing trusted data being removed from the repository.
SHA256 | The checksum of the timestamp being updated to. This can be used with the existing notary APIs to request said timestamp. This is always an empty string for events representing trusted data being removed from the repository.
Category | The kind of change that was made to the trusted repository. Can be update, or deletion.
The results only include audit logs for events that happened more than 60 seconds ago, and are sorted from oldest to newest.
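To page forward through the changefeed, you would pass the ID of the last record from the previous page as the next change_id. The following sketch is based on the parameter semantics above, not an official client:

```python
def next_page_params(page, page_size):
    """Compute query parameters for the next (newer) page of audit logs.

    page is the parsed JSON response from a changefeed request.
    """
    records = page.get("records") or []
    if not records:
        return None  # no further pages
    # change_id is non-inclusive: start after the last record we saw.
    return {"change_id": records[-1]["ID"], "records": str(page_size)}

page = {"count": 1,
        "records": [{"ID": "0a60ec31-d2aa-4565-9b74-4171a5083bef",
                     "Category": "update"}]}
print(next_page_params(page, 10))
# → {'change_id': '0a60ec31-d2aa-4565-9b74-4171a5083bef', 'records': '10'}
```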
Even though the authentication API always returns a token, the changefeed API validates whether the user has access to see the audit logs:
If the user is an admin, they can see the audit logs for any repository.
All other users can only see audit logs for repositories to which they have read access.
Troubleshoot MSR¶
This guide contains tips and tricks for troubleshooting MSR problems.
Troubleshoot overlay networks¶
High availability in MSR depends on swarm overlay networking. One way to test if overlay networks are working correctly is to deploy containers to the same overlay network on different nodes and see if they can ping one another.
Use SSH to log into a node and run:
docker run -it --rm \
--net dtr-ol --name overlay-test1 \
--entrypoint sh mirantis/dtr
Then use SSH to log into another node and run:
docker run -it --rm \
--net dtr-ol --name overlay-test2 \
--entrypoint ping mirantis/dtr -c 3 overlay-test1
If the second command succeeds, it indicates overlay networking is working correctly between those nodes.
You can run this test with any attachable overlay network and any Docker image that has sh and ping.
Access RethinkDB directly¶
MSR uses RethinkDB for persisting data and replicating it across replicas. It might be helpful to connect directly to the RethinkDB instance running on a MSR replica to check the MSR internal state.
Warning
Modifying RethinkDB directly is not supported and may cause problems.
via RethinkCLI¶
The RethinkCLI can be run from a separate image in the mirantis organization. Note that the commands below use separate tags for non-interactive and interactive modes.
Non-interactive¶
Use SSH to log into a node that is running a MSR replica, and run the following:
# List problems in the cluster detected by the current node.
REPLICA_ID=$(docker container ls --filter=name=dtr-rethink --format '{{.Names}}' | cut -d'/' -f2 | cut -d'-' -f3 | head -n 1) && \
echo 'r.db("rethinkdb").table("current_issues")' | \
docker run --rm -i --net dtr-ol \
  -v "dtr-ca-${REPLICA_ID}:/ca" \
  -e MSR_REPLICA_ID=$REPLICA_ID \
  mirantis/rethinkcli:v2.2.0-ni non-interactive
On a healthy cluster the output will be [].
Interactive¶
Starting in DTR 2.5.5, you can run RethinkCLI from a separate image. First, set an environment variable for your MSR replica ID:
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-')
RethinkDB stores data in different databases that contain multiple tables. Run the following command to get into interactive mode and query the contents of the DB:
docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca mirantis/rethinkcli:v2.3.0 $REPLICA_ID
# List problems in the cluster detected by the current node.
> r.db("rethinkdb").table("current_issues")
[]
# List all the DBs in RethinkDB
> r.dbList()
[ 'dtr2',
'jobrunner',
'notaryserver',
'notarysigner',
'rethinkdb' ]
# List the tables in the dtr2 db
> r.db('dtr2').tableList()
[ 'blob_links',
'blobs',
'client_tokens',
'content_caches',
'events',
'layer_vuln_overrides',
'manifests',
'metrics',
'namespace_team_access',
'poll_mirroring_policies',
'promotion_policies',
'properties',
'pruning_policies',
'push_mirroring_policies',
'repositories',
'repository_team_access',
'scanned_images',
'scanned_layers',
'tags',
'user_settings',
'webhooks' ]
# List the entries in the repositories table
> r.db('dtr2').table('repositories')
[ { enableManifestLists: false,
id: 'ac9614a8-36f4-4933-91fa-3ffed2bd259b',
immutableTags: false,
name: 'test-repo-1',
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
namespaceName: 'admin',
pk: '3a4a79476d76698255ab505fb77c043655c599d1f5b985f859958ab72a4099d6',
pulls: 0,
pushes: 0,
scanOnPush: false,
tagLimit: 0,
visibility: 'public' },
{ enableManifestLists: false,
id: '9f43f029-9683-459f-97d9-665ab3ac1fda',
immutableTags: false,
longDescription: '',
name: 'testing',
namespaceAccountID: 'fc3b4aec-74a3-4ba2-8e62-daed0d1f7481',
namespaceName: 'admin',
pk: '6dd09ac485749619becaff1c17702ada23568ebe0a40bb74a330d058a757e0be',
pulls: 0,
pushes: 0,
scanOnPush: false,
shortDescription: '',
tagLimit: 1,
visibility: 'public' } ]
Individual DBs and tables are a private implementation detail and may change in MSR from version to version, but you can always use dbList() and tableList() to explore the contents and data structure.
via API¶
To check on the overall status of your MSR cluster without interacting with RethinkCLI, run the following API request:
curl -u admin:$TOKEN -X GET "https://<msr-url>/api/v0/meta/cluster_status" -H "accept: application/json"
Example API Response¶
{
"rethink_system_tables": {
"cluster_config": [
{
"heartbeat_timeout_secs": 10,
"id": "heartbeat"
}
],
"current_issues": [],
"db_config": [
{
"id": "339de11f-b0c2-4112-83ac-520cab68d89c",
"name": "notaryserver"
},
{
"id": "aa2e893f-a69a-463d-88c1-8102aafebebc",
"name": "dtr2"
},
{
"id": "bdf14a41-9c31-4526-8436-ab0fed00c2fd",
"name": "jobrunner"
},
{
"id": "f94f0e35-b7b1-4a2f-82be-1bdacca75039",
"name": "notarysigner"
}
],
"server_status": [
{
"id": "9c41fbc6-bcf2-4fad-8960-d117f2fdb06a",
"name": "dtr_rethinkdb_5eb9459a7832",
"network": {
"canonical_addresses": [
{
"host": "dtr-rethinkdb-5eb9459a7832.dtr-ol",
"port": 29015
}
],
"cluster_port": 29015,
"connected_to": {
"dtr_rethinkdb_56b65e8c1404": true
},
"hostname": "9e83e4fee173",
"http_admin_port": "<no http admin>",
"reql_port": 28015,
"time_connected": "2019-02-15T00:19:22.035Z"
},
}
...
]
}
}
Recover from an unhealthy replica¶
When a MSR replica is unhealthy or down, the MSR web UI displays a warning:
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
To fix this, you should remove the unhealthy replica from the MSR cluster, and join a new one. Start by running:
docker run -it --rm \
mirantis/dtr:2.8.13 remove \
--ucp-insecure-tls
And then:
docker run -it --rm \
mirantis/dtr:2.8.13 join \
--ucp-node <mke-node-name> \
--ucp-insecure-tls
Vulnerability scan warnings¶
Warnings display in a red banner at the top of the MSR web UI to indicate potential vulnerability scanning issues.
Warning | Cause
---|---
Warning: Cannot perform security scans because no vulnerability database was found. | Displays when vulnerability scanning is enabled but there is no vulnerability database available to MSR. Typically, the warning displays when a vulnerability database update is run for the first time and the operation fails, as no usable vulnerability database exists at this point.
Warning: Last vulnerability database sync failed. | Displays when a vulnerability database update fails, even though a previous usable vulnerability database is available for vulnerability scans. The warning typically displays when a vulnerability database update fails, despite successful completion of a prior vulnerability database update.
Note
The terms vulnerability database sync and vulnerability database update are interchangeable, in the context of MSR web UI warnings.
Note
The issuing of warnings is the same regardless of whether vulnerability database updating is done manually or is performed automatically through a job.
MSR undergoes a number of steps in performing a vulnerability database update, including TAR file download and extraction, file validation, and the update operation itself. Errors that trigger warnings can occur at any point in the update process, and can include such system-related matters as low disk space, transient network issues, or configuration complications. As such, the best strategy for troubleshooting MSR vulnerability scanning issues is to review the logs.
To view the logs for an online vulnerability database update:
Online vulnerability database updates are performed by a jobrunner container, the logs for which you can view through a docker CLI command or by using the MSR web UI:
CLI command:
docker logs <jobrunner-container-name>
MSR web UI:
Navigate to System > Job Logs in the left-side navigation panel.
To view the logs for an offline vulnerability database update:
The MSR vulnerability database update occurs through the dtr-api container. As such, access the logs for that container to ascertain the reason for update failure.
To obtain more log information:
If the logs do not initially offer enough detail on the cause of vulnerability database update failure, set MSR to enable debug logging, which will display additional debug logs.
Refer to the reconfigure CLI command documentation for information on how to enable debug logging. For example:
docker run -it --rm mirantis/dtr:<version-number> reconfigure \
  --ucp-url $MKE_URL --ucp-username $USER --ucp-password $PASSWORD \
  --ucp-insecure-tls --dtr-external-url $MSR_URL --log-level debug
Disaster recovery¶
Disaster recovery overview¶
Mirantis Secure Registry is a clustered application. You can join multiple replicas for high availability.
For a MSR cluster to be healthy, a majority of its replicas (n/2 + 1) need to be healthy and be able to communicate with the other replicas. This is also known as maintaining quorum.
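The quorum arithmetic works out as follows, a simple illustration of n/2 + 1:

```python
def quorum(n):
    """Minimum number of healthy replicas needed to maintain quorum."""
    return n // 2 + 1

def tolerated_failures(n):
    """How many replicas can fail before the cluster loses quorum."""
    return n - quorum(n)

for n in (1, 3, 5, 7):
    print(f"{n} replicas: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

This is why MSR deployments use an odd number of replicas: going from 5 to 6 replicas raises the quorum from 3 to 4 without tolerating any additional failures.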
This means that there are three failure scenarios possible.
Replica is unhealthy but cluster maintains quorum¶
One or more replicas are unhealthy, but the overall majority (n/2 + 1) is still healthy and able to communicate with one another.
In this example the MSR cluster has five replicas but one of the nodes stopped working, and the other has problems with the MSR overlay network.
Even though these two replicas are unhealthy the MSR cluster has a majority of replicas still working, which means that the cluster is healthy.
In this case you should repair the unhealthy replicas, or remove them from the cluster and join new ones.
The majority of replicas are unhealthy¶
A majority of replicas are unhealthy, making the cluster lose quorum, but at least one replica is still healthy, or at least the data volumes for MSR are accessible from that replica.
In this example the MSR cluster is unhealthy but since one replica is still running it’s possible to repair the cluster without having to restore from a backup. This minimizes the amount of data loss.
All replicas are unhealthy¶
This is a total disaster scenario where all MSR replicas were lost, causing the data volumes for all MSR replicas to get corrupted or lost.
In a disaster scenario like this, you’ll have to restore MSR from an existing backup. Restoring from a backup should be only used as a last resort, since doing an emergency repair might prevent some data loss.
Repair a single replica¶
When one or more MSR replicas are unhealthy but the overall majority (n/2 + 1) is healthy and able to communicate with one another, your MSR cluster is still functional and healthy.
Given that the MSR cluster is healthy, there’s no need to execute any disaster recovery procedures like restoring from a backup.
Instead, you should:
Remove the unhealthy replicas from the MSR cluster.
Join new replicas to make MSR highly available.
Since a MSR cluster requires a majority of replicas to be healthy at all times, the order of these operations is important. If you join more replicas before removing the ones that are unhealthy, your MSR cluster might become unhealthy.
Split-brain scenario¶
To understand why you should remove unhealthy replicas before joining new ones, imagine you have a five-replica MSR deployment, and something goes wrong with the overlay network connecting the replicas, causing them to be separated into two groups.
Because the cluster originally had five replicas, it can work as long as three replicas are still healthy and able to communicate (5 / 2 + 1 = 3). Even though the network separated the replicas in two groups, MSR is still healthy.
If at this point you join a new replica instead of fixing the network problem or removing the two replicas that got isolated from the rest, it's possible that the new replica ends up on the side of the network partition that has fewer replicas.
When this happens, both groups now have the minimum number of replicas needed to establish a cluster. This is also known as a split-brain scenario, because both groups can now accept writes and their histories start diverging, making the two groups effectively two different clusters.
Remove replicas¶
To remove unhealthy replicas, you’ll first have to find the replica ID of one of the replicas you want to keep, and the replica IDs of the unhealthy replicas you want to remove.
You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface, or by using the MKE client bundle to run:
docker ps --format "{{.Names}}" | grep dtr
# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
Another way to determine the replica ID is to SSH into a MSR node and run the following:
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') && echo $REPLICA_ID
Then use the MKE client bundle to remove the unhealthy replicas:
docker run -it --rm mirantis/dtr:2.8.13 remove \
--existing-replica-id <healthy-replica-id> \
--replica-ids <unhealthy-replica-id> \
--ucp-insecure-tls \
--ucp-url <mke-url> \
--ucp-username <user> \
--ucp-password <password>
You can remove more than one replica at the same time, by specifying multiple IDs with a comma.
Join replicas¶
Once you’ve removed the unhealthy nodes from the cluster, you should join new ones to make sure your cluster is highly available.
Use your MKE client bundle to run the following command which prompts you for the necessary parameters:
docker run -it --rm \
mirantis/dtr:2.8.13 join \
--ucp-node <mke-node-name> \
--ucp-insecure-tls
Where to go next¶
Repair a cluster¶
For a MSR cluster to be healthy, a majority of its replicas (n/2 + 1) need to be healthy and be able to communicate with the other replicas. This is known as maintaining quorum.
In a scenario where quorum is lost, but at least one replica is still accessible, you can use that replica to repair the cluster. That replica doesn’t need to be completely healthy. The cluster can still be repaired as the MSR data volumes are persisted and accessible.
Repairing the cluster from an existing replica minimizes the amount of data lost. If this procedure doesn’t work, you’ll have to restore from an existing backup.
Diagnose an unhealthy cluster¶
When a majority of replicas are unhealthy, causing the overall MSR cluster to become unhealthy, operations like docker login, docker pull, and docker push return an internal server error.
Accessing the /_ping endpoint of any replica also returns the same error. It's also possible that the MSR web UI is partially or fully unresponsive.
Perform an emergency repair¶
Use the mirantis/dtr emergency-repair command to try to repair an unhealthy MSR cluster from an existing replica.
This command checks that the data volumes for the MSR replica are uncorrupted, redeploys all internal MSR components, and reconfigures them to use the existing volumes. It also reconfigures MSR by removing all other nodes from the cluster, leaving MSR as a single-replica cluster with the replica you chose.
Start by finding the ID of the MSR replica that you want to repair from. You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface, or by using a MKE client bundle to run:
docker ps --format "{{.Names}}" | grep dtr
# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
Another way to determine the replica ID is to SSH into a MSR node and run the following:
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') && echo $REPLICA_ID
Then, use your MKE client bundle to run the emergency repair command:
docker run -it --rm mirantis/dtr:2.8.13 emergency-repair \
--ucp-insecure-tls \
--existing-replica-id <replica-id>
If the emergency repair procedure is successful, your MSR cluster now has a single replica. You should now join more replicas for high availability.
Note
Learn more about the high availability configuration in Set up high availability.
If the emergency repair command fails, try running it again using a different replica ID. As a last resort, you can restore your cluster from an existing backup.
Create a backup¶
Data managed by MSR¶
Mirantis Secure Registry maintains data about:

| Data | Description |
|---|---|
| Configurations | The MSR cluster configurations |
| Repository metadata | The metadata about the repositories and images deployed |
| Access control to repos and images | Permissions for teams and repositories |
| Notary data | Notary tags and signatures |
| Scan results | Security scanning results for images |
| Certificates and keys | The certificates, public keys, and private keys that are used for mutual TLS communication |
| Image content | The images you push to MSR. This can be stored on the file system of the node running MSR, or on another storage system, depending on the configuration. |
This data is persisted on the host running MSR, using named volumes.
To perform a backup of a MSR node, run the mirantis/dtr backup command. This command backs up the following data:
| Data | Backed up | Description |
|---|---|---|
| Configurations | yes | MSR settings |
| Repository metadata | yes | Metadata such as image architecture and size |
| Access control to repos and images | yes | Data about who has access to which images |
| Notary data | yes | Signatures and digests for images that are signed |
| Scan results | yes | Information about vulnerabilities in your images |
| Certificates and keys | yes | TLS certificates and keys used by MSR |
| Image content | no | Needs to be backed up separately, depends on MSR configuration |
| Users, orgs, teams | no | Create a MKE backup to back up this data |
| Vulnerability database | no | Can be redownloaded after a restore |
Back up MSR data¶
To create a backup of MSR, you need to:
Back up image content
Back up MSR metadata
You should always create backups from the same MSR replica to ensure a smoother restore. If you have not previously performed a backup, the web interface displays a warning prompting you to do so.
Find your replica ID¶
Since you need your MSR replica ID during a backup, the following covers a few ways for you to determine your replica ID:
You can find the list of replicas by navigating to Shared Resources > Stacks or Swarm > Volumes (when using swarm mode) on the MKE web interface.
From a terminal using a MKE client bundle, run:
docker ps --format "{{.Names}}" | grep dtr
# The list of MSR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
Another way to determine the replica ID is to log into a MSR node using SSH and run the following:
REPLICA_ID=$(docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-') && echo $REPLICA_ID
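The cut invocation above extracts the third dash-separated field of the container name. A sketch with a hypothetical container name shows the parsing in isolation:

```shell
# Hypothetical RethinkDB container name as reported by docker ps
NAME="dtr-rethinkdb-a1640e1c15b6"
# Fields split on '-': 1=dtr, 2=rethinkdb, 3=<replica ID>
REPLICA_ID=$(echo "$NAME" | cut -f 3 -d '-')
echo "$REPLICA_ID"   # → a1640e1c15b6
```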
Back up image content¶
Since you can configure the storage backend that MSR uses to store images, the way you back up images depends on the storage backend you’re using.
If you’ve configured MSR to store images on the local file system or NFS
mount, you can back up the images by using SSH to log in to an MSR node,
and creating a tar
archive of the MSR volume.
Example backup command for local images:
sudo tar -cvf image-backup.tar /var/lib/docker/volumes/dtr-registry-<replica-id>
Expected system response:
tar: Removing leading '/' from member names
If you’re using a different storage backend, follow the best practices recommended for that system.
Back up MSR metadata¶
To create a MSR backup, load your MKE client bundle, and run the following command.
Chained commands (Linux only):
DTR_VERSION=$(docker container inspect $(docker container ps -f name=dtr-registry -q) | \
grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker ps --format '{{.Names}}' -f name=dtr-rethink | cut -f 3 -d '-'); \
read -p 'mke-url (The MKE URL including domain and port): ' UCP_URL; \
read -p 'mke-username (The MKE administrator username): ' UCP_ADMIN; \
read -sp 'mke password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
mirantis/dtr:$DTR_VERSION backup \
--ucp-username $UCP_ADMIN \
--ucp-url $UCP_URL \
--ucp-ca "$(curl https://${UCP_URL}/ca)" \
--existing-replica-id $REPLICA_ID > dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
MKE field prompts¶
- <mke-url> is the URL you use to access MKE.
- <mke-username> is the username of a MKE administrator.
- <mke-password> is the password for the indicated MKE administrator.
The above chained commands run through the following tasks:
1. Sets your MSR version and replica ID. To back up a specific replica, set the replica ID manually by modifying the --existing-replica-id flag in the backup command.
2. Prompts you for your MKE URL (domain and port) and admin username.
3. Prompts you for your MKE password without saving it to your disk or printing it on the terminal.
4. Retrieves the CA certificate for your specified MKE URL. To skip TLS verification, replace the --ucp-ca flag with --ucp-insecure-tls. Docker does not recommend this flag for production environments.
5. Includes the MSR version and a timestamp in the name of your tar backup file.
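The final redirection names the backup file from the MSR version and a timestamp; the naming can be sketched on its own (the version value here is assumed rather than detected from a running container):

```shell
DTR_VERSION=2.8.13   # assumed; normally detected from the running MSR container
# Build the timestamped backup file name used by the redirection above
BACKUP_FILE="dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar"
echo "$BACKUP_FILE"  # e.g. dtr-metadata-2.8.13-backup-20220527-10_30_00.tar
```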
Important
To ensure constant user access to MSR, by default the backup command does not pause the MSR replica that is undergoing the backup operation. As such, you can continue to make changes to the replica; however, those changes will not be saved into the backup. To circumvent this behavior, use the --offline-backup option, and be sure to remove the replica from the load balancing pool to avoid user interruption.
As the backup contains sensitive information (for example, private keys), you can encrypt it by running:
gpg --symmetric {{ metadata_backup_file }}
This prompts you for a password to encrypt the backup, then copies the backup file and encrypts the copy.
Refer to mirantis/dtr backup for more information on supported command options.
Test your backups¶
To validate that the backup was correctly performed, you can print the contents of the tar file created. The backup of the images should look like:
tar -tf {{ images_backup_file }}
dtr-backup-v2.8.13/
dtr-backup-v2.8.13/rethink/
dtr-backup-v2.8.13/rethink/layers/
And the backup of the MSR metadata should look like:
tar -tf {{ metadata_backup_file }}
# The archive should look like this
dtr-backup-v2.8.13/
dtr-backup-v2.8.13/rethink/
dtr-backup-v2.8.13/rethink/properties/
dtr-backup-v2.8.13/rethink/properties/0
If you’ve encrypted the metadata backup, you can use:
gpg -d {{ metadata_backup_file }} | tar -t
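The same check can be scripted against a stand-in archive so that the expected entries can be asserted (the file and directory names below are hypothetical, mirroring the listings above):

```shell
# Build a stand-in archive with the layout a metadata backup is expected to have
mkdir -p dtr-backup-v2.8.13/rethink/properties
touch dtr-backup-v2.8.13/rethink/properties/0
tar -cf metadata-backup.tar dtr-backup-v2.8.13
# Assert that the expected entry is present in the listing
tar -tf metadata-backup.tar | grep -q 'dtr-backup-v2.8.13/rethink/properties/0' \
  && echo "backup structure OK"
```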
You can also create a backup of a MKE cluster and restore it into a new cluster. Then restore MSR on that new cluster to confirm that everything is working as expected.
Restore from backup¶
Restore MSR data¶
If your MSR has a majority of unhealthy replicas, the only way to restore it to a working state is by restoring from an existing backup.
To restore MSR, you need to:
Stop any MSR containers that might be running
Restore the images from a backup
Restore MSR metadata from a backup
Re-fetch the vulnerability database
You need to restore MSR on the same MKE cluster where you’ve created the backup. If you restore on a different MKE cluster, all MSR resources will be owned by users that don’t exist, so you’ll not be able to manage the resources, even though they’re stored in the MSR data store.
When restoring, you need to use the same version of the mirantis/dtr image that you used when creating the backup. Other versions are not guaranteed to work.
Remove MSR containers¶
Start by removing any MSR container that is still running:
docker run -it --rm \
mirantis/dtr:2.8.13 destroy \
--ucp-insecure-tls
Restore images¶
If you had MSR configured to store images on the local filesystem, you can extract your backup:
sudo tar -xf {{ image_backup_file }} -C /var/lib/docker/volumes
If you’re using a different storage backend, follow the best practices recommended for that system.
Restore MSR metadata¶
You can restore the MSR metadata with the mirantis/dtr restore
command. This performs a fresh installation of MSR, and reconfigures it
with the configuration created during a backup.
Load your MKE client bundle, and run the following command, replacing the placeholders for the real values:
read -sp 'ucp password: ' UCP_PASSWORD;
This prompts you for the MKE password. Next, run the following to restore MSR from your backup. You can learn more about the supported flags in mirantis/dtr restore.
docker run -i --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
mirantis/dtr:2.8.13 restore \
--ucp-url <mke-url> \
--ucp-insecure-tls \
--ucp-username <mke-username> \
--ucp-node <hostname> \
--replica-id <replica-id> \
--dtr-external-url <msr-external-url> < {{ metadata_backup_file }}
Where:
- <mke-url> is the URL you use to access MKE
- <mke-username> is the username of a MKE administrator
- <hostname> is the hostname of the node where you’ve restored the images
- <replica-id> is the ID of the replica you backed up
- <msr-external-url> is the URL that clients use to access MSR
If you’re using NFS as a storage backend, also include --nfs-storage-url as part of your restore command; otherwise MSR is restored but starts using a local volume to persist your Docker images.
Warning
When running 2.6.0 to 2.6.3 (with experimental online garbage collection), there is an issue with reconfiguring and restoring MSR with --nfs-storage-url, which leads to erased tags. Make sure to back up your MSR metadata before you proceed.
To work around the --nfs-storage-url flag issue, manually create a storage volume on each MSR node. To restore MSR from an existing backup, use mirantis/dtr restore with --dtr-storage-volume and the new volume.
Re-fetch the vulnerability database¶
If you’re scanning images, you now need to download the vulnerability database.
After you successfully restore MSR, you can join new replicas the same way you would after a fresh installation.
Where to go next¶
compatibility-matrix
Customer feedback¶
You can submit feedback on MSR to Mirantis either by rating your experience or through a Jira ticket.
To rate your MSR experience:
Log in to the MSR web UI.
Click Give feedback at the bottom of the screen.
Rate your MSR experience from one to five stars, and add any additional comments in the provided field.
Click Send feedback.
To offer more detailed feedback:
Log in to the MSR web UI.
Click Give feedback at the bottom of the screen.
Click create a ticket in the 5-star review dialog to open a Jira feedback collector.
Fill in the Jira feedback collector fields and add attachments as necessary.
Click Submit.
Get Support¶
Warning
In correlation with the end of life (EOL) date for MSR 2.8.x, Mirantis stopped maintaining this documentation version as of 2022-05-27. The latest MSR product documentation is available here.
Subscriptions for MKE, MSR, and MCR provide access to prioritized support for designated contacts from your company, agency, team, or organization. Mirantis service levels for MKE, MSR, and MCR are based on your subscription level and the Cloud (or cluster) you designate in your technical support case. Our support offerings are described here, and if you do not already have a support subscription, you may inquire about one via the contact us form.
Mirantis’ primary means of interacting with customers who have technical issues with MKE, MSR, or MCR is our CloudCare Portal. Access to our CloudCare Portal requires prior authorization by your company, agency, team, or organization, and a brief email verification step. After Mirantis sets up its back end systems at the start of the support subscription, a designated administrator at your company, agency, team or organization, can designate additional contacts. If you have not already received and verified an invitation to our CloudCare Portal, contact your local designated administrator, who can add you to the list of designated contacts. Most companies, agencies, teams, and organizations have multiple designated administrators for the CloudCare Portal, and these are often the persons most closely involved with the software. If you don’t know who is a local designated administrator, or are having problems accessing the CloudCare Portal, you may also send us an email.
Once you have verified your contact details via our verification email, and changed your password as part of your first login, you and all your colleagues will have access to all of the cases and resources purchased. We recommend you retain your ‘Welcome to Mirantis’ email, because it contains information on accessing our CloudCare Portal, guidance on submitting new cases, managing your resources, and so forth. Thus, it can serve as a reference for future visits.
We encourage all customers with technical problems to use the knowledge base, which you can access on the Knowledge tab of our CloudCare Portal. We also encourage you to review the MKE, MSR, and MCR products documentation which includes release notes, solution guides, and reference architectures. These are available in several formats. We encourage use of these resources prior to filing a technical case; we may already have fixed the problem in a later release of software, or provided a solution or technical workaround to a problem experienced by other customers.
One of the features of the CloudCare Portal is the ability to associate cases with a specific MKE cluster; these are known as “Clouds” in our portal. Mirantis has pre-populated customer accounts with one or more Clouds based on your subscription(s). Customers may also create and manage their Clouds to better match how you use your subscription.
We also recommend and encourage our customers to file new cases based on a specific Cloud in your account. This is because most Clouds also have associated support entitlements, licenses, contacts, and cluster configurations. These greatly enhance Mirantis’ ability to support you in a timely manner.
You can locate the existing Clouds associated with your account by using the “Clouds” tab at the top of the portal home page. Navigate to the appropriate Cloud, and click on the Cloud’s name. Once you’ve verified that Cloud represents the correct MKE cluster and support entitlement, you can create a new case via the New Case button towards the top of the Cloud’s page.
One of the key items required for technical support of most MKE, MSR, and MCR cases is the support dump. This is a compressed archive of configuration data and log files from the cluster. There are several ways to gather a support dump, each described in the paragraphs below. After you have collected a support dump, you can upload the dump to your new technical support case by following this guidance and using the “detail” view of your case.
Use the Web UI to get a support dump¶
To get the support dump from the web UI:
Log into the MKE web UI with an administrator account.
In the top-left menu, click your username and choose Support Dump.
It may take a few minutes for the download to complete.
To submit the support dump to Mirantis Customer Support:
Click Share support bundle on the success prompt that displays when the support dump finishes downloading.
Fill in the Jira feedback dialog, and click Submit.
Use the CLI to get a support dump¶
To get the support dump from the CLI, use SSH to log into a node and run:
MKE_VERSION=$((docker container inspect ucp-proxy --format '{{index .Config.Labels "com.docker.ucp.version"}}' 2>/dev/null || echo -n 3.2.6)|tr -d '[[:space:]]')
docker container run --rm \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
--log-driver none \
mirantis/ucp:${MKE_VERSION} \
support > \
docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz
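The first line of the snippet uses a shell fallback to default the version string when the inspect call fails. The pattern in isolation, with a stand-in for the failing command and an assumed default of 3.2.6:

```shell
# Stand-in for a failing 'docker container inspect' call
detect_version() { return 1; }
# Fall back to the assumed default and strip any whitespace
MKE_VERSION=$( (detect_version || echo -n 3.2.6) | tr -d '[:space:]' )
echo "$MKE_VERSION"   # → 3.2.6
```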
Note
The support dump only contains logs for the node where you’re running the command. If your MKE is highly available, you should collect support dumps from all of the manager nodes.
To submit the support dump to Mirantis Customer Support, add the --submit option to the support command. This will send the support dump along with the following information:
Cluster ID
MKE version
MCR version
OS/architecture
Cluster size
Use PowerShell to get a support dump¶
On Windows worker nodes, run the following command to generate a local support dump:
docker container run --name windowssupport -v 'C:\ProgramData\docker\daemoncerts:C:\ProgramData\docker\daemoncerts' -v 'C:\Windows\system32\winevt\logs:C:\eventlogs:ro' mirantis/ucp-dsinfo-win:3.2.6; docker cp windowssupport:'C:\dsinfo' .; docker rm -f windowssupport
This command creates a directory named dsinfo in your current directory. If you want an archive file, you need to create it from the dsinfo directory.
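One way to produce that archive is with tar, sketched here against a stand-in dsinfo directory (tar is available on recent Windows builds as well as on Linux hosts; the log file name is hypothetical):

```shell
# Stand-in for the dsinfo directory produced by the support container
mkdir -p dsinfo
echo "sample log line" > dsinfo/example.log
# Create a compressed archive from the directory, then list its contents
tar -czf dsinfo-support.tgz dsinfo
tar -tzf dsinfo-support.tgz
```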
API Reference Updated¶
Warning
In correlation with the end of life (EOL) date for MSR 2.8.x, Mirantis stopped maintaining this documentation version as of 2022-05-27. The latest MSR product documentation is available here.
The Mirantis Secure Registry (MSR) API is a REST API, available using HTTPS, that enables programmatic access to resources managed by MSR.
CLI Reference¶
Warning
In correlation with the end of life (EOL) date for MSR 2.8.x, Mirantis stopped maintaining this documentation version as of 2022-05-27. The latest MSR product documentation is available here.
The CLI tool has commands to install, configure, and back up Mirantis Secure Registry (MSR), as well as to uninstall it. By default, the tool runs in interactive mode and prompts you for the values needed.
Additional help is available for each command with the --help option.
Usage:
docker run -it --rm mirantis/dtr \
command [command options]
If not specified, mirantis/dtr uses the latest tag by default. To work with a different version, specify it in the command. For example, docker run -it --rm mirantis/dtr:2.8.13.
mirantis/dtr backup¶
Create a backup of MSR
Usage¶
docker run -i --rm mirantis/dtr \
backup [command options] > backup.tar
Example Commands¶
Basic¶
docker run -i --rm --log-driver none mirantis/dtr:2.8.13 \
backup --ucp-ca "$(cat ca.pem)" --existing-replica-id 5eb9459a7832 > backup.tar
Advanced (with chained commands)¶
The following command has been tested on Linux:
DTR_VERSION=$(docker container inspect $(docker container ps -f \
name=dtr-registry -q) | grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-'); \
read -p 'ucp-url (The MKE URL including domain and port): ' UCP_URL; \
read -p 'ucp-username (The MKE administrator username): ' UCP_ADMIN; \
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
mirantis/dtr:$DTR_VERSION backup \
--ucp-username $UCP_ADMIN \
--ucp-url $UCP_URL \
--ucp-ca "$(curl https://${UCP_URL}/ca)" \
--existing-replica-id $REPLICA_ID > \
dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
Description¶
This command creates a tar file with the contents of the volumes used by MSR, and prints it. You can then use mirantis/dtr restore to restore the data from an existing backup.
Note
This command only creates backups of configurations, and image metadata. It does not back up users and organizations. Users and organizations can be backed up during a MKE backup.
It also does not back up Docker images stored in your registry. You should implement a separate backup policy for the Docker images stored in your registry, taking into consideration whether your MSR installation is configured to store images on the filesystem or is using a cloud provider.
This backup contains sensitive information and should be stored securely.
Using the --offline-backup flag temporarily shuts down the RethinkDB container. Take the replica out of your load balancer to avoid downtime.
Options¶
| Option | Environment variable | Description |
|---|---|---|
|  | $DEBUG | Enable debug mode for additional logs. |
|  | $MSR_REPLICA_ID | The ID of an existing MSR replica. To add, remove or modify a MSR replica, you must connect to an existing healthy replica’s database. |
|  | $MSR_EXTENDED_HELP | Display extended help text for a given command. |
|  | $MSR_IGNORE_EVENTS_TABLE | Option to prevent backup of the events table for online backups, to reduce backup size (the option is not available for offline backups). |
|  | $MSR_OFFLINE_BACKUP | Take RethinkDB down during the backup. Offline backups are guaranteed to be more consistent than online backups, but RethinkDB is unavailable while the backup runs. |
|  | $UCP_CA | Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from |
|  | $UCP_INSECURE_TLS | Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use |
|  | $UCP_PASSWORD | The MKE administrator password. |
|  | $UCP_URL | The MKE URL including domain and port. |
|  | $UCP_USERNAME | The MKE administrator username. |
mirantis/dtr destroy¶
Destroy a MSR replica’s data
Usage¶
docker run -it --rm mirantis/dtr \
destroy [command options]
Description¶
This command forcefully removes all containers and volumes associated with a MSR replica without notifying the rest of the cluster. Use this command on all replicas to uninstall MSR.
Use the remove command to gracefully scale down your MSR cluster.
Options¶
| Option | Environment variable | Description |
|---|---|---|
|  | $MSR_DESTROY_REPLICA_ID | The ID of the replica to destroy. |
|  | $UCP_URL | The MKE URL including domain and port. |
|  | $UCP_USERNAME | The MKE administrator username. |
|  | $UCP_PASSWORD | The MKE administrator password. |
|  | $DEBUG | Enable debug mode for additional logs. |
|  | $MSR_EXTENDED_HELP | Display extended help text for a given command. |
|  | $UCP_INSECURE_TLS | Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to man-in-the-middle attacks. For production deployments, use |
|  | $UCP_CA | Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from |
mirantis/dtr emergency-repair¶
Recover MSR from loss of quorum
Usage¶
docker run -it --rm mirantis/dtr \
emergency-repair [command options]
Description¶
The emergency-repair command repairs a MSR cluster that has lost quorum by reverting your cluster to a single MSR replica.
There are three actions you can take to recover an unhealthy MSR cluster:
If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join new ones for high availability.
If the majority of replicas are unhealthy, use the emergency-repair command to revert your cluster to a single MSR replica.
If you cannot repair your cluster to a single replica, you must restore from an existing backup, using the restore command.
When you run this command, a MSR replica of your choice is repaired and turned into the only replica in the whole MSR cluster. The containers for all the other MSR replicas are stopped and removed. When using the force option, the volumes for these replicas are also deleted.
After repairing the cluster, you should use the join command to add more MSR replicas for high availability.
Options¶
| Option | Environment variable | Description |
|---|---|---|
|  | $DEBUG | Enable debug mode for additional logs. |
|  | $MSR_REPLICA_ID | The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|  | $MSR_EXTENDED_HELP | Display extended help text for a given command. |
|  | $MSR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: |
|  | $PRUNE | Delete the data volumes of all unhealthy replicas. With this option, the volume of the MSR replica you’re restoring is preserved but the volumes for all other replicas are deleted. This has the same result as completely uninstalling MSR from those replicas. |
|  | $UCP_CA | Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from |
|  | $UCP_INSECURE_TLS | Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use |
|  | $UCP_PASSWORD | The MKE administrator password. |
|  | $UCP_URL | The MKE URL including domain and port. |
|  | $UCP_USERNAME | The MKE administrator username. |
|  | $YES | Answer yes to any prompts. |
|  | $MAX_WAIT | The maximum amount of time MSR allows an operation to complete within. This is frequently used to allocate more startup time to very large MSR databases. The value is a Golang duration string. For example, |
mirantis/dtr images¶
List all the images necessary to install MSR
Usage¶
docker run -it --rm mirantis/dtr \
images [command options]
Description¶
This command lists all the images necessary to install MSR.
mirantis/dtr install¶
Install Mirantis Secure Registry
Usage¶
docker run -it --rm mirantis/dtr \
install [command options]
Description¶
This command installs Mirantis Secure Registry (MSR) on a node managed by Mirantis Kubernetes Engine (MKE).
After installing MSR, you can join additional MSR replicas using mirantis/dtr join.
Example Usage¶
$ docker run -it --rm mirantis/dtr:2.8.13 install \
--ucp-node <UCP_NODE_HOSTNAME> \
--ucp-insecure-tls
Note
Use --ucp-ca "$(cat ca.pem)" instead of --ucp-insecure-tls for a production deployment.
Options¶
| Option | Environment variable | Description |
|---|---|---|
|  | $ASYNC_NFS | Use async NFS volume options on the replica specified in the |
|  | $CLIENT_CA | Specify root CA certificates for client authentication with |
|  | $CUSTOM_CA_CERTS_BUNDLE | Provide a file containing additional CA certificates for MSR service containers to use when verifying TLS server certificates. |
|  | $DEBUG | Enable debug mode for additional logs. |
|  | $MSR_CA | Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with |
|  | $MSR_CERT | Use a PEM-encoded TLS certificate for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with |
|  | $MSR_EXTERNAL_URL | URL of the host or load balancer clients use to reach MSR. When you use this flag, users are redirected to MKE for logging in. Once authenticated, they are redirected to the URL you specify in this flag. If you don’t use this flag, MSR is deployed without single sign-on with MKE. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your MSR system settings. Format |
|  | $MSR_KEY | Use a PEM-encoded TLS private key for MSR. By default MSR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with |
|  | $MSR_STORAGE_VOLUME | Customize the volume to store Docker images. By default MSR creates a volume to store the Docker images in the local filesystem of the node where MSR is running, without high availability. Use this flag to specify a full path or volume name for MSR to store images. For high availability, make sure all MSR replicas can read and write data on this volume. If you’re using NFS, use |
|  | $ENABLE_CLIENT_CERT_AUTH | Enables TLS client certificate authentication; use |
|  | $MSR_PPROF | Enables pprof profiling of the server. Use |
|  | $MSR_EXTENDED_HELP | Display extended help text for a given command. |
|  | $MSR_HTTP_PROXY | The HTTP proxy used for outgoing requests. |
|  | $MSR_HTTPS_PROXY | The HTTPS proxy used for outgoing requests. |
|  | $LOG_HOST | The endpoint of the syslog system to send logs to. Use this flag if you set |
|  | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are debug, info, warn, error, or fatal. |
|  | $LOG_PROTOCOL | The protocol for sending logs. Default is internal. By default, MSR internal components log information using the logger specified in the Docker daemon in the node where the MSR replica is deployed. Use this option to send MSR logs to an external syslog system. The supported values are |
|  | $NFS_OPTIONS | Pass in NFS volume options verbatim for the replica specified in the |
|  | $NFS_STORAGE_URL | Use NFS to store Docker images following this format: |
|  | $MSR_NO_PROXY | List of domains the proxy should not be used for. When using |
|  | $MSR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: |
|  | $REPLICA_HTTP_PORT | The public HTTP port for the MSR replica. Default is |
|  | $REPLICA_HTTPS_PORT | The public HTTPS port for the MSR replica. Default is |
|  | $MSR_INSTALL_REPLICA_ID | Assign a 12-character hexadecimal ID to the MSR replica. Random by default. |
|  | $RETHINKDB_CACHE_MB | The maximum amount of space in MB for the RethinkDB in-memory cache used by the given replica. Default is auto. Auto is |
|  | $UCP_CA | Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA certificate from |
|  | $UCP_INSECURE_TLS | Disable TLS verification for MKE. The installation uses TLS but always trusts the TLS certificate used by MKE, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use |
|  | $UCP_NODE | The hostname of the MKE node to use to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls on a MKE manager node. Note that MKE and MSR must not be installed on the same node; instead, install MSR on worker nodes that will be managed by MKE. |
|  | $UCP_PASSWORD | The MKE administrator password. |
|  | $UCP_URL | The MKE URL including domain and port. |
|  | $UCP_USERNAME | The MKE administrator username. |
mirantis/dtr join¶
Add a new replica to an existing MSR cluster. Use SSH to log into any node that is already part of MKE.
Usage¶
docker run -it --rm \
mirantis/dtr:2.8.13 join \
--ucp-node <mke-node-name> \
--ucp-insecure-tls
Description¶
This command creates a replica of an existing MSR on a node managed by Mirantis Kubernetes Engine (MKE).
To set up MSR for high availability, create 3, 5, or 7 replicas of MSR.
Options¶
Option |
Environment variable |
Description |
---|---|---|
|
$DEBUG |
Enable debug mode for additional logs. |
|
$MSR_REPLICA_ID |
The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|
$MSR_EXTENDED_HELP |
Display extended help text for a given command. |
|
$REPLICA_HTTP_PORT |
The public HTTP port for the MSR replica. Default is |
|
$REPLICA_HTTPS_PORT |
The public HTTPS port for the MSR replica. Default is |
|
$MSR_INSTALL_REPLICA_ID |
Assign a 12-character hexadecimal ID to the MSR replica. Random by default. |
|
$RETHINKDB_CACHE_MB |
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto. Auto is |
|
$MSR_SKIP_NETWORK_TEST |
Don’t test if overlay networks are working correctly between MKE nodes. For high-availability, MSR creates an overlay network between MKE nodes and tests that it is working when joining replicas. Don’t use this option for production deployments. |
|
$UCP_CA |
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from |
|
$UCP_INSECURE_TLS |
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to MITM
(man-in-the-middle) attacks. For production deployments, use |
|
$UCP_NODE |
The hostname of the MKE node to use to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls on an MKE manager node. Note that MKE and MSR must not be installed on the same node; instead, install MSR on worker nodes that are managed by MKE. |
|
$UCP_PASSWORD |
The MKE administrator password. |
|
$UCP_URL |
The MKE URL including domain and port. |
|
$UCP_USERNAME |
The MKE administrator username. |
|
$MSR_UNSAFE_JOIN |
Join a new replica even if the cluster is unhealthy. Joining replicas to an unhealthy MSR cluster leads to split-brain scenarios and data loss. Don’t use this option for production deployments. |
|
$MAX_WAIT |
The maximum amount of time MSR allows an operation to complete within.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, |
mirantis/dtr reconfigure¶
Change MSR configurations.
Usage¶
docker run -it --rm mirantis/dtr reconfigure [command options]
Description¶
This command changes MSR configuration settings.
MSR restarts for the new configuration to take effect. To avoid downtime, configure MSR for high availability.
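As a concrete sketch of a reconfiguration, the following previews a run that changes the replica HTTPS port. The port value, MKE URL, username, and 2.8.13 image tag are illustrative assumptions; the flag spellings mirror the environment variables in the options table below and should be verified against the command's built-in help. The command is echoed as a preview rather than executed:

```shell
# Preview of a reconfigure invocation; all values are placeholders.
RECONF_CMD="docker run -it --rm mirantis/dtr:2.8.13 reconfigure --replica-https-port 4443 --ucp-url https://mke.example.com --ucp-username admin"
echo "${RECONF_CMD}"
```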
Options¶
Option |
Environment variable |
Description |
---|---|---|
|
$ASYNC_NFS |
Use async NFS volume options on the replica specified in the
|
|
$CLIENT_CA |
Specify root CA certificates for client authentication with
|
|
$CUSTOM_CA_CERTS_ BUNDLE |
Specify additional CA certificates for MSR service containers to use
when verifying TLS server certificates with
|
|
$DEBUG |
Enable debug mode for additional logs of this bootstrap container (the
log level of downstream MSR containers can be set with |
|
$MSR_CA |
Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own root
CA public certificate with |
|
$MSR_CERT |
Use a PEM-encoded TLS certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own
public key certificate with |
|
$MSR_EXTERNAL_URL |
URL of the host or load balancer clients use to reach MSR. When you use
this flag, users are redirected to MKE for logging in. Once
authenticated they are redirected to the url you specify in this flag.
If you don’t use this flag, MSR is deployed without single sign-on with
MKE. Users and teams are shared but users login separately into the two
applications. You can enable and disable single sign-on in the MSR
settings. Format |
|
$MSR_KEY |
Use a PEM-encoded TLS private key for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
private key with |
|
$MSR_STORAGE_ VOLUME |
Customize the volume to store Docker images. By default MSR creates a
volume to store the Docker images in the local filesystem of the node
where MSR is running, without high-availability. Use this flag to
specify a full path or volume name for MSR to store images. For
high-availability, make sure all MSR replicas can read and write data on
this volume. If you’re using NFS, use |
|
$ENABLE_CLIENT_CERT_ AUTH |
Enables TLS client certificate authentication; use
|
|
$MSR_PPROF |
Enables pprof profiling of the server. Use |
|
$MSR_REPLICA_ID |
The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|
$FORCE_RECREATE_NFS_ VOLUME |
Force MSR to recreate NFS volumes on the replica specified by
|
|
$MSR_EXTENDED_HELP |
Display extended help text for a given command. |
|
$MSR_HTTP_PROXY |
The HTTP proxy used for outgoing requests. |
|
$MSR_HTTPS_PROXY |
The HTTPS proxy used for outgoing requests. |
|
$LOG_HOST |
The endpoint of the syslog system to send logs to. Use
this flag if you set |
|
$LOG_LEVEL |
Log level for all container logs when logging to syslog. Default: INFO.
The supported log levels are |
|
$LOG_PROTOCOL |
The protocol for sending logs. Default is internal. By default, MSR
internal components log information using the logger specified in the
Docker daemon in the node where the MSR replica is deployed. Use this
option to send MSR logs to an external syslog system. The supported
values are |
|
$MAX_WAIT |
The maximum amount of time MSR allows an operation to complete within.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, |
|
$NFS_OPTIONS |
Pass in NFS volume options verbatim for the replica specified in the
|
|
$NFS_STORAGE_URL |
Set the URL for the NFS storage back end. docker run -it --rm mirantis/dtr:2.8.13 reconfigure --nfs-storage-url nfs://<IP-of-NFS-server>/path/to/mountdir
To reconfigure MSR to stop using NFS, leave the option empty: docker run -it --rm mirantis/dtr:2.8.13 reconfigure --nfs-storage-url ""
Refer to Reconfigure MSR to use NFS for more details. |
|
$MSR_NO_PROXY |
List of domains the proxy should not be used for. When using
|
|
$REINITIALIZE_STORAGE |
Set the flag when you have changed storage back ends but have not moved the contents of the old storage back end to the new one. Erases all tags in the registry. |
|
$REPLICA_HTTP_PORT |
The public HTTP port for the MSR replica. Default is |
|
$REPLICA_HTTPS_PORT |
The public HTTPS port for the MSR replica. Default is |
|
$RETHINKDB_CACHE_ MB |
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto. Auto is |
|
$STORAGE_MIGRATED |
A flag added in 2.6.4 which lets you indicate the migration status of your storage data. Specify this flag if you are migrating to a new storage back end and have already moved all contents from your old back end to your new one. If not specified, MSR will assume the new back end is empty during a back end storage switch, and consequently destroy your existing tags and related image metadata. |
|
$UCP_CA |
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from |
|
$UCP_INSECURE_TLS |
Disable TLS verification for MKE. |
|
$UCP_PASSWORD |
The MKE administrator password. |
|
$UCP_URL |
The MKE URL including domain and port. |
|
$UCP_USERNAME |
The MKE administrator username. |
mirantis/dtr remove¶
Remove an MSR replica from a cluster
Usage¶
docker run -it --rm mirantis/dtr \
remove [command options]
Description¶
This command gracefully scales down your MSR cluster by removing exactly one replica. All other replicas must be healthy and will remain healthy after this operation.
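A minimal sketch of such a removal follows. The 12-character replica ID and MKE details are placeholders, and the --replica-ids flag spelling (mirroring $MSR_REMOVE_REPLICA_IDS in the table below) should be verified against the command's built-in help. The command is echoed as a preview rather than executed:

```shell
# Placeholder replica ID and MKE details; preview only.
REMOVE_CMD="docker run -it --rm mirantis/dtr:2.8.13 remove --replica-ids 5eb9459a7832 --ucp-url https://mke.example.com --ucp-username admin"
echo "${REMOVE_CMD}"
```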
Options¶
Option |
Environment variable |
Description |
---|---|---|
|
$DEBUG |
Enable debug mode for additional logs. |
|
$MSR_REPLICA_ID |
The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|
$DTR_FORCE_REMOVE_REPLICA |
Ignore pre-checks when removing a replica. |
|
$MSR_EXTENDED_HELP |
Display extended help text for a given command. |
|
$MSR_REMOVE_REPLICA_ID |
DEPRECATED Alias for |
|
$MSR_REMOVE_REPLICA_IDS |
A comma-separated list of IDs of replicas to remove from the cluster. |
|
$UCP_CA |
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from |
|
$UCP_INSECURE_TLS |
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to MITM
(man-in-the-middle) attacks. For production deployments, use |
|
$UCP_PASSWORD |
The MKE administrator password. |
|
$UCP_URL |
The MKE URL including domain and port. |
|
$UCP_USERNAME |
The MKE administrator username. |
mirantis/dtr restore¶
Install and restore MSR from an existing backup
Usage¶
docker run -i --rm mirantis/dtr \
    restore \
    --replica-id <replica-id> \
    [command options] < backup.tar
Description¶
The restore command performs a fresh installation of MSR, and reconfigures it with configuration data from a tar file generated by mirantis/dtr backup. If you are restoring MSR after a failure, make sure you have fully destroyed the old MSR installation first.
There are three actions you can take to recover an unhealthy MSR cluster:
If the majority of replicas are healthy, remove the unhealthy nodes from the cluster, and join new nodes for high availability.
If the majority of replicas are unhealthy, use the emergency-repair command to revert your cluster to a single MSR replica.
If you cannot repair your cluster to a single replica, you must restore from an existing backup, using the
restore
command.
This command does not restore Docker images. You should implement a separate restore procedure for the Docker images stored in your registry, taking into consideration whether your MSR installation is configured to store images on the local filesystem or using a cloud provider.
After restoring the cluster, use the join command to add more MSR replicas for high availability.
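Putting the pieces together, a restore invocation feeds the backup tar file to the container on standard input (hence -i without -t in the usage above). The replica ID, MKE URL, backup filename, and image tag below are placeholders, and the command is echoed as a preview rather than executed:

```shell
# Placeholders throughout; the backup is read from stdin.
RESTORE_CMD='docker run -i --rm mirantis/dtr:2.8.13 restore --replica-id 5eb9459a7832 --ucp-url https://mke.example.com < backup.tar'
echo "${RESTORE_CMD}"
```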
Options¶
Option |
Environment variable |
Description |
---|---|---|
|
$ASYNC_NFS |
Use async NFS volume options on the replica specified by
|
|
$CLIENT_CA |
PEM-encoded TLS root CA certificates for client certificate authentication. |
|
$CUSTOM_CA_CERTS_BUNDLE |
Provide a file containing additional CA certificates for MSR service containers to use when verifying TLS server certificates. |
|
$DEBUG |
Enable debug mode for additional logs. |
|
$MSR_REPLICA_ID |
The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|
$MSR_CA |
Use a PEM-encoded TLS CA certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
CA certificate with |
|
$MSR_CERT |
Use a PEM-encoded TLS certificate for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
certificate with |
|
$MSR_EXTERNAL_URL |
URL of the host or load balancer clients use to reach MSR. When you use
this flag, users are redirected to MKE for logging in. Once
authenticated they are redirected to the URL you specify in this flag.
If you don’t use this flag, MSR is deployed without single sign-on with
MKE. Users and teams are shared but users log in separately into the two
applications. You can enable and disable single sign-on within your MSR
system settings. Format |
|
$MSR_KEY |
Use a PEM-encoded TLS private key for MSR. By default MSR generates a
self-signed TLS certificate during deployment. You can use your own TLS
private key with |
|
$MSR_STORAGE_VOLUME |
Mandatory flag to allow MSR to fall back to your configured storage setting at the time of backup. If you have previously configured MSR to use a full path or volume name for storage, specify this flag to use the same setting on restore. See mirantis/dtr install and mirantis/dtr reconfigure for usage details. |
|
$MSR_DEFAULT_STORAGE |
Mandatory flag to allow MSR to fall back to your configured storage back end at the time of backup. If cloud storage was configured, then the default storage on restore is cloud storage. Otherwise, local storage is used. When running 2.6.0 to 2.6.3 (with experimental online garbage collection), this flag must be specified in order to keep your MSR metadata. If you encounter an issue with lost tags, see Restore to Cloud Storage for Docker’s recommended recovery strategy. Upgrade to 2.6.4 and follow Best practice for data migration in 2.6.4 when switching storage back ends. |
|
$ENABLE_CLIENT_CERT_AUTH |
Enables TLS client certificate authentication; use
|
|
$MSR_PPROF |
Enables pprof profiling of the server. Use |
|
$MSR_EXTENDED_HELP |
Display extended help text for a given command. |
|
$MSR_HTTP_PROXY |
The HTTP proxy used for outgoing requests. |
|
$MSR_HTTPS_PROXY |
The HTTPS proxy used for outgoing requests. |
|
$LOG_HOST |
The endpoint of the syslog system to send logs to. Use this
flag if you set |
|
$LOG_LEVEL |
Log level for all container logs when logging to syslog. Default:
|
|
$LOG_PROTOCOL |
The protocol for sending logs. Default is internal. By default, MSR
internal components log information using the logger specified in the
Docker daemon on the node where the MSR replica is deployed. Use this
option to send MSR logs to an external syslog system. The supported
values are tcp, udp, and internal. Internal is the default option,
stopping MSR from sending logs to an external system. Use this flag with
|
|
$MAX_WAIT |
The maximum amount of time MSR allows an operation to complete within.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, |
|
$NFS_OPTIONS |
Pass in NFS volume options verbatim for the replica specified by
|
|
$NFS_STORAGE_URL |
Mandatory flag to allow MSR to fall back to your configured storage
setting at the time of backup. When running DTR 2.6.0-2.6.3 (with
experimental online garbage collection), there is an issue with
reconfiguring and restoring MSR with |
|
$MSR_NO_PROXY |
List of domains the proxy should not be used for. When using
|
|
$REPLICA_HTTP_PORT |
The public HTTP port for the MSR replica. Default is |
|
$REPLICA_HTTPS_PORT |
The public HTTPS port for the MSR replica. Default is |
|
$MSR_INSTALL_REPLICA_ID |
Assign a 12-character hexadecimal ID to the MSR replica. Mandatory. |
|
$RETHINKDB_CACHE_MB |
The maximum amount of space in MB for RethinkDB in-memory cache used by
the given replica. Default is auto. Auto is |
|
$UCP_CA |
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from |
|
$UCP_INSECURE_TLS |
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to MITM
(man-in-the-middle) attacks. For production deployments, use |
|
$UCP_NODE |
The hostname of the MKE node to use to deploy MSR. Random by default. You can find the hostnames of the nodes in the cluster in the MKE web interface, or by running docker node ls on an MKE manager node. Note that MKE and MSR must not be installed on the same node; instead, install MSR on worker nodes that are managed by MKE. |
|
$UCP_PASSWORD |
The MKE administrator password. |
|
$UCP_URL |
The MKE URL including domain and port. |
|
$UCP_USERNAME |
The MKE administrator username. |
mirantis/dtr upgrade¶
Upgrade a DTR 2.7.x cluster to DTR 2.8.x.
Usage¶
docker run -it --rm mirantis/dtr \
upgrade [command options]
Description¶
The dtr upgrade command upgrades DTR 2.7.x to the current version (2.8.x) of the image.
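A sketch of a typical upgrade invocation follows. The MKE details are placeholders, "10m" is an illustrative Go duration string for the --max-wait option listed below, and the command is echoed as a preview rather than executed:

```shell
# Placeholders for MKE details; "10m" is a sample Go duration string.
UPGRADE_CMD="docker run -it --rm mirantis/dtr:2.8.13 upgrade --ucp-url https://mke.example.com --ucp-username admin --max-wait 10m"
echo "${UPGRADE_CMD}"
```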
Options¶
Option |
Environment variable |
Description |
---|---|---|
|
$DEBUG |
Enable debug mode for additional logs. |
|
$MSR_REPLICA_ID |
The ID of an existing MSR replica. To add, remove or modify MSR, you must connect to an existing healthy replica’s database. |
|
$MSR_EXTENDED_HELP |
Display extended help text for a given command. |
|
$UCP_CA |
Use a PEM-encoded TLS CA certificate for MKE. Download the MKE TLS CA
certificate from |
|
$UCP_INSECURE_TLS |
Disable TLS verification for MKE. The installation uses TLS but always
trusts the TLS certificate used by MKE, which can lead to MITM
(man-in-the-middle) attacks. For production deployments, use |
|
$UCP_PASSWORD |
The MKE administrator password. |
|
$UCP_URL |
The MKE URL including domain and port. |
|
$UCP_USERNAME |
The MKE administrator username. |
|
$MAX_WAIT |
The maximum amount of time MSR allows an operation to complete within.
This is frequently used to allocate more startup time to very large MSR
databases. The value is a Golang duration string. For example, |
Release Notes¶
Warning
In correlation with the end of life (EOL) date for MSR 2.8.x, Mirantis stopped maintaining this documentation version as of 2022-05-27. The latest MSR product documentation is available here.
This document describes the latest changes, enhancements, known issues, and fixes for Mirantis Secure Registry (MSR) for versions 2.8.x.
Caution
In developing MSR 2.8.x, Mirantis has been transitioning from legacy Docker Hub-issued licenses to JWT licenses, as detailed below:
Version 2.8.0: Docker Hub licenses only
Versions 2.8.1 to 2.8.8: Docker Hub licenses and JWT licenses
Versions 2.8.9 and later: JWT licenses only
2.8.13¶
Important
MSR 2.8.13 is the final patch release for MSR 2.8.x as that version of the software reached end of life (EOL) status on 2022-05-27. In correlation, Mirantis has halted maintenance of the MSR 2.8.x documentation set.
(2022-06-22)
Bug fixes¶
(FIELD-4718) Fixed a pagination issue in the MSR API GET /api/v0/imagescan/scansummary/cve/{cve} endpoint. The fix requires that you upgrade MSR to 2.8.13 and that you take certain manual steps using the database CLI (contact Mirantis Support for the steps). Note that the manual CLI steps are not required for fresh MSR installations.
(ENGDTR-3184) Fixed an issue wherein Ubuntu 22.04 based images could not be successfully scanned for vulnerabilities.
Security¶
Resolved CVEs, as detailed:
CVE |
Status |
Description |
---|---|---|
Resolved |
Prior to 1.2.12, zlib allows memory corruption during deflation when the input has many distant matches. |
|
Resolved |
BusyBox up through version 1.35.0 allows remote attackers to execute arbitrary code when netstat is used to print the value of a DNS PTR record to a VT-compatible terminal. Alternatively, attackers can choose to change the colors of the terminal. |
|
Resolved |
Prior to 1.9.10, GORM permits SQL injection through incomplete parentheses. Note that misusing GORM by passing untrusted user input when GORM expects trusted SQL fragments is not a vulnerability in GORM but in the application. |
|
Resolved/False Positive |
Prior to 4.0.0-preview1, jwt-go allows attackers to bypass intended
access restrictions in situations with |
|
Resolved |
A bug was found in containerd prior to versions 1.6.1, 1.5.10, and 1.14.12 in which containers launched through containerd’s CRI implementation on Linux with a specially-crafted image configuration could gain access to read-only copies of arbitrary files and directories on the host. |
|
Not Vulnerable |
The CVE is present in the JobRunner image; however, while it is a required dependency of a component running in JobRunner, its functionality is never exercised. In OpenLDAP 2.x prior to 2.5.12 and in 2.6.x prior to 2.6.2, a SQL injection vulnerability exists in the experimental back-sql backend to slapd, via a SQL statement within an LDAP query. This can occur during an LDAP search operation when the search filter is processed, due to a lack of proper escaping. |
|
False Positive |
Though Alpine Linux contains the affected OpenSSL version, the
The |
|
False Positive |
All CVEs reported in NumPy are false positives, the result of being picked up from cache but for a version not in use with MSR. NumPy 1.16.0 and earlier use the pickle Python module in an unsafe
manner that allows remote attackers to execute arbitrary code via a
crafted serialized object, as demonstrated by a |
All CVEs reported in OpenJDK 1.8.0u302 have been resolved by removal of the component.
All CVEs reported in NumPy are false positives, the result of being picked up from cache but for a version not in use with MSR.
Upgraded Synopsys scanner to version 2022.3.1.
2.8.12¶
(2022-04-18)
What’s new¶
Improvements have been made to clarify the presentation of vulnerability scan summary counts in the MSR web UI, for Critical, High, Medium, and Low in both the Vulnerabilities column and in the View Details view.
Note
Although ENGDTR-3008 was reported as a known issue for MSR 2.8.11, the reported counts were at all times reliable and factually correct.
(ENGDTR-3008)
Security¶
Upgraded Cyrus SASL to version 2.1.28-r0 in Alpine 3.15.2 to resolve CVE-2022-24407.
Resolved the following golang runtime vulnerabilities:
CVE-2021-38297, CVE-2019-14809, CVE-2019-11888, CVE-2017-15041, CVE-2022-23806, CVE-2022-24921, CVE-2022-23773, CVE-2022-23772, CVE-2021-44716, CVE-2021-41772, CVE-2021-41771, CVE-2021-39293, CVE-2021-33198, CVE-2021-33196, CVE-2021-33194, CVE-2021-27918, CVE-2021-3115, CVE-2020-28367, CVE-2020-28366, CVE-2020-28362, CVE-2020-16845, and CVE-2021-33195.
Vulnerability scans may reveal the following CVEs, though there is no impact on MSR:
CVE-2019-15562, CVE-2022-2364, CVE-2022-0778, CVE-2019-16884, CVE-2018-7187, CVE-2019-6486, CVE-2018-16874, CVE-2018-16873, CVE-2022-25365, CVE-2021-3162, CVE-2019-9634, CVE-2019-3466, CVE-2018-6574, CVE-2021-36690, CVE-2021-29923, CVE-2019-16276, and CVE-2018-16875.
2.8.11¶
(2022-02-10)
What’s new¶
A Synopsys scanner update, to release 2021.12.0.
With the 2021.12.0 release, Synopsys scanner can now self-scan all MSR components and run other test cases without any regressions.
(ENGDTR-2816)
Bug fixes¶
Fixed an issue wherein, on logout from the MSR web UI, users sometimes received the warning Sorry, we don't recognize this path (FIELD-4339).
Fixed an issue in the MSR web UI wherein, if a user changing their password entered an incorrect password into the Current password field and clicked Save, the screen would go blank (ENGDTR-2785).
Security¶
Resolved the following OpenSSL vulnerability: CVE-2021-3712.
Resolved the following Django vulnerability: CVE-2021-44420.
Resolved the following libexpat vulnerabilities: CVE-2022-23990 and CVE-2022-23852.
Resolved the following golang runtime vulnerabilities: CVE-2021-38297, CVE-2021-44716, CVE-2021-41772, CVE-2021-41771, CVE-2021-39293, CVE-2021-33198, CVE-2021-33196, CVE-2021-33195, CVE-2021-34558, and CVE-2021-33197.
Resolved the following postgresql vulnerabilities: CVE-2021-32027, CVE-2021-32029, and CVE-2021-32028.
Vulnerability scans may reveal the following CVEs, though there is no impact on MSR:
CVE-2022-23990, CVE-2022-23852, CVE-2021-38297, CVE-2021-3711, CVE-2019-14809, CVE-2019-11888, CVE-2017-15041, CVE-2021-32027, CVE-2018-7187, CVE-2021-30465, CVE-2019-6486, CVE-2018-16874, CVE-2018-16873, CVE-2021-3162, CVE-2019-9634, CVE-2018-6574, CVE-2021-44716, CVE-2021-41772, CVE-2021-41771, CVE-2021-39293, CVE-2021-33198, CVE-2021-33196, CVE-2021-33194, CVE-2021-29923, CVE-2021-27918, CVE-2021-3115, CVE-2020-28367, CVE-2020-28366, CVE-2020-28362, CVE-2020-26160, CVE-2020-16845, CVE-2019-16884, CVE-2019-16276, CVE-2018-16875, CVE-2021-21284, CVE-2021-36976, CVE-2021-3114, CVE-2020-24553, CVE-2021-31525, CVE-2020-15586, CVE-2017-15042, CVE-2017-8932, CVE-2021-3572, CVE-2020-29510, CVE-2022-21365, CVE-2022-21360, CVE-2022-21349, CVE-2022-21341, CVE-2022-21340, CVE-2022-21248, CVE-2021-43784, CVE-2020-14039, CVE-2020-27534.
Known issue¶
Vulnerability scan miscalculation in MSR web UI
The summary counts that MSR displays for Critical, High, Medium, and Low in both the Vulnerabilities column and in the View Details view are unreliable and may be incorrect when displaying non-zero values. The Components tab displays correct values for each component.
Workaround:
Navigate to the Components tab, review the individual non-green components, and separately calculate the total of the numbers that present as Critical, High, Medium, and Low.
(ENGDTR-3008)
2.8.10¶
(2021-11-09)
What’s new¶
Added a new sub-command, rotate-certificates, to the rethinkops binary that exists inside the dtr-rethinkdb image. This command allows you to rotate the certificates that provide intracluster communication between the MSR system containers and RethinkDB. To rotate certificates, docker exec into the dtr-rethinkdb container and use the command below (you can provide the --debug flag for more information):
REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
$ docker exec -e DTR_REPLICA_ID=$REPLICA_ID -it $(docker ps -q --filter name=dtr-rethinkdb)
# rethinkops rotate-certificates --replica-id $DTR_REPLICA_ID --debug
(FIELD-4044)
Bug fixes¶
Fixed an issue wherein the webhook could fail to trigger, thus issuing the “argument list too long” error (FIELD-3424).
Fixed an issue wherein the MSR image scan CSV report was missing the CVSS3 score and only had the CVSS2 score (FIELD-3946).
Fixed issues wherein the list of org repositories was limited to ten and was wrapping incorrectly (FIELD-3987).
Fixed an issue with the MSR web UI wherein performing a search from the left-side navigation panel produced search results that displayed on top of the background text (FIELD-4268).
Made improvements to MSR administrative actions to circumvent failures that can result from stale containers (FIELD-4270) (FIELD-4291).
Fixed an image signing regression issue that applies to MSR 2.8.9 (FIELD-4320).
Security¶
Resolved the following OpenSSL vulnerabilities: CVE-2021-3711 and CVE-2021-3712 (FIELD-4387).
Resolved the following libxml2 vulnerability: CVE-2021-3541 (FIELD-4394).
Resolved the following urllib3 vulnerabilities: CVE-2021-33503 and CVE-2021-28363 (FIELD-4399).
Resolved the following curl vulnerabilities: CVE-2021-22945, CVE-2021-22946, CVE-2021-22926, CVE-2021-22922, CVE-2021-22947, CVE-2021-22925, and CVE-2021-22923 (FIELD-4401).
Known issue¶
The image signing functionality in MSR 2.8.9 is incompatible with other MSR versions.
Workaround:
For images signed by MSR 2.8.9 it is necessary to delete trust data and re-sign the images using MSR 2.8.10 (FIELD-4320).
2.8.9¶
(2021-08-19)
What’s new¶
To help administrators troubleshoot authorization issues, MSR now includes the name and ID of the requesting user in log messages from the dtr-garant container when handling /auth/token API requests (FIELD-3509).
MSR now includes support for the GET /v2/_catalog endpoint from the Docker Registry HTTP API V2. Authenticated MSR users can use this API to list all the repositories in the registry that they have permission to view (ENGDTR-2667).
MSR now accepts only JWT licenses. To upgrade MSR, customers using a Docker Hub-issued license must first replace it with the new license version (ENGDTR-2631). To request a JWT license, contact support@mirantis.com.
The following MSR commands now include a --max-wait option: emergency-repair, join, reconfigure, restore, and upgrade. With this new option you can set the maximum amount of time that MSR allows for operations to complete. The --max-wait option is especially useful when allocating additional startup time for very large MSR databases (FIELD-4070).
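The GET /v2/_catalog endpoint mentioned above follows the standard Docker Registry HTTP API V2, so it can be queried with any HTTP client. A minimal sketch, assuming a placeholder MSR hostname and credentials (the command is echoed as a preview rather than executed):

```shell
# Hostname and credentials are placeholders; the path is the standard
# Registry V2 catalog endpoint.
MSR_HOST="msr.example.com"
CATALOG_URL="https://${MSR_HOST}/v2/_catalog"
echo "curl -su <username>:<password> ${CATALOG_URL}"
```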
Bug fixes¶
Fixed an issue wherein the webhook client timeout settings caused reconnections to wait too long (FIELD-4083).
Fixed an issue wherein connecting to MSR with IPv6 failed after an MCR upgrade to version 20.10.0 or later (FIELD-4144).
Security¶
Resolved the following Django vulnerabilities: CVE-2021-35042, CVE-2021-33571, and CVE-2021-33203 (ENGDTR-2707).
Resolved the following curl vulnerabilities: CVE-2021-22901, CVE-2021-22897, and CVE-2021-22898 (ENGDTR-2708).
Deprecation notes¶
In correlation with the End of Life date for MKE 3.2.x and MSR 2.7.x, Mirantis stopped maintaining the associated documentation set on 2021-07-21.
Known issue¶
MSR administrative actions such as backup, restore, and reconfigure can continuously fail with the invalid session token error shortly after entering phase 2. The error resembles the following example:
FATA[0000] Failed to get new conv client: Docker version check failed: \
Failed to get docker version: Error response from daemon: \
{"message":"invalid session token"}
Workaround:
Before running any bootstrap command, source a client bundle in order to locate the existing dtr-phase2 container.
Remove the existing dtr-phase2 container.
Refer to MSR Bootstrap Commands (Restore, Backup, Reconfigure) Fail with “invalid session token” in the Mirantis knowledge base for more information.
(FIELD-4270)
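The container cleanup in the workaround above can be sketched as a single command; the filter name comes from the workaround text, and the command is echoed as a preview rather than executed:

```shell
# Locate any lingering dtr-phase2 container and force-remove it; preview only.
CLEANUP_CMD='docker rm -f $(docker ps -aq --filter name=dtr-phase2)'
echo "${CLEANUP_CMD}"
```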
2.8.8¶
(2021-06-29)
What’s new¶
MSR now tags all analytics reports with the user license ID when telemetry is enabled. It does not, though, collect any further identifying information. In line with this change, the MSR settings API no longer contains anonymizeAnalytics, and the MSR web UI no longer includes the Make data anonymous toggle (ENGDTR-2607).
MSR now boosts container security by running the scanner process in a sandbox with restricted permissions. In the event the scanner process is compromised, it does not have access to the Rethink database private keys or any portion of the file system that it does not require access to (ENGDTR-1915).
Updated Django to version 3.1.10, resolving the following CVEs: CVE-2021-31542 and CVE-2021-32052 (ENGDTR-2651).
Bug fixes¶
Fixed an issue in the MSR web UI wherein the Scanning enabled setting failed to display correctly after changing it, navigating away from, and back to the Security tab (FIELD-3541).
Fixed an issue in the MSR web UI wherein after clicking Sync Database Now, the In Progress icon failed to disappear at the correct time and the scanning information (including the database version) failed to update without a browser refresh (FIELD-3541).
Fixed an issue in the MSR web UI wherein the value of Scanning timeout limit failed to display correctly after changing it, navigating away from, and back to the Security tab (FIELD-3541).
Fixed an issue in the MSR web UI wherein the search function was unable to find repositories in an organization (FIELD-3519).
Fixed an issue wherein one or more RethinkDB servers in an unavailable state caused dtr emergency-repair to hang indefinitely (ENGDTR-2640).
Security¶
Vulnerability scans no longer reveal a false positive for CVE-2020-17541 as of CVE database version 1388, published 2021-06-24 at 1:04 PM EST (ENGDTR-2635).
Vulnerability scans no longer reveal a false positive for CVE-2021-23017 as of CVE database version 1437, published 2021-06-27 at 5:11 PM EST (ENGDTR-2635).
Vulnerability scans may reveal a false positive for the following CVE: CVE-2021-23017 (ENGDTR-2635).
Vulnerability scans may reveal the following CVE, though MSR is not impacted: CVE-2021-29921 (ENGDTR-2635).
Resolved the following CVEs in MSR containers:
libxml: CVE-2021-3517, CVE-2021-3537, and CVE-2021-3518
postgresql: CVE-2021-32027
(ENGDTR-2635)
2.8.7¶
(2021-05-17)
What’s new¶
MSR now applies a 56-character limit on “namespace/repository” length at creation, and thus eliminates a situation wherein attempts to push tags to repos with too-long names return a 500 Internal Server Error (ENGDTR-2525).
MSR now alerts administrators if the storage back end contents do not match the metadata, or if a new install of MSR uses a storage back end that contains data from a different MSR installation (ENGDTR-2501).
The MSR UI now includes a horizontal scrollbar (in addition to the existing vertical scrollbar), thus allowing users to better adjust the window dimensions.
The enableManifestLists setting is no longer needed and has been removed due to breaking Docker Content Trust (FIELD-2642, FIELD-2644).
Updated the MSR web UI Last updated at trigger for the promotion and mirror policies to include the option to specify before a particular time (after already exists) (FIELD-2180).
The mirantis/dtr --help documentation no longer recommends using the --rm option when invoking commands. Leaving it out preserves containers after they have finished running, thus allowing users to retrieve logs at a later time (FIELD-2204).
Bug fixes¶
Pulling images from a repository using crictl no longer returns a 500 error (FIELD-3331, ENGDTR-2569).
Fixed broken links to MSR documentation in the MSR web UI (FIELD-3822).
Fixed an issue wherein pushing images with previously-pushed layer data that has been deleted from storage caused unknown blob errors. Pushing such images now replaces missing layer data. Sweeping image layers with image layer data missing from storage no longer causes garbage collection to error out (FIELD-1836).
Security¶
Vulnerability scans no longer report CVE-2016-4074 as a result of the 2021.03 scanner update.
A self-scan of MSR 2.9.1 reveals five vulnerabilities; however, these CVEs have been analyzed and determined not to impact MSR (ENGDTR-2543).
A self-scan can report a false positive for CVE-2021-29482 (ENGDTR-2608).
2.8.6¶
(2021-04-12)
What’s new¶
Intermittent failures no longer occur during metadata garbage collection when using Google Cloud Storage as the back end (ENGDTR-2376).
All analytics reports for instances of MSR with a Mirantis-issued license key now include the license ID (even when the anonymize analytics setting is enabled). The license subject reads License ID in the web UI (ENGDTR-2327).
Bug fixes¶
Fixed an issue wherein the MSR web UI presented no more than 10 user organizations on the Users page (FIELD-3520).
Fixed an issue wherein the S3 back-end storage settings did not display in the web UI following an upgrade (FIELD-3395).
Fixed an issue with the MSR web UI wherein lengthy tag names overlapped with adjacent text in the repository tag list (FIELD-1631).
Security¶
MSR is not vulnerable to CVE-2019-15562, despite its detection in dtr-notary-signer and dtr-notary-server vulnerability scans, as the SQL back end is not used in Notary deployment (ENGDTR-2319).
Vulnerability scans of the dtr-jobrunner container can give false positives for CVE-2020-29363, CVE-2020-29361, and CVE-2020-29362 in the p11-kit component. The container's version of p11-kit is not vulnerable to these CVEs (ENGDTR-2319).
Resolved CVE-2019-20907 (ENGDTR-2259).
2.8.5¶
(2020-12-17)
Security¶
Resolved CVE-2019-17495 in Swagger UI bundle and standalone preset libraries (ENGDTR-1780, ENGDOCS-1781).
Updated various UI dependencies, thus resolving the following CVEs: CVE-2020-7707, CVE-2018-1000620, and CVE-2020-14040 (ENGDTR-2271, ENGDTR-2272).
Known issues¶
After upgrading MSR from 2.7 to 2.8, the Cloud Storage settings on the Storage page (System > Storage) are blank. This is solely a display issue; the back-end storage remains properly configured (FIELD-3395).
2.8.4¶
(2020-11-12)
Bug fixes¶
Fixed an issue wherein intermittent scanner failures occurred whenever multiple scanning jobs ran concurrently. Also fixed scanner failures that occurred when scanning certain Go binaries (ENGDTR-2116, ENGDTR-2053).
Fixed an issue wherein whenever a webhook for repository events was registered, garant would crash when a push created a repository (ENGDTR-2123).
Fixed an issue wherein the DTR API did not return a resource count (FIELD-2628).
Security¶
CVE-2020-1404 has been resolved (ENGDTR-2146).
2.8.3¶
(2020-09-15)
What’s new¶
In the bootstrapper, all visible name references to Universal Control Plane have been changed to Mirantis Kubernetes Engine, and all name references to UCP have been changed to MKE (ENGDTR-2246).
Messaging information has been edited to refer to Mirantis.
The default TLS server certificate generated when MSR is installed can now be used for server authentication. Chrome running in its default configuration will now permit users to bypass the certificate error and access MSR.
MSR is now fully functional without a license, with the exception of image scanning, which continues to require an Advanced license (ENGDTR-1812).
MSR now creates events for changes to repository descriptions.
MSR now creates events for a change to a repository's ImmutableTags field.
Documented API endpoints now display in the Swagger Live API documentation:
/_ping
/health
/nginx_status
/admin/settings
(ENGDTR-1701)
Bug fixes¶
Fixed an issue that caused tags to appear as if they were pushed 2019 years ago by a nameless entity.
Fixed an issue wherein repository team access was not cleaned up following team deletion (ENGDTR-989).
Fixed the following API handlers so that they correctly return an HTTP 401 Unauthorized response when unauthenticated:
/repositories
/index/dockersearch
/index/autocomplete
(ENGDTR-1824)
Fixed an issue wherein a blank page displayed when viewing scanned image components while looking at multi-arch image constituents.
Fixed an issue wherein the read-only registry banner remained following a backup/restore, even once the registry was returned to read-write mode. Also fixed an issue wherein, following a backup/restore, the registry could not be set back into read-only mode after it had been unset (ENGDTR-2015, FIELD-2775).
Fixed an issue wherein the UI was not properly handling a fresh MSR setup without a garbage collection cron set, which resulted in seemingly infinite loading (ENGDTR-2029).
Fixed an issue wherein the garbage collection cron job could not be disabled from the UI (ENGDTR-2030).
Fixed an issue wherein users were not able to configure MSR to check for upgrades after having previously disabled the feature (ENGDTR-2036).
Fixed an issue wherein non-admin users were seeing admin options on the settings page (ENGDTR-2032).
Fixed an issue in which the update_vuln_db (vulnerability database update) job returned success even when a replica failed to update its database (ENGDTR-2039).
Fixed an issue in which usage analytics were sometimes sent even when the Analytics: Send data setting was turned off.
Fixed an issue whereby scanning data was not cleaned up following image garbage collection (ENGDTR-1692).
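The authentication fix noted above for the /repositories and index endpoints boils down to rejecting unauthenticated requests with HTTP 401 before any handler logic runs. The decorator and request shape below are hypothetical illustrations, not MSR's actual code.

```python
# Minimal sketch of the 401 fix above: handlers that previously served
# unauthenticated requests now reject them with HTTP 401 Unauthorized.
# The request dict and decorator are illustrative only.

def require_auth(handler):
    def wrapped(request):
        if request.get("user") is None:
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)
    return wrapped

@require_auth
def list_repositories(request):
    # Stand-in for the /repositories handler's real response.
    return {"status": 200, "body": ["app/api", "app/web"]}

print(list_repositories({"user": None})["status"])     # 401
print(list_repositories({"user": "admin"})["status"])  # 200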
Security¶
Updated component signature files used for image scanning.
Bumped Alpine base image to 3.12.
Fixed an issue wherein requests to remote /v2/ endpoints for mirroring would leak information about the remote registry (ENGDTR-1821).
Updated the RethinkDB client to v6 and bumped many other component libraries.
Updated images to be built from Go 1.14 (ENGDTR-1989).
Known issues¶
If an image’s vulnerability information is not available, rescan the image. If this does not resolve the situation, contact customer support. (Intermittent failures will be addressed in an upcoming release.) (ENGDTR-2053)
Several of the highest severity CVEs have been resolved in MSR, and this work will continue going forward (ENGDTR-1874).
2.8.2¶
(2020-08-10)
What’s new¶
Starting with this release, we moved the location of our offline bundles for MSR from https://packages.docker.com/caas/ to https://packages.mirantis.com/caas/ for the following versions:
MSR 2.8.2
MSR 2.7.8
DTR 2.6.15
Offline bundles for other previous versions of MSR will remain on the docker domain.
Due to infrastructure changes, licenses will no longer auto-update and the related screens in MSR have been removed (ENGORC-1848).
Bug fixes¶
We fixed an issue that caused the system to become unresponsive when using /api/v1/repositories/{namespace}/{reponame}/tags/{reference}/scan.
We updated help links in the MSR user interface so that the user can see the correct help topics.
Previously, an MSR license may not have been successfully retrieved during installation, even when the license was available. It is now fetched properly (ENGDTR-1870).
Security¶
We upgraded our Synopsys vulnerability scanner to version 2020.03. This results in improved vulnerability scanning, both by finding more vulnerabilities and by significantly reducing false positives that may previously have been reported (ENGDTR-1868).
2.8.1¶
(2020-06-24)
Enhancements¶
MSR now uses Mirantis’s JWT-based licensing flow, in addition to the legacy Docker Hub licensing method (ENGDTR-1604).
Bug fixes¶
Removed the auto-refresh license toggle from the UI license screen (ENGDTR-1846).
Fixed an information leak tied to the remote registry endpoint (ENGDTR-1821).
Fixed an issue wherein the text/csv response file obtained from the scan summary API endpoint, used to retrieve the latest security scanning results, contained column headers but no actual response data (ENGDTR-1646).
Due to scanner improvements, libidn2 no longer displays false positives (ENGDTR-1816).
Security¶
Changes to the whitelist URLs for outgoing connections:
URLs to de-whitelist if there are no pre-Patch 2020-06 versions of Mirantis Container Runtime running:
http://license.enterprise.docker.com
http://dss-cve-updates.enterprise.docker.com
URLs to whitelist for Patch 2020-06 and later:
http://license.mirantis.com
http://dss-cve-updates.mirantis.com
(ENGDTR-1847)
2.8.0¶
(2020-05-28)
New features¶
Support for CVSS Version 3 scanning.
Enhancements¶
Users can now filter through repository tags with a type of either app, image, or plugin.
All cron jobs are now included in backups.
An alert now displays in the bottom right side of the MSR web interface when a user scans a tag.
Improvement to performance of the Scan Summary API (POST /api/v0/imagescan/scansummary/tags).
Addition of pagination for promotion policies in the MSR web interface.
An option is now available for reducing backup size by not backing up the events table for online backups (offline backups do not have this option). This adds a new flag, --ignore-events-table, to the MSR CLI backup command.
Addition of event parameter validation to include parameters for event or object type.
Addition of events for repository permission changes.
Addition of a check prior to running MSR remove that determines whether a replica ID exists in the cluster. Can be overridden with --force.
Improvement to the error messaging for default crons when there is no Advanced license.
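The pre-removal replica check listed above can be sketched as a guard that refuses to remove an unknown replica ID unless the force flag is set. The cluster state and function below are simplified illustrations, not MSR's actual removal logic.

```python
# Hedged sketch of the replica-removal check above: removing a replica ID
# that is not in the cluster fails unless --force is supplied.
# Replica IDs and state handling here are hypothetical.

cluster_replicas = {"a1b2c3d4e5f6", "0f1e2d3c4b5a"}

def remove_replica(replica_id: str, force: bool = False) -> str:
    if replica_id not in cluster_replicas and not force:
        raise SystemExit(
            f"replica {replica_id} not found in cluster; use --force to override"
        )
    cluster_replicas.discard(replica_id)  # no-op if already absent
    return f"removed {replica_id}"

print(remove_replica("a1b2c3d4e5f6"))
```

Failing fast on an unknown ID protects against typos in destructive commands, while --force preserves the escape hatch for cleaning up stale state.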
Bug fixes¶
Pull mirroring policies now do a full pull mirror for a repository when the tag limit is increased, a pruning policy is deleted, or when a policy pulls a tag that has just been deleted.
Addition of a repository event that will distinguish policy promotions from manual promotions that are done on a single image using the Promote button in the MSR web interface.
Fix of an issue that prevented license information from updating after the license is changed in the MSR web interface.
Improvements to the MSR web interface for organizations, including the organization list, the organization viewer, the organization repo, and the new organization screen.
Fixed an issue where the constituent image platforms were not populated for the /api/v1/repositories/{namespace}/{reponame}/tags and /api/v1/repositories/{namespace}/{reponame}/tags/{reference} API endpoints.
Fixed an issue with invoking the /api/v0/workers/{id}/capacity API with an invalid {id}, which should cause a 404 error but instead returned 200 (OK).
Fixed misleading error messaging on immutable repos.
Fixed issue where scan summaries were not exporting correctly.
Fixed an issue where the repository readme wouldn’t update.
Fixed an issue where the repository readme submission wouldn’t show.
Fixed pull/push mirroring validation logic.
Fixed broken webhook skipTLS button.
Fixed issue where scanning information wasn’t being copied over with promotion policies.
Fixed issue where notification banners were making part of the UI inaccessible.
Fixed a bug where webhook events weren’t being tracked correctly.
Fixed a bug where pagination for namespace repositories for a non-admin user was not working.
Scanning data that corresponds to images and layers marked for deletion is deleted during garbage collection.
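The garbage-collection behavior in the last item above follows a mark-and-sweep pattern: when an unreferenced layer is swept, the scan records keyed by the same digest are deleted with it. The data shapes below are hypothetical illustrations, not MSR's storage schema.

```python
# Illustrative mark-and-sweep sketch of the behavior above: sweeping an
# unreferenced image layer also deletes the scanning data keyed by its digest.
# All names and structures here are hypothetical.

layers = {"sha256:keep": b"...", "sha256:gone": b"..."}
scan_results = {
    "sha256:keep": ["CVE-2020-0001"],
    "sha256:gone": ["CVE-2020-0002"],
}
referenced = {"sha256:keep"}  # layers still reachable from some tag (the "mark")

def garbage_collect():
    for digest in list(layers):
        if digest not in referenced:      # the "sweep"
            del layers[digest]
            scan_results.pop(digest, None)  # drop scan data with the layer

garbage_collect()
print(sorted(scan_results))  # ['sha256:keep']
```

Deleting both stores in the same sweep is what prevents the orphaned scan data noted in the Known issues section below.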
Security¶
Fixed a problem where storage back-end credentials were returned in API calls to admin/settings.
Known issues¶
MSR does not yet offer a method for deleting scanned data that has been orphaned following the garbage collection of associated metadata.
Considerations¶
CentOS 8 entered EOL status as of 31-December-2021. For this reason, Mirantis no longer supports CentOS 8 for all versions of MSR. We encourage customers who are using CentOS 8 to migrate onto any one of the supported operating systems, as further bug fixes will not be forthcoming.
In developing MSR 2.8.x, Mirantis has been transitioning from legacy Docker Hub-issued licenses to JWT licenses, as detailed below:
Version 2.8.0: Docker Hub licenses only
Versions 2.8.1 to 2.8.8: Docker Hub licenses and JWT licenses
Versions 2.8.9 and later: JWT licenses only
When malware is present in customer images, malware scanners operating on MSR nodes at runtime can wrongly report MSR as a bad actor. If your malware scanner detects any issue in a running instance of MSR, refer to Vulnerability scanning.
Release Compatibility Matrix¶
MSR 2.8 Compatibility Matrix¶
Mirantis Secure Registry (MSR, formerly Docker Trusted Registry) provides an enterprise-grade container registry solution that can be easily integrated to provide the core of an effective secure software supply chain.
Support for MSR is defined in the Mirantis Cloud Native Platform Subscription Services agreement.
MSR version | Required MKE version |
---|---|
2.8.13 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.12 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.11 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.10 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.9 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.8 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.7 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.6 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.5 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.4 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.3 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
2.8.0 | 3.3.17, 3.3.16, 3.3.15, 3.3.14, 3.3.13, 3.3.12, 3.3.11, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.4, 3.3.3, 3.3.2, 3.3.0 |
Storage back ends¶
MSR supports the following storage systems:
Persistent volume |
|
Cloud storage providers |
|
Note
MSR cannot be deployed to Windows nodes.
MKE and MSR Browser compatibility¶
The Mirantis Kubernetes Engine (MKE) and Mirantis Secure Registry (MSR) web user interfaces (UIs) both run in the browser, separate from any backend software. As such, Mirantis aims to support browsers separately from the backend software in use.
Mirantis currently supports the following web browsers:
Browser | Supported version | Release date | Operating systems |
---|---|---|---|
Google Chrome | 91.0.4472 or newer | 25 May 2021 | macOS, Windows |
Microsoft Edge | 91.0.864 or newer | 27 May 2021 | Windows |
Firefox | 89 or newer | 1 June 2021 | macOS, Windows |
To ensure the best user experience, Mirantis recommends that you use the latest version of any of the supported browsers. The use of other browsers or older versions of the browsers we support can result in rendering issues, and can even lead to glitches and crashes in the event that some JavaScript language features or browser web APIs are not supported.
Important
Mirantis does not tie browser support to any particular MKE or MSR software release.
Mirantis strives to leverage the latest in browser technology to build more performant client software, as well as ensuring that our customers benefit from the latest browser security updates. To this end, our strategy is to regularly move our supported browser versions forward, while also lagging behind the latest releases by approximately one year to give our customers a sufficient upgrade buffer.
MKE, MSR, and MCR Maintenance Lifecycle¶
The MKE, MSR, and MCR platform subscription provides software, support, and certification to enterprise development and IT teams that build and manage critical apps in production at scale. It provides a trusted platform for all apps, supplying integrated management and security across the app lifecycle, and comprises primarily Mirantis Kubernetes Engine (MKE), Mirantis Secure Registry (MSR), and Mirantis Container Runtime (MCR).
Mirantis validates the MKE, MSR, and MCR platform for the operating system environments specified in the compatibility matrix, adhering to the Maintenance Lifecycle detailed here. Support for the MKE, MSR, and MCR platform is defined in the Mirantis Cloud Native Platform Subscription Services agreement.
Detailed here are all currently supported product versions, as well as the product versions most recently deprecated. It can be assumed that all earlier product versions are at End of Life (EOL).
Important Definitions
“Major Releases” (X.y.z): Vehicles for delivering major and minor feature development and enhancements to existing features. They incorporate all applicable Error corrections made in prior Major Releases, Minor Releases, and Maintenance Releases.
“Minor Releases” (x.Y.z): Vehicles for delivering minor feature developments, enhancements to existing features, and defect corrections. They incorporate all applicable Error corrections made in prior Minor Releases, and Maintenance Releases.
“Maintenance Releases” (x.y.Z): Vehicles for delivering Error corrections that are severely affecting a number of customers and cannot wait for the next major or minor release. They incorporate all applicable defect corrections made in prior Maintenance Releases.
“End of Life” (EOL): Versions that are no longer supported by Mirantis; updating to a later version is recommended.
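The x.y.z release taxonomy defined above can be made concrete with a small classifier that compares two versions and names the kind of release the newer one represents. This is purely an illustrative sketch of the definitions, not a Mirantis tool.

```python
# Illustrative classifier for the release definitions above:
# a change in X is a Major Release, in Y a Minor Release, in Z a Maintenance Release.

def classify_release(old: str, new: str) -> str:
    ox, oy, oz = (int(p) for p in old.split("."))
    nx, ny, nz = (int(p) for p in new.split("."))
    if nx != ox:
        return "Major Release"        # X.y.z changed
    if ny != oy:
        return "Minor Release"        # x.Y.z changed
    return "Maintenance Release"      # x.y.Z changed

print(classify_release("2.8.6", "2.8.7"))  # Maintenance Release
```

Under this scheme, the 2.8.x entries in these release notes are all Maintenance Releases of the 2.8 Minor Release line.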
GA to 12 months | 12 to 18 months | 18 to 24 months |
---|---|---|
Full support | Full support 1 | Limited Support for existing installations 2 |
1 Software patches for critical bugs and security issues only; no feature enablement.
2 Software patches for critical security issues only.
Mirantis Kubernetes Engine (MKE)¶
| 3.6.z | 3.7.z |
---|---|---|
General Availability (GA) | 2022-OCT-13 (3.6.0) | 2023-AUG-30 (3.7.0) |
End of Life (EOL) | 2024-OCT-13 | 2025-AUG-29 |
Release frequency | x.y.Z every 6 weeks | x.y.Z every 6 weeks |
Patch release content | As needed | As needed |
Supported lifespan | 2 years 1 | 2 years 1 |
1 Refer to the Support lifecycle table for details.
EOL MKE Versions¶
MKE Version | EOL date |
---|---|
2.0.z | 2017-AUG-16 |
2.1.z | 2018-FEB-07 |
2.2.z | 2019-NOV-01 |
3.0.z | 2020-APR-16 |
3.1.z | 2020-NOV-06 |
3.2.z | 2021-JUL-21 |
3.3.z | 2022-MAY-27 |
3.4.z | 2023-APR-11 |
3.5.z | 2023-NOV-22 |
Mirantis Secure Registry (MSR)¶
| 2.9.z | 3.1.z |
---|---|---|
General Availability (GA) | 2021-APR-12 (2.9.0) | 2023-SEP-28 (3.1.0) |
End of Life (EOL) | 2024-OCT-13 | 2025-SEP-27 |
Release frequency | x.y.Z every 6 weeks | x.y.Z every 6 weeks |
Patch release content | As needed | As needed |
Supported lifespan | 2 years 1 | 2 years 1 |
1 Refer to the Support lifecycle table for details.
EOL MSR Versions¶
MSR Version | EOL date |
---|---|
2.1.z | 2017-AUG-16 |
2.2.z | 2018-FEB-07 |
2.3.z | 2019-FEB-15 |
2.4.z | 2019-NOV-01 |
2.5.z | 2020-APR-16 |
2.6.z | 2020-NOV-06 |
2.7.z | 2021-JUL-21 |
2.8.z | 2022-MAY-27 |
3.0.z | 2024-APR-20 |
Mirantis Container Runtime (MCR)¶
| Enterprise 23.0 |
---|---|
General Availability (GA) | 2023-FEB-23 (23.0.1) |
End of Life (EOL) | 2025-FEB-22 |
Release frequency | x.y.Z every 6 weeks |
Patch release content | As needed |
Supported lifespan | 2 years 1 |
1 Refer to the Support lifecycle table for details.
EOL MCR Versions¶
MCR Version | EOL date |
---|---|
CSE 1.11.z | 2017-MAR-02 |
CSE 1.12.z | 2017-NOV-14 |
CSE 1.13.z | 2018-FEB-07 |
EE 17.03.z | 2018-MAR-01 |
Docker Engine - Enterprise v17.06 | 2020-APR-16 |
Docker Engine - Enterprise 18.03 | 2020-JUN-16 |
Docker Engine - Enterprise 18.09 | 2020-NOV-06 |
Docker Engine - Enterprise 19.03 | 2021-JUL-21 |
MCR 19.03.8+ | 2022-MAY-27 |
MCR 20.10.0+ | 2023-DEC-10 |
Open Source Components and Licenses¶
Click any product component license below to download a text file of that license to your local system.