Mirantis provides the MSR4 documentation to help you understand the core
concepts of Mirantis Secure Registry 4, and to provide
information on how to deploy and operate the product.
Mirantis Secure Registry (MSR) 4 is an enterprise-grade container registry
solution that can be integrated easily with standard Kubernetes distributions
to provide tight security controls for cloud native development. Based on
Harbor, which is open source and the only CNCF graduated container registry,
this Mirantis product can serve as the core of an effective secure software
supply chain.
Using MSR 4, you can automate the security of your software supply chain,
securely storing, sharing, and managing images in your own private container
registry.
With MSR 4, you can:
Run the software alongside your other applications in any standard Kubernetes
version from 1.10 and up, deploying it with Docker Compose or a Helm chart.
Secure artifacts through policies and role-based access control (RBAC),
to ensure your container images are free from vulnerabilities.
Improve DevOps collaboration while maintaining clear boundaries, by
creating and pushing multiservice applications and images and making these
resources accessible within your company.
Accelerate image distribution using peer-to-peer (P2P) preheating
capabilities.
Automatically promote images from testing through to production in a
controlled manner, thus ensuring that they comply with your defined security
minimums, before mirroring containerized content to distributed teams using
policy-based controls.
Integrate the software into your development pipeline using webhooks. In this
way, policy-based promotion automates compliance checks to secure your
application supply chain.
Mirantis Secure Registry (MSR) 4 marks a major evolution in our container
image management solution. With a new foundation based on the CNCF Harbor
project, MSR4 delivers improved performance,
scalability, and flexibility for modern DevOps workflows.
This section outlines the key changes and improvements introduced in MSR4,
highlights differences compared to MSR2 and MSR3, and provides guidance for
a smooth transition.
SAML Support: MSR4 no longer supports SAML authentication and instead
uses OpenID Connect (OIDC), a more modern and flexible standard that better
aligns with cloud-native environments and improves security and scalability.
Please refer to OIDC Authentication for more information on
configuring OIDC.
Promotion Policies: Automated promotion policies are no longer included.
Customers can adapt their CI/CD pipelines to achieve similar workflows.
Swarm Support: Customers can run MSR4 as a single instance in Swarm
environments instead of as an HA cluster.
Feature       | MSR2        | MSR3        | MSR4 (Harbor-Based)
Distribution  | Proprietary | Proprietary | CNCF Harbor
Database      | RethinkDB   | RethinkDB   | PostgreSQL; Redis for caching
Swarm         | Supported   | Supported   | Not supported; customers can use a single-instance install
Use our migration guide to transition from MSR2
and MSR3 to MSR4.
Tools are provided to migrate repositories and configurations to the
new platform.
Project and Repository permissions
When migrating repositories from MSR2 and MSR3, the repositories are placed
under a project, and the project permissions default to admin.
If you need to retain custom permissions from the previous version of MSR,
note that Mirantis will shortly publish tooling that helps migrate and
validate those permissions.
Image Signing
When migrating images which were previously signed the image signing will not
be retained. Due to architectural and security differences it will not be
possible to migrate this security attribute during the migration. Customers
can refer to Signing Artifacts with Cosign for more information on
signing artifacts after migration.
Image Signing DCT vs Cosign
MSR2 and MSR3 use Docker Content Trust (DCT) for image signing. DCT is
based on Notary v1, which uses The Update Framework (TUF) to ensure the
integrity and publisher authenticity of container images.
MSR4 supports Cosign for image signing and verification. Cosign is part of
the Sigstore project and is more modern and widely adopted for cloud-native
environments. Unlike DCT, Cosign allows signing without relying on a
separate, heavyweight service like Notary and supports keyless signing with
OIDC identities. Harbor integrates this natively, providing better
interoperability with Kubernetes-native tools and workflows.
Updated APIs and Webhooks
While general functionality remains similar, some API endpoints and webhook
implementations have changed. Customers may need to adjust their scripts and
integrations.
Adaptation for Removed Features
Swarm Support: While MSR4 no longer supports Swarm HA clusters,
single-instance deployments remain viable for Swarm users.
For more information please visit Install MSR single host using Docker Compose.
Promotion Policies: Automate promotion workflows through updated CI/CD
pipelines.
Authentication
SAML support has been removed. Customers should use other supported
authentication methods, such as LDAP or OIDC.
Mirantis Secure Registry (MSR) 4 is now based on CNCF Harbor, bringing
increased stability, improved feature sets, and a broader ecosystem of
integrations. This document outlines key changes, migration paths, and
considerations for customers transitioning from MSR2 or MSR3 to MSR4.
Since MSR4 is built on a new codebase, customers will observe functional
differences compared to MSR2 and MSR3. These changes impact exportable
metrics, job runner operations, webhooks, and API access methods. Below are
the most notable changes:
MSR4 uses OpenID Connect (OIDC) instead of legacy SAML. For MSR4 and
cloud-native applications, OIDC is the better choice due to its lightweight
nature, modern API compatibility, and stronger support for mobile and
microservices architectures. If a customer is still using SAML for
authentication, they might need an Identity Provider (IdP) that bridges SAML
and OIDC (e.g., Okta, Keycloak, or Azure AD). OIDC is broadly supported by
enterprise and cloud Identity Providers (IdPs), including Azure AD, Okta,
Google Identity Platform, Amazon Cognito, Ping Identity, IBM Security Verify,
OneLogin, and VMware Workspace ONE.
Teams RBAC
MSR4 does not include MSR2/3 Teams or Enzi. Customers can manually add
individual users to projects. Group permissions are available only through
directory groups, which requires LDAP/AD or OIDC authentication.
Upstream Harbor has moved in favor of OCI registries, which support OCI-based Helm charts.
Both Harbor and Helm CLI can manage charts as OCI artifacts, but Helm CLI
search functionality is currently limited. Searching through the Harbor UI
remains fully supported, and the upcoming Harbor CLI tool may introduce
artifact search capabilities.
In Harbor, Helm charts are managed as OCI artifacts rather than using a
dedicated Helm repository. Traditionally, Helm stored charts in a proprietary
Helm Chart Repository, which allowed direct Helm CLI interactions such as
helm search repo and helm show. With OCI-based Helm storage, charts are
pushed and pulled using standard OCI commands (helm push oci:// and
helm pull oci://), aligning with container registry best practices.
However, this shift introduces some functional differences: searching for
charts using helm search repo is no longer possible, requiring users to rely
on the Harbor UI or future enhancements in the Harbor CLI. The change
to OCI-based Helm storage improves interoperability with OCI-compliant
registries but requires minor workflow adjustments for Helm users accustomed to
traditional chart repositories.
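As an illustration of the OCI-based workflow, the following sketch shows a chart being pushed to and pulled from an MSR 4 project; the registry hostname msr.example.com, the project name library, and the chart name are assumptions, not values from this guide.

  # Authenticate the Helm client against the registry
  helm registry login msr.example.com

  # Package a chart and push it as an OCI artifact into a project
  helm package ./mychart                                  # produces mychart-0.1.0.tgz
  helm push mychart-0.1.0.tgz oci://msr.example.com/library

  # Pull the chart back from the registry
  helm pull oci://msr.example.com/library/mychart --version 0.1.0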
Promotion Policies
Promotion Policies are not formally supported in Harbor. Customers relying on
Promotion Policies should consider modifying their CI/CD pipelines.
Upstream Harbor does not support Swarm. Customers running Swarm are advised to
deploy MSR4 as a single-node instance using Docker Compose. For high
availability (HA) deployments, Kubernetes is required. Most customers with HA
demands typically have Kubernetes in their environments and can leverage it for
MSR4.
Backup and Disaster Recovery
In MSR2 and MSR3, backup functionality was built-in, allowing customers to
create and restore backups easily. MSR4 introduces a different approach where
backups must be managed externally using Velero, an open-source backup tool
widely used in enterprise environments, including on platforms like Azure.
Unlike the previous versions, which handled backups natively, Velero requires
a Kubernetes-based deployment.
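As a hedged example of such an external backup workflow, the following Velero commands back up and restore the namespace that runs MSR4; the namespace and backup names are assumptions for illustration.

  # Back up all resources and volumes in the MSR4 namespace
  velero backup create msr4-backup --include-namespaces msr4

  # Check backup status
  velero backup describe msr4-backup

  # Restore from the backup
  velero restore create --from-backup msr4-backup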
One of the key improvements in MSR4 is the ability to perform in-place
upgrades with significantly shorter maintenance windows, in contrast to MSR2
and MSR3, which required scheduling large maintenance windows. Moving
forward, upgrades in the MSR4.x series will be faster, more efficient, and
require minimal downtime.
CNCF Harbor (MSR4) fully supports mirroring migration from MSR2 and MSR3,
allowing customers to seamlessly transfer:
Images
Helm Charts
Tags
Repository structure
A key advantage of this migration process is the ability to use mirroring,
which reduces the need for extended maintenance windows previously required by
MMT. With mirroring, both MSR2/3 and MSR4 can remain active, minimizing
disruption and allowing teams to update their pipelines while maintaining
system availability.
MSR4 also supports migration from other registry platforms. For a full list of
supported platforms and migration instructions, please refer to this artifact.
Migrating to MSR4 provides enhanced performance, improved upgrade processes,
and a broader feature set. However, some functional differences require
customers to adapt workflows, particularly around authentication, promotion
policies, and backup strategies. Customers should review the outlined
differences and plan their migration accordingly.
For further details, refer to the full documentation on this site or contact
Mirantis Support.
The Mirantis Secure Registry 4 features are briefly described in the
following table, which also offers links to the corresponding upstream
Harbor documentation:
Project quotas can be set to control resource consumption, making it
possible to limit the amount of storage that a project can use.
Integrate with AD/LDAP internal user directories and OIDC to implement
fine-grained access policies and prevent malicious actors from uploading
unsafe images. Multiple repositories can be linked to provide a
separation of duties from development through production.
Deploy vulnerability scanning to analyze images for vulnerabilities
prior to their promotion to production. The default scanner, Aqua
Trivy, can be installed during MSR 4 installation using the
--with-trivy flag (see the installer sketch after this table). It
supports flexible scanning policies and integrates easily into CI/CD
systems.
An application programming interface is included that conforms to the
constraints of the REST architectural style and allows for interaction with
RESTful web services.
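The vulnerability scanning and REST API capabilities described above can be exercised from the command line. The following sketches are illustrative assumptions rather than exact commands from this guide: the installer flag applies to a Docker Compose based installation, and the hostname and credentials in the API call are placeholders.

  # Enable the default Trivy scanner at installation time (Docker Compose install)
  sudo ./install.sh --with-trivy

  # List projects through the MSR 4 (Harbor v2.0) REST API
  curl -s -u 'admin:<PASSWORD>' "https://msr.example.com/api/v2.0/projects"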
The Mirantis Secure Registry (MSR) Reference Architecture provides
comprehensive technical information on MSR, including component particulars,
infrastructure specifications, and networking and volumes detail.
MSR 4 natively supports various related clients, including the Docker CLI,
Cosign client, and OCI-compatible clients like Oras and Helm. In addition to
these clients, MSR 4 features a web portal that enables administrators to
manage and monitor all artifacts seamlessly.
The MSR 4 Web Portal is a graphical user interface that helps users manage
images on the Registry.
These are the core functional services of MSR 4, including Proxy, Core, and Job
services, all built on Harbor. This layer can also accommodate third-party
services installed and integrated to enhance functionality, such as improved
replication, advanced logging capabilities, and additional integration drivers.
Harbor’s core service, which provides the following functions, is illustrated
in the diagram below:
API Server
An HTTP server that accepts REST API requests and responds by utilizing
its submodules, including Authentication and Authorization,
Middleware, and API Handlers, to process and manage the
requests effectively.
Authentication and Authorization
The authentication service secures requests and can be backed by
a local database, AD/LDAP, or OIDC. The RBAC (Role-Based
Access Control) mechanism authorizes actions such as pulling or
pushing images. The Token service issues tokens for each
Docker push/pull command based on the user’s role within a project.
If a request from a Docker client lacks a token, the Registry
redirects the request to the Token service for token issuance.
Middleware
This component preprocesses incoming requests to determine whether they
meet the required criteria before passing them to backend services for
further processing. Various functions, including quota management,
signature verification, vulnerability severity checks,
and robot account parsing, are implemented as middleware.
MSR4 supports Cosign for image signing and verification. Cosign is part
of the Sigstore project. Cosign allows signing without relying on a
separate, heavyweight service like Notary and supports keyless signing
with OIDC identities. Harbor integrates this natively, providing better
interoperability with Kubernetes-native tools and workflows.
API Handlers
These handle the corresponding REST API requests, primarily parsing and
validating request parameters. They execute the business logic
associated with the relevant API controller and generate a response,
which is then written back to the client.
API Controller
The API controller plays a critical role in orchestrating the processing
of REST API requests. It’s a key component within the system’s
architecture that manages the interaction between the user’s requests
and the backend services.
Configuration Manager
Manages all system configurations, including settings for authentication
types, email configurations, certificates, and other essential
parameters.
Project Management
Oversees the core data and associated metadata of projects, which are
created to isolate and manage the artifacts effectively.
Quota Manager
Manages project quota settings and validates quotas whenever new pushes
are made, ensuring that usage limits are followed.
Chart Controller
Acts as a proxy for chart-related requests to the OCI-compatible
registry backend and provides various extensions to enhance the chart
management experience.
Retention Manager
Manages tag retention policies and oversees the execution and
monitoring of tag retention processes, ensuring efficient storage
management.
Content Trust
Enhances the trust capabilities provided by the backend Cosign,
facilitating a seamless content trust process for secure and verified
operations.
Replication Controller
Manages replication policies and registry adapters while also triggering
and monitoring concurrent replication processes to ensure consistency
and reliability across systems.
Scan Manager
Oversees multiple configured scanners from different providers and
generates scan summaries and reports for specified artifacts, ensuring
comprehensive security and vulnerability assessments.
Label Manager
The Label Manager is responsible for the creation and management of
labels that can be applied to projects and resources within the
registry.
P2P Manager
This component is crucial for enhancing the efficiency of image
distribution across different instances using peer-to-peer (P2P)
technology. Its role involves setting up and managing P2P preheat
provider instances. These instances allow specified images to be
preheated into a P2P network, facilitating faster access and
distribution across various nodes.
Notification Manager (Webhook)
A mechanism configured in Harbor that sends artifact status changes to
designated webhook endpoints. Interested parties can trigger follow-up
actions by listening to related webhook events, such as HTTP POST
requests or Slack notifications.
OCI Artifact Manager
The core component manages the entire lifecycle of OCI artifacts across
the Harbor registry, ensuring efficient storage, retrieval,
and management.
Registry Driver
Implemented as a registry client SDK, it facilitates communication with
the underlying registry (currently Docker Distribution), enabling
seamless interaction and data management.
Robot Manager
The Robot Manager manages robot accounts, which are used to automate
operations through APIs without requiring interactive user login.
These accounts facilitate automated workflows such as CI/CD pipelines,
allowing tasks like pushing or pulling images and Helm charts, among
other operations, through command-line interfaces (CLI) such as Docker and
Helm (see the CLI sketch below).
Log Collector
Responsible for aggregating logs from various modules into a centralized
location, ensuring streamlined access and management of log data.
GC Controller
Manages the online garbage collection (GC) schedule, initiating and
tracking the progress of GC tasks to ensure efficient resource
utilization and cleanup.
Traffic Proxy
The Traffic Proxy in Harbor primarily functions through its Proxy Cache
feature, which allows Harbor to act as a middleman between users and
external Docker registries.
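As noted in the Robot Manager description above, robot accounts let automated systems authenticate without an interactive login. The following sketch is an illustrative assumption: the registry hostname, project, robot account name, and token are placeholders, and the robot$<project>+<name> naming follows recent Harbor conventions and may differ in your version.

  # Log the Docker CLI in with a robot account, then push an image
  docker login msr.example.com -u 'robot$myproject+ci-pipeline' -p '<ROBOT-ACCOUNT-TOKEN>'
  docker tag myapp:1.0 msr.example.com/myproject/myapp:1.0
  docker push msr.example.com/myproject/myapp:1.0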
The MSR 4 Job Service is a general job execution queue service that lets
other components and services submit requests to run asynchronous tasks
concurrently through simple RESTful APIs.
Trivy is a powerful and versatile security scanner with tools to detect
security vulnerabilities across various targets, ensuring comprehensive scans
for potential issues. However, if customers prefer to use a different scanner,
MSR 4 allows such customization in the configuration.
The MSR 4 Data Access Layer manages data storage, retrieval, and caching
within the system. It encompasses Key-Value storage for caching,
an SQL database for storing metadata such as project details, user
information, policies, and image data, and Data Storage, which serves as
the backend for the registry.
These elements are described below.
Key Value Storage
MSR 4 Key-Value (K-V) storage, powered by Redis, provides data
caching functionality and temporarily persists job metadata for
the Job Service.
Database
The MSR 4 database stores essential metadata for Harbor models,
including information on projects, users, roles, replication policies,
tag retention policies, scanners, charts, and images. PostgreSQL is
used as the database solution.
Data Storage
Multiple storage options are supported for data persistence, serving as
backend storage for the OCI-compatible registry.
Multiple providers can support image storage in MSR 4. By default,
MSR 4 uses an internal registry that stores data on Data Storage, as
outlined in the Data Access Layer. Alternatively, various registry providers
can be enabled, including:
Distribution (Docker Registry)
Docker Hub
Huawei SWR
Amazon ECR
Google GCR
Azure ACR
Ali ACR
Helm Hub
Quay
Artifactory
GitLab Registry
Once a provider is attached, MSR 4 uses it as a backend registry for
replication, pushing and pulling images. For more information regarding
replication and backend registry configuration, refer to
Configuring Replication.
MSR 4 offers two primary deployment options, each with the flexibility to
accommodate various modifications. For instance, in the all-in-one deployment,
local storage can be replaced with shared storage, and databases or key-value
stores can be made remote. This adaptability allows MSR 4 to support various
configurations and deployment scenarios.
However, to establish a standardized approach, we propose two primary
deployment options tailored for specific use cases:
All-in-One on a Single Node – Ideal for testing and development
Multi-Node HA Deployment – Designed for production environments
Since MSR 4 operates as a Kubernetes workload, all of its core services
run as Kubernetes pods. As a result, we consider a worker node as the minimum
footprint for an all-in-one MSR 4 deployment, and three workers as the minimum
footprint for an HA deployment. Master nodes, however, are not included in
this count, giving you the flexibility to design and deploy the underlying
Kubernetes cluster according to your needs.
The All-in-One Deployment consolidates all services onto a single worker
node, making it the most straightforward way to deploy MSR 4. In this setup,
all services run as single-instance components without high availability (HA)
or replication. This approach is not suitable for production use but is
useful for testing or proof-of-concept work. Refer to the installation
guidance in Install MSR single host using Docker Compose, or use the Helm
chart approach described in the HA deployment variant, scaling the replica
counts to 1 in the values configuration.
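A minimal sketch of the corresponding Helm values overrides is shown below; the replica keys follow the upstream Harbor chart and should be verified against your chart version.

  portal:
    replicas: 1
  core:
    replicas: 1
  jobservice:
    replicas: 1
  registry:
    replicas: 1
  trivy:
    replicas: 1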
While this deployment effectively showcases MSR 4’s capabilities and
functionality, it is not intended for production use due to its lack of
redundancy. Instead, it is a lightweight option suitable for demonstrations,
training, testing, and development.
The following diagram illustrates a single worker node running all
MSR 4-related services.
There are two methods for installing the all-in-one MSR 4:
Each approach has its own advantages. The Kubernetes method is similar to
High Availability (HA) mode and allows for easy scaling from a single-node
to a multi-node deployment. On the other hand, Docker Compose is ideal for
those not using Kubernetes in their infrastructure, enabling them to
leverage MSR 4’s capabilities by running all services in containers.
The Highly Available (HA) Deployment of MSR 4 is distributed across three
or more worker nodes, ensuring resilience and reliability through multiple
service instances. For installation guidance, refer to
the Install MSR with High Availability.
A key aspect of this deployment is that Job Service and Registry
utilize a shared volume, which should be backed by a non-local, shared file
system or external storage cluster, such as Ceph (CephFS). Additionally,
Redis and PostgreSQL run in a replicated mode within this example,
co-hosted on the same worker nodes as MSR 4’s core services. However, it is
also possible to integrate existing corporate Redis and PostgreSQL instances
outside of these nodes, leveraging an enterprise-grade key-value store and
database infrastructure.
The following diagram illustrates the service placement in an HA deployment.
Dashed boxes indicate potential additional replicas for certain services. As a
reference, we recommend deploying at least two instances of Portal, Core,
Job Service, Registry, and Trivy—though this number can be adjusted based on
specific requirements, workload, and use cases. These services are not
quorum-based.
While the number of replicas for these services can scale as needed,
Redis and PostgreSQL must always have a minimum of three replicas to ensure
proper replication and fault tolerance. This requirement should be carefully
considered when planning a production deployment. Redis and PostgreSQL are
quorum-based services, so the number of replicas should always be odd,
specifically 1, 3, 5, and so on.
The reference HA deployment of an MSR 4 is presented in the following diagram.
As previously emphasized, MSR 4 components operate as a Kubernetes workload.
This section provides a reference visualization of the resources involved in
deploying each component. Additionally, it outlines how service deployment
differs between a single-node and a highly available (HA) setup,
highlighting key structural changes in each approach.
MSR 4 deployment includes the following components:
The Web Portal is a graphical user interface designed to help users manage
images within the Registry. To ensure scalability and redundancy, it is
deployed as a ReplicaSet, with a single instance in an All-in-One
deployment and multiple instances in a Highly Available (HA) setup.
These replicas are not quorum-based, meaning there are no limits on the number
of replicas. The instance count should be determined by your specific use case
and load requirements. To ensure high availability, it is recommended to have
at least two replicas.
An API proxy, specifically NGINX, runs as a ReplicaSet. It can
operate with a single instance in All-in-One deployments or scale with
multiple instances in an HA deployment. The proxy uses a ConfigMap to
store the nginx.conf and a Secret to provide and manage
TLS certificates.
Note that if services are exposed through Ingress, the NGINX proxy is not
used. The Kubernetes Ingress controller, which is often itself NGINX-based,
handles the required tasks such as load balancing and SSL termination, so in
that case all API routing proxy functionality is handed over to Ingress.
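A hedged sketch of the corresponding chart values, assuming the upstream Harbor chart keys and a placeholder hostname:

  expose:
    type: ingress
    ingress:
      hosts:
        core: msr.example.com
  externalURL: https://msr.example.com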
The Core is a monolithic application that encompasses multiple controller
and manager functions. The Fundamental Services -> Core section
provides a detailed description. It is deployed as a Replica Set, with a
single instance for All-in-One deployments and multiple replicas for HA
deployments. These replicas are not quorum-based, meaning there are no limits
on the number of replicas. The instance count should be determined by your
specific use case and load requirements. To ensure high availability, it is
recommended to have at least two replicas. The Core uses a ConfigMap to
store non-sensitive configuration, while sensitive parameters, such as
passwords, are attached securely through a Secret.
The Harbor Job Service runs as a ReplicaSet, with a single replica in
All-in-One deployments and multiple replicas in HA deployments. These
replicas are not quorum-based, meaning there are no limits on the number of
replicas. The instance count should be determined by your specific use case
and load requirements. To ensure high availability, it is recommended to have
at least two replicas. It utilizes a PVC to store job-related data, which
can be configured using local or remote shared storage. Please refer to the
separate Storage section for more details on storage options.
The Job Service also uses a ConfigMap to retrieve the config.yaml
and a Secret to access sensitive parameters, such as keys and passwords.
The Harbor Registry is deployed as a ReplicaSet, running as a single
instance in All-in-One deployments and supporting multiple replicas in
HA mode. These replicas are not quorum-based, meaning there are no limits
on the number of replicas. The instance count should be determined by your
specific use case and load requirements. To ensure high availability, it is
recommended to have at least two replicas. Like the Job Service, it utilizes
a PVC to store registry data, using either local or shared backend storage.
For more details on storage options, please refer to the Storage section.
The Registry workload relies on a ConfigMap to store the config.yaml
and uses Secrets to manage sensitive parameters, such as keys and
passwords.
The Trivy service is deployed as a StatefulSet and utilizes a PVC,
with a separate volume for each Trivy instance. The number of instances can
range from a single instance in All-in-One deployments to multiple
instances in HA deployments. These replicas are not quorum-based,
meaning there are no limits on the number of replicas. The instance count
should be determined by your specific use case and load requirements. To
ensure high availability, it is recommended to have at least two replicas.
Trivy also uses a Secret to store connection details for the
Key-Value store.
Unlike other fundamental services in MSR 4, K-V storage is part of the
Data Access Layer. It can either be installed as a simplified,
single-instance setup using the same Harbor Helm Chart suitable for
All-in-One deployments or deployed in HA mode using a separate
Redis Helm Chart. Alternatively, an individual instance of K-V storage
can be used and integrated into MSR 4 as an independent storage service. In
this case, it is not considered part of the deployment footprint but rather
a dependency managed by a dedicated corporate team. While a remote service
is an option, it is not part of the reference architecture and is more
suited for specific customization in particular deployment scenarios.
Unlike the previous single-instance deployment, this setup is more robust and
comprehensive. It involves deploying K-V Redis storage in replication mode,
distributed across multiple worker nodes. This configuration includes two
types of pods: replicas and master. Each pod uses a PVC for
storage and a ConfigMap to store scripts and configuration files, while
sensitive data, such as passwords, is securely stored in a Secret.
Redis is a quorum-based service, so the number of replicas should always be
odd—specifically 1, 3, 5, and so on.
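As an illustration of deploying K-V storage with a separate chart, the following sketch uses the Bitnami Redis chart in replication mode; the chart choice, release name, namespace, and secret name are assumptions.

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install msr-redis bitnami/redis \
    --namespace msr4 \
    --set architecture=replication \
    --set replica.replicaCount=3 \
    --set auth.existingSecret=msr-redis-secret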
Like K-V Storage, the SQL Database service is not part of
the Fundamental Services but is included in the Data Access Layer.
It can be installed as a simplified, single-instance setup using the same
Harbor Helm Chart, making it suitable for All-in-One deployments,
or deployed in HA mode using a separate PostgreSQL Helm Chart.
Alternatively, a separate SQL database instance can be integrated
into MSR 4 as an independent storage service. In this case, it is
considered a dependency rather than part of the deployment footprint and is
managed by a dedicated corporate team. While a remote service is an option,
it is not part of the reference architecture and is more suited for custom
deployments based on specific needs.
Unlike the previous single-node deployment, this setup is more robust and
comprehensive. It involves deploying PostgreSQL in replication mode across
multiple worker nodes. The configuration includes two types of pods:
replicas, managed as a StatefulSet, and pgpool, running as
a ReplicaSet. Each pod uses a PVC for storage and a ConfigMap
to store scripts and configuration files, while sensitive data, such as
passwords, is securely stored in a Secret.
Pgpool operates as an efficient middleware positioned between PostgreSQL
servers and PostgreSQL database clients. It maintains and reuses connections
to PostgreSQL servers. When a new connection request with identical properties
(such as username, database, and protocol version) is made, Pgpool reuses
the existing connection. This minimizes connection overhead and significantly
improves the system’s overall throughput.
PostgreSQL is a quorum-based service, so the number of replicas should always
be odd—specifically 1, 3, 5, and so on.
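As an illustration of the replicas-plus-pgpool layout described above, the following sketch deploys PostgreSQL with the Bitnami postgresql-ha chart; the chart choice, release name, namespace, and replica counts are assumptions.

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install msr-postgres bitnami/postgresql-ha \
    --namespace msr4 \
    --set postgresql.replicaCount=3 \
    --set pgpool.replicaCount=2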
MSR 4 deployment is performed through Helm charts. The resources described
in the following tables are expected to be present in the environment after
deployment.
Stores data needed for integration with other fundamental and data
storage services and API-related keys, certificates, and passwords for
DB integration.
msr-4-harbor-database
default
Contains a DB password.
msr-4-harbor-jobservice
default
Contains a job service secret and a registry credential password.
Stores configuration for core services, defining integrations,
databases, URLs, ports, and other non-sensitive settings (excluding
passwords, keys, and certs).
msr-4-harbor-jobservice-env
default
Job service configuration parameters such as URLs, ports, users, proxy
configuration, etc.
For a Highly Available (HA) deployment, a dedicated Redis Helm chart
can be used to deploy a Redis instance, ensuring distribution across nodes for
replication and enhanced reliability.
Helps maintain the availability of applications during voluntary
disruptions like node drains or rolling updates. It specifies the
minimum number or percentage of pods that must remain available during
a disruption for redis-master pods.
For a Highly Available (HA) deployment, a dedicated
PostgreSQL Helm chart can be used to deploy a PostgreSQL instance, ensuring
distribution across nodes for replication and enhanced reliability.
Helps maintain the availability of applications during voluntary
disruptions like node drains or rolling updates. It specifies the
minimum number or percentage of pods that must remain available during
a disruption for postgres-pgpool pods.
Storage is a critical component of the MSR 4 deployment, serving multiple
purposes, such as temporary job-related data and image storage. It can be
configured as local storage on the worker nodes or as shared storage,
utilizing a remote standalone storage cluster like Ceph, or by attaching a
dedicated storage application license.
Local storage is used for non-critical data that can be safely discarded
during development, testing, or when service instances are reinitialized.
This setup is primarily applicable in All-in-One deployments or when
storage redundancy is provided through hardware solutions, such as RAID
arrays on the worker nodes.
The shared storage option offloads storage management to a separate device,
cluster, or appliance, such as a Ceph cluster. In the following PVC
example, CephFS is used to store the created volume. This approach ensures
that data is stored in a secure, robust, and reliable environment, making it an
ideal solution for multi-node deployments and production environments.
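A hedged sketch of such a PVC follows; the PVC name, namespace, StorageClass name, and size are assumptions to adapt to your environment.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: msr-registry-data
    namespace: msr4
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: cephfs
    resources:
      requests:
        storage: 50Gi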
Refer to the volume access types outlined in the installation section.
While volumes used in All-in-One deployments can use
the ReadWriteOnce access mode, volumes that leverage shared storage may be
configured with the ReadWriteMany access mode. This allows the same volume
to be accessed by multiple replicas of services, such as Job Service or
Registry.
Please be aware that Harbor also offers the capability to integrate with
external object storage solutions, allowing data to be stored directly on
these platforms without the need for configuring Volumes and Persistent Volume
Claims (PVCs). This integration remains optional.
MSR 4 is deployed as a workload within a Kubernetes (K8s) cluster and offers
multiple deployment options. The diagram below illustrates the network
communication between the MSR 4 components.
Network communication between the MSR 4 components varies depending on the
deployment configuration.
In a closed deployment, where all components—including Data Layer
services—are deployed within the same Kubernetes cluster (either as an
all-in-one or high-availability setup), communication occurs over the internal
workload network. These components interact through Kubernetes Service
resources, with the only externally exposed endpoints belonging to MSR 4.
To ensure security, these endpoints must be protected with proper firewall
configurations and TLS encryption.
For deployments where Data Layer components are remote, as depicted in
the diagram, communication must be secured between the Cluster IP network used
by Kubernetes worker nodes and the external endpoints of the key-value (K-V)
and database (DB) storage systems.
For a comprehensive list of ports requiring security configurations,
refer to Network requirements.
Securing MSR 4 requires a comprehensive approach that encompasses all its
components, including Harbor, Redis, and PostgreSQL running on Kubernetes,
along with additional services such as Trivy and others if enabled. Ensuring
the integrity, confidentiality, and availability of data and services is
paramount.
This section provides guidance on securing both individual system components
and the broader Kubernetes environment.
By implementing security best practices for Kubernetes, Harbor, Redis, and
PostgreSQL, you can enhance the security, reliability, and resilience of MSR 4
against potential threats. Continuous monitoring and proactive assessment of
your security posture are essential to staying ahead of emerging risks.
Kubernetes serves as the foundation for MSR 4, making its security a top
priority. Adhering to best practices and maintaining vigilance over the
underlying infrastructure that supports MSR 4 is essential.
Since MSR 4 is deployed as a workload within Kubernetes, the following
sections outline best practices and recommendations for strengthening the
security of the underlying infrastructure.
To ensure security, the MSR 4 workload should be isolated from other
services within the cluster. Ideally, it should be the only workload
running on a dedicated Kubernetes cluster. However, if it is co-hosted with
other applications, strict access control becomes essential.
A well-configured Role-Based Access Control (RBAC) system is crucial in
such cases. Kubernetes RBAC should be enabled and carefully configured to
enforce the principle of least privilege, ensuring that each component has
only the necessary permissions.
Additionally, using dedicated service accounts for each MSR 4 component,
such as Harbor, Redis, and PostgreSQL, helps minimize the attack surface
and prevent unnecessary cross-service access.
Securing the Kubernetes platform itself is equally important. The API
server must be protected against unauthorized access by implementing strong
authentication mechanisms, such as certificate-based or token-based
authentication. These measures help safeguard MSR 4 and its infrastructure
from potential threats.
Defining proper Network Policies is essential to restrict traffic between
pods and ensure that only authorized components, such as Redis and
PostgreSQL, can communicate with each other and with Harbor.
As outlined in the deployment resources, specific NetworkPolicies are
provided for Redis and PostgreSQL when they are deployed separately from
the Harbor core. The same level of attention must be given to securing
remote data storage solutions if they are used, ensuring that communication
remains controlled and protected from unauthorized access.
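A hedged sketch of such a NetworkPolicy, allowing only Harbor pods to reach PostgreSQL; the namespace and label selectors are assumptions that must match your deployment.

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-harbor-to-postgres
    namespace: msr4
  spec:
    podSelector:
      matchLabels:
        app: postgresql
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: harbor
        ports:
          - protocol: TCP
            port: 5432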
Kubernetes Secrets store sensitive information such as passwords and
tokens, making their protection a critical aspect of security.
Enabling encryption of secrets at rest using Kubernetes’ built-in
encryption feature ensures that even if an attacker gains access to the
backend storage, they cannot easily retrieve the secrets’ contents.
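A minimal sketch of the API server EncryptionConfiguration for Secrets; the key material is a placeholder, and how this file is wired into the API server depends on your Kubernetes distribution.

  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources:
        - secrets
      providers:
        - aescbc:
            keys:
              - name: key1
                secret: <BASE64-ENCODED-32-BYTE-KEY>
        - identity: {}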
For environments with more complex security requirements, integrating an
external secrets management solution like HashiCorp Vault can provide an
additional layer of protection, offering enhanced control and security for
sensitive data.
All internal communications within the Kubernetes cluster must be encrypted
using TLS to protect data in transit.
Kubernetes’ native support for TLS certificates should be utilized, or
alternatively, integration with a service like cert-manager can streamline
certificate management through automation.
Implementing these measures ensures secure communication between components
and reduces the risk of unauthorized access or data interception.
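For example, with cert-manager installed, a TLS certificate for the MSR endpoint can be requested declaratively; the issuer, namespace, and hostname below are assumptions.

  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: msr-tls
    namespace: msr4
  spec:
    secretName: msr-tls-secret
    dnsNames:
      - msr.example.com
    issuerRef:
      name: corporate-ca-issuer
      kind: ClusterIssuer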
Harbor serves as the container registry in MSR 4, making its security
crucial for safeguarding both container images and their associated
metadata. Ensuring proper security measures are in place helps protect
against unauthorized access, image tampering, and potential vulnerabilities
within the registry.
It is essential to enable Harbor’s authentication mechanisms, such as
OpenID Connect (OIDC), LDAP, or local accounts, to manage access to
repositories and projects effectively.
For testing and development purposes, using local accounts may suffice, as
seen in deployment examples, since the solution is not intended for
production. However, for production environments, integrating corporate
OAuth or Active Directory (AD)/LDAP with MSR 4 is necessary to enable
Single Sign-On (SSO) capabilities, enhancing security and user management.
Additionally, leveraging Role-Based Access Control (RBAC) within Harbor
allows for the assignment of specific roles to users, restricting access to
sensitive resources and ensuring that only authorized individuals can
interact with critical data and operations.
Cosign is used to sign images stored in Harbor, ensuring their authenticity
and providing a layer of trust.
In addition, vulnerability scanning via Trivy is enabled by default for all
images pushed to Harbor. This helps identify potential security flaws
before the images are deployed, ensuring that only secure and trusted
images are used in production environments.
It is crucial to configure Harbor to use HTTPS with strong SSL/TLS
certificates to secure client-server communications.
For production environments, corporate-signed certificates should be used
rather than self-signed ones. Self-signed certificates are acceptable only
for testing purposes and should not be used in production, as they do not
provide the same level of trust and security as certificates issued by a
trusted certificate authority.
For added security, it is important to assess your specific use case and
disable any unused features in Harbor, such as unnecessary APIs, to reduce
the attack surface. Regularly reviewing and disabling non-essential
functionalities can help minimize potential vulnerabilities.
Additionally, credentials used to access Harbor—such as API tokens and
system secrets—should be rotated regularly to enhance security.
Since these credentials are not managed by the internal MSR 4 mechanism, it
is recommended to use third-party CI tools or scripts to automate and
manage the credential rotation process, ensuring that sensitive resources
are updated and protected consistently.
Redis is an in-memory data store, and securing its configuration and
access is critical to maintaining the integrity of cached data. While Redis
is often part of MSR 4 installations, it’s important to note that in some
cases, a corporate key-value (K-V) storage solution may be used instead. In
such scenarios, the responsibility for securing the K-V storage is transferred
to the corresponding corporate service team, which must ensure the storage is
appropriately configured and protected against unauthorized access or data
breaches.
To secure Redis, it is essential to enable authentication by setting a strong
password using the requirepass directive in the Redis configuration. This
ensures that only authorized clients can access the Redis instance.
Additionally, TLS/SSL encryption should be enabled to secure communication
between Redis clients and the Redis server. This helps protect sensitive data
in transit, preventing unauthorized interception or tampering of the
information being exchanged.
Since the placement of the K-V Storage service may vary—whether cohosted on
the same cluster, accessed from another cluster, or deployed entirely
separately—it is crucial to bind Redis to a private network to prevent
unauthorized external access. Redis should only be accessible from trusted
sources, and access should be restricted to the minimum necessary.
To achieve this, Kubernetes Network Policies should be used to enforce strict
controls on which pods can communicate with the Redis service. This ensures
that only authorized pods within the cluster can access Redis, further
minimizing the attack surface and enhancing security.
To enhance security, the CONFIG command should be disabled in Redis to
prevent unauthorized users from making changes to the Redis configuration.
This reduces the risk of malicious users altering critical settings.
Additionally, for Redis instances that should not be exposed to the internet,
consider enabling Redis’ protected mode. This mode ensures that Redis only
accepts connections from trusted sources, blocking any unauthorized access
attempts from external networks.
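The directives discussed above map to a redis.conf fragment such as the following sketch; values are placeholders, and when Redis is deployed through a Helm chart the equivalent settings are usually exposed as chart values.

  # Require clients to authenticate
  requirepass <STRONG-PASSWORD>

  # Accept only TLS connections
  port 0
  tls-port 6379
  tls-cert-file /etc/redis/tls/redis.crt
  tls-key-file /etc/redis/tls/redis.key
  tls-ca-cert-file /etc/redis/tls/ca.crt

  # Bind to a private interface and keep protected mode enabled
  bind 10.0.0.10
  protected-mode yes

  # Disable the CONFIG command
  rename-command CONFIG ""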
PostgreSQL is a relational database, and its security is vital for ensuring
data protection and maintaining compliance with regulations. Securing
PostgreSQL helps safeguard sensitive information from unauthorized access,
tampering, and potential breaches, ensuring that both the integrity and
confidentiality of the data are preserved. Proper security measures are
essential for both operational efficiency and regulatory adherence.
It is essential to enforce strong password policies for all database users to
prevent unauthorized access. Additionally, enabling SSL for encrypted
connections ensures that data transmitted between clients and the PostgreSQL
server is secure.
To further enhance security, use PostgreSQL roles to implement least
privileged access to databases and tables. Each application component should
have its own dedicated database user, with only the minimum required
permissions granted. This reduces the risk of unauthorized actions and ensures
that users can only access the data they need to perform their tasks.
To protect sensitive data stored on disk, enable data-at-rest encryption in
PostgreSQL. This ensures that any data stored in the database is encrypted
and remains secure even if the underlying storage is compromised.
Additionally, use SSL/TLS for data-in-transit encryption to secure
communications between PostgreSQL and application components. This ensures
that data exchanged between the database and clients is encrypted, preventing
interception or tampering during transit.
To enhance security, ensure that PostgreSQL is not directly accessible from
the public internet. Use Kubernetes Network Policies to restrict access to
authorized services only, ensuring that only trusted internal services can
communicate with the database.
Additionally, apply restrictions to limit access based on IP addresses,
allowing only trusted sources to connect to PostgreSQL. Furthermore, configure
client authentication methods, such as certificate-based authentication, to
further secure access and ensure that only authenticated clients can interact
with the database.
Regularly backing up the PostgreSQL database is crucial to ensure data
integrity and availability. It is essential that backup files are stored
securely, preferably in an encrypted format, to protect them from unauthorized
access or tampering.
Additionally, enable point-in-time recovery (PITR) to provide the ability to
recover the database to a specific state in case of corruption or failure.
PITR ensures minimal data loss and allows for quick recovery in the event of
an incident.
Proper logging and monitoring are crucial for identifying and responding to
security incidents in a timely manner. By capturing detailed logs of database
activity, access attempts, and system events, you can detect anomalies and
potential security threats. Implementing comprehensive monitoring allows you
to track system health, performance, and security metrics, providing visibility
into any suspicious behavior. This enables a proactive response to mitigate
risks and maintain the integrity and security of the system.
Implementing centralized logging for Harbor, Redis, PostgreSQL, and Kubernetes
is essential for maintaining visibility into system activity and detecting
potential security incidents. By aggregating logs from all components in a
centralized location, you can more easily monitor and analyze events, track
anomalies, and respond to threats quickly.
To achieve this, consider using tools like Fluentd, Elasticsearch, and Kibana
(EFK stack). Fluentd can collect and aggregate logs, Elasticsearch stores and
indexes the logs, and Kibana provides a user-friendly interface for visualizing
and analyzing log data. This setup allows for efficient log management and
better insights into system behavior, enabling prompt detection of security
incidents.
Setting up Prometheus and Grafana is an effective way to monitor the health
and performance of the system, as well as detect any unusual behavior.
Prometheus can collect and store metrics from various components, while Grafana
provides powerful dashboards for visualizing those metrics in real-time.
For enhanced security, integrating with external monitoring solutions like
Falco or Sysdig is recommended for runtime security monitoring. These tools
help detect suspicious activity and provide real-time alerts for potential
security breaches, ensuring a comprehensive security monitoring strategy.
Mirantis hosts and controls all sources of MSR 4 that are delivered to the
environment, ensuring a secure supply chain. This controlled process is
essential for preventing any malware injections or unauthorized modifications
to the system infrastructure. By maintaining tight control over the software
delivery pipeline, Mirantis helps safeguard the integrity and security of the
environment from the outset.
Helm charts and images used for building MSR 4 are hosted and maintained by
Mirantis. These resources are regularly scanned and updated according to
Mirantis’ corporate schedule, ensuring that they remain secure and up-to-date.
To ensure the security of the environment, the customer must establish a secure
communication channel between their infrastructure and Mirantis’ repositories
and registries. This can be achieved through specific proxy configurations,
which ensure a direct and controlled connection, minimizing the risk of
unauthorized access or data breaches.
Regularly applying security patches to all components—such as Harbor, Redis,
PostgreSQL, and Kubernetes—is essential to mitigate vulnerabilities promptly
and maintain a secure environment. Keeping components up-to-date with the
latest security patches helps protect the system from known threats and
exploits.
It is also important to monitor security bulletins and advisories for updates
and fixes relevant to your stack. Staying informed about new vulnerabilities
and their corresponding patches allows for quick action when necessary.
While Mirantis handles the security of sources delivered from its repositories
and registries, third-party integrations require additional security measures.
These must be secured with proper scanning and a regular patching schedule to
ensure they meet the same security standards as internal components, reducing
the risk of introducing vulnerabilities into the environment.
Implementing audit trails is essential for tracking and monitoring system
activity, enabling you to detect and respond to potential security incidents.
Audit logs should capture all critical events, such as access attempts,
configuration changes, and data modifications, ensuring accountability and
traceability.
Additionally, sensitive data must be encrypted both at rest and in transit.
Encryption at rest protects stored data from unauthorized access, while
encryption in transit ensures that data exchanged between systems remains
secure during transmission. This dual-layer approach helps safeguard sensitive
information from potential breaches and attacks.
Mirantis actively checks the sources for Common Vulnerabilities and Exposures
(CVEs) and malware injections. This proactive approach ensures that the
software and components delivered from Mirantis repositories are thoroughly
vetted for security risks, helping to prevent vulnerabilities and malicious
code from being introduced into the environment. By conducting these checks,
Mirantis maintains a secure supply chain for MSR 4 deployments.
Ensure that the environment adheres to relevant compliance standards such as
GDPR, HIPAA, or PCI-DSS, depending on your use case.
Mirantis Secure Registry (MSR) supports various installation scenarios
designed to meet most customers' needs. This documentation provides
step-by-step instructions for standard deployment configurations across
commonly used clouds and on-premises environments. Following these guidelines
ensures a reliable and fully supported installation.
Some organizations may have unique infrastructure requirements or prefer
custom deployment approaches that extend beyond the scope of this
documentation. While Mirantis strives to support a diverse range of use
cases, official support is limited to the configurations outlined in this
section. For specialized installation assistance or custom deployment
strategies, contact the Mirantis Professional Services team for expert
guidance and implementation support.
For more information about Mirantis Professional Services, refer to
Services Descriptions.
This procedure applies only to Kubernetes environments running MKE 3.x.
If you are using MKE 4.x, no additional preparation is required before
installing MSR.
To install MSR on MKE you must first configure both the
default:postgres-operator user account and the default:postgres-pod
service account in MKE 3.x with the privileged permission.
To prepare MKE 3.x for MSR install:
Log in to the MKE web UI.
In the left-side navigation panel, click the <username>
drop-down to display the available options.
Click Admin Settings > Privileges.
Navigate to the User account privileges section.
Enter <namespace-name>:postgres-operator into the User
accounts field.
Note
You can replace <namespace-name> with default to indicate the use
of the default namespace.
Select the privileged check box.
Scroll down to the Service account privileges section.
Enter <namespace-name>:postgres-pod into the Service accounts
field.
Note
You can replace <namespace-name> with default to indicate the use
of the default namespace.
Select the privileged checkbox.
Click Save.
Important
For already deployed MSR instances, issue a rolling restart of the
postgres-operator deployment:
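A typical invocation, with <namespace-name> matching the namespace used above, might be:

  kubectl rollout restart deployment/postgres-operator --namespace <namespace-name>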
This section describes how to perform a new single-node Mirantis Secure
Registry (MSR) installation and configuration using Docker Compose. By
following the procedure, you will have a fully functioning single-node
MSR installation with SSL encryption.
To ensure that all of the key prerequisites are met:
Verify that your system is running a Linux-based operating system.
Recommended distributions include Red Hat Enterprise Linux (RHEL), Rocky
Linux, and Ubuntu.
Verify the Docker installation. If Docker is not installed, run:
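The exact installation command depends on your distribution; as one common illustrative approach, and not necessarily the command from this guide, Docker's convenience script can be used:

  curl -fsSL https://get.docker.com -o get-docker.sh
  sudo sh get-docker.sh
  sudo systemctl enable --now docker
  docker --version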
Locate the .tgz installer package of the latest release of MSR at
https://packages.mirantis.com/?prefix=msr/.
The release is available as a single bundle and is suitable only for
offline installations.
Right-click on the installer package and copy the download link.
Once the services are running, you can access MSR from a web browser at
http://<YOUR-DOMAIN.COM> using the admin credentials set in
harbor.yml. You will be redirected to HTTPS if SSL is enabled
on the instance.
HA MSR runs on an existing MKE or other Kubernetes cluster, preferably with
a highly available control plane (at least three controllers),
a minimum of three worker nodes, and highly available ingress.
Kubernetes storage backend with ReadWriteMany (RWX) support
A storage backend that allows a Persistent Volume Claim to be shared across
all worker nodes in the host cluster (for example, CephFS, AWS EFS,
Azure Files).
Obtain and install a Kubernetes client bundle or
kubeconfig with embedded certificates on your management workstation to
allow kubectl and Helm to manage your cluster.
This depends on your Kubernetes distribution and configuration.
HA MSR requires a Persistent Volume Claim (PVC) that can be shared across all
worker nodes.
Note
MSR4 can use any StorageClass and PVC that you configure on your
Kubernetes cluster. The following example sets cephfs up as your
default StorageClass. For more information, see
Storage Classes
in the official Kubernetes documentation.
Create a StorageClass, the specifics of which depend on the storage
backend you are using. The following example illustrates how to create a
StorageClass with a CephFS backend and Ceph CSI:
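A hedged sketch of such a StorageClass, assuming a Ceph CSI CephFS driver; the cluster ID, filesystem name, and secret references are placeholders that must match your Ceph CSI deployment.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: cephfs
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: cephfs.csi.ceph.com
  parameters:
    clusterID: <CEPH-CLUSTER-ID>
    fsName: <CEPHFS-FILESYSTEM-NAME>
    csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
    csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
  reclaimPolicy: Delete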
Helm automatically creates certificates. To manually
create your own, follow these steps:
Create a directory for certificates named certs:
  mkdir certs
Create a certs.conf text file in the certs directory:
  [req]
  distinguished_name = req_distinguished_name
  x509_extensions = v3_req
  prompt = no

  [req_distinguished_name]
  C = US
  ST = State
  L = City
  O = Organization
  OU = OrganizationalUnit
  CN = msr

  [v3_req]
  keyUsage = digitalSignature, keyEncipherment, dataEncipherment
  extendedKeyUsage = serverAuth
  subjectAltName = @alt_names

  [alt_names]
  IP.1 = <IP-ADDRESS-OF-WORKERNODE>   # Replace with your actual IP address
Generate the certificate and the key using the certs.conf file you
just created:
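A hedged sketch of the OpenSSL invocation, assuming self-signed certificates and the file names msr.crt and msr.key:

  openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
    -keyout certs/msr.key -out certs/msr.crt \
    -config certs/certs.conf -extensions v3_req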
If you are using the Helm-generated certificates, skip this step. If you
manually created your own certificates, create the Kubernetes secret by
running the following command from outside of the certs folder:
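A hedged sketch of the secret creation, assuming the certificate and key file names from the previous step and a placeholder namespace:

  kubectl create secret tls <NAME-OF-YOUR-SECRET> \
    --cert=certs/msr.crt --key=certs/msr.key \
    --namespace <MSR-NAMESPACE>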
Set how the service is exposed in the Harbor chart values:

  expose:
    # Set how to expose the service. Set the type as "ingress", "clusterIP",
    # "nodePort" or "loadBalancer" and fill the information in the
    # corresponding section
    type: nodePort
Set the cert source to TLS and the secret name:
  certSource: secret
  secret:
    # The name of secret which contains keys named:
    # "tls.crt" - the certificate
    # "tls.key" - the private key
    secretName: "<NAME-OF-YOUR-SECRET>"
Set the nodePort ports to allow NodePort access. You can use any
ephemeral port, though some Kubernetes distributions restrict the range;
a generally accepted range is 32768-35535.
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 32769
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 32770
Set the external URL. If using NodePort, use a worker node IP address (the
same one that you used when generating the certificate):
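For example, a sketch that assumes the HTTPS NodePort 32770 configured above:
externalURL: https://<IP-ADDRESS-OF-WORKERNODE>:32770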
Check your settings against a full example of the MSR configuration:
expose:
  type: loadBalancer
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "<STORAGE-CLASS-NAME>"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      jobLog:
        storageClass: "<STORAGE-CLASS-NAME>"
        accessMode: ReadWriteOnce
        size: 5Gi
    trivy:
      storageClass: "<STORAGE-CLASS-NAME>"
      accessMode: ReadWriteOnce
      size: 5Gi
portal:
  replicas: 2
core:
  replicas: 2
jobservice:
  replicas: 2
registry:
  replicas: 2
trivy:
  replicas: 2
database:
  type: external
  external:
    sslmode: require
    host: "<POSTGRES-SERVICE-IP-ADDRESS>"    # Replace with actual IP
    port: "<POSTGRES-SERVICE-PORT-NUMBER>"   # Replace with actual port
    coreDatabase: registry
    username: msr
    existingSecret: msr.msr-postgres.credentials.postgresql.acid.zalan.do
redis:
  type: external
  external:
    addr: "msr-redis-master:<REDIS-PORT-NUMBER>"
    existingSecret: msr-redis-secret
Access the MSR UI at https://<WORKER-NODE-EXTERNAL-IP>:32770,
provided that you used the same NodePort numbers specified in this guide.
You can also log in using:
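For example, a minimal sketch of a Docker CLI login, assuming the HTTPS NodePort from this guide and the admin account defined in harbor.yml:
docker login <WORKER-NODE-EXTERNAL-IP>:32770 -u admin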
Authentication in MSR ensures secure access by validating user credentials
against an external provider or internal database. Supported methods include:
LDAP Authentication: Leverages existing LDAP directories to authenticate
users.
OpenID Connect (OIDC): A federated identity standard for single sign-on
(SSO) and secure authentication.
Database Authentication: Built-in method that manages user credentials
locally within MSR. This is the default authentication option.
Each authentication method offers unique advantages depending on your
organization’s requirements. Database Authentication is a quick way to get
started for smaller organizations, or for sandbox and testing environments
that do not need or have access to an external provider. For larger
organizations and production environments, LDAP or OIDC support bulk
user onboarding and group management.
Log in as an administrator and navigate to the
Administration > Configuration section.
Set Auth Mode to LDAP:
Under the Authentication tab, select LDAP from the
Auth Mode dropdown.
Provide LDAP Server Details:
The Auth Mode field now displays LDAP.
LDAP URL: Enter the server URL (e.g.,
ldap://example.com or ldaps://example.com for secure connections).
LDAP Search DN and LDAP Search Password: When a user logs in to
Harbor with their LDAP username and password, Harbor uses these values to
bind to the LDAP/AD server. For example, cn=admin,dc=example.com.
LDAP Base DN: Harbor looks up the user under the LDAP Base DN entry,
including the subtree. For example, dc=example.com.
LDAP Filter: The filter to search for LDAP/AD users. For example,
objectclass=user.
LDAP UID: An attribute, for example uid, or cn, that is used to match
a user with the username. If a match is found, the user’s password is
verified by a bind request to the LDAP/AD server.
LDAP Scope: The scope to search for LDAP/AD users. Select from
Subtree, Base, and OneLevel.
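For illustration only, a typical set of server values might look like the following; every value is a placeholder for your own directory:
LDAP URL:             ldaps://ldap.example.com
LDAP Search DN:       cn=harbor-bind,ou=service,dc=example,dc=com
LDAP Search Password: <BIND-PASSWORD>
LDAP Base DN:         ou=people,dc=example,dc=com
LDAP Filter:          objectclass=inetOrgPerson
LDAP UID:             uid
LDAP Scope:           Subtree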
Optional. To manage user authentication with LDAP groups configure the group
settings:
LDAP Group Base DN: Base DN for group lookup.
Required when LDAP group feature is enabled.
LDAP Group Filter: Search filter for LDAP/AD groups. Required when
LDAP group feature is enabled. Available options:
OpenLDAP: objectclass=groupOfNames
Active Directory: objectclass=group
LDAP Group GID: Attribute naming an LDAP/AD group. Required when LDAP
group feature is enabled.
LDAP Group Admin DN: Group DN for users with Harbor admin access.
LDAP Group Admin Filter: Grants Harbor system administrator privileges
to all users in groups that match the specified filter.
LDAP Group Membership: User attribute for group membership.
Default: memberof.
LDAP Scope: Scope for group search: Subtree,
Base, or OneLevel.
LDAP Group Attached in Parallel: Attaches groups in parallel to
prevent login timeouts.
Uncheck LDAP Verify Cert if the LDAP/AD server uses a self-signed or
untrusted certificate.
Test LDAP Connection:
Use the Test LDAP Server button to validate the connection.
Troubleshoot any errors before proceeding.
Log in and navigate to Administration >
Configuration > Authentication.
Set Authentication Mode to OIDC:
Select OIDC as the authentication mode.
Enter OIDC Provider Details:
OIDC Provider Name: The name of the OIDC provider.
OIDC Provider Endpoint: The URL of the endpoint of the OIDC provider
which must start with https.
OIDC Client ID: The client ID with which Harbor is registered with the
OIDC provider.
OIDC Client Secret: The secret with which Harbor is registered with
the OIDC provider.
Group Claim Name: The name of a custom group claim that you have
configured in your OIDC provider, that includes the groups to add to
Harbor.
OIDC Admin Group: The name of the admin group. If the ID token of the
user shows that they are a member of this group, the user is granted admin
privileges in Harbor. Note: You can only set one Admin Group, and the
value in this field must match the value of the group item in the
ID token.
OIDC Scope: A comma-separated string listing the scopes to be used
during authentication.
The OIDC scope must contain openid and usually also contains profile and
email. To obtain refresh tokens it should also contain offline_access.
If you are using OIDC groups, a scope must identify the group claim.
Check with your OIDC provider administrator for precise details of how
to identify the group claim scope, as this differs from vendor to vendor.
Uncheck Verify Certificate if the OIDC Provider uses a self-signed or
untrusted certificate.
Check Automatic onboarding if you do not want users to set their
username in Harbor during their first login. When this option is checked,
the Username Claim attribute must be set. Harbor reads the value
of this claim from the ID token and uses it as the username when onboarding
the user. You must therefore make sure that the claim you set in
Username Claim is included in the ID token returned by your OIDC
provider; otherwise, a system error occurs when Harbor tries to onboard
the user.
Verify that the Redirect URI that you configured in your OIDC provider is
the same as the one displayed at the bottom of the page on the Mirantis
Harbor configuration page.
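For illustration only, a typical configuration might look like the following; all values are placeholders and depend on how the client is registered with your OIDC provider:
OIDC Provider Name:     corporate-sso
OIDC Provider Endpoint: https://sso.example.com/realms/msr
OIDC Client ID:         msr4
OIDC Client Secret:     <CLIENT-SECRET>
Group Claim Name:       groups
OIDC Admin Group:       msr-admins
OIDC Scope:             openid,profile,email,offline_access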
Test OIDC Server Connection:
Use the Test OIDC Server button to verify the configuration.
Database authentication is the simplest method, ideal for environments without
external authentication services. The one limitation is that you cannot use
groups in the MSR environment.
Purpose of Replication: Replication is a critical feature that allows the
synchronization of container images across multiple registry instances.
It is often employed for:
Disaster Recovery: Creating replicas in geographically distant
locations provides redundancy and ensures accessibility during outages.
Load Balancing: Distributing image pull requests across several
registries improves performance and reduces latency.
Collaborative Environments: In complex deployment scenarios,
replication enables teams across locations to access synchronized image
repositories.
Key Concepts:
Replication Endpoint: An endpoint defines the registry location MSR
will replicate images to or from. This includes both internal and external
registries.
Replication Rule: Rules specify which images to replicate, with filters
based on namespace, tags, or patterns. This rule framework ensures only
relevant data is synchronized, saving time and storage space.
Triggers: Triggers determine the timing and conditions under which
replication occurs. Common trigger modes are manual, immediate
(event-based), and scheduled replication.
We start by creating a Replication Endpoint in the MSR4 UI.
Log into the MSR4 Web Interface: Use your admin credentials to access
the MSR4 web interface.
Navigate to Registries:
From the main menu, select Administration >
Registries.
Here, you will manage all endpoints that your MSR4 instance connects to
for replication purposes.
Creating a New Endpoint:
Click + New Endpoint to start setting up an endpoint.
Select Provider Type
Choose from options like MSR, Docker Registry, Harbor, or
AWS ECR, each with unique requirements.
Endpoint Name: Enter a name that clearly describes the endpoint’s
function (e.g., “US-West Registry” or “Production Backup”). You can add
additional information in the Description field.
Access ID: The username for the remote registry.
Access Secret: The password for the account used to access the remote
registry.
Verify Connection:
Click Test Connection to ensure MSR4 can reach the endpoint
successfully. A success message confirms network connectivity and
credential accuracy.
Save Endpoint Configuration:
After successful testing, click Save to finalize the endpoint
configuration.
Considerations: Always verify that the registry URL and credentials are
current and correct. Expired tokens or incorrect URLs can interrupt replication
jobs and require troubleshooting.
Replication rules define the replication’s scope, ensuring that only necessary
images are synchronized. This approach conserves bandwidth and maintains
efficient storage use.
Setting Up a New Replication Rule in MSR4
Access the Replication Rules Panel:
In the MSR4 web interface, go to Administration >
Replications.
The Replications page displays all existing rules and allows
you to add new rules or modify existing ones.
Define a New Rule:
Click + New Replication Rule to open the rule configuration
screen.
Name: Assign a unique name (e.g., “Sync to Europe Backup”) that
indicates the rule’s purpose.
Replication Mode: Select Push to send data to the remote location, or
Pull to copy data from the remote location.
Source Resource Filter: This is where you can filter a subset of
images by name, tag, label, or resource type.
Namespace: Sync only images within specific namespaces.
Tag Patterns: Define tag patterns to limit replication to specific
versions or releases (e.g., *latest).
Label: Replicate images tagged with specific labels.
If you set the name filter to **, all images are replicated.
Destination Registry: Select from the list of previously configured
endpoints.
Name Space & Flattening: When mirroring to MSR4, Harbor can flatten
the namespace of replicated images.
Configure the Trigger Mode: Specify how and when the replication
should occur:
Manual: Requires an admin to start replication manually.
Immediate: Begins replication as soon as an image is pushed to the
source registry.
Scheduled: Allows you to define a CRON-based schedule (e.g., daily
at midnight); see the example after this procedure.
Save and Activate the Rule:
Once configured, click Create to save and activate the rule.
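As an illustrative sketch for the Scheduled trigger, Harbor-based MSR typically accepts a six-field cron expression with a leading seconds field; confirm the exact format accepted by your deployment:
0 0 0 * * *    # run the replication every day at 00:00:00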
Efficient replication management and monitoring are essential to ensure
seamless synchronization and detect issues early.
Monitoring Replication Jobs
Accessing Replication Jobs:
Go to Administration > Replications in the MSR4
interface to view all replication rules.
Select the replication rule of interest, then select
Actions > Edit. You can now modify the existing
replication rule.
Running a Replication Job Manually:
Go to Administration > Replications. To manually
start a replication, select the relevant rule and click
Replicate. This action initiates replication immediately,
even if the rule is set to a schedule.
Viewing Job Details:
Go to Administration > Replications in the MSR4
interface to monitor and manage ongoing and completed replication jobs.
Select the replication rule; below it, you should see the historical data
of executions, including any current and past replications.
Click on a job entry ID to view logs, error messages, and
specific replication statistics. This information aids in troubleshooting
and verifying data integrity.
Re-running Failed Jobs:
For any job that has encountered issues, select Replicate.
Ensure that the endpoint connection and credentials are valid before
re-running jobs.
As a project administrator, you can establish connections between your Harbor
projects and external webhook endpoints. This integration enables Harbor to
notify specified endpoints of particular events occurring within your projects,
thereby facilitating seamless integration with other tools and enhancing
continuous integration and development workflows.
Harbor supports two types of webhook endpoints: HTTP and Slack. You can define
multiple webhook endpoints per project. Webhook notifications are delivered in
JSON format via HTTP or HTTPS POST requests to the specified endpoint URL or
Slack address. Harbor supports two JSON payload formats:
Default: The traditional format used in previous versions.
CloudEvents: A format adhering to the CloudEvents specification.
The following table outlines the events that trigger notifications and the
contents of each notification:
Event
Webhook Event Type
Contents of Notification
Push artifact to registry
PUSH_ARTIFACT
Repository namespace name, repository name, resource URL, tags, manifest
digest, artifact name, push time timestamp, username of user who pushed
artifact
Pull artifact from registry
PULL_ARTIFACT
Repository namespace name, repository name, manifest digest, artifact
name, pull time timestamp, username of user who pulled artifact
Delete artifact from registry
DELETE_ARTIFACT
Repository namespace name, repository name, manifest digest, artifact
name, artifact size, delete time timestamp, username of user who deleted
image
Artifact scan completed
SCANNING_COMPLETED
Repository namespace name, repository name, tag scanned, artifact name,
number of critical issues, number of major issues, number of minor
issues, last scan status, scan completion time timestamp, username of
user who performed scan
Artifact scan stopped
SCANNING_STOPPED
Repository namespace name, repository name, tag scanned, artifact name,
scan status
Artifact scan failed
SCANNING_FAILED
Repository namespace name, repository name, tag scanned, artifact name,
error that occurred, username of user who performed scan
Project quota exceeded
QUOTA_EXCEED
Repository namespace name, repository name, tags, manifest digest,
artifact name, push time timestamp, username of user who pushed artifact
Project quota near threshold
QUOTA_WARNING
Repository namespace name, repository name, tags, manifest digest,
artifact name, push time timestamp, username of user who pushed artifact
Artifact replication status changed
REPLICATION
Repository namespace name, repository name, tags, manifest digest,
artifact name, push time timestamp, username of user who triggered the
replication
When an artifact is pushed to the registry, and you’ve configured a webhook
for the PUSH_ARTIFACT event, Harbor sends a JSON payload to the specified
endpoint. Below is an example of such a payload in the Default format:
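The following is an illustrative sketch of a Default-format PUSH_ARTIFACT payload; field names follow the upstream Harbor format, and all values are placeholders that vary by version and event:
{
  "type": "PUSH_ARTIFACT",
  "occur_at": 1680501893,
  "operator": "admin",
  "event_data": {
    "resources": [
      {
        "digest": "sha256:<MANIFEST-DIGEST>",
        "tag": "latest",
        "resource_url": "msr.example.com/library/nginx:latest"
      }
    ],
    "repository": {
      "date_created": 1680501893,
      "name": "nginx",
      "namespace": "library",
      "repo_full_name": "library/nginx",
      "repo_type": "public"
    }
  }
}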
In the CloudEvents format, the payload would be structured differently,
adhering to the CloudEvents specification.
Recommendations for Webhook Endpoints
HTTP Endpoints: Ensure that the endpoint has a listener capable of
interpreting the JSON payload and acting upon the information,
such as executing a script or triggering a build process.
Slack Endpoints: Follow Slack’s guidelines for incoming webhooks to
integrate Harbor notifications into Slack channels.
By configuring webhook notifications, you can automate responses to various
events within your Harbor projects, thereby enhancing your continuous
integration and deployment pipelines.
Differences Between MSR 3 Webhooks and MSR 4 Webhooks (Harbor-Based)
When migrating from Mirantis Secure Registry (MSR) 3 to MSR 4 (based on
Harbor), several key differences in webhook functionality should be noted.
These changes reflect the enhanced architecture and expanded event support in
Harbor, offering greater flexibility and compatibility while addressing certain
legacy limitations.
Event Coverage:
In MSR 3, webhook notifications were primarily focused on
repository-level events, such as image push and deletion. However,
MSR 4 expands the event coverage significantly, including
notifications for:
Artifact scans (completed, stopped, or failed).
Project quota thresholds (exceeded or nearing limits).
Replication and tag retention processes.
This expanded event set allows for more granular monitoring and automation
opportunities.
Payload Format Options:
MSR 3 supported a single JSON payload format for webhook events,
designed to integrate with basic CI/CD pipelines. In contrast, MSR 4
introduces dual payload format options:
Default Format: Maintains backward compatibility for simple
integrations.
CloudEvents Format: Complies with the CloudEvents specification,
enabling integration with modern cloud-native tools and ecosystems.
Webhook Management Interface:
In MSR 3, managing webhooks required navigating a simpler interface
with limited options for customization. In MSR 4, the management UI is
more sophisticated, allowing users to configure multiple endpoints, select
specific event types, and apply authentication or SSL verification for
secure communication.
Slack Integration:
MSR 3 did not natively support direct Slack notifications. With MSR 4,
you can configure webhook notifications to integrate directly with Slack
channels, streamlining team collaboration and real-time monitoring.
Authentication and Security Enhancements:
MSR 4 enhances webhook security by supporting authentication headers and
remote certificate verification for HTTPS endpoints, which were limited or
unavailable in MSR 3.
Ease of Configuration:
The MSR 4 webhook interface provides a user-friendly experience for
creating, testing, and managing webhooks, compared to the more rudimentary
configuration options in MSR 3.
While MSR 4 webhooks offer enhanced functionality, a few MSR 3-specific
behaviors are no longer present:
Tight Coupling with Legacy Components:
MSR 3 webhooks were tightly integrated with certain Mirantis-specific
features and configurations. MSR 4’s Harbor-based webhooks embrace open
standards, which may mean that legacy integrations require adjustments.
Simplistic Event Payloads:
For users relying on MSR 3’s minimalistic payloads, the more detailed JSON
structures in MSR 4 may require updates to existing automation scripts or
parsers.
By understanding these differences and new capabilities, organizations can
better adapt their workflows and take full advantage of the modernized webhook
architecture in MSR 4.
Mirantis Secure Registry (MSR) maintains a comprehensive audit log of all image
pull, push, and delete operations. To effectively manage these logs, MSR
provides functionalities to configure audit log retention periods and to
forward logs to a syslog endpoint.
Mirantis Secure Registry (MSR) supports garbage collection, the automatic
cleanup of unused image layers. Effective management of storage resources is
crucial for maintaining optimal performance in Mirantis Secure Registry (MSR).
When images are deleted, the associated storage is not immediately reclaimed.
To free up this space, you must perform garbage collection, which removes
unreferenced blobs from the filesystem.
Access the MSR Interface: Log in with an account that has system
administrator privileges.
Navigate to Administration:
Click on the Administration tab.
Select Clean Up from the dropdown menu.
Configure Garbage Collection Settings:
Allow Garbage Collection on Untagged Artifacts:
To enable the deletion of untagged artifacts during garbage collection,
select the checkbox labeled Allow garbage collection on
untagged artifacts.
Dry Run Option:
To preview the blobs eligible for deletion and estimate the space that
will be freed without actually removing any data, click
DRY RUN.
Initiate Garbage Collection:
To start the garbage collection process immediately, click
GC Now.
Note
MSR introduces a 2-hour time window to protect recently uploaded layers from
being deleted during garbage collection. This ensures that artifacts
uploaded within the last two hours are not affected. Additionally, MSR
allows you to continue pushing, pulling, or deleting artifacts while garbage
collection is running. To prevent frequent triggering,
the GC Now button can only be activated once per minute.
To automate garbage collection at regular intervals:
Access the Garbage Collection Tab:
Navigate to Administration > Clean Up.
Select the Garbage Collection tab.
Set the Schedule:
Use the dropdown menu to choose the desired frequency:
None: No scheduled garbage collection.
Hourly: Runs at the beginning of every hour.
Daily: Runs at midnight every day.
Weekly: Runs at midnight every Saturday.
Custom: Define a custom schedule using a cron expression.
Enable Garbage Collection on Untagged Artifacts:
If you want untagged artifacts to be deleted during the scheduled garbage
collection, select the checkbox labeled Allow garbage
collection on untagged artifacts.
In the history table, check the box next to the Job ID of
the running garbage collection you wish to stop.
Stop the Job:
Click Stop.
Confirm the action in the modal that appears.
Caution
Stopping a garbage collection job will prevent it from processing additional
artifacts. However, any artifacts that have already been garbage collected
will not be restored.
By following these procedures, you can effectively manage storage resources
in Mirantis Secure Registry, ensuring optimal performance and efficient use
of space.
Purpose: Permissions allow controlled access to projects, ensuring only
authorized users can modify and interact with registry content.
Key Terms:
Project: A logical container in MSR where users can store,
manage, and share images.
User Roles: Project Admin, Maintainer, Developer, Guest—each with
specific permission levels.
Key Concepts
Security Best Practices
Least-Privilege Principle: Regularly audit and apply the minimum
required permissions.
Review and Audit: Routinely check project member lists, adjust roles
as needed, and remove users who no longer need access.
There are two System-Level Roles in MSR
Harbor System Administrator: The Harbor System Administrator role
holds the highest level of privileges within the system. In addition to
the standard user permissions, a system administrator can:
View and manage all projects, including private and public projects.
Assign administrative privileges to regular users.
Delete user accounts.
Configure vulnerability scanning policies for all images.
Manage the default public project, “library”, which is owned by
the system administrator.
Anonymous User: A user who is not logged into the system is
classified as an Anonymous User. Anonymous users have read-only access to
public projects; they cannot push images or view private projects.
ProjectAdmin: When you create a new project, you are assigned the
“ProjectAdmin” role for that project. In addition to read-write privileges,
the “ProjectAdmin” also has management privileges, such as adding and
removing members and starting a vulnerability scan.
Developer: Developer has read and write privileges for a project.
Maintainer: Maintainer has elevated permissions beyond those of
‘Developer’ including the ability to scan images, view replication jobs, and
delete images and helm charts.
Guest: Guest has read-only privilege for a specified project. They can
pull and retag images, but cannot push.
Limited Guest: A Limited Guest does not have full read privileges for a
project. They can pull images but cannot push, and they cannot see logs or
the other members of a project. For example, you can create limited guests
for users from different organizations who share access to a project.
Log in to the MSR4 web interface using your admin credentials.
Navigate to Projects from the main menu.
Click + New Project.
Project Name: Enter a unique name for your project.
Access Level: Choose between Private (restricted access) or
Public (accessible to all authenticated users).
Select Project quota limits to set a storage quota for the project,
in MiB, GiB, or TiB.
Select Proxy Cache to allow this project to act as
a pull-through cache for a particular target registry instance.
MSR4 can only act as a proxy for Docker Hub, Docker Registry, Harbor,
AWS ECR, Azure ACR, Alibaba Cloud ACR, Quay, Google GCR, GitHub GHCR,
and JFrog Artifactory registries.
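As a sketch of how clients use a proxy cache project (the host, project, and image names are placeholders), pulls are prefixed with the proxy project so MSR can fetch and cache the upstream image on demand:
docker pull <MSR-HOSTNAME>/<proxy-cache-project>/library/nginx:latest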
Tag retention rules are essential for maintaining an efficient and organized
registry. They help manage storage by defining policies that determine which
image tags to retain and which to remove. This process is crucial for
preventing the accumulation of outdated or unused images, optimizing storage
usage, and supporting organizational policies for image lifecycle management.
Key Concepts:
Tag Retention Rules: Policies that specify criteria for keeping or
deleting image tags in a registry.
Policy Filters: Parameters such as tags, repositories, or labels used to
control the application of rules.
Priority: The order in which rules are executed, allowing granular
control over tag retention or removal.
Tag retention rules are evaluated against repositories within a project to
determine which tags to keep and which to remove. By utilizing a combination
of filters—such as specific tag patterns or image age—administrators can
fine-tune retention policies to meet their organization’s needs.
Example Use Cases:
Development Projects: Retain only the latest five tags of a repository to
keep the environment clean and manageable.
Production Repositories: Retain tags with specific labels like stable or
release to ensure critical versions are preserved.
Cleanup Operations: Remove all tags older than 30 days to free up storage
space and eliminate obsolete images.
Log in to the MSR web interface using your credentials.
Navigate to Projects and select the specific project where you want
to configure tag retention.
Select Policy.
Click on Tag Retention under the project settings.
Define a New Rule
Click + New Rule to initiate the configuration process.
Select matching or excluding rule
In the Repositories drop-down menu, select matching or excluding.
Use the Repositories text box to specify the repositories to which
the rule will apply. You can define the target repositories using any of
the following formats:
A specific repository name, such as my_repo_1.
A comma-separated list of repository names, such as
my_repo_1,my_repo_2,your_repo_3.
A partial repository name with wildcard characters (*), for example:
my_* to match repositories starting with my_.
*_3 to match repositories ending with _3.
*_repo_* to match repositories containing repo in their name.
** to apply the rule to all repositories within the project.
Select by artifact count or number of days to define how many tags to
retain or the period to retain tags.
Option
Description
retain the most recently pushed # artifacts
Enter the maximum number of artifacts to retain, keeping the ones
that have been pushed most recently. There is no maximum age for an
artifact.
retain the most recently pulled # artifacts
Enter the maximum number of artifacts to retain, keeping only the
ones that have been pulled recently. There is no maximum age for an
artifact.
retain the artifacts pushed within the last # days
Enter the number of days to retain artifacts, keeping only the ones
that have been pushed during this period. There is no maximum number
of artifacts.
retain the artifacts pulled within the last # days
Enter the number of days to retain artifacts, keeping only the ones
that have been pulled during this period. There is no maximum number
of artifacts.
retain always
Always retain the artifacts identified by this rule.
Specifying Tags for Rule Application
Use the Tags text box to define the tags that the rule will target.
You can specify tags using the following formats:
A single tag name, such as my_tag_1.
A comma-separated list of tag names, such as
my_tag_1,my_tag_2,your_tag_3.
A partial tag name with wildcards (*), such as:
my_* to match tags starting with my_.
*_3 to match tags ending with _3.
*_tag_* to match tags containing tag.
** to apply the rule to all tags within the project.
The behavior of the rule depends on your selection:
If you select matching, the rule is applied only to the tags you specify.
If you select excluding, the rule is applied to all tags in the repository
except the ones you specify.
Save and Activate the Rule
Once all fields are complete, click Save. The rule will now appear in
the Tag Retention Rules table.
Under Projects, select the project for which you would like to adjust
the retention runs.
Select Policy.
Under retention rules, ensure there is a policy in place.
Under Schedule, select Hourly, Daily, Weekly, or
Custom.
Selecting Custom requires you to define a cron schedule.
Manual Execution:
Under Projects, select the project for which you would like to adjust
the retention runs.
Select Policy.
Under retention rules, ensure there is a policy in place.
You can now select DRY RUN to confirm that the run succeeds without any
adverse impact, or RUN NOW to execute the policy immediately.
Review Execution Logs:
After execution, view logs to confirm the outcome or troubleshoot issues.
Logs display details on retained and deleted tags, along with any errors
encountered.
Under Policy, then Retention runs, select the job you would like to
investigate, then select the > symbol.
You will see the policy result for each repository in the project. To
view the logs, select Log on the far right,
which shows a log per repository.
Interaction Between Tag Retention Rules and Project Quotas
The Harbor system administrator can configure project quotas to set limits on
the number of tags a project can contain and the total amount of storage it can
consume. For details about configuring project quotas, refer to Configure
Project Quotas.
When a quota is applied to a project, it acts as a strict limit that cannot be
exceeded. Even if you configure tag retention rules that would retain more
tags than the quota allows, the quota takes precedence. Retention rules cannot
override or bypass project quotas.
During the initial deployment, or when updating an existing MSR cluster,
you need to pass an additional value to the MSR Helm chart.
For more information, see Install highly available MSR.
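A minimal sketch, assuming the value in question is the Harbor chart's metrics.enabled setting; the release name, chart reference, and namespace are placeholders:
helm upgrade <RELEASE-NAME> <MSR-CHART> \
  --namespace <MSR-NAMESPACE> \
  --reuse-values \
  --set metrics.enabled=true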
You can now visualize the collected MSR metrics.
Because Prometheus is already configured as a data source in Grafana,
the only remaining step is to create a dashboard.
Mirantis provides an MSR4-specific dashboard, available at the following URL:
Artifact signing and signature verification are essential security measures
that ensure the integrity and authenticity of artifacts. MSR facilitates
content trust through integrations with
Cosign. This guide provides detailed
instructions on utilizing Cosign to sign your artifacts within MSR.
Note
Project administrators can enforce content trust, requiring all artifacts to
be signed before they can be pulled from an MSR registry.
MSR integrates support for Cosign, an OCI artifact signing and verification
solution that is part of the Sigstore project.
Cosign signs OCI artifacts and
uploads the generated signature to MSR, where it is stored as an artifact
accessory alongside the signed artifact. MSR manages the link between the
signed artifact and its Cosign signature, allowing the application of tag
retention and immutability rules to both the artifact and its signature.
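For example, a sketch of signing and verifying an artifact with the Cosign CLI; the registry host, project, and repository are placeholders:
# Generate a key pair (cosign.key and cosign.pub)
cosign generate-key-pair

# Sign an artifact already pushed to MSR; the signature is stored as an accessory
cosign sign --key cosign.key <MSR-HOSTNAME>/<project>/<repository>:<tag>

# Verify the signature with the public key
cosign verify --key cosign.pub <MSR-HOSTNAME>/<project>/<repository>:<tag>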
Signature Management: MSR treats Cosign signatures as artifact
accessories, enabling consistent management alongside the signed artifacts.
Replication Support: MSR’s replication capabilities extend to signatures,
ensuring that both artifacts and their associated signatures are replicated
together.
Limitations:
Vulnerability scans of Cosign signatures are not supported.
Only manual and scheduled replication trigger modes are applicable;
event-based replication is currently unsupported.
The information offered herein relates exclusively to upgrades between
MSR 4.x.x versions. To upgrade to MSR 4.x.x from MSR 2.x.x, or 3.x.x,
you must use the Migration Guide.
Upgrade instructions for MSR 4.0 to 4.13 coming soon
We are currently finalizing the validated upgrade path for MSR 4.0 to 4.13.
Detailed instructions will be published shortly.
If you are performing a migration from versions 2.9.x or 3.1.x, or a new
installation, refer to the existing guides:
Mirantis Secure Registry (MSR) 4, built on the Harbor open-source project,
includes powerful tools for vulnerability scanning. Scanning container images
for vulnerabilities is a critical step in ensuring your applications are secure
before deploying them into production environments. This document provides
detailed instructions for configuring and using the vulnerability scanning
features in MSR 4. By default, MSR 4 leverages Trivy, an efficient and fast
vulnerability scanner. Additionally, MSR supports advanced capabilities,
including integration with other scanners like Grype and Anchore, as well as
third-party security tools.
To get started with vulnerability scanning, follow these steps:
Enabling Vulnerability Scanning with Trivy (Default Scanner)
Log in to the MSR web console using your administrator credentials.
Navigate to the Administration section from the left-hand
navigation menu.
Under Interrogation Services, select Scanners.
Trivy is enabled as the default scanner in MSR 4.
If Trivy is not marked as “Default”, select the scanner and click the
“SET AS DEFAULT” button.
To test the connection, select the scanner, click the ACTION drop-down,
and select EDIT. In the popup, click Test Connection
to verify that Trivy is functional. If the connection is successful, save
the configuration by clicking Save.
Trivy provides fast, lightweight scanning for common vulnerabilities and
exposures (CVEs) in container images. This setup ensures all images pushed to
MSR 4 are scanned for security issues by default.
To enhance your vulnerability scanning strategy, you can integrate additional
scanners, such as Grype and Anchore, into MSR 4. These tools provide broader
coverage and specialized features for detecting vulnerabilities.
Deploy the scanner you want to add (e.g., Grype or Anchore) according to its
documentation.
In the MSR web console, navigate to Administration >
Interrogation Services > Scanners and click
+ New Scanner.
Provide the required details for the new scanner:
Name: A unique identifier for the scanner (e.g., Grype-Primary).
Endpoint URL: The API endpoint for the scanner.
Select the appropriate Authorization mechanism and provide the
appropriate credentials, tokens, or key.
Click Test Connection to validate the configuration, and then
click Add.
Once additional scanners are configured, they can be used alongside Trivy or
set as the default scanner for specific projects.
Automated scans ensure that images are evaluated for vulnerabilities
immediately when they are pushed to the registry. This helps enforce security
policies consistently across your container ecosystem.
To enable automated scans:
Navigate to Projects in the MSR web console.
Select a Project, then click Configuration.
Enable the Automatically Scan Images on Push option.
After a scan is completed, results are accessible in the MSR web console.
Navigate to the image repository in the desired project and select the
image.
Then select the artifact digest.
Scroll down to Artifacts, then Vulnerabilities.
The report includes detailed information about detected vulnerabilities,
categorized by severity (Critical, High, Medium, Low, Unknown). Export the
results in JSON or CSV format for further analysis if needed.
In addition to using Trivy and integrating scanners like Grype and Anchore,
MSR 4 supports third-party scanners to create a comprehensive vulnerability
management strategy. Leveraging multiple tools enables a layered security
approach, enhancing protection against various types of vulnerabilities and
compliance risks.
Each of these tools brings unique advantages to your container security
strategy. For instance, Aqua CSP and Sysdig Secure extend vulnerability
scanning into runtime environments, ensuring your containers remain protected
after deployment. TensorSecurity uses machine learning to identify patterns in
vulnerability data, uncovering risks that traditional scanners might miss.
Deploy the third-party scanner on your infrastructure or subscribe to its
hosted service.
Retrieve API credentials and endpoint details from the scanner’s
documentation.
Add the scanner to MSR 4 by navigating to Administration >
Interrogation Services and using the Add Scanner
workflow described earlier.
Validate the scanner’s functionality by running test scans and analyzing
the results.
By integrating third-party scanners, MSR 4 empowers you to customize your
security strategy to meet specific organizational needs and regulatory
requirements.
Mirantis Secure Registry (MSR) 4 provides a robust and flexible vulnerability
scanning solution. With Trivy enabled by default, organizations can quickly
detect and mitigate vulnerabilities in container images. The ability to
integrate additional scanners, including third-party tools, allows you to
create a comprehensive security strategy tailored to your needs.
A backup method that works with almost any storage type, including NFS,
local disks, or cloud storage that doesn’t support snapshots. Useful when
snapshots aren’t available or when fine-grained control over files is
needed.
Snapshot Backup
A fast, efficient way to back up entire volumes that is tightly integrated
with the storage provider. Ideal for cloud-native environments where CSI
snapshots are supported.
Note
Filesystem backups are NOT truly cross-platform because they capture
files and directories in a way that depends on the underlying storage
system. If you back up on AWS, for example, restoring to Azure might not
work smoothly.
Snapshot backups are also NOT cross-platform by default because they
rely on storage provider technology (like AWS EBS snapshots or Azure Disk
snapshots). However, if you use a snapshot with a data mover, you can
transfer it between cloud providers, making it more portable.
This method leverages Velero’s integration with Container Storage
Interface (CSI) drivers to create volume snapshots, providing efficient and
consistent backups for cloud-native environments.
Ensure Velero is installed with CSI snapshot support enabled. This requires
the EnableCSI flag during installation. For detailed instructions, refer to
the official Velero documentation Container Storage Interface Snapshot
Support in Velero.
CSI Driver Installation
Confirm that a compatible CSI driver is installed and configured in your
Kubernetes cluster. The CSI driver should support snapshot operations for
your storage provider.
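For example, a sketch of creating a CSI-based backup with the Velero CLI; the backup name and namespace are placeholders, and the exact flags depend on how Velero was installed:
velero backup create msr-full-backup \
  --include-namespaces <MSR-NAMESPACE> \
  --snapshot-volumes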
After the full backup, incremental backups happen automatically.
They capture only the changes since the last backup if the CSI Storage
driver supports this capability. Please check with the manufacturer of
your CSI driver.
When running incremental backups, use the --from-backup flag:
This guide provides instructions for performing a manual migration from
MSR 2.9 or 3.1 to MSR 4. Manual migration is recommended for small
environments or limited migration scopes because it transfers repository data
only. Permissions and policies are not included.
Manual migration is easy to implement and does not require additional tools.
Use this guide if you need to preserve your existing registry content and
organizational layout while maintaining full control over each migration
step.
Before you begin the migration process, complete the following
steps to ensure a smooth and secure transition:
Administrative access
Confirm that you have administrative access to both source
(MSR 2.9 and MSR 3.1) and target (MSR 4.x) environments to read all source
data and configure the destination from your migration workstation.
Backup
Perform a full backup of existing data to prevent any data loss in case of a
misstep:
Ensure that the target system has sufficient storage capacity to accommodate
all migrated artifacts. The storage must be separate from MSR 2.9 or MSR 3.1.
The PostgreSQL database must have enough space for the following:
Current MSR RethinkDB
Plus 25% overhead
The BLOB storage must have enough space for the following:
Current used storage
Extra space for new images, based on your requirements
When migrating from MSR 2.x or MSR 3.x to MSR 4.x, Helm charts do not
automatically migrate.
You must manually migrate any existing Helm charts to the new environment.
To migrate images, repositories, and tags from an MSR 2.x or MSR 3.x
environment to an MSR 4.x environment, follow these steps:
Access the MSR Web UI.
Navigate to Administration → Registries.
Select New Endpoint to add a new registry connection.
Fill in the pop-up with the following details:
Provider: DTR
Name: <your-identifier>
Endpoint URL: <root-of-the-registry>
Access ID: <admin-username>
Access Secret: <admin-password>
Note
Avoid specifying a user or repository namespace, as this will restrict
access. Using the root enables full crawling of the host.
Navigate to Administration → Replications.
Select New Replication Rule to create a replication rule.
In the pop-up window, review and confirm the following settings:
Replication mode: Ensure it is set to Pull-based.
Source registry: Verify that the MSR 2 and MSR 3 hosts added in
previous steps are listed.
Source resource filter: Ensure the Name field is set to **,
with all other fields left blank.
Destination: Make sure flattening is set to Flatten1Level.
If your environment uses an organization namespace in MSR 2 or MSR 3,
you may choose an alternative flattening option.
Click to learn more about flattening options
You can choose to flatten or retain the original structure of any
organization or namespace.
Enabling the flattening option will merge all content into a single
namespace (ns). If your organization uses a more flexible
namespace or organizational structure, review the following guidelines
to understand how flattening may affect your setup:
Flatten All Levels: a/b/c/d/img → ns/img
No Flattening: a/b/c/d/img → ns/a/b/c/d/img
Flatten 1 Level: a/b/c/d/img → ns/b/c/d/img
Flatten 2 Levels: a/b/c/d/img → ns/c/d/img
Flatten 3 Levels: a/b/c/d/img → ns/d/img
The term Levels refers to the directory depth of the source
path (a/b/c/d/img).
Select the rule created in the previous step and click
Replicate. Be aware that pulling down the entire host may take
some time to complete.
To check the status of the replication process, click the job ID.
After migrating to MSR 4, several settings do not carry over automatically.
Below are key aspects to consider after a successful migration:
Configuration area
Required actions
Project Visibility
Project visibility (public/private) must be configured manually.
In MSR 3.x, private and public image repositories could coexist under
a single organization. In MSR 4, visibility is set only at the project
level. Mixed public/private repositories under one organization in
MSR 3.x must be manually adjusted.
Project Permissions
MSR 4 organizes repositories within projects. Ensure that project-level
permissions are properly recreated.
See: Managing Project Permissions.
Registry Replication
Re-establish any replication or mirroring rules and schedules in MSR 4.
See: Configuring Replication.
Image Tag Retention
Manually configure existing retention policies for images in MSR 4
to ensure appropriate lifecycle management.
See: Managing Tag Retention Rules.
Pruning behavior in MSR 4 differs fundamentally from earlier versions.
While previous releases used pruning policies to remove images that matched
defined criteria, MSR 4 introduces retention policies, which are based on
preserving images that meet certain tag patterns.
Use the mapping guide below to manually translate existing pruning rules into
MSR 4 retention policies.
This guide offers comprehensive, step-by-step instructions for migrating
artifacts from Mirantis Secure Registry (MSR) versions 2.9 and 3.1 to MSR 4
using the official migration tool.
The migration process is designed as an A/B operation. Your existing MSR
deployment remains active and unaffected while data is copied to a new MSR 4.x
instance. The migration tool runs independently on a separate host with
network access to both source and destination environments. This design
ensures operational continuity and limits risk to the current deployment.
Key characteristics of the migration:
Migration is non-disruptive to your existing MSR system until the final
cutover.
Metadata is transferred using offline copies for consistency.
The database backend changes from RethinkDB to PostgreSQL.
Team names and repository paths may change. You will need to update pipelines
accordingly.
Image data migration can take a significant amount of time, depending on
attributes of the customer environment such as image and layer count and
size, as well as network and storage capabilities. It may be scheduled
to manage network and storage usage, or run immediately.
To minimize downtime during the final cutover, image migration can be
repeated to reduce the size of the remaining delta before the last sync.
Mirantis Secure Registry (MSR) 4 represents a significant evolution in managing
container images and associated metadata. The transition introduces a new
architecture centered around projects, improved security models,
and streamlined policy-based configuration.
The transition may take a significant amount of time, depending on
your system and data volume. However, your current MSR instance may remain
fully operational throughout the migration, allowing you to continue work
without interruption.
Most core data will be transferred automatically, but some settings and
features require manual reconfiguration after migration. Understanding what is
and is not migrated will help you plan the migration effectively.
The following items must be recreated or reconfigured after the migration:
Audit Logs
Set up new logging and compliance monitoring mechanisms.
API Updates
Some endpoints have changed; update as needed to maintain
automation and tooling compatibility.
Authentication
SAML support is removed. Use LDAP or OIDC instead.
Certificate Management
Define retention and cleanup rules in the new system.
Garbage Collection Settings
Manually reconfigure garbage collection policies in MSR 4.
Image Tag Retention
Reconfigure rules to manage image lifecycle in MSR 4.
Labels
Update image and repository labels.
Local Groups and Users
Manually recreate any local groups and users that are defined only in Enzi
and not managed by an external identity provider.
Project Permissions
Depending on your permission settings you may need
to recreate user and team access rules using MSR 4’s project-level model.
Project Visibility
Set project visibility manually for each project.
MSR 4 does not support mixed visibility within a single organization
as shown in the diagram below:
Pruning Policies
Configure pruning policies manually.
These settings cannot be imported directly, as MSR 4 uses reversed logic
when evaluating pruning rules.
Scanning Settings
Enable and configure Trivy to support image vulnerability scanning in MSR 4.
Signed Images
Existing image signatures are not preserved. They need to be re-signed using
Cosign.
Tag Immutability
Tag immutability is configured at the project level, and must be set up
manually for each relevant project.
However, if a repository had tag immutability previously set to false,
there is no need to apply a new tag immutability rule after the migration.
Tokens
Tokens from previous versions are not preserved. Generate new
tokens in MSR 4.
Webhooks
Recreate and redirect webhooks to MSR 4 endpoints.
The following features are not supported in MSR 4:
Swarm Support
While MSR 4 no longer supports Swarm HA clusters,
single-instance deployments remain viable for Swarm users, though not
recommended for production use.
For more information visit Install MSR single host using Docker Compose.
Promotion Policies
Automate promotion workflows through updated CI/CD pipelines.
Before you begin the migration process, complete the following
steps to ensure a smooth and secure transition:
Administrative access
Confirm that you have administrative access to both source
(MSR 2.9 and MSR 3.1) and target (MSR 4.x) environments to read all source
data and configure the destination from your migration workstation.
Backup
Perform a full backup of existing data to prevent any data loss in case of a
misstep:
Ensure that the target system has sufficient storage capacity to accommodate
all migrated artifacts. The storage must be separate from MSR 2.9 or MSR 3.1.
The PostgreSQL database must have enough space for the following:
Current Enzi RethinkDB
Current MSR RethinkDB
Plus 25% overhead
The BLOB storage must have enough space for the following:
Current used storage
Extra space for new images, based on your requirements
Plus at least 5% overhead for working space
Migration workstation
Set up a dedicated migration workstation to manage the migration process.
This workstation must have:
This guide assumes you are working on a dedicated migration workstation,
a machine with access to both the source and destination environments,
used for managing the migration.
Save the manage_source_registry_db.sh script to your local machine.
This script copies the Enzi and MSR databases and starts local instances.
Click for the script
#!/bin/bashset-euopipefail
SCRIPT_VERSION="1.0.1"# Default portsENZI_RETHINKDB_PORT=28015ENZI_CLUSTER_PORT=29015MSR_RETHINKDB_PORT=28016MSR_CLUSTER_PORT=29016SCRIPT_NAME=$(basename"$0")
check_client_bundle_sourced(){if[[-z"${DOCKER_HOST:-}"]]||[[-z"${DOCKER_TLS_VERIFY:-}"]]||[[-z"${DOCKER_CERT_PATH:-}"]];thenechoecho"WARNING: Docker client environment variables not detected."echo"It is recommended to source the MKE admin client bundle (e.g., 'source env.sh')"echo"to ensure access to the source registry cluster."echofi}
show_help(){echoecho"Overview:"echo" Use this script to copy and expose the source registry databases (MKE auth"echo" store and MSR DB store) to the MSR 4 migration tool."echoecho"Prerequisites:"echo" All prerequisites apply to the system where this script is executed."echo" - Docker (or MCR) installed and running (see https://docs.docker.com/get-docker)."echo" - RethinkDB installed (see https://rethinkdb.com/docs/install)."echo" - MKE admin client bundle applied to access the source registry cluster (see"echo" https://docs.mirantis.com/mke/3.8/ops/access-cluster/client-bundle/download-client-bundle.html)."echoecho"Usage:"echo" $SCRIPT_NAME [options]"echoecho"Options:"echo" -c, --copy Copy both eNZi and MSR databases (requires Docker)"echo" --copy-enzidb Copy only the eNZi DB (requires Docker)"echo" --copy-msrdb Copy only the MSR DB (requires Docker)"echo" -e, --start-enzidb Start eNZi DB (requires RethinkDB)"echo" -m, --start-msrdb Start MSR DB (requires RethinkDB)"echo" --enzi-driver PORT Override eNZi driver port (default: 28015)"echo" --enzi-cluster PORT Override eNZi cluster port (default: 29015)"echo" --msr-driver PORT Override MSR driver port (default: 28016)"echo" --msr-cluster PORT Override MSR cluster port (default: 29016)"echo" -v, --version Show script version"echo" -h, --help Show this help message"echoecho"Notes:"echo" The --start-enzidb and --start-msrdb options run RethinkDB in the foreground (i.e. blocking)."echo" The script will not return until the database process exits."echo" Do not use both options in the same invocation (use a separate terminal for each)."echoecho"Examples:"echo" $ # Copy and start the MKE auth store (eNZi) DB"echo" $ ./$SCRIPT_NAME --copy-enzidb --start-enzidb"echoecho" $ # Copy and start the MSR DB"echo" $ ./$SCRIPT_NAME --copy-msrdb --start-msrdb"echoexit0}
error_missing_binary(){echo"Error: Required binary '$1' is not installed or not in PATH.">&2exit1}
check_docker(){if!command-vdocker>/dev/null2>&1;thenerror_missing_binary"docker"fi}
check_rethinkdb(){if!command-vrethinkdb>/dev/null2>&1;thenerror_missing_binary"rethinkdb"fi}
copyEnziDb(){check_docker
mkdir-pdb_data
echo"Copying eNZi DB from Swarm leader..."# Step 1: Get Swarm leader hostnamelocalLEADER_HOSTNAME
LEADER_HOSTNAME=$(dockernodels--format'{{.Hostname}}\t{{.ManagerStatus}}'|awk'$2 == "Leader" {print $1}')if[-z"$LEADER_HOSTNAME"];thenecho"ERROR: Could not identify Swarm leader node.">&2exit1fiecho"Swarm leader is: $LEADER_HOSTNAME"# Step 2: Find matching containerlocalCONTAINER
CONTAINER=$(dockerps-a--format'{{.Names}}'|grep"$LEADER_HOSTNAME/ucp-auth-store")if[-z"$CONTAINER"];thenecho"ERROR: Could not find ucp-auth-store container on leader node ($LEADER_HOSTNAME).">&2exit1fiecho"Using container: $CONTAINER"# Step 3: Perform the copy with retrieslocalRETRIES=3localSUCCESS=falseforiin$(seq1$RETRIES);doifdockercp"$CONTAINER:/var/data"db_data/enzi;thenSUCCESS=truebreakfiecho"Retry $i failed. Retrying in 3 seconds..."sleep3doneif!$SUCCESS;thenecho"ERROR: Failed to copy eNZi DB after $RETRIES attempts.">&2exit1fi}
copyMsrDb(){check_docker
mkdir-pdb_data
echo"Copying MSR DB..."REPLICA_ID=$(dockercontainerls--format'{{.Names}}'-fname=dtr-rethink|awk-F'-''{print $NF}'|sort|head-n1)if[[-z"$REPLICA_ID"]];thenecho"Error: Could not determine DTR replica ID.">&2exit1filocalRETRIES=3localSUCCESS=falseforiin$(seq1$RETRIES);doifdockercpdtr-rethinkdb-"$REPLICA_ID":/datadb_data/msr;thenSUCCESS=truebreakfiecho"Retry $i failed. Retrying in 3 seconds..."sleep3doneif!$SUCCESS;thenecho"ERROR: Failed to copy MSR DB after $RETRIES attempts.">&2exit1fi}
startEnziDb(){check_rethinkdb
echo"Starting eNZi DB on driver port $ENZI_RETHINKDB_PORT and cluster port $ENZI_CLUSTER_PORT..."rethinkdb--bindall--no-update-check--no-http-admin\--directory./db_data/enzi/rethinkdb\--driver-port"$ENZI_RETHINKDB_PORT"\--cluster-port"$ENZI_CLUSTER_PORT"}
startMsrDb(){check_rethinkdb
echo"Starting MSR DB on driver port $MSR_RETHINKDB_PORT and cluster port $MSR_CLUSTER_PORT..."rethinkdb--bindall--no-update-check--no-http-admin\--directory./db_data/msr/rethink\--driver-port"$MSR_RETHINKDB_PORT"\--cluster-port"$MSR_CLUSTER_PORT"}# FlagsCOPY_DB=falseCOPY_ENZI=falseCOPY_MSR=falseSTART_ENZI=falseSTART_MSR=false# Parse argumentsTEMP=$(getopt-ocemhv--longcopy,copy-enzidb,copy-msrdb,start-enzidb,start-msrdb,help,version,enzi-driver:,enzi-cluster:,msr-driver:,msr-cluster:-n"$SCRIPT_NAME"--"$@")if[$?!=0];thenshow_help;fievalset--"$TEMP"whiletrue;docase"$1"in-c|--copy)COPY_DB=true;shift;;--copy-enzidb)COPY_ENZI=true;shift;;--copy-msrdb)COPY_MSR=true;shift;;-e|--start-enzidb)START_ENZI=true;shift;;-m|--start-msrdb)START_MSR=true;shift;;--enzi-driver)ENZI_RETHINKDB_PORT="$2";shift2;;--enzi-cluster)ENZI_CLUSTER_PORT="$2";shift2;;--msr-driver)MSR_RETHINKDB_PORT="$2";shift2;;--msr-cluster)MSR_CLUSTER_PORT="$2";shift2;;-v|--version)echo"$SCRIPT_NAME version $SCRIPT_VERSION"exit0;;-h|--help)show_help;;--)shift;break;;*)echo"Unexpected option: $1";show_help;;esacdone# Show help if no actionable options were passedif!$COPY_DB&&!$COPY_ENZI&&!$COPY_MSR&&!$START_ENZI&&!$START_MSR;thenshow_help
fi# Prevent simultaneous start (both are blocking)if$START_ENZI&&$START_MSR;thenechoecho"ERROR: Cannot start both eNZi and MSR DBs in the same script run."echo"These are blocking processes. Please run them in separate terminal sessions."echoexit1fi# Prevent mismatched copy/start combinations unless using --copyif!$COPY_DB;thenif{$COPY_ENZI&&$START_MSR;}||{$COPY_MSR&&$START_ENZI;};thenechoecho"ERROR: Cannot mix eNZi and MSR operations in a single invocation."echo"For example, do not use --copy-msrdb with --start-enzidb."echo"Use consistent options for the same registry component."echoexit1fifi# Warn if copying without client bundleif$COPY_DB||$COPY_ENZI||$COPY_MSR;thencheck_client_bundle_sourced
fi# Perform copyif$COPY_DB||$COPY_ENZI;thencopyEnziDb
fiif$COPY_DB||$COPY_MSR;thencopyMsrDb
fi# Start DBsif$START_ENZI;thenstartEnziDb
fiif$START_MSR;thenstartMsrDb
fi
Start the required local databases:
Note
You need to source a client bundle that has access to the source registry
to use the copy commands.
MSR 2.9
Important
Both commands must be executed, and the processes must remain
active throughout the migration.
Select one of the following options to ensure they stay running:
Open each command in a separate terminal window or tab.
Run each command in the background by appending &.
Enzi database access
To copy and start a local Enzi database instance, run:
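Based on the usage examples in the script's help text, the command takes the following form; run it from the directory where you saved the script:
./manage_source_registry_db.sh --copy-enzidb --start-enzidb
The analogous MSR database command (--copy-msrdb --start-msrdb) must run in a separate terminal session, because each start option blocks until the database process exits.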
The secret key in Harbor is required for replicating container images.
Configure the replication schedule in the config/config.env file.
If you are running the migration immediately, update the default cron value
to match your intended schedule.
To migrate images, repositories, and tags from an MSR 2.9 or MSR 3.1
environment to MSR 4.x, you can either run the migration as a single
comprehensive operation, which is the recommended path, or break it into
specific steps if needed.
The migration tool supports both full and partial migrations, with detailed
options described in the --help flag and active configuration in
the --config flag.
During migration, source organizations and repositories are recreated as
projects. You can configure replication behavior both during and after
migration using the options provided by the migration tool.
To migrate repositories as projects:
Run the migration tool with the --projects flag to prepare the MSR 2.9
or 3.1 repositories for migration:
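A minimal sketch of the invocation, assuming a hypothetical entry point named ./migrate; substitute the actual command provided with the migration tool.

./migrate --help        # review all available migration options
./migrate --projects    # prepare and import the repositories as MSR 4 projects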
The migration tool first exports data from MSR and Enzi. It then processes
this data to import all repositories into MSR 4.
Exported data is stored in the csv directory, while data prepared
for import resides in the sql directory.
Optional. Verify that the data has been exported:
Check the ./csv directory for the exported data:
ls -l csv
Within the csv directory, all exported files are prefixed with
either msr_ or enzi_, indicating their source. Files prefixed
with harbor_ represent data migrated to MSR 4, exported for
verification purposes.
Check the ./sql directory for the SQL files that contain the data to be
imported into MSR 4:
ls -l sql
The migration recreates source organizations and repositories as projects.
Open the MSR web UI and verify that the projects are visible.
The migration process may take a significant amount of time, depending on
factors such as storage and network speed, and the volume of data in your
project.
To verify that all replication tasks have completed, run the following
command with your environment-specific values:
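As an illustration, replication execution status can also be checked through the MSR 4 (Harbor) API; the host, credentials, and the jq dependency below are placeholders and assumptions.

# List replication executions that are still running; an empty list means all tasks have completed.
curl -sk -u "<admin-user>:<password>" \
  "https://<msr4-host>/api/v2.0/replication/executions?status=InProgress" | jq .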
In MSR 4, repositories and organizations are migrated as projects.
As a result, permissions are added at the organization project level, and do
not follow the same inheritance structure as in earlier MSR versions.
See What to expect when transitioning to MSR4 for a detailed description.
Warning
If the permissions apply to business-critical target paths,
migrate them manually to ensure accuracy and avoid disruptions.
To migrate permissions to MSR 4, you must transfer:
Team access at the repository level.
Team access at the organization (namespace) level.
Ensure that MSR 4 authorization is properly configured so that the
Groups section is enabled in the main menu.
Refer to Authentication Configuration for setup instructions.
Optional. Configure permission migration in the config/config.env file:
Specify whether the organization name is added as a prefix (default)
or suffix to team names by setting the value to prefix or suffix
in the configuration.
If all group names are already unique across the environment,
you can prevent MSR from appending the organization name during import by
setting:
IS_ENZI_TEAM_NAME_UNIQUE=True
Warning
Do not modify these environment variables after the migration begins.
Changing them mid-process may cause duplicate groups or inconsistent team
references.
Export groups data from MSR and Enzi, and import it into MSR 4:
Confirm that group data appears under Groups in the MSR web UI.
Note
If the Groups section is missing from the main menu, LDAP may
not be configured. See LDAP Authentication for instructions on
how to set up user authentication.
Migrate team permissions for namespaces and repositories:
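A minimal sketch, assuming the same hypothetical ./migrate entry point as above and that the --members command described in the command reference performs this step.

./migrate --members     # hypothetical: migrate team memberships and permissions into MSR 4 project members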
Follow the steps below to migrate push and poll mirroring policies.
Each set of policies can be exported, triggered, and optionally reconfigured
to use manual scheduling.
Verify the imported policies in Administration > Replications.
All push mirroring policies will have the prefix push-. Each policy is
migrated with its associated registry.
Verify the imported policies in Administration > Replications.
All poll mirroring policies will have the prefix pull-. Each policy is
migrated with its associated registry.
This section outlines optional steps you can take to ensure that the data was
imported successfully. These steps verify the artifacts generated by the
migration tool and help confirm that it produced the expected outputs and
applied the correct translations and naming conventions.
Core validation procedures are already built into the
migration workflow. To ensure all required checks are completed, follow
the validation steps provided in every step of the migration guide.
To verify that all repositories have been migrated:
Truncate and sort the data on both versions of MSR:
Count how many namespace and repository name entries exist in the original
MSR data:
cat msr_repo | wc -l
Repeat the process for MSR 4 data:
cat harbor_repo | wc -l
Compare the results. The MSR 4 output should have exactly one more entry.
This extra entry comes from the default library repository included with
the MSR 4 instance.
To make the lists comparable, remove the library project entry from the
MSR 4 results.
Use vimdiff or a similar tool to compare the files and confirm that
repository names match between MSR versions. A combined sketch of these
verification steps follows the note below.
Note
vimdiff is not included in the container and must be installed
separately if used.
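A minimal sketch of the comparison flow described above, assuming the exported lists are the msr_repo and harbor_repo files referenced in the earlier steps; adjust the grep pattern to match how the library project appears in your export.

cat msr_repo | wc -l                                    # entries exported from the source MSR
cat harbor_repo | wc -l                                 # entries in MSR 4 (expect exactly one more)
grep -v '^library' harbor_repo > harbor_repo.trimmed    # drop the default library project
sort msr_repo > msr_repo.sorted
sort harbor_repo.trimmed > harbor_repo.sorted
diff msr_repo.sorted harbor_repo.sorted                 # or: vimdiff msr_repo.sorted harbor_repo.sorted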
Compare the contents of msr_groups and msr4_groups. Verify whether
group names have been correctly prefixed by their namespaces. Use tools such
as delta or mlr for a side-by-side comparison. These tools are
available both locally and within the migration tool container.
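For example, either of the following side-by-side comparisons can be used; delta is the tool mentioned above, and diff -y is a standard fallback.

delta msr_groups msr4_groups
diff -y msr_groups msr4_groups | less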
After upgrading MSR, several settings will not carry over automatically.
Below are the key configuration areas and required actions to consider after
a successful migration:

Project Visibility: Project visibility (public/private) must be configured
manually. In MSR 3.x, private and public image repositories could coexist
under a single organization. In MSR 4, visibility is set only at the project
level. Mixed public/private repositories under one organization in MSR 3.x
must be manually adjusted.

Project Permissions: MSR 4 organizes repositories within projects. Ensure
that project-level permissions are properly recreated.
See: Managing Project Permissions.

Registry Replication: Re-establish any replication or mirroring rules and
schedules in MSR 4.
See: Configuring Replication.

Image Tag Retention: Manually configure existing retention policies for
images in MSR 4 to ensure appropriate lifecycle management.
See: Managing Tag Retention Rules.
Pruning behavior in MSR 4 differs fundamentally from earlier versions.
While previous releases used pruning policies to remove images that matched
defined criteria, MSR 4 introduces retention policies, which are based on
preserving images that meet certain tag patterns.
Use the mapping guide below to manually translate existing pruning rules into
MSR 4 retention policies.
Re-running the script with --trigger-replication-rules re-enables
scheduled execution for all replication rules created by the migration.
The schedule is defined by the REPLICATION_TRIGGER_CRON environment variable.
Use the appropriate command-line flags based on the replication policy type:
--trigger-push-replication-rules and
--remove-push-replication-rules-trigger for push policies
--trigger-pull-replication-rules and
--remove-pull-replication-rules-trigger for pull policies
Before performing any deprecating operations, use
--export-all-replication-rules to back up all replication rules from
the replication_policy table in MSR 4.
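A hedged sketch of the recommended order of operations, again using the hypothetical ./migrate entry point; the flags themselves are the ones listed above.

./migrate --export-all-replication-rules              # back up all replication rules first
./migrate --trigger-push-replication-rules            # re-enable scheduled execution for push policies
./migrate --remove-push-replication-rules-trigger     # later, revert push policies to manual scheduling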
This guide provides a reference for using the MSR (Mirantis Secure Registry)
migration tool to map data from older MSR (2.9 or 3.1) tables to MSR 4.
The tool can run one or multiple commands in a single execution, depending on
your migration needs.
The reference includes:
Command Reference – Provides a detailed breakdown of each migration
tool command and its mapping between MSR versions.
This table lists the most frequently used commands in the
Mirantis Secure Registry (MSR) migration tool,
along with their equivalent entities in both the source MSR and the target MSR 4.
This section provides a detailed breakdown of each command used in the MSR
migration tool, including its behavior, the transformations applied, and the
affected database tables.
Exports repositories and namespaces. The namespace name is prefixed to the
repository name to avoid issues with accessLevel permissions. The
project_metadata table on MSR 4 is populated with information such as
auto_scan (from scanOnPush on MSR) or public (from visibility
on MSR).
Additionally, quota and quota_usage tables on MSR 4 are populated
during project migration. These tables reference the project_id.
During migration, the tool initializes:
Exports team permissions. In MSR 4, project membership is per project, not
per repository. Therefore, a team on MSR 2.9 or MSR 3.1 is migrated as
a project member on MSR 4.
The repository_team_access table, which contains teamId and
repositoryId mappings, is used to populate the project_member
table by referencing a project_id. Therefore, projects must be created
before this step; otherwise, an error will occur. Each team is assigned an
entity_type of group, and roles are mapped as shown in the table below.
Team role mapping:
MSR 2.9 / MSR 3.1 role: admin
MSR 2.9 / MSR 3.1 permissions: All permissions on given repository
MSR 4 role: Project Admin
MSR 4 permissions: All permissions on given repository
MSR 4 DB role type: 1

MSR 2.9 / MSR 3.1 role: read-write
MSR 2.9 / MSR 3.1 permissions: Same as read-only + Push + Start Scan + Delete Tags
MSR 4 role: Maintainer
MSR 4 permissions: Same as Limited Guest + Push + Start Scan + Create/Delete Tags + etc
MSR 4 DB role type: 4

MSR 2.9 / MSR 3.1 role: read-only
MSR 2.9 / MSR 3.1 permissions: View/Browse + Pull
MSR 4 role: Limited Guest
MSR 4 permissions: See a list of repositories + See a list of images + Pull Images + etc
Exports LDAP groups. Because group names must be unique in MSR 4, each group is
prefixed with its organization name in the format
<organization>-<groupname>. This naming convention helps prevent name
collisions. The LDAP group distinguished name (DN) in MSR 4 is set using the
groupDN field from Enzi.
Exporting LDAP groups migrates only the group definitions; it does not include
memberships or permissions. To migrate those, use the --members command.
Mirantis Secure Registry 4 subscriptions provide access to
prioritized support for designated contacts from your company, agency, team, or
organization. MSR4 service levels are based on your subscription level and the
cloud or cluster that you designate in your technical support case.
The CloudCare Portal is the contact point
through which customers with technical issues can interact directly with
Mirantis.
Access to the CloudCare Portal requires prior internal authorization, and an
email verification step. Once you have verified your contact details and
changed your password, you can access all cases and purchased resources.
Note
Once Mirantis has set up its backend systems at the start of the support
subscription, a designated internal administrator can appoint additional
contacts. Thus, if you have not received and verified an invitation to the
CloudCare Portal, you can arrange with your designated administrator to
become a contact. If you do not know who your designated administrator is,
or you are having problems accessing the CloudCare Portal, email Mirantis
support at support@mirantis.com.
Retain your Welcome to Mirantis email, as it contains
information on how to access the CloudCare Portal, guidance on submitting
new cases, managing your resources, and other related issues.
If you have a technical issue you should first consult the knowledge base,
which you can access through the Knowledge tab of the CloudCare
Portal. You should also review the MSR4 product documentation and
Release Notes prior to filing a technical case, as the problem may have
been fixed in a later release, or a workaround solution may be available for a
similar problem.
One of the features of the CloudCare Portal is the ability to associate
cases with a specific MSR4 cluster. The associated clusters are referred to in
the Portal as Clouds. Mirantis pre-populates your customer account with one or
more clouds based on your subscription(s). You may also create and manage
your Clouds to better match the way in which you use your subscription.
Mirantis also recommends and encourages that you file new cases based on a
specific Cloud in your account. This is because most Clouds also have
associated support entitlements, licenses, contacts, and cluster
configurations. These submissions greatly enhance the ability of Mirantis to
support you in a timely manner.
To locate existing Clouds associated with your account:
Click the Clouds tab at the top of the portal home page.
Navigate to the appropriate Cloud and click on the Cloud name.
Verify that the Cloud represents the correct MSR4 cluster and support
entitlement.
Click the New Case button near the top of the Cloud page to
create a new case.
Obtain full-cluster support bundle using the MKE web UI
To obtain a full-cluster support bundle using the MKE web UI:
Log in to the MKE web UI as an administrator.
In the left-side navigation panel, navigate to
<user name> and click Support Bundle. The support
bundle download will require several minutes to complete.
Note
The default name for the generated support bundle file is
docker-support-<cluster-id>-YYYYmmdd-hh_mm_ss.zip. Mirantis suggests
that you not alter the file name before submitting it to the customer
portal. However, if necessary, you can add a custom string between
docker-support and <cluster-id>, as in:
docker-support-MyProductionCluster-<cluster-id>-YYYYmmdd-hh_mm_ss.zip.
Submit the support bundle to Mirantis Customer Support by clicking
Share support bundle on the success prompt that displays
once the support bundle has finished downloading.
Fill in the Jira feedback dialog, and click Submit.
Obtain full-cluster support bundle using the MKE API
To obtain a full-cluster support bundle using the MKE API:
Create an environment variable with the user security token:
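A hedged example of creating the token variable; the MKE host, credentials, and the jq utility are placeholders and assumptions.

# Request a session token from MKE and store it for later API calls.
AUTHTOKEN=$(curl -sk -d '{"username":"<mke-username>","password":"<mke-password>"}' \
  https://<mke-host>/auth/login | jq -r .auth_token)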
If SELinux is enabled, include the --security-opt label=disable flag.
Note
The CLI-derived support bundle only contains logs for the node on which
you are running the command. If you are running a high availability
MKE cluster, collect support bundles from all manager nodes.
Obtain support bundle using the MKE CLI with PowerShell
To obtain a support bundle using the MKE CLI with PowerShell:
Run the following command on Windows worker nodes to collect the support
information and have it placed automatically into a .zip file:
An attacker can craft an input to the Parse functions that would be
processed non-linearly with respect to its length, resulting in extremely
slow parsing. This could cause a denial of service.
Matching of hosts against proxy patterns can improperly treat an IPv6
zone ID as a hostname component. For example, when the NO_PROXY
environment variable is set to *.example.com, a request to
[::1%25.example.com]:80 will incorrectly match and not be proxied.
The net/http package improperly accepts a bare LF as a line terminator
in chunked data chunk-size lines. This can permit request smuggling if
a net/http server is used in conjunction with a server that incorrectly
accepts a bare LF as part of a chunk-ext.
The tokenizer incorrectly interprets tags with unquoted attribute values
that end with a solidus character (/) as self-closing. When directly
using Tokenizer, this can result in such tags incorrectly being marked
as self-closing, and when using the Parse functions, this can result in
content following such tags as being placed in the wrong scope during
DOM construction, but only when tags are in foreign content
(e.g. <math>, <svg>, etc contexts).
Helm is a tool for managing Charts. A chart archive file can be crafted
in a manner where it expands to be significantly larger uncompressed than
compressed (e.g., >800x difference). When Helm loads this specially
crafted chart, memory can be exhausted causing the application to
terminate. This issue has been resolved in Helm v3.17.3.
Helm is a package manager for Charts for Kubernetes. A JSON Schema file
within a chart can be crafted with a deeply nested chain of references,
leading to parser recursion that can exceed the stack size limit and
trigger a stack overflow. This issue has been resolved in Helm v3.17.3.
Open Policy Agent (OPA) is an open source, general-purpose policy engine.
Prior to version 1.4.0, when run as a server, OPA exposes an HTTP Data
API for reading and writing documents. Requesting a virtual document
through the Data API entails policy evaluation, where a Rego query
containing a single data document reference is constructed from the
requested path. This query is then used for policy evaluation. A HTTP
request path can be crafted in a way that injects Rego code into the
constructed query. The evaluation result cannot be made to return any
other data than what is generated by the requested path, but this path
can be misdirected, and the injected Rego code can be crafted to make
the query succeed or fail; opening up for oracle attacks or, given the
right circumstances, erroneous policy decision results. Furthermore, the
injected code can be crafted to be computationally expensive, resulting
in a Denial Of Service (DoS) attack. This issue has been patched in
version 1.4.0. As a workaround, limit network access to OPA’s
RESTful APIs to localhost and/or trusted networks, unless broader access is
necessary for production reasons.
containerd is an open-source container runtime. A bug was found in the
containerd’s CRI implementation where containerd, starting in version
2.0.1 and prior to version 2.0.5, doesn’t put usernamespaced containers
under the Kubernetes’ cgroup hierarchy, therefore some Kubernetes limits
are not honored. This may cause a denial of service of the Kubernetes
node. This bug has been fixed in containerd 2.0.5+ and 2.1.0+. Users
should update to these versions to resolve the issue. As a workaround,
disable usernamespaced pods in Kubernetes temporarily.
gorilla/csrf provides Cross Site Request Forgery (CSRF) prevention
middleware for Go web applications & services. Prior to 1.7.2,
gorilla/csrf does not validate the Origin header against an allowlist.
It executes its validation of the Referer header for cross-origin
requests only when it believes the request is being served over TLS.
It determines this by inspecting the r.URL.Scheme value. However, this
value is never populated for “server” requests per the Go spec, and so
this check does not run in practice. This vulnerability allows an
attacker who has gained XSS on a subdomain or top level domain to
perform authenticated form submissions against gorilla/csrf protected
targets that share the same top level domain. This vulnerability is
fixed in 1.7.2.
setuptools is a package that allows users to download, build, install,
upgrade, and uninstall Python packages. A path traversal vulnerability in
PackageIndex is present in setuptools prior to version 78.1.1. An
attacker would be allowed to write files to arbitrary locations on the
filesystem with the permissions of the process running the Python code,
which could escalate to remote code execution depending on the context.
Version 78.1.1 fixes the issue.
When deploying MSR in High Availability mode using Helm on Red Hat Enterprise
Linux (RHEL) 9.4 or later, installation may fail due to a
segmentation fault in the bg_mon module.
This issue occurs when PostgreSQL is deployed using the zalando/spilo image.
The failure manifests with the following error messages:
In the harbor-core pod:
2025-06-24T07:58:01Z [INFO] [/common/dao/pgsql.go:135]: Upgrading schema for pgsql ...
2025-06-24T07:58:01Z [ERROR] [/common/dao/pgsql.go:140]: Failed to upgrade schema, error: "Dirty database version 11. Fix and force version."
2025-06-24T07:58:01Z [FATAL] [/core/main.go:204]: failed to migrate the database, error: Dirty database version 11. Fix and force version.
On the node hosting the msr-postgres pod:
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd[1]: Created slice Slice /system/systemd-coredump.
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd[1]: Started Process Core Dump (PID 34335/UID 0).
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd-coredump[34336]: Process 27789 (postgres) of user 101 dumped core.
Workaround:
Exclude the bg_mon module from the PostgreSQL configuration:
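A hypothetical sketch only: one common way to exclude the module with the zalando/spilo image is to override the shared_preload_libraries parameter, for example through Spilo's SPILO_CONFIGURATION environment variable. The StatefulSet name, the library list, and the exact override mechanism depend on your MSR 4 chart version, so treat every value below as an assumption.

# Hypothetical: redefine shared_preload_libraries without bg_mon for the Spilo-based PostgreSQL.
kubectl set env statefulset/msr-postgres SPILO_CONFIGURATION='
postgresql:
  parameters:
    shared_preload_libraries: "pg_stat_statements,pgextwlist,pg_auth_mon,set_user"
'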
MSR 4.13.0 comprises the Harbor 2.13 upstream release. In addition, it
includes changes from the intervening upstream 2.11 and 2.12 releases, for
which there was no MSR release.
The upstream pull requests detailed in the sections that follow are those that
pertain to the MSR product. For the complete list of changes and pull requests
upstream, refer to the:
SBOM Generation and Management: Harbor supports generating Software
Bill of Materials (SBOM) both manually and automatically. Users can view,
download, and replicate SBOMs across multiple Harbor instances.
VolcEngine Registry Integration: Users can replicate images to and
from the VolcEngine registry, which enhances interoperability and
flexibility.
Enhanced Robot Account Management: Improved robot account
functionality in Harbor v2.12.0 strengthens access control and automates
CI/CD processes.
Proxy Cache Speed Limit: Harbor now allows setting speed limits for
proxy cache projects, which provides better bandwidth management.
Improved LDAP Onboarding: Enhanced LDAP onboarding in Harbor v2.12.0
accelerates user login and improves authentication performance.
ACR & ACR EE Registry Integration: Users can now replicate images to
and from Azure Container Registry (ACR) and ACR Enterprise Edition.
Extended Audit Logging: Harbor now provides more granular audit
logging, with detailed user action tracking, enhanced API logging, and
improved query performance.
Enhanced OIDC Integration: Improved OpenID Connect (OIDC) support adds
user session logout and Proof Key for Code Exchange (PKCE) functionality.
CloudNativeAI Integration: Harbor integrates with CloudNativeAI
(CNAI), which enables seamless management, versioning, and retrieval of AI
models.
Redis TLS Support: Secure Redis communication in Harbor with TLS,
which protects data in transit between components.
Enhanced Dragonfly Preheating: Improved Dragonfly preheating supports
new parameters, customizable scopes, and cluster ID targeting.
This optimizes image distribution for large-scale deployments.
Deprecations
Remove robotV1 from code base (#20958) by @sgaist in #20991
Breaking changes
Update csrf key generation by @wy65701436 in #21154
The tokenizer incorrectly interprets tags with unquoted attribute values
that end with a solidus character (/) as self-closing. When directly
using Tokenizer, this can result in such tags incorrectly being marked as
self-closing, and when using the Parse functions, this can result in
content following such tags as being placed in the wrong scope during DOM
construction, but only when tags are in foreign content (e.g. <math>,
<svg>, etc contexts).
An issue was discovered in Cloud Native Computing Foundation (CNCF) Helm
through 3.13.3. It displays values of secrets when the --dry-run flag is
used. This is a security concern in some use cases, such as a --dry-run
call by a CI/CD tool. NOTE: the vendor’s position is that this behavior
was introduced intentionally, and cannot be removed without breaking
backwards compatibility (some users may be relying on these values).
Also, it is not the Helm Project’s responsibility if a user decides to
use --dry-run within a CI/CD environment whose output is visible to
unauthorized persons.
Helm is a package manager for Charts for Kubernetes. A JSON Schema file
within a chart can be crafted with a deeply nested chain of references,
leading to parser recursion that can exceed the stack size limit and
trigger a stack overflow. This issue has been resolved in Helm v3.17.3.
Helm is a tool for managing Charts. A chart archive file can be crafted
in a manner where it expands to be significantly larger uncompressed than
compressed (e.g., >800x difference). When Helm loads this specially
crafted chart, memory can be exhausted causing the application to
terminate. This issue has been resolved in Helm v3.17.3.
Beego is an open-source web framework for the Go programming language.
Prior to 2.3.6, a Cross-Site Scripting (XSS) vulnerability exists in
Beego’s RenderForm() function due to improper HTML escaping of
user-controlled data. This vulnerability allows attackers to inject
malicious JavaScript code that executes in victims’ browsers, potentially
leading to session hijacking, credential theft, or account takeover.
The vulnerability affects any application using Beego’s RenderForm()
function with user-provided data. Since it is a high-level function
generating an entire form markup, many developers would assume it
automatically escapes attributes (the way most frameworks do).
This vulnerability is fixed in 2.3.6.
golang-jwt is a Go implementation of JSON Web Tokens. Starting in
version 3.2.0 and prior to versions 5.2.2 and 4.5.2, the function
parse.ParseUnverified splits (via a call to strings.Split) its argument
(which is untrusted data) on periods. As a result, in the face of a
malicious request whose Authorization header consists of Bearer followed
by many period characters, a call to that function incurs allocations
to the tune of O(n) bytes (where n stands for the length of the
function’s argument), with a constant factor of about 16. This issue is
fixed in 5.2.2 and 4.5.2.
containerd is an open-source container runtime. A bug was found in
containerd prior to versions 1.6.38, 1.7.27, and 2.0.4 where containers
launched with a User set as a UID:GID larger than the maximum 32-bit
signed integer can cause an overflow condition where the container
ultimately runs as root (UID 0). This could cause unexpected behavior for
environments that require containers to run as a non-root user.
This bug has been fixed in containerd 1.6.38, 1.7.27, and 2.0.4.
As a workaround, ensure that only trusted images are used and that only
trusted users have permissions to import images.
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key exchange
slowly, or not at all, causing pending content to be read into memory,
but never transmitted.
go-redis is the official Redis client library for the Go programming
language. Prior to 9.5.5, 9.6.3, and 9.7.3, go-redis potentially responds
out of order when CLIENT SETINFO times out during connection
establishment. This can happen when the client is configured to transmit
its identity, there are network connectivity issues, or the client was
configured with aggressive timeouts. The problem occurs for multiple
use cases. For sticky connections, you receive persistent out-of-order
responses for the lifetime of the connection. All commands in the
pipeline receive incorrect responses. When used with the default ConnPool
once a connection is returned after use with ConnPool#Put the read buffer
will be checked and the connection will be marked as bad due to the
unread data. This means that at most one out-of-order response before the
connection is discarded. This issue is fixed in 9.5.5, 9.6.3, and 9.7.3.
You can prevent the vulnerability by setting the flag DisableIndentity to
true when constructing the client instance.
Matching of hosts against proxy patterns can improperly treat an IPv6
zone ID as a hostname component. For example, when the NO_PROXY
environment variable is set to *.example.com, a request to
[::1%25.example.com]:80 will incorrectly match and not be proxied.
A vulnerability in the package_index module of pypa/setuptools versions
up to 69.1.1 allows for remote code execution via its download functions.
These functions, which are used to download packages from URLs provided
by users or retrieved from package index servers, are susceptible to code
injection. If these functions are exposed to user-controlled inputs,
such as package URLs, they can execute arbitrary commands on the system.
The issue is fixed in version 70.0.
Jinja is an extensible templating engine. Prior to 3.1.5, An oversight in
how the Jinja sandboxed environment detects calls to str.format allows an
attacker that controls the content of a template to execute arbitrary
Python code. To exploit the vulnerability, an attacker needs to control
the content of a template. Whether that is the case depends on the type
of application using Jinja. This vulnerability impacts users of
applications which execute untrusted templates. Jinja’s sandbox does
catch calls to str.format and ensures they don’t escape the sandbox.
However, it’s possible to store a reference to a malicious string’s
format method, then pass that to a filter that calls it. No such filters
are built-in to Jinja, but could be present through custom filters in
an application. After the fix, such indirect calls are also handled by
the sandbox. This vulnerability is fixed in 3.1.5.
Jinja is an extensible templating engine. Prior to 3.1.6, an oversight
in how the Jinja sandboxed environment interacts with the |attr
filter allows an attacker that controls the content of a template to
execute arbitrary Python code. To exploit the vulnerability, an attacker
needs to control the content of a template. Whether that is the case
depends on the type of application using Jinja. This vulnerability
impacts users of applications which execute untrusted templates.
Jinja’s sandbox does catch calls to str.format and ensures they
don’t escape the sandbox. However, it’s possible to use the |attr
filter to get a reference to a string’s plain format method,
bypassing the sandbox. After the fix, the |attr filter no longer
bypasses the environment’s attribute lookup. This vulnerability is
fixed in 3.1.6.
Jinja is an extensible templating engine. In versions on the 3.x branch
prior to 3.1.5, a bug in the Jinja compiler allows an attacker that
controls both the content and filename of a template to execute arbitrary
Python code, regardless of if Jinja’s sandbox is used. To exploit the
vulnerability, an attacker needs to control both the filename and the
contents of a template. Whether that is the case depends on the type of
application using Jinja. This vulnerability impacts users of applications
which execute untrusted templates where the template author can also
choose the template filename. This vulnerability is fixed in 3.1.5.
SSH servers which implement file transfer protocols are vulnerable to a
denial of service attack from clients which complete the key exchange
slowly, or not at all, causing pending content to be read into memory,
but never transmitted.
Go JOSE provides an implementation of the Javascript Object Signing and
Encryption set of standards in Go, including support for JSON Web
Encryption (JWE), JSON Web Signature (JWS), and JSON Web Token (JWT)
standards. In versions on the 4.x branch prior to version 4.0.5, when
parsing compact JWS or JWE input, Go JOSE could use excessive memory.
The code used strings.Split(token, ".") to split JWT tokens, which is
vulnerable to excessive memory consumption when processing maliciously
crafted tokens with a large number of . characters. An attacker could
exploit this by sending numerous malformed tokens, leading to memory
exhaustion and a Denial of Service. Version 4.0.5 fixes this issue. As a
workaround, applications could pre-validate that payloads passed to Go
JOSE do not contain an excessive number of . characters.
Distribution is a toolkit to pack, ship, store, and deliver container
content. Systems running registry versions 3.0.0-beta.1 through
3.0.0-rc.2 with token authentication enabled may be vulnerable to an
issue in which token authentication allows an attacker to inject an
untrusted signing key in a JSON web token (JWT). The issue lies in how
the JSON web key (JWK) verification is performed. When a JWT contains a
JWK header without a certificate chain, the code only checks if the KeyID
(kid) matches one of the trusted keys, but doesn’t verify that the
actual key material matches. A fix for the issue is available at commit
5ea9aa028db65ca5665f6af2c20ecf9dc34e5fcd and expected to be a part of
version 3.0.0-rc.3. There is no way to work around this issue without
patching if the system requires token authentication.
A certificate with a URI which has an IPv6 address with a zone ID may
incorrectly satisfy a URI name constraint that applies to the certificate
chain. Certificates containing URIs are not permitted in the web PKI, so
this only affects users of private PKIs which make use of URIs.
The HTTP client drops sensitive headers after following a cross-domain
redirect. For example, a request to a.com/ containing an Authorization
header which is redirected to b.com/ will not send that header to b.com.
In the event that the client received a subsequent same-domain redirect,
however, the sensitive headers would be restored. For example, a chain of
redirects from a.com/, to b.com/1, and finally to b.com/2 would
incorrectly send the Authorization header to b.com/2.
setuptools is a package that allows users to download, build, install,
upgrade, and uninstall Python packages. A path traversal vulnerability in
PackageIndex is present in setuptools prior to version 78.1.1.
An attacker would be allowed to write files to arbitrary locations on
the filesystem with the permissions of the process running the Python
code, which could escalate to remote code execution depending
on the context. Version 78.1.1 fixes the issue.
When deploying MSR in High Availability mode using Helm on Red Hat Enterprise
Linux (RHEL) 9.4 or later, installation may fail due to a
segmentation fault in the bg_mon module.
This issue occurs when PostgreSQL is deployed using the zalando/spilo image.
The failure manifests with the following error messages:
In the harbor-core pod:
2025-06-24T07:58:01Z [INFO] [/common/dao/pgsql.go:135]: Upgrading schema for pgsql ...
2025-06-24T07:58:01Z [ERROR] [/common/dao/pgsql.go:140]: Failed to upgrade schema, error: "Dirty database version 11. Fix and force version."
2025-06-24T07:58:01Z [FATAL] [/core/main.go:204]: failed to migrate the database, error: Dirty database version 11. Fix and force version.
On the node hosting the msr-postgres pod:
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd[1]: Created slice Slice /system/systemd-coredump.
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd[1]: Started Process Core Dump (PID 34335/UID 0).
Jun 24 07:55:19 ip-172-31-0-252.eu-central-1.compute.internal systemd-coredump[34336]: Process 27789 (postgres) of user 101 dumped core.
Workaround:
Exclude the bg_mon module from the PostgreSQL configuration:
With the intent of improving the customer experience, Mirantis strives to offer
maintenance releases for the Mirantis Secure Registry (MSR) software every
six to eight weeks. Primarily, these maintenance releases will aim to resolve
known issues and issues reported by customers, quash CVEs, and reduce technical
debt. The version of each MSR maintenance release is reflected in the third
digit position of the version number (as an example, for MSR 4.0 the most
current maintenance release is MSR 4.13.1).
In parallel with our maintenance MSR release work, each year Mirantis will
develop and release a new major version of MSR, the Mirantis support lifespan
of which will adhere to our legacy two year standard.
The MSR team will make every effort to hold to the release cadence stated here.
Customers should be aware, though, that development and release cycles can
change without advance notice.
A Technology Preview feature provides early access to upcoming product
innovations, allowing customers to experiment with the functionality and
provide feedback.
Technology Preview features may be privately or publicly available, but in
neither case are they intended for production use. While Mirantis will provide
assistance with such features through official channels, normal Service Level
Agreements do not apply.
As Mirantis considers making future iterations of Technology Preview features
generally available, we will do our best to resolve any issues that customers
experience when using these features.
During the development of a Technology Preview feature, additional components
may become available to the public for evaluation. Mirantis cannot guarantee
the stability of such features. As a result, if you are using Technology
Preview features, you may not be able to seamlessly upgrade to subsequent
product releases.
Mirantis makes no guarantees that Technology Preview features will graduate to
generally available features.